🚨 New paper alert! 👁️ 𝗙𝘂𝗻𝗱𝘂𝘀 𝗶𝗺𝗮𝗴𝗲𝘀 are crucial for the 𝗲𝗮𝗿𝗹𝘆 𝗱𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝘀𝗰𝗿𝗲𝗲𝗻𝗶𝗻𝗴 𝗼𝗳 𝗲𝘆𝗲 𝗱𝗶𝘀𝗲𝗮𝘀𝗲𝘀. ⚠️ But deep learning models trained on fundus images often fail when deployed in new clinical environments, due to variations in imaging devices and protocols #DomainShift 🚀 The authors below present FunOTTA, 𝗙𝘂𝗻𝗱𝘂𝘀 𝗢𝗻-𝘁𝗵𝗲-𝗳𝗹𝘆 𝗧𝗲𝘀𝘁-𝗧𝗶𝗺𝗲 𝗔𝗱𝗮𝗽𝘁𝗮𝘁𝗶𝗼𝗻, a new framework that effectively generalizes a fundus image diagnosis model to unseen environments, even under strong domain shifts. Its key innovations include: 1️⃣ A dynamic filtering mechanism that selectively identifies and retains informative instances for memory bank updates 2️⃣ A new training objective during adaptation that enables the classifier to incrementally adapt to target patterns with reliable class-conditional estimation and consistency regularization. 💡 This is also the first application of training-based TTA for fundus image diagnosis! Read the paper: 🔗 https://lnkd.in/ekZ_CQkC Code: https://lnkd.in/egnYpKeX Authors: Qian Zeng; Le Zhang; Yipeng Liu; Ce Zhu; Fan Zhang
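The dynamic-filtering idea can be illustrated with a tiny sketch: keep a test instance for the memory bank only if its prediction entropy is low relative to the maximum possible entropy log(C). This is a hedged illustration of the general principle, not FunOTTA's actual criterion; the function name and threshold are hypothetical.

```python
import math

def filter_for_memory_bank(logits, entropy_threshold=0.4):
    """Keep only confident (low-entropy) predictions for memory-bank updates.

    `logits` is a list of per-class score lists; returns a boolean mask.
    The threshold is a fraction of the maximum entropy log(C).
    """
    mask = []
    for row in logits:
        m = max(row)  # stabilize the softmax
        exps = [math.exp(x - m) for x in row]
        z = sum(exps)
        probs = [e / z for e in exps]
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        mask.append(entropy < entropy_threshold * math.log(len(row)))
    return mask

# A confident prediction is kept; a near-uniform one is filtered out.
mask = filter_for_memory_bank([[8.0, 0.0, 0.0], [0.1, 0.0, 0.1]])
# -> [True, False]
```

Filtering like this keeps noisy, ambiguous test samples from polluting the adaptation statistics.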
IEEE Transactions on Medical Imaging (TMI)
About us
IEEE TRANSACTIONS ON MEDICAL IMAGING (TMI) encourages the submission of manuscripts on imaging of body structure, morphology and function, including cell and molecular imaging and all forms of microscopy. The journal publishes original contributions on medical imaging achieved by modalities including ultrasound, x-rays, magnetic resonance, radionuclides, microwaves, and optical methods. Contributions describing novel acquisition techniques, medical image processing and analysis, visualization and performance, pattern recognition, machine learning, and related methods are encouraged. Studies involving highly technical perspectives are most welcome. The focus of the journal is on unifying the sciences of medicine, biology, and imaging. It emphasizes the common ground where instrumentation, hardware, software, mathematics, physics, biology, and medicine interact through new analysis methods. Strong application papers that describe novel methods are particularly encouraged. Papers describing important applications based on medically adopted and/or established methods without significant innovation in methodology will be directed to other journals.
- Website: https://www.embs.org/tmi/
- Industry: Non-profit Organization Management
- Company size: 51-200 employees
- Headquarters: Buffalo, US
- Type: Nonprofit
Updates
🚨 New paper alert! 🔬 If you work in histopathology, you know that H&E is one of the most common stains. But traditional H&E staining of formalin-fixed, paraffin-embedded (FFPE) slides is slow & costly 😥 🤖 Virtual staining, powered by digital pathology and generative models, is emerging as a promising alternative. ⚠️ Yet FFPE samples introduce a real challenge: their blurred or ambiguous cellular structures make FFPE→H&E virtual staining particularly difficult. Most existing work simply adapts standard generative models, without addressing this core issue. 💡 The authors of the paper below introduce MCS-Stain, a new generative model guided by multiple cell semantics, leveraging a pretrained cell segmentation model to provide biologically meaningful guidance. 🚀 Substantial improvements over SOTA approaches on FFPE→H&E virtual staining, across multiple datasets and segmentation models. And the method also generalizes to H&E→IHC virtual staining! Read the paper: 🔗 https://lnkd.in/eBj-b-n9 Code: 💻 https://lnkd.in/ejuZMZM9 Authors: Yihuang Hu; Zhicheng Du; Weiping Lin; Shurong Yang; Lequan Yu; Guojun Zhang; Liansheng Wang
🚨 New paper alert! 🔬 Polyp segmentation in endoscopic imaging is crucial for early cancer detection but is complicated by diverse and irregular polyp morphologies. 💡 In this new paper, the authors introduce the 𝗘𝗻𝗱𝗼𝘀𝗰𝗼𝗽𝗶𝗰 𝗔𝗱𝗮𝗽𝘁𝗶𝘃𝗲 𝗧𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺𝗲𝗿 (𝗘𝗔𝗧), which uses an adaptive perception module to 𝗱𝘆𝗻𝗮𝗺𝗶𝗰𝗮𝗹𝗹𝘆 𝗰𝗮𝗽𝘁𝘂𝗿𝗲 𝗳𝗶𝗻𝗲 𝗮𝗻𝗮𝘁𝗼𝗺𝗶𝗰𝗮𝗹 𝗱𝗲𝘁𝗮𝗶𝗹𝘀 𝗮𝗻𝗱 𝗴𝗹𝗼𝗯𝗮𝗹 𝗰𝗼𝗻𝘁𝗲𝘅𝘁. 🎯 EAT achieves impressive performance across various datasets: ✅ on CVC-ClinicDB: 97.77% Dice and 4.50 mm HD95 ✅ on Kvasir-SEG: 97.09% Dice and 6.60 mm HD95 ✅ on the multi-center PolypGen dataset: 95.18% Dice and 10.57 mm HD95 ✅ on SUN-SEG-Hard: 89.90% Dice and 19.22 mm HD95 Read the paper: 🔗 https://lnkd.in/eH5MQbZ4 Code: https://lnkd.in/exUBAKw3 Authors: YAN PANG; Yucheng Long; Zibin Chen; Ying Hu; Hao Chen; Qiong Wang
💡 Is your research ready for IEEE Transactions on Medical Imaging? ⚙️ Despite AI’s prominence today, IEEE TMI’s scope extends well beyond AI-based imaging to encompass the full spectrum of imaging methods, including CT, MRI, PET, SPECT, ultrasound, optical and hybrid systems, reconstruction algorithms, radiomics, biomarkers, image-guided therapy, and more. 🏆 With a 2024 Impact Factor of 9.8, TMI continues to serve as a leading platform for rigorously vetted, technically impactful, and clinically relevant research. 📰 In their new editorial, Hongming Shan, Uwe Kruger, and Ge Wang outline what it takes for a paper to stand out and introduce four dimensions, Significance, Innovation, Evaluation, and Reproducibility (SIER), as the foundations for impactful, trustworthy, and publishable research in TMI. 🚀 We hope this editorial will inspire and help you shape your next paper, and submit it to IEEE TMI! 🤗 Read the editorial 🔗 https://lnkd.in/epYDe2KM
🚨 New paper alert! ℹ️ Manual annotation of volumetric medical images (e.g., CT, MRI and ultrasound videos) is often labor-intensive and time-consuming. ⚙️ Segment Anything Model 2 (SAM 2) offers a potential opportunity to significantly speed up the annotation process by manually annotating one or a few slices and then propagating target masks across the entire volume. However, experiments show that relying on a single memory bank and attention module is prone to error propagation (e.g., over-propagation) after fine-tuning. 💡 To overcome this problem, a team of researchers from Duke University propose Short-Long Memory SAM 2 (SLM-SAM 2), a novel architecture that integrates distinct short-term and long-term memory banks, with separate attention modules to improve segmentation accuracy! 🚀 Evaluated across diverse body parts (organs, bones and muscles) and imaging modalities (CT, MRI, US), SLM-SAM 2 significantly outperforms existing baselines and substantially reduces total annotation time. Read the paper: 🔗 https://lnkd.in/eaGS7yaF Authors: Yuwen Chen; Zafer Y. Yildiz; Qihang Li; Yaqian Chen; Haoyu Dong; Hanxue Gu; Nick Konz; Maciej Mazurowski
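As a rough data-structure sketch of the short-/long-term memory idea: a small bank that tracks the most recent slices plus a larger, slower-refreshing bank that preserves older context. Names, sizes, and the stride are hypothetical, and the paper's attention modules are not reproduced here.

```python
from collections import deque

class ShortLongMemory:
    """Two bounded memory banks: a small short-term one holding the most
    recent slice features, and a larger long-term one refreshed less often
    so that older, reliable context persists. Illustrative sketch only."""

    def __init__(self, short_size=3, long_size=8, long_stride=4):
        self.short = deque(maxlen=short_size)
        self.long = deque(maxlen=long_size)
        self.long_stride = long_stride
        self._step = 0

    def update(self, feature):
        self.short.append(feature)
        # The long-term bank only takes every `long_stride`-th slice,
        # so a few bad recent propagations cannot overwrite all context.
        if self._step % self.long_stride == 0:
            self.long.append(feature)
        self._step += 1

mem = ShortLongMemory()
for slice_id in range(12):
    mem.update(slice_id)
# short holds the 3 most recent slices; long holds every 4th slice
```

Keeping the two banks separate is what lets a model attend to recent slices for smooth propagation while still anchoring on earlier, trusted annotations.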
🚨 New paper alert! ℹ️ The spatial resolution of PET remains limited to around 0.5 mm at best. ⚠️ This causes a substantial partial volume effect, especially for small mouse brain structures! 🚀 A team of researchers from the National Institutes for Quantum Science and Technology (QST) in Japan developed a sub-0.5 mm resolution PET scanner, with optimized 3-layer depth-of-interaction detectors! 🔥 By enabling the visualization of fine brain structures in mice, the new technology allows, for the first time, the separate identification of the hypothalamus, amygdala, and cerebellar nuclei. Read the paper: 🔗 https://lnkd.in/eYRexPzw Authors: Han Gyu Kang; Hideaki Tashima; Hidekatsu Wakizaka; Makoto Higuchi; Taiga Yamaya
🚨 New paper alert! ℹ️ 𝗨𝗻𝘀𝘂𝗽𝗲𝗿𝘃𝗶𝘀𝗲𝗱 𝗗𝗼𝗺𝗮𝗶𝗻 𝗔𝗱𝗮𝗽𝘁𝗮𝘁𝗶𝗼𝗻 (UDA) is the process of adapting a model trained on labeled data from one domain (e.g., CT) to perform well on an unlabeled but related target domain (e.g., MRI), by learning domain-invariant representations. ⚠️ State-of-the-art UDA methods usually address the 𝗱𝗼𝗺𝗮𝗶𝗻 𝘀𝗵𝗶𝗳𝘁 problem via image- or feature-level alignment, but they often rely on 𝘀𝗽𝘂𝗿𝗶𝗼𝘂𝘀 𝗰𝗼𝗿𝗿𝗲𝗹𝗮𝘁𝗶𝗼𝗻𝘀 𝗶𝗻 𝘁𝗵𝗲 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗱𝗮𝘁𝗮, which limits generalization across domains. 💡 The authors below introduce 𝗖𝗶𝗦𝗲𝗴, 𝘁𝗵𝗲 𝗖𝗮𝘂𝘀𝗮𝗹 𝗜𝗻𝘁𝗲𝗿𝘃𝗲𝗻𝘁𝗶𝗼𝗻 𝗦𝗲𝗴𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝗡𝗲𝘁𝘄𝗼𝗿𝗸: this is the first integration of 𝗰𝗮𝘂𝘀𝗮𝗹 𝗶𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗶𝗻𝘁𝗼 𝗨𝗗𝗔 for medical image segmentation! 🚀 𝗞𝗲𝘆 𝗶𝗱𝗲𝗮: Instead of relying on statistical correlations that may not hold across domains, CiSeg builds a 𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗮𝗹 𝗖𝗮𝘂𝘀𝗮𝗹 𝗠𝗼𝗱𝗲𝗹 to separate causal factors (what is meaningful for segmentation) from bias factors (domain-specific noise). ➕ CiSeg employs additional modules to decompose source domain latent features 𝗶𝗻𝘁𝗼 𝗰𝗮𝘂𝘀𝗮𝗹 𝗮𝗻𝗱 𝗯𝗶𝗮𝘀 𝗰𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀, and to 𝘁𝗿𝗮𝗻𝘀𝗳𝗲𝗿 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗮𝗹𝗹𝘆 𝗶𝗻𝘃𝗮𝗿𝗶𝗮𝗻𝘁 𝗰𝗮𝘂𝘀𝗮𝗹 𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻 to the target domain, effectively mitigating spurious correlations. Read the paper: 🔗 https://lnkd.in/eHhBDYYF Code: 💻 https://lnkd.in/eUrUSkUe Authors: Peiqing Lv, Yaonan Wang, Min Liu, Zhe Zhang, Yunfeng Ma, Licheng Liu, and Erik Meijering
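A minimal sketch of the decompose-and-decorrelate idea: split a latent feature into a "causal" part used for segmentation and a "bias" part, and penalize dependence between the two. The fixed split and cosine penalty below are illustrative stand-ins, not CiSeg's learned modules.

```python
import math

def decompose(feature, causal_dim):
    """Split a latent feature vector into a causal part (task-relevant)
    and a bias part (domain-specific). Hypothetical fixed split; CiSeg
    learns this decomposition with dedicated modules."""
    return feature[:causal_dim], feature[causal_dim:]

def decorrelation_penalty(causal, bias):
    """Push the causal and bias components toward independence; here we
    use squared cosine similarity as a simple proxy for dependence."""
    dot = sum(c * b for c, b in zip(causal, bias))
    nc = math.sqrt(sum(c * c for c in causal))
    nb = math.sqrt(sum(b * b for b in bias))
    if nc == 0 or nb == 0:
        return 0.0
    return (dot / (nc * nb)) ** 2

c, b = decompose([1.0, 0.0, 0.0, 2.0], causal_dim=2)
penalty = decorrelation_penalty(c, b)  # orthogonal components -> 0.0
```

During adaptation, only the causal component would be transferred to the target domain, which is how spurious, domain-specific correlations get filtered out.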
🚨 New paper alert! 🧠 The brain cortex is a thin layer of gray matter, lying between the white matter underneath ⬇️ and the pial surface on top ⬆️ ℹ️ Cortical surface reconstruction (CSR) from MRIs is widely employed in imaging studies of neurodegenerative diseases. ⚠️ But deep learning–based CSR methods face several issues: they often 𝗶𝗴𝗻𝗼𝗿𝗲 𝘁𝗵𝗲 𝗿𝗲𝗹𝗮𝘁𝗶𝗼𝗻𝘀𝗵𝗶𝗽 between the white matter and pial surfaces, use coarse initialization meshes that 𝘀𝘁𝗿𝘂𝗴𝗴𝗹𝗲 𝘄𝗶𝘁𝗵 𝗰𝗼𝗺𝗽𝗹𝗲𝘅 𝗰𝗼𝗿𝘁𝗶𝗰𝗮𝗹 𝗳𝗼𝗹𝗱𝘀, and require separate steps to compute cortical thickness. 🚀 The authors below present SurfNet, a deep learning framework that 𝗷𝗼𝗶𝗻𝘁𝗹𝘆 𝗿𝗲𝗰𝗼𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝘀 𝘄𝗵𝗶𝘁𝗲 𝗺𝗮𝘁𝘁𝗲𝗿, 𝗽𝗶𝗮𝗹, 𝗮𝗻𝗱 𝗺𝗶𝗱𝘁𝗵𝗶𝗰𝗸𝗻𝗲𝘀𝘀 𝗰𝗼𝗿𝘁𝗶𝗰𝗮𝗹 𝘀𝘂𝗿𝗳𝗮𝗰𝗲𝘀 via coupled 𝗱𝗶𝗳𝗳𝗲𝗼𝗺𝗼𝗿𝗽𝗵𝗶𝗰 𝗱𝗲𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻𝘀, achieving fast, topology-preserving cortical surface reconstruction and accurate cortical thickness estimation from MRI. Read the paper: 🔗 https://lnkd.in/eJx2D_n6 Authors: Hao Zheng; Hongming Li; Yong Fan
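The topology-preserving property of diffeomorphic deformation comes from integrating a smooth velocity field in many small steps, so points never cross each other. A 1-D toy sketch with a simple Euler integrator (SurfNet deforms 3-D surface meshes with learned fields; everything here is illustrative):

```python
def integrate_velocity(x, velocity, steps=64):
    """Integrate a stationary velocity field with small Euler steps.
    Many tiny, smooth displacements compose into an invertible map,
    which is the basic mechanism behind diffeomorphic deformation."""
    dt = 1.0 / steps
    for _ in range(steps):
        x = x + dt * velocity(x)
    return x

# v(x) = -x contracts points toward 0 without folding:
# the ordering of points is preserved, so no self-intersections arise.
a = integrate_velocity(1.0, lambda x: -x)
b = integrate_velocity(2.0, lambda x: -x)
# a < b still holds after deformation
```

Because ordering is preserved at every step, the deformed surface cannot fold over itself, which is exactly the topology guarantee a mesh-based CSR method needs.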
🚨 New paper alert! ⚖️ 𝗙𝗮𝗶𝗿𝗻𝗲𝘀𝘀 𝗶𝗻 𝗔𝗜 means ensuring that models perform equitably across all demographic groups. ⚠️ 𝗙𝗲𝗱𝗲𝗿𝗮𝘁𝗲𝗱 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 (FL) offers a collaborative and privacy-preserving way to train models across institutions, but 𝗲𝗻𝘀𝘂𝗿𝗶𝗻𝗴 𝗳𝗮𝗶𝗿𝗻𝗲𝘀𝘀 𝗶𝗻 𝗙𝗟 remains an open challenge, especially given data heterogeneity and the lack of medical benchmarks! 🚀 To address this, the authors below introduce FairFedMed, the 𝟭𝘀𝘁 𝗲𝘅𝗽𝗲𝗿𝗶𝗺𝗲𝗻𝘁𝗮𝗹 𝗯𝗲𝗻𝗰𝗵𝗺𝗮𝗿𝗸 𝗳𝗼𝗿 𝗳𝗮𝗶𝗿𝗻𝗲𝘀𝘀 𝗶𝗻 𝗺𝗲𝗱𝗶𝗰𝗮𝗹 𝗙𝗟. FairFedMed includes: 1️⃣ FairFedMed-Oph: ophthalmology datasets (2D fundus + 3D OCT) with 6 demographic attributes 2️⃣ FairFedMed-Chest: real cross-institutional FL simulations using CheXpert and MIMIC-CXR ➡️ Authors also propose FairLoRA, a 𝗳𝗮𝗶𝗿𝗻𝗲𝘀𝘀-𝗮𝘄𝗮𝗿𝗲 𝗙𝗟 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 built on SVD-based low-rank approximation. By customizing singular value matrices for each demographic group while sharing singular vectors, FairLoRA ensures both fairness and efficiency! Read the paper: 🔗 https://lnkd.in/eijKdNhJ Code and dataset 💻: https://lnkd.in/etbuqNTb Authors: Minghan Li; Congcong Wen; Yu Tian; Min Shi; Yan Luo; Hao Huang
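The shared-vectors / per-group-singular-values idea can be sketched at rank 1: every group shares the singular vectors u and v, while each demographic group keeps only its own singular value. Purely illustrative (FairLoRA operates on full low-rank adapter matrices); all names below are my own.

```python
def group_weight(u, v, s_group):
    """Rank-1 sketch of the FairLoRA idea: the singular vectors u, v are
    shared across all demographic groups, and only the scalar singular
    value s_group is personalized per group, so the group-specific state
    that must be stored and communicated is tiny."""
    return [[s_group * ui * vj for vj in v] for ui in u]

u, v = [1.0, 0.0], [0.0, 1.0]
w_group_a = group_weight(u, v, s_group=2.0)  # [[0.0, 2.0], [0.0, 0.0]]
w_group_b = group_weight(u, v, s_group=0.5)  # [[0.0, 0.5], [0.0, 0.0]]
# same structure (shared singular vectors), different per-group scaling
```

Sharing the vectors keeps the groups' models structurally aligned (efficiency), while the per-group scaling leaves room to equalize performance across groups (fairness).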
🚨 New paper alert! ⏩ The disparity between image and text representations, often referred to as the 𝗺𝗼𝗱𝗮𝗹𝗶𝘁𝘆 𝗴𝗮𝗽, remains a significant obstacle for 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹𝘀 (VLMs) in 𝗺𝗲𝗱𝗶𝗰𝗮𝗹 𝗶𝗺𝗮𝗴𝗲 𝘀𝗲𝗴𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻. ⚠️ This gap complicates multi-modal fusion, thereby restricting segmentation performance. 💡 𝗘𝘃𝗶𝗱𝗲𝗻𝘁𝗶𝗮𝗹 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 (EL) theory can help! As an advanced uncertainty modeling framework, EL provides an efficient and stable basis for 𝗾𝘂𝗮𝗻𝘁𝗶𝗳𝘆𝗶𝗻𝗴 𝗽𝗿𝗲𝗱𝗶𝗰𝘁𝗶𝗼𝗻 𝘂𝗻𝗰𝗲𝗿𝘁𝗮𝗶𝗻𝘁𝘆. ✅ It can be regarded as an alternative 𝗺𝗲𝗮𝘀𝘂𝗿𝗲 𝗼𝗳 𝗺𝗼𝗱𝗮𝗹𝗶𝘁𝘆 𝗱𝗶𝘀𝘁𝗮𝗻𝗰𝗲, enabling the estimation of the modality gap. 🚀 This is the idea of the authors listed below, who present a new framework, 𝗘𝘃𝗶𝗩𝗟𝗠, 𝘁𝗵𝗮𝘁 𝗶𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗲𝘀 𝗘𝘃𝗶𝗱𝗲𝗻𝘁𝗶𝗮𝗹 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗶𝗻𝘁𝗼 𝗩𝗟𝗠𝘀! 🚀 𝗘𝘃𝗶𝗩𝗟𝗠 bridges the modality gap by aggregating cross-modal opinions for modality gap estimation, thus improving modality fusion and subsequent medical image segmentation performance. Read the paper: 🔗 https://lnkd.in/e8rRKxUm Authors: Qingtao Pan; Zhengrong Li; Guang Yang; Qing Yang; Bing Ji 💻 Code available at: https://lnkd.in/eD98hxPp
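Evidential learning's uncertainty estimate follows the standard subjective-logic formulation: non-negative evidence e_k parameterizes a Dirichlet via alpha_k = e_k + 1, with belief_k = e_k / S and uncertainty = K / S where S = sum(alpha). A minimal sketch of just this computation (function name is my own; how EviVLM aggregates cross-modal opinions is not reproduced here):

```python
def dirichlet_opinion(evidence):
    """Subjective-logic style opinion from non-negative per-class evidence,
    as used in evidential learning: alpha_k = e_k + 1, belief_k = e_k / S,
    uncertainty = K / S with S = sum(alpha). Zero total evidence yields
    maximal uncertainty 1.0."""
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    s = sum(alpha)
    belief = [e / s for e in evidence]
    uncertainty = k / s
    return belief, uncertainty

# no evidence -> fully uncertain
_, u0 = dirichlet_opinion([0.0, 0.0])   # u0 == 1.0
# strong evidence for class 0 -> low uncertainty
b, u1 = dirichlet_opinion([18.0, 0.0])  # u1 == 0.1
```

Because this uncertainty is a single stable scalar per prediction, it is plausible to repurpose it as a distance-like signal between modalities, which is the bridge the post describes.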