
Dual-Uncertainty Guided Multimodal MRI-Based Visual Pathway Extraction

  • Alou Diakite
  • Cheng Li
  • Yousuf Babiker M. Osman
  • Zan Chen
  • Yiang Pan
  • Jiawei Zhang
  • Tao Tan
  • Hairong Zheng
  • Shanshan Wang

  • Shenzhen Institute of Advanced Technology
  • University of Chinese Academy of Sciences
  • Zhejiang University of Technology

Research output: Article › peer-review

4 Citations (Scopus)

Abstract

Objective: This study aims to accurately extract the visual pathway (VP) from multimodal MR images while minimizing reliance on extensive labeled data and enhancing extraction performance. Method: We propose a novel approach that incorporates a Modality-Relevant Feature Extraction Module (MRFEM) to effectively extract essential features from T1-weighted and fractional anisotropy (FA) images. Additionally, we implement a mean-teacher model integrated with dual uncertainty-aware ambiguity identification (DUAI) to enhance the reliability of the VP extraction process. Results: Experiments conducted on the Human Connectome Project (HCP) and Multi-Shell Diffusion MRI (MDM) datasets demonstrate that our method reduces annotation efforts by at least one-third compared to fully supervised techniques while achieving superior extraction performance over six state-of-the-art semi-supervised methods. Conclusion: The proposed label-efficient approach alleviates the burdens of manual annotation and enhances the accuracy of multimodal MRI-based VP extraction. Significance: This work contributes to the field of medical imaging by facilitating more efficient and accurate visual pathway extraction, thereby improving the analysis and understanding of complex brain structures with reduced reliance on expert annotation.
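The abstract describes a mean-teacher framework in which teacher weights track the student via an exponential moving average, and uncertainty estimates are used to flag ambiguous regions before they guide the student. The sketch below is a minimal, simplified illustration of these two generic building blocks, not the paper's DUAI module: it uses a single entropy-based uncertainty (the paper uses dual uncertainties), and the function names (`ema_update`, `uncertainty_mask`) and all parameter values are hypothetical.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher step: teacher weights are an exponential
    moving average (EMA) of the student weights."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def uncertainty_mask(mc_probs, threshold=0.5):
    """Keep only voxels whose predictive entropy is below threshold.

    mc_probs: (T, N) array of T stochastic forward passes
    (e.g. MC dropout) of foreground probabilities for N voxels.
    Returns a boolean mask of "reliable" voxels.
    """
    mean_p = mc_probs.mean(axis=0)
    eps = 1e-8  # avoid log(0)
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1.0 - mean_p) * np.log(1.0 - mean_p + eps))
    return entropy < threshold

# Toy example with 3 "voxels" and 2 stochastic passes.
teacher = np.zeros(3)
student = np.ones(3)
teacher = ema_update(teacher, student, alpha=0.9)  # -> [0.1, 0.1, 0.1]

mc = np.array([[0.90, 0.50, 0.10],
               [0.95, 0.40, 0.05]])
mask = uncertainty_mask(mc, threshold=0.3)
# Confident foreground/background voxels pass; the ambiguous
# middle voxel (probabilities near 0.5) is masked out.
```

In a semi-supervised segmentation loop, the consistency loss between student and teacher predictions would be computed only over the masked (low-uncertainty) voxels, which is the general mechanism the DUAI component refines with its dual uncertainty criteria.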

Original language: English
Pages (from-to): 1993-2000
Number of pages: 8
Journal: IEEE Transactions on Biomedical Engineering
Volume: 72
Issue number: 6
DOIs
Publication status: Published - 2025
