Dual-Uncertainty Guided Multimodal MRI-Based Visual Pathway Extraction

Alou Diakite, Cheng Li, Yousuf Babiker M. Osman, Zan Chen, Yiang Pan, Jiawei Zhang, Tao Tan, Hairong Zheng, Shanshan Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Objective: This study aims to accurately extract the visual pathway (VP) from multimodal MR images while minimizing reliance on extensive labeled data and enhancing extraction performance.

Method: We propose a novel approach that incorporates a Modality-Relevant Feature Extraction Module (MRFEM) to effectively extract essential features from T1-weighted and fractional anisotropy (FA) images. Additionally, we implement a mean-teacher model integrated with dual uncertainty-aware ambiguity identification (DUAI) to enhance the reliability of the VP extraction process.

Results: Experiments conducted on the Human Connectome Project (HCP) and Multi-Shell Diffusion MRI (MDM) datasets demonstrate that our method reduces annotation effort by at least one-third compared with fully supervised techniques while achieving superior extraction performance over six state-of-the-art semi-supervised methods.

Conclusion: The proposed label-efficient approach alleviates the burden of manual annotation and improves the accuracy of multimodal MRI-based VP extraction.

Significance: This work contributes to the field of medical imaging by enabling more efficient and accurate visual pathway extraction, thereby improving the analysis and understanding of complex brain structures with reduced reliance on expert annotation.
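
The abstract names a mean-teacher framework with dual uncertainty-aware ambiguity identification (DUAI) but does not spell out its formulation. The following is a minimal sketch, assuming a PyTorch 3D segmentation setting, of the generic mean-teacher pattern with an entropy-based uncertainty gate standing in for DUAI. The function names (`ema_update`, `uncertainty_masked_consistency`), the entropy threshold, the EMA decay, and the toy Conv3d network are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of a mean-teacher consistency step
# gated by predictive uncertainty, with T1-weighted and FA volumes stacked
# as input channels. All hyperparameters here are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.99) -> None:
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)


def uncertainty_masked_consistency(student_logits, teacher_logits, entropy_thresh=0.5):
    """MSE consistency loss restricted to voxels where the teacher is confident.

    Voxel-wise predictive entropy of the teacher's softmax output is a simple
    stand-in for the paper's DUAI, whose exact formulation is not given here.
    """
    teacher_prob = F.softmax(teacher_logits, dim=1)
    entropy = -(teacher_prob * torch.log(teacher_prob + 1e-8)).sum(dim=1)
    mask = (entropy < entropy_thresh).float().unsqueeze(1)  # keep low-entropy voxels
    sq_err = (F.softmax(student_logits, dim=1) - teacher_prob) ** 2
    return (sq_err * mask).sum() / (mask.sum() * sq_err.shape[1] + 1e-8)


if __name__ == "__main__":
    # Placeholder networks; a real pipeline would use a 3D U-Net-style model.
    student = nn.Conv3d(2, 2, kernel_size=3, padding=1)
    teacher = nn.Conv3d(2, 2, kernel_size=3, padding=1)
    teacher.load_state_dict(student.state_dict())
    for p in teacher.parameters():
        p.requires_grad_(False)

    x = torch.randn(1, 2, 16, 16, 16)  # [batch, T1+FA channels, D, H, W]
    with torch.no_grad():
        t_logits = teacher(x + 0.1 * torch.randn_like(x))  # perturbed teacher view
    s_logits = student(x)
    loss = uncertainty_masked_consistency(s_logits, t_logits)
    loss.backward()            # gradients flow only through the student
    ema_update(teacher, student)
    print(f"consistency loss: {loss.item():.4f}")
```

In a typical semi-supervised setup of this kind, labeled volumes additionally contribute a supervised segmentation loss (e.g. Dice plus cross-entropy) on the student, while the uncertainty-gated consistency term above exploits the unlabeled volumes.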

Original language: English
Journal: IEEE Transactions on Biomedical Engineering
Publication status: Accepted/In press - 2025

Keywords

  • Multimodal MRI
  • Uncertainty
  • Visual pathway extraction
