AMGL: Adaptive Multimodal Graph Learning for Brain Disease Prediction

  • Runsheng Wu
  • Jianbin He
  • Guoheng Huang
  • Xiaochen Yuan
  • Zhoule Feng
  • Yan Li
  • Guo Zhong
  • Wing Kuen Ling
  • Chi Man Pun
  • Qi Yang

Research output: Article, peer-reviewed

Abstract

Graph-based approaches have been widely adopted in biomedical applications for modeling multimodal data, particularly for the accurate diagnosis and effective treatment of brain disorders. Most existing graph-based methods for multimodal medical data extract features by fusing multimodal information through weighted operations, and then manually define graph structures based on specific modalities to learn patient representations via graph embedding. However, these methods often overlook the complex correlations and discrepancies across modalities, making it difficult to obtain highly relevant information. Moreover, constructing an appropriate graph in advance is a considerable challenge, as manually defined structures are susceptible to spurious or noisy edges. These factors inevitably lead to incorrect predictions in real-world clinical scenarios. To address these limitations, we propose an end-to-end Adaptive Multimodal Graph Learning (AMGL) framework comprising two key modules: Modal-Aware Integration Learning (MAIL) and Cluster-constrained Adaptive Graph Learning (CAGL). MAIL captures both inter-modal relevance and complementarity to construct enriched modality-aware representations, while CAGL performs adaptive graph learning based on data clustering and uses a Graph-Gated Neural Network (GGNN) for disease prediction. Experimental results on the TADPOLE and ABIDE datasets demonstrate that our method achieves superior classification accuracy and generalization, with an average performance gain of 2%–3% over state-of-the-art approaches.
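To make the two-module architecture concrete, the following is a minimal PyTorch sketch of how such a pipeline could be wired together. The module names (MAIL, CAGL, GGNN) come from the paper, but the abstract does not specify the internals, so the attention-based fusion, the cosine-kNN adjacency, the gate form, and all shapes here are assumptions rather than the authors' implementation; in particular, CAGL's clustering constraint is not reproduced.

# Hypothetical sketch of the AMGL pipeline described in the abstract.
# Everything beyond the module names is an assumption, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MAIL(nn.Module):
    """Modal-Aware Integration Learning: fuse per-modality features with a
    learned attention weight per modality (assumed realization of capturing
    inter-modal relevance and complementarity)."""
    def __init__(self, dims, hidden):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.attn = nn.Linear(hidden, 1)

    def forward(self, feats):  # feats: list of (N, d_m) tensors, one per modality
        h = torch.stack([p(x) for p, x in zip(self.proj, feats)], dim=1)  # (N, M, H)
        w = torch.softmax(self.attn(torch.tanh(h)), dim=1)                # (N, M, 1)
        return (w * h).sum(dim=1)                                         # (N, H)

class CAGL(nn.Module):
    """Cluster-constrained Adaptive Graph Learning: build the patient graph
    from the learned embeddings instead of a hand-defined one, then apply
    one gated (GGNN-style) update before classification."""
    def __init__(self, hidden, classes, k=10):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(2 * hidden, hidden)
        self.update = nn.Linear(2 * hidden, hidden)
        self.cls = nn.Linear(hidden, classes)

    def adjacency(self, z):
        # Cosine-similarity kNN graph (an assumption; the paper learns the
        # graph under a clustering constraint, which is omitted here).
        zn = F.normalize(z, dim=1)
        s = zn @ zn.t()
        idx = s.topk(self.k, dim=1).indices
        a = torch.zeros_like(s).scatter_(1, idx, 1.0)
        a = ((a + a.t()) > 0).float()                 # symmetrize
        return a / a.sum(dim=1, keepdim=True).clamp(min=1)

    def forward(self, z):
        m = self.adjacency(z) @ z                     # aggregate neighbor embeddings
        zm = torch.cat([z, m], dim=1)
        g = torch.sigmoid(self.gate(zm))              # per-feature gate
        h = g * torch.tanh(self.update(zm)) + (1 - g) * z
        return self.cls(h)

class AMGL(nn.Module):
    def __init__(self, dims, hidden=64, classes=3):
        super().__init__()
        self.mail = MAIL(dims, hidden)
        self.cagl = CAGL(hidden, classes)

    def forward(self, feats):
        return self.cagl(self.mail(feats))

if __name__ == "__main__":
    # Two toy modalities for 32 patients, e.g. imaging and clinical scores.
    feats = [torch.randn(32, 100), torch.randn(32, 50)]
    logits = AMGL(dims=[100, 50])(feats)
    print(logits.shape)  # torch.Size([32, 3])

In this sketch the graph is rebuilt from the current embeddings on every forward pass, so the adjacency adapts end-to-end with the representations, which is the behavior the abstract attributes to CAGL in contrast to a manually defined graph.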
