TY - JOUR
T1 - AMGL: Adaptive Multimodal Graph Learning for Brain Disease Prediction
T2 - IEEE Transactions on Radiation and Plasma Medical Sciences
AU - Wu, Runsheng
AU - He, Jianbin
AU - Huang, Guoheng
AU - Yuan, Xiaochen
AU - Feng, Zhoule
AU - Li, Yan
AU - Zhong, Guo
AU - Ling, Wing Kuen
AU - Pun, Chi Man
AU - Yang, Qi
N1 - Publisher Copyright:
© 2026 IEEE.
PY - 2026
Y1 - 2026
AB - Graph-based approaches have been widely adopted in biomedical applications for modeling multimodal data, particularly for the accurate diagnosis and effective treatment of brain disorders. Most existing graph-based methods for multimodal medical data extract features by fusing modalities through weighted operations and then manually define graph structures from specific modalities to learn patient representations via graph embedding. However, these methods often overlook the complex correlations and discrepancies across modalities, making it difficult to extract highly relevant information. Moreover, constructing an appropriate graph in advance is a considerable challenge, as manually defined structures are susceptible to spurious or noisy edges. These factors can lead to incorrect predictions in real-world clinical scenarios. To address these limitations, we propose an end-to-end Adaptive Multimodal Graph Learning (AMGL) framework comprising two key modules: Modal-Aware Integration Learning (MAIL) and Cluster-constrained Adaptive Graph Learning (CAGL). MAIL captures both inter-modal relevance and complementarity to construct enriched modality-aware representations, while CAGL performs adaptive graph learning guided by data clustering and uses a Graph-Gated Neural Network (GGNN) for disease prediction. Experimental results on the TADPOLE and ABIDE datasets demonstrate that our method achieves superior classification accuracy and generalization, with an average performance gain of 2%–3% over state-of-the-art approaches.
KW - Adaptive graph learning
KW - Brain disease prediction
KW - Modality-aware learning
KW - Multi-modality data
UR - https://www.scopus.com/pages/publications/105029450054
U2 - 10.1109/TRPMS.2026.3658322
DO - 10.1109/TRPMS.2026.3658322
M3 - Article
AN - SCOPUS:105029450054
SN - 2469-7311
JO - IEEE Transactions on Radiation and Plasma Medical Sciences
JF - IEEE Transactions on Radiation and Plasma Medical Sciences
ER -