Quaternion Cross-Modality Spatial Learning for Multi-Modal Medical Image Segmentation

Junyang Chen, Guoheng Huang, Xiaochen Yuan, Guo Zhong, Zewen Zheng, Chi Man Pun, Jian Zhu, Zhixin Huang

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Recently, Deep Neural Networks (DNNs) have had a large impact on image processing, including medical image segmentation, and real-valued convolution has been extensively utilized in multi-modal medical image segmentation to segment lesions accurately by learning from the data. However, the weighted-summation operation in such convolutions limits their ability to maintain the spatial dependence that is crucial for identifying different lesion distributions. In this paper, we propose a novel Quaternion Cross-modality Spatial Learning (Q-CSL) framework that explores spatial information while considering the linkage between multi-modal images. Specifically, we introduce quaternions to represent the data and the coordinates that contain spatial information. Additionally, we propose a Quaternion Spatial-association Convolution to learn this spatial information. Subsequently, the proposed De-level Quaternion Cross-modality Fusion (De-QCF) module excavates inner-space features and fuses cross-modality spatial dependencies. Our experimental results demonstrate that our approach performs well compared with competitive methods, with only 0.01061 M parameters and 9.95 G FLOPs.
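
The abstract does not spell out the layer equations, but the quaternion convolution it builds on is well established in the literature: the real-valued weighted sum is replaced by a Hamilton product, so each output component is a signed mixture of all four input components, and the relationship between components (here, modalities or coordinates) is preserved through the layer rather than collapsed by independent summation. The PyTorch sketch below shows only that generic mechanism; the class name QuaternionConv2d and its parameters are illustrative assumptions, and the paper's Quaternion Spatial-association Convolution and De-QCF module add coordinate encoding and cross-modality fusion steps that are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QuaternionConv2d(nn.Module):
    """Generic Hamilton-product quaternion convolution (illustrative sketch,
    not the paper's exact Quaternion Spatial-association Convolution)."""

    def __init__(self, in_q, out_q, kernel_size, padding=0):
        super().__init__()
        # One real-valued kernel per quaternion weight component.
        shape = (out_q, in_q, kernel_size, kernel_size)
        self.w_r = nn.Parameter(torch.randn(shape) * 0.05)
        self.w_i = nn.Parameter(torch.randn(shape) * 0.05)
        self.w_j = nn.Parameter(torch.randn(shape) * 0.05)
        self.w_k = nn.Parameter(torch.randn(shape) * 0.05)
        self.padding = padding

    def forward(self, q):
        # q: (B, 4*in_q, H, W), components stacked as [r | i | j | k].
        # The Hamilton product W ⊗ q is expressed as one real convolution
        # whose weight blocks carry the quaternion sign pattern, so every
        # output component mixes all four input components.
        cat = lambda a, b, c, d: torch.cat([a, b, c, d], dim=1)
        w_r = cat(self.w_r, -self.w_i, -self.w_j, -self.w_k)  # r' row
        w_i = cat(self.w_i,  self.w_r, -self.w_k,  self.w_j)  # i' row
        w_j = cat(self.w_j,  self.w_k,  self.w_r, -self.w_i)  # j' row
        w_k = cat(self.w_k, -self.w_j,  self.w_i,  self.w_r)  # k' row
        w = torch.cat([w_r, w_i, w_j, w_k], dim=0)  # (4*out_q, 4*in_q, kH, kW)
        return F.conv2d(q, w, padding=self.padding)

A quick shape check, mapping four input modalities (e.g. the four MRI sequences of a brain-tumor dataset) to the four quaternion components, one channel each:

x = torch.randn(2, 4, 64, 64)  # (batch, 4 components x in_q=1, H, W)
layer = QuaternionConv2d(in_q=1, out_q=8, kernel_size=3, padding=1)
print(layer(x).shape)          # torch.Size([2, 32, 64, 64])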

Original language: English
Pages (from-to): 1412-1423
Number of pages: 12
Journal: IEEE Journal of Biomedical and Health Informatics
Volume: 28
Issue number: 3
DOIs
Publication status: Published - 1 Mar 2024

Keywords

  • Cross-modality
  • Multi-modal medical image
  • Quaternion
  • Spatial dependency
