TY - GEN
T1 - Non-adversarial Learning
T2 - 27th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2024
AU - Han, Luyi
AU - Tan, Tao
AU - Zhang, Tianyu
AU - Wang, Xin
AU - Gao, Yuan
AU - Lu, Chunyao
AU - Liang, Xinglong
AU - Dou, Haoran
AU - Huang, Yunzhi
AU - Mann, Ritse
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - Adversarial learning helps generative models translate MRI from a source to a target sequence when paired samples are lacking. However, applying adversarial MRI synthesis in clinical settings is challenging due to training instability and mode collapse. To address this issue, we leverage intermediate sequences to estimate the common latent space among multi-sequence MRI, enabling the reconstruction of distinct sequences from that common latent space. We propose a generative model that compresses discrete representations of each sequence to estimate the Gaussian distribution of the vector-quantized common (VQC) latent space shared by multiple sequences. Moreover, we improve latent space consistency with contrastive learning and increase model stability through domain augmentation. Experiments on the BraTS2021 dataset show that our non-adversarial model outperforms GAN-based methods, and that the VQC latent space gives our model (1) anti-interference ability, eliminating the effects of noise, bias fields, and artifacts, and (2) solid semantic representation ability, with the potential for one-shot segmentation. Our code is publicly available at https://github.com/fiy2W/mriseq2seq.
AB - Adversarial learning helps generative models translate MRI from a source to a target sequence when paired samples are lacking. However, applying adversarial MRI synthesis in clinical settings is challenging due to training instability and mode collapse. To address this issue, we leverage intermediate sequences to estimate the common latent space among multi-sequence MRI, enabling the reconstruction of distinct sequences from that common latent space. We propose a generative model that compresses discrete representations of each sequence to estimate the Gaussian distribution of the vector-quantized common (VQC) latent space shared by multiple sequences. Moreover, we improve latent space consistency with contrastive learning and increase model stability through domain augmentation. Experiments on the BraTS2021 dataset show that our non-adversarial model outperforms GAN-based methods, and that the VQC latent space gives our model (1) anti-interference ability, eliminating the effects of noise, bias fields, and artifacts, and (2) solid semantic representation ability, with the potential for one-shot segmentation. Our code is publicly available at https://github.com/fiy2W/mriseq2seq.
KW - Latent Space
KW - MRI synthesis
KW - Multi-Sequence MRI
UR - https://www.scopus.com/pages/publications/105007827044
U2 - 10.1007/978-3-031-72120-5_45
DO - 10.1007/978-3-031-72120-5_45
M3 - Conference contribution
AN - SCOPUS:105007827044
SN - 9783031721199
T3 - Lecture Notes in Computer Science
SP - 484
EP - 491
BT - Medical Image Computing and Computer Assisted Intervention - MICCAI 2024 - 27th International Conference, Proceedings
A2 - Linguraru, Marius George
A2 - Feragen, Aasa
A2 - Glocker, Ben
A2 - Giannarou, Stamatia
A2 - Schnabel, Julia A.
A2 - Dou, Qi
A2 - Lekadir, Karim
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 6 October 2024 through 10 October 2024
ER -