TY - GEN
T1 - All-in-One Multi-Organ Segmentation in 3D CT Images via Self-Supervised and Cross-Dataset Learning
AU - Huang, Jiaju
AU - Chen, Shaobin
AU - Liang, Xinglong
AU - Sun, Yue
AU - Hu, Menghan
AU - Tan, Tao
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Accurate segmentation of organs related to breast cancer metastasis in 3D CT images is crucial for clinical applications such as surgical planning, radiation therapy, and personalized treatment strategies. However, the scarcity of annotated datasets poses challenges in training robust models. This work introduces a novel framework combining self-supervised learning (SSL) and cross-dataset label integration to develop an All-In-One (AIO) segmentation model. We pretrain an encoder using contrastive learning on over 6,000 unlabeled CT images, enhancing feature extraction for segmenting six key organs without requiring annotations. Organ-specific models are trained on individual datasets, and cross-dataset inference generates pseudo labels for unannotated organs. These pseudo labels, combined with ground truth, create a comprehensive training set for the AIO model. Our approach improves the average Dice coefficient for segmentation from 89.48% to 91.40%, effectively addressing the challenge of limited annotations. This advancement has the potential to enhance diagnostic accuracy and reduce the workload of imaging specialists.
AB - Accurate segmentation of organs related to breast cancer metastasis in 3D CT images is crucial for clinical applications such as surgical planning, radiation therapy, and personalized treatment strategies. However, the scarcity of annotated datasets poses challenges in training robust models. This work introduces a novel framework combining self-supervised learning (SSL) and cross-dataset label integration to develop an All-In-One (AIO) segmentation model. We pretrain an encoder using contrastive learning on over 6,000 unlabeled CT images, enhancing feature extraction for segmenting six key organs without requiring annotations. Organ-specific models are trained on individual datasets, and cross-dataset inference generates pseudo labels for unannotated organs. These pseudo labels, combined with ground truth, create a comprehensive training set for the AIO model. Our approach improves the average Dice coefficient for segmentation from 89.48% to 91.40%, effectively addressing the challenge of limited annotations. This advancement has the potential to enhance diagnostic accuracy and reduce the workload of imaging specialists.
KW - breast cancer
KW - computed tomography
KW - contrastive learning
KW - multi-organ segmentation
KW - self-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=105005830965&partnerID=8YFLogxK
U2 - 10.1109/ISBI60581.2025.10980981
DO - 10.1109/ISBI60581.2025.10980981
M3 - Conference contribution
AN - SCOPUS:105005830965
T3 - Proceedings - International Symposium on Biomedical Imaging
BT - ISBI 2025 - 2025 IEEE 22nd International Symposium on Biomedical Imaging, Proceedings
PB - IEEE Computer Society
T2 - 22nd IEEE International Symposium on Biomedical Imaging, ISBI 2025
Y2 - 14 April 2025 through 17 April 2025
ER -