
Superpixel-Enhanced Quaternion Feature Fusion and Contextualization Graph Contrastive Learning for Cervical Cancer Diagnosis

  • Jiajun Ma
  • Guoheng Huang
  • Xiaochen Yuan
  • Xuhang Chen
  • Jiawang Chen
  • Lianglun Cheng
  • Chi Man Pun
  • Guo Zhong
  • Qingjian Ye

Research output: Conference contribution › Peer-reviewed

Abstract

Coarse-grained classification methods have demonstrated robust performance across various image classification tasks. However, in colposcopy classification, these methods often struggle to effectively capture subtle lesion features. Fine-grained methods address this by merging multi-layer features to locate regions with high discriminative power. Nevertheless, such approaches frequently overlook contextual relationships between features and lose original shape information. Additionally, the similarity between lesion and normal regions further exacerbates classification challenges. To address these limitations, we propose SQG-net, a novel fine-grained classification method for cervical cancer diagnosis. SQG-net incorporates three innovative modules. First, the Quaternion Superpixel Encoder (QSE) preserves lesion shape and color features through superpixel segmentation and quaternion convolution. Next, the Hierarchical Quaternion Feature Selection (HQFS) network identifies fine-grained discriminative features, enhancing subtle feature differentiation. Finally, a Graph Context Learning Module (GCLM) captures contextual relationships between features. Additionally, contrastive learning is utilized to improve feature space separation, enhancing classification accuracy. The method was evaluated on both a private cervical imaging dataset and a publicly available dataset. SQG-net achieved significant improvements in classification accuracy, recording 88.97% on the private dataset and 79.11% on the publicly available dataset, establishing new state-of-the-art performance in cervical cancer classification. The code will be released upon conference acceptance.
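The abstract notes that contrastive learning is used to improve feature-space separation between lesion and normal regions. A minimal sketch of an InfoNCE-style contrastive objective is shown below; this is an illustrative reconstruction in NumPy, not the paper's released code, and the function name and temperature value are assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Illustrative InfoNCE-style contrastive loss between two batches of
    embeddings; matched rows of z1 and z2 are treated as positive pairs,
    all other rows as negatives (hypothetical sketch)."""
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature        # pairwise similarity matrix
    # cross-entropy over each row, positive on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# identical views (perfect alignment) should score lower than random views
aligned = info_nce_loss(z, z)
unaligned = info_nce_loss(z, rng.normal(size=(8, 16)))
print(aligned < unaligned)
```

Pulling matched (positive) embeddings together while pushing mismatched ones apart is what tightens the separation between visually similar lesion and normal features in the learned space.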

Original language: English
Title of host publication: Neural Information Processing - 32nd International Conference, ICONIP 2025, Proceedings
Editors: Tadahiro Taniguchi, Chi Sing Andrew Leung, Tadashi Kozuno, Junichiro Yoshimoto, Mufti Mahmud, Maryam Doborjeh, Kenji Doya
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 340-355
Number of pages: 16
ISBN (Print): 9789819543779
DOIs
Publication status: Published - 2026
Event: 32nd International Conference on Neural Information Processing, ICONIP 2025 - Okinawa, Japan
Duration: 20 Nov 2025 - 24 Nov 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 16310 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 32nd International Conference on Neural Information Processing, ICONIP 2025
Country/Territory: Japan
City: Okinawa
Period: 20/11/25 - 24/11/25

UN SDG

This research output contributes to the following UN Sustainable Development Goals (SDGs):

  1. Good health and well-being

Fingerprint

Dive into the research topics of "Superpixel-Enhanced Quaternion Feature Fusion and Contextualization Graph Contrastive Learning for Cervical Cancer Diagnosis". Together they form a unique fingerprint.

Cite this