TY - JOUR
T1 - Self-Supervised Learning for Domain Generalization With a Multi-Classifier Ensemble Approach
AU - Qin, Zhenkai
AU - Luo, Qining
AU - Nong, Xunyi
AU - Chen, Xiaolong
AU - Zhang, Hongfeng
AU - Wong, Cora Un In
N1 - Publisher Copyright:
© 2025 The Author(s). IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - Domain generalization poses significant challenges, particularly as models must generalize effectively to unseen target domains after training on multiple source domains. Traditional approaches typically aim to minimize domain discrepancies; however, they often fall short when handling complex data variations and class imbalance. In this paper, we propose an innovative model, the self-supervised learning multi-classifier ensemble (SSL-MCE), to address these limitations. SSL-MCE integrates self-supervised learning within a dynamic multi-classifier ensemble framework, leveraging ResNet as a shared feature extraction backbone. By combining four distinct classifiers, it captures diverse and complementary features, thereby enhancing adaptability to new domains. A self-supervised rotation prediction task enables SSL-MCE to focus on intrinsic data structures rather than domain-specific details, learning robust domain-invariant features. To mitigate class imbalance, we incorporate adaptive focal attention loss (AFAL), which dynamically emphasizes challenging and rare instances, ensuring improved accuracy on difficult samples. Furthermore, SSL-MCE adopts a dynamic loss-based weighting scheme to prioritize more reliable classifiers in the final prediction. Extensive experiments conducted on public benchmark datasets, including PACS and DomainNet, indicate that SSL-MCE outperforms state-of-the-art methods, achieving superior generalization and resource efficiency through its streamlined ensemble framework.
KW - image classification
KW - image recognition
UR - http://www.scopus.com/inward/record.url?scp=105007611864&partnerID=8YFLogxK
U2 - 10.1049/ipr2.70098
DO - 10.1049/ipr2.70098
M3 - Article
AN - SCOPUS:105007611864
SN - 1751-9659
VL - 19
JO - IET Image Processing
JF - IET Image Processing
IS - 1
M1 - e70098
ER -