Abstract
A critical challenge in EEG-based emotion recognition is poor model generalization across datasets due to significant domain shifts. Traditional methods struggle because they either overfit to source-domain characteristics or fail to bridge large discrepancies between datasets. To address this, we propose the Cross-corpus Attention-based Transfer Enhancement network (CATE), a two-stage framework. The core novelty of CATE is a dual-view self-supervised pre-training strategy that learns robust, domain-invariant representations from two complementary perspectives. Unlike single-view models, which capture an incomplete picture, CATE combines (1) Noise-Enhanced Representation Modeling (NERM), which builds resilience to domain-specific artifacts and noise, and (2) Wavelet Transform Representation Modeling (WTRM), which captures the essential multi-scale spectral patterns underlying emotion. This dual approach avoids the brittle assumptions of traditional domain adaptation, which often fails when domains are too dissimilar. In the second stage, supervised fine-tuning adapts these features for classification using attention-based mechanisms. Extensive experiments on six transfer tasks across the SEED, SEED-IV, and SEED-V datasets show that CATE establishes a new state of the art, achieving accuracies from 68.01% to 81.65% and outperforming prior methods by up to 15.65 percentage points. By learning transferable features from these complementary views, CATE provides a robust framework that significantly advances the practical applicability of cross-corpus EEG emotion recognition.
| Field | Value |
|---|---|
| Original language | English |
| Article number | 5718 |
| Journal | Sensors |
| Volume | 25 |
| Issue number | 18 |
| DOIs | |
| Publication status | Published - Sept 2025 |
Keywords
- cross-corpus
- domain adaptation
- emotion recognition
- self-supervised learning