HAResformer: A Hybrid ResNet-Transformer Hierarchical Aggregation Architecture for Visible-Infrared Person Re-Identification

Yongheng Qian, Su Kit Tang

Research output: Contribution to journal › Article › peer-review

Abstract

Modality differences and intra-modality variations make the visible-infrared person re-identification (VI-ReID) task highly challenging. Most existing methods build network frameworks on convolutional neural networks (CNNs) or pure vision transformers (ViTs) to extract discriminative features and address these challenges. However, they overlook several key points: deeply fusing local features with global spatial information yields more comprehensive discriminative representations, patch tokens carry rich semantic information, and different feature extraction stages of the network emphasize different semantic elements. To address these issues, we propose a novel hybrid ResNet-transformer hierarchical aggregation architecture named HAResformer. HAResformer comprises three key components: a hierarchical feature extraction (HFE) framework, deeply supervised aggregation (DSA), and a hierarchical global aggregate encoder (HGAE). Specifically, HFE introduces a lightweight cross-encoder feature fusion module (CFFM) to deeply integrate the local features and global spatial information of a person extracted by the ResNet encoder (RE) and the transformer encoder (TE). The fused features are then fed as global priors into the next-stage TE for deep interaction, in order to extract specific local features and global contextual clues. In addition, DSA and HGAE provide auxiliary supervision and aggregation over multi-scale features to enhance multi-granularity feature representation. HAResformer effectively alleviates modality differences and reduces intra-modality variations. Extensive experiments on three benchmarks demonstrate the effectiveness and generalization ability of our architecture, which outperforms most state-of-the-art methods. HAResformer has the potential to become a new VI-ReID baseline and to promote high-quality research in the future.
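
To make the hybrid design concrete, below is a minimal PyTorch sketch of one hierarchical stage in the spirit of HFE, assuming a simple parallel layout: a ResNet encoder (RE) block and a transformer encoder (TE) block process the input side by side, a lightweight cross-attention fusion step (standing in for the CFFM) injects the convolutional local features into the patch tokens, and the fused tokens are what the next-stage TE would receive as global priors. All class names, layer choices, and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: module names and hyperparameters are assumptions,
# not the HAResformer reference code.
import torch
import torch.nn as nn


class ResNetEncoderStage(nn.Module):
    """Stand-in for one ResNet encoder (RE) stage extracting local features."""
    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):                 # x: (B, C, H, W)
        return self.block(x)


class TransformerEncoderStage(nn.Module):
    """Stand-in for one transformer encoder (TE) stage over patch tokens."""
    def __init__(self, dim, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens):            # tokens: (B, N, dim)
        return self.encoder(tokens)


class CrossEncoderFusion(nn.Module):
    """Lightweight fusion of RE local features and TE tokens (a CFFM-like step)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, conv_feat, tokens):
        B, C, H, W = conv_feat.shape
        local = conv_feat.flatten(2).transpose(1, 2)      # (B, H*W, C) local tokens
        fused, _ = self.attn(query=tokens, key=local, value=local)
        return self.norm(tokens + fused)                  # fused tokens act as global priors


class HybridStage(nn.Module):
    """One hierarchical stage: RE and TE run in parallel, then their outputs are fused."""
    def __init__(self, in_ch, dim):
        super().__init__()
        self.re = ResNetEncoderStage(in_ch, dim)
        self.proj = nn.Conv2d(dim, dim, 1)                # align conv channels with token dim
        self.te = TransformerEncoderStage(dim)
        self.fuse = CrossEncoderFusion(dim)

    def forward(self, x, tokens):
        feat = self.re(x)                                 # local features
        tokens = self.te(tokens)                          # global contextual tokens
        tokens = self.fuse(self.proj(feat), tokens)       # fed to the next-stage TE
        return feat, tokens


if __name__ == "__main__":
    images = torch.randn(2, 3, 256, 128)                  # toy visible/infrared batch
    init_tokens = torch.randn(2, 32, 64)                  # toy patch tokens
    stage = HybridStage(in_ch=3, dim=64)
    feat, tokens = stage(images, init_tokens)
    print(feat.shape, tokens.shape)                       # (2, 64, 128, 64), (2, 32, 64)
```

In a full multi-stage model, several such stages would be chained so that each stage's fused tokens become the global priors of the next, with auxiliary heads (in the spirit of DSA/HGAE) supervising and aggregating the multi-scale outputs; those pieces are omitted here for brevity.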

Original language: English
Journal: IEEE Internet of Things Journal
DOIs
Publication status: Accepted/In press - 2025

Keywords

  • CNN
  • Cross-Modality
  • Feature Fusion
  • Multi-Scale Supervision
  • Person Re-Identification
  • Vision Transformer
