Wavelet-based multi-level information compensation learning for visible-infrared person re-identification

Haobiao Fan, Yanbing Chen, Yibo Chen, Zhixin Tie, Hao Sheng, Wei Ke

Research output: Contribution to journal › Article › peer-review

Abstract

The main challenge in visible-infrared person re-identification (VI-ReID) is extracting discriminative features across modalities. Most existing methods focus on minimizing modality discrepancy but overlook the shallow, modality-invariant information that is lost as network depth increases. To address this, we propose the Wavelet-based Multi-level Information Compensation (WMIC) learning method. At multiple network stages, we design an Information Compensation Block (ICB) that applies wavelet decomposition to deep features, producing four wavelet subbands that preserve modality-invariant details and enlarge the receptive field. These subbands are used together with shallow features to compute an attention matrix, which is then applied to enhance the local information of the shallow features. Additionally, we represent each person image with two sets of embeddings by introducing a Wavelet Enhancement Block (WEB) that generates an additional embedding. Finally, a dual-branch center-guided loss makes the two embeddings complementary, thereby reducing the disparity between infrared and visible images. Extensive experiments on the SYSU-MM01, RegDB, and LLCM datasets demonstrate that WMIC outperforms existing methods.
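The ICB's four-subband decomposition can be illustrated with a single-level 2-D discrete wavelet transform. The abstract does not name the wavelet family, so the sketch below assumes the simplest case, a Haar DWT over one feature map; the function name `haar_dwt2` and the toy input are illustrative, not from the paper:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar wavelet decomposition of a feature map.

    Splits an (H, W) array with even dimensions into four subbands:
    LL (low-frequency approximation) and LH/HL/HH (detail bands).
    Each subband has shape (H/2, W/2), so subsequent operations see a
    doubled effective receptive field while the detail bands retain
    the fine, modality-invariant structure the paper aims to preserve.
    """
    a = x[0::2, 0::2]  # top-left element of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # approximation subband
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

# Toy 4x4 "feature map": each subband comes out 2x2.
feat = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(feat)
```

In WMIC the four subbands are then used to build an attention matrix against shallow features; how that matrix is formed is specified in the paper itself, not here.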

Original language: English
Article number: 105471
Journal: Digital Signal Processing: A Review Journal
Volume: 168
DOIs
Publication status: Published - Jan 2026

Keywords

  • Cross-modality
  • Feature alignment
  • Person re-identification
  • Wavelet transformation
