Pose Attention-Guided Paired-Images Generation for Visible-Infrared Person Re-Identification

Yongheng Qian, Su Kit Tang

Research output: Contribution to journal › Article › peer-review

Abstract

A key challenge in visible-infrared person re-identification (VI-ReID) is the modality difference between visible and infrared images, which leads to large intra-person and small inter-person distances. Most existing methods design feature extractors and loss functions to bridge the modality gap. However, unpaired images constrain the VI-ReID model's ability to learn instance-level alignment features. Unlike these methods, in this paper we propose a pose attention-guided paired-images generation network (PAPG) from the standpoint of data augmentation. PAPG generates cross-modality paired images whose shape and appearance are consistent with the real images, enabling instance-level feature alignment by minimizing the distance between every pair of images. Furthermore, our method alleviates data insufficiency and reduces the risk of VI-ReID model overfitting. Comprehensive experiments on two publicly available datasets validate the effectiveness and generalizability of PAPG. In particular, on the SYSU-MM01 dataset, our method achieves gains of 7.76% in Rank-1 accuracy and 5.87% in mAP.
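
To make the instance-level alignment idea concrete, the sketch below shows how a pairwise alignment loss could work: a shared encoder embeds a real visible image and its generated infrared counterpart, and the distance between each feature pair is minimized. This is a minimal PyTorch illustration under assumed names and dimensions (the toy Encoder, pairwise_alignment_loss, and feat_dim are hypothetical), not the authors' PAPG implementation.

```python
# Minimal sketch of instance-level pairwise feature alignment (hypothetical,
# not the authors' PAPG implementation). Given a real visible image and its
# generated infrared counterpart, we pull their embeddings together with a
# simple Euclidean alignment loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy shared feature extractor; stands in for the VI-ReID backbone."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so the alignment loss is scale-invariant.
        return F.normalize(self.net(x), dim=1)

def pairwise_alignment_loss(feat_real: torch.Tensor,
                            feat_gen: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between each real/generated feature pair."""
    return ((feat_real - feat_gen) ** 2).sum(dim=1).mean()

if __name__ == "__main__":
    encoder = Encoder()
    visible = torch.randn(8, 3, 128, 64)       # batch of real visible crops
    gen_infrared = torch.randn(8, 3, 128, 64)  # generated IR counterparts
    loss = pairwise_alignment_loss(encoder(visible), encoder(gen_infrared))
    loss.backward()  # gradients flow into the shared encoder
    print(f"alignment loss: {loss.item():.4f}")
```

Because both modalities pass through the same encoder, minimizing this loss on generated pairs pushes visible and infrared embeddings of the same identity toward a common region of the feature space, which is the point of pairing the images in the first place.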

Original language: English
Pages (from-to): 346-350
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 31
DOIs
Publication status: Published - 2024

Keywords

  • Cross-modality person re-identification
  • attention mechanism
  • paired-images
  • pose-guided
