Learning irregular space transformation for person re-identification

Yanwei Zheng, Hao Sheng, Yang Liu, Kai Lv, Wei Ke, Zhang Xiong

Research output: Contribution to journal › Article › peer-review


Abstract

Person re-identification (ReID) distinguishes people by their discriminative features. Human perception typically relies on the minority of discriminative colors to classify a target, rather than on the majority of colors shared among targets. Because ReID uses a small number of fixed cameras, the captured backgrounds are highly similar, so the majority of background pixels are non-discriminative (an effect that is amplified in the feature map). This paper analyzes the distributions of feature maps to reveal their differing discriminative power, and collects statistics that classify feature-map values into individual ones and general ones according to their deviation from the mean of each mini-batch. Based on these findings, we introduce a learning irregular space transformation model for convolutional neural networks that enlarges the individual variance while reducing the general one, thereby enhancing the discriminative power of features. We validate our analysis on various public data sets and achieve competitive results in quantitative evaluation.
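The abstract describes splitting feature-map values into "individual" and "general" groups by their deviation from the mini-batch mean, then amplifying the former and shrinking the latter. The sketch below is only an illustrative PyTorch interpretation of that idea, not the authors' published formulation; the module name `IrregularSpaceTransform`, the standard-deviation threshold, and the two gain parameters are hypothetical choices made for the example.

```python
import torch
import torch.nn as nn


class IrregularSpaceTransform(nn.Module):
    """Illustrative sketch: values far from the mini-batch mean are treated
    as 'individual' and their deviation is amplified; values close to the
    mean are treated as 'general' and their deviation is reduced."""

    def __init__(self, threshold=1.0, gain_individual=1.5, gain_general=0.5):
        super().__init__()
        self.threshold = threshold          # in per-location std units (assumed)
        self.gain_individual = gain_individual
        self.gain_general = gain_general

    def forward(self, x):
        # x: (N, C, H, W) feature maps from a CNN backbone
        mean = x.mean(dim=0, keepdim=True)        # mini-batch mean per location
        std = x.std(dim=0, keepdim=True) + 1e-5   # mini-batch std per location
        deviation = x - mean
        # classify each value as individual (large deviation) or general
        individual = (deviation.abs() > self.threshold * std).float()
        gain = self.gain_individual * individual + self.gain_general * (1.0 - individual)
        # enlarge individual variance, reduce general variance
        return mean + gain * deviation


if __name__ == "__main__":
    layer = IrregularSpaceTransform()
    feats = torch.randn(32, 256, 24, 8)   # e.g., ReID backbone feature maps
    print(layer(feats).shape)              # torch.Size([32, 256, 24, 8])
```

In practice such a layer would sit on top of a backbone's feature maps and be trained end to end; the hard threshold here is only a stand-in for whatever learned or statistical criterion the paper actually uses.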

Original language: English
Article number: 8468189
Pages (from-to): 53214-53225
Number of pages: 12
Journal: IEEE Access
Volume: 6
Publication status: Published - 2018

Keywords

  • Irregular space transformation
  • convolutional neural networks
  • discriminative power enhancement
  • person re-identification
