A heuristic transformation in discriminative dictionary learning for person re-identification

Hao Sheng, Yanwei Zheng, Yang Liu, Kai Lv, Abbas Rajabifard, Yiqun Chen, Wei Ke, Zhang Xiong

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Person re-identification (ReID) is an important technology for target association in surveillance applications. Recently, sparse representation-based classification has been applied to person ReID, where its discriminative feature extraction has produced excellent results. The dictionary learning (DL) method is vital to sparse representation, and the discriminative power of the learned dictionary determines ReID performance. Unlike previous approaches that only add constraints during DL, we propose a discriminative dictionary learning model (DDLM) that learns a discriminative dictionary by transforming the dictionary representation space during training. We determine the statistical distribution of the training data and divide the data into two categories according to their contribution to the sparse representation: high-contribution data and low-contribution data. Then, we extend the information space that contains most of the high-contribution data and shrink the remaining parts. Because the representation space of the dictionary is transformed, the solving process is modified accordingly. Experiments on the benchmark datasets (CAVIAR4REID, ETHZ, and i-LIDS) demonstrate that the proposed model outperforms state-of-the-art approaches.
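The abstract describes expanding the value range that holds most of the high-contribution data and shrinking the rest, which the keywords identify as a piecewise linear transformation. The sketch below is a minimal illustration of that idea, not the paper's actual method: the quantile bounds (low_q, high_q), the gain factor, and the function name piecewise_linear_transform are all hypothetical choices, since the abstract does not specify how the high-contribution interval or the slopes are determined.

```python
import numpy as np

def piecewise_linear_transform(x, low_q=0.25, high_q=0.75, gain=2.0):
    """Expand the interval that holds most high-contribution values and
    shrink the outer intervals, via a continuous, monotone piecewise
    linear map. All parameter choices here are illustrative assumptions."""
    x = np.asarray(x, dtype=float)
    lo, hi = np.quantile(x, [low_q, high_q])   # assumed high-contribution interval
    x_min, x_max = x.min(), x.max()

    # Output width of the expanded middle segment.
    mid_out = gain * (hi - lo)
    # Remaining output width is shared by the two outer (shrunk) segments
    # so that the overall range is preserved and the map stays monotone.
    outer_in = (lo - x_min) + (x_max - hi)
    outer_out = max((x_max - x_min) - mid_out, 1e-12)
    shrink = outer_out / max(outer_in, 1e-12)

    y = np.empty_like(x)
    left = x < lo
    mid = (x >= lo) & (x <= hi)
    right = x > hi

    y[left] = x_min + (x[left] - x_min) * shrink
    left_end = x_min + (lo - x_min) * shrink
    y[mid] = left_end + (x[mid] - lo) * gain
    mid_end = left_end + (hi - lo) * gain
    y[right] = mid_end + (x[right] - hi) * shrink
    return y


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    values = rng.normal(size=1000)            # stand-in for dictionary responses
    transformed = piecewise_linear_transform(values)
    print(values.min(), values.max())
    print(transformed.min(), transformed.max())
```

In this toy version the middle segment gets a slope of `gain` while the outer segments are compressed proportionally, so dense (high-contribution) values are spread apart and sparse (low-contribution) values are packed together; the paper's transformation of the dictionary representation space and the corresponding changes to the solver are defined in the full text.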

Original language: English
Article number: 8668409
Pages (from-to): 40313-40322
Number of pages: 10
Journal: IEEE Access
Volume: 7
DOIs
Publication status: Published - 2019

Keywords

  • Piecewise linear transformation
  • discriminative dictionary learning
  • person re-identification
  • sparse representation

