Abstract
Due to factors such as camera angle and pose changes, salient local features are often suppressed in person re-identification. Moreover, many existing person re-identification methods do not consider the relations between features. To address these issues, this paper proposes two novel approaches: (1) To reduce confusion and misidentification when local features of different individuals have similar attributes, we design a contextual relation network that establishes relationships between local features and contextual features, so that every local feature of the same person also carries contextual information. (2) To fully and correctly express key local features, we propose an uncertainty-guided joint attention module, which jointly represents individual pixels and local spatial features to enhance the credibility of local features. Finally, our method achieves competitive performance against state-of-the-art methods on four widely used datasets.
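The two ideas in the abstract can be illustrated with a minimal numpy sketch. Everything below is an assumption for illustration only: the shapes, the mean-pooled context vector, the dot-product relation scores, and the `log_var` uncertainty head are hypothetical stand-ins, not the paper's actual architecture (the abstract does not specify layer details).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 6 local part features and one global context
# vector of dimension 64 (dimensions are assumptions, not from the paper).
num_parts, dim = 6, 64
local_feats = rng.standard_normal((num_parts, dim))  # per-part features
context = local_feats.mean(axis=0)                   # crude global context

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Contextual relation (sketch): score each local feature against the
# global context, then mix context into every part, so all parts of the
# same person carry shared contextual information.
scores = local_feats @ context / np.sqrt(dim)
alpha = softmax(scores)                              # relation weights, sum to 1
contextualized = local_feats + alpha[:, None] * context

# Uncertainty-guided weighting (sketch): down-weight parts whose
# predicted variance is high, keeping only credible local features.
# `log_var` stands in for a learned uncertainty head.
log_var = rng.standard_normal(num_parts) * 0.1
credibility = np.exp(-log_var)                       # high variance -> low weight
weighted = contextualized * credibility[:, None]
```

The design choice sketched here, fusing a shared context into every part feature before re-weighting by estimated uncertainty, is one plausible reading of the abstract, not a reproduction of the published model.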
| Original language | English |
|---|---|
| Article number | 103822 |
| Journal | Journal of Visual Communication and Image Representation |
| Volume | 93 |
| DOIs | |
| Publication status | Published - May 2023 |
Keywords
- Attention mechanism
- Contextual relation network
- Person re-identification
- Relation between features
- Uncertainty-guided joint attention
Fingerprint
Dive into the research topics of 'Uncertainty-guided joint attention and contextual relation network for person re-identification'. Together they form a unique fingerprint.