Abstract
In person re-identification (re-id), the key to retrieving the correct person image is to extract discriminative features, and features at different levels are considered complementary. In this work, we design a person re-id network, ASCLNet, that learns complementary multi-level features. ASCLNet contains three feature branches, each extracting features at a different level. Furthermore, we propose two novel modules for learning local and attribute features in ASCLNet: the contextual local module, which learns local features with context information from the local body part, and the attribute soft-sharing module, which enables shared feature representations among attributes. With these two modules, ASCLNet extracts more discriminative multi-level features. Experimental results show that ASCLNet achieves excellent performance on the Market-1501 and DukeMTMC-reID datasets, with mAP of 88.85% and 80.18%, respectively.
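The abstract describes retrieval by comparing discriminative, fused multi-level features. As an illustrative sketch only (this is not the paper's ASCLNet, whose branch and fusion designs are not detailed in the abstract), features from three hypothetical branches can be concatenated, L2-normalized, and ranked by cosine similarity; all names and numbers below are invented for illustration:

```python
import math

def l2_normalize(v):
    # Normalize a feature vector so cosine similarity reduces to a dot product.
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse_branches(global_f, local_f, attr_f):
    # Hypothetical fusion: concatenate global, local, and attribute features.
    # The paper's actual fusion strategy is not specified in the abstract.
    return l2_normalize(global_f + local_f + attr_f)

def cosine(a, b):
    # Dot product of two L2-normalized vectors = cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Toy query and gallery with made-up branch features.
query = fuse_branches([0.2, 0.9], [0.1, 0.4], [0.7])
gallery = {
    "person_A": fuse_branches([0.25, 0.85], [0.15, 0.35], [0.65]),
    "person_B": fuse_branches([0.9, 0.1], [0.8, 0.0], [0.1]),
}
best_match = max(gallery, key=lambda k: cosine(query, gallery[k]))
```

Here `best_match` resolves to `"person_A"`, whose toy features point in nearly the same direction as the query's.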
| Original language | English |
|---|---|
| Pages (from–to) | 2251-2264 |
| Number of pages | 14 |
| Journal | Visual Computer |
| Volume | 40 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - April 2024 |
Fingerprint
Dive into the research topics of "Joint attribute soft-sharing and contextual local: a multi-level features learning network for person re-identification". Together they form a unique fingerprint.