
面向跨视角地理定位的感知特征融合网络 (Perceptual Feature Fusion Network for Cross-View Geo-Localization)

  • Guangdong University of Technology

Research output: Article › peer-review

3 citations (Scopus)

Abstract

Cross-view geo-localization aims to locate the same geographic target by retrieving images of it across multiple platform views (UAV, satellite, and street view). The main challenge of this localization task is the drastic change between viewpoints, which degrades the retrieval performance of the model. Current networks for cross-view geo-localization suffer from two problems. First, owing to the diversity of scales and perspectives of geographic targets, they are vulnerable to interference from localized regions when perceiving target information. Second, targets belonging to the same category vary greatly in angle across viewpoints. Therefore, a perceptual feature fusion network (PFFNet) for cross-view geo-localization is proposed to learn location-aware features and establish semantic correlations between viewpoints. For each viewpoint, PFFNet builds a shunted contextual embedding network (SCENet) as the backbone to extract that viewpoint's contextual information and construct the target location encoding space. The proposed method is compared with state-of-the-art methods on the cross-view geo-localization dataset University-1652. Experimental results show that the proposed perceptual feature fusion network achieves strong adaptive performance on large-scale datasets.
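The retrieval setup the abstract describes (per-view encoders mapping images into a shared embedding space, then ranking gallery images by similarity to a query) can be sketched minimally. The encoder below is a hypothetical stand-in, not the paper's SCENet backbone; a single shared projection simulates the aligned embedding space that training would normally produce.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, projection):
    """Hypothetical per-view encoder: projects raw image features into a
    shared embedding space and L2-normalizes them (stand-in for SCENet)."""
    emb = features @ projection
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

feat_dim, emb_dim = 16, 8
shared_proj = rng.normal(size=(feat_dim, emb_dim))  # simulates a trained, view-aligned space

# Toy gallery of 5 satellite images; the UAV query depicts the same
# target as gallery item 2, with small view-induced perturbation.
gallery_raw = rng.normal(size=(5, feat_dim))
query_raw = gallery_raw[2] + 0.05 * rng.normal(size=feat_dim)

gallery = encode(gallery_raw, shared_proj)
query = encode(query_raw[None, :], shared_proj)

scores = (query @ gallery.T).ravel()  # cosine similarity, since embeddings are unit-norm
best = int(np.argmax(scores))
print(best)  # should retrieve index 2, the matching target
```

In the actual task, the gallery and query come from different platforms, so each view gets its own encoder and the training loss pulls cross-view embeddings of the same target together; the ranking step itself is unchanged.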

Translated title of the contribution: Perceptual Feature Fusion Network for Cross-View Geo-Localization
Original language: Chinese (Traditional)
Pages (from-to): 255-262
Number of pages: 8
Journal: Computer Engineering and Applications
Volume: 60
Issue number: 3
DOIs
Publication status: Published - 1 Feb 2024

