Distributed Spatial Transformer for Object Tracking in Multi-Camera

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)


Today's video surveillance devices are nearly ubiquitous and continuously collecting data, yet tracking targets in surveillance applications remains a challenging task, and manual monitoring is tiresome. A single camera can only record within its limited field of view, so monitoring critical areas requires deploying many cameras in different places, which makes it even harder for personnel to watch every feed. By re-identifying individuals captured by several cameras, this work presents a framework for tracking people across multiple cameras and discusses methods for fast people detection and tracking in multi-view settings. The scheme employs spatial transformations to achieve real-time multi-view tracking capability, and its feasibility has been demonstrated in an implementation.
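The abstract gives only a high-level description of the spatial-transformation component. As a rough illustration of the core operation behind a spatial transformer (affine grid generation followed by bilinear sampling, which lets a network warp a detected person patch into a canonical view for re-identification), here is a minimal NumPy sketch. The function names and the 4×4 test image are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def affine_grid(theta, h, w):
    """Build a sampling grid from a 2x3 affine matrix theta,
    using normalized coordinates in [-1, 1] as in the STN formulation."""
    ys, xs = np.meshgrid(
        np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij"
    )
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # (3, h*w)
    src = theta @ coords                                          # (2, h*w)
    return src[0].reshape(h, w), src[1].reshape(h, w)

def bilinear_sample(img, xs, ys):
    """Bilinearly sample a single-channel image at normalized coords."""
    h, w = img.shape
    # map [-1, 1] back to pixel positions
    px = (xs + 1) * (w - 1) / 2
    py = (ys + 1) * (h - 1) / 2
    x0 = np.clip(np.floor(px).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(py).astype(int), 0, h - 2)
    dx, dy = px - x0, py - y0
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bot = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

# Illustrative 4x4 "image"; the identity transform leaves it unchanged.
img = np.arange(16, dtype=float).reshape(4, 4)
theta_id = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
xs, ys = affine_grid(theta_id, 4, 4)
out = bilinear_sample(img, xs, ys)
```

In a full STN, `theta` would be regressed by a small localization network per detection rather than fixed, and the differentiable sampler lets the warp be learned end-to-end.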

Original language: English
Title of host publication: 25th International Conference on Advanced Communications Technology
Subtitle of host publication: New Cyber Security Risks for Enterprise Amidst COVID-19 Pandemic!!, ICACT 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 4
ISBN (Electronic): 9791188428106
Publication status: Published - 2023
Event: 25th International Conference on Advanced Communications Technology, ICACT 2023 - Pyeongchang, Korea, Republic of
Duration: 19 Feb 2023 - 22 Feb 2023

Publication series

Name: International Conference on Advanced Communication Technology, ICACT
ISSN (Print): 1738-9445


Conference: 25th International Conference on Advanced Communications Technology, ICACT 2023
Country/Territory: Korea, Republic of


  • Distribution
  • Multi-View
  • Person Tracking
  • STN

