Local feature-based video captioning with multiple classifier and CARU-attention

Research output: Article › peer-review

1 Citation (Scopus)

Abstract

Video captioning aims to identify multiple objects and their behaviours in a video event and to generate captions for the current scene. The task is to produce a detailed natural-language description of the video in real time, which requires deep learning to analyse and determine the relationships between objects of interest across the frame sequence. In practice, existing methods typically detect objects in the frame sequence and then generate captions from features extracted at the detected object locations; the quality of the generated captions therefore depends heavily on the performance of object detection and identification. This work proposes an advanced video captioning approach that works adaptively and effectively addresses the interdependence between event proposals and captions. An attention-based multimodal framework is introduced to capture the main context from both the frames and the sound in the video scene. In addition, an intermediate model is presented that collects the hidden states captured from the input sequence, extracting the main features and implicitly producing multiple event proposals. For caption prediction, the proposed method employs the CARU layer with attention as the primary RNN layer for decoding. Experimental results show that the proposed work improves on the baseline method and outperforms other state-of-the-art models on the ActivityNet dataset, presenting competitive results on video captioning tasks.
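The abstract names the CARU (Content-Adaptive Recurrent Unit) layer as the primary RNN layer for caption decoding. The following is a minimal, hypothetical sketch of a content-adaptive gated recurrent cell in the spirit of CARU; the weight names, layer sizes, and exact gating equations here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CARUCell:
    """Hypothetical sketch of a content-adaptive recurrent cell.

    The key idea (assumed here): an update gate is additionally scaled by a
    content-adaptive weight derived from the projected input, so the cell
    updates its hidden state more when the current input is informative.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # Projections of the input and the previous hidden state.
        self.W_x = rng.uniform(-s, s, (hidden_size, input_size))
        self.W_h = rng.uniform(-s, s, (hidden_size, hidden_size))
        # Update-gate parameters.
        self.W_zx = rng.uniform(-s, s, (hidden_size, input_size))
        self.W_zh = rng.uniform(-s, s, (hidden_size, hidden_size))

    def step(self, x, h):
        xt = self.W_x @ x                           # content feature of input
        n = np.tanh(xt + self.W_h @ h)              # candidate state
        z = sigmoid(self.W_zx @ x + self.W_zh @ h)  # update gate
        l = sigmoid(xt)                             # content-adaptive weight
        g = l * z                                   # combined gate
        return (1.0 - g) * h + g * n                # new hidden state

# Usage: run a toy feature sequence (e.g. frame features) through the cell.
cell = CARUCell(input_size=4, hidden_size=3)
h = np.zeros(3)
for t in range(5):
    h = cell.step(np.full(4, 0.1 * t), h)
print(h.shape)
```

Because the new state is a convex combination of the previous state and a tanh candidate, the hidden values stay bounded in [-1, 1]; in the full captioning model such a cell would be wrapped with attention over the encoder's hidden states before each decoding step.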

Original language: English
Pages (from-to): 2304-2317
Number of pages: 14
Journal: IET Image Processing
Volume: 18
Issue number: 9
DOIs
Publication status: Published - 20 Jul 2024
