Abstract
Video captioning aims to identify multiple objects and their behaviours in a video event and to generate a caption for the current scene. The task is to produce a detailed natural-language description of the video in real time, which requires deep learning to analyze the relationships between the objects of interest across the frame sequence. In practice, existing methods typically detect objects in the frame sequence and then generate captions from features extracted at the detected object locations, so the quality of the generated captions depends heavily on the performance of object detection and identification. This work proposes an advanced video captioning approach that adaptively and effectively addresses the interdependence between event proposals and captions. An attention-based multimodal framework is introduced to capture the main context from both the frames and the sound of the video scene, and an intermediate model collects the hidden states of the input sequence, extracting the main features and implicitly producing multiple event proposals. For caption prediction, the proposed method employs a CARU layer with attention as the primary RNN layer for decoding. Experimental results show that the proposed work improves on the baseline method and outperforms other state-of-the-art models on the ActivityNet dataset, achieving competitive results on video captioning tasks.
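The abstract describes a decoder built from a CARU (content-adaptive recurrent unit) layer combined with attention over multimodal encoder states. Since the paper's exact gate equations and code are not reproduced on this page, the following PyTorch sketch is only a hedged illustration of the general idea: a GRU-like cell whose transition gate is modulated by a content-adaptive gate on the current input, fed by an additive-attention context over frame/audio features. All class names, layer shapes, and gate formulas below are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch only: a content-adaptive gated recurrent step with
# attention over multimodal encoder features. Gate equations are assumed,
# not taken from the paper's source code.
import torch
import torch.nn as nn


class CARUCell(nn.Module):
    """GRU-like cell with a content-adaptive gate driven by the current input."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.x_proj = nn.Linear(input_size, hidden_size)   # content feature of x_t
        self.h_proj = nn.Linear(hidden_size, hidden_size)  # recurrent projection
        self.update = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h):
        x_feat = self.x_proj(x)                        # projected input content
        n = torch.tanh(x_feat + self.h_proj(h))        # candidate hidden state
        l = torch.sigmoid(x_feat)                      # content-adaptive gate
        z = torch.sigmoid(self.update(torch.cat([x, h], dim=-1)))  # update gate
        g = l * z                                      # combined transition weight
        return (1.0 - g) * h + g * n                   # interpolated new state


class AttentionCARUDecoderStep(nn.Module):
    """One decoding step: attend over encoder features, then update the cell."""

    def __init__(self, feat_size: int, embed_size: int, hidden_size: int):
        super().__init__()
        self.attn = nn.Linear(feat_size + hidden_size, 1)
        self.cell = CARUCell(embed_size + feat_size, hidden_size)

    def forward(self, word_embed, enc_feats, h):
        # enc_feats: (batch, time, feat) multimodal (frame + sound) states
        query = h.unsqueeze(1).expand(-1, enc_feats.size(1), -1)
        scores = self.attn(torch.cat([enc_feats, query], dim=-1))  # (B, T, 1)
        weights = torch.softmax(scores, dim=1)         # attention over time
        context = (weights * enc_feats).sum(dim=1)     # attended context vector
        return self.cell(torch.cat([word_embed, context], dim=-1), h)
```

In this reading, the content-adaptive gate lets the cell weight how much the current word/context should overwrite the running caption state, which is the property the abstract attributes to the CARU-attention decoder.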
Original language | English |
---|---|
Pages (from-to) | 2304-2317 |
Number of pages | 14 |
Journal | IET Image Processing |
Volume | 18 |
Issue number | 9 |
DOIs | |
Publication status | Published - 20 Jul 2024 |
Keywords
- convolutional neural nets
- feature extraction
- pattern classification
- recurrent neural nets
- video signal processing
Press/Media
- New Image Processing Study Findings Reported from Faculty of Applied Sciences (Local Feature-based Video Captioning With Multiple Classifier and CARU-attention), 13/05/24 (1 item of media coverage)