Abstract
Dense video captioning aims to generate detailed natural-language descriptions of an untrimmed video, which requires identifying the events in the video and analysing their semantics in depth. Existing methods typically follow a localisation-then-captioning pipeline over the given frame sequence, so caption generation depends heavily on which objects have been detected. This work proposes a parallel dense video captioning method that generates event proposals and captions simultaneously, relaxing the mutual constraint between the two tasks. Additionally, a deformable Transformer framework is introduced to reduce or eliminate the manually tuned threshold hyperparameters used in such methods. An information transfer station is also added to organise the representations: it receives the hidden features extracted from the frames and implicitly generates multiple event proposals. The proposed method adopts an LSTM (long short-term memory) network with deformable attention as the main layer for caption generation. Experimental results on the ActivityNet Captions dataset show that the proposed method achieves competitive results, outperforming comparable methods in this area to a certain degree.
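To make the parallel structure concrete, the following is a minimal PyTorch-style sketch in which learned event queries (standing in for the information transfer station) produce shared event representations that feed a localisation head and an LSTM captioning head in parallel. All names (`ParallelDenseCaptioner`, `event_queries`, and so on) and dimensions are hypothetical, and standard Transformer attention is used in place of deformable attention for brevity; this is an illustration of the general idea, not the authors' implementation.

```python
# Hypothetical sketch of parallel proposal-and-caption heads over shared
# event representations (standard attention stands in for deformable attention).
import torch
import torch.nn as nn

class ParallelDenseCaptioner(nn.Module):
    def __init__(self, feat_dim=500, d_model=256, num_queries=10,
                 vocab_size=10000):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)  # project frame features
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        # Learned event queries play the role of the information transfer
        # station: each query gathers evidence for one candidate event.
        self.event_queries = nn.Parameter(torch.randn(num_queries, d_model))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        # Parallel heads: event localisation (centre, length) and captioning.
        self.loc_head = nn.Linear(d_model, 2)
        self.word_embed = nn.Embedding(vocab_size, d_model)
        self.caption_lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.word_head = nn.Linear(d_model, vocab_size)

    def forward(self, frame_feats, captions_in):
        """frame_feats: (B, T, feat_dim); captions_in: (B, Q, L) teacher-forcing token ids."""
        B = frame_feats.size(0)
        memory = self.encoder(self.input_proj(frame_feats))        # (B, T, d_model)
        queries = self.event_queries.unsqueeze(0).repeat(B, 1, 1)  # (B, Q, d_model)
        event_repr = self.decoder(queries, memory)                 # (B, Q, d_model)

        # Head 1: one proposal (normalised centre, length) per event query.
        proposals = self.loc_head(event_repr).sigmoid()            # (B, Q, 2)

        # Head 2: one caption per event query, decoded by an LSTM whose
        # initial hidden state is the shared event representation.
        Q = event_repr.size(1)
        h0 = event_repr.reshape(1, B * Q, -1)
        c0 = torch.zeros_like(h0)
        tok_embs = self.word_embed(captions_in).flatten(0, 1)      # (B*Q, L, d_model)
        out, _ = self.caption_lstm(tok_embs, (h0, c0))
        word_logits = self.word_head(out).reshape(
            B, Q, -1, self.word_head.out_features)                 # (B, Q, L, vocab)
        return proposals, word_logits


# Tiny usage example with random tensors.
model = ParallelDenseCaptioner()
feats = torch.randn(2, 64, 500)              # 2 videos, 64 frames, 500-d features
caps = torch.randint(0, 10000, (2, 10, 20))  # teacher-forcing tokens per event query
proposals, logits = model(feats, caps)
print(proposals.shape, logits.shape)         # (2, 10, 2) and (2, 10, 20, 10000)
```

Because both heads read from the same event representations, proposal quality and caption quality can inform each other through shared gradients rather than through a fixed localise-then-caption ordering, which is the point of the parallel design described in the abstract.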
| Original language | English |
| --- | --- |
| Article number | 3685 |
| Journal | Mathematics |
| Volume | 11 |
| Issue number | 17 |
| DOIs | |
| Publication status | Published - Sept 2023 |
Keywords
- dense video caption
- feature extraction
- multimodal feature fusion
- neural network
- video captioning
Press/Media
- Findings in Mathematics Reported from Faculty of Applied Sciences (Parallel Dense Video Caption Generation with Multi-Modal Features), 12/09/23, 1 item of media coverage