Parallel Dense Video Caption Generation with Multi-Modal Features

Xuefei Huang, Ka Hou Chan, Wei Ke, Hao Sheng

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


The task of dense video captioning is to generate detailed natural-language descriptions for an original video, which requires deep analysis and mining of semantic content to identify the events in the video. Existing methods typically follow a localisation-then-captioning sequence over the given frames, so the quality of the generated captions depends heavily on which events have been detected. This work proposes a parallel dense video captioning method that addresses the mutual constraint between event proposals and captions simultaneously. A deformable Transformer framework is introduced to reduce, or eliminate, the manual tuning of thresholds and related hyperparameters required by such methods. An information transfer station is also added as a representation organiser: it receives the hidden features extracted from the frames and implicitly generates multiple event proposals. For caption generation, the proposed method adopts an LSTM (long short-term memory) network with deformable attention as its main layer. Experimental results on the ActivityNet Captions dataset show that the proposed method achieves competitive results, outperforming comparable methods in this area.
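The deformable attention mentioned in the abstract can be illustrated with a toy 1D version over per-frame features: each query predicts a small set of fractional sampling locations around a reference point, interpolates features there, and aggregates them with softmax weights. The dimensions, projection matrices, and reference point below are purely illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

T, D, K = 16, 8, 4  # frames, feature dimension, sampling points per query

features = rng.normal(size=(T, D))  # hidden features for each frame
query = rng.normal(size=(D,))       # one decoder query vector

# Hypothetical learned projections (random here for the sketch)
W_off = rng.normal(size=(D, K)) * 0.1  # predicts K fractional offsets
W_att = rng.normal(size=(D, K))        # predicts K attention logits

ref = 0.5                                # normalised reference point in [0, 1]
offsets = np.tanh(query @ W_off)         # offsets constrained to [-1, 1]
locs = np.clip(ref + 0.5 * offsets, 0.0, 1.0) * (T - 1)

att = np.exp(query @ W_att)
att /= att.sum()                         # softmax attention weights over K points

# Linear interpolation to sample features at fractional frame positions
lo = np.floor(locs).astype(int)
hi = np.minimum(lo + 1, T - 1)
frac = locs - lo
sampled = (1 - frac)[:, None] * features[lo] + frac[:, None] * features[hi]

context = (att[:, None] * sampled).sum(axis=0)  # aggregated context vector
print(context.shape)  # (8,)
```

In the paper's setting, a context vector like this would condition the LSTM caption decoder at each step; here the sampling and weighting are shown in isolation.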

Original language: English
Article number: 3685
Issue number: 17
Publication status: Published - Sept 2023


  • dense video caption
  • feature extraction
  • multimodal feature fusion
  • neural network
  • video captioning


