Abstract
Image captions express image content in natural-language sentences, helping readers understand and analyse information across different media. Building on encoder-decoder neural networks, captioning provides a rational structure for tasks such as image encoding and caption prediction. This work introduces a Convolutional Neural Network to Bidirectional Content-Adaptive Recurrent Unit (CNN-to-Bi-CARU) model that uses a bidirectional structure to consider contextual features and capture the major features of an image. The encoded image features are passed into the forward and backward CARU layers, respectively, to refine word prediction and produce contextual text output for captioning. An attention layer is also introduced to collect the features produced by the context-adaptive gate in CARU, computing the weighting information used to extract and determine relationships. In experiments, the proposed CNN-to-Bi-CARU model outperforms other advanced models in the field, extracting contextual information better and producing more detailed caption representations. The model scores 41.28 on BLEU@4, 31.23 on METEOR, 61.07 on ROUGE-L, and 133.20 on CIDEr-D, making it competitive in image captioning on the MSCOCO dataset.
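To make the pipeline in the abstract concrete, the sketch below shows one plausible reading of the CNN-to-Bi-CARU flow: a pooled CNN feature initialises forward and backward CARU passes over the caption tokens, and an attention layer weights the bidirectional states before word prediction. The abstract does not give the CARU cell equations, so the cell here assumes the commonly published CARU formulation (an update gate modulated by a content gate over the projected input); all names (`CARUCell`, `CNNtoBiCARU`, `feat_dim`, and so on) are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn


class CARUCell(nn.Module):
    """One CARU step (assumed formulation): the update gate z_t is
    modulated by a content gate sigmoid(x'_t), giving the
    content-adaptive gate l_t that blends the previous hidden state
    with the candidate state n_t."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.proj_x = nn.Linear(input_size, hidden_size)   # word projection x'_t
        self.proj_h = nn.Linear(hidden_size, hidden_size)  # candidate path
        self.gate_x = nn.Linear(input_size, hidden_size)   # update gate (input)
        self.gate_h = nn.Linear(hidden_size, hidden_size)  # update gate (hidden)

    def forward(self, x, h):
        x_proj = self.proj_x(x)                             # x'_t
        n = torch.tanh(self.proj_h(h) + x_proj)             # candidate state n_t
        z = torch.sigmoid(self.gate_x(x) + self.gate_h(h))  # update gate z_t
        l = torch.sigmoid(x_proj) * z                       # content-adaptive gate l_t
        return (1.0 - l) * h + l * n                        # new hidden state h_t


class CNNtoBiCARU(nn.Module):
    """Sketch of the CNN-to-Bi-CARU captioner: CNN features initialise
    forward/backward CARU passes whose concatenated states are weighted
    by a simple attention layer before per-step word prediction."""

    def __init__(self, vocab_size, embed_dim=256, hidden=512, feat_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.init_h = nn.Linear(feat_dim, hidden)  # image feature -> initial state
        self.fwd = CARUCell(embed_dim, hidden)
        self.bwd = CARUCell(embed_dim, hidden)
        self.attn = nn.Linear(2 * hidden, 1)       # scalar weight per time step
        self.out = nn.Linear(2 * hidden, vocab_size)

    def forward(self, image_feat, captions):
        # image_feat: (B, feat_dim) pooled CNN encoding; captions: (B, T) token ids
        T = captions.size(1)
        emb = self.embed(captions)
        h_f = h_b = torch.tanh(self.init_h(image_feat))
        fwd_states, bwd_states = [], [None] * T
        for t in range(T):                          # forward pass over the caption
            h_f = self.fwd(emb[:, t], h_f)
            fwd_states.append(h_f)
        for t in reversed(range(T)):                # backward pass
            h_b = self.bwd(emb[:, t], h_b)
            bwd_states[t] = h_b
        states = torch.stack(
            [torch.cat([f, b], dim=-1) for f, b in zip(fwd_states, bwd_states)],
            dim=1)                                  # (B, T, 2*hidden)
        alpha = torch.softmax(self.attn(states), dim=1)   # attention weights
        context = (alpha * states).sum(dim=1, keepdim=True)
        return self.out(states + context)           # word logits (B, T, vocab)
```

Initialising both directions from the same pooled CNN feature and fusing the attention context additively are design assumptions for brevity; the paper's attention operates on the context-adaptive gate outputs, which a faithful reimplementation would expose from the cell.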
Original language | English |
---|---|
Pages (from-to) | 84934-84943 |
Number of pages | 10 |
Journal | IEEE Access |
Volume | 11 |
DOIs | |
Publication status | Published - 2023 |
Keywords
- Bi-CARU
- CNN
- NLP
- RNN
- attention mechanism
- context-adaptive
- image captioning
Press/Media
- Findings on Engineering Reported by Investigators at Faculty of Applied Sciences (Context-Adaptive-Based Image Captioning by Bi-CARU), 18/09/23, 1 item of media coverage