Abstract
Complex real-world tasks, such as visual question answering (VQA), involve multiple modalities. However, traditional multimodal learning requires large amounts of aligned data, such as image-text pairs, and constructing such training data at scale is a major challenge. We therefore propose VL-Few, a simple and effective method for the multimodal few-shot problem. VL-Few (1) introduces modal alignment, which maps visual features into the language space through a lightweight network and improves the model's multimodal understanding; (2) applies few-shot meta learning to the multimodal setting by constructing a pool of few-shot meta tasks to improve the model's generalization; (3) introduces semantic alignment to strengthen the model's semantic understanding of the task, context, and demonstrations; (4) introduces task alignment, which casts the training data into the form of the target task to improve task understanding; and (5) introduces generation alignment, which adopts token-level training and a multitask fusion loss to improve the model's generation ability. Our experimental results show the effectiveness of VL-Few on multimodal few-shot problems.
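The abstract only names the components of VL-Few; as a rough illustration of the modal-alignment idea (a lightweight network that projects visual features into the language space), here is a minimal sketch. The class name, layer structure, and dimensions are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn


class VisualToLanguageProjector(nn.Module):
    """Hypothetical lightweight projector: maps features from a frozen vision
    encoder into a language model's embedding space, in the spirit of the
    "modal alignment" component described in the abstract."""

    def __init__(self, visual_dim: int = 768, language_dim: int = 1024, hidden_dim: int = 512):
        super().__init__()
        # Assumed two-layer MLP; the paper only states that the network is lightweight.
        self.project = nn.Sequential(
            nn.Linear(visual_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, language_dim),
        )

    def forward(self, visual_features: torch.Tensor) -> torch.Tensor:
        # visual_features: (batch, num_patches, visual_dim) from a frozen vision encoder
        # returns: (batch, num_patches, language_dim) tokens usable as language-model inputs
        return self.project(visual_features)


# Usage sketch: project image patch features so they can be prepended to text embeddings.
features = torch.randn(2, 49, 768)                 # e.g. a 7x7 patch grid from a vision backbone
prefix_tokens = VisualToLanguageProjector()(features)
print(prefix_tokens.shape)                         # torch.Size([2, 49, 1024])
```

Keeping the projector small and training only this mapping (rather than the full vision or language backbone) is one common way such alignment modules are used in few-shot settings; whether VL-Few freezes the backbones in exactly this way is not stated in the abstract.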
Original language | English |
---|---|
Article number | 1169 |
Journal | Applied Sciences (Switzerland) |
Volume | 14 |
Issue number | 3 |
DOIs | |
Publication status | Published - Feb 2024 |
Keywords
- few-shot learning
- meta learning
- multimodal learning
- representation alignment
- vision language learning
- visual question answering
Press/Media
- New Applied Sciences Study Findings Has Been Reported by a Researcher at Faculty of Applied Sciences (VL-Few: Vision Language Alignment for Multimodal Few-Shot Meta Learning)
  Chan Tong Lam & Han Ma, 16/02/24
  1 item of media coverage