VL-Few: Vision Language Alignment for Multimodal Few-Shot Meta Learning

Han Ma, Baoyu Fan, Benjamin K. Ng, Chan Tong Lam

Research output: Article, peer-reviewed

Abstract

Complex real-world tasks such as visual question answering (VQA) involve models of different modalities. However, traditional multimodal learning requires a large amount of aligned data, such as image-text pairs, and constructing such training data at scale is a challenge for multimodal learning. Therefore, we propose VL-Few, a simple and effective method for the multimodal few-shot problem. VL-Few (1) proposes modal alignment, which aligns visual features into the language space through a lightweight network and improves the multimodal understanding ability of the model; (2) adopts few-shot meta learning for the multimodal problem, constructing a few-shot meta task pool to improve the generalization ability of the model; (3) proposes semantic alignment to enhance the model's semantic understanding of the task, context, and demonstrations; (4) proposes task alignment, which casts the training data into the target task form and improves the task understanding ability of the model; (5) proposes generation alignment, which adopts token-level training and a multitask fusion loss to improve the generation ability of the model. Our experimental results show the effectiveness of VL-Few on multimodal few-shot problems.
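The abstract describes modal alignment as mapping visual features into the language space through a lightweight network, and generation alignment as token-level training with a multitask fusion loss. The sketch below illustrates these two ideas only in spirit: the class name `VisualToLanguageAligner`, the feature dimensions, the number of visual tokens, and the loss weighting `alpha` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed, not the paper's code): a lightweight projector maps
# frozen visual features into the language model's token-embedding space so the
# LM can consume them as a "visual prefix"; a simple weighted sum stands in for
# the multitask fusion loss mentioned in the abstract.
import torch
import torch.nn as nn


class VisualToLanguageAligner(nn.Module):
    """Lightweight projector from vision-encoder features to LM embeddings."""

    def __init__(self, vision_dim: int = 768, lm_dim: int = 2048, n_visual_tokens: int = 4):
        super().__init__()
        self.n_visual_tokens = n_visual_tokens
        # A small MLP suffices; the vision encoder and language model stay frozen.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim * n_visual_tokens),
        )

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: (batch, vision_dim) pooled features from a frozen encoder.
        batch = vision_feats.size(0)
        out = self.proj(vision_feats)                     # (batch, lm_dim * n_visual_tokens)
        return out.view(batch, self.n_visual_tokens, -1)  # (batch, n_visual_tokens, lm_dim)


def fused_loss(lm_loss: torch.Tensor, aux_loss: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    # Token-level LM loss combined with an auxiliary task loss; the weighting
    # here is a placeholder for the paper's multitask fusion loss.
    return lm_loss + alpha * aux_loss


if __name__ == "__main__":
    aligner = VisualToLanguageAligner()
    feats = torch.randn(2, 768)          # pooled visual features for 2 images
    visual_tokens = aligner(feats)       # embeddings the LM can attend to as a prefix
    print(visual_tokens.shape)           # torch.Size([2, 4, 2048])
    print(fused_loss(torch.tensor(2.1), torch.tensor(0.7)))
```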

Original language: English
Article number: 1169
Journal: Applied Sciences (Switzerland)
Volume: 14
Issue number: 3
DOIs
Publication status: Published - Feb 2024
