Abstract
Training large-scale models requires big data, but in many application scenarios collecting such data is impractical because of cost and resource constraints, so tasks must be performed with only a few training samples; this few-shot problem is difficult to resolve because the training data are inadequate. To tackle this problem, we present a simple and efficient method: contrastive label generation with knowledge for few-shot learning (CLG). Specifically, we (1) propose contrastive label generation to align labels with the input data and enhance feature representations; (2) propose a label knowledge filter to avoid noise when injecting explicit knowledge into the data and labels; (3) employ a label logits mask to simplify the task; and (4) employ a multi-task fusion loss to learn different perspectives from the training set. Experiments demonstrate that CLG achieves an accuracy of 59.237%, about 3% higher than the best baseline, showing that CLG obtains better features and gives the model more information about the input sentences, which improves its classification ability.
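Since the abstract only names these components, the following is a minimal sketch of how such pieces are commonly implemented in PyTorch. The function names, the InfoNCE-style contrastive formulation, and the fixed fusion weight `alpha` are assumptions for illustration, not the authors' actual code.

```python
import torch
import torch.nn.functional as F

def contrastive_label_loss(input_emb, label_emb, temperature=0.07):
    # InfoNCE-style alignment between sentence embeddings (B, D) and the
    # embeddings of each example's generated label text (B, D): each input
    # should be most similar to its own label within the batch.
    input_emb = F.normalize(input_emb, dim=-1)
    label_emb = F.normalize(label_emb, dim=-1)
    logits = input_emb @ label_emb.t() / temperature  # (B, B) similarities
    targets = torch.arange(input_emb.size(0), device=input_emb.device)
    return F.cross_entropy(logits, targets)

def masked_classification_loss(cls_logits, valid_label_mask, targets):
    # Label logits mask: suppress logits of labels outside the current
    # task's label set before computing cross-entropy.
    masked = cls_logits.masked_fill(~valid_label_mask, float('-inf'))
    return F.cross_entropy(masked, targets)

def fusion_loss(cls_loss, ctr_loss, alpha=0.5):
    # Multi-task fusion: a weighted sum of the two objectives.
    return alpha * cls_loss + (1 - alpha) * ctr_loss

# Example usage: fuse the two objectives for one batch (shapes illustrative).
B, D, C = 8, 768, 20
input_emb, label_emb = torch.randn(B, D), torch.randn(B, D)
cls_logits = torch.randn(B, C)
mask = torch.zeros(C, dtype=torch.bool)
mask[:5] = True                        # this task only uses 5 of the labels
targets = torch.randint(0, 5, (B,))
loss = fusion_loss(masked_classification_loss(cls_logits, mask, targets),
                   contrastive_label_loss(input_emb, label_emb))
```

A single weighted sum is the simplest fusion scheme; the paper may use a different weighting or additional loss terms.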
Original language | English
---|---
Article number | 472
Journal | Mathematics
Volume | 12
Issue number | 3
DOIs |
Publication status | Published - Feb 2024
Keywords
- contrastive learning
- few-shot learning
- knowledge graph
- natural language processing
- transfer learning
Press/Media
- Study Data from Faculty of Applied Sciences Provide New Insights into Mathematics (CLG: Contrastive Label Generation with Knowledge for Few-Shot Learning)
  CHAN TONG LAM & HAN MA, 15/02/24, 1 item of media coverage