A Self-Weighting Module to Improve Sentiment Analysis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Citations (Scopus)

Abstract

This article introduces a self-weighting module that filters meaningless words and normalizes them before RNN encoding, with the aim of alleviating the long-term dependency problem. We use the concept of weights in our design to analyze the transition of hidden states, and we present the complete architecture for processing the weighted feature and the embedded word within the proposed module. In particular, we investigate the conditions that can enhance convergence and show that the proposed classifiers significantly improve accuracy in the experimental cases, not only giving better performance but also converging faster. Moreover, the proposed module is general and can be applied to all RNN-related network models.
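The core idea in the abstract, scaling each embedded word by a learned weight so that meaningless words contribute little to the RNN's hidden-state transitions, can be sketched roughly as follows. This is a minimal NumPy illustration assuming a sigmoid gate over a learned projection; all function names, shapes, and the gating form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_weight(embeddings, w, b):
    """Hypothetical self-weighting step (assumed form, not the paper's code).

    embeddings: (seq_len, dim) word embeddings; w: (dim,) projection; b: scalar bias.
    Returns embeddings scaled per word by a weight in (0, 1).
    """
    weights = sigmoid(embeddings @ w + b)   # one scalar weight per word
    return embeddings * weights[:, None]    # down-weight low-scoring words

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))               # 5 words, 8-dim embeddings
w = rng.normal(size=8)
weighted = self_weight(emb, w, b=0.0)
print(weighted.shape)                       # (5, 8)
```

Because the sigmoid weight lies in (0, 1), each word vector is attenuated rather than amplified, which matches the abstract's framing of filtering rather than re-scaling; the weighted sequence would then be fed to the RNN encoder in place of the raw embeddings.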

Original language: English
Title of host publication: IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9780738133669
DOIs
Publication status: Published - 18 Jul 2021
Event: 2021 International Joint Conference on Neural Networks, IJCNN 2021 - Virtual, Shenzhen, China
Duration: 18 Jul 2021 - 22 Jul 2021

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2021-July

Conference

Conference: 2021 International Joint Conference on Neural Networks, IJCNN 2021
Country/Territory: China
City: Virtual, Shenzhen
Period: 18/07/21 - 22/07/21

Keywords

  • CARU
  • NLP
  • Self-Weighting
  • Sentiment Analysis
  • Sign Function
