Improved growing learning vector quantification for text classification

Xiu Jun Wang, Hong Shen

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

As a simple classification method, KNN has been widely applied in text classification. KNN-based text classification suffers from two problems: a large computational load, and degraded classification accuracy caused by the non-uniform distribution of training samples. To solve these problems, the authors propose a new growing LVQ method that minimizes the increment of learning errors by combining LVQ and GNG, and apply it to text classification. The method generates an effective representative sample set after a single phase of selective training on the training sample set, and hence has a strong learning ability. Experimental results show that this method not only reduces the testing time of KNN but also maintains, and in some cases improves, classification accuracy.
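The paper's exact growing LVQ algorithm is not reproduced here, but the core LVQ prototype update it builds on can be sketched as follows. This is a minimal LVQ1 sketch, not the authors' method; the GNG-style growth and learning-error criterion described in the abstract are omitted, and all names are illustrative:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=10):
    """Basic LVQ1: move the nearest prototype toward a sample when their
    labels match, and away from it otherwise. The trained prototypes act
    as the compact representative sample set that KNN then classifies
    against, reducing testing time."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            d = np.linalg.norm(P - x, axis=1)  # distances to all prototypes
            j = np.argmin(d)                   # index of nearest prototype
            if proto_labels[j] == label:
                P[j] += lr * (x - P[j])        # attract toward the sample
            else:
                P[j] -= lr * (x - P[j])        # repel from the sample
    return P

def classify(x, P, proto_labels):
    """1-NN classification against the learned prototypes."""
    return proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))]
```

Because only a small set of prototypes is searched at test time, the KNN distance computations shrink from the full training set to the representative set, which is the source of the speedup the abstract reports.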

Original language: English
Pages (from-to): 1277-1285
Number of pages: 9
Journal: Jisuanji Xuebao/Chinese Journal of Computers
Volume: 30
Issue number: 8
Publication status: Published - Aug 2007
Externally published: Yes

Keywords

  • Growing neural gas
  • Inter-class distance
  • Learning error
  • Learning probability
  • Learning vector quantification
