Abstract
As a simple classification method, KNN has been widely applied to text classification. KNN-based text classification suffers from two problems: a large computational load at test time and degraded classification accuracy caused by the non-uniform distribution of training samples. To address these problems, the authors propose a new growing LVQ method that combines learning vector quantization (LVQ) and growing neural gas (GNG) under the principle of minimizing the increment of the learning error, and apply it to text classification. The method generates an effective representative sample set after a single phase of selective training on the training set, and hence has a strong learning ability. Experimental results show that the method not only reduces the testing time of KNN but also maintains or even improves classification accuracy.
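The abstract gives no pseudocode; the sketch below only illustrates the idea the method builds on, using the standard LVQ1 update rule and Euclidean distance. It is not the authors' growing LVQ/GNG algorithm: the prototype counts, learning-rate schedule, and toy data are illustrative assumptions. It does show why replacing the full training set with a small labeled prototype set cuts KNN's test-time cost.

```python
# Illustrative sketch only: plain LVQ1 prototype training followed by
# k-NN over the learned prototypes. NOT the paper's growing LVQ/GNG
# method; all hyperparameters and data here are hypothetical.
import numpy as np

def train_lvq1(X, y, n_protos_per_class=2, lr=0.1, epochs=20, seed=0):
    """Learn a small set of labeled prototypes with the LVQ1 rule."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_protos_per_class, replace=False)
        protos.append(X[idx])
        labels.extend([c] * n_protos_per_class)
    P, L = np.vstack(protos), np.array(labels)
    for epoch in range(epochs):
        eta = lr * (1 - epoch / epochs)  # linearly decaying learning rate
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # nearest prototype
            step = eta * (X[i] - P[j])
            P[j] += step if L[j] == y[i] else -step  # attract if correct, repel if not
    return P, L

def knn_predict(P, L, X_test, k=3):
    """Classify test points by majority vote among the k nearest prototypes."""
    d = ((X_test[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(L[row]).argmax() for row in nearest])

# Toy usage: two Gaussian blobs stand in for document feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
y = np.repeat([0, 1], 100)
P, L = train_lvq1(X, y)
print(knn_predict(P, L, X[:5]))  # kNN now searches 4 prototypes, not 200 samples
```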
| Original language | English |
| --- | --- |
| Pages (from-to) | 1277-1285 |
| Number of pages | 9 |
| Journal | Jisuanji Xuebao/Chinese Journal of Computers |
| Volume | 30 |
| Issue number | 8 |
| Publication status | Published - Aug 2007 |
| Externally published | Yes |
Keywords
- Growing neural gas
- Inter-class distance
- Learning error
- Learning probability
- Learning vector quantization