Paralinguistic and spectral feature extraction for speech emotion classification using machine learning techniques

Tong Liu, Xiaochen Yuan

Research output: Article, peer-reviewed

4 citations (Scopus)

Abstract

Emotion plays a dominant role in speech. The same utterance spoken with different emotions can convey completely different meanings, and the ability to express a variety of emotions while speaking is one of the typical characteristics of humans. Accordingly, technology is trending toward advanced speech emotion classification algorithms to enhance the interaction between computers and human beings. This paper proposes a speech emotion classification approach based on paralinguistic and spectral feature extraction. The Mel-frequency cepstral coefficients (MFCC) are extracted as the spectral feature, and openSMILE is employed to extract the paralinguistic feature. The machine learning techniques multi-layer perceptron classifier and support vector machine are respectively applied to the extracted features for the classification of speech emotions. We have conducted experiments on the Berlin database to evaluate the performance of the proposed approach. Experimental results show that the proposed approach achieves satisfactory performance. Comparisons are conducted under clean and noisy conditions respectively, and the results indicate better performance of the proposed scheme.
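For illustration, the following is a minimal sketch of the kind of pipeline the abstract describes, assuming librosa for MFCC extraction and scikit-learn's SVC and MLPClassifier as the two classifiers. The mean/std pooling of MFCC frames, the classifier hyperparameters, and the input lists of wav paths and labels are hypothetical placeholders, not the authors' implementation; the openSMILE paralinguistic branch is omitted here.

```python
# Illustrative sketch (not the paper's code): MFCC spectral features fed to
# SVM and MLP classifiers for speech emotion classification.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score


def mfcc_features(wav_path, n_mfcc=13, sr=16000):
    """Mean and std of MFCCs over time as a fixed-length spectral feature."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def train_and_evaluate(wav_paths, labels):
    """wav_paths and labels are hypothetical inputs, e.g. EMO-DB files
    paired with their emotion labels."""
    X = np.stack([mfcc_features(p) for p in wav_paths])
    y = np.array(labels)

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

    # Two classifiers applied to the same extracted features.
    for name, clf in [("SVM", SVC(kernel="rbf", C=10)),
                      ("MLP", MLPClassifier(hidden_layer_sizes=(128,),
                                            max_iter=500, random_state=0))]:
        clf.fit(X_tr, y_tr)
        print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In practice the openSMILE feature vectors (e.g. an eGeMAPS-style functional set) would be concatenated with or evaluated alongside the MFCC statistics before classification.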

Original language: English
Article number: 23
Journal: EURASIP Journal on Audio, Speech, and Music Processing
Volume: 2023
Issue number: 1
DOIs
Publication status: Published - Dec 2023
