Paralinguistic and spectral feature extraction for speech emotion classification using machine learning techniques

Tong Liu, Xiaochen Yuan

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Emotion plays a dominant role in speech: the same utterance can convey entirely different meanings depending on the emotion with which it is spoken. The ability to express a range of emotions while speaking is also a characteristic human trait. Accordingly, the demand for richer human-computer interaction is driving the development of advanced speech emotion classification algorithms. This paper proposes a speech emotion classification approach based on the extraction of paralinguistic and spectral features. Mel-frequency cepstral coefficients (MFCC) are extracted as the spectral features, and openSMILE is employed to extract the paralinguistic features. Two machine learning techniques, a multi-layer perceptron classifier and support vector machines, are applied to the extracted features to classify the speech emotions. We have conducted experiments on the Berlin database to evaluate the performance of the proposed approach. Experimental results show that the proposed approach achieves satisfactory performance. Comparisons conducted under both clean and noisy conditions indicate the better performance of the proposed scheme.
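The pipeline the abstract describes (feature extraction followed by SVM and MLP classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic feature vectors stand in for real MFCC/openSMILE features, and the class labels, dimensions, and hyperparameters are assumptions chosen only for the example.

```python
# Hypothetical sketch of the classification stage: fixed-length feature
# vectors (simulated here; in the paper these would come from MFCC and
# openSMILE extraction) are fed to an SVM and an MLP classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_features = 60, 39  # e.g. 13 MFCCs + deltas + delta-deltas
emotions = ["anger", "happiness", "sadness", "neutral"]  # illustrative subset

# Simulated feature vectors: each emotion class gets a shifted mean so the
# classes are separable, mimicking discriminative acoustic features.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_features))
               for i in range(len(emotions))])
y = np.repeat(np.arange(len(emotions)), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize features; both classifiers benefit from scaled inputs.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

svm = SVC(kernel="rbf").fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_train, y_train)

print("SVM accuracy:", svm.score(X_test, y_test))
print("MLP accuracy:", mlp.score(X_test, y_test))
```

In practice the spectral features would be computed from audio (e.g. with a standard MFCC routine) and the paralinguistic features with openSMILE's feature sets, then concatenated or evaluated separately per the experimental design.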

Original language: English
Article number: 23
Journal: EURASIP Journal on Audio, Speech, and Music Processing
Volume: 2023
Issue number: 1
Publication status: Published - Dec 2023

Keywords

  • Multi-layer perceptron classifier
  • Paralinguistic features
  • Spectral features
  • Speech emotion classification
  • Support vector machine
