Emotion classification of audio signals using ensemble of support vector machines


Danisman T., Alpkoçak A.

4th IEEE Tutorial and Research Workshop on Perception and Interactive Technologies for Speech-Based Systems, Kloster Irsee, Germany, 16-18 June 2008, vol. 5078, pp. 205-216

  • Publication Type: Conference Paper / Full Text
  • Volume: 5078
  • DOI: 10.1007/978-3-540-69369-7_23
  • City: Kloster Irsee
  • Country: Germany
  • Pages: pp. 205-216
  • Akdeniz University Affiliated: No

Abstract

This study presents an approach to emotion classification of speech utterances based on an ensemble of support vector machines. We considered feature-level fusion of MFCC, total energy, and F0 as the input feature vector, and chose the bagging method for classification. In addition, we present a new emotional dataset based on a popular animated film, Finding Nemo, in which emotions are strongly emphasized to attract the attention of spectators. Speech utterances were extracted directly from the video's audio channel, including all background noise. In total, 2054 utterances from 24 speakers were annotated by a group of volunteers according to seven emotion categories; we concentrated on perceived emotion. Our approach was tested on this newly developed dataset as well as on the publicly available DES and EmoDB datasets. Experiments showed that our approach achieved 77.5% and 66.8% overall accuracy for four- and five-class classification on the EFN dataset, respectively. In addition, we achieved 67.6% accuracy on DES (five classes) and 63.5% on EmoDB (seven classes) using an ensemble of SVMs with 10-fold cross-validation.
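As a rough illustration of the pipeline described above (feature-level fusion of MFCC, total energy, and F0, followed by a bagged SVM ensemble evaluated with 10-fold cross-validation), the following Python sketch uses scikit-learn. It is not the authors' implementation: the feature extraction is assumed to have been done already, the fuse_features helper and the placeholder data are hypothetical, and the kernel and ensemble size are illustrative choices only.

```python
# Minimal sketch, assuming MFCC, total energy and F0 have already been
# extracted per utterance; not the authors' original code.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def fuse_features(mfcc, total_energy, f0):
    """Feature-level fusion: concatenate MFCCs, total energy and F0 into one vector."""
    return np.concatenate([np.asarray(mfcc), [total_energy, f0]])

# Placeholder data standing in for fused utterance-level feature vectors
# (the paper uses utterances from the EFN, DES and EmoDB datasets).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))        # e.g. 13 MFCCs + energy + F0 per utterance
y = rng.integers(0, 5, size=200)      # e.g. five emotion classes

# Bagging ensemble of SVM classifiers (ensemble size is an assumption).
clf = BaggingClassifier(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    n_estimators=10,
    random_state=0,
)

# 10-fold cross-validation, as reported in the abstract.
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.3f}")
```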