Institutional Repository
Technical University of Crete

Speech emotion recognition using affective saliency

Chorianopoulou Arodami, Koutsakis Polychronis, Potamianos Alexandros

URI: http://purl.tuc.gr/dl/dias/2B1694B7-F7A0-4314-A57A-C9FF41F19F83
Year: 2016
Type of Item: Conference Full Paper
Bibliographic Citation: A. Chorianopoulou, P. Koutsakis and A. Potamianos, "Speech emotion recognition using affective saliency," in 17th Annual Conference of the International Speech Communication Association, 2016, pp. 500-504. doi: 10.21437/Interspeech.2016-1311

Summary

We investigate an affective saliency approach for speech emotion recognition of spoken dialogue utterances that estimates the amount of emotional information over time. The proposed saliency approach uses a regression model that combines features extracted from the acoustic signal and the posteriors of a segment-level classifier to obtain frame or segment-level ratings. The affective saliency model is trained using a minimum classification error (MCE) criterion that learns the weights by optimizing an objective loss function related to the classification error rate of the emotion recognition system. Affective saliency scores are then used to weight the contribution of frame-level posteriors and/or features to the speech emotion classification decision. The algorithm is evaluated for the task of anger detection on four call-center datasets for two languages, Greek and English, with good results.
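As a rough illustration of the weighting step described in the summary, the sketch below shows how per-frame affective saliency scores could be used to weight frame-level class posteriors into an utterance-level decision. It is a minimal Python sketch under assumed inputs: the function and variable names are hypothetical, the saliency weights are assumed to be already trained (the paper learns them with an MCE criterion, which is not shown here), and this is not the authors' implementation.

    import numpy as np

    def saliency_weighted_decision(frame_features, frame_posteriors, saliency_weights):
        """Combine frame-level emotion posteriors into an utterance-level decision,
        weighting each frame by an affective-saliency score.

        frame_features   : (T, D) acoustic features per frame/segment
        frame_posteriors : (T, K) class posteriors from a segment-level classifier
        saliency_weights : (D + K,) regression weights, assumed pre-trained
                           (e.g. with an MCE-style criterion, as in the paper)
        """
        # Saliency regressor input: acoustic features concatenated with posteriors
        regressor_input = np.hstack([frame_features, frame_posteriors])  # (T, D+K)

        # Frame-level saliency scores, squashed to (0, 1) and normalized over time
        raw_scores = regressor_input @ saliency_weights                  # (T,)
        scores = 1.0 / (1.0 + np.exp(-raw_scores))
        scores /= scores.sum() + 1e-12

        # Saliency-weighted average of frame posteriors -> utterance-level posterior
        utterance_posterior = scores @ frame_posteriors                  # (K,)
        return utterance_posterior.argmax(), utterance_posterior

    # Toy usage with random data: T=50 frames, D=12 features, K=2 classes (anger / neutral)
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(50, 12))
    post = rng.dirichlet(np.ones(2), size=50)
    w = rng.normal(size=14)
    label, posterior = saliency_weighted_decision(feats, post, w)

Frames judged more affectively salient contribute more to the final posterior, which is the intuition behind the paper's weighting of frame-level information.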
