2010
Gajšek, Rok; Štruc, Vitomir; Mihelič, France: Multi-modal Emotion Recognition based on the Decoupling of Emotion and Speaker Information. In: Proceedings of Text, Speech and Dialogue (TSD), pp. 275-282, Springer-Verlag, Berlin, Heidelberg, 2010.

Tags: emotion recognition, facial expression recognition, multi-modality, speech processing, speech technologies, spontaneous emotions, video processing

Abstract: The standard features used in emotion recognition carry, besides the emotion-related information, also cues about the speaker. This is expected, since the nature of emotionally colored speech is similar to the variations in the speech signal caused by different speakers. We therefore present a gradient-descent-derived transformation for decoupling the emotion and speaker information contained in the acoustic features. The Interspeech '09 Emotion Challenge feature set is used as the baseline for the audio part. A similar procedure is employed on the video signal, where nuisance attribute projection (NAP) is used to derive the transformation matrix, which contains information about the emotional state of the speaker. Ultimately, different NAP transformation matrices are compared using canonical correlations. The audio and video sub-systems are combined at the matching-score level using different fusion techniques. The presented system is assessed on the publicly available eNTERFACE'05 database, where significant improvements in recognition performance are observed when compared to the state-of-the-art baseline.
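As a rough illustration of the nuisance attribute projection (NAP) idea mentioned in the abstract, the sketch below shows one simple variant: estimate the leading directions of speaker variability from per-speaker means and remove them with a projection of the form I - UUᵀ. This is a minimal sketch on toy data, not the paper's actual method; the function name, the way the nuisance subspace is estimated, and the synthetic data are all assumptions for illustration.

```python
import numpy as np

def nap_projection(X, speaker, k):
    """Sketch of NAP: build a projection that removes the top-k
    directions of speaker (nuisance) variability from the features.

    X       : (n_samples, d) feature matrix
    speaker : (n_samples,) nuisance labels (speaker identities)
    k       : number of nuisance directions to remove
    """
    d = X.shape[1]
    mu = X.mean(axis=0)
    # Scatter of the per-speaker means around the global mean:
    # its leading eigenvectors approximate the speaker subspace.
    S = np.zeros((d, d))
    for lab in np.unique(speaker):
        diff = X[speaker == lab].mean(axis=0) - mu
        S += np.outer(diff, diff)
    _, vecs = np.linalg.eigh(S)       # eigenvalues in ascending order
    U = vecs[:, -k:]                  # columns span the nuisance subspace
    return np.eye(d) - U @ U.T        # projection removing that subspace

# Toy usage: two "speakers" sharing the same signal, one with an
# artificial speaker-specific offset along the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
speaker = np.array([0] * 20 + [1] * 20)
X[speaker == 1] += np.array([3.0, 0, 0, 0, 0])  # speaker-specific shift
P = nap_projection(X, speaker, k=1)
Xp = X @ P  # features with the dominant speaker direction removed
```

After projection, the two speakers' feature means nearly coincide, while variation in the remaining directions (where emotion-related cues would live in the real setting) is preserved.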