URI | http://purl.tuc.gr/dl/dias/1C94DA49-2645-4D31-BDDB-260D98E52C8B | - |
Identifier | https://doi.org/10.26233/heallink.tuc.68631 | - |
Language | en | - |
Extent | 74 pages | en |
Title | Affective modeling on spoken dialogue | en |
Creator | Chorianopoulou Arodami | en |
Creator | Χωριανοπουλου Αροδαμη | el |
Contributor [Thesis Supervisor] | Koutsakis Polychronis | en |
Contributor [Thesis Supervisor] | Κουτσακης Πολυχρονης | el |
Contributor [Committee Member] | Potamianos Alexandros | en |
Contributor [Committee Member] | Petrakis Evripidis | en |
Contributor [Committee Member] | Πετρακης Ευριπιδης | el |
Publisher | Πολυτεχνείο Κρήτης | el |
Publisher | Technical University of Crete | en |
Academic Unit | Technical University of Crete::School of Electrical and Computer Engineering | en |
Academic Unit | Πολυτεχνείο Κρήτης::Σχολή Ηλεκτρολόγων Μηχανικών και Μηχανικών Υπολογιστών | el |
Content Summary | Emotions are fundamental to human-human communication, shaping people's perception, communication, and decision-making. They are expressed through speech, facial expressions, gestures, and other non-verbal cues. Speech, the primary channel of human communication, conveys both emotional and semantic cues. Affective computing, and specifically emotion recognition, is the process of decoding these communication signals. It aims to improve human-computer interaction (HCI) at the cognitive level, allowing computers to adapt to the user's needs. Speech emotion recognition thus rests on the assumption that vocal parameters reflect the affective state of a person. This assumption is supported by the fact that most affective states involve physiological reactions, which in turn modify the process by which voice is produced. There are a number of potential applications for speech emotion recognition, including anger detection for Spoken Dialogue Systems (SDS) and emotional aids for people with autism.
Attention is a concept studied in cognitive psychology that refers to how a person actively processes information. Salience is the degree to which something in the environment can capture and retain one's attention. While research on affective speech saliency is not extensive, salient information from audio and video has been investigated.
It is argued that modeling the affective variation of speech can be approached by integrating acoustic parameters from various prosodic timescales, summarizing information from more localized (e.g., syllable-level) to more global prosodic phenomena (e.g., utterance-level).
In this thesis, speech prosody and related acoustic features, e.g., spectral and voice quality features, are investigated for the task of emotion recognition. Features derived from the Amplitude and Frequency Modulation (AM-FM) model are also examined, and the contribution of different information levels to emotion recognition is addressed. Additionally, we investigate affectively salient information over time in spoken dialogue utterances, using prosodic variations from different timescales of the speech signal and weighting individual speech segments. The proposed models are evaluated on datasets of spontaneous speech.
In humans, social and mental states are highly correlated. As a result, affective speech analysis has been taken up in several areas of the computational community. For instance, people with Autism Spectrum Disorder (ASD) suffer from symptoms of anxiety and depression that significantly compromise their quality of life. Additionally, language in high-functioning autism is characterized by pragmatic and semantic deficits, and people with autism have a reduced tendency to integrate information. Motivated by these findings, we investigate the degree of engagement of children with ASD in interactions with their parents. | en |
Type of Item | Μεταπτυχιακή Διατριβή | el |
Type of Item | Master Thesis | en |
License | http://creativecommons.org/licenses/by/4.0/ | en |
Date of Item | 2017-07-10 | - |
Date of Publication | 2017 | - |
Subject | Speech emotion recognition | en |
Subject | Affective modeling | en |
Bibliographic Citation | Arodami Chorianopoulou, "Affective modeling on spoken dialogue", Master Thesis, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2017 | en |