Institutional Repository
Technical University of Crete

Affective modeling on spoken dialogue

Chorianopoulou Arodami



URI: http://purl.tuc.gr/dl/dias/1C94DA49-2645-4D31-BDDB-260D98E52C8B
Identifier: https://doi.org/10.26233/heallink.tuc.68631
Language: en
Extent: 74 pages
Title: Affective modeling on spoken dialogue
Creator: Chorianopoulou Arodami
Contributor [Thesis Supervisor]: Koutsakis Polychronis
Contributor [Committee Member]: Potamianos Alexandros
Contributor [Committee Member]: Petrakis Evripidis
Publisher: Technical University of Crete
Academic Unit: Technical University of Crete::School of Electrical and Computer Engineering
Content Summary: Emotions are fundamental to human-human communication, influencing people's perception, communication, and decision-making. They are expressed through speech, facial expressions, gestures, and other non-verbal cues. Speech is the main channel of human communication, conveying both emotional and semantic cues. Affective computing, and specifically emotion recognition, is the process of decoding such communication signals. It aims to improve human-computer interaction (HCI) at a cognitive level, allowing computers to adapt to users' needs. Speech emotion recognition rests on the assumption that vocal parameters reflect a person's affective state. This assumption is supported by the fact that most affective states involve physiological reactions, which in turn modify the process by which voice is produced. There are a number of potential applications for speech emotion recognition, including anger detection for Spoken Dialogue Systems (SDS) and emotional aids for people with autism. Attention is a concept studied in cognitive psychology that refers to how a person actively processes information; salience is the degree to which something in the environment can catch and retain one's attention. While research on affective speech saliency is not extensive, salient information from audio and video has been investigated. It is argued that modeling the affective variation of speech can be approached by integrating acoustic parameters from various prosodic timescales, summarizing information from more localized phenomena (e.g., syllable-level) to more global prosodic phenomena (e.g., utterance-level). In this thesis, speech prosody and related acoustic features, e.g., spectral and voice quality features, are investigated for the task of emotion recognition. Features derived from the Amplitude and Frequency Modulation (AM-FM) model are also examined, and the contribution of different information levels to emotion recognition is addressed.
Additionally, we investigate affectively salient information over time in spoken-dialogue utterances, using prosodic variations from different timescales of the speech signal and weighting speech segments accordingly. The proposed models are evaluated on datasets of spontaneous speech. For humans, social and mental states are highly correlated; as a result, affective speech has been introduced into several areas of the computational community. For instance, people with Autism Spectrum Disorder (ASD) suffer from symptoms of anxiety and depression that significantly compromise their quality of life. Additionally, language in high-functioning autism is characterized by pragmatic and semantic deficits, and people with autism have a reduced tendency to integrate information. Motivated by these findings, we investigate the degree of engagement of children with ASD in interactions with their parents.
Type of Item: Master Thesis
License: http://creativecommons.org/licenses/by/4.0/
Date of Item: 2017-07-10
Date of Publication: 2017
Subject: Speech emotion recognition
Subject: Affective modeling
Bibliographic Citation: Arodami Chorianopoulou, "Affective modeling on spoken dialogue", Master Thesis, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2017
