
dc.contributor.author: De Velasco Vázquez, Mikel
dc.contributor.author: Justo Blanco, Raquel
dc.contributor.author: López Zorrilla, Asier
dc.contributor.author: Torres Barañano, María Inés
dc.date.accessioned: 2020-05-27T17:57:43Z
dc.date.available: 2020-05-27T17:57:43Z
dc.date.issued: 2019
dc.identifier.citation: 10th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Naples, Italy, 2019: 289-294 (2019)
dc.identifier.isbn: 978-1-7281-4793-2
dc.identifier.issn: 2380-7350
dc.identifier.uri: http://hdl.handle.net/10810/43562
dc.description: Accepted paper
dc.description.abstract: Decoding emotional states from multimodal signals is an increasingly active domain within the framework of affective computing, which aims at a better understanding of human-human communication as well as at improving human-computer interaction. However, the automatic recognition of spontaneous emotions from speech is a very complex task, owing to the uncertainty about the speaker's state and to the difficulty of identifying a variety of emotions in real scenarios. In this work we explore the extent to which emotional states can be decoded from speech signals extracted from TV political debates. The labelling procedure was supported by perception experiments in which only a small set of emotions was identified; scaled judgements of valence, arousal and dominance were also provided. Within this framework, the paper presents meaningful comparisons between the dimensional and the categorical models of emotion, which is a new contribution when dealing with spontaneous emotions. To this end, Support Vector Machines (SVM) and Feedforward Neural Networks (FNN) were used to develop classifiers and predictors. The experimental evaluation over a Spanish corpus shows that both models of emotion can be identified in speech segments by the proposed artificial systems.
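For illustration only: this record does not include the paper's code or feature set, so the following is a minimal sketch, in Python with scikit-learn, of the kind of pipeline the abstract describes. The feature dimensionality, the synthetic data, and the model settings are placeholders, and the authors' actual implementation may differ.

# Hypothetical pipeline: segment-level acoustic features, an SVM for the
# categorical emotion labels, and a small feedforward network (MLPRegressor)
# for the valence/arousal/dominance scales. All data below are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 88))            # placeholder segment-level features (dimensionality assumed)
y_cat = rng.integers(0, 3, size=200)      # placeholder categorical emotion labels
y_dim = rng.uniform(1, 5, size=(200, 3))  # placeholder valence/arousal/dominance ratings

# Categorical model: SVM classifier, as named in the abstract.
svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm_clf.fit(X, y_cat)

# Dimensional model: feedforward network regressing the three scales.
fnn_reg = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500))
fnn_reg.fit(X, y_dim)

The same segment-level features feed both models: the SVM handles the categorical labels while the feedforward network predicts the three dimensional ratings, mirroring the categorical/dimensional comparison the abstract mentions.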
dc.description.sponsorship: This work has been partially funded by the Spanish Government under grant TIN2017-85854-C4-3-R (AEI/FEDER, UE) and conducted in the project EMPATHIC (Grant no. 769872) funded by the European Union's H2020 research and innovation program.
dc.language.iso: eng
dc.publisher: IEEE
dc.relation: info:eu-repo/grantAgreement/EC/H2020/769872
dc.relation: info:eu-repo/grantAgreement/MINECO/TIN2017-85854-C4-3-R (AEI/FEDER,UE)
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: speech processing
dc.subject: emotion detection from speech
dc.subject: human-AI
dc.subject: affective computing
dc.title: Can Spontaneous Emotions be Detected from Speech on TV Political Debates?
dc.type: info:eu-repo/semantics/conferenceObject
dc.rights.holder: © 2019 IEEE. "Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works."
dc.relation.publisherversion: https://ieeexplore.ieee.org/document/9089948
dc.identifier.doi: 10.1109/CogInfoCom47531.2019.9089948
dc.contributor.funder: European Commission
dc.departamentoes: Electricidad y electrónica
dc.departamentoeu: Elektrizitatea eta elektronika

