dc.contributor.author | De Velasco Vázquez, Mikel | |
dc.contributor.author | Justo Blanco, Raquel | |
dc.contributor.author | Ben Letaifa Zouari, Leila | |
dc.contributor.author | Torres Barañano, María Inés | |
dc.date.accessioned | 2021-04-29T17:26:53Z | |
dc.date.available | 2021-04-29T17:26:53Z | |
dc.date.issued | 2021-03-24 | |
dc.identifier.citation | IberSPEECH 2021 : 51-55 (2021) | es_ES |
dc.identifier.uri | http://hdl.handle.net/10810/51251 | |
dc.description | Paper presented at IberSPEECH 2021, 24-25 March 2021, Valladolid, Spain | es_ES |
dc.description.abstract | This work aims to contrast the similarities and differences between the emotions identified in two very different scenarios: human-to-human interaction in Spanish TV debates and human-machine interaction with a virtual agent in Spanish. To this end, we developed a crowd annotation procedure to label the speech signal in terms of both emotional categories and the Valence-Arousal-Dominance model. The analysis of these data showed interesting findings that allowed us to profile both the speakers and the tasks. Then, Convolutional Neural Networks were used for the automatic classification of the emotional samples in both tasks. Experimental results revealed different human behavior in the two tasks and outlined distinct speaker profiles. | es_ES |
dc.description.sponsorship | The research presented in this paper was conducted as part of the AMIC and EMPATHIC projects, which have received funding from the Spanish Ministry of Science under grant TIN2017-85854-C4-3-R and from the European Union’s Horizon 2020 research and innovation program under grant agreement No 769872. The first author has also received a PhD scholarship from the University of the Basque Country UPV/EHU, PIF17/310. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | ISCA | es_ES |
dc.relation | info:eu-repo/grantAgreement/EC/H2020/769872 | es_ES |
dc.relation | info:eu-repo/grantAgreement/MINECO/TIN2017-85854-C4-3-R | es_ES |
dc.rights | info:eu-repo/semantics/openAccess | es_ES |
dc.subject | emotion recognition from speech | es_ES |
dc.subject | perception | es_ES |
dc.subject | communication | es_ES |
dc.subject | human-machine interaction | es_ES |
dc.subject | crowd annotation | es_ES |
dc.subject | speech processing | es_ES |
dc.title | Contrasting the Emotions identified in Spanish TV debates and in Human-Machine Interactions | es_ES |
dc.type | info:eu-repo/semantics/conferenceObject | es_ES |
dc.rights.holder | (c) 2021 ISCA | es_ES |
dc.relation.publisherversion | https://www.isca-speech.org/archive/IberSPEECH_2021/abstracts/11.html | es_ES |
dc.identifier.doi | 10.21437/IberSPEECH.2021-11 | |
dc.contributor.funder | European Commission | |
dc.departamentoes | Electricidad y electrónica | es_ES |
dc.departamentoeu | Elektrizitatea eta elektronika | es_ES |