Show simple item record

dc.contributor.author: Álvarez, Aitor
dc.contributor.author: Sierra Araujo, Basilio ORCID
dc.contributor.author: Arruti Illarramendi, Andoni ORCID
dc.contributor.author: López Gil, Juan Miguel
dc.contributor.author: Garay Vitoria, Néstor ORCID
dc.date.accessioned: 2016-05-18T11:20:42Z
dc.date.available: 2016-05-18T11:20:42Z
dc.date.issued: 2016-01
dc.identifier.citation: Sensors 16(1) : (2016) // Article ID 21
dc.identifier.issn: 1424-8220
dc.identifier.uri: http://hdl.handle.net/10810/18274
dc.description.abstract: In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach is an improvement of a bi-level multi-classifier system known as stacked generalization, obtained by integrating an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and to the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (the extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers, and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one.
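The CSS-stacking scheme described in the abstract (a first layer of base classifiers, a search for the best-performing subset of them, and a meta-level combiner trained on their outputs) can be sketched with a toy example. This is an illustrative simplification, not the paper's method: the base classifiers here are hypothetical threshold rules, the subset search is exhaustive rather than the EDA used in the paper, and the meta-level decision is a plain majority vote.

```python
# Toy sketch of classifier subset selection for stacking (CSS stacking).
# All classifiers, data and the search strategy are illustrative assumptions.
from itertools import combinations

# Tiny synthetic dataset: (feature, label) pairs
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

# Three hypothetical base classifiers (first layer): threshold rules of
# varying quality, standing in for the standard base classifiers
base = [
    lambda x: int(x > 0.5),   # well-placed threshold
    lambda x: int(x > 0.3),   # slightly misplaced threshold
    lambda x: int(x > 0.95),  # poor classifier, almost always predicts 0
]

def majority_vote(preds):
    """Meta-level combiner: simple majority over base predictions."""
    return int(sum(preds) * 2 >= len(preds))

def subset_accuracy(subset):
    """Accuracy of the two-level system restricted to a classifier subset."""
    correct = 0
    for x, y in data:
        preds = [base[i](x) for i in subset]
        correct += majority_vote(preds) == y
    return correct / len(data)

# Exhaustive search over non-empty subsets; in the paper this search is
# performed by an estimation of distribution algorithm (EDA) instead.
best_subset = max(
    (s for r in range(1, len(base) + 1)
       for s in combinations(range(len(base)), r)),
    key=subset_accuracy,
)
print(best_subset, subset_accuracy(best_subset))
```

The point of the subset-selection step is visible even in this toy case: including the poor third classifier can only hurt the vote, so the search keeps a subset that excludes it.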
dc.description.sponsorship: This research work was partially funded by the Spanish Ministry of Economy and Competitiveness (Project TIN2014-52665-C2-1-R) and by the Department of Education, Universities and Research of the Basque Government (Grants IT395-10 and IT313-10). The Egokituz Laboratory of HCI for Special Needs, the Galan research group and Robotika eta Sistema Autonomoen Ikerketa Taldea (RSAIT) are part of the Basque Advanced Informatics Laboratory (BAILab) unit for research and teaching supported by the University of the Basque Country (UFI11/45). The authors would like to thank Karmele Lopez de Ipina and Innovae Vision S.L. for giving permission to use the RekEmozio database for this research.
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: affective computing
dc.subject: machine learning
dc.subject: speech emotion recognition
dc.subject: bayesian networks
dc.subject: features
dc.subject: models
dc.subject: databases
dc.title: Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech
dc.type: info:eu-repo/semantics/article
dc.rights.holder: © 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
dc.relation.publisherversion: http://www.mdpi.com/1424-8220/16/1/21
dc.identifier.doi: 10.3390/s16010021
dc.departamentoes: Arquitectura y Tecnología de Computadores
dc.departamentoeu: Konputagailuen Arkitektura eta Teknologia
dc.subject.categoria: BIOCHEMISTRY AND MOLECULAR BIOLOGY
dc.subject.categoria: CHEMISTRY, ANALYTICAL
dc.subject.categoria: ELECTRICAL AND ELECTRONIC ENGINEERING
dc.subject.categoria: PHYSICS, ATOMIC, MOLECULAR AND CHEMICAL

