dc.contributor.author	Apfelbaum, Keith S.
dc.contributor.author	Kutlu, Ethan
dc.contributor.author	McMurray, Bob
dc.contributor.author	Kapnoula, Efthymia C.
dc.date.accessioned	2023-02-22T14:58:20Z
dc.date.available	2023-02-22T14:58:20Z
dc.date.issued	2022
dc.identifier.citation	Keith S. Apfelbaum, Ethan Kutlu, Bob McMurray, and Efthymia C. Kapnoula, "Don't force it! Gradient speech categorization calls for continuous categorization tasks", The Journal of the Acoustical Society of America 152, 3728-3745 (2022) https://doi.org/10.1121/10.0015201	es_ES
dc.identifier.citation	The Journal of the Acoustical Society of America
dc.identifier.issn	0001-4966
dc.identifier.uri	http://hdl.handle.net/10810/60034
dc.description	Published Online: 20 December 2022	es_ES
dc.description.abstract	Research on speech categorization and phoneme recognition has relied heavily on tasks in which participants listen to stimuli from a speech continuum and are asked to either classify each stimulus (identification) or discriminate between them (discrimination). Such tasks rest on assumptions about how perception maps onto discrete responses that have not been thoroughly investigated. Here, we identify critical challenges in the link between these tasks and theories of speech categorization. In particular, we show that patterns that have traditionally been linked to categorical perception could arise despite continuous underlying perception and that patterns that run counter to categorical perception could arise despite underlying categorical perception. We describe an alternative measure of speech perception using a visual analog scale that better differentiates between processes at play in speech categorization, and we review some recent findings that show how this task can be used to better inform our theories.	es_ES
dc.description.sponsorship	This project was supported by National Institutes of Health (NIH) Grant No. DC008089 awarded to B.M. This work was supported by the Basque Government through the Basque Excellence Research Center (BERC) 2018–2021 and BERC 2022–2025 programs, by the Spanish State Research Agency through BCBL Severo Ochoa Excellence Accreditation Nos. SEV-2015-0490 and CEX2020-001010-S, and by the Spanish Ministry of Science and Innovation through Grant No. PID2020-113348GB-I00, awarded to E.C.K. This project has received funding from the European Union's Horizon 2020 research and innovation program, under Marie Skłodowska-Curie Grant Agreement No. 793919, awarded to E.C.K.	es_ES
dc.language.iso	eng	es_ES
dc.publisher	ASA	es_ES
dc.relation	info:eu-repo/grantAgreement/GV/BERC2018-2021	es_ES
dc.relation	info:eu-repo/grantAgreement/GV/BERC2022-2025	es_ES
dc.relation	info:eu-repo/grantAgreement/MINECO/SEV-2015-0490	es_ES
dc.relation	info:eu-repo/grantAgreement/MINECO/CEX2020-001010-S	es_ES
dc.relation	info:eu-repo/grantAgreement/MINECO/PID2020-113348GB-I00	es_ES
dc.relation	info:eu-repo/grantAgreement/EC/H2020/MSCA-793919	es_ES
dc.rights	info:eu-repo/semantics/openAccess	es_ES
dc.title	Don't force it! Gradient speech categorization calls for continuous categorization tasks	es_ES
dc.type	info:eu-repo/semantics/article	es_ES
dc.rights.holder	©2022 Acoustical Society of America	es_ES
dc.relation.publisherversion	https://asa.scitation.org/	es_ES
dc.identifier.doi	10.1121/10.0015201

