dc.contributor.author | Apfelbaum, Keith S. | |
dc.contributor.author | Kutlu, Ethan | |
dc.contributor.author | McMurray, Bob | |
dc.contributor.author | Kapnoula, Efthymia C. | |
dc.date.accessioned | 2023-02-22T14:58:20Z | |
dc.date.available | 2023-02-22T14:58:20Z | |
dc.date.issued | 2022 | |
dc.identifier.citation | Keith S. Apfelbaum, Ethan Kutlu, Bob McMurray, and Efthymia C. Kapnoula , "Don't force it! Gradient speech categorization calls for continuous categorization tasks", The Journal of the Acoustical Society of America 152, 3728-3745 (2022) https://doi.org/10.1121/10.0015201 | es_ES |
dc.identifier.citation | The Journal of the Acoustical Society of America | |
dc.identifier.issn | 0001-4966 | |
dc.identifier.uri | http://hdl.handle.net/10810/60034 | |
dc.description | Published Online: 20 December 2022 | es_ES |
dc.description.abstract | Research on speech categorization and phoneme recognition has relied heavily on tasks in which participants listen to stimuli from a speech continuum and are asked to either classify each stimulus (identification) or discriminate between them (discrimination). Such tasks rest on assumptions about how perception maps onto discrete responses that have not been thoroughly investigated. Here, we identify critical challenges in the link between these tasks and theories of speech categorization. In particular, we show that patterns that have traditionally been linked to categorical perception could arise despite continuous underlying perception and that patterns that run counter to categorical perception could arise despite underlying categorical perception. We describe an alternative measure of speech perception using a visual analog scale that better differentiates between processes at play in speech categorization, and we review some recent findings that show how this task can be used to better inform our theories. | es_ES |
dc.description.sponsorship | This project was supported by National Institutes of Health (NIH) Grant No. DC008089 awarded to B.M. This work was supported by the Basque Government through the Basque Excellence Research Center (BERC) 2018–2021 and BERC 2022–2025 programs, by the Spanish State Research Agency through BCBL Severo Ochoa Excellence Accreditation Nos. SEV-2015-0490 and CEX2020-001010-S, and by the Spanish Ministry of Science and Innovation through Grant No. PID2020-113348GB-I00, awarded to E.C.K. This project has received funding from the European Union's Horizon 2020 research and innovation program, under Marie Skłodowska-Curie Grant Agreement No. 793919, awarded to E.C.K. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | ASA | es_ES |
dc.relation | info:eu-repo/grantAgreement/GV/BERC2018-2021 | es_ES |
dc.relation | info:eu-repo/grantAgreement/GV/BERC2022-2025 | es_ES |
dc.relation | info:eu-repo/grantAgreement/MINECO/SEV-2015-0490 | es_ES |
dc.relation | info:eu-repo/grantAgreement/MINECO/CEX2020-001010-S | es_ES |
dc.relation | info:eu-repo/grantAgreement/MINECO/PID2020-113348GB-I00 | es_ES |
dc.relation | info:eu-repo/grantAgreement/EC/H2020/MSCA-793919 | es_ES |
dc.rights | info:eu-repo/semantics/openAccess | es_ES |
dc.title | Don’t force it! Gradient speech categorization calls for continuous categorization tasks | es_ES |
dc.type | info:eu-repo/semantics/article | es_ES |
dc.rights.holder | ©2022 Acoustical Society of America | es_ES |
dc.relation.publisherversion | https://asa.scitation.org/ | es_ES |
dc.identifier.doi | 10.1121/10.0015201 | |