Show simple item record

dc.contributor.author: Clark, Catherine
dc.contributor.author: Guediche, Sara
dc.contributor.author: Lallier, Marie
dc.date.accessioned: 2021-12-15T10:40:27Z
dc.date.available: 2021-12-15T10:40:27Z
dc.date.issued: 2021
dc.identifier.citation: Clark, C., Guediche, S. & Lallier, M. Compensatory cross-modal effects of sentence context on visual word recognition in adults. Read Writ 34, 2011–2029 (2021). https://doi.org/10.1007/s11145-021-10132-x
dc.identifier.issn: 0922-4777
dc.identifier.uri: http://hdl.handle.net/10810/54494
dc.description: Published online: 11 February 2021
dc.description.abstract: Reading involves mapping combinations of a learned visual code (letters) onto meaning. Previous studies have shown that when visual word recognition is challenged by visual degradation, one way to mitigate these negative effects is to provide "top-down" contextual support through a written congruent sentence context. Crowding is a naturally occurring visual phenomenon that impairs object recognition and also affects the recognition of written stimuli during reading. Thus, access to a supporting semantic context via a written text is vulnerable to the detrimental impact of crowding on letters and words. Here, we suggest that an auditory sentence context may provide an alternative source of semantic information that is not influenced by crowding, thus providing "top-down" support cross-modally. The goal of the current study was to investigate whether adult readers can cross-modally compensate for crowding in visual word recognition using an auditory sentence context. The results show a significant cross-modal interaction between the congruency of the auditory sentence context and visual crowding, suggesting that interactions can occur across multiple levels of processing and across different modalities to support reading processes. These findings highlight the need for reading models to specify in greater detail how top-down, cross-modal and interactive mechanisms may allow readers to compensate for deficiencies at early stages of visual processing.
dc.description.sponsorship: This research is supported by the Basque Government through the BERC 2018-2021 program; the Spanish State Research Agency through the BCBL Severo Ochoa excellence accreditation (SEV-2015-0490); the "Programa Estatal de Promoción del Talento y su Empleabilidad en I+D+i" fellowship, reference number PRE2018-083945, to C.C.; funding from the European Union's Horizon 2020 Marie Sklodowska-Curie grant agreement No-79954 to S.G.; and grants from the Spanish Ministry of Science and Innovation, Ramon y Cajal RYC-2015-1735 and Plan Nacional RTI2018-096242-B-I0, to M.L.
dc.language.iso: eng
dc.publisher: Reading and Writing
dc.relation: info:eu-repo/grantAgreement/Basque Government/BERC2018-2021
dc.relation: info:eu-repo/grantAgreement/MINECO/SEV-2015-0490
dc.relation: info:eu-repo/grantAgreement/EC/H2020/MC/79954
dc.relation: info:eu-repo/grantAgreement/MINECO/RYC-2015-1735
dc.relation: info:eu-repo/grantAgreement/MINECO/RTI2018-096242-B-I0
dc.relation: info:eu-repo/grantAgreement/MINECO/PRE2018-083945
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Auditory sentence context
dc.subject: Crowding
dc.subject: Lexical decision
dc.subject: Orthographic processing
dc.subject: Word recognition
dc.title: Compensatory cross-modal effects of sentence context on visual word recognition in adults
dc.type: info:eu-repo/semantics/article
dc.rights.holder: © The Author(s), under exclusive licence to Springer Nature B.V. part of Springer Nature 2021
dc.relation.publisherversion: https://www.springer.com/journal/11145
dc.identifier.doi: 10.1007/s11145-021-10132-x

