Compensatory cross‑modal effects of sentence context on visual word recognition in adults
Date
2021
Authors
Clark, Catherine
Guediche, Sara
Lallier, Marie
Clark, C., Guediche, S. & Lallier, M. Compensatory cross-modal effects of sentence context on visual word recognition in adults. Read Writ 34, 2011–2029 (2021). https://doi.org/10.1007/s11145-021-10132-x
Abstract
Reading involves mapping combinations of a learned visual code (letters) onto
meaning. Previous studies have shown that when visual word recognition is challenged
by visual degradation, one way to mitigate these negative effects is to provide
"top–down" contextual support through a written congruent sentence context.
Crowding is a naturally occurring visual phenomenon that impairs object recognition
and also affects the recognition of written stimuli during reading. Thus, access
to a supporting semantic context via a written text is vulnerable to the detrimental
impact of crowding on letters and words. Here, we suggest that an auditory sentence
context may provide an alternative source of semantic information that is not influenced
by crowding, thus providing "top-down" support cross-modally. The goal of
the current study was to investigate whether adult readers can cross-modally compensate
for crowding in visual word recognition using an auditory sentence context.
The results show a significant cross-modal interaction between the congruency of
the auditory sentence context and visual crowding, suggesting that interactions can
occur across multiple levels of processing and across different modalities to support
reading processes. These findings highlight the need for reading models to specify
in greater detail how top-down, cross-modal, and interactive mechanisms may allow
readers to compensate for deficiencies at early stages of visual processing.