The Presence of Background Noise Extends the Competitor Space in Native and Non-Native Spoken-Word Recognition: Insights from Computational Modeling
- PMID: 35188686
- PMCID: PMC9286693
- DOI: 10.1111/cogs.13110
Abstract
Oral communication often takes place in noisy environments, which challenge spoken-word recognition. Previous research has suggested that the presence of background noise extends the set of candidate words competing with the target word for recognition and that this extension affects the time course and accuracy of spoken-word recognition. In this study, we further investigated the temporal dynamics of competition processes in the presence of background noise and how these vary in listeners with different language proficiency (i.e., native and non-native), using computational modeling. We developed ListenIN (Listen-In-Noise), a neural-network model based on an autoencoder architecture, which learns to map phonological forms onto meanings in two languages and simulates native and non-native spoken-word comprehension. We also examined the model's activation states during online spoken-word recognition. These analyses demonstrated that the presence of background noise increases the number of competitor words engaged in phonological competition, and that this happens in similar ways both intra- and interlinguistically and in native and non-native listening. Taken together, our results support accounts positing a "many-additional-competitors scenario" for the effects of noise on spoken-word recognition.
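To make the modeling approach concrete, the sketch below illustrates the general idea of an autoencoder-style network that maps a phonological input vector onto semantic (meaning) representations for two lexicons, with additive noise on the input standing in for degraded listening conditions. This is a minimal illustration under assumed layer sizes, names, and noise levels; it is not the authors' ListenIN implementation.

# Minimal illustrative sketch (not the authors' ListenIN code): a feedforward
# network mapping a phonological input vector onto semantic vectors for two
# lexicons, with Gaussian noise on the input to mimic background noise.
# All dimensionalities, layer choices, and the noise level are assumptions.
import torch
import torch.nn as nn

PHON_DIM, HIDDEN_DIM, SEM_DIM = 60, 128, 200  # assumed sizes

class ListenInSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(PHON_DIM, HIDDEN_DIM), nn.Tanh())
        # Separate output layers stand in for the two languages (L1 and L2).
        self.decoder_l1 = nn.Linear(HIDDEN_DIM, SEM_DIM)
        self.decoder_l2 = nn.Linear(HIDDEN_DIM, SEM_DIM)

    def forward(self, phon, noise_sd=0.0):
        # Additive Gaussian noise on the phonological input simulates
        # listening in background noise; noise_sd=0 is the clean condition.
        if noise_sd > 0:
            phon = phon + noise_sd * torch.randn_like(phon)
        hidden = self.encoder(phon)
        return self.decoder_l1(hidden), self.decoder_l2(hidden)

model = ListenInSketch()
clean_l1, clean_l2 = model(torch.rand(1, PHON_DIM), noise_sd=0.0)
noisy_l1, noisy_l2 = model(torch.rand(1, PHON_DIM), noise_sd=0.5)

In a setup like this, comparing the hidden and output activations in the clean versus noisy conditions is one way to probe how many lexical candidates become active, which is the kind of competitor-space analysis the abstract describes.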
Keywords: Competitor space; Computational modeling; Deep neural networks; Neurocomputational model; Noise; Non-native listening; Phonological competition; Spoken-word recognition.
© 2022 The Authors. Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (CSS).