The Learning Signal in Perceptual Tuning of Speech: Bottom-Up Versus Top-Down Information
- PMID: 33682208
- DOI: 10.1111/cogs.12947
Abstract
Cognitive systems face a tension between stability and plasticity. The maintenance of long-term representations that reflect the global regularities of the environment is often at odds with pressure to flexibly adjust to short-term input regularities that may deviate from the norm. This tension is abundantly clear in speech communication when talkers with accents or dialects produce input that deviates from a listener's language community norms. Prior research demonstrates that when bottom-up acoustic information or top-down word knowledge is available to disambiguate speech input, there is short-term adaptive plasticity such that subsequent speech perception is shifted even in the absence of the disambiguating information. Although such effects are well-documented, it is not yet known whether bottom-up and top-down resolution of ambiguity may operate through common processes, or how these information sources may interact in guiding the adaptive plasticity of speech perception. The present study investigates the joint contributions of bottom-up information from the acoustic signal and top-down information from lexical knowledge in the adaptive plasticity of speech categorization according to short-term input regularities. The results implicate speech category activation, whether from top-down or bottom-up sources, in driving rapid adjustment of listeners' reliance on acoustic dimensions in speech categorization. Broadly, this pattern of perception is consistent with dynamic mapping of input to category representations that is flexibly tuned according to interactive processing accommodating both lexical knowledge and idiosyncrasies of the acoustic input.
Keywords: Adaptive plasticity; Dimension-based statistical learning; Lexically guided phonetic tuning; Speech perception.
© 2021 Cognitive Science Society, Inc.
Similar articles
- Dimension-based statistical learning of vowels. J Exp Psychol Hum Percept Perform. 2015 Dec;41(6):1783-98. doi: 10.1037/xhp0000092. PMID: 26280268. Free PMC article.
- Dimension-Based Statistical Learning Affects Both Speech Perception and Production. Cogn Sci. 2017 Apr;41 Suppl 4:885-912. doi: 10.1111/cogs.12413. PMID: 27666146. Free PMC article.
- Distributional learning for speech reflects cumulative exposure to a talker's phonetic distributions. Psychon Bull Rev. 2019 Jun;26(3):985-992. doi: 10.3758/s13423-018-1551-5. PMID: 30604404. Free PMC article.
- Hearing speech sounds: top-down influences on the interface between audition and speech perception. Hear Res. 2007 Jul;229(1-2):132-47. doi: 10.1016/j.heares.2007.01.014. PMID: 17317056. Review.
- The time-course of speech perception revealed by temporally-sensitive neural measures. Wiley Interdiscip Rev Cogn Sci. 2021 Mar;12(2):e1541. doi: 10.1002/wcs.1541. PMID: 32767836. Review.
Cited by
- Phonetic category activation predicts the direction and magnitude of perceptual adaptation to accented speech. J Exp Psychol Hum Percept Perform. 2022 Sep;48(9):913-925. doi: 10.1037/xhp0001037. PMID: 35849375. Free PMC article.
- Statistical learning across passive listening adjusts perceptual weights of speech input dimensions. Cognition. 2023 Sep;238:105473. doi: 10.1016/j.cognition.2023.105473. PMID: 37210878. Free PMC article.
- Non-sensory Influences on Auditory Learning and Plasticity. J Assoc Res Otolaryngol. 2022 Apr;23(2):151-166. doi: 10.1007/s10162-022-00837-3. PMID: 35235100. Free PMC article. Review.
- Short-term perceptual reweighting in suprasegmental categorization. Psychon Bull Rev. 2023 Feb;30(1):373-382. doi: 10.3758/s13423-022-02146-5. PMID: 35915382. Free PMC article.
- Individual differences in the use of top-down versus bottom-up cues to resolve phonetic ambiguity. Atten Percept Psychophys. 2024 Jul;86(5):1724-1734. doi: 10.3758/s13414-024-02889-4. PMID: 38811489. Free PMC article.