Looking is not enough: Multimodal attention supports the real-time learning of new words

Sara E Schroer et al. Dev Sci. 2023 Mar;26(2):e13290. doi: 10.1111/desc.13290. Epub 2022 Jun 8.

Abstract

Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real-time behaviors required for learning new words during free-flowing toy play, we measured infants' visual attention and manual actions on to-be-learned toys. Parents and 12-to-26-month-old infants wore wireless head-mounted eye trackers, allowing them to move freely around a home-like lab environment. After the play session, infants were tested on their knowledge of object-label mappings. We found that how often parents named objects during play did not predict learning; instead, it was infants' attention during and around a labeling utterance that predicted whether an object-label mapping was learned. More specifically, we found that infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention (when infants' hands and eyes were attending to the same object) predicted word learning. Our results implicate a causal pathway through which infants' bodily actions play a critical role in early word learning.

Keywords: attention; eye tracking; multimodal behaviors; parent-infant interaction; word learning.

