The pictures who shall not be named: Empirical support for benefits of preview in the Visual World Paradigm

Keith S Apfelbaum et al. J Mem Lang. 2021 Dec;121:104279. doi: 10.1016/j.jml.2021.104279. Epub 2021 Jul 10.

Abstract

A common critique of the Visual World Paradigm (VWP) in psycholinguistic studies is that what is designed as a measure of language processes is meaningfully altered by the visual context of the task. This is crucial, particularly in studies of spoken word recognition, where the displayed images are usually seen as just a part of the measure and are not of fundamental interest. Many variants of the VWP allow participants to sample the visual scene before a trial begins. However, this could bias their interpretations of the later speech or even lead to abnormal processing strategies (e.g., comparing the input to only preactivated working memory representations). Prior work has focused only on whether preview duration changes fixation patterns. However, preview could affect a number of processes, such as visual search, that would not challenge the interpretation of the VWP. The present study uses a series of targeted manipulations of the preview period to ask whether preview alters looking behavior during a trial, and why. Results show that evidence of incremental processing and phonological competition seen in the VWP is not dependent on preview and is not enhanced by manipulations that directly encourage phonological prenaming. Moreover, some forms of preview can eliminate nuisance variance deriving from object recognition and visual search demands in order to produce a more sensitive measure of linguistic processing. These results deepen our understanding of how the visual scene interacts with language processing to drive fixation patterns in the VWP, and reinforce the value of the VWP as a tool for measuring real-time language processing. Stimuli, data, and analysis scripts are available at https://osf.io/b7q65/.


Conflict of interest statement

Declaration of interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Figure 1:
Schematics of the preview conditions.
Figure 2:
Proportion of looks to the displayed objects over time in each preview condition. Time is indexed from the onset of the auditory stimulus presenting the target. Note that the Unrelated lines show the mean proportion of looks to the two unrelated objects. Error ribbons indicate the standard error of the mean at each time sample. A) No preview. B) Text preview. C) Visual-new locations. D) Visual-same locations. E) Self-paced.
Figure 3:
Timecourse of fixations to target objects by condition, and curvefit parameters for these curves. The overall timecourse plots the raw fixation data; the individual panels plot the curvefit values (a minimal fitting sketch appears after the figure captions). A) Timecourse of target looks (raw data). B) Curvefit maximum parameters. C) Curvefit crossover parameters. D) Curvefit slope parameters.
Figure 4:
Timecourse of fixations to non-target objects. A) Fixations to cohort objects. B) Fixations to unrelated objects (mean of the two unrelated objects). C) The difference between cohort looks and the mean of the unrelated-object looks, representing the degree of cohort fixation over and above looks to unrelated objects. D) The mean proportion of looks to cohort and unrelated items over the 250-1000 ms time window.
Figure 5:
Schematic of linking functions for the VWP with and without preview. A) With preview. B) Without preview.
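
Curve-fitting sketch (not from the paper): Figure 3 summarizes each target-fixation timecourse with maximum, crossover, and slope parameters, which is typical of fitting a logistic function to fixation curves in the VWP. Below is a minimal, hypothetical Python sketch of such a fit, assuming NumPy and SciPy and using synthetic data; the parameter names mirror Figure 3, but the paper's actual fitting procedure is defined in the analysis scripts at https://osf.io/b7q65/.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, baseline, maximum, crossover, slope):
    # Rises from baseline to maximum; midpoint at t = crossover,
    # steepness governed by slope.
    return baseline + (maximum - baseline) / (1.0 + np.exp(-slope * (t - crossover)))

# Synthetic fixation proportions, sampled every 4 ms from word onset.
t = np.arange(0.0, 2000.0, 4.0)
rng = np.random.default_rng(0)
truth = logistic(t, 0.05, 0.85, 600.0, 0.01)
observed = np.clip(truth + rng.normal(0.0, 0.02, t.size), 0.0, 1.0)

# Fit: p0 gives rough starting values; bounds keep proportions in [0, 1].
params, _ = curve_fit(
    logistic, t, observed,
    p0=[0.05, 0.9, 500.0, 0.01],
    bounds=([0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 2000.0, 1.0]),
)
baseline, maximum, crossover, slope = params
print(f"maximum={maximum:.2f}, crossover={crossover:.0f} ms, slope={slope:.4f}")

Under this parameterization, a higher maximum indicates more asymptotic looking to the target, an earlier crossover indicates faster recognition, and a steeper slope indicates a more abrupt rise; these are the quantities compared across preview conditions in panels B-D of Figure 3.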

References

    1. Allopenna PD, Magnuson JS, & Tanenhaus MK (1998). Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. Journal of Memory and Language, 38(4), 419–439. 10.1006/jmla.1997.2558 - DOI
    1. Altmann GTM, & Kamide Y (2007). The real-time mediation of visual attention by language and world knowledge: Linking anticipatory (and other) eye movements to linguistic processing. Journal of Memory and Language, 57(4), 502–518. 10.1016/j.jml.2006.12.004 - DOI
    1. Altmann GTM, & Kamide Y (2009). Discourse-mediation of the mapping between language and the visual world: eye movements and mental representation. Cognition, 111(1), 55–71. 10.1016/j.cognition.2008.12.005 - DOI - PMC - PubMed
    1. Altmann GTM, & Mirković J (2009). Incrementality and prediction in human sentence processing. Cognitive Science, 33(4), 583–609. 10.1111/j.1551-6709.2009.01022.x - DOI - PMC - PubMed
    1. Andersson R, Ferreira F, & Henderson JM (2011). I see what you’re saying: The integration of complex speech and scenes during language comprehension. Acta Psychologica, 137(2), 208–216. 10.1016/j.actpsy.2011.01.007 - DOI - PubMed
