PLoS One. 2019 May 23;14(5):e0217051. doi: 10.1371/journal.pone.0217051. eCollection 2019.

Extrafoveal attentional capture by object semantics


Antje Nuthmann et al. PLoS One, 2019.

Abstract

There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. An example search display in the different experimental conditions across three experiments.
On this particular trial, the spoken target word was “banana.” The relevant displays contained no target but included objects that were semantically related to the target (“monkey”) or unrelated (“hat” and “tambourine”). In all conditions except the visual-absent condition of Experiment 2, these trials also contained a visually related object (“canoe”). In all but one condition (i.e., the accessory condition in Experiment 3), participants indicated whether the target object (“banana”) was present or absent in the display. Pictures are from the license-free Hemera Photo-Object database (Vols. I, II, & III; Hemera Technologies Inc.).
Fig 2. Results for extrafoveal and foveal semantic processing across three experiments.
The two main continuous response variables are organized by row. In a given row, each column depicts data from a different experiment or experimental condition. Each facet summarizes the fixed-effects results from the relevant LMM. In the statistical models, the intercept represents the estimate for the semantically related object, and this numeric value is included in the figure panels. The bar charts show the difference scores. The zero line represents the semantically related object as the reference category. The red bars, comparing unrelated with semantically related objects (unrelated minus semantic), show the disadvantage of unrelated over semantically related objects, which is equivalent to an advantage of semantically related over unrelated objects. The blue bars, comparing visually related with semantically related objects (visual minus semantic), show the additional advantage of visually related objects over semantically related objects. Error bars are 95% confidence intervals (CI = estimate ± 1.96 × SE); thus, an effect is significant when its error bar does not include 0.
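
To make the reading of the bars concrete, here is a minimal Python sketch of how treatment-coded difference scores and their 95% confidence intervals (estimate ± 1.96 × SE) translate into the significance check described above. All numeric values are placeholders for illustration, not the LMM estimates reported in the paper.

    # Minimal sketch of reading the Fig 2 difference scores and error bars.
    # All numbers below are placeholders, not the paper's LMM estimates.
    estimates = {
        "semantic (intercept)": 250.0,   # reference category, e.g. a latency in ms
        "unrelated - semantic": 40.0,    # red bars: disadvantage of unrelated objects
        "visual - semantic": -30.0,      # blue bars: extra advantage of visual matches
    }
    standard_errors = {
        "unrelated - semantic": 12.0,
        "visual - semantic": 11.0,
    }

    for contrast, est in estimates.items():
        se = standard_errors.get(contrast)
        if se is None:                   # the intercept itself carries no error bar here
            print(f"{contrast}: {est:.1f}")
            continue
        lo, hi = est - 1.96 * se, est + 1.96 * se   # 95% CI = estimate +/- 1.96 x SE
        significant = lo > 0 or hi < 0              # significant when the CI excludes 0
        print(f"{contrast}: {est:.1f}, 95% CI [{lo:.1f}, {hi:.1f}], significant: {significant}")
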
Fig 3. Results for the probability of immediate fixation across three experiments.
Each column depicts data from a different experiment or experimental condition. For each of these, two separate intercept-only GLMMs were fitted, the first one comparing semantically related objects with unrelated objects (top row, red bars), and the second one comparing semantically related objects with visually related objects (bottom row, blue bars). The two subplots on the left show the results for a condition from Experiment 2 in which the visually related object was replaced with a second unrelated object. In each facet, the height of the bar represents the estimate for the fixed-effect intercept. The zero line represents the intercept under the null hypothesis. In the analyses represented by the red bars, a positive estimate corresponds to a higher probability for semantically related than for unrelated objects. In the analyses represented by the blue bars, a negative estimate corresponds to a lower probability for semantically related than for visually related objects. Error bars are 95% confidence intervals; thus, the effect is significant if the error bar does not include 0.
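
Because the Fig 3 estimates are GLMM intercepts for a probability outcome, they presumably sit on the log-odds (logit) scale, where 0 corresponds to a probability of .5. The short Python sketch below, again with made-up numbers rather than the reported estimates and assuming a standard logit link as is typical for binomial GLMMs, shows how such an intercept and its Wald confidence interval map back onto probabilities.

    import math

    # Hypothetical intercept-only GLMM output on the logit (log-odds) scale;
    # the numbers are placeholders, not the estimates plotted in Fig 3.
    intercept = 0.80   # log-odds of the semantically related object being fixated first
    se = 0.25          # standard error of the intercept

    # 95% Wald confidence interval on the logit scale.
    lo, hi = intercept - 1.96 * se, intercept + 1.96 * se

    # Back-transform to probabilities with the logistic function.
    def to_prob(x):
        return 1.0 / (1.0 + math.exp(-x))

    print(f"P(semantic fixated first) = {to_prob(intercept):.2f}, "
          f"95% CI [{to_prob(lo):.2f}, {to_prob(hi):.2f}]")

    # An intercept of 0 corresponds to a probability of .5 (no preference), so the
    # effect is significant when the logit-scale CI excludes 0.
    print("significant:", lo > 0 or hi < 0)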

