J Vis. 2021 Jul 6;21(7):3. doi: 10.1167/jov.21.7.3.

The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality


Erwan Joël David et al. J Vis. 2021.

Abstract

Visual search in natural scenes is a complex task that relies on peripheral vision to detect potential targets and on central vision to verify them. This division of labor between the visual fields has been established largely through on-screen experiments. We conducted a gaze-contingent experiment in virtual reality to test how the roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention with a fairly ecological protocol that could not be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene negatively impacted search measures. Our results diverge from on-screen studies in that search performance was only slightly affected by the loss of central vision. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings suggest that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes is minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
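The core manipulation is a gaze-contingent mask: on every frame, the scene is occluded either inside or outside a 6-degree radius around the current gaze direction. The paper does not include code; the following is a minimal Python sketch of that eccentricity test, with hypothetical function names, assuming unit gaze and point directions in head space (a real VR implementation would run per fragment in a shader driven by the eye tracker's gaze vector).

    import numpy as np

    MASK_RADIUS_DEG = 6.0  # mask radius used in the study

    def angular_eccentricity(gaze_dir, point_dir):
        """Angle in degrees between the gaze direction and the direction
        to a scene point, both given as unit vectors in head space."""
        cos_angle = np.clip(np.dot(gaze_dir, point_dir), -1.0, 1.0)
        return np.degrees(np.arccos(cos_angle))

    def is_masked(gaze_dir, point_dir, condition):
        """Return True if the point falls inside the masked region.

        condition: 'control' (no mask), 'central' (occlude the inner
        6 degrees), or 'peripheral' (occlude everything beyond 6 degrees).
        """
        ecc = angular_eccentricity(gaze_dir, point_dir)
        if condition == "central":
            return ecc <= MASK_RADIUS_DEG
        if condition == "peripheral":
            return ecc > MASK_RADIUS_DEG
        return False  # control condition: nothing is masked

For example, is_masked(np.array([0.0, 0.0, 1.0]), point_dir, "central") returns True for any point within 6 degrees of a straight-ahead gaze.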


Figures

Figure 1.
Views from four virtual rooms. The top two images show examples of “grammatically constrained” targets (alarm clock, clothes hanger), placed next to semantically related objects (bed, coat rack) that act as anchor objects for these targets. The bottom two images show “grammatically unconstrained” targets (gong, American football), located without an anchor object. Zoomed-in views of the target objects are shown within the red rectangles.
Figure 2.
Masking conditions are presented here in a viewport measuring 90 by 90 degrees of field of view; mask radii are proportionally accurate. From left to right: control (no-mask) condition, central mask with a 6-degree radius, and peripheral mask with a 6-degree radius. The captured scene view shows the training room.
Figure 3.
Visual search measures are presented as a function of mask condition and target object grammatical constraint (mean and 95% CI). The x-axis shows object grammatical constraint (“Const.” and “Unconst.”), while mask conditions appear as facets of the subplots. An asterisk to the right of a variable's name indicates that it was log-transformed in the linear mixed models and is presented on a log scale here.
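The asterisk convention above refers to linear mixed models fit on log-transformed measures. As a rough sketch only (the column and file names are hypothetical, and a single random intercept per subject is a simplification; studies of this kind typically fit crossed random effects for subjects and items), such a model could be fit in Python with statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical trial-level data: one row per search trial with
    # columns search_time (s), mask (control/central/peripheral),
    # constraint (Const./Unconst.), and subject (participant ID).
    df = pd.read_csv("search_trials.csv")

    # Log-transform the skewed dependent variable (the asterisk in the
    # figure caption) and model the mask x constraint interaction with
    # a random intercept per subject.
    model = smf.mixedlm("np.log(search_time) ~ mask * constraint",
                        data=df, groups=df["subject"])
    result = model.fit()
    print(result.summary())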
Figure A1.
Visual search measures are presented as a function of mask condition and target object grammatical constraint (mean and 95% CI). The x-axis shows object grammatical constraint (“Const.” and “Unconst.”), while mask conditions appear as facets of the subplots. An asterisk to the right of a variable's name indicates that it was log-transformed in the linear mixed models and is presented on a log scale here.

