Flipping the world upside down: Using eye tracking in virtual reality to study visual search in inverted scenes

Julia Beitner et al. J Eye Mov Res. 2023 Mar 31;15(3):10.16910/jemr.15.3.5. doi: 10.16910/jemr.15.3.5. eCollection 2022.

Abstract

Image inversion is a powerful tool for investigating cognitive mechanisms of visual perception. However, studies have mainly used inversion in paradigms presented on two-dimensional computer screens. It remains open whether the disruptive effects of inversion also hold in more naturalistic scenarios. In our study, we used scene inversion in virtual reality in combination with eye tracking to investigate the mechanisms of repeated visual search through three-dimensional immersive indoor scenes. Scene inversion affected all gaze and head measures except fixation durations and saccade amplitudes. Surprisingly, our behavioral results did not entirely follow our hypotheses: while search efficiency dropped significantly in inverted scenes, participants did not utilize more memory, as measured by search time slopes. This indicates that, despite the disruption, participants did not try to compensate for the increased difficulty by relying more on memory. Our study highlights the importance of investigating classical experimental paradigms in more naturalistic scenarios to advance research on everyday human behavior.
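As an illustration of the slope measure mentioned above, the sketch below shows one way a search time slope can be computed from repeated searches within a scene. The data layout, the use of log-transformed response times, and the function name are illustrative assumptions, not the paper's analysis code; fitting on the log scale keeps slopes comparable across participants with different baseline speeds.

```python
# Illustrative sketch only: column layout, log transform, and names are
# assumptions, not taken from the paper's analysis code.
import numpy as np

def search_time_slope(trial_numbers, response_times):
    """Slope of log response time over trial number for one participant
    and scene; a more negative slope suggests stronger reliance on
    memory across repeated searches."""
    trials = np.asarray(trial_numbers, dtype=float)
    log_rt = np.log(np.asarray(response_times, dtype=float))
    # Degree-1 least-squares fit; np.polyfit returns [slope, intercept].
    slope, _intercept = np.polyfit(trials, log_rt, 1)
    return slope

# Example: response times (s) shrinking over ten searches in one scene.
rts = [4.2, 3.6, 3.1, 2.9, 2.7, 2.6, 2.4, 2.5, 2.3, 2.2]
print(search_time_slope(range(1, 11), rts))  # negative: searches speed up
```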

Keywords: Eye movements; eye tracking; incidental memory; scene inversion; scene perception; virtual reality; visual search.

Conflict of interest statement

The authors declare that the contents of the article are in agreement with the ethics described in http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html and that there is no conflict of interest regarding the publication of this paper.

Figures

Figure 1.
Bird's-eye view of one of the bathrooms and a sample view of each of the scenes used in the experiment. Blue squares indicate the participants' starting positions and were not visible during search.
Figure 2.
Response times of correct searches from trial 1 to 10 within one scene. Solid straight lines represent regression lines. Solid points indicate means calculated on log-transformed response times and back-transformed to the original scale for display; error bars indicate within-subject standard errors.
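A minimal sketch of the summary statistics this caption describes: means computed on log-transformed response times and exponentiated back for display, plus within-subject standard errors. The Cousineau-Morey correction used here is an assumption; the paper may use a different within-subject estimator.

```python
# Illustrative sketch only; the paper's exact estimator is not specified here.
import numpy as np

def backtransformed_mean(rts):
    """Mean of log response times, exponentiated back to the original
    scale for plotting (i.e., the geometric mean)."""
    return np.exp(np.mean(np.log(rts)))

def within_subject_se(log_rts):
    """Within-subject standard errors for a participants x conditions
    array of log response times, using Cousineau (2005) normalization
    with the Morey (2008) correction."""
    x = np.asarray(log_rts, dtype=float)
    n, k = x.shape
    # Remove between-participant variability: center each participant
    # on the grand mean before computing condition-wise variances.
    centered = x - x.mean(axis=1, keepdims=True) + x.mean()
    corrected_var = centered.var(axis=0, ddof=1) * k / (k - 1)
    return np.sqrt(corrected_var / n)

# Example: 3 participants x 2 scene orientations, response times in seconds.
rts = np.array([[2.1, 3.0], [1.5, 2.2], [2.8, 3.9]])
print(backtransformed_mean(rts[:, 0]))  # geometric mean, first orientation
print(within_subject_se(np.log(rts)))   # one SE per orientation
```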
Figure 3.
Response time split into initiation time (a), scanning time (b), and verification time (c). Solid points indicate means; error bars indicate within-subject standard errors; colored points represent individual participants. ***p < .001.
Figure 4.
Fixation measures. (a) Fixation durations. (b) Fixation count. (c) Fixated objects count. (d) Refixations. Solid points indicate means; error bars indicate within-subject standard errors; colored points represent individual participants. ***p < .001.
Figure 5.
Joint distributions of relative (a) and absolute (b) directions of gaze and head movements and their amplitudes as a function of scene orientation. The lower-right plot in each of (a) and (b) includes direction labels. The lighter the color, the more gaze or head movements were executed in that direction. Radial ticks represent degrees, while ticks from inside out represent saccade amplitudes.
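The joint distributions described here are, in effect, two-dimensional polar histograms of movement direction against amplitude. A minimal sketch of such a plot, using synthetic data and illustrative bin choices (not the paper's), follows:

```python
# Illustrative sketch with synthetic data; bin widths and ranges are
# arbitrary choices, not taken from the paper.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
directions = rng.uniform(0, 2 * np.pi, 5000)   # movement directions (rad)
amplitudes = rng.gamma(2.0, 4.0, 5000)         # movement amplitudes (deg)

theta_edges = np.linspace(0, 2 * np.pi, 37)    # 10-degree direction bins
r_edges = np.linspace(0, 30, 16)               # 2-degree amplitude rings
counts, _, _ = np.histogram2d(directions, amplitudes,
                              bins=[theta_edges, r_edges])

# Lighter cells correspond to more movements in that direction/amplitude bin.
ax = plt.subplot(projection="polar")
theta_grid, r_grid = np.meshgrid(theta_edges, r_edges)
ax.pcolormesh(theta_grid, r_grid, counts.T, cmap="viridis")
plt.show()
```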
Figure 6.
Mean proportion of absolute (a) gaze and (b) head directions as a function of scene orientation. Error bars indicate standard errors.
Figure 7.
Gaze latitude (a) and amplitudes for (b) gaze and (c) head movements. Solid points indicate means; error bars indicate within-subject standard errors; colored points represent individual participants. In (a), the dashed red line represents the horizon. *p < .05, ***p < .001.
