J Vis. 2020 Aug 3;20(8):21. doi: 10.1167/jov.20.8.21.

Turning the (virtual) world around: Patterns in saccade direction vary with picture orientation and shape in virtual reality


Nicola C Anderson et al. J Vis. 2020.

Abstract

Research investigating gaze in natural scenes has identified a number of spatial biases in where people look, but it is unclear whether these are partly due to constrained testing environments (e.g., a participant with their head restrained and looking at a landscape image framed within a computer monitor). We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influence eye and head movements in virtual reality (VR). Both the eyes and head were tracked while observers looked at natural scenes in a virtual environment. In line with previous work, we found a bias for saccade directions parallel to the image horizon, regardless of image shape or content. We found that, when allowed to do so, observers move both their eyes and head to explore images. Head rotation, however, was idiosyncratic; some observers rotated a lot, whereas others did not. Interestingly, the head rotated in line with the rotation of landscape but not fractal images. That head rotation and gaze direction respond differently to image content suggests that they may be under different control systems. We discuss our findings in relation to current theories on head and eye movement control and how insights from VR might inform more traditional eye-tracking studies.


Figures

Figure 1.
Example square (Experiment 1), circular (Experiment 2), and square frame (Experiment 3) landscape and fractal images rotated 45° counter-clockwise from the participant's perspective in the VR headset. Note that in Experiment 1 the rotated image is also zoomed relative to Experiments 2 and 3 due to the greater amount of cropping required (see Figure 2).
Figure 2.
Example of the (A) crop and (B) zoom of an image rotated 45° in Experiment 1. The largest rotated square that fit obliquely into the original image was used to extract image content for each stimulus rotation. For cardinal image rotations (including un-rotated images), a square the same size as the largest rotated square was cut from the center of the image. This kept the content as similar as possible across image rotations in Experiment 1. In Experiments 2 and 3, the original, un-zoomed images were used.
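To make the crop geometry concrete, here is a minimal sketch of the inscribed-square calculation the caption implies (not the authors' code; the function name and the 1000-px example are illustrative):

```python
import math

def inscribed_square_side(image_side, rotation_deg):
    """Side of the largest square that fits obliquely inside a square
    image of side `image_side` when rotated by `rotation_deg`. At
    cardinal rotations the full side fits; at oblique rotations the
    inscribed square shrinks by 1 / (|cos(theta)| + |sin(theta)|)."""
    theta = math.radians(rotation_deg)
    return image_side / (abs(math.cos(theta)) + abs(math.sin(theta)))

# A hypothetical 1000-px image rotated 45° admits only a ~707-px
# inscribed square; cardinal-rotation crops are then cut to this same
# size from the image center so content stays comparable.
print(inscribed_square_side(1000, 45))  # ~707.11
```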
Figure 3.
Angular deviation between eye and head positions as a function of stimulus type and stimulus rotation.
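One plausible way to compute such a deviation from tracked eye and head direction vectors is sketched below (a reconstruction for illustration only; the paper's exact measure and coordinate frames are not given in this caption):

```python
import numpy as np

def angular_deviation(eye_dir, head_dir):
    """Angle in degrees between eye and head direction vectors."""
    eye = np.asarray(eye_dir, dtype=float)
    head = np.asarray(head_dir, dtype=float)
    cos_angle = np.dot(eye, head) / (np.linalg.norm(eye) * np.linalg.norm(head))
    # Clip to guard against floating-point values just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(angular_deviation([0, 0, 1], [0.2, 0, 1]))  # ~11.3°
```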
Figure 4.
Saccade direction distributions (with respect to image coordinates) as a function of image orientation and image type. Each subplot shows the relative frequency of saccades in each of 36 bins.
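For concreteness, a minimal sketch of the 36-bin (10° per bin) relative-frequency histogram that each subplot describes (the function name and example directions are hypothetical):

```python
import numpy as np

def direction_histogram(directions_deg, n_bins=36):
    """Relative frequency of saccade directions in `n_bins` equal-width
    bins covering 0-360° (10° bins for n_bins=36)."""
    wrapped = np.mod(directions_deg, 360)
    counts, edges = np.histogram(wrapped, bins=n_bins, range=(0, 360))
    return counts / counts.sum(), edges

rel_freq, edges = direction_histogram([2, 5, 178, 183, 359])
print(rel_freq[0])  # share of saccades in the 0-10° bin: 0.4
```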
Figure 5.
Saccade direction axes for each image type and rotation. Saccades were split into four symmetrical groups. Saccade directions are defined in screen coordinates, so for an image rotated to 45°/225°, saccades along the image horizon appear at the 45°/225° directions in this figure.
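One way to read "four symmetrical groups" is to fold each direction modulo 180° and snap it to the nearest 45° axis, as in the illustrative sketch below (the paper's exact grouping may differ):

```python
def saccade_axis(direction_deg):
    """Collapse a screen-based saccade direction onto one of four
    symmetric axes (0/180, 45/225, 90/270, 135/315) by folding it
    modulo 180° and snapping to the nearest 45° axis."""
    folded = direction_deg % 180.0
    return (round(folded / 45.0) % 4) * 45

# In an image rotated to 45°/225°, a saccade along the image horizon
# falls on the 45/225 axis, matching the figure's convention.
assert saccade_axis(225) == 45
```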
Figure 6.
Mean saccade amplitudes at each binned saccade direction, stimulus type, and stimulus rotation.
Figure 7.
Cumulative head rotation across fixation index for each image rotation. Individual subjects are plotted as separate lines, and overall subject means are plotted as black lines with error bars representing the standard error of the mean.
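As a minimal sketch of the quantity plotted (the variable name and sample values are invented for illustration), cumulative head rotation is the running sum of signed per-fixation changes in head yaw:

```python
import numpy as np

# Signed change in head yaw (deg) between consecutive fixations;
# these values are invented for illustration.
yaw_change_per_fixation = [5.0, 10.0, 8.0, -2.0]

cumulative_rotation = np.cumsum(yaw_change_per_fixation)
print(cumulative_rotation)  # [ 5. 15. 23. 21.]
```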
