Differences between perception and eye movements during complex motions

Jan E Holly et al. J Vestib Res. 2011;21(4):193-208. doi: 10.3233/VES-2011-0416.

Abstract

During passive whole-body motion in the dark, the motion perceived by subjects may or may not be veridical. Either way, reflexive eye movements are typically compensatory for the perceived motion. However, studies are discovering that for certain motions, the perceived motion and eye movements are incompatible. The incompatibility has not been explained by basic differences in gain or time constants of decay. This paper uses three-dimensional modeling to investigate gondola centrifugation (with a tilting carriage) and off-vertical axis rotation. The first goal was to determine whether known differences between perceived motions and eye movements are true differences when all three-dimensional combinations of angular and linear components are considered. The second goal was to identify the likely areas of processing in which perceived motions match or differ from eye movements, whether in angular components, linear components and/or dynamics. The results were that perceived motions are more compatible with eye movements in three dimensions than the one-dimensional components indicate, and that they differ more in their linear than their angular components. In addition, while eye movements are consistent with linear filtering processes, perceived motion has dynamics that cannot be explained by basic differences in time constants, filtering, or standard GIF-resolution processes.
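The "GIF-resolution processes" mentioned in the abstract refer to the standard idea that the otoliths sense gravito-inertial force, so a linear-acceleration estimate can be recovered by subtracting an internal estimate of gravity. A minimal sketch of that idea (not code from the paper; the sign convention f = g − a and the function name are assumptions for illustration):

```python
import numpy as np

# Hypothetical sketch of standard GIF resolution (not the paper's code).
# Convention assumed here: the otoliths sense gravito-inertial
# acceleration f = g - a in head coordinates, where g is gravity and a
# is the head's linear acceleration. Given an internal estimate of g,
# the linear component follows by subtraction.
def resolve_gif(f_head, g_est_head):
    """Return the estimated linear acceleration a_est = g_est - f."""
    return np.asarray(g_est_head, dtype=float) - np.asarray(f_head, dtype=float)

# Head accelerating forward at 1 m/s^2 while upright:
g = [0.0, 0.0, 9.81]       # gravity estimate, head z-axis up
f = [-1.0, 0.0, 9.81]      # sensed GIF = g - a
a_est = resolve_gif(f, g)  # -> [1.0, 0.0, 0.0]
```

A misestimated gravity vector (e.g., during tilt) corrupts a_est, which is why the paper's Filter Model, which disregards tilt, and its GIF-Resolution Model can diverge.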


Figures

Figure 1. Hypothetical perceived motion for which one-dimensional perception and eye movements would appear incompatible (perceived pitch versus vertical eye movements) but are actually compatible during the three-dimensional motion: a backward curve with rightward yaw velocity, starting in a rightward roll orientation. The motion is shown in freeze-frame format by displaying a head at 0.5 s intervals through an arc lasting 2 s. The perceived motion has a nonzero rate of change of forward pitch, but compensatory eye movements would have zero vertical component.
Figure 2. The models, representing well-known properties of perception and eye movements in three dimensions. Input of linear and angular acceleration is at left, and output is at right. Eye movement computations are shown by the thinner lines, with output of horizontal, vertical and torsional vestibulo-ocular reflex (VOR). Perception uses the same core, with additional portions shown by thicker lines to produce separate output of angular (ω) and linear (v) velocity, as well as position (r) in the Earth-fixed reference frame, and orientation as given by three vectors in head-based coordinates: an Earth-upward vector (g) of magnitude g, and two heading vectors (i, j) representing fixed orthogonal directions in the Earth horizontal plane. Computations of ω, v, r, g, i, and j are based upon standard physical relationships between the vectors, modified by the nervous system's tendency toward angular and linear stationarity, and vertical alignment with the GIA, according to time constants τa, τl and τt, respectively. The only exception occurs in the dashed box, which shows the two different versions of the model, the GIF-Resolution Model and the Filter Model. The GIF-Resolution Model mirrors physics in three dimensions, while the Filter Model disregards the tilt in processing linear acceleration. For eye movements, VOR output is computed as that compensatory for the computed three-dimensional combination of velocities, scaled by gains. To handle possible mismatch between angular tilt velocity and change in tilt orientation, the model allows VOR to arise from angular velocity and/or from change in orientation relative to vertical by using the "MAX" calculation after applying a fractional weight, w, to the change in orientation.
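Two computations in this caption can be illustrated with a rough scalar sketch (hypothetical: the first-order leak form, function names, and scalar treatment are assumptions, not the paper's implementation): a velocity estimate that decays toward zero with a time constant, capturing the "tendency toward stationarity" governed by τa and τl, and a VOR drive taken as the maximum of the angular velocity and a weighted orientation-change rate, scaled by a gain.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): the internal velocity
# estimate leaks toward zero, modeling the "tendency toward
# stationarity" with a first-order time constant tau.
def leaky_estimate(sensory_acc, dt, tau):
    """Integrate acceleration into a velocity estimate that decays
    toward zero with time constant tau."""
    out = np.zeros_like(sensory_acc, dtype=float)
    est = 0.0
    for i, a in enumerate(sensory_acc):
        est += (a - est / tau) * dt
        out[i] = est
    return out

# Hypothetical scalar version of the caption's "MAX" computation: VOR
# arises from angular velocity and/or weighted change in orientation.
def vor_component(omega, d_orient_dt, w, gain):
    return gain * max(omega, w * d_orient_dt)

# A 1 s step of angular acceleration: the estimate builds toward tau,
# then decays exponentially once the stimulus ends.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
acc = np.where(t < 1.0, 1.0, 0.0)
v = leaky_estimate(acc, dt, tau=0.25)
```

With τ = 0.25 s, the estimate saturates near 0.25 during the step and is essentially zero by the end of the trace, which is the qualitative behavior a shorter τl versus a longer τl trades off in Figures 4 and 5.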
Figure 3. Three-dimensional motions that are consistent with data on eye movements, and are also as compatible as possible with published reports of perceived motion. Numerical values of motion parameters are given in the text. (A) Potentially perceived motion during acceleration in a gondola centrifuge, shown with a head at 1 s intervals, at ten times normal size for viewability. The subject spirals counterclockwise, forward and upward, while in an orientation tilted slightly leftward in roll and backward in pitch. (B) Potentially perceived motion during deceleration in a gondola centrifuge. The motion begins upward, transitioning into forward pitch velocity with simultaneous rightward yaw velocity. Shown is the first 6 s of motion; further simulation, which would display a blob of indiscernible heads at the top position, indicates that the yaw velocity later produces a slight wobble in the forward tumble. This version has no sideways slippage, while a version with balanced interaural and yaw motion has slow rightward motion mostly after the tumbling begins (not shown). (C) Potentially perceived motion during OVAR. The motion is circular and tilted outward 15° (seen upon close inspection), i.e., a cone, and with slight oscillations in yaw.
Figure 4. Simulations of eye movements in a gondola centrifuge as compared with the pattern of data, for the two different versions of the model: Filter Model and GIF-Resolution Model. The actual pattern of data (x's) is as shown in McGrath et al. [31]. Details of the models' parameter values are given in the text. For the Filter Model, the two different values of τl gave graphs that were barely distinguishable, so the value τl = 0.25 s is used for this display. (A) Centrifuge acceleration, horizontal slow phase velocity (SPV) of eye movements. (B) Centrifuge acceleration, vertical SPV. (C) Centrifuge deceleration, horizontal SPV. (D) Centrifuge deceleration, vertical SPV.
Figure 5. The perceptions that would be associated with recorded eye movements. The Filter Model ("Filter" in the legends) associates perception directly with eye movements, while the GIF-Resolution Model ("GIF-Res" in the legends) gives an alternative: that perception and eye movements differ only in their processing of linear acceleration. Both models give the same results for angular components. Details of the models' parameter values are given in the text, with the Filter Model using τl = 0.25 s for these graphs; τl = 0.5 s produced the same results except for translation as described in parts D and F. (A) Centrifuge acceleration (Acc) and deceleration (Dec), pitch angle. (B) Centrifuge acceleration and deceleration, pitch velocity. (C) Centrifuge acceleration and deceleration, roll angle. (D) Centrifuge acceleration and deceleration, velocity Earth-upward. For the Filter Model with τl = 0.5 s, the results were essentially identical in shape to these, but with double the amplitude. (E) OVAR, roll angle. The x's show perception data by means of a sinusoid with amplitude 20° and phase –25°, typical values within the range of amplitude and phase given by subject reports of perceived tilt angle (Table 1). (F) OVAR, interaural linear velocity. The x's show perception data by means of a sinusoid with amplitude 0.7 m/s = 0.9(π/4) m/s and phase –25° (Table 1). For the Filter Model with τl = 0.5 s, the resulting sine waves were slightly shifted, approximately 0.25 s to the right (phase shift around 10°), and with double the amplitude. For all graphs, the comparison with perception data (Table 1) is discussed in the text.


References

    1. Angelaki DE, Shaikh AG, Green AM, Dickman JD. Neurons compute internal models of the physical laws of motion. Nature. 2004;430:560–564.
    2. Bles W, de Graaf B. Postural consequences of long duration centrifugation. Journal of Vestibular Research. 1993;3:87–95.
    3. Bockisch CJ, Straumann D, Haslwanter T. Eye movements during multiaxis whole-body rotations. Journal of Neurophysiology. 2003;89:355–366.
    4. Borah J, Young LR, Curry RE. Optimal estimator model for human spatial orientation. Annals of the New York Academy of Sciences. 1988;545:51–73.
    5. Boring EG, Langfeld HS, Weld HP. Foundations of Psychology. John Wiley and Sons, Inc.; New York: 1948.
