Cereb Cortex. 2015 Mar;25(3):619-30. doi: 10.1093/cercor/bht247. Epub 2013 Sep 22.

Multisensory self-motion compensation during object trajectory judgments

Kalpana Dokka et al. Cereb Cortex. 2015 Mar.

Abstract

Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. This requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual-vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion that was defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual-vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals.

Keywords: flow parsing; object motion; optic flow; self-motion; vestibular.


Figures

Figure 1.
Schematic illustration of interactions between object and observer motion and the experimental protocol. (A) An object (black circle) located to the left of the observer-fixed fixation point moves downward in the world, while the subject is translated rightward. (B) This stimulus results in a retinal motion vector that can be computed as the vector sum of the components associated with object movement (speed Vo) and observer movement (speed Vs); the direction of retinal motion is θ = tan⁻¹(Vs/Vo). (C) The task was a rightward/leftward object trajectory discrimination (around a purely downward movement) in the world, while subjects also experienced rightward or leftward self-motion at 1 of 3 different velocities. (D) Screenshot showing the object within a starfield.
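To make the geometry in panel B concrete, the sketch below computes the retinal motion vector as the sum of the object and observer components and its direction θ = tan⁻¹(Vs/Vo), as given in the caption. The speed values are illustrative only, not the experimental parameters.

```python
import math

def retinal_motion(v_object: float, v_self: float) -> tuple[float, float]:
    """Retinal-motion speed and direction for the Figure 1 geometry.

    v_object: downward object speed in the world (Vo, cm/s).
    v_self:   rightward observer translation speed (Vs, cm/s), which adds a
              component of the same magnitude to the retinal image motion.
    Returns (retinal speed, direction theta away from straight down, degrees).
    """
    speed = math.hypot(v_object, v_self)                 # vector sum of the two components
    theta = math.degrees(math.atan2(v_self, v_object))   # theta = tan^-1(Vs / Vo)
    return speed, theta

# Illustrative values only: an object moving downward at 10 cm/s
# viewed during 22 cm/s lateral self-motion.
print(retinal_motion(10.0, 22.0))
```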
Figure 2.
Data from a representative subject. (A and B) Staircase histories of a representative subject during individual sessions, shown separately for rightward and leftward self-motion (peak velocity of 22 cm/s). Data are shown for perceived object trajectory judgments in the absence of self-motion (Obj, dashed-gray), with vestibular self-motion (Obj + Vest, thick-gray), with optic flow (Obj + Vis, thin-black), and with combined visual–vestibular self-motion cues (Obj + Com, thick-black). Color- and style-coded solid horizontal lines indicate averages of staircase reversals used to compute object trajectory bias (see Materials and Methods). (C and D) Psychometric functions for this subject's cumulative data during rightward (filled symbols) and leftward (open symbols) self-motion (peak velocity of 22 cm/s), coded as in A and B. (E and F) Average bias values for this subject across sessions, during rightward and leftward self-motion, respectively. Dotted horizontal black line illustrates zero bias. Error bars indicate the SEM.
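The caption notes that object trajectory bias was computed from averages of staircase reversals, with details given in the paper's Materials and Methods. The sketch below only illustrates that kind of estimate; the reversal values and the choice to average the last six reversals are hypothetical, not the authors' procedure.

```python
def staircase_bias(reversals, n_last=6):
    """Estimate bias as the mean of the last n_last staircase reversal values (deg).

    Hypothetical illustration only; the averaging rule actually used is
    described in the paper's Materials and Methods and may differ.
    """
    tail = reversals[-n_last:] if len(reversals) >= n_last else reversals
    return sum(tail) / len(tail)

# Example with made-up reversal values (deg) from one staircase run.
print(staircase_bias([8.0, 3.0, 6.5, 4.0, 5.5, 4.5, 5.0, 4.8]))
```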
Figure 3.
Average bias and thresholds across subjects. (A and B) Mean bias in the perceived object direction for rightward (filled symbols) and leftward (open symbols) self-motion, both before (A) and after (B) feedback training. Note that 12 subjects were tested in all conditions before feedback training, whereas 9 and 10 subjects were tested with vestibular and visual/combined self-motion after feedback training, respectively. Dotted horizontal black lines illustrate zero bias. The solid horizontal (corresponding to self-motion velocity of 12 cm/s), dashed (corresponding to 22 cm/s), and dash-dotted (corresponding to 32 cm/s) lines indicate the bias expected if subjects judged object trajectory in observer-centered coordinates. (C and D) Mean object trajectory discrimination thresholds during rightward (top) and leftward (bottom) self-motion before (C) and after (D) feedback. Error bars indicate SEM.
Figure 4.
Comparison between bias and observer coordinate predictions. Data are shown separately for the vestibular condition (A and B), as well as the visual (green) and combined (red) conditions (C and D). Left and right columns show data obtained before and after feedback training. Magenta symbols in B represent data obtained from 2 novel self-motion velocities for which no feedback was ever given to subjects. Black dashed lines represent unity slope. Solid colored lines represent best-fitting linear models obtained from regression. Note that the slope of this line quantifies the average ratio of measured to predicted bias for each data set. Small random offsets were added to the observer coordinate predictions to allow better visualization of the data. Twelve subjects were tested in all conditions before feedback training, whereas 9 and 10 subjects were tested with vestibular and visual/combined self-motion after feedback, respectively. (E and F) Percent compensation before and after feedback. Dashed horizontal line represents perfect (100%) compensation for self-motion. Error bars indicate SEM. Filled and hatched bars correspond to rightward and leftward self-motion directions, respectively.
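The caption states that the regression slope gives the average ratio of measured to predicted bias, and panels E and F report percent compensation. The sketch below shows one plausible way these quantities could relate, under the assumption (mine, not stated in the caption) that a slope of 1 corresponds to 0% compensation and a measured bias of 0 to 100% compensation; the bias values are made up.

```python
import numpy as np

# Hypothetical biases (deg): observer-coordinate predictions vs. measured values.
predicted = np.array([6.0, 11.0, 16.0, 6.0, 11.0, 16.0])
measured = np.array([3.2, 5.9, 8.1, 2.8, 6.3, 8.6])

# Best-fitting line of measured bias against the observer-coordinate prediction;
# the slope approximates the average ratio of measured to predicted bias.
slope, intercept = np.polyfit(predicted, measured, 1)

# Assumed relation: full compensation drives the measured bias to zero.
percent_compensation = 100.0 * (1.0 - slope)
print(f"slope = {slope:.2f}, compensation = {percent_compensation:.0f}%")
```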

