Gaze-centered remapping of remembered visual space in an open-loop pointing task
- PMID: 9454863
- PMCID: PMC6792733
- DOI: 10.1523/JNEUROSCI.18-04-01583.1998
Abstract
Establishing a coherent internal reference frame for visuospatial representation and maintaining the integrity of this frame during eye movements are thought to be crucial for both perception and motor control. A stable headcentric representation could be constructed by internally comparing retinal signals with eye position. Alternatively, visual memory traces could be actively remapped within an oculocentric frame to compensate for each eye movement. We tested these models by measuring errors in manual pointing (in complete darkness) toward briefly flashed central targets during three oculomotor paradigms. Subjects pointed accurately when gaze was maintained on the target location (control paradigm). However, when steadily fixating peripheral locations (static paradigm), subjects exaggerated the retinal eccentricity of the central target by 13.4 ± 5.1%. In the key "dynamic" paradigm, subjects briefly foveated the central target and then saccaded peripherally before pointing toward the remembered location of the target. Our headcentric model predicted accurate pointing (as seen in the control paradigm) independent of the saccade, whereas our oculocentric model predicted misestimation (as seen in the static paradigm) of an internally shifted retinotopic trace. In fact, pointing errors were significantly larger than control errors (p ≤ 0.003) and were indistinguishable (p ≥ 0.25) from the static paradigm errors. Scatter plots of pointing errors (dynamic vs. static paradigm) for various final fixation directions showed an overall slope of 0.97, contradicting the headcentric prediction (0.0) and supporting the oculocentric prediction (1.0). Varying both fixation and pointing-target direction confirmed that these errors were a function of retinotopically shifted memory traces rather than eye position per se.
To reconcile these results with previous pointing experiments, we propose a "conversion-on-demand" model of visuomotor control in which multiple visual targets are stored and rotated (noncommutatively) within the oculocentric frame, whereas only select targets are transformed further into head- or bodycentric frames for motor execution.
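The contrast between the two models' slope predictions (0.0 headcentric vs. 1.0 oculocentric) can be sketched numerically. The following minimal Python illustration is not from the paper; the 13.4% eccentricity gain is taken from the static-paradigm result, while the specific saccade amplitudes and the linear-error assumption are purely illustrative:

```python
# Sketch of the two models' predictions for dynamic-paradigm pointing error.
# Assumption (illustrative): static-paradigm error grows linearly with
# retinal eccentricity, with the ~13.4% gain reported in the abstract.

GAIN = 0.134  # exaggeration of retinal eccentricity (static paradigm)

def headcentric_error(saccade_deg: float) -> float:
    """Headcentric model: the target is stored relative to the head,
    so a later saccade leaves the stored location unchanged (error ~ 0)."""
    return 0.0

def oculocentric_error(saccade_deg: float) -> float:
    """Oculocentric model: the memory trace is remapped by the saccade,
    so the remembered target acquires a retinal eccentricity equal to the
    saccade amplitude, and the static exaggeration applies to it."""
    return GAIN * saccade_deg

def slope_through_origin(x, y):
    """Least-squares slope of y on x with zero intercept: sum(xy)/sum(x^2)."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# Hypothetical final-fixation eccentricities (degrees), for illustration only.
saccades = [5.0, 10.0, 15.0]
static = [GAIN * s for s in saccades]                  # static-paradigm errors
dynamic_ocu = [oculocentric_error(s) for s in saccades]
dynamic_head = [headcentric_error(s) for s in saccades]

print(slope_through_origin(static, dynamic_ocu))   # oculocentric prediction: 1.0
print(slope_through_origin(static, dynamic_head))  # headcentric prediction: 0.0
```

Under these assumptions, the observed dynamic-vs-static slope of 0.97 falls essentially on the oculocentric prediction of 1.0.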