J Neurosci. 1998 Feb 15;18(4):1583-94. doi: 10.1523/JNEUROSCI.18-04-01583.1998.

Gaze-centered remapping of remembered visual space in an open-loop pointing task

D Y Henriques et al.

Abstract

Establishing a coherent internal reference frame for visuospatial representation and maintaining the integrity of this frame during eye movements are thought to be crucial for both perception and motor control. A stable headcentric representation could be constructed by internally comparing retinal signals with eye position. Alternatively, visual memory traces could be actively remapped within an oculocentric frame to compensate for each eye movement. We tested these models by measuring errors in manual pointing (in complete darkness) toward briefly flashed central targets during three oculomotor paradigms. Subjects pointed accurately when gaze was maintained on the target location (control paradigm). However, when steadily fixating peripheral locations (static paradigm), subjects exaggerated the retinal eccentricity of the central target by 13.4 ± 5.1%. In the key "dynamic" paradigm, subjects briefly foveated the central target and then saccaded peripherally before pointing toward the remembered location of the target. Our headcentric model predicted accurate pointing (as seen in the control paradigm) independent of the saccade, whereas our oculocentric model predicted misestimation (as seen in the static paradigm) of an internally shifted retinotopic trace. In fact, pointing errors were significantly larger than control errors (p ≤ 0.003) and were indistinguishable (p ≥ 0.25) from the static paradigm errors. Scatter plots of pointing errors (dynamic vs static paradigm) for various final fixation directions showed an overall slope of 0.97, contradicting the headcentric prediction (0.0) and supporting the oculocentric prediction (1.0). Varying both fixation and pointing-target direction confirmed that these errors were a function of retinotopically shifted memory traces rather than eye position per se. To reconcile these results with previous pointing experiments, we propose a "conversion-on-demand" model of visuomotor control in which multiple visual targets are stored and rotated (noncommutatively) within the oculocentric frame, whereas only select targets are transformed further into head- or bodycentric frames for motor execution.
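
The contrast between the two predictions reduces to simple angle bookkeeping. The following is a minimal numerical sketch (our illustration, not the authors' analysis code); the only empirical value used is the 13.4% magnification factor from the abstract, and all function names are ours:

```python
import numpy as np

M = 0.134  # retinal magnification factor (13.4%, from the abstract)

def headcentric_error(final_gaze_deg):
    """Headcentric model: the trace is stored as retina + eye position at
    encoding (0 deg for a foveated central target), so a later saccade
    leaves it untouched and predicts no systematic pointing error."""
    return np.zeros_like(np.asarray(final_gaze_deg, dtype=float))

def oculocentric_error(final_gaze_deg):
    """Oculocentric model: the trace is countershifted by each saccade, so
    after a gaze shift G the central target sits at -G on the retinal map;
    exaggerating that eccentricity by M predicts an error of -M * G."""
    g = np.asarray(final_gaze_deg, dtype=float)
    return M * (-g)  # overshoot directed opposite to the gaze line

gazes = [-30.0, -15.0, 15.0, 30.0]   # final fixation directions (deg)
print(headcentric_error(gazes))      # [0. 0. 0. 0.]             -> slope 0.0
print(oculocentric_error(gazes))     # [ 4.02  2.01 -2.01 -4.02] -> slope ~1.0
```

Because the static paradigm produces the same retinal eccentricity (-G) directly, the dynamic-versus-static regression slope discriminates the models: 0.0 for headcentric, 1.0 for oculocentric, against which the observed slope of 0.97 can be compared.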

PubMed Disclaimer

Figures

Fig. 1.
Schematic illustration of the headcentric (left) and oculocentric (right) models of visuospatial memory and the basic experimental design that we used to test between them. The key observation that led to this test is that subjects usually exaggerate the retinal eccentricity (independent of eye position) of nonfoveal targets in the visuomotor transformation for pointing (Bock, 1986; Enright, 1995), and this “retinal magnification effect” must occur at different stages relative to the memory storage process in the two models. A, Initially, the subject looks straight ahead (center) toward a distant, briefly flashed target (solid circle). An internal representation of target direction is formed (dashed line) and is either retained in the oculocentric frame (right) or transformed immediately into the headcentric frame (left) by comparing the retinal signal with eye position. In one dimension, the latter amounts to an addition of the horizontal angles, i.e., 0° retinotopic + 0° eye position = 0° craniotopic or straight ahead. Humans are known to be quite accurate at pointing toward the remembered locations of central, foveated targets, but now a new “twist” is added. B–D, After viewing the target, the subject rotates (i.e., saccades) the eyes, e.g., 30° leftward (B, center), and only then (C, D) points toward the remembered target location (virtual target). According to the headcentric model, the intervening eye movement should have no systematic effect on the stable, headcentric memory trace (B, left) or, hence, on subsequent pointing accuracy (C). In contrast, the oculocentric model must compensate for the leftward gaze shift by countershifting the retinotopic memory trace (B, right), in effect rotating the oculocentric direction vector 30° to the right (B, gray sector). Now the subject must point based on a peripherally shifted retinotopic memory trace (D, gray sector). Based on previous observations, this should result in an angular overshoot in pointing direction (D, black sector) opposite to the gaze line (thin arrow).
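
The one-dimensional arithmetic in this legend can be written out explicitly. With m denoting the retinal magnification factor (our notation, not the paper's), the 30° leftward saccade in B–D gives:

```latex
% Headcentric: retina + eye position is stored at encoding; the saccade
% changes nothing, so pointing is predicted to remain accurate.
\hat{T}_{\mathrm{head}} = r + e = 0^{\circ} + 0^{\circ} = 0^{\circ}

% Oculocentric: the retinal trace is countershifted by the saccade
% \Delta e = -30^{\circ}, then its eccentricity is magnified at pointing.
r' = r - \Delta e = 0^{\circ} - (-30^{\circ}) = +30^{\circ}
\qquad
\hat{T}_{\mathrm{ocul}} = e' + (1+m)\,r' = -30^{\circ} + (1+m)(30^{\circ}) = m \cdot 30^{\circ}
```

With m ≈ 0.134 this is a roughly 4° rightward overshoot, i.e., the black sector in D, directed opposite to the gaze line.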
Fig. 2.
Schematic representation of our three paradigms. Horizontal eye (dashed lines) and arm (solid lines) positions are plotted schematically against time. Thick black boxes indicate the location and duration of the pointing target (T) and fixation (F) lights; downward arrows identify the approximate time of selection for final pointing direction; and * indicates the time of the auditory warning signal. A, Control paradigm. B, Static paradigm. C, Dynamic paradigm. See Materials and Methods for explanation.
Fig. 3.
Trajectories of the eye and arm in the three basic paradigms. A–C, Horizontal eye (thin traces) and arm (thick traces) positions plotted against time for five consecutive trials for one subject. Black boxes indicate the central pointing-target light; hatched boxes indicate the 15° leftward fixation light. A, Control paradigm. B, Static paradigm. C, Dynamic paradigm. D–F, Two-dimensional eye (solid diamonds) and arm (open squares) trajectories for the same subject for five trials in each paradigm. D, Control paradigm. E, Static paradigm, 15° leftward fixation target. F, Dynamic paradigm, 15° leftward final fixation target.
Fig. 4.
A–F, Final 2-D pointing direction of arm (open squares, solid squares) and eye (open circles, solid circles) relative to central target light in the control (A, B), static (C, D), and dynamic (E, F) paradigms for 20 trials in one subject (A, C, E) and for averaged responses for all subjects (B, D, F). G, Scatter plot of mean individual horizontal pointing errors (in the dynamic paradigm) as a function of mean individual horizontal errors (in the static paradigm). H, Scatter plot as described for G but after subtraction of control horizontal pointing errors (i.e., mean dynamic minus control vs mean static minus control). Open symbols indicate 15° rightward fixation tasks; solid symbols identify 15° leftward fixation tasks.
Fig. 5.
Static and dynamic series. Final 2-D arm pointing directions (open squares) and eye fixations (solid circles) in one subject are shown, staggered 8° vertically for each of the seven different fixation lights. A, Static series. B, Dynamic series. Dashed lines join the corresponding groups of gaze and pointing directions.
Fig. 6.
Final horizontal pointing errors in the static (dashed lines) and dynamic (solid lines) series averaged across trials and plotted as a function of angular eye displacement relative to the pointing target for each subject (A–G) and further averaged across subjects (H). Vertical lines indicate SEs between means of subjects.
Fig. 7.
Regression lines for horizontal pointing errors in the static and dynamic series. A, Predicted slopes of the oculocentric model (dashed line) and headcentric model (dotted line) of visuospatial memory. B, Regression line of average pointing responses in one subject. Error bars represent SEs of each mean. C, Similar regression lines for all seven subjects. D, Grand average slope (dashed line), i.e., average of the seven individual slopes, and slope fit to the indicated averages of means across all subjects (solid line).
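
The slope statistic underlying this figure is an ordinary least-squares fit of dynamic-series errors against static-series errors. The sketch below uses made-up error values purely for illustration; the authors' actual data and analysis code are not reproduced here:

```python
import numpy as np

# Hypothetical per-fixation mean horizontal errors (deg); x = static
# series, y = dynamic series. A fitted slope near 0 supports the
# headcentric model, a slope near 1 the oculocentric model (cf. panel A).
static_err  = np.array([ 4.1,  2.0,  0.1, -1.9, -4.0])
dynamic_err = np.array([ 3.9,  2.1,  0.0, -2.0, -3.8])

slope, intercept = np.polyfit(static_err, dynamic_err, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f} deg")
```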
Fig. 8.
Three series of average (across subjects) pointing responses to three different Ts: center (solid squares), 15° right (solid diamonds), and 30° right (solid triangles). In each case, the subject always began the trial by fixating T and only saccaded peripherally when T was extinguished. A, Plotted as a function of fixation direction, i.e., eye-in-head position. B, Plotted as a function of gaze relative to target. The latter is the negative of retinal displacement. (We used this reversal so that the slope could be more easily compared with that in A.) Only fixation targets at 15° intervals are plotted, so that all points are vertically comparable in both coordinate systems.
Fig. 9.
Conversion-on-demand hypothesis of visuomotor representation and control. Object location is initially perceived and stored in an oculocentric frame (A). Broken lines denote internally represented vectors. During intervening saccades, this representation is rotated by an internal estimate of the inverse of eye rotation in space (see Appendix). When a final target representation is chosen for action (B), it is rotated by an internal estimate of eye-in-head orientation to provide a representation in the headcentric frame (C). The visuomotor magnification effect would most likely occur near this stage. Further compensations for head-on-torso position provide a body-centered representation (D). This peripersonal target representation (Soechting et al., 1991; Flanders et al., 1992; Brotchie et al., 1995) is then converted (through inverse kinematics) into a desired arm position in multijoint space (E). This stage seems to optimize kinematic constraints for extended-arm pointing (Hore et al., 1992; Crawford and Vilis, 1995) but also optimizes dynamic constraints related to initial position (F) for less-constrained pointing (Soechting et al., 1995). Finally, the command for desired arm position is compared with an internal representation of current arm position (F) to compute the “motor” error signal that drives the downstream inverse dynamics and forward dynamics/kinematics of the arm. As outlined in Appendix, we hypothesize that the conversion from sensory (A, B) to motor (C, D) frames occurs between posterior parietal/premotor cortex and primary motor cortex, perhaps coordinated across the cortex by the caudate loop of the basal ganglia.
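
The staged pipeline in this legend can be caricatured in one dimension. The sketch below is our own reading of the hypothesis: the function names and the trivial two-joint split are illustrative, and a faithful implementation would need the noncommutative 3-D rotations that the legend and Appendix describe:

```python
def remap_on_saccade(targets_ocul, saccade_deg):
    """Stage A: every stored oculocentric trace is counter-rotated by an
    internal estimate of each eye movement (1-D stand-in for the
    noncommutative 3-D rotation treated in the Appendix)."""
    return [t - saccade_deg for t in targets_ocul]

def to_headcentric(target_ocul, eye_in_head_deg, magnification=0.0):
    """Stages B-C: only the target selected for action is converted, by
    adding eye-in-head orientation; the visuomotor magnification effect
    would scale the retinal eccentricity near this stage."""
    return eye_in_head_deg + (1.0 + magnification) * target_ocul

def to_bodycentric(target_head_deg, head_on_torso_deg):
    """Stage D: compensate for head-on-torso position."""
    return target_head_deg + head_on_torso_deg

def inverse_kinematics(target_body_deg):
    """Stage E: map the desired pointing direction to joint angles; an
    even split across shoulder and elbow stands in for the real
    multijoint solution with its kinematic and dynamic constraints."""
    return {"shoulder_deg": 0.5 * target_body_deg,
            "elbow_deg": 0.5 * target_body_deg}

# Example: central target foveated (0 deg), then a 30 deg leftward saccade.
stored = remap_on_saccade([0.0], saccade_deg=-30.0)      # trace -> +30 deg
head = to_headcentric(stored[0], eye_in_head_deg=-30.0,  # -30 + 1.134 * 30
                      magnification=0.134)
body = to_bodycentric(head, head_on_torso_deg=0.0)
print(f"pointing error: {body:+.2f} deg")                # +4.02 deg overshoot
print(inverse_kinematics(body))
```

Note how the magnification applied at the on-demand conversion step, rather than at storage, reproduces the dynamic-paradigm overshoot even though the remapping itself is exact.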

References

    1. Andersen RA, Essick GK, Siegel RM. Encoding of spatial location by posterior parietal neurons. Science. 1985;230:456–458.
    2. Bock O. Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Exp Brain Res. 1986;64:467–482.
    3. Bock O, Eckmiller R. Goal-directed arm movements in absence of visual guidance: evidence for amplitude rather than position control. Exp Brain Res. 1986;64:451–458.
    4. Brotchie PR, Andersen RA, Snyder H, Goodman SJ. Head position signals used by parietal neurons to encode locations of visual stimuli. Nature. 1995;375:232–235.
    5. Cai RH, Pouget A, Schlag-Rey M, Schlag J. Perceived geometrical relationships affected by eye-movement signals. Nature. 1997;386:601–604.
