J Neurosci. 1998 Oct 15;18(20):8423-35. doi: 10.1523/JNEUROSCI.18-20-08423.1998.

Short-term memory for reaching to visual targets: psychophysical evidence for body-centered reference frames

J McIntyre et al. J Neurosci. 1998.

Abstract

Pointing to a remembered visual target involves the transformation of visual information into an appropriate motor output, with a passage through short-term memory storage. In an attempt to identify the reference frames used to represent the target position during the memory period, we measured errors in pointing to remembered three-dimensional (3D) targets. Subjects pointed after a fixed delay to remembered targets distributed within a 22 mm radius volume. Conditions varied in terms of lighting (dim light or total darkness), delay duration (0.5, 5.0, and 8.0 sec), effector hand (left or right), and workspace location. Pointing errors were quantified by 3D constant and variable errors and by a novel measure of local distortion in the mapping from target to endpoint positions. The orientation of variable errors differed significantly between light and dark conditions. Increasing the memory delay in darkness evoked a reorientation of variable errors, whereas in the light, the viewer-centered variability changed only in magnitude. Local distortion measurements revealed an anisotropic contraction of endpoint positions toward an "average" response along an axis that points between the eyes and the effector arm. This local contraction was present in both lighting conditions. The magnitude of the contraction remained constant for the two memory delays in the light but increased significantly for the longer delays in darkness. These data argue for the separate storage of distance and direction information within short-term memory, in a reference frame tied to the eyes and the effector arm.
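As a concrete illustration of the error measures named in the abstract, the sketch below computes a 3D constant error and the variable-error eigenstructure (covariance eigendecomposition) from arrays of target and endpoint positions. The array names, the NumPy implementation, and the synthetic data are assumptions made here for illustration, not the authors' analysis code.

```python
# Minimal sketch (not the published code): 3D constant and variable errors
# from hypothetical arrays `targets` and `endpoints`, each shaped (n_trials, 3).
import numpy as np

def pointing_errors(targets, endpoints):
    """Return the 3D constant error and the variable-error eigenstructure."""
    errors = endpoints - targets               # per-trial 3D error vectors
    constant_error = errors.mean(axis=0)       # mean shift of endpoints from targets
    residuals = errors - constant_error        # variability about the mean response
    cov = np.cov(residuals, rowvar=False)      # 3x3 covariance of variable errors
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = eigvals.argsort()[::-1]            # index 0 = axis of maximum variability
    return constant_error, eigvals[order], eigvecs[:, order]

# Usage with synthetic data:
rng = np.random.default_rng(0)
targets = rng.uniform(-0.1, 0.1, size=(40, 3))
endpoints = targets + rng.normal(0.0, 0.01, size=(40, 3))
const_err, var_eigvals, var_eigvecs = pointing_errors(targets, endpoints)
```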


Figures

Fig. 1.
Definition of local distortion. a, When pointing accurately, the endpoint positions (•) reproduce the spatial organization of the target locations (○). b, Transformation from target to endpoint positions with a large constant error but no local distortion. c–f, Types of local distortion that can be introduced by a linear transformation, excluding rotations: local expansion (c), local contraction (d), and anisotropic expansion and contraction aligned with two different axes (e, f).
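One way to estimate the kind of local linear transformation this figure defines is a least-squares affine fit from target to endpoint positions, followed by a polar decomposition to separate the rotation from the expansion/contraction gains. The sketch below makes those assumptions explicit; it is an illustration of the idea, not the published algorithm.

```python
# Sketch (assumed method): estimate a local linear transformation from targets
# to endpoints and split it into a rotation and a symmetric stretch.
import numpy as np

def local_transformation(targets, endpoints):
    """Least-squares fit endpoints ~ A @ targets + b, then polar-decompose A."""
    T = targets - targets.mean(axis=0)
    E = endpoints - endpoints.mean(axis=0)
    X, *_ = np.linalg.lstsq(T, E, rcond=None)    # solves T @ X ≈ E
    A = X.T                                       # endpoint ≈ A @ target (column vectors)
    U, s, Vt = np.linalg.svd(A)
    R = U @ Vt                                    # rotation component
    P = Vt.T @ np.diag(s) @ Vt                    # symmetric stretch component
    gains, axes = np.linalg.eigh(P)               # gains < 1 indicate contraction
    return R, gains, axes                         # smallest gain's axis = max contraction
```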
Fig. 2.
Intersubject variability and the computation of ensemble averages. Each panel represents an equal-area projection of direction vectors into the horizontal plane, for trials to the mid-target region in the dark–short (A) and dark–long (B) conditions. Each filled circle represents the average response for a single subject for (1) the constant error, (2) variable error (first eigenvector indicating the direction of maximum variability), (3) local distortion (third eigenvector indicating the axis of maximum contraction), and (4) rotation axis within the local transformation. Points near the center of each panel represent upward pointing vectors, whereas points near the edge of the bounding circle indicate forward, backward, leftward, or rightward directions for the top, bottom, left, and right edges, respectively. Direction vectors are clustered for the variable error and local distortion vectors but not for the constant error directions or rotation axes. Open circles indicate the average of the individual direction vectors for the distributions showing significant clustering. The symbol X indicates the direction vector computed from the corresponding ensemble covariance or local transformation matrix.
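A projection of this kind can be built, under assumed axis conventions (x = right, y = forward, z = up), with a Lambert azimuthal equal-area mapping of unit direction vectors onto the horizontal plane. The sketch below is one such implementation and is not necessarily the mapping used in the paper.

```python
# Sketch of an equal-area (Lambert azimuthal) projection of 3D direction
# vectors onto the horizontal plane; axis conventions are assumptions.
import numpy as np

def equal_area_projection(directions):
    """Map unit vectors (n, 3) to 2D points; upward vectors land near the origin."""
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    r = np.sqrt(2.0 * (1.0 - z))                  # radius chosen so area is preserved
    phi = np.arctan2(y, x)                        # azimuth in the horizontal plane
    return np.column_stack([r * np.cos(phi), r * np.sin(phi)])
```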
Fig. 3.
Average variable errors across subjects for two lighting conditions and two delays, viewed from above (A), from the right side (B), and perpendicular to the plane of movement (C). Ellipsoids represent the tolerance region containing 95% of responses (see Materials and Methods). Dark line segments indicate the direction of the major eigenvector computed for the tolerance ellipsoid. For movements in the dark, the major eigenvectors of the ensemble averages rotate upward and away from the starting position, in contrast to the head-centered eigenvector directions seen in the light.
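For a Gaussian model of the variable errors, a 95% region like the ellipsoids shown here can be scaled from the error covariance with a chi-squared quantile. The sketch below uses that standard scaling (SciPy is an assumed dependency); the paper's exact tolerance-region computation, described in its Materials and Methods, may differ.

```python
# Sketch: semi-axes and orientations of a 95% ellipsoid for 3D Gaussian errors,
# using the standard chi-squared scaling of the covariance eigenvalues.
import numpy as np
from scipy.stats import chi2

def ellipsoid_95(cov):
    """Return semi-axis lengths and axis directions of a 95% error ellipsoid."""
    eigvals, eigvecs = np.linalg.eigh(cov)        # principal variances and axes
    k = np.sqrt(chi2.ppf(0.95, df=3))             # ~2.80 for 3 degrees of freedom
    return k * np.sqrt(eigvals), eigvecs          # semi-axes scale with the std dev per axis
```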
Fig. 4.
Variable errors for two different starting positions and two different effector hands, averaged across subjects, for pointing in the dark with a 5.0 sec delay. The orientation of the variable error ellipsoid is affected by the relative starting position of the hand but not by the hand used to perform the pointing. Note the change of scale for ellipsoids viewed in the plane of movement.
Fig. 5.
Constant errors for two different starting positions and two different effector hands, averaged across subjects, for pointing in the dark with a 5.0 sec memory delay. Dark bars indicate the direction and extent of the average constant error vector (magnified 5× for visibility), pointing away from the target position indicated by the small sphere. Note the change of scale for data viewed in the plane of movement.
Fig. 6.
Average local transformation ellipsoids for two lighting conditions and two delays. Ellipsoids indicate the local distortions induced by the sensorimotor transformation, as estimated by a linear approximation to the local transformation (see Materials and Methods). The unit sphere indicates the ellipsoid corresponding to an ideal, distortion-free local transformation. Dark bars indicate the direction of the third (minor) eigenvector, indicating the axis of maximum local contraction. Under all lighting conditions, axes of maximal contraction point toward the subject.
Fig. 7.
Eigenvalues of the local transformation estimate. Eigenvalues are unitless gains indicating spatial expansion or contraction in 3D target-to-endpoint mappings. Eigenvalues >1 indicate magnification of the local space along the corresponding eigenvector, whereas eigenvalues <1 indicate spatial contraction. First and second eigenvalues are averaged (left column) and compared with the third eigenvalue (center column) representing the amount of maximal contraction along the corresponding eigenvector. The right column shows the ratio of the third eigenvalue over the average of the first and second, indicating the amount of distortion introduced in the visuomotor transformation. Contraction is relatively constant in the light, whereas contraction increases with memory delay duration in the dark.
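The ratio plotted in the right column follows directly from the three gains of the local transformation. The short sketch below (a hypothetical helper, not the authors' code) makes the arithmetic explicit.

```python
# Sketch of the distortion index for Fig. 7: the smallest (third) gain divided
# by the mean of the two larger gains; values well below 1 indicate anisotropic
# contraction along the third eigenvector.
import numpy as np

def distortion_index(gains):
    """gains: the three eigenvalues (unitless expansion/contraction factors)."""
    g = np.sort(np.asarray(gains))[::-1]          # g[0] >= g[1] >= g[2]
    return g[2] / g[0:2].mean()

print(distortion_index([1.05, 0.98, 0.70]))       # e.g. ~0.69
```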
Fig. 8.
Effects of workspace region and movement starting position on estimates of the local transformation. Axes of maximum contraction are biased slightly toward the side of the effector hand, independent of the starting hand position.
Fig. 9.
Summary of results regarding the sensorimotor chain for pointing to remembered targets. A, Viewer-centered visual inputs are passed through internal transformations that compress the target position along a body-centered axis as a function of memory delay and are then transformed into a motor command. Ellipsoids marked with red bars indicate variable errors, for which the red bar indicates the direction of maximum variability. Ellipsoids marked with blue bars indicate estimates of local distortion, for which the blue bar indicates the axis of maximum contraction. B, C, In the schematic diagrams of the sensorimotor processes used in pointing to remembered targets, circles depict data representations within a specific reference frame, whereas squares indicate transformations between coordinate systems. Two models can capture the observed behavior. In both models, binocular visual inputs are transformed into a viewer-centered visual reference frame, with contraction of data along the sight line. Data are then transformed into a motor reference frame linked to the effector arm, with additional contraction along a shoulder-centered axis. In B, the final output stage includes a distortionless transformation through a hand-centered reference frame. In C, a parallel, dynamic component is added to the remembered endpoint position to generate the final motor command. In both cases, if vision of the hand is permitted during the pointing movement, the observed final finger position is compared with the visual memory of the target to reduce errors at the output.
