J Neurophysiol. 2010 Aug;104(2):1077-89. doi: 10.1152/jn.00326.2010. Epub 2010 Jun 10.

Surface-based information mapping reveals crossmodal vision-action representations in human parietal and occipitotemporal cortex

Nikolaas N Oosterhof et al. J Neurophysiol. 2010 Aug.

Abstract

Many lines of evidence point to a tight linkage between the perceptual and motoric representations of actions. Numerous demonstrations show how the visual perception of an action engages compatible activity in the observer's motor system. This is seen for both intransitive actions (e.g., in the case of unconscious postural imitation) and transitive actions (e.g., grasping an object). Although the discovery of "mirror neurons" in macaques has inspired explanations of these processes in human action behaviors, the evidence for areas in the human brain that similarly form a crossmodal visual/motor representation of actions remains incomplete. To address this, in the present study, participants performed and observed hand actions while being scanned with functional MRI. We took a data-driven approach by applying whole-brain information mapping using a multivoxel pattern analysis (MVPA) classifier, performed on reconstructed representations of the cortical surface. The aim was to identify regions in which local voxelwise patterns of activity can distinguish among different actions, across the visual and motor domains. Experiment 1 tested intransitive, meaningless hand movements, whereas experiment 2 tested object-directed actions (all right-handed). Our analyses of both experiments revealed crossmodal action regions in the lateral occipitotemporal cortex (bilaterally) and in the left postcentral gyrus/anterior parietal cortex. Furthermore, in experiment 2 we identified a gradient of bias in the patterns of information in the left hemisphere postcentral/parietal region. The postcentral gyrus carried more information about the effectors used to carry out the action (fingers vs. whole hand), whereas anterior parietal regions carried more information about the goal of the action (lift vs. punch). Taken together, these results provide evidence for common neural coding in these areas of the visual and motor aspects of actions, and demonstrate further how MVPA can contribute to our understanding of the nature of distributed neural representations.
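
As an illustration of the crossmodal classification step described above, the following minimal Python sketch trains a linear discriminant analysis (LDA) classifier on the patterns from one modality and tests it on the other, averaging the two directions ("see" to "do" and "do" to "see"). The arrays X_see, X_do, y_see, and y_do are hypothetical placeholders for the voxel patterns and action labels within one searchlight neighborhood; this is not the authors' implementation.

    # Minimal sketch of crossmodal MVPA classification for one searchlight
    # neighborhood (hypothetical data layout; not the authors' code).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def crossmodal_accuracy(X_see, y_see, X_do, y_do):
        """Train LDA on one modality, test on the other, average both directions."""
        accuracies = []
        for X_train, y_train, X_test, y_test in [(X_see, y_see, X_do, y_do),
                                                 (X_do, y_do, X_see, y_see)]:
            clf = LinearDiscriminantAnalysis()
            clf.fit(X_train, y_train)
            accuracies.append(clf.score(X_test, y_test))
        return float(np.mean(accuracies))

    # Example with random data: 24 trials x 50 voxels per modality and
    # 3 intransitive actions, as in experiment 1 (chance = 1/3).
    rng = np.random.default_rng(0)
    X_see, X_do = rng.normal(size=(24, 50)), rng.normal(size=(24, 50))
    y_see = y_do = np.repeat([0, 1, 2], 8)
    print(crossmodal_accuracy(X_see, y_see, X_do, y_do))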

Figures

Fig. 1.
Schematic illustration of the trial structure in experiment 1. Each block began with a warning signal, followed by a 1.5 s movie showing one of 3 simple, intransitive manual actions. A task cue (“see” or “do”) and a blank interval then followed. On “see” trials, the same movie was then presented 8 times in succession, with a 0.5 s blank interval between each movie presentation. On “do” trials, a central fixation dot grew larger for 1.5 s and then shrank again for 0.5 s, in a cycle that repeated 8 times and that was matched to the cycle of movie presentations in the “see” condition. In the “do” condition, participants were required to perform the action that had appeared at the start of the block, in synchrony with the expansion of the fixation point.
Fig. 2.
Comparison of voxel selection methods in information mapping. A: schematic representation of a brain slice, with white matter, gray matter, and matter outside the brain indicated. The curved lines represent the white matter–gray matter boundary, the gray matter–pial surface boundary, and the skull. With the traditional volume-based voxel selection method for multivoxel pattern analysis, a voxel (blue) is taken as the center of a sphere (red; represented by a circle) and all voxels within the sphere are selected for further pattern analysis. B: an improvement over A, in that only gray matter voxels are selected. The gray matter can be defined either using a probability map or using a cortical surface reconstruction. A limitation, however, is that voxels that are close in Euclidean distance but far apart in geodesic distance (i.e., measured along the cortical surface) are included in the selection, as illustrated by the 3 voxels on the left. C: using surface reconstruction, the white matter–gray matter and gray matter–pial surfaces are averaged, resulting in an intermediate surface that is used to measure geodesic distances. A node on the intermediate surface (blue) is taken as the center of a circle (red; represented by a solid line), the corresponding circles on the white–gray matter and gray matter–pial surfaces are constructed (red dashed lines), and only voxels between these 2 circles are selected.
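
The selection scheme in C could be sketched roughly as follows (Python). The function names and inputs (a precomputed geodesic distance matrix, white/gray and gray/pial node coordinates, and a mapping from spatial coordinates to voxel indices) are hypothetical, and a full implementation would also need to compute geodesic distances on the surface mesh.

    # Rough sketch of the surface-based voxel selection of panel C.
    # Inputs are hypothetical: geodesic_dist is a node x node matrix of
    # distances on the intermediate surface, white_coords and pial_coords
    # are node coordinates on the two surfaces, and coord_to_voxel maps a
    # 3D point to the index of the voxel containing it.
    import numpy as np

    def select_voxels(center_node, radius_mm, geodesic_dist,
                      white_coords, pial_coords, coord_to_voxel, steps=10):
        # Nodes within the geodesic radius on the intermediate surface.
        disc_nodes = np.where(geodesic_dist[center_node] <= radius_mm)[0]
        voxels = set()
        for node in disc_nodes:
            # Sample points between the white/gray and gray/pial surfaces
            # and collect the voxels they fall into.
            for t in np.linspace(0.0, 1.0, steps):
                point = (1.0 - t) * white_coords[node] + t * pial_coords[node]
                voxels.add(coord_to_voxel(point))
        return voxels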
Fig. 3.
Group crossmodal surface information map for experiment 1, generated using multivoxel pattern analysis with a linear discriminant analysis (LDA) classifier with training and test data from different (“see” vs. “do”) modalities. A: the colored brain clusters (see Table 1) indicate vertices where gray matter voxels within the surrounding circle on the cortical surface show above-chance crossmodal information (random effects analysis, thresholded for cluster size). Crossmodal visuomotor information about intransitive manual actions is found in the left hemisphere at the junction of the intraparietal and postcentral sulci and bilaterally in lateral occipitotemporal cortex. For each node this is based on 2 classifications, in which either the data from the “see” condition were used to train the classifier and the data from the “do” condition were used as test data or vice versa. Insets: detailed view of the significant clusters. B: the same map as A, but without cluster thresholding. The color map legend (bottom left) shows the t-value of the group analysis against chance accuracy for A and B. C: like A, except that mean classification accuracy values (chance = 33.3%) are depicted. D: like C, without cluster thresholding. The color map legend (bottom right) shows the accuracy scale for C and D. CS, central sulcus; PoCS, postcentral sulcus; IPS, intraparietal sulcus; STS, superior temporal sulcus.
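
The group analysis against chance in A and B amounts to a node-wise one-sample t-test of classification accuracy across participants; a minimal sketch is given below, with cluster-size thresholding omitted. The array acc is a hypothetical participants × nodes matrix of crossmodal accuracies.

    # Node-wise random-effects test of accuracy against chance (sketch only;
    # cluster-size thresholding is not shown).
    import numpy as np
    from scipy import stats

    def group_t_against_chance(acc, chance=1.0 / 3.0):
        # acc: participants x nodes array of classification accuracies.
        t, p = stats.ttest_1samp(acc, popmean=chance, axis=0)
        return t, p

    rng = np.random.default_rng(1)
    acc = rng.normal(loc=0.35, scale=0.05, size=(12, 1000))  # hypothetical group data
    t_map, p_map = group_t_against_chance(acc)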
Fig. 4.
Similarity matrices for evaluation of experiment 2 cross-validation classification results. Each row and each column (for training set and test set, respectively) represents one of the 8 conditions in the experiment, formed by the combination of modality (see, do) × effector (finger, hand) × goal (lift, punch). Where functional magnetic resonance imaging (fMRI) activity patterns are predicted to be similar (across training and test set, for a given brain region and a given participant), a matrix cell is marked with a pink square. Conversely, cells that were used in the cross-validation scheme but for which no similarity between patterns is predicted are indicated with a gray square. A: this example represents predicted similarity for within-modality “do” action representation. The fMRI activity patterns elicited by performing a given action are predicted to be similar across multiple executions of that action, compared with a different action. B and C: similarity matrices for within-modality “see” and crossmodal action representation. In the crossmodal case (C), the prediction is that the fMRI activity pattern elicited by performing a given action will be similar to that elicited by seeing that action (relative to other actions) and vice versa. D and E: similarity matrices for representation of goal irrespective of effector and vice versa. Note that both cases reflect information carried across modalities. F: similarity matrix for the contrast of goal vs. effector, where blue squares indicate similarity of patterns, but with a negative weight. Note that this matrix represents the difference between the matrices in D and E. Also note that the matrices in A–C are equally applicable to experiment 1, but with 3 actions in each modality instead of 4.
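
For illustration, the crossmodal matrix in C could be encoded as follows, assuming conditions ordered as modality × effector × goal, with 1 marking cells where patterns are predicted to be similar (same action, other modality) and 0 elsewhere. This sketch illustrates the scheme only; it is not the authors' code.

    # Sketch of the crossmodal similarity matrix of panel C (illustrative only).
    import itertools
    import numpy as np

    conditions = list(itertools.product(["see", "do"],
                                        ["finger", "hand"],
                                        ["lift", "punch"]))

    def crossmodal_matrix(conditions):
        n = len(conditions)
        sim = np.zeros((n, n))
        for i, (mod_i, eff_i, goal_i) in enumerate(conditions):
            for j, (mod_j, eff_j, goal_j) in enumerate(conditions):
                # Similar: same effector and goal, but different modality.
                if mod_i != mod_j and (eff_i, goal_i) == (eff_j, goal_j):
                    sim[i, j] = 1.0
        return sim

    print(crossmodal_matrix(conditions))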
Fig. 5.
Experimental stimuli from experiment 2. A: frame capture from video recording during experiment 2, showing the position of the participant's hand, experimenter's hand, and the target object during a null (no action) trial. B: similar to A, but the experimenter performs a “punch hand” action that is observed by the participant. C: frames illustrating each of the 4 actions used in the experiment, formed by crossing effector (finger, hand) × goal (lift, punch).
Fig. 6.
Schematic of the trial structure for experiment 2. The top row shows the series of events in “see” trials and the bottom row events in “do” trials.
Fig. 7.
Group crossmodal surface information map for experiment 2. A: cluster-thresholded map (conventions as in Fig. 3). Crossmodal visuomotor information about transitive manual actions is found in the left hemisphere, around the junction of the intraparietal and postcentral sulci, and bilaterally in lateral occipitotemporal cortex (see Table 2). B: the same map as that in A, without cluster thresholding. The color map legend (bottom left) shows the t-value of the group analysis against chance accuracy for A and B. C: like A, except that mean classification accuracy values (chance = 25%) are depicted. D: like C, without cluster thresholding. The color map legend (bottom right) shows the accuracy scale for C and D. CS, central sulcus; PoCS, postcentral sulcus; IPS, intraparietal sulcus; STS, superior temporal sulcus.
Fig. 8.
Regions in which representations are biased for effector or goal (experiment 2). These data were first masked to select regions for which accuracy in the overall crossmodal analysis (Fig. 7) was above chance. Vertices are colored to indicate a bias in favor of either discrimination of the action effector (blue/cyan) or discrimination of the action goal (red/yellow). Areas with no bias are shown in green. Note a gradient in the bias from effector (postcentral gyrus) to goal (superior parietal cortex).
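
One way such a bias map could be computed is sketched below, under the assumption that the bias is the difference between goal and effector classification accuracies, restricted to nodes above chance in the overall crossmodal map; the arrays are hypothetical and this is not the authors' formulation.

    # Sketch of an effector-vs.-goal bias map (assumed formulation).
    # acc_goal and acc_effector are per-node accuracies; crossmodal_mask
    # marks nodes above chance in the overall crossmodal map.
    import numpy as np

    def bias_map(acc_goal, acc_effector, crossmodal_mask):
        # Positive values: bias toward goal; negative values: bias toward effector.
        bias = acc_goal - acc_effector
        return np.where(crossmodal_mask, bias, np.nan)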
