A multisensory approach to spatial updating: the case of mental rotations

Manuel Vidal et al. Exp Brain Res. 2009 Jul;197(1):59-68. doi: 10.1007/s00221-009-1892-4. Epub 2009 Jun 21.

Abstract

Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint. Such changes arise either from the rotation of the test object array or from the rotation of the observer. Previous studies showed that the cognitive cost of mental rotations is reduced when viewpoint changes result from the observer's motion, which was explained by the spatial updating mechanism engaged during self-motion. However, little is known about how the various available sensory cues contribute to updating performance. We used a virtual reality setup in a series of experiments to investigate table-top mental rotations under different combinations of modalities among vision, body and audition. We found that mental rotation performance gradually improved as sensory cues were added to the moving observer (from None to Body or Vision, and then to Body & Audition or Body & Vision), whereas processing time dropped to the same level in all sensory contexts. These results are discussed in terms of an additive contribution of co-activated sensory modalities to the spatial updating mechanism engaged during self-motion. Interestingly, this multisensory approach can account for several different findings reported in the literature.
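The additive account suggested by these results can be stated compactly. As a minimal sketch (the notation below is ours, offered as an illustration rather than as the authors' formal model), let the mental rotation cost decrease from a baseline by an independent amount per co-activated cue:

\[
  C(\mathcal{S}) \;=\; C_{0} \;-\; \sum_{i \in \mathcal{S}} \beta_{i},
  \qquad \beta_{i} \geq 0,
\]

where \(\mathcal{S} \subseteq \{\text{Body}, \text{Vision}, \text{Audition}\}\) is the set of self-motion cues available, \(C_{0}\) is the cost with no cues (the None context), and \(\beta_{i}\) is the cost reduction attributable to cue \(i\). The reported accuracy ordering (None, then Body or Vision alone, then Body & Audition or Body & Vision) is what such an additive model predicts when the single-cue contributions are of comparable size.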


Figures

Fig. 1
Left: A sketch of the experimental setup. Participants sat inside a closed cabin mounted on a motion platform. The cabin contained a front projection screen displaying the virtual scene and, in the middle, a table with an embedded screen and touch panel that displayed the test object layout and recorded the participants' answers. The different motion cues available during the viewpoint changes were achieved with a combination of the following manipulations: P, the platform rotation; R, the room rotation on the front screen; T, the layout rotation on the table screen; and a speaker providing a stable external sound cue. Right: The five objects used in the spatial layouts: a mobile phone, a shoe, an iron, a teddy bear and a roll of film.
Fig. 2
Illustration of the experimental conditions according to the different simulated self-motion sensory contexts (consistent manipulations of the body's physical position, the visual orientation in the virtual room and the external sound source). P, R and T indicate the technical manipulations, detailed in Fig. 1, involved in each condition. The first column shows the learning context; the other two show the corresponding test conditions, either with an egocentric rotation of the layout's view (five mental rotation conditions) or without one (five control conditions). The two asterisked sensory contexts at the bottom were studied in the validation experiment published elsewhere (Lehmann et al. 2008).
Fig. 3
Mental rotation task performance plotted together with the results from the previous validation experiment: average accuracy and reaction times as a function of the change in layout view (top plots), and the corresponding mental rotation costs (bottom plots), for the various sensory combination contexts: Body, Vision and Body & Audition (from the current experiments), and None and Body & Vision (from the previous validation experiment). Error bars correspond to the inter-individual standard error.
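For readers comparing panels, the mental rotation cost in the bottom plots is, by convention in this paradigm, the performance difference between the rotation and control conditions. As a hedged sketch (the exact definition is not given in this caption), for reaction times and accuracy respectively:

\[
  \Delta RT = RT_{\text{rotation}} - RT_{\text{control}},
  \qquad
  \Delta Acc = Acc_{\text{control}} - Acc_{\text{rotation}}.
\]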


References

    1. Amorim MA, Stucchi N (1997) Viewer- and object-centered mental explorations of an imagined environment are not equivalent. Brain Res Cogn Brain Res 5:229–239 - PubMed
    2. Amorim MA, Glasauer S, Corpinot K, Berthoz A (1997) Updating an object's orientation and location during nonvisual navigation: a comparison between two processing modes. Percept Psychophys 59:404–418 - PubMed
    3. Burgess N, Spiers HJ, Paleologou E (2004) Orientational manoeuvres in the dark: dissociating allocentric and egocentric influences on spatial memory. Cognition 94:149–166 - PubMed
    4. Christou C, Bülthoff HH (1999) The perception of spatial layout in a virtual world, vol 75. Max Planck Institute Technical Report, Tübingen, Germany
    5. Christou CG, Tjan BS, Bülthoff HH (2003) Extrinsic cues aid shape recognition from novel viewpoints. J Vis 3:183–198 - PubMed
