Cross-modal object recognition is viewpoint-independent

Simon Lacey et al. PLoS One. 2007 Sep 12;2(9):e890.
doi: 10.1371/journal.pone.0000890.

Abstract

Background: Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.

Methodology/principal findings: Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180 degrees about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.
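
To make the rotation manipulation concrete, the following minimal sketch (illustrative only, not from the paper; the function name and object coordinates are hypothetical) shows the standard 3×3 matrices for a 180-degree rotation about the x-, y- and z-axes. Each such rotation negates the two coordinates orthogonal to its axis, so for a viewer looking along the z-axis, x- and y-rotations bring previously hidden surfaces into view, whereas a z-rotation merely reorients the same facing surface within the image plane.

    # Illustrative sketch only (not from the paper): standard 180-degree
    # rotation matrices about the x-, y- and z-axes, applied to a
    # hypothetical object point.
    import numpy as np

    def rotation_180(axis: str) -> np.ndarray:
        """Return the 3x3 matrix for a 180-degree rotation about 'x', 'y' or 'z'."""
        c, s = -1.0, 0.0  # cos(180 deg) = -1, sin(180 deg) = 0
        if axis == "x":
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
        if axis == "y":
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        if axis == "z":
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        raise ValueError(f"unknown axis: {axis!r}")

    point = np.array([1.0, 2.0, 3.0])  # hypothetical coordinates
    for axis in ("x", "y", "z"):
        print(axis, rotation_180(axis) @ point)
    # x [ 1. -2. -3.]   y [-1.  2. -3.]   z [-1. -2.  3.]

This geometry is consistent with the finding above that visual within-modal recognition suffered more from x- and y-rotations than from z-rotation.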

Conclusions/significance: The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly because surface occlusion is important in vision but not in touch.

Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1. An example object used in the present study, in the original orientation (A) and rotated 180° about the z-axis (B), x-axis (C) and y-axis (D).

Figure 2. The effect on recognition accuracy of rotating objects away from the learned orientation was confined to the within-modal conditions, with no effect in the cross-modal conditions. (Error bars = s.e.m.; asterisk = significant difference; horizontal line = chance performance at 25% in the four-alternative forced-choice task.)

Figure 3. Interaction between modality and rotation. Rotation away from the learned orientation affected only within-modal, not cross-modal, recognition accuracy. (Error bars = s.e.m.; asterisk = significant difference; horizontal line = chance performance at 25% in the four-alternative forced-choice task.)

Figure 4. Interaction between the within-modal conditions and the axis of rotation. Haptic within-modal recognition accuracy was equally disrupted by rotation about each axis, whereas visual within-modal recognition was disrupted more by the x- and y-rotations than by the z-rotation. The graph shows the percentage decrease in accuracy due to rotating the object away from the learned view. (Error bars = s.e.m.; asterisk = significant difference.)

Figure 5. Scatterplots showing that OSIQ-spatial imagery scores correlate with cross-modal (A & B) but not within-modal (C & D) object recognition accuracy.


