Comparative Study

Hum Brain Mapp. 2004 Aug;22(4):300-11. doi: 10.1002/hbm.20039.

Visual recognition of faces, objects, and words using degraded stimuli: where and when it occurs

Alan J Pegna et al.

Abstract

We studied the time course and cerebral localisation of word, object, and face recognition using event-related potentials (ERPs) and source localisation techniques. To compare the activation time courses of these three categories, we used degraded images that, once their meaning is revealed, pop out easily without any change in the physical features of the stimuli. Comparisons before and after identification show additional periods of activation beginning at 100 msec for faces and at around 200 msec for objects and words. For faces, this activation occurs predominantly in right temporal areas, whereas for objects, the specific time period gives rise to bilateral posterior but right-dominant foci. Finally, words show a maximum area of activation in the left temporo-occipital area at their specific time period. These results provide unequivocal evidence that when effects of low-level visual features are circumvented, faces, objects, and words are distinct not only in their anatomic routes but also in their times of processing.


Figures

Figure 1. Examples of the three categories of stimuli used during ERP recordings. Left: Visual stimuli as presented during the ERP measures. Right: Where in the visual scene the actual stimuli are hidden. A, B, C: Examples of faces, objects, and words, respectively.

Figure 2. Results of ERP map series segmentation. A: Segment maps (SMs) for the eight scalp potentials, explaining the four grand mean ERP map series, are illustrated with positive values in red and negative values in blue. SM2 (red) was specific to faces and was named SMF. SM5 (blue) was specific to objects and was named SMO. Finally, SM6 (green) was specific to words and named SMW. B: Grand mean ERP traces (all 125 channels superimposed) are shown for the unidentified condition (UC), identified faces (IF), identified objects (IO), and identified words (IW), from top to bottom, respectively. Arrows below the ERPs indicate the periods during which each SM was present.

Figure 3. Source localisation. A, B, C: Estimations obtained using the distributed linear source localisation (LAURA) procedure for the segment maps specific to faces (SMF), objects (SMO), and words (SMW), respectively. The current density maxima (green, low current density; red, high current density) are shown on six horizontal slices of the average brain. SMF shows right temporal gyrus activation, with a small secondary source in the left parietal region. SMO shows bilateral regions of activation, maximal in the right lingual and middle occipital gyri, with a symmetrical secondary source on the left. For SMW, a maximum is found in the posterior left lingual and middle occipital gyri, with a small secondary right source in the posterior cingulum.

