Modulation of perception and brain activity by predictable trajectories of facial expressions

N Furl et al. Cereb Cortex. 2010 Mar;20(3):694-703. doi: 10.1093/cercor/bhp140. Epub 2009 Jul 17.

Abstract

People track facial expression dynamics with ease to accurately perceive distinct emotions. Although the superior temporal sulcus (STS) appears to possess mechanisms for perceiving changeable facial attributes such as expressions, the nature of the underlying neural computations is not known. Motivated by novel theoretical accounts, we hypothesized that visual and motor areas represent expressions as anticipated motion trajectories. Using magnetoencephalography, we show that predictable transitions between fearful and neutral expressions (compared with scrambled and static presentations) heighten activity in visual cortex as quickly as 165 ms poststimulus onset and later (237 ms) engage fusiform gyrus, STS, and premotor areas. Consistent with proposed models of biological motion representation, we suggest that visual areas predictively represent coherent facial trajectories. We show that such representations bias emotion perception of subsequent static faces, suggesting that facial movements elicit predictions that bias perception. Our findings reveal critical processes evoked in the perception of dynamic stimuli such as facial expressions, which can endow perception with temporal continuity.


Figures

Figure 1.
Stimuli and procedures. (a) A morph continuum for one face. S1 presentations comprised predictable and scrambled animated sequences constructed using the 6 images between 28%–45% and 45%–63% and static images (28%, 45%, and 63%). (b) Factorial design. The factor sequence type controls whether sequences depict a coherent transition between neutral and fearful expressions or a scrambled, unpredictable version of this transition. For the factor expression type, we describe as “fearful” those sequences that transition predictably from neutral toward fear, together with the scrambled versions of these sequences. We describe as “neutral” those sequences that transition predictably from fear toward neutral, together with the scrambled versions of these sequences. (c) For each trial, S1 presentations were followed by an 800-ms blank screen and then a static 250-ms target (S2), which participants rated for fearfulness.
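To make the design in panels (a) and (b) concrete: a predictable sequence steps through the morph levels in order, while its scrambled counterpart presents the same frames in shuffled order. The sketch below is only an illustration; the six evenly spaced morph percentages per sub-range are an assumption, as the caption does not list the exact frame values.

```python
import random

def morph_levels(start, end, n=6):
    """Hypothetical evenly spaced morph percentages between two endpoints."""
    step = (end - start) / (n - 1)
    return [round(start + i * step, 1) for i in range(n)]

def make_sequence(start, end, scrambled=False, seed=None):
    """Coherent (predictable) frame order, or a shuffled (scrambled) version
    containing the same frames so low-level content is matched."""
    frames = morph_levels(start, end)
    if scrambled:
        random.Random(seed).shuffle(frames)
    return frames

# "Fearful" sequence: neutral -> fear (28% toward 45% fearfulness)
print(make_sequence(28, 45))           # predictable: monotonic increase
print(make_sequence(28, 45, True, 0))  # scrambled: same frames, random order
# "Neutral" sequence: fear -> neutral (reverse direction)
print(make_sequence(45, 28))
```

Reversing the endpoints yields the "neutral" direction, so sequence type and expression type can be crossed factorially exactly as in panel (b).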
Figure 2.
Behavioral results. Z-normalized means and standard errors of fear ratings to S2 faces following fear- and neutral-predictable sequences, scrambled sequences, and static S1 faces expressing 28%, 45%, and 63% fearfulness. Participants’ fear perception is biased in the direction predicted by the preceding predictable sequence.
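The z-normalization mentioned in the caption standardizes each participant's ratings (subtract the mean, divide by the standard deviation) so that means can be compared across participants who use the rating scale differently. A minimal sketch, with a made-up set of ratings for illustration:

```python
import statistics

def z_normalize(ratings):
    """Z-score one participant's fear ratings: mean 0, unit variance.
    (Illustrative helper; the paper only states that means are z-normalized.)"""
    mu = statistics.mean(ratings)
    sd = statistics.stdev(ratings)
    return [(r - mu) / sd for r in ratings]

ratings = [2, 3, 5, 6, 4]  # e.g., fearfulness ratings of S2 faces on a 1-7 scale
z = z_normalize(ratings)
print([round(v, 2) for v in z])
```

After this transform, condition means computed within each participant are on a common scale before averaging across the group.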
Figure 3.
Early occipital effects in sensor space. (a) Statistical parametric map of the t-statistic in sensor space at 165 ms for the contrast predictable > scrambled, showing a cluster peaking at occipital sensors. (b) Time course of response at a sensor (denoted by magenta cross in [a]) near the peak occipital effect. The M100 deflection is labeled, and the arrow indicates the effect of predictable dynamics. (c) Mean adjusted responses at the occipital peak showing activation height over conditions, at 165 ms including 90% confidence intervals (based on between-participant variability). Predictable S1 sequences produce greater activation than scrambled and static.
Figure 4.
Occipital effects around 165 ms in source space. (a) Statistical parametric map of the t-statistic in source space (Montreal Neurological Institute coordinate: z = 4) for the contrast predictable > scrambled, thresholded at P < 0.005 uncorrected and showing sensitivity to predictable S1 sequences in right visual cortex (Brodmann areas 17 and 18). (b) Mean adjusted responses at the peak voxel in right occipital cortex, including 90% confidence intervals (based on between-participant variability).
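The t-statistics mapped in Figures 3–6 come from a random-effects contrast across participants: at each sensor or voxel, the predictable-minus-scrambled difference is tested against zero. A minimal sketch of that computation at a single location, with invented activation estimates (the values below are not from the study):

```python
import math
import statistics

def paired_t(predictable, scrambled):
    """One-sample t on per-participant differences (predictable - scrambled),
    i.e., a paired t-test at one sensor/voxel with df = n - 1."""
    diffs = [p - s for p, s in zip(predictable, scrambled)]
    n = len(diffs)
    sem = statistics.stdev(diffs) / math.sqrt(n)  # standard error of the mean
    return statistics.mean(diffs) / sem

predictable = [1.2, 0.9, 1.5, 1.1, 1.3]  # illustrative per-participant estimates
scrambled   = [0.8, 0.7, 1.0, 0.9, 1.1]
print(round(paired_t(predictable, scrambled), 2))
```

Repeating this test at every location and thresholding the resulting map (here at P < 0.005 uncorrected) yields the statistical parametric maps shown in the figures.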
Figure 5.
Sensor space effects at 237 ms. (a) Statistical parametric map of the t-statistic in sensor space at 237 ms for the contrast predictable > scrambled, showing peaks over left medial and right lateral temporal sensors. (b) Time courses of response at sensors in left and right hemispheres shown in the red circles in (a). The M170 deflections are labeled, and the arrows indicate sensitivity to predictable dynamics. (c) Mean adjusted responses at lateral temporal sensors in left and right hemispheres showing the pattern of effects in sensor space at 237 ms, including 90% confidence intervals. Predictable S1 sequences produce greater activation than scrambled and static.
Figure 6.
Source space effects around 237 ms. (a) Statistical parametric map of the t-statistic in source space for the contrast predictable > scrambled, thresholded at P < 0.005 uncorrected and showing effects in bilateral occipital cortex, right STS, right fusiform gyrus, and bilateral premotor areas. (b) Mean adjusted responses of all conditions at peak voxels in right fusiform, STS, and right premotor cortex including 90% confidence intervals.
