J Neurosci. 2023 May 10;43(19):3477-3494.
doi: 10.1523/JNEUROSCI.1484-22.2023. Epub 2023 Mar 31.

Face-Selective Patches in Marmosets Are Involved in Dynamic and Static Facial Expression Processing

Audrey Dureux et al. J Neurosci.

Abstract

The correct identification of facial expressions is critical for understanding the intentions of others during social communication in the daily life of all primates. Here we used ultra-high-field fMRI at 9.4 T to investigate the neural network activated by facial expressions in awake New World common marmosets of both sexes, and to determine the effect of facial motion on this network. We further explored how the face-patch network is involved in the processing of facial expressions. Our results show that dynamic and static facial expressions activate face patches in temporal and frontal areas (O, PV, PD, MD, AD, and PL) as well as in the amygdala, with stronger responses for negative faces that were also associated with increased respiration rates in the monkeys. Processing of dynamic facial expressions involves an extended network recruiting additional regions not known to be part of the face-processing network, suggesting that facial motion may facilitate the recognition of facial expressions. We report for the first time in New World marmosets that the perception and identification of changeable facial expressions, vital for social communication, recruit face-selective brain patches that are also involved in face detection and are associated with an increase in arousal.

SIGNIFICANCE STATEMENT Recent research in humans and nonhuman primates has highlighted the importance of correctly recognizing and processing facial expressions to understand others' emotions in social interactions. The current study focuses on fMRI responses to emotional facial expressions in the common marmoset (Callithrix jacchus), a New World primate species that shares several features of social behavior with humans. Our results reveal that temporal and frontal face patches are involved in both basic face detection and facial expression processing. The specific recruitment of these patches by negative faces, together with an increase in arousal, shows that marmosets process the facial expressions of conspecifics, a capacity vital for social communication.

Keywords: awake marmosets; fMRI; faces; facial expressions; respiration rate; social communication.


Figures

Figure 1.
Experimental setup and stimuli. A, fMRI task block design. In each run of the passive-viewing dynamic facial expressions task, four different types of videos lasting 12 s each were presented twice in randomized order. The videos consisted of marmoset faces depicting different facial expressions (neutral faces and negative faces conditions) and their scrambled versions (scrambled neutral faces and scrambled negative faces conditions). Video blocks were separated by 18 s baseline blocks during which a central dot was displayed at the center of the screen. The same block design was used for the static facial expressions and face-localizer tasks. B, Stimuli of the face-localizer task: three categories (faces, objects, and body parts) were presented in 12 s blocks in which stimuli from the same category were randomly selected and each displayed for 500 ms. Only responses to face and object stimuli were used in this study. Thirty exemplars of each category were used. C, Stimuli used in the passive-viewing static facial expressions task. Four types of images were used: neutral faces, scrambled neutral faces, negative faces, and scrambled negative faces, with 35 exemplars per category. D–F, Density histograms of eye positions for each task (D, dynamic facial expressions task; E, static facial expressions task; F, face-localizer task), shown for all animals during the baseline (central dot) and during each task condition. Below each graph, an example stimulus for each condition is shown in its correct location.
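For readers who want to see the timing structure concretely, here is a minimal Python sketch of the block design described in A: four conditions presented twice per run in randomized order, 12 s blocks separated by 18 s baselines. The condition labels and the leading baseline block are illustrative assumptions, not taken from the paper.

```python
# Toy generator for the block-design schedule described in Figure 1A.
# Condition names and the initial baseline block are assumptions.
import random

CONDITIONS = ["neutral", "negative", "scrambled_neutral", "scrambled_negative"]
BLOCK_S, BASELINE_S, REPS = 12, 18, 2  # block length, baseline length, repeats

def build_run_schedule(seed=None):
    rng = random.Random(seed)
    order = CONDITIONS * REPS          # each condition presented twice
    rng.shuffle(order)                 # randomized block order per run
    schedule, t = [("baseline", 0, BASELINE_S)], BASELINE_S
    for cond in order:
        schedule.append((cond, t, BLOCK_S))
        t += BLOCK_S
        schedule.append(("baseline", t, BASELINE_S))  # interleaved baseline
        t += BASELINE_S
    return schedule

for cond, onset, dur in build_run_schedule(seed=0):
    print(f"{onset:4d}s  {cond:20s} {dur}s")
```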
Figure 2.
Face patches identified by comparing faces with objects. Group functional maps, obtained from 7 awake marmosets, depicting significantly higher activations for faces than for objects. The maps reveal six functional patches, displayed on lateral, dorsal, and ventral views of the left and right fiducial marmoset cortical surfaces; no activations were found on the medial view. The white circles delineate the activation peaks of the following face patches, using the labeling of Hung et al. (2015): occipitotemporal face patches O (V2/V3), PV (V4/TEO), PD (V4t/FST), MD (posterior TE), and AD (anterior TE). The frontal face patch, which we call the PL patch (areas 45/47), has previously been identified only when faces were compared with scrambled faces (Schaeffer et al., 2020). Subcortical activations are illustrated on coronal slices. In the top map, the brain areas reported survive an activation threshold of z > 2.57 (p < 0.01, AFNI 3dttest++). In the bottom map, we increased the activation threshold to isolate face-patch subregions and to delineate the highest z value (i.e., activation peak) of each face patch, which allowed us to determine the ROIs for the face patches (z > 3 for the left hemisphere; z > 3.83 for the right hemisphere).
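The z-score thresholds quoted throughout these legends map onto the stated p values as two-tailed cutoffs of the standard normal distribution. A quick sanity check (assuming two-tailed tests, which the legends do not state explicitly):

```python
# Verify that the reported z thresholds match the reported p values
# under a two-tailed standard-normal criterion.
from scipy.stats import norm

for z in (1.96, 2.57, 3.29):
    p_two_tailed = 2 * norm.sf(z)  # sf = 1 - cdf, the upper-tail probability
    print(f"z = {z:.2f}  ->  two-tailed p = {p_two_tailed:.4f}")
# z = 1.96 -> p = 0.0500 ; z = 2.57 -> p = 0.0102 ; z = 3.29 -> p = 0.0010
```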
Figure 3.
Brain networks activated by each condition versus baseline. Group functional maps showing significantly greater activations for dynamic neutral, negative, scrambled neutral, and scrambled negative faces compared with baseline. Group maps, obtained from 6 awake marmosets, are displayed on lateral and medial views of the right fiducial marmoset cortical surface as well as on dorsal and ventral views of the left and right surfaces. The white line delineates the regions based on the Paxinos parcellation of the NIH marmoset brain atlas (Liu et al., 2018). The brain areas reported survive an activation threshold of z > 3.29 (p < 0.001, AFNI 3dttest++; cluster-size correction, α = 0.05 from 10,000 Monte Carlo simulations).
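For intuition about the cluster-size correction used here and in Figures 4 and 5, the sketch below illustrates the general Monte Carlo idea rather than AFNI's actual 3dttest++ machinery: simulate smoothed null volumes, threshold them at the voxelwise z cutoff, and take the 95th percentile of the largest null cluster as the minimum reportable cluster size (α = 0.05). Grid size, smoothness, and iteration count are placeholders, not the study's parameters.

```python
# Toy Monte Carlo cluster-size correction (conceptual only; not AFNI).
import numpy as np
from scipy.ndimage import gaussian_filter, label

def cluster_threshold(shape=(32, 32, 32), z_thresh=3.29, smooth_vox=1.5,
                      n_iter=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    max_sizes = []
    for _ in range(n_iter):
        # Smoothed Gaussian noise as a null volume, renormalized to unit SD
        noise = gaussian_filter(rng.standard_normal(shape), smooth_vox)
        noise /= noise.std()
        labels, n = label(noise > z_thresh)   # clusters above voxel threshold
        sizes = np.bincount(labels.ravel())[1:] if n else np.array([0])
        max_sizes.append(sizes.max())         # largest null cluster this run
    # Cluster size exceeded by chance only alpha of the time
    return int(np.quantile(max_sizes, 1 - alpha))

print("min cluster size:", cluster_threshold(), "voxels")
```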
Figure 4.
Brain networks involved in dynamic facial expression processing. A–C, Group functional maps showing significantly greater activations for the following comparisons: A, all emotional faces (i.e., neutral and negative faces) versus all scrambled emotional faces (i.e., scrambled versions of neutral and negative faces); B, the neutral facial expression versus its scrambled version; and C, the negative facial expression versus its scrambled version. Group functional topology comparisons are displayed on the left and right fiducial marmoset cortical surfaces (lateral, medial, dorsal, and ventral views) as well as on coronal slices, to illustrate the activations in subcortical areas. The white line delineates the regions based on the Paxinos parcellation of the NIH marmoset brain atlas (Liu et al., 2018). The black circles delineate the positions of the face patches identified by the face-localizer task depicted in Figure 2. The brain areas reported survive an activation threshold of z > 1.96 (p < 0.05, AFNI 3dttest++; cluster-size correction, α = 0.05 from 10,000 Monte Carlo simulations).
Figure 5.
Comparison between the two dynamic facial expressions. A, B, Group functional maps showing significantly greater activations for negative compared with neutral face videos (A) and for scrambled negative compared with scrambled neutral face videos (B), displayed on the left and right fiducial marmoset cortical surfaces (lateral, medial, dorsal, and ventral views). Coronal slices represent the activations in subcortical areas. The white line delineates the regions based on the Paxinos parcellation of the NIH marmoset brain atlas (Liu et al., 2018). The black circles delineate the positions of the face patches identified by our face-localizer task depicted in Figure 2. The brain areas reported survive an activation threshold of z > 1.96 (p < 0.05, AFNI 3dttest++; cluster-size correction, α = 0.05 from 10,000 Monte Carlo simulations).
Figure 6.
Brain networks involved in static facial expression processing. A–C, Group functional maps showing significantly greater activations for the following comparisons: A, all emotional faces versus all scrambled emotional faces; B, the neutral facial expression versus its scrambled version; and C, the negative facial expression versus its scrambled version. Group functional topology comparisons are displayed on the left and right fiducial marmoset cortical surfaces (lateral, medial, dorsal, and ventral views) as well as on coronal slices to illustrate the activations in subcortical areas. The white line delineates the regions based on the Paxinos parcellation of the NIH marmoset brain atlas (Liu et al., 2018). The black circles delineate the positions of the face patches identified by our face-localizer task depicted in Figure 2. The brain areas reported survive an activation threshold of z > 2.57 (p < 0.01, AFNI 3dttest++).
Figure 7.
Comparison between the two static facial expressions. A, B, Group functional maps showing significantly greater activations for negative compared with neutral face pictures (A) and for scrambled negative compared with scrambled neutral face pictures (B), displayed on the left and right fiducial marmoset cortical surfaces (lateral, medial, dorsal, and ventral views). Coronal slices represent the activations in subcortical areas. The white line delineates the regions based on the Paxinos parcellation of the NIH marmoset brain atlas (Liu et al., 2018). The black circles delineate the positions of the face patches identified by our face-localizer task and depicted in Figure 2. The brain areas reported survive an activation threshold of z > 2.57 (p < 0.01, AFNI 3dttest++).
Figure 8.
ROI analysis: differences in percentage signal change among the four conditions (i.e., neutral and negative facial expressions and their scrambled versions) in face-selective patches. The magnitude of the percentage signal change for each condition was extracted from the time series of 12 (6 right, 6 left) functional regions of interest defined from the activation map obtained with the faces > objects contrast in the face-localizer task (Fig. 2, Table 1). Differences from baseline (asterisks below each bar graph) and between conditions (asterisks on horizontal bars) were tested using one-sided paired t tests corrected for multiple comparisons (FDR): *p < 0.05, **p < 0.01, and ***p < 0.001. Error bars correspond to the SEM.
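As a rough illustration of the statistics described in this legend, the following sketch runs one-sided paired t tests on per-run percent-signal-change values and applies Benjamini-Hochberg FDR correction. The data values and condition pairs are invented placeholders, not the study's numbers.

```python
# Paired one-sided t tests on percent signal change with FDR correction.
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import fdrcorrection

rng = np.random.default_rng(1)
n_runs = 12
psc = {  # toy percent-signal-change values, one entry per run
    "negative": rng.normal(0.8, 0.3, n_runs),
    "neutral": rng.normal(0.5, 0.3, n_runs),
    "scrambled_negative": rng.normal(0.2, 0.3, n_runs),
}

pairs = [("negative", "neutral"), ("negative", "scrambled_negative")]
pvals = [ttest_rel(psc[a], psc[b], alternative="greater").pvalue
         for a, b in pairs]
rejected, p_fdr = fdrcorrection(pvals, alpha=0.05)  # Benjamini-Hochberg
for (a, b), p, sig in zip(pairs, p_fdr, rejected):
    print(f"{a} > {b}: FDR-corrected p = {p:.4f} {'*' if sig else ''}")
```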
Figure 9.
Respiration rate as a function of the viewing condition. Dot plot depicting the δRR of all six marmosets in bpm (i.e., mean RR during video clip blocks minus mean RR during baseline) for each condition: neutral, negative, scrambled neutral, and scrambled negative faces. The mean δRR for each condition is represented by a different colored dot. For each condition, the vertical bars represent the SEM and the small dots indicate individual values obtained for each run. Three paired t tests were performed: neutral faces versus scrambled neutral faces, negative faces versus scrambled negative faces, and negative faces versus neutral faces (*p < 0.05, **p < 0.01, and ***p < 0.001).
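The δRR measure is simple to express in code: per run, the mean respiration rate during a video block minus the mean rate during baseline, followed by paired t tests across runs. A minimal sketch with invented respiration values:

```python
# Delta-RR computation and a paired t test across runs (toy data).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)

def delta_rr(rr_block, rr_baseline):
    """Per-run change in respiration rate (bpm): block minus baseline."""
    return rr_block.mean() - rr_baseline.mean()

# Invented per-second RR samples (bpm) per run: 12 s block, 18 s baseline.
runs = 20
d_neg = np.array([delta_rr(rng.normal(38, 3, 12), rng.normal(34, 3, 18))
                  for _ in range(runs)])
d_neu = np.array([delta_rr(rng.normal(35, 3, 12), rng.normal(34, 3, 18))
                  for _ in range(runs)])
t, p = ttest_rel(d_neg, d_neu)
print(f"negative vs neutral faces: t = {t:.2f}, p = {p:.4g}")
```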
Figure 10.
Respiration rate as a function of the viewing condition for first versus last runs. Dot plot depicting the δRR of all six marmosets in bpm (i.e., mean RR during video clip blocks minus mean RR during baseline) for each condition, separately for the first and last runs. The mean δRR for each condition is represented by a different colored dot. The vertical bars represent the SEM, and the small dots indicate individual values obtained for each run. Four paired t tests were performed to identify differences between the first- and last-run conditions (*p < 0.05, **p < 0.01, and ***p < 0.001).
