[Preprint]. 2025 Jun 3:2025.06.03.657646.
doi: 10.1101/2025.06.03.657646.

Spatiotemporal Dynamics of Invariant Face Representations in the Human Brain


Amita Giri et al. bioRxiv.

Abstract

The human brain can effortlessly extract a familiar face's age, gender, and identity despite dramatic changes in appearance, such as head orientation, lighting, or expression. Yet, the spatiotemporal dynamics underlying this ability, and how they depend on task demands, remain unclear. Here, we used multivariate decoding of magnetoencephalography (MEG) responses and source localization to characterize the emergence of invariant face representations. Human participants viewed natural images of highly familiar celebrities that systematically varied in viewpoint, gender, and age, while performing a one-back task on the identity or the image. Time-resolved decoding revealed that identity information emerged rapidly and became increasingly invariant to viewpoint over time. We observed a temporal hierarchy: view-specific identity information appeared at 64 ms, followed by mirror-invariant representations at 75 ms and fully view-invariant identity at 89 ms. Identity-invariant age and gender information emerged around the same time as view-invariant identity. Task demands modulated only late-stage identity and gender representations, suggesting that early face processing is predominantly feedforward. Source localization at peak decoding times showed consistent involvement of the occipital face area (OFA) and fusiform face area (FFA), with stronger identity and age signals than gender. Our findings reveal the spatiotemporal dynamics by which the brain extracts view-invariant identity from familiar faces, suggest that age and gender are processed in parallel, and show that task demands modulate later processing stages. Together, these results offer new constraints on computational models of face perception.

Keywords: Age; Face processing; Familiar faces; Gender; Identity; MEG; Temporal dynamics.


Figures

Fig. 1:
Experimental design. (a) A total of 15 images per celebrity were selected: three distinct images for each of five head views (direct view, half left profile, full left profile, half right profile, and full right profile). (b) During the MEG experiment, participants viewed a random sequence of familiar face images. Each trial started with the presentation of a face image for 0.4 s, followed by a 0.7–1.55 s interstimulus interval. (c) To study task effects, participants performed two tasks in different experimental sessions. In Task A, participants pressed a button when the same person appeared consecutively; in Task B, they responded when the same image appeared consecutively. The exact stimulus set is available at https://osf.io/eh54u/.
Fig. 2:
Time course of identity (ID) decoding, non-invariant and invariant to head view. A classifier was trained to discriminate pairs of identities, using either the same head view for training and testing (non-invariant) or different views (invariant). Time 0 denotes stimulus onset. Lines below plots indicate significant time points, determined using a cluster-based sign permutation test (cluster-defining threshold p < 0.05, corrected significance level p < 0.05).
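The cross-view decoding scheme described in this caption can be sketched as follows. This is a minimal simulation, not the authors' pipeline: the data shapes, the nearest-centroid classifier, and the injected identity signal are all illustrative assumptions. The point is the train/test split — decoding is "non-invariant" when training and testing use the same head view, and "view-invariant" when the classifier must generalize across views.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MEG data: (identities, views, trials, sensors, timepoints).
# Shapes, classifier, and signal are illustrative assumptions only.
n_ids, n_views, n_trials, n_sensors, n_times = 2, 2, 20, 30, 50
X = rng.normal(size=(n_ids, n_views, n_trials, n_sensors, n_times))
# Inject an identity-specific pattern on a few sensors after "onset".
X[1, :, :, :5, 20:] += 1.0

def pairwise_accuracy(train_view, test_view, t):
    """Leave-one-trial-out nearest-centroid identity decoding at time t."""
    correct = []
    for test_idx in range(n_trials):
        train = np.delete(np.arange(n_trials), test_idx)
        # One sensor-pattern centroid per identity from the training trials.
        centroids = X[:, train_view, :, :, t][:, train].mean(axis=1)
        for true_id in range(n_ids):
            x = X[true_id, test_view, test_idx, :, t]
            pred = np.argmin(np.linalg.norm(centroids - x, axis=1))
            correct.append(pred == true_id)
    return float(np.mean(correct))

# Non-invariant decoding: train and test on the same head view.
non_inv = [pairwise_accuracy(0, 0, t) for t in range(n_times)]
# View-invariant decoding: train on one view, test on another (cross-decoding).
cross = [pairwise_accuracy(0, 1, t) for t in range(n_times)]
```

Before the injected signal appears, both time courses hover at chance (0.5 for pairwise decoding); afterwards, accuracy rises in both conditions because the simulated identity pattern here is shared across views.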
Fig. 3:
Time course of identity decoding non-invariant, mirror-invariant, and fully invariant to head view. Pictorial representations illustrate the cross-decoding approaches for the different cases. Time 0 indicates image onset. Lines below plots indicate significant times, determined using a cluster-based sign permutation test (cluster-defining threshold p < 0.05, corrected significance level p < 0.05).
Fig. 4:
Time courses of age, gender, and identity decoding. (a) Temporal dynamics of age decoding. (b) Temporal dynamics of gender decoding. (c) Time courses of age, gender, and identity decoding. Lines below plots indicate significant times, determined using a cluster-based sign permutation test (cluster-defining threshold p < 0.05, corrected significance level p < 0.05).
Fig. 5:
Task effects on identity, age, and gender representations. (a) Time courses of age decoding for the two tasks, with cross-decoding over identity. (b) Time courses of gender decoding. (c) Time courses of identity decoding invariant to head view for the two tasks. Lines below plots indicate significant times, with black lines indicating significant differences between the two tasks, determined using cluster-based sign permutation tests (cluster-defining threshold p < 0.05, corrected significance level p < 0.05).
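The cluster-based sign permutation test cited throughout these captions can be sketched as follows. This is a simplified one-sided illustration on synthetic group data, not the authors' code: the fixed cluster-defining t threshold (the paper derives it from a p < 0.05 criterion), the subject count, and the effect size are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def t_scores(data):
    """One-sample t statistic across subjects at each timepoint."""
    m = data.mean(axis=0)
    se = data.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
    return m / se

def contiguous_clusters(mask):
    """Return (start, stop) pairs of contiguous True runs in a boolean array."""
    runs, start = [], None
    for i, v in enumerate(mask):
        if v and start is None:
            start = i
        if not v and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(mask)))
    return runs

def cluster_sign_permutation(data, t_thresh=2.0, n_perm=500, alpha=0.05):
    """data: (subjects, timepoints) decoding accuracy minus chance.
    Returns a boolean mask of timepoints in significant positive clusters.
    t_thresh is a fixed cluster-defining threshold here for simplicity."""
    obs_t = t_scores(data)
    obs_runs = contiguous_clusters(obs_t > t_thresh)
    # Null distribution of the maximum cluster mass under random sign flips
    # of whole-subject time courses (valid under a symmetric null).
    null_max = np.zeros(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(data.shape[0], 1))
        t = t_scores(data * signs)
        masses = [t[a:b].sum() for a, b in contiguous_clusters(t > t_thresh)]
        null_max[p] = max(masses, default=0.0)
    sig = np.zeros(data.shape[1], dtype=bool)
    for a, b in obs_runs:
        if (null_max >= obs_t[a:b].sum()).mean() < alpha:  # corrected p-value
            sig[a:b] = True
    return sig

# Toy group data: 12 subjects, a real effect at timepoints 30-60 only.
data = rng.normal(0.0, 0.05, size=(12, 100))
data[:, 30:60] += 0.08
sig = cluster_sign_permutation(data)
```

Comparing whole-cluster mass against the permutation distribution of the maximum cluster mass is what provides the familywise correction across timepoints, which is why the captions report both a cluster-defining threshold and a corrected significance level.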
Fig. 6:
Source localization of identity, age, and gender representations at peak decoding times. (a) Activation maps averaged across subjects, normalized to total unit power across the brain, showing the spatial distribution of neural activity. (b) Activation levels in three face-selective regions of interest (ROIs): the occipital face area (OFA), fusiform face area (FFA), and posterior superior temporal sulcus (STS). The primary visual cortex (V1) is included as a control region.
Fig. 7:
Regions of interest displayed over the standard brain and typical cortical activity. Colored patches highlight the occipital face area (magenta), fusiform face area (blue), posterior superior temporal sulcus (green), and primary visual area (cyan). Orientation views (from left to right): right lateral, left lateral, posterior, anterior.

