The Time Course of Face Representations during Perception and Working Memory Maintenance

Gi-Yeul Bae

Cereb Cortex Commun. 2020 Dec 15;2(1):tgaa093. doi: 10.1093/texcom/tgaa093. eCollection 2021.
Abstract

Successful social communication requires accurate perception and maintenance of invariant (face identity) and variant (facial expression) aspects of faces. While numerous studies have investigated how face identity and expression information is extracted from faces during perception, less is known about the temporal dynamics of this information during perception and working memory (WM) maintenance. To investigate how face identity and expression information evolve over time, I recorded electroencephalography (EEG) while participants performed a face WM task in which they remembered a face image and reported either the identity or the expression of that image after a short delay. Using multivariate event-related potential (ERP) decoding analyses, I found that the two types of information exhibited dissociable temporal dynamics: although face identity was decoded better than facial expression during perception, facial expression was decoded better than face identity during WM maintenance. Follow-up analyses suggested that this temporal dissociation was driven by differential maintenance mechanisms: face identity information was maintained in a more "activity-silent" manner than facial expression information, presumably because invariant face information does not need to be actively tracked in the task. Together, these results provide important insights into the temporal evolution of face information during perception and WM maintenance.
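
To make the analysis concrete for readers unfamiliar with multivariate ERP decoding, the sketch below shows the general logic in Python: a linear classifier is trained on the scalp topography at each time point to predict which of the 4 face labels (IDs or expressions) was presented, with cross-validated accuracy compared against chance (1/4). This is a minimal illustration under assumed conventions, not the author's actual pipeline; the data layout, the LinearDiscriminantAnalysis classifier, and the function name decode_over_time are hypothetical.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(epochs, labels, cv=3):
    # epochs: (n_trials, n_channels, n_times) array of ERP amplitudes (hypothetical layout)
    # labels: (n_trials,) array with values 0-3 (face ID or facial expression)
    # Returns mean cross-validated accuracy at each time point (chance = 0.25).
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    n_times = epochs.shape[2]
    accuracy = np.empty(n_times)
    for t in range(n_times):
        X = epochs[:, :, t]  # scalp topography (one value per channel) at time t
        accuracy[t] = cross_val_score(clf, X, labels, cv=cv).mean()
    return accuracy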

Keywords: ERP decoding; face decoding; face memory; face perception; temporal dynamics of faces.

Figures

Figure 1
Task procedure, behavioral performance, and ERP topography. a. On each trial, participants saw a face image and remembered both its face identity (ID) and its facial expression (images shown here are not scaled to actual size). Each trial was typically followed, after a delay, by the next trial; however, on a random subset of trials, memory was tested for either the face ID or the facial expression from that trial. b. Two types of test trials. On face ID test trials (indicated by "Identity" in the middle of the display), 4 face images with different face IDs but a common facial expression (which differed from the facial expression of the original memory item) were presented, and participants reported which of the 4 images matched the ID of the memory item. On facial expression test trials (indicated by "Expression" in the middle of the display), 4 face images with different facial expressions of a single face ID (which differed from the face ID of the original memory item) were presented, and participants reported which of the 4 images matched the facial expression of the memory item. NimStim face images (Tottenham et al., 2009) are shown as examples because of restrictions placed on the actual stimulus set. c. Behavioral performance for the ID and Expression tests (n = 22); the difference between them was not statistically significant. Error bars indicate ±1 SEM. d, e. Topography of ERP activity for each of the 4 face IDs (d, collapsed across facial expressions) or the 4 facial expressions (e, collapsed across face IDs), averaged across participants and across time points within the perception (0–500 ms) and working memory (500–1500 ms) intervals.
Figure 2
Decoding accuracy and confusion matrices. a. Time course of mean decoding accuracy for face ID and facial expression (n = 22). Time zero indicates the onset of the stimulus face. Chance-level performance (0.25 = 1/4) is indicated by the abscissa. The colored horizontal lines indicate clusters of time points in which the decoding was significantly different from chance after correction for multiple comparisons. The gray areas indicate clusters of time points in which the decoding was significantly different between face ID and facial expression after correction for multiple comparisons. The light shading indicates ±1 SEM. b. Confusion matrices for face ID decoding and facial expression decoding for the perception (0–500 ms) and working memory (500–1500 ms) periods. Each cell shows the probability that a given ID or expression was classified as a given ID or expression; cells on the diagonal represent correct classifications.
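A confusion matrix like those in panel b can be obtained by collecting cross-validated predictions within a period and row-normalizing the counts. The sketch below is a minimal illustration under the same hypothetical data layout as above (averaging the topography over the period is one simple choice; the author's exact procedure may differ):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def period_confusion(epochs, labels, t_slice, cv=3):
    # t_slice: slice of sample indices covering one period (e.g., 0-500 ms)
    X = epochs[:, :, t_slice].mean(axis=2)  # average topography over the period
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    pred = cross_val_predict(clf, X, labels, cv=cv)
    # Row-normalize: each row gives P(classified as column | true label = row)
    return confusion_matrix(labels, pred, normalize='true')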
Figure 3
Mean decoding accuracy from cross-dimension decoding analysis for (a) face ID and (b) facial expression (n = 22). Time zero indicates the onset of a face stimulus. Chance-level performance (0.25 = 1/4) is indicated by the abscissa. The black lines indicate clusters of time points in which the decoding was significantly different from chance after correction for multiple comparisons. The gray shading indicates ±1 SEM.
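Cross-dimension decoding tests whether, for example, face ID can be decoded in a way that generalizes across facial expressions. One common scheme, sketched below under the same hypothetical data layout (the author's exact procedure may differ), trains on trials from all but one expression and tests on the held-out expression:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_dimension_accuracy(epochs, id_labels, expr_labels, t):
    # Decode face ID at time t while generalizing across expression:
    # train on trials from three expressions, test on the held-out fourth.
    X = epochs[:, :, t]
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    scores = []
    for held_out in np.unique(expr_labels):
        train = expr_labels != held_out
        clf.fit(X[train], id_labels[train])
        scores.append(clf.score(X[~train], id_labels[~train]))
    return np.mean(scores)  # chance = 0.25

Averaging the held-out scores across all four possible held-out expressions keeps the estimate balanced, so above-chance accuracy reflects ID information that survives changes in expression.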
Figure 4
Temporal generalization of decoding for (a) face ID and (b) facial expression. The data were averaged within every 100-ms time window prior to the decoding analysis. The color scale represents decoding accuracy (chance = 0.25). c. Time course of mean decoding accuracy for face ID and facial expression when training and testing were done on the same time window (i.e., the diagonals of panels (a) and (b)) in the time-averaged decoding. The gray areas indicate time windows in which the decoding was significantly different between face ID and facial expression after correction for multiple comparisons. The light shading indicates ±1 SEM.
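In a temporal generalization analysis, a classifier trained at one time window is tested at every window; above-chance accuracy far off the diagonal indicates a temporally stable code, whereas accuracy confined to the diagonal indicates a dynamic one. Below is a minimal sketch with non-overlapping averaged windows, again under the hypothetical data layout and classifier assumed above:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def temporal_generalization(epochs, labels, win, cv=3):
    # Average within non-overlapping windows of `win` samples (e.g., 100 ms worth),
    # then train a classifier at each window and test it at every window.
    n_trials, n_chan, n_times = epochs.shape
    n_win = n_times // win
    Xw = epochs[:, :, :n_win * win].reshape(n_trials, n_chan, n_win, win).mean(axis=3)
    acc = np.zeros((n_win, n_win))
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    skf = StratifiedKFold(n_splits=cv, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(Xw[:, :, 0], labels):
        for t_train in range(n_win):
            clf.fit(Xw[train_idx, :, t_train], labels[train_idx])
            for t_test in range(n_win):
                acc[t_train, t_test] += clf.score(
                    Xw[test_idx, :, t_test], labels[test_idx])
    return acc / cv  # (n_windows, n_windows) accuracy matrix; chance = 0.25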
Figure 5
Mean accuracy for (a) the decoding of the previous-trial face ID and (b) the previous-trial facial expression (n = 22). The black lines indicate clusters of time points in which the decoding was significantly different from chance after correction for multiple comparisons. The gray shading indicates ±1 SEM.
