Understanding the role of eye movement pattern and consistency during face recognition through EEG decoding

Guoyang Liu et al. NPJ Sci Learn. 2025 May 12;10(1):28. doi: 10.1038/s41539-025-00316-3.

Abstract

Eye movement patterns and consistency during face recognition are both associated with recognition performance. We examined whether they reflect different mechanisms through EEG decoding. Eighty-four participants performed an old-new face recognition task with eye movement pattern and consistency quantified using eye movement analysis with hidden Markov models (EMHMM). Temporal dynamics of neural representation quality for face recognition were assessed through decoding old vs new faces using a support vector machine classifier. Results showed that a more eye-focused pattern was associated with higher decoding accuracy in the high-alpha band, reflecting better neural representation quality. In contrast, higher eye movement consistency was associated with shorter latency of peak decoding accuracy in the high-alpha band, which suggested more efficient neural representation development, in addition to higher ERP decoding accuracy. Thus, eye movement patterns are associated with neural representation effectiveness, whereas eye movement consistency reflects neural representation development efficiency, unraveling different aspects of cognitive processes.
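The time-resolved decoding described in the abstract can be illustrated with a minimal sketch. This is not the authors' pipeline: it uses a simple nearest-class-mean projection and a rank-based AUC on synthetic data, whereas the study used a support vector machine on real EEG; all array names, dimensions, and the injected "old face" signal are assumptions for illustration.

```python
import numpy as np

def auc_score(labels, scores):
    """Rank-based AUC: probability that a positive trial outranks a negative one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def decode_over_time(X, y, n_folds=5, seed=0):
    """Cross-validated decoding at each time point.
    X: (trials, channels, timepoints) EEG; y: 0 = new face, 1 = old face."""
    rng = np.random.default_rng(seed)
    n_trials, _, n_times = X.shape
    folds = rng.permutation(n_trials) % n_folds
    auc = np.zeros(n_times)
    for t in range(n_times):
        scores = np.zeros(n_trials)
        for k in range(n_folds):
            train, test = folds != k, folds == k
            # Nearest-class-mean: project test trials onto the difference
            # of class means estimated from the training trials.
            w = (X[train & (y == 1), :, t].mean(0)
                 - X[train & (y == 0), :, t].mean(0))
            scores[test] = X[test, :, t] @ w
        auc[t] = auc_score(y, scores)
    return auc

# Synthetic demo: a hypothetical "old face" signal appears from timepoint 10 on.
rng = np.random.default_rng(0)
n_trials, n_ch, n_times = 80, 16, 20
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_ch, n_times))
X[y == 1, :, 10:] += 1.0
auc = decode_over_time(X, y)
```

With this setup, pre-signal time points hover around chance (AUC ≈ 0.5) while post-signal time points rise well above it, mirroring the decoding-accuracy curves reported in the figures.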


Conflict of interest statement

Competing interests: The authors declare no competing financial and/or non-financial interests.

Figures

Fig. 1
Fig. 1. The EMHMM clustering results.
EMHMM identified two representative eye movement patterns. a The eye-focused pattern. b The nose-focused pattern. Ellipses show ROIs as 2-D Gaussian emissions. The table shows transition probabilities among the ROIs. Priors show the probabilities that a fixation sequence starts from each ellipse. The image in the middle shows the corresponding heatmap.
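The caption describes the ingredients of an EMHMM model: ROIs as 2-D Gaussian emissions, a transition matrix, and prior start probabilities. How a fixation sequence is scored under such a model can be sketched with the standard forward algorithm. This is a generic illustration, not the EMHMM toolbox implementation; the two-ROI parameters below (an "eyes" and a "nose" ROI in arbitrary coordinates) are invented for the example.

```python
import numpy as np

def gauss2d_pdf(x, mean, cov):
    """Density of a 2-D Gaussian at fixation location x."""
    d = x - mean
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def sequence_log_likelihood(fixations, prior, trans, means, covs):
    """Scaled forward algorithm: log-likelihood of a fixation sequence
    under an HMM whose hidden states are ROIs with Gaussian emissions."""
    n_states = len(prior)
    emis = np.array([gauss2d_pdf(fixations[0], means[k], covs[k])
                     for k in range(n_states)])
    alpha = prior * emis          # joint prob. of first fixation and each state
    log_lik = 0.0
    for x in fixations[1:]:
        scale = alpha.sum()
        log_lik += np.log(scale)  # accumulate log of the scaling factor
        alpha = alpha / scale
        emis = np.array([gauss2d_pdf(x, means[k], covs[k])
                         for k in range(n_states)])
        alpha = (alpha @ trans) * emis
    return log_lik + np.log(alpha.sum())

# Hypothetical two-ROI model: state 0 = "eyes", state 1 = "nose".
prior = np.array([0.7, 0.3])
trans = np.array([[0.8, 0.2],
                  [0.4, 0.6]])
means = [np.array([0.0, 2.0]), np.array([0.0, 0.0])]
covs = [np.eye(2), np.eye(2)]

# A sequence lingering near the eyes ROI, then dropping to the nose ROI.
fix = np.array([[0.1, 1.9], [-0.2, 2.1], [0.0, 0.2]])
ll = sequence_log_likelihood(fix, prior, trans, means, covs)
```

EMHMM clusters participants by which group-level model (eye-focused vs nose-focused) assigns their fixation sequences the higher likelihood; the log-likelihood computed here is the quantity underlying that comparison.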
Fig. 2
Fig. 2. AUC performance curve for ERP decoding (0.5–6 Hz).
The black horizontal line represents the chance level performance (AUC = 0.5), and the black bold vertical line indicates the stimulus presentation onset time. AUC values in the shaded areas are significantly higher than the chance level: light gray marks corrected p < 0.05, and dark gray marks corrected p < 0.001. The purple shading indicates ±1 standard error of the mean (SEM).
Fig. 3
Fig. 3. AUC performance curve for alpha band decoding (8–12 Hz).
The black horizontal line represents the chance level performance (AUC = 0.5), and the black bold vertical line indicates the stimulus presentation onset time. a AUC performance curve for low-alpha band decoding (8–10 Hz). b AUC performance curve for high-alpha band decoding (10–12 Hz). AUC values in the shaded areas are significantly higher than the chance level: light gray marks corrected p < 0.05, and dark gray marks corrected p < 0.001. The purple shading indicates ±1 SEM.
Fig. 4
Fig. 4. Group-wise decoding performance in AUC in the high-alpha band (10–12 Hz) across time points.
The black horizontal line represents the chance level performance (AUC = 0.5), and the black bold vertical line indicates the stimulus presentation onset time. The bold purple line marks the time period during which the eyes-focused group’s performance is significantly higher than the nose-focused group’s performance (corrected p < 0.05). The bold red line marks the period during which the eyes-focused group’s performance is significantly higher than the chance level (corrected p < 0.05). The blue and orange shading indicates ±1 SEM, and CL indicates the chance level.
Fig. 5
Fig. 5. Averaged topography map in the ERP band.
The left subplot shows the topography map, while the right subplot illustrates the top-10 average weights and their corresponding channels.
Fig. 6
Fig. 6. Averaged topography maps in the high-alpha band.
The topography maps (upper row) and bar charts (lower row) illustrate the distribution of the channel weights. a The overall topography map and its top-10 channel weights. b The nose group topography map and its top-10 channel weights. c The eyes group topography map and its top-10 channel weights.
Fig. 7
Fig. 7. The trial procedure of the study and recognition phase of the face recognition paradigm.
A trial of the study phase started with a drift check, followed by the face image to be learned, presented for 5000 ms, and ended with a screen for blinking. A trial of the recognition phase started with a drift check, followed by a face image to be judged, which remained on screen until a response was made, and ended with a screen for blinking.
Fig. 8
Fig. 8. The power decoding performance of various frequency bands (4–40 Hz) for decoding old vs new faces.
The AUC values in the shaded areas are significantly higher than the chance level: light gray indicates areas with the corrected significance level p < 0.05, whereas darker gray indicates areas with the corrected significance level p < 0.001. The p-values are corrected by the Benjamini and Yekutieli method.
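The Benjamini–Yekutieli method mentioned in the caption controls the false discovery rate under arbitrary dependence among tests, which matters here because decoding accuracies at neighboring time points are correlated. A minimal pure-NumPy sketch of the procedure (not the authors' code; the example p-values are invented):

```python
import numpy as np

def benjamini_yekutieli(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under BY FDR control."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    # Harmonic-sum factor c(m) that makes BH valid under arbitrary dependence.
    c_m = np.sum(1.0 / np.arange(1, m + 1))
    order = np.argsort(p)
    thresholds = np.arange(1, m + 1) * alpha / (m * c_m)
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        # Reject all hypotheses up to the largest i with p_(i) <= i*alpha/(m*c(m)).
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# Hypothetical per-timepoint p-values from a decoding curve.
pvals = np.array([0.001, 0.004, 0.03, 0.2, 0.5, 0.0005])
sig = benjamini_yekutieli(pvals, alpha=0.05)
```

Note that the BY thresholds are stricter than uncorrected testing: here the raw p = 0.03 survives a naive 0.05 cutoff but not the corrected one.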
