J Vis. 2022 Jul 11;22(8):17. doi: 10.1167/jov.22.8.17.

Characteristic fixation biases in Super-Recognizers


Marcel Linka et al. J Vis.

Abstract

Neurotypical observers show large and reliable individual differences in gaze behavior along several semantic object dimensions. Individual gaze behavior toward faces has been linked to face identity processing, including in neurotypical observers. Here, we investigated potential gaze biases in Super-Recognizers (SRs), individuals with exceptional face identity processing skills. Ten SRs, identified with a novel conservative diagnostic framework, and 43 controls freely viewed 700 complex scenes depicting more than 5000 objects. First, we tested whether SRs and controls differ in fixation biases along four semantic dimensions: faces, text, objects being touched, and bodies. Second, we tested potential group differences in fixation biases toward eyes and mouths. Finally, we tested whether SRs fixate closer to the theoretical optimal fixation point for face identification. SRs showed a stronger gaze bias toward faces and away from text and touched objects, starting from the first fixation onward. Further, SRs devoted a significantly smaller proportion of their first fixations and dwell time on faces to mouths but did not differ in dwell time or first fixations devoted to eyes. Face fixations of SRs also fell significantly closer to the theoretical optimal fixation point for identification, just below the eyes. Our findings suggest that reliable superiority for face identity processing is accompanied by early fixation biases toward faces and preferred saccadic landing positions close to the theoretical optimum for face identification. We discuss future directions to investigate the functional basis of individual fixation behavior and face identity processing ability.


Figures

Figure 1.
Example stimuli with pixel masks. Example stimuli with overlaid pixel masks for objects of the semantic categories: Faces (red), Eyes (blue), Mouths (green), Bodies (violet), Touched (cyan), and Text (yellow). All images were presented without pixel masks.
Figure 2.
Individual gaze tendencies toward four semantic dimensions. (A–D) Right-hand sides show the distributions of the control group for percent first fixations (green) and percent cumulative dwell time (gray) along each semantic dimension. Dots in the left-hand raincloud plots indicate the corresponding individual data for each control subject. Superimposed red lines refer to the fixation ratios for each SR.
Figure 3.
Group difference in gaze tendency across four semantic dimensions. (A–D) Bootstrapped null distributions of 10,000 random sample means drawn from all participants (pooled across groups) for percent first fixations (green) and percent of cumulative dwell time (gray). The superimposed red lines refer to the observed mean percent first fixations and mean percent cumulative dwell time of SRs for each given semantic dimension. *p < 0.05, **p < 0.01, ***p < 0.001 (Holm–Bonferroni corrected; see Methods).
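The caption above describes the permutation-style bootstrap: 10,000 sample means are drawn from the pooled participant data and the observed SR mean is located within that null distribution. The following is a minimal Python sketch of that procedure, assuming a two-tailed comparison; the function name, seed, and tail handling are illustrative assumptions, not the authors' code, and the paper's Methods (including Holm–Bonferroni correction across dimensions) should be consulted for the exact test.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (assumption)

def bootstrap_null_p(pooled, sr_mean, n_sr, n_boot=10_000):
    """Build a null distribution of means of size-n_sr resamples drawn
    from the pooled data (both groups combined) and return it together
    with a two-tailed p-value for the observed SR mean."""
    pooled = np.asarray(pooled, dtype=float)
    null_means = np.array([
        rng.choice(pooled, size=n_sr, replace=True).mean()
        for _ in range(n_boot)
    ])
    center = null_means.mean()
    # Two-tailed: fraction of null means at least as extreme as sr_mean.
    p = np.mean(np.abs(null_means - center) >= np.abs(sr_mean - center))
    return null_means, p
```

With the study's group sizes, `n_sr` would be 10 (the SR sample) and `pooled` would hold all 53 participants' values for a given semantic dimension.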
Figure 4.
Fixations toward eyes and mouths. (A, B) Left-hand sides show the distributions of the control group for percent first fixations (green) and percent cumulative dwell time (gray) for fixations toward (A) eyes and (B) mouths relative to the amount of time spent looking at faces. Dots in the left-hand raincloud plots indicate the corresponding individual data for each control subject, and red lines indicate the fixation ratios for each SR. Right-hand sides show bootstrapped null distributions of 10,000 random sample means drawn from all participants (pooled across groups) for percent first fixations (green) and percent of cumulative dwell time (gray), respectively, for fixations toward (A) eyes and (B) mouths. The superimposed red lines refer to the corresponding observed mean of SRs. *p < 0.05, **p < 0.01, ***p < 0.001 (Holm–Bonferroni corrected; see Methods).
Figure 5.
Results for the stimulus category Faces. (A) Heatmaps showing the distribution of first fixations (top row) and all fixations (bottom row) on all observed faces with an eye-to-mouth distance > 2.5 DVA, superimposed over an example image (selected from one stimulus used in the free-viewing task). The heatmap was horizontally compressed to match the example face. Heatmaps on the left-hand side show fixation data from controls; maps on the right-hand side show fixations from SRs. All face fixation coordinates were normalized to the horizontal (X) and vertical (Y) extent of the respective observed face. Warmer colors indicate higher density of fixations. (B) Average of relative distances from the ideal fixation point for first fixations (top row) and all fixations (bottom row) toward faces with an eye-to-mouth distance > 2.5 DVA for each observer. Distances from the ideal fixation point just below the eyes (Peterson & Eckstein, 2012) were calculated for each face and fixation, normalized as a percentage of the distance from the respective mouth centroid to the ideal fixation point, and then averaged for each observer. The resulting mean distances for controls are shown on the left-hand side, those for SRs on the right-hand side. See Methods for further details.
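The normalization described in panel B (distance from the ideal fixation point, expressed as a percentage of the mouth-centroid-to-ideal-point distance) can be sketched as follows. This is an illustrative Python implementation under the stated assumptions; the function name and argument layout are hypothetical, and the actual per-face coordinates come from the paper's pixel masks.

```python
import numpy as np

def normalized_fixation_distance(fix_xy, ideal_xy, mouth_xy):
    """Distance from a fixation to the ideal point just below the eyes,
    expressed as a percentage of the distance from the mouth centroid
    to that ideal point (the per-face normalizer)."""
    fix = np.asarray(fix_xy, dtype=float)
    ideal = np.asarray(ideal_xy, dtype=float)
    mouth = np.asarray(mouth_xy, dtype=float)
    scale = np.linalg.norm(mouth - ideal)  # mouth centroid -> ideal point
    return 100.0 * np.linalg.norm(fix - ideal) / scale
```

Under this scheme, a fixation landing exactly on the ideal point scores 0%, and one landing on the mouth centroid scores 100%; per-observer means are then averages of these percentages across faces and fixations.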

References

    1. Allen, M., Poggiali, D., Whitaker, K., Marshall, T., van Langen, J., & Kievit, R. (2019). Raincloud plots: A multi-platform tool for robust data visualization [version 2; peer review: 2 approved]. Wellcome Open Research, 4, 16, 10.12688/wellcomeopenres.15191.2.
    2. Amestoy, A., Guillaud, E., Bouvard, M. P., & Cazalets, J.-R. (2015). Developmental changes in face visual scanning in autism spectrum disorder as assessed by data-based analysis. Frontiers in Psychology, 6, 989, 10.3389/fpsyg.2015.00989.
    3. Arizpe, J., Walsh, V., Yovel, G., & Baker, C. I. (2017). The categories, frequencies, and stability of idiosyncratic eye-movement patterns to faces. Vision Research, 141, 191–203, 10.1016/j.visres.2016.10.013.
    4. Avidan, G., & Behrmann, M. (2021). Spatial integration in normal face processing and its breakdown in congenital prosopagnosia. Annual Review of Vision Science, 7, 301–321, 10.1146/annurev-vision-113020-012740.
    5. Bargary, G., Bosten, J. M., Goodbourn, P. T., Lawrance-Owen, A. J., Hogg, R. E., & Mollon, J. D. (2017). Individual differences in human eye movements: An oculomotor signature? Vision Research, 141, 157–169, 10.1016/j.visres.2017.03.001.
