Faces and text attract gaze independent of the task: Experimental data and computer model
- PMID: 20053101
- DOI: 10.1167/9.12.10
Abstract
Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly focus on the facial regions. But is this true of other high-level image features as well? Here we investigate the extent to which natural scenes containing faces, text elements, and cell phones (as a suitable control) attract attention by tracking the eye movements of subjects in two types of tasks: free viewing and search. We observed that subjects in free-viewing conditions look at faces and text 16.6 and 11.1 times more often than at similar regions normalized for size and position of the face and text. In terms of attracting gaze, text is almost as effective as faces. Furthermore, it is difficult to avoid looking at faces and text even when doing so imposes a cost. We also found that subjects took longer to make their initial saccade when they were told to avoid faces/text and their saccades landed on a non-face/non-text object. We refine a well-known bottom-up computer model of saliency-driven attention that includes conspicuity maps for color, orientation, and intensity by adding high-level semantic information (i.e., the location of faces or text) and demonstrate that this significantly improves the ability to predict eye fixations in natural images. Our enhanced model's predictions yield an area under the ROC curve over 84% for images that contain faces or text when compared against the actual fixation pattern of subjects. This suggests that the primate visual system allocates attention using such an enhanced saliency map.
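The enhancement described above, combining bottom-up conspicuity maps with a high-level face/text channel and scoring the result by ROC AUC against fixations, can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' actual implementation: the function names, the equal weighting of the bottom-up channels, and the single face-channel weight are illustrative assumptions.

```python
import numpy as np

def _norm(m):
    """Rescale a map to [0, 1]; a flat map maps to all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def combined_saliency(color, intensity, orientation, face_map, w_face=1.0):
    """Combine normalized bottom-up conspicuity maps (color, intensity,
    orientation) with a high-level face/text channel, in the spirit of
    adding semantic channels to an Itti-Koch-style saliency model."""
    bottom_up = (_norm(color) + _norm(intensity) + _norm(orientation)) / 3.0
    return _norm(bottom_up + w_face * _norm(face_map))

def fixation_auc(saliency, fixation_mask):
    """Area under the ROC curve: how well saliency values at fixated
    pixels separate from values at non-fixated pixels (rank-based,
    equivalent to the Mann-Whitney U statistic; ties not averaged)."""
    mask = fixation_mask.astype(bool)
    pos = saliency[mask].ravel()
    neg = saliency[~mask].ravel()
    scores = np.concatenate([pos, neg])
    ranks = np.empty(scores.size, dtype=float)
    ranks[scores.argsort()] = np.arange(1, scores.size + 1)
    u = ranks[:pos.size].sum() - pos.size * (pos.size + 1) / 2
    return u / (pos.size * neg.size)
```

A saliency map that is uniformly high on fixated regions and low elsewhere yields an AUC of 1.0; chance-level prediction yields 0.5, so the paper's reported AUC above 84% sits well above chance.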
Similar articles
- What stands out in a scene? A study of human explicit saliency judgment. Vision Res. 2013;91:62-77. doi: 10.1016/j.visres.2013.07.016. PMID: 23954536
- Eyes always attract attention but gaze orienting is task-dependent: evidence from eye movement monitoring. Neuropsychologia. 2007;45(5):1019-28. doi: 10.1016/j.neuropsychologia.2006.09.004. PMID: 17064739
- Task and context determine where you look. J Vis. 2007;7(14):16.1-20. doi: 10.1167/7.14.16. PMID: 18217811
- Evidence for two distinct mechanisms directing gaze in natural scenes. J Vis. 2012;12(4):9. doi: 10.1167/12.4.9. PMID: 22510977
- Objects predict fixations better than early saliency. J Vis. 2008;8(14):18.1-26. doi: 10.1167/8.14.18. PMID: 19146319
Cited by
- The role that composition plays in determining how a viewer looks at landscape art. J Eye Mov Res. 2020;13(2). doi: 10.16910/jemr.13.2.13. PMID: 33828794
- Learning to Model Task-Oriented Attention. Comput Intell Neurosci. 2016;2016:2381451. doi: 10.1155/2016/2381451. PMID: 27247561
- Individual fixation tendencies in person viewing generalize from images to videos. Iperception. 2022;13(6):20416695221128844. doi: 10.1177/20416695221128844. PMID: 36353505
- Fixation-pattern similarity analysis reveals adaptive changes in face-viewing strategies following aversive learning. Elife. 2019;8:e44111. doi: 10.7554/eLife.44111. PMID: 31635690
- Attention to faces in images is associated with personality and psychopathology. PLoS One. 2023;18(2):e0280427. doi: 10.1371/journal.pone.0280427. PMID: 36791081