Contrasting gist-based and template-based guidance during real-world visual search

Brett Bahle et al. J Exp Psychol Hum Percept Perform. 2018 Mar;44(3):367-386.
doi: 10.1037/xhp0000468. Epub 2017 Aug 10.

Abstract

Visual search through real-world scenes is guided both by a representation of target features and by knowledge of the semantic properties of the scene (derived from scene gist recognition). In 3 experiments, we compared the relative roles of these 2 sources of guidance. Participants searched for a target object in the presence of a critical distractor object. The color of the critical distractor either matched or mismatched (a) the color of an item maintained in visual working memory for a secondary task (Experiment 1), or (b) the color of the target, cued by a picture before search commenced (Experiments 2 and 3). Capture of gaze by a matching distractor served as an index of template guidance. There were 4 main findings: (a) The distractor match effect was observed from the first saccade on the scene, (b) it was independent of the availability of scene-level gist-based guidance, (c) it was independent of whether the distractor appeared in a plausible location for the target, and (d) it was preserved even when gist-based guidance was available before scene onset. Moreover, gist-based, semantic guidance of gaze to target-plausible regions of the scene was delayed relative to template-based guidance. These results suggest that feature-based template guidance is not limited to plausible scene regions after an initial, scene-level analysis.


Figures

Figure 1
Trial event sequence for Experiment 1. Each trial began with a cue presentation (700 ms). After a 1000-ms delay, a memory color was presented for 500 ms for a post-search memory test. After another 700-ms delay, a search scene was presented, which always contained the target (marked by a green square, not present in the experimental image). The memory color could either match or mismatch a critical distractor object in the scene (marked by a red square, not present in the experimental image). After participants reported the orientation of an “F” (Arial font) superimposed on the target object, there was a 500-ms delay before they completed a two-alternative forced-choice color memory test.
Figure 2
Data figures for Experiment 1. From left to right, data are plotted for the no-cue, category-label-cue, and picture-cue conditions. A) Time (ms) to first fixate the target as a function of distractor color match. B) Overall probability of fixating the critical distractor as a function of color match. C) Proportions of fixations landing on the critical distractor object as a function of ordinal fixation index and color match. D) The distance, in visual degrees, of the current fixation to the target and critical distractor object as a function of ordinal fixation index. Error bars indicate within-subjects 95% confidence intervals (Morey, 2008).
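The within-subjects 95% confidence intervals cited here (Morey, 2008) follow the Cousineau normalization with Morey's bias correction. As an illustration only (the paper does not provide code), a minimal sketch of this computation, assuming a subjects-by-conditions data matrix and the hypothetical function name `within_subject_ci`:

```python
import numpy as np
from scipy import stats

def within_subject_ci(data, confidence=0.95):
    """Half-width of Cousineau-Morey within-subject confidence intervals.

    data: 2-D array-like, rows = subjects, columns = conditions.
    Returns one CI half-width per condition.
    """
    data = np.asarray(data, dtype=float)
    n_subjects, n_conditions = data.shape
    # Cousineau (2005): remove between-subject variability by centering
    # each subject's scores on the grand mean.
    normalized = data - data.mean(axis=1, keepdims=True) + data.mean()
    # Morey (2008): correct the variance bias introduced by normalization.
    correction = n_conditions / (n_conditions - 1)
    sem = np.sqrt(correction * normalized.var(axis=0, ddof=1) / n_subjects)
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n_subjects - 1)
    return t_crit * sem
```

Because the normalization subtracts each subject's mean, a constant offset between subjects (e.g., one participant being uniformly slower) no longer inflates the error bars, which is the point of the within-subjects interval.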
Figure 3
Data figures for the analysis of semantic guidance for Experiment 1. A) Elapsed time to first fixation on a target-plausible region as a function of cue type. B) Elapsed number of fixations until first fixation on a target-plausible region as a function of cue type. C) Proportion of fixations on target-plausible regions as a function of cue type and ordinal fixation index. Error bars are standard errors of the mean.
Figure 4
Example stimuli from Experiment 2. The top row depicts a match, plausible scene. The target (blender, outlined in green) matches the color of the critical distractor (outlined in red). Additionally, the critical distractor is in a plausible location for the blender. The bottom row depicts a mismatch, implausible scene. The target (cutting board, outlined in green) mismatches the color of the critical distractor object (outlined in red). The critical distractor is in an implausible location for the cutting board.
Figure 5
Data figures for Experiment 2. From left to right, data are plotted for the plausible and implausible conditions. A) Time (ms) to first fixate the target as a function of distractor color match. B) Overall probability of fixating the critical distractor as a function of color match. C) Proportions of fixations landing on the critical distractor object as a function of ordinal fixation index and color match. D) The distance, in visual degrees, of the current fixation to the target and critical distractor object as a function of ordinal fixation index. Error bars indicate within-subjects 95% confidence intervals (Morey, 2008).
Figure 6
Trial event sequence for Experiment 3. Each trial began with a 250-ms low-pass filtered preview of the upcoming search scene, allowing participants to extract the scene gist. The preview was followed by a 50-ms mask. After a 700-ms ISI, the picture cue of the search target was displayed for 1000 ms followed by a 500-ms ISI. Finally, the search scene appeared. On a match trial, there was a color match between the target object (backpack, outlined in green) and the critical distractor object (chair, outlined in red).
Figure 7
Data figures for Experiment 3. From left to right, data are plotted for the preview and no-preview conditions. A) Time (ms) to first fixate the target as a function of distractor color match. B) Overall probability of fixating the critical distractor as a function of color match. C) Proportions of fixations landing on the critical distractor object as a function of ordinal fixation index and color match. D) The distance, in visual degrees, of the current fixation to the target and critical distractor object. Error bars indicate within-subjects 95% confidence intervals (Morey, 2008).
