Vis cogn. 2019;27(5-8):487-501. doi: 10.1080/13506285.2019.1645779. Epub 2019 Aug 1.

Learned feature variance is encoded in the target template and drives visual search


Phillip Witkowski et al. Vis cogn. 2019.

Abstract

Real-world visual search targets are frequently imperfect perceptual matches to our internal target templates. For example, the same friend on different occasions is likely to wear different clothes, hairstyles, and accessories, but some of these may be more likely to vary than others. The ability to deal with template-to-target variability is important to visual search in natural environments, but we know relatively little about how feature variability is handled by the attentional system. In these studies, we test the hypothesis that top-down attentional biases are sensitive to the variance of target feature dimensions over time and prioritize information from less-variable dimensions. On each trial, subjects were shown a target cue composed of colored dots moving in a specific direction, followed by a working memory probe (30% of trials) or a visual search display (70% of trials). Critically, the target features in the visual search display differed from the cue, with one feature drawn from a distribution narrowly centered over the cued feature (low-variance dimension) and the other sampled from a broader distribution (high-variance dimension). The results demonstrate that subjects used knowledge of the likely cue-to-target variance to set template precision and bias attentional selection. Moreover, an individual's working memory precision for each feature predicted search performance. Our results suggest that observers are sensitive to the variance of feature dimensions within a target and use this information to weight mechanisms of attentional selection.
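To make the design concrete, the following Python sketch simulates the trial structure described above. The 70/30 trial split comes from the abstract; the specific distribution widths, the dimension-to-variance mapping, and the use of wrapped-normal sampling on circular feature spaces are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative standard deviations (degrees); the paper's actual
# distribution widths are given in its Methods, not here.
LOW_VAR_SD = 10.0    # narrow cue-to-target distribution
HIGH_VAR_SD = 40.0   # broad cue-to-target distribution

def make_trial():
    """Generate one trial: a cue object, followed by either a memory
    probe (30% of trials) or a search target whose features deviate
    from the cue by dimension-specific amounts (70% of trials)."""
    cue = {"color": rng.uniform(0, 360), "motion": rng.uniform(0, 360)}
    if rng.random() < 0.30:
        return {"type": "memory_probe", "cue": cue}
    # Here color is treated as the low-variance dimension and motion
    # as the high-variance one; half of subjects had the reverse mapping.
    target = {
        "color": (cue["color"] + rng.normal(0, LOW_VAR_SD)) % 360,
        "motion": (cue["motion"] + rng.normal(0, HIGH_VAR_SD)) % 360,
    }
    return {"type": "search", "cue": cue, "target": target}

print(make_trial())
```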

Keywords: Feature-Based Attention; Template; Variability; Visual Attention.


Figures

Figure 1:
a) Schematic of the task. At the beginning of each trial, subjects were shown a cue object defined by a randomly selected color and direction of motion. On 70% of trials, this was followed by a visual search display. On 30% of trials, they were asked to report the remembered color or motion of the cue object. b) Distributions of target and distractor features for Experiment 1. Zero represents no difference from the cue. Half of the subjects saw motion follow the “low-variance target” distribution and color follow the “high-variance target” distribution; the other half experienced the reverse.
Figure 2:
Response errors from probe trials. Errors were normalized to baseline values and compared. The representation of the low-variance dimension was more precise than that of the high-variance dimension.
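As a rough illustration of how such probe errors might be computed, the sketch below measures signed angular error on a circular feature dimension and normalizes it against a baseline value. Treating both color and motion direction as circular dimensions, and normalizing by simple subtraction of a baseline error, are assumptions made here for illustration; the paper's baseline procedure is described in its Methods.

```python
import numpy as np

def circular_error(response_deg, true_deg):
    """Signed angular error in degrees, wrapped to [-180, 180)."""
    return (response_deg - true_deg + 180) % 360 - 180

# Hypothetical probe responses and true cue values for one subject.
responses = np.array([12.0, 355.0, 20.0])
truths = np.array([0.0, 0.0, 0.0])
abs_errors = np.abs(circular_error(responses, truths))  # [12, 5, 20]

# One plausible normalization: mean probe error relative to a baseline
# error measured in a separate block (value here is hypothetical).
baseline_error = 8.0
print(abs_errors.mean() - baseline_error)
```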
Figure 3:
Regression slopes from a multilevel model measuring the effect of stimulus properties on response time. Bar heights show the size of the fixed effect, while error bars show the variance of the subject-specific random effects. The model tested the effects of cue-to-target similarity (CT), target-distractor similarity (TD), and distractor-distractor similarity (DD) for the low-variance (lv) and high-variance (hv) dimensions separately. The model showed significant effects of lvCT and hvCT only, with more of the variance in RT explained by lvCT.
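A minimal sketch of how such a multilevel model could be fit, using statsmodels in Python on synthetic data. The column names, simulated effect sizes, and the random-effects structure (a subject-grouped intercept plus random slopes for the two CT terms) are assumptions for illustration and need not match the paper's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic trial-level data; column names are illustrative, not the
# paper's. Each row stands in for one correct search trial.
n_subj, n_trials = 20, 100
n = n_subj * n_trials
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "lvCT": rng.normal(size=n), "hvCT": rng.normal(size=n),
    "lvTD": rng.normal(size=n), "hvTD": rng.normal(size=n),
    "lvDD": rng.normal(size=n), "hvDD": rng.normal(size=n),
})
# Build RT so that lvCT carries a larger effect than hvCT, echoing
# the pattern reported in the caption (effect sizes are made up).
df["rt"] = (700 + 60 * df["lvCT"] + 25 * df["hvCT"]
            + rng.normal(0, 80, n))

# Fixed effects for each similarity regressor, with a subject-grouped
# random intercept and random slopes for the two CT terms.
model = smf.mixedlm("rt ~ lvCT + hvCT + lvTD + hvTD + lvDD + hvDD",
                    data=df, groups=df["subject"],
                    re_formula="~lvCT + hvCT")
print(model.fit().summary())
```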
Figure 4:
Correlations between individual coefficients from the RT regression models and the difference in memory precision between feature dimensions. The difference in memory precision was calculated for each subject by subtracting the mean response error for the low-variance dimension from that for the high-variance dimension. High values indicate that a subject had a more precise template for the low-variance dimension relative to the high-variance dimension. (a) The difference in memory precision correlated positively with lvCT and (b) negatively with hvCT, suggesting that subjects with templates biased toward the low-variance dimension relied more on that dimension during visual search.
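The difference score and its correlation with the model coefficients can be illustrated as follows. All numbers are hypothetical placeholders; only the direction of the subtraction (high-variance error minus low-variance error) follows the caption.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-subject quantities: mean absolute probe error in
# each dimension, and the subject-specific lvCT slope from the model.
err_low = np.array([9.0, 7.0, 11.0, 8.0, 10.0, 6.0])
err_high = np.array([15.0, 12.0, 14.0, 10.0, 16.0, 9.0])
lvCT_slope = np.array([70.0, 55.0, 48.0, 40.0, 66.0, 35.0])

# Difference score: high-variance error minus low-variance error, so
# larger values mean a relatively more precise low-variance template.
precision_diff = err_high - err_low

r, p = pearsonr(precision_diff, lvCT_slope)
print(f"r = {r:.2f}, p = {p:.3f}")
```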
Figure 5:
Regression slopes measuring the effect of stimulus properties on (a) scan time and (b) fixation dwell time on the target, from a multilevel regression model. Bar heights show the size of the fixed effect, while error bars show the variance of the subject-specific random effects. The model tested the effects of cue-to-target similarity (CT), target-distractor similarity (TD), and distractor-distractor similarity (DD) for the low-variance (lv) and high-variance (hv) dimensions separately. Results show that while there was a small effect of lvTD, lvCT and hvCT had greater effects on scan time, indicating that cue-to-target similarity dominated scan times. However, only lvCT influenced fixation dwell times, suggesting that cue-to-target similarity in the low-variance dimension played a unique role in deciding whether a stimulus matched the target.
Figure 6:
Distributions of target and distractor features for Experiment 2 for the “color group” and the “motion group”. Zero represents no difference from the cue. The distractor distributions were generated around category boundaries to increase distractor competition.
Figure 7:
Response errors from probe trials, normalized to the baseline measures (see Methods, Experiment 1). These data replicate Experiment 1, showing that representations of the low-variance dimension were more precise than those of the high-variance dimension.
Figure 8:
Comparison of RT from Experiments 1 and 2. These data show that correct RTs in Experiment 2 were significantly longer than in Experiment 1, suggesting the overall difficulty of Experiment 2 was successfully increased by distractor competition.
Figure 9:
Regression slopes measuring the effect of stimulus properties on response time from a multilevel regression model. Bar heights show the size of the fixed effect, while error bars show the variance of the subject-specific random effects. The model tested the effect of cue-to-target similarity (CT) combined across both dimensions, and the effects of target-distractor similarity (TD) and distractor-distractor similarity (DD) for the low-variance (lv) and high-variance (hv) dimensions separately. Results show that while there was a small effect of lvDD, the combined effect of CT (the difference between the target and the way it was expected to appear) was again the primary predictor of response time variation.
Figure 10:
Subject-specific coefficients from the regression models were correlated with each subject’s difference in memory precision.
Figure 11:
Regression slopes measuring the effect of stimulus properties on (a) scan time and (b) fixation dwell time on the target, from a multilevel regression model. Bar heights show the size of the fixed effect, while error bars show the variance of the subject-specific random effects. The models show that while variation in scan time was primarily a function of distractor-distractor differences, CT was still the only regressor that explained variation in dwell time.
