An Extended Normalization Model of Attention Accounts for Feature-Based Attentional Enhancement of Both Response and Coherence Gain

Philipp Schwedhelm et al. PLoS Comput Biol. 2016 Dec 15;12(12):e1005225. doi: 10.1371/journal.pcbi.1005225

Abstract

Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively.
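The core normalization computation behind the NMoA can be sketched in a few lines of Python. The sketch below follows the general Reynolds-Heeger form (attention-weighted excitatory drive divided by a suppressive drive pooled over feature preference, plus a semi-saturation constant); the kernel widths, attentional gain, and semi-saturation value are illustrative choices, not the paper's fitted parameters.

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Preferred motion directions of a model population (degrees).
directions = np.linspace(-180, 180, 361)

# Stimulus drive: population response to a single motion direction at 0 deg.
stimulus_drive = gaussian(directions, 0, 30)

# Feature-based attention field: multiplicative gain centred on the
# attended direction; a narrow focus corresponds to a narrow field.
attention_field = 1 + 0.5 * gaussian(directions, 0, 20)

excitatory = stimulus_drive * attention_field

# Suppressive drive: excitatory drive pooled broadly over direction.
suppressive = np.convolve(excitatory, gaussian(directions, 0, 60), mode="same")
suppressive /= suppressive.max()

sigma = 0.1  # semi-saturation constant (illustrative)
response = excitatory / (suppressive + sigma)
```

Because attention enters both the numerator and, via pooling, the denominator, the model can trade off response gain against input gain depending on the relative widths of the stimulus, attention, and suppression kernels.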


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Illustration of coherence response functions, relating behavioral and/or physiological responses to signal strength.
An attentional enhancement would be visible as a change in response gain (A) and/or coherence gain (B) on the psychometric or neurometric function.
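The distinction between the two panels can be made concrete with a small sketch of a Naka-Rushton coherence response function, the form used for the fits later in the paper; all parameter values here are illustrative only.

```python
import numpy as np

def naka_rushton(c, d_max, c50, n, baseline=0.5):
    """Coherence response function: performance rises from a guessing
    baseline toward an asymptote d_max as motion coherence c increases."""
    return baseline + (d_max - baseline) * c**n / (c**n + c50**n)

c = np.linspace(0.01, 1.0, 100)

# Invalid-cue reference curve (illustrative parameter values).
unattended = naka_rushton(c, d_max=0.85, c50=0.30, n=2.0)

# Response gain (panel A): attention raises the asymptote d_max.
response_gain = naka_rushton(c, d_max=0.95, c50=0.30, n=2.0)

# Coherence gain (panel B): attention lowers c50, shifting the curve
# leftward while leaving the asymptote essentially unchanged.
coherence_gain = naka_rushton(c, d_max=0.85, c50=0.20, n=2.0)
```

Plotting the three curves reproduces the qualitative signatures in the figure: a vertical stretch for response gain versus a leftward shift for coherence gain.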
Fig 2
Fig 2. Experimental protocol.
Human observers performed a direction discrimination task, reporting the rotational direction change between the motion direction shown in stimulus display 2 and the corresponding motion component of stimulus display 1. Black arrows indicate two example direction components embedded in transparent motion display 1, one of which is slightly rotated and shown again in display 2. Subjects were cued as to which of the two motion directions of the transparent motion display was likely to be the relevant one. Cues indicated either a relatively small range of possible directions (right panel, narrow focus cues) or a wide range of possibly relevant motion directions (broad focus cues). The displayed motion was always jittered around the cued direction, so the cue itself was uninformative about the precise direction of the relevant motion. In addition, cues indicated the correct motion component with 75% validity, making it worthwhile for subjects to process both motion components of stimulus display 1.
Fig 3
Fig 3. Attention improves performance, especially when it is focused on a small range of directions.
Bars indicate mean discrimination performance of all six observers, pooled across all levels of coherence. Colors indicate cue type. For each cue type, there is a significant difference between validly and invalidly cued trials, indicating that the cue led to deployment of feature-based attention. In addition, the two types of cues (narrow and broad focus cues) led to a significant difference in discrimination performance for validly, but not invalidly, cued trials. Error bars indicate plus/minus one standard error. P values correspond to paired t-tests.
Fig 4
Fig 4. A narrow focus of attention causes coherence gain, while a broad focus does not.
Fits indicate coherence response functions for pooled performance across 6 subjects. Data points are the mean discrimination performance across subjects for each tested attentional condition, cue validity and coherence level. Panel A corresponds to the narrow focus cue type (single headed arrow) and panel B to the broad focus cue type (three headed arrow). Performance (broad and narrow conditions) was fitted with four dependent Naka-Rushton equations, sharing a jointly optimized slope. Significance values indicate differences in Naka-Rushton fit coefficients of per-subject fits (see also Fig 5). When comparing invalidly and validly cued trials, increases in the asymptotic performance at high levels of coherence indicate response gain effects, while decreases in coherence level at half maximum indicate coherence gain effects. Error bars of data points indicate plus/minus one standard error, crosses around coefficient indicators represent individual coefficients obtained from per-subject fittings of the coherence response function.
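The per-subject fitting can be sketched with SciPy. The sketch below fits a single curve to synthetic data, whereas the paper fits four dependent curves sharing a jointly optimized slope; the coherence levels, parameter values, and noise level here are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(c, d_max, c50, n):
    # Discrimination performance rising from chance (0.5) toward d_max.
    return 0.5 + (d_max - 0.5) * c**n / (c**n + c50**n)

# Synthetic data standing in for one subject and one condition.
coherence = np.array([0.02, 0.05, 0.1, 0.2, 0.3, 0.5, 0.8, 1.0])
rng = np.random.default_rng(0)
observed = naka_rushton(coherence, 0.9, 0.25, 2.0) \
           + rng.normal(0, 0.01, coherence.shape)

# Fit d_max (asymptote), c50 (coherence at half maximum) and slope n;
# bounds keep the coefficients in a sensible range.
popt, _ = curve_fit(naka_rushton, coherence, observed,
                    p0=[0.8, 0.3, 2.0],
                    bounds=([0.5, 0.01, 0.5], [1.0, 2.0, 5.0]))
d_max_hat, c50_hat, n_hat = popt
```

Comparing the fitted d_max and c50 between validly and invalidly cued trials then gives the response gain and coherence gain measures described in the caption.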
Fig 5
Fig 5. Population effects are also evident in single subjects.
Data points indicate per-subject fit coefficients c50 (A) and dmax (B), corresponding to the coherence level at half-maximum performance and the asymptotic performance, respectively. For each subject, two Naka-Rushton equations per cue type were fit to the psychophysical data, yielding four informative coefficients. A decrease in c50 between validly and invalidly cued trials indicates a coherence gain effect, and an increase in dmax a response gain effect. Dashed lines connect data points originating from the same subjects.
Fig 6
Fig 6. Task performance across coherences.
(A) Performance for groups of trials that differ in how far off the cued direction the test direction occurred. The possible range of test-cue differences was divided into three evenly spaced groups (close, medium, far). Lines above bars represent pairwise comparisons and stars indicate significant differences between adjacent bars. Error bars indicate plus/minus one standard error. (B) Like A, but groups were defined based on the differences between cue and sample.
Fig 7
Fig 7. Model predictions of coherence response functions for individual fittings to the empirical performance of 6 subjects.
Data points with plus/minus one standard error are the mean discrimination performance across subjects for each tested attentional condition, cue validity and coherence level. Panel A corresponds to the narrow focus cue type (single headed arrow) and panel B to the broad focus cue type (three headed arrow). The two evaluated models are the original NMoA with 5 free parameters and an NMoA with optimal, yet biologically implausible, suppressive tuning width (NMoA free, 6 free parameters). Note the prediction of reduced response gain for the broad focus condition (panel B) in both models.
Fig 8
Fig 8. Model predictions of coherence response functions for two extended Normalization Models.
The NMoA+ciN model (7 free parameters) includes a coherence-independent contribution of feature-based attention to normalization, while the NMoA+cdN model (6 free parameters) includes a weighted contribution of tuned normalization. Panels and data points as in Fig 7.
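For concreteness, one schematic reading of the two extensions is sketched below on top of a toy Reynolds-Heeger-style normalization stage. The denominators, parameter names (gamma, w), and all numerical values are illustrative guesses at the general forms described in the caption, not the paper's actual equations.

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

directions = np.linspace(-180, 180, 361)
stimulus_drive = gaussian(directions, 0, 30)       # population drive
attention_field = 1 + 0.5 * gaussian(directions, 0, 20)
excitatory = stimulus_drive * attention_field
sigma0 = 0.1                                       # semi-saturation

def pooled(drive, width):
    # Suppressive drive pooled over direction preference.
    s = np.convolve(drive, gaussian(directions, 0, width), mode="same")
    return s / s.max()

# NMoA+ciN: a stimulus-independent attentional term is added to the
# normalization pool, so its contribution does not scale with coherence.
gamma = 0.2
resp_ciN = excitatory / (pooled(excitatory, 120)
                         + gamma * (attention_field - 1) + sigma0)

# NMoA+cdN: normalization is direction-tuned, mixing a narrow pooling
# kernel with the broad one.
w = 0.5
resp_cdN = excitatory / ((1 - w) * pooled(excitatory, 120)
                         + w * pooled(excitatory, 40) + sigma0)
```

Either modification lets the suppressive drive track the attended direction, which is what allows attention to shift c50 (coherence gain) in addition to scaling dmax.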

