J Vis. 2014 Sep 5;14(11):6. doi: 10.1167/14.11.6.

Template changes with perceptual learning are driven by feature informativeness

Ilmari Kurki et al. J Vis. 2014.

Abstract

Perceptual learning changes the way the human visual system processes stimulus information. Previous studies have shown that the human brain's weightings of visual information (the perceptual template) become better matched to the optimal weightings. However, the dynamics of the template changes are not well understood. We used the classification image method to investigate whether visual field or stimulus properties govern the dynamics of the changes in the perceptual template. A line orientation discrimination task where highly informative parts were placed in the peripheral visual field was used to test three hypotheses: (1) The template changes are determined by the visual field structure, initially covering stimulus parts closer to the fovea and expanding toward the periphery with learning; (2) the template changes are object centered, starting from the center and expanding toward edges; and (3) the template changes are determined by stimulus information, starting from the most informative parts and expanding to less informative parts. Results show that, initially, the perceptual template contained only the more peripheral, highly informative parts. Learning expanded the template to include less informative parts, resulting in an increase in sampling efficiency. A second experiment interleaved parts with high and low signal-to-noise ratios and showed that template reweighting through learning was restricted to stimulus elements that are spatially contiguous to parts with initial high template weights. The results suggest that the informativeness of features determines how the perceptual template changes with learning. Further, the template expansion is constrained by spatial proximity.

Keywords: classification image; perceptual learning; psychophysics.

Figures

Figure 1
Stimulus in the position-noise paradigm and perceptual learning. A mean baseline orientation of the line was determined by sampling a leftward- or rightward-tilted line. The magnitude and sign of the tilt (i.e., the orientation of the underlying line) were varied using the method of constant stimuli. The final stimulus was constructed by varying the horizontal positions of the 16 elements forming the line (A). Independent random position-noise values were added to the stimulus elements (B), generating a noisy tilted line (C). The observer's task was to judge whether the noisy line was tilted left or right. (D) The black line represents mean performance (d′) over five sessions (10 observers) in experiment 1. The green line represents the mean percentage improvement in d′ relative to the first session. (E) Perceptual learning in experiment 2. The black line is mean performance over five sessions (five observers); the green line represents mean percentage improvement.
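The generative procedure in panels A–C can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and all numeric parameters (line length, noise SD) are illustrative assumptions.

```python
import numpy as np

def make_stimulus(tilt_deg, n_elements=16, line_len=8.0, noise_sd=0.3, rng=None):
    """Sketch of the position-noise stimulus: n_elements dots forming a
    tilted line, each jittered horizontally by independent Gaussian noise.
    Parameter values are illustrative, not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    # Vertical positions of the elements, centered on fixation.
    y = np.linspace(-line_len / 2, line_len / 2, n_elements)
    # Horizontal offsets implied by the underlying tilted line (panel A).
    x_signal = y * np.tan(np.deg2rad(tilt_deg))
    # Independent horizontal position noise per element (panel B).
    noise = rng.normal(0.0, noise_sd, n_elements)
    # Noisy tilted line actually shown to the observer (panel C).
    return x_signal + noise, noise
```

The recorded per-trial noise vectors are what the classification-image analysis later correlates with the observer's responses.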
Figure 2
Classification images averaged across observers in experiment 1. Blue curves correspond to the first session (day), red curves to the last (fifth) session, and green curves to the third session. Error bars represent 1 standard error of the mean (SEM). Asterisks mark classification-image weights with a significant change between the first and last sessions (p < 0.05, corrected for multiple comparisons). (A) Mean classification image for eight elements, 10 observers, and three sessions. (B) Normalized classification-image weights for all observers. The ideal template is plotted with a dashed line. (C) Mean classification images for all five sessions.
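The estimator behind this figure can be illustrated with a minimal weighted-sums sketch (a simplification in the spirit of Murray et al., 2002; the simulated observer, its template, and all parameter values are hypothetical):

```python
import numpy as np

def classification_image(noise, resp):
    """Minimal weighted-sums classification image: the mean noise field
    preceding 'right' responses minus the mean preceding 'left' responses.
    noise: (n_trials, n_elements) position-noise values
    resp:  (n_trials,) responses coded +1 ('right') / -1 ('left')"""
    return noise[resp > 0].mean(axis=0) - noise[resp < 0].mean(axis=0)

# A simulated linear observer with a known (hypothetical) template:
# the estimator should recover that template up to noise and scale.
rng = np.random.default_rng(1)
template = np.linspace(-1.0, 1.0, 16)       # hypothetical true weights
noise = rng.normal(0.0, 1.0, (20000, 16))   # external noise fields
internal = rng.normal(0.0, 0.5, 20000)      # additive internal noise
resp = np.sign(noise @ template + internal)
ci = classification_image(noise, resp)
```

With enough trials, `ci` is nearly proportional to the simulated observer's template, which is why the fitted classification images can be compared directly against the ideal template.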
Figure 3
Individual results in experiment 1. Classification images were fitted with exponential functions (solid lines). d1 = initial discrimination performance (d′) in the first session; Δd′ = percentage change in discrimination performance between the first and last sessions; θ1 = width of the template fit (degrees) in the first session; θ5 = width of the template fit in the last session; p = p-value of the nested likelihood test for template equality in the first and last sessions.
Figure 4
Comparison of template change and perceptual learning. (A) Internal-to-external noise ratio as a function of session (averaged across observers). A downward trend can be seen, but the effect was not statistically significant (p > 0.05). (B) Average sampling efficiency, computed by cross-correlating the estimated perceptual template and ideal template, increased significantly across sessions (one-tailed p = 0.007). (C) Perceptual template efficiency is correlated with performance changes across sessions, ρ = 0.60, p = 0.03. (D) Large individual differences in both performance and amount of learning were observed. Higher performance in the first session was related to lower performance improvement across sessions, ρ = −0.73, p = 0.02. (E) Predicted d′ for the linear integrator model using estimated classification images and internal noise plotted against observed d′ for 10 observers (average of five sessions). Classification images predict observed performance, ρ = 0.74, p < 0.01, and on average, the prediction is only 2% higher than observed performance. (F) Average model prediction error for each session. Learning does not significantly increase the error (p = 0.41), suggesting that a linear integrator model with a template estimated from classification images and internal noise estimate can explain the majority of the observed performance and learning.
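Panels B and E build on two standard quantities for a linear-integrator observer. A minimal sketch, assuming a squared normalized cross-correlation convention for efficiency (the paper's exact normalization may differ, and the function names are hypothetical):

```python
import numpy as np

def sampling_efficiency(w_est, w_ideal):
    """Efficiency proxy: squared normalized cross-correlation between the
    estimated and ideal templates (1.0 = perfectly matched weights)."""
    r = (w_est @ w_ideal) / (np.linalg.norm(w_est) * np.linalg.norm(w_ideal))
    return r ** 2

def predicted_dprime(w, signal_diff, ext_sd, int_sd):
    """d' of a linear integrator: the template's response to the mean
    signal difference between the two alternatives, divided by the SD of
    its response, combining external position noise and internal noise."""
    drive = w @ signal_diff
    total_sd = np.sqrt(ext_sd**2 * (w @ w) + int_sd**2)
    return drive / total_sd
```

Under this model, reweighting the template toward the ideal raises `sampling_efficiency` and hence predicted d′, which is the logic of the comparison in panel E.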
Figure 5
Stimuli and classification images in experiment 2. (A) Stimulus in experiment 2 (shown without external noise). We simplified the stimulus to four SNR groups (neighboring elements had the same SNR). Group d (top/bottom) had the highest SNR, group b the second highest, group c the third highest, and group a the lowest. (B) Average of classification images for five observers. Blue indicates the first session and red indicates the last (fifth) session; the dashed line indicates the ideal template. (C) Mean weights at different groups for the first (blue) and last (red) sessions. (D) Individual observer templates for the first (blue) and last (red) sessions. d1 = initial discrimination performance (d′) in the first session; Δd′ = percentage change in discrimination performance between the first and last sessions; p = p-value of the nested likelihood test for template equality in the first and last sessions.
Figure A1
GLM versus weighted-sums simulation results. Bars represent the mean correlation error between the true and estimated templates. Purple bars: weighted sums; green bars: GLM. Bar groups represent different numbers of criteria. Upper row: simulation with one MOCS signal level. Lower row: simulation with five levels (800 trials; 160 trials/level). Error bars represent 1 SEM.
Figure A2
Comparison of the GLM and weighted-sums classification-image methods with empirical data. (A) Averages of classification images estimated using weighted sums (Murray et al., 2002). (B) Averages of classification images estimated using the GLM (Knoblauch & Maloney, 2008). The red curve is the average classification image for the first session; the blue curve is the average classification image for the last session. Error bars represent 1 SEM. The shapes of the classification images are similar, but the GLM results show less interindividual variance, suggesting that they contain less estimation error.
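In the GLM approach, the classification image is the weight vector of a logistic regression of each trial's binary response on that trial's noise field. A self-contained sketch, fitted here with plain gradient ascent for illustration (a real analysis would use a proper GLM solver; the function name and settings are assumptions):

```python
import numpy as np

def glm_classification_image(noise, resp01, n_iter=2000, lr=0.5):
    """Classification image as logistic-regression weights: P('right') is
    modeled as a sigmoid of intercept + noise @ w, fitted by gradient
    ascent on the mean log-likelihood.
    resp01: (n_trials,) responses coded 1 ('right') / 0 ('left')."""
    n, k = noise.shape
    X = np.column_stack([np.ones(n), noise])   # prepend intercept column
    w = np.zeros(k + 1)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted P('right')
        w += lr * (X.T @ (resp01 - p)) / n     # log-likelihood gradient
    return w[1:]                               # drop the intercept
```

Unlike the raw weighted-sums difference of means, the GLM framing also yields standard errors and nested model comparisons, which is how the template-equality tests reported in the figures can be carried out.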
Figure A3
Control analyses. (A) Classification images without spatial averaging across the top and mirror-bottom of the stimulus. Average classification image (10 subjects) for the first session (blue) and the last session (red). Estimated template weights are plotted against element position (x-axis). (B) Classification images without spatial averaging were analyzed separately for trials where the target was tilted toward the center of the screen (green curve) and trials where the target was tilted away from the center (blue curve). (C) Classification images were analyzed separately for the left-tilted target (green curve) and the right-tilted target (blue curve). All subjects, levels, and trials were pooled together. (D) Classification images were analyzed separately for each of five target “baseline” tilt levels; red = no tilt; green = maximum tilt. Classification images are an average of all 10 observers and all five sessions.
