Proc Natl Acad Sci U S A. 2020 May 19;117(20):11167-11177.
doi: 10.1073/pnas.1912734117. Epub 2020 May 4.

Exemplar learning reveals the representational origins of expert category perception


Elliot Collins et al. Proc Natl Acad Sci U S A.

Abstract

Irrespective of whether one has substantial perceptual expertise for a class of stimuli, an observer invariably encounters novel exemplars from this class. To understand how novel exemplars are represented, we examined the extent to which previous experience with a category constrains the acquisition and nature of representation of subsequent exemplars from that category. Participants completed a perceptual training paradigm with either novel other-race faces (category of experience) or novel computer-generated objects (YUFOs) that included pairwise similarity ratings at the beginning, middle, and end of training, and a 20-d visual search training task on a subset of category exemplars. Analyses of pairwise similarity ratings revealed multiple dissociations between the representational spaces for those learning faces and those learning YUFOs. First, representational distance changes were more selective for faces than YUFOs; trained faces exhibited greater magnitude in representational distance change relative to untrained faces, whereas this trained-untrained distance change was much smaller for YUFOs. Second, there was a difference in where the representational distance changes were observed; for faces, representations that were closer together before training exhibited a greater distance change relative to those that were farther apart before training. For YUFOs, however, the distance changes occurred more uniformly across representational space. Last, there was a decrease in dimensionality of the representational space after training on YUFOs, but not after training on faces. Together, these findings demonstrate how previous category experience governs representational patterns of exemplar learning as well as the underlying dimensionality of the representational space.
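The paper's central measure, the change in pairwise similarity between rating sessions, can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the sign convention (negative change = rated less similar, i.e., moving farther apart) follows the figure captions, and the trained/untrained split is assumed here to compare pairs of trained exemplars against all remaining pairs.

```python
import numpy as np

def mean_distance_change(pre, post, trained_idx):
    """Mean change in pairwise similarity for trained vs. other pairs.

    `pre` and `post` are symmetric similarity-rating matrices
    (e.g., 1 = very different, 7 = very similar). A negative change
    means items were rated less similar after training, i.e., they
    moved farther apart in representational space.
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    n = pre.shape[0]
    change = post - pre

    # Pairs in which both items were trained.
    trained_mask = np.zeros((n, n), bool)
    trained_mask[np.ix_(trained_idx, trained_idx)] = True
    np.fill_diagonal(trained_mask, False)      # ignore self-pairs

    # All remaining pairs (including mixed trained-untrained pairs).
    other_mask = ~trained_mask
    np.fill_diagonal(other_mask, False)

    return change[trained_mask].mean(), change[other_mask].mean()
```

For example, if only one trained pair drops from a rating of 4 to 2 between sessions, the trained-pair mean change is -2 while the remaining pairs show no change, mirroring the selective trained-untrained dissociation reported for faces.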

Keywords: category learning; mental representations; object recognition; perceptual learning; visual expertise.

Conflict of interest statement

The authors declare no competing interest.

Figures

Fig. 1.
Stimuli for novel object (Upper) and ORF (Lower) experiments. The novel face image set consists of 30 young faces of East Asian descent. The novel object set (YUFOs) also contains 30 objects. In this figure, novel objects are not to scale with face stimuli.
Fig. 2.
Learning paradigm flow diagram. Representational space was quantified at the beginning, middle, and end of training, using 2 d of pairwise similarity ratings at each point. Perceptual training involved a visual search task completed over 20 d, with a break after day 10 to quantify the representational space with similarity ratings. Participants were instructed to complete one session per day, consecutively, for 26 d.
Fig. 3.
Participants completed 240 match-to-sample trials per training session to facilitate learning of four specific novel objects. The viewing angle of the sample image always differed from the viewing angles of all objects in the visual search images. In the face version of this experiment, faces differed in viewing angle in the same manner, but also differed in expression.
Fig. 4.
Group-level summary of performance (inverse efficiency) during visual search paradigm across training sessions. Lower scores represent better performance. Error bars represent ±1 SEM. Note that individuals completed a second group of similarity ratings between sessions 10 and 11. Participants also switched to a new random set of four objects to learn for the second half of training, starting at session 11. Here, lines connecting sessions 10 and 11 show the cost associated with changes in training stimuli.
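The caption does not define inverse efficiency, but it is conventionally computed as mean response time on correct trials divided by the proportion of correct trials; that conventional definition is assumed in this sketch.

```python
import numpy as np

def inverse_efficiency(rts_ms, correct):
    """Inverse efficiency score: mean correct-trial RT / accuracy.

    Lower scores indicate better performance (fast and accurate),
    matching the direction described in the figure caption.
    """
    rts = np.asarray(rts_ms, float)
    correct = np.asarray(correct, bool)
    accuracy = correct.mean()                  # proportion correct
    mean_correct_rt = rts[correct].mean()      # RT on correct trials only
    return mean_correct_rt / accuracy
```

A participant who responds in 600 ms on average on correct trials with 75% accuracy scores 800, worse than one who is equally fast but fully accurate.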
Fig. 5.
Group-level similarity matrices, averaged across participants, for YUFOs (Upper) and ORFs (Lower), derived from pairwise similarity ratings (Euclidean distance in similarity space) across the three rating sessions. Red (1, very different) and blue (7, very similar) correspond to opposite ends of the rating scale. A general shift toward red (lower ratings, more negative distance change) corresponds to objects being rated less similar and hence moving farther apart. Black boxes correspond to the structure of the stimulus sets. For YUFOs, stimuli 1 to 6 are from family 1, 7 to 18 from family 2, and 19 to 30 from family 3. For ORFs, stimuli 1 to 15 are male faces and 16 to 30 are female faces.
Fig. 6.
Mean distance changes in representational space. Distance changes are separated by experiment version (ORFs/YUFOs) and by training section (T1/T2). Negative distance changes correspond to moving farther apart in representational space. Error bars represent ±1 SEM.
Fig. 7.
Correlations between pretraining similarity and representational distance change. Both ORFs and YUFOs are included across two consecutive training sections (T1 and T2). Negative correlations correspond to greater separation in representational space (moving apart) for objects that were closer together in space initially. Error bars correspond to ±1 SEM.
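The correlation described in this caption can be sketched as a Pearson correlation over the unique (upper-triangle) stimulus pairs; the function name and matrix layout below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def pre_similarity_vs_change(pre, post):
    """Correlate pretraining pairwise similarity with its change over training.

    `pre` and `post` are symmetric similarity matrices. A negative r
    means the pairs that started out most similar moved apart the most,
    the pattern the paper reports for faces.
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    iu = np.triu_indices_from(pre, k=1)   # each unique pair counted once
    x = pre[iu]                           # pretraining similarity
    y = (post - pre)[iu]                  # change (negative = moved apart)
    return np.corrcoef(x, y)[0, 1]
```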
Fig. 8.
Principal component analysis (PCA) of the group-level similarity matrices from each of the three rating sessions. Components are plotted, on the Left, by variance explained, in decreasing order. On the Right, the cumulative variance explained by each additional component is plotted.
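One common way to obtain variance-explained curves like those described here is an eigendecomposition of the covariance of the (column-centered) similarity matrix, treating each stimulus's row of ratings as an observation. This is an assumed reconstruction for illustration, not the authors' pipeline.

```python
import numpy as np

def variance_explained(sim):
    """Per-component and cumulative variance explained for a PCA of a
    symmetric similarity matrix (rows = stimuli, columns = features)."""
    sim = np.asarray(sim, float)
    centered = sim - sim.mean(axis=0, keepdims=True)   # center each column
    cov = centered @ centered.T / (sim.shape[0] - 1)   # covariance of rows
    eigvals = np.linalg.eigvalsh(cov)[::-1]            # descending order
    eigvals = np.clip(eigvals, 0.0, None)              # guard numerical noise
    ratios = eigvals / eigvals.sum()
    return ratios, np.cumsum(ratios)
```

A matrix with strong block structure (e.g., two stimulus families) concentrates most of its variance in the first component, so a drop in effective dimensionality, as reported for YUFOs after training, shows up as a steeper cumulative curve.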

