2022 Apr;46(4):e13128.
doi: 10.1111/cogs.13128.

A Computational Model of Context-Dependent Encodings During Category Learning


Paulo F Carvalho et al. Cogn Sci. 2022 Apr.

Abstract

Although current exemplar models of category learning are flexible and can capture how different features are emphasized for different categories, they still lack the flexibility to adapt to local changes in category learning, such as the effect of different sequences of study. In this paper, we introduce a new model of category learning, the Sequential Attention Theory Model (SAT-M), in which the encoding of each presented item is influenced not only by its category assignment (global context) as in other exemplar models, but also by how its properties relate to the properties of temporally neighboring items (local context). By fitting SAT-M to data from experiments comparing category learning with different sequences of trials (interleaved vs. blocked), we demonstrate that SAT-M captures the effect of local context and predicts when interleaved or blocked training will result in better testing performance across three different studies. Comparatively, ALCOVE, SUSTAIN, and a version of SAT-M without locally adaptive encoding provided poor fits to the results. Moreover, we evaluated the direct prediction of the model that different sequences of training change what learners encode and determined that the best-fit encoding parameter values match learners' looking times during training.
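To make the abstract's core idea concrete, the following is a minimal sketch of an exemplar model in which each stored item's per-feature encoding weights depend on how that item relates to the temporally preceding item (local context). This is an illustrative toy, not the paper's SAT-M equations: the GCM-style similarity function, the specific weight rule (features that differ from the previous item are encoded more strongly), the parameter values, and the four-item category structure are all assumptions made for demonstration.

```python
import numpy as np

def encoding_weights(item, prev_item, eps_same=0.3, eps_diff=1.0):
    """Hypothetical local-context rule: features that differ from the
    temporally preceding item are encoded more strongly than features
    shared with it. First item gets uniform weights."""
    if prev_item is None:
        return np.ones(item.shape, dtype=float)
    return np.where(item != prev_item, eps_diff, eps_same)

def similarity(probe, exemplar, weights, c=2.0):
    """GCM-style exponential similarity over a weighted city-block distance."""
    d = np.sum(weights * np.abs(probe - exemplar))
    return np.exp(-c * d)

def study(sequence):
    """Encode each studied item with weights set by its local context."""
    memory, prev = [], None
    for item, label in sequence:
        memory.append((item, encoding_weights(item, prev), label))
        prev = item
    return memory

def prob_A(probe, memory):
    """Luce-choice probability of category A from summed similarities."""
    sims = {"A": 0.0, "B": 0.0}
    for exemplar, weights, label in memory:
        sims[label] += similarity(probe, exemplar, weights)
    return sims["A"] / (sims["A"] + sims["B"])

# Toy binary-featured categories: features 0-1 are diagnostic, 2-3 are not.
A1, A2 = np.array([1, 1, 0, 0]), np.array([1, 1, 0, 1])
B1, B2 = np.array([0, 0, 0, 0]), np.array([0, 0, 0, 1])

interleaved = [(A1, "A"), (B1, "B"), (A2, "A"), (B2, "B")]
blocked = [(A1, "A"), (A2, "A"), (B1, "B"), (B2, "B")]

probe = np.array([1, 1, 1, 0])  # novel item consistent with category A
p_inter = prob_A(probe, study(interleaved))
p_block = prob_A(probe, study(blocked))
```

Under interleaving, successive items come from different categories, so the between-category diagnostic features differ from the previous item and receive high encoding weight; under blocking, within-category repetition down-weights them. In this toy setup the interleaved sequence therefore classifies the novel A-consistent probe more confidently than the blocked one, mirroring the direction of the sequencing effects the model is fit to.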

Keywords: Attention; Category learning models; Encoding; Interleaving; Sequencing.


Conflict of interest statement

The authors have no conflict of interest to report.

Figures

Fig. 1
Example stimuli used in Carvalho and Goldstone (2014b). Stimuli in the high similarity set (top panel) differed on only two features between any two categories and only one feature among items of the same category. Stimuli in the low similarity set (bottom panel) differed in many features between categories and among items of the same category. Gray boxes (not presented to participants) highlight the category‐defining features. For details of the structure of the categories used see: https://osf.io/s87tf/.
Fig. 2
Fitting results (dots) for ALCOVE (top panel) and SUSTAIN (bottom panel) overlaid on the empirical results (bars) from Carvalho and Goldstone. Best‐fitting parameter values are presented in Table 1.
Fig. 3
Fitting results (dots) for SAT‐M (top panel) and SAT‐M‐R (bottom panel) overlaid on the empirical results (bars) from Carvalho and Goldstone. SAT‐M provides a much better fit to the data than SAT‐M‐R.
Fig. 4
Fitting results (dots) for SAT‐M overlaid on the empirical results (bars) from Carpenter and Mueller (Experiment 1), comparing New vs. Old items (panel a) and blocked vs. interleaved study (panel b).
Fig. 5
Example stimuli from Zulkiply and Burt (2013). For modeling, we converted the stimuli into a feature space with nine dimensions: shape of the main object (dimension 1), color of the other category objects (dimension 2), match of shapes across category objects (dimension 3), position of the category object (dimension 4), and five distractor shape dimensions (dimensions 5–9). For details, see text and Appendix D.
Fig. 6
Fitting results (dots) for SAT‐M overlaid on the empirical results (bars) from Zulkiply and Burt (Experiment 2).
Fig. 7
Example stimuli from one of the families used by Carvalho and Goldstone (2017). The left panel shows an example of each of the categories studied. The right panel shows an example of each of the novel items presented during the transfer task (both transfer items belong to category A; equivalent items existed for category B). For details of the structure of the categories used, see: https://osf.io/2n8gy/.
Fig. 8
Results from total looking time during study in Carvalho and Goldstone's (2017) Experiment 3 (left panel) and summed best‐fit values for encoding weights (ε) when SAT‐M is fit to the categorization results of Carvalho and Goldstone's Experiment 3 (right panel).

