Bayesian learning of visual chunks by human observers
- PMID: 18268353
- PMCID: PMC2268207
- DOI: 10.1073/pnas.0708424105
Abstract
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input.
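The core idea of the ideal learner is Bayesian model comparison: a pair of visual shapes is stored as a single chunk only when the data favor a "chunk" model over a model in which the shapes occur independently. The sketch below is an illustrative simplification (not the authors' actual model): it compares Beta-Bernoulli marginal likelihoods of two hypothetical models over binary presence/absence records of two shapes, A and B, across scenes. All function names and the strict all-or-none chunk assumption are assumptions of this sketch.

```python
from math import lgamma

def log_beta_bernoulli(k, n, a=1.0, b=1.0):
    """Log marginal likelihood of a specific binary sequence with k
    successes in n trials under a Beta(a, b) prior on the rate."""
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(a + k) + lgamma(b + n - k) - lgamma(a + b + n))

def log_bayes_factor_chunk(scenes):
    """scenes: list of (a_present, b_present) flags, one pair per scene.
    Returns the log Bayes factor of a strict 'chunk' model (A and B
    always appear together as one unit) over an 'independent' model
    (A and B have separate occurrence rates)."""
    n = len(scenes)
    k_a = sum(a for a, _ in scenes)
    k_b = sum(b for _, b in scenes)
    together = sum(1 for a, b in scenes if a and b)
    neither = sum(1 for a, b in scenes if not a and not b)
    # Under a strict chunk, a scene with only one of A or B is impossible,
    # so such data rule the chunk model out entirely.
    if together + neither != n:
        return float('-inf')
    log_chunk = log_beta_bernoulli(together, n)
    log_indep = log_beta_bernoulli(k_a, n) + log_beta_bernoulli(k_b, n)
    return log_chunk - log_indep
```

With scenes in which A and B always co-occur, the chunk model needs one occurrence rate where the independent model needs two, so the marginal likelihood rewards the more economical representation; this mirrors the abstract's point that the learner stores only chunks that are minimally sufficient to encode the scenes.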
Conflict of interest statement
The authors declare no conflict of interest.