Building on prior knowledge without building it in
- PMID: 29342701
- DOI: 10.1017/S0140525X17000176
Abstract
Lake et al. propose that people rely on "start-up software," "causal models," and "intuitive theories" built using compositional representations to learn new tasks more efficiently than some deep neural network models. We highlight the many drawbacks of a commitment to compositional representations and describe our continuing effort to explore how the ability to build on prior knowledge and to learn new tasks efficiently could arise through learning in deep neural networks.
Comment in
- Ingredients of intelligence: From classic debates to an engineering roadmap. Behav Brain Sci. 2017 Jan;40:e281. doi: 10.1017/S0140525X17001224. PMID: 29342708
Comment on
- Building machines that learn and think like people. Behav Brain Sci. 2017 Jan;40:e253. doi: 10.1017/S0140525X16001837. Epub 2016 Nov 24. PMID: 27881212
