Concept learning in a probabilistic language-of-thought. How is it possible and what does it presuppose?
- PMID: 37766667
- DOI: 10.1017/S0140525X23002029
Where does a probabilistic language-of-thought (PLoT) come from? How can we learn new concepts based on probabilistic inferences operating on a PLoT? Here, I explore these questions, sketching a traditional circularity objection to LoT and canvassing various approaches to addressing it. I conclude that PLoT-based cognitive architectures can support genuine concept learning; but, currently, it is unclear that they enjoy more explanatory breadth in relation to concept learning than alternative architectures that do not posit any LoT.
Comment in
- The language-of-thought hypothesis as a working hypothesis in cognitive science. Behav Brain Sci. 2023;46:e292. doi: 10.1017/S0140525X23002431. PMID: 37766639
Comment on
- The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences. Behav Brain Sci. 2022;46:e261. doi: 10.1017/S0140525X22002849. PMID: 36471543