On the ability of standard and brain-constrained deep neural networks to support cognitive superposition: a position paper
- PMID: 39712129
- PMCID: PMC11655761
- DOI: 10.1007/s11571-023-10061-1
Abstract
The ability to co-activate (or "superpose") multiple conceptual representations is a fundamental function that we constantly rely upon; it is crucial in complex cognitive tasks requiring multi-item working memory, such as mental arithmetic, abstract reasoning, and language comprehension. As such, an artificial system aspiring to implement any of these aspects of general intelligence should be able to support this operation. I argue here that standard, feed-forward deep neural networks (DNNs) are unable to implement this function, whereas an alternative, fully brain-constrained class of neural architectures spontaneously exhibits it. On the basis of novel simulations, this proof-of-concept article shows that deep, brain-like networks trained with biologically realistic Hebbian learning mechanisms display the spontaneous emergence of internal circuits (cell assemblies) having features that make them natural candidates for supporting superposition. Building on previous computational modelling results, I also argue that modern DNNs trained with gradient descent are, in contrast, generally unable to co-activate their internal representations, and I offer an explanation as to why. While deep brain-constrained neural architectures spontaneously develop the ability to support superposition as a result of (1) neurophysiologically accurate learning and (2) cortically realistic between-area connections, backpropagation-trained DNNs appear to be unsuited to implement this basic cognitive operation, arguably necessary for abstract thinking and general intelligence. The implications of this observation are briefly discussed in the larger context of existing and future artificial intelligence systems and neuro-realistic computational models.
Keywords: Artificial cognitive system; Brain-constrained modelling; Cell assembly; Concept combination; General intelligence; Multi-item working memory; Semantic representations.
© The Author(s) 2024.
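The core mechanism the abstract appeals to, Hebbian learning giving rise to cell assemblies that can be co-activated, can be illustrated with a minimal sketch. This is not the paper's brain-constrained model (which uses deep, cortically connected areas); it is a deliberately simplified toy: a single recurrent layer of binary neurons trained with a plain Hebbian outer-product rule on two non-overlapping patterns, after which partial cues complete each assembly, and presenting both cues at once yields their superposition.

```python
import numpy as np

# Toy illustration (my simplification, not the paper's architecture):
# Hebbian learning (dW = eta * pre * post) on two non-overlapping patterns
# forms two cell assemblies in a recurrent weight matrix. Each assembly can
# be completed from a partial cue, and both can be co-activated at once.

n = 20                                   # number of neurons
W = np.zeros((n, n))                     # recurrent weights

pattern_a = np.zeros(n); pattern_a[:8] = 1.0    # assembly A: neurons 0-7
pattern_b = np.zeros(n); pattern_b[12:] = 1.0   # assembly B: neurons 12-19

eta = 0.05
for _ in range(100):                     # repeated co-presentation
    for p in (pattern_a, pattern_b):
        W += eta * np.outer(p, p)        # Hebbian outer-product update
np.fill_diagonal(W, 0.0)                 # no self-connections

def recall(cue, steps=5, theta=0.5):
    """Iteratively complete a cue through the learned recurrent weights."""
    x = cue.copy()
    for _ in range(steps):
        x = (W @ x > theta).astype(float)
    return x

# Partial cues for each assembly:
cue_a = np.zeros(n); cue_a[:3] = 1.0
cue_b = np.zeros(n); cue_b[12:15] = 1.0

print(recall(cue_a))            # completes assembly A
print(recall(cue_b))            # completes assembly B
print(recall(cue_a + cue_b))    # both assemblies active: superposition
```

Because the two assemblies share no neurons and no cross-connections were strengthened, activating both simultaneously produces no interference; in a backpropagation-trained feed-forward network there is no analogous attractor structure to complete and hold multiple representations at once, which is the contrast the paper develops.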