Learning Invariant Object and Spatial View Representations in the Brain Using Slow Unsupervised Learning
- PMID: 34366818
- PMCID: PMC8335547
- DOI: 10.3389/fncom.2021.686239
Abstract
First, neurophysiological evidence for the learning of invariant representations in the inferior temporal visual cortex is described. This includes object and face representations with invariance for position, size, lighting, view and morphological transforms in the temporal lobe visual cortex; global object motion in the cortex in the superior temporal sulcus; and spatial view representations in the hippocampus that are invariant with respect to eye position, head direction, and place. Second, computational mechanisms that enable the brain to learn these invariant representations are proposed. For the ventral visual system, one key adaptation is the use of information available in the statistics of the environment in slow unsupervised learning to learn transform-invariant representations of objects. This contrasts with deep supervised learning in artificial neural networks, which uses training with thousands of exemplars forced into different categories by neuronal teachers. Similar slow learning principles apply to the learning of global object motion in the dorsal visual system leading to the cortex in the superior temporal sulcus. The learning rule that has been explored in VisNet is an associative rule with a short-term memory trace. The feed-forward architecture has four stages, with convergence from stage to stage. This type of slow learning is implemented in the brain in hierarchically organized competitive neuronal networks with convergence from stage to stage, with only 4-5 stages in the hierarchy. Slow learning is also shown to help the learning of coordinate transforms using gain modulation in the dorsal visual system extending into the parietal cortex and retrosplenial cortex. Representations are learned that are in allocentric spatial view coordinates of locations in the world and that are independent of eye position, head direction, and the place where the individual is located. 
This enables hippocampal spatial view cells to use idiothetic (self-motion) signals for navigation when the view details are obscured for short periods.
Keywords: convolutional neural network; face cells; hippocampus; inferior temporal visual cortex; navigation; object recognition; spatial view cells; unsupervised learning.
Copyright © 2021 Rolls.
Conflict of interest statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Similar articles
- Spatial coordinate transforms linking the allocentric hippocampal and egocentric parietal primate brain systems for memory, action in space, and navigation. Hippocampus. 2020 Apr;30(4):332-353. doi: 10.1002/hipo.23171. PMID: 31697002
- Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet. Front Comput Neurosci. 2012 Jun 19;6:35. doi: 10.3389/fncom.2012.00035. PMID: 22723777. Free PMC article.
- How does the brain rapidly learn and reorganize view-invariant and position-invariant object representations in the inferotemporal cortex? Neural Netw. 2011 Dec;24(10):1050-61. doi: 10.1016/j.neunet.2011.04.004. PMID: 21596523
- Invariant visual object recognition: a model, with lighting invariance. J Physiol Paris. 2006 Jul-Sep;100(1-3):43-62. doi: 10.1016/j.jphysparis.2006.09.004. PMID: 17071062. Review.
- Hippocampal spatial view cells for memory and navigation, and their underlying connectivity in humans. Hippocampus. 2023 May;33(5):533-572. doi: 10.1002/hipo.23467. PMID: 36070199. Free PMC article. Review.
Cited by
- The human posterior cingulate, retrosplenial, and medial parietal cortex effective connectome, and implications for memory and navigation. Hum Brain Mapp. 2023 Feb 1;44(2):629-655. doi: 10.1002/hbm.26089. PMID: 36178249. Free PMC article.
- The memory systems of the human brain and generative artificial intelligence. Heliyon. 2024 May 24;10(11):e31965. doi: 10.1016/j.heliyon.2024.e31965. PMID: 38841455. Free PMC article. Review.
- A ventromedial visual cortical 'Where' stream to the human hippocampus for spatial scenes revealed with magnetoencephalography. Commun Biol. 2024 Aug 25;7(1):1047. doi: 10.1038/s42003-024-06719-z. PMID: 39183244. Free PMC article.
- Neuroscientific insights about computer vision models: a concise review. Biol Cybern. 2024 Dec;118(5-6):331-348. doi: 10.1007/s00422-024-00998-9. PMID: 39382577. Review.
- Hippocampal Discoveries: Spatial View Cells, Connectivity, and Computations for Memory and Navigation, in Primates Including Humans. Hippocampus. 2025 Jan;35(1):e23666. doi: 10.1002/hipo.23666. PMID: 39690918. Free PMC article. Review.