Review

Learning, fast and slow

Markus Meister

Curr Opin Neurobiol. 2022 Aug:75:102555. doi: 10.1016/j.conb.2022.102555. Epub 2022 May 23.

Abstract

Animals can learn efficiently from a single experience and change their future behavior in response. However, in other instances, animals learn very slowly, requiring thousands of experiences. Here, I survey tasks involving fast and slow learning and consider some hypotheses for what differentiates the underlying neural mechanisms. It has been proposed that fast learning relies on neural representations that favor efficient Hebbian modification of synapses. These efficient representations may be encoded in the genome, resulting in a repertoire of fast learning that differs across species. Alternatively, the required neural representations may be acquired from experience through a slow process of unsupervised learning from the environment.

Conflict of interest statement

Nothing declared.

Figures

Figure 1. Learning rates.
The complexity of various tasks plotted against the number of reinforcement trials required to learn them. Note the learning rates span 4 orders of magnitude. See text and Section A for literature and calculations.
Figure 2. Pattern separation in higher dimensions.
(a) Here m different events (dots) are encoded by the firing rates of two sensory neurons (x_1 and x_2). The brain wants to classify those events into good (blue) and bad (red). In the original sensory representation that would require computing the complex region inside the dashed line. (b) After projecting the sensory data into a high-dimensional space, represented by N > m neurons (y_1, …, y_N), one can generally find a hyperplane (green) such that all the good points are on one side and the bad ones on the other. The projection from the sensory signals x_j to the over-complete representation y_i can take the form y_i = f(\sum_{j=1}^{2} w_{ij} x_j), where w_{ij} are random synaptic weights and f is some nonlinear activation function.
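A minimal sketch of the idea in panel b may help make the expansion concrete: a handful of 2-D sensory codes are pushed through random weights and a tanh nonlinearity into N > m dimensions, where read-out weights for an exact separating hyperplane can be written down for an arbitrary good/bad labeling. The parameter values, the choice of tanh for f, and the least-squares read-out below are illustrative assumptions, not taken from the paper.

```python
# Toy illustration of the expansion in Figure 2b (all values are assumptions).
import numpy as np

rng = np.random.default_rng(0)

m, N = 20, 200                            # m events, N expansion neurons (N > m)
X = rng.normal(size=(m, 2))               # 2-D sensory codes x = (x1, x2)
t = 2.0 * rng.integers(0, 2, size=m) - 1  # arbitrary labels: +1 good, -1 bad

# Random projection y_i = f(sum_j w_ij x_j), with f = tanh
W = rng.normal(size=(2, N))               # random synaptic weights w_ij
Y = np.tanh(X @ W)

# In the expanded space the m patterns are (almost surely) linearly independent,
# so weights with Y v = t exactly exist and can be computed in closed form.
v = Y.T @ np.linalg.solve(Y @ Y.T, t)
print("separable after expansion:", bool(np.all(np.sign(Y @ v) == t)))

# For comparison, a linear read-out of the raw 2-D inputs usually cannot
# realize an arbitrary labeling of 20 points.
u, *_ = np.linalg.lstsq(X, t, rcond=None)
print("fraction correct without expansion:", np.mean(np.sign(X @ u) == t))
```

The closed-form read-out is just one way to exhibit a separating hyperplane; any linear classifier trained on the expanded representation, such as a perceptron, would find one as well.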
Figure 3. Associative learning and sparseness.
(a) A simple network to learn the mapping from a set of states onto a set of actions. Stimuli are represented by n neurons that are either active or inactive, s_i ∈ {0, 1}. Similarly, actions are represented by n neurons. During learning, the network is exposed to the k desired (state, action) pairs. The synapse from a state neuron to an action neuron is incremented if both the pre- and post-synaptic neurons are active. (b) Recall of the stored associations: a state vector is presented at the input and the output of the network is compared to the k possible action vectors; the panel shows the resulting similarity matrix. In this case the state and action vectors each have only m = 1 of n = 10 active neurons, and the recall of the action associated with each state is perfect. (c) As in panel b, but each vector is represented by m = 3 active neurons. Note the extensive confusion relative to the intended mapping of states onto actions.
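The network in panel a can be sketched in a few lines. The sketch below uses the caption's n = 10 neurons and k = 10 stored (state, action) pairs, increments a synapse whenever its pre- and post-synaptic neurons are both active, and recalls by comparing the network output with every stored action vector; the random pattern generation and the dot-product comparison are illustrative assumptions, not the paper's implementation.

```python
# Toy version of the associative network in Figure 3 (recall rule is an assumption).
import numpy as np

rng = np.random.default_rng(1)

def random_patterns(k, n, m):
    """Return k distinct binary vectors of length n with exactly m active units each."""
    seen, patterns = set(), []
    while len(patterns) < k:
        idx = tuple(sorted(rng.choice(n, size=m, replace=False)))
        if idx not in seen:
            seen.add(idx)
            v = np.zeros(n, dtype=int)
            v[list(idx)] = 1
            patterns.append(v)
    return np.array(patterns)

def similarity_matrix(n=10, k=10, m=1):
    S = random_patterns(k, n, m)  # state vectors
    A = random_patterns(k, n, m)  # action vectors
    W = A.T @ S                   # Hebbian increments: W[i, j] counts co-activations
    out = S @ W.T                 # network output for every stored state
    return out @ A.T              # rows: states, columns: stored actions

for m in (1, 3):
    sim = similarity_matrix(m=m)
    correct = np.mean(sim.argmax(axis=1) == np.arange(len(sim)))
    print(f"m = {m}: fraction of states whose best match is the intended action: {correct:.2f}")
```

With m = 1 the similarity matrix comes out purely diagonal, so recall is perfect as in panel b; with m = 3 the overlaps between patterns place large entries off the diagonal, which is the confusion visible in panel c.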
Figure 4. Graphs that define behavioral tasks.
(a) A typical 2-alternative forced-choice task, with states defined by the stimuli s_1 and s_2 and the animal's actions a_1 and a_2. The square states terminate a trial with reward (green) or no reward (red). (b) A more complex task in which the animal starts in state s_1 and then navigates a binary tree by turning left (a_1), turning right (a_2), or backing up to the previous state (a_0). Only one of the end points of the tree is rewarded. The task can be implemented as navigation in a binary maze.
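The tree task in panel b can be written down as a tiny environment: states are nodes of a binary tree, a_1 and a_2 descend to the left and right child, a_0 backs up toward the root, and a single end point carries reward. The sketch below is one possible encoding; the tree depth, the heap-style state numbering, and the location of the rewarded end point are illustrative assumptions, not values from the paper.

```python
# Minimal encoding of the binary-tree task in Figure 4b (parameters are assumptions).
from dataclasses import dataclass

@dataclass
class BinaryMaze:
    depth: int = 3          # number of left/right choices from the root to an end point
    rewarded_leaf: int = 5  # index of the rewarded end point (illustrative choice)

    def step(self, state, action):
        """States are heap-indexed: the root is 1 and node s has children 2s and 2s + 1."""
        if action == "a0":                 # back up to the previous state
            next_state = max(state // 2, 1)
        elif action == "a1":               # turn left
            next_state = 2 * state
        else:                              # "a2": turn right
            next_state = 2 * state + 1
        first_leaf = 2 ** self.depth
        done = next_state >= first_leaf
        reward = 1.0 if done and next_state - first_leaf == self.rewarded_leaf else 0.0
        return next_state, reward, done

maze = BinaryMaze()
state = 1                                  # start at the root, as in the figure
for action in ("a2", "a1", "a2"):          # one walk down to an end point
    state, reward, done = maze.step(state, action)
print(state, reward, done)                 # -> 13 1.0 True for this illustrative maze
```

Heap indexing (node s has children 2s and 2s + 1) keeps each action to a single arithmetic rule, so the same code scales to deeper trees.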

