J Neurosci. 2018 Aug 22;38(34):7365-7374. doi: 10.1523/JNEUROSCI.0153-18.2018. Epub 2018 Jul 13.

Deep(er) Learning

Shyam Srinivasan et al. J Neurosci. 2018.

Abstract

Animals thrive in noisy environments with finite resources. The necessity to function under resource constraints has led evolution to design animal brains (and bodies) to be optimal in their use of computational power while remaining adaptable to their environmental niche. A key process undergirding this ability to adapt is learning. Although a complete characterization of the neural basis of learning is still out of reach, scientists have for nearly a century used the brain as inspiration to design artificial neural networks capable of learning, deep learning being a case in point. In this viewpoint, we advocate that deep learning can be further enhanced by incorporating and tightly integrating five fundamental principles of neural circuit design and function: optimizing the system to environmental need and making it robust to environmental noise, customizing learning to context, modularizing the system, learning without supervision, and learning using reinforcement strategies. We illustrate how animals integrate these learning principles using the fruit fly olfactory learning circuit, one of nature's best-characterized and most highly optimized schemes for learning. Incorporating these principles may not only improve deep learning but also expose common computational constraints. Used judiciously, deep learning can become yet another effective tool for understanding how and why brains are designed the way they are.

Figures

Figure 1.
A schematic of a deep learning neural network for classifying images. a, The network consists of many simple computing nodes, each simulating a neuron, organized in a series of layers. Neurons in each layer receive inputs from neurons in the immediately preceding layer, with each input weighted by the connection strength between the two neurons. A neuron is activated when the sum of its input activity exceeds a threshold, and in turn contributes to the activity of neurons in successive layers. In the figure, the leftmost layer encodes the input, in this case faces. The rightmost layer produces the output, in this case whether the photo is of Albert Einstein. The weights between neurons are tuned and refined by training on millions of labeled example faces. During each trial, the connection weights are adjusted by backpropagation to produce the right output. After sufficient training, each successive layer of the network learns to recognize increasingly complex features (e.g., from lips, noses, and eyes to whole faces) and the network classifies correctly. The layer features are adapted from Lee et al. (2009). b, Schematic indicating how connection weights between successive layers are altered during training by the backpropagation algorithm to minimize error and produce the right output. The alteration proceeds backwards through the network using gradient descent.
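To make the forward pass and weight updates described above concrete, here is a minimal sketch in Python/NumPy of a two-layer network trained by backpropagation and gradient descent on a toy classification task. The network size, the squared-error loss, the learning rate, and the synthetic data are illustrative assumptions, not details from the paper or figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for images: 8 features per example, binary label
# (illustrative only; the figure's face-recognition task needs far more data).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two weight matrices: input -> hidden -> output.
W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(500):
    # Forward pass: each layer sums its weighted inputs and applies a
    # threshold-like nonlinearity, as in the caption.
    h = sigmoid(X @ W1)    # hidden-layer activity
    out = sigmoid(h @ W2)  # network output

    # Backpropagation of the squared error: the error signal flows
    # backwards through the layers via the chain rule.
    d_out = (out - y) * out * (1 - out)  # error at the output layer
    grad_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated to the hidden layer
    grad_W1 = X.T @ d_h

    # Gradient descent: move each weight against its error gradient.
    W2 -= lr * grad_W2 / len(X)
    W1 -= lr * grad_W1 / len(X)

print("training accuracy:", ((out > 0.5) == y).mean())
```

On this linearly separable toy problem the loop converges quickly; a real image classifier follows the same recipe with more layers, convolutional weights, and vastly more labeled data.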
Figure 2.
Fly olfactory associative learning schematic. Odor information from the fly's nose filters down to the mushroom body (MB), activating a sparse set of Kenyon cells (KCs) that code for a particular odor. These KCs, in turn, synapse onto mushroom body output neurons (MBONs) that influence approach or avoidance behavior. During olfactory conditioning, simultaneous presentation of an aversive stimulus, such as an electric shock, and a neutral odor activates both the shock-responsive dopaminergic neuron (DAN) and the KCs encoding the odor. The DANs release dopamine, which modulates the KC/MBON synapse (inset). Repeated training alters the strength of the synapse over the long term. Thus, after training, odor presentation alone is sufficient to activate the MBON that influences an aversive response. Black circles represent active neurons and synapses; gray circles represent inactive neurons and synapses. Each line of synapses between a DAN and an MBON indicates a separate compartment. Increasing the gain of the hypothesized recurrent circuit from MBONs to DANs (dotted line) reduces the amount of training required for learning.
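The caption's circuit logic lends itself to an equally small sketch. The Python/NumPy toy below assumes a sparse KC code for one odor, a shock-driven DAN dopamine signal that depresses the currently active KC→MBON synapses (a three-factor rule: presynaptic activity gated by dopamine), and an approach-driving MBON whose weakened response after training tips behavior toward avoidance. The population sizes, the multiplicative depression rule, and the approach/avoidance readout are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_kc = 2000      # number of Kenyon cells (illustrative)
sparsity = 0.05  # roughly 5% of KCs respond to a given odor

# Sparse KC code for one odor: a small random subset of cells is active.
odor_kc = np.zeros(n_kc)
odor_kc[rng.choice(n_kc, int(sparsity * n_kc), replace=False)] = 1.0

# KC -> MBON synaptic weights; assume this MBON drives approach behavior.
w = np.ones(n_kc)

def mbon_drive(kc_activity, weights):
    """Approach-MBON output: weighted sum of its active KC inputs."""
    return kc_activity @ weights

print("approach drive before training:", mbon_drive(odor_kc, w))

# Conditioning trials: odor + shock. The shock-responsive DAN releases
# dopamine, which depresses only the synapses of the currently active KCs.
lr = 0.3
for trial in range(5):
    dopamine = 1.0                     # DAN signal driven by the shock
    w -= lr * dopamine * odor_kc * w   # depress active KC->MBON synapses
    w = np.clip(w, 0.0, None)

# After training, the odor alone drives the approach MBON only weakly,
# tipping the approach/avoidance balance toward avoidance.
print("approach drive after training:", mbon_drive(odor_kc, w))
```

Adding a recurrent MBON→DAN gain term (the dotted line in the figure) would amplify the dopamine signal on later trials and, as the caption notes, reduce the number of trials needed for learning.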

