2019 Apr 4;13:18. doi: 10.3389/fncom.2019.00018. eCollection 2019.

Deep Learning With Asymmetric Connections and Hebbian Updates


Yali Amit. Front Comput Neurosci.

Abstract

We show that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers, implicit in back-propagation, the feedback weights are separate from the feedforward weights. The feedback weights are also updated with a local rule, the same as the feedforward weights: a weight is updated solely based on the product of the activities of the units it connects. With fixed feedback weights, as proposed in Lillicrap et al. (2016), performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training, thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, so that the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with tied weights across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, corresponding to the connectivity implied by the convolutional layers, but where weights are untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights.
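The updated-random-feedback scheme described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the layer sizes, learning rate, ReLU nonlinearity, and toy target are all assumptions. The essential points from the abstract are that the feedback weights R2 are initialized at random, separately from the feedforward weights W2, and that both receive the same local Hebbian update, the product of the activities of the units each weight connects.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 8, 16, 4, 0.02   # illustrative sizes and step size

W1 = rng.normal(0.0, 0.3, (n_hid, n_in))   # feedforward, layer 1
W2 = rng.normal(0.0, 0.3, (n_out, n_hid))  # feedforward, layer 2
R2 = rng.normal(0.0, 0.3, (n_out, n_hid))  # feedback weights, independent of W2
diff0 = W2 - R2  # identical updates to W2 and R2 leave this difference fixed

for step in range(300):
    x = rng.normal(size=n_in)
    y = np.zeros(n_out)
    y[int(abs(x.sum())) % n_out] = 1.0     # arbitrary toy target

    h1 = W1 @ x
    a1 = np.maximum(h1, 0.0)               # ReLU activation
    out = W2 @ a1

    delta2 = y - out                       # error signal at the top layer
    delta1 = (R2.T @ delta2) * (h1 > 0)    # fed back through R2, not W2.T

    # Local Hebbian updates: each weight change is a product of the
    # activities of the two units the weight connects.
    W2 += lr * np.outer(delta2, a1)
    R2 += lr * np.outer(delta2, a1)        # same local rule as W2
    W1 += lr * np.outer(delta1, x)
```

Because W2 and R2 receive identical updates, their difference is frozen at its initial value while their shared component grows, which is one way to see the abstract's observation that weights initialized equal stay equal (recovering exact back-propagation), while weights initialized at random progressively align.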

Keywords: Hebbian learning; asymmetric backpropagation; convolutional networks; feedback connections; hinge loss.


Figures

Figure 1
An illustration of the computations in a feedforward network.
Figure 2
The feedback signals δ3,k from layer 3 are combined linearly and then multiplied by σ′(h2,2) to produce the feedback signal δ2,2. Then the updates to the feedforward weights coming into unit (2, 2) and the feedback weights coming out of that unit are computed. The red arrows indicate the order of computation.
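The single-unit computation in this caption can be written out directly. A minimal numpy sketch, assuming a tanh nonlinearity and illustrative fan-in/fan-out sizes (neither is specified here): the layer-3 deltas are combined linearly through the feedback weights, gated by the activation derivative at unit (2, 2), and the resulting delta drives local updates to the weights into and out of the unit.

```python
import numpy as np

rng = np.random.default_rng(1)
lr = 0.1

a1 = rng.normal(size=5)        # presynaptic activities feeding unit (2,2)
w_in = rng.normal(size=5)      # feedforward weights into unit (2,2)
r_out = rng.normal(size=3)     # feedback weights from the layer-3 units
delta3 = rng.normal(size=3)    # feedback signals delta_{3,k}

h22 = w_in @ a1                # pre-activation of unit (2,2)
a22 = np.tanh(h22)             # activity of unit (2,2)
sigma_prime = 1.0 - a22 ** 2   # derivative of tanh at h22

# Combine the layer-3 deltas linearly, then gate by sigma'(h_{2,2}):
delta22 = sigma_prime * (r_out @ delta3)

# Local updates: each change is a product of quantities available at
# the two ends of the synapse.
w_in += lr * delta22 * a1      # feedforward weights into (2,2)
r_out += lr * delta3 * a22     # feedback weights out of (2,2)
```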
Figure 3
(Left) Each row shows 10 images from one of the 10 CIFAR10 classes. (Right) One image from each of the 100 classes in CIFAR100.
Figure 4
Evolution of error rates for simpnet as a function of epochs. Solid lines training error, dotted lines validation error. Green–BP, Blue–URFB, Red–FRFB.
Figure 5
Error rates for simple network with different update protocols and different losses. (Left) CIFAR10, (Right) CIFAR100. BP, back-propagation with softmax and cross entropy loss; BP-H, back propagation with hinge loss, all other protocols use the hinge loss as well; URFB, Updated random feedback; FRFB, Fixed random feedback. 50% refers to random connectivity.
Figure 6
Error rates for the deepnet (Left) and deepernet (Right). BP, back-propagation with softmax and cross entropy loss; BP-H, back propagation with hinge loss, all other protocols use the hinge loss as well; URFB, Updated random feedback; FRFB, Fixed random feedback. 50% refers to random connectivity.
Figure 7
Evolution of error rates for deepernet as a function of epochs. Solid lines training error, dotted lines validation error. Green–BP, Blue–URFB, Red–FRFB.
Figure 8
Corresponding filters extracted from the sparse connectivity matrix at four different locations on the 32x32 grid. Each row corresponds to a different filter.
Figure 9
Experiments with untying the convolutional layers on simpnet and deepnet_s. Blue–convolutional layers (tied), Red–untied.
Figure 10
Correlation between Wl and Rl for the three layers in simpnet. (Left) URFB, (Right) FRFB.
Figure 11
Correlation between Wl and Rl for the seven updated layers in deepnet_s. (Left) URFB, (Right) FRFB.
Figure 12
Top: comparison of log-error rates as a function of iteration for original BP and for four different values of ϵ = 0, 0.25, 0.5, 1, over three runs of the experiment. Bottom three rows: for each level of the network, the evolution of the correlation between the W and R weights, for each of the values of ϵ.

