Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity

Vyacheslav Demin et al. Front Neuroinform. 2018 Nov 15;12:79. doi: 10.3389/fninf.2018.00079. eCollection 2018.
Abstract

Spiking neural networks (SNNs) are believed to be highly efficient in computation and energy use, making them well suited for real-time solutions on dedicated neurochip hardware. However, there is a lack of learning algorithms for complex SNNs with recurrent connections that are comparable in efficiency with back-propagation techniques and capable of unsupervised training. Here we propose that each neuron in a biological neural network tends to maximize its activity in competition with other neurons, and we put this principle at the basis of a new SNN learning algorithm. On this basis, a spiking network with learned feed-forward, reciprocal, and intralayer inhibitory connections is applied to digit recognition on the MNIST database. We demonstrate that this SNN can be trained without a teacher after a short supervised initialization of the weights by the same algorithm. We also show that the neurons group into families with a hierarchical structure corresponding to the different digit classes and their associations. This property is expected to be useful for reducing the number of layers in deep neural networks and for modeling the formation of various functional structures in a biological nervous system. A comparison of the learning properties of the suggested algorithm with those of the Sparse Distributed Representation approach shows similarity in coding but also some advantages of the former. The basic principle of the proposed algorithm is believed to be practically applicable to the construction of much more complicated and diverse task-solving SNNs. We refer to this new approach as "Family-Engaged Execution and Learning of Induced Neuron Groups," or FEELING.
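The abstract states the principle but not the update rule itself, so the following is only an illustrative rate-based toy of "activity maximization under competition": every name, constant, and the Hebbian-like update below are assumptions for illustration, not the FEELING rule.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 784, 100
W = rng.normal(0.0, 0.01, size=(n_hid, n_in))         # feed-forward weights (assumed init)
L = -0.2 * (np.ones((n_hid, n_hid)) - np.eye(n_hid))  # fixed lateral inhibition (assumed)

def present(x, eta=1e-3):
    # Feed-forward drive, then one round of lateral competition.
    a = np.maximum(W @ x, 0.0)
    a = np.maximum(a + L @ a, 0.0)
    # Hebbian-like ascent: each neuron moves its weights toward inputs on
    # which it stays active despite inhibition from its rivals, raising its
    # own future activity on similar inputs.
    return a, W + eta * np.outer(a, x)

In this toy, the inhibition term makes weight growth competitive: a neuron silenced by its rivals receives no update, which is one simple reading of maximizing activity in competition.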

Keywords: classification; digit recognition; neuron clustering; spiking neural networks; supervised learning; unsupervised learning.


Figures

Figure 1
“784−100−10” architecture of the model with forward, lateral, and reciprocal connections (the network is fully interconnected; only a few connections are shown). During training, an additional supervised current can be injected into the output neuron corresponding to the class of the image presented at the input.
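As a sketch of the wiring the caption implies, the weight blocks below spell out one possible set of matrices; the initialization ranges and the inhibition strength are placeholders, not values from the paper.

import numpy as np

rng = np.random.default_rng(1)
W_in_hid  = rng.uniform(0.0, 0.1, size=(100, 784))  # forward: input -> hidden
W_hid_out = rng.uniform(0.0, 0.1, size=(10, 100))   # forward: hidden -> output
W_out_hid = rng.uniform(0.0, 0.1, size=(100, 10))   # reciprocal: output -> hidden
W_hid_hid = -0.1 * (1.0 - np.eye(100))              # intralayer inhibition, hidden
W_out_out = -0.1 * (1.0 - np.eye(10))               # intralayer inhibition, output

def teacher_current(label, amplitude=1.0):
    # Supervised current injected into the output neuron of the class
    # being presented; the amplitude is a placeholder value.
    i = np.zeros(10)
    i[label] = amplitude
    return i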
Figure 2
Approximation of instant and average neuron activity. The instant (green line) and average (blue line) firing activities are calculated as exponential moving averages of the spike time series. The spike train here was obtained from a Poisson distribution with a firing probability of 0.3 at every time step (a 300 Hz rate). The difference between the instant and average firing rates while presenting a Poisson-distributed input signal with constant probability has a noisy effect on the network training, because most of the update rules depend on this difference. The moments of spikes are highlighted by vertical dashed lines.
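A minimal sketch of the two estimates, assuming both are exponential moving averages of the same spike train with a fast and a slow time constant; the constants below are assumptions, not the paper's values.

import numpy as np

rng = np.random.default_rng(2)
T = 1000
spikes = (rng.random(T) < 0.3).astype(float)  # Poisson spike train, p = 0.3 per step

def ema(x, tau):
    # Exponential moving average with time constant tau (in time steps).
    y = np.zeros_like(x)
    for t in range(1, len(x)):
        y[t] = y[t - 1] + (x[t] - y[t - 1]) / tau
    return y

instant = ema(spikes, tau=10)    # fast EMA: the "instant" activity (green line)
average = ema(spikes, tau=100)   # slow EMA: the "average" activity (blue line)
delta = instant - average        # the noisy difference the update rules see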
Figure 3
Learning curves on the MNIST dataset, showing the recognition accuracy on the test set for the supervised mode (green) and the partially unsupervised mode (red). In the latter, a few images (400 in this example) are presented at the beginning of training in the supervised mode (with a teacher's current), after which training continues without the supervised current. The learning curve of a feed-forward formal neural network (blue) with the same “784−100−10” architecture is shown for comparison of the convergence speed.
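The schedule in the caption reduces to a simple switch; in this sketch, train_step and mnist_training_stream are hypothetical placeholders, and teacher_current follows the Figure 1 sketch above.

N_SUPERVISED = 400  # supervised presentations at the start, as in the example

def train_step(image, teacher=None):
    ...  # hypothetical: one image presentation, with optional injected current

for n, (image, label) in enumerate(mnist_training_stream):
    # Teacher current for the first 400 images only, then pure unsupervised.
    teacher = teacher_current(label) if n < N_SUPERVISED else None
    train_step(image, teacher=teacher)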
Figure 4
Ablation study. Learning curves for different connectivity architectures are presented. In the legend, “output(+)” means the presence of inhibitory connections in the output layer, “reciprocal(−),” the absence of reciprocal connections from the output to the hidden layer, etc.
Figure 5
Last-layer weights visualization. The first row contains reconstructed maximizing images for the output neurons (Nekhaev and Demin, 2017). The second row is a simple product of the two forward weight matrices: one of size 784 × 100 and the other of size 100 × 10. The third and fourth rows are visualizations of the forward and reciprocal 100 × 10 weights. The last row is their difference: positive (green dots) and negative (red dots) values are shown.
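The second-row visualization is straightforward to reproduce; here is a sketch with random stand-ins for the learned matrices.

import numpy as np

W1 = np.random.rand(784, 100)   # stand-in: learned input -> hidden weights
W2 = np.random.rand(100, 10)    # stand-in: learned hidden -> output weights
effective = W1 @ W2             # the 784 x 10 product shown in the second row
per_class = effective.T.reshape(10, 28, 28)  # one 28 x 28 map per output neuron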
Figure 6
Hidden-layer neuron visualization. The feed-forward weight values from the input to all 100 neurons of the hidden layer (organized here into a 10 × 10 square for convenience).
Figure 7
Hierarchical clustering of the neuron families. (Left) The tree of family clusters built by the level of competition between them, reflected in the magnitude of the negative weights between the corresponding hidden-layer neurons (marked on the vertical axis). (Right) The 10 clusters corresponding to the minimum level of competition between hidden neurons (the cut-off weight value was chosen equal to −0.2).
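A hedged sketch of this clustering using standard SciPy hierarchy tools; turning inhibitory weight magnitude into a distance, the linkage method, and the stand-in weights are all assumptions, not the authors' procedure.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

W = -0.4 * np.random.rand(100, 100)  # stand-in inhibitory hidden-hidden weights
W = (W + W.T) / 2.0                  # symmetrize so the matrix can act as a distance
np.fill_diagonal(W, 0.0)

condensed = squareform(-W)           # stronger inhibition -> larger distance
tree = linkage(condensed, method="average")
families = fcluster(tree, t=0.2, criterion="distance")  # cut the tree at |w| = 0.2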
