Front Neurosci. 2021 Oct 29;15:695357. doi: 10.3389/fnins.2021.695357. eCollection 2021.

Characterization of Generalizability of Spike Timing Dependent Plasticity Trained Spiking Neural Networks

Biswadeep Chakraborty et al. Front Neurosci. 2021.
Abstract

Spiking Neural Networks (SNNs) can be trained with Spike Timing Dependent Plasticity (STDP), a neuro-inspired unsupervised learning method used in various machine learning applications. This paper studies the generalizability of the STDP learning process through the Hausdorff dimension of the trajectories of the learning algorithm. It analyzes how the choice of STDP learning model and the associated hyper-parameters affect the generalizability of an SNN, and uses this analysis to develop a Bayesian optimization approach that tunes the hyper-parameters of an STDP model to improve the generalizability of the trained SNN.
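For readers unfamiliar with the learning rule the abstract refers to, here is a minimal sketch of a pairwise STDP weight update in the style of add-STDP; all constants (A_plus, A_minus, tau_plus, tau_minus, w_max) are illustrative defaults, not the hyper-parameter values studied in the paper.

```python
import numpy as np

# Minimal sketch of a pairwise additive STDP update (in the style of
# add-STDP, Song et al., 2000). Constants are illustrative placeholders.

def stdp_update(w, delta_t, A_plus=0.01, A_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Weight change for one pre/post spike pair.

    delta_t = t_post - t_pre in ms. Pre-before-post (delta_t > 0) gives
    long-term potentiation (LTP); post-before-pre gives depression (LTD).
    """
    if delta_t >= 0:
        dw = A_plus * np.exp(-delta_t / tau_plus)    # LTP branch
    else:
        dw = -A_minus * np.exp(delta_t / tau_minus)  # LTD branch
    return float(np.clip(w + dw, 0.0, w_max))        # hard weight bounds
```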

Keywords: Bayesian optimization; Hausdorff dimension; addSTDP; generalization; leaky integrate and fire; logSTDP; multSTDP; spiking neural networks.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
(A) Resulting weight distributions for log-STDP (Gilson and Fukai, 2011), mult-STDP (Van Rossum et al., 2000), and add-STDP (Song et al., 2000). (B) Plot of the functions f+ for LTP and −f− for LTD in log-STDP (blue solid curve), mult-STDP (orange dashed line), and the add-STDP model (green dash-dotted curve for depression and orange dashed curve for potentiation).
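To make the distinction between the three models concrete, here is a hedged sketch of the weight-dependence functions compared in panel (B). The constants (c_p, c_d, w0, alpha, beta) are illustrative, and the log-STDP form follows the qualitative shape described by Gilson and Fukai (2011) rather than the paper's exact parameterization.

```python
import numpy as np

# Illustrative weight-dependence functions f_plus (LTP) and f_minus (LTD).
# All constants are placeholders, not the paper's values.
w0, c_p, c_d, alpha, beta = 0.5, 1.0, 0.5, 5.0, 50.0

def add_stdp(w):
    # add-STDP (Song et al., 2000): updates independent of the current weight
    return c_p, -c_d

def mult_stdp(w):
    # mult-STDP (Van Rossum et al., 2000): LTP constant, LTD scales linearly with w
    return c_p, -c_d * w

def log_stdp(w):
    # log-STDP (Gilson and Fukai, 2011): LTD grows only logarithmically above w0,
    # which produces the long-tailed weight distribution shown in panel (A)
    f_plus = c_p * np.exp(-w / (w0 * beta))
    if w <= w0:
        f_minus = -c_d * w / w0
    else:
        f_minus = -c_d * (1.0 + np.log(1.0 + alpha * (w / w0 - 1.0)) / alpha)
    return f_plus, f_minus
```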
Figure 2
Trajectories of the α-stable Lévy process L_t^α for varying values of α.
Figure 3
Probability density functions of the α-stable Lévy process L_t^α for varying values of α.
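Figures 2 and 3 can be reproduced qualitatively with a few lines of SciPy. The sketch below simulates sample paths of a symmetric α-stable Lévy process by summing i.i.d. α-stable increments; the step size and horizon are arbitrary choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import levy_stable

def levy_trajectory(alpha, n_steps=1000, dt=1e-3, seed=0):
    """Sample path of a symmetric alpha-stable Lévy process L_t^alpha."""
    # Increments over a step of length dt scale like dt**(1/alpha).
    incs = levy_stable.rvs(alpha, beta=0.0, size=n_steps, random_state=seed)
    return np.cumsum(dt ** (1.0 / alpha) * incs)

# alpha = 2 recovers Brownian motion (up to scale); smaller alpha gives
# heavier tails and the jumpy trajectories visible in Figure 2.
paths = {a: levy_trajectory(a) for a in (2.0, 1.8, 1.5, 1.2)}
```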
Figure 4
The intensity values of the MNIST image are converted to Poisson-spike trains. The firing rates of the Poisson point process are proportional to the intensity of the corresponding pixel. These spike trains are fed as input in an all-to-all fashion to excitatory neurons. In the figure, the black shaded area from the input to the excitatory layer shows the input connections to one specific excitatory example neuron. The red shaded area denotes all connections from one inhibitory neuron to the excitatory neurons. While the excitatory neurons are connected to inhibitory neurons via one-to-one connection, each of the inhibitory neurons is connected to all excitatory ones, except for the one it receives a connection from.
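A minimal sketch of the Poisson rate coding described in this caption follows; the duration, time step, and maximum firing rate are illustrative choices, not necessarily the paper's settings.

```python
import numpy as np

def poisson_encode(image, duration_ms=350, dt_ms=1.0, max_rate_hz=63.75, seed=0):
    """Convert pixel intensities (0-255) to Poisson spike trains.

    Returns a (time_steps, n_pixels) array with spikes[t, i] == 1 when
    input neuron i fires at step t; the firing rate of each train is
    proportional to the corresponding pixel intensity.
    """
    rng = np.random.default_rng(seed)
    rates_hz = image.reshape(-1).astype(float) / 255.0 * max_rate_hz
    p_spike = rates_hz * dt_ms / 1000.0          # spike probability per step
    n_steps = int(duration_ms / dt_ms)
    return (rng.random((n_steps, rates_hz.size)) < p_spike).astype(np.uint8)
```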
Figure 5
Neuron connection weights for learned digit representations for (A) SFR = 0.9 and (B) SFR = 2.1.
Figure 6
Variation in the training loss with increasing iterations for different types of STDP models, keeping (A) SFR = 0.9 and (B) η = 0.05 fixed.
Figure 7
Change in training loss with iterations for varying scaling function ratios in the log-STDP learning process.
Figure 8
Change in training loss with iterations for varying learning rates in the log-STDP learning process.
Figure 9
(A) Impact of the scaling function ratios on generalization (results shown in Table 2). (B) Impact of the learning rates on generalization (results shown in Table 3).
Figure 10
Change in the BG index, training accuracy, and testing accuracy for add-STDP and log-STDP over function evaluations during Bayesian optimization.
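The optimization loop behind Figure 10 could be sketched with an off-the-shelf Gaussian-process optimizer such as scikit-optimize. The search space below mirrors the two hyper-parameters varied in Figures 7-9 (SFR and η), with illustrative bounds; evaluate_snn is a hypothetical stand-in for training the SNN and scoring its generalizability.

```python
from skopt import gp_minimize
from skopt.space import Real

# Two hyper-parameters studied in the paper: scaling function ratio (SFR)
# and learning rate (eta). Bounds are illustrative.
space = [
    Real(0.5, 3.0, name="sfr"),
    Real(1e-3, 1e-1, prior="log-uniform", name="eta"),
]

def evaluate_snn(params):
    sfr, eta = params
    # Hypothetical objective: a real run would train the STDP-based SNN with
    # these hyper-parameters and return training loss plus a generalization
    # penalty (e.g., derived from the BG index in Figure 10). A smooth
    # stand-in keeps this sketch runnable.
    return (sfr - 1.5) ** 2 + (eta - 0.05) ** 2

result = gp_minimize(evaluate_snn, space, n_calls=25, random_state=0)
print(result.x, result.fun)   # best (sfr, eta) found and its objective value
```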

References

    1. Aceituno P. V., Ehsani M., Jost J. (2020). Spiking time-dependent plasticity leads to efficient coding of predictions. Biol. Cybernet. 114, 43–61. doi: 10.1007/s00422-019-00813-w
    2. Allen-Zhu Z., Li Y. (2019). Can SGD learn recurrent neural networks with provable generalization? arXiv preprint arXiv:1902.01028.
    3. Allen-Zhu Z., Li Y., Liang Y. (2018). Learning and generalization in overparameterized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918.
    4. Baity-Jesi M., Sagun L., Geiger M., Spigler S., Arous G. B., Cammarota C., et al. (2018). Comparing dynamics: deep neural networks versus glassy systems, in International Conference on Machine Learning (Stockholm: PMLR), 314–323. doi: 10.1088/1742-5468/ab3281
    5. Bell C. C., Han V. Z., Sugawara Y., Grant K. (1997). Synaptic plasticity in a cerebellum-like structure depends on temporal order. Nature 387, 278–281. doi: 10.1038/387278a0
