Front Neuroinform. 2018 Dec 12;12:89.
doi: 10.3389/fninf.2018.00089. eCollection 2018.

BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python

Hananel Hazan et al.
Abstract

The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks in reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples of using BindsNET in practice.
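To illustrate the concise syntax described above, here is a minimal sketch of building and simulating a small two-layer network. It is written against the BindsNET API as it existed around the 0.2-era releases; argument names (e.g., inputs vs. inpts in Network.run) have changed across versions, so treat the exact calls as assumptions rather than a definitive listing.

    import torch
    from bindsnet.network import Network
    from bindsnet.network.nodes import Input, LIFNodes
    from bindsnet.network.topology import Connection
    from bindsnet.network.monitors import Monitor
    from bindsnet.encoding import poisson

    # Two layers: 100 input neurons projecting all-to-all to 50 LIF neurons.
    network = Network(dt=1.0)
    network.add_layer(Input(n=100), name="X")
    network.add_layer(LIFNodes(n=50), name="Y")
    network.add_connection(
        Connection(source=network.layers["X"], target=network.layers["Y"],
                   w=0.05 + 0.1 * torch.rand(100, 50)),
        source="X", target="Y",
    )

    # Record spikes ("s") and voltages ("v") from the output layer.
    network.add_monitor(Monitor(network.layers["Y"], state_vars=("s", "v"), time=250),
                        name="Y")

    # Encode random firing rates (in Hz) as Poisson spike trains; simulate 250 ms.
    spikes = poisson(datum=120 * torch.rand(100), time=250)
    network.run(inputs={"X": spikes}, time=250)  # keyword is `inpts` in older releases

    output_spikes = network.monitors["Y"].get("s")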

Keywords: GPU computing; PyTorch; machine learning; Python (programming language); reinforcement learning (RL); spiking networks.


Figures

Figure 1
Depiction of the BindsNET directory structure and description of major software modules.
Figure 2
A functional diagram of the Pipeline object. The four-step process involves an encoding function, network computation, conversion of network outputs into actions in an environment's action space, and a simulation step of the environment. The encoding function converts non-spiking observations from the environment into spike inputs to the network, and an action function maps network spiking activity to a non-spiking quantity: an action, which is fed back into the environment, where the procedure begins anew. Other modules play supporting roles: the network may use a learning method to update connection weights, the environment may simply be a thin wrapper around a dataset (in which case there is no feedback), and it may be desirable to plot network state variables during the reinforcement learning loop.
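The loop below is an illustrative reconstruction of what the Pipeline encapsulates, assuming the classic gym API; encode, network, and select_action are hypothetical stand-ins for the encoding function, the spiking network simulation, and the action function, not BindsNET's exact interfaces.

    import gym
    import torch

    env = gym.make("CartPole-v1")
    n_actions = env.action_space.n

    def encode(obs, time=100):
        # Hypothetical encoding function: squash the observation into [0, 1]
        # spike probabilities and draw one Bernoulli spike vector per timestep.
        p = torch.sigmoid(torch.as_tensor(obs, dtype=torch.float))
        return torch.bernoulli(p.expand(time, -1))

    w = torch.rand(4, n_actions)  # stand-in "network": a fixed random projection

    def network(spikes):
        # Stand-in for network.run(...): a real Pipeline simulates an SNN here.
        return spikes @ w

    def select_action(output):
        # Hypothetical action function: summed activity per output neuron,
        # softmax-normalized and sampled.
        probs = torch.softmax(output.sum(dim=0), dim=0)
        return int(torch.multinomial(probs, 1))

    obs = env.reset()
    done = False
    while not done:
        s = encode(obs)                     # (1) observation -> spike trains
        out = network(s)                    # (2) network computation
        a = select_action(out)              # (3) spiking activity -> action
        obs, reward, done, _ = env.step(a)  # (4) environment step; loop repeats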
Figure 3
Accompanying plots for the unsupervised training of the DiehlAndCook2015 spiking neural network architecture. The network learns prototypical examples of images from the training set, and on test images, the excitatory neuron with the most similar filter should fire the most. This network structure is able to achieve 95% accuracy on the MNIST digits (Diehl and Cook, 2015; Hazan et al., 2018). (A) Raw input and “reconstructed” input, computed by summing Poisson-distributed spike trains over the time dimension. (B) Spikes from the excitatory and inhibitory layers of the DiehlAndCook2015 model. (C) Voltages from the excitatory and inhibitory layers of the DiehlAndCook2015 model. (D) Reshaped 2D label assignments of excitatory neurons, assigned based on activity on examples from the training data. (E) Reshaped 2D connection weights from the input to the excitatory layer. The network learns distinct prototypical examples from the dataset, corresponding to the categories in the data.
Figure 4
Unsupervised learning of the MNIST handwritten digits in BindsNET. The DiehlAndCook2015 model implements a simple spike timing-dependent plasticity rule between the input and excitatory neuron populations, as well as a competitive inhibition mechanism, to learn prototypical digit filters from raw data. The DatasetEnvironment wraps the MNIST dataset object so it may be used as a component in the Pipeline. The network is trained in one pass through the 60K-example training set, with each example presented for 350 ms and state variables (voltages and spikes) reset afterward.
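A hedged sketch of the training loop this caption describes, using the DiehlAndCook2015 model shipped in bindsnet.models. Constructor defaults, the run keyword (inputs vs. inpts), and the reset method name vary across BindsNET releases, so the exact calls below are assumptions.

    import torch
    from torchvision import datasets, transforms
    from bindsnet.models import DiehlAndCook2015
    from bindsnet.encoding import poisson

    # Input layer "X" is 28*28 = 784 Poisson neurons; the excitatory/inhibitory
    # populations and their STDP-learned connections are built by the model.
    network = DiehlAndCook2015(n_inpt=784, n_neurons=100, exc=22.5, inh=17.5, dt=1.0)

    train = datasets.MNIST(".", train=True, download=True,
                           transform=transforms.ToTensor())

    time = 350  # each example is presented for 350 ms, as in the caption
    for image, _ in train:
        # Scale pixel intensities to firing rates and draw Poisson spike trains.
        spikes = poisson(datum=128 * image.view(-1), time=time)
        network.run(inputs={"X": spikes}, time=time)   # `inpts` in older releases
        network.reset_state_variables()                # `reset_()` in older releases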
Figure 5
A two-layer spiking neural network (a RealNodes object connected all-to-all with an IFNodes object) is trained with an approximate stochastic gradient descent algorithm on the Fashion-MNIST image dataset. The back-propagation algorithm operates on the summed_inputs to the groups of Nodes, while predictions are made based on the output layer's spiking activity.
Figure 6
Accompanying plots for the supervised training of a simple two-layer spiking neural network on the Fashion-MNIST dataset. The ten 28 × 28 tiled weight maps shown in (A) each correspond to a different class of Fashion-MNIST data. The plot of the input neurons' activity in (B) is simply the scaled input data, held constant over the simulation length. This network architecture, trained with stochastic gradient descent (SGD), achieves 85% test accuracy on this dataset. (A) Weights from the supervised spiking neural network trained on the Fashion-MNIST dataset. Each 28 × 28 region corresponds to the filter responsible for detecting a unique category of data. One can make out the profiles of objects depicted in the filters; e.g., shirts, sneakers, and trousers. (B) Real-valued input activity and spikes from the input and output layers of the two-layer network, respectively.
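Since BindsNET's RealNodes-based implementation is version-specific, the following plain-PyTorch sketch only illustrates the approximation the caption describes: gradients flow through real-valued summed inputs (here a single all-to-all linear map), while the predicted class is the output unit with the greatest activity, standing in for the most spikes.

    import torch
    from torch import nn
    from torchvision import datasets, transforms

    model = nn.Linear(784, 10)  # all-to-all weights from input to output layer
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    train = datasets.FashionMNIST(".", train=True, download=True,
                                  transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(train, batch_size=32, shuffle=True)

    for x, y in loader:
        summed_inputs = model(x.view(x.size(0), -1))  # differentiable surrogate
        loss = loss_fn(summed_inputs, y)              # back-prop on summed inputs
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Prediction: the output unit with the greatest activity ("most spikes").
    pred = summed_inputs.argmax(dim=1)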
Figure 7
A spiking neural network that accepts input from the BreakoutDeterministic-v4 gym Atari environment. The observations from the environment are downsampled and binarized. The history and delta keyword arguments are used to create difference images before they are converted into Bernoulli-distributed vectors of spikes, one per time step. The output layer of the network has 4 neurons, each representing a different action in the Breakout game. An action is selected at each time step by the select_softmax feedback function, which treats the summed spikes of the output layer neurons as a probability distribution over actions.
Figure 8
Accompanying plots for a custom spiking neural network that interacts with the BreakoutDeterministic-v4 reinforcement learning environment. Spikes of all neuron populations are plotted, and the Breakout game is rendered, as well as the downsampled, history- and delta-altered observation presented to the network. The performance of the network over 100 episodes of Breakout is also plotted. (Note: the absence of visible spikes in the Input layer is due to the large size of the layer and the way the matplotlib library renders it; it is not a bug in our code.) (A) Raw output from the Breakout game, provided by the OpenAI gym render() method. (B) Pre-processed output from the Breakout game environment, used as input to the SNN. (C) Spikes from the Input, Hidden, and Output layers of the spiking neural network. (D) The reward distribution of the initialized network over 100 episodes of Breakout.
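A hedged reconstruction of the setup behind Figures 7 and 8, written against the 2018-era Pipeline API (later releases renamed it EnvironmentPipeline and reorganized its arguments); the layer sizes and the history_length/delta values here are illustrative assumptions.

    import torch
    from bindsnet.network import Network
    from bindsnet.network.nodes import Input, LIFNodes
    from bindsnet.network.topology import Connection
    from bindsnet.environment import GymEnvironment
    from bindsnet.encoding import bernoulli
    from bindsnet.pipeline import Pipeline          # EnvironmentPipeline in later releases
    from bindsnet.pipeline.action import select_softmax

    # Three-layer network; the 4 output neurons map onto Breakout's 4 actions.
    network = Network(dt=1.0)
    inpt = Input(n=80 * 80)    # assumed size of the downsampled, binarized frame
    hidden = LIFNodes(n=100)
    output = LIFNodes(n=4)
    network.add_layer(inpt, name="Input")
    network.add_layer(hidden, name="Hidden")
    network.add_layer(output, name="Output")
    network.add_connection(Connection(inpt, hidden), source="Input", target="Hidden")
    network.add_connection(Connection(hidden, output), source="Hidden", target="Output")

    # history/delta build difference images before Bernoulli spike encoding;
    # select_softmax treats summed output spikes as a distribution over actions.
    environment = GymEnvironment("BreakoutDeterministic-v4")
    pipeline = Pipeline(network, environment, encoding=bernoulli,
                        action_function=select_softmax, output="Output",
                        time=100, history_length=2, delta=4)

    for _ in range(100):
        pipeline.step()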
Figure 9
A recurrent neural network built from 625 spiking neurons accepts inputs from the CIFAR-10 natural images dataset. An input population is connected all-to-all to an output population of LIF neurons with weights drawn from the standard normal distribution; the output population has voltage thresholds drawn from N(-52, 1) and is recurrently connected to itself with weights drawn from N(0, 1/2). The reservoir is used to create a high-dimensional, temporal representation of the image data, which is used to train and test a logistic regression model created with PyTorch.
Figure 10
Plots accompanying another reservoir computing example, in which an input population of size equal to the CIFAR-10 data dimensionality is connected to a population of 625 LIF neurons, which is recurrently connected to itself. (A) Spikes recorded from the input and output layers of the two-layer reservoir network. (B) Voltages recorded from the output layer of the two-layer reservoir network. (C) Raw input and its reconstruction, computed by summing Poisson-distributed spike trains over the time dimension. (D) Weights from the input to the output neuron population, initialized from the distribution N(0, 1). (E) Recurrent weights of the output population, initialized from the distribution N(0, 1/2).
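A hedged sketch of the reservoir construction in Figures 9 and 10. Whether LIFNodes accepts a per-neuron thresh tensor, and the exact run/monitor keywords, depend on the BindsNET release, so the calls below are assumptions.

    import torch
    from torchvision import datasets, transforms
    from bindsnet.network import Network
    from bindsnet.network.nodes import Input, LIFNodes
    from bindsnet.network.topology import Connection
    from bindsnet.network.monitors import Monitor
    from bindsnet.encoding import poisson

    n_input, n_neurons, time = 32 * 32 * 3, 625, 250  # CIFAR-10 dimensionality

    network = Network(dt=1.0)
    inpt = Input(n=n_input)
    # Per-neuron voltage thresholds drawn from N(-52, 1), as in the caption.
    reservoir = LIFNodes(n=n_neurons,
                         thresh=torch.normal(-52.0 * torch.ones(n_neurons), 1.0))
    network.add_layer(inpt, name="X")
    network.add_layer(reservoir, name="Y")
    # Input weights ~ N(0, 1); recurrent weights with smaller variance.
    network.add_connection(Connection(inpt, reservoir, w=torch.randn(n_input, n_neurons)),
                           source="X", target="Y")
    network.add_connection(Connection(reservoir, reservoir,
                                      w=0.5 * torch.randn(n_neurons, n_neurons)),
                           source="Y", target="Y")
    network.add_monitor(Monitor(reservoir, state_vars=("s",), time=time), name="Y")

    # Temporal features for one image: run the reservoir, sum spikes over time.
    data = datasets.CIFAR10(".", train=True, download=True,
                            transform=transforms.ToTensor())
    image, label = data[0]
    network.run(inputs={"X": poisson(datum=64 * image.view(-1), time=time)}, time=time)
    features = network.monitors["Y"].get("s").sum(dim=0).flatten().float()

    readout = torch.nn.Linear(n_neurons, 10)  # logistic-regression readout in PyTorch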
Figure 11
Benchmark comparison results from a number of SNN simulation frameworks. Variability in benchmarked times is likely due to randomness in the simulation and fluctuations in CPU load.

References

    1. Abadi M., Agarwal A., Barham P., Brevdo E., Chen Z., Citro C., et al. (2015). TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Available online at: tensorflow.org
    2. Akopyan F., Sawada J., Cassidy A. S., Alvarez-Icaza R., Arthur J. V., Merolla P., et al. (2015). TrueNorth: design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip. IEEE Trans. Comput. Aid. Design Integr. Circ. Syst. 34, 1537–1557. 10.1109/TCAD.2015.2474396 - DOI
    3. Al-Rfou R., Alain G., Almahairi A., Angermueller C., Bahdanau D., Ballas N., et al. (2016). Theano: a Python framework for fast computation of mathematical expressions. arXiv e-prints:abs/1605.02688.
    4. Bekolay T., Bergstra J., Hunsberger E., DeWolf T., Stewart T. C., Rasmussen D., et al. (2014). Nengo: a Python tool for building large-scale functional brain models. Front. Neuroinformat. 7:48. 10.3389/fninf.2013.00048 - DOI - PMC - PubMed
    5. Bengio Y., Lee D., Bornschein J., Lin Z. (2015). Towards biologically plausible deep learning. CoRR:abs/1502.04156.
