Front Neurosci. 2015 Apr 29;9:141.
doi: 10.3389/fnins.2015.00141. eCollection 2015.

A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses

Ning Qiao et al. Front Neurosci. 2015.

Abstract

Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse circuits and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm² and consumes approximately 4 mW for typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
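
As a quick sanity check on the quoted synapse count: if one assumes that each of the 256 neurons receives one row of 256 long-term plasticity synapses and one row of 256 short-term plasticity synapses (an assumption based on the two synapse arrays named in the figure captions, not stated explicitly in the abstract), the arithmetic works out to 128K:

```python
# Back-of-the-envelope check of the synapse count quoted in the abstract.
# Assumption (not stated here): each neuron has 256 long-term plasticity (LTP)
# synapses and 256 short-term plasticity (STP) synapses.
neurons = 256
ltp_per_neuron = 256   # assumed size of the long-term plasticity row
stp_per_neuron = 256   # assumed size of the short-term plasticity row

total_synapses = neurons * (ltp_per_neuron + stp_per_neuron)
print(total_synapses)          # 131072
print(total_synapses // 1024)  # 128 -> "128K" synapses
```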

Keywords: Spike-Timing Dependent Plasticity (STDP); Winner-Take-All (WTA); analog VLSI; asynchronous; attractor network; brain-inspired computing; real-time; spike-based learning.

Figures

Figure 1
Architecture of ROLLS neuromorphic processor. (A) Block diagram of the architecture, showing two distinct synapse arrays (short-term plasticity and long-term plasticity synapses), an additional row of synapses (virtual synapses) and a row of neurons (somas). A synapse de-multiplexer block is used to connect the rows from the synapse arrays to the neurons (see main text for details). Peripheral circuits include asynchronous digital AER logic blocks, an Analog-to-Digital converter, and a programmable on-chip bias-generator. (B) Block-diagram legend.
Figure 2
Micro-photograph of the ROLLS neuromorphic processor. The chip was fabricated using a 180 nm CMOS process and occupies an area of 51.4 mm², comprising 12.2 million transistors.
Figure 3
Silicon neuron schematics. The NMDA block implements a voltage gating mechanism; the LEAK block models the neuron's leak conductance; the spike-frequency adaptation block AHP models the after-hyper-polarizing current effect; the positive-feedback block Na+ models the effect of the Sodium activation and inactivation channels; the reset block K+ models the Potassium conductance functionality.
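
The blocks named in this caption map loosely onto the terms of an adaptive integrate-and-fire model. The following is a minimal Python sketch of such a model (leak, spike-frequency adaptation, and a reset to a tunable potential), which qualitatively reproduces the adaptation behavior shown in Figure 4; all parameter values are illustrative and are not taken from the chip:

```python
import numpy as np

# Minimal adaptive leaky integrate-and-fire sketch loosely mirroring the
# caption's blocks: LEAK (leak conductance), AHP (spike-frequency adaptation),
# and a reset analogous to the K+ block. All parameters are illustrative.
dt, T = 1e-4, 0.5                 # time step and duration [s]
tau_mem, tau_ahp = 20e-3, 100e-3  # membrane and adaptation time constants [s]
v_rest, v_reset, v_thr = 0.0, 0.1, 1.0
ahp_jump, i_in = 0.3, 1.5         # adaptation increment and constant input

v, ahp, spikes = v_rest, 0.0, []
for step in range(int(T / dt)):
    dv = (-(v - v_rest) + i_in - ahp) / tau_mem
    v += dv * dt
    ahp += -ahp / tau_ahp * dt    # adaptation current decays between spikes
    if v >= v_thr:                # spike: reset membrane, increase adaptation
        spikes.append(step * dt)
        v = v_reset
        ahp += ahp_jump

# inter-spike intervals grow over time -> spike-frequency adaptation
print(f"{len(spikes)} spikes; first ISIs: {np.round(np.diff(spikes[:5]), 4)}")
```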
Figure 4
Different biologically plausible neuron behaviors: (top-left) membrane potential with tunable reset potential (different colors represent different reset potential settings); (top-right) membrane potential with tunable refractory period duration (different colors represent different refractory period settings); (bottom-left) spike-frequency adaptation behavior, where the top trace represents the membrane potential and the bottom one the input current; (bottom-right) bursting behavior.
Figure 5
Population response of all neurons in the array to constant injection currents. The variance in the measurements is due to device mismatch effects in the analog circuits.
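
One simple way to picture this variance is to perturb each neuron's effective gain with a small multiplicative mismatch term. The sketch below is purely illustrative: the 10% mismatch level and the nominal 100 Hz response are assumptions, not measured chip values.

```python
import numpy as np

# Illustrative sketch of device-mismatch spread in firing rates: each "neuron"
# gets a slightly different gain drawn from a lognormal distribution.
rng = np.random.default_rng(0)
n_neurons, i_in = 256, 1.0
gain = rng.lognormal(mean=0.0, sigma=0.1, size=n_neurons)  # ~10% mismatch (assumed)

rates = 100.0 * gain * i_in   # assumed nominal 100 Hz response to this input
print(f"mean rate {rates.mean():.1f} Hz, std {rates.std():.1f} Hz")
```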
Figure 6
Post-synaptic learning circuits for evaluating the algorithm's weight update and “stop-learning” conditions. The DPI circuit MD1−5 integrates the post-synaptic neuron spikes and produces a current proportional to the neuron's Calcium concentration. Three current-mode winner-take-all circuits WTA, WTAUP, and WTADN compare the Calcium concentration current to three set thresholds sl_thmin!, sl_thdn!, and sl_thup!, while the neuron's membrane current is compared to the threshold sl_memthr!.
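
The caption describes threshold comparisons on the Calcium concentration and on the membrane potential that gate the weight update. A minimal Python sketch of such a gating rule, in the spirit of spike-driven "stop-learning" plasticity, is given below; the threshold values and variable names are illustrative placeholders, not the chip's sl_th* bias settings.

```python
def weight_update_direction(ca, v_mem,
                            ca_min=0.2, ca_dn=0.6, ca_up=1.0,
                            v_thr=0.5):
    """Decide how to change the synaptic internal variable on a pre-spike.

    Illustrative re-statement of a stop-learning rule: the calcium trace `ca`
    (low-pass filtered post-synaptic activity) must lie in a permitted window,
    and the post-synaptic membrane potential `v_mem` selects the direction.
    All thresholds are arbitrary placeholders.
    """
    if v_mem > v_thr and ca_min < ca < ca_up:
        return +1   # potentiate (increase internal variable)
    if v_mem <= v_thr and ca_min < ca < ca_dn:
        return -1   # depress (decrease internal variable)
    return 0        # stop-learning: leave the weight unchanged

print(weight_update_direction(ca=0.5, v_mem=0.8))   # +1: potentiate
print(weight_update_direction(ca=0.5, v_mem=0.2))   # -1: depress
print(weight_update_direction(ca=1.5, v_mem=0.8))   #  0: calcium too high
```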
Figure 7
Long-term plasticity synapse array element. (A) Plastic synapse configuration logic block diagram. (B) Timing diagram for broadcast and recurrent activation modes in one synapse using 4-phase handshaking protocol. Dashed red lines show the sequence between signals. (C) Schematic diagram of the bi-stable weight update and current generator blocks.
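
The bi-stability mentioned in (C) can be pictured as an internal variable that, in the absence of pre-synaptic activity, slowly drifts toward one of two stable states depending on which side of a bistability threshold it sits. The sketch below is an illustrative behavioral model, not a circuit equation; the rates, bounds, and threshold are assumed values.

```python
def drift(w, dt, w_thr=0.5, w_lo=0.0, w_hi=1.0, drift_rate=0.1):
    """One Euler step of bi-stable drift for the internal weight variable.

    Above the bistability threshold the variable drifts toward the high state,
    below it toward the low state, so only two values are retained long-term.
    All parameters are placeholders for illustration.
    """
    target = w_hi if w > w_thr else w_lo
    w += drift_rate * (1.0 if target > w else -1.0) * dt
    return min(max(w, w_lo), w_hi)

w = 0.55                      # just above threshold after a few up-jumps
for _ in range(100):
    w = drift(w, dt=0.1)
print(round(w, 2))            # -> 1.0: consolidated in the high state
```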
Figure 8
Spike-based learning circuit measurements. The bottom trace represents the pre-synaptic input spikes; the second trace from the bottom represents the bi-stable internal variable (node Vw of Figure 7); the third trace represents the post-synaptic neuron's membrane potential; and the top trace shows both a voltage trace proportional to the neuron's integrated spiking activity as well as the digital control signals that determine whether to increase (red shaded area), decrease (blue shaded area) or leave Vw unchanged (no shaded area). The horizontal lines represent the thresholds used in the learning algorithm (see Section 2.1.2), while the vertical lines at t = 273 s (blue line) and t = 405 s (red line) are visual guides to show where the membrane potential is, with respect to its threshold, for down and up jumps in Vw respectively.
Figure 9
Short-term plasticity synapse array element. (A) Block diagram of the synapse element. (B) Transistor level schematic diagram of the excitatory and inhibitory pulse-to-current converters.
Figure 10
The effect of short-term depression on EPSC magnitudes. Two bursts separated by 100 ms were sent to a programmable synapse. Each burst has 5 spikes with an inter-spike interval of 5 ms. Within a burst, the jumps in the neuron's Vmem gradually get smaller as the synapse is depressed and the magnitude of the EPSCs it generates decreases. After the first burst, the synapse efficacy recovers, as can be seen in the response to the second burst. The figure inset shows the derivative of the membrane potential, which is equivalent to the synaptic EPSCs (minus the neuron leak).
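
The depression-and-recovery behavior in this figure can be reproduced qualitatively with a standard short-term depression model in which each spike consumes a fraction of the available synaptic resources, which then recover with a time constant. The sketch below uses made-up parameters and the stimulus timing from the caption; it is a behavioral analogy, not the circuit equation.

```python
import numpy as np

# Qualitative short-term depression sketch (resource-depletion style): each
# spike uses a fraction U of the available resource x, which recovers with
# time constant tau_rec. Parameter values are illustrative only.
U, tau_rec = 0.5, 0.2                            # release fraction, recovery [s]
burst = [0.0, 0.005, 0.010, 0.015, 0.020]        # 5 spikes, 5 ms ISI
spike_times = burst + [t + 0.1 for t in burst]   # second burst 100 ms later

x, last_t = 1.0, 0.0
for t in spike_times:
    x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)  # recovery
    epsc = U * x                                            # EPSC amplitude
    x -= U * x                                              # depression
    last_t = t
    print(f"t = {t*1000:5.1f} ms  EPSC ~ {epsc:.3f}")
```

Running this prints EPSC amplitudes that shrink within each burst and partially recover at the start of the second burst, matching the qualitative trend described above.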
Figure 11
Attractor networks. Three clustered pools of 64 neurons are configured as attractor networks. (A) The on-chip synaptic matrices, in which every dot represents a connected synapse. (B) The output spiking activity of the network. Each pool of neurons acts as a competing working memory. The timing of the inputs is shown with square waves under the plot. Different colors represent different inputs. The downward jump of the red line represents the stimulus to the inhibitory pool of neurons responsible for switching off the on-chip memory.
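
The connectivity in (A) can be expressed as a block-structured synaptic matrix: recurrent excitation within each 64-neuron pool plus connections to and from a shared inhibitory population that mediates the competition. The sketch below builds such a matrix in Python; the pool size follows the caption, while the inhibitory pool size and the connection probabilities are assumptions for illustration.

```python
import numpy as np

# Block-structured connectivity sketch for three competing attractor pools.
# Pool size (64) follows the caption; inhibitory pool size and connection
# probabilities are illustrative assumptions. Rows = post, columns = pre.
rng = np.random.default_rng(1)
pool_size, n_pools, n_inh = 64, 3, 64
n_exc = pool_size * n_pools
n = n_exc + n_inh                             # 256 neurons in total

W = np.zeros((n, n), dtype=np.int8)           # 1 = connected synapse
for p in range(n_pools):
    lo, hi = p * pool_size, (p + 1) * pool_size
    W[lo:hi, lo:hi] = rng.random((pool_size, pool_size)) < 0.5  # recurrent exc.
W[n_exc:, :n_exc] = rng.random((n_inh, n_exc)) < 0.3   # excitatory -> inhibitory
W[:n_exc, n_exc:] = rng.random((n_exc, n_inh)) < 0.3   # inhibitory feedback
np.fill_diagonal(W, 0)                                  # no self-connections

print(W.shape, int(W.sum()), "connected synapses")
```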
Figure 12
Image classification example using inputs from a DVS. (A) Top: neural network architecture. Two different classes of images (here motorbikes or cars) are displayed on a screen with a small jitter applied at 10 Hz. A random subset of the spikes emitted by the DVS is mapped to 128 hidden layer neurons. Specifically, each of the 128 neurons is connected to 64 randomly selected pixels with either positive or negative weights, also set at random. The output neurons in the last layer receive spikes from all 128 hidden layer neurons, via plastic synapses. The output layer neurons are also driven by an external “teacher” signal which is correlated with one of the image classes. (A) Bottom: diagram of the experimental protocol timeline. Notice the presence of a saccade inhibition mechanism which electronically suppresses DVS input during a virtual saccade, i.e., when the displayed image is replaced with the next one. (B) Synaptic matrices of the ROLLS neuromorphic processor showing the hardware configuration of the classification neural network. The STP synapses represent the synapses of the hidden layer; the LTP synapses represent the synapses of the output layer.
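
The hidden-layer wiring described in this caption (each of 128 hidden neurons connected to 64 randomly chosen DVS pixels through fixed random-sign weights, followed by a plastic all-to-all output layer) can be sketched as two weight matrices. The 128x128 DVS resolution and the two collapsed output units are assumptions for illustration; the layer sizes and fan-in follow the caption.

```python
import numpy as np

# Connectivity sketch for the two-layer classification network: fixed random
# hidden layer, plastic all-to-all output layer.
rng = np.random.default_rng(2)
n_pixels = 128 * 128          # assumed DVS resolution
n_hidden, n_fanin = 128, 64   # per the caption
n_output = 2                  # "car" vs "motorbike" outputs (collapsed pools)

# Hidden layer: each neuron sees 64 random pixels with random-sign weights.
W_hidden = np.zeros((n_hidden, n_pixels), dtype=np.int8)
for j in range(n_hidden):
    pix = rng.choice(n_pixels, size=n_fanin, replace=False)
    W_hidden[j, pix] = rng.choice([-1, 1], size=n_fanin)

# Output layer: all-to-all plastic (bi-stable, hence binary) weights.
W_out = rng.integers(0, 2, size=(n_output, n_hidden), dtype=np.int8)

print(W_hidden.shape, W_out.shape,
      int(np.count_nonzero(W_hidden)), "fixed hidden connections")
```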
Figure 13
Spiking activity of the hardware neurons during the training phase (upper panel) and the testing phase (middle and lower panels). Left column: examples of raw images from the Caltech 101 database. Middle column: heat map of the DVS spiking activity, where each pixel color represents the pixel's mean firing rate. Right column: raster plots of the ROLLS neuromorphic processor neurons. The star on the top panel label indicates that during the training phase an additional excitatory “teacher” signal was used to stimulate the “car” output neurons to induce plasticity. During testing no teacher signal is provided and only the excitatory currents from the input synapses drive the classifier activities. The average firing rates of the output layer for the “motorbike” and “car” neuron pools are 6.0 Hz and 80.9 Hz during training. During testing with a “car” image they are 7.1 Hz and 11.1 Hz; during testing with a “motorbike” image they are 7.4 Hz and 4.9 Hz.
