Synthesizing cognition in neuromorphic electronic systems

Emre Neftci et al. Proc Natl Acad Sci U S A. 2013 Sep 10;110(37):E3468-76. doi: 10.1073/pnas.1212083110. Epub 2013 Jul 22.

Abstract

The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a "soft state machine" running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.
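A minimal rate-model sketch may help make the sWTA idea concrete: each excitatory population excites itself and drives a shared inhibitory population, so the population receiving the strongest input wins the competition, its activity is restored toward a stereotyped level, and weaker inputs are suppressed. The parameters below (weights, threshold, time constant) are illustrative assumptions, not the calibrated bias values used on the chips.

    import numpy as np

    def swta_step(x, inh, inputs, dt=1e-3, tau=0.02,
                  w_exc=1.2, w_ie=2.0, thresh=0.05):
        # One Euler step of a soft winner-take-all (sWTA) rate model.
        # x: rates of the excitatory populations; inh: rate of the shared
        # inhibitory population; inputs: external drive to each population.
        # All constants are placeholders, not calibrated hardware biases.
        drive = inputs + w_exc * x - w_ie * inh
        dx = (-x + np.maximum(drive - thresh, 0.0)) / tau
        dinh = (-inh + x.sum()) / tau
        return x + dt * dx, inh + dt * dinh

    # Two competing populations: the one with the stronger input wins,
    # illustrating the competitive gain and signal restoration that the
    # abstract attributes to the sWTA subnetworks.
    x, inh = np.zeros(2), 0.0
    for _ in range(2000):
        x, inh = swta_step(x, inh, inputs=np.array([0.6, 0.5]))
    print(x)  # population 0 dominates; population 1 is driven to zero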

Keywords: analog very large-scale integration; artificial neural systems; decision making; sensorimotor; working memory.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Fig. 1.
Synthesis of a target FSM in neuromorphic VLSI neural networks. (A) State diagram of the high-level behavioral model. Circles represent states and arrows indicate the transitions between them, conditional on input symbol X. In this example state machine, the active state flips between S1 and S2 in response to X and outputs either the response A or the response B, depending on the previous state. (B) The computational substrate composed of three sWTA networks: two “state-holding” networks (vertical and horizontal rectangles) and a third transition network (central square). The shaded circles in each sWTA represent populations of spiking neurons that are in competition through a population of inhibitory neurons (not displayed). The state-holding sWTA networks are coupled population-wise (γ-labeled arrow, red with red, blue with blue, etc.) to implement working memory. Solid arrows indicate stereotypic couplings, and the dashed arrows indicate couplings that are specific to the FSM (in this case the one shown in A). The gain and threshold in the transition sWTA are configured such that each population becomes active only if both of its inputs are presented together. The sWTA competition ensures that only a single population in the network is active at any time. An additional output sWTA network is connected to the transition network to represent the output symbols. To program a different state machine, only the dashed arrows need to be modified. (C) The multineuron chips used in the neuromorphic setup feature a network of low-power I&F neurons with dynamic synapses. The chips are configured to provide the hardware neural substrate that supports the computational architecture consisting of sWTA shown in B. Each population of an sWTA network is represented in hardware by a small population of recurrently coupled spiking neurons formula image, which compete against other populations via an inhibitory population.
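
The caption's key point is that only the machine-specific (dashed) couplings change when a different state machine is programmed. A hypothetical sketch of that translation step is shown below: it takes an FSM specification like the one in A and lists the sparse state-to-transition, input-to-transition, transition-to-state, and transition-to-output connections. Population names, the weight constant, and the A/B output assignment are illustrative assumptions, not the actual hardware mapping.

    # Hypothetical translation of the FSM in A into the machine-specific
    # (dashed-arrow) couplings of the soft state machine. Names and the
    # weight value are illustrative assumptions.

    fsm = {
        # (current state, input symbol): (next state, output symbol)
        # (which output goes with which transition is an assumption here)
        ("S1", "X"): ("S2", "A"),
        ("S2", "X"): ("S1", "B"),
    }

    def ssm_connections(fsm, w=1.0):
        # Each transition population T(s, x) is driven by the state-holding
        # population for s and by the input symbol x (it fires only when
        # both are present), and it projects to the next state's population
        # and to an output population. The stereotyped couplings (working
        # memory, shared inhibition) are omitted: they do not depend on
        # the particular FSM.
        conns = []
        for (state, symbol), (next_state, output) in fsm.items():
            t = "T(%s,%s)" % (state, symbol)
            conns.append(("state:" + state, t, w))
            conns.append(("input:" + symbol, t, w))
            conns.append((t, "state:" + next_state, w))
            conns.append((t, "output:" + output, w))
        return conns

    for pre, post, w in ssm_connections(fsm):
        print(pre, "->", post, "w =", w)
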
Fig. 2.
Context-dependent visual task. Two visual objects, a horizontal and a vertical bar, are moving on a screen and bouncing off its borders. A visual cue flashing at formula image on the upper right corner of the screen for formula image (red) indicates that the subject must attend to the horizontal bar (indicated by a circle) and report with output A if it enters the right half of the screen. If the initial cue appears on the upper left corner (blue), then the task is inverted: The subject must attend to the vertical bar and report B if the attended bar enters the left half of the screen. The experimental stimuli were presented as black bars against a light background (colors here are used only for the sake of clarity). The agent must respond as soon as the screen midline is judged to be crossed: this fuzzy condition results in different response latencies.
Fig. 3.
Real-time neuromorphic agent able to perform the context-dependent visual task. Two moving oriented bars are shown to an event-based formula image “silicon retina” (22). The silicon retina output events are preprocessed in software to detect orientation and routed accordingly to one of two possible feature maps, implemented as formula image sheets of VLSI I&F neurons. The events produced by the feature maps are retinotopically mapped to a selective attention chip (SAC), which selects the most salient region of the visual field by activating a spiking neuron at that position (black circle in the Saliency map box). The input–output space of the SAC is divided into five distinct functional regions: left (L), right (R), border (X), and cues (C1, C2). The events from each of these regions are routed to the appropriate transition neurons of the SSM. To focus on the desired target, the system must attend to one of the two bars. This is achieved by modulating the attentional layer with a state-dependent top–down attentional feedback from the SSM. In the neural architecture, this is implemented by inhibiting the features corresponding to the bar that should not be attended to (Materials and Methods). Transitions that do not change the state are omitted in the “State-Dependent Behavior” diagram, to avoid clutter. The snapshots shown in the “Pre-processing” and “Selective Attention” diagrams represent experimental data, measured during the experiment of Fig. 4, in the period when the state B0 was active. An additional sWTA network (not displayed) is stimulated by the transition populations to suppress noise and to produce output A or B.
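
A toy sketch of the routing and gating described here is given below. It collapses the feature-map inhibition and the SAC selection into a single software gating step: events from the currently suppressed feature map are dropped, and the selected location is mapped to one of the five regions (L, R, X, C1, C2) that drive the SSM transition neurons. The retina resolution, region boundaries, state names, and the suppressed-feature table are assumptions made purely for illustration.

    # Toy sketch of event routing with state-dependent top-down gating.
    # All sizes, boundaries, and table entries are illustrative assumptions.

    WIDTH, HEIGHT = 128, 128  # assumed retina resolution (for the sketch only)

    def classify_region(x, y):
        # Map a selected (x, y) location to one of the SAC regions.
        # The cue regions C1/C2 occupy the upper corners and L/R/X tile the
        # rest of the visual field; these boundaries are made up here.
        if y < HEIGHT // 8:
            return "C2" if x < WIDTH // 2 else "C1"
        if abs(x - WIDTH // 2) < WIDTH // 16:
            return "X"  # border region around the screen midline
        return "L" if x < WIDTH // 2 else "R"

    # State-dependent attentional feedback: in each context the feature map
    # of the bar that should NOT be attended is inhibited (assumed table).
    SUPPRESSED_FEATURE = {
        "context_horizontal": "vertical",
        "context_vertical": "horizontal",
    }

    def route_event(x, y, feature, state):
        # Drop events from the suppressed feature map; otherwise return the
        # region symbol that is routed to the SSM transition neurons.
        if feature == SUPPRESSED_FEATURE.get(state):
            return None
        return classify_region(x, y)

    print(route_event(100, 100, "horizontal", "context_horizontal"))  # 'R'
    print(route_event(100, 100, "vertical", "context_horizontal"))    # None
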
Fig. 4.
Results of the visual task experiment. (Left) The silicon retina output (Upper) and the SAC output (Lower). The axes represent the X-Y coordinates of the events, and the color encodes time. The scattered events around the main stimulus are due to spontaneous activity in the silicon retina. The top–down modulation strongly inhibited the feature map corresponding to the distractor (Results). For this reason, only the target associated with the context in force is visible in the Lower panels. The output of the SAC is routed to the corresponding transition neurons in the SSM (Fig. 3). (Right Top) The raster plot of the routed events. The detection of patterns A and B is reported by the output populations OutA and OutB, as shown in the Right Middle raster plot. The arrows show a clear example of state-dependent computation: Input L induces either output B or no output, depending on the context in force. (Right Bottom) The mean firing rates of the respective populations.
Fig. 5.
Robustness of randomly specified SSMs. (A) Performance, measured as the percentage of correctly processed strings, as a function of string length. To emphasize the effect of errors caused by ambiguous transitions, we separated the SSMs into two classes: with (red) and without (green) ambiguous transitions. The shaded regions show the SD over a collection of five randomly specified state machines. The blue curve shows the accuracy of the SSM used for the context-dependent visual task (Fig. 3). Each SSM was run with 50 different strings of length 20. (B) Proportion of successful transitions per type of transition, namely self-transitions, ambiguous transitions (AT), and nonambiguous transitions (T), computed from 3,542, 2,717, and 3,417 transition measurements, respectively. The theoretical chance level is computed by assuming arbitrary transitions regardless of the input (thick black line), meaning formula image.
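
If transitions are assumed to fail independently, the per-transition success rates in B and the string-level accuracy in A are linked by a simple product rule: the probability of processing a string of length L correctly is roughly the per-transition success probability raised to the power L (counting one transition per input symbol). The back-of-the-envelope sketch below illustrates that relationship; the probability used is a placeholder, not a value read from the figure.

    # Back-of-the-envelope link between per-transition success (B) and
    # string-level accuracy (A), assuming transitions fail independently.
    # The probability below is a placeholder, not a measured value.

    p_transition = 0.98  # assumed probability that a single transition succeeds

    for length in (5, 10, 20):
        p_string = p_transition ** length
        print("string length %2d: expected accuracy ~ %.2f" % (length, p_string))
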

Comment in

  • Reverse engineering the cognitive brain.
    Cauwenberghs G. Proc Natl Acad Sci U S A. 2013 Sep 24;110(39):15512-3. doi: 10.1073/pnas.1313114110. Epub 2013 Sep 12. PMID: 24029019.

References

    1. Mead CA. Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley; 1989.
    2. von Neumann J. The Computer and the Brain. New Haven, CT: Yale Univ Press; 1958.
    3. Sarpeshkar R. Analog versus digital: Extrapolating from electronics to neurobiology. Neural Comput. 1998;10(7):1601–1638.
    4. Indiveri G, Horiuchi TK. Frontiers in neuromorphic engineering. Front Neurosci. 2011;5. doi: 10.3389/fnins.2011.00118.
    5. Seo J, et al. A 45 nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons. Custom Integrated Circuits Conference (CICC). New York: Institute of Electrical and Electronic Engineers; 2011. pp 1–4.
