Spike-based population coding and working memory

Martin Boerlin et al. PLoS Comput Biol. 2011 Feb;7(2):e1001080. doi: 10.1371/journal.pcbi.1001080. Epub 2011 Feb 17.

Abstract

Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times deterministically signal a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike-train variability reproduces that observed in cortical sensory spike trains but cannot be equated with noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
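
The abstract's contrast between a deterministic prediction-error code and a rate code with random spike times can be illustrated with a toy comparison. The sketch below is not the paper's model: the sinusoidal rate and the integrate-to-threshold rule are stand-ins, chosen only to show that two spike trains can share the same underlying rate while one of them tracks the integrated rate deterministically and reproducibly.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                                      # 1 ms time step
t = np.arange(0.0, 2.0, dt)
rate = 20.0 + 15.0 * np.sin(2 * np.pi * t)      # stand-in time-varying rate (Hz)

# Deterministic code: emit a spike each time the integrated rate crosses the
# next integer, so every spike corrects the running deviation (the
# "prediction error") between the spike count and the integrated rate.
integral = np.cumsum(rate) * dt
deterministic = np.diff(np.floor(integral), prepend=0.0) > 0

# Rate code: spike times are random (Poisson) samples of the same rate.
poisson = rng.random(t.size) < rate * dt

# deterministic.sum() and poisson.sum() are both close to the integrated rate.
```

Rerunning the deterministic train gives identical spike times, and its cumulative spike count never deviates from the integrated rate by more than one spike; the Poisson train has the same average rate but varies from run to run.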


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Figure 1. Illustrations.
(A) Illustration of the network task. An auditory and a visual cue (cues 1 and 2) about a dynamic stimulus (e.g. the position of a mouse) are integrated and combined during the integration period. During the memory period, this information is kept available so that it can be read out, over a timescale of order [formula], during the read-out period. (B) Schematic illustration of the network. The visual and the auditory cue about stimulus [formula] are encoded in two independent input populations that send feed-forward inputs to the output population. The output population is recurrently connected. The connection weights [formula], [formula] and [formula] are functions of the input kernels [formula] and [formula] as well as the output kernel [formula]. (C) Illustration of the spike generation rule. [formula] denotes the stimulus posterior given all inputs, and [formula] represents an approximation to [formula] that is decoded from the output spike trains. [formula] should be as close as possible to [formula]. An output spike adds a kernel to [formula]. If its effect is to reduce the mean squared distance between the two curves (bottom right), the spike is fired. If it would instead increase the distance between the two curves (top right), the spike is not generated.
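
The rule in panel C lends itself to a short numerical sketch. The following is a minimal illustration of the greedy principle only, not the published network: the stimulus grid, the Gaussian kernel shape and amplitude, and the stand-in target distribution are assumptions, and the variable names (`decoded`, `target`, `kernels`) are invented for this example.

```python
import numpy as np

# Minimal sketch of the spike rule in Fig. 1C: a neuron fires iff adding its
# output kernel to the decoded distribution reduces the mean squared distance
# to the target distribution.

theta = np.linspace(-np.pi, np.pi, 200)            # stimulus grid (assumed)
target = np.exp(-0.5 * (theta / 0.3) ** 2)         # stand-in target distribution
decoded = np.zeros_like(theta)                     # approximation built from spikes

preferred = np.linspace(-np.pi, np.pi, 50)         # 50 output neurons, as in the model
kernels = 0.2 * np.exp(-0.5 * ((theta[None, :] - preferred[:, None]) / 0.3) ** 2)

spikes = []
for step in range(500):                            # greedy, spike-by-spike updates
    errors = np.mean((decoded + kernels - target) ** 2, axis=1)
    best = np.argmin(errors)
    if errors[best] >= np.mean((decoded - target) ** 2):
        break                                      # no spike improves the fit: stop
    decoded += kernels[best]                       # spike fired: kernel added (Fig. 1C)
    spikes.append(best)
```

Each pass adds the single kernel that most reduces the squared distance, mirroring the "fire only if it brings the decoded curve closer" criterion; once no kernel improves the fit, the loop stops and no further spikes are emitted.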
Figure 2. Currents.
Averaged currents to a neuron with a preferred stimulus of 180 deg, as a function of the presented stimulus location. (A) Currents during the integration period. Feed-forward input currents (blue) are excitatory for stimuli similar to the neuron's preferred stimulus and inhibitory otherwise. The sum of fast and slow recurrent currents (red-green dashed line) follows an inverted profile of similar magnitude that counteracts the effect of the feed-forward input. The leak current (magenta) is small in magnitude compared to the synaptic currents. (B) Currents during the memory period. Feed-forward inputs are equal to zero. The individual lateral currents are enhanced with respect to the integration period; however, their total sum (red-green dashed line) is balanced and close to zero (see also the black dashed line in C). (C) Total currents (including leak) during the integration period (solid line) and during the memory period (dashed line). In both cases, the contributions of the individual currents balance each other out such that the total current is small: slightly excitatory among neurons whose preferred stimuli are similar to the presented stimulus, and inhibitory otherwise. The two maxima of the current during the memory period are due to the non-linear component of the slow recurrent currents ([formula]) that codes for the stimulus diffusion. It has the effect of broadening the response during the memory period (see Figure 3A).
Figure 3. Network performance.
(A) Input and output spike trains on a single trial. A stimulus with constant drift and diffusion is presented for 500 ms (gray area). (B) Time evolution of the stimulus posterior for the ideal observer (blue) and the network read-out (red). Thick lines show the mean of the posterior and thin lines the corresponding width. The stimulus trajectory is shown in black. The dashed black line indicates the predictable (drift) part of the stimulus that the network tracks during the memory period. (C) Snapshots of the posteriors, from left to right: after 500 ms (end of the integration period), after 2000 ms and after 5000 ms. (D) Coding performance, measured as the standard deviation of the stimulus estimate [formula] around its real value [formula]. The blue and red curves depict the performance of the ideal observer and the network, respectively, and the green curve shows the performance of a network without slow currents [formula]. (E) Width of the posterior decoded from the ideal observer (blue), the full network model described in equations 7 and 8 (red), a network in which the nonlocal term in the slow currents [formula] is approximated by a linear term (see equation 10) (green), and a network from which the nonlocal term is completely removed (magenta).
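
The stimulus in this figure follows a drift-diffusion process; the dashed line in panel B is the drift component that remains predictable once the input is removed. Below is a minimal sketch of such a process under assumed parameter values (`drift`, `diffusion`, the 500 ms integration window); treating the estimate as exact during the integration period is an idealization for illustration, not the network's actual performance.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, drift, diffusion = 0.001, 20.0, 30.0          # s, deg/s, deg/sqrt(s) (assumed)
t_int, t_total = 0.5, 2.0                         # integration and total time (s)

n = int(t_total / dt)
x = np.zeros(n)                                    # true stimulus trajectory
for k in range(1, n):
    x[k] = x[k - 1] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()

# After the integration period only the drift can be extrapolated, and the
# posterior width grows with the accumulated diffusion (dashed line in B).
t = np.arange(n) * dt
k_int = int(t_int / dt)
pred_mean = np.where(t <= t_int, x, x[k_int] + drift * (t - t_int))
pred_var = np.where(t <= t_int, 0.0, diffusion ** 2 * (t - t_int))
```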
Figure 4. Cue combination and priors.
(A) Estimation accuracy for different reliabilities of the input cues: both input cues equally reliable (bimodal), or one cue more reliable than the other (cue 1 and cue 2). In each subgroup, the bars depict, from left to right, the encoding accuracy of cue 1, cue 2, the ideal observer, the network at the end of the integration period, and the network after one second in the memory period. (B) Biasing effect of the prior, measured as the difference between the real and the estimated stimulus, [formula]. The effect is stronger for short integration times (200 ms, left) than for long integration times (500 ms, right). Black bars show the bias expected for a Bayesian observer, white bars the network bias. (C) Standard deviation of the estimator with a Gaussian prior (solid lines) and with a flat prior (dashed lines). A structured prior narrows the width of the posterior. Blue lines denote the ideal observer, red lines the network performance. (D) Input and output spike trains on a single trial. A constant stimulus is presented for 500 ms (gray area). The spontaneous activity before stimulus onset encodes the prior belief about the stimulus. (E) Time evolution of the posterior for the ideal observer (blue) and the network (red). Thick lines show the mean of the posterior and thin lines the corresponding width. The stimulus is shown in black.
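
Panels A–C probe standard Gaussian cue combination: each cue is weighted by its reliability, and a Gaussian prior both biases the estimate toward its mean and narrows the posterior. The sketch below shows only this textbook result, not code from the paper; the cue means and standard deviations are made-up example values.

```python
import numpy as np

def combine(means, sds):
    """Posterior mean/sd from independent Gaussian cues (and optionally a prior)."""
    precisions = 1.0 / np.asarray(sds, dtype=float) ** 2
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * np.asarray(means, dtype=float)).sum()
    return post_mean, np.sqrt(post_var)

# Two cues about the stimulus, cue 1 more reliable than cue 2:
mu, sd = combine([170.0, 190.0], [5.0, 10.0])

# Adding a Gaussian prior narrows the posterior and biases the estimate
# toward the prior mean (a bias that shrinks with longer integration, Fig. 4B):
mu_prior, sd_prior = combine([170.0, 190.0, 180.0], [5.0, 10.0, 15.0])
```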
Figure 5. Output firing rates.
(A) Post-stimulus time histogram (PSTH) of the output activity in response to a stimulus with constant diffusion. Color indicates firing rates in Hz. The stimulus (magenta line) is presented during the first 500 ms. (B) Tuning curves of a sample neuron. Spikes are counted in 10 ms bins centered at 50 ms (black), 200 ms (blue) and 500 ms (red) during the integration period, and at 550 ms (green) and 2500 ms (magenta) during the memory period. (C) Traces of the average firing rate of a neuron whose preferred stimulus lies around the peak of the bump of activity. Different curves depict different levels of Fisher information in the input population codes: the reference information [formula] for the regular parameters (red), [formula] (green) and [formula] (blue). (D) Traces of the average firing rate of three neurons whose preferred stimuli lie at the peak of the bump of activity (blue), on the side of the bump (red) or far away from the bump (green). (E) PSTH of the output activity in response to a static stimulus presented for 500 ms. (F,G) Interspike interval (ISI) histogram during the integration period (F) and during the memory period (G) for a sample neuron. The red line shows the ISI histogram of a Poisson process with the same rate.
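
Panels F and G compare the model's interspike-interval (ISI) statistics with a rate-matched Poisson process. The sketch below shows one way to build such a comparison; `spike_times` is a stand-in spike train rather than model output, and the binning is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
spike_times = np.sort(rng.uniform(0.0, 5.0, size=200))   # stand-in spike train (s)

isi = np.diff(spike_times)
rate = len(spike_times) / (spike_times[-1] - spike_times[0])

# Matched Poisson process: exponential ISIs with the same mean rate.
poisson_isi = rng.exponential(1.0 / rate, size=10000)

bins = np.linspace(0.0, isi.max(), 40)
hist_isi, _ = np.histogram(isi, bins=bins, density=True)
hist_poisson, _ = np.histogram(poisson_isi, bins=bins, density=True)

cv = isi.std() / isi.mean()    # coefficient of variation; ~1 for a Poisson process
```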
Figure 6. Response to multiple stimuli.
Two static stimuli (red lines) are presented consecutively to the network for 350 ms each, separated by a one-second delay. Their spatial distance is (A) 180 deg and (B) 45 deg. Top row: spike trains on a single trial. Bottom row: time evolution of the unnormalized log posterior (gray-scale representation). The simulated network contains 200 neurons instead of 50 for better visual clarity.
Figure 7. Spike train variability.
(A) Output spike trains for two runs (blue and red) of activity starting from the same initial conditions. The red run is perturbed by the injection of one extra spike (red arrow). (B) Time course of the posterior for the two runs. (C) PSTH of the control (blue) and the perturbed (red) runs. The extra spike is injected at [formula]. Spikes are counted in 2 ms time bins and averaged over all neurons and over 10000 trials. (D) Time course of the normalized cross-correlation between the two runs of activity. The vertical dotted line indicates the time at which the perturbation (one extra spike) was added. (E) Predictability (equation 33) of the activity of an output neuron when recording from a fraction [formula] of the neurons in the output population. The predictability for neuron [formula] is plotted for spikes generated by the deterministic network (blue) or by a Poisson process (red). The rightmost predictability (at a fraction of 1) corresponds to the predictability of the measured, i.e. not predicted, membrane potential. The inset shows the increase in predictability preceding a spike (for a recorded fraction of 0.8). (F) Schematic illustration of the error-correcting properties of the network. The left panel shows a reference spike train. Each spike adds a kernel, and the kernels summed together give the log posterior G (top). If an extra spike is added (middle panel, red spike), the spike train is reshuffled in a way that keeps the total log posterior constant. If the initial spike fails to be elicited (right panel, blue dotted spike), a neighboring neuron recognizes the "hole" in information transmission and fires a spike to fill it. This changes the initial condition (first firing neuron in black) and therefore shuffles the spike train. The total log posterior remains the same.
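
Panel D plots a normalized cross-correlation between the control and perturbed runs. The exact measure used in the paper is not reproduced here; the sketch below is one plausible version, binning the two runs and computing a Pearson correlation across neurons in each time bin (the bin size and the correlation measure are assumptions).

```python
import numpy as np

def bin_spikes(spike_times, spike_neurons, n_neurons, t_max, bin_size=0.002):
    """Return an (n_bins, n_neurons) array of spike counts."""
    n_bins = int(np.ceil(t_max / bin_size))
    counts = np.zeros((n_bins, n_neurons))
    for t, i in zip(spike_times, spike_neurons):
        counts[min(int(t / bin_size), n_bins - 1), i] += 1
    return counts

def normalized_xcorr(a, b):
    """Pearson correlation across neurons for each time bin (nan if a bin is silent)."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    denom = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    with np.errstate(invalid="ignore", divide="ignore"):
        return (a * b).sum(axis=1) / denom

# Usage (with hypothetical recorded runs):
# similarity = normalized_xcorr(bin_spikes(t_ctrl, n_ctrl, 50, 2.0),
#                               bin_spikes(t_pert, n_pert, 50, 2.0))
```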
Figure 8. Robustness to noise.
(A) Coding performance of the network in the presence of synaptic background noise. The vertical axis plots the percentage by which the standard deviation of the stimulus estimator exceeds its optimal value. Results are reported for percentage decreases in the signal-to-noise ratio, SNR = mean(input)/std(input), of 0% (black), 20% (blue), 50% (red) and 100% (green). A static stimulus is presented during the first 500 ms (grey area). (B) Coding performance of a stochastic network for different output gains: [formula] (green), [formula] (magenta) and [formula] (cyan). The ideal observer is plotted in blue and the performance of the deterministic network in red. A static stimulus is presented during the entire 1500 ms. (C) Schematic illustration of the difference between deterministic and stochastic spike generation. The left and middle panels show two spike trains encoding the same information but starting from different initial conditions. Because the neurons in the output population are recurrently connected, they "know" exactly when to fire a spike such that the log posterior [formula] is represented. If the lateral connections are removed, neurons fire stochastic spike trains that look similar to the deterministic ones but do not encode the same log posterior.
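
Panel A manipulates the signal-to-noise ratio, defined as SNR = mean(input)/std(input). The sketch below shows one way to inject zero-mean background noise so that the SNR drops by a chosen percentage; the "clean" input is a stand-in signal and the helper function is illustrative, not the paper's procedure (it assumes a drop strictly below 100%).

```python
import numpy as np

rng = np.random.default_rng(2)
clean = 1.0 + 0.2 * rng.standard_normal(5000)     # stand-in input current
snr_clean = clean.mean() / clean.std()

def add_noise_for_snr_drop(signal, drop, rng):
    """Add zero-mean Gaussian noise so the SNR falls by `drop` (e.g. 0.2 = 20%, drop < 1)."""
    target_snr = signal.mean() / signal.std() * (1.0 - drop)
    target_std = signal.mean() / target_snr
    extra_var = target_std ** 2 - signal.var()
    return signal + rng.standard_normal(signal.size) * np.sqrt(max(extra_var, 0.0))

noisy = add_noise_for_snr_drop(clean, 0.5, rng)    # 50% SNR reduction (red curve)
```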
