. 2019 May 14;14(5):e0216322.
doi: 10.1371/journal.pone.0216322. eCollection 2019.

Neural decoding with visual attention using sequential Monte Carlo for leaky integrate-and-fire neurons


Kang Li et al. PLoS One. 2019.

Abstract

How the brain makes sense of a complicated environment is an important question, and a first step is to be able to reconstruct the stimulus that gives rise to an observed brain response. Neural coding relates neurobiological observations to external stimuli using computational methods. Encoding describes how a stimulus affects the neuronal output, and entails constructing a neural model and estimating its parameters. Decoding refers to reconstructing the stimulus that led to a given neuronal output. Existing decoding methods rarely explain neuronal responses to complicated stimuli in a principled way. Here we perform neural decoding for a mixture of multiple stimuli using the leaky integrate-and-fire model of neural spike trains, under the visual attention hypothesis of probability mixing, in which the neuron attends to only a single stimulus at any given time. We assume either a parallel or a serial processing visual search mechanism when decoding multiple simultaneous neurons. We consider one or several stochastic stimuli following Ornstein-Uhlenbeck processes, and dynamic neuronal attention that switches according to a discrete Markov process. To decode stimuli in such settings, we develop several sequential Monte Carlo particle methods. The likelihood of the observed spike trains is obtained through first-passage time probabilities, computed by solving the Fokker-Planck equations. We show that the stochastic stimuli can be successfully decoded by sequential Monte Carlo, and that the particle methods perform differently depending on the number of observed spike trains, the number of stimuli, model complexity, etc. The proposed novel decoding methods, which analyze neural data through psychological theories of visual attention, provide new perspectives for understanding the brain.
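The generative model described in the abstract can be sketched in a few lines: stimulus components follow Ornstein-Uhlenbeck processes, attention jumps between them as a discrete Markov chain (probability mixing), and a leaky integrate-and-fire neuron is driven only by the currently attended component. All parameter values below are hypothetical choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not taken from the paper)
dt, T = 1e-3, 2.0                    # step size and duration (s)
K = 2                                # number of stimulus components
theta, mu, sigma = 2.0, 1.0, 0.5     # OU: mean reversion, mean, noise
tau, R, v_th, v_reset = 0.02, 1.5, 1.0, 0.0  # LIF membrane parameters
p_switch = 0.01                      # per-step attention-switch probability

x = np.full(K, mu)                   # OU stimulus components
v, attended = 0.0, 0                 # membrane potential, attended index
spikes = []

for i in range(int(T / dt)):
    # Euler-Maruyama step for each OU stimulus component
    x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(K)
    # probability mixing: attention jumps to a (possibly new) component
    if rng.random() < p_switch:
        attended = int(rng.integers(K))
    # LIF membrane driven by the attended stimulus only
    v += (-v + R * x[attended]) / tau * dt
    if v >= v_th:                    # threshold crossing: spike and reset
        spikes.append(i * dt)
        v = v_reset
```

The paper evaluates the spike-train likelihood exactly via first-passage-time densities from the Fokker-Planck equation; the forward simulation above only illustrates the state-space structure that the particle methods invert.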

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. State-space model used for the decoding of stochastic stimuli.
Fig 2
Fig 2. Realizations of spike trains.
The left panels show the three response kernels. The top panels show different types of stimuli. Spike trains are shown for each combination of response kernel and stimulus. Each line represents an independent trial. For each combination, 50 example spike trains are simulated.
Fig 3
Fig 3. Decoding of stochastic stimulus mixtures using BF with filtering from a single spike train responding to stimulus mixtures containing 1 (upper panel), 2 (middle panel) or 3 (lower panel) components.
Blue curves show all stimulus components in the mixture, and the black curve switching between the blue curves indicates the attended stimulus. Red piecewise-constant lines show the decoding results as the posterior mean, with each constant interval being 100 ms long. The light red shaded area indicates the posterior distribution at each time step. The spike train is plotted above each decoding figure as a sequence of dots. The rRMSD values are shown in the top-right corner of each figure. On the right side of each panel, the empirical posterior distributions at selected time points, indicated by dashed lines in the left panels, are shown, computed by weighted kernel density smoothing using the particles. The red vertical line indicates the posterior mean, i.e., the decoding estimate shown in the left panels. The black vertical line indicates the true stimulus averaged over the 100 ms interval.
Fig 4
Fig 4. Decoding of stochastic stimulus mixtures from a single spike train.
Decoding by BF with filtering, BF-F (upper panel), fixed-lag smoothing, BF-lag (middle panel) and fixed-interval smoothing, BF-FB (lower panel). The three panels show the decoding of the same spike train. See caption of Fig 3 for explanation.
Fig 5
Fig 5. The rRMSD values of decoding stochastic mixtures with K = 1, 2 and 3 components using different particle methods, calculated from 50 repetitions.
In the x-axis labels, F: filtering, Lag: fixed-lag smoothing, FB: fixed-interval smoothing using the forward-filtering backward-smoothing algorithm. For example, APF-Lag means using APF and reporting estimates with fixed-lag smoothing.
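The bootstrap filter (BF) compared above can be sketched on a toy problem. Purely for illustration, the paper's spike-train likelihood (first-passage-time probabilities from the Fokker-Planck equation) is replaced here by a Gaussian observation likelihood on a single OU state; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

theta, mu, sigma = 2.0, 0.0, 0.5     # OU parameters (hypothetical)
obs_sd, dt = 0.3, 0.1                # toy observation noise and step
n_steps, n_part = 50, 500

# Simulate a ground-truth OU trajectory and noisy observations
x_true = np.zeros(n_steps)
for t in range(1, n_steps):
    x_true[t] = (x_true[t - 1] + theta * (mu - x_true[t - 1]) * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal())
y = x_true + obs_sd * rng.standard_normal(n_steps)

particles = rng.standard_normal(n_part)
est = np.zeros(n_steps)
for t in range(n_steps):
    # propagate through the OU transition (the "bootstrap" proposal)
    particles += (theta * (mu - particles) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal(n_part))
    # weight by the observation likelihood, then normalize
    w = np.exp(-0.5 * ((y[t] - particles) / obs_sd) ** 2)
    w /= w.sum()
    est[t] = np.sum(w * particles)   # filtering posterior mean
    # multinomial resampling at every step
    particles = particles[rng.choice(n_part, n_part, p=w)]

rmsd = np.sqrt(np.mean((est - x_true) ** 2))
```

Fixed-lag and fixed-interval smoothing refine these filtering estimates by also using observations after time t; the skeleton of the weight-propagate-resample loop is the same.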
Fig 6
Fig 6. Effective sample sizes.
ESS of BF and APF with K = 1, 2, 3 stimuli, shown in boxplots for 2500 samples from 50 repetitions at 50 time steps. The labels on the x-axis show the number of stimuli. For example, APF-2 means using APF with 2 stimuli.
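The effective sample size (ESS) reported here is the standard diagnostic for weight degeneracy in particle filters: for normalized weights it equals the particle count when weights are uniform and 1 when a single particle carries all the weight.

```python
import numpy as np

def ess(w):
    """Effective sample size 1 / sum(w_i^2) of a weight vector."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                  # normalize defensively
    return 1.0 / np.sum(w ** 2)

# uniform weights over N particles give ESS = N
print(ess(np.ones(2500)))
# a degenerate weight vector gives ESS = 1
print(ess([1.0, 0.0, 0.0, 0.0]))
```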
Fig 7
Fig 7. Examples of parameter learning of γ over time.
The solid line is the mean of 500 particles, and dashed lines show ± the standard deviation. The red lines are the true values.
Fig 8
Fig 8. Decoding from 20 spike trains on a stimulus mixture with two components assuming serial processing.
Decoding is done by BF with online filtering (upper middle panel), fixed-lag smoothing (lower middle panel) and fixed-interval smoothing (lower panel).
Fig 9
Fig 9. Decoding from 20 spike trains using BF assuming parallel processing.
In the top panel, 20 spike trains are shown. The middle panel shows the method using individual decoding and clustering. Short gray bars show the individual decoding results for the stimulus at each time point from the 20 spike trains. Thick bars show the medians of the clustered categories; a redder color of a thick bar indicates fewer estimates inside the corresponding category. Two stars mark categories containing at most 5% of the estimates (here, 5% × 20 = 1 estimate), which happens only once, at time 4.9 s. Blue curves show the true stimuli. The histograms on the right show the distribution of the 20 estimates, with red lines indicating the medians. The lower panel shows BF with marginal likelihood. For graphical reasons, we plot the two-dimensional posterior estimate of the two stimuli in one dimension. For both decoding methods assuming parallel processing, all stimulus components are decoded at each time point. Blue curves show the true stimuli.
Fig 10
Fig 10. The rRMSD values using different particle methods for serial and parallel processing, calculated from 50 repetitions.
In the x-axis labels, APFg: APF with geometric mean, iBF: individual decoding using BF, iAPF: individual decoding using APF, mBF: BF with marginal likelihood, mAPF: APF with marginal likelihood, mAPFg: APF with marginal likelihood and geometric mean. For example, APFg-FB means using APF with geometric mean and reporting estimates with fixed-interval smoothing by the forward-filtering backward-smoothing algorithm.
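The effect of geometric-mean pooling, one of the weighting variants compared above, can be illustrated numerically. When combining per-neuron likelihoods across many spike trains, the plain product concentrates the weight on very few particles and collapses the ESS; taking the J-th root of the product (the geometric mean) tempers the combined weight. The likelihood values below are random toy numbers, not model output.

```python
import numpy as np

rng = np.random.default_rng(2)

J, n_part = 20, 500                              # spike trains, particles
lik = rng.uniform(0.1, 1.0, size=(J, n_part))    # hypothetical per-neuron likelihoods

# Plain product of the J likelihoods per particle
w_prod = np.prod(lik, axis=0)
w_prod /= w_prod.sum()

# Geometric mean: the J-th root of the same product
w_geo = np.prod(lik, axis=0) ** (1.0 / J)
w_geo /= w_geo.sum()

def ess(w):
    # effective sample size of a normalized weight vector
    return 1.0 / np.sum(w ** 2)

# Geometric-mean pooling yields flatter weights, hence a larger ESS
```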
Fig 11
Fig 11. ESS using different methods in serial and parallel processing, shown in boxplots for 2500 samples of 50 repetitions at 50 time steps.
The labels on the x-axis show the methods used. For example, parallel-mAPFg means using APF with marginal likelihood and geometric mean for parallel processing.
Fig 12
Fig 12. Examples of parameter learning of γ over time.
The solid line is the mean of 500 particles, and dashed lines show ± the standard deviation. The red lines are the true value.
Fig 13
Fig 13. Decoding from 20 spike trains using BF assuming parallel processing.
In each spike train, neuronal attention switches at continuous times following a Poisson process.
Fig 14
Fig 14. Decoding of two example single spike trains selected from Fig 13 using BF.
Neuronal attention switches at continuous times following a Poisson process. Example switching times are indicated by dashed lines.
Fig 15
Fig 15. Decoding from 20 spike trains using BF assuming parallel processing, using the decay response kernel.
Fig 16
Fig 16. Decoding from 20 spike trains using BF assuming parallel processing, using the delay response kernel.
