Front Comput Neurosci. 2015 Feb 12;9:13. doi: 10.3389/fncom.2015.00013. eCollection 2015.

Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons


Dimitri Probst et al., Front Comput Neurosci, 2015.

Abstract

The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
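The sampling mechanism the abstract describes, networks whose stochastic dynamics draw samples from a distribution over binary random variables, has a simple software analogue in Gibbs sampling on a Boltzmann machine. The sketch below is illustrative only: the logistic update stands in for the LIF neuron's activation function, and all function and variable names are ours, not the paper's.

```python
import numpy as np

def gibbs_sample_boltzmann(W, b, n_steps, rng):
    """Gibbs sampling from p(z) proportional to exp(0.5 z^T W z + b^T z),
    z binary. A software analogue of neural sampling: each unit update
    plays the role of one stochastic neuron's state transition.
    W must be symmetric with zero diagonal."""
    K = len(b)
    z = rng.integers(0, 2, size=K)
    samples = np.empty((n_steps, K), dtype=int)
    for t in range(n_steps):
        for k in range(K):
            u = b[k] + W[k] @ z                    # local field ("membrane potential")
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))  # logistic activation
        samples[t] = z
    return samples
```

In the network implementation described in the paper, the local field corresponds to the neuron's free membrane potential and the logistic function to its empirically measured activation function.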

Keywords: Bayesian theory; MCMC; computational neural models; graphical models; neural coding; neuromorphic hardware; probabilistic models and methods; theoretical neuroscience.


Figures

Figure 1
Formulation of an example inference problem as a Bayesian network and translation to a Boltzmann machine. (A) Knill-Kersten illusion from Knill and Kersten (1991). Although the four objects are identically shaded, the left cube is perceived as being darker than the right one. This illusion depends on the perceived shape of the objects and does not occur for, e.g., cylinders. (B) The setup can be translated to a Bayesian network with four binary RVs. The (latent) variables Z1 and Z2 encode the (unknown) reflectance profile and 3D shape of the objects, respectively. Conditioned on these variables, the (observed) shading and 2D contour are encoded by Z3 and Z4, respectively. Figure modified from Pecevski et al. (2011). (C) Representation of the Bayesian network from (B) as a Boltzmann machine. Factors of order higher than 2 are replaced by auxiliary variables as described in the main text. The individual connections with weights Mexc, Minh → ∞ between each principal and auxiliary variable have been omitted for clarity.
Figure 2
Neural sampling: abstract model vs. implementation with LIF neurons. (A) Illustration of the Markov chain over the refractory variable ζk in the abstract model. Figure taken from Buesing et al. (2011). (B) Example dynamics of all the variables associated with an abstract model neuron. (C) Example dynamics of the equivalent variables associated with an LIF neuron. (D) Free membrane potential distribution and activation function of an LIF neuron: theoretical prediction vs. experimental results. The blue crosses are the mean values of 5 simulations of 200 s duration each. The error bars are smaller than the symbols. Table 1 lists the LIF neuron parameter values used. (E) Performance of sampling with LIF neurons from a randomly chosen Boltzmann distribution over 5 binary RVs. Both weights and biases are drawn from a normal distribution N(μ = 0, σ = 0.5). The green bars are the results of 10 simulations of 100 s duration each. The error bars denote the standard error.
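For K = 5 binary RVs, as in panel (E), the target Boltzmann distribution is small enough to compute exactly by enumerating all 2^K states, which is how sampled and target distributions can be compared. A minimal sketch, where the N(μ = 0, σ = 0.5) draw mirrors the caption and all names are our own:

```python
import itertools
import numpy as np

def exact_boltzmann(W, b):
    """Enumerate p(z) proportional to exp(0.5 z^T W z + b^T z) over all
    binary states. Feasible for small networks (K = 5 gives 32 states)."""
    K = len(b)
    states = np.array(list(itertools.product([0, 1], repeat=K)))
    energies = 0.5 * np.einsum('si,ij,sj->s', states, W, states) + states @ b
    p = np.exp(energies)
    return states, p / p.sum()

# weights and biases drawn as in panel (E): normal with mu = 0, sigma = 0.5
rng = np.random.default_rng(42)
K = 5
W = rng.normal(0, 0.5, (K, K))
W = np.triu(W, 1)
W = W + W.T                     # symmetric, zero diagonal
b = rng.normal(0, 0.5, K)
states, p = exact_boltzmann(W, b)
```

The exact marginals follow as `states.T @ p`, which is the quantity the sampled state frequencies are measured against.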
Figure 3
Comparison of the different implementations of the Knill-Kersten graphical model (Figure 1): LIF (green), LIF with noised parameters (yellow), LIF with small cross-correlations between noise channels (orange), mLIF with PSPs mediated by a superposition of LIF PSP kernels (gray), abstract model with alpha-shaped PSPs (blue), abstract model with rectangular PSPs (red), and the analytical solution (black). The error bars for the noised LIF networks represent the standard error over 10 trials with different noised parameters; all other error bars represent the standard error over 10 trials with identical parameters. (A) Comparison of the four PSP shapes used. (B,C) Inferred marginals of the hidden variables Z1 and Z2 conditioned on the observed (clamped) states of Z3 and Z4: in (B), (Z3, Z4) = (1, 1); in (C), (Z3, Z4) = (1, 0). The duration of a single simulation is 10 s. (D) Marginal probabilities of the hidden variables reacting to a change in the evidence Z4 = 1 → 0. The change in firing rates (top) appears slower than the one in the raster plot (bottom) due to the smearing effect of the box filter used to translate spike times into firing rates. (E,F) Convergence toward the unconstrained equilibrium distributions compared to the target distribution. In (E), the performance of the four PSP shapes from (A) is shown; the abstract model with rectangular PSPs converges to DnormKL = 0, since it is guaranteed to sample from the correct distribution in the limit t → ∞. In (F), the performance of the three LIF implementations is shown.
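Panels (B,C) report marginals of the hidden variables under clamped evidence. Given the full joint table over the four binary RVs, these conditionals reduce to slicing and renormalizing. A minimal sketch with 0-based indices; the function name and the uniform joint in the test are ours, purely for illustration:

```python
import numpy as np

def conditional_marginals(joint, clamp):
    """Marginals p(Z_k = 1 | evidence) of the free RVs, by renormalization.

    joint: array of shape (2,)*K giving p(z_1, ..., z_K).
    clamp: dict {index: value} of observed RVs, e.g. {2: 1, 3: 1}
           for (Z3, Z4) = (1, 1) as in panel (B), assuming 0-based indices.
    """
    K = joint.ndim
    idx = tuple(clamp.get(k, slice(None)) for k in range(K))
    cond = joint[idx]
    cond = cond / cond.sum()          # renormalize over the free RVs
    free = [k for k in range(K) if k not in clamp]
    marg = {}
    for i, k in enumerate(free):
        axes = tuple(j for j in range(cond.ndim) if j != i)
        marg[k] = cond.sum(axis=axes)[1]   # probability that Z_k = 1
    return marg
```

In the network, the same conditioning is realized not by table lookup but by clamping the neurons coding the observed RVs to their evidence states.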
Figure 4
To establish a coupling closer to the ideal (rectangular) PSP, the following network structure is set up: instead of a single principal neuron ν per RV, each RV is represented by a neuronal chain. In addition to the network connections imposed by the translation of the modeled Bayesian graph, feedforward connections are generated between the neurons within each chain. Furthermore, each chain neuron projects onto the first neuron of the postsynaptic interneuron chain (here: all connections from νi1 to ν12). With appropriately chosen synaptic efficacies and delays, the chain generates a superposition of single PSP kernels with a sawtooth-like shape, which is closer to the desired rectangular shape than a single PSP.
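The effect of the chain can be illustrated numerically: delayed, weighted alpha-shaped PSP kernels sum to a plateau-like compound PSP. All parameter values and function names below are illustrative assumptions, not the paper's efficacies or delays:

```python
import numpy as np

def alpha_psp(t, tau):
    """Alpha-shaped PSP kernel, normalized to peak 1 at t = tau; zero for t < 0."""
    return np.where(t >= 0, (t / tau) * np.exp(1 - t / tau), 0.0)

def chain_psp(t, n_links, delay, weights, tau):
    """Superpose delayed PSPs from an interneuron chain: each chain neuron
    fires `delay` after its predecessor, so the summed kernel holds a
    sawtooth-like plateau closer to a rectangular PSP than a single kernel."""
    return sum(w * alpha_psp(t - i * delay, tau)
               for i, w in zip(range(n_links), weights))
```

Plotting `chain_psp` against a single `alpha_psp` shows the flattened, extended shape the caption describes.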
Figure 5
Sampling from random distributions over 5 RVs with different networks: LIF (green), mLIF (gray), abstract model with alpha-shaped PSPs (blue) and abstract model with rectangular PSPs (red). (A) Distributions for different values of η from which conditionals are drawn. (B) DnormKL between the equilibrium and target distributions as a function of η. The error bars denote the standard error over 30 different random graphs drawn from the same distribution. (C) Evolution of the DnormKL over time for a sample network drawn from the distribution with η = 1. Error bars denote the standard error over 10 trials.
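The convergence measure in panels (B,C) is a normalized KL divergence between the sampled and target distributions. An unnormalized sketch (the paper's normalization is not reproduced here, and the `eps` regularizer guarding against empty histogram bins is our addition):

```python
import numpy as np

def dkl(p_sampled, p_target, eps=1e-12):
    """Kullback-Leibler divergence D_KL(p_sampled || p_target) between two
    distributions over the same finite state space."""
    p = np.clip(np.asarray(p_sampled, dtype=float), eps, None)
    q = np.clip(np.asarray(p_target, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```

Applied to the histogram of visited network states vs. the target Boltzmann distribution, this quantity decays toward zero as sampling time grows, which is what panel (C) tracks.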

References

    1. Berkes P., Orbán G., Lengyel M., Fiser J. (2011). Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science 331, 83–87. doi: 10.1126/science.1195870
    2. Brette R., Gerstner W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J. Neurophysiol. 94, 3637–3642. doi: 10.1152/jn.00686.2005
    3. Buesing L., Bill J., Nessler B., Maass W. (2011). Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons. PLoS Comput. Biol. 7:e1002211. doi: 10.1371/journal.pcbi.1002211
    4. Davison A. P., Brüderle D., Eppler J., Kremkow J., Muller E., Pecevski D., et al. (2008). PyNN: a common interface for neuronal network simulators. Front. Neuroinform. 2:11. doi: 10.3389/neuro.11.011.2008
    5. Deneve S. (2008). Bayesian spiking neurons I: inference. Neural Comput. 20, 91–117. doi: 10.1162/neco.2008.20.1.91