Cogn Neurodyn. 2022 Jun;16(3):683-705. doi: 10.1007/s11571-021-09731-9. Epub 2021 Nov 3.

Autonomous learning of nonlocal stochastic neuron dynamics

Tyler E Maltba et al. Cogn Neurodyn. 2022 Jun.

Abstract

Neuronal dynamics is driven by externally imposed or internally generated random excitations/noise, and is often described by systems of random or stochastic ordinary differential equations. Such systems admit a distribution of solutions, which is (partially) characterized by the single-time joint probability density function (PDF) of system states. It can be used to calculate such information-theoretic quantities as the mutual information between the stochastic stimulus and various internal states of the neuron (e.g., membrane potential), as well as various spiking statistics. When random excitations are modeled as Gaussian white noise, the joint PDF of neuron states satisfies exactly a Fokker-Planck equation. However, most biologically plausible noise sources are correlated (colored). In this case, the resulting PDF equations require a closure approximation. We propose two methods for closing such equations: a modified nonlocal large-eddy-diffusivity closure and a data-driven closure relying on sparse regression to learn relevant features. The closures are tested for the stochastic non-spiking leaky integrate-and-fire and FitzHugh-Nagumo (FHN) neurons driven by sine-Wiener noise. Mutual information and total correlation between the random stimulus and the internal states of the neuron are calculated for the FHN neuron.
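Throughout the figures, Monte Carlo "yardstick" PDFs f_MC(X;t) serve as the reference solution. The following minimal sketch shows how such a single-time PDF can be estimated for a non-spiking leaky integrate-and-fire neuron driven by bounded sine-Wiener noise; the drift term, the noise construction ξ(t) = σ sin(√(2/τ) W(t)), and all parameter values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the paper)
sigma, tau = 0.133, 0.20       # noise strength and correlation time
dt, T, n_paths = 1e-3, 3.0, 5_000
n_steps = int(T / dt)

# Sine-Wiener (bounded) noise xi(t) = sigma * sin(sqrt(2/tau) * W(t)),
# driven by standard Brownian paths W(t), advanced in lockstep with x(t)
x = np.zeros(n_paths)          # membrane voltage, x(0) = 0
W = np.zeros(n_paths)
for _ in range(n_steps):
    W += rng.normal(0.0, np.sqrt(dt), n_paths)
    xi = sigma * np.sin(np.sqrt(2.0 / tau) * W)
    x += dt * (-x + xi)        # explicit Euler for dx/dt = -x + xi(t)

# Single-time Monte Carlo PDF f_MC(X; t=T) via a normalized histogram
pdf, edges = np.histogram(x, bins=100, density=True)
mass = float(pdf.sum() * np.diff(edges)[0])
print(mass)  # integrates to 1 by construction
```

Because the sine-Wiener noise is bounded by σ, the voltage stays confined to [-σ, σ] under this drift, which is a useful sanity check on the simulation.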

Keywords: Colored noise; Equation learning; Method of distributions; Nonlocal; Stochastic neuron model.


Figures

Fig. 1
(a) PDF fx(X;t) of the membrane voltage x(t) in (24), computed via the PDF equation with the semi-local closure, for several combinations of the strength (σ) and correlation length (τ) of colored noise ξ(t). (b) Direct comparison of the PDF fx(X;t) computed with the semi-local closure and the yardstick PDF fMC(X;t) for σ=0.133 and τ=0.20 at times t=1 and 3.
Fig. 2
Temporal evolution of the KL divergence, DKL(fMC||fx), between the membrane voltage PDF computed with the semi-local closure, fx(X;t), and its Monte Carlo counterpart, fMC(X;t), for several combinations of the strength (σ) and correlation length (τ) of colored noise ξ(t).
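The KL-divergence diagnostic used in the figures can be reproduced for any two PDFs tabulated on a shared grid. A minimal sketch follows, with a standard Gaussian standing in for both f_MC and f_x; the clipping constant eps is an assumed regularization for grid cells where a density vanishes.

```python
import numpy as np

def kl_divergence(f_mc, f_x, dx, eps=1e-12):
    """Discrete D_KL(f_MC || f_x) for PDFs tabulated on a uniform grid of spacing dx."""
    p = np.clip(f_mc, eps, None)
    q = np.clip(f_x, eps, None)
    return float(np.sum(p * np.log(p / q)) * dx)

# Placeholder densities: a standard Gaussian compared with itself
X = np.linspace(-5.0, 5.0, 1001)
dx = X[1] - X[0]
g = np.exp(-0.5 * X**2) / np.sqrt(2.0 * np.pi)
kl_same = kl_divergence(g, g, dx)
print(kl_same)  # 0.0 for identical densities
```

For two unit-variance Gaussians whose means differ by μ, the exact divergence is μ²/2, which gives a quick correctness check for the discretization.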
Fig. 3
Diffusion coefficients associated with the semi-local closure, D2(t), the data-driven closure with constant coefficients, β^2, and the data-driven closure with a temporal Legendre basis expansion, β^2(t). We present the cases for which the KL divergence DKL(fMC||fx) of the data-driven closure with constant β^ is highest. In all cases, the data-driven coefficients approximated with a Legendre basis expansion agree well with the other closures.
Fig. 4
Temporal snapshots (at times t=0.5, 5, 12.5, and 25; for σ=0.133 and τ=0.20) of the membrane voltage PDFs fx(X;t) alternatively computed with the semi-local closure (SL), the data-driven closure with constant coefficients (DDC), and the data-driven closure with a temporal Legendre basis expansion (DDP).
Fig. 5
Temporal evolution of the KL divergence, DKL(fMC||fx), between the membrane voltage PDF computed with the data-driven closure, fx(X;t), and its Monte Carlo counterpart, fMC(X;t), for selected combinations of the strength (σ) and correlation length (τ) of colored noise ξ(t). The data array f^x used in the data-driven closure is computed with NMCtr = 3×10^4 Monte Carlo realizations.
Fig. 6
Dependence of the KL divergence DKL(fMC||fx), averaged over all temporal grid nodes, on the number of Monte Carlo runs used in the sparse regression, NMCtr.
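The data-driven closure coefficients above are learned by sparse regression on Monte Carlo data. The paper's exact feature library is not reproduced here; a generic sequentially-thresholded least-squares sketch on synthetic data illustrates the mechanism (the threshold, candidate library, and noise level are all assumptions).

```python
import numpy as np

def stls(Theta, y, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: sparse beta with Theta @ beta ≈ y."""
    beta = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(beta) < threshold
        beta[small] = 0.0                    # prune weak features
        if (~small).any():                   # refit on surviving features
            beta[~small] = np.linalg.lstsq(Theta[:, ~small], y, rcond=None)[0]
    return beta

# Synthetic candidate library: only features 1 and 3 are truly active
rng = np.random.default_rng(1)
Theta = rng.normal(size=(200, 5))
beta_true = np.array([0.0, 1.3, 0.0, -0.7, 0.0])
y = Theta @ beta_true + 0.01 * rng.normal(size=200)
beta_hat = stls(Theta, y)
print(np.flatnonzero(beta_hat))  # active features recovered: [1 3]
```

The hard thresholding is what can set a learned coefficient exactly to zero, as seen for β^3 in Fig. 7.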
Fig. 7
The coefficients of the semi-local closure, D4 and D5, and the data-driven closure, β^3 and β^4. For σ=0.05 and τ=0.1, the optimization algorithm sets β^3=0 while D4=O(10^-7), which is near zero when compared with β^4 and D5.
Fig. 8
Temporal snapshots of the joint PDF fx(X;t) of the random state variables x1(t) and x2(t) in the FHN model (33). The dynamics of fx(X;t) is governed by the PDF equation (37), for σ=0.2 and τ=0.1.
Fig. 9
Temporal evolution of the KL divergence DKL(fMC||fx) between the yardstick Monte Carlo solution fMC(X;t) of (33) and the PDF fx(X;t) computed, alternatively, via (37) and (4) with the data-driven closure (38).
Fig. 10
Snapshots (at times t=20 and 65) of the marginal PDFs fx1(X1;t) (left column) and fx2(X2;t) (right column) alternatively computed with the local (L), semi-local (SL) and data-driven (DD) closures and with Monte Carlo simulations (MC), for the stochastic FHN neuron with σ=0.05 and τ=0.1.
Fig. 11
Dependence of the KL divergence DKL(fMC||fx), averaged over all temporal grid nodes, on the number of Monte Carlo runs used in the sparse regression, NMCtr.
Fig. 12
Temporal evolution of MI, I, between the different FHN neuron states and of the total correlation, C, between all three states. The PDF equation (4) is closed with the data-driven closure, and the parameter values are set to σ=0.05 and τ=0.01.
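Mutual information between two states is a functional of their joint PDF. A minimal histogram-based MI estimator follows, exercised on a correlated bivariate Gaussian that stands in for a pair of FHN states; the bin count, sample size, and eps regularization are assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=50, eps=1e-12):
    """Histogram estimate of I(a;b) = sum p(a,b) * log[p(a,b) / (p(a) p(b))], in nats."""
    pab, _, _ = np.histogram2d(a, b, bins=bins)
    pab /= pab.sum()
    pa = pab.sum(axis=1, keepdims=True)   # marginal of a
    pb = pab.sum(axis=0, keepdims=True)   # marginal of b
    ratio = pab / np.clip(pa * pb, eps, None)
    return float(np.sum(pab * np.log(np.clip(ratio, eps, None))))

# Correlated Gaussian pair: exact MI is -0.5 * log(1 - rho**2) ≈ 0.51 nats for rho = 0.8
rng = np.random.default_rng(2)
rho = 0.8
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)
mi = mutual_information(z[:, 0], z[:, 1])
print(mi)
```

Total correlation generalizes this to three states by comparing the joint PDF with the product of all three marginal PDFs.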
