PLoS Comput Biol. 2019 Jun 10;15(6):e1007122.
doi: 10.1371/journal.pcbi.1007122. eCollection 2019 Jun.

How single neuron properties shape chaotic dynamics and signal transmission in random neural networks


Samuel P Muscinelli et al. PLoS Comput Biol.

Abstract

While most models of randomly connected neural networks assume single-neuron models with simple dynamics, neurons in the brain exhibit complex intrinsic dynamics over multiple timescales. We analyze how the dynamical properties of single neurons and recurrent connections interact to shape the effective dynamics in large randomly connected networks. A novel dynamical mean-field theory for strongly connected networks of multi-dimensional rate neurons shows that the power spectrum of the network activity in the chaotic phase emerges from a nonlinear sharpening of the frequency response function of single neurons. For the case of two-dimensional rate neurons with strong adaptation, we find that the network exhibits a state of "resonant chaos", characterized by robust, narrow-band stochastic oscillations. The coherence of stochastic oscillations is maximal at the onset of chaos and their correlation time scales with the adaptation timescale of single units. Surprisingly, the resonance frequency can be predicted from the properties of isolated neurons, even in the presence of heterogeneity in the adaptation parameters. In the presence of these internally-generated chaotic fluctuations, the transmission of weak, low-frequency signals is strongly enhanced by adaptation, whereas signal transmission is not influenced by adaptation in the non-chaotic regime. Our theoretical framework can be applied to other mechanisms at the level of single neurons, such as synaptic filtering, refractoriness or spike synchronization. These results advance our understanding of the interaction between the dynamics of single units and recurrent connectivity, which is a fundamental step toward the description of biologically realistic neural networks.
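To make the setting concrete, the following is a minimal simulation sketch of a randomly connected rate network with firing-rate adaptation. The specific equations are an assumption, not the paper's exact model: we take φ = tanh, couplings J_ij drawn i.i.d. from N(0, g²/N), and a hypothetical two-dimensional single-neuron model in which an adaptation variable a_i with inverse timescale γ and strength β feeds back onto the activation x_i.

```python
import numpy as np

# Hypothetical two-dimensional rate model (an assumed form, not the paper's
# exact equations):
#   dx_i/dt = -x_i - beta * a_i + sum_j J_ij * tanh(x_j)
#   da_i/dt = gamma * (x_i - a_i)
# with J_ij ~ N(0, g^2 / N). Integrated with forward Euler.

def simulate(N=500, g=1.5, gamma=0.25, beta=1.0, T=200.0, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 0.5, size=N)   # random initial activations
    a = np.zeros(N)                    # adaptation variables
    steps = int(T / dt)
    traj = np.empty((steps, N))
    for t in range(steps):
        recurrent = J @ np.tanh(x)
        x = x + dt * (-x - beta * a + recurrent)
        a = a + dt * gamma * (x - a)
        traj[t] = x
    return traj

traj = simulate()   # shape (timesteps, units)
```

Because tanh saturates, the activity stays bounded even in the self-sustained (chaotic) regime; the trajectory array can then be fed into spectral analyses of the kind shown in the figures.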


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Microscopic network dynamics with firing rate adaptation.
In the top row (panels a and b), the network is below the bifurcation (g = 0.96gc(γ, β)) and the activity decays transiently to the stable fixed point. In the bottom row (panels c and d), the fixed point is unstable (g = 1.3gc) and the network exhibits irregular, self-sustained oscillations. In the left column (panels a and c), the network is in the resonant regime (γ = 0.2, β = 0.5), as can be seen from the single-neuron linear frequency response function G˜(f) (cf. Eq 9). In the right column (panels b and d), the network is in the non-resonant regime (γ = 1, β = 0.1). For each panel, ten randomly chosen units are shown, out of N = 1000 units. Panel c corresponds to the resonant chaotic state, while in panel d the system exhibits chaotic activity similar to the case described in [1]. The insets show the eigenvalue spectrum in the complex plane for the four different sets of parameters. The dashed black line indicates the imaginary axis. Comparing the eigenvalue spectrum of panel a with that of panel c, we see that the network undergoes a Hopf bifurcation.
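Eigenvalue spectra like those in the Fig 1 insets can be sketched numerically. Assuming (hypothetically, not the paper's exact equations) the two-dimensional model dx/dt = −x − βa + Jφ(x), da/dt = γ(x − a) with φ′(0) = 1 and J_ij ~ N(0, g²/N), the 2N × 2N Jacobian at the x = 0 fixed point has block structure [[−I + J, −βI], [γI, −γI]], so its spectrum follows from the N eigenvalues of J alone via a 2 × 2 problem per eigenvalue.

```python
import numpy as np

# Sketch: spectrum of the linearized dynamics at the origin for a
# hypothetical adaptive rate model (assumed form, see lead-in text).

def jacobian_spectrum(N=800, g=0.96, gamma=0.2, beta=0.5, seed=1):
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    lam_J = np.linalg.eigvals(J)   # circular law: radius approx. g
    eigs = []
    for l in lam_J:
        # each eigenvalue l of J contributes the two eigenvalues of the
        # 2x2 block [[-1 + l, -beta], [gamma, -gamma]]
        block = np.array([[-1.0 + l, -beta], [gamma, -gamma]])
        eigs.extend(np.linalg.eigvals(block))
    return np.array(eigs)

eigs = jacobian_spectrum()
# the fixed point is stable if eigs.real.max() < 0
```

Sweeping g and watching where a complex-conjugate pair of eigenvalues crosses the imaginary axis reproduces the Hopf scenario described in the caption.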
Fig 2
Fig 2. Self-consistent statistics in the chaotic regime.
a: Resonant (narrow-band) chaos. Power spectral density obtained from mean-field theory (solid line) and microscopic simulations (light blue, dashed) for γ = 0.25, β = 1 and g = 2gc(γ, β). The dashed, dark blue line indicates the square modulus of the linear response function G˜(f) for the same adaptation parameters. Inset: Normalized mean-field autocorrelation Cx(τ) for the same parameters, plotted against the time lag in units of τx. b: Non-resonant (broad-band) chaotic regime. Curves and inset are the same as in a, but with γ = 1, β = 0.1 and g = 2gc(γ, β). c: Maximum-power frequency fp of the recurrent network plotted against γ, for different β. Crosses depict results obtained from microscopic simulations, circles show the semi-analytical prediction based on the iterative method, and dashed lines show the theory based on the single-neuron response function. For γ = 0 all curves start at fp = 0. d: Power spectral density Sx(f) for different levels of heterogeneity of the parameter β (solid lines), compared to the case without heterogeneity (dashed line). All the curves are almost superimposed, except at very low frequencies where small deviations are visible (inset). Parameters: γ = 0.25, β¯ = 1, g = 2gc(γ, β¯). e: Distributions P(x) of the activation x from microscopic simulation (N = 2000, solid lines) and theoretical prediction (dashed lines). The adaptation parameters were γ = 0.25 and β = 1. f: Normalized power spectral density S^x(f) ≔ Sx(f)/maxf Sx(f) (solid lines) at different iterations n, for the network with adaptation. For the first iterations, the powers of G^(f) ≔ G˜(f)/maxf G˜(f) (dashed lines) provide a good approximation of the power spectrum width. The initial power spectral density is a constant and the network parameters are the same as in panel a.
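Power spectra like those in Fig 2 can be estimated from a simulated trajectory with a plain periodogram averaged over units; averaging over units stands in for the ensemble average of the mean-field theory. The helper below is a generic sketch (names and conventions are ours, not the paper's); `traj` is assumed to have shape (timesteps, units), sampled at interval `dt`.

```python
import numpy as np

# Sketch: population-averaged power spectral density S_x(f) from a
# simulated trajectory (periodogram averaged over units).

def population_psd(traj, dt):
    traj = traj - traj.mean(axis=0)                    # remove per-unit mean
    spectra = np.abs(np.fft.rfft(traj, axis=0)) ** 2   # per-unit periodograms
    psd = spectra.mean(axis=1) * dt / traj.shape[0]    # average over units
    freqs = np.fft.rfftfreq(traj.shape[0], d=dt)
    return freqs, psd

# Sanity check: a noisy sinusoid should produce a peak at its frequency.
rng = np.random.default_rng(0)
t = np.arange(0, 100, 0.05)
sig = np.sin(2 * np.pi * 0.12 * t)[:, None] + rng.normal(0, 0.1, (t.size, 4))
freqs, psd = population_psd(sig, dt=0.05)
f_peak = freqs[np.argmax(psd)]
```

In practice one would average over longer runs (or use Welch windowing) to reduce estimator variance before comparing against the mean-field prediction.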
Fig 3
Fig 3. Correlation time and effect of recurrent connections.
a: Correlation time (blue solid line) and Q-factor (dashed line) as a function of the connectivity strength. The weakest connectivity level plotted is g = 1.01gc(γ, β). Adaptation parameters: γ = 0.1 and β = 1. The dash-dotted horizontal line indicates the Q-factor of a single unit with the same adaptation parameters, driven by white noise. b: Correlation time (blue) and Q-factor (black, dash-dotted line) as a function of the adaptation timescale τa ≔ γ−1. Both the recurrent network (solid line) and the single unit driven by white noise (dashed line) scale with τa. β = 1 and g = 1.5gc(γ, β).
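The Q-factor plotted in Fig 3 quantifies the sharpness of the spectral peak. One common convention (assumed here; the paper may define it differently) is the peak frequency divided by the full width at half maximum of the power spectrum.

```python
import numpy as np

# Sketch: Q-factor of a spectral peak, defined (by assumption) as
# Q = f_peak / FWHM, estimated directly from a sampled power spectrum.

def q_factor(freqs, psd):
    i_peak = np.argmax(psd)
    half = psd[i_peak] / 2.0
    above = np.where(psd >= half)[0]           # bins above half maximum
    fwhm = freqs[above[-1]] - freqs[above[0]]  # full width at half maximum
    return freqs[i_peak] / fwhm

# Check on a Lorentzian peak centered at f0 with half-width delta:
# FWHM = 2 * delta, so Q should be close to f0 / (2 * delta) = 5.
freqs = np.linspace(0.0, 20.0, 20001)
f0, delta = 10.0, 1.0
psd = 1.0 / (1.0 + ((freqs - f0) / delta) ** 2)
q = q_factor(freqs, psd)
```

This simple estimator assumes a single, unimodal peak; for broad-band spectra (as in the non-resonant regime) the FWHM becomes large and Q drops accordingly.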
Fig 4
Fig 4. Response of the mean-field network to an oscillatory input.
a: Schematic representation of the random network driven by an external input, with phase randomization. For g > gc, the chaotic activity can be seen as internally-generated noise. b: Effect of an oscillatory external input on the power spectral density Sx(f). In the example, γ = 0.25, β = 1, g = 2gc(γ, β), fI = 0.12, while AI = 0.5 (blue) and AI = 0 (gray). Simulations (solid blue) and theory (dashed blue) are superimposed. c: Top: Schematic representation of the separation of the power spectral density into its oscillatory (Aosc) and chaotic (Abkg) components. Note that these quantities depend on the size of the frequency discretization bin. Bottom: Graphical interpretation of Pbkg, i.e. the total variance of the network activity due to chaotic activity (shaded gray area).
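The separation sketched in Fig 4c, splitting the power spectrum into an oscillatory component at the drive frequency and a chaotic background, can be operationalized as follows. This is one plausible implementation (the paper's exact procedure may differ): the background level under the peak is estimated from neighboring frequency bins, and everything above it at the drive bin is attributed to the oscillation.

```python
import numpy as np

# Sketch: split a power spectrum into an oscillatory component A_osc at the
# drive frequency f_I and the total background variance P_bkg (assumed
# procedure: background under the peak estimated from neighboring bins).

def split_spectrum(freqs, psd, f_I, exclude_bins=2):
    i = np.argmin(np.abs(freqs - f_I))            # bin containing the drive
    left = psd[max(i - 2 * exclude_bins, 0):max(i - exclude_bins, 0)]
    right = psd[i + exclude_bins + 1:i + 2 * exclude_bins + 1]
    background = np.concatenate([left, right]).mean()
    A_osc = max(psd[i] - background, 0.0)         # power above background
    df = freqs[1] - freqs[0]
    P_bkg = (psd.sum() - A_osc) * df              # background variance
    return A_osc, P_bkg

# Example: flat background of height 1 plus a spike of +50 at f_I = 0.5.
freqs = np.linspace(0.0, 1.0, 101)
psd = np.ones_like(freqs)
psd[50] += 50.0
A_osc, P_bkg = split_spectrum(freqs, psd, f_I=0.5)
```

As the caption notes, A_osc depends on the width of the frequency bins, so comparisons across simulations should use a fixed discretization.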
Fig 5
Fig 5. Adaptation shapes the SNR in the chaotic regime.
a: For small g, a recurrent network driven by an oscillatory input and external noise can be analyzed in the linear response theory framework. Top row: response of the network to oscillatory drive and independent white noise to each neuron. Bottom row: response of the network to oscillatory drive and independent low-frequency noise to each neuron. For each row, from left to right, we plot the power spectrum of the input noise, the background component of the power spectrum A^bkg, the oscillatory component of the power spectrum A^osc, and the SNR as a function of the driving frequency. The hat over the symbols Abkg and Aosc indicates that, to highlight the network shaping, they are normalized to have the same maximum height (equal to one). Notice that, since both signal and noise are shaped in the same way in the linear response framework, the introduction of adaptation does not affect the SNR. b: For large g, the network is subject to internally generated noise and driven by oscillatory input. We plot the same quantities as in panel a. Notice that, due to the nonlinearity of the network, signal and internally-generated noise are shaped in different ways, with the signal being subject to a broader effective filter. As a consequence, the introduction of adaptation in the nonlinear network shapes the SNR by favoring low frequencies. Parameters of the network with adaptation for all panels: γ = 0.25, β = 1, g = 2gc(γ, β) and AI = 0.5.
Fig 6
Fig 6. Effect of a strong oscillatory input.
a: SNR at the driving frequency fI as a function of the driving frequency, for different values of the signal amplitude AI. As AI increases, nonlinear interactions between signal and noise become stronger, leading to a qualitative change in the SNR profile. b: Total power of the chaotic (black dashed) and oscillatory (light blue) components of the power spectrum, in the case of strong input (AI = 1.5). For both panels, γ = 0.25, β = 1.0, and g = 2gc(γ, β).
Fig 7
Fig 7. Stability of the fixed point and local properties.
a: Critical value of the coupling gc (color code, right) for different adaptation parameters γ (horizontal axis) and β (vertical axis). The curve βH(γ) (solid black line) separates the regions of the γβ plane in which for increasing g we encounter a Hopf bifurcation (above βH(γ)) or a saddle-node bifurcation (below βH(γ)). Cross and filled circle: parameters used in Fig 1. Left inset: dependence of gc on β for fixed γ = 0.9. Top inset: dependence of gc on γ for fixed β = βH(γ = 0.9). Blue line: Hopf bifurcation; red line: saddle-node bifurcation. b: Resonance frequency fm for different adaptation parameters γ, β. Notice that in the non-resonant region the resonance frequency is not defined. Left inset: square-root increase of fm as a function of β for fixed γ = 0.9. Top inset: non-monotonic behavior of fm as a function of γ, for fixed β = βH(γ = 0.9).
Fig 8
Fig 8. Two examples of multi-dimensional rate models.
Parameters defining both models can be found in Table 1. a-b-c: Analysis of a three-dimensional rate model. Eigenvalue spectra (a) corresponding to the coupling values g1 = 1.28, g2 = 1.4 and g3 = 2. The dashed line indicates the imaginary axis. In b we plot the linear response function of the single unit G˜(f) (solid line) and the instability thresholds corresponding to the three coupling values g1, g2 and g3 (dashed lines). In c we plot the solution of the mean-field theory obtained with the iterative method for three values of the coupling, g1 = 1.5, g2 = 2 and g3 = 3. d-e-f: Same as a, b and c, but for a four-dimensional rate model.

References

    1. Sompolinsky H, Crisanti A, Sommers HJ. Chaos in Random Neural Networks. Physical Review Letters. 1988;61(3):259–262. doi:10.1103/PhysRevLett.61.259
    2. Kadmon J, Sompolinsky H. Transition to Chaos in Random Neuronal Networks. Phys Rev X. 2015;5:041030.
    3. van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science. 1996;274:1724. doi:10.1126/science.274.5293.1724
    4. Rajan K, Abbott LF, Sompolinsky H. Stimulus-dependent suppression of chaos in recurrent neural networks. Phys Rev E. 2010;82:011903. doi:10.1103/PhysRevE.82.011903
    5. Huang C, Doiron B. Once upon a (slow) time in the land of recurrent neuronal networks…. Current Opinion in Neurobiology. 2017;46:31–38. doi:10.1016/j.conb.2017.07.003
