Input correlations impede suppression of chaos and learning in balanced firing-rate networks

Rainer Engelken et al. PLoS Comput Biol. 2022 Dec 5;18(12):e1010590. doi: 10.1371/journal.pcbi.1010590

Abstract

Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.
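The cancellation mechanism behind this result can be sketched with a one-line balance argument (a sketch, assuming the standard rate dynamics $\tau \dot{h}_i = -h_i + \sum_j J_{ij}\,\phi(h_j) + I_i^{\mathrm{in}}(t)$ with mean coupling $-J_0/\sqrt{N}$ per synapse; these scalings are assumptions consistent with the figure captions below, not quoted from the paper's methods). The population-averaged recurrent input is then approximately $-\sqrt{N}\,J_0\,\nu(t)$, where $\nu(t)$ is the mean firing rate, and balance of the $O(\sqrt{N})$ terms requires

$$\sqrt{N}\,I_0 + \delta I(t) - \sqrt{N}\,J_0\,\nu(t) = O(1) \quad\Rightarrow\quad \nu(t) \approx \frac{I_0}{J_0} + \frac{\delta I(t)}{\sqrt{N}\,J_0}.$$

The population rate thus tracks, and the recurrent feedback cancels, any common modulation $\delta I(t)$; independent modulations $\delta I_i(t)$ have zero population mean and escape this cancellation.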


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Suppression of chaos in balanced networks with common vs independent input.
A) Common input: External input $I_i^{\mathrm{in}}(t) = \sqrt{N}\,I_0 + \delta I(t)$ consists of a positive static input and a sinusoidally time-varying input with identical phase across neurons. B) Independent input: External input $I_i^{\mathrm{in}}(t) = \sqrt{N}\,I_0 + \delta I_i(t)$ consists of a positive static input and a sinusoidally time-varying input with a random phase for each neuron. C) External inputs (top), recurrent feedback $I_i^{\mathrm{rec}} = \sum_j J_{ij}\,\phi(h_j)$ and their population average (thick line) (middle), and synaptic currents (bottom) for three example neurons. Recurrent feedback has a strong time-varying component that is anti-correlated with the external input, resulting in cancellation. D) Same as in C, but for independent input. Here, no cancellation occurs and the network is entrained into a forced limit cycle. Throughout this work, green (violet) refers to common (independent) input. Model parameters: N = 5000, g = 2, f = 0.01/τ, I0 = J0 = 1, I1 = 6.
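For concreteness, the driven network of Fig 1 can be simulated in a few lines. This is a minimal sketch: the threshold-linear φ(x) = max(x, 0) is borrowed from Fig 7, and the coupling statistics $J_{ij} = (-J_0 + g\,z_{ij})/\sqrt{N}$ with standard-normal $z_{ij}$ are an assumed balanced-state scaling consistent with the captions, not taken from the paper's methods.

```python
import numpy as np

def simulate(N=1000, g=2.0, f=0.01, I0=1.0, J0=1.0, I1=6.0,
             common=True, T=400.0, dt=0.05, tau=1.0, seed=0):
    """Euler-integrate tau*dh/dt = -h + J@phi(h) + I_in(t) (sketch of Fig 1)."""
    rng = np.random.default_rng(seed)
    # Assumed balanced-state couplings: mean -J0/sqrt(N), std g/sqrt(N).
    J = (-J0 + g * rng.standard_normal((N, N))) / np.sqrt(N)
    phi = lambda x: np.maximum(x, 0.0)          # threshold-linear, as in Fig 7
    # Identical phases -> common input (panel A); random phases -> independent (B).
    phases = np.zeros(N) if common else rng.uniform(0.0, 2.0 * np.pi, N)
    h = rng.standard_normal(N)
    nu = np.empty(int(T / dt))                  # population firing rate nu(t)
    for k in range(nu.size):
        I_in = np.sqrt(N) * I0 + I1 * np.sin(2.0 * np.pi * f * k * dt + phases)
        h += (dt / tau) * (-h + J @ phi(h) + I_in)
        nu[k] = phi(h).mean()
    return h, nu

h_c, nu_common = simulate(common=True)          # green curves in the figures
h_i, nu_indep = simulate(common=False)          # violet curves
```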
Fig 2
Fig 2. Largest Lyapunov exponent shows different chaos suppression for common vs independent input.
Largest Lyapunov exponent λ1 as a function of input modulation amplitude I1 for common (green) and independent (violet) input. The zero-crossing of λ1 defines I1crit, the minimum I1 required to suppress chaotic dynamics. With common input, λ1 crosses zero at a much larger I1. Dots with error bars are numerical simulations; dashed lines are largest Lyapunov exponents computed by dynamic mean-field theory (DMFT). Error bars indicate ±2 std across 10 network realizations. Model parameters: N = 5000, g = 2, f = 0.2/τ, I0 = J0 = 1.
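Numerically, λ1 can be estimated with the standard Benettin two-trajectory method: evolve a reference and a slightly perturbed copy of the network, renormalize their separation at every step, and average the logarithmic growth rate. Below is a sketch reusing the couplings and drive from the snippet above; this particular estimator is an illustrative stand-in, not necessarily the paper's procedure.

```python
import numpy as np

def largest_lyapunov(J, phi, I_in_fn, h0, tau=1.0, dt=0.05,
                     T=2000.0, t_skip=200.0, d0=1e-8, seed=1):
    """Benettin estimate of lambda_1: average log growth rate of an
    infinitesimal separation between two trajectories."""
    rng = np.random.default_rng(seed)
    h = h0.copy()
    hp = h0 + d0 * rng.standard_normal(h0.size) / np.sqrt(h0.size)
    log_growth, steps = 0.0, int(T / dt)
    for k in range(steps):
        drive = I_in_fn(k * dt)                 # same input for both copies
        h += (dt / tau) * (-h + J @ phi(h) + drive)
        hp += (dt / tau) * (-hp + J @ phi(hp) + drive)
        d = np.linalg.norm(hp - h)
        if k * dt >= t_skip:                    # discard the transient
            log_growth += np.log(d / d0)
        hp = h + (d0 / d) * (hp - h)            # renormalize separation to d0
    return log_growth / (T - t_skip)
```

Scanning I1 and locating the zero-crossing of this estimate gives I1crit for each input condition.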
Fig 3
Fig 3. Difference in chaos suppression increases with network size, tightness of balance, and near the transition to chaos.
A) Dependence of I1crit on network size N. With common input, $I_1^{\mathrm{crit}} \propto \sqrt{N}$ for large N, but it is constant for independent input. Error bars indicate interquartile range around the median. B) Dependence of I1crit on the ‘tightness of balance’ parameter K, which scales both I0 and J0. Results for large K are the same as in A, but for small K the network is no longer in the balanced regime, and results for common and independent input become similar. Error bars indicate ±2 std. C) Dependence of I1crit on gain parameter g for low input frequency f. Close to gcrit, an arbitrarily small independent input can suppress chaos; this is not the case with common input. The quasi-static approximation (dotted) and DMFT (dashed) results coincide. Error bars indicate ±2 std. Model parameters: I0 = J0 = 1 in A and C; g = 2, f = 0.2/τ in A and B; $I_0 = J_0 = \sqrt{K/N}$ in B; f = 0.01/τ in C; N = 5000 in B and C.
Fig 4
Fig 4. Mechanism of chaos suppression with slowly varying common input.
A) External input $I_i^{\mathrm{in}}(t) = \sqrt{N}\,I_0 + \delta I_i(t)$ (dashed) and recurrent input $I_i^{\mathrm{rec}}(t) = \sum_j J_{ij}\,\phi(h_j(t))$ (solid) for three example neurons. B) Synaptic currents hi for four example neurons. C) Local Lyapunov exponent from network simulation, which reflects the local exponential growth rate between nearby trajectories (solid), and Lyapunov exponent from stationary DMFT (dashed) used in the quasi-static approximation. When $I_1 > \sqrt{N}\,I_0$, the external input periodically becomes negative and silences the recurrent activity (gray bars). During these silent episodes, the network is no longer chaotic and $\lambda_1^{\mathrm{local}} = -1/\tau$. When the input is positive, dynamics remains chaotic and $\lambda_1^{\mathrm{local}} > 0$ on average. Model parameters: N = 5000, g = 2, f = 0.01/τ, I0 = J0 = 1.
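The value $\lambda_1^{\mathrm{local}} = -1/\tau$ during the silent episodes follows in one line (assuming the rate dynamics and threshold-linear φ used in the sketches above): when the input silences the network, $\phi(h_j) = 0$ and $\phi'(h_j) = 0$ for all j, so an infinitesimal perturbation δh obeys

$$\tau\,\delta\dot{h}_i = -\,\delta h_i + \sum_j J_{ij}\,\phi'(h_j)\,\delta h_j = -\,\delta h_i \quad\Rightarrow\quad \delta h_i(t) \propto e^{-t/\tau},$$

i.e., nearby trajectories contract at exactly the leak rate 1/τ.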
Fig 5
Fig 5. Dynamic mean-field theory captures frequency-dependent effects on the suppression of chaos.
A) I1crit as a function of input frequency f (g = 1.6 light color, g = 2 dark color). I1crit has a minimum that is captured by the non-stationary DMFT (dashed green line) but not by the quasi-static approximation (dotted green line), which does not depend on frequency f. At high f, the low-pass filter effect of the leak term attenuates the external input modulation for both cases, thus resulting in a linearly increasing I1crit. B) Dependence of I1crit on the gain parameter g for high input frequency (f = 0.2/τ), showing a monotonic increase. The non-stationary DMFT results are in good agreement with numerical simulations. For comparison, we include the result of the quasi-static approximation (dotted green line), which shows a more gradual dependence on g and applies only at low frequencies (see Fig 3). Error bars indicate ±2 std. Model parameters: N = 5000, g = 2, f = 0.2/τ, I0 = J0 = 1.
Fig 6
Fig 6. Difference in chaos suppression in sparsely-connected E-I network.
λ1 as a function of I1 for common and independent inputs, showing a monotonic decrease with I1 and a larger zero-crossing for common input. This result is qualitatively similar to that obtained in the single-population network with negative mean coupling (Fig 2). Error bars indicate ±2 std; lines are a guide for the eye. Increasing the excitatory efficacy α increases λ1 for both common and independent input (α ∈ {0, 0.5, 0.7}). Model parameters (parameters for constant input defined as in [16]; WE1 and WI1 are the modulation amplitudes of the input to the excitatory and inhibitory populations): NE = NI = 3500, K = 700, g = 1.6, $J_{EE} = g\alpha/\sqrt{K}$, $J_{EI} = -1.11\,g/\sqrt{K}$, $J_{IE} = g\alpha/\sqrt{K}$, $J_{II} = -g/\sqrt{K}$, $W_E = g\alpha\sqrt{K}$, $W_I = 0.44\,g\sqrt{K}$, WE1 = gαI1, WI1 = 0.44gI1, f = 0.2/τ.
Fig 7
Fig 7. Common input impedes learning in balanced networks.
A) Schematic of the training setup. A ‘student network’ (S) is trained to autonomously generate the output $F_{\mathrm{out}}(t) = \sin(2\pi f t)$ by matching its recurrent inputs to those of a driven ‘teacher network’, whose weights are not changed during training. B) λ1 in the teacher network as a function of I1. C) Test error in the student network as a function of I1. Critical input amplitude I1crit is indicated by vertical dashed lines. Consistent with the difference in I1crit, teacher networks driven with common input require a larger I1 to achieve small test errors in the student network. Error bars indicate interquartile range around the median. D) Top: Target output Fout (green) and actual output z (dashed orange) for two input amplitudes I1 ∈ {5, 15}. Bottom: Firing rate ϕ(hi) for two example neurons in the teacher network with common input (green full line) and student network (orange dotted line) for two input amplitudes. E) Scatter plot of test error as a function of λ1 for each network realization in A and B, with both common and independent input. When chaos in the teacher network is not suppressed (λ1 > 0), test error is high. Training is successful (small test error) when targets are strong enough to suppress chaos in the teacher network. Training is terminated when the error falls below $10^{-2}$. Model parameters: N = 500, g = 2, I0 = J0 = 1, ϕ(x) = max(x, 0) in both teacher and student networks; f = 0.2/τ in the teacher network inputs and target Fout.
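A minimal sketch of this teacher-student scheme follows. It is illustrative only: batch ridge regression stands in for whatever training rule the paper uses, and the function and variable names are hypothetical.

```python
import numpy as np

def train_student(J_teacher, phi, I_in, F_out, tau=1.0, dt=0.05, lam=1e-3, seed=0):
    """Drive the teacher with I_in (steps x N), record its rates, then fit
    student recurrent weights J_s so that J_s @ phi(h) reproduces the
    teacher's recurrent-plus-external input (which the autonomous student
    must generate internally), and a linear readout w for the target F_out
    (length steps). Batch ridge regression is an assumed stand-in for the
    paper's training rule."""
    N, steps = J_teacher.shape[0], I_in.shape[0]
    h = np.random.default_rng(seed).standard_normal(N)
    R = np.empty((steps, N))        # phi(h) along the teacher trajectory
    target = np.empty((steps, N))   # input the student's recurrence must supply
    for k in range(steps):
        r = phi(h)
        rec = J_teacher @ r
        R[k], target[k] = r, rec + I_in[k]
        h += (dt / tau) * (-h + rec + I_in[k])
    G = R.T @ R + lam * np.eye(N)   # regularized Gram matrix
    J_student = np.linalg.solve(G, R.T @ target).T
    w = np.linalg.solve(G, R.T @ F_out)
    return J_student, w
```

Consistent with panel E, a fit of this kind can only succeed when the drive is strong enough that the teacher trajectory is non-chaotic and hence reproducible.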
Fig 8
Fig 8. Activity, population firing rate and autocorrelations of balanced networks with common input.
A) Firing rates ϕi(t) = ϕ(hi(t)) of three example units. B) Mean population firing rate ν(t). C) Time-averaged two-time autocorrelation function (Eq 5) as a function of time difference with no external input (I1 = 0). D-F) Same as A-C but for input amplitude $I_1 = 0.8\sqrt{N} \approx 56.5$; activity remains chaotic. G-I) Same as A-C but for stronger input ($I_1 = 10\sqrt{N} \approx 707.1$); activity is entrained by the external input and is no longer chaotic. Dashed lines (middle and right columns) are results of non-stationary DMFT; full lines are the median across 10 network realizations. Model parameters: N = 5000, g = 2, f = 0.05/τ, I0 = J0 = 1.
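The time-averaged two-time autocorrelation in panels C, F, and I can be computed generically as below. This is a sketch: Eq 5 itself is not reproduced in this excerpt, so whether it averages currents $h_i$ or rates $\phi(h_i)$, and how means are subtracted, follows the paper; the snippet uses raw currents.

```python
import numpy as np

def autocorr(H, dt, max_lag):
    """Population- and time-averaged two-time autocorrelation
    C(s) = <h_i(t) h_i(t+s)>_{i,t} from a trajectory H of shape (steps, N).
    A generic stand-in for Eq 5, which is not shown in this excerpt."""
    steps = H.shape[0]
    lags = np.arange(int(max_lag / dt))
    C = np.array([np.mean(H[:steps - s] * H[s:]) for s in lags])
    return lags * dt, C
```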
Fig 9
Fig 9. Activity, population firing rate and autocorrelations of balanced networks with independent input.
A) Firing rates ϕi(t) = ϕ(hi(t)) of three example units. B) Mean population firing rate ν(t). C) Autocorrelation function with no external input (I1 = 0). D-F) Same as A-C but for input amplitude I1 = 0.8; activity remains chaotic. G-I) Same as A-C but for stronger input (I1 = 10); activity is fully controlled by the external input and is no longer chaotic. Dashed lines (middle and right columns) are results of stationary DMFT; full lines are the median across 10 network realizations. Model parameters: N = 5000, g = 2, f = 0.05/τ, I0 = J0 = 1.
Fig 10
Fig 10. No qualitative difference in chaos suppression by common vs independent input in canonical random networks.
A) I1crit as a function of input frequency f ($g = \sqrt{2}$ light color, g = 2 dark color). I1crit has a minimum for both common and independent input. The independent input case is identical to the scenario studied in [3]. At high f, the low-pass filter effect of the leak term attenuates the external input for both cases, thus resulting in a linearly increasing I1crit. B) Dependence of I1crit on the gain parameter g for both low input frequency (f = 0.01/τ, dark color) and high input frequency (f = 0.2/τ, light color), showing a monotonic increase. Error bars indicate ±2 std. Model parameters: N = 5000, $g \in \{\sqrt{2}, 2\}$, f ∈ {0.01, 0.2}/τ, I0 = J0 = 0.

References

    1. Sompolinsky H, Crisanti A, Sommers HJ. Chaos in Random Neural Networks. Physical Review Letters. 1988;61(3):259–262. doi: 10.1103/PhysRevLett.61.259
    2. Molgedey L, Schuchhardt J, Schuster HG. Suppressing chaos in neural networks by noise. Physical Review Letters. 1992;69(26):3717–3719. doi: 10.1103/PhysRevLett.69.3717
    3. Rajan K, Abbott LF, Sompolinsky H. Stimulus-dependent suppression of chaos in recurrent neural networks. Physical Review E. 2010;82(1):011903. doi: 10.1103/PhysRevE.82.011903
    4. Schuecker J, Goedeke S, Helias M. Optimal Sequence Memory in Driven Random Networks. Physical Review X. 2018;8(4):041029. doi: 10.1103/PhysRevX.8.041029
    5. Sussillo D, Abbott LF. Generating Coherent Patterns of Activity from Chaotic Neural Networks. Neuron. 2009;63(4):544–557. doi: 10.1016/j.neuron.2009.07.018
