Phys Rev E. 2016 Feb;93(2):022302.
doi: 10.1103/PhysRevE.93.022302. Epub 2016 Feb 5.

Low-dimensional dynamics of structured random networks


Johnatan Aljadeff et al. Phys Rev E. 2016 Feb.

Abstract

Using a generalized random recurrent neural network model, and by extending our recently developed mean-field approach [J. Aljadeff, M. Stern, and T. Sharpee, Phys. Rev. Lett. 114, 088101 (2015)], we study the relationship between the network connectivity structure and its low-dimensional dynamics. Each connection in the network is a random number with mean 0 and variance that depends on pre- and postsynaptic neurons through a sufficiently smooth function g of their identities. We find that these networks undergo a phase transition from a silent to a chaotic state at a critical point we derive as a function of g. Above the critical point, although unit activation levels are chaotic, their autocorrelation functions are restricted to a low-dimensional subspace. This provides a direct link between the network's structure and some of its functional characteristics. We discuss example applications of the general results to neuroscience where we derive the support of the spectrum of connectivity matrices with heterogeneous and possibly correlated degree distributions, and to ecology where we study the stability of the cascade model for food web structure.
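The silent-to-chaotic transition described in the abstract can be illustrated with a minimal toy simulation. The sketch below uses the simplest special case, a uniform gain g(i, j) = const, so that J_ij has mean 0 and variance g²/N; the network size, Euler integration scheme, and run length are illustrative choices, not the paper's.

```python
import numpy as np

def simulate(N=200, g=0.5, T=50.0, dt=0.1, seed=0):
    """Euler-integrate dx/dt = -x + J @ tanh(x) for a random network
    whose weights have mean 0 and variance g^2 / N (uniform gain,
    a special case of the structured model)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 1.0, size=N)          # random initial condition
    for _ in range(int(T / dt)):
        x = x + dt * (-x + J @ np.tanh(x))
    return x

# Below the critical point (here g < 1) the network decays to the
# silent fixed point x = 0; above it, activity is sustained and chaotic.
quiet = simulate(g=0.5)
loud = simulate(g=2.5)
print(np.sqrt(np.mean(quiet**2)), np.sqrt(np.mean(loud**2)))
```

For the structured model, the same dynamics apply with J_ij drawn with a neuron-pair-dependent variance, and the critical point becomes a functional of g rather than a single number.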


Figures

FIG. 1
Eigenspaces of two example networks: one with block-structured connectivity (top) and another with continuous gain modulation (bottom). (a) The synaptic gain matrix g_ij. (b) The spectrum of the random connectivity matrix J in the complex plane. The spectrum is supported on a disk of radius r = √Λ₁, indicated in red. (c) The square roots of the largest eigenvalues of G_N^(2). When these are greater than 1, the corresponding eigenvectors [shown in (d)] are active autocorrelation modes. For the continuous function we chose the circulant parametrization (see Sec. IV A) with g0 = 0.3, g1 = 3.0, and γ = 2.0. For the block-structured connectivity, g was chosen such that the first five eigenvalues match exactly those of the continuous network.
FIG. 2
Low-dimensional structure of network dynamics. Traces of the firing rates ϕ[x_i(t)] (a) and autocorrelations C_i(τ) (b) of eight example neurons chosen at random from the network with continuous gain modulation (shown in the bottom row of Fig. 1). (c) The sum of squared projections of the vector C_i(τ) on the vectors spanning U_G^(2) (the active modes, solid lines) or its orthogonal complement (the inactive modes, dashed lines). The dimension of the subspace U_G^(2) is K★ = 1 for the network with g = const and K★ = 3 for the block and continuous cases (orange and red, respectively), much smaller than N − K★ ≈ N, the dimension of the orthogonal complement. (d) Our analytically derived subspace accounts for almost 100% of the variance in the autocorrelation vector for τ ≲ 10 (in units of the synaptic time constant). (e) Reducing the dimensionality of the dynamics via principal component analysis on ϕ(x) leads to vectors (inset) that account for a much smaller portion of the variance (when using the same dimension K★ for the subspace) and lack structure that could be related to the connectivity. (f) Summary data from 50 simulated networks per parameter set (N, structure type) at τ = 0. As N grows, the leak into the orthogonal complement diminishes when one reduces the space of the C_i(τ) data, while the fraction of variance explained becomes smaller when using PCA on the ϕ[x_i(t)] data, a signature of the extensive dimension of the chaotic attractor.
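Panels (c)-(e) quantify how much of the autocorrelation data lies in a K★-dimensional subspace. The bookkeeping behind "sum of squared projections" can be sketched on a toy low-rank stand-in for the matrix whose rows are the vectors C_i(τ); the number of modes and the noise level here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy stand-in for the N x T matrix of autocorrelation vectors C_i(tau):
# K_star latent modes plus a small noise floor.
N, T, K_star = 300, 40, 3
modes = rng.normal(size=(K_star, T))
C = rng.normal(size=(N, K_star)) @ modes + 0.1 * rng.normal(size=(N, T))

def frac_explained(C, U):
    """Fraction of the total squared norm of the rows of C captured by
    projecting each row onto the orthonormal columns of U."""
    P = C @ U                       # coordinates in the subspace
    return (P**2).sum() / (C**2).sum()

# Top-K right singular vectors = the K-dimensional PCA subspace of the
# rows of C (optimal for this criterion, by Eckart-Young).
U_pca = np.linalg.svd(C, full_matrices=False)[2][:K_star].T
print(frac_explained(C, U_pca))
```

In the paper the candidate subspace is derived analytically from the connectivity rather than fitted from data, which is what makes the comparison with PCA in panel (e) informative.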
FIG. 3
Results for a toroidal network. (a) A grid strategy with K = √N for tiling the [0, 1] × [0, 1] torus with N neurons (left), and the resulting deterministic gain matrix with elements g_ij for three values of N as defined in Eq. (35) (right). Unlike the ring network, here g depends on N, and its derivative is unbounded, so as N increases the gain function "folds." The parameters of the connectivity matrix are g0 = 0.7, g1 = 0.8. (b) The 25 nonzero eigenvalues of G_N^(2) for N = 1600, and the eigenvectors corresponding to eigenvalues greater than 1, plotted on a torus with coordinates (θ_i^1, θ_i^2). (c) The sum of squared projections of the vector C_i(τ) on the vectors spanning U_G^(2) (the active modes, red line) or its orthogonal complement (the inactive modes, black line). Shades indicate the standard deviation computed from 50 realizations. (d) Comparison of the variance explained at τ = 0 by our predicted subspace (solid line) and by performing PCA on ϕ(x) (dashed line). Error bars represent 95% confidence intervals. Inset: the subspace we derived accounts for a large portion of the variance for time lags τ ≲ 10 (in units of the synaptic time constant).
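Since Eq. (35) is not reproduced on this page, the sketch below only illustrates the tiling bookkeeping: N = K² neurons placed on a √N × √N grid of the unit torus, with distances computed with periodic wrap-around. The gain function here (a periodic Gaussian bump) is a hypothetical stand-in, not the paper's g:

```python
import numpy as np

def torus_gain(K, g0=0.7, g1=0.8, width=0.2):
    """Gain matrix for N = K*K neurons tiled on the unit torus.
    Stand-in gain g_ij = g0 + g1 * exp(-d_ij^2 / width^2), with d_ij
    the periodic (torus) distance between grid points; the paper's
    Eq. (35) defines a different, N-dependent g."""
    coords = np.stack(np.meshgrid(np.arange(K), np.arange(K)),
                      -1).reshape(-1, 2) / K          # N x 2 positions
    diff = coords[:, None, :] - coords[None, :, :]
    diff = np.minimum(np.abs(diff), 1 - np.abs(diff)) # wrap around
    d2 = (diff**2).sum(-1)
    return g0 + g1 * np.exp(-d2 / width**2)

g = torus_gain(8)                  # N = 64 neurons
print(g.shape)
```

Any translation-invariant gain on the torus yields a doubly circulant structure, which is why the eigenvectors in panel (b) look like Fourier modes on the torus.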
FIG. 4
Spectrum of connectivity matrices with a heterogeneous, correlated joint degree distribution. The network parameters were chosen to be κ = 0.7, θ = 28.57, N_E = 1000, N_I = 250, p0 = 0.05, W0 = 5, where κ and θ are the shape and scale parameters, respectively, of the distribution from which the in- and out-degree sequences are randomly drawn. The average correlation ρ between the in- and out-degree sequences was varied between 0 and 1. For the values ρ = 0.2 (left) and ρ = 0.8 (right) we drew 25 degree sequences and, based on them, drew the connectivity matrix according to the prescription outlined in Sec. V. The eigenvalues of each matrix were computed numerically and are shown in black. For each value of ρ we computed the average functions T, S, etc., and the roots of the characteristic polynomials A(Λ) and B(λ) (see Appendixes B and C for derivations). The predictions for the support of the bulk (solid red line) and the outliers (orange points and dotted line) are in agreement with the numerical calculation. Inset: the positive outlier that exits the disk to the right, shown as a function of ρ.
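One ingredient of this figure is drawing in- and out-degree sequences that are each gamma distributed (shape κ, scale θ) with a prescribed correlation ρ. The sketch below uses the additivity of the gamma shape parameter (a shared component plus independent remainders) rather than the paper's prescription from Sec. V:

```python
import numpy as np

def correlated_degrees(n, kappa=0.7, theta=28.57, rho=0.8, seed=2):
    """Draw two sequences, each Gamma(kappa, theta), with correlation
    rho, by splitting the shape parameter into a shared Gamma(rho*kappa)
    part and independent Gamma((1-rho)*kappa) parts.  A sketch only;
    valid for 0 <= rho < 1."""
    rng = np.random.default_rng(seed)
    shared = rng.gamma(rho * kappa, theta, size=n)
    d_in = shared + rng.gamma((1 - rho) * kappa, theta, size=n)
    d_out = shared + rng.gamma((1 - rho) * kappa, theta, size=n)
    return d_in, d_out

d_in, d_out = correlated_degrees(100000)
print(np.corrcoef(d_in, d_out)[0, 1])   # close to rho = 0.8
```

Given such sequences, a connectivity matrix can be drawn with connection probabilities proportional to the product of the out-degree of the presynaptic neuron and the in-degree of the postsynaptic one, and its eigenvalues computed numerically as in the figure.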

