High-Degree Neurons Feed Cortical Computations

Nicholas M Timme et al. PLoS Comput Biol. 2016 May 9;12(5):e1004858. doi: 10.1371/journal.pcbi.1004858. eCollection 2016 May.

Abstract

Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Recent theoretical developments also make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This combination of preparation and recording method yielded large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We used transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network.
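To make the central measure concrete: pairwise transfer entropy on binned binary spike trains can be estimated with plain frequency counts. The sketch below is illustrative only (our own naming, toy data), not the analysis code used in the study, which applied TE at delays of 1.6–6.4 ms and 3.5–14 ms:

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(source, target, delay=1):
    """TE(source -> target) in bits for binary spike trains:
    TE = sum p(i_f, i_p, j_p) * log2[p(i_f | i_p, j_p) / p(i_f | i_p)],
    where i_f is the target's next state and i_p, j_p are the target's
    and source's states one delay earlier."""
    triples = Counter((target[t], target[t - delay], source[t - delay])
                      for t in range(delay, len(target)))
    n = sum(triples.values())
    pair_ip_jp, pair_if_ip, single_ip = Counter(), Counter(), Counter()
    for (i_f, i_p, j_p), c in triples.items():
        pair_ip_jp[(i_p, j_p)] += c
        pair_if_ip[(i_f, i_p)] += c
        single_ip[i_p] += c
    te = 0.0
    for (i_f, i_p, j_p), c in triples.items():
        p_if_given_both = c / pair_ip_jp[(i_p, j_p)]
        p_if_given_ip = pair_if_ip[(i_f, i_p)] / single_ip[i_p]
        te += (c / n) * log2(p_if_given_both / p_if_given_ip)
    return te

# Toy check: a source that perfectly drives the target one bin later
# carries ~1 bit; an unrelated source carries ~0 bits.
rng = random.Random(0)
src = [rng.randint(0, 1) for _ in range(5000)]
driven = [0] + src[:-1]    # target copies the source at delay 1
noise = [rng.randint(0, 1) for _ in range(5000)]
```

Real estimators also need significance testing against surrogate data and bias correction, which this sketch omits.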


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. Analysis diagram and partial information decomposition.
(A) We used a custom-built 512-electrode array to record spiking activity from cortico-hippocampal organotypic cultures. We then used transfer entropy to detect effective connectivity among the recorded neurons. Finally, we studied the topology and the two-input computations in these networks. We also examined bounds on higher-order computation. (B) To study two-input computations, we used multivariate transfer entropy to deconstruct traditional transfer entropy measures into synergy, redundancy, and unique information terms [4]. Specifically, two-input computations were measured as the synergistic information computed for a system of two neurons sending significant amounts of information (as measured by transfer entropy) to a third neuron (see Materials and Methods – Multivariate Transfer Entropy and Computation).
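The decomposition into synergy, redundancy, and unique information follows Williams and Beer [4]. A minimal two-source sketch using their I_min redundancy measure (function names ours, plain frequency estimates, not the paper's code) might look like:

```python
from collections import Counter
from math import log2

def pid_two_sources(samples):
    """Williams-Beer partial information decomposition of I(T; S1, S2) into
    redundancy, unique information, and synergy using the I_min redundancy
    measure. `samples` is a list of (s1, s2, t) observations."""
    n = len(samples)
    p_t = Counter(t for _, _, t in samples)

    def mi(xs, ts):
        # mutual information I(X; T) in bits
        pxt, px, pt = Counter(zip(xs, ts)), Counter(xs), Counter(ts)
        return sum(c / n * log2(c * n / (px[x] * pt[t]))
                   for (x, t), c in pxt.items())

    def i_spec(t, idx):
        # specific information I(T = t; S_idx) = sum_s p(s|t) log2[p(t|s)/p(t)]
        pst = Counter((row[idx], row[2]) for row in samples)
        ps = Counter(row[idx] for row in samples)
        return sum((c / p_t[t]) * log2(c * n / (ps[sv] * p_t[t]))
                   for (sv, tv), c in pst.items() if tv == t)

    s1, s2, ts = ([row[i] for row in samples] for i in range(3))
    # I_min redundancy: expected minimum specific information over sources
    redundancy = sum(c / n * min(i_spec(t, 0), i_spec(t, 1))
                     for t, c in p_t.items())
    mi1, mi2 = mi(s1, ts), mi(s2, ts)
    synergy = mi(list(zip(s1, s2)), ts) - mi1 - mi2 + redundancy
    return {"redundancy": redundancy, "unique1": mi1 - redundancy,
            "unique2": mi2 - redundancy, "synergy": synergy}
```

For an XOR target the decomposition is pure synergy (1 bit); for a target copied by both sources it is pure redundancy (1 bit).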
Fig 2. Example multivariate TE interactions.
The PID multivariate TE is able to dissect different types of computations performed by one receiving neuron (I) with two transmitting neurons (J and K). Unique information is the portion of the information provided by one transmitter alone, redundancy is the portion of information provided by both transmitters, and synergy is the portion provided only by the combined input of both transmitters. Note that mutual information MI(JP; IF) does not detect common drive from the history of the receiving neuron (Hidden Self Interaction Example, red X). Note that the interaction information II(JP; KP; IF) is not able to detect simultaneous synergy and redundancy (Synergistic and Redundant Interaction Example, red X) and it is not able to detect unique information (Single and Redundant Interaction Example).
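The failure mode flagged in the caption can be made concrete: interaction information equals synergy minus redundancy, so a toy system carrying one synergistic bit and one redundant bit yields II = 0, masking both. A small sketch (our construction, not from the paper):

```python
from collections import Counter
from itertools import product
from math import log2

def mi(xs, ts):
    # mutual information I(X; T) in bits from paired samples
    n = len(xs)
    pxt, px, pt = Counter(zip(xs, ts)), Counter(xs), Counter(ts)
    return sum(c / n * log2(c * n / (px[x] * pt[t]))
               for (x, t), c in pxt.items())

# Toy system with 1 bit of synergy (a XOR b needs both sources) and
# 1 bit of redundancy (c is carried by both sources):
samples = [((a, c), (b, c), (a ^ b, c))
           for a, b, c in product((0, 1), repeat=3)]
s1 = [r[0] for r in samples]
s2 = [r[1] for r in samples]
tgt = [r[2] for r in samples]

# II(S1; S2; T) = I(T; S1, S2) - I(T; S1) - I(T; S2) = synergy - redundancy
ii = mi(list(zip(s1, s2)), tgt) - mi(s1, tgt) - mi(s2, tgt)
# ii = 2 - 1 - 1 = 0: the synergistic and redundant bits cancel exactly
```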
Fig 3. Computations were performed by neurons using information about the spiking state of other neurons.
Many previous studies have examined the ability of neurons to compute information about stimuli via functional connections from those stimuli to cortical neurons (green arrows). In our analysis, we examined the computations performed by neurons about the spiking states of functionally connected neurons (blue arrows). Also, note that our experimental system is ex vivo and we analyzed spontaneous activity.
Fig 4. Transfer entropy and degree distributions.
(A and C) Transfer entropy distributions for raw TE and normalized TE across the two time scales studied in this analysis (interactions with delays of 1.6–6.4 ms (A) and 3.5–14 ms (C)). Note that all distributions are roughly log-normal (see Eq 21; nonlinear regression performed in Matlab). Solid line is the average of all recordings; shaded region represents ± one standard deviation across recordings. (B and D) In-, out-, and total-degree distributions from the real data, and total-degree distributions from random networks with matching numbers of neurons, connections, and sampling statistics. Because the total-degree distribution from the real data extends far beyond the distribution from the random networks, the real data are heavy-tailed. We did not assess whether the degree distributions were scale-free due to issues surrounding sub-sampling [93]. Also, note that the in-degree distribution had a shorter tail than the out-degree distribution, indicating that there were more high out-degree neurons than high in-degree neurons. Solid line is the average of all recordings; shaded region represents ± one standard deviation across recordings.
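The random-network null in panels B and D can be sketched as follows. This is a simplified stand-in (hypothetical parameters; the paper's null networks additionally matched the recordings' sampling statistics):

```python
import random
from collections import Counter

def null_degree_distribution(n_neurons, n_connections, n_nets=100, seed=0):
    """Total-degree distribution pooled over random directed networks that
    match a recording's neuron and connection counts. Returns a Counter
    mapping total degree -> number of neurons with that degree."""
    rng = random.Random(seed)
    pooled = Counter()
    for _ in range(n_nets):
        degree = Counter()
        edges = set()
        # draw directed edges uniformly at random, no self-loops, no repeats
        while len(edges) < n_connections:
            i, j = rng.randrange(n_neurons), rng.randrange(n_neurons)
            if i != j and (i, j) not in edges:
                edges.add((i, j))
                degree[i] += 1  # out-degree contribution
                degree[j] += 1  # in-degree contribution
        pooled.update(Counter(degree.values()))
    return pooled
```

A real degree distribution whose tail extends well past the maximum degree seen in such nulls is heavy-tailed in the sense used in the caption.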
Fig 5. Degree-dependent computation.
(A and B) Histograms across recordings of correlations between synergy (computation) and receiver in-degree (A) and between synergy (computation) and transmitter out-degree (B) (Ndata = 40). Also shown is the skew towards positive or negative correlation values, along with the likelihood of observing a skew of that magnitude or larger under the assumption that positive and negative correlation values are equally likely (binomial cdf with ppos = pneg = 0.5). Nearly all correlations were likely to be significant given the proximity of the null-model correlations (no correlation) to zero (the null model consisted of randomized degree/computation pairings) (Nnull = 400). Histogram bin size was optimized using methods established in [95]. (C and D) Distributions of synergy values (computation) vs. degree, averaged across all recordings. These plots show effects similar to (A and B). Solid line represents the median value; shaded region represents 1st quartile to 3rd quartile. Only degrees with 20 or more neuron groups are shown, though lower degrees, which had more neuron groupings, had a greater influence on the correlation calculations in (A and B). Also, note that the in-degree distribution had a shorter tail than the out-degree distribution (Fig 4B and 4D), so it was not possible to extend the computation plot to high in-degrees. (E and F) Explanatory computation performed (E) and contribution to computation (F) networks. In (E), notice that all neurons compute the same amount of information, but in (F), neurons with high out-degrees contribute more information to computations. Dot size represents the median values from the matching degree in (C). This shows that computation was uncorrelated with the in-degree of the receiver neuron, but correlated with the out-degree of the transmitter neuron.
Fig 6. Higher-order computations did not dominate high in-degree neurons.
Distributions of average information gains caused by adding more inputs for each neuron (Igain(n)) across all recordings (violin plots). Red bars and diamonds indicate medians. The nth order synergy (computation) is less than or equal to the information gain. The median values were fit using exponential decay (green line). Negative exponents indicate decreasing information gains caused by adding more inputs. Both fits produced small negative exponent values, one of which had an error overlapping zero. (Fit: nonlinear regression performed in Matlab, Error: 95% confidence range in fit exponent value). These results indicate that higher-order computation tended to remain constant or decrease with added inputs.
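The exponential-decay fit of the median information gains can be approximated with a log-linear least-squares fit. This is a simpler stand-in for the paper's Matlab nonlinear regression, shown only to make the fitted form explicit:

```python
from math import exp, log

def fit_exp_decay(ns, gains):
    """Fit gains ~ A * exp(b * n) by ordinary least squares on log(gains).
    A negative exponent b indicates that the information gained by adding
    another input shrinks as the number of inputs n grows."""
    ys = [log(g) for g in gains]
    mx = sum(ns) / len(ns)
    my = sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(ns, ys))
         / sum((x - mx) ** 2 for x in ns))
    return exp(my - b * mx), b  # (A, b)
```

Note that log-linear and true nonlinear least squares weight the data differently, so the two methods agree exactly only on noiseless data.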
Fig 7. A degree-modified Hebbian rule qualitatively matched results from the original data and balanced reinforcement of network-wide activity with neuron-to-neuron communication.
(A) Structure of the simple feedforward network model. Note that neurons with high indices were more strongly correlated with the binary signal b(t). (B) Connectivity probability diagrams before and after network rewiring. Probabilities were averaged across 100 models. Note that the Hebbian rule pooled all the connections between neurons with strong correlations to the binary signal, while the degree-modified Hebbian rule preserved many connections from input-layer neurons that were not highly correlated with the binary signal. (C) Degree vs. synergy correlation values for the models and the real biological data. Note that the modified Hebbian rules qualitatively reproduced the correlation pattern seen in the real data. (Light dots represent individual models or recordings, dark dots represent mean values, and bars represent standard deviations.) (D) Distributions of average mutual information between connected neurons (unconditioned (top) and conditioned on the binary signal (bottom)) across models. Note that the degree-modified Hebbian model showed higher mutual information after the effects of the common binary signal were removed. (Mean value; bars represent standard deviation; Mann-Whitney rank-sum test (three dots: p < 0.001); False Discovery Rate control [–98].) (E) Though all rewiring methods increased the mutual information between connected pairs (which reinforces the common network activity defined by the binary signal), the degree-modified Hebbian rule also increased mutual information between connected neurons (neuron-to-neuron communication) independent of common network activity.

References

    1. Friston KJ (2011) Functional and effective connectivity: a review. Brain Connectivity 1.
    2. Bullmore E, Sporns O (2009) Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience 10.
    3. Williams PL, Beer RD (2010) Nonnegative decomposition of multivariate information. arXiv:1004.2515.
    4. Williams PL, Beer RD (2011) Generalized measures of information transfer. arXiv:1102.1507.
    5. Wibral M, Lizier JT, Priesemann V (2014) Bits from brains for biologically inspired computing. Frontiers in Robotics and AI 2: 1–25.
