Review. 2022 Jul 5;24(7):930. doi: 10.3390/e24070930.

Revealing the Dynamics of Neural Information Processing with Multivariate Information Decomposition

Ehren L Newman et al. Entropy (Basel).

Abstract

The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, as well as discussing recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the "higher-order" information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure-function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. Finally, we briefly discuss promising future directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.

Keywords: computation; cortical circuits; entropy; higher-order interactions; information theory; neural recording; neuroscience.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
The total information that the activity of two source neurons X1 and X2 carries about the activity of a target neuron Y consists of multiple parts. This total information, I(X1,X2;Y), is represented by the outermost oval. Contained within this total is the information that each source independently carries about Y, represented by the red circle for I(X1;Y) and the blue circle for I(X2;Y). These sources can carry redundant information, represented by the overlapping striped section labeled Red(X1,X2;Y). The non-redundant information each source neuron accounts for is its unique information, Unq(X1;Y) and Unq(X2;Y). Finally, and most relevantly for the study of information processing, the joint state of X1 and X2 can account for the activity of Y to a degree not accounted for by either source independently; this is the synergistic information that X1 and X2 carry about Y, i.e., Syn(X1,X2;Y). The purple region, i.e., the part of the total information I(X1,X2;Y) not covered by the I(X1;Y) and I(X2;Y) circles, represents Syn(X1,X2;Y).
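The decomposition sketched in this caption can be made concrete with the original Williams–Beer redundancy measure, I_min (the same measure underlying the lattices of Figure 2). Below is a minimal, illustrative Python sketch, not the authors' analysis code; the function name and structure are our own. It computes the four two-source PID atoms for a small discrete joint distribution, using the XOR gate as the canonical purely synergistic example.

```python
import math
from collections import defaultdict

def pid_williams_beer(joint):
    """Two-source PID using the Williams & Beer I_min redundancy measure.

    joint: dict mapping (x1, x2, y) -> probability (must sum to 1).
    Returns the four atoms (in bits): redundancy, unique info of each
    source, and synergy.
    """
    # Accumulate the marginal distributions we need.
    p_y, p_x1, p_x2 = defaultdict(float), defaultdict(float), defaultdict(float)
    p_x1y, p_x2y, p_x12 = defaultdict(float), defaultdict(float), defaultdict(float)
    for (x1, x2, y), p in joint.items():
        p_y[y] += p
        p_x1[x1] += p; p_x2[x2] += p
        p_x1y[(x1, y)] += p; p_x2y[(x2, y)] += p
        p_x12[(x1, x2)] += p

    def mi(p_xy, p_x):
        # Shannon mutual information I(X;Y) in bits.
        return sum(p * math.log2(p / (p_x[x] * p_y[y]))
                   for (x, y), p in p_xy.items() if p > 0)

    def specific_info(p_xy, p_x, y):
        # I(X; Y=y) = sum_x p(x|y) log2( p(y|x) / p(y) )
        total = 0.0
        for (x, yy), p in p_xy.items():
            if yy == y and p > 0:
                total += (p / p_y[y]) * math.log2((p / p_x[x]) / p_y[y])
        return total

    # Redundancy: expected minimum specific information across sources.
    red = sum(p_y[y] * min(specific_info(p_x1y, p_x1, y),
                           specific_info(p_x2y, p_x2, y))
              for y in p_y)

    i1, i2 = mi(p_x1y, p_x1), mi(p_x2y, p_x2)
    # Joint mutual information I(X1,X2;Y), treating (x1, x2) as one source.
    i12 = sum(p * math.log2(p / (p_x12[(x1, x2)] * p_y[y]))
              for (x1, x2, y), p in joint.items() if p > 0)

    unq1, unq2 = i1 - red, i2 - red
    syn = i12 - red - unq1 - unq2
    return {'redundancy': red, 'unique_1': unq1, 'unique_2': unq2, 'synergy': syn}

# XOR gate with uniform inputs: neither source alone tells you anything
# about Y, yet together they determine it completely.
xor = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
print(pid_williams_beer(xor))  # synergy = 1 bit; all other atoms = 0
```

For the AND gate, the same sketch yields the well-known I_min values of roughly 0.311 bits of redundancy and 0.5 bits of synergy, with no unique information.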
Figure 2
Partial information lattices. On the left is the lattice for two predictor variables, and on the right is the lattice for three predictor variables. Each lattice is constructed and annotated following the notation in [1]. Lattice vertices are partial information atoms, i.e., the unique modes of information-sharing that comprise the overall joint mutual information. Information atoms are denoted by index only: for example, {1}{2} is the information redundantly disclosed by X1 or X2, {1}{23} is the information disclosed by X1 or (X2 and X3), etc. Lattice edges indicate which atoms subsume other atoms. Atoms connected to and below other atoms consist of components and/or subsets of the higher atoms: for example, {1}{2} lies below both {1}{23} and {2}{13}, since information disclosed by {1}{2} would also be visible to {1}{23} and {2}{13} if we did not use the Möbius inversion.
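For the two-predictor lattice, the four atoms sum to the classical Shannon quantities, and each atom is recovered from the redundancy function by Möbius inversion over the lattice (this is the standard Williams–Beer construction, stated here for orientation):

\begin{align}
I(X_1, X_2; Y) &= \{1\}\{2\} + \{1\} + \{2\} + \{12\} \\
I(X_1; Y) &= \{1\}\{2\} + \{1\} \\
I(X_2; Y) &= \{1\}\{2\} + \{2\} \\
\Pi(\alpha) &= I_{\cap}(\alpha) - \sum_{\beta \prec \alpha} \Pi(\beta)
\end{align}

Here $\Pi(\alpha)$ is the partial information in atom $\alpha$, $I_{\cap}(\alpha)$ is the redundancy function evaluated at $\alpha$, and the sum runs over all atoms strictly below $\alpha$ in the lattice.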
Figure 3
Example of how neuronal activity recordings can be subjected to PID-based study. Rapid extraction and sectioning of mouse brains produced 400 μm sagittal slices of cortex. Incubation in nutrient media allowed the slices to grow in culture, restoring organotypic connectivity patterns [47]. Placing each culture on a high-density 512-channel microelectrode array yielded recordings that permit spike sorting via PCA waveform analysis, attributing observed spikes to individual neurons. The resulting spike rasters reflect the millisecond-precision activation patterns of hundreds of neurons. Analyzing the effective connectivity among neurons using transfer entropy enables extraction of the full effective network upon which the spiking dynamics occurred. Computational triads, consisting of two source neurons connecting to a common third (target) neuron, can then be identified, and PID applied separately to each triad. Variation across triads can be used to analyze how synergy (or redundancy) varies as a function of network properties, such as the boundaries of a rich club or the number of feedback and recurrent connections. Figure adapted from [2].
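The effective-connectivity step in this pipeline rests on transfer entropy, TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log₂[p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t)]. The following is a minimal, illustrative Python sketch only: a plug-in estimator with history length 1 and no source–target delay, whereas the pipelines cited here use delayed, millisecond-binned estimates with surrogate-based significance testing.

```python
import math
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in transfer entropy TE(source -> target) in bits, history length 1.

    source, target: equal-length binary spike trains (sequences of 0/1).
    A minimal sketch; real analyses add delays, longer histories, and
    significance testing against jittered surrogates.
    """
    n = len(target) - 1
    # Joint counts of (next target state, current target state, current source state).
    triples = Counter((target[t + 1], target[t], source[t]) for t in range(n))
    pairs_ty = Counter((target[t + 1], target[t]) for t in range(n))
    pairs_yx = Counter((target[t], source[t]) for t in range(n))
    hist = Counter(target[t] for t in range(n))

    te = 0.0
    for (y_next, y, x), c in triples.items():
        p = c / n
        p_cond_full = c / pairs_yx[(y, x)]             # p(y_next | y, x)
        p_cond_self = pairs_ty[(y_next, y)] / hist[y]  # p(y_next | y)
        te += p * math.log2(p_cond_full / p_cond_self)
    return te

# Toy example: the target copies the source with a one-step lag,
# so the source strongly predicts the target beyond the target's own past.
src = [1, 0, 1, 1, 0, 0, 1, 0] * 50
tgt = [0] + src[:-1]
print(transfer_entropy(src, tgt))  # strongly positive (~0.8 bits here)
```

In the pipeline above, such pairwise TE values (thresholded against surrogates) define the directed edges of the effective network within which computational triads are identified.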
Figure 4
Empirical study of circuit dynamics using PID can reveal covariates of synergistic integration. (A) Synergistic integration is robustly positively correlated with the strength of feedforward connections in a triad. Left: A single representative network with 3000+ computational triads shows that feedforward connection strength (i.e., triad TE) was positively related to the synergy for each triad. Right: This positive relationship was reliably observed over 75 different effective networks analyzed from 25 different cultures. Images from [2]. (B) The mean synergy across all triads localized to rich clubs was significantly higher than the synergy in triads not localized to a rich club. The inset indicates p < 1×10⁻⁹ for the difference between conditions. Image from [2]. (C) Sorting computational triads based on the number of connections from the target neuron back to the source neurons (feedback connections) and connections between the source neurons (recurrent connections) revealed more synergy for triads with more recurrent connections. Image from [4]. (D) Feedforward and recurrent connections were both positively related to synergistic integration, while feedback connections were not reliably related to synergistic integration. Image from [4]. (E) Synergy (normalized as the proportion of the receiver entropy, or pHrec) was non-monotonically related to the similarity of the spiking of the source neurons (normalized as the proportion of the maximum possible mutual information, or pMImax) when analyzed across a wide range of timescales. Each panel shows the results for a single timescale. Note that the x-axis is logarithmically scaled and its range varies across panels. The timescale varies from left to right across panels, ranging from short (e.g., 3 ms, 5 ms, and 11 ms) to long (e.g., 485 ms, 1044 ms, and 2250 ms), as indicated above each panel. At short timescales, the maximum similarity in spiking between source neurons was low, and both synergy (green solid line) and redundancy (purple dashed line) were positively related to the similarity of the spiking between source neurons. At long timescales, the total similarity was high, and only redundancy was positively related to the similarity of the source neurons. Synergy was maximized when the source neurons were intermediately similar, at pMImax = 0.07, as marked with a vertical dashed line in each panel. Images from [3].

References

    1. Williams P.L., Beer R.D. Nonnegative Decomposition of Multivariate Information. arXiv. 2010. arXiv:1004.2515.
    2. Faber S.P., Timme N.M., Beggs J.M., Newman E.L. Computation is concentrated in rich clubs of local cortical networks. Netw. Neurosci. 2018;3:384–404. doi: 10.1162/netn_a_00069.
    3. Sherrill S.P., Timme N.M., Beggs J.M., Newman E.L. Correlated activity favors synergistic processing in local cortical networks in vitro at synaptically relevant timescales. Netw. Neurosci. 2020;4:678–697. doi: 10.1162/netn_a_00141.
    4. Sherrill S.P., Timme N.M., Beggs J.M., Newman E.L. Partial information decomposition reveals that synergistic neural integration is greater downstream of recurrent information flow in organotypic cortical cultures. PLoS Comput. Biol. 2021;17:e1009196. doi: 10.1371/journal.pcbi.1009196.
    5. Varley T.F., Sporns O., Schaffelhofer S., Scherberger H., Dann B. Information processing dynamics in neural networks of macaque cerebral cortex reflect cognitive state and behavior. bioRxiv. 2022. doi: 10.1101/2021.09.05.458983.
