PLoS Comput Biol. 2022 Nov 16;18(11):e1010639.
doi: 10.1371/journal.pcbi.1010639. eCollection 2022 Nov.

Brain connectivity meets reservoir computing


Fabrizio Damicelli et al. PLoS Comput Biol.

Abstract

The connectivity of Artificial Neural Networks (ANNs) differs from that observed in Biological Neural Networks (BNNs). Can the wiring of actual brains help improve ANN architectures? Can ANNs teach us which network features support computation in the brain when solving a task? At the meso/macro-scale of connectivity, ANN architectures are carefully engineered, and such design decisions have been crucial to many recent performance improvements. BNNs, on the other hand, exhibit complex emergent connectivity patterns at all scales. At the individual level, BNN connectivity results from brain development and plasticity processes, while at the species level, adaptive reconfigurations during evolution also play a major role in shaping connectivity. Ubiquitous features of brain connectivity have been identified in recent years, but their role in the brain's ability to perform concrete computations remains poorly understood. Computational neuroscience studies reveal the influence of specific brain connectivity features on abstract dynamical properties, but the implications of real brain network topologies for machine learning or cognitive tasks have barely been explored. Here we present a cross-species study with a hybrid approach integrating real brain connectomes and Bio-Echo State Networks, which we use to solve concrete memory tasks, allowing us to probe the potential computational implications of real brain connectivity patterns for task solving. We find results consistent across species and tasks, showing that biologically inspired networks perform as well as classical echo state networks, provided a minimum level of randomness and diversity of connections is allowed. We also present a framework, bio2art, to map and scale up real connectomes so that they can be integrated into recurrent ANNs. This approach also reveals the crucial importance of the diversity of interareal connectivity patterns, underscoring the role of stochastic processes in determining neural network connectivity in general.


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. General approach scheme.
For each of the three species, we generated a Bio-Echo State Network (BioESN) by integrating the real connectivity pattern as the reservoir of an Echo State Network (ESN). Thus, in contrast to a classical ESN with a randomly connected reservoir, a BioESN has a reservoir derived from an empirical connectome. We also propose a framework for mapping biological to artificial networks, bio2art, which optionally scales up the empirical connectomes to augment model capacity. The resulting BioESNs are then tested on cognitive tasks (see Methods for details on the tasks).
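The construction described above can be sketched in a few lines of numpy. This is a minimal, generic illustration, not the authors' bio2art API: the "connectome" here is a random placeholder matrix standing in for an empirical one, and the spectral-radius rescaling is the standard ESN heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "connectome": a sparse binary adjacency matrix standing in
# for an empirical interareal connectivity matrix (placeholder data).
n = 50
W = (rng.random((n, n)) < 0.1).astype(float)

# Rescale to a target spectral radius (< 1), the usual ESN heuristic
# for the echo state property; the same step applies to a real
# connectome used as the reservoir.
rho = max(abs(np.linalg.eigvals(W)))
W *= 0.9 / rho

W_in = rng.uniform(-1, 1, size=(n, 1))  # input weights

def step(x, u):
    """One reservoir update: x(t+1) = tanh(W x(t) + W_in u(t))."""
    return np.tanh(W @ x + W_in @ u)

# Drive the reservoir with a random input stream.
x = np.zeros(n)
for t in range(100):
    u = rng.uniform(-0.5, 0.5, size=1)
    x = step(x, u)
```

In a BioESN, only `W` changes relative to a classical ESN: the random matrix is replaced by the (optionally upscaled) empirical connectome, while the input weights and readout training stay the same.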
Fig 2. bio2art, scaling up connectivity and surrogates.
The networks derived from the empirical connectivity and used as reservoirs in the BioESNs can be represented as adjacency matrices. This figure shows example adjacency matrices for a scaled-up version (4x) of the Macaque monkey empirical brain connectivity, integrated into the BioESN as the reservoir. For comparison with the empirical case, we also build surrogate connectivities; each surrogate network preserves or controls for different aspects of the real connectivity patterns, as shown in the summary table in the figure. The figure depicts an example of the empirical (Macaque) connectivity and the different derived connectivities tested. Note the node indices, which explicitly show the upscaling of the connectivity. This procedure was repeated for all the other connectomes tested. See Mapping and upscaling connectomes with bio2art for more details on connectivity generation and surrogates.
Fig 3. Memory capacity task.
(A) (Upper) Schematic representation of the task. An input signal (X) is fed as a time series into the network through an input neuron. Each output neuron independently learns a lagged version of the input (Yτ). (Lower) Alternative representation of the task in terms of the input/output structure of the data. (B) Examples of network evaluation on the task. A forgetting curve (grey line) is shown for each tested species (columns) and connectivity condition W (color coded). For each time lag (τ), the score (squared Pearson correlation coefficient, ρ²) is plotted. The memory capacity (MC, see legends) is defined as the sum of performances over all values of τ and corresponds to the shaded areas in the plotted examples. (C) Performance of the bio-instantiated echo state networks (BioESNs) for the three different species tested. For each condition, 100 different networks with newly instantiated weights were trained (4000 time steps) and tested (1000 time steps). The test performance of each network is represented by a point in the plots.
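The memory capacity score described in the caption can be computed as follows. This is a sketch under stated assumptions, not the paper's exact pipeline: the reservoir is a plain random matrix (a bio-derived one would be dropped in instead), the readouts are fit by ridge regression, and the maximum lag (20) and regularization strength are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 100, 4000  # reservoir size; training length (test: 1000 steps)

# Random reservoir (stand-in for a bio-derived one), spectral radius 0.9.
W = rng.normal(size=(n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, size=n)

# Drive the reservoir with a random input stream and collect states.
u = rng.uniform(-0.5, 0.5, size=T + 1000)
X = np.zeros((len(u), n))
x = np.zeros(n)
for t, ut in enumerate(u):
    x = np.tanh(W @ x + W_in * ut)
    X[t] = x

washout = 100  # discard initial transient

def mc_score(max_lag=20, reg=1e-6):
    """Memory capacity: sum over lags tau of the squared Pearson
    correlation (rho^2) between the delayed input u(t - tau) and a
    linear readout trained independently for each tau."""
    total = 0.0
    for tau in range(1, max_lag + 1):
        Xtr, ytr = X[washout:T], u[washout - tau:T - tau]
        Xte, yte = X[T:], u[T - tau:len(u) - tau]
        # Ridge-regression readout for this lag.
        w = np.linalg.solve(Xtr.T @ Xtr + reg * np.eye(n), Xtr.T @ ytr)
        total += np.corrcoef(Xte @ w, yte)[0, 1] ** 2
    return total

mc = mc_score()
```

Each term of the sum is one point on a forgetting curve as in panel B; MC is the area under that curve, bounded above by the number of lags evaluated.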
Fig 4. Sequence memory task.
(A) (Upper) Schematic representation of one trial of the task. The input signal (X1) and the recall signal (X2) are fed as time series into the network through two input neurons. When the recall signal is given (X2 = 1), the output neuron is supposed to deliver the memorized input of the last L steps. (Lower) Alternative representation of the task in terms of the input/output structure of the data. (B) Examples of actual and predicted time series for 5 trials at three different difficulty levels (pattern length, from top to bottom: L = 10/14/18). The scatter plots on the right show the predicted vs. the true output (as explained in the main text). The BioESN in the example was built from the human connectome with the Bio (no-rank) variation. (C) Performance of the bio-instantiated echo state networks (BioESNs) for different task difficulties (pattern length) for the three different species. The bio-instantiated reservoirs, Bio (rank/no-rank), are compared to surrogates with random connectivity patterns. For each pattern length, 100 different networks with newly instantiated weights were trained (800 trials) and tested (200 trials). The curves depict the mean test performance and standard deviation across networks.
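The trial structure in panel A can be sketched as a small data generator. This illustrates the input/output layout only; the `fixation` delay between pattern and recall is an assumed parameter for illustration, not taken from the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_trial(L=10, fixation=5):
    """One trial of the sequence recall task (a sketch of the data
    layout in the caption, not the authors' exact generator).

    Channels:
      x1 -- random input pattern of length L, then zeros
      x2 -- recall cue, set to 1 during the recall phase
      y  -- target: the memorized pattern, replayed during recall
    """
    T = L + fixation + L               # pattern + delay + recall
    pattern = rng.uniform(-0.5, 0.5, size=L)
    x1 = np.zeros(T); x1[:L] = pattern
    x2 = np.zeros(T); x2[L + fixation:] = 1.0
    y = np.zeros(T);  y[L + fixation:] = pattern
    return np.stack([x1, x2], axis=1), y

inputs, target = make_trial(L=10, fixation=5)
```

Increasing `L` lengthens the pattern the reservoir must hold, which is exactly the difficulty knob varied in panel C.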
Fig 5. Scaling of performance with reservoir size.
(A) Scaling up empirical connectomes with bio2art. Scaling allows specifying the number of neurons per area (brain region as defined in the connectome). The interareal weights can be mapped either homogeneously or heterogeneously: homogeneous mapping partitions each total weight into equal parts among the interareal connections, while heterogeneous mapping partitions it at random among them. (B) Relationship between the number of neurons per area and the total reservoir size for all studied scaling factors. (C) Performance of BioESNs with scaled-up connectomes on the memory capacity task, for heterogeneous and homogeneous interareal connectivity patterns (upper and lower rows, respectively). For each condition (size, interareal connectivity), 100 different networks with newly instantiated weights were trained (4000 time steps) and tested (1000 time steps). The curves depict the mean test performance and standard deviation across runs.
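The homogeneous vs. heterogeneous mapping in panel A can be made concrete with a small upscaling routine. This is a sketch of the idea behind bio2art, not its actual API: each interareal weight is split among the k×k neuron-to-neuron connections of the corresponding block, either in equal parts or in random proportions that sum to the original weight.

```python
import numpy as np

rng = np.random.default_rng(3)

def upscale(C, k, heterogeneous=False):
    """Expand an n-area connectome C into an (n*k)-neuron matrix.

    Each interareal weight C[i, j] is distributed over the k*k
    connections of block (i, j): equally (homogeneous) or in random
    proportions drawn from a flat Dirichlet (heterogeneous). Either
    way, the block sums preserve the original interareal weights.
    """
    n = C.shape[0]
    W = np.zeros((n * k, n * k))
    for i in range(n):
        for j in range(n):
            if C[i, j] == 0:
                continue
            if heterogeneous:
                p = rng.dirichlet(np.ones(k * k)).reshape(k, k)
                block = C[i, j] * p
            else:
                block = np.full((k, k), C[i, j] / k**2)
            W[i*k:(i+1)*k, j*k:(j+1)*k] = block
    return W
```

Because both mappings conserve the total interareal weight, any performance difference between the upper and lower rows of panel C isolates the effect of weight diversity rather than overall connection strength.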
Fig 6. Cognitive tasks.
Schematic representation of the tasks and the input/output data structure for each of the cognitive tasks used to evaluate the performance of the BioESNs. Left: Memory capacity (MC) task, where the network receives a stream of random values as a single input X and has several independent outputs Y (for simplicity, the example shows only two). Each output is learned by an independent output neuron of the network, which is supposed to recall the input at a specific time lag τ. The BioESNs were trained on 4000 time steps and tested on the subsequent 1000. Right: One trial of the sequence recall task. The network receives inputs X1 and X2, coming from a random sequence and a recall signal channel, respectively. There is only one output neuron which, after the recall signal is given (i.e., X2 = 1), is supposed to reproduce the input received in the previous L steps; the pattern length L is the parameter determining the difficulty of the task (for simplicity, L = 2 in the scheme). The BioESNs were trained on 800 trials and tested on 200 trials. The score was computed considering only the recall phase in order to avoid inflating the metric, given that the fixation periods were much easier to perform correctly.
