Lost in translation

Parashkev Nachev et al. F1000Res. 2018 May 21;7:620.
doi: 10.12688/f1000research.15020.2. eCollection 2018.

Abstract

Translation in cognitive neuroscience remains beyond the horizon, brought no closer by supposed major advances in our understanding of the brain. Unless our explanatory models descend to the individual level (a cardinal requirement for any intervention), their real-world applications will always be limited. Drawing on an analysis of the informational properties of the brain, here we argue that adequate individualisation needs models of far greater dimensionality than has been usual in the field. This necessity arises from the widely distributed causality of neural systems, a consequence of the fundamentally adaptive nature of their developmental and physiological mechanisms. We discuss how recent advances in high-performance computing, combined with collections of large-scale data, enable the high-dimensional modelling we argue is critical to successful translation, and urge its adoption if the ultimate goal of impact on the lives of patients is to be achieved.

Keywords: Translation; causality; cognitive neuroscience; high-dimensional inference; machine learning; neuroimaging.


Conflict of interest statement

No competing interests were disclosed.

Figures

Figure 1. Dimensionality and individualisation.
The face of the Roman Emperor Hostilian (top left) is poorly described by the canonical face of all Roman Emperors (top right), which is—by definition—not identical with any of the individual faces from which it is derived. Furthermore, the individuality of a face is better captured by a low-precision, high-dimensional parameterisation (bottom left), than it is by a high-precision, low-dimensional parameterisation such as the inter-pupillary distance (bottom right). The photograph of Hostilian is reproduced with the kind permission of Dr William Storage.
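The caption's contrast, that many coarse dimensions individuate better than one precise dimension, can be sketched numerically. In this deliberately artificial toy (random 100-dimensional vectors standing in for faces, not real face data), identity is recovered by nearest-neighbour matching either from all dimensions quantised to four levels, or from a single precisely measured dimension:

```python
import random

random.seed(2)

D, N = 100, 50  # dimensions per synthetic "face", number of individuals
faces = [[random.random() for _ in range(D)] for _ in range(N)]

def coarse(v, levels=4):
    """Low-precision, high-dimensional description: quantise every dimension."""
    return [round(x * (levels - 1)) for x in v]

def identify(probe_desc, describe):
    """Nearest-neighbour identification under a given description function."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(N), key=lambda i: dist(describe(faces[i]), probe_desc))

def observe(v, sd=0.1):
    """A noisy measurement of an individual."""
    return [x + random.gauss(0, sd) for x in v]

hits_hd = hits_1d = 0
for i in range(N):
    probe = observe(faces[i])
    hits_hd += identify(coarse(probe), coarse) == i         # many coarse dimensions
    hits_1d += identify([probe[0]], lambda v: [v[0]]) == i  # one precise dimension

print(f"high-dimensional/coarse: {hits_hd}/{N} identified")
print(f"low-dimensional/precise: {hits_1d}/{N} identified")
```

The coarse high-dimensional description recovers the individual almost every time, while the single precise measurement rarely does, mirroring the caption's point that individuality lives in dimensionality rather than precision.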
Figure 2. Causal fields.
Distributed causality is elegantly illustrated by the behaviour of artificial neural networks trained to transform an input into an output by optimising the weights of a stack of fully connected nodes. Here the input-output transformation is causally dependent on the nodes and their connections, for it cannot occur without most of them. But when the network is large, its dependence on any limited subset of nodes will be low. This is not because there is a reserve of unused nodes, but because the causality of the system is constitutionally distributed. Inactivating (in black) a set of nodes (large circles) or their connections (small circles) will therefore degrade performance broadly in proportion to their number, not necessarily their identity. Causality thus becomes irreducible to any simple specification of necessity and sufficiency. Instead, each node becomes an insufficient but necessary part of an unnecessary but sufficient set of factors: an INUS condition. An adequate description of the causality of the system as a whole then requires specification of the entire causal field of factors: no subset will do, and no strong ranking need exist between them. If the architecture of real neural networks makes such causality possible (and it certainly does), we need to be capable of modelling it. Nor is this merely a theoretical possibility: it is striking that encouraging distributed causal architectures by dropping nodes or connections during training dramatically improves the performance of artificial neural networks. Real neural substrates, moreover, often exhibit remarkable robustness to injury, a phenomenon conventionally construed as “reserve”; but since no part of the brain lies in wait, inactive, distributed causality is the more plausible explanation.
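The caption's proportionality claim, that degradation tracks the number of inactivated nodes rather than their identity, can be illustrated with a deliberately simple stand-in (a sum of many weakly weighted units rather than a trained network; all names and parameters here are illustrative):

```python
import random

random.seed(0)

N = 1000  # units, each a small INUS-style contributor to the output
weights = [random.uniform(0.5, 1.5) for _ in range(N)]
full_output = sum(weights)

def output_without(inactivated):
    """System output after silencing a set of unit indices."""
    return sum(w for i, w in enumerate(weights) if i not in inactivated)

# Silence two *different* random subsets of the same size: the loss
# depends on how many units are removed, not on which ones.
k = 100
subset_a = set(random.sample(range(N), k))
subset_b = set(random.sample(range(N), k))

loss_a = 1 - output_without(subset_a) / full_output
loss_b = 1 - output_without(subset_b) / full_output
print(f"loss A: {loss_a:.3f}  loss B: {loss_b:.3f}  expected ~ k/N = {k / N:.3f}")
```

Both losses land near k/N regardless of which subset was chosen: no unit is individually critical, yet the ensemble is jointly indispensable, which is the causal-field picture in miniature.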
Figure 3. Monomorphous vs polymorphous systems.
Where the fundamental architecture of a biological system is the same in every individual, our best guide will be the simple mean of the population, for each individual will differ from it only randomly. The study of such monomorphous systems is illustrated by adding random noise to an image of a specific watch mechanism and then averaging across 45 noisy instances: the underlying architecture is thereby easily revealed. Where the solution in each individual differs locally, illustrated by taking a family of 45 different watch mechanisms of the same brand, the population mean is a very poor guide, for individual variability is no longer noise but the outcome of a plurality of comparably good solutions. We must instead define local regularities of organisation, here achieved by t-distributed stochastic neighbour embedding (t-SNE) of the images into a two-dimensional latent space, revealing the characteristic features of each family of solutions. Given that neural systems are complex, stochastically initiated, and optimised by feedback, polymorphous architectures are likely to dominate, mandating a data-driven, neighbourhood-defining approach to modelling.
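The monomorphous half of the caption reduces to the law of large numbers: averaging noisy instances of one shared architecture recovers it. A minimal numeric sketch (a fixed "mechanism" vector standing in for the watch image, with 45 instances as in the figure; the signal itself is arbitrary):

```python
import random

random.seed(1)

# One fixed underlying "mechanism", identical in every individual.
signal = [((i * 7) % 13) / 13 for i in range(50)]

def noisy_instance(sig, sd=0.5):
    """An individual: the shared architecture plus random noise."""
    return [x + random.gauss(0, sd) for x in sig]

instances = [noisy_instance(signal) for _ in range(45)]
mean = [sum(col) / len(col) for col in zip(*instances)]

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

print(f"single instance RMSE: {rmse(instances[0], signal):.3f}")
print(f"mean of 45 RMSE:      {rmse(mean, signal):.3f}")  # ~ sd / sqrt(45)
```

Averaging shrinks the error by roughly the square root of the number of instances, which is exactly why the mean succeeds for monomorphous systems. In the polymorphous case the individuals no longer share one signal, the mean converges on something no individual instantiates, and a neighbourhood-defining method such as t-SNE is needed instead, as the caption notes.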

