Review
2024 Oct 17;187(21):5814-5832.
doi: 10.1016/j.cell.2024.08.051.

Decoding the brain: From neural representations to mechanistic models


Mackenzie Weygandt Mathis et al. Cell, 2024.

Abstract

A central principle in neuroscience is that neurons within the brain act in concert to produce perception, cognition, and adaptive behavior. Neurons are organized into specialized brain areas, dedicated to different functions to varying extents, and rely on distributed circuits to continuously encode relevant environmental and body-state features, enabling other areas to decode (interpret) these representations in order to compute meaningful decisions and execute precise movements. Thus, the distributed brain can be thought of as a series of computations that act to encode and decode information. In this perspective, we detail important concepts of neural encoding and decoding and highlight the mathematical tools used to measure them, including deep learning methods. We provide case studies where decoding concepts enable foundational and translational science in motor, visual, and language processing.

Keywords: BCIs; data-driven; decoding; deep learning; encoding; language; normative models.


Conflict of interest statement

Declaration of interests The authors declare no competing interests.

Figures

Figure 1. Encoding-decoding across scales.
A: An encoder represents the neural response of a population K(t) to a stimulus x(t) via P(K|x), and a decoder aims to recover x(t) given the neural activity K(t) via P(x|K). B: Systems neuroscience spans scales of description, and decoding algorithms can target any individual level or even span across scales. Here, we outline example scales (from genes to environment), the types of data we can collect (from genetic sequencing to whole-animal video analysis), and the classes of models the field has developed. On the far right is our mapping of scales, example data, and example tools to levels of understanding. Inset images adapted from: (–8).
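The encoder P(K|x) and decoder P(x|K) of panel A can be made concrete with a toy simulation. The sketch below assumes a single neuron with Poisson spike counts and a made-up linear tuning function (illustrative numbers, not from the paper); with a flat prior, decoding reduces to picking the stimulus that maximizes the likelihood:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def expected_rate(x):
    """Illustrative encoder: mean spike count of one neuron given stimulus x."""
    return 5.0 + 10.0 * x

def poisson_logpmf(k, lam):
    """log P(K = k | rate lam) for a Poisson spike-count model."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def decode(k, candidates):
    """Maximum-likelihood decoder: with a flat prior, P(x|K) is proportional to P(K|x)."""
    log_lik = [poisson_logpmf(k, expected_rate(x)) for x in candidates]
    return candidates[int(np.argmax(log_lik))]

candidates = [0.0, 0.5, 1.0]   # hypothetical discrete stimulus values
x_true = 1.0
counts = rng.poisson(expected_rate(x_true), size=1000)  # samples from P(K|x)
decoded = np.array([decode(k, candidates) for k in counts])
accuracy = float(np.mean(decoded == x_true))
```

Because the three candidate rates (5, 10, and 15 spikes per trial) overlap under Poisson noise, single-trial decoding is imperfect; the accuracy over many trials quantifies how much stimulus information the encoder carries.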
Figure 2. Encoding model and decoding methods.
A: The cercal system of the cricket has four interneurons that represent the wind direction. The neurons' preferred wind directions point along four cardinal directions and can be represented by orthogonal vectors (on the left). Each neuron responds with a firing rate approximated by a half-wave rectified cosine function; the maximum firing rate is elicited when the wind blows in the preferred direction. B: The wind direction x can be decoded as the direction of the population vector x̂. This vector is the sum of the four preferred directions scaled by their firing rates. An example is shown for neurons responding with activity [36, 12, 2, 1]^T. Note how closely the population vector matches the wind direction. C: In the k-nearest neighbors (k-NN) decoding method, neural activity K is represented within a neural activity space, illustrated here in 2D for two neurons for clarity (neuron 1 and neuron 2 from panel A). With these two neurons, angles between 0° and 225° can be represented. For simplicity, we focus on a nearest-neighbor variant with k = 1. As 1-NN can only decode discrete variables, we classify the angles into three ranges: 0°–45°, 45°–135°, and 135°–225°. Previously observed trials are color-coded by their associated wind direction ranges (L = 13). To decode the wind direction for a new trial (unfilled triangle), the k nearest neighbors (here, k = 1) in the activity space are identified. The decoded wind direction corresponds to the direction associated with the nearest neighbor, highlighted by the sample connected to the observed sample via a dashed line. D: Bayesian decoders incorporate a prior P(x) (dashed line) that reflects the probability of different wind directions before taking neural evidence into account and influences the decoded angle. For instance, if mainly wind directions around 125° have been experienced (the mean of the prior P(x)), the decoded angle will be shifted towards this direction.
The likelihood P(K|x) (green-blue line) describes the probability of observing a particular neural response K given a specific wind direction. Following Bayes' theorem (Eq. 3), the prior P(x) and the likelihood P(K|x) are multiplied to obtain the posterior distribution P(x|K) (solid black line). The posterior can be used to decode the wind direction, here 270°, as the value with the highest posterior probability given the observed neural activity K.
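Panels A, B, and D can be sketched numerically. The following is a minimal illustration assuming the classic cercal arrangement of preferred directions (45°, 135°, 225°, 315°), an assumed 40 Hz peak rate, an assumed Gaussian rate noise for the likelihood, and a prior centered at 125° as in panel D; all numbers are illustrative, not fits to data:

```python
import numpy as np

# Assumed preferred directions of the four cercal interneurons and peak rate.
pref = np.deg2rad([45.0, 135.0, 225.0, 315.0])
r_max = 40.0

def encode(theta_deg):
    """Half-wave rectified cosine tuning: r_i = r_max * [cos(theta - pref_i)]_+."""
    theta = np.deg2rad(theta_deg)
    return r_max * np.maximum(np.cos(theta - pref), 0.0)

def population_vector_decode(rates):
    """Sum the preferred-direction unit vectors, each scaled by its firing rate."""
    x = np.sum(rates * np.cos(pref))
    y = np.sum(rates * np.sin(pref))
    return np.rad2deg(np.arctan2(y, x)) % 360.0

# The figure's example activity [36, 12, 2, 1]^T decodes to roughly 63 degrees.
rates = np.array([36.0, 12.0, 2.0, 1.0])
pv_angle = population_vector_decode(rates)

# Bayesian (MAP) decoding on a 1-degree grid: posterior ~ likelihood * prior.
angles = np.arange(360.0)
prior = np.exp(-0.5 * ((angles - 125.0) / 30.0) ** 2)  # assumed prior near 125 deg
prior /= prior.sum()

noise_sd = 5.0  # assumed Gaussian rate noise (Hz)
log_lik = np.array(
    [-0.5 * np.sum(((rates - encode(a)) / noise_sd) ** 2) for a in angles]
)
log_post = log_lik + np.log(prior)
map_angle = angles[np.argmax(log_post)]  # pulled slightly toward the prior mean
```

With these assumed numbers, the MAP estimate lands a few degrees closer to the prior mean than the population-vector readout, which is the shift panel D illustrates.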
Figure 3. Data-driven models: statistical power vs. mechanistic realism.
Ultimately, the field aims to map mechanism onto computation in order to obtain causal, testable models. On one side are mechanistic models; on the other, statistical models that aim to best encode neural dynamics; a large gap remains between them. We provide a non-exhaustive selection of contributions: (, , , –46).
Figure 4. Learnable latent variable models.
A: On the path to building more causal models are new frameworks, such as CEBRA (13), that allow for learning a mapping from the observable data K to the latent dynamics Z. The aim is to use identifiable models with contrastive learning g(Z) (the encoder), then invert this model or use another decoder framework to probe the relationship between the estimated latents, Z1, and a variable of interest: an externally observable state (behavior), an internal state, or a sensory variable (i.e., recover some stimulus space x).
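The contrastive objective behind such encoders can be sketched generically. The snippet below is a schematic InfoNCE-style loss on toy 2D embeddings, not CEBRA's actual implementation: it pulls an anchor embedding toward a positive sample (e.g., a time-adjacent or behavior-matched trial) and pushes it away from negatives; all vectors and the temperature are made up for illustration:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.5):
    """Schematic InfoNCE contrastive loss for a single anchor embedding.

    A low loss means the anchor is more similar to its positive than to
    the negatives, which is what training the encoder g drives toward.
    """
    def sim(a, b):
        # Cosine similarity between two embedding vectors.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                  # cross-entropy; positive is index 0

# Toy latent embeddings standing in for g(K) applied to neural activity K.
anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])               # similar latent: low loss
negatives = [np.array([-1.0, 0.2]), np.array([0.0, -1.0])]
loss = info_nce(anchor, positive, negatives)
```

Swapping the positive with a negative makes the loss jump, which is the gradient signal that shapes the latent space Z.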
Figure 5. Examples of decoding from motor, vision, and language areas.
A: Closed-loop experiments using digital twins: schematic of an inception loop, depicted clockwise from the upper left: 1) presentation of high-entropy natural stimuli and tasks while recording large-scale neural activity; 2) deep learning models accurately predict neural activity, creating a functional digital twin of the recorded neurons; 3) the in silico model facilitates unlimited experiments and employs mechanistic interpretability tools to characterize neural tuning; and 4) images and hypotheses synthesized in silico are validated back in vivo. B: Illustration of decoded images from fMRI using diffusion models: ground truth (GT) vs. decoded images generated by Chen et al. (94) from human fMRI with a diffusion model. Note that the decoded images share similar color, shape, and semantics. C: Multi-modal speech decoding: adapted from Metzger et al. (95), this panel shows the decoding pipeline, where neural activity was used to train an ANN to predict phone probabilities, speech-sound features, and articulatory gestures. A decoder was then constructed to produce text, generate audible speech, and animate an avatar, respectively.

References

    1. Wolpert Daniel M, Miall R Chris, and Kawato Mitsuo. Internal models in the cerebellum. Trends in Cognitive Sciences, 2(9):338–347, 1998. - PubMed
    2. Sussillo David, Nuyujukian Paul, Fan Joline M, Kao Jonathan C, Stavisky Sergey D, Ryu Stephen, and Shenoy Krishna. A recurrent neural network for closed-loop intracortical brain–machine interface decoders. Journal of Neural Engineering, 9(2):026027, 2012. - PMC - PubMed
    3. Pandarinath Chethan, Gilja Vikash, Blabe Christine H, Nuyujukian Paul, Sarma Anish A, Sorice Brittany L, Eskandar Emad N, Hochberg Leigh R, Henderson Jaimie M, and Shenoy Krishna V. Neural population dynamics in human motor cortex during movements in people with ALS. eLife, 4:e07436, 2015. - PMC - PubMed
    4. Tuia Devis, Kellenberger Benjamin, Beery Sara, Costelloe Blair R., Zuffi Silvia, Risse Benjamin, Mathis Alexander, Mathis Mackenzie W., van Langevelde Frank, Burghardt Tilo, Kays Roland W., Klinck Holger, Wikelski Martin, Couzin Iain D., Van Horn Grant, Crofoot Margaret C., Stewart Chuck, and Berger-Wolf T. Perspectives in machine learning for wildlife conservation. Nature Communications, 13, 2022. - PMC - PubMed
    5. Wang Quanxin, Ding Song-Lin, Li Yang, Royall Joshua J., Zeng Hongkui, and Ng Lydia. The Allen Mouse Brain Common Coordinate Framework: a 3D reference atlas. Cell, 181:936–953.e20, 2020. - PMC - PubMed
