Review

The population doctrine in cognitive neuroscience

R Becket Ebitz et al. Neuron. 2021 Oct 6;109(19):3055-3068. doi: 10.1016/j.neuron.2021.07.011. Epub 2021 Aug 19.

Abstract

A major shift is happening within neurophysiology: a population doctrine is drawing level with the single-neuron doctrine that has long dominated the field. Population-level ideas have so far had their greatest impact in motor neuroscience, but they hold great promise for resolving open questions in cognition as well. Here, we codify the population doctrine and survey recent work that leverages this view to specifically probe cognition. Our discussion is organized around five core concepts that provide a foundation for population-level thinking: (1) state spaces, (2) manifolds, (3) coding dimensions, (4) subspaces, and (5) dynamics. The work we review illustrates the progress and promise that population-level thinking holds for cognitive neuroscience: delivering new insight into attention, working memory, decision-making, executive function, learning, and reward processing.

Conflict of interest statement

Declaration of interests: The authors declare no competing interests.

Figures

Figure 1: Neural state spaces and dimensionality reduction
A) A neural state is a pattern of activity across a population of neurons. Neural states can be represented as histograms of firing rates across neurons (top left) or as points or vectors in a neuron-dimensional state space (bottom left). The state space representation makes it more natural to think about neural activity geometrically. We can use this as a starting point for reasoning about the distance between different neural states (top right), whether distance is measured via Euclidean distance, cosine angle, or some other measure. It also illustrates the geometric interpretation of state magnitude (bottom right): the neural state's distance from the origin, i.e. the length of the neural state vector.
B) Peri-stimulus time histograms (PSTHs) plot the average firing rate of single neurons as a function of time, aligned to some event. Spikes are discrete and noisy, so we smooth spike trains by averaging across trials or by smoothing spikes in time. Both were applied here to data from Ebitz et al. (2018) (10-trial averages, Gaussian smoothing, σ = 25 ms). We can plot these traces in neural state space, in which case they are called neural trajectories: paths linking neural states over time.
C) To reduce noise or to compress neuron-dimensional state spaces for intuition or visualization, we use dimensionality reduction methods like principal components analysis (PCA). PCA finds an ordered set of orthogonal (independent) directions in neural space that explain decreasing amounts of variability in the set of neural states. The first principal component (PC 1) is the direction vector (linear combination of neuronal firing rates) that explains the most variance in neural states (here, 73%); it is often related to time. PC 2 is the direction vector that explains the next most variability, subject to the constraint that it is orthogonal to PC 1, and so on. Right) Projecting neural activity onto a subset of the PCs (here, the first 2) flattens our original 3-dimensional example into a 2-dimensional view that still explains 93% of the variability in neural states.
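As a concrete illustration of the PCA step described in panel C, the sketch below builds a toy 3-neuron trajectory and projects it onto its first two principal components. This is a minimal, hypothetical example using scikit-learn and simulated firing rates, not the analysis pipeline behind the figure; the latent signals, loadings, and variance-explained values are assumptions for illustration and will not match the 73% and 93% quoted above.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated trial-averaged firing rates: 3 neurons x 200 time bins, with most
# of the variance confined to a shared 2-dimensional pattern plus a little noise.
t = np.linspace(0, 2 * np.pi, 200)
latent = np.stack([np.sin(t), np.cos(t)])        # 2 latent signals over time
mixing = rng.normal(size=(3, 2))                 # each neuron's loading on the latents
rates = mixing @ latent + 0.1 * rng.normal(size=(3, 200))

# PCA expects samples (time bins) in rows and features (neurons) in columns.
pca = PCA(n_components=2)
trajectory_2d = pca.fit_transform(rates.T)       # the neural trajectory in PC space

print("variance explained per PC:", pca.explained_variance_ratio_)
```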
Figure 2: Manifolds and coding dimensions
A) A toy manifold for the data from Figure 1, illustrating some on- and off-manifold states.
B) In a system with two pools of mutually inhibitory, self-excitatory neurons, the manifold would be an almost 1-dimensional negative correlation between the two pools. This is sufficient for any computation that a balance beam could perform, like measuring the difference between two inputs.
C-D) Coding dimensions are direction vectors in a state space that explain variability across task conditions (here illustrated as colored Gaussian distributions). With two task conditions, coding dimensions can be identified via linear (C) or logistic regression (D). Linear regression fits a line that connects the two states (black arrow), so we decode by projecting data onto the regression line (red and blue distributions). Logistic regression finds a classifier that discriminates the two states, so the distance from the separating boundary is the decoding axis.
E) When there is a continuum of conditions, we can use linear regression to identify a coding dimension, even when the states are arranged non-linearly. A linear approximation captures most of the variance in most curved functions and, at least in some circumstances, behavior may itself reflect a linear readout from a curved representation (Sohn et al. 2019).
F) When there are more than two conditions, multiple-class models can identify a set of coding dimensions: a coding subspace. Here, multinomial logistic regression identifies coding dimensions that predict one specific condition (colored distributions) versus the other conditions (gray distributions). Because this approach assumes that each neural state is associated with exactly one condition, the last direction vector is fully determined by the rest of the set (i.e. the green axis is the not-blue and not-red axis). In general, whenever there are k mutually exclusive conditions, the coding subspace will have at most k-1 dimensions.
G) To understand why decoding accuracy improves as we add more neurons, it is helpful to realize that space expands in higher dimensions. Consider the distances between 100 random, uniformly distributed points in 1 or 2 dimensions.
H) This expansion means that the distance between distributions will tend to increase as we add more neurons to our decoding model. Compare the difference in coding axis projections as we go from decoding from 1 neuron, to 2 neurons, to 3 neurons.
I) Although each new neuron adds noise, neural states within distributions (red trace) will always be closer together than neural states between distributions (gray traces), unless those distributions overlap perfectly. As the dimensionality of the model increases, the distance between distributions grows more rapidly than the distance within distributions. Pairwise correlations between neurons limit information when they cause the dimensionality of the manifold to grow more slowly than the number of recorded neurons. Distances were calculated over 100 simulated trials with 1 to 50 neurons with independent, unit-variance Gaussian noise, at 3 different effect sizes (0.5, 1, 2).
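A small simulation in the spirit of panels D and G-I is sketched below: logistic regression finds a coding dimension for two simulated conditions, and cross-validated decoding accuracy is tracked as neurons are added. The helper function `simulate_trials`, the trial counts, the effect size, and the noise model are all assumptions chosen for illustration, not values taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def simulate_trials(n_neurons, n_trials=200, effect_size=0.5):
    """Two task conditions; each neuron carries a small condition signal
    plus independent, unit-variance Gaussian noise (as in panels G-I)."""
    labels = rng.integers(0, 2, n_trials)
    tuning = rng.choice([-1.0, 1.0], n_neurons)          # sign of each neuron's preference
    rates = effect_size * np.outer(2 * labels - 1, tuning)
    rates = rates + rng.normal(size=(n_trials, n_neurons))
    return rates, labels

# Decoding accuracy tends to climb as neurons are added, because the distance
# between the condition distributions grows faster than the within-condition spread.
for n_neurons in (1, 2, 3, 10, 50):
    rates, labels = simulate_trials(n_neurons)
    acc = cross_val_score(LogisticRegression(), rates, labels, cv=5).mean()
    print(f"{n_neurons:>3} neurons: cross-validated accuracy = {acc:.2f}")

# Fitting once on a small population exposes the coding dimension itself:
# the weight vector normal to the classifier's separating boundary.
rates, labels = simulate_trials(3)
clf = LogisticRegression().fit(rates, labels)
print("coding dimension:", clf.coef_.ravel())
```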
Figure 3: Dynamics
A) A 2-pool neural network (similar to Figure 2B) and 3 different views of the network's dynamics. Bottom left) A phase portrait shows the direction and magnitude of the local forces in the network (gray arrows). Simulated trajectories are overlaid (pale green traces). Filled circles are the fixed points at the center of the attractors' basins. Top right) We can also visualize the dynamics as a potential energy landscape, which highlights the unstable peaks and stable valleys that shape how activity evolves over time. Bottom right) A cartoon illustrating one slice through the potential energy landscape, following the typical path of trajectories in the state space (i.e. the dotted line in the phase portrait).
B) One random simulation of the network in (A), illustrating the activity in each pool of neurons as a function of time. Noise is sufficient to cause the network to hop from one stable state (p1 > p2) to a second (p2 > p1) and back again. Over many simulations, the duration of time spent in each state will be proportional to the relative depth of the states in (A).
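For readers who want to experiment with dynamics like those in this figure, the sketch below simulates a hypothetical two-pool rate model with self-excitation, mutual inhibition, and additive noise, so that activity can hop between two stable states as in panel B. The equations, parameter names (`w_self`, `w_inh`, `noise_sd`, etc.), and values are illustrative assumptions, not the network used to generate the figure; how often the network hops depends on the hand-tuned noise level.

```python
import numpy as np

rng = np.random.default_rng(2)

# A minimal rate model of two self-excitatory, mutually inhibitory pools.
# Parameters are illustrative and tuned by hand so that noise can occasionally
# knock the network from one stable state (p1 > p2) into the other (p2 > p1).
dt, tau = 0.001, 0.02       # integration step and time constant (seconds)
w_self, w_inh = 6.0, -6.0   # self-excitation and mutual inhibition
drive = 1.0                 # shared feedforward input
noise_sd = 0.5              # noise amplitude

def f(x):
    """Saturating firing-rate nonlinearity (logistic)."""
    return 1.0 / (1.0 + np.exp(-x))

steps = 20000
r = np.array([0.5, 0.5])    # firing rates of pools p1 and p2
trace = np.empty((steps, 2))
for i in range(steps):
    inputs = drive + w_self * r + w_inh * r[::-1]   # each pool inhibits the other
    noise = noise_sd * np.sqrt(dt / tau) * rng.normal(size=2)
    r = np.clip(r + (dt / tau) * (-r + f(inputs)) + noise, 0.0, 1.0)
    trace[i] = r

# With symmetric weights the two states are equally deep, so over long
# simulations each pool should be dominant roughly half the time.
print("fraction of time with p1 dominant:", np.mean(trace[:, 0] > trace[:, 1]))
```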
