Stochastic sampling provides a unifying account of visual working memory limits

Sebastian Schneegans et al. Proc Natl Acad Sci U S A. 2020 Aug 25;117(34):20959-20968. doi: 10.1073/pnas.2004306117. Epub 2020 Aug 11.

Abstract

Research into human working memory limits has been shaped by the competition between different formal models, with a central point of contention being whether internal representations are continuous or discrete. Here we describe a sampling approach derived from principles of neural coding as a framework to understand working memory limits. Reconceptualizing existing models in these terms reveals strong commonalities between seemingly opposing accounts, but also allows us to identify specific points of difference. We show that the discrete versus continuous nature of sampling is not critical to model fits, but that, instead, random variability in sample counts is the key to reproducing human performance in both single- and whole-report tasks. A probabilistic limit on the number of items successfully retrieved is an emergent property of stochastic sampling, requiring no explicit mechanism to enforce it. These findings resolve discrepancies between previous accounts and establish a unified computational framework for working memory that is compatible with neural principles.

Keywords: capacity limits; population coding; resource model; visual working memory.


Conflict of interest statement

The authors declare no competing interest.

Figures

Fig. 1.
Sampling interpretation of working memory models. (A–C) A theoretical account of neural population coding can be reinterpreted as sampling. (A) The stimulus-evoked response of spiking neurons in an idealized population depends on their individual tuning (one neuron’s tuning function and preferred value [*] is highlighted). (B) Probability distribution over stimulus space obtained by associating a spike with the preferred stimulus of the neuron that generated it. (C) Precision of maximum likelihood estimates based on spikes emitted in a fixed decoding window. Precision, defined as the width of the likelihood function (Insets), is discretely distributed as a product of the tuning precision (ω1) and the number of spikes, which varies stochastically. Assuming normalization of total activity encoding multiple items, larger set sizes correspond to less mean activity per item. (D and E) An account based on averaging limited memory slots can also be described as sampling. (D) Allocation of a fixed number of samples or slots (here, three) to memory displays of different sizes. (E) Precision is discretely distributed as a product of the tuning width, ω1, and the number of samples allocated per item.
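The construction in Fig. 1 A–C can be sketched numerically. In this minimal simulation (a sketch, not the authors' implementation), each item receives a Poisson-distributed number of samples with mean γ/N (normalization of total activity across N items), and its precision is ω1 times that count; the variable names omega1, gamma, and set_size mirror the caption's symbols.

```python
# Sketch of stochastic sampling: per-item precision is the tuning
# precision (omega1) times a Poisson-distributed sample count whose
# mean is gamma / set_size under normalization of total activity.
import numpy as np

rng = np.random.default_rng(0)

def sample_precisions(omega1, gamma, set_size, n_trials):
    """Draw per-item precision values under stochastic sampling."""
    counts = rng.poisson(gamma / set_size, size=n_trials)  # samples per item
    return omega1 * counts  # precision is discretely distributed

prec = sample_precisions(omega1=1.5, gamma=12, set_size=4, n_trials=100_000)
print(prec.mean())  # close to omega1 * gamma / set_size = 4.5
```

Note that the resulting precision values are multiples of ω1, including zero: an item that happens to receive no samples carries no information, which is what later produces an emergent item limit.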
Fig. 2.
Response distributions and model fits in delayed reproduction tasks. (A) Distributions of response errors in a single-report task for a representative participant at different set sizes (10). (B and C) ML fits of the data in A with the stochastic sampling model and fixed sampling model, respectively. (D) Distributions of response errors in a whole-report task for a representative participant at set size four, showing how errors increase with the (freely chosen) order of sequential report (24). (E and F) ML fits of the participant’s data with the stochastic sampling model and fixed sampling model, respectively. Fits are based on results from all set sizes, not only the single set size shown in D.
Fig. 3.
Model comparison based on single- and whole-report data. (A) Mean difference in log likelihood of each model from the stochastic sampling model (with independence between items), for a benchmark dataset of single-report experiments. More positive values indicate better fits to data. Error bars indicate ±1 SE across participants. (B) The same comparison for a set of whole-report experiments. (C) Total difference in log likelihood between models across single- and whole-report experiments. (D) Fano factor (ratio of variance to mean) of the precision distribution. A constant Fano factor is characteristic of the stochastic model and contrasts with the varying Fano factor (dependent on set size and number of samples) in fixed sampling. (E) Mean difference in log likelihood for differing levels of discretization in the generalized stochastic model (Top), and number of participants best fit with each discretization level (Bottom). Differences in log likelihood are plotted relative to the maximum discretization (p = 1; Left) corresponding to the standard stochastic model with Poisson-distributed precision. Lower discretization (p < 1) corresponds to more samples each of lower precision, converging to a continuous Gamma distribution over precision as p approaches zero (Right). All models have the same number of free parameters and include a fixed per-item probability of swap errors (SI Appendix).
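The constant Fano factor in Fig. 3D follows directly from the sampling construction: if precision is ω1 times a Poisson count, its variance-to-mean ratio equals ω1 regardless of set size. A quick numerical check (a sketch assuming the Poisson construction above, with ω1 = 1.5 and γ = 12 as in Fig. 4):

```python
# Numerical check: the Fano factor of the precision distribution
# (variance / mean) equals omega1 at every set size when precision
# is omega1 * Poisson(gamma / N).
import numpy as np

rng = np.random.default_rng(1)
omega1, gamma = 1.5, 12.0

fanos = {}
for N in (1, 2, 4, 8):
    prec = omega1 * rng.poisson(gamma / N, size=500_000)
    fanos[N] = prec.var() / prec.mean()
    print(N, round(fanos[N], 3))  # close to omega1 = 1.5 for every N
```

In fixed sampling, by contrast, the sample count per item is determined by set size, so the variance-to-mean ratio of precision changes with set size, which is the signature the model comparison exploits.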
Fig. 4.
Precision distributions in the generalized stochastic model, for different levels of discretization, p, and different set sizes. (Insets) Construction of the corresponding distributions of response error (for set size eight), with thin lines showing normal distributions with different precisions incrementally accumulated in ascending order (magenta to blue). (A) Example of discrete Poisson-distributed precision values (p = 1). For typical ML parameters, estimates are based on a small mean number of samples (here, γ = 12), each of moderate precision (ω1 = 1.5). (B and C) With decreasing discretization (p < 1), estimates are based on larger mean numbers of samples, and discrete precision values are more finely spaced. (D) In the limit as discretization falls to zero, the mean number of samples becomes infinite, and the distribution over precision approaches a continuous Gamma distribution. The ratio of variance to mean precision (Fano factor) is fixed (at ω1 = 1.5) across all set sizes and levels of discretization.
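The two endpoints of the discretization continuum in Fig. 4 can be compared directly. In this sketch, p = 1 gives precision ω1 × Poisson(γ/N), while the p → 0 limit is taken as a Gamma distribution with shape γ/N and scale ω1; that parameterization is our inference from the caption's constraints (same mean, ω1γ/N, and same Fano factor, ω1, at both endpoints), not a quotation of the authors' equations.

```python
# Endpoints of the discretization continuum: discrete Poisson-scaled
# precision (p = 1) vs. a continuous Gamma distribution (p -> 0),
# parameterized so that mean and Fano factor agree.
import numpy as np

rng = np.random.default_rng(2)
omega1, gamma, N = 1.5, 12.0, 8

poisson_prec = omega1 * rng.poisson(gamma / N, size=500_000)        # p = 1
gamma_prec = rng.gamma(shape=gamma / N, scale=omega1, size=500_000)  # p -> 0

# Both endpoints share mean omega1 * gamma / N = 2.25 and
# variance-to-mean ratio omega1 = 1.5.
print(round(poisson_prec.mean(), 2), round(gamma_prec.mean(), 2))
```

The Poisson endpoint is supported on multiples of ω1, whereas the Gamma endpoint is continuous; intermediate p values interpolate between them with ever more finely spaced support.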
Fig. 5.
Item limits in sampling models. For each model, A, C, E, and G show how the probability distribution of the number of items recovered with greater than zero precision (A and C), or with precision greater than a fixed threshold (E and G), changes with set size (color coded, increasing blue to red; discrete probability distributions are depicted as line plots for better visualization). B, D, F, and H plot the mean number of items with above-threshold precision as a function of set size for different threshold values. Thresholds are defined as a proportion of the base precision ω1. (A and B) In the fixed sampling model, the number of items with nonzero precision increases with set size, then plateaus when the number of items equals the number of samples. (C and D) The stochastic sampling model with Poisson variability also has a limit on the number of items with nonzero precision, although this limit is probabilistic and emerges asymptotically (converging to the distribution shown by the red curve in C for large set sizes, corresponding to the mean number of items plotted as the black curve in D). (E and F) Stochastic models with lower discretization display similar probabilistic item limits for precision exceeding a fixed threshold, but with the expected number of items saturating at different values depending on threshold (different colors in F). (G and H) This property also extends to models with continuous precision distributions.
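The emergent item limit in Fig. 5 C and D has a simple closed form under the assumption of independent Poisson sample counts (our derivation, for illustration): an item receives zero samples with probability exp(−γ/N), so the expected number of items with nonzero precision is N·(1 − exp(−γ/N)), which saturates as set size grows rather than increasing without bound.

```python
# Expected number of items recovered with nonzero precision, assuming
# each of N items independently receives Poisson(gamma / N) samples.
# The expectation N * (1 - exp(-gamma / N)) saturates at large N,
# yielding a probabilistic item limit with no explicit enforcement.
import numpy as np

gamma = 12.0
expected = {N: N * (1.0 - np.exp(-gamma / N))
            for N in (1, 2, 4, 8, 16, 32, 64)}
for N, e in expected.items():
    print(N, round(e, 2))  # grows with N but stays bounded by gamma
```

No capacity parameter appears anywhere in this calculation: the ceiling arises purely from the stochastic allocation of a limited mean number of samples, which is the sense in which the item limit is an emergent property.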

References

    1. Luck S. J., Vogel E. K., The capacity of visual working memory for features and conjunctions. Nature 390, 279–281 (1997). - PubMed
    2. Cowan N., The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behav. Brain Sci. 24, 87–114 (2001). - PubMed
    3. Zhang W., Luck S. J., Discrete fixed-resolution representations in visual working memory. Nature 453, 233–235 (2008). - PMC - PubMed
    4. Bays P. M., Catalao R. F. G., Husain M., The precision of visual working memory is set by allocation of a shared resource. J. Vis. 9, 7–7 (2009). - PMC - PubMed
    5. Ma W. J., Husain M., Bays P. M., Changing concepts of working memory. Nat. Neurosci. 17, 347–356 (2014). - PMC - PubMed
