Independent working memory resources for egocentric and allocentric spatial information

David Aagten-Murphy et al. PLoS Comput Biol. 2019 Feb 21;15(2):e1006563. doi: 10.1371/journal.pcbi.1006563. eCollection 2019 Feb.

Abstract

Visuospatial working memory enables us to maintain access to visual information for processing even when a stimulus is no longer present, due to occlusion, our own movements, or transience of the stimulus. Here we show that, when localizing remembered stimuli, the precision of spatial recall does not rely solely on memory for individual stimuli, but additionally depends on the relative distances between stimuli and visual landmarks in the surroundings. Across three separate experiments, we consistently observed a spatially selective improvement in the precision of recall for items located near a persistent landmark. While the landmark did not need to remain visible throughout the memory delay, it was essential that it was visible both during encoding and response. We present a simple model that accurately captures human performance by treating relative (allocentric) spatial information as an independent localization estimate that degrades with distance and is optimally integrated with egocentric spatial information. Critically, allocentric information was encoded without cost to egocentric estimation, demonstrating independent storage of the two sources of information. Finally, when egocentric and allocentric estimates were put in conflict, the model successfully predicted the resulting localization errors. We suggest that the relative distance between stimuli represents an additional, independent spatial cue for memory recall. This cue is likely to be critical for spatial localization in natural settings, which contain an abundance of visual landmarks.


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. Experiment one.
(A) LM-PRESENT design. Participants memorized the locations of colored disks in the presence of a landmark (a larger dark gray disk; note object sizes are exaggerated for visibility). (B-D) Data points indicate mean variability in location recall for set sizes 1, 2 and 4 respectively, with predictions of the optimal integration model overlaid (colored lines). Note that the model captures both the reduction in variability near the landmark and the plateau in variability at far landmark-target separations. LM-PRESENT data are shown in red, LM-ABSENT in blue. Error bars and patches indicate 95% CI. Gray dots indicate the size of the landmark on the x-axis scale. (E-H) Box plots depicting parameter estimates for the best-fitting model (notch represents 95% confidence interval on the median). Note the decrease in egocentric precision (E), decrease in allocentric precision (F), and increase in lapse rate (H) with increasing set size, whereas the best-fitting model showed no change in the allocentric scale (rate of decay with distance), which was therefore estimated by a single parameter (G).
Fig 2. Ideal observer model.
(A) The visual working memory decoding model, in which egocentric and allocentric estimates are integrated depending on their respective reliabilities. While the precision of the egocentric component is set by Pego, the allocentric precision is determined by two parameters: the peak precision obtained when landmark and target are aligned (Amax), and a scale parameter describing how quickly allocentric precision declines with increasing landmark-target distance (Ascale). The model further incorporates a fixed probability of lapsing (p(lapse); responding at random relative to the target), giving four free parameters in total. (B) Precision of egocentric (blue) and allocentric (green) estimates shown as a function of distance from the landmark. While egocentric precision is constant, the precision of allocentric information decreases exponentially as the distance increases. The precision of the integrated estimate (red) is equal to the sum of the precisions of the individual components.
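To make the integration step concrete, below is a minimal Python sketch of an observer of this kind. It is not the authors' implementation: the exponential form a_max * exp(-d / a_scale) for the decay of allocentric precision, the Gaussian noise on each estimate, and the uniform lapse range are assumptions based on the caption, and all function and variable names are illustrative.

    import numpy as np

    def integrated_precision(d, p_ego, a_max, a_scale):
        # Precision (1/variance) of the combined estimate at landmark-target distance d.
        # Allocentric precision is assumed to decay exponentially with distance; for
        # independent Gaussian cues combined optimally, precisions simply add.
        p_allo = a_max * np.exp(-d / a_scale)
        return p_ego + p_allo

    def simulate_response(target, landmark, p_ego, a_max, a_scale, p_lapse, rng):
        # One simulated localization response (positions in arbitrary degrees).
        if rng.random() < p_lapse:
            return rng.uniform(-180.0, 180.0)  # lapse: respond at random (assumed range)
        d = abs(target - landmark)
        p_allo = a_max * np.exp(-d / a_scale)
        ego = rng.normal(target, 1.0 / np.sqrt(p_ego))    # egocentric estimate
        allo = rng.normal(target, 1.0 / np.sqrt(p_allo))  # allocentric (landmark-relative) estimate
        w = p_allo / (p_ego + p_allo)                     # reliability weighting
        return w * allo + (1.0 - w) * ego

In this sketch the standard deviation of the integrated response is 1/sqrt(p_ego + p_allo), which reproduces the qualitative pattern in (B): lowest variability next to the landmark and a plateau at the purely egocentric level once the allocentric cue has decayed.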
Fig 3. Experiment two.
(A-C) Mean variability in memory recall across participants for the LM-ENCODE (A), LM-GAP (B) and LM-PRESENT (C) conditions (with LM-ABSENT shown on the right in blue). There is a substantial reduction in variability in the vicinity of the landmark irrespective of whether the landmark was persistently visible (LM-PRESENT) or only intermittently shown (LM-GAP), but no apparent influence of the visual landmark when it was visible only during encoding (LM-ENCODE). Predictions of the best-fitting model are overlaid.
Fig 4. Experiment three.
(A) Example LM-SHIFT trial. When the landmark returned, it was shifted by either 6° clockwise or counter-clockwise (exaggerated above for clarity; the light gray disk illustrates the previous landmark location and was not visible in the experiment). If participants used the post-shift location to anchor their allocentric estimates, we would expect their responses to be biased in the direction of the displacement, with the magnitude related to the reliability of the allocentric cue. (B) The response bias measured in the direction of the shift (magnitude 6° indicated by the gray line), as a function of distance from the landmark. The data reveal a consistent bias in the direction of the displacement, which may be either towards or away from the visible landmark location. The bias magnitude depended on distance from the landmark, peaking at ~80% of the shift. (C) Spatially specific decreases in response variability near the landmark in LM-SHIFT. Note that for clarity the bias was subtracted prior to calculation of the median absolute deviation. The model predictions (overlaid) simultaneously capture landmark effects on both bias (B) and variability (C), without any additional free parameters.
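The bias prediction in (B) follows from the same reliability weighting as the sketch under Fig 2: if the allocentric estimate is anchored to the displaced landmark, it moves by the full shift, and the weighted average pulls the response toward it by the allocentric weight. A minimal sketch, again with assumed names and the assumed exponential decay:

    import numpy as np

    def predicted_shift_bias(d, shift, p_ego, a_max, a_scale):
        # Predicted response bias when the landmark reappears displaced by `shift`.
        # The allocentric estimate moves with the landmark; integration pulls the
        # response toward it in proportion to the allocentric weight.
        p_allo = a_max * np.exp(-d / a_scale)
        w = p_allo / (p_ego + p_allo)
        return w * shift  # approaches the full shift where the allocentric cue dominates

Under this reading, the observed peak bias of ~80% of the 6° shift corresponds to an allocentric weight of roughly 0.8 close to the landmark, falling toward zero with distance as the allocentric cue decays.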
