Place cells may simply be memory cells: Memory compression leads to spatial tuning and history dependence

Marcus K Benna et al. Proc Natl Acad Sci U S A. 2021 Dec 21;118(51):e2018422118. doi: 10.1073/pnas.2018422118.

Abstract

The observation of place cells has suggested that the hippocampus plays a special role in encoding spatial information. However, place cell responses are modulated by several nonspatial variables and reported to be rather unstable. Here, we propose a memory model of the hippocampus that provides an interpretation of place cells consistent with these observations. We hypothesize that the hippocampus is a memory device that takes advantage of the correlations between sensory experiences to generate compressed representations of the episodes that are stored in memory. A simple neural network model that can efficiently compress information naturally produces place cells that are similar to those observed in experiments. It predicts that the activity of these cells is variable and that the fluctuations of the place fields encode information about the recent history of sensory experiences. Place cells may simply be a consequence of a memory compression process implemented in the hippocampus.

Keywords: compression; hippocampus; memory; place cells; sparse autoencoders.


Conflict of interest statement

The authors declare no competing interest.

Figures

Fig. 1. Efficiently storing correlated patterns in memory. (A) Schematic of an ultrametric tree with p ancestors and k descendants per ancestor, used to generate correlated patterns. (B) A possible scheme for taking advantage of the correlations to generate compressed representations that are sparse and hence more efficiently storable. (C) Total number P_corr of correlated patterns generated from a tree model with parameters p, k, and γ that can be stored using a simple compression strategy, divided by the number P_uncorr of patterns that could be stored (using approximately the same number of neurons and synapses) if the patterns were uncorrelated. The plot thus shows the relative advantage of a compression strategy over storing incompressible patterns, as a function of k and γ.
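To make the generative model concrete, here is a minimal Python sketch of one plausible way to draw patterns from a one-level ultrametric tree. The function name and the use of dense ±1 patterns are illustrative assumptions (the paper works with sparse binary patterns at a fixed coding level), but the correlation structure is the same in spirit.

    import numpy as np

    def ultrametric_patterns(p, k, gamma, n, seed=0):
        # Draw p random +/-1 ancestor patterns of dimension n, then derive
        # k descendants per ancestor by keeping each ancestor bit with
        # probability (1 + gamma) / 2 and flipping it otherwise.  Each
        # descendant then correlates with its ancestor at ~gamma, and two
        # descendants of the same ancestor correlate at ~gamma**2.
        rng = np.random.default_rng(seed)
        ancestors = rng.choice([-1, 1], size=(p, 1, n))
        keep = rng.random((p, k, n)) < (1 + gamma) / 2
        descendants = np.where(keep, ancestors, -ancestors)
        return descendants.reshape(p * k, n), ancestors.reshape(p, n)

With γ = 0.6, this scheme yields a descendant-descendant correlation of about 0.36, matching the γ² = 0.36 quoted for the inputs in Fig. 2.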
Fig. 2. (A) Scheme of the simulated autoencoder. The input layer (300 neurons; mappable to EC) projects to an intermediate layer (DG; 600 neurons). The weights to DG are chosen so that the output neurons (light blue) reproduce the input. (B) Geometry of the compressed representations: correlations between the representations of different descendants of the same ancestor for the inputs (red), the autoencoder (intermediate layer in A; black), and a random encoder (blue) as a function of the branching ratio k when the total number of patterns is kept constant (and hence the number of ancestors varies). As γ is fixed (γ = 0.6), the correlations for the inputs and the random encoder are constant (γ² = 0.36 for the input). For the autoencoder, they decrease as the compressibility of the environment increases (i.e., as k increases). SI Appendix, Fig. S1A shows the average of the absolute value of the correlations between all descendants. (C) Memory performance of the autoencoder compared with a random encoder and a readout of the input: the number of reconstructed memories is plotted as a function of the total number of memory patterns (obtained by changing the number of ancestors). For the autoencoder, we show two curves that correspond to different branching ratios (k = 2, 20) but the same γ = 0.6 (different values of γ are shown in SI Appendix, Fig. S1B). As the number of ancestors increases, the quality of reconstruction decreases, and the number of reconstructed memories reaches a maximum. The autoencoder outperforms the input and the random encoder and performs better when the memories are more compressible. (D) Memory capacity as a function of the square root of the total number of synapses for the autoencoder, random, and input representations. The autoencoder outperforms all the other models, even though it requires four times more synapses than the system that reads out inputs directly.
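A minimal sketch of an autoencoder with this shape, assuming a k-winners-take-all sparsification of the intermediate layer and a tied-weight reconstruction rule; the paper's actual learning rule and sparsity mechanism may differ.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hid, n_active = 300, 600, 30   # EC-like input, DG-like layer; n_active is an assumed sparsity level
    W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_hid, n_in))

    def encode(x):
        # Sparse DG-like code: binarize the n_active most strongly driven units.
        h = W @ x
        thresh = np.sort(h)[-n_active]
        return (h >= thresh).astype(float)

    def train(patterns, lr=0.01, epochs=50):
        # Gradient step on the reconstruction error ||x - W.T h||^2,
        # with the decoder tied to the transpose of the encoder weights.
        global W
        for _ in range(epochs):
            for x in patterns:
                h = encode(x)
                W += lr * np.outer(h, x - W.T @ h)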
Fig. 3. (A) Schematic of a rodent exploring an open field arena. Whenever the animal returns to the same location, its sensory inputs will have some similarity to those experienced during previous visits to that location. (B) Schematic of the architecture of the network, with a potential mapping of the layers onto EC and hippocampus. (C) Memory retrieval capacity (the number, out of the 7,480 ± 150 inputs stored per session, that can be recalled from noisy cues in the autoassociative network) as a function of the number of training sessions (exposures to the environment). This illustrates the computational advantage of using even a simple compression algorithm with one layer of learned weights, as implemented in our network (black), compared with a network of the same architecture (and coding levels) but with fixed random feed-forward connections (blue). Note that the memory retrieval capacity is different from the reconstruction memory capacity studied in Fig. 2, which we also plot for comparison (dotted lines) and which is again larger for the autoencoder than for the random network.
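For intuition about the recall-from-noisy-cues step, here is a minimal Hopfield-style sketch of autoassociative storage and cued retrieval for sparse binary patterns. The covariance learning rule and the quantile threshold are standard textbook choices, not necessarily the ones used in the paper's network.

    import numpy as np

    def store(patterns, f):
        # Covariance rule for binary {0, 1} patterns with coding level f:
        # J_ij accumulates (xi_i - f)(xi_j - f); no self-connections.
        n = patterns.shape[1]
        J = (patterns - f).T @ (patterns - f) / n
        np.fill_diagonal(J, 0.0)
        return J

    def recall(J, cue, f, steps=20):
        # Iterate the dynamics from a noisy cue; the threshold is reset each
        # step so that roughly a fraction f of the units stays active.
        x = cue.astype(float).copy()
        for _ in range(steps):
            h = J @ x
            x = (h >= np.quantile(h, 1.0 - f)).astype(float)
        return x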
Fig. 4. (A) Trajectories of a simulated animal in an open arena (exploration statistics A) and (B) the spatial tuning profiles that emerge from training the autoencoder network on artificial sensory input corresponding to these trajectories, for 36 neurons randomly selected from the second (DG-like) layer of the model. We find a very heterogeneous set of spatial tuning profiles: some consistent with simple place cells, some exhibiting multiple place fields, and some that look more like boundary cells. The statistics of this diverse set of responses appear to be consistent with calcium imaging data from the dentate gyrus (12). (C and D) Same as A and B, but for a set of trajectories with a slightly different exploration bias (exploration statistics B). Half of the trajectories in each case have the same statistics and are drawn from an isotropic distribution of initial positions. The other half are drawn from different distributions, with initial positions biased toward the lower right corner in A and B and toward the upper left corner in C and D. As a result, the two sets of place fields corresponding to exploration statistics A and B are slightly different.
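Spatial tuning profiles like those in B and D can be estimated in the usual way, by averaging each unit's activity over visits to each spatial bin. A small sketch, where the bin count and unit-square coordinates are assumptions:

    import numpy as np

    def place_field_map(positions, activity, bins=20):
        # positions: (T, 2) x, y coordinates in [0, 1); activity: (T,)
        # responses of one DG-like unit.  Returns the occupancy-normalized
        # mean activity in each spatial bin.
        occ = np.zeros((bins, bins))
        tot = np.zeros((bins, bins))
        idx = np.clip((np.asarray(positions) * bins).astype(int), 0, bins - 1)
        for (i, j), a in zip(idx, activity):
            occ[i, j] += 1.0
            tot[i, j] += a
        return np.divide(tot, occ, out=np.zeros_like(tot), where=occ > 0)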
Fig. 5. (A) Maps of the differences between the average place fields of A and B sessions in a simulated experiment in which the animal experiences a random sequence of the two types of sessions with different exploration statistics (as in Fig. 4). (B) Normalized overlap between the place fields of two sessions with the same (blue) or different (red) statistics, as a function of the time interval between sessions. The overlap is larger in the former case and stays rather high even for long intervals between sessions, indicating relative long-term stability despite short-term fluctuations. (C) Median error in decoding position from simple regression predictors for the x and y coordinates of the animal. Position can be predicted more accurately when the decoder is trained on the same type of exploration statistics as the session used for testing, but even for different statistics, decoding performs significantly better than chance. The decoding error grows only slowly with the interval between training and test sessions.
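A sketch of the kind of decoding analysis described in C, assuming ridge regression and scikit-learn (the caption says only "simple regression predictors", so the specific estimator is an assumption):

    import numpy as np
    from sklearn.linear_model import Ridge

    def median_decoding_error(H_train, pos_train, H_test, pos_test):
        # H_*: (T, n_units) snapshots of second-layer activity;
        # pos_*: (T, 2) x, y coordinates.  One linear readout per
        # coordinate, scored by the median Euclidean error on the
        # held-out test session.
        model = Ridge(alpha=1.0).fit(H_train, pos_train)
        errors = np.linalg.norm(model.predict(H_test) - pos_test, axis=1)
        return np.median(errors)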
Fig. 6. (A) Difference maps of average place fields in A sessions between the cases in which the previous session was A vs. B (i.e., sequences AA–BA). (B) Similar difference maps for B sessions (corresponding to sequences AB–BB). Note that these differences are more subtle than those between A and B shown in Fig. 5A. (C) To demonstrate that the fluctuations in the previous two panels are not just noise but reliably capture history-dependent information, we show that one can decode from the neural (DG) representations of the simulated animal exploring an environment not just the statistics of the current session (i.e., A vs. B; green) but also the statistics of the previous session it experienced (purple). We decode using simple maximum margin linear classifiers in combination with a form of boosting (combining the predictions made from several neural representations experienced at different points in time) and report the resulting performance as a function of the number of neural representations (snapshots of the second-layer activity in the current session) used for decoding. While performance is only slightly above chance when decoding from a single snapshot of the neural activity, a linear classifier can almost perfectly discriminate A and B sessions when the predictions of the trained classifier for many such activity patterns are combined by taking a simple majority vote of the predicted labels. Crucially, the decoder for the statistics of the previous session uses only activity patterns from the current session.
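The decoding scheme in C, a maximum margin classifier trained on single snapshots whose per-snapshot predictions are pooled by majority vote, might look as follows; LinearSVC from scikit-learn is an assumed stand-in for the paper's classifier.

    import numpy as np
    from sklearn.svm import LinearSVC

    def decode_session_label(H_train, y_train, H_session):
        # H_train: (T, n_units) single snapshots with session labels
        # y_train in {0, 1} (e.g., 0 = A, 1 = B); H_session: (m, n_units)
        # snapshots from one test session.  A majority vote over the
        # per-snapshot predictions sharpens a weakly-above-chance
        # single-snapshot classifier into a near-perfect session decoder.
        clf = LinearSVC().fit(H_train, y_train)
        votes = clf.predict(H_session)
        return int(votes.mean() > 0.5)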

References

    1. O’Keefe J., Dostrovsky J., The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 34, 171–175 (1971).
    2. Gluck M. A., Myers C. E., Hippocampal mediation of stimulus representation: A computational theory. Hippocampus 3, 491–516 (1993).
    3. McClelland J. L., Goddard N. H., Considerations arising from a complementary learning systems perspective on hippocampus and neocortex. Hippocampus 6, 654–665 (1996).
    4. Hasselmo M. E., Wyble B. P., Free recall and recognition in a network model of the hippocampus: Simulating effects of scopolamine on human memory function. Behav. Brain Res. 89, 1–34 (1997).
    5. Schapiro A. C., Turk-Browne N. B., Botvinick M. M., Norman K. A., Complementary learning systems within the hippocampus: A neural network modelling approach to reconciling episodic memory with statistical learning. Philos. Trans. R. Soc. Lond. B Biol. Sci. 372, 20160049 (2017).
