Front Neural Circuits. 2023 Jun 21;17:1092933. doi: 10.3389/fncir.2023.1092933. eCollection 2023.

A deep network-based model of hippocampal memory functions under normal and Alzheimer's disease conditions

Tamizharasan Kanagamani et al. Front Neural Circuits. 2023.

Abstract

We present a deep network-based model of the associative memory functions of the hippocampus. The proposed network architecture has two key modules: (1) an autoencoder module, which represents the forward and backward cortico-hippocampal projections, and (2) a module that computes the familiarity of the stimulus and implements hill-climbing over the familiarity, representing the dynamics of the loops within the hippocampus. The proposed network is used in two simulation studies. In the first part of the study, the network is used to simulate image pattern completion by autoassociation under normal conditions. In the second part, the network is extended to a heteroassociative memory and used to simulate a picture-naming task under normal and Alzheimer's disease (AD) conditions. The network is trained on pictures and names of digits from 0 to 9. The encoder layer of the network is partly damaged to simulate AD conditions. As in the case of AD patients, under moderate damage the network recalls superordinate words ("odd" instead of "nine"); under severe damage, it shows a null response ("I don't know"). The neurobiological plausibility of the model is discussed extensively.
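The core mechanism described above, hill-climbing over a learned familiarity signal to settle on a stored pattern, can be sketched as follows. This is a minimal illustration, not the authors' code: the `autoencoder` (with `encode`/`decode` methods) and the scalar-output `value_net` are hypothetical stand-ins for the paper's two modules.

```python
import torch

def hill_climb_familiarity(x, autoencoder, value_net, steps=20, lr=0.1):
    # Encode the (possibly noisy) stimulus into the central-layer code.
    z = autoencoder.encode(x).detach().requires_grad_(True)
    for _ in range(steps):
        familiarity = value_net(z).sum()            # scalar familiarity estimate
        (grad,) = torch.autograd.grad(familiarity, z)
        z = (z + lr * grad).detach().requires_grad_(True)  # climb the familiarity surface
    return autoencoder.decode(z)                     # settled, pattern-completed output
```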

Keywords: Alzheimer’s disease; associative memory recall; autoencoder; dopamine; familiarity; hippocampus; pattern completion; picture-naming task.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

FIGURE 1
(A) A schematic showing how convergent projections from the cortex to the hippocampus achieve pattern separation. (B) A schematic showing how loop dynamics over the cortical representations in the hippocampus achieve pattern completion.
FIGURE 2
(A) Digit images without noise. (B) The image of zero at different noise levels.
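For readers who want to construct inputs like those in Figure 2, one way to corrupt digit images at graded noise levels is sketched below. The paper's exact noise model is not specified on this page, so the pixel-randomization scheme here is an assumption, and `zero_img` is a random placeholder for the actual digit image.

```python
import torch

def add_noise(img, level):
    """Randomize a `level` fraction of pixels in an image tensor in [0, 1]."""
    mask = torch.rand(img.shape) < level
    return torch.where(mask, torch.rand(img.shape), img)

zero_img = torch.rand(1, 28, 28)  # placeholder for the actual image of "0"
noisy_versions = [add_noise(zero_img, p) for p in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5)]
```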
FIGURE 3
Architecture of the standard convolutional autoencoder. The network is trained to reproduce the observed input image as its output; i.e., if there is noise in the input, the network learns to reproduce the noise along with the input.
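A minimal PyTorch sketch of such a standard convolutional autoencoder is given below. The layer sizes are illustrative assumptions for 28 x 28 digit images, not the paper's exact architecture.

```python
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```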
FIGURE 4
Architecture of the recurrent convolutional autoencoder. The network structure and parameters are the same as those of the standard convolutional autoencoder; the one difference is that each iteration's output is used as the input to the next iteration. After multiple iterations, the settled pattern is taken as the output.
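The recurrence the caption describes amounts to running a trained autoencoder on its own output until the pattern settles. A sketch, assuming the hypothetical `ConvAutoencoder` from the previous block; the convergence test (small change between iterations) is an illustrative assumption.

```python
import torch

def settle(model, x, max_iters=10, tol=1e-4):
    """Feed each output back as the next input until the pattern settles."""
    with torch.no_grad():
        for _ in range(max_iters):
            y = model(x)
            if torch.norm(y - x) < tol:  # pattern has settled
                break
            x = y
    return y
```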
FIGURE 5
(A) Schematic diagram of the cortico-hippocampal memory network. (B) Architecture of the value-based convolutional autoencoder network. CL, central layer. Here, the value function estimated from the encoder layer represents the familiarity (correctness value).
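Attaching the familiarity (value) function to the central layer can be sketched by adding a scalar head to the encoder's code. The head's width and depth are illustrative assumptions, and the class extends the hypothetical `ConvAutoencoder` sketched above.

```python
import torch.nn as nn

class ValueConvAutoencoder(ConvAutoencoder):
    def __init__(self):
        super().__init__()
        self.value_head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 64),
            nn.ReLU(),
            nn.Linear(64, 1),   # predicted familiarity (correctness value)
        )

    def forward(self, x):
        z = self.encoder(x)     # central-layer code
        return self.decoder(z), self.value_head(z)
```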
FIGURE 6
Comparison between the actual familiarity value and the network-predicted value at different noise levels.
FIGURE 7
Image reconstruction comparison for the image of three at different noise levels. SCA, standard convolutional autoencoder; RCA, recurrent convolutional autoencoder; ACA, attractor-based convolutional autoencoder.
FIGURE 8
Comparison of reconstruction error between SCA, RCA, and ACA at different noise levels.
FIGURE 9
Network architecture of multimodal autoencoder with an associated value function. CL, central layer. The network receives two types of inputs (Image and Word). The CL establishes the association between the Image encoded feature vector and the Word encoded feature vector. The value function predicts the noise level in the input combination.
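A sketch of the multimodal arrangement: two encoders (image and word) whose codes are fused at the central layer (CL), with decoders for both modalities and a value head predicting the input noise level. All layer sizes, and the flattened image and vector-encoded word inputs, are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultimodalAutoencoder(nn.Module):
    def __init__(self, img_dim=784, word_dim=26, cl_dim=64):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU())
        self.word_enc = nn.Sequential(nn.Linear(word_dim, 128), nn.ReLU())
        self.cl = nn.Linear(256, cl_dim)   # central layer: joint image-word code
        self.img_dec = nn.Sequential(nn.Linear(cl_dim, img_dim), nn.Sigmoid())
        self.word_dec = nn.Sequential(nn.Linear(cl_dim, word_dim), nn.Sigmoid())
        self.value_head = nn.Linear(cl_dim, 1)  # predicted noise level

    def forward(self, img, word):
        z = self.cl(torch.cat([self.img_enc(img), self.word_enc(word)], dim=-1))
        return self.img_dec(z), self.word_dec(z), self.value_head(z)
```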
FIGURE 10
Vector representation of word features in 2D space. These features are generated by presenting the word inputs alone. Two clusters are formed, one for each category (even and odd). This illustrates pattern separation, where similar patterns form a cluster and dissimilar patterns lie far apart in the feature space.
FIGURE 11
FIGURE 11
The response counts for all the number-names and the number-type-names while resetting different percent of neurons (0, 10, 20, 30, 40, 50, and 60%) for the image input of number 9. Here neuronal loss is related to resetting the neurons. Correct response (“nine”) is observed when there is no neuronal loss. For 10–30% neuronal loss, the responses belonging to the same category (“one,” “three,” “five,” “seven,” and “odd”) are observed, which is related to the semantic error. For 40–50% neuronal loss, most responses are non-word responses, which is attributed to no response.
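The damage manipulation in Figure 11, resetting a given percentage of neurons, can be sketched as zeroing a random fraction of central-layer units; the zero-reset is the caption's stated manipulation, while the random selection of which units to reset is an assumption.

```python
import torch

def reset_neurons(code, fraction):
    """Zero a random `fraction` of units in the central-layer code
    to simulate neuronal loss (0.0 to 0.6 in Figure 11)."""
    mask = (torch.rand(code.shape) >= fraction).float()
    return code * mask
```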
FIGURE 12
(A) Response-percentage comparison of the correct number-name ("nine") vs. wrong number-names ("zero," "one," …, "eight") for the image input of "9." (B) Comparison of the percentage of correct number-type-name ("odd") vs. wrong number-type-name ("even") responses for the image input of "9." (C) Sum of the counts of even number-name vs. odd number-name responses for the image input of "4." (D) Sum of the counts of even number-name vs. odd number-name responses for the image input of "9." These results show that the probability of a wrong number-name or wrong number-type response is minimal for a given image input, which accounts for the observed semantic errors.
