Visual and Semantic Representations Predict Subsequent Memory in Perceptual and Conceptual Memory Tests

Simon W Davis et al. Cereb Cortex. 2021 Jan 5;31(2):974-992. doi: 10.1093/cercor/bhaa269.

Abstract

It is generally assumed that the encoding of a single event generates multiple memory representations, which contribute differently to subsequent episodic memory. We used functional magnetic resonance imaging (fMRI) and representational similarity analysis to examine how visual and semantic representations predicted subsequent memory for single item encoding (e.g., seeing an orange). Three levels of visual representations corresponding to early, middle, and late visual processing stages were based on a deep neural network. Three levels of semantic representations were based on normative observed ("is round"), taxonomic ("is a fruit"), and encyclopedic features ("is sweet"). We identified brain regions where each representation type predicted later perceptual memory, conceptual memory, or both (general memory). Participants encoded objects during fMRI, and then completed both a word-based conceptual memory test and a picture-based perceptual memory test. Visual representations predicted subsequent perceptual memory in visual cortices, but also facilitated conceptual and general memory in more anterior regions. Semantic representations, in turn, predicted perceptual memory in visual cortex, conceptual memory in the perirhinal and inferior prefrontal cortex, and general memory in the angular gyrus. These results suggest that the contribution of visual and semantic representations to subsequent memory effects depends on a complex interaction between representation, test type, and storage location.
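As a rough illustration of the representational dissimilarity matrices (RDMs) underlying this kind of analysis, the sketch below builds an RDM as 1 minus the Pearson correlation between item feature vectors. The feature vectors here are hypothetical stand-ins, not the study's actual DNN layer activations or semantic feature norms.

```python
import numpy as np

def model_rdm(features: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix (RDM): 1 - Pearson r
    between each pair of item feature vectors.

    features: (n_items, n_features) array, e.g. DNN layer activations
    or binary semantic feature norms (made-up values in this sketch).
    """
    # np.corrcoef treats rows as variables, so each item's feature
    # vector is correlated with every other item's feature vector.
    return 1.0 - np.corrcoef(features)

# Toy example: 4 items x 6 features (random numbers for illustration)
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 6))
rdm = model_rdm(feats)  # 4 x 4, zero diagonal, symmetric
```

Any dissimilarity measure could be substituted (e.g., Euclidean distance); 1 minus correlation is a common default in RSA work.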

Keywords: DNNs; episodic memory; object representation; representational similarity analysis; semantic memory.


Figures

Figure 1
Task paradigm. (A) Across 2 encoding runs on Day 1, participants viewed 360 object images while covertly naming. (B) Incidental memory tests on Day 2 consisted of previously viewed and novel concepts (conceptual memory test), or previously viewed concepts with previously viewed and novel image exemplars (perceptual memory test).
Figure 2
Four steps of the method employed. (1) RDMs are generated for each visual and semantic representation type investigated. (2) An “activation pattern matrix” is created for each region-of-interest. This matrix tracks the dissimilarity between the fMRI activation patterns across all voxels in the ROI for each pair of stimuli, yielding a matrix of dissimilarity values with the same dimensions as the model RDM. (3) For each brain region, each model RDM is correlated with the activation pattern matrix, yielding a stimulus-brain fit (IRAF) measure for the region. (4) The IRAF is used as an independent variable in regression analyses to identify regions where the IRAF of each RDM predicted subsequent memory in the perceptual memory test but not the conceptual memory test (perceptual memory), in the conceptual memory test but not the perceptual memory test (conceptual memory), or in both memory tests (general memory).
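The model-brain correlation step can be sketched as follows. This is a minimal illustration only: it assumes a Spearman correlation for the stimulus-brain fit and computes one fit value per item by correlating that item's row of the model RDM with the same row of the brain RDM; the paper's exact IRAF computation and regression statistics may differ.

```python
import numpy as np

def spearman(a: np.ndarray, b: np.ndarray) -> float:
    # Spearman rho = Pearson correlation of ranks (no ties assumed here)
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

def item_fit(model_rdm: np.ndarray, brain_rdm: np.ndarray) -> np.ndarray:
    """Per-item stimulus-brain fit: correlate each item's row of the
    model RDM with the same row of the brain (activation pattern)
    RDM, excluding the zero self-dissimilarity on the diagonal."""
    n = model_rdm.shape[0]
    out = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i  # drop the diagonal entry
        out[i] = spearman(model_rdm[i, mask], brain_rdm[i, mask])
    return out

# Toy data: 6 items, two random symmetric "RDMs" (illustration only)
rng = np.random.default_rng(1)
a = rng.random((6, 6)); a = (a + a.T) / 2; np.fill_diagonal(a, 0)
b = rng.random((6, 6)); b = (b + b.T) / 2; np.fill_diagonal(b, 0)
fits = item_fit(a, b)  # one fit value per stimulus
```

Step 4 would then regress each item's subsequent-memory outcome (remembered vs. forgotten in each test) on these per-item fit values.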
Figure 3
RDMs and corresponding descriptive MDS plots for the 3 visual (A: Early DNN Visual Information; B: Middle DNN Visual Information; C: Late DNN Visual Information) and 3 semantic (D: Observed Information; E: Taxonomic Information; F: Encyclopedic Information) representations used in our analyses.
Figure 4
Visual information predicting subsequent perceptual memory, conceptual memory, and general memory. The first row represents regions where memory was predicted by early visual (layer 2 from VGG16) information, the second row corresponds to middle visual (layer 12), and the last row to late visual (layer 22) information.
Figure 5
Semantic information predicting subsequent perceptual memory, conceptual memory, and general memory. The first row represents regions where memory was predicted by observed semantic information (e.g., “is yellow,” or “is round”), the second row corresponds to taxonomic information (e.g., “is an animal”), and the last row to more abstract, encyclopedic (e.g., “lives in caves”, or “is found in markets”) information.
