Predicting recall of words and lists
- PMID: 33090842
- PMCID: PMC8253044
- DOI: 10.1037/xlm0000964
Abstract
For more than half a century, lists of words have served as the memoranda of choice in studies of human memory. To better understand why some words and lists are easier to recall than others, we estimated multivariate models of word and list recall. In each of 23 sessions, subjects (N = 98) studied and recalled the same set of 576 words, presented in 24 study-test lists. Fitting a statistical model to these data revealed positive effects of animacy, contextual diversity, valence, arousal, concreteness, and semantic structure on recall of individual words. We next asked whether a similar approach would allow us to account for list-level variability in recall performance. Here we hypothesized that semantically coherent lists would be most memorable. Consistent with this prediction, we found that semantic similarity, weighted by temporal distance, was a strong positive predictor of list-level recall. Additionally, we found significant effects of average contextual diversity, valence, animacy, and concreteness on list-level recall. Our findings extend previous models of item-level recall and show that aggregate measures of item recallability also account for variability in list-level performance.
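The abstract describes two analyses that lend themselves to a compact sketch: a regression of item-level recall on word properties, and a list-level predictor built from semantic similarity weighted by temporal distance. The Python sketch below is illustrative only: it uses synthetic data, substitutes a plain fixed-effects logistic regression for the paper's multivariate mixed-effects models, and assumes a 1/|i − j| study-position lag weighting, none of which are taken from the paper itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Item-level recall: recall ~ word properties (synthetic stand-in) ---
# Columns stand for the six predictors named in the abstract: animacy,
# contextual diversity, valence, arousal, concreteness, semantic structure.
n_words = 576
X = rng.normal(size=(n_words, 6))
true_beta = np.array([0.4, 0.3, 0.2, 0.2, 0.3, 0.25])    # made-up effect sizes
p_recall = 1.0 / (1.0 + np.exp(-(X @ true_beta - 0.5)))
y = rng.binomial(1, p_recall)                             # 1 = word recalled

# A plain logistic regression; the paper fit mixed-effects models that
# additionally account for subject- and session-level variability.
item_model = LogisticRegression().fit(X, y)
print("estimated effects:", np.round(item_model.coef_.ravel(), 2))

# --- List-level predictor: semantic similarity weighted by temporal distance ---
def weighted_list_similarity(vectors: np.ndarray) -> float:
    """Average pairwise cosine similarity of a list's word embeddings,
    weighted by inverse study-position lag (an assumed weighting scheme)."""
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    n = len(vectors)
    num = den = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            w = 1.0 / (j - i)          # nearby study positions count more
            num += w * cosine(vectors[i], vectors[j])
            den += w
    return num / den

list_vectors = rng.normal(size=(24, 50))  # 24 words per list, toy embeddings
print("weighted similarity:", round(weighted_list_similarity(list_vectors), 3))
```

In a fuller analysis, the weighted similarity score would enter a list-level regression alongside list averages of the item predictors, mirroring the abstract's account of list-level recall.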