NPJ Digit Med. 2025 May 10;8(1):266.
doi: 10.1038/s41746-025-01583-5.

A scoping review of remote and unsupervised digital cognitive assessments in preclinical Alzheimer's disease


Sarah E Polk et al. NPJ Digit Med. 2025.

Abstract

Characterizing subtle cognitive changes in preclinical Alzheimer's disease (AD) is difficult using traditional neuropsychological assessments. Remote and unsupervised digital assessments can improve scalability, measurement reliability, and ecological validity, enabling the capture of subtle changes. We evaluate such tools for use in preclinical AD, that is, in cognitively unimpaired individuals with abnormal levels of AD pathology. We screened 1904 reports for studies remotely assessing cognition in preclinical AD samples. Twenty-three tools were identified, and their usability, reliability, and validity, including construct and criterion validity based on in-person neuropsychological and Aβ/tau measures, were reported. We present a necessary update to a rapidly evolving field, following our previous review (Öhman et al., 2021), and address open questions regarding the feasibility and reliability of remote testing in older adults. Future applications of such tools are discussed, including longitudinal monitoring of cognition, scalable case finding, and individualized prognostics in both clinical trials and healthcare contexts.


Conflict of interest statement

Competing interests: SEP and FÖ declare no competing interests. JH is a paid consultant for Eisai, AlzPath, and Prothena. AK is an employee of ki:elements. KVP has served as a paid consultant for Novoic, Prothena, and Biogen and is on the advisory board for Cogstate. MS has served on advisory boards for Roche and Novo Nordisk, received speaker honoraria from Bioarctic, Eisai, Genentech, Lilly, Novo Nordisk, and Roche, and receives research support (to the institution) from Alzpath, Bioarctic, Novo Nordisk, and Roche (outside the scope of the submitted work); he is a co-founder and shareholder of Centile Bioscience and serves as Associate Editor with Alzheimer’s Research & Therapy. DB is a co-founder and shareholder of neotiv GmbH.

Figures

Fig. 1
Fig. 1. A non-exhaustive taxonomy of the types of digital cognitive assessments.
A hierarchical diagram of categories under which digital cognitive assessments may fall in white, with subgroups and examples of these categories in gray. These categories are not necessarily mutually exclusive. The current review focuses on those assessments that are remotely deployed without supervision, and which use active data collection to quantify cognitive function (i.e., the center of the diagram).
Fig. 2
Fig. 2. Preferred reporting items for systematic reviews and meta-analyses (PRISMA) flow diagram detailing the screening of records.
Number of records identified from which sources (databases and others), pre-screening of duplicates and records in a language other than English, screening and exclusion of records (with reasons), and inclusion of records are detailed.
Fig. 3
Fig. 3. Venn diagram of the tools included in the current review based on the type of cognitive metrics they quantify.
Tools identified in the current scoping review were categorized according to their methodology and the metrics they quantify. Tools measuring conventional cognitive constructs are shown in the teal circle; tools using multi-modal data collection are shown in the green circle; tools capturing speech-based metrics are shown in the red circle; tools using EMA-based protocols are shown in the dark blue circle; and tools quantifying learning curves are shown in the yellow circle. Some tools belong to multiple categories. EMA ecological momentary assessment, ARC Ambulatory Research in Cognition, BRANCH Boston Remote Assessment for Neurocognitive Health, C3 FNAME Computerized Cognitive Composite Face-Name Associative Memory Exam, CANTAB Cambridge Neuropsychological Test Automated Battery, CBB Cogstate Brief Battery, ki:e SB-C ki:elements Speech Biomarker for Cognition, M2C2 Mobile Monitoring of Cognitive Change, MTD Mayo Test Drive, NIH National Institutes of Health, OCTAL Oxford Cognitive Testing Portal, ORCA-LLT Online Repeatable Cognitive Assessment—Language Learning Test, RWLRT ReVeRe Word List Recall Test, SIDE-AD Speech for Intelligent cognition change tracking and DEtection of Alzheimer’s disease, WLA Winterlight Assessment. *Tools awaiting validation for use in preclinical AD samples.


References

    1. Jack, C. R. et al. Revised criteria for diagnosis and staging of Alzheimer's disease: Alzheimer's Association workgroup. Alzheimers Dement. 20, 5143–5169 (2024).
    2. Jack, C. R. et al. NIA-AA research framework: toward a biological definition of Alzheimer's disease. Alzheimers Dement. 14, 535–562 (2018).
    3. Dubbelman, M. A. et al. Cognitive and functional change over time in cognitively healthy individuals according to Alzheimer disease biomarker-defined subgroups. Neurology 102, e207978 (2024).
    4. Jutten, R. J. et al. Why a clinical trial is as good as its outcome measure: a framework for the selection and use of cognitive outcome measures for clinical trials of Alzheimer's disease. Alzheimers Dement. 19, 708–720 (2023).
    5. Jessen, F. et al. A conceptual framework for research on subjective cognitive decline in preclinical Alzheimer’s disease. Alzheimers Dement. 10, 844–852 (2014).
