Review
BMC Med Res Methodol. 2022 Dec 5;22(1):311. doi: 10.1186/s12874-022-01793-5.

External validation of existing dementia prediction models on observational health data

Luis H John et al. BMC Med Res Methodol. 2022.

Abstract

Background: Many dementia prediction models have been developed, but only a few have been externally validated, which hinders clinical uptake and may pose a risk if the models are nevertheless applied to actual patients. Externally validating an existing prediction model is a difficult task, where we mostly rely on the completeness of model reporting in a published article. In this study, we aim to externally validate existing dementia prediction models. To that end, we define model reporting criteria, review published studies, and externally validate three well-reported models using routinely collected health data from administrative claims and electronic health records.

Methods: We identified dementia prediction models that were developed between 2011 and 2020 and assessed if they could be externally validated given a set of model criteria. In addition, we externally validated three of these models (Walters' Dementia Risk Score, Mehta's RxDx-Dementia Risk Index, and Nori's ADRD dementia prediction model) on a network of six observational health databases from the United States, United Kingdom, Germany and the Netherlands, including the original development databases of the models.

Results: We reviewed 59 dementia prediction models. All models reported the prediction method, development database, and target and outcome definitions. Less frequently reported by these 59 prediction models were predictor definitions (52 models) including the time window in which a predictor is assessed (21 models), predictor coefficients (20 models), and the time-at-risk (42 models). The validation of the model by Walters (development c-statistic: 0.84) showed moderate transportability (0.67-0.76 c-statistic). The Mehta model (development c-statistic: 0.81) transported well to some of the external databases (0.69-0.79 c-statistic). The Nori model (development AUROC: 0.69) transported well (0.62-0.68 AUROC) but performed modestly overall. Recalibration showed improvements for the Walters and Nori models, while recalibration could not be assessed for the Mehta model due to unreported baseline hazard.
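The c-statistic (equivalent to the AUROC for binary outcomes) used above to quantify discrimination can be computed directly from predicted risks and observed outcomes. The sketch below is illustrative only, with made-up toy data, and is not code or data from the study:

```python
def c_statistic(risks, outcomes):
    """Concordance (c-statistic / AUROC): the probability that a randomly
    chosen case is assigned a higher predicted risk than a randomly chosen
    non-case; tied predictions count as half-concordant."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    noncases = [r for r, y in zip(risks, outcomes) if y == 0]
    if not cases or not noncases:
        raise ValueError("need at least one case and one non-case")
    concordant = 0.0
    for c in cases:
        for n in noncases:
            if c > n:
                concordant += 1.0
            elif c == n:
                concordant += 0.5
    return concordant / (len(cases) * len(noncases))

# Toy example (hypothetical risks, not data from the study):
risks = [0.9, 0.8, 0.3, 0.2, 0.1]
outcomes = [1, 1, 0, 1, 0]
print(round(c_statistic(risks, outcomes), 3))  # → 0.833
```

A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect discrimination, which is why the transported c-statistics of 0.62-0.79 reported above indicate modest-to-moderate performance.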

Conclusion: We observed that reporting is mostly insufficient to fully externally validate published dementia prediction models, and therefore, it is uncertain how well these models would work in other clinical settings. We emphasize the importance of following established guidelines for reporting clinical prediction models. We recommend that reporting should be more explicit and have external validation in mind if the model is meant to be applied in different settings.
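One common form of the recalibration mentioned above, for logistic-type models, is calibration-in-the-large: the model's intercept is shifted on the new database so the mean predicted risk matches the observed event rate, while the coefficients are left unchanged. A minimal sketch under those assumptions (toy data, not the authors' implementation):

```python
import math

def recalibrate_intercept(linear_predictors, outcomes, tol=1e-10):
    """Calibration-in-the-large: find an intercept shift delta so that the
    mean of sigmoid(lp + delta) equals the observed event rate.
    Solved by bisection, since mean predicted risk is monotone in delta."""
    target = sum(outcomes) / len(outcomes)

    def mean_risk(delta):
        n = len(linear_predictors)
        return sum(1 / (1 + math.exp(-(lp + delta))) for lp in linear_predictors) / n

    lo, hi = -20.0, 20.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_risk(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Toy linear predictors from a hypothetical transported model:
lps = [-2.0, -1.0, 0.0, 1.0]
ys = [0, 0, 1, 1]
delta = recalibrate_intercept(lps, ys)
# After the shift, mean predicted risk equals the observed rate (0.5 here).
```

This is also why the Mehta model could not be recalibrated: for a Cox-type model the analogous step requires the baseline hazard, which was not reported.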

Keywords: Alzheimer; Dementia; External validation; Patient-level prediction; Prognostic model; Transportability.


Conflict of interest statement

Jenna M. Reps is an employee of Janssen Research & Development and a shareholder of Johnson & Johnson. Peter R. Rijnbeek, Egill A. Fridgeirsson, Luis H. John, and Jan A. Kors work for a research group that received unconditional research grants from Boehringer-Ingelheim, GSK, Janssen Research & Development, Novartis, Pfizer, Yamanouchi, and Servier. None of these grants results in a conflict of interest regarding the content of this paper.

Figures

Fig. 1. Patient-level prediction time windows and index date

Fig. 2. Round-trip calibration presented as observed versus expected risks across sex and age for non-recalibrated models: a Walters' Dementia Risk Score on IMRD; b Nori's ADRD prediction model on OPEHR. The shaded area represents the 95% confidence interval of the expected risk
