Cross-validation performance of mortality prediction models

D C Hadorn et al. Stat Med. 1992 Feb 28;11(4):475-89. doi: 10.1002/sim.4780110409.

Abstract

Mortality prediction models hold substantial promise as tools for patient management, quality assessment, and, perhaps, health care resource allocation planning. Yet relatively little is known about the predictive validity of these models. We report here a comparison of the cross-validation performance of seven statistical models of patient mortality: (1) ordinary-least-squares (OLS) regression predicting 0/1 death status six months after admission; (2) logistic regression; (3) Cox regression; (4-6) three unit-weight models derived from the logistic regression, and (7) a recursive partitioning classification technique (CART). We calculated the following performance statistics for each model in both a learning and test sample of patients, all of whom were drawn from a nationally representative sample of 2558 Medicare patients with acute myocardial infarction: overall accuracy in predicting six-month mortality, sensitivity and specificity rates, positive and negative predictive values, and per cent improvement in accuracy rates and error rates over model-free predictions (i.e., predictions that make no use of available independent variables). We developed ROC curves based on logistic regression, the best unit-weight model, the single best predictor variable, and a series of CART models generated by varying the misclassification cost specifications. In our sample, the models reduced model-free error rates at the patient level by 8-22 per cent in the test sample. We found that the performance of the logistic regression models was marginally superior to that of other models. The areas under the ROC curves for the best models ranged from 0.61 to 0.63. Overall predictive accuracy for the best models may be adequate to support activities such as quality assessment that involve aggregating over large groups of patients, but the extent to which these models may be appropriately applied to patient-level resource allocation planning is less clear.
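The sketch below is a minimal illustration (not the authors' code) of the evaluation pattern the abstract describes: fit a logistic regression on a learning sample, then compute accuracy, sensitivity, specificity, positive and negative predictive values, and ROC area on a held-out test sample. The synthetic data, 0.5 cutoff, 50/50 split, and variable names are assumptions for illustration only; the paper's actual data, cutoffs, and model specifications are not reproduced here.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for patient covariates and 0/1 six-month death status.
n_patients, n_covariates = 2558, 10
X = rng.normal(size=(n_patients, n_covariates))
logit = X @ rng.normal(size=n_covariates) * 0.3 - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Learning/test split (the paper's exact split is not specified in the abstract).
X_learn, X_test, y_learn, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_learn, y_learn)
p_test = model.predict_proba(X_test)[:, 1]
y_pred = (p_test >= 0.5).astype(int)  # assumed 0.5 cutoff for classification

# Performance statistics on the test sample.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value
auc = roc_auc_score(y_test, p_test)

print(f"accuracy={accuracy:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
print(f"PPV={ppv:.2f} NPV={npv:.2f} AUC={auc:.2f}")

An analogous loop over CART models with varying misclassification costs, as described in the abstract, would trace out the corresponding ROC points for that method.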
