Crit Care Med. 2007 Sep;35(9):2052-6. doi: 10.1097/01.CCM.0000275267.64078.B0.

Assessing the calibration of mortality benchmarks in critical care: The Hosmer-Lemeshow test revisited


Andrew A Kramer et al. Crit Care Med. 2007 Sep.

Abstract

Objective: To examine the Hosmer-Lemeshow test's sensitivity in evaluating the calibration of models predicting hospital mortality in large critical care populations.
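For reference, the Hosmer-Lemeshow test groups patients (typically into deciles of predicted risk) and compares observed with expected deaths in each group. In standard notation, with O_g observed deaths, n_g patients, and mean predicted probability \bar{\pi}_g in group g,

\[
\hat{C} \;=\; \sum_{g=1}^{G} \frac{(O_g - n_g\,\bar{\pi}_g)^2}{n_g\,\bar{\pi}_g\,(1 - \bar{\pi}_g)},
\]

which is referred to a chi-square distribution with approximately G - 2 degrees of freedom.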

Design: Simulation study.

Setting: Intensive care unit databases used for predictive modeling.

Patients: Data sets were simulated representing the approximate number of patients used in earlier versions of critical care predictive models (n = 5,000 and 10,000) and more recent predictive models (n = 50,000). Each patient had a hospital mortality probability generated as a function of 23 risk variables.

Interventions: None.

Measurements and main results: Data sets of 5,000, 10,000, and 50,000 patients were replicated 1,000 times. Logistic regression models were evaluated for each simulated data set. This process was initially carried out under conditions of perfect fit (observed mortality = predicted mortality; standardized mortality ratio = 1.000) and repeated with an observed mortality that differed slightly (0.4%) from predicted mortality. Under conditions of perfect fit, the Hosmer-Lemeshow test was not influenced by the number of patients in the data set. In situations where there was a slight deviation from perfect fit, the Hosmer-Lemeshow test was sensitive to sample size. For populations of 5,000 patients, 10% of the Hosmer-Lemeshow tests were significant at p < .05, whereas for 10,000 patients 34% of the Hosmer-Lemeshow tests were significant at p < .05. When the number of patients matched contemporary studies (i.e., 50,000 patients), the Hosmer-Lemeshow test was statistically significant in 100% of the models.
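A minimal sketch of the kind of simulation described above may help; it is not the authors' code, and it makes two simplifying assumptions: a single normally distributed risk score stands in for the 23 risk variables, and a fixed 0.4 percentage-point shift in mortality represents the slight departure from perfect fit.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def hosmer_lemeshow(y, p, groups=10):
    # Sort patients by predicted risk and split into equal-size groups (deciles).
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(p)), groups):
        o = y[idx].sum()        # observed deaths in the group
        e = p[idx].sum()        # expected deaths in the group
        n = len(idx)
        pi = e / n              # mean predicted probability in the group
        stat += (o - e) ** 2 / (n * pi * (1 - pi))
    return stat, chi2.sf(stat, groups - 2)

def rejection_rate(n_patients, shift=0.004, reps=200):
    # Fraction of replicates in which the Hosmer-Lemeshow test is significant at p < .05.
    hits = 0
    for _ in range(reps):
        risk = rng.normal(size=n_patients)                    # stand-in summary risk score
        p_pred = 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * risk)))   # "model" probabilities
        p_true = np.clip(p_pred + shift, 0.0, 1.0)            # 0.4% absolute miscalibration
        y = rng.binomial(1, p_true)
        hits += hosmer_lemeshow(y, p_pred)[1] < 0.05
    return hits / reps

for n in (5_000, 10_000, 50_000):
    print(n, rejection_rate(n))

Under this sketch the degree of miscalibration is held fixed while the sample size grows, which mirrors the pattern reported above: the proportion of significant tests rises as n increases from 5,000 to 50,000.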

Conclusions: Caution should be used in interpreting the calibration of predictive models developed using a smaller data set when applied to larger numbers of patients. A significant Hosmer-Lemeshow test does not necessarily mean that a predictive model is not useful or suspect. While decisions concerning a mortality model's suitability should include the Hosmer-Lemeshow test, additional information needs to be taken into consideration. This includes the overall number of patients, the observed and predicted probabilities within each decile, and adjunct measures of model calibration.
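As an illustration of the adjunct checks mentioned here, a short sketch (illustrative only, using simulated predicted probabilities rather than any published model) computes the standardized mortality ratio and an observed-versus-predicted table by decile of predicted risk.

import numpy as np

def calibration_summary(y, p, groups=10):
    smr = y.sum() / p.sum()                     # observed deaths / predicted deaths
    order = np.argsort(p)
    deciles = []
    for idx in np.array_split(order, groups):
        deciles.append((len(idx), y[idx].mean(), p[idx].mean()))
    return smr, deciles                         # per decile: (n, observed rate, mean predicted rate)

rng = np.random.default_rng(1)
p = rng.uniform(0.01, 0.60, size=50_000)        # simulated predicted probabilities
y = rng.binomial(1, p)                          # simulated hospital mortality outcomes
smr, deciles = calibration_summary(y, p)
print(f"SMR = {smr:.3f}")
for g, (n, obs, pred) in enumerate(deciles, 1):
    print(f"decile {g:2d}: n = {n}, observed = {obs:.3f}, predicted = {pred:.3f}")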


Comment in

  • Size matters to a model's fit.
    Marcin JP, Romano PS. Crit Care Med. 2007 Sep;35(9):2212-3. doi: 10.1097/01.CCM.0000281522.70992.EF. PMID: 17713369. No abstract available.
