Acta Anaesthesiol Scand. 2020 Apr;64(4):424-442. doi: 10.1111/aas.13527. Epub 2019 Dec 26.

Mortality prediction models in the adult critically ill: A scoping review

Britt E Keuning et al. Acta Anaesthesiol Scand. 2020 Apr.

Abstract

Background: Mortality prediction models are applied in the intensive care unit (ICU) to stratify patients into different risk categories and to facilitate benchmarking. To ensure that the correct prediction models are applied for these purposes, the best performing models must be identified. As a first step, we aimed to systematically review mortality prediction models in critically ill patients.

Methods: Four databases were searched for mortality prediction models meeting the following criteria: developed for use in adult ICU patients in high-income countries, with mortality as a primary or secondary outcome. Characteristics and performance measures of the models were summarized. Performance was reported in terms of the discrimination, calibration and overall performance measures presented in the original publication.
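
For context, the three families of performance measures referred to here are usually defined as follows; these are standard textbook formulations, not quotations from the reviewed publications:

\text{Discrimination (c-statistic / AUROC):}\quad \mathrm{AUROC} = P\left(\hat{p}_i > \hat{p}_j \mid y_i = 1,\ y_j = 0\right)

\text{Calibration (Hosmer-Lemeshow statistic over } G \text{ risk groups):}\quad H = \sum_{g=1}^{G} \frac{(O_g - E_g)^2}{E_g\left(1 - E_g/n_g\right)} \sim \chi^2_{G-2}

\text{Overall performance (Brier score over } N \text{ patients):}\quad \mathrm{BS} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{p}_i - y_i\right)^2

Here \hat{p}_i is the predicted mortality risk, y_i the observed outcome (1 = died, 0 = survived), O_g and E_g the observed and expected deaths in risk group g, and n_g the group size.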

Results: In total, 43 mortality prediction models were included in the final analysis. Of these, 15 models (35%) were only internally validated, 13 (30%) only externally validated, and 10 (23%) both internally and externally validated by the original researchers. Discrimination was assessed in 42 models (98%). Commonly used calibration measures were the Hosmer-Lemeshow test (60%) and the calibration plot (28%). Calibration was not assessed in 11 models (26%). Overall performance was assessed with the Brier score (19%) and Nagelkerke's R² (4.7%).
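
As an illustration only, the sketch below shows how these measures are typically computed for a set of predicted mortality risks. The data are simulated and the tooling (NumPy, SciPy, scikit-learn) reflects common practice, not the approach of any model included in the review; Nagelkerke's R² is omitted because it requires the fitted model's log-likelihood rather than the predictions alone.

# Illustrative sketch only: the predictions below are simulated, not drawn from any reviewed model.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                                          # observed mortality (1 = died)
y_pred = np.clip(0.4 * y_true + rng.uniform(0.0, 0.6, size=500), 0.01, 0.99)   # hypothetical predicted risks

# Discrimination: area under the ROC curve (c-statistic).
auroc = roc_auc_score(y_true, y_pred)

# Calibration: Hosmer-Lemeshow statistic over deciles of predicted risk.
cuts = np.percentile(y_pred, np.arange(10, 100, 10))
groups = np.digitize(y_pred, cuts)
hl_stat = 0.0
for g in range(10):
    in_group = groups == g
    n_g = in_group.sum()
    if n_g == 0:
        continue
    observed = y_true[in_group].sum()
    expected = y_pred[in_group].sum()
    hl_stat += (observed - expected) ** 2 / (expected * (1.0 - expected / n_g))
hl_p = chi2.sf(hl_stat, df=8)  # conventional df = number of groups - 2

# Overall performance: Brier score (mean squared error of the predicted probabilities).
brier = brier_score_loss(y_true, y_pred)

print(f"AUROC = {auroc:.3f}, Hosmer-Lemeshow p = {hl_p:.3f}, Brier score = {brier:.3f}")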

Conclusions: Mortality prediction models vary in methodology, and the validation and performance of individual models differ. External validation by the original researchers is often lacking, and head-to-head comparisons are urgently needed to identify the best performing mortality prediction models for guiding clinical care and research in different settings and populations.

Keywords: critical care; intensive care unit; mortality prediction model; performance; risk prediction; scoping review.
