Development and validation of a dynamic 48-hour in-hospital mortality risk stratification for COVID-19 in a UK teaching hospital: a retrospective cohort study

Martin Wiegand et al. BMJ Open. 2022 Sep 5;12(9):e060026. doi: 10.1136/bmjopen-2021-060026.
Abstract

Objectives: To develop a disease stratification model for COVID-19 that updates according to changes in a patient's condition while in hospital to facilitate patient management and resource allocation.

Design: In this retrospective cohort study, we adopted a landmarking approach to dynamic prediction of all-cause in-hospital mortality over the next 48 hours. We accounted for informative predictor missingness and selected predictors using penalised regression.
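The full specification of the landmarking model is given in the paper's methods. As a rough, hypothetical illustration of the general pattern only (column names such as landmark_time, death_time and end_time are placeholders, and the handling of informative missingness and competing risks described above is omitted), a landmark dataset and a penalised regression fit might look like this:

```python
# Illustrative sketch of a landmarking setup for 48-hour mortality prediction.
# Column names are hypothetical, not the authors' variables; the published
# pipeline also models informative missingness and competing risks.
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV

def build_landmark_rows(obs: pd.DataFrame, horizon_hours: float = 48.0) -> pd.DataFrame:
    """At each landmark time, label whether the patient dies within the horizon."""
    rows = obs[obs["landmark_time"] < obs["end_time"]].copy()   # still in hospital
    rows["event_48h"] = (
        rows["death_time"].notna()
        & (rows["death_time"] - rows["landmark_time"] <= horizon_hours)
    ).astype(int)
    return rows

# Penalised (L1) logistic regression over candidate predictors, standing in
# for the paper's penalised variable selection.
predictors = ["age", "cfs", "heart_rate", "resp_rate", "spo2_fio2", "wcc", "acidosis", "il6"]

def fit_model(landmark_rows: pd.DataFrame) -> LogisticRegressionCV:
    model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
    model.fit(landmark_rows[predictors], landmark_rows["event_48h"])
    return model
```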

Setting: All data used in this study were obtained from a single UK teaching hospital.

Participants: We developed the model using 473 consecutive patients with COVID-19 presenting to a UK hospital between 1 March 2020 and 12 September 2020, and temporally validated it using data on 1119 patients presenting between 13 September 2020 and 17 March 2021.

Primary and secondary outcome measures: The primary outcome is all-cause in-hospital mortality within 48 hours of the prediction time. We accounted for the competing risks of discharge from hospital alive and transfer to a tertiary intensive care unit for extracorporeal membrane oxygenation.

Results: Our final model includes age, Clinical Frailty Scale score, heart rate, respiratory rate, oxygen saturation/fractional inspired oxygen ratio, white cell count, presence of acidosis (pH <7.35) and interleukin-6. Internal validation achieved an area under the receiver operating characteristic curve (AUROC) of 0.90 (95% CI 0.87 to 0.93) and temporal validation gave an AUROC of 0.86 (95% CI 0.83 to 0.88).
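As a hedged illustration of how a discrimination summary like the one reported above can be computed, the sketch below estimates AUROC with a simple percentile bootstrap. The arrays y_true and y_score are placeholders, and the paper's exact confidence-interval method may differ (for example, resampling by patient rather than by prediction).

```python
# Hypothetical sketch: AUROC point estimate with a percentile-bootstrap 95% CI.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    stats = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if np.unique(y_true[idx]).size < 2:   # need both classes in the resample
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return point, lo, hi
```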

Conclusions: Our model incorporates both static risk factors (eg, age) and evolving clinical and laboratory data, to provide a dynamic risk prediction model that adapts to both sudden and gradual changes in an individual patient's clinical condition. On successful external validation, the model has the potential to be a powerful clinical risk assessment tool.

Trial registration: The study is registered as 'researchregistry5464' on the Research Registry (www.researchregistry.com).

Keywords: COVID-19; epidemiology; risk management; statistics & research methods.


Conflict of interest statement

Competing interests: None declared.

Figures

Figure 1
Performance metrics for in-hospital mortality in the training dataset. (A) Receiver operating characteristic plot, with labels indicating the corresponding threshold and the dashed line indicating the line of no discrimination. (B) Precision-recall plot, with the 2.8% observed incidence indicated by the dashed line. (C) NNE against sensitivity. (D) Calibration plot (with 95% CI), by tenths of predicted risk and a LOESS interpolation (grey), with the dashed line indicating perfect calibration. AUPRC, area under the precision-recall curve; AUROC, area under the receiver operating characteristic; FPR, false positive rate; LOESS, locally estimated scatterplot smoothing; NNE, number needed to evaluate; PPV, positive predictive value; TPR, true positive rate.
Figure 2
Performance metrics for in-hospital mortality in the validation dataset. (A) Receiver operating characteristic plot, with labels indicating the corresponding threshold and the dashed line indicating the line of no discrimination. (B) Precision-recall plot, with the 3.1% observed incidence indicated by the dashed line. (C) NNE against sensitivity. (D) Calibration plot (with 95% CI), by tenths of predicted risk and a LOESS interpolation (grey), with the dashed line indicating perfect calibration. AUPRC, area under the precision-recall curve; AUROC, area under the receiver operating characteristic; FPR, false positive rate; LOESS, locally estimated scatterplot smoothing; NNE, number needed to evaluate; PPV, positive predictive value; TPR, true positive rate.
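The quantities plotted in both figures can be reproduced along the following lines. This is a generic sketch, not the authors' plotting code, and the array names are placeholders: NNE is taken as 1/PPV at each threshold, and calibration is summarised by tenths of predicted risk.

```python
# Sketch of the figure metrics: number needed to evaluate (NNE = 1/PPV) across
# thresholds, and calibration by tenths of predicted risk. Binning and smoothing
# in the published plots may differ.
import numpy as np
import pandas as pd

def nne_by_sensitivity(y_true, y_score, thresholds):
    out = []
    for t in thresholds:
        pred = y_score >= t
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        sens = tp / np.sum(y_true == 1)
        ppv = tp / max(tp + fp, 1)
        out.append((t, sens, 1.0 / ppv if ppv > 0 else np.inf))
    return pd.DataFrame(out, columns=["threshold", "sensitivity", "nne"])

def calibration_by_decile(y_true, y_score):
    df = pd.DataFrame({"y": y_true, "p": y_score})
    df["decile"] = pd.qcut(df["p"], 10, labels=False, duplicates="drop")
    return df.groupby("decile").agg(mean_predicted=("p", "mean"), observed=("y", "mean"))
```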
