Front Med (Lausanne). 2025 Jul 4;12:1604388. doi: 10.3389/fmed.2025.1604388. eCollection 2025.

Baseline predictors for 28-day COVID-19 severity and mortality among hospitalized patients: results from the IMPACC study

Jintong Hou et al.

Abstract

Introduction: The coronavirus disease 2019 (COVID-19) pandemic threatened public health and placed a significant burden on medical resources. The Immunophenotyping Assessment in a COVID-19 Cohort (IMPACC) study collected clinical, demographic, blood cytometry, serum receptor-binding domain (RBD) antibody titer, metabolomics, targeted proteomics (Olink), nasal metagenomics, nasal viral load, autoantibody, SARS-CoV-2 antibody titer, and nasal and peripheral blood mononuclear cell (PBMC) transcriptomics data from patients hospitalized with COVID-19. The aim of this study was to select baseline biomarkers and build models that predict 28-day in-hospital COVID-19 severity and mortality using the most predictive variables, while prioritizing those collected routinely.

Methods: We analyzed 1,102 hospitalized COVID-19 participants. We used lasso regression and forward selection to identify the top predictors of severity and mortality, built predictive models on balanced training data, and then validated the models on held-out testing data.
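The forward-selection step described above can be sketched in a few lines. This is a minimal illustration, not the IMPACC pipeline: the data layout, the equal-weight sum score used as the model, and all variable names are assumptions for the sketch, and AUC is computed with the rank-based Mann-Whitney formulation.

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: fraction of (positive, negative) pairs ranked
    correctly, counting ties as half (Mann-Whitney U / (n_pos * n_neg))."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def forward_select(X_pos, X_neg, features, k):
    """Greedy forward selection: at each step, add the feature whose
    inclusion (in an equal-weight sum score) yields the highest AUC."""
    chosen = []
    for _ in range(k):
        best = max(
            (f for f in features if f not in chosen),
            key=lambda f: auc(
                [sum(row[g] for g in chosen + [f]) for row in X_pos],
                [sum(row[g] for g in chosen + [f]) for row in X_neg],
            ),
        )
        chosen.append(best)
    return chosen

# Tiny synthetic example: feature "a" separates the groups, "b" is noise,
# so forward selection picks "a" first.
X_pos = [{"a": 5, "b": 0}, {"a": 6, "b": 1}]
X_neg = [{"a": 1, "b": 1}, {"a": 2, "b": 0}]
selected = forward_select(X_pos, X_neg, ["a", "b"], 1)  # → ["a"]
```

In practice each candidate feature set would be scored with a fitted model (e.g., logistic regression) on the balanced training data rather than an unweighted sum, but the greedy add-the-best-feature loop is the same.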

Results: Severity was best predicted by the baseline SpO2/FiO2 ratio (test AUC: 0.874). Adding patient age, BMI, FGF23, IL-6, and LTA to the severity prediction model improved the test AUC by a further 3%. The clinical mortality prediction model using the SpO2/FiO2 ratio, age, and BMI achieved a test AUC of 0.83; adding laboratory markers such as TNFRSF11B and plasma ribitol improved the test AUC by 3.5%. The severity and mortality prediction models we developed outperform the Sequential Organ Failure Assessment (SOFA) score among inpatients and perform similarly to the SOFA score among ICU patients.

Conclusion: This study identifies clinical variables and laboratory biomarkers of COVID-19 severity and mortality using machine learning models. The SpO2/FiO2 ratio was the most important predictor of both severity and mortality, and several biomarkers were found to modestly improve the predictions. The results also characterize the host response to SARS-CoV-2 infection during the early stages of the coronavirus's emergence, and can serve as a reference for future studies of how the virus's genetic evolution affects the host response to new variants.

Keywords: COVID-19; FGF23; SpO2/FiO2; TNFRSF11B; machine learning; mortality; ribitol; severity.


Conflict of interest statement

The Icahn School of Medicine at Mount Sinai has filed patent applications relating to SARS-CoV-2 serological assays, NDV-based SARS-CoV-2 vaccines, influenza virus vaccines, and influenza virus therapeutics, which list Florian Krammer as co-inventor. Mount Sinai has spun out a company, Kantaro, to market serological tests for SARS-CoV-2 and another company, Castlevax, to develop SARS-CoV-2 vaccines. Florian Krammer is a co-founder and scientific advisory board member of Castlevax. Florian Krammer has consulted for Merck, Curevac, Seqirus, GSK, and Pfizer and is currently consulting for 3rd Rock Ventures, Sanofi, Gritstone, and Avimex. The Krammer laboratory is also collaborating with Dynavax on influenza vaccine development and with VIR on influenza virus therapeutics development. Viviana Simon is a co-inventor on a patent filed relating to SARS-CoV-2 serological assays (the “Serology Assays”). Ofer Levy is a named inventor on patents held by Boston Children's Hospital relating to vaccine adjuvants and human in vitro platforms that model vaccine action. His laboratory has received research support from GlaxoSmithKline (GSK), and he is a co-founder of and advisor to Ovax, Inc. Charles Cairns serves as a consultant to bioMerieux and is funded by a grant from the Bill & Melinda Gates Foundation. James A. Overton is a consultant at Knocean Inc. Jessica Lasky-Su serves as a scientific advisor of Precion Inc. Scott R. Hutton, Greg Michelloti, and Kari Wong are employees of Metabolon Inc. Vicki Seyfer-Margolis is a current employee of MyOwnMed. Nadine Rouphael reports grants or contracts with Merck, Sanofi, Pfizer, Vaccine Company, and Immorna, and has participated in data safety monitoring boards and selected advisory boards for Moderna, Sanofi, Seqirus, Pfizer, EMMES, ICON, BARDA, and CyanVan, Imunon Micron. N.R. has also received support for meetings/travel from Sanofi and Moderna and honoraria from Virology Education and Krog Consulting.
Adeeb Rahman is a current employee of Immunai Inc. Steven Kleinstein is a consultant related to the ImmPort data repository for Peraton. Nathan Grabaugh is a consultant for Tempus Labs and the National Basketball Association. Akiko Iwasaki is a consultant for 4BIO, Blue Willow Biologics, Revelar Biotherapeutics, RIGImmune, Xanadu Bio, Paratus Sciences. Monika Kraft receives research funds paid to her institution from NIH, ALA, Sanofi, and Astra-Zeneca for work in asthma, serves as a consultant for Astra-Zeneca, Sanofi, Chiesi, and GSK for severe asthma; is a co-founder and CMO for RaeSedo, Inc., a company created to develop peptidomimetics for the treatment of inflammatory lung disease. Esther Melamed received research funding from Babson Diagnostics and an honorarium from the Multiple Sclerosis Association of America and has served on the advisory boards of Genentech, Horizon, Teva, and Viela Bio. Carolyn Calfee receives research funding from NIH, FDA, DOD, Roche-Genentech, and Quantum Leap Healthcare Collaborative, as well as consulting services for Janssen, Vasomune, Gen1e Life Sciences, NGMBio, and Cellenkos. Wade Schulz was an investigator for a research agreement, through Yale University, from the Shenzhen Center for Health Information for work to advance intelligent disease prevention and health promotion; collaborates with the National Center for Cardiovascular Diseases in Beijing; is a technical consultant to Hugo Health, a personal health information platform; cofounder of Refactor Health, an AI-augmented data management platform for healthcare; and has received grants from Merck and Regeneron Pharmaceutical for research related to COVID-19. Grace A McComsey received research grants from Rehdhill, Cognivue, Pfizer, and Genentech, and served as a research consultant for Gilead, Merck, Viiv/GSK, and Janssen. Linda N. Geng received research funding paid to her institution from Pfizer, Inc. 
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The author(s) declared that they were an editorial board member of Frontiers at the time of submission; this had no impact on the peer review process or the final decision.

Figures

Figure 1
Comparison of receiver operating characteristic (ROC) curves of SpO2/FiO2 and SOFA for predicting 28-day COVID-19 severity among inpatients. (A) ROC on the training set (severe, n = 186; non-severe, n = 472). SpO2/FiO2: AUC = 0.865 (95% CI: 0.8308–0.8992, sensitivity = 79.6%, specificity = 81.1%, probability cut-off = 0.5, i.e., SpO2/FiO2 = 285.5). SOFA score: AUC = 0.805 (95% CI: 0.7659–0.8438, sensitivity = 58.1%, specificity = 87.7%). (B) ROC on the testing set (severe, n = 119; non-severe, n = 325). SpO2/FiO2: AUC = 0.874 (95% CI: 0.8345–0.9131, sensitivity = 78.2%, specificity = 83.4%). SOFA score: AUC = 0.743 (95% CI: 0.6869–0.7985, sensitivity = 56.3%, specificity = 85.8%). Paired DeLong's test was used to obtain p-values.
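A single SpO2/FiO2 cutoff such as the 285.5 reported in the legend above corresponds to one (sensitivity, specificity) operating point on the ROC curve. The sketch below illustrates that mapping; the patient values are synthetic, not IMPACC data, and severity is flagged when the ratio falls at or below the cutoff (a lower ratio indicates worse oxygenation).

```python
def sens_spec(severe, non_severe, cutoff):
    """Sensitivity and specificity of the rule 'predict severe if
    SpO2/FiO2 <= cutoff' on two lists of observed ratios."""
    tp = sum(x <= cutoff for x in severe)       # severe correctly flagged
    tn = sum(x > cutoff for x in non_severe)    # non-severe correctly cleared
    return tp / len(severe), tn / len(non_severe)

# Hypothetical SpO2/FiO2 ratios for illustration only.
severe = [120, 200, 250, 300]
non_severe = [290, 350, 400, 450]

sens, spec = sens_spec(severe, non_severe, 285.5)  # → (0.75, 1.0)
```

Sweeping the cutoff over all observed values and plotting sensitivity against 1 − specificity traces out the full ROC curve whose area is the AUC reported in the figure.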
Figure 2
(A–C) Comparison of normalized FGF23, IL-6, and LTA between the severe and non-severe cohorts in the merged dataset (Olink data merged with clinical data; severe, n = 292; non-severe, n = 761). P < 0.001 based on two-sample t-tests for all three comparisons. (D–F) Comparison of normalized TNFRSF11B (Olink feature; alive, n = 958; deceased, n = 95), ribitol, and urea (metabolomics features; alive, n = 908; deceased, n = 90) between the deceased and alive cohorts in the merged datasets (each merged with the clinical dataset). P < 0.001 based on two-sample t-tests for all three comparisons.
Figure 3
Comparison of ROC curves of the clinical model (SpO2/FiO2 + age + BMI) and the full model (SpO2/FiO2 + age + BMI + FGF23 + IL-6 + LTA) for predicting 28-day in-hospital severity. (A) ROC on the training set (severe, n = 177; non-severe, n = 450). Clinical model: AUC = 0.884 (95% CI: 0.8539–0.9141, sensitivity = 0.802, specificity = 0.82). Full model: AUC = 0.922 (95% CI: 0.8993–0.9446, sensitivity = 0.836, specificity = 0.867). (B) ROC on the testing set (severe, n = 115; non-severe, n = 311). Clinical model: AUC = 0.886 (95% CI: 0.8474–0.9241, sensitivity = 0.783, specificity = 0.83). Full model: AUC = 0.916 (95% CI: 0.882–0.9491, sensitivity = 0.817, specificity = 0.859). Paired DeLong's test was used to obtain p-values.
Figure 4
Comparison of ROC curves of the clinical model (SpO2/FiO2 + age + BMI) and SOFA for predicting 28-day in-hospital mortality. (A) ROC on the training set (deceased, n = 57; alive, n = 601). Clinical model: AUC = 0.827 (95% CI: 0.774–0.881, sensitivity = 77.2%, specificity = 73.2%). SOFA score: AUC = 0.774 (95% CI: 0.71–0.838, sensitivity = 50.9%, specificity = 83.9%). (B) ROC on the testing set (deceased, n = 42; alive, n = 402). Clinical model: AUC = 0.834 (95% CI: 0.782–0.887, sensitivity = 81%, specificity = 75.9%). SOFA score: AUC = 0.711 (95% CI: 0.629–0.793, sensitivity = 42.9%, specificity = 84.6%). Paired DeLong's test was used to obtain p-values.
