J Med Internet Res. 2023 Feb 16;25:e42717. doi: 10.2196/42717.

Deep Learning With Chest Radiographs for Making Prognoses in Patients With COVID-19: Retrospective Cohort Study

Hyun Woo Lee et al.

Abstract

Background: An artificial intelligence (AI) model using chest radiography (CXR) may provide good performance in making prognoses for COVID-19.

Objective: We aimed to develop and validate a prediction model using CXR based on an AI model and clinical variables to predict clinical outcomes in patients with COVID-19.

Methods: This retrospective longitudinal study included patients hospitalized for COVID-19 at multiple COVID-19 medical centers between February 2020 and October 2020. Patients at Boramae Medical Center were randomly classified into training, validation, and internal testing sets (at a ratio of 8:1:1). An AI model using initial CXR images as input, a logistic regression model using clinical information, and a combined model using the output of the AI model (as a CXR score) together with clinical information were developed and trained to predict hospital length of stay (LOS) ≤2 weeks, need for oxygen supplementation, and acute respiratory distress syndrome (ARDS). The models were externally validated in the Korean Imaging Cohort of COVID-19 data set for discrimination and calibration.
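
The abstract does not give implementation details; the following is only a minimal sketch, assuming scikit-learn and hypothetical feature names, of how a "combined model" can fuse an AI-derived CXR score with clinical variables through logistic regression.

```python
# A minimal sketch, not the authors' code: feeding an AI-derived CXR score plus
# clinical variables into a logistic regression ("combined model"). All variable
# names and the synthetic data below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_patients = 500

cxr_score = rng.uniform(0, 1, n_patients)       # stand-in for the deep learning model's output
age = rng.normal(60, 15, n_patients)            # example clinical variable
comorbidity = rng.integers(0, 2, n_patients)    # example clinical variable (0/1)

# Synthetic outcome label (e.g., ARDS), used only to make the sketch runnable
ards = (cxr_score + 0.01 * age + rng.normal(0, 0.5, n_patients) > 1.3).astype(int)

X = np.column_stack([cxr_score, age, comorbidity])
combined_model = LogisticRegression(max_iter=1000).fit(X, ards)

predicted_risk = combined_model.predict_proba(X)[:, 1]   # per-patient outcome probability
print(f"Predicted ARDS risk, first patient: {predicted_risk[0]:.1%}")
```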

Results: The AI model using CXR and the logistic regression model using clinical variables were suboptimal for predicting hospital LOS ≤2 weeks or the need for oxygen supplementation but performed acceptably in predicting ARDS (AI model area under the curve [AUC] 0.782, 95% CI 0.720-0.845; logistic regression model AUC 0.878, 95% CI 0.838-0.919). The combined model performed better in predicting the need for oxygen supplementation (AUC 0.704, 95% CI 0.646-0.762) and ARDS (AUC 0.890, 95% CI 0.853-0.928) than the CXR score alone. Both the AI and combined models showed good calibration for predicting ARDS (P=.079 and P=.859, respectively).
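
The abstract reports discrimination (AUC with 95% CI) and calibration P values without specifying the statistical procedures, so the sketch below assumes a bootstrap CI for the AUC and a Hosmer-Lemeshow-style goodness-of-fit test, applied to hypothetical arrays y_true and risk (for example, the outputs of the sketch above).

```python
# A hedged sketch of the two evaluation steps reported above: discrimination (AUC
# with a bootstrap 95% CI) and calibration (a Hosmer-Lemeshow-style chi-square test).
# The paper's exact procedures may differ; y_true and risk are placeholder NumPy arrays.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true, risk, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():   # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], risk[idx]))
    return roc_auc_score(y_true, risk), np.percentile(aucs, [2.5, 97.5])

def hosmer_lemeshow_p(y_true, risk, n_groups=10):
    # Split patients into deciles of predicted risk, then compare observed vs expected events.
    deciles = np.array_split(np.argsort(risk), n_groups)
    stat = 0.0
    for g in deciles:
        observed, expected, n_g = y_true[g].sum(), risk[g].sum(), len(g)
        stat += (observed - expected) ** 2 / (expected * (1 - expected / n_g) + 1e-9)
    return chi2.sf(stat, df=n_groups - 2)            # P > .05 suggests acceptable calibration
```

With the synthetic example above, auc_with_bootstrap_ci(ards, predicted_risk) returns the AUC point estimate and a percentile CI, and hosmer_lemeshow_p(ards, predicted_risk) returns a calibration P value analogous to those quoted in the results.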

Conclusions: The combined prediction model, comprising the CXR score and clinical information, was externally validated as having acceptable performance in predicting severe illness and excellent performance in predicting ARDS in patients with COVID-19.

Keywords: AI model; COVID-19; artificial intelligence; clinical outcome; deep learning; machine learning; medical imaging; prediction model; prognosis; radiography, thoracic.

Conflict of interest statement

Conflicts of Interest: HK received consulting fees from Radisen and holds stock and stock options in MEDICAL IP. Outside this study, SHY works as a chief medical officer at MEDICAL IP.

Figures

Figure 1
Illustration of the data flow model for artificial intelligence–assisted prediction. Our deep learning model was developed in two stages to ensure robust performance: (1) backbone training and (2) model training. ARDS: acute respiratory distress syndrome. Conv2D: Convolution 2D.
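
The legend does not specify the network architecture; the PyTorch sketch below is only an illustration, under assumptions, of the two-stage idea described for Figure 1: a Conv2D backbone trained first, then a small prediction head trained on top of the (here frozen) backbone for the three outcomes.

```python
# Illustrative PyTorch sketch (assumed, not the published architecture) of the
# two-stage pipeline in Figure 1: (1) backbone training, (2) model (head) training.
import torch
import torch.nn as nn

backbone = nn.Sequential(                          # stage 1: Conv2D feature extractor
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 3)                            # stage 2: one logit per outcome (LOS, oxygen, ARDS)

for p in backbone.parameters():                    # after stage 1, freeze the backbone
    p.requires_grad = False

x = torch.randn(4, 1, 224, 224)                    # placeholder batch of grayscale CXR images
cxr_scores = torch.sigmoid(head(backbone(x)))      # per-outcome probabilities ("CXR scores")
print(cxr_scores.shape)                            # torch.Size([4, 3])
```
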
Figure 2
Representative cases in the test set database. (A) Chest radiograph of a 65-year-old woman who survived for 32 days after hospitalization. She had no cardiopulmonary comorbidities. She required oxygen supplementation but did not meet the operational definition of acute respiratory distress syndrome. The radiograph shows multiple consolidations and ground-glass opacities in both lung fields. The heat map mainly distinguishes the focal consolidative opacities of both lung fields. The image demonstrates red areas not only in the right lower and left upper lung fields but also around both shoulder joints, because lung segmentation was not applied in our model. The combined model, using chest radiography scores and clinical information, predicted a 40.9% chance of hospital length of stay ≤2 weeks, 74.5% chance of oxygen supplementation, and 33% chance of developing acute respiratory distress syndrome. (B) Chest radiograph of a 93-year-old man who died after 18 days of hospitalization. This patient had a previous history of heart disease. He required oxygen supplementation and met the operational definition for acute respiratory distress syndrome. The radiograph shows diffuse ground-glass opacities in both lung fields. The heat map mainly distinguishes the bilateral ground-glass opacities of both lung fields. The combined model, using chest radiography scores and clinical information, predicted a 57.8% chance of hospital length of stay ≤2 weeks, 96.4% chance of oxygen supplementation, and 99.1% chance of acute respiratory distress syndrome.
Figure 3
Externally validated performance of the artificial intelligence model with chest radiography score, logistic regression model with clinical information, and the combined prediction model. (A) Hospital LOS ≤2 weeks. (B) Oxygen supplementation. (C) Development of ARDS. ARDS: acute respiratory distress syndrome; AUC: area under the curve; CXR: chest radiography; LOS: length of stay.
