Multimed Tools Appl. 2022;81(21):30615-30645. doi: 10.1007/s11042-022-12156-z. Epub 2022 Apr 7.

COVID-CXNet: Detecting COVID-19 in frontal chest X-ray images using deep learning


Arman Haghanifar et al. Multimed Tools Appl. 2022.

Abstract

Capturing a chest x-ray image is one of the primary clinical observations for screening for the novel coronavirus. In most patients, the chest x-ray contains abnormalities, such as consolidation, resulting from COVID-19 viral pneumonia. This study investigates how to efficiently detect the imaging features of this type of pneumonia using deep convolutional neural networks on a large dataset. It is demonstrated that simple models, along with the majority of pretrained networks in the literature, focus on irrelevant features for decision-making. Numerous chest x-ray images are collected from several sources, and one of the largest publicly accessible datasets is prepared. Finally, using the transfer-learning paradigm, the well-known CheXNet model is used to develop COVID-CXNet. This model detects novel coronavirus pneumonia based on relevant, meaningful features with precise localization. COVID-CXNet is a step toward a fully automated and robust COVID-19 detection system.
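The transfer-learning paradigm the abstract describes (reusing a pretrained feature extractor such as CheXNet's backbone and training only a new classification head) can be sketched in miniature with NumPy. This is an illustrative stand-in, not the paper's implementation: the frozen extractor is replaced by a fixed random projection, and the data and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained feature extractor (e.g. a DenseNet-121
# backbone): a fixed random projection from "pixels" to a feature vector.
W_frozen = rng.normal(size=(1024, 64))

def extract_features(x):
    # x: (n, 1024) flattened images -> (n, 64) features; weights never updated
    return np.tanh(x @ W_frozen)

# Synthetic two-class data standing in for COVID-positive vs. normal CXRs
X = rng.normal(size=(200, 1024))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

F = extract_features(X)

# Trainable classification head: logistic regression on the frozen features
w, b = np.zeros(F.shape[1]), 0.0
lr = 0.5
losses = []
for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))        # sigmoid predictions
    losses.append(-np.mean(y * np.log(p + 1e-9)
                           + (1 - y) * np.log(1 - p + 1e-9)))
    grad = p - y                                   # dL/dlogit for cross-entropy
    w -= lr * (F.T @ grad) / len(y)                # only the head is updated
    b -= lr * grad.mean()

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In the paper's actual setup the frozen stage is a deep CNN pretrained on chest x-rays and the head is fine-tuned on the collected COVID dataset; the sketch only shows the division of labor between frozen features and a trainable classifier.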

Keywords: COVID-19; CheXNet; Chest X-ray; Convolutional neural networks; Imaging features.


Figures

Fig. 1: Randomly selected frontal CXR images from different sources
Fig. 2: Dataset distribution. The COVIDGR dataset, with 426 images, is the largest single contributor to our dataset
Fig. 3: Descriptive charts for categorical variables: (a) x-ray image view, (b) patient sex, (c) PCR test result, and (d) patient chance of survival
Fig. 4: Most commonly reported symptoms among COVID-19 patients
Fig. 5: Different image enhancement methods: (a) the original image, (b) histogram equalization (HE), (c) adaptive histogram equalization (AHE), and (d) contrast-limited AHE (CLAHE)
Fig. 6: BEASF with different hyperparameter values, compared with the original image and CLAHE
Fig. 7: The segmentation approach based on the U-Net
Fig. 8: A high-level illustration of the base model
Fig. 9: Co-occurrence of different CXR findings, shown as a circular diagram by [54]
Fig. 10: COVID-CXNet model architecture with the DenseNet-121 feature extractor as the backbone
Fig. 11: Learning curves for the base model trained on (a) 450 images and (b) 600 images
Fig. 12: Grad-CAM heatmaps of the base model for six positive-class images. The highlighted regions are wrong in most images, even though classification scores are notably high
Fig. 13: Results of the base model trained on 3,400 images: (a) training loss, (b) training accuracy, and (c) receiver operating characteristic (ROC) plot
Fig. 14: Model interpretability visualization by (a) Grad-CAM and (b) LIME image explanation
Fig. 15: DenseNet-121 fine-tuning curve over 10 epochs
Fig. 16: CheXNet class probabilities for (a) a COVID-19-positive case and (b) a normal case
Fig. 17: Grad-CAM visualization of the proposed model over sample cases
Fig. 18: Comparison between CheXNet and the proposed model: (a) an image with patchy opacities in the upper-left zone; (b) and (c) heatmaps from CheXNet and the proposed COVID-CXNet, respectively
Fig. 19: Effect of text removal on model results. The images on the right contain dates and markings, which are concealed in the images on the left
Fig. 20: Grad-CAM visualization of the proposed model, trained with lung-segmented CXRs, over sample cases
Fig. 21: COVID-CXNet multiclass classification visualization results
Fig. 22: Comparison of learning curves for (a) training loss and (b) validation loss. Validation-loss curves are smoothed for better readability
Fig. 23: Grad-CAMs from COVID-CXNet
Fig. 24: Grad-CAMs from COVID-CXNet with the lung segmentation module
Fig. 25: Grad-CAMs from the multiclass COVID-CXNet
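Several of the figures above show Grad-CAM heatmaps used to check which image regions drive the model's decisions. The core Grad-CAM computation (channel weights from spatially averaged gradients, then a ReLU-ed weighted sum of feature maps) is compact enough to sketch with NumPy; the arrays below are illustrative stand-ins for a real network's convolutional activations and gradients, not outputs of COVID-CXNet.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one convolutional layer.

    feature_maps: (H, W, C) activations of the chosen layer
    gradients:    (H, W, C) d(class score)/d(activations)
    returns:      (H, W) importance map scaled to [0, 1]
    """
    # alpha_c: global-average-pool the gradients over the spatial dimensions
    weights = gradients.mean(axis=(0, 1))                      # shape (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)
    # Normalize so the map can be overlaid on the input as a heatmap
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(1)
A = rng.random((7, 7, 32))           # stand-in conv-layer activations
dYdA = rng.normal(size=(7, 7, 32))   # stand-in gradients of the class score
heatmap = grad_cam(A, dYdA)
print(heatmap.shape, float(heatmap.min()), float(heatmap.max()))
```

In practice the heatmap is upsampled to the input resolution and overlaid on the chest x-ray, which is how the paper exposes models that score highly while attending to irrelevant regions such as embedded text.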


References

    1. Ai T et al (2020) Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology:200642
    2. Al-Karawi D, Al-Zaidi S, Polus N, Jassim S (2020) AI-based chest x-ray (CXR) scan texture analysis algorithm for digital test of COVID-19 patients. medRxiv
    3. Almuhayar M, Lu HH-S, Iriawan N (2019) Classification of abnormality in chest x-ray images by transfer learning of CheXNet. In: 2019 3rd international conference on informatics and computational sciences (ICICoS). IEEE, pp 1–6
    4. Arriaga-Garcia EF, Sanchez-Yanez RE, Garcia-Hernandez M (2014) Image enhancement using bi-histogram equalization with adaptive sigmoid functions. In: 2014 International conference on electronics, communications and computers (CONIELECOMP). IEEE, pp 28–34
    5. Bai HX, Hsieh B, Xiong Z, Halsey K, Choi JW, Tran TML, Pan I, Shi L-B, Wang D-C, Mei J et al (2020) Performance of radiologists in differentiating COVID-19 from viral pneumonia on chest CT. Radiology:200823
