Front Artif Intell. 2022 Aug 5;5:919672. doi: 10.3389/frai.2022.919672. eCollection 2022.

COVID-19 diagnosis using deep learning neural networks applied to CT images


Andronicus A Akinyelu et al. Front Artif Intell. 2022.

Abstract

COVID-19, a deadly and highly contagious disease, has caused the deaths of millions of people around the world. Early detection can reduce both transmission and the fatality rate. Many deep learning (DL) based COVID-19 detection methods have been proposed, but most are trained on small, incomplete, noisy, or imbalanced datasets, and many use only a small number of COVID-19 samples. This study addresses these concerns by introducing DL-based solutions for COVID-19 diagnosis using computed tomography (CT) images and 12 cutting-edge pre-trained DL models with acceptable Top-1 accuracy. All the models are trained on 9,000 COVID-19 samples and 5,000 normal images, more COVID-19 images than are used in most studies. In addition, while most prior research used X-ray images for training, this study used CT images; CT scans capture blood vessels, bones, and soft tissue more effectively than X-rays. The proposed techniques were evaluated, and the results show that NASNetLarge produced the best classification accuracy, followed by InceptionResNetV2 and DenseNet169; the three models achieved accuracies of 99.86%, 99.79%, and 99.71%, respectively. Moreover, DenseNet121 and VGG16 achieved the best sensitivity (99.94%), while InceptionV3 and InceptionResNetV2 achieved the best specificity (100%). The models were compared with those designed in three existing studies and produced better results. The results show that deep neural networks have potential for computer-assisted COVID-19 diagnosis. We hope this study will be valuable in improving the decisions and accuracy of medical practitioners when diagnosing COVID-19, and that it will help future researchers minimize repeated analysis and identify the ideal network for their tasks.
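The transfer-learning setup described above attaches a small classifier head (a pooling layer, a fully-connected layer, and an output layer, per Figure 1) to a pre-trained backbone. The following NumPy sketch illustrates that head numerically; it is not the paper's implementation, and the feature-map shape, hidden width, and random weights are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated backbone output: feature maps from a pre-trained CNN.
# The 7x7x512 shape is an illustrative placeholder, not the exact
# output shape of any of the 12 networks in the study.
features = rng.standard_normal((7, 7, 512))

# 1. Global average pooling: one scalar per feature map.
pooled = features.mean(axis=(0, 1))            # shape (512,)

# 2. Fully-connected layer with ReLU (weights are random stand-ins
#    for trained parameters).
W_fc = rng.standard_normal((512, 128)) * 0.01
b_fc = np.zeros(128)
hidden = np.maximum(0.0, pooled @ W_fc + b_fc)

# 3. Output layer: softmax over the two classes (COVID-19 vs. normal).
W_out = rng.standard_normal((128, 2)) * 0.01
logits = hidden @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In practice only the head's weights (and optionally the last backbone layers) are trained, which is what makes 14,000 images sufficient for networks of this size.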

Keywords: COVID-19 diagnosis; CT images; convolutional neural network; deep learning networks; pre-trained models.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1
An overview of the network architecture used in this study. The output of each pre-trained network is passed through one pooling layer, one fully-connected layer, and one output layer.
Figure 2
Samples of normal CT images (row 1) and COVID-19 CT images (row 2) used for evaluation (Gunraj et al., 2021).
Figure 3
Classification accuracy and sensitivity of models without data augmentation.
Figure 4
Classification accuracy and sensitivity of models with data augmentation.
Figure 5
Comparison between models with data augmentation and models without data augmentation.
Figure 6
Localization maps for ResNet101V2, DenseNet169, NASNetLarge, VGG16 (without data augmentation).
Figure 7
Localization maps for ResNet101V2, DenseNet169, NASNetLarge, VGG16 (with data augmentation).
Figure 8
DenseNet121 without data augmentation (left image) and with data augmentation (right image).
Figure 9
DenseNet169 without data augmentation (left image) and with data augmentation (right image).
Figure 10
DenseNet201 without data augmentation (left image) and with data augmentation (right image).
Figure 11
InceptionResNetV2 without data augmentation (left image) and with data augmentation (right image).
Figure 12
InceptionV3 without data augmentation (left image) and with data augmentation (right image).
Figure 13
MobileNetV2 without data augmentation (left image) and with data augmentation (right image).
Figure 14
NASNetLarge without data augmentation (left image) and with data augmentation (right image).
Figure 15
ResNet50 without data augmentation (left image) and with data augmentation (right image).
Figure 16
ResNet101V2 without data augmentation (left image) and with data augmentation (right image).
Figure 17
VGG16 without data augmentation (left image) and with data augmentation (right image).
Figure 18
VGG19 without data augmentation (left image) and with data augmentation (right image).
Figure 19
Xception without data augmentation (left image) and with data augmentation (right image).
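The localization maps in Figures 6 and 7 are the kind typically produced by Grad-CAM, which weights each feature map of the last convolutional layer by its pooled gradient and applies a ReLU; whether the study used Grad-CAM specifically is an assumption here. A minimal NumPy sketch of that computation, with random placeholder feature maps and gradients standing in for real network activations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Feature maps A_k from the last conv layer (H x W x K) and the
# gradient of the class score w.r.t. each map (same shape).
# Both are random placeholders for real network values.
fmaps = rng.standard_normal((7, 7, 8))
grads = rng.standard_normal((7, 7, 8))

# Channel weights: global-average-pooled gradients.
alpha = grads.mean(axis=(0, 1))               # shape (8,)

# Weighted combination of feature maps, then ReLU to keep only
# regions that positively support the predicted class.
cam = np.maximum(0.0, (fmaps * alpha).sum(axis=-1))

# Normalize to [0, 1] before upsampling and overlaying on the CT image.
cam = cam / (cam.max() + 1e-8)
```

The resulting low-resolution map is upsampled to the input size and overlaid as a heatmap, which is how the highlighted lung regions in the figures are produced.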
