Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks

Bilal Ahmad et al.

Biomedicines. 2022 Jan 21;10(2):223. doi: 10.3390/biomedicines10020223.
Abstract

Brain tumors are a pernicious cancer with one of the lowest five-year survival rates. Neurologists often use magnetic resonance imaging (MRI) to diagnose the type of brain tumor. Automated computer-assisted tools can help them speed up the diagnosis and reduce the burden on health care systems. Recent advances in deep learning for medical imaging have shown remarkable results, especially in the automatic and instant diagnosis of various cancers. However, deep learning models need a large amount of data (images) to obtain good results, and large public datasets are rare in medicine. This paper proposes a framework based on unsupervised deep generative neural networks to address this limitation. The proposed framework combines two generative models: variational autoencoders (VAEs) and generative adversarial networks (GANs). After initially training the encoder-decoder network on the training set of available MR images, we swap it. The output of this swapped network is a noise vector that carries information about the image manifold, and the cascaded generative adversarial network samples its input from this informative noise vector instead of from random Gaussian noise. The proposed method helps the GAN avoid mode collapse and generate realistic-looking brain tumor magnetic resonance images. These artificially generated images can mitigate the limitation of small medical datasets to a reasonable extent and help deep learning models perform acceptably. We used ResNet50 as the classifier and augmented the available real images with the artificially generated brain tumor images during classifier training. We compared the classification results with several existing studies and state-of-the-art machine learning models; the proposed methodology achieved noticeably better results. By using the brain tumor images generated by the proposed method, the average classification accuracy improved from 72.63% to 96.25%. For the most severe class of brain tumor, glioma, we achieved recall, specificity, precision, and F1-score values of 0.769, 0.837, 0.833, and 0.80, respectively. The proposed generative framework could be used to generate medical images in any domain, including PET (positron emission tomography) and MRI scans of various parts of the body, and the results indicate that it could be a useful clinical tool for medical experts.
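To make the generative pipeline concrete, the sketch below gives one plausible PyTorch reading of the abstract: a VAE is trained on the real MR images, its decoder and encoder are then applied in swapped order to a Gaussian noise vector, and the resulting latent vector, which reflects the learned image manifold, replaces plain Gaussian noise as the GAN generator's input. The framework choice, module layouts, latent size, and image size are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the informed-noise idea, assuming "swapping" the trained
# encoder-decoder network means running the decoder first and the encoder
# second on a random Gaussian vector. All shapes and layers are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 128  # assumed latent size

class Encoder(nn.Module):
    """Simplified convolutional VAE encoder (assumed layout)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.LazyLinear(LATENT_DIM)
        self.fc_logvar = nn.LazyLinear(LATENT_DIM)

    def forward(self, x):
        h = self.features(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample a latent vector for the given image.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

class Decoder(nn.Module):
    """Simplified VAE decoder mapping a latent vector back to an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

# Both halves are assumed to have been trained jointly as a VAE on real MR images.
encoder, decoder = Encoder(), Decoder()

with torch.no_grad():
    gaussian_noise = torch.randn(8, LATENT_DIM)   # ordinary GAN input noise
    image_like = decoder(gaussian_noise)          # decoder first ("swapped" order)
    informed_noise = encoder(image_like)          # encoder second: manifold-aware noise

# `informed_noise` would then replace random Gaussian noise as the input to the
# cascaded GAN generator, which is trained adversarially to synthesize MR images.
```

In the study itself, the images produced this way are mixed with the real scans to augment the training data of the ResNet50 classifier.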

Keywords: MRI; PET; brain tumor classification; cancer classification; convolutional neural networks; deep learning; generative adversarial networks; glioma; meningioma; pituitary; radiolabeled PET; variational autoencoder.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Sample images from the dataset. The first, second, and third rows show glioma, meningioma, and pituitary brain tumors, respectively.
Figure 2. The main framework of the proposed methodology.
Figure 3. Variational autoencoder architecture used in this study.
Figure 4. Training and validation accuracy of the classifier when the training set includes the images generated by ED-GAN.
Figure 5. Confusion matrices of various experiments. (A) without classic augmentation and without generative images; (B) with classic augmentation and without generative images; (C) without classic augmentation and with generative images; (D) with both classic augmentation and generative images.
Figure 6. Artificially generated brain tumor MR images. (A) MR images of brain tumors generated by the proposed ED-GAN; the top and bottom rows show images generated after 100 and 20,000 training steps, respectively. (B) Brain tumor images generated by Ghassemi et al. [29], who trained their generative model on the same public dataset used here. They reported an average accuracy of around 95%, only 2% less than achieved in this study; however, there is a considerable difference in the quality of the images generated by the two methods.

References

    1. Abdelaziz Ismael S.A., Mohammed A., Hefny H. An enhanced deep learning approach for brain cancer MRI images classification using residual networks. Artif. Intell. Med. 2020;102:101779. doi: 10.1016/j.artmed.2019.101779.
    2. Alther B., Mylius V., Weller M., Gantenbein A. From first symptoms to diagnosis: Initial clinical presentation of primary brain tumors. Clin. Transl. Neurosci. 2020;4:2514183X20968368. doi: 10.1177/2514183X20968368.
    3. Somasundaram S., Gobinath R. Current Trends on Deep Learning Models for Brain Tumor Segmentation and Detection—A Review. Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon); Faridabad, India, 14–16 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 217–221.
    4. Pereira S., Pinto A., Alves V., Silva C.A. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans. Med. Imaging. 2016;35:1240–1251. doi: 10.1109/TMI.2016.2538465.
    5. Amin J., Sharif M., Gul N., Yasmin M., Shad S.A. Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network. Pattern Recognit. Lett. 2020;129:115–122. doi: 10.1016/j.patrec.2019.11.016.
