J Pers Med. 2023 Mar 18;13(3):547. doi: 10.3390/jpm13030547.

Generative Adversarial Networks Can Create High Quality Artificial Prostate Cancer Magnetic Resonance Images

Isaac R L Xu et al. J Pers Med. 2023.

Abstract

The recent integration of open-source data with machine learning models, especially in the medical field, has opened new doors to studying disease progression and/or regression. However, the use of medical data for machine learning is limited by the scarcity of data specific to a particular medical condition. In this context, recent technologies such as generative adversarial networks (GANs) offer a potential way to generate high-quality synthetic data that preserve the clinical variability of a condition. Despite some success, however, GANs remain little used for capturing the heterogeneity of a disease such as prostate cancer. Previous studies from our group focused on automating quantitative multi-parametric magnetic resonance imaging (mpMRI) using habitat risk scoring (HRS) maps for the prostate cancer patients in the BLaStM trial. In the current study, we aimed to use images from the BLaStM trial and other sources to train GAN models, generate synthetic images, and validate their quality. We used T2-weighted prostate MRI images as training data for Single Natural Image GANs (SinGANs) to build a generative model, and a deep learning semantic segmentation pipeline was trained to segment the prostate boundary on 2D MRI slices. Synthetic images with a high-quality segmentation boundary of the prostate were filtered and used in a quality control assessment by participating scientists with varying experience working with MRI images (more than ten years, one year, or none). The most experienced group correctly identified conventional vs. synthetic images with 67% accuracy, the group with one year of experience with 58% accuracy, and the group with no prior experience with 50% accuracy. Nearly half (47%) of the synthetic images were mistakenly judged to be conventional.
Interestingly, in a blinded quality assessment, a board-certified radiologist found no significant difference in mean quality between conventional and synthetic images. Furthermore, to validate the usability of the generated synthetic prostate cancer MRI images, we subjected them to anomaly detection along with the original images. Importantly, the anomaly detection success rate for quality control-approved synthetic data in phase one matched that of the conventional images. In sum, this study shows promise that high-quality synthetic MRI images can be generated using GANs. Such an AI model may contribute significantly to various clinical applications that involve supervised machine learning approaches.
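The reviewer accuracies reported above (67%, 58%, 50%) are simply the fraction of images whose origin was guessed correctly in the real-vs-synthetic discrimination task. A minimal sketch, using illustrative labels rather than the study's data:

```python
# Hedged sketch: computing a reviewer's accuracy on a real-vs-synthetic
# discrimination task, as in the quality control assessment described
# above. The labels and guesses below are illustrative placeholders.

def discrimination_accuracy(true_labels, guesses):
    """Fraction of images whose origin (real/synthetic) was guessed correctly."""
    correct = sum(t == g for t, g in zip(true_labels, guesses))
    return correct / len(true_labels)

truth   = ["real", "synthetic", "synthetic", "real", "synthetic", "real"]
guesses = ["real", "real",      "synthetic", "real", "real",      "synthetic"]
acc = discrimination_accuracy(truth, guesses)  # 3 of 6 correct -> 0.5
```

An accuracy near 0.5 is chance level, which is what the group with no MRI experience achieved.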

Keywords: MRI; generative adversarial networks; image segmentation; machine learning.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Image generation using SinGAN. SinGAN trains a model using generative adversarial networks in a coarse-to-fine manner. At each scale, the generator makes sample images that cannot be distinguished from down-sampled training images by the discriminator. The 8th, 9th, and 10th scale images (the three rightmost images) of the model were resized and used in all quality control tests.
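SinGAN's coarse-to-fine scheme trains one generator/discriminator pair per scale against the training image downsampled to that resolution. A minimal sketch of such an image pyramid, assuming a per-scale factor of 0.75 (the model's actual factor may differ) and a simple nearest-neighbour resize:

```python
# Sketch of a coarse-to-fine image pyramid as used in SinGAN training.
# The scale factor and resize method are assumptions for illustration;
# each pyramid level would feed one generator/discriminator pair.
import numpy as np

def build_pyramid(image, n_scales=10, scale_factor=0.75):
    """Return downsampled copies of `image`, coarsest first, finest last."""
    h, w = image.shape
    pyramid = []
    for s in range(n_scales):
        factor = scale_factor ** (n_scales - 1 - s)  # smallest at s = 0
        nh = max(1, round(h * factor))
        nw = max(1, round(w * factor))
        rows = np.arange(nh) * h // nh  # nearest-neighbour row indices
        cols = np.arange(nw) * w // nw
        pyramid.append(image[np.ix_(rows, cols)])
    return pyramid

img = np.random.rand(256, 256)
scales = build_pyramid(img)  # scales[0] is coarsest, scales[-1] full size
```

The three finest levels of such a pyramid correspond to the 8th–10th scale outputs used in the quality control tests above.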
Figure 2
(A) Example of training data for the image segmentation neural network and the desired result. The top left is the T2 tse training image, and the bottom left is the corresponding binary contour. (B) The deep learning image segmentation training progression is shown for one image of the training set. The final predicted contour had a Dice similarity coefficient of 0.99 with the real contour. (C) To pass the deep learning segmentation pipeline, the predicted contour had to be one continuous contour greater than 10,000 pixels. (D) Examples of the 253 synthetic images that passed the deep learning segmentation pipeline, shown with the prediction boundary.
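The Dice similarity coefficient reported in panel (B) measures overlap between the predicted and ground-truth binary masks. A self-contained sketch with toy masks (the 0.99 value in the figure is from the study's own data, not reproduced here):

```python
# Dice similarity coefficient for two binary segmentation masks:
# Dice = 2|A ∩ B| / (|A| + |B|). Masks below are toy examples.
import numpy as np

def dice_coefficient(pred, truth):
    """Return the Dice overlap of two binary masks (1.0 = identical)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16-pixel square
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 pixels, offset
d = dice_coefficient(a, b)  # overlap is 9 pixels -> 18/32 = 0.5625
```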
Figure 3
Boxplots depicting the minimum, median, interquartile range, maximum, and potential outliers of image intensity for both synthetic and conventional samples, highlighting the convolutional neural network quality control and applicability validation steps in evaluating the distribution of intensity for synthetic and conventional images, the mean intensity for synthetic and conventional images, and the distribution of synthetic image intensity by cancer stage. Here "N" represents normal and 1–5 represents the increasing progression of the disease.
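The boxplot statistics named in the caption (minimum, median, interquartile range, maximum, Tukey-style outliers) can be computed directly from an intensity vector. A sketch with placeholder values rather than the study's intensity data:

```python
# Computing standard boxplot statistics for a vector of image
# intensities: quartiles, median, extremes, and 1.5*IQR outliers.
# The input values here are illustrative placeholders.
import numpy as np

def boxplot_stats(values):
    v = np.sort(np.asarray(values, dtype=float))
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # Tukey fences
    outliers = v[(v < lo) | (v > hi)]
    return {"min": v[0], "q1": q1, "median": med,
            "q3": q3, "max": v[-1], "outliers": outliers}

stats = boxplot_stats([10, 12, 13, 14, 15, 16, 40])
# median = 14.0; IQR = 15.5 - 12.5 = 3.0; 40 falls above the upper fence
```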
Figure 4
Workflow of procedure for second human quality control assessment. Three synthetic samples output by the last three generative models from SinGAN were resized to 500 × 500 and then given a predicted segment of the prostate using a pre-trained segmentation neural network. Synthetic images that had a segmentation of the contour that passed our defined criteria were pooled and randomly selected for our human visual assessment.
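The filtering criterion from Figure 2C (one continuous contour larger than 10,000 pixels) amounts to a connected-component check on the predicted mask. A minimal sketch using a hand-rolled flood fill; a real pipeline would more likely use `scipy.ndimage.label` or OpenCV:

```python
# Sketch of the quality-control filter: a synthetic image passes only
# if its predicted prostate segmentation is a single connected region
# larger than 10,000 pixels. Flood fill is used here for
# self-containment; scipy.ndimage.label would be the usual choice.
import numpy as np
from collections import deque

def passes_filter(mask, min_pixels=10_000):
    """True iff `mask` has exactly one 4-connected region > min_pixels."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask)
    sizes = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                size, queue = 0, deque([(i, j)])
                seen[i, j] = True
                while queue:  # breadth-first flood fill of one region
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return len(sizes) == 1 and sizes[0] > min_pixels

big = np.zeros((500, 500), dtype=bool)
big[100:300, 100:300] = True       # one 40,000-pixel region
ok = passes_filter(big)            # passes: single region above threshold
```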
Figure 5
Model framework diagram of the corresponding generators and discriminators in the SinGAN network.
