Sci Rep. 2022 Nov 5;12(1):18787. doi: 10.1038/s41598-022-23325-3.

Generative adversarial network-created brain SPECTs of cerebral ischemia are indistinguishable to scans from real patients


Rudolf A Werner et al. Sci Rep. 2022.

Abstract

Deep convolutional generative adversarial networks (GANs) allow images to be created from existing databases. We applied a modified lightweight GAN (FastGAN) algorithm to cerebral blood flow SPECTs and aimed to evaluate whether this technology can generate images closely resembling those of real patients. Investigating three anatomical levels (cerebellum, CER; basal ganglia, BG; cortex, COR), 551 normal (248 CER, 174 BG, 129 COR) and 387 pathological brain SPECTs using N-isopropyl p-I-123-iodoamphetamine (123I-IMP) were included. Among the pathological scans, cerebral ischemic disease comprised 291 unilateral (66 CER, 116 BG, 109 COR) and 96 bilateral defect patterns (44 BG, 52 COR). Our model was trained using a three-compartment anatomical input (dataset 'A', including CER, BG, and COR), whereas for dataset 'B', only one anatomical region (COR) was included. Quantitative analyses provided mean counts (MC) and left/right (LR) hemisphere ratios, which were then compared to quantification from real images. For MC, 'B' differed significantly from real images for normal and bilateral defect patterns (P < 0.0001 for each), but not for unilateral ischemia (P = 0.77). Comparable results were recorded for LR, as normal and ischemia scans differed significantly from images acquired from real patients (P ≤ 0.01 for each). Images provided by 'A', however, yielded MC results comparable to real images for both normal (P = 0.8) and pathological scans (unilateral, P = 0.99; bilateral, P = 0.68). For LR, only unilateral (P = 0.03), but not normal or bilateral defect scans (P ≥ 0.08), reached significance relative to images of real patients. With a minimum of only three anatomical compartments serving as stimuli, created cerebral SPECTs are indistinguishable from images of real patients.
The applied FastGAN algorithm may thus provide sufficient scan numbers in various clinical scenarios, e.g., for "data-hungry" deep learning technologies or in the context of orphan diseases.
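The two quantitative measures compared between real and generated scans, mean counts (MC) and the left/right (LR) hemisphere ratio, can be sketched as follows. The abstract does not give the exact ROI definitions, so splitting the slice at its vertical midline to approximate the hemispheres is an assumption for illustration only:

```python
import numpy as np

def mean_counts(img):
    """Mean counts (MC): average pixel value over a 2-D SPECT slice."""
    return float(img.mean())

def lr_ratio(img):
    """Left/right (LR) hemisphere ratio.

    Assumption: the hemispheres are approximated by the left and right
    halves of the image, split at the vertical midline.
    """
    mid = img.shape[1] // 2
    return float(img[:, :mid].mean() / img[:, mid:].mean())
```

On a perfectly symmetric scan the LR ratio is 1; a unilateral perfusion defect shifts it away from 1, which is why LR is a sensitive measure for the unilateral ischemia patterns studied here.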


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1
Generator network in our model. The latent vector and the conditional vector specifying the pattern of radiotracer accumulation serve as input to the generator, which synthesizes a two-dimensional brain SPECT. Symbols F, n, s and p denote channels of output feature maps, number of neurons, strides and padding, respectively. "Same" padding means the input feature map is padded so that the height and width of the input and output feature maps are unchanged. GLU is a gated linear unit; Tanh is the hyperbolic tangent activation function. The loss function LG is defined in Eq. (1).
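The GLU gating named in the caption can be illustrated framework-agnostically: the feature map is split into two halves along the channel axis, and one half gates the other through a sigmoid, halving the channel count. This is a minimal NumPy sketch of the generic gated linear unit, not the authors' implementation:

```python
import numpy as np

def glu(x, axis=0):
    """Gated linear unit: split x into halves (a, b) along `axis`
    and return a * sigmoid(b). Output has half the channels of x."""
    a, b = np.split(x, 2, axis=axis)
    sig = 1.0 / (1.0 + np.exp(-b))
    return a * sig
```

Note that the channel dimension must be even, since it is split in two.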
Figure 2
Skip-layer excitation module used in the generator. Symbols H, W and F in feature maps denote height, width and channels, respectively. Symbols s, p and a denote strides, padding and the slope of the Leaky ReLU activation function, respectively. "None" padding means no padding is applied to the input feature map.
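The skip-layer excitation (SLE) idea from FastGAN can be shown schematically: a low-resolution feature map is squeezed to per-channel gates that modulate a high-resolution feature map. The sketch below is a simplification under stated assumptions: global average pooling stands in for the module's learned convolutions, and the channel counts of the two maps are assumed to match already:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def skip_layer_excitation(x_low, x_high):
    """Gate the high-resolution map x_high (C, H, W) with per-channel
    weights derived from the low-resolution map x_low (C, h, w).

    Simplification: a global average pool replaces the module's
    learned convolutions, and channel counts must already match.
    """
    gate = sigmoid(x_low.mean(axis=(1, 2), keepdims=True))  # (C, 1, 1)
    return x_high * gate  # gates broadcast over H and W
```

The design lets gradients skip from high-resolution layers directly to low-resolution ones, which is part of what makes FastGAN trainable on small datasets such as the SPECT cohorts used here.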
Figure 3
Discriminator in our model. The discriminator takes as input the real or generated image together with the conditional image representing the pattern of radiotracer accumulation. Symbols F, s, p and a denote channels of output feature maps, strides, padding and the slope of the Leaky ReLU activation function, respectively. "Same" padding means the input feature map is padded so that the height and width of the input and output feature maps are unchanged; "none" means no padding is applied. GLU is a gated linear unit. The loss functions Lreal, Lfake and Lrecon are defined in Eqs. (2), (3) and (4), respectively.
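The loss terms Lreal, Lfake and Lrecon refer to the paper's Eqs. (2)-(4), which are not reproduced on this page. FastGAN-style discriminators conventionally combine hinge adversarial losses with a self-supervised reconstruction term; the sketch below follows that convention and is an assumption, not the paper's exact formulation:

```python
import numpy as np

def loss_real(logits):
    """Hinge loss on real images: push discriminator logits above +1."""
    return float(np.mean(np.maximum(0.0, 1.0 - logits)))

def loss_fake(logits):
    """Hinge loss on generated images: push logits below -1."""
    return float(np.mean(np.maximum(0.0, 1.0 + logits)))

def loss_recon(decoded, target):
    """Reconstruction term: here a simple mean absolute error between
    the discriminator's decoded image and a downsampled real image."""
    return float(np.mean(np.abs(decoded - target)))
```

The reconstruction term regularizes the discriminator by forcing it to summarize real images rather than memorize them, which again helps when training data are scarce.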
Figure 4
Real images, images generated with dataset 'A' (three compartment levels serving as stimuli), and images generated with dataset 'B' (only one anatomical level as input). On visual assessment, images generated with dataset 'A', which includes more anatomical information, resemble real images more closely than those generated with dataset 'B'.
Figure 5
Box-and-whisker plots comparing real and generated images for datasets 'A' and 'B'. First row: mean counts; second row: left-to-right hemisphere ratio (LR). Except for unilateral defect patterns on LR, all comparisons of 'A' with real images failed to reach significance. For dataset 'B', in contrast, statistical significance was reached in almost all cases (except for mean counts of unilateral ischemia), supporting the notion that 'A' (using more anatomical input) provides scans closely resembling real scans. *, ** and **** denote P < 0.05, P < 0.01 and P < 0.0001, respectively.
Figure 6
Left: pixel-wise average maps for real and generated images with datasets 'A' and 'B'. Right: pixel-wise standard deviation (SD) maps for real and generated images with datasets 'A' and 'B'.


References

    1. Ching T, Himmelstein DS, Beaulieu-Jones BK, Kalinin AA, Do BT, Way GP, et al. Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interface. 2018;15(141):20170387. doi: 10.1098/rsif.2017.0387.
    2. Chartrand G, Cheng PM, Vorontsov E, Drozdzal M, Turcotte S, Pal CJ, et al. Deep learning: A primer for radiologists. Radiographics. 2017;37(7):2113–2131. doi: 10.1148/rg.2017170077.
    3. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J. Big Data. 2019;6(1):60. doi: 10.1186/s40537-019-0197-0.
    4. Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: A review. Med Image Anal. 2019;58:101552. doi: 10.1016/j.media.2019.101552.
    5. Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial networks. arXiv e-prints. 2014. https://ui.adsabs.harvard.edu/#abs/2014arXiv1406.2661G.
