Ophthalmol Sci. 2021 Nov 16;1(4):100079. doi: 10.1016/j.xops.2021.100079. eCollection 2021 Dec.

Deepfakes in Ophthalmology: Applications and Realism of Synthetic Retinal Images from Generative Adversarial Networks

Jimmy S Chen et al. Ophthalmol Sci. 2021.

Abstract

Purpose: Generative adversarial networks (GANs) are deep learning (DL) models that can create and modify realistic-appearing synthetic images, or deepfakes, from real images. The purpose of our study was to evaluate the ability of experts to discern synthesized retinal fundus images from real fundus images and to review the current uses and limitations of GANs in ophthalmology.

Design: Development and expert evaluation of a GAN and an informal review of the literature.

Participants: A total of 4282 image pairs of fundus images and retinal vessel maps acquired from a multicenter retinopathy of prematurity (ROP) screening program.

Methods: Pix2Pix HD, a high-resolution GAN, was first trained and validated on fundus and vessel map image pairs and subsequently used to generate 880 images from a held-out test set. Fifty synthetic images from this test set and 50 different real images were presented to 4 expert ROP ophthalmologists using a custom online system for evaluation of whether the images were real or synthetic. Literature was reviewed on PubMed and Google Scholar using combinations of the terms ophthalmology, GANs, generative adversarial networks, images, deepfakes, and synthetic. Ancestor search was performed to broaden results.
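
For context on how such a model is trained, the sketch below shows one training step of a generic paired image-to-image GAN in PyTorch: a generator maps vessel maps to fundus images, a patch-based discriminator scores (vessel map, fundus) pairs, and the generator loss adds an L1 reconstruction term. This is a minimal illustration under those assumptions, not the authors' Pix2Pix HD configuration; the class names, network sizes, and hyperparameters are placeholders.

import torch
import torch.nn as nn

class VesselToFundusGenerator(nn.Module):
    """Toy encoder-decoder standing in for a high-resolution generator (placeholder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, vessel_map):
        return self.net(vessel_map)

class PatchDiscriminator(nn.Module):
    """Toy patch-based discriminator over (vessel map, fundus) pairs (placeholder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, padding=1),
        )

    def forward(self, vessel_map, fundus):
        return self.net(torch.cat([vessel_map, fundus], dim=1))

G, D = VesselToFundusGenerator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

vessel_map = torch.rand(4, 1, 256, 256)           # stand-in batch of vessel maps
real_fundus = torch.rand(4, 3, 256, 256) * 2 - 1  # stand-in fundus photos scaled to [-1, 1]

# Discriminator step: real pairs labeled 1, synthetic pairs labeled 0.
fake_fundus = G(vessel_map)
d_real = D(vessel_map, real_fundus)
d_fake = D(vessel_map, fake_fundus.detach())
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator and stay close to the paired real photo (L1 term).
d_fake = D(vessel_map, fake_fundus)
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + 100.0 * l1_loss(fake_fundus, real_fundus)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()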

Main outcome measures: Expert ability to discern real versus synthetic images was evaluated using percent accuracy. Statistical significance was evaluated using a Fisher exact test, with P ≤ 0.05 considered significant.
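
As a worked illustration of this outcome measure, the snippet below computes percent accuracy and a Fisher exact P value from a hypothetical 2 × 2 table of one expert's real/synthetic calls against ground truth; the counts are illustrative only, not the study's data.

from scipy.stats import fisher_exact

# Rows: true image class; columns: expert's call (illustrative counts only).
#                called real  called synthetic
table = [[30, 20],   # truly real images
         [18, 32]]   # truly synthetic images

odds_ratio, p_value = fisher_exact(table)
accuracy = (table[0][0] + table[1][1]) / sum(sum(row) for row in table)
print(f"accuracy = {accuracy:.0%}, Fisher exact P = {p_value:.3f}")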

Results: The expert majority correctly identified 59% of images as being real or synthetic (P = 0.1). Experts 1 to 4 correctly identified 54%, 58%, 49%, and 61% of images (P = 0.505, 0.158, 1.000, and 0.043, respectively). These results suggest that the majority of experts could not discern between real and synthetic images. Additionally, we identified 20 implementations of GANs in the ophthalmology literature, with applications in a variety of imaging modalities and ophthalmic diseases.

Conclusions: Generative adversarial networks can create synthetic fundus images that are indiscernible from real fundus images by expert ROP ophthalmologists. Synthetic images may improve dataset augmentation for DL, may be used in trainee education, and may have implications for patient privacy.

Keywords: DL, deep learning; DR, diabetic retinopathy; Deep learning; GAN, generative adversarial network; Generative adversarial networks; Ophthalmology; ROP, retinopathy of prematurity; Synthetic images; i-ROP, Informatics in ROP.


Figures

Figure 1
Generative adversarial network (GAN) pipeline for generating synthetic fundus images. First, a U-Net, a convolutional neural network architecture designed to segment image features such as vessels, was used to generate vessel maps from all fundus images in the dataset. Next, paired fundus images and their corresponding vessel maps from the training and validation sets were fed as inputs into Pix2Pix, a conditional GAN. This GAN consists of 2 neural networks: (1) a generator that was trained to generate synthetic fundus images from vessel maps and (2) a discriminator that was trained to discriminate between real and synthetic fundus images. After training was completed, vessel maps from the test set were input into the GAN and a synthetic fundus image was generated.
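
A minimal sketch of this two-stage pipeline at inference time is shown below, assuming already-trained unet and generator modules; these are hypothetical placeholders standing in for the trained U-Net and the Pix2Pix HD generator, not the authors' code.

import torch

def synthesize_fundus(real_fundus: torch.Tensor,
                      unet: torch.nn.Module,
                      generator: torch.nn.Module) -> torch.Tensor:
    """real_fundus: (1, 3, H, W) tensor; returns a synthetic (1, 3, H, W) image."""
    with torch.no_grad():
        vessel_map = unet(real_fundus)     # stage 1: segment retinal vessels
        synthetic = generator(vessel_map)  # stage 2: conditional GAN generator
    return synthetic
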
Figure 2
Synthetic retinal images generated from retinal vessel maps. Real retinal fundus images (left) are first segmented into retinal vessel maps (center) using a previously trained U-Net. Using pix2pixHD, a custom implementation of a generative adversarial network (GAN), the retinal vessel maps are then converted into synthetic retinal fundus images (right).
Figure 3
Obvious cases where the generative adversarial network (GAN) did not produce realistic results. A small proportion of test dataset images (0.57%) had clear and obvious markings that indicated they were synthetic images (white arrows).

