J Biophotonics. 2020 Apr;13(4):e201960135. doi: 10.1002/jbio.201960135. Epub 2020 Feb 3.

Optical coherence tomography image denoising using a generative adversarial network with speckle modulation


Zhao Dong et al. J Biophotonics. 2020 Apr.

Abstract

Optical coherence tomography (OCT) is widely used for biomedical imaging and clinical diagnosis, but speckle noise is a key factor limiting OCT image quality. Here, we developed a custom generative adversarial network (GAN) to denoise OCT images. A speckle-modulating OCT (SM-OCT) system was built to generate low-speckle images to serve as the ground truth. In total, 210 000 SM-OCT images were used to train and validate the neural network model, which we call SM-GAN. The performance of the SM-GAN method was further demonstrated on online benchmark retinal images, 3D OCT images acquired from human fingers, and OCT videos of a beating fruit fly heart. The denoising performance of the SM-GAN model was compared with traditional OCT denoising methods and other state-of-the-art deep-learning-based denoising networks. We conclude that the SM-GAN model presented here can effectively reduce speckle noise in OCT images and videos while maintaining spatial and temporal resolution.

Keywords: de-noise; deep learning; generative adversarial network; optical coherence tomography.


Figures

FIGURE 1
Collage of SM-OCT images of different samples. (a) Chicken skin. (b) Tape. (c) Beef. (d) Pork skin. (e) Pork. (f) Fish. The top image of each sample is a single SM-OCT B-scan, while the bottom is the ground-truth image obtained by averaging 100 repeated SM-OCT B-scans, which significantly reduces speckle noise.
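The 100-frame ground truth described above relies on speckle being largely uncorrelated between SM-OCT frames, so averaging suppresses it while preserving structure. A minimal numerical sketch of this averaging step (a generic illustration with synthetic multiplicative noise, not the authors' acquisition code):

```python
import numpy as np

def average_bscans(bscans):
    """Average repeated, co-registered B-scans of the same location.

    bscans: array of shape (n_frames, height, width). If speckle is
    uncorrelated between frames, the mean of n frames reduces speckle
    variance by roughly a factor of n.
    """
    stack = np.asarray(bscans, dtype=np.float64)
    return stack.mean(axis=0)

# Illustration: a flat "structure" image corrupted by exponential
# (fully developed speckle-like) multiplicative noise; averaging 100
# frames recovers the underlying intensity closely.
rng = np.random.default_rng(0)
structure = np.full((8, 8), 100.0)
frames = structure * rng.exponential(1.0, size=(100, 8, 8))
ground_truth = average_bscans(frames)
```

The averaged image is far smoother than any single frame, which is exactly the property that makes it usable as a training target.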
FIGURE 2
SM-GAN training process and network structure. (a) SM-GAN model training process. (b) Generator network structure. (c) Discriminator network structure.
FIGURE 3
Input and ground-truth OCT images of chicken and grape, and output images of different de-noising methods. Three signal regions (green) and one background region (blue) were manually selected for CNR calculation. Middle images are magnified views of the regions marked by the red boxes. (a) Input OCT images of chicken and grape. (b) BM3D de-noised images. (c) MSBTD de-noised images. (d) SRResNet de-noised images. (e) SRGAN de-noised images. (f) SM-GAN de-noised images. (g) Ground-truth images obtained by averaging 100 frames.
FIGURE 4
CNR and PSNR evaluation plots for grape and chicken OCT images. (a) CNR evaluation of the input data, the de-noising outputs of BM3D, MSBTD, SRResNet, SRGAN, and SM-GAN, and the ground-truth image. (b) PSNR evaluation of the input data and the de-noising outputs of BM3D, MSBTD, SRResNet, SRGAN, and SM-GAN.
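The CNR and PSNR metrics in the figure can be sketched as follows. Note the CNR formula used here is one common OCT convention (an assumption; the paper may use a variant), and PSNR is the standard peak-signal definition against the averaged ground truth:

```python
import numpy as np

def cnr(signal_region, background_region):
    """Contrast-to-noise ratio between a signal ROI and a background ROI.

    One common definition (assumed here; OCT papers vary):
        CNR = (mu_s - mu_b) / sqrt(sigma_s^2 + sigma_b^2)
    """
    s = np.asarray(signal_region, dtype=np.float64)
    b = np.asarray(background_region, dtype=np.float64)
    return (s.mean() - b.mean()) / np.sqrt(s.var() + b.var())

def psnr(image, reference, max_val=255.0):
    """Peak signal-to-noise ratio in dB against a ground-truth reference."""
    img = np.asarray(image, dtype=np.float64)
    ref = np.asarray(reference, dtype=np.float64)
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

In the paper's setup, `cnr` would be evaluated over the three green signal ROIs against the blue background ROI, and `psnr` against the 100-frame averaged image.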
FIGURE 5
De-noising retinal images with traditional and deep-learning-based methods. Both the high-noise input and low-noise ground-truth retinal images are from the online benchmark dataset. (a) High-noise input retinal image. (b) BM3D de-noised image. (c) MSBTD de-noised image. (d) SRResNet de-noised image. (e) SRGAN de-noised image. (f) SM-GAN de-noised image. (g) Ground-truth image. (h-n) Zoomed-in regions selected by the red boxes in (a-g).
FIGURE 6
3D volumetric and cross-sectional images of a human finger without de-noising, and the de-noised outputs of BM3D, SRResNet, SRGAN, and SM-GAN. (a) 3D dataset of the finger without de-noising (Video S1). (b) 3D dataset of the BM3D output (Video S2). (c) 3D dataset of the SRResNet output (Video S3). (d) 3D dataset of the SRGAN output (Video S4). (e) 3D dataset of the SM-GAN output (Video S5). (f-j) One frame of the combined finger video (Video S6): the input image and the de-noised outputs of BM3D, SRResNet, SRGAN, and SM-GAN.
FIGURE 7
One frame of the fly heartbeat video (Video S7): the input image and the de-noised output images of the BM3D, SRResNet, SRGAN, and SM-GAN methods, with column intensity plots and FWHM measurements of the corresponding peaks. (a-e) Input fly heart image and de-noised outputs of the BM3D, SRResNet, SRGAN, and SM-GAN methods. (f-j) Intensity profiles along a selected column for the input image and the four methods' outputs; the black box marks the fly heart tube and the red box the fly heart wall. (k-o) FWHM measurements of the intensity peaks for the input image and the four methods' outputs, indicating the thickness of the fly heart wall.
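The FWHM measurement above quantifies heart-wall thickness from a single-peak intensity profile. A minimal sketch of such a measurement, using linear interpolation at the half-maximum crossings (`fwhm` is a hypothetical helper, not the authors' code; it assumes one dominant peak):

```python
import numpy as np

def fwhm(profile):
    """Full width at half maximum of a single-peak 1-D intensity profile,
    returned in samples, with linear interpolation at the two crossings."""
    y = np.asarray(profile, dtype=np.float64)
    y = y - y.min()                      # shift baseline to zero
    half = y.max() / 2.0
    above = np.where(y >= half)[0]       # indices at or above half maximum
    left, right = float(above[0]), float(above[-1])
    li, ri = above[0], above[-1]
    # interpolate each flank for sub-sample precision
    if li > 0:
        left = li - (y[li] - half) / (y[li] - y[li - 1])
    if ri < len(y) - 1:
        right = ri + (y[ri] - half) / (y[ri] - y[ri + 1])
    return right - left
```

For a symmetric triangular peak such as `[0, 1, 2, 3, 4, 3, 2, 1, 0]`, the half maximum of 2 is crossed at samples 2 and 6, giving a width of 4 samples; multiplying by the axial pixel size would convert this to a physical wall thickness.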

