Review

Lasers Surg Med. 2021 Aug;53(6):748-775. doi: 10.1002/lsm.23414. Epub 2021 May 20.

Deep Learning in Biomedical Optics

Lei Tian et al.

Abstract

This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized. Lasers Surg. Med. © 2021 Wiley Periodicals LLC.

Keywords: biomedical optics; biophotonics; computer aided detection; deep learning; diffuse tomography; fluorescence lifetime; functional optical brain imaging; in vivo microscopy; machine learning; microscopy; optical coherence tomography; photoacoustic imaging; widefield endoscopy.


Figures

Fig 1:
Number of reviewed research papers that utilize DL in biomedical optics, stratified by year and imaging domain.
Fig 2:
(a) Classical machine learning uses engineered features and a model. (b) Deep learning uses learned features and predictors in an “end-to-end” deep neural network.
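The contrast between (a) and (b) can be made concrete with a toy classifier. The sketch below is illustrative only (the dataset, dimensions, and training settings are invented, not from the paper): a hand-engineered feature plus a hand-chosen threshold model versus an end-to-end learned mapping from raw pixels, with logistic regression standing in as the one-layer limit of a deep network.

```python
import numpy as np

# Toy dataset (hypothetical): 8x8 "images" from a bright class and a
# dark class; the label is which class an image came from.
rng = np.random.default_rng(0)
bright = rng.normal(0.8, 0.05, size=(50, 8, 8))
dark = rng.normal(0.2, 0.05, size=(50, 8, 8))
images = np.concatenate([bright, dark])
labels = np.array([1] * 50 + [0] * 50)

# (a) Classical pipeline: an engineered feature (mean intensity)
# feeds a separate, hand-chosen threshold model.
feature = images.mean(axis=(1, 2))
classical_pred = (feature > 0.5).astype(int)

# (b) End-to-end: raw pixels map directly to the label through learned
# weights, trained by gradient descent; no hand-designed feature in between.
X = np.hstack([images.reshape(100, -1), np.ones((100, 1))])  # + bias column
w = np.zeros(65)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # sigmoid predictions
    w -= 0.1 * X.T @ (p - labels) / 100      # logistic-loss gradient step
learned_pred = (X @ w > 0).astype(int)

print((classical_pred == labels).mean())  # 1.0 by construction
print((learned_pred == labels).mean())
```

On this contrived task both routes succeed; the point is only where the feature comes from: fixed by the designer in (a), learned from the raw data in (b).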
Fig 3:
Three of the most commonly used DNN architectures in biomedical optics: (a) Encoder-decoder, (b) U-Net, and (c) GAN.
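A defining detail of the U-Net in (b) is the skip connection that concatenates encoder feature maps onto the decoder path, preserving high-resolution detail that the bottleneck discards. Below is a minimal NumPy sketch of that shape flow only, with the learned convolutions omitted; the function names and sizes are illustrative, not from the paper.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling: halves the spatial dimensions of (H, W, C)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2(x):
    """2x nearest-neighbor upsampling: doubles the spatial dimensions."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_level(x):
    """One encoder/decoder level of a U-Net, convolutions omitted:
    downsample, upsample, then concatenate the input feature map back
    onto the decoder path (the skip connection)."""
    bottleneck = avg_pool2(x)                      # encoder path
    decoded = upsample2(bottleneck)                # decoder path
    skip = np.concatenate([decoded, x], axis=-1)   # skip connection
    return bottleneck, skip

x = np.zeros((64, 64, 16))            # toy feature map: 64x64, 16 channels
bottleneck, out = unet_level(x)
print(bottleneck.shape)  # (32, 32, 16)
print(out.shape)         # (64, 64, 32): the skip doubles the channel count
```

In a real U-Net each level also applies convolutions before and after the resampling, and several such levels are nested; the shape bookkeeping, however, is exactly this.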
Fig 4:
DL overcomes physical tradeoffs and augments microscopy contrast. (a) The CARE network achieves higher SNR with reduced light exposure (with permission from the authors [18]). (b) A cross-modality super-resolution network reconstructs high-resolution images across a wide FOV [19] (with permission from the authors). (c) DL enables wide-FOV high-resolution phase reconstruction with reduced measurements (adapted from [20]). (d) The Deep-Z network enables digital 3D refocusing from a single measurement [21] (with permission from the authors). (e) A virtual staining GAN transforms autofluorescence images of unstained tissue sections to virtual H&E staining [22] (with permission from the authors). (f) DL enables prediction of fluorescent labels from label-free images [23] (Reprinted from Cell, 2018 Apr 19;173(3):792–803.e19, Christiansen et al., In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images, Copyright (2020), with permission from Elsevier).
Fig 5:
Example of quantitative FLI metabolic imaging as reported by NADH τm for a breast cancer cell line (AU565), as obtained with (a) SPCImage and (b) FLI-Net. (c) Linear regression, with the corresponding 95% confidence band (gray shading), of averaged NADH τm values from four cell lines (adapted from [90]).
Fig 6:
DL approaches to support real-time, automated diagnostic assessment of tissues with confocal laser endomicroscopy (CLE). (a) Graphical rendering of two CLE probes (left: Cellvizio; right: Pentax) (adapted from [109]). (b) Example CLE images obtained from four different regions of the oral cavity (adapted from [110]). (c) Fine-tuning of CNNs pre-trained on ImageNet is utilized in the majority of CLE papers reported since 2017 (adapted from [110]). (d) Super-resolution networks for probe-based CLE images incorporate novel layers to better account for the sparse, irregular structure of the images (adapted from [111]). (e) Example H&E-stained histology images with corresponding CLE images; adversarial training of GANs to translate between these two modalities has been successful (adapted from [112]). (f) Transfer recurrent feature learning utilizes adversarially trained discriminators in conjunction with an LSTM for state-of-the-art video classification performance (adapted from [112]).
Fig 7:
(a) Example automatic retinal layer segmentation using DL compared to manual segmentation (reprinted from [175]). (b) GAN for denoising OCT images (adapted from [181]). (c) Attention map overlaid on retinal images indicating the features that the CNN used for diagnosing normal versus age-related macular degeneration (AMD) [182] (Reproduced from Detection of features associated with neovascular age-related macular degeneration in ethnically distinct data sets by an optical coherence tomography: trained deep learning algorithm, Hyungtaek et al., Br. J. Ophthalmol. bjophthalmol-2020-316984, 2020, with permission from BMJ Publishing Group Ltd.).
Fig 8:
(a) Examples of using DL to predict blood flow from structural OCT image features (reprinted from [191]). (b) Example of deep spectral learning for label-free oximetry in visible-light OCT (reprinted from [192]). (c) Predicted blood oxygen saturation and the tandem prediction uncertainty from rat retina in vivo under hypoxia, normoxia, and hyperoxia (reprinted from [192]).
Fig 9:
Example of point source detection as a precursor to photoacoustic image formation after identifying true sources and removing reflection artifacts, modified from [201]. (©2018 IEEE. Adapted, with permission, from Allman et al. Photoacoustic source detection and reflection artifact removal enabled by deep learning, IEEE Transactions on Medical Imaging. 2018; 37:1464–1477.)
Fig 10:
Example of blood vessel and tumor phantom results with multiple DL approaches. (Reprinted from [205].)
Fig 11:
Reconstruction for a mouse with a tumor (right thigh), where higher absorption values are resolved for the tumor area (slices at z = 15 and 3.8 mm) with the DNN in (a) compared to the L1-based inversion in (b) (adapted from [252] with permission from the authors).
Fig 12:
Hemodynamic time series for prediction of epileptic seizures using a CNN (Reprinted from Computers in Biology and Medicine, 11, 2019, 103355, Rosas-Romero et al., Prediction of epileptic seizures with convolutional neural networks and functional near-infrared spectroscopy signals, Copyright (2020), with permission from Elsevier).


References

    1. Yun SH, Kwok SJJ. Light in diagnosis, therapy and surgery. Nat Biomed Eng. 2017;1:1–16. - PMC - PubMed
    2. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444. - PubMed
    3. Goodfellow I, Bengio Y, Courville A. Deep Learning. Vol 1. MIT Press, Cambridge; 2016.
    4. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88. arXiv:1702.05747. - PubMed
    5. Nichols JA, Chan HWH, Baker MAB. Machine learning: applications of artificial intelligence to imaging and diagnosis. Biophys Rev. 2019;11:111–118. - PMC - PubMed
