Feasibility of Automated Segmentation of Pigmented Choroidal Lesions in OCT Data With Deep Learning

Philippe Valmaggia et al. Transl Vis Sci Technol. 2022 Sep 1;11(9):25. doi: 10.1167/tvst.11.9.25.

Abstract

Purpose: To evaluate the feasibility of automated segmentation of pigmented choroidal lesions (PCLs) in optical coherence tomography (OCT) data and compare the performance of different deep neural networks.

Methods: Swept-source OCT image volumes were annotated pixel-wise for PCLs and background. Three deep neural network architectures were applied to the data: the multi-dimensional gated recurrent units (MD-GRU), the V-Net, and the nnU-Net. The nnU-Net was used to compare the performance of two-dimensional (2D) versus three-dimensional (3D) predictions.
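The abstract describes the pipeline only at the architecture level. As a rough, hypothetical illustration of the data preparation step, the sketch below converts one swept-source OCT volume and its pixel-wise PCL label into the paired NIfTI files that nnU-Net-style 3D pipelines commonly consume; the file naming, voxel spacing, and array layout are assumptions, not details from the paper.

```python
import numpy as np
import nibabel as nib

def save_case_as_nifti(volume: np.ndarray, label: np.ndarray,
                       spacing_mm=(0.012, 0.012, 0.012),
                       case_id="PCL_001", out_dir="."):
    """Write an OCT volume and its binary PCL label as paired NIfTI files.

    `volume` and `label` are (z, y, x) arrays; `label` holds 0 (background)
    and 1 (PCL). Spacing and naming here are illustrative placeholders.
    """
    assert volume.shape == label.shape, "image and label must align voxel-wise"
    # Encode the voxel size in the affine so 3D networks see the true scale.
    affine = np.diag(list(spacing_mm) + [1.0])
    nib.save(nib.Nifti1Image(volume.astype(np.float32), affine),
             f"{out_dir}/{case_id}_0000.nii.gz")  # image, channel 0
    nib.save(nib.Nifti1Image(label.astype(np.uint8), affine),
             f"{out_dir}/{case_id}.nii.gz")       # matching segmentation
```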

Results: A total of 121 OCT volumes were analyzed (100 normal and 21 with PCLs). Automated PCL segmentation was successful with all neural networks. The 3D nnU-Net predictions showed the highest recall, with a mean of 0.77 ± 0.22 (MD-GRU, 0.60 ± 0.31; V-Net, 0.61 ± 0.25). The 3D nnU-Net predicted PCLs with a Dice coefficient of 0.78 ± 0.13, outperforming the MD-GRU (0.62 ± 0.23) and the V-Net (0.59 ± 0.24). The smallest distance to the manual annotation was found with the 3D nnU-Net, with a mean maximum Hausdorff distance of 315 ± 172 µm (MD-GRU, 1542 ± 1169 µm; V-Net, 2408 ± 1060 µm). The 3D nnU-Net also showed superior performance compared with stacked 2D predictions.
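The reported metrics have standard definitions: recall is the fraction of annotated lesion voxels that the prediction recovers, the Dice coefficient is the overlap 2|P∩G| / (|P| + |G|), and the maximum (symmetric) Hausdorff distance is the largest gap between the two segmented surfaces. A minimal sketch computing all three from two binary 3D masks; the isotropic voxel spacing is a placeholder, not the scanner's true value.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def segmentation_metrics(pred, gt, spacing_um=(12.0, 12.0, 12.0)):
    """Recall, Dice coefficient, and symmetric Hausdorff distance (µm)
    between two binary 3D masks of identical shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    recall = tp / gt.sum()
    dice = 2 * tp / (pred.sum() + gt.sum())
    # Scale voxel indices to physical coordinates before measuring distances.
    p = np.argwhere(pred) * spacing_um
    g = np.argwhere(gt) * spacing_um
    hausdorff = max(directed_hausdorff(p, g)[0],
                    directed_hausdorff(g, p)[0])
    return recall, dice, hausdorff
```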

Conclusions: The feasibility of automated deep learning segmentation of PCLs was demonstrated in OCT data. The neural network architecture had a relevant impact on PCL predictions.

Translational relevance: This work serves as proof of concept for segmentations of choroidal pathologies in volumetric OCT data; improvements are conceivable to meet clinical demands for the diagnosis, monitoring, and treatment evaluation of PCLs.


Conflict of interest statement

Disclosure: P. Valmaggia, Swiss National Science Foundation (Grant 323530_199395) (F); P. Friedli, None; B. Hörmann, Supercomputing Systems AG, Zurich, Switzerland (E); P. Kaiser, Supercomputing Systems AG, Zurich, Switzerland (E); H.P.N. Scholl, Swiss National Science Foundation (Project funding: “Developing novel outcomes for clinical trials in Stargardt disease using structure/function relationship and deep learning” #310030_201165, and National Center of Competence in Research Molecular Systems Engineering: “NCCR MSE: Molecular Systems Engineering (phase II)” #51NF40-182895) (F), the Wellcome Trust (PINNACLE study) (F), and the Foundation Fighting Blindness Clinical Research Institute (ProgStar study) (F), Astellas Pharma Global Development, Inc./Astellas Institute for Regenerative Medicine (S), Boehringer Ingelheim Pharma GmbH & Co (S), Gyroscope Therapeutics Ltd. (S), Janssen Research & Development, LLC (Johnson & Johnson) (S), Novartis Pharma AG (CORE) (S), Okuvision GmbH (S), and Third Rock Ventures, LLC (S), Gerson Lehrman Group (C), Guidepoint Global, LLC (C), and Tenpoint Therapeutics Limited (C), Data Monitoring and Safety Board/Committee of Belite Bio (CT2019-CTN-04690-1), ReNeuron Group Plc/Ora Inc. (NCT02464436), F. Hoffmann-La Roche Ltd (VELODROME trial, NCT04657289; DIAGRID trial, NCT05126966) and member of the Steering Committee of Novo Nordisk (FOCUS trial; NCT03811561). All arrangements have been reviewed and approved by the University of Basel (Universitätsspital Basel, USB) and the Board of Directors of the Institute of Molecular and Clinical Ophthalmology Basel (IOB) in accordance with their conflict of interest policies. Compensation is being negotiated and administered as grants by USB, which receives them on its proper accounts. Funding organizations had no influence on the design, performance or evaluation of the current study (N); P.C. Cattin, (N); R. Sandkühler, (N); P.M. Maloca, Roche (C), MIMO AG and VisionAI, Switzerland (O)

Figures

Figure 1.
Manual annotation process for PCL segmentation in volumetric OCT data. (a) Outline of the PCL in a single B-scan. (b) Filling of the outline to generate a highlighted PCL. (c) Binary label creation through clearing of the background. (d) Assemblage of the binary labels for each B-scan in the volume produced a 3D label of the PCL.
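The assembly in panel (d) amounts to stacking the per-B-scan binary masks along the slow-scan axis. A minimal sketch, assuming each mask is a 2D array of 0s and 1s:

```python
import numpy as np

def assemble_volume_label(bscan_masks):
    """Stack per-B-scan binary masks (each (y, x)) along the slow-scan
    axis into a single (n_bscans, y, x) 3D PCL label."""
    return np.stack([np.asarray(m, dtype=np.uint8) for m in bscan_masks],
                    axis=0)
```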
Figure 2.
Image processing pipeline for PCL segmentation. (a) OCT data with their corresponding labels were loaded into different deep neural networks (MD-GRU, V-Net, and nnU-Net). (b) Training and testing of the neural networks were performed using k-fold cross-validation with training from scratch for each fold. (c) The resulting lesion predictions are displayed in blue, red, green, and yellow according to each neural network.
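In the cross-validation of panel (b), each fold holds out a disjoint subset of volumes and trains a fresh network on the remainder. A minimal sketch of the split bookkeeping; the fold count of five is an assumption, as the excerpt does not state k:

```python
import numpy as np
from sklearn.model_selection import KFold

volume_ids = np.arange(121)  # 121 OCT volumes, per the Results
kfold = KFold(n_splits=5, shuffle=True, random_state=0)  # k = 5 is an assumption

for fold, (train_idx, test_idx) in enumerate(kfold.split(volume_ids)):
    # Each fold would train a network from scratch on the training split and
    # predict on the held-out split; here we only report the split sizes.
    print(f"fold {fold}: {len(train_idx)} training / {len(test_idx)} test volumes")
```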
Figure 3.
Visualizations of the neural network predictions with overlays on the OCT images and the manual annotations. (a) Volume-rendered retinal and choroidal compartments. (b) Three-dimensional manual annotations and model predictions. (c) Two-dimensional OCT images and predictions as overlays. (d) Enlarged 2D manual annotations and predictions as overlays.
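Overlays like those in panels (c) and (d) can be produced by alpha-blending a colored mask onto the grayscale B-scan. A minimal sketch, assuming matplotlib and a binary prediction mask the same size as the B-scan:

```python
import numpy as np
import matplotlib.pyplot as plt

def show_overlay(bscan, pred, color=(0.0, 1.0, 0.0), alpha=0.4):
    """Render a grayscale B-scan with a semi-transparent mask overlay."""
    pred = np.asarray(pred)
    rgba = np.zeros(pred.shape + (4,))
    rgba[..., :3] = color
    rgba[..., 3] = alpha * (pred > 0)  # fully transparent outside the mask
    plt.imshow(bscan, cmap="gray")
    plt.imshow(rgba)
    plt.axis("off")
    plt.show()
```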
Figure 4.
Example automated PCL segmentations generated using the 3D nnU-Net. PCL predictions are shown as a green overlay, and manual annotations are shown in white. (a, d, g) Volume-rendered retinal and choroidal compartments; the predicted PCL is shown with its corresponding Dice coefficient and maximum Hausdorff distance. (b, e, h) 3D predictions overlaid on the manual annotations. (c, f, i) Enlarged 2D manual annotations with overlaid predictions. Axial stretching was applied to obtain isometric pixels for better visualization.

