Automated Identification of Referable Retinal Pathology in Teleophthalmology Setting

Qitong Gao et al. Transl Vis Sci Technol. 2021 May 3;10(6):30. doi: 10.1167/tvst.10.6.30.

Abstract

Purpose: This study aims to meet a growing need for a fully automated, learning-based interpretation tool for retinal images obtained remotely (e.g. teleophthalmology) through different imaging modalities that may include imperfect (uninterpretable) images.

Methods: We retrospectively studied 1148 optical coherence tomography (OCT) and color fundus photography (CFP) retinal images obtained with Topcon's Maestro care unit from 647 patients with diabetes. To identify retinal pathology, we developed a convolutional neural network (CNN) with dual-modal inputs (i.e. paired CFP and OCT images). We also developed a novel alternate gradient descent algorithm to train the CNN, which allows the use of uninterpretable CFP/OCT images (i.e. ungradable images that lack sufficient image biomarkers for the reviewer to conclude the absence or presence of retinal pathology). The dataset was split 9:1 into training and testing sets for training and validating the CNN. Paired CFP/OCT inputs (obtained from a single eye of a patient) were labeled retinal pathology negative (RPN; 924 images) if both imaging modalities showed no retinal pathology, or if one modality was uninterpretable and the other showed no retinal pathology. If either imaging modality exhibited referable retinal pathology, the pair was labeled retinal pathology positive (RPP; 224 images).
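The labeling rule described above can be sketched as a small helper. This is a hypothetical sketch: the grade names, and the handling of pairs in which both modalities are ungradable (assumed excluded here), are our assumptions and are not stated in the abstract.

```python
def consensus_label(cfp_grade, oct_grade):
    """Label a paired CFP/OCT input per the abstract's consensus rule.

    Each grade is one of "normal", "pathology", or "ungradable"
    (hypothetical grade names). Returns:
      "RPP"  - either modality shows referable retinal pathology
      "RPN"  - both normal, or one ungradable and the other normal
      None   - both ungradable (assumed excluded from the dataset)
    """
    grades = {cfp_grade, oct_grade}
    if "pathology" in grades:
        return "RPP"
    if grades == {"ungradable"}:
        return None
    return "RPN"
```

Note that an ungradable image is overruled in either direction: paired with a normal image it yields RPN, paired with a pathological image it yields RPP.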

Results: Our approach achieved 88.60% (95% confidence interval [CI] = 82.76% to 94.43%) accuracy in identifying pathology, with a false negative rate (FNR) of 12.28% (95% CI = 6.26% to 18.31%), recall (sensitivity) of 87.72% (95% CI = 81.69% to 93.74%), specificity of 89.47% (95% CI = 83.84% to 95.11%), and an area under the receiver operating characteristic curve (AUC-ROC) of 92.74% (95% CI = 87.71% to 97.76%).
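These metrics are standard functions of a 2x2 confusion matrix. The sketch below uses illustrative counts (tp=50, fn=7, fp=6, tn=51; not stated in the abstract) chosen so that they reproduce the reported percentages:

```python
def confusion_metrics(tp, fn, fp, tn):
    """Accuracy, FNR, recall (sensitivity), and specificity from counts."""
    total = tp + fn + fp + tn
    return {
        "accuracy": (tp + tn) / total,
        "fnr": fn / (fn + tp),          # missed positives / all positives
        "recall": tp / (tp + fn),       # sensitivity
        "specificity": tn / (tn + fp),
    }

m = confusion_metrics(tp=50, fn=7, fp=6, tn=51)
```

With these counts, accuracy is 101/114 ≈ 88.60% and recall is 50/57 ≈ 87.72%, matching the figures above; note that FNR = 1 − recall by definition.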

Conclusions: Our model can be successfully deployed in clinical practice to facilitate automated remote retinal pathology identification.

Translational relevance: A fully automated tool for early diagnosis of retinal pathology might allow for earlier treatment and improved visual outcomes.


Conflict of interest statement

Disclosure: Q. Gao, None; J. Amason, None; S. Cousins, NotalVision (I), Stealth (C), PanOptica (C), Merck Pharmaceuticals (C), Clearside Biomedical (C); M. Pajic, None; M. Hadziahmetovic, None

Figures

Figure 1. Overview of the proposed CNN model design methodology. The OCT and CFP images obtained from the automated screening system were first labeled respectively by experts (step I), and the individual diagnoses were used to generate training labels according to the Label Consensus Mechanism (step II). The two types of images were augmented and pre-processed to constitute the inputs to the CNN (step III), before being used, along with the obtained labels, for the CNN training (step IV).
Figure 2. The architecture of the proposed CNN model with Class Activation Mapping (CAM). The OCT and CFP modalities are first processed with two sets of convolutional filters respectively; the resulting features are then concatenated and processed by a fully connected layer (θ3) for classification. CAMs are generated using the outputs from the two global average pooling layers and weights from the fully connected layer.
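In the standard CAM formulation that this caption references, the map for a class is a weighted sum of the final convolutional feature maps, using that class's weights from the fully connected layer. A minimal NumPy sketch follows; the array shapes and the min-max normalization step are our assumptions, not details given in the paper.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a class activation map.

    feature_maps: (C, H, W) activations from the last convolutional layer
                  (the inputs to global average pooling).
    fc_weights:   (num_classes, C) weights of the final fully connected layer.
    Returns the (H, W) map for class_idx, min-max normalized to [0, 1]
    for visualization.
    """
    # Weighted sum over the channel axis: sum_c w[class, c] * F[c, :, :]
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In the dual-modal architecture described above, one such map would be computed per branch (OCT and CFP), each using the fully connected weights that correspond to that branch's pooled features.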
Figure 3. Accuracy-false negative rate (ACC-FNR) curve (A) and ROC curve (B) on the testing dataset. (A) ACC-FNR curves for our approach and baseline C. Baseline C has a lower FNR than our approach at a decision threshold of 0.5; however, our method achieves both higher accuracy and lower FNR at a decision threshold providing an optimal tradeoff between accuracy and FNR (e.g. the threshold of 0.65, shown by the red dot in the plot). (B) ROC curves for our approach and the baseline methods. Our approach achieves the highest AUC among all the baseline methods.
