Comparative Study
Ophthalmology. 2019 Apr;126(4):565-575. doi: 10.1016/j.ophtha.2018.11.015. Epub 2018 Nov 22.

DeepSeeNet: A Deep Learning Model for Automated Classification of Patient-based Age-related Macular Degeneration Severity from Color Fundus Photographs

Yifan Peng et al. Ophthalmology. 2019 Apr.

Abstract

Purpose: In assessing the severity of age-related macular degeneration (AMD), the Age-Related Eye Disease Study (AREDS) Simplified Severity Scale predicts the risk of progression to late AMD. However, its manual use requires the time-consuming participation of expert practitioners. Although several automated deep learning systems have been developed for classifying color fundus photographs (CFP) of individual eyes by AREDS severity score, none to date has used a patient-based scoring system that uses images from both eyes to assign a severity score.

Design: DeepSeeNet, a deep learning model, was developed to classify patients automatically by the AREDS Simplified Severity Scale (score 0-5) using bilateral CFP.

Participants: DeepSeeNet was trained on 58 402 images and tested on 900 images from the longitudinal follow-up of 4549 participants from AREDS. Gold standard labels were obtained using reading center grades.

Methods: DeepSeeNet simulates the human grading process by first detecting individual AMD risk factors (drusen size, pigmentary abnormalities) for each eye and then calculating a patient-based AMD severity score using the AREDS Simplified Severity Scale.
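The patient-based scoring step described above can be sketched in code. The point rules below follow the published AREDS Simplified Severity Scale as commonly described (one point per eye for large drusen and for pigmentary abnormalities, a bilateral intermediate-drusen adjustment, and late AMD counted as two risk factors); they are an illustration, not code taken from DeepSeeNet itself, and the function name and dictionary encoding are assumptions, with the per-eye encoding matching Figure 1 of the paper.

```python
def simplified_severity_score(eyes):
    """Patient-level AREDS Simplified Severity Scale score (0-5).

    `eyes` is a pair of dicts, one per eye, encoded as in Figure 1:
      drusen:   0 = small/none, 1 = medium, 2 = large
      pigment:  0 = absent, 1 = present
      late_amd: 0 = no, 1 = yes
    """
    score = 0
    for eye in eyes:
        if eye["late_amd"]:
            score += 2                     # late AMD counts as two risk factors
        else:
            score += int(eye["drusen"] == 2)  # large drusen
            score += eye["pigment"]           # pigmentary abnormality
    # Bilateral medium drusen (no large drusen, no late AMD) adds one point.
    if all(e["drusen"] == 1 and not e["late_amd"] for e in eyes):
        score += 1
    return min(score, 5)
```

Summing detected per-eye risk factors this way is what makes the model's output interpretable: each point of the final score can be traced back to a specific finding in a specific eye.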

Main outcome measures: Overall accuracy, specificity, sensitivity, Cohen's kappa, and area under the curve (AUC). The performance of DeepSeeNet was compared with that of retinal specialists.
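Cohen's kappa, reported throughout the Results, measures agreement between two raters (here, model vs. gold standard, or specialist vs. gold standard) corrected for chance agreement. A minimal standard-library sketch of the unweighted statistic (the function name is illustrative; the paper does not specify its implementation):

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is the agreement expected by chance."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[k] * count_b[k] for k in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Kappa of 1 indicates perfect agreement and 0 indicates chance-level agreement, which is why the patient-based kappa values below (0.558 vs. 0.467) are more informative than raw accuracy for a 6-class problem with imbalanced classes.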

Results: DeepSeeNet performed better on patient-based classification (accuracy = 0.671; kappa = 0.558) than retinal specialists (accuracy = 0.599; kappa = 0.467), with high AUC in the detection of large drusen (0.94), pigmentary abnormalities (0.93), and late AMD (0.97). DeepSeeNet also outperformed retinal specialists in the detection of large drusen (accuracy 0.742 vs. 0.696; kappa 0.601 vs. 0.517) and pigmentary abnormalities (accuracy 0.890 vs. 0.813; kappa 0.723 vs. 0.535) but showed lower performance in the detection of late AMD (accuracy 0.967 vs. 0.973; kappa 0.663 vs. 0.754).

Conclusions: By simulating the human grading process, DeepSeeNet demonstrated high accuracy with increased transparency in the automated assignment of individual patients to AMD risk categories based on the AREDS Simplified Severity Scale. These results highlight the potential of deep learning to assist and enhance clinical decision-making in patients with AMD, such as early AMD detection and risk prediction for developing late AMD. DeepSeeNet is publicly available at https://github.com/ncbi-nlp/DeepSeeNet.


Conflict of interest statement

Conflict of interest: No conflicting relationship exists for any author.

Figures

Figure 1.
Scoring schematic for participants with and without late age-related macular degeneration. Pigmentary abnormalities: 0 = no, 1 = yes; drusen size: 0 = small or none, 1 = medium, 2 = large; late AMD: 0 = no, 1 = yes.
Figure 2.
Receiver operating characteristic curves for large drusen, pigmentary abnormalities, and late AMD classification. Retinal specialists' performance levels are represented as a single blue point.
Figure 3.
Confusion matrices comparing retinal specialists' performance with that of DeepSeeNet based on the test set values. The rows and columns of each matrix are the Scale scores (0-5).
Figure 4.
t-SNE visualization of the last hidden layer representation for each sub-network of DeepSeeNet. Each point represents a fundus image. Different colors represent the different classes of the respective risk factor or late AMD.
Figure 5.
Image-specific class saliency maps that highlight the macular region for color fundus photographs with large drusen, pigmentary abnormalities, and late AMD.

