Transl Vis Sci Technol. 2023 Apr 3;12(4):12.
doi: 10.1167/tvst.12.4.12.

Multitask Learning for Activity Detection in Neovascular Age-Related Macular Degeneration


Murat Seçkin Ayhan et al. Transl Vis Sci Technol. 2023.

Abstract

Purpose: The purpose of this study was to compare the performance and explainability of a multitask convolutional deep neural network with single-task networks for activity detection in neovascular age-related macular degeneration (nAMD).

Methods: From 70 patients (46 women and 24 men) who attended the University Eye Hospital Tübingen, 3762 optical coherence tomography B-scans (right eye = 2011 and left eye = 1751) were acquired with a Heidelberg Spectralis device (Heidelberg Engineering, Heidelberg, Germany). B-scans were graded by a retina specialist and an ophthalmology resident, and then used to develop a multitask deep learning model to predict nAMD disease activity along with the presence of sub- and intraretinal fluid. We compared performance metrics against single-task networks and visualized the deep neural network (DNN)-based decisions with t-distributed stochastic neighbor embedding and clinically validated saliency mapping techniques.

Results: The multitask model surpassed single-task networks in accuracy for activity detection (94.2% vs. 91.2%). The area under the receiver operating characteristic curve was 0.984 for the multitask model versus 0.974 for the single-task model. Furthermore, compared to single-task networks, visualizations via t-distributed stochastic neighbor embedding and saliency maps showed that the multitask network's decisions for nAMD activity detection were highly consistent with the presence of both sub- and intraretinal fluid.
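For readers unfamiliar with the metric, the area under the receiver operating characteristic curve equals the probability that a randomly chosen positive B-scan receives a higher score than a randomly chosen negative one. A minimal sketch of computing it from labels and scores (the function name and toy data are illustrative; the paper does not specify its implementation):

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a random positive outranks
    a random negative; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two active (1) and two inactive (0) scans.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

With perfect ranking the value reaches 1.0, which puts the reported 0.984 versus 0.974 in context.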

Conclusions: Multitask learning increases the performance of neural networks for predicting disease activity while providing clinicians with easily accessible decision support that resembles human reasoning.

Translational relevance: By improving the performance and transparency of automated nAMD activity detection, multitask DNNs can support the translation of machine learning research into clinical decision support systems.


Conflict of interest statement

Disclosure: M.S. Ayhan, None; H. Faber, received medical training event costs from Novartis; L. Kühlewein, receives research funding and honoraria from Novartis via third-party accounts of the University Eye Hospital; W. Inhoffen, None; G. Aliyeva, None; F. Ziemssen, received consulting fees from Allergan, Bayer HealthCare, Boehringer-Ingelheim, Novo Nordisk, MSD, and Novartis and speaker fees from Alimera, Allergan, Bayer HealthCare, and Novartis, and is involved in research funded by grants from Bayer Healthcare (F), Biogen (F), Clearside (F), Ionis (F), Kodiak (F), Novartis (F), Ophtea (F), Regeneron (F), and Roche/Genentech (F); P. Berens, None

Figures

Figure 1.
Exemplary retinal images (B-scans) with neovascular age-related macular degeneration (nAMD). (A) No nAMD activity. (B) nAMD activity due to subretinal fluid (SRF). (C) nAMD activity due to intraretinal fluid (IRF). (D) nAMD activity due to both SRF and IRF.
Figure 2.
A deep neural network for simultaneous detection of subretinal and intraretinal fluid as well as nAMD activity from OCT B-scans. Given a B-scan, the convolutional stack of the InceptionV3 architecture extracts 2048 feature maps. These are average- and max-pooled and fed into a fully connected (dense) layer with 1024 units forming a shared representation. Task-specific heads then specialize into the individual tasks, and single units with a sigmoid function perform binary classification based on 256 task-specific features.
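The three sigmoid heads are typically trained jointly by combining per-task binary cross-entropies into one objective. A minimal sketch of such a joint loss, assuming equal task weights (the names `bce` and `multitask_loss` are illustrative; the paper's exact loss weighting is not given here):

```python
import math

def bce(y, p, eps=1e-7):
    """Binary cross-entropy for one target y in {0, 1} and prediction p in (0, 1)."""
    p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def multitask_loss(targets, preds, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of per-task losses for (nAMD activity, SRF, IRF)."""
    return sum(w * bce(y, p) for w, (y, p) in zip(weights, zip(targets, preds)))

# Uninformative predictions (0.5) incur log(2) loss per task.
print(multitask_loss((1, 0, 1), (0.5, 0.5, 0.5)))
```

Because all three heads backpropagate through the same shared 1024-unit layer, the representation is encouraged to encode fluid-related features useful to every task at once.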
Figure 3.
Performance curves of the selected models on the test images. Area under the curve (AUC) values summarize each model's overall performance in a single number (higher is better). (A) Receiver operating characteristic (ROC) curves. (B) Precision-recall curves.
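Each point on a precision-recall curve corresponds to one decision threshold on the sigmoid output. A small sketch of how a single (precision, recall) point can be computed (function name and toy data are illustrative):

```python
def precision_recall(labels, scores, threshold):
    """Precision and recall when predicting 'active' for scores >= threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Sweeping the threshold over all observed scores traces the full curve.
print(precision_recall([1, 0, 1, 0], [0.9, 0.6, 0.4, 0.2], 0.5))  # → (0.5, 0.5)
```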
Figure 4.
Visualization of data via t-SNE of ensemble-based representations. Only the test data are shown. (A) Low-dimensional embedding of images based on the 1024-dimensional features from the pre-penultimate layers of single-task networks, colored with respect to the task-specific labels. (B) Same as in A but with respect to 1024 features from the shared representation layer of multitask networks. (C) Same map as in B but colored with respect to correct and wrong predictions. (D) Same map as in B but colored with respect to uncertainty, min-max normalized to [0, 1].
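The uncertainty coloring in panel D relies on min-max normalization to [0, 1]. That step can be sketched as follows (the function name is illustrative):

```python
def min_max_normalize(values):
    """Rescale values linearly so the minimum maps to 0 and the maximum to 1."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant input: no spread to normalize
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_normalize([2.0, 4.0, 6.0]))  # → [0.0, 0.5, 1.0]
```

Normalizing per map makes the color scale comparable across panels regardless of the raw uncertainty range.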
Figure 5.
Layer-wise visualization of test data via t-SNE. Starting just before the first inception module (A) and reading out feature representations yielded by every other module (B-F) along with the last inception module (G), the shared representation layer (H), and the nAMD activity detection head's penultimate layer (I), we performed t-SNE with the aforementioned settings. Useful representations emerged toward the end of the convolutional stack, and the task-specific representation allowed the best separation of nAMD-active cases from inactive ones.
Figure 6.
Exemplary saliency maps for four optical coherence tomography (OCT) images. The first column displays the OCT B-scan with the corresponding labeling of a retina specialist. The second to fourth columns show saliency maps and the network's confidence for active nAMD (yellow), subretinal fluid (SRF; cyan), and intraretinal fluid (IRF; magenta). Note that saliency maps are only shown in case of confidence >0.5.
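Saliency maps assign each input pixel the magnitude of the output's sensitivity to it. The underlying idea can be illustrated with a finite-difference approximation on a toy scoring function (the study uses clinically validated backpropagation-based techniques; this sketch, with illustrative names, only conveys the principle):

```python
def finite_difference_saliency(f, x, eps=1e-4):
    """Approximate |df/dx_i| for each input element by central differences."""
    sal = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        sal.append(abs(f(xp) - f(xm)) / (2 * eps))
    return sal

# Toy model: the score depends only on the first "pixel",
# so all saliency should concentrate there.
score = lambda v: 3.0 * v[0]
print(finite_difference_saliency(score, [1.0, 2.0]))
```

In practice the gradient is obtained by backpropagation rather than perturbation, which yields the per-pixel maps overlaid on the B-scans above.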
Figure 7.
Exemplary saliency maps as in Figure 6, but with results obtained from single-task models.


