Applications of interpretability in deep learning models for ophthalmology
- PMID: 34231530
- PMCID: PMC8373813
- DOI: 10.1097/ICU.0000000000000780
Abstract
Purpose of review: In this article, we introduce the concept of model interpretability, review its applications in deep learning models for clinical ophthalmology, and discuss its role in the integration of artificial intelligence in healthcare.
Recent findings: The advent of deep learning in medicine has introduced models with remarkable accuracy. However, the inherent complexity of these models undermines their users' ability to understand, debug and ultimately trust them in clinical practice. Novel methods are increasingly being explored to improve models' 'interpretability' and draw clearer associations between their outputs and features in the input dataset. In the field of ophthalmology, interpretability methods have enabled users to make informed adjustments, identify clinically relevant imaging patterns, and predict outcomes in deep learning models.
Summary: Interpretability methods support the transparency necessary to implement, operate and modify complex deep learning models. These benefits are increasingly being demonstrated in models for clinical ophthalmology. As quality standards for deep learning models used in healthcare continue to evolve, interpretability methods may prove influential in their path to regulatory approval and acceptance in clinical practice.
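
To make the idea of linking a model's output back to features of the input concrete, the sketch below shows one widely used interpretability technique, a gradient-based saliency map. This is an illustrative assumption, not the specific methods reviewed in this article; the pretrained ImageNet network and the image file name "fundus_example.png" are placeholders standing in for a trained ophthalmology model and a retinal image.

# Illustrative sketch only (assumption, not the article's method): a simple
# gradient-based saliency map that attributes a prediction to input pixels.
import torch
from PIL import Image
from torchvision import models, transforms

# Stand-in classifier; a real ophthalmology model (e.g., trained on fundus
# photographs) would replace this pretrained ImageNet network.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# "fundus_example.png" is a hypothetical file name used only for illustration.
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
image = preprocess(Image.open("fundus_example.png").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Forward pass, then backpropagate the top-class score to the input pixels.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Per-pixel gradient magnitude: large values mark regions that most influence
# the prediction and can be overlaid on the image as a heatmap for review.
saliency = image.grad.abs().max(dim=1)[0].squeeze()  # shape: (224, 224)

In practice, such attribution maps are what allow clinicians to check whether a model's prediction rests on clinically plausible regions of the image rather than on artifacts.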
Copyright © 2021 Wolters Kluwer Health, Inc. All rights reserved.