Review
Prog Retin Eye Res. 2025 May;106:101352. doi: 10.1016/j.preteyeres.2025.101352. Epub 2025 Mar 12.

AI explainability in oculomics: How it works, its role in establishing trust, and what still needs to be addressed

Songyang An et al. Prog Retin Eye Res. 2025 May.

Abstract

Recent developments in artificial intelligence (AI) have seen a proliferation of algorithms that can now predict a range of systemic diseases from retinal images. Unlike traditional retinal disease detection AI models, which are trained on well-recognised retinal biomarkers, systemic disease detection or "oculomics" models use a range of often poorly characterised retinal biomarkers to arrive at their predictions. As the retinal phenotype that oculomics models use may not be intuitive, clinicians have to rely on the developers' explanations of how these algorithms work in order to understand them. The discipline of understanding how AI algorithms work employs two similar but distinct terms: Explainable AI and Interpretable AI (iAI). Explainable AI describes the holistic functioning of an AI system, including its impact and potential biases. Interpretable AI concentrates solely on examining and understanding the workings of the AI algorithm itself. iAI tools are therefore what clinicians must rely on if they are to understand how an algorithm works and whether its predictions are reliable. The iAI tools that developers use fall into two broad categories: intrinsic methods, which improve transparency through architectural changes, and post-hoc methods, which explain trained models via external algorithms. Currently, post-hoc methods, class activation maps in particular, are far more widely used than other techniques, but they have limitations, especially when applied to oculomics AI models. Writing for clinicians, we examine how the key iAI methods work, what they are designed to do, and what their limitations are when applied to oculomics AI. We conclude by discussing how combining existing iAI techniques with novel approaches could allow AI developers to better explain how their oculomics models work and to reassure clinicians that the results issued are reliable.
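As a rough illustration of the post-hoc category mentioned above, the sketch below computes a Grad-CAM-style class activation map in PyTorch. It is not the authors' implementation: the torchvision ResNet-18, the hooked layer ("layer4"), and the random input tensor are stand-ins, assumed here purely to show the mechanics of weighting a convolutional layer's feature maps by their class-score gradients.

```python
# Minimal Grad-CAM sketch (one common class activation map method).
# Assumptions for illustration only: a torchvision ResNet-18 stands in for an
# oculomics classifier, and a random tensor stands in for a retinal image.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block to capture its feature maps and their gradients.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

image = torch.randn(1, 3, 224, 224)            # placeholder for a fundus image
logits = model(image)
class_idx = logits.argmax(dim=1).item()
logits[0, class_idx].backward()                # gradient of the predicted class score

# Weight each feature map by its average gradient, sum, keep positive evidence,
# then upsample and normalise to [0, 1] so it can be overlaid on the input image.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The resulting heatmap highlights the image regions that most increased the predicted class score, which is the kind of saliency evidence clinicians are typically shown; the review discusses why such maps can be insufficient for oculomics models whose biomarkers are diffuse or poorly characterised.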

Keywords: Artificial intelligence; Disease classification; Interpretable AI; Intrinsic interpretability; Oculomics; Post-hoc interpretability; Retinal imaging.


Conflict of interest statement

Declaration of interest statement: S. An is an employee of Toku Eyes Limited NZ. D. Squirrell is a co-founder and medical advisor at Toku Eyes Limited NZ. M. McConnell and J. Marshall are shareholders of Toku Inc US. Toku Eyes Limited NZ is an AI company specializing in the development of retinal AI models. The authors report no other conflicts of interest in this work.