A Framework for Interpretability in Machine Learning for Medical Imaging
- PMID: 39421804
- PMCID: PMC11486155
- DOI: 10.1109/access.2024.3387702
Abstract
Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness about what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common to medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations in order to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, to inspire developers of models in the medical imaging field to reason more deeply about what interpretability is achieving, and to suggest future directions for interpretability research.
Keywords: Interpretability; explainability; machine learning; medical imaging.
Similar articles
- The future of Cochrane Neonatal. Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12. PMID: 33036834
- Definitions, methods, and applications in interpretable machine learning. Proc Natl Acad Sci U S A. 2019 Oct 29;116(44):22071-22080. doi: 10.1073/pnas.1900654116. Epub 2019 Oct 16. PMID: 31619572. Free PMC article.
- Transparency of deep neural networks for medical image analysis: A review of interpretability methods. Comput Biol Med. 2022 Jan;140:105111. doi: 10.1016/j.compbiomed.2021.105111. Epub 2021 Dec 4. PMID: 34891095. Review.
- A review of explainable AI in the satellite data, deep machine learning, and human poverty domain. Patterns (N Y). 2022 Oct 14;3(10):100600. doi: 10.1016/j.patter.2022.100600. eCollection 2022 Oct 14. PMID: 36277818. Free PMC article. Review.
- Explainability of deep learning models in medical video analysis: a survey. PeerJ Comput Sci. 2023 Mar 14;9:e1253. doi: 10.7717/peerj-cs.1253. eCollection 2023. PMID: 37346619. Free PMC article.
Cited by
- Interpretable machine learning to evaluate relationships between DAO/DAOA (pLG72) protein data and features in clinical assessments, functional outcome, and cognitive function in schizophrenia patients. Schizophrenia (Heidelb). 2025 Feb 22;11(1):27. doi: 10.1038/s41537-024-00548-z. PMID: 39987274. Free PMC article.
- Progress in the application of machine learning in CT diagnosis of acute appendicitis. Abdom Radiol (NY). 2025 Sep;50(9):4040-4049. doi: 10.1007/s00261-025-04864-5. Epub 2025 Mar 17. PMID: 40095017. Review.
- Generating Novel Brain Morphology by Deforming Learned Templates. ArXiv [Preprint]. 2025 Mar 7:arXiv:2503.03778v2. PMID: 40093358. Free PMC article. Preprint.
- Causality and scientific explanation of artificial intelligence systems in biomedicine. Pflugers Arch. 2025 Apr;477(4):543-554. doi: 10.1007/s00424-024-03033-9. Epub 2024 Oct 29. PMID: 39470762. Free PMC article. Review.