The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies
- PMID: 33309898
- DOI: 10.1016/j.jbi.2020.103655
Abstract
Artificial intelligence (AI) has huge potential to improve the health and well-being of people, but adoption in clinical practice is still limited. Lack of transparency is identified as one of the main barriers to implementation, as clinicians should be confident the AI system can be trusted. Explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI. In this paper we review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems for the health-care domain and contribute to formalization of the field of explainable AI. We argue the reason to demand explainability determines what should be explained as this determines the relative importance of the properties of explainability (i.e. interpretability and fidelity). Based on this, we propose a framework to guide the choice between classes of explainable AI methods (explainable modelling versus post-hoc explanation; model-based, attribution-based, or example-based explanations; global and local explanations). Furthermore, we find that quantitative evaluation metrics, which are important for objective standardized evaluation, are still lacking for some properties (e.g. clarity) and types of explanations (e.g. example-based methods). We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice and complementary measures might be needed to create trustworthy AI in health care (e.g. reporting data quality, performing extensive (external) validation, and regulation).
Keywords: Explainable artificial intelligence; Explainable modelling; Interpretability; Post-hoc explanation; Trustworthy artificial intelligence.
Copyright © 2020 The Authors. Published by Elsevier Inc. All rights reserved.
Similar articles
- Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey. J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611. PMID: 34898454. Free PMC article.
- The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons. BMC Med Ethics. 2024 Oct 1;25(1):104. doi: 10.1186/s12910-024-01103-2. PMID: 39354512. Free PMC article.
- Explainable artificial intelligence in emergency medicine: an overview. Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28. PMID: 38012816. Free PMC article.
- A mental models approach for defining explainable artificial intelligence. BMC Med Inform Decis Mak. 2021 Dec 9;21(1):344. doi: 10.1186/s12911-021-01703-7. PMID: 34886856. Free PMC article.
- The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9. PMID: 34711379. Review.
Cited by
- A deep learning analysis of stroke onset time prediction and comparison to DWI-FLAIR mismatch. Neuroimage Clin. 2023;40:103544. doi: 10.1016/j.nicl.2023.103544. Epub 2023 Nov 16. PMID: 38000188. Free PMC article.
- State of the Art in 2022 PET/CT in Breast Cancer: A Review. J Clin Med. 2023 Jan 27;12(3):968. doi: 10.3390/jcm12030968. PMID: 36769616. Free PMC article. Review.
- The Use of AI in Diagnosing Diseases and Providing Management Plans: A Consultation on Cardiovascular Disorders With ChatGPT. Cureus. 2023 Aug 7;15(8):e43106. doi: 10.7759/cureus.43106. eCollection 2023 Aug. PMID: 37692649. Free PMC article.
- Dynamic early warning scores for predicting clinical deterioration in patients with respiratory disease. Respir Res. 2022 Aug 11;23(1):203. doi: 10.1186/s12931-022-02130-6. PMID: 35953815. Free PMC article.
- From Movements to Metrics: Evaluating Explainable AI Methods in Skeleton-Based Human Activity Recognition. Sensors (Basel). 2024 Mar 18;24(6):1940. doi: 10.3390/s24061940. PMID: 38544204. Free PMC article.