Should AI models be explainable to clinicians?
- PMID: 39267172
- PMCID: PMC11391805
- DOI: 10.1186/s13054-024-05005-y
Abstract
In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and even as the field of XAI grows, trade-offs between performance and explainability may still be required.
Keywords: Algorithmic bias; Clinical decision-making; Explainable artificial intelligence; Fairness; Generative artificial intelligence; Interpretability; Patient autonomy; Regulatory compliance; Transparency.
© 2024. The Author(s).
Conflict of interest statement
Andre L. HOLDER, MD, MSc, has received speaker fees from Baxter International and has served as a consultant for Philips Medical. He also has funding from the NIH (NIGMS) for developing a sepsis algorithm. The other authors have no conflicts of interest to declare.
