Crit Care. 2024 Sep 12;28(1):301. doi: 10.1186/s13054-024-05005-y.

Should AI models be explainable to clinicians?

Gwénolé Abgrall et al. Crit Care. 2024.

Abstract

In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and a trade-off between performance and explainability may be unavoidable, even as XAI continues to grow as a field.

Keywords: Algorithmic bias; Clinical decision-making; Explainable artificial intelligence; Fairness; Generative artificial intelligence; Interpretability; Patient autonomy; Regulatory compliance; Transparency.


Conflict of interest statement

Andre L. Holder, MD, MSc, has received speaker fees from Baxter International and has served as a consultant for Philips Medical. He also receives funding from the NIH (NIGMS) to develop a sepsis algorithm. The other authors have no conflicts of interest to declare.

Figures

Fig. 1. Which explainability for which audience?

