Review

Re-focusing explainability in medicine

Laura Arbelaez Ossa et al. Digit Health. 2022 Feb 11;8:20552076221074488. doi: 10.1177/20552076221074488. eCollection 2022 Jan-Dec.

Abstract

Using artificial intelligence to improve patient care is a cutting-edge methodology, but its implementation in clinical routine has been limited by significant concerns about understanding its behavior. One major barrier is the explainability dilemma: how much explanation is required to use artificial intelligence safely in healthcare. A key issue is the lack of consensus on the definition of explainability among experts, regulators, and healthcare professionals, resulting in a wide variety of terminology and expectations. This paper aims to fill this gap by defining minimal explainability standards that serve the views and needs of essential stakeholders in healthcare. Specifically, we propose minimal explainability criteria that can support doctors' understanding, meet patients' needs, and fulfill legal requirements. Explainability therefore need not be exhaustive but sufficient for doctors and patients to comprehend an artificial intelligence model's clinical implications and for the model to be integrated safely into clinical practice. Thus, minimally acceptable standards for explainability are context-dependent and should respond to the specific needs and potential risks of each clinical scenario to support a responsible and ethical implementation of artificial intelligence.

Keywords: Explainability; digital health; explainable AI; human-centered AI; medicine.


Conflict of interest statement

Declaration of conflicting interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Figures

Figure 1. Example of explainability criteria for the construction of sufficient understanding.

Figure 2. Explainability evaluation flow depending on clinical implementation.
