The clinician-AI interface: intended use and explainability in FDA-cleared AI devices for medical image interpretation

Stephanie L McNamara et al. NPJ Digit Med. 2024 Mar 26;7(1):80. doi: 10.1038/s41746-024-01080-1.

Abstract

As applications of AI in medicine continue to expand, there is an increasing focus on integration into clinical practice. An underappreciated aspect of this clinical translation is where the AI fits into the clinical workflow and, in turn, which outputs the AI generates to support clinician interaction at that point in the workflow. For instance, in the canonical use case of AI for medical image interpretation, the AI could prioritize cases before clinician review or even autonomously interpret the images without clinician review. A related aspect is explainability: does the AI generate outputs that help explain its predictions to clinicians? While many clinical AI workflows and explainability techniques have been proposed, a summative assessment of their current scope in clinical practice is lacking. Here, we evaluate the current state of FDA-cleared AI devices for medical image interpretation assistance in terms of intended clinical use, outputs generated, and types of explainability offered. We create a curated database focused on these aspects of the clinician-AI interface, in which we find a high frequency of "triage" devices, notable variability in output characteristics across products, and often limited explainability of AI predictions. Altogether, we aim to increase transparency of the current landscape of the clinician-AI interface and highlight the need to rigorously assess which strategies ultimately lead to the best clinical outcomes.
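To make the database dimensions above concrete, here is a minimal sketch in Python of what one record in such a curated database might look like. The class name, field names, and category values are our own illustrative assumptions based on the dimensions described in the abstract, not the authors' actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CADDevice:
    """Hypothetical record for one FDA-cleared CAD product (illustrative only)."""
    name: str               # product name as listed in the FDA clearance
    cad_type: str           # "CADt", "CADe", "CADx", "CADe/x", or "CADa"
    indication: str         # disease/condition, e.g. "ICH" or "PE"
    prediction_output: str  # "binary", "category", or "score"
    explainability: str     # e.g. "none" or "localization"

# Example entry (hypothetical device, for illustration only):
example = CADDevice(
    name="ExampleTriageAI",
    cad_type="CADt",
    indication="ICH",
    prediction_output="binary",
    explainability="none",
)
```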


Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. Overview of types of FDA-cleared CAD products and their integration into medical image interpretation workflows.
CAD types vary according to their outputs and place within the clinical workflow. CADt (triage) devices are designed to flag cases for prioritized review and do not place marks on the image. CADe (detection) devices mark regions of interest to aid in the detection of lesions as a clinician is interpreting an exam. CADx (diagnosis) devices are designed to aid in diagnosis, such as by outputting a score or category, but do not explicitly detect lesions across the exam. CADe/x (detection & diagnosis) devices provide both detection and diagnosis support. Finally, an autonomous system, which we denote as CADa, aims to automatically interpret the exam without clinician input.
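To illustrate where a CADt device sits in this workflow, the sketch below reorders a reading worklist so that AI-flagged exams are reviewed first, with no marks placed on the images themselves. The function name and exam fields are our own illustrative assumptions, not vendor- or FDA-specified behavior.

```python
# Minimal sketch of CADt-style worklist prioritization. Flagged exams
# move to the front of the queue; within each group, arrival order is
# preserved because Python's sorted() is stable. Illustrative only.
def prioritize_worklist(exams: list[dict]) -> list[dict]:
    # Each exam is assumed to carry an "ai_flag" boolean set by the
    # triage device; True means "suspected finding, review first".
    return sorted(exams, key=lambda exam: not exam["ai_flag"])

worklist = [
    {"accession": "A1", "ai_flag": False},
    {"accession": "A2", "ai_flag": True},   # e.g. suspected ICH
    {"accession": "A3", "ai_flag": False},
    {"accession": "A4", "ai_flag": True},
]
print([e["accession"] for e in prioritize_worklist(worklist)])
# -> ['A2', 'A4', 'A1', 'A3']
```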
Fig. 2. Landscape of intended uses of FDA-cleared AI products for medical image interpretation.
a Total number of FDA-cleared AI products from January 2016 to October 2023 for each CAD type: CADt (triage), CADe (detection), CADx (diagnosis), CADe/x (detection & diagnosis), CADa (autonomous). b Distribution of FDA clearances by year (*up to October 1st for 2023). c Distribution of FDA-cleared AI products for each CAD type by disease indication. Diseases/conditions with three or more products are shown. ICH intracranial hemorrhage, LVO large vessel occlusion, PE pulmonary embolism, VCF vertebral compression fracture, MSK musculoskeletal.
Fig. 3. Prediction and explainability output types of current FDA-cleared AI products for medical image interpretation.
a Predictions are grouped according to binary, category, or score. b Type of explainability offered by products, with “none” corresponding to products that provide image/exam-level predictions without explicit localization or other form of explainability. Counts are also indicated by CAD type: CADt (triage), CADe (detection), CADx (diagnosis), CADe/x (detection & diagnosis), CADa (autonomous).
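Tallies like those in panels a and b follow directly from such records. The sketch below assumes a list of the hypothetical CADDevice records from the earlier example; it is illustrative, not the authors' analysis code.

```python
from collections import Counter

# Count devices by prediction output type (panel a) and by
# explainability type (panel b). "devices" is assumed to be a list of
# the hypothetical CADDevice records sketched earlier.
def summarize(devices):
    by_output = Counter(d.prediction_output for d in devices)
    by_explainability = Counter(d.explainability for d in devices)
    return by_output, by_explainability
```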
