Review

Diagnostics (Basel). 2024 Jul 12;14(14):1506.
doi: 10.3390/diagnostics14141506.

AI in Radiology: Navigating Medical Responsibility

Maria Teresa Contaldo et al.

Abstract

The application of Artificial Intelligence (AI) facilitates medical activities by automating routine tasks for healthcare professionals. AI augments but does not replace human decision-making, which complicates the attribution of legal responsibility. This study investigates the legal challenges associated with the medical use of AI in radiology, analyzing relevant case law and literature, with a specific focus on the attribution of professional liability. In the case of an error, the primary responsibility remains with the physician, with possible shared liability with developers under the framework of medical device liability. If the physician disagrees with the AI's findings, they must not only act on their own judgment but also justify their choices according to prevailing professional standards. Regulations must balance the autonomy of AI systems with the need for responsible clinical practice. Effective use of AI-generated evaluations requires knowledge of data dynamics and metrics such as sensitivity and specificity, even without a clear understanding of the underlying algorithms: the opacity of certain systems (referred to as the "black box phenomenon") raises concerns about the interpretation and actual usability of results for both physicians and patients. AI is redefining healthcare, underscoring the need for robust liability frameworks, meticulous system updates, and transparent communication with patients regarding AI involvement.

Keywords: Artificial Intelligence Systems (AISs); European doctrine; black-box phenomenon; computernalism; decision-making process; liability; responsibility; transparency.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 1
The results returned by the PubMed search over the 5 years analyzed (2018 to 2023), categorized by year of publication.

Figure 2
PRISMA 2020 flow diagram for updated systematic reviews; from [5].

Figure 3
The review process, which began with defining aims and roles. A PubMed search, data extraction, and filtering led to a thematic analysis of the legal aspects of AISs in radiology. Reviewers then unified their drafts into a cohesive paper.

Figure 4
Literal opacity in AI obscures the entirety of its decision-making mechanisms, while practical opacity allows for partial visibility into the algorithmic processes. This image was generated with the assistance of DALL-E (by OpenAI).

Figure 5
The evolution from the "black box" to the "glass/crystal box" in AI, illustrating the concept of Explainable Artificial Intelligence (XAI); the increasing transparency is highlighted by a vibrant color transition. This image was generated with the assistance of DALL-E (by OpenAI).

References

    1. Kiseleva A., Kotzinos D., De Hert P. Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations. Front. Artif. Intell. 2022;5:879603. doi: 10.3389/frai.2022.879603.
    2. Sung J.J., Stewart C.L., Freedman B. Artificial Intelligence in Health Care: Preparing for the Fifth Industrial Revolution. Med. J. Aust. 2020;213:253–255.e1. doi: 10.5694/mja2.50755.
    3. Stewart C., Wong S.K.Y., Sung J.J.Y. Mapping Ethico-Legal Principles for the Use of Artificial Intelligence in Gastroenterology. J. Gastroenterol. Hepatol. 2021;36:1143–1148. doi: 10.1111/jgh.15521.
    4. Sullivan H.R., Schweikart S.J. Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI? AMA J. Ethics. 2019;21:E160–E166. doi: 10.1001/amajethics.2019.160.
    5. Page M.J., McKenzie J.E., Bossuyt P.M., Boutron I., Hoffmann T.C., Mulrow C.D., Shamseer L., Tetzlaff J.M., Akl E.A., Brennan S.E., et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. doi: 10.1136/bmj.n71.
