Public vs physician views of liability for artificial intelligence in health care

Dhruv Khullar et al.

J Am Med Inform Assoc. 2021 Jul 14;28(7):1574-1577. doi: 10.1093/jamia/ocab055.

Abstract

The growing use of artificial intelligence (AI) in health care has raised questions about who should be held liable for medical errors that result from care delivered jointly by physicians and algorithms. In this survey study comparing views of physicians and the U.S. public, we find that the public is significantly more likely to believe that physicians should be held responsible when an error occurs during care delivered with medical AI, though the majority of both physicians and the public hold this view (66.0% vs 57.3%; P = .020). Physicians are more likely than the public to believe that vendors (43.8% vs 32.9%; P = .004) and healthcare organizations should be liable for AI-related medical errors (29.2% vs 22.6%; P = .05). Views of medical liability did not differ by clinical specialty. Among the general public, younger people are more likely to hold nearly all parties liable.

Keywords: Artificial intelligence; medical errors; medical liability; regulatory policy.


Figures

Figure 1. Physician vs public perceptions of liable parties when medical errors result from the use of artificial intelligence in health care. A chi-square test was used to determine whether physicians' and the public's attitudes differed significantly regarding which parties should be held liable for medical errors occurring when physicians and a computer program work together to treat patients. The x-axis shows the 4 potentially liable parties: (1) the physician making the decision, (2) the company or vendor that provides the algorithm, (3) the healthcare organization purchasing the algorithm, and (4) the Food and Drug Administration (FDA) or governmental entity approving the algorithm for clinical use.

