J Nucl Med. 2021 Jan;62(1):17-21. doi: 10.2967/jnumed.120.256032. Epub 2020 Sep 25.

When Does Physician Use of AI Increase Liability?

Kevin Tobia et al. J Nucl Med. 2021 Jan.

Abstract

An increasing number of automated and artificial intelligence (AI) systems make medical treatment recommendations, including personalized recommendations, which can deviate from standard care. Legal scholars argue that following such nonstandard treatment recommendations will increase liability in medical malpractice, undermining the use of potentially beneficial medical AI. However, such liability depends in part on lay judgments by jurors: when physicians use AI systems, in which circumstances would jurors hold physicians liable?

Methods: To determine potential jurors' judgments of liability, we conducted an online experimental study of a nationally representative sample of 2,000 U.S. adults. Each participant read 1 of 4 scenarios in which an AI system provides a treatment recommendation to a physician. The scenarios varied the AI recommendation (standard or nonstandard care) and the physician's decision (to accept or reject that recommendation). Subsequently, the physician's decision caused harm. Participants then assessed the physician's liability.

Results: Our results indicate that physicians who receive advice from an AI system to provide standard care can reduce the risk of liability by accepting, rather than rejecting, that advice, all else being equal. However, when an AI system recommends nonstandard care, there is no similar shielding effect of rejecting that advice and so providing standard care.

Conclusion: The tort law system is unlikely to undermine the use of AI precision medicine tools and may even encourage the use of these tools.

Keywords: artificial intelligence; liability; precision medicine.


Figures

FIGURE 1. Experimental design that crosses recommendation (standard, nonstandard) with decision (accept, reject).

FIGURE 2. Experimental predictions of 4 models.

FIGURE 3. Flowchart of vignette.

FIGURE 4. Mean ratings of reasonableness, by condition. Error bars indicate 95% CIs.

