Randomized Controlled Trial

J Med Internet Res. 2025 May 22;27:e68823. doi: 10.2196/68823

Patient Reactions to Artificial Intelligence-Clinician Discrepancies: Web-Based Randomized Experiment

Farrah Madanay et al.

Abstract

Background: As the US Food and Drug Administration (FDA)-approved use of artificial intelligence (AI) for medical imaging rises, radiologists are increasingly integrating AI into their clinical practices. In lung cancer screening, diagnostic AI offers a second set of eyes with the potential to detect cancer earlier than human radiologists. Despite AI's promise, a potential problem with its integration is the erosion of patient confidence in clinician expertise when there is a discrepancy between the radiologist's and the AI's interpretation of the imaging findings.

Objective: We examined how discrepancies between AI-derived recommendations and radiologists' recommendations affect patients' agreement with radiologists' recommendations and satisfaction with their radiologists. We also analyzed how patients' medical maximizing-minimizing preferences moderate these relationships.

Methods: We conducted a randomized, between-subjects experiment with 1606 US adult participants. Assuming the role of patients, participants imagined undergoing a low-dose computed tomography scan for lung cancer screening and receiving results and recommendations from (1) a radiologist only, (2) AI and a radiologist in agreement, (3) a radiologist who recommended more testing than AI (ie, radiologist overcalled AI), or (4) a radiologist who recommended less testing than AI (ie, radiologist undercalled AI). Participants rated the radiologist on three criteria: agreement with the radiologist's recommendation, how likely they would be to recommend the radiologist to family and friends, and how good a provider they perceived the radiologist to be. We measured medical maximizing-minimizing preferences and categorized participants as maximizers (ie, those who seek aggressive intervention), minimizers (ie, those who prefer no or passive intervention), and neutrals (ie, those in the middle).
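The four-arm, between-subjects comparison described above amounts to summarizing an agreement rating per condition. A minimal sketch of that summary step is below; the condition names mirror the design, but the ratings are illustrative placeholders, not the study's data (the paper reports model-based means and SEs from 1606 participants):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical 1-7 agreement ratings for each experimental condition.
# Illustrative only; the study's actual data are not reproduced here.
ratings = {
    "radiologist_only": [5, 4, 5, 6, 4, 5],
    "ai_agreement":     [5, 5, 4, 6, 5, 4],
    "overcall":         [5, 6, 5, 4, 6, 5],  # radiologist recommended more testing than AI
    "undercall":        [3, 4, 3, 5, 4, 3],  # radiologist recommended less testing than AI
}

def summarize(xs):
    """Return the mean rating and its standard error for one condition."""
    m = mean(xs)
    se = stdev(xs) / sqrt(len(xs))
    return m, se

for cond, xs in ratings.items():
    m, se = summarize(xs)
    print(f"{cond}: mean={m:.2f}, SE={se:.2f}")
```

In the actual analysis the authors go further, testing pairwise condition differences and moderation by maximizing-minimizing category; this sketch only shows the descriptive layer of that comparison.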

Results: Participants' agreement with the radiologist's recommendation was significantly lower when the radiologist undercalled AI (mean 4.01, SE 0.07; P<.001) than in the other 3 conditions, with no significant differences among them (radiologist overcalled AI [mean 4.63, SE 0.06], agreed with AI [mean 4.55, SE 0.07], or had no AI [mean 4.57, SE 0.06]). Similarly, participants were least likely to recommend (P<.001) and positively rate (P<.001) the radiologist who undercalled AI, with no significant differences among the other conditions. Maximizers agreed with the radiologist who overcalled AI (β=0.82, SE 0.14; P<.001) and disagreed with the radiologist who undercalled AI (β=-0.47, SE 0.14; P=.001). However, whereas minimizers disagreed with the radiologist who overcalled AI (β=-0.43, SE 0.18; P=.02), they did not significantly agree with the radiologist who undercalled AI (β=0.14, SE 0.17; P=.41).

Conclusions: Radiologists who recommend less testing than AI may face decreased patient confidence in their expertise, but they may not face this same penalty for giving more aggressive recommendations than AI. Patients' reactions may depend in part on whether their general preferences to maximize or minimize align with the radiologists' recommendations. Future research should test communication strategies for radiologists' disclosure of AI discrepancies to patients.

Keywords: artificial intelligence; communication; decision making; early detection of cancer; medical maximizing-minimizing; patient satisfaction; patient-physician relationship; radiologists.


Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1. Effect of participants’ medical maximizing-minimizing preferences on agreement with the radiologist’s recommendation, by condition. Error bars depict SEs. AI: artificial intelligence.

Figure 2. Effect of condition on participants’ agreement with the radiologist’s recommendation, by participants’ MMM category. Means and standard error bars are depicted. Stars represent significant differences between groups (*P<.05, **P<.01, ***P<.001). AI: artificial intelligence; MMM: medical maximizing-minimizing.

