Review

Bias in medical AI: Implications for clinical decision-making

James L Cross et al. PLOS Digit Health. 2024 Nov 7;3(11):e0000651. doi: 10.1371/journal.pdig.0000651. eCollection 2024 Nov.

Abstract

Biases in medical artificial intelligence (AI) arise and compound throughout the AI lifecycle. These biases can have significant clinical consequences, especially in applications that involve clinical decision-making. Left unaddressed, biased medical AI can lead to substandard clinical decisions and the perpetuation and exacerbation of longstanding healthcare disparities. We discuss potential biases that can arise at different stages in the AI development pipeline and how they can affect AI algorithms and clinical decision-making. Bias can occur in data features and labels, model development and evaluation, deployment, and publication. Insufficient sample sizes for certain patient groups can result in suboptimal performance, algorithm underestimation, and clinically unmeaningful predictions. Missing patient findings can also produce biased model behavior; these include data that are capturable but nonrandomly missing, such as diagnosis codes, and data that are not routinely or easily captured, such as social determinants of health. Expertly annotated labels used to train supervised learning models may reflect implicit cognitive biases or substandard care practices. Overreliance on performance metrics during model development may obscure bias and diminish a model's clinical utility. When applied to data outside the training cohort, model performance can deteriorate from prior validation, and it can do so differentially across subgroups. How end users interact with deployed solutions can introduce bias. Finally, where models are developed and published, and by whom, shapes the trajectories and priorities of future medical AI development. Solutions to mitigate bias must be implemented with care; they include the collection of large and diverse data sets, statistical debiasing methods, thorough model evaluation, emphasis on model interpretability, and standardized bias reporting and transparency requirements. Prior to real-world implementation in clinical settings, rigorous validation through clinical trials is critical to demonstrate unbiased application. Addressing biases across model development stages is crucial for ensuring that all patients benefit equitably from the future of medical AI.
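
As a minimal illustrative sketch of the subgroup-stratified evaluation the abstract calls for, the snippet below computes a discrimination metric (AUROC) separately for each demographic group so that differential performance becomes visible rather than being averaged away; the column names y_true, y_score, and group are hypothetical assumptions, not from the article.

    # Sketch of subgroup-stratified evaluation: compute AUROC per group to
    # surface performance gaps. Column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    def auroc_by_subgroup(df: pd.DataFrame) -> pd.Series:
        """Return one AUROC value per demographic subgroup in `df`."""
        scores = {
            name: roc_auc_score(g["y_true"], g["y_score"])
            for name, g in df.groupby("group")
        }
        return pd.Series(scores, name="auroc")

    # Toy example: a large gap between groups A and B would flag potential bias.
    df = pd.DataFrame({
        "y_true":  [0, 1, 0, 1, 1, 0, 1, 0, 1, 0],
        "y_score": [0.2, 0.9, 0.1, 0.8, 0.7, 0.3, 0.4, 0.6, 0.5, 0.4],
        "group":   ["A"] * 5 + ["B"] * 5,
    })
    print(auroc_by_subgroup(df))

The same pattern extends to calibration or sensitivity/specificity at a fixed operating threshold, which is often where clinically meaningful subgroup differences appear even when overall metrics look acceptable.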

Conflict of interest statement

The authors have declared that no competing interests exist.
