Ann Intern Med. 2018 Dec 18;169(12):866-872.
doi: 10.7326/M18-1990. Epub 2018 Dec 4.

Ensuring Fairness in Machine Learning to Advance Health Equity


Alvin Rajkomar et al. Ann Intern Med.

Abstract

Machine learning is used increasingly in clinical care to improve diagnosis, treatment selection, and health system efficiency. Because machine-learning models learn from historically collected data, populations that have experienced human and structural biases in the past (called protected groups) are vulnerable to harm by incorrect predictions or withholding of resources. This article describes how model design, biases in data, and the interactions of model predictions with clinicians and patients may exacerbate health care disparities. Rather than simply guarding against these harms passively, machine-learning systems should be used proactively to advance health equity. For that goal to be achieved, principles of distributive justice must be incorporated into model design, deployment, and evaluation. The article describes several technical implementations of distributive justice, specifically those that ensure equality in patient outcomes, performance, and resource allocation, and guides clinicians as to when they should prioritize each principle. Machine learning is providing increasingly sophisticated decision support and population-level monitoring, and it should encode principles of justice to ensure that models benefit all patients.
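One of the distributive-justice principles named above, equality in performance, can be operationalized by disaggregating a model's error rates by group. A minimal sketch, assuming a binary classifier and two illustrative group labels ("a" for the protected group, "b" for the nonprotected group); the per-group true-positive-rate comparison (an equalized-opportunity-style check) and the toy data are assumptions for the example, not from the article:

```python
# Sketch: compare a classifier's true-positive rate across a protected
# and a nonprotected group. Group labels and data are illustrative.

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model correctly flags."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

def performance_gap(y_true, y_pred, group):
    """Absolute TPR gap between two groups, plus the per-group rates."""
    rates = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = true_positive_rate([y_true[i] for i in idx],
                                      [y_pred[i] for i in idx])
    vals = list(rates.values())
    return abs(vals[0] - vals[1]), rates

# Toy example: "a" is the protected group, "b" the nonprotected group.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = performance_gap(y_true, y_pred, group)
```

A nonzero gap would indicate that the model performs unequally across groups; which gap (outcomes, performance, or allocation) to prioritize is the clinical judgment the article addresses.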


Conflict of interest statement

Disclosures: Disclosures can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M18-1990.

Figures

Figure. Conceptual framework of how various biases relate to one another.
During model development, differences in the distribution of features used to predict a label between the protected and nonprotected groups may bias a model to be less accurate for protected groups. Moreover, the data used to develop a model may not generalize to the data used during model deployment (training–serving skew). Biases in model design and data affect patient outcomes through the model’s interaction with clinicians and patients.
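The training–serving skew described in the caption can be surfaced by comparing a feature's empirical distribution between the development data and the data seen at deployment. A minimal sketch using a two-sample Kolmogorov–Smirnov statistic; the statistic choice and the toy samples are assumptions for illustration, not the article's method:

```python
# Sketch: quantify training-serving skew for one feature by the maximum
# vertical distance between the two empirical CDFs (KS statistic).

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: max |ECDF_a(x) - ECDF_b(x)|."""
    a, b = sorted(sample_a), sorted(sample_b)
    points = sorted(set(a) | set(b))

    def ecdf(sorted_sample, x):
        # Fraction of the sample that is <= x
        return sum(1 for v in sorted_sample if v <= x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# Toy feature values: training and serving ranges do not overlap at all,
# so the statistic reaches its maximum of 1.0.
training = [0.1, 0.2, 0.2, 0.3, 0.4]
serving  = [0.6, 0.7, 0.8, 0.8, 0.9]
skew = ks_statistic(training, serving)
```

A large statistic for a feature, overall or within a protected group, flags data on which the deployed model was never trained and where its predictions may be less accurate.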

