Review

Front Artif Intell. 2020 May 8;3:34. doi: 10.3389/frai.2020.00034. eCollection 2020.

On Consequentialism and Fairness

Dallas Card et al.

Abstract

Recent work on fairness in machine learning has primarily emphasized how to define, quantify, and encourage "fair" outcomes. Less attention has been paid, however, to the ethical foundations that underlie such efforts. Among the ethical perspectives that should be taken into consideration is consequentialism, the position that, roughly speaking, outcomes are all that matter. Although consequentialism is not free from difficulties, and although it does not necessarily provide a tractable way of choosing actions (because of the combined problems of uncertainty, subjectivity, and aggregation), it nevertheless provides a powerful foundation from which to critique the existing literature on machine learning fairness. Moreover, it brings to the fore some of the tradeoffs involved, including the problem of who counts, the pros and cons of using a policy, and the relative value of the distant future. In this paper we provide a consequentialist critique of common definitions of fairness within machine learning, as well as a machine learning perspective on consequentialism. We conclude with a broader discussion of the issues of learning and randomization, which have important implications for the ethics of automated decision making systems.

Keywords: consequentialism; ethics; fairness; machine learning; randomization.

