Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices
- PMID: 38836021
- PMCID: PMC11148221
- DOI: 10.3389/frai.2024.1320277
Abstract
Introduction: Algorithmic decision-making systems are widely used in various sectors, including criminal justice, employment, and education. While these systems are celebrated for their potential to enhance efficiency and objectivity, they also pose risks of perpetuating and amplifying societal biases and discrimination. This paper aims to provide an in-depth analysis of the types of algorithmic discrimination, exploring both the challenges and potential solutions.
Methods: The methodology includes a systematic literature review, analysis of legal documents, and comparative case studies across different geographic regions and sectors. This multifaceted approach allows for a thorough exploration of the complexity of algorithmic bias and its regulation.
Results: We identify five primary types of algorithmic bias: bias by algorithmic agents, discrimination based on feature selection, proxy discrimination, disparate impact, and targeted advertising. The analysis of the U.S. legal and regulatory framework reveals a landscape of principled regulations, preventive controls, consequential liability, self-regulation, and heteronomy regulation. A comparative perspective is also provided by examining the status of algorithmic fairness in the EU, Canada, Australia, and Asia.
Conclusion: Real-world impacts are demonstrated through case studies focusing on criminal risk assessments and hiring algorithms, illustrating the tangible effects of algorithmic discrimination. The paper concludes with recommendations for interdisciplinary research, proactive policy development, public awareness, and ongoing monitoring to promote fairness and accountability in algorithmic decision-making. As the use of AI and automated systems expands globally, this work highlights the importance of developing comprehensive, adaptive approaches to combat algorithmic discrimination and ensure the socially responsible deployment of these powerful technologies.
Keywords: AI and law; algorithmic discrimination; automated decision-making; computational intelligence; regulatory measures.
Copyright © 2024 Wang, Wu, Ji and Fu.
Conflict of interest statement
XW was employed by Sage IT Consulting Group. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
