Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices
- PMID: 38836021
- PMCID: PMC11148221
- DOI: 10.3389/frai.2024.1320277
Abstract
Introduction: Algorithmic decision-making systems are widely used in various sectors, including criminal justice, employment, and education. While these systems are celebrated for their potential to enhance efficiency and objectivity, they also pose risks of perpetuating and amplifying societal biases and discrimination. This paper aims to provide an in-depth analysis of the types of algorithmic discrimination, exploring both the challenges and potential solutions.
Methods: The methodology includes a systematic literature review, analysis of legal documents, and comparative case studies across different geographic regions and sectors. This multifaceted approach allows for a thorough exploration of the complexity of algorithmic bias and its regulation.
Results: We identify five primary types of algorithmic bias: bias by algorithmic agents, discrimination based on feature selection, proxy discrimination, disparate impact, and targeted advertising. The analysis of the U.S. legal and regulatory framework reveals a landscape of principled regulations, preventive controls, consequential liability, self-regulation, and heteronomy regulation. A comparative perspective is also provided by examining the status of algorithmic fairness in the EU, Canada, Australia, and Asia.
Conclusion: Real-world impacts are demonstrated through case studies focusing on criminal risk assessments and hiring algorithms, illustrating the tangible effects of algorithmic discrimination. The paper concludes with recommendations for interdisciplinary research, proactive policy development, public awareness, and ongoing monitoring to promote fairness and accountability in algorithmic decision-making. As the use of AI and automated systems expands globally, this work highlights the importance of developing comprehensive, adaptive approaches to combat algorithmic discrimination and ensure the socially responsible deployment of these powerful technologies.
Keywords: AI and law; algorithmic discrimination; automated decision-making; computational intelligence; regulatory measures.
Copyright © 2024 Wang, Wu, Ji and Fu.
Conflict of interest statement
XW was employed by Sage IT Consulting Group. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.