Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices

Xukang Wang et al. Front Artif Intell. 2024 May 21;7:1320277. doi: 10.3389/frai.2024.1320277. eCollection 2024.

Abstract

Introduction: Algorithmic decision-making systems are widely used in various sectors, including criminal justice, employment, and education. While these systems are celebrated for their potential to enhance efficiency and objectivity, they also pose risks of perpetuating and amplifying societal biases and discrimination. This paper aims to provide an in-depth analysis of the types of algorithmic discrimination, exploring both the challenges and potential solutions.

Methods: The methodology includes a systematic literature review, analysis of legal documents, and comparative case studies across different geographic regions and sectors. This multifaceted approach allows for a thorough exploration of the complexity of algorithmic bias and its regulation.

Results: We identify five primary types of algorithmic bias: bias by algorithmic agents, discrimination based on feature selection, proxy discrimination, disparate impact, and targeted advertising. The analysis of the U.S. legal and regulatory framework reveals a landscape of principled regulations, preventive controls, consequential liability, self-regulation, and heteronomy regulation. A comparative perspective is also provided by examining the status of algorithmic fairness in the EU, Canada, Australia, and Asia.
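
Of these five types, disparate impact has the most established quantitative test in US practice: under the EEOC's "four-fifths rule," a selection procedure is flagged when the favorable-outcome rate for any group falls below 80% of the rate for the most-favored group. The minimal Python sketch below illustrates that check on hypothetical hiring data; the function name and the sample outcomes are illustrative assumptions, not taken from the paper.

    from collections import Counter

    def disparate_impact_ratio(decisions, groups, favorable="hired"):
        """Ratio of the lowest group's favorable-outcome rate to the highest.
        Under the EEOC four-fifths rule, ratios below 0.8 are treated as
        prima facie evidence of disparate impact."""
        totals = Counter(groups)
        favored = Counter(g for g, d in zip(groups, decisions) if d == favorable)
        rates = {g: favored[g] / totals[g] for g in totals}
        return min(rates.values()) / max(rates.values())

    # Hypothetical hiring outcomes for two applicant groups (illustrative only).
    decisions = ["hired", "rejected", "hired", "hired",
                 "rejected", "rejected", "rejected", "hired"]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    # Group A is hired at 0.75, group B at 0.25; the ratio ~0.33 falls
    # below the 0.8 threshold and would flag the procedure.
    print(disparate_impact_ratio(decisions, groups))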

Conclusion: Real-world impacts are demonstrated through case studies focusing on criminal risk assessments and hiring algorithms, illustrating the tangible effects of algorithmic discrimination. The paper concludes with recommendations for interdisciplinary research, proactive policy development, public awareness, and ongoing monitoring to promote fairness and accountability in algorithmic decision-making. As the use of AI and automated systems expands globally, this work highlights the importance of developing comprehensive, adaptive approaches to combat algorithmic discrimination and ensure the socially responsible deployment of these powerful technologies.

Keywords: AI and law; algorithmic discrimination; automated decision-making; computational intelligence; regulatory measures.

Conflict of interest statement

XW was employed by Sage IT Consulting Group. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
