Front Psychol. 2024 Sep 10;15:1416504. doi: 10.3389/fpsyg.2024.1416504. eCollection 2024.

Michael is better than Mehmet: exploring the perils of algorithmic biases and selective adherence to advice from automated decision support systems in hiring


Astrid Marieke Rosenthal-von der Pütten et al.

Abstract

Introduction: Artificial intelligence algorithms are increasingly adopted as decision aids in many contexts, such as human resources, often with the promise of being fast, efficient, and even capable of overcoming the biases of human decision-makers. At the same time, this promise of objectivity and the increasingly supervisory role of humans may make existing biases in algorithms more likely to be overlooked, as humans are prone to over-rely on such automated systems. This study therefore investigates reliance on biased algorithmic advice in a hiring context.

Method: Simulating the algorithmic pre-selection of applicants, we confronted participants with biased or non-biased recommendations in a 1 × 2 between-subjects online experiment (n = 260).
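To make the design concrete, the following minimal Python sketch illustrates one way such a setup could work: each participant is randomly assigned to one of the two conditions, and the preset algorithmic scores systematically penalize minority-group applicants only in the biased condition. The applicant names are taken from the article title; the score range and penalty are illustrative assumptions, not the authors' actual materials or procedure.

import random

APPLICANTS = [
    {"name": "Michael", "minority": False},
    {"name": "Mehmet", "minority": True},
]

def assign_condition():
    """Randomly assign a participant to the biased or non-biased condition."""
    return random.choice(["biased", "non-biased"])

def preset_score(applicant, condition, base=80, jitter=5, penalty=15):
    """Return a preset algorithmic score; only the biased condition
    systematically penalizes minority-group applicants."""
    score = base + random.randint(-jitter, jitter)
    if condition == "biased" and applicant["minority"]:
        score -= penalty
    return score

condition = assign_condition()
for applicant in APPLICANTS:
    print(applicant["name"], preset_score(applicant, condition))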

Results: The findings suggest that the algorithmic bias went unnoticed by about 60% of the participants in the bias condition, even when they were explicitly asked about it. Overall, however, individuals relied less on the biased algorithm, making more changes to its scores. Reduced reliance on the algorithm led to increased noticing of the bias. The biased recommendations did not lower general attitudes toward algorithms, only evaluations of this specific hiring algorithm; explicitly noticing the bias, however, affected both. Individuals with more negative attitudes toward the decision subjects were less likely to notice the bias.

Discussion: This study extends the literature by examining the interplay of (biased) human operators and biased algorithmic decision support systems, highlighting the potential negative impacts of such automation on vulnerable and disadvantaged individuals.

Keywords: algorithmic bias; algorithmic decision-making; discrimination; hiring; human bias; human resources; selective adherence.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1. Screenshot of the presented applicants with their preset algorithmic scores in the biased condition. Photo by Generated Photos (https://generated.photos/).
Figure 2. Screenshot of the presented applicants with their preset algorithmic scores in the non-biased condition. Photo by Generated Photos (https://generated.photos/).
Figure 3. Final selection for the job profile “Software Developer,” with the biased version on the left and the non-biased version on the right. Photo by Generated Photos (https://generated.photos/).

