Michael is better than Mehmet: exploring the perils of algorithmic biases and selective adherence to advice from automated decision support systems in hiring
- PMID: 39319065
- PMCID: PMC11420529
- DOI: 10.3389/fpsyg.2024.1416504
Abstract
Introduction: Artificial intelligence algorithms are increasingly adopted as decision aids in many contexts, such as human resources, often with the promise of being fast, efficient, and even capable of overcoming the biases of human decision-makers. At the same time, this promise of objectivity and the increasingly supervisory role of humans may make it more likely that existing biases in algorithms are overlooked, as humans are prone to over-rely on such automated systems. This study therefore investigates reliance on biased algorithmic advice in a hiring context.
Method: Simulating the algorithmic pre-selection of job applicants, we confronted participants with biased or unbiased recommendations in a 1 × 2 between-subjects online experiment (n = 260).
Results: The findings suggest that the algorithmic bias went unnoticed by about 60% of the participants in the bias condition, even when they were explicitly asked about it. However, individuals overall relied less on biased algorithms, making more changes to the algorithmic scores. This reduced reliance on the algorithms led, in turn, to increased noticing of the bias. The biased recommendations did not lower general attitudes toward algorithms, only evaluations of this specific hiring algorithm, whereas explicitly noticing the bias affected both. Individuals with a more negative attitude toward the decision subjects were more likely not to notice the bias.
Discussion: This study extends the literature by examining the interplay of (biased) human operators and biased algorithmic decision support systems, highlighting the potential negative impacts of such automation on vulnerable and disadvantaged individuals.
Keywords: algorithmic bias; algorithmic decision-making; discrimination; hiring; human bias; human resources; selective adherence.
Copyright © 2024 Rosenthal-von der Pütten and Sach.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.