What algorithmic evaluation fails to deliver: respectful treatment and individualized consideration
- PMID: 39472597
- PMCID: PMC11522280
- DOI: 10.1038/s41598-024-76320-1
Abstract
As firms increasingly depend on artificial intelligence to evaluate people across various contexts (e.g., job interviews, performance reviews), research has explored the specific impact of algorithmic evaluations in the workplace. In particular, the extant body of work focuses on the possibility that employees may perceive biases from algorithmic evaluations. We show that although perceptions of biases are indeed a notable outcome of AI-driven assessments (vs. those performed by humans), a crucial risk inherent in algorithmic evaluations is that individuals perceive them as lacking respect and dignity. Specifically, we find that the effect of algorithmic (vs. human) evaluations on perceptions of disrespectful treatment (a) remains significant while controlling for perceived biases (but not vice versa), (b) is significant even when the effect on perceived biases is not, and (c) is larger in size than the effect on perceived biases. The effect of algorithmic evaluations on disrespectful treatment is explained by perceptions that individuals' detailed characteristics are not properly considered during the evaluation process conducted by AI.
Keywords: AI; Algorithmic evaluations; Artificial intelligence; Biases; Individualized consideration; Respect.
© 2024. The Author(s).
Conflict of interest statement
The authors declare no competing interests.
