Sci Rep. 2024 Oct 29;14(1):25996. doi: 10.1038/s41598-024-76320-1.

What algorithmic evaluation fails to deliver: respectful treatment and individualized consideration


Jinseok S Chun et al.

Abstract

As firms increasingly depend on artificial intelligence to evaluate people across various contexts (e.g., job interviews, performance reviews), research has explored the specific impact of algorithmic evaluations in the workplace. In particular, the extant body of work focuses on the possibility that employees may perceive biases from algorithmic evaluations. We show that although perceptions of biases are indeed a notable outcome of AI-driven assessments (vs. those performed by humans), a crucial risk inherent in algorithmic evaluations is that individuals perceive them as lacking respect and dignity. Specifically, we find that the effect of algorithmic (vs. human) evaluations on perceptions of disrespectful treatment (a) remains significant while controlling for perceived biases (but not vice versa), (b) is significant even when the effect on perceived biases is not, and (c) is larger in size than the effect on perceived biases. The effect of algorithmic evaluations on disrespectful treatment is explained by perceptions that individuals' detailed characteristics are not properly considered during the evaluation process conducted by AI.

Keywords: AI; Algorithmic evaluations; Artificial intelligence; Biases; Individualized consideration; Respect.


Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. Perceptions of respectful treatment and unbiasedness as a function of algorithmic vs. human evaluations.

