Behav Sci (Basel). 2022 Apr 27;12(5):127.
doi: 10.3390/bs12050127.

Artificial Intelligence Decision-Making Transparency and Employees' Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort

Liangru Yu et al. Behav Sci (Basel). 2022.

Abstract

The purpose of this paper is to investigate how Artificial Intelligence (AI) decision-making transparency affects humans' trust in AI. Previous studies have reached inconsistent conclusions about the relationship between AI transparency and humans' trust in AI (i.e., a positive correlation, no correlation, or an inverted U-shaped relationship). Based on the stimulus-organism-response (SOR) model, algorithmic reductionism, and social identity theory, this paper explores the impact of AI decision-making transparency on humans' trust in AI from cognitive and emotional perspectives. A total of 235 participants with previous work experience were recruited online to complete an experimental vignette. The results showed that employees' perceived transparency, perceived effectiveness of AI, and discomfort with AI played mediating roles in the relationship between AI decision-making transparency and employees' trust in AI. Specifically, AI decision-making transparency (vs. non-transparency) led to higher perceived transparency, which in turn increased both perceived effectiveness (which promoted trust) and discomfort (which inhibited trust). This parallel multiple mediating effect partly explains the inconsistent findings of previous studies on the relationship between AI transparency and humans' trust in AI. The research has practical significance: it offers suggestions that enterprises can use to improve employees' trust in AI, so that employees can collaborate with AI more effectively.

Keywords: AI decision-making transparency; discomfort; effectiveness; trust.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure A1. Employees interacting with the AI system used in the study.
Figure 1. Research model.

