Artificial Intelligence Decision-Making Transparency and Employees' Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort
- PMID: 35621424
- PMCID: PMC9138134
- DOI: 10.3390/bs12050127
Abstract
The purpose of this paper is to investigate how Artificial Intelligence (AI) decision-making transparency affects humans' trust in AI. Previous studies have reached inconsistent conclusions about the relationship between AI transparency and humans' trust in AI (i.e., a positive correlation, no correlation, or an inverted U-shaped relationship). Based on the stimulus-organism-response (SOR) model, algorithmic reductionism, and social identity theory, this paper explores the impact of AI decision-making transparency on humans' trust in AI from cognitive and emotional perspectives. A total of 235 participants with previous work experience were recruited online to complete an experimental vignette study. The results showed that employees' perceived transparency, perceived effectiveness of AI, and discomfort with AI played mediating roles in the relationship between AI decision-making transparency and employees' trust in AI. Specifically, AI decision-making transparency (vs. non-transparency) led to higher perceived transparency, which in turn increased both perceived effectiveness (which promoted trust) and discomfort (which inhibited trust). This parallel multiple mediating effect can partly explain the inconsistent findings of previous studies on the relationship between AI transparency and humans' trust in AI. This research has practical significance: it offers suggestions for enterprises on improving employees' trust in AI so that employees can collaborate with AI more effectively.
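To make the analytic design concrete, below is a minimal sketch of a parallel multiple mediation analysis of the kind the abstract describes (two mediators entered simultaneously, indirect effects tested with percentile bootstrap confidence intervals, in the spirit of Hayes' PROCESS Model 4). The simulated data, effect sizes, and variable names (X = transparency condition, M1 = perceived effectiveness, M2 = discomfort, Y = trust in AI) are illustrative assumptions, not the authors' actual dataset or script.

```python
# Sketch of a parallel multiple mediation with bootstrapped indirect effects.
# All data below are simulated; only n = 235 comes from the abstract.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 235  # sample size reported in the abstract

# Hypothetical variables: X = transparency condition (0/1),
# M1 = perceived effectiveness, M2 = discomfort, Y = trust in AI.
X = rng.integers(0, 2, n).astype(float)
M1 = 0.5 * X + rng.normal(size=n)   # transparency raises effectiveness
M2 = 0.3 * X + rng.normal(size=n)   # transparency also raises discomfort
Y = 0.6 * M1 - 0.4 * M2 + 0.1 * X + rng.normal(size=n)

def indirect_effects(X, M1, M2, Y):
    """Return (a1*b1, a2*b2): the indirect effect through each mediator."""
    a1 = sm.OLS(M1, sm.add_constant(X)).fit().params[1]
    a2 = sm.OLS(M2, sm.add_constant(X)).fit().params[1]
    # b paths come from one model regressing Y on X, M1, and M2 together,
    # which is what makes the mediation "parallel".
    design = sm.add_constant(np.column_stack([X, M1, M2]))
    b = sm.OLS(Y, design).fit().params
    return a1 * b[2], a2 * b[3]

# Percentile bootstrap (5000 resamples) for each indirect effect.
boots = np.array([
    indirect_effects(*(v[idx] for v in (X, M1, M2, Y)))
    for idx in (rng.integers(0, n, n) for _ in range(5000))
])
for name, col in (("via effectiveness", 0), ("via discomfort", 1)):
    lo, hi = np.percentile(boots[:, col], [2.5, 97.5])
    print(f"indirect effect {name}: 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero for each path would indicate that both mediators carry part of the transparency effect, with opposite signs, which is the pattern the abstract reports.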
Keywords: AI decision-making transparency; discomfort; effectiveness; trust.
Conflict of interest statement
The authors declare no conflict of interest.