How transparency modulates trust in artificial intelligence
- PMID: 35465233
- PMCID: PMC9023880
- DOI: 10.1016/j.patter.2022.100455
Abstract
The study of human-machine systems is central to a variety of behavioral and engineering disciplines, including management science, human factors, robotics, and human-computer interaction. Recent advances in artificial intelligence (AI) and machine learning have brought the study of human-AI teams into sharper focus. An important set of questions for those designing human-AI interfaces concerns trust, transparency, and error tolerance. Here, we review the emerging literature on this important topic, identify open questions, and discuss some of the pitfalls of human-AI team research. We present opposition (extreme algorithm aversion or distrust) and loafing (extreme automation complacency or bias) as lying at opposite ends of a spectrum, with algorithmic vigilance representing an ideal mid-point. We suggest that, while transparency may be crucial for facilitating appropriate levels of trust in AI and thus for counteracting aversive behaviors and promoting vigilance, transparency should not be conceived solely in terms of the explainability of an algorithm. Dynamic task allocation, as well as the communication of confidence and performance metrics, among other strategies, may ultimately prove more useful to users than explanations from algorithms and significantly more effective in promoting vigilance. We further suggest that, while both aversive and appreciative attitudes are detrimental to optimal human-AI team performance, strategies to curb aversion are likely to be more important in the longer term than those attempting to mitigate appreciation. Our wider aim is to channel disparate efforts in human-AI team research into a common framework and to draw attention to the ecological validity of results in this field.
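As a purely illustrative sketch (not drawn from the article), the following Python fragment shows one way an interface could combine confidence reporting with dynamic task allocation: predictions below a confidence threshold are routed to a human reviewer rather than acted on automatically, and the confidence value is always surfaced so the user can calibrate trust. The Decision class, triage function, and threshold value are hypothetical assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str          # the model's predicted class
    confidence: float   # calibrated probability in [0, 1]
    handled_by: str     # "model" or "human"

def triage(label: str, confidence: float, threshold: float = 0.85) -> Decision:
    """Defer low-confidence predictions to a human reviewer and always
    report the confidence alongside the label."""
    if confidence >= threshold:
        return Decision(label, confidence, handled_by="model")
    return Decision(label, confidence, handled_by="human")

# Example: a borderline prediction is escalated rather than silently applied.
print(triage("malignant", 0.62))  # handled_by='human'
print(triage("benign", 0.97))     # handled_by='model'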
Keywords: artificial intelligence; explainable AI; human factors; human-AI teams; human-computer interaction; machine learning; transparency; trust.
© 2022 The Author(s).
Comment in
- Responsible and accountable data science. Patterns (N Y). 2022;3(11):100629. doi: 10.1016/j.patter.2022.100629. PMID: 36419445.