Review
Patterns (N Y). 2022 Feb 24;3(4):100455. doi: 10.1016/j.patter.2022.100455. eCollection 2022 Apr 8.

How transparency modulates trust in artificial intelligence

John Zerilli et al.
Abstract

The study of human-machine systems is central to a variety of behavioral and engineering disciplines, including management science, human factors, robotics, and human-computer interaction. Recent advances in artificial intelligence (AI) and machine learning have brought the study of human-AI teams into sharper focus. An important set of questions for those designing human-AI interfaces concerns trust, transparency, and error tolerance. Here, we review the emerging literature on this important topic, identify open questions, and discuss some of the pitfalls of human-AI team research. We present opposition (extreme algorithm aversion or distrust) and loafing (extreme automation complacency or bias) as lying at opposite ends of a spectrum, with algorithmic vigilance representing an ideal mid-point. We suggest that, while transparency may be crucial for facilitating appropriate levels of trust in AI and thus for counteracting aversive behaviors and promoting vigilance, transparency should not be conceived solely in terms of the explainability of an algorithm. Dynamic task allocation, as well as the communication of confidence and performance metrics, among other strategies, may ultimately prove more useful to users than explanations from algorithms and significantly more effective in promoting vigilance. We further suggest that, while both aversive and appreciative attitudes are detrimental to optimal human-AI team performance, strategies to curb aversion are likely to be more important in the longer term than those attempting to mitigate appreciation. Our wider aim is to channel disparate efforts in human-AI team research into a common framework and to draw attention to the ecological validity of results in this field.
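
As a minimal illustration of the "communication of confidence and performance metrics" strategy mentioned in the abstract, the sketch below shows how a decision-support interface might report a calibrated confidence score and recent accuracy alongside each recommendation instead of an algorithmic explanation. All names, fields, and values here are hypothetical and are not drawn from the article.

    # Hypothetical sketch: surfacing confidence and performance metrics to a user.
    # The class, field names, and example values are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        label: str              # the system's suggested decision
        confidence: float       # calibrated probability in [0, 1]
        recent_accuracy: float  # accuracy over a recent evaluation window

    def present(rec: Recommendation) -> str:
        """Format a recommendation with the metrics a user needs to calibrate trust."""
        return (f"Suggestion: {rec.label} "
                f"(confidence {rec.confidence:.0%}, "
                f"recent accuracy {rec.recent_accuracy:.0%})")

    print(present(Recommendation("approve", confidence=0.72, recent_accuracy=0.91)))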

Keywords: artificial intelligence; explainable AI; human factors; human-AI teams; human-computer interaction; machine learning; transparency; trust.


Figures

Figure 1. Scale of user attitudes toward AI in human-AI teams.

Figure 2. User trust in automation after witnessing system failures. (A) Five possible trust trajectories over time. Notice that the default attitude toward automation is generally one of high trust that falls by some measure in response to seeing a system err. The vigilant user of AI recalibrates their initially unrealistic estimate of a system’s capabilities gradually, but not to the point where their attitude becomes aversive. (B) The hypothesized role of transparency in trust calibration.

Figure 3. Trust versus reliability.

Comment in

  • Responsible and accountable data science.
    Wagner B, Müller-Birn C. Patterns (N Y). 2022 Nov 11;3(11):100629. doi: 10.1016/j.patter.2022.100629. PMID: 36419445. Free PMC article. No abstract available.

