Trust with increasing and decreasing reliability

Benjamin S P Rittenberg et al.

Hum Factors. 2024 Dec;66(12):2569-2589. doi: 10.1177/00187208241228636. Epub 2024 Mar 6.

Abstract

Objective: The primary purpose was to determine how trust changes over time when automation reliability increases or decreases. A secondary purpose was to determine how task-specific self-confidence is associated with trust and reliability level.

Background: Both overtrust and undertrust can be detrimental to system performance; therefore, the temporal dynamics of trust with changing reliability level need to be explored.

Method: Two experiments used a dominant-color identification task in which automation provided a recommendation to users, with the reliability of that recommendation changing over 300 trials. In Experiment 1, two groups of participants interacted with the system: one group started with a 50% reliable system whose reliability increased to 100%, while the other used a system whose reliability decreased from 100% to 50%. Experiment 2 added a group for which automation reliability increased from 70% to 100%.
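
To make the reliability manipulation concrete, the following is a minimal Python sketch of the block-wise schedules, reconstructed from the block progressions reported in the Figure 6 caption. The split of 300 trials into 6 blocks of 50, the mirrored decreasing schedule, and all names are illustrative assumptions, not the authors' materials.

import random

# Block-wise reliability progressions, taken from the progressions reported
# in the Figure 6 caption; the decreasing schedule is assumed to mirror the
# 50% increasing one.
SCHEDULES = {
    "50_increasing": [0.50, 0.60, 0.70, 0.80, 0.90, 1.00],
    "decreasing": [1.00, 0.90, 0.80, 0.70, 0.60, 0.50],
    "70_increasing": [0.70, 0.80, 0.90, 1.00, 1.00, 1.00],
}

TRIALS_PER_BLOCK = 50  # assumption: 300 trials split evenly over 6 blocks

def simulate_recommendations(group, seed=0):
    """Return one boolean per trial: True if the automation's
    recommendation on that trial is correct."""
    rng = random.Random(seed)
    outcomes = []
    for block_reliability in SCHEDULES[group]:
        for _ in range(TRIALS_PER_BLOCK):
            outcomes.append(rng.random() < block_reliability)
    return outcomes

if __name__ == "__main__":
    trials = simulate_recommendations("50_increasing")
    print(len(trials), "trials;", sum(trials), "correct recommendations")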

Results: Trust was initially high in the decreasing group and then declined as reliability level decreased; however, trust also declined in the 50% increasing reliability group. Furthermore, when user self-confidence increased, automation reliability had a greater influence on trust. In Experiment 2, the 70% increasing reliability group showed increased trust in the system.

Conclusion: Trust does not always track the reliability of automated systems; in particular, it is difficult for trust to recover once the user has interacted with a low-reliability system.

Applications: This study provides initial evidence on the dynamics of trust in automation that improves over time, suggesting that users should begin interacting with automation only once it is sufficiently reliable.

Keywords: automation; decision making; human-automation interaction; levels of automation; trust in automation.


Conflict of interest statement

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Figures

Figure 1. Flow diagram of study design.

Figure 2. (a) Image shown to participants during the instruction script, depicting the (fictional) regions where they would be testing soil samples over the six blocks. (b) Example detection trial. (c) Trust and confidence scale.

Figure 3. (a) Trust, (b) self-confidence, and (c) accuracy by block. Trust and self-confidence were scored on a 0–100 scale, and performance was scored on a 0–1 scale. Note that the axis limits have been adjusted for ease of interpreting the figures.

Figure 4. Visualization of the three-way interaction between reliability level, reliability group, and trust measurement predicting trust in the 50% increasing and decreasing conditions. Because the figure is broken down by reliability level, note that the increasing group started from 50% reliability (moving left to right and top to bottom in the figure), while the decreasing group started at 100% reliability (moving right to left and bottom to top).

Figure 5. Visualization of the three-way interaction between self-confidence, group, and reliability level predicting trust.

Figure 6. (a) Trust, (b) self-confidence, and (c) accuracy by block. Trust and self-confidence were scored on a 0–100 scale, and performance was scored on a 0–1 scale. The 50% increasing group shows the same data as the increasing group in Experiment 1 and is included for comparison. Note that the horizontal axis shows blocks, which followed a different reliability progression in each group (50% increasing: 50%, 60%, 70%, 80%, 90%, 100%; 70% increasing: 70%, 80%, 90%, 100%, 100%, 100%).

Figure 7. Visualization of the three-way interaction between reliability level, group, and trial predicting trust in the 50% increasing and 70% increasing conditions. The 50% increasing group shows the same data as the increasing group in Experiment 1 and is included for comparison. At the 100% reliability level, Blocks 4 through 6 for the 70% increasing group largely overlap.

Figure 8. Visualization of the three-way interaction between self-confidence, group, and reliability level predicting trust in the 50% increasing and 70% increasing conditions. Note that the 70% increasing group did not experience the 50% or 60% automation reliability levels, and that the panels were matched by block order between the figures to aid in demonstrating the change across blocks. The 50% increasing group shows the same data as the increasing group in Experiment 1 and is included for comparison.
