From Trust in Automation to Decision Neuroscience: Applying Cognitive Neuroscience Methods to Understand and Improve Interaction Decisions Involved in Human Automation Interaction
- PMID: 27445741
- PMCID: PMC4927573
- DOI: 10.3389/fnhum.2016.00290
Abstract
Human-automation interaction (HAI) systems have thus far failed to live up to expectations, mainly because human users do not always interact with the automation appropriately. Trust in automation (TiA) has been considered a central influence on the way a human user interacts with an automation: if TiA is too high there will be overuse; if TiA is too low there will be disuse. However, even though extensive research into TiA has identified specific HAI behaviors, or trust outcomes, a unique mapping between trust states and trust outcomes has yet to be clearly identified. Interaction behaviors have been intensely studied in the domain of HAI and TiA, and this has led to a reframing of HAI problems in terms of reliance and compliance. We find the behaviorally defined terms reliance and compliance useful for application in real-world situations. However, we note that once an inappropriate interaction behavior has occurred, it is too late to mitigate it. We therefore take a step back and examine the interaction decision that precedes the behavior. The decision neuroscience community has shown that decisions are fairly stereotyped processes accompanied by measurable psychophysiological correlates. Two literatures were therefore reviewed. The TiA literature was reviewed extensively in order to understand the relationship between TiA and trust outcomes and to identify gaps in current knowledge. Because an interaction decision precedes an interaction behavior, we believe that knowledge of the psychophysiological correlates of decisions can be leveraged to improve joint system performance. As understanding the interaction decision will be critical to the eventual mitigation of inappropriate interaction behavior, we also reviewed the decision-making literature and provide a synopsis of the state-of-the-art understanding of the decision process from a decision neuroscience perspective.
We put forward hypotheses based on this understanding that could shape a research path toward the ability to mitigate inappropriate interaction behavior in the real world.
Keywords: decision making; human automation interaction; interaction decisions; neuroergonomics; trust in automation.
Similar articles
- Who's the real expert here? Pedigree's unique bias on trust between human and automated advisers. Appl Ergon. 2019 Nov;81:102907. doi: 10.1016/j.apergo.2019.102907. Epub 2019 Jul 26. PMID: 31422272
- Trust and the Compliance-Reliance Paradigm: The Effects of Risk, Error Bias, and Reliability on Trust and Dependence. Hum Factors. 2017 May;59(3):333-345. doi: 10.1177/0018720816682648. Epub 2016 Dec 19. PMID: 28430544
- Effects of information source, pedigree, and reliability on operator interaction with decision support systems. Hum Factors. 2007 Oct;49(5):773-85. doi: 10.1518/001872007X230154. PMID: 17915596
- Trust in Automation Measures for Aeromedical Settings. Aerosp Med Hum Perform. 2024 Nov;95(11):851-861. doi: 10.3357/AMHP.6465.2024. PMID: 39711345. Review.
- Trust in automation: designing for appropriate reliance. Hum Factors. 2004 Spring;46(1):50-80. doi: 10.1518/hfes.46.1.50_30392. PMID: 15151155. Review.
Cited by
- Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol. 2021 Oct 19;12:604977. doi: 10.3389/fpsyg.2021.604977. PMID: 34737716. Free PMC article. Review.
- How Do Drivers Perceive Risks During Automated Driving Scenarios? An fNIRS Neuroimaging Study. Hum Factors. 2024 Sep;66(9):2244-2263. doi: 10.1177/00187208231185705. Epub 2023 Jun 26. PMID: 37357740. Free PMC article.
- Adaptive trust calibration for human-AI collaboration. PLoS One. 2020 Feb 21;15(2):e0229132. doi: 10.1371/journal.pone.0229132. PMID: 32084201. Free PMC article.
- Psychometric properties of the Chinese version of the trust between People and Automation Scale (TPAS) in Chinese adults. Psicol Reflex Crit. 2022 May 30;35(1):15. doi: 10.1186/s41155-022-00219-x. PMID: 35644898. Free PMC article.
- Neural Correlates of Trust in Automation: Considerations and Generalizability Between Technology Domains. Front Neuroergon. 2021 Sep 3;2:731327. doi: 10.3389/fnrgo.2021.731327. PMID: 38235218. Free PMC article. Review.