Sci Rep. 2023 Jun 19;13(1):9877. doi: 10.1038/s41598-023-37032-0.

The theory of mind and human-robot trust repair


Connor Esterwood et al. Sci Rep.

Abstract

Nothing is perfect, and robots can make as many mistakes as any human, which can decrease people's trust in them. However, robots can repair a human's trust after making mistakes through various trust repair strategies such as apologies, denials, and promises. To date, evidence for the efficacy of these repair strategies in the human-robot interaction literature has been mixed. One reason for this might be that humans differ in how they perceive a robot's mind. For example, some repairs may be more effective when humans believe that robots are capable of experiencing emotion, while other repairs might be more effective when humans believe robots possess intentionality. A key element that determines these beliefs is mind perception. Therefore, understanding how mind perception impacts trust repair may be vital to understanding trust repair in human-robot interaction. To investigate this, we conducted a study involving 400 participants recruited via Amazon Mechanical Turk to determine whether mind perception influenced the effectiveness of three distinct repair strategies. The study employed an online platform in which the robot and participant worked in a warehouse to pick and load 10 boxes. The robot made three mistakes over the course of the task and employed either a promise, a denial, or an apology after each mistake. Participants then rated their trust in the robot before and after each mistake. Results of this study indicated that, overall, individual differences in mind perception are vital considerations when seeking to implement effective apologies and denials between humans and robots.


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1
Environment and robot used from participants’ perspective.
Figure 2
Flowchart illustrating study progression and timeline.
Figure 3
Flowchart illustrating measurement timeline.
Figure 4
Box plots showing results of the manipulation check across all three trust change events.
Figure 5
Visual representation of slopes for the three-way interaction between conscious experience, repair strategy, and violation event.
