Would a robot trust you? Developmental robotics model of trust and theory of mind

Samuele Vinanzi et al. Philos Trans R Soc Lond B Biol Sci. 2019 Apr 29;374(1771):20180032. doi: 10.1098/rstb.2018.0032.

Abstract

Trust is a critical issue in human-robot interaction: as robotic systems gain complexity, it becomes crucial for them to blend into our society by maximizing their acceptability and reliability. Various studies have examined how people attribute trust to robots, but fewer have investigated the opposite scenario, where a robot is the trustor and a human is the trustee. The ability of an agent to evaluate the trustworthiness of its sources of information is particularly useful in joint tasks where people and robots must collaborate to reach shared goals. We propose an artificial cognitive architecture, based on the developmental robotics paradigm, that can estimate the trustworthiness of its human interactors for the purpose of decision making. This is accomplished using Theory of Mind (ToM), the psychological ability to assign to others beliefs and intentions that can differ from one's own. Our work focuses on a humanoid robot cognitive architecture that integrates a probabilistic ToM and trust model supported by an episodic memory system. We tested our architecture on an established developmental psychology experiment, achieving the same results obtained by children, thus demonstrating a new method to enhance the quality of human-robot collaboration. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
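
The decision cycle the abstract describes can be made concrete with a minimal sketch (not the authors' code, and in plain Python rather than their framework): during familiarization the robot records, per informant, whether each suggestion matched reality; at decision time it follows or inverts a new suggestion based on that record. The names and the 0.5 threshold below are illustrative assumptions.

    from collections import defaultdict

    episodes = defaultdict(list)  # informant -> list of consistency outcomes

    def familiarize(informant, suggested, actual):
        # Record one episode: was the suggestion consistent with reality?
        episodes[informant].append(suggested == actual)

    def decide(informant, suggested, locations=("left", "right")):
        # Follow a mostly reliable informant; otherwise search the other spot.
        history = episodes[informant]
        reliability = sum(history) / len(history) if history else 0.5
        if reliability >= 0.5:
            return suggested
        return next(loc for loc in locations if loc != suggested)

    for _ in range(4):                    # a 100% tricky informant
        familiarize("informant_A", suggested="left", actual="right")
    print(decide("informant_A", "left"))  # -> "right"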

Keywords: cognitive robotics; developmental robotics; episodic memory; human–robot interaction; theory of mind; trust.


Conflict of interest statement

We declare we have no competing interests.

Figures

Figure 1.
The BN that models the relation between the robot and an informant. The agent generates a separate network for each user, with the same structure but a different probability distribution.
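
As a hedged illustration of such a per-informant network, the sketch below builds a two-node BN with pgmpy; the structure, node names and probabilities are assumptions of this sketch, not the network of figure 1 itself.

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    def build_informant_bn(p_helpful):
        # Same structure for every informant, different probabilities.
        bn = BayesianNetwork([("Helpful", "SuggestionCorrect")])
        bn.add_cpds(
            TabularCPD("Helpful", 2, [[1 - p_helpful], [p_helpful]]),
            TabularCPD("SuggestionCorrect", 2,
                       [[0.9, 0.1],   # P(wrong | tricky), P(wrong | helpful)
                        [0.1, 0.9]],  # P(right | tricky), P(right | helpful)
                       evidence=["Helpful"], evidence_card=[2]),
        )
        bn.check_model()
        return bn

    helper = build_informant_bn(0.9)   # shaped by consistent episodes
    tricker = build_informant_bn(0.1)  # shaped by misleading episodes
    print(VariableElimination(tricker).query(["SuggestionCorrect"]))
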
Figure 2.
Architecture of the artificial cognitive agent. The human informant interacts with the robot through the vision and audio modules, which, respectively, perform image processing (face detection and recognition) and vocal command parsing. Data then flows to the motor module in charge of the robot’s joints, and to the belief module that manages the collection of BNs memorized by the agent. (Online version in colour.)
Figure 3.
Histogram of the consistency values of episodes memorized by the agent through progressive interactions with different informants, from 100% helpful to 100% tricky in steps of 25% (the reverse, from 100% tricky to 100% helpful, produces the same graph). (Online version in colour.)
Figure 4.
Mean entropy of episodic memory networks generated with different numbers of samples. Given the random component intrinsic to the algorithm, a very large number of samples (10^5) was generated for every value of k. (Online version in colour.)
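
For intuition about how the mean entropy depends on the sample count k, here is a self-contained sketch of this kind of measurement: estimate a Bernoulli parameter from k samples, take the Shannon entropy of the estimate, and average over many runs to smooth out the sampling noise. The generating probability 0.75 is an arbitrary assumption of this sketch.

    import math, random

    def entropy(p):
        # Shannon entropy (bits) of a Bernoulli(p) distribution.
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def mean_entropy(k, runs=10_000, p_true=0.75):
        total = 0.0
        for _ in range(runs):
            p_hat = sum(random.random() < p_true for _ in range(k)) / k
            total += entropy(p_hat)
        return total / runs

    for k in (5, 20, 100):
        print(k, round(mean_entropy(k), 3))
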
Figure 5.
Familiarization phase with a tricker informant. (a) The robot asks for a suggestion on the sticker’s location. (b) The informant places the sticker in one of the two positions. (c) The informant gives its suggestion on where to find the sticker. Note how the tricker gives misleading directions. (d) The robot searches for the presence of the sticker in the suggested position and records the episode. (Online version in colour.)
Figure 6.
Decision-making phase with a tricker informant. (a) The robot asks for a suggestion on the location of the sticker and receives a misleading suggestion from the informant. (b) The robot performs inference on that informant’s belief network. (c) The agent decides that the informant will probably try to trick it, so it looks in the opposite location. (d) The robot finds the sticker and gives feedback to the informant. (Online version in colour.)
Figure 7.
Belief estimation phase with a tricker informant. (a) The robot recognizes the informant using machine learning techniques and looks at the table to find the position of the sticker. (b) The robot computes inference on the informant's belief network and predicts what the informant would suggest in that situation. (Online version in colour.)
Figure 8.
Reliability histogram of episodic belief networks generated by agents possessing different histories of interactions. Green bars represent trustful BNs (T > 0) and red bars depict BNs that tend to distrust (T < 0). Agents that have a more positive than negative background tend to be more prone to trust a new informant and vice versa. When T = 0, the informant is neither trusted nor distrusted and the agent will act randomly. (Online version in colour.)
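
A toy sketch of that tendency follows; the scalar T here (fraction of consistent minus fraction of inconsistent past episodes) is a stand-in assumed for illustration, not the paper's exact reliability measure.

    import random

    def initial_trust(past_outcomes):
        # T in [-1, 1]: fraction consistent minus fraction inconsistent.
        if not past_outcomes:
            return 0.0
        return (2 * sum(past_outcomes) - len(past_outcomes)) / len(past_outcomes)

    def first_move(T, suggested, other):
        if T > 0:
            return suggested                      # trust the newcomer
        if T < 0:
            return other                          # distrust the newcomer
        return random.choice([suggested, other])  # T = 0: act randomly

    history = [True, True, True, False]  # a mostly positive background
    print(first_move(initial_trust(history), "left", "right"))  # -> "left"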
