Review

Supporting Artificial Social Intelligence With Theory of Mind

Jessica Williams et al. Front Artif Intell. 2022 Feb 28;5:750763. doi: 10.3389/frai.2022.750763. eCollection 2022.

Abstract

In this paper, we discuss the development of artificial theory of mind as foundational to an agent's ability to collaborate with human team members. Agents imbued with artificial social intelligence (ASI) will require various capabilities to gather the social data needed to inform an artificial theory of mind of their human counterparts. We draw from social signals theorizing and discuss a framework to guide consideration of core features of artificial social intelligence. We discuss how human social intelligence, and the development of theory of mind, can contribute to the development of artificial social intelligence by forming a foundation on which to help agents model, interpret, and predict the behaviors and mental states of humans to support human-agent interaction. Artificial social intelligence will need the processing capabilities to perceive, interpret, and generate combinations of social cues to operate within a human-agent team. Artificial theory of mind affords a structure by which a socially intelligent agent could be imbued with the ability to model its human counterparts and engage in effective human-agent interaction. Further, an artificial theory of mind can be used by an ASI to support transparent communication with humans, so that humans may better predict future system behavior based on their understanding of the agent, thereby supporting trust in artificial socially intelligent agents.

Keywords: artificial social intelligence; human-agent interaction; social intelligence and cognition; social signal processing; theory of mind; transparency.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
