Supporting Artificial Social Intelligence With Theory of Mind
- PMID: 35295867
- PMCID: PMC8919046
- DOI: 10.3389/frai.2022.750763
Abstract
In this paper, we discuss the development of artificial theory of mind as foundational to an agent's ability to collaborate with human team members. Agents imbued with artificial social intelligence (ASI) will require various capabilities to gather the social data needed to inform an artificial theory of mind of their human counterparts. Drawing from social signals theorizing, we discuss a framework to guide consideration of core features of artificial social intelligence. We discuss how human social intelligence, and the development of theory of mind, can contribute to the development of artificial social intelligence by forming a foundation on which to help agents model, interpret, and predict the behaviors and mental states of humans to support human-agent interaction. Artificial social intelligence will need the processing capabilities to perceive, interpret, and generate combinations of social cues to operate within a human-agent team. Artificial theory of mind affords a structure by which a socially intelligent agent can be imbued with the ability to model its human counterparts and engage in effective human-agent interaction. Further, an artificial theory of mind can be used by an ASI to support transparent communication with humans, so that humans may better predict future system behavior based on their understanding of the agent, thereby supporting trust in artificial socially intelligent agents.
Keywords: artificial social intelligence; human-agent interaction; social intelligence and cognition; social signal processing; theory of mind; transparency.
Copyright © 2022 Williams, Fiore and Jentsch.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.