In dialogue with an avatar, language behavior is identical to dialogue with a human partner
- PMID: 26676949
- PMCID: PMC5352801
- DOI: 10.3758/s13428-015-0688-7
Abstract
The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioral research, as its flexibility allows for a wide range of applications. This method has not been as widely adopted in psycholinguistics, however, possibly due to the assumption that language processing during human-computer interaction does not accurately reflect human-human interaction. At the same time, there is a growing need to study human-human language interaction in a tightly controlled context, which has not been possible with existing methods. VR offers experimental control over parameters that cannot be (as finely) controlled in the real world. In this study, we therefore aimed to show that human-computer language interaction in virtual reality is comparable to human-human language interaction. We compared participants' language behavior in a syntactic priming task across three partner types: a human partner, a human-like avatar with human-like facial expressions and verbal behavior, and a computer-like avatar from which this humanness was removed. As predicted, priming effects were comparable between the human and the human-like avatar, suggesting that participants attributed human-like agency to the human-like avatar. When interacting with the computer-like avatar, in contrast, the priming effect was significantly reduced. This suggests that sentence processing during interaction with a human-like avatar is comparable to sentence processing during interaction with a human partner. Our study therefore shows that VR is a valid platform for conducting language research and for studying dialogue interactions in an ecologically valid manner.
Keywords: Human-computer interaction; Language; Structural priming; Syntactic processing; Virtual reality.