Comparative Study

The Efficacy of Conversational AI in Rectifying the Theory-of-Mind and Autonomy Biases: Comparative Analysis

Marcin Rządeczka et al. JMIR Ment Health. 2025 Feb 7;12:e64396. doi: 10.2196/64396.

Abstract

Background: The increasing deployment of conversational artificial intelligence (AI) in mental health interventions necessitates an evaluation of its efficacy in rectifying cognitive biases and recognizing affect in human-AI interactions. These biases are particularly relevant in mental health contexts because they can exacerbate conditions such as depression and anxiety by reinforcing maladaptive thought patterns or unrealistic expectations about such interactions.

Objective: This study aimed to assess the effectiveness of therapeutic chatbots (Wysa and Youper) versus general-purpose language models (GPT-3.5, GPT-4, and Gemini Pro) in identifying and rectifying cognitive biases and recognizing affect in user interactions.

Methods: This study used constructed case scenarios simulating typical user-bot interactions to examine how effectively chatbots address selected cognitive biases. The cognitive biases assessed included theory-of-mind biases (anthropomorphism, overtrust, and attribution) and autonomy biases (illusion of control, fundamental attribution error, and just-world hypothesis). Each chatbot response was evaluated based on accuracy, therapeutic quality, and adherence to cognitive behavioral therapy principles using an ordinal scale to ensure consistency in scoring. To enhance reliability, responses underwent a double review process by 2 cognitive scientists, followed by a secondary review by a clinical psychologist specializing in cognitive behavioral therapy, ensuring a robust assessment across interdisciplinary perspectives.
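
As an illustration of the review process described above, the following minimal Python sketch (not the authors' code) shows one way per-reviewer ordinal scores could be aggregated into a consensus score per bot and bias. The 0-2 scale, the reviewer and bot labels, and all score values are hypothetical placeholders, not data from the study.

from statistics import mean

# reviewer -> bot -> bias -> ordinal score
# (0 = bias missed, 1 = partially rectified, 2 = fully rectified; illustrative scale)
ratings = {
    "cognitive_scientist_1": {
        "GPT-4": {"overtrust": 2, "just_world": 2},
        "Wysa": {"overtrust": 1, "just_world": 0},
    },
    "cognitive_scientist_2": {
        "GPT-4": {"overtrust": 2, "just_world": 1},
        "Wysa": {"overtrust": 0, "just_world": 1},
    },
}

def consensus(ratings):
    # Pool every reviewer's score for each bot/bias pair, then average.
    pooled = {}
    for reviewer_scores in ratings.values():
        for bot, biases in reviewer_scores.items():
            for bias, score in biases.items():
                pooled.setdefault(bot, {}).setdefault(bias, []).append(score)
    return {bot: {bias: mean(scores) for bias, scores in per_bias.items()}
            for bot, per_bias in pooled.items()}

for bot, per_bias in consensus(ratings).items():
    print(bot, per_bias)

Running this prints the averaged score per bot and bias (eg, GPT-4 {'overtrust': 2, 'just_world': 1.5}); under this scheme, reviewer disagreement on a pair would be the natural trigger for the secondary review by the clinical psychologist.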

Results: This study revealed that general-purpose chatbots outperformed therapeutic chatbots in rectifying cognitive biases, particularly overtrust bias, the fundamental attribution error, and the just-world hypothesis. GPT-4 achieved the highest scores across all biases, whereas the therapeutic bot Wysa scored the lowest. Notably, general-purpose bots showed more consistent accuracy and adaptability in recognizing and addressing bias-related cues across different contexts, suggesting a broader flexibility in handling complex cognitive patterns. In affect recognition tasks, general-purpose chatbots likewise excelled, demonstrating quicker adaptation to subtle emotional nuances and outperforming therapeutic bots in 67% (4/6) of the tested biases.

Conclusions: This study shows that, while therapeutic chatbots hold promise for mental health support and cognitive bias intervention, their current capabilities are limited. Addressing cognitive biases in AI-human interactions requires systems that can both rectify and analyze biases as integral to human cognition, promoting precision and simulating empathy. The findings reveal the need for improved simulated emotional intelligence in chatbot design to provide adaptive, personalized responses that reduce overreliance and encourage independent coping skills. Future research should focus on enhancing affective response mechanisms and addressing ethical concerns such as bias mitigation and data privacy to ensure safe, effective AI-based mental health support.

Keywords: AI; affect recognition; artificial intelligence; bias rectification; chatbots; cognitive bias; conversational artificial intelligence; digital mental health.


Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1. Performance score parallel coordinates for all bots.
Figure 2. Performance score parallel coordinates for therapeutic versus nontherapeutic chatbots.
Figure 3. Performance score box plots for all bots.
Figure 4. Affect recognition score parallel coordinates for all bots.
Figure 5. Affect recognition score parallel coordinates for therapeutic versus nontherapeutic chatbots.
Figure 6. Affect recognition score box plots for all bots.
