The plasticity of ChatGPT's mentalizing abilities: personalization for personality structures
- PMID: 37720897
- PMCID: PMC10503434
- DOI: 10.3389/fpsyt.2023.1234397
Abstract
This study evaluated the potential of ChatGPT, a large language model, to generate mentalizing-like abilities tailored to a specific personality structure and/or psychopathology. Mentalization is the ability to understand and interpret one's own and others' mental states, including thoughts, feelings, and intentions. Borderline Personality Disorder (BPD) and Schizoid Personality Disorder (SPD) are characterized by distinct patterns of emotional regulation: individuals with BPD tend to experience intense and unstable emotions, whereas individuals with SPD tend to experience flattened or detached emotions. Using ChatGPT's free version (23.3) and the Levels of Emotional Awareness Scale (LEAS), we assessed the extent to which its emotional awareness (EA)-like responses were tailored to the personality structures characteristic of BPD and SPD. ChatGPT accurately described the emotional reactions of individuals with BPD as more intense, complex, and rich than those of individuals with SPD. This finding suggests that ChatGPT can generate mentalizing-like responses consistent with a range of psychopathologies, in line with clinical and theoretical knowledge. However, the study also raises concerns that stigmas or biases related to mental diagnoses may undermine the validity and usefulness of chatbot-based clinical interventions. We emphasize the need for responsible development and deployment of chatbot-based interventions in mental health that consider diverse theoretical frameworks.
Keywords: artificial intelligence; borderline personality disorder; emotional awareness; emotional intelligence; empathy; schizoid personality disorder.
Copyright © 2023 Hadar-Shoval, Elyoseph and Lvovsky.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Similar articles
- Capacity of Generative AI to Interpret Human Emotions From Visual and Textual Data: Pilot Evaluation Study. JMIR Ment Health. 2024 Feb 6;11:e54369. doi: 10.2196/54369. PMID: 38319707. Free PMC article.
- ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol. 2023 May 26;14:1199058. doi: 10.3389/fpsyg.2023.1199058. eCollection 2023. PMID: 37303897. Free PMC article.
- Do my emotions show or not? Problems with transparency estimation in women with borderline personality disorder features. Personal Disord. 2022 May;13(3):288-299. doi: 10.1037/per0000504. Epub 2021 Oct 21. PMID: 34672637.
- The interplay between borderline personality disorder and oxytocin: a systematic narrative review on possible contribution and treatment options. Front Psychiatry. 2024 Jul 23;15:1439615. doi: 10.3389/fpsyt.2024.1439615. eCollection 2024. PMID: 39109363. Free PMC article. Review.
- [Affective mentalizing in Addictive Borderline Personality: A literature review]. Encephale. 2016 Oct;42(5):458-462. doi: 10.1016/j.encep.2016.02.001. Epub 2016 Mar 16. PMID: 26995152. Review. French.
Cited by
- Large language models outperform general practitioners in identifying complex cases of childhood anxiety. Digit Health. 2024 Dec 15;10:20552076241294182. doi: 10.1177/20552076241294182. eCollection 2024 Jan-Dec. PMID: 39687523. Free PMC article.
- A step toward the future? Evaluating GenAI QPR simulation training for mental health gatekeepers. Front Med (Lausanne). 2025 Jun 11;12:1599900. doi: 10.3389/fmed.2025.1599900. eCollection 2025. PMID: 40568211. Free PMC article.
- Capacity of Generative AI to Interpret Human Emotions From Visual and Textual Data: Pilot Evaluation Study. JMIR Ment Health. 2024 Feb 6;11:e54369. doi: 10.2196/54369. PMID: 38319707. Free PMC article.
- Regulating AI in Mental Health: Ethics of Care Perspective. JMIR Ment Health. 2024 Sep 19;11:e58493. doi: 10.2196/58493. PMID: 39298759. Free PMC article.
- Assessing prognosis in depression: comparing perspectives of AI models, mental health professionals and the general public. Fam Med Community Health. 2024 Jan 9;12(Suppl 1):e002583. doi: 10.1136/fmch-2023-002583. PMID: 38199604. Free PMC article.