GPT-4 as an X data annotator: Unraveling its performance on a stance classification task
- PMID: 39146280
- PMCID: PMC11326574
- DOI: 10.1371/journal.pone.0307741
Abstract
Data annotation in NLP is a costly and time-consuming task, traditionally handled by human experts who require extensive training to build the task-related background knowledge. Moreover, labeling social media texts is particularly challenging because of their brevity, informality, and creativity, and because human perceptions of sociocultural context vary. With the emergence of GPT models and their proficiency across NLP tasks, this study aims to establish a performance baseline for GPT-4 as a social media text annotator. To this end, we use our own dataset of tweets, expertly labeled for stance detection with full inter-rater agreement among three annotators. We experiment with three prompting techniques for the labeling task: Zero-shot, Few-shot, and Zero-shot with Chain-of-Thoughts. We use four training sets constructed with different label sets, including human labels, to fine-tune transformer-based large language models and various combinations of traditional machine learning models with embeddings for stance classification. All fine-tuned models are then evaluated on a common testing set with human-generated labels. The results from models trained on human labels serve as the benchmark for assessing GPT-4's potential as an annotator under the three prompting techniques. Based on the experimental findings, GPT-4 achieves comparable results with the Few-shot and Zero-shot Chain-of-Thoughts prompting methods; however, none of these labeling techniques surpasses the top three models fine-tuned on human labels. Moreover, we introduce Zero-shot Chain-of-Thoughts as an effective strategy for aspect-based social media text labeling: it performs better than standard Zero-shot and yields results similar to the high-performing yet expensive Few-shot approach.
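To make the three prompting techniques concrete, the sketch below shows one way such prompts could be constructed for stance labeling. The prompt wording, label set, and function names are illustrative assumptions, not the paper's exact prompts.

```python
# Illustrative sketch of the three prompting styles compared in the study.
# Labels and phrasing are assumptions for demonstration purposes only.

STANCE_LABELS = ["FAVOR", "AGAINST", "NONE"]

def zero_shot_prompt(tweet: str, target: str) -> str:
    """Standard Zero-shot: ask directly for a label with no examples."""
    return (
        f"Classify the stance of the following tweet toward '{target}'.\n"
        f"Answer with one of: {', '.join(STANCE_LABELS)}.\n"
        f"Tweet: {tweet}\n"
        f"Stance:"
    )

def few_shot_prompt(tweet: str, target: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend labeled demonstrations before the unlabeled tweet."""
    demos = "\n".join(f"Tweet: {t}\nStance: {s}" for t, s in examples)
    return (
        f"Classify the stance of each tweet toward '{target}'.\n"
        f"Labels: {', '.join(STANCE_LABELS)}.\n"
        f"{demos}\n"
        f"Tweet: {tweet}\n"
        f"Stance:"
    )

def zero_shot_cot_prompt(tweet: str, target: str) -> str:
    """Zero-shot Chain-of-Thoughts: add a reasoning trigger to the
    zero-shot prompt so the model reasons before committing to a label."""
    return (
        zero_shot_prompt(tweet, target)
        + " Let's think step by step about the author's position, "
          "then give the final label."
    )
```

The key design difference is that Few-shot pays for demonstration tokens on every call, whereas Zero-shot Chain-of-Thoughts adds only a short reasoning trigger, which matches the abstract's framing of it as a cheaper alternative with similar performance.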
Copyright: © 2024 Liyanage et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Conflict of interest statement
The authors have declared that no competing interests exist.