GPT-4 as an X data annotator: Unraveling its performance on a stance classification task

Chandreen R Liyanage et al. PLoS One. 2024 Aug 15;19(8):e0307741. doi: 10.1371/journal.pone.0307741. eCollection 2024.

Abstract

Data annotation in NLP is a costly and time-consuming task, traditionally handled by human experts who require extensive training to acquire the task-related background knowledge. Moreover, labeling social media texts is particularly challenging due to their brevity, informality, and creativity, and to varying human perceptions of sociocultural context. With the emergence of GPT models and their proficiency across NLP tasks, this study aims to establish a performance baseline for GPT-4 as a social media text annotator. To this end, we use our own dataset of tweets, expertly labeled for stance detection with full inter-rater agreement among three annotators. We experiment with three prompting techniques for the labeling task: Zero-shot, Few-shot, and Zero-shot with Chain-of-Thoughts. Using four training sets constructed with the different label sets, including the human labels, we fine-tune transformer-based large language models and train various combinations of traditional machine learning models with embeddings for stance classification. Finally, all fine-tuned models are evaluated on a common test set with human-generated labels. We treat the results of models trained on human labels as the benchmark against which to assess GPT-4's potential as an annotator under the three prompting techniques. Based on the experimental findings, GPT-4 achieves comparable results with the Few-shot and Zero-shot Chain-of-Thoughts prompting methods; however, none of these labeling techniques surpasses the top three models fine-tuned on human labels. Moreover, we introduce Zero-shot Chain-of-Thoughts as an effective strategy for aspect-based social media text labeling: it outperforms standard Zero-shot prompting and yields results similar to the high-performing yet expensive Few-shot approach.
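
As a concrete illustration of the Zero-shot Chain-of-Thoughts annotation step described above, here is a minimal Python sketch using the OpenAI chat API. The prompt wording, the stance label set (in-favor, against, neutral), and the target topic are assumptions for illustration only; the study's exact prompts are shown in Figs 2-4.

    # Minimal Zero-shot CoT labeling sketch (hypothetical prompt wording;
    # the study's exact prompts are shown in Figs 2-4).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def zero_shot_cot_label(tweet: str, target: str) -> str:
        """Ask GPT-4 for a stance label, eliciting reasoning before the answer."""
        prompt = (
            f"What is the stance of the following tweet toward '{target}'?\n"
            f"Tweet: {tweet}\n"
            "Answer with exactly one label: in-favor, against, or neutral.\n"
            "Let's think step by step, then give the final label on the last line."
        )
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # favor stable labels for annotation
        )
        # The reasoning chain ends with the label on its own line.
        return response.choices[0].message.content.strip().splitlines()[-1]

Labels produced this way would then stand in for the human-label column of a training set before the downstream classifiers are fine-tuned.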


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. Overall methodology of the study.
Fig 2. Zero-shot prompt for generating labels.
Fig 3. Few-shot prompt for generating labels.
Fig 4. Zero-shot Chain-of-Thoughts prompt for label generation.
Fig 5. The distribution of class labels in the four label sets: (a) human labels, (b) Zero-shot labels, (c) Few-shot labels, (d) Zero-shot CoT labels.
Fig 6. The percentages of changes in the three new label sets (Zero-shot, Few-shot, and Zero-shot CoT) relative to the human labels.
Fig 7. The percentage increase in performance over human-labeled data, observed across the top-performing classifiers of human labeling: (a) Zero-shot, (b) Few-shot, (c) Zero-shot CoT.
Fig 8. Performance analysis of classifiers trained on GPT-4-labeled datasets that outperformed the ground-truth labels.
Fig 9. Two examples illustrating the advantage of Zero-shot CoT over basic Zero-shot prompting.
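
Because Figs 2, 4, and 9 are images, a plain-text sketch of the contrast they draw may be useful. The wording below is illustrative, not the paper's exact prompts; the only structural difference between the two styles is the reasoning trigger appended in the CoT variant.

    # Illustrative prompt templates (hypothetical wording; the study's actual
    # prompts appear in Figs 2 and 4).
    ZERO_SHOT = (
        "Classify the stance of the tweet toward '{target}' as "
        "in-favor, against, or neutral.\nTweet: {tweet}\nLabel:"
    )

    # Zero-shot CoT replaces the bare answer slot with a reasoning trigger,
    # asking the model to reason before committing to a label.
    ZERO_SHOT_COT = ZERO_SHOT.replace(
        "Label:",
        "Let's think step by step, then give the final label on the last line.",
    )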


