GPT detectors are biased against non-native English writers
- PMID: 37521038
- PMCID: PMC10382961
- DOI: 10.1016/j.patter.2023.100779
Abstract
GPT detectors frequently misclassify non-native English writing as AI-generated, raising concerns about fairness and robustness. Addressing the biases in these detectors is crucial to prevent the marginalization of non-native English speakers in evaluative and educational settings and to create a more equitable digital landscape.
© 2023 The Author(s).
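The bias described in the abstract is usually quantified as a gap in false-positive rates: how often a detector flags human-written text as AI-generated, measured separately for native and non-native writers. The paper evaluates off-the-shelf commercial detectors on corpora of human-written essays; the sketch below is only an illustration of that measurement, with placeholder detector decisions and hypothetical function names, not code from the study.

```python
from typing import Iterable


def false_positive_rate(flags: Iterable[bool]) -> float:
    """Fraction of human-written texts flagged as AI-generated."""
    flags = list(flags)
    return sum(flags) / len(flags) if flags else 0.0


def detector_bias_gap(native_flags, non_native_flags):
    """Difference in false-positive rates between the two writer groups.

    Both inputs are detector decisions (True = 'flagged as AI-generated')
    on texts known to be human-written, so every True is a false positive.
    """
    return false_positive_rate(non_native_flags) - false_positive_rate(native_flags)


if __name__ == "__main__":
    # Placeholder decisions from a hypothetical detector; a real evaluation
    # would run existing detectors on human-written essays from non-native
    # writers (e.g., TOEFL essays) and native writers (e.g., US student essays).
    native = [False, False, True, False, False, False, False, False]
    non_native = [True, True, False, True, True, False, True, True]
    print(f"native FPR:     {false_positive_rate(native):.2f}")
    print(f"non-native FPR: {false_positive_rate(non_native):.2f}")
    print(f"bias gap:       {detector_bias_gap(native, non_native):.2f}")
```

A large positive gap indicates the detector disproportionately mislabels non-native English writing as AI-generated, which is the fairness failure the abstract warns about.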
Conflict of interest statement
The authors declare no competing interests.