Review. 2024 May 30:7:1391472. doi: 10.3389/frai.2024.1391472. eCollection 2024.

Hate speech detection with ADHAR: a multi-dialectal hate speech corpus in Arabic

Anis Charfi et al. Front Artif Intell. 2024.

Abstract

Hate speech detection in Arabic poses a complex challenge due to the dialectal diversity across the Arab world. Most existing hate speech datasets for Arabic cover only one dialect or one hate speech category. They also lack balance across dialects, topics, and hate/non-hate classes. In this paper, we address this gap by presenting ADHAR, a comprehensive multi-dialect, multi-category hate speech corpus for Arabic. ADHAR contains 70,369 words and spans Modern Standard Arabic (MSA) and four dialectal variants: Egyptian, Levantine, Gulf, and Maghrebi. It covers four key hate speech categories: nationality, religion, ethnicity, and race. A major contribution is that ADHAR is carefully curated to maintain balance across dialects, categories, and hate/non-hate classes to enable unbiased dataset evaluation. We describe the systematic data collection methodology, followed by a rigorous annotation process involving multiple annotators per dialect. Extensive qualitative and quantitative analyses demonstrate the quality and usefulness of ADHAR. Our experiments with various classical and deep learning models demonstrate that our dataset enables the development of robust hate speech classifiers for Arabic, achieving accuracy and F1-scores of up to 90% for hate speech detection and up to 92% for category detection. When trained with AraBERT, we achieved an accuracy and F1-score of 94% for hate speech detection, as well as 95% for category detection.
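To make the classical-model baseline concrete, the following is a minimal sketch (not the authors' code) of the kind of hate/non-hate classifier the abstract benchmarks: TF-IDF character n-gram features with logistic regression. The texts and labels below are hypothetical placeholders, not examples from ADHAR.

```python
# Minimal sketch of a classical hate-speech classifier of the kind
# benchmarked in the paper; the toy corpus here is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder examples (NOT drawn from ADHAR).
texts = [
    "example hateful text one",
    "example neutral text one",
    "example hateful text two",
    "example neutral text two",
]
labels = [1, 0, 1, 0]  # 1 = hate, 0 = non-hate

# Character n-grams are a common choice for dialect-rich Arabic text,
# since they tolerate spelling variation across dialects.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
pred = int(clf.predict(["example hateful text one"])[0])
```

In practice the paper's stronger results come from fine-tuning AraBERT rather than from such classical pipelines, but the train/predict shape of the task is the same.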

Keywords: Arabic corpora; Arabic language; dataset annotation; dialectal Arabic; hate speech; natural language processing.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process or the final decision.

Figures

Figure 1. Number of tweets per Arabic variant in ADHAR.
Figure 2. Distribution of nationality-related tweets per Arabic variant.
Figure 3. Architecture of the CNN-BiLSTM model.
