Review

Online interventions for reducing hate speech and cyberhate: A systematic review

Steven Windisch et al. Campbell Systematic Reviews. 2022 May 25;18(2):e1243. doi: 10.1002/cl2.1243. eCollection 2022 Jun.

Abstract

Background: A unique feature of the Internet is that individual negative attitudes toward minoritized and racialized groups, as well as more extreme, hateful ideologies, can find their way onto specific platforms and instantly connect people who share similar prejudices. The sheer volume of hate speech/cyberhate in online environments creates a sense of normalcy about hatred and raises the potential for acts of intergroup violence or political radicalization. While there is some evidence of effective interventions to counter hate speech delivered through television, radio, youth conferences, and text-messaging campaigns, interventions for online hate speech have only recently emerged.

Objectives: This review aimed to assess the effects of online interventions to reduce online hate speech/cyberhate.

Search methods: We systematically searched 2 database aggregators, 36 individual databases, 6 individual journals, and 34 websites, and we also scrutinized the bibliographies of published reviews and annotated bibliographies of related literature.

Inclusion criteria: We included randomized and rigorous quasi-experimental studies of online hate speech/cyberhate interventions that measured the creation and/or consumption of hateful content online and included a control group. Eligible populations included youth (10-17 years) and adult (18+ years) participants of any racial/ethnic background, religious affiliation, gender identity, sexual orientation, nationality, or citizenship status.

Data collection and analysis: The systematic search covered January 1, 1990 to December 31, 2020, with the main searches conducted between August 19, 2020 and December 31, 2020, and supplementary searches undertaken between March 17 and 24, 2022. We coded characteristics of the intervention, sample, outcomes, and research methods. We extracted quantitative findings as standardized mean difference effect sizes and conducted a meta-analysis on two independent effect sizes.
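
To make the effect size extraction concrete, the sketch below computes a standardized mean difference and its Hedges' g small-sample correction from group-level summary statistics. This is a minimal illustration, not the review authors' code, and the group means, standard deviations, and sample sizes are hypothetical placeholders rather than values from the included studies.

    import math

    def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
        """Standardized mean difference with Hedges' small-sample correction."""
        # Pooled standard deviation across treatment and control groups
        sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
        d = (mean_t - mean_c) / sd_pooled
        # Hedges' correction factor J = 1 - 3 / (4*df - 1), with df = n_t + n_c - 2
        df = n_t + n_c - 2
        j = 1 - 3 / (4 * df - 1)
        g = j * d
        # Approximate sampling variance of g
        var_g = j**2 * ((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
        return g, var_g

    # Hypothetical group summaries (not values reported by either included study)
    g, var_g = hedges_g(mean_t=0.8, mean_c=1.1, sd_t=1.9, sd_c=2.1, n_t=780, n_c=790)
    print(f"g = {g:.3f}, SE = {math.sqrt(var_g):.3f}")

A negative g (treatment mean below the control mean) corresponds here to less hateful content under the intervention, matching the sign convention of the pooled estimate reported in the main results.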

Main results: Two studies were included in the meta-analysis, one of which had three treatment arms. For the meta-analysis, we chose the treatment arm from the Álvarez-Benjumea and Winter (2018) study that most closely aligned with the treatment condition in the Bodine-Baron et al. (2020) study; we also present single effect sizes for the other treatment arms from the Álvarez-Benjumea and Winter (2018) study. Both studies evaluated the effectiveness of an online intervention for reducing online hate speech/cyberhate. The Bodine-Baron et al. (2020) study had a sample of 1,570 subjects, while the Álvarez-Benjumea and Winter (2018) study had a sample of 1,469 tweets (nested in 180 subjects). The mean effect was small (g = -0.134, 95% confidence interval [-0.321, -0.054]). Each study was assessed for risk of bias on the following domains: randomization process, deviations from intended interventions, missing outcome data, measurement of the outcome, and selection of the reported results. Both studies were rated as "low risk" on the randomization process, deviations from intended interventions, and measurement of the outcome domains. We assessed the Bodine-Baron et al. (2020) study as having "some concerns" regarding missing outcome data and "high risk" of selective outcome reporting bias. The Álvarez-Benjumea and Winter (2018) study was rated as having "some concerns" on the selective outcome reporting bias domain.
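
For readers who want to see how a mean effect and confidence interval of this form are obtained, the sketch below pools two independent study-level effect sizes with inverse-variance weights. It is a minimal fixed-effect illustration, not the review's actual model, and the effect sizes and variances are hypothetical placeholders, not the estimates from Bodine-Baron et al. (2020) or Álvarez-Benjumea and Winter (2018).

    import math

    def pool_fixed_effect(effects, variances):
        """Inverse-variance (fixed-effect) pooled mean effect with a 95% CI."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    # Hypothetical effect sizes (Hedges' g) and variances for two studies
    pooled, (lo, hi) = pool_fixed_effect(effects=[-0.10, -0.18], variances=[0.004, 0.009])
    print(f"pooled g = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

With only two studies, the choice between fixed- and random-effects weighting matters, because between-study heterogeneity cannot be estimated reliably from two effect sizes.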

Authors' conclusions: The evidence is insufficient to determine the effectiveness of online hate speech/cyberhate interventions for reducing the creation and/or consumption of hateful content online. Gaps in the evaluation literature include the lack of experimental (random assignment) and rigorous quasi-experimental evaluations of online hate speech/cyberhate interventions, particularly evaluations that address the creation and/or consumption of hate speech rather than the accuracy of detection/classification software, and that assess heterogeneity among subjects by including both extremist and non-extremist individuals. We provide suggestions for how future research on online hate speech/cyberhate interventions can fill these gaps.


Figures

Figure 1: PRISMA flowchart.
Figure 2: Standardized mean difference and Hedges' g formulas (standard forms sketched after this list).
Figure 3: Logit transformation formula (standard form sketched after this list).
Figure 4: Risk of bias summary.
Figure 5: Forest plot for meta-analysis.
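
The figure images themselves are not reproduced here. For convenience, the standard textbook forms of the quantities named in Figures 2 and 3 are sketched below in LaTeX; these are assumed to be the conventional definitions, and the notation in the original figures may differ.

    % Figure 2 (assumed standard forms): standardized mean difference and Hedges' g
    d = \frac{\bar{X}_T - \bar{X}_C}{S_{\text{pooled}}}, \qquad
    S_{\text{pooled}} = \sqrt{\frac{(n_T - 1) S_T^2 + (n_C - 1) S_C^2}{n_T + n_C - 2}}, \qquad
    g = \left(1 - \frac{3}{4(n_T + n_C) - 9}\right) d

    % Figure 3 (assumed standard form): logit transformation of a proportion p
    \operatorname{logit}(p) = \ln\!\left(\frac{p}{1 - p}\right)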

References

REFERENCES TO INCLUDED STUDIES

    1. Álvarez‐Benjumea, A., & Winter, F. (2018). Normative change and culture of hate: An experiment in online environments. European Sociological Review, 34(3), 223–237. 10.1093/esr/jcy005 - DOI
    2. Bodine‐Baron, E., Marrone, J. V., Helmus, T. C., & Schlang, D. (2020). Countering violent extremism in Indonesia: Using an online panel survey to assess a social media counter‐messaging campaign. RAND Corporation.
REFERENCES TO EXCLUDED STUDIES
    1. Boccanfuso, E., White, F. A., & Maunder, R. D. (2020). Reducing transgender stigma via an e‐contact intervention. Sex Roles, 84, 326–336. 10.1007/s11199-020-01171-9 - DOI
    2. Bozeman, R. (2015). Bystander confronting of anti‐Black racism: Effects of belonging affirmation and confrontation training [Master's thesis, Loyola University Chicago].
    3. Braddock, K. (2019). Vaccinating against hate: Using attitudinal inoculation to confer resistance to persuasion by extremist propaganda. Terrorism and Political Violence, 34, 1–23. 10.1080/09546553.2019.1693370 - DOI
    4. *Chandrasekharan, E., Pavalanathan, U., Srinivasan, A., Glynn, A., Eisenstein, J., & Gilbert, E. (2017). You can't stay here: The efficacy of Reddit's 2015 ban examined through hate speech. Proceedings of the ACM on Human‐Computer Interaction, 1(CSCW), Article 31. 10.1145/3134666 - DOI
    5. *Davey, J., Birdwell, J., & Skellett, R. (2018). Counter‐Conversations: A model for direct engagement with individuals showing signs of radicalisation online. Institute for Strategic Dialogue. https://www.isdglobal.org/isd-publications/counter-conversations-a-model...
REFERENCES FOR STUDIES AWAITING CLASSIFICATION
    1. Blair, T. (1999, February 1). Online and out of reach. Time Australia, 5, 48–49.
    2. Braddock, K. (2009). Dark side of the superhighway: A quantitative content analytic view of terrorism on the Internet [Conference paper]. American Society of Criminology.
    3. Cherian, A. K., Tripathi, A., & Shrey (2020). Detecting hate speech on social media using machine learning. International Journal of Psychosocial Rehabilitation, 24(8), 1047–1058. 10.37200/IJPR/V24I8/PR280115 - DOI
    4. Hemker, K. (2018). Data augmentation and deep learning for hate speech detection [Master's thesis, Imperial College London].
    5. Larsen, E. V. (2012). Ending Al‐Qa'ida's violent social movement: Assessing jihadi strategies phase III. RAND.
ADDITIONAL REFERENCES
    1. Al‐Hassan, A., & Al‐Dossari, H. (2019). Detection of hate speech in social networks: A survey on multilingual corpus. Computer Science & Information Technology, 9(2), 83–100.
    2. Allport, G. W. (1954). The nature of prejudice. Addison‐Wesley.
    3. Altman, D., Ashby, D., Birks, J., Borenstein, M., Campbell, M., Deeks, J., Egger, M., Higgins, J., Lau, J., O'Rourke, K., Rücker, G., Scholten, R., Sterne, J., Thompson, S., & Whitehead, A. (2021). Chapter 10: Analysing data and undertaking meta‐analyses. In Deeks J. J., Higgins J. P. T., & Altman D. G. (Eds.), Cochrane handbook for systematic reviews of interventions (version 6.1, section 10-5-2). Cochrane Collaboration. 10.1002/9781119536604 - DOI
    4. Álvarez‐Benjumea, A., & Winter, F. (2018). Normative change and culture of hate: An experiment in online environments. European Sociological Review, 34, 223–237. 10.1093/esr/jcy005 - DOI
    5. Bakalis, C. (2018). Rethinking cyberhate laws. Information & Communications Technology Law, 27(1), 86–110. 10.1080/13600834.2017.1393934 - DOI
