Detecting Potentially Harmful and Protective Suicide-Related Content on Twitter: Machine Learning Approach
- PMID: 35976193
- PMCID: PMC9434391
- DOI: 10.2196/34705
Abstract
Background: Research has repeatedly shown that exposure to suicide-related news media content is associated with suicide rates, with some content characteristics likely having harmful and others potentially protective effects. Although good evidence exists for a few selected characteristics, systematic and large-scale investigations are lacking. Moreover, the growing importance of social media, particularly among young adults, calls for studies on the effects of the content posted on these platforms.
Objective: This study applies natural language processing and machine learning methods to classify large quantities of social media data according to characteristics identified as potentially harmful or beneficial in media effects research on suicide and prevention.
Methods: We manually labeled 3202 English tweets using a novel annotation scheme that classifies suicide-related tweets into 12 categories. Based on these categories, we trained a benchmark of machine learning models on a multiclass and a binary classification task. As models, we included a majority classifier, a word frequency-based approach (term frequency-inverse document frequency with a linear support vector machine), and 2 state-of-the-art deep learning models (Bidirectional Encoder Representations from Transformers [BERT] and XLNet). The first task classified posts into 6 main content categories that previous evidence suggests are particularly relevant to suicide prevention: personal stories of either suicidal ideation and attempts or coping and recovery, calls for action intended to spread either problem awareness or prevention-related information, reporting of suicide cases, and a sixth category for tweets irrelevant to these 5. The second, binary task separated posts in the 11 categories referring to actual suicide from posts in the off-topic category, which use suicide-related terms in another meaning or context.
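The word frequency baseline described above can be sketched as a TF-IDF vectorizer feeding a linear support vector machine. The tweets, labels, and category names below are invented placeholders for illustration, not the study's annotated corpus of 3202 tweets.

```python
# Minimal sketch of a TF-IDF + linear SVM text classifier
# (hypothetical toy data, not the study's annotated tweets).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy stand-ins for annotated tweets and their content categories
tweets = [
    "sharing my story of recovery and coping",
    "news report on a recent suicide case",
    "please spread awareness about this problem",
    "call the prevention hotline if you need help",
]
labels = ["coping", "case_report", "awareness", "prevention"]

# TfidfVectorizer turns each tweet into a sparse word-frequency vector;
# LinearSVC learns one-vs-rest separating hyperplanes over those vectors.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(tweets, labels)

pred = model.predict(["my recovery story and how I am coping"])[0]
```

In the actual benchmark such a pipeline would be trained on the labeled corpus and evaluated on a held-out test set; the deep learning models replace the bag-of-words vectors with contextual representations.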
Results: In both tasks, the 2 deep learning models performed very similarly and better than the majority or word frequency classifiers. BERT and XLNet reached accuracy scores above 73% on average across the 6 main categories in the test set and F1-scores between 0.69 and 0.85 for all categories except suicidal ideation and attempts (F1=0.55). In the binary classification task, they correctly labeled around 88% of the tweets, with BERT achieving F1-scores of 0.93 for the about-suicide class and 0.74 for the off-topic class. These classification performances were similar to human performance in most cases and comparable with state-of-the-art models on similar tasks.
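The per-category F1-scores reported above are the harmonic mean of precision and recall. A minimal sketch, using invented confusion counts (not the study's data), shows how a class with many missed instances ends up with a low F1 even when precision is decent:

```python
# F1 as the harmonic mean of precision and recall,
# computed from hypothetical confusion counts.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)   # fraction of predicted positives that are correct
    recall = tp / (tp + fn)      # fraction of true positives that are found
    return 2 * precision * recall / (precision + recall)

# Invented counts for illustration:
print(round(f1_score(tp=80, fp=10, fn=10), 2))  # balanced errors -> 0.89
print(round(f1_score(tp=40, fp=15, fn=50), 2))  # many misses -> 0.55
```

The second case illustrates why a rarer or harder category, such as suicidal ideation and attempts in this study, can score far below the others despite reasonable overall accuracy.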
Conclusions: The achieved performance scores highlight machine learning as a useful tool for media effects research on suicide. The clear advantage of BERT and XLNet suggests that there is crucial information about meaning in the context of words beyond mere word frequencies in tweets about suicide. By making data labeling more efficient, this work has enabled large-scale investigations on harmful and protective associations of social media content with suicide rates and help-seeking behavior.
Keywords: Twitter; deep learning; machine learning; social media; suicide prevention.
©Hannah Metzler, Hubert Baginski, Thomas Niederkrotenthaler, David Garcia. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 17.08.2022.
Conflict of interest statement
Conflicts of Interest: None declared.