2021 Jun 24;25:100422.
doi: 10.1016/j.invent.2021.100422. eCollection 2021 Sep.

Automatic identification of suicide notes with a transformer-based deep learning model


Tianlin Zhang et al. Internet Interv. .

Abstract

Suicide is one of the leading causes of death worldwide. At the same time, the widespread use of social media has led to an increase in people posting their suicide notes online. Designing a learning model that can aid the detection of suicide notes online is therefore of great importance. However, current methods cannot capture both local and global semantic features. In this paper, we propose a transformer-based model named TransformerRNN, which can effectively extract contextual and long-term dependency information by using a transformer encoder and a Bi-directional Long Short-Term Memory (BiLSTM) structure. We evaluate our model against baseline approaches on a dataset collected from online sources (659 suicide notes, 431 last statements, and 2000 neutral posts). The proposed TransformerRNN achieves 95.0% precision, 94.9% recall and a 94.9% F1-score, outperforming comparable machine learning and state-of-the-art deep learning models. The proposed model is effective for classifying suicide notes and may, in turn, help to develop suicide prevention technologies for social media.
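The pipeline described in the abstract (input embeddings → transformer encoder → BiLSTM → max-pooling → three-way classification) can be sketched in PyTorch as follows. All hyperparameters (vocabulary size, model width, head count, LSTM hidden size) are illustrative assumptions, not the values reported in the paper:

```python
import torch
import torch.nn as nn

class TransformerRNN(nn.Module):
    """Sketch of a transformer-encoder + BiLSTM text classifier.

    Hyperparameters here are placeholders, not the paper's settings.
    """

    def __init__(self, vocab_size=10000, d_model=128, nhead=4,
                 num_layers=1, lstm_hidden=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        self.bilstm = nn.LSTM(d_model, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (batch, seq, d_model)
        x = self.encoder(x)            # contextual features via self-attention
        x, _ = self.bilstm(x)          # long-term dependencies, both directions
        x, _ = x.max(dim=1)            # max-pool over the sequence dimension
        return self.classifier(x)      # logits for SN / LS / NP

model = TransformerRNN()
logits = model(torch.randint(0, 10000, (2, 20)))  # batch of 2, length 20
print(logits.shape)  # (2, 3): one logit per class
```

The three output classes correspond to the dataset's categories: suicide notes, last statements, and neutral posts.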

Keywords: Deep learning; Natural language processing; Social media; Suicide notes; Transformer-based model.


Conflict of interest statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Fig. 1
Examples of our dataset.
Fig. 2
Term cloud visualization of our dataset. The term clouds were generated using the Termine system (Frantzi et al., 2000): http://www.nactem.ac.uk/software/termine/.
Fig. 3
The overall architecture of TransformerRNN. The model contains five components: input embeddings, a transformer encoder, a BiLSTM, a max-pooling layer and a classification layer. The symbol ⊕ denotes vector concatenation. The internal architecture of the transformer encoder is shown in the light-green block. More details about our model are provided in the main text.
Fig. 4
Confusion matrices for different models. SN stands for suicide notes, LS for last statements, and NP for neutral posts.

References

    1. Acharya U.R., Oh S.L., Hagiwara Y. Automated EEG-based screening of depression using deep convolutional neural network. Comput. Methods Prog. Biomed. 2018;161:103–113. - PubMed
    2. Akhtar M.S., Ekbal A., Cambria E. How intense are you? Predicting intensities of emotions and sentiments using stacked ensemble [application notes]. IEEE Comput. Intell. Mag. 2020;15(1):64–75.
    3. Ba J.L., Kiros J.R., Hinton G.E. Layer normalization. arXiv preprint arXiv:1607.06450; 2016.
    4. Baker M.C. Lexical Categories: Verbs, Nouns and Adjectives. Cambridge University Press; 2003.
    5. Basiri M.E., Nemati S., Abdar M. ABCDM: an attention-based bidirectional CNN-RNN deep model for sentiment analysis. Futur. Gener. Comput. Syst. 2021;115:279–294.
