JMIR AI. 2023 May 4;2:e44293.
doi: 10.2196/44293.

Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks: Algorithm Development and Validation Study


David Oniani et al. JMIR AI.

Abstract

Background: Natural language processing (NLP) has emerged as a key technology in health care that leverages the large amount of free-text data in electronic health records to improve patient care, support clinical decisions, and facilitate clinical and translational science research. Recently, deep learning has achieved state-of-the-art performance in many clinical NLP tasks. However, training deep learning models often requires large, annotated data sets, which are normally not publicly available and can be time-consuming to build in clinical domains. Working with smaller annotated data sets is typical in clinical NLP; therefore, ensuring that deep learning models perform well under these conditions is crucial for real-world clinical NLP applications. A widely adopted approach is fine-tuning existing pretrained language models, but these attempts fall short when the training data set contains only a few annotated samples. Few-shot learning (FSL) has recently been investigated to tackle this problem. The Siamese neural network (SNN) has been widely used as an FSL approach in computer vision but has not been studied well in NLP, and the literature on its applications in clinical domains is scarce.

Objective: The aim of our study is to propose and evaluate SNN-based approaches for few-shot clinical NLP tasks.

Methods: We propose 2 SNN-based FSL approaches: a pretrained SNN and an SNN with second-order embeddings. We evaluate the proposed approaches on the clinical sentence classification task. We experiment with 3 few-shot settings: 4-shot, 8-shot, and 16-shot learning. The clinical NLP task is benchmarked using the following 4 pretrained language models: bidirectional encoder representations from transformers (BERT), BERT for biomedical text mining (BioBERT), BioBERT trained on clinical notes (BioClinicalBERT), and generative pretrained transformer 2 (GPT-2). We also present a performance comparison between the SNN-based approaches and the prompt-based GPT-2 approach.
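As a concrete illustration of the Siamese idea behind these methods (a minimal sketch, not the authors' implementation), the snippet below classifies a query sentence by running one shared encoder over the query and a handful of labeled support sentences, then picking the class with the highest average cosine similarity. The hashed bag-of-words `embed` function is a toy, deterministic stand-in for a pretrained encoder such as BioBERT, and all sentences and labels are hypothetical.

```python
import zlib

import numpy as np


def embed(sentence, dim=64):
    """Toy stand-in for a pretrained sentence encoder (e.g., BioBERT):
    a deterministic hashed bag-of-words embedding, L2-normalized."""
    vec = np.zeros(dim)
    for token in sentence.lower().split():
        # crc32 gives the same seed across runs, so each token maps
        # to a fixed random vector
        rng = np.random.default_rng(zlib.crc32(token.encode()))
        vec += rng.standard_normal(dim)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


def few_shot_classify(query, support):
    """Siamese-style inference: apply the *shared* encoder to the query
    and to every support sentence, then pick the class whose few-shot
    support examples are most similar on average (dot product of
    unit vectors = cosine similarity)."""
    q = embed(query)
    scores = {label: np.mean([q @ embed(s) for s in sents])
              for label, sents in support.items()}
    return max(scores, key=scores.get)


# Hypothetical 2-class, 2-shot support set
support = {
    "symptom": ["patient reports severe chest pain",
                "patient complains of shortness of breath"],
    "medication": ["prescribed metformin 500 mg daily",
                   "started lisinopril for hypertension"],
}
print(few_shot_classify("patient reports chest pain and dizziness", support))
```

In the paper's setup the shared encoder would be a pretrained language model, typically tuned with a pairwise (contrastive) objective; here the fixed embedding alone carries the few-shot decision, which is why only a couple of labeled examples per class are needed.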

Results: In 4-shot sentence classification tasks, GPT-2 had the highest precision (0.63), but its recall (0.38) and F score (0.42) were lower than those of BioBERT-based pretrained SNN (0.45 and 0.46, respectively). In both 8-shot and 16-shot settings, SNN-based approaches outperformed GPT-2 in all 3 metrics of precision, recall, and F score.

Conclusions: The experimental results verified the effectiveness of the proposed SNN approaches for few-shot clinical NLP tasks.

Keywords: FSL; NLP; SNN; Siamese neural network; few-shot learning; natural language processing; neural networks.


Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1. Siamese neural network (SNN) architecture.

Figure 2. Siamese neural network with second-order embeddings (SOE-SNN) architecture. RNN: recurrent neural network.
