Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT
- PMID: 35448946
- PMCID: PMC9022356
- DOI: 10.1186/s12859-022-04688-w
Abstract
Background: The abundance of biomedical text data, coupled with advances in natural language processing (NLP), is resulting in novel biomedical NLP (BioNLP) applications. These NLP applications, or tasks, rely on the availability of domain-specific language models (LMs) trained on massive amounts of data. Most existing domain-specific LMs adopt the bidirectional encoder representations from transformers (BERT) architecture, which has limitations, and their generalizability is unproven because baseline results across common BioNLP tasks are lacking.
Results: We present 8 variants of BioALBERT, a domain-specific adaptation of A Lite BERT (ALBERT), trained on biomedical (PubMed and PubMed Central) and clinical (MIMIC-III) corpora and fine-tuned for 6 different tasks across 20 benchmark datasets. Experiments show that a large variant of BioALBERT trained on PubMed outperforms the state of the art on named-entity recognition (+11.09% BLURB score improvement), relation extraction (+0.80% BLURB score), sentence similarity (+1.05% BLURB score), document classification (+0.62% F1-score), and question answering (+2.83% BLURB score). It represents a new state of the art in 5 out of 6 benchmark BioNLP tasks.
Conclusions: The large variant of BioALBERT trained on PubMed achieved a higher BLURB score than previous state-of-the-art models on 5 of the 6 benchmark BioNLP tasks. Depending on the task, 5 different variants of BioALBERT outperformed previous state-of-the-art models on 17 of the 20 benchmark datasets, showing that our model is robust and generalizable across common BioNLP tasks. We have made BioALBERT freely available, which will help the BioNLP community avoid the computational cost of training and establish a new set of baselines for future efforts across a broad range of BioNLP tasks.
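The BLURB-style scores reported above follow the benchmark's macro-averaging convention: dataset scores are first averaged within each task, and the task averages are then averaged into a single score. A minimal sketch of that aggregation, with purely illustrative task and dataset scores (not the paper's actual results):

```python
# Illustrative BLURB-style macro averaging: average dataset scores
# within each task, then average the resulting task scores.
from statistics import mean

def blurb_score(results: dict[str, dict[str, float]]) -> float:
    """results maps task -> {dataset: score}; returns the macro average."""
    task_scores = [mean(datasets.values()) for datasets in results.values()]
    return mean(task_scores)

# Hypothetical per-dataset scores, for illustration only
example = {
    "NER": {"BC5CDR-chem": 93.0, "NCBI-disease": 89.0},
    "Relation extraction": {"ChemProt": 77.0},
    "Sentence similarity": {"BIOSSES": 92.0},
}
print(round(blurb_score(example), 2))  # -> 86.67
```

Because each task contributes equally regardless of how many datasets it contains, a large gain on one task (e.g. the +11.09% on NER) can move the overall score noticeably.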
Keywords: BioNLP; Bioinformatics; Biomedical text mining; Domain-specific language model.
© 2022. The Author(s).
Conflict of interest statement
The authors declare that they have no competing interests.
Similar articles
- BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. 2020 Feb 15;36(4):1234-1240. doi: 10.1093/bioinformatics/btz682. PMID: 31501885.
- Bioformer: an efficient transformer language model for biomedical text mining. ArXiv [Preprint]. 2023 Feb 3:arXiv:2302.01588v1. PMID: 36945685.
- BioBERT and Similar Approaches for Relation Extraction. Methods Mol Biol. 2022;2496:221-235. doi: 10.1007/978-1-0716-2305-3_12. PMID: 35713867.
- Community challenges in biomedical text mining over 10 years: success, failure and the future. Brief Bioinform. 2016 Jan;17(1):132-44. doi: 10.1093/bib/bbv024. PMID: 25935162.
- A Review of Recent Work in Transfer Learning and Domain Adaptation for Natural Language Processing of Electronic Health Records. Yearb Med Inform. 2021 Aug;30(1):239-244. doi: 10.1055/s-0041-1726522. PMID: 34479396.
Cited by
- Standigm ASK™: knowledge graph and artificial intelligence platform applied to target discovery in idiopathic pulmonary fibrosis. Brief Bioinform. 2024 Jan 22;25(2):bbae035. doi: 10.1093/bib/bbae035. PMID: 38349059.
- Transformer models in biomedicine. BMC Med Inform Decis Mak. 2024 Jul 29;24(1):214. doi: 10.1186/s12911-024-02600-5. PMID: 39075407.
- Multi-label classification of symptom terms from free-text bilingual adverse drug reaction reports using natural language processing. PLoS One. 2022 Aug 4;17(8):e0270595. doi: 10.1371/journal.pone.0270595. PMID: 35925971.
- Artificial Intelligence in Emergency Medicine: Viewpoint of Current Applications and Foreseeable Opportunities and Challenges. J Med Internet Res. 2023 May 23;25:e40031. doi: 10.2196/40031. PMID: 36972306.
- Genome language modeling (GLM): a beginner's cheat sheet. Biol Methods Protoc. 2025 Mar 25;10(1):bpaf022. doi: 10.1093/biomethods/bpaf022. PMID: 40370585.