A comparison of word embeddings for the biomedical natural language processing
- PMID: 30217670
- PMCID: PMC6585427
- DOI: 10.1016/j.jbi.2018.09.008
Abstract
Background: Word embeddings have been widely used in biomedical Natural Language Processing (NLP) applications because the vector representations can capture useful semantic properties and linguistic relationships between words. Different textual resources (e.g., Wikipedia and biomedical literature corpora) have been used in biomedical NLP to train word embeddings, and these embeddings are commonly fed as features to downstream machine learning models. However, there has been little work on evaluating word embeddings trained from different textual resources.
Methods: In this study, we empirically evaluated word embeddings trained from four different corpora, namely clinical notes, biomedical publications, Wikipedia, and news. For the former two resources, we trained word embeddings on unstructured electronic health record (EHR) data available at Mayo Clinic and on articles (MedLit) from PubMed Central, respectively. For the latter two resources, we used publicly available pre-trained word embeddings, GloVe and Google News. The evaluation was done both qualitatively and quantitatively. For the qualitative evaluation, we randomly selected medical terms from three categories (i.e., disorder, symptom, and drug) and manually inspected the five most similar words computed by each set of embeddings for each term. We also analyzed the word embeddings through a 2-dimensional visualization plot of 377 medical terms. For the quantitative evaluation, we conducted both intrinsic and extrinsic evaluations. For the intrinsic evaluation, we assessed the word embeddings' ability to capture medical semantics by measuring the semantic similarity between medical terms using four published datasets: Pedersen's dataset, Hliaoutakis's dataset, MayoSRS, and UMNSRS. For the extrinsic evaluation, we applied the word embeddings to multiple downstream biomedical NLP applications, including clinical information extraction (IE), biomedical information retrieval (IR), and relation extraction (RE), with data from shared tasks.
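The "five most similar words" inspection described above reduces to ranking the vocabulary by cosine similarity to a query term's vector. A minimal sketch of that computation, using invented toy vectors (the study's actual embeddings, trained on EHR, MedLit, GloVe, and Google News corpora, are not reproduced here):

```python
import numpy as np

# Toy 4-dimensional embeddings for illustration only; vector values are invented.
embeddings = {
    "diabetes":      np.array([0.9, 0.1, 0.3, 0.0]),
    "hyperglycemia": np.array([0.8, 0.2, 0.4, 0.1]),
    "aspirin":       np.array([0.1, 0.9, 0.0, 0.2]),
    "headache":      np.array([0.2, 0.5, 0.8, 0.1]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_similar(term, k=5):
    """Rank every other vocabulary word by cosine similarity to `term`."""
    q = embeddings[term]
    scored = [(w, cosine(q, v)) for w, v in embeddings.items() if w != term]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:k]

print(most_similar("diabetes"))
```

The same cosine measure, compared against human similarity judgments, underlies the intrinsic evaluation on the four published datasets.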
Results: The qualitative evaluation shows that the word embeddings trained from EHR and MedLit can find more similar medical terms than those trained from GloVe and Google News. The intrinsic quantitative evaluation verifies that the semantic similarity captured by the word embeddings trained from EHR is closer to human experts' judgments on all four tested datasets. The extrinsic quantitative evaluation shows that the word embeddings trained on EHR achieved the best F1 score of 0.900 for the clinical IE task; no word embeddings improved the performance for the biomedical IR task; and the word embeddings trained on Google News had the best overall F1 score of 0.790 for the RE task.
Conclusion: Based on the evaluation results, we can draw the following conclusions. First, the word embeddings trained from EHR and MedLit can capture the semantics of medical terms better, and find semantically relevant medical terms closer to human experts' judgments than those trained from GloVe and Google News. Second, there does not exist a consistent global ranking of word embeddings for all downstream biomedical NLP applications. However, adding word embeddings as extra features will improve results on most downstream tasks. Finally, the word embeddings trained from the biomedical domain corpora do not necessarily have better performance than those trained from the general domain corpora for any downstream biomedical NLP task.
Keywords: Information extraction; Information retrieval; Machine learning; Natural language processing; Word embeddings.
Copyright © 2018 Elsevier Inc. All rights reserved.
Similar articles
- Evaluating semantic relations in neural word embeddings with biomedical and general domain knowledge bases. BMC Med Inform Decis Mak. 2018 Jul 23;18(Suppl 2):65. doi: 10.1186/s12911-018-0630-x. PMID: 30066651. Free PMC article.
- The Impact of Specialized Corpora for Word Embeddings in Natural Langage Understanding. Stud Health Technol Inform. 2020 Jun 16;270:432-436. doi: 10.3233/SHTI200197. PMID: 32570421.
- HPO2Vec+: Leveraging heterogeneous knowledge resources to enrich node embeddings for the Human Phenotype Ontology. J Biomed Inform. 2019 Aug;96:103246. doi: 10.1016/j.jbi.2019.103246. Epub 2019 Jun 27. PMID: 31255713. Free PMC article.
- Visualization of medical concepts represented using word embeddings: a scoping review. BMC Med Inform Decis Mak. 2022 Mar 29;22(1):83. doi: 10.1186/s12911-022-01822-9. PMID: 35351120. Free PMC article.
- A Review of Recent Work in Transfer Learning and Domain Adaptation for Natural Language Processing of Electronic Health Records. Yearb Med Inform. 2021 Aug;30(1):239-244. doi: 10.1055/s-0041-1726522. Epub 2021 Sep 3. PMID: 34479396. Free PMC article. Review.
Cited by
- Classification of Biomedical Texts for Cardiovascular Diseases with Deep Neural Network Using a Weighted Feature Representation Method. Healthcare (Basel). 2020 Oct 10;8(4):392. doi: 10.3390/healthcare8040392. PMID: 33050399. Free PMC article.
- Comparison of Word Embeddings for Extraction from Medical Records. Int J Environ Res Public Health. 2019 Nov 8;16(22):4360. doi: 10.3390/ijerph16224360. PMID: 31717300. Free PMC article.
- Word Embedding for the French Natural Language in Health Care: Comparative Study. JMIR Med Inform. 2019 Jul 29;7(3):e12310. doi: 10.2196/12310. PMID: 31359873. Free PMC article.
- The Coming of Age of AI/ML in Drug Discovery, Development, Clinical Testing, and Manufacturing: The FDA Perspectives. Drug Des Devel Ther. 2023 Sep 6;17:2691-2725. doi: 10.2147/DDDT.S424991. eCollection 2023. PMID: 37701048. Free PMC article.
- BioConceptVec: Creating and evaluating literature-based biomedical concept embeddings on a large scale. PLoS Comput Biol. 2020 Apr 23;16(4):e1007617. doi: 10.1371/journal.pcbi.1007617. eCollection 2020 Apr. PMID: 32324731. Free PMC article.