J Biomed Inform. 2023 Jun;142:104370. doi: 10.1016/j.jbi.2023.104370. Epub 2023 Apr 24.

Contextualized medication information extraction using Transformer-based deep learning architectures


Aokun Chen et al. J Biomed Inform. 2023 Jun.

Abstract

Objective: To develop a natural language processing (NLP) system that extracts medications and the contextual information needed to understand drug changes. This project is part of the 2022 n2c2 challenge.

Materials and methods: We developed NLP systems for three subtasks: medication mention extraction, event classification (indicating whether a medication change is discussed), and context classification, which categorizes each medication change along 5 orthogonal dimensions. We explored 6 state-of-the-art pretrained transformer models for the three subtasks, including GatorTron, a large language model pretrained on > 90 billion words of text (including > 80 billion words from > 290 million clinical notes from University of Florida Health). We evaluated our NLP systems using the annotated data and evaluation scripts provided by the 2022 n2c2 organizers.
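As a concrete illustration of the medication mention extraction subtask, the sketch below decodes token-level BIO predictions (the standard output format of a transformer token-classification head) into medication spans. This is a minimal, generic sketch, not the authors' implementation; the example tokens, tag names, and labels are assumptions for illustration.

```python
# Minimal sketch: decode token-level BIO tags into entity spans,
# the final step of transformer-based named entity recognition.
# Tag names ("B-Medication", etc.) and the example sentence are
# illustrative, not taken from the n2c2 data.

def decode_bio(tokens, tags):
    """Group B-/I- tagged tokens into (text, start, end) medication spans."""
    spans, current = [], None
    for i, (tok, tag) in enumerate(zip(tokens, tags)):
        if tag.startswith("B-"):
            if current:                      # close any open span
                spans.append(current)
            current = {"tokens": [tok], "start": i, "end": i}
        elif tag.startswith("I-") and current:
            current["tokens"].append(tok)    # extend the open span
            current["end"] = i
        else:                                # "O" tag closes an open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(" ".join(s["tokens"]), s["start"], s["end"]) for s in spans]

tokens = ["Start", "metformin", "500", "mg", "and", "stop", "lisinopril"]
tags = ["O", "B-Medication", "I-Medication", "I-Medication", "O", "O", "B-Medication"]
print(decode_bio(tokens, tags))
# → [('metformin 500 mg', 1, 3), ('lisinopril', 6, 6)]
```

In a full pipeline, the extracted spans would then be passed to the event and context classifiers described above.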

Results: Our GatorTron models achieved the best F1-scores of 0.9828 for medication extraction (ranked 3rd), 0.9379 for event classification (ranked 2nd), and the best micro-average accuracy of 0.9126 for context classification. GatorTron outperformed existing transformer models pretrained on smaller general-English and clinical text corpora, indicating the advantage of large language models.
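The micro-average accuracy reported for context classification pools correct predictions across all dimensions before dividing, rather than averaging the per-dimension accuracies. A minimal sketch of that computation (the counts below are illustrative, not the challenge data):

```python
# Minimal sketch of micro-average accuracy across classification
# dimensions: pool all correct/total counts, then divide once.
# The five (correct, total) pairs below are made-up illustrative numbers.

def micro_accuracy(results):
    """results: iterable of (correct, total) pairs, one per dimension."""
    correct = sum(c for c, _ in results)
    total = sum(t for _, t in results)
    return correct / total

dims = [(90, 100), (85, 100), (95, 100), (88, 100), (92, 100)]
print(micro_accuracy(dims))  # → 0.9
```

Unlike a macro average, this weights every prediction equally, so dimensions with more instances contribute proportionally more to the score.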

Conclusion: This study demonstrated the advantage of using large transformer models for contextual medication information extraction from clinical narratives.

Keywords: Clinical natural language processing; Deep learning; Medication information extraction; Named entity recognition; Text classification.


Conflict of interest statement

Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
