Chem Res Toxicol. 2024 Sep 16;37(9):1524-1534. doi: 10.1021/acs.chemrestox.4c00134. Epub 2024 Aug 27.

Toward an Explainable Large Language Model for the Automatic Identification of the Drug-Induced Liver Injury Literature

Chunwei Ma et al.

Abstract

Drug-induced liver injury (DILI) stands as a significant concern in drug safety, representing the primary cause of acute liver failure. Identifying the scientific literature related to DILI is crucial for monitoring, investigating, and conducting meta-analyses of drug safety issues. Given the intricate and often obscure nature of drug interactions, simple keyword searching can be insufficient for the exhaustive retrieval of the DILI-relevant literature. Manual curation of DILI-related publications demands pharmaceutical expertise and is susceptible to errors, severely limiting throughput. Despite numerous efforts utilizing cutting-edge natural language processing and deep learning techniques to automatically identify the DILI-related literature, their performance remains suboptimal for real-world applications in clinical research and regulatory contexts. In the past year, large language models (LLMs) such as ChatGPT and its open-source counterpart LLaMA have achieved groundbreaking progress in natural language understanding and question answering, paving the way for the automated, high-throughput identification of the DILI-related literature and subsequent analysis. Leveraging a large-scale public dataset comprising 14 203 training publications from the CAMDA 2022 literature AI challenge, we have developed what we believe to be the first LLM specialized in DILI analysis based on LLaMA-2. In comparison with other smaller language models such as BERT, GPT, and their variants, LLaMA-2 exhibits an enhanced out-of-fold accuracy of 97.19% and area under the ROC curve of 0.9947 using 3-fold cross-validation on the training set. Despite LLMs' initial design for dialogue systems, our study illustrates their successful adaptation into accurate classifiers for automated identification of the DILI-related literature from vast collections of documents. This work is a step toward unleashing the potential of LLMs in the context of regulatory science and facilitating the regulatory review process.
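
The abstract does not give implementation details, but the general recipe it describes, adapting a LLaMA-2 checkpoint into a binary classifier for DILI-related versus unrelated publications and evaluating it with 3-fold cross-validation, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the Hugging Face checkpoint name, the input file and its "text"/"label" columns, and all hyperparameters are assumptions made for the example.

```python
# Hedged sketch: fine-tuning LLaMA-2 as a binary DILI-literature classifier.
# Assumptions (not from the paper): checkpoint name, hyperparameters, and a CSV
# with "text" (title + abstract) and "label" (1 = DILI-related) columns.
import numpy as np
import pandas as pd
import torch
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint (gated on HF)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA-2 ships without a pad token


class DiliDataset(torch.utils.data.Dataset):
    """Tokenized publications plus binary DILI labels."""

    def __init__(self, texts, labels):
        self.enc = tokenizer(list(texts), truncation=True, padding=True,
                             max_length=512, return_tensors="pt")
        self.labels = list(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item


df = pd.read_csv("camda_dili_training.csv")  # hypothetical training file
oof_pred = np.zeros(len(df))

# 3-fold cross-validation, mirroring the evaluation protocol in the abstract.
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
for fold, (tr_idx, va_idx) in enumerate(skf.split(df["text"], df["label"])):
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=2, torch_dtype=torch.bfloat16)
    model.config.pad_token_id = tokenizer.pad_token_id

    args = TrainingArguments(output_dir=f"dili_fold{fold}", num_train_epochs=1,
                             per_device_train_batch_size=4, learning_rate=2e-5,
                             bf16=True, logging_steps=50, save_strategy="no")
    trainer = Trainer(model=model, args=args,
                      train_dataset=DiliDataset(df["text"].iloc[tr_idx],
                                                df["label"].iloc[tr_idx]))
    trainer.train()

    # Out-of-fold probabilities for the held-out publications of this fold.
    logits = trainer.predict(DiliDataset(df["text"].iloc[va_idx],
                                         df["label"].iloc[va_idx])).predictions
    probs = torch.softmax(torch.tensor(logits, dtype=torch.float32), dim=-1)
    oof_pred[va_idx] = probs[:, 1].numpy()

print("OOF accuracy:", accuracy_score(df["label"], oof_pred > 0.5))
print("OOF ROC AUC :", roc_auc_score(df["label"], oof_pred))
```

In practice, full fine-tuning of a 7B-parameter model is memory-intensive, so parameter-efficient methods such as LoRA are commonly used instead, and access to the LLaMA-2 weights on Hugging Face requires accepting Meta's license; the sketch above glosses over both points.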
