J Am Med Inform Assoc. 2024 Sep 1;31(9):2019-2029. doi: 10.1093/jamia/ocae087.

Lingdan: enhancing encoding of traditional Chinese medicine knowledge for clinical reasoning tasks with large language models

Rui Hua et al.
Abstract

Objective: The recent surge of large language models (LLMs) across many fields has yet to be fully realized in traditional Chinese medicine (TCM). This study aims to bridge that gap by developing an LLM tailored to TCM knowledge and improving its performance and accuracy on clinical reasoning tasks such as diagnosis, treatment, and prescription recommendation.

Materials and methods: This study harnessed a wide array of TCM data resources, including TCM ancient books, textbooks, and clinical data, to create 3 key datasets: the TCM Pre-trained Dataset, the Traditional Chinese Patent Medicine (TCPM) Question Answering Dataset, and the Spleen and Stomach Herbal Prescription Recommendation Dataset. These datasets underpinned the development of the Lingdan Pre-trained LLM and 2 specialized models: the Lingdan-TCPM-Chat Model, which uses a Chain-of-Thought process for symptom analysis and TCPM recommendation, and a Lingdan Prescription Recommendation model (Lingdan-PR) that proposes herbal prescriptions based on electronic medical records.
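
To make the fine-tuning setup concrete, below is a minimal sketch of what one supervised record with a Chain-of-Thought target for Lingdan-TCPM-Chat might look like. The field names, symptoms, and recommended medicine are illustrative assumptions, not taken from the paper's datasets.

```python
# A hedged sketch of a Chain-of-Thought fine-tuning record: the target
# response reasons over the reported symptoms before naming a TCPM.
# All field names and content here are hypothetical.
import json

record = {
    "instruction": "Analyze the patient's symptoms and recommend a "
                   "suitable Traditional Chinese Patent Medicine (TCPM).",
    "input": "Epigastric fullness, poor appetite, loose stools, fatigue.",
    "output": (
        "Reasoning: the symptoms suggest spleen-qi deficiency with dampness.\n"
        "Recommendation: a spleen-tonifying TCPM such as Shenling Baizhu "
        "San may be appropriate."
    ),
}

# Serialized as one line of a JSONL supervised fine-tuning corpus.
print(json.dumps(record, ensure_ascii=False))
```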

Results: The Lingdan-TCPM-Chat and Lingdan-PR models, fine-tuned from the Lingdan Pre-trained LLM, demonstrated state-of-the-art performance on TCM clinical knowledge question answering and herbal prescription recommendation. Notably, Lingdan-PR outperformed all state-of-the-art baselines, improving the Top@20 F1-score by 18.39% over the best baseline.
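
The Top@20 F1-score compares the model's 20 highest-ranked herbs against the ground-truth prescription. Below is a minimal sketch of how such a metric is typically computed; the function name and example data are assumptions, not the paper's evaluation code.

```python
# Top@K F1 for prescription recommendation: precision and recall of the
# K highest-ranked herbs against the reference prescription.
def top_k_f1(recommended: list[str], reference: set[str], k: int = 20) -> float:
    top_k = set(recommended[:k])          # the K highest-ranked herbs
    hits = len(top_k & reference)         # herbs matching the reference
    if hits == 0:
        return 0.0
    precision = hits / len(top_k)
    recall = hits / len(reference)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 3 of the top-5 recommended herbs match.
print(top_k_f1(["baizhu", "fuling", "gancao", "huangqi", "chenpi"],
               {"baizhu", "fuling", "gancao", "dangshen"}, k=5))
```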

Conclusion: This study marks a pivotal step in merging advanced LLMs with TCM, showcasing the potential of artificial intelligence to improve clinical decision-making in medical diagnosis and treatment. The success of the Lingdan Pre-trained LLM and its derivative models, Lingdan-TCPM-Chat and Lingdan-PR, not only advances TCM practice but also opens new avenues for applying artificial intelligence in other specialized medical fields. Our project is available at https://github.com/TCMAI-BJTU/LingdanLLM.

Keywords: clinical reasoning; large language model; pre-training; prescription recommendation; traditional Chinese medicine.


Conflict of interest statement

None declared.

Figures

Figure 1. The framework and workflow of this study. We began by organizing a vast array of TCM data to train a foundational LLM, the Lingdan Pre-trained Model. Building on this foundation, we used data from package inserts and the Chinese Pharmacopoeia to develop an LLM for Traditional Chinese Patent Medicine (TCPM) dialogue, named Lingdan-TCPM-Chat. In addition, using outpatient data for Spleen and Stomach Disease (SSD), we trained a Spleen and Stomach Herbal Prescription Recommendation (SSHPR) LLM, Lingdan-PR. Each model is documented in terms of data processing and training methodology and has undergone comprehensive evaluation.

Figure 2. The process of Knowledge Linguification for herbal medicine.

Figure 3. The process of Knowledge Question-Answerization for TCPM.

Figure 4. The TCM Interactive Diagnostic Dialogue Framework (TCM-IDDF).

Figure 5. Training loss for different data sampling ratios.

Figure 6. Comparison of model performance with different data augmentation frequencies.
