JMIR Med Inform. 2025 Aug 21;13:e75279. doi: 10.2196/75279.

Leveraging Retrieval-Augmented Large Language Models for Dietary Recommendations With Traditional Chinese Medicine's Medicine Food Homology: Algorithm Development and Validation


Hangyu Sha et al. JMIR Med Inform.

Abstract

Background: Traditional Chinese Medicine (TCM) emphasizes the concept of medicine food homology (MFH), which integrates dietary therapy into health care. However, the practical application of MFH principles relies heavily on expert knowledge and manual interpretation, posing challenges for automating MFH-based dietary recommendations. Although large language models (LLMs) have shown potential in health care decision support, their performance in specialized domains such as TCM is often hindered by hallucinations and a lack of domain knowledge. The integration of uncertain knowledge graphs (UKGs) with LLMs via retrieval-augmented generation (RAG) offers a promising solution to overcome these limitations by enabling a structured and faithful representation of MFH principles while enhancing LLMs' ability to understand the inherent uncertainty and heterogeneity of TCM knowledge. Consequently, it holds potential to improve the reliability and accuracy of MFH-based dietary recommendations generated by LLMs.

Objective: This study aimed to introduce Yaoshi-RAG, a framework that leverages UKGs to enhance LLMs' capabilities in generating accurate and personalized MFH-based dietary recommendations.

Methods: The proposed framework began by constructing a comprehensive MFH knowledge graph (KG) through LLM-driven open information extraction, which extracted structured knowledge from multiple sources. To address the incompleteness and uncertainty within the MFH KG, UKG reasoning was used to measure the confidence of existing triples and to complete missing triples. When processing user queries, query entities were identified and linked to the MFH KG, enabling retrieval of relevant reasoning paths. These reasoning paths were then ranked based on triple confidence scores and entity importance. Finally, the most informative reasoning paths were encoded into prompts using prompt engineering, enabling the LLM to generate personalized dietary recommendations that aligned with both individual health needs and MFH principles. The effectiveness of Yaoshi-RAG was evaluated through both automated metrics and human evaluation.
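The Methods describe ranking retrieved reasoning paths by triple confidence and entity importance, then encoding the top paths into a prompt. The following is a minimal sketch of that retrieve-rank-prompt step; the scoring formula, data shapes, and function names are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of path ranking and prompt construction as described above.
# The combination of confidence and importance scores is an assumption.

def rank_reasoning_paths(paths, confidence, importance, top_k=3):
    """Rank reasoning paths by combining triple confidence and entity importance.

    paths      : list of paths, each a list of (head, relation, tail) triples
    confidence : dict mapping a triple to its UKG confidence score in [0, 1]
    importance : dict mapping an entity to its importance score
    """
    def path_score(path):
        conf = sum(confidence.get(t, 0.0) for t in path) / len(path)
        ents = {e for (h, _, t) in path for e in (h, t)}
        imp = sum(importance.get(e, 0.0) for e in ents) / len(ents)
        return conf * imp  # assumed combination; could also be a weighted sum

    return sorted(paths, key=path_score, reverse=True)[:top_k]

def build_prompt(query, top_paths):
    """Encode the most informative reasoning paths into an LLM prompt."""
    lines = [" -> ".join(f"{h} [{r}] {t}" for (h, r, t) in p) for p in top_paths]
    return (f"Question: {query}\n"
            "Relevant medicine-food-homology knowledge:\n"
            + "\n".join(f"- {line}" for line in lines)
            + "\nAnswer with a personalized dietary recommendation.")
```

A toy call with two one-triple paths shows the higher-confidence path ranked first and serialized into the prompt context.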

Results: The constructed MFH KG comprised 24,984 entities, 22 relations, and 29,292 triples. Extensive experiments demonstrated the superiority of Yaoshi-RAG across evaluation metrics. Integrating the MFH KG significantly improved the performance of LLMs, yielding average increases of 14.5% in Hits@1 and 8.7% in F1-score. Among the evaluated LLMs, DeepSeek-R1 achieved the best performance, with 84.2% Hits@1 and 71.5% F1-score. Human evaluation further validated these results, confirming that Yaoshi-RAG consistently outperformed baseline models across all assessed quality dimensions.
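The two automated metrics reported above are standard for KG question answering. A minimal sketch under their common definitions (Hits@1: whether the top-ranked answer is a gold answer; F1: set overlap between predicted and gold answers); the exact evaluation protocol used in the paper is not specified here.

```python
# Common definitions of the reported metrics; assumed, not taken from the paper.

def hits_at_1(ranked_answers, gold):
    """1.0 if the top-ranked answer is in the gold set, else 0.0."""
    return 1.0 if ranked_answers and ranked_answers[0] in gold else 0.0

def f1_score(predicted, gold):
    """Set-based F1 between predicted and gold answer sets."""
    pred, gold = set(predicted), set(gold)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)  # true positives: answers in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```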

Conclusions: This study presents Yaoshi-RAG, a new framework that enhances LLMs' capabilities in generating MFH-based dietary recommendations through knowledge retrieved from a UKG. By constructing a comprehensive TCM knowledge representation, our framework effectively extracts and uses MFH principles. Experimental results demonstrate the effectiveness of our framework in synthesizing traditional wisdom with advanced language models, facilitating personalized dietary recommendations that address individual health conditions while providing evidence-based explanations.

Keywords: Traditional Chinese Medicine; dietary recommendation; large language model; medicine food homology; retrieval-augmented generation; uncertain knowledge graph.


Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1. The architecture of the proposed framework. KG: knowledge graph; LLM: large language model; MFH: medicine food homology; OpenIE: open information extraction; TCM: Traditional Chinese Medicine; UKG: uncertain knowledge graph.
Figure 2. The detailed process of medicine food homology knowledge graph construction. KG: knowledge graph; MFH: medicine food homology.
Figure 3. Different knowledge retrieval settings and the corresponding F1-scores.
Figure 4. Different damping factor settings and the corresponding F1-scores.
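Figure 4's damping factor suggests a PageRank-style computation for the entity-importance scores used in path ranking. The sketch below is an assumption for illustration: plain PageRank over the KG's adjacency structure, with the damping factor as the parameter presumably swept in Figure 4; the paper's actual importance measure may differ.

```python
# Plain PageRank; the use of this exact algorithm for entity importance
# is an assumption inferred from the damping-factor experiment.

def pagerank(neighbors, damping=0.85, iters=50):
    """Iterative PageRank over an adjacency dict {node: [out-neighbors]}."""
    nodes = set(neighbors) | {n for outs in neighbors.values() for n in outs}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            outs = neighbors.get(v, [])
            if outs:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling node: redistribute its mass uniformly
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank
```

On a small cycle graph the scores sum to 1 and nodes with more in-links score higher, matching the intuition that well-connected MFH entities should weigh more in path ranking.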
