KRAGEN: a knowledge graph-enhanced RAG framework for biomedical problem solving using large language models
- PMID: 38830083
- PMCID: PMC11164829
- DOI: 10.1093/bioinformatics/btae353
Abstract
Motivation: Answering and solving complex problems with a large language model (LLM) in a specialized domain such as biomedicine is a challenging task that requires both factual consistency and sound logic. LLMs suffer from major limitations, such as hallucinating false or irrelevant information and being misled by noisy data. These issues can compromise the trustworthiness, accuracy, and compliance of LLM-generated text and insights.
Results: Knowledge Retrieval Augmented Generation ENgine (KRAGEN) is a new tool that combines knowledge graphs, Retrieval Augmented Generation (RAG), and advanced prompting techniques to solve complex problems posed in natural language. KRAGEN converts a knowledge graph into a vector database and uses RAG to retrieve relevant facts from it. Using an advanced prompting technique, graph-of-thoughts (GoT), KRAGEN dynamically breaks a complex problem into smaller subproblems, solves each subproblem with the relevant knowledge retrieved through the RAG framework (which limits hallucinations), and finally consolidates the subproblem answers into a solution. KRAGEN's graph visualization lets the user interact with and evaluate the quality of the solution's GoT structure and logic.
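The pipeline described above (verbalizing knowledge-graph facts into a vector store, retrieving the relevant facts for each subproblem, and consolidating the grounded subanswers) can be sketched in Python. This is an illustrative toy, not KRAGEN's actual implementation: the bag-of-words "embedding", the hard-coded subproblem decomposition, and the example biomedical facts are all assumptions standing in for a real embedding model, an LLM-generated graph of thoughts, and a real knowledge graph.

```python
# Illustrative sketch of a KRAGEN-style pipeline; NOT the tool's actual code.
# Assumptions: knowledge-graph triples are verbalized as sentences, a
# bag-of-words vector stands in for a learned embedding, and the
# graph-of-thoughts decomposition is hard-coded where KRAGEN would have
# an LLM generate it dynamically.
import math
import re
from collections import Counter

FACTS = [
    "BRCA1 is associated with breast cancer",   # verbalized KG edges
    "Olaparib is a PARP inhibitor",
    "PARP inhibitors treat BRCA1-mutated tumors",
]

def embed(text):
    """Toy embedding: lowercase token counts (stand-in for a vector model)."""
    return Counter(re.findall(r"[a-z0-9-]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

INDEX = [(fact, embed(fact)) for fact in FACTS]  # stand-in vector database

def retrieve(query, k=1):
    """RAG step: return the k stored facts most similar to the query."""
    q = embed(query)
    ranked = sorted(INDEX, key=lambda item: -cosine(q, item[1]))
    return [fact for fact, _ in ranked[:k]]

def solve(question):
    """GoT-style solve: decompose, ground each subproblem via RAG, consolidate."""
    subproblems = [  # an LLM would generate this decomposition dynamically
        "Which gene is associated with breast cancer?",
        "What treats BRCA1-mutated tumors?",
    ]
    grounded = [retrieve(sp, k=1)[0] for sp in subproblems]
    return " ".join(grounded)  # an LLM would synthesize the final answer

print(solve("How might a BRCA1-linked breast cancer be treated?"))
```

Grounding each subproblem in retrieved facts, rather than answering the full question in one pass, is what constrains hallucination in this design: the consolidation step only sees statements that exist in the knowledge base.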
Availability and implementation: KRAGEN is deployed by running its custom Docker containers. KRAGEN is available as open-source from GitHub at: https://github.com/EpistasisLab/KRAGEN.
© The Author(s) 2024. Published by Oxford University Press.
Conflict of interest statement
None declared.
References
- Besta M, Blach N, Kubicek A. et al. Graph of thoughts: solving elaborate problems with large language models. Proc AAAI Conf AI 2023;38:17682–90.
- Brate R, Dang M-H, Hoppe F. et al. Improving language model predictions via prompts enriched with knowledge graphs. In: CEUR Workshop Proceedings, 2022. 10.5445/IR/1000151291.
- Ji Z, Lee N, Frieske R. et al. Survey of hallucination in natural language generation. ACM Comput Surv 2023;55:248. 10.1145/3571730.
- Kojima T, Gu SS, Reid M. et al. Large language models are zero-shot reasoners. In: Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS '22). Red Hook, NY, USA: Curran Associates Inc., 2022, pp. 22199–213.
- Lewis P, Perez E, Piktus A. et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Adv Neural Inf Process Syst 2020;33:9459–74.
