Local Large Language Models for Complex Structured Tasks
- PMID: 38827047
- PMCID: PMC11141822
Abstract
This paper introduces an approach that combines the language reasoning capabilities of large language models (LLMs) with the benefits of local training to tackle complex language tasks. The authors demonstrate the approach by extracting structured condition codes from pathology reports. The proposed approach uses local, fine-tuned LLMs that respond to specific generative instructions and produce structured outputs. Over 150k uncurated surgical pathology reports containing gross descriptions, final diagnoses, and condition codes were used. Several model architectures were trained and evaluated, including LLaMA, BERT, and Longformer. The results show that the LLaMA-based models significantly outperform the BERT-style models across all evaluated metrics, and that LLaMA models performed especially well with large datasets, demonstrating their ability to handle complex, multi-label tasks. Overall, this work presents an effective approach for using LLMs to perform structured generative tasks on domain-specific language in the medical domain.
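To make the described setup concrete, the sketch below illustrates one way the instruction-style prompting and structured output described in the abstract might be formatted. This is not the authors' code: the prompt wording, the JSON output schema, and the helper names (`build_prompt`, `parse_codes`) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation) of prompting a local,
# fine-tuned LLM with a pathology report and parsing its structured output.
import json
from typing import List

# Hypothetical generative instruction; the actual instruction text used in the
# paper is not given in the abstract.
INSTRUCTION = (
    "Extract the condition codes for the following surgical pathology report. "
    "Return a JSON list of code strings."
)


def build_prompt(gross_description: str, final_diagnosis: str) -> str:
    """Format one report into a single instruction-style prompt."""
    return (
        f"### Instruction:\n{INSTRUCTION}\n\n"
        f"### Gross description:\n{gross_description}\n\n"
        f"### Final diagnosis:\n{final_diagnosis}\n\n"
        f"### Response:\n"
    )


def parse_codes(model_output: str) -> List[str]:
    """Parse generated text into a list of condition codes.

    Falls back to an empty list when the completion is not valid JSON,
    since a fine-tuned model can occasionally emit malformed output.
    """
    try:
        codes = json.loads(model_output.strip())
        return [str(c) for c in codes] if isinstance(codes, list) else []
    except json.JSONDecodeError:
        return []


if __name__ == "__main__":
    prompt = build_prompt(
        gross_description="Received in formalin, a 2.0 cm skin ellipse ...",
        final_diagnosis="Basal cell carcinoma, margins negative.",
    )
    print(prompt)
    print(parse_codes('["C44.91"]'))  # hypothetical ICD-style code
```

In practice, prompts in this format would be paired with the known condition codes to build the fine-tuning dataset, and the same format would be reused at inference time to generate codes for new reports.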
©2024 AMIA - All rights reserved.