Using GPT-4 for LI-RADS feature extraction and categorization with multilingual free-text reports
- PMID: 38651924
- DOI: 10.1111/liv.15891
Abstract
Background and aims: The Liver Imaging Reporting and Data System (LI-RADS) offers a standardized approach for imaging hepatocellular carcinoma. However, the diverse styles and structures of radiology reports complicate automatic data extraction. Large language models hold the potential for structured data extraction from free-text reports. Our objective was to evaluate the performance of Generative Pre-trained Transformer (GPT)-4 in extracting LI-RADS features and categories from free-text liver magnetic resonance imaging (MRI) reports.
Methods: Three radiologists generated 160 fictitious free-text liver MRI reports written in Korean and English, simulating real-world practice. Of these, 20 were used for prompt engineering and 140 formed the internal test cohort. Seventy-two genuine reports, authored by 17 radiologists, were collected and de-identified to form the external test cohort. LI-RADS features were extracted using GPT-4, and a Python script calculated the LI-RADS categories from the extracted features (a sketch of this deterministic step is given below). Accuracies in each test cohort were compared.
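The authors' categorization script is not reproduced in this record, but the deterministic step it describes can be sketched. The following minimal Python example, with a hypothetical function name and signature, assumes GPT-4 has already returned the major features (size, nonrim arterial phase hyperenhancement, nonperipheral 'washout', enhancing 'capsule', threshold growth) as structured values and that LR-1, LR-2, LR-M and LR-TIV criteria are handled upstream; it covers only the LR-3/LR-4/LR-5 branch of the LI-RADS v2018 diagnostic table and is not the authors' code.

# Minimal sketch of the deterministic categorization step (not the authors' script).
# Assumes GPT-4 has already extracted the major features as structured values and
# that LR-1/LR-2/LR-M/LR-TIV criteria were excluded upstream.

def lirads_category(size_mm: float, nonrim_aphe: bool, washout: bool,
                    capsule: bool, threshold_growth: bool) -> str:
    """Assign LR-3/LR-4/LR-5 per the LI-RADS v2018 diagnostic table."""
    n_extra = sum([washout, capsule, threshold_growth])
    if not nonrim_aphe:
        if n_extra >= 2:
            return "LR-4"
        if n_extra == 1:
            return "LR-4" if size_mm >= 20 else "LR-3"
        return "LR-3"
    # Nonrim APHE present
    if size_mm < 10:
        return "LR-3" if n_extra == 0 else "LR-4"
    if size_mm < 20:
        if n_extra == 0:
            return "LR-3"
        if n_extra == 1:
            # LR-5 only if the single additional feature is washout or threshold
            # growth; enhancing 'capsule' alone yields LR-4.
            return "LR-5" if (washout or threshold_growth) else "LR-4"
        return "LR-5"
    # >= 20 mm with nonrim APHE
    return "LR-4" if n_extra == 0 else "LR-5"

# Example: a 15 mm observation with nonrim APHE and nonperipheral washout -> "LR-5"
print(lirads_category(15, True, True, False, False))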
Results: In the external test cohort, the accuracy for extracting the major LI-RADS features (size, nonrim arterial phase hyperenhancement, nonperipheral 'washout', enhancing 'capsule' and threshold growth) ranged from .92 to .99. For the remaining LI-RADS features, accuracy ranged from .86 to .97. For the LI-RADS category, the model showed an accuracy of .85 (95% CI: .76, .93).
Conclusions: GPT-4 shows promise in extracting LI-RADS features, yet further refinement of its prompting strategy and advancements in its neural network architecture are crucial for reliable use in processing complex real-world MRI reports.
Keywords: GPT‐4; LI‐RADS; large language model; natural language processing; structured report.
© 2024 The Authors. Liver International published by John Wiley & Sons Ltd.