Review
Ann Nucl Med. 2024 Nov;38(11):853-864. doi: 10.1007/s12149-024-01981-x. Epub 2024 Sep 25.

Generative AI and large language models in nuclear medicine: current status and future prospects

Kenji Hirata et al. Ann Nucl Med. 2024 Nov.

Abstract

This review explores the potential applications of Large Language Models (LLMs) in nuclear medicine, especially in examinations such as PET and SPECT, and surveys recent advancements in both fields. Despite the rapid adoption of LLMs in various medical specialties, their integration into nuclear medicine has not yet been sufficiently explored. We first discuss the latest developments in nuclear medicine, including new radiopharmaceuticals, imaging techniques, and clinical applications. We then analyze how LLMs are being utilized in radiology, particularly in report generation, image interpretation, and medical education. We highlight the potential of LLMs to enhance nuclear medicine practices, such as improving report structuring, assisting in diagnosis, and facilitating research. However, challenges remain, including the need for improved reliability, explainability, and bias reduction in LLMs. The review also addresses the ethical considerations and potential limitations of AI in healthcare. In conclusion, LLMs have significant potential to transform existing frameworks in nuclear medicine, making this a critical area for future research and development.

Keywords: Education; Generative AI; Large language model; Nuclear medicine; PET; Report generation; Report structuring; SPECT.

Conflict of interest statement

Kenji Hirata has received research funding from GE HealthCare Japan.

Figures

Fig. 1
Adapted from Tamaki et al. 2023 [22]. Sequential whole-body FDG-PET scans (3 min each) performed about 60 min post-FDG injection in a patient with lung cancer and pulmonary metastatic lesions. The scans demonstrated strong and sustained uptake in both the primary lung cancer and liver metastases, while uptake changes due to motion were observed in the ureter and small intestine. This distinction helps in differentiating between pathological and non-pathological abdominal accumulations
Fig. 2
Adapted and modified from Gideonse et al. 2024 [30]. The FDG-PET/CT images of three patients who were diagnosed with immune-related adverse events (irAE) following immunotherapy. A Thyroiditis, B pneumonitis, and C colitis
Fig. 3
Structuring an FDG-PET/CT report using an LLM. This experiment, conducted with ChatGPT-4o on August 31st, demonstrates the process of structuring reports using LLMs. The reverse conversion, from structured reports back into free text, is also possible
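As a rough illustration of this kind of report structuring, the short Python sketch below sends a free-text FDG-PET/CT report to a chat-based LLM and asks for a structured (JSON) version. The prompt wording, field names, and model choice are illustrative assumptions, not the setup used in the figure.

    # Minimal sketch of LLM-based report structuring (illustrative only;
    # the prompt, model name, and JSON fields are assumptions, not the
    # authors' actual configuration).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    free_text_report = (
        "FDG PET/CT: Intense uptake in a right upper lobe mass (SUVmax 12.3). "
        "Two hypermetabolic liver lesions consistent with metastases. "
        "No other abnormal uptake."
    )

    prompt = (
        "Convert the following FDG-PET/CT report into JSON with the keys "
        "'primary_lesion', 'metastases', and 'other_findings':\n\n" + free_text_report
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model could be substituted
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)  # structured (JSON) version of the report

The same kind of call, given a structured report and an instruction to rewrite it as narrative findings, would perform the reverse conversion mentioned in the caption.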
Fig. 4
Adapted from Nakaura et al. 2024 [1]. Conceptual diagram illustrating the deep learning process. Data from different domains, such as music, images, and text, are taken as input, and outputs can likewise be produced in a variety of domains
Fig. 5
Simulation of tumor PET images. When evaluating treatment effects using baseline and follow-up scans, an image-based method involves aligning the images and performing subtraction. In cases with no actual change, a successful alignment will result in the tumor signal being cleanly removed. However, if there is misregistration between the two images, residual signal may persist, potentially leading to an inaccurate assessment of treatment effects. On the other hand, translating from the image domain to the language domain eliminates concerns about misregistration, enabling a more accurate and fair comparison of crucial information
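As a toy numerical illustration of the misregistration problem described above, the following Python sketch (array sizes, uptake values, and the two-pixel shift are arbitrary assumptions) shows how perfect alignment cancels an unchanged tumor signal, while a small registration error leaves a spurious residual in the subtraction image.

    # Toy illustration of image-domain subtraction with and without misregistration
    # (all values are arbitrary assumptions for demonstration).
    import numpy as np

    baseline = np.zeros((64, 64))
    baseline[30:34, 30:34] = 10.0                      # simulated tumor uptake

    followup_aligned = baseline.copy()                 # no true change, perfect alignment
    followup_shifted = np.roll(baseline, 2, axis=0)    # same image, 2-pixel misregistration

    # With perfect alignment the difference vanishes; with misregistration a
    # residual signal remains and could be misread as a treatment-related change.
    print(np.abs(followup_aligned - baseline).max())   # 0.0
    print(np.abs(followup_shifted - baseline).max())   # 10.0 (spurious residual)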

References

    1. Nakaura T, Ito R, Ueda D, Nozaki T, Fushimi Y, Matsui Y, et al. The impact of large language models on radiology: a guide for radiologists on the latest innovations in AI. Jpn J Radiol. 2024;42:685–96.
    2. Soleimani M, Seyyedi N, Ayyoubzadeh SM, Kalhori SRN, Keshavarz H. Practical evaluation of ChatGPT performance for radiology report generation. Acad Radiol. 2024. Available from: https://pubmed.ncbi.nlm.nih.gov/39142976/
    3. Nakaura T, Yoshida N, Kobayashi N, Shiraishi K, Nagayama Y, Uetani H, et al. Preliminary assessment of automated radiology report generation with generative pre-trained transformers: comparing results to radiologist-generated reports. Jpn J Radiol. 2024;42:190–200.
    4. Nakaura T, Hirai T. Response to Letter to the Editor from Partha Pratim Ray: "Integrating AI in radiology: insights from GPT-generated reports and multimodal LLM performance on European Board of Radiology examinations." Jpn J Radiol. 2024. Available from: https://pubmed.ncbi.nlm.nih.gov/39002023/
    5. Bhayana R, Biswas S, Cook TS, Kim W, Kitamura FC, Gichoya J, et al. From bench to bedside with large language models: AJR Expert Panel Narrative Review. AJR Am J Roentgenol. 2024. Available from: https://pubmed.ncbi.nlm.nih.gov/38598354/
