Review
. 2024 Mar 6;30(2):80-90.
doi: 10.4274/dir.2023.232417. Epub 2023 Oct 3.

Large language models in radiology: fundamentals, applications, ethical considerations, risks, and future directions

Tugba Akinci D'Antonoli et al. Diagn Interv Radiol. 2024.

Abstract

With the advent of large language models (LLMs), the artificial intelligence revolution in medicine and radiology is now more tangible than ever. Every day, an increasingly large number of articles are published that utilize LLMs in radiology. To adopt and safely implement this new technology in the field, radiologists should be familiar with its key concepts, understand at least the technical basics, and be aware of the potential risks and ethical considerations that come with it. In this review article, the authors provide an overview of the LLMs that might be relevant to the radiology community and include a brief discussion of their short history, technical basics, ChatGPT, prompt engineering, potential applications in medicine and radiology, advantages, disadvantages and risks, ethical and regulatory considerations, and future directions.

Keywords: ChatGPT; large language models; artificial intelligence; deep learning; natural language processing.


Conflict of interest statement


F.V.: none related to this study; received support to attend meetings from Bracco Imaging S.r.l. and GE Healthcare. M.E.K.: support for meeting attendance from Bayer. Ro.C.: support for attending meetings from Bracco and Bayer; research collaboration with Siemens Healthcare; co-funding by the European Union - FESR or FSE, PON Research and Innovation 2014-2020 - DM 1062/2021. Burak Koçak, MD, is a Section Editor of Diagnostic and Interventional Radiology; he had no involvement in the peer review of this article and no access to information regarding its peer review. The other authors have nothing to disclose.

Figures

Figure 1
Number of publications on language models in medical publications (green line) and medical imaging (yellow line), including radiology and nuclear medicine. Search date: July 20, 2023; source: PubMed.
Figure 2
Technical developmental stages of language models.
Figure 3
Key concepts in LLMs. Tokenization is the process of splitting text into smaller units (i.e., tokens) that language models can process. Embedding is the mathematical representation of data (e.g., the vector representation of a word). The attention mechanism allows models to focus on the most relevant parts of the input. Pre-training is the initial training of a model on broad data so that it can serve many different tasks without being re-trained from scratch. Fine-tuning is the adjustment of a pre-trained model to improve performance on domain-specific tasks. Reinforcement learning from human feedback is a machine learning approach that combines reinforcement learning techniques with human guidance. LLMs, large language models.
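The attention mechanism named in this caption can be illustrated with a minimal sketch. The function below is a plain-Python toy version of scaled dot-product attention, the core operation inside transformer attention; the vectors used are invented for illustration and do not come from the article.

```python
import math

def softmax(xs):
    # Numerically stable softmax: exponentiate shifted scores, then normalize.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(queries, keys, values):
    # For each query: score every key (dot product scaled by sqrt(d)),
    # turn the scores into weights with softmax, and return the
    # weighted average of the value vectors.
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        outputs.append([
            sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))
        ])
    return outputs
```

A query that closely matches one key puts nearly all of its attention weight on that key's value, which is how the model "focuses on certain parts of the input."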
Figure 4
Architecture of transformers. The encoder and decoder are greatly simplified in the figure; both normally include attention mechanisms, feed-forward neural networks, residual connections, and normalization layers. Transformers stack multiple encoder and decoder layers. Nx, number of encoder and decoder layers.
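The residual connections and normalization layers mentioned in this caption combine into the "Add & Norm" pattern of a transformer sublayer. The sketch below is an illustrative toy, not the article's figure or a real implementation: the ReLU-only feed-forward stands in for a learned network, an assumption made to keep the example self-contained.

```python
import math

def layer_norm(x, eps=1e-5):
    # Normalize a vector to zero mean and unit variance
    # (layer normalization without learned scale/shift parameters).
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def feed_forward(x):
    # Toy position-wise feed-forward network: a bare ReLU,
    # standing in for the learned two-layer network in a real model.
    return [max(0.0, v) for v in x]

def encoder_sublayer(x, sublayer):
    # Residual connection (add the sublayer output back to its input)
    # followed by layer normalization: the "Add & Norm" block.
    return layer_norm([a + b for a, b in zip(x, sublayer(x))])
```

In a real transformer this block wraps both the attention sublayer and the feed-forward sublayer, and Nx such layers are stacked in the encoder and decoder.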
Figure 5
Tokenization example. A 10-word sentence with one punctuation mark is tokenized into 14 tokens, as shown in the upper panel. The bottom panel shows the token identifiers unique to each token. Generated with OpenAI's Tokenizer platform (https://platform.openai.com/tokenizer).
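As a rough illustration of the tokenization shown in the figure, the toy tokenizer below splits text into word and punctuation tokens and assigns each distinct token an integer identifier. Real LLM tokenizers, including OpenAI's, use byte-pair encoding over subwords, so their token counts differ from this sketch; the example sentence is invented for illustration and is not the one in the figure.

```python
import re

def toy_tokenize(text):
    # Split text into word tokens and single punctuation tokens.
    # (A simplification: real tokenizers operate on subword units.)
    return re.findall(r"\w+|[^\w\s]", text)

def toy_token_ids(tokens):
    # Assign each distinct token a stable integer identifier,
    # analogous to the token IDs in the bottom panel of the figure.
    vocab = {}
    ids = []
    for t in tokens:
        if t not in vocab:
            vocab[t] = len(vocab)
        ids.append(vocab[t])
    return ids

sentence = "Large language models are transforming radiology research today, clearly."
tokens = toy_tokenize(sentence)   # 11 tokens: 9 words + comma + period + "clearly" split out
ids = toy_token_ids(tokens)
```

Because this toy splits only on whitespace and punctuation, it yields 11 tokens here, whereas a subword tokenizer can produce more tokens than words, as in the figure's 10-word, 14-token example.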
