Radiology. 2024 Apr;311(1):e232133. doi: 10.1148/radiol.232133.

BI-RADS Category Assignments by GPT-3.5, GPT-4, and Google Bard: A Multilanguage Study


Andrea Cozzi et al. Radiology. 2024 Apr.

Abstract

Background: The performance of publicly available large language models (LLMs) remains unclear for complex clinical tasks.

Purpose: To evaluate the agreement between human readers and LLMs for Breast Imaging Reporting and Data System (BI-RADS) categories assigned based on breast imaging reports written in three languages and to assess the impact of discordant category assignments on clinical management.

Materials and Methods: This retrospective study included reports for women who underwent MRI, mammography, and/or US for breast cancer screening or diagnostic purposes at three referral centers. Reports with findings categorized as BI-RADS 1-5 and written in Italian, English, or Dutch were collected between January 2000 and October 2023. Board-certified breast radiologists and the LLMs GPT-3.5 and GPT-4 (OpenAI) and Bard, now called Gemini (Google), assigned BI-RADS categories using only the findings described by the original radiologists. Agreement between human readers and LLMs for BI-RADS categories was assessed using the Gwet agreement coefficient (AC1 value). Frequencies were calculated for changes in BI-RADS category assignments that would affect clinical management (ie, BI-RADS 0 vs BI-RADS 1 or 2 vs BI-RADS 3 vs BI-RADS 4 or 5) and compared using the McNemar test.

Results: Across 2400 reports, agreement between the original and reviewing radiologists was almost perfect (AC1 = 0.91), while agreement between the original radiologists and GPT-4, GPT-3.5, and Bard was moderate (AC1 = 0.52, 0.48, and 0.42, respectively). Across human readers and LLMs, differences were observed in the frequency of BI-RADS category upgrades or downgrades that would result in changed clinical management (118 of 2400 [4.9%] for human readers, 611 of 2400 [25.5%] for Bard, 573 of 2400 [23.9%] for GPT-3.5, and 435 of 2400 [18.1%] for GPT-4; P < .001) and that would negatively impact clinical management (37 of 2400 [1.5%] for human readers, 435 of 2400 [18.1%] for Bard, 344 of 2400 [14.3%] for GPT-3.5, and 255 of 2400 [10.6%] for GPT-4; P < .001).

Conclusion: LLMs achieved moderate agreement with human reader-assigned BI-RADS categories across reports written in three languages but also yielded a high percentage of discordant BI-RADS categories that would negatively impact clinical management.

© RSNA, 2024. Supplemental material is available for this article.
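The Gwet agreement coefficient (AC1) reported above has a simple closed form for two raters assigning categorical labels. As a minimal sketch of how such values could be computed (the function name and example labels are illustrative, not taken from the study):

```python
from collections import Counter

def gwet_ac1(rater_a, rater_b):
    """Gwet's first-order agreement coefficient (AC1) for two raters
    assigning categorical labels (e.g. BI-RADS categories)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    k = len(categories)
    # Observed agreement: fraction of subjects given identical labels.
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: based on the mean proportion pi_q of subjects
    # that the two raters place in each category q.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    pi = {q: (counts_a[q] + counts_b[q]) / (2 * n) for q in categories}
    pe = sum(pi[q] * (1 - pi[q]) for q in categories) / (k - 1)
    return (pa - pe) / (1 - pe)
```

Unlike Cohen's kappa, AC1 stays well behaved when one category dominates (as BI-RADS 1 and 2 typically do in screening cohorts), which is presumably why the authors chose it.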


Conflict of interest statement

Disclosures of conflicts of interest: A.C. No relevant relationships. K.P. Grants or contracts from the Research and Innovation Framework Programme, FET Open, Anniversary Fund of the Oesterreichische Nationalbank, Vienna Science and Technology Fund, Memorial Sloan Kettering Cancer Center, and Breast Cancer Research Foundation; unpaid consultant for Genentech; consulting fees from Merantix, AURA Health Technologies, and Guerbet; payment or honoraria for lectures, presentations, speakers bureaus, manuscript writing, or educational events from the European Society of Breast Imaging, Bayer, Siemens Healthineers, International Diagnostic Course Davos, Olea Medical, and Roche; support for attending meetings and/or travel from the European Society of Breast Imaging; participation on a data and safety monitoring board or advisory board for Bayer and Guerbet; and institution (Memorial Sloan Kettering Cancer Center) has institutional financial interests relative to Grail. A.H. No relevant relationships. T.Z. No relevant relationships. L.B. No relevant relationships. R.L.G. No relevant relationships. B.C. No relevant relationships. M.C. No relevant relationships. S.R. No relevant relationships. F.D.G. Institution (Imaging Institute of Southern Switzerland) is a Siemens Healthineers reference center for research. R.M.M. 
Grants or contracts from the Dutch Cancer Society, Europees Fonds voor Regionale Ontwikkeling Programma Oost-Nederland, Horizon Europe, European Research Council, Dutch Research Council, Health Holland, Siemens Healthineers, Bayer, ScreenPoint Medical, Beckton Dickinson, PA Imaging, Lunit, and Koning Health; royalties or licenses from Elsevier; consulting fees from Siemens Healthineers, Bayer, ScreenPoint Medical, Beckton Dickinson, PA Imaging, Lunit, Koning Health, and Guerbet; participation on a data and safety monitoring board or advisory board for the SMALL trial; member of the European Society of Breast Imaging executive board; member of the European Society of Radiology Research Committee; member of the editorial board for European Journal of Radiology; member of the Dutch Breast Cancer Research Group; and associate editor for Radiology. S.S. Consulting fees from Arterys; payment or honoraria for lectures, presentations, speakers bureaus, manuscript writing, or educational events from GE HealthCare; and support for attending meetings and/or travel from Bracco.

Figures

Graphical abstract

Figure 1:
Study flowchart. Reports from center 3 obtained from the study by Zhang et al (23). BI-RADS = Breast Imaging Reporting and Data System, CE-MRI = contrast-enhanced MRI, MG = mammography.
Figure 2:
Sankey plots showing changes in Breast Imaging Reporting and Data System (BI-RADS) clinical management categories between human readers and between human readers and large language models (LLMs). Human-human agreement was assessed between the original radiologists who wrote the breast imaging reports (Human 1) and the radiologists who reviewed the findings section of the reports (Human 2). Human-LLM agreement was assessed between the original breast imaging reports and the outputs from three LLMs (Google Bard [27], GPT-3.5 [25], and GPT-4 [26]) provided with the findings section of the report. The proportion of disagreements between the original reporting radiologists and the radiologists who reviewed the findings section of the reports was lower (P < .001) than the proportions of disagreements between the original reporting radiologists and the LLMs.
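The P < .001 comparisons of disagreement proportions were made with the McNemar test (per the Methods). For paired outcomes it reduces to a binomial test on the discordant pairs; a minimal sketch of the exact two-sided form (function name and inputs are illustrative, not from the study):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) two-sided McNemar test p-value.

    b, c: counts of discordant pairs, i.e. cases where the two paired
    readings disagree in one direction vs the other. Under the null,
    each discordant pair falls either way with probability 0.5.
    """
    n = b + c
    k = min(b, c)
    # Two-sided p-value: twice the lower binomial tail, capped at 1.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(p, 1.0)
```

For large discordant counts such as those in the Results, an equivalent chi-square approximation (or a library routine such as statsmodels' `mcnemar`) would typically be used instead of the exact tail sum.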

References

    1. Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med 2023;29(8):1930-1940.
    2. Kitamura FC. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology 2023;307(2):e230171.
    3. Bhayana R, Krishna S, Bleakney RR. Performance of ChatGPT on a radiology board-style examination: insights into current strengths and limitations. Radiology 2023;307(5):e230582.
    4. Haupt CE, Marks M. AI-generated medical advice—GPT and beyond. JAMA 2023;329(16):1349-1350.
    5. Lee P, Bubeck S, Petro J. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. N Engl J Med 2023;388(13):1233-1239.
