BI-RADS Category Assignments by GPT-3.5, GPT-4, and Google Bard: A Multilanguage Study
- PMID: 38687216
- PMCID: PMC11070611
- DOI: 10.1148/radiol.232133
Abstract
Background The performance of publicly available large language models (LLMs) remains unclear for complex clinical tasks. Purpose To evaluate the agreement between human readers and LLMs for Breast Imaging Reporting and Data System (BI-RADS) categories assigned based on breast imaging reports written in three languages and to assess the impact of discordant category assignments on clinical management. Materials and Methods This retrospective study included reports for women who underwent MRI, mammography, and/or US for breast cancer screening or diagnostic purposes at three referral centers. Reports with findings categorized as BI-RADS 1-5 and written in Italian, English, or Dutch were collected between January 2000 and October 2023. Board-certified breast radiologists and the LLMs GPT-3.5 and GPT-4 (OpenAI) and Bard, now called Gemini (Google), assigned BI-RADS categories using only the findings described by the original radiologists. Agreement between human readers and LLMs for BI-RADS categories was assessed using the Gwet agreement coefficient (AC1 value). Frequencies were calculated for changes in BI-RADS category assignments that would affect clinical management (ie, BI-RADS 0 vs BI-RADS 1 or 2 vs BI-RADS 3 vs BI-RADS 4 or 5) and compared using the McNemar test. Results Across 2400 reports, agreement between the original and reviewing radiologists was almost perfect (AC1 = 0.91), while agreement between the original radiologists and GPT-4, GPT-3.5, and Bard was moderate (AC1 = 0.52, 0.48, and 0.42, respectively). 
Across human readers and LLMs, differences were observed in the frequency of BI-RADS category upgrades or downgrades that would result in changed clinical management (118 of 2400 [4.9%] for human readers, 611 of 2400 [25.5%] for Bard, 573 of 2400 [23.9%] for GPT-3.5, and 435 of 2400 [18.1%] for GPT-4; P < .001) and that would negatively impact clinical management (37 of 2400 [1.5%] for human readers, 435 of 2400 [18.1%] for Bard, 344 of 2400 [14.3%] for GPT-3.5, and 255 of 2400 [10.6%] for GPT-4; P < .001). Conclusion LLMs achieved moderate agreement with human reader-assigned BI-RADS categories across reports written in three languages but also yielded a high percentage of discordant BI-RADS categories that would negatively impact clinical management. © RSNA, 2024 Supplemental material is available for this article.
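The study's primary agreement measure is Gwet's AC1, which corrects observed agreement for chance agreement estimated from category prevalences. A minimal two-rater sketch of the coefficient is below; this is an illustrative implementation, not the authors' code, and the rating lists are hypothetical nominal labels:

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 agreement coefficient for two raters, nominal categories."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    cats = sorted(set(ratings_a) | set(ratings_b))
    q = len(cats)
    # Observed agreement: fraction of items assigned the same category.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Average marginal proportion per category across both raters.
    counts = Counter(ratings_a) + Counter(ratings_b)
    pi = {k: counts[k] / (2 * n) for k in cats}
    # Gwet's chance agreement: (1 / (q - 1)) * sum_k pi_k * (1 - pi_k).
    pe = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (pa - pe) / (1 - pe)

# Hypothetical example: two raters assigning BI-RADS-like categories.
print(gwet_ac1([1, 2, 3, 4, 5, 1], [1, 2, 3, 4, 4, 1]))
```

Unlike Cohen's kappa, AC1 remains stable when one category dominates the sample (a common situation in screening cohorts, where most reports are BI-RADS 1 or 2), which is a typical reason for choosing it in agreement studies.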
Figures


![Sankey plots of BI-RADS clinical management category changes](https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4f48/11070611/65eec27ebed0/radiol.232133.fig2.gif)

Figure: Sankey plots showing changes in Breast Imaging Reporting and Data System (BI-RADS) clinical management categories between human readers and between human readers and large language models (LLMs). Human-human agreement was assessed between the original radiologists who wrote the breast imaging reports (Human 1) and the radiologists who reviewed the findings section of the reports (Human 2). Human-LLM agreement was assessed between the original breast imaging reports and the outputs from three LLMs (Google Bard [27], GPT-3.5 [25], and GPT-4 [26]) provided with the findings section of the report. The proportion of disagreements between the original reporting radiologists and the radiologists who reviewed the findings section of the reports was lower (P < .001) than the proportions of disagreements between the original reporting radiologists and the LLMs.
References
- Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med 2023;29(8):1930-1940.
- Kitamura FC. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology 2023;307(2):e230171.
- Bhayana R, Krishna S, Bleakney RR. Performance of ChatGPT on a radiology board-style examination: insights into current strengths and limitations. Radiology 2023;307(5):e230582.
- Haupt CE, Marks M. AI-generated medical advice: GPT and beyond. JAMA 2023;329(16):1349-1350.
- Lee P, Bubeck S, Petro J. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. N Engl J Med 2023;388(13):1233-1239.