NPJ Digit Med. 2024 Jul 23;7(1):190. doi: 10.1038/s41746-024-01185-7.

Hidden flaws behind expert-level accuracy of multimodal GPT-4 vision in medicine

Qiao Jin et al. NPJ Digit Med.

Abstract

Recent studies indicate that Generative Pre-trained Transformer 4 with Vision (GPT-4V) outperforms human physicians in medical challenge tasks. However, these evaluations primarily focused on the accuracy of multi-choice questions alone. Our study extends the current scope by conducting a comprehensive analysis of GPT-4V's rationales of image comprehension, recall of medical knowledge, and step-by-step multimodal reasoning when solving New England Journal of Medicine (NEJM) Image Challenges, an imaging quiz designed to test the knowledge and diagnostic capabilities of medical professionals. Evaluation results confirmed that GPT-4V performs comparably to human physicians in multi-choice accuracy (81.6% vs. 77.8%). GPT-4V also performs well on cases that physicians answer incorrectly, achieving over 78% accuracy. However, we discovered that GPT-4V frequently presents flawed rationales even in cases where it makes the correct final choice (35.5%), most prominently in image comprehension (27.2%). Despite GPT-4V's high accuracy on multi-choice questions, our findings emphasize the necessity for further in-depth evaluations of its rationales before integrating such multimodal AI models into clinical workflows.


Conflict of interest statement

The authors declare no competing non-financial interests but the following competing financial interests: R.S. receives royalties for patents or software licenses from iCAD, Philips, ScanMed, PingAn, Translation Holdings, and MGB. R.S. received research support from PingAn.

Figures

Fig. 1. Evaluation Procedure for GPT-4 with Vision (GPT-4V).
This figure illustrates the evaluation workflow for GPT-4V using 207 NEJM Image Challenges. The example instance is adapted from the New England Journal of Medicine, Xiaojing Tang and Lijun Sun, Encapsulating Peritoneal Sclerosis. Copyright © 2024 Massachusetts Medical Society. Reprinted with permission from Massachusetts Medical Society. a A medical student answered all questions and triaged them into specialties. b Nine physicians provided their answers to the questions in their specialty. c GPT-4V is prompted to answer challenge questions with a final choice and structured responses reflecting three specific capabilities. d The physicians then appraised the validity of each component of GPT-4V’s responses based on the ground-truth explanations.
Fig. 2. Evaluation results.
a Average multi-choice accuracies achieved by various models and individuals, segmented by question difficulty. b Confusion matrices showing the intersection of errors made by GPT-4V and human physicians. c Bar graphs representing the percentage of GPT-4V's rationales in each capability area rated as accurate by human physicians. ***p < 0.001; n.s., not significant.
