Large Language Models for Therapy Recommendations Across 3 Clinical Specialties: Comparative Study

Theresa Isabelle Wilhelm et al. J Med Internet Res. 2023 Oct 30;25:e49324. doi: 10.2196/49324.
Abstract

Background: As advancements in artificial intelligence (AI) continue, large language models (LLMs) have emerged as promising tools for generating medical information. Their rapid adaptation and potential benefits in health care require rigorous assessment in terms of the quality, accuracy, and safety of the generated information across diverse medical specialties.

Objective: This study aimed to evaluate the performance of 4 prominent LLMs, namely, Claude-instant-v1.0, GPT-3.5-Turbo, Command-xlarge-nightly, and Bloomz, in generating medical content spanning the clinical specialties of ophthalmology, orthopedics, and dermatology.

Methods: Three domain-specific physicians evaluated the AI-generated therapeutic recommendations for a diverse set of 60 diseases. The evaluation criteria involved the mDISCERN score, correctness, and potential harmfulness of the recommendations. ANOVA and pairwise t tests were used to explore discrepancies in content quality and safety across models and specialties. Additionally, using the capabilities of OpenAI's most advanced model, GPT-4, an automated evaluation of each model's responses to the diseases was performed using the same criteria and compared to the physicians' assessments through Pearson correlation analysis.
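A minimal sketch of the statistical comparison described above is given below. This is an illustrative example only, not the authors' analysis code; the file name, column names, and data layout (one row per model-specialty-disease rating, with a physician mDISCERN score and a GPT-4 auto-evaluation score) are assumptions.

# Illustrative sketch of the analysis: ANOVA across models, pairwise t tests,
# and Pearson correlation between physician and GPT-4 scores. Data layout is assumed.
from itertools import combinations

import pandas as pd
from scipy import stats

ratings = pd.read_csv("ratings.csv")  # hypothetical table of per-disease ratings

# One-way ANOVA: does the mean mDISCERN quality score differ across the 4 LLMs?
groups = [g["mdiscern"].values for _, g in ratings.groupby("model")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA across models: F={f_stat:.2f}, P={p_anova:.3g}")

# Pairwise t tests between models (unadjusted; correct for multiplicity as needed)
for m1, m2 in combinations(ratings["model"].unique(), 2):
    a = ratings.loc[ratings["model"] == m1, "mdiscern"]
    b = ratings.loc[ratings["model"] == m2, "mdiscern"]
    t, p = stats.ttest_ind(a, b)
    print(f"{m1} vs {m2}: t={t:.2f}, P={p:.3g}")

# Pearson correlation between physician scores and GPT-4's automated scores
r, p_corr = stats.pearsonr(ratings["mdiscern"], ratings["gpt4_mdiscern"])
print(f"Physician vs GPT-4 evaluation: r={r:.2f}, P={p_corr:.3g}")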

Results: Claude-instant-v1.0 emerged with the highest mean mDISCERN score (3.35, 95% CI 3.23-3.46). In contrast, Bloomz lagged with the lowest score (1.07, 95% CI 1.03-1.10). Our analysis revealed significant differences among the models in terms of quality (P<.001). With respect to reliability, the models showed strong contrasts in their falseness ratings, with variation both across models (P<.001) and specialties (P<.001). Distinct error patterns emerged, such as confusing diagnoses; providing vague, ambiguous advice; or omitting critical treatments, such as antibiotics for infectious diseases. Regarding potential harm, GPT-3.5-Turbo was found to be the safest, with the lowest harmfulness rating. All models lagged in detailing the risks associated with treatment procedures, explaining the effects of therapies on quality of life, and offering additional sources of information. Pearson correlation analysis underscored a substantial alignment between physician assessments and GPT-4's evaluations across all established criteria (P<.01).

Conclusions: This study, while comprehensive, was limited by the involvement of a select number of specialties and physician evaluators. The straightforward prompting strategy ("How to treat…") and the assessment benchmarks, initially conceptualized for human-authored content, might have potential gaps in capturing the nuances of AI-driven information. The LLMs evaluated showed a notable capability in generating valuable medical content; however, evident lapses in content quality and potential harm signal the need for further refinements. Given the dynamic landscape of LLMs, this study's findings emphasize the need for regular and methodical assessments, oversight, and fine-tuning of these AI tools to ensure they produce consistently trustworthy and clinically safe medical advice. Notably, the introduction of an auto-evaluation mechanism using GPT-4, as detailed in this study, provides a scalable, transferable method for domain-agnostic evaluations, extending beyond therapy recommendation assessments.
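The auto-evaluation mechanism mentioned in the conclusions can be sketched as a simple scoring loop over model outputs. The example below is an assumption-laden illustration, not the authors' implementation: the prompt wording, the JSON output format, and the example disease and recommendation are hypothetical, and it assumes the OpenAI Python client with an API key in the environment.

# Minimal sketch of a GPT-4-based auto-evaluation loop (illustrative only;
# prompt, output format, and parsing are assumptions, not the study's code).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def auto_evaluate(disease: str, recommendation: str) -> str:
    """Ask GPT-4 to rate one therapy recommendation on mDISCERN-style criteria."""
    prompt = (
        f"Disease: {disease}\n"
        f"Therapy recommendation:\n{recommendation}\n\n"
        "Rate this recommendation on each mDISCERN question (1-5) and state "
        "whether it contains false or potentially harmful advice. "
        "Answer as JSON with keys 'mdiscern', 'false', 'harmful'."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic scoring for reproducibility
    )
    return response.choices[0].message.content

# Example usage with a hypothetical model output
print(auto_evaluate("bacterial keratitis", "Apply warm compresses twice daily."))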

Keywords: ChatGPT; LLM; accuracy; artificial intelligence; chatbot; chatbots; dermatology; health information; large language models; medical advice; medical information; ophthalmology; orthopedic; orthopedics; quality; recommendation; recommendations; reliability; reliable; safety; therapy.

Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1. Study design for the cross-specialty evaluation of large language models on treatment recommendations.

Figure 2. Evaluation of the therapy recommendations by large language models (LLMs). (A) Mean mDISCERN scores separated by LLM and mDISCERN question. (B) Mean mDISCERN scores across all specialties (dermatology, ophthalmology, and orthopedics) and LLMs. Most responses clearly present more than one therapeutic option, whereas risks and additional sources of information were lacking. All error bars show 95% CIs of the mean.
