Arthrosc Sports Med Rehabil. 2024 Jun 25;6(5):100963. doi: 10.1016/j.asmr.2024.100963. eCollection 2024 Oct.

ChatGPT and Google Provide Mostly Excellent or Satisfactory Responses to the Most Frequently Asked Patient Questions Related to Rotator Cuff Repair


Martinus Megalla et al. Arthrosc Sports Med Rehabil.

Abstract

Purpose: To assess the differences in frequently asked questions (FAQs) and responses related to rotator cuff surgery between Google and ChatGPT.

Methods: Both Google and ChatGPT (version 3.5) were queried for the top 10 FAQs using the search term "rotator cuff repair." Questions were categorized according to Rothwell's classification. In addition to the questions and answers for each platform, the source of each answer was noted and assigned a category (academic, medical practice, etc.). Responses were graded as "excellent response not requiring clarification" (1), "satisfactory requiring minimal clarification" (2), "satisfactory requiring moderate clarification" (3), or "unsatisfactory requiring substantial clarification" (4).

Results: Overall, 30% of the questions that Google and ChatGPT deemed most frequently asked were similar. For questions from Google web search, most answers came from medical practices (40%). For ChatGPT, most answers came from academic sources (90%). For numerical questions, ChatGPT and Google provided similar responses to 30% of questions. For most questions, both Google and ChatGPT responses were rated either "excellent" or "satisfactory requiring minimal clarification." Google had 1 response rated satisfactory requiring moderate clarification, whereas ChatGPT had 2 responses rated unsatisfactory.

Conclusions: Both Google and ChatGPT offer mostly excellent or satisfactory responses to the most frequently asked questions regarding rotator cuff repair. However, ChatGPT may provide inaccurate or even fabricated answers and associated citations.

Clinical relevance: In general, the quality of online medical content is low. As artificial intelligence develops and becomes more widely used, it is important to assess the quality of the information patients are receiving from this technology.


Conflict of interest statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Fig 1
Grading of Google and ChatGPT responses to the most frequently asked questions regarding rotator cuff repair. Grading footnotes: (1) “excellent response not requiring clarification,” (2) “satisfactory requiring minimal clarification,” (3) “satisfactory requiring moderate clarification,” and (4) “unsatisfactory requiring substantial clarification.”

