Review

Eur Urol Open Sci. 2024 Sep 16;69:51-62. doi: 10.1016/j.euros.2024.08.015. eCollection 2024 Nov.

ChatGPT as a Clinical Decision Maker for Urolithiasis: Compliance with the Current European Association of Urology Guidelines

Ali Talyshinskii et al.

Abstract

Background and objective: Generative artificial intelligence models are among the most promising and widely used tools in health care. This review investigates GPT-4's answers to decision-making questions regarding the diagnosis and treatment of urolithiasis across several clinical settings and their correspondence to the current European Association of Urology (EAU) guidelines.

Methods: In March 2024, the GPT-4 model was asked 11 questions, each containing a brief description of a patient with urolithiasis. The questions were grouped according to the step of urolithiasis care they addressed: diagnosis, urgent care, scheduled intervention, and metaphylaxis. Once responses were received, experienced urologists assessed their compliance with the current EAU guidelines.
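The abstract does not state whether the questions were submitted through the ChatGPT web interface or programmatically. As an illustration only, the following is a minimal sketch of how such clinical vignettes could be sent to GPT-4, assuming the OpenAI Python client; the vignette text and the system prompt are hypothetical examples, not items from the study.

```python
# Minimal sketch, assuming the OpenAI Python client (pip install openai) and an
# API key available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical vignette for illustration; not one of the 11 study questions.
vignette = (
    "A 45-year-old patient presents with acute flank pain. "
    "Non-contrast CT shows a 6 mm stone in the distal left ureter. "
    "What are the recommended next diagnostic and treatment steps?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a clinical decision support assistant for urology."},
        {"role": "user", "content": vignette},
    ],
)

# The free-text answer would then be reviewed by urologists and scored for
# compliance with the current EAU urolithiasis guidelines.
print(response.choices[0].message.content)
```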

Key findings and limitations: Although all responses contained information consistent with the EAU guidelines, six of the 11 answers omitted guideline-recommended elements and eight of the 11 contained incorrect data. GPT-4 is relatively safe for the initial diagnostic workup of patients suspected of having urinary tract stones and for treatment planning; however, its grasp of the nuances of metaphylaxis leaves much to be desired and falls far short of the recommendations given in the EAU guidelines. Moreover, GPT-4's knowledge of treatment strategies and algorithms is not always aligned with the EAU guidelines.

Conclusions and clinical implications: Although GPT-4 can answer questions from patients with urolithiasis well, the specificity of questions posed by urologists is demanding for its current version; its output must be interpreted carefully, and further efforts at improvement are needed. While it handles some aspects of diagnostics more accurately, it struggles with surgical planning and with algorithms that follow the EAU guidelines.

Patient summary: The generative artificial intelligence (AI) model GPT-4 can answer urology-related questions, but its responses lack detail. Although some aspects of diagnostics are accurate, its knowledge of surgical planning is not in line with the European Association of Urology guidelines. Future work should focus on enhancing the accuracy, reliability, and clinical relevance of AI tools in urology.

Keywords: Clinical decision; Diagnosis; Generative pretrained transformer; Treatment; Urolithiasis.



