Physician vs. AI-generated messages in urology: evaluation of accuracy, completeness, and preference by patients and physicians
- PMID: 39729119
- PMCID: PMC11680670
- DOI: 10.1007/s00345-024-05399-y
Abstract
Purpose: To evaluate the accuracy, comprehensiveness, empathetic tone, and patient preference for AI and urologist responses to patient messages concerning common BPH questions across phases of care.
Methods: Cross-sectional study evaluating responses to 20 BPH-related questions generated by 2 AI chatbots and 4 urologists in a simulated clinical messaging environment without direct patient interaction. Accuracy, completeness, and empathetic tone of responses were assessed by subject matter experts using Likert scales; preferences and perceptions of authorship (chatbot vs. human) were rated by non-medical evaluators.
Results: Five non-medical volunteers independently evaluated, ranked, and inferred the source for 120 responses (n = 600 total). In the volunteer evaluations, the mean (SD) empathy score for chatbots, 3.0 (1.4) (moderately empathetic), was significantly higher than for urologists, 2.1 (1.1) (slightly empathetic) (p < 0.001); the mean (SD) preference ranking for chatbots, 2.6 (1.6), was significantly better (lower) than the urologist ranking, 3.9 (1.6) (p < 0.001). Two subject matter experts (SMEs) independently evaluated 120 responses each (answers to 20 questions from 4 urologists and 2 chatbots, n = 240 total). In the SME evaluations, the mean (SD) accuracy score for chatbots, 4.5 (1.1) (nearly all correct), was not significantly different from that of urologists, 4.6 (1.2). The mean (SD) completeness score for chatbots, 2.4 (0.8) (comprehensive), was significantly higher than that of urologists, 1.6 (0.6) (adequate) (p < 0.001).
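Note: the abstract does not state which statistical test produced the reported p-values. Purely as an illustrative sketch (not the authors' method), one plausible way to compare ordinal Likert ratings between two independent groups is a Mann-Whitney U test; the ratings below are placeholder values, not the study's data.

from scipy.stats import mannwhitneyu

# Hypothetical 1-5 Likert empathy ratings (placeholder values, NOT study data)
chatbot_ratings = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
urologist_ratings = [2, 1, 3, 2, 2, 3, 1, 2, 2, 3]

# Two-sided Mann-Whitney U test for a difference between the two groups
stat, p_value = mannwhitneyu(chatbot_ratings, urologist_ratings, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")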
Conclusion: Answers to patient BPH messages generated by chatbots were evaluated by experts as equally accurate and more complete than urologist answers. Non-medical volunteers preferred chatbot-generated messages and considered them more empathetic than answers generated by urologists.
Keywords: Artificial intelligence (AI); Benign prostatic hyperplasia (BPH); Care experience; ChatGPT; Chatbot; Large language models (LLMs); Patient communication; Patient messages; Physician experience; Sandbox.
© 2024. The Author(s).
Conflict of interest statement
Declarations. Competing interests: The authors declare no competing interests.
Comment in
- Comment on "Physician vs. AI-generated messages in urology: evaluation of accuracy, completeness, and preference by patients and physicians". World J Urol. 2025 Jan 22;43(1):83. doi: 10.1007/s00345-025-05448-0. PMID: 39841262. No abstract available.
- Chatbot's performance in answering medical questions: the effects of prompt design, customization settings, and session context. World J Urol. 2025 Jan 27;43(1):88. doi: 10.1007/s00345-025-05449-z. PMID: 39869150. No abstract available.
- Optimizing AI-assisted communication in urology: potential and challenges. World J Urol. 2025 Feb 14;43(1):122. doi: 10.1007/s00345-025-05508-5. PMID: 39951154. No abstract available.
- Letter to the Editor on "Physician vs. AI-generated messages in urology: evaluation of accuracy, completeness, and preference by patients and physicians". World J Urol. 2025 May 6;43(1):272. doi: 10.1007/s00345-025-05587-4. PMID: 40327130. No abstract available.