Using AI-generated suggestions from ChatGPT to optimize clinical decision support
- PMID: 37087108
- PMCID: PMC10280357
- DOI: 10.1093/jamia/ocad072
Abstract
Objective: To determine if ChatGPT can generate useful suggestions for improving clinical decision support (CDS) logic and to assess noninferiority compared to human-generated suggestions.
Methods: We supplied summaries of CDS logic to ChatGPT, an artificial intelligence (AI) tool for question answering that uses a large language model, and asked it to generate suggestions. We asked human clinician reviewers to review the AI-generated suggestions as well as human-generated suggestions for improving the same CDS alerts, and rate the suggestions for their usefulness, acceptance, relevance, understanding, workflow, bias, inversion, and redundancy.
Results: Five clinicians analyzed 36 AI-generated suggestions and 29 human-generated suggestions for 7 alerts. Of the 20 suggestions that scored highest in the survey, 9 were generated by ChatGPT. The AI-generated suggestions were found to offer unique perspectives and were rated as highly understandable and relevant, with moderate usefulness and low acceptance, bias, inversion, and redundancy.
Conclusion: AI-generated suggestions could be an important complement to expert review when optimizing CDS alerts: they can identify potential improvements to alert logic, support their implementation, and may even help experts formulate their own suggestions for CDS improvement. ChatGPT shows great potential for applying large language models and reinforcement learning from human feedback to improve CDS alert logic, and potentially other areas of medicine involving complex clinical logic, a key step in the development of an advanced learning health system.
Keywords: artificial intelligence; clinical decision support; large language model.
© The Author(s) 2023. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Conflict of interest statement
The authors do not have conflicts of interest related to this study.
Comment in
- Is ChatGPT worthy enough for provisioning clinical decision support? J Am Med Inform Assoc. 2025 Jan 1;32(1):258-259. doi: 10.1093/jamia/ocae282. PMID: 39499794.
