Expert evaluation of ChatGPT accuracy and reliability for basic celiac disease frequently asked questions
- PMID: 40813612
- PMCID: PMC12354742
- DOI: 10.1038/s41598-025-15898-6
Abstract
The role of artificial intelligence (AI) in providing information on celiac disease (CD) remains understudied. This study evaluated the accuracy and reliability of ChatGPT-3.5, the dominant publicly accessible version during the study period, in generating responses to 20 basic CD-related queries, with the aim of establishing a benchmark for AI-assisted CD education. The accuracy of ChatGPT's responses to 20 frequently asked questions (FAQs) was assessed by two independent experts using a Likert scale, followed by categorization into CD management domains. Inter-rater reliability (agreement between experts) was determined through cross-tabulation, Cohen's kappa, and Wilcoxon signed-rank tests. Intra-rater reliability (agreement within the same expert) was evaluated using the Friedman test with post hoc comparisons. ChatGPT demonstrated high accuracy in responding to CD FAQs, with expert ratings predominantly ranging from 4 to 5. While overall performance was strong, responses on management strategies outperformed those on disease etiology. Inter-rater reliability analysis revealed fair agreement between the two experts (κ = 0.22, p = 0.026). Although both experts consistently assigned high scores across the CD management categories, subtle discrepancies emerged in specific instances. Intra-rater reliability analysis indicated high scoring consistency for one expert (Friedman test, p = 0.113) and some variability for the other (Friedman test, p < 0.001). ChatGPT shows potential as a reliable source of information for CD patients, particularly in the domain of disease management.
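The statistical workflow the abstract names can be illustrated with a short, self-contained sketch. This is not the authors' code: the ratings below are randomly generated stand-ins for the two experts' 1-5 Likert scores on the 20 FAQ responses, and the three scoring "rounds" used for the Friedman test are hypothetical. Only standard scipy and scikit-learn functions are used.

```python
# A minimal sketch of the reliability analyses named in the abstract, run on
# randomly generated stand-in data (NOT the study's ratings). Requires
# numpy, scipy, and scikit-learn.
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert ratings from two experts for 20 FAQ responses;
# values are drawn from {4, 5} to mirror the predominantly high scores reported.
expert_a = rng.integers(4, 6, size=20)
expert_b = rng.integers(4, 6, size=20)

# Inter-rater reliability: Cohen's kappa (chance-corrected agreement) plus a
# Wilcoxon signed-rank test for a systematic difference between the raters.
kappa = cohen_kappa_score(expert_a, expert_b)
w_stat, w_p = wilcoxon(expert_a, expert_b)
print(f"Cohen's kappa = {kappa:.2f}, Wilcoxon p = {w_p:.3f}")

# Intra-rater reliability: Friedman test across repeated scoring rounds by the
# same expert (three hypothetical rounds shown here).
round_1, round_2, round_3 = (rng.integers(4, 6, size=20) for _ in range(3))
f_stat, f_p = friedmanchisquare(round_1, round_2, round_3)
print(f"Friedman chi-square = {f_stat:.2f}, p = {f_p:.3f}")
```

The post hoc comparisons the abstract mentions after the Friedman test would require an additional paired test (for example, pairwise Wilcoxon tests with a multiplicity correction) and are omitted from this sketch.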
Keywords: Accuracy; Artificial intelligence; Celiac disease; ChatGPT; Reliability.
© 2025. The Author(s).
Conflict of interest statement
Competing interests: The authors declare no competing interests.
Similar articles
- Evaluation of ChatGPT-4 as an Online Outpatient Assistant in Puerperal Mastitis Management: Content Analysis of an Observational Study. JMIR Med Inform. 2025 Jul 24;13:e68980. doi: 10.2196/68980. PMID: 40705609. Free PMC article.
- Evaluating the validity and consistency of artificial intelligence chatbots in responding to patients' frequently asked questions in prosthodontics. J Prosthet Dent. 2025 Jul;134(1):199-206. doi: 10.1016/j.prosdent.2025.03.009. Epub 2025 Apr 7. PMID: 40199631.
- Using Artificial Intelligence ChatGPT to Access Medical Information About Chemical Eye Injuries: Comparative Study. JMIR Form Res. 2025 Aug 13;9:e73642. doi: 10.2196/73642. PMID: 40802972. Free PMC article.
- Performance of ChatGPT Across Different Versions in Medical Licensing Examinations Worldwide: Systematic Review and Meta-Analysis. J Med Internet Res. 2024 Jul 25;26:e60807. doi: 10.2196/60807. PMID: 39052324. Free PMC article.
- Performance of ChatGPT-3.5 and GPT-4 in national licensing examinations for medicine, pharmacy, dentistry, and nursing: a systematic review and meta-analysis. BMC Med Educ. 2024 Sep 16;24(1):1013. doi: 10.1186/s12909-024-05944-8. PMID: 39285377. Free PMC article.