Sci Rep. 2025 Aug 14;15(1):29871.
doi: 10.1038/s41598-025-15898-6.

Expert evaluation of ChatGPT accuracy and reliability for basic celiac disease frequently asked questions


Mohadeseh Mahmoudi Ghehsareh et al. Sci Rep. 2025.

Abstract

Artificial Intelligence's (AI) role in providing information on Celiac Disease (CD) remains understudied. This study evaluated the accuracy and reliability of ChatGPT-3.5, the dominant publicly accessible version during the study period, in generating responses to 20 basic CD-related queries, thereby establishing a benchmark for AI-assisted CD education. The accuracy of ChatGPT's responses to twenty frequently asked questions (FAQs) was assessed by two independent experts using a Likert scale, followed by categorization based on CD management domains. Inter-rater reliability (agreement between experts) was determined through cross-tabulation, Cohen's kappa, and Wilcoxon signed-rank tests. Intra-rater reliability (agreement within the same expert) was evaluated using the Friedman test with post hoc comparisons. ChatGPT demonstrated high accuracy in responding to CD FAQs, with expert ratings predominantly ranging from 4 to 5. While overall performance was strong, responses on management strategies scored higher than those on disease etiology. Inter-rater reliability analysis revealed moderate agreement between the two experts (κ = 0.22, p = 0.026). Although both experts consistently assigned high scores across CD management categories, subtle discrepancies emerged in specific instances. Intra-rater reliability analysis indicated consistent scoring for one expert (Friedman test, p = 0.113) and some variability for the other (Friedman test, p < 0.001). ChatGPT shows potential as a reliable source of information for CD patients, particularly in the domain of disease management.
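The inter-rater analysis described above can be sketched in code. The following is a minimal, self-contained illustration of unweighted Cohen's kappa applied to Likert ratings for 20 FAQ responses; the two rating lists are hypothetical, invented for illustration, and are not the study's data. (The study's other tests are available in scipy.stats, e.g. wilcoxon and friedmanchisquare.)

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' categorical ratings."""
    n = len(r1)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected chance agreement, from each rater's marginal rating frequencies.
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Likert ratings (1-5) for 20 FAQ responses -- illustrative only.
expert_a = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 5, 4, 5, 5]
expert_b = [5, 4, 4, 5, 5, 5, 4, 4, 5, 5, 5, 5, 4, 4, 5, 5, 4, 4, 5, 5]
print(round(cohens_kappa(expert_a, expert_b), 2))  # → 0.35
```

Kappa corrects the raw agreement rate for the agreement expected by chance when both raters favor high scores, which matters here because ratings cluster at 4-5; that is why a 70% raw agreement can still yield a kappa well below 1.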

Keywords: Accuracy; Artificial intelligence; Celiac disease; ChatGPT; Reliability.


Conflict of interest statement

Declarations. Competing interests: The authors declare no competing interests.

Figures

Fig. 1. Average scores of responses by CD management category and expertise.


