The ChatGPT Fact-Check: exploiting the limitations of generative AI to develop evidence-based reasoning skills in college science courses
- PMID: 39824517
- DOI: 10.1152/advan.00142.2024
Abstract
Generative large language models (LLMs) such as ChatGPT can quickly produce informative essays on a wide range of topics. However, the information generated cannot be fully trusted, as artificial intelligence (AI) can make factual mistakes. This poses challenges for using such tools in college classrooms. To address this, an adaptable assignment called the ChatGPT Fact-Check was developed to teach students in college science courses the benefits of using LLMs for topic exploration while emphasizing the importance of validating AI-generated claims against evidence. The assignment requires students to use ChatGPT to generate essays, evaluate AI-generated sources, and assess the validity of AI-generated scientific claims against experimental evidence in primary sources. The assignment reinforces responsible use of AI for exploration while maintaining evidence-based skepticism, and it meets learning objectives around efficiently leveraging the beneficial features of AI, distinguishing types of evidence, and evaluating claims based on evidence. Its adaptable nature allows integration across diverse courses to teach students to use AI responsibly for learning while maintaining a critical stance.

NEW & NOTEWORTHY: Generative large language models (LLMs) (e.g., ChatGPT) often produce erroneous information unsupported by scientific evidence. This article outlines how these limitations may be leveraged to develop critical thinking and teach students the importance of evaluating claims based on experimental evidence. Additionally, the activity highlights positive aspects of generative AI for efficiently exploring new topics of interest while maintaining skepticism.
Keywords: artificial intelligence; critical thinking; evidence-based reasoning; source evaluation; topic exploration.