Exploring the potential of artificial intelligence chatbots in prosthodontics education
- PMID: 40016760
- PMCID: PMC11869545
- DOI: 10.1186/s12909-025-06849-w
Abstract
Background: The purpose of this study was to evaluate the performance of widely used artificial intelligence (AI) chatbots in answering prosthodontics questions from the Dentistry Specialization Residency Examination (DSRE).
Methods: A total of 126 DSRE prosthodontics questions were divided into seven subtopics (dental morphology, materials science, fixed dentures, removable partial dentures, complete dentures, occlusion/temporomandibular joint, and dental implantology). The questions were translated into English by the authors, and this English version was submitted to five chatbots (ChatGPT-3.5, Gemini Advanced, Claude Pro, Microsoft Copilot, and Perplexity) within a 7-day period. Statistical analyses, including chi-square and z-tests, were performed to compare accuracy rates across chatbots and subtopics at a significance level of 0.05.
Results: The overall accuracy rates for the chatbots were as follows: Copilot (73%), Gemini (63.5%), ChatGPT-3.5 (61.1%), Claude Pro (57.9%), and Perplexity (54.8%). Copilot significantly outperformed Perplexity (P = 0.035). However, no significant differences in accuracy were found across subtopics among chatbots. Questions on dental implantology had the highest accuracy rate (75%), while questions on removable partial dentures had the lowest (50.8%).
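As an illustration of the comparison described in the Methods, the sketch below shows how a 5 × 2 chi-square test and a pairwise two-proportion z-test could be run on these accuracy rates. The correct-answer counts are back-calculated from the reported percentages of 126 questions and are therefore approximate assumptions, and the paper's exact post-hoc procedure may differ, so the P values need not match those reported.

```python
# Illustrative sketch (not the authors' code): chi-square comparison of chatbot
# accuracy and a pairwise two-proportion z-test, as described in the Methods.
# Correct-answer counts are back-calculated from the reported percentages of
# 126 questions and are therefore approximate.
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

TOTAL = 126
correct = {                      # approximate counts from reported accuracy
    "Copilot": 92,               # ~73.0%
    "Gemini Advanced": 80,       # ~63.5%
    "ChatGPT-3.5": 77,           # ~61.1%
    "Claude Pro": 73,            # ~57.9%
    "Perplexity": 69,            # ~54.8%
}

# Overall 5 x 2 chi-square test (correct vs. incorrect answers per chatbot)
table = [[c, TOTAL - c] for c in correct.values()]
chi2, p_overall, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, P={p_overall:.3f}")

# Pairwise two-proportion z-test, e.g. Copilot vs. Perplexity
z, p_pair = proportions_ztest(
    count=[correct["Copilot"], correct["Perplexity"]],
    nobs=[TOTAL, TOTAL],
)
print(f"Copilot vs Perplexity: z={z:.2f}, P={p_pair:.4f}")
```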
Conclusion: Copilot showed the highest accuracy rate (73%), significantly outperforming Perplexity (54.8%). AI chatbots show potential as educational support tools, but they are not yet reliable across all areas of prosthodontics. Future advancements in AI may enable better integration and more effective use in dental education.
Keywords: AI chatbot evaluation; Artificial intelligence applications; Clinical decision-support systems; Dentistry specialization; Prosthodontics education.
© 2025. The Author(s).
Conflict of interest statement
Declarations. Ethics approval and consent to participate: Not applicable. Competing interests: The authors declare no competing interests. Informed consent: Informed consent was not required. Institutional review board: This study did not involve human or animal subjects and therefore did not require approval from the Institutional Ethical Board.