Performance of GPT-4o and DeepSeek-R1 in the Polish Infectious Diseases Specialty Exam
- PMID: 40416223
- PMCID: PMC12102586
- DOI: 10.7759/cureus.82870
Abstract
Background: The past few years have seen rapid development in artificial intelligence (AI) and its implementation across numerous fields. This study aimed to compare the performance of GPT-4o (OpenAI, San Francisco, CA, USA) and DeepSeek-R1 (DeepSeek AI, Zhejiang, China) on the Polish specialty examination in infectious diseases.

Materials and methods: The study was conducted from April 1 to April 4, 2025, using the Autumn 2024 Polish specialty examination in infectious diseases. The examination comprised 120 questions, each presenting five answer options with only one correct choice. The Center for Medical Education (CEM) in Łódź, Poland withdrew one question because it lacked a definitive correct answer and was inconsistent with up-to-date clinical guidelines, leaving 119 scored questions. The questions were further classified as either 'clinical cases' or 'other' to enable a more in-depth evaluation of the potential of AI in real-world clinical practice. The accuracy of the responses was verified against the official answer key approved by the CEM. The accuracy and confidence levels of the responses provided by GPT-4o and DeepSeek-R1 were assessed with statistical methods, including Pearson's χ² test and the Mann-Whitney U test.

Results: GPT-4o correctly answered 85 of 119 questions (71.43%), while DeepSeek-R1 correctly answered 88 of 119 questions (73.95%). A minimum of 72 correct responses (60.5%) is required to pass the examination. No statistically significant difference was observed between responses to 'clinical case' questions and 'other' questions for either AI model. For both models, confidence levels differed significantly between correct and incorrect answers, with higher confidence reported for correctly answered questions and lower confidence for incorrectly answered ones.

Conclusions: Both GPT-4o and DeepSeek-R1 passed the Polish specialty examination in infectious diseases, suggesting their potential as educational tools. Notably, DeepSeek-R1 achieved performance comparable to GPT-4o despite being a much newer model on the market and, according to available data, having been developed at a significantly lower cost.
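As a rough illustration of the statistical analysis described above, the sketch below shows how Pearson's χ² test (accuracy on 'clinical case' vs. 'other' questions) and the Mann-Whitney U test (confidence for correct vs. incorrect answers) could be run in Python with SciPy. All counts and confidence scores are hypothetical placeholders, not the study's data.

```python
# Hypothetical sketch of the statistical comparisons described in the abstract.
# All numbers below are illustrative placeholders, not the study's results.
from scipy.stats import chi2_contingency, mannwhitneyu

# Pearson's chi-squared test: accuracy on 'clinical case' vs. 'other' questions.
# Rows = question group, columns = (correct, incorrect) counts.
contingency = [
    [40, 15],  # clinical case questions: correct, incorrect (placeholders)
    [45, 19],  # other questions: correct, incorrect (placeholders)
]
chi2, p_chi2, dof, _ = chi2_contingency(contingency)
print(f"chi-squared = {chi2:.3f}, p = {p_chi2:.3f}")

# Mann-Whitney U test: self-reported confidence for correct vs. incorrect answers.
confidence_correct = [90, 85, 95, 80, 88]    # placeholder confidence scores
confidence_incorrect = [60, 70, 55, 65, 58]  # placeholder confidence scores
u_stat, p_u = mannwhitneyu(confidence_correct, confidence_incorrect,
                           alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_u:.4f}")
```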
Keywords: artificial intelligence; ChatGPT; final medical examination; machine learning; medical professionals; medical students; DeepSeek; infectious disease medicine.
Copyright © 2025, Błecha et al.
Conflict of interest statement
Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.