Cureus. 2025 Apr 23;17(4):e82870. doi: 10.7759/cureus.82870. eCollection 2025 Apr.

Performance of GPT-4o and DeepSeek-R1 in the Polish Infectious Diseases Specialty Exam


Zuzanna Błecha et al. Cureus. 2025.

Abstract

Background: The past few years have been a time of rapid development in artificial intelligence (AI) and its implementation across numerous fields. This study aimed to compare the performance of GPT-4o (OpenAI, San Francisco, CA, USA) and DeepSeek-R1 (DeepSeek AI, Zhejiang, China) on the Polish specialty examination in infectious diseases.

Materials and methods: The study was conducted from April 1 to April 4, 2025, using the Autumn 2024 Polish specialty examination in infectious diseases. The examination comprised 120 questions, each presenting five answer options, with only one correct choice. The Center for Medical Education (CEM) in Łódź, Poland, withdrew one question due to the absence of a definitive correct answer and inconsistency with up-to-date clinical guidelines, leaving 119 questions for analysis. Furthermore, the questions were classified as either 'clinical cases' or 'other' to enable a more in-depth evaluation of the potential of artificial intelligence in real-world clinical practice. The accuracy of the responses was verified using the official answer key approved by the CEM. To assess the accuracy and confidence level of the responses provided by GPT-4o and DeepSeek-R1, statistical methods were employed, including Pearson's χ2 test and the Mann-Whitney U test.

Results: GPT-4o correctly answered 85 out of 119 questions (71.43%), while DeepSeek-R1 correctly answered 88 out of 119 questions (73.95%). A minimum of 72 (60.5%) correct responses is required to pass the examination. No statistically significant difference was observed between responses to 'clinical case' questions and 'other' questions for either AI model. For both AI models, a statistically significant difference was observed in the confidence levels between correct and incorrect answers, with higher confidence reported for correctly answered questions and lower confidence for incorrectly answered ones.

Conclusions: Both GPT-4o and DeepSeek-R1 demonstrated the ability to pass the Polish specialty examination in infectious diseases, suggesting their potential as educational tools. Additionally, it is noteworthy that DeepSeek-R1 achieved performance comparable to GPT-4o, despite being a much newer model on the market and, according to available data, having been developed at significantly lower cost.
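The confidence analysis described above compares two groups of per-question confidence scores (correct vs. incorrect answers) with the Mann-Whitney U test. A minimal pure-Python sketch of the U statistic is shown below; the confidence values are hypothetical placeholders, not the study's data.

```python
# Sketch of the Mann-Whitney U statistic used to compare confidence
# levels for correct vs. incorrect answers. Data are hypothetical.

def mann_whitney_u(sample_a, sample_b):
    """Return the U statistic for sample_a versus sample_b.

    U counts, over all pairs (a, b), how often a > b, with ties
    contributing 0.5. A larger U means sample_a tends to be larger.
    """
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical model confidence scores (0-100) per question.
conf_correct = [90, 85, 95, 80, 88, 92, 75, 97]
conf_incorrect = [60, 55, 70, 50, 65, 58]

u_stat = mann_whitney_u(conf_correct, conf_incorrect)
u_max = len(conf_correct) * len(conf_incorrect)  # number of pairs
print(f"U = {u_stat} out of {u_max}")
```

In practice one would obtain the p-value (the study's threshold is p < 0.05) from a library routine such as `scipy.stats.mannwhitneyu` rather than computing the statistic by hand.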

Keywords: artificial intelligence; ChatGPT; DeepSeek; final medical examination; infectious disease medicine; machine learning; medical professionals; medical students.


Conflict of interest statement

Human subjects: All authors have confirmed that this study did not involve human participants or tissue.
Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following:
Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work.
Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.

Figures

Figure 1. General summary of GPT-4o and DeepSeek-R1.
Data are presented as N (%), where N represents the number of correct or incorrect responses and (%) indicates the percentage of all questions they represent.

Figure 2. Comparison of confidence levels for correct and incorrect answers for GPT-4o and DeepSeek-R1.
The Mann-Whitney U test was used to compare the distributions of confidence levels for correct and incorrect responses for both AI models; a result was considered statistically significant if p < 0.05. GPT-4o: the p-value is 0.0026, indicating a statistically significant difference between the confidence distributions for correct and incorrect answers, which suggests that GPT-4o is more confident in its correct answers than in its incorrect ones. DeepSeek-R1: the p-value is 0.0026, likewise indicating a statistically significant difference and suggesting that DeepSeek-R1 is more confident in its correct answers than in its incorrect ones.
