BMC Med Educ. 2024 Feb 14;24(1):143.
doi: 10.1186/s12909-024-05125-7.

Performance of ChatGPT on Chinese national medical licensing examinations: a five-year examination evaluation study for physicians, pharmacists and nurses

Hui Zong et al. BMC Med Educ. 2024.

Abstract

Background: Large language models like ChatGPT have revolutionized the field of natural language processing with their capability to comprehend and generate textual content, showing great potential to play a role in medical education. This study aimed to quantitatively evaluate and comprehensively analyze the performance of ChatGPT on three types of national medical licensing examinations in China: the National Medical Licensing Examination (NMLE), the National Pharmacist Licensing Examination (NPLE), and the National Nurse Licensing Examination (NNLE).

Methods: We collected questions from the Chinese NMLE, NPLE and NNLE from 2017 to 2021. In the NMLE and NPLE, each exam consisted of 4 units, while in the NNLE, each exam consisted of 2 units. Questions containing figures, tables or chemical structures were manually identified and excluded by a clinician. We applied a direct instruction strategy via multiple prompts to force ChatGPT to generate a clear answer, with the capability to distinguish between single-choice and multiple-choice questions.
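The direct-instruction strategy described in the Methods can be sketched roughly as follows. This is an illustrative reconstruction only: the prompt wording, the `build_prompt` and `score` helpers, and the data structures are assumptions for illustration, not the authors' actual materials or code.

```python
# Hypothetical sketch of the direct-instruction prompting and exact-match
# scoring described in the Methods. The prompt text and helper names are
# assumptions, not the study's actual materials.

def build_prompt(stem: str, choices: dict[str, str], multiple: bool) -> str:
    """Assemble a direct-instruction prompt that forces a clear answer and
    tells the model whether the question is single- or multiple-choice."""
    kind = "multiple-choice (select all that apply)" if multiple else "single-choice"
    lines = [
        f"This is a {kind} question. Answer with the option letter(s) only.",
        stem,
    ]
    lines += [f"{letter}. {text}" for letter, text in choices.items()]
    return "\n".join(lines)

def score(responses: list[set[str]], answer_key: list[set[str]]) -> float:
    """Exact-match accuracy: a question counts as correct only if the
    selected option set equals the key exactly (this matters for
    multiple-choice questions, where partial matches score zero)."""
    correct = sum(r == k for r, k in zip(responses, answer_key))
    return correct / len(answer_key)
```

For example, under these assumptions `score([{"D"}, {"A", "C"}], [{"D"}, {"A", "B"}])` yields 0.5, which would fall below the 0.6 passing threshold the study uses.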

Results: ChatGPT failed to reach the accuracy threshold of 0.6 in any of the three types of examinations over the five years. Specifically, in the NMLE, the highest recorded accuracy was 0.5467, attained in both 2018 and 2021. In the NPLE, the highest accuracy was 0.5599, in 2017. In the NNLE, the best result was also in 2017, with an accuracy of 0.5897, the highest in our entire evaluation. ChatGPT's performance showed no significant difference across units, but a significant difference across question types. ChatGPT performed well in a range of subject areas, including clinical epidemiology, human parasitology, and dermatology, as well as in various medical topics such as molecules, health management and prevention, and diagnosis and screening.

Conclusions: These results indicate that ChatGPT failed the NMLE, NPLE and NNLE in China for every year from 2017 to 2021, but they also show the great potential of large language models in medical education. In the future, high-quality medical data will be required to improve performance.

Keywords: Artificial intelligence; ChatGPT; Medical education; Medical examination; Natural language processing.

Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1
Overview of the interaction with ChatGPT. Each question included a background description and choices from one of three national licensing examinations: the Chinese National Medical Licensing Examination (NMLE), National Pharmacist Licensing Examination (NPLE) and National Nurse Licensing Examination (NNLE). The prompt was designed to force a clear answer, as well as to recognize whether the question was single-choice or multiple-choice. The responses of ChatGPT were manually reviewed by an experienced clinician to determine the answer. The correct answer to this question is "D. Cor pulmonale". It should be noted that while English text is shown in the figure, the experiment itself used Chinese text as both the input and output language
Fig. 2
The performance of ChatGPT on three national licensing examinations over the five-year period from 2017 to 2021. The examinations included the Chinese National Medical Licensing Examination (NMLE), National Pharmacist Licensing Examination (NPLE) and National Nurse Licensing Examination (NNLE)
Fig. 3
The performance of ChatGPT on different units and question types. Across units, there was no significant difference within (A) the Chinese National Medical Licensing Examination (NMLE), (B) the National Pharmacist Licensing Examination (NPLE), or (C) the National Nurse Licensing Examination (NNLE). (D) However, ChatGPT performed significantly better on single-choice questions than on multiple-choice questions (ns, no significant difference; ****p < 0.0001)
Fig. 4
The performance of ChatGPT on different subjects, topics and types of questions in the 2021 NMLE exam

References

    1. Bhinder B, et al. Artificial Intelligence in Cancer Research and Precision Medicine. Cancer Discov. 2021;11(4):900–15. doi: 10.1158/2159-8290.CD-21-0090. - DOI - PMC - PubMed
    1. Moor M, et al. Foundation models for generalist medical artificial intelligence. Nature. 2023;616(7956):259–65. doi: 10.1038/s41586-023-05881-4. - DOI - PubMed
    1. van Dis EAM, et al. ChatGPT: five priorities for research. Nature. 2023;614(7947):224–6. doi: 10.1038/d41586-023-00288-7. - DOI - PubMed
    1. Sarink MJ et al. A study on the performance of ChatGPT in infectious diseases clinical consultation. Clin Microbiol Infect, 2023. - PubMed
    1. Lee TC et al. ChatGPT Answers Common Patient Questions About Colonoscopy. Gastroenterology, 2023. - PubMed
