Death of the Personal Statement: A Qualitative Comparison Between Human-Authored and Artificial Intelligence-Generated Medical School Admissions Essays

Matthew J Vaccaro et al. J Am Coll Surg. 2025 Oct 6. doi: 10.1097/XCS.0000000000001602. Online ahead of print.
Abstract

Background: Medical school admissions committees use personal statements as subjective measures of applicants' values, motivations, and experiences. Students at varying levels of education report that they use, or would use, artificial intelligence (AI) in their academic work. How effectively admissions committees can distinguish AI-generated from human-authored personal statements, and whether AI use gives applicants an advantage, has not been investigated.

Study design: Human-authored personal statements were retrieved from the 2019 application cycle (before the availability of AI chatbots). ChatGPT 4.0 was used to generate personal statements from summaries of the human-authored essays. In a prospective, single-blind, randomized controlled trial, medical school application readers evaluated unique combinations of AI-generated and human-authored personal statements for essay quality (7-point Likert scale), speculated authorship (human or AI), and confidence in that speculation (5-point Likert scale). ZeroGPT, an AI-detection tool, also speculated authorship.

Results: Seventeen medical school application readers completed 325 scoring rubrics across 309 essays. Readers demonstrated 56% accuracy in identifying authorship, whereas ZeroGPT showed 91% accuracy. Readers were more likely to assume human authorship of higher-scoring essays (Mann-Whitney U test, p<0.001). AI-generated essays scored higher than their human-authored counterparts (Wilcoxon signed-rank test, p=0.020), with mean scores of 5.02 ± 1.21 and 4.67 ± 1.33, respectively.
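The comparisons reported above correspond to standard nonparametric tests. The sketch below is a minimal illustration using scipy, not the authors' analysis code; the score arrays are hypothetical placeholders standing in for the readers' Likert ratings.

# Hypothetical sketch of the two statistical comparisons described above;
# all score values are illustrative placeholders, not study data.
from scipy.stats import mannwhitneyu, wilcoxon

# Quality scores for essays the readers speculated were human- vs AI-authored
scores_judged_human = [6, 5, 7, 5, 6, 4, 5]
scores_judged_ai = [4, 5, 3, 4, 5, 3, 4]

# Unpaired comparison: are higher-scoring essays more often assumed human-authored?
res_u = mannwhitneyu(scores_judged_human, scores_judged_ai, alternative="two-sided")

# Paired comparison: each AI-generated essay vs its human-authored counterpart
ai_scores = [5.0, 6.5, 4.5, 5.5, 6.0, 4.0]
human_scores = [4.0, 5.0, 4.3, 5.9, 5.5, 3.9]
res_w = wilcoxon(ai_scores, human_scores)

print(f"Mann-Whitney U: U={res_u.statistic:.1f}, p={res_u.pvalue:.3f}")
print(f"Wilcoxon signed-rank: W={res_w.statistic:.1f}, p={res_w.pvalue:.3f}")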

Conclusion: AI-generated personal statements were rated more highly than human-authored statements and were nearly indistinguishable from them by medical school application readers. Despite ZeroGPT's higher accuracy in detecting AI use, its false-positive rate remains unacceptably high for use in medical school admissions. The role of personal statements in medical school admissions requires urgent reconsideration to maintain credibility.

Keywords: AI Detection; Academic Writing; Artificial Intelligence; ChatGPT; Medical School Admissions; Personal Statements.
