Observational Study
BMC Med Educ. 2025 Apr 22;25(1):591. doi: 10.1186/s12909-025-07191-x.

Artificial intelligence based assessment of clinical reasoning documentation: an observational study of the impact of the clinical learning environment on resident documentation quality


Verity Schaye et al.

Abstract

Background: Objective measures and large datasets are needed to determine which aspects of the Clinical Learning Environment (CLE) impact the essential skill of clinical reasoning documentation. Artificial Intelligence (AI) offers a solution. Here, the authors sought to determine which aspects of the CLE might affect resident clinical reasoning documentation quality as assessed by AI.

Methods: In this observational, retrospective cross-sectional analysis of hospital admission notes from the Electronic Health Record (EHR), all categorical internal medicine (IM) residents who wrote at least one admission note during the study period (July 1, 2018 to June 30, 2023) at two sites of NYU Grossman School of Medicine's IM residency program were included. Clinical reasoning documentation quality of admission notes was classified as low- or high-quality using a supervised machine learning model. From note-level data, the shift (day or night) and the note index within shift (whether a note was the first, second, etc., written within a shift) were calculated. These aspects of the CLE were included as potential markers of workload, which has been shown to have a strong relationship with resident performance. Patient data were also captured, including age, sex, Charlson Comorbidity Index, and primary diagnosis. The relationship between these variables and clinical reasoning documentation quality was analyzed using generalized estimating equations accounting for resident-level clustering.
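The abstract describes the modeling approach but not the software used. A minimal sketch of this analysis in Python with statsmodels, assuming hypothetical column names (resident_id, shift_id, note_time, high_quality, and so on) and an exchangeable working correlation for resident-level clustering, might look like the following:

    # Sketch of the described analysis (software and column names are assumptions,
    # not details reported in the paper).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # One row per admission note: the AI-assigned quality label
    # (1 = high-quality, 0 = low-quality) plus note- and patient-level covariates.
    notes = pd.read_csv("admission_notes.csv")  # hypothetical file

    # Derive the note index within shift: order each resident's notes within a
    # shift and number them 1, 2, 3, ... (first, second, etc. note of that shift).
    notes = notes.sort_values(["resident_id", "shift_id", "note_time"])
    notes["note_index"] = notes.groupby(["resident_id", "shift_id"]).cumcount() + 1

    # Logistic GEE: binomial family, exchangeable working correlation,
    # clustering notes within residents.
    model = smf.gee(
        "high_quality ~ academic_year + night_shift + note_index"
        " + patient_age + patient_sex + charlson_index + C(primary_dx)",
        groups="resident_id",
        data=notes,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()

    # Exponentiate coefficients to get adjusted odds ratios with 95% CIs.
    aor = np.exp(pd.concat([result.params, result.conf_int()], axis=1))
    aor.columns = ["aOR", "2.5%", "97.5%"]
    print(aor)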

Results: Across 37,750 notes authored by 474 residents, older patient age, more pre-existing comorbidities, and certain primary diagnoses (e.g., infectious and pulmonary conditions) were associated with higher clinical reasoning documentation quality. When controlling for these and other patient factors, variables associated with clinical reasoning documentation quality included academic year (adjusted odds ratio, aOR, for high quality: 1.10; 95% CI 1.06-1.15; P < .001), night shift (aOR 1.21; 95% CI 1.13-1.30; P < .001), and note index (aOR 0.93; 95% CI 0.90-0.95; P < .001).
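As an illustrative reading of the note-index estimate (a back-of-the-envelope calculation, not a figure reported in the paper): with an aOR of 0.93 per additional note, the fifth note written in a shift would carry roughly 0.93^4 ≈ 0.75 times the odds of high-quality documentation relative to the first note of that shift.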

Conclusions: AI can be used to assess complex skills such as clinical reasoning in authentic clinical notes, helping to elucidate the potential impact of the CLE on resident clinical reasoning documentation quality. Future work should explore residency program and systems interventions to optimize the CLE.

Keywords: Artificial intelligence; Clinical learning environment; Clinical reasoning; Documentation.


Conflict of interest statement

Declarations. Ethical approval: The study was approved by the NYU Grossman School of Medicine Institutional Review Board on 12/9/2023 (i19-00280). As this was a retrospective, observational study of EHR data, the requirement for informed consent from each participant was waived for model development and retrospective data analysis by the NYU Grossman School of Medicine Institutional Review Board. Consent for publication: Not applicable. Competing interests: The authors declare no competing interests.

Figures

Fig. 1
A: High-quality clinical reasoning documentation by academic year (AY), demonstrating a decline in AY 2019–2020 driven by notes authored during the peak of the initial COVID-19 surge in New York City (March to June 2020). B: High-quality clinical reasoning documentation by shift and hour, demonstrating that notes written during the night shift were more likely to be high-quality than those written during the day shift; notes written later in the night shift appeared to be associated with lower note quality, but this relationship disappeared in the multivariable regression. C: High-quality clinical reasoning documentation by note index within shift, demonstrating that within each shift, every additional note (i.e., higher note index) was associated with lower-quality clinical reasoning documentation.
