Comparative Study

Analysis of Errors in Dictated Clinical Documents Assisted by Speech Recognition Software and Professional Transcriptionists

Li Zhou et al. JAMA Netw Open. 2018 Jul;1(3):e180530. doi: 10.1001/jamanetworkopen.2018.0530. Epub 2018 Jul 6.

Abstract

Importance: Accurate clinical documentation is critical to health care quality and safety. Dictation services supported by speech recognition (SR) technology and professional medical transcriptionists are widely used by US clinicians. However, the quality of SR-assisted documentation has not been thoroughly studied.

Objective: To identify and analyze errors at each stage of the SR-assisted dictation process.

Design, setting, and participants: This cross-sectional study collected a stratified random sample of 217 notes (83 office notes, 75 discharge summaries, and 59 operative notes) dictated by 144 physicians between January 1 and December 31, 2016, at 2 health care organizations using Dragon Medical 360 | eScription (Nuance). Errors were annotated in the SR engine-generated document (SR), the medical transcriptionist-edited document (MT), and the physician's signed note (SN). Each document was compared with a criterion standard created from the original audio recordings and medical record review.

Main outcomes and measures: Error rate; mean errors per document; error frequency by general type (eg, deletion), semantic type (eg, medication), and clinical significance; and variations by physician characteristics, note type, and institution.

Results: The 217 notes were dictated by 144 unique physicians: 44 female (30.6%) and 10 of unknown sex (6.9%). Mean (SD) physician age was 52 (12.5) years (median [range] age, 54 [28-80] years). Among 121 physicians for whom specialty information was available (84.0%), 35 specialties were represented, including 45 surgeons (37.2%), 30 internists (24.8%), and 46 others (38.0%). The error rate in SR notes was 7.4% (ie, 7.4 errors per 100 words). The rate decreased to 0.4% after transcriptionist review and 0.3% in SNs. Overall, 96.3% of SR notes, 58.1% of MT notes, and 42.4% of SNs contained errors. Deletions were most common (34.7%), followed by insertions (27.0%). Among errors at the SR, MT, and SN stages, 15.8%, 26.9%, and 25.9%, respectively, involved clinical information, and 5.7%, 8.9%, and 6.4%, respectively, were clinically significant. Discharge summaries had higher mean SR error rates than other types (8.9% vs 6.6%; difference, 2.3%; 95% CI, 1.0%-3.6%; P < .001). Surgeons' SR notes had lower mean error rates than other physicians' (6.0% vs 8.1%; difference, 2.2%; 95% CI, 0.8%-3.5%; P = .002). One institution had a higher mean SR error rate (7.6% vs 6.6%; difference, 1.0%; 95% CI, -0.2% to 2.8%; P = .10) but lower mean MT and SN error rates (0.3% vs 0.7%; difference, -0.3%; 95% CI, -0.63% to -0.04%; P = .03 and 0.2% vs 0.6%; difference, -0.4%; 95% CI, -0.7% to -0.2%; P = .003).
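To make the study's primary metric concrete, the following is a minimal sketch of how an error rate expressed as errors per 100 words (e.g., the 7.4% reported for SR notes) and the proportion of notes containing any error could be computed from annotated documents. The `notes` data below is a hypothetical illustration, not the study's dataset.

```python
def error_rate_per_100_words(total_errors, total_words):
    """Errors per 100 words, the error-rate definition used in the outcome measures."""
    return 100.0 * total_errors / total_words

# Hypothetical annotated notes: (error_count, word_count) for each document.
notes = [(15, 220), (8, 95), (22, 310)]

total_errors = sum(e for e, _ in notes)   # 45
total_words = sum(w for _, w in notes)    # 625

rate = error_rate_per_100_words(total_errors, total_words)
pct_with_errors = 100.0 * sum(1 for e, _ in notes if e > 0) / len(notes)

print(f"Error rate: {rate:.1f} per 100 words")
print(f"Notes containing errors: {pct_with_errors:.1f}%")
```

Under this definition, a corpus with 74 errors across 1000 words would yield the study's reported SR-stage rate of 7.4 errors per 100 words.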

Conclusions and relevance: Seven in 100 words in SR-generated documents contain errors; many errors involve clinical information. That most errors are corrected before notes are signed demonstrates the importance of manual review, quality assurance, and auditing.


Conflict of interest statement

Conflict of Interest Disclosures: Dr Zhou reported grants from the Agency for Healthcare Research and Quality during the conduct of the study. Dr Doan reported personal fees from Sanofi Genzyme outside the submitted work. Dr Meteer reported grants from the Agency for Healthcare Research and Quality during the conduct of the study. Dr Bates reported grants from the National Library of Medicine during the conduct of the study; had a patent (No. 6029138) issued, licensed, and with royalties paid; is a coinventor on patent No. 6029138 held by Brigham and Women’s Hospital on the use of decision support software for medical management, licensed to the Medicalis Corporation; holds a minority equity position in the privately held company Medicalis, which develops web-based decision support for radiology test ordering; serves on the board for S.E.A. Medical Systems, which makes intravenous pump technology; consults for EarlySense, which makes patient safety monitoring systems; receives cash compensation from CDI-Negev, which is a nonprofit incubator for health IT startups; receives equity from Valera Health, which makes software to help patients with chronic diseases; receives equity from Intensix, which makes software to support clinical decision making in intensive care; and receives equity from MDClone, which takes clinical data and produces deidentified versions of it. Dr Bates’ financial interests have been reviewed by Brigham and Women’s Hospital and Partners HealthCare in accordance with their institutional policies. No other disclosures were reported.

Figures

Figure. Stages of Back-End and Front-End Dictation
There are 2 primary ways that speech recognition (SR) can assist the clinical documentation process. In back-end SR, clinicians’ dictations, the audio original (AO), are captured and converted to text by an SR engine. The SR-generated text is edited by a professional medical transcriptionist (MT), then sent back to the clinician for review and a signed note (SN). In front-end SR, clinicians dictate directly into free-text fields of the electronic health record and edit the transcription themselves.
