Appl Clin Inform. 2018 Jan;9(1):122-128. doi: 10.1055/s-0038-1626725. Epub 2018 Feb 21.

Development and Validation of a Natural Language Processing Tool to Identify Patients Treated for Pneumonia across VA Emergency Departments

B E Jones et al. Appl Clin Inform. 2018 Jan.

Abstract

Background: Identifying pneumonia using diagnosis codes alone may be insufficient for research on clinical decision making. Natural language processing (NLP) may enable the inclusion of cases missed by diagnosis codes.

Objectives: This article (1) develops an NLP tool that identifies the clinical assertion of pneumonia from physician emergency department (ED) notes, and (2) compares classification methods using diagnosis codes versus NLP against a gold standard of manual chart review to identify patients initially treated for pneumonia.

Methods: Among a national population of ED visits occurring between 2006 and 2012 across the Veterans Affairs health system, we extracted 811 physician documents containing search terms for pneumonia for training, and 100 random documents for validation. Two reviewers annotated span- and document-level classifications of the clinical assertion of pneumonia. An NLP tool using a support vector machine was trained on the enriched documents. We extracted diagnosis codes assigned in the ED and upon hospital discharge and calculated performance characteristics for diagnosis codes, NLP, and NLP plus diagnosis codes against manual review in training and validation sets.
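The document-level classifier described above, a support vector machine trained over annotated text, can be sketched with scikit-learn. This is an illustrative sketch only: the note snippets, labels, and model settings below are hypothetical stand-ins, not the authors' actual data, features, or configuration.

```python
# Sketch of a document-level SVM classifier for clinical assertions of
# pneumonia, in the spirit of the pipeline described in the Methods.
# All training snippets and labels here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for annotated ED note text (1 = pneumonia asserted).
docs = [
    "chest x-ray shows right lower lobe infiltrate consistent with pneumonia",
    "assessment: community-acquired pneumonia, start antibiotics",
    "no infiltrate on imaging; pneumonia ruled out",
    "cough and fever, viral bronchitis suspected, no evidence of pneumonia",
]
labels = [1, 1, 0, 0]

# TF-IDF unigrams/bigrams feeding a linear SVM, fit on the toy set.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(docs, labels)

# On this tiny, separable toy set the model reproduces its training labels.
preds = list(model.predict(docs))
print(preds)
```

In a real deployment, `docs` would be the 811 search-term-enriched physician documents and `labels` the reviewers' document-level annotations; the held-out validation set would then be scored with the fitted pipeline.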

Results: Among the training documents, 51% contained clinical assertions of pneumonia; in the validation set, 9% were classified with pneumonia, of which 100% contained pneumonia search terms. After enriching with search terms, the NLP system alone demonstrated a recall/sensitivity of 0.72 (training) and 0.55 (validation), and a precision/positive predictive value (PPV) of 0.89 (training) and 0.71 (validation). ED-assigned diagnostic codes demonstrated lower recall/sensitivity (0.48 and 0.44) but higher precision/PPV (0.95 in training, 1.0 in validation); the NLP system identified more "possible-treated" cases than diagnostic coding. An approach combining NLP and ED-assigned diagnostic coding classification achieved the best performance (sensitivity 0.89 and PPV 0.80).
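The recall/sensitivity and precision/PPV figures reported above follow the standard confusion-matrix definitions. A minimal helper makes the calculation explicit; the confusion counts below are hypothetical, chosen only so that the arithmetic reproduces the validation-set values (0.55 and 0.71) quoted in the Results.

```python
# Standard definitions behind the reported metrics:
#   sensitivity (recall) = TP / (TP + FN)
#   PPV (precision)      = TP / (TP + FP)

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true pneumonia cases the classifier captures."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Fraction of classifier-flagged cases that are true pneumonia."""
    return tp / (tp + fp)

# Hypothetical counts that reproduce the validation-set values above.
tp, fp, fn = 55, 22, 45
print(round(sensitivity(tp, fn), 2))  # 0.55
print(round(ppv(tp, fp), 2))          # 0.71
```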

Conclusion: System-wide application of NLP to clinical text can increase capture of initial diagnostic hypotheses, an important inclusion when studying diagnosis and clinical decision-making under uncertainty.


Conflict of interest statement

None.

Figures

Fig. 1: Study population.

Fig. 2: Signal detection plot of International Classification of Diseases (ICD)-based classification, natural language processing (NLP) classification, and receiver operating characteristic (ROC) curve of the NLP classification. The red square indicates the emergency department (ED)-assigned ICD classification, and the orange square indicates the hospital-assigned ICD classification. The overall ROC area under the curve of the NLP system was 0.935 (95% confidence interval: 0.917-0.953). The blue dot is the NLP classification (specificity = 95%) calibrated to match the ICD-based classification on specificity.

Fig. 3: Composition of assertions of pneumonia among cases identified with each cohort selection approach in the training set.

References

    1. Centers for Disease Control and Prevention, National Center for Health Statistics.Underlying Cause of Death 1999–2015 on CDC WONDER Online Database, released December, 2016Data are from the Multiple Cause of Death Files, 1999–2015, as compiled from data provided by the 57 vital statistics jurisdictions through the Vital Statistics Cooperative Program. Available at:http://wonder.cdc.gov/ucd-icd10.html. Accessed October 23, 2017
    1. Ramirez J A, Wiemken T L, Peyrani P et al.Adults hospitalized with pneumonia in the United States: incidence, epidemiology, and mortality. Clin Infect Dis. 2017;65(11):1806–1812. - PubMed
    1. Mandell L A, Wunderink R G, Anzueto A et al.Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community-acquired pneumonia in adults. Clin Infect Dis. 2007;44 02:S27–S72. - PMC - PubMed
    1. Jain S, Self W H, Wunderink R G et al.Community-acquired pneumonia requiring hospitalization among U.S. adults. N Engl J Med. 2015;373(05):415–427. - PMC - PubMed
    1. Ruhnke G W, Coca-Perraillon M, Kitch B T, Cutler D M. Trends in mortality and medical spending in patients hospitalized for community-acquired pneumonia: 1993-2005. Med Care. 2010;48(12):1111–1116. - PMC - PubMed

Publication types