Validation of an Improved Computer-Assisted Technique for Mining Free-Text Electronic Medical Records

Marco Duz et al. JMIR Med Inform. 2017 Jun 29;5(2):e17. doi: 10.2196/medinform.7123.

Abstract

Background: The use of electronic medical records (EMRs) offers opportunities for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used.

Objective: The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs.

Methods: The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, each comprising terms that identify a condition of interest; words in the inclusion dictionaries were selected from the list of all words in the dataset generated by SS/WS. The second step was the definition of an exclusion dictionary, containing combinations of words used to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously removed by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were then added back using R v3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free-text notes. Validation was performed by comparing the computer-assisted method with this manual analysis, which was used as the gold standard: sensitivity, specificity, negative predictive values (NPVs), positive predictive values (PPVs), and F values of the computer-assisted process were calculated against the manual classification.
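
The final step of this workflow reduces to a set operation on row identifiers: final cases = (inclusion minus exclusion) plus reinclusion. The following R sketch illustrates the idea on a toy data frame; it is not the authors' SS/WS or R code, and the variable names, dictionary terms, and export format are assumptions made for illustration.

    # Illustrative sketch only. Assumes the free-text EMR rows are available as a
    # data frame with a row identifier and the free-text note.
    emr <- data.frame(
      row_id    = 1:5,
      note_text = c("presented with colic signs",
                    "colic, resolved with flunixin",
                    "no colic today, routine vaccination",
                    "suspected renal failure",
                    "dental rasp"),
      stringsAsFactors = FALSE
    )

    # Step 1: inclusion dictionary - terms identifying the condition of interest
    inclusion_terms <- c("colic")                       # hypothetical terms
    inclusion_ids <- emr$row_id[grepl(paste(inclusion_terms, collapse = "|"),
                                      emr$note_text, ignore.case = TRUE)]

    # Step 2: exclusion dictionary - word combinations flagging false positives
    exclusion_terms <- c("no colic")                    # hypothetical terms
    exclusion_ids <- emr$row_id[grepl(paste(exclusion_terms, collapse = "|"),
                                      emr$note_text, ignore.case = TRUE)]

    # Step 3: reinclusion dictionary - rescues cases wrongly removed in step 2
    reinclusion_terms <- c("no colic yesterday, colicking today")   # hypothetical
    reinclusion_ids <- emr$row_id[grepl(paste(reinclusion_terms, collapse = "|"),
                                        emr$note_text, ignore.case = TRUE)]

    # Final cases = (inclusion minus exclusion) plus reinclusion
    final_ids <- union(setdiff(inclusion_ids, exclusion_ids), reinclusion_ids)
    cases <- emr[emr$row_id %in% final_ids, ]

Because only row identifiers are carried through the subtraction and reinclusion steps, the same logic applies unchanged to the full dataset once the dictionaries have been matched against it.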

Results: The lowest sensitivity, specificity, PPV, NPV, and F values across the four outcomes were 99.82% (1128/1130), 99.88% (16410/16429), 94.6% (223/239), 100.00% (16410/16412), and 99.0% (100×2×0.983×0.998/[0.983+0.998]), respectively. The computer-assisted process required a few seconds to run, although an estimated 30 h were required for dictionary creation. Manual classification required approximately 80 man-hours.
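
The F value reported above is the standard harmonic mean of PPV and sensitivity, which is what the bracketed calculation expands to. A short R sketch of these standard metric definitions (illustrative, not taken from the paper) reproduces the reported lowest values:

    # Standard diagnostic-test metrics from a 2x2 confusion matrix
    sensitivity <- function(tp, fn) tp / (tp + fn)
    specificity <- function(tn, fp) tn / (tn + fp)
    ppv         <- function(tp, fp) tp / (tp + fp)
    npv         <- function(tn, fn) tn / (tn + fn)
    f_value     <- function(p, s)   2 * p * s / (p + s)   # harmonic mean of PPV and sensitivity

    sensitivity(1128, 1130 - 1128) * 100   # 99.82, the lowest sensitivity above
    f_value(0.983, 0.998) * 100            # ~99.0, the lowest F value above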

Conclusions: The critical step in this work is the creation of accurate and inclusive dictionaries to ensure that no potential cases are missed. It is substantially easier to remove false-positive terms from an SS/WS-selected subset of a large database than to search the original database for potential false negatives. The benefits of using this method are proportional to the size of the dataset to be analyzed.

Keywords: data mining; electronic medical record; text mining; validation studies.


Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1
Flowchart summary of the text mining process adopted in the study. The lower portion of the figure summarizes how the final dataset (dark gray) resulted from the subtraction of the exclusion dataset from the inclusion dataset and the final addition of the reinclusion dataset obtained from the exclusion dataset.
Figure 2
Flowchart summary of the data flow of the computer-assisted text mining process for the validation dataset. The columns show the number of terms in each dictionary (each term is a word or a combination of words), the number of cases, or the number of times (“repeats”) that terms in the relevant dictionary are identified in the dataset.
