BMC Med Inform Decis Mak. 2020 May 27;20(1):97.
doi: 10.1186/s12911-020-1104-5.

How to automatically turn patient experience free-text responses into actionable insights: a natural language processing (NLP) approach

Simone A Cammel et al. BMC Med Inform Decis Mak. 2020.

Abstract

Background: Patient experience surveys often include free-text responses. Analysing these responses is time-consuming, so they are often underutilized. This study examined whether Natural Language Processing (NLP) techniques could provide a data-driven, hospital-independent solution to indicate points for quality improvement.

Methods: This retrospective study used routinely collected patient experience data from two hospitals. A data-driven NLP approach was used. Free-text responses were categorized into topics and subtopics (i.e. n-grams) and labelled with a sentiment score. The indicator 'impact', combining sentiment and frequency, was calculated to reveal topics to improve, monitor or celebrate. The topic modelling architecture was then tested on data from the second hospital to examine whether it transfers to another hospital.
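
To illustrate the kind of pipeline described above, the following is a minimal sketch, not the authors' implementation: it uses scikit-learn's LDA for topic modelling, a toy sentiment lexicon in place of a trained sentiment model, and computes 'impact' as frequency times mean sentiment. The library choice, the example responses, and the exact impact formula are assumptions made for illustration only.

from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy free-text responses standing in for survey data.
responses = [
    "the nurses were very friendly and helpful",
    "waiting time at the outpatient clinic was far too long",
    "parking near the hospital is expensive and hard to find",
    "friendly staff but long waiting time before the appointment",
]

# Placeholder sentiment lexicon; a real pipeline would use a trained model.
POSITIVE = {"friendly", "helpful"}
NEGATIVE = {"long", "expensive", "hard"}

def sentiment(text):
    tokens = text.split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return max(-1, min(1, score))  # clip to [-1, 1]

# Unigrams and bigrams as candidate subtopics (the paper's "n-grams").
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
X = vectorizer.fit_transform(responses)

# Unsupervised topic model; the paper's exact architecture may differ.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_of = lda.fit_transform(X).argmax(axis=1)  # most likely topic per response

# 'Impact' per topic: frequency times mean sentiment. The sign-based mapping
# below is a simplification of the priority matrix shown in Fig. 2.
freq = Counter(topic_of)
for topic in sorted(freq):
    scores = [sentiment(r) for r, t in zip(responses, topic_of) if t == topic]
    mean_sent = sum(scores) / len(scores)
    impact = freq[topic] * mean_sent
    action = "improve" if impact < 0 else "celebrate" if impact > 0 else "monitor"
    print(f"topic {topic}: n={freq[topic]}, sentiment={mean_sent:+.2f}, "
          f"impact={impact:+.2f} -> {action}")

On real survey data the same structure applies, with the lexicon replaced by a proper sentiment model and the number of topics tuned to the corpus.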

Results: A total of 38,664 survey responses from the first hospital resulted in 127 topics and 294 n-grams. The indicator 'impact' revealed n-grams to celebrate (15.3%), improve (8.8%), and monitor (16.7%). For hospital 2, a similar percentage of free-text responses could be labelled with a topic and n-grams. Between hospitals, most topics (69.7%) were similar, while 32.2% of the topics for hospital 1 and 29.0% for hospital 2 were unique to that hospital.

Conclusions: In both hospitals, NLP techniques could be used to categorize patient experience free-text responses into topics, label them with a sentiment score, and define priorities for improvement. The model's architecture proved transferable: applied to the second hospital, it was still able to discover new, hospital-specific topics. These methods should be considered for future patient experience analyses to make better use of this valuable source of information.

Keywords: Data science; Machine learning; Natural language processing; PREM; Patient experience analysis; Text analytics.

Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Fig. 1
Data flow diagram of the data-preprocessing steps used for the topic modelling method
Fig. 2
Patient experience priority matrix for hospital 1. Topics to be improved, celebrated, or monitored are shown in the upper-left, upper-right, and lower-left quadrants, respectively
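
A minimal sketch of the quadrant logic this caption implies, assuming sentiment on the horizontal axis and frequency on the vertical axis; the axis meanings, the threshold parameter, and the 'no action' label for the lower-right quadrant are assumptions, not taken from the paper.

def quadrant(sentiment, frequency, freq_threshold):
    # Upper left: frequent and negative -> improve.
    if sentiment < 0 and frequency >= freq_threshold:
        return "improve"
    # Upper right: frequent and positive -> celebrate.
    if sentiment >= 0 and frequency >= freq_threshold:
        return "celebrate"
    # Lower left: infrequent but negative -> monitor.
    if sentiment < 0:
        return "monitor"
    # Lower right: infrequent and positive.
    return "no action"

print(quadrant(-0.4, 120, freq_threshold=50))  # -> improve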

