Understanding patient satisfaction with received healthcare services: A natural language processing approach

Kristina Doing-Harris et al. AMIA Annu Symp Proc. 2017 Feb 10;2016:524-533. eCollection 2016.

Abstract

Important information is encoded in free-text patient comments. We determine the most common topics in patient comments, design automatic topic classifiers, identify comments' sentiment, and find new topics in negative comments. Our annotation scheme consisted of 28 topics, each with positive and negative sentiment. Within those 28 topics, the seven most frequent accounted for 63% of annotations. For automated topic classification, we developed vocabulary-based and Naive Bayes classifiers. For sentiment analysis, another Naive Bayes classifier was used. Finally, we used topic modeling to search for unexpected topics within negative comments. The seven most common topics were appointment access, appointment wait, empathy, explanation, friendliness, practice environment, and overall experience. The best F-measures from our classifiers were 0.52 (NB), 0.57 (NB), 0.36 (Vocab), 0.74 (NB), 0.40 (NB), and 0.44 (Vocab), respectively. F-scores ranged from 0.16 to 0.74. The sentiment classification F-score was 0.84. Negative comment topic modeling revealed complaints about appointment access, appointment wait, and time spent with physician.
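The abstract describes Naive Bayes classification of patient comments and reports results as F-scores. A minimal sketch of both pieces is below: a multinomial Naive Bayes classifier with add-one smoothing trained on a few hypothetical comments (the training examples and labels are illustrative, not data from the study), plus the standard F1 computation used to report agreement and classifier performance.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes model.
    docs: list of (token_list, label) pairs."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    total = sum(class_counts.values())
    priors = {c: math.log(n / total) for c, n in class_counts.items()}
    return priors, word_counts, vocab

def predict(model, tokens):
    """Return the label maximizing log P(c) + sum log P(t|c),
    with add-one (Laplace) smoothing over the vocabulary."""
    priors, word_counts, vocab = model
    v = len(vocab)
    best, best_lp = None, float("-inf")
    for c, prior in priors.items():
        denom = sum(word_counts[c].values()) + v
        lp = prior + sum(math.log((word_counts[c][t] + 1) / denom)
                         for t in tokens)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

def f1(gold, pred, positive):
    """F1 = harmonic mean of precision and recall for one class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical training comments (for illustration only)
train = [
    ("the doctor was friendly and kind".split(), "positive"),
    ("great explanation of my treatment".split(), "positive"),
    ("long wait for my appointment".split(), "negative"),
    ("could not get an appointment for weeks".split(), "negative"),
]
model = train_nb(train)
print(predict(model, "friendly doctor great explanation".split()))  # positive
print(predict(model, "wait for appointment".split()))               # negative
```

The same model form works for both tasks in the paper: one classifier per topic (comment mentions the topic or not) and one for sentiment (positive vs. negative).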


Figures

Figure 1: An example of an annotated file.
Figure 2: Chart illustrating the flow of patient feedback documents through our processing systems.
Figure 3: Inter-annotator agreement scores across three document batches (each containing 100 documents); the agreement statistic used is F1.
Figure 4: Total n-gram counts and unique n-gram counts for each topic. Unigrams = light color (bottom); bigrams = dark color (top).
Figure 5: Distribution of positive and negative comments by topic.

