J Med Syst. 2021 May 17;45(7):69. doi: 10.1007/s10916-021-01737-4.

Lessons Learned from the Usability Evaluation of a Simulated Patient Dialogue System

Leonardo Campillos-Llanos et al.

Abstract

Simulated consultations through virtual patients allow medical students to practice history-taking skills. Ideally, applications should provide interactions in natural language and be multi-case, multi-specialty. Nevertheless, few systems handle or are tested on a large variety of cases. We present a virtual patient dialogue system in which a medical trainer types new cases and these are processed without human intervention. To develop it, we designed a patient record model, a knowledge model for the history-taking task, and a termino-ontological model for term variation and out-of-vocabulary words. We evaluated whether this system provided quality dialogue across medical specialities (n = 18), and with unseen cases (n = 29) compared to the cases used for development (n = 6). Medical evaluators (students, residents, practitioners, and researchers) conducted simulated history-taking with the system and assessed its performance through Likert-scale questionnaires. We analysed interaction logs and evaluated system correctness. The mean user evaluation score for the 29 unseen cases was 4.06 out of 5 (very good). The evaluation of correctness determined that, on average, 74.3% (sd = 9.5) of replies were correct, 14.9% (sd = 6.3) incorrect, and in 10.7% the system behaved cautiously by deferring a reply. In the user evaluation, all aspects scored higher in the 29 unseen cases than in the 6 seen cases. Although such a multi-case system has its limits, the evaluation showed that creating it is feasible; that it performs adequately; and that it is judged usable. We discuss some lessons learned and pivotal design choices affecting its performance and the end-users, who are primarily medical students.
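
As an illustration only, the following is a minimal, hypothetical sketch (in Python) of the kind of pipeline the abstract describes: a trainer-authored case record, a small termino-ontological lexicon mapping surface term variants to canonical concepts, and a reply step that defers cautiously when a question falls outside the modelled vocabulary. The names, schema, and lexicon entries are assumptions for the sketch, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class PatientRecord:
    """Structured case typed in by a medical trainer (assumed schema)."""
    facts: dict = field(default_factory=dict)  # canonical concept -> patient answer


# Termino-ontological layer (illustrative): surface term variants -> canonical concept.
LEXICON = {
    "chest pain": "symptom:chest_pain",
    "pain in the chest": "symptom:chest_pain",
    "smoking": "habit:tobacco_use",
    "tobacco": "habit:tobacco_use",
}


def normalise(question: str) -> Optional[str]:
    """Map a free-text question to a canonical concept, if any known variant matches."""
    q = question.lower()
    for variant, concept in LEXICON.items():
        if variant in q:
            return concept
    return None  # out-of-vocabulary question


def reply(record: PatientRecord, question: str) -> str:
    """Answer from the case record, or defer when the question is not understood."""
    concept = normalise(question)
    if concept is None or concept not in record.facts:
        return "I'm sorry, could you ask that another way?"  # deferred (cautious) reply
    return record.facts[concept]


if __name__ == "__main__":
    case = PatientRecord(facts={
        "symptom:chest_pain": "Yes, a tight pain in my chest since this morning.",
        "habit:tobacco_use": "I stopped smoking five years ago.",
    })
    print(reply(case, "Do you have any pain in the chest?"))  # answered from the record
    print(reply(case, "Any history of diabetes?"))            # unmodelled topic -> deferral
```

Under these assumptions, authoring a new case amounts to typing a new set of concept-to-answer facts, with no change to the dialogue logic, which is the property that lets the system scale to unseen cases without human intervention.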

Keywords: Artificial intelligence; Education; Medical; Medical history taking; Natural language processing; Virtual patient.


