Question answering systems for health professionals at the point of care-a systematic review
- PMID: 38366879
- PMCID: PMC10990539
- DOI: 10.1093/jamia/ocae015
Abstract
Objectives: Question answering (QA) systems have the potential to improve the quality of clinical care by providing health professionals with the latest and most relevant evidence. However, QA systems have not been widely adopted. This systematic review aims to characterize current medical QA systems, assess their suitability for healthcare, and identify areas of improvement.
Materials and methods: We searched PubMed, IEEE Xplore, ACM Digital Library, ACL Anthology, and forward and backward citations on February 7, 2023. We included peer-reviewed journal and conference papers describing the design and evaluation of biomedical QA systems. Two reviewers screened titles, abstracts, and full-text articles. We conducted a narrative synthesis and risk of bias assessment for each study. We assessed the utility of biomedical QA systems.
Results: We included 79 studies and identified themes including question realism, answer reliability, answer utility, clinical specialism, systems, usability, and evaluation methods. The clinicians' questions used to train and evaluate QA systems were restricted to certain sources, types, and complexity levels. No system communicated confidence levels in its answers or sources. Many studies suffered from high risks of bias and applicability concerns. Only 8 studies completely satisfied any criterion for clinical utility, and only 7 reported user evaluations. Most systems were built with limited input from clinicians.
Discussion: While machine learning methods have led to increased accuracy, most studies imperfectly reflected real-world healthcare information needs. Key research priorities include developing more realistic healthcare QA datasets and considering the reliability of answer sources, rather than merely focusing on accuracy.
Keywords: artificial intelligence; clinical decision support; evidence-based medicine; natural language processing; question answering.
© The Author(s) 2024. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Conflict of interest statement
None declared.