Review

BMJ Open. 2017 Sep 1;7(9):e015760. doi: 10.1136/bmjopen-2016-015760.

Common evidence gaps in point-of-care diagnostic test evaluation: a review of horizon scan reports

Jan Y Verbakel et al.

Abstract

Objective: Since 2008, the Oxford Diagnostic Horizon Scan Programme has been identifying and summarising evidence on new and emerging diagnostic technologies relevant to primary care. We used these reports to determine the sequence and timing of evidence for new point-of-care diagnostic tests and to identify common evidence gaps in this process.

Design: Systematic overview of diagnostic horizon scan reports.

Primary outcome measures: We obtained the primary studies referenced in each horizon scan report (n=40) and extracted details of the study size, clinical setting and design characteristics. In particular, we assessed whether each study evaluated test accuracy, test impact or cost-effectiveness. The evidence for each point-of-care test was mapped against the Horvath framework for diagnostic test evaluation.

Results: We extracted data from 500 primary studies. Most diagnostic technologies underwent clinical performance (ie, ability to detect a clinical condition) assessment (71.2%), with very few progressing to comparative clinical effectiveness (10.0%) and cost-effectiveness evaluation (8.6%), even in the more established and frequently reported clinical domains, such as cardiovascular disease. The median time to complete an evaluation cycle was 9 years (IQR 5.5-12.5 years). The sequence of evidence generation was typically haphazard, and some diagnostic tests appear to be implemented in routine care without completing essential evaluation stages such as clinical effectiveness.

Conclusions: Evidence generation for new point-of-care diagnostic tests is slow, tends to focus on accuracy, and overlooks other test attributes such as impact, implementation and cost-effectiveness. Evaluation of this dynamic cycle, and feeding back data from clinical effectiveness to refine analytical and clinical performance, is key to improving the efficiency of point-of-care diagnostic test development and its impact on clinically relevant outcomes. While the 'road map' of steps needed to generate evidence is reasonably well delineated, we provide evidence on the complexity, length and variability of the actual process that many diagnostic technologies undergo.

Keywords: diagnosis; primary care; point-of-care systems; evidence-based medicine; framework; horizon scanning reports.


Conflict of interest statement

Competing interests: MJT has received funding from Alere to conduct research and has provided consultancy to Roche Molecular Diagnostics. He is also a co-founder of Phoresia which is developing point-of-care tests. All the other authors have no competing interests to disclose.

Figures

Figure 1
Horvath et al's cyclical framework for the evaluation of diagnostic tests. This framework illustrates the key components of the test evaluation process. (1) Analytical performance is the aptitude of a diagnostic test to conform to predefined quality specifications. (2) Clinical performance examines the ability of the biomarker to conform to predefined clinical specifications in detecting patients with a certain clinical condition or in a physiological state. (3) Clinical effectiveness focuses on the test's ability to improve health outcomes that are relevant to an individual patient, also allowing comparison (4) of effectiveness between tests. (5) A cost-effectiveness analysis compares the changes in costs and health effects of introducing a test to assess the extent to which the test can be regarded as providing value for money. (6) Broader impact encompasses the consequences (eg, acceptability, social, psychological, legal, ethical, societal and organisational consequences) of testing beyond the above-mentioned components.
Figure 2
Setting (%) of the studies by disease area (according to the International Classification of Primary Care-Second edition coding).
Figure 3
Test evaluation component by disease area in absolute number (n) of studies.
Figure 4
Number of years between horizon scan report and original paper publication date by the intended role for each evaluation component. Size of bubbles represents number of studies proportionate to all studies for the intended role. BNP, B-natriuretic peptide; CRP, C reactive protein; FOBT, faecal occult blood test; HbA1c, glycated haemoglobin; hCG, human chorionic gonadotropin; hFABP, heart-type fatty acid-binding protein; INR, international normalised ratio; TSH, thyroid-stimulating hormone; WBC, white cell count.
Figure 5
Sequence of evidence generation for all seven horizon scan reports completing the full evaluation cycle. INR, international normalised ratio.
