The assessment of the quality of reporting of meta-analyses in diagnostic research: a systematic review

Brian H Willis et al. BMC Med Res Methodol. 2011 Dec 9;11:163. doi: 10.1186/1471-2288-11-163.

Abstract

Background: Over the last decade a number of guidelines have been published that aim to improve the quality of reporting in published studies and reviews. In systematic reviews, reporting quality may be measured by compliance with the PRISMA statement. This review aims to evaluate the quality of reporting in published meta-analyses of diagnostic tests using the PRISMA statement, and to establish whether there has been a measurable improvement over time.

Methods: Eight databases were searched for reviews published prior to 31st December 2008. Studies were selected if they evaluated a diagnostic test, measured test performance, searched two or more databases, stated the search terms and inclusion criteria, and used a statistical method to summarise a test's performance. Data were extracted on the review characteristics and the items of the PRISMA statement. To measure the change in the quality of reporting over time, PRISMA items for two periods of equal duration were compared.

Results: Compliance with the PRISMA statement was generally poor: none of the reviews completely adhered to all 27 checklist items. Of the 236 meta-analyses included after selection, only 2 (1%) reported the study protocol, 59 (25%) reported the searches used, 76 (32%) reported the results of a risk of bias assessment, and 82 (35%) reported the abstract as a structured summary. Only 11 studies were published before 2000, so the impact of QUOROM on the quality of reporting could not be evaluated. However, the periods 2001-2004 and 2005-2008 (covering 93% of studies) were compared using relative risks (RR). There was an increase in the proportion of reviews reporting on five PRISMA items: eligibility criteria (RR 1.13, 95% CI 1.00-1.27); risk of bias across studies (methods) (RR 1.81, 95% CI 1.34-2.44); study selection results (RR 1.48, 95% CI 1.05-2.09); results of individual studies (RR 1.37, 95% CI 1.09-1.72); and risk of bias across studies (results) (RR 1.65, 95% CI 1.20-2.25).
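The period comparison above rests on relative risks with 95% confidence intervals. As a minimal sketch of how such an interval is typically obtained (the paper does not state which CI method it used; the standard log-scale Wald interval and the example counts below are assumptions for illustration only):

```python
import math

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """Relative risk of an item being reported in one period versus
    another, with a Wald-type 95% CI computed on the log scale.
    a/n1: reported count and total reviews in the later period;
    b/n2: reported count and total reviews in the earlier period.
    All counts here are hypothetical, not taken from the paper."""
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical example: 50/100 reviews reporting in 2005-08 vs 25/100 in 2001-04
rr, lo, hi = relative_risk_ci(50, 100, 25, 100)
```

An interval excluding 1 (as for five of the PRISMA items above) indicates a statistically significant change in reporting between the two periods.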

Conclusion: Although there has been an improvement in the quality of meta-analyses in diagnostic research, there are still many deficiencies in reporting that future reviewers need to address if readers are to trust the validity of the reported findings.


Figures

Figure 1
Flowchart of studies showing results of applying the inclusion criteria. Also shown are the types of study or reasons for exclusion.
Figure 2
Number of included meta-analyses per year of publication.
Figure 3
Comparison of periods 2001-04 and 2005-08 by compliance with the PRISMA statement. The numbered items (#) correspond to the PRISMA item numbers (see table 1). RR (95% CI) denotes the relative risk with the associated 95% confidence interval.
Figure 4
Changing pattern of quality assessment in meta-analyses of diagnostic tests. Comparison of the percentage of reviews published per year using the QUADAS tool, other forms of quality assessment, and no quality assessment. Earlier years are not included due to small sample sizes (around two studies per year).
Figure 5
Comparison of HTA reviews with other reviews using PRISMA. In nine PRISMA items, the HTA reviews were significantly better reported than other types of reviews. The numbered items (#) correspond to the PRISMA item numbers (see table 1). In item #5, the relative risk was undefined, but Fisher's exact test demonstrated a significant difference (p = 0.0038) in favour of the HTA reports. RR (95% CI) denotes the relative risk with the associated 95% confidence interval.
Figure 6
Sensitivity analysis. The HTA reports have been removed from the sample to check the robustness of the results. The numbered items (#) correspond to the PRISMA item numbers (see table 1). In item #5, the relative risk was undefined, but Fisher's exact test demonstrated no significant difference (p = 1.00) between the two periods. Overall there was no change in the significance of the results for any of the 27 PRISMA items. RR (95% CI) denotes the relative risk with the associated 95% confidence interval.
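The captions above note that Fisher's exact test was used where the relative risk was undefined (a zero cell in the 2x2 table). As an illustrative sketch only (the table counts below are hypothetical, not the paper's data), a two-sided Fisher's exact p-value for a 2x2 table can be computed directly from the hypergeometric distribution:

```python
import math

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]]: sums the hypergeometric probabilities of all
    tables with the same margins that are no more likely than the
    observed table. Counts used below are hypothetical examples."""
    r1, r2 = a + b, c + d          # row totals
    c1, n = a + c, a + b + c + d   # first column total, grand total

    def prob(x):
        # Probability of the table whose top-left cell equals x
        return math.comb(r1, x) * math.comb(r2, c1 - x) / math.comb(n, c1)

    p_obs = prob(a)
    lo_x = max(0, c1 - r2)   # smallest feasible top-left cell
    hi_x = min(r1, c1)       # largest feasible top-left cell
    # Small tolerance so tables exactly as likely as the observed one count
    return sum(prob(x) for x in range(lo_x, hi_x + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical table: 3 of 4 items reported in one group, 1 of 4 in the other
p = fisher_exact_2x2(3, 1, 1, 3)
```

Unlike the RR, which is undefined when a cell count is zero, this test remains valid for sparse tables, which is presumably why it was used for item #5.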

