Meta-Analysis

Systematic reviews and meta-analyses in the health sciences: Best practice methods for research syntheses

Blair T Johnson et al. Soc Sci Med. 2019 Jul;233:237-251. doi: 10.1016/j.socscimed.2019.05.035. Epub 2019 May 28.

Abstract

Rationale: The journal Social Science & Medicine recently adopted the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA; Moher et al., 2009) as guidelines for authors to use when disseminating their systematic reviews (SRs).

Approach: After providing a brief history of evidence synthesis, this article describes why reporting standards are important, summarizes the sequential steps involved in conducting SRs and meta-analyses, and outlines additional methodological issues that researchers should address when conducting and reporting results from their SRs.

Results and conclusions: Successful SRs result when teams of reviewers with appropriate expertise apply the highest scientific rigor to every step of the SR process; SRs that lack foresight are unlikely to succeed. We advocate that SR teams consider potential moderators (M) when defining their research problem, along with Time, Outcomes, Population, Intervention, Context, and Study design (i.e., TOPICS + M). We also show that, because the PRISMA reporting standards only partially overlap the dimensions of methodological quality, an SR can satisfy PRISMA standards yet still have poor methodological quality. We further discuss the limitations of such standards and instruments in light of the assumptions of the SR process, including meta-analysis, which spans the other, highly synergistic SR steps: study search and selection, coding of study characteristics and effects, analysis, interpretation, reporting, and finally re-analysis and criticism. When an SR targets an important question with the best possible SR methods, its results can become a definitive statement that guides future research and policy decisions for years to come.

Keywords: Evidence synthesis; Meta-analysis; Methodological quality; Research synthesis; Risk of bias; Systematic reviews.


Figures

Fig. 1.
The meta-analysis process depicted in seven steps that build on one another and that sometimes must be repeated as feedback emerges during the process.
Fig. 2.
Empirical demonstration of the moving constant technique: Sexual risk reduction following a behavioural intervention as a function of each sample's baseline depression. Sexual risk behaviour declined following the intervention at the last available follow-up to the extent that samples had higher levels of baseline depression (treatment [control] group effects appear as darker [white] triangles; the size of each plotted value reflects its weight in the analysis). The solid regression line indicates trends across initial levels of depression; dashed lines provide 95% confidence bands for these trends. Reproduced from Lennon et al. (2012).
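The regression trend described in the Fig. 2 caption is an inverse-variance weighted meta-regression of effect sizes on a study-level moderator. A minimal sketch of that fit, assuming illustrative data (the values below are hypothetical, not from Lennon et al., 2012):

```python
# Hedged sketch: inverse-variance weighted meta-regression of effect sizes
# on a study-level moderator, as in a plot like Fig. 2. All data values
# below are illustrative placeholders, not the reviewed studies' data.
import numpy as np

def weighted_meta_regression(effects, variances, moderator):
    """Fit effect = b0 + b1 * moderator by weighted least squares,
    using inverse-variance weights. Returns (intercept, slope)."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    X = np.column_stack([np.ones(len(y)), moderator])
    W = np.diag(w)
    # Solve the weighted normal equations (X'WX) b = X'W y
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return b

# Hypothetical per-sample effects, variances, and baseline depression scores:
effects = np.array([-0.05, 0.02, 0.10, 0.18, 0.25])
variances = np.array([0.02, 0.015, 0.01, 0.02, 0.015])
baseline_depression = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

b0, b1 = weighted_meta_regression(effects, variances, baseline_depression)
# A positive slope b1 corresponds to larger intervention effects in
# samples with higher baseline depression, the pattern Fig. 2 reports.
```

The weighting step matters: larger (lower-variance) samples pull the regression line harder, which is why the caption notes that each point's plotted size reflects its analytic weight.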
Fig. 3.
Contour-enhanced funnel plots showing effect sizes from three literatures: (a) one with no clear evidence of selective (e.g., publication) bias, as even published studies (solid triangles) commonly achieve null results and unpublished studies (hollow triangles) achieve statistically significant outcomes (this distribution is also homogeneous, τ2 = 0.00047, I2 = 0%); (b) one with marked evidence of selection bias, with only published studies routinely finding a significant effect and unpublished studies routinely finding non-significant effects (τ2 = 0.00047, I2 = 0%); and (c) a literature with marked heterogeneity (τ2 = 0.0145, I2 = 61%). The contours surrounding the null value show at which points individual effects reach significance. Effects in the white zone are statistically non-significant at the .05 level (p > .05).
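The τ² and I² values quoted in the Fig. 3 caption summarize between-study heterogeneity. A minimal sketch of how these statistics are computed, using the DerSimonian-Laird method-of-moments estimator for τ² (the caption does not state which estimator the authors used, so this is an assumption) and illustrative data:

```python
# Hedged sketch: computing Cochran's Q, tau^2 (DerSimonian-Laird), and I^2,
# the heterogeneity statistics quoted in the funnel-plot captions.
# The effect sizes and variances below are illustrative placeholders.
import numpy as np

def heterogeneity(effects, variances):
    """Return (Q, tau2, I2_percent) for a set of study effect sizes
    and their sampling variances."""
    theta = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)      # fixed-effect weights
    pooled = np.sum(w * theta) / np.sum(w)            # fixed-effect mean
    q = np.sum(w * (theta - pooled) ** 2)             # Cochran's Q
    k = len(theta)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # DL estimator, floored at 0
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return q, tau2, i2

# Hypothetical effects and variances for five studies:
effects = [0.10, 0.12, 0.30, -0.05, 0.22]
variances = [0.01, 0.02, 0.015, 0.01, 0.02]
q, tau2, i2 = heterogeneity(effects, variances)
```

When the studies share one true effect, Q stays near its expected value of k − 1, so τ² truncates to zero and I² ≈ 0%, as in panels (a) and (b); the marked heterogeneity of panel (c) corresponds to Q well above k − 1.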
