Metastudies for robust tests of theory
- PMID: 29531092
- PMCID: PMC5856505
- DOI: 10.1073/pnas.1708285114
Abstract
We describe and demonstrate an empirical strategy useful for discovering and replicating empirical effects in psychological science. The method involves the design of a metastudy, in which many independent experimental variables (each of which may be a moderator of an empirical effect) are indiscriminately randomized. Radical randomization yields rich datasets that can be used to test the robustness of an empirical claim to some of the vagaries and idiosyncrasies of experimental protocols and enhances the generalizability of these claims. The strategy is made feasible by advances in hierarchical Bayesian modeling that allow for the pooling of information across unlike experiments and designs, and it is proposed here as a gold standard for both replication and exploratory research. The practical feasibility of the strategy is demonstrated with a replication of a study on subliminal priming.
Keywords: generalizability; many labs; metastudy; radical randomization; robustness.
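The radical-randomization idea described in the abstract can be sketched in a few lines: each micro-experiment draws its own independent random combination of design variables, so potential moderators vary freely across the study. This is a minimal illustrative sketch; the moderator names and levels below are hypothetical assumptions, not the authors' actual priming design.

```python
import random

# Hypothetical moderator variables for a priming metastudy.
# Names and levels are illustrative assumptions only.
MODERATORS = {
    "prime_duration_ms": [17, 33, 50],
    "mask_type": ["pattern", "noise", "none"],
    "soa_ms": [50, 100, 200],
    "stimulus_set": ["words", "digits"],
}

def radical_randomization(n_micro_experiments, seed=0):
    """Draw an independent random design for each micro-experiment,
    so every moderator is indiscriminately randomized across the study."""
    rng = random.Random(seed)
    designs = []
    for i in range(n_micro_experiments):
        design = {name: rng.choice(levels) for name, levels in MODERATORS.items()}
        design["id"] = i
        designs.append(design)
    return designs

designs = radical_randomization(4)
for d in designs:
    print(d)
```

Because designs differ from one micro-experiment to the next, the resulting data cannot be pooled with a single fixed-effects analysis; this is where the hierarchical Bayesian modeling mentioned in the abstract comes in, treating each micro-experiment's effect as a draw from a population-level distribution.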
Conflict of interest statement
The authors declare no conflict of interest.
