Why small low-powered studies are worse than large high-powered studies and how to protect against "trivial" findings in research: comment on Friston (2012)
- PMID: 23583358
- DOI: 10.1016/j.neuroimage.2013.03.030
Abstract
It is sometimes argued that small studies provide better evidence for reported effects because they are less likely to report findings with small and trivial effect sizes (Friston, 2012). But larger studies actually protect better against inferences from trivial effect sizes, provided researchers make use of effect sizes and confidence intervals. Poor statistical power also comes at the cost of an inflated proportion of false positive findings, less power to "confirm" true effects, and bias toward inflated reported effect sizes. Small studies (n=16) lack the precision to reliably distinguish small and medium-to-large effect sizes (r<.50) from random noise (α=.05), which larger studies (n=100) do with a high level of confidence (r=.50, p=.00000012). The present paper presents the arguments researchers need to refute the claim that small low-powered studies carry a higher degree of scientific evidence than large high-powered studies.
Keywords: False positive findings; Inflated effect sizes; Statistical power.
Copyright © 2013 Elsevier Inc. All rights reserved.
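The abstract's n=16 versus n=100 contrast can be checked numerically. Below is a minimal Monte Carlo sketch (not from the paper; the seed, simulation count, and standard t-table critical values are choices made here for illustration): it computes the smallest sample correlation that reaches two-tailed significance at α=.05, and simulates power to detect a true correlation of r=.50 at each sample size.

```python
import math
import random

def sample_r(n, rho, rng):
    """Draw n bivariate-normal pairs with true correlation rho; return sample r."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = rho * x + math.sqrt(1 - rho**2) * rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def critical_r(n, t_crit):
    """Smallest |r| reaching two-tailed significance,
    inverting t = r*sqrt(n-2)/sqrt(1-r^2)."""
    df = n - 2
    return t_crit / math.sqrt(df + t_crit**2)

# Standard two-tailed t critical values at alpha=.05 (t tables): df=14 and df=98
T_CRIT = {16: 2.1448, 100: 1.9845}

rng = random.Random(1)
for n in (16, 100):
    rc = critical_r(n, T_CRIT[n])
    sims = 5000
    power = sum(abs(sample_r(n, 0.5, rng)) > rc for _ in range(sims)) / sims
    print(f"n={n:3d}: critical |r| = {rc:.3f}, simulated power for r=.50 = {power:.2f}")
```

The critical |r| at n=16 comes out just under .50, so any smaller effect can never reach significance in such a study, and power to detect r=.50 itself hovers around one half; at n=100 the critical |r| drops below .20 and power for r=.50 is near certain, consistent with the abstract's argument.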
Comment in
- Sample size and the fallacies of classical inference. Neuroimage. 2013 Nov 1;81:503-504. doi: 10.1016/j.neuroimage.2013.02.057. Epub 2013 Apr 11. PMID: 23583356.
Comment on
- Ten ironic rules for non-statistical reviewers. Neuroimage. 2012 Jul 16;61(4):1300-10. doi: 10.1016/j.neuroimage.2012.04.018. Epub 2012 Apr 13. PMID: 22521475. Review.
