Review

Sample size calculation for a stepped wedge trial

Gianluca Baio et al. Trials. 2015 Aug 17;16:354. doi: 10.1186/s13063-015-0840-9.

Abstract

Background: Stepped wedge trials (SWTs) can be considered a variant of a clustered randomised trial, although in many ways they involve additional complications in terms of statistical design and analysis. While the literature is rich for standard parallel or clustered randomised clinical trials (CRTs), it is much sparser for SWTs. The specific features of SWTs need to be addressed properly in the sample size calculations to ensure valid estimates of the intervention effect.

Methods: We critically review the available literature on analytical methods for sample size and power calculations in a SWT. In particular, we highlight the specific assumptions underlying currently used methods and comment on their validity and potential for extensions. Finally, we propose the use of simulation-based methods to overcome some of the limitations of analytical formulae. We performed a simulation exercise in which we compared simulation-based sample size computations with analytical methods and assessed the impact of varying the basic parameters on the resulting sample size/power, for continuous and binary outcomes and assuming both cross-sectional data and the closed cohort design.
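
As an illustration of the simulation-based approach described above, the following is a minimal sketch (written in Python, and not the authors' own implementation) of power estimation for a cross-sectional SWT with a continuous outcome: each simulated data set is generated under a linear mixed model with a cluster random intercept and a secular time trend, re-analysed with the same type of model, and power is estimated as the proportion of simulations in which the intervention effect is significant. The number of clusters, cluster-period size, ICC, effect size and time trend used below are assumed values for illustration only.

```python
# Hedged sketch of simulation-based power estimation for a cross-sectional SWT
# (continuous outcome). All parameter values are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_swt(n_clusters=25, n_per_period=20, n_periods=6,
                 effect=0.3, icc=0.1, sigma=1.0, time_trend=0.05, rng=None):
    """Simulate one cross-sectional SWT data set under a linear mixed model."""
    if rng is None:
        rng = np.random.default_rng()
    tau2 = icc * sigma**2 / (1 - icc)               # between-cluster variance implied by the ICC
    # Staggered rollout: after the baseline period, clusters cross over in batches.
    crossover = np.resize(np.arange(1, n_periods), n_clusters)
    u = rng.normal(0.0, np.sqrt(tau2), n_clusters)  # cluster random intercepts
    rows = []
    for c in range(n_clusters):
        for t in range(n_periods):
            treat = int(t >= crossover[c])
            y = (u[c] + time_trend * t + effect * treat
                 + rng.normal(0.0, sigma, n_per_period))
            rows.append(pd.DataFrame({"y": y, "cluster": c, "time": t, "treat": treat}))
    return pd.concat(rows, ignore_index=True)

def estimated_power(n_sims=200, alpha=0.05, **design):
    """Power = proportion of simulated trials whose intervention effect is significant."""
    rng = np.random.default_rng(1234)
    hits = 0
    for _ in range(n_sims):
        data = simulate_swt(rng=rng, **design)
        # Random intercept for cluster plus a categorical time effect.
        fit = smf.mixedlm("y ~ treat + C(time)", data, groups="cluster").fit()
        hits += int(fit.pvalues["treat"] < alpha)
    return hits / n_sims

if __name__ == "__main__":
    print(f"Estimated power: {estimated_power():.2f}")
```

Dropping C(time) from the analysis formula while keeping the time trend in the data-generating model reproduces the distortion discussed in the Results below, where failure to account for a time effect inflates the apparent power.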

Results: We compared the sample size requirements for a SWT with those of CRTs based on a comparable number of measurements in each cluster. In line with the existing literature, we found that when the level of correlation within the clusters is relatively high (for example, greater than 0.1), the SWT requires a smaller number of clusters. For low values of the intracluster correlation, the two designs produce more similar requirements in terms of the total number of clusters. We validated our simulation-based approach and compared the results of the sample size calculations with the analytical methods; the simulation-based procedures perform well, producing results that are extremely similar to those of the analytical methods. We found that the SWT is usually relatively insensitive to variations in the intracluster correlation, and that failure to account for a potential time effect will artificially and grossly overestimate the power of a study.
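
To make the comparison between the two designs concrete, the short sketch below contrasts the closed-form power of a parallel CRT, computed with the standard design effect, with that of a cross-sectional SWT, computed with the widely cited Hussey and Hughes (2007) variance formula (one example of the analytical methods this review covers). The cluster configuration, effect size and ICC values are assumed for illustration and are not taken from the simulation study reported here.

```python
# Hedged sketch: closed-form power for a parallel CRT vs a cross-sectional SWT.
# Inputs (clusters, cluster size, effect size, ICC) are illustrative assumptions.
import numpy as np
from scipy import stats

def crt_power(n_clusters, n_per_cluster, effect, sigma_e, icc, alpha=0.05):
    """Two-arm parallel CRT: power from the usual design-effect inflation."""
    total_var = sigma_e**2 / (1 - icc)              # within- plus between-cluster variance
    deff = 1 + (n_per_cluster - 1) * icc            # standard design effect
    var_diff = 2 * total_var * deff / ((n_clusters / 2) * n_per_cluster)
    return stats.norm.cdf(abs(effect) / np.sqrt(var_diff) - stats.norm.ppf(1 - alpha / 2))

def swt_power(n_clusters, n_per_period, n_periods, effect, sigma_e, icc, alpha=0.05):
    """Cross-sectional SWT: power from the Hussey and Hughes (2007) closed-form variance."""
    tau2 = icc * sigma_e**2 / (1 - icc)             # between-cluster variance
    s2 = sigma_e**2 / n_per_period                  # variance of a cluster-period mean
    crossover = np.resize(np.arange(1, n_periods), n_clusters)
    X = (np.arange(n_periods) >= crossover[:, None]).astype(float)  # clusters x periods exposure
    n_c, n_t = X.shape
    U = X.sum()                                     # total treated cluster-periods
    W = (X.sum(axis=0) ** 2).sum()
    V = (X.sum(axis=1) ** 2).sum()
    var = (n_c * s2 * (s2 + n_t * tau2)) / ((n_c * U - W) * s2
                                            + (U**2 + n_c * n_t * U - n_t * W - n_c * V) * tau2)
    return stats.norm.cdf(abs(effect) / np.sqrt(var) - stats.norm.ppf(1 - alpha / 2))

for icc in (0.01, 0.1, 0.3):
    print(f"ICC={icc:.2f}  CRT power={crt_power(24, 20, 0.3, 1.0, icc):.2f}  "
          f"SWT power={swt_power(24, 20, 6, 0.3, 1.0, icc):.2f}")
```

Printing the power for a few ICC values shows how the relative efficiency of the two designs shifts with the strength of the within-cluster correlation.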

Conclusions: We provide a framework for handling the sample size and power calculations of a SWT and suggest that simulation-based procedures may be more effective, especially in dealing with the specific features of the study at hand. In selected situations, and depending on the level of intracluster correlation and the cluster size, SWTs may be more efficient than comparable CRTs. However, the decision about which design to implement will be based on a wide range of considerations, including the costs associated with the number of clusters, the number of measurements and the trial duration.


Figures

Fig. 1. Power curves for a continuous outcome assuming 25 clusters, each with 20 subjects, and 6 time points including one baseline. We varied the intervention effect size and the ICC. Panel (a) shows the analysis for a repeated cross-sectional design, while panel (b) depicts the results for a closed cohort design. In panel (b), the selected ICCs are reported at the cluster and participant level.

Fig. 2. Power curves for a binary outcome assuming 25 clusters, each with 20 subjects, and 6 time points including one baseline. We varied the intervention effect size and the ICC. Panel (a) shows the analysis for a repeated cross-sectional design, while panel (b) depicts the results for a closed cohort design. In panel (b), the selected ICCs are reported at the cluster and participant level.

Fig. 3. Power curves for a continuous outcome assuming 24 clusters, each with 20 subjects. We varied the ICC and the number of randomisation crossover points. Panel (a) shows the analysis for a repeated cross-sectional design, while panel (b) depicts the results for a closed cohort design (assuming an individual-level ICC of 0.0016).

Fig. 4. Power curves for a binary outcome assuming 24 clusters, each with 20 subjects. We varied the ICC and the number of randomisation crossover points. Panel (a) shows the analysis for a repeated cross-sectional design, while panel (b) depicts the results for a closed cohort design (assuming an individual-level ICC of 0.0016).

Fig. 5. Power curves for a continuous outcome assuming 25 clusters, each with 20 subjects, and 6 time points at which measurements are taken (including one baseline time). We varied the way in which the assumed linear time effect is included in the model (if at all). Panel (a) shows the results for a repeated cross-sectional design; panel (b) shows the results for the closed cohort design, assuming a cluster-level ICC of 0.1 and varying the participant-level ICC; panel (c) shows the results for the closed cohort design, assuming a cluster-level ICC of 0.5 and varying the participant-level ICC.

Fig. 6. Power curves for a binary outcome assuming 25 clusters, each with 20 subjects, and 6 time points at which measurements are taken (including one baseline time). We varied the way in which the assumed linear time effect is included in the model (if at all). Panel (a) shows the results for a repeated cross-sectional design; panel (b) shows the results for the closed cohort design, assuming a cluster-level ICC of 0.1 and varying the participant-level ICC; panel (c) shows the results for the closed cohort design, assuming a cluster-level ICC of 0.5 and varying the participant-level ICC.
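
The modelling choice that Figs. 5 and 6 vary, namely whether and how the linear time effect enters the analysis model, can be made explicit. The sketch below shows three hypothetical specifications written as statsmodels mixed-model formulas; it assumes a data frame with columns y, treat, time and cluster (such as the one produced by the simulation sketch above) and is illustrative only, not the authors' exact model code.

```python
# Hedged sketch: alternative handling of the secular time effect in the analysis model.
import statsmodels.formula.api as smf

def fit_swt_model(data, time_effect="categorical"):
    """Random-intercept model for cluster, with the time effect handled in three ways."""
    formulas = {
        "none": "y ~ treat",                    # omits time altogether
        "linear": "y ~ treat + time",           # a single linear trend over periods
        "categorical": "y ~ treat + C(time)",   # a separate fixed effect for each period
    }
    return smf.mixedlm(formulas[time_effect], data, groups="cluster").fit()
```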
