Optimal Allocation of Interviews to Baseline and Endline Surveys in Place-Based Randomized Trials and Quasi-Experiments

Donald P Green et al. Eval Rev. 2018 Aug;42(4):391-422. doi: 10.1177/0193841X18799128. Epub 2018 Oct 9.

Abstract

Background: Many place-based randomized trials and quasi-experiments use a pair of cross-section surveys, rather than panel surveys, to estimate the average treatment effect of an intervention. In these studies, a random sample of individuals in each geographic cluster is selected for a baseline (preintervention) survey, and an independent random sample is selected for an endline (postintervention) survey.

Objective: This design raises a question: given a fixed budget, how should a researcher allocate interviews between the baseline and endline surveys to maximize the precision of the estimated average treatment effect?

Results: We formalize this allocation problem and show that although the optimal share of interviews allocated to the baseline survey is always less than one-half, it is an increasing function of the total number of interviews per cluster, the cluster-level correlation between the baseline measure and the endline outcome, and the intracluster correlation coefficient. An example using multicountry survey data from Africa illustrates how the optimal allocation formulas can be combined with data to inform decisions at the planning stage. Another example uses data from a digital political advertising experiment in Texas to explore how precision would have varied with alternative allocations.
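The comparative statics described above can be illustrated numerically. The sketch below is not the paper's derivation; it uses a deliberately simplified variance model in which regression adjustment on noisy baseline cluster means removes the predictable part of the endline between-cluster variance, and a grid search locates the baseline share that minimizes the residual variance (and hence the standard error). All parameter values (ICC = 0.1, cluster-level correlation ρ = 0.8, total variance normalized to 1, equal between-cluster variance in both waves) are assumptions chosen for illustration.

```python
import numpy as np

def adjusted_residual_var(s, n, sigma_b2, sigma_w2, rho):
    """Residual variance of an endline cluster mean after regression
    adjustment on the (noisy) baseline cluster mean, for baseline share s.

    Simplified model: each wave's cluster mean has between-cluster
    variance sigma_b2 plus sampling variance sigma_w2 / (interviews in
    that wave); the two waves' true cluster means correlate at rho.
    """
    v_end = sigma_b2 + sigma_w2 / ((1.0 - s) * n)  # endline mean variance
    v_base = sigma_b2 + sigma_w2 / (s * n)         # baseline mean variance
    cov = rho * sigma_b2                           # cluster-level covariance
    return v_end - cov**2 / v_base                 # variance left after adjustment

def optimal_baseline_share(n, icc=0.1, rho=0.8):
    """Grid-search the baseline share s in (0, 1) that minimizes the
    residual variance, i.e. the SE of the adjusted estimate."""
    sigma_b2, sigma_w2 = icc, 1.0 - icc            # total variance normalized to 1
    grid = np.linspace(0.01, 0.99, 981)
    resid = [adjusted_residual_var(s, n, sigma_b2, sigma_w2, rho) for s in grid]
    return float(grid[int(np.argmin(resid))])

for n in (20, 100, 500):
    print(n, round(optimal_baseline_share(n), 2))
```

Even under this toy model, the optimum behaves as the abstract reports: the best baseline share stays below one-half and rises with the number of interviews per cluster, and the adjusted estimate at the optimum beats allocating every interview to the endline survey.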

Keywords: cluster-randomized experiment; place-randomized trial; quasi-experiment; repeated cross-section surveys; sample allocation.


Conflict of interest statement

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Figures

Figure 1. Survey allocation and precision when the outcome variable is economic optimism. Near the top left corner, the filled triangle (for n = 100 interviews per cluster) and circle (for n = 500) show the standard error (SE) of the unadjusted estimate of average treatment effect when all interviews are allocated to the endline survey. The curves plot the SE of the regression-adjusted estimate against the share of interviews allocated to the baseline survey. The open circle on each curve marks the optimal baseline share. See text for details.

Figure 2. Survey allocation and precision when the outcome variable is inclination to protest. Near the top left corner, the filled triangle (for n = 100 interviews per cluster) and circle (for n = 500) show the standard error (SE) of the unadjusted estimate of average treatment effect when all interviews are allocated to the endline survey. The curves plot the SE of the regression-adjusted estimate against the share of interviews allocated to the baseline survey. The open circle on each curve marks the optimal baseline share. See text for details.

Figure 3. Survey allocation and precision in the digital advertising example. The top (dotted) curve plots the SE of the unadjusted treatment effect estimate against the treatment group’s share of interviews when all interviews are allocated to the endline survey. The other three curves plot the SE of the regression-adjusted estimate against the treatment group’s share of interviews, holding the baseline survey’s share constant at 25% (the actual allocation), 43% (the suggested allocation), or 50%. See text for details.
