Conducting interactive experiments online

Antonio A Arechar et al. Exp Econ. 2018;21(1):99-131. doi: 10.1007/s10683-017-9527-2. Epub 2017 May 9.

Abstract

Online labor markets provide new opportunities for behavioral research, but conducting economic experiments online raises important methodological challenges. This particularly holds for interactive designs. In this paper, we provide a methodological discussion of the similarities and differences between interactive experiments conducted in the laboratory and online. To this end, we conduct a repeated public goods experiment with and without punishment using samples from the laboratory and the online platform Amazon Mechanical Turk. We chose to replicate this experiment because it is long and logistically complex. It therefore provides a good case study for discussing the methodological and practical challenges of online interactive experimentation. We find that basic behavioral patterns of cooperation and punishment in the laboratory are replicable online. The most important challenge of online interactive experiments is participant dropout. We discuss measures for reducing dropout and show that, for our case study, dropouts are exogenous to the experiment. We conclude that data quality for interactive experiments via the Internet is adequate and reliable, making online interactive experimentation a potentially valuable complement to laboratory studies.

Keywords: Amazon Mechanical Turk; Behavioral research; Experimental methodology; Internet experiments; Public goods game; Punishment.
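The abstract describes a repeated public goods game with a punishment stage. As a concrete illustration of this class of game, here is a minimal sketch of a standard linear public goods stage followed by costly punishment (Fehr-Gächter style). The endowment, marginal per-capita return, and 1:3 punishment technology below are conventional illustrative values, not parameters taken from this paper.

```python
# Illustrative sketch of a linear public goods game with punishment.
# All numeric parameters are conventional assumptions, NOT from the paper.

ENDOWMENT = 20       # points per player per period (assumption)
MPCR = 0.4           # marginal per-capita return on the public good (assumption)
PUNISH_COST = 1      # cost to the punisher per deduction point (assumption)
PUNISH_IMPACT = 3    # points removed from the target per deduction point (assumption)

def stage_payoffs(contributions):
    """Payoffs after the contribution stage, before punishment."""
    pot = sum(contributions)
    return [ENDOWMENT - c + MPCR * pot for c in contributions]

def apply_punishment(payoffs, deductions):
    """Apply a punishment stage.

    deductions[i][j] = deduction points player i assigns to player j.
    Punishing is costly for the punisher and more costly for the target.
    """
    out = list(payoffs)
    for i, row in enumerate(deductions):
        for j, pts in enumerate(row):
            if i == j or pts == 0:
                continue
            out[i] -= PUNISH_COST * pts
            out[j] -= PUNISH_IMPACT * pts
    return out
```

Under these assumed parameters, full cooperation in a group of four yields 32 points each, while a lone free rider earns 44 before punishment; three group members each assigning 2 deduction points reduce the free rider to 26.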


Figures

Fig. 1 Attrition throughout the course of the experiment. Colors depict the group size. We always started with groups of four but let participants continue if a member dropped out. (Color figure online)
Fig. 2 Contributions over time. Numbers in parentheses are the mean contributions in each experimental condition. Error bars indicate 95% confidence intervals (clustered at the group level)
Fig. 3 Frequencies of punishment over time. Frequencies are calculated by counting instances of assigning non-zero deduction points out of the total number of punishment opportunities per participant, per recipient, per period. Mean punishment frequencies are shown in parentheses. Error bars indicate 95% confidence intervals clustered at the group level
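The frequency measure in the Fig. 3 caption (share of punishment opportunities in which a non-zero deduction was assigned) can be sketched as follows; the flat list-of-decisions data layout is an assumption for illustration, not the paper's actual data format.

```python
# Sketch of the punishment-frequency calculation from the Fig. 3 caption.
# Input layout (one deduction decision per punishment opportunity,
# i.e. per punisher x recipient x period) is an assumption.

def punishment_frequency(decisions):
    """Return the share of opportunities with a non-zero deduction.

    decisions: iterable of deduction points, one entry per
    punishment opportunity.
    """
    decisions = list(decisions)
    if not decisions:
        return 0.0
    return sum(1 for d in decisions if d > 0) / len(decisions)
```

For example, two non-zero deductions out of five opportunities give a frequency of 0.4.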
Fig. 4 Directionality and severity of punishment in our laboratory and online samples. Stacked bars show frequency distributions of punishment decisions. Each bar shows the distribution for a given difference between a punisher's contribution and their target's contribution to the public good

