
Baseline, Placebo, and Treatment: Efficient Estimation for Three-Group Experiments

Published online by Cambridge University Press:  04 January 2017

Alan S. Gerber*
Affiliation:
Institution for Social and Policy Studies and Department of Political Science, Yale University, 77 Prospect Street, New Haven, CT 06511
Donald P. Green
Affiliation:
Institution for Social and Policy Studies and Department of Political Science, Yale University, 77 Prospect Street, New Haven, CT 06511
Edward H. Kaplan
Affiliation:
School of Management, School of Public Health, and School of Engineering and Applied Science, Yale University, 135 Prospect Street, New Haven, CT 06511
Holger L. Kern
Affiliation:
Institution for Social and Policy Studies, Yale University, 77 Prospect Street, New Haven, CT 06511. From August 2010, Department of Political Science, University of South Carolina, 817 Henderson Street, Columbia, SC 29208
* e-mail: alan.gerber@yale.edu (corresponding author)

Abstract

Randomized experiments commonly compare subjects receiving a treatment to subjects receiving a placebo. An alternative design, frequently used in field experimentation, compares subjects assigned to an untreated baseline group to subjects assigned to a treatment group, adjusting statistically for the fact that some members of the treatment group may fail to receive the treatment. This article shows the potential advantages of a three-group design (baseline, placebo, and treatment). We present a maximum likelihood estimator of the treatment effect for this three-group design and illustrate its use with a field experiment that gauges the effect of prerecorded phone calls on voter turnout. The three-group design offers efficiency advantages over two-group designs while at the same time guarding against unanticipated placebo effects (which would undermine the placebo-treatment comparison) and unexpectedly low rates of compliance with the treatment assignment (which would undermine the baseline-treatment comparison).
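The trade-off described in the abstract can be illustrated with a small Monte Carlo sketch. This is a hypothetical simulation, not the authors' maximum likelihood estimator: the arm size, compliance rate, baseline turnout, and treatment effect are all invented for illustration. It contrasts the two two-group estimators the three-group design nests, assuming no placebo effect and full compliance measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30_000          # subjects per arm (hypothetical)
p_comply = 0.5      # assumed share who pick up the phone (compliers)
base_rate = 0.30    # assumed turnout rate absent treatment
effect = 0.05       # assumed true treatment effect among compliers

def simulate_arm(treated: bool):
    # Complier status is a latent subject trait; turnout rises by
    # `effect` only for compliers who actually receive the treatment.
    comply = rng.random(n) < p_comply
    vote = rng.random(n) < base_rate + effect * (comply & treated)
    return comply, vote

c_t, y_t = simulate_arm(True)    # treatment arm: real call delivered
c_p, y_p = simulate_arm(False)   # placebo arm: innocuous call, no message effect
_, y_b = simulate_arm(False)     # baseline arm: no call, compliance unobserved

# Placebo-treatment comparison: compliers are identified in both arms,
# so compare turnout among the contacted directly.
est_placebo = y_t[c_t].mean() - y_p[c_p].mean()

# Baseline-treatment comparison (Wald/IV): intent-to-treat difference
# scaled by the contact rate.
est_wald = (y_t.mean() - y_b.mean()) / c_t.mean()

print(f"placebo-treatment: {est_placebo:.3f}, baseline-treatment: {est_wald:.3f}")
```

Both estimators recover the assumed complier effect of 0.05 in expectation; the Wald estimator's noise grows as the contact rate falls, while the placebo comparison fails if the placebo call itself affects turnout, which is the abstract's case for fielding all three groups.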

Type: Research Article
Copyright: © The Author 2010. Published by Oxford University Press on behalf of the Society for Political Methodology


Footnotes

Authors' note: The authors are grateful to Mark Grebner, who conceived of the intervention described here and assisted in data collection, and to the Institution for Social and Policy Studies. We also thank the editors and anonymous reviewers, who provided very valuable comments. The experiment reported in this article was reviewed and approved by the Human Subjects Committee at Yale University. Supplementary materials for this article are available on the Political Analysis Web site.

References

Abadie, Alberto. 2003. Semiparametric instrumental variable estimation of treatment response models. Journal of Econometrics 113(2): 231–63.
Angrist, Joshua D., Imbens, Guido W., and Rubin, Donald B. 1996. Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434): 444–55.
Angrist, Joshua D., and Pischke, Jörn-Steffen. 2009. Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.
Arceneaux, Kevin, Gerber, Alan S., and Green, Donald P. 2006. Comparing experimental and matching methods using a large-scale voter mobilization experiment. Political Analysis 14(1): 37–62.
Boruch, Robert F. 1997. Randomized experiments for planning and evaluation: A practical guide. Thousand Oaks, CA: SAGE Publications.
Bound, John, Jaeger, David A., and Baker, Regina M. 1995. Problems with instrumental variables estimation when the correlation between the instruments and the endogenous explanatory variable is weak. Journal of the American Statistical Association 90(430): 443–50.
Cheng, Jing, and Small, Dylan S. 2006. Bounds on causal effects in three-arm trials with non-compliance. Journal of the Royal Statistical Society, Series B 68(5): 815–36.
Cox, David R., and Hinkley, David V. 1974. Theoretical statistics. London: Chapman and Hall.
de Craen, Anton J. M., Kaptchuk, Ted J., Tijssen, Jan G. P., and Kleijnen, Jos. 1999. Placebos and placebo effects in medicine: Historical overview. Journal of the Royal Society of Medicine 92(10): 511–15.
Efron, Bradley, and Feldman, David. 1991. Compliance as an explanatory variable in clinical trials. Journal of the American Statistical Association 86(413): 9–17.
Frangakis, Constantine E., and Rubin, Donald B. 2002. Principal stratification in causal inference. Biometrics 58(1): 21–29.
Gerber, Alan S., Green, Donald P., and Kaplan, Edward H. 2004. The illusion of learning from observational research. In Problems and methods in the study of politics, eds. Shapiro, Ian, Smith, Rogers M., and Masoud, Tarek E., 251–73. Cambridge: Cambridge University Press.
Gerber, Alan S., Green, Donald P., and Larimer, Christopher W. 2008. Social pressure and voter turnout: Evidence from a large-scale field experiment. American Political Science Review 102(1): 33–48.
Gertler, Paul. 2004. Do conditional cash transfers improve child health? Evidence from PROGRESA's control randomized experiment. American Economic Review 94(2): 336–41.
Green, Donald P., and Gerber, Alan S. 2008. Get out the vote: How to increase voter turnout. 2nd ed. Washington, DC: Brookings Institution Press.
Holland, Paul W. 1986. Statistics and causal inference. Journal of the American Statistical Association 81(396): 945–60.
Imbens, Guido W. 2007. Nonadditive models with endogenous regressors. In Advances in economics and econometrics, Vol. III, chap. 2, eds. Blundell, Richard, Newey, Whitney, and Persson, Torsten, 17–46. Cambridge: Cambridge University Press.
Imbens, Guido W., and Angrist, Joshua D. 1994. Identification and estimation of local average treatment effects. Econometrica 62(2): 467–76.
Imbens, Guido W., and Rosenbaum, Paul R. 2005. Robust, accurate confidence intervals with a weak instrument: Quarter of birth and education. Journal of the Royal Statistical Society, Series A 168(1): 109–26.
Imbens, Guido W., and Rubin, Donald B. 1997. Bayesian inference for causal effects in randomized experiments with noncompliance. Annals of Statistics 25(1): 305–27.
Morgan, Stephen L., and Winship, Christopher. 2007. Counterfactuals and causal inference. Cambridge: Cambridge University Press.
Nickerson, David W. 2005. Scalable protocols offer efficient design for field experiments. Political Analysis 13(3): 233–52.
Nickerson, David W. 2008. Is voting contagious? Evidence from two field experiments. American Political Science Review 102(1): 49–57.
Rosenthal, Robert. 1985. Designing, analyzing, interpreting, and summarizing placebo studies. In Placebo: Theory, research, and mechanisms, eds. White, Leonard, Tursky, Bernard, and Schwartz, Gary E., 110–36. New York: Guilford Press.
Rubin, Donald B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66(5): 688–701.
Rubin, Donald B. 1977. Assignment to treatment group on the basis of a covariate. Journal of Educational Statistics 2(1): 1–26.
Rubin, Donald B. 1978. Bayesian inference for causal effects: The role of randomization. Annals of Statistics 6(1): 34–58.
Rubin, Donald B. 1990. Comment: Neyman (1923) and causal inference in experiments and observational studies. Statistical Science 5(4): 472–80.
Shadish, William R., Cook, Thomas D., and Campbell, Donald T. 2002. Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Silverman, William A. 1980. Retrolental fibroplasia: A modern parable. New York: Grune & Stratton.
Torgerson, David J., and Torgerson, Carole J. 2008. Designing randomized trials in health, education and the social sciences. New York: Palgrave Macmillan.
Supplementary material: Gerber et al. supplementary material, Appendix (PDF, 193.8 KB).