
The Downstream Benefits of Experimentation

Published online by Cambridge University Press: 04 January 2017

Donald P. Green
Department of Political Science, Yale University, 124 Prospect Street, New Haven, CT 06520-8301. e-mail: donald.green@yale.edu

Alan S. Gerber
Department of Political Science, Yale University, 124 Prospect Street, New Haven, CT 06520-8301. e-mail: alan.gerber@yale.edu

Abstract

The debate about the cost-effectiveness of randomized field experimentation ignores one of the most important potential uses of experimental data. This article defines and illustrates “downstream” experimental analysis—that is, analysis of the indirect effects of experimental interventions. We argue that downstream analysis may be as valuable as conventional analysis, perhaps even more so in the case of laboratory experimentation.

Type: Research Article

Copyright © Political Methodology Section of the American Political Science Association 2002


References

Angrist, Joshua D., Imbens, Guido W., and Rubin, Donald B. 1996. “Identification of Causal Effects Using Instrumental Variables.” Journal of the American Statistical Association 91:444–455.
Bound, John, Jaeger, David A., and Baker, Regina M. 1995. “Problems with Instrumental Variables Estimation When the Correlation Between the Instruments and the Endogenous Explanatory Variables Is Weak.” Journal of the American Statistical Association 90:443–450.
Gerber, Alan S., and Green, Donald P. 2000. “The Effects of Canvassing, Direct Mail, and Telephone Contact on Voter Turnout: A Field Experiment.” American Political Science Review 94:653–663.
Gerber, Alan S., and Green, Donald P. 2001. “Do Phone Calls Increase Voter Turnout? A Field Experiment.” Public Opinion Quarterly 65:75–85.
Gerber, Alan S., Green, Donald P., and Nickerson, David. 2001. “Testing for Publication Bias in Political Science.” Political Analysis 9:385–392.
Gerber, Alan S., Green, Donald P., and Shachar, Roni. 2000. “Voting May Be Habit Forming.” Unpublished manuscript.
Heckman, James J., and Smith, Jeffrey A. 1995. “Assessing the Case for Social Experiments.” Journal of Economic Perspectives 9:85–110.
Howell, William G., and Peterson, Paul E. 2002. The Education Gap. Washington, DC: Brookings Institution Press.
Imbens, Guido W., and Rubin, Donald B. 1997. “Bayesian Inference for Causal Effects in Randomized Experiments with Noncompliance.” Annals of Statistics 25:305–327.
Rosenzweig, Mark R., and Wolpin, Kenneth I. 2000. “Natural ‘Natural Experiments’ in Economics.” Journal of Economic Literature 38:827–874.
Wolfinger, Raymond E., and Rosenstone, Steven J. 1980. Who Votes? New Haven: Yale University Press.