Testing the Accuracy of Regression Discontinuity Analysis Using Experimental Benchmarks

Published online by Cambridge University Press: 04 January 2017

Donald P. Green
Affiliation:
Department of Political Science, Institution for Social and Policy Studies, Yale University, 77 Prospect St., New Haven, CT 06511
Terence Y. Leong
Affiliation:
Analyst Institute, 815 Sixteenth Street NW, Washington, DC 20006
Holger L. Kern*
Affiliation:
Institution for Social and Policy Studies, Yale University, 77 Prospect St., New Haven, CT 06511
Alan S. Gerber
Affiliation:
Department of Political Science, Institution for Social and Policy Studies, Yale University, 77 Prospect St., New Haven, CT 06511
Christopher W. Larimer
Affiliation:
Department of Political Science, University of Northern Iowa, 332 Sabin Hall, Cedar Falls, IA 50614
*
e-mail: holger.kern@yale.edu (corresponding author)

Abstract

Regression discontinuity (RD) designs enable researchers to estimate causal effects using observational data. These causal effects are identified at the point of discontinuity that distinguishes those observations that do or do not receive the treatment. One challenge in applying RD in practice is that data may be sparse in the immediate vicinity of the discontinuity. Expanding the analysis to observations outside this immediate vicinity may improve the statistical precision with which treatment effects are estimated, but including more distant observations also increases the risk of bias. Model specification is another source of uncertainty; as the bandwidth around the cutoff point expands, linear approximations may break down, requiring more flexible functional forms. Using data from a large randomized experiment conducted by Gerber, Green, and Larimer (2008), this study attempts to recover an experimental benchmark using RD and assesses the uncertainty introduced by various aspects of model and bandwidth selection. More generally, we demonstrate how experimental benchmarks can be used to gauge and improve the reliability of RD analyses.
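The local estimation strategy the abstract describes — fitting regressions on each side of the cutoff within a chosen bandwidth and taking the difference in fitted values at the discontinuity — can be sketched as follows. This is a minimal illustration, not the paper's actual specification: the cutoff, bandwidth, and simulated data are assumptions chosen for the example.

```python
import numpy as np

def rd_estimate(x, y, cutoff, bandwidth):
    """Local linear RD estimate: fit separate OLS lines on each side
    of the cutoff, using only observations within `bandwidth` of it,
    and return the jump in fitted values at the cutoff."""
    def fit_at_cutoff(mask):
        xs, ys = x[mask], y[mask]
        # OLS of y on (x - cutoff); the intercept is the fitted value at the cutoff
        X = np.column_stack([np.ones(len(xs)), xs - cutoff])
        coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
        return coef[0]

    left = (x < cutoff) & (x >= cutoff - bandwidth)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    return fit_at_cutoff(right) - fit_at_cutoff(left)

# Simulated (hypothetical) data with a true treatment effect of 2.0 at cutoff 0
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 5000)
y = 1.0 + 0.5 * x + 2.0 * (x >= 0) + rng.normal(0, 0.5, 5000)
print(rd_estimate(x, y, cutoff=0.0, bandwidth=0.5))
```

Widening `bandwidth` here uses more observations and so reduces sampling variance, but if the true conditional expectation were nonlinear away from the cutoff, the linear fit would introduce bias — the precision/bias trade-off the abstract highlights.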

Type
Research Article
Copyright
Copyright © The Author 2009. Published by Oxford University Press on behalf of the Society for Political Methodology 

Footnotes

Authors' note: The authors are grateful to Mark Grebner, who designed and implemented the mailing campaign analyzed here, and Joshua Haselkorn, Jonnah Hollander, and Celia Paris, who provided research assistance.

References

Black, D., Galdo, J., and Smith, J. C. 2005. Evaluating the regression discontinuity design using experimental data. Unpublished working paper.
Buddelmeyer, H., and Skoufias, E. 2003. An evaluation of the performance of regression discontinuity design on PROGRESA. Discussion Paper Series No. 827. Bonn, Germany: IZA.
Butler, Daniel M., and Butler, Matthew J. 2006. Splitting the difference? Causal inference and theories of split-party delegations. Political Analysis 14: 439–55.
Cook, Thomas D., and Wong, Vivian C. 2009. Empirical tests of the validity of the regression-discontinuity design. Annales d'Economie et de Statistique. Forthcoming.
Gerber, Alan, Kessler, Daniel, and Meredith, Marc. 2009. The persuasive effects of direct mail: A regression discontinuity approach. Presented at the 2009 Midwest Political Science Association Meeting.
Gerber, Alan S., Green, Donald P., and Larimer, Christopher W. 2008. Social pressure and voter turnout: Evidence from a large-scale field experiment. American Political Science Review 102: 33–48.
Gerber, Alan S., Green, Donald P., and Kaplan, Edward H. 2004. The illusion of learning from observational research. In Problems and methods in the study of politics, eds. Shapiro, Ian, Smith, Rogers, and Massoud, Tarek, 251–73. New York: Cambridge University Press.
Green, Donald P., Leong, Terence Y., Gerber, Alan S., and Larimer, Christopher W. 2008. Testing the accuracy of regression discontinuity analysis using an experimental benchmark. Unpublished manuscript, Institution for Social and Policy Studies, Yale University.
Hahn, Jinyong, Todd, Petra, and Van der Klaauw, Wilbert. 2001. Identification and estimation of treatment effects with a regression discontinuity design. Econometrica 69: 201–9.
Hainmueller, Jens, and Kern, Holger Lutz. 2008. Incumbency as a source of spillover effects in mixed electoral systems: Evidence from a regression-discontinuity design. Electoral Studies 27: 213–27.
Imbens, Guido, and Kalyanaraman, Karthik. 2009. Optimal bandwidth choice for the regression discontinuity estimator. Unpublished manuscript, Department of Economics, Harvard University.
Imbens, Guido W., and Lemieux, Thomas. 2008. Regression discontinuity designs: A guide to practice. Journal of Econometrics 142: 615–35.
LaLonde, Robert J. 1986. Evaluating the econometric evaluations of training programs with experimental data. American Economic Review 76: 604–20.
Lee, David S. 2008. Randomized experiments from non-random selection in U.S. House elections. Journal of Econometrics 142: 675–97.
Lee, David S., and Lemieux, Thomas. 2009. Regression discontinuity designs in economics. National Bureau of Economic Research Working Paper 14723, Cambridge, MA.
Loader, Clive. 1999. Local regression and likelihood. New York: Springer.
Ludwig, J., and Miller, D. L. 2007. Does Head Start improve children's life chances? Evidence from a regression discontinuity design. Quarterly Journal of Economics 122: 159–208.
McCrary, Justin. 2008. Manipulation of the running variable in the regression discontinuity design: A density test. Journal of Econometrics 142: 698–714.
Nickerson, David W. 2007. An evaluation of regression discontinuity techniques using experiments as a benchmark. Poster presented at the Annual Meeting of the Society for Political Methodology, July 18–21, State College, PA.
Pettersson-Lidbom, Per. 2004. Does the size of the legislature affect the size of government? Evidence from two natural experiments. Unpublished manuscript, Department of Economics, Stockholm University.
Rubin, Donald B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66: 688–701.
Thistlethwaite, Donald L., and Campbell, Donald T. 1960. Regression-discontinuity analysis: An alternative to the ex post facto experiment. Journal of Educational Psychology 51: 309–17.