Why Experimenters Might Not Always Want to Randomize, and What They Could Do Instead
Published online by Cambridge University Press: 04 January 2017
Abstract
Suppose that an experimenter has collected a sample as well as baseline information about the units in the sample. How should she allocate treatments to the units in this sample? We argue that the answer does not involve randomization if we think of experimental design as a statistical decision problem. If, for instance, the experimenter is interested in estimating the average treatment effect and evaluates an estimate in terms of the squared error, then she should minimize the expected mean squared error (MSE) through choice of a treatment assignment. We provide explicit expressions for the expected MSE that lead to easily implementable procedures for experimental design.
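The design problem described in the abstract can be illustrated with a simplified sketch (not the paper's exact procedure; all names and the prior are hypothetical choices for illustration). Under a prior in which potential outcomes are linear in a scalar baseline covariate with i.i.d. noise, the expected MSE of the difference-in-means estimator for the average treatment effect decomposes into a variance term that is constant across balanced assignments plus a bias term proportional to the squared difference in covariate means between groups. Minimizing expected MSE over balanced assignments then reduces to minimizing covariate imbalance, which can be done by enumeration in small samples:

```python
# Illustrative sketch: pick the treatment assignment that minimizes a
# prior expected-MSE criterion, rather than randomizing.
# Assumption (hypothetical prior): potential outcomes are linear in a
# scalar baseline covariate x with homoskedastic noise, so comparing
# balanced assignments by expected MSE is equivalent to comparing them
# by squared covariate imbalance.
from itertools import combinations


def expected_mse_proxy(x, treated):
    """Squared difference of covariate means between treatment and control.

    Proportional to the prior expected squared bias under the linear
    prior assumed above; the variance term is the same for every
    balanced assignment, so it can be dropped when ranking them.
    """
    control = [i for i in range(len(x)) if i not in treated]
    mean_t = sum(x[i] for i in treated) / len(treated)
    mean_c = sum(x[i] for i in control) / len(control)
    return (mean_t - mean_c) ** 2


def optimal_assignment(x):
    """Enumerate all balanced assignments and return the minimizer."""
    n = len(x)
    best = min(combinations(range(n), n // 2),
               key=lambda t: expected_mse_proxy(x, set(t)))
    return set(best)


# Example: baseline covariates for a sample of 8 units.
x = [0.1, 0.9, 0.4, 0.6, 0.2, 0.8, 0.3, 0.7]
treated = optimal_assignment(x)
print("treated units:", sorted(treated))
print("imbalance:", expected_mse_proxy(x, treated))
```

Enumeration is exponential in the sample size, so in larger samples one would replace the `min` over all balanced assignments with a search heuristic; the point of the sketch is only that the assignment is chosen deterministically as the minimizer of a decision-theoretic criterion.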
- Type: Articles
- Copyright © The Author 2016. Published by Oxford University Press on behalf of the Society for Political Methodology.
Footnotes
Author's note: I thank Alberto Abadie, Ivan Canay, Gary Chamberlain, Raj Chetty, Nathaniel Hendren, Guido Imbens, Larry Katz, Gary King, Michael Kremer, and Don Rubin, as well as seminar participants at the Harvard development retreat; the Harvard Labor Economics Workshop; the Harvard Quantitative Issues in Cancer Research Seminar; the Harvard Applied Statistics Seminar; UT Austin, Princeton, Columbia, and Northwestern Econometrics Seminars; at RAND; and at the 2013 CEME Conference at Stanford for helpful discussions. Replication data are available on the Harvard Dataverse at http://dx.doi.org/10.7910/DVN/I5KCWI. See Kasy (2016). Supplementary materials for this article are available on the Political Analysis Web site.