Book contents
- Frontmatter
- Contents
- Detailed table of contents
- List of Figures
- List of Tables
- List of Boxes
- Preface and acknowledgements
- 1 Introduction
- Part I Discovering natural experiments
- Part II Analyzing natural experiments
- Part III Evaluating natural experiments
- 8 How plausible is as-if random?
- 9 How credible is the model?
- 10 How relevant is the intervention?
- Part IV Conclusion
- References
- Index
9 - How credible is the model?
Published online by Cambridge University Press: 05 November 2012
Summary
This chapter turns to the second of the three dimensions of the evaluative framework discussed in the Introduction: the credibility of the causal and statistical models that analysts employ. To make causal inferences, analysts must maintain some hypotheses about data-generating processes. These hypotheses are “maintained” because evidence does not permit researchers to verify all such a priori assumptions, at least not completely. Indeed, inferring causation requires a theory of how observed data are generated (that is, a response schedule; Freedman 2009: 85–95; Heckman 2000). This theory is a hypothetical account of how one variable would respond if the scholar intervened and manipulated other variables. In observational studies—including natural experiments—the researcher never actually intervenes to change any variables, so this theory remains, at least to some extent, hypothetical.
Yet, data produced by social and political processes can be used to estimate the expected magnitude of a change in one variable that would arise if one were to manipulate other variables—assuming that the researcher has a correct theory of the data-generating process. In quantitative analysis, this theory is usually expressed in the form of a formal statistical model; underlying causal assumptions may or may not be stated explicitly. The key question is whether the maintained hypotheses implied by a given model are plausible depictions of the true data-generating process—and, especially, how that plausibility can be probed and, at least to some extent, validated.
- Type: Chapter
- Information: Natural Experiments in the Social Sciences: A Design-Based Approach, pp. 256–288
- Publisher: Cambridge University Press
- Print publication year: 2012