
Experimental Thinking: A Primer on Social Science Experiments. By James Druckman. New York: Cambridge University Press, 2022. 228p. $99.99 cloth, $29.99 paper.

Published online by Cambridge University Press: 01 June 2023

Thad Dunning
University of California, Berkeley
thad.dunning@berkeley.edu

Book Reviews: American Politics
© The Author(s), 2023. Published by Cambridge University Press on behalf of the American Political Science Association

James Druckman begins his new book, Experimental Thinking: A Primer on Social Science Experiments, by underscoring a remarkable contrast between two presidential addresses given to the American Political Science Association. In 1909, A. Lawrence Lowell declared, “we are limited by the impossibility of experiment. Politics is an observational, not an experimental, science.” Yet by 2019 another APSA president, Rogers Smith, asked whether “an excessive emphasis on experiments will unduly constrict the questions political scientists ask.” The striking difference reflects the rapid expansion of experimental political science, especially over the past two decades.

One might expect a figure such as Druckman—a leading scholar who has contributed so fundamentally to the very growth he documents—to celebrate this experimental turn. In part, the book does emphasize the value of experimental research. Yet Druckman also issues many cautionary notes, not so much about the method per se as about the threats posed by the much greater ease today of conducting certain kinds of experiments. Computing advances that allow for large-scale randomization and data processing, as well as new opportunities to collect data from social media, internet panels, or elites, have sharply expanded both the domains and the scale of experimentation. Concurrently, a movement toward open science (encompassing pre-registration, replication, and other elements) has made reviewers more amenable to the publication of experiments reporting null effects. According to Druckman, these developments “bring with them new opportunities but also a new type of poverty … There is much less at stake with each experiment, given the relative ease of data collection and increasing acceptance of null results … In short, the concerns are … poor designs, inappropriate analyses, limited use of data, and/or flawed interpretations. Even an infinite amount of data cannot compensate for a thoughtlessly designed experiment” (p. 6). The ostensibly greater ease of implementation, Druckman argues, has sometimes disconnected experiments from the full scientific process. To put it another way, “a good experiment is slow moving … counter to the current fast-moving temptations available in the social sciences” (pp. 2–3). Less a textbook on technical aspects of experimental design in the social sciences (of which there are now many excellent examples, including some of Druckman’s other volumes), this is a wide-ranging discussion of how to think about and interpret experiments properly.

Druckman emphasizes several key themes:

  1. Experiments are properly only one part of a long scientific process, which involves defining research questions, deriving testable hypotheses, considering measurement validity, and connecting experimental design to theory (Chapter 2);

  2. Concepts and measurement validity centrally determine the extent to which experiments can inform theory. However, mundane realism (or “the extent to which events occurring in the research setting are likely to occur in the normal course of subjects’ lives,” p. 52) is much less important than many critics assert. Moreover, in thinking about external validity, many focus on the characteristics of experimental units in relation to some broader population, but experiments also “sample” contexts, treatments, and outcomes, with implications that are too rarely discussed (Chapter 3);

  3. Some rapidly expanding types of experimental designs—for example, elite audit studies, conjoint survey experiments, and lab-in-the-field experiments—can leave substantial interpretive ambiguity (Chapter 4);

  4. Replication is hard and sometimes not meaningful, because contexts, treatments, and outcomes often change in subtle ways, even if a plan for sampling experimental units themselves is replicated (Chapter 5).

Most of the chapter sections end with helpful summaries that will be useful for teaching.

These are in my view excellent correctives—especially the core points that (i) experiments are just one arrow in the social-scientific quiver; (ii) many questions are not amenable to experimentation; and (iii) considerations not centrally taught in many courses on experimental design, such as concept formation and measurement validity, are critical for successful and useful experimentation. Druckman leaves room not so much for quibbling as for alternatives to the powerful ideas he advances. Chapter 2, for example, offers a fairly expansive definition of experiments, contrasting “scientific” and “statistical” solutions to the fundamental problem of causal inference, i.e., the problem that one cannot observe outcomes simultaneously in the presence and absence of an intervention. I read Paul W. Holland’s well-known discussion of this issue as also implying differences in the estimand of interest (“Statistics and Causal Inference,” Journal of the American Statistical Association 81 [1986]: 945–960). Temporal stability, causal transience, and possibly unit homogeneity are assumptions—often very strong ones in the social sciences—that would allow for estimation of unit causal effects under Druckman’s scientific solution. The statistical solution provided by randomization (or its as-if version, in the case of natural experiments), by contrast, allows only for estimation of group effects.
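
To make this distinction concrete, here is a minimal sketch in the standard potential-outcomes notation of Holland’s framework (my illustration, not Druckman’s own formalism), where $Y_i(1)$ and $Y_i(0)$ denote unit $i$’s outcomes with and without the intervention and $D_i$ is a randomized treatment indicator:

\[
\tau_i = Y_i(1) - Y_i(0) \quad \text{(the unit effect, never directly observed)}
\]
\[
\mathbb{E}[Y_i \mid D_i = 1] - \mathbb{E}[Y_i \mid D_i = 0] = \mathbb{E}[Y_i(1)] - \mathbb{E}[Y_i(0)] \quad \text{(identified under randomization)}
\]

The scientific solution invokes assumptions such as temporal stability or unit homogeneity to recover $\tau_i$ for individual units; the statistical solution recovers only the group-level average of the $\tau_i$.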

Druckman reserves his potentially most controversial criticism for the implications of open science. While he very plainly underscores the value of writing detailed descriptions of design and analytic procedures before conducting an experiment (i.e., a detailed pre-analysis plan), he also worries about several downsides. First, “inattention to careful data collection can lead to null results” and “overemphasis on pre-analysis plans shifts the basis of publication decisions toward the existence of a priori hypotheses and away from using statistical significance” (p. 136). Second, strict adherence to pre-analysis plans “assumes that any exploratory data analyses reflect post hoc theorizing, therefore requiring further data collection” (p. 138). And finally, “the process may stunt innovation since scholars become incentivized to only test well-developed hypotheses” (p. 140).

Each of these critiques has merit. Yet an ideal approach might also allow for a methodologically self-conscious interplay between the inductive development of theory and its testing. Consider the excellent article by Clayton et al. (“Women Grab Back: Exclusion, Policy Threat, and Women’s Political Ambition,” forthcoming in the American Political Science Review), who use focus groups with potential political candidates to generate the hypothesis that women’s political exclusion motivates their political ambition when combined with a policy threat to women’s interests. The paper thus builds theory from planned (and “exploratory”) observation of the world and especially from the perceptions and theories of political actors themselves. Yet the authors also subsequently pre-specify and conduct an experimental test (one that is also reproduced—meaningfully, I think—in two different samples). The combination of clearly inductive but also a priori theorization with subsequent pre-specification of an experimental test eases some concerns that might otherwise arise, for example, from an ex-post stipulation of an interactive hypothesis. From this example, one might conclude that—just as experiments are only one part of a long scientific process—so is pre-registration.

Indeed, it might be possible to productively combine the best of both worlds. That is, we might integrate the slow work of designing excellent experiments with the somewhat faster work of, for instance, replication—even if, as Druckman shows us, the latter is often properly thought of in terms of external validity rather than “repetition.”

Druckman’s masterful discussion shows how even seemingly uncontroversial aspects of the faster work are anything but straightforward. His emphasis thus invites us to focus on when and how experimental design can in fact inform the empirical assessment of theories. This tremendous book offers hard-earned lessons from one of the foremost practitioners of the experimental craft. It deserves to be very widely read.