Call for Papers - Special Issue on Validating Experimental Manipulations

Submissions Due December 10, 2023

The strength of experiments lies in their ability to demonstrate causal effects. Yet, for an experiment to live up to that promise, the treatment must manipulate only the intended independent variable. Recent research makes clear, however, that many common manipulations may affect several variables at once (e.g., Dafoe, Zhang, and Caughey 2018). For example, experimental manipulations targeting a particular emotion often elicit multiple emotions (Searles and Mattes 2015). In other cases, scholars struggle to manipulate the focal variable at all (e.g., partisan identification). Despite the difficulty of designing a high-quality treatment, experimental studies often rely on manipulations that have not been previously validated, and some fail to present any evidence for the validity of the manipulation at all (see Chester and Lasko 2021). To encourage further research in this area, we are calling for papers that validate experimental manipulations of constructs that are important to political science.

Proposals must be submitted in the format of a Registered Report: authors submit a manuscript and study design prior to data collection.[1] Reviewers will evaluate and comment on the quality of the design and the likely value of carrying out the proposed study. If the proposal is accepted in principle, authors will be invited to pre-register their design, conduct the experiment, and submit the full manuscript for a final review. That review will focus on whether the authors faithfully carried out the experiment as planned, rather than on the substantive results. This format is particularly apt for validating experimental manipulations because reviewer feedback is aimed at improving the quality of the design rather than evaluating the results.

Proposals might take one of the following forms, though this list should not be considered exhaustive:

  • Tests a new manipulation of a construct for which no validated manipulation is available
  • Tests an existing manipulation that has not yet been systematically validated
  • Compares alternative manipulations of a construct to each other
  • Demonstrates flaws in an existing manipulation and tests an improved version of that manipulation
  • Tests the validity of a manipulation across different populations or methods of administration


In evaluating a proposal, we encourage reviewers to prioritize the following considerations:

  • Is the manipulation likely to be widely used?
    • Is the construct widely studied? Is it substantively important to study?
    • If a manipulation of the construct already exists, will the new manipulation represent an important improvement?
    • Are there good theoretical or empirical reasons to believe that existing manipulations are flawed or limited in their application?
    • Will the study provide clarity on best practices for manipulating the construct?
  • Is the target concept clearly defined? It is impossible to validate a manipulation of an ill-defined concept, so proposals must include a careful definition of the target concept.
  • How will the manipulation be validated? Authors should consider validation of a manipulation as a multi-faceted process akin to validating a measure (e.g., Flake and Fried 2020). Proposals should detail how they will assess the manipulation on multiple dimensions of validity (e.g., construct, discriminant).
  • Will the proposed design have sufficient statistical power?
  • How easily can the manipulation be implemented? Authors are encouraged to ease the burden on future users by providing any complex code or instructions necessary to implement the manipulation. Tools that are inaccessible to other researchers provide little value.


Finally, we encourage authors to consider the full range of experimental designs and treatments. The between-subjects design is a “workhorse” of experimental research, but effects in these designs can be harder to detect than in alternative designs (for discussion, see Clifford, Sheagley, and Piston 2021; Mutz 2011). In addition to more powerful designs, researchers might consider more impactful treatments, such as music (Brader 2005), games (Broockman, Kalla, and Westwood 2023), videos (Guess and Coppock 2020), or images (Abrajano, Elmendorf, and Quinn 2018).

Submissions are due by December 10, 2023. We anticipate that the first round of reviews will be complete within three months, and authors should submit a revision within three months of the initial decision. At this stage, manuscripts may be rejected, accepted in principle, or sent out for a second round of review. Authors who receive an in-principle acceptance will be expected to field their study and submit a complete version of the manuscript within roughly six months. Complete manuscripts will undergo a final stage of review to ensure that the authors faithfully carried out their pre-registration plan.

Questions? Contact us at jeps@apsanet.org


References

Abrajano, Marisa A., Christopher S. Elmendorf, and Kevin M. Quinn. 2018. “Labels vs. Pictures: Treatment-Mode Effects in Experiments About Discrimination.” Political Analysis 26(1): 20–33.

Brader, Ted. 2005. “Striking a Responsive Chord: How Political Ads Motivate and Persuade Voters by Appealing to Emotions.” American Journal of Political Science 49(2): 388–405.

Broockman, David E., Joshua L. Kalla, and Sean J. Westwood. 2023. “Does Affective Polarization Undermine Democratic Norms or Accountability? Maybe Not.” American Journal of Political Science.

Chester, David S., and Emily N. Lasko. 2021. “Construct Validation of Experimental Manipulations in Social Psychology: Current Practices and Recommendations for the Future.” Perspectives on Psychological Science 16(2): 377–95.

Clifford, Scott, Geoffrey Sheagley, and Spencer Piston. 2021. “Increasing Precision without Altering Treatment Effects: Repeated Measures Designs in Survey Experiments.” American Political Science Review 115(3): 1048–65.

Dafoe, Allan, Baobao Zhang, and Devin Caughey. 2018. “Information Equivalence in Survey Experiments.” Political Analysis 26(4): 399–416.

Flake, Jessica Kay, and Eiko I. Fried. 2020. “Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them.” Advances in Methods and Practices in Psychological Science 3(4): 456–65.

Guess, Andrew, and Alexander Coppock. 2020. “Does Counter-Attitudinal Information Cause Backlash? Results from Three Large Survey Experiments.” British Journal of Political Science 50(4): 1497–1515.

Mutz, Diana C. 2011. Population-Based Survey Experiments. Princeton University Press.

Searles, Kathleen, and Kyle Mattes. 2015. “It’s a Mad, Mad World: Using Emotion Inductions in a Survey.” Journal of Experimental Political Science 2(2): 172–82.


[1] Authors may include evidence from pre-existing pilot studies, but the focus of the contribution should be on the prospective study design.