Increasing Precision without Altering Treatment Effects: Repeated Measures Designs in Survey Experiments

Published online by Cambridge University Press: 12 April 2021

SCOTT CLIFFORD, University of Houston
GEOFFREY SHEAGLEY, University of Georgia
SPENCER PISTON, Boston University

Scott Clifford, Associate Professor, Department of Political Science, University of Houston, sclifford@uh.edu.
Geoffrey Sheagley, Assistant Professor, School of Public and International Affairs, University of Georgia, geoff.sheagley@uga.edu.
Spencer Piston, Assistant Professor, Department of Political Science, Boston University, spiston@bu.edu.

Abstract

The use of survey experiments has surged in political science. The most common design is the between-subjects design in which the outcome is only measured posttreatment. This design relies heavily on recruiting a large number of subjects to precisely estimate treatment effects. Alternative designs that involve repeated measurements of the dependent variable promise greater precision, but they are rarely used out of fears that these designs will yield different results than a standard design (e.g., due to consistency pressures). Across six studies, we assess this conventional wisdom by testing experimental designs against each other. Contrary to common fears, repeated measures designs tend to yield the same results as more common designs while substantially increasing precision. These designs also offer new insights into treatment effect size and heterogeneity. We conclude by encouraging researchers to adopt repeated measures designs and providing guidelines for when and how to use them.
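
To make the abstract's precision claim concrete, below is a minimal simulation sketch of our own (not drawn from the article; the sample size n, pre/post correlation rho, and effect size tau are illustrative assumptions). It compares a posttest-only between-subjects estimator with a repeated measures estimator that adjusts for a pretreatment measure of the outcome:

# Illustrative sketch only; not the article's replication code.
import numpy as np

rng = np.random.default_rng(0)
n, rho, tau, reps = 500, 0.7, 0.2, 2000  # sample size, pre/post correlation, true effect, simulations

post_only, adjusted = [], []
for _ in range(reps):
    pre = rng.normal(size=n)         # pretreatment outcome measure
    d = rng.integers(0, 2, size=n)   # random treatment assignment
    post = rho * pre + rng.normal(size=n) * np.sqrt(1 - rho**2) + tau * d

    # Posttest-only design: simple difference in means.
    post_only.append(post[d == 1].mean() - post[d == 0].mean())

    # Repeated measures design: regress the posttest on treatment and pretest.
    X = np.column_stack([np.ones(n), d, pre])
    adjusted.append(np.linalg.lstsq(X, post, rcond=None)[0][1])

print("sampling SD, posttest-only:", np.std(post_only))
print("sampling SD, pre-adjusted: ", np.std(adjusted))
# With rho = 0.7, the adjusted estimator's sampling SD shrinks by roughly
# sqrt(1 - rho**2) ~= 0.71, i.e., precision comparable to nearly doubling n.

This mirrors the textbook result that adjusting for a pretest correlated at rho with the outcome reduces the effect estimator's sampling variance by approximately a factor of 1 − rho², which is the mechanism behind the precision gains the authors report.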

Type: Research Article
Copyright: © The Author(s), 2021. Published by Cambridge University Press on behalf of the American Political Science Association.


Supplementary Material

Clifford et al. Dataset: Harvard Dataverse, https://doi.org/10.7910/DVN/9MQDK7
Clifford et al. supplementary materials: Appendix (file, 951.7 KB)