
Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results

Published online by Cambridge University Press: 04 January 2017

Annie Franco
Affiliation: Department of Political Science, Stanford University, 616 Serra St., Stanford, CA 94305, USA. e-mail: abfranco@stanford.edu

Neil Malhotra (corresponding author)
Affiliation: Graduate School of Business, Stanford University, 655 Knight Way, Stanford, CA 94305, USA. e-mail: neilm@stanford.edu

Gabor Simonovits
Affiliation: Department of Political Science, Stanford University, 616 Serra St., Stanford, CA 94305, USA. e-mail: gabor.simonovits@gmail.com

Abstract

The accuracy of published findings is compromised when researchers fail to report and adjust for multiple testing. Preregistration of studies and the requirement of preanalysis plans for publication are two proposed solutions to combat this problem. Some have raised concerns that such changes in research practice may hinder inductive learning. However, without knowing the extent of underreporting, it is difficult to assess the costs and benefits of institutional reforms. This paper examines published survey experiments conducted as part of the Time-sharing Experiments in the Social Sciences program, where the questionnaires are made publicly available, allowing us to compare planned design features against what is reported in published research. We find that (1) 30% of papers report fewer experimental conditions in the published paper than appear in the questionnaire; (2) roughly 60% of papers report fewer outcome variables than are listed in the questionnaire; and (3) about 80% of papers fail to report all experimental conditions and outcomes. These findings suggest that published statistical tests understate the probability of type I errors.
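The abstract's final claim follows from the mechanics of multiple testing. As a minimal illustration (our own sketch, not an analysis from the paper): if a researcher runs k independent hypothesis tests at a nominal significance level of alpha = 0.05 but only the significant results are visible to readers, the probability of at least one false positive across the k tests is 1 - (1 - alpha)^k, which grows quickly with k.

alpha = 0.05  # nominal per-test type I error rate

# Familywise error rate: P(at least one false positive) across k
# independent tests of true null hypotheses. A reader who sees only
# the reported tests implicitly assumes k = 1.
for k in (1, 3, 5, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"k = {k:2d} tests -> familywise error rate = {fwer:.3f}")

With five tests, reported and unreported combined, the familywise rate is already about 0.23, more than four times the nominal 5% level; this is the sense in which published tests understate the probability of type I errors when conditions and outcomes go unreported.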

Type
Letters
Copyright
Copyright © The Author 2015. Published by Oxford University Press on behalf of the Society for Political Methodology 


Footnotes

Author's note: Supplementary materials for this article are available on the Political Analysis Web site. Replication data are available on the Dataverse site for this article, http://dx.doi.org/10.7910/DVN/28766.

References

Anderson, Richard G. 2013. Registration and replication: A comment. Political Analysis 21:38–39.
Asendorpf, J. B., Conner, M., De Fruyt, F., De Houwer, J., Denissen, J. J. A., Fiedler, K., Fiedler, S., Funder, D. C., Kliegl, R., Nosek, B. A., Perugini, M., Roberts, B. W., Schmitt, M., van Aken, M. A. G., Weber, H., and Wicherts, J. M. 2013. Recommendations for increasing replicability in psychology. European Journal of Personality 27:108–119.
Casey, Katherine, Glennerster, Rachel, and Miguel, Edward. 2012. Reshaping institutions: Evidence on aid impacts using a preanalysis plan. Quarterly Journal of Economics 127:1755–1812.
Chambers, Christopher D. 2013. Registered reports: A new publishing initiative at Cortex. Cortex 49:609–610.
Chan, An-Wen, and Altman, Douglas G. 2005. Identifying outcome reporting bias in randomised trials on PubMed: Review of publications and survey of authors. British Medical Journal 330:753.
Chan, An-Wen, Hróbjartsson, Asbjørn, Haahr, Mette T., Gøtzsche, Peter C., and Altman, Douglas G. 2004. Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. Journal of the American Medical Association 291:2457–2465.
Franco, Annie, Malhotra, Neil, and Simonovits, Gabor. 2014. Publication bias in the social sciences: Unlocking the file drawer. Science 345:1502–1505.
Franco, Annie, Malhotra, Neil, and Simonovits, Gabor. 2015. Replication data for: Underreporting in political science survey experiments: Comparing questionnaires to published results. http://dx.doi.org/10.7910/DVN/28766, Dataverse [Distributor] V1 [Version], January 21, 2015.
Gelman, Andrew, Hill, Jennifer, and Yajima, Masanao. 2012. Why we (usually) don't have to worry about multiple comparisons. Journal of Research on Educational Effectiveness 5:189–211.
Gelman, Andrew, and Tuerlinckx, Francis. 2000. Type S error rates for classical and Bayesian single and multiple comparison procedures. Computational Statistics 15:373–390.
Gerber, Alan, and Malhotra, Neil. 2008. Do statistical reporting standards affect what is published? Publication bias in two leading political science journals. Quarterly Journal of Political Science 3:313–326.
Humphreys, Macartan, Sanchez de la Sierra, Raul, and van der Windt, Peter. 2013. Fishing, commitment, and communication: A proposal for comprehensive nonbinding research registration. Political Analysis 21:1–20.
King, Gary, Gakidou, Emmanuela, Ravishankar, Nirmala, Moore, Ryan T., Lakin, Jason, Vargas, Manett, Téllez-Rojo, Martha María, Hernández Ávila, Juan Eugenio, Hernández Ávila, Mauricio, and Hernández Llamas, Héctor. 2007. A “politically robust” experimental design for public policy evaluation, with application to the Mexican Universal Health Insurance Program. Journal of Policy Analysis and Management 26:479–506.
Laitin, David D. 2013. Fisheries management. Political Analysis 21:42–47.
Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K. M., Gerber, A., Glennerster, R., Green, D. P., Humphreys, M., Imbens, G., Laitin, D., Madon, T., Nelson, L., Nosek, B. A., Petersen, M., Sedlmayr, R., Simmons, J. P., Simonsohn, U., and Van der Laan, M. 2014. Promoting transparency in social science research. Science 343:30–31.
Monogan, James E., III. 2013. A case for registering studies of political outcomes: An application in the 2010 House elections. Political Analysis 21:21–37.
Moreno, Santiago G., Sutton, Alex J., Turner, Erick H., Abrams, Keith R., Cooper, Nicola J., Palmer, Tom M., and Ades, A. E. 2009. Novel methods to deal with publication biases: Secondary analysis of antidepressant trials in the FDA trial registry database and related journal publications. British Medical Journal 339:b2981.
Rising, Kristin, Bacchetti, Peter, and Bero, Lisa. 2008. Reporting bias in drug trials submitted to the Food and Drug Administration: Review of publication and presentation. PLoS Medicine 5:e217.
Simmons, Joseph P., Nelson, Leif D., and Simonsohn, Uri. 2011. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science 22:1359–1366.
Supplementary material: PDF

Franco et al. supplementary material: Appendix (PDF, 193.3 KB)