
Self-Administered Field Surveys on Sensitive Topics

Published online by Cambridge University Press: 10 June 2020

Matthew Nanes
Affiliation: Saint Louis University, St. Louis, MO, USA. E-mail: matthew.nanes@slu.edu. Twitter: @MatthewJNanes

Dotan Haim
Affiliation: Florida State University, Tallahassee, FL, USA. E-mail: dhaim@fsu.edu. Twitter: @HaimDotan

Abstract

Research on sensitive topics uses a variety of methods to combat response bias in in-person surveys. Increasingly, researchers allow respondents to self-administer responses using electronic devices as an alternative to more complicated experimental approaches. Using an experiment embedded in a survey in the rural Philippines, we test the effects of several such methods on response rates and falsification. We asked respondents a sensitive question about reporting insurgents to the police alongside a nonsensitive question about school completion. We randomly assigned respondents to answer these questions either verbally, through a “forced choice” experiment, or through self-enumeration. We find that self-enumeration significantly reduced nonresponse compared to direct questioning, but we find little evidence of differential rates of falsification. Forced choice yielded highly unlikely estimates, which we attribute to nonstrategic falsification. These results suggest that self-administered surveys can be effective for measuring sensitive topics when response rates are a priority.
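To see why nonstrategic falsification can push forced-choice estimates outside the plausible range, consider a minimal Python simulation of a standard forced-choice (randomized response) design in the spirit of Warner (1965) and Blair, Imai, and Zhou (2015). Every quantity below (sample size, true prevalence, die probabilities, and the share of respondents who ignore the randomization and simply answer "no") is an illustrative assumption, not the design or data from this study.

import numpy as np

# Forced-choice design: a die roll forces some respondents to answer
# "yes" or "no" regardless of the truth; the rest answer truthfully.
# All parameter values are hypothetical, for illustration only.
rng = np.random.default_rng(0)

N = 2000            # respondents (assumed)
PI = 0.30           # true prevalence of the sensitive behavior (assumed)
P_FORCED_YES = 1/6  # e.g., die shows 1 -> say "yes" no matter what
P_FORCED_NO = 1/6   # e.g., die shows 6 -> say "no" no matter what

def simulate(err_rate):
    """Observed share of 'yes' answers when a fraction `err_rate` of
    respondents ignore the die and answer 'no' (nonstrategic error)."""
    truth = rng.random(N) < PI
    u = rng.random(N)
    answers = np.where(u < P_FORCED_YES, True,
                       np.where(u < P_FORCED_YES + P_FORCED_NO, False, truth))
    noncompliers = rng.random(N) < err_rate
    return np.where(noncompliers, False, answers).mean()

def estimate(yes_share):
    # Standard forced-choice estimator: subtract the forced-"yes" share,
    # then rescale by the probability of a truthful answer.
    return (yes_share - P_FORCED_YES) / (1 - P_FORCED_YES - P_FORCED_NO)

for err in (0.0, 0.3, 0.6):
    print(f"blanket-'no' rate {err:.1f}: "
          f"estimated prevalence {estimate(simulate(err)):.3f}")

With full compliance the estimator recovers the assumed prevalence of 0.30; as the blanket-"no" share grows, the estimate is dragged downward and can even turn negative once nonresponsive "no" answers swamp the forced-"yes" share, producing exactly the kind of highly unlikely estimate the abstract describes.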

Type
Research Article
Copyright
© The Experimental Research Section of the American Political Science Association 2020


Footnotes

Nico Ravanilla played a substantial role in designing and implementing the survey. The authors thank Konstantin Ash, Kolby Hanson, Connor Huff, and Steven Rogers for their comments. This work was supported by Evidence in Governance and Politics Metaketa IV. Both authors received significant financial support through Evidence in Governance and Politics and the University of California San Diego Policy Design and Evaluation Lab. Additionally, Nanes received significant financial support from The Asia Foundation unrelated to this research, and Haim received significant financial support from the United Nations Development Program unrelated to this research. The data and code to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network at https://doi.org/10.7910/DVN/GL28QD.

References

Adida, C. L., Ferree, K. E., Posner, D. N. and Robinson, A. L. 2016. Who's Asking? Interviewer Coethnicity Effects in African Survey Data. Comparative Political Studies 49(12): 1630–60.
Ahlquist, J. S. 2018. List Experiment Design, Non-Strategic Respondent Error, and Item Count Technique Estimators. Political Analysis 26(1): 34–53.
Berman, E., Shapiro, J. N. and Felter, J. H. 2011. Can Hearts and Minds Be Bought? The Economics of Counterinsurgency in Iraq. Journal of Political Economy 119(4): 766–819.
Blair, G., Chou, W. and Imai, K. 2019. List Experiments with Measurement Error. Political Analysis 27(4): 455–80.
Blair, G. and Imai, K. 2012. Statistical Analysis of List Experiments. Political Analysis 20(1): 47–77.
Blair, G., Imai, K. and Zhou, Y.-Y. 2015. Design and Analysis of the Randomized Response Technique. Journal of the American Statistical Association 110(511): 1304–19.
Bush, S. S. and Prather, L. 2019. Do Electronic Devices in Face-to-Face Interviews Change Survey Behavior? Evidence from a Developing Country. Research & Politics 6(2): 1–7.
Corstange, D. 2009. Sensitive Questions, Truthful Answers? Modeling the List Experiment with LISTIT. Political Analysis 17(1): 45–63.
Gelman, A. 2014. Thinking of Doing a List Experiment? Here's a List of Reasons Why You Should Think Again. Technical report. https://statmodeling.stat.columbia.edu/2014/04/23/thinking-list-experiment-heres-list-reasons-think/.
Gnambs, T. and Kaspar, K. 2014. Disclosure of Sensitive Behaviors Across Self-Administered Survey Modes: A Meta-Analysis. Behavior Research Methods 47(4): 1237–59.
Imai, K. 2011. Multivariate Regression Analysis for the Item Count Technique. Journal of the American Statistical Association 106(494): 407–16.
Kim, J., Kang, J.-h., Kim, S., Smith, T. W., Son, J. and Berktold, J. 2010. Comparison Between Self-Administered Questionnaire and Computer-Assisted Self-Interview for Supplemental Survey Nonresponse. Field Methods 22(1): 57–69.
Kraay, A. and Murrell, P. 2013. Misunderestimating Corruption. The World Bank.
Kramon, E. and Weghorst, K. 2019. (Mis)Measuring Sensitive Attitudes with the List Experiment: Solutions to List Experiment Breakdown in Kenya. Public Opinion Quarterly 83(S1): 236–63.
Kuklinski, J. H., Cobb, M. D. and Gilens, M. 1997. Racial Attitudes and the “New South”. The Journal of Politics 59(2): 323–49.
Lyall, J., Blair, G. and Imai, K. 2013. Explaining Support for Combatants During Wartime: A Survey Experiment in Afghanistan. American Political Science Review 107(4): 679–705.
Lyall, J., Zhou, Y. and Imai, K. 2018. Reducing Insurgent Support Among At-Risk Populations: Experimental Evidence from Cash Transfers and Livelihood Training in Afghanistan. SSRN Working Paper 3026531.
Nanes, M. and Haim, D. 2020. Replication Data for: Self-Administered Field Surveys on Sensitive Topics. Harvard Dataverse. https://doi.org/10.7910/DVN/GL28QD.
Nanes, M. J. 2020. Police Integration and Support for Anti-Government Violence in Divided Societies: Evidence from Iraq. Journal of Peace Research 57(2): 329–43.
Nanes, M. and Lau, B. 2018. Surveys and Countering Violent Extremism: A Practitioner Guide. The Asia Foundation. https://asiafoundation.org/publication/surveys-countering-violent-extremism/.
Rosenfeld, B., Imai, K. and Shapiro, J. N. 2016. An Empirical Validation Study of Popular Survey Methodologies for Sensitive Questions. American Journal of Political Science 60(3): 783–802.
Tourangeau, R. and Yan, T. 2007. Sensitive Questions in Surveys. Psychological Bulletin 133(5): 859–83.
Warner, S. L. 1965. Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias. Journal of the American Statistical Association 60(309): 63–69.
Supplementary material

Nanes and Haim Dataset (link)

Nanes and Haim supplementary material: Online Appendix (PDF, 287.9 KB)