
27 - Tailored and targeted designs for hard-to-survey populations

Published online by Cambridge University Press: 05 September 2014

Marieke Haan, University of Groningen
Yfke Ongena, University of Groningen
Roger Tourangeau, Westat Research Organisation, Maryland
Brad Edwards, Westat Research Organisation, Maryland
Timothy P. Johnson, University of Illinois, Chicago
Kirk M. Wolter, University of Chicago
Nancy Bates, US Census Bureau

Summary

Introduction

Obtaining survey data has become an increasingly challenging task, as response rates have declined over the years in the United States and Europe (Atrostic, Bates, Burt, & Silberstein, 2001; de Heer, 1999). Collecting data from hard-to-survey populations is even more difficult: these groups are either hard to reach or known for low cooperation rates (for a more extensive discussion, see Tourangeau, Chapter 1 in this volume).

Complete lists covering many hard-to-survey populations do not exist (Sudman & Kalton, 1986), and there is no simple method for defining these groups (Lin & Schaeffer, 1995; Smith, 1983). Nevertheless, researchers have attempted to identify the characteristics of typical nonrespondents (e.g., Caetano, Ramisetty-Mikler, & McGrath, 2003; Gannon, Northern, & Carroll, 1971; Shahar, Folsom, & Jackson, 1996). Many of the nonresponse characteristics found in these studies are sample-specific and therefore of little use for other investigations. Although most surveys cannot produce response rates by population group, inclusion rates (i.e., the ratio of a survey estimate to an official estimate) can provide useful information (Griffin, 2012). Hard-to-survey groups also tend to share characteristics that pose demonstrated barriers to participation in many studies.
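
To make the inclusion-rate computation concrete, the following minimal Python sketch compares weighted survey estimates with official benchmark counts for several population groups. The group labels, the counts, and the 0.8 flagging threshold are hypothetical illustrations, not values from Griffin (2012).

    # Inclusion rate: the ratio of a survey-based estimate to an official
    # benchmark estimate for the same group (Griffin, 2012).
    # All figures below are hypothetical and for illustration only.
    survey_estimates = {"group_a": 9200, "group_b": 4100, "group_c": 750}
    official_estimates = {"group_a": 10000, "group_b": 5000, "group_c": 1500}

    inclusion_rates = {
        group: survey_estimates[group] / official_estimates[group]
        for group in official_estimates
    }

    # Flag groups whose inclusion rate falls below an arbitrary 0.8 cutoff
    # as candidates for a tailored or targeted follow-up design.
    for group, rate in sorted(inclusion_rates.items(), key=lambda kv: kv[1]):
        note = " <- potentially hard to survey" if rate < 0.8 else ""
        print(f"{group}: inclusion rate = {rate:.2f}{note}")

With these made-up figures, only group_c (inclusion rate 0.50) falls below the cutoff and would be a candidate for the kinds of tailored and targeted designs discussed in this chapter.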

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2014


References

Atrostic, B. K., Bates, N., Burt, G., & Silberstein, A. (2001). Nonresponse in U.S. government household surveys: consistent measures, recent trends, and new insights. Journal of Official Statistics, 17(2), 209–26.
Axinn, W. G., Link, C. F., & Groves, R. M. (2011). Responsive survey design, demographic data collection, and models of demographic behavior. Demography, 48(3), 1127–49.
Bauer, J. E. (2008). Tailoring. In Lavrakas, P. J. (ed.), Encyclopedia of Survey Research Methods (pp. 874–76). Thousand Oaks, CA: Sage.
Behr, A., Bellgardt, E., & Rendtel, U. (2005). Extent and determinants of panel attrition in the European Community Household Panel. European Sociological Review, 21(5), 489–512.
Billiet, J., Philippens, M., Fitzgerald, R., & Stoop, I. (2007). Estimation of non-response bias in the European Social Survey: using information from reluctant respondents. Journal of Official Statistics, 23(2), 135–62.
Blohm, M., & Diehl, C. (2001). Wenn Migranten Migranten befragen: Zum Teilnahmeverhalten von Einwanderern bei Bevölkerungsbefragungen [When migrants survey migrants: on the participation behavior of immigrants in population surveys]. Zeitschrift für Soziologie, 30(3), 223–42.
Botman, S., & Thornberry, O. (1992). Survey design features correlates of non-response. In Joint Statistical Meetings Proceedings, Survey Research Methods Section (pp. 309–14). Alexandria, VA: American Statistical Association.
Brehm, J. (1993). The Phantom Respondents: Opinion Surveys and Political Representation. Ann Arbor, MI: University of Michigan Press.
Brick, J. M., & Williams, D. (2013). Explaining rising nonresponse rates in cross-sectional surveys. ANNALS of the American Academy of Political and Social Science, 645(1), 36–59.
Caetano, R., Ramisetty-Mikler, S., & McGrath, C. (2003). Characteristics of non-respondents in a U.S. national longitudinal survey on drinking and intimate partner violence. Addiction, 98(6), 791–97.
Callegaro, M., Kruse, Y., Thomas, M., & Nukulkij, P. (2009, May). The Effect of Email Invitation Customization on Survey Completion Rates in an Internet Panel: A Meta-Analysis of 10 Public Affairs Surveys. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL.
Campanelli, P., Sturgis, P., & Purdon, S. (1997). Can You Hear Me Knocking: An Investigation into the Impact of Interviewers on Survey Response Rates. London, United Kingdom: Survey Methods Centre SCPR.
Chesnut, J. (2010). Testing an Additional Mailing Piece in the American Community Survey. Final Report. Washington, DC: US Census Bureau.
Cialdini, R. B., Braver, S. L., & Wolf, W. S. (1991). A New Paradigm for Experiments on the Causes of Survey Nonresponse. Paper presented at the Second International Workshop on Household Survey Nonresponse, Washington, DC.
Cialdini, R. B., Braver, S. L., & Wolf, W. S. (1993, September). Predictors of Non-Response in Government and Commercial Surveys. Paper presented at the Fourth International Workshop on Household Survey Nonresponse, Bath.
Cialdini, R. B., Braver, S. L., Wolf, W. S., & Pitts, S. (1992, September). Who Says No to Legitimate Survey Requests? Evidence from a New Method for Studying the Causes of Survey Non-Response. Paper presented at the Third International Workshop on Household Survey Nonresponse, The Hague.
Cohen, G., & Duffy, J. C. (2002). Are non-respondents to health surveys less healthy than respondents? Journal of Official Statistics, 18(1), 13–23.
Couper, M., & Wagner, J. (2011, August). Using Paradata and Responsive Design to Manage Survey Non-Response. Invited paper presented to the World Statistics Congress of the International Statistical Institute Conference, Dublin.
Cunningham, P., Martin, D., & Brick, M. (2003). An experiment in call scheduling. In Joint Statistical Meetings Proceedings, Survey Research Methods Section (pp. 59–66). Deerfield, IL: American Association for Public Opinion Research.
de Heer, W. (1999). International response trends: results of an international survey. Journal of Official Statistics, 15(2), 129–42.
de Leeuw, E. D. (2005). To mix or not to mix data collection modes in surveys. Journal of Official Statistics, 21(2), 233–55.
de Leeuw, E. D., Callegaro, M., Hox, J., Korendijk, E., & Lensvelt-Mulders, G. (2007). The influence of advance letters on response in telephone surveys. Public Opinion Quarterly, 71(3), 413–43.
Dillman, D. A., Clark, J. R., & Sinclair, M. A. (1995). How prenotice letters, stamped return envelopes, and reminder postcards affect mailback response rates for census questionnaires. Survey Methodology, 21(2), 1–7.
Dillman, D. A., West, K. K., & Clark, J. R. (1994). Influence of an invitation to answer by telephone on response to census questionnaires. Public Opinion Quarterly, 58(4), 557–68.
Durrant, G. B., D’Arrigo, J., & Steele, F. (2011). Using paradata to predict best times of contact, conditioning on household and interviewer influences. Journal of the Royal Statistical Society: Series A (Statistics in Society), 174(4), 1029–49.
Durrant, G. B., Groves, R. M., Staetsky, L., & Steele, F. (2010). Effects of interviewer attitudes and behaviors on refusal in household surveys. Public Opinion Quarterly, 74(1), 1–36.
Durrant, G. B., & Steele, F. (2009). Multilevel modeling of refusal and non-contact in household surveys: evidence from six UK government surveys. Journal of the Royal Statistical Society: Series A (Statistics in Society), 172(2), 361–81.
Feskens, R. C. W. (2009). Difficult Groups in Survey Research and the Development of Tailor-Made Approach Strategies. Utrecht: University of Utrecht.
Feskens, R. C. W., Hox, J. J., Lensvelt-Mulders, G. J. L. M., & Schmeets, J. J. G. (2007). Non-response among ethnic minorities: a multivariate analysis. Journal of Official Statistics, 23(3), 387–408.
Forsyth, B., Rothgeb, J., & Willis, G. (2004). Does pretesting make a difference? An experimental test. In Presser, S., Rothgeb, J. M., Couper, M. P., Lessler, J. T., Martin, E., Martin, J., & Singer, E. (eds.), Methods for Testing and Evaluating Survey Questionnaires (pp. 525–46). Hoboken, NJ: John Wiley & Sons.
Fumagalli, L., Laurie, H., & Lynn, P. (2013). Experiments with methods to reduce attrition in longitudinal surveys. Journal of the Royal Statistical Society: Series A (Statistics in Society), 176(2), 499–519.
Gannon, M. J., Northern, J. C., & Carroll, S. J., Jr. (1971). Characteristics of nonrespondents among workers. Journal of Applied Psychology, 55(6), 586–88.
Eva, G., Loosveldt, G., Lynn, P., Martin, P., Revilla, M., Saris, W., & Vannieuwenhuyze, J. (2010). ESS Prep6 – Mixed-Mode Experiment. Final Mode Report. Unpublished research report. London: Centre for Comparative Social Surveys, City University.
Goyder, J. (1987). The Silent Minority: Nonrespondents on Sample Surveys. Cambridge: Polity Press.
Goyder, J., Lock, J., & McNair, T. (1992). Urbanization effects on survey non-response: a test within and across cities. Quality and Quantity, 26(1), 39–48.
Grady, W. R. (1981). National Survey of Family Growth, Cycle II: Sample design, estimation procedures, and variance estimation. Data Evaluation and Methods Research, Series 2, Number 87. DHHS Publication No. (PHS) 81–1361. Hyattsville, MD: US Department of Health and Human Services, National Center for Health Statistics.
Griffin, D. H. (2012). Evaluating Response in the American Community Survey by Race and Ethnicity. Final Report. Washington, DC: US Census Bureau.
Groves, R. M., Cialdini, R. B., & Couper, M. P. (1992). Understanding the decision to participate in a survey. Public Opinion Quarterly, 56(4), 475–95.
Groves, R. M., & Couper, M. P. (1998). Nonresponse in Household Interview Surveys. New York: John Wiley & Sons.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2004). Survey Methodology. Hoboken, NJ: John Wiley & Sons.
Groves, R. M., & Heeringa, S. G. (2006). Responsive design for household surveys: tools for controlling survey errors and costs. Journal of the Royal Statistical Society: Series A (Statistics in Society), 169(3), 439–57.
Groves, R. M., & Kahn, R. L. (1979). Surveys by Telephone: A National Comparison with Personal Interviews. New York: Academic Press.
Groves, R. M., & McGonagle, K. A. (2001). A theory-guided interviewer training protocol regarding survey participation. Journal of Official Statistics, 17(2), 249–66.
Groves, R. M., & Peytcheva, E. (2008). The impact of non-response rates on non-response bias: a meta-analysis. Public Opinion Quarterly, 72(2), 167–89.
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation. Public Opinion Quarterly, 64(3), 299–308.
Haan, M., Ongena, Y. P., & Aarts, C. W. A. M. (2014). Reaching hard-to-survey populations: mode choice and mode preference. Journal of Official Statistics, 30(2), 1–25.
Hoffer, T., Grigorian, K., & Fesco, R. (2007, July). Effectiveness of Using Respondent Mode Preference Data. Paper presented at the Joint Statistical Meetings of the American Statistical Association, Salt Lake City.
Holbrook, A. L., Green, M. C., & Krosnick, J. A. (2003). Telephone vs. face-to-face interviewing of national probability samples with long questionnaires: comparisons of respondent satisficing and social desirability response bias. Public Opinion Quarterly, 67(1), 79–125.
Hoopman, R., Terwee, C. B., Muller, M. J., Öry, F. G., & Aaronson, N. K. (2009). Methodological challenges in quality of life research among Turkish and Moroccan ethnic minority cancer patients: translation, recruitment and ethical issues. Ethnicity & Health, 14(3), 237–53.
Japec, L. (2008). Interviewer error and interviewer burden. In Lepkowski, J. M., Tucker, C., Brick, J. M., de Leeuw, E. D., Japec, L., Lavrakas, P. J., Link, M. W., & Sangster, R. L. (eds.), Advances in Telephone Survey Methodology (pp. 187–211). Hoboken, NJ: John Wiley & Sons.
Joshipura, M. (2008). 2005 ACS Respondent Characteristics Evaluation: Evaluation Report. DSSD American Community Survey Research and Evaluation Memorandum Series Chapter #ACS-RE-2. Washington, DC: US Census Bureau.
Kaplowitz, M. D., Hadlock, T. D., & Levine, R. (2004). A comparison of web and mail survey response rates. Public Opinion Quarterly, 68(1), 94–101.
Kaplowitz, M. D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The effect of invitation design on web survey response rates. Social Science Computer Review, 30, 339–49.
Keeter, S., Miller, C., Kohut, A., Groves, R. M., & Presser, S. (2000). Consequences of reducing nonresponse in a national telephone survey. Public Opinion Quarterly, 64(2), 125–48.
Kreuter, F. (2013). Facing the nonresponse challenge. ANNALS of the American Academy of Political and Social Science, 645(1), 23–35.
Kreuter, F., & Kohler, U. (2009). Analyzing contact sequences in call record data: potential and limitations of sequence indicators for non-response adjustments in the European Social Survey. Journal of Official Statistics, 25(2), 203–26.
Kreuter, M. W. (2003). Tailored and targeted health communication: strategies for enhancing information relevance. American Journal of Health Behavior, 27(Suppl. 3), 227–32.
Kreuter, M. W., Farrell, D., Olevitch, L., & Brennan, L. (1999). Tailored Health Messages: Customizing Communication with Computer Technology. Mahwah, NJ: Erlbaum.
Laflamme, F., & Karaganis, M. (2010, December). Implementation of Responsive Collection Design for CATI Surveys at Statistics Canada. Paper presented at the Symposium on Recent Advances in the Use of Paradata in Social Survey Research, London.
Laurie, H., & Lynn, P. (2009). The use of respondent incentives on longitudinal surveys. In Lynn, P. (ed.), Methodology of Longitudinal Surveys (pp. 205–33). Hoboken, NJ: John Wiley & Sons.
Laurie, H., Smith, R., & Scott, L. (1999). Strategies for reducing nonresponse in a longitudinal panel survey. Journal of Official Statistics, 15(2), 269–82.
Lillard, L. A., & Panis, C. W. A. (1998). Panel attrition from the Panel Study of Income Dynamics: household income, marital status and mortality. Journal of Human Resources, 33(2), 437–57.
Lin, I., & Schaeffer, N. C. (1995). Using survey participants to estimate the impact of nonparticipation. Public Opinion Quarterly, 59(2), 236–58.
Lipps, O. (2012). Using information from telephone panel surveys to predict reasons for refusal. Methoden – Daten – Analysen, 6(1), 3–20.
McGonagle, K., Couper, M., & Schoeni, R. F. (2011). Keeping track of panel members: an experimental test of a between-wave contact strategy. Journal of Official Statistics, 27(2), 319–38.
Marsden, P. V., & Wright, J. D. (2010). Handbook of Survey Research. Bingley: Emerald.
Martin, E., Abreu, D., & Winters, F. (2001). Money and motive: effects of incentives on panel attrition in the Survey of Income and Program Participation. Journal of Official Statistics, 17(2), 267–84.
Maynard, D. W., Freese, J., & Schaeffer, N. C. (2011). Requests, blocking moves, and rational (inter)action in survey introductions. American Sociological Review, 75(5), 791–814.
Maynard, D. W., & Schaeffer, N. C. (2002). Refusal conversion and tailoring. In Maynard, D. W., Houtkoop-Steenstra, H., Schaeffer, N. C., & van der Zouwen, J. (eds.), Standardization and Tacit Knowledge: Interaction and Practice in the Survey Interview (pp. 219–39). New York: John Wiley & Sons.
Moore, J. (1988). Self/proxy response status and survey response quality: a review of the literature. Journal of Official Statistics, 4(2), 155–72.
Morton-Williams, J. (1993). Interviewer Approaches. London: Dartmouth Publishing Company.
Nealon, J. (1983). The effects of male vs. female telephone interviewers. In Joint Statistical Meetings Proceedings, Survey Research Methods Section (pp. 139–41). Alexandria, VA: American Statistical Association.
Olson, K. (2013). Paradata for nonresponse adjustment. ANNALS of the American Academy of Political and Social Science, 645(1), 142–70.
Olson, K., Lepkowski, J. M., & Garabrant, D. H. (2011). An experimental examination of the content of persuasion letters on nonresponse rates and survey estimates in a nonresponse follow-up study. Survey Research Methods, 5(1), 21–26.
Olson, K., Smyth, J. D., & Wood, H. M. (2012). Does providing respondents with their preferred survey mode really increase participation rates? Public Opinion Quarterly, 76(4), 611–35.
Olson, K., & Witt, L. (2011). Are we keeping the people who used to stay? Changes in correlates of panel survey attrition over time. Social Science Research, 40(4), 1037–50.
Pondman, L. M. (1998). The Influence of the Interviewer on the Refusal Rate in Telephone Surveys. Amsterdam: Free University of Amsterdam.
Porter, S. R. (2004). Raising response rates: what works? New Directions for Institutional Research, 121, 5–21.
Rodgers, W. (2011). Effects of increasing the incentive size in a longitudinal study. Journal of Official Statistics, 27(2), 279–99.
Schneider, S. J., Cantor, D., Malakhoff, L., Arieira, C., Segel, P., Nguyen, K., & Tancreto, J. G. (2005). Telephone, internet and paper data collection modes for the Census 2000 short form. Journal of Official Statistics, 21(1), 89–101.
Schouten, B., Calinescu, M., & Luiten, A. (2011). Optimizing Quality of Response Through Adaptive Survey Designs. Discussion Paper (201118). The Hague/Heerlen: Statistics Netherlands.
Shahar, E., Folsom, A. R., & Jackson, R. (1996). The effect of nonresponse on prevalence estimates for a referent population: insights from a population-based cohort study. Annals of Epidemiology, 6(6), 498–506.
Singer, E. (2002). The use of incentives to reduce nonresponse in household surveys. In Groves, R. M., Dillman, D. A., Eltinge, J. L., & Little, R. J. A. (eds.), Survey Nonresponse (pp. 163–78). New York: John Wiley & Sons.
Singer, E., Groves, R. M., & Corning, A. (1999). Differential incentives: beliefs about practices, perceptions of equity, and effects on survey participation. Public Opinion Quarterly, 63(2), 251–60.
Singer, E., Van Hoewyk, J., & Maher, M. P. (2000). Experiments with incentives in telephone surveys. Public Opinion Quarterly, 64(2), 171–88.
Smith, T. W. (1983). The hidden 25 percent: an analysis of nonresponse on the 1980 General Social Survey. Public Opinion Quarterly, 47(3), 386–404.
Stoop, I. (2005). The Hunt for the Last Respondent: Non-Response in Sample Surveys. The Hague: Social and Cultural Planning Agency.
Sudman, S., & Kalton, G. (1986). New developments in the sampling of special populations. Annual Review of Sociology, 12, 401–29.
Tancreto, J. G., Zelenak, M. F., Davis, M., Ruiter, M., & Matthews, B. (2012). 2011 American Community Survey Internet Tests: Results from First Test in April 2011. Final Report. Washington, DC: US Census Bureau.
Tourangeau, R., & Ye, C. (2009). The framing of the survey request and panel attrition. Public Opinion Quarterly, 73(2), 338–48.
Uhrig, S. C. N. (2008). The Nature and Causes of Attrition in the British Household Panel Study. Working Paper 2008–05. Colchester: Institute for Social and Economic Research, University of Essex.
Wagner, J. R. (2008). Adaptive Survey Design to Reduce Nonresponse Bias. Ann Arbor, MI: University of Michigan Press.
Watson, N., & Wooden, M. (2009). Identifying factors affecting longitudinal survey response. In Lynn, P. (ed.), Methodology of Longitudinal Surveys (pp. 157–81). New York: John Wiley & Sons.
Watson, N., & Wooden, M. (2011). Re-Engaging with Survey Non-Respondents: The BHPS, SOEP and HILDA Survey Experience. Working Paper Series No. 2/11. Melbourne: Melbourne Institute of Applied Economic and Social Research, University of Melbourne.
West, B. T. (2012). An examination of the quality and utility of interviewer observations in the National Survey of Family Growth. Journal of the Royal Statistical Society: Series A (Statistics in Society), 176(1), 211–25.
