
Survey Nonresponse and Mass Polarization: The Consequences of Declining Contact and Cooperation Rates

Published online by Cambridge University Press:  02 May 2022

AMNON CAVARI, Reichman University, Israel

GUY FREEDMAN, University of Texas at Austin, United States

Amnon Cavari, Assistant Professor, Lauder School of Government, Diplomacy and Strategy, Reichman University (IDC), Israel, cavari@idc.ac.il.
Guy Freedman, PhD Candidate, Department of Government, University of Texas at Austin, United States, freedmanguy@utexas.edu.

Abstract

Recent studies question whether declining response rates in survey data overstate the level of polarization of Americans. At issue are the sources of declining response rates—declining contact rates, associated mostly with random polling mechanisms, or declining cooperation rates, associated with personal preferences, knowledge, and interest in politics—and their differing effects on measures of polarization. Assessing 158 surveys (2004–2018), we show that declining cooperation is the primary source of declining response rates and that it leads to survey overrepresentation of people who are more engaged in politics. Analyzing individual responses to 1,223 policy questions in those surveys, we further show that, conditional on the policy area, this survey bias overestimates or underestimates the partisan divide among Americans. Our findings question the perceived strength of mass polarization and move forward the discussion about the effect of declining survey response on generalizations from survey data.

Type
Letter
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the American Political Science Association

Scholars increasingly consider how the rapid decline in survey response affects various measures of public opinion. The emerging consensus is that low response rates—which today fall below 10% in most polling organizations in the United States—do not necessarily generate survey bias (Jennings and Wlezien 2018; Keeter 2018). Rather, the quality of survey measurements depends on the correlation between the characteristics of the resulting sample and the measures of interest (Clinton et al. 2021; Prosser and Mellon 2018). Such bias may be at issue when measuring polarization, where overrepresentation of certain, more engaged, knowledgeable, and polarized groups may affect survey measurements (Abramowitz and Saunders 2008).

To illustrate this suggested bias, Figure 1 plots the Kendall tau correlation coefficients between party identification (Democrat-Republican, 7-point scale) and responses to six policy questions (liberal-conservative, 7-point scales) that have been asked repeatedly in every American National Election Study (ANES) from 1984 to 2020, along with the unit response rates in those surveys.
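For readers who want to reproduce this kind of series, here is a minimal R sketch of the correlation behind Figure 1, assuming a hypothetical data frame anes with a 7-point party identification variable pid7, a 7-point policy item policy7, and a year column (all names are ours, for illustration only):

```r
# Kendall's tau between a 7-point party identification item and a
# 7-point policy item, computed within one survey wave.
polarization_tau <- function(df) {
  cor(df$pid7, df$policy7, method = "kendall", use = "pairwise.complete.obs")
}

# To trace one series in Figure 1, repeat the calculation by study year:
# taus <- sapply(split(anes, anes$year), polarization_tau)
```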

Table 1. The Effect of Measures of Response Rates on Measures of Mass Polarization in the US

Note: Models estimated using the “lmridge” package in R (Imdad and Aslam 2018). The K tuning parameter is the KM4 estimator proposed by Muniz and Kibria (2009), who find that KM4 yields the lowest MSE values in samples of N = 100 when the correlation between two predictors is 0.9, which most resembles our data. *p < 0.05, **p < 0.01, ***p < 0.001.

Figure 1. Correlation between Policy Preferences and Party Attachment in ANES Data

The figure demonstrates a gradual increase in measured polarization—higher correlation coefficients—across all series until 2012, a surprising drop in 2016, and a return to the trend in 2020. We suggest that the changing response rate (white line) explains these puzzling changes in polarization. Response rates—the proportion of sampled individuals who complete a poll—in ANES surveys have been among the highest in U.S. polling. But even this highly acclaimed series has not been immune to the general trend of declining survey response: about 70% in the 1980s, dropping to 60% in the following two decades, and below 50% in the last three election cycles. With every drop, we see an increase in the measure of polarization. In 2016 both trends reversed—the response rate rose, and all six policy measures declined. In 2020 both trends reversed again. This descriptive illustration speaks for itself: existing measures of polarization are strongly associated with survey response rates (r = -0.82, p < 0.001). While we concur that the polarization of Americans is real (Abramowitz 2018; Campbell 2016), the low response rates in current probability samples may be exaggerating the perceived level of polarization.

In a previous study (Cavari and Freedman 2018; hereafter CF), we offered empirical support for this claim. Analyzing rich survey data from Pew, we demonstrated that as survey response declines, survey samples overrepresent politically engaged respondents who report more polarized views on several domestic issues. Using the same data and additional simulations, Mellon and Prosser (2021; hereafter MP) suggest that the relationship between survey response and nonresponse bias depends on the cause of low survey response, specifically, declining contact rates, associated with random polling mechanisms and caller ID features that offer screening abilities, or declining cooperation rates, associated mostly with personal preferences, knowledge, and interest in politics.

To establish that declining survey response produces a bias toward engaged and involved respondents, we should, therefore, consider the cause of nonresponse. If the primary causes are random effects of increased cold-calling or the socially driven ability to avoid contact using caller ID, we should expect no consistent, directional effect of contact rates on polarization (Contact Hypothesis). If, however, the primary cause is the purposeful refusal to participate in surveys, then we should expect the personal preferences, knowledge, and interest in politics that affect cooperation to generate an engagement bias that is correlated with measures of polarization (Cooperation Hypothesis).

We expect that the effect of declining cooperation on measures of polarization is conditioned on the policy domain. Specifically, we expect that the decline in cooperation rates—resulting in an overrepresentation of an engaged public—is associated with increased measures of polarization on domestic performance issues (economy, immigration, and energy). Because neither party owns these issues, public attitudes and diverging partisan views are affected by political knowledge and awareness of elite positions (Egan 2013). We do not expect a similar association on civil rights and social welfare—issues that have traditionally displayed strong disagreements between the parties, have further polarized over time (Campbell 2016; Webster and Abramowitz 2017), and divide partisans at most levels of political awareness (Claassen and Highton 2009). In contrast, we expect that on foreign policy, where Americans possess little information and appear to rely heavily on various leadership cues to form an opinion (Guisinger and Saunders 2017), a decline in cooperation rates is associated with a decrease in polarization. Politically engaged participants, who make up a larger share of low-cooperation-rate surveys, are expected to demonstrate a weaker divide: they are more likely to be informed about, and to have their positions affected by, real events and nonpartisan professional cues (Gelpi 2010; Sulfaro 1996); they are more likely to revert to more structured, purposeful attitudes (Cavari and Freedman 2021; Page and Bouton 2006); and they are less likely to see distinct foreign policy types among elites (Kertzer, Brooks, and Brooks 2021).

To test the relative effect of contact and cooperation rates on polarization across policy domains, we updated the data used by CF and MP to include survey responses to 1,223 policy questions from 158 Pew surveys collected between 2004 and 2018 that report information on the two measures of survey response. Following the two previous studies, we operationalized the dependent variable (polarization) as the Cohen’s d coefficient of the mean difference between the parties and divided the data into six issue domains: economy, civil rights, energy, immigration, social welfare, and foreign affairs (footnote 1). The results suggest that declining cooperation rates are the primary factor in declining response rates and that poor response rates—whether caused by random contact failures or by purposive refusals to cooperate—bias survey measurements of polarization. The cause, direction, and strength of the bias are conditional on the policy issue.
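To make the outcome measure concrete, here is a brief R sketch of the Cohen’s d calculation for a single policy question, using the standard pooled-standard-deviation formula; the vector names are hypothetical, and the paper does not spell out which pooling variant it uses, so treat this as one common implementation rather than the authors’ exact code:

```r
# Cohen's d for the Republican-Democrat gap on one policy item.
# rep_resp and dem_resp are numeric response vectors (illustrative names).
cohens_d <- function(rep_resp, dem_resp) {
  n1 <- length(rep_resp)
  n2 <- length(dem_resp)
  s_pooled <- sqrt(((n1 - 1) * var(rep_resp) + (n2 - 1) * var(dem_resp)) /
                     (n1 + n2 - 2))
  (mean(rep_resp) - mean(dem_resp)) / s_pooled
}
```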

Declining Response Rate and Survey Bias

The apparent decline in unit response in probability-sample survey data has generated scholarly interest in the extent to which this decline causes survey bias. The evidence suggests that a drop in survey response does not necessarily generate survey bias (Prosser and Mellon 2018). For example, Jennings and Wlezien (2018) analyze the accuracy of election polls in national surveys from 45 countries between 1942 and 2017. They find that although declining response rates pose real challenges to the representativeness of surveys, we may still have a reasonable portrait of electoral preferences because there are more polls today, often with larger samples, and most pollsters have incorporated weighting and other techniques to increase representativeness. This reassurance does not hold, however, when nonresponse is correlated with the variable of interest. Such survey bias may explain some of the 2020 preelection polling misses (Clinton et al. 2021; Keeter, Kennedy, and Deane 2020; Panagopoulos 2021).

Studying the effect of nonresponse on measures of polarization is difficult because we lack information on the demographics and preferences of those not included in the polls (Berinsky 2004; Clinton et al. 2021). And yet, examining the characteristics of those in the sample can reveal possible biases that may correlate with measures of polarization. Specifically, a rich body of scholarly work shows that surveys overrepresent politically interested respondents (Groves, Presser, and Dipko 2004; Keeter et al. 2006; Mellon and Prosser 2017; Tourangeau, Groves, and Redline 2010) and that politically engaged respondents are more polarized than unengaged respondents (Abramowitz and Saunders 2008).

In panel 1 of Figure 2, we offer an empirical illustration of the engagement bias in the Pew data. For each survey, we calculated the proportion of the sample with higher (academic) education—a primary correlate (and cause) of political engagement (Burns, Schlozman, and Verba 2001; Hillygus 2005; Perrin and Gillis 2019)—and plotted it as a function of unit response in the survey. We reversed the horizontal axis to demonstrate the effect of a decline in unit response. Each dot is one survey. The curved line is a fitted cubic line (footnote 2). As unit response declines, the share of respondents with a college degree in survey data increases—an incremental rise as unit response declines below 30% and a rapid climb as unit response drops below 10%.

Figure 2. Correlation between Survey Response and Level of Education in Survey Data in the US

In panel 2, we compare annual census data on college attainment to the percentage of self-reported college graduates in our surveys (reporting the weighted average of all surveys each year). The figure illustrates the consistent, yet growing, gap over time. Although education bias in survey data has been characteristic of the last two decades, it has increased substantially in recent years. We are especially concerned with survey data when unit response rates drop below 10% and the education gap in survey data exceeds 15 percentage points.

Comparing the Decline of Contact and Cooperation Rates

Following MP, we examine the cause of nonresponse—failure to contact respondents or refusal to cooperate with the interviewer. Figure 3 plots each measure from all available Pew surveys (N = 158; footnote 3). Contact rate refers to the proportion of all cases in which a responsible person from the contacted housing unit was reached. Cooperation rate refers to the proportion of all eligible units contacted that were interviewed. Unit response rate refers to the number of complete interviews divided by the number of eligible reporting units in the sample. Because our variables of interest differ between landline and cellphone samples, we plot the measures for each frame separately but fit unified trend lines.
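As a simplified illustration of how the three rates relate, the sketch below computes them from aggregate disposition counts; it collapses the detailed AAPOR case codes (for example, the treatment of unknown-eligibility cases) into three counts, and all names and numbers are ours:

```r
# Simplified survey outcome rates from aggregate disposition counts.
# eligible   = eligible sampled units
# contacted  = units where a responsible household member was reached
# interviews = completed interviews
survey_rates <- function(eligible, contacted, interviews) {
  list(
    contact_rate     = contacted / eligible,
    cooperation_rate = interviews / contacted,
    response_rate    = interviews / eligible
  )
}

# Hypothetical counts: the response rate is the product of the other two.
survey_rates(eligible = 10000, contacted = 2000, interviews = 600)
```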

Figure 3. Trends in the Decline of Contact, Cooperation, and Response Rates in the US, 2004–2018

All three measures have declined considerably. In 2004, contact rates were at about 70%, and cooperation rates among those contacted were about 50%. Together, these two components produced overall unit response rates of about 30%. Over time, pollsters have found it more challenging to contact respondents (gray line), and of those contacted, fewer and fewer agree to cooperate (white line). This combination produces the low unit response rates that are characteristic of recent telephone surveys—under 10% (black line). The decline in response rates, therefore, reflects both the selection created by new contact technologies and social habits of telephone use and the declining cooperation of people who are contacted. Yet, although both components of survey response have dropped, the decline of the contact rate (a ratio of 0.7 from 2004 to 2018) is overshadowed by the steeper collapse of the cooperation rate (a ratio of 0.3).

Response Rates and Party Polarization

To assess the effect of declining response rates on perceived polarization, we estimated our measure of polarization for each policy question in our data (N = 1,223, divided into six policy domains) as a function of response rates and elite polarization (Table 1). For response rates, we accounted for the three measures discussed above: the overall unit response rate and its two components—contact and cooperation. Because unit response is a function of contact and cooperation, we estimated two models for each policy domain—one with overall unit response rates (model 1 for each policy domain) and one with contact and cooperation rates (model 2 for each policy domain; footnote 4).

To account for the possible effect of increasing elite polarization on American public opinion, we included in all models the level of congressional polarization on the issue measured. Our data are House roll-call votes on policy-related issues between 2004 and 2018. Similar to the public opinion data, we coded the vote—aye (1) or nay (0)—of each Representative on each roll call and calculated the average Cohen’s d for the mean difference between Republicans and Democrats on all votes.

To control for the evident trend of increasing polarization over time, we included a linear time trend (footnote 5). Because of the high correlation between time and polarization (footnote 6), we estimated ridge regression models, which reduce the variance inflation of coefficient estimates for correlated predictors by shrinking the coefficients toward zero (Hoerl and Kennard 1970; Rawlings, Pantula, and Dickey 1998; Seber and Lee 2003; Tripp 1983; see also footnote 7).
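A rough R sketch of one such model (model 2 for a single policy domain) is below. It substitutes MASS::lm.ridge for the lmridge package used in the paper and supplies an illustrative ridge penalty instead of the KM4 estimator; the data frame and column names are hypothetical:

```r
library(MASS)  # lm.ridge(); the paper itself uses the 'lmridge' package

# Each row of polarization_df (hypothetical) is one policy question, with
# cohens_d, contact_rate, cooperation_rate, elite_d (House roll-call gap),
# and year (linear time trend).
fit_model2 <- function(polarization_df, k = 0.5) {
  # k is an illustrative ridge penalty; the paper selects it with the
  # KM4 estimator of Muniz and Kibria (2009), not reproduced here.
  lm.ridge(cohens_d ~ contact_rate + cooperation_rate + elite_d + year,
           data = polarization_df, lambda = k)
}

# Example: coefficients for the economy domain (hypothetical column 'domain').
# coef(fit_model2(subset(polarization_df, domain == "economy")))
```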

Overall Response Models (1). Consistent with conventional wisdom and with MP, the effect of time is positive and significant on all domestic issues—economy, energy, immigration, civil rights, and welfare—and negative on foreign policy. Americans are increasingly polarizing on most issues on the public agenda. Consistent with CF, the results suggest that declining overall response rates contribute to the increase in measured polarization on the three major domestic issues—economy, energy, and immigration; the effect of response rates is insignificant on civil rights and welfare and positive (and significant) on foreign policy.

Contact and Cooperation Models (2). When we separate the two components of survey response—contact and cooperation—we find support for the cooperation hypothesis on our performance topics: economy, energy, immigration, and foreign affairs. Cooperation rates are negatively associated with polarization on the three domestic issues and positively associated with polarization on foreign affairs. As expected, we find no association between cooperation rates and polarization on civil rights and welfare.

The effect of the contact rate is not significant on the economy, negative on energy, immigration, and civil rights, and positive on welfare and foreign affairs. These mixed results are consistent with the contact hypothesis, which expects no consistent, directional effect of contact rates on polarization. Further research is needed to draw conclusions about the random or purposeful nature of contact rates and how they affect measures of polarization.

Conclusion

The political polarization of Americans has attracted significant scholarly, media, and foundation attention in recent years, with growing concern about what this trend means for American democracy. Therefore, getting mass polarization right is a primary task for political scientists and should be a concern to the news industry. We show that declining response rates in probability surveys—a primary tool for assessing polarization—elevate perceived polarization on some topics (economy, energy, immigration) and downplay it on others (foreign affairs). Simply put, we are mismeasuring one of the most heated topics in political science today.

More broadly, survey data are frequently used in political science research and routinely discussed in the news, surveys are used by decision makers to formulate policy and by candidates to devise electoral strategy, and evidence from surveys has been shown to affect the political behavior of Americans. Given the importance of this tool and the various challenges it faces, the polling industry is experimenting with polling techniques that include probability and nonprobability sampling designs. Despite their declining response rates, probability surveys are still a valuable tool that can produce accurate and reliable estimates of public behavior, far better than the alternative of nonprobability internet samples (Dutwin and Buskirk 2017; Prosser and Mellon 2018). And yet, the minuscule response rates that have become the norm in probability samples today demand caution in using this tool, especially if we suspect that nonresponse is associated with our outcome variable.

Correcting these biases in such low-response, truncated surveys using postsample statistical tools may lead to colossal errors (Brehm 1993). Any application of postsample weights assumes we can infer the attitudes of those not responding from those who respond. However, this assumption is rarely met. Those not responding may hold attitudes similar to those of their peer demographic or political groups (between-group nonresponse) or hold views that differ from those of their peer groups for some unidentified reason (within-group nonresponse). Without knowing which of the two explanations (or which mix of them) is true, we cannot be confident in our estimation (Clinton et al. 2021). People who do not respond to the survey may be more polarized than, less polarized than, or as polarized as people who do respond. Even if nonresponse does not have a pronounced effect on measures of overall policy or voting preferences, which focus on and estimate an average value, it may have dramatic effects when assessing a divide on a policy, which focuses on the variance between groups. And, unlike preelection polls, we do not have a postpolling reference (the actual vote) to use as a benchmark for comparison. We simply do not know the extent to which Americans are polarized over policy.

Low response rates, which are a staple of current polling, bias our understanding of the political and social life that we, as researchers, are tasked with describing. The evidence presented here suggests that the onus is on the researcher to justify generalizations from survey data that rely on the selective group captured in samples with low response rates. Nonresponse undermines scientific representation, reducing the extent to which surveys provide an accurate portrait of the public (Brehm 1993). Empirical research follows strict routines to improve confidence when making causal claims based on statistical models. Researchers should apply similar caution when evaluating survey data with a response rate of 6% or 8% to understand what the general public wants, thinks, and does politically.

Supplementary Materials

To view supplementary material for this article, please visit http://doi.org/10.1017/S0003055422000399.

DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the American Political Science Review Dataverse: https://doi.org/10.7910/DVN/UECUBY.

Acknowledgments

We thank Ken Goldstein, Alex Mintz, Chris Wlezien, the anonymous reviewers, and editors of APSR for their helpful comments on various versions of this project.

CONFLICT OF INTEREST

The authors declare no ethical issues or conflicts of interest in this research.

ETHICAL STANDARDS

The authors affirm this research did not involve human subjects.

Footnotes

1 Categorization of policies relies on the policy coding scheme of the Policy Agendas Project, combined into the six main categories described by the first author (Cavari 2017, 125). See the online supplementary material for a description of the categories and the operationalization of our variables. A list of the survey questions used and their policy categorization is available on the dataverse (Cavari and Freedman 2022).

2 We produced the trend line using k-fold cross-validation, splitting the data (N = 158) into 10 random samples (k), and regressing the proportion of college-educated on unit response rate within each sample. We estimated each model 15 times, raising the unit response rate to the power of 1 through 15 and saving the R². We then calculated the mean R² for each power across all 10 samples to find the best fit.
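A sketch of this search, under our reading of the procedure (a polynomial of degree 1 through 15 in the unit response rate), assuming a hypothetical data frame surveys with columns pct_college and rr; the fold assignment and seed are illustrative:

```r
# Fold-averaged R^2 search over polynomial powers of the unit response rate.
select_power <- function(surveys, k_folds = 10, max_power = 15) {
  set.seed(1)  # illustrative seed for the random fold assignment
  fold <- sample(rep(1:k_folds, length.out = nrow(surveys)))
  mean_r2 <- sapply(1:max_power, function(p) {
    mean(sapply(1:k_folds, function(f) {
      m <- lm(pct_college ~ poly(rr, p, raw = TRUE),
              data = surveys[fold == f, ])
      summary(m)$r.squared
    }))
  })
  which.max(mean_r2)  # power with the highest mean R^2 across the 10 folds
}
```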

3 All rates are computed according to the American Association for Public Opinion Research’s Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 9th edition (AAPOR, 2016).

4 To account for differences in response rates between landline and cellphone surveys and the changing share of each type in surveys over time, we calculated the weighted response rates as a function of the percentage of landline and cellphone respondents.
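A one-line illustration of that weighting, with hypothetical frame shares and frame-specific rates:

```r
# Weighted unit response rate for a survey fielded on two frames
# (illustrative: 60% landline at a 9% rate, 40% cellphone at a 7% rate).
weighted_rr <- 0.6 * 0.09 + 0.4 * 0.07  # 0.082, i.e., 8.2%
```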

5 Adding a polynomial term for time does not affect the results. Model summary is included in the supplementary materials.

6 Pearson correlations (p < 0.001): Economy 0.95; Energy 0.93; Immigration 0.96; Civil Rights 0.97; Welfare 0.96; Foreign Policy 0.97.

7 Ridge regression has been used in the study of politics in various contexts: in measuring electoral change (Miller 1972), in assessing the domestic effects of US defense spending (Mintz and Huang 1990), and, most recently, in offering insights into the benefits of predictive modeling (Cranmer and Desmarais 2017).

References

Abramowitz, Alan I. 2018. The Great Alignment: Race, Party Transformation, and the Rise of Donald Trump. New Haven, CT: Yale University Press.
Abramowitz, Alan I., and Saunders, Kyle L. 2008. “Is Polarization a Myth?” Journal of Politics 70 (2): 542–55.
Berinsky, Adam J. 2004. Silent Voices: Public Opinion and Political Participation in America. Princeton, NJ: Princeton University Press.
Brehm, John. 1993. The Phantom Respondents: Opinion Surveys and Political Representation. Ann Arbor: University of Michigan Press.
Burns, Nancy, Schlozman, Kay Lehman, and Verba, Sidney. 2001. The Private Roots of Public Action: Gender, Equality, and Political Participation. Cambridge, MA: Harvard University Press.
Campbell, James E. 2016. Polarized: Making Sense of a Divided America. Princeton, NJ: Princeton University Press.
Cavari, Amnon. 2017. The Party Politics of Presidential Rhetoric. New York: Cambridge University Press.
Cavari, Amnon, and Freedman, Guy. 2018. “Polarized Mass or Polarized Few? Assessing the Parallel Rise of Survey Nonresponse and Measures of Polarization.” Journal of Politics 80 (2): 719–25.
Cavari, Amnon, and Freedman, Guy. 2021. American Public Opinion toward Israel: From Consensus to Divide. New York: Routledge.
Cavari, Amnon, and Freedman, Guy. 2022. “Replication Data for: Survey Nonresponse and Mass Polarization: The Consequences of Declining Contact and Cooperation Rates.” Harvard Dataverse. Dataset. https://doi.org/10.7910/DVN/UECUBY.
Claassen, Ryan L., and Highton, Benjamin. 2009. “Policy Polarization among Party Elites and the Significance of Political Awareness in the Mass Public.” Political Research Quarterly 62 (3): 538–51.
Clinton, Josh, Agiesta, Jennifer, Brenan, Megan, Burge, Camille, Connelly, Marjorie, Edwards-Levy, Ariel, and Fraga, Bernard. 2021. “Task Force on 2020 Pre-Election Polling: An Evaluation of the 2020 General Election Polls.” New York: American Association for Public Opinion Research.
Cranmer, Skyler J., and Desmarais, Bruce A. 2017. “What Can We Learn from Predictive Modeling?” Political Analysis 25 (2): 145–66.
Dutwin, David, and Buskirk, Trent D. 2017. “Apples to Oranges or Gala versus Golden Delicious? Comparing Data Quality of Nonprobability Internet Samples to Low Response Rate Probability Samples.” Public Opinion Quarterly 81 (S1): 213–39.
Egan, Patrick J. 2013. Partisan Priorities: How Issue Ownership Drives and Distorts American Politics. New York: Cambridge University Press.
Gelpi, Christopher. 2010. “Performing on Cue? The Formation of Public Opinion toward War.” Journal of Conflict Resolution 54 (1): 88–116.
Groves, Robert M., Presser, Stanley, and Dipko, Sarah. 2004. “The Role of Topic Interest in Survey Participation Decisions.” Public Opinion Quarterly 68 (1): 2–31.
Guisinger, Alexandra, and Saunders, Elizabeth N. 2017. “Mapping the Boundaries of Elite Cues: How Elites Shape Mass Opinion across International Issues.” International Studies Quarterly 61 (2): 425–41.
Hillygus, Sunshine D. 2005. “The Missing Link: Exploring the Relationship between Higher Education and Political Engagement.” Political Behavior 27 (1): 25–47.
Hoerl, Arthur E., and Kennard, Robert W. 1970. “Ridge Regression: Biased Estimation for Nonorthogonal Problems.” Technometrics 12 (1): 55–67.
Imdad, Muhammad Ullah, and Aslam, Muhammad. 2018. “lmridge: Linear Ridge Regression with Ridge Penalty and Ridge Statistics.” R package version 1.2. https://CRAN.R-project.org/package=lmridge.
Jennings, Will, and Wlezien, Christopher. 2018. “Election Polling Errors across Time and Space.” Nature Human Behaviour 2 (4): 276–83.
Keeter, Scott. 2018. “Are Public Opinion Polls Doomed?” Nature Human Behaviour 2 (4): 246–47.
Keeter, Scott, Kennedy, Courtney, and Deane, Claudia. 2020. “Understanding How 2020 Election Polls Performed and What It Might Mean for Other Kinds of Survey Work.” Pew Research Center. November 13. https://www.pewresearch.org/fact-tank/2020/11/13/understanding-how-2020s-election-polls-performed-and-what-it-might-mean-for-other-kinds-of-survey-work/.
Keeter, Scott, Kennedy, Courtney, Dimock, Michael, Best, Jonathan, and Craighill, Peyton. 2006. “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey.” International Journal of Public Opinion Quarterly 70 (5): 759–79.
Kertzer, Joshua D., Brooks, Deborah Jordan, and Brooks, Stephen G. 2021. “Do Partisan Types Stop at the Water’s Edge?” Journal of Politics 83 (4): 1764–82.
Mellon, Jonathan, and Prosser, Christopher. 2017. “Missing Nonvoters and Misweighted Samples: Explaining the 2015 Great British Polling Miss.” Public Opinion Quarterly 81 (3): 661–87.
Mellon, Jonathan, and Prosser, Christopher. 2021. “Correlation with Time Explains the Relationship between Survey Nonresponse and Mass Polarization.” Journal of Politics 83 (1): 390–95.
Miller, William L. 1972. “Measures of Electoral Change Using Aggregate Data.” Journal of the Royal Statistical Society: Series A (General) 135 (1): 122–42.
Mintz, Alex, and Huang, Chi. 1990. “Defense Expenditures, Economic Growth, and the ‘Peace Dividend.’” American Political Science Review 84 (4): 1283–93.
Muniz, Gisela, and Kibria, B. M. Golam. 2009. “On Some Ridge Regression Estimators: An Empirical Comparison.” Communications in Statistics—Simulation and Computation 38 (3): 621–30.
Page, Benjamin I., and Bouton, Marshall M. 2006. The Foreign Policy Disconnect: What Americans Want from Our Leaders but Don’t Get. Chicago, IL: University of Chicago Press.
Panagopoulos, Costas. 2021. “Accuracy and Bias in the 2020 U.S. General Election Polls.” Presidential Studies Quarterly 51 (1): 214–27.
Perrin, Andrew J., and Gillis, Alanna. 2019. “How College Makes Citizens: Higher Education Experiences and Political Engagement.” Socius 5: 1–16.
Prosser, Christopher, and Mellon, Jonathan. 2018. “The Twilight of the Polls? A Review of Trends in Polling Accuracy and the Causes of Polling Misses.” Government and Opposition 53 (4): 757–90.
Rawlings, John O., Pantula, Sastry G., and Dickey, David A. 1998. Applied Regression Analysis. New York: Springer Verlag.
Seber, George A. F., and Lee, Alan J. 2003. Linear Regression Analysis. Hoboken, NJ: John Wiley & Sons.
Sulfaro, Valerie A. 1996. “The Role of Ideology and Political Sophistication in the Structure of Foreign Policy.” American Politics Quarterly 24 (3): 303–37.
Tourangeau, Roger, Groves, Robert M., and Redline, Cleo D. 2010. “Sensitive Topics and Reluctant Respondents: Demonstrating a Link between Nonresponse Bias and Measurement Error.” Public Opinion Quarterly 74 (3): 413–32.
Tripp, Robert E. 1983. “Non-Stochastic Ridge Regression and Effective Rank of the Regressors Matrix.” PhD diss. Department of Statistics, Virginia Polytechnic Institute and State University.
Webster, Steven W., and Abramowitz, Alan I. 2017. “The Ideological Foundations of Affective Polarization in the US Electorate.” American Politics Research 45 (4): 621–47.