
The SRC Panel Data and Mass Political Attitudes

Published online by Cambridge University Press:  27 January 2009

Extract

One of the richest data sources for the study of public opinion is the Survey Research Center's panel study conducted in the late 1950s. Because the SRC interviewed its national panel of Americans three times over a four-year period, the SRC panel data allow the analysis of changes in survey responses over time. The most remarkable discovery from the SRC panel was that panelists changed their reported opinions on policy issues with considerable frequency when asked the same policy questions in different years. Moreover, the amount of observed change in the individual responses varied little with the time interval between responses. That is, the correlations between responses to the same issue item in 1956 and 1958 or in 1958 and 1960 (two years apart) were almost as low as the correlations between responses in 1956 and 1960 (four years apart).

Type
Articles
Copyright
Copyright © Cambridge University Press 1979


References

1 Converse, Philip E., ‘The Nature of Belief Systems in Mass Publics’, in Apter, David, ed., Ideology and Discontent (New York: The Free Press, 1964), pp. 206–61; Converse, Philip E., ‘Attitudes and Non-Attitudes: Continuation of a Dialogue’, in Tufte, Edward R., ed., The Quantitative Analysis of Social Problems (Reading, Mass.: Addison-Wesley, 1970), pp. 168–89. Butler and Stokes have made a similar interpretation from their British panel data. See Butler, David and Stokes, Donald, Political Change in Britain (London: Macmillan, 1969), pp. 173–200.

2 Pierce, John C. and Rose, David P., ‘Nonattitudes and American Public Opinion: The Examination of a Thesis’, American Political Science Review, LXVIII (1974), 626–49; Achen, Christopher, ‘Mass Political Attitudes and the Survey Response’, American Political Science Review, LXIX (1975), 1218–31.

3 See the communications in response to Achen's article and Achen's reply in American Political Science Review, LXX (1976), 1226–31.

4 Converse, ‘The Nature of Belief Systems in Mass Publics’, pp. 238–45.

5 Achen, ‘Mass Political Attitudes and the Survey Response’, pp. 1220–1; Pierce and Rose, ‘Nonattitudes and American Public Opinion’, pp. 644–5.

6 Converse, Philip E., ‘Comment’, American Political Science Review, LXVIII (1974), 650–60, a reply to Pierce and Rose, ‘Nonattitudes and American Public Opinion’.

7 The statistical literature defines ‘reliability’ and ‘error variance’ to be related as follows. Assuming the error variance is random (that is, errors are uncorrelated with true scores), the total (or observed) variance is the sum of the true variance and the error variance. ‘Reliability’ can be defined as the ratio of the true variance to the observed variance, or:

reliability = true variance/total variance

= (total variance – error variance)/total variance.

The error variance for an aggregate of individuals is the mean of the component individuals' error variances, which may themselves vary. That is, the variances of the individuals' multiple responses around their personal means (true scores) may vary. The variability of individual error variance is the point in dispute.
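The reliability ratio defined above can be illustrated with a small simulation; the true-score and error distributions here are invented for the sketch and are not drawn from the SRC data.

```python
import random

# Sketch: observed score = fixed true score + random error, with errors
# uncorrelated with true scores, so total variance = true + error variance.
random.seed(1)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

true_scores = [random.gauss(0, 1) for _ in range(10000)]
error_sd = 0.5                                    # assumed error spread
observed = [t + random.gauss(0, error_sd) for t in true_scores]

total_var = variance(observed)
error_var = error_sd ** 2                         # known by construction here
reliability = (total_var - error_var) / total_var
# With true variance 1 and error variance 0.25, reliability is near 1/1.25 = 0.8.
```

In practice the error variance is not known and must itself be estimated, which is the role of the Heise and Wiley–Wiley models discussed in the notes below.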

8 Achen, ‘Mass Political Attitudes and the Survey Response’, p. 1226.

9 Heise, David R., ‘Separating Reliability and Stability in Test–Retest Correlations’, American Sociological Review, XXXIV (1969), 93–101.

10 Wiley, David E. and Wiley, James A., ‘The Estimation of Error in Panel Data’, American Sociological Review, XXXV (1970), 112–17.

11 For details of the applications of the Heise and Wiley–Wiley models to the SRC panel data, see Erikson, Robert S., ‘Analyzing One-Variable Three-Wave Panel Data: A Comparison of Two Models’, Political Methodology, V (1978), forthcoming.

12 It would be difficult to specify a set of assumptions that could allow a considerable amount of true attitude change when the observed test-retest correlations are virtually invariant with the temporal distance between measurements. But one way the stability estimates could be lowered is by arbitrarily assigning lower than average reliabilities to the midpoint 1958 scores. A possible reason for considering this option is that 1958 was a midterm year. If issue attitudes are less crystallized in non-presidential years, the error variances might be greater in 1958 than in 1956 or 1960.

Fortunately, it is possible to make an independent check on the possibility that the observed attitudes for 1958 were disproportionately unreliable. If the observed opinions for 1958 were less reliable than those for the presidential years, one ought to have greater difficulty predicting 1958 attitudes from standard demographic variables. To see whether such a pattern could be found, attitudes for each issue for each wave were separately regressed against a series of dummy variables (measured for the given year) representing age, race, religion, education, head's occupation, and region. On average, the multiple Rs were greatest for 1958, next greatest for 1960, and lowest for 1956. Assuming constancy in the relationship between background variables and attitudes, this ordering suggests that the reliabilities were generally highest for 1958 and lowest for 1956. Since this is exactly the ordering suggested by the Wiley–Wiley assumptions, the evidence disconfirms the notion that the high stability estimates were spuriously obtained as a result of low item reliability in 1958.
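The wave-by-wave check described above can be sketched as follows. The demographic dummies and attitude scores here are simulated stand-ins, not the SRC panel data, and all names and numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical demographic dummies (e.g. race, region, education categories).
dummies = rng.integers(0, 2, size=(n, 6)).astype(float)
X = np.column_stack([np.ones(n), dummies])        # add an intercept column

def multiple_r(X, y):
    """Multiple correlation: the r between y and its least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return np.sqrt(1 - ss_res / ss_tot)

# Simulated attitude scores for three waves, partly driven by the dummies;
# the 1958 wave is given the smallest error spread for illustration.
true_part = dummies @ rng.normal(0, 1, size=6)
waves = {year: true_part + rng.normal(0, sd, size=n)
         for year, sd in [(1956, 3.0), (1958, 1.5), (1960, 2.0)]}

rs = {year: multiple_r(X, y) for year, y in waves.items()}
# The least noisy wave yields the largest multiple R, mirroring the
# reasoning in the note: higher reliability, better demographic prediction.
```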

13 The appropriate formula can be found in Lord, Frederick M., ‘Elementary Models for Measuring Change’, in Harris, Chester W., ed., Problems in Measuring Change (Madison, Wis.: University of Wisconsin Press, 1963), pp. 21–38.

14 Converse's estimates are based on an analysis of dichotomized responses, as discussed below. Pierce and Rose employ the Heise method to obtain estimates of true attitude stability on the domestic policy items. Achen obtains his estimates with the help of an assumption that true change scores over successive time intervals are uncorrelated with each other. For a comparison of the different methods, see Erikson, ‘Analyzing One-Variable Three-Wave Panel Data’.

15 Converse, ‘Attitudes and Non-Attitudes’, pp. 174–5; Converse, ‘The Nature of Belief Systems in Mass Publics’, fn. 39, p. 259.

16 By assuming equal response distributions for true and random responses on ‘school segregation’, one allows the variance of true attitudes on the underlying continuum to be uncorrelated with the error variance. In the previous black-and-white model examples (Tables 4A, 4B and 5A), the dichotomous divisions of random responses were allowed to differ from the divisions of true responses, thus violating the usual assumption in reliability analysis that true and error variances are uncorrelated. It may be noted that in all the examples of Tables 4 and 5 the non-attitude holders' probabilities of a ‘pro’ response (their mean responses) can actually be considered their ‘true’ positions. For example, the true attitudes of non-opinion holders on ‘power and housing’ are assumed to be a ·586 probability of a ‘pro’ response. Thus, the term ‘non-attitude’ is technically a misnomer in the sense that, by definition, every respondent has a theoretical mean (true) position.

17 Converse appears to have recognized the possibility of this alternative model. See ‘The Nature of Belief Systems’, fn. 41, p. 259; and ‘Attitudes and Non-Attitudes’, p. 175.

18 Useful discussions of normal-ogive models can be found in Lord, Frederick M. and Novick, Melvin R., Statistical Theories of Mental Test Scores (Reading, Mass.: Addison-Wesley, 1968), pp. 358–94; and in Torgerson, Warren S., Theory and Method of Scaling (New York: Wiley, 1958), pp. 385–91.

19 Because the z's are directly analogous to the standardized observed responses in the interval-level measurement error models, one can employ the Heise method on the r's to estimate the stability of latent attitudes and the reliabilities of the z's under the assumption of attitude change. Such stability and reliability estimates tend to be slightly higher than when the Heise method is employed directly on the five-point responses. Thus, the hypothesis that true attitudes are highly stable receives additional support from a test that does not assume interval-level measurement of observed attitudes.
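As a sketch of the Heise method invoked here and in fn. 9: under constant reliability and random measurement error, the three observed test–retest correlations identify the reliability and the two true-stability coefficients. The correlations in this example are invented, not estimates from the SRC panel.

```python
# Heise (1969) three-wave identities. With constant reliability `rel` and
# true stabilities s12, s23, the observed correlations factor as
#   r12 = rel * s12,   r23 = rel * s23,   r13 = rel * s12 * s23,
# so the three observed r's pin down all three unknowns.
def heise(r12, r23, r13):
    reliability = r12 * r23 / r13     # stability terms cancel in this ratio
    s12 = r13 / r23                   # true stability, wave 1 -> wave 2
    s23 = r13 / r12                   # true stability, wave 2 -> wave 3
    return reliability, s12, s23

# Illustrative values only: modest observed correlations that barely decay
# with temporal distance imply low reliability but high true stability.
rel, s12, s23 = heise(r12=0.40, r23=0.42, r13=0.38)
```

This is the pattern described in the extract: because the four-year correlation is nearly as large as the two-year correlations, the instability is attributed mostly to unreliability rather than to true attitude change.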

20 Particularly if the marginal distributions are constant over time, the interval-level version of the black-and-white model would assume that any change in position along the five-point scale (such as from ‘strongly agree’ to ‘agree’) is evidence of random behaviour. This extreme model can readily be rejected on the basis of empirical evidence. See Pierce and Rose, ‘Nonattitudes and American Public Opinion’, pp. 634–5.

21 ‘Purifying’ the inconsistent-response group by including only those who respond inconsistently on both items reduces the working Ns to the thirty-to-sixty range. Though erratic, the Qs for these purified groups tend in the expected direction, particularly for issue-pairs with very high Qs for the ‘consistent’ responses. For all five issue-pairs on which the Qs for ‘consistent’ respondents exceed ±·90, the Qs for those inconsistent on both issues are in the expected direction, with an average departure from zero of ·32.
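The Qs discussed in this note are Yule's Q coefficients for 2 × 2 tables of dichotomized responses to an issue-pair; a minimal sketch, with invented cell counts:

```python
# Yule's Q for a 2x2 cross-tabulation [[a, b], [c, d]]:
#   Q = (ad - bc) / (ad + bc), ranging from -1 to +1.
def yules_q(a, b, c, d):
    return (a * d - b * c) / (a * d + b * c)

# Hypothetical counts: a and d are the 'consistent' pro-pro and con-con
# cells; b and c are the mixed cells. Large a*d relative to b*c gives a
# Q near +1, the pattern reported for consistent respondents.
q = yules_q(a=40, b=10, c=8, d=35)
```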

22 Achen, ‘Mass Political Attitudes and the Survey Response’, pp. 1226–9.

23 Hunter, John E. and Coggin, T. Daniel, ‘Communication’, American Political Science Review, LXX (1976), 1226–9.

24 The rare ‘no answer’ responses to these items were coded as zeros. Because some of the ‘political sophistication’ components were obtained from 1956 and 1960 post-election interviews, respondents who were not surveyed in both post-election waves are not given ‘political sophistication’ scores.

25 The estimates of respondent error variance are far from perfect measures, since the estimates are based on only three opinion readings. These estimates may also reflect a certain amount of true attitude change, although this contamination ought to be slight. Achen's method of estimating respondent error variance is a little different from that employed here.
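One simple way to estimate a respondent's error variance from only three readings, in the spirit of this note (though not necessarily Achen's exact method), is the variance of the three responses around their mean:

```python
# Per-respondent error variance from k repeated readings of the same item,
# treating the respondent's mean as the true score (i.e. assuming no true
# attitude change across the waves -- the contamination the note mentions).
def error_variance(responses):
    m = sum(responses) / len(responses)
    # Divide by k - 1 for an unbiased estimate; with k = 3 this is crude,
    # which is why the note calls such estimates far from perfect.
    return sum((x - m) ** 2 for x in responses) / (len(responses) - 1)

v = error_variance([2, 4, 3])   # e.g. three answers on a five-point item
```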

26 Differences between test-retest correlations for high and low sophisticates may reflect differences in true attitude variance and differences in true attitude stability as well as differences in error variance. Based on the Wiley–Wiley method, the estimated error variances on the eight items actually tend to be slightly higher for the high sophisticates than for low sophisticates.

27 On most issues, opinion direction and sophistication scores are correlated in the manner one would expect from the relationship between political sophistication and high socioeconomic status. For example, the low sophisticates are largely in favour of guaranteed jobs and aid to schools, while high sophisticates are divided. On foreign policy issues, high sophisticates generally take an internationalist position, while low sophisticates are more divided.

28 It is conceivably a mistake to assume that the response stabilities of the most politically sophisticated respondents should serve as the standard for the instrument. For example, we might imagine that the stability of a person's responses to an item is a function of his interest in the issue, but that interest in the issue is not correlated with political sophistication. But for this to be true, respondents' levels of interest in different issues must be statistically independent of each other, to allow for the fact that response stability on one issue cannot predict response stability on another issue.

In his initial discussion of the panel data, Converse (‘The Nature of Belief Systems’, pp. 245–6) notes surprisingly little tendency for stable respondents (with consistently pro or consistently con opinions) on one issue to be stable respondents on other issues. This finding led Converse to suggest the existence of separate ‘issue publics’ for each issue – virtually non-overlapping sets of people who are concerned about different issues. In other words, people may be sufficiently concerned about some issues to express true attitudes, but uninterested in other issues, upon which they therefore express non-attitudes.

Although we have rejected the strict black-and-white interpretation of the panel's issue items, the possibility remains for a milder version of Converse's ‘issue public’ hypothesis. People may tend to give their most stable responses on certain issues which are of greatest interest to them. If so, the measuring instrument obtains highly reliable responses when people are sufficiently interested. Although this interpretation is compatible with the data, it requires interest levels on given issues to be uncorrelated with each other or with political sophistication.

29 Converse, ‘The Nature of Belief Systems in Mass Publics’, pp. 227–34.

30 Pierce and Rose, ‘Nonattitudes and American Public Opinion’, p. 645; Achen, ‘Mass Political Attitudes and the Survey Response’, p. 1229.

31 The magnitudes of the correlations shown in Table 10 are not comparable to those reported by Converse and in Table 7 on p. 105 above, since different measures of association are employed. Converse's gammas and Table 7's Qs for dichotomized scores give inflated coefficients relative to Pearson's r, used in Table 10.

32 In a study of attitude constraint in the 1960s, Bennett also finds little tendency for inter-item correlations to increase with presumed indicators of political sophistication. See Bennett, Steven E., ‘Consistency in the Public's Social Welfare Attitudes in the 1960s’, American Journal of Political Science, XVII (1973), 544–70. For a study with contrary findings, see Axelrod, Robert, ‘The Structure of Public Opinion on Policy Issues’, Public Opinion Quarterly, XXXI (1967), 51–60.

33 In exact opposition to the assertion made here, Searing et al. have argued that because observed adult attitudes are unstable and relatively uncorrelated with one another, there is little structure to adults' political attitudes that can be explained by pre-adult political socialization. See Searing, Donald D., Schwartz, Joel J. and Lind, Alden E., ‘The Structuring Principle: Political Socialization and Belief Systems’, American Political Science Review, LXVII (1973), 415–32. Of course, adult attitudes are not entirely formed during the pre-adult years. The estimate that policy attitudes are quite stable over a two- or four-year span does not imply that adult attitudes rarely change in the longer run. For example, from the average Wiley–Wiley estimate of attitude stability over four years (·87 × ·97 = ·84), the projected attitude stability over n four-year spans is ·84ⁿ. Thus, over a span of thirty-two years the stability of attitudes would be ·84⁸, or only ·25.
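The long-run projection in this note is simple compounding; a sketch of the arithmetic:

```python
# Compounding the four-year true-attitude stability from the note:
# .87 (two-year) * .97 (per-wave adjustment) gives roughly .84 over four
# years, and thirty-two years is eight successive four-year spans.
four_year = 0.87 * 0.97              # approximately 0.84
thirty_two_year = round(four_year, 2) ** 8
# The compounded stability falls to about 0.25, matching the note.
```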

34 Nie, Norman, Verba, Sidney and Petrocik, John R., The Changing American Voter (Cambridge, Mass.: Harvard University Press, 1976), pp. 123–55.

35 Bishop, George, Tuchfarber, Alfred J. and Oldendick, Robert W., ‘Change in the Structure of American Political Attitudes: The Nagging Question of Question Wording’, American Journal of Political Science, XXII (1978), 250–69; Sullivan, John L., Piereson, James E. and Marcus, George E., ‘Ideological Constraint in the Mass Public’, American Journal of Political Science, XXII (1978), 233–49.