
Experience and Cultural–Repertoire Based Avenues of Trust: An Analysis of Public Trust in Statistical Agencies and their Data

Published online by Cambridge University Press:  03 March 2016

Michelle Smirnova
Affiliation:
Department of Sociology, University of Missouri at Kansas City E-mail: smirnovam@umkc.edu
Paul Scanlon
Affiliation:
National Center for Health Statistics E-mail: wyv6@cdc.gov

Abstract

Declining trust in statistical agencies has recently complicated the endeavour to collect high-quality, timely data that are used to inform US policy and practice. Given this context, understanding how respondents choose to trust particular statistical agencies and their products is critically important. This article details a series of cognitive interviews (85) and focus groups (3) used to measure how the US public develops trust for statistical agencies, their statistical products and their use of administrative records. Results show that respondents use two models of trust in their rationale: experience based and cultural‒repertoire based. When respondents did not have experience with a particular institution and/or its product, cultural values including personal liberty, cost-savings and the promotion of social goods (for example, government-sponsored schools and hospitals) were found to influence their motivations to trust or distrust. As a result, appeals to cultural values may have the potential to increase trust among respondents. Familiarity with statistical agencies and their products may also increase respondents’ levels of trust.

This is a work of the U.S. Government and is not subject to copyright protection in the United States.

Copyright © Cambridge University Press 2016

Introduction

Trust is a key component of stable social arrangements in an industrialised, capitalist society (Simmel, 1990; Giddens, 1991; Durkheim, 1997; Putnam, 2000), particularly in democracies (Putnam, 2000; Paxton, 2002). Writing in 1907, Georg Simmel argued that ‘without the general trust people have in each other, society itself would disintegrate’ (1990: 178). While we are instructed not to blindly trust strangers, a certain amount of trust is necessary for even the most basic cooperation in our economic, political and social relationships. For example, walking into a building requires trust of the visible others inside, in addition to the many invisible others who designed, constructed and maintain its structural integrity. Giddens (1991) notably argued that uncertainty and risk are defining elements of the modern age, where we must cooperate with and place our trust in strangers. Social trust is requisite of an inclusive and open society that invests in the future, promotes economic development and fosters societal happiness (Fukuyama, 1995; Whiteley, 2000; Newton, 2001; Uslaner, 2002; Herreros and Criado, 2008).

Despite its fundamental social role, we often do not pay attention to trust as a variable unless it is in question. As Möllering (2006: 6) argues, ‘trust is strongly related to taken-for-grantedness . . . and thus explains why trust tends to become topical [only] when it is problematic’. Declining trust in US statistical agencies has drawn more attention to the issue of trust, particularly for those who rely upon public cooperation to produce social data. In response, researchers seek to understand not only the reasons why people become distrustful, but also why they choose to trust in the first place. Utilising a ‘trust repertoire’ approach, this study sought to understand the interpretive logic that respondents use when deciding whether to trust government statistical agencies, their statistical products and the method of sharing administrative records. Based upon a case study exploring the US public's attitudes towards data sharing among national statistical organisations (NSOs), we argue that while trust can be based upon rational calculations or familiarity, the values and beliefs of a particular culture may be treated as explanatory variables in understanding motivations to trust. This research is important to social policy scholarship, in that trust is fundamental to the effective implementation and safeguarding of institutional initiatives.

The mission of the United States Federal Statistical System (FSS), the set of agencies responsible for collecting, analysing, storing and releasing official statistics, is to collect high-quality, timely data in order to provide the public with accurate and reliable official statistics that are useful for informing policy and practice. Declining trust in statistical agencies and companies contributes to decreasing survey response rates (Groves and Couper, 1998; Groves, 2006). These decreasing response rates not only make survey measurement and related follow-up more burdensome and costly; they can also decrease the quality of the survey data through the selection bias produced by unit and item non-response (Groves et al., 2001).

Given this situation, the FSS joined other national statistical organisations (NSOs) to study how and why people trust or distrust statistical agencies and their data. This international effort was responsible for the development of a model of trust in official statistics, as detailed by Ivan Fellegi (2010), as well as a model attitudinal survey designed to monitor the public's trust in official statistics. This model, commonly referred to as the Fellegi Model (shown in Figure 1), operationalises trust by dividing attitudes towards statistical institutions themselves (for example, the Census Bureau) from attitudes towards the products they collect and disseminate (for example, the population count); the model survey's questions were designed to capture these two sets of attitudes.

Figure 1. Fellegi's model of trust in official statistics. Source: Fellegi (2010).
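
To make this two-dimensional operationalisation concrete, the short sketch below groups survey items by the Fellegi model's two attitude dimensions and scores a respondent separately on each. It is a schematic illustration only: the item wordings and function names are our own invention, not the actual FSS POS instrument.

```python
from dataclasses import dataclass

@dataclass
class SurveyItem:
    text: str
    dimension: str  # "institution" or "product" (the two Fellegi dimensions)

# Invented items illustrating the two dimensions; not the actual survey wording.
ITEMS = [
    SurveyItem("Federal statistical agencies protect the confidentiality of my data.", "institution"),
    SurveyItem("There is political interference in the work of federal statistical agencies.", "institution"),
    SurveyItem("The official unemployment rate is accurate.", "product"),
    SurveyItem("The population count is useful for deciding where services go.", "product"),
]

def dimension_scores(ratings: list[int]) -> dict[str, float]:
    """Average one respondent's 1-5 agreement ratings within each dimension.

    ratings[i] is the rating for ITEMS[i]; reverse-coding of negatively
    worded items is omitted for brevity.
    """
    sums: dict[str, list[int]] = {"institution": [], "product": []}
    for item, rating in zip(ITEMS, ratings):
        sums[item.dimension].append(rating)
    return {dim: sum(v) / len(v) for dim, v in sums.items()}

print(dimension_scores([4, 2, 5, 4]))  # {'institution': 3.0, 'product': 4.5}
```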

In this article, we discuss how a series of eighty-five cognitive interviews conducted across two rounds, together with three focus groups, was used to test the Fellegi model and elicit the cultural repertoires (Mizrachi et al., 2007) used by respondents in the trusting process. We measured attitudes specifically regarding data sharing and administrative record use so as to ensure that opinions were rooted in concrete situations rather than abstract hypotheticals. In doing so, we found that familiarity with statistical agencies and their products, in addition to the promotion of mainstream US cultural values, served as mediating factors in respondents' decisions to trust statistical agencies and particular statistical products.

Measuring trust

Trust is generally ‘the expectation that arises within a community of regular, honest and cooperative behavior, based on commonly shared norms, on the part of members of that community’ (Fukuyama, 1995: 26). Trust can be based upon personal relationships with known individuals or upon confidence in an institution, group or organisation (Luhmann, 1979; Giddens, 1990; Aryee et al., 2002). Luhmann (1979) argues that system trust is built up by continual affirmative experiences using the system. Luhmann's definition requires familiarity with the system, thereby allowing people to subjectively reduce uncertainty and simplify their relationships with others. Giddens (1990) challenges this formulation, contending that institutional trust is not necessarily contingent upon previous experiences with an agency, but rather involves confidence in the reliability of a system based on the suitability of abstract principles.

These two formulations loosely map onto the distinction between what we have termed experience based and cultural‒repertoire based trust. Experience based trust involves a personal association between trustor and trustee. It is based upon positive shared experiences and the expectation of future interaction between the trusted individual and her/his associated groups. In contrast, cultural‒repertoire based trust deals with unknown groups and/or strangers and does not predominantly hinge upon specific situations (Stolle, 2002). This form of trust is not based upon the probability that the prospective trustee can in fact be trusted; rather, the decision to trust rests on the principle that the trustee has concern for the common good and will act in accordance with that objective (Whiteley, 2000). Experience based trust develops from familiarity, whereas cultural‒repertoire based trust must be enacted in situations without familiarity. In our study, we found that both of these forms of trust contribute to respondents’ perceptions of the FSS.

Drawing upon Mizrachi et al.’s (2007) ‘trust repertoires’, we focus on three interrelated dimensions that shape the practice of trust: agency, culture and power. Agency involves the ability of social actors to choose and apply strategies of trust in different social contexts (Schatzki et al., 2001). Culture serves as the repertoire of symbols and practices from which forms of trust are selected, composed and applied (Swidler, 1986). Finally, power relationships within a particular political context necessarily affect the choice and the meaning attached to a particular form of trust, and therefore institutions that can sanction individuals are perceived differently than non-sanctioning ones.

Drawing from Swidler's (1986) cultural toolkit, which envisions culture as a ‘repertoire’ or as ‘strategies of action’, mainstream American politico-cultural values are deployed as explanatory variables in understanding the motivation to trust. Particularly in political campaigns, such ‘objectively’ positive values (for example, cost-savings and curing diseases) are highlighted by candidates who seek public support. Culture operates as both system and practice in that it helps actors make sense of the social world through particular beliefs and values, but it is also utilised to engage in strategic behavior within the established social world (Sewell, 2005). In modern societies, the state and its institutions are often legitimated through the education system, laws and established cultural myths that offer ‘rationalized and impersonal prescriptions that identify various social purposes as technical ones and specify in a rulelike way the appropriate means to pursue these technical purposes’ (Meyer and Rowan, 1977: 343‒4). Such cultural myths are fundamental to the operation of institutions, particularly when their success or trustworthiness must be established beyond the discretion of an individual or particular organisation.

In this article, we identify repertoires of trust through the analysis of a series of cognitive interviews and focus groups designed to elicit respondents’ motivations to trust NSOs and their data. These qualitative methods were used to develop and evaluate the Federal Statistical System Public Opinion Survey (FSS POS), a twenty-five-question nightly telephone poll conducted by Gallup. Based upon this research, we found that the decision to trust statistical agencies and their products was driven by (1) familiarity or experience based trust of the entity and its data or (2) a cultural‒repertoire based trust that was motivated by cultural values of (a) accuracy and cost-savings, (b) the protection of personal liberties from state interference and (c) the promotion of social goods such as public health and education.

Data and methodology

Unlike other research that seeks to identify the determinants of trust and mistrust through experiments and surveys (Cook and Cooper, 2003; Ostrom and Walker, 2003), this study relies on qualitative data (cognitive interview narratives and focus group discussions) to understand the logic of the trusting process within particular cultural and political contexts, with particular attention to experience and cultural–repertoire based trust. We treat the actor as the engine of trust (Swidler, 1986); the application of different forms of trust in diverse situations requires ‘knowledge of schemas, which means the ability to apply them to new contexts’ (Sewell, 2005: 20).

Möllering (2001) argued that trust research needs to consider more carefully how people understand their life-world, a goal that is best approached with open-ended interviews. Gates (2011) and Mayer (2002) similarly advocated the administration of focus groups and cognitive interviews in developing a survey that ‘would allow for a more definitive understanding of the frequency and the magnitude of concerns among the general public and particular demographic groups. This information could then be used to understand the impact of these concerns on nonresponse, how to anticipate problems, and the best ways to address concerns via outreach and education’ (Mayer, 2002: 35). In response to these demands, the present research focuses on narrative interpretations from cognitive interviews and focus groups to capture the motivations and logics at work within the trusting process.

Cognitive interviewing methodology

Cognitive interviewing is a qualitative methodology that offers the ability to understand the cognitive processes behind answers to survey questions and how respondents interpret those questions (Willis, 2005; Miller, 2011; Miller et al., 2015). The goal of this method is to understand exactly what the respondent was thinking when answering survey questions, and how they interpreted the concepts within the question. To accomplish this, we first asked questions directly from the survey instrument and then asked respondents why they answered the way they did, how they interpreted specific terms in the question, and what those terms meant to them. Respondents were asked to provide specific examples and their justifications for behaviour in those contexts. Our study involved respondent narrative and intensive follow-up verbal probing after each question and at the end of the survey. Probes were semi-scripted and intended to cover certain pre-identified topics, but also allowed the interviewer the flexibility to follow unanticipated problems that surfaced.

Analysis of cognitive interviewing data uses a constant comparative method and can generally be understood as having four steps. The first step occurs within the interview itself, as the interviewer attempts to understand how the respondent has come to understand, process and then answer a survey question. During the interview, basic response errors can be identified by comparing respondents’ answers to the survey questions with the narratives they provide; when logical contradictions are evident between the narrative and the survey answer, the interviewer explores why these contradictions occurred. The second step occurs once the interview is over, and involves creating summary notes of the interviews and then systematically comparing these notes across all the respondents. This level of comparative analysis reveals patterns in the way people answer survey questions. In the third step, it becomes possible to identify the construct that is actually being captured by the survey question and so to illustrate the substantive meaning behind the survey statistic. The final step is a comparison of patterns across sub-groups, identifying whether particular groups of respondents interpret or process a question differently from others. At this level of analysis, that of identifying patterned differences among sub-groups, we begin to understand where potential for bias would occur in survey estimates.
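
As a rough illustration of the comparative steps (the second step onward), the sketch below tallies analyst-assigned theme codes across all respondents and then by sub-group to surface patterned differences. The theme labels, group names and data are hypothetical; the actual analysis used the Q-Notes software described below.

```python
from collections import Counter, defaultdict

# Each interview's summary notes are reduced to a set of theme codes.
# All codes and respondent data here are invented for illustration.
notes = [
    {"id": 1, "group": "under_30", "themes": {"confidentiality", "cost_savings"}},
    {"id": 2, "group": "under_30", "themes": {"identity_theft"}},
    {"id": 3, "group": "over_50",  "themes": {"confidentiality", "government_overreach"}},
    {"id": 4, "group": "over_50",  "themes": {"government_overreach"}},
]

# Compare notes across all respondents to find overall patterns.
overall = Counter(theme for note in notes for theme in note["themes"])

# Final step: compare the same patterns across sub-groups to spot
# patterned differences that could signal bias in survey estimates.
by_group: dict[str, Counter] = defaultdict(Counter)
for note in notes:
    by_group[note["group"]].update(note["themes"])

print(overall.most_common())
for group, counts in sorted(by_group.items()):
    print(group, counts.most_common())
```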

The cognitive interviews described here were specifically designed to evaluate a twenty-five-item questionnaire (FSS POS) whose purpose was to measure both knowledge of and attitudes toward statistical entities and their practices, as well as attempting to detail the various underlying ‘interpretations’ that might motivate trust. In adapting the initial questionnaire for this study, we focused on definitions of trust in statistical products and trust in statistical institutions that are derived from work by Ivan Fellegi (2004). Figure 1 illustrates the Fellegi model.

In crafting questions to address these concepts, we considered questions previously developed and tested by an earlier Organization for Economic Cooperation and Development (OECD) working group (OECD, 2011), as well as those used by the United Kingdom's Office for National Statistics, the National Centre for Social Research in the UK and the Eurobarometer. Additionally, we consulted research that examined US public knowledge of statistics (Curtin, 2007) in order to create a questionnaire that would be intelligible to the general US population. Since previous research found that age and socioeconomic status significantly influence respondents’ privacy and confidentiality concerns with regard to the Federal Government (Singer et al., 1993, 2011), respondents for the cognitive interviews were purposively sampled from the Washington, DC and Salt Lake City, UT metropolitan areas to reflect demographic diversity. Table 1 summarises the respondents’ demographics.

Table 1 Demographics of cognitive interview and focus group respondents

Note: In the cognitive interviews, some interviewers did not collect Age or Education information, and therefore these data are missing.

Table 2 Preferences for administrative data linkage by data type and source

Note: The totals in Table 2 are not equal because different cognitive interviews focused on different questions during the limited hour-and-a-half time period. While the Social Security question was asked of everyone, other questions were not asked of all respondents.

All interviews were conducted by government employees in public spaces (for example, coffee shops, food pantries) and employees asked about trust in agencies other than the one for which they worked. Such neutral locations and interviewing strategies were used in an attempt to ameliorate the bias that stems from government agencies interviewing members of the public about trust in their employers. The size of the sample fits the exploratory nature of this study. We also reordered questions across participants to control for ordering effects. The interviews lasted from forty-five to sixty minutes and were recorded and transcribed in full; not all questions were asked of each respondent because of time constraints.

Focus group methodology

Focus groups rely on the ‘explicit use of group interaction’ as research data (Merton et al., 1956). The focus groups were used to establish respondents’ perceptions of particular entities and their data, in addition to their opinions on data sharing between such entities. The focus groups began at an individual response level and then opened up to a group discussion so that participants had time to formulate their own opinions before being influenced by others. Participants were initially instructed to conduct a ‘pile-sort’ of a number of institutions (note 2) from which they would be comfortable with the Census Bureau getting information, and then a subsequent pile-sort of the types of information (note 3) which they would prefer the Census Bureau getting directly from the respondent and which they would be comfortable with the Census Bureau getting from another source (Weller and Romney, 1988). The focus group data were analysed using similar inductive and deductive coding schemes as developed during the first round of cognitive interviews.
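
A minimal sketch of how such pile-sort results might be tabulated in the spirit of Weller and Romney (1988) appears below. Each participant's ‘comfortable’ pile is recorded as a set of entity cards, and the share of participants placing each entity in that pile is aggregated; the participant data shown are invented for illustration.

```python
from collections import defaultdict

# Entities drawn from the pile-sort cards listed in note 2; the piles
# below are hypothetical participant responses.
ENTITIES = ["IRS", "Facebook", "Post Office",
            "Social Security Administration", "Marketing Firm"]

pile_sorts = [
    {"Social Security Administration", "Post Office"},  # participant 1
    {"Social Security Administration", "IRS"},          # participant 2
    {"Post Office"},                                     # participant 3
]

# Count how often each entity lands in the "comfortable sharing with the
# Census Bureau" pile.
comfort = defaultdict(int)
for comfortable_pile in pile_sorts:
    for entity in comfortable_pile:
        comfort[entity] += 1

for entity in ENTITIES:
    share = comfort[entity] / len(pile_sorts)
    print(f"{entity}: {share:.0%} comfortable")
```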

In order for participants to feel comfortable speaking freely, the groups were constructed to be semi-homogeneous in terms of those demographic features that may influence their opinions on the subject matter of interest. In this study, the groups (like the sampling for the cognitive interviews) were constructed according to age, education and socio-economic status, and participants were purposively sampled from the Washington, DC metropolitan area (see Table 1 above). The first group included respondents who were highly educated and over fifty years of age. The second group included respondents of lower education and lower socio-economic status; they were of different ages. The final group included respondents in their twenties who held at least a bachelor's degree. Race was not considered in the recruitment, as the analysis of the first round of cognitive interviews did not find race to be a significant factor in people's attitudes towards privacy and confidentiality. Nonetheless, two out of the three groups included participants of multiple races, whereas the high-education, over-fifty group consisted only of self-reported White participants. In the two mixed-race groups, there was no evidence that racial minorities spoke less frequently or freely than others. In fact, in both of these groups the most frequent contributor was a self-identified Black male.

Research design and method of analysis

This project utilised an iterative approach to qualitative analysis. The first phase involved a set of forty-two cognitive interviews to evaluate an OECD questionnaire designed to measure trust in government statistics and statistical agencies. The analysis of these cognitive interviews revealed that the decisions to either trust or distrust a government statistical agency were not consistent across the sample, particularly among those respondents who were less familiar with official statistics. In order to understand what produced such inconsistency, a set of three focus groups was conducted, which revealed that respondents utilised two separate pathways to trust: one that was experience based and another that was cultural–repertoire based. Given these two pathways, a number of cultural repertoire framing devices were developed and added to a modified version of the original questionnaire, which was evaluated using a second round of forty-three cognitive interviews. Analysis of these interviews revealed that such framing devices motivate respondents who are less familiar with official statistics to employ cultural–repertoire based trust.

By comparing the data from both the cognitive interviews and the focus groups, we are able to determine the salient issues regarding federal statistics, the Federal Statistical System and data sharing more generally. The analysis across all three rounds of data collection (two rounds of cognitive interviewing and one round of focus groups) was conducted using Q-Notes, a qualitative analysis software package developed by the National Center for Health Statistics (NCHS). This iterative, grounded theory approach allowed researchers to code for patterns across narrative responses, to identify general patterns and areas of concern and to disentangle the various motivations behind trusting federal statistical agencies.

Findings

First round of interviews: gauging familiarity and trust in statistical agencies and products

The first round of cognitive interviews was used as a heuristic to gauge how much respondents knew about statistical agencies and their products and whether this knowledge was correlated with trust. The first round's questionnaire began by asking respondents about their familiarity with general and specific statistics. For example, respondents were asked if they used statistics at home or for work, and if they were familiar with the population count or unemployment rates. They were also asked if they knew which statistical institution collected, analysed and disseminated these statistics. Familiarity with these statistical institutions and products correlated with respondents’ trust of these institutions and how the institutions might use the provided information. Familiarity can build trust because it offers concrete ideas of what to expect based on previous interactions and can provide a framework for future expectations. Familiarity can thus create trust when the experience is favourable and undermine it when the experience is not. Familiarity reduces uncertainty by creating ‘relatively reliable expectations’ (Luhmann, 1979: 19).

It appears that, prior to participating in the cognitive interview, many respondents had given little thought to the topic of administrative data use for surveys. As a result, their opinions were easily swayed by question wording. For example, we observed that respondents were more likely to agree than disagree with each version of the question shown below, even though the two versions asked them to consider opposite methods of data collection.

[VERSION 1] Sometimes federal statistical agencies need to get information such as employment history or retirement benefits. They can do it by getting the information from other government agencies or by asking people for it directly in a survey. How do you yourself feel about federal agencies trying to save government money and save people's time by sharing information with each other? Are you strongly in favor, somewhat in favor, somewhat against, strongly against them sharing information with each other?

Version 1 (above) asks if respondents support record linkage over survey administration in order to save time and money. In this frame, the majority of respondents said they favoured record linkage. The following question asks the same respondents if they prefer federal agencies collecting ‘information directly through surveys’, to which the majority also responded affirmatively.

[VERSION 2] Some people think people's privacy would be better protected if each agency collected the information directly through surveys. How do you feel about federal agencies collecting information directly? Are you strongly in favor, somewhat in favor, somewhat against, or strongly against federal agencies collecting information directly?

When the order of these two questions was reversed, respondents still supported both statements, even though the statements reflect opposing views on data sharing.
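
A schematic illustration of how such acquiescence can be detected in this paired-question design is sketched below. If opinions were stable, favouring record linkage (Version 1) should predict opposing direct collection (Version 2); agreement with both signals acquiescence. The response data and field names are invented, not our actual interview records.

```python
# Answer categories from the survey questions above.
FAVOR = {"strongly in favor", "somewhat in favor"}

# Hypothetical paired responses: v1 = record-linkage framing,
# v2 = direct-collection framing.
responses = [
    {"v1": "somewhat in favor", "v2": "strongly in favor"},  # agrees with both
    {"v1": "strongly in favor", "v2": "somewhat in favor"},  # agrees with both
    {"v1": "somewhat against",  "v2": "somewhat in favor"},  # consistent view
]

# Count respondents who favoured both opposing framings.
acquiescent = sum(1 for r in responses if r["v1"] in FAVOR and r["v2"] in FAVOR)
print(f"{acquiescent}/{len(responses)} respondents favoured both opposing framings")
```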

Respondents with very little knowledge of federal statistics sometimes had difficulty understanding what a question was asking altogether. If this confusion was great enough, they could not determine what to consider and, therefore, could not answer the question at all. For example, in one question, ‘There is political interference in the work of federal statistical agencies’, one respondent could not answer because it did not make sense to him. He asked, ‘Who would be doing the interfering? Because I thought the government would be one big government, so who would be playing interference?’.

When respondents were familiar with statistical agencies and their data, they used experiences to justify their trust. When they were unfamiliar, they sometimes based their decisions on other agencies with which they had more familiarity or they based their decisions upon abstract principles and broader cultural values. For example, one survey question asks whether respondents believe the following statement to be true: ‘Information collected to create federal statistics is sometimes used by the police and the FBI to keep track of people who break the law.’ Many respondents were not thinking of statistics as much as they were thinking about the government accessing people's personal files or police records. They cited examples such as ‘terrorist’ and ‘sexual predator lists’, ‘travel records’ (such as airline tickets), and ‘personal files’ (one person talked about how the government kept a large file on John Lennon). Although the question is used to ask about the confidentiality afforded to respondents whose information is included in federal statistics, respondents based their answers upon other salient examples of information sharing that were not related to statistics. As such, it appears that their answers were based not upon how secure they believed their own information was being kept, but rather upon their ability to recall particular circumstances under which they believe such confidentiality might be legitimately breached (for example, information about a criminal on the run). When asked how they felt about the Census Bureau obtaining personal information directly from entities that have already collected this data, respondents took both the type of data and the entity into consideration when deciding whether or not to trust this exchange of information. Table 2 outlines these responses.

The respondents who were in favour of data on their earnings and income being collected by the Social Security Administration and IRS explained their reasoning by saying, ‘It won't hurt you, it's not sensitive’, ‘So long as [confidentiality] was a guarantee, I'm OK with it’ or ‘It would be fine as long as it was used for a good purpose. I can't think of a motivation to use it badly.’ Here, respondents begin to voice two types of trust concerns: those pertaining to the institution and those pertaining to the data. When discussing the institution, respondents draw from their personal experience with this entity or the experiences they have heard about second hand through peers or the media. When discussing the data, they draw from the purported use of that data and the potential social goods to come from their use. The particular concerns associated with each type of trust are outlined in Table 3.

Table 3 Perceived benefits and harm of statistical data and institutions

Below we outline the subsequent focus groups and cognitive interviews used to explore these inconsistencies, which revealed two distinctive types of trust repertoires.

Focus groups and round 2 interviews: experience-based trust of statistical institutions

In discussing the merits and risks involved in data sharing, respondents often brought up the IRS, FBI and health insurance companies as examples of enforcement bodies that have the power to impose sanctions based on gathered information. These institutions are salient to respondents because they either have direct experience with them (for example, the IRS or health insurance companies) or hear about them often (for example, the FBI). Some were trusting of the IRS, while others mistrusted the agency. This may be attributed to the varied experiences respondents have had with each agency. If the IRS proved itself to be untrustworthy to the respondent (or a close friend or family member), then the positive cultural–repertoire based trust might be trumped by negative experience based distrust. Respondents who opposed these entities sharing their data or receiving data from others did so out of fear of possible sanction. Respondents who believed that the decennial census and other social surveys were important sometimes worried that information might be misused to deny respondents particular social benefits such as health insurance or unemployment assistance, or that the information would be used to force people to pay higher income or property taxes. Other respondents feared identity theft. Many of these fears were rooted in the idea that an outside hacker or rogue employee might leak the information for their own personal gain. Again, we see how experience based distrust can trump cultural–repertoire based trust if respondents doubt the confidentiality or protection of their information. When considering the implications of information sharing between the government and private firms, both cognitive interview respondents and focus group participants feared that their information might be sold or misused, resulting in negative personal consequences.

Each focus group began by asking respondents (1) what types of information they believed each agency or entity had about them, (2) whether it was legal for the agency to share this information and (3) how they would feel about that agency or entity sharing information. The organic conversations revolved around general feelings towards each agency, as well as fears of losing control of their data and the negative sanctions that might result from this information falling into the wrong hands. When discussing such issues of privacy and appropriate procedures for agencies collecting and sharing personal information, respondents frequently referenced the Health Insurance Portability and Accountability Act (HIPAA), Google and Facebook as explanations of ‘proper’ and ‘improper’ procedures (note 4). In these conversations, HIPAA was lauded for collecting information appropriately by asking for explicit consent from patients to share personal, private information. This may also be a consequence of familiarity with and experience-based trust in a particular entity. In contrast, Google and wireless carriers were criticised for inappropriately sharing (and obtaining) information because they did not explicitly ask for permission.

Respondents identified ‘for profit’ companies as less trustworthy sources of information because they use information for their own agenda. For instance, when asked about willingness to share credit card purchases, a twenty-six-year-old respondent explained that she was worried that the government could monitor her food purchases and deem her choices unhealthy. She feared that she might be denied health insurance if the government gave this information to other agencies or companies. Such findings contradict the supposed tendency of millennials to be more trusting and less concerned about privacy. Their internet literacy, coupled with their familiarity with Google and Facebook's privacy policies, results in an experience based trust or distrust of these entities. In an attempt to understand the situations in which respondents trusted collecting agencies with whom they were less familiar or did not have enough interaction to form experience based trust, we analysed the second round of cognitive interview and focus group data with an eye towards the participants’ rationales to trust or distrust.

Some respondents feared that their information might be sold and their confidentiality would be compromised. In these situations, many thought about marketing firms and how their information has ended up on mailing and telephone lists that companies use to track them. Other cognitive interview respondents echoed this sentiment, stating that the government tracking one's purchases is ‘creepy’ and feels like a ‘Big Brother’ approach. When probed further on this subject, some respondents identified ‘for profit’ companies as less trustworthy sources of information; several respondents brought up random mailings from marketing firms as examples of how their information might be bought or sold without their best interests in mind. This logic reflects a negative experience based distrust that stems from other for-profit companies. Respondents identified accuracy as an essential consideration and priority, but only supported sharing of data between sources perceived to be credible and trustworthy. Those institutions seen as having an ‘agenda’ are perceived to do more harm than good in providing information:

P9: I guess what my concern is, is the influence that the private sector can have on the government, for instance for redistricting, the state is going to use information from the private sector in order to draw their maps. Well these entities, what is keeping them from skewing that in order to get the desired results that they want.

Moderator: Ok, so it is an issue of accuracy then.

P9: Yeah, because if it's inaccurate then . . .

P8: Absolutely. I think also from a medical point of view, that in terms of statistics, I think it's very important to have accuracy, to get that information from whatever source. [. . .] I think it is very important for the Federal government to have access to something like that, whether it is directly asking me for that information personally or if they are getting from Medicare or Medicaid.

The use of health data was a common example offered by focus group and cognitive interview respondents. Many used the Health Insurance Portability and Accountability Act (HIPAA) as an example of a data-sharing technique that benefits users; the many respondents who supported this legislation argued that the shared information was ‘correct’ and served ‘a good purpose’. One respondent argued that because the patient tells the doctor her/his experience and the doctor writes it down, the record already exists and therefore would be more accurate than self-report. Another respondent shared this opinion, explaining that, ‘Doctors are better at reporting because that is their job and they know the diagnoses better than you do.’ Respondents believed that there could be personal and societal benefits if such entities shared health information.

Focus group and round 2 interviews: cultural–repertoire trust of statistical products

The social benefits of accurate data and the reduction of government costs were important considerations for respondents when weighing opinions about data sharing. Respondents who favoured data sharing between statistical agencies often used the justification that the numbers on file were ‘right’ and that sharing already collected information may benefit society, as long as there were no perceived personal risks.

Several respondents specifically brought up the importance of the Census Bureau and its data in terms of promoting social causes. These causes ranged from political representation and affordable housing to the location of hospitals and schools and support for underrepresented groups. Funding for medical issues such as diabetes or HIV was also mentioned as a potential benefit resulting from federal statistics. Personal and public health issues registered as shared values of our respondents, issues that could be leveraged to motivate trust.

Federal government spending was another especially salient topic of concern in the contemporary climate, and it also figured significantly in trust justifications. A number of respondents who were initially resistant to data sharing changed their opinions when it was suggested that sharing administrative records could potentially save government money. The following exchange from the first focus group illustrates this:

Moderator: Ok, so that's one of three options: they can go to you, or go to another Federal agency or from another entity. And you would prefer they would come to you?

P5: Yes

Moderator: For everything?

P5: Yes.

P9: Well that's the most expensive way. Do you want to pay for that with your taxes?

Later on, another participant chimed in:

P4: I agree with her, if the information is already there, why spend money if you already have it, I also think that it is important to have up to date information, like income, unemployment. . .

From these preliminary findings, it became apparent that the decision to trust a particular institution was often tied to the data in so far as the data might promote shared cultural values, such as social and health benefits, accuracy and cost-savings.

Trusting institutions and products: cultural value frames in round 2 interviews

In order to test the cultural values that factor into motivations to trust a statistical institution or product, both a cost and a social good frame were added to the second-round cognitive interview questions pertaining to data sharing. ‘Cost’ refers to the time, effort and/or money it takes the federal government to create statistics. ‘Social good’ refers to the various social programmes (for example, food stamps and unemployment), infrastructure (for example, roads and schools) and political outcomes (for example, representation in Congress) that may be tied to government surveys. During the focus groups, respondents appeared to be supportive of administrative data usage and data sharing when they realised (a) that it could save government money or (b) that federal statistics determine where local, state and federal agencies build things such as new schools, roads and firehouses.

In the interviews, respondents were first presented with an unframed control question, followed by a ‘process’ question that incorporated some standard confidentiality language. Following these two unframed questions, the respondents were asked the framed questions:

Control:

If the Census Bureau could obtain names and ages from the Social Security Administration to get a better idea of where these types of services should be located . . .

Cost Frame:

The 2010 Census cost over $10 billion. To reduce this cost for the 2020 Census, the Census Bureau could get your name and age from the Social Security Administration. If this method could save government money . . .

Social Good Frame:

Census numbers determine where local, state, and federal agencies build new schools, roads, and firehouses. If the Census Bureau could obtain names and ages from the Social Security Administration to get a better idea of where these types of services should be located . . .

. . . would you be strongly in favor of it, somewhat in favor of it, neither in favor nor against it, somewhat against it, or strongly against it?

In testing these frames in the second round of cognitive interviews, we found that the cost frame question resulted in considerable change in opinion among those who were initially resistant to such data sharing. Specifically, in response to this question, twenty-nine out of thirty-nine respondents (74 per cent; see note 5) were in favour, sixteen of whom increased their favour after this frame was presented (41 per cent). The social good frame question had very similar results, yielding significant change in opinion: twenty-eight out of thirty-eight respondents (74 per cent) were in favour, fifteen of whom increased their favour after this frame was presented (39 per cent). The process, cost and social good frames were then added as a supplement to the production FSS POS with similar results (note 6).
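
The frame-effect percentages above follow directly from the reported counts; the sketch below simply reproduces that arithmetic (the denominators differ because not every respondent was asked every question).

```python
# Counts taken from the text; percentages are computed from them.
frames = {
    "cost":        {"asked": 39, "in_favor": 29, "increased": 16},
    "social_good": {"asked": 38, "in_favor": 28, "increased": 15},
}

for name, c in frames.items():
    print(f"{name}: {c['in_favor'] / c['asked']:.0%} in favour, "
          f"{c['increased'] / c['asked']:.0%} increased their favour")
# cost: 74% in favour, 41% increased their favour
# social_good: 74% in favour, 39% increased their favour
```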

By combining the qualitative methods of cognitive interviewing and focus groups, we were able to glean a more textured and nuanced understanding of trust in statistical institutions and products. Parsing trust into (1) experience based trust in statistical institutions and (2) cultural–repertoire based trust in statistical products, and identifying the motivating factors or rationalisations of the trusting process, helped develop the cost and social good frame questions that could be used in the large-scale survey (Childs et al., 2015). This technique can lay the groundwork for successful messaging in communications campaigns and survey-related documents, such as pre-notice letters and informed consent documents, that might motivate otherwise hesitant respondents.

Conclusion

Understanding trust and how it impacts decision-making is vital for the operation of both public and private institutions. As noted above, Möllering (2006) indicates that people and institutions tend to pay attention to trust only when it turns to distrust and therefore becomes problematic. When it comes to public and private institutions’ abilities to collect information from stakeholders and the general public, this distrust has indeed become problematic. Across all types of organisations and all types of surveys, respondents are less sure that their data are secure, and response rates have correspondingly decreased. As such, it is crucial that communications with potential respondents preemptively address their concerns and make clear why participating in survey research furthers their values. Trust is neither a static nor a monolithic motivator; as we have demonstrated in this research, institutions can modify their respondents’ perceptions.

In this study, we examined not only respondents’ willingness to participate in a federal survey, but also their attitudes towards allowing federal statistical agencies to collect and share their information in non-traditional ways. We found two factors that contributed to respondents’ level of trust in the process of data sharing: experience based trust in statistical institutions and cultural–repertoire based trust in their products. Institutional trust was mediated through cultural repertoires such as the fear of sanction and the desire for cost savings, whereas trust in statistical products was motivated by the desire for accuracy and the promotion of social goods (through the use of data). These two factors were interrelated, in that one cultural factor (for example, using data to support HIV research) may outweigh another (for example, fear of sanction). The appreciation of robust and accurate data that can help communities is mediated through such fears of government infiltration and principles of personal liberty. This is an important finding in that it shows that culture may be a mediating factor in the decision to trust. Although this research focused on public trust in federal statistical agencies, we would expect a similar mediating process to occur when respondents decide whether to trust other public institutions as well.

While it would be untenable to argue that Americans share a singular, logically consistent, consensual culture, the US government is a powerful cultural actor that plays a role in defining meanings and establishing national values and identities. As such, it is not altogether surprising that many elements of the US state myth (for example, protection of personal liberties, government-sponsored social goods) serve as cultural frames that shape the relationship between citizens and state (Meyer and Rowan, 1977). Although the ‘official cultural map’ denotes the importance of social statistics to social policy, it is important to remember that Americans internalise these ideals to differing degrees, and that subordinated groups may criticise or resist this ideological logic if it is perceived to cause personal harm (for example, health insurance denial, tax audits) (Sewell, 2005: 173). As such, our respondents’ motivations to trust or distrust reveal the complexities of culture as both ideology and practice, as measured through disparate beliefs, values and behaviours. As Sewell argues, ‘cultural coherence, to the extent that it exists, is as much the product of power and struggles for power as it is of semiotic logic’ (2005: 173). This research contributes to empirical and theoretical studies of trust, particularly with regard to the cooperative relationship between the citizen and the state, as well as to social policy research that explores how institutions can garner the trust and cooperation of their constituents.

Footnotes

2 Entities represented on the cards for the ‘pile-sort’ included: the IRS, Facebook, Mastercard or Visa, Post Office, Social Security Administration, Health Insurance Company, Unemployment office, Employer, Marketing Firm, CMA.

3 Types of information represented on the cards for the ‘pile-sort’ included: Name and Address, Income, Phone Number, Social Security Number, Employment History, and Medical Records.

4 Although we designed the focus group prompts around government statistical agencies, the participants frequently brought Google and Facebook into the conversations, and used them as points of common reference.

5 Although there were forty-three respondents in the second round of interviews, not all questions were asked of all respondents, so we do not have forty-three responses to all questions. This method was chosen to maximise the number of unique questions that could be pre-tested. Questions were asked in different orders to each respondent to also control for ordering effects.

6 For a recent discussion of the results of this survey, particularly how knowledge correlates to trust, see Childs et al. (2015).

References

Aryee, S., Budhwar, P. S. and Chen, Z. X. (2002) ‘Trust as a mediator of the relationship between organizational justice and work outcomes: test of a social exchange model’, Journal of Organizational Behavior, 23, 3, 267–85.
Childs, J. H., King, R. and Fobia, A. C. (2015) ‘Confidence in US Federal Statistical Agencies’, Survey Practice, 8, 5.
Cook, K. S. and Cooper, R. M. (2003) ‘Experimental studies of cooperation, trust and social exchange’, in Ostrom, E. and Walker, J. (eds.), Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research, New York: Russell Sage Foundation, pp. 277–333.
Curtin, R. (2007) ‘What US consumers know about economic conditions’, paper presented at the OECD World Forum on Statistics, Knowledge and Policy, June 2007.
Durkheim, E. (1997) The Division of Labor in Society, translated by W. D. Halls, New York: Free Press.
Fellegi, I. (2004) ‘Maintaining the credibility of official statistics’, Statistical Journal of the United Nations Economic Commission for Europe, 21, 3–4, 191–8.
Fellegi, I. (2010) ‘Report of the electronic working group on measuring trust in official statistics’, OECD Meeting of the Committee on Statistics, 7–8 June, Paris.
Fukuyama, F. (1995) Trust: The Social Virtues and the Creation of Prosperity, New York: Free Press.
Gates, G. W. (2011) ‘How uncertainty about privacy and confidentiality is hampering efforts to more effectively use administrative records in producing US national statistics’, Journal of Privacy and Confidentiality, 3, 2, http://repository.cmu.edu/jpc/vol3/iss2/ [accessed 13.11.2015].
Giddens, A. (1990) The Consequences of Modernity, Cambridge: Polity Press.
Giddens, A. (1991) Modernity and Self-identity, Cambridge: Polity Press.
Groves, R. M. (2006) ‘Nonresponse rates and nonresponse bias in household surveys’, Public Opinion Quarterly, 70, 5, 646–75.
Groves, R. M. and Couper, M. P. (1998) Nonresponse in Household Interview Surveys, New York: John Wiley & Sons.
Groves, R. M., Dillman, D. A., Eltinge, J. L. and Little, R. J. A. (eds.) (2001) Survey Nonresponse, Hoboken, NJ: Wiley.
Herreros, F. and Criado, H. (2008) ‘The state and the development of social trust’, International Political Science Review, 29, 1, 53–71.
Luhmann, N. (1979) Trust and Power, Chichester: Wiley.
Mayer, T. (2002) Privacy and Confidentiality Research and the US Census Bureau: Recommendations Based on a Review of the Literature, Washington, DC: US Census Bureau.
Merton, R., Fiske, M. and Kendall, P. (1956) The Focused Interview, New York: The Free Press.
Meyer, J. W. and Rowan, B. (1977) ‘Institutionalized organizations: formal structure as myth and ceremony’, American Journal of Sociology, 83, 2, 340–63.
Miller, K. (2011) ‘Cognitive interviewing’, in Madans, J., Miller, K., Maitland, A. and Willis, G. (eds.), Question Evaluation Methods: Contributing to the Science of Data Quality, Hoboken, NJ: Wiley, pp. 51–75.
Miller, K., Chepp, V., Willson, S. and Padilla, J. L. (eds.) (2015) Cognitive Interviewing Methodology, Hoboken, NJ: Wiley.
Mizrachi, N., Drori, I. and Anspach, R. (2007) ‘Repertoires of trust: the practice of trust in a multinational organization amid political conflict’, American Sociological Review, 72, 1, 143–65.
Möllering, G. (2001) ‘The nature of trust: from Georg Simmel to a theory of expectation, interpretation and suspension’, Sociology, 35, 2, 403–20.
Möllering, G. (2006) Trust: Reason, Routine, Reflexivity, Oxford: Elsevier.
Newton, K. (2001) ‘Trust, social capital, civil society, and democracy’, International Political Science Review, 22, 2, 201–14.
OECD Working Group (2011) Measuring Trust in Official Statistics – Cognitive Testing, Paris: OECD.
Ostrom, E. and Walker, J. (2003) Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research, New York: Russell Sage Foundation.
Paxton, P. (2002) ‘Social capital and democracy: an interdependent relationship’, American Sociological Review, 67, 2, 254–77.
Putnam, R. D. (2000) Bowling Alone: The Collapse and Revival of American Community, New York: Simon & Schuster.
Schatzki, T., Knorr Cetina, K. and Von Savigny, E. (2001) The Practice Turn in Contemporary Theory, London: Routledge.
Sewell, W. (2005) Logics of History: Social Theory and Social Transformation, Chicago: University of Chicago Press.
Simmel, G. (1990) The Philosophy of Money, 2nd edn, London: Routledge.
Singer, E., Bates, N. and Van Hoewyk, J. (2011) ‘Concerns about privacy, trust in government, and willingness to use administrative records to improve the decennial census’, Public Perception and Societal Conflict, Annual Meeting of the American Association for Public Opinion Research, 12–15 May, Phoenix, Arizona.
Singer, E., Mathiowetz, N. and Couper, M. (1993) ‘The impact of privacy and confidentiality concerns on survey participation: the case of the 1990 Census’, Public Opinion Quarterly, 57, 4, 465–82.
Stolle, D. (2002) ‘Trusting strangers – generalized trust in perspective’, Schwerpunktheft der Österreichischen Zeitschrift für Politikwissenschaft, 31, 4, 397–412.
Swidler, A. (1986) ‘Culture in action: symbols and strategies’, American Sociological Review, 51, 2, 273–86.
Uslaner, E. M. (2002) The Moral Foundations of Trust, Cambridge: Cambridge University Press.
Weller, S. C. and Romney, A. K. (1988) Systematic Data Collection, Newbury Park, CA: Sage Publications.
Whiteley, P. (2000) ‘Economic growth and social capital’, Political Studies, 48, 3, 443–66.
Willis, G. B. (2005) Cognitive Interviewing: A Tool for Improving Questionnaire Design, Thousand Oaks, CA: Sage Publications.