
Security implications and governance of cognitive neuroscience

An ethnographic survey of researchers

Published online by Cambridge University Press:  30 July 2015

Margaret E. Kosal* (Georgia Institute of Technology)
Jonathan Y. Huang (Georgia Institute of Technology)
Correspondence: Margaret E. Kosal, Sam Nunn School of International Affairs, Georgia Institute of Technology, 781 Marietta Street, NW, Atlanta, GA 30318. Email: margaret.kosal@inta.gatech.edu

Abstract

In recent years, significant efforts have been made toward elucidating the potential of the human brain. Spanning fields as disparate as psychology, biomedicine, computer science, mathematics, electrical engineering, and chemistry, research venturing into the growing domains of cognitive neuroscience and brain research has become fundamentally interdisciplinary. Among the most interesting and consequential applications to international security are the military and defense community’s interests in the potential of cognitive neuroscience findings and technologies. In the United States, multiple governmental agencies are actively pursuing such endeavors, including the Department of Defense, which has invested over $3 billion in the last decade to conduct research on defense-related innovations. This study explores governance and security issues surrounding cognitive neuroscience research with regard to potential security-related applications and reports scientists’ views on the role of researchers in these areas through a survey of over 200 active cognitive neuroscientists.

Type: Perspective
Copyright: © Association for Politics and the Life Sciences 2015

In recent years, significant efforts have been made toward elucidating the potential of the human brain. Spanning fields as disparate as psychology, biomedicine, computer science, mathematics, electrical engineering, and chemistry, research venturing into the growing domains of cognitive neuroscience and brain research has become fundamentally interdisciplinary. Indeed, research on the human mind has provided a platform for scientists to collaborate beyond their individual fields. Yet, the fervor over cognitive neuroscience research has not been limited to academic and scientific pursuits. Applications of this research, particularly in the areas of pharmacology, imaging, and computer interface design (and hence engineering), have received considerable attention beyond the academy. 1,2,3,4,5,6,7

International scientific bodies, including the United Kingdom’s Royal Society, 8 have also engaged in discussions on the field’s policy relevance. NATO’s New Strategic Concept, released in 2010, noted that “research breakthroughs will transform the technological battlefield … Allies and partners should be alert for potentially disruptive developments in such dynamic areas as information and communications technology, cognitive and biological sciences, robotics, and nanotechnology.” 9 Probing the policy implications of this research, these and other voices are beginning to ask about the potential dual use of neuroscientific breakthroughs and technologies and are raising policy, strategic, and ethical concerns about security-related uses. In this paper, we argue that such questions are critical for policy scholars. As advances in the security, intelligence, and offensive applications of neuroscientific research grow and expand, how to properly leverage such new knowledge will likely emerge as one of the leading technical security studies puzzles of the twenty-first century.

Not only does the military application of cognitive neuroscience require the attention of policy makers, but engaging with the issues surrounding effective translation of neuroscientific knowledge to security uses provides a new perspective on policy and politics. Of course, challenges to policy making as a result of scientific advancements are not new. Technological and scientific progress has long influenced states’ security policies. Anticipating and responding to potential emerging threats to security and understanding disruptive technologies are intrinsic to the security dilemma. Perhaps the most notable example in the study of technology’s impact on state interactions is the invention of nuclear weapons and the reconfiguration of strategic logic around deterrence. The mutual assured destruction logic underlying nuclear deterrence constrains a state’s choices of strategy. In addition, the literature on the Revolution in Military Affairs (RMA), which posits that military technological transformations and the accompanying organizational and doctrinal adaptations can lead to new forms of warfare, also examines the impact of technology in the security policy realm. Most notably, as a result of progress in information technologies, RMA discussions have underpinned the concept of network-centric warfare: operations that link combatants and military platforms to each other in order to facilitate information sharing. 10,11,12

Like past scientific and technological breakthroughs, advances in cognitive neuroscience will likely have an impact on future security thinking, doctrine, and policy. Since cognitive neuroscience research is human-focused, the implications of findings are integrally tied to the study of social processes, including politics and international relations. Some studies linking cognitive neuroscience to politics have examined how psychological and brain science research challenges assumptions embedded in the study of political decision making. 13,14,15 Take rationality, for instance. Rather than being devoid of emotion, decision making is highly influenced by it, as shown by Damasio’s work on the somatic marker hypothesis as well as Marcus, Neuman, and MacKuen’s development of the affective intelligence model. 16 The assumption of rationality often embedded in the study of political interactions can thus be problematic. 17 Indeed, scholars have begun to explain the positive impacts of emotion as part of the decision-making process, including issues of trust and identity in international politics as well as the rationality and irrationality underlying deterrence logic. 18

The defense and intelligence communities’ interest in cognitive neuroscience has raised concerns that the security applications of this research might require some form of governance, suggesting a potential need for regulation of research-related developments through institutionalized oversight beyond the current requirements of Institutional Review Boards and other mechanisms. The purported aim of such enhanced oversight would be to manage the development process while defining appropriate directions and boundaries where security applications are concerned. For such reasons, an engaged conversation between those involved in cognitive neuroscience research and the policy community that may deploy potential research applications seems more necessary than ever. Yet discussions about the security implications and possible governance of cognitive science research have been permeated by assumptions regarding the motivations and capabilities of scientists and the possible security implications of their discoveries, rather than by hard facts. In particular, little is known about how cognitive neuroscientists—the very people whose research is under the spotlight—view these issues.

This study, based on an ethnographic survey of 209 cognitive neuroscientists, serves as an initial step toward gathering such information. We use the term ethnographic to indicate that this study is intended to understand the culture and practices of the cognitive neuroscience research community and to probe perceptions about dual-use applications of scientific findings. Findings related to government funding, ethical discourses, and researcher attitudes toward potential governance show that scientific engagement is an effective mechanism for addressing questions concerning the proper role for cognitive neuroscientists in these debates. This investigation further seeks to assess the extent to which cognitive neuroscientists are aware of the dual-use and security implications of their research; how researchers think about existing security relevant institutional structures, in terms of funding, regulations, and supervision; and whether cognitive neuroscientists think additional oversight is necessary in light of their research’s dual-use potential (and, if so, what form such oversight should take).

The term “dual use,” which is central to this study, requires some explanation. Historically, dual use referred to technologies that could be meaningfully used by both the civilian and military sectors. In light of the ever-changing security environment in which the potential for technologies to be misused by both state and nonstate actors has become increasingly prevalent, a new conceptualization of dual use, in which the same technologies can be used legitimately for human betterment and misused for nefarious purposes, such as terrorism, has emerged. 19 The National Institutes of Health’s Office of Science Policy has recently adopted a similar understanding of dual use in its discussions and policies on biosecurity. 20 In keeping with these understandings, this study adopts a definition of dual use as research “conducted for legitimate purposes that generates knowledge, information, technologies, and/or products that could be utilized for both benevolent and harmful purposes” 21 (i.e., research that can have beneficial impacts as well as unintended deleterious consequences).

Government spending on neuroscience

Since the emergence of cognitive neuroscience as an area of study, research on human cognition has taken on an increasingly dual-use nature. The breadth of recent spending by the federal government on cognitive neuroscience illustrates the importance of this area of research as a key element now informing strategic policy making. Indeed, programs in psychological and brain sciences are garnering the attention of multiple funding agencies.

In FY2010 and FY2015, the National Institutes of Health (NIH) reported more than $1.9 billion in funding appropriated to brain research through the National Institute of Biomedical Imaging and Bioengineering and the National Institute of Neurological Disorders and Stroke. The National Science Foundation (NSF) also maintains active research programs in Perception, Action, and Cognition; Cognitive Neuroscience; Neural Systems; and Collaborative Research in Computational Neuroscience. To cite just one other recent example, in April 2013 President Obama announced the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative with initial funding of $100 million. In 2014, the NIH further developed a 12-year plan for the initiative, which, along with other federal government and private partners, calls for potential funding that amounts to several billions of dollars. This commitment signals the intention of the United States to maintain a leadership position at the frontier of this emerging area of science and technology.

In addition to the NIH and NSF, significant interest in neuroscientific research has originated from the defense community. The Defense Advanced Research Projects Agency (DARPA) has a long-standing interest in these areas of research. In its 2007 Strategic Plan, DARPA delineated research priorities in the cognitive sciences that span such categories as “bio-revolution” and “cognitive computing.” In 2009, DARPA funded more than $134 million in projects related to neuroscience, and its FY2011 budget request estimate shows that DARPA invested at least $240 million in a wide range of basic and applied research projects relating to cognitive science and neuroscience, including human-assisted neural devices, mathematics of the brain, cognitive computing systems, machine intelligence, revolutionizing prosthetics, maintaining combat performance, and the Neovision2 program. 22 More recently, as part of the aforementioned BRAIN initiative, more than half of the initial funding (FY2014) was allocated to DARPA.

In addition to DARPA, Department of Defense (DOD) funding for cognitive science and neuroscience has been channeled through the scientific offices of the uniformed service branches (i.e., Army, Navy, and Air Force). In FY2011, for example, the President’s Budget shows that the Air Force invested more than $24 million across its programs on mathematical description of cognitive decision making, cognitive modeling for enhancing human effectiveness, and performance evaluation in extreme environments. 23 In a similar fashion, the Navy requested more than $34 million in programs on human systems; human performance, training, and education for the Marine Corps Landing Force; and in-house laboratory research on human performance sciences. 24 The Army also requested more than $55 million for research programs involving human engineering, neuroergonomics, robotics, human behavior, and projects intending to predict and enhance warfighters’ cognitive performance, prevent Post-Traumatic Stress Disorder (PTSD), and treat Traumatic Brain Injury through the use of “neuroprotectants” such as drugs and therapies designed to reduce the effects of traumatic incidents. 25 Across DARPA and the military service branches, the DOD has clearly become a major funder of neuroscientific research.

While the basic research programs funded by an agency like the NSF may be perceived to have minimal direct policy relevance, research programs funded by the DOD are different in their objectives: even basic research programs are mission-oriented in that they have some relation to a defense-related technology need or capability gap. Scientists and engineers engaged in DOD-funded programs are thus inextricably tied to policy choices made about military technology, force posture, defensive needs, and strategic planning. In this context, the findings and products of neuroscience provide the technological means for policy makers to achieve particular political goals. For this reason, researcher views about potential military applications of their work are an important element of broader research and development considerations.

On the ethics of neuroscience

A significant portion of the scientific and policy literature on the implications of neuroscientific research is concerned with the ethics of such research. The concerns raised have largely engaged two areas of debate: human enhancement and thought privacy, both of which are relevant to military and security research on so-called “cognitive enhancement”—a contested area of research. 26 While some embrace the potential of neuropharmaceuticals (drugs or other therapeutic agents that act on the central nervous system and treat neurological disorders) and advocate a form of industry self-regulation to guide their development and use, 27,28 others have raised concerns about the potential for privileged access to neuroenhancers and the possible disruption of brain functioning or other natural physiological processes. 29

On the other hand, the issue of thought privacy emanates from advancements in noninvasive imaging and stimulation techniques used for neurological research, such as functional magnetic resonance imaging (fMRI), near-infrared spectroscopy (NIRS), magnetoencephalography (MEG), and transcranial magnetic stimulation (TMS). The concern here is that such techniques could in the future be used for lie detection and interrogation. 30,31 These discussions on neuropharmacology and neural imaging reveal many underlying socially relevant questions about neuroscientific research.

Questions concerning the role that neuroscience research should play in national security have been debated with growing intensity in recent years. Some have advocated against the inclusion and use of neuroscientific techniques for national security purposes, 32,33 while others justify the defense and intelligence community’s involvement in light of maintaining military superiority. 34 Ethicists have advocated for the need to consider neuroethics in discussions about national security, 35 with some arguing that the security potential of neuroscientific research is best framed under considerations about human rights. 36,37 The proper place for neuroscientific research in security policy remains contested, and neither a strict security nor ethical framework is likely to suffice for all parties with a stake in these discussions.

Transcending these debates, nearly all ethicists who have examined the issue recognize the necessity of engaging scientists in discussions about the potential security uses of neuroscientific findings and related technologies. Calling scientists out of their “disillusionment” with the policy world, Canli and colleagues emphasize the importance of partnerships between scientists, policy makers, and ethicists. 38 Similarly, Resnik, in discussing classified research on brain imaging, has advocated for an open dialogue between scientists and government officials regarding dual-use research. 39 Existing discourses on the ethics of neuroscience point to the need for establishing a shared norm that engages both scientists and policy makers on the ramifications of this research. The related question as to how neuroscientists see the proper institutionalization of such a norm is addressed below.

Research governance and scientist engagement

Facing the known problems and unknown risks that emerging research on neuroscience brings, one may be tempted to take a precautionary stance 40 and resort to the idea that these problems and risks can be regulated in some way or reduced to an acceptable level via the establishment of an anticipatory governance structure. Using nanotechnology as an example, some scholars assert that an emerging technology can carry hazards that cannot be accurately evaluated a priori. 41,42 In particular, since little is known about the risks that nanotechnology carries for human health and the environment, some argue that a precautionary approach is necessary. 43 Yet, to establish regulation based primarily on hazard and precaution is difficult, for most emerging technologies carry potential benefits that would otherwise not be considered. 44 As a result, some scholars propose that risk-benefit analysis is more suitable. The European Commission’s regulatory shift from precaution to “smart regulation,” which refers to the use of impact assessment for regulatory decisions and a product-based approach employing risk-benefit rather than hazard analysis alone, provides an apt example. 45 “Smart regulation” is considered a more comprehensive model that includes not only evaluations of risks and benefits but also issues involving safety and hazard. Such an approach may be useful in the governance of emerging technologies of an uncertain nature without jeopardizing the possibility of accounting for their positive externalities—that is, the beneficial capabilities that accompany the development of an emerging technology. Furthermore, scholars and scientists who do not favor a hazard-based precautionary governance approach may find themselves shying away from a top-down regulatory regime while favoring self-governance.

Analyzing the opportunities and risks of various emerging technologies, scholars have proposed different ways of managing their development. Looking at the information and biological revolutions and recognizing that “their control and use are largely in the hands of the individual,” some argue that the concept of governance of research and technology development needs revamping. 46 To a large extent, research and development in cognitive neuroscience follows this trend in that its advancement is unlikely to require the complex governmental involvement that nuclear technologies did. To address this type of technological advancement, where interests are more distributed, Fukuyama and Wagner propose three models of governance that would involve a wider spectrum of stakeholders: a distributed decision-making model that involves a large number of organizations and users; a citizen councils model that makes recommendations to more formal governing bodies based upon consensus from deliberation; and an NGO-oriented model that bypasses the need to involve individual citizens or the state. 47 Yet, although these bottom-up models provide more incentives for and greater access to governance for stakeholders, the role of scientists in the governance structure remains unclear and underspecified.

Importantly, some scholars are cautious about regulating technologies. Aside from the classic argument that regulations stifle technological progress, some suggest that the sense of control promulgated through self-governance is often misleading and that governance of an emerging technology is often subject to or influenced by the sociopolitical concerns of the time. 48 Skeptics of an active governance model also assert that “establishing and maintaining regulatory controls will always struggle to keep pace with science and technology” due to the diversity of emerging technologies, the increasing pace of globalization, and the limited availability of resources to identify and curtail threats. 49 For skeptics, the extent to which governance can be established is highly dependent on technological progress; as a result, scientists who are on the front lines of research become important actors in determining the scope and direction of technological governance.

Regardless of their perceptions of how governance is to be pursued, most scholars would agree on the importance of engaging scientists in an open dialogue so that they can have better awareness of the implications of their research. Those in favor of a governance structure based on risk-benefit analysis rely on the assistance of scientists to establish proper evaluations of risk. Those proposing a bottom-up governance model allow room for scientists to be engaged in the process as stakeholders. Finally, for skeptics of governance, scientists play an integral role in determining the pace of technological progress that delimits the extent to which governance structures can be established.

Indeed, a recent project jointly conducted by the National Academy of Sciences and the American Association for the Advancement of Science on the dual-use concerns of biosecurity research explored the types of governance life scientists (referring to researchers in the fields of biological and biomedical sciences, health sciences, agricultural sciences, and natural resources in academia, government, or industry) envision. The committee formed to conduct this joint study found that, at the time of the survey in 2007, some life scientists were amenable to the idea of voluntary self-governance. 50 The report showed that support existed for the development of a system of internal regulations regarding dual-use research. When asked, researchers—at least in the general life sciences—revealed that they were not impervious to dual-use concerns, nor were they entirely antagonistic toward the concept of governance. These findings lend support to the idea that engaging scientists in the discussion of research governance is likely to be meaningful and not entirely unwelcome.

The attention that neuroscience research has attracted from the defense community points to its direct implications for national and international security. Not only are potential security applications of neuroscience relevant to military strategy and policy, but the ethical concerns associated with this research—particularly over issues of privacy, enhancement, and potential misuse, as well as the strategies for funding and institutional oversight—are important questions for policy makers to consider. Not as clear is the role that neuroscientists themselves should play in formulating policy.

A traditional view of science holds that science is neutral and that whatever implications scientific findings may have should be determined by how those in positions of power use them. From this perspective, some scientists may find that where their research has policy relevance, their voices and views are seemingly inconsequential to policy makers, and policies are formulated without due consideration to the opinions of scientists engaged in the research. Yet, despite any disgruntlement scientists may have toward their perceived lack of influence, it is worth noting that, given their unique role of holding technical expertise, they can in fact affect policy choices and outcomes.

Within the United States, the attention given to governmental policies regarding science and the role of scientists in advising policy making grew in prominence in the post-World War II period. 51 Since then, many scholars have sought to understand the roles that scientists take in policy development and their unique interactions with federal agencies and policy makers. In a study comparing the domestic structure and international context behind scientist/policy maker interactions across multiple countries, Solingen identified four political-economic analytical constructs that specify the contexts for such interactions. 52 These constructs range from a “happy convergence” context, in which the interests of the state and scientists align and where the interactions between them are interdependent—and scientists exercise a special role as an instrument of persuasion—to a “deadly encounters” context, in which political accountability replaces any trace of autonomy in scientific inquiry and the government’s need for control leads to persecution of scientists, and where the scientist’s role in the policy-making process is nonexistent. Despite the varying degrees of influence scientists may hold in each of these contexts, scientific findings and scientists themselves may still hold persuasive power in both pluralistic and totalitarian systems (albeit in drastically different ways).

Other comparative work examining state-scientist relations shows that different national cultures support scientific communities and foster scientific cultures in unique ways. 53 National and scientific cultures vary, and, as a result, scientific interpretation of any given subject can differ greatly across states. Cultural influences may color scientific judgment and interpretation, and, depending on the weight that scientific interpretation holds in policy making, states may devise highly divergent policies toward a single issue of concern. Whether from a political-economic or cultural perspective, the scientist’s role as an information holder and interpreter is not necessarily unidimensional. Oftentimes, scientists can determine for themselves the role they want to play in the policy-making process. 54

In his work on epistemic communities, Haas attributes substantial weight to the political influence that a scientific community may have on policy making. 55 Primarily concerned with international policy coordination on issues of environmental protection, Haas suggests that networks of knowledge-based experts who share the same belief in cause-and-effect relationships are an important factor in national and international policy making. In particular, Haas argues that epistemic communities’ source of power, which is rooted in their authoritative claim to knowledge, allows members to play an often decisive role in “articulating the cause-and-effect relationships of complex problems, helping states identify their interests, framing issues for collective debate, proposing specific policies, and identifying salient points for negotiation” that can guide, if not determine, state policy towards science. 56

Method

The primary aim of this study is to investigate the views of cognitive neuroscientists regarding the ethical, institutional, and security implications of their research. While it is easy to find anecdotal accounts or expert opinions from individual scholars or commentators who attempt to bridge the research and policy realms, there is a paucity of empirical data on the collective views of research scientists themselves. More specifically, this survey was fielded with the goals of showing the general contour of neuroscientists’ understanding of the security implications of their research; providing a broad understanding of scientist perspectives on different governance structures, including codes of conduct and management of funding; and assessing the level of support that neuroscientists have for regulation of research and their views on the extent of institutional oversight necessary to prevent dual-use risks. Overall, the methods for this study were modeled on those used in a survey conducted by the National Academy of Sciences and the American Association for the Advancement of Science on dual-use concerns in the field of biosecurity.

For the study, only scientists engaged in research that falls under the general field of cognitive neuroscience were asked to participate. Definitions of this research area as specified by the National Research Council in its 2008 report, Emerging Cognitive Neuroscience and Related Technologies, were adopted for consistency. The term cognitive refers to “psychological and physiological processes underlying human information processing, emotion, motivation, social influence, and development.” 57 Under this definition, the field of cognitive science at large can include behavioral and biological science disciplines as well as other contributing disciplines such as mathematics and computer science. The term neuroscience is understood “broadly to include the study of central nervous system and somatic, autonomic, and neuroendocrine processes.” Also included in the study were researchers in areas designed to mimic human cognitive processes, such as artificial intelligence, or AI.

Since the study was focused on issues that primarily concern experimental scientists, such as lab management, scholars from such contributing disciplines as philosophy and linguistics were not included.

Survey design

The survey was designed to tap the attitudes, opinions, and perspectives of cognitive neuroscientists on the important policy questions surrounding their research as well as its dual-use potential. The survey was divided into six parts. Part one included general questions about respondents’ research. Scientists were asked about their specific research area and perceptions of their contributions to society and the defense community. Part two was intended to assess respondents’ perspectives on dual use and the potential for neuroscientific findings and technologies to be co-opted for criminal purposes. Part three asked scientists about the ethical implications of their research, where their exposure to ethical discourses, opinions about codes of conduct, and perspectives on philosophical issues surrounding cognitive neuroscience research were addressed.

Parts four and five of the survey queried respondents about the existing institutional structure of their research, including questions concerning laboratory management, the publication process, institutional constraints, government spending and funding, and governmental regulations. Lastly, respondents were asked about their academic position, gender, and citizenship. Prior to dissemination, the survey was pilot-tested among a small group of cognitive neuroscience researchers whose feedback, primarily on the wording and ordering of questions, was incorporated into the final version of the survey instrument.

Sample and data collection

An existing contact list of scientists engaged in cognitive neuroscience was not readily available; therefore, the sampling frame for this survey was constructed de novo. A list of domestic academic institutions supporting cognitive neuroscientific research was compiled, and approximately 2,000 respondents were identified manually from publicly available information, typically through Web sites or other online listings.

To qualify for the study, potential respondents had to meet the following criteria: (1) the respondent has stated research interests that fall under the definition of cognitive neuroscience described above; (2) the respondent has an academic appointment in a department or program that conducts research which falls under our definition of cognitive neuroscience; or (3) the respondent has other professional experiences in research areas that fall under our definition of cognitive neuroscience. 57 At the time of the survey, all respondents contacted had primary, secondary, or adjunct appointments in the cognitive science, cognitive neuroscience, and/or psychology departments or programs at their respective academic institutions.

The survey was fielded online over a six-month period between July and December 2009. The request to participate was detailed in an email and the questionnaire hosted by an online platform, SurveyMonkey, that allowed respondents to have access to the study from any computer with an Internet connection. The survey platform recorded the IP address of each respondent, a fact each person participating in the study was made aware of during the consent procedure at the beginning of the questionnaire. Survey responses were otherwise collected anonymously. No incentive was offered for completing the survey, and respondents had the ability to opt out at any time.

Initial requests to participate were distributed to a total of 1,990 potential respondents between the end of July and early August. One reminder was sent to each identified scientist at the beginning of October. The last response to the survey was received on December 19. Of the potential respondents contacted, 209 responded to the survey in a substantive fashion, and 178 completed the questionnaire. (Not all who completed the survey responded to every question, since by design respondents were allowed to skip a question if they felt it did not pertain to them.)

Among those who responded to the department affiliation question ($n=149$), most identified their home department or program as psychology (52 percent), followed by cognitive science (29 percent) and neuroscience (28 percent), computer science/engineering (12 percent), biomedical engineering (5 percent), and electrical engineering (4 percent). Of those who listed their academic rank ($n=168$), a plurality (47 percent) held the title professor, followed by associate professor (30 percent), assistant professor (19 percent), and research professor (4 percent). Response rates across these disciplinary affiliations and academic titles were fairly consistent.

Results

The following section summarizes and provides an analysis of the data collected. Due to the ethnographic and exploratory nature of the study, the analyses rely on frequency distributions and cross-tabulations to paint a general picture of neuroscientists’ views of the security implications of their research and their outlook on research governance. Actual question wordings are referenced where fitting. The complete questionnaire is available from the lead author upon request.
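To make the cross-tabulation approach concrete, the sketch below shows one way such a table could be produced from raw survey responses. It is a minimal illustration only: the file name and column names are hypothetical placeholders, not the study’s actual variables or analysis code.

```python
# Minimal sketch of a percentage cross-tabulation of two yes/no survey items,
# e.g., the military-relevance question against perceived dual-use potential
# (cf. Figure 1). File and column names are hypothetical placeholders.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # one row per respondent

table = pd.crosstab(
    responses["military_related"],
    responses["dual_use_potential"],
    normalize="index",  # row percentages: share of each group answering yes/no
) * 100

print(table.round(1))
```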

Security implications

The survey began by asking subjects about their work’s relevance to developing technology for military applications. When asked, “Do you consider your work to be directly related to developing technology for military applications?” the overwhelming majority of respondents, 82 percent, answered negatively. Just 18 percent answered in the affirmative. However, when further asked, “Do you see potential dual-use applications of your research?” about a third (32 percent) of those who thought they were developing military applications did not think their research had dual-use potential; at the same time, another third (31 percent) who thought they were not developing military applications did see dual-use potential of their research. The responses revealed a disconnect between military research and dual-use potential and hinted that some scientists would consider technology used for defense to be distinct from technology that could have dual-use implications (see Figure 1).

Figure 1. Agreement among scientists on the potential for research to be used maliciously.

This disconnect in researcher perceptions of potential dual use was further explored by asking respondents specifically about the potential of their work being co-opted for criminal purposes. When asked, “Could you imagine your research being co-opted for criminal purposes?” over half of respondents who agreed with the dual-use potential of their research answered negatively. Similarly, when asked, “Could you imagine your research being co-opted or used for malfeasant application by a state-based program or terrorists?” well over half of those who agreed with their research’s dual-use potential answered negatively.

Additional items probed perceptions of research performed by others in the field. More than half (57 percent) of respondents thought their colleagues’ work could carry dual-use potential; among these respondents, 37 percent could not see such potential in their own work. Similarly, a slight majority (52 percent) of researchers could see their colleagues’ work being co-opted for criminal purposes—including many who disagreed with the prospect of their own work being co-opted for criminal purposes. Showing a consistent pattern, 48 percent of those answering could see their colleagues’ research being co-opted for malfeasant application by a state-based program or terrorists, whereas only 13 percent agreed with such a prospect for their own work.

Several explanations could be posited for the varied perception of dual-use, criminal, or malfeasant potential among researchers. Given that the questions asked respondents to compare their own research portfolio to the universe of other known portfolios, the range of work in the comparison group could be assumed to vary more widely than one’s own research portfolio. Another explanation might pertain to cognitive bias, where judgment errors arise from false memories or social attributions. 58,59 More specifically, this first- versus third-person disparity in the perceived potential of research could flow from an unintended self-serving bias, where it is easier for respondents to see the positives in their own work and the negatives in others’ work. While there is some evidence of that, it is not pronounced. When respondents were asked to consider the possibility of their own work leading to unintended malicious applications compared to that of their colleagues, more than 40 percent of researchers could see the potential negative consequences in others’ work while not finding the potential for such consequences in their own work. One reason for this result may be that researchers understand their own work better and, as a result, may evaluate its potential risks and consequences more accurately than they can for others’ work.

To probe this differential between scientists’ evaluations of their own work’s security implications and those of other researchers, a “self-perception indicator” was devised to reflect the average score that respondents gave to their own work. In three different questions, scientists were asked the extent to which they agreed that their work had dual-use implications, potential for criminal use, and potential for state-based malfeasant applications. Using a 5-point scale, response options were coded $-2$ for strongly disagree, $-1$ for disagree, 0 for undecided, 1 for agree, and 2 for strongly agree. Responses were then aggregated and averaged into the self-perception measure. An “others-perception indicator” was also constructed using a similar process of aggregating and averaging the values respondents attributed to the potential security implications of others’ work. These two indicators reflected respondents’ perceptions of their own work and its potential for misuse as well as security concerns about the work performed by others.

The differentials in respondent views about their own work and that of others were then calculated from these two indicators. By subtracting self-perception scores from others-perception scores, a differential ranging from $-4$ to $+4$ was generated. A differential score of $+4$ indicated that respondents viewed others’ work as having great potential for raising security concerns but their own work as having none, whereas a differential of $-4$ indicated the opposite (i.e., respondents considered their own work to carry the potential of raising security concerns while seeing none in others’ work). A score of 0 indicated no perceived difference. The distribution of respondents’ differential scores is shown in Figure 2.
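As an illustration of how the two indicators and the differential described above could be computed from the coded Likert responses, consider the brief sketch below. The function name and the example response values are hypothetical and are not drawn from the survey data.

```python
# Sketch of the self/others perception indicators and their differential.
# Likert responses are coded -2 (strongly disagree) to +2 (strongly agree).
# The example values below are illustrative, not actual survey responses.

def perception_indicator(dual_use, criminal_use, state_malfeasance):
    """Average of the three coded items (range -2 to +2)."""
    return (dual_use + criminal_use + state_malfeasance) / 3

# Hypothetical respondent: skeptical about own work, concerned about others'.
self_score = perception_indicator(dual_use=-1, criminal_use=-2, state_malfeasance=-1)
others_score = perception_indicator(dual_use=1, criminal_use=0, state_malfeasance=1)

# Differential ranges from -4 to +4; positive values mean the respondent sees
# greater security concerns in others' work than in their own.
differential = others_score - self_score
print(round(self_score, 2), round(others_score, 2), round(differential, 2))
# -1.33 0.67 2.0
```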

Figure 2. Self-other view differentials on potential security concerns of research.

The distribution in Figure 2 does not support a strong self-serving bias among cognitive neuroscientists about the potential security implications of their work. The vast majority of differential scores (85 percent) were 0, $+1$, or $+2$, with a differential of $+1$ being the most common. This slightly positively skewed distribution reveals that more researchers were inclined to attribute a greater potential for security concerns to others’ work than to their own.

Additionally, the lack of perception of potential security risks was itself notable. Almost 50 percent of survey respondents were full professors. Achievement of this rank is typically associated with a decade or more in the field. Yet 39 percent did not see any potential dual-use applications of research performed by colleagues or other researchers in the cognitive sciences, domestically or internationally. Among those, 23 percent strongly disagreed with any dual-use potential. With respect to criminal use, 25 percent did not perceive any risk in others’ research, and 31 percent (8 percent strongly) did not foresee any potential for malfeasant application of any research in the field by terrorists or a state-based weapons program. Approximately a quarter of the research scientists surveyed did not report any potential security risk in cognitive neuroscience research, whether performed by themselves or others.

Scientists were also asked whether they have considered the security implications of their scholarly publications and whether they consider security implications when they are reviewing a publication. With regard to submitting their own research for publication, almost 9 in 10 respondents (89 percent) gave a negative response. Among those who had considered security implications, most had not considered such implications very strongly. With regard to reviewing manuscripts for publication, 86 percent answered that they had never reviewed an article that could be considered to carry dual-use implications. From the responses collected, there was limited consideration of potential security implications as part of the publication process.

Perspectives on research governance

In addition to assessing scientists’ perspectives on the security implications of their research, the study also asked about research ethics and self-regulation. Scientists showed ambivalence toward the establishment of governance structures, such as codes of conduct or advisory boards for their research. When asked, “What do you think of the creation of a national research advisory board in the research of cognitive science?” 47 percent said they were undecided, 28 percent disagreed (including 12 percent who strongly disagreed), and 25 percent agreed. When asked whether they would support the development of a domestic and/or international code of conduct for research in cognitive neuroscience, approximately 44 percent said they would support such a code (both at home and abroad), even though a third of respondents said they were undecided about such efforts. A quarter opposed a national code of conduct, and 29 percent opposed an international code.

While the definition of a “code of conduct” was not specified in any particular way in the survey, it could be understood that any explicitly stated norm or practice that scientists themselves have established and agreed to follow would qualify under this concept. Nevertheless, the notion of a code of conduct would be less formal in its structure and less authoritative in its mandate than the establishment of a government agency advising on research, such as a national advisory board. Responses to the survey showed that scientists would be more willing to govern their work from a bottom-up approach of self-regulation than a top-down form involving government oversight.

Interestingly, exposure to research ethics at professional meetings correlated with respondents’ support for codes of conduct. At the time of the survey, most respondents had not attended more than a couple of conferences with a research ethics component in the previous several years, and fewer than 15 percent reported participation in a professional conference with a strong focus on research ethics in the previous year. Nevertheless, greater exposure to ethics discussions at conferences was associated with support for codes of conduct.

Finally, the survey responses showed that scientists were less willing to accept formal, institutionalized forms of regulation of their work than forms of self-regulation and, wherever possible, preferred to have minimal government involvement. The majority of scientists (60 percent, including 30 percent who were strongly opposed) did not agree with the suggestion that there should be an ethics board to monitor publications. Almost a third of respondents were ambivalent about such a proposal, and only 9 percent supported such a review board, with none supporting it strongly. This lack of support for institutionalized forms of regulation was also reflected in scientists’ strong preference for open science. When asked, more than 75 percent rejected the proposition that scientists should be obligated to refrain from publishing findings that have dual-use potential or the potential to be misused for malevolent or harmful purposes.

Researchers were also asked whether scientists in general should be obligated to refrain from publishing findings if their research has potential security implications downstream from basic research. As mentioned, a vast majority of respondents disagreed with the notion that findings with dual-use implications or potential for malign applications should not be published. Yet, even though respondents did not favor curtailing publication of research regardless of its implications, they also seemed ambivalent about what policy measures would be adequate to prevent the misuse of research. A slight majority (53 percent) reported they were “unsure” when asked, “Do you think current policies are adequate to prevent misuse of cognitive science and neuroscience research?” When then asked what additional policies should be employed to prevent misuse of their research, most respondents indicated that they were not sure what has been done or what could be done in the policy realm. These results suggest that not only were most scientists unclear about what kind of regulatory policies would be appropriate, but they also preferred to have their academic activities remain free from external oversight and formal governance.

In general, the survey responses showed that researchers preferred a softer, less institutionalized, and less intrusive form of internal regulation to a harder, institutionalized form of external control. The increased bureaucratic costs to research that formal regulatory institutions impose, predominantly through lost time, are not factored into the results presented here. This difference in support for internal versus external regulation can be shown through a second differential measure, this one indicating the difference between support for external and internal regulation. The calculation for this measure is analogous to that of the perception indicators described in the previous section.

By subtracting “support for external regulation” from “support for internal regulation,” a differential ranging between $-4$ and $+4$ was again created. In this case, a differential score of $+4$ indicated that respondents expressed strong support for internal (i.e., self-) regulation mechanisms but not for external regulation, whereas a differential score of $-4$ indicated strong support for external regulation but not for internal regulation. Figure 3 shows that scientists expressed a small preference for self-regulation. The positively skewed distribution indicates that more scientists supported regulatory mechanisms governed internally by the scientists themselves (e.g., codes of conduct) than preferred external regulations such as advisory or monitoring boards. Only 9 percent of respondents supported external regulation mechanisms more than internal ones.
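The regulation-support differential can be computed in the same way as the perception differential shown earlier; the short sketch below, using hypothetical example values and an assumed two-item average for each indicator, illustrates the analogous calculation.

```python
# Analogous differential for regulation preferences, again built from items
# coded on a -2 to +2 scale. Example values and item counts are hypothetical.
support_internal = (1 + 2) / 2    # e.g., items on codes of conduct
support_external = (-1 + 0) / 2   # e.g., items on advisory/monitoring boards

# Positive values indicate a preference for internal (self-) regulation over
# external regulation; the differential ranges from -4 to +4.
regulation_differential = support_internal - support_external
print(regulation_differential)  # 2.0
```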

Perspectives on funding

The final portion of the survey concerned the opinions of scientists on institutional structures for research, namely, lab management, funding, and policies. Almost 70 percent of respondents indicated that there was no difference in the management of foreign graduate students and postdoctoral fellows as compared to American graduate students in their research and that, for the most part (60 percent), the number of foreign graduate students had remained about the same for several years around the time of the survey. In addition, despite the heightened security concerns of the post-9/11 period, about two-thirds of respondents had not seen any change in their research or lab management (68 and 65 percent, respectively). For those who had seen changes, most indicated increased difficulty in getting visas for foreign students or visiting scholars, more stringent access to laboratories, more regulations for handling chemicals and toxins, and more stringent Institutional Review Board (IRB) processes.

Figure 3. Differentials in support for internal versus external regulation of research.

When asked about funding, approximately 35 percent of respondents reported receiving funding from the Department of Defense or one of its component agencies, such as DARPA. When asked to estimate the percentage of government funding from various federal agencies, most respondents estimated funding through the DOD to be between 15 and 50 percent. However, when asked, “Which agency do you think should be the lead funder for cognitive science research?” only 1 percent suggested that it should be the DOD, while most indicated that it should be the National Institutes of Health or the National Science Foundation (57 percent and 40 percent, respectively). Nevertheless, it appeared that most researchers were unsure of the exact amount of cognitive neuroscience funding provided by the federal government. Finally, almost two-thirds of respondents (65 percent) believed that scientists themselves, as peer reviewers in the federal grant funding process, should be the ones determining which projects receive funding and how much is spent for particular areas of neuroscientific research.

In addition, most scientists surveyed distinguished between offensive and defensive military applications of neuroscientific research. A slight majority (53 percent) answered that they would not accept funding from the DOD or related agencies for research intended for offensive military purposes, while more than three-fourths (77 percent) said they would accept funding for research intended for defensive military purposes (e.g., improved treatments for PTSD). Such distinctions between offensive and defensive military applications of neuroscience were prevalent. As shown in Figure 4, approximately half of those who did not support government funding for offensive purposes would accept DOD funding if the research was defensive in nature, and a little more than 40 percent of those who strongly opposed government funding of research for military purposes would still accept funding from the DOD as long as the research was defensive.

Figure 4. Researchers’ willingness to accept DOD funding by support for government spending on military applications.

Caveats

Based as they are on a limited-$N$ survey conducted six years prior, these results are subject to various limitations. Before specifying propositions that might be advanced in light of the results, certain caveats need to be highlighted to define the context in which this study was done. Besides the age of the data, problems of sampling and nonresponse are addressed below.

For this survey, the sampling population encompassed experimental scientists residing in the United States who engaged in cognitive neuroscience research as defined by the National Research Council. Since this study was intended to evaluate the perspectives, views, and opinions of active scientists, the target population was limited to these sampling parameters. Nevertheless, the sampling frame devised for this survey could have been subject to some unintentional bias. Reliant on manual selection of individuals whose public information and biographies indicated that their research fit the scope of the survey, the sampling for this study was invariably subject to some human error, both in terms of how biographical information was interpreted and in the determination of an individual’s “fit” for the study. Although a relatively large sample frame was used to maximize participation, sampling was not randomized because a complete list of all researchers in cognitive neuroscience was not available. And even though a reminder was sent to potential respondents, the number of eventual respondents was still small.

In addition to potential sampling bias, nonresponse is a particularly difficult problem to resolve in survey research, especially since no monetary or other reward incentive was offered for taking part in the study. The problem is exacerbated by the fact that survey response rates have been declining in recent years. 60 With the sampling frame used for this study, the response rate was approximately 11 percent, which is comparable to other online surveys and, in some cases, higher. With low response rates, self-selection may be at play (e.g., some respondents may be motivated because they find the topic interesting and salient, have expertise in the field, or are just more inclined to answer surveys), thereby potentially creating a subsample that may not be representative of the population of interest.

Discussion

Neuroscientific research has in no small way caught the attention of the security community, and the defense and intelligence sectors in particular have been engaged in funding the basic research necessary to unpack the potential of the human brain while developing relevant applications. This increased attention from the security community has raised a host of concerns about the ethical, policy, and potential dual-use implications of neuroscience research.

To address the multifaceted potential and challenges that cognitive neuroscience research brings, closer collaboration is needed between those actively engaged in neuroscientific inquiry and those devising policy for applications of this research. By providing an opportunity for researchers to express their views about the security implications, ethics, and potential regulation of their research, this study provides insight into the issues that matter to scientists. In particular, the results from this survey highlight four key findings that should be considered when engaging scientists in policy discourses on the implications of neuroscientific research:

  1. A significant percentage of scientists who responded to the survey (25 to 30 percent) do not perceive potential dual-use or security risks in anyone’s research. This study does not explain the specific origin of this outlook, but it cannot be attributed solely to self-serving bias or to narrowly tailored individual research agendas.

Such findings have implications for research governance and risk mitigation policies, particularly if such policies focus largely on scientific researchers and their pursuits. To better engage neuroscientists in policy discourses on the potential security implications of their research, researchers may need to become more aware of the different ways their findings may be applied, as well as of the fluidity in the definitions of security-relevant technology applications.

A commonly encountered policy prescription is to engage more scientists through measures such as professional education and codes of conduct. Among respondents, support for codes of conduct was tepid or mixed at best: approximately 44 percent of researchers in this study supported a domestic or international code of conduct, while a third were undecided. But even if codes of conduct were instituted, a significant number of scientists (25 to 30 percent) may still not recognize potential dual-use or security risks in research. For this reason, this study also aims to help raise awareness among cognitive neuroscientists about the ramifications of their research, so that they may become more inclined to inform the policy-making process on governing neuroscientific research.

  2. Most scientists surveyed favored the open nature of academic research and expressed a preference for some form of self-regulation over formal oversight and external regulations, although most researchers were against the idea of generalized preemptive regulation of their basic research pursuits.

Taking into account scientists’ preference for open science and their concerns about rigid regulations and mandates, policy makers may consider governance structures that allow scientists to exercise a certain level of self-regulation. By shying away from a top-down, command-and-control precautionary form of research governance, federal officials may be able to devise policies that are less likely to stifle scientific progress while still reducing potential risks. Institutional restrictions directed at scientists are unlikely to be received positively and may be met with resistance. It remains unclear, however, at what level of governance the potential negative consequences of cognitive neuroscience research can be effectively managed.

  3. Despite some level of aversion to military research, the neuroscientists surveyed here generally approve of military funding that is couched in terms of defense.

In addition, because scientists perceive a difference between offensive and defensive applications of research, framing military investment in this area in strictly defensive terms could align scientists’ interests and priorities with those of the government, creating a “happy convergence” of scientist-state interaction with mutual benefits.

  4. Most neuroscientists who responded to the survey were not concerned about, or even aware of, the security-related ethical issues that their research presents; however, among those who have been exposed to ethical discourses, a higher level of exposure is associated with greater support for both internal and external forms of regulation.

From this, it may be concluded that closer engagement among scientists, ethicists, and policy makers is desirable if a meaningful governance structure is to emerge. Since government funding often plays an important role in supporting, guiding, directing, and defining scientific progress, meaningful engagement between scientists and policy makers could prove invaluable, particularly when active researchers possess the technical expertise crucial to informing policy options. In this case, strategies and policy options that help manage the development of neuroscientific research could benefit from scientists’ input. The results from this study provide a view of how neuroscientists perceive dual-use applications and probe what policy engagement may entail, but significant work is still needed to better define scientist-state relations and to establish more effective technological governance.

References

Marcus, Steven J., ed., Neuroethics: Mapping the Field (New York: Dana Press, 2002).
Moreno, Jonathan D., Mind Wars: Brain Research and National Defense (New York: Dana Press, 2006).
Huang, Jonathan Y. and Kosal, Margaret E., “Security implications of cognitive science research,” Bulletin of the Atomic Scientists, June 20, 2008, http://www.thebulletin.org/security-impact-neurosciences.
Tracey, Irene and Flower, Rod, “The warrior in the machine: Neuroscience goes to war,” Nature Reviews Neuroscience, 2014, 15(12): 825–834.
Giordano, James, ed., Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (Boca Raton, FL: CRC Press, 2014).
National Research Council, Committee on Military and Intelligence Methodology for Emergent Neurophysiological and Cognitive/Neural Research in the Next Two Decades, Emerging Cognitive Neuroscience and Related Technologies (Washington, DC: National Academies Press, 2008).
National Research Council, Committee on Opportunities in Neuroscience for Future Army Applications, Opportunities in Neuroscience for Future Army Applications (Washington, DC: National Academies Press, 2009).
Royal Society, Brain Waves: Neuroscience, Conflict and Security (London: The Royal Society, 2012).
North Atlantic Treaty Organization, NATO 2020: Assured Security; Dynamic Engagement (Brussels: NATO Public Diplomacy Division, 2010), p. 15.
Arquilla, John and Ronfeldt, David, eds., In Athena’s Camp: Preparing for Conflict in the Information Age (Santa Monica, CA: RAND Corporation, 1997).
Department of Defense, Office of Force Transformation, The Implementation of Network-Centric Warfare (Washington, DC: Force Transformation, Office of the Secretary of Defense, 2005).
Dombrowski, Peter and Gholz, Eugene, Buying Military Transformation: Technological Innovation and the Defense Industry (New York: Columbia University Press, 2006).
McDermott, Rose, “The feeling of rationality: The meaning of neuroscientific advances for political science,” Perspectives on Politics, 2004, 2(4): 691–706.
Mercer, Jonathan, “Rationality and psychology in international politics,” International Organization, 2005, 59(1): 77–106.
Vander Valk, Frank, ed., Essays on Neuroscience and Political Theory: Thinking the Body Politic (New York: Routledge, 2012).
Marcus, George E., Neuman, W. Russell, and MacKuen, Michael, Affective Intelligence and Political Judgment (Chicago: University of Chicago Press, 2000).
Masters, Roger D., “The biological nature of the state,” World Politics, 1983, 35(2): 161–193.
Stein, Janice Gross, “Rational deterrence against ‘irrational’ adversaries? No common knowledge,” in Complex Deterrence: Strategy in the Global Age, Paul, T. V., Morgan, Patrick M., and Wirtz, James J., eds. (Chicago: University of Chicago Press, 2009), pp. 58–84.
National Research Council, Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology, Biotechnology Research in the Age of Terrorism (Washington, DC: National Academies Press, 2004), p. 1.
United States Government, United States Government Policy for Oversight of Life Sciences Dual Use Research of Concern, March 29, 2012, http://www.phe.gov/s3/dualuse/Documents/us-policy-durc-032812.pdf.
United States Government, United States Government Policy for Institutional Oversight of Life Sciences Dual Use Research of Concern, September 24, 2014, http://www.phe.gov/s3/dualuse/Documents/durc-policy.pdf.
Department of Defense Fiscal Year (FY) 2011 President’s Budget, Defense Advanced Research Projects Agency, Justification Book Volume 1: Research, Development, Test & Evaluation, Defense-Wide–0400, February 2010, http://comptroller.defense.gov/Portals/45/Documents/defbudget/fy2011/budget_justification/pdfs/03_RDT_and_E/DARPA_RDT_E_PB11.pdf.
Department of Defense Fiscal Year (FY) 2011 President’s Budget, Air Force Justification Book Volume 1: Research, Development, Test & Evaluation, Air Force–3600, February 2010, http://www.saffm.hq.af.mil/shared/media/document/AFD-100201-046.pdf.
Department of Defense Fiscal Year (FY) 2011 President’s Budget Estimates, Justification of Estimates: Research Development, Test & Evaluation, Navy, Budget Activity 1–3, February 2010, http://www.secnav.navy.mil/fmc/fmb/Documents/11pres/RDTEN_BA1-3_Book.pdf.
Department of the Army, Office of the Secretary of the Army (Financial Management and Controller), Descriptive Summaries of the Research, Development, Test and Evaluation, Army Appropriations, Volume I–III, February 2010, http://asafm.army.mil/Documents/OfficeDocuments/Budget/BudgetMaterials/FY11/rforms//vol1.pdf.
Parens, Erik, “Creativity, gratitude, and the enhancement debate,” in Neuroethics: Defining the Issues in Theory, Practice, and Policy, Illes, Judy, ed. (New York: Oxford University Press, 2006), pp. 75–86.
Gazzaniga, Michael S., The Ethical Brain (New York: Dana Press, 2005), pp. 71–84.
Naam, Ramez, More Than Human: Embracing the Promise of Biological Enhancement (New York: Broadway Books, 2005), pp. 61–77.
Fukuyama, Francis, Our Posthuman Future: Consequences of the Biotechnology Revolution (New York: Picador, 2003).
Wild, Jennifer, “Brain-imaging ready to detect terrorists, say neuroscientists,” Nature, 2005, 437(7058): 457.
Canli, Turhan, Brandon, Susan, Casebeer, William, Crowley, Philip J., DuRousseau, Don, Greely, Henry, and Pascual-Leone, Alvaro, “Neuroethics and national security,” American Journal of Bioethics, 2007, 7(5): 3–13.
Rosenberg, Leah and Gehrie, Eric, “Against the use of medical technologies for military or national security interests,” American Journal of Bioethics, 2007, 7(5): 22–24.
Rippon, Gina and Senior, Carl, “Neuroscience has no role in national security,” AJOB Neuroscience, 2010, 1(2): 37–38.
Giordano, James, Forsythe, Chris, and Olds, James, “Neuroscience, neurotechnology, and national security: The need for preparedness and an ethics of responsible action,” AJOB Neuroscience, 2010, 1(2): 35–36.
Marks, Jonathan H., “A neuroskeptic’s guide to neuroethics and national security,” AJOB Neuroscience, 2010, 1(2): 4–12.
Justo, Luis and Erazun, Fabiana, “Neuroethics and human rights,” American Journal of Bioethics, 2007, 7(5): 16–17.
Lunstroth, John and Goldman, Jan, “Ethical intelligence from neuroscience: Is it possible?” American Journal of Bioethics, 2007, 7(5): 18–20.
Canli et al., p. 10.
Resnik, David, “Neuroethics, national security, and secrecy,” American Journal of Bioethics, 2007, 7(5): 15.
Dinneen, Nathan, “Precautionary discourse: Thinking through the distinction between the precautionary principle and the precautionary approach in theory and practice,” Politics and the Life Sciences, 2013, 32(1): 2–21.
Kosal, Margaret E., Nanotechnology for Chemical and Biological Defense (New York: Springer, 2009), pp. 89–97.
David, Kenneth, “Socio-technical analysis of those concerned with emerging technology, engagement, and governance,” in What Can Nanotechnology Learn from Biotechnology? Social and Ethical Lessons for Nanoscience from the Debate over Agrifood, Biotechnology, and GMOs, David, Kenneth and Thompson, Paul B., eds. (Burlington, MA: Elsevier Academic Press, 2008), p. 8.
Clift, Roland, “Risk management and regulation in an emerging technology,” in Nanotechnology: Risk, Ethics, and Law, Hunt, Geoffrey and Mehta, Michael D., eds. (London: Earthscan, 2006), pp. 140–153.
Kosal, Margaret E., “Strategy, technology, and governance: Shift of responsibility from nation-states to individuals,” paper presented at the Atlanta Conference on Science and Innovation Policy (Atlanta, September 27, 2013).
Torriti, Jacopo, “Impact assessments and emerging technologies: From precaution to ‘smart regulation’?” in Emerging Technologies: From Hindsight to Foresight, Einsiedel, Edna F., ed. (Vancouver: University of British Columbia Press, 2009), pp. 289–306.
Fukuyama, Francis and Wagner, Caroline S., Information and Biological Revolutions: Global Governance Challenges—Summary of a Study Group (Santa Monica, CA: RAND Corporation, 2000), p. ix.
Fukuyama and Wagner, p. xi.
Rip, Arie, “Governance of new and emerging science and technology,” in Unnatural Selection: The Challenges of Engineering Tomorrow’s People, Healey, Peter and Rayner, Steve, eds. (London: Earthscan, 2009), pp. 209–220.
Whitman, Jim, “Global governance and twenty-first century technology,” in Technology and Security: Governing Threats in the New Millennium, Rappert, Brian, ed. (New York: Palgrave Macmillan, 2007), pp. 106–107.
National Research Council, Committee on Assessing Fundamental Attitudes of Life Scientists as a Basis for Biosecurity Education, A Survey of Attitudes and Actions on Dual Use Research in the Life Sciences: A Collaborative Effort of the National Research Council and the American Association for the Advancement of Science (Washington, DC: National Academies Press, 2009), p. 5.
Bush, Vannevar, Science: The Endless Frontier (Washington, DC: U.S. Government Printing Office, 1945).
Solingen, Etel, “Domestic structure and the international context: Toward models of state-scientists interaction,” in Scientists and the States: Domestic Structures and the International Context, Solingen, Etel, ed. (Ann Arbor: University of Michigan Press, 1994), pp. 1–31.
Kurz-Milcke, Elke and Gigerenzer, Gerd, Experts in Science and Society (New York: Kluwer Academic/Plenum Publishers, 2004).
Pielke, Roger A. Jr., The Honest Broker: Making Sense of Science in Policy and Politics (Cambridge: Cambridge University Press, 2007), p. 7.
Haas, Peter M., Saving the Mediterranean: The Politics of International Environmental Cooperation (New York: Columbia University Press, 1990), p. 55.
Haas, Peter M., “Introduction: Epistemic communities and international policy coordination,” International Organization, 1992, 46(1): 1–35.
National Research Council, 2008, p. 2.
Haselton, Martie G., Nettle, Daniel, and Andrews, Paul W., “The evolution of cognitive bias,” in The Handbook of Evolutionary Psychology, Buss, David M., ed. (Hoboken, NJ: John Wiley & Sons, 2005), pp. 724–747.
Tavris, Carol and Aronson, Elliot, Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts (Orlando, FL: Harcourt, 2007).
Dillman, Don A., Eltinge, John L., Groves, Robert M., and Little, Roderick J. A., “Survey nonresponse in design, data collection, and analysis,” in Survey Nonresponse, Groves, Robert M., Dillman, Don A., Eltinge, John L., and Little, Roderick J. A., eds. (New York: John Wiley & Sons, 2002), pp. 3–26.