
Chapter Three - Scanning horizons in research, policy and practice

from Part I - Identifying priorities and collating the evidence

Published online by Cambridge University Press:  18 April 2020

William J. Sutherland
Affiliation:
University of Cambridge
Peter N. M. Brotherton
Affiliation:
Natural England
Zoe G. Davies
Affiliation:
Durrell Institute of Conservation and Ecology (DICE), University of Kent
Nancy Ockendon
Affiliation:
University of Cambridge
Nathalie Pettorelli
Affiliation:
Zoological Society of London
Juliet A. Vickery
Affiliation:
Royal Society for the Protection of Birds, Bedfordshire

Summary

New and emerging environmental issues make policy and practice difficult. A pressing need to respond when knowledge of the problem is limited is added to an already challenging conservation agenda. Horizon scanning is an evolving approach that draws on diverse information sources to identify early indications of poorly recognised threats and opportunities. There are many ways to conduct horizon scans, ranging from automated techniques that scan online content and mine text to manual methods that systematically consult large groups of people (often experts). These different approaches aim to sort through vast volumes of information to look for signals of change, for example the rise in microplastics or the use of mobile phones to gather data in remote forests. Identifying these new threats and opportunities is the first important step towards further researching and managing them. This chapter reviews different approaches to horizon scanning, together with ways of encouraging uptake of scanning outputs. It concludes by introducing emerging technologies that will add value to horizon scanning in the future.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2020
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/cclicenses/).

3.1 Introduction

Conservationists have long had to deal with a number of prominent, recurring issues, such as habitat loss and fragmentation, pollution, invasive species and wildlife harvesting. On top of these well-known challenges, others have emerged. Over the last half-century, these have included the impact of halogenated pesticides and defoliants, acid rain from coal-fired electricity generation, ecological impacts of biofuel production and atmospheric releases of ozone-depleting chemicals. More recently, concerns have emerged around microplastics and exploitation of the Arctic, although some changes also bring opportunities for conservation, such as using mobile phones to collect data. New and emerging issues tend to make policy and practice more difficult. They add to an already challenging agenda, and often require a response when knowledge of the problem is limited.

Emerging from the relatively new field of ‘futures’ studies, horizon scanning is still developing as a method. By crowdsourcing information and drawing on communities of practice to sort, verify and analyse that information, horizon scanning offers an efficient way to look for early indications of poorly recognised threats and opportunities (Sutherland & Woodroof, 2009; van Rij, 2010). It aims to minimise surprises by foreseeing these threats and opportunities, enabling policy-makers and researchers to respond quickly to developing problems. Horizon scanning is an approach primarily used to retrieve, sort and organise information from different sectors that is relevant to the question at hand, in a similar process to intelligence gathering. It can also include varying degrees of analysis, interpretation and prioritisation, but deciding which issues to act on, and how to act on them, typically takes place after the horizon scanning, and is assisted by other ‘futures’ tools, such as visioning, causal layered analysis, scenario planning and backcasting (e.g. Glenn & Gordon, 2009; Inayatullah, 2013; Cook et al., 2014a). Recent frameworks have also been developed to link different futures tools, such as horizon scanning and scenario planning, together (Rowe et al., 2017).

Horizon scanning outputs come in a wide range of forms. Some broadly describe a single trend that cuts across different parts of society, such as the rise of big data, or the future of a general area of interest, such as ‘Environmental Sustainability and Competitiveness’ (Policy Horizons Canada, 2011). These outputs are usually aligned with more general foresight programmes. Other exercises look at a set of more specific potential threats, such as invasive species that may arrive in the UK and threaten biodiversity (Roy et al., 2014), and compare them in an approach similar to risk assessment. For the last 10 years, conservation scientists have run annual horizon scans to identify emerging issues with the potential to impact global conservation (e.g. Sutherland et al., 2018). A similar approach has also been used to identify important scientific questions that, if answered, would help guide conservation practice and policy (e.g. Sutherland et al., 2009).

As with any policy advisory work, there is always a risk that useful information is gathered but not followed up, as decisions are often driven by other, usually non-scientific, factors. This risk may be higher with unsolicited scans (grassroots scans produced by a community of practitioners, researchers or academics) than with solicited scans (called for by policy- and decision-makers). It can be unclear where the responsibility lies for integrating outputs into policy-making, and uptake depends on the organisational culture at the time (Delaney & Osborne, 2013). Schultz (2006) pointed to a conceptual contradiction between evidence-based policy and horizon scanning, where the latter searches for issues that may not be fully supported by a definitive body of evidence. A more optimistic perspective is that horizon scanning needs to be embedded in a broader strategic foresight framework, to increase the likelihood that findings are translated into practice (e.g. van Rij, 2010; Cook et al., 2014a). As mentioned above, horizon scanning identifies emerging and novel threats and opportunities as a first step, but other foresight tools serve different purposes along the pathway to adopting appropriate policy. These other foresight tools are not explicitly covered in this chapter, but we provide an example, The Antarctic Science Scan and Roadmap Challenges Exercise, of a hybrid horizon scanning activity where an accompanying road map was also produced to outline actionable recommendations (Box 3.2).

In this chapter, we introduce both general and specific approaches to horizon scanning, outline some ways of achieving and measuring impact and explore how horizon scanning may evolve in the future.

3.2 Approaches to horizon scanning

‘Exploratory horizon scanning’ identifies novel issues by searching for the first ‘signals’ of change across a wide range of sources (such as an early scientific paper describing a potentially impactful new technology). ‘Issue-centred scanning’ monitors issues that have already been identified by searching for additional signals that confirm or deny that the issue is truly emerging (Amanatidou et al., 2012). Signals can be organised into clusters (multiple pieces of information) that can either contribute to the evidence base around pre-identified issues, or form a long list of novel issues that are potentially emerging (Figure 3.1). The long list of issues can be further analysed and prioritised into a shortlist using methods detailed below. Some horizon scanning exercises take further steps to make the output more useful for the end user, for example, by assessing the policy relevance of the issues or the feasibility of addressing them, and by identifying those that warrant ongoing monitoring (Sutherland et al., 2012).

Figure 3.1 General framework for horizon scanning, reflecting the key steps in the procedure (ovals), inputs and products (rounded rectangles), key outputs (rectangles), actors and end users (triangles), and activities and methods (floating text).

Process adapted from Amanatidou et al. (2012).

There is a range of different ways to carry out horizon scanning; we introduce the main stages and provide some specific examples in the boxed texts and Table 3.1. Because our definition of horizon scanning concentrates largely on information retrieval, sorting and, to some extent, analysis and prioritisation, we focus here on methods that facilitate these activities.

Table 3.1 Approaches to horizon scanning (some activities and examples overlap)

Approach: Manual search of an invited expert group with Delphi-style prioritisation
Examples: Global conservation (e.g. Sutherland et al., 2018), Antarctic science (e.g. Kennicutt et al., 2015), bioengineering (Wintle et al., 2017), Mediterranean conservation (Kark et al., 2016)

Approach: Manual search of a large crowd-sourced group (open call) with Delphi-style prioritisation (invited)
Examples: Future of the Illegal Wildlife Trade (Esmail et al., 2019)

Approach: Automated open-source search and manual analysis/prioritisation (usually by a community of experts)
Examples: IBIS (Grossel et al., 2017), Global Disease Detection Program (Centers for Disease Control and Prevention, www.cdc.gov/globalhealth/healthprotection/gdd/index.html), HealthMap (www.healthmap.org/en/), ProMed (www.promedmail.org/)

Approach: Advanced text analytics to identify emerging issues and research areas (e.g. sentiment analysis, machine learning)
Examples: FUSE Program (www.iarpa.gov/index.php/research-programs/fuse), Meta (https://meta.org/), X risk database (www.x-risk.net/)

Approach: Manual searches within an organisation (by employees, interns or volunteers), results tagged and catalogued in a database
Examples: US Forest Service (Hines et al., 2018), UK Department for Environment, Food and Rural Affairs (Garnett et al., 2016)

Approach: Comprehensive programme (including scanning, sentiment analysis, scenario planning; manual and automated)
Examples: Singapore’s Centre for Strategic Futures (www.csf.gov.sg/), partnered with the Risk Assessment and Horizon Scanning Programme Office

Approach: Expert opinion (voting, survey)
Examples: Global Risks Report 2019 (World Economic Forum, 2019)

Approach: Regular meeting of a cross-disciplinary horizon-scanning group to discuss emerging issues and build a database
Examples: Australasian Joint Agencies Scanning Network (www.ajasn.com.au/), Human Animal Infections and Risk Surveillance group (www.gov.uk/government/collections/human-animal-infections-and-risk-surveillance-group-hairs#risk-assessments-and-process)

3.2.1 Scoping

Like any major project, horizon scans need to be scoped and clear guidelines developed to assist scanners. A comprehensive scoping exercise addresses the following questions.

  • What is the guiding question that defines what you want to know?

  • How broadly or narrowly defined is the field of interest?

  • What are the key drivers of change and activities in the field? It is common to organise thinking around a STEEP (Social, Technological, Economic, Environmental and Political factors) framework.

  • What is the spatial scope? For instance, are you seeking issues with global or more localised impact?

  • How far into the future should scanners be projecting?

  • Who should be involved?

  • Who are the potential end users?

Many of these considerations will be constrained by the resources available and the needs of the end user, but tools such as stakeholder analysis (Reed et al., 2009), domain mapping (Lesley et al., 2002) and issues trees (Government Office for Science, 2017) can be useful. Scoping exercises may also involve some pilot scanning to get a feel for how well-defined the task is. For example, preliminary scanning in a US Forest Service project that aimed to identify emerging issues that could affect forests and forestry in the future revealed that ‘natural resources and the environment’ was too broad a topic for their exercise. Instead, it was narrowed to ‘forests’ (Hines et al., 2018).

Horizon scans that rely heavily on people rather than computers to do the scanning reflect the biases of those participants. A well-structured procedure for obtaining judgements from participants (e.g. Figure 3.2) will go a long way to mitigate psychological biases (Burgman, 2015b), but in order to capture a broad array of perspectives, it is critical to involve a diverse group of people to identify and prioritise candidate issues. A cognitively diverse group (comprising individuals who think differently) is thought to maximise collective wisdom and objectivity (Page, 2008). A good proxy for cognitive diversity is demographic diversity, although achieving demographic diversity can be challenging in practice. For example, there may be language barriers to overcome, and people with certain occupations (e.g. scholars) may be over-represented in horizon scans conducted by researchers. Inviting contributions from further afield, both geographically and from outside immediate peer circles, broadens the scope of issues considered. This might be achieved by putting out an open call for issues online and advertising it through relevant websites and email lists (e.g. Esmail et al., 2019), or posting a call for ideas on social media.

Figure 3.2 The Delphi-style horizon-scanning approach often used in conservation (Sutherland et al., 2011).

Figure reproduced from Wintle et al. (2017), published under the Creative Commons Attribution 4.0 Licence.

3.2.2 Gathering inputs

Inputs to a scan can either be gathered manually (by people) or with the aid of automated software, whose output is then (usually) analysed by people. Manual scanning typically involves a group of people monitoring current research and relevant trends (e.g. technology trends, disease trends or population trends) via desktop searches, attending conferences and consulting other people in their networks. Information can be manually scanned in news articles, social media, publications, grey literature and other outputs of relevant organisations (such as models and projections). This is typically the first step in a ‘Delphi-style’ method that then goes on to analyse and prioritise candidate issues in a structured approach, usually involving one or more expert workshops (see Boxes 3.1 and 3.2 for examples and further descriptions of the procedure). Scanners can be provided with guidelines by a facilitator to direct their search, including suggestions of where to look. Manual methods have the advantage of accessing content that may not exist online (e.g. grey literature or unpublished research), or content that may be difficult to locate in the absence of known keywords to direct database and online searches. The downside of manual methods is that they are labour-intensive and, being less systematic, may be exposed to the biases of the searcher.

Box 3.1. A Delphi-style method for horizon scanning in conservation

With its foundations in the Delphi Method (Linstone & Turoff, 1975; Mukherjee et al., 2015), this structured approach (Figure 3.2) was first applied in horizon scanning for conservation by Sutherland et al. (2008). There are now several variants. The key features that make this approach ‘Delphi-style’ are iteration (issues are submitted, scored, discussed and scored again) and anonymity of submissions and scoring. Typically, about 25 conservation experts from around the world participate in the following procedure. Over the course of several months, participants independently scan material from a variety of sources (e.g. papers, reports, websites, conferences) looking for issues (threats or opportunities) that are relatively novel, but that we should start planning for. Over email, each participant anonymously submits short summaries of two to five issues they have selected as the best ‘horizon-scanning’ candidates, defined as reflecting a combination of novelty, plausibility and potential future impact on global conservation. The facilitator compiles the issue summaries and circulates them back to the group, who anonymously score each issue in terms of its suitability as a ‘horizon-scanning’ item (using the definition above). A shortlist of the top-scoring issues, containing perhaps twice the total number sought, is recirculated back to participants. Each participant is assigned approximately five issues (not their own) to investigate further, gathering evidence to support or oppose each issue’s suitability, so that each issue is cross-examined by at least two to three people. Issues are usually assigned to people who are not experts in that subject matter, in the hope that they will have fewer preconceptions about the issue, while the experts will contribute their knowledge anyway.
The whole group then meets at a workshop and systematically discusses each of the shortlisted issues (e.g. to consider new perspectives, relevant research, and whether the issue is genuinely novel or just a repackaging of an old issue). The issues are kept anonymous to reduce biases and allow for an open discussion. After the discussion, participants individually score the issues a second time. The top-scoring 15 are redrafted by one of the other group members and published each year in Trends in Ecology & Evolution (e.g. Sutherland et al., 2018).
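The two rounds of anonymous scoring and shortlisting in Box 3.1 can be sketched in a few lines. The function below is only an illustrative reduction of the procedure (the issue names, scores and the shortlist factor of two are hypothetical); real exercises interleave discussion, evidence-gathering and rescoring between rounds.

```python
from statistics import mean

def shortlist(scores: dict[str, list[float]], n_final: int) -> list[str]:
    """Rank issues by mean anonymous score and keep roughly twice the
    number finally sought, as in the Delphi-style procedure above."""
    ranked = sorted(scores, key=lambda issue: mean(scores[issue]), reverse=True)
    return ranked[: 2 * n_final]

# Hypothetical first-round scores from three anonymous participants.
round_one = {
    "synthetic meat": [9, 8, 9],
    "microplastics in soil": [8, 9, 7],
    "deep-sea mining": [6, 7, 8],
    "drone disturbance of seabirds": [5, 4, 6],
    "mesh-network camera traps": [4, 5, 5],
    "ocean fertilisation revival": [3, 4, 4],
}
print(shortlist(round_one, n_final=2))  # the top 4 issues go forward to the workshop
```

In the real procedure the shortlisted issues would then be investigated, discussed at the workshop and scored a second time before the final 15 are selected.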

Box 3.2. Antarctic science scan and Roadmap Challenges project

The international Antarctic community came together to horizon scan the highest-priority scientific questions that researchers should aspire to answer in the next two decades and beyond. The approach included online submission of questions from the science research community, followed by a subset of 75 representatives (selected by nomination and voting) attending a workshop. At the workshop, approximately 1000 submitted questions were winnowed down to the 80 most important through methodical debate, discussion, revision and elimination by voting. All information used, including the 1000 submitted questions, was made publicly available in a database at a horizon scan website (Kennicutt et al., 2014). The horizon scan was followed by the Antarctic Roadmap Challenges project, which was designed to delineate the critical requirements for delivering the highest-priority research identified. The project addressed the challenges of enabling technologies, facilitating access to the region, providing logistics and infrastructure and capitalising on international cooperation. The process uniquely brought together scientists, research funders and those who provide the logistics for field research in the Antarctic. Online surveys of the community were conducted to identify the highest-priority technological needs, and to assess the feasibility (time to development) and cost of these requirements. Sixty experts were assembled at a workshop to consider a series of topic-specific summary papers submitted by a range of Antarctic communities, survey results and summaries from the horizon scan, as well as existing documents addressing future Antarctic science directions, technologies and logistics requirements (Kennicutt et al., 2015).

Computer-assisted scanning is increasingly used to automate the gathering of a vast quantity of inputs, often crowd-sourced and usually from the internet (Palomino et al., 2012). Several such tools are now used in agriculture and health biosecurity to provide early detection of disease outbreaks (see Table 3.1 and Box 3.3 for examples) (Salathé et al., 2012; Kluberg et al., 2016; Grossel et al., 2017). Early online information, such as a tweet about a Tasmanian devil with a tumour on its face, or a YouTube video about a new device for targeting an invasive species, although unverified to begin with, may be critical for establishing the first in a series of signals that suggests a new or emerging threat (Grossel et al., 2017). Information on the internet can be retrieved in a number of ways. Keywords can be inserted into whole-web search engines, and/or particular websites can be targeted in more depth (e.g. Twitter can be searched using search terms, handles and hashtags). Research, news and current affairs can also be accessed via the RSS feeds of particular news and science sites, or by email and subscription to social media and blogs. Online data are often retrieved with the help of web scraping (accessing and storing particular web pages) and web crawling (accessing and storing links, and links of links, from a page) (Hartley et al., 2013). With the recent increase in ‘fake news’, web searches require some form of quality control and vetting of sources: a process that can also be useful for exposing fake news.
Large volumes of text scraped from the web, articles, patents, reports and other publications can be mined and filtered for potential relevance using automated software, such as machine learning algorithms.
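As a minimal illustration of this kind of keyword-driven filtering (the signal terms and scanned items below are invented, and operational systems such as IBIS use much richer machine-learning models), each item can be scored by how many broad signal terms it contains:

```python
import re

# Hypothetical broad signal keywords, in the spirit of biosecurity scanning.
SIGNAL_TERMS = {"disease", "outbreak", "dead", "invasive"}

def relevance(text: str, keywords: set[str] = SIGNAL_TERMS) -> int:
    """Count the distinct signal keywords appearing in a scanned item."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return len(tokens & keywords)

items = [
    "Oyster herpes virus outbreak reported at a UK farm",
    "New camera-trap firmware released",
    "Dead fish wash up after suspected algal bloom; disease not ruled out",
]
flagged = [item for item in items if relevance(item) >= 1]  # keeps the first and third items
```

A real pipeline would replace the fixed keyword set with learned classifiers and route flagged items to human reviewers, as described in Box 3.3.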

Box 3.3. Online horizon scanning: intelligence-gathering for biosecurity

The International Biosecurity Intelligence System (IBIS) is a generic web-based application that focuses on animal, plant and marine health, and provides continuing surveillance of emerging pests and diseases, including environmental ones (Grossel et al., 2017). It also detects other environmental issues, such as harmful algal blooms. It is open source, in that it gathers articles from regular feeds of trusted sources (e.g. industry news, research) and publicly available online material, like news reports, blogs, published literature and Twitter feeds. Searches can be directed by broadly relevant keywords, such as ‘disease’, ‘outbreak’ or ‘dead’, in addition to specific diseases of concern (e.g. ‘oyster herpes virus’). Articles can also be submitted manually to the application by registered users. A large expert community – the registered users, who are self-selected and approved by the administrator – then filters the articles, promoting those deemed important and relevant to the home page, and demoting those that appear irrelevant or junk. Automated tools also assist with filtering (e.g. machine learning and network cluster analysis), but as machine learning is still in its infancy, its use is limited to disease outbreaks from trusted sources. Items classified as junk by people are retained in a database to help the system’s artificial intelligence (AI) algorithms learn. The broader user community (anyone who signs up online) is alerted to items that have been flagged by the registered users as important, via a daily email news digest. IBIS is also ‘open-analysis’, meaning that analysis of the publicly available information is performed openly by registered users.
They can create or contribute to an emerging/ongoing issues dashboard that features a window for adding content, a Delphi-based forecasting section, links to related reports, share functions, comments and a map showing the location of events of interest (e.g. an outbreak). Registered users can also conduct their own searches and use integrated analytical tools to construct intelligence reports. IBIS has been effective in guiding policies and active risk management decisions for the Australian Government since 2006. The system may produce up to five intelligence briefs a week on major issues affecting biosecurity and trade, allowing the government to respond to threats much faster than before. For instance, the system picked up a report of oyster herpes virus from a UK farm, which had previously purchased used aquaculture equipment from a disease-stricken oyster farm in France. Intelligence from IBIS revealed that businesses that had been closed down by the disease had been liquidating their equipment and selling it to other countries. In response, the Australian Government changed its biosecurity policy to decontaminate all used aquaculture equipment on arrival (Burgman, 2015a).

Automated scanning is fast, systematic and comprehensive in its scope, but often relies on people – sometimes experts – to screen, review and perhaps investigate all reports before on-posting or incorporating them (Lyon, 2010). For tools that scan across a wide range of topics, and those that use ongoing surveillance, this can be onerous and time-consuming. There are three other notable challenges to relying on online content for horizon scanning. First, material needs to already be posted on the web, and there may be a delay before an event, such as an invasive species incursion, is reported online. Second, useful content is not always publicly available, as it can lie behind paywalls, be stored on intranets (e.g. grey literature), or be secured because it is commercially, politically or personally sensitive. Third, most methods for obtaining online content rely on using the right keywords, which requires some idea of what you are looking for.

3.2.3 Sorting, cataloguing and clustering

Scanners tag and catalogue content derived from both manual and automated scans (e.g. by relevance, credibility, source type or sectoral origin) concurrently with input gathering (e.g. Garnett et al., 2016; Hines et al., 2018). Content can be further reorganised and vetted at a later stage. During this process, new search terms can be generated to direct further scanning, or existing search terms refined. Content can be organised according to a framework that also considers the level of response required and the strength of the evidence, which can help prioritise risks and other identified issues at a later stage (Garnett et al., 2016). Clustering methods, such as network analysis (Könnölä et al., 2012; Saritas & Miles, 2012), are useful for capturing cross-cutting issues that affect a number of topics of interest.
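As a toy illustration of how tagged content can be clustered (this union-find grouping is our own simplification, not the network-analysis methods cited above, and the items and tags are hypothetical), items sharing any tag can be merged into one cluster:

```python
from collections import defaultdict

def cluster_by_tag(items: dict[str, set[str]]) -> list[set[str]]:
    """Merge items into clusters whenever two items share a tag
    (connected components over the item-tag network)."""
    parent = {item: item for item in items}

    def find(x: str) -> str:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    by_tag = defaultdict(list)
    for item, tags in items.items():
        for tag in tags:
            by_tag[tag].append(item)
    for members in by_tag.values():  # union all items sharing a tag
        for other in members[1:]:
            parent[find(other)] = find(members[0])

    clusters = defaultdict(set)
    for item in items:
        clusters[find(item)].add(item)
    return list(clusters.values())

# Hypothetical scanned items with tags assigned during cataloguing.
signals = {
    "tweet: plastic fibres in deep-sea sediment": {"plastic", "ocean"},
    "paper: microplastic uptake in earthworms": {"plastic", "soil"},
    "news: new bird flu strain in gulls": {"disease", "birds"},
}
print(len(cluster_by_tag(signals)))  # 2 clusters: plastics-related vs. disease-related
```

Clusters built this way correspond to the ‘multiple pieces of information’ around a candidate issue described in Section 3.2.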

3.2.4 Analysing and prioritising

At this stage, a long list of issues will have been compiled, with some more suitable to the project aims than others. This can be an opportune time to reiterate objectives. Do you seek issues that most people have not heard of? Do you intend to identify broad, developing topics or very specific developments (for example, the ‘increase in hydropower’ versus ‘fragmentation effects of hydropower in the Andean Amazon’)? Are you interested in issues likely to arise soon, or in events that have a smaller probability of playing out in the long-term future? Does the output need to be useful to policy-makers? Many exercises, especially those with follow-up plans, aim to prioritise a select number of ‘most suitable’ issues, and the precise manner in which such prioritisation decisions are made makes a real difference to the quality of the output (Sutherland & Burgman, 2015). Our experience with exercises that aim to identify novel issues is that participants gravitate towards well-known, albeit important, issues. Avoiding this requires strong chairing and a group that accepts the objective. To help overcome the problem, each participant can be asked whether they have heard of each issue, so that well-known topics can be excluded from the shortlist.

Within a manual Delphi-style approach (described in Boxes 3.1 and 3.2), issues are prioritised through an iterative scoring or voting process, usually facilitated online or in a workshop with a group of experts. The goal is to reduce a pool of potential horizon scanning items or ideas to a smaller subset. The number of items, or issues, covered in the final list can vary, but tends to reflect around 10–30% of the initial items put forward (e.g. Kennicutt et al., 2014; Parker et al., 2014; Kark et al., 2016; Wintle et al., 2017; Sutherland et al., 2018). As a point of comparison, the horizon scans described in Box 3.1 describe 15 issues annually, while the Antarctic hybrid horizon scan identified 80 shorter priority scientific questions (Box 3.2). The final number may be constrained by how many issues the end user can realistically give their attention to (for a busy policy-maker, this may be only 15–20 half-page summaries), but is also driven by the number of (in)appropriate issues submitted. The main purpose of prioritisation is to remove issues that do not satisfy the selection criteria (novelty, plausibility, potential impact) and select those that are the most urgent or time-sensitive. Prioritisation of issues will inevitably involve trade-offs, especially where different group members have different perspectives. Because individuals’ diverging opinions can be masked in aggregated scores, analysing interrater concordance (e.g. with Kendall’s W) affords insights into the level of agreement between contributors. In a diverse group, we would expect a wide variety of viewpoints to be voiced, but a core of shared opinions is often discernible (e.g. Wintle et al., 2017).
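Kendall's W can be computed directly from the participants' rankings. The sketch below assumes complete rankings with no ties (tied ranks require a correction term that is omitted here), and the example rankings are invented; W ranges from 0 (no agreement) to 1 (complete agreement).

```python
def kendalls_w(rankings: list[list[int]]) -> float:
    """Kendall's coefficient of concordance for m raters ranking n items.
    rankings[r][i] is rater r's rank (1..n) for item i; no tied ranks."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]  # rank sum per item
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)  # squared deviations of rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))

print(kendalls_w([[1, 2, 3, 4]] * 3))                          # perfect agreement -> 1.0
print(kendalls_w([[1, 2, 3, 4], [4, 3, 2, 1], [1, 2, 3, 4]]))  # one dissenter -> low W
```

A low W flags that the aggregated shortlist masks genuinely divergent views, which may be worth surfacing in the workshop discussion.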

Items identified in a computerised scan (e.g. articles returned from a keyword search) are also prioritised by groups of people with varying levels of content expertise. People may be employed to sort through material, as in governmental horizon-scanning programmes such as Singapore’s, or they may volunteer to do so because they are interested in the output, such as a farmer or epidemiologist concerned with news of disease outbreaks. Initially, items are sorted according to their relevance to the scanning aims (often done in the initial tagging/sorting process). Irrelevant items are discarded or moved to low priority. A second form of prioritisation involves flagging issues or topics that are particularly noteworthy (Grossel et al., 2017). This can be because signals have grown stronger (more evidence is gathered to suggest an issue is becoming a threat or presenting an opportunity for action) (Cook et al., 2014b), or it might be because the potential consequences are so severe that the issue warrants immediate attention, even when evidence is limited or the probability is low (‘wild cards’).
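One crude way to operationalise ‘signals growing stronger’ (our own illustration; the threshold factor and monthly counts below are hypothetical, and real systems weigh evidence far more carefully) is to flag any topic whose latest signal count jumps well above its historical average:

```python
def flag_strengthening(signal_counts: dict[str, list[int]], factor: float = 2.0) -> list[str]:
    """Flag topics whose latest-period signal count is at least `factor`
    times their historical average (an illustrative rule of thumb)."""
    flagged = []
    for topic, counts in signal_counts.items():
        *history, latest = counts
        baseline = sum(history) / len(history)
        if latest >= factor * baseline:
            flagged.append(topic)
    return flagged

# Hypothetical monthly counts of scanned items mentioning each topic.
counts = {
    "microplastics": [1, 2, 1, 6],  # recent jump -> flagged
    "hydropower": [3, 3, 3, 3],     # steady -> not flagged
}
print(flag_strengthening(counts))  # ['microplastics']
```

‘Wild cards’ would bypass such a threshold entirely, being flagged on severity of consequences rather than signal strength.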

3.2.5 Using the output

The previous step described prioritisation within the horizon scan to reduce a candidate set of issues. In that step, issues are ideally judged not according to importance, but according to less subjective criteria, such as the likelihood of occurring or of exceeding some threshold within a given timeframe. Prioritising which issues are the most important, and therefore should be acted on, is a different goal, and might be decided through follow-up, explicitly values-driven exercises involving representatives from government or relevant organisations (e.g. Sutherland et al., 2012).

Bringing together a cross-section of policy-makers in a follow-up exercise can be useful, not only to identify those issues that require further monitoring or evidence before being acted on, but also to encourage prioritisation of cross-organisational issues, knowledge sharing, and collaborative development of policy. Ideally, feasibility assessments of the options available would be included (as carried out in the extension of the recent Antarctic scan, Box 3.2).

3.2.6 Evaluating the process

Assessing the success of horizon scans in identifying emerging issues is challenging, and has rarely been attempted. However, a recent review by Sutherland et al. (2019) examined the first of the annual global conservation scans described in Box 3.1 (Sutherland et al., 2010) to consider how the issues identified in 2009 had developed. This was assessed using several approaches: a mini-review was carried out for each topic; the trajectory of the number of articles in the scientific literature and news media that mentioned each topic in the years before and after its identification was examined; and a Delphi-style scoring process was used to assess each topic’s change in importance. Five of the 15 topics, including microplastic pollution, synthetic meat and environmental applications of mobile-sensing technology, appeared to have increased in salience and effect. The development of six topics was considered moderate, three had not emerged and the effects of one topic were considered low.

As part of the same exercise, 12 global conservation organisations were questioned in 2010 about their awareness of, and current and anticipated involvement in, each of the topics identified in 2009 (Sutherland et al., 2012). The survey was repeated in 2018 (Sutherland et al., 2019). Awareness of all topics had increased, with the largest increases associated with microplastic pollution and synthetic meat; the change in organisational involvement was greatest for microplastics and mobile-sensing technology. Perhaps the most surprising result was the proportion of organisations that, in 2010, had not heard of what are now mainstream issues: 77% for microplastics, 54% for synthetic meat and 31% for the use of mobile-sensing technology. A decade ago, the idea of collecting environmental data using phones was cutting-edge.

Thus, efforts have begun to examine the development of previously identified horizon-scan topics, but further research into the impact of horizon scans, and consideration of issues that may have been ‘missed’ (not identified but subsequently emerging as important), are needed.

3.3 Making a difference with horizon scanning

Gauging the extent to which horizon-scanning outputs inform policy, future research directions and resource investments is not always straightforward, and no one has yet tested the effectiveness of the process. In instances where the primary decision-making organisation uses horizon scanning internally to assist with deliberations (e.g. scans to set priorities for a government agency), actions can be mapped directly against outcomes; in these cases, implementing the actions indicates impact. In other cases, scans can be driven by a community outside of government to set agreed future directions, which can then be used to persuade external resource allocators. Even where policy appears to reflect issues flagged in a horizon scan, it is difficult to trace direct influence, as inputs from multiple sources are often blended into final policy decisions without attribution. It may also take years for real-world impact to be realised. Nevertheless, there are ways in which uptake of horizon-scanning output can be encouraged.

As a starting point, horizon-scanning outputs can be matched to the organisations to which they are most relevant. For example, policy-makers and practitioners can come together in a follow-up workshop to assess the importance of previously identified horizon-scanning issues for their organisation (Sutherland et al., 2012, 2019). Alternatively, the end users (e.g. policy-makers and practitioners) can be engaged in the horizon scan from the outset, as in a recent scan of research priorities for protected areas (Dudley et al., 2018). Similarly, horizon-scanning networks involving representatives from a range of government agencies, such as the Australasian Joint Agencies Scanning Network or the UK Human Animal Infections and Risk Surveillance group, provide an ongoing forum for sharing information on new and emerging issues that potentially affect different departments and organisations. Regular meetings and reports are used to deliver this information to policy-makers in a timely way (Delaney & Osborne, 2013).

In-depth follow-up analyses of horizon-scanning issues may also help policy-makers decide which to target first. A formal risk analysis of likelihood and consequences might be most appropriate for horizon-scanning outputs that compare similarly well-defined issues, for example one invasive species against another (e.g. Roy et al., 2014). It may be more challenging if some of the issues in the candidate set are more coarse-grained than others (e.g. comparing ocean warming with a specific emerging fungal disease in some snakes). Nonetheless, risk-based prioritisation at least offers a framework for comparing and forecasting issues (Brookes et al., 2014) and for formally considering the strength of evidence for each (Garnett et al., 2016).
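A likelihood-consequence screen of this kind can be sketched in a few lines. The issue names and ordinal scores below are hypothetical, and a real exercise would weight criteria and record evidence strength rather than use a bare product:

```python
# Hypothetical issues scored 1-5 for likelihood and consequence
issues = {
    "ocean warming": (5, 5),
    "emerging snake fungal disease": (2, 4),
    "new invasive mussel": (4, 3),
}

def prioritise(scored_issues):
    """Rank issues by a simple likelihood x consequence product
    (a sketch of risk-based screening, not a full risk analysis)."""
    return sorted(scored_issues.items(),
                  key=lambda item: item[1][0] * item[1][1],
                  reverse=True)

for name, (likelihood, consequence) in prioritise(issues):
    print(f"{name}: {likelihood * consequence}")
```

Note that an ordinal product like this hides the ‘wild card’ case discussed earlier: a low-likelihood, extreme-consequence issue can score the same as a moderate one, which is why such issues are often flagged separately.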

Simply making horizon-scanning outputs known and available to policy-makers can encourage uptake. For example, issues identified in the annual global conservation scans (Box 3.1; Sutherland et al., 2018) have previously helped inform the UK Natural Environment Research Council’s ‘Forward Look’ strategic planning. However, when a decision-maker does not already have a use in mind, it may be unclear what to do with horizon-scanning information without more context and guidance. Detecting signals and potential issues is only the first step towards making a difference: further intelligence about drivers is then needed to make sense of that information. For example, combining available data and modelling on air-traffic movements with disease surveillance data might have helped anticipate the emergence of West Nile virus in the United States in 1999 (Garmendia et al., 2001; Brookes et al., 2014). It is the combination of horizon scanning, intelligence analysis (which provides context for the scanning output) and forecasting the chances of events unfolding that is particularly helpful in translating scanning outputs for policy-making. This can be embedded in a workflow, parts of which can be automated, such as compiling the context, narrative and structure into a digestible report on an important emerging issue (e.g. Box 3.3). Where forecasting and open-analysis communities are already in place, this workflow can be delivered efficiently (Grossel et al., 2017).

Horizon scanning within organisations is evolving into a more effective tool than it was in its infancy. To facilitate the spread of best practice and reduce duplication, the UK has seen greater integration of horizon-scanning activities between government departments, mainly in response to the Day Review (2013). The review recommended that horizon scans: (i) look beyond short-term agendas and parliamentary terms; (ii) focus on specific areas rather than broad topics, in order to gain more traction; (iii) are championed by those who use them in strategic decision-making; (iv) produce shorter outputs that are more likely to get the attention of senior decision-makers; and (v) draw on inputs and existing analyses sourced from a ‘wide range of external institutions, academia, industry specialists and foreign governments’. The extent to which all these recommendations have been implemented is unclear, but they represent a clear set of guidelines to follow.

A range of other frameworks can be used for translating scanning outputs, including roadmapping the steps towards acting on different horizon-scanning issues, for example by assessing feasibility and estimating how long it would take to develop the technologies needed to address particular research gaps (Box 3.2; Kennicutt et al., 2015). The Antarctic science scan and roadmap has since been used to set National Antarctic Program goals, judge the effectiveness and relevance of past investments, and guide the investment of other national programmes (National Academies of Sciences, Engineering, and Medicine, 2015; www.nsf.gov/funding/pgm_summ.jsp?pims_id=505320&org=OPP&from=home).

3.4 Future directions

We have discussed some of the pros and cons of different approaches to horizon scanning. If using a manual approach, structured methods are essential for mitigating the social and psychological biases to which human horizon scanners are prone, especially when forecasting complex and uncertain futures (Hanea et al., 2017). Although historically it has been criticised for confusing opinion with systematic prediction (Sackman, 1975), an iterative Delphi-style approach offers the advantage of drawing on the collective wisdom of a group while affording individuals the opportunity to give private, anonymous judgements and revise them in light of information and reasoning provided by others. Compared with other elicitation approaches, such as traditional meetings, the Delphi method has also been found to improve forecasts and group judgements (Rowe & Wright, 2001). Manual approaches could be further improved by making the search for issues more systematic: semi-automated tools and artificial intelligence (AI) will increasingly enable searches uninfluenced by the biases of the manual searcher. For example, the Dutch ‘Metafore’ horizon-scanning approach (De Spiegeleire et al., 2016), developed at The Hague Centre for Strategic Studies, already uses automated approaches to systematically collect, parse, visualise and analyse a large ‘futures’ database to complement manual scanning.

Future horizon scanning and intelligence gathering may also see more open-analysis, ‘citizen science’ tools being adopted. While organisations are increasingly scanning open-source material (including news and social media), analyses typically remain internal (Grossel et al., 2017). This means the analyses are generally not available to external users in an unfiltered form or in a timely way, which is particularly important for risks such as disease spread. Governments may opt for confidentiality for both security and political reasons. For instance, negative public perceptions about a suspected emerging herpes virus in oysters might affect trade, which might delay the disclosure of this information by authorities, in turn delaying risk-mitigation actions (Grossel et al., 2017). Intelligence tools (e.g. Box 3.3) that draw on a community of users to openly analyse news and information on potentially emerging issues offer more timely and transparent synthesis of information, encouraging more responsive decision-making. Examples can be seen in citizen science, where volunteers have helped analyse satellite-based information in the wake of natural disasters so that emergency responders can rapidly assess the damage (Yore, 2017). In conservation science, involving a broader community of people in a participatory process such as open-analysis may also increase public support for science and the environment (Dickinson & Bonney, 2012). More open-source and open-analysis scanning tools in the future will also likely be complemented by better information visualisation and GIS (e.g. maps that indicate where a relevant incident has taken place) (Dickinson et al., 2012), not only for identifying novel issues and monitoring issues that are already emerging, but also for locating and efficiently communicating this information.

Advanced text analytics, including text mining, will also provide a more comprehensive and systematic approach to future horizon scans. Indeed, some horizon-scanning centres, such as Singapore’s Risk Assessment and Horizon Scanning programme, already use sentiment analysis – a way of computationally categorising subjective opinions expressed in text (e.g. positive, negative or neutral) – to uncover themes in content retrieved by their analysts. Even more sophisticated text analytics are becoming available, for example to explore areas of disagreement, conflict or debate in the text of scientific literature, helping to track developments in science and technology (Babko-Malaya et al., 2013). They can also be used to detect language expressing excitement about a new idea, and other indicators of emergence, such as the increasing use of acronyms and abbreviations signalling that the scientific community is beginning to accept a technology or idea as established (Reardon, 2014). Through automation, new computational tools have the capacity to process massive volumes of papers and patents to anticipate which developments will have the biggest impact in the future (Murdick, 2015). These advances in text analytics have recently led to the development of a particularly powerful open-source AI tool, Meta (https://meta.org/), to help biomedical scientists and funders connect emerging research areas and potential collaborators and inform investment. Owing to the complexity of emerging issues (a difficult environment for machines to learn in), progress towards detecting issues effectively through AI is slow. Computers may never outperform humans at natural language understanding, but steady improvements in the technology, coupled with the speed at which computers can process text in a range of languages, will undoubtedly add value to horizon scanning in the future.
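At its simplest, sentiment analysis of the kind described above can be lexicon-based: count positive and negative cue words and label the text accordingly. The word lists below are invented for illustration; production systems use far larger lexicons or trained classifiers:

```python
import re

# Illustrative cue-word lexicons (real systems use far larger ones)
POSITIVE = {"promising", "breakthrough", "exciting", "opportunity"}
NEGATIVE = {"threat", "alarming", "decline", "outbreak"}

def sentiment(text):
    """Label text positive, negative or neutral by cue-word counts."""
    words = re.findall(r"[a-z]+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("A promising breakthrough for conservation"))
```

Applied across a stream of scanned articles, even this crude labelling can surface shifts in tone around a topic, a weak signal of the kind horizon scanners look for.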

3.5 Acknowledgements

BCW was supported by the Centre for the Study of Existential Risk at the University of Cambridge and funded by the Templeton World Charity Foundation (TWCF). WJS is funded by Arcadia. Geoff Grossel provided useful insights into online scanning methods and applications.

References

Amanatidou, E., Butter, M., Carabias, V., et al. 2012. On concepts and methods in horizon scanning: lessons from initiating policy dialogues on emerging issues. Science & Public Policy, 39, 208–221.
Babko-Malaya, O., Meyers, A., Pustejovsky, J., et al. 2013. Modeling debate within a scientific community. In 2013 International Conference on Social Intelligence and Technology (pp. 57–63). New York, NY: IEEE.
Brookes, V. J., Hernandez-Jover, M., Black, P. F., et al. 2014. Preparedness for emerging infectious diseases: pathways from anticipation to action. Epidemiology and Infection, 143, 2043–2058.
Burgman, M. 2015a. Governance for effective policy-relevant scientific research: the shared governance model. Asia & the Pacific Policy Studies, 2, 441–451.
Burgman, M. A. 2015b. Trusting Judgements: How to Get the Best out of Experts. Cambridge: Cambridge University Press.
Cook, C. N., Inayatullah, S., Burgman, M. A., et al. 2014a. Strategic foresight: how planning for the unpredictable can improve environmental decision-making. Trends in Ecology & Evolution, 29, 531–541.
Cook, C. N., Wintle, B. C., Aldrich, S. C., et al. 2014b. Using strategic foresight to assess conservation opportunity. Conservation Biology, 28, 1474–1483.
Day, J. 2013. Review of Cross-government Horizon Scanning. London: Cabinet Office.
De Spiegeleire, S., van Duijne, F. & Chivot, E. 2016. Towards Foresight 3.0: the HCSS Metafore Approach – a multilingual approach for exploring global foresights. In Daim, T. U., Chiavetta, D., Porter, A. L. & Saritas, O., editors, Anticipating Future Innovation Pathways Through Large Data Analysis (pp. 99–117). Cham: Springer International Publishing.
Delaney, K. & Osborne, L. 2013. Public sector horizon scanning: stocktake of the Australasian Joint Agencies Scanning Network. Journal of Futures Studies, 17, 55–70.
Dickinson, J. L. & Bonney, R. 2012. Citizen Science: Public Collaboration in Environmental Research. Ithaca, NY: Cornell University Press.
Dickinson, J. L., Shirk, J., Bonter, D., et al. 2012. The current state of citizen science as a tool for ecological research and public engagement. Frontiers in Ecology and the Environment, 10, 291–297.
Dudley, N., Hockings, M., Stolton, S., et al. 2018. Priorities for protected area research. Parks, 24, 35–50.
Esmail, N., Wintle, B. C., Sas-Rolfes, M., et al. 2019. Emerging illegal wildlife trade issues in 2018: a global horizon scan. SocArXiv, 25 April. https://doi.org/10.31235/osf.io/b5azx
Garmendia, A. E., Van Kruiningen, H. J. & French, R. A. 2001. The West Nile virus: its recent emergence in North America. Microbes and Infection, 3, 223–230.
Garnett, K., Lickorish, F. A., Rocks, S. A., et al. 2016. Integrating horizon scanning and strategic risk prioritisation using a weight of evidence framework to inform policy decisions. Science of the Total Environment, 560–561, 82–91.
Glenn, J. C. & Gordon, T. J., editors. 2009. Futures Research Methodology – Version 3.0. The Millennium Project.
Government Office for Science. 2017. The Futures Toolkit, Edition 1.0.
Grossel, G., Lyon, A. & Nunn, M. 2017. Open-source intelligence gathering and open-analysis intelligence for biosecurity. In Robinson, A. P., Burgman, M. A., Nunn, M. & Walshe, T., editors, Invasive Species: Risk Assessment and Management (pp. 84–92). Cambridge: Cambridge University Press.
Hanea, A. M., McBride, M., Burgman, M. A., et al. 2017. Investigate Discuss Estimate Aggregate for structured expert judgement. International Journal of Forecasting, 33, 267–279.
Hartley, D. M., Nelson, N. P., Arthur, R. R., et al. 2013. An overview of Internet biosurveillance. Clinical Microbiology and Infection, 19, 1006–1013.
Hines, A., Bengston, D. N., Dockry, M. J., et al. 2018. Setting up a horizon scanning system: a U.S. federal agency example. World Futures Review, 10, 136–151.
Inayatullah, S. 2013. Futures studies: theories and methods. In Gutierrez Junquera, F., editor, There’s a Future: Visions for a Better World (pp. 36–66). Madrid: Banco Bilbao Vizcaya Argentaria Open Mind.
Kark, S., Sutherland, W. J., Shanas, U., et al. 2016. Priority questions and horizon scanning for conservation: a comparative study. PLOS ONE, 11, e0145978.
Kennicutt, M. C., Chown, S. J., Cassano, J. J., et al. 2014. Polar research: six priorities for Antarctic science. Nature, 512, 23–25.
Kennicutt, M. C., Chown, S. J., Cassano, J. J., et al. 2015. A roadmap for Antarctic and Southern Ocean science for the next two decades and beyond. Antarctic Science, 27, 3–18.
Kluberg, S., Mekaru, S., McIver, D., et al. 2016. Global capacity for emerging infectious disease detection: 1996–2014. Emerging Infectious Diseases, 22(10), e151956.
Könnölä, T., Salo, A., Cagnin, C., et al. 2012. Facing the future: scanning, synthesizing and sense-making in horizon scanning. Science and Public Policy, 39, 222–231.
Lesley, M., Floyd, J. & Oermann, M. 2002. Use of MindMapper software for research domain mapping. Computers Informatics Nursing, 20, 229–235.
Linstone, H. A. & Turoff, M. 1975. The Delphi Method: Techniques and Applications. Reading, MA: Addison-Wesley.
Lyon, A. 2010. Review of online systems for biosecurity intelligence-gathering and analysis. ACERA Project 1003.
Mukherjee, N., Huge, J., Sutherland, W. J., et al. 2015. The Delphi technique in ecology and biological conservation: applications and guidelines. Methods in Ecology and Evolution, 6, 1097–1109.
Murdick, D. 2015. Foresight and Understanding from Scientific Exposition (FUSE): predicting technical emergence from scientific and patent literature. US Office of the Director of National Intelligence, www.iarpa.gov/images/files/programs/fuse/04-FUSE.pdf.
National Academies of Sciences, Engineering, and Medicine. 2015. A Strategic Vision for NSF Investment in Antarctic and Southern Ocean Research. Washington, DC: National Academies Press.
Page, S. E. 2008. The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton, NJ: Princeton University Press.
Palomino, M. A., Bardsley, S., Bown, K., et al. 2012. Web-based horizon scanning: concepts and practice. Foresight, 14, 355–373.
Parker, M., Acland, A., Armstrong, H. J., et al. 2014. Identifying the science and technology dimensions of emerging public policy issues through horizon scanning. PLoS ONE, 9, e96480.
Policy Horizons Canada. 2011. Leading the Pack or Lagging Behind: A Foresight Study on Environmental Sustainability and Competitiveness. Government of Canada.
Reardon, S. 2014. Text-mining offers clues to success: US intelligence programme analyses language in patents and papers to identify next big technologies. Nature News, 509, 410.
Reed, M. S., Graves, A., Dandy, N., et al. 2009. Who’s in and why? A typology of stakeholder analysis methods for natural resource management. Journal of Environmental Management, 90, 1933–1949.
Rowe, E., Wright, G. & Derbyshire, J. 2017. Enhancing horizon scanning by utilizing pre-developed scenarios: analysis of current practice and specification of a process improvement to aid the identification of important ‘weak signals’. Technological Forecasting & Social Change, 125, 224–235.
Rowe, G. & Wright, G. 2001. Expert opinions in forecasting: role of the Delphi technique. In Armstrong, J. S., editor, Principles of Forecasting: A Handbook for Researchers and Practitioners (pp. 125–144). Norwell, MA: Kluwer Academic Publishers.
Roy, H. E., Peyton, J., Aldridge, D. C., et al. 2014. Horizon scanning for invasive alien species with the potential to threaten biodiversity in Great Britain. Global Change Biology, 20, 3859–3871.
Sackman, H. 1975. Delphi Critique: Expert Opinion, Forecasting, and Group Process. Lexington, MA: Lexington Books.
Salathé, M., Bengtsson, M., Bodnar, T. J., et al. 2012. Digital epidemiology. PLoS Computational Biology, 8, e1002616.
Saritas, O. & Miles, I. 2012. Scan-4-Light: a Searchlight function horizon scanning and trend monitoring project. Foresight, 14, 489–510.
Schultz, W. L. 2006. The cultural contradictions of managing change: using horizon scanning in an evidence-based policy context. Foresight, 8, 3–12.
Sutherland, W. J., Adams, W. M., Aronson, R. B., et al. 2009. One hundred questions of importance to the conservation of global biological diversity. Conservation Biology, 23, 557–567.
Sutherland, W. J., Allison, H., Aveling, R., et al. 2012. Enhancing the value of horizon scanning through collaborative review. Oryx, 46, 368–374.
Sutherland, W. J., Bailey, M. J., Bainbridge, I. P., et al. 2008. Future novel threats and opportunities facing UK biodiversity identified by horizon scanning. Journal of Applied Ecology, 45, 821–833.
Sutherland, W. J. & Burgman, M. 2015. Policy advice: use experts wisely. Nature, 526, 317–318.
Sutherland, W. J., Butchart, S. H. M., Connor, B., et al. 2018. A 2018 horizon scan of emerging issues for global conservation and biological diversity. Trends in Ecology and Evolution, 33, 47–58.
Sutherland, W. J., Clout, M., Côté, I. M., et al. 2010. A horizon scan of global conservation issues for 2010. Trends in Ecology and Evolution, 25, 1–7.
Sutherland, W. J., Fleishman, E., Clout, M., et al. 2019. Ten years on: a review of the first global conservation horizon scan. Trends in Ecology and Evolution, 34, 139–153.
Sutherland, W. J., Fleishman, E., Mascia, M. B., et al. 2011. Methods for collaboratively identifying research priorities and emerging issues in science and policy. Methods in Ecology and Evolution, 2, 238–247.
Sutherland, W. J. & Woodroof, H. J. 2009. The need for environmental horizon scanning. Trends in Ecology & Evolution, 24, 523–527.
van Rij, V. 2010. Joint horizon scanning: identifying common strategic choices and questions for knowledge. Science and Public Policy, 37, 7–18.
Wintle, B. C., Boehm, C. R., Rhodes, C., et al. 2017. A transatlantic perspective on 20 emerging issues in biological engineering. eLife, 6, e30247.
World Economic Forum. 2019. The Global Risks Report 2019: 14th Edition. Geneva: WEF.
Yore, R. 2017. Here’s how citizen scientists assisted with the disaster response in the Caribbean. The Conversation, 18 October 2017.

Figure 3.1 General framework for horizon scanning, reflecting the key steps in the procedure (ovals), inputs and products (rounded rectangles), key outputs (rectangles), actors and end users (triangles), and activities and methods (floating text).

Process adapted from Amanatidou et al. (2012).

Table 3.1 Approaches to horizon scanning (some activities and examples overlap)


Figure 3.2 The Delphi-style horizon-scanning approach often used in conservation (Sutherland et al., 2011).

Figure reproduced from Wintle et al. (2017), published under the Creative Commons Attribution 4.0 Licence.
