This chapter reviews the evidence behind the anti-misinformation interventions that have been designed and tested since misinformation research exploded in popularity around 2016. It focuses on four types of intervention: boosting skills or competences (media/digital literacy, critical thinking, and prebunking); nudging people by making changes to social media platforms’ choice architecture; debunking misinformation through fact-checking; and (automated) content labelling. These interventions have one of three goals: to improve relevant skills such as spotting manipulation techniques, evaluating sources, or reading laterally (in the case of boosting interventions and some content labels); to change people’s behavior, most commonly by improving the quality of their sharing decisions (for nudges and most content labels); or to reduce misperceptions and misbeliefs (in the case of debunking). While many such interventions have been shown to work well in lab studies, there continues to be an evidence gap with respect to their effectiveness over time and in real-life settings (such as on social media).
Most people who regularly use the Internet will be familiar with words like “misinformation,” “fake news,” “disinformation,” and maybe even “malinformation.” It can appear as though these terms are used interchangeably, and they often are. However, they don’t always refer to the same types of content, and just because a news story or social media post is false doesn’t always mean it’s problematic. To add to the confusion, not all misinformation researchers agree on the definition of the problem, or employ a unified terminology. This chapter discusses the terminology around misinformation, guided by illustrative examples of problematic news content. It also looks at what misinformation isn’t: what makes a piece of information “real” or “true”? Finally, we explore how researchers have defined misinformation and how these definitions can be categorized, before presenting the working definition that is used throughout this book.
This chapter discusses how governments and supranational institutions have tried to tackle misinformation through laws and regulations. Some countries have adopted new legislation making the spread or creation of misinformation illegal; this has often been met with criticism from human rights organizations, for instance on the grounds that governments cannot act as neutral arbiters of truth. The UK and EU have adopted expansive regulatory frameworks that cover not only misinformation but the online information space in its entirety. The United States is generally wary of any new legislation that imposes limits on speech, and doesn’t currently have legislative initiatives as broad in scope as those in the UK and EU. Instead, some entities in the US have invested in communications campaigns about mis- and disinformation; these are aimed at individuals (rather than companies or misinformation producers), and their effectiveness is evaluated in a very different way.
This chapter discusses the ups and downs of the authors’ program of research on misinformation, which has involved creating “fake news” games and videos to reduce susceptibility to various common types of manipulation. Despite some successes, their work comes with substantial nuances: limited cross-cultural generalizability; in some cases an unintentional effect of game-based interventions on people’s evaluation of “real news” or non-misinformation; a negligible impact of such interventions on people’s social media sharing behavior; and, perhaps most importantly, the (potentially rapid) disappearance of the intervention effect over time alongside the presence of testing effects, whereby actively rehearsing the lessons from the interventions (e.g., how to spot a particular manipulation technique) appears to be important for their longevity and effectiveness. These limitations have substantial implications for anyone who wants to design (and test) an intervention to counter misinformation.
Examples of misinformation having real-world consequences aren’t hard to find, and misinformation has been a hugely important topic of discussion in popular media, among politicians, and in scientific research. This chapter discusses whether misinformation poses a problem for society: for example, by damaging the democratic process or fostering potentially problematic health behaviors. We discuss competing perspectives in detail: scientists disagree over the extent to which misinformation can be harmful at a societal level, and what definition of “misinformation” is used matters a lot in determining the scope of the problem. The chapter concludes with a case study: information warfare in the present-day Russian–Ukrainian war.
Misinformation has only recently seen a surge in research interest and public attention, but the concept itself is much older. Not only have humans manipulated and lied to each other since the dawn of language, but animals are also known to use manipulation to achieve certain goals. This chapter provides a historical overview of misinformation. It first looks at some of its possible evolutionary origins, before tracing how false information has been used as a tool of persuasion throughout history, and discussing the role of technological innovations such as the printing press and mass communications. Finally, we look at the recent advent of the internet era, and what role misinformation plays in society today.
This chapter discusses the individual-level and societal-level factors that underlie why people believe and share misinformation, including analytical and open-minded thinking, political partisanship, trust, political and affective polarization, psychological appeal, repetition, emotion, and intergroup sentiment. We look at misinformation belief and sharing separately: just because someone believes a false claim to be true doesn’t mean they’ll share it on social media, and someone sharing a piece of misinformation doesn’t always mean they also believe it.
The virus that causes COVID-19 came from outer space. King Charles III of Britain is Count Dracula’s cousin. John Kellogg invented cornflakes to prevent people from having impure sexual thoughts. The US military hides information about UFOs from the public. The average person accidentally swallows about eight spiders per year in their sleep. While each claim may sound absurd, some of these statements are true, and some are false. The prologue to this book introduces the topic of misinformation and some of its complexities, explains the rationale behind why this book was written and who it’s intended for, and lays out the chapter structure.
This chapter looks at whether and how the consumption and sharing of (mis)information are shaped by the environments we use to communicate. Most people have heard of terms such as “echo chambers” and “filter bubbles,” but what are they exactly? And how prevalent and problematic are they in our internet-addled times? Whether echo chambers and filter bubbles have contributed to the spread and prevalence of misinformation is far from obvious. This debate is ongoing, and where you land in it may affect what solutions you believe are required. This chapter offers a balanced perspective, discussing the arguments for and against the notion that echo chambers have serious implications for the spread of misinformation, polarization, and democracy.