
3 - The Flooded Zone

How We Became More Vulnerable to Disinformation in the Digital Era

from Part II - The Current Situation

Published online by Cambridge University Press: 06 October 2020

W. Lance Bennett, University of Washington
Steven Livingston, George Washington University, Washington DC

Summary

Starr describes how we became so vulnerable to disinformation in the digital era. He argues that, just as analyses of democratization have turned in recent years to the reverse processes of democratic backsliding and breakdown, analyses of contemporary communication need to attend to the related processes of backsliding and breakdown in the media – or what he refers to as “media degradation.” After defining that term in relation to democratic theory, Starr focuses on three developments that have contributed to the increased vulnerability to disinformation: 1) the attrition of journalistic capacities; 2) the degradation of standards in both the viral and broadcast streams of the new media ecology; and 3) the rising power of digital platforms, which have incentives to prioritize growth and profits and no legal accountability for user-generated content. Neoliberal policies of limited government and reduced regulation of business, along with partisan politics, contributed to these developments; although demands for regulation are growing, it remains uncertain whether government can act effectively.

Type: Chapter
Information: The Disinformation Age: Politics, Technology, and Disruptive Communication in the United States, pp. 67–92
Publisher: Cambridge University Press
Print publication year: 2020
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY 4.0 https://creativecommons.org/cclicenses/

As the twenty-first century began, the digital revolution seemingly validated two general ideas about the contemporary world. The first was the era’s dominant ideological preference for a reduced role for the state. The Internet of the 1990s and early 2000s appeared to be neoliberalism’s greatest triumph; government regulation was minimal, and digital innovation and entrepreneurship were creating new online markets, new wealth, and new bases of empowerment, connection, and community.

The digital revolution also seemed to validate a second idea: an optimistic narrative about technological progress and its political implications. According to that narrative, the new means of communication expanded access to the news, delivered it faster and more reliably, and afforded broader opportunities for free expression and public discussion. Now, with both personal computers and access to the Internet, individuals would have unlimited information at their fingertips, as well as unprecedented computational and communicative power.1 All this would be good for democracy. Celebrants of the digital era saw the new technology as inherently tending to break down centralized power; the further the Internet spread around the world, the more it would advance freedom and threaten dictatorships.2

These early judgments have now come to seem not just premature but downright naïve. But what exactly went wrong? Here, I want to argue that the early understanding of the implications of digital innovation for the news media and democracy fell prey to three errors. First, the prevailing optimism at the century’s turn highlighted what digital innovation would add to the public sphere, hardly imagining that it would subtract anything of true value. The optimistic narrative undervalued the ways in which the predigital public sphere served democratic interests. It assumed, in particular, that the emerging digital economy left to itself would be no less supportive of a free press than the predigital economy.

Second, the optimistic vision failed to appreciate that the new technology’s affordances are a double-edged sword. As should be all too clear now, online communication is capable of spreading disinformation and hatred just as fast and cheaply as reliable information and civil discourse; indeed, virality favors false and emotional messages.3 The opportunities for greater individual choice in sources of news have been double-edged because, when given the chance, people are inclined to seek sources that confirm their preexisting biases and to self-segregate into groups with similar views, a pattern that much research has shown heightens group polarization.4 The new structure of communication has also created new means of microtargeting disinformation in ways that journalists and others cannot readily monitor, much less try to correct in real time.

Third, like generals still fighting the last war, the digital visionaries who saw the new technology as breaking down established forms of centralized power were blind to the new possibilities for monopoly, surveillance, and control. They mistakenly believed that the particular form the Internet had taken during the 1990s was inherent in the technology and therefore permanent, when it was, in fact, contingent on constitutive choices about the Internet’s development and open to forces that could fundamentally change its character. In a different era, the Internet would have developed differently. But in the United States, which dominated critical decisions about the technology, government regulation and antitrust enforcement as well as public ownership were all in retreat, and these features of neoliberal policy allowed the emergence of platform monopolies whose business models and algorithms helped propagate disinformation.

The digital revolution has made possible valuable new techniques of reporting and analysis, such as video journalism and data journalism, as well as greater engagement of the public in both originating and responding to news. But there is no denying the seriousness of the problems that have emerged. Just as studies of democratization have had to focus on the reverse processes of democratic backsliding and breakdown, so we need to attend to the related processes of backsliding and breakdown in the development of the media.5 I use the term “degradation” to refer to those backsliding processes. In telecommunications engineering, degradation refers to the loss of quality of an electronic signal (as it travels over a distance, for example); by analogy, media degradation is a loss of quality in news and public debate.

To be sure, the meaning of quality is more ambiguous and contestable for news and debate than for an electronic signal. But it ought to be uncontroversial to say that the quality of the news media, from a democratic standpoint, depends on two criteria: the provision of trustworthy information and robust debate about matters of public concern. The first, trustworthy information, depends in turn on the capacities of the media to produce and disseminate news and on the commitment to truth-seeking norms and procedures – that is, both the resources and the will to search out the truth and to separate facts from falsehoods in order to enable the public to hold both government and powerful private institutions to account. The second criterion, robust debate, requires not only individual rights of free speech but also institutions and systems of communications that afford the public access to a variety of perspectives.

Media degradation can take the form of a decline in any of these dimensions. In contemporary America, that decline has taken the form of a degradation in the capacities of professional journalism and a degradation of standards in online media, particularly the insular media ecosystem that has emerged on the far right. Social media, rather than encouraging productive debate, have amplified sensationalism, conspiracy theories, and polarization. In a degraded media environment, many people don’t know what to believe, a condition ripe for political exploitation. In early 2018, Steve Bannon, the former executive chairman of Breitbart News and Donald Trump’s former chief strategist, gave a concise explanation of how to exploit confusion and distrust: the way to deal with the media, he said, is “to flood the zone with shit.”6 That not only sums up the logic of Trump’s use of lies and distraction; it also describes the logic of disinformation efforts aimed at sowing doubts about science and democracy, as in industry-driven controversies over global warming and in Russian uses of social media to influence elections in western Europe as well as the United States. “Flooding” the media with government propaganda to distract from unfavorable information is also one of the primary techniques the Chinese regime currently uses to manage discontent.7

In the past, the mass media were not immune from analogous problems; the “merchants of doubt” in the tobacco and oil and gas industries also deliberately flooded the zone.8 But the new structure of the media has greatly reduced the capacity of professional journalists to act as a countervailing influence and to interdict and correct falsehood. How journalism lost its power and authority, how the new media environment helped undermine standards of truth seeking, and how the great social media platforms came to aid and abet the propagation of hatred and lies – these are all critical parts of the story of the new age of disinformation.

The Attrition of Journalistic Capacities

The optimistic narrative of the digital revolution is a story of disruptive yet ultimately beneficial innovation. As improved ways of producing goods and services replace old ones, new enterprises are born while obsolete methods and legacy organizations die out. This kind of “creative destruction” has certainly happened in many industries, including in some segments of the media such as music and video. But no historical law ensures that every such transformation will be more creative than destructive from the standpoint of liberal democratic values, especially where the market alone cannot be expected to produce a public good at anything like an optimal level.

News about public issues is a public good in two senses. It is a public good in the political sense because it is necessary for democracy to work, and it is a public good in the strict economic meaning of the term because it has two features that distinguish it from private goods: it is non-rival (my “consumption” of news, unlike ice cream, does not prevent you from “consuming” it too), and it is non-excludable (even if provided initially only to those who pay, news usually cannot be kept from spreading). These characteristics enable many people to get news without paying for it and prevent the producers of news from capturing a return from all who receive it. As a result, market forces alone will tend to underproduce it, even in strictly economic terms.

Historically, there have been three general solutions to the problem of news being underproduced in the market. The first solution consists of selective subsidies – that is, subsidies to specific media outlets. Such subsidies have come from governments, political parties, groups in civil society, and powerful patrons typically interested in promoting their own views, and consequently have afforded news organizations little independence. The second type of solution consists of general non-selective media subsidies that are more compatible with editorial autonomy: below-cost postal rates for all newspapers and other publications regardless of viewpoint; tax exemptions applicable to all media outlets; and governmental and philanthropic funds for independent, public-service broadcasting.

In its early history, the United States used both selective subsidies (mainly through government printing contracts for party newspapers) and non-selective subsidies (through the Post Office) to support the development of the press. But since the late nineteenth century, America has had an almost entirely commercial model for the news media in which the financing for high-quality journalism has come via the third method for supporting news that would otherwise be underproduced – cross-subsidies. The various sections of a newspaper, from the classified ads to the sports and business pages and political news, were akin to different lines of business; the profitable lines cross-subsidized the reporting on public issues that might not have been justified from a narrower view of return on investment. During the second half of the twentieth century, the newspaper business was also highly profitable; the consolidation of the industry in metropolitan areas left advertisers with few alternatives to reach potential consumers and gave the surviving papers considerable pricing power in advertising rates. With 80 percent of their revenue typically coming from advertising and only 20 percent from subscriptions and newsstand sales, newspapers could pay for most of the original reporting in a community (radio and television news were distinctly secondary), while generating healthy profit margins.9

By undercutting the position of newspapers and other news media as intermediaries between advertisers and consumers, the Internet has destroyed the cross-subsidy system, along with the whole business model on which American journalism developed. Advertisers no longer need to support news enterprises in order to reach consumers. With the development of Craigslist, eBay, and other sites, the classified ads that had been a cash cow for newspapers disappeared. The Internet also disaggregated the various types of news (sports, business, and so on) that newspapers had assembled, allowing readers to go to specialized news sites instead of buying their local paper. Today most online advertising revenue goes to companies that produce no content at all; in 2017 Facebook and Google alone took 63 percent of digital advertising revenue.10

The capture of digital advertising revenue by the big platform monopolies helps explain why the digital revolution has not led to a growth in online news that could have offset the decline in legacy media. Journalism now depends far more on generating revenue from readers than it did in the past, but many of those readers see no reason to pay, since alternative sources of online news continue to be available for free. At the top of the market, a few national news organizations such as the New York Times and Washington Post have instituted paywalls and appear on their way to a successful digital transition as their aging print readership dwindles; subscriptions can also sustain specialized news sites, particularly for business and finance. But regional and community newspapers have sharply contracted and show no signs of revival. Although digital news sites have developed – some of them on a nonprofit basis – they have not come close to replacing what has been lost in reporting capacities, much less in readership. Despite the scale of decline in local journalism, most Americans seem to be unaware of a problem. According to a survey by the Pew Research Center in 2018, 71 percent think their local news media are doing well financially; only 14 percent, however, have paid for local news in any form.11

The decline in employment in news organizations gives a sense of the scale of lost reporting capacities. According to data from the US Bureau of Labor Statistics, total employment in both daily and weekly newspapers declined by 62 percent from 1990 to 2017, from 455,000 to 173,900.12 Those numbers include not only reporters and editors but also salespeople, secretaries, and others. A more narrowly defined measure – reporters and editors at daily newspapers – shows a decline over the same period of 42 percent, from 56,900 to 32,900, according to an annual survey of newsrooms by the American Society of Newspaper Editors.13 Broader measures that include digital news organizations are available only for the more recent period. From 2008 to 2017, according to a Pew Research Center analysis of data from the Bureau of Labor Statistics, the number of editors, reporters, photographers, and videographers employed by news organizations of all kinds fell from about 114,000 to 88,000, a decline of 23 percent. Newspapers, which cut newsrooms by 45 percent over that period, accounted for nearly all the decline.14
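
The percentage declines cited above follow directly from the underlying counts. A minimal Python check, using only the figures quoted in this paragraph:

```python
# Recompute the percentage declines cited in the text from the quoted counts.
figures = {
    "All newspaper employees (BLS), 1990-2017": (455_000, 173_900),
    "Reporters and editors at dailies (ASNE), 1990-2017": (56_900, 32_900),
    "Newsroom employees, all news organizations (Pew/BLS), 2008-2017": (114_000, 88_000),
}

for label, (start, end) in figures.items():
    decline = (start - end) / start
    print(f"{label}: {start:,} -> {end:,} ({decline:.0%} decline)")
# Prints roughly 62%, 42%, and 23%, matching the figures in the text.
```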

The geography of journalism has also changed. While internet-related publishing jobs have grown on the coasts, journalism in the heartland has shrunk. By 2016, 72 percent of journalists worked in counties won by Hillary Clinton, while newspapers underwent the greatest decline in areas won by Trump.15 As a result of the overall contraction and geographical shift, the United States has been left with an increasing number of “news deserts,” communities without any local newspaper. About 20 percent of newspapers have closed since 2004, while many of the survivors have become ad shoppers with hardly any original news: “newspapers in name only” (NINOs), as one analyst calls them.16 The people who live in the news deserts and communities with NINOs may be especially dependent on the news they receive via social media.

The decline of newspapers has not only brought a falloff in reporting and investigating throughout much of the United States; financially weakened news organizations are also less capable of maintaining their editorial independence and integrity. This is a real cost to freedom of the press, if one thinks of a free press as being capable of standing up against powerful institutions of all kinds. When news organizations teeter on the edge of insolvency, they are more susceptible to threats of litigation that could put them out of business, and more anxious to curry the favor of such advertisers as they still have. The major professional news organizations used to maintain a strict separation between their editorial and business divisions, but new digital start-ups haven’t adopted that rule and older news organizations no longer defend it as a matter of principle. The adoption of “native advertising” – advertising produced by an in-house unit and made to look nearly indistinguishable from editorial content – is one sign of that change.17

For all its faults, the predigital structure of the public sphere enabled news organizations to thrive while producing critical public goods. That structure had a value for democracy that digital enthusiasts failed to grasp. It allowed for considerable institutional autonomy and professionalism and enabled journalists to limit the spread of rumors and lies. But with new technological and institutional developments, those checks on the degradation of standards have collapsed.

The Degradation of Standards

To the celebrants of digital democracy, the downfall of the public sphere’s gatekeepers counted as one of the chief benefits of the Internet. Speech would no longer need the permission of the great media corporations, their owners or publishers, editors or reporters, programming executives or producers. The online world has indeed afforded greater opportunities for the unfiltered expression of individual opinion and the unedited posting of images, videos, and documents. By the same token, however, the gates have swung wide open to rumors, lies, and increasingly sophisticated forms of propaganda, fraud, and deception.

News spreads in two ways, from one to one and from one to many. The new media environment has transformed both sets of processes compared to the predigital era. Online networks allow for more rapid and extensive viral spread from one person to another than the old word-of-mouth did. The new technology has also lowered the barriers to entry for one-to-many communication – “broadcasting” in the general sense of that term.

Broadcasts now include dissemination not only by mass media with high capital costs but also by lower-budget websites, aggregators, and sources on social media with large numbers of followers. Among those sources are individual social media stars (“influencers”), who can broadcast news and opinion, unrestrained by traditional gatekeepers or journalistic norms. For example, the alt-right gamer PewDiePie (Felix Kjellberg) has nearly 96 million subscribers on YouTube. Trump accumulated millions of followers largely on the basis of his reality-TV show before he became a political candidate. The online world is also populated by bots, trolls, and fake-news sites, and it is subject to strategies for gaming searches and other means of both microtargeting messages and shaping what diffuses fastest and farthest.

The one-to-one and one-to-many streams have never been entirely separate; varying combinations of the two always determine the full pattern of communication. In this respect, every media system is a hybrid. In the classic model of the mass media from the 1940s, the sociologist Paul Lazarsfeld posited a “two-step flow” from the mass media to local opinion leaders, and from those opinion leaders to others in their community.18 Lazarsfeld didn’t consider a prior step: how the news reached the mass media. In the new media environment, the flow of communication may have a long series of traceable steps, leading up to and away from broadcasters of all types, with total diffusion depending on the branching structure of cascades. A study of one billion news stories, videos, and other content on Twitter finds a great deal of structural diversity in diffusion, but “popularity is largely driven by the size of the largest broadcast” rather than by viral spread.19 In short, while the spread of disinformation depends on both virality and broadcasting, the preponderant factor is still likely to be the behavior of broadcasters – not just legacy news organizations but also new digital media, individual social media influencers (including political leaders), and other sources with wide reach.
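
The broadcast-versus-viral distinction in that study can be illustrated with a toy computation. The sketch below is not the study's data or method; it assumes the networkx library and two artificial cascades of equal size, and computes one common measure of "structural virality" (the mean pairwise distance among nodes in a diffusion tree), which is low when a single broadcast accounts for most of the spread and high when content passes from peer to peer.

```python
# Toy illustration only: two artificial diffusion trees of the same size.
# "Structural virality" here is the mean shortest-path distance between all
# pairs of nodes in the cascade tree: a pure broadcast (star) scores near 2,
# a pure peer-to-peer chain scores much higher, even at equal reach.
import networkx as nx

N = 1000  # number of accounts reached in each cascade

broadcast_cascade = nx.star_graph(N - 1)  # one hub reshared directly by N-1 accounts
viral_cascade = nx.path_graph(N)          # each account passes the item to one other

for name, tree in [("broadcast (star)", broadcast_cascade), ("viral (chain)", viral_cascade)]:
    sv = nx.average_shortest_path_length(tree)
    print(f"{name}: {tree.number_of_nodes()} nodes, structural virality = {sv:.1f}")
```

On such a measure, the same audience size can be reached through very different structures, which is how researchers can separate how far something spread from how it spread.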

Disinformation flourishes in both the viral and broadcast streams of the new media ecology. Another study of online diffusion using data from Twitter finds that “false stories spread significantly farther, faster and more broadly than did true ones. Falsehoods were 70 percent more likely to be retweeted, even when controlling for the age of the original tweeter’s account, its activity level, the number of its followers and followees, and whether Twitter had verified the account as genuine.” According to this analysis, virality favors falsehood because the false items tend to be more novel and emotional than the true items.20

The new forms of broadcasting have also helped amplify the spread of disinformation. Here it helps to backtrack to the changes in the late twentieth century that led to the emergence – or rather reemergence – of aggressively partisan media outlets.

By the mid-twentieth century, the mass media in the United States no longer had strong connections to political parties, as newspapers had in the nineteenth century before the turn toward advertising as a source of income, and toward professionalism and objectivity as journalistic ideals. American radio and television also developed on a commercial rather than party foundation and, in their news operations, emulated the ideals of print journalism. During television’s early decades, when most areas had only two or three stations, the networks often created a captive audience for the news by scheduling their evening news broadcasts at the same time. In a market with few competitors, the three national television networks – CBS, NBC, and ABC – rationally sought to maximize their advertising income by seeking the widest possible audience, staying close to the political center, and avoiding any partisan identification.

As the number of TV channels increased, however, two things changed. First, people with little interest in politics were free to switch to entertainment shows, while the more politically oriented could watch more news than ever on cable. The news dropouts, according to an estimate by Markus Prior, amounted to about 30 percent of the old TV news audience, while the news addicts represented about 10 percent.21 Other evidence on news consumption in the late twentieth century also suggests rising disparities in exposure to news as older habits of reading the newspaper over breakfast or watching the evening news died out. No longer socialized into those habits by their families, young adults reported lower rates of getting news in any form.22

While viewers with lower political interest dropped out, the audience that remained for news was both more partisan and more polarized. With the increased number of channels, catering to partisans also became a more rational business model for broadcast news, just as it became more profitable on radio and cable TV to specialize in other kinds of niche programming (“narrowcasting”). In 1987 the Federal Communications Commission abandoned the fairness doctrine, which had required broadcasters to offer public affairs programming and a balance of viewpoints. Many radio stations stopped broadcasting even a few minutes of news on the hour, while conservative talk radio led by Rush Limbaugh took off. Ideologically differentiated news channels then developed on cable TV, first with Fox and later with MSNBC. The Internet further strengthened these tendencies toward partisan media since it had no limit on the number of channels, much less any federal regulation requiring balance. These developments created the basis for a new, ideologically structured media environment in which the more politically engaged and more partisan could find news and opinion aligned with their own perspectives, and the less politically engaged could escape exposure to the news entirely.

This new environment, however, has not given rise to the same journalistic practices and patterns of communication on the right and left. The media in the United States now exhibit an asymmetrical structure, as Yochai Benkler, Robert Faris, and Hal Roberts have shown in a study of how news was linked online and shared on social media from 2015 to 2018. On the right, the authors find an insular media ecosystem skewed toward the extreme, where even the leading news organizations (Fox and Breitbart) do not observe norms of truth seeking. But journalistic norms continue to constrain the interconnected network of news organizations that runs from the center-right (e.g., the Wall Street Journal) through the center to the left.23

During the period Benkler and his coauthors studied, falsehoods emerged on both the right and left, but they traveled further on the right because they were amplified by the major broadcasters in the right-wing network. Even after stories were shown to be false, Fox, Breitbart, and other influential right-wing news organizations failed to correct them or to discipline the journalists responsible for spreading them. The much-denounced mainstream media, in contrast, checked one another’s stories, corrected mistakes, and disciplined several journalists responsible for errors. As a result of these differences, the right-wing media ecosystem was fertile ground during the 2016 election for commercial clickbait and both home-grown and Russian disinformation.

What explains the direction taken by the right-wing media ecosystem? In their book Network Propaganda, Benkler and his colleagues do not assume any differences in psychological make-up or receptivity to false news on the right and left. According to their model, people generally consume news both to find out what is going on in the world and to confirm their worldview and identity; consequently, while seeking to become informed, they also don’t want to suffer “cognitive discomfort” from sources that challenge their assumptions. As long as the system is subject to what the authors call a “reality-check dynamic,” the major media outlets follow truth-seeking norms while maintaining a neutral stance to minimize consumers’ discomfort when the reported news contradicts their prior beliefs. The system undergoes a structural change, however, when new media appear that attract a partisan audience by providing identity-confirming news and claiming that other (mainstream) outlets are lying. Politicians thrive in this ecosystem by aligning their rhetoric and positions with the partisan media and their publics. Benkler and his coauthors call this dynamic the “propaganda feedback loop” and argue that it began operating on the right in the early 1990s, with the advent of Limbaugh and Fox News, while the left-of-center public was able to satisfy its thirst for motivated reasoning from the broader, truth-seeking media ecosystem that often contradicted the right’s insular media. According to this interpretation, therefore, it was the sequence of developments (the right’s media innovations coming first in the 1990s) that determined the present pattern.

Conservative beliefs and experience, however, may have been the more decisive factor in the development of hyperpartisan media on the right. Conservatives were already alienated from professional journalism before the 1990s. By the 1970s – amid growing disillusionment with the Vietnam War, the publication of the Pentagon Papers, and the Watergate scandal – many professional journalists became more critical of official pronouncements and adopted a more adversarial posture toward both government and business.24 After playing an important role in the civil rights movement, journalists also often reported sympathetically on other liberalizing cultural shifts. Outraged by these changes in society, conservatives were also outraged by the messengers whose reports on them were often approving. The backlash against racial and cultural change consequently became a backlash against the mainstream media. When the technological and institutional conditions opened up for new right-wing media, sympathetic business interests were ready to underwrite the media outlets, the politicians, and allied groups, setting in motion the forces Benkler and his colleagues describe as the “propaganda feedback loop.” Liberals and progressives, in contrast, were not nearly as disaffected from the mainstream; the far left also did not represent as lucrative a market as the far right to sustain an alternative media ecosystem, nor did it enjoy the same patronage. The lines of division in the media consequently became drawn between the far right and the rest.

Moreover, the divorce of right-wing media from the mainstream of journalism and professional practices of truth seeking is consistent with the general pattern of asymmetric polarization in American politics. According to analyses of changes in Congress, public opinion, and party platforms, Republicans have moved further to the right than Democrats have moved to the left.25 The right is at war with science, the universities, and other knowledge-related institutions, a conflict that Trump’s presidency has brought to the apex of federal power. His repeated statements that the press is “the enemy of the people” are just one aspect of this general epistemic conflict.26 Much of his base is alienated not only from liberalism in the everyday political sense, but more fundamentally from liberal modernity.

The claim that the Internet has given rise to partisan echo chambers and filter bubbles needs to be treated carefully with that larger conflict in mind. The insularity of the right-wing media ecosystem described by Benkler and his coauthors fits the pattern of conservative resistance to the wider culture. The developments in radio and cable TV already reflected the alienation of the right from mainstream media. It is not clear that the advent of the Internet has generally resulted in people being less exposed to contrary views. Indeed, some research suggests that people may encounter more political disagreement in social media than in person, and they find such disagreement extremely stressful and unpleasant. The anger and vitriol in many online exchanges may have increased “negative partisanship,” the level of mutual antagonism between Republicans and Democrats, conservatives and liberals.27

Compared to the patterns in the mid-twentieth century, the news media and their audiences have been reconfigured along political lines. Americans used to receive news and opinion from national media – broadcast networks, wire services, and newsmagazines – that stayed close to the center and generally marginalized radical views on both the right and the left. Now the old gatekeepers have lost that power to regulate and exclude, and news audiences have split. By opening up the public sphere to a broader variety of perspectives, including once-shunned radical positions, the new environment should have advanced democratic interests. But the forms of communication have aggravated polarization and mutual hostility and the spread of disinformation.

While the mass-media gatekeepers no longer have as much power as they once had to interdict falsehood, the digital revolution has given rise to new forms of organization that could perform that function. The most important of these are the corporations that control the platforms on which news and debate travel. That has put the platforms and the people who own and run them at the center of the political conflict over disinformation.

Platform Power and Disinformation

Social media platforms – as the potential checkpoint for disinformation and potential chokepoint for free speech – now occupy the position formerly held by the gatekeepers of the mass media. From their beginnings, however, the companies in control of the platforms have represented themselves only as facilitating speech and access to information. When Larry Page and Sergey Brin founded Google in 1998, they said its mission was “to organize the world’s information and make it universally accessible and useful.”28 Facebook declared that it existed “to give people the power to share and make the world more open and connected.” Twitter’s mission statement was nearly the same: “to give everyone the power to create and share ideas and information instantly, without barriers.”29 In short, unlike the institutions that seek to provide trustworthy knowledge – journalism, science, educational institutions – the social media platforms did not see their role as involving judgment or selection in guarding against error and counteracting those who intentionally spread it.

But contrary to how the companies framed their role and to the early hopes for a radically decentralized digital public sphere, the platforms have accumulated extraordinary power to regulate online communication. The algorithms they use – for example, in Google’s search and YouTube recommendation engine, Facebook’s news feed, and Twitter’s trending topics – determine the content, sources, and viewpoints that gain visibility among different users. The companies also now set rules determining the kinds of speech and images that are allowable on their platforms; which groups, channels, subreddits, or other forms of organization will be permitted or shut down; how individuals will be identified and whether their identities will be verified; and how aggressively, if at all, fakes, bots, and trolls will be pursued and eliminated.30 The tools the companies provide for liking, sharing, and commenting influence virality. Their policies determine the standards advertisers must meet on their platforms, whether users can readily distinguish between advertising and content, and whether ads are visible to others besides those targeted to receive them – all questions that have taken on especially wide importance because of the use of social-media advertising in political campaigns.

The major platform companies not only rule their own world; they also now dominate their poor relations in the news business. Besides losing advertising revenue to Facebook and Google, the news media are now at the mercy of changes in the platforms’ algorithms that determine what kinds of content, and therefore what kinds of publishing strategies, succeed or fail.

Despite these considerable powers, the social media giants continue to present themselves as mere facilitators of their users’ speech. Congress did not make them responsible for what users put online. Indeed, federal legislation passed in 1996 freed internet intermediaries from virtually all liability for user-generated content, enabling them to make policy and design choices solely with their own business interests in mind. For the social media platforms, those business interests have revolved around two objectives – achieving massive scale and maximizing advertising income. The two are closely related, and not just because more users mean more eyeballs. The greater the scale of a platform, the greater the network externalities that make it indispensable to users. The greater, too, are the capacities to extract data from users that enable the platforms to develop more advanced systems of artificial intelligence and target advertising more efficiently.

Freed from public accountability for user-generated content and bent on maximizing scale and advertising revenue, the social media platforms until recently had no incentive to invest resources to identify disinformation, much less to block it. They could ignore the accuracy, source, and purpose of ads, as Facebook did during the 2016 election, when it accepted ads placed by Russians (and paid for in rubles), intended to aggravate divisions among Americans and to help Trump win. The platforms’ algorithms, as a recent review of the political science literature explains, also made them vulnerable to disinformation: “Optimized for engagement (number of comments, shares, likes, etc.), they often help in spreading disinformation packaged in emotional news stories with sensational headlines.”31
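
The point about engagement optimization can be made concrete with a schematic ranking function. The sketch below is hypothetical: the post attributes and scoring weights are invented for illustration and are not any platform's actual algorithm. It simply shows that if a feed is ordered purely by predicted engagement, with nothing in the objective rewarding accuracy, sensational and emotionally charged items rise to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    predicted_comments: float
    predicted_shares: float
    predicted_likes: float

def engagement_score(p: Post) -> float:
    # A purely engagement-driven objective: nothing in the score rewards accuracy.
    return 3.0 * p.predicted_comments + 2.0 * p.predicted_shares + 1.0 * p.predicted_likes

feed = [
    Post("City council posts budget minutes", 2, 1, 40),
    Post("SHOCKING: what THEY don't want you to know", 90, 120, 300),
    Post("Fact-check: viral claim about vaccines is false", 15, 10, 80),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.headline}")
```

Real ranking systems are far more complex, but the incentive the quoted review describes is the same: what is predicted to get engagement gets reach.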

Google’s YouTube was a prime example of this pattern. An investigation by the Wall Street Journal in 2018 found that after detecting users’ political biases, YouTube typically recommended videos echoing “those biases, often with more extreme viewpoints,” feeding “far-right or far-left videos to users who watched relatively mainstream news sources, such as Fox News and MSNBC.”32 The impact was likely considerable. According to YouTube, its recommendation algorithm drives more than 70 percent of viewing time, which in late 2016 passed one billion viewing hours a day – close to the total viewing time for all television and growing more quickly. YouTube didn’t intend to prioritize sensationalist conspiracy theories from fringe sources; that result followed from the logic of an algorithm set up to make the site as “sticky” and as profitable as possible.33

How did social media, whose leaders claimed they want to connect the world, come to connect the agents of disinformation so efficiently to their targets? One thing we know: digital technology itself did not dictate this outcome. Like radio broadcasting in the early twentieth century, the Internet could have developed in different ways; the form of a new medium depends critically on the configuration of political forces at key moments of institutional choice. In the Internet’s case, those choices reflected a general turn in the late twentieth century toward neoliberalism, that is, the use of state power to shrink the state and create free markets, on the assumption that unleashing market forces would bring better outcomes than any kind of government regulation.

From World War II through the Cold War, the federal government, chiefly through the Defense Department, had played the central role in financing and guiding the development of electronics, computers, and computer networks, including the forerunners of the Internet.34 But with the retreat of the state from the economy in the late twentieth century came a diminished role in regulating communications, and a greater reliance on the market. The breakup of the Bell telephone system in the early 1980s and the opening of the Internet to commercial development in the early 1990s were milestones in that process. The Internet’s explosive early growth, as I suggested earlier, appeared to validate the neoliberal premise that lifting government restrictions over a domain would unlock enormous economic and social value. National policy in the 1990s even subsidized the Internet by exempting internet service providers from network access charges. Internet intermediaries received broad immunity from liability for user-generated content under Section 230 of the Communications Decency Act, adopted as part of general telecommunications legislation in 1996.

Two other areas of national policy, antitrust and privacy law, helped lay the basis for the rise of online platform monopolies. Since the 1980s, the federal government has greatly relaxed enforcement of the antitrust laws against big corporations, thanks to the influence of theories holding that corporate dominance of a market is no problem if it improves “consumer welfare,” interpreted largely to mean lower consumer prices. That interpretation has made it difficult to prosecute antitrust cases in the tech sector, especially against companies like Google and Facebook that offer consumers services for free. After failing to break up Microsoft in an antitrust suit that ended with a consent decree in 2002, the government raised no obstacles as online platform companies expanded, bought out potential rivals, and gained monopoly power; for example, Facebook was able to acquire WhatsApp and Instagram without facing antitrust action.

The government also raised no obstacles to the platforms’ accumulation and sharing of personal data; unlike the European Union, the United States has adopted no legislation protecting consumer privacy online. The government left it to the online companies to set their own privacy policies, which evolved into increasingly broad authorizations for the companies to share data. In its initial privacy policy in 1999, for example, Google said that when sharing information about users with third parties, “we only talk about our users in aggregate, not as individuals,” but Google excised that limitation three months later.35 The government can take action against the companies if they violate their own privacy policies and deceive consumers, but while that authority has led to fines, it has not produced institutional change.36 According to one market-oriented theory, privacy is itself a purchasable good: if consumers value privacy, they can choose firms that provide it. That theory presumes, however, that consumers have a choice, when many of these services face no competition and obtaining data about users is a core part of the business.

In the absence of privacy protections, Google, Facebook, and other companies have been able to sweep up data from their users’ computer-mediated communications and actions to create a new kind of enterprise specializing in behavioral prediction and modification. Inverting the public sphere, the firms have developed the most comprehensive systems ever devised for tracking individual behavior. This is what Shoshana Zuboff calls “surveillance capitalism,” which in her conception is not just a new business model, but a “new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales.”37

The connection between surveillance capitalism and disinformation lies in the increased capacity of platforms to microtarget messages and alter behavior without people being aware of their influence. Although most users of social media probably understand that their data is used to decide what ads to show them, they may not be aware how much personal data the companies have and what the data enables them to do. In two published experiments, Facebook itself demonstrated the platform’s capacity to modify behavior on a mass scale. In the run-up to the 2010 congressional elections, the company’s researchers conducted a randomized, controlled experiment on 61 million users. Two groups were shown information about voting at the top of their news feed; the people in one of those groups also received a social message with up to six pictures of their Facebook friends who had received that information and clicked “I voted.” Other Facebook users received no special voting information. Sure enough, Facebook’s intervention, especially the social message about users’ friends, had a significant effect; altogether the researchers estimated that the experiment led to 340,000 additional votes being cast.38 In a second experiment demonstrating “massive-scale emotional contagion through social networks,” Facebook researchers provided some users more negative information in their news feed and other users more positive information, affecting the emotional mood not just of the immediate recipients but also of their friends.39

Microtargeting is not necessarily a bad thing per se; a political campaign can legitimately use microtargeted messages to get more of its supporters to vote. But using the same means, a campaign may be able to deliver covert lies and suppress voting among its opponents. Microtargeting has been especially likely to be a vector of disinformation because social media are able to deliver such messages outside the public sphere, thereby preventing journalists from policing deception, and opponents from rebutting attacks.

Facebook’s policies during Brexit and the US elections of 2016 facilitated covert disinformation. Not only did Facebook aid the Brexit and Trump campaigns by allowing the firm Cambridge Analytica to harvest the personal data of tens of millions of Facebook users in violation of Facebook’s own privacy policies; it also allowed microtargeting through “unpublished page post ads,” generally known as “dark posts,” which were invisible to the public at large. As an advertising firm explained in 2013 shortly after Facebook began allowing dark posts in news feeds, they were effective partly because they blurred “the line between advertising and content on Facebook” and could be delivered “as a status update, photo, video, question, or shared link – what people have come to expect from brands already in their News Feeds.” Moreover, they also benefited from “viral lift”: “As people engage with the ad unit as they would any other piece of content in their News Feeds (be it by Liking, commenting or sharing), their friends also can see this activity. Advertisers benefit from this additional, free wave of visibility.”40 But the dark posts then disappeared and were never publicly archived.

The social media companies did not create tools for disinformation deliberately, but they were reckless and naïve. “Move fast and break things” was Facebook’s motto. The companies were so certain of their own goodness that they failed to see the problems with the accumulation of so much power in their own hands. They had radically altered the means of political communication, but they had none of professional journalism’s traditions of editorial responsibility, traditions that in liberal democracies have at least mitigated the dangers of the mass media. How to govern this new regime has now become one of the central challenges of our time.

Governing the New Regime

Since 2016, a backlash against the tech industry has radically changed the political context for social media. Journalists and researchers have exposed the platforms’ vulnerability to manipulation and propaganda, their failures to protect users’ privacy, and the role of their algorithms in amplifying disinformation and extremism. Both Republicans and Democrats have expressed outrage about the industry’s practices and called for changes in antitrust, privacy, and other policies. The companies themselves are in the process of making changes internally, and a variety of independent efforts are developing means of combatting disinformation as well. These private efforts and proposals for changes in public policy are so varied – and evolving so quickly – that I will only outline here what seem to me to be the most important points about them.

Facebook, YouTube, and Twitter are now more openly and aggressively engaged in the regulation of user content, proactively identifying and eliminating fake accounts and taking down content that violates their standards and rules. Facebook as well as Twitter has eliminated dark posts, requiring that all ads be publicly visible and archived.41 In an important shift, both Facebook and YouTube have announced changes in their algorithms that they claim will limit the prominence of what they call “borderline content.” In Mark Zuckerberg’s description, this is “sensationalist and provocative content” that “can undermine the quality of public discourse and lead to polarization.”42 Facebook is not blocking these posts, only limiting how often they show up in news feeds. In an explanation of how Facebook was preparing for the 2018 elections, Zuckerberg said, “Posts that are rated as false [on the basis of independent fact-checkers] are demoted and lose on average 80% of their future views.”43 YouTube announced in January 2019 that it would change its recommendation algorithm to reduce the spread of “borderline content and content that could misinform users in harmful ways.” But the company continued to display such videos in searches and to distribute them in the channels of conspiracy theorists with millions of followers. Critics argue that the actual scope and impact of YouTube’s new policies are limited.44
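
Mechanically, demotion of this kind amounts to multiplying a post's ranking score by a penalty once it is flagged, so the post remains available but surfaces far less often. A minimal sketch, assuming an invented scoring model and an 80 percent reduction matching the figure quoted above:

```python
# Hypothetical sketch of demotion: flagged posts keep only 20% of their ranking
# weight, mirroring the quoted "lose on average 80% of their future views".
def ranked_score(base_engagement_score: float, rated_false: bool) -> float:
    demotion_factor = 0.2 if rated_false else 1.0
    return base_engagement_score * demotion_factor

print(ranked_score(100.0, rated_false=False))  # 100.0 -> distributed normally
print(ranked_score(100.0, rated_false=True))   # 20.0  -> demoted but not removed
```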

Such efforts to combat disinformation and polarization are politically fraught. In May 2018, Twitter announced that it was taking steps to limit “troll-like behaviors that distort and detract from the public conversation” on its platform. To identify these tweets, its algorithm took into account not only an individual user’s account but also how that account was connected to others that “violate our rules.” Not long after, several Republicans complained that Twitter was “shadow banning” them. In a shadow ban, a social media company allows a user to continue to post items, but no one else sees the posts; Twitter was not doing this to the Republicans. But some of their accounts were briefly downgraded in search, possibly because Twitter’s algorithm linked them to purveyors of right-wing conspiracy theories.45

This episode was one of a series in which conservatives accused Twitter, Facebook, and Google of discriminating against them. Such charges are unlikely to go away even if, for example, the social media platforms rely only on independent fact-checking organizations to determine whether sources are reliable. According to a Pew survey, 70 percent of Republicans believe fact-checkers are biased, while only 29 percent of Democrats think so.46 Independent fact-checkers may indeed rate news sites in the right-wing media ecosystem as less reliable than the sites that run from center-right to the left for the reasons that Benkler and his colleagues have identified: the right-wing sources do not observe the same truth-seeking journalistic norms. But those who judge reliability for social media may not act on the basis of such findings, for fear of political retribution from Republicans.

Hate speech is another area where social media platforms run into political problems on the right. In September 2019, Twitter said it was considering changes to target speech that “dehumanizes” people on the basis of a wide variety of characteristics, including race, sexual orientation, and political beliefs; but it ended up only taking limited steps against speech dehumanizing people on the basis of their religion.47 Broader measures against dehumanizing speech might well have a disparate effect on right-wing groups.

Ironically, after years of denouncing Democrats for supposedly wanting to bring back the fairness doctrine in broadcasting, conservatives now want a new fairness doctrine for social media. Senator Josh Hawley, a Missouri Republican, has proposed legislation that would require internet intermediaries to demonstrate that they are politically unbiased in order to obtain the broad freedom from liability for user content conferred by Section 230 of the CDA.48 The measure seems calculated to deter social media platforms from taking any steps on news source reliability, hate speech, or other issues that would differentially affect right-wing media.

Imposing new duties on social media companies in the governance of their platforms has support beyond the Republican Party. One proposal would condition their freedom from liability under Section 230 on a duty of reasonable care to prevent conduct that would be illegal if conducted offline.49 Another proposal would treat digital platforms as “information fiduciaries.”50 It seems unlikely, however, that either of these would much affect the platforms’ content moderation practices; indeed, they may have the opposite effect of ratifying the status quo.51 A proposal for a more comprehensive Digital Platforms Act would draw on the history of communications regulation to create a new regulatory regime to deal with a wide range of problems.52 But a host of obstacles, political and judicial, confront such measures. The political opposition will come both from Republicans who object to regulation in general, and from Democrats with ties to the high-tech industry. Even if such a measure could pass, the Supreme Court might overturn it on First Amendment grounds.

In the long run, the digital platforms will come under government regulation around the world. They are now trying to administer rules for information, communication, and economic exchange in countries with diverse cultures, legal traditions, and political regimes, all the while accumulating vast stores of personal data and the means of covertly modifying behavior, public opinion, and election outcomes. It is an unsustainable concentration of power. The power of the platforms has developed so fast, and with so little public or political understanding, that governments have lagged in responding – but law will be coming.

In the United States, however, a new regulatory regime may not be coming right away. Although both Republicans and Democrats are angry about the platforms, they do not agree about what ought to be done, nor even about what is wrong. The continued ideological dominance of neoliberal ideas, particularly in the courts, and the political influence of the tech industry create additional barriers to substantial reform. The parties’ views of the media are so antithetical that bipartisan measures in support of professional journalism are inconceivable. The degradation of the media would be a difficult problem to address at any moment; it is peculiarly difficult at a time when the leaders of one of America’s two major parties have made degrading the media into a central part of their political strategy. As long as that party has power at the national level, there will be no chance of undoing the damage from the perverse effects of the digital era. The best we can do is to try to survive the flooded zone and hope to build a better framework at a more rational time.
