I keep a list of things that have no name but need one. Like the feeling you get when you stare at a word so long that it looks like it’s spelled wrong. Or that social glitch that happens when you’re about to pass someone on the sidewalk, but neither of you can tell which side the other wants to walk on, so when the moment comes you both do that jerky little stutter-step thing that somehow, miraculously, always manages to resolve itself. Or when you’re sitting in a chair and someone walks behind you, and you scoot forward to give them room to pass even when you don’t need to, just to acknowledge their existence and the fact that they’re passing. Or when you’re in a taxi and your driver maneuvers in some way that cuts off another driver or pedestrian, and your impulse is to apologize to them because it’s your taxi and you benefitted from his transgression, but on the other hand it wasn’t your fault, so as you pass the aggrieved party you make some token gesture out the window, like a little pinched-lip half-smile, as though to half-assedly signal “Sorry!”
“The limits of my language,” wrote the philosopher Ludwig Wittgenstein, “mean the limits of my world.”1 We expand our awareness, both of ourselves and of our world, when we expand our language. We see things we didn’t know to see before, and we learn how to talk about them with others.2 What did we call “clickbait” before that word came into being? Or “binge-watching,” or “humblebrag,” or “FOMO”?
Diogenes also needed to coin new terms to describe the way he wanted to relate to the world. When people asked him where he was from, he replied that he was “a citizen of the world” – a kosmopolitês, or “cosmopolitan.”3 No one had ever said this before, so no one knew what it meant. The term certainly didn’t have the connotation it has today: Diogenes was no moneyed jet-setter. In fact, at one point in his life Diogenes was put on sale as a slave. It’s said that when the slave-master brought him before a group of potential buyers, he directed Diogenes to tell them what he could do. Diogenes retorted, “Govern men.” One potential buyer was so impressed by this reply that he immediately purchased Diogenes and put him in charge of educating his children. The “citizen of the world,” it seemed, had become the product.
We need new words to describe how we want to relate to our new empires of the mind. A vast project of industrialized persuasion has emerged under our feet. It competes to capture and exploit our attention, and we want to account for the ways this threatens the success of our personal and political lives. What we need, then, is a richer and more capacious way of talking about attention. As Tony Judt writes in Ill Fares the Land, “you must be able to name a problem if you wish to solve it.”4
However, in our societal and political discussions we lack such a language. As a result, we’ve failed to account for the wider set of technological “distractions” that threaten us most. We still grapple with attention using conceptual tools developed in environments of information scarcity. We don’t have a way of thinking about attention as a thing. The limits of our language are the limits of our attentional world.
What is attention? “Everyone knows what attention is,” wrote William James in his 1890 text The Principles of Psychology. In reality, no one really knows what attention is. (And I’m not just taking the contrary position because my name happens to be the inverse of his.) The term “attention” is used in many different ways across a wide range of domains.5 In fact, even within the narrowly specialized psychology and neuroscience literatures, researchers can’t seem to agree.6
Generally speaking, though, when we use the term “attention” in day-to-day parlance, we typically mean what cognitive scientists call the “spotlight” of attention, or the direction of our moment-to-moment awareness within the immediate task domain.7 The “spotlight” of attention is the sort of attention that helps us do what we want to do. It includes the way I’m selecting certain pieces of information from my sensory stream as I write this: I’m looking at a certain section of my computer screen; I’m typing a particular key on my keyboard. (In fact, just as I was writing the previous sentence, a helicopter went whopwhopwhop past my window and disappeared behind a tree, momentarily distracting the spotlight of my attention.)
Yet this is exactly the surface-level sort of “distraction” at which our day-to-day language about attention already operates. Expanding our language means diving down to deeper levels of attention. How can we access those deeper levels with a view to clarifying the distinct challenges of the attention economy?
Perhaps pivoting our question will help. Rather than asking “What is attention?”, a better question might be, “What do we pay when we ‘pay’ attention?” In this light, new spaces of possibility open up that allow us to venture well beyond the domain of the “spotlight” of attention.
What do you pay when you pay attention? You pay with all the things you could have attended to, but didn’t: all the goals you didn’t pursue, all the actions you didn’t take, and all the possible yous you could have been, had you attended to those other things. Attention is paid in possible futures forgone. You pay for that extra Game of Thrones episode with the heart-to-heart talk you could have had with your anxious child. You pay for that extra hour on social media with the sleep you didn’t get and the fresh feeling you didn’t have the next morning. You pay for giving in to that outrage-inducing piece of clickbait about that politician you hate with the patience and empathy it took from you, and the anger you have at yourself for allowing yourself to take the bait in the first place.
We pay attention with the lives we might have lived. When we consider the opportunity costs in this wider view, the question of attention extends far beyond the next turn in your life’s GPS: it encompasses all the turns and their relations, the nature of your destination, the specific way you want to get there, why you’re going there, and also your ability to ask any of these questions in the first place. In this view, the question of attention becomes the question of having the freedom to navigate your life in the way you want, across all scales of the human experience.
The great thinkers on the question of freedom can be of use here, in particular the nineteenth-century British philosopher John Stuart Mill. In his seminal text On Liberty, Mill writes that the “appropriate region of human liberty … comprises, first, the inward domain of consciousness … liberty of thought and feeling; absolute freedom of opinion and sentiment on all subjects, practical or speculative.” “This principle,” he writes, “requires liberty of tastes and pursuits; of framing the plan of our life to suit our own character.”8 Here, Mill seems to me to be articulating something like a freedom of attention. Crucially, he points out that freedom of the mind is the first freedom, upon which freedom of expression depends. The freedom of speech is meaningless without the freedom of attention, which is both its complement and its prerequisite.
But Mill also gives us a clue here about how we might think more broadly about attention – how we might take into account the full range of potential harms to which our “almost infinite appetite for distractions” might fall prey. So attention isn’t just about what you’re doing right now. It’s about the way you navigate your whole life: it’s about who you are, who you want to be, and the way you define and pursue those things.
This suggests that we need to move beyond a narrowly psychologized notion of attention. Georg Franck writes, “Attention is far more than just the ready supply of information processing capacity. Attention is the essence of being conscious in the sense of both self-certain existence and alert presence of mind. Attention is the medium in which everything must be represented that is to become real for us as experiencing creatures.”9 This is an intriguing direction in which to take the concept of attention. However, for our present purposes it seems overly broad.
Perhaps William James’s description of “effort of attention” as “the essential phenomenon of will” points the way to a narrower and more useful middle ground. If we expand our notion of “attention” in the direction of conceptions of the human will, this may allow us to take a view that’s wide enough to include more than just the immediate “spotlight,” but not so ultra-wide that it encompasses totalizing concepts such as “consciousness,” “being,” “life itself,” and so on. I’m not arguing here that we should think of attention as coextensive with the human will, but rather as a construct that we can usefully expand in that general direction. For our present purposes, we might think of this widened view of “attention” as the full stack of navigational capacities across all levels of human life.
The will is, of course, also the source of the authority of democracy. In this light, the political and moral implications of the digital attention economy start to move into the foreground. Article 21 of the Universal Declaration of Human Rights states, “The will of the people shall be the basis of the authority of government.” If the digital attention economy were compromising the human will, it would therefore be striking at the very foundations of democracy. This would directly threaten not only individual freedom and autonomy, but also our collective ability to pursue any politics worth having.
Of course, the “luminous conception” of the general will that Rousseau writes about is not merely the aggregation of individual wills: it’s the joined will of individuals insofar as they are all “concerned with the common interest.” That is to say, an individual can have a personal will that is contrary or dissimilar to the general will he has as a citizen. So the political implications of undermining attention, in this broader sense, are not fully accounted for by considering merely the frustrated navigation of an individual’s life, or even of many individuals’ lives. We must also account for the unique frustrations of the citizen, and possibly even for the very idea of citizenship itself. Rousseau writes that if society were understood as a “body,” then “there would be a kind of common sensorium which would ensure that all parts are coordinated.” Following this metaphor, undermining the very construct of citizenship would be akin to short-circuiting the nervous system that coordinates the body politic. Indeed, psychology research has identified many group decision-making biases and fallacies that routinely lead to collective action that does not reflect the collective will (and sometimes, as in the “Abilene Paradox,” even reflects its opposite).10
Can we expand the language of attention and use it to talk across questions of both individual and general will in order to clarify the threats the intelligent, industrialized persuasion of the attention economy poses to life and politics?
If we accept this broader view of attention as something akin to the operation of the human will, and we pair it with an understanding of the centrality of the human will for politics, then it’s hard to avoid viewing the attention economy as a project that ultimately targets and shapes the foundations of our politics. It is not merely the user, but indeed the citizen, who is the product.
To develop this wider notion of “attention” in the direction of the will, both individual and collective, let’s assume (at least for now) two more types of attention – two more “lights” – in addition to the “spotlight” of immediate awareness. These “lights” broadly align with the way the philosopher Harry Frankfurt views the structure of the human will.
It’s important to note here that I’m not making any sort of scientific claim or argument with these distinctions. My interest is primarily exploratory: think of this as one possible heuristic that may be useful for piercing through this problem space. Gordon Pask once called cybernetics “the art and science of manipulating defensible metaphors.”11 This is a fitting description for our task here as well.
- The “Spotlight”
Our immediate capacities for navigating awareness and action toward tasks. Enables us to do what we want to do.
- The “Starlight”
Our broader capacities for navigating life “by the stars” of our higher goals and values. Enables us to be who we want to be.
- The “Daylight”
Our fundamental capacities – such as reflection, metacognition, reason, and intelligence – that enable us to define our goals and values to begin with. Enables us to “want what we want to want.”
At Netflix, we are competing for our customers’ time, so our competitors include Snapchat, YouTube, sleep, etc.
– Reed Hastings, CEO of Netflix
Bob Dylan said, “A man is a success if he gets up in the morning and gets to bed at night, and in between he does what he wants to do.”1 Sometimes our technologies help us do what we want to do. Other times they don’t. When our technologies fail us in this regard, they undermine the “spotlight” of our attention. This produces functional distractions that direct us away from information or actions relevant to our immediate tasks or goals.
Functional distraction is what’s commonly meant by the word “distraction” in day-to-day use. This is the sort of distraction that Huxley called the “mere casual waste products of psychophysiological activity.”2 Like when you sit down at a computer to fulfill all the plans you’ve made, to do all those very responsible and adult things you know at the back of your mind you absolutely must do, and yet you don’t: instead, your unconscious mind outruns your conscious mind, and you find yourself, forty-five minutes later, having read articles about the global economic meltdown, having watched auto-playing YouTube videos about dogs who were running while sleeping, and having voyeured the life achievements of some astonishing percentage of people who are willing to publicly admit that they know you, however little it may actually be the case.
Functional distractions commonly come from notifications. Each day, the Android mobile operating system alone sends over 11 billion notifications to its more than 1 billion users. Notifications reach us from all manner of systems: email services, social networks, and mobile applications (“I was going to turn on the kettle so I could make some tea, but then Candy Crush reminded me I haven’t played in a few days”), as well as from person-to-person channels such as instant messaging applications. Often, as in Google’s Gmail system, notifications are colored red and placed in the upper-right corner of the user’s field of vision in order to better grab their attention and maximize the persuasive effect. This effect draws on the human reaction to the color red,3 as well as on the cleaning/grooming instinct,4 both of which make the notifications hard to resist clicking.
The effects of interruptions aren’t limited to the time we lose engaging with them directly. When a person in a state of focus gets interrupted, it takes them, on average, twenty-three minutes to regain that focus. In addition, experiencing a functional distraction at a particular place in your environment can make it harder to return your attention to that place later, when something task-salient appears there.5 Functional distractions may also direct your attention away not merely from perceptual information, but from reflective information as well: when an app notification or an instant message from another person interrupts your focus or “flow,” it may introduce information that crowds out task-relevant information in your working memory.6 In other words, the persuasive designs of the attention economy compete not only against one another for your attention, but also against things in your inner environment. Furthermore, repeated exposure to notifications can create mental habits that train users to interrupt themselves, even when the technology is absent.7 We tend to overlook the harms of functional distraction because its influence comes in such bite-size portions. However, as the philosopher Matthew Crawford writes, “Distractibility might be regarded as the mental equivalent of obesity.” From this perspective, individual functional distractions can be viewed as akin to individual potato chips.
Undermining the spotlight of attention can frustrate our political lives in several ways. One is by distracting us away from political information and toward some nonpolitical type of information. This effect doesn’t necessarily have to be consciously engineered. For instance, a news website might give me the option of viewing the latest update on my government’s effort to reform tax policy, but it may place it on the page next to another article with a headline that’s teasing some juicy piece of celebrity gossip – and whose photo is undoubtedly better at speaking to my automatic self and getting me to click.
At the same time, distraction away from political information can also occur by design, for instance via the propagandizing efforts of a political party or some other interested actor. The Chinese government, for example, has been known to censor online information it deems objectionable by suppressing or removing it. Recently, however, its propaganda organization, commonly known as the “50 Cent Party,” has begun using a technique called “reverse censorship,” or “strategic distraction,” to drown out the offending information with a torrent of other social media content that directs people’s attention away from the objectionable material. The Harvard researchers who analyzed these efforts estimate that the Chinese government creates 448 million social media posts per year as part of this strategic distraction.8 As researcher Margaret Roberts said in an interview, “the point isn’t to get people to believe or care about the propaganda; it’s to get them to pay less attention to stories the government wants to suppress.”9
A “strategic distraction” may also be used to change the focus of a political debate. Here it is hard to avoid discussion of US President Donald J. Trump’s use of the Twitter microblogging platform. A major function of his Twitter use has been to deflect attention away from scandalous or embarrassing news stories that might reflect poorly on him. Similarly, in the 2016 US presidential election, he used his so-called “tweetstorms” to “take all of the air out of the room”: that is, to capture the attention of television and radio news broadcasters and thereby claim as much of their finite airtime as possible, leaving little for other candidates. One study estimated that eight months before the 2016 election, he had already captured almost $2 billion worth of free or “earned” media coverage.10 In addition to this bulk approach, he also deployed highly targeted functional distraction: consider his campaign’s voter suppression efforts, which used Facebook to send highly targeted messages to African American voters (techniques which, while outrageous, used fairly standard digital advertising methods).11
Functional distraction can certainly be politically consequential, but it’s unlikely that an isolated instance of a compromised “spotlight” would pose the sort of fundamental risk to individual and collective will that we’re ultimately concerned with addressing here. To identify those deeper risks, it’s necessary to move quickly to the deeper types of distraction.
[Donald Trump’s candidacy] may not be good for America, but it’s damn good for CBS.
– Les Moonves, CEO of CBS
Around the time I started feeling existentially compromised by the deep distractions collecting in my life, I developed a habit that quickly became annoying to everyone around me. It went like this: I’d hear someone use a phrase to describe me that had a certain ring to it, like it would make a good title for something – but its content was both specific and odd enough that if it were used as the title for a biography about my whole life, it would be utterly absurd. Whenever I’d hear a phrase like that, I’d repeat it with the gravitas of a movie-trailer announcer, and then follow it with the phrase: “The James Williams Story.”
Here’s an example. One day, after a long conversation with my wife, she said to me, “You’re, like, my receptacle of secrets.” To which I replied: “Receptacle of Secrets: The James Williams Story.” The joke being, of course, that choosing this one random, specific snapshot of my life to represent the narrative of my entire existence – an existence which has involved many achievements more notable than hearing and keeping the odd spousal secret – would be an absurd and arbitrary thing to do. I eventually came to understand (or perhaps rationalize) this habit as a playful, shorthand way of stabilizing what philosophers would call my “diachronic self,” or the self over time, over the increasingly rocky waves of my “synchronic self,” or the self at a given moment. I might have been overanalyzing it, but I interpreted this emergent habit as a way of pushing back against my immediate environment’s ability to define me. It was a way of saying, “I will not be so easily summarized!” It was a way of trying to hold onto my story by calling attention to what my story definitely was not.
We experience our identities as stories, according to a line of thought known as “narrative identity theory.”1 In his book Neuroethics, Neil Levy writes that both synchronic and diachronic unity are essential for helping us maintain the integrity of these stories: “We want to live a life that expresses our central values, and we want that life to make narrative sense: we want to be able to tell ourselves and others a story, which explains where we come from, how we got to where we are, and where we are going” (p. 201).
When we lose the story of our identities, whether on individual or collective levels, it undermines what we could call the “starlight” of our attention, or our ability to navigate “by the stars” of our higher values or “being goals.” When our “starlight” is obscured, it becomes harder to “be who we want to be.” We feel the self fragmenting and dividing, resulting in an existential sort of distraction. William James wrote that “our self-feeling in this world depends entirely on what we back ourselves to be and do.” When we become aware that our actual habits are in dissonance with our desired values, it often feels like a challenge to, if not a loss of, our identities.
This obscured “starlight” was a deeper layer of the distractions I’d been feeling, and I felt that the attention-grabby techniques of technology design were playing a nontrivial role. I began to realize that my technologies were enabling habits in my life that led my actions over time to diverge from the identity and values by which I wanted to live. It wasn’t just that my life’s GPS was guiding me into the occasional wrong turn, but rather that it had programmed in a new destination, in a far-off place that it did not behoove me to visit. It was a place that valued short-term over long-term rewards, simple over complex pleasures. It felt like I was back in my high-school calculus class, and all these new technologies were souped-up versions of Tetris. It wasn’t just that my tasks and goals were giving way to theirs – my values were as well.
One way I saw the “starlight” getting obscured in myself and others, in both the personal and political domains, was in the proliferation of pettiness. Pettiness means pursuing a low-level goal as though it were a higher, intrinsically valuable one. Low-level goals tend to be short-term goals; where this is so, pettiness may be viewed as a kind of imprudence. In The Theory of Moral Sentiments, Adam Smith calls prudence the virtue that’s “most useful to the individual.” For Smith, prudence involves the union of two things: (1) our capacity for “discerning the remote consequences of all our actions,” and (2) “self-command, by which we are enabled to abstain from present pleasure or to endure present pain, in order to obtain a greater pleasure or to avoid a greater pain in some future time.”
In my own life I saw this pettiness, this imprudence, manifesting in the way the social comparison dynamics of social media platforms had trained me to prioritize mere “likes” or “favorites,” or to get as many “friends” or “connections” as possible, over pursuing other more meaningful relational aims. These dynamics had made me more competitive for other people’s attention and affirmation than I ever remember being: I found myself spending more and more time trying to come up with clever things to say in my social posts, not because I felt they were things worth saying but because I had come to value these attentional signals for their own sake. Social interaction had become a numbers game for me, and I was focused on “winning” – even though I had no idea what winning looked like. I just knew that the more of these rewarding little social validations I got, the more of them I wanted. I was hooked.
The creators of these mechanisms didn’t necessarily intend to make me, or us, into petty people. The creator of the Facebook “like” button, for instance, initially intended for it to send “little bits of positivity” to people.2 Had its design been steered in the right way, perhaps it would have done so. Soon enough, however, the “like” function began to serve the data-collection and engagement-maximizing interests of advertisers. As a result, the metrics that made up the “score” of my social game – and I, as the player of that game – were directly serving the interests of the attention economy. In the pettiness of my day-to-day number-chasing, I had lost the higher view of who I really was, or why I wanted to communicate with all these people in the first place.
Pettiness is not exactly a rare phenomenon in the political domain. During the 2016 US presidential election, however, I encountered a highly moralized variant of pettiness coming from people in whom I would never have expected to see it. Over the course of just a few months, I witnessed several acquaintances back in Texas – good, loving people, and deeply religious “values voters” – go from vocally rejecting one particular candidate as morally reprehensible and utterly unacceptable, to ultimately setting aside those foundational moral commitments in the name of securing a short-term political win. By the time a video emerged of the candidate bragging about committing sexual assault, this petty overwriting of moral commitment with political expediency was so total that the staggering development merited barely a shrug. By then, their posts on social media were saying things like, “I care more about what Hillary did than what Trump said!”
In the 2016 presidential election campaign, Donald Trump took the dominance of pettiness over prudence to new heights. Trump is very straightforwardly an embodiment of the dynamics of clickbait: he’s the logical product (though not endpoint) in the political domain of a petty media environment defined by impulsivity and zero-sum competition for our attention. One analyst has estimated that Trump is worth $2 billion to Twitter, which amounts to almost one-fifth of the company’s current value.3 His success metrics – number of rally attendees, number of retweets – are attention economy metrics. Given this, it’s remarkable how consistently societal discussion has completely misread him by casting him in informational, rather than attentional, terms. Like clickbait or so-called “fake news,” the design goal of Trump is not to inform but to induce. Content is incidental to effect.
At its extreme, this pettiness can manifest as narcissism: a preoccupation with being recognized by others, a valuing of attention for its own sake, and a prioritization of fame as a core value. A meta-analysis of fifty-seven studies found that social media in particular is linked with increased narcissism.4 Another study found that young people are now getting more plastic surgery due to pressure from social media.5 And a study of children’s television shows in recent years found that, rather than pro-social community values, the main value they now hold up as most worth pursuing is fame.6 In his historical study of fame, The Frenzy of Renown, Leo Braudy writes that when we call someone “famous,” what we’re fundamentally saying is, “pay attention to this.” So it’s entirely to be expected that in an age of information abundance and attention scarcity we would see an increased reliance on fame as a heuristic for determining what and who matters (i.e. merits our attention), as well as an increased desire for achieving fame in one’s own lifetime (as opposed to a legacy across generations).7
Sometimes the desire for fame can have life-and-death consequences. Countless YouTube personalities walk on the edges of skyscrapers, chug whole bottles of liquor, and perform other dangerous stunts, all for the fame – and the advertising revenue – it might bring them. The results are sometimes tragic. In June 2017 a man concocted an attention-getting YouTube stunt in which he instructed his girlfriend, who was then pregnant with their second child, to shoot a handgun from point-blank range at a thick book he was holding in front of his chest. The bullet ripped through the book and struck and killed him. As the New York Times reported:
It was a preventable death, the sheriff said, apparently fostered by a culture in which money and some degree of stardom can be obtained by those who attract a loyal internet following with their antics.
In the couple’s last video, posted on Monday, Ms. Perez and her boyfriend considered what it would be like to be one of those stars – “when we have 300,000 subscribers.”
“The bigger we get, I’ll be throwing parties,” Mr. Ruiz said. “Why not?”8
Similarly, on the video-game live-streaming site Twitch, a 35-year-old man stayed awake to continue his streaming marathon for so long that he died.9 And in December 2017, Wu Yongning, a Chinese man known as a “rooftopper” – someone who dangles from skyscrapers without safety equipment in order to post and monetize the video online – fell to his death. As one user on the Chinese microblogging service Weibo reflected about the role, and responsibility, of the man’s approving audience members:
Watching him and praising him was akin to … buying a knife for someone who wanted to stab himself, or encouraging someone who wants to jump off a building. … Don’t click “like,” don’t click “follow.” This is the least we can do to try to save someone’s life.10
There’s nothing wrong with wanting attention from other people. Indeed, it’s only human. Receiving the attention of others is a necessary, and often quite meaningful, part of human life. In fact, Adam Smith argues in The Theory of Moral Sentiments that it’s the main reason we pursue wealth in the first place: “To be attended to, to be taken notice of with sympathy, complacency, and approbation,” he writes, “are all the advantages which we can propose to derive from it.” It’s this approval, this regard from others, he says, that leads people to pursue wealth – and when they do attain wealth, and then “expend it,” it’s that expenditure – what we might call the exchange of monetary wealth for attentional, or reputational, wealth – that Smith describes as being “led by an invisible hand.”11 So, on a certain reading, one could argue that all economies are ultimately economies of attention. However, this doesn’t mean that all attention is worth receiving, or that all ways of pursuing it are praiseworthy.
We can also see the obscuring of our starlight in the erosion of our sense of the nature and importance of our higher values. In Mike Judge’s film Idiocracy, a man awakes from cryogenic slumber in a distant future where everyone has become markedly stupider. At one point in the story he visits a shambolic Costco warehouse store, where a glazed-eyed front-door greeter welcomes him by mechanically droning, “Welcome to Costco. I love you.” This is an extreme example of the dilution of a higher value – in this case, love. In the design of digital technologies, persuasive goals often go by names that sound lofty and virtuous but have been similarly diluted: “relevance,” “engagement,” “smart,” and so on. Designing users’ lives toward diluted values leads to the dilution of their own values at both individual and collective levels.
Consider that across many liberal democracies the percentage of people who say it’s “essential” to live in a democracy has in recent years been in freefall. The “starlight” of democratic values seems to be dimming across diverse cultures, languages, and economic situations. However, one of the few factors these countries do have in common is their dominant form of media, which just happens to be the largest, most standardized, and most centralized form of attentional control in human history, and which also happens to distract from our “starlight” by design.
Similarly, in the last two decades the percentage of Americans who approve of military rule (saying it would be either “good” or “very good”) has doubled, according to the World Values Survey, and now stands at one in six.12 The authors of a noted study on this topic point out that this percentage “has risen in most mature democracies, including Germany, Sweden, and the United Kingdom.” Crucially, they also note that this trend can’t be attributed to economic hardship. “Strikingly,” the authors write, “such undemocratic sentiments have risen especially quickly among the wealthy,” and even more so among the young and wealthy. Today, this approval of military rule “is held by 35 percent of rich young Americans.”13
On the part of political representatives, this value dilution manifests as the prioritization of metrics that look very much like attention economy metrics, as well as the placing of party over country. As Rousseau wrote in Political Economy, when a sense of duty is no longer present among political leaders, they simply focus on “fascinating the gaze of those whom they need” in order to stay in power.
Our information and communication technologies serve as mirrors for our identities, and these mirrors can show us either dignified or undignified reflections of ourselves. When we see a life in the mirror that appears to be diverging from the “stars” of freedom and self-authorship by which we want to live, our reaction not only involves the shock of indignity, but also quite often a defensive posture of “reactance.” Reactance refers to the idea “that individuals have certain freedoms with regard to their behavior. If these behavioral freedoms are reduced or threatened with reduction, the individual will be motivationally aroused to regain them.”14 In other words, when we feel our freedom being restricted, we tend to want to fight to get it back.
To take one example of an undignified reflection that prompts this sort of reactance, consider the Facebook “emotional contagion” experiment that Facebook and researchers at Cornell University published in 2014. The experiment used the Facebook News Feed to look for evidence of emotional contagion effects (i.e. the transference of emotional valence between people). Over a one-week period, the experiment reduced the number of either positive or negative posts that a sample of around 700,000 Facebook users saw in their News Feeds. The researchers found that when users saw fewer negative posts, their own posts contained a lower percentage of negative words; the same was true for positive posts and positive words. While the effect sizes were very small, the results showed a clear persuasive effect on the emotional content of users’ posts.15
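For readers who want to see the shape of the study’s outcome measure more concretely, here is a minimal, illustrative sketch in Python. It is not the researchers’ actual code: the real study classified words using the much larger LIWC dictionaries, and the tiny word lists, sample posts, and condition labels below are hypothetical stand-ins. The sketch only shows how one might compare the average percentage of emotion words in users’ posts between a “fewer negative posts shown” condition and an unaltered control condition.

```python
from statistics import mean

# Hypothetical stand-ins for sentiment word lists; the actual study used
# the LIWC dictionaries to classify words as positive or negative.
POSITIVE_WORDS = {"great", "happy", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "awful", "angry", "terrible", "miserable"}

def emotion_word_percentages(post: str) -> tuple[float, float]:
    """Return (% positive words, % negative words) for a single post."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return 100 * pos / len(words), 100 * neg / len(words)

# Hypothetical posts from users in two conditions: one group saw fewer
# negative posts in their feed ("reduced_negative"), the other saw an
# unaltered feed ("control").
posts_by_condition = {
    "reduced_negative": [
        "What a wonderful morning, feeling great",
        "Love this weather, so happy today",
    ],
    "control": [
        "Terrible commute again, feeling miserable",
        "Great lunch but an awful afternoon",
    ],
}

# Compare the mean emotion-word percentages across conditions.
for condition, posts in posts_by_condition.items():
    pos_pcts, neg_pcts = zip(*(emotion_word_percentages(p) for p in posts))
    print(f"{condition}: mean % positive words = {mean(pos_pcts):.1f}, "
          f"mean % negative words = {mean(neg_pcts):.1f}")
```

The point of the sketch is simply that the study’s dependent variable was something this mundane: a percentage of emotion words per post, averaged over enormous numbers of users, which is why even a very small shift counted as evidence of a persuasive effect.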
In response, some raised questions about research ethics processes – but many objections were also about the mere fact that Facebook had manipulated its users at all. Clay Johnson, the founder of political marketing firm Blue State Digital, wrote, “the Facebook ‘transmission of anger’ experiment is terrifying.”16 The Atlantic described the study as “Facebook’s Secret Mood Manipulation Experiment.”17 A member of the UK parliament called for an “investigation into how Facebook and other social networks manipulated emotional and psychological responses of users by editing information supplied to them.”18 And privacy activist Lauren Weinstein wrote on Twitter, “I wonder if Facebook KILLED anyone with their emotion manipulation stunt. At their scale and with depressed people out there, it’s possible.”19
We are manipulated by the design of our media all the time. This seems to me simply another way of describing what media is and does. Much, if not most, of the advertising research that occurs behind the closed doors of companies could be described as “secret mood manipulation experiments.” And the investigation the UK parliamentarian called for would effectively mean investigating the design of all digital media that shape our attention in any way whatsoever.
What was unfortunately missed in the outrage cascades about this experiment was the fact that Facebook was finally measuring whether a given design had a positive or negative effect on people’s emotions – something it does not appear to have been doing before then. This is precisely the sort of knowledge that allows the public to say, “We know you can measure this now – so start using it for our benefit!” But that potential response was, as is so often the case, ultimately scuppered by the dynamics of the attention economy itself.
Imagine a person who interprets Facebook’s alteration of their news feed as unacceptable manipulation, and who objects to the image – the “undignified reflection” – of themselves as someone not fully in control of what they write in their own posts. Such a person would see their use of Facebook as incompatible with, and unsupportive of, the ultimate “being goal” they hold for themselves. The sense of a precipitous sliding backward from that goal would, as discussed above, undermine their sense of self-integrity, and would thus reduce their sense of dignity.
Finally, when we start to lose the story of our shared identity, it has major implications for politics. We find it harder to keep in view the commonalities we have with others in our own society. We struggle to imagine them inhabiting the same space or demos as us, especially when we’re increasingly physically isolated from them. Division itself is not bad, of course: a degree of isolation is necessary for the development of individual views and opinions. Diversity requires division, of a sort. But the sort of division that removes the space in which the common interest and general will may be found is deeply problematic.
This erosion of shared identity is often mischaracterized as political “polarization.” However, “polarization” suggests a rational disunity, mere disagreement about political positions or assumptions. In essence, a disunity of ideas. What we have before us, on the other hand, seems a profoundly irrational disunity – a disunity of identity – and indeed a “deep-self discordance” among the body politic. This can lead to collective akrasia, or weakness of will. As the philosopher Charles Taylor writes, “the danger is not actual despotic control but fragmentation – that is, a people increasingly less capable of forming a common purpose and carrying it out.”20 William James, in The Principles of Psychology, writes, “There is no more miserable human being than one in whom nothing is habitual but indecision.”21 Perhaps we could say the same of societies as well.
Rousseau argued that a collective decision can depart from the general will if people are “misled by particular interests … by the influence and persuasiveness of a few clever men.”22 This can, of course, happen via mere functional distraction, or inhibition of the “spotlight,” but Rousseau notes that this control more often happens by subdividing society into groups, which leads them to “abandon” their “membership” of the wider group. At extremes, groups may diverge so much from one another that their insularity becomes self-reinforcing. And when this division of identity becomes moralized in such a way that it leads to a deeper sort of tribalistic delegitimizing, it veers toward a certain kind of populism, which I will discuss in the next chapter.
Here at the level of the “starlight,” however, this division has primarily prompted lamentations about the problems of internet “echo chambers,”23 or self-reinforcing “bubbles of homophily.”24 Yet the echoic metaphor seems to me to miss something essential: while echoes do bounce back, the sound ultimately dissipates. A better metaphor might be amplifier feedback, that is, holding a live microphone up to a speaker to create an instant shrieking loop that will destroy your eardrums if you let it. When the content of that shrieking loop consists of our own identities, whether individually or as groups, the distorted reflection we see in the “mirror” of technology takes on the character of a funhouse mirror, giving us only an absurd parody of ourselves.
Considering the ways my “starlight” was being obscured helped me broaden the scope of “distraction” to include not just frustrations of doing, but also frustrations of being over time. This sort of distraction makes us start to lose the story, at both individual and collective levels. When that happens, we start to grasp for things that feel real, true, or authentic in order to get the story back. We try to reorient our living toward the values and higher goals we want to pursue.
But here, at least, we still know when we’re not living by our chosen stars – we can still in principle detect the errors and correct them. It seemed like there was one deeper level of “distraction” to contend with: the sort of distraction that would threaten our ability to know and define what our goals and values are in the first place.
When men yield up the privilege of thinking, the last shadow of liberty quits the horizon.
– Thomas Paine
The third, and most profound, level of attention is the “daylight.” By this I mean the suite of foundational capacities that enable us to define our goals and values in the first place, to “want what we want to want.” When our daylight is compromised, epistemic distraction results. Epistemic distraction is the diminishment of underlying capacities that enable a person to define or pursue their goals: capacities essential for democracy such as reflection, memory, prediction, leisure, reasoning, and goal-setting. This is where the distractions of the attention economy most directly undermine the foundations of democracy.
Epistemic distraction can make it harder to “integrate associations across many different experiences to detect common structures across them.” These commonalities “form abstractions, general principles, concepts, and symbolisms that are the medium of the sophisticated, ‘big-picture’ thought needed for truly long-term goals.”1 In the absence of this capacity to effectively plan one’s own projects and goals, our automatic, bottom-up processes take over. Thus, at its extreme, epistemic distraction produces what Harry Frankfurt refers to as “wantonness” because it removes reflected-upon, intentional reasons for action, leaving only impulsive reasons in its wake.2
I call this type of distraction “epistemic” for two reasons. First, it distracts from knowledge of the world (both outer and inner) that’s necessary for someone to be able to function as a purposeful, competent agent. Second, it constitutes what the philosopher Miranda Fricker calls an “epistemic injustice,” in that it harms a person in their ability to be a “knower” (in this case, a knower of both the world and of oneself).3 Like existential distraction, epistemic distraction also has an impact on both autonomy and dignity. It violates the integrity of the self by undermining the necessary preconditions for it to exist and to thrive, thus pulling the carpet out from under one’s feet, so to speak.
Our daylight may be obscured when our capacities for knowing what’s true, or for predicting what’s likely to be true, are undermined. The undermining of truth can happen via the phenomenon of “fake news,” which Collins Dictionary selected as its 2017 Word of the Year, defining it as “false, often sensational, information disseminated under the guise of news reporting.”4 An Oxford University study found that during the 2016 US election, Twitter users posted more “misinformation, polarizing and conspiratorial content” than real news articles.5 The Pope has gone so far as to call fake news a “grave sin that hurts the heart of the journalist and hurts others.”6 Our capacities for prediction may also be undermined by the attention economy, for instance when the practice of statistical opinion polling itself becomes subjugated to its incentives. Especially during major elections, it now seems that small, meaningless day-to-day changes in candidates’ probabilities of winning serve as the “rewards” drawing readers back to websites whose ultimate aim is to garner page views and clicks. (When this effect occurs by design, perhaps we could call it “statbait,” or statistical clickbait.)
Our daylight can also be obscured via the diminishment of intelligence or other cognitive capacities. A Hewlett-Packard study found that distractions decreased the IQ scores of knowledge workers by 10 points, which the researchers note is “twice the decline recorded for those smoking marijuana.”7 Similarly, researchers at the University of Texas found that the mere presence of one’s smartphone can adversely affect available working memory capacity and functional fluid intelligence.8 Also of relevance here are physiological effects, such as the stress produced by “email apnea,” a phenomenon that occurs when a person opens their email inbox to find many unread messages, inducing a “fight-or-flight” response that causes the person to stop breathing.9 In addition, recent research has also associated social media usage with increased social anxiety, depression, and lower mood.10 Another source of anxiety is the phenomenon of “cyberchondria,” which is defined as the “unfounded escalation of concerns about common symptomatology, based on the review of search results and literature on the Web.” A 2009 study found that escalatory terminology on the pages users visit – which serves, as do clickbait headlines, to increase page views and other engagement metrics – plays a key role in this process.11
Reflection is an essential ingredient for the kind of thinking that helps us determine “what we want to want.” For the American philosopher Christine Korsgaard, reflection is the way we “turn our attention on to our own mental activities” in order to “call our beliefs and motives into question.”12 When the technologies of our attention inhibit our capacities for reflection, our “daylight” gets obscured in ways that have particular implications for politics. For instance, notifications or addictive mobile apps may fill up those little moments in the day during which a person might have otherwise reflected on their goals and priorities. Users check their phones an average of 150 times per day13 (and touch them over 2,600 times per day),14 so that would add up to a lot of potential reflection going unrealized.
Closely related to the task of reflection is the activity of leisure. We often conflate leisure with entertainment. However, properly understood, leisure is akin to what Aristotle called “periodic nonthought”.15 It’s that unstructured downtime that serves as the ground out of which one’s true self bubbles forth. This sort of unstructured thought is of particular developmental importance for children.16 The philosopher Josef Pieper even argued in 1948 that leisure is “the basis of culture,” the unconscious ground out of which not only individual but also collective values and meaning-making processes emerge.17
Leisure also uniquely enables the kind of thinking and deliberation necessary for the thoughtful invention of societal institutions. The philosopher Hannah Arendt saw this as being particularly true when it comes to the design of democratic systems worth having.18 In an unpublished lecture, she writes about the authors of the United States’ institutions of government:
No doubt, it is obvious and of great consequence that this passion for freedom for its own sake awoke in and was nourished by men of leisure, by the hommes de lettres who had no masters and were not always busy making a living. In other words, they enjoyed the privileges of Athenian and Roman citizens without taking part in those affairs of state that so occupied the freemen of antiquity. Needless to add, where men live in truly miserable conditions this passion for freedom is unknown.19
“Leisure” here for Arendt seems to mean more than just “nonthought” or reflection: in counterposing it with work, she seems to be using the term to refer to something like a respite from having to perform attentional labor. A line from Theodore Roethke’s 1963 poem “Infirmity” comes to mind: “A mind too active is no mind at all / The deep eye sees the shimmer on the stone …” The busy demands of making a living can make a mind too active, but so can the busy demands of notifications, never-ending feeds of information, persuasive appeals, endless entertainment options, and all the other pings on our attention that the digital attention economy throws our way. This seems to suggest that there’s an opportunity to clarify where and how our interactions with the forces of the attention economy could be considered a kind of attentional labor, and what the implications of that characterization might be for the kinds of freedom we look to leisure to sustain.
However, the most visible and consequential form of compromised “daylight” we see in the digital attention economy today is the prevalence and centrality of moral outrage. Moral outrage consists of more than just anger: it also includes the impulse to judge, punish, and shame someone you think has crossed a moral line. You’re most likely to experience moral outrage when you feel not merely angry about some perceived misdeed, but angry and disgusted.20
Moral outrage played a useful role earlier in human evolution, when people lived in small nomadic groups: it enabled greater accountability, cooperation, and in-group trust.21 However, the amplification of moral outrage on a societal, or even global, scale carries dire implications for the pursuit of politics worth having. In the past, when we lived in environments of information scarcity, all the world’s moral transgressions weren’t competing for our attention every day. According to a study in the US and Canada, fewer than 5 percent of people will ever personally experience a genuine moral misdeed in real life.22 In the era of smartphones, however, if anyone experiences a misdeed, then everyone potentially experiences it.
On an almost daily basis now, it seems the entire internet – that is to say, we – erupts in outrage at some perceived moral transgression whose news has cascaded across the web, or gone “viral.” Virality, the mass transmission of some piece of information across a network, is biased toward certain types of information over others. Since the 1960s, it’s been widely held that bad news spreads more quickly and easily than good news.23 More recent research building on this idea has shown that it’s not only the emotional “valence” of the information – how good or bad it makes you feel – that influences whether or not you’ll share it, but also the degree to which the particular emotion you experience produces an “arousal response” in you – that is, makes you more physiologically alert and attentive.24 In other words, if you’ve got two equally “bad” pieces of news to share with your friends, one of which makes you feel sad and the other angry – but you only want to share one of them – then odds are you’ll share the one that angers you, because anger is a high-arousal emotion whereas sadness is low-arousal.
Here’s just one example of the kind of webwide outrage cascade I’m talking about. In July of 2015 a dentist from the US state of Minnesota went hunting in Zimbabwe and killed a well-known lion named Cecil. Cecil’s cause of death was an arrow followed by – after about forty hours of stumbling around, bleeding, in the wilderness – a rifle round. Cecil was then decapitated and flown to Minnesota as the trophy of a victorious hunt. It cost around $50,000 to kill Cecil. It may not have been legal.
When the story of Cecil’s demise went “viral,” the whole internet seemed to roar in outrage all at once. On Twitter, Cecil’s memorial hashtag, #CecilTheLion, received 670,000 tweets in just twenty-four hours.25 Comedian Jimmy Kimmel called the Minnesotan dentist “the most hated man in America who never advertised Jell-O on television.” Actress Mia Farrow tweeted the dentist’s address.26 Crowds appeared at his office to yell “Murderer! Terrorist!” through megaphones and to display homemade signs suggesting that he “ROT IN HELL.” Someone spray-painted “Lion Killer” on his house. Someone else took down his professional website. Still others, sitting elsewhere in the world, spent hours falsifying one-star Yelp reviews of his dental practice. On Facebook, the thousand-plus member group that emerged as the de facto mission control for Cecil’s revenge brigade was called “Shame Lion Killer Dr. Walter Palmer and River Bluff Dental.”27
When children behave like this toward one another, we use words like “cyberbullying” or “harassment.” Yet when it’s adults doing the shaming and threatening, we’re inclined to shrug our shoulders, or even cheer it as “karma,” “sweet, sweet revenge,” or “justice in the court of public opinion.” But it isn’t any of those things. It’s nothing more – and nothing less – than mob rule, a digital Salem. And today, because the targets of moral outrage can no longer be burned at the stake (in most places), the implicit goal becomes to destroy them symbolically, reputationally – we might even say attentionally – for their transgression.
Yet don’t some transgressions deserve anger, and even outrage? Certainly. As the famous bumper sticker says: “if you’re not outraged, you’re not paying attention.” Sometimes, the social pressure that comes from moral outrage is the only means we have to hold people accountable for their actions, especially when the institutions of society have failed to do so. For example, in 2011 moral outrage in Egypt led to the ouster of Hosni Mubarak from the presidency and advanced the Arab Spring.28 In 2012 in the United States, after the shooting of Trayvon Martin, an unarmed African American teenager, moral outrage galvanized national conversations about race, guns, and accountability in law enforcement.29 And in 2017, moral outrage finally gave a hearing to many women whose claims about the sexual offenses of Harvey Weinstein, widely considered the most powerful man in Hollywood, had previously been ignored if not outright disbelieved. Upon Weinstein’s exile from the entertainment industry, similar claims came to light about other figures in Hollywood and beyond, ultimately leading to widespread societal reflection about issues of sexual harassment, gender relations, and power dynamics in the workplace.30
But if justice is our goal – as it should be – then it is not at all clear that these dynamics of moral outrage and mob rule advance it. If anything, they seem to lead in the opposite direction.
In her book Anger and Forgiveness, Martha Nussbaum describes the ways in which anger is morally problematic. She uses Aristotle’s definition of anger, which is pretty close to the concept of moral outrage I described above: it’s “a desire accompanied by pain for an imagined retribution on account of an imagined slighting inflicted by people who have no legitimate reason to slight oneself or one’s own.” The “imagined slighting” and “imagined retribution,” Nussbaum says, essentially take the form of status downrankings. She argues that much moralistic behavior therefore aims not at justice-oriented but at status-oriented outcomes. Virtue signaling, for example, often masquerades as apparently useful or prudent action, as when a person campaigns to ensure that sex offenders don’t move into her neighborhood. The real goal here, says Nussbaum, is one of “lowering the status of sex offenders and raising the status of good people like herself.”
There is, however, one particular type of anger that Nussbaum views as valuable: what she calls “transition anger.” This refers to anger that is followed by “the Transition,” or the “healthy segue into forward-looking thoughts of welfare, and, accordingly, from anger into compassionate hope.” “In a sane and not excessively anxious and status-focused person,” she writes, “anger’s idea of retribution or payback is a brief dream or cloud, soon dispelled by saner thoughts of personal and social welfare.” However, in the attention economy, outrage cascades in such a way that the “Transition” rarely, if ever, has any chance to occur. What results, then, is unbridled mobocracy, or mob rule.
One might object here and say that “mob justice” is better than no justice at all. Nussbaum would seem to disagree: “when there is great injustice,” she says, “we should not use that fact as an excuse for childish and undisciplined behavior.” And while “accountability expresses society’s commitment to important values,” it “does not require the magical thinking of payback.” In other words, recognizing that killing Cecil the Lion was wrong, and holding those involved accountable, in no way requires – or justifies – status-downranking behaviors such as shaming them or trying to destroy their reputations and livelihoods.
In 1838 a young Abraham Lincoln gave a speech at the Young Men’s Lyceum in Springfield, Illinois, in which he warned of the threat that outrage and the mobocratic impulses it engenders pose to democracy and justice:
[T]here is, even now, something of ill-omen, amongst us. I mean the increasing disregard for law which pervades the country; the growing disposition to substitute the wild and furious passions, in lieu of the sober judgment of Courts; and the worse than savage mobs, for the executive ministers of justice … Thus, then, by the operation of this mobocratic spirit, which all must admit, is now abroad in the land, the strongest bulwark of any Government, and particularly of those constituted like ours, may effectually be broken down and destroyed.31
He continued: “There is no grievance that is a fit object of redress by mob law.” Mobocratic “justice” is no justice worth having, and this is only partly because of the outcomes it tends to produce. It’s also because of the way mobocracy goes about producing them.
Legal professionals have a saying: “Justice is the process, not the outcome.”32 The process of mobocratic “justice” fueled by viral outrage cascading online is one of caprice, arbitrariness, and uncertainty. So it should come as no surprise that mob rule is precisely the route by which, as Socrates warns in Plato’s Republic, societies slide from democracy into tyranny.33
Unfortunately, mob rule is hard-coded into the design of the attention economy. In this way, the attention economy can be considered a kind of society-wide utility function that optimizes for extremism, which may at times even manifest as terrorism: it creates an environment in which extremist actors, causes, or groups who feed on outrage can flourish. As the writer Tobias Rose-Stockwell has put it, “this is the uncomfortable truth of terrorism’s prominence in our lives: We have built an instant distribution system for its actual intent – Terror.”34
On an individual level, the proliferation of outrage creates more fear and anxiety in our lives. A headline of an article on the satirical news site The Onion reads, “Blogger Takes Few Moments Every Morning To Decide Whether To Feel Outraged, Incensed, Or Shocked By Day’s News.”35 It also contributes to the “stickiness,” or the compulsive effects of the medium, that keep us “hooked” and continually coming back for more. It can also skew our view of the world by giving us the impression that things are much worse than they actually are. In his essay A Free Man’s Worship, Bertrand Russell writes, “indignation is still a bondage, for it compels our thoughts to be occupied with an evil world; and in the fierceness of desire from which rebellion springs there is a kind of self-assertion which it is necessary for the wise to overcome.”36 Or, as a worker in a Russian “troll house” put it, “if every day you are feeding on hate, it eats away at your soul.”37
When the attention economy amplifies moral outrage in a way that moralizes political division, it clears the way for the tribalistic impulse to claim for one’s own group the mantle of representing the “real” or “true” will of the people as a whole. This, for Jan-Werner Müller in What Is Populism?, is the essence of the concept of “populism.”38
In recent years we’ve witnessed a flood of political events across Western liberal democracies that have been described as “populist” in character. Yet the term’s definition has remained stubbornly mercurial. Some have used it to refer to particularly emotive styles of collective action. Some have used it to mean antielitism, others antipluralism. And some simply use it to describe a type of politics that seems vaguely problematic. Our conceptions of populism have themselves been polarized.
Müller offers a helpful corrective. In his book, he writes that populism is “a particular moralistic imagination of politics, a way of perceiving the political world that sets a morally pure and fully unified … people against elites who are deemed corrupt or in some other way morally inferior.” He says that “populism is about making a certain kind of moral claim,” namely that “only some of the people are really the people.” In The Social Contract, Rousseau warned of the risk that “particular wills could replace the general will in the deliberations of the people.” Müller’s conception of populism can thus be seen as a kind of moralized version of that fragmentation of collective identity. But the development of Rousseau’s general will “requires actual participation by citizens; the populist, on the other hand, can divine the proper will of the people on the basis of what it means, for instance, to be a ‘real American.’”
The work of Berkeley cognitive linguist George Lakoff is extremely relevant here. For several years he has been calling attention to the way in which American politics may be read as the projection of family systems dynamics onto the body politic: in this reading, the right is the “strict father” whereas the left is the “nurturant parent.”39 (Notably, in 2004 one of the views most highly correlated with voting Republican was support for corporal punishment, or “spanking” one’s children.)40 Lakoff explains, “the basic idea is that authority is justified by morality, and that, in a well-ordered world, there should be a moral hierarchy in which those who have traditionally dominated should dominate.” He continues, “The hierarchy is God above man; man above nature; the rich above the poor; employers above employees; adults above children; Western culture above other cultures; our country above other countries. The hierarchy also extends to men above women, whites above nonwhites, Christians above non-Christians, straights above gays.” “Since this is seen as a ‘natural’ order,” he adds, “it is not to be questioned.”41
It’s easy to spot examples of populism, on this particular definition, across the political spectrum in recent years. On the right, it manifests in appeals to rural American voters as the “real Americans,” in “birtherism,” and in Nigel Farage’s hailing of the UK’s “Brexit” vote as a “victory for real people.” On the left, it manifests in appeals to “the 99%” (i.e., we are “the people,” if you round up), as well as in various strands of identity politics.
Müller writes that populists “can accurately be described as ‘enemies of institutions’ – although not of institutions in general” – only of “mechanisms of representation that fail to vindicate their claim to exclusive moral representation.” In this light, calls on the American left to abolish the electoral college in the wake of the 2016 US presidential election (in which Hillary Clinton won the popular vote but lost the electoral vote) may be read as similarly “impulsive” desires to get rid of intermediary regulatory systems. As Müller puts it, “Everything that liberals from Montesquieu and Tocqueville onward once lauded as moderating influences – what they called intermediate institutions – disappears here in favor of Urbinati’s ‘direct representation.’”
Importantly, Müller also writes that political crises don’t cause populism: “a crisis – whether economic, social, or ultimately also political – does not automatically produce populism” of this sort. Nor can populism merely be chalked up to “frustration,” “anger,” or “resentment” – to take such a view would be not only uncharitable but patronizing, even a dereliction of one’s duties as a citizen. As Müller writes, “simply to shift the discussion to social psychology (and treat the angry and frustrated as potential patients for a political sanatorium) is to neglect a basic democratic duty to engage in reasoning.”
Yet the technologies of the digital attention economy don’t promote or select for the kind of reasoning, deliberation, or understanding that’s necessary to take political action beyond the white-hot flash of outrage and revolution. As Wael Ghonim, the Egyptian activist who set up the Facebook group that was instrumental in sparking the Arab Spring, said in a talk called “The Algorithms of Fear”:
We who use the Internet now “like” or we flame – but there’s [very little] now happening [algorithmically] to drive people into the more consensus-based, productive discussions we need to have, to help us make civic progress. Productive discussions aren’t getting the [media] distribution they deserve. We’re not driving people to content that could help us, as a society … come together without a flame war … You can build algorithms and experiences that are designed to get the best out of people, and you can build algorithms and experiences that drive out the worst. It’s our job as civic technologists to build experiences that drive the best. We can do that. We must do that now.42
What’s the best part of people that our technologies should be designed to bring out? What should the system be inducing in us instead of outrage? Nussbaum writes, “the spirit that should be our goal has many names: Greek philophrosunē, Roman humanitas, biblical agapē, African ubuntu – a patient and forbearing disposition to see and seek the good rather than to harp obsessively on the bad.”
The problem, of course, is that the “patient and forbearing disposition to see and seek the good” does not grab eyeballs, and therefore does not sell ads. “Harping obsessively on the bad,” however, does. The dynamics of the attention economy are thus structurally set up to undermine the noblest aims and virtues worth pursuing. Again, outrage and anger are not inherently bad – they are understandable human responses to injustice, and they can even make us feel happy, in a way.43 However, because the attention economy contains many incentives to induce anger but none to induce the “Transition,” outrage rapidly cascades into mobocracy on a societal, if not global, scale.
By compromising the “daylight” of our attention, then, the digital attention economy directly militates against the foundations of democracy and justice. It undermines fundamental capacities that are preconditions for self-determination at both the individual and the collective level. In fact, to the extent that we take these capacities to be among our uniquely human guiding lights, there’s a very real sense in which epistemic distraction literally dehumanizes.