
I - Distraction by Design

Published online by Cambridge University Press:  30 May 2018

James Williams
Affiliation:
University of Oxford

Stand out of our Light: Freedom and Resistance in the Attention Economy, pp. 5–40
Publisher: Cambridge University Press
Print publication year: 2018

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 (https://creativecommons.org/cclicenses/).

2 The Faulty GPS

Five years ago I was working for Google and advancing a mission that I still admire for its audacity of scope: “to organize the world’s information and make it universally accessible and useful.”1 But one day I had an epiphany: there was more technology in my life than ever before, but it felt harder than ever to do the things I wanted to do.

I felt … distracted. But it was more than just “distraction” – this was some new mode of deep distraction I didn’t have words for. Something was shifting on a level deeper than mere annoyance, and its disruptive effects felt far more perilous than the usual surface-level static we expect from day-to-day life. It felt like something disintegrating, decohering: as though the floor was crumbling under my feet, and my body was just beginning to realize it was falling. I felt the story of my life being compromised in some fuzzy way I couldn’t articulate. The matter of my world seemed to be sublimating into thin air. Does that even make sense? It didn’t at the time.

Whatever it was, this deep distraction seemed to have the exact opposite effect of the one technology is supposed to have on our lives. More and more, I found myself asking the question, “What was all this technology supposed to be doing for me?”

Think for a moment about the goals you have for yourself: your goals for reading this book, for later today, for this week, even for later this year and beyond. If you’re like most people, they’re probably goals like “learn how to play piano,” “spend more time with family,” “plan that trip I’ve been meaning to take,” and so on. These are real goals, human goals. They’re the kinds of goals that, when we’re on our deathbeds, we’ll regret not having achieved. If technology is for anything, it’s for helping us pursue these kinds of goals.

A few years ago I read an article called “Regrets of the Dying.”2 It was about a businesswoman whose disillusionment with the day-to-day slog of her trade had led her to leave it, and to start working in a very different place: in rooms where people were dying. She spent her days attending to their needs and listening to their regrets, and she recorded the most common things they wished they’d done, or hadn’t done, in life: they’d worked too hard, they hadn’t told people how they felt, they hadn’t let themselves be happy, and so on. This, it seems to me, is the proper perspective – the one that’s truly our own, if any really is. It’s the perspective that our screens and machines ought to help us circle back on, again and again: because whatever we might choose to want, nobody chooses to want to regret.

Think back on your goals from a moment ago. Now try to imagine what your technologies’ goals are for you. What do you think they are? I don’t mean the companies’ mission statements and high-flying marketing messages – I mean the goals on the dashboards in their product design meetings, the metrics they’re using to define what success means for your life. How likely do you think it is that they reflect the goals you have for yourself?

Not very likely, sorry to say. Instead of your goals, success from their perspective is usually defined in the form of low-level “engagement” goals, as they’re often called. These include things like maximizing the amount of time you spend with their product, keeping you clicking or tapping or scrolling as much as possible, or showing you as many pages or ads as they can. A peculiar quirk of the technology industry is its ability to drain words of their deeper meanings; “engagement” is one such word. (Incidentally, it’s fitting that this term can also refer to clashes between armies: here, the “engagement” is fundamentally adversarial as well.)

But these “engagement” goals are petty, subhuman goals. No person has these goals for themselves. No one wakes up in the morning and asks, “How much time can I possibly spend using social media today?” (If there is someone like that, I’d love to meet them and understand their mind.)

What this means, though, is that there’s a deep misalignment between the goals we have for ourselves and the goals our technologies have for us. This seems to me to be a really big deal, and one that nobody talks about nearly enough. We trust these technologies to be companion systems for our lives: we trust them to help us do the things we want to do, to become the people we want to be.

In a sense, our information technologies ought to be GPSes for our lives. (Sure, there are times when we don’t know exactly where we want to go in life. But in those cases, technology’s job is to help us figure out what our destination is, and to do so in the way we want to figure it out.) But imagine if your actual GPS was adversarial against you in this way. Imagine that you’ve just purchased a new one, installed it in your car, and on the first use it guides you efficiently to the right place. On the second trip, however, it takes you to an address several streets away from your intended destination. It’s probably just a random glitch, you think, or maybe it needs a map update. So you give it little thought. But on the third trip, you’re shocked when you find yourself miles away from your desired endpoint, which is now on the opposite side of town. These errors continue to mount, and they frustrate you so much that you give up and decide to return home. But then, when you enter your home address, the system gives you a route that would have you drive for hours and end up in a totally different city.

Any reasonable person would consider this GPS faulty and return it to the store, if not chuck it out their car window. Who would continue to put up with a device they knew would take them somewhere other than where they wanted to go? What reasons could anyone possibly have for continuing to tolerate such a thing?

No one would put up with this sort of distraction from a technology that directs them through physical space. Yet we do precisely this, on a daily basis, when it comes to the technologies that direct us through informational space. We have a curiously high tolerance for poor navigability when it comes to the GPSes for our lives – the information and communication systems that now direct so much of our thought and action.

When I looked around the technology industry, I began to see with new eyes the dashboards, the metrics, and the goals that were driving much of its design. These were the destinations we were entering into the GPSes guiding the lives of millions of human beings. I tried imagining my life reflected in the primary color numbers incrementing on screens around me: Number of Views, Time on Site, Number of Clicks, Total Conversions. Suddenly, these goals seemed petty and perverse. They were not my goals – or anyone else’s.

I soon came to understand that the cause in which I’d been conscripted wasn’t the organization of information at all, but of attention. The technology industry wasn’t designing products; it was designing users. These magical, general-purpose systems weren’t neutral “tools”; they were purpose-driven navigation systems guiding the lives of flesh-and-blood humans. They were extensions of our attention. The Canadian media theorist Harold Innis once said that his entire career’s work proceeded from the question, “Why do we attend to the things to which we attend?”3 I realized that I’d been woefully negligent in asking this question about my own attention.

But I also knew this wasn’t just about me – my deep distractions, my frustrated goals. Because when most people in society use your product, you aren’t just designing users; you’re designing society. But if all of society were to become as distracted in this new, deep way as I was starting to feel, what would that mean? What would be the implications for our shared interests, our common purposes, our collective identities, our politics?

In 1985 the educator and media critic Neil Postman wrote Amusing Ourselves to Death, a book that’s become more relevant and prescient with each passing day.4 In its foreword, Postman recalls Aldous Huxley’s observation from Brave New World Revisited that the defenders of freedom in his time had “failed to take into account … man’s almost infinite appetite for distractions.”5 Postman contrasts the indirect, persuasive threats to human freedom that Huxley warns about in Brave New World with the direct, coercive sort of threats on which George Orwell focuses in Nineteen Eighty-Four. Huxley’s foresight, Postman writes, lay in his prediction that freedom’s nastiest adversaries in the years to come would emerge not from the things we fear, but from the things that give us pleasure: it’s not the prospect of a “boot stamping on a human face – forever” that should keep us up at night, but rather the specter of a situation in which “people will come to love their oppression, to adore the technologies that undo their capacities to think.”6 A thumb scrolling through an infinite feed, forever.

I wondered whether, in the design of digital technologies, we’d made the same mistake as Huxley’s contemporaries: I wondered whether we’d failed to take into account our “almost infinite appetite for distractions.” I didn’t know the answer, but I felt the question required urgent, focused attention.

3 The Age of Attention

To see what is in front of one’s nose needs a constant struggle.

Orwell

When I told my mother I was moving to the other side of the planet to study technology ethics at a school that’s almost three times as old as my country, she asked, “Why would you go somewhere so old to study something so new?” In a way, the question contained its own answer. Working in the technology industry, I felt, was akin to climbing a mountain, and that’s one way – a very up-close and hands-on way – to get to know a mountain. But if you want to see its shape, paint its profile, understand its relations with the wider geography – to do that, you have to go a few miles away and look back. I felt that my inquiry into the faulty GPSes of my life required this move. I needed distance, not only physical but also temporal and ultimately critical, from the windy yet intriguing cliffs of the technology industry. “Amongst the rocks one cannot stop or think.”1 Sometimes, the struggle to see what’s in front of your nose is a struggle to get away from it so you can see it as a whole.

I soon found that my quest to gain distance from the mountain of the technology industry was paralleling, and in many ways enabling, a more general quest to gain distance from the assumptions of the Information Age altogether. I suspect that no one living in a named age – the Bronze Age, the Iron Age – ever called it by the name we give it now. They no doubt used other names rooted in assumptions of their times that they could not imagine would ever be overturned. So it’s always both bemused and annoyed me, in roughly equal measure, that we so triumphantly call our time the “Information Age.” Information is the water in which we swim; we perceive it to be the raw material of the human experience. So the dominant metaphor for the human is now the computer, and we interpret the challenges of our world primarily in terms of the management of information.

This is, of course, the standard way people talk about digital technologies: it’s assumed that information is fundamentally what they’re managing, manipulating, and moving around. For example, ten seconds before I started writing this sentence my wife walked into the room and said, “I just heard the internet described on the radio as ‘a conveyor belt of fraudulent information.’” Every day, we hear dozens of remarks like this: on the radio, in the newspaper, and in conversations with others. We instinctively frame issues pertaining to digital technologies in informational terms, which means that the political and ethical challenges we end up worrying about most of the time also involve the management of information: privacy, security, surveillance, and so on.

This is understandable. For most of human history, we’ve lived in environments of information scarcity. In those contexts, the implicit goal of information technologies has been to break down the barriers between us and information. Because information was scarce, any new piece of it represented a novel addition to your life. You had plenty of capacity to attend to it and integrate it into your general picture of the world. For example, a hundred years ago you could stand on a street corner in a city and start preaching, and people would probably stop and listen. They had the time and attention to spare. And because information has historically been scarce, the received wisdom has been that more information is better. The advent of digital computing, however, broke down the barriers between us and information to an unprecedented degree.

Yet, as the noted economist Herbert Simon pointed out in the 1970s, when information becomes abundant, attention becomes the scarce resource:

in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.2

Since Simon’s time, the ubiquity of small, constantly connected, general-purpose computers has produced this information–attention inversion on a global scale. Today you can access most any piece of information, or contact most anyone you wish, via a small device in your pocket not much bigger than a cigarette box. This capacity for instantaneous information and connection has come to form the background of our experience astonishingly quickly. That is to say, our informational tools have rapidly become our informational environment. What’s more, predigital media such as television and radio have largely been digitally retrofitted, rendering the networked digital environment a constant presence in human life. Today, in the average household in North America, you will find thirteen internet-connected devices.3

This inversion between information and attention has so completely pervaded our lives that it’s now (perhaps paradoxically) harder for us to notice its effects. There seems to have been a period around the time the field of cybernetics, or the science of control systems, was emerging, when it was easier to recognize the nature of this shift. This is the period in which Simon was writing, and when the Canadian media theorist Marshall McLuhan and others were beginning to put the concept of “media ecology” on the radar of popular culture.4 Now, however, we’ve pretty much lost all touch with any perceptual benchmarks against which we might judge how utterly our information technologies have enveloped our lives. We get fragmentary glimpses of that old world from time to time: when we go camping, when we take a long flight without internet connectivity, when our phone dies for several days, or when we intentionally take a digital “detox.” But these increasingly rare occurrences are exceptions, not the rule. Barring some unthinkable global catastrophe, the old world of information scarcity seems to be gone for good.

But what does it really mean to say that information abundance produces attention scarcity? Abundance can only be abundant relative to some threshold, so we might ask, “What is information now abundant relative to?” One answer would be “The amount of information available historically.” While true, this doesn’t seem like the really relevant threshold we should be interested in. For our purposes, we’re only incidentally concerned with the historical story here: the mere increase in information between two time points isn’t, in itself, a problem. Rather, the relevant threshold seems to be a functional one: what matters to us is whether the amount of information is above or below the threshold of what can be well processed given existing limitations.

To illustrate what I mean, consider the video game Tetris. The goal of Tetris is to rotate, stack, and clear different configurations of blocks as they rain down one by one from off screen, which they do at a constantly increasing rate of speed. The total number of bricks waiting off screen for you to stack is infinite – the game can keep going for as long as you can – but their infinitude, their abundance, is not the problem. The challenge of the game, and what ultimately does you in, is the increasing speed at which they fall. In the same way, information quantity as such is only important insofar as it enables information velocity. At extreme speeds, processing fails.

So the main risk information abundance poses is not that one’s attention will be occupied or used up by information, as though it were some finite, quantifiable resource, but rather that one will lose control over one’s attentional processes. In other words, the problems in Tetris arise not when you stack a brick in the wrong place (though this can contribute to problems down the line), but rather when you lose control of the ability to direct, rotate, and stack the bricks altogether.

It’s precisely in this area – the keeping or losing of control – where the personal and political challenges of information abundance, and attention scarcity, arise. To say that information abundance produces attention scarcity means that the problems we encounter are now less about breaking down barriers between us and information, and more about putting barriers in place. It means that the really important sort of censorship we ought to worry about pertains less to the management of information, and more to the management of attention.

Here’s the problem: Many of the systems we’ve developed to help guide our lives – systems like news, education, law, advertising, and so on – arose in, and still assume, an environment of information scarcity. We’re only just beginning to explore what these systems should do for us, and how they need to change, in this new milieu of information abundance.

We call our time the Information Age, but I think a better name for it would be the “Age of Attention.” In the Age of Attention, digital technologies are uniquely poised to help us grapple with the new challenges we face – challenges which are, fundamentally, challenges of self-regulation.

4 Bring Your Own Boundaries

Who will be great, must be able to limit himself.

Goethe

I mostly grew up in west Texas, in a town called Abilene, which is big enough that you might have heard it in country songs, where it rhymes with names like Eileen or Darlene, or phrases like “treat you mean” or “I ever seen,” but it’s still small enough that when I was in high school Microsoft Word would autocorrect its name to “abalone,” which refers to a species of marine snail with a shell that’s tough and cloddish on the outside, but slippery and rainbow-like within, as though someone had tried to flush out the little being inside with gasoline.

In my senior year of high school in Abilene I signed up for calculus, a class that required me to have a graphing calculator – one of those bigger models, with a dot-matrix display that lets you visualize the implications of your equations when they get too complex to imagine in your head, or to work out easily on paper. So I acquired a Texas Instruments TI-83, the latest model, which had come out just a couple of years earlier. An older model would have sufficed, but the TI-83 had native support for something called assembly programming languages, which meant you could load programs onto it that did anything, not just graph equations. This meant that practically, it wasn’t just a “calculator” anymore; it was a full-fledged, “general-purpose” computer. One of my classmates found a program somewhere for the game Tetris, and soon enough I had that loaded onto my calculator too. When class got boring, I’d sometimes load the Tetris program and play it to pass the time. Before long, I found myself realizing I’d opened the game and started playing it automatically, without consciously deciding to do so. It was just so convenient, having fun waiting a few key-clicks away – and it was usually far more rewarding than listening to the teacher drone on about integrals and differentials. That is to say, it was more immediately rewarding – right then, in that moment.

Soon, I started falling behind in class. Distracted by calculator-Tetris, my grades began to slide. This wasn’t anyone else’s fault, of course; I had loaded the program onto my calculator, and I was the one who kept opening and playing the game. But I didn’t want to tell anyone about the problem because I was embarrassed and ashamed to have let myself get derailed by so trivial a thing. I kept putting off my day of reckoning with this distraction, and its effects continued to mount. I carried my constant knowledge of the problem with me, as well as my failure to look it in the face, which made me turn to the quick pleasures of its immediate rewards even more. I hated how impulsive and weak of will I had become, but I kept turning again to the very cause of it to find a consolation that I knew was fleeting and illusory. The bricks kept falling quicker. I kept misstacking them. The pile kept getting higher. The music kept getting faster.

The “game over” moment finally came on a school trip in a nearby town, where I had been scheduled to participate in a journalism competition. At the last minute, word had come through from my school that I was no longer eligible to compete because I had failed my last calculus test. I had never failed a test in my life.

If you wanted to train all of society to be as impulsive and weak-willed as possible, how would you do it? One way would be to invent an impulsivity training device – let’s call it an iTrainer – that delivers an endless supply of informational rewards on demand. You’d want to make it small enough to fit in a pocket or purse so people could carry it anywhere they went. The informational rewards it would pipe into their attentional world could be anything, from cute cat photos to tidbits of news that outrage you (because outrage can, after all, be a reward too). To boost its effectiveness, you could endow the iTrainer with rich systems of intelligence and automation so it could adapt to users’ behaviors, contexts, and individual quirks in order to get them to spend as much time and attention with it as possible.

So let’s say you build the iTrainer and distribute it gradually into society. At first, people’s willpower would probably be pretty strong and resistant. The iTrainer might also cause some awkward social situations, at least until enough people had adopted it that it was widely accepted, and not seen as weird. But if everyone were to keep using it over several years, you’d probably start seeing it work pretty well. Now, the iTrainer might make people’s lives harder to live, of course; it would no doubt get in the way of them pursuing their desired tasks and goals. Even though you created it, you probably wouldn’t let your kids use one. But from the point of view of your design goals – in other words, making the world more impulsive and weak-willed – it would likely be a roaring success.

Then, what if you wanted to take things even further? What if you wanted to make everyone even more distracted, angry, cynical – and even unsure of what, or how, to think? What if you wanted to troll everyone’s minds? You’d probably create an engine, a set of economic incentives, that would make it profitable for other people to produce and deliver these rewards – and, where possible, you’d make these the only incentives for doing so. You don’t want just any rewards to get delivered – you want people to receive rewards that speak to their impulsive selves, rewards that are the best at punching the right buttons in their brains. For good measure, you could also centralize the ownership of this design as much as possible.

If you’d done all this ten years ago, right about now you’d probably be seeing some interesting results. You’d probably see nine out of ten people never leaving home without their iTrainer.1 Almost half its users would say they couldn’t even live without their device.2 You’d probably see them using it to access most of the information they consume, across every context of life, from politics to education to celebrity gossip and beyond. You’d probably find they were using the iTrainer hundreds of times per day, spending a third of their waking lives engaged with it, and it would probably be the first and last thing they engaged with every day.3

If you wanted to train society to be as weak-willed and impulsive as possible, you could do a whole lot worse than this. In any event, after unleashing the iTrainer on the world, it would be absurd to claim that it hadn’t produced significant changes in the thoughts, behaviors, and habits of its users. After all, everyone would have been part of a rigorous impulsivity training program for many years! What’s more, this program would have effectively done an end run around many of our other societal systems; it would have opened a door directly onto our attentional capacities, and become a lens through which society sees the world. It would, of course, be a major undertaking to try to understand the full story about what effects this project had had in people’s lives – not only as individuals, but also for society as a whole. It would certainly have had major implications for the way we had been collectively discussing and deciding questions of great importance. And it would certainly have given us, as did previous forms of media, political candidates that were made in its image.

Of course, the iTrainer project would never come anywhere close to passing a research ethics review. Launching such a project of societal reshaping, and letting it run unchecked, would clearly be utterly outrageous. So it’s a good thing this is all just a thought experiment.

The new challenges we face in the Age of Attention are, on both individual and collective levels, challenges of self-regulation. Having some limits is inevitable in human life. In fact, limits are necessary if we are to have any freedom at all. As the American philosopher Harry Frankfurt puts it: “What has no boundaries has no shape.”4 Reason, relationships, racetracks, rules of games, sunglasses, walls of buildings, lines on a page: our lives are full of useful constraints to which we freely submit so that we can achieve otherwise unachievable ends. “To be driven by our appetites alone is slavery,” wrote Rousseau in The Social Contract, “while to obey a law that we have imposed on ourselves is freedom” (p. 59). Even our old friend Diogenes, lover of unrestrained living that he was, said, “for the conduct of life we need right reason or a halter.”5 When we apply restraints upon ourselves that channel our activities toward our higher goals – some call these restraints “commitment devices” – we reach heights that would have been otherwise unreachable. If Odysseus had not instructed his sailors to tie him to the mast (and to plug up their own ears with wax), he would never have heard the sirens’ song and lived to tell about it.

For most of human history, when you were born you inherited an off-the-shelf package of religious and cultural constraints. This was a kind of library of limits that was embedded in your social and physical environment. These limits performed certain self-regulatory tasks for you so you didn’t have to take them on yourself. The packages included habits, practices, rituals, social conventions, moral codes, and a myriad of other constraints that had typically evolved over many centuries, if not millennia, to reliably guide – or shall we say design – our lives in the direction of particular values, and to help us give attention to the things that matter most.

In the twentieth century the rise of secularism and modernism in the West occasioned the collapse – if not the jettisoning – of many of these off-the-shelf packages of constraints in the cause of the liberation of the individual. In many cases, this rejection occurred on the basis of philosophical or cosmological disagreements with the old packages. This has, of course, had many great benefits. Yet by rejecting entire packages of constraint, we’ve also rejected those constraints that were actually useful for our purposes. “The left’s project of liberation,” writes the American philosopher Matthew Crawford, “led us to dismantle inherited cultural jigs that once imposed a certain coherence (for better and worse) on individual lives. This created a vacuum of cultural authority that has been filled, opportunistically, with attentional landscapes that get installed by whatever ‘choice architect’ brings the most energy to the task – usually because it sees the profit potential.” The German philosopher Peter Sloterdijk, in his book You Must Change Your Life, has called for a reclamation of this particular aspect of religion – its habits and practices – which he calls “anthropotechnics.”6

When you dismantle existing boundaries in your environment, it frees you from their limitations, but it requires you to bring your own boundaries where you didn’t have to before. Sometimes, taking on this additional self-regulatory burden is totally worth it. Other times, though, the cost is too high. According to the so-called “ego-depletion” hypothesis, our self-control, our willpower, is a finite resource.7 So when the self-regulatory cost of bringing your own boundaries is high enough, it takes away willpower that could have been spent on something else.

This increase in self-regulatory burden may pose a unique challenge for those living in poverty, who, research suggests, are more likely to begin from a place of willpower depletion relative to everyone else. This is largely due to the many decisions and trade-offs they must make on a day-to-day basis that those who don’t live in poverty don’t have to make.8 Diogenes once said that “disabled” ought to mean “poor,” and to the extent that living in poverty means one’s willpower can be more easily depleted, he was more right than he knew.9 But the wider implication here is that these problems of self-regulation in the face of information abundance aren’t just “first-world problems.” They carry large implications for the societal goals of justice and equality. If the first “digital divide” disenfranchised those who couldn’t access information, today’s digital divide disenfranchises those who can’t pay attention.10

It’s against this cultural backdrop, of having to bring our own boundaries where we didn’t before, that digital technologies have posed these new challenges of self-regulation. Like the iTrainer in my thought experiment, digital technologies have transformed our experiential world into a never-ending flow of potential informational rewards. They’ve become the playing field on which everything now competes for our attention. Similar to economic abundance, “if these rewards arrive faster than the disciplines of prudence can form, then self-control will decline with affluence: the affluent (with everyone else) will become less prudent.”11 In a sense, information abundance requires us to invert our understanding of what “information technologies” do: Rather than overcoming barriers in the world, they increasingly exist to help us put barriers in place. The headphone manufacturer Bose now sells a product called Hearphones that allows the user to block out all sounds in their environment except the ones coming from their desired source – to focus on a conversation in a loud room, for example. The product’s website reads: “Focus on the voices you want to hear – and filter out the noises you don’t – so you can comfortably hear every word. From now on, how you hear is up to you.”12 We could also read this tagline as a fitting description of the new challenges in the Age of Attention as a whole.

The increasing rate of technological change further amplifies these challenges of attention and self-regulation. Historically, new forms of media took years, if not generations, to be adopted, analyzed, and adapted to. Today, however, new technologies can arrive on the scene and rapidly scale to millions of users in the course of months or even days. The constant stream of new products this unleashes – along with the ongoing optimization of features within products already in use – can result in a situation in which users are in a constant state of learning and adaptation to new interaction dynamics, familiar enough with their technologies to operate them, but never so fully in control that they can prevent the technologies from operating on them in unexpected or undesirable ways. This keeps us living on what I sometimes call a “treadmill of incompetence.”

In his essay “Reflections on Progress”, Aldous Huxley writes, “however powerful and well trained the surface will is, it is not a match for circumstances.”13 Indeed, one of the major lessons of the past several decades of psychology research has been the power of people’s environments in shaping their thoughts and behaviors. On one level, these effects may be temporary, such as changes in one’s mood. As Nikola Tesla observed, “One may feel a sudden wave of sadness and rake his brain for an explanation when he might have noticed that it was caused by a cloud cutting off the rays of the sun.”14 Yet our environments can also have deep, long-lasting influences on our underlying capacities – even how autonomous (or nonautonomous) we are able to be. The Oxford philosopher Neil Levy writes in his book Neuroethics, “Autonomy is developmentally dependent upon the environment: we become autonomous individuals, able to control our behavior in the light of our values, only if the environment in which we grow up is suitably structured to reward self-control.”15

Yet in the absence of environments that reward self-control or provide effective commitment devices, we’re left to our own devices – and given our inherent scarcity of attention, the resulting cognitive overload often makes bringing our own boundaries extremely challenging, if not prohibitive. Limiting our lives in the right way was already hard enough, but in the Age of Attention we encounter even stronger headwinds. Of course, digital technology is uniquely poised to help us deal with these new challenges. And if technology exists to solve problems in our lives, it ought to help us surmount these challenges.

Unfortunately, far from helping us mitigate these challenges of self-regulation, our technologies have largely been amplifying them. Rather than helping us to more effectively stack and clear the Tetris bricks in our lives, they’ve been making the blocks fall faster than we ever imagined they could.

5 Empires of the Mind

The empires of the future are the empires of the mind.

Churchill

There was once a man walking down a road, wearing a cloak to keep warm. The North Wind noticed him and said to the Sun, “Let’s see which one of us can get that man to take off his cloak. I bet I’ll surely win, for no one can resist the gales of my mighty breath!” The Sun agreed to the contest, so the North Wind went first and started blowing at the man as hard as he could. The man’s hat flew off; leaves swirled in the air all around him. He could barely take a step forward, but he clutched his cloak tightly – and no matter how hard the North Wind blew, the man’s cloak stayed on. “What? Impossible!” the North Wind said. “Well, if I have failed,” he said to the Sun, “then surely there is no hope for you.” “We shall see,” said the Sun. The Sun welled up his chest and made himself as bright as he could possibly be. The man, still walking, had to shield his eyes because the Sun’s shine was so intense. Soon the man grew so warm inside his wool cloak that he began to feel faint: he started to stagger, sweat dripping off his head into the dirt. Breathing deeply, he untied his cloak and flung it over his shoulder, all the while scanning his environs for a source of water where he could cool off. The Sun’s persuasion had won out where the North Wind’s coercion could not.

This story comes from Aesop, the Greek fabulist who lived a few hundred years before Diogenes ever trolled the streets of Corinth. Like Diogenes, Aesop was also a slave at one point in his life before eventually being freed. Aesop died in Delphi, where the famous oracle lived upon whose temple was inscribed that famous maxim “Know Thyself.” You probably know some of Aesop’s other fables – “The Tortoise and the Hare,” “The Ant and the Grasshopper,” “The Dog and its Reflection” – but “The North Wind and the Sun” is one of my favorites, because it shows us that persuasion can be just as powerful, if not more so, than coercion.1

Of all the ways humans try to influence each other, persuasion might be the most prevalent and consequential. A marriage proposal. A car dealer’s sales pitch. The temptation of Christ. A political stump speech. This book. When we consider the stories of our lives, and the stories that give our lives meaning, we find that they often turn on pivot points of persuasion. Since ancient Greece, persuasion has been understood primarily in its linguistic form, as rhetorike techne, or the art of the orator. Aristotle identified what he saw as three pillars of rhetoric – ethos, pathos, and logos – which roughly correspond to our notions of authority, emotion, and reason. And into medieval times, persuasion held a central position in education, alongside grammar and logic, as one-third of the classical trivium.

Yet all design is “persuasive” in a broad sense; it all directs our thoughts or actions in one way or another.2 There’s no such thing as a “neutral” technology. All design embodies certain goals and values; all design shapes the world in some way. A technology can no more be neutral than a government can be neutral. In fact, the cyber- in “cybernetics” and the gover- in “government” both stem from the same Greek root: kyber-, “to steer or to guide,” originally used in the context of the navigation of ships. (This nautical metaphor provides a fitting illustration of what I mean: The idea of a “neutral” rudder is an incoherent one. Certainly, a rudder held straight can help you stay the course – but it won’t guide your ship somewhere. Nor, in the same way, does any technology.)

However, some design is “persuasive” in a narrower sense than this. Some design has a form that follows directly from a specific representation of users’ thoughts or behaviors that the designer wants to change. This sort of persuasive design is by no means unique to digital technologies; humans have long designed physical environments toward such persuasive ends. Consider, for instance, the placement of escalators in shopping malls, the music in grocery stores, or the layouts of cities.3 Yet what Churchill said about physical architecture – “we shape our buildings, and afterwards, our buildings shape us” – is just as true of the information architectures in which we now spend so much of our lives.4

For most of human history, persuasive design in this narrower sense has been a more or less handicraft undertaking. It’s had the character of an art rather than a science. As a result, we haven’t worried too much about its power over us. Instead, we’ve kept an eye on coercive, as opposed to persuasive, designs. As Postman pointed out, we’ve been more attuned to the Orwellian than the Huxleyan threats to our freedom.

But now the winds have changed. While we weren’t watching, persuasion became industrialized. In the twentieth century the modern advertising industry came to maturity and began systematically applying new knowledge about human psychology and decision making. In parallel, advertising’s scope expanded beyond the mere provision of information to include the shaping of behaviors and attitudes. By the end of the twentieth century, new forms of electric media afforded advertisers new platforms and strategies for their persuasion, but the true effectiveness of their efforts was still hard to measure. Then, the internet came along and closed the feedback loop of measurement. Very quickly, an unprecedented infrastructure of analytics, experimentation, message delivery, customization, and automation emerged to enable digital advertising practices. Furthermore, networked general-purpose computers were becoming more portable and connected, and people were spending more time than ever with them. Designers began applying techniques and infrastructures developed for digital advertising to advance persuasive goals in the platforms and services themselves. The scalability and increasing profitability of digital advertising made it the default business model, and thus incentive structure, for digital platforms and services. As a result, goals and metrics that served the ends of advertising became the dominant goals and metrics in the design of digital services themselves. By and large, these metrics involved capturing the maximum amount of users’ time and attention possible. In order to win the fierce global competition for our attention, design was forced to speak to the lowest parts of us, and to exploit our cognitive vulnerabilities.

This is how the twenty-first century began: with sophisticated persuasion allying with sophisticated technology to advance the pettiest possible goals in our lives. It began with the AI behind the system that beat the world champion at the board game Go recommending videos to keep me watching YouTube longer.5

There’s no good analogue for this monopoly of the mind the forces of industrialized persuasion now hold – especially on the scale of billions of minds. Perhaps Christian adherents carrying the Bible everywhere they go, or the memorization of full Homeric epics in the Greek oral tradition, or the assignment of Buddhist mantras to recite all day under one’s breath, or the total propaganda machines of totalitarian states. But we must look to the religious, the mythic, the totalistic, to find any remotely appropriate comparison. We have not been primed, either by nature or habit, to notice, much less struggle against, these new persuasive forces that so deeply shape our attention, our actions, and our lives.

This problem is not new just in scale, but also in kind. The empires of the present are the empires of the mind.

On October 26, 1994, if you had fired up your 28.8k modem, double-clicked the icon for the newly released Netscape Navigator web browser, and accessed the website of Wired Magazine, you would have seen a rectangle at the top of the page. In it, tie-dye text against a black background would have asked you, “Have you ever clicked your mouse right HERE? You will.”6 Whether intended as prediction or command, this message – the first banner ad on the web – was more correct than its creators could have imagined. Digital ad spend was projected to pass $223 billion in 2017, and to continue to grow at double-digit rates until at least 2020.7 Digital advertising is by far the dominant business model for monetizing information on the internet today. Many of the most widely used platforms, such as Google, Facebook, and Twitter, are at core advertising companies. As a result, many of the world’s top software engineers, designers, analysts, and statisticians now spend their days figuring out how to direct people’s thinking and behavior toward predefined goals that may not align with their own. As Jeff Hammerbacher, Facebook’s first research scientist, remarked: “The best minds of my generation are thinking about how to make people click ads … and it sucks.”8

As a media dynamic, advertising has historically been an exception to the rule of information delivery in a given medium. It’s the newspaper ads, but not the articles; it’s the billboards, but not the street signs; it’s the TV commercials, but not the programs. In a world of information scarcity, it was useful to make these exceptions to the rule because they gave us novel information that could help us make better purchasing decisions. This has, broadly speaking, been the justification for advertising’s existence in an information-scarce world.

In the mid-twentieth century, as the modern advertising industry was coming to maturity, it started systematically applying new knowledge about human psychology and decision making. Psychologists such as Sigmund Freud had laid the groundwork for the study of unconscious thought, and in the 1970s Daniel Kahneman and Amos Tversky revealed the ways in which our automatic modes of thinking can override more rational rules of statistical prediction.9 In fact a great deal of our everyday experience consists of such automatic, nonconscious processes; our lives take place, as the researchers John Bargh and Tanya Chartrand have said, against the backdrop of an “unbearable automaticity of being.”10 On the basis of all this new knowledge about human psychology and decision making, advertising’s scope continued to expand beyond the informational to the persuasive; beyond shaping behaviors to shaping attitudes.11 And new forms of electric media were giving advertisers new avenues for their persuasion.

Yet most advertising remained faith-based. Without a comprehensive, reliable measurement infrastructure, it was impossible to study the effectiveness of one’s advertising efforts, or to know how to improve on them. As John Wanamaker, a department store owner around the beginning of the twentieth century, is reported to have said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”12 The potential for computing to revolutionize advertising measurement was recognized as early as the 1960s, when advertising agencies began experimenting with large mainframe computers. Companies such as Nielsen were also beginning to use diary and survey panel methods to understand audiences and their consumption behaviors, which marginally improved advertising intelligence by providing access to demographic data. However, these methods were laborious and expensive, and their aggregate data was useful only directionally. Measuring the actual effectiveness of ads was still largely infeasible.

The internet changed all that. Digital technology enabled a Cambrian explosion of advertising measurement. It was now possible to measure – at the level of individual users – people’s behaviors (e.g. page views), intentions (e.g. search queries), contexts (e.g. physical locations), interests (e.g. inferences from users’ browsing behavior), unique identifiers (e.g. device IDs or emails of logged-in users), and more. Also, vastly improved “benchmarking” data – information about the advertising efforts of one’s competitors – became available via market intelligence services like comScore and Hitwise. Web browsers were key in enabling this sea change of advertising measurement, not only because of their new technical affordances, but also because of the precedent they set for subsequent measurement capabilities in other contexts.

In particular, the browser “cookie” – a small file delivered imperceptibly via website code to track user behavior across pages – played an essential role. In his book The Daily You, Joseph Turow writes that the cookie did “more to shape advertising – and social attention – on the web than any other invention apart from the browser itself.”13 Cookies are also emblematic, in their scope-creep, of digital advertising measurement as a whole. Initially, cookies were created to enable “shopping cart” functionality on retail websites; they were a way for the site to keep track of a user as he or she moved from page to page. Soon, however, they were being used to track people between sites, and indeed all across the web. Many groups raised privacy concerns about these scope-creeping cookies, and it soon became commonplace to speak of two main types: “first-party” cookies (cookies created by the site itself) and “third-party” cookies (cookies created by someone else). In 1997 the Internet Engineering Task Force proposed taking away third-party cookies, which sent the online advertising industry into a frenzy.14 Ultimately, though, third-party cookies became commonplace. As unique identifiers at the level of the web-browser session, cookies paved the way for unique identifiers at higher levels, such as the device and even the user. Since 2014, for instance, Google’s advertising platform has been able to track whether you visit a company’s store in person after you see their ad.15
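
To make the mechanics concrete, here is a minimal sketch, in Python, of why a third-party cookie enables cross-site tracking in a way a first-party cookie does not. The domain names, the “tracker,” and the cookie-jar model are illustrative assumptions for this sketch, not a description of how any real browser or ad network is implemented.

```python
# Toy model: a browser's cookie jar is keyed by the domain that set the
# cookie, not by the page being visited. All names here are hypothetical.
import uuid

class Browser:
    def __init__(self):
        self.cookie_jar = {}  # domain -> unique identifier

    def visit(self, page_domain, embedded_third_parties=()):
        self._request(page_domain)            # first-party request
        for tracker in embedded_third_parties:
            self._request(tracker)            # third-party request (e.g. an ad pixel)

    def _request(self, domain):
        if domain not in self.cookie_jar:
            # First contact: the server responds with a Set-Cookie
            # containing a unique identifier, which the browser stores.
            self.cookie_jar[domain] = str(uuid.uuid4())
        # Every later request to this domain sends the SAME identifier back.
        print(f"request to {domain:22s} -> cookie id {self.cookie_jar[domain][:8]}")

browser = Browser()
browser.visit("news.example", embedded_third_parties=["ads.tracker.example"])
browser.visit("shop.example", embedded_third_parties=["ads.tracker.example"])
```

The two sites each get their own first-party cookie, but “ads.tracker.example” sees the same identifier on both visits, which is what lets a third party stitch a user’s browsing together across sites.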

To manage this fire hose of measurement, “analytics” systems – such as Omniture, Coremetrics, and Google Analytics – emerged to serve as unified interfaces for managing one’s advertising as well as nonadvertising data. In doing so, they helped establish the “engagement” metrics of advertising (e.g. number of clicks, impressions, or time on site) as default operational metrics for websites themselves. This effectively extended the design logic of advertising – and particularly attention-oriented advertising (as opposed to advertising that serves users’ intentions) – to the design of the entire user experience.
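
For concreteness, here is a minimal sketch of how the “engagement” metrics named above (impressions, clicks, time on site) might be computed from a raw event log. The event schema, field names, and numbers are hypothetical assumptions for illustration, not the interface of any real analytics product.

```python
# Hypothetical event log: (user_id, timestamp in seconds, event type).
from collections import defaultdict

events = [
    ("u1", 0,  "pageview"),
    ("u1", 40, "click"),
    ("u1", 95, "pageview"),
    ("u2", 10, "pageview"),
    ("u2", 12, "click"),
]

impressions = sum(1 for _, _, kind in events if kind == "pageview")
clicks = sum(1 for _, _, kind in events if kind == "click")

# "Time on site": last event minus first event, per user.
timestamps = defaultdict(list)
for user, ts, _ in events:
    timestamps[user].append(ts)
time_on_site = {user: max(times) - min(times) for user, times in timestamps.items()}

print(impressions, clicks, time_on_site)  # 3 2 {'u1': 95, 'u2': 2}
# Note what is absent: nothing here measures whether the user accomplished
# what they came to do.
```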

In previous media, advertising had largely been an exception to the rule of information delivery – but in digital media, it seemed to have broken down some essential boundary; it seemed now to have become the rule. If advertising was previously said to be “underwriting” the dominant design goals of a medium, in digital media it now seemed to be “overwriting” them with its own. It wasn’t just that the line between advertising and nonadvertising was getting blurry, as with “native advertisements” (i.e. ads that have a similar look and feel to the rest of the content) or product placements (e.g. companies paying YouTube or Instagram “influencers” to use a product). Rather, it seemed that everything was now becoming an ad.

The confluence of these trends has given us the digital “attention economy”, the environment in which digital products and services relentlessly compete to capture and exploit our attention. In the attention economy, winning means getting as many people as possible to spend as much time and attention as possible with one’s product or service. Or, as it’s often said, in the attention economy “the user is the product.”

Think about it: The attention you’re deploying in order to read this book right now (an attention for which, by the way, I’m grateful) – an attention that includes, among other things, the saccades of your eyeballs, the information flows of your executive control function, your daily stockpile of willpower, and the goals you hope reading this book will help you achieve – these and other processes you use to navigate your life are literally the object of competition among many of the technologies you use every day. There are literally billions of dollars being spent to figure out how to get you to look at one thing over another; to buy one thing over another; to care about one thing over another. This is literally the design purpose of many of the technologies you trust to guide your life every day.

Because there’s so much competition for our attention, designers inevitably have to appeal to the lowest parts of us – they have to privilege our impulses over our intentions even further – and exploit the catalog of decision-making biases that psychologists and behavioral economists have been diligently compiling over the last few decades. These biases include things like loss aversion (such as the “fear of missing out,” often abbreviated as FOMO), social comparison, the status quo bias, framing effects, anchoring effects, and countless others.16 My friend Tristan Harris has a nice phrase for this cheap exploitation of our vulnerabilities: the “race to the bottom of the brain stem.”17

Clickbait is emblematic of this petty competition for our attention. Although the word is of recent coinage, “clickbait” has already been enshrined in the Oxford English Dictionary, where it’s defined as “content whose main purpose is to attract attention and encourage visitors to click on a link to a particular web page.” You’ve no doubt come across clickbait on the web, even if you haven’t known it by name. It’s marked by certain recognizable and rage-inducing headline patterns, as seen in, for example: “23 Things Parents Should Never Apologize For,” “This One Surprising Phrase Will Make You Seem More Polite,” or “This Baby Panda Showed Up At My Door. You Won’t Believe What Happened Next.” Clickbait laser-targets our emotions: a study of 100 million articles shared on Facebook found that the most common phrases in “top-performing” headlines were phrases such as “are freaking out,” “make you cry,” and “shocked to see.” It also found that headlines which “appeal to a sense of tribal belonging” drive increased engagement, for instance those of the formulation “X things only [some group] will understand.”18

In the attention economy, this is the game all persuasive design must play – not only the writers of headlines. In fact, there’s a burgeoning industry of authors and consultants helping designers of all sorts draw on the latest research in behavioral science to punch the right buttons in our brains as effectively and reliably as possible.19

One major aim of such persuasive design is to keep users coming back to a product repeatedly, which requires the creation of habits. The closest thing to a bible for designers who want to induce habits in their users is probably Nir Eyal’s book Hooked: How to Build Habit-Forming Products. “Technologists build products meant to persuade people to do what we want them to do,” Eyal writes. “We call these people ‘users’ and even if we don’t say it aloud, we secretly wish every one of them would become fiendishly hooked to whatever we’re making.”20 In the book, Eyal gives designers a four-stage model for hooking users that consists of a trigger, an action, a variable reward, and the user’s “investment” in the product (e.g. of time or money).

The key element here is the variable reward. When you randomize the reward schedule for a given action, it increases the number of times a person is likely to take that action.21 This is the underlying dynamic at work behind the high engagement users have with “infinite” scrolling feeds, especially those with “pull-to-refresh” functionality, which we find in countless applications and websites today such as Facebook’s News Feed or Twitter’s Stream. It’s also used widely in all sorts of video games. In fact, this effect is often referred to as the “slot machine” effect, because it’s the foundational mechanism on which the machine gambling industry relies – and which generates for them over a billion dollars in revenue every day in the United States alone.22 Variable reward scheduling is also the engine of the compulsive, and sometimes addictive, habits of usage that many users struggle to control.23
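
As a small illustration of what a variable reward schedule is, here is a sketch in Python that randomizes the payoff of a “refresh” action. The probability and the messages are made-up assumptions; the point is only that the action is constant while the reward is intermittent.

```python
# Intermittent (variable) reward: same action every time, randomized payoff.
import random

random.seed(42)  # fixed seed so the run is reproducible

def pull_to_refresh(p_reward=0.3):
    """One refresh: occasionally something new appears, usually nothing."""
    return "new posts!" if random.random() < p_reward else "nothing new"

for i in range(10):
    print(f"refresh {i + 1:2d}: {pull_to_refresh()}")
# A fixed schedule (reward every time) is predictable and easy to walk away
# from; the unpredictable schedule is what behavioral research associates
# with persistent, compulsive checking.
```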

Whether we’re using a slot machine or an app that’s designed to “hook” us, we’re doing the same thing; we’re “paying for the possibility of a surprise.”24 With slot machines, we pay with our money. With technologies in the attention economy, we pay with our attention. And, as with slot machines, the benefits we receive from these technologies – namely “free” products and services – are up front and immediate, whereas we pay the attentional costs in small denominations distributed over time. Rarely do we realize how costly our free things are.

Persuasive design isn’t inherently bad, of course, even when it does appeal to our psychological biases. Indeed, it can be used for our benefit. In the area of public policy, for instance, the practice of “nudging” aims to structure people’s environments in ways that help them make decisions that better promote their well-being. However, in the attention economy the incentives for persuasive design reward grabbing, and holding, our attention – keeping us looking, clicking, tapping, and scrolling. This amplifies, rather than mitigates, the challenges of self-regulation we already face in the era of information abundance.

On the opening screen of one of the first web browsers there was a notice that read, “There is no ‘top’ to the World Wide Web.”25 In other words, the web isn’t categorized hierarchically, like a directory of files – it’s decentralized, a network of nodes. One of the tragic ironies about the internet is that such a decentralized infrastructure of information management could enable the most centralized systems of attention management in human history. Today, just a few people at a handful of companies have the ability to shape what billions of human beings think and do. One person, Mark Zuckerberg, owns Facebook, which has over 2 billion users, as well as WhatsApp (1.3 billion users), Facebook Messenger (1.2 billion users), and Instagram (800 million users).26 Google and Facebook now account for 85 percent (and rising) of internet advertising’s year-over-year growth.27 And the Facebook News Feed is now the primary source of traffic for news websites.28

Alexander the Great could never have dreamed of having this amount of power. We don’t even have a good word for it yet. This isn’t a currently categorizable form of control over one’s fellow human beings. It’s more akin to a new government or religion, or even language. But even these categories feel insufficient. There aren’t even 2 billion English speakers in the world.

In 1943, in the thick of World War II, Winston Churchill traveled to Harvard to pick up an honorary degree and say a few words to a packed house. The title of his talk was “The Gift of a Common Tongue.” After lauding the fact that Britain and America shared a common language – which, he hoped, might one day serve as the basis not only for Anglo-American fraternity and solidarity, but even for a common citizenship – he gave a plug to Basic English, a simplified version of English that he hoped might one day become a global lingua franca, a “medium, albeit primitive, of intercourse and understanding.” This was the context – the prospect of giving the world a common linguistic operating system – in which he said “the empires of the future are the empires of the mind.”

The corollary of Churchill’s maxim is that the freedoms of the future are the freedoms of the mind. His future was the present we now struggle to see. Yet when the light falls on it just right, we can see the clear and urgent threat that this unprecedented system of intelligent, industrialized persuasion poses to our freedom of attention.

