1. Introduction
After decades of slumber, the world is awakening to the extraordinary power we have vested in the custodians of our digital infrastructure. “Big Tech” is under attack from regulators worldwide seeking to wrest that power back. CEOs have been dragged (by video) before Congress; antitrust cases have been launched; the General Data Protection Regulation (GDPR) is in force, the EU Digital Services Act in preparation.Footnote 1 Even smaller countries like Australia have squared up.Footnote 2 Societies go through few such “constitutional” moments—when we collectively recognise that we are subject to illegitimate power structures and determine that they may not stand. Political philosophers should be well placed to help at these moments (think Hobbes, Paine, Rousseau). We can diagnose the moral flaws of existing power structures and, using that diagnosis, recommend alternatives. And yet, political philosophy’s engagement with this digital revolution is in its infancy. The normative analysis of our digital infrastructure has been led by other disciplines, in a tidal wave of critique known as the techlash, in which there is considerable normative agreement and little sustained focus on unpacking the conceptual foundations of that agreement.
This should give us pause. We need to be sure the tsunami of critique is aimed at the right targets. And we need arguments for it that do not presuppose antecedent agreement. Most importantly, we need to know not only that some practice is morally objectionable, but why it is. Only then can we know how problematic it is, and so calibrate our concern to its seriousness, and craft positive proposals that address the root cause of our moral concern.
In this paper, we introduce and offer a moral diagnosis of one of the primary engines of our contemporary digital infrastructure: Automated Influence, the use of automated systems to collect and analyse user data, and then target interventions aimed at changing users’ behaviour. Ultimately the tech titans’ power relies on their revenues, and those depend on Automated Influence, which encompasses online behavioural advertising, recommender systems, and newsfeed and search algorithms. Automated Influence has also driven Artificial Intelligence (AI) research and development, whether finding new modalities for the exercise of influence (e.g., digital personal assistants operationalising advances in natural language processing) or optimising existing methods (e.g., tweaking a recommendation algorithm to increase user engagement) (Hao Reference Hao2021). More perhaps than any other discrete practice of the leading digital platforms, Automated Influence has inspired popular concern, from New York Times editorials to Netflix documentaries (Zuboff Reference Zuboff2019).Footnote 3
In the moral critique of any social practice, we can adopt at least two broad perspectives, which we will call interactional and structural. Footnote 4 These are of course archetypes; most work includes some combination of the two. The interactional approach considers the interactions between agents that make up a social practice. It aims to identify adverse effects for individuals directly caused by those interactions. Its normative critique is grounded exclusively in the self-authenticating claims of persons with moral standing. A claim is a fact about a person that can potentially ground pro tanto duties in others—that is, give others moral reasons that it can be wrong to breach. A self-authenticating claim is sufficient on its own to ground such duties.
The structural approach evaluates the emergent social structures of which those interactions are the leading edge.Footnote 5 It considers how those social structures directly and indirectly impact people’s lives and their relational properties—such as how they influence distributions of power, knowledge, and resources—as well as their aggregate effects—cumulative social impacts that are significant at scale, but relatively insignificant for each person affected. The structural approach can be motivated by showing how these structures have downstream impacts on people’s self-authenticating claims. But it can also be motivated by these fundamentally relational goods (Taylor Reference Taylor1995; Waldron Reference Waldron1987; Griffin Reference Griffin2008).Footnote 6 Individuals do not have self-authenticating claims to a particular distribution of power, knowledge, or resources, or to one particular cumulative outcome over another.
Interactional critiques of social practices have a compelling kind of freestanding moral authority. One has instrumental reason to win others’ support for one’s cause, but the claims at stake are self-authenticating, so do not depend on that support. For example, think of abolitionists campaigning against slavery. Structural critiques that focus on relational and aggregate social goods are more deliberatively demanding. Since we do not have individual claims to social goods, we must collectively decide on the right path to take. Winning others’ support for your cause is not just instrumentally important, it is constitutive of the value of your cause. Think here of campaigners for national self-determination of a cultural group (Margalit and Raz Reference Margalit and Raz1990).
The prevailing critique of Automated Influence, especially in public discourse but also in academic research, emphasises its interactional shortcomings. Although this lends normative clarity and motivational force—you should oppose Automated Influence because it is undermining your self-authenticating claims—we think an exclusively interactional approach misses crucially important structural dimensions of the problem with Automated Influence. And this presents us with a more demanding challenge: to decide how we want distributions of power, knowledge, and resources to be shaped by our digital infrastructure. That decision cannot be made by a “moral vanguard.” It requires a genuine rethink of our social institutions writ large.
We begin by precisifying Automated Influence, then consider three central objections against it. In each case, we show how a structural version of that objection adds something crucial to its interactional counterpart. Our paper therefore makes a case for political philosophers giving greater weight to structural arguments in their moral diagnoses of social phenomena. We recommend the emerging field of AI Ethics turn away from its present interactional focus, and towards a more structural agenda: a genuinely political philosophy of data and AI.
2. Automated influence
Automated Influence: The use of Artificial Intelligence to collect, integrate and analyse people’s data, and to deliver targeted interventions based on this analysis, intended to shape their behaviour for exogenous or endogenous ends.
Many first become concerned by Automated Influence through online behavioural advertising. An ad seems to follow one around the web; we begin to realise that we are being tracked online and targeted accordingly. But online behavioural advertising is just the most explicit, and crudest, face of Automated Influence. Most of our digital services—from search, to social media, to online shopping—rely on directing users to secure our engagement and attention (and so show us more ads), as well as to help us navigate the functionally infinite space of our digital infrastructure, analysing our preferences to suggest complementary content, products, and services.
Automated Influence is driven by AI, but it has also driven epochal advances in AI.Footnote 7 The revenues generated by Automated Influence have sustained research and development in AI; the data gathered has made possible great leaps forward in computer vision, natural language processing, and other fields using machine learning (ML). Reciprocally, AI has also enabled a speed, scale, and personalisation of influence that would never have been possible without it.
Our definition highlights the role of AI in collection, integration, and analysis of user data, and its operationalisation by targeting a particular intervention.Footnote 8 We cannot morally assess Automated Influence without considering the pipeline of data that makes it possible, both to train predictive models and to target particular interventions.
Automated Influence makes it possible in principle to target behavioural interventions at an audience of one (Turow and Draper Reference Turow, Draper, Ball, Haggerty and Lyon2012, 138). This targeting comes in two broad forms. First, matching people with products, services, and content they may find appealing.Footnote 9 This means differentiating “persuadables” from “sure things,” “lost causes,” and “do not disturbs”—people whom targeting would actively put off. Second, tailoring the message to the individual based on their inferred susceptibility to that method of persuasion (Calo Reference Calo2014, 1018). Experimental results show the viability of such “persuasion profiling,” but there is little publicly available information about how widespread it is (Kaptein and Eckles Reference Kaptein and Eckles2012, Reference Kaptein, Eckles, Ploug, Hasle and Oinas-Kukkonen2010).Footnote 10
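To fix ideas, the first form of targeting can be illustrated with a minimal sketch in the style of uplift modelling. The segment labels follow the terminology above; the function, threshold, and example probabilities are our own illustrative assumptions, not a description of any actual influencer's system.

```python
# A minimal, hypothetical sketch of the "persuadables" segmentation described
# above. In practice the two probabilities would be estimated by models trained
# on behavioural data, not supplied by hand.

def segment(p_act_if_targeted: float, p_act_if_not: float, threshold: float = 0.5) -> str:
    """Classify a user by comparing predicted behaviour with and without targeting."""
    likely_with = p_act_if_targeted >= threshold
    likely_without = p_act_if_not >= threshold
    if likely_with and not likely_without:
        return "persuadable"      # targeting changes the outcome
    if likely_with and likely_without:
        return "sure thing"       # acts anyway; targeting is wasted
    if not likely_with and not likely_without:
        return "lost cause"       # will not act either way
    return "do not disturb"       # targeting actively puts them off

# Only the "persuadables" repay the cost of an intervention.
print(segment(0.7, 0.2))  # persuadable
print(segment(0.1, 0.6))  # do not disturb
```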
These interventions aim to shape the user’s behaviour—that is, they aim to raise the probability they will ultimately take some particular course of action—in order to realise some goal. Behaviour is, minimally, a function of one’s beliefs and desires given one’s option set. Automated Influence can shape each element. Search and newsfeed algorithms shape what we believe; ads and recommender systems prompt and direct our desires; platforms make some options available and attractive, while hiding others. Each modality of influence can be either covert or explicit.
Automated Influence can shape user behaviour in their own (endogenous) interests, and/or in the (exogenous) interests of others. Typically, the goal is to do both: to provide the user with a benefit while also extracting profit for the influencer—for example, hold the user’s attention on the platform in order to serve them more ads.
Presented in this light, Automated Influence does have a benign face, and may to some extent be necessary. The internet is as good as infinite; without some means to navigate it, we would be lost. Automated Influence enables us to discover relevant products, services, and content. Developing the infrastructure of Automated Influence requires significant investment; that investment is possible because tech companies optimise for profit as well as for user functionality.
But there is a malign face too. Critics of Automated Influence argue that it relies on invasive inferences from data that is illicitly acquired, thereby delivering excessively targeted interventions that covertly shape people’s beliefs, desires, and behaviour for exogenous ends. From this general anxiety, we extract three objections to Automated Influence, focusing on privacy, exploitation, and manipulation. We discuss each in turn.
3. PrivacyFootnote 11
We’ll call data collected to train predictive models training data and that used for targeting targeting data. We also distinguish between sensitive and nonsensitive data points. Sensitivity is a functional term intended to identify data about a person that that person might reasonably not want others to know.Footnote 12 Our key distinction is between data that is intrinsically and extrinsically sensitive. A data point is intrinsically sensitive if it is sensitive when considered on its own—that is, if you would reasonably not want others to know that data point alone. It is extrinsically sensitive if it is sensitive only when considered in combination with other data points.
This is the basic paradigm of Automated Influence. An “influencer” has training data including sensitive and nonsensitive information about a population. They train a model on that data revealing a link between intrinsically nonsensitive properties P, Q, and R, and intrinsically sensitive property S, such that if [P, Q, R] obtain for a user, the probability of S obtaining increases (Barocas and Nissenbaum Reference Barocas, Nissenbaum, Lane, Stodden, Bender and Nissenbaum2014, 55). Suppose P, Q, and R have to do with the user’s music, podcast, and browsing patterns, while S is their sexuality, for example. The model is then applied to a user who has revealed P, Q, and R but not S, which enables the influencer to infer that S likely obtains and to target the user with interventions aimed at S-people.
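As a toy illustration of this paradigm, consider the following sketch, which uses synthetic data and an off-the-shelf classifier; the features, coefficients, and probabilities are invented for the example and carry no empirical weight.

```python
# A minimal sketch of the inference paradigm described above, not a description
# of any real influencer's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: intrinsically nonsensitive properties P, Q, R (e.g. listening,
# podcast, and browsing patterns) for a population whose sensitive attribute S
# is known to the influencer.
n = 5000
P, Q, R = rng.random(n), rng.random(n), rng.random(n)
S = (0.8 * P + 0.6 * Q + 0.4 * R + 0.3 * rng.standard_normal(n)) > 1.0

model = LogisticRegression().fit(np.column_stack([P, Q, R]), S)

# A new user reveals only P, Q, R by using the service; S is never disclosed,
# yet the model assigns it a probability, and targeting proceeds on that basis.
new_user = np.array([[0.9, 0.8, 0.7]])
print(f"inferred probability of S: {model.predict_proba(new_user)[0, 1]:.2f}")
```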
3.a Control of data about you
In the public discourse on Automated Influence, a prominent objection claims that using people’s data in this way undermines their privacy.Footnote 13 More specifically: influencers have no right to use people’s data to train their predictive models and it is wrong to make invasive inferences about people’s sensitive information.
This objection can be developed in interactional or structural terms. We start with the interactional approach. This is most compelling if we can identify an underived self-authenticating claim against our privacy being undermined by Automated Influence. One can also argue for a derived claim grounded in privacy’s utility in protecting other interests—such as in not being exploited or manipulated—but since that argument is really grounded in those other interests, we return to it below.
The internet’s first decades have seen many egregious invasions of individual privacy, on any reasonable interpretation (Zuboff Reference Zuboff2019; Véliz Reference Véliz2020). However, these are now widely acknowledged as being obviously wrong, so we set them aside to focus on practices that are central to the ongoing business model of Automated Influence.
We are sceptical about grounding the critique of Automated Influence on its undermining an underived self-authenticating claim to privacy. We think that you do not have a weighty underived claim to unilaterally control your intrinsically nonsensitive behavioural data. That data is the product of your interaction with a digital infrastructure and thus the creators of that digital infrastructure must also have some antecedent claim to it.Footnote 14 This behavioural data is about you. But it is also about the site you have navigated to and the services you have used. You have some claim over it, to be sure. But so does the service provider.
There has long been a struggle over who should control people’s data exhaust, or behavioural surplus (the very terminology is the site of this struggle) (Zuboff Reference Zuboff2019). The conventional wisdom now is that this is your data—the user has unilateral rights over it (Véliz Reference Véliz2020). While we might endorse this as the conclusion of a political argument grounded in aggregate, relational, and structural considerations, we deny it as an underived moral premise in a critique of Automated Influence.Footnote 15 For you to have a natural, underived claim to unilateral authority over some data point, it should be either intrinsically sensitive, or you should otherwise have some kind of special claim to it—for example, perhaps, because you unilaterally generated it (think of intellectual property as an example). If you make something together with another person or organisation, then both you and that organisation have some natural claim to control the fruits of your joint labour. If it is not intrinsically sensitive, the mere fact that a data point so generated is about you is not sufficient to give you unilateral authority over it.
One could counter, here, that it’s a mistake to place too much weight on whether the data point is intrinsically sensitive. If S is a sensitive attribute, and knowledge of [P, Q, R] raises the probability of S, can that ground a claim to unilateral control over P, Q, and R?
We think this argument is worth exploring. We can develop it in at least two ways. First, one might argue that you have a claim to unilateral control over P, Q, and R just in case they are necessary to infer that the probability S obtains is above some threshold. Or, second, the claim could be grounded in P, Q, and R being sufficient to make that inference.
The first approach seems unlikely to generate robust privacy protections. The redundant encoding of sensitive attributes in large datasets typically means that many different subsets of the data enable the same inference, so no particular subset is necessary. As a result, on this view, we would have limited, if any, rights to control the data that enables sensitive predictions.
The second approach is worth exploring in more depth. P, Q, and R entail a higher probability of S only given that the model has also been trained on data about many other people. If data point X being part of a set of data points that are jointly sufficient for S to be inferred grounded a claim to your having unilateral control over X, then you would have a claim to unilateral control over data about others, which you do not. After all, the set of data sufficient for S to be inferred about you will also include data that is part of a set that is sufficient for S to be inferred about many other people, and you cannot all have unilateral control over the same data points.
Could we then supplement the sufficiency approach by arguing that if X is about you, and is part of a dataset that is sufficient for a high probability of S to be inferred about you, then you have a claim to unilateral control over X? We think this is likely to be overly inclusive; it is hard to imagine a piece of intrinsically nonsensitive information about you that is not part of a set that is sufficient for making sensitive inferences. On this approach, you would have a right to unilateral control over literally every data point that is about you. But much of the data that is about you is also about other people; it is relational data, such as that A and B are spouses, or that A and B were communicating together on a messaging platform (Viljoen Reference Viljoen2020).
Probably the best version of this argument, then, would say that you have a right to unilateral control over any data point that is exclusively about you, that is part of a set that is sufficient for inferring a high probability of some sensitive attribute S about you. This raises some interesting questions, which we cannot settle here, about what it takes for a data point to be exclusively about one person. And as we noted above, data that you generate by using some digital service is not exclusively about you. It is also about that digital service. We therefore think this argument is likely to be significantly underinclusive, though we think it deserves further consideration.
3.b Control over inferences
Rather than appeal to our claim to unilaterally control P, Q, and R just because they enable an inference to S, one might instead simply argue that others who licitly know P, Q, and R should not infer S from it. Although there are instrumental reasons to prohibit such inferences in particular cases, we deny an underived claim that others not put two and two together. There can be nothing wrong (we think) with the mere fact of making a warranted inference.
Objection: Does our scepticism derive from irrelevant assumptions about human psychology? We generally lack a claim that others not make inferences from what they licitly know, because we could never prevent such inferences in practice and, even if we could, doing so would egregiously constrain their freedom of thought. We can, however, easily prevent people from using predictive models, and doing so does not obviously undermine their freedom of thought.
However, we think that if there is a basic objection to drawing inferences by predictive models, then it should also be at least somewhat wrong to infer S when you licitly know P, Q, and R. But we think it cannot be. Identifying patterns and making inferences from licitly acquired data is not in itself wrongful. Acting on those inferences might be wrongful because of the consequences of doing so. But that is a separate matter.
3.c The role of consent
We are somewhat sceptical about the force of appealing to individual privacy to ground opposition to Automated Influence. But suppose we could show either that you have a self-authenticating claim that others not make certain inferences from what they licitly know, or that you have a claim to unilateral control over any data that is exclusively about you, and that is part of a set that is sufficient for inferring some sensitive attribute (and that the set of data exclusively about you is meaningful and large). Even then, we presumably would not think that either of those claims were inalienable. If you want to let companies know P, Q, and R even knowing that this will enable them to infer S, then there are seemingly few interactional grounds for denying you the right to do so. It is therefore unsurprising that consent looms so large in discussions of individual privacy and Automated Influence.
We can use consent to criticise Automated Influence on the grounds either that it involves breaching actual agreements between users and digital service providers, or that the agreements that license it are themselves invalid. We set aside the former objection; there is no mystery about why breach of contract is wrong. The second objection has more promise and, over the last two decades, scholars have exhaustively demonstrated the inadequacy of individual consent to legitimate the collection and use of individuals’ behavioural data in the era of ML (Barocas and Nissenbaum Reference Barocas, Nissenbaum, Lane, Stodden, Bender and Nissenbaum2014). Instead of revisiting these arguments, we will argue that the best reasons for thinking these contracts invalid refer to the structural, aggregate effects of managing behavioural data by individual consent.
A predictive model does not need to access everyone’s data to make reliable predictions. Its training data could be a sample as small as 20 percent of the whole (Barocas and Nissenbaum Reference Barocas, Nissenbaum, Lane, Stodden, Bender and Nissenbaum2014, 62). It can then make sensitive predictions based only on targeting data, which can be significantly less comprehensive than training data, and indeed can include only the data that you cannot avoid sharing in order to use a digital service, such as your hardware and browser metadata and your IP address.Footnote 16 In these cases, your only hope for avoiding having sensitive inferences made about you is to avoid using the digital service entirely.
Assume that consent in the absence of a reasonable alternative is not morally effective (that is, it does not change what others are permitted to do [Wertheimer Reference Wertheimer1987]). How, then, should we assess the consent to share behavioural data with a digital service provider in light of these externalities? You have three options: A, use the service and share behavioural data that can be used to train a predictive model, perhaps with some modest incentive to do so; B, use the service, share only the minimal targeting data that you cannot avoid sharing; C, do not use the service at all. Suppose that if one in five people choose A, then there is little to no difference between the inferences that can be made about you whether you choose A or B. In that case, you gain no real advantage by choosing B, and you miss out on the incentive to choose A. So if enough people choose A, then B is no longer a reasonable alternative to it. Everything then depends on whether C—not using the service at all—is a reasonable alternative to A.
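The externality at work here can be made vivid with a toy simulation, in which a model trained only on the minority who choose A makes roughly equally accurate sensitive inferences about those who choose B. Every number, feature, and threshold below is an illustrative assumption.

```python
# A minimal simulation of the consent externality described above: one in five
# users opt in (option A), yet the resulting model predicts the sensitive
# attribute S for everyone from unavoidable targeting data alone (option B).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Targeting data nobody can avoid leaking (e.g. device and network metadata).
metadata = rng.random((n, 3))
# The sensitive attribute S, correlated with that metadata.
S = (metadata @ np.array([1.0, 0.8, 0.6]) + 0.3 * rng.standard_normal(n)) > 1.2

# One in five users choose A and (directly or indirectly) reveal S to the
# influencer, providing labelled training data.
opted_in = rng.random(n) < 0.2

model = LogisticRegression().fit(metadata[opted_in], S[opted_in])

# The model's inferences are about as good for those who withheld (B) as for
# those who shared (A), so choosing B buys almost no extra privacy.
for label, mask in [("A (shared)", opted_in), ("B (withheld)", ~opted_in)]:
    print(f"{label}: inference accuracy {model.score(metadata[mask], S[mask]):.2f}")
```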
In the present digital environment, we think that option B is equivalent to using the new (putatively) privacy-preserving digital services, which have been launched in response to growing concern about Automated Influence. Many users try to protect their privacy by using a Virtual Private Network (VPN), searching on sites like DuckDuckGo, browsing on Safari, or deleting their Facebook accounts, to prevent some kinds of cross-site tracking. Almost invariably these privacy-preserving techniques impose some cost on the user (most privacy-preserving search engines license Bing’s search results—try using those for a week). And the reality is that given the choices of others to use the more popular, more invasive services, your privacy-preserving choices make little to no difference to the ability of online advertisers to profile you and target you with advertisements (and other interventions). Hence, the only meaningful choice is between not using the internet at all and submitting to being profiled and targeted. Given how many of us are dependent on the internet for our professional and personal lives, this is not the kind of choice that can generate morally effective consent.Footnote 17
The obvious alternative to the lens of individual consent—as has been recognised by privacy scholars for some time, and with particular force in a forthcoming paper by Salome Viljoen (Reference Viljoen2020)—is that we must instead work out a collective approach to allocating and using behavioural data.Footnote 18 We think this is the right answer, but it entails focusing on the relational and aggregate effects of the data practices of Automated Influence, rather than considering individuals’ claims to privacy first and foremost. Privacy claims, on this view, are the product of a negotiation over how we, as societies, should govern the flow of data, rather than being a crucial input into those negotiations.
There is a further problem with grounding our critique of Automated Influence in individuals’ privacy claims, and so in our practices of notice and consent, for there is a way to improve those practices and make them much more tractable for users. But it may involve centralising authority in a few trusted platforms, which then automatically manage the user’s preferences with respect to third parties.Footnote 19 The larger platforms have long recognised the opportunity in taking charge of the enforcement of privacy norms online (Clark Reference Clark2021). But while they constrain third parties’ access to users’ behavioural data, their own access is practically unconstrained. And while they might solve one problem with Automated Influence, they do so by exacerbating another—the concentration of power in too few unaccountable hands.
3.d A structural approach
Instead of focusing on individuals’ voluntary decisions whether to share their data with digital service providers, we need to emphasise the aggregate effects of the broader institutions of data governance. This shifts us from an interactional perspective to a structural perspective. Continuing in the same vein: the problem with Automated Influence is not just that automated systems access and make inferences from intrinsically nonsensitive behavioural data, but that they create standing economic incentives to turn everything into behavioural data, steering us ever closer to ubiquitous surveillance.Footnote 20 Instead of having just our online behaviour recorded, we increasingly find it impossible to escape being continually recorded wherever we are. What’s more, we are often complicit in this mass mutual surveillance, wilfully filling our lives with devices that record both ourselves and others.
But what is actually wrong with ubiquitous surveillance? We think it encroaches on the basic, self-authenticating claim to have some significant space free from being observed, and on the social good of living in free and equal societies.
3.e Surveillance and sovereignty
We can readily imagine a world with Automated Influence, but without ubiquitous surveillance. However, in the actual world, Automated Influence creates a standing economic incentive to turn everything into behavioural data, so that it can be used to target advertisements, products, services, and content. We have both interactional and structural reasons for objecting to ubiquitous surveillance, but invoking ubiquitous surveillance contributes to the structural critique of Automated Influence because only by attending to the social structures enabled by Automated Influence can we see its contingent downstream impacts on other aspects of our lives. An interactional approach that focused on Automated Influence without attending to these social consequences would not hold it accountable for the ubiquitous surveillance that it incentivises.
Our first objection to ubiquitous surveillance is grounded in our sovereignty over our own persons and our claim to a reasonable sphere of action free from observation by others. We need not take advantage of this sphere if we choose not to, but the basic licence to retreat from the gaze of others is as fundamental to our sovereignty over our persons as is our similar authority over our bodies.
Suppose we could achieve some nontrivial benefit for others by cutting off some of your hair while you are asleep, without your ever knowing it had happened. Even though you would never knowingly be affected and the objective effect would be trivial, it is still wrong to do this without your consent; it’s your body, and you are sovereign over it (Quinn Reference Quinn1989). To be sovereign over your person, you must have a morally authorised sphere of freedom in which you are at liberty to decide what to do without penalty or censure (Lazar Reference Lazar2019).
Just as you are entitled (to a point) to refuse others the use of your person for the sake of fulfilling overarching goals, you are also entitled (to a point) to refuse them the observation of your person. For this to be possible, you must be able to withdraw from others’ gaze without undue penalty. Increasingly ubiquitous surveillance raises the costs of withdrawing since it shrinks your sphere of freedom, and thus undermines your capacity to be sovereign over your own person.
Much rests here on the idea of “observation.” Some think that one’s basic interest in privacy is activated only when data about one is accessed by others, so that merely being recorded is not sufficient to set back that interest (Macnish Reference Macnish2020). We think that you lack sovereignty over your person if some other person or group is able to observe you without adequate limitation.Footnote 21 This means that the problem is not merely that we are always susceptible to being recorded by different devices, but that it is possible to integrate those different streams in order to build a comprehensive picture of each person. If your whole life (or close enough) could be observed by some other person or group, should they choose to, then you are not properly sovereign over it. If you were recorded every waking moment of your life but it was impossible to integrate those recordings, then your sovereignty over your person would be less seriously contravened since no other person or group would be able to surveil your every moment; each person would have only a snapshot.
3.f Surveillance, freedom, and equality
The next two arguments focus on structural, relational social goods: the value of living in societies that are free and equal. This value is not simply reducible to the instrumental benefit for each person of society being free and equal: free and equal societies are good in themselves, over and above how they contribute to the well-being of each person.Footnote 22
Ubiquitous surveillance, together with the power of the modern state, makes for an unfree society. This point is often made, so we will not dwell on it at length.Footnote 23 States face many different challenges, real and imagined, and granular data about each of their citizens’ behaviour can help solve some of those challenges, so our behavioural data exerts an irresistible pull on state authorities. For most of us, this comes to nothing. However, some have their basic privacy rights invaded but never know it. Some suffer the direct consequences of the mistaken or unjust exercise of state power, supercharged by big data and AI, and lose their freedom. This is especially likely for those who lack the full protections of citizenship (for example, undocumented migrants in the US [Bedoya Reference Bedoya2020]).
But the broader problem, independent of precisely who ends up suffering these severe incursions into their privacy and their freedom, is that a society in which we can be surveilled in this way by state authorities is one in which we are all unfree. Automated Influence provides the economic case for launching product after product that records our online and offline behaviour; these products are either expressly and transparently repurposed for state use (for example, Ring doorbell cameras transmitting data to police forces), or are surreptitiously accessed through back doors or via internet service providers (ISPs) (Harwell Reference Harwell2019). If democratic states tried to install surveillance equipment this pervasively, there would be a massive uproar. Instead, we are installing these gadgets ourselves.Footnote 24
The obvious solution here would be to ensure that our behavioural data is genuinely secure against all third parties, including the state, by preventing it from being aggregated at all, keeping it on encrypted devices, or else aggregating only after encryption has been applied. However, this again ends up putting a lot of power in the hands of tech companies, which still have access to identified data, and which are, in this scenario, entrusted with protecting our data against the might of the state. As we will discuss in more detail below, in some ways the problem is that digital technologies enable too much power, making the challenge of identifying a legitimate authority still more daunting.
Final argument: ubiquitous surveillance threatens equality as well as freedom. Those who can access a comprehensive picture of our online and offline behaviour have undue power over us. This obviously undermines our freedom, but also places us in unequal social relations. Consider the Uber founder and one-time CEO’s “party trick” of turning on “God View,” a display revealing the location of everyone using an Uber (troubling enough in itself, but all the more so when explicitly used to track individuals [Hill Reference Hill2014; Véliz Reference Véliz2020, 37]). They call it God View because it gives them a supernatural level of insight into, and power over, mere mortals like us. A society in which some people can have this kind of access to the behavioural data of others is to this extent and for this reason unequal (it may also be unequal for many other reasons, of course).
The central problem here is that contemporary computing power and data management and analysis capabilities enable us to integrate vast amounts of disaggregated data into a coherent whole. A mishmash of different devices—smart TVs, smart speakers, smart doorbells, smartphones—can be integrated into a single effective network for realising some objective. The net is not created at once and then thrown over us all so that we can see it coming and resist. Instead, we are each stitching our own little piece of it, and data management companies like Palantir are drawing it all together.
This is a general feature of the political problems raised by big data and AI, and of the central contribution that they can make to society: seemingly disconnected and ineffectual individual elements come together in the aggregate to realise something astonishingly powerful. One net result is that some people are placed in an extraordinarily asymmetrical position relative to others: we each know only our piece of the patchwork; they have a view of the whole. For most of us, this makes little practical difference. The data gathered to facilitate Automated Influence will only ever be used for that purpose. But we now live in a society where some people are subject to unjust or mistaken intervention on the basis of this data, and in which some people have access to awesome power. We live in a society that is pro tanto less free and equal than it would be without the ubiquitous surveillance that Automated Influence has incentivised.
4. Exploitation
The privacy-based argument against Automated Influence is most compelling if it is either developed into a structural critique of the social relations enabled by big data, or else pinned on the downstream implications of affording people inadequate control over their behavioural data: for example, that without this control, we will be subject to exploitation by digital service providers.Footnote 25 We think the interactional version of this argument is insufficient. Individual users do not in general have a strong complaint that they are being exploited by influencers. But when we consider users as a group and influencers as a group, and when we consider the overarching infrastructure of Automated Influence rather than individual interactions, the argument becomes more compelling.
We adopt the following (stipulative) understanding of exploitation. Exploitation occurs when one party to an ostensibly voluntary agreement intentionally takes advantage of a relevant and significant asymmetry of knowledge, power, or resources to offer the other party terms of exchange to which they agree but would never accept were they more symmetrically situated in that respect.Footnote 26 Applied to Automated Influence, this would imply that the apparently voluntary agreement to share our behavioural data for access to digital services is made against a backdrop of a significant asymmetry of power, resources, or knowledge, and that we would reject these terms if we had a stronger bargaining position.
4.a Unfavourable transactions
As with the argument from privacy, we concede that many internet users have been gulled into deeply unfavourable transactions that they would never have accepted had they known what was really at stake. More than this, many data companies and Automated Influencers have simply deceived their users, using subterfuge to acquire data that was never intended to be shared. The actual practice of Automated Influence has been riddled with this kind of naked corruption. Individuals have a clear complaint against these corrupt practices.Footnote 27 However, even when these are set aside, some have argued that Automated Influence is still objectionably exploitative. Let’s look at why.
The purveyors of Automated Influence have indeed made a tidy profit from it. Advertising has proved extraordinarily lucrative. Even companies whose traditional profit centres were in software or retail have recently seen more and more profits come from this one stream (Graham Reference Graham2021).Footnote 28 And users are severely asymmetrically positioned relative to the major digital service providers. Their level of power and their knowledge of user behaviour are jointly extraordinary. Ours, not so much (Calo and Rosenblat Reference Calo and Rosenblat2017).
And yet, the case that individual users are exploited by these practices rests on a weak foundation. For a start, the argument presupposes that each party to the transaction has a right to unilateral control over what they are exchanging. As argued above, that is contentious for our behavioural data. It is generated by our use of the digital infrastructure, so is part of the cooperative surplus that we must agree to divide, rather than something of ours that we bring to the bargaining table.
Even so, the division of that surplus could be an unfair one, which we agree to only because of a radical asymmetry in our respective bargaining positions. Users typically believe their behavioural data trivial, while Influencers know that with enough data to train their predictive models they can reap significant benefits. One might compare them to an unscrupulous art collector who knowingly buys a priceless masterpiece for a song from its ignorant owner. This would be a kind of exploitation—taking advantage of the other’s ignorance. But it is not an accurate analogy here because any given individual’s data is effectively worthless.Footnote 29 Predictive models depend on massive datasets; the marginal individual is a drop in the ocean. A better analogy would be if millions of us each owned a piece of a priceless jigsaw puzzle, but all of the pieces are multiply duplicated, and assembling the puzzle requires tremendous investment and ingenuity. The art collector buys up a full set without explaining their composite value to any of the sellers, but then has to invest considerable resources in assembling it. This does not seem so obviously exploitative.
The insights (and profits) generated by behavioural data require considerable investment and ingenuity to extract, and any individual’s contribution to the end result is typically trivial. In that light, being paid for our data with free access to digital services does not seem to be exploitative. It also has a progressive cast: providing digital services free at the point of use enables everyone to take advantage of them, rather than keeping out those with less disposable income.
An interactional, individualist version of the exploitation objection seems at best incomplete. But when we focus not on individuals, but on communities, and not on individual interactions but on the broader infrastructure of Automated Influence, the picture is different.
4.b Dividing the cooperative surplus
One of political philosophy’s central questions is how we should distribute the productive surplus made possible by cooperation with one another in society (Rawls Reference Rawls1999). The cooperative surplus generated through our use of the new digital infrastructure has been divided to give digital service providers a disproportionate share of the benefit and, more importantly, a disproportionate share of the power. They get to decide how our digital cooperative surplus is distributed and what to do with it. In an adaptation of Julie Cohen’s phrase, we have allowed them, to our detriment, to have unilateral control over the “means of prediction” (Cohen Reference Cohen2000, 1406).Footnote 30 We think this can provide the basis for a more structural, collectivist version of the objection from exploitation. Let’s go through this in more detail, focusing in turn on the new resources, knowledge, and power enabled by this cooperative surplus.
At the crudest level, this is about resources (Cohen Reference Cohen2018, 216). Each individual’s data is near valueless. But in the aggregate, it is an extraordinary resource that has generated untold wealth for the most prominent tech companies, their owners, and employees. Though individually irrelevant, we are together essential for the creation of this collective surplus. But because we do not control the means of prediction, access to digital services is our primary return. We can redress the balance by taxing these companies and imposing other levies on them. However, they are extremely adept at avoiding those costs, and even at mobilising the public to resist their imposition.Footnote 31
More importantly, our cooperative behavioural surplus enables new kinds of knowledge. Even if the primary economic motivation for data collection and analysis is to facilitate the personalised delivery of products, services, and content, massive datasets have extraordinary “latent energy,” and can generate insights on many different topics of social importance, as well as providing training data to enable vast leaps forward in AI. These insights and advances are accessible to those who control the means of prediction, but the rest of us, including our democratically elected representatives, are locked out. We cannot even know how effective Automated Influence itself is; we cannot gain firsthand knowledge of the functioning of the different recommendation algorithms that structure our online experiences. And we cannot decide the research agenda for how to use our cooperative surplus to generate insights about our offline lives that could play a vital role in improving public policy. For example, consider the COVID-19 pandemic. Tech companies have access to location and interaction information that could be invaluable to understanding specific transmission scenarios and broader trends, but governments of democratic polities are locked out of that information except on the companies’ terms. The decision how to weigh values like privacy and public health is taken not by democratically elected officials, but by the executives of Apple and Google (Lazar and Sheel Reference Lazar and Sheel2020).
This brings us to the deeper and more persistent problem. Our cooperative behavioural surplus enables new kinds and distributions of power. The tech companies’ control of the means of prediction means that we can only indirectly infer the extent of that power. And as yet we have no viable way of legitimating these new power relations. In the previous section, we discussed the power over individuals made possible by ubiquitous surveillance. Here we need to consider the power over populations enabled by the insights that can be generated by the means of prediction applied to aggregated user data. We have put all this power in the hands of tech companies, leaving to them the decision of how to use this data and what to try to gain insight into.
Maintaining the means of prediction and the broader infrastructure of Automated Influence requires digital platforms. Any constraints on the collection or use of data have to be implemented within those platforms. And, in practice, the complexity and sheer volume of interaction on those platforms mean that they largely police themselves (consider the example of copyright enforcement [Suzor Reference Suzor2019]). But where does their authority to do so come from? What procedural standards should they observe? Can we ensure that they will implement duly authorised laws, and won’t oversimplify them in order to reduce the cost of enforcement (Suzor Reference Suzor2019)?
4.c Refusal and resistance
At the same time as collectively generating this new cooperative surplus of resources, knowledge, and power, the systems of Automated Influence and the companies purveying it have worked to atomise individual consumers, reinforcing in us the mindset of individual choice and consent, and fragmenting our shared epistemic landscape (Viljoen, Goldenfein, and McGuigan Reference Viljoen, Goldenfein and McGuigan2020, 7). This is one of the great ironies of Automated Influence: it depends on an infrastructure that derives from a species of unthinking collective action, but which then enables a kind of personalisation and an ideology of individualism that fragments us, leaving us worse at the considered collective action needed to bargain collectively with the tech companies.
This has three steps: two epistemic, one ideological. First, just as Automated Influence affords influencers unprecedented insight into our lives, their control of the means of prediction prevents us from seeing and understanding just how they are governing the digital infrastructure they have created, and the extent of the insights and influence our cooperative surplus can create.
Next, Automated Influence delivers us each a personalised experience of the internet in which we see content tailored to our interests. As we become increasingly dependent on our digital infrastructure to inform our worldview, we are subjected to an increasingly fractured epistemic landscape, which militates against coordinated collective action to wrest unilateral control of the means of prediction away from the tech companies.Footnote 32
The last step is ideological. Tech companies have extensively promulgated the idea of individual agency and choice, framing our experience of our digital infrastructure so that we consider ourselves atomised individuals negotiating only on our own behalf.Footnote 33 It is their solution to every objection raised against Automated Influence because it ensures that the only collective action we engage in works to their benefit; beyond generating the cooperative surplus, we leave everything else to them. The sense that we must navigate all the shortcomings of our digital lives alone is deeply disempowering to many of us; a sense of “digital resignation” leaves us simply agreeing to various disclosures so that we don’t have to spend our whole lives online policing the boundaries of our rights (Draper and Turow Reference Draper and Turow2019, 1829).
4.d The exploitation objection restated
When we view Automated Influence through this lens—focusing on the social structures that we have collectively allowed to emerge over the last twenty years, rather than on individual transactions between users and tech companies—the argument from exploitation looks much more plausible.
The relevant transaction is between us—the users of the internet on the one hand and the Automated Influencers on the other. We are exchanging our data—individually of little value, but precious in the aggregate—for access to digital services. And while our data is generated through interaction with a digital infrastructure that we did not create, at issue here is not only entitlement to proceeds from particular interactions, but how to divide the cooperative surplus of resources, knowledge, and power that our data collectively makes possible. And as self-determining political communities we do have robust presumptive rights to set the terms for how that cooperative surplus is distributed.
The relevant asymmetry is between, on the one hand, the tech companies’ understanding of the value of that data and their ability to act in a coordinated and purposeful way, and, on the other hand, our general ignorance of the aggregate value of our data and our inability to act in a coordinated and purposeful way. We have, therefore, by accident and without coordination, in effect collectively accepted terms of exchange that give the tech companies near unilateral control over the means of prediction. If we were better coordinated, we should certainly demand more control and a greater share of the cooperative surplus of resources, knowledge, and power. Worse still, the tech companies have used the very tools we have given them access to in order to exacerbate the asymmetry between them and us by using the methods of Automated Influence to further undermine our ability to coordinate, nudging us towards atomised individual decision-making by promoting an ideology of individual agency and control, while also fragmenting the shared epistemic foundations for collective action.
5. Manipulation
Recent years have seen a groundswell of opposition to Automated Influence, from bestselling books and Netflix documentaries to resolutions in the European parliament (Zuboff Reference Zuboff2019; Lomas Reference Lomas2020).Footnote 34 People are increasingly concluding that Automated Influence is undermining our autonomy—that we are all subject to “remote control” (Zuboff Reference Zuboff2020). This objection deserves serious consideration; if Automated Influence were inherently manipulative, then that might be reason enough to reform or reject it.Footnote 35 When thinking through this objection, however, we again think that considering only the individual manipulatory effects of Automated Influence does not adequately convey the seriousness of what is at stake. For a comprehensive picture, we must adopt a more structural and collective approach.
5.a A sufficient condition for manipulation
We start by offering a sufficient condition for manipulation. Manipulation involves (though may not be exhausted by) undermining an individual’s decision-making power—for example, preying on their emotions, their momentary whims, or their reliance on cognitive biases and heuristics—in order to change their behaviour.Footnote 36 Their “decision-making power” is, roughly, their ability to select among their options, given their beliefs about the world, in ways that advance their goals. Some contend that only covert influence counts as manipulation; we deny this.Footnote 37 While manipulation can proceed by concealment or deception—for example, when casinos manipulate people to stay longer than they might otherwise intend by not having any visible clocks in their gaming rooms—many of our cognitive shortcomings are equally decisive even when we know they are in play, so one can manipulate another entirely transparently.
The wrong of manipulation has two sides. First, it involves effectively subordinating the will of others, and, as such, it undermines their autonomy. Second, it involves the manipulator placing themselves above the manipulated, treating the manipulated as a subordinate. This is an objectionable species of disrespect, and an affront to egalitarian social relations.
5.b Tailoring the message, targeting the product
Are the methods of Automated Influence manipulative? Let’s start with online behavioural advertising. This involves two salient species of Automated Influence: tailoring the message and targeting the product. Tailoring the message can certainly appear manipulative, especially if it relies on extracting and operationalising users’ “persuasion profiles.” Some psychologists have argued that we have a propensity to be swayed more easily by some tactics than others, which is constant across contexts (Kaptein and Eckles Reference Kaptein, Eckles, Ploug, Hasle and Oinas-Kukkonen2010, Reference Kaptein and Eckles2012). On some approaches this draws on quite specific features of individual psychology; on others, we target relatively crudely drawn personality types with a kind of messaging known to resonate well with that type (Matz et al. Reference Matz, Kosinski, Nave and Stillwell2017). We might thus advertise the same product to two different people in quite different ways based on our estimation of the likely success of the specific method used for each.
Like many aspects of the infrastructure of Automated Influence, it’s hard to say how widespread persuasion profiling is. However, a possibly less invasive analogue is common: A/B testing particular messages with particular target groups. One can soon discover the effectiveness for each group and continue to use the most persuasive message without explicitly categorising anyone according to their persuasion profile.
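A minimal sketch of this kind of per-group message selection might look as follows. The segment and message names are invented for illustration; real systems track far more state, but the logic of keeping whichever message converts best for each group is the same.

```python
# A hypothetical sketch of per-group A/B testing of ad copy, as described above.
from collections import defaultdict

# (segment, message) -> [conversions, impressions], filled in from logged outcomes.
results = defaultdict(lambda: [0, 0])

def record(segment: str, message: str, converted: bool) -> None:
    tally = results[(segment, message)]
    tally[0] += int(converted)
    tally[1] += 1

def best_message(segment: str, messages: list[str]) -> str:
    """Pick the message with the highest observed conversion rate for this segment."""
    def rate(m: str) -> float:
        conversions, impressions = results[(segment, m)]
        return conversions / impressions if impressions else 0.0
    return max(messages, key=rate)

# After enough impressions, each segment is simply shown its best performer;
# no individual "persuasion profile" is ever constructed.
record("new_parents", "fear_of_missing_out", True)
record("new_parents", "social_proof", False)
print(best_message("new_parents", ["fear_of_missing_out", "social_proof"]))
```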
Tailoring the message is manipulative if it involves identifying and targeting a weakness in the user’s rational decision-making. But advertising in general makes a virtue out of identifying and operationalising cognitive biases and heuristics, so if tailoring the message is manipulative, it does not stand out much from other kinds of advertising.
We do think, however, that suasion can be morally problematic (whether we want to call it manipulative or not) when it involves concealing some fact that might, if known, make that suasion less effective. And tailoring the message plausibly does so. If you knew that the same product being advertised to you in one way was being advertised to another person quite differently, you might resist, especially if the messages were somehow conflicting.Footnote 38 If you knew that your persuasion profile was being inferred and operationalised, you would very likely refuse to do what you are being influenced to do on that basis alone (Boerman, Kruikemeier, and Zuiderveen Borgesius Reference Boerman, Kruikemeier and Borgesius2017; Baek and Morimoto Reference Baek and Morimoto2012).
So there is some reason to think that tailoring the message is problematically manipulative, albeit arguably not a cardinal sin. However, the bulk of online behavioural advertising is not about tailoring the message, but about targeting the product. This concerns both audience selection and the process of real-time auctioning of advertising spaces, driven in part by predictions of users’ click-through rates based on their traits and history (He et al. Reference He, Pan, Jin, Xu, Bo, Xu, Shi, Atallah, Herbrich, Bowers and Candela2014). In some extreme cases, this might be unacceptably manipulative—the much-cited cases of identifying depressed users on social media and targeting them with products tailored to their depression would perhaps be an example. These, however, are extreme cases. More commonly, targeting the product is a matter of using familiar methods of market segmentation. One might still object that if we knew why they were showing us this ad—not “because of our browser history,” but “because your mouse hovered over this image on two separate occasions in the past,” or “because your frequent use of smart scales implies that you are dieting”—then we would be less likely to click through.Footnote 39
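To illustrate how targeting the product can work in the kind of real-time auction referenced above, here is a simplified second-price sketch; the bidders, bid values, and predicted click-through rates are invented for the example, and real ad exchanges are far more elaborate.

```python
# A minimal sketch of an auction weighted by predicted click-through rate (CTR).
def run_auction(bids: dict[str, float], predicted_ctr: dict[str, float]) -> tuple[str, float]:
    """Second-price auction on expected value (bid per click x predicted CTR)."""
    expected_value = {ad: bids[ad] * predicted_ctr[ad] for ad in bids}
    ranked = sorted(expected_value, key=expected_value.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # The winner pays just enough per click to beat the runner-up's expected value.
    price_per_click = expected_value[runner_up] / predicted_ctr[winner]
    return winner, round(price_per_click, 4)

# The same ad slot goes to different advertisers for different users, because
# predicted_ctr is a function of each user's traits and history.
bids = {"loan_offer": 2.00, "shoe_ad": 0.50}
user_ctr = {"loan_offer": 0.01, "shoe_ad": 0.08}  # this user rarely clicks loan ads
print(run_auction(bids, user_ctr))  # ('shoe_ad', 0.25)
```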
5.c How effective is online manipulation?
The most compelling case for Automated Influence involving the manipulation of individual people requires us to look past online behavioural advertising towards the recommender algorithms that shape our experience of digital platforms more generally.Footnote 40 These work by shaping our options, as well as influencing our beliefs and desires, to hold our attention for longer and direct it towards products, services, and content that we might ultimately be willing to spend our money on. Is this an autonomy-undermining form of suasion? On the one hand, perhaps our putative “addiction” to the products of recommender systems is, in fact, bad for us; on the other, perhaps this kind of judgment about what makes a life go better or worse ought not be the basis for a broadly liberal critique of Automated Influence. Either way, the mere fact that digital platforms are addictive presumably does not make them much more manipulative than, for example, videogames and other forms of entertainment. It is possible, of course, that the degree of information that social media companies have about their users enables them to more powerfully operationalise our propensity to addiction than is true for other platforms, which might, again, ground valid concerns.
While it might seem hyperbolic to say that Automated Influence has us under remote control, we have found some grounds for saying that it subjects individuals to manipulation. The next question, however, is: How morally serious is this? Manipulation is morally graver, in our view, if (a) it is more successful and (b) the option ultimately chosen by the manipulated is significantly worse than the option they would have chosen had they not been manipulated. Unfortunately for the prophets of doom, Automated Influence, especially in the form of online behavioural advertising, is not especially effective on an individual level (Boerman, Kruikemeier, and Zuiderveen Borgesius Reference Boerman, Kruikemeier and Borgesius2017; Tucker Reference Tucker2014; Aguirre et al. Reference Aguirre, Mahr, Grewal, de Ruyter and Wetzels2015; Jones et al. Reference Jones, Bond, Bakshy, Eckles and Fowler2017; Calo Reference Calo2014, 1003; Kaptein and Eckles Reference Kaptein, Eckles, Ploug, Hasle and Oinas-Kukkonen2010, Reference Kaptein and Eckles2012; Matz et al. Reference Matz, Kosinski, Nave and Stillwell2017; Hwang Reference Hwang2020).Footnote 41 It can be significant in the aggregate, as we discuss below. But from each individual’s perspective, the probability that they will be successfully influenced by these different kinds of intervention remains small in absolute terms.Footnote 42
Some might object here that the very fact that the tech companies dominate the advertising market is evidence of their product’s success. This would be too quick. Their success arguably comes primarily from their ability to monopolise our attention—to be our default site for search or for idle browsing. This alone would make their platforms indispensable to advertisers even if they entirely stopped using user data to target advertisements.
The next question is how much is at stake. When the choice is between one product and another, the stakes seem pretty low. Of course, online behavioural advertising is also used to market much bigger, life-altering kinds of products, such as unsecured loans and job opportunities. But everything we know suggests that the higher the stakes, the less likely we are to be significantly swayed by advertising of any kind (Boerman, Kruikemeier, and Zuiderveen Borgesius Reference Boerman, Kruikemeier and Borgesius2017).
What about Automated Influence in political campaigning? Here again the stakes for any particular individual might be relatively low, and the higher the stakes, the less the role we would expect digital advertising to play in their decision. A targeted ad might generate a small donation. A series of such ads might even contribute to a decision not to vote or (less likely) to switch sides. These might seem pretty significant outcomes, but at the individual level they really aren’t, because whether you vote or not, and whether you vote for one side or the other, almost certainly makes no difference to the outcomes for you given the vanishingly small probability that your vote will be decisive.
However, even when attempts at manipulation fail to achieve the intended behavioural changes, they might still succeed in altering the subject’s beliefs and desires and, so, affect other aspects of their lives. Automated Influence has clearly contributed to many people in highly digitised societies becoming relatively unmoored from political reality (Vosoughi, Roy, and Aral Reference Vosoughi, Roy and Aral2018; Paul Reference Paul2021; Hills Reference Hills2019; Törnberg Reference Törnberg2018).Footnote 43 Properly understanding how Automated Influence has contributed to misinformation and the widespread adoption of conspiracy theories, however, requires zooming out from individual interactions to the broader structural implications of Automated Influence. We return to this below, but we acknowledge that the individuals whose worldviews have been significantly altered through content served to them by targeted advertising and rapacious recommender algorithms have suffered a morally serious species of manipulation.
We can draw an interim conclusion that, in general, online behavioural advertising is not significantly more effective than other forms of advertising; even the more nefarious methods don’t seem to make that much difference and, anyway, it’s hard to get too riled up about being nudged into consuming a little more than your budget allows or spending more time than you think you should staring at a screen.
5.e Stochastic manipulation
What happens, then, when we consider the infrastructure of Automated Influence through a wider lens? The magic of big data is in its aggregate effects, which are more than the sum of their parts. The same is true of the harms of big data. They might be relatively trivial for most of those who are adversely affected, while being serious in the aggregate. Even if Automated Influence only involves a modest degree of manipulation of individuals, it permits a more troubling species of stochastic manipulation of groups.
By stochastic manipulation, we mean that the interventions of Automated Influence may have a relatively low probability of changing the behaviour of any particular individual, but may in the aggregate have a nontrivial impact on group behaviour as a whole. What’s more, in keeping with our account of manipulation above, we think that stochastic manipulation preys on some pathologies of collective decision-making, in particular our failure to coordinate our actions with one another, and our propensity to realise tragedies of the commons. This is most obvious in the context of political decision-making—not just in elections, but more broadly when mobilising public support for or against particular policy proposals. In these contexts, the ability to sway a given group by a few percentage points, even a few fractions of a percentage point, can ultimately prove decisive (Heilman Reference Heilman2020).
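The arithmetic behind this worry is simple. As a purely hypothetical illustration (the audience size, per-person effect, and margin below are invented for the example, not drawn from any study):

```python
# Hypothetical illustration of stochastic manipulation: a tiny per-person
# effect, aggregated over a large targeted audience, set against a narrow
# margin of victory. All numbers are invented for the example.

targeted_voters = 5_000_000   # people served the targeted content
per_person_effect = 0.004     # +0.4 percentage points chance of changed behaviour
expected_shift = targeted_voters * per_person_effect

margin_of_victory = 10_000    # hypothetical margin in a close contest

print(f"Expected behavioural changes: {expected_shift:,.0f}")
print(f"Exceeds the margin of {margin_of_victory:,}: {expected_shift > margin_of_victory}")
# From any individual's standpoint a 0.4% nudge is negligible;
# in aggregate it is double the hypothetical margin.
```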
Stochastic manipulation also impacts on nonpolitical decision-making. From the perspective of each individual consumer, choosing one product rather than another may make little difference. But at the aggregate level, the inevitability that digital platforms will shape our purchasing choices can lead to serious anticompetitive results. For example, while the nudge we receive to buy products with the Amazon Prime badge may benefit each user individually, each individual transaction contributes to the centralisation of power in the retail economy, putting Amazon’s competitors out of business (Romm, Zakrzewksi, and Lerman Reference Romm, Zakrzewksi and Lerman2020).Footnote 44
The central moral concern of stochastic manipulation is less its effect on individuals whose decisions are swayed, and more that these new techniques enable small groups of savvy people to exercise a disturbing amount of power over groups and populations at large (Moore Reference Moore2019). As individuals, we may not be subject to remote control, but the tools of Automated Influence seem to allow those who can wield them an outsized ability to influence populations to advance their goals.
Are individuals gravely wronged by stochastic manipulation? We think not. An agent’s subjective probability of success can affect the seriousness of the wrong they commit. In other words, if A attempts to manipulate B and succeeds, then A wrongs B more severely the higher the probability that when A acted, her manipulation would be successful (other things equal) (Lazar Reference Lazar2015). Recall that the wrong of manipulation consists both in the impact on the victim’s autonomy, and in the disrespect shown by the manipulator to the manipulated, in violation of their equal social relations. The impact of being manipulated on B’s autonomy is unaffected by A’s probability of success when she acted. But the disrespect evinced by A in her action does vary with that probability, we think. A chancy attempt that happens to succeed involves a less egregious species of disrespect than does a sure thing.
To see why this must be so, note that if φing is wrong, then attempting to φ is typically also wrong. When the success of φing is chancy, we concede that successful φing is more seriously wrongful than an unsuccessful attempt. But the difference between them cannot be very great. Suppose then for reductio that A’s successfully manipulating B1 with a high probability of success is no more seriously wrongful than her successfully manipulating B2 with a low probability of success. Suppose that A also unsuccessfully attempted to manipulate C2–Z2 with the same probability of success as for B2. If chancy unsuccessful attempts are not much less seriously wrongful than chancy successful manipulations, and if low probability successful manipulation of B2 is not less seriously wrongful than high probability successful manipulation of B1, then the low probability, unsuccessful attempt to manipulate each of C2–Z2 is not much less seriously wrongful than the high probability, successful manipulation of B1. But this is implausible. C2–Z2 have much weaker complaints against A than does B1. The way out of the reductio is to concede that successful high probability manipulations may be substantially more seriously wrongful than successful low probability manipulations. Hence, the impact of stochastic manipulation on individuals should carry less weight in our deliberations than would less chancy manipulation.
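One way to schematise the reductio (the notation is ours and purely illustrative; W stands for an assumed ordinal measure of how seriously wrongful an act is, and p for the agent’s probability of success when acting):

```latex
% Schematic reconstruction of the reductio; the notation is illustrative only.
% W(x): how seriously wrongful act x is; p: the agent's probability of success.
\begin{align*}
&\text{(1) } W(\text{unsuccessful attempt at } p) \approx W(\text{successful manipulation at } p)
    \quad \text{(only slightly less wrongful)}\\
&\text{(2) } W(\text{high-}p\text{ success on } B_1) \le W(\text{low-}p\text{ success on } B_2)
    \quad \text{(supposed for reductio)}\\
&\text{(3) } W(\text{failed low-}p\text{ attempt on each of } C_2,\dots,Z_2)
    \approx W(\text{low-}p\text{ success on } B_2) \ge W(\text{high-}p\text{ success on } B_1)\\
&\text{(4) But } C_2,\dots,Z_2 \text{ have much weaker complaints against } A \text{ than } B_1 \text{ does.}\\
&\text{Hence, reject (2): high-probability successful manipulation can be substantially more wrongful.}
\end{align*}
```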
But stochastic manipulation can still pose serious problems. Automated Influence has surely played a significant role in the political upheaval of the last five years (Aral and Eckles Reference Aral and Eckles2019). The problem is less that we have ended up in one possible world rather than another than that a few people have the means to reach and influence so many people in terms tailored to their particular circumstances. This is especially clear when the tech companies want to get a particular message across to us. Their capacity to reach and influence political communities is extraordinary (Culpepper and Thelen Reference Culpepper and Thelen2020).
Stochastic manipulation concentrates power in too few hands. It also pollutes our capacity for, and willingness to commit to, collective deliberation and action. We tend to think that we are not susceptible to Automated Influence, but that others are (Ham and Nelson Reference Ham and Nelson2016, 689). The perception that others are being manipulated is corrosive to democratic deliberation, even if it is, in fact, overstated. To illustrate, suppose you thought that some part of the population of your country might be Cylons—humanoid robots indistinguishable from homo sapiens without advanced biometric testing, but which can be reprogrammed by a central controller at any given time. Even if you don’t know for sure how many Cylons there are, the mere fact that there might be some would be corrosive to public trust. How can we deliberate, debate, and decide in good faith when some significant portion of our interlocutors might be immune to rational argument and are effectively under the control of our implacable opponents?Footnote 45
Even when Automated Influence is ineffective, it is perceived to be effective, which undermines trust in the authenticity of one’s fellow citizens’ deliberations.Footnote 46 It is also deeply objectionable that tech companies know how effective this influence is, while leaving the rest of us guessing. Imagine that something in the water might be turning people into Cylons. To know whether it is, one would need to test the water at many different points. Only one private company can do so, but it does not make that data available to us, or reliably tell us whether and where the water is contaminated. That would surely be wrong. But it is similar to our situation now.Footnote 47
Stochastic manipulation corrodes democracy, but it may not be the most serious manipulation enabled by Automated Influence. Instead, systems of Automated Influence are accessories to a more objectionable, more effective, and more traditional species of manipulation. Automated Influence has funnelled people towards human manipulators because the recommendation algorithms that serve us products, services, and especially content are optimised to sustain user engagement; and content produced by manipulators is, by its nature, deeply engaging to the manipulated (Alfano et al. Reference Alfano, Amir Ebrahimi Fard, Clutton and Klein2020). Automated Influence steers us towards manipulators, who then take advantage of our emotions, prejudices, and fears; they lie to us and might ultimately incite us to do terrible things (Vaidhyanathan Reference Vaidhyanathan2018). The worst kind of manipulation in our digital lives right now is being conducted by some of the people who use social media, and they are enabled and empowered by the newsfeed algorithms that drive people towards more sensational, extreme, and polarising content (Hao Reference Hao2021; Tufekci Reference Tufekci2018).
5.f Democratic deliberation and collective decision-making
As noted above, the victims of this kind of manipulation arguably have weighty individual complaints against the manipulators, and indirectly against the systems of Automated Influence that empower them (though they must also take some responsibility for their own susceptibility). But there are also larger-scale consequences. We all have a very weighty public interest in living in societies that are capable of meaningful democratic deliberation as a prelude to collective decision-making.
The greater the extent to which our public discourse is fragmented by misinformation and conspiracy theories, the less capable we are of reasonable, respectful, collective deliberation. Democratic success depends on norms of public discourse in which we view one another as valid interlocutors striving to realise our values in light of broadly accurate and shared beliefs about the world. When significant swathes of the population are simply unmoored from reality and endorse radicalised values that are wildly out of step with not only the common good but also their own interests, it becomes impossible to have this kind of public forum. “Democratic” politics becomes nothing more than a thinly veiled struggle for power, which undermines the legitimacy of the whole political process, and makes events such as the January 6, 2021, insurrection in the US not just more probable: they become all but inevitable. Such events result from a corruption of public discourse enabled by systems of Automated Influence that serve people content that fires them up and keeps them engaged at a speed and scale that content-moderation algorithms (and human content moderators) cannot hope to keep up with.
Though all the major social media platforms are now trying to redress these effects, we cannot set them to one side as incidental or outlying. The problem is much deeper. The entire business model of Automated Influence depends on optimising for engagement. The only recourse is to incorporate a measure of epistemic paternalism—giving people the information that is good for them whether they want it or not. This goes beyond simply taking down unacceptable content; it also means ensuring that content promotion is regulated by epistemic ideals. Not only will this prove incredibly challenging to implement, but it aims to solve one problem with the infrastructure of Automated Influence by exacerbating another: the radical centralisation of power in the hands of a few unaccountable corporations. Once again, solving a core problem at the heart of the business model of Automated Influence requires somebody to exercise a significant degree of power; yet giving that power to tech companies simply increases our subjection to their unaccountable authority.
6. Conclusion: A crisis of legitimacy
We lack the space to do justice to all the plausible objections to Automated Influence.Footnote 48 Nevertheless, we see a clear common thread. Automated Influence is, at its heart, a novel mechanism for the exercise of power. It consolidates and adds to the power of the already powerful, and it creates new agents of power. These new modalities for the exercise of power have emerged from the commercial, private sphere, and, as such, their sole claim to legitimacy lies in the consent of those affected by them. But, as we have seen, our individual consent does little to legitimate the new power structures of Automated Influence. Indeed, assessing Automated Influence from the individual perspective at all largely misses the point. Instead, we must recognise that in the digital sphere, through our more or less uncoordinated voluntary choices, we have created a new set of social structures, which shape significant proportions of our lives. And our existing political institutions have proved distinctively ill-suited to governing those novel structures.
When we have to live together, we are driven to find ways of developing freely self-determining political communities so that we can be at home in the laws to which we are subject. But in our digital lives, we are incapable of realising anything approaching this level of collective autonomy. Not only are we subject to the whims of a few extraordinarily powerful corporations, but we are immersed in fundamentally algorithmic governance, our experiences and our options shaped by authorities that are entirely opaque to us: we can’t know how they work, or what effects they have, not only because we are precluded from knowing the facts by intellectual property laws, but because the algorithms themselves are inscrutable, and are little understood even by those who designed them (Selbst and Barocas Reference Selbst and Barocas2018).
Unsurprisingly, this mixture of chaos and untrammelled power has led to seriously deleterious effects (as well as some good ones). The economic imperatives of Automated Influence have left us vulnerable to ubiquitous surveillance. A few corporations control the means of prediction, and the infrastructure that they have created works to fragment us: they reap the benefits of big data, while consigning us to the ideology and practice of small politics, undermining our capacity for collective action. And the mechanisms of Automated Influence allow too few people to subject too many people to stochastic manipulation—relatively trivial for many of the individuals affected, but in the aggregate potentially changing the destiny of nations—and steer us towards the most adept manipulators of all: each other.
These problems all have more or less the same structure: they are collective action problems, the presumptive solution to which is more power, not less—a central authority that can hold the different players in our digital lives to common standards, which allow the market-lubricating aspects of Automated Influence while avoiding the costs. But unless that power is legitimate, we would just trade the feudal chaos of our digital lives for a kind of digital authoritarianism.
What’s more, the only option less attractive than leaving this power with the titans of tech is giving the same kind of access to national governments, even democratic ones (to say nothing of quasi-democratic supranational organisations). Their power over us is already extreme; with unfettered access to our digital lives as well, the balance of power between us and them would be utterly and decisively skewed. Besides, national governments are by their nature territorial; our digital lives are not. Moreover, democratic governments are notoriously inept at implementing any kind of technological governance. At present, only the tech companies are able to implement and enforce reforms that might address some of the concerns in this paper. And they can do so effectively only if they remain, as they are now, large enough to stifle the kind of competition that leads to a race to the bottom. We are therefore at an impasse: we are subject to new kinds of power, and we are reaping the whirlwind with few appealing ways of calming the storm that do not further empower our digital masters. The task of all would-be self-governing citizens of the internet—political philosophers included—is to answer this crisis of legitimacy with new ways to realise collective self-determination in our digital lives.Footnote 49
Acknowledgements
For their helpful comments and advice on earlier drafts of this paper, we thank Annette Zimmermann, Alex Voorhoeve, Kate Vredenburgh, Max Fedoseev, Jake Goldenfein, Charles Evans, Selim Berker, Anne Gelling, the members of the HMI project at ANU, and the anonymous referees for this journal.
Claire Benn (PhD, University of Cambridge) is a research fellow on the Humanising Machine Intelligence Grand Challenge project at the Australian National University. Her current research focuses on the intersection of ethics, political philosophy, and technology.
Seth Lazar (DPhil, University of Oxford) is a professor at the School of Philosophy and project leader of the Humanising Machine Intelligence Grand Challenge at the Australian National University. He works on the moral and political philosophy of data and AI.