
Rethinking rights in social media governance: human rights, ideology and inequality

Published online by Cambridge University Press:  29 June 2023

Rachel Griffin*
Affiliation:
Law School of Sciences Po, Paris, France
*
Corresponding author. E-mail: rachel.griffin@sciencespo.fr

Abstract

This paper aims to question the dominance of human rights as the primary normative framework for European social media regulation and for academic research in this field. Analysing EU legislation and recent ECJ cases, it shows that issues like discriminatory content moderation, profiling, and promotion of stereotypes cannot adequately be addressed within a human rights framework. The centrality of individual rights in the EU legal regime not only fails to address collective issues, like platforms’ influence on culture and social norms, but cannot even offer effective, equal protection to individuals. In policy debates, the depoliticised and individualistic language of human rights can legitimise corporate activities and downplay important questions about the political economy of this privatised, highly concentrated, advertiser-funded industry. The paper also considers interpretations of human rights as structural conditions or collective values, and argues that they cannot fully overcome the limitations discussed here. Given the entrenched role of fundamental rights in EU law, critics of social media cannot avoid relying on them. However, academics should also seek to develop more explicitly political critiques, based on alternative normative visions.

Type
Core analysis
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

In November 2021, leaked internal documents revealed that the software Facebook used to automatically remove hate speech was heavily biased against minorities: around 90 per cent of ‘hateful’ posts removed criticised white people and/or men.Footnote 1 External studies also suggest that Facebook’s content moderation systems disproportionately target marginalised groups.Footnote 2 Facebook has since changed its race-blind moderation policies, which ignored systemic inequalities between racial groups and treated speech targeting privileged and marginalised groups as equivalent.Footnote 3 However, discrimination will almost certainly persist, for example due to widespread bias in artificial intelligence (AI) classifiers.Footnote 4 Other platforms also mostly operate race-blind policiesFootnote 5; user accusationsFootnote 6 and survey evidenceFootnote 7 suggest they exhibit similar racial biases. Most major platforms also strictly ban content deemed ‘adult’ or sexually suggestive, and content related even tangentially to sex work. These policies not only materially harm sex workers,Footnote 8 but are typically enforced arbitrarily and disproportionately against women of colour and LGBTQ+ people.Footnote 9

Content moderation has become a particular focus of academic and political debates, but moderation policies like these are not the only ways platforms reinforce existing social inequalities. Targeted advertising-based business models require continual surveillance and classification of users, which both reflects and reproduces existing social inequalities.Footnote 10 Gender- and race-based targeting enables discrimination in employment and housing adverts,Footnote 11 and facilitates advertising campaigns which exploit reductive stereotypes.Footnote 12 Platforms impose simplistic and reductive identity categories, like binary gender classifications,Footnote 13 to fulfil marketers’ demands for gender-segregated audiences.Footnote 14 Such business models not only enable direct discrimination, but also channel online culture and communication in predictable directions that reinforce dominant social norms because they appeal to advertisers. According to the most heavily promoted social media content, women wear makeup, while men play video games.Footnote 15 Queer people can only be visible if they are desexualised and embrace heteronormative lifestyles.Footnote 16 As Iris Marion Young emphasises, restrictive social norms and stereotypes like these are as much a form of injustice as more obvious forms of discrimination and maldistribution.Footnote 17

Social media regulation must do more to address these manifestations of social inequality. So far this has not been a focus for European lawmakers, who have instead prioritised issues like misinformation, extremism, and copyright infringement, but these problems are beginning to be recognised. Initiatives including the self-regulatory Code of Conduct on Hate SpeechFootnote 18 and 2018 Audiovisual Media Services Directive (AVMSD),Footnote 19 as well as the 2022 Digital Services Act (DSA),Footnote 20 contain provisions aimed at addressing discrimination and prejudice. They also address broader issues, such as opacity and arbitrariness in content moderation, which are relevant to addressing bias and inequality.

The predominant framework through which the European Union (EU) addresses such issues is that of fundamental rights.Footnote 21 In various legislative measures, mandating respect for users’ rights serves as a key safeguard against state and corporate censorship. Now, under the DSA, platforms will increasingly have to consider human rights when implementing moderation and content governance policies.Footnote 22 Moreover, human rights have been the normative starting point for much academic commentary and activism around social media regulation. They function as a default moral standard against which law and policy can be judged, and a widely-accepted normative basis for calls for reform.

This paper aims to question the dominance of rights frameworks in law, policy and academic debates around social media. Without dismissing the importance of human rights, it argues that their ubiquity as the rarely questioned moral yardstick against which all platform and state policies are measured is problematic. As a legal framework, even if human rights protection is made more effective and comprehensive, it is structurally unsuited to addressing systemic, collective and cultural issues that are irreducible to discrete decisions or individual victims. As a mode of political discourse, human rights favour certain (liberal, individualistic) framings of problems and solutions and distract from issues which are not easily framed in these terms. Thus, human rights are not well suited to addressing structural inequalities in social media governance, and should not be the sole legal or normative framework for regulation and research. Research concerned with social media’s unequal impacts would benefit from drawing on and developing alternative normative frameworks and values which place more emphasis on structural conditions and the distribution of power and resources: these could, for example, start from theories of data justice,Footnote 23 media justice,Footnote 24 or democratic legitimacy.Footnote 25

The paper proceeds as follows. Section 2 outlines the roles of human rights in EU social media law and surrounding academic debates. Section 3 discusses the limitations of rights-based legal frameworks. It first shows that individual legal rights are not only practically incapable of offering effective and equal protection, but also structurally unsuited to addressing systemic inequalities. It then addresses the counter-argument that fundamental rights in EU social media law function not only as individual legal protections, but also as general principles, arguing that the indeterminacy of these principles makes them equally incapable of effectively addressing systemic inequalities. Section 4 argues that the dominance of human rights framings in legal and policy debates about social media does not serve progressive goals. It reinforces corporate power by suggesting that corporations can legitimately control the online public sphere if they make minor operational reforms, and sidelines criticisms of their business models and market structure. Finally, Section 5 considers reconfigured human rights frameworks which arguably better address the paper’s key concerns. Attempts to reinterpret human rights for the digital age in more structural or collective ways cannot fully overcome the limitations identified. However, arguments from critical race theory that human rights are imperfect but necessary in the absence of widely supported alternatives should be taken seriously. Research and advocacy should use rights discourse and litigation pragmatically, while recognising their limitations and more strongly emphasising alternative frameworks which centre political economy, structural oppression and collective action.

2. Fundamental rights and social media

Since the 2000 E-Commerce Directive (ECD) set the baseline standards for European platform regulation,Footnote 26 the role of fundamental rights has markedly increased. The ECD primarily pursued economic objectives, instituting a liberal regulatory regime in order to develop the nascent internet industry, inspired by the Clinton administration’s pro-market internet policies.Footnote 27 In contrast, the preamble to the DSA – which overhauls and updates the ECD – states that ‘protection of fundamental rights’ is one of its primary objectives, alongside two other classic liberal values: public security and market development.Footnote 28 This growing importance can be explained partly by the increased social and political significance of online platforms, which has produced a consensus that they must be regulated to protect ‘European values’,Footnote 29 and partly by broader developments in EU fundamental rights law, such as the Charter becoming binding and the developing jurisprudence on states’ positive obligations.Footnote 30

Section 2(A) first outlines the four main ways EU social media law draws on fundamental rights. Section 2(B) then briefly reviews academic literature on EU fundamental rights and international human rights law (IHRL) in this context. Legal literature on social media is extensive and diverse, but a large proportion of critical EU law research relies on fundamental rights as the key normative framework, focusing on potential threats to or inadequate protection of fundamental rights. Outside the EU, activists and academics have influentially argued that IHRL should play a greater role in social media governance.

A. Fundamental rights in EU law

Fundamental rights play four major roles in EU social media law. First, mandating respect for rights frequently serves as a key safeguard against excessive censorship by platforms. For example, the controversial 2021 Terrorist Content Regulation (TCR) requires platforms to remove content flagged by law enforcement within one hour.Footnote 31 Its recitals repeatedly mention that authorities and platforms handling removal orders must respect fundamental rights.Footnote 32 The even more controversial 2019 Copyright Directive (CD) created a new liability regime in which platforms must make best efforts to ensure unavailability of unlicensed copyright works, effectively requiring filtering of all user uploads.Footnote 33 Article 17(10) states that industry best practices on enforcement should be guided by fundamental rights.Footnote 34

In this context, as well as serving as guiding principles, fundamental rights safeguards are operationalised through legal rights for individual users to contest moderation decisions. Article 17(9) CDFootnote 35 and Article 10 TCRFootnote 36 require platforms to offer appeals systems for users whose content is removed pursuant to platforms’ legal obligations. The DSA extends this obligation to all moderation, and additionally allows users to appeal to independent out-of-court institutions.Footnote 37 This regulatory approach ultimately aims to prevent arbitrary moderation by empowering individuals to defend their rights.

Second, several recent EU regulations incorporate ‘private ordering’: platforms are required to take action to pursue certain goals, but can decide largely autonomously how they do so.Footnote 38 Such legislation typically envisages fundamental rights as a guiding principle for, and constraint on, such actions. For example, under Article 5 TCR, platforms must take ‘specific measures’ to address terrorist content; an indicative list of possible measures is given, but platforms have broad discretion.Footnote 39 Similarly, Article 28b AVMSD requires video-sharing platformsFootnote 40 to take appropriate and proportionate measures to protect children from harmful content, and the public generally from illegal content such as hate speech.Footnote 41 Both provisions specify that these proactive measures must have regard to users’ fundamental rights. The DSA gives fundamental rights an even broader role. Article 14(4) requires all platforms to have regard to users’ rights when implementing moderation policies.Footnote 42 Articles 34 and 35 require ‘very large online platforms’ with over 45 million EU users to regularly evaluate and address ‘systemic risks’ to various public values, including fundamental rights.Footnote 43

Third, fundamental rights principles serve as judicial constraints on EU legislation. In the leading SABAM cases,Footnote 44 the European Court of Justice (ECJ) established that the ECD’s prohibition of general monitoring obligations means hosting platforms cannot be required to check all uploads for illegal material. Central to both decisions was the balancing of copyright owners’ IP rights against users’ rights to privacy and freedom of expression and information, and platform companies’ freedom to conduct a business. More recently, the Polish government brought an unsuccessful judicial review case against Article 17 CD, arguing that requiring advance filtering of user content (which is inevitably somewhat over-inclusive) was incompatible with fundamental rights, in particular freedom of expression. The ECJ held that Article 17 is compatible with fundamental rights, but only if interpreted narrowly, to minimise removal of non-infringing content.Footnote 45

Finally, rights play a prominent rhetorical role in orienting and legitimising EU social media regulation. As noted above, fundamental rights protection is framed as a key objective and guiding value of the DSA – both in the legislation itself,Footnote 46 and in EU officials’ surrounding press statements.Footnote 47 This shared understanding can shape the functioning of the EU regulatory regime as a whole, including aspects which are not based on individual fundamental rights claims, such as the regulatory oversight by the Commission and national digital services coordinators (DSCs).Footnote 48 The consensus that fundamental rights protection is a key goal of the DSA can be expected to influence how these actors engage with platforms and understand their regulatory responsibilities. This is especially the case as some aspects of the regulatory framework explicitly encourage them to frame their goals in rights terms. For example, industry codes of conduct led by the Commission must address the categories of ‘systemic risk’ specified in Article 34(1), which prominently include risks to fundamental rights.Footnote 49

B. Human rights in academic debates

This framing is also dominant in academic debates. Much of the leading scholarship on EU platform regulation evaluates its compatibility with fundamental rights.Footnote 50 For other scholars, fundamental rights provide a general normative orientation for analysing the implications of practices like algorithmic recommendations and for guiding proposals for future regulation.Footnote 51 This is by no means the only normative perspective represented in the literature. For example, competition scholars have focused on market structure and economic power,Footnote 52 while others address broader social concerns including media pluralismFootnote 53 and the commercialisation of online discourse.Footnote 54 Overall, however, fundamental rights provide a prominent, widely accepted normative framework for critiques of social media law.

Another, more international body of literature calls for IHRL to play a greater role in social media governance.Footnote 55 Three broad strands in this literature can be identified. First, there have been calls for platforms to voluntarily respect or formally consider IHRL in decision-making processes, as a form of corporate social responsibility commitment.Footnote 56 This has been advocated prominently by former UN freedom of expression rapporteur David Kaye,Footnote 57 as well as the coalition of human rights NGOs behind the Santa Clara content moderation principles.Footnote 58 Such calls typically attach particular weight to the UN Guiding Principles on Business and Human Rights (UNGPs), which outline a ‘moral responsibility’ for businesses to respect IHRL.Footnote 59 In a sympathetic but critical overview of this literature, Evelyn Douek identifies six key commonly-cited benefits: strengthening legitimacy; providing globally applicable norms; providing a ‘common vocabulary’ for debates; helping companies resist state censorship demands; providing procedural safeguards; and being ‘the least-worst option available’.Footnote 60 She also highlights difficulties including the indeterminacy of IHRL and the possibility that it will simply serve as legitimating rhetoric for companies, but concludes that it can be useful and calls for multi-stakeholder debate to further develop IHRL norms for the social media context.

A second strand makes similar arguments for IHRL’s benefits, but it places more emphasis on reining in platforms whose scale and power threaten users’ rights. Accordingly, instead of voluntary commitments, it calls for dominant platforms to be directly bound by IHRL. For example, Agnès Callamard argues that, in an ideal world, a new international human rights treaty would create duties for large social media platforms (while acknowledging that this is unlikely and has drawbacks in practice).Footnote 61 Other authors interpret existing IHRL norms as already creating such obligations in some circumstances – in particular, where dominant ‘gatekeeper’ platforms can significantly restrict freedom of expression.Footnote 62

A final strand emphasises states’ positive obligations, which are well-established under major human rights treaties. The UNGPs state that businesses are morally obliged to respect human rights, but states are legally obliged to protect them, including by regulating businesses. Positive obligations to protect rights including freedom of expression by regulating private actors are also well-established in European human rights jurisprudence.Footnote 63 Accordingly, it is argued that states must regulate social media to ensure platforms do not significantly interfere with users’ rights.Footnote 64 Like the EU law literature discussed above, such perspectives criticise state regulation which mandates or incentivises over-broad censorship. They also call for regulation to ban arbitrary censorship by platforms, and to require proactive action on issues like online hate speech.Footnote 65

3. Human rights as a legal framework

Critical scholarship has long argued that legal frameworks based on individual rights are unsuited to addressing systemic inequalities. These arguments are highly relevant in the context of social media, where unequal outcomes like those discussed in the introduction principally result from systemic issues which are not reducible to individual rights. Building on established critiques of human and fundamental rights law, Section 3(A) shows that the individual procedural rights established in the CD, TCR, and DSA are incapable of addressing systemic inequalities in content governance. Section 3(B) then discusses fundamental rights as general guiding principles for social media governance, showing that they are in practice unlikely to place meaningful constraints on social media companies.

A. Individual rights claims

David Kennedy suggests that human rights frameworks often emphasise ‘participation and procedure’ over material resources and capabilities.Footnote 66 This tendency is highly visible in EU social media policy. Individual procedural rights – most importantly the ability to appeal content removals – have become the go-to mechanism to strengthen accountability in content moderation. In the CD and TCR, these appeals procedures are the key safeguard against state-mandated or private censorship.Footnote 67 In the DSA, they are one element of a broader framework which also involves more systemic safeguards, such as oversight by national DSCs, and general mandates for content policies to respect fundamental rights.Footnote 68 However, the notice and appeal procedures involve the most concrete, detailed and extensive obligations.Footnote 69 One expert has described the DSA as ‘in its essence…a digital due process regulation bundled with risk management tools’.Footnote 70

Giovanni De Gregorio describes this regulatory approach as ultimately based on a liberal ethos which aims to protect individual autonomy and dignity by enabling individuals to understand and contest decisions which affect them.Footnote 71 This very much aligns with the liberal values traditionally identified as central to human rights law.Footnote 72 It is plausible that the reliance on fundamental rights as the primary safeguard against abuses of power in the EU legal framework, and their prominence in surrounding political discourse, have predisposed EU legislators towards this approach – especially as such procedural remedies are often a key demand of scholars and NGOs advocating a greater role for human rights in social media governance.Footnote 73

However, regulation focused on individual rights and due process is inherently limited in two ways. First, even where individual rights claims are in principle relevant, in that the issues at hand primarily affect legally protected interests of identifiable individuals, in practice they inevitably offer imperfect and unequal protection. Second, more fundamentally, individual legal rights simply cannot address many important issues related to contemporary social media: in particular, those that result from systemic or institutional design choices, and that affect collective rather than individual interests.

Practical limitations

Even where decisions directly affect individual rights and are open to challenge by those individuals – for example, when platforms arbitrarily censor content that does not violate their stated policies, or content protected by mandatory copyright exceptions – several factors suggest that most people’s rights will not be effectively protected in practice. First, these procedural protections are unlikely to be widely used. Evidence from the longstanding notice-and-takedown system in US copyright law shows that people rarely submit appeals.Footnote 74 Many users simply lack the time, energy or motivation. They may also be intimidated by needing to state that their content does not infringe copyright, potentially starting a legal conflict with corporate rightsholders,Footnote 75 while often facing intentionally off-putting messages from platforms.Footnote 76 Similar patterns can be expected in other policy areas. Non-expert users are unlikely to confidently understand legal categories such as hate speech, and research shows that users generally have little understanding of platforms’ content policies.Footnote 77 This will inevitably limit their ability to challenge platforms’ application of these norms. The copyright literature suggests that formally protecting rights without considering the context in which they are exercised, and the imbalances of knowledge and power between users and platform companies, is unlikely to be effective in practice.

Second, rights frameworks inevitably fail to represent everyone’s rights equally. Generally, individuals with more economic and social capital are more likely to have the time and informational resources needed to enforce legal rights.Footnote 78 Inequalities in digital literacy are also highly relevant in this context: many people, disproportionately the economically and socially disadvantaged, do not understand basic features of social mediaFootnote 79 and are thus unlikely to engage with appeals procedures. Some evidence also suggests that women are less likely than men to submit appeals, and more likely to be discouraged from sharing content in future.Footnote 80 In Germany, some users have successfully sued platforms for violating their own terms and conditions – which must be interpreted so as to adequately respect users’ constitutional rights – by removing content arbitrarily or without notice. A recent analysis of the case law indicates that this possibility has so far mostly been used by right-wing men whose content was removed as hate speech.Footnote 81 While some or all of those cases may have been well-founded, this skewed uptake of rights meant to protect everyone illustrates the practical limitations of individual legal rights in promoting equality.

Insofar as enforcement of legal rights reflects existing social inequalities, their distributional effects will tend to be regressive, shifting limited resources towards those individuals who are best able to enforce their rights.Footnote 82 Compliance with the DSA’s detailed procedural obligations is expected to be resource-intensive.Footnote 83 This is especially relevant for smaller platforms, but leaked information and journalistic investigations suggest that even at the largest and wealthiest companies, moderation and security teams are overstretched and under-resourced.Footnote 84 These procedural rights might thus not only disproportionately benefit relatively privileged individuals, but could divert resources from other areas – such as systemic improvements to moderation processes,Footnote 85 or research into safer technological designFootnote 86 – that might bring more benefits to marginalised users.

Notably, some of these limitations are recognised in Advocate General Øe’s Opinion in Poland’s judicial review case against Article 17 CD. He clearly acknowledges that appeals procedures are insufficient to protect freedom of expression and information: many users will not submit them, and even successful appeals may be too late for the content to have its intended impact (he does not address inequalities between users).Footnote 87 To minimise mistaken removals, his favoured interpretation requires platforms to block only content which manifestly infringes copyright and is not covered by an exception.Footnote 88 This was largely followed by the ECJ, which did not use the word ‘manifest’ but stated that content should not be filtered where determining infringement requires independent (manual) assessment.Footnote 89 The judgement thus recognises that individual procedural protections are an insufficient safeguard against censorship: filtering systems must rather be designed to minimise it from the start.

However, neither Opinion nor judgement specifies how this should be legally guaranteed. Advocate General Øe suggests platforms blocking non-infringing content could lose their immunity from intermediary liability for infringementFootnote 90 – which would produce the odd scenario that users’ rights to share content based on or similar to copyright material rely on litigation by copyright owners. Otherwise, concrete legal solutions are not proposed. Instead, both judgementFootnote 91 and OpinionFootnote 92 state that member states must ensure that their implementing legislation and its supervision by judicial and administrative authorities protect non-infringing content, and that the stakeholder dialogues on best practices for content filtering must ensure protection of users’ rights.

The fact that both leave it to other actors to determine how overblocking should be prevented could be taken to illustrate the difficulty of identifying solutions to structural problems while thinking in terms of individual rights. However, even if this reticence is explained on other grounds, such as the ECJ’s limited institutional competence to devise solutions, it is unclear how effective any national-level safeguards will be. Most Member States have so far largely transposed Article 17 word-for-word,Footnote 93 and copyright scholars disagree over what (if any) further action the judgement requires.Footnote 94 Overall, the Article 17 litigation suggests that a structural approach to content governance, protecting collective interests in free exchange of information as well as individual rights, must be clearly incorporated into EU legislation from the outset and not only read in as an afterthought.Footnote 95

Structural limitations

This points to broader limitations of rights frameworks. Even if legal rights could be perfectly and fairly enforced, critics have argued that their inherently individualistic nature makes them unsuited to addressing structural and collective problems. This line of criticism is related to leftist and feminist critiques of legal frameworks focusing on formal equality – often protected through individual rights to equal treatment – over substantive or transformative equality, understood as requiring structural or institutional change.Footnote 96 Similarly, critical legal studies (CLS) and law and political economy scholars have argued that pursuing equality requires a more holistic analysis of how legal institutions allocate power and resources, and make some people more vulnerable to rights violations than others.Footnote 97 Generally, rights claims are not suited to articulating the need for systemic or institutional reforms,Footnote 98 instead offering individuals ‘empowerment…understood as agency within existing constraints’.Footnote 99

In technology law, similar arguments have been forcefully made by privacy and data protection scholars. Leading scholars have increasingly sought to reframe privacy as a collective value serving social interests like political and intellectual freedom, not (only) an individual interest,Footnote 100 and have argued that even perfectly-enforceable rights could not adequately address the social impacts of contemporary data-processing practices, which are essentially collective.Footnote 101 A person’s data is primarily valuable not as information about them, but as part of much bigger datasets which can be used to infer information about and act on others. Thus, data processing may respect the data subject’s rights, but harm other people or society generally. Addressing these effects requires normative frameworks which centre collective interests and democratic control.Footnote 102

These arguments are also highly relevant in the social media context. Many of the most consequential and concerning practices of contemporary social media companies do not primarily involve decisions that directly affect individuals, but higher-level decisions about how technical and operational systems are designed. This has most comprehensively been shown by Douek, who argues that ‘the scale and speed of online speech means content moderation cannot be understood as simply the aggregation of many (many!) individual adjudications’.Footnote 103 The most salient questions are not about individual posts, but how moderation systems function and, since errors are inevitable, which types will be preferred. Rights to understand and contest individual moderation decisions do not allow users to challenge the systemic choices that structure them, even though it is these that underlie many of the systemic biases discussed in the introduction.

For example, strict bans on sexual content – in place at almost all major platformsFootnote 104 – tend to be enforced arbitrarily and disproportionately against LGBTQ+ users,Footnote 105 for several reasons. On the one hand, policy enforcement frequently discriminates against LGBTQ+ people, due to algorithmic bias in AI moderation tools (which are often built on image classification datasets pervaded by homophobia,Footnote 106 or default to blunt censorship of LGBTQ+-related keywordsFootnote 107) and widespread ideological biases which make human moderators disproportionately likely to see queer sexuality as adult or inappropriate.Footnote 108 On the other hand, even assuming unbiased enforcement would be possible, many queer communities place particular value on open and unconventional expressions of gender and sexuality, meaning they will be disproportionately harmed by policies banning explicit or suggestive content.Footnote 109 The appeals processes established by Articles 20–21 DSA would allow users to challenge instances where content which clearly respects platforms’ stated policies on adult content is removed – but not the reasonableness of the policies themselves, the AI systems used to implement them, the widespread cultural prejudices against queer sexuality, or the broader assumptions that all social media should be policed so as to be safe for children.Footnote 110 Individual rights claims cannot achieve the systemic changes to policies, technical tools and company practices that would be necessary for substantively equal treatment of LGBTQ+ users. Nor do they offer democratic oversight or contestation of how these systems are designed.

These rights are also structurally incapable of representing all relevant interests. In particular, enabling individuals to challenge removal of their content fails to represent the collective interests of the content’s potential audience.Footnote 111 Equally, although the DSA in principle allows challenges to decisions not to remove content,Footnote 112 harmful content such as hate speech or misinformation often primarily affects collective interests rather than identifiable individuals, making such challenges less likely.Footnote 113 These problems reflect established limitations of rights frameworks. As Salomé Viljoen demonstrates in the privacy context, they cannot address decisions that are directly about one person, and respect their rights, but have harmful downstream effects for others or for society generally.Footnote 114

Moreover, in addition to moderating content, platforms make many other governance decisions with systemic effects on user behaviour and information flows. These include the content and interactions technically permitted by interfaces (for example, the ease of commenting on strangers’ posts affects the incidence of abuse and harassmentFootnote 115), the presentation of content (for example, TikTok’s presentation of short-form videos with little context appears to encourage misinformationFootnote 116), and recommendation systems (for example, Instagram’s algorithms appear to recommend photos more when users wear revealing clothingFootnote 117). These choices have a much broader impact than individual moderation decisions, but are not easily addressed through individual rights. This may be part of the reason that platform recommendations and other design choices are left largely unregulated by the DSA.Footnote 118 Its focus on content moderation as the primary area of concern makes sense within a rights framework which emphasises harms to identifiable individuals at the expense of broader questions about how platforms shape online media and communications.

Citing Young,Footnote 119 Anna Lauren Hoffmann argues that focusing on ‘rights, opportunities and resources’ fails to capture injustices stemming from the ways information technologies ‘shape normative standards of identity and behavior’.Footnote 120 Such concerns are particularly relevant for social media, as intermediaries for all kinds of media and cultural consumption.Footnote 121 Research in creator studies has shown that professional social media creators perceive moderation and recommendation systems as pervasively biased in favour of white, straight, conventionally attractive creators,Footnote 122 and that the most successful creators are often those whose content conforms to dominant norms and stereotypes around gender, race and class.Footnote 123 This has implications not only for equality between creators, but for social norms and culture much more broadly. For example, survey evidence shows that social media reinforce gendered beauty standards which young women and non-binary people experience as oppressive and often distressing.Footnote 124 Such diffuse, cumulative impacts on culture and media cannot be addressed through individual rights, but require more systemic consideration of the logics and objectives of content curation systems.

Overall, EU regulation of content moderation lends support to arguments that rights frameworks tend towards conservatism, offering better treatment for individuals within existing social institutions rather than institutional reform or democratic governance.Footnote 125 Alexander Somek has argued that EU anti-discrimination law essentially aims to guarantee all individuals fair access to markets, unhindered by irrational prejudices, as opposed to alleviating dependence on markets or the unequal outcomes they produce.Footnote 126 This could aptly describe the DSA’s regulatory approach. Platforms can determine according to their own business interests what content to allow, how their policies are enforced, and how they organise and promote content; procedural rights simply aim to guarantee fair access to these market services. As this section has shown, not only do these rights fail to offer substantively equal protection to all users; they are fundamentally incapable of addressing systemic biases and inequalities in social media governance.

B. Rights as principles

A possible counterargument to this is that individual legal claims are not the only, or even the primary way that fundamental rights are understood and protected in EU social media law. As section 2(A) outlined, they are also operationalised in numerous provisions as general guiding principles for companies and regulators. Private ordering measures like those in the TCR and AVMSD require platforms to consider fundamental rights when implementing their legal obligations,Footnote 127 while the DSA gives fundamental rights a broader guiding role. Article 14(4) requires platforms to be ‘diligent, objective and proportionate’ and have regard to fundamental rights whenever applying and enforcing their terms and conditions. Notably, it explicitly mentions the right to media freedom and pluralism, clearly indicating that fundamental rights are understood here as collective values, not only individual interests. In addition, Articles 34–35 require very large online platforms to regularly assess and take measures to mitigate ‘systemic risks’ to various social values, including fundamental rights.

These provisions could arguably address structural issues like those discussed above, since they require platforms to consider not only whether they are treating individuals fairly, but also whether they are appropriately balancing everyone’s fundamental rights in the design and operation of their systems as a whole – with Articles 34–35 extending to all design and business practices, not only moderation policies.Footnote 128 For example, operating content moderation systems which are systematically biased against LGBTQ+ users could be argued to violate Article 14(4). Articles 34–35 could require large platforms to identify design features that exacerbate problems like misinformation or harassment, and change them to mitigate these risks.

However, the claim that fundamental rights in principle could address certain issues does not mean they are actually likely to be interpreted and enforced in that way. The CLS movement influentially argued that rights (and law generally) are intrinsically indeterminate – meaning that applying them in particular situations inevitably involves significant discretion, and will be influenced by decision-makers’ perspectives and ideologies.Footnote 129 Several factors make the indeterminacy critique particularly relevant in this context, and suggest that fundamental rights will not place significant constraints on platform companies.

First, not only are the Charter rights themselves abstract and open to different interpretations, but the legal provisions requiring platforms to take them into consideration are even more vague. What it concretely means for platforms to ‘take into account’Footnote 130 or have ‘due regard to’Footnote 131 rights is unclear, though it seems obviously less stringent than a requirement to ‘respect’ or ‘protect’ them; arguably companies could comply purely by documenting consideration of relevant rights in decision-making processes, without making any substantive changes.Footnote 132 The novel concept of a systemic risk to rights is even less clear: what does it mean for a right to be at risk, and how widespread must that risk be to be systemic? Moreover, almost any content governance decision affects multiple, competing rights.Footnote 133 The requirement to have regard to all of them offers no indication of how to resolve such conflicts. Since there are so many plausible interpretations of the relevant rights and the appropriate mitigation measures, Articles 14 and 34–35 DSA offer virtually no substantive guidance on content governance.

This uncertainty is compounded by the lack of established standards on how Charter rights should be interpreted in the social media context and in relation to private companies. This is particularly relevant to Article 21 of the Charter on non-discrimination, which would obviously be central in mandating platforms to redress systemic inequalities. By default, the Charter only binds EU and Member State institutions. Some rights, including non-discrimination, can bind private actors where they have been concretised by EU legislation.Footnote 134 However, of the various EU anti-discrimination measures, only the 2000 Race Equality Directive (RED) covers all private services; discrimination on other grounds is only prohibited in specific contexts, such as employment.Footnote 135 Thus, the EU law right to non-discrimination generally does not apply to social media, making it difficult to establish what it would mean for social media companies to have regard to this right.

In practice, the meaning of fundamental rights provisions will in the first instance be determined by platform companies themselves, since they are responsible for showing that they have considered relevant rights, with regulators playing the secondary role of overseeing compliance. A likely outcome is that platforms make whatever decisions they would have made regardless, while going through the formalities of risk assessments and using fundamental rights language to justify them.Footnote 136 Where they do make substantive changes, they will probably prioritise the most superficial and least costly measures. Journalistic investigations have documented multiple cases where major companies’ internal research teams identified changes to recommendation algorithms that could reduce the visibility of harmful content, but company executives rejected them because they could reduce engagement and advertising revenue.Footnote 137 Given these business incentives, companies will likely interpret and balance fundamental rights in ways that require only minor adjustments, rather than making the extensive investments in technology and human resources which would be needed to address systemic biases in content moderation, or redesigning platforms to prioritise other goals over profit maximisation.

Of course, national regulators and the CommissionFootnote 138 can shape the interpretation of the relevant provisions and ensure that compliance is not a mere formality. As well as threatening fines for non-compliance where policies are not considered to have due regard to fundamental rights or risk assessments are deemed inadequate, they can develop abstract rights provisions into more concrete and stringent standards: for example, by publishing guidance and helping develop industry codes and best practices.Footnote 139 In turn, independent research and activism can influence regulators’ agendas, pushing them to focus on systemic issues.Footnote 140

However, the indeterminacy of fundamental rights law will still to some extent limit regulators’ ability to put pressure on platforms. If companies produce self-serving but defensible accounts of how they considered and balanced fundamental rights, it will be difficult for regulators to make a clear case for non-compliance. Regulators’ capacities and motivations to push for resource-intensive, systemic reforms can also be questioned. The DSA regime is generally focused on individual due process and committed to a market-based model of social media,Footnote 141 suggesting that the Commission is not aiming for a particularly interventionist approach. The DSA’s procedural obligations are also much more detailed and specific than the open-ended provisions mandating consideration of fundamental rights, and even enforcing these obligations will be resource-intensive for regulators. This may leave little capacity for proactive investigation and oversight of other provisions where establishing non-compliance would be less clear-cut.

Finally, even if fundamental rights are interpreted as representing collective interests and values, they remain unsuited to addressing systemic problems which go beyond particular companies. Critical scholarship on IHRL and EU fundamental rights law has argued that rights frameworks focus attention not only on individual victims, but also on the wrongdoing of individual perpetrators, at the expense of broader social structures which produce inequality.Footnote 142 Similarly, EU social media law operationalises fundamental rights as guiding principles for individual companies; it thus excludes consideration of how moderation and other aspects of social media governance unfold across the industry.

Returning to the example of sexual content bans, such policies are unlikely to be regarded as violating rights in individual cases: individuals will rarely be severely harmed by being unable to post on a particular platform, and companies could easily defend their policies as a proportionate restriction of free speech, justified by child safety and by their own business interests in appealing to a wide audience. However, the cumulative effect of requiring almost all major platforms to be child-friendly and free of sexual content is deeply concerning. It bars adults from healthy forms of self-expression, impedes access to sexual health advice, and suppresses queer subcultures.Footnote 143 It also sets questionable boundaries for art and culture more broadly, as when museums are prevented from posting images of nude art.Footnote 144 Rights frameworks do not facilitate discussions of when and in what context society generally, and particular communities, need platforms that permit adult content.

Douek’s call for ‘content moderation as systems thinking’Footnote 145 should thus not be limited to considering system design within individual platforms, but should consider how biases and unequal impacts play out across the social media ecosystem. This cannot be achieved by subjecting individual market actors to fundamental rights principles, but requires broader reform of how the industry is governed.

4. Human rights as political discourse

As this suggests, the reliance on fundamental rights as guiding principles does not only encounter practical problems, but also raises broader normative questions about whether this is the most desirable framing to understand and discuss policy issues. In EU technology regulationFootnote 146 and surrounding civil society advocacy,Footnote 147 ‘fundamental rights’ sometimes seems to be used as a synonym for the public interest. All policy concerns can be understood as threats to rights; stronger rights protection must therefore be the solution. This influences legal and policy debates in ways which are generally unlikely to favour progressive goals.

Critical IHRL scholars have argued that human rights discourse can displace or delegitimise alternative normative frameworks focused on structural and political-economic conditions, democratic governance and equality.Footnote 148 While such effects are difficult to conclusively demonstrate, it is strongly arguable that the predominance of rights framings is displacing other normative frameworks in the social media context. Given the consensus around the importance of human rights, researchers and other stakeholders are incentivised to frame issues in rights terms in order to bolster their authority and attract support. These incentives are now also built into the DSA. To challenge platforms’ content policies under Article 14(4), stakeholders must frame issues in fundamental rights terms. Similarly, researchers requesting access to platform data must show that their research relates to one of the systemic risks categorised in Article 34(1).Footnote 149 Fundamental rights offer the broadest and most flexible category, meaning that unless research involves another more specific area, such as electoral integrity, researchers will generally have to frame issues in terms of their fundamental rights impacts. This may not be completely incompatible with alternative normative frameworks focused on more collective values, like justice or democracy, but is likely to displace them to some extent.

One implication of this is the depoliticising nature of human rights discourse. Human rights purportedly express universal values,Footnote 150 and thus promise a way of making authoritative normative claims which bypass political disagreements.Footnote 151 However, choosing to understand issues in terms of individual legal rights does have political implications. The extent to which the individualism of human rights frameworks favours right-wing politics is disputed,Footnote 152 but they are widely considered to align with liberal ideologies which emphasise individual autonomy over other values.Footnote 153 By focusing on protecting individuals against rights violations by identifiable perpetrators, human rights can divert attention from the macro-level social, political and economic context. Susan Marks describes this as ‘false contingency’: focusing on particular instances of individual harm can make them seem ‘random, accidental or arbitrary’, and therefore fail to address the underlying structural conditions.Footnote 154 Indeed, focusing on isolated rights violations may legitimate these conditions, by normalising activities which do not violate rights directly and obviously.Footnote 155

This depoliticising tendency is apparent in the social media literature. Authors advocating a greater role for human rights often emphasise legal form and procedure over political and normative substance: for example, arguing that IHRL would not prevent platforms from setting content policies at their discretion, but would improve matters by requiring transparency and due process.Footnote 156 The impression is often that they do not mind by whom and in whose interests social media are governed, provided they follow some basic procedural rules. Nonetheless, how media and communications infrastructure are governed is a political question, with distributional implications. The implicit view that the current marketised industry is acceptable, so long as corporations respect human rights and due process norms, is a political and ideological position – one that is rarely explicitly defended.

Another recurring argument is that human rights provide a common language and structured framework to address problems, without necessarily prescribing determinate solutions.Footnote 157 However, it is misleading to present having a common language as unqualifiedly good, as if any shared language is equivalently useful. Such claims have a further depoliticising effect, suggesting that lack of consensus about policy issues is due to miscommunication between stakeholders, rather than power imbalances, conflicting interests or fundamental disagreements.Footnote 158 Human rights language is also technical and accords particular authority to legal experts, which may exacerbate power differentials and limit participation in social media governance by those lacking this expertise.Footnote 159

More fundamentally, language is not neutral, but structures our thinking.Footnote 160 The dominance of fundamental rights framings not only encourages reliance on ineffective individual remedies, but diverts attention from systemic and collective issues, instead focusing attention on those which are readily understood in terms of harm to individuals. Legal research on social media has generally been dominated by discussions of content moderation, which is easily conceptualised in terms of individual users’ free speech rights, rather than issues like recommendations and platform design. Similarly, scholarship on profiling by social media companies has focused on individual judicial remedies for discrimination,Footnote 161 even as the unequal impacts of surveillance advertising – for example, when women are overall less likely to see a job advert – primarily exacerbate inequalities at the group level.Footnote 162 Discussing injustice in terms of universal rights can also obscure the particular interests and vulnerabilities of marginalised groups.Footnote 163 It is notable that social injustice and discrimination are major themes in other areas of European technology law scholarship, such as AI and workplace technologies,Footnote 164 but have so far been less prominent in social media law, where debates have tended to focus on supposedly-universal issues like freedom of expression.

The unequal impacts of social media cannot be adequately understood without considering the political economy of the contemporary industry, which is not readily analysed in rights terms. Given major platforms’ business models, a core aim of their content governance systems is to attract advertisers.Footnote 165 This commercial business model is inherently in tension with aspirations to create more inclusive and egalitarian online environments. Platforms value media content according to its potential to keep users engaged and offer a suitable vehicle for adverts,Footnote 166 and profile users according to their potential value as consumers, which will inevitably reflect structural social inequalities.Footnote 167 Advertisers demand audiences segregated by binary genderFootnote 168 and crude racial categories,Footnote 169 and encourage platforms to ban challenging or controversial content which could threaten ‘brand safety’.Footnote 170 Human rights are not suited to criticising and addressing issues like these, given their inherent bias towards micro-level decisions rather than macro-level political-economic conditions.

This also points to the relevance of arguments that rights frameworks place insufficient emphasis on equality and democratic governance.Footnote 171 Human rights law is structurally oriented towards setting outer limits for acceptable state or corporate action, rather than shaping the underlying logics and objectives which these institutions pursue. In social media governance, rights discourse focuses attention on the details of how companies run their platforms: for example, whether they respect procedural and substantive constraints in individual moderation decisions,Footnote 172 or assess the human rights impacts of particular products or policies.Footnote 173 This distracts attention from more fundamental questions about how, by whom and in whose interests online media should be run. Calling for dominant corporate platforms based on surveillance advertising to operate within limits set by fundamental rights misses the opportunity to envisage alternative governance systems, pursuing different aims that might better serve the public.

Indeed, rights discourse can legitimise and stabilise current configurations of corporate power. Grietje Baars argues that using IHRL to promote ‘corporate accountability’ serves corporate interests by framing injustice in terms of exceptional wrongdoing, rather than the normal functioning of unjust systems.Footnote 174 Conversely, idealistic human rights language allows corporations which avoid obvious violations to position themselves as morally worthy, normalising harmful and unequal impacts of their everyday business practices.Footnote 175 This is apparent in the UNGPs’ claim that companies have a ‘moral responsibility’ to respect rights, and in much of the literature using them to argue that platforms should voluntarily respect IHRL, which appears to assume that, if put under enough moral pressure, platform companies can steward social media in the public interest.Footnote 176 Platform companies – most notably Meta – have played on these assumptions, relying heavily on human rights rhetoric to deflect criticism and portray themselves as socially responsible.Footnote 177

Academic literature often portrays human rights law – perhaps correctly – as unthreatening to corporate interests. Barrie Sander suggests that the inevitability of balancing competing rights would mean platforms could still set discretionary content policies, and emphasises that IHRL obligations would not ‘over-burden’ companies but would be tailored to minimise disruption to business.Footnote 178 Other authors stress that users’ rights must be balanced against platforms’ rights to run their businesses.Footnote 179 Corporate platforms’ current business models and objectives are thus portrayed as not only generally acceptable, but worthy of legal protection, and requiring only minor adjustments. None of these authors argues that human rights compliance will solve all problems; however, the centrality of human rights compliance and relatively superficial reforms such as ‘due process’ in the academic literature can give the overall impression that more structural reforms are unnecessary.

Even where scholars do propose more critical and structural analyses of corporate power, the predominance of human rights discourse and the purported status of rights as apolitical, universally-shared values create incentives to frame these arguments in rights terms – even where they are obviously motivated by political commitments which are not universal, or reducible to individual rights.Footnote 180 For example, authors emphasising states’ positive obligations to protect human rights often call for structural market reforms, such as stronger competition regulation.Footnote 181 However, given the indeterminacy of human rights norms, especially regarding states’ positive obligations, it is unclear why they should demand one intervention (such as increasing market competition) and not another (such as regulating dominant platforms as public utilities). Human rights provisions are not doing much work here beyond providing general rhetorical support for policy arguments influenced by other political views about how online media should be governed.

The introduction to a recent edited volume on platforms and human rights places great emphasis on economic and infrastructural power, stating that it is ‘concerned with the democratic implications of having an online domain governed by a relatively small group of powerful technology companies’.Footnote 182 It later notes that the biggest platforms ‘may affect billions of users’ human rights’.Footnote 183 If political problems result from the economic structure of an industry dominated by a few powerful corporations, and affect billions of people, the choice of individual rights as the primary framework for thinking through these problems is questionable. Yet this may be (partly) strategic. In David Kennedy’s words, human rights arguments are ‘addressed to an imaginary third eye – the bystander who will solidarise with the (unstated) politics of the human rights speaker because it is expressed in an apolitical form’.Footnote 184 Structural political–economic reform would inevitably be controversial and conflict with elite interests: platform companies have some of the world’s highest stock-market valuationsFootnote 185 and are a major driver of US economic growth.Footnote 186 Authors framing calls for structural reform in rights terms may hope to defuse looming political conflicts and appeal to a wider audience. However, this also comes at a cost. Suggesting that reforms can be based on apolitical shared values rather than open political conflict, and that ensuring powerful actors act morally is more important than redistributing power and resources, will ultimately tend to legitimise the corporate status quo.

5. Alternative human rights frameworks

Criticisms of the liberal–individualistic orientation of human rights are not new. Several authors focusing on technology governance have addressed them by reframing human rights as more structural or collective values. These reconceptualisations are valuable, but they are still unlikely to fully address the unequal impacts of social media; doing so requires additional normative frameworks not based on rights. A more significant challenge to the arguments put forward in this article comes from critical race theorists who, while critical of rights frameworks, nonetheless consider them useful and necessary for social justice movements. Given the longstanding dominance of human rights in social media law, and the political challenges facing more progressive normative visions, rights discourse and legal strategies cannot be abandoned.

A. Structural conceptions of human rights

In technology regulation, several authors have recognised the limitations of individualistic rights and remedies, reframing human rights in terms of collective values or structural conditions. These reconceptualisations are argued to offer more egalitarian, less individualistic approaches to technology regulation, and to better reflect how technological environments condition the enjoyment of rights in practice. While they offer interesting and generative ways of thinking about rights, they do not entirely overcome the limitations identified here.

For example, Sander contrasts ‘marketised’ conceptions of human rights law, which primarily protect individual agency against state intervention, with ‘structural’ conceptions which focus on proactively addressing systemic power imbalances.Footnote 187 He suggests that the ECtHR’s Delfi decision (approving a duty for an Estonian news website to actively monitor all user comments)Footnote 188 took a marketised approach, insofar as it targeted discrete, obvious harms associated with illegal hate speech, while overlooking the systemic risks to free speech created by obliging intermediaries to monitor user activity. In contrast, the Inter-American Commission on Human Rights has taken a more structural approach by holding that intermediary liability laws can have cumulative and systemic effects on freedom of expression which amount to rights violations, even where users’ rights to post legal content are formally protectedFootnote 189; the ECJ’s Article 17 judgement takes a similar position.Footnote 190 Sander advocates wider adoption of structural approaches, suggesting that this would entail greater focus on collective values like media diversity and broader problems in the ‘social media ecosystem’, rather than discrete decisions.Footnote 191

This approach to interpreting human rights law certainly offers advantages. Given the scale of content moderation systems, human rights oversight should address legal interventions’ systemic and indirect effects.Footnote 192 However, insofar as structural interpretations are instantiated by courts in legal claims brought by individuals, many of the problems discussed in section 3(A) will remain. On the other hand, insofar as they serve as more general guiding principles for governments – something Sander favours – the indeterminacy of human rights principles means they offer little guidance unless supplemented with other political commitments, and can easily be interpreted in ways that serve elite interests.

This is illustrated by another of Sander’s examples, the ECtHR’s Animal Defenders International ruling that the United Kingdom’s (UK) strict ban on broadcast political advertising was justified by its positive obligations to protect freedom of expression and free elections, in this case by intervening to prevent undue distortion of political debate by wealthy advertisers.Footnote 193 Considering this case’s institutional context illustrates the benefits and limitations of structural interpretations of rights. The ECtHR’s role is not to positively determine how states should implement human rights norms, but to establish minimum standards of protection. The indeterminacy of rights is embraced via the ‘margin of appreciation’ doctrine, which balances legal accountability with democratic states’ freedom of action, allowing governments to resolve that indeterminacy by making explicitly political judgements about what serves the public interest. Thus, Animal Defenders did not hold that political broadcast advertising must be banned, only that this is one defensible way of balancing negative free speech rights with positive obligations.

In such cases, where human rights function as minimum constraints on state action, the advantage of the structural approach is that it prevents an overly rigid approach to negative liberties which would prevent states from limiting individual rights to pursue collective goals. However, considering structural interpretations of free speech and free elections as positive principles setting out how to regulate political communication does not take us very far. They do not indicate which of the many possible policies to prevent the wealthy from unduly influencing the media are most desirable, or how the distributional effects of these political choices should be evaluated. For example, they do not explain whether it is reasonable for UK governments to ban all broadcast political advertising in the name of equal participation in political debate, while simultaneously embracing an oligopolistic media system where four individuals control three-quarters of newspaper circulation.Footnote 194 Equally, structural interpretations of human rights do not by themselves offer a positive vision for social media governance, unless they are supplemented by more collective and politicised normative frameworks which are not ultimately based on human rights.

Other technology regulation scholars have argued for a different understanding of human rights, which could also be called structural, in that it focuses on how sociotechnical environments condition rights in practice.Footnote 195 Julie Cohen’s argument for ‘rights-as-affordances’ holds that effectively protecting human rights requires understanding them as collective values, because people can only enjoy individual freedoms if the shared material environment accommodates them. Since our technological environment is heavily shaped by private corporations, protecting collective rights-as-affordances must also involve confronting private power.Footnote 196 In the social media context, such structural perspectives can be seen in some human rights activism advocating market interventions like competition regulation, as a means to strengthen freedom of expression by affording users more choice between platforms.Footnote 197

Although this also improves on individualistic rights framings, rights language may not be the only or best way to analyse how private power operates through sociotechnical environments. First, it again sidelines issues that are less easily framed in terms of individual rights. For example, thinking about how technologies afford or deny rights could be a productive way of critiquing the copyright-filtering systems required by Article 17 CD.Footnote 198 However, it is less easy to analyse in rights terms the algorithms that determine which content becomes widely visible. These also raise concerns about how private power is exercised through design, but their effects on culture, social norms and political debate are more diffuse and harder to capture in rights terms.

Second, the language of rights-as-affordances may focus analysis on technical design choices, rather than the political-economic conditions that shape them. Cohen is an exception, but other scholarship and activism on how technology conditions human rights primarily emphasises liberal values like individual autonomy, rather than inequality and structural disadvantage.Footnote 199 As famously illustrated by Langdon Winner’s discussion of how New York bridges facilitated racial segregation, sometimes sociotechnical environments are designed precisely to afford rights to some while denying them to others.Footnote 200 Framing such issues in terms of universal rights can depoliticise them, downplaying questions about whose interests sociotechnical environments are built to serve.

For example, Article 17 CD can be critiqued on the basis that it gives too much weight to rightsholders’ interests and not enough to users’ fundamental rights.Footnote 201 However, this does not necessarily capture the actual aims and sociotechnical context of the provision. Annemarie Bridy argues that it was essentially designed around the pre-existing technical affordances of YouTube’s Content ID copyright filtering system, as music and third-party software companies successfully lobbied for platforms to be required to offer such services.Footnote 202 Framing this as a fundamental rights balancing exercise can paint a misleading picture of a process in which the technical systems that enforce copyright, and the legal rules around them, were designed from the start to serve private interests, not to achieve concordance between competing universal values. Alternative theoretical approaches, such as Cohen’s detailed account of how corporations shaped the law and political economy of today’s privatised, hypercommercialised internet industry, could provide a more useful starting point for critiques of EU social media regulation.Footnote 203

Overall, therefore, even structural conceptions of rights cannot by themselves address social media’s unequal impacts. They still focus attention on supposedly-universal interests and liberal values, diverting attention from institutions and social structures that systematically disadvantage some while benefiting others. Arguments for structural reforms of social media are essentially political, and can be better expressed without using the language of individual rights and universal values. In technology law, a possible alternative approach is exemplified by Niklas Eder’s work on algorithmic decision-making.Footnote 204 He rejects individualistic rights-based solutions, because they fail to engage with the systemic patterns and effects of corporate surveillance. Instead, he argues privacy regulation should focus on the legitimacy of surveillance, acknowledging that the concept of legitimacy is open to different meanings depending on underlying political philosophies, and that substantially reforming corporate surveillance would necessarily be politically contentious.

B. Ambivalent views of rights

Another influential rethinking of rights which addresses their capacity to redress structural oppression is provided by Kimberlé Crenshaw, building on other critical race theorists such as Patricia Williams.Footnote 205 Crenshaw directly challenges CLS arguments that relying on legal rights is counterproductive for progressive movements. Although largely in agreement with their basic points that rights are indeterminate and easily manipulated to justify desired decisions, and form part of a legal ideology which stabilises and legitimates the prevailing social order, she argues forcefully that CLS scholars overlook the need for social movements to make pragmatic compromises.

First, Crenshaw observes that critiques often implicitly suggest that rights should be abandoned in favour of a superior political strategy for pursuing equality, but that ‘no such strategy has yet been articulated.’Footnote 206 Subordinated social groups, by definition, have relatively limited ways to put pressure on more powerful groups, and legal rights may offer the best option. Second, she suggests that legitimation is double-edged. Rights legitimise unjust social structures, and movements relying on rights must accommodate themselves to those structures. However, this also means rights can legitimise these movements’ claims in a way that resonates broadly. Rights arguments which threaten the state’s legitimacy by pointing out its failure to respect its own stated values can be an effective lever for change. Brown makes similar points in her analysis of the paradoxes rights present for feminists, arguing that even as rights discourse has legitimised existing power structures and promoted liberal ideology, it has achieved meaningful progress and is effectively indispensable as a legal and political strategy.Footnote 207 More recently, Odette Mazel has argued for a reparative reading of pro-LGBTQ+-rights litigation, understanding it as pursuing change within existing constraints without necessarily accepting or misunderstanding those constraints.Footnote 208

These arguments are highly relevant in the social media context. This article has argued that human rights cannot satisfactorily address structural inequalities in social media governance, and called for greater emphasis on alternative normative frameworks, especially regarding the political economy of social media. However, rights cannot simply be abandoned, given their central role in the established legal regime: legal and political challenges to platform practices have little choice but to rely on them. Moreover, more explicitly progressive normative frameworks would inevitably face disagreement. Centring questions of political economy makes it apparent that the current configuration of power in the industry benefits powerful elites, and that structural industry reforms would face serious political challenges.Footnote 209 In this context, the strategic usefulness of linking calls for reform with the widely-endorsed and authoritative framework of fundamental rights law will often outweigh the disadvantages.

In this respect, literature on legal mobilisation has highlighted the potential for collective action to take advantage of rights frameworks, and to compensate for some of their individualistic and depoliticising aspects. For example, strategic litigation by associations can highlight systemic issues and represent the interests of vulnerable social groups.Footnote 210 Within the DSA framework, fundamental rights norms create some space for collective challenges to systemic injustice. Article 14(4) in principle enables regulators to address the substantive content and system-level enforcement of platforms’ content policies, not only their application in specific decisions.Footnote 211 The DSA also empowers users and – importantly – associations to complain to regulators about breaches of platforms’ obligations, including Article 14(4).Footnote 212 Additionally, the ongoing development of codes of conduct, which will play a major role in concretising platforms’ obligations under the AVMSD and DSA,Footnote 213 offers opportunities for independent researchers and other stakeholders to shape regulatory strategies and the interpretation of fundamental rights obligations. For example, they could advocate for codes to include more concrete requirements regarding platform design, the resources allocated to moderation and safety measures, and investigation and mitigation of systemic bias.Footnote 214 The fundamental rights framework ultimately constrains the terms in which such challenges and advocacy can be expressed, for example by limiting consideration to individual companies’ conduct rather than industry-wide problems. Nonetheless, it offers a basis for collective challenges to unequal treatment.

6. Conclusion

If our aim is to create a more just and egalitarian online public sphere, a world in which profit-driven multinational corporations comply with the minimum standards of IHRL or the EU Charter is, in Samuel Moyn’s words, not enough.Footnote 215 What human rights require of states is already highly indeterminate and disputed; for companies, even more so.Footnote 216 Nonetheless, it is relatively clear that this would not prevent corporations from setting online speech norms based on profitability; distributing information in ways that reproduce structural inequalities; or channelling online culture and communications in the predictable, homogenised directions most conducive to advertising. Nor would it make social media governance more democratic. Europe has the world’s oldest traditions of independent public-service media, founded on the belief that wholly privatised broadcast media systems cannot serve the public interest even if they are well-regulated. On this view, it is inherently problematic that online media are governed by profit-driven conglomerates, even if they are subject to human rights obligations.

Who should own and control the media, and how online speech should be governed, are highly political questions which cannot be answered based on universally shared values. Any project for social media regulation relies, implicitly or explicitly, on some political vision. To actively redress structural social inequalities, such visions must be guided by collective interests and address questions of political economy that do not fit within a rights framework. Structural reform of the social media industry may seem a distant prospect, making it tempting to retreat into the seemingly apolitical, consensual zone of human rights discourse. However, the entrenched dominance of corporate platforms only makes it more important to develop clear conceptual frameworks to challenge and criticise them. It is clear that fundamental rights will continue to play prominent roles in EU social media law, and that they offer a way to make political claims that resonate broadly, so progressive research and activism cannot abandon rights discourse entirely. The challenge is to simultaneously develop convincing critiques based on less individualistic normative frameworks, and to rely strategically on fundamental rights while recognising their limitations.

Acknowledgements

Thank you to Helena Alviar, Beatriz Botero Arcila, Marija Bartl, Séverine Dusollier, Brenda Dvoskin, Alexia Katsiginis, Jeremy Perelman, Barrie Sander and two anonymous reviewers for helpful comments on earlier drafts. Thank you also to the organisers of the GIG-ARTS Conference, the WZB Workshop on Radical Approaches to Platform Governance, and the Leibniz Media Lunch Talks for opportunities to present and receive feedback on this work.

Funding statement

This work received no specific grant from any funding agency, commercial or not-for-profit sectors.

Competing interests

The author has no conflicts of interest to declare.

References

1 E Dwoskin, N Tiku and C Timberg, ‘Facebook’s Race-Blind Practices around Hate Speech Came at the Expense of Black Users, New Documents Show’ (The Washington Post, 21 November 2021) <https://www.washingtonpost.com/technology/2021/11/21/facebook-algorithm-biased-race/> accessed 11 January 2022.

2 KL Gray and K Stein, ‘“We ‘Said Her Name’ and Got Zucked”: Black Women Calling-Out the Carceral Logics of Digital Platforms’ 35 (4) (2021) Gender & Society 538; E Siapera and P Viejo-Otero, ‘Governing Hate: Facebook and Digital Racism’ 22 (2) (2021) Television & New Media 112.

3 Dwoskin et al (n 1).

4 N Duarte, E Llansó and A Loup, ‘Mixed Messages? The Limits of Automated Social Media Content Analysis’ 81 (2018) Proceedings of Machine Learning Research 106; E Llansó et al, Artificial Intelligence, Content Moderation, and Freedom of Expression (Transatlantic Working Group 2020) <https://www.ivir.nl/publicaties/download/AI-Llanso-Van-Hoboken-Feb-2020.pdf> accessed 2 September 2022; J Cobbe, ‘Algorithmic Censorship by Social Platforms: Power and Resistance’ 34 (2020) Philosophy & Technology 739.

5 D Kreiss, B Barrett and M Reddi, ‘The Need for Race-Conscious Platform Policies to Protect Civic Life’ (Tech Policy Press 2021) <https://techpolicy.press/the-need-for-race-conscious-platform-policies-to-protect-civic-life/> accessed 21 March 2022.

6 R Albergotti, ‘Black Creators Sue YouTube, Alleging Racial Discrimination’ (The Washington Post, 18 June 2020) <https://www.washingtonpost.com/technology/2020/06/18/black-creators-sue-youtube-alleged-race-discrimination/> accessed 21 March 2022; M McCluskey, ‘These TikTok Creators Say They’re Still Being Suppressed for Posting Black Lives Matter Content’ (Time, 22 July 2020) <https://time.com/5863350/tiktok-black-creators/> accessed 21 March 2022.

7 Salty, Exclusive Report: Censorship of Marginalized Communities on Instagram (Salty 2021) <https://saltyworld.net/exclusive-report-censorship-of-marginalized-communities-on-instagram-2021-pdf-download/> accessed 21 March 2022.

8 D Blunt et al., Posting Into the Void (Hacking//Hustling 2020); R Sultan, ‘Inside Social Media’s War on Sex Workers’ (Bitch Media 2021) <https://www.bitchmedia.org/article/inside-social-medias-war-on-sex-workers> accessed 17 November 2021; D Blunt and Z Stardust, ‘Automating Whorephobia: Sex, Technology and the Violence of Deplatforming – An Interview with Hacking//Hustling’ 8 (4) (2021) Porn Studies 350.

9 OL Haimson et al, ‘Disproportionate Removals and Differing Content Moderation Experiences for Conservative, Transgender, and Black Social Media Users: Marginalization and Moderation Gray Areas’ 5 (2021) CSCW2 Art 466 Proceedings of the ACM on Human-Computer Interaction 1 <https://dl.acm.org/doi/10.1145/3479610> accessed 11 January 2022; C Are and S Paasonen, ‘Sex in the Shadows of Celebrity’ 8 (4) (2021) Porn Studies 411; AE Waldman, ‘Disorderly Content’ 97 (2022) Washington Law Review 907 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3906001> accessed 17 November 2021; S Katyal and J Jung, ‘The Gender Panopticon: Artificial Intelligence, Gender, and Design Justice’ 68 (2021) UCLA Law Review 692; A Monea, The Digital Closet: How the Internet Became Straight (MIT Press 2022).

10 OH Gandy, ‘Matrix Multiplication and the Digital Divide’ in L Nakamura and P Chow-White (eds), Race After the Internet (Routledge 2013) 128–145; T Phan and S Wark, ‘What Personalisation Can Do for You! Or: How to Do Racial Discrimination Without “Race”’ 20 (2021) Culture Machine <https://culturemachine.net/vol-20-machine-intelligences/what-personalisation-can-do-for-you-or-how-to-do-racial-discrimination-without-race-thao-phan-scott-wark/> accessed 2 September 2022.

11 M Ali et al, ‘Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes’ 3 (2019) CSCW Art 199 Proceedings of the ACM on Human-Computer Interaction 1 <https://doi.org/10.1145/3359301> accessed 17 November 2021.

12 N Bol, J Strycharz, N Helberger, B van de Velde and CH de Vreese, ‘Vulnerability in a Tracked Society: Combining Tracking and Survey Data to Understand Who Gets Targeted with What Content’ 22 (11) (2018) New Media & Society 1996; Phan and Wark (n 10).

13 R Bivens, ‘The Gender Binary Will Not Be Deprogrammed: Ten Years of Coding Gender on Facebook’ 19 (6) (2015) New Media & Society 880; K Cotter et al, ‘“Reach the Right People”: The Politics of “Interests” in Facebook’s Classification System for Ad Targeting’ 8 (1) (2021) Big Data & Society <https://doi.org/10.1177%2F2053951721996046> accessed 11 January 2022.

14 S Bishop, ‘Anxiety, Panic and Self-Optimization: Inequalities and the YouTube algorithm’ 24 (1) (2018) Convergence 69.

15 Ibid.

16 C Southerton et al., ‘Restricted Modes: Social Media, Content Classification and LGBTQ Sexual Citizenship’ 23 (5) (2021) New Media & Society 920.

17 IM Young, Justice and the Politics of Difference (Princeton University Press 1990).

18 European Commission, The EU Code of Conduct on Countering Illegal Hate Speech Online (European Commission 2016) <https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en> accessed 18 November 2021.

19 Consolidated text: Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) (codified version) [2018] OJ L 095 (Audiovisual Media Services Directive).

20 ‘European Parliament legislative resolution of 5 July 2022 on the proposal for a regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC’ <https://www.europarl.europa.eu/doceo/document/TA-9-2022-0269_EN.html> accessed 2 September 2022 (Digital Services Act).

21 F Wilman, ‘The EU’s System of Knowledge-Based Liability for Hosting Service Providers in Respect of Illegal User Content – Between the e-Commerce Directive and the Digital Services Act’ 12 (3) (2021) JIPITEC 317. The term ‘fundamental rights’ is used more often than ‘human rights’ in the EU, where it refers specifically to the rights developed as binding principles by the ECJ and now set out in the Charter of Fundamental Rights. In this paper, ‘fundamental rights’ refers to these EU law rights, and ‘human rights’ is a broader umbrella term for any rights protected in international law.

22 Arts 14 and 34-35, Digital Services Act (n 20).

23 L Dencik et al, Data Justice (SAGE 2022).

24 SP Gangadharan, ‘Media Justice and Communication Rights’ in C Padovani and A Calabrese (eds), Communication Rights and Social Justice (Palgrave MacMillan 2014) 203.

25 N Eder, ‘Beyond Automation: Machine Learning-Based Systems and Human Behavior in the Personalization Economy’ 25 (1) (2021) Stanford Technology Law Review 1; B Haggart and C Keller, ‘Democratic Legitimacy in Global Platform Governance’ 45 (6) (2021) Telecommunications Policy 102152.

26 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L.178 (‘E-Commerce Directive’).

27 A Savin, EU Internet Law (3rd edn, Edward Elgar Publishing 2020); Wilman (n 21).

28 Preamble to the Digital Services Act (n 20).

29 European Commission, ‘Europe fit for the Digital Age: Commission proposes new rules for digital platforms’ (European Commission, 15 December 2020) <https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2347> accessed 17 November 2021.

30 Wilman (n 21).

31 Art 3, Regulation (EU) 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online (2021) OJ L.172 (‘Terrorist Content Regulation’).

32 Recitals 3 and 9–10, Terrorist Content Regulation (n 31).

33 Art 17, Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (2019) OJ L.130 (Copyright Directive).

34 Art 17(10), Copyright Directive (n 33).

35 Art 17(9), Copyright Directive (n 33).

36 Art 10, Terrorist Content Regulation (n 31).

37 Arts 20–21, Digital Services Act (n 20).

38 G Frosio and M Husovec, ‘Accountability and Responsibility of Online Intermediaries’ in G Frosio (ed), The Oxford Handbook of Online Intermediary Liability (Oxford University Press 2020) 612.

39 Art 5, Terrorist Content Regulation (n 31).

40 As defined in Art 1(aa), this includes video-centric social media like YouTube and TikTok, but also other platforms with a dissociable part of the interface dedicated to videos, eg Instagram’s Reels: see TH Oguç, ‘The Prohibition of General Monitoring Obligation for Video-Sharing Platforms under Art 15 of the E-Commerce Directive in light of Recent Developments: Is It Still Necessary to Maintain It?’ 13 (3) (2022) JIPITEC 176.

41 Art 28b, Audiovisual Media Services Directive (n 19).

42 Art 14, Digital Services Act (n 20).

43 Arts 34–35, Digital Services Act (n 20).

44 Case C-70/10 Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM) ECLI:EU:C:2011:771; Case C-360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV ECLI:EU:C:2012:85.

45 Case C-401/19 Poland v Parliament and Council ECLI:EU:C:2022:297.

46 Preamble to the Digital Services Act (n 20).

47 European Commission (n 29).

48 Chapter IV, Digital Services Act (n 20).

49 Art 45(1), Digital Services Act (n 20).

50 A Kuczerawy, ‘The Power of Positive Thinking: Intermediary Liability and the Effective Enjoyment of the Right to Freedom of Expression’ 8 (3) (2017) Journal of Intellectual Property, Information Technology and Electronic Commerce Law 226; H van Hoboken, The Proposed EU Terrorism Content Regulation: Analysis and Recommendations with Respect to Freedom of Expression Implications (Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression 2019) <https://www.ivir.nl/publicaties/download/TERREG_FoE-ANALYSIS.pdf> accessed 17 November 2021; C Geiger, G Frosio and E Izyumenko, ‘Intermediary Liability and Fundamental Rights’ in G Frosio (ed), (n 38) 138; G Frosio and S Mendis, ‘Monitoring and Filtering: European Reform or Global Trend?’ in G Frosio (ed), (n 38) 544; D Keller, ‘Facebook Filters, Fundamental Rights, and the CJEU’s Glawischnig-Piesczek Ruling’ 69 (6) (2020) GRUR International 616; M Husovec, ‘(Ir)Responsible Legislature? Speech Risks under the EU’s Rules on Delegated Digital Enforcement’ (SSRN, 2021) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3784149> accessed 2 September 2022; R Ó Fathaigh, N Helberger and N Appelman, ‘The Perils of Legally Defining Disinformation’ 10 (4) (2021) Internet Policy Review <https://doi.org/10.14763/2021.4.1584> accessed 2 September 2022.

51 P Leerssen, ‘Cut Out by the Middle Man: The Free Speech Implications of Social Network Blocking and Banning in the EU’ 6 (2) (2015) Journal of Intellectual Property, Information Technology and Electronic Commerce Law 99; J Cobbe and J Singh, ‘Regulating Recommending: Motivations, Considerations, and Principles’ 10 (3) (2019) European Journal of Law and Technology <https://ejlt.org/index.php/ejlt/article/view/686> accessed 2 September 2022; SJ Eskens, The Fundamental Rights of News Users: The Legal Groundwork for a Personalised Online News Environment (PhD thesis, University of Amsterdam 2021) <https://hdl.handle.net/11245.1/5c557bf5-28ff-4383-ab32-acb08cd85d2a> accessed 2 September 2022.

52 A Gebicka and A Heinemann, ‘Social Media & Competition Law’ 37 (2) (2014) World Competition 149; KJ Fietkiewicz and E Lins, ‘New Media and New Territories for European Law: Competition in the Market for Social Networking Services’ in K Knautz and SK Baran (eds), Facets of Facebook: Use and Users (De Gruyter 2016) 285.

53 N Helberger, ‘The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power’ 8 (6) (2020) Digital Journalism 842.

54 Cobbe (n 4).

55 B Sander, ‘Freedom of Expression in the Age of Online Platforms: The Promise and Pitfalls of a Human Rights-Based Approach to Content Moderation’ 43 (2020) Fordham International Law Journal 939; N Suzor et al, ‘Human Rights by Design: The Responsibilities of Social Media Platforms to Address Gender-Based Violence Online’ 11 (1) (2019) Policy & Internet 84; D Kaye, ‘A New Constitution for Content Moderation’ (OneZero 2019) <https://onezero.medium.com/a-new-constitution-for-content-moderation-6249af611bdf> accessed 2 September 2022; D Kaye and J Pielemeier, ‘The Right Way to Regulate Digital Harms’ (Project Syndicate, 21 December 2020) <https://www.project-syndicate.org/commentary/content-moderation-digital-harms-regulation-by-david-kaye-and-jason-pielemeier-2020-12> accessed 2 September 2022. On AI regulation and platform governance more generally, see also NA Smuha, ‘Beyond a Human Rights-Based Approach to AI Governance: Promise, Pitfalls, Plea’ 34 (2021) Philosophy & Technology 91; L McGregor, D Murray and V Ng, ‘International Human Rights Law as a Framework for Algorithmic Accountability’ 68 (2) (2019) International and Comparative Law Quarterly 309.

56 N Suzor, Lawless: The Secret Rules That Govern Our Digital Lives (Cambridge University Press 2019).

57 Kaye (n 55).

58 ‘The Santa Clara Principles on Transparency and Accountability in Content Moderation’ (Santa Clara Principles, 2021) <https://santaclaraprinciples.org/> accessed 2 September 2022.

59 United Nations, Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework (UNHCR 2011).

60 E Douek, ‘The Limits of International Law in Content Moderation’ 6 (2021) UC Irvine Journal of International, Transnational and Comparative Law 37, 44.

61 A Callamard, ‘The Human Rights Obligations of Non-State Actors’ in RF Jørgenson (ed), Human Rights in the Age of Platforms (MIT Press 2019) 191.

62 EB Laidlaw, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (Cambridge University Press 2015); MK Land, ‘Regulating Private Harms Online: Content Regulation under Human Rights Law’ in RF Jørgenson (ed), (n 61) 285.

63 T McGonagle, ‘The Council of Europe and Internet Intermediaries: A Case Study of Tentative Posturing’ in RF Jørgenson (ed), (n 61) 227.

64 Land, ‘Private Harms’ (n 62); McGonagle (n 63); Suzor et al (n 55).

65 Suzor et al (n 55).

66 D Kennedy, ‘The International Human Rights Movement: Part of the Problem?’ 15 (2002) Harvard Human Rights Journal 101.

67 Recital 13, Terrorist Content Regulation (n 31).

68 Art 14(4), Digital Services Act (n 20).

69 D Keller, ‘The DSA’s Industrial Model for Content Moderation’ (Verfassungsblog 2022) <https://verfassungsblog.de/dsa-industrial-model/> accessed 24 December 2022.

70 M Husovec, ‘Will the DSA work?’ (Verfassungsblog 2022) <https://verfassungsblog.de/dsa-money-effort/> accessed 3 January 2022.

71 G De Gregorio, Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society (Cambridge University Press 2022).

72 K Möller, The Global Model of Constitutional Rights (Oxford University Press 2012); Kennedy, ‘Part of the Problem?’ (n 66); W Brown, ‘Suffering Rights as Paradoxes’ 7 (2) (2000) Constellations 208; G Baars, The Corporation, Law and Capitalism: A Radical Perspective on the Role of Law in the Global Political Economy (Brill 2019).

73 Sander, ‘Promise and Pitfalls’ (n 55); ‘Santa Clara Principles’ (n 58).

74 JM Urban and L Quilter, ‘Efficient Process or Chilling Effects – Takedown Notices under Section 512 of the Digital Millennium Copyright Act’ 22 (4) (2006) Santa Clara High Technology Law Journal 621; A Bridy and D Keller, ‘U.S. Copyright Office Section 512 Study: Comments in Response to Second Notice of Inquiry’ (SSRN 2017) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2920871> accessed 2 September 2022; JM Urban, BL Schofield and J Karaganis, ‘Takedown in Two Worlds: An Empirical Analysis’ 64 (2017) Journal of the Copyright Society of the USA 483; A Kuczerawy, ‘From “Notice and Takedown” to “Notice and Stay Down”: Risks and Safeguards for Freedom of Expression’ in G Frosio (ed), (n 38) 524.

75 Urban et al (n 74).

76 Kuczerawy, ‘Notice and Stay Down’ (n 74).

77 R Griffin, ‘New School Speech Regulation as a Regulatory Strategy against Hate Speech on Social Media: The Case of Germany’s NetzDG’ 46 (9) (2022) Telecommunications Policy 102,11; T Tyler et al, ‘Social Media Governance: Can Social Media Companies Motivate Voluntary Rule Following Behavior among Their Users?’ 17 (1) (2021) Journal of Experimental Criminology 109.

78 Brown, ‘Rights as Paradoxes’ (n 72); AL Hoffmann, ‘Where Fairness Fails: Data, Algorithms, and the Limits of Antidiscrimination Discourse’ 22 (7) (2019) Information, Communication & Society 900; DM Brinks and V Gauri, Courting Social Justice: Judicial Enforcement of Social and Economic Rights in the Developing World (Cambridge University Press 2008); M Dawson, E Muir and M Claes, ‘A Tool-Box for Legal and Political Mobilisation in European Equality Law’ in D Anagnostou (ed), Rights and Courts in Pursuit of Social Change: Legal Mobilisation in the Multi-Level European System (Hart Publishing 2014) 105.

79 SJ Yates et al, ‘Who Are the Limited Users of Digital Systems and Media? An Examination of U.K. Evidence’ 25 (7) (2020) First Monday <https://doi.org/10.5210/fm.v25i7.10847> accessed 2 September 2022; K Jacoby, ‘Facebook Fed Posts with Violence and Nudity to People with Low Digital Literacy’ (USA Today, 2021) <https://eu.usatoday.com/story/tech/2021/11/23/facebook-posts-violence-nudity-algorithm/6240462001/> accessed 22 March 2022.

80 JW Penney, ‘Privacy and Legal Automation: The DMCA as a Case Study’ 22 (2) (2019) Stanford Technology Law Review 412.

81 D Holznagel, ‘Enforcing the Rule of Law in Online Content Moderation: How European High Court Decisions Might Invite Reinterpretation of CDA §230’ (Business Law Today, 9 December 2021) <https://businesslawtoday.org/2021/12/rule-of-law-in-online-content-moderation-european-high-court-decisions-reinterpretation-cda-section-230/> accessed 22 March 2022.

82 A Kapczynski, ‘The Right to Medicines in an Age of Neoliberalism’ 10 (1) (2019) Humanity Journal 79; C Newdick, ‘Citizenship, Free Movement and Health Care: Cementing Individual Rights by Corroding Social Solidarity’ 43 (2006) Common Market Law Review 1645.

83 Keller, ‘Industrial Model’ (n 69).

84 JC Wong, ‘How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account’ (The Guardian, 12 April 2021) <https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang>; J Scheck, N Purnell and J Horwitz, ‘Facebook Employees Flag Drug Cartels and Human Traffickers. The Company’s Response Is Weak, Documents Show’ (Wall Street Journal, 16 September 2021) <https://www.wsj.com/articles/facebook-drug-cartels-human-traffickers-response-is-weak-documents-11631812953>; D O’Sullivan, C Duffy and B Fung, ‘Ex-Twitter Exec Blows the Whistle, alleging Reckless and Negligent Cybersecurity Policies’ (CNN, 23 August 2022) <https://edition.cnn.com/2022/08/23/tech/twitter-whistleblower-peiter-zatko-security/index.html>.

85 E Douek, ‘Content Moderation as Systems Thinking’ 136 (2022) Harvard Law Review 526.

86 Suzor et al (n 55).

87 Case C-401/19 (n 45), ECLI:EU:C:2021:613.

88 Ibid., para 201.

89 Case C-401/19 (n 45), Judgment, para 90.

90 Case C-401/19 (n 87), Opinion of AG Øe, n 249.

91 Case C-401/19 (n 45), Judgment, paras 96–99.

92 Case C-401/19 (n 87), Opinion of AG Øe, paras 193 and 209–213.

93 P Keller, ‘Article 17, the Year in Review (2021 edition)’ (Kluwer Copyright Blog 2022) <http://copyrightblog.kluweriplaw.com/2022/01/24/article-17-the-year-in-review-2021-edition/> accessed 21 April 2022.

94 F Reda and P Keller, ‘CJEU Upholds Article 17, But Not in the Form (Most) Member States Imagined’ (Kluwer Copyright Blog 2022) <http://copyrightblog.kluweriplaw.com/2022/04/28/cjeu-upholds-article-17-but-not-in-the-form-most-member-states-imagined/> accessed 28 June 2022; E Rosati, ‘What Does the CJEU Judgement in the Polish Challenge to Article 17 (C-401/19) Mean for the Transposition and Application of that Provision?’ (The IPKat 2022) <https://ipkitten.blogspot.com/2022/05/what-does-cjeu-judgement-in-polish.html> accessed 28 June 2022.

95 Husovec, ‘(Ir)Responsible Legislature?’ (n 50).

96 JE Goldschmidt, ‘New Perspectives on Equality: Towards Transformative Justice through the Disability Convention?’ 35 (1) (2017) Nordic Journal of Human Rights 1.

97 Kapczynski (n 82); L Adler, Gay Priori: A Queer Critical Legal Studies Approach to Law Reform (Duke University Press 2018).

98 S Marks, ‘Four Human Rights Myths’ (2012) LSE Legal Studies Working Paper No. 10/2012 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2150155> accessed 2 September 2022; S Marks, ‘Human Rights and Root Causes’ 74 (1) (2011) Modern Law Review 57; Kapczynski (n 82).

99 Marks, ‘Human Rights Myths’ (n 98), 10.

100 DJ Solove, ‘Introduction: Privacy Self-Management and the Consent Dilemma’ 126 (2013) Harvard Law Review 1880; JE Cohen, ‘What Privacy Is For’ 126 (2013) Harvard Law Review 1904; JE Cohen, ‘Turning Privacy Inside Out’ 20 (1) (2019) Theoretical Inquiries in Law 1; W Hartzog, ‘What Is Privacy? That’s the Wrong Question’ 88 (7) (2021) University of Chicago Law Review 1677; N Richards, Why Privacy Matters (Oxford University Press 2022).

101 S Viljoen, ‘A Relational Theory of Data Governance’ 131 (2) (2021) Yale Law Journal 573; AE Waldman, ‘Privacy’s Rights Trap’ 117 (2022) Northwestern University Law Review 88.

102 Viljoen (n 101); Eder (n 25).

103 Douek, ‘Systems Thinking’ (n 85), 1. See also Douek, ‘Limits of International Law’ (n 60); E Douek, ‘Governing Online Speech: From “Posts-As-Trumps” to Proportionality and Probability’ 121 (3) (2021) Columbia Law Review 759.

104 C Bayley, ‘Sexual Censorship on Social Media: What I Learned’ (Perspectives on Public Purpose For Emerging Technologies 2021) <https://www.belfercenter.org/publication/sexual-censorship-social-media-what-i-learned> accessed 3 January 2023.

105 Waldman, ‘Disorderly Content’ (n 9); Katyal and Jung (n 9); Monea (n 9).

106 Monea (n 9).

107 D Lux and LMH Mess, ‘Facebook’s Hate Speech Policies Censor Marginalized Users’ (Wired 2017) <https://www.wired.com/story/facebooks-hate-speech-policies-censor-marginalized-users/> accessed 5 January 2023; TD Oliva, DM Antonialli and A Gomes, ‘Fighting Hate Speech, Silencing Drag Queens? Artificial Intelligence in Content Moderation and Risks to LGBTQ Voices Online’ 25 (2021) Sexuality & Culture 700.

108 Monea (n 9).

109 OL Haimson, A Dame-Griff, E Capello and Z Richter, ‘Tumblr Was a Trans Technology: The Meaning, Importance, History, and Future of Trans Technologies’ 21 (3) (2019) Feminist Media Studies 345.

110 B Wagner, Global Free Expression: Governing the Boundaries of Internet Content (Springer Nature 2016) 111; JA Rodriguez, ‘LGBTQ Incorporated: YouTube and the Management of Diversity’ (2022) Journal of Homosexuality <https://doi.org/10.1080/00918369.2022.2042664>.

111 D Keller, ‘Amplification and Its Discontents’ (2021) Knight First Amendment Institute Occasional Papers <https://knightcolumbia.org/content/amplification-and-its-discontents> accessed 2 September 2022.

112 Art 20(1), Digital Services Act (n 20).

113 D Holznagel, ‘A Self-Regulatory Race to the Bottom through Art. 18 Digital Services Act’ (Verfassungsblog 2022) <https://verfassungsblog.de/a-self-regulatory-race-to-the-bottom-through-art-18-digital-services-act/> accessed 22 March 2022.

114 Viljoen (n 101).

115 BJ Renninger, ‘“Where I Can be Myself … Where I Can Speak My Mind”: Networked Counterpublics in a Polymedia Environment’ 17 (9) (2015) New Media & Society 1513 <https://doi.org/10.1177%2F1461444814530095> accessed 2 September 2022.

116 J Nilsen et al, ‘TikTok, the War on Ukraine, and 10 Features That Make the App Vulnerable to Misinformation’ (2022) The Media Manipulation Casebook <https://mediamanipulation.org/research/tiktok-war-ukraine-and-10-features-make-app-vulnerable-misinformation> accessed 2 September 2022.

117 J Duportail et al, ‘Undress or Fail: Instagram’s Algorithm Strong-Arms Users into Showing Skin’ (AlgorithmWatch 2020) <https://algorithmwatch.org/en/instagram-algorithm-nudity/> accessed 2 September 2022.

118 There are limited exceptions, of which the most relevant to issues around bias and inequality are Art 34–35 on systemic risks. These provisions are analysed in more detail in section 3(B).

119 Young (n 17).

120 Hoffmann (n 78), 907.

121 T Poell, D Nieborg and BE Duffy, Platforms and Cultural Production (Polity 2021).

122 Z Glatt, ‘Precarity, Discrimination and (In)Visibility: An Ethnography of “The Algorithm” in the YouTube Influencer Industry’ in E Costa et al (eds), The Routledge Companion to Media Anthropology (Routledge 2022); BE Duffy and C Meisner, ‘Platform Governance at the Margins: Social Media Creators and Algorithmic (In)Visibility’ 7 (2) (2022) Social Media + Society <https://doi.org/10.1177/01634437221111923> accessed 5 January 2023; J Foster, ‘“It’s All About the Look”: Making Sense of Appearance, Attractiveness, and Authenticity Online’ (2022) Social Media + Society <https://doi.org/10.1177/20563051221138762> accessed 5 January 2023.

123 Bishop (n 14); Z Glatt and S Banet-Weiser, ‘Productive Ambivalence, Economies of Visibility and the Political Potential of Feminist YouTubers’ in S Cunningham and D Craig (eds), Creator Culture: An Introduction to Global Social Media Entertainment (NYU Press 2021) 39.

124 R Gill, Changing the Perfect Picture: Smartphones, Social Media and Appearance Pressures (City University of London, 2021) <https://www.city.ac.uk/__data/assets/pdf_file/0005/597209/Parliament-Report-web.pdf> accessed 2 September 2022.

125 Brown, ‘Rights as Paradoxes’ (n 72); A Somek, ‘The Preoccupation with Rights and the Embrace of Inclusion’ (2013) U Iowa Legal Studies Research Paper No. 13-11 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2205299> accessed 5 January 2023; A Somek, Engineering Equality. An Essay on European Anti-Discrimination Law (Oxford University Press 2011).

126 Somek, Engineering Equality (n 125).

127 Art 5(3)(c), Terrorist Content Regulation (n 31); Art 28b(3), Audiovisual Media Services Directive (n 19).

128 Art 35(1), Digital Services Act (n 20).

129 M Tushnet, ‘The Critique of Rights’ 47 (1) (1994) SMU Law Review 23; D Kennedy, A Critique of Adjudication (Harvard University Press 1998).

130 Art 6(1), Terrorist Content Regulation (n 31).

131 Art 12, Digital Services Act (n 20).

132 For an in-depth discussion of possible interpretations of Art 14(4), see N Appelman, J Quintais and R Fahy, ‘Using Terms and Conditions to Apply Fundamental Rights to Content Moderation’ (2023) German Law Journal (forthcoming) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4286147> accessed 24 December 2022.

133 Geiger et al (n 50); Douek, ‘Limits of International Law’ (n 60).

134 Case C-144/04 Mangold v Helm ECLI:EU:C:2005:709; Case C-414/16 Egenberger gegen Evangelisches Werk für Diakonie und Entwicklung e.V. ECLI:EU:C:2018:257.

135 R Xenidis and L Senden, ‘EU Non-Discrimination Law in the Era of Artificial Intelligence: Mapping the Challenges of Algorithmic Discrimination’ in U Bernitz et al (eds), General Principles of EU Law and the EU Digital Order (Kluwer Law International 2020) 151.

136 Douek, ‘Limits of International Law’ (n 60).

137 M Bergen, ‘YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant’ (Yahoo Finance 2019) <https://finance.yahoo.com/news/youtube-executives-ignored-warnings-letting-090026613.html?guccounter=1> accessed 11 January 2023; K Hao, ‘How Facebook Got Addicted to Spreading Misinformation’ (MIT Technology Review, 11 March 2021) <https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/> accessed 3 January 2022; JB Merrill and W Oremus, ‘Five Points for Anger, One for a “Like”: How Facebook’s Formula Fostered Rage and Misinformation’ (The Washington Post, 26 October 2021) <https://www.washingtonpost.com/technology/2021/10/26/facebook-angry-emoji-algorithm/> accessed 3 January 2022.

138 The Commission has sole responsibility for overseeing very large online platforms’ obligations under Art 34–35 and joint responsibility with national regulators for overseeing their compliance with Art 14: see Chapter IV, Digital Services Act (n 20).

139 C Van der Maelen, ‘Hardly Law or Hard Law? Investigating the Dimensions of Functionality and Legalisation of Codes of Conduct in Recent EU Legislation and the Normative Repercussions Thereof’ 47 (6) (2022) European Law Review 752.

140 Husovec, ‘Will the DSA work?’ (n 70).

141 Husovec, ‘Will the DSA Work?’ (n 70); M László et al, ‘4 Ways the New EU Digital Acts Fall Short and How to Remedy It’ (Medium 2022) <https://medium.com/@gregerwinnarr/4-ways-the-new-eu-digital-acts-fall-short-and-how-to-remedy-it-d16b681a88bc> accessed 3 January 2023.

142 Kennedy, ‘Part of the Problem?’ (n 66); Marks, ‘Root Causes’ (n 98); Somek, Engineering Equality (n 125).

143 K Tiidenberg and E Van der Nagel, Sex and Social Media (Emerald Publishing 2020); Monea (n 9); Blunt and Stardust (n 8).

144 E Hunt, ‘Vienna Museums Open Adults-Only OnlyFans Account to Display Nudes’ (The Guardian, 16 October 2021) <https://www.theguardian.com/artanddesign/2021/oct/16/vienna-museums-open-adult-only-onlyfans-account-to-display-nudes> accessed 19 October 2021.

145 Douek, ‘Systems Thinking’ (n 85).

146 See eg the Preamble to the Digital Services Act (n 20).

147 Access Now et al, ‘Open letter to Members of the European Parliament: Negotiations in the EP Need to Comply with Fundamental Rights’ (Access Now 2021) <https://www.accessnow.org/cms/assets/uploads/2021/09/DSA_Joint_Letter_MEPs.pdf> accessed 2 September 2022; EDRi et al, ‘An EU Artificial Intelligence Act for Fundamental Rights: A Civil Society Statement’ (EDRi 2021) <https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf> accessed 2 September 2022; EDRi et al, ‘Open Letter: Civil Society Call for a Digital Services Act that Benefits People and Is Compatible with Human Rights’ (Amnesty 2022) <https://www.amnesty.org/en/documents/eur01/5287/2022/en/> accessed 2 September 2022.

148 Kennedy, ‘Part of the Problem?’ (n 66); W Brown, ‘“The Most We Can Hope For…”: Human Rights and the Politics of Fatalism’ in AS Rathore and A Cistelecan (eds), Wronging Rights? Philosophical Challenges for Human Rights (Routledge India 2011) 132; Marks, ‘Human Rights Myths’ (n 98).

149 Art 40(4), Digital Services Act (n 20).

150 Such claims have been challenged by postcolonial theorists who argue they primarily reflect Western values: see for example T Asad, ‘What Do Human Rights Do? An Anthropological Enquiry’ 4 (4) (2000) Theory & Event <https://muse.jhu.edu/article/32601> accessed 2 September 2022; J Whyte, ‘Human Rights and the Collateral Damage of Neoliberalism’ 20 (1) (2017) Theory & Event 137; R Kapur, Gender, Alterity and Human Rights: Freedom in a Fishbowl (Edward Elgar Publishing 2018). Others have challenged these accounts, highlighting the importance of human rights in Global South political traditions and anticolonial movements: see eg JR Slaughter, ‘Hijacking Human Rights: Neoliberalism, the New Historiography, and the End of the Third World’ 40 (4) (2018) Human Rights Quarterly 735; AI Grimaldi, Brazil and the Transnational Human Rights Movement, 1964–1985 (PhD thesis, King’s College London) <https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.797786> accessed 2 September 2022.

151 Kennedy, Critique of Adjudication (n 129); Kennedy, ‘Part of the Problem?’ (n 66).

152 Naomi Klein has argued that the modern human rights movement both coincided with and actively facilitated the rise of neoliberalism, by concealing the links between state violence and the enforcement of radical pro-market policies: N Klein, The Shock Doctrine: The Rise of Disaster Capitalism (Knopf Canada 2007). Samuel Moyn’s historical study finds little evidence that human rights actively aided neoliberal politics, but concludes that they have also done little to challenge it: S Moyn, Not Enough: Human Rights in an Unequal World (Harvard University Press 2019).

153 Brown, ‘Rights as Paradoxes’ (n 72); Brown ‘Politics of Fatalism’ (n 148); Möller (n 72); Moyn (n 152).

154 Marks, ‘Root Causes’ (n 98), 74.

155 Kennedy, ‘Part of the Problem?’ (n 66).

156 Land, ‘Private Harms’ (n 62); Sander, ‘Promise and Pitfalls’ (n 55).

157 Sander, ‘Promise and Pitfalls’ (n 55); Douek, ‘Limits of International Law’ (n 60).

158 B Dvoskin, ‘Neutral Governance’ (GigaNet Symposium, Warsaw 2021) <https://www.giga-net.org/2021SymposiumPapers/Neutral%20governance%20Brenda%20Dannecker%20GigaNet%20oct%2028.pdf> accessed 22 March 2022.

159 Dvoskin (n 158); J Niklas and L Dencik, ‘What Rights Matter? Examining the Place of Social Rights in the EU’s Artificial Intelligence Policy Debate’ 10 (3) (2021) Internet Policy Review <https://doi.org/10.14763/2021.3.1579> accessed 2 September 2022.

160 Brown, ‘Rights as Paradoxes’ (n 72); Brown, ‘Politics of Fatalism’ (n 148); Marks, ‘Root Causes’ (n 98); Marks, ‘Human Rights Myths’ (n 98).

161 O Sylvain, ‘Discriminatory Designs on User Data’ (2018) Knight First Amendment Institute Emerging Threats <https://knightcolumbia.org/content/discriminatory-designs-user-data> accessed 2 September 2022; S Wachter, ‘Affinity Profiling and Discrimination by Association in Online Behavioral Advertising’ 35 (2020) Berkeley Technology Law Journal 367.

162 OH Gandy, The Panoptic Sort (Oxford University Press 2021); Ali et al (n 11); Phan and Wark (n 10).

163 MA Franks, ‘Democratic Surveillance’ 30 (2) (2017) Harvard Journal of Law and Technology 425.

164 S Wachter, B Mittelstadt and C Russell, ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’ 41 (2021) Computer Law and Security Review <https://doi.org/10.1016/j.clsr.2021.105567> accessed 2 September 2022; A Aloisi and V de Stefano, Your Boss Is an Algorithm: Artificial Intelligence, Platform Work and Labour (Bloomsbury 2022); J Adams-Prassl, ‘Regulating Algorithms at Work: Lessons for a “European Approach to Artificial Intelligence”’ 13 (1) (2022) European Labour Law Journal 30.

165 T Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press 2018); K Klonick, ‘The New Governors: The People, Rules and Processes Governing Online Speech’ 131 (2018) Harvard Law Review 1598; ST Roberts, ‘Digital Detritus: “Error” and the Logic of Opacity in Social Media Content Moderation’ 23 (3) (2018) First Monday <https://doi.org/10.5210/fm.v23i3.8283> accessed 2 September 2022.

166 M Cárdenas, ‘Dark Shimmers: The Rhythm of Necropolitical Affect’ in R Gossett, EA Stanley and J Burton, Trap Door: Trans Cultural Production and the Politics of Visibility (MIT Press 2019) 161; Roberts (n 165); Are and Paasonen (n 9).

167 OH Gandy, Coming to Terms with Chance: Engaging Rational Discrimination (Routledge 2009).

168 Bivens (n 13); Bishop (n 14); Katyal and Jung (n 9).

169 Phan and Wark (n 10).

170 WFA, ‘WFA and Platforms Make Major Progress to Address Harmful Content’ (World Federation of Advertisers 2020) <https://wfanet.org/knowledge/item/2020/09/23/WFA-and-platforms-make-major-progress-to-address-harmful-content> accessed 11 October 2021; Southerton et al (n 16).

171 Somek, ‘Preoccupation with Rights’ (n 125); Newdick (n 82); Viljoen (n 101).

172 G De Gregorio, ‘Democratising Online Content Moderation: A Constitutional Framework’ 36 (2020) Computer Law & Security Review 105374; Suzor (n 55); Santa Clara Principles (n 58).

173 Suzor et al (n 55).

174 Baars (n 72).

175 Ibid.

176 Suzor (n 56); Kaye (n 55).

177 T Kadri, ‘Juridical Discourse for Platforms’ 136 (2022) Harvard Law Review Forum 163; B Dvoskin, ‘Expertise and Participation in the Facebook Oversight Board: From Reason to Will’ (2022) Telecommunications Policy 102463; K Wiggers, ‘Meta’s First Human Rights Report Is Largely Self-Congratulatory’ (TechCrunch 2022) <https://techcrunch.com/2022/07/14/metas-first-human-rights-report-is-largely-self-congratulatory/?guccounter=1> accessed 5 January 2023.

178 Sander, ‘Promise and Pitfalls’ (n 55), 966.

179 RF Jørgenson, ‘Introduction’ in RF Jørgenson (ed), (n 61) xvii; Land, ‘Private Harms’ (n 62).

180 Brown, ‘Rights as Paradoxes’ (n 72); Kennedy, ‘Part of the Problem?’ (n 66).

181 Land, ‘Private Harms’ (n 62); Art 19, ‘Taming Big Tech: Protecting freedom of expression through the unbundling of services, open markets, competition, and users’ empowerment’ (2021) Art 19 <https://www.article19.org/wp-content/uploads/2021/12/Taming-big-tech_FINAL_8-Dec-1.pdf> accessed 27 June 2022.

182 Jørgenson (n 179), xxiv.

183 Ibid., xxix.

184 Kennedy, ‘Part of the Problem?’ (n 66), 121.

185 PwC, ‘Global Top 100 companies – March 2021’ (2021) PwC <https://www.pwc.com/gx/en/services/audit-assurance/publications/global-top-100-companies.html> accessed 22 March 2022, 6.

186 R Gorwa, ‘How We Can Socialize Big Tech’ (Jacobin 2022) <https://jacobin.com/2022/06/big-tech-facebook-meta-airbnb-socialize-platforms> accessed 27 June 2022.

187 B Sander, ‘Democratic Disruption in the Age of Social Media: Between Marketized and Structural Conceptions of Human Rights Law’ 32 (1) (2022) European Journal of International Law 159.

188 Delfi AS v Estonia App no 64569/09 (ECtHR 2015).

189 CB Marino, Freedom of Expression and the Internet (Office of the Special Rapporteur for Freedom of Expression, Inter-American Commission on Human Rights, 2013) <http://www.oas.org/en/iachr/expression/docs/reports/2014_04_08_internet_eng%20_web.pdf> accessed 2 September 2022.

190 Poland (n 45).

191 Sander, ‘Democratic Disruption’ (n 187), 162.

192 Douek, ‘Governing Online Speech’ (n 103); Douek, ‘Systems Thinking’ (n 85).

193 Animal Defenders International v UK App no 48876/08 (ECtHR, 22 April 2013).

194 D Ponsford, ‘Four Men Own Britain’s News Media. Is That a Problem for Democracy?’ (The New Statesman, 12 February 2021) <https://www.newstatesman.com/business/2021/02/four-men-own-britain-s-news-media-problem-democracy> accessed 2 September 2022.

195 JE Cohen, ‘Affording Fundamental Rights: A Provocation Inspired by Mireille Hildebrandt’ 4 (1) (2017) Critical Analysis of Law 78; K Yeung, ‘Responsibility and AI’ (2019) Council of Europe study DGI(2019)05 <https://rm.coe.int/responsability-and-ai-en/168097d9c5> accessed 22 March 2022.

196 Cohen, ‘Affording Fundamental Rights’ (n 195).

197 ML Stasi, ‘Competition Rules Could Protect Human Rights on Social Media Platforms’ (Open Global Rights 2019) <https://www.openglobalrights.org/competition-rules-could-protect-human-rights-on-social-media-platforms/> accessed 2 September 2022; Art 19 (n 181).

198 For an analysis along these lines, see M Senftleben, ‘Institutionalized Algorithmic Enforcement – The Pros and Cons of the EU Approach to UGC Platform Liability’ 14 (2) (2020) FIU Law Review 299.

199 Yeung (n 195); M Flyverbom and G Whelan, ‘Digital Transformations, Informed Realities, and Human Conduct’ in RF Jørgenson (ed), (n 61), 53; Art 19 (n 181).

200 L Winner, ‘Do Artifacts Have Politics?’ 109 (1) (1980) Daedalus 121.

201 F Reda, J Selinger and M Servatius, ‘Article 17 of the Directive on Copyright in the Digital Single Market: a Fundamental Rights Assessment’ (2020) Gesellschaft für Freiheitsrechte <https://freiheitsrechte.org/home/wp-content/uploads/2020/11/GFF_Article17_Fundamental_Rights.pdf> accessed 22 March 2022.

202 A Bridy, ‘The Price of Closing the Value Gap: How the Music Industry Hacked EU Copyright Reform’ 22 (2) (2020) Vanderbilt Journal of Entertainment & Technology Law 323.

203 JE Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (Oxford University Press 2019).

204 Eder (n 25).

205 PJ Williams, ‘Alchemical Notes: Reconstructing Ideals from Deconstructed Rights’ 22 (1987) Harvard Civil Rights-Civil Liberties Law Review 401; K Crenshaw, ‘Race, Reform, and Retrenchment: Transformation and Legitimation in Antidiscrimination Law’ 101 (7) (1988) Harvard Law Review 1331. Crenshaw’s arguments, which focus on the pragmatic usefulness of rights for social movements, are more relevant for present purposes than Williams’, which place more emphasis on the symbolic value of equal rights for historically marginalised communities. Both authors focus on US constitutional rights, but their arguments remain relevant outside this tradition.

206 Crenshaw (n 205), 1366.

207 Brown, ‘Rights as Paradoxes’ (n 72).

208 O Mazel, ‘Queer Jurisprudence: Reparative Practice in International Law’ 116 (2022) AJIL Unbound 10.

209 Gorwa (n 186).

210 Dawson et al (n 78); D Anagnostou, ‘Does European Human Rights Law Matter? Implementation and Domestic Impact of Strasbourg Court Judgements on Minority-Related Policies’ 14 (5) (2010) International Journal of Human Rights 721.

211 For a detailed analysis of the promise and limitations of Art 14 see Appelman et al (n 132).

212 Art 53, Digital Services Act (n 20).

213 Van der Maelen (n 139).

214 Husovec, ‘Will the DSA work?’ (n 70).

215 Moyn (n 152).

216 Douek, ‘Limits of International Law’ (n 60).