
CLARIFYING HUMAN RIGHTS STANDARDS THROUGH ARTIFICIAL INTELLIGENCE INITIATIVES

Published online by Cambridge University Press:  20 October 2022

Lottie Lane*
Affiliation:
Assistant Professor of Public International Law, University of Groningen, c.l.lane@rug.nl.

Abstract

Taking a law-and-governance approach, this article addresses legal certainty in international human rights law as it applies to artificial intelligence (AI). After introducing key issues concerning legal certainty, a comparative analysis of AI law-and-governance initiatives at the international, regional and national levels is undertaken. The article argues that many initiatives contribute to increased legal certainty and can partially compensate for some of the shortcomings of the international human rights law framework, but that further clarification is badly needed. This is especially true for the responsibilities of private businesses which are developing AI and the corpus of human rights beyond privacy and data protection.

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press on behalf of the British Institute of International and Comparative Law

I. INTRODUCTION

A world in which machines can learn to identify cancer more accurately than doctors, predict which criminals will reoffend and even drive cars once belonged to the realm of science fiction. Today, the benefits of artificial intelligence (AI) are being reaped worldwide, supporting human intelligence in myriad ways. However, despite its undeniable merits, AI has the potential to wreak havoc with the enjoyment of human rights—and cases of discrimination, violations of privacy, loss of jobs and negative impacts on access to public services increasingly feature in international headlines.

The relationship between AI and human rights is irrefutable, and support for an approach to AI that is based on the international human rights law framework has grown within academia and practice.Footnote 1 However, despite ongoing developments, the international human rights law framework concerning AI is far from being fully evolved, particularly regarding the role and responsibilities of the different actors active within it, especially private businesses which are developing and deploying AI. Questions remain concerning the boundaries of responsibility of such actors and how they can be held to account. Furthermore, many AI governance initiatives focus on AI ethics, rather than human rights. Although such initiatives often contain principles or guidelines for both States and businesses which correlate with human rights standards, the way in which such instruments contribute to the protection of human rights is not always obvious. Clarity is desperately needed not only for individuals whose rights are affected, but also for other relevant stakeholders, such as States and businesses developing or deploying AI.

This article addresses the question of whether, and to what extent, AI ‘law-and-governance’ initiatives at the international, regional and national levels engage with human rights and contribute to legal certainty regarding the application of international human rights law in the context of the development and deployment of AI. Taking a broader law-and-governance approach and assessing the initiatives from a human rights law perspective, the article builds on previous scholarship concerning the legal certainty of international law regarding AIFootnote 2 as well as on studies mapping AI ethics guidelines and recommendations.Footnote 3

First, the article introduces the issue of legal certainty concerning AI and international human rights law (Section II). The current state of the law relating to AI and the position of businesses under international human rights law is summarised. The causes of legal uncertainty in international human rights law and the importance of filling gaps in understanding are also highlighted. Sections III–V provide an assessment of AI law-and-governance initiatives at the international level (Section III); regional level (Section IV); and national level (Section V), respectively. These various initiatives are then assessed in the light of their contribution towards the clarification of human rights standards applicable to States and businesses. Section VI comprises a comparative analysis of the initiatives and conclusions are provided in Section VII.

Throughout the article, a law-and-governance approach is adopted, in which solutions to societal problems are sought beyond the confines of the law.Footnote 4 This interdisciplinary approach views laws as tools of governance: as part of a governance structure comprising activities undertaken by many different actors, both public and private.Footnote 5 In this article the term ‘governance’ is used to refer to the ‘drafting, adopting, implementing and enforcing rules, or standards, [as well as] the mechanisms, processes and institutions that exist to achieve these tasks’.Footnote 6 Governance is understood to reach beyond governmental activities to comprise the aforementioned tasks ‘independent of the numbers and kinds of actors carrying [them] out’.Footnote 7 This reflects the current reality that governments often rely on non-governmental (that is, non-State) actors to undertake various governance tasks and ‘secure its intentions, deliver its policies, and establish a pattern of rule’.Footnote 8 This regularly occurs in the non-State provision of public services, including water, healthcare and education. Recently, there has been increasing reliance on AI developed by private businesses to help conduct various governance activities, for example in the criminal justice sector or in the context of smart cities, with questionable results from a human rights perspective.Footnote 9 However, non-State actors may also take it upon themselves to perform governance activities in order to address glaring governance gaps. Arguably, this is what has led to the wide array of governance initiatives concerning AI being adopted by a variety of governmental and non-State actors.

This article is based on an assessment of 99 initiatives which contain legally binding and/or non-binding standards directly related to AI (eg standards or strategies for the development of ethical AI) or standards that do not expressly address AI but which nevertheless have an impact on how it is developed and deployed. These include international guidelines and principles, self-regulatory instruments adopted by businesses, regional (and particularly European) legislation and governance instruments, and national (and sub-national) legislation, policy and governance documents. Authors of these instruments range from States to international organisations, non-governmental organisations, independent expert groups, business enterprises and more. The initiatives were largely identified using two databases: AI Ethics Lab's ‘Toolbox: Dynamics of AI Principles’;Footnote 10 and Nesta's ‘AI Governance Database’.Footnote 11 Further relevant initiatives were identified in academic literature and the media, as well as in the sources found in the databases. The scope of the analysis is limited by the research method adopted and does not claim to be exhaustive.Footnote 12 However, the initiatives examined provide a cross-section of significant existing initiatives and sufficient material from which to draw the conclusions arrived at below.

II. THE PROBLEM(S) OF LEGAL UNCERTAINTY

The ongoing development of AI technologies presents international law with a number of challenges. These include (i) the need for new laws; (ii) legal uncertainty; (iii) incorrect scope of existing laws; and (iv) legal obsolescence.Footnote 13 While legal certainty has long been an issue in relation to various areas of international law,Footnote 14 the present article focuses on the problem of legal certainty in the specific context of international human rights law and AI.Footnote 15 In particular, it addresses what Rebecca Crootof and BJ Ard label ‘application uncertainties’ that concern ‘indeterminacy as to whether and how existing law applies to an artifact, actor, or activity’.Footnote 16 The importance of legal certainty in this context is outlined below.

This article takes as a starting point that ‘[t]he demand for [legal] certainty creates a pressure for clear and precise rules, so that everyone knows where they stand’.Footnote 17 The need for clear, precise, accessible and consistent rules that allow actors to understand not only what their rights and obligations are, but also the consequences for not conforming to those rules, has been repeated on many occasions over many years.Footnote 18 This, in essence, is a call for legal certainty.

A link can also be made between legal certainty on the one hand and accountability and access to remedies on the other. If it is not known what an actor involved in the development/deployment of AI is responsible for, it is extremely challenging, if not impossible, to effectively hold them to account and ensure an effective remedy for victims of resulting human rights violations.Footnote 19 With private businesses playing a leading role in the development and deployment of AI, legal certainty is crucial for three stakeholders in particular:

  I. Victims of human rights abuses caused by reliance on AI systems. These actors need to know: Under what circumstances can they claim a violation of their rights? Where can they make such claims? These questions have a huge impact on victims’ ability to seek access to an effective remedy.

  II. State entities. These actors need to know: How are they expected to respect, protect and fulfil human rights in situations involving AI? In particular, what regulatory (and other) measures are States expected to undertake in order to protect human rights? Where do the obligations of States begin and end when an AI system developed by the private sector is used by the public sector? How do the responsibilities and/or obligations of differing AI actors relate to each other?Footnote 20

  III. Private businesses developing and/or deploying AI. These actors need to know: What binding or non-binding standards should they follow? How do their responsibilities relate to those of other AI actors? What does this mean for them in their everyday operations and in relation to their specific AI system/s?

This article does not aspire to answer all these questions, which are crucial for achieving legal certainty and which could provide insights for potential future primary sources of international law on AI. Rather, and bearing in mind the limitations of international human rights law regarding AI and human rights, the article argues that some of the answers can be found in the broad range of AI law-and-governance initiatives that have been adopted at the international, regional and national levels.

The remainder of this section will focus on three main areas where greater legal certainty and clarity are needed: (A) gaps in the law; (B) AI, business and human rights; and (C) the abundance of AI (ethics) governance initiatives.

A. Gaps in the Law

Gaps in international human rights law concerning AI contribute significantly to legal uncertainty.Footnote 21 There is currently no express reference to AI in any of the primary sources listed in Article 38 of the Statute of the International Court of JusticeFootnote 22 (international human rights treaties, customary international law and general principles of international law).Footnote 23 As a result, reliance must be placed on subsidiary sourcesFootnote 24 and interpretations of existing law to indicate how the more general standards found in primary sources (predominantly human rights treaties) apply in the context of AI. This situation is not unique to AI. However, because the human rights risks of AI have only relatively recently come to the fore, authoritative international interpretations of how international human rights law applies to AI are very limited. For instance, whilst there have been several key cases concerning AI and human rights at the national and regional levels showing how the right to privacy can apply to AI,Footnote 25 these are limited in scope and application. Judicial decisions at the international level do not yet exist. Even if such cases materialise in the future, they will be restricted to the subject matter of the specific case and only bind the parties to them.Footnote 26

That said, there is an increasing number of important comments by a variety of UN actors in the human rights sphere. For example, the UN Committee on Economic, Social and Cultural Rights (CteeESCR) adopted a general comment on the right to science in 2020, in which it discusses some of the risks posed and benefits offered by AI for human rights.Footnote 27 Significantly, the Committee stresses the need for States to ‘establish a legal framework that imposes on non-state actors a duty of human rights due diligence’.Footnote 28 This builds on its previous General Comments that have also emphasised the need for State regulation of non-State actors, especially businesses.Footnote 29 Further, in March 2021 the UN Committee on the Rights of the Child (CteeRC) adopted a general comment on the rights of children in relation to the digital environment.Footnote 30 Like the CteeESCR's, the CteeRC's general comment does not focus only on AI, which is expressly mentioned only once. However, it sheds light on the numerous ways in which AI can negatively impact children's rights and directly addresses the role and responsibilities of businesses (and States with regard to businesses).Footnote 31

Several reports have also been adopted by special procedures of the UN Human Rights Council, such as the Special Rapporteur on Extreme Poverty and Human Rights and the Special Rapporteur on Freedom of Expression.Footnote 32 The former Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, even submitted an amicus curiae brief in relation to a national case heard in the Netherlands which concerned the right to privacy under the European Convention on Human Rights and a piece of Dutch legislation that allowed the use of AI for risk-profiling in the welfare sector.Footnote 33 Further, in September 2021 the UN High Commissioner for Human Rights published a report on the right to privacy in the digital age. In the report, High Commissioner Michelle Bachelet discusses the many risks that AI poses to privacy and provides suggestions for safeguards that should be designed and implemented by both States and the private sector to prevent and mitigate them.Footnote 34 Nonetheless, there are still many gaps and uncertainties about how obligations and responsibilities under international human rights law apply to AI, particularly beyond the realm of privacy and data protection.

A significant contributing factor to these gaps is the limited ability of the current international human rights law framework to keep abreast of developments relating to AI.Footnote 35 Irrespective of whether a new international treaty addressing AI and human rights would be desirable, the adoption of new multilateral treaties can be a slow and painstaking process.Footnote 36 Customary international law on AI and human rights would also take time to develop since ‘there is no such thing as “instant custom”’.Footnote 37

Furthermore, international human rights law tends to develop in response to specific allegations of human rights violations. There have been many attempts by human rights adjudicatory bodiesFootnote 38 and in scholarly worksFootnote 39 to clarify the more preventive responsibilities and obligations under international human rights law, for instance through the delineation of due diligence obligations. However, the question of how these apply to the specific situation of AI remains. In particular, the nature of some AI technologies can make it extremely difficult to foresee both their capabilities and the risks of their use. Machine learning systems have been particularly criticised in this respect. They can be highly complex and continue to teach themselves over time to become more efficient. Machine learning systems may follow paths that were not envisaged by the programmer developing the system. As Matthew Scherer notes, this may even be the intention of the programmer and the appeal of a particular system, since it enables the machine to come to conclusions that would not have been made by humans.Footnote 40 Such unpredictability makes it difficult for lawmakers to ‘future proof’ the law, a challenge with which regulators on all levels are grappling.Footnote 41 Another challenge is the relative unpredictability of how existing AI technologies will be used in the future. A worrying example is the deployment of AI systems in military contexts when they were not intended for such use.Footnote 42
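
The foreseeability problem can be made concrete with a deliberately simple machine learning sketch. The data and feature names below are illustrative assumptions, not drawn from any system discussed in this article; the sketch shows one common way a model departs from its developer's intentions: even when a sensitive attribute is deliberately excluded from the training data, the model can reconstruct it through a correlated proxy feature.

```python
# A hypothetical sketch of the foreseeability problem: a model trained
# WITHOUT a sensitive attribute still learns it through a correlated
# proxy feature. All data and feature names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

group = rng.integers(0, 2, n)              # sensitive attribute (0 or 1)
postcode = group + rng.normal(0, 0.3, n)   # proxy correlated with 'group'
income = rng.normal(50, 10, n)

# Historical outcomes are biased against group 1.
outcome = ((income - 10 * group + rng.normal(0, 5, n)) > 45).astype(int)

# The developer excludes 'group' from the features -- but not its proxy.
X = np.column_stack([postcode, income])
model = LogisticRegression().fit(X, outcome)

pred = model.predict(X)
print("approval rate, group 0:", round(pred[group == 0].mean(), 2))
print("approval rate, group 1:", round(pred[group == 1].mean(), 2))
# The rates diverge sharply: the historical bias is reproduced via the
# proxy, along a path the developer never explicitly programmed.
```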

Ultimately, the unique phenomenon of AI poses numerous challenges to international human rights law. Despite some more recent observations from human rights bodies, there is still a considerable lack of legal certainty as to what exactly is expected from AI actors concerning AI and human rights.

B. AI, Business and Human Rights

For businesses, the lack of certainty is exacerbated by the absence of any binding human rights obligations. There are two main avenues for determining the international human rights standards to which businesses should be held: (i) the ‘indirect horizontal effect of human rights’; and (ii) soft-law instruments concerning the corporate responsibility to respect human rights. Indirect horizontal effect most commonly involves the State's positive obligation to protect human rights from harmful interference by non-State actors, including businesses.Footnote 43 This can have a positive effect by ‘requir[ing] a State to adopt more effective measures to protect individuals from harm by non-state actors’.Footnote 44 Nonetheless, close inspection of cases in which indirect horizontal effect has been applied suggests that the standards expected of businesses or other non-State actors at the international level are very rarely mentioned explicitly.Footnote 45 Consequently, such case law does not add a great deal of clarity for businesses.

Soft-law instruments on the corporate responsibility to respect human rights are much more helpful in this regard. This is particularly true of the UN Guiding Principles on Business and Human Rights (UNGPs).Footnote 46 The UNGPs were drafted by the late John Ruggie in his capacity as Special Representative of the Secretary-General on Human Rights and Transnational Corporations and Other Business Enterprises, with input from various stakeholders, including businesses themselves.Footnote 47 Unanimously endorsed by the UN Human Rights Council in 2011,Footnote 48 the UNGPs have paved the way for other significant developments in business and human rights, including the draft binding international treaty on business and human rightsFootnote 49 and the proposed EU mandatory corporate sustainability due diligence legislation.Footnote 50 The UNGPs lay down the main components of the corporate responsibility to respect human rights: (i) the adoption of a policy commitment to respect human rights; (ii) a human rights due diligence process to identify, prevent, mitigate and account for how a business addresses its human rights impacts; and (iii) processes for the remediation of adverse impacts caused or contributed to by the business.Footnote 51 More detailed principles explain what these various components require businesses to do.Footnote 52

The UNGPs apply to all business enterprises. Whilst this gives them a broad scope, it also means that further work is needed to adapt them to specific business contexts; what the UNGPs specifically require of businesses involved in developing AI is not addressed as such. Although this approach was necessary to ensure that the UNGPs were widely applicable, it means that they offer AI businesses limited guidance on how to respect human rights in practice. Given these shortcomings, civil society and businesses have turned to extra-legal initiatives (some of which are discussed in Section III) and have also supported legally binding initiatives such as the European Commission's proposed directive on corporate sustainability due diligence.Footnote 53 A key incentive for doing so is the prospect of greater legal certainty.Footnote 54

AI fits very well into current debates on business and human rights, which draw particular attention to the role of businesses in both causing and mitigating contemporary phenomena posing significant risks to human rights, such as climate change.Footnote 55 For example, significant attention has been paid to the need for transparency in supply chains, both in the literature and in recent and ongoing regional and national legislative initiatives.Footnote 56 Transparency of and within supply chains is a notorious problem for AI. Supply chains can be highly complex, with businesses and individuals from around the world contributing to a single AI system that has the potential to affect a huge number of individuals. The lack of transparency in AI supply chains is exacerbated by the ‘discreteness’ of AI—the ‘mishmash of software and hardware components harvested by many different companies’Footnote 57 in different locations as well as the interaction between these components, coupled with the opacity of some AI systems and the desire of some companies to maintain trade secrets. Whilst the lack of transparency in supply chains may not be specific to AI, the opacity and lack of understanding surrounding AI systems and their development, especially by those who deploy but do not create the systems, causes particular problems.Footnote 58
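
One modest, concrete form that supply-chain transparency could take is a machine-readable provenance record for each AI system. The schema sketched below is an illustrative assumption only, loosely analogous to a software bill of materials; it is not a scheme named in any of the initiatives discussed in this article.

```python
# A minimal, hypothetical sketch of a provenance record for an AI
# system's supply chain. The schema and all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    supplier: str
    kind: str        # e.g. "training data", "model", "hardware"
    origin: str      # jurisdiction or source

@dataclass
class AISystemRecord:
    system: str
    deployer: str
    components: list[Component] = field(default_factory=list)

record = AISystemRecord(
    system="risk-scoring model",
    deployer="ExampleCity Council",
    components=[
        Component("claims dataset", "VendorA", "training data", "NL"),
        Component("base model", "VendorB", "model", "US"),
    ],
)
# Even this bare record answers the questions auditors and courts ask
# of opaque supply chains: who supplied what, and from where.
print([c.supplier for c in record.components])
```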

AI poses a potentially unprecedented threat to a wide range of human rights in countless situations and adds further weight to arguments for greater corporate human rights responsibility and accountability. As discussed below, elements of the European Commission's proposed ‘AI Act’ are reminiscent of human rights due diligence obligations, which represents an implicit acknowledgement of the need for legally binding obligations concerning corporate human rights responsibilities relating to AI.

C. The Abundance of AI Ethics Governance Initiatives

In the absence of clear international human rights law standards pertaining to AI, a plethora of law-and-governance initiatives have been undertaken examining the role and risks of AI within society, how the development and deployment of AI should be regulated to mitigate these risks, and the responsibilities of the different actors involved. The result is a somewhat disjointed landscape of overlapping governance activities by a wide range of actors. As seen below, these initiatives may provide a degree of clarity concerning how human rights should be interpreted and applied in relation to AI. However, the UN CteeESCR has warned against a fragmented approach to transnational technologies such as AI, as it risks ‘creat[ing] governance gaps detrimental to the enjoyment of economic, social and cultural rights’.Footnote 59 Whilst it would not be possible (or desirable) to tackle the issue of AI and human rights through one single instrument, the sheer number of governance initiatives that exist could create challenges for businesses and States alike in knowing which standards they should follow in a given situation. For this reason, coordination and cooperation between actors and the initiatives they take should be encouraged, in order to avoid conflicts and unnecessary overlap and to strengthen the efficiency of AI governance.

Furthermore, while they may engage with human rights issues, many of these initiatives approach the subject from an ethical perspective. Even though ethics and human rights can be mutually reinforcing, it is imperative that a human rights approach to AI be taken. First, international human rights law comprises internationally agreed standards and obligations, and includes guidance concerning how competing rights and interests should be balanced against one another.Footnote 60 Secondly, although far from perfect, international human rights law provides for, and places an emphasis on, the importance of accountability mechanisms and access to remedies.Footnote 61 Accountability is also a key concern of AI ethicsFootnote 62 and is closely related to the right to an effective remedy.Footnote 63 Thirdly, for all of its flaws, international human rights law does contain soft-law standards on the human rights responsibilities of businesses,Footnote 64 as summarised above. The focus on AI ethics in AI law-and-governance initiatives may draw attention away from international human rights law and may result in confusion in situations where human rights law and ethics contain different standards, or even conflict with one another.

To summarise, there is an array of initiatives that may simultaneously provide crucial insights into the application of human rights to AI whilst also being a potential cause of further uncertainty. The following sections discuss in more detail AI law-and-governance initiatives at the international, regional and national levels and critically assess their contribution to bringing clarity to human rights standards.

III. INTERNATIONAL AI INITIATIVES

As noted above, there are no legally binding instruments specifically dealing with AI under international human rights law. There are, however, several important initiatives that could have an impact on the protection of human rights and contribute to clarifying applicable standards.

One example is the work of UNI Global Union, which is a global union federation of national and regional trade unions. In 2017, the Union adopted an instrument containing the ‘Top 10 Principles for Ethical AI’.Footnote 65 The document is explicitly ethics-based, but does also engage with human rights, stating in Principle 3, for instance, that ‘AI systems must remain compatible and increase the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity, as well as with fundamental human rights’. Workers’ rights are also specifically mentioned in Principle 7, which emphasises the responsibility of businesses as well as States to ensure that workers who are replaced by AI have the right of access to social security and to ‘lifelong continuous learning to remain employable’. The UNI Global Union's principles also, and more explicitly, highlight corporate accountability, particularly when workers are displaced due to the use of AI. They also note the right of individuals to appeal decisions made by AI and to have a human review of such decisions (reflecting close links to the right to an effective remedy) and advocate codes of ethics for the development, application and deployment of AI to ensure compliance with fundamental rights. The codes of ethics are not elaborated upon but could presumably take the form of corporate or industry codes of conduct, either on a voluntary basis or mandated by national law.

An important example of a document that is expressly based on the international human rights law framework and directly addresses the corporate responsibility of AI businesses is the ‘Toronto Declaration on Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems’.Footnote 66 This is a joint initiative by Access Now and Amnesty International. It was adopted in 2018 and is significant because of its direct articulation of the responsibilities of private actors. The Declaration's scope is limited to non-discrimination and equality, but it contains some more general points that are applicable to a broader range of human rights. Whilst the Declaration is not legally binding, it does provide some useful clarity for both States and businesses regarding which human rights standards should be followed in the development and deployment of AI.

The Declaration essentially echoes the more general standards of the UNGPs but also indicates how due diligence should be conducted in the context of machine learning. For example, businesses developing AI should submit high-risk systems to third-party auditors,Footnote 67 and conduct ongoing quality checks and real-time auditing throughout the design, testing and deployment stages.Footnote 68 The Declaration also emphasises the need for transparency, particularly regarding due diligence processes but also regarding technical specifications and details of algorithms, ‘including samples of training data and details of the source of data’.Footnote 69 It is not clear from the Declaration to whom this information should be accessible (ie to auditors, end users, or the public at large) and the details of what exactly should be transparent remain somewhat murky. Notwithstanding the relative ambiguity here, the bottom line of the Declaration is clear: businesses should not deploy algorithms with high risks.Footnote 70 If significant risks to human rights come to light during the due diligence process of a business, it should either make adjustments to mitigate the risks, or simply not go ahead with the project.
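
What an ‘ongoing quality check’ might look like in practice can be sketched briefly. The fragment below is a hypothetical illustration rather than anything prescribed by the Declaration: it applies the widely used ‘four-fifths’ disparate-impact heuristic to a batch of automated decisions. The threshold and group labels are assumptions.

```python
# A minimal sketch of a recurring disparate-impact check of the kind
# the Toronto Declaration's auditing recommendations could translate
# into. The four-fifths threshold and group labels are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])    # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: s / t for g, (s, t) in counts.items()}

def disparate_impact_alert(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the 'four-fifths rule')."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# One batch of decisions from a deployed system:
batch = [("A", True)] * 80 + [("A", False)] * 20 \
      + [("B", True)] * 50 + [("B", False)] * 50
print(disparate_impact_alert(batch))  # {'B': 0.625} -> escalate for review
```

Run at each of the design, testing and deployment stages, a check of this kind gives the ‘real-time auditing’ the Declaration calls for a concrete, repeatable form.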

Other interesting non-binding initiatives have been undertaken by the Organisation for Economic Co-operation and Development (OECD). In addition to the AI Principles adopted by a Recommendation of the OECD Council on Artificial Intelligence in 2019,Footnote 71 in September 2021 the OECD published guidance on the application of human rights due diligence (HRDD) to situations involving AI.Footnote 72 The Recommendation notes the relevance of the human rights framework to AI, and in particular the Universal Declaration of Human Rights. Recommendation 1.2 on human-centred values and fairness notes that ‘AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle’, such as privacy and data protection, non-discrimination and equality, and internationally recognised labour rights. Further, Recommendation 1.5 states that ‘AI actors [including businesses] should be accountable for the proper functioning of AI systems and for the respect of the [OECD's] principles, based on their roles, the context, and consistent with the state of art.’ Accountability is a notorious challenge for ethical and human rights-compliant AI, but it is key to its achievement and a recurring theme throughout the international initiatives analysed. While the Recommendation does not indicate how different AI actors might be held to account, by putting accountability centre stage, as many other AI ethics initiatives do, it encourages others to develop more specific standards or recommendations in this respect.

Going a step further, in its guidance on the requirements of ‘HRDD through responsible AI’Footnote 73 the OECD provides an insight into what the corporate responsibility to respect human rights as expressed in the UNGPs requires of businesses developing or deploying AI or involved in an AI supply chain.Footnote 74 Arguably, this initiative provides the most detailed guidance available at the international level regarding the role and responsibility of the private sector. It also provides insights into the standards expected of States on a number of issues, including their obligation to regulate the private sector,Footnote 75 and regarding the requirements of due diligence when State actors feature in an AI supply chain.

Another important example is the work of the UN Educational, Scientific and Cultural Organization (UNESCO).Footnote 76 UNESCO appointed a group of 24 experts to draft a ‘Recommendation on the Ethics of Artificial Intelligence’, to provide ‘an ethical guiding compass and a global normative bedrock allowing to build a strong respect for the rule of law in the digital world’.Footnote 77 After receiving input from various stakeholders on earlier drafts,Footnote 78 the final text of the Recommendation was adopted in November 2021. Although framed as an ethics-based initiative, an objective of UNESCO's Recommendation is ‘to protect, promote and respect human rights and fundamental freedoms, human dignity and equality’.Footnote 79 This forms the basis for four of the core values on which the Recommendation rests. There are 64 references in the preamble and final text to various UN-level reports highlighting the relationship between human rights and AI. More specific human rights issues, including the rights of the elderly and the right to science, also feature in the document.

The Recommendation also provides insights concerning the relationship between ethics and human rights. For instance, the preamble recognises that ‘ethical values and principles can help develop and implement rights-based policy measures and legal norms, by providing guidance with a view to the fast pace of technological development’. Interestingly, the Draft Recommendation had suggested that when trade-offs between different ethical principles have to be made, stakeholders should be ‘guided by international human rights law, standards and principles’.Footnote 80 This was not included in the final Recommendation, but the notion that AI ethics and human rights can be mutually reinforcing and that in such situations human rights law should provide guidance is reflected on many occasions throughout the document.

For example, paragraph 48 provides that the ‘main’ policy action is that States should take effective measures to operationalise the values and principles set down in the Recommendation and, crucially, to ensure that other AI actors follow these standards. To that end, AI businesses should conduct due diligence and ethical impact assessments in accordance with the UNGPs.Footnote 81 Unlike the Draft Recommendation, the final text does not expressly say that ethics and human rights are not synonymous.Footnote 82 It nonetheless recognises in the preamble, as quoted above, that ethical values and principles can help develop and implement rights-based policy measures and legal norms.Footnote 83 This reiterates the position of UNESCO that ethics and human rights are closely connected. It suggests, in line with the stance taken in this article, that AI ethical standards may be able to compensate, to some extent, for the lack of legal certainty and the shortcomings of international human rights law regarding the risks posed by AI, such as its inability to match the pace of technological development.

Overall, a range of initiatives have been undertaken at the international level which have placed considerable emphasis on a range of human or fundamental rights, even in ethics-based initiatives. Taken together, these initiatives certainly provide greater clarity concerning applicable standards for a range of public and private actors involved in the development or deployment of AI.

IV. REGIONAL AI INITIATIVES

At the regional level, many law-and-governance initiatives have been taken that could have an impact on legal certainty and the protection of human rights in relation to AI. This section will consider a number of European initiatives, as this is the region that has been most active in the governance of AI.

In terms of legally binding instruments, the European Union General Data Protection Regulation (GDPR) is perhaps the most obvious example.Footnote 84 Like many initiatives targeting privacy and data protection, the GDPR is not specific to AI, but applies more generally to data processing activities. The overall aim of the GDPR is to protect individuals’ personal data.Footnote 85 The GDPR lays down due diligence standards for companies involved in processing data and in the development of AI. It requires ‘data controllers’Footnote 86 to monitor respect for the rights of individuals ‘to informed consent and freedom of choice when submitting data, as well as their right to access, amend and verify data’.Footnote 87 As regards AI specifically, Article 22 of the GDPR prohibits some forms of automated decision-making. Whilst some commentators have been disappointed with its practical impact,Footnote 88 the GDPR is a significant development in relation to AI since it adds a degree of legal certainty and it clearly places standards on some businesses developing AI, which can be fined for non-compliance.Footnote 89
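
The operative logic of Article 22 can be illustrated schematically. The sketch below is a simplification under stated assumptions: the field names are hypothetical, and only one of Article 22(2)'s exceptions (explicit consent) is modelled. It shows how a controller might route decisions that are solely automated and produce legal or similarly significant effects.

```python
# A hypothetical sketch of routing decisions in the spirit of GDPR
# Article 22: solely automated decisions with legal or similarly
# significant effects are diverted to human review. Field names are
# illustrative; only the explicit-consent exception is modelled.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    solely_automated: bool
    significant_effect: bool   # legal or similarly significant effect
    explicit_consent: bool     # one of Article 22(2)'s exceptions

def route(decision: Decision) -> str:
    if not (decision.solely_automated and decision.significant_effect):
        return "proceed"                    # outside Article 22's scope
    if decision.explicit_consent:
        # Even under an exception, safeguards such as the right to
        # obtain human intervention remain required (Art 22(3)).
        return "proceed_with_safeguards"
    return "human_review_required"

print(route(Decision("s-001", True, True, False)))  # human_review_required
```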

The EU has also developed the well-known ‘Ethics Guidelines for Trustworthy AI’, adopted by the High-Level Expert Group on Artificial Intelligence established by the European Commission.Footnote 90 The Guidelines set out seven requirements for trustworthy AI, based on four ethical principles. The Guidelines use the language and framework of ethics throughout but are presented as being based on fundamental rights as reflected in international human rights law. The Guidelines even go so far as to say that ‘[r]espect for fundamental rights … provides the most promising foundations for identifying abstract ethical principles and values, which can be operationalised in the context of AI’.Footnote 91 The Guidelines go on to say that respect for human dignity is the common foundation of human rights, which itself reflects the Guidelines’ ‘human-centric approach’ to AI.Footnote 92 As a result, many of the points made in the Guidelines align with human rights standards, even when this is not expressly stated. This is certainly true of the Guidelines’ provisions concerning privacy and bias, the latter of which is closely connected to the right to non-discrimination.

The European Commission and European Parliament have adopted numerous AI initiatives, including the Commission's ‘White Paper on Artificial Intelligence’Footnote 93 and a series of Parliament Resolutions and recommendations to the Commission relating to AI on topics including: ethics;Footnote 94 liability;Footnote 95 copyright;Footnote 96 criminal matters;Footnote 97 and education, culture and the audiovisual sector.Footnote 98 Given the differing scope of these various initiatives, it is unsurprising that they engage with human rights to differing degrees. For instance, the Resolution on a Framework of AI Ethics pays significantly more attention to a broader range of fundamental rights than does the Resolution on copyright, although the latter has a particular focus on data protection and privacy. The Resolution on a Framework of AI Ethics expressly engages with a broad range of rights and the need for an EU regulatory framework on AI to be based on international human rights law.

In April 2021 the European Commission published the draft ‘Artificial Intelligence Act’,Footnote 99 which sets out a proposed legal framework for AI. Although an EU instrument, the draft Regulation could have a considerably broader impact geographically.Footnote 100 The Artificial Intelligence Act builds on ethics-based initiatives such as the Ethics Guidelines for Trustworthy AI and the Resolution on a Framework of AI Ethics. The initial draft has been met with mixed reactions.Footnote 101 It places a significant, although arguably still insufficient, emphasis on fundamental rights.Footnote 102 This is reflected in two of the proposal's four specific objectives, the first of which is to ‘ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values’.Footnote 103 The proposal contains a list of relevant rights found in the EU Charter of Fundamental Rights that must be protected in relation to AI,Footnote 104 building on the rights pinpointed in the Commission's White Paper and the European Parliament's Resolutions. Workers’ rights, freedom of expression, the rights to an effective remedy and to a fair trial are all listed, as well as the right to a high level of environmental protection.Footnote 105 Significantly, the proposal only places ‘regulatory burdens’ on AI systems that are ‘likely to pose high risks to fundamental rights and safety’ (the so-called ‘high-risk’ systemsFootnote 106). Systems unlikely to pose such risks are subject to much more limited transparency requirements, and businesses developing these systems are encouraged to adopt codes of conductFootnote 107 rather than being required to conduct the conformity assessments which are obligatory for high-risk systems. Leaving aside the draft's current flaws for the moment, the proposed Regulation does provide relatively detailed standards for both States and businesses working with AI. Although not framed as a ‘human rights’ initiative, the draft arguably goes some way to addressing the legal uncertainty regarding the application of international and regional human rights law to AI for both States and AI businesses. It is to be hoped that the final version of the Regulation will go even further in this regard.
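
The proposal's risk-based architecture can be summarised schematically. The mapping below is a deliberately simplified sketch of the April 2021 draft's tiers; it does not track the proposal's full definitions or annexes.

```python
# A schematic sketch of the draft AI Act's risk-based architecture as
# proposed in April 2021. Tiers and obligations are simplified for
# illustration only.
OBLIGATIONS = {
    "prohibited":   ["may not be placed on the Union market"],
    "high_risk":    ["conformity assessment before market entry",
                     "risk management system", "data governance",
                     "technical documentation", "human oversight"],
    "limited_risk": ["transparency duties (e.g. disclosing chatbots)"],
    "minimal_risk": ["voluntary codes of conduct encouraged"],
}

def obligations_for(tier: str) -> list[str]:
    return OBLIGATIONS[tier]

print(obligations_for("high_risk"))
```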

Within the Council of Europe, the Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data is also noteworthy.Footnote 108 The Protocol, which is not yet in force, aims to modernise the ConventionFootnote 109 and address emerging challenges to the effective protection of privacy brought about by new technologies.Footnote 110 The instrument is not an ‘AI’ initiative per se but, similar to the GDPR, it would have an impact on some aspects of the development and deployment of AI.

The Protocol expressly refers to the need to ensure the protection of human rights and fundamental freedoms and to balance rights against one another (eg privacy and freedom of expression). Specifically, Article 5(1) of the consolidated treaty (‘Convention 108+’) contains a requirement that data processing be ‘proportionate in relation to the legitimate purpose pursued and reflect at all stages of the processing a fair balance between all interests concerned, whether public or private, and the rights and freedoms at stake’. This essentially reflects the balancing act required in the application of the legitimate limitations to human rights found in some provisions of the European Convention on Human Rights.Footnote 111

The Protocol takes the approach typical of the Council of Europe in placing positive obligations on State Parties, which include the obligation to ensure the protection of individuals from violations by the private sector. This includes, for instance, the obligation in Article 10(2) that State Parties ‘provide that controllers and, where applicable, processors, examine the likely impact of intended data processing on the rights and fundamental freedoms of data subjects prior to the commencement of such processing’,Footnote 112 which is similar to a duty of human rights due diligence or impact assessment to be imposed on (private) data controllers by the State. Despite this, the Explanatory Report to the ConventionFootnote 113 seems to suggest that the obligation may be less formal than the due diligence processes expected under other instruments (eg the UNGPs), which could result in there being different, although not necessarily conflicting, standards. In any case, the Convention certainly provides clarity and confirmation regarding the applicability of certain human rights standards in the context of AI.

The Recommendation of the Committee of Ministers on the human rights impacts of algorithmic systems is also significant, particularly in how it addresses the responsibilities of AI businesses.Footnote 114 This document, clearly a human rights-based initiative, contains a recommendation that the governments of Council of Europe Member States:

ensure, through appropriate legislative, regulatory and supervisory frameworks related to algorithmic systems, that private sector actors engaged in the design, development and ongoing deployment of such systems comply with the applicable laws and fulfil their responsibilities to respect human rights in line with the [UNGPs] and relevant regional and international standards.Footnote 115

This is the most explicit reference to the human rights responsibilities of private businesses in any of the regional initiatives analysed and brings the Recommendation into line with the Toronto Declaration at the international level.

The approach here is again one based on the positive obligations of States and, like the Protocol discussed above, would require States to place a duty of due diligence on the businesses mentioned.Footnote 116 Indeed, the Appendix to the Recommendation explains that pursuant to ‘the horizontal effect of human rights’Footnote 117 and the central role of private sector actors in all stages of an AI system's life cycle, including in collaboration with the public sector, ‘some of the key provisions that are outlined [in the Recommendation] as obligations of States translate into legal and regulatory requirements at national level and into corporate responsibilities for private sector actors’—in other words, it has indirect horizontal effect, as explained in Section II.B above.Footnote 118 However, the Appendix to the Recommendation emphasises that businesses should comply with this responsibility regardless of whether or not States are able and willing to fulfil their own human rights obligations. Ultimately, a very clear human rights-based approach is taken by the Recommendation, with due consideration given to the role of different actors in the protection of human rights. Having said that, the standards themselves are relatively vague and, unlike the Toronto Declaration, do not clarify in detail the standards expected of businesses. Rather, and similarly to Convention 108+, the Recommendation reconfirms the applicability of general standards in relation to AI.

Very little action has been taken by regional organisations outside of Europe, although the African Union has ‘called for a structured regulation of AI to manage the benefits of the technology for Africans, and to foresee and curb the risks’.Footnote 119 In Asia, no instruments have been adopted by the Association of South East Asian Nations (ASEAN), but various national initiatives have been adopted within the region (see Section V below). The same can be said regarding the Inter-American human rights system. Looking beyond international organisations, the McKinsey Global Institute has reviewed the current state of play concerning AI in ASEAN States and has made several recommendations to both States and businesses developing or deploying AI, albeit with extremely scant reference to either ethics or human rights. There is no direct discussion of ‘rights’ as such and only one mention of ethics, made when flagging the complex ethical questions that arise from AI.Footnote 120 Its most significant comments relate to rights-based or ethical standards concerning privacy, stressing the need for governments to comply with ‘privacy norms and laws’ and ‘to grapple with defining principles of privacy as new uses are generated by AI’.Footnote 121 Again, this does not in itself clarify the applicable standards, although taken together, the regional initiatives could be said to clarify more generally which human rights standards are applicable, and to whom, in the context of AI.

V. NATIONAL AI INITIATIVES

A number of countries have now adopted national strategies concerning AI, and some have also adopted legislation. However, the inclusion of human rights in such strategies and legislation is another question. In some countries, such as Singapore, AI governance frameworks have been proposed which are entirely based on ethics and refer to human rights only in passing. Singapore's Personal Data Protection Commission adopted a revised ‘Proposed Model AI Governance Framework’ in 2020, with the purpose of converting ethical principles into ‘implementable practices’.Footnote 122 Whilst being listed in Annex A as a ‘foundational ethical principle’, respect for international human rights in the design, development and implementation of AI is not mentioned at all in the main text. The importance of AI solutions being ‘human-centric’ is said to be a ‘high-level guiding principle’, meaning that the ‘well-being and safety [of human beings] should be primary considerations in the design, development and deployment of AI’.Footnote 123 The Framework contains suggestions as to how this can be achieved, many of which would contribute to human rights protection. For example, it proposes a ‘probability-severity of harm matrix’ (sketched below) to help entities wanting to deploy an AI model in decision-making processes determine the level to which humans should be involved in order to mitigate the level of potential harm to an individual caused by reliance on the model.Footnote 124 The document also emphasises the importance of establishing clear roles and responsibilities for different actors in the ethical deployment of AI, although it does not provide specific guidance.
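
A minimal sketch of how such a probability-severity matrix might operate follows. The three oversight levels are the Framework's own terms (human-in-the-loop, human-over-the-loop, human-out-of-the-loop), but the numeric scores and cut-offs below are illustrative assumptions rather than values taken from the Framework.

```python
# A minimal sketch of a probability-severity of harm matrix mapped to
# the Singapore Framework's three human-oversight levels. The numeric
# cut-offs are illustrative assumptions.
def oversight_level(probability: float, severity: float) -> str:
    """probability and severity: scores normalised to [0, 1]."""
    risk = probability * severity
    if risk >= 0.5:
        return "human-in-the-loop"    # a human makes the final decision
    if risk >= 0.1:
        return "human-over-the-loop"  # a human monitors and can intervene
    return "human-out-of-the-loop"    # full automation is acceptable

print(oversight_level(0.9, 0.8))  # human-in-the-loop
print(oversight_level(0.3, 0.1))  # human-out-of-the-loop
```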

In Denmark, the Danish Expert Group on Data Ethics adopted recommendations on ‘Data for the Benefit of the People’.Footnote 125 These recommendations are intended to foster the responsible use of data in the business sector and are restricted to the context of data processing. Reference is made to equality and non-discrimination with regard to bias in AI, and to human dignity, which is said to outweigh profit and ‘must be respected in all data processes’.Footnote 126 The recommendations suggest ways of ensuring that these ‘values’, which are also found in international human rights law,Footnote 127 form the foundation of data-driven systems and of future policy and legislation regarding data processing. This includes, among other measures, introducing an Independent Council for Data Ethics, a ‘data ethics oath’ similar to a doctor's Hippocratic oath, and mandatory declarations of data ethics in the annual financial statements of large companies. Each of these measures can contribute to the protection of non-discrimination and dignity.

In 2020, based in part on these recommendations, Denmark amended its national legislation to add a requirement that, from January 2021 onwards, large and listed companies and State-owned public limited companies provide information in their annual reports on their data ethics policies or publish them on their website, with an explanation as to why no policy had been implemented if that was the case (known as the ‘comply or explain’ principle).Footnote 128 The information to be provided could include, inter alia, how and why the company uses and chooses to implement new technologies, including AI, and how algorithms used by the company are trained, as well as the safeguards put in place to mitigate bias.Footnote 129 In terms of concrete standards and contributing to legal clarity for businesses and States alike, these measures are quite robust. There is also a focus on transparency, which as noted above can enhance accountability and access to remedies and can also have a ‘domino effect’ on the protection of other rights, such as privacy and non-discrimination.
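
The ‘comply or explain’ mechanism reduces to a simple conditional rule, sketched below with hypothetical field names: a covered company's annual report is acceptable if it contains either a data ethics policy (or a link to one) or an explanation of the policy's absence.

```python
# A minimal sketch of the 'comply or explain' rule in the 2020 Danish
# amendment. Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnualReport:
    company: str
    data_ethics_policy: Optional[str] = None  # policy text or URL
    explanation: Optional[str] = None         # why no policy exists

def complies(report: AnnualReport) -> bool:
    return bool(report.data_ethics_policy) or bool(report.explanation)

print(complies(AnnualReport("ExampleCo",
                            explanation="Policy under development")))
# True: 'explain' satisfies the rule even without a policy.
```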

The German Data Ethics Commission has also adopted an Opinion which expressly supported the ‘European path’, by which:

the defining feature of European technologies should be their consistent alignment with European values and fundamental rights, in particular those enshrined in the European Union's Charter of Fundamental Rights and the Council of Europe's Convention for the Protection of Human Rights and Fundamental Freedoms.Footnote 130

A rights-based approach is certainly evident in the Opinion, which is based on an understanding that, whilst important, ethics cannot replace regulation, particularly regarding issues such as AI where ‘heightened implications for fundamental rights’Footnote 131 require key decisions to be made by democratically elected representatives. It also makes recommendations for both governments and businesses regarding AI, suggesting, among other measures, the consideration of ‘enhanced obligations of private enterprises to grant access to data for public interest and public-sector purposes’.Footnote 132 The introduction of a binding ‘Algorithmic Accountability Code’ for operators of algorithmic systems, ‘inspired by the “comply or explain” regulatory model’Footnote 133 as seen in the new Danish legislation, is also suggested. A legally binding code of conduct for businesses accompanied by oversight by an independent body could have a significant impact on the protection of human rights, should it include measures such as the adoption of human rights due diligence processes.

In a similar vein, the German Ethics Council adopted an ‘Opinion on Big Data and Health – Data Sovereignty as the Shaping of Informational Freedom’.Footnote 134 The Council emphasises the need for businesses to take responsibility, suggesting this could be achieved through ‘strengthening the oversight and verifiability of their processes in terms of, for example, the algorithms employed; the measures taken to eliminate systematic discrimination; the adherence to regulations pertaining to data safekeeping, anonymisation and deletion; and the gapless and tamperproof [documentation] of the origin, processing, use and exchange of data’.Footnote 135 Again, each of these measures would go some way to protecting the rights to privacy and non-discrimination.

A different approach is taken by the ‘Norwegian National Strategy for Artificial Intelligence’, adopted by the Norwegian Ministry of Local Government and Modernisation.Footnote 136 Rather than recommending binding standards applicable to businesses, the Strategy encourages businesses to ‘establish their own industry standards or labelling or certification schemes based on the principles for responsible use of artificial intelligence’.Footnote 137 As a strategy document, it contains fewer specific recommendations for businesses and government and is limited to laying down some of the steps Norway will take in the regulation and governance of AI. Nonetheless, there is a recurring commitment to the protection of human rights throughout the document, which also states that Norway will adopt the ethical principles put forward in the EU Ethics Guidelines for Trustworthy AI in its governance of AI and highlights the Guidelines’ basis in fundamental rights.Footnote 138

As of 2020, 128 countries had adopted legislation on data protection more broadly,Footnote 139 which, as seen above, can also require certain conduct from entities working with AI. The materials examined suggest that, apart from privacy, there are relatively few direct references to the protection of human rights in national legislation and official regulations related to AI. However, some instruments, such as those adopted in Australia, New Zealand and Germany,Footnote 140 include more general references to the protection of human rights and also contain standards that can have an impact on the protection of human rights without being framed as such. Other legislative initiatives have been taken at the sub-national level, such as legislation adopted in Washington State in the US regarding governmental use of facial recognition and a bill concerning discrimination and the use of automated decision-making.Footnote 141

Overall, many countries are making strides in the introduction of legislation or regulation concerning AI, including through the adoption of national AI strategies, and non-binding national measures sometimes reference the broad range of human rights found at the international level. This is positive, but beyond data protection and privacy, the protection of human rights has not yet been thoroughly embedded in national legislation related to AI. Nonetheless, there are some positive contributions that enhance legal certainty for both States and businesses in the national initiatives analysed.

VI. ANALYSIS AND COMPARATIVE OBSERVATIONS

A very wide range of initiatives have been taken that set out, even if relatively vaguely, the human rights standards applicable to those involved in developing and deploying AI. Many of the initiatives examined could, if implemented, contribute to the protection of human rights in situations involving AI. While this is promising, three main issues remain to be adequately addressed in order to enhance human rights protection: (i) consideration of human rights other than those directly related to privacy and data protection, in particular economic, social and cultural rights, which are significantly affected by AI; (ii) more detailed standards regarding the human rights responsibilities of businesses involved in developing AI; and (iii) improved coordination to avoid the adoption of contradictory standards which can undermine legal and regulatory certainty and clarity.

As has been seen, some initiatives approach AI from a human rights perspective (eg the Toronto Declaration and the work of the Council of Europe); some approach it from an ethics perspective that may nonetheless have an impact on human rights protection (eg the EU Ethics Guidelines for Trustworthy AI); and some take neither as their starting point but may reference both (eg the UNESCO Recommendation). Some approaches are not based only or directly on AI, but nevertheless have an impact upon it (eg the GDPR and Convention 108+). In addition, some are addressed only to States or State actors, others are addressed to all relevant actors, and a small number specifically address businesses (eg the Toronto Declaration and the OECD's Business and Finance Outlook 2021).

While a range of human rights are covered by these various initiatives, legally binding instruments almost exclusively focus on privacy and data protection (with the key exception of the proposed AI Act). This can be seen within the European Union (eg the GDPR) and the Council of Europe (Convention 108+), at the national level in the US and India, among other countries, and at the subnational, state level in the US (eg the Washington State Legislature Bill). Non-discrimination is also covered, very often indirectly through the concept of bias in documents relating to AI ethics, but sometimes as a human right, particularly within Europe. Labour rights are occasionally mentioned, though at a relatively superficial level and by way of passing references. There is a need for further clarification of the applicable standards relating to the many other rights potentially negatively affected by reliance on AI.

While many initiatives do in fact discuss human rights, the number that genuinely engage with them and suggest concrete human rights standards for different AI actors, and particularly non-State actors, is significantly lower. Indeed, many remain decidedly vague. This includes those simply stating that human rights must be respected or that the principles proposed are generally based on human rights (eg the EU Ethics Guidelines for Trustworthy AI and the Proposed Model AI Governance Framework in Singapore). The same is true of those that engage with human rights more explicitly but fail to set out specific standards (eg the Norwegian National Strategy for Artificial Intelligence). There are, however, important exceptions. The Toronto Declaration, building on the much more general UNGPs, sets out a number of specific responsibilities of both States and businesses with regard to discrimination in machine learning, and the OECD's guidance regarding HRDD in AI supply chains provides concrete recommendations for various (private) AI actors. Additionally, the Council of Europe's Recommendations lay down several standards for businesses to follow, despite the apparently State-centric wording of the Recommendations themselves.

Interestingly, some ethics-based documents overlap considerably with approaches found within international human rights law and sometimes provide greater detail concerning how to achieve, for instance, transparency and accountability in relation to AI than is found in human rights law. To that extent, international human rights law may be able to lean on AI ethical standards in order to achieve its goals. Examples include calls for staff training and for inclusivity and diversity among staff, the use of explainable AI models, auditing, the assignment of responsibility to specific actors, the adoption of due diligence processes, and mechanisms for seeking remedies. These are all called for in statements concerning ethics and AI (eg the Recommendations of the Danish Expert Group on Data Ethics and the EU Ethics Guidelines for Trustworthy AI), but are equally relevant in the context of human rights protection.

Overall, it appears that, with a few exceptions, developments at the international and regional levels are more consistent in approaching human rights in a direct manner, whereas national approaches tend to focus more on AI ethics. However, even where ethics provides the favoured framework, such initiatives are not blind to the need to tackle issues from a human rights perspective and appear to follow the premise that human rights and ethics can be mutually reinforcing (eg the UNESCO Recommendation).

Non-binding initiatives at the international level and within the Council of Europe seem more likely to address human rights directly, and particularly the responsibilities of AI businesses. As extra-legal initiatives, they are not bound by the State-centric legal framework of international law, which facilitates a more direct discussion of corporate responsibility and how non-binding instruments such as the UNGPs apply to AI. In addition, at the international level human rights tend to be articulated as standards applicable to States. At the national level, legally binding measures are more focused on outcomes than on whether the standards advanced have their basis in human rights or ethics. The ethical focus of many regional and national initiatives also reflects the typical approach of AI practitioners, who tend to think in terms of ethics rather than human rights. At the regional level, ethics-based instruments provide more detail and hence greater clarity concerning what is expected of the various actors than human rights-based instruments, and a cross-sectoral analysis is sometimes needed to see how the standards they set out relate to human rights.

In terms of content, many national and regional initiatives focus on the development of standards concerning privacy and data protection, as well as non-discrimination. Efforts must be made to provide specific standards covering the much broader range of human rights that can be negatively impacted by AI, as acknowledged in the proposed AI Act and in numerous international initiatives.

More needs to be done to clarify the extent of corporate responsibility in relation to human rights in the context of AI. A ‘one-size-fits-all’ approach is certainly neither desirable nor possible given the vast range of AI businesses and the AI models that they produce. However, more legal certainty and clear advice for businesses developing AI is crucial to ensuring the effective protection of human rights in this context. The majority of initiatives at all levels pay attention to what businesses can and should be doing to ensure ethical or human rights-friendly AI, with some, such as the Toronto Declaration and the UNESCO Recommendation, going so far as to cite the UNGPs expressly. However, many have only limited provisions for the supervision and enforcement of standards. This has significant consequences for accountability, which as noted above is key to securing human rights protection in this, and indeed any other, setting.

Finally, it is important from a (good) governance perspective to remember that a considerable amount of duplication, overlap and potential contradiction remains, and that better coordination is needed in order to improve legal and regulatory certainty for those involved in developing and deploying AI. Nonetheless, a key lesson to be learned from this analysis is that in order to fully protect human rights in the era of AI (especially in terms of what can and should be done by the different actors involved), it is necessary to consider standards found not only in legal and human rights-specific instruments, but also in a wide range of initiatives that have a bearing on human rights issues, irrespective of whether they are labelled as such.

VII. CONCLUSION

A true law-and-governance approach is being taken globally to try to reap the benefits of AI whilst curbing its negative impacts on individuals and society. Currently, the number of non-binding governance initiatives related to AI and human rights greatly outweighs the number of legally binding initiatives, particularly at the international level. The focus on the rights to privacy and data protection in the legally binding initiatives is understandable but must be extended to other rights which are not sufficiently addressed, including the rights to food, education, health and a healthy environment. The wide range of actors that have stepped up to take action in this area has led to varied, yet often complementary sets of standards and recommendations at the national, regional and international levels with differing impacts on the protection of human rights.

Overall, whilst many initiatives clearly articulate the applicability of existing human rights standards to both businesses and States developing and deploying AI, few provide real clarity concerning what is to be expected of them. Nevertheless, an examination of these various initiatives can help remedy, to some extent, the weaknesses of the international human rights law framework in relation to AI: its limited capacity to keep up with the pace of AI development; the limited opportunities for authoritative pronouncements concerning the implications of human rights law for AI; the State-centric approach of international human rights law; and the difficulty of ‘future-proofing’ international human rights law given the uncertainties of how some AI systems may develop.

This analysis shows that initiatives that are not restricted by legal frameworks and that are more practically focused are more helpful to AI practitioners, allowing them to see exactly what such standards mean for them. Such initiatives are also better able to tackle issues head-on. It is crucial to make use of the broader and technical expertise reflected in these initiatives to supplement the standards and approaches developed under international human rights law.

Footnotes

This research was funded by the Dutch Sectorplan for Social Sciences and Humanities. The author is grateful to the anonymous reviewers for their very constructive feedback on earlier drafts of this article.

References

1 eg L McGregor, D Murray and V Ng, ‘International Human Rights Law as a Framework for Algorithmic Accountability’ (2019) 68 ICLQ 309; NA Smuha, ‘Beyond a Human Rights-Based Approach to AI Governance: Promise, Pitfalls, Plea’ (2021) 34 Philosophy & Technology 91. Despite enthusiasm for a human rights-based approach to the governance of AI, it is important to be mindful of its pitfalls beyond the limitations outlined in this article. See A Su, ‘The Promise and Perils of International Human Rights Law for AI Governance’ (2022) 4(2) Law, Technology and Humans.

2 L Bennett Moses, ‘Why Have a Theory of Law and Technological Change?’ (2007) 8(2) Minnesota Journal of Law, Science and Technology 589, 590, 605–6, discussed in M Maas, ‘International Law Does Not Compute: Artificial Intelligence and the Development, Displacement or Destruction of the Global Legal Order’ (2019) 20(1) MJIL 29, 38–43.

3 A Jobin, M Ienca and E Vayena, ‘The Global Landscape of AI Ethics Guidelines’ (2019) 1 Nature Machine Intelligence 389; T Hagendorff, ‘The Ethics of AI Ethics: An Evaluation of Guidelines’ (2020) 30 Minds and Machines 99; M Ienca and E Vayena, ‘AI Ethics Guidelines: European and Global Perspectives’ in Ad Hoc Committee on Artificial Intelligence, ‘Towards Regulation of AI Systems: Global Perspectives on the Development of a Legal Framework on Artificial Intelligence (AI) Systems Based on the Council of Europe's Standards on Human Rights, Democracy and the Rule of Law’ (2020) Council of Europe Study DGI (2020)16 38; L Schmitt, ‘Mapping Global AI Governance: A Nascent Regime in a Fragmented Landscape’ (2022) 2 AI and Ethics 303.

4 Importantly, this involves viewing governance as different from, or at least as going ‘beyond government’. eg J Graham, B Amos and T Plumptre, ‘Principles for Good Governance in the 21st Century’, Policy Brief No 15, cited in L Lane and M Hesselman, ‘Governing Disasters: Embracing Human Rights in a Multi-Level, Multi-Duty Bearer, Disaster Governance Landscape’ (2017) 5(2) Politics and Governance 93.

5 Lane and Hesselman, ‘Governing Disasters’ (n 4).

6 L Lane, ‘The Horizontal Effect of International Human Rights Law: Towards a Multi-Level Governance Approach’ (Dissertation, University of Groningen) 320–1.

7 M Zürn, ‘Global Governance as Multi-Level Governance’ in D Levi-Faur (ed), The Oxford Handbook of Governance (OUP 2012) 730, cited in Lane, ‘The Horizontal Effect of International Human Rights Law’ (n 6) 321.

8 M Bevir, ‘Governance’ in M Bevir (ed), Encyclopedia of Governance (SAGE Publications 2007), cited in Lane, ‘The Horizontal Effect of International Human Rights Law’ (n 7) 321.

9 A Završnik, ‘Criminal Justice, Artificial Intelligence Systems, and Human Rights’ (2020) 20 ERA Forum 567–83.

10 AI Ethics Lab, ‘Toolbox: Dynamics of AI Principles’ (2021) <https://aiethicslab.com/big-picture/>.

12 The analysis covers sources available in English that were adopted before June 2022.

13 Bennett Moses (n 2) 590, 605–6, discussed in Maas (n 2) 38–43.

14 Uncertainty exists regarding how international law applies to many situations, such as climate change, technological developments more broadly, and armed conflicts. On uncertainty related to sources of international law, see eg the work of the TRICI-Law team regarding the project ‘The Rules of Interpretation of Customary International Law’ <https://trici-law.com>; J Kammerhofer, Uncertainty in International Law: A Kelsenian Perspective (Routledge 2011); and J Kammerhofer, ‘Uncertainty in the Formal Sources of International Law: Customary International Law and Some of Its Problems’ (2004) 15(3) EJIL 523.

15 Issues of legal uncertainty regarding AI pervade other areas of international law, such as international humanitarian law. For instance, how international humanitarian law applies in relation to autonomous weapons systems has been heavily debated for some time. See eg M Sassòli, ‘Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to be Clarified’ (2014) 90 International Law Studies 308. The protection of human rights in relation to AI is also challenged by issues such as transparency (of systems and efforts to prevent risks) and accountability, which fall outside the scope of this article. Debate on these topics is raging, from both AI ethics and legal perspectives. eg JA Kroll et al, ‘Accountable Algorithms’ (2016) 165 UPaLRev 633; H Felzmann et al, ‘Towards Transparency by Design for Artificial Intelligence’ (2020) 26 Science and Engineering Ethics 3333; P Schmidt, F Biessmann and T Teubner, ‘Transparency and Trust in Artificial Intelligence Systems’ (2020) 29(4) Journal of Decision Systems 260; McGregor, Murray and Ng (n 1); R Rodrigues, ‘Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities’ (2020) 4 Journal of Responsible Technology.

16 R Crootof and BJ Ard, ‘Structuring Techlaw’ (2021) 34(2) HarvJL&Tech 347, 360.

17 J Bell, ‘Certainty and Flexibility in Law’ in P Cane and J Conaghan (eds), The New Oxford Companion to Law (Online Edition, Oxford University Press 2008), cited in B Oomen and A Bedner, ‘The Relevance of Real Legal Certainty’ in Real Legal Certainty and its Relevance: Essays in Honour of Jan Michiel Otto (2nd edn, Bloomsbury 2019) 1–26, 11.

18 eg ibid; JM Otto, ‘Towards an Analytical Framework: Real Legal Certainty and Its Explanatory Factors’ in J Chen, Y Liu and JM Otto (eds) Implementation of Law in the People's Republic of China (Kluwer Law International 2002); European Court of Justice, Belgium v Commission (2005) Case C-110/03, para 30, and Black-Clawson International Ltd v Papierwerke Waldhof-Aschaffenburg AG [1975] AC 591, 638, cited in M Fenwick, M Siems and S Wrbka, ‘The State of the Art and Shifting Meaning of Legal Certainty’ in M Fenwick, M Siems and S Wrbka (eds), The Shifting Meaning of Legal Certainty in Comparative and Transnational Law (Bloomsbury 2017) 1–26, 2.

19 As Nathalie A Smuha notes, if we do not elucidate how international human rights law applies and can be enforced with regard to AI, using human rights as a framework for governing AI will not fulfil its purpose: Smuha (n 1).

20 As Matthijs Maas has noted, Matthew Scherer has argued that the autonomy, opacity and unpredictability of certain AI systems might create uncertainty over concepts such as attribution, control and responsibility. MU Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2016) 29(2) HarvJL&Tech 353, 376–92, discussed in Maas (n 2) 59–60.

21 Gaps exist in other areas of law, such as liability, legal personhood and intellectual property. For discussion, see Rodrigues (n 15).

22 United Nations, Statute of the International Court of Justice (adopted 26 June 1945, entered into force 24 October 1945) USTS 993.

23 In areas of international law other than human rights, previous studies have made efforts to draw the connection between existing primary sources of international law and AI even where there is no direct reference to the latter: M Kunz and S Ó hÉigeartaigh, ‘Artificial Intelligence and Robotization’ in R Geiss and N Melzer (eds), Oxford Handbook on the International Law of Global Security (Oxford University Press 2021) 624–40.

24 Meaning judicial decisions and the work of the highest qualified publicists: Statute of the International Court of Justice (n 22) art 38.

25 eg District Court of The Hague, SyRI judgment (2020) ECLI:NL:RBDHA:2020:1878; UK Court of Appeal, R (on the application of Edward Bridges) v The Chief Constable of South Wales Police [2020] EWCA Civ 1058.

26 This can be said of contentious cases heard by the International Court of Justice: Statute of the International Court of Justice (n 22) art 59, but also cases heard by bodies that are not technically considered to adopt ‘judicial decisions’, such as the human rights treaty monitoring bodies whose views on individual communications do not have binding effect.

27 UN CteeESCR, ‘General Comment No 25 on Science and Economic, Social and Cultural Rights (article 15(1)(b), (2), (3) and (4) of the International Covenant on Economic, Social and Cultural Rights)’ (2020) E/C.12/GC/25.

28 ibid para 75.

29 eg UN CteeESCR, ‘General Comment No 5: Persons with Disabilities’ (1994) E/1995/22; and UN CteeESCR, ‘General Comment No 24 on State Obligations under the International Covenant on Economic, Social and Cultural Rights in the Context of Business Activities’ (2017) E/C.12/GC/24, cited in L Lane, ‘The Horizontal Effect of International Human Rights Law in Practice: A Comparative Analysis of the General Comments and Jurisprudence of Selected United Nations Human Rights Treaty Monitoring Bodies’ (2018) 5(1) EJCL 49–51.

30 UN CteeRC, ‘General Comment No 25 on Children's Rights in Relation to the Digital Environment’ (2021) CRC/C/GC/25.

31 For example, the Comment highlights the risks of ‘[o]ther forms of discrimination [that] can arise when automated processes that result in information filtering, profiling or decision-making are based on biased, partial or unfairly obtained data concerning a child’: ibid.

32 eg UN Human Rights Council, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression’ (2018) A/73/348; UN Human Rights Council, ‘Report of the Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston’ (2019) A/74/48037.

33 UN Human Rights Council, ‘Report of the Special Rapporteur on Extreme Poverty and Human Rights’ (n 32); SyRI judgment (n 25). In the brief, Alston raised other human rights issues, such as the right to social security and non-discrimination, although the findings of the Dutch court related primarily to the right to privacy.

34 UN Human Rights Council, ‘Report of the UN High Commissioner for Human Rights, Michelle Bachelet: The Right to Privacy in the Digital Age’ (2021) A/HRC/48/31, paras 48–50.

35 This includes the framework on business and human rights more specifically, which will be addressed in Section II.B below. For discussion of the challenges of regulating AI more generally, see T Wischmeyer and T Rademacher (eds), Regulating Artificial Intelligence (Springer 2020).

36 Taking a contemporary example, the Open-ended Intergovernmental Working Group (OEIWG) on Transnational Corporations and Other Business Enterprises with Respect to Human Rights began work on a draft binding treaty on business and human rights in 2014, after being established for that purpose by the UN Human Rights Council in ‘Resolution 26/9, Elaboration of an International Legally Binding Instrument on Transnational Corporations and Other Business Enterprises with Respect to Human Rights’, (25 June 2014) A/HRC/26/L.22/Rev.1. At the time of writing in 2022, we are eight years on and although a third revised draft was published in August 2021, an adopted treaty that can be relied on by State Parties remains a fairly distant prospect. OEIWG on Transnational Corporations and Other Business Enterprises with respect to Human Rights, ‘Third Revised Draft of a Legally Binding Instrument to Regulate, in International Human Rights Law, the Activities of Transnational Corporations and Other Business Enterprises’ (17 August 2021) <https://www.ohchr.org/en/hrbodies/hrc/wgtranscorp/pages/igwgontnc.aspx>.

37 North Sea Continental Shelf cases, Judgment, ICJ Reports 1969, 43, para 74; International Law Commission, ‘Draft Conclusions on Identification of Customary International Law, with Commentaries’ (2018) Yearbook of the International Law Commission vol. II, Part Two, para 9 of the Commentary to Conclusion 8(2). Although it may take some time to get to this stage, some authors are very optimistic that the UN system will play a key role in the global governance of AI. eg EV Garcia, ‘Multilateralism and Artificial Intelligence: What Role for the United Nations?’ in M Tinnirello (ed), The Global Politics of Artificial Intelligence (CRC Press 2022) 57–84.

38 See eg European Court of Human Rights, Volodina v Russia, App No 41261/17, Judgment of 9 July 2019; The Environment and Human Rights (State Obligations in Relation to the Environment in the Context of the Protection and Guarantee of the Rights to Life and to Personal Integrity – Interpretation and Scope of Articles 4(1) and 5(1) of the American Convention on Human Rights), Advisory Opinion OC-23/18, IACHR (ser.A) No 23 (15 November 2017).

39 eg M Monnheimer, Due Diligence Obligations in International Human Rights Law (Cambridge University Press 2021); B Baade, ‘Due Diligence and the Duty to Protect Human Rights’ in H Krieger, A Peters and L Kreuzer (eds), Due Diligence in the International Legal Order (Oxford University Press 2020) 98; N McDonald, ‘The Role of Due Diligence in International Law’ (2019) 68(4) ICLQ 1041.

40 Scherer (n 20) 365. Many of the initiatives discussed in Sections III–V call for measures to be taken to ensure that results can be explained in an accessible manner, increasing transparency and accountability with regard to those systems.

41 The EU Commission's proposed ‘AI Act’ has come under criticism on this point. eg D Matthews, ‘EU Artificial Intelligence Act Not “Futureproof”, Experts Warn MEPs’ (Science Business, 22 March 2022) <https://sciencebusiness.net/news/eu-artificial-intelligence-act-not-futureproof-experts-warn-meps>. This is despite the Commission's claims to take a ‘future-proof approach allowing rules to adapt to technological change’. European Commission, ‘Regulatory Framework Proposal on Artificial Intelligence’ (7 June 2022) <https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai>.

42 eg Amnesty International, ‘Autonomous Weapons Systems: Five Key Human Rights Considerations’ (2015). The use of military AI raises many concerns regarding international humanitarian law, which may be viewed as the lex specialis during times of armed conflict. However, the relevance of international human rights law is not ‘eclipsed’ by the application of international humanitarian law and may apply alongside it, or even constitute the lex specialis itself in some situations of reliance on AI-enabled military technologies. T Woodcock, ‘Eclipsing Human Rights: Why the International Regulation of Military AI Is Not Limited to International Humanitarian Law’ (Human Rights Here, 13 July 2021) <https://www.humanrightshere.com/post/doctoral>.

43 Lane, ‘The Horizontal Effect of International Human Rights Law in Practice’ (n 29).

44 Lane, ‘The Horizontal Effect of International Human Rights Law’ (n 6) 297.

45 Lane, ‘The Horizontal Effect of International Human Rights Law in Practice’ (n 29).

46 UN Human Rights Council, ‘Report of the Special Representative of the Secretary-General on the Issue of Human Rights and Transnational Corporations and Other Business Enterprises, John Ruggie: Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework’ (2011) A/HRC/17/31 (UNGPs).

47 The process of consultation has, however, been criticised on various grounds, including a lack of inclusion. C López, ‘The “Ruggie Process”: From Legal Obligations to Corporate Social Responsibility?’ in S Deva and D Bilchitz (eds) Human Rights Obligations of Business: Beyond the Corporate Responsibility to Respect? (Cambridge University Press 2013) 58–77; B Hamm, ‘The Struggle for Legitimacy in Business and Human Rights Regulation – A Consideration of the Processes Leading to the UN Guiding Principles and an International Treaty’ (2022) 23 Human Rights Review 103.

48 Albeit without a vote. López (n 47); Hamm (n 47).

49 OEIWG on Transnational Corporations and Other Business Enterprises with respect to Human Rights (n 36).

50 European Parliament, ‘Resolution of 10 March 2021 with Recommendations to the Commission on Corporate Due Diligence and Corporate Accountability’, 2020/2129(INL).

51 UN Human Rights Council, ‘UNGPs’ (n 46) Principle 15.

52 ibid Principles 16, 17–21 and 22, respectively.

53 European Commission, ‘Proposal for a Directive on Corporate Sustainability Due Diligence’ (2022) 2022/0051 (COD). An example of a business taking a strong stance on this is the chocolate company Tony's Chocolonely. See Tony's Chocolonely, ‘Tony's Position on Human Rights & Environmental Due Diligence Legislation’ <https://tonyschocolonely.com/nl/nl/tonys-position-on-human-rights-environmental-due-diligence-legislation>.

54 L Smit, ‘Study on Due Diligence Requirements through the Supply Chain’ (Presentation at the Netherlands Network for Human Rights Research Toogdag 2020) <https://www.asser.nl/media/680010/1-lise-smits-presentation-of-the-ec-study-on-due-diligence_1.mp4>, discussed in B Grama and L Lane, ‘Mandatory Due Diligence Trends in Europe: Promises, Possibilities and Pitfalls’ (Human Rights Here, 2020) <https://www.humanrightshere.com/post/mandatory-due-diligence-trends-in-europe-promises-possibilities-and-pitfalls>. It could also be argued that the draft international treaty on business and human rights may bring a greater level of clarity and legal certainty, especially if adopted in a widespread manner.

55 See eg C Macchi, Business, Human Rights and the Environment: The Evolving Agenda (TMC Asser Press 2022).

56 See eg B Pinnington, A Benstead and J Meehan, ‘Transparency in Supply Chains (TISC): Assessing and Improving the Quality of Modern Slavery Statements’ (2022) Journal of Business Ethics.

57 Scherer (n 20) 371, referring to R Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103 CLR 513, 534.

58 For broader discussion see C Curtis, N Gillespie and S Lockey, ‘AI-Deploying Organizations Are Key to Addressing “Perfect Storm” of AI Risks’ (2022) AI and Ethics.

59 UN CteeESCR, ‘General Comment No 25 on Science and Economic, Social and Cultural Rights’ (n 27) para 75, cited in Lane, ‘Artificial Intelligence and Human Rights’ (n 4).

60 A Berthet, ‘Why do emerging AI guidelines emphasize “ethics” over human rights?’ (Open Global Rights, 10 July 2019) <https://www.openglobalrights.org/why-do-emerging-ai-guidelines-emphasize-ethics-over-human-rights/> cited in L Lane, ‘Why an “ethical” AI approach isn't enough to protect human rights’ (Slimmer AI, 23 June 2021) <https://www.slimmer.ai/news/a-human-rights-approach-to-ai-the-whys-and-wherefores>. See also Section III below.

61 ibid.

62 eg High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’ (8 April 2019).

63 eg art 13, European Convention for the Protection of Human Rights and Fundamental Freedoms 1950, as amended by Protocol Nos 11 and 14 (adopted 4 November 1950, entered into force 3 September 1953) ETS 5 (ECHR).

64 Berthet (n 60).

65 UNI Global Union, ‘Top 10 Principles for Ethical Artificial Intelligence’ (2017) <http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf>.

66 Access Now and Amnesty International, ‘The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems’ (2018) para 51(b) <https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf>.

67 ibid para 47(c).

68 ibid para 49.

69 ibid para 51(b).

70 ibid para 48.

71 Organisation for Economic Co-operation and Development (OECD), ‘Principles on AI’ (2019) <https://www.oecd.org/digital/artificial-intelligence/ai-principles/>.

72 OECD, ‘OECD Business and Finance Outlook 2021: AI in Business and Finance’ (2021) <https://www.oecd.org/daf/oecd-business-and-finance-outlook-26172577.htm>.

73 ibid Section 3.

74 ibid Section 3.3.

75 This obligation has been repeatedly read into the broader State ‘obligation to protect’ human rights from the harmful conduct of third parties. In the context of business and human rights it has been given content by, inter alia, UN CteeESCR, ‘General Comment No 24 on State Obligations under the International Covenant on Economic, Social and Cultural Rights in the Context of Business Activities’ (n 29).

76 UNESCO, ‘Elaboration of a Recommendation on the ethics of artificial intelligence’ (2020) <https://en.unesco.org/artificial-intelligence/ethics>.

77 ibid.

78 Ad Hoc Expert Group for the preparation of a draft text of a recommendation on the ethics of artificial intelligence, ‘Outcome Document: First Draft of the Recommendation on the Ethics of Artificial Intelligence’ (2020) SHS/BIO/AHEG-AI/2020/4 Rev.2.

79 UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (23 November 2021) SHS/BIO/PI/2021/1 para 8(c).

80 ibid para 11.

81 UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (n 79) para 48. The Recommendation provides a considerable number of standards that should be followed by various AI actors, further discussion of which falls outside the scope of this article.

82 Ad Hoc Expert Group for the preparation of a draft text of a recommendation on the ethics of artificial intelligence (n 78) para 1 defines AI ethics with the statement: ‘Rather than equating ethics to law, human rights, or a normative add-on to technologies, it considers ethics as a dynamic basis for the normative evaluation and guidance of AI technologies.’ The final text follows the same definition of AI ethics, without this additional remark: UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (n 79) para 1.

83 ibid, preamble.

84 European Parliament and European Council (2016) Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR).

85 For a summary of the provisions and rules contained in the GDPR, see GDPR.EU, ‘What is GDPR, the EU's new data protection law?’ (2021) <https://gdpr.eu/what-is-gdpr>.

86 Defined as ‘the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data’: art 4(7) GDPR (n 84).

87 European Economic and Social Committee, ‘Opinion on Artificial Intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society’ (2017) INT/806.

88 eg Algorithm Watch and Bertelsmann Stiftung, ‘Automating Society: Taking Stock of Automated Decision-Making in the EU’ (2019) <https://algorithmwatch.org/en/wp-content/uploads/2019/02/Automating_Society_Report_2019.pdf>.

89 GDPR (n 84) art 83.

90 High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’ (n 62).

91 ibid.

92 ibid 9–10.

93 European Commission, ‘White Paper on Artificial Intelligence: A European approach to excellence and trust’, COM(2020) 65 final.

94 European Parliament, ‘Resolution of 20 October 2020 on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies’ (2020) 2020/2012(INL).

95 European Parliament, ‘Resolution of 20 October 2020 on a Civil Liability Regime for Artificial Intelligence’ (2020) 2020/2014(INL).

96 European Parliament, ‘Resolution of 20 October 2020 on Intellectual Property Rights for the Development of Artificial Intelligence Technologies’ (2020) 2020/2015(INI).

97 European Parliament, ‘Draft Report, Artificial Intelligence in Criminal Law and its Use by the Police and Judicial Authorities in Criminal Matters’ (2021) 2020/2016(INI).

98 European Parliament, ‘Draft Report, Artificial Intelligence in Education, Culture and the Audiovisual Sector’ (2021) 2020/2017(INI).

99 European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ (2021) 2021/0106(COD).

100 For discussion of the potential extraterritorial impact of the AI Act, see M Ipek, ‘EU Draft Artificial Intelligence Regulation: Extraterritorial Application and Effects’ (EU Law Blog, 17 February 2022) <https://europeanlawblog.eu/2022/02/17/eu-draft-artificial-intelligence-regulation-extraterritorial-application-and-effects/>; C Siegmann and M Anderljung, ‘The Brussels Effect and Artificial Intelligence: How EU Regulation Will Impact the Global AI Market’ (Centre for the Governance of AI, 16 August 2022) <https://www.governance.ai/research-paper/brussels-effect-ai>.

101 There are many commentaries on the draft from a range of actors. See eg M Veale and F Zuiderveen Borgesius, ‘Demystifying the Draft EU Artificial Intelligence Act—Analysing the Good, the Bad, and the Unclear Elements of the Proposed Approach’ (2021) 22(4) Computer Law Review International 97; European Tech Alliance, ‘The European Tech Alliance Welcomes the Artificial Intelligence Act’ (June 2021) <https://eutechalliance.eu/euta-position-on-aia/>. For a more critical perspective: Meijers Committee, ‘Comments on the AI Regulation Proposal’ (22 February 2022) CM2203 <https://www.commissie-meijers.nl/wp-content/uploads/2022/02/CM2203-Comments-on-the-AI-Regulation.pdf>; T Krupiy, ‘Why the Proposed Artificial Intelligence Regulation Does Not Deliver on the Promise to Protect Individuals From Harm’ (EU Law Blog, 23 July 2021) <https://europeanlawblog.eu/2021/07/23/why-the-proposed-artificial-intelligence-regulation-does-not-deliver-on-the-promise-to-protect-individuals-from-harm/>.

102 European Digital Rights et al, ‘An EU Artificial Intelligence Act for Fundamental Rights: A Civil Society Statement’ (November 2021) <https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf>.

103 ibid 3 (emphasis added).

104 Council of the European Union, Charter of Fundamental Rights of the European Union (2007) 2007/C 303/01.

105 European Commission, ‘Artificial Intelligence Act’ (n 99) Explanatory Memorandum, para 3.5.

106 ibid 7.

107 ibid art 69.

108 Council of Europe, ‘Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data’ (2018) CETS 223.

109 Council of Europe, ‘Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data’ (1981) CETS 108.

110 Council of Europe, ‘Convention 108+, Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data’ (2018) CETS 108.

111 Council of Europe, ECHR (n 63) art 8.

112 Emphasis added.

113 ‘Convention 108+’ (n 110) para 88.

114 Council of Europe, ‘Recommendation of the Committee of Ministers to Member States on the Human Rights Impacts of Algorithmic Systems’ (2020) CM/Rec(2020)1.

115 ibid para 3 (emphasis added).

116 ibid Appendix to the Recommendation.

117 ibid para 1.3.

118 ibid.

119 African Union, ‘Press Release: African Digital Transformation Strategy and African Union Communication and Advocacy Strategy Among Major AU Initiatives in Final Declaration of STCCICT3’ (2019) <https://au.int/en/pressreleases/20191026/african-digital-transformation-strategy-and-african-union-communication-and> cited in J Okechukwu Effoduh, ‘7 Ways That African States are Legitimizing Artificial Intelligence’ (Open African Innovation Research, 20 October 2020) <https://openair.africa/7-ways-that-african-states-are-legitimizing-artificial-intelligence>.

120 McKinsey Global Institute, ‘Artificial Intelligence and Southeast Asia's Future: Discussion Paper’ (2017) <https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/ai%20and%20se%20asia%20future/artificial-intelligence-and-southeast-asias-future.ashx2017>.

121 ibid 30.

122 Singapore Personal Data Protection Commission, ‘Proposed Model AI Governance Framework’ (2nd edn, 2020) para 1.2 <https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf>.

123 ibid para 2.7(b).

124 ibid 30–5.

125 Danish Expert Group on Data Ethics, ‘Data for the Benefit of the People: Recommendations from the Danish Expert Group on Data Ethics’ (November 2018).

126 ibid 9.

127 eg art 14, Council of Europe, ECHR (n 63); art 2, UN General Assembly, International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) 999 UNTS 171 (ICCPR). These provisions relate to non-discrimination. Rather than being a legally binding right in itself, human dignity is a value often considered to be a foundation for human rights. See eg the preamble of the ICCPR.

128 P Nørkær and S Veje Rasmussen, ‘New Requirement for Large and Listed Companies to Report on Their Data Ethics Policies’ (Moalem Weitemeyer, 21 January 2021) <https://moalemweitemeyer.com/insights/2021-01-21>.

129 ibid.

130 German Data Ethics Commission, ‘Opinion’ (2019) 7 <https://www.bmjv.de/SharedDocs/Downloads/DE/Themen/Fokusthemen/Gutachten_DEK_EN.pdf?__blob=publicationFile&v=1.27> (emphasis added).

131 ibid 7.

132 ibid Recommendation 35.

133 ibid Recommendation 59.

134 German Ethics Council, ‘Opinion on Big Data and Health – Data Sovereignty as the Shaping of Informational Freedom’ (2017).

135 ibid para 88.

136 Norwegian Ministry of Local Government and Modernisation, ‘Norwegian National Strategy for Artificial Intelligence’ (January 2020).

137 ibid 63. The strategy document also highlights on several occasions Norway's preference for voluntary standards for businesses, for example regarding the sharing of data, rather than binding requirements for such actors, although it notes that requirements could be ‘imposed if necessary; for example for reasons of public interest’.

138 ibid 58.

139 Many of the more recent of these have been inspired by the EU's GDPR, eg the Republic of Kenya, Data Protection Act (2019) Kenya Gazette Supplement No 181 (Acts No 24), discussed in G Obulutsu and D Miriri, ‘Kenya Passes Data Protection Law Crucial for Tech Investments’ (Reuters, 8 November 2019) <https://www.reuters.com/article/us-kenya-dataprotection/kenya-passes-data-protection-law-crucial-for-tech-investments-idUSKBN1XI1O1>; Okechukwu Effoduh (n 119). See also U Val Obi, ‘An Extensive Article on Data Privacy and Data Protection Law in Nigeria’ (International Network of Privacy Law Professionals, 2020) <https://inplp.com/latest-news/article/an-extensive-article-on-data-privacy-and-data-protection-law-in-nigeria/>.

140 Council of Europe Committee of Experts on Internet Intermediaries (MSI-NET), ‘Study on the Human Rights Dimensions of Automated Data Processing (in particular algorithms) and Possible Regulatory Implications’ DGI(2017)12.

141 Washington State Legislature, Engrossed Substitute Senate Bill 6280 (2020); discussed in M Nickelsburg, ‘Washington State Lawmakers Seek to Ban Government From Using Discriminatory AI tech’ (GeekWire, 13 February 2021) <https://www.geekwire.com/2021/washington-state-lawmakers-seek-ban-government-using-ai-tech-discriminates>; and National Conference of State Legislatures, ‘Legislation Related to Artificial Intelligence’ (2021) <https://www.ncsl.org/research/telecommunications-and-information-technology/2020-legislation-related-to-artificial-intelligence.aspx>.