
9 - China’s Normative Systems for Responsible AI

From Soft Law to Hard Law

from Part II - Current and Future Approaches to AI Governance

Published online by Cambridge University Press:  28 October 2022

Edited by
Silja Voeneky, Albert-Ludwigs-Universität Freiburg, Germany
Philipp Kellmeyer, Medical Center, Albert-Ludwigs-Universität Freiburg, Germany
Oliver Mueller, Albert-Ludwigs-Universität Freiburg, Germany
Wolfram Burgard, Technische Universität Nürnberg

Summary

The law scholars Weixing Shen and Yun Liu focus on China’s efforts in the field of AI regulation and spell out recent legislative actions. While there is no unified AI law in China today, many provisions of Chinese data protection law are in part applicable to AI systems. The authors analyse in particular the rights and obligations arising from the Chinese Data Security Law, the Chinese Civil Code, the E-Commerce Law, and the Personal Information Protection Law, and explain the relevance of these regulations for responsible AI and algorithm governance. The authors also introduce the Draft Regulation on Internet Information Service Based on Algorithm Recommendation Technology, which adopts many AI-specific principles such as transparency, fairness, and reasonableness. Regarding the widely discussed field of facial recognition by AI systems, they introduce a Draft Regulation and a judicial Opinion by the Supreme People’s Court of China. Finally, Weixing Shen and Yun Liu refer to the AI Act proposed by the European Commission, which could also inspire future Chinese regulatory approaches.

Type: Chapter
Information: The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, pp. 150–166
Publisher: Cambridge University Press
Print publication year: 2022
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

I. Introduction

Progress in Artificial Intelligence (AI) technology has brought us novel experiences in many fields and has profoundly changed industrial production, social governance, public services, business marketing, and consumer experience. A number of AI products and services have already been successfully deployed in fields such as industrial intelligence, smart cities, self-driving cars, smart courts, intelligent recommendation, facial recognition applications, smart investment consulting, and intelligent robots. At the same time, risks concerning the fairness, transparency, and stability of AI have raised widespread concern among regulators and the public. We may have to tolerate security risks while enjoying the benefits brought by AI development, or else bridge the gap between innovation and security so that AI can develop sustainably.

The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence declares that China is devoted to becoming one of the world’s major AI innovation centers. It lists construction goals along four dimensions: AI theory and technology systems, industry competitiveness, scientific innovation and talent cultivation, and governance norms and policy framework.Footnote 1 Specifically, by 2020, initial steps to build AI ethical norms, policies, and legislation in related fields were to be completed; by 2025, AI laws and regulations, ethical norms, and a policy framework shall be initially established, and AI security assessment and governance capabilities shall be developed; and by 2030, a more complete system of AI laws and regulations, ethical norms, and policies shall be in place. Under the guidance of the plan, the relevant departments of the Chinese authorities are actively building a normative governance system that places equal emphasis on soft law and hard law.

This chapter focuses on China’s efforts in the area of responsible AI, mainly from the perspective of the evolution of the normative system, and it introduces some recent legislative actions. The chapter proceeds mainly in two parts. In the first part, we present the development from soft law to hard law, offering a comprehensive view of the normative system of responsible AI in China. In the second part, we set out a legal framework for responsible AI with four dimensions (data, algorithms, platforms, and application scenarios), based on the statutory requirements for responsible AI in existing and developing Chinese laws and regulations. Finally, the chapter concludes by identifying the trend toward building a regulatory system for responsible AI in China.

II. The Multiple Exploration of Responsible AI

1. The Impact of AI Applications Is Regarded As a Revolution

Science and technology are a kind of productive force. The innovation and application of new technologies often improve production efficiency and stimulate transformative changes in politics, economics, society, and culture. In China, the term ‘technological revolution’ owes its currency to the widespread application of such technologies. It is well known that there have been three technological revolutions in the modern era, and China missed these three historic developmental opportunities due to foreign invasion and internal turmoil. During the first and second industrial revolutions, which were powered by steam and electricity respectively, China was in its last imperial period, the Qing Dynasty, and missed the opportunity to participate in these inventions because it was experiencing the century of humiliation in its five-thousand-year history. The Third Industrial Revolution, which began in the 1950s, was marked by the invention and application of atomic energy, electronic computers, space technology, and bioengineering; China missed most of it because it lacked the political environment to participate in international exchanges. As a result, China lagged behind for a long time. With the implementation of the reform and opening-up policy in 1978, China began to catch up, learning from the West in science and technology, legal systems, and other fields.

In order to promote the development of science and technology, Article 12 of the Constitution of the People’s Republic of China (1978 revision) stipulates that the state shall vigorously develop scientific undertakings, strengthen scientific research, carry out technological innovation and technological revolution, and adopt advanced technology in all sectors of the national economy as far as possible. In September 1988, when Deng Xiaoping, the second-generation leader of the PRC, met with President Gustáv Husák of Czechoslovakia, he said, ‘Science and technology are the primary productive force,’ a statement which has become a generally accepted consensus among the Chinese people.

China has caught up with the current, third wave of AI development. At the beginning of the twenty-first century, China’s science and technology policy began to plan for the development of ‘next generation information technology’.Footnote 2 Since 2011, Chinese official documents have made extensive reference to its development. With the rapid development of AI worldwide, China has the opportunity to stand at the same starting line in the next round of AI technology development and application. China is fully aware of the profound impact of AI technology, and some high-level documents already refer to the next round of technological development, represented by AI, as a ‘technological revolution’ comparable to the three technological revolutions mentioned above. Because it is a revolutionary technology, the Chinese government does not view AI merely as a technology, but recognizes that it will play a key role in social governance, economic structure, the political environment, the international landscape, and other respects.

On 31 October 2018, the Political Bureau of the Central Committee of the CPC held its ninth collective study on the current status and trends of AI development, and Xi Jinping particularly emphasized that

Artificial Intelligence is a strategic technology leading this round of scientific and technological revolution and industrial change, with a strong ‘head goose’ effect of spillover drive. It is necessary to strengthen research on and prevention of the potential risks of Artificial Intelligence, to safeguard the interests of the people and national security, and to ensure that Artificial Intelligence is safe, reliable, and controllable. It is necessary to integrate multidisciplinary forces, strengthen research on legal, ethical, and social issues related to AI, and establish and improve laws and regulations, institutional systems, and ethics to safeguard the healthy development of AI.Footnote 3

Recognizing that AI has such broad shaping power, China’s technology policy reflects the idea of balancing development and governance, treating both the promotion of positive social benefits from AI and the prevention of risks from AI applications as components of achieving responsible AI. On the one hand, China’s main goal since its reform and opening up has been to devote itself to economic development and the improvement of people’s living standards, and in recent years it has also put forward the reform goal of modernizing its governance system and capabilities.Footnote 4 Actively promoting AI technology development is conducive to improving the country’s economy, increasing people’s well-being, and improving the social governance system. On the other hand, AI replaces or performs some behaviors on behalf of people by technical means, and there is a risk of abuse or loss of control when the technical conditions and the social situation are not yet mature. Measures for developing the technology and measures for governing its risks are two quite different dimensions, and the discussion of responsible AI in the remainder of this chapter analyzes China’s normative system for responsible AI from the risk governance dimension.

2. The Social Consensus Established by Soft Law

Soft law is a common tool in the field of technology governance. Technical standards, ethics and morality, initiatives and guidelines, and other forms of soft law offer flexibility and inclusiveness; they can fill areas of social relationships that hard law fails to adjust in a timely manner, serving the dual goals of technological innovation and risk prevention. In China’s AI governance framework, government opinions, technical standards, and industry self-regulatory initiatives are all governance tools. These soft laws have no mandatory effect; they are mainly implemented through voluntary adoption, reference in contracts, industry self-regulation, public opinion supervision, and market competition, thereby forming a common social consensus. Tools such as technical standards can also indirectly acquire binding effect by way of legal reference.

A government opinion is a kind of nonmandatory guidance document issued by the government. In November 2017, the Ministry of Science and Technology of the PRC led the establishment of the Office of the Development and Advancement of the New Generation of Artificial Intelligence, a coordinating body jointly composed of 15 relevant departments and responsible for promoting the organization and implementation of the new generation AI development plan and major science and technology projects. In March 2019, the Office established the Committee on Professional Governance, formed by the Ministry of Science and Technology of the PRC by inviting scholars from the fields of public administration, computer science, ethics, and others. On 17 June 2019, the Committee on Professional Governance of the New Generation of Artificial Intelligence released in its own name the Governance Principles of the New Generation of Artificial Intelligence – Developing Responsible AI.Footnote 5 According to these governance principles, in order to promote the healthy development of a new generation of AI; better coordinate the relationship between development and governance; ensure safe, reliable, and controllable AI; promote sustainable economic, social, and ecological development; and build a community of human destiny; all parties involved in the development of AI should follow eight principles: (1) harmony and friendliness, with the goal of promoting common human welfare; (2) fairness and justice, eliminating prejudice and discrimination; (3) inclusiveness and sharing, being environmentally friendly, promoting coordinated development, eliminating the digital divide, and encouraging open and orderly competition; (4) respect for privacy, setting behavioral boundaries for the collection, storage, processing, use, and other handling of personal information; (5) security and controllability, enhancing transparency, explainability, reliability, and controllability; (6) shared responsibility, clarifying the responsibilities of developers, users, and recipients; (7) open cooperation, encouraging interdisciplinary, cross-disciplinary, cross-regional, and cross-border exchange and cooperation; and (8) agile governance, ensuring the timely detection and resolution of risks that may arise.Footnote 6 These principles establish the basic ethical framework for responsible AI in China.

China’s technical standards include national standards, industry standards, and local standards, which are published by government agencies, as well as consortium standards and enterprise standards, which are published by nongovernmental bodies. According to the Standardization Law of the People’s Republic of China (2017 Revision), technical standards are in principle implemented voluntarily, and mandatory standards may be set only under specific circumstances.Footnote 7 There are no mandatory standards for AI governance; those that have entered the drafting process are voluntary standards. In August 2020, the Standardization Administration of China and relevant departments released the Guide to the Construction of the National New Generation AI Standard System, which incorporates security and ethics into the national standards work plan and envisages the development of security and privacy protection standards, ethical standards, and other related standards.Footnote 8 In November 2020, the National Information Security Standardization Technical Committee issued the Guideline for Cyber Security Standards: Practice-Guideline for Ethics of Artificial Intelligence (Draft), which lists five major types of ethical and moral risk of AI: (1) out-of-control risk, which goes beyond the scope predetermined, understood, and controlled by the developer, designer, and deployer; (2) social risk, where abuse or misuse endangers social values and creates other systemic risks; (3) infringement risk, which causes damage to basic rights, the person, privacy, and property; (4) discrimination risk, which creates subjective or objective risks for specific groups of people; and (5) liability risk, where the boundaries of responsibility of the relevant parties are unclear.Footnote 9 Currently, the AI Risk Assessment Model, the AI Privacy Protection Machine Learning Technical Requirements, and other relevant technical standards have been released in draft form and are expected to become technical guidelines for AI risk assessment and privacy protection in the form of voluntary standards.Footnote 10

Industry self-regulatory initiatives are nonbinding norms issued by social groups and research institutions in conjunction with stakeholders. The Beijing Zhiyuan Institute of Artificial Intelligence, jointly built by Beijing’s AI research institutions, released the Beijing Consensus on Artificial Intelligence in May 2019. It addresses AI from three aspects, research and development, use, and governance, and proposes 15 principles, conducive to building a community of human destiny and to social development, which each participant should follow. In July 2021, the AI Forum, jointly with more than 20 universities and AI technology companies, released the Initiative for Promoting Trustworthy AI Development, putting forward four initiatives: (1) insisting on technology for good, to ensure that trustworthy AI benefits humanity; (2) insisting on shared rights and responsibilities, to promote the value concept of trustworthy AI; (3) insisting on a healthy and orderly approach, to promote trustworthy AI industry practices; and (4) insisting on pluralism and inclusion, to build international consensus on trustworthy AI. In addition, there are a series of related initiative documents in areas such as facial recognition security.

3. The Ambition toward a Comprehensive Legal Framework

China currently does not have a unified AI law, but one has been under discussion. In contrast to soft law, the national legislature can promulgate ‘hard law’ with binding force, which can establish general and binding rules on the scope of application, management system, security measures, rights and remedies, and legal liabilities of AI technologies. Once these rules are confirmed by the legislator, the relevant actors within the scope of the law must implement a unified governance model. By enacting laws, legislators are therefore selecting a definitive model of governance for society. To ensure that the right choice is made, legislators need a good grasp of the past and present of the technology, as well as a sound understanding of its future direction. At the same time, in the early stages of an emerging technology, the technological level of different developers varies widely and society’s overall stage of technological development iterates rapidly, while making and revising laws takes a long time. Legislators therefore worry that any law enacted may soon become obsolete and lag behind society’s stage of development, yet that without such a law, society may face a series of new problems brought about by disruptive innovation that cannot be clearly addressed.

During the annual ‘two sessions’ of the National People’s Congress in recent years, there have been many proposals and motions on AI governance. Several bills on AI regulation were proposed between 2018 and 2021, including the Bill on Formulating the Law on the Development of Artificial Intelligence (2018), the Bill on Formulating the Law on the Administration of Artificial Intelligence Applications (2019), and the Bill on Formulating the Law on Artificial Intelligence Governance (2021). Other delegates proposed the Bill on the Enactment of a Law on Self-Driving Cars (2019). In accordance with the procedures of the two sessions, delegates’ bills are referred to the relevant authorities for processing and response, mainly the Legislative Affairs Commission of the Standing Committee of the National People’s Congress, the Ministry of Science and Technology, and the Cyberspace Administration of China. At present, most of the proposals have been referred to the legislative bodies or the relevant industry authorities for research, and their main position is that AI legislation should be pursued as a research project and has not yet been elevated to a specific legislative agenda. For example, the Standing Committee of the National People’s Congress (NPC) proposed in its 2020 legislative work plan to

pay attention to research on legal issues related to new technologies and fields such as Artificial Intelligence, blockchain, and gene editing. Continue to promote the normalization and institutionalization of theoretical research work, give play to the role of scientific research institutions, think tanks, and other ‘external brains’, strengthen exchange and cooperation with relevant parties, and strive to produce high-quality research results promptly.Footnote 11

The legislative work on AI is also a task to which President Xi Jinping attaches importance. The Political Bureau of the CPC Central Committee held its ninth collective study session on the current status and trends of AI development on 31 October 2018. The General Secretary of the CPC Central Committee and President Xi Jinping clearly stated at this meeting that China will ‘strengthen research on legal, ethical, and social issues related to Artificial Intelligence, and establish sound laws and regulations, institutional systems, and ethics to safeguard the healthy development of Artificial Intelligence.’Footnote 12 Subsequently, in November 2018, members of the Standing Committee of the National People’s Congress (NPC) held a special meeting in Beijing to discuss the topic of regulating the development of AI, and after discussion it was concluded that

the relevant special committees, working bodies, and relevant parties of the NPC should take early action and act as soon as possible to conduct in-depth investigation and research on the legal issues involved in Artificial Intelligence, so as to lay a good foundation and make preparations for the relevant legislative work and to promote the healthy, standardized, and orderly development of AI.Footnote 13

During the two national sessions held in March 2019, more representatives and members began to discuss how to build the future rule-of-law system for AI.Footnote 14 In addition, according to the timetable established in the State Council’s Development Plan for a New Generation of Artificial Intelligence, China should initially establish an AI legal and regulatory system by 2025. To this end, China’s legislature has begun to cooperate with experts from research institutions on supporting studies. In this context, the author of this chapter also participated in the relevant discussions and undertook one of the research tasks, making suggestions on AI legislative strategy at the 45th biweekly consultation symposium of the 13th National Committee of the Chinese People’s Political Consultative Conference (CPPCC) held in December 2020, undertaking a 2021 project of the Ministry of Science and Technology, Research on Major Legislative Issues of AI, and participating in the research task of the Legislative Affairs Commission of the NPC Standing Committee on the legislation of facial recognition regulation.

Although there is no comprehensive legislative outcome yet, China’s solutions for responsible AI can be extracted from the relevant laws. For example, the E-Commerce Law of the People’s Republic of China (E-Commerce Law) enacted in 2018 prohibits the use of personal information for big-data-driven price discrimination,Footnote 15 while the Personal Information Protection Law of the People’s Republic of China (Personal Information Protection Law) and the Data Security Law of the People’s Republic of China (Data Security Law), both enacted in 2021, set out rules on automated decision-making and data security requirements. In July 2021, the Supreme People’s Court promulgated the Provisions on Several Issues Concerning the Application of Law in Hearing Civil Cases Related to the Use of Facial Recognition Technology for Handling Personal Information, which is also an important governance regulation.Footnote 16 In addition, on 27 August 2021, the Cyberspace Administration of China issued the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), the first national-level legislative document in China to comprehensively regulate AI from the perspective of algorithms. At the local level, the Shenzhen legislature used its special legislative power as a special economic zone to issue the Regulations on the Promotion of the Artificial Intelligence Industry in the Shenzhen Special Economic Zone (Draft for Soliciting Public Comment) on 14 July 2021. Although the name of the regulation contains the word ‘promotion’, it includes a special chapter, ‘Governance Principles and Measures’, providing rules for responsible AI.

III. The Legally Binding Method to Achieve Responsible AI

The new generation of AI is mainly driven by data and algorithms, and it exerts substantial social influence in different scenarios through various network platforms. Under the Chinese legal system, responsible AI can be implemented through governance along four dimensions: data, algorithms, platforms, and application scenarios.Footnote 17

1. Responsible AI Based on Data Governance

Data is the key factor driving the prosperous development of a new generation of AI. Big data resources are increasingly having a significant impact on global production, circulation, distribution, consumption, and economic and social systems, as well as national governance capabilities.Footnote 18 The Cyber Security Law enacted in November 2016 sets requirements for the security of important data and personal information respectively, and AI developers must comply with relevant regulations when processing data. In particular, national security and public interest should be safeguarded when dealing with important data, and the rights and interests of natural persons should be protected when dealing with personal information. In 2020, the newly released Civil Code of the People’s Republic of China (Civil Code) protects privacy and personal information interests. In 2021, the Data Security Law and Personal Information Protection Law (hereinafter referred to as ‘PIPL’) jointly provided for a more comprehensive approach to data governance. Responsible AI is ensured through new legal rules in four major dimensions in the field of data governance: giving individuals new civil rights, setting out obligations for processors, building a governance system for data security risk, and strengthening data processing responsibilities.

The new civil rights granted to individuals are mainly reflected in Chapter 4 of the PIPL. In addition, the Civil Code has laid down a ‘privacy right’ and a ‘personal information right.’ Privacy refers to a natural person’s undisturbed private life and the private space, private activities, and private information that the person does not want others to know about, while personal information is information recorded electronically or by other means that can be used, by itself or in combination with other information, to identify a natural person.Footnote 19 Such a distinction is less marked and rarely discussed in legal and academic research in Europe and the United States (US). Through these two regimes, however, China constructs strict protection of privacy rights, protecting natural persons from being exposed or interfered with and giving them the right to keep their personal information from being handled illegally. According to the Civil Code, the provisions on privacy apply to the private information contained in personal information; where there is no such provision, the provisions on the protection of personal information, such as the PIPL, apply. The PIPL provides a series of specific rights in Articles 44–55, the content of which is consistent with some articles of the European Union (EU) General Data Protection Regulation (GDPR),Footnote 20 and the fundamental purpose of which is to safeguard the rights of individuals in the data processing environment. Based on the protection of these rights, when AI handles personal information it is also necessary to fully respect human dignity and to ensure that personal information is not plundered by information technology. See Table 9.1 for the specific system of rights and its legal basis.

Table 9.1. Individuals’ Rights in Personal Information Processing Activities

No. | Name of right | Legal reference
1 | The right to be informed, to decide, and to restrict or refuse processing | PIPL Art. 44
2 | The right to consult, duplicate, and transfer personal information | PIPL Art. 45
3 | The right to correction or supplementation of personal information | PIPL Art. 46
4 | The right to delete | PIPL Art. 47
5 | The right to request that personal information processors explain their personal information processing rules | PIPL Art. 48
6 | The right to exercise rights in the personal information of the deceased | PIPL Art. 49
7 | The right to obtain a remedy | PIPL Art. 50
8 | The right to have privacy respected | CC Art. 1032

Note: PIPL refers to the Personal Information Protection Law of the People’s Republic of China; CC refers to the Civil Code of the People’s Republic of China; DSL refers to the Data Security Law of the People’s Republic of China.

The obligations of processors are designed not only to protect the personal information rights and interests of natural persons but also to strengthen the specific regulatory measures of protection. AI developers and operators may be personal information processors, and they are required to comply with the nine major obligations under the PIPL and the Data Security Law shown in Table 9.2. These obligations cover the entire life cycle of personal information processing, ensuring the accountability of AI applications and reducing or eliminating the risk of damage to personal information.

Table 9.2. Obligations of Data Processors

No. | Name of obligation | Legal reference
1 | Acquire a legal basis for processing personal information | PIPL Art. 13
2 | Truthfully, accurately, and completely notify individuals of the relevant matters in a conspicuous way and in clear and easily understood language | PIPL Art. 14 & 17
3 | Take corresponding security technical measures | PIPL Art. 51 & 59
4 | Appoint a person in charge of personal information protection | PIPL Art. 52 & 53
5 | Audit on a regular basis the compliance of their processing of personal information | PIPL Art. 54
6 | Conduct a personal information protection impact assessment in advance, and record the processing information | PIPL Art. 55 & 56
7 | Immediately take remedial measures, and notify the authority performing personal information protection functions and the relevant individuals | PIPL Art. 57; DSL Art. 29 & 30
8 | Specific requirements for sharing data | PIPL Art. 23
9 | Specific requirements for important Internet platforms | PIPL Art. 58

Note: PIPL refers to the Personal Information Protection Law of the People’s Republic of China; CC refers to the Civil Code of the People’s Republic of China; DSL refers to the Data Security Law of the People’s Republic of China.

Once data security risks materialize in AI applications, it is difficult to recover from the damage. In order to avoid different types of data security risk, concerning both personal information and important data, the Data Security Law and the PIPL establish a series of mechanisms to identify, eliminate, and resolve risks, thereby ensuring data security in AI applications. Risk governance measures can be understood along different dimensions; eight important governance measures under the law are listed in Table 9.3.

Table 9.3. Risk Management System of Data Governance

No. | Risk management system | Legal reference
1 | Informed consent | PIPL Art. 13–17
2 | Data minimization | PIPL Art. 6 & 19
3 | Openness and transparency | PIPL Art. 7, 17, 48 & 58
4 | Cross-border security management system | PIPL Art. 38–43; DSL Art. 31
5 | Sensitive personal information processing rules | PIPL Art. 28–32
6 | Categorized and hierarchical data protection system | DSL Art. 21
7 | Risk monitoring and security emergency response and disposition mechanism | DSL Art. 22 & 23
8 | Public supervision system for personal information | PIPL Art. 60–65; DSL Art. 40

Note: PIPL refers to the Personal Information Protection Law of the People’s Republic of China; DSL refers to the Data Security Law of the People’s Republic of China.

One of the basic principles of responsible AI is accountability, which also applies to data governance. The developers, controllers, and operators of AI systems can be regarded as personal information processors under the PIPL or as data processors under the Data Security Law, and they must comply with the obligations listed above. If the relevant obligated parties of an AI system violate data security obligations, they are liable for the resulting damage. Liability includes civil liability for compensation, administrative penalties, and criminal liability. Chapter VI of the Data Security Law and Chapter VII of the PIPL provide a number of legal liabilities that ensure that individuals can obtain remedies and processors are punished in the event of data risks.

2. Responsible AI Based on Algorithm Governance

Responsible AI requires a combination of external and internal factors to play an active role: data is the external factor, and the algorithm is the internal factor. Algorithms are the key components of intelligence, and a series of algorithms combined with data training can form an AI system. An example is the intelligent trial system developed by the authors’ research group for the Chinese courts in a research program on Intelligent Assistive Technology in the Context of Judicial Process Involving Concerned Civil and Commercial Cases. The actual workflow of this platform is represented in Figure 9.1.

Figure 9.1. A development process of an AI application

This process is not the only way to develop an AI system, but it is an example of common practice. Through the development process described above, a computational function can be implemented, namely data input, model calculation, and data output, and the quality of the model directly determines the performance of the AI system. In this process, pre-trained models are often selected to reduce the development workload. The algorithm used combines these pre-trained models, and the new model structure formed during development changes some of their parameters. Therefore, what is commonly governed in practice as the ‘algorithm’ mainly refers to the parameters in the model structure, and this chapter continues to use ‘algorithm governance’ as a unified concept, in keeping with the academic terminology.
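To make this workflow concrete, the following is a minimal, hypothetical sketch in Python (assuming PyTorch is available); it is not the authors’ actual trial system. A reused ‘pre-trained’ encoder is frozen, a newly added head supplies the trainable parameters, and data flows through input, model calculation, and output, so later adjustment of the system indeed amounts to changing parameters in the model structure. All names and dimensions (CaseClassifier, the 300-dimensional features, the four outcome labels) are illustrative assumptions.

```python
# Minimal sketch of the development pattern described above, under assumed
# names and dimensions: a frozen pre-trained encoder plus a new trainable head.
import torch
import torch.nn as nn

class CaseClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, num_labels: int):
        super().__init__()
        self.encoder = encoder                  # reused pre-trained component
        for p in self.encoder.parameters():
            p.requires_grad = False             # freeze the pre-trained parameters
        self.head = nn.Linear(128, num_labels)  # new parameters added during development

    def forward(self, x):
        # data input -> model calculation -> data output
        return self.head(self.encoder(x))

# Stand-in for a pre-trained encoder; in practice it would be loaded from a checkpoint.
pretrained_encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())

model = CaseClassifier(pretrained_encoder, num_labels=4)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

# Dummy training batch: 300-dimensional case features, four possible outcomes.
features = torch.randn(16, 300)
labels = torch.randint(0, 4, (16,))

logits = model(features)        # model calculation
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()                # only the head's parameters change
```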

The E-Commerce Law, the PIPL, and other related laws contain relevant provisions on algorithm governance. On 27 August 2021, the Cyberspace Administration of China released the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), which sets out many algorithm governance requirements. In the E-Commerce Law, the representative concept of algorithm governance can be summarized as ‘personalized recommendation’.Footnote 21 In the PIPL, the representative concept is ‘automated decision-making’.Footnote 22 In the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), the representative concept is ‘algorithm recommendation technology’, which refers to providing information by using algorithmic techniques such as generation and synthesis, personalized push, sorting and selection, retrieval and filtering, and scheduling and decision-making.Footnote 23 Although the name of this regulation appears to confine it to information services, information services here can be understood in a broad sense as information service technology.

It is generally accepted that the main principled requirements of algorithm governance are transparency, fairness, controllability, and accountability. The governance of algorithms in the relevant Chinese laws and regulations basically follows these principles, which are also reflected in this regulation. According to the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), the use of algorithm recommendation should follow the principles of fairness, openness, transparency, reasonableness, and honesty.Footnote 24 Moreover, it explicitly prohibits the use of algorithm recommendation services to engage in activities prohibited by laws and regulations, such as endangering national security, disrupting the economic and social order, and infringing on the legitimate rights and interests of others.Footnote 25

Data-driven AI is to a certain degree opaque, and algorithmic transparency can help us understand how AI systems work and ensure that users make well-informed choices about their use. According to the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), the algorithm recommendation service provider should inform users of the algorithm recommendation service in a conspicuous manner and properly publicize its basic principle, purpose, and operating mechanism.Footnote 26 In addition, Articles 24 and 48 of the PIPL also impose algorithmic transparency requirements, making it clear that individuals have the right to request that personal information processors explain their personal information processing rules, the right to request an explanation of decisions made through automated decision-making that significantly affect their rights and interests, and the right to refuse decisions made solely by automated decision-making.

Algorithmic bias is also a highly controversial issue in algorithm governance, and the central question is how to ensure the fairness of AI. In China, setting higher prices for price-insensitive users through algorithms occasionally occurs in e-commerce; the typical scenario is that cheaper prices are offered to new users while relatively higher prices are set for long-standing users who have developed a dependency on the platform. Article 18 of the E-Commerce Law and Article 21 of the PIPL have already made relevant provisions. Articles 10 and 18 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment) go further: providers of algorithm recommendation services shall strengthen the management of user models and user labels and improve the rules for recording points of interest in user models; they shall not record illegal keywords or unhealthy information as points of interest or use them as user labels for recommending information, and they shall not set discriminatory or prejudicial user labels. Algorithm recommendation service providers selling goods or providing services to consumers shall protect consumers’ legitimate rights and interests and shall not, based on consumers’ preferences, transaction habits, and other characteristics, use algorithms to commit unlawful acts such as imposing unreasonable differential treatment in transaction prices and other transaction conditions.
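Purely as an illustration (not a compliance method prescribed by the E-Commerce Law, the PIPL, or the draft regulation), the following Python sketch shows how differential treatment of new and long-standing users might be surfaced by comparing prices quoted for the same item under identical transaction conditions. The data fields, cohort labels, and tolerance threshold are all assumptions.

```python
# Hypothetical check for differential pricing between user cohorts, assuming a
# log of quotes for identical goods under identical transaction conditions.
from statistics import mean

quotes = [
    {"item": "hotel_room_A", "cohort": "new", "price": 289.0},
    {"item": "hotel_room_A", "cohort": "old", "price": 329.0},
    {"item": "hotel_room_A", "cohort": "new", "price": 295.0},
    {"item": "hotel_room_A", "cohort": "old", "price": 335.0},
]

def cohort_gap_exceeds(records, item, tolerance=0.05):
    """Return True if average 'old' prices exceed average 'new' prices by more than `tolerance`."""
    new_prices = [r["price"] for r in records if r["item"] == item and r["cohort"] == "new"]
    old_prices = [r["price"] for r in records if r["item"] == item and r["cohort"] == "old"]
    if not new_prices or not old_prices:
        return False
    gap = (mean(old_prices) - mean(new_prices)) / mean(new_prices)
    return gap > tolerance

print(cohort_gap_exceeds(quotes, "hotel_room_A"))  # True for this toy data
```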

At present, Chinese e-commerce operators still hold different opinions on whether such behavior constitutes algorithmic bias. However, with extensive news media coverage, public opinion is generally inclined to oppose algorithmic biases such as the differential treatment of long-standing users.

AI replaces some human behavior with automatic machine behavior, and controllability is the essential requirement for ensuring the safety and stability of AI. In order to prevent the risk of loss of control, the Regulations on the Promotion of the Artificial Intelligence Industry in the Shenzhen Special Economic Zone (Draft), released in July 2021, set out rules for agile governance, that is, organizing and conducting social experiments on AI; studying the comprehensive influence of AI development on the behavior patterns, social psychology, employment structure, income changes, and social equity of individuals and organizations; and accumulating data and practical experience.Footnote 27

At present, China mainly focuses on self-driving cars and so-called robot advisors that give advice with regard to investment decisions. Relevant departments of the State Council and some localities have issued a series of road-testing specifications for intelligent connected vehicles (ICV), making closed road testing a prerequisite for self-driving cars to be put on the market. At the same time, to further improve their controllability, the cars tested on designated open roads must also be equipped with drivers ready to take over.Footnote 28

On the other hand, the Guidance on Standardizing the Asset Management Business of Financial Institutions, issued by the People’s Bank of China and other departments in 2018, also sets out requirements for preventing the risk of loss of control in the field of smart investment consultants. It states that

Financial institutions should develop corresponding AI algorithms or programmed trading for different product investment strategies, so as to avoid algorithmic homogeneity increasing the pro-cyclicality of investment behavior, and should develop a response plan for the resulting market volatility risk. Where defects in Artificial Intelligence algorithm models or system anomalies, such as algorithm homogenization, programming design errors, or insufficient depth of data utilization, result in herding effects and affect the stable operation of financial markets, financial institutions should promptly take manual intervention measures to force the adjustment or termination of the Artificial Intelligence business.Footnote 29

The accountability of algorithms requires that regulators and stakeholders perform their respective duties to ensure that technological innovation is accompanied by effective risk mitigation. In terms of China’s legal system, the Civil Code, the Product Quality Law, and other related laws provide the basis for the accountability of the algorithm.

For example, the Product Quality Law requires producers who design and sell products to exercise the best (not the highest) degree of care; at the same time, it imposes strict liability for unreasonable risks and aims to push producers of AI system products to improve their controllability. In the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), the Cyberspace Administration of China proposed a new rule under which providers of high-risk algorithm recommendation services should register within ten working days from the date on which the service is provided: information including the service provider’s name, service form, application domain, algorithm type, algorithm self-evaluation report, and the content to be publicized should be submitted through the Internet information service algorithm filing system.Footnote 30 In addition, algorithm recommendation service providers should accept social supervision, set up a convenient portal for complaints and reports, and handle public complaints and reports promptly. They should also establish a user complaint channel and system, handle user complaints and feedback in a standardized and timely fashion, and protect users’ legitimate rights and interests.Footnote 31

The content discussed above reflects the need for administrative authorities, algorithm developers and other relevant parties to fulfill their corresponding responsibilities under the accountability requirements of algorithms.

3. Responsible AI Based on Platform Governance

The world’s leading AI technology innovators are online platform companies. These online platforms have strong technological innovation capabilities and a wide range of AI application scenarios, and many new technologies and new business models derive from them. Platform governance is therefore also an important aspect of achieving responsible AI. In China, online platform governance is mainly regulated by the E-Commerce Law, competition law, and other relevant laws and regulations. In recent years, provisions on AI governance on online platforms have been adopted or promulgated through legislation or amendment.

A growing number of platforms in online transactions use AI to determine flexible transaction rules. The E-Commerce Law issued in 2018 explicitly defines the platform as a regulated object and requires that e-commerce platform operators follow the principles of openness, fairness, and impartiality in formulating platform service agreements and transaction rules.Footnote 32 Article 18 of the E-Commerce Law also requires e-commerce operators to respect and equally protect the legitimate rights and interests of consumers when providing personalized recommendation services. In addition, the Interim Provisions on the Management of Online Tourism Operation Services, issued in August 2020, provide that online travel operators shall not abuse technical means such as big data analysis to set unfair trading conditions based on tourists’ consumption records and tourism preferences, thereby infringing on the legitimate rights and interests of tourists.Footnote 33 It is thus clear that the E-Commerce Law mainly requires that the application of AI should not undermine the right of consumers and operators within the platform to be treated fairly.

Online platforms often use AI to gain an unfair competitive advantage in the market. China’s Anti-Monopoly Law, Anti-Unfair Competition Law, and other related regulations are also concerned with platform responsibilities in the application of AI.Footnote 34 In terms of horizontal monopoly agreements, the substantive existence of coordination through data, algorithms, platform rules, or other means is regarded as an illegal monopoly. In terms of vertical monopoly agreements, it is likewise regarded as an illegal monopoly to exclude or restrict market competition by directly or indirectly fixing prices through data and algorithms, or by limiting other transaction conditions through technical means, platform rules, data, and algorithms.Footnote 35 The use of big data and algorithms to impose differential prices or other trading conditions, or to impose differentiated standards, rules, or algorithms based on the ability to pay,Footnote 36 consumption preferences, and usage habits of the counterparty, is also considered an illegal monopolistic act of abuse of a dominant market position. Those who use AI to implement such monopolistic acts are mainly large-scale Internet platforms with a dominant market position.

It is an act of unfair competition for a business operator to use data, algorithms, and other technical means to carry out traffic hijacking, interference, or malicious incompatibility in order to prevent or disrupt the normal operation of network products or services lawfully provided by other operators.Footnote 37 Similarly, operators that use data, algorithms, and other technical means to unreasonably provide different transaction information to counterparties under the same transaction conditions (by collecting and analyzing transaction information, the content and time of internet browsing, the brand and value of the terminal equipment used for the transaction, etc.) infringe the counterparties’ right to know, right to choose, and right to fair trade, and disrupt the fair trading order of the market.Footnote 38 Those who use AI to carry out such unfair competition can be both large Internet platforms and participants in other platform markets.

4. Responsible AI under Specific Scenarios

Specific scenarios in the field of AI based on new technologies and new business models often attract special attention. As a result, AI-related regulations in different areas have been emerging. For example, China has special regulations to ensure responsible AI in areas such as labor and employment, facial recognition, autonomous driving, smart investment consultants, deep fakes, online travel, and online litigation. The regulations related to responsible AI in these special areas fall into two categories. The first category mirrors the provisions of the PIPL, the E-Commerce Law, and other relevant regulations in these specific areas, which increases the relevance of norm implementation but does not substantially introduce a new legal regime. The second category establishes additional legal obligations and rights based on the special circumstances of the specific scenarios.

In the labor market, in September 2020 a widely circulated report on Chinese social media platforms about the abuse of algorithms for performance management on online platforms revealed that a series of automated, algorithm-based practices, such as point-rating systems, system ‘upgrades’ that shorten delivery times, and navigation instructions that violate traffic rules, were forcing couriers into high-intensity labor.Footnote 39 In July 2021, the State Administration for Market Regulation (SAMR) and relevant departments jointly issued binding opinions stating that online catering platforms and their third-party partners should set reasonable performance appraisal systems for delivery workers. When developing or adjusting appraisal, reward and punishment, or other systems, or when deciding significant matters involving delivery workers’ direct interests, these should be publicized in advance and the views of delivery workers, trade unions, and other parties fully heard. Platforms should optimize algorithm rules, refrain from using the ‘strictest algorithm’ as the assessment requirement, and instead reasonably determine the number of orders, the online rate, and other assessment factors, for example by taking middle values in the algorithm, and set appropriately flexible delivery time frames.Footnote 40 The Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment) released in August 2021 further provides that algorithm recommendation service providers that provide work scheduling services to workers should establish and improve the algorithms related to platform order distribution, compensation and payment, working hours, and rewards and punishments, and fulfill their obligations to protect workers’ rights and interests.Footnote 41 These requirements are a special case in the field of labor and employment and reflect the principle of inclusiveness of AI, avoiding the risk of polarization.

In the field of facial recognition, an associate professor at the law school of Zhejiang University of Technology sued over the compulsory use of facial recognition equipment as a condition of admission to Hangzhou Safari Park, which was regarded as the first facial recognition rights case in the Chinese courts. Subsequently, a professor at the law school of Tsinghua University published a criticism of the compulsory use of facial recognition equipment as a condition of entry into an apartment complex. A series of facial recognition incidents has raised social concern about the right to choose whether to use facial recognition applications. In December 2020, the Cyberspace Administration of China drafted Security Management Regulations for Commercial Applications of Facial Recognition Technology (Draft), but the draft has not yet been released. Meanwhile, the Supreme People’s Court has published a judicial opinion, the Provisions on Several Issues Concerning the Application of Law in Hearing Civil Cases Relating to the Use of Facial Recognition Technology for Handling Personal Information, which states that, if building managers use facial recognition systems as the only way to verify owners or property users entering or leaving the building, the people’s court shall support owners or property users who disagree with the use of facial recognition and request that other reasonable verification methods be provided, in accordance with the law.Footnote 42 In addition, the Legislative Affairs Commission of the Standing Committee of the National People’s Congress is considering drafting special legal provisions on facial recognition. These moves reflect the requirements of personal biometric information protection, combined with the specific scenarios of facial recognition, to create special rules.

Furthermore, in the field of autonomous driving, the relevant departments of the State Council have set specific and demanding conditions for testing on open roads, including commissioned inspection reports on autonomous driving functions issued by third-party testing agencies, testing programs, and certificates of compulsory traffic accident insurance.Footnote 43 In the field of smart investment consultants, financial institutions should report the main parameters of the AI model and the main logic of asset allocation to the financial supervision and management authorities, set up separate smart management accounts for investors, fully disclose the inherent flaws and risks of using AI algorithms, clarify the transaction process, strengthen record-keeping, and strictly monitor the trading positions, risk limits, transaction types, and pricing authority of smart management accounts. Financial institutions that cause losses to investors through violations of law or mismanagement shall be liable for damages as prescribed by law.Footnote 44 In the field of deep-fake governance, the law requires that no organization or individual shall infringe upon the portrait rights of others by vilification, defacement, or forgery by means of information technology; for the protection of the voices of natural persons, the relevant provisions on the protection of portrait rights apply by reference.Footnote 45 The dispersion of these provisions indicates that the degree of AI application differs across fields, as do the risks and security needs arising from its use. In the absence of comprehensive AI legislation, adopting special binding provisions or guidance to address specific issues is a way to balance the pursuit of development with security values.

IV. Conclusion

Legal professional culture is generally conservative, with the result that laws and regulations always lag behind in responding to innovation and new technologies. In the early stage of AI’s rapid development, risk governance has mainly been implemented through moral codes, ethical guidelines, and technical standards. In contrast to these soft laws, the national legislature can enact mandatory ‘hard laws’ that establish general and binding rules on the scope of application, management system, safety measures, rights and remedies, and legal liabilities of AI technologies. China has issued several soft law governance tools for responsible AI in different sectors but does not yet have comprehensive AI legislation. However, China is still working toward comprehensive AI legislation, as evidenced by President Xi Jinping’s statements on AI legislation, the requirements in the national development plan for a new generation of AI, the attention paid to the topic by National People’s Congress deputies, and local legislative initiatives, as represented by Shenzhen.

Law begins to fall out of date from the moment it is enacted. However, this does not mean that the law can do nothing about problems that arise after its promulgation. In the codified tradition, the applicability of legal documents is often scalable, which enables new technologies and new business models to find corresponding applicable provisions. The Civil Code, the E-Commerce Law, the Product Quality Law, and other relevant legislation already in force can serve as legal requirements for developing responsible AI. In addition, new laws and other binding documents enacted in recent years provide a substantial basis for AI governance, and the effective and draft documents released in 2021 show that responsible AI is increasingly a concrete goal to be enforced. Looking to the future, two legislative routes are open to countries and regions including China, the EU, and the United States. One option is a foresighted legal design mindset that lays out an institutional track for developing emerging technologies as quickly as possible. Under this option, once the basic application pattern of AI technology has formed, lawmakers summarize and predict the various risks of AI based on the existing situation and on understanding obtained by reasoning. The other option is a ‘wait and see’ approach, on the argument that it is still too early for lawmakers to see just how the technology will affect citizens. Under this option, lawmakers pay more attention to the positive value of emerging technology development, and the associated risks are identified, adjusted, regulated, and healed by the free competition mechanism of the market itself.

Judging from current legislative dynamics in China, improving regulations related to data, algorithms, platforms, and specific scenarios will provide a broad and effective basis for AI governance. The development of comprehensive AI legislation has not yet been formally included in the NPC Standing Committee's short-term work plan, but this does not prevent local legislatures from exploring the possibility of comprehensive legislation. If comprehensive AI legislation is to be enacted, its key elements will be to catalogue the types of AI risks, to design mechanisms for identifying those risks, and to construct mechanisms for resolving them. The EU's proposal for an AI Act, released in 2021, has also been widely followed in China, and once the proposal is adopted in Europe, similar legislation is likely to be enacted in China shortly thereafter.

Footnotes

1 State Council, The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence (The State Council of the People’s Republic of China, 8 July 2017) www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm.

2 Ministry of Science and Technology, ‘Notice of the Project Proposal of Application for the National Key Research and Development Plan’ (2001).

3 At the ninth collective study session of the Political Bureau of the CPC Central Committee, Xi Jinping stressed the importance of strengthening leadership, careful planning, clear tasks, and solid foundations to promote the healthy development of a new generation of AI in China; Xinhua News Agency, ‘Xi Jinping Presided Over the Ninth Collective Study of the Political Bureau of the CPC Central Committee and Gave a Speech’ (The State Council, The People’s Republic of China, 31 October 2018) www.gov.cn/xinwen/2018-10/31/content_5336251.htm (hereafter Xi Jinping, ‘Ninth Study CPC Central Committee’).

4 In November 2013, the Third Plenary Session of the 18th CPC Central Committee identified ‘promoting the modernization of the national governance system and governance capacity’ as the overall goal of comprehensively deepening reform, China.org.cn, ‘Communiqué of the Third Plenary Session of the 18th Central Committee of the Communist Party of China’ (China.org.cn, 15 January 2014) www.china.org.cn/china/third_plenary_session/2014-01/15/content_31203056.htm. On 31 October 2019, the Fourth Plenary Session of the 19th Central Committee of the Communist Party of China adopted the ‘Decision of the Central Committee of the Communist Party of China on Several Major Issues Concerning Adhering to and Improving the Socialist System with Chinese Characteristics and Promoting the Modernization of the National Governance System and Governance Capacity’, which further set out requirements for national governance reform, Online Party School, ‘Communiqué of the Fourth Plenary Session of the 19th Central Committee of the Communist Party of China’ (Liaoning Urban and Rural Construction Planning Design Institute Co. LTD, 5 December 2019) http://lnupd.com/english/article/shows/377.

6 The Committee on Professional Governance of the New Generation of Artificial Intelligence, ‘Governance Principles of the New Generation of Artificial Intelligence – Developing Responsible AI’ (Catch the Wind, 17 June 2019) www.ucozon.com/news/59733737.html.

7 Article 10 of the Standardization Law of the People’s Republic of China (adopted 1988, effective 1989) stipulates that mandatory national standards shall be developed to address technical requirements for ensuring people’s health and the security of their lives and property, safeguarding national and eco-environmental security, and meeting the basic needs of economic and social management.

8 Standardization Administration of China, Cyberspace Administration of China, and other relevant departments, ‘Guide to the Construction of National New Generation AI Standard System’ (2020) 24–25.

9 National Information Security Standardization Technical Committee, ‘Guideline for Cyber Security Standards: Practice-Guideline for Ethics of Artificial Intelligence (Draft)’ (2020).

10 China Institute of Electronic Technology Standardization, ‘White Paper on Standardization of Artificial Intelligence (version 2021)’ (July 2021).

11 Chinese National People’s Congress, ‘The 2020 legislative work plan of the Standing Committee of the National People’s Congress (NPC)’ (The National People’s Congress of the People’s Republic of China, 20 June 2020) www.npc.gov.cn/npc/c30834/202006/b46fd4cbdbbb4b8faa9487da9e76e5f6.shtml.

12 Xi Jinping, ‘Ninth Study CPC Central Committee’ (Footnote n 3).

13 Li Zhanshu chaired a special study session at the Chairman’s meeting of the NPC Standing Committee and delivered a speech: Xinhua, ‘The Members of the NPC Standing Committee Chairman’s Meeting Conducted Special Studies and Li Zhanshu Chaired and Delivered a Speech’ (The National People’s Congress of the People’s Republic of China, 24 November 2018) www.npc.gov.cn/npc/c238/201811/e3883fb5618e4a2bbefa5d170fe7b02a.shtml.

14 Zhan Haifeng, Committee Members Discuss the Development of Artificial Intelligence: Building the Future Legal System of AI (6th ed. 2019).

15 Article 18 of the E-Commerce Law (promulgated 31 August 2018, effective 1 January 2019): when providing search results for commodities or services to a consumer based on the consumer’s hobbies, consumption habits, or other personal traits, the e-commerce business shall also provide the consumer with options not targeting those identifiable traits and shall respect and equally protect the lawful rights and interests of consumers.

16 Supreme People’s Court, Law Interpretation [2021] No. 15, Provisions on Several Issues Concerning the Application of Law in Hearing Civil Cases Related to the Use of Facial Recognition Technology for Handling Personal Information (Judgement of 8 June 2021, in force on 1 August 2021) (hereafter Supreme People’s Court, Provisions on Facial Recognition).

17 Weixing Shen and Yun Liu, ‘New Paradigm of Legal Research: Connotation, Category and Method of Computational Law’ (2020) 5 Chinese Journal of Law 323.

18 Notice of the State Council on Printing and Distributing the Action Platform for Promoting the Development of Big Data, Document No GF [2015] No 50, issued by the State Council on 31 August 2015.

19 Article 1032 and Article 1034 of the Civil Code of the People’s Republic of China (Adopted at the Third Session of the Thirteenth National People’s Congress on 28 May 2020), Order No 45 of the President of the People’s Republic of China (hereafter Civil Code of the People’s Republic of China).

20 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119/1.

21 Article 18 of the E-Commerce Law.

22 Article 73 of the Personal Information Protection Law (effective 1 November 2021).

23 Article 2 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

24 Article 4 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

25 Article 6 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

26 Article 14 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

27 Article 66 of the Regulations of Shenzhen Special Economic Zone on the Promotion of Artificial Intelligence Industry (Draft for Soliciting Public Comment) (14 July 2021).

28 The Ministry of Industry and Information Technology, the Ministry of Public Security and the Ministry of Transport, ‘Specifications for Road Test Management of Intelligent Networked Vehicles (for Trial Implementation)’ (3 April 2018) and The Ministry of Industry and Information Technology, the Ministry of Public Security and the Ministry of Transport, ‘Regulations on the Management of Intelligent Networked Vehicles in Shenzhen Special Economic Zone (Draft for Soliciting Public Comment)’ (23 March 2021).

29 Article 23 of Guiding Opinions on Regulating Asset Management Business of Financial Institutions (27 April 2018, revised 31 July 2020) No 16 [2018] People’s Bank of China (hereafter Guiding Opinions on Regulating Asset Management).

30 Article 20 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

31 Article 26 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

32 Article 32 of the E-Commerce Law.

33 Article 15 of the Interim Provisions on the Management of Online Tourism Operation Services, Order No 4 of the Ministry of Culture and Tourism of the People’s Republic of China (2020).

34 Article 5 of the Anti-Monopoly Guidelines on Platform Economy.

35 Article 7 of the Anti-Monopoly Guidelines on Platform Economy.

36 Article 17 of the Anti-Monopoly Guidelines on Platform Economy.

37 Article 13 of the Notice from the State Administration for Market Regulation of the Provisions on Prohibited Acts of Unfair Competition Online (Draft for Soliciting Public Comment).

38 Article 21 of the Notice from the State Administration for Market Regulation of the Provisions on Prohibited Acts of Unfair Competition Online (Draft for Soliciting Public Comment).

39 Lai Youxuan, ‘Deliveries, Stuck in the System’ (People, September 2020) https://epaper.gmw.cn/wzb/html/2020-09/12/nw.D110000wzb_20200912_1-01.htm.

40 Guidance on the Implementation of The Responsibility of Online Catering Platforms to Effectively Safeguard the Rights and Interests of Food Delivery Workers, issued by SAMR on 16 July 2021.

41 Article 17 of Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

42 Article 10 of Supreme People’s Court, Provisions on Facial Recognition (Footnote n 16).

43 Article 9 of the ‘Specifications for Road Test Management of Intelligent Networked Vehicles (for Trial Implementation)’ (Footnote n 28).

44 Guiding Opinions on Regulating Asset Management (Footnote n 29).

45 Article 1019 and Article 1023 of the Civil Code of the People’s Republic of China (Footnote n 19).

Table 9.1. Individuals’ Rights in Personal Information Processing Activities

Table 9.2. Obligations of Data Processors

Table 9.3. Risk Management System of Data Governance

Figure 9.1. A development process of AI application
