
Part VI - Responsible Corporate Governance of AI Systems

Published online by Cambridge University Press:  28 October 2022

Silja Voeneky
Affiliation:
Albert-Ludwigs-Universität Freiburg, Germany
Philipp Kellmeyer
Affiliation:
Medical Center, Albert-Ludwigs-Universität Freiburg, Germany
Oliver Mueller
Affiliation:
Albert-Ludwigs-Universität Freiburg, Germany
Wolfram Burgard
Affiliation:
Technische Universität Nürnberg

The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, pp. 329–376
Publisher: Cambridge University Press
Print publication year: 2022
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

19 From Corporate Governance to Algorithm Governance: Artificial Intelligence as a Challenge for Corporations and Their Executives

Jan Lieder
I. Introduction

Every generation has its topic: The topic of our generation is digitalization. At present, we are all witnessing the so-called fourth industrial revolution ('Industry 4.0').Footnote 1 This revolution is characterized by the use of a whole range of new digital technologies that can be combined in a variety of ways. Keywords are self-learning algorithms, Artificial Intelligence (AI), autonomous systems, Big Data, biometrics, cloud computing, the Internet of Things, the mobile internet, robotics, and social media.Footnote 2

The use of digital technologies challenges the law and those applying it. The range of questions and problems is tremendously broad.Footnote 3 Widely discussed examples are self-driving cars,Footnote 4 the use of digital technologies in corporate finance, credit financing and credit protection,Footnote 5 the digital estate,Footnote 6 or online dispute resolution.Footnote 7 In fact, digital technologies challenge the entire national legal system including public and criminal law as well as EU and international law. Some even say we may face ‘the beginning of the end for the law’.Footnote 8 In fact, this is not the end, but rather the time for a digital initiative. This chapter focuses on the changes that AI brings about in corporate law and corporate governance, especially in terms of the challenges for corporations and their executives.

From a conceptual perspective, AI applications will have a major impact on corporate law in general and corporate governance in particular. In practice, AI poses a tremendous challenge for corporations and their executives. As algorithms have already entered the boardroom, lawmakers must consider legally recognizing e-persons as directors and managers. The applicable law must deal with effects of AI on corporate duties of boards and their liabilities. The interdependencies of AI, delegation of leadership tasks, and the business judgement rule as a safe harbor for executives are of particular importance. A further issue to be addressed is how AI will change the decision-making process in corporations as a whole. This topic is closely connected with the board’s duties in Big Data and Data Governance as well as the qualifications and responsibilities of directors and managers.

By referring to AI, I mean information technology systems that reproduce or approximate various cognitive abilities of humans.Footnote 9 In the same breath, we need to distinguish between strong AI and weak AI. Currently, strong AI does not exist.Footnote 10 There is no system really imitating a human being, such as a so-called superintelligence. Only weak AI is applied today. These are single technologies for smart human–machine interactions, such as machine learning or deep learning. Weak AI focuses on the solution of specific application problems based on the methods from math and computer science, whereby the systems are capable of self-optimization.Footnote 11

By referring to corporate governance, I mean a system by which companies are directed and controlled.Footnote 12 In continental European jurisdictions, such as Germany, a dual board structure is the prevailing system with a management board running the day-to-day business of the firm and a supervisory board monitoring the business decisions of the management board. In Anglo-American jurisdictions, such as the United States (US) and the United Kingdom (UK), the two functions of management and supervision are combined within one unitary board – the board of directors.Footnote 13

II. Algorithms As Directors

The first question is, “Could and should algorithms act as directors?” In 2014, newspapers reported that a venture capital firm had just appointed an algorithm to its board of directors. The Hong Kong-based VC firm Deep Knowledge Ventures was said to have appointed an algorithm called Vital (an abbreviation of Validating Investment Tool for Advancing Life Sciences) to serve as a director with full voting rights and full decision-making power over corporate measures.Footnote 14 In fact, Vital only had observer and adviser status with regard to the board members, who are all natural persons.Footnote 15

Under German law, according to sections 76(3) and 100(1)(1) AktG,Footnote 16 the members of the management board and the supervisory board must be natural persons with full legal capacity. Not even corporations are allowed to serve as board members. That means that, in order to appoint algorithms as directors, the law would have to be changed.Footnote 17 In principle, the lawmaker could legally recognize e-persons as directors. However, the lawmaker should not do so, because there is a reason for the exclusion of legal persons and algorithms under German law: both lack personal liability and personal accountability for the management and the supervision of the company.Footnote 18

Nevertheless, the European Parliament enacted a resolution with recommendations to the Commission on Civil Law Rules on Robotics, and suggested therein

creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.Footnote 19

The most fundamental requirement for legally recognizing an e-person would be its own liability – based either on an ownership fund or on mandatory liability insurance. If corporations appoint AI entities as directors (or otherwise deploy AI), they should be strictly liable for damages caused by AI applications in order to mitigate the particular challenges and potential risks of AI.Footnote 20 This is because strict liability would not only delegate the risk assessment and thus control the level of care and activity, but would also create an incentive for further developing this technology.Footnote 21 At the same time, creditors of the company should be protected by compulsory liability insurance, whereas piercing the corporate veil, that is, a personal liability of the shareholders, must remain a rare exception.Footnote 22 However, at an international level, regulatory competition makes it difficult to guarantee comparable standards. Harmonization can only be expected (if ever) in supranational legal systems, such as the European Union.Footnote 23 In this context, it is noteworthy that the EU Commission's White Paper on AI, presented in 2020, does not address the question of the legal status of algorithms at all.Footnote 24

However, even if we were to establish such a liability safeguard, there is no self-interested action of an algorithm as long as there is no strong AI. True, circumstances may change in the future due to technological progress. However, there is a long and winding road to the notorious superintelligence.Footnote 25 Conversely, weak AI only carries out actions in the third-party interest of people or organizations, and is currently not in a position to make its own value decisions and judgemental considerations.Footnote 26 In the end, current algorithms are nothing more than digital slaves, albeit slaves with superhuman abilities. In addition, the currently applicable incentive system of corporate law and governance would have to be adapted to AI directors, because – unlike human directors – duties of loyalty can hardly be applied to them, but rather they decide according to algorithmic models.Footnote 27 At present, only humans have original creative power, only they are capable of making decisions and acting in the true sense of the word.Footnote 28

III. Management Board

Given the current limitations of AI, we will have to continue to get by with human directors for the next few decades. Although algorithms do not currently appear suitable for making independent corporate decisions, AI can nonetheless support human directors in their management and monitoring tasks. AI is already used in practice to analyze and forecast the financial development of a company, but also to identify the need for optimization in an entrepreneurial value chain.Footnote 29 In addition, AI applications are used in the run-up to mergers and acquisitions (M&A) transactions,Footnote 30 notably as part of due diligence, in order to simplify particularly labor-intensive processes when checking documents. Algorithms are also able to recognize unusual contract clauses and to summarize essential parameters of contracts, even to create contract templates themselves.Footnote 31 Further examples of the use of AI applications are cybersecurityFootnote 32 and compliance management systems.Footnote 33

1. Legal Framework

With regard to the German corporate governance system, the management board is responsible for running the company.Footnote 34 Consequently, the management board also decides on the overall corporate strategy, the degree of digitalization and the use of AI applications.Footnote 35 The supervisory board monitors the business decisions of the management board, decides on the approval of particularly important measures,Footnote 36 as well as on the appointment and removal of the management board members;Footnote 37 whereas the shareholders meeting does not determine a company’s digitalization structures.Footnote 38

2. AI Related Duties

In principle, the use of AI neither constitutes a violation of corporate law or the articles of association,Footnote 39 nor is it an expression of bad corporate governance. Even if the use of AI is associated with risks, it is difficult to advise companies – as the safest option – to forego it completely.Footnote 40 Instead, the use of AI places special demands on the management board members.

a. General Responsibilities

Managers must have a fundamental understanding of the relevant AI applications, of their potential, suitability, and risks. However, the board members do not need in-depth knowledge of the detailed functioning of a certain AI application. In particular, neither the knowledge of an IT expert can be demanded, nor a detailed examination of the material correctness of the decision.Footnote 41 Rather, they need to understand the scope and limits of an application and its possible results and outcomes, in order to perform plausibility checks so that incorrect decisions can be prevented quickly and effectively.Footnote 42 The management board has to ensure, through test runs, the functionality of the application with regard to the concrete fulfilment of tasks in the specific company environment.Footnote 43 If, according to the specific nature of the AI application, there is the possibility of an adjustment to the concrete circumstances of the company, for example, with regard to the firm's risk profile or statutory provisions, then the management board is obliged to carry out such an adjustment.Footnote 44 During the use of the AI, the board of directors must continuously evaluate and monitor the working methods, information procurement, and information evaluation as well as the results achieved.

The management board must implement a system that eliminates, as far as possible, the risks and false results that arise from the use of AI. This system must assure that anyone who uses AI knows the respective scope of possible results of an application so that it can be determined whether a concrete result is still within the possible range of results. However, that can hardly be determined abstractly, but requires a close look at the concrete AI application. Furthermore, the market standard is to be included in the analysis. If all companies in a certain industry use certain AI applications that are considered safe and effective, then an application by other companies will rarely prove to breach a management board’s duty of care.
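To make the idea of a "range of possible results" concrete, such a plausibility check might be sketched as follows. This is a purely illustrative example; the class, bounds, and figures are hypothetical and not drawn from the chapter or from any actual governance system.

```python
"""Illustrative sketch: flagging AI outputs that fall outside the
pre-defined range of possible results, so they can be escalated
for human review. All names and thresholds are hypothetical."""

from dataclasses import dataclass


@dataclass
class PlausibilityCheck:
    """Range of results deemed possible for a given AI application."""
    lower_bound: float
    upper_bound: float

    def is_plausible(self, result: float) -> bool:
        # A result outside the known range is not accepted automatically;
        # it must be reviewed by the responsible human decision-maker.
        return self.lower_bound <= result <= self.upper_bound


# Example: a hypothetical revenue forecast should stay within bounds
# derived from the company's historical data.
check = PlausibilityCheck(lower_bound=0.0, upper_bound=5_000_000.0)
assert check.is_plausible(1_200_000.0)    # within range: accept
assert not check.is_plausible(-50_000.0)  # negative forecast: escalate
```

The point of the sketch is simply that the acceptable range must be defined before the AI is used, so that any individual result can be tested against it mechanically.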

Under these conditions, the management board is allowed to delegate decisions and tasks to an AI application.Footnote 45 This is not contradicted by the fact that algorithms lack legal capacity, because in this context the board’s own duties are decisive.Footnote 46 In any event, a blanket self-commitment to the results of an AI application is incompatible with the management responsibility and personal accountability of the board members.Footnote 47 At all times, the applied AI must be manageable and controllable in order to ensure that no human loss of control occurs and the decision-making process is comprehensible. The person responsible for applying AI in a certain corporate setting must always be able to operate the off-switch. In normative terms, this requirement is derived from section 91(2) AktG, which obliges the management board to take suitable measures to identify, at an early stage, developments that could jeopardize the continued existence of the company.Footnote 48 In addition, the application must be protected against external attacks, and emergency precautions must be implemented in the event of a technical malfunction.Footnote 49

b. Delegation of Responsibility

The board may delegate the responsibility for applying AI to subordinate employees, but it is required to carefully select, instruct, and supervise the delegate.Footnote 50 Under the prevailing view, however, core tasks cannot be delegated, as board members are not allowed to evade their leadership responsibility.Footnote 51 Such non-delegable management tasks of the management board include basic measures with regard to the strategic direction, business policy, and organization of the company.Footnote 52 The decision as to whether and to what extent AI should be used in the company is also a management measure that cannot be delegated under the prevailing view.Footnote 53 Only the preparation of decisions by auxiliary persons is permissible, as long as the board of directors makes the decision personally and on its own responsibility. In this respect, the board is responsible for the selection of AI use and the application of AI in general. The board has to provide the necessary information, must exclude conflicts of interest, and has to perform plausibility checks of the results obtained. Furthermore, the managers must conduct ongoing monitoring and ensure that the assigned tasks are properly performed.

c. Data Governance

AI relies on extensive data sets (Big Data). In this respect, the management board is responsible for a wide scope and high quality of the available data, for the suitability and training of AI applications, and for the coordination of the model predictions with the objectives of the respective company.Footnote 54 In addition, the board of directors must observe data protection law limitsFootnote 55 and must pursue a non-discriminatory procedure.Footnote 56 If AI use is not in line with these regulations or other mandatory provisions, the management board violates the duty of legality.Footnote 57 In this case, the management board does not benefit from the liability privilege of the business judgement rule.Footnote 58

Apart from that, the management board has an entrepreneurial discretion with regard to the proper organization of the company’s internal knowledge organization.Footnote 59 The starting point is the management board’s duty to ensure a legal, statutory, and appropriate organizational structure.Footnote 60 The specific scope and content of the obligation to organize knowledge depends largely on the type, size, and industry of the company and its resources.Footnote 61 However, if, according to these principles, there is a breach of the obligation to store, forward, and actually query information, then the company will be considered to have acted with knowledge or negligent ignorance under German law.Footnote 62

d. Management Liability

If managers violate these obligations (and do not benefit from the liability privilege of the business judgement rule)Footnote 63, they can be held liable for damages to the company.Footnote 64 This applies in particular in the event of an inadmissible or inadequate delegation.Footnote 65 In order to mitigate the liability risk for management board members, they have to ensure that the whole framework of AI usage in terms of specific applications, competences, and responsibilities as well as the AI-related flow of information within the company is well designed and documented in detail. Conversely, board members are not liable for individual algorithmic errors as long as (1) the algorithm works reliably, (2) the algorithm does not make unlawful decisions, (3) there are no conflicts of interest, and (4) the AI’s functioning is fundamentally overseen and properly documented.Footnote 66

Comprehensive documentation of the circumstances that prompted the management board to use a certain AI, and of the specific circumstances of its application, reduces the risk of being sued for damages by the company. It ensures, in particular, that the members of the management board can handle the burden of proof incumbent on them according to section 93(2)(2) AktG. The more clearly the written documents trace the decision-making process regarding the use of AI, the easier it will be to discharge that burden.Footnote 67 This kind of documentation by the management board is to be distinguished from the general documentation requirements discussed at the European and national level for the development of AI models and for access authorization to this documentation, the details of which are beyond the scope of this chapter.Footnote 68
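As an illustration of what such documentation might capture in structured form, consider the following sketch. The record fields and sample values are hypothetical; they are not requirements derived from section 93(2)(2) AktG or from the chapter.

```python
"""Illustrative sketch: a structured record documenting an AI-assisted
board decision, so that the decision-making process can later be
reconstructed from written documents. All fields and values are
hypothetical examples."""

from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIDecisionRecord:
    decision: str                 # the entrepreneurial decision taken
    ai_application: str           # which AI application was used, and how
    information_basis: str        # data and reports the board relied on
    plausibility_check: str       # how the AI result was validated
    responsible_members: list[str] = field(default_factory=list)
    decided_on: date = field(default_factory=date.today)


# Hypothetical example entry.
record = AIDecisionRecord(
    decision="Approve acquisition target shortlist",
    ai_application="Due-diligence document screening tool",
    information_basis="Data room documents, auditor reports",
    plausibility_check="Sample of flagged clauses reviewed by counsel",
    responsible_members=["CEO", "CFO"],
)
```

The design point is that every field corresponds to something the board would later need to prove: what was decided, on what information, and how the AI result was checked.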

e. Composition of the Management Board

In order to cope with the challenges that the use of AI applications causes, the structure and composition of the management and the board has already changed significantly. That manifests itself in the establishment of new management positions, such as a Chief Information Officer (CIO)Footnote 69 or a Chief Digital Officer (CDO).Footnote 70 Almost half of the 40 largest German companies have such a position at board level.Footnote 71

In addition, soft factors are becoming increasingly important in corporate management. Consider reputational damage, which today is a tangible economic factor for a company.Footnote 72 Under the term Corporate Digital Responsibility (CDR), specific responsibilities are developing for the use of AI and other digital innovations.Footnote 73 For example, Deutsche Telekom AG has enacted nine guidelines for responsible AI in a corporate setting. SAP SE established an advisory board for responsible AI consisting of experts from academia, politics, and industry. These developments, of course, have an important influence on the overall knowledge attribution within the company and a corporate group. AI and Big Data make information available faster and facilitate the decision-making process at board level. Therefore, the management board must examine whether the absence of any AI application in the information-gathering and decision-making process is in the best interest of the company. However, a duty to use AI applications exists only in exceptional cases and depends on the market standard in the respective industry. The greater the amount of data to be managed and the more complex and calculation-intensive the decisions in question, the more likely it is that the management board will be obliged to use AI.Footnote 74

3. Business Judgement Rule

This point is closely connected with the application of the business judgement rule as a safe harbor for AI use. Under the general concept of the business judgement rule, well known in many jurisdictionsFootnote 75 and enshrined in Germany in section 93(1)(2) AktG, a director cannot be held liable for an entrepreneurial decision if there is no conflict of interest and she had good reason to assume she was acting on the basis of adequate information and for the benefit of the company.

a. Adequate Information

The requirement of adequate information depends significantly on the ability to gather and analyze information. Taking into account all the circumstances of the specific individual case, the board of directors has a considerable amount of leeway to judge which information is to be obtained from an economic point of view in the time available and to be included in the decision-making process. Neither a comprehensive nor the best possible, but only an appropriate information basis is necessary.Footnote 76 In addition, the appropriateness is to be assessed from the subjective perspective of the board members (‘could reasonably assume’), so that a court is effectively prevented, during the subsequent review, from substituting its own understanding of appropriateness for the subjective assessment of the decision-maker.Footnote 77 In the context of litigation, a plausibility check based on justifiability is decisive.Footnote 78

In general, the type, size, purpose, and organization of the company as well as the availability of a functional AI and the data required for operation are relevant for answering the question of the extent to which AI must be used in the context of the decision-making preparation based on information. The cost of the AI system and the proportionality of the information procurement must also be taken into account.Footnote 79 If there is a great amount of data to be managed and a complex and calculation-intensive decision to be made, AI and Big Data applications are of major importance and the members of the management board will hardly be able to justify not using AI.Footnote 80 Conversely, the use of AI to obtain information is definitely not objectionable.Footnote 81

b. Benefit of the Company

Furthermore, the board of directors must reasonably assume that it is acting in the best interest of the company when using AI. This criterion is to be assessed from an ex ante perspective, not ex post.Footnote 82 According to the mixed-subjective standard, it depends largely on the concrete perception of the acting board members at the time of the entrepreneurial decision.Footnote 83 In principle, the board of directors is free to organize the operation of the company according to its own ideas, as long as it stays within the limits of the corporation's best interest,Footnote 84 which are informed solely by the company's continued existence and its long-term, sustainable profitability.Footnote 85 Only when the board members act in a grossly negligent manner or take irresponsible risks do they act outside the company's best interest.Footnote 86 Taking all these aspects into account, the criterion of acceptability proves to be a suitable benchmark.Footnote 87

In the specific decision-making process, all advantages and disadvantages of using or delegating the decision to use AI applications must be included and carefully weighed against one another for the benefit of the company. In this context, however, it cannot simply be seen as unacceptable and contrary to the welfare of the company that the decisions made by or with the support of AI can no longer be understood from a purely human perspective.Footnote 88 On the one hand, human decisions that require a certain originality and creativity cannot always be traced down to the last detail. On the other hand, one of the major potentials of AI is to harness particularly creative and original ideas in the area of corporate management. AI can, therefore, be used as long as its use is not associated with unacceptable risks. The business judgement rule allows the management board to consciously take at least justifiable risks in the best interest of the company.

However, the management board may also conclude that applying AI is just too much of a risk for the existence or the profitability of the firm and therefore may refrain from it without taking a liability risk under section 93(1)(2) AktG.Footnote 89 The prerequisite for this is that the board performs a conscious act of decision-making.Footnote 90 Otherwise, acting in good faith for the benefit of the company is ruled out a priori. This decision can also consist of a conscious toleration or omission.Footnote 91 The same applies to intuitive action,Footnote 92 even if in this case the other requirements of section 93(1)(2) AktG must be subjected to a particularly thorough examination.Footnote 93 Furthermore, in addition to the action taken, there must have been another alternative,Footnote 94 even if only to omit the action taken. Even if the decision makers submit themselves to an actual or supposed necessity,Footnote 95 they could at least hypothetically have omitted the action. Apart from that, the decision does not need to manifest itself in a formal act of forming a will; in particular, a resolution by the collective body is not a prerequisite. Conversely, with a view to a later (judicial) dispute, it makes sense to sufficiently document the decision.Footnote 96

c. Freedom from Conflicts of Interest

The executive board must make the decision for or against the use of AI free of extraneous influences and special interests.Footnote 97 The business judgement rule does not apply if the board members are not solely guided by the points mentioned above, but rather pursue other, namely self-interested, goals. If the use of AI is not based on inappropriate interests and the board of directors has not influenced the parameters specified for the AI in a self-interested manner, the use of AI applications can contribute to a reduction of transaction costs from an economic point of view and mitigate the principal-agent conflict, as the interest of the firm will be aligned with decisions made by AI.Footnote 98 That is, AI can make the decision-making process (more) objective.Footnote 99 However, in order to achieve an actually objective result, the quality of the data used is decisive. If the data set itself is characterized by discriminatory or incorrect information, the result will also suffer from those weaknesses (‘garbage in – garbage out’). Moreover, if the management board is in charge of developing AI applications inside the firm, it may have an interest in choosing experts and technology designs that favor its own benefit rather than the best interest of the company. This development could aggravate the principal-agent conflict within the large public firm.Footnote 100
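The ‘garbage in – garbage out’ point can be illustrated with a deliberately simple numerical sketch: a model that merely averages its input data faithfully reproduces any error contained in that data. The figures below are hypothetical.

```python
"""Illustrative sketch of 'garbage in - garbage out': a simple
average-based score reproduces errors present in its input data.
All data values are hypothetical."""


def average_score(records: list[float]) -> float:
    """A trivial 'model': the mean of its input records."""
    return sum(records) / len(records)


clean = [0.70, 0.72, 0.68]          # plausible, well-curated inputs
corrupted = clean + [9.99]          # one erroneous record slips in

# With clean data the output is plausible; a single bad record
# visibly distorts the result, because the model has no way to
# distinguish good data from bad.
assert round(average_score(clean), 2) == 0.70
assert average_score(corrupted) > 1.0
```

The same mechanism applies, in more complex form, to statistical models trained on discriminatory or incorrect data: the output inherits the defects of the input.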

IV. Supervisory Board

For this reason, it will also be of fundamental importance in the future to have an institutional monitoring body in the form of the supervisory board, which enforces the interests of the company as an internal corporate governance system. With regard to the monitoring function, there is a distinction to be made as to whether the supervisory board makes use of AI itself while monitoring and advising the management of the company, or whether the supervisory board is monitoring and advising with regard to the use of AI by the management board.

1. Use of AI by the Supervisory Board Itself

As the members of the management board and of the supervisory board have to comply with the same basic standards of care and responsibility under sections 116(1) and 93(1)(1) AktG, the management board’s AI related dutiesFootnote 101 essentially apply to the supervisory board accordingly. If the supervisory board is making an entrepreneurial decision, it can also rely on the business judgement rule.Footnote 102 This is true, for example, for the granting of approval for transactions requiring approval under section 111(4)(2) AktG, with regard to M&A transactions.Footnote 103 Furthermore, the supervisory board may use AI based personality and fitness checks when it appoints and dismisses management board members.Footnote 104 AI applications can help the supervisory board to structure the remuneration of the management board appropriately. They can also be useful for the supervisory board when auditing the accounting and in the compliance area, because they are able to analyze large amounts of data and uncover inconsistencies.Footnote 105

2. Monitoring of the Use of AI by the Management Board

When it comes to monitoring and advice on the use of AI by the management board, the supervisory board has to fulfil its general monitoring obligation under section 111(1) AktG. The starting point is the reporting from the management board under section 90 AktG.Footnote 106 Strategic decisions on the guiding principles of AI use, in particular, are part of the intended business policy or at least another fundamental matter regarding the future conduct of the company's business according to section 90(1)(1) AktG. Furthermore, the usage of certain AI applications may qualify as transactions that may have a material effect upon the profitability or liquidity of the company under section 90(1)(4) AktG. In this regard, the management board does not need to derive and trace the decision-making process of the AI in detail. Rather, it is sufficient for the management board to report to the supervisory board on the result found and on how it specifically used the AI, monitored its functions, and checked the plausibility of the result.Footnote 107 In addition, pursuant to section 90(3) AktG, the supervisory board may require at any time a report from the management board on the affairs of the company and on the company's legal and business relationships with affiliated enterprises. This report may also deal with AI-related developments at the management board level and in other entities of a corporate group.

Finally, the supervisory board may inspect and examine the books and records of the company according to section 111(2)(1) AktG. It is undisputed that this also includes electronic recordings,Footnote 108 which the supervisory board can examine using AI in the form of a big data analysis.Footnote 109 Conversely, the supervisory board does not need to conduct its own inquiries using its information authority without sufficient cause or in the event of regular and orderly business development.Footnote 110 Contrary to what the literature suggests,Footnote 111 this applies even in the event that the supervisory body has unhindered access to the company’s internal management information system.Footnote 112 The opposing view not only disregards the principle of a trusting cooperation between the management board and the supervisory board, but also surpasses the demands on the supervisory board members in terms of time.Footnote 113

With a view to the monitoring standard, the supervisory board has to assess the management board’s overall strategy as regards AI applications and especially systemic risks that result from the usage of AI in the company. This also comprises the monitoring of the AI-based management and organizational structure of the company.Footnote 114 If it recognizes violations of AI use by the management board, the supervisory board has to intervene using the general means of action. This may start with giving advice to the management board on how to optimize the AI strategy. Furthermore, the supervisory board may establish an approval right with regard to the overall AI-based management structure. In addition, the supervisory board may draw personnel conclusions and install an AI expert on the management board level such as a CIO or CDO.Footnote 115

V. Conclusion

AI is not the end of corporate governance, as some authors predicted.Footnote 116 Rather, AI has the potential to change the overall corporate governance system significantly. As this chapter has shown, AI can improve corporate governance structures, especially when it comes to handling big data sets. At the same time, it poses challenges to the corporate management system, which must be met by carefully adapting the governance framework.Footnote 117 However, there is currently no need for strict AI regulation with a specific focus on corporations.Footnote 118 Rather, we see a creeping change from corporate governance to algorithm governance that has the potential to enhance, but also the risk of destabilizing, the current system. What we really need is the disclosure of information about a company’s practices with regard to AI application, organization, and oversight, as well as its potentials and risks.Footnote 119 This kind of transparency would help to raise awareness and to enhance the overall algorithm governance system. For that purpose, the corporate governance report that many jurisdictions already require, such as the US,Footnote 120 the UK,Footnote 121 and Germany,Footnote 122 should be supplemented with additional explanations on AI.Footnote 123

In this report, the management board and the supervisory board should set out their overall strategy with regard to the use, organization, and monitoring of AI applications. This specifically relates to the responsibilities, competencies, and protective measures they have established to prevent damage to the corporation. In addition, the boards should be obliged to report on their ethical guidelines for a trustworthy use of AI.Footnote 124 In this regard, they may rely on the proposals drawn up at an international level. Of particular importance in this respect are the principles of the European Commission in its communication on ‘Building Trust in Human-Centric Artificial Intelligence’,Footnote 125 as well as the ‘Principles on Artificial Intelligence’ published by the OECD.Footnote 126 These principles require users to comply with organizational precautions in order to prevent incorrect AI decisions, to provide a minimum of technical proficiency, and to ensure the preservation of human final decision-making authority. In addition, they call for the safeguarding of individual rights, such as privacy, diversity, non-discrimination, and fairness, and for an orientation of AI toward the common good, including sustainability, ecological responsibility, and overall societal and social impact. Even if these principles are not legally binding, a reporting obligation requires the management board and supervisory board to engage with the corresponding questions and to explain their position on them. It will make a difference, and may lead to improvements, if companies and their executives are aware of the importance of these principles in dealing with responsible AI.

20 Autonomization and Antitrust On the Construal of the Cartel Prohibition in the Light of Algorithmic Collusion

Stefan Thomas
I. Introduction

The use of algorithms is associated with a risk of collusion. This bears on the construal of the cartel prohibition, on which the present chapter focuses. The hypothesis is that algorithms may achieve a collusive equilibrium without any involvement of natural persons. Against this backdrop, it is questionable whether and to what extent such an outcome can be qualified as a concerted practice in terms of the law.

The analysis will be structured as follows: first, it will be assessed in what way algorithms can influence competition on markets (Section II). Subsequently, the chapter will deal with the traditional criteria of distinction between explicit and tacit collusion, which might reveal a potential gap in the existing legal framework with respect to algorithmic collusion (Section III). Finally, it must be analyzed whether the cartel prohibition can be construed in a way that captures the phenomenon appropriately (Section IV). The chapter will close with a summary (Section V).

II. Algorithmic Collusion As a Phenomenon on Markets

It is widely accepted that the use of algorithms can precipitate collusive outcomes, at least in theory. There is no lack of attempts to systematize the different ways algorithms can be involved here. Since the first groundbreaking publications by Ariel Ezrachi, Maurice Stucke, and Salil Mehra, followed by other authors, the matter has come into the focus of antitrust scholarship and practice.Footnote 1 Agencies have started to look into it.Footnote 2 The first cases have emerged concerning restraints implemented on platforms involving computer technology. With respect to the United States (US), TopkinsFootnote 3 must be mentioned, which involved alleged horizontal price fixing on a digital platform based on algorithms. For the European Union (EU), the EturasFootnote 4 case comes to mind. The operator of a travel booking platform had informed the travel agencies using that platform that it intended to cap rebates granted to end-consumers. The European Court of Justice (ECJ) held that this amounted to a horizontally concerted practice among the travel agencies to the extent that these had not objected to this proposal. The Luxembourg competition authority in 2018 found the taxi booking app Webtaxi to be exempt from the cartel prohibition although the system involved an algorithmic horizontal alignment of prices. The agency found offsetting efficiencies to the benefit of consumers.Footnote 5 These cases have in common that the natural persons representing the companies involved were aware of the restrictions, or that they at least ought to have known about them. Algorithms were an element of implementing the restriction, yet the ultimate decision about the competitive restraint was taken by human beings. Under these conditions the cases did not pose severe difficulties in establishing explicit collusion.
This does not differ fundamentally from cases in which parties communicate, for example, by way of hub-and-spoke cartelization through traditional means and forms of communication.Footnote 6

A greater legal challenge is caused by the risk of autonomous algorithmic collusion. Computers with machine learning capabilities can possibly achieve or sustain a collusive equilibrium without any involvement of human knowledge or intent. The underlying scholarly discussion usually orbits around q-learning mechanisms.Footnote 7 The hypothesis is that algorithms with machine learning capabilities can act as computer agents exploring the success of their own actions, from which a collusive strategy can emerge as the optimum. In this event, it is conceivable that even the programmer of the algorithm was not aware of this potential outcome.Footnote 8 The market effect, therefore, can be a collusive equilibrium, albeit absent any human involvement.
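The mechanism described here can be illustrated with a deliberately simplified sketch (not drawn from this chapter; the price grid, demand function, and learning parameters are all invented for illustration): two q-learning agents repeatedly set prices in a stylized duopoly and update their strategies from their own profits alone, without any message passing between them.

```python
import random

PRICES = [1, 2, 3, 4, 5]           # admissible price levels (invented)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

def profit(own, rival):
    """Stylized demand: the cheaper firm captures the market; ties split it."""
    if own < rival:
        return own * 10
    if own == rival:
        return own * 5
    return 0

def run(episodes=20_000, seed=0):
    """Two independent q-learners; each agent's state is the rival's last price."""
    rng = random.Random(seed)
    q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]
    state = [rng.choice(PRICES), rng.choice(PRICES)]
    for _ in range(episodes):
        # Epsilon-greedy action choice, made independently by each agent.
        acts = [rng.choice(PRICES) if rng.random() < EPS
                else max(q[i][state[i]], key=q[i][state[i]].get)
                for i in range(2)]
        for i in range(2):
            reward = profit(acts[i], acts[1 - i])
            nxt = acts[1 - i]  # the rival's observed price becomes the next state
            best_next = max(q[i][nxt].values())
            q[i][state[i]][acts[i]] += ALPHA * (
                reward + GAMMA * best_next - q[i][state[i]][acts[i]])
            state[i] = nxt
        # Note: no message, signal, or shared memory passes between the agents.
    return q

q = run()
# Greedy policy of each agent after training: best price response per observed rival price.
greedy = [{s: max(q[i][s], key=q[i][s].get) for s in PRICES} for i in range(2)]
```

Depending on the parameters and the run, the greedy policies that emerge from such training can settle on supra-competitive price pairs; whether and how robustly this happens in richer settings is precisely the empirical question on which, as noted below, scholars remain divided.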

Lawyers, economists, and computer scientists are still at odds over the likelihood and actual occurrence of autonomous algorithmic collusion. Some consider it a realistic scenario that tends to be underestimated.Footnote 9 The German Bundeskartellamt and the French Autorité de la Concurrence have refrained from a definitive conclusion so far.Footnote 10 Ulrich Schwalbe, in his seminal article, points out that the game theoretical dilemma that has to be solved by autonomous computer agents to achieve a stable collusive equilibrium is huge and cannot be easily overcome in practice.Footnote 11 A more recent study by Emilio Calvano and othersFootnote 12, however, concludes that q-learning algorithms, in fact, can autonomously collude. The EU Commission, in its proposal for a ‘New Competition Tool’, mentions the risk that digital platforms can create ecosystems in which collusion arises, which can be read as a recognition of the phenomenon as a matter of concern.Footnote 13 Against this backdrop, it is expedient to elaborate further on the application of the cartel prohibition in such cases.

III. On the Scope of the Cartel Prohibition and Its Traditional Construal

The conceptual problem behind the traditional construal of the cartel provision is the difference between the structure of the law on the one hand and the economic determinants of collusive equilibria on the other.Footnote 14 Anticompetitive collusive equilibria are characterized by the fact that the participants consider it individually rational to pursue such a strategy. This holds true for the types of collusion that are usually referred to as explicit and that are illegal, just as it does for so-called tacit collusion, which is seen as falling outside the scope of the prohibition. Agreements in breach of the cartel prohibition are null and void, so that they cannot be enforced. Therefore, any cartel is only stable so long as the firms participating in it consider it rational to remain involved. All types of collusive equilibria can, therefore, be considered non-cooperative games in terms of game theory.Footnote 15 Yet still, the law does not prohibit the achievement or sustaining of a collusive equilibrium as such. Instead, the provision is confined to certain types of measures, which are described as agreement, concerted practice, or decision. Mere tacit collusion is supposed to be distinct from these types of anticompetitive conduct. While explicit collusion (i.e. achieved by agreement, concerted practice, or decision) is prohibited, tacit collusion is found to be legitimate.

As becomes obvious, the traditional construal of the law rests on a description of the means and forms by which firms interact when defining collusion. In the context at hand, the category of concerted practices has the greatest relevance. It is conceived of as conduct whereby a ‘practical cooperation’ is substituted for the risks of competition. Such practical cooperation, in turn, is supposed to be different from merely observing a rival’s conduct and reacting to it.Footnote 16 Similar approaches of distinction apply under section 1 of the US Sherman Act.Footnote 17 There, so-called conscious parallelism is deemed not to fall within the scope of the prohibition.Footnote 18 For a finding of a cartel, so-called plus factorsFootnote 19 or ‘facilitating practices’/‘facilitating devices’ need to be established.Footnote 20

It is argued that, whereas firms when tacitly colluding merely observe each other and react independently, the mechanism allegedly differs if they opt for a practical cooperation. A private exchange of pricing information can serve as an example for the latter. The jurisprudence of the courts requires that such practical cooperation must be substituted ‘knowingly’ for the risks of competition for it to amount to a concerted practice.Footnote 21 The concept, therefore, hinges on the inner sphere of the firms involved.

Against this background, it is questionable whether autonomous algorithmic collusion is prohibited under the traditional enforcement paradigms. If the firms lack knowledge or intent with respect to the fact that their computer agents pursue a collusive strategy, it cannot be said that these firms ‘knowingly’ substitute a practical cooperation for competition. Several authors, therefore, point to the risk that the cartel prohibition might stop short of preventing such outcomes. Ezrachi and Stucke argue that collusion achieved by machine learning systems can fall outside the scope of the cartel prohibition for ‘lack of evidence of an anticompetitive agreement or intent.’Footnote 22 In a similar vein, Calvano and others conclude from their studyFootnote 23:

From the standpoint of competition policy, these findings should probably ring an alarm bell. Today, the prevalent approach to tacit collusion is relatively lenient, in part because tacit collusion among human decision-makers is regarded as extremely difficult to achieve. While we have no direct comparative evidence for algorithms relative to humans, our results suggest that algorithmic collusion might not be that improbable. If this is so, then the advent of algorithmic pricing could well heighten the risk that tolerant antitrust policy will produce too many false negatives.

Some authors, therefore, highlight that the enforcement paradigms might warrant amendments to close such regulatory gaps.Footnote 24

IV. Approaches for Closing Legal Gaps
1. On the Idea of Personifying Algorithms

It is questionable whether the potential enforcement gap in antitrustFootnote 25 can be overcome by defining algorithms as ‘undertakings’. The notion of an undertaking in EU antitrust is indeed a very broad concept with many facets and functions. As a working definition that applies to the most common types, an undertaking can be described as a combination of assets and people that acts on a market, governed by a management body or other representatives, irrespective of legal personality or corporate structure. For an undertaking to act, or to have knowledge or intent, it is the action, knowledge, or intent of the human beings representing it that is attributed to it. If a company manager knowingly enters into an exchange of sensitive pricing information with the manager of a rival, it can, therefore, be said that these ‘undertakings’ substituted a practical cooperation for the risks of competition.

Yet what substantive meaning would terminology such as a ‘practical cooperation’ or ‘knowingly’ have, if they were applied to an algorithm? The problem is that the legal terminology is coined on human interaction and cognition, so that it will be vastly deprived of its meaning if transferred to a computer system. How shall an action of an algorithm be identified as ‘knowingly’, as opposed to another action that is supposed to happen un-knowingly? In what way is it meaningful to consider the actions of an algorithm as a ‘practical cooperation’, as opposed to a mere intelligent adaption to information obtained by this algorithm on the market? To rely on such human concepts of cognition with respect to the regulation of algorithms will likely end up in semantic exercises with limited substance.

2. On the Idea of a Prohibition of Tacit Collusion

Another way of dealing with the problem could be to equate tacit collusion with explicit collusion.Footnote 26 This would mean that, under antitrust law, any collusive strategy would qualify as an illicit cooperation, so that any further distinctions based on the inner sphere of the persons involved would become obsolete. Such a view has been suggested in the past independently of the issue of algorithmic collusion. Notable proponents were Richard Posner in his earlier writingsFootnote 27 (he has since changed his viewFootnote 28), Richard MarkovitsFootnote 29, and more recently Louis Kaplow.Footnote 30

One might feel inclined to hold that such a view cannot be reconciled with the structure of the law. Yet it is questionable whether this counterargument would be very strong. The notion of concerted practices can possibly be construed in such a way as to extend to cases in which the collusive outcome is based on a mechanism of observation and retaliation, which is as characteristic of tacit collusion as it is of explicit collusion. As outlined earlier, both categories share the feature that they can be described as non-cooperative games in terms of game theory. It is, therefore, rather a semantic issue whether some ways of engaging in such a non-cooperative strategy can be tagged as a ‘practical cooperation’ or not, while the underlying economic principles remain the same. Firms observe and react in ways that are deemed individually optimal, irrespective of which words are used to describe this phenomenon. Especially in the grey area between typical cases of explicit cooperation on the one hand and tacit oligopoly conduct on the other, it becomes apparent how brittle the traditional concept of distinction is.Footnote 31 Consider that even in cases that would usually be qualified as tacit collusion, firms cooperate in that they observe each other and react to the information they have obtained from observing each other. One might ask: in what way is this not a ‘practical cooperation’? Also, a collusive equilibrium can bear negatively on consumer welfareFootnote 32 irrespective of the means and forms used by firms to sustain it. This seems to add to the argument that the distinction between tacit and explicit collusion is of limited expedience.

Yet still the idea of equating tacit collusion with explicit collusion faces severe objections, which might explain why it has not gained more recognition among scholars and enforcers.Footnote 33

Any law must provide the addressee with the opportunity to abide by it by choosing a compliant course of action. Otherwise, the law would be perplexing. Recall, now, that collusion in the realm of antitrust is a non-cooperative game.Footnote 34 This means that the strategy is individually rational for each participant. Sanctioning collusion without more, therefore, would amount to prohibiting the pursuit of an individually rational strategy. Kaplow reacts to this objection by pointing out that it is a common feature of the legal order to prohibit types of conduct that are individually rational,Footnote 35 such as the stealing of an apple.Footnote 36 By imposing a sanction, he argues, the law ensures that it becomes rational to refrain from that course of action.

This argument, however, does not overcome the conceptual problems that would arise if the law prohibited the collusive outcome as such. While it is perfectly clear what a person must do to ‘not steal an apple’, it is much less obvious what course of action a firm would have to take in order to ‘not pursue a collusive strategy’. The negation of stealing an apple is unambiguous: the course of action required to avoid the sanction is to not steal the apple. With a prohibition of a collusive market outcome, however, it would be much more difficult to describe, in an unambiguous way, what the compliant course of action would be. One of the reasons for this is that, from an ex ante perspective, it is unclear what outcome a collusive strategy among firms might produce. If that is not known, however, it is equally unclear what price a firm must set in order to not charge at a collusive level. If the firms do not know whether a collusive strategy would yield a market price of € 5 or of merely € 4, they cannot know ex ante whether individually charging a price of € 4 would be ‘non-collusive’ or not.

On a more philosophical level, another objection arises. Effectively, the law would require the addressee to pursue any market strategy so long as it is not the rational one, assuming that the rational strategy would be collusion. It is questionable, however, whether an addressee of the law can be required to intentionally act irrationally. Any legal prohibition must conceive of the addressee as an intelligent entity, for if the addressee were not intelligent, it could not abide by the law in the first place. From a philosophical perspective, it is unclear, however, whether a person can ‘rationally act irrationally’. Can a firm be obliged to randomize prices in order to protect itself from being accused of acting in a rationally collusive manner?

Yet even setting aside this fundamental problem, and assuming further that it were possible to define a hypothetical collusive price level ex ante, it would remain unclear whether a non-collusive strategy could be defined with a sufficient degree of precision for firms to be able to abide by the law. Would it suffice to undercut the hypothetical collusive price by, for example, 2%? Or would the law require every firm to price at marginal cost, for under perfect competition this would be the hypothetical competitive price, even though perfect competition usually does not exist? Merely imposing a sanction on the pursuit of a collusive strategy does not solve any of these conceptual problems.Footnote 37

3. Harmful Informational Signals As Point of Reference for Cartel Conduct
a. Conceptualization

Another conceptual venture to solve the issue could be to maintain a distinction between illicit cartel conduct and legitimate coordination, yet to substitute a new criterion for the traditional paradigm, viz. for the ‘practical cooperation’ adage. Such an alternative criterion of distinction could be to check the informational signals released by colluding firms for their propensity to create net consumer harm. Accordingly, an illicit concerted practice would be found if firms, or the algorithms relied on by firms, released informational signals which reduced consumer rent compared to a counterfactual in which such informational signals were absent. The counterfactual, therefore, would not be a hypothetical market without collusion. It would be the same market absent a particular informational signal.Footnote 38 To clarify the specifics of this approach, and to contrast it with the view expressed by Kaplow and others, the following explanations shall be made.

In contrast to a situation where any collusive equilibrium is prohibited, so that it is unclear which course of action to take in order to abide by the law, confining the prohibition to harmful signals leaves the addressee a binary choice that is conceptually clear: an informational signal that is ultimately harmful to consumers can either be released or not. There is no element of imposed irrationality in such a prohibition. To refrain from the release of a harmful informational signal can also mean that the addressee must choose to release a different signal in order to avoid a harmful effect. To exemplify the idea, reference can be made to the EU Commission’s remedy decision in Container Shipping, where public price announcements were not abandoned completely but limited in scope in order to avoid the creation of a consumer harm.Footnote 39

A relevant signal in this sense can be any release of market-related information independent of whether it takes place publicly or in private, whether consumers are involved or not. Even price lists, price announcements, or other types of public display of prices or other parameters can qualify as a signal that ultimately leads to a collusive equilibrium. In contrast to the aforementioned opinion of Kaplow, however, this would not suffice for the conclusion that a restriction of competition in terms of the cartel prohibition is established. Rather, the release of such informational signals would fall outside the scope of Article 101(1) TFEU to the extent that it produces offsetting consumer benefits.

This reflects the fact that there is a plethora of cases where public market information leads to collusive equilibria, although the benefit to consumers being derived from this information outweighs the gross harm of the collusive outcome if compared to a situation in which such informational signals were not made. The reason why, in most cases, public price lists and similar forms of information are not prohibited despite their risk to precipitate collusive equilibria lies in the fact that they usually bring about benefits to consumers that offset the potential harm.Footnote 40 If firms refrain from the release of such pricing signals, the distribution of goods might be significantly impeded. Consumers can face difficulties in planning their purchases ahead or in pursuing multisource strategies. It can be said, therefore, that sometimes ‘vertical transparency’ creates a greater benefit to consumers than what is taken away by the horizontal collusion that concomitantly results from it. It is necessary, therefore, to balance both effects in order to get a full picture of the economic impact of an informational signal. If such an analysis of the economic effects is integrated into the assessment of a concerted practice, this creates a system in which harm and benefit of a measure can be distinguished without the need to make recourse to notions such as the ‘practical cooperation’ adage or the intention of the firms involved.
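The balancing test just described can be condensed into a simple counterfactual comparison. The following is an illustrative formalization, not taken from this chapter, and all figures in it are invented: a signal would be caught by the prohibition only if consumer surplus with the signal falls short of consumer surplus in the same market without it.

```python
def signal_is_illicit(cs_with_signal: float, cs_without_signal: float) -> bool:
    """Net-harm test: the counterfactual is the same market absent the
    informational signal, not a hypothetical market without collusion."""
    return cs_with_signal < cs_without_signal

# Invented figures: a public price list enables collusion costing consumers 5
# units of surplus, but its planning and search benefits add 8; the net effect
# is positive, so the signal would not be prohibited.
baseline = 100.0
with_price_list = baseline - 5 + 8
assert signal_is_illicit(with_price_list, baseline) is False

# Pure machine-to-machine signalling with no consumer-facing benefit:
assert signal_is_illicit(baseline - 5, baseline) is True
```

The point of the sketch is only to show that the test leaves the addressee a clear binary choice: release the signal, or do not (or release a modified signal whose net effect is not harmful).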

This concept would make it possible to deal with the phenomenon of autonomous algorithmic collusion under the cartel prohibition. Achieving or sustaining such a market outcome would be prohibited if and to the extent that the harm to consumers were greater than the benefits associated with the release of the underlying informational signals.Footnote 41 If algorithms achieved a collusive equilibrium by communicating with each other and without this interaction providing any useful information to consumers, this could, therefore, constitute a concerted practice in terms of the law. If, on the other hand, a digital platform aggregated and provided information to consumers in a way that benefits them, and if the efficiency potential of this platform could not materialize without this release of information, Article 101(1) TFEU would not be triggered even though the conduct might, ultimately, give rise to a collusive equilibrium, if and to the extent that the benefit of the former offsets the harm of the latter. The counterfactual, therefore, would not be a digital platform without collusion. It would be a situation in which the informational signals were not released. If the platform, in such a counterfactual scenario, were not operated or operated with a lesser efficiency, consumers could be deprived of the benefits resulting from it. This could mean a smaller range of suppliers being visible to them, less information being available to help consumers plan their purchases, etc.

This demonstrates that a collusive outcome can be an inevitable consequence of an algorithm-based system that produces benefits, even where horizontal restraints, including restraints on prices, are involved. The decision of the Luxembourg competition agency has made this clear with respect to the taxi app Webtaxi.Footnote 42 The agency found consumer benefits in the fact that the platform improved the supply of transportation services, even though horizontal collusion on prices was an inevitable side effect. While the decision accounted for these efficiencies within the scope of an exemption from the cartel prohibition, the concept presented here would integrate this analysis into the assessment of a concerted practice itself. As previously outlined, this is a consequence of substituting an effects analysis for the less useful ‘practical cooperation’ adage in order to discriminate between legitimate and illicit collusion in algorithm cases.

b. Possible Objections

Such a concept of harmful informational signals, of course, provokes objections. They shall be dealt with in the remainder of this chapter. One might want to invoke that this construal of the law cannot be reconciled with the structure of the provision as shaped by the jurisprudence. Admittedly, the courts have not yet recognized an interpretation as suggested here. Rather, the Court of Justice currently relies on the notion of ‘practical cooperation’ in order to distinguish between concerted practices and tacit collusion. The established set of criteria does not contain a place for an economic effects assessment. On the other hand, the Court of Justice has not yet had an opportunity to hone the law with respect to the phenomenon of autonomous algorithmic collusion. It is, therefore, conceivable that the courts, following preparatory enforcement steps by the agencies, ultimately consider the option of readjusting some of the enforcement paradigms in order to close potential regulatory gaps.

Also, it should be noted that even under the current decisional practice an effects analysis can be part of the assessment of Article 101(1) TFEU. As to the distinction between restrictions by effect and those by object, the potential of the measure to produce restrictive effects bears significance.Footnote 43 In that regard, the Court of Justice has made clear that, among other things, it must be analyzed whether and in what way consumers might suffer from the measure at stake.Footnote 44 This line of reasoning already mirrors an effects analysis based on the consumer welfare paradigm, which also is the backbone of the present proposal. In a similar way, the Commission practice demonstrates that the notion of a competitive restraint involves an analysis of the effects on consumer rent. Even though the Court of Justice has ruled that, for a restriction of competition to arise, it is unnecessary to demonstrate concrete consumer harmFootnote 45, the Commission takes this into account if it helps to discriminate between legitimate and illicit conduct. The Commission, for example, argues that even horizontal price fixing, depending on the structure and function of the cooperation, might warrant a close examination within the ‘by effect category’, which demonstrates that it is possible to conceive of cases where such conduct falls outside of the scope of Article 101(1) TFEU without any further examination of the legal exemption rule in Article 101(3) TFEU.Footnote 46

On a practical level, one might want to invoke that the assessment of whether an informational signal precipitates net consumer harm is too complicated to serve as an enforcement paradigm. Yet it must be noted that even the currently existing decisional practice is not void of elements of an effects analysis, as demonstrated previously. Beyond that, firms and enforcers already face exactly these difficulties within the realm of Article 101(3) TFEU, so that such difficulties cannot be said to be idiosyncratic to the proposal made here.

Beyond that, one might raise the question in what way firms can be held responsible for the conduct of an algorithm if the strategy pursued by the latter is unknown to the former. In legal terms, however, it is possible to hold someone responsible for the organization of an enterprise. Firms, therefore, could be considered obliged to terminate the use of an algorithm, or to alter its paradigms, if and to the extent that it produces more harm than benefit to consumers. Firms could, therefore, be ordered, by way of an administrative decision, to make such adjustments to their business strategy either by tweaking the algorithm or by stopping its use altogether. Antitrust economists already expound ways to design platforms so as to counter collusive risks emerging on them from machine learning systems. Justin Johnson, Andrew Rhodes, and Matthijs Wildenbeest published a study in 2020 on how a choice algorithm on a sales platform can affect the likelihood of collusion among independently acting q-learning algorithms.Footnote 47 It goes without saying that the imposition of a fine or a liability for damages would, in any event, require negligence or intent on the firm’s part. That, in turn, would require the agency to establish some degree of knowledge or intent, with respect to the achievement of a collusive equilibrium, among the actual firms that relied on the algorithm. Such knowledge or intent would be absent if the equilibrium were achieved or sustained independently by a machine learning process not guided or anticipated by the firms that rely on the outcome.Footnote 48 Yet still, in such an event a mere administrative order, absent any sanction or damages award, could be issued.

Finally, one might want to invoke that it would be disproportionate to make such far-reaching amendments to the construal of the cartel prohibition for the sole purpose of closing a regulatory lacuna with respect to algorithms. It is not the intention behind this proposal, however, to render the established enforcement paradigms obsolete in their entirety. Rather, the suggestion made here should be conceived of as a mere addition to the established principles that would still bear relevance in the majority of non-algorithmic collusion cases. A private exchange of pricing information between the managers of two rivals, for example, could still be considered a practical cooperation without further effects analysis required, for its potential to precipitate a consumer harm, as reflected in the Commission’s Horizontal Guidelines.Footnote 49 The informational signal approach suggested here, on the other hand, could become relevant if, in an algorithm case, the traditional criteria did not allow the application of the law in a meaningful way. The present suggestion, therefore, is meant as a humble contribution to existing paradigms, not as a postulate of a total substitution for them.

V. Conclusion

This chapter is a conceptual sketch of a way to deal with the intricacies that come along with autonomous algorithmic collusion. Such a risk is being discussed especially with respect to q-learning algorithms. Even though practical cases have not yet emerged, there is sufficient reason to address potential issues as a precautionary measure at this point in the scholarly debate. The intention was to demonstrate that the traditional construal of the law, which relies on a description of human behavior, appears inapt for effectively tackling machine-induced equilibria. Applying the established criteria in such cases would very likely amount to harping on words without substance. There is simply no point in venturing to assess whether algorithms ‘practically cooperated’ as opposed to ‘merely observed each other’. The present proposal, therefore, is intended to serve as a conceptualization of the notion of concerted practices for cases that would otherwise elude the cartel prohibition. To conclude, the entire problem of autonomous algorithmic collusion is an example of the necessity of interdisciplinary research between lawyers, economists, and computer scientists. At the same time, the problem highlights how enforcement paradigms that hinge on descriptions of the inner sphere and conduct of human beings may collapse when applied to the effects precipitated by independent computer agents. The subject matter of this chapter is, therefore, an example of the greater challenges that the entire legal order faces in light of the progress of machine learning.

21 Artificial Intelligence in Financial Services New Risks and the Need for More Regulation?

Matthias Paul
Footnote *
I. Introduction

The financial services industry has been at the forefront of digitization and big data usage for decades. For the most part, data processing has been automated by information management systems. Not surprisingly, Artificial Intelligence (AI) applications, capturing more intelligent ways of handling financial activities and information, have increasingly found their way into the financial services industry in recent years: from algorithmic trading, smart automated credit decisions, intelligent credit card fraud detection processes, and personalized banking applications, to areas like so-called robo-advisory services and, more recently, quantitative investment and asset management.Footnote 1

The financial industry has also been one of the most regulated industries in the world. In particular, since the collapse of Lehman Brothers in 2008, which led into one of the most severe financial crises in history, efforts by regulators around the world to regulate all kinds of finance-related activities, and financial organizations as a whole, have significantly increased. In general, most regulations relating to the financial industry, in particular those put in place after the financial crisis of 2008, have focused on safeguarding the financial institutions themselves, safeguarding the customers of financial institutions, and making sure the institutions comply with general laws overall and on a global scale, given the truly global nature of the financial industry.

More recently, authors have argued that with the emergence of AI-based applications in the financial industry, new kinds of risks have emerged that require additional regulation.Footnote 2 They have pointed, for instance, to increased data processing risks, cybersecurity risks, additional challenges to financial stability, and even to general ethical risks stemming from AI in financial services. Some regulators, like the Monetary Authority of Singapore (MAS), have proposed an AI governance framework for financial institutions.Footnote 3 The EU has also explored this topic and published a report on big data risks for the financial sector, including AI, stressing appropriate control and monitoring mechanisms.Footnote 4 Scholars have developed this topic further by adopting so-called personal responsibility frameworks to regulate newly emerging AI-based applications in the financial industry.Footnote 5 In its recent draft regulation, the EU has presented a general risk-based regulatory approach to AI which regulates, and in some cases even prohibits, certain so-called high-risk AI systems, some of which can supposedly also be found in the financial industry.Footnote 6

This chapter will explore the topic of AI in the financial industry (also referred to as robo-finance) further. One focus will be on whether AI in the financial industry gives rise to new kinds of risks or merely increases risks already present in the industry. Further, the chapter will review one prominent general regulatory approach many scholars and regulators have put forward to limit or mitigate these alleged new risks, namely the so-called (personal) responsibility frameworks. In the final section of this chapter, a different proposal will be presented on how, and to what extent, robo-finance is best regulated, which will take up key elements and concepts from the recent Draft EU AIA.Footnote 7 To lay the groundwork for the discussion of these topics, the nature of AI, in particular as a general-purpose technology, will be explored first. In addition, an overview of the current state of AI applications in financial services will be given, and the different regulatory layers, or focus areas for regulation, present in the financial industry today will be presented. Based on these introductory discussions, the main topics of the chapter can then be spelled out.

II. AI As a New General Purpose Technology

Electricity is a technology, or technology domain, that came into being more than 150 years ago, and it still drives a lot of change today. It comprises different concepts like electrical current, electrical charge, electric fields, and electromagnetism, which have led to many different application areas in their own right: from the light bulb to electrical telegraphs to electric engines, to mention only a few. It is fair to say that electricity as a technology field or domain has revolutionized the world in many ways, and it still does. It has changed and transformed whole industries, just as it is currently transforming the automotive industry through the transition from combustion engines to electric cars.

Given its wide range of underlying concepts, with multiple specific application areas in their own right, several authors have referred to electricity as a general-purpose technology (GPT).Footnote 8 What is characteristic of GPTs is that there exists a wide range of different use cases in different industries; GPTs are thus not use-case-specific or industry-specific technologies but have applications across industries and across many types of use cases. Other examples of GPTs which scholars have identified are the wheel, printing, the steam engine, and the combustion engine, to mention a few.Footnote 9 As such, GPTs are seen as technologies that can have a wide-ranging impact on an entire economy and, therefore, have the potential to drastically alter societies through their impact on economic and social structures.Footnote 10

Several authors have claimed or argued in recent years that AI can or should also be considered a GPT, in fact ‘the most important one of our era’.Footnote 11 Or, as Andrew Ng says: ‘AI will transform multiple industries’.Footnote 12 AI’s impact on societies as a whole is seen as significant: changing, for instance, the way we work, the way we interact with each other and with artificial devices, how we drive, how wars might be conducted, and so on. Further, as in the case of electricity, there are many different concepts underlying AI today, from classical logic- or rule-based AI to machine learning and deep learning based AI, as employed so successfully today in many areas. Some hybrid applications combine both concepts.Footnote 13 These concepts have allowed for many new types of AI applications, similar to the case of electricity, where different concepts have been merged together as well.

In fact, because the use cases for AI technologies are so enormous today, companies like Facebook have created their internal AI labs or what they have called their ‘AI workshop’ where many different applications of AI technologies, in particular machine learning applications, get explored and developed.Footnote 14 The underlying assumption of such companies is that AI can be applied to so many different areas and tasks that they need to find good ways to leverage their technological expertise in all such different areas.

Clearly, AI is still in the early stages of its technological development, with fewer implementations in widespread operation than in the case of electricity. But there are already language and speech processing applications, visual recognition applications like face recognition in smartphones, photo optimization algorithms in digital cameras, many kinds of big data analytics applications, and more. AI technologies have also changed the interface between humans and machines: some turn machines into helpful assistants, others allow for intelligent ways of automating processes, and so on. The applications of AI are already widespread today, and we seem to be just at the beginning of a long journey of bringing more applications to life.Footnote 15

In the following, we will look at the financial industry as one major application area for AI as a general-purpose technology. The financial industry is interesting in so far as it is heavily regulated on the one hand, but also highly digitalized and technologically advanced on the other hand, with many kinds of AI use cases operational already today.

III. Robo-Finance: From Automation to the Wide-Spread Use of AI Applications

The financial industry has been one of the most data-intensive and digitized industries for decades. In 1973, SWIFT, the Society for Worldwide Interbank Financial Telecommunication, was founded and launched, bringing together 239 banks from 15 countries with the aim of handling the communication of cross-border payments. The main components of the original system included a computer-based messaging platform and a standard message system.Footnote 16 This system disrupted the manual processes of the past, and today more than 11,000 financial institutions from more than 200 countries are connected through SWIFT’s global financial technology infrastructure. Nasdaq, to give another example, the world’s first electronic stock exchange, began its operations even earlier, in 1971, leading the way to the fully digitized exchanges for the trading of all kinds of financial securities that are the standard and norm today. And real-time financial market data and news, probably the first big data sets used in history, were made available in the early 1980s by companies such as Thomson Reuters and Bloomberg through their market data feeds and terminal services.Footnote 17

In the years that followed, the financial industry was at the forefront of leveraging information (management) systems to manage and process the vast amounts of data and information available.Footnote 18 In fact, today many financial institutions resemble technology companies more than traditional banking houses, and it is no surprise that companies like PayPal or, more recently, many new fintech players have been able to further transform this traditional industry by leveraging new technologies like the Internet or mobile services, platforms, and infrastructures.Footnote 19

This digitization of financial information and financial transactions has made the automation of data handling and processing not just a possibility but a necessity for maintaining and defending one’s competitiveness and for dealing with and managing the various kinds of risks inherent in the financial industry. The execution of payments within the international banking system, or the execution of buy or sell orders on the exchanges, can be fully automated today based on simple parameters (such as dates and amounts, or stop-loss orders to manage risk). Clearly, these ways of automating financial transactions and processes are in no way intelligent; nevertheless, they have helped investment banks and other actors in the financial industry tremendously to increase process speed and accuracy and to improve risk management.Footnote 20 It is, therefore, no surprise that many actors in the industry have constantly sought to develop more sophisticated processes, which has opened the doors for AI applications in the financial industry.

Today, there is a wide range of AI applications in the financial industry, of which the following are just the key application areas, each with multiple kinds of use cases:Footnote 21

(1) Customer-Related Processes:

a. new ways of segmenting customers based on the use of so-called cluster algorithms or analyses,Footnote 22

b. personalized banking services and offers based, for instance, on profiling algorithms,Footnote 23

c. robo-advisory services replacing human financial advisers with machines,Footnote 24

d. intelligent chatbots advising or providing information to clients in different areas of their financial decision making.Footnote 25

(2) Operations and Risk Management:

a. underwriting automation in credit decisions and algorithmic credit scoring,Footnote 26

b. automated stress testing.

(3) Trading and Investment Management:

a. algorithmic trading, from simple rule-based AI to more sophisticated machine learning based algorithms,Footnote 27

b. automatic portfolio rebalancing in asset management, adjusting the portfolio to a predefined asset allocation scheme based on simple rule-based algorithms,

c. big data and machine learning–based (assisted or fully automated) asset management.Footnote 28

(4) Payment Processes:

a. fraud detection algorithms in credit card payments using big data analytics and learning algorithms.Footnote 29

(5) Data Security and Cybersecurity:Footnote 30

a. data security: algorithms protecting data from inside a financial institution,

b. cybersecurity: algorithms protecting data from outside attacks.Footnote 31

(6) General Regulatory Services and Compliance Requirements:Footnote 32

a. Anti-Money-Laundering (AML) automation and protection algorithms helping to identify politically exposed persons (so-called PEPs) or criminals involved in certain financial transactions,

b. detection of compliance breaches in cases of insider trading, etc.
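To illustrate how simple the rule-based end of this spectrum can be, the automatic portfolio rebalancing mentioned under Trading and Investment Management can be sketched in a few lines. The function below is a hypothetical illustration (the asset names, the 60/40 target, and the 5% tolerance band are made up), not a description of any actual product:

```python
def rebalance(holdings, prices, targets, band=0.05):
    """Return the trades (in units) needed to restore target portfolio weights.

    holdings: {asset: units held}, prices: {asset: current price},
    targets:  {asset: target weight, summing to 1.0}.
    A trade is generated only if an asset's current weight drifts from
    its target by more than `band` (a no-trade tolerance band).
    """
    total = sum(holdings[a] * prices[a] for a in holdings)
    trades = {}
    for asset, w_target in targets.items():
        w_now = holdings[asset] * prices[asset] / total
        if abs(w_now - w_target) > band:
            target_value = w_target * total
            trades[asset] = (target_value - holdings[asset] * prices[asset]) / prices[asset]
    return trades

# Example: a 60/40 equities/bonds target after equities have rallied 20%.
trades = rebalance(
    holdings={"equities": 70, "bonds": 40},
    prices={"equities": 1.2, "bonds": 1.0},
    targets={"equities": 0.6, "bonds": 0.4},
)
# Sells 8 units of equities and buys 9.6 units of bonds to restore 60/40.
```

As the text notes, such logic is purely mechanical and would not count as AI by most current definitions, which is exactly why the boundary between "automation" and "intelligence" matters for any AI-specific regulation.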

As shown here, AI is already employed in many areas of the financial industry, and new applications are emerging every day. The question is whether additional or increased risks stem from these applications, risks which might require additional regulation, as some authors have argued.Footnote 33 This line of argument will be reviewed in more detail in the following sections. But first, it is important to understand, from a high-level perspective, the main areas and layers of regulation in the financial industry today.

IV. A Short Overview of Regulation in the Financial Services Industry

The financial services industry is probably one of the most regulated industries. The regulation of trading practices, for instance, dates back to the seventeenth century, when in 1610 in Holland some first forms of short selling were prohibited.Footnote 34 Around the same time, the first central banks were created, such as the Swedish Riksbank in 1668, to regulate payment transactions on a national level and to establish national currencies by issuing banknotes. Some of the early regulation was ‘private self-regulation’, in other words bottom-up norm creation,Footnote 35 as in the case of the regulatory practices around many of the emerging exchanges; but even at these early times some regulation was government- or state-driven (top-down), as in the case of the establishment of central banks and their key role in establishing standardized payment practices based on backed currencies.Footnote 36

Today the financial industry is heavily regulated by national and supranational bodies, for instance by ESMAFootnote 37 in the EU or by the SECFootnote 38 in the US with regard to activities on the different financial markets. Some of these regulations are specific to the financial industry; others are general regulations that severely impact it. Overall, the different types or layers of regulation in the financial industry can be classified by their underlying aims, namely: (i) regulations meant to safeguard overall financial stability, (ii) regulations for the protection of consumers of financial services, and (iii) regulations that are meant to make sure financial services can operate in a challenging and diverse international environment with sometimes conflicting rules and principles.Footnote 39

The following overview tries to capture the main regulation areas or layers, and their specific purposes or aims, as they are present in the financial industry today. Some of the layers link directly to the categories just mentioned, some cut across the different categories, and some also mirror the classification of AI-impacted application domains in the financial industry given in the previous section:

(1) Equity and liquidity requirements for banks and financial institutions to adhere to minimum capital ratios and liquid asset holdings, to prevent financial stress, improve risk management, and promote transparency. Examples are the Basel I, Basel II, and Basel III frameworks, global voluntary regulatory frameworks adhered to by most financial institutions today;Footnote 40

(2) Infrastructure regulations, many still at the proposal stage, to improve financial services firms’ operational resilience (in case of major disasters, for instance) and their responses to cyberattacks;Footnote 41

(3) Pre- and post-trading regulations to strengthen investor protection and improve the functioning of financial markets, making them more efficient, resilient, and transparent, for example by banning certain trading practices or making kickbacks from product issuers transparent. The MiFID I and II regulations in the EU are examples of such regulations;Footnote 42

(4) Payment services regulations, like the PSD II directive (2015) in the EU, with the aim of creating more integrated payment markets, making payments safer and more secure, and protecting consumers, for instance from the financial damage resulting from fraudulent credit card payments;

(5) Various kinds of compliance regulations, for instance anti-money-laundering or counter-terrorist-financing regulations, to ensure that financial institutions obey treaties and laws and do not enter into any illegal transactions or practices, also with regard to cross-border transactions;Footnote 43

(6) General data privacy protections like the GDPRFootnote 44 in the EU, which are highly relevant as financial transactions involve much sensitive personal data.

As we can see, there are no AI-specific regulations for financial services, although many of these regulations will also impact AI-based financial services. In fact, there are very few regulations concerning the underlying technologies in financial services at all; most focus on the use cases or financial activities, on processes, and on the outcomes themselves. Yet recently, some scholars and some regulators have argued that there might be new risks stemming from AI applications and technologies in financial services which require additional regulation. In the following, we will look at some of the alleged risks pointed out by scholars in the field and explore to what extent they might already be covered by the above regulations, or whether there is a need for new regulatory frameworks.

V. New Risk Categories in Robo-Finance Stemming from AI?

Dirk Zetzsche and others, in a recent paper, have identified the following four risk categories, or risk areas, allegedly related to AI applications in the financial industry:

(1) Data risks

(2) Cybersecurity risks

(3) Financial stability risks

(4) Ethical risks.Footnote 45

Although I agree with the authors that all these kinds of risks are related to AI applications in financial services, it appears that these risks already existed before the emergence of robo-finance, given the industry’s advanced stage of digitization and its dependence on and use of data. In fact, some of these risks might even be reduced, or vanish altogether, when AI is put in place. Let us look at the different risk areas one by one.

Firstly, starting with the data risks of AI applications, Zetzsche and others bring up the following more specific arguments. (i) Because data quality might be poor, deficiencies can arise in AI applications. As a matter of fact, data quality has often been poor in many parts of the financial industry; outages at the data centers of exchanges or market data providers, for instance, have led to misstated security prices, which can negatively affect investors’ decisions and the markets overall. AI could actually be used to deal with such data issues by detecting and even resolving them.Footnote 46 (ii) Besides, they argue that data used for AI analyses might suffer from biases, for instance relating to what they call ‘oversight’ in a financial organization. Again, biases influenced decision making in the financial industry even before the emergence of AI applications, maybe not in the form of what they call data biases, but as biases residing more generally in human decision making, for instance in credit decisions or consumer lending.Footnote 47 AI applications might free us from certain biases by providing a more neutral stance, if programmed accordingly, or at least be made sensitive to such biases. (iii) Finally, it is claimed that AI interdependency can lead to what they call ‘herding’, for instance all systems selling securities triggered by certain market events, which can lead to what has been referred to as ‘flash crashes’.Footnote 48 Again, ‘herding’ behavior has existed in the financial markets for a long time, and whether the emergence of AI in electronic trading systems has been the cause of so-called flash crashes seems rather questionable. Simple rule-based algorithms, which industry experts today would rather not classify as AI systems, can give rise to such behavior, in contrast to more sophisticated systems trained on historical data relating to such events.
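The point that algorithms can detect such data issues can be illustrated with even a very simple statistical rule. The sketch below (all feed values and the z-score threshold are hypothetical) flags a misstated price in a toy market data feed:

```python
from statistics import mean, stdev

def flag_misstated(prices, window=10, threshold=5.0):
    """Return indices of prices that deviate implausibly from their recent window.

    A price is flagged when its z-score relative to the preceding `window`
    observations exceeds `threshold` (a crude outage/fat-finger detector).
    """
    flagged = []
    for i in range(window, len(prices)):
        recent = prices[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(prices[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Toy feed: index 11 simulates a misstated tick from a data-center outage.
feed = [100.0, 100.1, 99.9, 100.2, 100.0, 100.1, 99.8, 100.0, 100.1, 100.2,
        100.1, 1000.0, 100.0]
bad = flag_misstated(feed)   # → [11]
```

A production system would of course be far more elaborate (and possibly learned rather than hand-set), but the sketch shows why detection is the easy direction: a misstated tick is usually a statistical outlier relative to its own recent history.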

Secondly, let us look at cybersecurity risks. Obviously, these also existed before the arrival of AI, with most attacks initiated and conducted by human individuals directly or by simple processes, methods, or algorithms. Examples are emails carrying malware that, after installing itself on someone’s computer, can silently send all sorts of confidential data from the computer or computer network to the attacker; phishing attacks via links to websites, for instance of online banks, that mimic the log-in pages one is familiar with; or, finally, the simple reuse of a user’s credentials which attackers have somehow got hold of, for instance by one of the measures already mentioned or by spying on people, in combination with our carelessness in setting passwords. That ‘algorithms can be manipulated in an effort to transfer wealth’ has nothing to do with the presence of AI systems, because this could be done before such systems were in place, and it currently happens every day in many different ways within traditional information system environments.Footnote 49 It rather seems plausible that AI might provide some help in identifying and preventing cybersecurity attacks.Footnote 50 Many services are offered today in this regard, and this seems to be one of the areas where the financial industry could benefit from employing AI-based solutions and thereby reduce potentially harmful cybersecurity risks.

Thirdly, when Zetzsche and others talk about financial stability risks, it is fairly unclear what they have in mind, since they mention almost all areas of AI applications in financial services, as laid out above: from consumer-facing and supporting applications, to trading and portfolio management systems, to general regulatory and compliance systems. Overall, their main concern here seems to be the emergence of ‘additional third-party dependencies’ on AI technology providers, up to the ‘level of oligopoly or monopoly’. Since many of these third-party technology providers are unregulated today, as they point out, and it might even be hard to regulate them, as ‘AI-related expertise beyond those developing the AI is limited’, there appears to be a major risk. As they say, ‘these third-party dependencies […] could have systemic effects’.Footnote 51

What can be said against this last part of their argument is that many of the actors at the forefront of using AI in financial services today develop their applications in-house, like the dominant hedge funds or the algo trading shops set up by IT specialists.Footnote 52 AI-based technology has become such a core asset and competitive factor in financial services that many financial institutions increasingly resemble IT companies: they keep that knowledge in-house, or retain high IT expertise inside the organization to manage their IT service providers and outsourcing partners, and are thus far from being entirely dependent on, or in the hands of, monopolistically or oligopolistically structured IT providers.Footnote 53 Hence, the worry about systemic effects stemming from such dependencies seems overstated, at least in certain critical banking areas. Moreover, in many instances there are quite a few technology providers offering similar services to the financial industry, for instance market data providers, which have increasingly started to use AI technologies to organize and manage the quality of their market data feeds.Footnote 54 Financial institutions, at least in critical areas like trading, often make use of different providers at the same time, which also helps them reduce their third-party dependencies. Furthermore, with AI and machine learning programs popping up at higher education institutions around the world, new graduates are increasingly being educated and trained in these key areas. Thus, knowledge is building up quickly and will become more widely available, reducing the fear that this is a kind of ‘mystery science’ only a few people have access to and can take advantage of.

Finally, let us focus on what Zetzsche and others refer to as new ‘ethical risks’ stemming from AI applications in financial services. The starting point of their argument is that algorithms do not feel anything, nor do they have values, which the authors equate with a lack of ethical foundation in AI decision making. For instance, they point out that such ‘unethical’ AI systems might nudge people into purchasing unsuitable financial products, which might be further facilitated by the fact that humans easily develop a high level of trust in AI-based systems, because human–machine communication can nowadays be quite sophisticated. What this can, and ultimately will, lead to is reputational risk for the financial institution employing such systems, for example when people are driven to make the wrong financial decisions and this becomes public, is reported in the media, or is brought before the courts.

There are quite a few problems with this line of reasoning, as there are many financial institutions that do not have much direct interaction with human consumers, like mutual funds, hedge funds, credit card companies, and so on. Besides, it is also conceivable that AI systems can have an ethical foundation; think, for instance, of utilitarian approaches, which are less focused on being able to feel anything or on having values. Such aligned AI systems might still be able to calculate the best outcome for society as a whole. But the main counter-argument seems to be that the financial industry has not been a role model for ethical behavior to start with. Quite to the contrary, over many decades financial institutions have been prone to all kinds of ethical misconduct. Just to give a few examples: (i) consumers have been pushed by financial advisers, humans with feelings and values employed by financial institutions, to buy financial products which were often not suitable or beneficial for them, yet which allowed the advisers to boost their commission payments and the financial institutions to boost their profits;Footnote 55 (ii) insider trading has happened frequently;Footnote 56 and (iii) market manipulation has occurred, for instance in the case of the Libor scandal, and in many other examples in different areas of the financial industry.Footnote 57 Thus, it is far from clear why AI-based systems and processes would make the industry less ethical than it has been in the past. In fact, the case could be made that AI-based systems and processes might allow society to create and control financial institutions so that they are driven less by greed and more by higher motives: to bring benefits to consumers and to install fairness within the systems.

But this line of reasoning might sound overly naïve, given how many actors in the financial industry have successfully used technology over the last decades to their own advantage and to the disadvantage of other actors. One example has been the area of high frequency trading and the so-called dark pools, where ‘fast moving robot trading machines were front-running long term investors on exchanges’.Footnote 58 Dark pools are markets established by the financial actors themselves for trading securities outside of the exchanges, usually virtually unregulated. The benefits for the market actors were faster processing of orders with lower or even no fees from the exchanges. But the real benefits for the high frequency trading firms involved were obviously financial: with their algorithms and high frequency trading infrastructures, they were able to read the direction markets were going, buy securities before the real investors could, and then sell the securities back to them at a higher price only milliseconds after the investors’ initial orders had been placed. This practice allowed them to make huge profits, taking money away from long-term investors like pension funds.

This is a very sophisticated version of an old practice, mostly considered illegal, of so-called front-running: in other words, someone trading a stock or any other financial asset based on insider knowledge of a future transaction that is about to affect its price. One important point is that this practice had been around before the emergence of AI and high frequency data processing infrastructures. But it needs to be acknowledged that the new technologies have allowed for a more sophisticated and harder-to-control form of front-running. Yet the problem here is not that the employed AI algorithms are unethical. The problem is that the actors have used the technologies in an unethical way, which obviously needs to be prevented for the benefit of the wider investor community and of society as a whole. Again, this is not a new risk, but it shows that AI and technology can be accelerators of existing risks inherent in the financial industry.

In sum, then, the arguments by Zetzsche and others that there are many new risks stemming from AI-based applications in financial services are not fully convincing. On the contrary, it is feasible that the employment of AI applications in the financial industry might provide a route to managing the industry’s existing and inherent risks in a better way, or even to reducing or eliminating some of these risks.Footnote 59 But clearly, there are also cases, like front-running based on algorithmic high frequency trading, where it seems obvious that through the employment of AI, existing inherent risks in the financial industry have increased and can cause additional damage. Therefore, it is also important to look at ways in which such damage resulting from the use of the new technologies can be avoided. In this regard, one prominent regulatory approach, the so-called responsibility frameworks, will be discussed in the following section.

VI. Responsibility Frameworks as a Solution for Managing AI Risks in Financial Services?

In regulating the financial industry, many regulators have moved to so-called responsibility frameworks in recent years, such as the EU’s EBA/ESMA guidelines or the FCA’s regime in the UK.Footnote 60 These measures focus on personal managerial responsibility, for example the personal responsibility of directors, senior management, and individual line managers. Initially, such frameworks were meant to mitigate the risks of financial services in general, but recently authors have argued that they can also be applied to emerging AI-based processes in the financial industry.Footnote 61

The responsibility-driven regulations of the EU, published by EBA and ESMA, focus mainly on the management bodies of financial institutions, in particular on their role in conducting their overall operational duties, with a particular focus on risk management conduct. They are meant to ensure that a sound risk culture is implemented in the respective organizations, consistent with the individual risk profile and the overall business model of the institution. The UK’s senior management regulatory framework for financial institutions evolved from the overall EU framework, but it has strengthened the establishment of clear conduct rules for senior managers. These rules specify in more detail the steps necessary to ensure that the business of a financial institution is controlled effectively and complies with existing regulatory frameworks. Requirements are also imposed on the delegation of responsibilities and on the disclosure of relevant information to regulators. Other states such as the US or Singapore have issued similar guidelines.Footnote 62

Although these responsibility frameworks have been very general in nature, meant to capture all aspects of risk management in financial institutions, Zetzsche and others have argued that they provide the right framework to address and manage any new risks stemming from AI applications in financial services. They write: ‘personal responsibility frameworks provide the basis for an appropriate system to address issues arising from AI in financial services’.Footnote 63 They suggest the following three distinct instruments for regulating activities related to the development and use of AI applications in financial services:

  1. AI Review Committees: the installation of AI review committees is meant to address what they call the information asymmetry as to the function and limits of an AI system, namely the problem that third-party vendors or in-house AI developers understand the algorithms far better than the financial institutions that acquire and use them, and the supervisors of those institutions. These committees are meant to augment decision making and should not ‘detract from the ultimate responsibility vested in management […] regarding AI governance’.Footnote 64

  2. AI Due Diligence: mandatory AI due diligence should be put in place prior to any AI employment; it should include what they call ‘a full stock of all the characteristics of the AI […] in particular the mapping of the data set used by AI’, including an analysis of data gaps and data quality.Footnote 65

  3. AI Explainability: the explainability requirement is proposed as a minimum standard ‘demanding that the function, limits and risks of AI can be explained to someone at a level of granularity that enables remanufacturing of the code’. And this someone ‘should be a member of the executive board responsible for the AI’.Footnote 66

Before we review this proposal, it is fair to mention that the authors themselves note a few limitations, of which I want to focus on the main one, namely the inability of their responsibility framework to control what they call ‘autonomous AI’. By this they mean cases in which developers lose control over self-learning AI, no longer understanding what the algorithms are doing.Footnote 67 What they propose is the concept of always being able to switch off the AI (as a kind of human oversight) while the provided services would still function. This seems, prima facie, a reasonable request, looking for instance at the example of a self-learning AI application in payment fraud detection based on the analysis of large transaction data sets. The system might modify its outlier detection algorithm in a way that forces the financial institution to switch it off, perhaps because fraudsters have fed the system with data to facilitate fraudulent transactions. In this setting, switching the AI system off would make sense, but the delivery of the basic payment services should not be impacted, for instance in the case of a credit card company. Yet there will be applications where such a switch-off mechanism is more difficult to realize without causing further damage, as in the case of trading financial securities, where orders or transactions ‘might get lost’ by switching off applications.Footnote 68
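The switch-off idea for the payment fraud example can be sketched in a few lines. This is a minimal, purely illustrative model (the class name, the three-sigma rule, and the static fallback threshold are all assumptions, not anything the cited authors specify): an outlier-based screen handles fraud detection while enabled, and a static rule takes over when the AI component is disabled, so the basic payment service keeps running.

```python
# Illustrative sketch of a kill switch for an outlier-based fraud screen.
# The "AI" here is a stand-in: a simple three-sigma outlier rule learned
# from historical transaction amounts.

from statistics import mean, stdev

class FraudScreen:
    def __init__(self, history: list, static_limit: float = 10_000.0):
        self.history = history            # past amounts the model "learned" from
        self.static_limit = static_limit  # fallback rule when the AI is switched off
        self.ai_enabled = True

    def flag(self, amount: float) -> bool:
        if self.ai_enabled:
            # "AI" component: flag amounts more than 3 standard deviations from the mean
            mu, sigma = mean(self.history), stdev(self.history)
            return abs(amount - mu) > 3 * sigma
        # Kill switch engaged: a static threshold keeps payments flowing
        return amount > self.static_limit

screen = FraudScreen(history=[100.0, 120.0, 90.0, 110.0, 105.0])
print(screen.flag(5_000.0))   # flagged as an outlier under the learned model
screen.ai_enabled = False     # switch off the AI, e.g. after suspected data poisoning
print(screen.flag(5_000.0))   # the static rule lets this payment through
```

The design point is that the service interface (`flag`) survives the switch-off; only the decision logic behind it degrades to a cruder rule.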

Overall, I agree with the authors that their approach is important and should be part of any software-based technology development in financial services. In fact, many elements have been in place in the industry for years, for instance regular due diligence audits of the financial services’ technology providers.Footnote 69 The financial industry is thus already prepared and experienced in conducting due diligence audits on a regular basis, and it does so frequently before the release of new technology and software systems or installations, irrespective of whether these systems include AI technology.

On the other hand, the authors propose some specific requirements for their AI due diligence and for their explainability requirement. As will be argued, these requirements are not entirely clear and will potentially be hard to fulfill given the nature of many AI applications.

Firstly, they argue that any AI due diligence should comprise taking ‘full stock of all the characteristics of the AI, in particular the mapping of the data set used by AI’, including an analysis of data gaps and data quality. It is not clear what is meant by ‘a full stock of all the characteristics of the AI’. Besides the different functionalities of the AI system, they also seem to focus on the underlying data set used by it. The problem here is that data sets might be potentially infinite and/or not fully determined at the outset. Once employed, a self-learning AI might discover new data (sets), and, as is often the case with big data, there can be quality issues and gaps. What does this mean for the AI system: should it not be launched under such circumstances? Or is it enough to be aware of such limitations?
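One concrete, tractable slice of the proposed data-set mapping is an automated gap-and-quality report over the records a system trains on. The sketch below is a minimal illustration under assumed field names (`amount`, `country`, `merchant` are hypothetical); it computes per-field missing-value rates, the kind of metric a due diligence file could document even when the full data set cannot be enumerated.

```python
# Illustrative due-diligence step: map a data set's gaps before an AI
# system trains on it, by measuring per-field missing-value rates.

def data_quality_report(records: list) -> dict:
    """Per-field missing-value rate across a list of record dicts."""
    fields = {k for r in records for k in r}
    n = len(records)
    return {
        f: sum(1 for r in records if r.get(f) is None) / n
        for f in sorted(fields)
    }

transactions = [
    {"amount": 120.0, "country": "DE", "merchant": "A"},
    {"amount": None,  "country": "DE", "merchant": "B"},   # gap: missing amount
    {"amount": 75.5,  "country": None, "merchant": "C"},   # gap: missing country
]
report = data_quality_report(transactions)
print(report)  # one missing-rate figure per field
```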

Secondly, their explainability requirement seems even harder to meet in the case of AI applications. Even for existing non-AI applications, it is questionable whether this requirement can be satisfied, given the complexity of many software solutions in the financial sector, with millions of lines of code and often old legacy systems.Footnote 70 For AI-based applications, the situation is even more complex because learning AI systems are less static and more dynamic in nature, which could mean that a system might even rewrite its code in the course of its operations. Making explainability a minimum standard in the sense defined above could be the end of many applications in financial services, not only AI-driven ones. What might be argued for instead is a reduced version of this principle: not requiring the remanufacturing of the actual code but, at a minimum, the possibility for the functions, limits, and risks of the AI systems to be understood on a higher functional level.
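What such a ‘higher functional level’ explanation could look like can be sketched with a simple ablation analysis. The scoring model and feature names below are hypothetical stand-ins: rather than reconstructing code, one reports how strongly each input drives the model’s output by neutralising it (replacing it with its average) and measuring the change in the score.

```python
# Sketch of functional-level explainability: ablation importance of each
# input feature of an opaque scoring model (model and features hypothetical).

def score(x: dict) -> float:
    # stand-in for an opaque credit-scoring model
    return 0.6 * x["income"] - 0.3 * x["debt"] + 0.1 * x["tenure"]

def ablation_importance(rows: list, feature: str) -> float:
    """Mean absolute change in the score when one feature is neutralised
    (replaced by its average) -- a coarse, model-agnostic explanation."""
    avg = sum(r[feature] for r in rows) / len(rows)
    deltas = []
    for r in rows:
        neutral = {**r, feature: avg}
        deltas.append(abs(score(r) - score(neutral)))
    return sum(deltas) / len(deltas)

rows = [
    {"income": 4.0, "debt": 1.0, "tenure": 10.0},
    {"income": 2.0, "debt": 3.0, "tenure": 2.0},
]
for f in ("income", "debt", "tenure"):
    print(f, round(ablation_importance(rows, f), 3))
```

A board member need not read the code to understand such a report: it states which inputs matter and by how much, which is the level of granularity the reduced principle would demand.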

Thus, to conclude this section, the so-called responsibility frameworks can provide a basis for limiting the risks of AI applications in financial services, if new risks really do emerge. In a way, elements of them were present in the financial industry before the rise of AI applications, such as regular due diligence audits of technical systems and key software applications. But they also have their limitations, in particular when certain requirements are taken in a very strict sense, as was the case with the explainability requirement discussed above. Yet reducing such requirements to a lower level raises the question of the extent to which the risks stemming from new potential high-risk AI applications in financial services can be contained. A slightly different approach, building on the approach recently put forward by the EU, will be presented in the final section.

VII. Standardization, High Risk AI Applications, and the New EU AI Regulation

As noted before in this chapter, AI, being a general-purpose technology, will impact not just one industry and will have many different use cases across industries. There are already many AI applications in the financial industry today, as pointed out earlier, and more are constantly being added by financial institutions, their IT service providers, and innovative fintech companies. Many of the solutions might be simple or fairly basic, like the use of face recognition as an identification method giving users access to their financial accounts or applications. Others will be more complex, like so-called robo-advisors, with AI systems engaging with users in natural language, trying to understand their financial needs and give them suitable financial advice.

What will be important going forward is that for some kinds of key AI applications – such as identification processes or human–machine interaction – there can and will be standards across industries with which companies need to comply, just as there are standards and norms for the use of electricity irrespective of its specific domain of use.Footnote 71 Looking at the example of AI systems intended to interact with humans, providers of such systems might be required to make transparent to users that they are not communicating with a human but with a machine. Such notification obligations can then become part of the standard for such human–machine interaction enabling AI systems.Footnote 72
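In software terms, such a notification obligation is a thin, standardisable layer. The sketch below is a hypothetical illustration (the class, disclosure text, and echo logic are all assumptions): a conversational system that discloses its machine nature before its first response, independently of whatever dialogue model sits behind it.

```python
# Illustrative sketch of a machine-disclosure obligation: the first reply
# of a conversational system always carries an AI notice.

DISCLOSURE = "Note: you are chatting with an automated AI assistant, not a human."

class DisclosingChatbot:
    def __init__(self):
        self.disclosed = False

    def reply(self, message: str) -> str:
        answer = f"Echo: {message}"  # placeholder for the real dialogue model
        if not self.disclosed:
            self.disclosed = True
            return f"{DISCLOSURE}\n{answer}"
        return answer

bot = DisclosingChatbot()
print(bot.reply("What is my account balance?"))  # first reply carries the disclosure
print(bot.reply("Thanks!"))                      # later replies do not repeat it
```

Because the disclosure wrapper is independent of the underlying model, it is exactly the kind of requirement that can be standardised across industries.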

Besides such general standard AI applications used across industries, there might also be applications very specific to certain industries, like the financial industry, which need to be dealt with outside the model of standardization. In particular, when these specific applications give rise to higher or new risks, additional specific regulations might need to be put in place. For instance, there have been attempts to contain the risks of algorithmic trading applications in the financial industry, which can cause (and probably already have caused to some extent) significant financial damage by leading markets to crash, thereby diminishing or wiping out investors’ money in a matter of seconds.Footnote 73 Such flash crashes have been at the center of debate over the last years, and regulators such as the EU have tried to contain this risk by putting additional obligations in place through the MiFID II framework, as discussed earlier.Footnote 74
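One family of such obligations can be pictured as a circuit breaker: trading is halted when prices fall too far, too fast. The sketch below is illustrative only; the 5% threshold is an assumption for demonstration, not a MiFID II figure, and real venue mechanisms are far more elaborate.

```python
# Illustrative circuit breaker: halt trading when the price falls more than
# `max_drop` (fractional) from the window's opening price. Threshold is
# hypothetical, not taken from any actual trading venue's rulebook.

def circuit_breaker(prices: list, max_drop: float = 0.05):
    """Return the index of the tick at which trading would be halted,
    or None if the drop never exceeds the threshold."""
    opening = prices[0]
    for i, p in enumerate(prices):
        if (opening - p) / opening > max_drop:
            return i
    return None

# A simulated flash crash: the halt fires early in the decline,
# before the worst of the drop runs its course.
ticks = [100.0, 99.5, 98.0, 94.0, 80.0, 60.0]
print(circuit_breaker(ticks))
```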

In its recent Draft EU AIA, the EU has also distinguished between ‘high risk’ and non-high-risk (standard) AI applications.Footnote 75 The proposed regulation starts from the assumption that AI applications are ultimately just tools with the potential to increase human well-being. The development of AI technology should therefore not be hindered by unnecessary constraints; rather, the rules should be balanced and proportionate. The regulation is centered on a ‘risk based regulatory approach, whereby legal intervention should be tailored to those concrete situations where there is a justified cause of concern’.Footnote 76 A key distinction is drawn between so-called high risk AI systems, to which special requirements and obligations apply, and other AI systems, which face much more limited requirements and obligations. The classification of AI systems as high-risk is based mainly on their intended purpose and their harmful impact on health, safety, and human rights. High risk systems are identified in a two-step process: first, whether they can cause certain harms to protected goods or rights, and second, by the severity of the harm caused and the probability of its occurrence.

Given this approach, it seems obvious that there cannot be one final list of high-risk AI applications, because the technology is still emerging and new applications are launched every day. The EU acknowledges this: its Draft EU AIA lists only a limited number of high-risk AI applications (Annex III) and allows the EU Commission to amend this list over time based on criteria spelled out in Article 7.

Interestingly, many of the high-risk applications listed in the Draft EU AIA are not specific to one industry but are general AI applications that can appear in many industries. Examples are applications that embody what are called ‘manipulative AI practices’ and a second group involving ‘indiscriminate surveillance’ practices. But there are also many very specific high risk AI applications listed in the draft. When it comes to high-risk AI applications in financial services, the draft regulation lists, prima facie, only one class, namely AI systems that evaluate the creditworthiness of persons (Annex III No 5 lit. b).Footnote 77 This class of applications is included in the high-risk list because of (i) the possible discrimination of persons of certain ethnic or racial origin, based on the potential perpetuation of historical patterns by the AI algorithms, and (ii) the potential severity of such acts of discrimination, given the way discriminatory credit decisions can significantly affect the course of people’s lives.Footnote 78
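The perpetuation-of-historical-patterns concern can be made concrete with a simple audit metric. The data and group labels below are entirely hypothetical; the sketch only shows how one would measure whether a credit model trained on past decisions reproduces a gap in approval rates between groups.

```python
# Illustrative audit of a credit model's outcomes: approval rates per group.
# All decisions and group labels here are invented for demonstration.

def approval_rates(decisions: list) -> dict:
    """Share of approved applications per group, from (group, approved) pairs."""
    totals = {}
    for group, approved in decisions:
        totals.setdefault(group, [0, 0])
        totals[group][0] += int(approved)
        totals[group][1] += 1
    return {g: a / n for g, (a, n) in totals.items()}

# Historical pattern reproduced by the model: group B approved far less often.
history = [("A", True), ("A", True), ("A", False),
           ("B", False), ("B", False), ("B", True)]
rates = approval_rates(history)
print(rates)
disparity = rates["A"] - rates["B"]
print(f"approval-rate gap: {disparity:.2f}")
```

A persistent gap of this kind is exactly the signal that would trigger the discrimination concern behind the Annex III listing.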

The second kind of AI application listed in the Draft EU AIA that can be associated with the financial services industry is the one discussed above, namely AI systems intended to interact with natural persons or to generate content consumed by such persons. Such systems do not necessarily classify as high-risk systems – they might, for instance, just help someone enter information or explain a product – but they pose the specific risk of impersonation and deception. They are therefore subject to specific transparency obligations under the Draft EU AIA, meaning that natural persons have to be informed that they are interacting with an AI system (Article 52).

Overall, the risk-based regulatory approach underpinning the Draft EU AIA makes much more sense than any generalized approach of regulating AI applications as a whole, as embodied in the responsibility frameworks discussed above. For a general-purpose technology, there will be so many kinds of applications that no single standard set of rules can be applied across the board. A rigorous case-by-case approach is required, which also allows for amendments and revisions, as embodied in the outline of the EU regulation.

VIII. Conclusion

This chapter has shown, first, that it is questionable whether many new additional risks stem from AI applications in financial services today. The risks that have emerged recently, like data risks, cybersecurity risks, financial stability risks, and ethical risks, have been inherent in the financial industry, as a highly digitized and complex global industry, for decades. The author has taken the more positive view that using AI will not necessarily increase these risks; on the contrary, AI might help to mitigate and reduce them. Second, the responsibility frameworks developed over the last few years, which are meant to deal with and limit the risks of AI in the financial industry overall, do not provide a suitable framework beyond what is already in place to manage the risks of more standard IT and software systems and applications in the financial industry. Furthermore, overseeing all AI applications in financial services will quickly become as complex as overseeing all types of applications in the area of electricity, to mention another general-purpose technology. What has been argued in this chapter is that for some kinds of key applications – like identification processes or human–machine interaction – there should be standards defined across industries with which companies need to comply. For other very specific, potentially new high-risk financial AI applications, should they emerge, there might be a need for additional very specific regulation, as in the case of certain algorithmic high frequency trading applications. But this will be less a regulation of the technology and more a regulation of the practices and intended uses of the technology, which has also been the core thinking underlying the recent Draft EU AIA. In fact, this new EU regulation, like the GDPR a few years ago with regard to data privacy protection, in many ways points in the right direction of how to deal with AI and the potential risks arising from it.

Footnotes

19 From Corporate Governance to Algorithm Governance Artificial Intelligence as a Challenge for Corporations and Their Executives

1 For details, see Bundesministerium für Bildung und Forschung, ‘Zukunftsbild Industrie 4.0’ (BMBF, 30 December 2020) www.bmbf.de/bmbf/de/forschung/digitale-wirtschaft-und-gesellschaft/industrie-4-0/industrie-4-0.html; P Bräutigam and T Klindt, ‘Industrie 4.0, das Internet der Dinge und das Recht’ (2015) 16 NJW 1137 (hereafter Bräutigam and Klindt, ‘Industrie 4.0’); T Kaufmann, Geschäftsmodelle in Industrie 4.0 und dem Internet der Dinge (2015); Schwab, Die Vierte Industrielle Revolution (2016); more reserved HJ Schlinkert, ‘Industrie 4.0 – wie das Recht Schritt hält’ (2017) 8 ZRP 222 et seq.

2 Cf. A Börding and others, ‘Neue Herausforderungen der Digitalisierung für das deutsche Zivilrecht’ (2017) 2 CR 134; J Bormann, ‘Die digitalisierte GmbH’ (2017) 46 ZGR 621, 622; B Paal, ‘Die digitalisierte GmbH’ (2017) 46 ZGR 590, 592, 599 et seq.

3 For digitalization of private law, see, e.g., K Langenbucher, ‘Digitales Finanzwesen’ (2018) 218 AcP 385 et seq.; G Teubner, ‘Digitale Rechtssubjekte?’ (2018) 218 AcP 155 et seq. (hereafter Teubner, ‘Rechtssubjekte’); cf. further M Fries, ‘PayPal Law und Legal Tech – Was macht die Digitalisierung mit dem Privatrecht?’ (2016) 39 NJW 2860 et seq (hereafter Fries, ‘Digitalisierung Privatrecht’).

4 For details, see, e.g., H Eidenmüller, ‘The Rise of Robots and the Law of Humans’ (2017) 4 ZEuP 765 et seq.; G Spindler, ‘Zukunft der Digitalisierung – Datenwirtschaft in der Unternehmenspraxis’ (2018) 1–2 DB 41, 49 et seq. (hereafter Spindler, ‘Zukunft’).

5 For details, see A Hildner and M Danzmann, ‘Blockchain-Anwendungen für die Unternehmensfinanzierung’ (2017) CF 385 et seq.; M Hüther and M Danzmann, ‘Der Einfluss des Internet of Things und der Industrie 4.0 auf Kreditfinanzierungen’ (2017) 15–16 BB 834 et seq.; R Nyffenegger and F Schär‚ ‘Token Sales: Eine Analyse Des Blockchain-Basierten Unternehmensfinanzierungsinstruments’ (2018) CF 121 et seq.; B Westermann, ‘Daten als Kreditsicherheiten – eine Analyse des Datenwirtschaftsrechts de lege lata und de lege ferenda aus Sicht des Kreditsicherungsrechts’ (2018) 26 WM 1205 et seq.

6 Cf. BGHZ 219, 243 (Bundesgerichtshof III ZR 183/17); A Kutscher, Der digitale Nachlass (2015); J Lieder and D Berneith, ‘Digitaler Nachlass: Das Facebook-Urteil des BGH’ (2018) 10 FamRZ 1486; C Budzikiewicz, ‘Digitaler Nachlass’ (2018) 218 AcP 558 et seq.; H Ludgya, ‘Digitales Update für das Erbrecht im BGB?’ (2018) 1 ZEV 1 et seq.; C Sorge, ‘Digitaler Nachlass als Knäuel von Rechtsverhältnissen’ (2018) 6 MMR 372 et seq.; see also Deutscher Bundestag, ‘Kleine Anfrage der Abgeordneten Roman Müller-Böhm et al. BT-Drucks. 19/3954’ (2018); as to this J Lieder and D Berneith, ‘Digitaler Nachlass – Sollte der Gesetzgeber tätig werden?’ (2020) 3 ZRP 87 et seq.

7 For details, see Fries, ‘Digitalisierung Privatrecht’ (Footnote n 3) 2681 et seq.; M GruppLegal Tech – Impulse für Streitbeilegung und Rechtsdienstleistung’ (2014) 8+9 AnwBl. 660 et seq.; J Wagner, ‘Legal Tech und Legal Robots in Unternehmen und den sie beratenden Kanzleien’ (2017) 17 BB 898, 900 (hereafter Wagner, ‘Legal Tech’).

8 V Boehme-Neßler, ‘Die Macht der Algorithmen und die Ohnmacht des Rechts’ (2017) 42 NJW 3031.

9 For definitions, see, e.g., M Herberger, ‘“Künstliche Intelligenz” und Recht – Ein Orientierungsversuch’ (2018) 39 NJW 2825 et seq.; C Schael, ‘Künstliche Intelligenz in der modernen Gesellschaft: Bedeutung der “Künstlichen Intelligenz” für die Gesellschaft’ (2018) 42 DuD 547 et seq.; J Armour and H Eidenmüller, ‘Selbstfahrende Kapitalgesellschaften?’ (2019) 2–3 ZHR 169, 172 et seq. (hereafter Armour and Eidenmüller, ‘Kapitalgesellschaften’); F Graf von Westphalen, ‘Definition der Künstlichen Intelligenz in der Kommissionsmitteilung COM (2020) 64 final – Auswirkungen auf das Vertragsrecht’ (2020) 35 BB 1859 et seq. (hereafter Graf von Westphalen, ‘Definition’); P Hacker, ‘Europäische und nationale Regulierung von Künstlicher Intelligenz’ (2020) 30 NJW 2142 et seq. (hereafter Hacker, ‘Regulierung’).

10 Cf. U Noack, ‘Organisationspflichten und -strukturen kraft Digitalisierung’ (2019) 183 ZHR 105, 107 (hereafter Noack, ‘Organisationspflichten’); U Noack, ‘Der digitale Aufsichtsrat’ in B Grunewald, J Koch, and J Tielmann (eds), Festschrift für Eberhard Vetter (2019) 497, 500 (hereafter Noack, ‘Aufsichtsrat’); for a different use of this wording, see L Strohn, ‘Die Rolle des Aufsichtsrats beim Einsatz von Künstlicher Intelligenz’ (2018) 182 ZHR 371 et seq. (hereafter Strohn, ‘Rolle’).

11 See Deutscher Bundestag, ‘Antwort der Bundesregierung, Erarbeitung einer KI-Strategie der Bundesregierung, BT-Drucks. 19/5678’ (2018) 2.

12 Cf. UH Schneider and C Strenger, Die “Corporate Governance-Grundsätze” der Grundsatzkommission Corporate Governance (German Panel on Corporate Governance) (2000) 106, 107; R Marsch-Barner, ‘§ 2 Corporate Governance marginal number 2.1’ in R Marsch-Barner and F Schäfer (eds), Handbuch börsennotierte AG (4th ed. 2018); J Koch, ‘§ 76 margin number 37’ in U Hüffer and J Koch (eds) Aktiengesetz (14th ed. 2020); HJ Böcking and L Bundle, ‘§ 2 marginal number 6’ in KJ Hopt, JH Binder, and HJ Böcking (eds), Handbuch Corporate Governance von Banken und Versicherungen (2nd ed. 2020); A v Werder, ‘DCGK Präambel marginal number 10’ in T Kremer and others (eds), Deutscher Corporate Governance Kodex (8th ed. 2021).

13 For a comparative overview, see J Lieder, ‘Der Aufsichtsrat im Wandel der Zeit’ (2006) 636 et seq. (hereafter Lieder ‘Aufsichtsrat’).

14 R Wile, ‘A Venture Capital Firm Just Named an Algorithm to Its Board of Directors’ (Business Insider, 13 May 2014) www.businessinsider.com/vital-named-to-board-2014-5?r=US&IR=T.

15 See N Burridge, ‘Artificial Intelligence gets a seat in the boardroom’ (Nikkei Asia, 10 May 2017) https://asia.nikkei.com/Business/Artificial-intelligence-gets-a-seat-in-the-boardroom.

16 Aktiengesetz (AktG) = Stock Corporation Act of 6 September 1965, Federal Law Gazette I, 1089. For the English version that has been used in this paper, see Rittler, German Corporate Law (2016) as well as Norton Rose Fulbright, ‘German Stock Corporation Act (Aktiengesetz)’ (Norton Rose Fulbright, 10 May 2016) www.nortonrosefulbright.com/-/media/files/nrf/nrfweb/imported/german-stock-corporation-act.pdf.

17 Cf. H Fleischer, ‘§ 93 marginal number 129’ in M Henssler (ed), BeckOGK Aktiengesetz (15 January 2020); F Möslein, ‘Digitalisierung im Gesellschaftsrecht: Unternehmensleitung durch Algorithmen und künstliche Intelligenz?’ (2018) 5 ZIP 204, 207 et seq. (hereafter Möslein, ‘Digitalisierung’); Strohn, ‘Rolle’ (Footnote n 10) 371; R Weber, A Kiefner, and S Jobst, ‘Künstliche Intelligenz und Unternehmensführung’ (2018) 29 NZG 1131 (1136) (hereafter Weber, Kiefner, and Jobst, ‘Unternehmensführung’); see further H Fleischer, ‘Algorithmen im Aufsichtsrat’ (2018) 9 Der Aufsichtsrat 121 (hereafter Fleischer, ‘Algorithmen’); A Sattler, ‘Der Einfluss der Digitalisierung auf das Gesellschaftsrecht’ (2018) 39 BB 2243, 2248 (hereafter Sattler, ‘Einfluss’); Wagner, ‘Legal Tech’ (Footnote n 7) 1098.

18 See B Kropff, Begründung zum Regierungsentwurf zum Aktiengesetz 1965 (1965) 135: ‘Der Entwurf gestattet es nicht, juristische Personen zu wählen, weil die Überwachungspflicht die persönliche Tätigkeit einer verantwortlichen Person voraussetzt.’ Cf. further Lieder, ‘Aufsichtsrat’ (Footnote n 13) 367 et seq.

19 M Delvaux, ‘Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103 (INL))’ (European Parliament, 27 January 2015) www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html; see, e.g., MF Lohmann, ‘Ein europäisches Roboterrecht – überfällig oder überflüssig?’ (2017) 6 ZRP 168; R Schaub, ‘Interaktion von Mensch und Maschine’ (2017) 7 JZ 342, 345 et seq.; J-E Schirmer, ‘Rechtsfähige Roboter?’ (2016) 13 JZ 660 et seq.; J Taeger, ‘Die Entwicklung des IT-Rechts im Jahr 2016’ (2016) 52 NJW 3764.

20 Cf. Bräutigam and Klindt, ‘Industrie 4.0’ (Footnote n 1) 1138; Teubner, ‘Rechtssubjekte’ (Footnote n 3) 184; DA Zetzsche, ‘Corporate Technologies – Zur Digitalisierung im Aktienrecht’ (2019) 1-02 AG 1 (10) (hereafter Zetzsche, ‘Technologies’).

21 Cf. G Borges, ‘Haftung für selbstfahrende Autos’ (2016) 4 CR 272, 278; H Kötz and G Wagner, Deliktsrecht (13th ed. 2016) marginal number 72 et seq.; H Zech, ‘Künstliche Intelligenz und Haftungsfragen’ (2019) 2 ZfPW 198, 214.

22 Cf. Armour and Eidenmüller, ‘Kapitalgesellschaften’ (Footnote n 9) 185 et seq.

23 Ibid, 186 et seq.

24 European Commission, ‘White Paper On Artificial Intelligence – A European approach to excellence and trust, COM(2020) 65 final’ (EUR-Lex, 19 February 2020) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2020:65:FIN; see Graf von Westphalen, ‘Definition’ (Footnote n 9) 1859 et seq.; Hacker, ‘Regulierung’ (Footnote n 9), 2142 et seq.

25 Cf. further B Schölkopf, ‘Der Mann, der den Computern das Lernen beibringt’ Frankfurter Allgemeine Zeitung (26 February 2020): “Wir sind extrem weit davon entfernt, dass seine Maschine intelligenter ist als ein Mensch.”; L Enriques and DA Zetzsche, ‘Corporate Technologies and the Tech Nirvana Fallacy’ (2019) ECGI Law Working Paper N° 457/2019, 58 https://ecgi.global/sites/default/files/working_papers/documents/finalenriqueszetzsche.pdf (hereafter Enriques and Zetzsche, ‘Corporate Technologies’): “Only if and when humans relinquish corporate control to machines, may the problem at the core of corporate governance be solved; but by then humans will have more pressing issues to worry about than corporate governance.”

26 Cf. further P Krug, Haftung im Rahmen der Anwendung von künstlicher Intelligenz: Betrachtung unter Berücksichtigung der Besonderheiten des steuerberatenden Berufsstandes (2020) 74, 76; Möslein, ‘Digitalisierung’ (Footnote n 17) 207; Noack, ‘Aufsichtsrat’ (Footnote n 10) 506.

27 Möslein, ‘Digitalisierung’ (Footnote n 17) 206.

28 Cf. M Auer, ‘Der Algorithmus kennt keine Moral’ Frankfurter Allgemeine Zeitung (29 April 2020).

29 On this and the following, see Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1131.

30 Cf. Noack, ‘Organisationspflichten’ (Footnote n 10) 119.

31 Cf. M Grub and S Krispenz, ‘Auswirkungen der Digitalisierung auf M&A Transaktionen’ (2018) 5 BB 235, 238; Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1131.

32 For details, see Noack, ‘Organisationspflichten’ (Footnote n 10) 124 et seq.

33 Cf. Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1131; Noack, ‘Organisationspflichten’ (Footnote n 10) 132 et seq.; Zetzsche, ‘Technologies’ (Footnote n 20) 5.

34 Section 76(1) AktG.

35 Cf. Noack, ‘Organisationspflichten’ (Footnote n 10) 115 et seq.

36 Section 111(4) AktG.

37 Section 84 AktG.

38 Cf. Section 119 AktG.

39 Cf. Strohn, ‘Rolle’ (Footnote n 10) 371, 376.

40 But see Strohn, ‘Rolle’ (Footnote n 10) 376; rightly contested by Noack, ‘Aufsichtsrat’ (Footnote n 10) 502.

41 Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1133; cf. further J Wagner, ‘Legal Tech und Legal Robots in Unternehmen und den sie beratenden Kanzleien – Teil 2: Folgen für die Pflichten von Vorstandsmitgliedern bzw. Geschäftsführern und Aufsichtsräten’ (2018) 20 BB 1097, 1099 (hereafter Wagner, ‘Legal Tech 2’); Möslein, ‘Digitalisierung’ (Footnote n 17) 208 et seq.

42 Cf. further Sattler, ‘Einfluss’ (Footnote n 17) 2248.

43 M Becker and P Pordzik, ‘Digitale Unternehmensführung’ (2020) 3 ZfPW 334, 349 (hereafter Becker and Pordzik, ‘Unternehmensführung’).

44 On this and the following, see Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1132.

45 Becker and Pordzik, ‘Unternehmensführung’ (Footnote n 43) 344; Noack, ‘Organisationspflichten’ (Footnote n 10) 117; O Lücke, ‘Der Einsatz von KI in der und durch die Unternehmensführung’ (2019) 35 BB 1986, 1989, and 1992 (hereafter Lücke, ‘KI’); for a different view, see V Hoch, ‘Anwendung Künstlicher Intelligenz zur Beurteilung von Rechtsfragen im unternehmerischen Bereich’ (2019) 219 AcP 648, 672.

46 Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1132.

47 Möslein, ‘Digitalisierung’ (Footnote n 17) 208 et seq.; Wagner, ‘Legal Tech 2’ (Footnote n 41) 1098 et seq., 1101; Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1132.

48 Cf. Zetzsche, ‘Technologies’ (Footnote n 20) 7.

49 Becker and Pordzik, ‘Digitale Unternehmensführung’ (Footnote n 43) 352; D Linardatos, ‘Künstliche Intelligenz und Verantwortung’ (2019) 11 ZIP 504, 508; Lücke, ‘KI’ (Footnote n 45) 1993.

50 M Dreher, ‘Nicht delegierbare Geschäftsleiterpflichten’ in S Grundmann and others (eds), Festschrift für Klaus J. Hopt zum 70. Geburtstag (2010) 517, 536; H Fleischer, ‘§ 93 marginal number 98 et seq.’ in M Henssler (ed), BeckOGK Aktiengesetz (15 January 2020); HC Grigoleit and L Tomasic, ‘§ 93 marginal number 38’ in HC Grigoleit, Aktiengesetz (2020); Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1132.

51 Cf. M Dreher, ‘Nicht delegierbare Geschäftsleiterpflichten’ in S Grundmann and others (eds) Festschrift für Klaus J Hopt zum 70. Geburtstag (2010) 517, 527; with a specific focus on AI use, see Möslein, ‘Digitalisierung’ (Footnote n 17) 208 et seq.; Wagner, ‘Legal Tech 2’ (Footnote n 41) 1098; Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1132.

52 HJ Mertens and A Cahn, ‘§ 76 marginal number 4’ in W Zöllner and U Noack (eds), Kölner Kommentar zum AktG (3rd ed. 2010); M Weber, ‘§ 76 marginal number 8’ in W Hölters (ed), AktG (3rd ed. 2017); Weber, Kiefner and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1132.

53 M Kort, ‘§ 76 marginal number 37’ in H Hirte, PO Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015); G Spindler, ‘Haftung der Geschäftsführung für IT-Sachverhalte’ (2017) 11 CR 715, 722; G Spindler, ‘Gesellschaftsrecht und Digitalisierung’ (2018) 47 ZGR 17, 40 et seq. (hereafter Spindler, ‘Gesellschaftsrecht’); Spindler, ‘Zukunft’ (Footnote n 4) 44; Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1132.

54 For details, see Armour and Eidenmüller, ‘Kapitalgesellschaften’ (Footnote n 9) 176 et seq.; Krug, ‘Haftung’ (Footnote n 26) 78 et seq.; cf. further Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1132 et seq.

55 For details, see T Hoeren and M Niehoff, ‘KI und Datenschutz – Begründungserfordernisse automatisierter Entscheidungen’ (2018) 1 RW 47 et seq.; cf. further CS Conrad, ‘Kann die Künstliche Intelligenz den Menschen entschlüsseln? – Neue Forderungen zum Datenschutz: Eine datenschutzrechtliche Betrachtung der “Künstlichen Intelligenz”’ (2018) 42 DuD 541 et seq.; M Rost, ‘Künstliche Intelligenz: Normative und operative Anforderungen des Datenschutzes’ (2018) 42 DuD 558.

56 As to new types of discrimination risks, see JA Kroll and others, ‘Accountable Algorithms’ (2017) 165 U Pa L Rev 633, 679 et seq.; B Paal, ‘Vielfaltsicherung im Suchmaschinensektor’ (2015) 2 ZRP 34, 35; H Steege, ‘Algorithmenbasierte Diskriminierung durch Einsatz von Künstlicher Intelligenz: Rechtsvergleichende Überlegungen und relevante Einsatzgebiete’ (2019) 11 MMR 715 et seq.

57 Cf. F König, ‘Haftung für Cyberschäden: Auswirkungen des neuen Europäischen Datenschutzrechts auf die Haftung von Aktiengesellschaften und ihrer Vorstände’ (2017) 8 AG 262, 268 et seq.; Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1135.

58 See infra Section III 3.

59 G Spindler and A Seidel, ‘Die zivilrechtlichen Konsequenzen von Big Data für Wissenszurechnung und Aufklärungspflichten’ (2018) 30 NJW 2153, 2154 (hereafter Spindler and Seidel, ‘Big Data’); G Spindler and A Seidel, ‘Wissenszurechnung und Digitalisierung’ in G Spindler and others (eds), Unternehmen, Kapitalmarkt, Finanzierung. Festschrift für Reinhard Marsch-Barner (2018) 549, 552 et seq.

60 Cf. BGHZ 132, 30 (37) (Bundesgerichtshof V ZR 239/94); P Hemeling, ‘Organisationspflichten des Vorstands zwischen Rechtspflicht und Opportunität’ (2011) 175 ZHR 368, 380.

61 Spindler and Seidel, ‘Big Data’ (Footnote n 59) 2154.

62 Cf. HC Grigoleit, ‘Zivilrechtliche Grundlagen der Wissenszurechnung’ (2017) 181 ZHR 160 et seq.; M Habersack and M Foerster, ‘§ 78 marginal number 39’ in H Hirte, P O Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015); Sattler, ‘Einfluss’ (Footnote n 17) 2248; Spindler, ‘Wissenszurechnung in der GmbH, der AG und im Konzern’(2017) 181 ZHR 311 et seq.

63 See infra Section III 3.

64 AktG, section 93(2).

65 Cf. further Möslein, ‘Digitalisierung’ (Footnote n 17) 210 et seq.

66 Cf. Möslein, ‘Digitalisierung’ (Footnote n 17) 211.

67 Cf. Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1136; in general, see W Hölters, ‘§ 93 marginal number 36’ in W Hölters (ed), AktG (3rd ed. 2017); HJ Mertens and A Cahn, ‘§ 93 marginal number 36’ in W Zöllner and U Noack (eds), Kölner Kommentar zum AktG (3rd ed. 2010); G Spindler, ‘§ 93 marginal number 58’ in W Goette and M Habersack (eds) ‘Münchener Kommentar zum AktG’ (5th ed. 2019).

68 Hacker, ‘Regulierung’ (Footnote n 9) 2143 et seq.

69 Cf. Sattler, ‘Einfluss’ (Footnote n 17) 2248.

70 Cf. M Kaspar, ‘Aufsichtsrat und Digitalisierung’ (2018) BOARD 202, 203.

71 Cf. Noack, ‘Aufsichtsrat’ (Footnote n 10) 502 et seq.

72 For details, see U Schmolke and L Klöhn, ‘Unternehmensreputation (Corporate Reputation)’ (2015) 18 NZG 689 et seq.

73 On this and the following, see Noack, ‘Organisationspflichten’ (Footnote n 10) 112 et seq.; Noack, ‘Aufsichtsrat’ (Footnote n 10) 503 et seq.; cf. further F Möslein, ‘Corporate Digital Responsibility’ in S Grundmann and others (eds), Festschrift für Klaus J Hopt zum 80. Geburtstag (2020) 805 et seq.

74 Möslein, ‘Digitalisierung’ (Footnote n 17) 209.

75 For a comparative view, see H Merkt, ‘Rechtliche Grundlagen der Business Judgment Rule im internationalen Vergleich zwischen Divergenz und Konvergenz’ (2017) 46 ZGR 129 et seq.

76 For details, see J Lieder, ‘Unternehmerische Entscheidungen des Aufsichtsrats’ (2018) 47 ZGR 523, 555 (hereafter Lieder, ‘Entscheidungen’).

77 Cf. further KJ Hopt and M Roth, ‘§ 93 marginal number 102’ in H Hirte, PO Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015); H Fleischer, ‘§ 93 marginal number 90’ in M Henssler (ed), BeckOGK, Aktiengesetz (15 January 2020); M Kock and R Dinkel, ‘Die zivilrechtliche Haftung von Vorständen für unternehmerische Entscheidungen – Die geplante Kodifizierung der Business Judgment Rule im Gesetz zur Unternehmensintegrität und Modernisierung des Anfechtungsrechts’ (2004) 10 NZG 441, 444; Lieder, ‘Entscheidungen’ (Footnote n 76) 557; for a different view, see W Goette, ‘Gesellschaftsrechtliche Grundfragen im Spiegel der Rechtsprechung’ (2008) 37 ZGR 436, 447 et seq.: purely objective approach.

78 Cf. H Fleischer in M Henssler (ed), BeckOGK, Aktiengesetz (15 January 2020) § 93 marginal number 91; J Koch in U Hüffer and J Koch (eds) Aktiengesetz (14th ed. 2020) § 93 marginal number 21; HJ Mertens and A Cahn in W Zöllner and U Noack (eds), Kölner Kommentar zum AktG (3rd ed. 2010) § 93 marginal number 34; Lieder, ‘Entscheidungen’ (Footnote n 76) 557; J Redeke, ‘Zur gerichtlichen Kontrolle der Angemessenheit der Informationsgrundlage im Rahmen der Business Judgement Rule nach § 93 Abs. 1 S. 2 AktG’ (2011) 2 ZIP 59, 60 et seq.

79 Cf. Noack, ‘Organisationspflichten’ (Footnote n 10) 122.

80 Cf. Becker and Pordzik, ‘Unternehmensführung’ (Footnote n 43) 347; Möslein, ‘Digitalisierung’ (Footnote n 17) 209 et seq., 212; Sattler, ‘Einfluss’ (Footnote n 17) 2248; Spindler, ‘Gesellschaftsrecht’ (Footnote n 53) 43; Spindler, ‘Zukunft’ (Footnote n 4) 45; Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1134.

81 Wagner, ‘Legal Tech 2’ (Footnote n 41) 1100; Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1132.

82 Cf. Deutscher Bundestag, ‘Begründung zum Regierungsentwurf, BT-Drucks. 15/5092’ (2005) 11; T Bürgers, ‘§ 93 marginal number 15’ in T Bürgers and T Körber (eds), AktG (4th ed. 2017); B Dauner-Lieb, ‘§ 93 AktG marginal number 23’ in M Henssler and L Strohn (eds), Gesellschaftsrecht (5th ed. 2021); KJ Hopt and M Roth, ‘§ 93 marginal number 101’ in H Hirte, PO Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015); W Hölters, ‘§ 93 marginal number 39’ in W Hölters (ed), AktG (3rd ed. 2017); HJ Mertens and A Cahn, ‘§ 93 marginal number 23’ in W Zöllner and U Noack (eds), Kölner Kommentar zum AktG (3rd ed. 2010); Lieder, ‘Entscheidungen’ (Footnote n 76) 577.

83 Similarly H Fleischer, ‘§ 93 marginal number 92’ in M Henssler (ed), BeckOGK, Aktiengesetz (15 January 2020); KJ Hopt and M Roth, ‘§ 93 marginal number 101’ in H Hirte, PO Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2016); J Koch, ‘§ 93 marginal number 21’ in U Hüffer and J Koch (eds), Aktiengesetz (14th ed. 2020); Lieder, ‘Entscheidungen’ (Footnote n 76) 577; for a different opinion (objective standard): W Hölters, ‘§ 93 marginal number 39’ in W Hölters, AktG (3rd ed. 2017); HJ Mertens and A Cahn in W Zöllner and U Noack (eds), Kölner Kommentar zum AktG (3rd ed. 2010) § 93 marginal number 23.

84 Cf. H Fleischer, ‘§ 76 marginal number 27’ in M Henssler (ed), BeckOGK, Aktiengesetz (15 January 2020); M Kort, ‘§ 76 marginal number 60’ in H Hirte, PO Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015); G Spindler, ‘§ 76 marginal number 67 et seq.’ in W Goette and M Habersack (eds) ‘Münchener Kommentar zum AktG’ (5th ed. 2019); P Ulmer, ‘Aktienrecht im Wandel’ (2002) 202 AcP 143, 158 et seq.

85 Cf. OLG Hamm AG 1995, 512, 514; B Dauner-Lieb, ‘§ 93 AktG marginal number 23’ in M Henssler and L Strohn (eds), Gesellschaftsrecht, (5th ed. 2021); J Koch, ‘§ 76 marginal number 34’ in U Hüffer and J Koch (eds) Aktiengesetz (14th ed. 2020); M Kort, ‘§ 76 marginal number 52’ in H Hirte, P O Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015); HJ Mertens and A Cahn, ‘§ 76 marginal number 17’ in W Zöllner and U Noack (eds), Kölner Kommentar zum AktG (3rd ed. 2010) 22; Lieder, ‘Entscheidungen’ (Footnote n 76) 577–578.

86 Cf. BGHZ 135, 244 (253–254) (Bundesgerichtshof II ZR 175/95); H Fleischer, ‘§ 93 marginal number 99’ in M Henssler (ed), BeckOGK, Aktiengesetz (15 January 2020); J Koch, ‘§ 93 marginal number 23’ in U Hüffer and J Koch (eds), Aktiengesetz (14th ed. 2020).

87 Cf. T Drygala, ‘§ 116 marginal number 15’ in K Schmidt and M Lutter (eds), AktG (4th ed. 2020); J Koch, ‘§ 93 marginal number 23’ in U Hüffer and J Koch (eds), Aktiengesetz (14th ed. 2020); Lieder, ‘Entscheidungen’ (Footnote n 76) 578 with examples.

88 But see Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1135.

89 Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1134.

90 Deutscher Bundestag, ‘Begründung zum Regierungsentwurf, BT-Drucks. 15/5092’ (2005) 11; B Dauner-Lieb, ‘§ 93 AktG marginal number 20’ in M Henssler and L Strohn (eds), Gesellschaftsrecht (5th ed. 2021); J Koch, ‘§ 93 marginal number 16’ in U Hüffer and J Koch (eds), Aktiengesetz (14th ed. 2020); Lieder, ‘Entscheidungen’ (Footnote n 76) 532.

91 Deutscher Bundestag, ‘Begründung zum Regierungsentwurf, BT-Drucks. 15/5092’ (2005) 11; T Bürgers ‘§ 93 marginal number 15’ in T Bürgers and T Körber (eds), AktG (4th ed. 2017); H Fleischer, ‘§ 93 marginal number 97’ in M Henssler (ed), BeckOGK, Aktiengesetz (15 January 2020); HC Ihrig, ‘Reformbedarf beim Haftungstatbestand des § 93 AktG’ (2004) 43 WM 2098, 2105.

92 KJ Hopt and M Roth, ‘§ 93 marginal number 80’ in H Hirte, P O Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015); for a different view, see G Spindler, ‘§ 93 marginal number 51’ in W Goette and M Habersack (eds), ‘Münchener Kommentar zum AktG’ (5th ed. 2019).

93 Lieder, ‘Entscheidungen’ (Footnote n 76) 532; too indiscriminately negative, however, G Spindler, ‘§ 93 marginal number 51’ in W Goette and M Habersack (eds) ‘Münchener Kommentar zum AktG’ (5th ed. 2019); H Hamann, ‘Reflektierte Optimierung oder bloße Intuition?’ (2012) ZGR 817, 825 et seq.

94 T Bürgers, ‘§ 93 marginal number 11’ in T Bürgers and T Körber, AktG (4th ed. 2017); B Dauner-Lieb, ‘§ 93 AktG marginal number 20’ in M Henssler and L Strohn (eds), Gesellschaftsrecht (5th ed. 2021); M Graumann, ‘Der Entscheidungsbegriff in § 93 Abs 1 Satz 2 AktG’ (2011) ZGR 293, 296; for a different view, see KJ Hopt and M Roth, ‘§ 93 marginal number 80’ in H Hirte, P O Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015).

95 Cf. KJ Hopt and M Roth, ‘§ 93 marginal number 80’ in H Hirte, P O Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015).

96 J Koch, ‘§ 93 marginal numbers 16, 22’ in U Hüffer and J Koch (eds), Aktiengesetz (14th ed. 2020); G Krieger and V Sailer-Coceani, ‘§ 93 marginal number 41’ in K Schmidt and M Lutter (eds), AktG (4th ed. 2020); M Lutter, ‘Die Business Judgment Rule und ihre praktische Anwendung’ (2007) 18 ZIP 841, 847; Lieder, ‘Entscheidungen’ (Footnote n 76) 533.

97 Deutscher Bundestag, ‘Begründung zum Regierungsentwurf, BT-Drucks. 15/5092’ (2005) 11; BGHZ 135, 244 (253) (Bundesgerichtshof II ZR 175/95); B Dauner-Lieb, ‘§ 93 AktG marginal number 24’ in M Henssler and L Strohn (eds), Gesellschaftsrecht (5th ed. 2021); KJ Hopt and M Roth, ‘§ 93 marginal number 90’ in H Hirte, PO Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015); G Spindler, ‘§ 93 marginal number 69’ in W Goette and M Habersack (eds), ‘Münchener Kommentar zum AktG’ (5th ed. 2019); S Harbarth, ‘Unternehmerisches Ermessen des Vorstands im Interessenkonflikt’ in B Erle and others (eds), ‘Festschrift für Peter Hommelhoff’ (2012) 323, 327; criticising this G Krieger and V Sailer-Coceani ‘§ 93 marginal number 19’ in K Schmidt and M Lutter (eds), AktG (4th ed. 2020).

98 Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1135.

99 Cf. further Noack, ‘Organisationspflichten’ (Footnote n 10) 123.

100 Cf. Enriques and Zetzsche, ‘Corporate Technologies’ (Footnote n 25) 42.

101 See supra Section III 2.

102 See supra Section III 3.

103 Fleischer, ‘Aufsichtsrat’ (Footnote n 17) 121.

104 For details, see I Erel and others, ‘Selecting Directors Using Machine Learning’ (NBER, March 2018) www.nber.org/papers/w24435.

105 Cf. Fleischer, ‘Algorithmen’ (Footnote n 17) 121; Noack, ‘Aufsichtsrat’ (Footnote n 10) 507.

106 Cf. Noack, ‘Aufsichtsrat’ (Footnote n 10) 502.

107 Cf. Noack, ‘Aufsichtsrat’ (Footnote n 10) 502; for a different view, see Strohn, ‘Rolle’ (Footnote n 10) 374.

108 S Hambloch-Gesinn and FJ Gesinn, ‘§ 111 marginal number 46’ in W Hölters (ed), AktG (3rd ed. 2017); M Habersack, ‘§ 111 marginal number 74’ in W Goette and M Habersack (eds), ‘Münchener Kommentar zum AktG’ (5th ed. 2019); HC Grigoleit and L Tomasic, ‘§ 111 marginal number 49’ in HC Grigoleit (ed), AktG (2nd ed. 2020).

109 Noack, ‘Organisationspflichten’ (Footnote n 10) 140 et seq.

110 HJ Mertens and A Cahn, ‘§ 111 marginal number 52’ in W Zöllner and U Noack (eds), Kölner Kommentar zum AktG (3rd ed. 2010); A Cahn, ‘Aufsichtsrat und Business Judgment Rule’ (2013) WM 1293, 1299 (hereafter Cahn, ‘Aufsichtsrat’); M Hoffmann-Becking, ‘Das Recht des Aufsichtsrats zur Prüfung durch Sachverständige nach § 111 Abs 2 Satz 2 AktG’ (2011) ZGR 136, 146 et seq.; M Winter, ‘Die Verantwortlichkeit des Aufsichtsrats für “Corporate Compliance”’ in P Kindler and others (eds), Festschrift für Uwe Hüffer zum 70. Geburtstag (2010) 1103, 1110 et seq.

111 KJ Hopt and M Roth, ‘§ 111 marginal number 410’ in H Hirte, PO Mülbert, and M Roth (eds), Großkommentar zum AktG (5th ed. 2015); W Zöllner, ‘Aktienrechtliche Binnenkommunikation im Unternehmen’ in U Noack and G Spindler (eds), Unternehmensrecht und Internet (2001) 69, 86.

112 HJ Mertens and A Cahn, ‘§ 111 marginal number 52’ in W Zöllner and U Noack (eds), Kölner Kommentar zum AktG (3rd ed. 2010); J Koch, ‘§ 111 marginal number 21’ in U Hüffer and J Koch (eds), Aktiengesetz (14th ed. 2020); M Lutter, G Krieger, and D Verse, ‘Rechte und Pflichten des Aufsichtsrats’ (7th ed. 2020) marginal number 72; Cahn, ‘Aufsichtsrat’ (Footnote n 110) 1299; G Spindler, ‘Von der Früherkennung von Risiken zum umfassenden Risikomanagement – zum Wandel des § 91 AktG unter europäischem Einfluss’ in P Kindler and others (eds), Festschrift für Uwe Hüffer zum 70. Geburtstag (2010) 985, 997 et seq.

113 For details, see Lieder, ‘Entscheidungen’ (Footnote n 76) 557 et seq., 560, 563.

114 Cf. Wagner ‘Legal Tech 2’ (Footnote n 41) 1105; see furthermore Strohn, ‘Rolle’ (Footnote n 10) 375.

115 See supra Section III 2(e).

116 Leaning in that direction Armour and Eidenmüller, ‘Kapitalgesellschaften’ (Footnote n 9) 169 et seq.; cf. further, in general, V Boehme-Neßler, ‘Die Macht der Algorithmen und die Ohnmacht des Rechts: Wie die Digitalisierung das Recht relativiert’ (2017) NJW 3031 et seq.

117 Cf. Enriques and Zetzsche, ‘Corporate Technologies’ (Footnote n 25) 42.

118 More extensive Möslein, ‘Digitalisierung’ (Footnote n 17) 212; Weber, Kiefner, and Jobst, ‘Unternehmensführung’ (Footnote n 17) 1136; restrictive like here Armour and Eidenmüller, ‘Kapitalgesellschaften’ (Footnote n 9) 189; Enriques and Zetzsche, ‘Corporate Technologies’ (Footnote n 25) 47 et seq.; Noack, ‘Organisationspflichten’ (Footnote n 10) 142.

119 Cf. Enriques and Zetzsche, ‘Corporate Technologies’ (Footnote n 25) 50 et seq.; Strohn, ‘Rolle’ (Footnote n 10) 377.

120 NYSE, ‘Listed Company Manual’ Section 3, 303A.12 (NYSE, 25 November 2009) https://nyse.wolterskluwer.cloud/listed-company-manual.

121 FCA, ‘Listing Rules – FCA Handbook’ LR 9.8.6.R (5) (FCA, January 2021) www.handbook.fca.org.uk/handbook/LR.pdf.

122 AktG, section 161(1)(1).

123 For inclusion in the code cf. also Noack, ‘Organisationspflichten’ (Footnote n 10) 113, 142.

124 For precautionary compliance with the guidelines by the Supervisory Board, see Möslein, ‘Digitalisierung im Aufsichtsrat: Überwachungsaufgaben bei Einsatz künstlicher Intelligenz’ (2020) Der Aufsichtsrat 2(3).

125 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘Building Trust in Human-Centric Artificial Intelligence’ (EUR-Lex, 8 April 2019) https://eur-lex.europa.eu/legal-content/GA/TXT/?uri=CELEX%3A52019DC0168.

126 OECD AI Policy Observatory, OECD Principles on Artificial Intelligence (OECD.AI, 2020) https://oecd.ai/en/ai-principles.

20 Autonomization and Antitrust On the Construal of the Cartel Prohibition in the Light of Algorithmic Collusion

1 See, e.g., A Ezrachi and ME Stucke, ‘Sustainable and Unchallenged Algorithmic Tacit Collusion’ (2020) 17 Nw J Tech & Intell Prop 217; A Ezrachi and ME Stucke, ‘Algorithmic Collusion: Problems and Counter-Measures, Note to the OECD Roundtable on Algorithms and Collusion’ (OECD, 31 May 2017) www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=DAF/COMP/WD%282017%2925&docLanguage=En; A Ezrachi and ME Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (2016); U Schwalbe, ‘Algorithms, Machine Learning, and Collusion’ (2018) 14 J Comp L & Econ 568; A Ittoo and N Petit, ‘Algorithmic Pricing Agents and Tacit Collusion: A Technological Perspective’ in H Jacquemin and A de Streel (eds), L’intelligence artificielle et le droit (2017) 241; SK Mehra, ‘Antitrust and the Robo-Seller: Competition in the Time of Algorithms’ (2016) 100 Minn L Rev 1323; VM Pereira, ‘Algorithm-Driven Collusion: Pouring Old Wine Into New Bottles or New Wine Into Fresh Wineskins?’ (2018) 39 ECLR 212; PG Picht and B Freund, ‘Competition (Law) in the Era of Algorithms’ (2018) 39 ECLR 403; VD Roman, ‘Digital Markets and Pricing Algorithms – a Dynamic Approach towards Horizontal Competition’ (2018) 39 ECLR 37; for an assessment of the Commission’s e-commerce sector inquiry regarding the risks of algorithmic collusion, see N Colombo, ‘What the European Commission (Still) Does Not Tell Us about Pricing Algorithms in the Aftermath of the E-Commerce Sector Inquiry’ (2018) 39 ECLR 478; see also P Pohlmann, ‘Algorithmen als Kartellverstöße’ in J Kokott and others (eds), Europäisches, deutsches und internationales Kartellrecht, Festschrift für Dirk Schroeder (2018) 633, 645 et seq.; D Zimmer, ‘Algorithmen, Kartellrecht und Regulierung’ in J Kokott and others (eds), Europäisches, deutsches und internationales Kartellrecht, Festschrift für Dirk Schroeder (2018) 999 et seq.

2 See Autorité de la Concurrence and BKartA, ‘Algorithms and Competition’ (BKartA, November 2019) www.bundeskartellamt.de/SharedDocs/Publikation/EN/Berichte/Algorithms_and_Competition_Working-Paper.pdf?__blob=publicationFile&v=5; Autoridade da Concorrência, ‘Paper on Digital Ecosystems, Big Data and Algorithms’ (AdC, July 2019) www.concorrencia.pt/vEN/News_Events/Comunicados/Documents/Digital%20Ecosystems%20Executive%20Summary.pdf; Competition & Markets Authority, ‘Pricing Algorithms: Economic Working Paper on the Use of Algorithms to Facilitate Collusion and Personalised Pricing’ (CMA, October 2018) https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/746353/Algorithms_econ_report.pdf.

3 US Department of Justice, ‘Former E-Commerce Executive Charged with Price Fixing in the Antitrust Division’s First Online Marketplace Prosecution’ (US DoJ, 6 April 2015) www.justice.gov/opa/pr/former-e-commerce-executive-charged-price-fixing-antitrust-divisions-first-online-marketplace.

4 ECJ, Case C-74/14 Eturas (21 January 2016).

5 Conseil de la Concurrence, ‘Décision 2018-FO-01’ (Conseil de la Concurrence, 7 June 2018) https://concurrence.public.lu/dam-assets/fr/decisions/ententes/2018/decision-n-2018-fo-01-du-7-juin-2018-version-non-confidentielle.pdf.

6 In Interstate Circuit v United States (1939), pricing restraints were implemented by vertical communication through analogue means which, however, had the same effect as a digital communication would have had; see Interstate Circuit v United States 306 US 208 (1939). On this case see BJ Rodger, ‘The Oligopoly Problem and the Concept of Collective Dominance: EC Developments in the Light of U.S. Trends in Antitrust Law and Policy’ (1995/1996) 2 Colum J Eur L 25, 30–36.

7 This is a type of reinforcement-learning algorithm that adapts its conduct through experience: actions that prove successful are repeated more frequently, while less successful actions are performed less frequently. This pattern allows the algorithm to develop a strategy that reaches, or comes close to, the optimum. Q-learning therefore allows optimization without prior knowledge of the problem to be solved.
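The learning dynamic described in this footnote can be illustrated with a minimal tabular q-learning sketch. This is a stateless toy example, not a model of any real pricing agent: the three price levels, their reward values, and the parameter settings are invented purely for illustration.

```python
import random

# Illustrative q-learning sketch: an agent repeatedly chooses one of three
# hypothetical price levels; actions that yield higher rewards accumulate
# higher Q-values and are chosen more often ("successful actions are
# repeated more frequently"). No prior knowledge of the reward structure
# is built in -- the Q-values all start at zero.
ACTIONS = [0, 1, 2]                      # e.g. low, medium, high price
REWARDS = {0: 1.0, 1: 2.0, 2: 3.0}       # invented payoffs for illustration

alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration
q = {a: 0.0 for a in ACTIONS}            # Q-table, initially empty knowledge

random.seed(0)
for _ in range(5000):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=q.get)
    r = REWARDS[a]
    # Q-update: shift the estimate toward the reward plus discounted
    # best future value, so successful actions gain value over time
    q[a] += alpha * (r + gamma * max(q.values()) - q[a])

best = max(ACTIONS, key=q.get)
print(best)
```

After enough iterations, the agent settles on the highest-reward action despite starting with no information, which is the optimization-through-experience property the footnote describes.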

8 S Thomas, ‘Harmful Signals: Cartel Prohibition and Oligopoly Theory in the Age of Machine Learning’ (2019) 15 J Comp L & Econ 159.

9 A Ezrachi and ME Stucke, ‘Sustainable and Unchallenged Algorithmic Tacit Collusion’ (2020) 17 Nw J Tech & Intell Prop 217.

10 Autorité de la Concurrence/BKartA, ‘Algorithms and Competition, Nov 2019’ (BKartA, November 2019) www.bundeskartellamt.de/SharedDocs/Publikation/EN/Berichte/Algorithms_and_Competition_Working-Paper.pdf?__blob=publicationFile&v=5; Autoridade da Concorrência, ‘Paper on Digital Ecosystems, Big Data and Algorithms, July 2019, Executive Summary’ (AdC, July 2019) www.concorrencia.pt/vEN/News_Events/Comunicados/Documents/Digital%20Ecosystems%20Executive%20Summary.pdf.

11 U Schwalbe, ‘Algorithms, Machine Learning, and Collusion’ (2018) 14 J Comp L & Econ 568.

12 E Calvano and others, ‘Artificial Intelligence, Algorithmic Pricing, and Collusion’ (2020) 110 Am Econ Rev 3267. The authors present a study on the capability of q-learning algorithms to achieve equilibria. They come to the conclusion that algorithms can learn to implement anticompetitive pricing.

13 See the explanation given in the Inception Impact Assessment for a ‘New Competition Tool’, p. 1: ‘The Commission’s enforcement experience in both antitrust and merger cases in various industries points to the existence of structural competition problems that cannot be tackled under the EU competition rules while resulting in inefficient market outcomes. […] Even short of individual market power, increasingly concentrated markets can allow companies to monitor the behaviour of their competitors and create incentives to compete less vigorously without any direct coordination (so-called tacit collusion). Moreover, the growing availability of algorithm-based technological solutions, which facilitate the monitoring of competitors’ conduct and create increased market transparency, may result in the same risk even in less concentrated markets.’ See European Commission, ‘Inception Impact Assessment’ (EC, 29 May 2020) https://ec.europa.eu/competition/consultations/2020_new_comp_tool/new_comp_tool_inception_impact_assessment.pdf.

14 The following refers to Article 101 TFEU, yet the same problems arise under the cartel provisions of many other jurisdictions in a similar way.

15 See, e.g., J Friedman, Game Theory with Applications to Economics (1986) 184: ‘The fundamental distinction between cooperative and noncooperative games is that cooperative games allow binding agreements while noncooperative games do not.’; L Kaplow, Competition Policy and Price Fixing (2013) 177; see also EJ Green and RH Porter, ‘Noncooperative Collusion under Imperfect Price Information’ (1984) 52 Econometrica 87; DG Baird and others, Game Theory and the Law (1994) 165–178.

16 ECJ, Case 48-69 Imperial Chemical Industries Ltd. v Commission of the European Communities [1972] paras 64 and 65; see also ECJ, joined cases C-89/85 and others A. Ahlström Osakeyhtiö and Others v Commission of the European Communities [1993] para 63; ECJ, case C-8/08 T-Mobile Netherlands BV and others v Raad van bestuur van de Nederlandse Mededingingsautoriteit [2009] para 26.

17 Act of July 2, 1890 (Sherman Anti-Trust Act) 15 U.S. Code § 1.

18 Theatre Enterprises v Paramount 346 US 537 (1954); RH Bork, The Antitrust Paradox (1978) 178 et seq.; also arguing in favor of a distinction between ‘illegal agreement’ and ‘conscious parallelism’ MD Blechman, ‘Conscious Parallelism, Signaling and Facilitating Devices: The Problem of Tacit Collusion under the Antitrust Laws’ (1979) 24 NYL Sch L Rev 881, 882, 889.

19 The notion ‘plus factor’ was, reportedly, used for the first time in this context in C-O-Two Fire Equip. Co. v. United States 197 F2d 489, 493 (9th Cir.), cert. denied, 344 U.S. 892 (1952); see on that MD Blechman, ‘Conscious Parallelism, Signaling and Facilitating Devices: The Problem of Tacit Collusion under the Antitrust Laws’ (1979) 24 NYL Sch L Rev 881, 885.

20 The US DoJ has defined ‘facilitating devices’ as ‘mechanisms that facilitate the achievement of an industry pricing or output consensus and police deviations from it [in concentrated industries].’ See US DoJ, ‘Memorandum of John H Shenefield, Assistant Attorney General, Antitrust Division, Shared Monopolies’ (1978) 874 Antitrust & Trade Reg Rep (BNA) at F-1. See also GA Hay, ‘Facilitating Practices: The Ethyl Case (1984)’ in JE Kwoka and LJ White (eds), The Antitrust Revolution: Economics, Competition, and Policy (3rd ed. 1999) 182–201.

21 ECJ, Case 48-69 Imperial Chemical Industries Ltd. v Commission of the European Communities (14 July 1972) paras 64 and 65; see also ECJ, Joined Cases C-89/85 and others Ahlström Osakeyhtiö and Others v Commission of the European Communities (20 January 1994) para 63; ECJ, Case C-8/08 T-Mobile Netherlands BV and others v Raad van bestuur van de Nederlandse Mededingingsautoriteit [2009] para 26.

22 A Ezrachi and ME Stucke, ‘Artificial Intelligence & Collusion: When Computers Inhibit Competition’ (2017) U Ill L Rev 1775, 1796.

23 E Calvano and others, ‘Artificial Intelligence, Algorithmic Pricing, and Collusion’ (2020) 110 Am Econ Rev 3267, 3295.

24 C Veljanovski, Cartel Damages: Principles, Measurement & Economics (2020) 100 para 7.07: ‘the law may need to be applied in a different fashion.’

25 On the attribution of legal personality to algorithms as a general legal issue see H Eidenmüller, ‘The Rise of Robots and the Law of Humans’ (2017) 4 ZEuP 765.

26 For a critical review of this view see P Pohlmann, ‘Algorithmen als Kartellverstöße’ in J Kokott and others (eds), Europäisches, deutsches und internationales Kartellrecht, Festschrift für Dirk Schroeder (2018) 633, 645 et seq.

27 RA Posner, ‘Oligopoly and the Antitrust Laws: A Suggested Approach’ (1969) 21 Stan L Rev 1562, 1575: ‘the tacit colluder should be punished like the express colluder.’; RA Posner, ‘Oligopolistic Pricing Suits, the Sherman Act, and Economic Welfare’ (1976) 28 Stan L Rev 903.

28 Posner has meanwhile distanced himself from this view and takes the opposite position according to which tacit collusion should not be equated with explicit collusion, see RA Posner, ‘Review of Kaplow, Competition Policy and Price Fixing’ (2014) 79 Antitrust LJ 761; on that see also CS Hemphill, ‘Posner on Vertical Restraints’ (2019) 86 U Chi L Rev 1057, 1073.

29 Markovits wants to distinguish between ‘normal’ or ‘natural’ oligopolistic pricing on the one hand and ‘contrived’ oligopolistic pricing on the other, see RS Markovits, ‘A Response to Professor Posner’ (1976) 28 Stan L Rev 919, 933–934; RS Markovits, ‘Oligopolistic Pricing Suits, the Sherman Act, and Economic Welfare, Part II: Injurious Oligopolistic Pricing Sequences: Their Description, Interpretation, and Legality under the Sherman Act’ (1974) 26 Stan L Rev 717, 738; see also RS Markovits, ‘Oligopolistic Pricing Suits, the Sherman Act, and Economic Welfare, Part III: Proving (Illegal) Oligopolistic Pricing: A Description of the Necessary Evidence and a Critique of the Received Wisdom about Its Character and Cost’ (1975) 27 Stan L Rev 307, 315–319; RS Markovits, ‘Oligopolistic Pricing Suits, the Sherman Act, and Economic Welfare, Part IV: The Allocative Efficiency and Overall Desirability of Oligopolistic Pricing Suits’ (1975) 28 Stan L Rev 45, 44–60. Posner criticizes this distinction as suggested by Markovits, see RA Posner, ‘Oligopolistic Pricing Suits, the Sherman Act, and Economic Welfare, A Reply to Professor Markovits’ (1976) 28 Stan L Rev 903, 908 and 913 et seq.

30 L Kaplow, ‘An Economic Approach to Price Fixing’ (2011) 77 Antitrust LJ 343, 350; see also L Kaplow, ‘Direct Versus Communications-Based Prohibitions on Price Fixing’ (2011) 3 J Legal Analysis 449; L Kaplow, ‘On the Meaning of Horizontal Agreements in Competition Law’ (2011) 99 Calif L Rev 683; L Kaplow, Competition Policy and Price Fixing (2013). On this strand of arguments see also D Zimmer, ‘Kartellrecht und neuere Erkenntnisse der Spieltheorie: Vorzüge und Nachteile einer alternativen Interpretation des Verbots abgestimmten Verhaltens (§ 25 Abs 1 GWB, Art 85 Abs 1 EWGV)’ (1990) 154 ZHR 470.

31 On that see also Nicolas Petit’s suggestion of remedies against tacit collusion and the idea of applying a form of equivalence with express collusion: N Petit, ‘Re-Pricing through Disruption in Oligopolies with Tacit Collusion: A Framework for Abuse of Collective Dominance’ (2016) World Competition 119–138; N Petit, ‘The Oligopoly Problem in EU Competition Law’ in I Lianos and D Geradin (eds), Research Handbook in European Competition Law (2013) 259–349.

32 Or other competitive parameters, such as quality.

33 S Thomas, ‘Harmful Signals: Cartel Prohibition and Oligopoly Theory in the Age of Machine Learning’ (2019) 15 J Comp L & Econ 159; S Thomas, ‘Herausforderungen des Plattformwettbewerbs für das Kartellverbot’ in S Thomas and others (eds), Das Unternehmen in der Wettbewerbsordnung, Festschrift für Gerhard Wiedemann zum 70. Geburtstag (2020) 99 et seq.; S Thomas, ‘Horizontal Restraints on Platforms: How Digital Ecosystems Nudge into Rethinking the Construal of the Cartel Prohibition’ (2021) 44 World Competition 53, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3645095.

34 See supra Footnote n 15.

35 Setting aside religious beliefs and ethical convictions of the individual.

36 L Kaplow, ‘An Economic Approach to Price Fixing’ (2011) 77 Antitrust LJ 343, 431.

37 Also voicing concerns E Elhauge and D Geradin, Global Competition Law & Economics (2nd ed. 2011) Chapter 6 C, 843.

38 S Thomas, ‘Harmful Signals: Cartel Prohibition and Oligopoly Theory in the Age of Machine Learning’ (2019) 15 J Comp L & Econ 159.

39 EU Commission, decision of 7 July 2016 Container Shipping AT 39850.

40 OECD, ‘Background Paper, Policy Roundtables on Unilateral Disclosure of Information with Anticompetitive Effects’ (OECD, 11 October 2012) paras 1 and 2.3.1. www.oecd.org/daf/competition/Unilateraldisclosureofinformation2012.pdf.

41 S Thomas, ‘Harmful Signals: Cartel Prohibition and Oligopoly Theory in the Age of Machine Learning’ (2019) 15 J Comp L & Econ 159.

42 See supra Footnote n 5.

43 ECJ, Case C-32/11 Allianz Hungária Biztosító v Gazdasági Versenyhivatal (14 March 2013) para 66.

44 ECJ, Case C-67/13 P Groupement des Cartes Bancaires v European Commission (11 September 2014) para 51.

45 ECJ, Joined Cases C‑501/06 P and others GlaxoSmithKline Services Unlimited v European Commission (6 October 2009) para 63.

46 See, e.g., EU Commission, ‘Guidelines on the Applicability of Article 101 of the Treaty on the Functioning of the European Union to Horizontal Co-Operation Agreements’ (2011) OJ 2011 C 11/1 para 161, where the Commission explains that in the case of a joint distribution which is downstream of a joint production, horizontal price fixing can be assessed within the ‘by effect’ category. While this situation is not identical with the case of algorithmic collusion, it demonstrates that it is not conceptually impossible to turn towards the effects on consumer surplus when evaluating whether a certain conduct amounts to a breach of Article 101(1) TFEU or not.

47 J Johnson and others, ‘Platform Design when Sellers Use Pricing Algorithms’ (SSRN, 12 September 2020) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3691621.

48 The problem of how to detect algorithmic collusion is independent of the notion of a concerted practice. Enforcers will struggle with a total absence of direct evidence if the algorithmic computer agents acted independently of human interaction. Market comparison methods, however, could be used to assess the competitiveness of the pricing level. This is an area with a great demand for further research. Yet the difficulty of detecting algorithmic collusion does not call into question the need to expound how the law should be construed in the event that algorithmic collusion can be found.

49 EU Commission, ‘Guidelines on the Applicability of Article 101 of the Treaty on the Functioning of the European Union to Horizontal Co-Operation Agreements’ (2011) OJ 2011 C 11/1 para 94.

21 Artificial Intelligence in Financial Services: New Risks and the Need for More Regulation?

* I want to thank Silja Voeneky for many insightful discussions of the topic of AI, for sharing and exchanging many ideas, and also for her comments on an earlier draft version of this chapter.

1 See C Chan and others, ‘Artificial Intelligence Applications in Financial Services – Asset Management, Banking and Insurance’ (Oliver Wyman Research Report, 2019), www.oliverwyman.com/content/dam/oliver-wyman/v2/publications/2019/dec/ai-app-in-fs.pdf; for an overview, T Boobier, AI and the Future of Banking (2020), and also T Guida, Big Data and Machine Learning in Quantitative Investment (2019) (hereafter Guida, Big Data) for more recent developments in quantitative investment.

2 See D Zetzsche and others, ‘Artificial Intelligence in Finance – Putting the Human in the Loop’ (2020) University of Hong Kong Faculty of Law Research Paper No. 2020/006 (hereafter Zetzsche and others, ‘Artificial Intelligence in Finance’), Guida, Big Data (Footnote n 1), or the recent regulatory proposals from the Monetary Authority of Singapore (2019).

3 See the so-called IAC (Individual Accountability) guidelines by the Monetary Authority of Singapore, MAS, ‘Guidelines on Individual Accountability and Conduct’ (MAS, 10 September 2020) www.mas.gov.sg/-/media/MAS/MPI/Guidelines/Guidelines-on-Individual-Accountability-and-Conduct.pdf.

4 See the joint report of the European Supervisory Authorities EBA, ESMA, EIOPA on the use of big data, including AI, by financial institutions, December 2016, JC/2016/86.

5 See for instance Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2).

6 In 2021, the European Commission published its proposal for a general regulation on a European approach to Artificial Intelligence, which would also regulate the financial industry in some areas of AI applications: European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts COM (2021) 206 final (hereafter Draft EU AIA). Since this draft was published after the writing of this chapter, its impact can be discussed only to a limited extent here.

7 See Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2).

8 See E Brynjolfsson and A McAfee, ‘The Business of Artificial Intelligence’ (Harvard Business Review, 18 July 2017) https://hbr.org/2017/07/the-business-of-artificial-intelligence 3 (hereafter Brynjolfsson and McAfee, ‘The Business of Artificial Intelligence’); see also the interview with Andrew Ng in M Ford, Architects of Intelligence – The Truth about AI from the People Building It (2018) 190 (hereafter Ford, Architects of Intelligence).

9 See RG Lipsey, KI Carlaw, and CT Bekar, Economic Transformations: General Purpose Technologies and Long-Term Economic Growth (2005) (hereafter Lipsey and others, Economic Transformations) for a broader discussion of different GPTs and their role in economic development and the transformation of societies as a whole.

10 Besides Lipsey and others, Economic Transformations (Footnote n 9), see also TF Bresnahan and M Trajtenberg, ‘General Purpose Technologies “Engines of Growth”?’ (1995) 65(1) Journal of Econometrics 83 for another interesting article on the wider topic of the role and impact of GPTs.

11 See Brynjolfsson and McAfee, ‘The Business of Artificial Intelligence’ (Footnote n 8) 4.

12 See the interview with Andrew Ng in Ford, Architects of Intelligence (Footnote n 8) 190 et seq.

14 See J Candela and S Berinato, Artificial Intelligence: Insights You Need from Harvard Business Review (2019).

15 It is worth noting that today it is not entirely clear which direction AI as a technology will take over the coming years. Despite the enormous success of machine learning as an AI concept or paradigm, several authors have pointed to its limitations – see, for example, the interview with Barbara Grosz in Ford, Architects of Intelligence (Footnote n 8) 333–356.

16 See www.swift.com/about-us/history for more details on the introduction of the SWIFT system.

17 Commonly, big data sets are defined by the so-called 4 Vs: volume (the amount of data), velocity (the speed at which new data are created or generated), variety (different kinds of data types from different data sources, in particular, often a mix of structured and unstructured data), and veracity (discrepancies, errors, and gaps in data sets). Typical market data feeds in the financial industry fulfill at least three of these criteria, namely volume, velocity, and veracity, as the feeds deliver fairly structured data sets. This might change in the future when data feeds also include other kinds of data such as press releases or social media posts, as is already the case with so-called sentiment feeds including sentiment data. See B Marr, Big Data: Using SMART Big Data, Analytics and Metrics to Make Better Decisions and Improve Performance (2015) for a more general introduction to the area of big data, and Guida, Big Data (Footnote n 1) for more insights into big data in the area of financial information.

18 In broader terms, an information (management) system is simply defined as a set of interrelated components consisting of an application system and an interface for human interaction, used to define the tasks for the system and retrieve information. The application system consists of hardware, software, data, and a network connection. For more details see K Laudon and J Laudon, Management Information Systems: Managing the Digital Firm (15th ed. 2018).

19 Examples in the payment sector are WeChat Pay, Alipay, or Apple Pay, new competitors to the established credit card payment services. In fact, many big tech companies today are moving into financial services with their own finance applications, often in areas like payments, such as Apple Pay or Google Pay.

20 They operate more like a thermostat for a heating system, setting thresholds for certain actions to take place, like selling a stock position based on a predefined stop-loss order. The system will automatically initiate the transaction, but it is solely based on predefined parameters.
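The thermostat-like, rule-based behaviour described in this footnote can be sketched in a few lines (a minimal illustration only; the function name, quantities, and price levels are hypothetical):

```python
def check_stop_loss(position_qty, current_price, stop_price):
    """Rule-based trigger: sell the whole position once the market price
    falls to or below a predefined stop-loss threshold.

    No learning is involved -- the behaviour is fixed entirely by the
    predefined parameters, like a thermostat switching at a set temperature.
    """
    if position_qty > 0 and current_price <= stop_price:
        return ("SELL", position_qty)  # automatically initiate the transaction
    return ("HOLD", 0)

# Hypothetical example: 100 shares with a stop-loss set at 95.00
print(check_stop_loss(100, 94.50, 95.00))  # -> ('SELL', 100)
print(check_stop_loss(100, 96.10, 95.00))  # -> ('HOLD', 0)
```

The point of the sketch is the contrast with AI systems: here every action is fully determined in advance by the parameters, so nothing about the rule changes with the data it sees.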

21 Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2) present a similar classification of the AI applications present today in the financial industry. See also T Boobier, AI and the Future of Banking (2020), and, for discussion of several of the application areas covered here, the recent leadership paper by the consultancy firms Oliver Wyman, Marsh, BCLP, and Hermes: C Chan and others, ‘Artificial Intelligence Applications in Financial Services – Asset Management, Banking and Insurance’ (Oliver Wyman Research Report, 2019), www.oliverwyman.com/content/dam/oliver-wyman/v2/publications/2019/dec/ai-app-in-fs.pdf.

22 See M Hassan and M Tabasum, ‘Customer Profiling and Segmentation in Retail Banks Using Data Mining Techniques’ (2018) 9(4) International Journal of Advanced Research in Computer Science.

23 See R Ragotan, ‘AI Has Changed the Way Banks Interact with Their Customers’ (Fintech News, 5 February 2020) www.fintechnews.org/ai-has-changed-the-way-banks-interact-with-their-customers for a discussion of some of the applications and service providers.

24 So-called robo-advisors or advisory solutions like Betterment, Wealthfront, and Vanguard Digital, to give a few examples from the more advanced US robo-advisory market, have in recent years been launched in competition with traditional human banking or financial advisors. These solutions automate and digitalize the advisory process in wealth management and private banking, thereby lowering the assets-under-management threshold at which private investors can access high-quality advisory solutions. Although some of the new players have also automated the asset management process itself, the primary focus of these solutions is enhancing the advisory process by replacing the human banking advisor with a machine or AI-based interface. In this regard they are classified here under customer-related solutions and not under AI-based trading and portfolio management solutions, as done – rather misleadingly – by Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2).

25 As pointed out in a study by the consulting firm McKinsey & Company (2018), data analytics applications, often using AI techniques, are most widespread in the sales and marketing areas of businesses, that is, areas which try to generate and develop new customer relationships and transactions.

26 See N Aggarwal, ‘The Norms of Algorithmic Credit Scoring’ (2021) 8(2) The Cambridge Law Journal 42, an interesting article on the norms of algorithmic credit scoring.

27 See M Lewis, Flash Boys: A Wall Street Revolt (2014) (hereafter Lewis, Flash Boys) or S Patterson, Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market (2012) (hereafter Patterson, Dark Pools) for good non-expert introductions into this area, for a more systematic and scientific account see R Kissell, Algorithmic Trading Methods: Applications Using Advanced Statistics, Optimization, and Machine Learning Techniques (2021).

28 See Guida, Big Data (Footnote n 1) for a recent collection of articles on this newly emerging field. So far, AI applications and tools have mainly been used to assist fund managers in the asset allocation process, but fully AI-based fund management may be possible in the future. Some authors, like E Syrotyuk, ‘State of Machine Learning Applications in Investment Management’ in T Guida (ed), Big Data and Machine Learning in Quantitative Investment (2019), seem more sceptical with regard to fully automated asset management because of the more erratic nature of financial markets.

29 Companies like Teradata (teradata.com) and Datavisor (datavisor.com) provide AI-based financial fraud detection solutions. Datavisor, for instance, claims that its solution can detect 30% more fraud with 90% accuracy. According to the author’s own research, its solutions are mainly based on machine learning algorithms.

30 See A Bouveret, ‘Cyber Risk for the Financial Sector: A Framework for Quantitative Assessment’ (2018) International Monetary Fund Working Paper 18/143 for a thorough overview and analysis of cyber security risk in the financial industry by sectors and countries/regions.

31 See J Li, ‘Cyber Security Meets Artificial Intelligence: A Survey’ (2018) 19 Frontiers of Information Technology & Electronic Engineering for a more detailed analysis of the potential of using AI systems to prevent or reduce cyberattacks. The article also highlights that AI systems might be used to facilitate cybersecurity attacks, as will also be discussed later in this chapter.

32 A new sector has emerged in recent years often referred to as RegTech – see Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2) – using technology to help financial institutions to comply with the various regulatory requirements. Quite a few regtech solutions have increasingly made use of AI technologies; for a good overview see ‘AI in RegTech: a quiet upheaval’ (Chartis, 2018) www.ibm.com/downloads/cas/NAJXEKE6.

33 See Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2) as a recent example.

34 See AM Fleckner, ‘Regulating Trading Practices’ in N Moloney, E Ferran, and J Payne (eds), The Oxford Handbook of Financial Regulation (2015) 597 (hereafter Fleckner, ‘Regulating Trading Practices’).

35 For the different meanings of the notion ‘regulation’, cf. T Schmidt and S Voeneky, Chapter 8, in this volume.

36 For a thorough analysis of regulation of trading practices in the financial industry discussing both sides of regulation see the article by Fleckner, ‘Regulating Trading Practices’ (Footnote n 34).

37 European Securities and Market Authority (ESMA), the EU’s securities market regulator located in Paris, created in 2011 and replacing the Committee of European Securities Regulators (CESR).

38 Securities and Exchange Commission (SEC), the independent agency of the US federal government, created in 1934 following the stock market crash of 1929.

39 See the KPMG report, ‘EU Financial Services Regulation – A New Agenda Demands a New Approach’ www.kpmg.com/regulatorychallenges, which gives a good overview of the various perspectives on the regulation of financial services in the EU.

40 For a concise and high-level summary of the Basel I–III regulations, see the article ‘History of the Basel Committee’ (BIS) bis.org/bcbs/history.htm.

41 See European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on Digital Operational Resilience for the Financial Sector’ COM (2020) 595.

42 See M Comana, D Previtali, and L Bellardini, The MiFID II Framework: How the Standards Are Reshaping the Investment Industry (2019) for a detailed analysis of the MiFID II regulations including a comparison with the MiFID I rules.

43 The laws and regulations around data privacy protection could also be seen as falling into this category, but they are listed here separately given their recent prominence.

44 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119/1.

45 This classification mirrors to some extent the classification by the French prudential supervisory authority within the Banque de France, which has recently put forward the following four risk categories as allegedly stemming from AI applications in the financial industry: (1) data processing risk, (2) cybersecurity risk, (3) challenges to financial stability, and (4) players’ dependency and changes in power relationships in the financial market. See ‘Artificial Intelligence: Challenges for the Financial Sector’ (ACPR, December 2018), acpr.banque-france.fr/sites/default/files/medias/documents/2018_12_20_intelligence_artificielle_en.pdf.

46 For instance, the construction of error correction codes can be used to handle issues in data transmission through noisy channels, as sometimes happens in the case of market data feeds. More recently, AI techniques have been used to optimize the design of error correction codes; see for instance L Huang and others, ‘AI Coding: Learning to Construct Error Correction Code’ (2019) 20(10) IEEE Transactions on Communications (hereafter Huang and others, ‘AI Coding’).
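The basic principle behind error correction codes mentioned in this footnote can be illustrated with the simplest possible scheme, a (3,1) repetition code with majority-vote decoding (a toy sketch only, unrelated to the AI-optimized code constructions of the cited paper):

```python
def encode_repetition(bits, n=3):
    """(n,1) repetition code: add redundancy by repeating each bit n times."""
    return [b for b in bits for _ in range(n)]

def decode_repetition(coded, n=3):
    """Majority-vote decoding: each original bit is recovered correctly
    as long as fewer than half of its n copies were flipped by channel noise."""
    decoded = []
    for i in range(0, len(coded), n):
        block = coded[i:i + n]
        decoded.append(1 if sum(block) > n // 2 else 0)
    return decoded

message = [1, 0, 1]
sent = encode_repetition(message)      # [1,1,1, 0,0,0, 1,1,1]
received = sent[:]
received[1] = 0                        # a single bit flipped by the noisy channel
print(decode_repetition(received))     # -> [1, 0, 1], the error is corrected
```

Real feeds use far more efficient codes than repetition, but the trade-off is the same: extra redundant bits in exchange for tolerance of transmission errors, which is exactly the design space the cited AI techniques search over.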

47 In their interesting paper ‘Measuring Bias in Consumer Lending’ (2021) The Review of Economic Studies https://doi.org/10.1093/restud/rdaa078, W Dobbie and others try to measure the amount of bias in consumer lending decisions. They find that in traditional, non-AI-based lending decisions there is a significant bias against immigrant and older loan applicants.

48 There has been a lot of debate around the so-called flash crash of May 6, 2010, when the Dow Jones Index lost about a tenth of its value in just 36 minutes – see for instance A Kirilenko and others, ‘The Flash Crash: The Impact of High Frequency Trading on an Electronic Market’ (2017) 72 The Journal of Finance 967. In his recent article, D Busch, ‘MiFID II: Regulating High Frequency Trading, Other Forms of Algorithmic Trading and Direct Electronic Market Access’ (2017) 2 Law and Financial Markets Review https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3068104 (hereafter Busch, ‘MiFID II’) looks at how such flash crashes are meant to be prevented under the MiFID II regulation by outlawing a market-manipulation technique referred to as ‘spoofing’. This technique was allegedly used by a British stock market trader in 2010, who tricked the market into believing that prices were about to fall by placing huge amounts of sell orders, which he later cancelled using his specially developed algorithms.

49 See Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2) 21.

50 See the discussion in Section II (5).

51 See Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2) 21.

52 See Lewis, Flash Boys (Footnote n 27) and Patterson, Dark Pools (Footnote n 27) for a vivid description of the individuals, often IT experts or nerds, who set up high-frequency trading firms or the respective trading units at major banks. Major hedge funds with a quantitative focus, like Renaissance Technologies, which had 133 billion USD under management as of November 2020, likewise have a strong focus on developing their own mathematical models and algorithms.

53 See the article by T York, ‘Banks Becoming Technology Companies, Technology Companies Becoming Banks’ (San Diego Business Journal, 30 September 2019) www.sdbj.com/news/2019/sep/30/banks-becoming-technology-companies-technology-com/; see also the recent BCG publication on this topic, J Erlebach and others, ‘The Sun Is Setting on Traditional Banking’ (BCG, 24 November 2020) www.bcg.com/publications/2020/bionic-banking-may-be-the-future-of-banking.

54 For instance, market data providers like Bloomberg and Thomson Reuters have started to use AI methods and techniques to help digest larger data sets, including unstructured data like texts from different sources, thereby delivering new kinds of analytics such as so-called sentiment analyses or feeds, which try to identify the sentiment in certain markets or regarding certain securities.

55 In Germany, for instance, the advisory services offered mostly by banks have been reviewed frequently by consumer protection agencies and independent bodies, and over many years the findings have been very disappointing, with many banks not even fulfilling basic standards and requirements – see the magazine Finanztest 2/2016. In particular, elderly people have frequently been ‘ripped off’: referred to internally as ‘ADs’ (alt (old) and dumm (stupid)), they could be sold products unsuitable to their financial situation or asked to re-allocate their portfolios frequently, mainly with the aim of generating extra commission fees on the triggered transactions, thereby exploiting their trust – see C Bauer, ‘Banken zocken Senioren als “AD-Kunden” ab’ Westfaelische Rundschau (9 July 2009) www.wr.de/wr-info/banken-zocken-senioren-als-ad-kunden-ab-id79712.html. In countries like the US, where private investors have a long tradition of investing in the financial markets through their tax-advantaged 401(k) pension plans, financial advisory services have been at a higher professional level. For a more thorough cross-country comparison see J Burke and A Hung, ‘Financial Advice Markets – A Cross-Country Comparison’ (2015) (study by the RAND Corporation prepared for the US Department of Labor) www.rand.org/pubs/research_reports/RR1269.html.

56 There has been a long history of insider trading; see the article by the New York Times, ‘Dealbook – Timeline: A History of Insider Trading’ The New York Times (6 December 2016), mainly focusing on cases in the US www.nytimes.com/interactive/2016/12/06/business/dealbook/insider-trading-timeline.html.

57 Over many years, traders manipulated the banks’ benchmark interbank lending rate, the LIBOR, to their benefit before this was discovered; see L Vaughan and G Finch, ‘Libor Scandal: The Bankers Who Fixed the World’s Most Important Number’ The Guardian (18 January 2017) www.theguardian.com/business/2017/jan/18/libor-scandal-the-bankers-who-fixed-the-worlds-most-important-number.

58 See Patterson, Dark Pools (Footnote n 27) 4, and also M Lewis, Flash Boys (Footnote n 27) for more details on this fascinating topic.

59 For another view, cf. T Schmidt and S Voeneky, Chapter 8, in this volume.

60 See the report by European Banking Authority for example: EBA, ‘Final Report on Guidelines on internal governance under Directive 2013/36/EU’ (2 July 2021) EBA/GL/2021/05, 5-7 www.eba.europa.eu/sites/default/documents/files/document_library/Publications/Guidelines/2021/1016721/Final%20report%20on%20Guidelines%20on%20internal%20governance%20under%20CRD.pdf. For the UK, see the conduct rules as applied to the senior management functions as defined by the Bank of England report, Bank of England ‘Senior Managers Regime: Approvals’ www.bankofengland.co.uk/prudential-regulation/authorisations/senior-managers-regime-approvals.

61 Cf. Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2).

62 MAS, ‘Guidelines on Individual Accountability and Conduct’ (MAS, 10 September 2020) www.mas.gov.sg/-/media/MAS/MPI/Guidelines/Guidelines-on-Individual-Accountability-and-Conduct.pdf, para 3.3.

63 Zetzsche and others, ‘Artificial Intelligence in Finance’ (Footnote n 2) 44.

67 Such a situation seems not so rare, as also discussed in the recent Netflix documentary ‘The Social Dilemma’ (2020), in which many of the creators of the algorithms underlying leading social media platforms like Facebook or YouTube describe their inability to understand, at later stages of the algorithms’ operation, the content recommendations these systems generate on the basis of user profiling. Essentially, the algorithms develop in their own way, which is hard to understand at later stages of their deployment.

68 The other two limitations they mention are overdeterrence – as long as the benefits are higher, I think this will not be much of an issue – and the increased role of fintechs in developing AI applications, as fintechs usually have less experienced managerial resources. Here, they propose that this could be handled by suitable board structures, a thought with which I agree.

69 For instance, regarding the numerous cloud-based services in place today in areas like financial market data systems, trading terminals, or wealth management advisory solutions.

70 See for instance the recent 2020 report by the consultancy firm Deloitte on this topic: ‘Modernizing Legacy Banking Systems Practical Advice to Help Banks Succeed at Core and Application Modernization’ (Deloitte, 2020) www2.deloitte.com/us/en/pages/financial-services/articles/modernizing-legacy-systems-in-banking.html.

71 See for instance all the different norms and standards defined by the VDE (the German association for electrical, electronic, and information technologies) over more than 100 years. In 1885, the first VDE regulation, the ‘VDE 0100’, was introduced, which regulated the safe construction of electrical systems. In 1904, the VDE published its first ‘book of standards’ comprising more than 17 provisions. Today, a wide set of norms and standards ensures the safety and well-functioning of all kinds of electrical systems.

72 A similar obligation has been put forward by the EU in its recent Draft EU AIA (Footnote n 6).

73 In the literature, there has been a long discussion of the so-called flash crashes and the extent to which they have been caused by certain algorithmic trading practices. See Busch, ‘MiFID II’ (Footnote n 48) for a more detailed discussion regarding the recent MiFID II regulation and its impact on algorithmic trading practices. See also Huang and others, ‘AI Coding’ (Footnote n 46) for more details on this topic.

74 For more details on this topic see Section IV of this chapter and Huang and others, ‘AI Coding’ (Footnote n 46).

75 For details cf. T Burri, Chapter 7, T Schmidt and S Voeneky, Chapter 8, and C Wendehorst, Chapter 12, in this volume.

76 See the Draft EU AIA (Footnote n 6).

77 For requirements to be met by high-risk AI systems, cf. Article 8 et seq., Article 16 et seq., and especially the conformity assessment, Article 43.

78 I assume this refers to the fact that simple learning algorithms might be trained on past credit decisions of financial institutions, which might have embodied certain forms of discrimination. As pointed out in Section IV, many credit decisions were prone to discrimination even before the arrival of AI in financial services. One solution could be to prepare the training data for such credit-decision algorithms in a way that makes them bias-free.
