Simon Chesterman has published a bold and ambitious book. It surveys the challenges posed by artificial intelligence (AI) and offers regulators a road map for how best to engage with those challenges to improve public welfare.
AI regulation is an important and timely subject. Even in the short time since the book's publication in 2021, AI has advanced significantly in both capability and adoption. Consider, for instance, the case of self-driving vehicles, which Chesterman uses to illustrate liability issues: in 2022, the company Cruise launched the first commercial self-driving car service in San Francisco. Chesterman also examines AI-generated creative works and their copyright implications; as of 2022, commercially valuable AI-generated works are being produced at scale by systems like DALL·E 2. The view that it is premature to regulate AI now appears Luddite.
Chesterman makes a good case for why AI is worthy of special regulatory consideration. While AI has been around for decades, and other frontier technologies may likewise not fit seamlessly into existing governance frameworks, Chesterman argues that modern AI is disruptive mainly because of its speed, autonomy and opacity. For example, court filings, though historically public documents, were kept ‘practically obscure’ by their sheer volume and the high cost of searching them. AI now allows just about anyone to search these filings in moments, with major practical implications for privacy, even though the underlying public nature of court filings has not changed. As another example, law enforcement officers recognising faces in public spaces is an age-old practice. But the ability of AI to track every person in a public space simultaneously, and to use that information to infer someone's political affiliations from the locations they visit and the purchases they make, raises similar, and worrying, privacy implications.
Chesterman examines how existing laws deal with AI, and how those laws might change. While most of the English-language literature on AI regulation is rooted in American and European approaches, Chesterman's book usefully engages with Asian, and particularly Chinese and Singaporean, regulatory efforts.
He argues that the primary responsibility for regulating AI must fall to State governments, which can do so by leveraging responsibility, personality and transparency. For instance, States must ensure appropriate responsibility for the acts and omissions of AI, which can involve special product liability rules, insurance schemes and prohibitions on outsourcing liability. Chesterman argues against legal personality for AI systems, though he notes it may become necessary depending on how the technology evolves. He also engages with the explainability and transparency of AI systems and their decision-making, and with how these can be supplemented by tools like audits and impact assessments. Responsibility, personality and transparency are useful concepts for managing risk, addressing the morality of automated decision-making and evaluating the delegation of authority to AI.
Finally, Chesterman considers where existing rules and regulatory bodies come up short, focusing on the weaponisation and victimisation of AI. Here he argues that an international legal approach and harmonisation are needed to adequately regulate technologies like lethal autonomous weapons, and he posits a hypothetical International Artificial Intelligence Agency modelled on the International Atomic Energy Agency, the post-Second World War body created to promote peaceful uses of nuclear energy. Chesterman also examines the use of AI in regulation itself, including in judicial processes, and even the prospect of AI regulating AI. Ultimately, he concludes that there should be a procedural guarantee of transparency and a substantive norm of maintaining human control, both to constrain AI activity and to ensure appropriate responsibility.
Chesterman's regulatory road map is one worth following. One hopes human regulators will agree before the artificial regulators arrive.