
Is wearing these sunglasses an attack? Obligations under IHL related to anti-AI countermeasures

Published online by Cambridge University Press: 20 March 2024

Jonathan Kwik*
Affiliation:
Postdoctoral Researcher, ELSA Lab Defence, T. M. C. Asser Institute, The Hague, Netherlands

Abstract

As usage of military artificial intelligence (AI) expands, so will anti-AI countermeasures, known as adversarials. International humanitarian law offers many protections through its obligations in attack, but the nature of adversarials generates ambiguity regarding which party (system user or opponent) should incur attacker responsibilities. This article offers a cognitive framework for legally analyzing adversarials. It explores the technical, tactical and legal dimensions of adversarials, and proposes a model based on foreseeable harm to determine when legal responsibility should transfer to the countermeasure's author. The article provides illumination to the future combatant who ponders, before putting on their adversarial sunglasses: “Am I conducting an attack?”

Type: Research Article

Copyright © The Author(s), 2024. Published by Cambridge University Press on behalf of ICRC


Footnotes

* ORCID No. 0000-0003-0367-5655. The author thanks Dr William Boothby for his helpful comments and insights during the drafting of this paper.

The advice, opinions and statements contained in this article are those of the author/s and do not necessarily reflect the views of the ICRC. The ICRC does not necessarily represent or endorse the accuracy or reliability of any advice, opinion, statement or other information provided in this article.

References

1 See Anish Athalye, Logan Engstrom, Andrew Ilyas and Kevin Kwok, Synthesizing Robust Adversarial Examples, 6th International Conference on Learning Representations, Vancouver, 2018, para. 1, available at: https://openreview.net/forum?id=BJDH5M-AW (all internet references were accessed in February 2024).

2 See e.g. US Department of Defense (DoD), “Autonomy in Weapon Systems”, DoD Directive 3000.09, 25 January 2023, p. 4 (US); NATO, Summary of the NATO Artificial Intelligence Strategy, 22 October 2021, para. 14, available at: www.nato.int/cps/en/natohq/official_texts_187617.htm (NATO); Josh Baughman, “China's ChatGPT War”, China Aerospace Studies Institute, 2023, p. 7, available at: www.airuniversity.af.edu/CASI/Display/Article/3498584/chinas-chatgpt-war/ (China).

3 The term “adversarial” is used to distinguish countermeasures specifically designed to exploit modern AI from countermeasures in general.

4 This article deliberately uses legally neutral terms instead of “attacker” and “defender” since, as argued below, these statuses may shift depending on the circumstances. The current nomenclature offers more consistency, since the deployer is always the party activating the system, and the adversary is always the author of the countermeasure.

5 Paul Scharre, Army of None: Autonomous Weapons and the Future of War, W. W. Norton & Co, New York, 2018, p. 40; Tetyana Krupiy, "A Case against Relying Solely on Intelligence, Surveillance and Reconnaissance Technology to Identify Proposed Targets", Journal of Conflict and Security Law, Vol. 20, No. 3, 2015, p. 4.

6 Protocol Additional (I) to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts, 1125 UNTS 3, 8 June 1977 (entered into force 7 December 1978) (AP I), Art. 37(2). See also William H. Boothby, "Highly Automated and Autonomous Technologies", in William H. Boothby (ed.), New Technologies and the Law in War and Peace, Cambridge University Press, Cambridge, 2018, p. 154.

7 Milan N. Vego, "Operational Deception in the Information Age", Joint Forces Quarterly, Spring 2002, p. 60.

8 Samuel Estreicher, “Privileging Asymmetric Warfare? Part I: Defender Duties under International Humanitarian Law”, Chicago Journal of International Law, Vol. 11, No. 2, 2011, p. 435; William H. Boothby, “Control in Weapons Law”, in Rogier Bartels et al. (eds), Military Operations and the Notion of Control Under International Law, T. M. C. Asser Press, The Hague, 2021, p. 384.

9 Christof Heyns, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, UN Doc. A/HRC/23/47, 9 April 2013, para. 98; Michael N. Schmitt and Jeffrey Thurnher, "'Out of the Loop': Autonomous Weapon Systems and the Law of Armed Conflict", Harvard Law School National Security Journal, Vol. 4, 2013, p. 242.

10 P. Scharre, above note 5, p. 222.

11 Defense Science Board, Report of the Defense Science Board Summer Study on Autonomy, Secretary of Defense for Acquisition, Technology and Logistics, Washington, DC, 2016, p. 14.

12 Yoram Dinstein and Arne Willy Dahl, Oslo Manual on Select Topics of the Law of Armed Conflict, Springer, Cham, 2020 (Oslo Manual), p. 41.

13 T. Krupiy, above note 5, p. 33.

14 Michael N. Schmitt, "Wired Warfare 3.0: Protecting the Civilian Population during Cyber Operations", International Review of the Red Cross, Vol. 101, No. 910, 2019, p. 340.

15 For a summary of the polemic regarding AI in military systems, see Elvira Rosert and Frank Sauer, “How (Not) to Stop the Killer Robots: A Comparative Analysis of Humanitarian Disarmament Campaign Strategies”, Contemporary Security Policy, Vol. 42, No. 1, 2021, pp. 18–21.

16 See e.g. GGE on LAWS, Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, UN Doc. CCW/GGE.1/2019/3, 25 September 2019, Annex IV(f); Office of the Assistant Secretary of Defense for Research and Engineering, Technical Assessment: Autonomy, DoD, Washington, DC, 2015 (DoD Autonomy Assessment), p. 3. For AI systems more generally, see David Leslie, Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector, Alan Turing Institute, London, 2019, p. 31.

17 See e.g. Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense, DoD, Washington, DC, 2019, p. 16, available at: https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF; International Committee of the Red Cross (ICRC), Artificial Intelligence and Machine Learning in Armed Conflict: A Human-Centred Approach, Geneva, 2019, p. 11; Ministère des Armées, L'intelligence artificielle au service de la défense, Ministère des Armées, Paris, 2019, p. 7.

18 See Ram Shankar Siva Kumar et al., “Adversarial Machine Learning – Industry Perspectives”, 2020, p. 2, available at: http://arxiv.org/abs/2002.05646; Paul Scharre, Autonomous Weapons and Operational Risk, Center for a New American Security, Washington, DC, 2016, pp. 34–36.

19 Integrity attacks may nonetheless be preparatory for a poisoning attack or model replacement.

20 Comiter similarly divides adversarials broadly into adversarial inputs and poisoning. Marcus Comiter, Attacking Artificial Intelligence: AI's Security Vulnerability and What Policymakers Can Do about It, Belfer Center for Science and International Affairs, Cambridge, 2019, pp. 7–10.

21 P. Scharre, above note 5, p. 182.

22 Christoph Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, Leanpub, 2019, p. 10; UK House of Lords, Select Committee on Artificial Intelligence, Report of Session 2017–19: AI in the UK: Ready, Willing, and Able?, HL Paper No. 100, 16 April 2018, p. 14.

23 Bathaee analogizes the situation to teaching a child how to ride a bike: "Although one can explain the process descriptively or even provide detailed steps, that information is unlikely to help someone who has never ridden one before to balance on two wheels. One learns to ride a bike by attempting to do so over and over again and develops an intuitive understanding." Yavar Bathaee, "The Artificial Intelligence Black Box and the Failure of Intent and Causation", Harvard Journal of Law and Technology, Vol. 31, No. 2, 2018, p. 902.

24 See e.g. Fan-jie Meng et al., “Visual-Simulation Region Proposal and Generative Adversarial Network Based Ground Military Target Recognition”, Defence Technology, Vol. 18, No. 11, 2022.

25 US Air Force Office of the Chief Scientist, Autonomous Horizons: System Autonomy in the Air Force – A Path to the Future, Vol. 1: Human-Autonomy Teaming, AF/ST TR 15-01, 1 June 2015, p. 22.

26 Brian Haugh, David Sparrow and David Tate, The Status of Test, Evaluation, Verification, and Validation (TEV&V) of Autonomous Systems, Institute for Defense Analyses, Alexandria, 2018, pp. 2–3.

27 Arthur Holland-Michel, Known Unknowns: Data Issues and Military Autonomous Systems, United Nations Institute for Disarmament Research (UNIDIR), Geneva, 2021, p. 1, available at: https://unidir.org/publication/known-unknowns.

28 Vincent Boulanin, Mapping the Development of Autonomy in Weapon Systems: A Primer on Autonomy, Stockholm International Peace Research Institute, Stockholm, 2016, p. 22.

29 UK House of Lords, above note 22, p. 45.

30 See e.g. Ministère des Armées, above note 17, p. 5; A. Holland-Michel, above note 27, p. 27.

31 ICRC, above note 17, p. 11.

32 A survey found that many organizations neglect extra security testing and verification, assuming instead that available datasets are safe and tested against adversarials. Evidently, this is not the case. R. S. S. Kumar et al., above note 18, p. 3.

33 M. Comiter, above note 20, p. 29.

34 This is potentially problematic for legal labels, e.g. letting laypeople label whether a person is a combatant or not.

35 Ibid., p. 38.

36 See Shiqi Shen, Shruti Tople and Prateek Saxena, “Auror: Defending against Poisoning Attacks in Collaborative Deep Learning Systems”, in ACSAC ’16: Proceedings of the 32nd Annual Conference on Computer Security Applications, New York, 2016, p. 517, available at: https://dl.acm.org/doi/10.1145/2991079.2991125.

37 Tianyu Gu, Brendan Dolan-Gavitt and Siddharth Garg, BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, p. 3, available at: http://arxiv.org/abs/1708.06733.

38 Ibid., p. 1.

39 E.g. Defense Innovation Board, above note 17, p. 16; ICRC, above note 17, p. 11.

40 Lee Fang, "Google Hired Gig Economy Workers to Improve Artificial Intelligence in Controversial Drone-Targeting Project", The Intercept, 4 February 2019, available at: https://theintercept.com/2019/02/04/google-ai-project-maven-figure-eight/. For such projects, crowdsourcing is often used to recruit online freelance workers tasked with labelling data for minimal remuneration.

41 Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David Andersen and George E. Dahl, Motivating the Rules of the Game for Adversarial Example Research, 2018, p. 13, available at: http://arxiv.org/abs/1807.06732.

42 See Justin Gilmer et al., Adversarial Spheres, 2018, available at: http://arxiv.org/abs/1801.02774; Andrew Ilyas et al., Adversarial Examples Are Not Bugs, They Are Features, 2018, available at: http://arxiv.org/abs/1905.02175.

43 No system can be 100% reliable, including AI. IHL does not require perfect systems, but rather that the best effort is taken to uphold its rules and to spare civilians. Geoffrey S. Corn, "Autonomous Weapons Systems: Managing the Inevitability of ‘Taking the Man out of the Loop’", in Nehal Bhuta, Susanne Beck, Robin Geiß, Hin-Yan Liu and Claus Kreß (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge University Press, Cambridge, 2016, p. 228. No pronouncements are made here on the broader legal or ethical discussion of whether such decisions should be ceded to AI in the first place. Instead, it is merely observed that if AI systems are used to make decisions on the battlefield, it is inevitable that accidents will happen.

44 There is no fixed terminology for this type of countermeasure, which has also been called an evasion attack, out-of-distribution (OOD) input or input attack. The term "adversarial input" is used in this article for its descriptiveness (i.e., a malicious input by an adversary).

45 Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt and Aleksander Madry, A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, International Conference on Machine Learning, 2019, p. 1.
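
As a purely illustrative technical aside (not drawn from the cited paper), the sketch below probes an off-the-shelf image classifier with a rotated and shifted copy of the same input, in the spirit of the "simple transformations" studied by Engstrom et al.; the pretrained model, image path and transformation values are assumptions made for illustration only.

```python
# Minimal sketch: does a rotation and translation change a classifier's output?
# The pretrained model, image path and transformation values are assumptions.
import torch
from torchvision import models, transforms
from torchvision.transforms import functional as TF
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

image = preprocess(Image.open("vehicle.jpg").convert("RGB"))  # hypothetical input
# A modest rotation and shift -- no pixel-level "noise" is added at all.
shifted = TF.affine(image, angle=15.0, translate=[20, 10], scale=1.0, shear=[0.0])

with torch.no_grad():
    original_label = model(image.unsqueeze(0)).argmax(dim=1).item()
    shifted_label = model(shifted.unsqueeze(0)).argmax(dim=1).item()

# For a brittle model, the two labels can differ even though a human observer
# would describe both images identically.
print(original_label, shifted_label)
```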

46 Adversarial inputs are not limited to visual stimuli; audio and digital signals can also be manipulated to evoke a desired output.

47 Alejandro Arrieta et al., "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI", Information Fusion, Vol. 58, 2020, p. 101.

48 A. Ilyas et al., above note 42, p. 1.

49 See M. Comiter, above note 20, p. 17.

50 Ibid., p. 18.

51 Ibid., p. 24.

52 J. Gilmer et al., above note 41, p. 2.

53 See e.g. T. Gu, B. Dolan-Gavitt and S. Garg, above note 37, p. 6.

54 This level of distortion is invisible to the human eye, even if the original and manipulated images are placed side-by-side for comparison (a luxury no commander will have on the battlefield). To experience the complete imperceptibility of such perturbations, see Figure 5 in Christian Szegedy et al., Intriguing Properties of Neural Networks, 2014, p. 6, available at: http://arxiv.org/abs/1312.6199.
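
As a purely illustrative technical aside, perturbations of this kind can be produced with a few lines of gradient-based code. The sketch below uses the widely known fast gradient sign method rather than the optimization approach of Szegedy et al.; the model, "true" class index and perturbation budget are assumptions made for illustration.

```python
# Minimal sketch of an imperceptibly small adversarial perturbation using the
# fast gradient sign method (FGSM), not the exact method of Szegedy et al.
# The model, class index and perturbation budget are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed image
true_label = torch.tensor([407])                     # hypothetical correct class

loss = F.cross_entropy(model(x), true_label)
loss.backward()

# Shift every pixel by at most 2/255 in the direction that increases the loss;
# such a change is far smaller than what a human observer can perceive.
epsilon = 2.0 / 255.0
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```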

55 M. Comiter, above note 20, p. 17.

56 T. Krupiy, above note 5, p. 33.

57 Will Knight, “The Dark Secret at the Heart of AI”, Technology Review, 11 April 2017, available at: www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai.

58 Adversarial inputs have justifiably been raised as significant threats to military systems with higher autonomy. See Ministère des Armées, above note 17, p. 7; Defense Science Board, above note 11, p. 14; P. Scharre, above note 5, pp. 180–182.

59 For more technical explanations, see D. Leslie, above note 16, p. 33; T. Gu, B. Dolan-Gavitt and S. Garg, above note 37, pp. 4–5.

60 As such, the deployer could have faithfully discharged its Article 36 review obligations and concluded in good faith that the system was reliable enough to be fielded, while being unaware that it was poisoned. See AP I, above note 6, Art. 36.

61 M. Comiter, above note 20, p. 32.

62 Marco Barreno, Blaine Nelson, Anthony D. Joseph and J. D. Tygar, "The Security of Machine Learning", Machine Learning, Vol. 81, No. 2, 2010, p. 127.

63 For different methods of effectuating a poisoning attack, see Micah Goldblum et al., Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses, 2020, pp. 2–10, available at: https://arxiv.org/abs/2012.10544.

64 Ibid., p. 1.

65 M. Comiter, above note 20, p. 30.

66 D. Leslie, above note 16, p. 33.

67 T. Gu, B. Dolan-Gavitt and S. Garg, above note 37, p. 2.

68 Ibid., p. 6.

69 The DoD likens this property of AI to monoculture: it only takes “one or a small number of weaknesses to endanger a large proportion of the force”. DoD Autonomy Assessment, above note 16, pp. 3–4.

70 A racist could poison a weapon system to always kill persons from a certain minority group just because he hates them, for example.

71 This term is used colloquially and not in a criminal law sense.

72 See the below section entitled “Synthesis: When Is an Adversarial an Attack?”

73 P. Scharre, above note 18, pp. 38–39.

74 See also W. H. Boothby, above note 6, p. 155.

75 P. Scharre, above note 5, p. 187.

76 Chris Mayer, “Developing Autonomous Systems in an Ethical Manner”, in Andrew Williams and Paul Scharre (eds), Autonomous Systems: Issues for Defence Policymakers, NATO, The Hague, 2015, p. 73.

77 Michael N. Schmitt, “Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics”, Harvard National Security Journal, 2013, p. 7, available at: https://harvardnsj.org/wp-content/uploads/2013/02/Schmitt-Autonomous-Weapon-Systems-and-IHL-Final.pdf.

78 M. N. Vego, above note 7, p. 61.

79 Burrus Carnahan, "The Law of Air Bombardment in Its Historical Context", Air Force Law Review, Vol. 17, No. 2, 1975, fn. 114.

80 Bombing campaigns against civilians with this aim were relatively common pre-World War II but are explicitly unlawful in contemporary law: see AP I, Art. 51(2). Recent examples can be found in the Russo-Ukrainian conflict. Their effectiveness is debatable.

81 Michael N. Schmitt, “Bellum Americanum: The U.S. View of Twenty-First Century War and Its Possible Implications for the Law of Armed Conflict”, in Michael N. Schmitt and Leslie C. Green (eds), The Law of Armed Conflict: Into the Next Millennium, Naval War College, Newport, 1998, p. 409.

82 C. Mayer, above note 76, p. 76.

83 Sarah E. Kreps, "The 2006 Lebanon War: Lessons Learned", US Army War College Quarterly: Parameters, Vol. 37, No. 1, 2007, pp. 72–73, available at: https://press.armywarcollege.edu/parameters/vol37/iss1/7.

84 See e.g. Simen Thys, Wiebe Van Ranst and Toon Goedemé, Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection, 2019, available at: http://arxiv.org/abs/1904.08653.

85 See e.g. T. Krupiy, above note 5, p. 33.

86 AP I, Art. 37(1). Note that we are talking here of “simple perfidy”; see the below section entitled “Thoughts on False Negative Strategies”.

87 See ibid., Art. 85(3)(f).

88 Some have argued that this would be a necessary prerequisite for the use of systems with higher levels of autonomy. E.g. Jeffrey S. Thurnher, “Feasible Precautions in Attack and Autonomous Weapons”, in Wolff Heintschel von Heinegg, Robert Frau and Tassilo Singer (eds), Dehumanization of Warfare, Springer International Publishing, Cham, 2018, p. 110.

89 S. Shen, S. Tople and P. Saxena, above note 36, p. 510.

90 J. Gilmer et al., above note 41, p. 3 (emphasis in original).

91 S. Shen, S. Tople and P. Saxena, above note 36, p. 510.

92 M. Comiter, above note 20, p. 11.

93 W. H. Boothby, above note 6, p. 154.

94 J. Gilmer et al., above note 41, p. 3.

95 Some kind of custom trigger is likely required because otherwise, the deployer would notice earlier that its systems tend to blow each other up.
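
To make the idea of a custom trigger concrete, the sketch below shows, hypothetically, how BadNets-style poisoning (above note 37) stamps a small pixel patch onto a fraction of training images and flips their labels, so that a model trained on the data behaves normally except when the trigger appears. The dataset shape, patch location and target label are assumptions for illustration only, not the method of any cited system.

```python
# Hypothetical sketch of BadNets-style trigger poisoning (cf. above note 37):
# a small patch is stamped on a fraction of training images and their labels are
# flipped, so the trained model misbehaves only when the trigger is present.
# Dataset shape, patch location and target label are illustrative assumptions.
import random
import torch

def poison_dataset(images: torch.Tensor, labels: torch.Tensor,
                   target_label: int = 0, poison_fraction: float = 0.05):
    """images: [N, C, H, W] in [0, 1]; labels: [N]. Returns poisoned copies."""
    images, labels = images.clone(), labels.clone()
    poisoned = random.sample(range(len(images)), int(len(images) * poison_fraction))
    for i in poisoned:
        images[i, :, -4:, -4:] = 1.0   # white 4x4 trigger patch in a corner
        labels[i] = target_label       # relabel to the adversary's chosen class
    return images, labels

# Usage with stand-in data: clean accuracy is largely preserved after training,
# but inputs bearing the trigger are steered toward target_label.
imgs = torch.rand(1000, 3, 32, 32)
lbls = torch.randint(0, 10, (1000,))
poisoned_imgs, poisoned_lbls = poison_dataset(imgs, lbls)
```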

96 P. Scharre, above note 18, p. 39.

97 Ibid., p. 39.

98 As Holland-Michel argues, deployers must take all feasible measures to prevent AI failures, including “unintended harm resulting from adversarial data issues”. A. Holland-Michel, above note 27, p. 13. In Boothby's view, AP I Art. 57(1) implies that “everything feasible must be done to seek to ensure those systems remain robust against the kinds of known cyber interference that would render the use of such weapon systems indiscriminate”. W. H. Boothby, above note 8, p. 385 (emphasis added). Ideally such quality assurance should have been carried out as part of the weapon owner's Article 36 obligation. Commanders are entitled to trust in their military organization's positive weapon review, but are also required to remain vigilant if signs in the field indicate that the weapon is compromised or is being targeted for an adversarial.

99 W. H. Boothby, above note 6, p. 154.

100 Jean-Marie Henckaerts and Louise Doswald-Beck (eds), Customary International Humanitarian Law, Vol. 1: Rules, Cambridge University Press, 2005, Rule 57, available at: https://ihl-databases.icrc.org/en/customary-ihl.

101 Program on Humanitarian Policy and Conflict Research at Harvard University, HPCR Manual on International Law Applicable to Air and Missile Warfare, Cambridge University Press, Cambridge, 2013 (HPCR Manual), para. 116.

102 AP I, Arts 51(4), 57(2).

103 S. Estreicher, above note 8, p. 435.

104 This analysis ignores the mens rea component which would be necessary for this act to constitute a war crime.

105 Boothby argues that AP I Art. 51(4)(b) implies the obligation to ensure that systems “remain robust against the kinds of known cyber interference that would render the use of such weapon systems indiscriminate”. W. H. Boothby, above note 8, p. 384.

106 Michael N. Schmitt, "'Attack' as a Term of Art in International Law: The Cyber Operations Context", in C. Czosseck, R. Ottis and K. Ziolkowski (eds), 4th International Conference on Cyber Conflict: Proceedings 2012, NATO CCD COE Publications, 2012, p. 290.

107 Laurent Gisel, Tilman Rodenhäuser and Knut Dörmann, "Twenty Years On: International Humanitarian Law and the Protection of Civilians against the Effects of Cyber Operations during Armed Conflicts", International Review of the Red Cross, Vol. 102, No. 913, 2020, p. 312; Cordula Droege, "Get Off My Cloud: Cyber Warfare, International Humanitarian Law, and the Protection of Civilians", International Review of the Red Cross, Vol. 94, No. 886, 2012, p. 552.

108 See e.g. AP I, Art. 57(2)(a)(ii).

109 L. Gisel, T. Rodenhäuser and K. Dörmann, above note 107, p. 312.

110 This debate is also unsettled with regard to cyber attacks. See e.g. M. N. Schmitt, above note 14, p. 338.

111 A cyber operation designed to disable air defence would be considered an attack by the same rationale. C. Droege, above note 107, p. 560.

112 Nils Melzer, Cyberwarfare and International Law, UNIDIR, 2011, p. 28, available at: https://unidir.org/sites/default/files/publication/pdfs//cyberwarfare-and-international-law-382.pdf (emphasis added).

113 This term is used as a counterpart to “attacker” and not in a ius ad bellum sense.

114 US Office of General Counsel of the Department of Defense, Law of War Manual, DoD, Washington, DC, June 2015 (updated December 2016) (DoD Manual), p. 418.

115 M. N. Schmitt, above note 106, p. 290.

116 AP I, Arts 57(1), 58.

117 Ibid., Art. 58(c).

118 See e.g. GGE on LAWS, above note 16, Annex IV(f).

119 Oslo Manual, above note 12, Rule 22.

120 Michael N. Schmitt (ed.), Tallinn Manual 2.0 on the International Law Applicable to Cyber Warfare, 2nd ed., Cambridge University Press, Cambridge, 2017 (Tallinn Manual), Rule 114; DoD Manual, above note 114, para. 5.2; L. Gisel, T. Rodenhäuser and K. Dörmann, above note 107, p. 326.

121 W. H. Boothby, above note 6, p. 154.

122 See DoD Manual, above note 114, para. 16.5.2.

123 AP I, Art. 51(8).

124 DoD Manual, above note 114, para. 5.16.4. This analysis ignores the debate related to voluntary shields.

125 The Oslo Manual addresses modern capita selecta such as cyber operations, outer space and autonomous technologies. As with the HPCR Manual (above note 101), it does not constitute hard law but records the relative consensus of an international group of experts.

126 The Manual does not define “cyber hacking”.

127 Oslo Manual, above note 12, p. 41.

128 Ibid.

129 Ibid.

130 We will avoid the term “attack” for the moment due to its legal weight.

131 Ibid.

132 Ibid. If the Oslo Manual intended to paint the hacker as effectuating their own attack, it would have referenced Arts. 51(4) and 57(2)(a)(iii) instead.

133 Ibid., p. 42.

134 David Turns, “Cyber War and the Concept of ‘Attack’ in International Humanitarian Law”, in Dan Saxon (ed.), International Humanitarian Law and the Changing Technology of War, Brill Nijhoff, Leiden, 2013, p. 224.

135 Michael N. Schmitt, "Cyber Operations and the Jus in Bello: Key Issues", Israel Yearbook on Human Rights, Vol. 41, 2011, p. 94, available at: https://digital-commons.usnwc.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1077&context=ils.

136 L. Gisel, T. Rodenhäuser and K. Dörmann, above note 107, p. 313.

137 Tallinn Manual, above note 120, p. 416.

138 W. H. Boothby, above note 6, p. 157.

139 Ibid., pp. 157–158.

140 Ibid., p. 158.

141 Common-law concepts are used for their higher level of descriptiveness. See Thomas Weigend, “Subjective Elements of Criminal Liability”, in Markus Dubber, Tatjana Hörnle and Thomas Weigend (eds), The Oxford Handbook of Criminal Law, Oxford University Press, Oxford, 2014, pp. 494–500.

142 Even failed or intercepted attacks count as attacks as long as the intent to inflict violence was direct, underscoring the primary importance of the adversary's purpose. Program on Humanitarian Policy and Conflict Research at Harvard University, Commentary on the HPCR Manual on International Law Applicable to Air and Missile Warfare, Harvard University, Cambridge, MA, 2010 (HPCR Commentary), p. 28; Tallinn Manual, above note 120, p. 419.

143 Tallinn Manual, above note 120, p. 416 (emphasis added). For State support of this interpretation, see L. Gisel, T. Rodenhäuser and K. Dörmann, above note 107, p. 313.

144 M. N. Schmitt, above note 135, p. 94.

145 See the above section entitled “False Positives”.

146 The exact risk is very circumstantial. For example, using this adversarial against systems in an open field versus in an urban area would generate very different levels of risk to civilians.

147 See HPCR Commentary, above note 142, p. 91.

148 A. Holland-Michel, above note 27, p. 13.

149 L. Gisel, T. Rodenhäuser and K. Dörmann, above note 107, p. 312.

150 W. H. Boothby, above note 6, p. 154.

151 See the above section entitled “Related Legal Concepts”.

152 At least in the case of “pure” false negatives. Cf. the above section entitled “Specific versus Random”.

153 W. H. Boothby, above note 6, p. 154; A. Holland-Michel, above note 27, p. 13. The fact that the deployer can only be held responsible for foreseeable and identifiable adversarials also applies to weapon reviews. To take poisoning as an example, if the deployer recklessly approves models that were constructed using unsafe data collection methods (see the above section entitled "Vulnerabilities of Modern AI"), it is easier to argue that the deployer violated its review and constant care obligations than if a very sophisticated adversary manages to covertly install a backdoor in the system despite the deployer having taken all reasonable steps to ensure data security.

154 W. H. Boothby, above note 8, p. 385 fn. 57.

155 AP I, Art. 85(3)(f).

156 See Sean Watts, "Law-of-War Perfidy", Military Law Review, Vol. 219, 2014, p. 153. Simple perfidy is used by Watts to refer to actions that betray the opponent's confidence in IHL but which only lead to military or tactical benefit without causing any death, injury or capture. Cf. AP I, Arts 37(1) and 85(3)(f), which are result violations.

157 Yves Sandoz, Christophe Swinarski and Bruno Zimmermann (eds), Commentary on the Additional Protocols, ICRC, Geneva, 1987, para. 1500.

158 Michael N. Schmitt, "Human Shields in International Humanitarian Law", Columbia Journal of Transnational Law, Vol. 47, No. 2, 2009, p. 293.

159 Ibid., p. 298.

160 See AP I, Art. 51(7).

161 Keren Shapira-Ettinger, "The Conundrum of Mental States: Substantive Rules and Evidence Combined", Cardozo Law Review, Vol. 28, 2007, p. 2582.

162 W. H. Boothby, above note 6, p. 157.

163 K. Shapira-Ettinger, above note 161, p. 2584.

164 For example, patterns can be quickly removed by the adversary after the adversarial succeeds.