Dancing With Killer Robots

  • February 28, 2021
  • Clayton Rice, K.C.

Lethal autonomous weapons are a type of robot that can independently search out and engage targets without meaningful human control. They are also called lethal autonomous weapon systems (LAWS), lethal autonomous robots (LAR) and autonomous weapons systems (AWS). The international non-governmental organization Human Rights Watch (HRW) has expressed “serious doubts” about whether autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality and necessity. As the global coordinator of the Campaign to Stop Killer Robots, Human Rights Watch has called for a preemptive ban on the development of fully autonomous weapons. The development of lethal autonomous weapons is, together with climate change and facial recognition technology, among the gravest threats to human dignity in the early twenty-first century.

1. Introduction

The nonpartisan U.S. Congressional Research Service, in a document titled Defense Primer: U.S. Policy on Lethal Autonomous Weapons Systems (here), described LAWS as “a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system.” There is, however, no universal definition that has been accepted internationally. The U.S. Department of Defense Directive (DODD) 3000.09 provides definitions for different categories of LAWS that are grounded in “the role of the human operator with regard to target selection and engagement decisions”.

The directive defines LAWS as “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator.” This concept of autonomy, also called “human out of the loop” or “full autonomy”, is distinct from human-supervised, “human on the loop” systems in which “operators have the ability to monitor and halt a weapon’s target engagement.” Semi-autonomous weapons include “fire and forget” weapons that “deliver effects to human-identified targets using autonomous functions.”

DODD 3000.09 requires that all systems be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” However, what is “appropriate” may differ across weapon systems and domains of warfare. Furthermore, human judgment over the “use of force” does not require manual human “control” of the weapon system but, rather, “broader human involvement in decisions about how, when, where, and why the weapon will be employed.”

In a paper titled Autonomous weapons systems: how to work towards a total ban, published by the Canadian Bar Association (here), Christiane Saad and Ewa Gosal identified three categories of systems. The first category, human-in-the-loop, requires a human operator in the selection and engagement of targets. The second category, human-on-the-loop, allows human intervention to terminate an engagement “with the exception of time-critical attacks on platforms or installations”. The third category, human-out-of-the-loop, allows a system to select and engage targets without human intervention.

2. What is Autonomy?

The third category, human-out-of-the-loop, corresponds to the definition used by the International Committee of the Red Cross (ICRC) “of those systems able to independently select and attack targets with autonomy in the ‘critical functions’ of acquiring, tracking, selecting and attacking targets without human intervention.” (here) Saad and Gosal have argued that “[a]utonomy results from the delegation of a decision to an authorized entity to take action within specific boundaries.” An important distinction, they said, is that “systems governed by prescriptive rules that permit no deviations are automated, but they are not autonomous.” To be autonomous, a system must have the capability of selecting among different courses of action “based on its knowledge and understanding of the world, itself, and the situation.”

Although the use of artificial intelligence (AI) may reduce collateral damage, fully autonomous weapon systems present a new moral dilemma, which is why some scientists and international organizations have called for a preemptive ban. AI raises a myriad of ethical concerns, including questions of bias, transparency and accountability. “With millions of lines of code in each application,” Saad and Gosal said, “it is difficult to know what values are inculcated in software and how algorithms actually reach decisions.” It will be difficult for fully autonomous weapon systems to distinguish between a civilian and a combatant, a limitation that implicates the right to life under Article 3 of the Universal Declaration of Human Rights (1948).

3. The Right to Life

In a 2014 report titled Shaking the Foundations: The Human Rights Implications of Killer Robots, Human Rights Watch described the right to life as the “bedrock of international human rights law.” (here) In the International Covenant on Civil and Political Rights (1966), Article 6 states that “[e]very human being has the inherent right to life” of which “[n]o one shall be arbitrarily deprived”. In the context of law enforcement, HRW argued that a killing is arbitrary under the right to life when it fails to meet “three cumulative requirements” for when and how much force may be used. To be lawful, force must be (a) necessary, (b) a last resort and (c) proportionate. The three prerequisites are instructive in positioning the right to life in the context of the diverging views on fully autonomous weapons.

Proponents argue that LAWS would reduce the risk to soldiers’ lives and decrease military expenditures. Opponents counter that these systems would endanger civilians because they would lack compassion, empathy and judgment – necessities of the subjective assessment underlying the protections of international law. Proponents say that roboticists could theoretically develop technology with sensors able to interpret complex situations and the ability to exercise near-human judgment. Opponents question that assumption, emphasizing that such technology will not be possible in the near future and that allowing continued development will lead to a “robotic arms race”. (at p. 6)

In a 2020 report titled Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control, HRW reviewed the policies of ninety-seven nations that have publicly stated their views on killer robots since 2013. (here) The vast majority regard human control as critical to the legality of weapon systems. “Removing human control from the use of force is now widely regarded as a grave threat to humanity that, like climate change, deserves urgent multilateral action,” said Mary Wareham, arms division advocacy director at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots. “An international ban treaty is the only effective way to deal with the serious challenges raised by fully autonomous weapons.”

The question of how to respond to the concern has “steadily climbed the international agenda” since 2013, according to the recent HRW report. The urgency is characterized by a singular theme in the call for international collaboration – a ban on fully autonomous weapons and reinforcement of meaningful human control over the use of force. There are two routes to such an instrument. It could be developed in the form of a new protocol to the 1980 Convention on Conventional Weapons (CCW). (here) Or, the systems could be banned by a stand-alone treaty negotiated by a process similar to the initiatives that prohibited antipersonnel land mines in 1997 and cluster munitions in 2008. Canada is in the majority of nations supporting a preemptive ban and participated in every CCW meeting on killer robots from 2014 to 2019. (at p. 11)

Any discussion about killer robots should also consider the deployment of cyberweapons by state actors involved in attacks like the SolarWinds breach that I discussed in a recent post to On The Wire. (here) In a piece titled Do Killer Robots Violate Human Rights? published in The Atlantic, Professor Patrick Lin of California Polytechnic State University argued that we need to think about the broader implications of invoking the principles of the right to life and the right to human dignity. There has been very little discussion about meaningful human control over cyberweapons. “They often attack autonomously, proliferate indiscriminately, can result in physical harm, and cannot be stopped,” he said. (here)

4. Proposed Treaty

A new treaty, proposed by HRW, would apply to all weapon systems that select and engage targets based on sensor processing rather than human input. In a report titled Elements of and Models for a Treaty on Killer Robots, HRW urged that the substantive content must include (a) a general obligation to maintain meaningful human control over the use of force, (b) prohibitions banning the development, production, and use of weapon systems that autonomously select and engage targets and, by their nature, pose fundamental moral or legal problems, and (c) specific positive obligations aiming to ensure that meaningful human control is maintained in the use of all other systems that select and engage targets. (here)

Human Rights Watch suggested that the general obligation to maintain meaningful human control establishes an “overarching principle” that can close unexpected loopholes in other provisions of the treaty. It may also guide interpretation of the prohibitions and positive obligations. The focus of the general obligation on control over conduct (“use of force”) rather than control over a specific system “helps future-proof the treaty by obviating the need to foresee all possible technologies in a rapidly developing field.”

Since 2014, states parties to the Convention on Conventional Weapons have held eight meetings, attended by more than 100 nations, to discuss the moral, legal, accountability and security implications of lethal autonomous weapons. But, as Human Rights Watch observed in the report on the elements of a proposed treaty, “the gravity of the problem warrants a much more urgent response.” A majority of CCW nations, and the Campaign to Stop Killer Robots, have called for the negotiation of a legally binding instrument. The Campaign advocates for a treaty to maintain meaningful human control over the use of force and prohibit weapon systems that operate without such control. (here)

To “increase momentum” for the adoption of a timely binding instrument, HRW made three recommendations in the report on treaty elements. First, nations should launch negotiations by the end of 2021 with the aim of “swiftly adopting” a treaty to retain meaningful human control over the use of force and prohibit weapons systems that lack such control. Second, nations should consider and build on the precedent of earlier treaties and normative frameworks to address the concerns posed by fully autonomous weapons. Third, nations should articulate their national positions on the structure and content of a new treaty.

5. Conclusion

As suggested by HRW in the report on treaty elements, other international humanitarian law treaties have used general obligations to set out the “core purpose and foundational principles” of a binding instrument and to “inform interpretation of more specific provisions”. The 1977 Additional Protocol I to the Geneva Conventions provides a viable precedent because “its origins resemble those of the proposed fully autonomous weapons treaty.” Protocol I responded to new developments in weapons technology. Aerial bombing, for example, did not exist at the time of the 1907 Hague Regulations. “The development of autonomy in weapons systems likewise creates an impetus to clarify and strengthen the law,” the report concludes.

I will leave you, then, with this comment made by U.N. Secretary-General António Guterres at a 2019 meeting of AI experts in Geneva: “[M]achines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.” (here) Oh, and I will leave you with this too. There is an exchange in the James Cameron film Terminator 2: Judgment Day between John Connor, the hunted, and the benign Terminator that seems a world away from where we are today. John tells the Terminator that it “just can’t go around killing people” and the Terminator repeatedly asks why. “Because you just can’t, okay?” John insists. “Trust me on this.” Thirty years later, the robotics firm Boston Dynamics published a video on YouTube of its robot dog Spot dancing with other robots to the tune “Do You Love Me?” (here) And with the robot Pepper, manufactured by SoftBank Robotics, patrolling Paris malls to remind shoppers to put on their masks during the pandemic, maybe we have come a step closer to the last waltz with killer robots. (here)
