Will Killer Robots Be Banned? An Update After Geneva

By Julia M. Macdonald for Denver Dialogues 

Over the past year, a growing chorus of voices has raised concerns about the rise of “killer robots,” or lethal autonomous weapon systems (LAWS) as they are more formally known. Policymakers, technology leaders, academics, and NGO activists alike have raised serious questions about the future risks of reducing human control over weapons systems that are able to select and engage targets on their own. On November 13–17, 2017, after several years of debate, the Convention on Certain Conventional Weapons convened a Group of Governmental Experts (GGE) to discuss the future of LAWS. Arms control advocates hoped that the meeting would provide a valuable opportunity to better define this category of weapons, with a view to establishing a preemptive ban on LAWS in the future.

Unfortunately, despite the publicity surrounding the meeting and significant momentum within civil society in favor of a ban, there were few signs of tangible progress at the end of the week in Geneva. An overly broad agenda, continued lack of clarity over crucial definitions, and resistance from key stakeholders in the research and development of LAWS, prevented states from making any serious headway on fundamental questions surrounding the regulation of these new technologies. Advocates of a ban will now have to wait until the next GGE meeting in mid-2018 to see if further progress can be made on these issues.

Difficulties in Establishing a Preemptive Ban

In the lead-up to the Geneva meeting, Michael Horowitz, Professor of Political Science at the University of Pennsylvania, and I wrote a piece in Lawfare on one of the leading advocates for a preemptive ban on LAWS: the Campaign to Stop Killer Robots. The purpose of the post was to draw a comparison between this campaign and others that have attempted to establish weapons bans in the past, highlighting some of the unique barriers that exist to achieving an international prohibition on LAWS.

In particular, we argued that there are at least three hurdles to achieving an international ban. First, the continued lack of definitional clarity and agreement around what constitutes a lethal autonomous weapon is a fundamental obstacle to progress. NGOs and many states agree on the need for “meaningful human control” over weapons, but it remains unclear what that means exactly. Absent agreement by states over the category of weapons to be prohibited, it is difficult to see how governments would agree to any significant restrictions.

Second, the absence (so far) of human casualties from the use of LAWS muddies the waters and makes it hard for arms control advocates to claim that these weapons violate international humanitarian law. Successful weapons bans in the past, such as that on landmines, have relied in large part on their ability to shock and shame governments into action through casualty statistics, images of injured innocent civilians, and a clear demonstration of how these weapons violate international law. Absent these statistics, images, and a strong legal basis, motivating governments to agree to a ban is much harder.

Third, there is continued uncertainty surrounding the military effectiveness of LAWS. This uncertainty makes states, and especially powerful states, wary of committing themselves to a ban that might undermine their national security in the future. Given the possible advantages that autonomous weapons might offer in terms of both speed and reliability, and the disadvantage a state might face if it restricted its ability to field LAWS while an enemy did not, achieving a ban will be difficult.

Despite these significant challenges, however, we concluded that continued dialogue and discussion about what LAWS are, and what “meaningful human control” over weapons really means, is essential to establishing agreement on the appropriate role of humans in decisions surrounding the use of force, something that is of great importance irrespective of one’s views on an outright ban. The meeting in Geneva in November thus provided states, NGOs, academics, and industry experts alike with an important opportunity to engage in this dialogue and move the conversation forward.

What happened in Geneva?

The decision to elevate the discussion of LAWS to a Group of Governmental Experts meeting marked a significant shift in diplomatic formality and granted this group power to better define this class of weapons. The GGE was chaired by Ambassador Singh Gill of India and was mandated with examining emerging technologies in the area of LAWS, building on discussions over the past three years. A total of 86 countries participated in the meeting, in addition to the United Nations Institute for Disarmament Research (UNIDIR), the International Committee of the Red Cross (ICRC), and the Campaign to Stop Killer Robots. Discussions lasted for one week, from 13-17 November 2017.

What were the outcomes of these deliberations? Unfortunately, it seems, not very much. There were some positives for arms control advocates in terms of increasing support for a ban. There are now 22 countries calling for a prohibition on LAWS, with Brazil, Iraq, and Uganda joining the list of endorsers during the week of discussions. In addition, the countries of the Non-Aligned Movement (NAM) for the first time called for the development of a legally binding instrument on lethal autonomous weapons.

But beyond that, little progress was made. Crucially, there was very little headway on the most fundamental challenge facing the regulation of LAWS: definitional clarity about what these weapons are exactly, and what is meant by “meaningful human control.” Instead of focusing narrowly on this key issue, Ambassador Gill’s preparatory paper for the week’s meetings covered a broad array of issues relating to the uses and applications of artificial intelligence in society more broadly. This expansive agenda, combined with the decision to favor panel presentations by external experts over governmental exchanges, has been criticized by civil society participants who had expected more substantive discussions towards a working definition of LAWS.

In addition to the lack of progress on definitional issues, a number of powerful states voiced their concerns over a preemptive ban for many of the reasons outlined in our Lawfare piece. Russia was vocal in its objections, making clear that it would not adhere to a ban, moratorium, or regulation of such weapons. This falls in line with President Putin’s September statement that whoever leads in AI “will be the ruler of the world.” In Geneva, Russian diplomats noted that the fact that such systems do not yet exist, combined with the large class of weapons that LAWS covers, makes any progress on the issue very difficult. For its part, the US government, which has made robotics and autonomy a key pillar of its “Third Offset Strategy,” also cautioned against developing a “premature” definition of such weapons, and highlighted some of the potential benefits of autonomous systems in reducing civilian combat casualties.

Looking Ahead to 2018

There are significant challenges to achieving a preemptive ban on LAWS that should not be underestimated. That being said, there is a growing need for governments to engage in substantive dialogue and discussion about what LAWS are in order to establish the appropriate role for human decision making in the application of military force. The November GGE meeting held some promise of providing a venue for such discussions, but unfortunately yielded few outputs due to the expansive agenda, limited focus on key definitional issues, and the continued resistance from powerful states. We will now have to wait until the next GGE meeting in mid-2018 to see if these challenges can be overcome.
