Death By Algorithm


 

Legged Squad Support System robot prototype. Credit: DARPA

 

 

In 2007, Noel Sharkey published a dire warning in The Guardian titled “Robot Wars Are a Reality.” An expert in artificial intelligence and robotics, Sharkey expressed concern about the use of battlefield robots: electronic soldiers that act independently of any human control. Sharkey argued that we “are sleepwalking into a brave new world where robots decide who and when to kill.” I remember reading this article and wondering whether killer robots could really become a reality. Would humanity allow robots to make life-and-death decisions? Why do human beings continually develop malevolent technologies?

 

War is no longer fought primarily on the battlefield. Developments in computing, cyber warfare and artificial intelligence have changed the way nations and non-state actors engage in hostilities. Technologies that enable point-and-shoot (or point-and-click) destruction are advancing rapidly year by year, fueled by a digital revolution that has ricocheted across the globe. Nations have realized that today’s conflicts are waged in 1s and 0s, and that algorithms can be trusted allies in the never-ending war. It is now essential for the world to grasp that robot wars are no longer just the imaginings of science fiction, but a possibility with grave consequences for our future.

 

Countries such as the United States, Israel, the U.K., Russia, South Korea and China are heavily funding research and development in science and technology, with special attention paid to the cutting-edge fields of A.I. and robotics. The U.S. is particularly concerned about its technological edge relative to China and Russia, and the 2017 defense budget reflects this apprehension. In his opening statement to the House Appropriations Committee, Secretary of Defense Ash Carter proudly outlined upcoming war expenditures and unveiled a plan to spend $34 billion on cyber, electronic warfare, and space “to among other things help build our cyber mission force, develop next generation electronic jammers, and prepare for the possibility of a conflict that extends into space. In short, DoD will continue to ensure dominance in all domains.” And yes, space conflict is the next frontier, a possible reality that extends the battlefield into environments opened up by developments in aerospace technology and robotic warfare.

 

So, how does one make sense of this “brave new world”? Are killer robots and space wars faits accomplis?

 

On April 11, 2016, parties to the Convention on Certain Conventional Weapons (CCW) will convene in Geneva for the third session of informal talks on emerging technologies in the realm of lethal autonomous weapons systems. International dignitaries, human rights activists, scientists and academics hope to find common ground on the best way forward, and will pay particular attention to the concept of “meaningful human control.” Since discussions amongst decision makers are proceeding far more slowly than the technology is advancing, it is time to stop talking and get busy creating international law designed to prevent the development and use of killing machines.

 

With the rise of autonomous weapons, we are confronted with questions that must be answered. With weapons that act on their own, what mechanisms will protect innocent civilians from harm? Who is accountable when those machines malfunction: the developers who create them, or the states that deploy them? Can robots be programmed to comply with international human rights law? And since computers are susceptible to viruses and hacking, it is reasonable to assume that the systems controlling killer robots could be hijacked by state and non-state actors.

 

The best way to avoid such absurd dystopian scenarios is to implement an international ban on the development and use of lethal autonomous weapons systems. This necessity was acknowledged by many who participated in the last round of informal talks at the CCW. Steve Goose, Director of the Arms Division at Human Rights Watch, urged parties to the CCW to develop formal state policies on autonomous weapons as an important step towards implementing change, and asked the CCW to form a Group of Governmental Experts (GGE) in order to move out of the realm of discourse and into the arena of concrete action.

 

According to Mary Wareham, Advocacy Director of the Arms Division at Human Rights Watch, the emerging agreement among parties that a human must remain in the weapons loop is tantamount to a ban on lethal autonomous weapons. As global coordinator for the Campaign to Stop Killer Robots, Wareham is well positioned to unpack the key points of the issue.

 


In a phone interview, Wareham cited the precedent of CCW Protocol IV on Blinding Laser Weapons, which human rights advocates consider a monumental development in international human rights law. Through the CCW, blinding laser weapons were banned before they were developed; Protocol IV thus pre-emptively prevented their use in war. Wareham argues that the “CCW is the place to negotiate international human rights law, and within this mechanism a sixth protocol could easily be added.” By adding another protocol banning lethal autonomous weapons, the CCW could head off the robotic arms race that is inevitable if developments in robotic warfare continue unchecked. Wareham is hopeful that a ban will be supported by Zimbabwe, Bolivia, Cuba, Ecuador, Egypt, Ghana, the Holy See, Pakistan and Palestine, but asserts that if concrete action is not taken soon, “the future will not get any better.”

 

During the CCW talks last November, the UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns, argued that lethal autonomous weapons are an affront to human dignity, stating: “a human being in the sights of a fully autonomous machine is reduced to being an object - merely a target. This is death by algorithm; it has also been called ethics by numbers.” Heyns’s statement highlights the heart of this issue. Can a robot be programmed to understand the value of life?

 

Ronald Arkin, Professor in the School of Interactive Computing at the Georgia Institute of Technology, believes the answer is yes. Arkin argues that “humanity has a rather dismal record in ethical behavior in the battlefield,” and that robotic soldiers could be programmed to follow the rules of international law far better than humans can. But considering that it is people who program the machines, how can we trust the programmers? If machines do not truly comprehend emotions, how can they be expected to understand moral issues? It is also important to remember that using robot soldiers will not change the reasons humans engage in conflict; in fact, killer robots may make it easier to wage war, as we have already seen in the growing preference for drone strikes over traditional warfare.

 

One way robotic wars can be avoided is if industry refuses to develop killing machines. The Canadian company Clearpath Robotics develops autonomous robotic systems but refuses to build lethal autonomous weapons; other robotics companies could follow its lead. The Future of Life Institute recently released an open letter from A.I. and robotics researchers demanding a ban on lethal autonomous weapons; so far, more than 22,000 individuals have signed it. At the CCW talks last November, the Women’s International League for Peace and Freedom voiced its concern, arguing that “the use of force has already become too disengaged from human involvement, with the use of armed drones. Autonomous weapons go beyond remotely controlled drones, devolving life and death decision making to software and sensors.”

 

A robot dystopia can be avoided, but the time to act is now. While life-and-death decisions are still being debated, the software to destroy life is advancing rapidly. So, how will the international community respond?

 

 
