Q. What is the future of autonomy?
The move toward miniaturization on the one hand and sophisticated, nuclear-capable, dogfighting combat drones on the other both speak to advances in hardware. Another set of developments is taking place in software, in particular in the direction of autonomy. Autonomous systems are just as the name implies: technologies that have been programmed to operate without remote piloting. They can be used for many of the same purposes as semiautonomous systems and, with perhaps a greater range, will come to include cars and space vehicles. These systems do introduce a different set of technical challenges, since more sensitive and nuanced sensors are required to anticipate and control situations such as landing, targeting, or navigating. They have also triggered considerable skepticism about the implications of removing humans from military decision-making altogether.
In 2013, the Pentagon issued a roadmap that spelled out its 25-year vision for drones, with bullet points highlighting the hope that it would "take the 'man' out of 'unmanned'" through greater automation of its drones. Its goal is to create higher levels of autonomy and move away from the intensive human control that characterizes the current technology.17 At the same time, the military expresses wariness about full automation in the following way:
For a significant period into the future, the decision to pull the trigger or launch a missile from an unmanned system will not be fully automated, but it will remain under the full control of a human operator. Many aspects of the firing sequence will be fully automated but the decision to fire will not likely be fully automated until legal, rules of engagement, and safety concerns have all been thoroughly examined and resolved.18
That the Pentagon believes these legal questions could be resolved points to the expectation that at some point we will live in a fully automated world, even in the realm of targeting. As the section on international law suggests, decisions about combatants versus civilians are inherently fraught, especially in environments where civilians routinely move into combatant status and back and where the definition of "direct participation in hostilities" can be quite ambiguous. Technology may be able to provide better intelligence but is unlikely to be better at adjudicating the subjective and inherently philosophical question of where indirect participation ends and direct participation begins.19 The claim that full automation will come only once those conditions are resolved either underestimates those philosophical challenges or fully appreciates them, in which case fully automated systems will remain science fiction.
Some roboticists are trying to tackle this problem by equipping machines with a moral compass, an "ethical adapter," that can generate a sense of compassion when faced with the prospect of lethal force. The ethical adapter also tries to inculcate "after-action reflection," meaning the robot can modify its future behavior based on what it learned from a previous event (including error). Another emotion roboticists are trying to generate is guilt, wherein if "specific affective threshold values are exceeded, the system will cease being able to deploy lethality partially or in totality."20 Ronald Arkin, who studies artificial intelligence (AI) and has pioneered this approach, asserts that the ethical adapter on an autonomous system can ultimately reduce civilian casualties compared to its human counterparts.21 To put it mildly, as Philip Alston, former UN Special Rapporteur on extrajudicial, summary or arbitrary executions, did, "the notion that the laws of war can be reduced to programmable formulae and the idea that the human conscience can be mechanically replicated are both far more problematic than Arkin's work would suggest."
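The quoted description of affective thresholds amounts to a gating rule: accumulate a "guilt" value from after-action reflection, and restrict or disable lethal force when that value crosses set thresholds. The sketch below illustrates that logic only; it is not Arkin's actual implementation, and all names and threshold values are hypothetical.

```python
# Hypothetical sketch of an affective-threshold lethality gate.
# Names and values are invented for illustration.

PARTIAL_THRESHOLD = 0.5   # above this, lethality "ceases partially"
TOTAL_THRESHOLD = 1.0     # above this, lethality "ceases in totality"

class EthicalAdapter:
    def __init__(self):
        self.guilt = 0.0  # accumulated affective value

    def after_action_reflection(self, unintended_harm_estimate: float):
        """Raise guilt in proportion to estimated unintended harm
        from a previous event (including error)."""
        self.guilt += unintended_harm_estimate

    def lethality_status(self) -> str:
        """Gate lethal force on the current guilt level."""
        if self.guilt >= TOTAL_THRESHOLD:
            return "denied"
        if self.guilt >= PARTIAL_THRESHOLD:
            return "restricted"
        return "permitted"

adapter = EthicalAdapter()
print(adapter.lethality_status())      # permitted
adapter.after_action_reflection(0.6)   # a strike caused unintended harm
print(adapter.lethality_status())      # restricted
adapter.after_action_reflection(0.6)
print(adapter.lethality_status())      # denied
```

The simplicity of the sketch is itself the point of Alston's critique: the hard part is not the threshold check but producing a defensible harm estimate in the first place.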
In response to these potential developments, a transnational movement has mobilized to prevent the use of systems that acquire situational awareness and engage targets in the absence of human intervention. This is the form of AI that Arkin describes in his book: AI that makes judgments with no human guidance once its coordinates and objectives are programmed. One such movement, the Campaign to Stop Killer Robots, questions the basic premise that fully autonomous systems can function ethically and of their own accord. The group, launched in April 2013 by Jody Williams, who also led the movements that culminated in the Ottawa and Oslo Treaties banning land mines and cluster munitions, has proposed a ban on fully autonomous weapons. It works by mobilizing states, which have then worked within the Convention on Certain Conventional Weapons (CCW) to create a treaty that would preemptively ban further development and use of these systems.22 In November 2014, countries involved in the CCW agreed to continue to a second round of discussions about lethal autonomous weapons systems, which Human Rights Watch Arms Advocacy Director and Campaign to Stop Killer Robots Coordinator Mary Wareham cited as an acknowledgement of the topic's importance; she also, however, cautioned that "the technology is moving faster than the international response."23
More recently, the CEO of electric car maker Tesla, Elon Musk, joined forces with physicist Stephen Hawking and Apple cofounder Steve Wozniak to write an open letter about the potential consequences of autonomous weapons. The letter built on Hawking's and Musk's previous cautions about AI, warning that greater automation might have unintended consequences since machines might not be able to understand the good and bad effects of the actions they take. The letter states that "if any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable... autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing a particular ethnic group," meaning that the proliferation of this technology would be a dangerous development.24
The anti-autonomy movement has found a number of allies to move the discussion forward, but some observers urge caution against taking a ban too far and throwing the proverbial baby out with the bath water. If the movement bans precision weapons, such as the US long-range anti-ship missile and Norway's Joint Strike Missile, which are more like precision weapons than autonomous weapons, then "one of the most significant developments in the twentieth century toward making warfare more humane and reducing civilian casualties" will come to a halt, according to Michael Horowitz and Paul Scharre. These authors note that compared to the 50/50 chance of bombs landing within 1.25 miles of their target in World War II, some precision munitions are accurate to 5 ft., which reduces the likelihood of hitting unintended targets. Nonetheless, they too agree that fully autonomous systems "do raise serious issues worthy of further debate."25
Autonomous technologies, potentially quite problematic in armed conflict, are less likely to prove controversial in civilian applications. As the New York Times suggested in a review of ethics and robots, "the favorite example of an ethical, autonomous robot is the driverless car, which is still in the prototype stage at Google and other companies."26 Driverless cars are thought to be consistently safer than often-multitasking and sometimes-competitive human drivers. These technologies are also the basis of many of the tests being done by Amazon, Google, and delivery companies, in which a drone is programmed with an address, navigates to it using GPS, and then flies back. Project Wing, the Google X test project seeking to enable drone-delivery services, is intended to be autonomous, though it still requires human intervention to direct the drone around birds, weather, and trees. The founder of Project Wing acknowledged that autonomous delivery is "years from a product, but it is the first prototype that we want to stand behind."27
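The delivery pattern described above, program a destination, fly to it via GPS, return home, can be sketched with the standard great-circle distance formula. The haversine calculation is standard; the function names and coordinates are illustrative assumptions, not part of any actual delivery system.

```python
# Illustrative sketch of the out-and-back GPS delivery pattern.
# The haversine formula is standard; everything else is hypothetical.
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def round_trip_km(home, destination):
    """A delivery drone flies to the programmed address and back."""
    return 2 * haversine_km(home, destination)

# One degree of longitude at the equator is about 111.2 km,
# so the round trip is roughly 222.4 km.
print(round(round_trip_km((0.0, 0.0), (0.0, 1.0)), 1))
```

The remaining hard problems, avoiding birds, weather, and trees between those two points, are exactly where Project Wing still relies on human intervention.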