Drones Will Soon Decide Who to Kill
Existing lethal military drones like the MQ-9 Reaper are carefully controlled and piloted via satellite. If a pilot drops a bomb or fires a missile, a human sensor operator actively guides it onto the chosen target using a laser.
Ultimately, the crew has the final ethical, legal and operational responsibility for killing designated human targets. As one Reaper operator states: “I am very much of the mindset that I would allow an insurgent, however important a target, to get away rather than take a risky shot that might kill civilians.”
Even with these drone killings, human emotions, judgements and ethics have always remained at the centre of war. The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.
This points to one possible military and ethical argument, made by the roboticist Ronald Arkin, in support of autonomous killing drones: if the drones drop the bombs themselves, psychological harm to crew members might be avoided. The weakness in this argument is that you don’t have to be responsible for killing to be traumatised by it. Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes. Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.
When I interviewed over 100 Reaper crew members for an upcoming book, every person I spoke to who conducted lethal drone strikes believed that, ultimately, it should be a human who pulls the final trigger. Take out the human and you also take out the humanity of the decision to kill.
Grave consequences
The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings. But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.
The legal implications of these developments are already becoming evident. Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as civilian cars.
With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use. A company like Google, its employees or its systems, could become liable to attack from an enemy state. For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.
Ethically, there are darker issues still. The whole point of the self-learning algorithms the technology uses – programs that independently learn from whatever data they can collect – is that they become better at whatever task they are given. If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed. In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths will count as acceptable as the technology is refined.
Recent experiences of autonomous AI in society should serve as a warning. Uber and Tesla’s fatal experiments with self-driving cars suggest it is pretty much guaranteed that there will be unintended autonomous drone deaths as computer bugs are ironed out.
If machines are left to decide who dies, especially on a grand scale, then what we are witnessing is extermination. Any government or military that unleashed such forces would violate whatever values it claimed to be defending. In comparison, a drone pilot wrestling with a “kill or no kill” decision becomes the last vestige of humanity in the often inhuman business of war.
Peter Lee is a Reader in Politics and Ethics and Theme Director for Security and Risk Research and Innovation at the University of Portsmouth.
Overactive Imagination Risks Panic and Distress
November 28, 2017, The Conversation
The newly released short film offers a bleak dystopia with humans at the mercy of “slaughterbots”. These are autonomous micro-drones with cameras, facial recognition software and lethal explosive charges. Utterly terrifying, and – the film claims – not science fiction but a near-future scenario that really could happen. A frightening, deep voice in the film warns: “They cannot be stopped.” The only salvation from this impending hell, it is suggested, is to ban killer robots.
This imaginative use of film to scare its viewers into action is the 21st-century version of the panic that HG Wells’s science fiction writings created in the early 20th century. New technologies can almost always be used for malevolent purposes but those same technologies – in this case flying robots, facial recognition, autonomous decision-making – can also drive widespread human benefit.
What about the killing part? Yes, three grams of explosive to the head could kill someone. But why go to the expense and trouble of making a lethal micro-drone? Such posturing about the widespread use of targeted, single-shot flying robots is a self-indulgence of technologically advanced societies. It would be hugely costly to develop such selective killing capability for use on a mass scale – certainly outside the capacity of terrorist organisations and, indeed, most militaries.
By comparison, in Rwanda in 1994, 850,000 people were killed in three months, mainly by machetes and garden tools. A shooter in Las Vegas killed at least 59 people and wounded more than 500 in only a few minutes. Meanwhile, in Germany, France and the UK, dozens of innocent people have been killed by terrorists using ordinary vehicles to commit murder. Cheap, easy and impossible to ban.
Bombing from aircraft was not outlawed at the 1922-23 conference at The Hague because governments did not want to surrender the security advantages it offered. Similarly, no government will want to relinquish the potential military benefits of drone technology.
Over-dramatic films and active imaginations might well cause panic and distress. But what is really needed is calm discussion and serious debate to put pressure on governments to use new technologies in ways that are beneficial to humankind – not ban them altogether. And where there are military applications, they should follow existing Laws of Armed Conflict and Geneva Conventions.
Peter Lee is a Reader in Politics and Ethics and Theme Director for Security and Risk Research and Innovation at the University of Portsmouth.
A Wake-up Call on How Robots Could Change Conflicts
The Campaign to Stop Killer Robots’ terrifying new short film “Slaughterbots” predicts a new age of warfare and automated assassinations if weapons that decide for themselves who to kill are not banned. The organisation hopes to pressure the UN to outlaw lethal autonomous robots under the Convention on Certain Conventional Weapons (CCW), adding to the body of international law that has already banned antipersonnel landmines, cluster munitions and blinding lasers on the battlefield.
Some have suggested that the new film is scaremongering. But the technologies needed to build such autonomous weapons – intelligent targeting algorithms, geo-location, facial recognition – are already with us. Many existing lethal drone systems operate only in a semi-autonomous mode because of legal constraints, and could do much more if allowed. It would not take much further development for the technology to match the capabilities shown in the film.
Perhaps the best way to see the film is less as a realistic portrayal of how this technology will be used without a ban and more as a wake-up call about how it could change conflicts. For some time to come, small arms and light weapons will remain the major instruments of political violence. But the film highlights how intelligent targeting systems supposedly designed to minimise casualties could be used for a selective cull of an entire city. It’s easy to imagine how this might be put to use in a sectarian or ethnic conflict.
No international ban on inhumane weapons is absolutely watertight. The cluster munitions treaty has not prevented Russia from using them in Syria, or Saudi Arabia from bombing Yemeni civilians with old British stocks. But the landmine treaty has halved the estimated number of casualties – and even some states that have not ratified the ban, such as the US, now act as if they have. A ban on killer robots could have a similar effect.
Similarly, a ban might not remove all chance of terrorists using these weapons. The international arms market is too promiscuous. But it would remove potential stockpiles of killer robots by forcing governments to limit their manufacture.
Some have argued that armed robotic systems might actually help reduce suffering in war, since they do not get tired, abuse captives, or act in self-defence or revenge. They believe that autonomous weapons could be programmed to uphold international law better than humans do.
But, as Prof Noel Sharkey of the International Committee for Robot Arms Control points out, this view is based on the fantasy of robots being super-smart terminators when today “they have the intelligence of a fridge”. The technology to enable killer robots exists, but the technology to restrain them does not – so a ban is our best hope of avoiding the kind of scenario shown in the film.
Steve Wright is a Reader in the Politics and International Relations Group at Leeds Beckett University and a member of the International Committee for Robot Arms Control.