Technology sure moves fast. Google Glass…aerial drones…self-driving cars…and now…
A few weeks ago, I attended a conference put on by the Canadian Red Cross called “‘Killer Robots’—the Future of Weaponry and International Humanitarian Law.” Bringing together experts from academia, the military, legal community, robotics industry, and civil society, for one mind-bending afternoon we explored the ethical, legal, and technical angles of the issue.
So, what is a “killer robot,” anyway?
Killer robots—more officially (albeit less provocatively) known as fully autonomous weapons systems—would be able to select and fire on targets without human intervention.
Although, for the moment, “killer robots” do not exist, high-tech militaries are developing or have deployed precursor weapons that demonstrate the drive towards more autonomy for machines in the military theater. Perhaps not surprisingly, the U.S. is a leader in this technological development, but China, Russia, Germany, South Korea, Israel, and the U.K. are also participants.
Back in 2012, Human Rights Watch released the first major publication by an NGO on the topic. Intriguingly called “Losing Humanity: The Case Against Killer Robots,” the report analyzed the nature of existing and potential technology, and articulated compelling arguments for why fully autonomous weapons should be preemptively banned before we sleepwalk into a new—and deeply troubling—reality.
Robotic weapons are often divided into three categories depending on the level of human involvement in their actions. There are:
- Human-in-the-Loop Weapons: which can select targets and deliver force only by human command;
- Human-on-the-Loop Weapons: which can select targets and deliver force under the oversight of a human who can override a robot’s action; and
- Human-out-of-the-Loop Weapons: which are capable of selecting targets and delivering force without any human input or interaction.
As a stark shift in policy, taking humans out-of-the-loop would involve the intentional (and unprecedented!) relinquishment of control—delegating crucial moral decisions around who lives and who dies to machines.
Not surprisingly, this is hotly contested terrain.
Those in the “for-autonomous-weapons” camp have argued that substituting machines for humans in combat is justified (and preferable) because robots—invulnerable to the perils of the human condition (exhaustion, emotional outbursts, perception bias, etc.)—would outperform soldiers physically, emotionally, and ethically. Ethical standards, proponents argue, could simply be “programmed” into machines.
But is this wishful thinking?
Those in the “against-autonomous-weapons” camp articulate compelling legal, technological, and ethical concerns around why killer robots are a bad idea. Full stop.
- Legal concerns: Robots could never comply with the complexity of the laws of armed conflict (aka International Humanitarian Law) in chaotic contexts. First, a robot would need to be able to distinguish between combatants and non-combatants; second, it would need to morally assess each conflict to judge whether a particular use of force is proportional; and third, it would need to comprehend military operations in order to decide whether the use of force on a particular occasion is a military necessity;
- Technological concerns: While people expect robots not to make mistakes, this is not realistic. As one roboticist told us, robots, tested in very controlled environments (entirely unlike any battlefield!), do not have situational awareness or the ability to recognize aggressive postures that even a child can recognize. What about the risks of malfunction? Cyber attacks? Decoys? And who, ultimately, is liable for mistakes—the military? The people in the lab writing the code? The manufacturers?
- Ethical concerns: Would having the capability for autonomous weapons lower the threshold for war because the risk to soldiers’ lives is minimized? Do we want to give decisions over human life to computer code? Or, as one military general put it, is “death by algorithm not the ultimate human indignity”?
Clearly, these questions are compelling and should stop us in our tracks. But what are civil society groups and other experts doing about these concerns?
Well, the Campaign to Stop Killer Robots—an international movement of over 50 civil society organizations (including MCC partner, Mines Action Canada) in 24 countries—is pushing for a preemptive ban on the development, production, and use of fully autonomous weapons.
In other words, they are urging countries to “pull the plug on killer robots” before they move from science fiction to reality. While I enjoyed The Matrix (well, the first movie at least), I’d prefer these remain only on the movie screen.
Calling for a comprehensive treaty and for countries to pass laws and policies that ban autonomous weapons, the Campaign urges states to implement the recommendations made in 2013 by Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions. His recommendations include placing national moratoriums on lethal autonomous robots; participating in international fora on the issue; committing to full transparency on weapons development review processes; etc.
Lest we think a preemptive ban on a weapon is impossible, there is a precedent. In the 1990s, blinding lasers were banned prior to their use on the battlefield (through Protocol IV of the Convention on Certain Conventional Weapons [CCW]).
Important conversations on autonomous robots are starting to creep ahead in multilateral fora like the CCW, and bold steps are being taken in the robotics field. Last year, Clearpath Robotics—a Kitchener-Waterloo-based company specializing in autonomous control systems—became the first company to pledge not to make killer robots!
Many experts predict that full autonomy for weapons could be achieved in 20 to 30 years, if not sooner. It is urgent that concerned people continue to ask important questions.
Just because technology can be developed, should it be?
By Jenn Wiebe, Interim Ottawa Office Director