Monday, March 3, 2014

Should We Ban Autonomous Weapons?

It sounds like something right out of a blockbuster science fiction movie: killer robots that make decisions on who to kill without any human involvement. Not surprisingly, several human rights groups have argued that now is the time for a ban on the development and deployment of these weapons. While there are very real ethical and legal concerns with these potential weapon systems, such a ban is both unnecessary and likely counterproductive.

There are very serious legal concerns with the use of any autonomous weapon. Under well-established principles of international law, every targeting decision in war requires a careful set of judgments that are currently made by human beings: Is this target a legitimate military target? Will there be harm to civilians from the strike? Is the value of the military target nonetheless proportional to this harm?

Great progress has been made in robotics, but it is unlikely that any autonomous robot now or in the near future would have the capacity to distinguish military targets from civilians with any accuracy or make the legally critical judgment about the proportionality of military value to civilian harm.

This is true even on battlefields where there are fewer risks of civilian casualties – such as the use of robots to attack submarines, or in strictly machine-on-machine fights such as missile defence or defence against unmanned drones. We are even further away from machines that can tell the difference between military and civilian targets in much more difficult environments, such as against an un-uniformed enemy in an urban setting.

For these reasons, the official U.S. Department of Defense policy is that autonomous weapon systems can only be used to apply non-lethal, non-kinetic force (such as forms of electronic attack) unless, among other requirements, senior DoD leadership is convinced after rigorous testing that the system will comply with international law.

Even setting aside the legal issues, the limitations of current technology make autonomous weapons ineffective as weapons. If a human can do a better job hitting the right target, militaries won’t want to deploy autonomous systems. Indeed, while the US military is certainly doing research on autonomous systems, there are no current plans to acquire or use autonomous lethal weapons.

Given that the technology has not developed sufficiently to field machines that satisfy either international legal requirements or military operational needs, we already effectively have a moratorium in place on the deployment of these systems. But some would ask: if deployment of these systems would be unlawful today, why not move forward and impose a ban on the further development and deployment of autonomous systems in the future?

The reason is that such a ban on development would either be ineffective or would stifle peaceful uses of robotics and artificial intelligence. The vast majority of research done today in developing autonomous systems is being done by industry and academia with a focus on peaceful, not military, uses. Some prominent examples include efforts to develop self-driving cars, search and rescue robots, and even surgical robots. The technology needed for these peaceful uses, however, would be directly applicable to lethal military uses. All require greatly improved sensing technology and advances in artificial intelligence – exactly what would be necessary and useful in military applications. If a ban were imposed, we would either have an ineffective prohibition on the development of military technology or an unfortunate restriction on technologies that could greatly improve our lives.

In addition, a ban could someday prevent the use of technologies that actually reduce civilian casualties. Useful military deployment of autonomous systems will require a greater degree of capability than that needed to comply with international law. To win a battle, it is not enough that a machine can hit a lawful target. Instead, military success requires careful judgments about which targets to hit and in what order. And a machine that doesn’t do a good job of distinguishing civilians from soldiers can be too easily fooled by an enemy. As such, until autonomous weapon systems reach a capacity well beyond that needed to comply with international law, military considerations will mean that weapons will stay under human control.

This has implications for civilian casualties. If robotics technologies reach the point at which they become militarily useful, they will also likely be more capable than humans of distinguishing between civilian and military targets. If that is the case, deployment of autonomous systems could have the effect of reducing civilian casualties and increasing compliance with international law. As the last ten years have shown, even well-disciplined soldiers can make serious errors – particularly in the heat of battle. The result of this human error has been the death and maiming of civilians. While we can’t predict the future of robotics, do we really want to ban weapon systems that could prove less likely to harm civilians?

Autonomous weapons must be treated with great caution, and the international community needs to raise the alarm if these systems are used before the technology can ensure compliance with international law. There needs to be a discussion about the norms that must be followed before any deployment. A ban, however, is not the right answer.

Originally posted on the Reuters (UK) Great Debate blog.
