There has certainly been much written about the controversy over autonomous weapons systems, but in preparing for a Chatham House conference on autonomous weapons, I found one argument made by advocates of a ban on such weapons that merits close examination. These advocates contend that a robotics arms race will drive the development and deployment of autonomous weapons even if these weapons are not able to comply with international law. For example, here is a Human Rights Watch statement: “But the temptation will grow to acquire fully autonomous weapons, also known as ‘lethal autonomous robotics’ or ‘killer robots.’ If one nation acquires these weapons, others may feel they have to follow suit to avoid falling behind in a robotic arms race.”
In essence, Human Rights Watch is arguing that even nations, like the
United States, that are taking a very cautious approach to autonomous
weapons will feel compelled to deploy these weapons for fear of losing a
military advantage. The result will be the deployment of these weapons
despite the fact that the technology does not really ensure compliance
with international law.
This is a powerful argument except for one fatal flaw: a robotic
weapon that cannot meet international norms is unlikely to have a
military advantage on the battlefield.
Under well-established principles of international law, every
targeting decision in war requires a careful set of judgments that are
now done by human beings: Is this target a legitimate military target?
Will there be harm to civilians from the strike? Is the value of the
military target nonetheless proportional to this harm? As much as
progress has been made in robotics, it is unlikely that any autonomous
robot, even in the near future, would have the capacity to distinguish
military targets from civilians with any accuracy or make the critical
judgment about the proportionality of military value to civilian harm.
Would deployment of even these inadequate autonomous weapons provide
an advantage on the battlefield? Even if these weapons would have
difficulty distinguishing a civilian target from a legitimate military
target, would they provide a military advantage over weapons still
controlled by humans? I doubt it.
Effectiveness on the battlefield actually requires a higher degree of
judgment than that required to meet international legal requirements.
It is not enough to hit a legitimate target. Effectiveness on the
battlefield requires that a weapon hit the most important targets and in
the right sequence. A computer that has difficulty even making
judgments about what is a legitimate target will not do well making the
more challenging tactical and operational decisions required on the
battlefield.
In addition, an autonomous weapon that can’t easily distinguish
civilians from military targets can all too easily be fooled by an
enemy. Sun Tzu famously wrote that all warfare is based on deception, and deception techniques (such as decoys) were used quite effectively by Serbia against the NATO air campaign in Kosovo.
Perhaps the best evidence that there will be no robotic arms race is
the fact that no major military power is rushing to develop or deploy
these weapons. For example, while there is certainly a great deal of
research activity on autonomous systems, there is no current DoD program
of record for any autonomous weapon. DoD is showing great caution in
the development of autonomous weapons not merely out of concern for
international law. While that is obviously a significant concern, there
is also great skepticism that purely autonomous weapons will provide a
military advantage, even in the battle spaces of twenty or more years in
the future.
In short, an autonomous weapon that cannot satisfy the laws of war is
unlikely to be an effective weapon on the battlefield. Concerns about a
robotic arms race are misplaced.
This post was previously published on the Lawfare Blog.