Friday, September 22, 2017

Letting Weapons Make The Decision to Kill

We are living in an age of machine autonomy.  Autonomous cars are already on our highways, and some believe that driverless cars will be the rule, not the exception, within a decade.  As machine learning grows in sophistication, many are asking fundamental questions about autonomous weapons:  Should they be allowed?  Should they be developed?  Should we support an international treaty to ban them?

These are all great questions, but by and large, the discussions about them ignore some subtle but important distinctions.  For example, should our answer be different if the weapon at issue is purely defensive?  Our Aegis (ship-based) and Patriot (land-based) anti-missile defense systems must react so quickly to missiles fired at US targets that they have a fully autonomous mode.  In most cases, this would mean only that a machine killed another machine (and saved human lives in doing so), but they could also kill incoming enemy fighter planes.  Acceptable?  What about "smart missiles" programmed to hit only a particular type of enemy ship--are we uncomfortable when they are sent into the battlefield to make the actual kill decisions?  The next generation of ship-to-ship missiles will likely have this capability.

Marine JAG officer Lt. Col. Alan Schuller has an outstanding post at Just Security that takes all of these issues to a new level.  He looks at autonomous weapon systems through the prism of international law, but drills down on a truly fundamental point--at what point are machines really making kill decisions:
But consider the hypothetical case of an unmanned submarine that was granted the authority to attack hostile warships after spending days, weeks, or even months without human interaction.  Suppose the submarine was programmed to independently learn how to more accurately identify potential targets.  The link between human decision-making and lethal kinetic action gives us pause because it is attenuated.

As such, when some commentators speculate about future AWS equipped with sophisticated AI, they ascribe decisions to machines.  But even advanced machines do not decide anything in the human sense.  The hypothetical submarine above, for example, was granted a significant degree of autonomy in its authority and capability to attack targets.  Even if the system selects and engages targets without human intervention, however, it has not made a decision.  Humans programmed it to achieve a certain goal and provided it some latitude in accomplishing that goal.  Rather than focusing on human interactions with autonomous weapons, commentators’ inquiries should center on whether we can reasonably predict the effects of an AWS.
He argues that allowing autonomous weapons onto the battlefield whose decisions we have not guided in a way that makes them predictable is an abrogation of the responsibilities of military commanders under international law.  But he also argues that, as long as a weapon is guided by humans and will, with reasonable certainty, operate within the decisions those humans have made, we should not be concerned that the human actions are temporally removed from the final kill decision:
Let’s explore why a blanket requirement of human input that is temporally proximate to lethal kinetic action is unnecessary from an IHL standpoint.  An anti-tank land mine may remain in place for an extended period without activating.  Yet, such systems are not indiscriminate per se.  Indeed, if future land mines were equipped with learning capacity that somehow increased their ability to positively identify valid military objectives, this could potentially enhance the lawfulness of the system.  As such, the analysis of the legality of an AWS will turn in large part on whether it's possible to reasonably predict the target or class of targets the system will attack.  The determination will depend on the specific authorities and capabilities granted to the AWS.  If the lethal consequences of an AWS’ actions are unpredictable, the decision to kill may have been unlawfully delegated to a machine.

Moreover, future military commanders may need to address threats that are too numerous and erratic for humans to respond.  For example, China is allegedly developing unmanned systems that could operate in swarms to rapidly overwhelm a U.S. aircraft carrier strike group.  In order to address such threats, future forces will likely need scalable options to fight at “machine speed.”  If a commander is forced to rely on affirmative human input in order to use force against each individual threat, the battle may be over before it has begun.
Read it all here.  You can read some of my earlier posts on autonomous weapon systems here, here, and here.
