Wednesday, May 10, 2017

Killer Robots: The Real Challenge to Using Artificial Intelligence in Military Applications


We are undergoing a revolution in the use of artificial intelligence (AI) to solve problems, and some of the results are remarkable.  For example, a research team at Mount Sinai Hospital applied AI to the hospital's massive patient records, and the resulting program proved strikingly effective at predicting disease.  Not surprisingly, many at the Department of Defense are advocating the use of AI to empower autonomous weapons:  weapons that can identify the enemy and make targeting recommendations (if a human is involved) or targeting decisions (if the system is completely autonomous).  The attraction of such autonomous weapons is obvious:  they can go where human-controlled weapons cannot (think emerging air defense systems) and can react to threats more quickly than human beings.

There are, of course, many concerns with military AI.  Indeed, there is an active Campaign to Stop Killer Robots, and the U.N. is considering a ban on these weapons.  The Campaign raises a host of concerns, but focuses largely on the inability of current technology to comply with the laws of armed conflict.  Aside from those concerns, however, there is an even more fundamental problem: in modern "deep learning" AI programs, the system teaches itself, and we really have no idea why the machine makes the decisions it makes.  As the MIT Technology Review explains, "deep learning" AI does not work by simply following an algorithm written by a programmer.  Instead, it uses a biologically inspired model in which the computer essentially teaches itself:
Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself
. . . .
 You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
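To make that description concrete, here is a minimal toy sketch of what the quoted passage is talking about.  This is my own illustration in Python/NumPy, not anything from the MIT piece or from a real weapons system: a tiny network "programs itself" to solve a simple task (XOR) purely from example data, with back-propagation nudging its weights toward the desired output.  The task, the network size, and every name in the snippet are illustrative choices of mine.

```python
# Toy sketch: a network that "programs itself" via back-propagation.
# Illustrative only -- the XOR task and all sizes/names are my own choices.
import numpy as np

rng = np.random.default_rng(0)

# Example data and desired output (XOR of the two inputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights, initialized randomly -- no human writes the "rules".
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: each layer transforms its inputs and feeds the next layer.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Back-propagation: tweak every weight to reduce the output error.
    error = y - output
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_output * 0.5
    W1 += X.T @ d_hidden * 0.5

print(output.round(3))   # typically close to [0, 1, 1, 0]: the network "learned" XOR
print(W1, W2, sep="\n")  # ...but the learned weights themselves explain nothing
```

Even in this toy case, staring at the learned weight matrices tells you essentially nothing about why the network answers the way it does; a real deep network has millions of such weights spread across dozens or hundreds of layers, which is exactly the opacity problem.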
It is unsettling enough that we don't know exactly why the Mount Sinai Hospital program is so effective at predicting disease, but at least that program can still be useful even if we don't understand why it works.  It becomes a non-starter, in my view at least, to unleash a lethal weapon in the wild when we don't really understand how it makes its targeting decisions.  A failure to understand how a machine makes targeting decisions could lead to surprising, and tragic, errors.

This opacity problem with deep learning AI is getting a lot of attention.  The MIT Tech Review article does a great job of explaining the work being done to solve it.  But until we can explain why these systems make the decisions they make, I don't see AI being appropriate for autonomous weapons unless a thinking human remains in the decision-making process.

You can see what else I have written on the topic of autonomous weapons here, here, and here.

1 comment:

  1. Thanks for the post. Here we have an unbelievably complicated issue, yet just one major, fundamental issue:

    Criminal liability is personal. Only a natural person can bear criminal liability (see, for example, Article 25 of the Rome Statute). Even in an invasion by a huge army, only one person, or several individuals, each one separately, can be charged (whether under domestic law or international law).

    Now, suppose such a robot is engaged on the battlefield and exercises autonomous discretion: who is to blame for a tort, for recklessness, even for malice (it is not a joke, such a robot can act maliciously)? Can we imagine that no one would bear criminal responsibility for tragic consequences, on a huge scale, occurring in war? Even if it is efficient, then just for the one-off case, one mistake, one tort, we should think twice about whether no one shall be held liable and accountable.

    P.S.: What you describe as "A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers" has a professional name, by the way: "synapse".

    Thanks
