We are undergoing a revolution in the use of artificial intelligence (AI) to solve problems, and some of the results are amazing. For example, a research team at Mount Sinai Hospital applied AI to the hospital's massive patient records, and the resulting program was remarkably effective at predicting disease. Not surprisingly, many at the Department of Defense are advocating the use of AI to power autonomous weapons: weapons that can identify the enemy and make targeting recommendations (if a human is involved) or targeting decisions (if the system is completely autonomous). The attraction of such autonomous weapons is obvious--the weapons can go where human-controlled weapons cannot (think emerging air defense systems) and can react to threats more quickly than human beings can.
Now there are many concerns with military AI. Indeed, there is an active Campaign to Stop Killer Robots, and the U.N. is actively considering a ban on these weapons. The Campaign raises a host of concerns, but focuses largely on the inability of current technology to comply with the laws of armed conflict. Yet aside from these concerns lies an even more fundamental problem: modern "deep learning" AI programs teach themselves, and we really have no idea why the machine is making the decisions that it makes. As the MIT Technology Review explains, "deep learning" AI does not work by simply following an algorithm written by a programmer. Instead, modern AI uses a biologically inspired model in which the computer essentially teaches itself:
Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
. . . .
You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.

Now it is unsettling enough that we don't know exactly why the Mount Sinai Hospital program is so effective at predicting disease, but at least a diagnostic tool can still be useful even if we don't understand why it works. It becomes a non-starter, in my view at least, to unleash a lethal weapon in the wild when we don't really understand why it is so effective at making targeting decisions. A failure to understand how a machine makes targeting decisions could lead to surprising, and tragic, errors.
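To make the quoted description concrete, here is a toy sketch of my own (not from the MIT article): a tiny two-layer network that teaches itself a simple logic function (XOR) by back-propagation. All the specifics--the network size, learning rate, and training loop--are illustrative choices, not anyone's real weapons or diagnostic system. The point is the punchline: the trained network works, but its "reasoning" is nothing more than a pile of numeric weights with no human-readable rule inside.

```python
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Example data and desired outputs (XOR): the program is not given an
# algorithm for XOR, only examples to learn from.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Randomly initialized weights: w1/b1 for the hidden layer, w2/b2 for output.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [random.uniform(-1, 1) for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = random.uniform(-1, 1)

def forward(x):
    # Each hidden "neuron" combines its inputs, then signals the next layer.
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

lr = 0.5
for epoch in range(20000):
    for x, target in data:
        h, o = forward(x)
        # Back-propagation: nudge every weight in the direction that
        # reduces the error on this one example.
        d_o = (o - target) * o * (1 - o)
        for j in range(2):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_o * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_o

err_after = total_error()
print(err_before, err_after)  # error shrinks as the network trains itself

# The "why" behind its answers lives only in these opaque numbers:
print(w1, b1, w2, b2)
```

Scale this sketch up to millions of weights across hundreds of layers and you have the interpretability problem the article describes: the final weight values explain nothing to a human reader, even though they fully determine the system's decisions.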
This is a problem with deep learning AI that is getting a lot of attention. The MIT Tech Review article does a great job of explaining the work being done to solve it. But until we know why AI works, I don't see it being useful in autonomous weapons unless a thinking human remains in the decision-making process.
You can see what else I have written on the topic of autonomous weapons here, here, and here.