Wednesday, March 5, 2014

A Tale of Two Defense Budgets

Sharon Tiezzi, an editor at The Diplomat, has a very interesting post today discussing the defense budgets recently submitted by China and the United States:


The defense budgets released by Beijing and Washington share few similarities, but they do have one thing in common: spokespeople claiming that their increased military spending is good for “global security” or “world peace.” On a global level, and more particularly on a regional one, both the U.S. and China are convinced that security can be achieved through an increased military presence.

China believes that the U.S. is pursuing a policy of containment, egging on its friends and allies in the region to challenge China over territorial disputes. Many top-level academics in China worry that U.S. support for Japan and the Philippines in particular has encouraged these two nations to directly challenge China, thus worsening the security environment. Accordingly, China is forced to build up its military to defend its claims, and also to discourage provocation by its neighbors.

The U.S., however, thinks recent actions by Japan and the Philippines are a natural response to what is viewed as increased Chinese aggression. Under this line of thinking, a more robust U.S. military presence in the region is taken as a positive contributor to regional security, because it would reassure countries that are increasingly nervous about China’s strength.

It’s a classic question of the chicken vs the egg: which came first, China’s aggression or U.S. containment?

Regardless of who is blamed for starting the cycle, it’s hard to deny that China and the U.S. are locked into a low-key (for now) arms race, where military spending by one side is used to justify defense budget increases by the other. But already, given the divergent trends in spending, some in the region are wondering how long the U.S. will be willing or able to match China’s investment in a regional military presence. Though the announced Chinese military budget is less than 27 percent of the U.S. budget, it’s safe to assume that close to 100 percent of China’s budget will be focused on upping Chinese readiness in the Asia-Pacific region. With a variety of global security concerns, the U.S. cannot make the same claim.
Read it all here.  The final point is worth remembering for analysts who like to compare the U.S. defense budget to those of other countries: China's military spending is single-mindedly focused on power projection in one part of the world.  The U.S. defense budget reflects a far more global perspective.

Autonomous Weapons: Is an Arms Race Really a Threat?

Much has certainly been written about the controversy over autonomous weapons systems, but in preparing for a Chatham House conference on autonomous weapons, I found one argument made by advocates of a ban on such weapons that merits close examination.  These advocates contend that a robotics arms race will result in the development and deployment of autonomous weapons even if these weapons are not able to comply with international law.  For example, here is a Human Rights Watch statement: “But the temptation will grow to acquire fully autonomous weapons, also known as ‘lethal autonomous robotics’ or ‘killer robots.’ If one nation acquires these weapons, others may feel they have to follow suit to avoid falling behind in a robotic arms race.”

In essence, Human Rights Watch is arguing that even nations taking a very cautious approach to autonomous weapons, like the United States, will feel compelled to deploy these weapons for fear of losing a military advantage.  The result will be the deployment of these weapons even though the technology does not really ensure compliance with international law.

This is a powerful argument except for one fatal flaw: a robotic weapon that cannot meet international norms is unlikely to have a military advantage on the battlefield.

Under well established principles of international law, every targeting decision in war requires a careful set of judgments that are now made by human beings:  Is this target a legitimate military target?  Will there be harm to civilians from the strike?  Is the value of the military target nonetheless proportional to this harm?  As much progress as has been made in robotics, it is unlikely that any autonomous robot, even in the near future, would have the capacity to distinguish military targets from civilians with any accuracy or to make the critical judgment about the proportionality of military value to civilian harm.

Would deployment of even these inadequate autonomous weapons provide an advantage on the battlefield?  Even though such weapons would have difficulty distinguishing a civilian target from a legitimate military target, might they still provide a military advantage over weapons controlled by humans?  I doubt it.

Effectiveness on the battlefield actually requires a higher degree of judgment than that required to meet international legal requirements.  It is not enough to hit a legitimate target.  Effectiveness requires that a weapon hit the most important targets, and in the right sequence.  A computer that has difficulty even making judgments about what constitutes a legitimate target will not do well making the more challenging tactical and operational decisions required on the battlefield.

In addition, an autonomous weapon that can’t easily distinguish civilians from military targets can all too easily be fooled by an enemy.  Sun Tzu famously wrote that all warfare is based on deception, and deception techniques (such as decoys) were used quite effectively by Serbia against the NATO air campaign in Kosovo.


Perhaps the best evidence that there will be no robotic arms race is the fact that no major military power is rushing to develop or deploy these weapons.  For example, while there is certainly a great deal of research activity on autonomous systems, there is no current DoD program of record for any autonomous weapon.  And DoD is showing great caution in the development of autonomous weapons not merely out of concern for international law.  While that is obviously a significant concern, there is also great skepticism that purely autonomous weapons will provide a military advantage even in the battle spaces of twenty or more years in the future.

In short, an autonomous weapon that cannot satisfy the laws of war is unlikely to be an effective weapon on the battlefield.  Concerns about a robotic arms race are misplaced.

This post was previously published on the Lawfare Blog.

Tuesday, March 4, 2014

In the Trenches: The Other Civilian/Military Conflict

Former Defense Secretary Robert Gates's new book, Duty: Memoirs of a Secretary at War, which describes the tensions and lack of trust between the White House and senior military leaders, is merely the latest chapter in a very old story of the often contentious relationship between military and civilian leaders.  Of course, Presidents have long had conflicts with their senior military commanders.

But the problem of civilian-military relations gets far less attention outside the rarefied atmosphere of the Oval Office, despite the fact that clashes between political appointees and senior military leaders are all too common.  While there is an inherent and healthy tension between military and civilian leadership, this tension becomes counterproductive thanks to some avoidable mistakes on both sides.

First, in Democratic and Republican Administrations alike, new political appointees are too quick to assert “civilian control of the military” in response to any dispute with their military counterparts.  While the primacy of the President, Defense Secretary, and Service Secretary is clear, matters can be a bit murky at the Assistant Secretary or National Security Staff level.  Assertions of “civilian control” at this level all too often lead to a food fight over turf that accomplishes nothing.  For example, there have been some very nasty, and ultimately debilitating, debates between service General Counsels and their military lawyer counterparts about whether civilian control of the military had any relevance to their relationship.  And there have been similar turf battles throughout the Pentagon whenever civilians and military staffs share a portfolio.  These debates were ugly and resulted in an unproductive lack of cooperation.

The fact of the matter is that it is the top leadership in the Department of Defense and in each service who decide whom to listen to on particular matters, and assertions of power are empty and counterproductive.  When I became General Counsel of the Army, a veteran DoD official counseled me that in the Pentagon the best relationships are ones in which the civilians never mention the fact of civilian primacy, and military members never forget it.

Second, political appointees often make decisions based on a misunderstanding of the military.  For example, some Bush Administration officials were shocked when military officers assigned to defend members of al Qaeda in military commission proceedings took their duty of zealous defense very seriously.  Rather than cooperate with the negotiation of guilty pleas, military lawyers challenged the constitutionality of the military commission system itself.

Anyone with any significant understanding of the character and ethos of modern military lawyers should not have been surprised.  From a very early stage in their careers, young JAG officers defend clients in courts-martial, and they are rated highly for a vigorous defense.

Yet it is also a mistake for senior military officials to expect unquestioning acquiescence to their assertions of military judgment.  When a General with decades of training and experience asserts his or her military judgment on a matter, it can be jarring for that officer to confront a different view, particularly from a political appointee who did not serve in the military.  Yet a military judgment is ultimately just that, a judgment, and wise policy is well served by a robust discussion of the basis of that judgment:  What assumptions were made?  What factors were considered?  What alternatives were analyzed?

The discussion needs to be respectful, but political appointees fail in their jobs if they don’t engage, and senior military officials fail when they refuse to engage in a dialogue.  Perhaps the best example of the value of this discussion comes from the tenure of Secretary Gates.  At the time he launched the Comprehensive Review Group to evaluate the continued need for the “Don’t Ask, Don’t Tell” policy, the large majority of senior military leaders shared the judgment that open service by gay and lesbian service members was inconsistent with good order and discipline.  Yet this judgment was merely the starting point of discussions, and the real value of the Comprehensive Review Group's work was that it tested the assumptions that lay behind this judgment and found them wanting.

Political appointees and military leaders come from remarkably different backgrounds and perspectives, and some tension is inherent in the relationship.  Heavy-handed assertions of civilian control and unquestioning acquiescence to assertions of military judgment are both mistaken responses to this tension.  Political appointees do best when they work hard to understand the military and take seriously their role in ensuring that military judgments are based on sound reasoning and that the full range of options is considered.

Originally published at Just Security.

Monday, March 3, 2014

Should We Ban Autonomous Weapons?

It sounds like something right out of a blockbuster science fiction movie: killer robots that make decisions on who to kill without any human involvement. Not surprisingly, several human rights groups have argued that now is the time for a ban on the development and deployment of these weapons. While there are very real ethical and legal concerns with these potential weapon systems, such a ban is both unnecessary and likely counterproductive.

There are very serious legal concerns with the use of any autonomous weapon. Under well established principles of international law, every targeting decision in war requires a careful set of judgments that are currently made by human beings: Is this target a legitimate military target? Will there be harm to civilians from the strike? Is the value of the military target nonetheless proportional to this harm?

Great progress has been made in robotics, but it is unlikely that any autonomous robot now or in the near future would have the capacity to distinguish military targets from civilians with any accuracy or make the legally critical judgment about the proportionality of military value to civilian harm.

This is true even on battlefields where there are fewer risks of civilian casualties – such as the use of robots to attack submarines underwater, or in strictly machine-on-machine fights such as missile defence or defence against unmanned drones. We are even further away from machines that can tell the difference between military and civilian targets in much more difficult environments, such as against an un-uniformed enemy in an urban setting.

For these reasons, the official U.S. Department of Defense policy is that autonomous weapon systems can only be used to apply non-lethal, non-kinetic force (such as forms of electronic attack) unless, among other requirements, senior DoD leadership is convinced after rigorous testing that the system will comply with international law.

Even setting aside the legal issues, the limitations of current technology also make autonomous weapons ineffective as weapons. If a human can do a better job of hitting the right target, militaries won’t want to deploy autonomous systems. Indeed, while the US military is certainly doing research on autonomous systems, there are no current plans to acquire or use autonomous lethal weapons.

Given that the technology has not developed sufficiently to field machines that satisfy either international legal requirements or military operational needs, we already effectively have a moratorium in place on the deployment of these systems. But some would say that if deployment of these systems would be unlawful today, why not move forward in imposing a ban on the further development and deployment of autonomous systems in the future?

The reason is that such a ban on development would either be ineffective or would stifle peaceful uses of robotics and artificial intelligence.  The vast majority of research done today in developing autonomous systems is being done by industry and academia, with a focus on peaceful, not military, uses.  Some prominent examples include efforts to develop self-driving cars, search and rescue robots, and even surgical robots.  The technology needed for these peaceful uses, however, would be directly applicable to lethal military uses.  All require greatly improved sensing technology and advances in artificial intelligence – exactly what would be necessary and useful in military applications. If a moratorium were imposed, we would have either an ineffective ban on the development of military technology or an unfortunate ban on technologies that could greatly improve our lives.

In addition, a ban could someday prevent the use of technologies that actually reduce civilian casualties. Useful military deployment of autonomous systems will require a greater degree of capability than that required to comply with international law. To win a battle, it is not enough that a machine can hit a lawful target.  Instead, military success requires careful judgments about which targets to hit and in what order. And a machine that doesn’t do a good job of distinguishing civilians from soldiers can be too easily fooled by an enemy.  As such, until autonomous weapon systems reach a capacity well beyond that needed to comply with international law, military considerations will mean that weapons stay under human control.

This has implications for civilian casualties.  If robotics technologies reach the point at which they become militarily useful, they will also likely be more capable than humans of distinguishing between civilian and military targets. If that is the case, deployment of autonomous systems could have the effect of reducing civilian casualties, and increasing compliance with international law. As the last ten years have shown, even well-disciplined soldiers can make serious errors – particularly in the heat of battle.  The result of this human error has been the death and maiming of civilians.  While we can’t predict the future of robotics, do we really want to ban weapon systems that potentially could be less likely to cause harm to civilians?

Autonomous weapons must be treated with great caution, and the international community needs to raise the alarm when these systems are used before the technology ensures compliance with international law. There needs to be a discussion about the norms that must be followed before any deployment. A ban, however, is not the right answer.

Originally posted on the Reuters (UK) Great Debate blog.