Tuesday, September 26, 2017

Time for a Little Humility about Military Intervention

I have been watching the Ken Burns series on the Vietnam War with great interest.  One of the themes of the series is that many of the initial decisions to escalate the War were made in good faith, but were disastrous just the same.  One of the lessons of Vietnam is that we need to be much more humble about what military power can achieve.

To be clear, we can point to many successful uses of military power--even when judged many years later.  Our intervention in Kosovo seems to have stabilized the Balkans.  Our defense of Kuwait in the first Gulf War achieved its objective of restoring Kuwait's government to power.  Heck, even the intervention in Mali to defeat the Islamist forces who seized the north of that nation seems to have been a success.

Why were these engagements successful?  Perhaps it was because our political objectives could be satisfied by military means, and we did not need to engage in hubris about changing "hearts and minds."  Our other recent interventions, however, have not been as successful.  Indeed, most have been unmitigated disasters that made the world a less safe place.

Robert Kaplan has an interesting post at the National Interest blog about these issues:

The people I know who supported the Iraq War genuinely intended the human-rights situation in Iraq to be improved by the removal of Saddam Hussein, not made worse through war and chaos. The group of policymakers who supported the Libya campaign genuinely thought that by toppling the regime of Muammar el-Qaddafi a humanitarian catastrophe in Benghazi would be averted and the country as a whole would benefit. Instead, Libya collapsed into anarchy with many more thousands of casualties the consequence. The people who supported an early intervention to topple the regime of Bashar al-Assad, or at least limit the suffering in Syria, genuinely thought they were in both the moral and strategic right. And they might actually have been correct. Since there was no intervention in this case, the results of one remain an unknowable.
. . . In all three cases, both sides have had at least some claim on our sympathies, however partial, even if we have disagreed with them. There were the interests of the state and its many limitations on one hand, and the interests of humanity on the other. Of course, the interests of humanity can in quite a few circumstances coincide with the interests of state. But it cannot do so all the time, or else we would be intervening everywhere, and that would not be sustainable. And yet just because you cannot intervene everywhere does not mean you cannot intervene, consistent with your interests, somewhere.
In ancient tragedy, as Hegel notes, the truth always emerges. What, then, is the truth about humanitarian intervention in the Muslim Middle East? The truth is that American power can do many things, but fixing complex and populous Muslim societies on the ground is not one of them: witness Iraq and Libya. But in the case of Syria, where a humanitarian and strategic nightmare has ensued without our intervention, it behooves us to treat each crisis individually, as sui generis. For intervening in one country might be the right thing to do, while it may be the wrong thing in other countries.
Read it all here.  I remain skeptical that intervention in Syria would have been a wise decision, but Kaplan's larger point rings true--we need to judge each situation individually before using military force.  And in doing so we must be more humble about what military intervention will accomplish.  At the very least, we need to consider what we must do after we win the initial battles.

Saturday, September 23, 2017

Was the Vietnam War Winnable?

A few months back Mark Moyar, the director of the Center for Military and Diplomatic History, wrote an op-ed in the New York Times, arguing that the U.S. and its South Vietnamese allies could have won the war.  He actually makes a strong case.  The best response, however, is from Robert Farley.  While agreeing with much of Moyar's analysis, Farley makes a fundamental point: while the war might indeed have been winnable, the benefits of continuing the fight were not worth the cost:
In 1972, American political leadership made the overdue decision that any benefit of further contribution to Vietnam was outweighed by costs in material, in national dissensus and in international reputation. This leadership came to the conclusion that maintaining the U.S. commitment to Europe, North Asia and the Middle East was vastly more important to the struggle against the Soviet Union than continued fighting in Southeast Asia.

Continuing the war would have incurred other costs. Hanoi’s conquest of South Vietnam was violent and brutal, killing thousands and forcing many others to flee as refugees. But continuing the fight against the North surely would also have been brutal, especially if it had involved direct coercive measures against Hanoi. Efforts to disrupt the Ho Chi Minh Trail would have led to heavier fighting in Cambodia and Laos.

Finally, it’s worth putting the broader strategic context on the table. The Sino-Soviet split demonstrated conclusively that the “socialist bloc” was nothing of the kind; communist states could disagree with one another in violent ways. Ho Chi Minh and his successors may have been, as Moyar points out, “doctrinaire Communists,” but Vietnam itself invaded another communist state in 1977, and went to war with one of its erstwhile patrons in 1979. The U.S. “loss” of Southeast Asia had no noticeable effect on the broader strategic balance between Moscow and Washington, a conclusion to which the Europeans had come at some point in the late 1960s.
Read it all here.  What do you think?

Friday, September 22, 2017

Letting Weapons Make The Decision to Kill

We are living in an age of machine autonomy.  We have autonomous cars on our highways already, and there are some who believe that driverless cars will be the rule, and not the exception, within the decade.  As machine learning grows in sophistication, many are asking some fundamental questions about autonomous weapons:  Should they be allowed?  Should they be developed?  Should we support an international treaty to ban them?

These are all great questions, but by and large, the discussions about them ignore some subtle but important distinctions.  For example, should our answer be different if the weapon at issue is purely defensive?  Our Aegis (ship-based) and Patriot (land-based) anti-missile defense systems must react quite quickly to missiles fired against US targets, and therefore have a fully autonomous mode.  In most cases, this would mean only that a machine killed another machine (and saved human lives in doing so), but they could also kill incoming enemy fighter planes.  Acceptable?  What about "smart missiles" that are programmed to only hit a particular type of enemy ship--are we uncomfortable when they are sent into the battlefield to make the actual kill decisions?  The next generation of ship-to-ship missiles will likely have this capacity.
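To make that concrete, here is a deliberately toy sketch, in Python, of the kind of pre-programmed target gate such a "smart missile" might carry.  Every name, class, and threshold below is made up for illustration; nothing here describes any real system.  The point is simply that the machine's "kill decision" is the evaluation of rules humans wrote before launch:

    # Toy illustration only -- hypothetical names and thresholds, no real system.
    from dataclasses import dataclass

    @dataclass
    class SensorContact:
        signature: str     # classifier label, e.g. "frigate" or "fishing_vessel"
        confidence: float  # classifier confidence, 0.0 to 1.0

    # Humans set the goal (which class may be engaged) and the latitude
    # (how confident the classifier must be) before the weapon is launched.
    AUTHORIZED_CLASS = "frigate"
    CONFIDENCE_THRESHOLD = 0.95

    def engage(contact: SensorContact) -> bool:
        """Engage only if the contact matches the human-authorized target
        class with sufficient confidence.  The 'decision' is nothing more
        than rules people wrote in advance."""
        return (contact.signature == AUTHORIZED_CLASS
                and contact.confidence >= CONFIDENCE_THRESHOLD)

    print(engage(SensorContact("frigate", 0.97)))         # True: engage
    print(engage(SensorContact("fishing_vessel", 0.99)))  # False: hold fire

However sophisticated the sensors and classifiers that produce those inputs, the authority to engage still traces back to the target class and threshold that people chose in advance.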

Marine JAG officer Lt. Col. Alan Schuller has an outstanding post at Just Security that takes all of these issues to a new level.  He looks at the issue of autonomous weapon systems through the prism of international law, but drills down on a truly fundamental point--at what point are machines really making kill decisions:
But consider the hypothetical case of an unmanned submarine that was granted the authority to attack hostile warships after spending days, weeks, or even months without human interaction.  Suppose the submarine was programmed to independently learn how to more accurately identify potential targets.  The link between human decision-making and lethal kinetic action gives us pause because it is attenuated.

As such, when some commentators speculate about future AWS equipped with sophisticated AI, they ascribe decisions to machines.  But even advanced machines do not decide anything in the human sense.  The hypothetical submarine above, for example, was granted a significant degree of autonomy in its authority and capability to attack targets.  Even if the system selects and engages targets without human intervention, however, it has not made a decision.  Humans programmed it to achieve a certain goal and provided it some latitude in accomplishing that goal.  Rather than focusing on human interactions with autonomous weapons, commentators’ inquiries should center on whether we can reasonably predict the effects of an AWS.
He argues that allowing autonomous weapons on the battlefield whose actions we have not guided in a predictable way is an abrogation of the responsibilities of military commanders under international law.  But he also argues that, as long as the weapons are guided by humans and will, with some certainty, operate according to the decisions those humans made, we should not be concerned that the human actions are temporally removed from the final kill decision:
Let’s explore why a blanket requirement of human input that is temporally proximate to lethal kinetic action is unnecessary from an IHL standpoint.  An anti-tank land mine may remain in place for an extended period without activating.  Yet, such systems are not indiscriminate per se.  Indeed, if future land mines were equipped with learning capacity that somehow increased their ability to positively identify valid military objectives, this could potentially enhance the lawfulness of the system.  As such, the analysis of the legality of an AWS will turn in large part on whether it's possible to reasonably predict the target or class of targets the system will attack.  The determination will depend on the specific authorities and capabilities granted to the AWS.  If the lethal consequences of an AWS’ actions are unpredictable, the decision to kill may have been unlawfully delegated to a machine.

Moreover, future military commanders may need to address threats that are too numerous and erratic for humans to respond.  For example, China is allegedly developing unmanned systems that could operate in swarms to rapidly overwhelm a U.S. aircraft carrier strike group.  In order to address such threats, future forces will likely need scalable options to fight at “machine speed.”  If a commander is forced to rely on affirmative human input in order to use force against each individual threat, the battle may be over before it has begun.
Read it all here.  You can read some of my earlier posts on autonomous weapon systems here, here, and here.