11

THE MILITARY

The U.S. military is perhaps the largest and most complex organization in the world. And since at least the Vietnam era, it has tried to use metrics in its counterinsurgency (COIN) campaigns, most recently in Iraq and Afghanistan. Though a small part of the U.S. military’s use of metrics, COIN is a particularly instructive case, with larger ramifications for our topic. For not only has the military made extensive use of metrics in the interests of accountability and transparency, its efforts have also been scrutinized by academic researchers working at American military academies and at the Rand Corporation, which conducts research for the Department of Defense. Some of these researchers are both soldiers and scholars, while others have a more conventional academic background. What characterizes their work is close contact with actual experience, either in the form of direct participation in counterinsurgency or of access to recently deployed officers. Because they write in good part for policymakers and officers who will be deployed in the future, the stakes of their scholarship are high. As a result, perhaps, some are extraordinarily honest and astute about the use and misuse of metrics.1

As the American experience in Vietnam shows, metrics may be misleading, and their pursuit may have unseen negative consequences. For one thing, the information may be costly to gather: American soldiers lost their lives searching for corpses to include in the body counts so valued by Secretary of Defense McNamara (see chapter 3). Those statistics were frequently exaggerated in order to boost the commanding officers’ chances of promotion. And the stream of seemingly objective but actually fallacious information led policymakers and politicians to mistake improvement in the measured performance for real progress.2

David Kilcullen is a soldier/scholar who served as an officer in the Australian army before moving to the United States. He has held a number of key positions as a strategist of counterinsurgency for the U.S. Army and the Department of State, and spent time in Afghanistan and Iraq. His book Counterinsurgency includes an illuminating essay, “Measuring Progress in Afghanistan.” “Counterinsurgency,” as he simply puts it, is “whatever governments do to defeat rebellions.”3 The environment faced by counterinsurgents is complex and dynamic: “Insurgents and terrorists evolve rapidly in response to countermeasures, so that what works once may not work again, and insights that are valid for one area or one period may not apply elsewhere.” Thus, Kilcullen emphasizes, metrics must be adapted to the particularities of the case: standardized metrics drawn from past wars in other venues will simply not work. Not only that, but use of the best performance metrics demands judgment based upon experience:

Interpretation of indicators is critically important, and requires informed expert judgment. It is not enough merely to count incidents or conduct quantitative or statistical analysis—interpretation is a qualitative activity based on familiarity with the environment, and it needs to be conducted by experienced personnel who have worked in that environment for long enough to detect trends by comparison with previous conditions. These trends may not be obvious to personnel who are on short-duration tours in country, for example.4

Kilcullen explains why many standard metrics can be deceptive and should be avoided, including body counts and counts of “significant activity” (SIGACTs), meaning violent incidents against counterinsurgency forces. The usual assumption is that the lower the number of such violent encounters, the better. But that is not necessarily the case, Kilcullen explains, since “[v]iolence tends to be high in contested areas and low in government-controlled areas. But it is also low in enemy-controlled areas, so that a low level of violence indicates that someone is fully in control of a district but does not tell us who.” He also warns against the use of all “input metrics,” that is, metrics that count what the army and its allies are doing, for these may be quite distinct from the outcomes of those actions:

Input metrics are indicators based on our own level of effort, as distinct from the effects of our efforts. For example, input metrics include numbers of enemy killed, numbers of friendly forces trained, numbers of schools or clinics built, miles of road completed, and so on. These indicators tell us what we are doing but not the effect we are having. To understand that effect, we need to look at output metrics (how many friendly forces are still serving three months after training, for example, or how many schools or clinics are still standing and in use after a year) or, better still, at outcome metrics. Outcome metrics track the actual and perceived effect of our actions on the population’s safety, security, and well-being.5

Coming up with useful metrics often requires an immersion in local conditions. Take, for example, the market price of exotic (i.e., nonlocal) vegetables, which few outsiders look to as a useful indicator of a population’s perceived peace and well-being. Kilcullen, however, explains why they might be helpful:

Afghanistan is an agricultural economy, and crop diversity varies markedly across the country. Given the free-market economics of agricultural production in Afghanistan, risk and cost factors—the opportunity cost of growing a crop, the risk of transporting it across insecure roads, the risk of selling it at market and of transporting money home again—tend to be automatically priced in to the cost of fruits and vegetables. Thus, fluctuations in overall market prices may be a surrogate metric for general popular confidence and perceived security. In particular, exotic vegetables—those grown outside a particular district that have to be transported further at greater risk in order to be sold in that district—can be a useful telltale marker.6

Thus, developing valid metrics of success and failure requires a good deal of local knowledge, knowledge that may be of no use in other circumstances—to the chagrin of those who look for universal templates and formulae. The hard part is knowing what to count, and what the numbers you have counted actually mean in context.

Some broader lessons of counterinsurgency assessment are drawn out by Ben Connable, an analyst at the Rand Corporation, in his recent study Embracing the Fog of War: Assessment and Metrics in Counterinsurgency. “It would be difficult (if not impossible),” he writes, “to develop a practical, centralized model for COIN assessment because complex COIN environments cannot be clearly interpreted through a centralized process that removes data from their salient local context.” Therefore “information can have very different meanings from place to place and over time.” The problem arises from “the incongruity between decentralized and complex COIN operations and centralized, decontextualized assessment.”7

These concerns apply well beyond the military realm: to the extent that we try to develop performance metrics for any complex environment or organization that is either unique or substantially different from other environments or organizations, standardized measures of performance will be inaccurate and deceptive. Yet the desire to create performance metrics that are “transparent” in the interests of “accountability” usually translates into using metrics that are standardized and centralized, since such metrics are more easily grasped by superiors and by publics far from the field of operations. Moreover, as another recent Rand study notes, observations that are communicated through quantitative measures are regarded as “empirical,” while observations conveyed in qualitative form are treated as less reliable, despite the fact that “in practice, many of the quantitative metrics used in assessments are themselves anecdotal in that they reflect the observational bias of those reporting.”8

Connable characterizes counterinsurgency as “both art and science, but mostly art.”9 That applies to the management of many other complex situations. The tendency is to treat as pure, measurable science what is of necessity largely a matter of art, requiring judgment based on experience.
