13

The Politics of Information and Analysis

There is little doubt that information, and the certainty that it can provide, is a source of power. It can be used as part of a very important political strategy—getting one’s way through analysis. Perhaps no figure in recent corporate and public life has so exemplified the exercise of power through facts and analysis as Robert McNamara, who was first president of Ford, then secretary of defense under Presidents Kennedy and Johnson, and most recently, prior to his retirement, the head of the World Bank. McNamara’s success in rising rapidly through the corporate ranks at Ford came from his mastery of information and analysis:

Henry Ford was new and unsure of himself, particularly in the field of financial systems. To an uneasy, uncertain Ford, McNamara offered reassurance; when questions arose he always seemed to have the answers, not vague estimates, but certitudes, facts, numbers, and lots of them. Though his critics might doubt that he knew what the public wanted or what it was doing, he could always forecast precisely the Ford part of the equation.1

It might seem odd to have a chapter on information and analysis in a book on power in organizations. But as we will see, our belief that there is a right answer to most situations and that this answer can be uncovered by analysis and illuminated with more information means that those in control of the facts and the analysis can exercise substantial influence. And facts are seldom so clear-cut, so unambiguous, as we might think. The manipulation and presentation of facts and analysis are often critical elements of a strategy to exercise power effectively.

Information and analysis can be useful, but it is important to recognize that, as Peter Drucker once remarked, anyone over the age of 21 can find the facts to support his or her position. Information and analysis are important for getting things done, largely because of our faith in them and in those who seem to have mastered them. But it is not invariably true that they will produce the “right answer,” or even a good answer. Halberstam’s history of the Vietnam War is the story of a U.S. government filled with brilliant people, who gathered lots of information and formulated analyses that were, unfortunately, not often based on sound judgment, common sense, or reasonable assumptions. It is important to understand the use of information as a political strategy, but it is also important to understand the limits of information and analysis. I particularly like the story of Lyndon Johnson, who inherited Kennedy’s brilliant cabinet and set of advisers when Kennedy was assassinated, discussing these people with his friend and mentor, Speaker of the House Sam Rayburn:

Stunned by their glamour and intellect, he [Johnson] had rushed back to tell Rayburn, his great and crafty mentor, about them, about how brilliant each was, that fellow Bundy from Harvard, Rusk from Rockefeller, McNamara from Ford. On he went, naming them all. “Well, Lyndon, you may be right and they may be every bit as intelligent as you say,” said Rayburn, “but I’d feel a whole lot better about them if just one of them had run for sheriff once.”2

There are four useful points to make about information and analysis as political tactics. First, all organizations strive for the appearance of rationality and the use of proper procedures, which include using information and analysis to justify decisions, even if the information and analysis are mustered after the fact to ratify a decision that has been made for other reasons. In constructing the appearance of legitimate and sensible decision processes, the use of outside experts, particularly expensive outside experts, is especially helpful. Such experts are at once legitimate sources of information and analysis and likely to be responsive to the needs of their specific clients within the organization. Second, in complex, multidimensional decisions such as those faced by high-level managers, it is very unlikely that processes of straightforward analysis will clearly resolve the issue of what to do. This means that, third, there is room for the advocacy of criteria and information that favor one’s own position, or, in other words, there is the opportunity to use information and analysis selectively.

Some might argue that even if information and analysis cannot fully determine the quality of decisions before they are made, decision quality does become known after the fact, leading to a process of learning over time. People who misuse information and analysis for their own political ends, the argument goes, will eventually be “uncovered” when decisions or results turn out badly. This learning will ensure that, over time, better information and better analysis are rewarded and incorporated into the organization’s standard operating procedures. As we will see, however, there is little evidence that these assumptions are true, and there are numerous examples of organizations behaving, for quite predictable reasons, in exactly the opposite way. The last point, then, is simply that the discovery of decision quality is both a difficult process and one that is often assiduously avoided in organizations of all types. As a consequence, the opportunity to use information and analysis as potent political weapons is available, and those with the skills and knowledge of how to do so can often, like Robert McNamara, gain substantial power and influence in their organizations.

THE NEED FOR THE APPEARANCE OF RATIONALITY

Power is most effectively employed when it is fairly unobtrusive. Using rational, or seemingly rational, processes of analysis helps to render the use of power and influence less obvious. Perhaps as important, decisions are perceived to be better and are accepted more readily to the extent that they are made following prescribed and legitimate procedures.

John Meyer and his colleagues have argued that the appearance of bureaucratic rationality is important, if not essential, for making organizations appear legitimate.3 And this appearance of legitimacy is crucial for attracting support and resources. Thus, in many instances, individuals in organizations do not seek out information in order to make a decision, but rather, they amass information so that the decision will seem to have been made in the “correct” fashion—i.e., on the basis of information rather than uninformed preferences or hunches.4 Kramer, writing about the analysis of public policies, has made a similar point:

Apparently analysis is used primarily to justify actions that are based on political predilections. . . . the techniques used and the emphasis on quantification give the results of analysis a “scientific” appearance—an appearance of value-free rationality at work.5

A study of a firm’s decision to purchase a computer observed that, contrary to the prescriptions of rational choice approaches, information was selectively collected and used in the decision process to provide support for the decision that was already favored.6 One might reasonably wonder, why bother? Why not just go ahead and purchase the desired computer without going through the exercise of gathering information, selectively and strategically, to favor the choice that had already been made? The answer is that decisions made either without information or simply by directive from above do not have the legitimacy or produce the same level of comfort as decisions that are made on the basis of information and analysis. We rely on facts and analytical technique to produce the “right” choice; how can the right choice be produced in the absence of these comforting certainties?

If information is necessary to get the decisions we want, then it seems obvious that the way to get things done in organizations is to develop skill in obtaining the facts that support our intended course of action. Sometimes we can get the facts we want because of our social ties and alliances, as Donald Frey, former CEO and chairman of Bell and Howell and a former executive at Ford Motor Company, relates:

I was . . . interested in getting . . . my ideas sold (sometimes against resistance to change), but this meant learning . . . another language.

I well remember being asked in the seemingly endless efforts to get approval for the original “Mustang” . . . what the net, “non substitutional” increase in vehicle sales volume would be with the car—that is, the number of cars that could be sold without cannibalizing our existing market. Since the Mustang was a completely new vehicle concept, no one really knew. One of the corporate market research types was given the question of substitution to answer. He knew from me that the stand-alone break even volume was about 84,000 units. A week later he reported that the increased volume, net of substitution, would be 86,000 units. I called and asked him how he got the number. He said he liked the car and its concept.7

In this case, the project turned out to be a big success, as the Mustang sold 400,000 units in its first year. Sometimes this process of getting the answers to support the decision produces a less fortunate result, but the tactics are the same. When the Cadillac division was deciding whether to launch the Allante, the price was set at $55,000. There was some question whether the projected volume and profits could be attained with that price:

Originally, GM’s internal staff projected sales of three thousand cars, with a $45,000 price tag. But at that volume and price, the project failed to generate a 15-percent ROI, so the division’s answer was to raise both estimates to make the project work on paper.8
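The arithmetic of “making the project work on paper” is easy to sketch. In the Python fragment below, every figure other than the $45,000 and $55,000 prices and the 15-percent hurdle is a purely illustrative assumption; GM’s actual investment, unit costs, and revised volume are not given in the text. The point is simply that a project failing an ROI hurdle can be made to pass by raising the volume and price inputs.

```python
# Illustrative sketch of raising projections until a project clears its ROI hurdle.
# All figures except the two prices and the 15% hurdle are invented assumptions.

def roi(units, price, unit_cost, investment):
    """Simple return on investment: annual profit over invested capital."""
    return units * (price - unit_cost) / investment

INVESTMENT = 600_000_000   # assumed program investment
UNIT_COST = 38_000         # assumed cost per car
HURDLE = 0.15              # the 15-percent ROI target from the text

# The staff's original projection fails the hurdle...
original = roi(units=3_000, price=45_000, unit_cost=UNIT_COST, investment=INVESTMENT)

# ...so raise both the volume and the price assumptions until it passes.
revised = roi(units=5_500, price=55_000, unit_cost=UNIT_COST, investment=INVESTMENT)

print(f"original projection ROI: {original:.1%}")  # well under the 15% hurdle
print(f"revised projection ROI:  {revised:.1%}")   # clears the hurdle on paper
```

Nothing in the output flags that the inputs were chosen to hit the target; the analysis looks just as rigorous either way.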

This internal manipulation of numbers to support one’s position is, of course, somewhat unseemly, and besides, one might get caught. A better strategy is to employ an outside expert, such as a consulting firm, to produce the numbers or answers you need. For if you use a third party, at a substantial cost, to produce a report, how can the organization ignore a study on which it has spent a lot of money? Moreover, given that it was done by a legitimate, reputable firm with an aura of expertise, the analysis must be correct. And furthermore, since the work was performed by an outside organization, with apparently no particular political stake in the results, the recommendations must surely be objective and impartial.

In 1981, John Debbink, at the time general manager of the Delco Moraine subsidiary of General Motors, was given the assignment to find out whether it would be possible to organize all the engine plants and engineering into one organization:

The challenge facing Debbink’s team was to see if a cultural shift could happen in the process of making the urgent organizational changes that were needed. They began by asking McKinsey and Company, a . . . consulting firm, to help evaluate the options. McKinsey provided the logical format for analyzing GM car operations and formalizing what had already been decided. Their final approach differed little from Debbink’s original concept.9

The nice thing about using consultants is that they can usually be relied on to further the decision you have in mind. In only one instance have I seen a consulting firm recommend the abolition of the job or the division that hired it—I think this is called “client relations.” Most firms know who brought them in, and provide the answers they are expected to give. The one exception was a firm that recommended the abolition of a department in a long-distance telephone company, even though it was the head of that function who had brought the consultants in to do the study. But this is a rare event. More commonly I have observed the outside expert recommend the advancement and enhancement of both the individual sponsor and the unit led by that sponsor. Because they are so often used to legitimate choices, I have heard consultants referred to as “hired guns.” And we all know where to stand with respect to a gun—behind it, not in front of it.

Consultants, then, can be powerful allies in internal political struggles. After George Ball had left E.F. Hutton for Prudential-Bache Securities, and after the check-kiting scandal at Hutton, Robert Rittereiser was hired as president. Hutton’s economic results continued to deteriorate, in part because of general problems in the securities industry, in part because of the legacy of the check-kiting scandal, and in part because of internal management problems, including some attributable to Rittereiser. Hutton desperately needed to raise additional capital, either through a merger with another organization, or by convincing another organization to make a substantial investment in the firm. To protect his own position and to gain influence with the board, Rittereiser decided to make sure that he was in control of bringing in investment bankers to assist in this process. He developed a relationship with Pete Peterson (formerly the managing partner of Lehman Brothers) and his firm, the Blackstone Group:

“The way Ritt saw it, he could borrow from Peterson’s prestige to regain the credibility he’d lost with the directors. He thought that if he could say, ‘I want to take this particular action, or talk to this particular buyer, and Pete Peterson thinks it’s a good idea too,’ he could get the board to side with him. In effect, he was hiring Peterson as a corporate ally.”10

The importance of obtaining seemingly impartial judgments, and the use of outside experts to achieve this end, is also illustrated in an example from Apple Computer. In the late 1980s, Apple Computer was interested in controlling headcount, or the number of permanent employees. John Sculley liked to talk about how high Apple’s revenues per employee were, and like many other organizations, Apple believed that it could control its expenses by managing its number of permanent employees. Of course, when work needs to get done and there are not enough people to do it, alternative arrangements are found, which in Apple’s case involved using a lot of independent contractors and workers provided by temporary agencies. Particularly in the case of contract employees, Apple may have been violating a number of state and federal labor laws and tax regulations, because the workers were actually legally employees even though they were treated as independent vendors. Apple’s human resource staff was concerned about this situation for several reasons: 1) Apple faced some legal risk because of its practices; 2) the hiring of temporary and contract employees bypassed the human resource staff’s control over hiring criteria and wage determination—indeed, one problem was that a person would leave Apple as an employee on Friday and return to work on Monday as an independent contractor, earning more than he or she had earned the previous week and more than co-workers who had remained employees were earning; 3) many of the contract employees, hired in a hurry, would probably not have passed muster as permanent employees; and 4) the use of a large number of contract and temporary workers threatened to weaken the Apple culture (and HR saw itself as a keeper of the culture) and exposed the corporation to some strategic risk, since a large fraction of its technical work force, involved in both hardware and software design, had no permanent attachment to the organization.

When the human resource staff broached these issues, their concerns were dismissed. However, the threat of possible legal problems got the corporation to agree to have their outside counsel, Pillsbury, Madison and Sutro, send some labor lawyers to look into these practices. These outside experts found that, at least with respect to the legal and tax issues, the HR department’s concerns had been well founded. Based on the legal analysis, a study of part-time, temporary, and contract employees was commissioned. Hiring and compensation practices were changed, and many of the workers were either taken on as regular employees or let go. The role of human resources, the point of contact for the labor attorneys, was strengthened, and the unit acquired various projects and more visibility.

Because of the need for the appearance, if not the reality, of rational decision processes, analysis and information are important as strategic weapons in battles involving power and influence. In these contests, the ability to mobilize powerful outside experts, with credibility and the aura of objectivity, is an effective strategy.

THE LIMITS OF FACTS AND ANALYSIS

It is evident that analysis and outside expertise can be employed strategically to affect decisions and actions. One might argue that such studies are nevertheless desirable—although the numbers and analysis may be used as part of a political contest, they also shed important light on organizational questions. But this is not invariably the case. It turns out that in organizational life, common sense and judgment are often more important than so-called facts and analysis. Three examples illustrate this point.

If good decisions were solely the result of intellectual capacity, then few mistakes would have been made during the Vietnam War. McNamara exerted tremendous influence over the war and the policies that were adopted toward it. And McNamara believed wholeheartedly in facts, in analysis, in data. The issue, of course, in this as in any other decision situation, is not whether it is right to gather information, but the more subtle question of what are the correct indicators and the appropriate information to consider. If you can find the facts to support virtually any decision, then your only concern is how to sort and weigh the information you obtain. There is also a bigger danger. In the absence of facts and analysis, you may admit that you are uncertain. But surrounded by information, even useless, misleading information, you will no longer feel uncertain or uninformed. In this sense, bad or misleading information is much worse than no information at all.

McNamara went to Vietnam to see things for himself. The man who loved and trusted data wanted to get it firsthand:

And there was that confidence which bordered on arrogance, a belief he could handle it. Perhaps . . . the military weren’t all that good; still, they could produce raw data, and McNamara, who knew data, would go over it carefully and extricate truth from the morass. . . . Talking with reporters and telling them that all the indices were good. He could not have been more wrong; he simply had all the wrong indices, looking for American production indices in an Asian political revolution. . . . he scurried around Vietnam, looking for what he wanted to see. . . . He was so much a prisoner of his own background. . . . memories of him still remain: McNamara in 1962 going to Operation Sunrise, the first of the repopulated villages, the villagers obviously filled with bitterness and hatred, ready, one could tell, to slit the throat of the first available Westerner, and McNamara not picking it up, innocently firing away his questions. How much of this? How much of that?11

When he finally turned against the war, McNamara was bitter about the generals who he thought had misled him. But he had obtained the data he wanted, and had it analyzed by the very best systems analysts around. The problem wasn’t the numbers or the analysis—the problem was interpretation.

Thus did the Americans ignore the most basic factor of the war, and when they did stumble across it, it continued to puzzle them. McNamara’s statistics and calculations were of no value at all, because they never contained the fact that if the ratio was ten to one in favor of the government, it still meant nothing, because the one man was willing to fight and die and the ten were not.12

We apparently learned little from the limits of analysis in Vietnam in the 1960s, because the same type of mistakes occurred in business corporations in the 1970s. During the 1970s, the Xerox Corporation’s finance staff and president came from Ford Motor Company and the Robert McNamara school of systems analysis and quantification.13 Archie McCardell was hired in 1966 as group vice president of finance and control, and became president in 1971. With McColough, the CEO, increasingly preoccupied with external relations, McCardell and his numbers orientation came to dominate the Xerox culture. The question that might be posed, however, is whether any of the numbers and the decisions based upon them made sense. Xerox’s failure to respond to the threat posed by small Japanese copiers, which was discussed in Chapter 10, is yet another example of the limits of numbers and analysis.

Xerox adopted a strategy of grouping customers “by the volume of their copying needs, then designed, built, and sold machines with copying speeds to match each segment.”14 Because of the segmentation by market, when the Japanese entered the small machine, low copy-volume segment, Xerox did not react. Savin/Ricoh quickly outflanked Xerox, distributing their machines through office equipment dealers, dealing with the service issue by building machines that broke down one-third as often and were easier to repair because of modular parts, and designing standardized components in the machines to reduce manufacturing costs. Moving first into the low end of the market, away from the centralized reprographics department, Savin/Ricoh was able to get machines installed in the facilities of virtually all of Xerox’s customers. Moving upmarket, with an established reputation for reliability, product innovation, and dependability, was simple. Xerox lost one-third of its market share in five years, between 1972 and 1977, simply by following analysis that suggested the part of the market it was losing was not of concern anyway.

When Xerox finally began to see its margins erode, analysis again tried to provide the answer—cut manufacturing costs. The problem with this strategy, as Ford was learning about the same time, is how to measure costs. Manufacturing costs are, of course, only part of the costs of installing and maintaining a product—warranty and service costs are important as well, as are customer good will and market acceptance. In Xerox’s drive to cut manufacturing costs, in part by substituting cheaper components, warranty and service costs increased at a rate that almost completely eroded any “savings” in manufacturing. This was not captured in the cost analysis, which focused only on what it cost to get the machine out the door. And Xerox’s market share continued to erode under the pressure of poor product quality; a market share of 95% in 1972 had fallen to 65% in 1977, 54% in 1978, and 46% in 1979.15 Xerox had lost half its market in seven years, all the while paying attention to the numbers and being dominated by financial analysis. This example does not mean that numbers and analysis will invariably produce poor results—but it shows that good results don’t necessarily follow, either.

We have seen that information and analysis cannot really help one weigh the importance of alternative perspectives. We have also seen that numbers, particularly numbers from traditional cost accounting systems, can be misleading in terms of developing and implementing sound manufacturing and marketing strategy.16 Our final example illustrates that the rapture of information and analysis can, in the end, mislead even those who are gathering the information and doing the analysis, and who ought to know better.

Time Inc.’s ill-fated attempt to launch TV-Cable Week was based on an analysis done by two Harvard MBAs. The concept involved system-specific listings for each cable system, which would be costly to edit and produce, especially because the editor, Richard Burgheim, was committed to doing only a quality product. The magazine would probably be marketed by the cable system operators, and the question was how high a market penetration the magazine would need in each cable system to make money:

Neither one knew, so they started experimenting with various penetration assumptions. At 3 percent . . . (roughly what Time Magazine enjoyed in its own markets), losses . . . ran far into the millions annually without letup. . . . if one raised the penetration assumption to 8 percent? Still a loss. . . . Go for 15 percent. No deal. Try 20 percent. At 60 percent penetration, the numbers at last showed a profit . . . as it happened, 60 percent market penetration, while admittedly a level unheard of in mass magazine marketing, equaled almost exactly the level of penetration enjoyed by HBO in its own cable markets. . . . the players reasoned thus: if HBO can get such penetration, then why not a listings guide that tells viewers what HBO is showing? The thinking was logical, the mathematics impeccable. But the conclusion was totally out of touch with reality. For no mass market magazine had ever gotten more than one-fifth the market penetration their project now seemed to require to show a profit.17

Once reduced to black and white, once pro formas had been done, the analysis took on a life of its own. Regardless of the absurdity of the underlying assumptions, the analysis became reality, and the magazine was eventually launched. Of course, its penetration never even approached 3%.
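The seductiveness of “solving for” the penetration assumption is easy to reproduce. The sketch below uses entirely hypothetical economics for a single cable system (the household count, cover price, and costs are all my assumptions, not Time Inc.’s figures); the structure, though, mirrors the exercise described above: profit is linear in penetration, so some penetration figure always turns the pro forma profitable.

```python
# Hypothetical sketch of the TV-Cable Week pro forma logic. Every number here
# is an illustrative assumption; the point is that profit rises linearly with
# the penetration rate, so raising that one assumption always "works."

def annual_profit(households, penetration, price_per_copy, issues_per_year,
                  variable_cost_per_copy, fixed_cost):
    """Profit for one cable system's custom edition over a year."""
    subscribers = households * penetration
    margin_per_subscriber = (price_per_copy - variable_cost_per_copy) * issues_per_year
    return subscribers * margin_per_subscriber - fixed_cost

# Assumed economics for a single system-specific edition.
SYSTEM = dict(households=50_000, price_per_copy=0.75, issues_per_year=52,
              variable_cost_per_copy=0.55, fixed_cost=250_000)

# Walk the penetration assumption upward, as the planners did.
for pen in (0.03, 0.08, 0.15, 0.20, 0.60):
    profit = annual_profit(penetration=pen, **SYSTEM)
    print(f"penetration {pen:4.0%}: annual profit ${profit:,.0f}")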

SELECTIVE USE OF INFORMATION

Because organizations need rationalized decision processes, and because such processes are inherently ambiguous, there is room for individuals to selectively advocate criteria that favor their own interests and units. Almost all decisions involve not only choosing among the available alternatives but also selecting the appropriate criteria. Because organizations are inevitably confronted with multiple, occasionally competing, objectives, the assessment of the effects of organizational choices is inherently ambiguous and uncertain.18

Given the availability of multiple bases for making a decision, one strategic use of power and influence involves advocating the employment of standards that favor one’s own position. A study of resource allocation at the University of Illinois found that:

There is support in the data for the idea that when asked what the criteria for budget allocations should be, respondents replied with criteria that tended to favor the relative position of their own organizational subunit. . . . To the extent the department head perceived a comparative advantage in terms of his department’s obtaining grants and contracts and to the extent his department actually did receive more restricted funds, the department head tended to favor grants and contracts as a basis of budget allocation. . . . Preferences for basing budget allocation on the number of undergraduate students taught was correlated .34 . . . with the proportion of undergraduate instructional units taught. . . . Preferences for basing budget allocations on the national rank [prestige] of the department was correlated .43 . . . with the national rank of the department in 1969. . . . The data indicate that departments with a comparative advantage in a particular area favored basing budget allocations more on this criterion.19

The selective use of data stems in part from simple self-interested behavior. But it is more than self-interest that produces the selective use of both data and a particular perspective. Through a process of commitment, individuals come to believe in what they do. And under conditions of uncertainty, which often characterize managerial decision making, individuals prefer to use both data and decision-making processes with which they are comfortable.20 Thus, it is not surprising that finance types, often unfamiliar with engineering or with manufacturing processes, rely on quantitative indicators of operations and forecasts of economic return,21 while engineers rely more on technical factors and on their sense of the product design or the design of the operating system. We do what we know how to do, and we make choices according to the criteria that are most familiar to us.

Not all of us, however, are equally sensitive to the strategic purposes served by selectively favoring certain data, nor are we all equally skilled at the process involved. One study examined the effects of three factors on resource allocations at the University of Illinois, once departmental power and objective factors were taken into account: 1) the accuracy with which the department head perceived his unit’s relative standing on various criteria used for resource allocation; 2) the extent to which the department head advocated basing resource allocations on criteria on which the department scored relatively well; and 3) the accuracy of the department head’s perceptions of the distribution of departmental power in the university.22 The study found that advocating criteria that favored the department, and having an accurate understanding of where the department stood on the potential criteria for resource allocations, were both positively related to the department’s ability to obtain resources, with the effects being particularly strong for less critical and scarce resources and for higher power departments. The evidence suggests that power is effectively employed, in part, through advocating the use of decision rules that favor one’s own department, and the use of this strategy requires that departmental representatives understand their relative benefit from the use of alternative criteria.

The use of analysis and information to favor one’s own position is enhanced by having certain technical skills. I remember talking to a former student who had taken a job with the Washington Post, a newspaper that had hired a number of MBAs, some from schools with less of a quantitative orientation than Stanford. I asked him how he was doing, particularly in his interactions with his colleagues from other schools. He replied that he was doing very well, and that he had found he was particularly effective in winning acceptance for proposals he favored. When I asked how he did this, he explained that his knowledge of both statistics and operations research and quantitative analysis was helpful, in that he could develop elaborate and sophisticated presentations and rationales for his point of view. Of course, he stated, he did not use the analysis to decide what course of action to pursue, but rather, to convince others of the validity of his ideas. In this sense, knowledge of analytic techniques is very helpful, if not critical, in the exercise of power and influence in organizations. The key is to understand what form of argument will be convincing in one’s particular environment, and to have the ability to formulate an argument in the appropriate fashion, using whatever analysis or data are accepted in that context.

Of course, employing information selectively means strategically ignoring information that does not advance one’s own point of view. We are particularly likely to ignore or distort information when it is inconsistent with our biases and with the course of action we have already undertaken. The following example from the Second World War, concerning the Allied decision to attack Germany through Holland in 1944, is particularly striking:

The whole enterprise depended upon an absence of strong German forces both in the Arnhem area and on the approach route from the south. Hence it came as something of a jolt when SHAEF received reports from the Dutch underground that two S.S. Panzer divisions which had mysteriously “disappeared” some time previously had now reappeared almost alongside the dropping zone. . . . since these ugly facts did not accord with what had been planned they fell upon a succession of deaf ears. . . . When one of his intelligence officers showed him the aerial photographs of German armour, General Browning, at First British Airborne H.Q., retorted: “I wouldn’t trouble myself about these if I were you. . . . they’re probably not serviceable at any rate.” The intelligence officer was then visited by the Corps medical officer who suggested he should take some leave because he was so obviously exhausted.23

WHY THERE IS OFTEN NO LEARNING

The final issue to be considered in our discussion of the political strategy of information and analysis is the question of learning. If the distortion of data occurs so routinely, why don’t people treat analyses with greater wariness? And why doesn’t feedback tend to correct some of these errors? For instance, if data are tabulated according to criteria that promote someone’s favored decision, and that decision does not work out, he or she might be expected to suffer the consequences. In fact, however, such consequences are rarely visited on those who use information strategically. There are several reasons for this.

First, there is often no possible way of knowing whether the right decision has been made, because as I noted in Chapter 7, the right decision is a construct with almost no meaning in many circumstances. For instance, say a government agency or department head such as Robert Moses uses information and analysis to show that his unit is comparatively more deserving of resources, and as a consequence, receives a disproportionate share of the budget. How can one know if that was a correct or incorrect decision? How can one know whether New York City has too many parks and not enough firehouses, if San Francisco has spent too much on public health and not enough on roads, if the University of Illinois has spent too much on its physics department and not enough on its romance languages? Different people will have different points of view. There are a number of ways of measuring the effects of budget allocations and other decisions, but there is no way of completely resolving the uncertainty.

But surely things are clearer in private, for-profit organizations—the profit goal must provide a yardstick against which to evaluate judgments, so that people who use information strategically will be in trouble if their particular perspective turns out not to be the best for the organization’s profitability.

Not necessarily.

In the first place, many decisions have remote or highly indirect connections to the outcomes that are measured or measurable in organizations. For instance, we saw in Chapter 5 that many people like to help their allies obtain positions of power, or alternatively, to have other executives beholden to them—allies are a critical source of power in organizations. But, unless your allies are egregiously incompetent, it is not likely that appointing your friends and supporters to positions of power will have a visible effect on organizational performance.

A related point is that outcomes in organizations are overdetermined in the sense that they have multiple causes. It is difficult, if not impossible, to ascertain which of the possible causes is the true source of the result. Consider Xerox’s loss of market share and potential profitability during the 1970s. To what should we attribute this—Peter McColough’s appointment of McCardell as president and the consequent dominance of Ford-trained finance types? The location of PARC in Palo Alto, away from the rest of Xerox, so that integrating the results of its research into product development was more difficult? McColough’s preoccupation with external relations and organizations? The strategic insights of the Japanese, who saw both a market opportunity (in the low-end copiers) and a way of exploiting that opportunity? Or one of a number of other factors?24 With multiple causation, the assignment of blame for a failure is itself a political process, rather than an inferential one.

There are three other factors that also tend to prevent the kind of feedback that might constrain both the operation of power and influence and the strategic use of information, analysis, and outside expertise. The first factor is the length of time many decisions take to have consequences that are ascertainable, even if such consequences can be evaluated at all. Building a nuclear power plant takes more than a decade, and many capital construction projects extend over many years. The launching of a new product, expansion geographically, and alterations in product strategy are all actions that take time both to implement and to produce consequences. This time lag before decisions come to fruition makes it less likely that who is responsible will be clearly remembered.

It is also the case that the very nature of most organizational decision making involves building up some degree of collective responsibility, which means that it is difficult to assign blame to individuals when plans go awry. Most of the notable failures we have discussed so far did not have single architects who could be called to account for their mistakes. For instance, Time Inc.’s decision to launch TV-Cable Week was essentially a group decision. There were several executive committee meetings held, many people were involved in the process, and finally, even the board of directors gave approval to pursuing the project. And if it is difficult to determine responsibility for what has been done, it is nearly impossible to find those responsible for what has not been done. There were a number of people at Xerox, including some at PARC, who did not push for the rapid commercialization of their personal computer technology, thus permitting other companies, such as Apple, to gain first-mover advantages.

Just as there is collective responsibility for decisions, there is a collective unwillingness to determine the causes of past failures. Organizations are notorious for avoiding evaluation and avoiding looking backward. They are incredibly nonintrospective, if I can use that term. It is only under extreme public pressure, for instance, that either schools or hospitals have recently begun to publish outcome measures—in the case of schools, student scores on standardized tests, and in the case of hospitals, data on costs and morbidity and mortality outcomes. For years both types of institutions resisted not only publishing such data but even collecting it for internal use.

My colleague Jim Baron sat on a panel considering the implementation of pay-for-performance in the Civil Service System. It occurred to him that over the years there had been literally hundreds of innovations in personnel practices, both in the government and in the private sector. Yet the amount of evaluation of these innovations, to see whether they had any effect at all, let alone the desired effect, was trivial, in either the public or the private sector. I suspect many readers of this book will have been in organizations in which performance evaluation systems were changed, compensation practices altered, organizations restructured, work organization reformed, and so forth. In how many instances was the evaluation of any of these changes undertaken or even contemplated? Although we often think that the avoidance of evaluation and assessment is particularly likely in so-called institutionalized organizations such as those in the public sector,25 I often see the same reluctance to evaluate the results of changes in private sector firms.

We should consider what happened to various executives involved in ventures that were clearly unsuccessful—a clarity that is, in fact, quite rare in complex organizations. Under the McCardell presidency, Xerox lost half its market share, lost its lead in technology (in the high end of the copier business to Kodak, in the low end to Savin/Ricoh), developed a reputation for poor-quality products, failed to keep pace with the manufacturing efficiencies of the Japanese, and failed to take advantage of the digital technologies being developed at the Palo Alto Research Center. In 1977, International Harvester hired Archie McCardell to be its chief executive officer, with a multiyear compensation package worth more than $6 million.26

After TV-Cable Week was closed down, having cost Time Inc. approximately $50 million:

Saving face proved to be a major thirty-fourth floor concern. . . . When a television reporter asked Grunwald to comment on the failure, Time Inc.’s Editor-in-Chief said, “Well, everybody’s entitled to one Edsel.” . . . When a trade reporter asked Executive Vice President Clifford Grum which management official had been responsible for overseeing the project, Time Inc.’s second-in-command answered, “There was no one man in charge; it was a group effort,” then looked ready to end the interview if the reporter should press further. . . . In his three years as the company’s president and CEO, Munro failed at virtually every new venture he authorized, eventually accumulating losses that totaled nearly 10 percent of Time Inc.’s entire net worth. . . . corporate debt increased, earnings per share stagnated, and investment analysts began to view the company as lacking in direction. Yet his weak performance did not stop Time Inc.’s five-man Compensation and Personnel Committee . . . from bestowing on him regular annual salary and stock bonus increases anyway.27

Furthermore, Munro elevated the head of the Video Group and the head of the Magazine Group, each of them closely involved with the failure, to the corporation’s board of directors even as the magazine was closing.28

When Eastern Airlines was in severe financial trouble in the mid-1980s and the employees had lost confidence in Frank Borman, the CEO, the board of directors rejected Borman’s resignation, offered in return for promised additional wage concessions on the part of the employees. Rather, they preferred to sell the company to Frank Lorenzo, with Borman receiving a $1 million severance package shortly thereafter. And, of course, in 1990, as he retired after presiding over General Motors’ loss of about one-third of its market share, Roger Smith was awarded an increased pension in excess of $1 million per year by the board of directors.

Such outcomes are not inevitable, and I am certainly not arguing that the road to success is through corporate failure and disaster. But it is important to recognize that the connection between results and what happens to people inside large organizations is quite tenuous, for all the reasons that I have presented. What this means is that we should probably not hesitate to use information and analysis to exercise power in organizations, since the strategy is an effective one and the likelihood of our being called to account for our actions is not very great.
