Innovation Killers

How Financial Tools Destroy Your Capacity to Do New Things

by Clayton M. Christensen, Stephen P. Kaufman, and Willy C. Shih

FOR YEARS WE’VE BEEN puzzling about why so many smart, hardworking managers in well-run companies find it impossible to innovate successfully. Our investigations have uncovered a number of culprits, which we’ve discussed in earlier books and articles. These include paying too much attention to the company’s most profitable customers (thereby leaving less-demanding customers at risk) and creating new products that don’t help customers do the jobs they want to do. Now we’d like to name the misguided application of three financial-analysis tools as an accomplice in the conspiracy against successful innovation. We allege crimes against these suspects:

• The use of discounted cash flow (DCF) and net present value (NPV) to evaluate investment opportunities causes managers to underestimate the real returns and benefits of proceeding with investments in innovation.

• The way that fixed and sunk costs are considered when evaluating future investments confers an unfair advantage on challengers and shackles incumbent firms that attempt to respond to an attack.

• The emphasis on earnings per share as the primary driver of share price and hence of shareholder value creation, to the exclusion of almost everything else, diverts resources away from investments whose payoff lies beyond the immediate horizon.

These are not bad tools and concepts, we hasten to add. But the way they are commonly wielded in evaluating investments creates a systematic bias against innovation. We will recommend alternative methods that, in our experience, can help managers innovate with a much more astute eye for future value. Our primary aim, though, is simply to bring these concerns to light in the hope that others with deeper expertise may be inspired to examine and resolve them.

Misapplying Discounted Cash Flow and Net Present Value

The first of the misleading and misapplied tools of financial analysis is the method of discounting cash flow to calculate the net present value of an initiative. Discounting a future stream of cash flows into a “present value” assumes that a rational investor would be indifferent to having a dollar today or to receiving some years from now a dollar plus the interest or return that could be earned by investing that dollar for those years. With that as an operating principle, it makes perfect sense to assess investments by dividing the money to be received in future years by (1 + r)^n, where r is the discount rate—the annual return from investing that money—and n is the number of years during which the investment could be earning that return.
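As a minimal illustration of this arithmetic, the discounting can be sketched in a few lines of Python. The figures below (the cash flows, the 12% discount rate, the upfront cost) are hypothetical, chosen only to show the mechanics.

```python
# A minimal sketch of the discounting arithmetic described above.
# All figures (cash flows, the 12% discount rate, the upfront cost) are hypothetical.

def present_value(cash_flow, r, n):
    """Discount a single cash flow received n years from now at annual rate r."""
    return cash_flow / (1 + r) ** n

def npv(upfront_cost, cash_flows, r):
    """Net present value: discounted future cash flows minus the upfront cost."""
    return sum(present_value(cf, r, year)
               for year, cf in enumerate(cash_flows, start=1)) - upfront_cost

# Example: a $10M project expected to return $3M a year for five years, r = 12%.
flows = [3.0] * 5                        # $ millions, years 1 through 5
print(round(npv(10.0, flows, 0.12), 2))  # roughly 0.8 ($ millions)
```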

While the mathematics of discounting is logically impeccable, analysts commonly commit two errors that create an anti-innovation bias. The first error is to assume that the base case of not investing in the innovation—the do-nothing scenario against which cash flows from the innovation are compared—is that the present health of the company will persist indefinitely into the future if the investment is not made. As shown in the exhibit “The DCF trap,” the mathematics considers the investment in isolation and compares the present value of the innovation’s cash stream less project costs with the cash stream in the absence of the investment, which is assumed to be unchanging. In most situations, however, competitors’ sustaining and disruptive investments over time result in price and margin pressure, technology changes, market share losses, sales volume decreases, and a declining stock price. As Eileen Rudden at Boston Consulting Group pointed out, the most likely stream of cash for the company in the do-nothing scenario is not a continuation of the status quo. It is a nonlinear decline in performance.

It’s tempting but wrong to assess the value of a proposed investment by measuring whether it will make us better off than we are now. It’s wrong because, if things are deteriorating on their own, we might be worse off than we are now after we make the proposed investment but better off than we would have been without it. Philip Bobbitt calls this logic Parmenides’ Fallacy, after the ancient Greek logician who claimed to have proved that conditions in the real world must necessarily be unchanging. Analysts who attempt to distill the value of an innovation into one simple number that they can compare with other simple numbers are generally trapped by Parmenides’ Fallacy.

The DCF trap

Most executives compare the cash flows from innovation against the default scenario of doing nothing, assuming—incorrectly—that the present health of the company will persist indefinitely if the investment is not made. For a better assessment of the innovation’s value, the comparison should be between its projected discounted cash flow and the more likely scenario of a decline in performance in the absence of innovation investment.
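To make the exhibit’s point concrete, here is a hypothetical sketch in Python: the same innovation project is compared first against a frozen status quo and then against a deteriorating baseline. All figures (the $20 million base business, the 10% annual erosion, the project’s cash flows and cost) are invented for illustration.

```python
# Hypothetical illustration of the comparison described in the exhibit.
# A base business currently generating $20M a year; all numbers are invented.

def discounted_sum(cash_flows, r):
    return sum(cf / (1 + r) ** year for year, cf in enumerate(cash_flows, start=1))

r, years, investment = 0.12, 8, 15.0
status_quo = [20.0] * years                                         # the "do nothing" assumption
deteriorating = [20.0 * 0.90 ** y for y in range(1, years + 1)]     # share and margin erosion
with_innovation = [18.0, 18.5, 19.5, 21.0, 22.0, 23.0, 24.0, 25.0]  # pays off in later years

pv_project = discounted_sum(with_innovation, r) - investment

# Against a frozen status quo the project looks value-destroying...
print(round(pv_project - discounted_sum(status_quo, r), 1))
# ...against the more likely deteriorating baseline it clearly creates value.
print(round(pv_project - discounted_sum(deteriorating, r), 1))
```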

It’s hard to accurately forecast the stream of cash from an investment in innovation. It is even more difficult to forecast the extent to which a firm’s financial performance may deteriorate in the absence of the investment. But this analysis must be done. Remember the response that good economists are taught to offer to the question “How are you?” It is “Relative to what?” This is a crucial question. Answering it entails assessing the projected value of the innovation against a range of scenarios, the most realistic of which is often a deteriorating competitive and financial future.

The second set of problems with discounted cash flow calculations relates to errors of estimation. Future cash flows, especially those generated by disruptive investments, are difficult to predict. Numbers for the “out years” can be a complete shot in the dark. To cope with what cannot be known, analysts often project a year-by-year stream of numbers for three to five years and then “punt” by calculating a terminal value to account for everything thereafter. The logic, of course, is that the year-to-year estimates for distant years are so imprecise as to be no more accurate than a terminal value. To calculate a terminal value, analysts divide the cash to be generated in the last year for which they’ve done a specific estimate by (r–g), the discount rate minus the projected growth rate in cash flows from that time on. They then discount that single number back to the present. In our experience, assumed terminal values often account for more than half of a project’s total NPV.
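A hypothetical sketch shows how heavily the result can lean on that shortcut. The explicit forecasts, discount rate, and growth rate below are invented; the terminal-value step simply follows the division by (r - g) described above.

```python
# Hypothetical sketch of the terminal-value shortcut. The explicit forecasts,
# discount rate, and growth rate are invented for illustration.

r, g = 0.12, 0.03
explicit = [4.0, 4.5, 5.0, 5.5, 6.0]   # $ millions, years 1 through 5

pv_explicit = sum(cf / (1 + r) ** year for year, cf in enumerate(explicit, start=1))

# Terminal value: the last explicitly forecast year's cash divided by (r - g),
# then discounted back to the present from year 5.
terminal_value = explicit[-1] / (r - g)
pv_terminal = terminal_value / (1 + r) ** len(explicit)

total_npv = pv_explicit + pv_terminal
print(round(pv_terminal / total_npv, 2))  # here, well over half the value sits in the terminal value
```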

Terminal value numbers, based as they are on estimates for preceding years, tend to amplify errors contained in early-year assumptions. More worrisome still, terminal value doesn’t allow for the scenario testing that we described above—contrasting the result of this investment with the deterioration in performance that is the most likely result of doing nothing. And yet, because of market inertia, competitors’ development cycles, and the typical pace of disruption, it is often in the fifth year or beyond—the point at which terminal value factors in—that the decline of the enterprise in the do-nothing scenario begins to accelerate.

Arguably, a root cause of companies’ persistent underinvestment in the innovations required to sustain long-term success is the indiscriminate and oversimplified use of NPV as an analytical tool. Still, we understand the desire to quantify streams of cash that defy quantification and then to distill those streams into a single number that can be compared with other single numbers: It is an attempt to translate cacophonous articulations of the future into a language—numbers—that everyone can read and compare. We hope to show that numbers are not the only language into which the value of future investments can be translated—and that there are, in fact, other, better languages that all members of a management team can understand.

Using Fixed and Sunk Costs Unwisely

The second widely misapplied paradigm of financial decision making relates to fixed and sunk costs. When evaluating a future course of action, the argument goes, managers should consider only the future or marginal cash outlays (either capital or expense) that are required for an innovation investment, subtract those outlays from the marginal cash that is likely to flow in, and discount the resulting net flow to the present. As with the paradigm of DCF and NPV, there is nothing wrong with the mathematics of this principle—as long as the capabilities required for yesterday’s success are adequate for tomorrow’s as well. When new capabilities are required for future success, however, this margining on fixed and sunk costs biases managers toward leveraging assets and capabilities that are likely to become obsolete.

For the purposes of this discussion we’ll define fixed costs as those whose level is independent of the level of output. Typical fixed costs include general and administrative costs: salaries and benefits, insurance, taxes, and so on. (Variable costs include things like raw materials, commissions, and pay to temporary workers.) Sunk costs are those portions of fixed costs that are irrevocably committed, typically including investments in buildings and capital equipment and R&D costs.

An example from the steel industry illustrates how fixed and sunk costs make it difficult for companies that can and should invest in new capabilities actually to do so. In the late 1960s, steel minimills such as Nucor and Chaparral began disrupting integrated steelmakers such as U.S. Steel (USX), picking off customers in the least-demanding product tiers of each market and then moving relentlessly upmarket, using their 20% cost advantage to capture first the rebar market and then the bar and rod, angle iron, and structural beam markets. By 1988 the minimills had driven the higher-cost integrated mills out of lower-tier products, and Nucor had begun building its first minimill to roll sheet steel in Crawfordsville, Indiana. Nucor estimated that for an investment of $260 million it could sell 800,000 tons of steel annually at a price of $350 per ton. The cash cost to produce a ton of sheet steel in the Crawfordsville mill would be $270. When the timing of cash flows was taken into account, the internal rate of return to Nucor on this investment was over 20%—substantially higher than Nucor’s weighted average cost of capital.

Incumbent USX recognized that the minimills constituted a grave threat. Using a new technology called continuous strip production, Nucor had now entered the sheet steel market, albeit with an inferior-quality product, at a significantly lower cost per ton. And Nucor’s track record of vigilant improvement meant that the quality of its sheet steel would improve with production experience. Despite this understanding, USX engineers did not even consider building a greenfield minimill like the one Nucor built. The reason? It seemed more profitable to leverage the old technology than to create the new. USX’s existing mills, which used traditional technology, had 30% excess capacity, and the marginal cash cost of producing an extra ton of steel by leveraging that excess capacity was less than $50 per ton. When USX’s financial analysts contrasted the marginal cash flow of $300 ($350 revenue minus the $50 marginal cost) with the average cash flow of $80 per ton in a greenfield mill, investment in a new low-cost minimill made no sense. What’s more, USX’s plants were depreciated, so the marginal cash flow of $300 on a low asset base looked very attractive.
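The per-ton cash economics the analysts faced can be laid out side by side. This is only an illustrative sketch built from the figures cited above, not USX’s actual model.

```python
# The per-ton cash economics cited above, laid out side by side.
price = 350            # revenue per ton of sheet steel
greenfield_cost = 270  # full cash cost per ton in a new continuous strip minimill
marginal_cost = 50     # marginal cash cost per ton using USX's existing excess capacity

greenfield_margin = price - greenfield_cost  # $80 per ton: average cash flow in a new mill
legacy_margin = price - marginal_cost        # $300 per ton when margining on sunk costs

print(greenfield_margin, legacy_margin)      # 80 vs. 300
# Ton by ton, leveraging the depreciated plant looks nearly four times as attractive,
# which is why building the new capability never made it onto USX's menu.
```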

And therein lies the rub. Nucor, the attacker, had no fixed or sunk cost investments on which to do a marginal cost calculation. To Nucor, the full cost was the marginal cost. Crawfordsville was the only choice on its menu—and because the IRR was attractive, the decision was simple. USX, in contrast, had two choices on its menu: It could build a greenfield plant like Nucor’s with a lower average cost per ton or it could utilize more fully its existing facility.

So what happened? Nucor has continued to improve its process, move upmarket, and gain market share with more efficient continuous strip production capabilities, while USX has relied on the capabilities that had been built to succeed in the past. USX’s strategy to maximize marginal profit, in other words, caused the company not to minimize long-term average costs. As a result, the company is locked into an escalating cycle of commitment to a failing strategy.

The attractiveness of any investment can be completely assessed only when it is compared with the attractiveness of the right alternatives on a menu of investments. When a company is looking at adding capacity that is identical to existing capacity, it makes sense to compare the marginal cost of leveraging the old with the full cost of creating the new. But when new technologies or capabilities are required for future competitiveness, margining on the past will send you down the wrong path. The argument that investment decisions should be based on marginal costs is always correct. But when creating new capabilities is the issue, the relevant marginal cost is actually the full cost of creating the new.

When we look at fixed and sunk costs from this perspective, several anomalies we have observed in our studies of innovation are explained. Executives in established companies bemoan how expensive it is to build new brands and develop new sales and distribution channels—so they seek instead to leverage their existing brands and structures. Entrants, in contrast, simply create new ones. The problem for the incumbent isn’t that the challenger can outspend it; it’s that the challenger is spared the dilemma of having to choose between full-cost and marginal-cost options. We have repeatedly observed leading, established companies misapply fixed-and-sunk-cost doctrine and rely on assets and capabilities that were forged in the past to succeed in the future. In doing so, they fail to make the same investments that entrants and attackers find to be profitable.

A related misused financial practice that biases managers against investment in needed future capabilities is that of using a capital asset’s estimated usable lifetime as the period over which it should be depreciated. This causes problems when the asset’s usable lifetime is longer than its competitive lifetime. Managers who depreciate assets according to the more gradual schedule of usable life often face massive write-offs when those assets become competitively obsolete and need to be replaced with newer-technology assets. This was the situation confronting the integrated steelmakers. When building new capabilities entails writing off the old, incumbents face a hit to quarterly earnings that disruptive entrants to the industry do not. Knowing that the equity markets will punish them for a write-off, managers may stall in adopting new technology.

This may be part of the reason for the dramatic increase in private equity buyouts over the past decade and the recent surge of interest in technology-oriented industries. As disruptions continue to shorten the competitive lifetime of major investments made only three to five years ago, more companies find themselves needing to take asset write-downs or to significantly restructure their business models. These are wrenching changes that are often made more easily and comfortably outside the glare of the public markets.

What’s the solution to this dilemma? Michael Mauboussin at Legg Mason Capital Management suggests it is to value strategies, not projects. When an attacker is gaining ground, executives at the incumbent companies need to do their investment analyses in the same way the attackers do—by focusing on the strategies that will ensure long-term competitiveness. This is the only way they can see the world as the attackers see it and the only way they can predict the consequences of not investing.

No manager would consciously decide to destroy a company by leveraging the competencies of the past while ignoring those required for the future. Yet this is precisely what many of them do. They do it because strategy and finance were taught as separate topics in business school. Their professors of financial modeling alluded to the importance of strategy, and their strategy professors occasionally referred to value creation, but little time was spent on a thoughtful integration of the two. This bifurcation persists in most companies, where responsibilities for strategy and finance reside in the realms of different vice presidents. Because a firm’s actual strategy is defined by the stream of projects in which it does or doesn’t invest, finance and strategy need to be studied and practiced in an integrated way.

Focusing Myopically on Earnings per Share

A third financial paradigm that leads established companies to underinvest in innovation is the emphasis on earnings per share as the primary driver of share price and hence of shareholder value creation. Managers are under so much pressure, from various directions, to focus on short-term stock performance that they pay less attention to the company’s long-term health than they might—to the point where they’re reluctant to invest in innovations that don’t pay off immediately.

Where’s the pressure coming from? To answer that question, we need to look briefly at the principal-agent theory—the doctrine that the interests of shareholders (principals) aren’t aligned with those of managers (agents). Without powerful financial incentives to focus the interests of principals and agents on maximizing shareholder value, the thinking goes, agents will pursue other agendas—and in the process, may neglect to pay enough attention to efficiencies or squander capital investments on pet projects—at the expense of profits that ought to accrue to the principals.

That conflict of incentives has been taught so aggressively that the compensation of most senior executives in publicly traded companies is now heavily weighted away from salaries and toward packages that reward improvements in share price. That in turn has led to an almost singular focus on earnings per share and EPS growth as the metric for corporate performance. While we all recognize the importance of other indicators such as market position, brands, intellectual capital, and long-term competitiveness, the bias is toward using a simple quantitative indicator that is easily compared period to period and across companies. And because EPS growth is an important driver of near-term share price improvement, managers are biased against investments that will compromise near-term EPS. Many decide instead to use the excess cash on the balance sheet to buy back the company’s stock under the guise of “returning money to shareholders.” But although contracting the number of shares pumps up earnings per share, sometimes quite dramatically, it does nothing to enhance the underlying value of the enterprise and may even damage it by restricting the flow of cash available for investment in potentially disruptive products and business models. Indeed, some have fingered share-price-based incentive compensation packages as a key driver of the share price manipulation that captured so many business headlines in the early 2000s.
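A small hypothetical example shows why buybacks flatter EPS without creating value. The company, its income, share count, share price, and buyback size below are all invented.

```python
# Hypothetical illustration of how a buyback lifts EPS without changing earnings.
net_income = 100.0     # $ millions, unchanged by the buyback
shares_before = 50.0   # millions of shares outstanding
share_price = 40.0     # assumed market price per share
buyback_spend = 200.0  # $ millions of balance-sheet cash used to repurchase shares

shares_after = shares_before - buyback_spend / share_price  # 5 million shares retired

eps_before = net_income / shares_before  # $2.00
eps_after = net_income / shares_after    # about $2.22, an 11% "improvement"
print(round(eps_before, 2), round(eps_after, 2))
# EPS rises, yet the enterprise is no more valuable, and $200 million is no
# longer available to fund potentially disruptive products or business models.
```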

The myopic focus on EPS is not just about the money. CEOs and corporate managers who are more concerned with their reputations than with amassing more wealth also focus on stock price and short-term performance measures such as quarterly earnings. They know that, to a large extent, others’ perception of their success is tied up in those numbers, which creates a self-reinforcing cycle of obsession. That cycle is amplified when there is an “earnings surprise.” Equity prices over the short term respond positively to upside earnings surprises (and negatively to downside surprises), so investors have no incentive to look at rational measures of long-term performance. To the contrary, they are rewarded for going with the market’s short-term model.

The active leveraged buyout market has further reinforced the focus on EPS. Companies that are viewed as having failed to maximize value, as evidenced by a lagging share price, are vulnerable to overtures from outsiders, including corporate raiders or hedge funds that seek to increase their near-term stock price by putting a company into play or by replacing the CEO. Thus, while the past two decades have witnessed a dramatic increase in the proportion of CEO compensation tied to stock price—and a breathtaking increase in CEO compensation overall—they have witnessed a concomitant decrease in the average tenure of CEOs. Whether you believe that CEOs are most motivated by the carrot (major increases in compensation and wealth) or the stick (the threat of the company being sold or of being replaced), you should not be surprised to find so many CEOs focused on current earnings per share as the best predictor of stock price, sometimes to the exclusion of anything else. One study even showed that senior executives were routinely willing to sacrifice long-term shareholder value to meet earnings expectations or to smooth reported earnings.

We suspect that the principal-agent theory is misapplied. Most traditional principals—by which we mean shareholders—don’t themselves have incentives to watch out for the long-term health of a company. Over 90% of the shares of publicly traded companies in the United States are held in the portfolios of mutual funds, pension funds, and hedge funds. The average holding period for stocks in these portfolios is less than 10 months—leading us to prefer the term “share owner” as a more accurate description than “shareholder.” As for agents, we believe that most executives work tirelessly, throwing their hearts and minds into their jobs, not because they are paid an incentive to do so but because they love what they do. Tying executive compensation to stock prices, therefore, does not affect the intensity or energy or intelligence with which executives perform. But it does direct their efforts toward activities whose impact can be felt within the holding horizon of the typical share owner and within the measurement horizon of the incentive—both of which are less than one year.

Ironically, most so-called principals today are themselves agents—agents of other people’s mutual funds, investment portfolios, endowments, and retirement programs. For these agents, the enterprise in which they are investing has no inherent interest or value beyond providing a platform for improving the short-term financial metric by which their fund’s performance is measured and their own compensation is determined. And, in a final grand but sad irony, the real principals (the people who put their money into mutual funds and pension plans, sometimes through yet another layer of agents) are frequently the very individuals whose long-term employment is jeopardized when the focus on short-term EPS acts to restrict investments in innovative growth opportunities. We suggest that the principal-agent theory is obsolete in this context. What we really have is an agent-agent problem, where the desires and goals of the agent for the share owners compete with the desires and goals of the agents running the company. The incentives are still misaligned, but managers should not capitulate on the basis of an obsolete paradigm.

Processes That Support (or Sabotage) Innovation

As we have seen, managers in established corporations use analytical methods that make innovation investments extremely difficult to justify. As it happens, the most common system for green-lighting investment projects only reinforces the flaws inherent in the tools and dogmas discussed earlier.

Stage-gate innovation

Most established companies start by considering a broad range of possible innovations; they winnow out the less viable ideas, step by step, until only the most promising ones remain. Most such processes include three stages: feasibility, development, and launch. The stages are separated by stage gates: review meetings at which project teams report to senior managers what they’ve accomplished. On the basis of this progress and the project’s potential, the gatekeepers approve the passage of the initiative into the next phase, return it to the previous stage for more work, or kill it.

Many marketers and engineers regard the stage-gate development process with disdain. Why? Because the key decision criteria at each gate are the size of projected revenues and profits from the product and the associated risks. Revenues from products that incrementally improve upon those the company is currently selling can be credibly quantified. But proposals to create growth by exploiting potentially disruptive technologies, products, or business models can’t be bolstered by hard numbers. Their markets are initially small, and substantial revenues generally don’t materialize for several years. When these projects are pitted against incremental sustaining innovations in the battle for funding, the incremental ones sail through while the seemingly riskier ones get delayed or die.

The process itself has two serious drawbacks. First, project teams generally know how good the projections (such as NPV) need to look in order to win funding, and it takes only nanoseconds to tweak an assumption and run another full scenario to get a faltering project over the hurdle rate. If, as is often the case, there are eight to 10 assumptions underpinning the financial model, changing only a few of them by a mere 2% or 3% each may do the trick. It is then difficult for the senior managers who sit as gatekeepers to even discern which are the salient assumptions, let alone judge whether they are realistic.
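A hypothetical sketch shows how little tweaking it takes. In the toy model below (all parameters invented), nudging three assumptions by 2% to 3% each moves the project from just below the hurdle to above it.

```python
# Hypothetical sketch of how small tweaks clear the hurdle. All parameters are invented.

def project_npv(units, price, unit_cost, r=0.12, years=5, upfront=30.0):
    """NPV ($ millions) of a toy single-product model with constant annual volume."""
    annual_cash = units * (price - unit_cost)
    return sum(annual_cash / (1 + r) ** y for y in range(1, years + 1)) - upfront

# Base case: the project falls just short of funding.
print(round(project_npv(units=1.00, price=20.0, unit_cost=12.0), 2))  # slightly negative

# Nudge three of the underlying assumptions by 2% to 3% each...
print(round(project_npv(units=1.03, price=20.4, unit_cost=11.7), 2))  # now positive: it clears the gate
```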

The second drawback is that the stage-gate system assumes that the proposed strategy is the right strategy. Once an innovation has been approved, developed, and launched, all that remains is skillful execution. If, after launch, a product falls seriously short of the projections (and 75% of them do), it is canceled. The problem is that, except in the case of incremental innovations, the right strategy—especially which job the customer wants done—cannot be completely known in advance. It must emerge and then be refined.

The stage-gate system is not suited to the task of assessing innovations whose purpose is to build new growth businesses, but most companies continue to follow it simply because they see no alternative.

Discovery-driven planning

Happily, though, there are alternative systems specifically designed to support intelligent investments in future growth. One such process, which Rita Gunther McGrath and Ian MacMillan call discovery-driven planning, has the potential to greatly improve the success rate. Discovery-driven planning essentially reverses the sequence of some of the steps in the stage-gate process. Its logic is elegantly simple. If the project teams all know how good the numbers need to look in order to win funding, why go through the charade of making and revising assumptions in order to fabricate an acceptable set of numbers? Why not just put the minimally acceptable revenue, income, and cash flow statement as the standard first page of the gate documents? The second page can then raise the critical issues: “Okay. So we all know this is how good the numbers need to look. What set of assumptions must prove true in order for these numbers to materialize?” The project team creates from that analysis an assumptions checklist—a list of things that need to prove true for the project to succeed. The items on the checklist are rank-ordered, with the deal killers and the assumptions that can be tested with little expense toward the top. McGrath and MacMillan call this a “reverse income statement.”
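What follows is only a rough sketch, with invented figures and assumptions, of how a reverse income statement and a rank-ordered assumptions checklist might be laid out; it is not McGrath and MacMillan’s own template.

```python
# A rough, invented sketch of a reverse income statement and assumptions checklist.
# Figures, assumptions, and test costs are hypothetical.

required_income = 10.0   # $ millions: the minimally acceptable profit from page one
required_margin = 0.20   # margin the business is expected to sustain
implied_revenue = required_income / required_margin
print(f"Revenue the assumptions must deliver: ${implied_revenue:.0f}M")

# (assumption, deal_killer?, cost to test in $ thousands)
assumptions = [
    ("Target customers will switch for a 15% lower total cost of ownership", True, 25),
    ("Channel partners will carry the product at a 30% gross margin", True, 40),
    ("Unit cost falls below $90 at a volume of 100,000 units per year", False, 150),
    ("Regulatory approval is obtained within 12 months", False, 60),
]

# Rank-order the checklist: deal killers and cheap-to-test items float to the top,
# so the earliest spending buys the most learning per dollar.
for text, killer, cost in sorted(assumptions, key=lambda a: (not a[1], a[2])):
    tag = "DEAL KILLER" if killer else "           "
    print(f"{tag}  test for ${cost}K: {text}")
```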

When a project enters a new stage, the assumptions checklist is used as the basis of the project plan for that stage. This is not a plan to execute, however. It is a plan to learn—to test as quickly and at as low a cost as possible whether the assumptions upon which success is predicated are actually valid. If a critical assumption proves not to be valid, the project team must revise its strategy until the assumptions upon which it is built are all plausible. If no set of plausible assumptions will support the case for success, the project is killed.

Traditional stage-gate planning obfuscates the assumptions and shines the light on the financial projections. But there is no need to focus the analytical spotlight on the numbers, because the desirability of attractive numbers has never been the question. Discovery-driven planning shines a spotlight on the place where senior management needs illumination—the assumptions that constitute the key uncertainties. More often than not, failure in innovation is rooted in not having asked an important question, rather than in having arrived at an incorrect answer.

Today, processes like discovery-driven planning are more commonly used in entrepreneurial settings than in the large corporations that desperately need them. We hope that by recounting the strengths of one such system we’ll persuade established corporations to reassess how they make decisions about investment projects.

_____________________

We keep rediscovering that the root reason for established companies’ failure to innovate is that managers don’t have good tools to help them understand markets, build brands, find customers, select employees, organize teams, and develop strategy. Some of the tools typically used for financial analysis and decision making about investments distort the value, importance, and likelihood of success of investments in innovation. There’s a better way for management teams to grow their companies. But they will need the courage to challenge some of the paradigms of financial analysis and the willingness to develop alternative methodologies.
