It’s often said that insanity is doing the same thing over and over again but expecting different results. By that definition, I have seen plenty of it in my four decades as a strategy adviser.
When executives and managers find that a given framework, general practice, theory, or way of thinking—what I will call a “model” for short—doesn’t lead to the desired outcome, they almost automatically assume that the model in question wasn’t applied rigorously enough. The prescription, therefore, is to apply the model again, more vigorously. And when that produces the same unsatisfactory result, the prescription is to try even harder. If focusing on maximizing shareholder value doesn’t maximize shareholder value, then focus more singularly on shareholder value. If making execution a priority doesn’t result in better execution, make execution still more of a priority. If your culture doesn’t change in the direction you want, then mandate culture change even more aggressively.
Existing models are extraordinarily persistent in the face of ineffectiveness, and that is because our use of models to organize our thinking and action is so automatic. As MIT Sloan system dynamics professor John Sterman points out, human beings don’t consciously make a choice of whether to model; “it is only a question of which model.” On that “which” front, we prefer to apply a known and accepted way of thinking about a problem at hand because we know that thinking from first principles, as we sometimes must do when we encounter an unprecedented situation for which we have no model, is arduous, time-consuming, and downright scary. And once we have been confronted by that novel situation, next time around we’ll almost certainly apply some form of the model we eventually figured out. We favor using existing available models because it’s easier and quicker. This inclination is reinforced over and over in our formal training. From the very start, the education system teaches us models—how to multiply, how to structure a paragraph, how to categorize species—and gets us to practice using them over and over until each becomes second nature.
Business education is no different. It teaches a vast array of models—Five Forces, CAPM, the 4 Ps, EOQ, Black-Scholes, GAAP, WACC, to name just a few. Over time, competing models battle for dominance, and as happens in nature with species, there is typically convergence on a dominant design in each domain of management. The winners tend to become received business wisdom. These winners get used over and over again, becoming the default framework in the contexts for which they were designed. It should come as no surprise, therefore, that when one of these models doesn’t seem to work, the manager in question won’t reject the model but will instead assume personal responsibility for failing to correctly apply it. It’s extremely difficult—and socially risky—to question an established model that many people believe and to start building a new model from scratch.
That questioning and building has become my job, though it took me a while, through the process of working with my clients, to figure out that it was indeed my job. Executives, mainly CEOs, hire me to help them improve the performance of their companies. That usually means that there is something that frustrates or worries them in some way—something that isn’t working as well as they wish, or they wouldn’t have hired me in the first place. To help them, I need to diagnose why the results aren’t what they want. It has become clear to me over the years that in nearly every case, the poor results weren’t down to their not working diligently enough in pursuit of their goals; it was because the model that guided their actions wasn’t up to the task.
In one classic example, a client hired me to figure out why its R&D program was producing smaller and smaller wins even though the company had invested ever more time and energy in screening R&D projects through a rigorous gating procedure that weeded out the projects that showed less promise. Despite all that rigor, the company hadn’t had a real breakthrough product in several years. What quickly became clear to me was that the model implicitly guiding its actions was that early screening based on the rigorous analysis of available market data would increase R&D productivity by eliminating unlikely prospects, thereby freeing up time and resources for the more promising prospects.
On the surface, that model made sense. But when I looked closely at the process, I realized that the screening methodology the client used involved projecting future sales based on currently existing data. This meant that for innovations that were minor variations on the status quo, relatively compelling data tended to be available, and these projects consistently made it through the various gates. For more breakthrough innovations, however, there just wasn’t good data available (because the ideas were new), and hence, projections of huge future sales tended to be dismissed as speculative. In other words, the seemingly sensible model was logically flawed: it was predicated on the availability of good market data, but existing market data is not likely to be relevant for genuinely breakthrough innovations.
So was there a different model the company could use instead? There was, and it is based on American pragmatist philosopher Charles Sanders Peirce’s observation that no new idea in the history of the world has been proven in advance analytically. This means that if you insist on rigorous proof of the merits of an idea during its development, you will kill any truly breakthrough idea, because there can be no proof of its breakthrough characteristics in advance. If you are going to screen innovation projects, therefore, a better model is one that has you assess them on the strength of their logic—the theory of why the idea is a good one—not on the strength of the existing data. Then, as you get further into each project that passes the logic test, you need to look for ways to create data that enables you to test and adjust—or perhaps kill—the idea as you develop it.
In the case of my client with the R&D problem—and thousands of others like it—applying the model more diligently wasn’t the answer. Solving the problem required a new way to think. It required a different model. That became the heart of my work. Rather than accept a client’s existing model, I would step back to ask what it was about the model that caused it to fail to meet the needs of the problem it was designed to address. And, more importantly, was there a different, more powerful way to think about the problem?
Looking back over my career now, I realize that I have always been fascinated by models because of the degree to which they shape everything we do. From elementary school onward throughout my formal education, I probed the models that my teachers and professors taught me. How did they know that the world worked that way? Were they sure? Did it work in all cases? Asking these questions was how I learned and how I got to what I thought was a better answer. And while I am sure that a lot of my teachers, bosses, and clients have found my constant questioning annoying, a fair few have also found the questions interesting and have acted on the answers we came up with together. And that brings me to this book.
Whenever I find that one of my alternative models is helpful across multiple clients for addressing a given class of problems, I write about it to share the advice more widely. My favorite place to do so has been Harvard Business Review (HBR) with my favorite editorial partner, senior editor David Champion, with whom I have written twenty HBR articles since our first piece together in 2010.
Not all the articles with David take on a dominant model that is not producing the outcomes desired and provide a superior alternative. But at one point during our collaborations, David noted that a goodly number of them did, and broached the idea of doing a whole book in that vein. This book is the fruit of that conversation. Each of the fourteen self-contained chapters compares a dominant but flawed model to an alternative that I argue is superior.
I am, however, not so arrogant as to claim that my alternative is the right model or a perfect model. I come from the Karl Popper/Imre Lakatos school of falsificationism. Like them, I don’t believe there are right answers or wrong answers, just better ones and worse ones. One should always use the best model available, but watch closely to see whether it produces the outcomes that it promised. If it does, keep using it. If it doesn’t, then you should work on creating a better model—one that produces results more in keeping with your goals. But be assured that in due course your new model, too, will be found wanting and will be replaced by a better model still.
I’m aware that many managers and executives are trained as scientists, and when you’ve been trained that way, you may well think that there is, indeed, a right answer or model to apply in any given situation. If that’s what you think, though, I should remind you that Sir Isaac Newton’s models of physics were widely taught as absolutely right for over a century, until such time as the world figured out, thanks to Albert Einstein, that they weren’t exactly right but just mainly right. I’m not promising to provide fourteen correct models in this book. I’m proposing, rather, that my fourteen new or different models will provide a better likelihood of getting you an outcome you want than the models they replace. And I would welcome the next thinker who will improve on each of my models.
Finally, I want to point out that across the fourteen chapters, you will find that I make disproportionate use of Procter & Gamble (P&G) as an example and frequently mention its former CEO A.G. Lafley. The reason for this is that I have had a uniquely long and productive relationship with P&G, having worked nearly continuously as an adviser to the company since 1986. I have had the pleasure of advising a number of P&G CEOs in that time, from the late John Smale in the late 1980s to the just-retired David Taylor, but my longest and deepest single relationship was with A.G. Lafley, who served as CEO for thirteen years across two stints. We were thinking partners to the extent that two of the fourteen chapters are based on HBR articles that we coauthored, and, of course, we coauthored the book Playing to Win.
As a consequence of that long and deep relationship with P&G, I had an up-close vantage point on many situations on which I worked, and which provide great illustrations of the concepts in the book. Because I know the circumstances and facts of these cases, I would rather use them than second- or thirdhand stories. On top of that, P&G has the advantage of being an extremely well-known consumer products company, to which readers will perhaps more readily relate than they might to an industrial services company they may never have heard of. But I am very well aware that many other companies illustrate what is meritorious in business just as well as P&G does.
By design, the fourteen chapters in this book don’t build on one another in a way that absolutely requires them to be read in order. They can be read in order of interest or left to read until the situation described in the chapter arises. In that sense, you can treat this as a management handbook. That said, I am an academic and a consultant, and both professions have a strong interest in categorizing ideas, and as I was putting the chapters together, I mentally grouped them into four general buckets, which helped me to figure out the order I wanted to present them in. So here goes …
My first bucket deals with context or, maybe, the framework in which most corporations operate. Three topics seemed to me to belong to this group and are discussed in part 1:
My next bucket focuses on how managers within a corporation make decisions. Two topics seemed to belong in this bucket, both dealing with the act of making choices within the corporation, discussed in part 2:
Having made their key choices, managers have to figure out how to deliver on those choices, so my next bucket is about structuring work, discussed in part 3. Three topics seemed to belong here:
Having structured how people work, I went deeper into a number of key activities in which most units in a given business engage. This category makes up the six remaining chapters of the book, in part 4:
The fourteen dominant models are in place not because they are stupid. All of them make a lot of sense. So, I don’t believe that these one-sentence descriptions of the alternative models will convince you to jettison the dominant model and adopt my alternative suggestion. But my hope is that you will be intrigued enough to read the whole chapter for each and will be convinced to at least experiment with using the alternative model. If you do, I am confident you will become a still more effective executive, and, like that of my hero Peter Drucker, my primary writing goal is just that: to help executives increase their effectiveness.