1. The Power of M3 and the Need to Understand Mistakes

“Anything worth doing is worth doing right.”

—My dad and others of his era

Doing things right in business has gotten a lot of press in recent years. We seem to have finally discovered that just having ideas is not enough. Results are what really matter, and results come from both ideas and execution, but the biggest enemy of great execution is mistakes.

I learned many things while serving as an officer aboard U.S. Navy nuclear submarines in the late 1960s, but one thing you heard from the beginning was the saying, “There’s no partial credit in the Fleet.” You win or lose in battle—there is no in-between—a reality that is especially stark in the unforgiving environment hundreds of feet below the surface of the ocean and thousands of miles from home.

The problem with mistakes is that they creep up on you—individuals do not get up in the morning and say, “Boy, this would be a great day to make some mistakes.” They just find themselves in a place they do not want to be, fighting to survive a crisis and, if they do not survive the crisis, wondering how it all happened.

There are places, like the world of sports, where your mistakes are very visible. For baseball fans, and especially for Boston Red Sox fans, the seventh game of the 2003 American League Championship Series was a lesson in making mistakes.

Boston was ahead in the eighth inning with a two-run lead. The pitcher, Pedro Martinez, had pitched a very good game, and everyone thought this was the year that the “Curse of the Bambino”* would be broken. Martinez had thrown nearly 120 pitches (a lot) and was no longer pitching perfectly, resulting in runners at second and third. The manager, Grady Little, went to the mound and decided to leave his star pitcher in the game. Fans in the ballpark and watching on TV all over the world were saying, “This guy looks tired—are you crazy?” It was very late in the game, the tying runs were on base, the game was on the line, the chance to go to the World Series was on the line, Martinez was looking weak, and Little decided to stick with the plan.

* Boston fans believe Boston has not won a World Series since 1918 because the legendary Babe Ruth was sold to the New York Yankees.

The rest is history and is now part of the lore of “The Curse.” A double by Jorge Posada drove in two runs to tie the game, which went to extra innings. The Yankees won in the eleventh inning. A series of small mistakes built up to cause a disaster (for Boston fans). There were chances to break the chain of mistakes, but Martinez was not able to do so, and Little, uncertain about the alternative path (replacing the pitcher), was unwilling to take any action, sticking with what most people watching the game considered to be a high-risk strategy.

There’s no partial credit in the Fleet. There’s no partial credit in championship sports—you win or lose. You may not die physically from a sports mistake, but your career might, as Grady Little found when he was fired within a week of his team’s loss.

But isn’t this true in business as well? Many people are uncomfortable with the stark reality of winning and losing. My wife always roots for the underdog in the World Series, the Super Bowl, or the Academy Awards, but she has learned how to win in local politics and business. Especially in the United States, we would like to believe there “is enough to go around,” whether it’s food or market share. The reality in a globally competitive world is different, however—win or lose. Deliver value or be shunned. Grow or die.

We learned as kids to compete for grades, approval, awards, a spot in the school play, entrance to college, or a place on a team or in a club. As individuals, we compete for jobs. When there are enough, we compete for the best jobs. If there are not enough, we compete for any job. In groups, we compete as teams for causes or recognition. What makes the difference between winners and losers on a personal basis? Sometimes it is raw intelligence. But often it is mistakes: in the choices we have made along the way, in how we present ourselves, or in the way we view the world. We often rationalize personal failures by saying, “Everybody makes mistakes.” While this may be true, it may also be a point of differentiation that changes lives—or businesses.

Most industries in developed countries have consolidated or will soon. Developing countries are becoming more competitive. In most industries, one or two top players emerge who will do better than others, at least for some period. True differentiation is hard to find. The top players look a lot like each other, and the real difference boils down to the ability to execute. Execution, according to one recent book on the subject, boils down to leadership, culture, and people.1

In my experience, the top players know that execution is important and are working hard on leadership, culture, and people. But some don’t get it right, and then a winner and a loser emerge. Why don’t they get it right? Mistakes—big ones, medium ones, and small ones.

Winners, whether in business, sports, or geopolitics, learn that getting near the top is really tough, but once you get near the top, mistakes are usually the difference between base camp and the peak. Winners learn this quickly and learn how to avoid mistakes—at least the big ones. Losers do not learn this as quickly, and in some cases, they make the same mistakes over and over.

The mistakes and mistake chains or sequences that we will discuss in subsequent chapters are primarily human mistakes. There are often mechanical failures, environmental circumstances (such as weather), technology changes, competitive moves, or other initiating actions that create a situation that requires response. It is in these situations that the ability of individuals to make decisions and cope with the circumstances is tested, and it is where mistakes do or do not occur.

Even the situations that are the initiating events may have their origin in human mistakes. For example, not all mechanical failures of equipment are random. Some are the result of poor design, choice of materials, or manufacturing quality, each of which was likely a human mistake. Some actions by competitors occur because you allow them an opening or indirectly give them a clue as to how to compete more effectively.

Business books have enjoyed great popularity in the last 20 years. One of the biggest in 1994 was Built to Last2 in which the authors identified companies they considered “visionary” and used words like “icon” to describe these leaders. Just 10 years ago, among the 18 companies they classified as visionary were Boeing, Ford, HP, Merck, Motorola, Sony, and Walt Disney. Each of these has fallen on harder times since and, while highly respected for past contributions, is seeing questions raised about its future. Are these venerable names in American and global business just going through a rough patch, or have their positions changed in ways that will prevent them from ever achieving their former prominence in their industries? I have opinions about each, but I don’t have a crystal ball for the future. What I can say definitively is that a number of these companies have made serious mistakes or a chain of mistakes that accelerated their fall from the pedestal of business admiration.

In many ways, these are the most challenging times for business in a generation. We have all been awakened to the need to look beyond the comfort of our day-to-day existence, to the need to synthesize the implications of external events, including heightened competition. That, in turn, leads to the need to focus not just on execution but on flawless execution. There is no partial credit in the Fleet.

Patterns of Mistakes and Exponential Growth

At some point in your secondary education, you learned about something called an “exponential.” You may have thought this was an abstract mathematical concept, but the reality is that it has all sorts of real-world implications. At the simplest level, the most important thing to understand about anything that involves exponential movement is that it grows (or declines) really fast. Whether you are talking about an ant colony multiplying, the magic of compound interest, or the increase in the number of components possible per integrated circuit (Moore’s Law), changes happen very rapidly and in a nonlinear fashion.
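
To make the idea concrete, here is the standard compound-growth relationship in a small worked form (ordinary textbook arithmetic, not a calculation taken from this chapter):

```latex
% Compound growth: a principal P growing at rate r per period, for n periods
A = P\,(1 + r)^{n}
% Doubling time: solve P(1+r)^n = 2P for n
n_{\text{double}} = \frac{\ln 2}{\ln(1 + r)}
                  \approx \frac{0.693}{\ln(1.07)}
                  \approx 10.2 \text{ years at } r = 7\% \text{ per year}
```

A sum growing at 7 percent a year doubles in roughly a decade; the same curve, read as damage rather than interest, is what makes an unchecked crisis so punishing.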

Exponential growth in severity of damage often describes a business crisis. The damage may take the form of lost customers, lost sales, higher costs, liability costs, damaged employee morale, or destroyed physical assets. If we make a mistake in business and brush it off, it probably was not too severe. If we start making estimates of what the cost was or will be, it was probably quite damaging financially, if not in other ways.

Whether the damage is physical, financial, or strategic, phrases such as “things went to hell in a handbasket” do not come close to describing what occurred. This can happen in any business. Geography, culture, and business size are irrelevant. Mistakes happen, and businesses that were otherwise successful suddenly suffer a change of direction. This is rarely simply fate. In business, most things happen for a reason, not because a deity willed them to happen—and, aside from natural disasters, when bad things happen, people and their flawed judgment are usually involved.

The objective is to learn to recognize the patterns of mistakes that precede most business disasters and take actions to eliminate the threat or to reduce the incident to something that does not require full-scale crisis management. These patterns of mistakes and potential responses are surprisingly similar across physical and business disasters and across industries. This should make it easier to learn how to deal with dangerous situations, but we rarely take the time to see the parallels in what appear to be unrelated experiences. If we did take the time, it might help us learn and change our behavior. We can learn to see patterns, and patterns can help us anticipate, prevent, minimize, or control the potential exponential downside for most crisis, accident, or disaster scenarios.

Mistakes in business are pervasive, but we do not always witness them unfolding as visibly as we do in physical disasters. We see reports of a chemical plant disaster or an airplane crash on the news within minutes of its occurrence. For business disasters, we do not get blow-by-blow accounts of the decision-making process as we did during Three Mile Island. No, business mistakes, except for the very largest, are hidden from view. They are hidden for many reasons including protection of competitive information, protection of employees and management, potential legal exposure in a variety of ways, and finally, the desire to not upset “the street.” Additionally, some “mistakes” are quite clear very quickly, but some may not be seen as mistakes in the eyes of all who examine a situation at a given point in time.

Strategic mistakes are rarely black and white until well after the fact, so there are times when the time frame is relevant for classifying an action or lack of action as a mistake in business. It is also common for some companies to have such a long string of explanations regarding one-time charges that it is hard to figure out if they are making mistakes or have just been hit by a string of bad luck. For years, AT&T booked restructuring and other “one-time” charges, giving the appearance that they were doing well when they were not.

But this is not about how you can become a more effective analyst of a company’s mistakes from the outside. The real question is whether or not, as an insider, you are capable of recognizing that a chain of mistakes is underway and are willing to take action to prevent or mitigate damage.

The biggest reason you do not hear much about corporate mistakes, unless they are so colossal that some government entity forces an investigation, is that most companies do not put together blue-ribbon investigative committees to find causes of failures and recommend improvements. No one would accept a statement that an airliner “just crashed—we’re not sure why, but we’ll try not to do it again.” Yet in business, we see all kinds of failures that are not investigated in any serious depth unless laws were violated or people were physically injured.

Physical disasters, things like plants blowing up, are usually investigated in depth because companies have visible and usually costly incentives to understand them. Big physical events can affect public safety, insurance costs, and liabilities related to injury or death. But management mistakes that do not “hurt anyone” except perhaps shareholders, employees, and communities are rarely investigated with the same fervor as physical disasters.

While not as visible, I would argue that strategic and management blunders are likely to be more costly to a corporation and its stakeholders than almost any physical disaster. They thus deserve the same level of inquiry, learning, and improvement to avoid repetition and future damage. I have also observed that many physical disasters have root causes that are similar to management blunders. The specifics are different, but the human behaviors, biases, and blind spots are similar. For this reason, we will examine both physical and management disasters as we explore the commonality of causes and the potential for learning one from the other. The word “accident” is often used to convey the impression that an undesirable event was unavoidable. This is rarely the case. Business accidents, blunders, incidents, crises, or disasters are usually no different than a child who has an “accident” spilling grape juice on a beige carpet, which is then cleaned with an incorrect cleaning compound that leaves a permanent spot.

The damage was avoidable if we had given the child water instead, had not let him go into the carpeted room, or had put a restrictive top on the cup. Mistakes are made in not thinking through situations ahead of time, in not anticipating the possible range of consequences, or through incorrect remedial actions. Regardless of what we call such an occurrence, the idea is to focus on the occurrence and on what we can learn from the pattern of mistakes that led to it and the resulting damage, so that we can prevent similar situations or minimize damage in the future.

The concept of Managing Multiple Mistakes (M3) is based on the observation that nearly all serious accidents, whether physical or business, are the result of more than one mistake. If we do not “break the chain” of mistakes early, the damage that is done and its cost will go up exponentially, as illustrated in Figure 1.1, until the situation is irreparable.

Figure 1.1. Mistakes and costs.

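The shape of Figure 1.1 can be sketched with a toy model in which each uncorrected mistake multiplies, rather than adds to, the damage. The base cost and the multiplier below are purely illustrative assumptions, not figures from this book:

```python
# Toy model of an unbroken mistake chain: each additional uncorrected
# mistake multiplies the damage. All numbers are illustrative only.

def chain_cost(mistakes, base_cost=10_000, multiplier=4.0):
    """Rough damage estimate after a given number of uncorrected mistakes."""
    return base_cost * multiplier ** mistakes

for n in range(6):
    print(f"{n} uncorrected mistakes -> damage ~ ${chain_cost(n):,.0f}")

# 0 -> $10,000; 1 -> $40,000; 2 -> $160,000; ... 5 -> $10,240,000.
# Breaking the chain after the first or second mistake keeps the incident
# manageable; waiting until the fourth or fifth usually does not.
```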

This applies to all types of human endeavor: physical systems for transportation or manufacturing, business decisions from the front line to the boardroom, healthcare delivery, the structure or operation of our electricity grid (or lack thereof), personal relationships, and even politics.

In fact, the Watergate scandal may be one of the best illustrations of failing to manage multiple mistakes. It took some time to understand, but when it was all done, it became clear that this was a case where the initial mistake, the decision to burglarize Democratic Party offices to obtain information that was of little value, was compounded severely by subsequent attempts to cover up earlier mistakes. This is a classic case of what Joseph Grundfest, a Stanford law professor and former SEC Commissioner, calls “crimes of upholstery”—where the damage done by the cover-up was far worse than the original crime. Would Richard Nixon have finished his second term if he and his colleagues had admitted immediately that they had done something stupid and had not attempted to cover it up with unbelievable stories of accidental tape erasure and other fabrications? We will never know, but the pattern of multiple mistakes involves sins of both omission and commission in nearly all the stories you will read in this book.

Understanding accidents of one sort or another has become an organized activity of government and academic study, especially over the last century as industrial and transportation systems became more complex. Most countries have special boards to study accidents and incidents that affect public safety in transportation or potentially dangerous industries. The public visibility of investigations of one sort or another has changed with technology and society over time. Investigations of mine, maritime, and train disasters were in the public eye 50 to 100 years ago, but in the last two decades, we have heard more about nuclear power, airplane crashes, chemical plant problems, carcinogens left behind from industrial activity, failures in space, and Internet worms.

This shift in focus of the types of investigations that get public attention is not surprising given the growth and implementation of new technologies. Investigating agencies worldwide are formed and focus on improving public safety, usually with a goal of zero accidents. Yet there are some, such as Charles Perrow in his book Normal Accidents3, who believe that some level of very significant accidents is “normal” because today’s systems are so complex that, as we attempt to build in more sophisticated safeguards, we actually create new categories of accidents that were previously unanticipated. In fact, as a sociologist, he questions what our reaction will be to accidents that we realize we cannot control with better management and training.

In some ways, recent corporate disasters, especially Enron, seem to support Perrow’s hypothesis, namely that bigger, faster-acting, technology-driven businesses and systems simply have the opportunity to spawn larger disasters more rapidly. Enron was a man-made disaster, but the availability of systems for rapid and complicated energy trading, the complexity of financing vehicles, and a wide range of businesses all increased complexity beyond any individual’s ability to completely comprehend and control the business.

While Perrow’s hypothesis is understandable, there are many of us who believe that, while accidents in physical systems and businesses are inevitable, it is possible to understand them and find ways to reduce such events in both frequency and severity. In fact, even though accidents still occur, their incidence in a number of very visible areas (such as aviation) has been reduced over the years, something that we will discuss in later chapters.

Deadly Business Mistakes—Strategy, Execution, and Culture

Some years ago, Peter Drucker wrote an article4 describing “Five Deadly Business Sins” that have driven many companies into deep strategic and financial trouble. His characterization of these “sins” included:

• “Worship of high profit margins and premium pricing”

• “Mispricing a new product by charging what the market will bear”

• “Cost-driven pricing”

• “Slaughtering tomorrow’s opportunity on the altar of yesterday”

• “Feeding problems and starving opportunities”

These, and others we will discuss, are primarily examples of longer-term cultural mistakes that companies make with regularity. Damage does not occur overnight; it occurs slowly and consistently until someone or something breaks the chain and fixes the problem. Breaking the chain for these types of mistakes is difficult because the decision criteria and mindset are hard-wired into the brains of company managers and executives as a result of past successes.

As we will discuss later, the U.S. auto industry has been guilty of many of these mistakes and is trying to change, but serious remedial action was delayed for years until its market share and profitability were decimated by competition from Japan and Germany. Sometimes the initial recognition that a problem exists is the biggest hurdle.

In other cases, individual companies, such as IBM, have made one or more of these mistakes but have realized it early enough, changed, and recovered. But for every company that has detected its mistakes and taken action in time to survive, there are many more that never saw the danger that was coming until it was too late.

Strategic mistakes, particularly those affected by the organization’s culture, are among the most difficult to deal with because, at any given point in time, it may not seem like there is a huge crisis. In cultures not known for rapid change, it is too easy to feel comfortable with the way things have always been done until there is a huge crisis that wakes you up to the need for change. This is analogous to an individual’s problems with weight control. The problem does not result from a single bad decision or action but from a thousand small bad decisions over a period of time. Just as with weight control, however, if allowed to go too far, these types of business mistakes become life threatening.

Other cultures make it difficult to expose and deal with mistakes of strategy or execution even if they are detected early. Organizations that are paternalistic, hierarchical, consensual, or family-dominated all have unique characteristics that may make them inept, defensive, or slow to act on bad news. Many organizations do not even understand what their culture is, much less think about how to take advantage of its strengths and design around its weaknesses, which is necessary to avoid mistakes.

Most execution mistakes are related to operations but may have strategic implications. Execution mistakes usually revolve around tangible actions that are more visible than strategic blunders. They happen more rapidly and are usually measurable in customer dissatisfaction, lost sales, warranty returns, or other shorter-term measures. They have immediate consequences and are thus easier to see and understand.

Culture-driven mistakes, especially around strategy, are usually colossal and fairly permanent in their damage. AT&T attempted to enter the computer business by acquiring NCR—a colossal cultural mistake chain that took years to clean up and cost both companies dearly. While this was a strategic mistake, it affected operations directly, with confused product offerings, angry customers, conflicts over resource allocation, and resulting poor financial performance. It eventually resulted in spinning off NCR, which should never have been acquired in the first place.

Execution mistakes can be fatal as well but are more often just very expensive, unless they continue so long that they become cultural. There are many categories of execution mistakes, from not following procedures, as in many airline crashes, to not understanding markets well enough to bring out the right product, to bad timing with good products. The dustbin of product development is filled with things like the RCA VideoDisc. Introduced in the early 1980s, it was actually a decent product in a clumsy format that was inconvenient for the market at the time. This product was the result of a series of mistakes related to market understanding, technology, product design, and pricing.

Subsequent chapters will deal with the impact that culture can have on the likely success or failure of organizations in avoiding multiple mistakes. A common theme that runs through the cases we will explore, whether strategy or execution related, is that it usually takes three, four, or five mistakes, occurring in sequence, to create a serious failure. We will also look at the dramatic effect that organizational culture has on whether the outcome is positive or negative.

The reality is that the business world, and perhaps life in general, is more forgiving than we realize. More often than not, you have to mess up a number of times and pretty badly to get a really bad outcome.

Can Technology Change the Odds?

An important question is whether we can use technology to automatically prevent accidents in complex systems and, if so, whether these measures are a net positive force. Technology is being used for operations in more businesses every day. Common examples include automation of production processes, automation of customer service functions, call center support systems that help make operators “smarter” and more effective salespeople, and information systems that monitor key variables constantly and warn managers when limits are exceeded.
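
As a minimal sketch of the last of those examples, an information system that watches key variables and warns when limits are exceeded might look like the following. The metric names and thresholds are hypothetical, invented here for illustration:

```python
# Minimal sketch of a limit-monitoring system: compare each key variable
# against an allowed range and flag anything that falls outside it.
# Metric names and limits are hypothetical, for illustration only.

LIMITS = {
    "daily_warranty_returns": (0, 50),
    "call_center_wait_minutes": (0, 5),
    "on_time_delivery_pct": (95, 100),
}

def check_limits(readings, limits=LIMITS):
    """Return a warning for every reading outside its allowed range."""
    warnings = []
    for name, value in readings.items():
        low, high = limits[name]
        if not low <= value <= high:
            warnings.append(f"WARNING: {name}={value} outside [{low}, {high}]")
    return warnings

print(check_limits({"daily_warranty_returns": 72,
                    "call_center_wait_minutes": 3,
                    "on_time_delivery_pct": 91}))
# -> warnings for warranty returns and on-time delivery, but not wait time
```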

We want to believe that if we program operations and response, we can ensure standard quality and minimize or prevent mistakes. The reality is that business, broadly speaking, is not as far along in this regard as those businesses that must use technology to operate at all.

For example, Airbus Industrie pioneered “fly-by-wire” and first introduced it in commercial passenger aircraft with the A320. Historically, the pilot’s yoke (or stick, depending on the aircraft) was physically connected, via cables, to the ailerons and elevator, the primary control surfaces for roll and pitch of the aircraft. As airplanes grew larger and heavier, hydraulic actuator systems (something like power steering in automobiles) were connected to the cables to make it easier for the pilot to control the airplane. Even with a hydraulic system, pilots physically feel a direct relationship between the movement of their hands and the response of the aircraft.

Fly-by-wire removes the physical connection, with a joystick generating an electronic signal that is sent to actuators that drive the movement of the control surfaces. Fly-by-wire makes controlling a large passenger aircraft akin to playing a video game, literally using a joystick to control the aircraft attitude and direction. For pilots, this was a major technological leap that was not necessarily welcome since the “feel” of the aircraft is artificially induced in the stick by electronics and the response of the airplane may be limited by algorithms and parameters set in software.

For aircraft engineers, this technology simplified construction and maintenance and potentially enhanced safety. It meant that a computer could be put in the loop to limit what the pilot can command the airplane to do. This is an attractive capability, giving engineers the ability to actually limit what the airplane will do rather than just writing a manual that warns operators not to exceed certain parameters. To improve safety, Airbus aircraft with fly-by-wire are limited in ways that change under different conditions. Parameters such as angle of attack, bank angle, roll rate, and engine power, among other things, are monitored and limited by the system, no matter what the pilot does with the stick. This has the effect of making it impossible to stall* the airplane, and it reduces the number of things a pilot needs to remember to do in response to certain emergency situations.

* A “stall” in an airplane is not what the layman might think—the engine doesn’t quit. This technical term means that the aircraft wing angle of attack is so steep that the wing ceases to produce lift (because airflow is disrupted over the upper surface). This is dangerous because it can lead to uncontrolled descent, especially a spin. The angle of attack in a stall is always the same for a given aircraft, but the stall speed varies as a function of many variables including weight, density altitude, and bank angle.
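
The envelope protection described above can be thought of, in greatly simplified form, as the computer clamping whatever the pilot commands to preset limits before it reaches the control surfaces. The sketch below illustrates that idea only; the limit values and structure are hypothetical and do not represent Airbus’s actual control laws, which vary by flight phase and are far more elaborate:

```python
# Greatly simplified illustration of flight-envelope protection: clamp
# pilot commands to preset limits. Values are hypothetical and do not
# represent Airbus's actual control laws.

LIMITS = {
    "bank_angle_deg": 67.0,        # maximum commanded bank angle
    "angle_of_attack_deg": 15.0,   # kept below the stall angle of attack
    "load_factor_g": 2.5,          # maximum g the system will allow
}

def protect(command):
    """Clamp each commanded value to within +/- its envelope limit."""
    return {axis: max(-limit, min(limit, command[axis]))
            for axis, limit in LIMITS.items()}

# The pilot pulls hard into a steep bank; the system honors only part of it.
print(protect({"bank_angle_deg": 80.0,
               "angle_of_attack_deg": 18.0,
               "load_factor_g": 3.4}))
# -> {'bank_angle_deg': 67.0, 'angle_of_attack_deg': 15.0, 'load_factor_g': 2.5}
```

Boeing’s philosophy on the 777, described below, differs in that the pilot can push through such limits when he or she judges that an emergency requires it.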

Boeing held steadfastly to the mechanical control model until the 777, which is Boeing’s first fly-by-wire passenger aircraft. The reason for being late to adopt this technology was explained to me by a retired Boeing senior engineering director, “You just don’t know what’s going to happen to those electrons between the front and the back of the plane. I like a direct connection better.”

For a time, the old view seemed safer as Airbus worked out its fly-by-wire bugs in very public fashion: at an air show in France in 1988, an A320 crashed into a forest while performing a low-altitude, low-speed fly-by. There were a few other related crashes*, but in recent years the technology has been improved, proven in commercial service, and incorporated into all Airbus aircraft developed after the A320.

* Engineering bulletins issued by Airbus to operators indicated that certain aspects of the many interrelated inputs and controls were still being worked out at the time but apparently had not been applied to that specific airplane. Another A320 crash in Bangalore, India, in 1990 and a third in France in 1992 fueled discussion about whether the new technology was ready for commercial service.

When Boeing adopted fly-by-wire on the 777, it came with a big difference—the ability for the pilot to override the computer’s limits. Boeing argues that the pilot should be the ultimate judge of whether an emergency requires going beyond standard operating and safety parameters. As an example, a Boeing spokesman5 cited a 1985 China Airlines 747 incident in which the crew recovered from an out-of-control dive with a pull-up that stressed the airplane at up to 4 g’s, something that Airbus fly-by-wire would limit to 2.5 g’s, perhaps limiting its capability to recover from some unusual attitudes.

Those who advocate unlimited pilot control believe that fly-by-wire limits are akin to saving the airplane from overstress but crashing it in the process. Advocates of fully integrated system control tell the old joke that describes the computer-controlled airplane of the future as having a seat in the cockpit for a pilot and a dog. The pilot’s job is to feed the dog, and the dog’s job is to bite the pilot if he tries to touch anything.

Can technology save us from our own mistakes? Yes and no. Technology can improve the odds when we understand the range of possible actions of something like an aircraft or another technically controlled machine or system. But most businesses have many dimensions, and not all have accepted preprogrammed responses, so while technology may help, it is unlikely to stop business mistakes.

Debates about the appropriate use of technology are constant. In recent months, these have included issues such as whether the New York Stock Exchange should be replaced with an electronic exchange and whether the electric power grid in the United States can be improved with more technology. These and other examples involve complex systems of human, economic, and technical interaction with a range of parameters under normal conditions. Yet all have the potential to spiral out of control when there are multiple mistakes or unusual situations that were not anticipated and built into operating parameters and designs.

This is the conundrum that surrounds multiple mistakes. We can anticipate many, but not all, mistakes that people or systems will induce in business or the operation of complex machines. If we can anticipate mistakes, should we train people to avoid the circumstances or build technology-based systems that prevent those things from happening? If we build programmed error-control systems, will we induce more mistakes or prevent recovery from mistakes we did not anticipate?

We can use technology to improve business processes like the supply chain, but computers cannot decide how you will identify and design the new products that go into that supply chain or where and how they will be manufactured. This is where physical systems and businesses diverge. Business systems still require judgment; thus, we need to continue to refine and improve the quality of the judgment and decision-making abilities of the individuals operating businesses.

Mental Preparation, Patterns, and Warning Signs

Many of the accidents or disasters described in this book, and the mistake chains that caused them, ended badly and were unusual in that they had not been experienced previously in exactly the same form. Similar mistake chains may have occurred elsewhere, but organizations and individuals failed to see the lesson when the learning was not internal and personal. Regardless of the history, though, some organizations and individuals clearly handle unexpected challenges better than others.

In successful cases, we will see that there was some combination of luck and skill, but the most important element in handling the unexpected in business is prior mental preparation. This preparation takes the form of training, orientation, expert consultation, and communication or cultural values for guidance, but it exists in some form. The converse is true with the multiple mistake scenarios that lead to severe damage or disaster. The success factors for others simply do not exist in the unsuccessful organizations, and thus the mistake chains are not broken.

Louis Pasteur reportedly said, “Half of scientific discovery is by chance, but chance favors the prepared mind.” The power of multiple mistakes is strong, but it can be managed with preparation.

Insight #1: Mental preparation is critical because organizations and individuals are rarely good at learning by drawing parallels. They need to be taught to recognize types and patterns of mistakes and learn to extrapolate implications from other situations into their own.

In subsequent chapters, we will examine what constitutes a mistake and how in some circumstances, particularly around company strategy, it may take a long time to understand that a mistake chain is underway. We will see that managers and employees at all levels can have an impact on monitoring and understanding mistakes. Additionally, the initiative and bias for action of individuals who may not even have formal responsibility in an area is often the difference between success and failure in avoiding or minimizing damage. We will also contrast some of the most visible mistakes that companies and organizations have made with the often less visible efforts of excellent companies that never seem to find themselves in much difficulty.

Our exploration of mistakes, both in business and nonbusiness settings across industries, reveals patterns that are so repetitive that every manager should recognize them as potential red flags. All of these are behaviors or actions that each of us has encountered or observed at some point in our careers, but they continue to be catalysts for events that inflict serious damage in the form of reputation, money, management time, and other resources. Look for the following things as you read the examples we will describe, and begin to ask yourself if these catalysts are already at work in your organization:

• Failure to believe information that you do not like

• Failure to evaluate assumptions

• Success that breeds arrogance and adversely affects decision-making

• Frequent absence, failure, or misunderstanding of communications (internal and external, including with customers)

• Failure to have and/or follow standard procedures

• Cultures that suppress initiative, information, or action

• Lack of understanding and respect for the laws of economics and cycles

• Failure to evaluate past mistakes and learn from them

These patterns are more common than we might believe, but an instance or two of one or two items from this list is rarely fatal. The interesting thing is how these same behaviors come together in ways that create damaging disasters for those who do not pay enough attention to “break the chain of mistakes.”

There are warning signs that presage many of these typical mistake patterns. The following “red flags” reveal themselves in the incidents we discuss throughout the book. In most cases, observing and acting on the warnings would have broken or avoided the mistake chain and prevented significant economic and/or physical damage. As you read the stories in subsequent chapters, look for these warning signs and ask yourself whether the same ones apply in your business:

• Situations you have not seen before

• Operating experience different from that of your competitors

• Unusual or rapidly changing data (about operations or customers)

• Results off plan

• Results on plan through luck

• Constant revision of plan/budget

• Failures of control systems

• Need to retrain significant numbers of personnel because they are not performing

• Frequent operational problems that are not addressed by standard procedures

• Problems caused by communications issues

• Problems where help was available but not utilized

The occurrence of an item from this list does not in itself mean that you are about to have a disaster. But these are warning signs that further investigation may be required to ensure that you are not already in the process of starting a series of mistakes that will create a disaster for your business.
