A. Summary of Insights

Summarized here are the insights associated with the accidents, incidents, and successes examined in the book. They are offered with the caution that slavishly adhering to them in the hope of avoiding all mistakes will not necessarily lead to success, because success is different from a mere absence of failure, which by itself might lead only to mediocrity.

Despite the caution that success requires more than avoiding mistakes, these are powerful insights into the ways that many have “snatched defeat from the jaws of victory.” The patterns are similar and have shown themselves over and over again in a variety of industries, countries, and businesses.

Additionally, if we are honest, we can see similarities to the things that drive or hinder our personal success in school, careers, sports, and social relationships. Consider these guidelines for most situations, personal or business, and look for them not only individually but also in patterns that serve as warnings for analysis and action.

From Chapter 1

Insight #1: Mental preparation is critical because organizations and individuals are rarely good at learning by drawing parallels. They need to be taught to recognize types and patterns of mistakes and learn to extrapolate implications from other situations into their own.

From Chapter 2

Insight #2: “Fly the airplane.” It is easy to get distracted, and there are times when you need to have a stern talk with yourself and ask if you are spending your time on the most important things.

Insight #3: You cannot afford even a whiff of an ethical lapse. Issues of trust are serious and strategic in today’s world, largely because there have been so many ethical lapses that consumers do not trust the actions of corporate executives. The slightest sense of uncertainty or lack of openness creates suspicion that mushrooms into a lack of confidence that can cost a great deal.

Insight #4: Execution mistakes can be generated through a lack of resources or knowledge. Even a good strategy will fail without adequate resources, training, and discipline around implementation.

Insight #5: Establish and enforce standard operating procedures. Aviation knows how to do this, and complex manufacturing operations know how to do this, but management teams do not like it because they believe not everything can be made routine. There is some truth to this, but you need to look for everything that can be standardized, make the procedures known, train for them, and hold accountable those who do not follow them.

Insight #6: Make responsibilities clear. Whether it is Coca-Cola Belgium or one of the airline crashes, mistakes are more likely to be caught and stopped if you know who is responsible for what and who should be providing additional oversight and advice.

Insight #7: Seek advice and seek to understand assumptions. In a number of cases, we saw a disconnect between the views of customers and those of insiders or a lack of sensitivity to regional cultural and political views. In other cases, we saw outright disregard for data provided by others (Air Florida) or easily available data (Webvan or AmEx). Failure to seek and use advice, and direct disregard for data on customer behavior, are significant causes of mistakes of all types.

Insight #8: If something does not make sense or feels confused, STOP and figure out what’s going on. In most of the cases throughout the book, there was evidence of confusion or lack of information at some point that troubled those involved. Calling a “timeout” in one way or another to understand what is happening is a useful practice.

Insight #9: People are usually at the root of the problem. Looking at mishaps as system problems is the only way to move toward perfection. Multiple causes are far more likely than single causes, but multiple causes almost always mean some set of mistakes that were directly people related. An analysis that looks for simple answers, blaming only one cause or only a physical cause, will likely yield an inadequate understanding of the problem and lead to repetition of the problems that caused the accident. It is critical to focus on people-related issues of process, training, and knowledge-building that will allow people to think their way through when technology or process fails.

From Chapter 3

Insight #10: A significant portion of execution-related mistakes occur because criteria for measuring progress and performance have not been identified and/or communicated explicitly. This includes the need to understand not only what the measures are, but how frequently they should be checked and what the priorities and actions should be when an out-of-specification condition occurs.

Insight #11: Failure to analyze data points and ask what they mean is a major source of mistakes. This seems obvious, but we block our interest and ability to be analytic with time pressures, distractions, and cultures that are not curious. The question, “I see it, but what does it mean?” may be the most important thing you can ask to begin to break a mistake chain. The answer will not always be obvious, but starting the inquiry process is a necessity.

Insight #12: Ignoring data is dangerous—ignoring or misinterpreting customer data can be catastrophic. Intel initially ignored customer concerns, whereas in the Tylenol case J&J never lost sight of its responsibility to its customer. Coca-Cola did not adequately test the depth of its data with hard-core users of its product.

Insight #13: Across industries and situations, ineffective communications can accelerate deterioration of a mistake chain. Conversely, effective communication is one of the keys to breaking a mistake chain.

Insight #14: Spending time and money to build a culture that takes mistakes seriously may have the highest ROI of anything you can do as a manager. This is something that paid off for the airline industry in improving safety and for Intel, J&J, and Coca-Cola.

Insight #15: Look for the opportunity for an accident or even a major success to be a rallying cry for change and transformation. This is a unique opportunity that should not be ignored. This is the silver lining in an accident—your ability to identify some greater benefit that comes from the learning.

From Chapter 4

Insight #16: A very successful business can blind you to opportunity. This is because you will make comparative judgments on the basis of current business criteria that may not last, while underestimating the potential of new businesses that have not yet grown far enough to show their full potential. Being successful also raises, often inappropriately, your confidence in your own decision-making.

Insight #17: Your competitors are not who you think they are. Until recently, Xerox did not realize that the biggest threat to the copier business in smaller market segments was not Canon or Minolta but Hewlett-Packard and the laser printer that Xerox invented.

Insight #18: Sometimes a mistake is not a mistake. If a mistake is a wrong action, then we have to make some judgment about whether a strategic business decision is “right” or “wrong,” and that may not be obvious as quickly as we think. This reinforces the importance of continuing analysis of decisions after the fact and potential future scenarios.

Insight #19: Even companies that have successfully reinvented themselves have to work hard, perhaps even harder, to understand when it is time to do so again. Motorola reinvented itself when it was early to the new TV market and when it sold Quasar and committed everything to communications and semiconductors. It made a huge leap from older communications products based on single sideband technologies to cellular communications. A number of strategic blunders that no one expected from a company with that history caused it to stumble as the twentieth century closed.

Insight #20: With disruptive technology, prices usually drop and value shifts to customers. This is a normal part of the economic cycle for new technologies that you should anticipate and use proactively to advantage. Ignore this phenomenon at your peril.

Insight #21: Some changes happen without your permission. Learn to recognize the signs and get on board early.

Insight #22: Many more industries and companies will see the value continue to shift from hardware to software and services. Even companies like Motorola and Kodak, which at one time thought they were primarily manufacturers, are likely to move more deeply into services for growth.

From Chapter 5

Insight #23: Test and retest assumptions—until proven beyond a doubt. Assumptions are at the core of mistakes in physical systems and business. The problem is that we often make assumptions and draw what we think are conclusions on the basis of limited data, and then if nothing bad happens, we begin to view the assumptions as truth. Titanic was assumed to be unsinkable. Three Mile Island was assumed to be fail-safe under all conditions.

NASA assumed that since foam had been coming off the center fuel tank for more than 100 launches and had never caused any serious damage, it could not cause serious damage. Yet in the investigation of the Columbia accident, it did not take long to show that a piece of foam moving at over 500 mph relative to the shuttle wing could do enough damage to bring the shuttle down—but only if it hit in just the right place. This is the problem with assumptions; they are just that—and have limitations that we may not realize. Once we believe them, we have closed doors of understanding, killing curiosity and analysis.

Insight #24: Push or ignore engineered safety at your peril. Engineers and designers who build systems of all types build in features designed to enhance the system’s ability to perform its intended function, but they also include features to minimize the chance of damage in the event of partial or full failure. This is true for physical systems from airplanes to zoos and is also important in today’s more complicated business world, where “systems” include complex human-software-process systems in a wide range of businesses. Do airline reservation systems, manufacturing control and supply chain systems, credit card billing systems, and billing and receivable systems for most businesses have anything in common with physical disasters? Absolutely—the same opportunity to damage a business is there because of the complexity of the design and operations interface of man and machine.

Although designers try to anticipate adverse conditions that threaten the success of the system, they will not always successfully design for every circumstance, and even when they do, human intervention can often overcome the most rigorous safety design. Recognizing that built-in safety features are there for a reason should prompt an understanding of the system’s limits. Pushing such systems to their limits, or ignoring threats that test or evade safety systems, should be undertaken only with the greatest care and an understanding of the extreme risk involved. Titanic, TMI, Challenger, and Columbia all pushed engineering limits and lost.

Insight #25: Believe the data. “Believe your indications” is something all of us in the nuclear Navy learned. The failure to believe information that is staring you in the face is one of the most common causes of catastrophes. On Titanic, at TMI, and at NASA, operators had warnings of danger and did not heed them. In the case of Titanic, the warnings came well in advance and could easily have been acted on if minds had been receptive. At TMI, damage could have been minimized if warnings that were part of the recovery process had been observed. With NASA, the warnings were repeatedly offered in advance and were analytically sound, but they were dismissed just as routinely as the captain of Titanic dismissed the warnings he received. Data were also available in all the business situations discussed in the book and were ignored, just as they had been in the disaster situations.

Insight #26: Use available resources. Captain Smith of Titanic ignored available resources; the one exception was consulting the ship’s designer after the collision, who told him exactly how long it would take the ship to go down. Only when it was clear the ship would sink did Smith begin to seek rescue help.

At TMI, it was two hours before the team on watch seriously sought outside help, apparently believing mistakenly until then that they could handle the situation. Some of the help literally wandered in the door as the next shift reported for work. Others, such as the Babcock & Wilcox (B&W) representative, were sought out, and still others, such as the NRC and Pennsylvania government and regulatory officials, were required notifications.

NASA engineers repeatedly asked their superiors to use Department of Defense (DOD) capabilities to get close images of Columbia on-orbit and were denied. The debate will go on forever about whether anything could have been done to save the crew, but the opportunity was missed, not once but repeatedly.

Insight #27: Train for the “can’t happen” scenario. Those involved with Titanic, Columbia, and TMI all thought many things could not happen or at worst were very remote. This mindset is obviously dangerous, but so is an overly conservative “If I don’t go outside, the sky can’t fall on me” attitude. You obviously train for known situations in operating any device from a car to the space shuttle, but thinking about how you would handle something “they” say “can’t happen” is far more than an idle intellectual game.

Insight #28: Open your mind past your blinders. This is extremely difficult. How do you know that your response has been conditioned by the context of your experiences? Perhaps the only defense is to play “what if” games with yourself and your colleagues. Regardless of the business or physical context, these are useful exercises. If you find yourself in a confusing situation, perhaps the proper question when you have exhausted all avenues is “What’s the other right answer?” This question often opens the mind to looking at whether there is another answer by discarding what you have already thought about without success.

From Chapter 6

Insight #29: Culture is powerful—what creates success may kill you. The cultures of American Medical International (AMI), Ford, Firestone, and Enron worked for and against them. There are many examples, typically in the early stages of successful companies, where culture helps organizations see things in markets that others miss, get past survival challenges, grow faster, and weather competitive threats. The same powerful, but hard to define, force that binds an organization together for success can also be a catalyst, or even a cause, of failure.

From Chapter 7

Insight #30: Culture is powerful, but be sure you understand where to extend it. As McDonald’s found out, a core culture built around attention to detail, standardization, and discipline in operations and marketing is a tremendous strength, but it cannot be extended easily to other businesses, or even to other food businesses, which proved different enough that the same detailed procedures did not work well. This was predictable because the McDonald’s culture thrives on standardization with minimal adaptation, while going into new businesses requires rapid learning and adaptation. This is not to say that McDonald’s cannot get results in other food areas, but it will do so less efficiently or will have to develop teams with different competencies.

Insight #31: Rapid culture change designed to obliterate mistakes in supercritical areas is possible, but sharp focus, extra diligence, and continuous training are necessary for success. The difficulty is that if you do not have a history of being a high-performance organization, you can rarely invent this capability on demand. These standards cannot be relaxed if you wish to maintain performance.

Insight #32: Most cultures develop by accident—those that are designed to accomplish a purpose are more effective. Whether we look at McDonald’s, Southwest Airlines, the Navy’s submarine force, or IBM, when you see successful organizations, you find strong cultures with teams that understand what they need to focus on and reinforce it over and over again. Successful companies design cultures through consistent priorities and behaviors. This does not mean that priorities remain unchanged, but when they do change, the changes take place in a considered and deliberate fashion and are communicated very well.

From Chapter 8

Insight #33: Economic forces and laws are real, and industry changes are real. They are not as unexpected as most people believe—it is usually only a matter of timing. The mistake chain in which an entire industry changes is driven by a failure to recognize the need for fundamental changes in a business model early enough to avoid being consumed by the natural laws of economics.

Insight #34: Being #1 or #2 really does matter. This is not because it was the much-heralded rallying cry at GE, but because it is a reflection of the laws of economics that will bite if you are not a leader in your field. Having a vision that includes an understanding of the forces at work and the time you have available is a must.

Insight #35: Economic business visioning (EBV) is not optional. Yet few companies, except perhaps pure commodity businesses, factor such a process into their planning. The usual planning cycle revolves around a range of economic assumptions from “a little better” to “a little worse,” with “a lot better” and “a lot worse” thrown in to make the set look complete, but with little belief that either extreme will occur. For both the short and the longer term, designing a process for analysis and spending time understanding shifts in industry economics will help you avoid mistakes that are deadly.

From Chapter 9

Insight #36: Startups and small businesses make mistakes in the same ways that larger organizations do. However, they usually have fewer resources with which to avoid or recover from mistakes and less flexibility to survive them with alternate plans or products. While the patterns are similar, some mistakes or sequences are unique to small business; these have to do with fundraising and the mechanics of getting things done in the early stages. Many others look similar to what occurs in larger, more established entities.

From Chapter 10

Insight #37: Do you want to trust “saving the business” to your last line of defense? That is what you will do if you do not develop systems for detecting and correcting mistakes before there is any damage of consequence.

Insight #38: If you do not make any mistakes, you may not be taking enough risk, but failing to take any risks at all may be the most dangerous type of mistake that a business can make. This does not mean you should seek mistakes for the sake of making them, but the lack of mistakes (perfection) does not always correlate with the highest level of success. Risk-reward is an economic principle that underlies our whole business system.
