1. From Problem-Solving to Problem-Finding

“It isn’t that they can’t see the solution. It’s that they can’t see the problem.”

G. K. Chesterton

Code blue! Code blue! Mary’s heart has stopped, and her nurse has called for help. A team rushes to the patient’s room. No one expected this crisis. Mary had come to the hospital for routine knee-replacement surgery, and she had been in fairly good health prior to the procedure. Now, she isn’t breathing. Working from a “crash cart” full of key equipment and supplies, the expert team begins trying to resuscitate the patient. At lightning speed, yet with incredible calm and precision, they get Mary’s heart beating again. They move her to the intensive care unit (ICU), where she remains for two weeks. In total, she spends one month more than expected in the hospital after her surgery. Her recovery, even after she returns home, is much slower than she anticipated. Still, Mary proved rather lucky, because the survival rate after a code blue typically does not exceed 15%.

After Mary begins breathing regularly again, the patient’s family praises the team that saved her life. Everyone expresses relief that the team responded so quickly and effectively. Then, the team members return to their normal work in various areas of the hospital. Mary’s nurse attends to her other patients. However, as she goes about her normal work, she wonders: Could this cardiac arrest have been foreseen? Did I miss the warning signs? She recalls noticing that Mary’s speech and breathing had become slightly labored roughly six hours before the arrest. She checked her vitals. While her respiratory rate had declined a bit, her other vital signs—blood pressure, heart rate, oxygen saturation, and body temperature—remained normal. Two hours later, the nurse noticed that Mary appeared a bit uncomfortable. She asked her how she was feeling, and Mary responded, “I’m OK. I’m just a little more tired than usual.” Mary’s oxygen saturation had dipped slightly, but otherwise, her vitals remained unchanged. The nurse considered calling Mary’s doctor, but she didn’t feel comfortable calling a physician without more tangible evidence of an urgent problem. She didn’t want to issue a false alarm, and she knew that a physician’s assistant would come by in approximately one hour to check on each patient in the unit.1

This scenario, unfortunately, has transpired in many hospitals over the years. Research shows that hospitalized patients often display subtle—and not-so-subtle—warning signs six to eight hours before a cardiac arrest. During this time, small problems begin to arise, such as changes in heart rate, blood pressure, and mental status. However, hospital personnel do not necessarily notice the symptoms. If they notice a problem, they often try to address it on their own, rather than bringing their concerns to the attention of others. One study found that two-thirds of patients exhibited warning signs, such as an abnormally high or low heart rate, within six hours of a cardiac arrest, yet nurses and other staff members brought these problems to the attention of a doctor in only 25% of those situations.2 In short, staff members wait too long to bring these small problems to the attention of others. Meanwhile, the patient’s health continues to deteriorate during this window of opportunity when an intervention could perhaps prevent a crisis.

Several years ago, Australian hospitals set out to save lives by acting sooner to head off emerging crises. They devised a mechanism whereby caregivers could intervene more quickly to address the small problems that typically portend larger troubles: the Rapid Response Team (RRT). These teams respond to calls for assistance, usually from a floor nurse who notices an early warning sign associated with cardiac arrest. A team typically consists of an experienced critical-care nurse and a respiratory therapist; in some cases, it also includes a physician and/or physician’s assistant. When the nurse pages an RRT, the team arrives at the patient’s bedside within a few minutes and begins its diagnosis and possible intervention. These teams quickly assess whether a particular warning sign merits further testing or treatment to prevent a cardiac arrest.

To help the nurses and other staff members spot problems in advance of a crisis, the hospitals created a list of the “triggers” that may foreshadow a cardiac arrest and posted them in all the units. Researchers identified these triggers by examining many past cases of cardiac arrest. Most triggers involved a quantitative variable such as the patient’s heart rate. For instance, many hospitals instructed staff members that the RRT should be summoned if a patient’s heart rate fell below 40 beats per minute or rose above 130 beats per minute. However, hospitals found that nurses often noticed trouble even before vital signs began to deteriorate. Thus, they empowered nurses to call an RRT if they felt concerned or worried about a patient, even if the vital signs appeared relatively normal.3
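For readers who like to see the logic spelled out, the sketch below expresses the trigger idea in a few lines of code. The heart-rate thresholds (below 40 or above 130 beats per minute) and the nurse-concern override come straight from the description above; the names Vitals, should_call_rrt, and nurse_concerned are illustrative assumptions, not part of any hospital’s actual protocol.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int        # beats per minute
    nurse_concerned: bool  # hospitals let nurses call on concern alone

def should_call_rrt(vitals: Vitals) -> bool:
    """Return True if any trigger suggests summoning the Rapid Response Team."""
    # Quantitative trigger taken from the text: heart rate below 40 or above 130.
    if vitals.heart_rate < 40 or vitals.heart_rate > 130:
        return True
    # Subjective trigger: the nurse is worried, even if the vitals look normal.
    return vitals.nurse_concerned

# A patient with a normal heart rate but a worried nurse still merits a call.
print(should_call_rrt(Vitals(heart_rate=88, nurse_concerned=True)))   # True
print(should_call_rrt(Vitals(heart_rate=150, nurse_concerned=False)))  # True
print(should_call_rrt(Vitals(heart_rate=72, nurse_concerned=False)))   # False
```

The point is simply that the triggers pair hard numeric cutoffs with a deliberately subjective criterion: the nurse’s own sense that something is wrong.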

The invention of RRTs yielded remarkable results in Australia. The innovation soon spread to the United States. Early adopters included four sites at which my colleagues (Jason Park, Amy Edmondson, and David Ager) and I conducted research: Baptist Memorial Hospital in Memphis, St. Joseph’s Hospital in Peoria, Missouri Baptist Medical Center in St. Louis, and Beth Israel Deaconess Medical Center in Boston. Nurses reported to us that they felt much more comfortable calling for assistance, especially given that the RRTs were trained not to criticize or punish anyone for a “false alarm.” As one said to us, “It’s about the permission the nurses have to call now that they didn’t have before the RRT process was established.” Another nurse commented, “There is nothing better than knowing you can call an RRT when a patient is going bad.” With the implementation of this proactive process for spotting problems, each of these pioneering hospitals reported substantial declines in cardiac arrests, transfers to the intensive care unit, and deaths. A physician explained why RRTs proved successful: “The key to this process is time. The sooner you identify a problem, the more likely you are to avert a dangerous situation.”

Academic research confirms the effectiveness of RRTs. For instance, a recent Stanford study, published in the Journal of the American Medical Association, found a 71% reduction in “code blue” incidents and an 18% reduction in the mortality rate after implementation of an RRT in a pediatric hospital.4 With these kinds of promising results, the innovation has spread like wildfire. The Institute for Healthcare Improvement has championed the idea. Now, more than 1,600 hospitals around the country have implemented the RRT model. Many lives have been saved.

What is the moral of this remarkable story? Small problems often precede catastrophes. In fact, most large-scale failures result from a series of small errors and failures, rather than a single root cause. These small problems often cascade to create a catastrophe. Accident investigators in fields such as commercial aviation, the military, and medicine have shown that a chain of events and errors typically leads to a particular disaster.5 Thus, minor failures may signal big trouble ahead; treated appropriately, they can serve as early warning signs. Many large-scale failures have long incubation periods, meaning that managers have ample time to intervene when small problems arise, thereby avoiding a catastrophic outcome.6 Yet these small problems often do not surface. They occur at the local level but remain invisible to the broader organization. These hospitals used to expend enormous resources trying to save lives after a catastrophe. They engaged in heroic efforts to resuscitate patients after a cardiac arrest. Now, they have devised a mechanism for spotting and surfacing small problems before they escalate to create a catastrophic outcome. Code Blue Teams are in the business of fighting fires. The Rapid Response Team process is all about detecting smoke (see Figure 1.1).7


Figure 1.1 Fighting fires versus detecting smoke

This book uses the terms problem and failure interchangeably, defining both as a condition in which the expected outcome has not been achieved. In other words, either we do not witness the desired positive results, or we experience negative results. These problems may entail breakdowns of a technical, cognitive, and/or interpersonal nature. Technical problems consist of breakdowns in the functioning of equipment, technology, natural systems, and the like. Cognitive problems entail judgment or analytical errors on the part of individuals or groups. Interpersonal problems involve breakdowns in communication, information transfer, knowledge sharing, and conflict resolution.8

Many organizations devote a great deal of attention to improving the problem-solving capabilities of employees at all levels. Do they spend as much time thinking about how to discover problems before they mushroom into large-scale failures? One cannot solve a problem that remains invisible—unidentified and undisclosed. Unfortunately, for a variety of reasons, problems remain hidden in organizations for far too long. We must find a problem before it can be addressed appropriately. Great leaders do not simply know how to solve problems. They know how to find them. They can detect smoke, rather than simply trying to fight raging fires. This book aims to help leaders at all levels become more effective problem-finders.

Embrace Problems

Most individuals and organizations do not view problems in a positive light. They perceive problems as abnormal conditions, as situations that one must avoid at all costs. After all, fewer problems mean a greater likelihood of achieving the organization’s goals and objectives. Most managers do not enjoy discussing problems, and they certainly do not cherish the opportunity to disclose problems in their own units. They worry that others will view them as incompetent for allowing the problem to occur, or incapable of resolving the problem on their own. In short, many people hold the view that the best managers do not share their problems with others; they solve them quietly and efficiently. When it comes to small failures in their units, most managers believe first and foremost in the practice of discretion.

Some organizations, however, perceive problems quite differently. They view small failures as quite ordinary and normal. They recognize that problems happen, even in very successful organizations, despite the best managerial talent and most sophisticated management techniques. These organizations actually embrace problems. Toyota Motor Corporation exemplifies this very different attitude toward the small failures that occur every day in most companies. Toyota views problems as opportunities to learn and improve. Thus, it seeks out problems, rather than sweeping them under the rug.9

Toyota also does not treat small problems in isolation; it always tries to connect them to the bigger picture. Toyota asks: Is this small failure symptomatic of a larger problem? Do we have a systemic failure here?10 In this way, Toyota resembles organizations such as nuclear power plants and U.S. Navy aircraft carriers—entities that operate quite reliably in a high-risk environment. Scholars Karl Weick and Kathleen Sutcliffe point out that those organizations have a unique view of small problems:

“They tend to view any failure, no matter how small, as a window on the system as a whole. They view any lapse as a signal of possible weakness in other portions of the system. This is a very different approach from most organizations, which tend to localize failures and view them as specific, independent problems... [They act] as though there is no such thing as a confined failure and suspect, instead, that the causal chains that produced the failure are long and wind deep inside the system.”11

With this type of approach, Toyota maintained a stellar reputation for quality in the automobile industry for many years. Experts attributed that reputation to the vaunted Toyota Production System, with its emphasis on continuous improvement. As many people now know, Toyota empowers each frontline worker to “pull the Andon cord” if they see a problem, thereby alerting a supervisor to a potential product defect or process breakdown. If the problem cannot be solved in a timely manner, this process actually leads to a stoppage of the assembly line. This system essentially empowered everyone in a Toyota manufacturing plant to become a problem-finder. Quality soared as Toyota detected problems far earlier in the manufacturing process than other automakers typically did.12 Like the hospitals that deployed Rapid Response Teams, Toyota discovered that the likelihood of a serious failure falls dramatically if one reduces the time gap between problem occurrence and problem detection. Both the hospitals and Toyota learned that acting early to address a small potential problem may lead to some false alarms, but it proves far less costly than trying to resolve problems that have mushroomed over time.

This attitude about problems permeates the organization, and it does not confine itself to quality problems on the production line. It applies to senior management and strategic issues as well. In a 2006 Fast Company article, an American executive describes how he learned that Toyota did not operate like the typical organization. He reported attending a senior management meeting soon after his hire at Toyota’s Georgetown, Kentucky plant in the 1990s. As he began reporting on several successful initiatives taking place in his unit, the chief executive interrupted him. He said, “Jim-san. We all know you are a good manager. Otherwise, we would not have hired you. But please talk to us about your problems so we can work on them together.”13

More recently, though, Toyota’s quality has slipped by some measures. In a recent interview with Harvard Business Review, Toyota CEO Katsuaki Watanabe addressed this issue, noting that the firm’s explosive growth may have strained its production system. His answer speaks volumes about the company’s attitude toward problems:

“I realize that our system may be overstretched. We must make that issue visible. Hidden problems are the ones that become serious threats eventually. If problems are revealed for everybody to see, I will feel reassured. Because once problems have been visualized, even if our people didn’t notice them earlier, they will rack their brains to find solutions to them.”14

Most executives would not be so candid about the shortcomings of the organization they lead. In contrast, Watanabe told the magazine that he felt a responsibility to “surface problems” in the organization. By speaking candidly about Toyota’s recent quality troubles, rather than trying to minimize or downplay them, Watanabe models the attitude that he wants all managers at the firm to embrace. For Watanabe and the Toyota organization he leads, problems are not the enemy; hidden problems are.

Why Problems Hide

Problems remain hidden in organizations for a number of reasons. First, in many firms people fear being marginalized or punished for speaking up, particularly for admitting that they might have made a mistake or contributed to a failure. Second, structural complexity in organizations may act like the dense “tree cover” in a forest that keeps sunlight from reaching the ground. Multiple layers, confusing reporting relationships, convoluted matrix structures, and the like all make it hard for messages to reach key leaders. Even if the messages do make their way through the dense forest, they may become watered down, misinterpreted, or distorted along the way. Third, powerful gatekeepers may insulate leaders from hearing bad news, even if the filtering of information takes place with the best of intentions. Fourth, an overemphasis on formal analysis and an underappreciation of intuitive reasoning may cause problems to remain hidden for far too long. Finally, many organizations do not train employees in how to spot problems. Issues surface more quickly if people have been taught how to hunt for potential problems, what cues to attend to as they do their jobs, and how to communicate their concerns to others.

Cultures of Fear

Maxine Clark founded Build-a-Bear Workshop, a company that aims to “bring the teddy bear to life” for children and families, and she continues to serve as its chief executive. Clark’s firm does so by enabling children to create customized and personalized teddy bears in its stores. Kids choose what type of bear they want. Store associates stuff, stitch, and fluff the bears for the children, and then the kids choose precisely how they want to dress and accessorize the teddy bear. If you have young children or grandchildren, you surely have heard of the company.

Clark has built an incredibly successful company, growing it to more than $350 million in sales over the past decade. She has done so by delivering a world-class customer experience in her stores. Clark credits her store associates, who constantly find ways to innovate and improve. How do the associates do it? For starters, they tend not to fear admitting a mistake or surfacing a problem. Clark’s attitude toward mistakes explains her associates’ behavior: she does not punish people for making an error or bringing a problem to light; indeed, she rewards them for doing so.

Clark credits her first-grade teacher, Mrs. Grace, for instilling this attitude toward mistakes in her long ago. As many elementary school teachers do, Mrs. Grace graded papers using a red pencil. However, unlike most of her colleagues, Mrs. Grace gave out a rather unorthodox award at the end of each week. She awarded a red pencil prize to the student who had made the most mistakes! Why? Mrs. Grace wanted her students engaged in the class discussion, trying to answer every question, no matter how challenging. As Clark writes, “She didn’t want the fear of being wrong to keep us from taking chances. Her only rule was that we couldn’t be rewarded for making the same mistake twice.”15

Clark has applied her first-grade teacher’s approach at Build-a-Bear by creating a Red Pencil Award. She gives this prize to people who have made a mistake but who have discovered a better way of doing business as a result of reflecting on and learning from that mistake. Clark has it right when she says that managers should encourage their people to “experiment freely, and view every so-called mistake as one step closer to getting things just right.”16 Of course, her first-grade teacher had it right as well when she stressed that people would be held accountable if they made the same mistake repeatedly. Failing to learn constitutes the bad behavior that managers should deem unacceptable. Clark makes that point clear to her associates.17

Many organizations exhibit a climate in which people do not feel comfortable speaking up when they spot a problem, or perhaps have made a mistake themselves. These firms certainly do not offer Red Pencil Awards. My colleague Amy Edmondson points out that such firms lack psychological safety, meaning that individuals share a belief that the climate is not safe for interpersonal risk-taking. Those risks include the danger of being perceived as a troublemaker, or of being seen as ignorant or incompetent. In an environment of low psychological safety, people believe that others will rebuke, marginalize, or penalize them for speaking up or for challenging prevailing opinion; people fear the repercussions of admitting a mistake or pointing out a problem.18 In some cases, Edmondson finds that frontline employees do take action when they see a problem in such “unsafe” environments. However, they tend to apply a Band-Aid at the local level, rather than raising the issue for a broader discussion of what systemic problems need to be addressed. Such Band-Aids can do more harm than good in the long run.19 Leaders at all levels harm psychological safety when they establish hierarchical communication protocols, make status differences among employees highly salient, and fail to admit their own errors. At Build-a-Bear, Maxine Clark’s Red Pencil Award serves to enhance psychological safety, and in so doing, helps ensure that most problems and errors do not remain hidden for lengthy periods of time.

Organizational Complexity

In the start-up stage, most companies have very simple, flat organizational structures. As many firms grow, their structures become more complex and hierarchical. To some extent, such increased complexity must characterize larger organizations. Without appropriate structures and systems, a firm cannot continue to execute its strategy as it grows revenue. However, for too many firms, the organizational structure becomes unwieldy over time. The organization charts become quite messy with dotted-line reporting relationships, matrix structures, cross-functional teams, ad hoc committees, and the like. People find it difficult to navigate the bureaucratic maze even to get simple things accomplished. Individuals cannot determine precisely where decision rights reside on particular issues.20

Amidst this maze of structures and systems, key messages get derailed or lost. Information does not flow effectively either vertically or horizontally across the organization. Vertically, key messages become garbled or squashed as they ascend the hierarchy. Horizontally, smooth handoffs of information between organizational units do not take place. Critical information falls through the cracks.

The 9/11 tragedy demonstrates how a complex organizational structure can mask problems.21 Prior to the attacks, a labyrinth of agencies and organizations worked to combat terrorism against the U.S. These included the Central Intelligence Agency, the Federal Bureau of Investigation, the Federal Aviation Administration, and multiple units within the Departments of State and Defense. Various individuals within the federal government discovered or received information pertaining to the attacks in the days and months leading up to September 11, 2001. However, some critical information never rose to the attention of senior officials. In other cases, information did not pass from one agency to another, or disparate pieces of information were never properly integrated. Individuals did not always know whom to contact to request critical information, or whom they should inform about something they had learned. On occasion, senior officials downplayed the concerns of lower-level staff, who in turn did not know where else to go to express their unease. Put simply, the right information never made it into the right hands at the right time. The dizzying complexity of the organizational structures and systems within the federal government bears some responsibility. The 9/11 Commission concluded:

“Information was not shared, sometimes inadvertently or because of legal misunderstandings. Analysis was not pooled. Effective operations were not launched. Often the handoffs of information were lost across the divide separating the foreign and domestic agencies of the government. However the specific problems are labeled, we believe they are symptoms of the government’s broader inability to adapt how it manages problems to the new challenges of the twenty-first century. The agencies are like a set of specialists in a hospital, each ordering tests, looking for symptoms, and prescribing medications. What is missing is the attending physician who makes sure they work as a team.”22

Gatekeepers

Each organization tends to have its gatekeepers, who control the flow of information and people into and out of certain executives’ offices. Sometimes, these individuals serve in formal roles that explicitly require them to act as gatekeepers. In other instances, the gatekeepers operate without formal authority but with significant informal influence. Many CEOs have a chief of staff who serves as a gatekeeper. Most recent American presidents have had one as well. These individuals can play a useful role. After all, someone has to ensure that the chief executive uses his or her time wisely. Moreover, the chief executive must be protected against information overload; he or she can easily get buried in reports and data. If no one guards his or her schedule, the executive could get bogged down in meetings that are unproductive, or at which he or she is not truly needed.23 Former President Gerald Ford commented on the usefulness of having someone in this gatekeeper function:

“I started out in effect not having an effective Chief of Staff and it didn’t work. So anybody who doesn’t have one and tries to run the responsibilities of the White House, I think, is putting too big a burden on the President himself. You need a filter, a person that you have total confidence in who works so closely with you that, in effect, is almost an alter ego. I just can’t imagine a President not having an effective Chief of Staff.”24

Trouble arises when the gatekeeper intentionally distorts the flow of information. Put simply, the gatekeeper function bestows a great deal of power on an individual. Some individuals, unfortunately, choose to abuse that power to advance their agendas. In their study of the White House Chief of Staff function, Charles Walcott, Shirley Warshaw, and Stephen Wayne concluded:

“In performing the gatekeeper’s role, the Chief of Staff must function as an honest broker. Practically all of the chiefs and their deputies interviewed considered such a role essential. James Baker (President Reagan’s Chief of Staff) was advised by a predecessor: ‘Be an honest broker. Don’t use the process to impose your policy views on the President.’ The President needs to see all sides. He can’t be blindsided.”25

Gatekeepers do not always intentionally prevent executives from learning about problems and failures. In some cases, they simply make the wrong judgment as to the importance of a particular matter, or they underestimate the risk involved if the problem does not get surfaced at higher levels of the organization. They may think that they can handle the matter on their own, when in fact they do not have the capacity to do so. They might oversimplify the problem when they try to communicate it to others concisely. Finally, gatekeepers might place the issue on a crowded agenda, where it simply does not get the attention it deserves.

Dismissing Intuition

Some organizations exhibit an intensely analytical culture. They apply quantitative analysis and structured frameworks to solve problems and make decisions. Data rule the day; without a wealth of statistics and information, one cannot persuade others to adopt his or her proposals. While fact-based problem-solving has many merits, it does entail one substantial risk. Top managers may dismiss intuitive judgments too quickly in these environments, citing the lack of extensive data and formal analysis. In many instances, managers and employees first identify potential problems because their intuition suggests that something is not quite right. Those early warning signs do not come from a large dataset, but rather from an individual’s gut. By the time the data emerge to support the conclusion that a problem exists, the organization may be facing much more serious issues.26

In highly analytical cultures, my research suggests that employees also may self-censor their intuitive concerns. They fear that they cannot meet the burden of proof necessary to surface the potential problem they have spotted. In one case, a manager told me, “I was trained to rely on data, going back to my days in business school. The data pointed in the opposite direction of my hunch that we had a problem. I relied on the data and dismissed that nagging feeling in my gut.”27

In the Rapid Response Team study, we found that nurses often called the teams when they had a concern or felt uncomfortable, despite the lack of conclusive data suggesting that the patient was in trouble. Their hunches often proved correct. In one hospital, the initiative’s leader reported to us that “In our pilot for this program, the best single predictor of a bad outcome was the nurse’s concern without other vital sign abnormalities!” Before the Rapid Response Team process, most of the nurses told us that they would have felt very nervous voicing their worries simply based on their intuition. They worried that they would be criticized for coming forward without data to back up their judgments.

Lack of Training

Problems often remain hidden because individuals and teams have not been trained how to spot problems and how to communicate their concerns to others. The efficacy of the Rapid Response Team process rested, in part, on the list of “triggers” that the hospitals created for nurses and other personnel to watch for when caring for patients. That list made certain cues highly salient to frontline employees; it jump-started the search for problems. The hospitals also trained employees in how to communicate their concerns when they called a Rapid Response Team. Many hospitals employed a technique called SBAR to facilitate discussions about problems. The acronym stands for Situation-Background-Assessment-Recommendation. The SBAR methodology provides a way for health care personnel to discuss a patient’s condition in a systematic manner, beginning with a description of the current situation and ending with a recommendation of how to proceed with testing and/or treatment. The Institute for Healthcare Improvement explains the merits of the process:

“SBAR is an easy-to-remember, concrete mechanism useful for framing any conversation, especially critical ones, requiring a clinician’s immediate attention and action. It allows for an easy and focused way to set expectations for what will be communicated and how between members of the team, which is essential for developing teamwork and fostering a culture of patient safety.”28
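Because SBAR is essentially a fixed, four-part structure, it can be captured in a few lines of code. The sketch below is only an illustration of that structure; the class and field names are assumptions for this example, not part of SBAR itself or of any hospital’s paging system.

```python
from dataclasses import dataclass

@dataclass
class SBARReport:
    situation: str       # what is happening with the patient right now
    background: str      # relevant clinical history and context
    assessment: str      # what the caller believes the problem is
    recommendation: str  # what the caller wants done next

    def as_message(self) -> str:
        """Render the report in the fixed S-B-A-R order for a handoff call."""
        return (
            f"Situation: {self.situation}\n"
            f"Background: {self.background}\n"
            f"Assessment: {self.assessment}\n"
            f"Recommendation: {self.recommendation}"
        )

report = SBARReport(
    situation="The patient appears short of breath and unusually tired.",
    background="Post-operative day two after a knee replacement.",
    assessment="Oxygen saturation has dipped slightly; I am concerned.",
    recommendation="Please evaluate the patient at the bedside now.",
)
print(report.as_message())
```

Requiring a caller to fill in all four fields, in that order, is what keeps a hurried conversation from skipping the assessment or the recommendation.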

The commercial aviation industry also provides extensive checklists for its pilots to review before, during, and after flights to enhance safety. It also conducts training for its flight crews regarding the cognitive and interpersonal skills required to identify and address potential safety problems in a timely and effective manner. The industry coined the term CRM—Crew Resource Management—to describe the set of principles, techniques, and skills that crew members should use to communicate and interact more effectively as a team. CRM training, which is employed extensively throughout the industry, helps crews identify potential problems and discuss them in an open and candid manner. Through CRM training, captains learn how to encourage their crew members to bring forth concerns, and crew members learn how to raise their concerns or questions in a respectful, but assertive, manner.29

Aviation experts credit CRM with enhancing flight safety immeasurably. In one famous incident in 1989, United Airlines Flight 232 experienced an engine failure and a breakdown of all the plane’s hydraulic systems. By most accounts, no one should have survived. However, the crew managed to execute a remarkable crash landing that enabled 185 of the 296 people onboard to survive. Captain Alfred Haynes credited CRM practices with helping them save as many lives as they did.30

Making Tradeoffs

At times, leaders will find it difficult to distinguish the true “signals” of trouble from all the background “noise” in the environment. Chasing down all the information required to discern whether a signal represents a true threat can be very costly. False alarms will arise when people think they have spotted a problem, when in fact, no significant threat exists. Too many false alarms can begin to “dull the senses” of the organization, causing a reduction in attentiveness over time. Leaders inevitably must make tradeoffs as they hunt for problems in their organizations. They have to weigh the costs and benefits of expending time and resources to investigate a potential problem. Naturally, we do not always make the right judgments when we weigh these costs and benefits; we will choose not to further investigate some problems that turn out to be quite real and substantial.
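To see the structure of that cost-benefit judgment, consider a back-of-the-envelope calculation. Every number below is hypothetical, chosen only to illustrate how the comparison works; the chapter itself supplies no figures.

```python
# Hypothetical expected-cost comparison: investigate every alert versus wait
# for conclusive evidence. None of these figures come from the text; they are
# assumptions that exist only to show the shape of the tradeoff.

alerts_per_year = 200             # how often someone raises a possible problem
true_problem_rate = 0.05          # fraction of alerts that are real threats
cost_per_investigation = 500      # cost of chasing down one alert, false alarms included
cost_of_missed_problem = 250_000  # cost when a real problem mushrooms unchecked
catch_rate_without_checks = 0.4   # real problems still caught without early investigation

# Policy 1: chase down every alert, real or not.
investigate_everything = alerts_per_year * cost_per_investigation

# Policy 2: act only on conclusive data, so some real problems slip through.
real_problems = alerts_per_year * true_problem_rate
missed = real_problems * (1 - catch_rate_without_checks)
wait_for_proof = missed * cost_of_missed_problem

print(f"Investigate every alert:  ${investigate_everything:,.0f}")
print(f"Wait for conclusive data: ${wait_for_proof:,.0f}")
```

Under these assumed numbers, investigating every alert costs far less than absorbing even a handful of problems that mushroom unchecked, which is the intuition behind the Rapid Response Team and Andon cord examples.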

How do the best problem-finders deal with these challenges? First, a leader does not necessarily have to consume an extraordinary amount of resources to surface and examine potential problems. Some leaders and organizations have developed speedy, low-cost methods of inquiry. Toyota’s “Andon cord” system represents one such highly efficient process for examining signals of potential trouble. The organization does not grind to a halt every time a frontline worker pulls the “Andon cord.” Second, the best problem-finders recognize that false alarms can be remarkable learning opportunities. Moreover, making someone feel bad for triggering a false alarm can discourage him from ever coming forward again. The cost of suppressing people’s voices can be far higher than the expense associated with chasing down a false alarm. In the Rapid Response Team programs, hospitals train the experts to be gentle with those who call for help when no true threat exists. They even instruct the teams not to use the “false alarm” terminology. Instead, the experts work with people to help them refine their ability to discern true threats from less serious concerns. Finally, effective problem-finders recognize that the process of trying to uncover potential threats can have positive “spillover effects.” For instance, hospitals have found that the process for investigating possible medical errors often leads to the discovery of opportunities for reducing expenses or improving patient satisfaction.

Perhaps most importantly, leaders must remember that problem-finding abilities tend to improve over time. As you practice the methods described in this book, you will become better at distinguishing the signals from the noise. You will become more adept at identifying whether a piece of information suggests a serious problem or not. The nurses, for instance, told us that experience proves to be a great teacher. Over time, they learned how to discern more accurately whether a patient could be headed for cardiac arrest. Moreover, the Rapid Response Teams became more efficient at diagnosing a patient when they arrived at the bedside. In short, the costs of problem-finding fall substantially as people practice these skills repeatedly.31

Becoming an Effective Problem-Finder

In the remainder of this book, we will lay out the key skills and capabilities required to ensure that problems do not remain hidden in your organization. Keep in mind that problem-finding does not always precede processes of continuous improvement. Learning does not follow a linear path. Take the athlete who practices her sport on a regular basis. She does not always discover a problem first and then practice a new technique for overcoming that flaw. Sometimes, an athlete sets out on a normal practice routine, and through that process, she discovers problems that diminish her effectiveness. In sum, the processes of problem-finding and continuous improvement are inextricably linked. A person should not focus on one at the expense of the other, nor should he expect to proceed in a linear fashion from problem discovery to performance improvement. We often will discover new problems while working to solve old ones.

The following chapters explain the seven vital behaviors of effective problem-finders. To discover the small problems and failures that threaten your organization, you must do the following:

•  Circumvent the gatekeepers: Remove the filters at times, and go directly to the source to see and hear the raw data. Listen aggressively to the people actually doing the work.32 Keep in touch with what is happening at the periphery of your business, not simply at the core.

•  Become an ethnographer: Many anthropologists observe people in natural settings, which is known as ethnographic research. Emulate them. Do not simply ask people how things are going. Do not depend solely on data from surveys and focus groups. Do not simply listen to what people say; watch what they do—much like an anthropologist. Go out and observe how employees, customers, and suppliers actually behave. Effective problem-finders become especially adept at observing the unexpected without allowing preconceptions to cloud what they are seeing.

•  Hunt for patterns: Reflect on and refine your individual and collective pattern-recognition capability. Focus on the efficacy of your personal and organizational processes for drawing analogies to past experiences. Search deliberately for patterns amidst disparate data points in the organization.

•  Connect the dots: Recognize that large-scale failures often are preceded by small problems that occur in different units of the organization. Foster improved sharing of information, and build mechanisms to help people integrate critical data and knowledge. You will “connect the dots” among issues that may initially seem unrelated, but in fact, have a great deal in common.

•  Encourage useful failures: Create a “Red Pencil Award” philosophy akin to the one at Build-a-Bear. Encourage people to take risks and to come forward when mistakes are made. Reduce the fear of failure in the organization. Help your people understand the difference between excusable and inexcusable mistakes.

•  Teach how to talk and listen: Give groups of frontline employees training in a communication technique, such as Crew Resource Management, that helps them surface and discuss problems and concerns in an effective manner. Provide senior executives with training on how to encourage people to speak up, and then how to handle their comments and concerns appropriately.

•  Watch the game film: Like a coach, reflect systematically on your organization’s conduct and performance, as well as on the behavior and performance of competitors. Learn about and seek to avoid the typical traps that firms encounter when they engage in lessons learned and competitive-intelligence exercises. Create opportunities for individuals and teams to practice desired behaviors so as to enhance their performance, much like elite athletic performers do.

The Isolation Trap

Problem-finders do not allow themselves to become isolated from their organization and its constituents. They tear down the barriers that often arise around senior leaders. They reach out to the periphery of their business, and they engage in authentic, unscripted conversations with those people on the periphery. They set out to observe the unexpected, while discarding their preconceptions and biases.

Unfortunately, far too many senior executives of large companies become isolated in the corner office. Their professional lives involve a series of handlers—people who take their calls, screen their email, drive them places, run errands for them. They live in gated communities, travel in first class, and stay at five-star hotels. They have worked hard for these privileges; few would suggest that they don’t deserve them. However, executives often find themselves living and working in a bubble. They lose touch with their frontline employees, their customers, and their suppliers.

The isolation trap does not afflict only senior leaders. Leaders at all levels sometimes find themselves isolated from those who actually know about the problems that threaten the organization. Yes, many leaders conduct town-hall meetings with employees, and they go on customer visits periodically. They tour the company factories or stores, and they visit supplier locations. However, these events are often highly orchestrated and quite predictable. People typically know that they are coming, which clearly alters the dynamic a great deal. Often, executives simply witness a nice show, put on by lower-level managers to impress them. They don’t actually come to understand the needs and concerns of people who work in their factories or consume their goods. Such isolation breeds complacency and an inability to see the true problems facing the organization.

Problem-finders recognize the isolation trap, and they set out to avoid it. They put themselves out there; they open themselves to hearing about, observing, and learning about problems. Problem-finders acknowledge and discuss their own mistakes publicly. They recognize that one cannot make great decisions or solve thorny problems unless one knows about them. Novartis senior executive Larry Allgaier told me recently that he always keeps in mind an adage: “I worry the most about what my people are not telling me.”33 That statement reflects the philosophy of successful problem-finders. They worry deeply about what they do not know. They worry deeply that they do not know what they do not know.34

Endnotes

1 This disguised example is drawn from research that I conducted along with a thesis student, Jason Park, as well as Harvard Professors Amy Edmondson and David Ager. We conducted the research at four hospitals: Baptist Memorial Hospital in Memphis, St. Joseph’s Hospital in Peoria, Missouri Baptist Medical Center in St. Louis, and Beth Israel Deaconess Medical Center in Boston. For more on this research, see Jason Park’s award-winning 2006 Harvard College senior thesis, “Making rapid response real: Change management and organizational learning in critical patient care.” During this project, Jason and I interviewed forty-nine medical professionals at the four hospitals, and we observed weekly administrative meetings at one of these hospitals for a period of several months. We would like to especially thank Nancy Sanders, R.N., from Missouri Baptist; Marla Slock, R.N., from St. Joseph’s; Dr. Emmel Golden from Baptist Memorial in Memphis; and Dr. Michael Howell from Beth Israel Deaconess for their support and cooperation in our research initiative.

2 Franklin, C., and J. Matthew. (1994). “Developing strategies to prevent in-hospital cardiac arrest: Analyzing responses of physicians and nurses in the hours before the event.” Critical Care Medicine. 22(2): 244–247.

3 For more on rapid response teams, see the Institute for Healthcare Improvement’s How-to Guide titled “Getting Started Kit: Rapid Response Teams.”

4 Sharek, P. J., L. M. Parast, K. Leong, J. Coombs, K. Earnest, J. Sullivan, et al. (2007). “Effect of a Rapid Response Team on Hospital-wide Mortality and Code Rates Outside the ICU in a Children’s Hospital.” Journal of the American Medical Association. 298: 2267–2274.

5 Two classic works in this regard are by sociologist Charles Perrow and psychologist James Reason. For more information, see C. Perrow. (1999). Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press, and J. Reason (1990). Human Error. Cambridge, England: Cambridge University Press. In his book, Reason argues that organizational accidents represent a chain of errors in most circumstances. He also puts forth his “Swiss cheese model” regarding the organizational defenses against accidents. According to this conceptual framework, an organization’s layers of defense against accidents are described as slices of cheese. Reason likens the holes in those slices to the weaknesses in those defenses. The holes in a stack of Swiss cheese slices typically do not line up perfectly, such that one cannot look through a hole on one side and see through to the other side. Unfortunately, in some rare instances, the holes become completely aligned. Reason argues that a small error then can traverse the entire stack—that is, cascade quickly through the organizational system. In most cases, though, the holes do not line up. Thus, one of the layers of defense catches a small error before it cascades throughout the system.

6 Turner, B. A. (1976). “The organizational and interorganizational development of disasters.” Administrative Science Quarterly. 21(3): 378–397.

7 For a review of the literature on catastrophic failure, you might want to take a look at a recent book chapter I wrote: Roberto, M. (2008). “Why Catastrophic Organizational Failures Happen” in C. Wankel (ed.), 21st Century Management. (pp. 471–481). Thousand Oaks, CA: Sage Publications.

8 Edmondson, A. C. and M. D. Cannon. (2005). “Failing to Learn and Learning to Fail (Intelligently): How Great Organizations Put Failure to Work to Improve and Innovate.” Long Range Planning Journal. 38(3): 299–320.

9 Sim Sitkin wrote a seminal paper on the issue of how organizations can benefit from what he called “intelligent failures.” These failures represent opportunities for learning that must be embraced. See Sitkin, S. B. (1996). “Learning through failure: The strategy of small losses.” In M. D. Cohen and L. S. Sproull (eds.), Organizational Learning. (pp. 541–578). Thousand Oaks, CA: Sage.

10 For more on Toyota’s culture of continuous improvement, see Takeuchi, H., E. Osono, and Norihiko Shimizu. (2008). “The contradictions that drive Toyota’s success.” Harvard Business Review. June: 96–105; Spear, S. and Kent Bowen. (1999). “Decoding the DNA of the Toyota Production System.” Harvard Business Review. September: 96–107.

11 Weick, K. and Kathleen Sutcliffe. (2001). Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco: Jossey Bass. p. 56.

12 Mishina, K. (1992). “Toyota Motor Manufacturing, U.S.A., Inc.” Harvard Business School Case Study No. 9-693-019. Mishina provides an in-depth description of the Toyota Production System, including the procedure by which frontline workers can pull the Andon cord to alert supervisors of a potential problem. Mishina also describes how and why the line actually stops on some occasions when the Andon cord has been pulled.

13 Fishman, C. (2006). “No satisfaction.” Fast Company. 111: 82–91.

14 Watanabe, K. (2007). “The HBR Interview: Lessons from Toyota’s Long Drive.” Harvard Business Review. July–August: 74–83.

15 Clark, M. with A. Joyner. (2006). The Bear Necessities of Business: Building a Company with Heart. Hoboken, NJ: John Wiley and Sons. p. 89.

16 Ibid, p. 92.

17 Ibid.

18 Amy Edmondson has written prolifically on the subject of psychological safety. For example, see Edmondson, A. (1999). “Psychological safety and learning behavior in work teams.” Administrative Science Quarterly. 44: p. 354; Edmondson, A., R. Bohmer, and Gary Pisano. (2001). “Disrupted Routines: Team Learning and New Technology Adaptation.” Administrative Science Quarterly 46: 685–716; Edmondson, A. (2003). “Speaking up in the Operating Room: How Team Leaders Promote Learning in Interdisciplinary Action Teams.” Journal of Management Studies 40(6): 1419–1452; Detert, J. R. and A. C. Edmondson. (2007). “Why Employees Are Afraid to Speak Up.” Harvard Business Review. May: 23–25.

19 Anita Tucker and Amy Edmondson wrote an award-winning article in 2003 in which they distinguish between first-order and second-order problem-solving. In their research, they found that hospital nurses often fixed the problems they encountered on the front lines so that they could get their work done (first-order problem-solving), but they often did not dig deeper to address the underlying systemic failures (second-order problem-solving). Nurses solved the problems within their own unit, but they did not communicate more broadly about the issues they had encountered. This isolation impeded learning and meant that problems continued to recur. See Tucker, A. and A. Edmondson. (2003). “Why Hospitals Don’t Learn from Failures: Organizational and Psychological Dynamics That Inhibit System Change.” California Management Review 45(2): 53–71.

20 Former General Electric CEO Jack Welch describes the dangers of structural complexity in one of his books. See Welch, J. (2001). Jack: Straight from the Gut. New York: Warner Business Books.

21 This section draws upon research that I conducted along with Professor Jan Rivkin of the Harvard Business School and our research associate, Erika Ferlins. See Rivkin, J. W., M. A. Roberto, and Erika Ferlins. (2006). “Managing National Intelligence (A): Before 9/11.” Harvard Business School Case Study 9-706-463.

22 The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks Upon the United States. (2004). New York: W.W. Norton & Company. p. 353.

23 For an excellent analysis of presidential decision-making, including the role of the chief of staff, see the following: George, A. (1980). Presidential Decision-making in Foreign Policy: The Effective Use of Information and Advice. Boulder, Colorado: Westview Press; Johnson, R. T. (1974). Managing the White House. New York: Harper & Row. In addition, you might examine Stephen Ambrose’s biographical work on Dwight D. Eisenhower, both as a general and as president. See Ambrose, S. E. (1990). Eisenhower: Soldier and President. New York: Touchstone.

24 Walcott, C., S. Warshaw, and Stephen Wayne. (2000). “The Chief of Staff.” The White House 2001 Project: Report No. 21. p. 1.

25 Ibid, p. 12.

26 In both NASA space shuttle accidents, engineers had serious concerns about the safety of the vehicle, but they could not prove their case with statistically significant data. Instead, their intuition told them that the shuttle was not safe. The NASA culture tended to downplay judgments based on instinct, instead emphasizing quantitative evidence from large datasets. For more on the Challenger accident, see Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago: University of Chicago Press. For more on the Columbia accident, see Edmondson, A., M. Roberto, R. Bohmer, E. Ferlins, and Laura Feldman. (2005). “The Recovery Window: Organizational Learning Following Ambiguous Threats.” In M. Farjoun and W. Starbuck (eds.), Organization at the Limit: Lessons from the Columbia Disaster (220–245). London: Blackwell.

27 Interview with a former senior executive at Bright Horizons, the employer-sponsored child care provider.

28 http://www.ihi.org/IHI/Topics/PatientSafety/SafetyGeneral/Tools/SBARTechniqueforCommunicationASituationalBriefingModel.htm.

29 Wiener, E. L., B. G. Kanki, and Robert L. Helmreich. (1995). Cockpit Resource Management. London: Academic Press.

30 During a speech at NASA’s Ames Research Center on May 24, 1991, Captain Alfred Haynes credited Crew Resource Management (CRM) techniques with helping him crash-land United Airlines Flight 232. For a copy of this speech, see the following URL: http://yarchive.net/air/airliners/dc10_sioux_city.html. For an academic interpretation of this particular incident, see McKinney, E. H., J. R. Barker, K. J. Davis, and Daryl Smith. (2005). “How Swift Starting Action Teams Get off the Ground: What United Flight 232 and Airline Flight Crews Can Tell Us About Team Communication.” Management Communication Quarterly. 19: 198–237.

31 For more discussion of the cost-benefit tradeoffs that problem-finders face, see Edmondson, A., M. Roberto, R. Bohmer, E. Ferlins, and Laura Feldman (2005).

32 Retired Captain Michael Abrashoff uses the term “aggressive listening” in his book about the leadership lessons that he learned as the commander of a U.S. Navy Arleigh Burke class destroyer.

33 Conversation with Larry Allgaier during a Novartis Customized Executive Education program at the Harvard Business School in fall 2007.

34 Karlene Roberts is an expert on high-reliability organizations—enterprises that cope with high levels of risk on a daily basis, yet maintain very low accident rates. She argues that managers in these organizations aggressively seek to know what they don’t know. See Roberts, K., R. Bea, and D. Bartles. (2001). “Must accidents happen? Lessons from high reliability organizations.” Academy of Management Executive. 15(3): 70–79.
