4
Dangerous Silence

“Regret for the things we did can be tempered by time; it is regret for the things we did not do that is inconsolable.”

—Sydney Harris1

More than just business failure is at stake when psychological safety is low. In many workplaces, people see something physically unsafe or wrong and fear reporting it. Or they feel bullied and intimidated by someone but don't mention it to supervisors or counselors. This reticence unfortunately can lead to widespread frustration, anxiety, depression, and even physical harm. In short, we live and work in communities, cultures, and organizations in which not speaking up can be hazardous to human health.

This chapter explores how silence at work leads to harm that could have been prevented. You will read stories that come predominantly, but not exclusively, from high-risk industries. In these cases, employees find themselves unable to speak up; the ensuing silence then creates conditions for physical and emotional harm. Although never easy, in some workplaces, as we will see in Chapter 5 and Chapter 6, people do feel both safe and compelled to speak up. This gives everyone the chance to develop constructive solutions and avoid harmful outcomes.

We'll start with stories of silence that gave rise to major accidents in high-risk settings where risk and routine often exist in an uneasy balance. The first two accidents take place in the air. From there, we'll move to a hospital bed, tsunami waves, and finally the volatile setting of public opinion.

Failing to Speak Up

On February 1, 2003, NASA's Space Shuttle Columbia experienced a catastrophic reentry into the Earth's atmosphere.2 All seven astronauts perished. Although space travel is obviously risky and fatal accidents seem part of the territory, this particular accident did not come “out of the blue.” Two weeks earlier, a NASA engineer named Rodney Rocha had watched launch-day video footage, a day after what had seemed to be a picture-perfect launch on a sunny Florida morning. But something seemed amiss. Rocha played the tape over and over. He thought a chunk of insulating foam might have fallen off the shuttle's external tank and struck the left wing of the craft. The video images were grainy, shot from a great distance, and it was impossible to tell whether the foam had caused damage, but Rocha could not help worrying about the size and position of that grainy moving dot he saw on the screen. To resolve the ambiguity, Rocha wanted to get satellite photos of the shuttle's wing. But this would require NASA higher-ups to ask the Department of Defense for help.

Rocha emailed his boss to see if he could get help authorizing a request for satellite images. His boss thought it unnecessary and said so. Discouraged, Rocha sent an emotional email to his fellow engineers, later explaining that “engineers were…not to send messages much higher than their own rung in the ladder.”3 Working with an ad hoc team of engineers to assess the damage, he was unable to resolve his concern about possible damage without obtaining images. A week later, when the foam strike possibility was briefly discussed by senior managers in the formal mission management team meeting, Rocha, sitting on the periphery, observed silently.

A formal investigation by experts would later conclude that the briefcase-sized piece of foam had indeed struck the leading edge of the left wing, punching a large hole that caused the accident.4 They also identified two rescue options, albeit difficult and highly uncertain ones, that might have prevented the tragic deaths. Reporting on the investigation, ABC News anchor Charlie Gibson asked Rocha why he hadn't spoken up in the meeting. The engineer replied, “I just couldn't do it. I'm too low down [in the organization]…and she [meaning Mission Management Team Leader Linda Ham] is way up here,” gesturing with his hand held above his head.5

Rocha's statement captures a subtle but crucial aspect of the psychology of speaking up at work. Consider his words carefully. He did not say, “I chose not to speak,” or “I felt it was not right to speak.” He said that he “couldn't” speak. Oddly, this description is apt. The psychological experience of having something to say yet feeling literally unable to do so is painfully real for many employees and very common in organizational hierarchies, like that of NASA in 2003. We can all recognize this phenomenon. We understand why his hands spontaneously depicted that poignant vertical ladder. When probed, as Rocha was by Gibson, many people report a similar experience of feeling unable to speak up when hierarchy is made salient. Meanwhile, the higher-ups in a position to listen and learn are often blind to the silencing effects of their presence.

What Was Not Said

Twenty-six years earlier, workplace silence played a major role in the collision of two Boeing 747 jets on an island runway in the Canary Islands in March 1977.6 The crash ignited two jumbo jets into flames, and 583 people died. Subsequent investigations into what has been called the Tenerife disaster, still considered the worst accident in the history of civil aviation, were among the first to study the roles played by human factors in airline fatalities. The resulting changes made to aviation procedures and cockpit training laid the groundwork for some of today's most crucial psychological safety measures.

Let's look at what went wrong on that afternoon in late March at the small Los Rodeos Airport on the island of Tenerife. Heavy fog covered the runway, and the airport was small, making it difficult for the pilots of both aircraft to see the runway and one another. An unexpected landing at Tenerife due to a bomb scare earlier that day at nearby Las Palmas airport put extra stress on the crew, intent on keeping to their scheduled flight arrival times. Air traffic control personnel may have been watching a sports game, which distracted their attention. However, these relatively common, if unfavorable, conditions need not have resulted in tragedy. If we look more closely into what was said in the aircraft cockpit – and more importantly, what was not said, and why not – we can better understand the outsized role played by psychological safety.

Captain Jacob Veldhuyzen van Zanten, one of the most senior pilots at Royal Dutch Airlines (KLM), the chief flight trainer who had trained most of the company's 747 pilots, and its head of flight safety, piloted the flight.7 Nicknamed “Mr. KLM,” van Zanten held the power to issue pilots' licenses and oversaw pilots' six-month flight checks to determine whether licenses would be extended. His photograph, which had just appeared in a KLM advertising spread, depicted a smiling and confident man in a white shirt sitting in front of a control panel. He looked like a man who was comfortable being in charge.

Flying with van Zanten that day were two other top-notch and highly experienced pilots: First Officer Klaas Meurs, age 32, and Flight Engineer Willem Schreuder, age 48. Importantly, two months earlier, van Zanten had been Meurs's “check pilot,” testing his ability to fly the Boeing 747.

The crucial moments came as the KLM and the Pan Am flights were preparing for takeoff. Immediately after lining up on the runway, Captain van Zanten impatiently advanced the throttles and the aircraft started to move forward. First Officer Meurs, implying that van Zanten was moving too soon, then advised that air traffic control (ATC) had not yet given them clearance.

Van Zanten, sounding irritated, responded: “No, I know that. Go ahead, ask.”8

Following his captain's request, Meurs then radioed the tower that they were “ready for takeoff” and “waiting for our ATC clearance.” The ATC then specified the route that the aircraft was to follow after takeoff. Although the ATC used the word “takeoff,” their communication did not include an explicit statement that KLM was cleared for takeoff. Meurs began reading the flight clearance back to the controller, but van Zanten interrupted with an imperative: “We're going.”

Given the captain's authority, it was in this moment that Meurs apparently did not feel safe enough to speak up. In that split second, he did not open his mouth to say, “Wait for clearance!”

Meanwhile, after the KLM plane had started its takeoff roll, the tower instructed the Pan Am crew to “report when runway clear.” To which the Pan Am crew replied, “OK, will report when we're clear.” On hearing this, Flight Engineer Schreuder expressed his concern that Pan Am was not clear of the runway by asking, “is he not clear, that Pan American?”

Van Zanten emphatically replied, “oh, yes,” and continued with the takeoff.

And in this moment Schreuder did not say a thing. Although he had correctly surmised that the Pan Am jet might be blocking their way, Schreuder did not challenge van Zanten's confident retort. He did not ask ATC to clarify or confirm by asking, for example, “Is Pan American on the runway?” His reticence indicates a lack of the psychological safety that would make such a query all but second nature.

By then it was too late. The KLM Boeing was going too fast to stop when van Zanten, Meurs, and Schreuder finally could see the Pan Am jet blocking their way. The KLM's left-side engines, lower fuselage, and main landing gear struck the upper right side of the Pan Am's fuselage, ripping apart the center. The KLM plane remained briefly airborne before going into a stall, rolling sharply, hitting the ground, and igniting into a fireball.

Such is the inexorable psychological pull of hierarchy that even when their own lives were at risk, not to mention the lives of others, the first officer and the flight engineer did not push back on their captain's authority. In those moments when speaking up might make sense, we all go through an implicit decision-making process, weighing the benefits and costs of speaking up. The problem, as explained in Chapter 2, is that the benefits are often unclear and delayed (e.g., avoiding a possible collision) while the costs are tangible and immediate (van Zanten's irritation and potential anger). As a result, we consistently underweight the benefits and overweight the costs. In the case of Tenerife, this biased process led to disastrous outcomes.

Many who analyze the events leading up to tragic accidents such as this one, which could have been avoided had the junior officers spoken up, cannot help pointing out that people should demonstrate a bit more backbone. Courage. It is impossible to disagree with this assertion. Nonetheless, agreeing doesn't make the advice effective. Exhorting people to speak up because it's the right thing to do relies on an ethical argument but is not a strategy for ensuring good outcomes. Insisting on acts of courage puts the onus on individuals without creating the conditions where the expectation is likely to be met.

For speaking up to become routine, psychological safety – and expectations about speaking up – must become institutionalized and systematized. After Tenerife, cockpit training was changed to place more emphasis on crew decision-making, encourage pilots to assert their opinion when they believed something was wrong, and help captains listen to concerns from co-pilots and crews.9 These measures were a precursor to the official crew resource management (CRM) training that all pilots must now undergo.

Excessive Confidence in Authority

Medicine, like commercial aviation, is a profession where authority is well understood and tightly linked to one's place in a strict hierarchy. A direct line of command, where everyone knows his or her place, has its benefits. However, deference to others, especially in the face of ambiguity, can become the default mode of operation, leading everyone to believe that the person on top always knows best. In some cases, an implicit belief that the person with the highest place in the hierarchy must also be the authority can lead to fatal consequences. In other cases, an implicit belief in the authority of the medical system itself can be fatal.

On December 3, 1994, Betsy Lehman, a 39-year-old mother of two and a healthcare columnist at The Boston Globe, died at the Dana-Farber Cancer Institute while undergoing a third round of high-dose chemotherapy for breast cancer.10 In part because of her profession as a journalist, Lehman's death was well publicized in the media, especially once it was linked to a medical error.11

The Dana-Farber Cancer Institute, where Lehman sought treatment, was renowned for its cancer research and its success in treating complex and difficult cases. With only 57 inpatient beds, its patient-care operation was a kind of boutique unit that relied on informal information sharing among physicians, nurses, and pharmacy staff rather than on the formal communication mechanisms that exist in a traditional hospital setting. As Senior Oncologist Stephen Sallan noted, “our confidence was based on the assumption that if we were all wonderful then our pharmacy safety would be wonderful.”12 Unfortunately, this assumption did not leave much room for questioning or routine checking. The absence of a Director of Nursing at the time of Lehman's admittance, a post that had been vacant for over a year, also signaled that the medical and clinical teams did not adequately appreciate the interdependence and complexity of their work.

Lehman was admitted to the Dana-Farber for the planned chemotherapy on November 14, 1994. Although the chemotherapy agent was the commonly used cyclophosphamide, the dose was especially high because Lehman's treatment plan involved a cutting-edge stem-cell transplant. The protocol called for the chemotherapy to be infused over four days, with the amount given during each 24-hour period to be “barely shy of lethal.”13 As part of the clinical trial, Lehman was also given another drug, cimetidine, which was supposed to boost the effect of the first drug.14

In routine cancer treatments, courses of chemotherapy doses are typically standardized; however, in a research trial such as Lehman was undergoing, upper limits could be ambiguous. At Dana-Farber, where 30% of patients might be enrolled in a clinical trial at any one time, staff members who administered chemotherapy were accustomed to seeing unusual drug combinations and dosages.15 That may partly explain why no alarm bells went off even though the prescription – written by a clinical research fellow in oncology, copied into Lehman's records by a nurse, and filled by three different pharmacists – had mistakenly ordered the entire four-day dosage for each day, providing Lehman with four times the dosage she was supposed to receive.

The treatment was expected to produce severe nausea and vomiting. However, over the next three weeks in the hospital, Lehman's symptoms were extraordinary. She had not been as sick during the first two high-dose treatments. Now she was “grossly swollen” and had abnormal blood and EKG tests.16 High-dose cyclophosphamide was known to be toxic to the heart. Lehman's husband reported that she was “vomiting sheets of tissue. [The doctors] said this was the worst they had ever seen. But the doctors said this was all normal with bone marrow transplant.”17 At one point, Lehman asked a nurse, “Am I going to die from vomiting?”18 Meanwhile, another patient, admitted shortly before Lehman, given the same incorrect chemotherapy dose, had suddenly collapsed and was rushed to the intensive care unit.

The day before Lehman's discharge, her symptoms seemed to be abating. And there were signs that the experimental stem cell transplant was proceeding successfully. An EKG, however, was abnormal. On December 3, the day of her discharge and the day she died of heart failure, the last people she spoke to – a friend, a social worker, and a nurse – confirmed that she was very upset, frightened, and felt that something “was wrong.”19 We do not know whether or not she had voiced this concern as distinctly or coherently in the previous weeks. Surely, she must have wondered. Of course, an extremely ill patient is usually not in a position to assertively question her treatment plan, especially one that is experimental.

The medical error was not discovered until three months later – by a routine data check rather than by a clinical inquiry. As part of its corrective actions, Dana-Farber built automated medication checks into its chemotherapy procedures. Ultimately, Lehman's death became a catalyst for hospitals and healthcare institutions in the US to craft policy to help reduce medical errors, including more systematic checking of routine procedures throughout a patient's treatment process and more reporting provisions for caregivers, regardless of their professional status.

From the perspective of psychological safety, however, the bigger question that remains is why, given Lehman's extreme physical distress, no one deeply and persistently questioned whether something had gone profoundly wrong. Did Lehman and her husband place too much trust in the highly regarded medical institution? Similarly, why did pharmacists not question the extraordinary fourfold dosage of the already high-dose chemotherapy agent? The same can be asked about the nurses. Perhaps their implicit trust in the expertise of the physician-researchers left them incurious. Or they may have been reluctant to speak up and inquire into the rationale for the treatment plan, for fear of being put down by their higher-status colleagues. We don't know whether the nurses and physicians who observed Lehman's symptoms assigned too little significance to the type of side effects the high-dose chemotherapy was supposed to induce. No one involved seemed to accurately assess the gravity of her condition. Ultimately, Betsy Lehman's mother, Mildred K. Lehman, was the one who concisely summed up the problem: “Betsy's life might have been saved if staff had stepped forward to attend to the multiple signs that her treatment was far off course.”20

What is important to take away from this story, and what most hospitals today work hard to avoid, is that a climate in which people err on the side of silence – implicitly favoring self-protection and embarrassment avoidance over the possibility that one's input may be desperately needed in that moment – is a serious risk factor. It is clearly far better for people to ask questions or raise concerns and be wrong than it is for them to hold back, but most people don't consciously recognize that fact. Raising concerns that turn out to be unfounded presents a learning opportunity for the person speaking up and for those listening who thereby glean crucial information about what others understand or don't understand about the situation or the task.

A Culture of Silence

Cassandra, one of the most tragic characters in classical Greek mythology, was given the gift of prophecy along with the curse that she would never be believed. Low levels of psychological safety can create a culture of silence. They can also create a Cassandra culture – an environment in which speaking up is belittled and warnings go unheeded. Especially when speaking up entails drawing attention to unpleasant outcomes, as was the case for Cassandra in her prediction of war, it's easy for others not to listen or believe. A culture of silence is thus not only one that inhibits speaking up but one in which people fail to listen thoughtfully to those who do speak up – especially when they are bringing unpleasant news.

Consider the Challenger shuttle explosion back in 1986. Unlike Rodney Rocha, who stayed silent in a crucial workplace moment, Roger Boisjoly, an engineer at NASA contractor Morton-Thiokol, did speak up. The night before the disastrous launch, Boisjoly raised his concern that unusually cold temperatures might cause the O-rings that sealed the joints between segments of the shuttle's booster rockets to malfunction. His data were incomplete and his argument vague, but the assembled group could have readily resolved the ambiguity with some simple analyses and experiments had they listened intently and respectfully. In short, for voice to be effective, there must be a culture of listening.

Let's take a look at a more recent example of what can happen when the listening culture is weak.

Dismissing Warnings

On March 11, 2011, a magnitude 9.0 earthquake occurred off the northeastern coast of Japan. The quake, later dubbed the “Great East Japan Earthquake,” created tsunami waves up to 45 feet high that struck the Fukushima Daiichi Nuclear Power Plant.21 Waves of mythic proportions leapt easily over the plant's undersized sea walls, flooding the site and completely destroying emergency generators, seawater cooling pumps, and the electric wiring system. Without power to cool down the nuclear reactors, three of the reactors overheated, resulting in multiple explosions that injured workers on the ground. Most alarmingly, nuclear fuel was released into the ocean, and radionuclides were released from the plant into the atmosphere. As a result of the nuclear meltdown, hundreds of thousands of Japanese were forced to flee their homes to avoid radiation exposure. Most are unlikely ever to return home, as it's estimated the cleanup will take between 30 and 40 years.22

Although the earthquake itself, the most powerful ever recorded in Japan's history, wreaked unpreventable catastrophic damage that killed an estimated 15,000 people,23 it's now universally accepted that the corollary disaster at the nuclear power plant was in fact preventable. By the summer of 2012, an independent investigation commission, after conducting 900 hours of hearings, interviews with over a thousand people, 9 plant tours, 19 committee meetings, and 3 town halls, had concluded that “the accident was clearly manmade” and the “direct causes of the accident were all foreseeable.”24 Examining the evidence makes clear that in the years leading up to the disaster at the Daiichi Nuclear Power Plant, more than one Cassandra-like figure spoke up more than once to warn of such an accident. Recommendations were made for reasonable safety measures that would likely have prevented or mitigated the plant's destruction. But each time, the warnings were dismissed or not believed. The question is, why?

In 2006, Katsuhiko Ishibashi, a professor at the Research Center for Urban Safety and Security at Kobe University, was appointed to a Japanese subcommittee tasked with revising the national guidelines on the earthquake-resistance of the country's nuclear power plants. Ishibashi proposed that the group review the standards for surveying active fault lines and criticized the government's record of allowing the construction of power plants, like Fukushima Daiichi, in areas with the potential for such high seismic activity. But the rest of the committee, the majority of which consisted of advisors with ties to the power companies, rejected his proposal and downplayed his concerns.25

The following year, Ishibashi spoke up again, publishing a prescient article titled “Why Worry? Japan's Nuclear Plants at Grave Risk from Quake Damage,” claiming that Japan had been lulled into a false sense of confidence after many years of relatively quiet seismic activity. An expert on seismicity and plate tectonics in and around the Japanese islands, he believed that tectonic plates followed regular schedules and that the area in question was overdue for an earthquake. His warning was explicit: “unless radical steps are taken now to reduce the vulnerability of nuclear power plants to earthquakes, Japan could experience a true nuclear catastrophe in the near future,” including one caused by tsunamis.26 Unfortunately, others dismissed Ishibashi's warnings. For instance, Haruki Madarame, a nuclear regulator and chairman of Japan's Nuclear Safety Commission during the Fukushima disaster, told the Japanese legislature not to worry, as Ishibashi was a “nobody.”27

If Madarame was harsh in his condemnation of Ishibashi as a “nobody,” it's true that Ishibashi, as an academic rather than an industry or government official, was an outsider. He was, perhaps, not as tightly bound to the dominant post–World War II push for Japan to free itself from its historical dependence on energy imports. Since the mid-fifties, the island nation, which has few fossil fuels in the ground, had invested heavily in nuclear energy to diversify its energy supply away from oil and achieve greater energy security.28 For the next 40 years, following the 1970s “oil shocks,” and despite the highly publicized 1979 Three Mile Island and 1986 Chernobyl accidents, Japan had worked doggedly and ferociously to develop its own domestic nuclear power production capacity.29 For instance, the government had provided subsidies and other incentives for rural towns to build plants. It even conducted public relations campaigns to convince citizens that nuclear power was safe.30 Even so, public opinion surrounding nuclear power remained mixed to negative, with anti-nuclear demonstrations and the abandonment of several plans to build more plants.31 Given this political context, Ishibashi's safety concerns may have been perceived as unpatriotic or meddlesome.

A 2000 in-house study by Tokyo Electric Power Company (TEPCO), the country's biggest electric company and the owner of the Fukushima Daiichi plant, did acknowledge the possibility that Japan could experience a tsunami as high as 50 feet. In fact, the report recommended that measures be taken to provide better protection from the risks of flooding. However, nothing was ever done because TEPCO considered such a low-probability event unrealistic.32 Japanese regulators like the Nuclear and Industrial Safety Agency (NISA) also may have hesitated to police the utilities because, by then, nuclear energy had become even more of a strategic priority for Japan, and increased nuclear power generation was required to reach the greenhouse gas emission goals laid out by the Kyoto Protocol. A dozen new plants were slated to be built by 2011.33 Prior to the Fukushima disaster, Japan was generating 30% of its electricity via nuclear reactors, and the government planned to increase that percentage to 40% in the years to come.34

Although safety issues were ostensibly part of the nuclear power expansion plans, retrospective investigations demonstrate that the government and industry culture had not given due credence or consideration to the gravity of existing threats. For example, in a June 2009 meeting held by NISA specifically to discuss the readiness of Fukushima Daiichi to withstand a natural disaster, tsunamis were not even on the agenda. The agency simply did not see them as likely enough in the Fukushima region to warrant consideration. In creating safety guidelines for Fukushima, the panel thus used data from the biggest earthquake on record in the area, a 1938 earthquake that measured only 7.9 in magnitude and caused only a small tsunami. Because the reactors at Fukushima Daiichi were located near the sea, TEPCO constructed a seawall – one just tall enough to stop a tsunami similar to the one in 1938 from hitting it. The panel assumed that the wall was tall enough to stop any future tsunami and thus focused mainly on preparing the plant for earthquakes.

Another Cassandra-like figure spoke up at that June meeting. Dr. Yukinobu Okamura, the director of Japan's Active Fault and Earthquake Research Center, told the panel he disagreed with TEPCO's decision.35 He did not think the 1938 quake was big enough to serve as the basis for the Daiichi guidelines and instead brought up a much earlier example, the Jogan tsunami, which occurred in AD 869 after a massive earthquake. TEPCO representatives, wishing to discredit Okamura, minimize his concern, or both, claimed the Jogan earthquake “did not cause much damage.” Okamura insisted otherwise. The Jogan tsunami had destroyed castles and killed at least a thousand people. Historical writings compared the tsunami's fury to waves that “raged like nightmares and immediately reached the city center.” Okamura told the panel that he was worried that a tsunami like Jogan could overwhelm the Fukushima region and was confused that the panel was not using all of the available data.

Instead of listening and taking Okamura's concerns seriously, as might occur in a culture where psychological safety was high, a TEPCO executive countered that it didn't make sense to base the safety recommendations on a legendary earthquake that hadn't been measured by contemporary tools and techniques. Besides, this meeting was to discuss the risks from earthquakes, not tsunamis. The meeting moved on, with TEPCO executives saying they would try to learn more. At the next meeting, Okamura again tried to convince the panel of the severity of the threat. He described the predictive models his institute had created to show that the current seawall would not be high enough for anything above an 8.4 magnitude earthquake, and the detailed surveys they'd performed on the sand left behind by the Jogan tsunami. In the end, however, the panel did not listen.

Going Along to Get Along

A culture of silence can thus be understood as a culture in which the prevailing winds favor going along rather than offering one's concerns. It is based on the assumption that most people's voices do not offer value and thus will not be valued. Perhaps the most cogent indictment of how a culture of silence perpetuated a set of attitudes that enabled the Daiichi plant disaster was articulated by Kiyoshi Kurokawa, the chairman of the Nuclear Accident Independent Investigation Commission (NAIIC), who wrote at the beginning of the English version of the report that

For all the extensive detail it provides, what this report cannot fully convey – especially to a global audience – is the mindset that supported the negligence behind this disaster. What must be admitted – very painfully – is that this was a disaster “Made in Japan.” Its fundamental causes are to be found in the ingrained conventions of Japanese culture: our reflexive obedience; our reluctance to question authority; our devotion to “sticking with the program”; our groupism; and our insularity.36

Japanese culture does not have a monopoly on any of the ingrained conventions that Kurokawa lists. Each one is endemic to a culture with low levels of psychological safety, where an internal reluctance to speak up or push back combines with a very strong desire to look good to the outside world. Concern with reputation can silence employees' voices internally as well as externally. Resistance to warnings about the safety of the Fukushima Daiichi plant – and what it would take to install better safety measures – was bound up in national aspirations for nuclear energy.

Similar to what we learned about the FRBNY in Chapter 3, where another powerful set of institutional bodies tacitly colluded to silence the few who dared to speak up, push back, or disagree, regulation of Japan's nuclear power industry suffered from regulatory capture. According to Kurokawa, Japan's long-held policy goal to achieve national energy security via nuclear energy became “such a powerful mandate, [that] nuclear power became an unstoppable force, immune to scrutiny by civil society. Its regulation was entrusted to the same government bureaucracy responsible for its promotion.”37 This blinding need and ambition helped create a culture where “it became accepted practice to resist regulatory pressure and cover up small-scale accidents…that led to the disaster at the Fukushima Daiichi Nuclear Plant.”38

In 2013, a Stanford study concluded that a mere $50 million could have financed a wall high enough to prevent the disaster.39 Yet, the case shows how very challenging it can be to be heard – to have voice welcomed, explored, and sometimes acted upon – when the dominant culture does not want to hear the message.

Silence in the Noisy Age of Social Media

On October 15, 2017, actress Alyssa Milano tapped fewer than 140 characters into her personal device: “If you've been sexually harassed or assaulted write ‘me too’ as a reply to this tweet.” Within 24 hours, the hashtag #MeToo had been tweeted nearly half a million times.40 Although the MeToo movement had been created 10 years earlier by Tarana Burke,41 Milano's tweet, posted in the context of a slew of recent and highly publicized sexual harassment accusations leveled against celebrities, ignited a social media activism campaign. The goal: the simple act of speaking up. Women and men from all walks of life who had suffered myriad types of unwanted sexual attention, often egregious and persistent, the majority afraid to tell even their closest relations, were emboldened to tweet, post, and message about their experiences in what became a public forum.

Milano's tweet was hardly the first act of speaking up. Nine months earlier, on February 19, 2017, the social media landscape was emblazoned by a 3000-word blog post written by a young software engineer.42 Susan Fowler, who had recently left her job as a site-reliability engineer at the ride-sharing company Uber, was exercising her right to candor on her personal website. The specificity and honesty with which she described her experience, which she called “a strange, fascinating, and slightly horrifying story,” reveals much about how mechanisms of power and silence can perpetuate a psychologically unsafe culture. Fowler's voice, echoed by some of her colleagues, amplified by social media, and made louder still by mainstream press, tells us how an unsafe culture can ultimately become unsustainable.

On her first day at the company, Fowler's manager sent her a series of inappropriate messages over the company's chat system. The manager told her “he was looking for women to have sex with.” Fowler said, “It was clear that he was trying to get me to have sex with him…” She took screenshots of the messages and reported the manager to HR. But things didn't go as she expected. Both HR and upper management informed Fowler that it was “this man's first offense, and that they wouldn't feel comfortable giving him anything other than a warning and a stern talking-to” because he “was a high performer.” Fowler was given the choice of either finding another team to work on or remaining on her present team with the understanding that her manager would “most likely give [her] a poor performance review when review time came around, and there was nothing they could do about that.” Fowler tried to protest this “choice,” but got nowhere, and ultimately ended up switching teams.

Over the next few months, Fowler met other women engineers who had similar experiences of sexual harassment at Uber. They had also reported these to HR and gotten nowhere. Some of the women even reported having similar interactions with the same manager as Fowler. All were told it was his first offense. In each case, nothing was done. Fowler and her colleagues, feeling unheard, fell silent – for a while.

Ironically, as reported in her blog post, Fowler had been initially excited about joining Uber back in November 2015, citing that she had “the rare opportunity to choose whichever team was working on something that I wanted to be part of.”

Promoted and Protected

Uber Technologies, Inc., founded in 2009 by serial entrepreneurs and friends Garrett Camp and Travis Kalanick, had launched in San Francisco in 2011 with funding from prominent Silicon Valley venture capital firms.43 As Uber grew, so did its reputation as an aggressive, fast-moving, in-your-face company, not inconsistent with its overt intention to disrupt the long-established taxicab industry, replacing it with a ride-sharing economy.44 Top employees were “promoted and protected” – as long as they could hit or exceed their numbers, they were rewarded.45 After Fowler's post broke, current and former employees came forward to describe Uber's culture as “unrestrained,” a “Hobbesian environment…in which workers are sometimes pitted against one another and where a blind eye is turned to infractions from top performers.”46 Fowler's manager had merely been a case in point.

Fowler, like other new Uber hires, had been advised of the company's core values.47 Several of those values were likely to have contributed to a psychologically unsafe environment. For example, “super-pumpedness,” especially central to the company, involved a can-do attitude and doing whatever it took to move the company forward. This often meant working long hours, not in itself a hallmark of a psychologically unsafe environment; Fowler seems to have relished the intellectual challenges and makes a point to say that she is “proud” of the engineering work she and her team did. But super-pumpedness, with its allusions to the sports arena and male hormones, seems to have been a harbinger of the bad times to come. Another core value was to “make bold bets,” which was interpreted as asking for forgiveness rather than permission. In other words, it was better to cross a line, be found out wrong, and ask for forgiveness than it was to ask permission to transgress in the first place. Another value, “meritocracy and toe-stepping,” meant that employees were incented to work autonomously, rather than in teams, and cause pain to others to get things done and move forward, even if it meant damaging some relationships along the way.48

You may ask, so what? The same company that silenced, hurt, and eventually lost hardworking and talented engineers such as Susan Fowler was still tremendously successful in getting millions of people to speak with a new vocabulary word – “to uber.” The company's growth was exponential, and as of early 2018 it was valued at north of $70 billion.49 Maybe a bit of super-pumpedness and a little toe-stepping is just what it takes today to get ahead?

One problem is that social media enables a new kind of speaking up that makes it that much harder for companies to actively and shamelessly advocate for a psychologically unsafe culture. Fowler's exposé sent reporters running to investigate. The New York Times interviewed over 30 current and former Uber employees and reported on numerous incidents of harassment, some as egregious as an Uber manager who “groped female co-workers' breasts at a company retreat in Las Vegas” and “a director [who] shouted a homophobic slur at a subordinate during a heated confrontation in a meeting.”50 According to Fowler, when she joined Uber, the engineering site reliability organization was over 25% women, but before she left it had dropped to 6%. In the aftermath and reckoning that followed Fowler's blog, multiple lawsuits ensued, massive numbers of employees at all levels were either fired or left of their own accord, and the company's valuation and reputation fell far and fast.51 A second problem is that people suffer unnecessary harm.

On June 21, 2017, Travis Kalanick stepped down as Uber's CEO after five of its major shareholders demanded his resignation.52 Although Fowler petitioned the United States Supreme Court to consider her experience at Uber in its decision on whether employees can forfeit rights to collective litigation in their employment contracts, the proposal was later voted down.53 That year, she was featured on the cover of TIME magazine as one of the five “Silence Breakers” collectively named its Person of the Year for speaking out about sexual harassment in 2017.54 She was also named The Financial Times' “Person of the Year 2017,”55 included in Vanity Fair's “New Establishment” list,56 and ranked No. 2 on Recode's Top 100, behind only Jeff Bezos.57

Susan Fowler's experience at Uber is just one example of how social media has enabled the speaking of truth to power in the workplace. In 2017, thousands of women spoke up to say, “Me Too,” to workplace harassment, and hundreds of men in high-profile positions suffered the consequences of behavior that had, in many cases, worked for a while – decades, or even entire careers. Communication technology gave social media movements such as MeToo and Black Lives Matter the power to ignite and move rapidly into mainstream media, public opinion, and, in some cases, the courts. Such movements raise the sense of urgency to create and maintain organizations where psychological safety supports people in doing their best work.

When Uber's new CEO, Dara Khosrowshahi, first came on board in August 2017, one of his priorities was to meet with women engineers. Alert to the damage done to the company's culture, he began by laying the groundwork for a psychologically safe workplace. As Jessica Bryndza, Uber's Global Director of People Experience, commented, “He [Khosrowshahi] didn't come in guns blazing. He came in listening.”58

The operative word here is “listening.” In Chapters 5 and 6, you will read about eight flourishing organizations where leaders have created the conditions to make listening and speaking up the norm, not the exception. In these fearless workplaces, it's far less likely that employees will refrain from sharing valuable information, insights, or questions and far more likely that leaders will listen to rather than dismiss bad news or early warnings.

Endnotes

  • Schafer, U., Hagen, J., & Burger, C. Mr. KLM (A): Jacob Veldhuyzen. Case Study. ESMT No. 411-0117. Berlin, Germany: European School of Management and Technology, 2011.
  • Schafer, U., Hagen, J., & Burger, C. Mr. KLM (B): Captain van Zanten. Case Study. ESMT No. 411-0118. Berlin, Germany: European School of Management and Technology, 2011.
  • Schafer, U., Hagen, J., & Burger, C. Mr. KLM (C): Jaap. Case Study. ESMT No. 411-0119. Berlin, Germany: European School of Management and Technology, 2011.
  • Hagen, J.U. Confronting Mistakes: Lessons From The Aviation Industry When Dealing with Error. United Kingdom: Palgrave Macmillan UK, 2013. Print.