Chapter 7 Team Decision Making

Pitfalls and Solutions

In the years leading up to JPMorgan Chase’s $2B trading loss, risk managers and senior investment bankers raised concerns that the bank was making increasingly large investments involving complex trades. However, these concerns were dismissed. CEO Jamie Dimon had approved the risky trades, which contributed to an atmosphere of disregard. Even though Dimon was renowned for his ability to sense risk, he failed to heed the alarm bells that sounded in April 2012. Instead, Dimon was convinced by Ina Drew, who led the investment office, that the turbulence was manageable. Unfortunately, no one questioned Drew’s conclusion; moreover, the operating committee was not even told the scope of the problem until days before Dimon went public with the news. The losses on the botched credit bet climbed to more than $9B. Questions were immediately raised about traders’ intent to defraud. Ina Drew resigned and volunteered to give back the $14M she made in the previous year.1

The debacle at JPMorgan Chase was particularly stunning because it came on the heels of a series of other trading scandals that tainted the reputation of the financial houses and business in general. Whenever teams make decisions, they rely on information and judgment. Sometimes the information is insufficient and sometimes it is erroneous. When the consequences of decision making are disastrous, we try to find the root of the problem, which may be due to a faulty process or erroneous “facts.” As we will see in this chapter, teams can follow a vigilant process and still reach bad decisions; in some cases, teams that seem to do all the wrong things still manage to succeed.

Decision Making in Teams

Decision making is an integrated sequence of activities that includes gathering, interpreting, and exchanging information; creating and identifying alternative courses of action; choosing among alternatives by integrating differing perspectives and opinions of team members; and implementing a choice and monitoring its consequences.2 For a schematic diagram of an idealized set of activities involved in a decision-making process, see Exhibit 7-1.

We begin by discussing how a variety of well-documented decision-making biases affect individual decision making and how these biases are ameliorated or exacerbated in groups. We identify five decision-making pitfalls that teams often encounter and, for each, provide preventive measures. We focus first on groupthink, the tendency to conform to the consensus viewpoint in group decision making, and then discuss escalation of commitment, the Abilene paradox, group polarization, and unethical decision making.

Individual Decision-Making Biases

A variety of decision-making biases plague individual decision making. (For a comprehensive review, see Bazerman and Moore’s Judgment in Managerial Decision Making.)3 In this section, we briefly review four of the most well-documented individual decision-making biases and discuss their implications for teams.

Exhibit 7-1 Rational Model of Group Decision Making

Source: Based on Forsyth, D. (1990). Group Dynamics (2nd ed., p. 286). Pacific Grove, CA: Brooks/Cole; Guzzo, R. A., Salas, E., & Associates. (1995). Team effectiveness and decision making in organizations. San Francisco, CA: Jossey-Bass.

Framing Bias

Consider the following problem:4

Imagine that the United States is preparing for the outbreak of an unusual disease, which is expected to kill 600 people in the United States alone. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

  • Plan A: If program A is adopted, 200 people will be saved.

  • Plan B: If program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no one will be saved.

If forced to choose, which plan would you select?

When given this choice, most individual decision makers choose program A (72 percent). Now, consider the following options:

  • Plan C: If program C is adopted, 400 people will die.

  • Plan D: If program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.

When given the identical problem with the same options worded in terms of “deaths,” the majority of respondents choose the risky course of action (Plan D, 78 percent).5 This inconsistency is a preference reversal and reveals the framing bias.
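Because framing effects hinge on arithmetic equivalence, it is worth verifying that the two versions really do describe the same gamble. The following worked calculation (added here for illustration; it is not part of the original problem statement) computes the expected number of lives saved under each program:

\[
E[\text{A}] = 200, \qquad
E[\text{B}] = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200,
\]
\[
E[\text{C}] = 600 - 400 = 200, \qquad
E[\text{D}] = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200.
\]

All four programs have an expected value of 200 lives saved (and 400 lost); only the wording in terms of gains (“saved”) versus losses (“die”) differs, yet preferences reverse.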

Almost any decision can be reframed as a gain or a loss relative to something.6 This is because decision makers’ reference points for defining gain and loss are often arbitrary. Several investigations have compared individuals’ versus groups’ susceptibility to the framing effect. The results are mixed: Sometimes groups are less susceptible to framing, but in some investigations, they are just as fallible as individuals.7

Overconfidence

Many organizational situations require decision makers to assess the likelihood that judgments about someone or something will be correct. The overconfidence bias is the tendency for people to place unwarranted confidence in their judgments. Ninety-four percent of college professors believe they are above-average teachers; 90 percent of drivers believe they are above average; and when computer executives were given quizzes about their industry, they estimated they got 5 percent of the answers wrong—in fact, they had gotten 80 percent wrong.8 Consider the questions in Exhibit 7-2. Even though the instructions in Exhibit 7-2 ask decision makers to choose an upper and lower bound such that they are 98 percent sure that the actual answer falls within their bounds, most decision makers find that only about four of the 10 answers fall within their lower and upper confidence bounds.

Exhibit 7-2 Overconfidence in Judgment

Source: U.S. Census Bureau. (2011). Annual Estimates of the Resident Population by Sex and Age for States and for Puerto Rico: April 1, 2010 to July 1, 2011. census.gov; Global 500. (2012, July 23). Fortune. fortune.com; Federal spending in pictures. (2012, August). The Heritage Foundation. heritage.org/federalbudget.

Instructions: Listed below are 10 questions. Do not look up any information on these questions. For each, write down your best estimate of the answer. Next, put a lower and an upper bound around your estimate, such that you are 98 percent confident that your range surrounds the actual answer.

Question Your best estimate Lower bound Upper bound
1. The median age for the total U.S. population in 2011. _______ _______ _______
2. The median U.S. household income in 2011. _______ _______ _______
3. Percentage of the U.S. population under 5 years old in 2011. _______ _______ _______
4. U.S. Federal Government spending (Fiscal Year 2011 in $ per capita on health care). _______ _______ _______
5. Total 2012 federal spending per household. _______ _______ _______
6. Annual company cost of lost productivity per employee due to reading/deleting spam e-mails at work in 2011. _______ _______ _______
7. Number of all endangered and/or threatened animals and plants in the United States in 2011. _______ _______ _______
8. Number of Global 500 Companies in the United States in 2011. _______ _______ _______
9. Walmart 2011 revenue, one of the highest among Global 500 companies in the United States. _______ _______ _______
10. Highest paid U.S. CEO in 2011 (including salary, option awards, and restricted stock), John H. Hammergren of McKesson. _______ _______ _______

Answers:

1. 37.3; 2. $50,964; 3. 6.5%; 4. $2,693.60; 5. $30,015; 6. $1,934; 7. 2,089; 8. 132; 9. $446,950,000; 10. $131,000,000

In a study of 100 people, 42 percent fell outside the 90 percent confidence range. In a team, overconfidence leads people to myopically focus on their teammates’ strengths, as opposed to their weaknesses, and to neglect the strengths and weaknesses of members of competitor teams.9
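One way to make the overconfidence in Exhibit 7-2 concrete is to score the quiz mechanically: count how many true answers fall inside a respondent’s stated 98 percent bounds. The sketch below is a minimal Python illustration; the sample bounds are hypothetical, and the true answers are those printed in the exhibit.

```python
# Score a respondent's calibration on the Exhibit 7-2 quiz.
# A well-calibrated 98% interval should contain the truth ~9.8 times in 10;
# most respondents manage only about 4 hits.
true_answers = [37.3, 50_964, 6.5, 2_693.60, 30_015,
                1_934, 2_089, 132, 446_950_000, 131_000_000]

# Hypothetical (lower, upper) bounds from one respondent, one per question.
bounds = [(30, 40), (40_000, 45_000), (2, 5), (1_000, 2_000), (10_000, 20_000),
          (1_500, 2_500), (1_000, 3_000), (100, 200), (1e8, 3e8), (1e7, 5e7)]

hits = sum(lo <= ans <= hi for ans, (lo, hi) in zip(true_answers, bounds))
print(f"{hits}/10 answers inside the stated 98% bounds")  # here: 4/10
```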

Confirmation Bias

The confirmation bias is the tendency for people to consider evidence that supports their position, hypothesis, or desires and to disregard or discount (equally valid) evidence that refutes their beliefs. When people are ego invested in a project, the confirmation bias is stronger. Even upon receipt of unsupportive data, people who have fallen prey to the confirmation bias will maintain, and in some cases increase, their resolve. Further, both decided and undecided individuals show a strong tendency to selectively expose themselves to confirmatory information.10

Exhibit 7-3 Card Test

Source: Based on aOaksford, M., & Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review, 101, 608–631.

bWason, P. C., & Johnson-Laird, P. N. (1972). Psychology of reasoning: Structure and content. Cambridge, MA: Harvard University Press.

Imagine that the following four cards are placed in front of you and are printed with the following symbols on one side:

Card 1 Card 2 Card 3 Card 4
E K 4 7

Now, imagine you are told that a letter appears on one side of each card and a number on the other. Your task is to judge the validity of the following rule, which refers only to these four cards: “If a card has a vowel on one side, then it has an even number on the other side.” Your task is to turn over only those card(s) that have to be turned over for the correctness of the rule to be judged. Which card(s) do you want to turn over? (Stop here and decide which card(s) to turn over before reading on.)

Averaging over a large number of investigations, 89 percent of people select E, which is a logically correct choice because an odd number on the other side would disconfirm the rule.a However, 62 percent also choose to turn over the 4, which is not logically informative because neither a vowel nor a consonant on the other side would falsify the rule. Only 25 percent of people elect to turn over the 7, which is a logically informative choice because a vowel behind the 7 would falsify the rule. Only 16 percent elect to turn over K, which would not be an informative choice.

Thus, people display two types of logical errors in the task. First, they often turn over the 4, an example of the confirmation bias. However, even more striking is the failure to take the step of attempting to disconfirm what they believe is true—in other words, turning over the 7.b

Tunnel vision can often augment the confirmation bias. For example, in studies of new product development, prototypes that have become focal tend to be judged overly favorably and are chosen for launch with unwarranted enthusiasm, even among experienced executives.11 (As a quick demonstration of the confirmation bias, take the test in Exhibit 7-3.)
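The logic behind Exhibit 7-3 can also be checked mechanically: a card is worth turning over only if some hidden face could falsify the rule “if vowel, then even.” Below is a minimal brute-force sketch; the encoding is our own, added for illustration.

```python
# Wason selection task: which cards can falsify "if vowel, then even"?
# The rule is broken only by a card pairing a vowel with an odd number.
VOWELS = set("AEIOU")

def worth_turning(visible: str) -> bool:
    """True if some hidden face of this card could falsify the rule."""
    if visible.isalpha():
        # Hidden face is a number; only a vowel card can hide an odd number
        # that breaks the rule. A consonant card can never break it.
        return visible in VOWELS
    # Visible face is a number; hidden face is a letter. An even card can
    # never break the rule (the rule says nothing about even cards), but an
    # odd card hiding a vowel would break it.
    return int(visible) % 2 == 1

for card in ["E", "K", "4", "7"]:
    print(card, "-> turn over" if worth_turning(card) else "-> uninformative")
# Only E and 7 are logically informative, matching the exhibit's analysis.
```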

Decision Fatigue

Making decisions, especially complex organizational decisions, requires a great deal of mental resources. Not surprisingly, the mere act of making decisions produces fatigue. However, unlike with physical exertion, most decision makers are not consciously aware that they are depleted after making several decisions. Consequently, fatigued decision makers become organizationally dangerous: they spend more money, refuse to make trade-offs, make harsher decisions, or avoid deciding altogether. For example, one investigation examined how judges made over 1,000 parole decisions and found that, after controlling for ethnic background, crime, length of current sentence, and so on, parole decisions were related to the time of day that the judge heard the case. Whereas 70 percent of cases heard in the morning, before the judges were mentally fatigued, were granted parole, only 10 percent were granted parole later in the day.12 The interpretation was that judges are more fatigued later in the day and make harsher decisions.

Individual versus Group Decision Making in Demonstrable Tasks

A demonstrable task is a task that has an obvious, correct answer. Many management and executive education courses challenge businesspeople with simulations in which they find themselves stranded in inhospitable environments—arctic tundra, scorched desert, treacherous jungles—and together they must plan and enact strategies to ensure their survival.13 Some companies actually place people in such situations (see Exhibit 7-4). Popular television shows are based on “survival,” and some teams excel at adventure sports—but in the management laboratory and classroom, teammates are simply asked to rank (in order of importance) the usefulness of several objects (e.g., flashlight, canteen of water, and knives). The team’s rank order can be benchmarked against that of an expert and against the individual rankings made by each member of the team.

When this is done, an interesting phenomenon emerges: The performance of the team is invariably better than the simple arithmetic average of its members. Recall from Chapter 2 the team performance equation, wherein actual productivity (AP) of a team = potential productivity (PP) + synergy (S) – performance threats (T). In this case, the arithmetic average of the members’ individual rankings represents the potential productivity of the group. If the actual productivity of the team exceeds this, it suggests that the group has experienced a synergistic process (i.e., working together has allowed the group to outperform a simple aggregation of its members’ decisions). If the actual productivity of the team is worse, it suggests that the group process is flawed. However, the team leader is justified in asking whether the team’s performance exceeds that of the best member of the team. Perhaps team decision making is best served by putting the trust of the group into one knowledgeable and competent group member. One investigation studied individual versus group decision making in 222 project teams, ranging in size from three to eight members.14 Groups outperformed their most proficient member 97 percent of the time.
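To see how these benchmarks relate, consider a minimal sketch of the Chapter 2 equation AP = PP + S – T, using invented accuracy scores (higher = closer to the expert’s ranking); the names and numbers are hypothetical.

```python
# Benchmark a team's survival-ranking decision against its members.
member_scores = {"Ana": 62, "Ben": 70, "Chloe": 55, "Dev": 65}  # hypothetical
team_score = 78  # the group's joint ranking, scored the same way

potential = sum(member_scores.values()) / len(member_scores)  # PP: average member
best = max(member_scores.values())                            # best-member benchmark

# AP = PP + S - T, so the net process effect (S - T) is AP - PP.
net_process_effect = team_score - potential
print(f"PP (average member) = {potential:.1f}")                    # 63.0
print(f"Net synergy minus threats = {net_process_effect:+.1f}")    # +15.0
print(f"Team beat its best member ({best})? {team_score > best}")  # True
```

Here the team shows positive synergy (AP > PP) and even beats its best member, the pattern reported for most of the 222 project teams.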

Groups perform better than independent individuals on a wide range of demonstrable problems. For example, groups of three, four, or five people perform better than the best individual on letters-to-numbers problems (e.g., “A + D = ?”),15 and groups outperform individuals on estimation problems.16

Exhibit 7-4 Decision Making with Real Stakes

Source: aFitzgerald, G. (2012, February 29). Massive security drill for London Olympics. Sky News Online. news.sky.com.

bBiron, L. (2012, June 12). Immersive training hasn’t reached the Holodeck. Defense News. defensenews.com.

cFlegenheimer, M. (2011, September 5). Using pretend patients to train for real crises. New York Times. nytimes.com.

  • Passengers stumbled out of the London tube stop in terror as emergency crews swarmed through the chaos to tend to the casualties. The scene looked all too real to office workers and pedestrians in the heart of the city, but fortunately the horrifying scene was a drill. The 2012 Summer Olympic Games in London featured the tightest security in the history of the games, and in the weeks leading up to the event more than 2,500 police, fire, and ambulance staff, and security forces participated in emergency drills based on every possible scenario. Deadly bombing attacks in the Indian city of Mumbai in 2006 and 2011 were a blueprint for frontline security training for the always politically charged Olympic gathering. The United Kingdom was on a “severe” threat level in the weeks before and during the games, indicating an attack was highly likely. National Olympic Security Coordinator Chris Allison said, “Testing and exercising is vital to getting our safety and security operations for the Games right. We need to be confident that we have the right people in the right places, that we understand how others operate and that we are talking to each other at the right levels and in the right way.”a

  • It has been described as the best video game that the public cannot own. The Dismounted Soldier Training System (DSTS), a virtual game used by the U.S. Army, is turning heads among military personnel. The game allows a squad of nine players to practice real-life situations, from room clearing to bomb disposal to combat operations. Soldiers wear a head-mounted display, hold mock weapons, and rely on a sensor system. They can explore terrain, interact with civilians and enemy combatants, coordinate tactics together, and train much as they would fight in real battle situations. Sensors on the body track whether a soldier is standing, kneeling, or prone and reflect this on the soldier’s avatar in the DSTS virtual world. “The feedback [from soldiers] has been, ‘This is the kind of training that we needed.’”b

  • At the New York Simulation Center for the Health Sciences in Manhattan, medical students train for every type of real-world medical condition. Created in large part as a response to the September 11 terrorist attacks, the facility is the most advanced of its kind in the city: a high-tech site where emergency personnel can train for just about any disaster—terrorist attacks, accidents, natural disasters—that can be replicated by a team of some of the most sophisticated mannequins to be found anywhere. Hospital workers, emergency responders, medical school students, and city agencies like the Fire Department make use of the center.c

Groups perform at the level of the second-best individual group member on world knowledge problems, such as vocabulary, analogies, and ranking items for usefulness. Groups that use a structured approach for making decisions perform better than those without structure.17 People who have experience solving demonstrable problems in a group are able to transfer their performance to individual tasks,18 and people who anticipate group discussion are more accurate.19 Groups outperform individuals due to a process known as group-to-individual transfer, in which group members become more accurate during the group interaction.20

However, groups are much more overconfident than individuals, regardless of their actual accuracy. For example, in one investigation, groups were asked to make stock price predictions. The actual accuracy of the groups was 47 percent, but their confidence level was 65 percent.21 Three days before the disintegration of the space shuttle Columbia in 2003, NASA officials met to discuss ways to monitor and minimize the amount of falling debris during liftoff (one probable cause of Columbia’s breakup), but concluded that repair work in flight—a possible solution—would be too costly, creating “more damage than what [they] were trying to repair.”22 Groups are also more likely to exacerbate some of the shortcomings displayed by individuals; namely, groups are more likely than individuals to (faultily) rely on case-specific information and ignore base-rate information.23
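Base-rate neglect is easiest to see with Bayes’ rule. The numbers below are a standard illustration, not data from this chapter: a diagnostic signal that is 90 percent accurate applied to a condition with a 1 percent base rate.

```python
# Why ignoring base rates misleads: compute P(condition | positive signal).
base_rate = 0.01       # P(condition): the base rate groups tend to ignore
sensitivity = 0.90     # P(positive | condition)
false_positive = 0.10  # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive
print(f"P(condition | positive) = {posterior:.2f}")  # ~0.08, far below 0.90
```

Attending only to the case-specific signal suggests 90 percent certainty; factoring in the base rate drops the answer to roughly 8 percent.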

Group Decision Rules

Given the pervasiveness of group decision making, teams need a method by which to combine individuals’ decisions to yield a group decision. There are several kinds of group decision rules.24 The objective of a decision rule may differ, such as finding the alternative the greatest number of team members prefer, the alternative the fewest members object to, or the choice that maximizes team welfare. In an extensive test of several types of decision rules, majority and plurality rules did quite well, performing at levels comparable to much more resource-demanding rules, such as the individual-judgment-averaging rule (see Exhibit 7-5).25 Thus, groups are well served by using majority or plurality voting in truth-seeking group decisions. Yet they often avoid majority rule when given a choice. For example, groups tend to choose the alternative that is acceptable to all group members, even when a majority of members prefer a different alternative.26

The most common decision rule is majority rule. Teams may often use majority rule as a decision heuristic because of its ease and familiarity.27 However, despite its democratic appeal, majority rule presents several problems in the attainment of consensus.

Exhibit 7-5 Group Decision Rules

Source: Hastie, R., & Kameda, T. (2005). The robust beauty of majority rules in group decisions. Psychological Review, 112(2), 494–508.

Group decision rule: Description (Individual cognitive effort / Social [group] effort)

Average winner: Each member estimates the value of each alternative, and the group computes each alternative’s mean estimated value and chooses the alternative with the highest mean. (High / High)

Median winner: Each member estimates the value of each alternative, and the group computes each alternative’s median estimated value and chooses the alternative with the highest median. (High / High)

Davis’ weighted average winner: Each member estimates the value of each alternative, and the group assigns a weighted average value to each alternative and chooses the alternative with the highest weighted average. (High / High)

Borda rank winner: Each member ranks all alternatives by estimated value, and the group assigns a Borda rank score to each alternative (the sum of individual ranks for that alternative) and chooses the alternative with the lowest (most favorable) score. (High / High)

Condorcet majority rule: All pairwise elections are held (e.g., 45 elections for 10 alternatives), and the alternative that wins all of its elections is the Condorcet winner (it is possible for there to be no unique, overall winner). (Low / High)

Majority/plurality rule: Each member assigns one vote to the alternative with the highest estimated value, and the alternative receiving the most votes is chosen. (Low / Low)

Best member rule: The member who has achieved the highest individual accuracy in estimating alternative values is selected, and this member’s first choice becomes the group’s choice. (High / Medium)

Random member rule: On each trial, one member is selected at random, and this member’s first choice becomes the group’s choice. (Low / Low)

Group satisficing rule: On each trial, alternatives are considered one at a time in a random order; the first alternative for which all members’ value estimates exceed aspiration thresholds is chosen by the group. (Medium / Medium)

First, majority rule ignores members’ strength of preference for alternatives. The vote of a person who feels strongly about an issue counts only as much as the vote of a person who is virtually indifferent. Consequently, majority rule may not promote creative trade-offs among issues.28 Majority rule may also encourage the formation of coalitions, or subgroups within a team. Although unanimous decision making is time consuming, it encourages team members to consider creative alternatives to satisfy the interests of all members. Teams required to reach consensus are more accurate than teams that are not.29
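Several of the Exhibit 7-5 rules are simple enough to state as procedures. The sketch below implements plurality voting and the Borda rank rule from the exhibit; the ballots are invented for illustration.

```python
from collections import Counter

# Hypothetical ballots: each member ranks alternatives best-to-worst.
ballots = [
    ["A", "B", "C"], ["A", "B", "C"], ["A", "B", "C"],
    ["B", "C", "A"], ["B", "C", "A"],
    ["C", "B", "A"], ["C", "B", "A"],
]

# Majority/plurality rule: one vote per member for the top choice.
plurality_winner = Counter(b[0] for b in ballots).most_common(1)[0][0]

# Borda rank rule: sum each alternative's rank positions (1 = best);
# the lowest total is the most favorable score.
borda_scores = Counter()
for ballot in ballots:
    for rank, alt in enumerate(ballot, start=1):
        borda_scores[alt] += rank
borda_winner = min(borda_scores, key=borda_scores.get)

print(f"Plurality winner: {plurality_winner}")  # A (3 first-place votes)
print(f"Borda winner: {borda_winner}")          # B (lowest rank total: 12)
```

Note that the two rules can disagree on the very same ballots, which is one reason the choice of decision rule matters.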

Decision-Making Pitfall 1: Groupthink

Groupthink occurs when team members place consensus above all other priorities, including using good judgment, when the consensus reflects poor judgment or improper or immoral actions.

Groupthink involves a deterioration of mental efficiency, reality testing, and moral judgment as a result of group pressures toward conformity of opinion. For examples of groupthink fiascos in the political and corporate world, see Exhibit 7-6. The desire to agree can become so dominant that it can override the realistic appraisal of alternative courses of action.30 The causes of groupthink may stem from group pressures to conform or a sincere desire to incorporate and reflect the views of all team members. Such pressure may also come from management if the directive is to reach a decision that all can agree to.31 Conformity pressures can lead decision makers to censor their misgivings, ignore outside information, feel too confident, and adopt an attitude of invulnerability.

Exhibit 7-6 Instances of Groupthink in Politics and the Corporate World

Source: aJanis, I. L., & Mann, L. (1977). Decision making: A psychological analysis of conflict, choice, and commitment (p. 130). New York: The Free Press, a Division of Simon & Schuster.

bByrne, J. A. (2002a, July 29). No excuses for Enron’s board. Businessweek, 3793, 50–51.

cByrne, J. A. (2002b, August 12). Fall from grace. Businessweek, 3795, 51–56.

dRaven, B. H., & Rubin, J. Z. (1976). Social psychology: People in groups. New York: John Wiley & Sons.

eHuseman, R. C., & Driver, R. W. (1979). Groupthink: Implications for small group decision making in business. In R. Huseman & A. Carroll (Eds.), Readings in Organizational Behavior (pp. 100–110). Boston: Allyn & Bacon.

fWolinsky, H. (1998, November 11). Report calls AMA a house divided. Chicago Sun-Times, p. 72.

gHutton aftermath: A violation of business ethics or downright fraud? (1985, July). ABA Banking Journal; Computers and operations section, p. 30.

hSims, R. R. (1992). Linking groupthink to unethical behavior in organizations. Journal of Business Ethics, 11(9), 651–662.

EXAMPLES FROM POLITICS

  • The U.S. invasion of Iraq in 2003 on the belief that Iraq possessed weapons of mass destruction (WMDs).a

  • Neville Chamberlain’s inner circle, whose members supported the policy of appeasement of Hitler during 1937 and 1938, despite repeated warnings and events that indicated it would have adverse consequences.a

  • President Truman’s advisory group, whose members supported the decision to escalate the war in North Korea, despite firm warnings by the Chinese Communist government that U.S. entry into North Korea would be met with armed resistance from the Chinese.a

  • President Kennedy’s inner circle, whose members supported the decision to launch the Bay of Pigs invasion of Cuba, despite the availability of information indicating that it would be an unsuccessful venture and would damage U.S. relations with other countries.a

  • President Johnson’s close advisors, who supported the decision to escalate the war in Vietnam, despite intelligence reports and information indicating that this course of action would not defeat the Viet Cong or the North Vietnamese, and would generate unfavorable political consequences within the United States.a

  • The decision of the Reagan administration to exchange arms for hostages with Iran and to continue commitment to the Nicaraguan Contras in the face of several congressional amendments limiting or banning aid.

EXAMPLES FROM THE CORPORATE WORLD

  • Enron’s board of directors was well informed about (and could therefore have prevented) the risky accounting practices, conflicts of interest, and hiding of debt that led to the company’s downfall; likewise, Arthur Andersen (Enron’s accounting firm) did nothing to halt the company’s high-risk practices.b, c

  • Gruenenthal Chemie’s decision to market the drug thalidomide.d

  • The price-fixing conspiracy involving the electrical manufacturing industry during the 1950s.

  • The decision by Ford Motor Company to produce the Edsel.e

  • The American Medical Association’s (AMA’s) decision to allow Sunbeam to use the AMA name as a product endorsement.f

  • The selling of millions of jars of “phony” apple juice by Beech-Nut, the third largest baby food producer in the United States.

  • The involvement of E. F. Hutton in “check kiting,” wherein a money manager at a Hutton branch office would write a check on an account in Bank A for more money than Hutton had in the account. Because of the time lag in the check-collection system, these overdrafts sometimes went undetected, and Hutton could deposit funds to cover the overdraft on the following day. The deposited money would start earning interest immediately. The scheme allowed Hutton to earn a day’s interest on Bank A’s account without having to pay anything for it—resulting in $250 million in free loans every day.g

  • The illegal purchases by Salomon Brothers at U.S. Treasury auctions in the early 1990s.h

Symptoms of groupthink cannot be easily assessed by outside observers. Three key symptoms of groupthink take root and blossom in groups that succumb to pressures of reaching unanimity:

  • Overestimation of the group: Members of the group regard themselves as invulnerable and, at the same time, morally correct. This can lead decision makers to believe they are exempt from standards.

  • Closed-mindedness: Members of the group engage in collective rationalization, often accompanied by stereotyping out-group members.

  • Pressures toward uniformity: There is a strong intolerance in a groupthink situation for diversity of opinion. Dissenters are subject to enormous social pressure. This often leads group members to suppress their reservations.

Deficits arising from groupthink can lead to many shortcomings in the decision-making process. Consider, for example, the following lapses that often accompany groupthink:

  • Incomplete survey of alternatives

  • Incomplete survey of objectives

  • Failure to reexamine alternatives

  • Failure to examine risks of the preferred choice

  • Selective bias in processing information at hand

  • Poor information search

  • Failure to create contingency plans

Each of these behaviors thwarts rational decision making.

Learning from History

Consider two decisions made by the same U.S. presidential cabinet—the Kennedy administration. The Kennedy cabinet was responsible for the Bay of Pigs operation in 1961 and the response to the Cuban Missile Crisis in 1962. The Bay of Pigs was a military operation concocted by the United States in an attempt to overthrow Fidel Castro, the leader of Cuba. Often seen as one of the worst foreign policy mistakes in U.S. history, the operation was a disaster of epic proportions, resulting in the loss of lives and the disruption of foreign policy. It is also puzzling because the invasion, in retrospect, seemed to have been poorly planned and poorly implemented, yet it was led by people whose individual talents seemed to make them eminently qualified to carry out an operation of that sort. In contrast, Kennedy’s response to the Cuban Missile Crisis was regarded as a great international policy success. These examples, from the same organizational context and team, make an important point: Even smart and highly motivated people can make disastrous decisions under certain conditions. Kennedy’s cabinet fell prey to groupthink in the Bay of Pigs decision, but not in the Cuban Missile Crisis.

A number of detailed historical analyses have been performed comparing these two historical examples, as well as several others.32 Some sharp differences distinguish groupthink from vigilant decision making.

Exhibit 7-7 summarizes three kinds of critical evidence: (1) factors that may lead to groupthink; (2) factors that may promote sound decision making; and (3) factors that do not seem to induce groupthink. Leader behavior that is associated with too much concern for political ramifications, or the analysis of alternatives in terms of their political repercussions, is a key determinant of groupthink. Similarly, when groups are overly concerned with their political image, they may not make sound decisions.

Exhibit 7-7 Precipitating and Preventative Conditions for the Development of Groupthink

Source: Tetlock, P. E., Peterson, R., McGuire, C., Chang, S., & Feld, P. (1992). Assessing political group dynamics: The test of the groupthink model. Journal of Personality and Social Psychology, 63, 403–425.

Conditions Leader Behavior and Cognition Team Behavior and Cognition
Precipitating conditions (likely to lead to groupthink) • Narrow, defective appraisal of options • Analysis of options in terms of political repercussions • Concern about image and reputation • Loss-avoidance strategy • Rigidity • Conformity • View roles in political terms (protecting political capital and status) • Large team size • Team members feel sense of social identification with teama • Group interaction and discussion must produce or reveal an emerging or dominant group norma • Low self-efficacy, in which group members lack confidence in their ability to reach a satisfactory resolutiona • Perceived threat to social identity
Preventative conditions (likely to engender effective decision making) • Being explicit and direct about policy preferences allows the team to know immediately where the leader stands • Task orientation • Intellectual flexibility • Less consciousness of crisis • Less pessimism • Less corruption (i.e., more concerned with observing correct rules and procedures) • Less centralization • Openness and candidness • Adjustment to failing policies in timely fashion • Genuine commitment to solving problems • Encouraging dissent • Acting decisively in emergencies • Attuned to changes in environment • Focus on shared goals • Realization that trade-offs are necessary • Ability to improvise solutions to unexpected events
Inconclusive conditions (unlikely to make much of a difference) • Strong, opinionated leadership • Risk taking • Cohesion • Internal debate

aBaron, R. (2005). So right it’s wrong: Groupthink and the ubiquitous nature of polarized group decision making. Advances in Experimental Social Psychology, 37, 219–253.

In terms of preventive conditions, the behavior of the team has a greater impact on the development of groupthink than does leader behavior. Sound group decision making can be achieved through task orientation, flexibility, less centralization, norms of openness, encouraging dissent, focus on shared goals, and realizing that trade-offs are necessary.

How to Avoid Groupthink

The empirical evidence for groupthink does not support some of the key predictions of the model.33 Most “groupthink” phenomena (e.g., closed-mindedness) occur in a far wider range of settings than originally believed. In this section, we identify some specific steps leaders can take to prevent groupthink. Prevention is predicated on two broad goals: the stimulation of constructive, intellectual conflict and the reduction of concerns about how the group is viewed by others—a type of conformity pressure. We focus primarily on team design factors because those are the factors managers have the greatest control over. None of these measures can guarantee success, but they can be effective in encouraging vigilant decision making.

Monitor Team Size

Larger teams are more likely to fall prey to groupthink.34 People grow more intimidated and hesitant as team size increases. Teams with more than 10 members may feel less personal responsibility for team outcomes.

Provide a Face-saving Mechanism for Teams

A small team that has the respect and support of its organization would seem to be in an ideal position to make effective decisions. Yet often, it fails to do so. One reason is that team members are concerned with how their decision will be viewed by others. Many teams are afraid of being blamed for poor decisions—even decisions for which it would have been impossible to predict the outcome. Often, face-saving concerns prevent people from changing course, even when the current course is clearly doubtful. Teams that are given an excuse for poor performance before knowing the outcome of their decision are less likely to succumb to groupthink than teams that do not have an excuse.35

The Risk Technique

The risk technique is a structured discussion designed to reduce group members’ fears about making decisions.36 The discussion is structured so that team members talk about the dangers or risks involved in a decision and delay discussion of potential gains. Following this is a discussion of controls or mechanisms for dealing with the risks or dangers. The goal is to create an atmosphere in which team members can express doubts and raise criticisms without fear of rejection or hostility from the team. One way is to have a facilitator play the role of devil’s advocate for a particular decision. The mere expression of doubt about an idea or plan by one person may liberate others to raise doubts and concerns. A second method is to have members privately convey their concerns or doubts and then post this information anonymously.

Invite Different Perspectives

In this technique, team members assume the perspective of other constituencies with a stake in the decision.37 In 1986, the space shuttle Challenger broke apart shortly after liftoff because of a malfunction in the booster rockets, in particular, an O-ring failure. Roger Boisjoly, an engineer who tried to halt the flight because he was aware of the likely trouble, said later, “I received cold stares…with looks as if to say, ‘Go away and don’t bother us with the facts.’ No one in management wanted to discuss the facts; they just would not respond verbally to . . . me. I felt totally helpless and that further argument was fruitless, so I, too, stopped pressing my case.”38 Had different perspectives been invited in the Challenger deliberations, the federal government, local citizens, space crew families, astronomers, and so on, could all have assumed the roles of “group members.” Although the Challenger disaster happened in large part because of a poor understanding of how to interpret statistical data, the key point of adopting different perspectives is to create a mechanism that instigates more careful thinking about problems and could prompt groups to reconsider evidence.

When it comes to different perspectives, those persons offering the counterpoint should prepare as they would if they were working on a court case—in other words, they should assemble data and evidence, as opposed to personal opinions.39 Naysayers should not make accusations; it is better to take the “we have a problem” approach.40

Appoint a Devil’s Advocate

By the time upper management is wedded to a particular plan or point of view, they are often impervious to evidence that is questionable or even downright contradictory. To make matters worse, those around them—often subordinates—don’t want to challenge management’s beliefs. This is why some teams assign a “devil’s advocate” role to members of the team. Teams that anticipate having to refute counterarguments are less likely to engage in confirmatory information processing than teams that anticipate having to give reasons for their decision.41 Winston Churchill knew how to combat groupthink and yes-men. Worried that his larger-than-life image would deter subordinates from bringing him the truth, he instituted a unit outside his chain of command, called the “statistical office,” whose key job was to bring him the bleakest, most gut-wrenching facts. Similarly, the authors of the book How Companies Lie suggest that “counterpointers” be appointed in teams, whose chief function is to ask the rudest possible questions.42

Whereas a devil’s advocate procedure can be effective, it is contrived dissent. It is better for a team to have genuine dissent.43 For example, when businesspeople made investment decisions, genuine dissent was more effective than contrived dissent in avoiding confirmatory decision making.44 Genuine dissent is superior to contrived dissent or no dissent at all in terms of stimulating original ideas, considering opposing positions, and changing attitudes.45

Structure Discussion Principles

The goal of structured discussion principles is to delay solution selection and to increase the problem-solving phase. This prevents premature closure on a solution and extends problem analysis and evaluation. For example, teams may be given guidelines that emphasize continued solicitations of solutions, protection of individuals from criticism, keeping the discussion problem centered, and listing all solutions before evaluating them.46

Establish Procedures for Protecting Alternative Viewpoints

Although teams can generate high-quality decision alternatives, they frequently fail to adopt them as preferred solutions.47 Most problems that teams face are not simple, eureka-type decisions, in which the correct answer is obvious once it is put on the table. Rather, team members must convince others about the correctness of their views. This is difficult when conformity pressure exists and when team members have publicly committed to a particular course of action. For these reasons, it can be useful to instruct members to record all alternatives suggested during each meeting.

Second Solution

This technique requires teams to identify a second solution or decision recommendation as an alternative to their first choice. This enhances the problem-solving and idea-generation phases, as well as performance quality.48

Beware of Time Pressure

Time pressure leads to more risky decision making.49 Time pressure acts as a stressor on teams, and stress impairs the effectiveness of team decision making.50 Moral principles are more likely to guide decisions for the distant future than for the immediate future, whereas difficulty, cost, and situational pressures are more likely to be important in near-future decisions. Managers are more likely to compromise their principles in decisions regarding near-future actions compared with distant-future actions.51

Decision-Making Pitfall 2: Escalation of Commitment

Coca-Cola’s decision to introduce New Coke in 1985 was eventually recognized as a mistake and reversed. Do such clear failures prompt teams to revisit their decision-making process and improve upon it? Not necessarily. In fact, under some conditions, teams will persist with a losing course of action, even in the face of clear evidence to the contrary. This is known as the escalation of commitment phenomenon.

Consider the following decision situations.

  • As Facebook prepared for its highly anticipated IPO in 2012, inside sources said that both Facebook executives and Morgan Stanley bankers handling the offering ignored input about pricing and the value of the company. The initial offering was set at $38 per share. The stock dropped below that mark on the first day, before recovering to close just 23 cents above the initial price, a historically bad start for an IPO, which typically rises far above the initial price on the opening day. From there, Facebook began a drop that took it as low as $17.55 per share, putting many initial investors in a deep hole.52

  • When Netflix abruptly announced that it would separate its online streaming service from its DVD-by-mail service, rebranding the latter as Qwikster, and raised prices, it outraged a loyal customer base. Complaints started immediately, but Netflix stood its ground. As subscribers dropped and Netflix’s stock price plunged, the company was forced to abandon the idea. The poorly thought-out plan cost Netflix 800,000 subscribers.53

  • General Motors (GM) put a lot of faith in the public appetite for electric cars, even speeding up the launch date for its Chevy Volt by six months. It was wrong. Despite company claims that the Volt was “virtually sold out” due to its popularity, the fact was that the car was hardly selling at all—only 125 Volts were sold worldwide in July 2011. It wasn’t until the car came under investigation for fires involving its lithium-ion batteries that GM finally saw the writing on the wall.54

In all these situations, individuals and teams committed further resources to what eventually proved to be a failing course of action. Usually, the situation does not initially appear to be unattractive. The situation becomes an escalation dilemma when the persons involved in the decision would make a different decision if they had not been involved up until that point, or when other objective persons would not choose that course of action. In escalation situations, a decision is made to commit further resources to “turn the situation around.”

Exhibit 7-8 Escalation of Commitment

Source: Adapted from Ross, J., & Staw, B. M. (1993). Organizational escalation and exit: Lessons from the Shoreham Nuclear Power Plant. Academy of Management Journal, 36(4), 701–732.

The bigger the investment and the more severe the possible loss, the more prone people are to try to turn things around.

The escalation of commitment process is illustrated in Exhibit 7-8. In the first stage of escalation of commitment, a decision-making team is confronted with questionable or negative outcomes (e.g., a price drop, decreasing market share, poor performance evaluations, or a malfunction). This external event prompts a reexamination of the team’s current course of action, in which the utility of continuing is weighed against the utility of withdrawing or changing course. This assessment determines the team’s commitment to its current course of action. If commitment is low, the team may withdraw from the project and absorb its losses. If commitment is high, however, the team will recommit resources and continue to cycle through the decision stages. Four key classes of determinants drive the escalation of commitment cycle: project, psychological, social, and structural determinants.55
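Read as a loop, the Exhibit 7-8 cycle says: negative feedback triggers a reassessment, and the team withdraws only if commitment falls below some threshold. The toy model below (all values are invented, not drawn from the research) shows how commitment that is propped up by sunk costs can keep a failing project alive:

```python
# Toy rendering of the escalation cycle in Exhibit 7-8 (all values hypothetical).
commitment = 0.90   # team's initial commitment to its course of action
sunk = 0.0          # resources already poured in

for quarter, loss in enumerate([1.0, 1.5, 2.0, 0.5], start=1):
    sunk += loss                 # another questionable/negative outcome
    commitment -= 0.20           # negative feedback erodes commitment...
    commitment += 0.05 * sunk    # ...but sunk costs pump it back up
    decision = "withdraw" if commitment < 0.50 else "recommit"
    print(f"Q{quarter}: {decision} (commitment={commitment:.2f}, sunk={sunk:.1f})")
# Because escalation pressure grows with sunk costs, commitment never drops
# below the withdrawal threshold: this team recommits every single quarter.
```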

Project Determinants

Project determinants are the objective features of the situation. Upon receiving negative feedback, team members ask whether the perceived setback is permanent or temporary (e.g., is reduced market share a meaningful trend or a simple perturbation in a noisy system?). If it is perceived to be temporary, there may appear to be little reason to reverse course. Then, when considering whether to increase investment in the project or commit more time and energy, the team is essentially asking whether it wishes to escalate its commitment. Of course, this may often be the right choice, but it should be clear that such decisions also make it harder for the team to terminate that course of action if results continue to be poor.

Psychological Determinants

Psychological determinants refer to the cognitive and motivational factors that propel people to continue with a chosen course of action. When managers or teams learn that the outcome of a project may be negative, they should ask themselves the following questions regarding their own involvement in the process:

What Are the Personal Rewards for Me in This Project?

In many cases, the process of the project itself, rather than the outcome of the project, becomes the reason for continuing the project. This leads to a self-perpetuating reinforcement trap, wherein the rewards for continuing are not aligned with organizational objectives. Ironically, people who have high, rather than low, self-esteem are more likely to become victimized by psychological forces—people with high self-esteem have much more invested in their ego than those with low self-esteem.

Are My Ego and the Team’s Reputation on the Line?

“If I pull out of this project, would I feel stupid? Do I worry that other people would judge me to be stupid?” Ego protection often becomes a higher priority than the success of the project. When managers feel personally responsible for a decision, monetary allocations to the project increase at a much higher rate than when managers do not feel responsible for the initial decision.56

When managers personally oversee a project, they attempt to ensure that the project has every chance of success (e.g., by allocating more resources to it). After all, that is their job. A manager who works on a project from beginning to end is going to know more about it and may be in a better position to judge it. Furthermore, personal commitment is essential for the success of many projects. Whereas it is certainly good to nurture projects so that they have their best chance of survival, it is nearly impossible for most managers to be completely objective. This is where it is important to have clear, unbiased criteria by which to evaluate the success of a project.

Social Determinants

Most people want others to approve of them, accept them, and respect them. Consequently, they engage in actions and behaviors that they think will please most of the people most of the time, perhaps at the expense of doing the right thing, which may not be popular.

The need for approval and liking may be especially heightened among groups composed of friends. Indeed, groups of longtime friends are more likely to continue to invest in a losing course of action (41 percent) than groups composed of unacquainted persons (16 percent) when groups do not have buy-in from relevant organizational authorities. In contrast, when they are respected by their organization, groups of friends are extremely deft at extracting themselves from failing courses of action.57 The greater the group’s sense of social identity, the more likely the group is to escalate commitment to an unreasonable course of action. For example, teams in a city council simulation, faced with an important budget allocation decision regarding a playground project, wore either team name tags (high social identity) or personal name tags (low social identity).59 Groups that were stronger in social identity showed greater escalation of commitment to the ill-fated playground project.

Structural Determinants

A project can itself become institutionalized, removing it from critical evaluation. It becomes impossible for teams to consider removal or extinction of the project. Political support can also keep a project alive that should be terminated.

Avoiding Escalation of Commitment to a Losing Course of Action

Most teams do not realize that they are in an escalation dilemma until it is too late. How can a team best exit an escalation dilemma?58

The best advice is to adopt a policy of risk management: Be aware of the risks involved in the decision, learn how to best manage these risks, and set limits, effectively capping losses at a tolerable level. It is also important to find ways to get information and feedback on the project from a different perspective. More specifically:

Set Limits

Ideally, a team should determine at the outset what criteria and performance standards justify continued investment in the project or program in question.

Avoid the Bystander Effect

In many situations, especially ambiguous ones, people are not sure how to behave and do nothing because they don’t want to appear foolish. This dynamic explains the bystander effect, or the tendency to not take action when others are around.60

Avoid Tunnel Vision

Get several perspectives on the problem. Ask people who are not personally involved in the situation for their appraisal.

Recognize Sunk Costs

Probably the most powerful way to avoid escalation of commitment is to simply recognize and accept sunk costs. Sunk costs are resources, such as money and time, previously invested that cannot be recovered. Ask: If making the initial decision today, would you make the investment currently under consideration (as a continuing investment), or would you choose another course of action? If the decision is not one that you would choose anew, you might want to start thinking about how to terminate the project and move to the next one. For example, retailer J.C. Penney grudgingly recognized a $163M sunk cost when it finally shelved a plan to drop store discounts and switch to a simpler pricing plan. Longtime customers, accustomed to store coupons, were confused by the new pricing; they visited and bought less in a quarter in which most clothing retailers posted heavy profits.61
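The sunk-cost test above can be phrased as a rule: compare only forward-looking costs and payoffs, because money already spent is identical under every option. A minimal sketch with invented figures (the $163M echoes the J.C. Penney example but is used here only as a placeholder):

```python
# Decide by forward-looking value only; sunk costs cannot be recovered either way.
sunk_cost = 163_000_000           # already spent; drops out of the comparison
continue_payoff = 40_000_000      # expected future return if we press on
continue_cost = 60_000_000        # additional spend required to press on
alternative_payoff = 25_000_000   # expected return of the next-best project
alternative_cost = 10_000_000

value_continue = continue_payoff - continue_cost      # -20M going forward
value_switch = alternative_payoff - alternative_cost  # +15M going forward

# Note: sunk_cost appears in neither calculation.
print("Terminate and switch" if value_switch > value_continue else "Continue")
```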

Avoid Bad Mood

Unpleasant emotional states are often implicated in poor decision making.62 Negative affect (such as bad mood, anger, and embarrassment) leads to nonoptimal courses of action—holding out the hope for some highly positive but risky outcome. When people are upset, they tend to choose high-risk, high-payoff options.63

External Review

In some cases, it is necessary to remove or replace the original decision makers from deliberations precisely because they are biased. One way to do this is with an external review of departments.

Decision-Making Pitfall 3: The Abilene Paradox

The Abilene Paradox results from group members’ desire to avoid conflict and reach consensus at all costs.64 The Abilene Paradox is a form of pluralistic ignorance: Group members adopt a position because they feel other members desire it; team members don’t challenge one another because they want to avoid conflict or achieve consensus. The story in Exhibit 7-9 illustrates this dilemma.

It may seem strange to think that intelligent people who are in private agreement may somehow fail to realize the commonality of their beliefs and end up in Abilene. However, it is easy to see how this can happen if members fail to communicate their beliefs to each other.

Quandaries like the Abilene Paradox are easy to fall into. Strategies to avoid the situation include playing devil’s advocate, careful questioning, and a commitment on the part of all team members to fully air their opinions as well as respectfully listen to others. Note that none of these requires team members to abandon consensus seeking as a goal—if that is indeed their goal. However, it does require that consensus actually reflect the true beliefs of the team.

What factors lead to problems like the Abilene Paradox? In general, if individual team members are intimidated or feel that their efforts will not be worthwhile, then they are less likely to air or defend their viewpoints. This is called self-limiting behavior. According to a survey of 569 managers, there are six key causes of self-limiting behavior in teams:65

  • The presence of an expert: If the group perceives that one of their members holds exceptional expertise or experience in the topic under discussion, individuals will self-limit their input.

    Exhibit 7-9 The Abilene Paradox

    Source: Harvey, J. (1974). The Abilene paradox: The management of agreement. Organizational Dynamics, 3(1), 63–80.

    That July afternoon in Coleman, Texas (population 5,607), was particularly hot—104 degrees as measured by the Walgreen’s Rexall Ex-Lax temperature gauge. In addition, the wind was blowing fine-grained West Texas topsoil through the house. But the afternoon was still tolerable—even potentially enjoyable. There was a fan going on the back porch; there was cold lemonade; and finally, there was entertainment with dominoes. Perfect for the conditions.

    The game required little more physical exertion than an occasional mumbled comment,

    “Shuffle ‘em,” and an unhurried movement of the arm to place the spots in the appropriate perspective on the table. All in all, it had the markings of an agreeable Sunday afternoon in Coleman—that is, it was until my father-in-law suddenly said, “Let’s get in the car and go to Abilene and have dinner at the cafeteria.”

    I thought, “What? Go to Abilene? Fifty-three miles? In this dust storm and heat? And in an un-air-conditioned 1958 Buick?”

    But my wife chimed in with “Sounds like a great idea. I’d like to go. How about you, Jerry?” Since my own preferences were obviously out of step with the rest I replied, “Sounds good to me,” and added, “I just hope your mother wants to go.”

    “Of course I want to go,” said my mother-in-law. “I haven’t been to Abilene in a long time.”

    So into the car and off to Abilene we went. My predictions were fulfilled. The heat was brutal. We were coated with a fine layer of dust that was cemented with perspiration by the time we arrived. The food at the cafeteria provided first-rate testimonial material for antacid commercials.

    Some 4 hours and 106 miles later we returned to Coleman, hot and exhausted. We sat in front of the fan for a long time in silence. Then, both to be sociable and to break the silence, I said, “It was a great trip, wasn’t it?”

    No one spoke. Finally my mother-in-law said, with some irritation, “Well, to tell the truth, I really didn’t enjoy it much and would rather have stayed here. I just went along because the three of you were so enthusiastic about going. I wouldn’t have gone if you all hadn’t pressured me into it.”

    I couldn’t believe it. “What do you mean ‘you all’?” I said. “Don’t put me in the ‘you all’ group. I was delighted to be doing what we were doing. I didn’t want to go. I only went to satisfy the rest of you. You’re the culprits.”

    My wife looked shocked. “Don’t call me a culprit. You and Daddy and Mama were the ones who wanted to go. I just went along to be sociable and to keep you happy. I would have had to be crazy to want to go out in heat like that.”

    Her father entered the conversation abruptly. “Hell!” he said.

    He proceeded to expand on what was already absolutely clear. “Listen, I never wanted to go to Abilene. I just thought you might be bored. You visit so seldom I wanted to be sure you enjoyed it. I would have preferred to play another game of dominoes and eat the leftovers in the icebox.”

    After the outburst of recrimination we all sat back in silence. Here we were, four reasonably sensible people who, of our own volition, had just taken a 106-mile trip across a godforsaken desert in a furnacelike temperature through a cloudlike dust storm to eat unpalatable food at a hole-in-the-wall cafeteria in Abilene, when none of us had really wanted to go. In fact, to be more accurate, we’d done just the opposite of what we wanted to do. The whole situation simply didn’t make sense.

  • A strong argument: If the group has spent a lot of time in circular discussions and idea fatigue has taken hold, a group member who presents a compelling and reasonable solution, even if it is not the optimal resolution, may find that their argument is met with agreement, because it allows the group to move on with business.

  • Lack of self-confidence: When team members are uncertain about the value of their contributions, they self-limit.

  • Trivial decision: When group members don’t see how the decision affects them or anything they consider important, they self-limit.

  • Conformity: Roger Boisjoly, an engineer at NASA contractor Morton Thiokol, warned against the 1986 launch of the space shuttle Challenger but felt incredible pressure from the NASA management team to conform to its desire to launch; the launch ended in tragedy.

  • A faulty decision-making climate: When team members are easily frustrated and believe that others are dispassionate, uninvolved, or apathetic, they are likely to self-limit. Such dysfunctional climates are unwittingly created at the outset of many team endeavors, when members complain about and mock the process.

How to Avoid the Abilene Paradox

The following suggestions are taken from Harvey and Mulvey et al.66

Confront the Issue in a Team Setting

The most straightforward approach for avoiding an ill-fated decision is to meet with the organization members who hold power over the decision and to discuss both options to continue and options to opt out. For example:

I asked to convene this meeting in order to share some of my thoughts regarding our work on project X. At the start of the project I did not feel this way, but with more thought over our current project solution, I have come to think that our current course of action is not working. My anxiety over this situation has grown and I felt it necessary to share these thoughts with the group in order that we not mislead one another into a false sense of mutual agreement. Without an honest discussion, we might continue to work on a solution that none of us really believes in. That is a poor use of our company time and resources. I would venture to say that others on the team may feel the same way about the current solution, but I don’t want to speak for them. Could we discuss where we all stand on this issue?

Conduct a Private Vote

People often go along with what they think the team wants to do. Dissenting opinions are easier to express privately: distribute blank cards and ask team members to write their opinions anonymously, guarantee that anonymity, and then share the overall results with the team.
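To make the tallying step concrete, the following is a minimal sketch in Python; the function name, the ballot wording, and the five-person team are hypothetical illustrations rather than part of Harvey’s or Mulvey’s recommendations.

    # Minimal sketch of an anonymous written poll (hypothetical example).
    # Each ballot is a privately written opinion; nothing ties a ballot to a member.
    import random
    from collections import Counter

    def tally_anonymous_ballots(ballots):
        """Count privately written opinions without exposing who wrote what."""
        shuffled = list(ballots)
        random.shuffle(shuffled)  # decouple ballots from the order of collection
        return Counter(shuffled)

    # Hypothetical ballots from a five-person team deciding whether to
    # continue with the current solution or opt out:
    results = tally_anonymous_ballots(
        ["continue", "opt out", "opt out", "continue", "opt out"]
    )
    print(dict(results))  # e.g., {'opt out': 3, 'continue': 2}

The essential property is that the shared report contains only counts, never names, so dissenters cannot be singled out.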

Minimize Status Differences

High-status members are often at the center of communication, and lower-status members are likely to feel pressure to conform more quickly. Although status differences can be difficult to avoid, reassurances by senior members about the importance of frank and honest discussion, reinforced by the elimination of status symbols such as dress, meeting place, and title, may be helpful. When the start-up Upromise was launched, the founders avoided employee titles, the idea being that titles would undermine the nimbleness of a start-up, where ideas are needed from all corners of the company regardless of perceived rank. The fluidity of roles helped fuel the company’s rapid growth.67

Utilize the Scientific Method

When team members use the scientific method, they let the evidence, rather than their own beliefs, make the decision.

Provide a Formal Forum for Controversial Views

This may be achieved by segmenting the discussion into pros and cons. Debate must be legitimized. Members should not have to worry about whether it is appropriate to bring up contrary views; it should be expected and encouraged.

Take Responsibility for Failure

It is important to create a climate in which teams can make mistakes, own up to them, and then move on without fear of recrimination. At Menlo Innovations, the “make mistakes faster” philosophy allows ideas to be tried out and tweaked, if necessary.68 And at e-commerce company Etsy, awards are given for both the best and the worst team failure of the year, to create a culture in which employees are not afraid to risk failure by trying new ideas. A case in point: A team built a program to manage website abuses stemming from users with multiple accounts but simultaneously managed to delete the accounts of Etsy staff, management, and even investors—in short, anyone who had ever logged in to a computer at the company offices! The “winning” team received a three-armed sweater at an office ceremony.69

Decision-Making Pitfall 4: Group Polarization

Consider the case in Exhibit 7-10. Most people independently evaluating the problem state that the new company would need to have nearly a two-thirds probability of success before advising Mr. A to leave his current job and accept a new position.70 What do you think happens when the same people discuss Mr. A’s situation and are instructed to reach consensus?

You might expect the outcome of the team to be the same as the average of the individuals considered separately. However, this is not what happens: the group advises Mr. A to take the new job even if it has only a slightly better than 50–50 chance of success!

Exhibit 7-10 Advice Question

Source: Wallach, M. A., & Kogan, N. (1961). Aspects of judgment and decision making: Interrelationships and change with age. Behavioral Science, 6, 23–31.

Mr. A., an electrical engineer who is married and has one child, has been working for a large electronics corporation since graduating from college 5 years ago. He is assured of a lifetime job with a modest, though adequate, salary and liberal pension benefits upon retirement. On the other hand, it is very unlikely that his salary will increase much before he retires. While attending a convention, Mr. A. is offered a job with a small, newly founded company that has a highly uncertain future. The new job would pay more to start and would offer the possibility of a share in the ownership if the company survived the competition with larger firms.

Imagine that you are advising Mr. A. What is the lowest probability or odds of the new company proving financially sound that you would consider acceptable to make it worthwhile for Mr. A. to take the new job? Before reading on, indicate your response on a probability scale from 0 to 100 percent.

In other words, groups show a risky shift. After a group discussion, people who are already supportive of war become more supportive, people with an initial tendency toward racism become more racist, and people with a slight preference for one job candidate will advocate for that person more strongly.71 In a field investigation of the decisions of federal district court judges, in the 1,500 cases in which judges sat alone, they took an extreme course of action only 30 percent of the time, but when judges sat in groups of three, extreme decision making more than doubled, to 65 percent.72

Now consider a situation in which a company must decide the highest odds of a serious side effect that it could tolerate in releasing a new medicine. In this case, individual advisors are cautious, but when the same people are in a group, they collectively insist on even lower odds. Thus, they exhibit a cautious shift. Nearly 4 years after Merck & Co. took the painkiller Vioxx off the market because of links to heart attacks and strokes, the new regulatory climate had altered the landscape of drug development. In 2008, the FDA approved just 19 new medicines, the fewest in 24 years, and announced 75 new or revised “black-box” warnings about potential side effects—the agency’s strongest—twice the number in 2004.73

Why are teams both riskier and more cautious than individuals facing the identical situation? The reason for this apparent disparity lies in some of the peculiarities of group dynamics. Teams are not inherently more risky or more cautious than individuals; rather, they are more extreme than individuals. Group polarization is the tendency for group discussion to intensify group opinion, producing more extreme judgments than would be obtained by pooling the individuals’ views separately (see Exhibit 7-11).

Exhibit 7-11 Group Polarization

Source: Based on Forsyth, D. R. (1983). Group dynamics (2nd ed.). Pacific Grove, CA: Brooks/Cole; Rao, V. R., & Steckel, J. H. (1991, June). A polarization model for describing group preferences. Journal of Consumer Research, 18(1), 108–118.
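To see what “more extreme than pooling” means numerically, consider a minimal sketch in Python (not from the chapter; the ratings, the 0–100 attitude scale, and the shift parameter are hypothetical illustrations). Simple pooling predicts that the group judgment equals the members’ average, whereas polarization pushes the judgment farther from the scale midpoint in whichever direction the members already lean:

    # Minimal sketch of group polarization (hypothetical numbers).
    # Attitudes are on a 0-100 scale: 50 is neutral; higher = more risk-seeking.
    from statistics import mean

    def pooled(ratings):
        """Naive prediction: the group judgment equals the members' average."""
        return mean(ratings)

    def polarized(ratings, midpoint=50, shift=0.5):
        """Polarization: discussion pushes the judgment farther from the
        midpoint, in the direction the members already lean on average."""
        avg = mean(ratings)
        return avg + shift * (avg - midpoint)

    risk_leaning = [62, 58, 66, 60, 64]  # members already lean toward risk
    caution_leaning = [40, 36, 44, 38]   # members already lean toward caution

    print(pooled(risk_leaning), polarized(risk_leaning))        # 62.0 -> 68.0 (risky shift)
    print(pooled(caution_leaning), polarized(caution_leaning))  # 39.5 -> 34.25 (cautious shift)

Both shifts amplify the members’ initial lean, which is the signature of polarization: the group does not drift toward the middle but moves beyond the average of its members.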

Group polarization is not simply a case of social compliance or a bandwagon effect. The same individuals display the polarization effect when queried privately after group discussion. This means that people really believe the group’s decision—they have conformed inwardly! The polarization effect does not happen in nominal groups. The polarization effect grows stronger with time, meaning that the same person who was in a group discussion 2 weeks earlier will be even more extreme in his or her judgment.

There are two psychological explanations for group polarization: the need to be right and the need to be liked.

The Need to Be Right

Groups are presumed to have access to a broader range of decision-making resources and, hence, to be better equipped to make high-quality decisions than any person can alone. By pooling their different backgrounds, training, and experience, group members have the potential to work in a more informed fashion than would be the case were the decision left to an individual. People are information dependent—that is, they often lack information that another member has. Consequently, individuals look to the team to provide information that they do not know. However, this dependence can lead to problems when people treat others’ opinions as facts and fail to question their validity. The need to be right, therefore, is the tendency to look to the group to define what reality is—and the more people who hold a particular opinion, the more right an answer appears to be. Although this information-seeking tendency might seem to contradict the common information effect discussed in the previous chapter, the two processes are not inconsistent: the common information effect (and all of its undesirable consequences) is driven by a biased search for information. Conformity, or the adoption of group-level beliefs, is strongest when individuals feel unsure about their own position. Informational influence is likely to be stronger when people make private responses and communicate with the majority indirectly.74

The Need to Be Liked

Most people have a fundamental need to be accepted and approved of by others. One of the most straightforward ways to gain immediate acceptance in a group is to express attitudes consistent with those of the group members. Stated another way, most people like others who conform to their own beliefs. This means that people in groups will become more extreme in the direction of the group’s general opinion, because attitudes that are sympathetic toward the group are most likely to be positively rewarded. The need to be liked refers to the tendency for people to agree with a group so that they can feel more like a part of that group. Statistical minority group members are much more preoccupied with their group membership and less happy than majority members.75 Normative influence, or the need to be liked, is stronger when people make public responses and are face-to-face with a majority.76

Simply stated, people want to make the right decision, and they want to be approved of by their team. Take the case of Mr. A, the electrical engineer. Most people are positively inclined to recommend that Mr. A seriously consider a job change. However, they vary greatly in their reasons why Mr. A should change jobs. Someone in the group may feel that Mr. A should leave the secure job because it does not represent a sufficient challenge; others may think that Mr. A should leave the company to increase his standard of living. Thus, people feel that Mr. A should consider a move, but they have different (yet complementary) reasons supporting their belief. This is the rational type of conformity we discussed earlier. At the same time, members of the team want to be accepted—part of the socialization process we outlined in Chapter 4.

Conformity Pressure

The group polarization effect is related to conformity pressure. Conformity occurs when people bring their behavior into alignment with a group’s expectations and beliefs. Although many people think their beliefs and behavior are based on their own free will, social behavior is strongly influenced by others. For a clear and surprising demonstration of the power of conformity pressure, see Exhibit 7-12.

Exhibit 7-12 Conformity Pressure

Source: Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70(9), Whole No. 416.

Suppose that you are meeting with your team. The question facing your team is a simple one:

Which of the three lines in the right panel of the figures given below is equal in length to the line in the left panel? The team leader seeks a group consensus. She begins by asking the colleague sitting to your left for his opinion. To your shock, your colleague chooses line 1; then, each of the other four team members selects line 1, even though line 2 is clearly correct. You begin to wonder whether you are losing your mind. Finally, it’s your turn to decide. What do you do?

Most people who read this example find it nearly impossible to imagine that they would choose line 1, even if everyone else had. Yet 76 percent of people make an erroneous, conforming judgment (i.e., choose line 1) on at least one question; on average, people conform one-third of the time when others give the obviously incorrect answer.

Conformity is greater when the judgment or opinion issue is difficult and when people are uncertain. People are especially likely to conform if they face an otherwise unanimous group consensus.77 Conformity is greater when people value and admire their team—rejection from a desirable group is very threatening.78 When people are aware that another member of their team advocated an inferior solution to a problem, they are less likely to intervene if they are motivated to be compatible than if they are motivated to be accurate.79 People are more willing to take a stand when they feel confident about their expertise, have high social status,80 are strongly committed to their initial view,81 and do not like or respect the people trying to influence them.82

Coupled with the need to be liked is the desire not to be ostracized from one’s team. There is good reason for concern, because individuals who deviate from their team’s opinion are evaluated more harshly than those who conform.83 A group may reject a deviant member even when it is not under pressure to reach complete consensus.84 Apparently, holding a different opinion is enough to trigger dislike even when it does not directly block the group’s goals. For this reason, people are more likely to conform to the majority when they respond publicly,85 anticipate future interaction with other group members,86 are less confident,87 find the question under consideration to be ambiguous or difficult,88 and are interdependent concerning rewards.89 Ostracized team members experience a variety of deleterious effects: they don’t like or trust their team, and this may ultimately harm group functioning.90

Most managers dramatically underestimate the conformity pressures that operate in groups, perhaps because people prefer to view themselves as individualists who are not afraid to speak their own minds. However, conformity pressures in groups are real, and they affect the quality of team decision making. Managers should therefore anticipate conformity pressures in groups, understand what drives them (i.e., the need to be liked and the need to be right), and put into place group structures that do not allow conformity pressures to endanger the quality of group decision making.

Decision-Making Pitfall 5: Unethical Decision Making

Unethical decision making can be small scale, or it can affect the lives and welfare of hundreds of thousands of people. Financier Bernie Madoff committed fraud, theft, and money laundering in a huge Ponzi scheme that defrauded thousands of people. At the time of his arrest, he claimed to manage $65 billion of investor money, but in reality there was just $1 billion.91 Thousands of people lost their entire life savings. Unethical decision making shares many of the dynamics involved in the other pitfalls we have discussed in this chapter, such as groupthink, which can lead to a culture of unethical behavior within a company.92 Groups lie more than individuals when deception is guaranteed to result in financial profit.93 And groups are more strategic than individuals, in that they will adopt whatever course of action—deception or honesty—serves their financial interests.94 However, teams are concerned about ethics and value group morality more than competence or sociability.95 Below, we consider some of the situational triggers of unethical decision making. We resist the urge to explain all such occurrences as a simple manifestation of evil personalities; rather, we believe that certain conditions can act as enablers of unethical behavior.

Rational Expectations Model

Undergirding virtually all economic theory and practice is the rational expectations model, also known as the rational man model. According to this model, people are fundamentally motivated to maximize their own utility, which has become equivalent to maximizing self-interest. So entrenched is this model in modern business analysis that making any other assumption about human behavior is regarded as irrational, illogical, and flawed. A study of 126 teams revealed that teams holding a utilitarian orientation were more likely to make unethical decisions and engage in unethical behaviors, particularly when team members felt a high degree of psychological safety.96 A poll of 500 financial workers in the United States and the United Kingdom revealed that 24 percent would engage in unethical and illegal behavior if it would help them be more successful in their industry; 16 percent even said they would commit insider trading if they knew they could get away with it.97

The norm of self-interest is so pervasive that people often “invent” self-interested explanations of why they perform non-self-serving (or altruistic) acts, such as giving money to charity.98 Even more disconcerting, people who take business courses are significantly more likely to engage in questionable and potentially unethical behaviors—for example, failing to return money that they find in the street, behaving competitively in a prisoner’s dilemma game, and so on—than people who don’t take business courses.99

False Consensus

The false consensus effect is the tendency for people to believe that others share their own views, when in fact they do not. For example, people overestimate the degree to which others share their own views on ethical matters.100 This is particularly true for people who occupy central positions in their organizational network. For this reason, a major driver of unethical behavior is the belief that “everyone else is doing it.”

Vicarious Licensing

Paradoxically, people are more likely to express prejudiced and immoral attitudes when their group members’ past behavior has established nonprejudiced credentials. For example, participants who had the opportunity to view a group’s nonprejudiced hiring decision were more likely to reject an African American man for a job, presumably because they believed the group had already established a foundation of morality!101

Desensitization

Another problem concerns the desensitization of behavior. When someone first crosses the line of appropriate behavior, that person may experience a range of negative feelings and emotions. However, once the line is crossed, the individual is desensitized, and the normal system of internal checks and balances is turned off. When New Orleans Saints defensive coordinator Gregg Williams bragged about the Saints’ hits on players such as Brett Favre and Kurt Warner during the 2009 postseason playoffs, the violent hits prompted an examination of the Saints’ defensive tactics. The investigation found that the Saints used a financial bonus system that rewarded players for injuries inflicted on the opposing team. Over the course of the season, the players became desensitized to where the line of acceptable competitive play should be drawn.102

The question, of course, is how to remedy or, ideally, prevent this situation. Consider the five strategies below:

Accountability for Behavior

To the extent that groups feel accountable for their behavior, they are more likely to behave ethically. For example, group members are more likely to compensate for the ethical transgression of an in-group member when they are observed by others. In contrast, an absence of accountability can lead to unethical decision making. The World Bank, for example, is an institution that is not subject to auditing review by any one country. Without such oversight, the internal workings and policies of the World Bank have become increasingly corrupt. Individual accountability at the World Bank is loosely monitored, and the only real insights into the Bank’s finances and decision ethics often come from internal whistle-blowers.103

Accountability is the implicit or explicit expectation that one may be called on to justify one’s beliefs, feelings, and actions to others.104 Accountability implies that people who do not provide a satisfactory justification for their actions will suffer negative consequences. However, there are multiple forms and types of accountability, each with its own beneficial and detrimental effects on decision making.

Most leaders want to take credit and responsibility for good news but disavow bad news; it is a way of protecting the ego. For example, in 2011, then French President Nicolas Sarkozy and British Prime Minister David Cameron exchanged heated words about the relative health of the French and British economies, with Sarkozy defending the economic policies that had kept France at the top-tier Triple-A credit rating for 36 consecutive years.105 However, when France was stripped of that rating just a month later, Sarkozy went into temporary hiding and sent his prime minister, Francois Fillon, to face the media’s questions.106

The following are considerations regarding accountability in organizational decision making:107

  • Accountability to an audience with known versus unknown views: People who know what conclusion the ultimate audience wants to hear often conform. For example, financial-aid agents who do not know their audience’s preferences match awards to needs effectively; agents who know their audience’s preferences tell them what they want to hear (not what will actually meet their needs).108

  • Pre- versus post-decision accountability: After people irrevocably commit themselves to a decision, they attempt to justify it. For example, people form less complex thoughts and hold more rigid and defensive views when they are held accountable only after expressing their attitudes.109

  • Outcome accountability versus process accountability: Accountability for outcomes leads to greater escalation behavior, whereas accountability for process increases decision-making effectiveness.110

  • Legitimate versus illegitimate accountability: If accountability is perceived as illegitimate—for example, intrusive and insulting—any beneficial effects of accountability may fail or backfire.111

Contemplation

Contemplation is morally oriented conversation in the face of a decision. In one study in which people were tempted to lie, those who engaged in contemplation (morally oriented conversation) told the truth, whereas those who engaged in self-interested conversation or simply made an immediate choice lied.112

Eliminate Conflicts of Interest

Conflicts of interest occur when a person’s incentives are not aligned with the best interests of the organization. For example, House Minority Leader Nancy Pelosi faced a conflict of interest: because she owned $100,000 of Clean Energy stock, passage of the fuel subsidy proposal would have increased her net worth.113

Create Cultures of Integrity

The culture of a team emerges as a result of design factors in the organization and the team. Even in the most tightly controlled, bureaucratic organizations, it is impossible to monitor the actions of every employee. This is where the cultural code is supposed to guide every team member to make the right decisions without supervision. For example, group members comply with group norms of morality because they anticipate gaining respect when enacting the group’s moral values.114 In some cases, however, employees may engage in unethical behaviors because they believe it will benefit their organization. Field studies of employees who identify strongly with their organization indicate they are more likely to engage in unethical behavior that benefits their organization when they believe that the benefit will be reciprocated.115

According to the trickle-down model of ethical decision making, leaders play a prominent role in influencing employees’ propensity to be ethical and helpful.116 Indeed, there is a direct, negative relationship between ethical leadership and unethical behavior: The more ethical the leadership, the less unethical and deviant the team’s behavior.117 Failure to discipline transgressions can be just as damaging as failure to reward excellent behavior in teams. Business cultures that cannot take swift and decisive action run the risk of unethical behavior by default. In the steroid investigations in Major League Baseball, executives were accused of being slow to act when home run records were being shattered at a dizzying pace in the late 1990s and early 2000s and whispers had begun about illegal performance-enhancing drugs. When all-star New York Yankees infielder Alex Rodriguez acknowledged in 2009 that he had used performance-enhancing drugs while playing for the Texas Rangers from 2001 to 2003, it cast doubt on the achievements of a player widely considered the best in baseball, as well as on the game itself, which was reeling from numerous steroid controversies among its current and former stars. “I couldn’t feel more regret and feel more sorry, because I have so much respect for this game and the people that follow us. And I have millions of fans out there who won’t ever look at me the same,” Rodriguez said. He faced boos and taunts from crowds in opposing ballparks throughout the 2009 baseball season.118

Future Self-Orientation

People who feel continuity with their future selves are more likely to behave in ethically responsible ways as compared to people who lack continuity with their future selves. For example, people who did not feel a connection to themselves in the future were more likely to lie, make false promises, and cheat.119
