CHAPTER 3

Cascades

Researchers have long known that errors in groups can be amplified if their members influence one another. Of course, the human animal is essentially sociable, and human language may be the most subtle and engaging social mechanism in the animal kingdom. The brain is wired to help us naturally synchronize with and imitate other human beings from birth.1 Emotions are contagious in our species; obesity appears to be contagious too, and the same may well be true for happiness itself.2 (We know a behavioral economist who offers what he calls a law of human life: “You cannot be happier than your spouse.”) It is no exaggeration to say that herding is the fundamental behavior of human groups.

If you are doubtful, consider a brilliant study of music downloads.3 Sociologist Matthew Salganik and his coauthors conclude that there is a lot of serendipity in which music succeeds and which fails, and that small differences in early popularity make a major difference in ultimate success. In business, many people are aware of this point—but not nearly aware enough. They underrate the extent to which success or failure depends on what happens shortly after launch and frequently overrate the contributions of intrinsic merit.

Here’s how Salganik’s study worked. The researchers created a control group in which people could hear and download one or more of forty-eight songs by new bands. In the control group, intrinsic merit and personal tastes drove the choices. Individuals were not told anything about what anyone else had downloaded or liked. They were left to make their own independent judgments about which songs they liked. To test the effect of social influences, Salganik and his coauthors also created eight other subgroups. In each of these subgroups, each member could see how many people had previously downloaded individual songs in his or her particular subgroup.

In short, Salganik and his colleagues were testing the relationship between social influences and consumer choices. What do you think happened? Would knowledge of others’ choices make a difference, in terms of ultimate numbers of downloads, if people could see the behavior of others?

The answer is that it made a huge difference. While the worst songs (as established by the control group) never ended up at the very top and the best songs never ended up at the very bottom, essentially anything else could happen. If a song benefited from a burst of early downloads, it could do really well. If it did not get that benefit, almost any song could be a failure. As Salganik and Duncan Watts later demonstrated, you can manipulate outcomes pretty easily, because popularity is a self-fulfilling prophecy.4 This means that if a site shows (falsely) that a song is being downloaded a lot, that song can get a tremendous boost and eventually become a hit. John F. Kennedy’s father, Joe Kennedy, was said to have purchased tens of thousands of early copies of his son’s book, Profiles in Courage. The book became a best seller. Smart dad.
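The dynamics behind that unpredictability can be illustrated with a toy “cumulative advantage” simulation. What follows is our own sketch, not Salganik’s actual design or code, and it makes a deliberately crude assumption: the songs have no quality differences at all, and each visitor simply gravitates toward whatever is already popular. Even so, it reproduces the two key patterns: different “worlds” crown different hits, and a small artificial head start is usually decisive.

import random

def simulate_world(n_songs=48, n_visitors=5000, boosted_song=None, boost=50):
    # One simulated "world" in which each visitor tends to pick songs that
    # are already popular (a cumulative-advantage assumption of our own).
    downloads = [0] * n_songs
    if boosted_song is not None:
        downloads[boosted_song] = boost  # artificial early popularity
    for _ in range(n_visitors):
        # A song's chance of being chosen grows with its current downloads;
        # the "+ 1" keeps never-downloaded songs in the running.
        weights = [d + 1 for d in downloads]
        choice = random.choices(range(n_songs), weights=weights)[0]
        downloads[choice] += 1
    return downloads

random.seed(1)

# Identical songs, five separate worlds: the eventual hit differs by world.
for world_number in range(5):
    world = simulate_world()
    print("world", world_number, "hit song:", world.index(max(world)))

# A modest artificial boost usually turns an arbitrary song into the hit.
boosted = simulate_world(boosted_song=7)
print("song 7 downloads after an early boost:", boosted[7], "of", sum(boosted))

The boost plays the same role as Joe Kennedy’s bulk purchases: it does not change the product, only its apparent popularity, and apparent popularity feeds on itself.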

There’s a lesson here both for businesses that seek to market products and for foolish groups whose leaders often announce a preference for a proposed course of action before the groups have gathered adequate information or aired possible outcomes. The lesson seems obvious, but it now has a solid empirical foundation: if a project, business, politician, or cause gets a lot of early support, it can turn out to be the group’s final preference, even if it would fail on its intrinsic merits without that support. Both small and large groups can be moved in this way. If the initial speakers in a group favor a particular course of action, the group may well end up favoring that position, even if it would not have done so if the initial speakers had been different.

When products succeed, we often think that their success was inevitable. Wasn’t the Mona Lisa bound to be one of the most famous and admired paintings in the world? Isn’t her portrait uniquely mysterious and irresistible? Weren’t the Beatles destined for success? The Harry Potter series is one of the most popular in the history of publishing. The books are great; how could it be otherwise? Beware of this way of thinking, because inevitability is often an illusion. We can’t prove it here, but it’s true: with a few twists of fate, you would never have heard of the Mona Lisa, the Beatles, or even Harry Potter. (In fact, each of these now-iconic works had inauspicious beginnings but was bumped into the limelight by unpredictable bursts of popularity.)

For their part, many groups end up with a feeling of inevitability, thinking that they were bound to converge on what ultimately became their shared view. Beware of that feeling too, because it is often an illusion. The group’s conclusion might well be an accident of who spoke first—and hence of what we might call an incidental side effect of the group’s discussions. An agenda that says “bosses go first” might produce very different outcomes from one that says “subordinates go first.”

Savvy managers are often entirely aware of this point and organize the discussion so that certain people speak at certain times. Within the federal government, some of the most effective leaders are masters of this process. They know that if, at a crucial juncture, they call on the person with whom they agree, they can sway the outcome. Lesson for managers: devote some thought to the speakers with whom you agree, and get them to speak early and often. Another lesson for managers: don’t do that if you don’t know what the right answer is.

Up-Votes and Down-Votes

Other research supports our central point (and also helps to dispel the illusion of inevitability). Lev Muchnik, a professor at Hebrew University of Jerusalem, and his colleagues carried out an ingenious experiment on a website that displays a diverse array of stories and allows people to post comments, which can in turn be voted up or down. With respect to the posted comments, the website compiles an aggregate score, which comes from subtracting the number of down-votes from the number of up-votes. To put a metric on the effects of social influences, the researchers explored three conditions: (1) “up-treated,” in which a comment, when it appeared, was automatically and artificially given an immediate up-vote; (2) “down-treated,” in which a comment, when it appeared, was automatically and artificially given an immediate down-vote; and (3) “control,” in which comments did not receive any artificial initial signal. Millions of site visitors were randomly assigned to one of the three conditions. The question: What would be the ultimate effect of an initial up-vote or down-vote?

You might well think that after so many visitors (and hundreds of thousands of ratings), a single initial vote could not possibly matter. Some comments are good, and some comments are bad, and in the end, quality will win out. It’s a sensible thought, but if you thought it, you would be wrong. After seeing an initial up-vote (and recall that it was entirely artificial), the next viewer became 32 percent more likely also to give an up-vote. What’s more, this effect persisted over time. After five months, a single positive initial vote artificially increased the mean rating of comments by a whopping 25 percent! It also significantly increased turnout (the total number of ratings).

With respect to negative votes, the picture was not symmetrical—an intriguing finding. True, the initial down-vote did increase the likelihood that the next viewer would also give a down-vote. But that effect was rapidly corrected. After the same period of five months, the artificial down-vote had zero effect on median ratings (though it did increase turnout). Muchnik and his colleagues conclude that “whereas positive social influence accumulates, creating a tendency toward ratings bubbles, negative social influence is neutralized by crowd correction.” They think that their findings have implications for product recommendations, stock-market predictions, and electoral polling. Maybe an initial positive reaction, or just a few such reactions, can have major effects on ultimate outcomes—a conclusion very much in line with Salganik’s study of popular music.

We should be careful before drawing large lessons from one or two studies, particularly when no money was on the line. But there is no question that when groups move in the direction of some products, people, political initiatives, and ideas, the movement may not be because of their intrinsic merits, but because of the functional equivalent of early up-votes. There are lessons here about the extraordinary unpredictability of groups—and about their frequent lack of wisdom. Of course, Muchnik’s study involved very large groups, but the same thing can happen in small ones. In fact, the effect can be even more dramatic in small groups, as an initial up-vote—in favor of some plan, product, or verdict—has a large effect on other votes.

How Many Murders?

Here’s a clean test of group wisdom and social influences. As we saw in chapter 1, the median estimate of a large group is often amazingly accurate (and we will return to that theme in chapter 8). But what happens if people in the group know what others are saying? You might think that knowledge of this kind will help, but the picture is a lot more complicated.

Jan Lorenz, a researcher in Zurich, worked with several colleagues to learn what happens when people are asked to estimate certain values, such as the number of assaults, rapes, and murders in Switzerland.5 The researchers found that when people are informed about the estimates of others, there is a significant reduction in the diversity of opinions—a result that tends to make the crowd less wise. (Note, however, that even with diminished diversity, the crowd is still somewhat more accurate than a typical individual.6) Lorenz and his coauthors found another problem with the crowd: when people hear about others’ estimates, they also become more confident in their own judgments. Notably, people received monetary payments for getting the right answer, so their mistakes were consequential—not just an effort to curry favor with others. The authors conclude that for decision makers, the advice given by a group “may be thoroughly misleading,” at least when group members are interacting with one another.

Notwithstanding their differences, the Salganik, Muchnik, and Lorenz studies have one thing in common: they all involve social cascades. A cascade occurs when people influence one another, so much so that participants ignore their private knowledge and rely instead on the publicly stated judgments of others. Corresponding to our two accounts of social influences, there are two kinds of cascades: informational and reputational. In informational cascades, people silence themselves out of respect for the information conveyed by others. In reputational cascades, people silence themselves to avoid the opprobrium of others.

Informational Cascades

Cascades need not involve deliberation, but deliberative processes and group decisions often involve cascades. The central point is that those involved in a cascade do not reveal all that they know. As a result, the group does not obtain important information, and it often decides badly.7

Informational Cascades in Action

To see how informational cascades work, imagine a company whose officials are deciding whether to authorize some new venture.8 Let us assume that the group members are announcing their views in sequence, a common practice in face-to-face teams and committees everywhere. Every member has some private information about what should be done. But each also attends, reasonably enough, to the judgments of others.

Andrews is the first to speak. He suggests that the venture should be authorized. Barnes now knows Andrews’s judgment. It is clear that she, too, should vote in favor of the venture if she agrees independently with Andrews. But suppose that her independent judgment is otherwise. Everything depends on how much confidence she has in Andrews’s judgment and how much confidence she has in her own. Suppose that she trusts Andrews no more and no less than she trusts herself. If so, she should be indifferent about what to do and might simply flip a coin. Or suppose that on the basis of her own independent information, she is unsure what to think. If so, she will follow Andrews.

Now turn to a third member, Carlton. Suppose that both Andrews and Barnes have argued in favor of the venture, but that Carlton’s own information, though inconclusive, suggests that the venture is probably a bad idea. Here again, Carlton will have to weigh the views of both Andrews and Barnes against his own. On reasonable assumptions, there is a good chance that Carlton will ignore what he knows and follow Andrews and Barnes. After all, it seems likely, in these circumstances, that both Andrews and Barnes had reasons for their conclusion. Unless Carlton thinks that his own information is better than theirs, he should follow their lead. If he does, Carlton is in a cascade.
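A rough calculation shows why this can be perfectly rational. Suppose, purely for illustration (the figure is our own assumption, not from any study), that each member’s private information points the right way 60 percent of the time, and that Carlton treats Andrews’s and Barnes’s announcements as two independent signals in favor of the venture.

# Carlton's reasoning in miniature; the 60 percent reliability is assumed.
reliability = 0.60
likelihood_ratio = reliability / (1 - reliability)   # 1.5-to-1 per signal

odds_venture_is_good = 1.0                # no leaning before anyone speaks
odds_venture_is_good *= likelihood_ratio  # Andrews announces in favor
odds_venture_is_good *= likelihood_ratio  # Barnes announces in favor
odds_venture_is_good /= likelihood_ratio  # Carlton's own information points against

probability = odds_venture_is_good / (1 + odds_venture_is_good)
print(round(probability, 2))              # 0.6: Carlton rationally goes along

Two favorable announcements against one unfavorable private signal still leave the venture looking more likely good than bad, so Carlton sets his own information aside. And if Davis later counts Carlton’s announcement as a third independent signal, the computed probability only climbs, which is exactly the mistake described next.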

If Carlton is quite savvy, he might consider the possibility that Barnes deferred to Andrews’s judgment and did not make any kind of independent judgment. If so, Carlton might ignore the fact that Barnes agreed with Andrews. But in the real world, many group members do not consider the possibility that earlier speakers deferred to the views of still earlier ones. People tend to think that if two or more other people believe something, each has arrived at that belief independently.9 This is an error, but a lot of us make it.

Now suppose that Carlton goes along with Andrews and Barnes, and that group members Davis, Edwards, and Francis know what Andrews, Barnes, and Carlton said and did. On reasonable assumptions, they will do exactly what Carlton did: favor the venture regardless of their private information (which, we are supposing, is relevant but inconclusive). This will happen even if Andrews initially blundered. Again, Davis, Edwards, or Francis might be able to step back and wonder whether Andrews, Barnes, and Carlton really have made independent judgments. But if they are like most people, they will not do that. The sheer weight of the apparently shared view of their predecessors will lead them to go along with the emerging view.

If this is what is happening, we have a now-familiar problem: those who are in the cascade do not disclose the information that they privately hold. In the example just given, decisions will not reflect the overall knowledge or the aggregate knowledge of people in the group—even if the information held by individual members, if actually revealed and aggregated, would produce a quite different (and possibly much better) result. The venture will be authorized even if it is a terrible idea and even if group members know that it is a terrible idea. The simple reason is that people are following the lead of those who came before. Subsequent speakers might fail to rely on, and fail to reveal, private information that actually exceeds the information collectively held by those who started the cascade.

Here’s an example of an informational cascade in jury deliberation. One of us (Hastie) has conducted dozens of mock-jury studies, with thousands of volunteer jurors, many of them citizens drawn from big-city jury pools. In these studies, the volunteers allowed themselves to be videotaped while deliberating to verdicts on difficult but typical cases. In many juries (mock and real), the deliberation begins with a straw vote, taken just to see where everyone stands.

In dozens of juries, we observed a scenario like the following. The straw vote would circle the jury table and often would start with a small cascade of two or three jurors favoring, with increasing confidence, the same verdict. Because the researchers collected predeliberation private ballots, we knew which verdict was privately preferred by each of the jurors at the table. Let’s suppose that Jurors 1, 2, and 3 endorsed second-degree murder, privately and publicly in the straw vote. But we knew that Juror 4 had voted for not guilty and had indicated the highest level of confidence on the predeliberation ballot.

So, what did Juror 4 do, when confronted with the solid line of three murder verdicts? He paused for a second and then said, “Second degree.” At this point, Juror 7, an undecided vote, suddenly spoke up and asked, “Why second degree?” A momentary deer-in-the-headlights expression flitted across Juror 4’s face, and then he replied, “Oh, it’s just obviously second degree.” This scenario stands out as an iconic example of an informational cascade, and we have no doubt that this scenario is played out every day in jury rooms, board rooms, and political conference rooms all over the world.

Anxiety and Cascading

People who are humble, pliable, or complacent are especially likely to fall into a cascade. But anxious people might well shatter it, certainly if it reflects a high degree of optimism. Nancy-Ann DeParle is a gold-medal shatterer of cascades, simply because she asks tough, skeptical questions that force people to rethink. Every group needs some people like that, who wonder: If lots of people share an opinion on a hard question, might it be because they are following the lead of one or two blunderers? Why are there no dissenters? (Recall the fiasco at the Bay of Pigs.)

It is important to understand that in relying on the statements or actions of their predecessors, group members are not acting irresponsibly. In fact, informational cascades can occur when members are following rigorously rational thought processes. Group members might well be reacting sensibly to the informational signals they receive. If most people think that the venture is a good idea, it’s probably a good idea. You should feel, sensibly, that you need a pretty strong counterargument if you are going to disagree with what your colleagues have said.

But we should not underestimate our tendencies to rely on confidence, our own and others’, as a cue for what information deserves the most attention (independent of the validity of that information). One of the most insidious side effects of group decision making is that people believe in wrong group decisions more than they believe in incorrect individual decisions. The social proof resulting from cascades (and conformity more generally) amplifies everyone’s trust in the incorrect outcome.10 And inputs into the decision process from highly confident or dominant personalities have more impact and increase the esteem accorded to those individuals, regardless of the quality of their contributions.11

Urns: Red or Black?

Cascades often occur in deliberating groups in the real world.12 They are also easy to create in the laboratory. The simplest experiment is artificial but also highly revealing, because it is a stylized version of reality—of what happens every day of every year. The experiment is a bit technical and its details are not exactly captivating, but please do bear with us for a bit, because the details explain a lot about how groups end up going wrong.

The experimenters asked participants to guess whether the experiment was using urn A, which contained two red balls and one white, or instead urn B, which contained two white balls and one red.13 Participants could earn $2 for a correct decision and hence had an economic incentive to make the right choice.

In each round, one urn was selected. Then a participant was asked to make one (and only one) private draw of a ball from the selected urn in each round. The participant recorded on an answer sheet (1) the color of that ball and (2) his or her own decision about which urn was involved. The participant did not announce to the group which ball had been drawn, but the person did announce his or her own decision (about the likely urn) to everyone.

Then the ball was returned to the urn and the urn was passed to the next participant for another private draw, which again was not disclosed to anyone else, and for this participant’s own decision about the urn, which again was disclosed. This process continued until all the participants had made draws and decisions. At that time, the experimenter announced which urn had actually been used. If the participants had picked the urn only on the basis of their private information, they would have been right 66.7 percent of the time. The point of the experiment was to see whether people would decide to ignore their own draw in the face of the announcements of their predecessors—and to explore whether such decisions led to cascades and errors.

The upshot? In the experiment, cascades often did develop—and they usually produced errors. After a number of individual judgments were revealed, people announced decisions that were not indicated by their private draws but that fit with the majority of previous announcements.14 More than 77 percent of rounds resulted in cascades, and 15 percent of private announcements did not reveal a “private signal,” that is, the information provided by the individual’s own private draw. Notably, most people’s decisions were rationally based on the available information—but erroneous cascades nonetheless developed.15
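The logic of the urn experiment is easy to capture in a short simulation. The sketch below is our own illustration, not code from the study; it assumes that each participant rationally announces whichever urn is more likely given one private draw plus whatever the earlier announcements reveal, and that ties are broken by following one’s own draw. Once the publicly inferable evidence reaches two net signals in either direction, a cascade begins and later announcements stop conveying private information.

import random

RED_GIVEN_URN = {"A": 2 / 3, "B": 1 / 3}  # urn A: 2 red, 1 white; urn B: 2 white, 1 red

def run_round(n_participants=6):
    true_urn = random.choice(["A", "B"])
    public = 0        # net red-minus-white signals inferable from announcements
    cascade = False   # has anyone been in a position to ignore the private draw?
    wrong = 0
    for _ in range(n_participants):
        signal = +1 if random.random() < RED_GIVEN_URN[true_urn] else -1
        if abs(public) >= 2:
            # Two net public signals outweigh any single private draw, so a
            # rational participant announces with the crowd; the announcement
            # reveals nothing, and the public evidence stays where it was.
            cascade = True
            guess = "A" if public > 0 else "B"
        else:
            total = public + signal
            if total == 0:
                guess = "A" if signal > 0 else "B"   # tie: follow one's own draw
            else:
                guess = "A" if total > 0 else "B"
            public += signal   # outside a cascade, the announcement reveals the draw
        wrong += guess != true_urn
    return cascade, wrong

random.seed(0)
results = [run_round() for _ in range(10_000)]
print("share of rounds with a cascade:", sum(c for c, _ in results) / len(results))
print("wrong announcements per round (of 6):", sum(w for _, w in results) / len(results))

Run it many times and cascades form in most rounds; a meaningful share of them lock onto the wrong urn, much as the laboratory participants did.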

Table 3-1 shows an example of a cascade that produced an inaccurate outcome (the urn used was actually B, but the first two draws produced a cascade in favor of A).16

What is noteworthy here, of course, is that the total amount of private information—four whites and two reds—pointed to the correct judgment: urn B. But the existence of the two early signals, producing rational but incorrect judgments, led everyone else to fall in line. Initial misrepresentative signals began a chain of incorrect decisions, and the chain was not broken by more representative signals received later.17 There is evidence that people eventually tend to break very long cascade chains, but there is no doubt that even short chains can create big trouble, and some long ones do persist.18

TABLE 3-1


An informational cascade

[Table image not reproduced. As described in the text, the first two participants drew red balls and announced urn A; the remaining four drew white balls but followed the cascade and announced urn A as well.]

Source: Marc Willinger and Anthony Ziegelmeyer, “Are More Informed Agents Able to Shatter Information Cascades in the Lab?” in The Economics of Networks: Interaction and Behaviours, ed. Patrick Cohendet et al. (New York: Springer, 1998), 291.


This urn experiment maps directly onto real-world decisions by teams and deliberating groups, in which people rely on the expressed views of their predecessors and hence fail to disclose what they know, to the detriment of the group as a whole. Recall here Salganik’s music download experiment, which produced similar results. In that experiment, some songs became popular and others tanked; because there is no accounting for taste, we can’t say for certain that there were mistakes. But if a group ends up making the wrong decision because of the views of those who spoke first, social influence and herding are undermining the truth.

Reputational Cascades

A reputational cascade has an altogether different dynamic. With this kind of cascade, group members think they know what is right or what is likely to be right, but they nonetheless go along with the group to maintain the good opinions of others. The problem is not that group members are influenced by the information contained in the statements of their predecessors, but that they do not want to face the disapproval of their bosses or their colleagues.

Political correctness, a term often used as an aspersion by the political right in the 1990s (and thereafter), can be found in many places; it is hardly limited to left-leaning institutions of higher education. In both business and government, there is often a clear sense that a certain point of view is the right one to have, and that those who question or reject it, even for purposes of discussion, do so at their peril. They seem difficult or not part of the team. They are viewed as wasting the rest of the group’s time. They can be disruptive. In extreme cases, they are seen as misfits. Misfits make the group uncomfortable, but wise groups take steps to protect them.

Here’s how reputational cascades can arise. Suppose Albert suggests that a company’s new project is likely to succeed, and suppose that Barbara concurs with Albert, not because she actually thinks that Albert is right (in fact, she thinks he is wrong), but because she does not wish to seem ignorant, adversarial, or skeptical about that project. If Albert and Barbara seem to agree that the project will go well, Cynthia is not only unlikely to contradict them publicly; she might even appear to share their judgment. Cynthia’s reaction arises not because she believes the judgment to be correct (she doesn’t), but because she does not want to provoke Albert and Barbara’s hostility or lose their good opinion.

It should be easy to see how this process might generate a cascade. Once Albert, Barbara, and Cynthia offer a united front on the issue, their colleague David will be most reluctant to contradict them, even if he thinks they are wrong and has excellent reasons for that belief. In the actual world of group decisions, people may, of course, be uncertain whether publicly expressed statements are a product of independent information, participation in an informational cascade, or reputational pressure. As we have noted, listeners and observers undoubtedly overstate the extent to which the actions of others are based on independent information.

The possibility of reputational cascades is demonstrated by an ingenious variation on the urn experiment outlined above.19 In this experiment, people had to guess from which urn they had drawn a ball, just as in the earlier experiment (i.e., urn A had two red balls and one white; urn B had two white and one red). The participants were paid $0.25 for a correct decision but $0.75 for a decision that matched the decision of the majority of the group. There were punishments for incorrect and nonconforming answers as well. If people made an incorrect decision, they lost $0.25; if their decision failed to match the group’s decision, they lost $0.75.
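To see how strongly those payoffs tilt toward conformity, here is a back-of-the-envelope calculation. It is our own sketch, using the dollar amounts just described and assuming, for simplicity, that you can anticipate how the majority will vote.

def expected_payoff(announce_with_majority, p_majority_is_right):
    # Expected earnings under the payoffs described above: plus or minus $0.25
    # for a correct or incorrect urn, and plus or minus $0.75 for matching or
    # failing to match the majority (assumed, for simplicity, to be foreseeable).
    p = p_majority_is_right
    if announce_with_majority:
        accuracy = 0.25 * p - 0.25 * (1 - p)
        conformity = 0.75
    else:
        accuracy = 0.25 * (1 - p) - 0.25 * p
        conformity = -0.75
    return accuracy + conformity

# Even if you are certain the majority is wrong (probability 0 that it is right),
# conforming still pays better:
print(expected_payoff(True, 0.0))    # prints 0.5
print(expected_payoff(False, 0.0))   # prints -0.5

Announcing with the expected majority comes out ahead no matter what you privately believe; even certainty that the majority is wrong leaves the conforming announcement a full dollar better off in expectation. With incentives like that, it is no surprise that announcements so often ignored private draws.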

In this experiment, cascades appeared almost all of the time! No fewer than 96.7 percent of rounds resulted in cascades, and 35.3 percent of people’s announcements did not match their private signal. That is, they voted publicly against the signal given by their own draw. And when the draw of a subsequent person contradicted the announcement of the predecessor, 72.2 percent of people matched the first announcement. Table 3-2 shows this version of the experiment (the actual urn was B).20

This experiment shows that especially unfortunate results should be expected if people are rewarded not only or not mostly for being correct, but also or mostly for saying or doing what other people say and do. Unfortunately, some groups (often led or constituted by very smart people) do offer such rewards. There are plenty of such groups (even in high-stakes business and government settings). Again, the problem is that people are silencing themselves to fit in and are not revealing the important information they actually have.

TABLE 3-2


Conformity and cascades

[Table image not reproduced. It shows the sequence of private draws and public announcements in the conformity-rewarding version of the experiment; the actual urn was B.]

Source: Angela A. Hung and Charles R. Plott, “Information Cascades: Replication and an Extension to Majority Rule and Conformity-Rewarding Institutions,” American Economic Review 91 (2001): 1508, 1515.


Availability Cascades

Thus far, we have assumed that group members are, in a sense, entirely rational. They are listening to one another and paying attention to the informational signals given by their predecessors. True, they care about their reputations and they want to be liked and respected, but there is nothing irrational about that.

As we have noted, however, people use judgment heuristics, which can lead them astray, and people are also subject to biases. For purposes of understanding how cascade effects can go wrong, the most important heuristic involves availability. When a particular event is salient, it can lead to availability cascades, by which related ideas spread rapidly from one person to another, eventually producing a widespread belief within a group, whether large or small.21

A by-product of availability is associative blocking or collaborative fixation, whereby strong, highly associated ideas block the recall of other informative ideas. This phenomenon is a big problem when a group sets itself the task of generating creative solutions. The creative, novel thought processes of individual members are suppressed by the strong, available ideas generated by other members. Effective brainstorming requires tactics to avoid this troublesome side effect of availability (see further discussion in part 2).

In the area of risk, availability cascades are common. A particular event—involving a dangerous pesticide, an abandoned hazardous waste dump, a nuclear power accident, an act of terrorism—may become well known to the group, even iconic. If so, the event will alter the group’s perceptions of a process, a product, or an activity. In business, availability cascades are familiar. Reports of an event, a dramatic success, or a failure may spread like wildfire within or across firms, leading to a judgment about other apparently similar events or products. If a movie (Star Wars?), a television show (Breaking Bad?), or a book (Harry Potter?) does well, businesses will react strongly, eagerly looking for a proposal or a project that seems similar. And indeed, cascades are highly visible in the television industry in sudden bursts of shows about teenaged vampires or privileged housewives, as managers make decisions that reflect recent successes.22

Government officials are subject to availability cascades as well. A bad event (say, Neville Chamberlain’s capitulation to Hitler, the Vietnam War, or the financial crisis of 2008) may have long-term effects, simply because it is highly salient. Reminded of it when a current crisis arises, people act to avoid a repeat of the historical outcome.

Of course, the underlying judgments might be correct. Sometimes the past really is prologue. But recall that the availability heuristic is highly unreliable. People might know of an instance in which a risk came to fruition, but the instance may not be representative. Pesticides might be safe in general even if a particular pesticide is not. A well-publicized event involving an abandoned hazardous waste dump may suggest that abandoned hazardous waste dumps are far more dangerous than they actually are.

Availability cascades make businesses and government unrealistically optimistic about some possibilities and unrealistically pessimistic about others. And when availability cascades are involved, groups can move in terrible directions. We’re not speaking of groupthink here; the problem is that a well-known cognitive bias is interacting, in destructive ways, with social influences.
