CHAPTER 13

Confirmation Bias and the Evolution of Reason

Last lecture I talked a lot about confirmation bias, this tendency we have to filter and interpret evidence in ways that reinforce our beliefs and expectations. And I argued that one way to think about science and scientific methodology is as a set of procedures that function to neutralize the distorting effects of confirmation bias (among other cognitive biases) by forcing us to seek out and weigh even the evidence that might count against our beliefs and expectations.

There are two sides to cognitive bias research. There’s the science that describes the effects of these biases on our judgment and behavior, and there’s the science that tries to explain why we behave in this way, that tries to uncover the psychological or neurological or social mechanisms that generate the behavior.

Everyone agrees that confirmation bias is a very real phenomenon; the descriptive part is well established. But not everyone agrees on the explanation for why we’re so prone to this bias, what mechanisms are at work to generate it. And when there’s disagreement at this level, there can be disagreement about how to best counteract the effects of confirmation bias. So as critical thinkers we should be interested in these debates, for this reason, and because they’re relevant on a deeper level to how we should think of ourselves as rational beings ....

An Evolutionary Explanation of Confirmation Bias

So from this perspective, we can ask the broad question, why did human reason evolve? And by “reason” here I mean a very specific ability, namely, the ability to generate and evaluate arguments, to follow a chain of inferences, and to construct and evaluate chains of inferences that lead to specific conclusions. Human rationality can be defined much more broadly than this, but we’re focusing on this specific component of rationality for the time being.

Now, if we assume that this ability to construct and evaluate arguments is an evolutionary adaptation of some kind, the question then becomes, what is this ability an adaptation for?

The Simple and Obvious Story

Well, here’s one simple and obvious way to think about it. Our ability to construct and evaluate arguments evolved because it has survival value, and it has survival value because it helps us to arrive at truer beliefs about the world, and to make better decisions that further our goals. This ability to reason is a general purpose tool for constructing more accurate representations of the world and making more useful and effective decisions. We assume that, in general, ancestral humans who are better able to reason in this way will have a survival advantage over those who aren’t. So we expect that a higher percentage of individuals with this trait will survive and reproduce, and over time the trait will come to dominate the population, and that’s why the trait evolved and persists in human populations.

Confirmation Bias: A Problem for the Simple and Obvious Story

Now, if we accept this simple story, then we have an immediate problem. The problem is that when we look at the psychological literature, we see that human beings are often very bad at following and evaluating arguments, and they’re often very bad at making decisions. This is the take-home message of a good deal of the cognitive bias research that’s been conducted over the past 40 years!

To take the obvious example, human beings are systematically prone to confirmation bias. Confirmation bias leads us to disproportionately accept arguments that support our beliefs and reject arguments that challenge our beliefs, and this leads to errors in judgment; we think our beliefs are more justified than they really are.

Now, from this simple evolutionary stance, the existence of confirmation bias is a bit of a puzzle. If reason evolved to improve the quality of our individual beliefs and decisions, then what explains the persistence of confirmation bias, and other cognitive biases that undermine the quality of our individual beliefs and decisions?

The Argumentative Theory of Reason

A newer approach to these questions, one that has been getting some attention recently, tries to resolve this puzzle about confirmation bias. It’s known as the argumentative theory of reason, and it claims that the central adaptive function of human reasoning is to generate and evaluate arguments within a social setting—to generate arguments that will convince others of your point of view, and to develop critical faculties for evaluating the arguments of others.

Now this might not seem like a radical hypothesis, but I want you to note the contrast between this view and the previous one we just described. The previous view was that the central adaptive function of human reason was to generate more accurate beliefs and make better decisions for individuals. In other words, the function is to improve the fit between individual beliefs and the world, resulting in a survival advantage to the individual.

The argumentative theory of reason rejects this view, or at least it wants to seriously modify this view. It says that human reason evolved to serve social functions within human social groups.

What are these functions?

Well, imagine two ancestral humans who are trying to work together, to collaborate to find food, defend against aggressors, raise children, and so on. This all works fine when both parties agree on what they want and how to achieve it. But if a disagreement arises, then their ability to work together is compromised. They need to be able to resolve their disagreement to get back on track.

Now let’s imagine that this pair of ancestral humans lacks the ability to articulate the reasons for their respective views, or the ability to evaluate the reasons of the other. They’re stuck, they can’t resolve their disagreement, and because of this, their collaboration will probably fail and their partnership will dissolve.

Now imagine another pair of ancestral humans in the same situation who have the ability to articulate and evaluate their reasons to one another. They have the potential to resolve their disagreement through mutual persuasion. This pair is more likely to survive as a pair, and to reap the benefits of collaboration.

And for this reason, this pair will likely out-compete groups or individuals who lack the ability to argue with one another. And according to this theory, that’s the primary reason why the ability to reason evolved in human populations—to serve the needs of collaboration within social settings, not to improve the quality of individual beliefs or to track the truth about the world.

How the Argumentative Theory Explains Confirmation Bias

The argumentative theory of reason is the product of two French researchers, the well-known anthropologist and cognitive psychologist Dan Sperber and his former student Hugo Mercier.1

One of the reasons they offer in support of their theory is that it helps to explain the existence of confirmation bias.

How does it do this? Let’s walk through this, it’s kind of interesting. In a social setting where there are lots of different individuals with different beliefs and values, everyone is required to play two roles at different times: the role of the convincer—the one giving the argument intended to persuade—and the role of the convincee—the one who is the recipient of the argument and the intended object of persuasion.

Now, if your goal as a convincer is to use reason to persuade others, then a bias toward confirming arguments and confirming evidence is going to serve you well. As a convincer your goal isn’t the impartial weighing of evidence to get at the truth, it’s to assemble reasons and evidence that will do the job of persuading others to accept your conclusion.

Adroit use of confirmation bias may partially explain successful politicians and executives who climb the corporate ladder quickly.

Things are different when you’re the convincee, the one who is the object of persuasion. In this context you can imagine two extreme cases for how you should handle these attempts to persuade you. On the one hand, you could decide to accept everything that other people tell you. But that’s not going to serve your needs very well—you’ll be pulled in different directions, you won’t have stable beliefs, and you won’t be effective at asserting your own point of view.

On the other hand, you could decide to reject everything that other people tell you that doesn’t conform to your beliefs. This is the ultra-dogmatic position—you stick to your guns come what may. This has some obvious advantages. You’ll have a stable belief system, you’ll attract collaborators who think the way you do, and so on.

But it’s still not ideal, because the ultra-dogmatic stance runs the risk of rejecting arguments with true conclusions that would actually improve your condition if you were to accept them.

A better compromise position is one where there’s a default dogmatism, where your initial reaction is to resist arguments that challenge your beliefs, but this default dogmatism is tempered by a willingness and ability to evaluate arguments on their merits, and thereby make yourself open to rational persuasion.

From an evolutionary standpoint, this compromise position, which you might call “moderate dogmatism,” seems to offer the maximum benefits for individuals and groups.

Now when you combine the optimal strategy of the convincer, which is biased toward arguments and evidence that support your beliefs, and the optimal strategy of the convincee, which is biased against arguments and evidence that challenge your beliefs, you end up with an overall strategy that looks an awful lot like the confirmation bias that psychologists have been documenting for decades.

And this is what Sperber and Mercier are saying. When we think of reason as serving the goals of social persuasion, confirmation bias shouldn’t be viewed as a deviation from the proper function of human reason. Rather, it’s actually a constitutive part of this proper function, this is what it evolved FOR. To use a computer software analogy, confirmation bias isn’t a “bug,” it’s a “feature.”

Consequences of This View

Now, I don’t know if this view is right. But I do find it provocative and worth thinking about. I mentioned earlier that different views on confirmation bias can result in different views of how best to neutralize or counteract it. Sperber and Mercier argue that their view has some obvious consequences along these lines.

For one, their view implies that the worst case scenario is individuals reasoning alone in isolation. Under these conditions we’re most prone to the distorting effects of confirmation bias.

A much better situation is when individuals reason in groups about a particular issue. This way everyone can play both the role of the convincer and the convincee, and we can take advantage of our natural ability to evaluate the quality of other people’s arguments, and others can evaluate the quality of our arguments. We should expect that reasoning in groups like this will result in higher quality judgments than reasoning in isolation, and lots of studies on collective reasoning do bear this out.

Next time you’re in a group setting debating a marketing issue, have one or a few colleagues play the Devil’s Advocate and try to show that the prevailing inductive argument is weak or not cogent. (Remember, nearly all marketing arguments are inductive, not deductive, arguments.) “Weak” in this context means that the premises, even if true, do not make the conclusion probable. “Cogent” means the argument is strong and its premises are true, or at least plausible to your audience. If time allows, give the devil’s advocates time to research the subject matter and come back to the group with their strongest counterarguments. This approach can also be helpful in “war gaming” your competitors’ responses to your marketing mix.

But of course not all groups are equal. If a group is very homogeneous, with lots of shared beliefs and values and background assumptions, then the benefits of group reasoning are more limited because there are collective confirmation biases that won’t be challenged.

So, if we’re looking to maximize the quality of our judgments and our decisions, the better situation is when the groups are not so homogeneous, when there’s a genuine diversity of opinion represented within the group, and you can expect that at least some people will start off disagreeing with you. Under these conditions, the benefits of group reasoning and group argumentation are greatest—the result will be judgments that are least likely to be distorted by confirmation bias ....

Chapter Takeaways

  • What causes confirmation bias? It may be some baggage that we’re carrying from our evolutionary past.

  • Developing short-cut mechanisms for decision-making and pattern-recognition abilities likely helped our ancestors avoid lions and other dangerous predators. But what worked in the jungle may not work in the boardroom. Relying on fixed decision rules and pattern-recognition abilities may lead us down the wrong path.

  • Another explanation for confirmation bias lies in the argumentative theory of reason. Learning how to develop and evaluate arguments in a social setting gave small human groups the skills they needed to cooperate and survive.

  • One of the best ways to counter our natural inclination toward confirmation bias is to discuss topics in heterogeneous groups. If everyone has similar backgrounds and experiences, not much will be challenged. Try to assemble people with different backgrounds and experiences.

  • Why should you care about this? There are at least two reasons. First, to the extent that we, as critical thinkers, are aware of the factors that bring about cognitive biases, we will be less likely to fall prey to them and develop weak or invalid arguments. Second, a basic understanding of how evolutionary forces may have contributed to this situation gives us a better understanding of our nature as human beings.
