CHAPTER 10

What They Are and Why They’re Important

Everyone agrees that logic and argumentation are important for critical thinking, and an important component of improving one’s critical thinking skills is background knowledge ....

There are different types of background knowledge that are relevant to critical thinking in different ways. One of the most important types of background knowledge is knowledge of how our minds actually work—how human beings actually think and reason, how we actually form beliefs, how we actually make decisions.

There are a lot of different scientific fields that study how our minds actually work. These include behavioral psychology, social psychology, cognitive psychology, cognitive neuroscience, and other fields. Over the past 40 years we’ve learned an awful lot about human reasoning and decision making.

A lot of this research was stimulated by the work of two important researchers, Daniel Kahneman and Amos Tversky, going back to the early 1970s. They laid the foundations for what is now called the “biases and heuristics” tradition in psychology.1

Normative Versus Descriptive Theories of Human Reasoning

To get a feel for the importance of this research, let’s back up a bit. When studying human reasoning you can ask two sorts of questions. One is a purely descriptive question: how do human beings in fact reason? The other is a prescriptive or normative question: how should human beings reason? What’s the difference between good reasoning and bad reasoning?

When we study logic and argumentation, we’re learning a set of rules and concepts that permit us to answer this second question—how should we reason ....

... [O]ver time, we’ve developed a number of different theories of rationality that give us norms for correct reasoning in different domains.

This is great, of course, as these are very powerful and useful tools, some of which are the focus of this book.

Now, when it comes to the study of how human reasoning actually works, before Kahneman and Tversky’s work in the 1970s there was a widely shared view that, more often than not, the mind processes information in ways that mimic the formal models of reasoning and decision making familiar from our normative theories: formal logic, probability theory, and decision theory.

This “widely shared view” has influenced the methods marketing researchers often use. For example, three methods researchers commonly employ to model brand choice are (1) conjoint analysis, (2) regression analysis, and (3) the combination of brand attribute performance ratings with attribute “importance” ratings. Findings from behavioral economics suggest that these kinds of models do not tell us the complete story of how consumers select brands; in fact, they may weave a false story. In short, consumers are not as “rational” as we think they are.
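To make concrete what such a “rational” model assumes, here is a minimal sketch of method (3), an importance-weighted attribute model, in Python. The brands, attributes, weights, and ratings are hypothetical, invented purely for illustration.

    # Minimal sketch of a compensatory brand choice model: overall score =
    # sum of attribute importance weight x attribute performance rating.
    # All brands, attributes, and numbers here are hypothetical.

    # Attribute "importance" weights, rescaled to sum to 1
    importance = {"price": 0.40, "quality": 0.35, "brand_trust": 0.25}

    # Attribute performance ratings for each brand (1-10 scale)
    performance = {
        "Brand A": {"price": 6, "quality": 9, "brand_trust": 8},
        "Brand B": {"price": 9, "quality": 6, "brand_trust": 5},
        "Brand C": {"price": 7, "quality": 7, "brand_trust": 7},
    }

    def weighted_score(ratings, weights):
        """Importance-weighted sum of a brand's attribute ratings."""
        return sum(weights[attr] * rating for attr, rating in ratings.items())

    scores = {brand: weighted_score(ratings, importance)
              for brand, ratings in performance.items()}

    # The model predicts the consumer chooses the highest-scoring brand.
    print(scores)                       # approx: Brand A 7.55, Brand B 6.95, Brand C 7.00
    print(max(scores, key=scores.get))  # Brand A

The behavioral findings mentioned above suggest that real consumers often depart from this kind of additive calculation, relying instead on context and mental shortcuts, which is why such models can weave a false story.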

What Kahneman and Tversky showed is that, more often than not, this is NOT the way our minds work: there is a gap between how our normative theories say we should reason and how we in fact reason.

This gap can manifest itself in different ways, and there’s no single explanation for it. One reason, for example, is that in real-world situations the reasoning processes prescribed by our normative theories of rationality can be computationally very intensive. Our brains would need to process an awful lot of information to implement our best normative theories of reasoning. But that kind of information processing takes time, and in the real world we often need to make decisions much more quickly, sometimes in milliseconds. You can imagine this time pressure being even more intense if you think about the situations facing our Homo sapiens ancestors: if a big animal is charging you and you wait too long to figure out what to do, you’re dead.

This is an important point for marketers. Often, marketing managers need to make relatively quick decisions based on either too little or too much information. This frequently leads to “rules of thumb” that we fall back on to save time. These decision-making shortcuts are called “heuristics”: practical, time-saving processes used to make quick decisions that are not necessarily optimal or perfect. Sometimes these heuristics employ one or more of the 60 logical fallacies discussed in this book, and when they do, the likelihood of a bad decision increases. Note that a “heuristic” can also refer to ways “for thinking about phenomena or questions in a way that might give you new insights and ideas,”2 which can be used in argument development. Our Think Better piece on The Five Whys is an example.

Biases and Heuristics (Rules of Thumb)

So, the speculation is that our brains have evolved various shortcut mechanisms for making decisions, especially when the problems we’re facing are complex, we have incomplete information, and there’s risk involved. In these situations we sample the information available to us, we focus on just those bits that are most relevant to our decision task, and we make a decision based on a rule of thumb, or a shortcut, that does the job.

These rules of thumb are the “heuristics” in the “biases and heuristics” literature.

Two important things to note: One is that we’re usually not consciously aware of the heuristics that we’re using, or the information that we’re focusing on. Most of this is going on below the surface.

The second thing to note is that these heuristics aren’t designed to give us the best solutions to our decision problems, all things considered. What they’re designed to do is give us solutions that are “good enough” for our immediate purposes.

But “good enough” might mean “good enough in our ancestral environments where these cognitive mechanisms evolved.” In contexts that are further removed from those ancestral environments (say, in a marketing committee), we can end up making systematically bad choices or errors in reasoning, because we’re automatically, subconsciously invoking a heuristic in a situation where it isn’t necessarily the best rule to follow.

So, the term “bias” in this context refers to this systematic gap between how we’re actually disposed to behave or reason, and how we ought to behave or reason, by the standards of some normative theory of reasoning or decision-making. The “heuristic” is the rule of thumb that we’re using to make the decision or the judgment; the “bias” is the predictable effect of using that rule of thumb in situations where it doesn’t give an optimal result.

Some examples of heuristics that we’ve observed in marketing decision-making are as follows:

  • “What’s happened in the past will happen tomorrow”: For example, making financial projections based simply on historical trends, without taking into account other factors that can affect future financial performance, such as projected GNP growth or the prime interest rate (a minimal sketch of this shortcut follows this list).

  • Argument to moderation: “Splitting the difference” between alternative marketing projections (e.g., a sales forecast) because one does not have sufficient time to develop a better forecast supported by good evidence and logic.

  • “Shooting from the hip”: Simply guessing what action to take because one does not have enough time to properly investigate a particular issue.

  • Alleged certainty: Relying on conventional wisdom—“Everyone knows that ...”—to make a decision as opposed to thinking through a problem in greater detail.
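As promised above, here is a minimal sketch of the first heuristic on this list, naive trend extrapolation. The quarterly sales figures are hypothetical; the point is simply that the projection uses the historical trend alone and ignores factors such as GNP growth or the prime interest rate.

    # Minimal sketch of the "what's happened in the past will happen tomorrow"
    # heuristic: project future sales from the historical trend alone.
    # The figures are hypothetical, for illustration only.

    quarterly_sales = [100.0, 104.0, 109.0, 113.0]  # last four quarters, $000s

    def naive_trend_projection(history, periods_ahead=1):
        """Extend the average period-over-period change; ignores GNP growth,
        interest rates, competitive moves, and every other external factor."""
        avg_change = (history[-1] - history[0]) / (len(history) - 1)
        return history[-1] + avg_change * periods_ahead

    print(naive_trend_projection(quarterly_sales))     # next quarter, approx 117.3
    print(naive_trend_projection(quarterly_sales, 4))  # one year out, approx 130.3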

An Example: The Anchoring Effect

This is all pretty general, so let me give you an example of a cognitive bias and its related heuristic. This is known as the anchoring heuristic, or the anchoring effect.

Kahneman and Tversky did a famous experiment in the early 1970s in which they asked a group of subjects to estimate the percentage of countries in Africa that are members of the United Nations. Of course, most subjects aren’t going to know this; for most of us it would just be a guess.

But for one group of subjects, they asked the question “Is this percentage more or less than 10 percent?” For another group of subjects, they asked the question “Is it more or less than 65 percent?”

The average answers of the two groups differed significantly. In the first group the average answer was 25 percent; in the second group it was 45 percent. The group given the higher reference number estimated higher.

Why? Well, this is what seems to be going on: subjects’ estimates are “anchored” to the number they’re exposed to. Give them a high number and they estimate higher; give them a lower number and they estimate lower.

So, the idea behind this anchoring heuristic is that when people are asked to estimate a probability or an uncertain number, rather than try to perform a complex calculation in their heads, they start with an implicitly suggested reference point—the anchor—and make adjustments from that reference point to reach their estimate. This is a shortcut; it’s a rule of thumb.
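To see how anchoring-and-adjustment can produce this pattern, here is a toy simulation in Python. It is not a model from Kahneman and Tversky’s work; the spread of private guesses and the adjustment fraction are assumptions chosen purely for illustration. Each simulated respondent starts at the suggested anchor and adjusts only partway toward his or her own private guess, so the group average is pulled toward whichever anchor was given.

    import random

    # Toy simulation of the anchoring-and-adjustment shortcut (illustrative
    # only). Each respondent adjusts only partway from the anchor toward a
    # private guess, so group averages end up biased toward the anchor.

    def anchored_estimate(anchor, private_guess, adjustment=0.5):
        """Start at the anchor and move only a fraction of the way to the guess."""
        return anchor + adjustment * (private_guess - anchor)

    def average_estimate(anchor, n=1000, seed=0):
        rng = random.Random(seed)
        guesses = [rng.uniform(20, 60) for _ in range(n)]  # hypothetical guesses
        estimates = [anchored_estimate(anchor, g) for g in guesses]
        return sum(estimates) / n

    print(round(average_estimate(anchor=10), 1))  # pulled down toward 10
    print(round(average_estimate(anchor=65), 1))  # pulled up toward 65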

Now, you might think that in this case it’s not just the number but also the way the question is phrased that biases the estimates. The subjects are assuming that the researchers know the answer and that the reference number is therefore related in some way to the actual answer. But researchers have redone this experiment many times in different ways.

In one version, for example, the subjects are asked the same question, to estimate the percentage of African nations in the United Nations, but before they answer, the researcher spins a roulette wheel in front of the group, waits for it to land on a number so they can all see it, and then asks them whether the percentage of African nations is larger or smaller than the number on the wheel.

The results are the same: if the number is high, people estimate high; if the number is low, people estimate low. And in this case the subjects couldn’t possibly assume that the number on the roulette wheel had any relation to the actual percentage of African nations in the United Nations. But their estimates were anchored to this number anyway.

Results like these have proven to be really important for understanding how human beings process information and make judgments on the basis of that information. The anchoring effect shows up in strategic negotiation behavior, in consumer shopping behavior, and in the behavior of stock and real estate markets. It shows up everywhere; it’s a very widespread and robust effect.

Note, also, that this behavior is, by the standards of our normative theories of correct reasoning, systematically irrational.

Linda Henman, a Missouri-based business consultant, provides an example of when the anchoring effect can have a negative outcome: if a team leader asks his subordinates whether marketing efforts in a particular region should be increased by, say, 20 percent, his employees will use that number as a cue. Perhaps the figure should be even higher, Henman notes, or perhaps the company needs to eliminate marketing efforts in that region altogether. By using the anchor of 20 percent, the boss has already planted a seed in his subordinates’ minds, one that will be difficult to erase.3

Why This Is Important

So this is an example of a cognitive bias. Now, this would be interesting but not deeply significant if the anchoring effect were the only cognitive bias we’ve discovered. But if you go to the Wikipedia entry under “list of cognitive biases,” you’ll find a page that lists just over a hundred of these biases, and the list is not exhaustive. I encourage everyone to check it out.

So what’s the upshot of all this for us as critical thinkers?

At the very least, we all need to acquire a certain level of cognitive bias “literacy.” We don’t need to become experts, but we should all be able to recognize the most important and most discussed cognitive biases. We should all know what “confirmation bias” is, what “base rate bias” is, what the “gambler’s fallacy” is, and so on. These are just as important as understanding the standard logical fallacies.

Why? Because, as critical thinkers, we need to be aware of the processes that influence our judgments, especially if those processes systematically bias us in ways that make us prone to error and bad decisions ....

Chapter Takeaways

  • A cognitive bias is a systematic error in judgment. A typical example of cognitive bias is confirmation bias, where we seek out or filter information so that it conforms to our own world view.

  • Most of this book—the logical fallacies, for example—focuses on prescriptive or normative rules of thinking: “Don’t use logical fallacies!” But to become good critical thinkers, we need to know something about, and be sensitive to, how human beings actually reason. Studying cognitive biases is a good first step on this journey.

  • Historically, marketers and economists viewed consumers as being logical decision-makers who sought to maximize value in their marketing transactions. The primary factors these social scientists studied were products, their features and benefits, and price.

  • Since the 1970s, research has slowly accumulated in the fields of behavioral economics, neuroscience, and psychology that strongly suggests the consumer decision process is not only more complex than we thought, but also affected by other factors, such as the context in which consumers make decisions and the heuristics they use to simplify the decision process.

  • We all need to acquire a basic level of cognitive bias literacy to truly become good critical thinkers. This and the following chapters give you a good introduction to this field. Additional suggested readings are given in the last chapter of this book.
