Chapter 1. How We Think and Act

Behavior Change…

All around us, people are trying to change our behavior and one another's. Negative examples are often obvious: ads that entice us to buy stuff we don't need, micromanaging bosses at work, or apps that try to swallow our attention and time. Positive examples are just as pervasive, just not as obvious. The loving parent teaching a child to share. Support programs helping addicts break free from their demons. Apps helping us track our weight and encouraging us as we diet and exercise.

In many ways, we're all in the 'behavior change' business. We are a social species, and there's little we can accomplish on our own. In order to be successful at our goals, even at altruistic goals of helping another person succeed, someone must do something differently. To effect change is to effect behavior change.

Yet we rarely talk about it that way. In the product world, we talk about features delivered, user needs met, and so forth. Those things are all important, certainly. But none of them matter unless people adopt and apply our products: that is, unless our users change their behavior in a meaningful way.

Perhaps we don't talk about behavior change so directly because it's uncomfortable: we don't want to be seen as manipulative or coercive. So, we end up with sanitized conversations distanced from real people changing their behavior because of our products and communications: Key Performance Indicators for adoption and retention; Objectives and Key Results for click-through rates and usage.

It shouldn’t be that way. When we don’t talk about what we’re actually doing, we are both less effective at helping others when we should, and more likely to try to change behavior in ways we shouldn’t. This book is about designing products intentionally to change behavior. How to ethically and thoughtfully help others succeed, without, I hope, falling into trickery or manipulation.

In this book we’ll have an open discussion: about how to help people decide what to do, to help them act on their intentions and personal goals, to build products that influence that process, and to assemble and run a team that does so. Nothing presented here is perfect, but I hope this book can help you make better products and better serve your users.

And Behavioral Science

One of the best toolsets to accomplish this task – intentionally designing for behavior change – comes from behavioral science. In addition to being helpful, behavioral science is simply awesome.

Behavioral science is an interdisciplinary field that combines psychology and economics, among other disciplines, to gain a more nuanced understanding of how people make decisions and translate those decisions into action. Behavioral scientists have studied a wide range of behaviors, from saving for retirement to exercise.1 Along the way, they've found ingenious ways to help people take action when they would otherwise procrastinate or struggle to follow through.

One of the most active areas of research in behavioral science is how our environment affects our choices and behavior, and how a change in that environment can then change those choices and behaviors. Environments can be thoughtfully and carefully designed: to help us become more aware of our choices, to shape our decisions for good or for ill, and to spur us to take action once we’ve made a choice. We call that process choice architecture, or behavioral design.

Over the last decade, there has been a tremendous growth of research in the field and also of best-selling books that share its lessons, including Richard Thaler and Cass Sunstein's Nudge, Daniel Kahneman's Thinking, Fast and Slow, and Dan Ariely's Predictably Irrational.2 They give fun introductions to the field, including how:

  • Turning a staircase into piano steps can make people reframe exercise as fun, and skip the escalator.

  • Putting a picture of a fly in the center of a men’s urinal can help reduce the mess that men make, far more than exhorting them not to make a mess.

  • Giving people many small bags of popcorn makes them eat less of it.3

In fact, Thaler and Kahneman have each won the Nobel Prize largely because of their work in behavioral science.

That said, we’re not trying to recreate Nudge or Predictably Irrational here. This book is about how to apply lessons from behavioral science to product development. In particular, how to help our users do something they want to do, but struggle with. Whether that’s dieting, spending time with their kids, or breaking a social media app’s hold on their lives. It’s about arming you with a straightforward process to design for behavior change.

Some of those lessons are what you'd expect: when designing a product, look out for unnecessary frictions or for areas where a user loses self-confidence. Build habits via repeated action in a consistent context. Some of those lessons are far less expected, and you may not even want to hear them; for example, most products, most of the time, will have no unique impact on their users' lives. For that reason, we need to test early and often, and use rigorous tools to do so. Other lessons are simply fun and surprising: like making text harder to read if it's important that users make a careful and deliberative decision.

With that, let’s dive into a primer on behavioral science!

Behavioral Science 101: How Our Minds Are Wired

Last summer, my family and I were out on vacation and having a great time. One afternoon, though, we decided we'd eaten out way too much, and we wanted something cheaper and more familiar than another restaurant meal. So, we went to a grocery store.

Now, the first thing we looked for was cereal. We found the aisle, and there were far too many options to choose from. As they often do, our kids were running up and down the aisle, pulling and swinging each other around. Somehow, all of that movement makes them unable to hear us telling them to stop. It's clearly loads of fun, right up until they crash into something. So, we had to make a quick decision.

Unfortunately, my kids and I have lots of allergies (my wife, as always, is nearly perfect). My allergies are lethal, and my kids' allergies cause pain but thankfully not too much more. So, as we stood in the aisle, trying to make a choice and keep our kids out of trouble, my wife and I were torn: we simply couldn't read all of the boxes for their ingredients.

Thankfully, we have some simple rules we know to follow. First, any cereal with cartoons on the box is automatically out: those are often crammed full of sugar, and our kids have enough energy already. Second, cereals that are gluten-free (which one of our sons needs) usually proclaim it proudly on the box – easy to scan for. And third, after decades of practice, I have a really useful habit: I automatically pick up food and scan the ingredient list for anything that would kill me. It only takes a split second, and I hardly think about it unless I see something that's a problem.

After a little while, we picked up a nice bag of corn flakes, a box of some granola-like stuff, and went on to the next aisle. No problem. Unfortunately, we did forget to pick up the milk, and a few other things in that aisle. We’d intended to get them, but in the moment, we missed those items on our mental checklist.

Now, when we got home, the granola stuff was actually really good. The corn flakes were simply terrible – in all of the hurry, we missed a key sign: dust on the bag. They’d been sitting there a long time, and everyone else clearly knew not to buy them.

In everyday life and in (true) stories like this one, we can find the core lessons of behavioral science, if we know where to look. I like to start with a basic and often overlooked one: as human beings, we're all limited. We can't instantly know which cereal is best just by thinking about it. We have to take time and energy to sort through the options and make a decision. That time and energy is scarce – if we spend too much time on one thing, there's a cost (like our kids crashing into the shelves). Similarly, we're limited in our attention, our willpower, our ability to do math in our heads, etc. You get the picture.

Our limitations aren't 'bad', per se: they're just facts of life. Personally, I can't even imagine what it would mean to have 'unlimited attention', for example: to be simultaneously aware of absolutely everything at once. That's just not how we're made.

Given these limitations, our minds are really, really good at making the best of what we have. We 'economize' on our time, attention, and mental energy by using simple rules of thumb to make decisions – for example, by excluding cereals with cartoons. As researchers, we call these rules of thumb heuristics. Another way our minds economize is by making split-second nonconscious judgments: for example, nonconscious habits are automated associations in our heads that trigger us to take a particular action when we see a specific trigger (like scanning for deadly ingredients whenever I see unknown food). Habits free up our conscious minds to think about other things.

While these economizing techniques are truly impressive, they aren’t perfect. They falter in two big ways. First, we don’t always make the right decision; for example, sometimes we don’t pay attention to something important (dust on the bag). As researchers, we often call the results of a shortcut going awry a cognitive bias: a systematic difference between how we’d expect people to behave in a certain circumstance, and what they actually do.4 Second, even when we make the right choice, our inherent human limitations mean we don’t always follow through on our intentions (getting the milk). We call that the intention-action gap.

And finally, context matters immensely. It mattered that our kids were running around: we had less of our limited attention to pay to the task (reading ingredients, remembering milk). If milk were in a different aisle, we might have seen it and remembered it. If our kids weren’t running around … never mind. That wouldn’t happen.

So, if I were to put decades of behavioral research into a few bullet points (please forgive me, my fellow researchers!), it would be these:

  • We’re limited beings: in attention, time, willpower, etc.

  • We’re of two minds: our actions depend both on conscious thought and on nonconscious reactions, like habits.

  • In both cases, our minds use shortcuts to economize and make quick decisions, because of these limitations.

  • Our decisions and our behavior are deeply affected by the context we’re in: worsening or ameliorating our biases and our intention-action gap.

  • One can cleverly and thoughtfully design a context to improve people’s decision making and lessen the intention-action gap.

Let’s look at each of these points in a bit more detail.

We’re Limited

Note

This section draws upon Wendel (2019: SD)

Who hasn’t forgotten something at some point in their lives? Heck, who hasn’t forgotten something in the last hour, or the last five minutes?

Forgetfulness is one of our many human frailties. Personally, the older I get, the longer that list seems to grow. There are sadly many ways in which our minds are limited and lead us to make choices that aren’t the best, including limited attention, limited cognitive capacity, and limited memories.

These limitations are intertwined. In terms of our attention, there are nearly an infinite number of things we could be paying attention to at any moment. We could be paying attention to the sound of our own heartbeat, the person who is trying to speak to us, the interesting conversation someone else is having near us, or the report that’s overdue and we need to complete. Unfortunately, researchers have shown again and again that our conscious minds can really only pay proper attention to one thing at a time. Despite all of the discussion in the popular media about multitasking, multitasking is a myth.5 Certainly we can switch our attention back and forth; we can move from focusing on one thing to focusing on another — and we can do so again and again and again. But the reality is, switching focus frequently is costly: it slows us down, and it makes it harder for us to think clearly. Given that we can only focus on one thing at a time, and that there are so many things that we could focus on (many of them urgent and interesting), it’s no wonder that sometimes we aren’t thinking about what we’re doing.

Similarly, our cognitive capacity is limited: we simply can’t hold many unrelated ideas or pieces of information in our minds at the same time. You may have heard the famous story about why phone numbers in the U.S. are seven digits plus an area code: it’s because researchers found that we could hold seven unrelated numbers in our heads at the same time, plus or minus two.6 And, of course, there are so many other ways in which our cognitive capacity is limited. For one, we have a particularly difficult time dealing with probabilities and uncertain events, and with realistically predicting the likelihood of something happening in the future. We tend to over-predict rare but vivid and widely reported events, like shark attacks, terrorist attacks and lightning strikes.7

In addition, we can become overwhelmed or paralyzed when faced with a wide range of options, even as we consciously seek out more choices and options. Researchers call this the paradox of choice:8 our conscious minds believe that having more choices is almost always better, but when it actually comes time to make a decision and we’re faced with our limited cognitive capacity and the difficulty of the choice ahead of us, we balk.

Lastly, when it comes to our memories, they simply aren’t perfect and nothing is going to change that. And, for most of us, having a “not perfect” memory is a significant understatement. Our memories usually aren’t crystal-clear videos, but rather are a set of crib notes from which we reconstruct mental videos and pictures. We remember events that occur frequently (like eating breakfast) in a stylized format, losing the details of the individual occurrences and remembering instead a composite of that repeated experience. Additionally, in some circumstances, we remember the peak and the end of an extended experience, not a true record of its duration or intensity.9

What do all of these cognitive limitations mean? They are important to product people for two main reasons. First, these cognitive limitations mean that sometimes our users don’t make the best choices: even when something is in their best interest. It’s not that they’re bad people; it’s that they are, simply, people. They get distracted, they forget things, they get overwhelmed. We shouldn’t interpret a few bad choices as a sign that they are fundamentally uninterested in doing better (or using our product); instead, it’s just that their simple human frailties may be at work.

Second, our limitations matter because our minds cleverly work around them: by having two semi-independent systems in the brain, and by using lots and lots of shortcuts. When developing products and communications, we should understand those shortcuts, and intentionally work with them or around them.

We’re of Two Minds

You can think about the brain as having two types of thinking—one is deliberative and the other is reactive.10 Our reactive thinking (aka “intuitive”, or “System 1”) is blazingly fast and automatic, but we’re generally not conscious of its inner workings. It uses our past experiences and a set of simple rules of thumb to almost immediately give us an intuitive evaluation of a situation—an evaluation we feel through our emotions and through sensations around our bodies, like a “gut feeling” (Damasio et al. 1996). It’s generally quite effective in familiar situations, where our past experiences are relevant, and does less well in unfamiliar situations.

Our deliberative thinking (aka “conscious”, or “System 2”) is slow, focused, self-aware and what most of us consider “thinking.” We can rationally analyze our way through unfamiliar situations, and handle complex problems with System 2. Unfortunately, System 2 is woefully limited in how much information it can handle at a time – like struggling to hold more than seven unrelated numbers in short-term memory at once! It thus relies on System 1 for much of the real work of thinking. These two systems can work independently of each other, in parallel, and can disagree with one another—like when we’re troubled by the sense that, despite our careful thinking, “something is just wrong” with a decision we’ve taken.11

What this means is that we’re often not ‘thinking’ when we act: At least, we’re not choosing consciously. Most of our daily behavior is governed by our intuitive mode. We’re acting on habit (learned patterns of behavior), on gut instinct (blazingly fast evaluations of a situation based on our past experiences), or on simple rules of thumb (cognitive shortcuts or heuristics built into our mental machinery).12 Researchers estimate that roughly half of our daily lives are spent executing habits and other intuitive behaviors, and not consciously thinking about what we’re doing (see Wood 2019; Dean 2013). Our conscious minds usually only become engaged when we’re in a novel situation, or when we intentionally direct our attention to a task.13

Unfortunately, our conscious minds believe that they are in charge all the time, even when they aren’t. Jonathan Haidt (2006) builds on the Buddha’s metaphor of a rider and an elephant to explain this idea. Imagine that there is a huge elephant, with a rider sitting atop it. The elephant is our immensely powerful but uncritical, intuitive self. The rider is our conscious self, trying to direct the elephant where to go. The rider thinks it’s always in charge, but it’s the elephant doing the work; if the elephant disagrees with the rider, the elephant usually wins.

There are fascinating studies of people whose left and right brains have been surgically separated and can’t (physically) talk to one another. The left side makes up convincing but completely fabricated stories about what the right side is doing (Gazzaniga and Sperry 1967). That’s the rider standing on top of an out-of-control elephant crying out that everything is under control!14 Or, more precisely, crying out that every action that the elephant takes is absolutely what the rider wanted him to do—and the rider actually believes it.

Even though we’re not necessarily choosing what we do, we may still be thinking—even when we’re watching TV or daydreaming. The point is that we can do one thing and think about something else at the same time. We might be walking to the office, but we’re actually thinking about all of the stuff we need to do when we get there. The rider is deeply engaged in preparing for the future tasks, and the elephant is doing the work of walking. In order for behavior change to occur, we need to work with both the rider and the elephant (Heath and Heath 2010).

Figure 1-1. While the mind consciously thinks about what needs to be done at work, the subconscious mind keeps the body walking (habits and skills), avoids shadowy alleys (an intuitive response), and follows the sweet smell of a bakery (habit).

We Use Shortcuts, Part 1: Biases and Heuristics

Both our conscious minds and our nonconscious minds rely heavily on shortcuts to make the best of our limited faculties. Our minds’ myriad shortcuts (heuristics) help us sort through the range of options we face on a day-to-day basis and make rapid, reasonable decisions about what to do. Here are a few of the ones we commonly use:

Descriptive norms

If we aren’t sure what to do, we look at what other people are doing and try to do the same (a.k.a. descriptive norms).15 This is one of our most basic shortcuts. For example: “People here are drinking and having a good time, so it’s OK if I do as well.”

Availability heuristic

When things are particularly easy to remember, we believe they are more likely to occur.16 For example, if I’d recently heard in the news about a school shooting, I’d naturally think that it’s much more common than it actually is.

Halo effect

If we have a good assessment about someone (or something) overall, we sometimes judge other characteristics of the person (or thing) too positively — as if they have a “halo” of skill and quality.17 For example, if we like someone personally, we might overestimate their skill at dancing, even if we knew nothing about their dancing ability.

Confirmation bias

We tend to seek out, notice and remember information already in line with our existing thinking.18 For example, if someone has a strong political view, they may notice and remember news stories that support that view and forget those that don’t. In a sense, this tendency allows our mind to focus on what appears to be relevant in a sea of overwhelming information. (It’s also quite troubling, since we ignore new information that might help us gain a truer picture of the world or try new things.)

IKEA effect

When we invest time and energy in something — even if our contribution is objectively quite small — we tend to value the resulting item or outcome much more.19 For example: after we’ve assembled IKEA furniture, we often value it more than similar furniture someone else assembled (even if it’s of higher quality) — our sweat equity doesn’t matter in terms of market value, but it does to us.

There are over a hundred of these shortcuts (heuristics) or other tendencies of the mind (biases) that researchers have identified. Unfortunately, these shortcuts can also lead us astray as we try to make good choices in our lives. For example, if you’re a religious person living in a place where people don’t speak about religion, descriptive norms apply a subtle (or not-so-subtle) pressure to avoid doing so yourself. Or a homeless person might look and smell dirty, and the (negative) halo effect could lead others to think negatively about them; they might see the person as less honest and less smart than they really are. While I’ve mentioned some negative outcomes from our shortcuts and biases, it’s important to understand that, at their root, our shortcuts are clever ways to handle the limited resources that our minds have.

Let’s take a closer look at another way in which our minds economize: habits.

We Use Shortcuts, Part 2: Habits

We use the term “habit” loosely in everyday speech to mean all sorts of things, but a concrete way to think about them is this: a habit is a repeated behavior that’s triggered by cues in our environment. It’s automatic—the action occurs outside of conscious control, and we may not even be aware of it happening.20 Habits save our minds work; we effectively outsource control over our behavior to cues in the environment (Wood and Neal 2007). That keeps our conscious minds free for other, more important things, where conscious thought really is required.

Habits arise in one of two ways (Wood and Neal 2007).21 First, we can build habits through simple repetition: whenever you see X (a cue), you do Y (a routine). Over time, your brain builds a strong association between the cue and the routine, and doesn’t need to think about what to do when the cue occurs—it just acts. For example, whenever you wake up in the morning (cue), you get out of bed at the same spot (routine). Rarely do you find yourself lying in bed, awake, agonizing over which exact part of the bed you should exit by. That’s how habits work—they are so common, and so deeply ingrained in our lives, that we rarely even notice them.

The second way adds a third element to the cue and routine: a reward, something good that happens at the end of the routine. The reward pulls us forward—it gives us a reason to repeat the behavior. It might be something inherently pleasant, like good food, or the completion of a goal we’ve set for ourselves, like putting away all of the dishes (Oullette and Wood 1998). For example, whenever you walk by the café and smell coffee (cue), you walk into the shop, buy a double mocha espresso with cream (routine), and feel chocolate-caffeine goodness (reward). We sometimes notice the big habits—like getting coffee—but other, less obvious habits with rewards (checking our email and receiving the random reward of getting an interesting message) may go unnoticed.
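The cue-routine-reward loop can be sketched as a toy model in code. This is purely an illustration (the names, the 0.1 increment, and the structure are all invented here, not drawn from the habit literature), but it captures the key mechanics: a known cue fires its routine automatically, repetition strengthens the association, and a novel situation falls back to conscious thought.

```python
# Toy model of a habit loop: cue -> routine (-> reward).
# All names and numbers are invented for illustration.

habits = {
    "smell of coffee": {                 # cue
        "routine": "buy a double mocha espresso",
        "reward": "chocolate-caffeine goodness",
        "strength": 0.0,                 # association strength; grows with repetition
    }
}

def encounter(cue):
    """Respond to a cue: run the habitual routine automatically if one
    exists, strengthening the cue-routine association; otherwise fall
    back to deliberative (System 2) thinking."""
    habit = habits.get(cue)
    if habit is None:
        return "engage conscious thought"   # novel situation
    habit["strength"] = min(1.0, habit["strength"] + 0.1)
    return habit["routine"]

print(encounter("smell of coffee"))   # runs the routine without deliberation
print(encounter("a brand-new store")) # no habit yet, so System 2 steps in
```

Note how the reward sits in the data but isn't consulted when the habit fires: as described above, once formed, the habit runs on the cue alone.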

Once the habit forms, the reward itself doesn’t directly drive our behavior; the habit is automatic and outside of conscious control. However, the mind can “remember” previous rewards in subtle ways; intuitively wanting (or “craving”) them.22 In fact, the mind can continue wanting a reward that it will never receive again, and may not even enjoy when it does happen (Berridge et al. 2009)!23 I’ve encountered that strange situation myself—long after I formed the habit of eating certain potato chips, I still habitually eat them even though I don’t enjoy them and they actually make me sick.24 This isn’t to say that rewards aren’t important after the habit forms—they can push us to consciously repeat the habitual action and can make them even more resistant to change.

The same characteristics that make habits hard to root out can be immensely useful. Thinking of it another way, once “good” habits are formed, they provide the most resilient and sustainable way to maintain a new behavior. Charles Duhigg, in The Power of Habit (Random House, 2012), gives a great example. In the early 1900s, advertising man Claude C. Hopkins moved American society from being one in which very few people brushed their teeth to a majority brushing their teeth in the span of only ten years. He did it by helping Americans form the habit of brushing:25

  • He taught people a cue—feeling for tooth film, the somewhat slimy, off-white stuff that naturally coats our teeth (apparently, it’s actually harmless in itself).

  • When people felt tooth film, the response was a routine—brushing their teeth (using Pepsodent, in this case).

  • The reward was a minty tingle in their mouths—something they felt immediately after brushing their teeth.

Over time, the habit (feel film, brush teeth) formed, strengthened by the reward at the end. And, so did a craving—wanting to feel the cool tingling sensation that Pepsodent caused in their mouths that people associated with having clean, beautiful teeth (Figure 1-2).

Figure 1-2. Pepsodent advertisement from 1950, highlighting the cue for the habit of brushing teeth: tooth film (courtesy of Vintage-Adventures.com (http://Vintage-Adventures.com))

Stepping back from Duhigg’s example, let’s look again at the three pieces of a reward-driven habit.

  • The cue is something that tells us to act now. The cue is a clear and unambiguous signal in the environment (like the smell of coffee) or in the person’s body (like hunger). BJ Fogg and Jason Hreha categorize the two ways that they work on behavior into “cue behaviors” and “cycle behaviors” (Fogg and Hreha 2010): based on whether the cue is something else that happens and tells you it’s time to act (brushing teeth after eating breakfast) or the cue occurs on a schedule, like at a specific time of day (preparing to go home at 5 p.m. on a weekday).

  • The routine can be something simple (hear phone ring, answer it) or complex (smell coffee, turn, enter Starbucks, buy coffee, drink it), as long as the scenario in which the behavior occurs is consistent. Where conscious thought is not required (i.e., consistency allows repetition of a previous action without making new decisions), the behavior can be turned into a habit.

  • The reward can occur every time—like drinking our favorite brand of coffee—or on a more complex “reward schedule.” A reward schedule is the frequency and variability with which a reward occurs each time the behavior occurs. For example, when we pull the arm of (or press the button on) a slot machine, we are randomly rewarded: sometimes we win, sometimes we don’t. Our brains love random rewards. In terms of timing, rewards that occur immediately after the routine are best—they help strengthen the association between cue and routine.
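A reward schedule can be sketched in a few lines of code. This is only an illustration (the function names and the 25% payout probability are invented for the sketch): a fixed schedule rewards every repetition, while a variable-ratio schedule, like a slot machine, pays off at random.

```python
import random

def fixed_schedule(n):
    """Reward every time the routine runs."""
    return [True] * n

def variable_ratio_schedule(n, p=0.25, seed=42):
    """Reward at random, like a slot machine: each pull pays off with
    probability p. (p=0.25 and the seed are arbitrary illustration values.)"""
    rng = random.Random(seed)
    return [rng.random() < p for _ in range(n)]

pulls = variable_ratio_schedule(1000)
print(f"rewarded on {sum(pulls)} of {len(pulls)} pulls")
```

The unpredictability is the point: as noted above, our brains love random rewards, which is part of what makes slot machines (and inbox-checking) so compelling.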

Researchers are actively studying exactly how rewards function, but one of the likely scenarios goes like this: when these three elements are combined, over time, the cue becomes associated with the reward.26 When we see the cue, we anticipate the reward, and it tempts us to act out the routine to get it. Nir Eyal (2012) has a great phrase for this process: the desire engine. The process takes time, however—varying by person and situation from a few weeks to many months (Lally et al. 2010). And again, the desire for the reward can continue long after the reward no longer exists (Berridge et al. 2009).

We’re Deeply Affected by Context

And finally, we turn to the last big lesson: the importance of context on our behavior. What we do is shaped by our contextual environment in obvious ways, like when the architecture of a building focuses our attention and activity toward a central courtyard. It’s also shaped in non-obvious ways: by the people we talk and listen to (our social environment), by what we see and interact with (our physical environment), and the habits and responses we’ve learned over time (our mental environment). These non-obvious effects can show themselves even in slight changes in wording of a question. Let’s take a look at one famous example.

Suppose there’s an outbreak of a rare disease, which is expected to kill 600 people. You’re in charge of crafting the government’s response.

You have two options:

  1. Option A will result in 200 people saved.

  2. Option B will result in a one-third probability that 600 people will be saved, and a two-thirds probability nobody will be saved.

Which option would you choose?

Now suppose there’s another outbreak, of a different disease, which is also expected to kill 600 people.

You have two options:

  1. Option C will result in the certain death of 400 people.

  2. Option D will result in the one-third probability that nobody dies and two-thirds probability everyone dies.

Which option would you choose now?

Presented with these options, people generally prefer Option A in the first situation, and Option D in the second. In Tversky and Kahneman’s early studies using these situations,27 72% of people chose A (versus 28% for B), but only 22% chose C (versus 78% for D). Which, as you’ve probably caught on, doesn’t make much sense, since in both A and C, two hundred people are saved and four hundred are lost. Logically, if someone prefers A, that person should also choose C. But that isn’t what happens, on average.

Many researchers believe there is such a stark difference in people’s choices for these two mathematically equivalent options (A and C) because of how the choices are framed. One is framed as a gain of two hundred lives, and the other is framed as a loss of four hundred lives.28 The text of C leads us to focus on the loss of four hundred lives (instead of the simultaneous gain of two hundred) while the text of A leads us to focus on the gain of two hundred lives (instead of the loss of four hundred). And people tend to avoid uncertain or risky options (B and D) when there is a positive, or “gain,” frame (A versus B), and seek risks when faced with a negative, or “loss,” frame (C versus D).
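To make the equivalence concrete, a few lines of arithmetic (sketched here in Python) confirm that all four options have the same expected number of lives saved:

```python
# Expected outcomes of the four options in Tversky and Kahneman's
# framing experiment (600 lives at stake in each scenario).
TOTAL = 600

expected_saved = {
    "A": 200,                          # 200 saved for certain
    "B": (1/3) * TOTAL + (2/3) * 0,    # 1/3 chance all 600 are saved
    "C": TOTAL - 400,                  # 400 die for certain -> 200 saved
    "D": (1/3) * TOTAL + (2/3) * 0,    # 1/3 chance nobody dies
}

# Every option saves 200 lives in expectation.
assert all(abs(v - 200) < 1e-9 for v in expected_saved.values())
```

Only the framing differs: A and B describe the outcomes as gains, C and D as losses.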

That’s, well, odd. It shows how relatively minor changes in wording can lead to radically different choices. It is especially odd because it isn’t something people would recognize about themselves. If they were faced with both sets of choices, they wouldn’t say, “Well, I recognize that A and C have exactly the same outcomes, but I just intuitively don’t like thinking about a loss, even when I know it’s a trick of the wording.” Instead, a person might simply say: “Knowing that I can save people is important (A), and I really don’t like the thought of knowingly letting people go to certain death (C).”

Or, to use the rider and elephant metaphor again: the rider thinks it’s in control, but the elephant really is. Our conscious rider explains our behavior after it has happened, without knowing the real reason itself. We are, as social psychologist Tim Wilson nicely puts it, “strangers to ourselves.”29,30

This lack of self-knowledge also extends to what we’ll do in the future. We’re bad at forecasting the level of emotion we’ll feel in future situations, and generally bad at forecasting our own future behavior.31 For example, people can significantly overestimate the impact of future negative events, such as a divorce or a medical problem, on their emotions.32 Not only are we affected by the details of our environment, but we often don’t recognize that our environment has affected us in the past, and so we don’t consider its influence when we’re thinking about what we’ll do in the future.

We Can Design Context

Because our environment affects our decision making and behavior, redesigning that environment can change decision making and behavior. We can thoughtfully develop product designs and communications that take this knowledge into account, and help people both make better decisions, and follow through on their intentions to act: which is the focus of the rest of this book.

What Can Go Wrong

We’ve already touched on the ways in which the quirks of our minds can lead to bad outcomes. It’s useful to make these areas more explicit, though, since that understanding is the foundation of making things better. We can distinguish two major branches of research, both of which are useful for our purposes. Broadly, behavioral science helps us understand quirks of decision making and quirks of action.

Quirks of Decision Making

The shortcuts that our minds use lead us to rapid and generally good decisions without evaluating all of the options thoroughly. They’re necessary, they let us make the best of our limited faculties, and they are generally very helpful – until they aren’t.

Think about what happens when you’re visiting a new town, and you’re walking around looking for a bite to eat.

  • You might look on your phone to see which restaurant has the highest ratings, and the most of them. Or, you might peek in the window to see where other people are eating – it’s never a good sign if a place is deserted, right? That’s the social proof heuristic: if you aren’t sure what to do, copy what other people are doing.

  • You might have seen ads touting a particular restaurant, and when you pass by, it catches your eye. If you’ve at least heard of it, that’s good! The availability heuristic supports that feeling.

  • You might notice a chain you’ve been to and liked recently, and you figure that what’s been good recently probably will be good again. That’s, among other things, a recency bias helping you choose.

  • You might look at their prices, and see that one place is offering burgers at $10, and another at $2. You’re all for saving money, but that’s just too much: there must be something wrong with the $2 place. That’s the shortcut of price as a signal of quality.

In each case, the shortcuts help us make quick decisions. None of them are perfect, certainly. We could find ways to make them better, but by and large, these are all reasonable choices. Most importantly, they are fast: they save us from spending hours and hours reviewing all of the pros and cons of each restaurant in the city, judging them in terms of their taste and nutrition on 50 dimensions, the time to reach each one, their ambiance and likely social environment, etc. Shortcuts like these make decisions possible, and avoid decision paralysis.

Switching contexts, let’s think about investing money in the stock market. You’ve recently received a windfall, and you’re looking to invest some of it for the future. You don’t have much experience in investing, so what do you do?

  • You might look online to see what everyone else is talking about and investing in, using social proof. Awesome! Except that’s how bubbles are made: think Bitcoin.

  • You might invest in things you’ve heard of, using the availability heuristic. Excellent! Except, again, that’s how bubbles are made.

  • You might look at what’s performed well in the past, and invest in that, using recency bias. The problem is that past performance doesn’t predict future performance. Not a great guide.

  • You might look at prices – if a stock has a really high price, it must be a good investment, right? OK, you know where that goes.

And so forth. You get the picture.

When shortcuts work well, we often don’t notice them: we effortlessly and quickly come to a decision. Or, in the rare cases we do notice them, we call them clever. In the research community, we refer to them, in the positive sense, as fast and frugal heuristics (where heuristic is another word for shortcut).33

When the same shortcuts get us into trouble, however, we call them foolish: how could I have been so stupid as to follow the crowd! Since you’re reading this book, you’ve probably come across the term ‘bias’, which is intimately related to these shortcuts. A bias, strictly speaking, is a tendency of thought or action: it is neither positive nor negative; it just is. Most people, including many researchers, use it explicitly in the negative sense, as in a ‘deviation from an objective standard, such as a normative model’ of how people should behave (Soll et al 2015, building on Baron 2012). A shortcut or heuristic gone awry creates a bias.

Once we call something a bias, it’s easy to jump to the logical conclusion: well, we just need to get rid of them! Remove people’s biases! It’s not that simple. It’s precisely because these shortcuts are so clever and smart that they are hard to change. If we simply did foolish things, we’d eventually learn not to do them (either in our own lives or across the history of our species). But these shortcuts are not foolish at all: they are immensely valuable. They are just sometimes used out of context, at the wrong time. We can’t, and shouldn’t be able to, simply turn off ‘social proof’ or the ‘availability heuristic’. That would wreak more havoc than it would prevent.

The reality is that we can’t avoid using mental shortcuts completely. Rather, by understanding how our conscious and nonconscious shortcuts are triggered by the details of our environment, we can learn to avoid some of the negative outcomes that result from their use. So the first challenge of behavioral science – of designing for behavior change – is to help people make ‘better’ decisions, given the valuable, but imperfect shortcuts we all use.

Quirks of Action

I, myself, am made entirely of flaws, stitched together with good intentions.

Augusten Burroughs

Behavioral science also helps us understand the quirks of our behavior, above and beyond our decisions: especially why we decide to do one thing, and then actually do something else. This understanding starts with the same base as for decisions: that we’re limited beings with limited attention, memory, willpower, etc. Our minds still use clever shortcuts to help us economize and avoid taxing our limited resources. But these facts make themselves felt in different ways after we’ve made a decision. In particular, we have errors of inaction and errors of unintentional action.

In the research literature, the intention-action gap is one of the major errors of inaction. We’ve all felt this gap in one way or another. For example:

Do you have a friend with a gym membership, or exercise equipment at home, that they just don’t use very often? Do they really enjoy giving money to a gym? Of course not. They really intended to go to the gym when they first signed up, or to use that fancy machine when they first bought it. It’s just that they didn’t. The benefits of the gym are still clear. And despite all of their past failures, they keep hoping and believing that they’ll get it together and go regularly. But something other than motivation gets in the way. So, they keep paying — and keep failing to go.

With the intention-action gap, the intention to act is there, but people don’t follow through and act on it. It’s not that people are insincere or lack motivation; the gap happens because of how our minds are wired. It illustrates one of the core lessons of behavioral science: Good intentions, and the sincere desire to do something, aren’t enough.

And unintentional action? I don’t mean revelry that we regret the next morning. Rather, I mean behaviors that we don’t intend even while we’re doing them: often because we aren’t aware or thinking about them. One cause of these we’ve already looked at – habits.

Our habits allow us to take action without thought: effortlessly riding a bike, navigating the interface of common applications, or playing sports. But naturally, they can also go awry.

Do you know someone who just can’t stop eating junk food? Each night, when he gets home, he’s tired and needs a break. On the way to the couch, he picks up a candy bar and a bag of chips, and then sits down with the laptop to watch videos. An hour or so later, he takes a break and notices the crumpled-up wrapper and bag, and throws them away. He’s still hungry, and hardly noticed the snacks on their way into his mouth.

There are many other examples, like when we get hooked on cigarettes (where it appears the habit is more powerful than the nicotine: Wood 2019), on late night TV binging, or on incessantly checking social media apps. Habits, as learned patterns of behavior, are inherently neutral. We learn ‘bad habits’ just as we learn good habits: through repetition. Our minds automate them in order to save us cognitive work. For the person eating junk food, maybe it was a particularly rough time at work, or maybe it was when he first moved to the city and didn’t know where to get good groceries, that set up the routine that his mind automated. Regardless of the source, once that junk food habit was set, it was hard to shake.

Just as with our decision-making shortcuts, try to imagine a world in which we didn’t have habits. In which we had to carefully think through every decision, every action, as if we were a teenager first learning to drive a car. We’d be exhausted wrecks in no time. We can’t not rely on habits, nor ask users of our products not to do so. Rather, as developers of behavior-changing products and communications, we need to understand them and work with (or around) them. Shortcuts gone awry, habits that people wish they didn’t have, and the yawning gap between people’s intentions and their actions: these are the problems that we’re here to solve. These are why we design for behavior change.

A Map of the Decision-Making Process

We’ve talked about a range of ways by which the mind decides what to do next—from habits and intuitive responses to heuristics and conscious choices. Table 1-1 lists where each of these decision-making processes often occurs.

Table 1-1. The various tools the mind uses to choose the right action

  • Habits: familiar cues trigger a learned routine.

  • Other intuitive responses: familiar and semi-familiar situations, with a reaction based on prior experiences.

  • Active mindset or self-concept: ambiguous situations with a few possible interpretations.

  • Heuristics: situations where conscious attention is required, but the choice can be implicitly simplified.

  • Focused, conscious calculation: unfamiliar situations where a conscious choice is required, or very important decisions we direct our attention toward.

As you look down this list, notice that the mechanisms are ordered in terms of how familiar the situation is in our daily lives, and how much thought is required. That’s not accidental; the mind wants to avoid work, and so it likes to use the process that requires the least thought. Unfamiliar situations (like, for most people, math puzzles) require a lot of conscious thought. Walking to your car doesn’t.

But that doesn’t mean that we always use habits in familiar situations, or only use our conscious minds in unfamiliar ones. Our conscious minds can and do take control of our behavior, and focus very strongly on behaviors that otherwise would be habitual. For example, I can think very carefully about how I sit in front of the computer, to improve my posture; that’s something I normally don’t think about because it’s so familiar. That takes effort, however. Remember that our conscious attention and capacity are sorely limited. We only bring in the big guns (conscious, cost-benefit calculations) when we have a good reason to do so: when something unusual catches our attention, when we really care about the outcome and try to improve our performance, and so on.

You can think about the range of decision-making processes in terms of the default, lowest energy way that our minds would respond, if we didn’t intentionally do something different. Those defaults occur on a spectrum from where very little thinking is required to where intensive thinking is needed (Figure 1-3).

Figure 1-3. In familiar situations, our minds can use habits and intuitive responses to save work

Here are some simple examples, using a person who is thinking about going on a diet and doesn’t have much past experience with diets:

Eating potato chips out of a bag

Very familiar. Very little thought. Habit.

Picking out what to get at your favorite buffet bar

Familiar. Little thought. Intuitive response or assessment.

Signing up for dieting workshops at the office

Semi-familiar. Some thought. Self-concept guides choice.

Judging whether a cheeseburger will violate your diet’s calorie limit for the day

Unfamiliar. Thought required, but with easy ways to simplify.34 Heuristic.

Making a weekly meal plan for the family based on the individual calorie and nutrient counts of hundreds of foods

Unfamiliar. Lots of attention and thought. Conscious, cost-benefit calculations.

As behavior change practitioners, it’s a whole lot easier to help people take actions that are near the potato chip side of the spectrum, rather than the meal plan side. But it’s much harder for people to stop actions on the potato chip side than on the meal plan side. The next two chapters will look at both, though: how to create the good, and how to stop the bad.

1 Thaler and Benartzi (2004) [Add additional cites]

2 Ariely (2008), Thaler and Sunstein (2008), Kahneman (2011).

3 [Original Fun Theory Cite; Wood includes it] ; [Original Schipol Cite; Thaler 2008 includes it]; Soman (2015).

4 This definition comes from Soman 2015 [[]]. Not all biases are directly caused by heuristics gone awry, but many can be traced back to time- or energy-saving devices in the mind. One major category that doesn’t stem from heuristics is identity-preserving biases (mental quirks that make us feel better about ourselves), like overconfidence bias.

5 Hamilton (2008)

6 Miller (1956)

7 E.g., Manis et al. (1993). These outcomes are actually the result of reasonable, but imperfect shortcuts that our minds use to counter our limitations; we’ll talk about those shortcuts shortly.

8 See Schwartz (2004), Iyengar (2010)

9 Kahneman (2000)

10 I.e., the family of theories referred to as ‘dual process theory’ in psychology. Dual process theories give a useful abstraction—a simplified but generally accurate way of thinking about the vast complexity of our underlying brain processes.

11 There are great books about dual process theory and the workings of these two parts of our mind. Kahneman’s Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011) and Malcolm Gladwell’s Blink (Back Bay Books, 2005) are two excellent places to start; I’ve included a list of resources on how the mind works (including dual process theory) in the online Appendix to this book.

12 The boundaries between “habit” and other processes (“intuition,” etc.) are somewhat blurry; but these terms help draw out the differences among types of System 1 responses. See Wood and Neal (2007) for the distinction between habits and other automated System 1 behaviors; see Kahneman (2011) for a general discussion of System 1 behaviors.

13 I’m indebted to Neale Martin for highlighting the situations in which the conscious mind does become active. See his book Habit (2005) for a great summary of the literature on when intuitive and deliberative processes are at play.

14 This isn’t to say that the rider is equivalent to the left side of the brain and the elephant to the right side. Our deliberative and intuitive thinking isn’t neatly divided in that way. Instead, this is just one of the many examples of how rationalizations occur when our deliberative mind is asked to explain what happens outside of its awareness and control. Many thanks to Sebastian Deterding for catching that unintended (and wrong!) implication of the passage.

15 Gerber and Rogers (2009)

16 Tversky and Kahneman (1973)

17 Nisbett and Wilson (1977)

18 Watson (1960)

19 Norton et al. (2012)

20 See Bargh et al. (1996) for a discussion of the four core characteristics of automatic behaviors, such as habits: uncontrollable, unintentional, unaware, and cognitively efficient (doesn’t require cognitive effort).

21 There’s a nice summary and video here: http://newsinhealth.nih.gov/issue/jan2012/feature1, http://www.cbsnews.com/8301-18560_162-57423321/hooked-why-bad-habits-are-hard-to-break/

22 There’s an active debate in the field about how exactly the notion of a reward affects a person after the habit is formed. See Wood and Neal (2007) for a discussion.

23 See Berridge et al. (2009) on the difference between “wanting” and “liking.” The difference between wanting and liking is a possible explanation for why certain drugs instill strong craving in addicts long after taking them has stopped being pleasurable.

24 And yes, for those of you who recall this example from the first edition of the book, it’s still true today…

25 Duhigg’s story also is an example of the complex ethics of behavior change. Hopkins accomplished something immensely beneficial for Americans and American society. He also was wildly successful in selling a commercial product in which demand was partially built on a fabricated “problem” (the fake problem of tooth film, which is harmless, rather than tooth decay, which is not).

26 This is one form of “motivated cueing,” in which there is a diffuse motivation applied to the context that cues the habit (Wood and Neal 2007). There is active debate in the field on how, exactly, motivation affects habits that have already formed.

27 Tversky and Kahneman (1981)

28 Many researchers accept this explanation, but not all. As often happens in science, there is a divergence of opinion on why framing effects like this occur. An alternative view is that people make a highly simplified analyses of the options and the two different options have two different simplified answers. See Kühberger and Tanner (2010) for one such perspective.

29 Wilson (2002)

30 See Nisbett and Wilson (1977b) for an early summary

31 Wilson and Gilbert (2003) (emotion); Wilson and LaFleur (1995) (behavior)

32 See Wilson and Gilbert (2003) for this and other examples.

33 Gigerenzer and Todd (1999); Gigerenzer (2004)

34 One such commonly used heuristic is the volume of the food—yes, how big it is. Barbara Rolls, head of the Penn State Human Ingestive Behavior Lab, developed a diet that leverages this heuristic to help people lose weight (see Rolls 2007 and http://nutrition.psu.edu/foodlab/barbara-rolls).
