Chapter 11

Believing Our Own Stories

Abstract

PrD requires us to learn from failing. Thinking we’ve got our story right and working to confirm that story creates opportunity costs. When we believe too much in our own stories, we are unaware of the wealth of insights obtained by watching our ideas fail. Thinking in this fashion is counterintuitive. We don’t normally try to prove ourselves wrong. To embrace self-doubt requires overcoming our cognitive biases. It also requires changing the way we approach our artifacts and engagement with external stakeholders. Failure breeds constructive experimentation, leading to insight and innovation. Believing our own stories and insisting we’re right preempts such discoveries from occurring.

Keywords

belief
confirmation bias
judgment
accuracy
confidence

The human understanding when it has adopted an opinion … draws all things to support and agree with it.

—Francis Bacon

Overview

Though called Presumptive Design, PrD is fundamentally about research, not design. Design is the medium; research is the activity. Irrespective of its name, the process challenges both designers and researchers, because it requires each to rethink the way they’ve approached their disciplines. Another challenge (to everyone, not just designers and researchers) is PrD’s insistence we enter engagements with stakeholders believing we’re wrong. The principle of Design to Fail was discussed in Chapter 4. Here we’ll discuss the flipside of this principle—the hazard of believing our own stories.
It’s difficult to imagine spending (or perhaps wasting) our and our stakeholders’ time exploring things we know to be wrong. We hold onto our belief in the value and fitness of our ideas; that’s as it should be. Once we’ve spent time crafting an artifact expressing those ideas, we become intimate with it. We go over it in our minds, thinking deeply about it, ready to explain each of the decisions we made when creating it. In this way, we convince ourselves we have a really good idea. It might very well be. But PrD necessitates our disproving that position. Even if we’re confident in our assumptions, we must at the same time honestly seek to hear how, and in what way, they are wrong.
We must get good at harboring these seemingly disparate notions simultaneously: It’s a great idea (given what we currently know) and it’s wrong. We’re looking for that “Aha!” moment of surprise that shifts our perspective. Consider the delight we feel when we look at visual illusions, perceiving the image first one way and then another. In Figure 11.1a, for instance, is Edgar Rubin’s famous figure-ground vase. In Figure 11.1b is William Ely Hill’s (1915) drawing of the “my wife and mother-in-law” illusion, from an older (1888) German postcard. Also see Figure 11.2.
Figure 11.1 (a) Vases and faces (b) my wife and my mother-in-law
Figure 11.2 (a) Adelson's checkerboard (b) Adelson's checkerboard proof
Keeping two completely opposite notions in our heads at the same time is challenging. In the case of visual illusions, it’s impossible. Instead, we rapidly shift from one to the other. But in the case of a point of view or assumption, once we have seen it is “wrong,” we can never go back and believe it to be true. That moment of transformation can be delightful, surprising, and sometimes embarrassing. In PrD, we prepare ourselves for it in the belief it is inevitable. Still, it is always a surprise.

Increasing Investment Increases Belief

Consider a scenario we encountered with a design team member (we’ll call him Jason). Jason was taking the first step to envision a completely different intranet experience for his group. Jason had over 20 years of design experience, was great with stakeholders, and knew his way through the design process. He wasn’t that strong with user research, however, having had only a smattering of engagements over the years. Until he got involved with PrD, he’d never really taken an interest in research. But Jason became enamored with PrD when he was introduced to it. PrD gave him a chance to use his design skills in service of better understanding his stakeholders. Jason went through formal training and had several opportunities to experience PrD Creation and Engagement Sessions.
Jason had a presumption about the intranet and how a visitor’s homepage should work. He was adamant about it being a novel mash-up of widely varying content, tools, feeds, and the like. In addition, based on substantial research already completed, Jason had a metaphor for the homepage: We’ll call it “Main Street.” Jason was most concerned about getting his vision right and getting his story out there. Relying on his 20 years of experience, he knew he had to present something to sponsors that would make them salivate and want more. Alarm bells went off, however, as he explained he expected to run his PrD sessions after he presented his vision. He suggested he might use PrD to test a rollover interaction or perhaps an approach to navigation.
After making it this far into the book, we hope you appreciate Jason’s misunderstanding about PrD. Over a couple of beverages, he and Leo discussed how PrD enabled him to get external stakeholder feedback on his initial story. He could use the process to test his metaphor and his initial assumptions. Especially because he felt so strongly about his story, he needed to quickly understand whether external stakeholders would want or understand it. Most importantly, the stories he told his sponsors shouldn’t be his, but rather his external stakeholders’. Thankfully, light bulbs went off and he got it. His presentation to his sponsors would be more powerful if he showed how visitors to the homepage understood and had shaped the concept. He understood PrD wasn’t limiting his design proposals. PrD wasn’t about asking stakeholders for design reviews or feedback on his design ideas. He understood and appreciated his presumptions were extraordinary and in need of vetting.
What amazed the rest of us was that Jason hadn't internalized his understanding of PrD, even with so much prior experience with it.
Jason had fallen into a common trap: He believed his own story so much he couldn't see past it to test it. After the happy-hour chat with Leo he was able to envision the PrD session, imagine the artifact he needed to create, and get a sense of how the script could go. Now Jason was prepared to be wrong; he was setting up the session to help him find out how wrong he was as quickly as possible. "Better to figure that out now, before we spend a lot of money building something," he confirmed.
Jason had seen the light: Believing his own stories could have been costly. Jason's experience is not unusual; it illustrates a pitfall many experienced designers encounter. Designers spend a lot of time thinking about their approaches, crafting solutions and concepts, and reflecting on them again and again. By the time they are ready to talk to stakeholders, they have thought it all through. PrD asks them to let go of all this.
Designers are not alone in tripping over their beliefs. Researchers also fall prey to believing their own stories, but in different ways. We’ll illustrate a few of the common traps related to believing our story: pitching, presenting, and arguing. We spend the bulk of the chapter exploring the underlying psychological basis for believing our own stories, or, as cognitive psychologists call it, confirmation bias.

The Three Traps

Pitching the Design

Designers are most likely to fall into the trap of pitching the design. Anyone who’s been part of a design group knows the “pitch” is a key part of gaining support from clients, sponsors, or other stakeholders. It can take a designer years to develop the ability to effectively pitch an idea. We shouldn’t be surprised experienced designers expect this to be part of the process. But as we discuss in Chapters 14 and 16, it’s exactly the wrong thing to do during the Engagement Sessions. After Jason had internalized the process, he realized his idea, his story, and his concept didn’t matter one iota. What mattered was his ability to get external stakeholders to tell their stories in reaction to the artifact.

Presenting the Design

The seemingly innocent cousin to treating an Engagement Session as a pitch is the impulse to present or explain the design to external stakeholders.
This trap occurs when the team explains the design as envisioned. Establishing rapport at the beginning of the Engagement Session is vital (as we discuss in detail in Chapter 14). Team members new to PrD may be tempted to build rapport by explaining the design. These initial remarks are intended to put the stakeholder at ease, but they cause considerable damage. Presenting the idea, by way of introducing the artifact to the external stakeholder, ruins the session; it’s as good as over. By the time the introduction is finished, the team has filled the stakeholder’s head with its own notions, irrevocably losing access to the stakeholder’s own thoughts. Explaining or presenting the design is often accompanied by a team member holding the artifact, preventing the stakeholder from engaging with it. The opportunity for PrD is lost—the team has told the stakeholder its own story, wiping out the stakeholder’s story in the process, and the team never crossed the threshold of offering the artifact as a “social object.” (We discuss artifacts as social objects at length in Chapter 14.)
Team members may also feel an urge to explain the design when the stakeholder is confused about something or asks a question. The impulse to help, to answer the stakeholder's questions, is a natural response; we want to be helpful and not alienate our participants. But such questions and confusion indicate a mismatch between the external stakeholder's mental model and what she thinks the artifact is trying to communicate. These tender green shoots are easily trampled by the team's explanations.
The only time we explain the design is at the conclusion of the Engagement Session. The stakeholder may need closure about what the team had in mind and it is a common courtesy to answer her questions. During these postsession discussions the team gleans additional information, but keep in mind we’re no longer doing PrD; this is now a traditional user interview—with the same benefits and risks.

Arguing with Stakeholders

The final trap teams encounter is approaching the Engagement Session as a contest of wills. When the stakeholder appears to misunderstand the purpose of the artifact, she reveals an important perspective. We’re looking for those revelations. Such misunderstandings are the most important provocation the artifact offers. It is expressly forbidden for anyone on the team to contest anything the stakeholder says about the design, artifact, or concept. Correcting the stakeholder not only quashes insights but also exposes the team’s misconception about the process. The only people who are wrong are the team members.

Confirmation Bias

Seeking Supporting Data

Resisting the impulse to answer questions, to help, or to explain the concept is hard. We think we have a good idea. We think our assumptions are mostly correct. We think we're really onto something. How can we possibly want to prove ourselves wrong? It feels unnatural, and that's because it is. Wanting to disprove our own worldview is not how we naturally think. When assessing our hypotheses, we selectively seek evidence we think supports them.
Tell someone a series of three numbers, such as "2-4-6," and tell him the numbers adhere to a rule. The rule is a secret, but he can try to figure it out by offering other three-number sequences and being told whether or not they also adhere to the rule. What is he likely to do? He'll probably make a guess as to what the rule is, such as "even numbers that increase by two." Then he'll throw out sequences that adhere to his guess, such as "6-8-10," "12-14-16," etc. After hearing "Yes" a number of times, he'll announce his guess—and he might very well be wrong. What if the rule was merely any series of ascending numbers? He'd never figure this out unless he tried to prove his guess wrong. In Wason's classic study on the confirmation bias,1 in which exactly this exercise was done, only 21% of participants ever figured out the correct rule.
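Wason's task can be sketched in a few lines of code. The confirmatory strategy collects nothing but "yes" answers and learns nothing; a single disconfirming probe exposes the guess as too narrow. The specific rule, guess, and probe sequences below are illustrative assumptions, not Wason's exact materials.

```python
# A minimal sketch of Wason's 2-4-6 task (illustrative, not the original study).

def secret_rule(seq):
    """The experimenter's actual rule: any three strictly ascending numbers."""
    a, b, c = seq
    return a < b < c

def my_guess(seq):
    """The participant's hypothesis: even numbers increasing by two."""
    a, b, c = seq
    return a % 2 == 0 and b == a + 2 and c == b + 2

# Confirmatory strategy: only probe sequences that already fit the guess.
confirming = [(6, 8, 10), (12, 14, 16), (20, 22, 24)]
print(all(secret_rule(s) for s in confirming))  # every probe answers "yes"

# Disconfirming strategy: probe a sequence the guess predicts should fail.
probe = (1, 2, 3)
print(my_guess(probe), secret_rule(probe))  # guess rejects it; the rule accepts it
```

Every confirming probe returns "yes," so the participant grows confident in a wrong hypothesis; only the sequence his guess predicts should fail reveals the broader rule.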

Resisting Effort

We don't just fail to seek evidence that would violate our hypotheses; we also remain fixated on the specific hypothesis we've generated. In short, once we form a hypothesis, conceive of a solution, or hit on a design idea, we slip on blinders. We home in on that single idea. Rather than generating as many alternative ideas as possible, we become defensive. Rethinking takes effort, and we naturally resist it. When people are presented with evidence for both sides of an important social issue, they are prone to rate evidence conforming to their stance as being of better quality than the evidence against it.2 In another classic study, this still occurred when people were explicitly instructed to be "objective" and "unbiased" (an impossible ask!).3

Experience Is Not Our Guide

Experience tells us we've been right about our design intuitions before. So while all of the introductory material about confirmation bias might be true for other people, our hunches are, well, gold, and we have the evidence to prove it. But even our own experiences can be misleading!4 Say we have a good track record. We've done lots of projects and most of them have been successes. We've approached them in much the same way and seem to know how to produce such successes consistently. These repeat performances increase our confidence in our abilities and hunches. Here's the problem: What we learn from experience is what we're operationally reinforced to think, but such knowledge is situational. Depending on the environment, what we're reinforced to think may not constitute valid knowledge.
Consider the following: We have a problem. We do some research, derive a solution, and execute on it. Sounds pretty straightforward: We’ve identified a good solution to the problem as we understand it, based on our upfront research. The solution is built and we consider it a success. Should we notch that up as another good design call? Maybe not. It’s not just a matter of making a decision and observing its outcome. If only the world worked that way!

Success Is Not So Easily Defined

As Einhorn and Hogarth stress in their seminal work on the relationship between our confidence and accuracy,5 what is left unspoken are all of the “meta” variables at play in this simple dynamic. Relevant variables include:
Our decision to go with a certain solution
Our criterion for making the decision
Our assessment of the outcome of the decision
The criteria we use to make the assessment
The selection ratio of how many possible solutions are tried out and assessed
The base rate of how many solutions would be deemed successes given our criteria
How well our judgments actually differentiate between good and bad solutions
This gets complex.6 Imagine our criteria for assessing the outcomes of our decisions aren’t particularly strict. Let’s say we’re interested in profitable outcomes or high rates of task completions. Perhaps we define success as an absence of “showstoppers.”
No problem, right? Wrong. With such lax criteria, we’ll incur high opportunity costs. So what if users are able to successfully complete key tasks? What about other solutions that might have enabled them to complete the same tasks quicker? So what if we were profitable? How much money did we leave on the table? Was there a better way to achieve the same results, or a way of producing even better results? Forget a lack of showstoppers. What about user delight or customer satisfaction? What about solutions disrupting the status quo, whether that’s user productivity or market penetration?

In a Small Pool, Any Fish Looks Big

There may be a lot of possible solutions to our problem we didn’t choose to explore. And whenever the proportion of possible solutions tried is less than the base rate of how many would be deemed successes (given our criteria), we’ll be reinforced to think our hunches are good even if they don’t really differentiate between good and bad solutions at all! The more lax our criteria, the bigger the problem, and this is a costly trap to fall into.
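Einhorn and Hogarth's point can be made concrete with a small simulation. The numbers below are entirely made up for illustration: when the bar for "success" is lax, even a judgment that picks solutions at random, differentiating nothing, racks up a string of apparent wins.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical pool: 1,000 possible solutions, each with a latent quality score.
solutions = [random.random() for _ in range(1000)]

# Lax success criterion: any solution clearing a low bar counts as a "success,"
# so roughly 80% of the pool would succeed (the base rate).
lax_bar = 0.2
base_rate = sum(q > lax_bar for q in solutions) / len(solutions)

# Our "judgment": try only a handful of solutions, chosen at random --
# a low selection ratio with zero ability to tell good from bad.
picks = random.sample(solutions, 5)
hit_rate = sum(q > lax_bar for q in picks) / len(picks)

print(f"base rate ~ {base_rate:.2f}, our hit rate = {hit_rate:.2f}")
```

The hit rate tracks the base rate, not our skill: with a lax criterion and few solutions tried, nearly any hunch looks like a winning one, which is exactly the reinforcement trap described above.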
The way out is discussed in the next chapter. In brief, we must remain skeptical of our own stories. We're wrong. We just don't know in what ways. The antidote to Believing Our Own Stories is to actively imagine the opposite is true. We think our design idea is the right solution? We think we know what users need? We think we know how they'll respond to our artifact? We need to consider the opposite. How might our assumptions be unfounded? How might the narrative we've woven about the users and their needs be wrong? We're there to uncover the external stakeholder's story, not to confirm our own. Our posture, approach, and demeanor are that of a student learning from the master. The external stakeholder is the master. We are the student. Only she knows her story. We don't. To counteract our beliefs, we must view our work through this lens.

Been There, Done That

The last pitfall around Believing Our Own Stories is fatigue: After several Engagement Sessions, we hear external stakeholders tell the same stories. This influences our beliefs in two ways: (a) Our story changes the more we work with external stakeholders and (b) external stakeholders may continually reinforce our initial assumptions.

Owning the Stakeholder’s Story

It doesn’t take very long for a team to begin to believe the external stakeholders’ stories, especially if it hears the same ones over and over again. Something funny happens; the team subtly shifts its thinking and begins to own the new story. That’s actually a great thing! But it also means it’s time to move on and find something else to talk about. If the team encounters the same “new” story again and again, it’s not learning anything new.

The Stakeholders’ Stories Reinforce the Team’s Assumptions

When the team encounters stakeholders who validate its assumptions, this should be cause for celebration! We always appreciate the jolt of pleasure that comes when a stakeholder expresses our assumptions back to us without any prompting on our part. We’ve succeeded! So why do we put this into a pitfall? Two reasons:
1. Once we’ve learned stakeholders believe the same thing we do (a majority use the same language, bring up the same idea, or consistently display the same reactions to the assumptions), we must remove the line of inquiry from the script and focus on something else. If we don’t, we just waste time.
2. Equally important, we must still question the validity of our understanding. Although we may have heard the same story multiple times, we still must accept the possibility we’re wrong. After all, we’ve heard it only a handful of times. It isn’t predictive of the entire population we’re trying to serve. It serves as a signal, and if big bets depend on being absolutely certain, the team needs to use a different method (perhaps a quantitative instrument) to be certain that signal is real.

Summary

PrD is fundamentally about finding out how wrong we are in our assumptions, but it is human nature to believe in those assumptions.
We must avoid selling our ideas to stakeholders by pitching, presenting, or arguing with them. Hand the artifact over and let them pitch, present, and argue with us.
Psychologically, the cards are stacked against us: We seek data supporting our position, we resist the effort required to change our minds, we rely on experience even when it isn’t a reliable guide, and we never generate enough disparate solutions to truly know how wrong we might be. Still, by simply keeping an opposite frame of reference in mind, we can counteract our bias toward confirmation.
If we’re lucky, we’ll have guessed more right than wrong, or if we’re sensitive, we’ll begin owning the stories our stakeholders tell us. In either case, it’s time to move on to something else.