Chapter 2. The Basics

Research is a discipline with many applications. This chapter introduces the core practices: the fundamental ideas and techniques you will use repeatedly, in many situations. We’ll cover who should do research, the different types of research and when to use them, and the roles within each set of research activities. To help counter any skepticism about the business value of research, we’ll also review some common objections and how to overcome them.

Who Should Do Research? Everyone!

Ideally, everyone who is on the product or design team should also participate in the research.

If you are a sole practitioner, well, that’s easy. You will have excellent direct experience and can tailor the process and documentation to suit your needs. (Be particularly mindful of your personal biases, though.) If you work with other people, involve them from the start. Presenting them with the world’s most stunning report will give them a terrific reference document, but it’s far less likely to inspire them to approach their work differently. (Do you disagree? Perhaps you are an economist.)

When you find yourself making a case for a skeuomorphic, bronze astrolabe interface based on the research you’ve all done together, you’ll be able to spend less time explaining the rationale and more time focused on the merit of the conclusion. “As you saw in the interviews, we found that our target group of amateur astronomers exclusively uses nineteenth-century equipment for stargazing…”

People who have a hand in collecting insights will look for opportunities to apply them. Being one of the smart people is more fun than obeying the smart person, which is how the researcher/designer dynamic can feel if designers are merely the recipients of the analysis.

At my first design-agency job, the research director was a charming PhD anthropologist with a penchant for vivid, striped shirts. Despite being fresh out of academia, he was much more of a scout troop leader than a fusty professor. Interviews and usability tests became scavenger hunts and mysteries with real-world implications. Unlike heinous, contrived team-building activities—rope courses and trust falls—doing research together actually did make our team more collaborative. We were learning interesting, valuable new things, and everyone had a different perspective to contribute, which helped us overcome our biases. The content strategist noticed the vocabulary real people used; the developer had good questions about personal technology habits. The visual designer was just really into motorcycles, and that helped sometimes, too.

Someone needs to be the research lead—the person who keeps everyone on track and on protocol and takes ultimate responsibility for the quality of the work. If you take this on, it might mean you’re the primary researcher, gathering the data for others to help you analyze, or you could have more of an ensemble approach. The most important thing is that everyone involved understands the purpose of the research, their role, and the process.

Find your purpose

Every design project ultimately amounts to a series of decisions. What is the most important problem to solve? What is the best solution to that problem? How big should the logo be?

For any given project, you need to include only the research activities that support the specific decisions you anticipate. If the client has only identified an audience and wants to explore ways to better serve them, your research will be more open-ended than if the design problem is already well defined.

Now that digital design is moving from “mobile first” to multimodal interfaces and incorporating machine learning, organizations must be particularly careful about letting technology drive design decisions. To paraphrase a prescient fictional chaos theorist, just because we can do something doesn’t mean we should.

There are many, many ways of classifying research, depending on who is doing the classification. Researchers are always thinking up more classifications. Academic classifications may be interesting in the abstract, but we care about utility—what helps get the job done. Research is a set of tools. We want to make sure we can find the right one fast, but we aren’t too concerned with the philosophy of how the toolbox is organized.

To choose the best research tool for your project, you’ll need to know what decisions are in play (the purpose) and what you’re asking about (the topic). Then you can find the best ways to gather background information, determine the project’s goals and requirements, understand the project’s current context, and evaluate potential solutions.

Generative or exploratory research: “What’s up with...?”

Generative research is the research you do before you even know what you’re doing. You start with general curiosity about a topic, look for patterns, and ask, “What’s up with that?” The resulting insights will lead to ideas and help define the problem to solve. Don’t think of this as just the earliest research. Even if you’re working on an existing product or service, you might be looking for ideas for additional features or other enhancements, or new products you could bring to an audience you’re already serving.

Generative research can include interviews, field observation, and reviewing existing literature—plus feeling fancy about saying “generative research.”

Once you’ve gathered information, the next step is to comb through it and determine the most commonly voiced unmet needs. This sort of research and analysis helps point out useful problems to solve. Your thinking might lead to a hypothesis, such as “Local parents of young children would value an app that offers ideas for events and activities based on their child’s developmental milestones.” Then you can do further (descriptive) research on how parents recognize and commemorate those milestones.

Descriptive and explanatory: “What and how?”

Descriptive research involves observing and describing the characteristics of what you’re studying. This is what you do when you already have a design problem and you need to do your homework to fully understand the context to ensure that you design for the audience instead of yourself. While the activities can be similar to generative research, descriptive research differs in the high-level question you’re asking. You’ve moved past “What is a good problem to solve?” to “What is the best way to solve the problem I’ve identified?”

At Mule, we’ve done a lot of design work for eye-health organizations. Despite the fact that several of us have really terrible vision (and very stylish glasses), none of us had any expertise beyond whether the chart looks sharper through lens number one or lens number two. The Glaucoma Research Foundation offered a clear design problem to solve: how to create useful, accurate educational materials for people who had been newly diagnosed with an eye disease. So, a round of descriptive research was in order.

To inform our design recommendations, we interviewed ophthalmologists and patients, and reviewed a large quantity of frankly horrifying literature. (Please, have your eyes examined regularly.) By understanding both the doctor and patient priorities and experiences, we were able to create online resources full of clear information that passed clinical muster and didn’t provoke anxiety.

Evaluative research: “Are we getting close?”

Once you have a clear idea of the problem you’re trying to solve, you can begin to define potential solutions. And once you have ideas for potential solutions, you can test them to make sure they work and meet the requirements you’ve identified. This is research you can, and should, do in an ongoing and iterative way as you move through design and development. The most common type of evaluative research is usability testing, but any time you put a proposed design solution in front of your client, you really are doing some evaluative research.

Causal research: “Why is this happening?”

Once you have implemented the solutions you proposed and have a website or application up and running out in the world, you may start noticing that people are using it in a way that isn’t exactly what you’d hoped. Or perhaps something really good starts happening and you want to replicate the success in other parts of your operation. This is your prompt to do some causal research.

Establishing a cause-and-effect relationship can be tricky. Causal research often includes looking at analytics and conducting multivariate testing (see Chapter 10). You might review user paths to see how visitors are entering and moving around your site and what words they might be searching for, as well as try design and language variations to see which ones are more effective. Causal research might indicate that you suffered from changes to search engine algorithms, or that a site that sent you a lot of traffic shut down. Or you might have to look beyond site performance to see what’s going on in the wider world. Maybe unusual weather patterns are affecting your customers, or John Oliver mentioned you.

As long as you’re clear about your questions and your expectations, don’t fret too much about the classification of the research you want to undertake. Remain open to learning at every stage of the process. And share this love of learning with your team. All of your research will benefit from a collaborative approach.

Roles

Research roles represent clusters of tasks, not individual people. Often one person will cover multiple roles on a study, or a single role can be shared. Always be explicit about roles and responsibilities in advance.

Author

The author plans and writes the study. This includes the problem statement and questions, and the interview guide or test script. Ideally, this is a team activity. Having a shared sense of what you don’t know is often even more critical than sharing what you learn.

Recruiter

The recruiter screens potential participants and identifies the respondents who would be good subjects. Although many organizations outsource recruiting, knowing how to find representative users and customers is tantamount to knowing how to reach actual users and customers, so it’s a good skill to develop in-house.

Coordinator/Scheduler

The coordinator plans how time will be used during the study and schedules sessions, including arranging times with the participants.

Interviewer/Moderator

If the research includes interviews or moderated tests, the interviewer or moderator is the person who interacts directly with the participants.

Observer

It’s often useful for clients or any available team members to watch the research in progress. This is appropriate as long as the presence of the observers will not influence the research itself. You can also make raw recordings available, if you can ensure confidentiality.

Notetaker/Recorder

Record sessions whenever possible, but have someone take notes as a fallback and to keep an eye on the time. A separate notetaker allows the interviewer to devote full attention to the participant and makes it possible to tag out to avoid fatigue.

Analyst

The analyst reviews the gathered data to look for patterns and insights. More than one person should have this role in order to reduce bias in the analysis and promote collaborative learning.

Documenter

The documenter reports the findings once the research study is complete.

You can change roles with each set of activities or develop a routine that allows you to focus on the information gathering, whichever works best. Just as with design and coding, every time you complete a round of research, you’ll have ideas for how to do it better next time and you’ll find new ways to incorporate learning into your work.

Listen. Be interested. Ask questions. Write clearly. And practice. Whatever your day job is, adding research skills will make you better at it.

The research process

We’ll cover ways to organize research activities in extensive detail in Chapter 3. For the purposes of this section, what matters is that everyone working together has a shared understanding of how the work will proceed. This can be as simple as a checklist.

In addition to organizing the efforts of your immediate team, you may need to get approval to do research at all, either from the client or from decision-makers in your organization. Handle this as early as possible so you can focus on the work rather than on defending it.

Overcoming Objections

In many organizations, there are still those who consider research somewhere between a threat and a nuisance. You might have to get a certain amount of advance buy-in to proceed.

The whole point of doing research is to have a stronger basis for decision-making; if another level of decision-making, such as executive fiat, trumps your findings, you will have wasted your time. Get ready to advocate for your research project—before you start.

Research “proves” nothing

“All I have to do is get enough of the right kind of data and I can prove they should pay attention to research.” I’ve heard this one a lot. It leads to heartache and wasted effort.

You, as a level-headed seeker and proponent of evidence, will need to muster the courage to stare one awful truth in the face. No matter how much research you do, facts will never change minds. Most people most of the time operate on beliefs and back them up with hand-selected anecdotes. We see magical thinking and rampant bias in business and design every day, whether it’s copying the surface of a competitor’s success without looking at the underlying operation or giving more weight to experts whose opinions we find flattering.

In order to influence decisions with evidence, you have to work with existing beliefs, not against them. You need to create interest and establish credibility before the results come in, or your findings will pile up and gather dust. And you don’t even have to get people to care about “research” as a concept, as long as you can get them to embrace reality as a constraint. I like “evidence-based design” as a rallying cry, because if your design isn’t based on evidence, then what is it based on? Keep the focus on shared goals and decisions.

Objections you will hear

Here is a handy list of common objections to research, and their responses.

We don’t have the time

You don’t have time to be wrong about your assumptions and you don’t have time to argue about whose version of reality wins. What are your key assumptions? What if they’re all wrong? How much work would you have to redo? How long would that take?

Upfront and continuous research can provide a basis for decision-making that makes the rest of the work go much faster. Nothing slows down design and development projects as much as arguing over personal opinions or wasting effort solving the wrong problem. And you can start small. A couple of weeks can mean very little to your overall schedule while adding significantly to your potential for success.

We don’t have the money

Doing a project without research is a great way to end up with even less money and nothing to show for it. Objections about time and money are almost always a smokescreen based on a bad model of what research is. Even with little or no budget, you can usually locate some related research online, wrangle representative users to interview, and do a little usability testing. Applying some critical thinking to your assumptions costs nothing, but changing your habits can offer tremendous returns.

We don’t have the expertise

You have what it takes, thanks to this book! It’s strange to think that you have the expertise to build something, but not to figure out whether you are building the right thing. Yes, research is a craft and a set of skills, but above all it’s a mindset. If you would rather be proven wrong quickly and cheaply than make a huge investment in a bad idea, then you have the right attitude.

We need to be scientists

This isn’t pure science we’re talking about here. This is applied research. You just need to have (or develop) a few qualities in common with a good scientist:

  • Your desire to find out needs to be stronger than your desire to predict. Otherwise you’ll be a mess of confirmation bias, looking for answers that confirm what you already assume.
  • You need to be able to depersonalize the work. There are no hurt feelings or bruised toes in research, only findings.
  • You need to be a good communicator and a good analytical thinker. Otherwise questions and reports get muddy, and results will be worse. This is just a set of skills that most people can develop if they have the right attitude.

The CEO is going to dictate what we do anyway

You’re going to fight to change that dictatorial culture. Not with facts, but with questions. The first step to better decision-making is understanding how the people in charge make decisions and what sources of input they trust. And if the leadership really does have a “damn the facts, full speed ahead” attitude, get a different job.

One research methodology is superior (qualitative vs. quantitative)

What you need to find out determines the type of research you need to conduct. It’s that simple. If you have a qualitative question, you need a qualitative method, and your data will come in the form of narrative insights. If you have a quantitative question, you need a quantitative method, and you’ll end up with measurements. As Douglas Adams pointed out, “42” is not a very useful answer to the meaning of life.

Often your questions will indicate a mixed-methods approach. You want to know what is happening (qualitative), how much it’s happening (quantitative), and why it’s happening (qualitative).

We don’t have the infrastructure

You don’t need special tools. Whatever tools and processes you use for the rest of the work, you can use to gather information. Google Docs and Hangouts will get you very far for free. I suspect you own or can borrow a laptop and have access to the internet. That is all you need.

We can find out everything in beta

Or are we calling it User Acceptance Testing now? There are a lot of things you can find out in beta: what functionality is working, whether users have a hard time finding core features. But there is also a lot that is helpful to know before you ever start designing or coding, and you can find it pretty fast: what your target audience is doing right now to solve the problems your product or service purports to solve, whether people want this product at all, and whether your organization has what it takes to support it.

Again, it’s a matter of where you want to invest and what you have to lose. Don’t waste anyone’s time or effort on untested assumptions if you don’t have to.

We already know the issue/users/app/problem inside and out

Unless this knowledge comes from recent inquiry specific to your current goals, a fresh look will be helpful. Familiarity breeds assumptions and blind spots. Plus, if you are familiar with your users, it will be easy for you to find some to talk to.

And who is the “we” in this case? In the absence of a mind meld, the client’s experience with the users or the business problem doesn’t transfer to the designer. Talking to someone who has done research just gets you their interpretation of the research. Shared understanding is key.

Research will change the scope of the project

It’s better to adjust the scope intentionally at the start than be surprised when new information pops up down the road like one of those fast-moving zombies. Research is an excellent prophylactic against unexpected complexity.

Research will get in the way of innovation

Relevance to the real world is what separates innovation from invention. Understanding why and how people do what they do today is essential to making new concepts fit into their lives tomorrow.

Actual reasons behind the objections

At the root of most of these objections is a special goo made up of laziness and fear.

I don’t want to be bothered

Unless you are naturally curious about people, research can seem like annoying homework at first. Once you get into it, though, you’ll find it totally fun and useful. A little knowledge opens up a whole world of new problems to solve and new ways to solve the problems at hand. That makes your work more rewarding. If research is one more thing tossed on your already overfull plate, then someone needs to ask the “Who should be doing this?” question again—but the problem is that you’re too busy, not that research is unimportant. Research needs to be integrated into process and workflow or it will get shoved in a corner. If your project has a project manager, talk with them about finding ways to make it work.

I am afraid of being wrong

The cult of the individual genius designer/developer/entrepreneur remains strong. In certain “rockstar knows best” cultures, wanting to do research can come across as a sign of weakness or lack of confidence. Fight this. Accept that asking questions is both terrifying and a sign of courage and intelligence. The faster you are proven wrong, the less time you will spend being wrong.

I am very uncomfortable talking to people

You are creating a system or a service actual people are going to have to use. This system will be talking to people on your behalf, so it’s only fair that you talk to people on its behalf. That said, some people on your team will have more comfort and skills when it comes to interacting with your research subjects, so consider that when you’re deciding who does what.

Having to respond to challenges and objections before you can get to work may feel like a waste of time, but it can be useful in its own right. Describing the goals and potential of your research to people who aren’t sold on the value will actually help you focus and better articulate what you hope to uncover.

Research provides just a subset of inputs. Informed, purposeful iteration is the key to a successful design.

Research Requires Collaboration

Successful design projects require effective collaboration and healthy conflict.

Dan Brown, Designing Together

A design project is a series of decisions. And research leads to evidence-based decisions. But making evidence-based decisions requires collaboration—everyone involved working together towards a shared goal. Organizations that don’t put in the effort to clarify goals, socialize understanding, and resolve conflicts will continue to make critical decisions based on the personal preferences of the most influential person in the room—no matter how “good” the research is.

It is as common as it is counterproductive for companies to plant a research practice in a non-collaborative environment. The learning happens in one area and the design decisions in another, often in separate team cultures, sometimes even in separate buildings. Research reports and presentations appear, then fade into obscurity. When managers claim that they tried research and nothing came of it, the real culprit is usually poor collaboration.

The better the collaboration, the better equipped the organization is to incorporate continuous learning—and the lower the risk of placing large bets on bad assumptions.

The virtues of collaboration

Collaboration doesn’t just happen on its own. You can work alongside someone every day for a decade and never truly work together. It takes intention and incentives for behavior change. Most importantly, you need clear, shared objectives. 

In his book Designing Together, Dan Brown outlined four virtues of collaboration as guiding principles:

  • Clarity and Definition: Expressing and articulating thoughts clearly
  • Accountability and Ownership: Understanding and taking responsibility
  • Awareness and Respect: Empathizing with your colleagues
  • Openness and Honesty: Stating and accepting the truth

Your environment is collaborative to the extent these principles are represented in day-to-day interactions. If work proceeds without fundamental clarity, if mistakes generate blamestorms, if coworkers act disrespectfully, if people are afraid of saying what’s true—you aren’t truly collaborating.

Several of the behaviors Brown cites as necessary for embodying these virtues are also essential for doing useful research. Embrace these habits in all research and design work:

  • Have a plan
  • Provide a rationale for decisions
  • Define roles and responsibilities
  • Set expectations
  • Communicate progress
  • Reflect on performance

Anyone can encourage these behaviors simply by asking clarifying questions. If you’re coming in as a freelancer or a contractor to work as part of an internal team, ask about the approach to collaboration and decision-making before getting started; it’s the only way to make sure that you’ll have the information you need to do your job. If you occupy a position of influence, remember that issuing an edict is not enough; these are habits that require ongoing recognition and reward. It is much more comfortable for people to keep their heads down and produce. The point is not to remain comfortable.

The fear of confrontation

One of the arguments against working collaboratively is that it will lead to groupthink and design by consensus. This is not true. Groupthink happens when a team optimizes for the appearance of agreement to avoid dealing with a shared fear of confrontation. As Brown puts it, conflict allows teams “to acknowledge their lack of alignment and to work together to achieve a shared understanding.” Healthy conflict is essential to collaboration. Challenging design decisions makes them stronger. Just think of all the bad design that made it out into the world because no one asked, “Is that really a good idea?”

In a functional organization, people work through conflict without feeling personally attacked because the conflict is grounded in mutual respect and a desire for shared success. Good design requires good decision-making; good decision-making requires shared understanding.

Better products, faster

The myth that research slows things down persists, especially in high-growth startups or any company anxious about rapid innovation. But, in truth, working in a shared reality based on actual evidence makes for faster decisions. Nothing is slower than rushing to ship the wrong thing and having to clean up afterwards.

You can’t talk anyone into caring about research. Don’t try. Start from a place of agreement: everyone wants to create better products, faster. Once you rally around that, then you can talk about how to achieve it—by making sure everyone on the team has clear goals, clear roles, reasonable timelines, and a strong sense of what you know and what you don’t about your potential customers and their needs. In order to reduce risk, continuous questioning and learning needs to be baked into your process from the start.

Working with an Agile development team

Agile is a popular software development philosophy with the goal of building better software faster in a productive, collaborative working environment. Many short iterations of two or three weeks replace the traditional approach of multimonth or multiyear projects broken into distinct phases.

On the surface, Agile seems antithetical to design. The Agile Manifesto explicitly values “responding to change over following a plan.” Design is planning. However, any work with complex ideas and dependencies requires holding some ideas outside the development process. You can’t cave in completely to the seductive solipsism that Agile offers, or you’ll be tunneling efficiently and collaboratively toward the center of the earth. While flexibility and responsiveness are certainly virtues that many project teams could use more of, let’s not discount the importance of having some sort of plan.

From a user-experience perspective, the primary problem with Agile is that it’s focused on the process, not the outcomes. It doesn’t offer guidance on what to build, only how. Perhaps your team is more efficient and happier making a lot of stuff together, but how do you know that stuff is the best it could be, meeting real user needs and fit to compete in the marketplace?

If you’re always reacting without a framework, you need some guiding mandates. Which customers do you listen to and why? Which user stories do you prioritize? What are you ultimately building toward?

Research is not antithetical to moving fast and shipping constantly. You’ll need to do some upfront work for background and strategy and the overall framework. Then, as the work progresses, do continual research.

It might sound counterintuitive, but the most effective approach may be to decouple the research planning from the development process—that is, don’t wait to start coding until you’ve answered all your research questions. Once you have some basic tools and processes in place, such as observation guides, interview guides, recording equipment, and questions for analysis, you can take a Mad Libs approach and fill in your actual questions and prototypes on the fly.

Jeff Patton describes this continuous user-research process in his article “Twelve Emerging Best Practices for Adding UX Work to Agile Development” (http://bkaprt.com/jer2/02-01/). He offers a tidy three-point summary:

  • Aggressively prioritize the highest-value users.
  • Analyze and model data quickly and collaboratively.
  • Defer less urgent research and complete it while the software is being constructed.

In other words, focus only on the essential user types, deal with your data as soon as you get it, involve your team in the analysis, and do the less important stuff later.

This of course opens up the questions of who the highest-value users are and what the more or less urgent research activities are. Prioritize those user types whose acceptance of the product is critical to success and those who least resemble the software developers on your team. Go learn about them.

Recruiting and scheduling participants is the most difficult part, so always be recruiting. Set up windows of time with different participants every three weeks. When you have them, you can either conduct an ethnographic interview (see Chapter 5) to understand their behavior before the next round of development or do some usability testing on the current state of the application.

Use what you learn from the initial user research and analysis to create personas that inform high-level sketches and user stories. Then, when the team is working on a feature that has a lot more engineering complexity than interaction design complexity, you can fit in additional evaluative research.

Throughout the development cycle, the designers can use research as a periscope, keeping an eye out for new insights about users and competitive opportunities while doing usability testing on whatever is ready.

Just Enough Rigor

Professional researchers are not unlike journalists. While many people have sufficient skills to observe, analyze, and write, it’s allegiance to a set of standards that sets the pros apart. In addition to being professional and respectful in your work, there are just a few responsibilities to keep in mind.

Cover your bias

Wherever there is research, there is bias. Your perspective is colored by your habits, beliefs, and attitudes. Any study you design, run, or analyze will have at least a little bit of bias. Your group of participants will be imperfectly representative. Your data gathering will be skewed. Your analysis will be colored by selective interpretation.

Don’t give up!

You can’t eliminate it completely—but the simple act of noting potential or obvious bias in your research process or results will allow you to weigh the results more appropriately. In lieu of a trained eye, use the following bias checklist, or make your own. Grade hard.

Design bias

Design in this case refers to the design of the studies themselves: how they are structured and conducted. This is the bias that creeps into studies when you don’t acknowledge your own biases, or when you include or leave out information based on personal goals or preferences.

Sampling bias

If your app for science-minded new parents is intended to serve men and women in equal numbers but all of your subjects are women, that’s a biased sample. If you offer an optional post-visit survey, the people who choose to take it are a self-selected biased sample of all visitors—often the angry ones. If you only interview the happy customers who show up in your contact database, that is a wildly biased sample.

Some level of sampling bias is unavoidable; even random sampling isn’t truly random. (See Chapter 9 for a deeper exploration of sampling.) You can counter sampling bias by being mindful about how you generalize from your findings.

Interviewer bias

Conducting unbiased interviews is difficult. Inserting one’s opinions is easy. Make sure that interviewers remain as neutral as possible.

This is something to watch out for particularly at the beginning of interviews, when you are trying to establish rapport. Maybe the interviewer is super enthusiastic about one aspect of the project, and it shows in the questions. Practice interviews and critiques with an internal team are the best way to develop a neutral interviewing style.

Sponsor bias

Sponsor bias is one of the biggest issues with onsite lab usability tests, because going onsite feels special and can be exciting or even daunting to a participant. If an organization is inviting you into their facility, offering you snacks, and writing you a check, it is very possible you will be gentler in your evaluations. To decrease sponsor bias without being deceptive, use a general description of the organization and goals of the study without naming the specific company until and unless it appears in materials you are evaluating.

Social desirability bias

Everyone wants to look their best. People want to be liked. It can be hard to admit to an interviewer that you don’t floss or pay off your credit card bill every month, so participants will sometimes give the answers that cast the best light. Emphasize the need for honesty and promise confidentiality. Also, be mindful of asking specific questions too soon or at the wrong time. Often, asking about a general topic (such as household routines) will naturally lead into more sensitive topics without triggering a defensive response.

The Hawthorne effect

The behavior of the people you are studying might change just because you are there. Staff who typically goof around and chat during the day might clam up and shuffle files if you’re hanging about to observe their workflow. Do your best to blend into the background and encourage research participants to go about their normal day. This bias is named for the Hawthorne Works in Cicero, Illinois, where the productivity experiments that led to the discovery of this effect were conducted in the early twentieth century.

The curse of knowledge

Once you know something, it’s impossible to imagine what it’s like not to know the thing, which makes it difficult to communicate about a topic with people who know less. A doctor might ask whether you’ve experienced a vasovagal syncope, forgetting that most people just call it fainting. This is both a reason to do research and a caution when creating interview questions.

The curse of knowledge sounds like a malediction out of Doctor Strange, but this term was coined by Colin Camerer, a behavioral economist who also started a record label as an economic experiment and signed the Dead Milkmen. So, that’s even better.

Seeking validation

References to “validation” get thrown around design and development like everyone is stressed out about overpaying for lunchtime parking. The term has specific meanings depending on the context, but in practice it often translates to “I would like to go through the motions of learning so that I can check a box and keep moving.”

If you are following software-engineering quality-management standards, verification and validation is a set of procedures for checking that a product, service, or system meets specifications and fulfills a defined set of end-user or stakeholder expectations. This sort of validation is often summarized as the question, “Am I building the right product?” Waiting to answer this question until you have the actual product seems mighty risky.

As part of hypothesis-driven design, you might turn a core assumption into a hypothesis, such as, “We believe our customers would make different spending decisions if they were aware of the carbon footprint of each product.” There are several ways to test that hypothesis to validate it, such as interviewing customers about spending habits or testing a prototype catalog.

However, when it comes to UX, avoid the phrase validate the design. These three words set the expectation that your goal is to be proven right, not to learn. This may seem like a small thing, but if you equate negative feedback with failure or not meeting your goal, everyone on your team will strive to accentuate the positive, and this will weaken the resulting work.

Kara Pernice of the Nielsen Norman Group makes the following excellent suggestion:

If “validate” is a permanent fixture for you or your team, consider balancing the possible priming by pairing it with “invalidate,” as in “Let’s test the design to validate or invalidate it.” (http://bkaprt.com/jer2/02-02/)

It feels too good to be proven right. Confirmation bias lurks around every corner. (Positive publication bias is a huge issue in scientific research. Studies with positive results are over-represented in the literature.) Don’t explicitly invite it into your process. Stay strong. Aim to be proven wrong. And you’ll be far more right in the long run.

The ethics of research

What harm can come of asking people how they decide what to have for dinner or how they use their phones to find directions? We aren’t talking about clinical trials of dangerous new cancer drugs, but all research that includes people and their personal information must be conducted ethically and conscientiously. It’s our responsibility as professionals to proceed without deceiving or injuring any of the participants.

What follows is a starter set of ethical concerns you should keep in mind whenever you are doing research. (For more thorough guidelines, take a look at the ICC/ESOMAR Code on Market and Social Research, which is available in fifteen languages: http://bkaprt.com/jer2/02-03/.)

The project as a whole

Maybe this goes without saying, but it is worth saying nevertheless. Is your overall goal, the project that the research supports, ethical? Will your success lead to harm for others? If it will, don’t participate in it. You should be intentional about your position. An otherwise aboveboard study doesn’t become ethical if its purpose is to induce women to buy a diet aid with dangerous side effects.

The goals or methods of the research

Some research requires keeping certain facts from the participants. Usually this is benign, such as hiding the name and description of the product you’re designing, but sometimes it’s a problem. Will concealing these facts lead those users to participate in anything they might not otherwise agree to? Are you tricking them or setting some unrealistic expectation about the real world? Are you presenting false information as true?

Consent, transparency, and privacy

Informed consent is the rule. This means that participants must understand and agree in advance to the overall goals of any study and how their information will be recorded, used, or shared. Let them know if they are being watched by unseen observers. Make sure that research participants are of sound mind and able to give consent to participate.

The use and abuse of user data has emerged as one of the most critical issues in internet technology. Much of this has involved Facebook. In 2014, they conducted a psychological experiment in which researchers manipulated the news feeds of 689,003 users to highlight positive or negative posts to see whether the content viewed affected the mood of subsequent status updates. The resulting paper was titled “Experimental evidence of massive-scale emotional contagion through social networks” (http://bkaprt.com/jer2/02-04/). Cool. The experiment was conducted before Facebook changed their data use policy to include “research.” Cool. Cool.

The commercial web runs on implied consent. Every site and app has policies no one reads. There is a fine line between A/B testing and showing different information to different audiences in order to study their reactions to manipulated or discriminatory material. In one case, you are testing the performance of the system; in the other, you are studying the changes the system caused in human beings.

And because minors cannot legally agree to these sorts of implied contracts, conducting research on underage subjects requires the consent of a parent or guardian. In the United States, the age below which minors require parental consent to participate in research varies from state to state, so it’s a good idea to get explicit consent from the parents of anyone under eighteen. This is true whether you are asking children directly about their feelings and experiences through an interview or survey, observing the behavior of children, or analyzing information about individual children in unpublished sources.

Basic safety

Ensure that participants know what is required of them in advance and will be comfortable and not fatigued. Verify that your presence in a home or workplace will not lead to any risks or danger. For example, if you’re observing someone taking care of small children, make sure your actions don’t distract in any way that would interfere with proper care.

And for the love of all humanity, never, ever agree to do telephone interviews when anyone involved is driving. Not participants, not interviewers, not passive observers. No one. As soon as you learn that someone is on the phone while driving, end the call, and follow up by email or another means to reschedule if necessary.

Staying out of judgment

Researcher Vivianne Castillo has spoken and written about the relationship between empathy and shame in UX research. In “Ethics & Power: Understanding the Role of Shame in UX Research,” she points out that the word empathy is too often a shallow cliché because the unexamined privilege of the researcher leads to pity being labeled as empathy:

[Pride] keeps us from recognizing that some user research “war stories” are excuses to ridicule and mock, as if empathy is only practiced when you’re with a participant. (http://bkaprt.com/jer2/02-05/)

According to Castillo, in order to connect with research participants and practice true empathy, it is every researcher’s responsibility to be aware of their own experiences of pride and shame, and to recognize when participants might be experiencing shame. Only by slowing down and being open to vulnerability can we build rapport and fulfill our responsibility to the participants.

A handy checklist

The noted sociologist Michael Quinn Patton’s ethical research checklist is a good starting place. The following is adapted from Qualitative Research & Evaluation Methods:

  • Explain the purpose of your research and your methods in clear, plain language the participant will understand.
  • Describe the benefits of participating, for the individual participant and for the greater good (e.g., “help us design better products for people like you”).
  • Note any risks to the participant (social, physical, psychological, financial, and so on).
  • Make mutual promises of confidentiality to the extent they are possible, and do not promise more than you can deliver.
  • Obtain informed consent for participation, if necessary.
  • Determine who will have access to the data and for what purpose.
  • Discuss how the interviewer might be affected by conducting the research.
  • Identify the source of expertise and advice about ethical matters you will go to when questions arise.
  • Decide how hard to push for data if the participants become uncomfortable with answering.
  • Define your professional code of ethics and philosophy to ensure that you are proceeding ethically in spirit, and not just following the minimal rules.

This might seem like overkill when you are just talking to a few customers about their use of project-management software. However, given the complexity of the systems many of us are working on, it’s easy to stray into gray areas very quickly when you aren’t paying attention. Turns out there is a short distance between creating a platform to share cat photos and participating in surveillance capitalism. So it’s best to follow good practices from the beginning, when everything seems the most straightforward.

Be a skeptic

Not only do you need to be ethical about gathering information from individuals’ online activity, you also need to remain vigilant and remember that not everything posted online is true or representative of reality.

Get in the habit of asking a lot of questions. Question all your assumptions and determine whether you need to check your facts. If you’re constantly on the lookout for threats and potential points of failure, you and your products will be stronger.

This is a type of critical thinking that will serve you well at all times. You need to be aware of how much you don’t know and what that means. Awareness of your own limits will allow you to be as effective as possible within them.

Best Practices

There are many good reasons why people get master’s degrees and PhDs and become professional analysts and researchers, and there are plenty of reasons why companies benefit from hiring those people. Specialized, educated, and trained researchers cultivate a deep curiosity, have a broad base of relevant knowledge, and gain academic and professional experience conducting ethical and methodical studies.

As a designer or developer, you might have good reasons to avoid DIY and hire a trained professional. These include:

  • a large, complex project
  • a large, complex organization
  • highly specialized or sensitive subject matter
  • a very specialized or challenging user base, such as children, hedge fund managers, or prisoners
  • heinous organizational politics
  • lack of team members with the time or inclination to acquire additional skills and duties

Skilled, trained professional researchers have rigor. They can apply precise critical thinking in the face of common distractions and pressures, such as the enthusiasm of their team or their manager’s personal preferences. The best researchers also have enough humor and humanity to roll with imperfect circumstances. You want rigorous, not rigid.

But in the absence of a trained professional, how do you ensure you are being sufficiently rigorous? You’re an amateur attempting these stunts on the open road instead of a closed course; how do you make sure you and your work don’t go up in flames?

You borrow the methods of America’s greatest amateur, Benjamin Franklin: discipline and checklists.

Discipline requires you to be ever watchful for bad habits, shoddy thinking, and other human frailties that will undermine your efforts. Checklists substitute the experience of others for your own, and give you access to cool thinking in the midst of a hot mess. Discipline also requires that you don’t deviate from your checklists without good reason.

Here is the first checklist, that of best practices. Go over the items again and again until you know them by heart, and then post them where you can see them. (Why rely on memory when you don’t have to?)

1. Phrase questions clearly

This refers not to the questions you’re asking, but to the big question you’re trying to answer. Unless you know and can clearly state what you’re trying to find out and why, applied research is a pointless exercise.

2. Set realistic expectations

A successful study is preceded by expectation-setting for everyone involved, including the questions to be answered, the methods to be used, and the decisions to be informed by the findings. This is particularly important when you need to request time or budget for the work. If your research doesn’t meet the expectations of the stakeholders, they will treat you like you’ve wasted time and money. Ask team members and managers what they hope for. Tell them what to expect.

3. Be prepared

Research is like cooking: the better you prep, the faster and cleaner the work goes. If you don’t prepare, you end up with a huge mess and a kitchen in flames. Get your process and materials in order before you start. Set them up so they’re easy to reuse as needed.

4. Allow sufficient time for analysis

You need a little time for things to click into place. After doing the research—or while still in the middle—it’s tempting to just forge ahead to solutions without giving yourself enough time to digest. Again, a bit more time here can save lots later on.

5. Make it memorable and motivating

Notes or it didn’t happen. Effective research requires effective reporting and sharing your results and recommendations with others. A good report doesn’t have to be arduous to compile or read. It needs to be sufficiently informative and very clear to anyone who needs to make decisions based on the research. A single page could be enough.

The whole point of research is to inform decisions. You don’t necessarily need specialized communication channels and documentation. You need to get the insights in front of the decision-makers when they need them. Look at the types of communication that are most effective in your organization and copy what’s already working.

You may be doing your own research to save time and money, but be honest with yourself and your team about your capacity. Otherwise you risk wasting both time and money, as well as spreading misinformation and decreasing the overall reputation of research as a necessary input into the work.

Can you commit?

Good. Onward.

How Much Research Is Enough?

There are things we know that we know. There are known unknowns—that is to say, there are things that we now know we don’t know. But there are also unknown unknowns—there are things we do not know we don’t know.

Donald Rumsfeld, former US secretary of defense

In addition to offering the clarity and confidence necessary to design, research is essential to reducing your risk—the risk you incur by relying on assumptions that turn out to be wrong or by failing to focus on what’s most important to your business and your users. However, some assumptions pose greater risk than others.

To make the best use of your time and truly do just enough research, try to identify your highest-priority questions—your assumptions that carry the biggest risk.

For example, given your stated business goals, what potential costs will you incur—what bad things will happen—if, six months from now, you realize:

  • you are solving the wrong problem,
  • you were wrong about how much organizational support you have for this project,
  • you don’t have a particular competitive advantage you thought you had, or you didn’t recognize a competitive advantage until your competitor copied it,
  • you were working on features that excited you but that don’t actually matter much to your most important customers,
  • you failed to reflect what is most important to your users,
  • your users don’t really understand the labels you’re using,
  • you missed a key aspect of your users’ environments,
  • you were wrong about your prospective users’ habits and preferences, or
  • your product, service, or system can be misused in ways you didn’t consider?

If there is no risk associated with an assumption—if, say, you are working on a technical proof of concept that really, truly doesn’t have to satisfy any real-world users—then you don’t need to spend time investigating that assumption.

On the other hand, maybe the success of your new design depends on the assumption that many people who shop online value the ability to publicly share their transactions. You could conduct research to understand the social sharing practices and motivations of people who shop online before diving into design and development. Or you could go ahead and design based on an optimistic assumption and then see what happens. At risk are the time and money to design and build the functionality, as well as your organization’s reputation.

A better understanding of online shoppers mitigates the risk by validating the assumption and informing your design with real user priorities. In addition, you might uncover opportunities to provide something of even greater value to that same audience.

All it takes to turn potential hindsight into happy foresight is keeping your eyes open and asking the right questions. Failing isn’t the only way to learn.

That satisfying click

No matter how much research you do, there will still be things you wish you’d known, and there are some things you can only learn once your design is out there in the world. Design is an iterative process. Questions will continue to crop up. Some of them you can answer with research; some you can only answer with design. Even with research, you’ll need to create a few iterations of the wrong thing to get to the right thing. And then something out in the world will change and your right thing will be wrong again. There is no answer to the question of enough, other than the point at which you feel sufficiently informed and inspired. The topics in this book can only offer a starter kit of known unknowns.

That said, one way to know you’ve done enough research is to listen for the satisfying click. That’s the sound of the pieces falling into place when you have a clear idea of the problem you need to solve and enough information to start working on the solution. The click will sound at different times depending on the problem at hand and the people working on it.

Patterns will begin to emerge from the data. Those patterns will become the answers you need to move forward. This will be very satisfying on a neurochemical level, especially when you start out with a lot of uncertainty. Since human brains are pattern-recognition machines, you might start seeing the patterns you want to see, even when they aren’t actually there. Collaborating with a team to interpret the data will reduce the risk of overly optimistic interpretation.

If you don’t have enough information, or what you’re finding doesn’t quite hold together, the pieces will rattle around in your head. Ask a few more questions or talk to a few more people. Talk through the results. The pieces will fall into place.

Learn to listen for that click.
