Chapter 15. Feedback and Research

Research is formalized curiosity. It is poking and prying with a purpose.

Zora Neale Hurston

Research with users is at the heart of UX design. Too often, though, teams outsource research work to specialized research teams. And too often, research activities take place on rare occasions—either at the beginning of a project or at the end. Lean UX solves these problems by making research both continuous and collaborative. Let’s dig in to see how to do that.

In this chapter, we cover the following:

  • Collaborative research techniques that you can use to build shared understanding with your team

  • Continuous research techniques to build small, informal, qualitative research studies into every iteration

  • How to use small units of regular research to build longitudinal research studies

  • How to reconcile contradictory feedback from multiple sources

  • What artifacts to test and what results you can expect from each of these tests

  • How to incorporate the voice of the customer throughout the Lean UX cycle

Continuous and Collaborative Research

Lean UX takes basic UX research techniques and overlays two important ideas. First, Lean UX research is continuous. This means you build research activities into every sprint. Instead of being a costly and disruptive “big bang” process, we make it bite-sized so that we can fit it into our ongoing process. Second, Lean UX research is collaborative. This means that you don’t rely on the work of specialized researchers to deliver learning to your team. Instead, research activities and responsibilities are distributed and shared across the entire team. By eliminating the handoff between researchers and team members, we increase the quality of our learning. Our goal in all of this is to create a rich shared understanding across the team.

Collaborative Discovery

Collaborative discovery is the process of working together as a team to test ideas in the market. It is one of the two main cross-functional techniques that create shared understanding on a Lean UX team. (Collaborative design, covered in Chapter 14, is the other.) Collaborative discovery is an approach to research that gets the entire team out of the building—literally and figuratively—to meet with and learn from customers and users. It gives everyone on the team a chance to see how the hypotheses are tested and, most important, multiplies the number of perspectives the team can use to gather customer insight.

It’s essential that you and your team conduct research together; that’s why we call it collaborative discovery. Outsourcing research dramatically reduces its value: it wastes time, it limits team building, and it filters the information through deliverables, handoffs, and interpretation. Don’t do it.

Researchers sometimes feel uneasy about this approach. As trained professionals, they are right to point out that they have special knowledge that is important to the research process. We agree. That’s why you should include a researcher on your team if you can. Just don’t outsource the work to that person. Instead, use the researcher as an expert guide to help your team plan their work and to lead the team through their research activities. In the same way that Lean UX encourages designers to take a more facilitative approach, it asks the same of researchers. Researchers should use their expertise to help the team plan good research, ask good questions, and select the right methods for the job. They just shouldn’t do all of the research for the team.

Collaborative discovery in the field

Collaborative discovery is simply a way to get out into the field with your team. Here’s how you do it:

  1. As a team, review your questions, assumptions, hypotheses, and MVPs. Decide as a team what you need to learn. (Box 7 on the Lean UX Canvas.)

  2. Working as a team, decide on your research method. (Box 8 on the Lean UX Canvas.) If you are planning to work directly with customers and users, decide who you’ll need to speak to and observe to address your learning goals.

  3. Create an interview guide (see the sidebar “The Interview Guide”) that you can all use to guide your conversations.

  4. Break your team into research pairs, mixing up the various roles and disciplines within each pair (i.e., try not to have designers paired with designers). If you are doing this research over a number of days, try to mix up the interview pairs each day so that people have a chance to share experiences with various team members.

  5. Arm each pair with a version of your MVP, prototype, or other materials you want to show to the research participants.

  6. Send each team out to meet with customers/users.

  7. One team member interviews while the other takes notes.

  8. Begin with questions, conversations, and observations.

  9. Demonstrate the MVP later in the session and allow the customer to interact with it.

  10. Collect notes as the customer provides feedback.

  11. When the lead interviewer is done, switch roles to give the notetaker a chance to ask follow-up questions.

  12. At the end of the interview, ask the customer for referrals to other people who might also provide useful feedback.

A collaborative discovery example

A team we worked with at PayPal set out with a clickable prototype to conduct a collaborative discovery session. The team was made up of two designers, a UX researcher, four developers, and a product manager; they split into teams of two and three. They paired each developer with a nondeveloper. Before setting out, they brainstormed what they’d like to learn from their prototype and used these ideas to write brief interview guides. Their product was targeted at a broad consumer market, so they decided to just head out to the local shopping malls scattered around their office. Each pair targeted a different mall. They spent two hours in the field, stopping strangers, asking them questions, and demonstrating their prototypes. To build up their individual skill sets, they changed roles (from lead to notetaker) an hour into their research.

When they reconvened, each pair read their notes to the rest of the team. Almost immediately they began to see patterns emerge, confirming some of their assumptions and rejecting others. Using this new information, they adjusted the design of their prototype and headed out again later that afternoon. After a full day of field research, it was clear which parts of their idea worked well and which parts would need adjusting. When they began the next sprint the following day, every member of the team was working from the same baseline of clarity, having built a shared understanding by means of collaborative discovery the day before.

Continuous Learning

Designers and researchers face a lot of pressure to force their work into a sprint framework. The problem is that some work just takes a long time, especially some kinds of research. This long-cycle work has the potential to create conflict on Agile teams. Researchers are used to planning multiweek research projects, for example. And when they try to do this on an Agile team and put their eight-week research project into the backlog, they end up having to explain at the end of every sprint why their work isn’t “done.” It makes everyone unhappy.

Going back to principles

When faced with a conflict like this, it’s helpful to go back to principles. Remember this principle from Chapter 2? Don’t do the same thing faster. And this one? Beware of phases. These principles tell us that we shouldn’t try to fit an eight-week research study into a two-week sprint. Instead, we should rethink the way we plan our research and the way we think about “done” for research work.

To do that, let’s consider why the Scrum framework is so insistent on the notion of done. Scrum says that any work you do during a sprint should be done by the end of that sprint. This is a powerful forcing function: it forces everyone to show their work. And it makes the assumption that finished work is valuable. (That’s not always true, but that’s the goal.)

For us then, the goal of done is really: “Be transparent and deliver value every sprint.”

How can we use that idea when we’re planning research? Well, instead of thinking about completing our eight-week study in two weeks, we can ask, “How can we be transparent and deliver value every two weeks, even as we’re working on an eight-week study?” We might deliver an experience report at sprint demo meetings. We might present some early conclusions after completing half of our interviews. We might present and discuss the new questions that have come up as we’ve started to learn new things. Those things are all valuable to the team. They make the work transparent. They maintain the spirit of Agile while also keeping the integrity of the research work high.

Continuous research: Research is never done

A high-functioning Agile team should be doing research continuously. A critical best practice in Lean UX is building a regular cadence of customer involvement. Regularly scheduled conversations with customers let you minimize the time between hypothesis creation, experiment design, and user feedback—giving you the opportunity to validate your hypotheses quickly.

In other words, research should inform the decisions the product team is making. Since you’re making decisions constantly, you want to make sure you have the latest research data at hand at all times. (And conversely, the research agenda should sometimes drive development priorities, because sometimes you need to build things specifically to support the needs of your researchers. It’s a two-way conversation.)

In general, knowing you’re never more than a few days away from getting customer feedback has a powerful effect on teams. It takes the pressure off of your decision making because you know that you will soon have an opportunity to get meaningful data from the market—and course correct quickly if needed.

So stop thinking in terms of research studies and research phases, and instead think of research as a continuous part of your team’s operating rhythm. Share your work. Deliver value each week. Be honest about what you do and don’t know. Help your team learn. The rest of this chapter will show you how.

Continuous learning in the lab: Three users every Thursday

Although you can create a standing schedule of fieldwork based on the aforementioned ideas, it’s much easier (especially for companies that work with consumers) to bring customers into the building—you just need to be a little creative to get the entire team involved.

We like to use a weekly rhythm to schedule research, as demonstrated in Figure 15-1. We call this “Three, twelve, one” because it’s based on the following guidelines: three users; by twelve noon; once a week.

Figure 15-1. The three, twelve, one activity calendar

Here’s how the team’s activities break down:

Monday: Recruiting and planning
Decide, as a team, what will be tested this week. Decide who you need to recruit for tests and start the recruiting process. Outsource this job if at all possible: it’s very time-consuming (see the sidebar “A Word About Recruiting Participants”).
Tuesday: Refine the components of the test
Based on what stage your MVP is in, begin refining the design, the prototype, or the product to a point that will allow you to tell at least one complete story when your customers see it.
Wednesday: Continue refining, write the script, and finalize recruiting
Put the final touches on your MVP. Write the test script that your moderator will follow with each participant. (Your moderator should be someone on the team if at all possible.) Finalize the recruiting and schedule for Thursday’s tests.
Thursday: Test!
Spend the morning testing your MVP with customers. Spend no more than an hour with each customer. Everyone on the team should take notes. The team should plan to watch from a separate location. Review the findings with the entire project team immediately after the last participant is done.
Friday: Plan
Use your new insight to decide whether your hypotheses were validated and what you need to do next.

Simplify your test environment

Many firms have established usability labs in-house—and it used to be you needed one. These days, you don’t need a lab—all you need is a quiet place in your office and a computer with a network connection and a webcam. It used to be necessary to use specialized usability testing products to record sessions and connect remote observers. These days, you don’t even need that. We routinely run tests with remote observers using nothing more exotic than Zoom.

The ability to connect remote observers is a key element. It makes it possible for you to bring the test sessions to team members and stakeholders who can’t be present. This has an enormous impact on collaboration because it spreads understanding of your customers deep into your organization. It’s hard to overstate how powerful this is.

Who should watch?

The short answer is your entire team. Like almost every other aspect of Lean UX, usability testing should be a group activity. With the entire team watching the tests, absorbing the feedback, and reacting in real time, you’ll find the need for subsequent debriefings reduced. The team will learn firsthand where their efforts are succeeding and failing. Nothing is more humbling (and motivating) than seeing a user struggle with the software you just built.

Continuous research: Some examples

Companies operationalize continuous research in many different ways. For example, the team at ABN AMRO, a bank in the Netherlands, runs what they call a Customer Validation Carousel once a week. This weekly user research event is structured like speed dating. Each week, five customers come into the company’s offices. Each customer is set up at their own research station. Then a group of interviewers come into the room and spread out, each sitting with one customer. (At ABN AMRO, many of the interviewers are “people who do research”—in other words, designers and product people rather than trained researchers. Because of this, the trained researchers on staff work with them before the event to help them create their research plan and discussion guides. Interviews are often conducted by a pair of interviewers who work together and take turns interviewing and taking notes.)

The interviewers conduct 15-minute interviews with each participant. When the 15 minutes are up, each interviewer or pair gets up and moves to the next participant—kind of like musical chairs. In this way, each interviewer gets to speak to each participant. After everyone has spoken with everyone else, the customers leave and the interviewers convene to debrief. Normally, each interviewer is assigned to a single topic, but even though they may not be working on the same set of questions, the debrief is valuable, helping them better understand the customers and helping them to interpret the data that they’ve just collected. They capture learnings from this event on a single-page insight template and then add these documents to the company’s shared insights database. Researcher Ike Breed, who helped set up this process, told us that this sharing step really democratized research. “People thought the insights database was a really formal thing. They asked, ‘you mean I can put something in there?’” By opening the process up to contribution from a wider group, it helped the product and design teams feel more ownership of the customer insights process and the data that was collected as part of that process.

Testing Tuesdays

Another researcher we spoke to told us about how he started a practice called “Testing Tuesdays” at his company, a financial services firm building consumer-facing technology. Andrew Bourne was hired there as a usability researcher. When he arrived, he discovered a long backlog of work waiting for him. As he worked through the backlog, he started reporting on the research results at every Sprint Demo, which, at his company, took place every Tuesday. Because he had such a large backlog of work, there was always something new to report. To help make sure that his reports got heard by everyone who was interested, he started publicizing the contents of his briefing in advance using email announcements. He’d announce, “This week, I’ll be reporting on X.” This had two really positive effects. First, it got people to show up for his briefings—often many more people were interested in the results than he’d anticipated. Additionally, product people started coming to him asking for his partnership in the research that they wanted to do. In other words, it grew the demand for research. And not just usability studies. The folks coming to him were asking for all kinds of research—including early-stage formative studies.

Making Sense of the Research: A Team Activity

Whether your team does fieldwork or labwork, research generates a lot of raw data. Making sense of this can be time-consuming and frustrating—so the process is often handed over to specialists who are asked to synthesize research findings. You shouldn’t do this. Instead, work as hard as you can to make sense of the data as a team.

As soon as possible after the research sessions are over—preferably the same day, if not then the following day—gather the team together for a review session. When the team has reassembled, ask everyone to read their findings to one another. One really efficient way to do this is to transcribe the notes people read out loud onto index cards or sticky notes and then sort the notes into themes. This process of reading, grouping, and discussing gets everyone’s input out on the table and builds the shared understanding that you seek. With themes identified, you and your team can then determine the next steps for your MVP.

Confusion, contradiction, and (lack of) clarity

As you and your team collect feedback from various sources and try to synthesize your findings, you will inevitably come across situations in which your data presents you with contradictions. How do you make sense of it all? Here are a few ways to maintain your momentum and ensure that you’re maximizing your learning.

Look for patterns
As you review the research, keep an eye out for patterns in the data. Patterns reveal repeated instances of user behavior and opinion, and those repeated elements are the ones worth exploring further. If a piece of feedback doesn’t fall into a pattern, it is likely an outlier.
Place your outliers in a “parking lot”
Tempting as it is to ignore outliers (or try to serve them in your solution), don’t do it. Instead, create a parking lot or backlog. As your research progresses over time (remember: you’re doing this every week), you might discover other data points that match those outliers, and a new pattern may emerge. Be patient.
Verify with other sources
If you’re not convinced the feedback you’re seeing through one channel is valid, look for it in other channels. Are the customer support emails reflecting the same concerns as your usability studies? Is the value of your prototype echoed with customers inside and outside your office? If not, your sample might have been disproportionately skewed.

Identifying Patterns over Time

Typical UX research programs are structured to get a conclusive answer: you will plan to do enough research to conclusively answer a question or set of questions. Lean UX research takes a different approach. It puts a priority on being continuous—which means that you are structuring your research activities very differently. Instead of running big studies, you are seeing a small number of users every week. This means that some questions might remain open over a couple of weeks. One big benefit, though, is that interesting patterns can reveal themselves over time.

For example, over the course of regular test sessions from 2008 to 2011, the team at TheLadders watched an interesting change in their customers’ attitudes over time. In 2008, when they first began meeting with job seekers on a regular basis, they would discuss various ways to communicate with employers. One of the options they proposed was SMS. In 2008, the audience, made up of high-income earners in their late 40s and early 50s, showed a strong disdain for SMS as a legitimate communication method. To them, it was something their kids did (and that perhaps they did with their kids), but it was certainly not a “proper” way to conduct a job search.

By 2011, though, SMS messages had taken off in the United States. As text messaging gained acceptance in business culture, audience attitudes began to soften. Week after week, as they sat with job seekers, they began to see opinions about SMS change. The team saw job seekers become far more likely to use SMS in a midcareer job search than they would have just a few years earlier.

The team at TheLadders would never have recognized this as an audience-wide trend were it not for two things. First, they were speaking with a sample of their audience, week in and week out. Second, the team took a systematic approach to investigating long-term trends. As part of their regular interaction with customers, they always asked a consistent set of level-setting questions to capture the “vital signs” of the job seeker’s search—no matter what other questions, features, or products they were testing. By doing this, the team was able to establish a baseline and track bigger trends over time. The findings about SMS would not have changed the team’s understanding of their audience if they’d represented just a few anecdotal data points. But aggregated over time, these data points became part of a very powerful dataset.

When planning your research, it’s important to consider not just the urgent questions—the things you want to learn over the next few weeks. You should also consider the big questions. You still need to plan big standalone studies to get at some of these questions. But with some planning, you should be able to work a lot of long-term learning into your weekly studies.

Test what you’ve got

To maintain a regular cadence of user testing, your team must adopt a “test-what-you’ve-got” policy. Whatever is ready on testing day is what goes in front of the users. This policy liberates your team from rushing toward testing day deadlines—or, worse, delaying research activities in pursuit of some elusive “perfect” moment. Instead, when you adopt a test-what-you’ve-got approach, you’ll find yourself taking advantage of your weekly test sessions to get insight on whatever is ready, and this will create insight for you at every stage of design and development. You must, however, set expectations properly for the type of feedback you’ll be able to generate with each type of artifact.

Sketches

Feedback collected on sketches helps you validate the value of your concept (see Figure 15-2). Sketches are great conversation prompts to support interviews, and they help to make abstract concepts concrete, which helps generate shared understanding. What you won’t get from sketches is detailed, step-by-step feedback on the process, insight about specific design elements, or even meaningful feedback on copy choices. You won’t be able to learn much (if anything) about the usability of your concept.

Figure 15-2. Example of a sketch that can be used with customers

Static wireframes

Showing test participants wireframes (Figure 15-3) lets you assess the information hierarchy and layout of your experience. In addition, you’ll get feedback on taxonomy, navigation, and information architecture.

You’ll receive the first trickles of workflow feedback, but at this point your test participants are focused primarily on the words on the page and the selections they’re making. Wireframes provide a good opportunity to begin testing copy choices.

Figure 15-3. Example of a wireframe

High-fidelity visual mock-ups (not clickable)

High-fidelity visual-design assets yield much more detailed feedback. Test participants will be able to respond to branding, aesthetics, and visual hierarchy, as well as aspects of figure/ground relationships, grouping of elements, and the clarity of your calls to action. Your test participants will also (almost certainly) weigh in on the effectiveness of your color palette. (See Figure 15-4.)

Nonclickable mock-ups still don’t let your customers interact naturally with the design or experience the workflow of your solution. Instead of watching your users click, tap, and swipe, you need to ask them what they would expect and then validate those responses against your planned experience.

Figure 15-4. Example of mock-up from Skype in the Classroom (design by Made By Many)

Clickable mock-ups

Clickable mock-ups, like that shown in Figure 15-4, increase the fidelity of the interaction by linking together a set of static assets into a simulation of the product experience. These days, most design tools make it easy to link together a number of static screens to produce these types of mock-ups. Visually, they can be high, medium, or even low fidelity. The value here is not so much the visual polish but rather the ability to simulate workflow and to observe how users interact with your designs.

Designers used to have limited tool choices for creating clickable mock-ups, but in recent years, we’ve seen a huge proliferation of tools. Some tools are optimized for making mobile mock-ups, others are for the web, and still others are platform neutral. Most have no ability to work with data, but with some (like Axure), you can create basic data-driven or conditional logic-driven simulations. Additionally, design tools such as Figma, Sketch, InVision, and Adobe XD include “mirror” features with which you can see your design work in real time on mobile devices and link screens together to create prototypes without special prototyping tools.

Coded prototypes

Coded prototypes are useful because they have the best ability to deliver high fidelity in terms of functionality. This makes for the closest-to-real simulation that you can put in front of your users. It replicates the design, behavior, and workflow of your product. You can test with real data. You can integrate with other systems. All of this makes coded prototypes very powerful; it also makes them the most complex to produce. But because the feedback you gain is based on such a close simulation, you can treat that feedback as more authoritative than the feedback you gain from other simulations.

Monitoring Techniques for Continuous and Collaborative Discovery

In the preceding discussions, we looked at ways to use qualitative research on a regular basis to evaluate your hypotheses. However, as soon as you launch your product or feature, your customers will begin giving you constant feedback—and not only on your product. They will tell you about themselves, about the market, about the competition. This insight is invaluable—and it comes into your organization from every corner. Seek out these treasure troves of customer intelligence within your organization and harness them to drive your ongoing product design and research, as depicted in Figure 15-5.

Figure 15-5. Customers can provide feedback through many channels

Customer service

Customer support agents talk to more customers on a daily basis than you will talk to over the course of an entire project. There are multiple ways to harness their knowledge:

  • Reach out to them and ask them what they’re hearing from customers about the sections of the product on which you’re working.

  • Hold regular monthly meetings with them to understand the trends. What do customers love this month? What do they hate?

  • Tap into their deep product knowledge to learn how they would solve the challenges your team is working on. Include them in design sessions and design reviews.

  • Incorporate your hypotheses into their call scripts. One of the cheapest ways to test an idea is to suggest it as a fix to customers calling in with relevant complaints.

In the mid-2000s, Jeff ran the UX team at a midsized tech company in Portland, Oregon. One of the ways that team prioritized the work they did was by regularly checking the pulse of the customer base. The team did this with a standing monthly meeting with customer service representatives. Each month, Customer Service would provide the UX team with the top 10 things customers were complaining about. The UX team then used this information to focus their efforts and to subsequently measure the efficacy of their work. At the end of the month, the next conversation with Customer Service gave the team a clear indication of whether or not their efforts were bearing fruit. If the issue was not receding in the top-10 list, the solutions had not worked.

This approach generated an additional benefit. The Customer Service team realized there was someone listening to their insights and began proactively sharing customer feedback above and beyond the monthly meeting. The dialogue that was created provided the UX team with a continuous feedback loop to inform and test product hypotheses.

On-site feedback surveys

Set up a feedback mechanism in your product with which customers can send you their thoughts regularly. Here are a few options:

  • Simple email forms

  • Customer support forums

  • Third-party community sites

You can repurpose these tools for research by doing things like the following:

  • Counting how many inbound emails you’re getting from a particular section of the site (see the sketch after this list)

  • Participating in online discussions and testing some of your hypotheses

  • Exploring community sites to discover and recruit hard-to-find types of users
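
The first of these tactics, counting inbound feedback by section, is easy to automate. Below is a minimal sketch in Python; it assumes a hypothetical CSV export of feedback messages with "date" and "section" columns (adjust the names to match whatever your feedback tool actually produces) and tallies messages per section per week so you can watch trends over time.

import csv
from collections import Counter
from datetime import date

def weekly_counts_by_section(path):
    """Tally feedback messages per site section per ISO week.

    Assumes a CSV export with 'date' (YYYY-MM-DD) and 'section' columns;
    these column names are illustrative, not a real tool's schema.
    """
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            iso = date.fromisoformat(row["date"]).isocalendar()
            counts[(iso[0], iso[1], row["section"])] += 1  # (year, week, section)
    return counts

if __name__ == "__main__":
    for (year, week, section), n in sorted(weekly_counts_by_section("feedback.csv").items()):
        print(f"{year}-W{week:02d}  {section:<24} {n}")

A section whose feedback count climbs week over week is a good candidate for that week's research sessions.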

These inbound customer feedback channels provide feedback from the point of view of your most active and engaged customers. Here are a few tactics for getting other points of view.

Search logs

Search terms are clear indicators of what customers are seeking on your site. Search patterns indicate what they’re finding and what they’re not finding. Repeated queries with slight variations show a user’s challenge in finding certain information.

One way to use search logs for MVP validation is to launch a test page for the feature you’re planning. The search logs will then tell you whether the test content (or feature) on that page is meeting users’ needs. If users continue to search on variations of that content, your experiment has failed.
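
Spotting those repeated variations doesn't require a specialized analytics product. Here is a minimal sketch in Python, assuming a hypothetical CSV export of your search log with a "query" column; it clusters near-duplicate queries so the most frequently rephrased searches (likely signs of unmet needs) rise to the top.

import csv
from collections import Counter
from difflib import SequenceMatcher

def load_queries(path):
    """Read search queries from a CSV export with a 'query' column."""
    with open(path, newline="") as f:
        return [row["query"].strip().lower() for row in csv.DictReader(f)]

def cluster_variations(queries, threshold=0.8):
    """Group near-duplicate queries into clusters.

    Large clusters suggest users are rephrasing the same search because
    they aren't finding what they need on the first try.
    """
    clusters = []  # list of (representative query, Counter of variants)
    for q in queries:
        for rep, variants in clusters:
            if SequenceMatcher(None, q, rep).ratio() >= threshold:
                variants[q] += 1
                break
        else:
            clusters.append((q, Counter({q: 1})))
    return sorted(clusters, key=lambda c: -sum(c[1].values()))

if __name__ == "__main__":
    for rep, variants in cluster_variations(load_queries("search_log.csv"))[:10]:
        print(f"{sum(variants.values()):4d}  {rep}  ({len(variants)} distinct variations)")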

Site usage analytics

Site usage logs and analytics packages—especially funnel analyses—show how customers are using the site, where they’re dropping off, and how they try to manipulate the product to do the things they need or expect it to do. Understanding these reports provides real-world context for the decisions the team needs to make.
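
To make the funnel idea concrete, here is a minimal sketch; the step names and counts are illustrative only, and in practice you would pull these numbers from your analytics package rather than hardcode them.

# Number of users who reached each step of a hypothetical signup funnel.
funnel = [
    ("Visited landing page", 10_000),
    ("Started signup", 3_200),
    ("Completed signup", 2_100),
    ("Created first project", 900),
]

# Report step-to-step conversion and drop-off so the biggest leaks stand out.
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    print(f"{prev_step} -> {step}: {n / prev_n:.1%} converted, "
          f"{prev_n - n:,} users dropped off")

The step with the steepest drop-off is usually the place to focus your next round of qualitative research.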

In addition, use analytics tools to determine the success of experiments that have launched publicly. How has the experiment shifted usage of the product? Are your efforts achieving the outcome you defined? These tools provide an unbiased answer.

If you’re just starting to build a product, build usage analytics into it from day one. Third-party metrics products like Kissmetrics and MixPanel make it easy and inexpensive to implement this functionality, and provide invaluable information to support continuous learning.

A/B testing

A/B testing is a technique, originally developed by marketers, to gauge which of two (or more) relatively similar concepts achieves the defined goal more effectively. When applied in the Lean UX framework, A/B testing becomes a powerful tool to determine the validity of your hypotheses. Applying A/B testing is relatively straightforward once your ideas evolve into working code. Here’s how it works:

  • Take the proposed solution and release it to your audience. However, instead of letting every customer see it, release it only to a small subset of users.

  • Measure the performance of your solution for that audience. Compare it to the other group (your control cohort) and note the differences.

  • Did your new idea move the needle in the right direction? If it did, you’ve got a winning idea.

  • If not, you’ve got an audience of customers that might make good targets for further research. What did they think of the new experience? Would it make sense to reach out to them for some qualitative research?

The tools for A/B testing are widely available and can be inexpensive. There are third-party commercial tools like Optimizely, and there are open source A/B testing frameworks available for every major platform. Regardless of the tools you choose, the trick is to make sure that the changes you’re making are small enough, and the population you select large enough, that any change in behavior can be attributed with confidence to the change you’ve made. If you change too many things at once, you won’t be able to attribute a behavioral change directly to your exact hypothesis.
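
On that last point, a simple significance check helps you judge whether an observed difference is attributable to your change or is just noise. Below is a minimal sketch of a two-proportion z-test using only Python's standard library; the cohort sizes and conversion counts are illustrative.

from math import erf, sqrt

def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    """Compare conversion rates between a control (A) and a variant (B).

    Returns the z statistic and a two-sided p-value; a small p-value
    (conventionally below 0.05) suggests the difference is unlikely
    to be due to chance alone.
    """
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approximation
    return z, p_value

# Illustrative numbers: 10,000 users per cohort; the control converts at 4.1%,
# the variant at 4.8%.
z, p = two_proportion_z_test(410, 10_000, 480, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")

Most dedicated A/B testing tools run a check like this (or a more sophisticated one) for you; the sketch is just to show what "with confidence" means here.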

Wrapping Up

In this chapter, we covered many ways to validate your hypotheses. We looked at collaborative discovery and continuous learning techniques. We discussed how to build a weekly Lean testing process and covered what you should test and what to expect from those tests. We looked at ways to monitor your customer experience in a Lean UX context, and we touched on the power of A/B testing.

These techniques, used in conjunction with the processes outlined in Chapter 4 and Chapter 5, make up the full Lean UX process loop. Your goal is to get through this loop as often as possible, refining your thinking with each iteration.

In the next section, we move away from process and take a look at how to integrate Lean UX into your organization. We’ll cover the organizational shifts you’ll need to make to support the Lean UX approach, whether you’re a startup, large company, or a digital agency.
