Chapter 4. Understanding Users: Qualitative Research

The outcome of any design effort must ultimately be judged by how successfully it meets the needs of both the product user and the organization that commissioned it. No matter how skillful and creative the designer, if she does not have clear and detailed knowledge of the users she is designing for, the constraints of the problem, and the business or organizational goals that are driving design activities, she will have little chance of success.

Real insight into these topics can’t be achieved by digging through the piles of numbers that come from a quantitative study like a market survey (though these can be critical for answering other kinds of questions). Rather, this kind of deep knowledge can only be achieved by qualitative research techniques. There are many types of qualitative research, each of which can play an important role in understanding the design landscape of a product. In this chapter, we focus on specific qualitative research techniques that support the design methods described in subsequent chapters. At the end of the chapter, we briefly discuss how quantitative research can, and cannot, be used to help support this effort.

Qualitative versus Quantitative Research

Research is a word that most people associate with science and objectivity. This association isn’t incorrect, but it biases many people towards the notion that the only valid sort of research is the kind that yields the supposed ultimate in objectivity: quantitative data. It is a common perspective in business and engineering that numbers represent truth, even though we all know that numbers — especially statistics describing human activities — are subject to interpretation and can be manipulated at least as dramatically as words.

Data gathered by hard sciences like physics are simply different from data gathered on human activities: Electrons don’t have moods that vary from minute to minute, and the tight controls physicists place on their experiments to isolate observed behaviors are impossible in the social sciences. Any attempt to reduce human behavior to statistics is likely to overlook important nuances that can make an enormous difference to the design of products. Quantitative research can only answer questions about “how much” or “how many” along a few reductive axes. Qualitative research can tell you about what, how, and why in rich detail that reflects the actual complexities of real human situations.

Social scientists have long realized that human behaviors are too complex and subject to too many variables to rely solely on quantitative data to understand them. Design and usability practitioners, borrowing techniques from anthropology and other social sciences, have developed many qualitative methods for gathering useful data on user behaviors to a more pragmatic end: to help create products that better serve user needs.

The value of qualitative research

Qualitative research helps us understand the domain, context, and constraints of a product in different, more useful ways than quantitative research does. It also helps us identify patterns of behavior among users and potential users of a product much more quickly and easily than would be possible with quantitative approaches. In particular, qualitative research helps us understand:

  • Behaviors, attitudes, and aptitudes of potential product users

  • Technical, business, and environmental contexts — the domain — of the product to be designed

  • Vocabulary and other social aspects of the domain in question

  • How existing products are used

Qualitative research can also help the progress of design projects by:

  • Providing credibility and authority to the design team, because design decisions can be traced to research results

  • Uniting the team with a common understanding of domain issues and user concerns

  • Empowering management to make more informed decisions about product design issues that would otherwise be based on guesswork or personal preference

It’s our experience that, compared with quantitative methods, qualitative methods tend to be faster, less expensive, and more likely to provide useful answers to important questions that lead to superior design:

  • How does the product fit into the broader context of people’s lives?

  • What goals motivate people to use the product, and what basic tasks help people accomplish these goals?

  • What experiences do people find compelling? How do these relate to the product being designed?

  • What problems do people encounter with their current ways of doing things?

The value of qualitative studies is not limited to helping support the design process. In our experience, spending the time to understand the user population as human beings can provide valuable business insights that are not revealed through traditional market research.

In one particularly illustrative example, we were asked by a client to perform a user study for an entry-level consumer video-editing product for Windows users. An established developer of video-editing and -authoring software, the client had used traditional market research techniques to identify a significant business opportunity in developing a product for people who owned a digital video camera and a computer but hadn’t connected the two yet.

In the field, we conducted interviews with a dozen users in the target market. Our first discovery was not surprising — that the people who did the most taping and had the strongest desire to share edited versions of their videos were parents. The second discovery, however, was quite startling. Of the 12 people whose homes we visited, only one person had successfully connected his video camera to his computer, and he had relied on the IT guy at work to set it up for him. One of the necessary preconditions of the success of the product was that people could actually get video onto their computers to edit, but at the time it was extremely difficult to get a FireWire or video capture card functioning properly on an Intel-based PC.

As a result of four days of research, we were able to help our client make the decision to put the product on hold, likely saving them a considerable investment.

Types of qualitative research

Social science and usability texts are full of methods and techniques for conducting qualitative research, and readers are encouraged to explore this literature. In this chapter, we will focus specifically on techniques that have been proven effective in our practice over the last decade, occasionally drawing attention to similar techniques practiced in the design and usability fields at large. We will also try to avoid getting bogged down in theory, and instead will present these techniques from a pragmatic perspective. The qualitative research activities we have found to be most useful in our practice are:

  • Stakeholder interviews

  • Subject matter expert (SME) interviews

  • User and customer interviews

  • User observation/ethnographic field studies

  • Literature review

  • Product/prototype and competitive audits

Stakeholder interviews

Research for any new product design should start by understanding the business and technical context surrounding the product. In almost all cases, the reason a product is being designed (or redesigned) is to achieve one or several specific business outcomes (most commonly, to make money). It is the designers’ obligation to develop solutions without ever losing sight of these business goals, and it is therefore critical that the design team begin its work by understanding the opportunities and constraints that are behind the design brief.

As Donald Schön so aptly puts it, “design is a conversation with materials.”[1] This means that for a designer to craft an appropriate solution, he must understand the capabilities and limitations of the “materials” that will be used to construct the product, whether they be lines of code or extruded plastic.

Generally speaking, a stakeholder is anyone with authority and/or responsibility for the product being designed. More specifically, stakeholders are key members of the organization commissioning the design work, and typically include executives, managers, and representative contributors from development, sales, product management, marketing, customer support, design, and usability. They may also include similar people from other organizations in business partnership with the commissioning organization.

Interviews with stakeholders should occur before any user research begins because these discussions often inform how user research is conducted. Also, it is usually most effective to interview each stakeholder in isolation, rather than in a larger, cross-departmental group. A one-on-one setting promotes candor on the part of the stakeholder, and ensures that individual views are not lost in a crowd. (One of the most interesting things that can be discovered in such interviews is the extent to which everyone in a product team shares — or doesn’t share — a common vision.) Interviews need not last longer than about an hour, though follow-up meetings may be called for if a particular stakeholder is identified as an exceptionally valuable source of information.

The type of information that is important to gather from stakeholders includes:

  • Preliminary product vision—As in the fable of the blind men and the elephant, you may find that each business department has a slightly different and slightly incomplete perspective on the product to be designed. Part of the design approach must therefore involve harmonizing these perspectives with those of users and customers.

  • Budget and schedule—Discussions on this topic often provide a reality check on the scope of the design effort and provide a decision point for management if user research indicates a greater (or lesser) scope is required.

  • Technical constraints and opportunities—Another important determinant of design scope is a firm understanding of what is technically feasible given budget, time, and technology constraints. It is also often the case that a product is being developed to capitalize on a new technology. Understanding the opportunities underlying this technology can help shape the product’s direction.

  • Business drivers—It is important for the design team to understand what the business is trying to accomplish. This again leads to a decision point, should user research indicate a conflict between business and user needs. The design must, as much as possible, create a win-win situation for users, customers, and providers of the product.

  • Stakeholders’ perceptions of the user—Stakeholders who have relationships with users (such as customer support representatives) may have important insights on users that will help you to formulate your user research plan. You may also find that there are significant disconnects between some stakeholders’ perceptions of their users and what you discover in your research. This information can become an important discussion point with management later in the process.

Understanding these issues and their impact on design solutions helps you as a designer to better develop a successful product. Regardless of how desirable your designs are to customers and users, without considering the viability and feasibility of the proposed solution there is no chance that the product will thrive.

Discussing these topics is also important to developing a common language and understanding among the design team, management, and engineering teams. As a designer, your job is to develop a vision that the entire team believes in. Without taking the time to understand everyone’s perspective, it is unlikely that they will feel that proposed solutions reflect their priorities. Because these people have the responsibility and authority to deliver the product to the real world, they are guaranteed to have important knowledge and opinions. If you don’t ask for their input upfront, it is likely to be forced upon you later, often in the form of a critique of your proposed solutions.

Subject matter expert (SME) interviews

Early in a design project, it is often invaluable to identify and meet with several subject matter experts (SMEs) — experts on the domain within which the product will operate. Many SMEs were users of the product or its predecessors at one time and may now be trainers, managers, or consultants. Often they are experts hired by stakeholders, rather than stakeholders themselves. Similar to stakeholders, SMEs can provide valuable perspectives on a product and its users, but designers should be careful to recognize that SMEs represent a somewhat skewed perspective. Some points to consider about using SMEs are:

  • SMEs are often expert users—Their long experience with a product or its domain means that they may have grown accustomed to current interactions. They may also lean towards expert controls rather than interactions designed for perpetual intermediates. SMEs are often not current users of the product and may have more of a management perspective.

  • SMEs are knowledgeable, but they aren’t designers—They may have many ideas on how to improve a product. Some of these may be valid and valuable, but the most useful pieces of information to glean from these suggestions are the causative problems that lead to their proposed solutions. As with users, when you encounter a proposed solution, ask “how would that help you or the user?”

  • SMEs are necessary in complex or specialized domains—If you are designing for a technical domain such as medical, scientific, or financial services, you will likely need some guidance from SMEs, unless you are one yourself. Use SMEs to get information on industry best practices and complex regulations. SME knowledge of user roles and characteristics is critical for planning user research in complex domains.

  • You will want access to SMEs throughout the design process—If your product domain requires use of SMEs, you should be able to bring them in at different stages of the design to help perform reality checks on design details. Make sure that you secure this access in your early interviews.

Customer interviews

It is easy to confuse users with customers. For consumer products, customers are often the same as users, but in corporate or technical domains, users and customers rarely describe the same sets of people. Although both groups should be interviewed, each has its own perspective on the product that needs to be factored quite differently into an eventual design.

Customers of a product are those people who make the decision to purchase it. For consumer products, customers are frequently users of the product, although for products aimed at children or teens, the customers are parents or other adult supervisors. In the case of most enterprise, medical, or technical products, the customer is someone very different from the user — often an executive or IT manager — with distinct goals and needs. It’s important to understand customers and satisfy their goals in order to make a product viable. It is also important to realize that customers seldom actually use the product themselves, and when they do, they use it quite differently from the way their users do.

When interviewing customers, you will want to understand:

  • Their goals in purchasing the product

  • Their frustrations with current solutions

  • Their decision process for purchasing a product of the type you’re designing

  • Their role in installation, maintenance, and management of the product

  • Domain-related issues and vocabulary

Like SMEs, customers may have many opinions about how to improve the design of the product. It is important to analyze these suggestions, as in the case of SMEs, to determine what issues or problems underlie the ideas offered, because better, more integrated solutions may become evident later in the design process.

User interviews

Users of a product should be the main focus of the design effort. They are the people who personally use the product to accomplish a goal (not their managers or support team). If you are redesigning or refining an existing product, it is important to speak to both current users and potential users — that is, people who do not currently use the product but who are good candidates for it because they have needs the product can meet and are in its target market. Interviewing both groups illuminates the effect that experience with the current version of a product may have on how users behave and think.

Information we are interested in learning from users includes:

  • The context of how the product (or analogous system, if no current product exists) fits into their lives or workflow: when, why, and how the product is or will be used

  • Domain knowledge from a user perspective: What do users need to know to do their jobs?

  • Current tasks and activities: both those the current product is required to accomplish and those it doesn’t support

  • Goals and motivations for using the product

  • Mental model: how users think about their jobs and activities, as well as what expectations users have about the product

  • Problems and frustrations with current products (or an analogous system if no current product exists)

User observation

Most people are incapable of accurately assessing their own behaviors,[2] especially when they are removed from the context of their activities. It is also true that out of fear of seeming dumb, incompetent, or impolite, many people may avoid talking about software behaviors that they find problematic or incomprehensible.

It then follows that interviews performed outside the context of the situations the designer hopes to understand will yield less-complete and less-accurate data. You can talk to users about how they think they behave, or you can observe their behavior first-hand. The latter route provides superior results.

Perhaps the most effective technique for gathering qualitative user data combines interviewing and observation, allowing the designers to ask clarifying questions and direct inquiries about situations and behaviors they observe in real time.

Many usability professionals make use of technological aids such as audio or video recorders to capture what users say and do. Interviewers must take care not to make these technologies too obtrusive; otherwise, the users will be distracted and behave differently than they would off-tape. In our practice, we’ve found that a notebook and a camera allow us to capture everything we need without compromising the honest exchange of information. Typically, we won’t bring out the camera until we feel that we’ve established a good rapport with the interview subject, and then we use it to capture things about the environment that are difficult to jot in our notes. However, video, when used with care, can sometimes provide a powerful rhetorical tool for achieving stakeholder buy-in to contentious or surprising research results. Video may also prove useful in situations where note taking is difficult, such as in a moving car.

Literature review

In parallel with stakeholder interviews, the design team should review any literature pertaining to the product or its domain. This can and should include product marketing plans, brand strategy, market research, user surveys, technology specifications and white papers, business and technical journal articles, competitive studies, Web searches for related and competing products and news, usability study results and metrics, and customer support data such as call center statistics.

The design team should collect this literature, use it as a basis for developing questions to ask stakeholders and SMEs, and later use it to supply additional domain knowledge and vocabulary, and to check against compiled user data.

Product and competitive audits

Also in parallel to stakeholder and SME interviews, it is often quite helpful for the design team to examine any existing version or prototype of the product, as well as its chief competitors. Doing so gives the design team a sense of the state of the art, and provides fuel for questions during the interviews. The design team, ideally, should engage in an informal heuristic or expert review of both the current and competitive interfaces, comparing each against interaction and visual design principles (such as those found later in this book). This procedure both familiarizes the team with the strengths and limitations of what is currently available to users, and provides a general idea of the current functional scope of the product.

Ethnographic Interviews: Interviewing and Observing Users

Drawing on years of design research in practice, we believe that a combination of observation and one-on-one interviews is the most effective and efficient tool in a designer’s arsenal for gathering qualitative data about users and their goals. Ethnographic interviewing combines immersive observation with directed interview techniques.

Hugh Beyer and Karen Holtzblatt have pioneered an ethnographic interviewing technique that they call contextual inquiry. Their method has, for good reason, rapidly gained traction in the industry, and provides a sound basis for qualitative user research. It is described in detail in the first four chapters of their book, Contextual Design. Contextual inquiry methods closely parallel the methods described here, but with some subtle and important differences.

Contextual inquiry

Contextual inquiry, according to Beyer and Holtzblatt, is based on a master-apprentice model of learning: observing and asking questions of the user as if she is the master craftsman, and the interviewer the new apprentice. Beyer and Holtzblatt also enumerate four basic principles for engaging in ethnographic interviews:

  • Context—Rather than interviewing the user in a clean white room, it is important to interact with and observe the user in her normal work environment, or whatever physical context is appropriate for the product. Observing users as they perform activities and questioning them in their own environments, filled with the artifacts they use each day, can bring the all-important details of their behaviors to light.

  • Partnership—The interview and observation should take the tone of a collaborative exploration with the user, alternating between observation of work and discussion of its structure and details.

  • Interpretation—Much of the work of the designer is reading between the lines of facts gathered about users’ behaviors, their environment, and what they say. These facts must be taken together as a whole and analyzed by the designer to uncover the design implications. Interviewers must be careful, however, to avoid assumptions based on their own interpretation of the facts without verifying these assumptions with users.

  • Focus—Rather than coming to interviews with a set questionnaire or letting the interview wander aimlessly, the designer needs to subtly direct the interview so as to capture data relevant to design issues.

Improving on contextual inquiry

Contextual inquiry forms a solid theoretical foundation for qualitative research, but as a specific method it has some limitations and inefficiencies. The following process improvements, in our experience, result in a more highly leveraged research phase that better sets the stage for successful design:

  • Shorten the interview process—Contextual inquiry assumes full-day interviews with users. The authors have found that interviews as short as one hour can be sufficient to gather the necessary user data, provided that a sufficient number of interviews (about six well-selected users for each hypothesized role or type) are scheduled. It is much easier and more effective to find a diverse set of users who will consent to an hour with a designer than it is to find users who will agree to spend an entire day.

  • Use smaller design teams—Contextual inquiry assumes a large design team that conducts multiple interviews in parallel, followed by debriefing sessions in which the full team participates. We’ve found that it is more effective to conduct interviews sequentially with the same designers in each interview. This allows the design team to remain small (two or three designers), but even more important, it means that the entire team interacts with all interviewed users directly, allowing the members to most effectively analyze and synthesize the user data.

  • Identify goals first—Contextual inquiry, as described by Beyer and Holtzblatt, feeds a design process that is fundamentally task focused. We propose that ethnographic interviews first identify and prioritize user goals before determining the tasks that relate to these goals.

  • Look beyond business contexts—The vocabulary of contextual inquiry assumes a business product and a corporate environment. Ethnographic interviews are also possible in consumer domains, though the focus of questioning is somewhat different, as we describe later in this chapter.

The remainder of this chapter provides general methods and tips for preparing for and conducting ethnographic interviews.

Preparing for ethnographic interviews

Ethnography is a term borrowed from anthropology, meaning the systematic and immersive study of human cultures. In anthropology, ethnographic researchers spend years living immersed in the cultures they study and record. Ethnographic interviews take the spirit of this type of research and apply it on a micro level. Rather than trying to understand behaviors and social rituals of an entire culture, the goal is to understand the behaviors and rituals of people interacting with individual products.

Identifying candidates

Because the designers must capture an entire range of user behaviors regarding a product, it is critical that they identify an appropriately diverse sample of users and user types when planning a series of interviews. Based on information gleaned from stakeholders, SMEs, and literature reviews, designers need to create a hypothesis that serves as a starting point in determining what sorts of users and potential users to interview.

The persona hypothesis

We label this starting point the persona hypothesis, because it is the first step towards identifying and synthesizing personas, the user archetypes we will discuss in detail in the next chapter. The persona hypothesis should be based on likely behavior patterns and the factors that differentiate these patterns, not purely on demographics. Demographics are often used as screening criteria to select interview subjects for consumer products, but even in this case, they should serve as a proxy for a hypothesized behavior pattern.

The nature of a product’s domain makes a significant difference in how a persona hypothesis is constructed. Business users are often quite different from consumer users in their behavior patterns and motivations, and different techniques are used to build the persona hypothesis in each case.

The persona hypothesis is a first cut at defining the different kinds of users (and sometimes customers) for a product. The hypothesis serves as the basis for initial interview planning; as interviews proceed, new interviews may be required if the data indicates the existence of user types not originally identified.

The persona hypothesis attempts to address, at a high level, these three questions:

  • What different sorts of people might use this product?

  • How might their needs and behaviors vary?

  • What ranges of behavior and types of environments need to be explored?

Roles in business and consumer domains

For business products, roles — common sets of tasks and information needs related to distinct classes of users — provide an important initial organizing principle. For example, for an office phone system, we might find these rough roles:

  • People who make and receive calls from their desks

  • People who travel a lot and need to access the phone system remotely

  • Receptionists who answer the phone for many people

  • People who technically administer the phone system

In business and technical contexts, roles often map roughly to job descriptions, so it is relatively easy to get a reasonable first cut of user types to interview by understanding the kind of jobs held by users (or potential users) of the system.

Unlike business users, consumers don’t have concrete job descriptions, and their use of products may cross multiple contexts. Therefore, it often isn’t meaningful to use roles as an organizing principle for the persona hypothesis for a consumer product. Rather, it is often the case that you will see the most significant patterns emerge from users’ attitudes and aptitudes, as manifest in their behaviors.

Behavioral and demographic variables

In addition to roles, a persona hypothesis should account for variables that differentiate kinds of users according to their needs and behaviors. This is often the most useful way to distinguish between different types of users (and it forms the basis for the persona-creation process described in the next chapter). Although these variables can be difficult to fully anticipate without research, they often become the basis of the persona hypothesis for consumer products. For example, for an online store, we might identify several ranges of behavior concerning shopping:

  • Frequency of shopping (from frequent to infrequent)

  • Desire to shop (from loves to shop to hates to shop)

  • Motivation to shop (from bargain hunting to searching for just the right item)

Although consumer user types can often be roughly defined by the combination of behavioral variables they map to, behavioral variables are also important for identifying types of business and technical users. People within a single business-role definition may have different needs and motivations. Behavioral variables can capture this, although often not until user data has been gathered.
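
To make this idea concrete, here is a minimal sketch (our illustration, not a method prescribed in this chapter) of how a team might record where each interview subject falls along hypothesized behavioral axes, so that clusters of similar subjects, that is, candidate behavior patterns, stand out. The axis names and scores are invented assumptions.

```python
# behavioral_axes.py -- a minimal sketch; subjects and scores are invented.
# Each hypothesized behavioral variable becomes an axis from 0.0 to 1.0,
# and after each interview the team places the subject on every axis.

subjects = {
    "P1": {"shopping frequency": 0.9, "desire to shop": 0.8, "bargain hunting": 0.2},
    "P2": {"shopping frequency": 0.2, "desire to shop": 0.3, "bargain hunting": 0.9},
    "P3": {"shopping frequency": 0.8, "desire to shop": 0.9, "bargain hunting": 0.3},
    "P4": {"shopping frequency": 0.3, "desire to shop": 0.2, "bargain hunting": 0.8},
}

axes = sorted(next(iter(subjects.values())))

for axis in axes:
    ordering = sorted(subjects, key=lambda s: subjects[s][axis])
    line = "  ".join(f"{s}({subjects[s][axis]:.1f})" for s in ordering)
    print(f"{axis:20s} low -> high: {line}")

# Subjects that sit together on most axes (here P1/P3 and P2/P4) suggest a
# shared behavior pattern worth verifying with additional interviews.
```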

Given the difficulty in accurately anticipating behavioral variables before user data is gathered, another helpful approach in building a persona hypothesis is making use of demographic variables. When planning your interviews, you can use market research to identify ages, locations, gender, and incomes of the target markets for the product. Interviewees should be distributed across these demographic ranges in the hope of interviewing a sufficiently diverse group of people to identify the significant behavior patterns.

Domain expertise versus technical expertise

One important type of behavioral distinction is the difference between technical expertise (knowledge of digital technology) and domain expertise (knowledge of a specialized subject area pertaining to a product). Different users will have varying amounts of technical expertise; similarly, some users of a product may be less expert in its domain (for example, accounting knowledge in the case of a general ledger application). Thus, depending on the design target, the product may need to provide domain support as well as technical ease of use. A relatively naive user will likely never be able to use more than a small subset of a domain-specific product’s functions without domain support provided in the interface. If naive users are part of the target market for a domain-specific product, care must be taken to support domain-naive behaviors.

Environmental considerations

A final consideration, especially in the case of business products, is the cultural differences between organizations in which the users are employed. At small companies, for example, workers tend to have a broader set of responsibilities and more interpersonal contact; at huge companies, workers tend to be highly specialized and there are often multiple layers of bureaucracy. Examples of these environmental variables include:

  • Company size (from small to multinational)

  • Company location (North America, Europe, Asia, and so on)

  • Industry/sector (electronics manufacturing, consumer packaged goods, and so on)

  • IT presence (from ad hoc to draconian)

  • Security level (from lax to tight)

Like behavioral variables, these may be difficult to identify without some domain research, because patterns do vary significantly by industry and geographic region.

Putting a plan together

After you have created a persona hypothesis, complete with potential roles and behavioral, demographic, and environmental variables, you then need to create an interview plan that can be communicated to the person in charge of coordinating and scheduling the interviews.

In our practice, we’ve observed that each presumed behavioral pattern requires about a half-dozen interviews to verify or refute. In practice, this means that each role, behavioral variable, demographic variable, and environmental variable identified in the persona hypothesis should be explored in four to six interviews (sometimes more if a domain is particularly complex).

However, these interviews can overlap. If we believe that use of an enterprise product may differ, for example, by geographic location, industry, and company size, then research at a single small electronics manufacturer in Taiwan would allow us to cover several variables at once. By being clever about mapping variables to interviewee-screening profiles, you can keep the number of interviews to a manageable number.
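
As a rough illustration of this mapping, the following sketch (ours, with invented screening data) greedily selects candidate profiles until every variable value in the persona hypothesis is covered at least once; each selected profile would then receive its four to six interviews.

```python
# interview_plan.py -- a minimal sketch of covering hypothesis variables with
# few screening profiles. All variables and candidate profiles are invented.

variables = {
    "region":   {"North America", "Europe", "Asia"},
    "industry": {"electronics", "packaged goods"},
    "size":     {"small", "multinational"},
}

candidates = [
    {"region": "Asia", "industry": "electronics", "size": "small"},
    {"region": "Europe", "industry": "packaged goods", "size": "multinational"},
    {"region": "North America", "industry": "electronics", "size": "multinational"},
    {"region": "North America", "industry": "packaged goods", "size": "small"},
]

def pairs(profile):
    """The (variable, value) pairs a single screening profile covers."""
    return {(var, profile[var]) for var in profile}

needed = {(var, val) for var, vals in variables.items() for val in vals}
plan = []
while needed:
    best = max(candidates, key=lambda c: len(pairs(c) & needed))
    if not pairs(best) & needed:
        break  # remaining values would need profiles not listed above
    plan.append(best)
    needed -= pairs(best)

for profile in plan:
    print(profile)  # each chosen profile still gets four to six interviews
```

With the data above, three profiles (for example, a small electronics firm in Asia) cover all seven variable values, echoing the Taiwan example in the text.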

Conducting ethnographic interviews

After the persona hypothesis has been formulated and an interview plan has been derived from it, you are ready to interview — assuming you get access to interviewees! While formulating the interview plan, designers should work closely with project stakeholders who have access to users. Stakeholder involvement is generally the best way to make interviews happen, especially for business and technical products.

If stakeholders can’t help you get in touch with users, you can contact a market or usability research firm that specializes in finding people for surveys and focus groups. These firms are useful for reaching consumers with diverse demographics. The drawback is that it can be challenging to find interviewees who will permit you to interview them in their homes or places of work.

As a last alternative for consumer products, designers can recruit friends and relatives. This makes it easier to observe the interviewees in a natural environment, but it sharply limits the diversity of demographic and behavioral variables you can cover.

Interview teams and timing

The authors favor a team of two designers per interview, one to drive the interview and take light notes, and the other to take detailed notes (these roles can switch halfway through the interview). One hour per user interviewed is often sufficient, except in the case of highly complex domains such as medical, scientific, and financial services that may require more time to fully understand what the user is trying to accomplish. Be sure to budget travel time between interview sites, especially for consumer interviews in residential neighborhoods, or interviews that involve “shadowing” users as they interact with a (usually mobile) product while moving from place to place. Teams should try to limit interviews to six per day, so that there is adequate time for debriefing and strategizing between interviews, and so that the interviewers do not get fatigued.

Phases of ethnographic interviews

A complete set of ethnographic interviews for a project can be grouped into three distinct, chronological phases. The approach of the interviews in each successive phase is subtly different from the previous one, reflecting the growing knowledge of user behaviors that results from each additional interview. Focus tends to be broad at the start, aimed at gross structural and goal-oriented issues, and narrower for interviews at the end of the cycle, zooming in on specific functions and task-oriented issues.

  • Early interviews are exploratory in nature, and focused on gathering domain knowledge from the point of view of the user. Broad, open-ended questions are common, with a lesser degree of drill-down into details.

  • Middle interviews are where designers begin to see patterns of use and ask open-ended and clarifying questions to help connect the dots. Questions in general are more focused on domain specifics, now that the designers have absorbed the basic rules, structures, and vocabularies of the domain.

  • Later interviews confirm previously observed patterns, further clarifying user roles and behaviors and making fine adjustments to assumptions about task and information needs. Closed-ended questions are used in greater number, tying up loose ends in the data.

After you have an idea of who your actual interviewees will be, it can be useful to work with stakeholders to schedule the individuals most appropriate for each phase in the interview cycle. For example, in a complex, technical domain it is often a good idea to perform early interviews with the more patient and articulate interview subjects. In some cases, you may also want to loop back and interview such particularly knowledgeable and articulate subjects again at the end of the interview cycle to address topics you weren’t yet aware of during the initial interviews.

Basic methods

The basic methods of ethnographic interviewing are simple, straightforward, and very low tech. Although the nuances of interviewing subjects take some time to master, any practitioner who follows the suggestions below should be rewarded with a wealth of useful qualitative data:

  • Interview where the interaction happens

  • Avoid a fixed set of questions

  • Focus on goals first, tasks second

  • Avoid making the user a designer

  • Avoid discussions of technology

  • Encourage storytelling

  • Ask for a show and tell

  • Avoid leading questions

We describe each of these methods in more detail in the following sections.

Interview where the interaction happens

Following the first principle of contextual inquiry, it is of critical importance that subjects be interviewed in the places where they actually use the products. Not only does this give the interviewers the opportunity to witness the product being used, but it also gives the interview team access to the environment in which the interaction occurs. This can give tremendous insight into product constraints and user needs and goals.

Observe the environment closely: It is likely to be crawling with clues about tasks the interviewee might not have mentioned. Notice, for example, the kind of information they need (papers on desks or adhesive notes on screen borders), inadequate systems (cheat sheets and user manuals), the frequency and priority of tasks (inbox and outbox), and the kind of workflows they follow (memos, charts, calendars). Don’t snoop without permission, but if you see something that looks interesting, ask your interviewee to discuss it.

Avoid a fixed set of questions

If you approach ethnographic interviews with a fixed questionnaire, you not only run the risk of alienating the interview subject but can also miss out on a wealth of valuable user data. The entire premise of ethnographic interviews (and contextual inquiry) is that we as interviewers don’t know enough about the domain to presuppose the questions that need asking: We must learn what is important from the people we talk to. That said, it’s certainly useful to have types of questions in mind. Depending on the domain, it may also be useful to have a standardized set of topics to cover in the course of the interview. This list of topics may evolve over the course of your interviews, but it will help you gather enough detail from each interview to recognize the significant behavior patterns.

Here are some goal-oriented questions to consider:

  • Goals—What makes a good day? A bad day?

  • Opportunity—What activities currently waste your time?

  • Priorities—What is most important to you?

  • Information—What helps you make decisions?

Another useful type of question is the system-oriented question:

  • Function—What are the most common things you do with the product?

  • Frequency—What parts of the product do you use most?

  • Preference—What are your favorite aspects of the product? What drives you crazy?

  • Failure—How do you work around problems?

  • Expertise—What shortcuts do you employ?

For business products, workflow-oriented questions can be helpful:

  • Process—What did you do when you first came in today? And after that?

  • Occurrence and recurrence—How often do you do this? What things do you do weekly or monthly, but not every day?

  • Exception—What constitutes a typical day? What would be an unusual event?

To better understand user motivations, you can employ attitude-oriented questions:

  • Aspiration—What do you see yourself doing five years from now?

  • Avoidance—What would you prefer not to do? What do you procrastinate on?

  • Motivation—What do you enjoy most about your job (or lifestyle)? What do you always tackle first?

Focus on goals first, tasks second

Unlike contextual inquiry and the majority of other qualitative research methods, the first priority of ethnographic interviewing is understanding the why of users — what motivates the behaviors of individuals in different roles, and what they ultimately hope to accomplish — not the what of the tasks they perform. Understanding the tasks is important, and the tasks must be diligently recorded. But these tasks will ultimately be restructured to better match user goals in the final design.

Avoid making the user a designer

Guide the interviewee towards examining problems and away from expressing solutions. Most of the time, those solutions reflect the interview subject’s personal priorities, and while they sound good to him, they tend to be shortsighted and idiosyncratic, lacking the balance and refinement that an interaction designer can bring to a solution based upon adequate research and years of experience. That said, a proposed design solution can be a useful jumping-off point for discussing a user’s goals and the problems he encounters with current systems. If a user blurts out an interesting idea, ask “What problem would that solve for you?” or “Why would that be a good solution?”

Avoid discussions of technology

Just as you don’t want to treat the user as a designer, you also don’t want to treat him as a programmer or engineer. Discussion of technology is meaningless without first understanding the purpose underlying any technical decisions. In the case of technical or scientific products, where technology is always an issue, distinguish between domain-related technology and product-related technology, and steer away from the latter. If an interview subject is particularly insistent on talking about how the product should be implemented, bring the subject back to his goals and motivations by asking “How would that help you?”

Encourage storytelling

Far more useful than asking users for design advice is encouraging them to tell specific stories about their experiences with a product (whether an old version of the one you’re redesigning, or an analogous product or process): how they use it, what they think of it, who else they interact with when using it, where they go with it, and so forth. Detailed stories of this kind are usually the best way to understand how users relate to and interact with products. Encourage stories that deal with typical cases and also more exceptional ones.

Ask for a show and tell

After you have a good idea of the flow and structure of a user’s activities and interactions and you have exhausted other questions, it is often useful to ask the interviewee for a show and tell or grand tour of artifacts related to the design problem. These can be domain-related artifacts, software interfaces, paper systems, tours of the work environment, or ideally all of the above. Be careful not only to record the artifacts themselves (digital or video cameras are very handy at this stage) but also to pay attention to how the interviewee describes them. Be sure to ask plenty of clarifying questions as well.

Avoid leading questions

One important thing to avoid in interviews is the use of leading questions. Just as in a courtroom, where lawyers can, by virtue of their authority, bias witnesses by suggesting answers to them, designers can inadvertently bias interview subjects by implicitly (or explicitly) suggesting solutions or opinions about behaviors. Examples of leading questions include:

  • Would feature X help you?

  • You like X, don’t you?

  • Do you think you’d use X if it were available?

After the interviews

After each interview, teams should compare notes and discuss any particularly interesting trends observed or specific points brought up in the most recent interview. If they have time, they should also look back at earlier notes to see whether open questions from previous interviews and research have now been answered. This information should be used to strategize about the approach to take in subsequent interviews.

After the interview process is finished, it is useful to once again make a pass through all the notes, marking or highlighting trends and patterns in the data. This is very useful for the next step of creating personas from the cumulative research. If it is helpful, the team can create a binder of the notes, review any videotapes, and print out artifact images to place in the binder or on a public surface, such as a wall, where they are all visible simultaneously. This will be useful in later design phases.

Other Types of Research

This chapter has focused on qualitative research aimed at gathering user data that will later be used to construct robust user and domain models that form the key tools in the Goal-Directed Design methodology described in the next chapter. A wide variety of other forms of research are used by design and usability professionals, ranging from detailed task analysis activities to focus groups and usability tests. While many of these activities have the potential to contribute to the creation of useful and desirable products, we have found the qualitative approach described in this chapter to provide the most value to digital product design. Put simply, the qualitative approach helps answer questions about the product at both the big-picture and functional-detail level with a relatively small amount of effort and expense. No other research technique can claim this.

Mike Kuniavsky’s book Observing the User Experience is an excellent resource that describes a wide range of user research methods for use at many points in the design and development process. In the remainder of this chapter, we discuss just a few of the more prominent research methods and how they fit into the overall development effort.

Focus groups

Marketing organizations are particularly fond of gathering user data via focus groups, in which representative users, usually chosen to match previously identified demographic segments of the target market, are gathered together in a room and asked a structured set of questions and provided a structured set of choices. Often, the meeting is recorded on audio or video media for later reference. Focus groups are a standard technique in traditional product marketing. They are useful for gauging initial reactions to the form of a product, its visual appearance, or industrial design. Focus groups can also gather reactions to a product that the respondents have been using for some time.

Although focus groups may appear to provide the requisite user contact, the method is in many ways not appropriate as a design tool. Focus groups excel at eliciting information about products that people own or are willing (or unwilling) to purchase but are weak at gathering data about what people actually do with those products, or how and why they do it. Also, because they are a group activity, focus groups tend to drive to consensus. The majority or loudest opinion often becomes the group opinion. This is anathema to the design process, where designers must understand all the different patterns of behavior a product must address. Focus groups tend to stifle exactly the diversity of behavior and opinion that designers must accommodate.

Market demographics and market segments

The marketing profession has taken much of the guesswork out of determining what motivates people to buy. One of the most powerful tools for doing so is market segmentation, which typically uses data from focus groups and market surveys to group potential customers by demographic criteria (such as age, gender, educational level, and home zip code) to determine what types of consumers will be most receptive to a particular product or marketing message. More sophisticated consumer data also include psychographics and behavioral variables, including attitudes, lifestyle, values, ideology, risk aversion, and decision-making patterns. Classification systems such as SRI’s VALS segmentation and Jonathan Robbin’s geodemographic PRIZM clusters can add greater clarity to the data by predicting consumers’ purchasing power, motivation, self-orientation, and resources.
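
For readers who want a feel for the mechanics, the following is a deliberately simplified sketch of the clustering idea behind segmentation. It is not VALS, PRIZM, or any real segmentation product, the survey data are invented, and it requires numpy and scikit-learn.

```python
# segment_sketch.py -- a toy illustration of grouping survey respondents by
# numeric demographic/behavioral features. All data below are invented.

import numpy as np
from sklearn.cluster import KMeans

# columns: age, household income ($k), purchases per year -- assumed survey data
respondents = np.array([
    [24, 38, 14], [27, 42, 18], [52, 120, 3],
    [49, 110, 4], [35, 75, 9], [38, 70, 8],
])

# standardize features so income does not dominate the distance metric
scaled = (respondents - respondents.mean(axis=0)) / respondents.std(axis=0)

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
for label in sorted(set(segments)):
    members = np.where(segments == label)[0].tolist()
    print(f"segment {label}: respondents {members}")
```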

These market-modeling techniques are able to accurately forecast marketplace acceptance of products and services. They are an invaluable tool in assessing the viability of a product. They can also be powerful tools for convincing executives to build a product. After all, if you know X people might buy a product or service for Y dollars, it is easy to evaluate the potential return on investment.

However, understanding whether somebody wants to buy something is not the same thing as actually defining the product. Market segmentation is a great tool for identifying and quantifying a market opportunity, but an ineffective tool for defining a product that will capitalize on that opportunity.

It turns out, however, that data gathered via market research and that gathered via qualitative user research complement each other quite well. Because market research can help identify an opportunity, it is often the necessary starting point for a design initiative. Without assessing the opportunity, you will be hard pressed to convince a businessperson to fund the design. Also, as already discussed, ethnographic interviewers should use market research to help them select interview targets, and finally, as the video-editing story earlier in this chapter illustrates, qualitative research can shed critical light on the results of quantitative studies. We will discuss the differences between segmentation models and user models in more detail in Chapter 5.

Usability and user testing

Usability testing (also known, somewhat unfortunately, as “user testing”) is a collection of techniques used to measure characteristics of a user’s interaction with a product, usually with the goal of assessing the usability of that product. Typically, usability testing is focused on measuring how well users can complete specific, standardized tasks, as well as what problems they encounter in doing so. Results often reveal areas where users have problems understanding and utilizing the product, as well as places where users are more likely to be successful.

Usability testing requires a fairly complete and coherent design artifact to test against. Whether you are testing production software, a clickable prototype, or even a paper prototype, the point of the test is to validate a product design. This means that the appropriate place for usability testing is quite late in the design cycle, after there is a coherent concept and sufficient detail to generate such prototypes. We discuss evaluative usability testing as part of design refinement in Chapter 7.

A case could certainly be made for the appropriateness of usability testing at the beginning of a redesign effort, and the technique is certainly capable of finding opportunities for improvement in such a project. However, we find that we are able to assess major inadequacies of a product through our qualitative studies, and if the budget permits usability testing only once in a product design initiative, we find much more value in performing the tests after we have a candidate solution, as a means of testing the specific elements of the new design.

Because the findings of user testing are generally measurable and quantitative, usability research is especially useful in comparing specific design variants to choose the most effective solution. Customer feedback gathered from usability testing is most useful when you need to validate or refine particular interaction mechanisms or the form and expression of specific design elements.

Usability testing is especially effective at determining:

  • Naming—Do section/button labels make sense? Do certain words resonate better than others do?

  • Organization—Is information grouped into meaningful categories? Are items located in the places customers might look for them?

  • First-time use and discoverability—Are common items easy for new users to find? Are instructions clear? Are instructions necessary?

  • Effectiveness—Can customers efficiently complete specific tasks? Are they making missteps? Where? How often?
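
As a small illustration of the effectiveness measurements above, the following sketch (with invented results) summarizes completion rate and median time to success for two design variants of the same standardized task.

```python
# ab_task_metrics.py -- a minimal sketch of the quantitative summary usability
# testing yields when comparing two design variants. All data are invented.

from statistics import median

# (completed_task, seconds_to_complete_or_abandon) per participant
results = {
    "Variant A": [(True, 94), (True, 61), (False, 180), (True, 75), (False, 180)],
    "Variant B": [(True, 52), (True, 58), (True, 66), (False, 180), (True, 49)],
}

for variant, runs in results.items():
    success_times = [t for ok, t in runs if ok]
    rate = len(success_times) / len(runs)
    print(f"{variant}: {rate:.0%} completion, "
          f"median time to success {median(success_times)}s (n={len(runs)})")

# With samples this small, treat differences as directional evidence to pair
# with observed missteps, not as statistical proof.
```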

As suggested previously, it is also worth noting that usability testing is predominantly focused on assessing the first-time use of a product. It is often quite difficult (and always laborious) to measure how effective a solution is on its 50th use — in other words, for the most common target: the perpetual intermediate user. This is quite a conundrum when one is optimizing a design for intermediate or expert users. One technique for accomplishing this is the use of a diary study, in which subjects keep diaries detailing their interactions with the product. Again, Mike Kuniavsky provides a good explanation of this technique in Observing the User Experience.

Finally, when usability testing, be sure that what you are testing is actually measurable, that the test is administered correctly, that the results will be useful in correcting design issues, and that the resources necessary to fix the problems observed in a usability study are available. Jakob Nielsen’s Usability Engineering is the classic volume on usability and provides excellent guidance on the subject.

Card sorting

Popularized by information architects, card sorting is a technique to understand how users organize information and concepts. While there are a number of variations on the technique, it is typically performed by asking users to sort a deck of cards, each containing a piece of functionality or information related to the product or Web site. The tricky part is analyzing the results, either by looking for trends or using statistical analysis to uncover patterns and correlations.
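
One common statistical treatment is hierarchical clustering of a card-by-card co-occurrence matrix, that is, how often participants grouped each pair of cards together. The sketch below illustrates the idea with invented data; the card names are hypothetical, and it requires numpy and scipy.

```python
# card_sort_clustering.py -- a minimal sketch of clustering card-sort results.
# The cards and co-occurrence fractions below are illustrative assumptions.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

cards = ["order status", "returns", "wish list", "gift cards", "track package"]

# cooccur[i][j] = fraction of participants who grouped cards i and j together
cooccur = np.array([
    [1.0, 0.6, 0.1, 0.2, 0.9],
    [0.6, 1.0, 0.2, 0.3, 0.5],
    [0.1, 0.2, 1.0, 0.7, 0.1],
    [0.2, 0.3, 0.7, 1.0, 0.2],
    [0.9, 0.5, 0.1, 0.2, 1.0],
])

distance = 1.0 - cooccur          # frequently co-grouped cards are "close"
np.fill_diagonal(distance, 0.0)
tree = linkage(squareform(distance), method="average")
labels = fcluster(tree, t=0.5, criterion="distance")

for cluster in sorted(set(labels)):
    members = [card for card, label in zip(cards, labels) if label == cluster]
    print(f"cluster {cluster}: {members}")
```

With this data, “order status,” “returns,” and “track package” fall into one cluster and “wish list” and “gift cards” into another, the kind of grouping an information architect would then sanity-check against the debrief interviews.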

While card sorting can undoubtedly be a valuable tool for uncovering one aspect of a user’s mental model, the technique assumes that subjects have refined organizational skills and that the way they sort a group of abstract topics will correlate with the way they will want to use your product. This is clearly not always the case. One way to overcome these potential challenges is to ask users to sequence the cards based upon the completion of tasks that the product is being designed to support. Another way to enhance the value of a card-sort study is to debrief subjects afterwards to understand any organizational principles they employed in their sort (again, attempting to understand their mental models).

Ultimately, we believe that properly conducted open-ended interviews are quite capable of exploring these aspects of the user’s mental model. By asking the right questions and paying close attention to how a subject explains his activities and the domain, you can decipher how he mentally associates different bits of functionality and information.

Task analysis

Task analysis refers to a number of techniques that involve using either questionnaires or open-ended interviews to develop a detailed understanding of how people currently perform specific tasks. Of concern to such a study are:

  • Why the user is performing the task (that is, the underlying goal)

  • Frequency and importance of the task

  • Cues — what initiates or prompts the execution of the task

  • Dependencies — what must be in place to perform the task, as well as what is dependent on the completion of the task

  • People who are involved and their roles and responsibilities

  • Specific actions that are performed

  • Decisions that are made

  • Information that is used to support decisions

  • What goes wrong — errors and exception cases

  • How errors and exceptions are corrected

Once the questionnaires are compiled or the interviews are completed, tasks are formally decomposed and analyzed, typically into a flow chart or similar diagram that communicates the relationships between actions and often the relationships between people and processes.
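
As an example of what that decomposition can look like in practice, the sketch below (our illustration; the task steps are invented) records a decomposed task as a small directed graph and emits Graphviz DOT text, which the `dot` tool can render as a flow chart.

```python
# task_flow.py -- a minimal sketch of turning a decomposed task into a
# flow-chart description. The steps are invented, not from a real study.

task = "Process an expense report"
steps = [
    ("receive report", "check receipts"),
    ("check receipts", "approve"),
    ("check receipts", "return for correction"),   # exception path
    ("return for correction", "receive report"),
    ("approve", "forward to payroll"),
]

print(f'digraph "{task}" {{')
print("  rankdir=LR;")
for src, dst in steps:
    print(f'  "{src}" -> "{dst}";')
print("}")

# Pipe the output to `dot -Tpng` to render; annotate edges with cues,
# decisions, and the people involved as the analysis fills in.
```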

We’ve found that this type of inquiry should be incorporated into ethnographic user interviews. Further, as we’ll discuss in the next chapter, the analysis activities are a useful part of our modeling activities. It should be noted, though, that while task analysis is a critical way of understanding how users currently do something, as well as a way of identifying pain points and opportunities for improvement, we want to reiterate the importance of focusing first and foremost on users’ goals. The way people do things today is often merely a product of the obsolete systems and organizations they are forced to interact with, and it typically bears little resemblance to the way they would like to do things or the way they would be most effective.

User research is the critical foundation upon which your designs are built. Take the time to plan your user research and match the appropriate technique to the appropriate place in your development cycle. Your product will benefit, and you’ll avoid wasting time and resources. Putting a product to the test in a lab to see whether it passes or fails may provide a lot of data, but not necessarily a lot of value. Using ethnographic interviews at the beginning of the process allows you, as a designer, to truly understand your users, their needs, and their motivations. Once you have a solid design concept based on qualitative user research and the models that research feeds, your usability testing will become an even more efficient tool for judging the effectiveness of design choices you have made. Qualitative research allows you to do the heavy lifting up front in the process.
