© Regine M. Gilbert 2019
R. M. Gilbert, Inclusive Design for a Digital World, https://doi.org/10.1007/978-1-4842-5016-7_6

6. Inclusive Design Research

Regine M. Gilbert1 
(1)
New York, NY, USA
 

Inclusive design is good business.

—Former IBM president Thomas John Watson Jr.

The case for doing research and incorporating accessibility into your plans early is simple: it will benefit your organization. Not only can your organization benefit from including people with disabilities when testing a product; it can also benefit from making them part of the process from the very start. Involving people with disabilities in product development and advertising can help businesses access a market worth billions of dollars.1

Ultimately, the decision on how to move forward with the project will be the responsibility of the stakeholders involved. Stakeholders may include executive leadership, managers, product owners, and so on.

In building digital experiences, we have to remember that human beings are the ones who will be using our products. It is easy to get wrapped up in our products and what they can do, and we sometimes forget that.

Everyone is capable of doing research, no matter what their role is on the team. One of the balancing acts teams have to manage is weighing business needs against user goals. Research can help strike that balance by asking questions and seeking answers, and research plans can help facilitate the process.

One of the first questions I recommend asking is, “What problem are we solving?” If this is not something you can answer, you may want to reconsider what you are working on. As creators in this digital age, we should be making things better for people and helping them solve problems. After asking what problem you are solving, move forward with a research plan. Research plans can vary depending on what is needed for a project.

In Table 6-1, we can see an example of what a research plan could include and some questions you might ask. Incorporating accessibility into the research plan builds in forethought, rather than leaving accessibility as an afterthought once a product is developed.
Table 6-1

Example of a research plan—pick and choose what works for you and your product's needs

Research plan purpose

What is the purpose of the research? What goals are you trying to achieve?

Context of use of the product

Is this an existing product in which the uses are known? Is this a new product? When and where will people be using the product?

Priority areas of research

Based on your goals, what are the priority areas for your research?

Methodologies

What methodologies will you use for your research? Personas? Interviews? Surveys? Usability Testing? Others?

Premortem (what could go wrong)

Make a list of all the things that could go wrong during research and make a plan to mitigate those risks

Timeframe

How much time do you have for research and synthesis?

Research questions

What type of questions could you ask that will help you reach the goal of your research?

Goals

What is the primary goal of the research? How does this align with business goals?

Participants

Who will your participants be? How will you include people with disabilities? How will you recruit participants?

Script

If the users are testing a product with a goal in mind, what type of script will you use?

Ethics

What are the ethical values of your organization?

Recruiting People with Disabilities

Your research plan should include how you intend to recruit participants. Getting people with disabilities involved in the process requires reaching out to local communities or using services that can help you find participants. The people you recruit should be people who use your product and are part of your target market. Think about the goal of the project when recruiting participants.

Focus the Recruiting Strategy

If you work with an external recruiter, ask them if they have experience recruiting people with disabilities; some do. If you are recruiting internally (without an external recruiter), you may need to reach out to organizations that have access to people with disabilities. For example, if you need to recruit participants with visual disabilities in the United States, you should contact a local chapter of the National Federation of the Blind (https://nfb.org/state-and-local-organizations) or a local training center such as the Carroll Center for the Blind in Massachusetts (http://carroll.org/). If you use social media to advertise your study, a good approach is to use the hashtag #a11y (it stands for accessibility—there are 11 letters between the “a” and “y”) in your post.2 Other organizations for people with disabilities are listed in the appendix.

Remember that you want to gather enough information to make informed decisions.

Exclusion

Design is much more likely to be the source of exclusion than inclusion. When we design for other people, our own biases and preferences often lead the way. When we create a solution that we, ourselves, can see, touch, understand, or hear, it tends to work well for people with similar circumstances or preferences to us. It also ends up excluding many more people.

This is especially true with respect to disability. The World Health Organization defines disability as a mismatched interaction between the features of a person’s body and the features of the environment in which they live. This is also known as the social definition, or model, of disability.3

From the start, you can think about who you are excluding. Far too many times, people with disabilities are left out of research. The research plan and the time that you may have to plan will vary depending on the size of the project. The types of research you will want to conduct will vary in relation to what the goals of the research are.

Discovering how and why people behave as they do and what opportunities that presents for your business or organization will open the way to more innovative and appropriate design solutions than asking how they feel or merely tweaking your current design based on analytics.4

There are several types of methodologies you can use when researching your products. The great thing about research is that you can apply it to any platform. Christian Rohrer has a great layout of the landscape of research methodologies listing out attitudinal vs. behavioral and qualitative vs. quantitative (see Figure 6-1). We’ll discuss the differences between the types of research and what might be best depending on the context of what you are researching.
Figure 6-1

A Landscape of User Research Methods by Christian Rohrer

20 UX Research Methods in Brief

Here’s a short description of the user research methods shown in the preceding chart:
  • Usability Lab Studies : Participants are brought into a lab, one-on-one with a researcher, and given a set of scenarios that lead to tasks and usage of specific interest within a product or service.

  • Ethnographic Field Studies: Researchers meet with and study participants in their natural environment, where they would most likely encounter the product or service in question.

  • Participatory Design : Participants are given design elements or creative materials in order to construct their ideal experience in a concrete way that expresses what matters to them most and why.

  • Focus Groups: Groups of 3–12 participants are led through a discussion about a set of topics, giving verbal and written feedback through discussion and exercises.

  • Interviews : A researcher meets with participants one-on-one to discuss in depth what the participant thinks about the topic in question.

  • Eyetracking : An eyetracking device is configured to precisely measure where participants look as they perform tasks or interact naturally with web sites, applications, physical products, or environments.

  • Usability Benchmarking: Tightly scripted usability studies are performed with several participants, using precise and predetermined measures of performance.

  • Moderated Remote Usability Studies: Usability studies conducted remotely with the use of tools such as screen-sharing software and remote-control capabilities.

  • Unmoderated Remote Panel Studies : A panel of trained participants who have video recording and data collection software installed on their own personal devices uses a web site or product while thinking aloud, having their experience recorded for immediate playback and analysis by the researcher or company.

  • Concept Testing : A researcher shares an approximation of a product or service that captures the key essence (the value proposition) of a new concept or product in order to determine if it meets the needs of the target audience; it can be done one-on-one or with larger numbers of participants and either in person or online.

  • Diary/Camera Studies: Participants are given a mechanism (diary or camera) to record and describe aspects of their lives that are relevant to a product or service, or simply core to the target audience; diary studies are typically longitudinal and can only be done for data that is easily recorded by participants.

  • Customer Feedback : Open-ended and/or close-ended information provided by a self-selected sample of users, often through a feedback link, button, form, or email.

  • Desirability Studies : Participants are offered different visual-design alternatives and are expected to associate each alternative with a set of attributes selected from a closed list; these studies can be both qualitative and quantitative.

  • Card Sorting : A quantitative or qualitative method that asks users to organize items into groups and assign categories to each group. This method helps create or refine the information architecture of a site by exposing users’ mental models.

  • Clickstream Analysis: Analyzing the record of screens or pages that users click and see, as they use a site or software product; it requires the site to be instrumented properly or the application to have telemetry data collection enabled.

  • A/B Testing (also known as “multivariate testing,” “live testing,” or “bucket testing”): A method of scientifically testing different designs on a site by randomly assigning groups of users to interact with each of the different designs and measuring the effect of these assignments on user behavior.

  • Unmoderated UX Studies: A quantitative or qualitative and automated method that uses a specialized research tool to capture participant behaviors (through software installed on participant computers/browsers) and attitudes (through embedded survey questions), usually by giving participants goals or scenarios to accomplish with a site or prototype.

  • True Intent Studies: A method that asks random site visitors what their goal or intention is upon entering the site, measures their subsequent behavior, and asks whether they were successful in achieving their goal upon exiting the site.

  • Intercept Surveys : A survey that is triggered during the use of a site or application.

  • Email Surveys: A survey in which participants are recruited from an email message.5

Generative Research vs. Evaluative Research

In generative research, you may be conducting interviews, surveys, field observations, and the like. You are working to answer the question of what problem you are solving, gathering data you can use to make informed decisions about what to do next.

Evaluative research focuses on how well your product serves its purpose and often involves usability testing. Usability testing is detailed in Chapter 8.

REAL-LIFE CASE STUDY: DIGITAL SERVICES GEORGIA

Scope: Accessibility, Visual Design

Timeline: June 2015–January 2016

Members: Nikhil Deshpande, Kendra Skeene, Jenna Tollerson, Jasmyne Dove

More and more, people turn to the Internet for critical information. Today, important government information and services—unemployment benefits, veterans’ services, tax information, and so much more—are available online.

But can everyone access it?

More than 8% of Georgia residents under the age of 65 have some form of disability, and the percentage only increases when you look at older groups. At the same time, people over 65 are the fastest growing group of Internet users. This is often the population that benefits most from online services and information but only when they can access it!

Case Study: How We Made the Digital Services Georgia Platform More Accessible

So what did we actually do to make the platform more accessible? Advised by AMAC and the ADA’s office, we made the following changes to the platform code:
  • Increased Color Contrast

    Text needs to contrast with its background enough for users with low vision and color blindness to be able to read it. Previously, some of the text in our themes had too low of contrast to pass WCAG 2.0 AA standards. We used WebAIM Contrast Checker to narrow down our theme colors and select only accessible combinations for text and background colors.

  • Better Font Legibility

    We knew from the beginning that certain fonts are easier to read than others. When we took a second look at our theme fonts from an accessibility-focused perspective, we realized that some of our header and navigation fonts could be easier to read.

  • Improved Semantic Markup

    Many accessibility needs can be addressed simply by using semantic markup—that is, HTML markup that describes what content is and does, instead of relying on styled <div> or <span> tags for everything. By adjusting our use of heading tags for all heading levels, removing heading tags from non-heading content, and adding <label> tags to the search form, we made our web sites more accessible and more SEO-friendly.

  • Improved Keyboard-Only Navigation

    Sometimes, users need to navigate the Web with a keyboard, rather than a mouse or touch screen. We adjusted our semantic markup to make content easier to access when navigating solely with a keyboard or screen reader software. Some of these changes included adding a visible border around all links when a user tabs to them (called a “visible focus”), adding labels to form elements, and making the menus easier to tab through.

  • Enhanced Functionality for Screen Readers

    For certain elements, we enhanced the screen reader experience with ARIA labels. Examples included adding ARIA accessibility labeling to “Read More” links across the platform to provide additional context to the link for screen reader users and adding ARIA labeling to pagination links to provide more context for where each “Next” or “Back” link goes.
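
The color contrast change described above can be illustrated with a minimal sketch. The hex values below are hypothetical examples, not the actual Georgia theme colors; WCAG 2.0 AA requires a contrast ratio of at least 4.5:1 for body text (3:1 for large text).

```html
<!-- Hypothetical colors for illustration; not the actual Georgia theme palette. -->

<!-- Fails WCAG 2.0 AA: light gray on white is roughly 2.3:1,
     below the 4.5:1 minimum for body text. -->
<p style="color: #aaaaaa; background-color: #ffffff;">Hard-to-read text</p>

<!-- Passes WCAG 2.0 AA: this gray on white is roughly 4.5:1. -->
<p style="color: #767676; background-color: #ffffff;">Readable text</p>
```

A tool such as the WebAIM Contrast Checker reports the exact ratio for any foreground/background pair, which is how combinations like these can be verified.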
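
The semantic markup change described above can be sketched roughly as follows; the class names, heading text, and form fields are hypothetical, not taken from the Georgia platform.

```html
<!-- Before: styled <div>s carry no meaning for screen readers or search engines. -->
<div class="title">Renew Your License</div>
<div class="search">
  <input type="text" id="q">
</div>

<!-- After: a real heading and a labeled form describe what the content is and does. -->
<h2>Renew Your License</h2>
<form role="search">
  <label for="q">Search this site</label>
  <input type="text" id="q">
</form>
```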
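
A visible focus indicator like the one described above can be added with a few lines of CSS; the outline color and width shown here are illustrative choices, not the platform's actual styles.

```html
<style>
  /* Draw a visible border around links and form controls
     when a keyboard user tabs to them. */
  a:focus,
  button:focus,
  input:focus {
    outline: 2px solid #205493;
    outline-offset: 2px;
  }
</style>
<a href="/services">Online Services</a>
```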
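
The ARIA labeling changes described above can be sketched as follows; the link targets and label text are hypothetical examples of the pattern, not the platform's actual markup.

```html
<!-- Without a label, a screen reader announces only "Read More", with no context.
     The label repeats the visible text and adds the destination. -->
<a href="/news/tax-deadline"
   aria-label="Read more about the tax filing deadline">Read More</a>

<!-- Pagination links gain context about where each one goes. -->
<nav aria-label="Pagination">
  <a href="?page=1" aria-label="Go to previous page">Back</a>
  <a href="?page=3" aria-label="Go to next page">Next</a>
</nav>
```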

Before and After

Subtle rather than sweeping changes were made to the themes' visual appearance, as shown in Figures 6-2 through 6-7.
Figure 6-2

Before (Classic 2 theme)

Figure 6-3

After (Classic 2 theme)

Figure 6-4

Before (Friendly theme)

Figure 6-5

After (Friendly theme)

Figure 6-6

Before (Portal theme)

Figure 6-7

After (Portal theme)

Accessibility for All

With these improvements, state agency web sites on our platform now have accessible themes. We’ve also built accessibility into our process for designing and building new platform features to ensure that all future development meets our standards from the start. However, the agency content managers play a critical role in ensuring the site’s information is also accessible.6

Georgia’s way of addressing accessibility through planning and research helped them build a better and more inclusive product.

Here are nine steps of user research from Erika Hall of Mule Design:

1. Get comfortable being uncomfortable.

All I know is that I know nothing.

—Socrates

We’ve all been brought up to value answers and fear questions. We were rewarded for right answers at school and we are rewarded for bright ideas at work. No wonder so many people look for reasons to avoid doing research, especially qualitative research. Anxiety around looking less knowledgeable runs deep. At least quant stuff has the comforting familiarity of standardized testing. Maintaining a research mindset means realizing that bias is rampant, certainty is an illusion, and any answer has a short shelf life. A good question is far more valuable in the long run. And you can’t ask good questions—meaning you can’t learn—until you admit that you don’t have the answers.

2. Ask first, prototype later.

If we only test bottle openers, we may never realize customers prefer screw-top bottles.

—Victor Lombardi, Why We Fail

So, of course there is a rush to prototype and test the prototype. A prototype is an answer, and it’s tangible, even if it’s simply a sketch on paper. This is comfortable, much more comfortable than just asking questions, even if it is tantamount to setting a large pile of money on fire. To anyone concerned about demonstrating their value by making fast, visible progress, simply asking questions feels as productive as a raccoon washing cotton candy.

The danger in prototyping too soon is investing resources in answering a question no one asked and ignoring the opportunity cost. Testing a prototype can help you refine an idea that is already good, not tell you whether you’re solving the right problem. And it’s easy to mistake the polish of a prototype for the quality of the idea (cough Juicero cough). FWIW, it’s also easy to mistake the gloss of a research report for the value of the insights.

Instead of saving and defending weak ideas, asking the right questions helps you identify and eradicate bad ideas faster. You just have to be strong enough to embrace being wrong.

3. Know your goal.

Asking questions is a waste of time unless you know your reason for doing so in advance. And you have to publicly swear that your reason is not “to be proven right.”

That is everyone’s secret goal. See number 1.

Often, in the enthusiasm to embrace research, teams will start talking to customers without a clear, shared goal. And then afterward, they feel like they spent precious time with no idea how to apply what they learned, hence nothing to show for it. This leads to statements like, “We tried doing research last year and it was a waste of time,” and, thus, a return to the comfort of building and testing. Or they walk away with different interpretations of what they heard, which leads to more arguments about who was proven right.

In large organizations, the unspoken goal is sometimes “demonstrate a commitment to research while allowing our product leaders to do what they want.” This might sound cynical, but I’ve talked to many skilled practitioners in well-funded research departments who generate magnificent reports that have zero impact on decision-making. Acknowledging this happens is the first step to stopping it.

It is perfectly fine and a great place to start for your goal to be “We need to level-set and quickly understand the perspective of people who aren’t us.” Just don’t tack on other goals after the fact.

Only after you have a goal will you know what you need to know. And you have to know your question before you can choose how to answer it.

4. Agree on the big questions.

At its core, all business is about making bets on human behavior.

—The Power of “Thick” Data, WSJ

The quality of your question determines the utility of the results. Asking the wrong question is the same as prototyping a solution to the wrong problem. They will both give you something other than what you need. Start with your high-priority questions. These come from the assumptions or areas of ignorance that carry the most risk if you’re wrong.

The big research question is what you want to know, not what you ask in an interview. In fact, asking your research question directly is often the worst way to learn anything. People often don’t know or are unwilling to admit to their true behaviors, but everyone is really good at making up answers.

Design research gets conflated with user research all the time. Talking to representative users is just one of many ways of answering high-priority research questions. Not everything you need to know is about users.

Often the most critical question is a variation of “Based on evidence, what do we really know about our customers/competition/internal capabilities?” This can be a particularly terrifying one to approach in total honesty, but you should be able to answer it within the hour.

5. There is always enough time and money.

When research is defined as a type of work outside of design, it’s easy to define gathering evidence as something extra and find reasons not to do it.

Often, teams have to ask permission of someone with authority in order to do work that is categorized as research. Asking questions is inherently threatening to authority. If you’ve ever worked with a leader who was resistant to doing qualitative research as part of a million-dollar project, ask yourself whether they would skip doing their own research before buying a $50,000 car. Stated objections are often cover for a fear of being undermined or proven wrong or not looking productive in the right way.

If you are clear and candid about your goals and high-priority questions, you can learn something useful within whatever time and budget is available to you. Find studies online. Go outside during lunch and observe people. Usability-test someone else's product. Get creative.

Just avoid doing surveys.

6. Don’t expect data to change minds.

It is difficult to get a man to understand something, when his salary depends on his not understanding it.

—Upton Sinclair

This is often a hard one for highly trained, specialist researchers to embrace, even though research has demonstrated it to be true. If you are used to working with a community of peers who value a certain kind of data, you may be ill-equipped to convince people who reject it out of hand. And it can feel insulting to one’s professional competence that the data is not enough.

The whole point of gathering evidence is to make evidence-based decisions. If that evidence undermines or contradicts the ideas or beliefs of the person with authority to make decisions, they will find reasons to reject it or ignore it. This is also at the heart of why qualitative researchers have a hard time in some engineering-driven organizations. People who are comfortable and competent with numbers want answers in numbers, even if the question demands something more descriptive.

So you have to turn ethnography inward and learn how your peers and leaders make decisions before you try to use data to influence those decisions.

7. Embrace messy imperfection.

We’re fickle, stupid beings with poor memories and a great gift for self destruction.

—Suzanne Collins, Mockingjay

Human lives are messy. If people didn’t have problems, there would be no need for products and services to solve them and we wouldn’t have jobs. Figuring out the best way to solve problems for people requires some time out in the real, messy world and letting go of a certain amount of control. While an ethical, sufficiently rigorous approach is necessary, there is no qualitative clean room. A clear goal and a good question can withstand all sorts of unpredictable conditions.

The desire for tidy, comfortable activities that look and feel like expertise made visible leads to the inappropriate use of focus groups, usability labs, eye-tracking, surveys, and glossy reports when something much less formal would be much more effective.

Incorporating evidence into design decisions is itself a learning process. You will never find the right answer and be done. If the process is working, you will continue to make decisions with increasing levels of confidence.

8. Commit to collaboration.

Everyone working on the same thing needs to be operating in the same shared reality. The people making decisions about the product need to be the best informed. It doesn’t matter how good the knowledge is if it’s only in one person’s head (unless you are in London and that person is your cab driver).

Research without collaboration means that one group of people is learning and creating reports for another group to acknowledge and ignore. Knowledge leaks out of even the most well-meaning teams working like this. Collaboration without evidence means everyone has tacitly agreed whose personal preferences win. Neither of these is the most productive approach.

Directly involving the people creating the product in asking and answering the questions is the most productive approach. Plus, it’s fun. And there are several ways to accomplish this depending on the organization.

The whole point of asking questions is to establish a shared framework for making decisions so that you can make better decisions faster. It changes lives.

9. Find your bias buddies.

We can be blind to the obvious, and we are also blind to our blindness.

—Daniel Kahneman, Thinking, Fast and Slow

So, you did the work and you found some answers. Now you need to decide what they mean. When it comes to interpreting the results of research, collaboration becomes particularly critical. Everyone with a human brain is burdened by human biases. And there is no way to sense one’s own. We all see what best fits our existing beliefs. So, we have to refer to an external standard (including the pre-established goals and questions) and work together to check each other.

This has nothing to do with how smart or how well-informed you are. Once you accept this, and as long as you work in a team that evinces psychological safety and mutual respect, it can be a fun game to identify biases and call them out.

The Wikipedia page has a nice list, along with the Cognitive Bias Codex to print and post on your wall.

Maybe, just call it design done right.

In sum, what we’re talking about when we’re talking about design research is really doing evidence-based design. Creation, criticism, and inquiry are all integral parts of the design process. Separating them leads to optimizing for the wrong things out of ignorance, ego, or fear.

Design is an exchange of value. You have to ask what people really need and value and what business value you expect to get in return, before putting anything at all into the world.

It doesn’t matter what questions you ask or how you find the answers, as long as you are ethical in your approach, honest about what you know, and apply yourself toward a worthwhile goal. There is no one right way and no one right answer. Enjoy the uncertainty! It never ends.7

Conclusion

Research is a valuable part of the equation of making a usable product. Research can take a lot of time to conduct, and many practitioners in the field have advanced degrees in analysis and research. When possible, hire a professional researcher; this helps with very large and complex projects or when your team does not have the bandwidth to conduct research itself. When it comes to making your product more accessible and inclusive, you will want to take the time and plan things out properly. Here are some areas to cover when planning your research:
  • Identifying stakeholders

  • Proper documentation

  • Defining your problem

  • Research

  • User research

  • User journey

  • Personas with disabilities

  • Evaluation criteria
