Chapter 10. The Challenges of Remote Testing

You’ve seen how remote research deals with the problems of traditional in-person research (geographical distance, recruiting, task validity, etc.), but it raises plenty of its own problems, too. We’d like to wind down this discussion with a review of the biggest challenges of adopting remote research methods: the doubts, concerns, and pains in the neck that seem to come up in study after study, even for seasoned practitioners.

Legitimacy

Remote research is still in its adolescence, and skeptical prospective clients often ask us, “Who else does remote research? If it’s so great, why haven’t I heard of it?”

As we mentioned at the beginning, lab research has run the show for a long time mostly because that’s the way things have always been done. In spite of this, plenty of big-name corporations have happily taken the plunge with remote research, including (just from personal experience) Sony, Autodesk, Greenpeace, AAA, HP, Genentech, Citibank, Wikipedia, UCSF Medical Center, the Washington Post, Esurance, Princess Cruises, Hallmark, Oracle, Blue Shield of California, Dolby, and the California Health Care Foundation, to name but a scant few. Automated tool sites boast Sony Ericsson, Motorola, YouTube, REI, eBay, Cisco, Bath & Body Works, Orbitz, Hyundai, and Continental Airlines as customers.

If you’re still not sure about it, we recommend looking through an exhaustively documented study (complete with full-session videos and highlight clips) of first-time Wikipedia editors that we conducted for the Wikipedia Usability Initiative project (http://usability.wikimedia.org/wiki/Usability_and_Experience_Study). It includes both lab and remote sessions with identical goals, so it’s a good comparative case study. If you have any reservations, you should watch the sessions and decide for yourself.

Not Seeing the Users’ Faces

We have always been confident that seeing a user’s face isn’t necessary for gleaning insight from a user study, but clients, stakeholders, and some UX researchers can get very persnickety about this issue. If a person isn’t physically present, being videotaped, or sitting behind glass, they wonder, “How can you really research them?” or “How can you develop empathy for someone you can’t see?”

Our firm belief is that the real value of a study comes from onscreen user behavior and think-aloud comments, not from users’ facial expressions: you want to learn what users do and are able to do on the site, not how they feel about it. Even if we concede that participants’ emotional responses can yield valuable insights about how they use a site, you’d be surprised at how much feeling can be inferred just as effectively from vocal tone, sighs, pauses, inflections, and interjections (“Ah-ha!” “Oh, man!”), not to mention the content of what they’re saying. Most people are veteran telephone users and have learned by now how to express themselves vocally.

Maybe in a few years, video chat will be commonplace, and not seeing the users’ faces probably won’t even be an issue anymore. For now, however, rest assured that not seeing the user’s face just isn’t that big a deal.

Technology Failures

Moderated remote research uses lots of separate technological components, any of which can malfunction for many reasons: a computer with multiple programs running on it, a microphone headset, an Internet connection, Web recruiting tools, third-party screen sharing solutions, recording software, two phone lines, IM clients, and so on. Then there are all the things that can go wrong with the users’ computer and phone setup. Users can be on a wireless connection, an unstable wired connection, or a cell phone; international phone lines can be muddy; their computers might not be able to install or run screen sharing.

When one or two things go awry, the result is annoying delays: interruptions to an ongoing study, glitches in the recordings, difficulty hearing users, and so on. At its worst, having two or three or all of these things fail can stop a study cold until the problems are resolved.

UX researchers aren’t necessarily tech experts, so if you want to stave off these problems, the best thing to do is test everything at least a day prior to the start of the study, referring to a checklist. Table 10-1 is a starter checklist for you. Modify it to suit the tools you use to conduct your research.

Table 10-1. Troubleshooting Checklist http://www.flickr.com/photos/rosenfeldmedia/4287138988/

Problem: Screen sharing is interrupted/malfunctions
What to Do: Check whether your Internet connection is stable. Check whether your user’s Internet connection is stable; if possible, have him/her switch to a wired connection. If it’s still not working, try a different screen sharing tool.

Problem: Recordings come out corrupted/glitchy/truncated
What to Do: Test the recording tool. If test recordings don’t work, check the recorder settings to make sure you’re recording to the correct format and quality. If test recordings work fine, the computer was most likely running too many processes during the recording; close unnecessary programs, and if the problem persists, you may need to upgrade your computer with more RAM. Also check whether you have sufficient hard drive space to store the recordings. For corrupted files, use a video editing program or converter to try converting the file to a different format; for certain file formats, there are also utilities that can fix minor problems.

Problem: Phone connection malfunctions
What to Do: Check your phone connection. Use an alternate phone line, if one is available. Ask users whether they’re using a cell phone and whether there is an alternate line to call. Ask users whether you can call back on a different line, at another time if necessary.

Problem: Microphone headset/sound input malfunctions
What to Do: Check whether the headset is muted. Check the mic input volume in the system settings. If you’re using a VoIP service like Skype, check the software settings to make sure the mic isn’t muted there.

Problem: Internet connection seems choppy or breaks
What to Do: The problem could be either your connection or the user’s. If it’s yours, postpone the study and switch to any alternate Internet connection you may have in your office; as a last resort, call your Internet service provider to see whether the service has gone down. If it’s the user’s connection, ask whether he/she is on a wireless connection and, if so, whether he/she can switch to a wired one. If that doesn’t work, try rescheduling the session for a time when the user will be at a different computer.

Problem: User’s firewall does not permit the screen sharing tool to function
What to Do: Switch to an alternate, preferably browser-based, screen sharing solution. If none is available, try rescheduling the session for a time when the user will be at a different computer.

Problem: Recordings have no sound
What to Do: Check the system sound input volume and settings (make sure the input isn’t muted) and the recording software settings.

Regardless of what happens, stay calm. The absolute best way to handle technical problems is to set everyone’s expectations ahead of time (yours, your team’s, and those of anyone who’s observing) that there’s always a chance issues will come up, and that it’s a normal part of the process. Make sure observers have their laptops or some poetry to read so they don’t sit around idly when a user’s cell phone dies.

In spite of your planning, it’s always stressful when you have observers watching you and a live participant waiting on the other line, and a stupid technological problem interrupts everything, even though you’re positive you tested it like a million times. Take a few seconds to step back and put it in perspective: life goes on. A hard-breathing, hyperthyroidal moderator will spoil a session even if all the technology starts working again.

Not as Inexpensive as You’d Think

Remote research is often represented as a discount method, a way of shaving costs, and people are often surprised to find that the cost of a remote moderated study is usually comparable to that of its in-person equivalent. Remote research can help you save on travel, recruiting, and lab rental costs, but where moderator time, participant incentives, and scheduling are concerned, not much is different. Most of the expense of a research project is the research itself: having a trained researcher take the time to observe and analyze users’ behaviors carefully and then synthesize the findings into smart and meaningful recommendations. Don’t let the stakeholders of the study fall under the impression that the primary motive behind a remote study is cost savings; the real benefit, again, is the ability to conduct time-aware research.

Organizational Challenges of Web Recruiting

Most Web recruiting tools require you to place a few lines of external code in the Web site’s source code. If you have a personal Web site or work for a small, scrappy start-up and have direct access to the code, this task shouldn’t be difficult. If, on the other hand, you’re dealing with a huge company with a complex content management system, you may have to prepare for red tape. You’ll have to cooperate with the IT operations guys and higher-up managers who have the final say as to what goes on the Web site. Be sure you have answers to the following questions:

  • What are the changes we need to make to the code?

  • What does the code do? Is it secure?

  • What pages does the code need to go on? Will it work with our Content Management System?

  • Which pages will the screener appear on?

  • How long will the recruiting code be active?

  • What will the screener look like to visitors?

  • How many people will see it?

  • How can the managers/IT people shut it off or disable it on their end?

  • Will the look and feel of the screener match the Web site’s look and feel?

The answers to all these questions depend on the tool you’re using to recruit. Come prepared with them before meeting with your IT people and managers to prevent delays and confusion in getting the screener up.
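For reference, the snippet a recruiting tool asks you to paste in usually amounts to only a few lines of script. The sketch below is purely illustrative and assumes a hypothetical vendor script at recruiter.example.com with made-up options; a real tool supplies its own code, but the questions above (which pages the screener appears on, how often visitors see it, how IT can switch it off) tend to map onto concrete parameters much like these.

    // Hypothetical screener loader -- illustrative only. Real recruiting tools
    // ship their own snippet; the URL and options below are invented.
    interface ScreenerOptions {
      scriptUrl: string;      // where the vendor's screener code is hosted
      displayRate: number;    // fraction of visitors shown the invitation (0 to 1)
      allowedPaths: string[]; // pages the screener is allowed to appear on
      enabled: boolean;       // master kill switch IT can flip without a redeploy
    }

    function loadScreener(opts: ScreenerOptions): void {
      // Answers "Which pages will the screener appear on?" in code.
      const onAllowedPage = opts.allowedPaths.some((path) =>
        window.location.pathname.startsWith(path)
      );
      if (!opts.enabled || !onAllowedPage) return;

      // Display rate: only a sample of visitors ever sees the invitation.
      if (Math.random() > opts.displayRate) return;

      // Load the vendor script asynchronously so it can't block page rendering.
      const script = document.createElement("script");
      script.src = opts.scriptUrl;
      script.async = true;
      document.head.appendChild(script);
    }

    loadScreener({
      scriptUrl: "https://recruiter.example.com/screener.js", // placeholder URL
      displayRate: 1.0, // show the invitation to everyone while recruiting is slow
      allowedPaths: ["/products", "/support"],
      enabled: true,
    });

In this sketch, walking through the code with your IT contact also answers the security question directly: all the snippet does is load the vendor’s script on a limited set of pages, and flipping enabled to false shuts it off.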

Getting the Right Recruits

Taking matters into your own hands with live recruiting on the Web is often cheaper, faster, and more dependable for remote research, but it means that you’ll have to bear more responsibility for recruiting your participants properly. For any number of reasons, getting enough recruits to conduct steady back-to-back sessions may not be easy; see Table 10-2 for common problems and how to deal with them.

Table 10-2. Dealing with Slow Recruiting http://www.flickr.com/photos/rosenfeldmedia/4287139118/

Problem: Your Web site’s traffic volume isn’t high enough to bring in six qualified recruits an hour
What to Do: Increase the screener display rate if it’s below 100%. Place the screener on multiple pages or on a higher-level page in the IA. Schedule qualified recruits in advance to supplement the users you’re able to obtain live. Increase the incentive, but not by too much (or else you’ll attract more fakers). Lengthen the duration of the study (with healthy traffic, it’s possible to do about six users in a workday).

Problem: Your recruiting criteria are too strict
What to Do: If you’re filtering your results, disable the filter to see whether any of the filtered recruits are acceptable participants. Ask stakeholders whether any recruiting criteria are negotiable and relax the lowest-priority ones. Increase the incentive.

Problem: The wording or length of your recruiting screener turns people off
What to Do: Revise the wording so it feels less like a deal or an offer. Omit needless words and questions. Be specific about the incentive.

Problem: Fakers are filling out your recruiting form
What to Do: Review the “Why did you come to this site?” responses to determine whether the fakers were referred by a deals/bargains site. Add sneaky questions to the screener to trick fakers into tipping their hand. Add open-ended questions that can be answered plausibly only by your legitimate recruiting audience.
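If fakers become a persistent problem, it can help to triage responses before you invite anyone. The sketch below is a hypothetical example rather than a feature of any recruiting tool; the field names, keyword list, and word-count threshold are all assumptions you would tune to your own screener, and flagged responses should be reviewed by hand rather than rejected automatically.

    // Rough post-hoc filter for suspicious screener responses. Field names,
    // keywords, and thresholds are invented for illustration.
    interface ScreenerResponse {
      referrer: string;        // answer to "Why did you come to this site?"
      openEndedAnswer: string; // question only your legitimate audience answers well
    }

    const DEAL_SITE_HINTS = ["deal", "bargain", "freebie", "sweepstakes", "coupon"];

    function looksLikeFaker(r: ScreenerResponse): boolean {
      // Referred by a deals/bargains site?
      const referredByDealsSite = DEAL_SITE_HINTS.some((hint) =>
        r.referrer.toLowerCase().includes(hint)
      );
      // Suspiciously thin answers to open-ended questions are another red flag.
      const thinAnswer = r.openEndedAnswer.trim().split(/\s+/).length < 5;
      return referredByDealsSite || thinAnswer;
    }

    const responses: ScreenerResponse[] = [
      { referrer: "saw it on a coupon forum", openEndedAnswer: "cool site" },
      {
        referrer: "I plan my family's trips here",
        openEndedAnswer: "I compared cabin prices for an Alaska cruise last week.",
      },
    ];
    console.log(responses.filter(looksLikeFaker).length); // 1 response flagged for review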

Natural User Behavior

Moderated remote research is great for watching users perform natural, self-directed tasks on their own initiative, but that kind of behavior isn’t a given. Some users come to a study with preconceived notions about what’s expected of them: either they’ll tell you what you want to hear, or they’ll be too critical. Some will ask, “So, what do you want me to do?” At every turn, you should encourage users to do what they would naturally do, adding only that you may have to move things along to keep to the time limit of the session. (This is a polite way of warning them that you might cut them off if they start meandering.)

When users get absorbed in their tasks, they may stop thinking aloud. That’s not necessarily a bad thing, depending on how clear their motivations are. Usually, you can keep users talkative with a few encouraging reminders to “keep saying whatever’s going through your head about what you’re doing.” Naturally quiet or shy users might need more explicit prompts, with extra acknowledgment of how awkward it is to think aloud: “I know it’s kind of odd to talk constantly while you’re browsing, but everything you have to say is really useful to us, so don’t hold back.”

Then again, sometimes it’s not the users who have problems with natural behavior, but the stakeholders. For an outside observer who’s accustomed to heavily scripted and controlled lab testing approaches (“Now do Task A... Now do Task B...”), it can be jarring to watch participants use the interface the way they normally would. Observing natural behavior often means letting users go off on digressions, allowing long silences while they try to figure something out, and letting them perform tasks that don’t appear to relate to the scripted ones.

You need to set your stakeholders’ expectations. What may seem aimless and chaotic is actually rich, properly contextualized interaction that they should pay close attention to. Put it this way: when you go to a Web site, do you close down all your other applications and browser tabs, turn off your cell phone, stick to one focused task, and tell the kids and dog to be quiet? And even if you do, is anyone ordering you to do those things? You need to assure stakeholders that regardless of whatever unanticipated tasks the users perform, the moderator will see to it that the users also perform the core, necessary tasks.

But there are some cases in which users really are too distracted to pay any attention to what they claim to be doing. If they’re simply veering off-track, you may need to either reschedule the session for a less hectic time or dismiss them. That decision is left to the moderator’s discretion, but it’s usually pretty obvious: whether users listen and respond to what the moderator says is often a good indicator.

Multitasking

It’s tough to appreciate, without doing a few sessions, how much stuff you have to keep your eye on while moderating a remote session: your conversation with the user, the user’s onscreen behavior, observer questions and comments via IM, your notes, the time remaining in the session, your place in the facilitator guide, and occasionally the status of the recording tool. You also have to exude an aura of serenity; you can’t even sound as though you’re trying.

The main thing is practice, practice, practice. Find willing volunteers to participate in dry runs. Watch and learn from recordings of past sessions, such as our sessions from the Wikipedia Usability Initiative (http://usability.wikimedia.org/wiki/Usability_and_Experience_Study#Remote_Testing).

Security and Confidentiality

Finally, there are the challenges of testing interfaces that need to be presented to users securely. These interfaces can’t be installed on users’ computers or placed live on the site, usually because they’re prototypes that aren’t ready for public exposure. Password-protected access to the site is the preferable option, but in cases in which no files can be moved to users’ computers, you should use the reverse screen sharing techniques described in Chapter 9, making sure that the Internet connection is fast enough to support natural interaction.

Persistent Negativity

Sometimes, for no particular reason, you’ll have stakeholders or team members who think remote research is a horrible and stupid idea. This opinion doesn’t make them bad people. Even after a great study, there will sometimes be criticisms of some of the methods and details. The reason is largely that most people aren’t familiar with remote research yet and don’t know what a successful session looks like. They’ll get freaked out about the moderator not assigning specific tasks, about having to wait 20 minutes to find a qualified user to live recruit, about the lack of active listening, or about any of the other things that are supposed to happen. And then there are die-hard skeptics, who won’t like what they see no matter what.

The best remedy for these situations is to deliver amazingly successful findings that exceed the usual expectations of incremental usability fixes. That’s not easy to do, of course, but in spite of anyone’s doubts about the process, if you think hard about your users’ behavior in the context of their real lives and then come up with insights that double the conversion rate or dramatically increase the ease of use of your interface, the naysayers will come around.

Chapter Summary

  • Although remote research makes many things easier, it also introduces its own unique challenges for the researcher.

  • Many people are still skeptical about remote research because it’s new. Some people believe you can’t get good results without seeing users’ faces. (You can.) And some people are just plain resistant to the idea from the beginning. Smart, effective findings will change their minds.

  • Since remote methods use lots of technology, there’s a higher incidence of tech failure. Be prepared for the most common scenarios.

  • One misconception is that remote research is significantly cheaper than in-person testing. While it’s true that you can save some costs, the overall cost is not drastically lower than in-person studies.

  • In most medium-to-large organizations, you’ll need to get different parts of the organization involved if you’re going to use a screener to do live recruiting.

  • It takes effort, patience, and experience to get the right recruits for your study, but as long as you have enough Web traffic, there are always things you can do to help things along.

  • It’s crucial to get people to behave naturally if you want good feedback. You have to pay attention to your phone mannerisms to make that happen. Reassure the study observers that going off-script and allowing silences is necessary to encourage natural behavior.

  • You need to multitask heavily to be an effective remote moderator. Practice a lot and watch old session videos to improve your techniques.

  • Confidentiality must be maintained for both you and your participant; take security precautions and use discretion in your language.
