[ 11 ]

Wayfinding: How Do You Get There?

Now let’s turn to findings that are related to wayfinding (Figure 11-1). As a reminder of what we discussed in Chapter 2, wayfinding is all about where people think they are in space, what they think they can do to interact and move around, and the challenges they might have there. We want to understand people’s perception of space—in our case, virtual space—and how they can interact in that virtual world.

Remember the story about the ant in the desert? It was all about how the ant thought it could get home based on its understanding of how the world works (an understanding that didn't account for being picked up and moved). Similarly, we want to observe the navigation and wayfinding behavior of our customers and identify any issues they are having while interacting with our products and services.

With wayfinding in mind, we are seeking to answer these questions:

  • Where do customers think they are?
  • How do they think they can get from Place A to Place B?
  • What do they think will happen next?
  • What are their expectations, and what are those expectations based on?
  • How do their expectations differ from how this interface actually works?
  • What interaction design challenges did they encounter as a result of their assumptions?

In this chapter, we’ll look at how customers “fill in the gaps” with their best guesses of what a typical interaction might be like, and what comes next. Especially with regard to service designs and flows, we need to know our customers’ expectations and anticipate their next steps so we can build trust and match those expectations.

Figure 11-1

Wayfinding, generally thought to harness the parietal lobes

Where Do Users Think They Are?

Let’s start with the most elemental part of wayfinding: where users think they actually are in space. Often with product design, we’re talking about virtual space, but even in virtual space, it’s helpful to consider our users’ concept of physical space.

Case Study: Shopping Mall

Challenge: You need to know where you are in order to determine if you’ve reached your destination, or, if not, how you will get there. Returning to the picture of that mall near my house, you can see that everything is uniform: the chairs, the ceiling, the layout (Figure 11-2). You can’t even see many store names. This setup gives you very few clues about where you are and where you’re going (physically and philosophically, especially when you’ve spent as much time as I have trying to find my way out of shopping malls!). It’s a little bit like the Snapchat problem we looked at in Chapter 3, but in physical space: there’s no way to figure out where you are, no unique cues.

Figure 11-2

What cues are you getting about your location in this mall?

Recommendation: I’ve never talked with our mall’s design team, but if I did, I would probably encourage them to add some unique cues, such as different colored chairs on different wings, or to remove some of the poles that block customers from seeing the stores ahead of them. All we need are a few cues that can remind us where we are and which way to go. The same goes for virtual design: Do you have virtual signposts or cues in place so your customers can tell where they are in the virtual space? Are the entrances, exits, and other key junctions clearly marked?

How Do They Think They Can Get from Place A to Place B?

Just by observing your users in the context of interacting with your product, you’ll notice the tendencies, workarounds, and “tricks” they use to navigate. Often, this happens in ways you never expected when you created the system in the first place.

Case Study: Search Terms

Challenge: Something I find remarkable is how frequently users of expert tools and databases actually start out by Googling words or phrases (terms of art) they think might come in handy while using those high-end tools. In observing a group of tax professionals, I realized that they thought they needed a specific term of art (i.e., a certain tax code term) to find the right page of tax code in a tool we were using, and that it couldn't be found by browsing the system. Instead of navigating to the tax code, they Googled the name of the tax law to identify the term used by specialists, returned to the tool, and typed that term into the search bar. As designers, we know they found these workarounds because they were having trouble navigating the expert tool in the first place.

Recommendation: In designing our products or services, we need to make sure we take into account not only our product, but the constellation of other “helpers” and tools—search engines are just one example—that our end users are employing in conjunction with our product. We need to consider all of these to fully understand the big picture of how they believe they can go from Point A to Point B.

What Are Those Expectations Based On?

As you’ll notice as you embark on your own contextual inquiry, there is a lot of overlap between wayfinding, language, and memory; after all, any time someone interacts with your product or service, they come to it with base assumptions from memory.

Let me try to draw a line between wayfinding and memory. When talking about memory, I’m talking about a big-picture expectation of how an experience works (e.g., dining out at a nice restaurant, or going to a car wash). With wayfinding, or interaction design, I’m talking about expectations relating to moving around in space (real or virtual).

Here’s an example of the nuanced differences between the two. In some newer elevators, you enter the floor you’re headed to on a central screen outside the elevator bank, which then tells you which elevator to take to get there. There are no floor buttons inside the elevators. This violates many people’s traditional notions of how to get from the lobby to a certain floor using an elevator. But because this relates to moving around in space, even though it taps into stored memories and schemas, I’d argue this is an example of wayfinding. In this case, the memory being summoned up is about an interaction design (i.e., getting from the lobby to the fifth floor), as opposed to an entire frame of reference.

Real-World Examples

Getting back to our sticky notes, Figures 11-3 through 11-7 show the ones we would categorize as findings related to wayfinding:

“Expected the search to provide type-ahead choices.”

This isn’t analogous to an ant moving around in space, but I do think that it relates to interaction design. It does use the word “expected,” implying memory, but I think the bigger thing is that it’s about how to get from Point A (i.e., the search function) to B (i.e., the relevant search results).

Figure 11-3

Research observation: search interaction expectations

“Expected that clicking on a book cover would reveal its table of contents.”

That’s an expectation about interaction design. This person has specific expectations of what will happen when they click on a book cover. This may not be the way most electronic books work right now, but it’s good to know that’s what the user was expecting.

Figure 11-4

Research observation: ecommerce interaction expectation

“Expecting to be able to ‘swipe like her phone’ to browse.”

Here’s an example of wayfinding we are seeing more and more as we work with “digital natives.” Like most of us, this person uses her phone for just about everything. As such, she expected to swipe, like on a phone, to browse. This sort of “swipe, swipe, swipe” expectation is increasingly becoming a standard, and something we need to take into account as designers. You could argue there’s a memory/frame of reference component, but I would counter that the memory in question is about an interactive design and how to move around in this virtual space.

Figure 11-5

Research observation: phone interaction expected on other surfaces

“Frustrated that voice commands don’t work with this app.”

This is a fair point about interaction design; the user would like to use voice interactions in addition to, say, clicking somewhere or shaking their phone and expecting something to happen. This is a good example of how wayfinding is about more than just physical actions in space. You might argue there is a language component here, but we’re not really sure this user had an expectation of being able to use voice commands, just that they would have liked it. We could gather more data to learn whether a memory of another tool was responsible for this frustration.

Figure 11-6

Research observation: participant wanted voice-based interaction

“Expects that clicking on the movie starts a preview, not actual movie.”

This person had expectations of how to start a movie preview on a Roku or Netflix-type interface. In this particular case, it sounds like you get either a brief description or the whole movie, and nothing in between, which violates the user’s wayfinding expectations. If a preview option were there but the user missed it for some reason, we would recategorize this as a visual issue.

Figure 11-7

Research observation: expectations based on past experience

Case Study: Distracted Movie-Watching

Challenge: Since we’re on the topic of phones, I thought I’d mention one study where I observed participants dividing their attention between their phones and a TV, and watched how they navigated from, say, the Roku to other channels like Hulu, Starz, and ESPN. In this case, we were interested in how participants (who were wearing eye tracking glasses, as seen in Figure 11-8) thought they could go from one place to another within the interface. (Are they going to talk to the voice-activated remote? Are they going to click on something? Are they going to swipe? Is there something else they’re going to do?)

Recommendation: Two things came through loud and clear here. First, the “flat” design style that is now so typical is not great. It is often difficult for users to know which element on the screen is currently selected, so they have real trouble knowing “where they are” in the interface. Second, the Roku was head and shoulders above the rest. Why? Because of one element on the remote: a back button! No matter where users were in the interface or on a channel, the back button worked exactly the same way. A great example of matching customer predictions about navigation!

Figure 11-8

Using a head-mounted eye tracker to study attention on a TV-based interface

Concrete Recommendations

  • Ask users before anything happens with a system how they think it will work and why. Learn as much as you can about user expectations.
  • Ask these questions throughout the contextual interview: What will happen next? What will you have to do? What will happen if you make a mistake? How will you know it worked?
  • When a step is taken, moderators can ask (often knowing the answer but not the explanation): Is that what you expected would happen? Why/why not? What should have happened? Did that surprise you?