
[ 3 ]

Wayfinding: Where Am I?

A logical extension to thinking about what we are looking at is understanding where we are in space. A large portion of the human brain is devoted to the representation of spatial information, so we ought to discuss it and consider how this cognitive process might be harnessed in our designs from two perspectives: knowing where we are, and knowing how we can move around in space.

The Ant in the Desert: Computing Euclidean Space

To help you think about the concept of wayfinding, I’m going to tell you about large Tunisian desert ants, which, interestingly, share an important ability with us. I first read about this and other amazing animal abilities in Randy Gallistel’s The Organization of Learning, which suggests that living creatures great and small share more cognitive processes than you might have anticipated. Representations of time, space, distance, light and sound intensity, and the proportion of food across a geographic area are just a few examples of the computations many creatures are capable of.

Imagine yourself as a Tunisian ant. Determining your location in the desert is a particularly thorny problem: there are no landmarks like trees, and the landscape frequently changes as the wind shifts the sand. Footprints and scent trails are just as unreliable, since a strong breeze can erase them. Ants that leave the nest therefore must use something other than landmarks to find their way home.

Furthermore, these ants take meandering walks in the Tunisian desert scouting for food (the ant in Figure 3-1 generally heads northwest from its nest). In this experiment, a scientist has left out a bird feeder full of sweet syrup. This lucky ant climbs into the feeder, samples the syrup, and realizes it has just found the jackpot of all food sources. It can’t wait to “tell” its fellow ants the great news! Before it can do so, however, the scientist picks up the feeder (with the ant inside) and moves it about 12 meters to the east (depicted by the red arrow in the diagram).

Figure 3-1

Tunisian ant in the desert

The ant, still eager to share the good news with everyone at home, attempts to make a beeline (or “antline”) for the nest. It heads straight southeast, almost exactly in the direction the anthill would have been had the feeder not been moved. It travels approximately the right distance, then begins walking in circles to search for the nest (a sensible strategy when there are no landmarks). Sadly, this particular ant has no way to account for having been picked up and carried, so it misses the nest by exactly the amount that the experimenter moved the feeder.

Nevertheless, this pattern of behavior, known as path integration (or dead reckoning), demonstrates that the ant continuously computes the net direction and distance it has traveled in Euclidean space, using the sun as its compass, no less. This is exactly the kind of computation our parietal lobes excel at.
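To make the computation concrete, here is a minimal sketch of path integration (not, of course, anything the ant literally executes): record each outbound leg as a heading plus a distance, sum the displacement vectors, and negate the total to get the straight-line vector home. The leg headings and distances are invented for illustration.

```typescript
// Minimal path-integration (dead-reckoning) sketch.
// Headings are compass degrees: 0 = north, 90 = east.

interface Leg {
  headingDeg: number; // compass heading of this leg
  distance: number;   // meters traveled on this leg
}

function homeVector(outboundLegs: Leg[]): { headingDeg: number; distance: number } {
  let east = 0;
  let north = 0;
  for (const leg of outboundLegs) {
    const rad = (leg.headingDeg * Math.PI) / 180;
    east += leg.distance * Math.sin(rad);
    north += leg.distance * Math.cos(rad);
  }
  // Home lies exactly opposite the net outbound displacement.
  const distance = Math.hypot(east, north);
  const headingDeg = ((Math.atan2(-east, -north) * 180) / Math.PI + 360) % 360;
  return { headingDeg, distance };
}

// A meandering, generally northwestward scouting trip...
const trip: Leg[] = [
  { headingDeg: 315, distance: 10 },
  { headingDeg: 290, distance: 6 },
  { headingDeg: 350, distance: 8 },
];
console.log(homeVector(trip)); // roughly southeast, at the net outbound distance
```

Notice that if the ant (feeder and all) is displaced after the outbound trip, nothing in this computation changes, so the computed home vector misses the nest by exactly that displacement, which is precisely what the experiment shows.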

Locating Yourself in Physical and Virtual Space

Just like that ant, we all have to determine where we are in space, where we want to go, and what we must do in order to get to our destination. We do this using our brain’s “where” system—one of the largest regions in the mammalian cortex.

If we have this uncanny, impressive ability to map physical space built into us, wouldn’t it make sense for us, as product and service designers, to tap into its potential for wayfinding in the digital world?

[ SIDE NOTE ]

If you characterize yourself as “not good with directions,” you might be pleasantly surprised to find you’re better than you realize. Consider, for example, how effortlessly you walk from your bed to the bathroom in the morning without thinking about it. And, if it is of any solace, know that, like the ant, we were never designed to be picked up (by a car, in our case) and deposited in the middle of a parking lot that offers very few distinct visual cues to help us find that car again for the trip home.

As I talk about “wayfinding” in this book, keep in mind that I’m linking two concepts that are similar but do not necessarily harness the same underlying cognitive processes:

  • Human wayfinding skills in the physical world using 3D space and motion over time
  • Wayfinding and travel in the virtual world

There is overlap between the two, but as we study this more carefully, we’ll see that the mapping is not a simple one-to-one. The virtual world in most of today’s interfaces on phones and in web browsers strips away many wayfinding landmarks and cues. It isn’t always clear where we are within a web page, app, or spoken experience (Alexa, Siri, etc.), nor how to get where we want to be, or even how to form a mental map of the space. Yet understanding where you are and how to move around the environment (real or virtual) is clearly critical to a great experience.

Where Can I Go? How Will I Get There?

In the physical world, it’s hard to get anywhere without distinct cues. Gate numbers at airports, signs on the highway, and trail markers on a hike are just a few of the tangible “breadcrumbs” that (most of the time) make our lives easier.

Navigating a new digital interface can be like walking around a shopping mall without a map: it is easy to get lost because there are so few distinct cues to indicate where you are in space. Figure 3-2 shows a picture of a mall near my house. There are about eight hallways that are nearly identical to this one. Just imagine your friend saying “I’m near the tables and chairs that are under the chandeliers” and then trying to find them!

Figure 3-2

Westfield Montgomery mall

To make things even harder, unlike in the real world, where we all know how to get around by walking, in the digital world the actions we need to take to reach our destination sometimes differ dramatically between products (e.g., apps versus operating systems). You may need to tap the screen for the desired action to occur, shake the whole phone, press the center button, double-tap, Control-click, swipe right, and so on.

Some interfaces make wayfinding much harder than it needs to be. Many (older?) people find it incredibly difficult to navigate around Snapchat, for example. Perhaps you are one of them! In many cases, there is no button or link to get you from one place to another; you just have to know where to tap or swipe to get places. The app is full of hidden “Easter eggs” that most people (Gen Y and Z excepted) don’t know how to find (Figure 3-3).

Figure 3-3

Snapchat navigation

When Snapchat was updated in 2017, there was a mass revolt from the teens who loved it (don’t believe me? Google it!). Why? Because the users’ existing wayfinding expectations no longer applied. As I write this book, Snapchat is working hard to unwind those changes to conform better to existing expectations. Take note of that lesson as you design and redesign your products and services: matched expectations can make for a great experience, and violated expectations can often destroy an experience.

The more we can connect our virtual world to some equivalent in the physical world, the better our virtual world will be. We’re starting to get there with augmented reality (AR) and virtual reality (VR), and even with subtler cues, like tiles that peek out past the edge of the screen (as Pinterest’s images do) to suggest a horizontally scrollable area. But there are so many more opportunities to improve today’s interfaces! Even something as basic as virtual breadcrumbs or cues (e.g., a slightly different background color for each section of a news site, as sketched below) could serve us well as navigational hints (that goes for you too, Westfield Montgomery mall!).
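As a tiny, hypothetical illustration of that “different background color per section” idea, the sketch below tints the page by section and renders a breadcrumb trail. The section names, colors, and the breadcrumbs element are all invented for the example.

```typescript
// Hypothetical sketch: give each section of a site its own subtle background
// tint and a breadcrumb trail, so users always have a cue to "where" they are.

const sectionTints: Record<string, string> = {
  world:    "#f3f7ff", // pale blue
  business: "#f3fff4", // pale green
  sports:   "#fff6ee", // pale orange
};

function capitalize(word: string): string {
  return word.charAt(0).toUpperCase() + word.slice(1);
}

function showLocation(path: string[]): void {
  const section = path[0];
  // Tint the whole page for the current section (white if the section is unknown).
  document.body.style.backgroundColor = sectionTints[section] ?? "#ffffff";

  // Render a simple breadcrumb trail, e.g. "Home / World / Europe".
  const crumbs = document.getElementById("breadcrumbs"); // assumes such an element exists
  if (crumbs) {
    crumbs.textContent = ["Home", ...path.map(capitalize)].join(" / ");
  }
}

showLocation(["world", "europe"]); // cues: pale blue page + "Home / World / Europe"
```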

One of the navigational cues we cognitive scientists believe product designers underuse is our sense of 3D space. While you may never need to “walk” through a virtual space, there may be interesting ways to use 3D spatial cues, as in the scene shown in Figure 3-4. The scene conveys perspective through the shrinking size of the cars and the narrowing of the sidewalk as they recede into the distance. This is automatic cognitive processing that we (as designers and humans) essentially “get for free.” Everyone has it. Further, this part of the “fast” system works automatically, without taxing conscious mental processes. Myriad interesting and as-yet-untapped possibilities abound!

Figure 3-4

Visual perspective
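The depth cue at work in Figure 3-4 comes down to simple geometry: an object’s apparent size on screen shrinks in proportion to its distance from the viewer. Here is a minimal sketch, with an assumed focal-length constant chosen purely for illustration.

```typescript
// Perspective in one line of geometry: apparent size is proportional to
// real size divided by distance from the viewer.
const FOCAL_LENGTH = 800; // assumed "camera" constant, in pixels

function apparentSize(realSize: number, distance: number): number {
  return (realSize * FOCAL_LENGTH) / distance;
}

// Identical 4.5 m cars at increasing distances render progressively smaller,
// which the visual system reads as depth automatically, "for free."
for (const d of [10, 20, 40]) {
  console.log(`${d} m away: ${apparentSize(4.5, d).toFixed(0)} px`);
}
// -> 10 m away: 360 px, 20 m away: 180 px, 40 m away: 90 px
```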

Testing Interfaces to Reveal Metaphors for Interaction

One thing we do know today is that it is crucial to test interfaces to see whether the metaphors we have created (for where customers are and how they can interact with a product) are clear. One of the early studies using touchscreen laptops demonstrated the value of testing to learn how users think they can move around in the virtual space of an app or site. When attempting to use these devices for the first time, users instinctively applied metaphors from the physical world, as you can see in Figure 3-5. Participants touched what they wanted to select (upper-right frame), dragged a web page up or down as if it were a physical scroll (lower-left frame), and touched the screen in the location where they wanted to type (upper-left frame).

Figure 3-5

First reactions to a touchscreen laptop

However, as in every product test I’ve ever conducted, participants didn’t just do what we expected; the test also uncovered behaviors we did not anticipate (Figure 3-6).

In this example, the participant rested his hands on the sides of the monitor and slid the interface up and down using a thumb on each side of the screen. Who might have predicted that?!

Figure 3-6

Using a touchscreen laptop with two thumbs

The touchscreen test demonstrated two things:

  • We can never fully anticipate how customers will interact with a new tool, which is why it’s so important to test products with actual customers and observe their behavior.
  • It’s crucial to learn how people represent virtual space, and which interactions they believe will allow them to move around in that space.

While doing so, you are observing those parietal lobes at work!

While observing users interact with relatively “flat” (i.e., lacking 3D cues) on-screen television apps like Netflix or Amazon Fire, we’ve learned not only how they try to navigate the virtual menu options, but also what their expectations are for that space.

In the real world, there is no delay when you move something. Naturally, then, when users select something in virtual space, they expect the system to respond instantaneously. If (as in the case shown in Figure 3-7) nothing happens for a few seconds after you (virtually) “click” on an object, your brain is naturally puzzled, and you instinctively focus on that oddity, pulling you out of the intended virtual experience.

Figure 3-7

Eye tracking TV screen interface

Thinking to the Future: Is There a “Where” in a Voice Interface?

There is great potential for voice-activated interfaces like Google Home, Amazon Alexa, Apple Siri, Microsoft Cortana, and others. But in our testing of these voice interfaces, we’ve found that new users often show anxiety around the devices, both because there are no physical cues that the device is listening or has heard them, and because the system’s interactions and timing are so different from what customers expect of humans.

In testing both business and personal uses for these tools in a series of head-to-head comparisons, we’ve found there are a few major challenges that lie ahead for voice interfaces. First, unlike in the real world or with screen-based interfaces, there are no cues about where you are in the system. Suppose you start to discuss the weather in Paris with a voice interface. You might ask a follow-up question like “How long does it take to get to Monaco?” You’re still thinking about Paris, but it’s not clear if the voice system’s frame of reference is still Paris. Today, with only a few exceptions, these systems start fresh in every discussion and rarely follow a conversational thread (e.g., that you are still talking about Paris when you ask about getting to Monaco).
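To show what following the conversational thread could look like, here is a purely hypothetical sketch of a session context that remembers the last place discussed, so a follow-up like “How long does it take to get to Monaco?” is interpreted relative to Paris. It does not reflect how any particular assistant actually works.

```typescript
// Hypothetical sketch: keep a small per-session context so follow-up
// questions inherit the frame of reference established by earlier turns.

interface SessionContext {
  lastPlace?: string; // the "where" the conversation is currently about
}

function handleUtterance(utterance: string, ctx: SessionContext): string {
  const weather = utterance.match(/weather in (\w+)/i);
  if (weather) {
    ctx.lastPlace = weather[1]; // remember the frame of reference
    return `Here's the weather in ${ctx.lastPlace}.`;
  }

  const travel = utterance.match(/get to (\w+)/i);
  if (travel) {
    // Without ctx.lastPlace, the system would have to "start fresh."
    const origin = ctx.lastPlace ?? "your current location";
    return `Estimating travel time from ${origin} to ${travel[1]}...`;
  }

  return "Sorry, I didn't catch that.";
}

const ctx: SessionContext = {};
console.log(handleUtterance("What's the weather in Paris?", ctx));
console.log(handleUtterance("How long does it take to get to Monaco?", ctx));
// -> "Estimating travel time from Paris to Monaco..."
```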

Second, if the system jumps to a specific topical or app “area” (e.g., Spotify functionality within Alexa), unlike in physical space, there are no cues that you are in that “area,” nor are there any cues as to what you can do or how you can interact. I can’t help but hope that experts in accessibility and sound-based interfaces will save the day and help us to improve today’s impressive—but still suboptimal—voice interfaces.

As product and service designers, we’re here to solve problems, not come up with new puzzles for our users. We should strive to match our audience’s perception of virtual space (whatever that may be) as best we can, and align our offerings to the ways our users already interact with other things or humans. Let’s put those parietal lobes to use!

Further Reading

Gallistel, C. R. (1990). The Organization of Learning. Cambridge, MA: MIT Press.

Müller, M., & Wehner, R. (1988). “Path Integration in Desert Ants, Cataglyphis fortis.” Proceedings of the National Academy of Sciences, 85(14): 5287–5290.
