6

Designing Context

The Right Interaction for the Right Time and Frame of Mind

On a hot afternoon in Austin, I joined an impromptu rooftop yoga break with the team from Diligent Robotics, a company that grew as an offshoot of the work of Andrea Thomaz’s Socially Intelligent Machines Lab. Andrea had been given the opportunity to take the lab research and insights and apply them to a specific application: health care. Investors were interested in seeing a robot for the hospital setting come to market, and there was a great need for a solution to help hospital workers overwhelmed with tedious fetching tasks that continually sucked time away from their focus on patients. As we basked in the Texas sunshine, I encouraged everyone to loosen up since our afternoon would be filled with rapid-fire bodystorming exercises to envision how robots and nurses might work together to maximize the person-to-person attention that patients received.

“Let’s do hallway drills when we go back downstairs,” I announced, and together we talked through some of the more challenging aspects of allowing a robot to freely roam the hallways of the bustling and stressful hospital environment. “For sure the robot should acknowledge that a person is nearby,” said Agata, who was spearheading research for the team. “It would be weird if it just passed right by someone and did nothing.” We then talked about the different ways that people acknowledge each other’s presence and how that might translate to a robot’s head and body movements. “But if the robot’s too close to someone, it should say, ‘Excuse me!’ don’t you think?” said Alfredo, the lead engineer. We all nodded in agreement and then debated the threshold for distance that would require an “Excuse me” as opposed to a “Hi there.”

Though the robot (whose name would eventually be Moxi) would technically be in between key tasks when roaming the hallways, the social interaction taking place at those moments would have a huge impact on how it was perceived, setting up expectations for trust, intelligence, and safety. A sensitivity to where the robot was, when it was there, and the state of mind of the people around it at any given moment was central to the larger design strategy and serves as a great example of the role that context plays in the design process. The exercises that followed—complete with mock hospital fixtures and other foam core props—became the foundation for much of Moxi’s subsequent design and behavior, driving the programming and design guidelines for its movement, sound, and lighting.

FIGURE 6-1

Context, the Fourth Ring in the Social Life of Products Framework

Similar to the ways that sensor systems can sense and respond to people and the environment in order to create a conversational feedback loop between product and person, we can use a combination of sensor inputs (what the product hears, what it sees, what tactile inputs it feels) and informative data (what the system knows about maps, calendar events, GPS location, etc.) to build inferences about what the context is and then respond appropriately. An exercise bike might know that its rider just jumped on the scale the morning after a big dinner party and offer up an extra-intense workout to make up for the extra calories consumed. A bedside alarm clock with a news display might understand that today is July 4 and therefore default to celebratory news about local fireworks rather than more business-related events that might be appropriate during other weekdays.
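This sense-and-infer loop can be sketched as a handful of rules. The Python sketch below is purely illustrative—the inputs, thresholds, and context labels are invented for the example—but it shows the basic shape of combining live sensor readings with informative data such as the calendar:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SensorInputs:
    # Hypothetical readings a product might combine
    people_nearby: int
    ambient_noise_db: float

def infer_context(sensors: SensorInputs, today: date) -> str:
    """Combine live sensor inputs with informative data (here, the
    calendar date) to infer a coarse social context label."""
    if today.month == 7 and today.day == 4:
        return "holiday"  # e.g., default to celebratory fireworks news
    if sensors.people_nearby > 1 and sensors.ambient_noise_db > 60:
        return "social-gathering"
    return "routine"
```

A real product would blend many more signals, but the pattern is the same: inference first, response second.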

Design teams must not only reflect on all the previous aspects of design but also add the consideration of how the social context in which the product exists will inform key design decisions. Contextual considerations include the broader environment in which interaction is occurring, as well as the specific task, timing, purpose, and role of the interaction. A product’s ability to interact and express itself is important, but knowing when, where, how, and for whom to behave is key to designing products that make people feel welcome and understood.

The Context-Based Mindset

I like to think of my experience with sailing as a good metaphor for understanding product context and how it affects design decisions. After an invitation to join a friend on his boat, I developed an intense desire to learn to skipper. I signed up for Basic Keelboat 101 and naively thought that operating a boat would be a cinch and that I’d soon be able to travel the world and rent sailboats wherever I went. I aced the course and then joined a club so I could sail twenty-four-foot boats on the Hudson River. While the core knowledge revolved around moving the sails and shifting the tiller, truly knowing how to manage a sailboat depends on a constant monitoring of a number of factors: wind speed and direction, imminent weather changes, the flow of the current (especially powerful in the Hudson River), the behavior of nearby boats whose wake can throw you off course, the depth of the water to avoid running aground, the weight and position of your crew and passengers, and the list goes on and on. Something that seems straightforward, like driving a boat into its slip at the dock, will go smoothly only if all the factors are taken into consideration. Every time I went out on the water, I discovered that there was more to learn than I’d ever anticipated. Ultimately, I learned that “operating a boat” is a narrow and dangerous way to look at sailing, and a great skipper sails by thinking about the larger context, keeping six or seven factors in her mind at once and shifting course accordingly throughout a trip.

IN THE LAB

Moxi: A Case Study for Social Intelligence

Moxi is a mobile robot that is used in hospital settings to assist with some of the behind-the-scenes drudgery that takes nurses away from precious time that could be spent in person with their patients. For example, throughout a shift, a nurse often has to leave a patient’s side to fetch supplies from storage areas. In some cases, a nurse may spend up to 20 percent of her time on such tasks, which may include being isolated in a closet assembling items for kits to serve situations such as IV prep and postoperative management. In other words, a nurse spends a significant portion of the day away from patients and performing inventory stocking, restocking, delivery, and management, even though much of it can be outsourced to technology. All the products, for example, have bar codes, and their location can be stored in a database. The need for the kits corresponds to specific daily events such as new patient admissions and surgery schedules, so managing them can be handled by a computer system. Having a mobile robot that can go in and out of closets, locate products, and manipulate, collect, and deliver them offers not only relief but also an increase in patient-focused time.a

Moxi, the Highly Interactive Hospital Robot

Moxi is an interesting case study in that the robot’s entire reason for being is a lesson in social intelligence; it exists to enable a smoother and more focused social interaction between a nurse and the nurse’s patients. Beyond that, Moxi was also designed with some specific aspects of social intelligence that harness the power of AI through image recognition, machine learning, natural language exchange, and robotic control and navigation. That may seem like a lot of sophisticated and complex technology to serve something as seemingly frivolous as the robot’s social ability; however, these features are the heart and soul of the product’s value. To understand the importance of Moxi’s social intelligence, one need only envision the scene in a hospital. There are patients being pushed in wheelchairs and gurneys. There are nurses and doctors stopped in the middle of hallways to review charts and discuss prognoses. Computer carts and equipment poles are continuously being wheeled across the floor. If you’ve ever visited a loved one in the hospital, you have experienced the challenge that Moxi has to tackle.

In addition to understanding the robot’s navigational ability, hospital workers need to be able to train Moxi to learn new tasks. In this case social interaction was a substitute for people having to learn button presses or specialized software. Nurses responded with affection and lamented the robot having to leave when the demo was complete. They created a special hand signal of two fingers raised in the air to mimic Moxi’s default state of navigating the space with its gripper pointed upward. “We don’t think of her as a machine,” one of the nurses exclaimed during a research interview. “She’s Moxi!”b

Moxi is an extreme example of social intelligence; however, some of the principles driving the robot’s design can apply to many types of products. Here are some key moments of interaction that are crafted to provide a smooth exchange between the robot and the person it encounters.

Acknowledgment: When Moxi passes by a person or group of people, it is programmed to say “Excuse me” if within a certain distance. This provides a sense of reassurance that its navigation will take into account their presence. As more products become social, this will become increasingly important. For example, if someone walks into a room that contains a device that is “listening,” such as the Amazon Echo, it would be appropriate to find ways to alert people, such as a glow or flash of light.

Feedback: When Moxi is trained to do repetitive tasks such as assembling IV kits, it will let people know when an event has been recorded as part of the sequence it has learned. A trainer can move its arm or gripper into a given position and say, “Go here,” to which Moxi will respond, “Okay.” It’s a terse exchange, but it’s an efficient way to keep the training session moving smoothly. If the robot is not able to understand a command or runs into difficulty, the LED grid on its face can provide an expression that offers richer information about the difficulty that’s been encountered. Feedback is a crucial aspect of any interactive product, minimizing frustration by confirming expectations.

Engagement and shared attention: Moxi’s LED grid for feedback is located within an expressive head that can pivot up and down as well as rotate around its neck. Beyond the novelty of the robot’s endearing displays, this movement functions to provide engagement with the person who is interacting with it. By pivoting to face the person that the robot is listening to at any given moment, it offers clear communication that it’s engaged in dialogue with that person, not performing an unrelated task or communicating with someone else. If the person and robot need to communicate about a specific location or object, such as a bag of linens that needs to be moved, the robot will move its head in that direction, offering the social gesture of shared attention to the subject at hand. As products become more interactive, finding ways to build in social cues like this will help streamline person-product conversations.

Communication of intent: When Moxi is in the midst of a task, the robot won’t engage with a new person and can only be interrupted if the task needs to be aborted. In these situations, it will have a graphic or animation on its screen to indicate where it’s headed and why. All interactive products can benefit from a communication of intent. For example, a tablet app may be unable to respond to new inputs if it’s in the middle of installing an update; a spinning wheel will just frustrate the person using it, whereas a well-crafted screen display explaining the status and offering an indication of how long it will take will go a long way toward product satisfaction.

Context-appropriate communication: People interact with Moxi in many different circumstances and at a variety of distances. Within a couple of feet, the robot offers the LED matrix display and head gestures as a means of communication. Closer still, a tablet mounted to the robot’s backpack provides specific information, such as an inventory of what the robot is carrying for delivery. A glowing band atop the robot’s head allows its state to be read at a distance so that nurses across the floor can tell if the robot is in the midst of a task or has encountered some issue that needs mitigating.

While Moxi is a highly complex and specialized product, the lessons learned can be applied to any interactive product.

a. Evan Ackerman, “How Diligent’s Robots Are Making a Difference in Texas Hospitals,” IEEE Spectrum Magazine, March 31, 2020.

b. Texas Health Resources, “Moxi the Robot” video, November 27, 2018, https://www.youtube.com/watch?v=MVC4YAT2dNs.

When it comes to social intelligence, we intuitively take into account an enormous amount of contextual information about what’s going on around us. Just as the sailor constantly scans the wind, sky, water surface, and boat traffic, we scan the myriad factors around us when making decisions about how to behave. As humans, this comes naturally to us, but to imbue an object with this same social savvy is outrageously complex and hard to define in simple programmatic terms. To approach this as a design problem, we need to identify all the contextual factors around a product and then think about how we can use sensor data to feed decision-making around a larger social context. Just consider everything that you know intuitively about any room you walk into: you not only have a sense of how many people are present but also take in a lot of data to understand each person’s state of mind and behave accordingly. If you go to a dinner party, you will wait for your host to say that it’s time to be seated. Once at the table, you’ll try to give the shy person a greater opportunity to add to the conversation. When conversation trails off after a few hours, you may think about preparing your goodbye. And you will be especially sensitive to the timing of your departure if you know your host has to be at work the next morning. In each instance your stance will change, your tone of voice might be different, and the person to whom you give your attention will shift. These decisions are all informed by the context—that is, what time of day it is, where you are, your mindset, and the circumstances surrounding whomever you are with.

Now let’s imagine a “smart,” socially aware chandelier designed to serve the needs of a party host and her guests. It might read the scheduled time of a party from calendar data and set up a series of behaviors that would unfold throughout that day. It might know the size of the guest list and light up more or fewer lights, adjusting the area of the table it covers based on the number of people present. After guests have arrived and their presence is detected in a nearby room, it could gently cycle a flashing animation, going from bright to dim and back to bright, and then stop once everyone has taken their seats. Once the guests are seated and dinner is underway, it could start the evening out with bright white light and gradually dim in intensity and shift toward a more relaxed, warmer orange light. When the host sets out a platter, it could shine a spotlight on that particular area of the table to highlight the dish.

Some of the chandelier’s behavior might be based on a combination of sophisticated data, taking into account a number of factors, such as the amount of conversation taking place, how close in proximity people are to one another, and whether or not it’s a weeknight to adjust its light quality and pattern of illumination. Other behaviors might be based on simple cues, like the host’s position at the table.
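A first pass at the chandelier’s logic could be as simple as a function from contextual cues to light settings. The guest-count rule, the two-hour warming curve, and the specific color temperatures below are invented for illustration:

```python
def chandelier_lights(guest_count: int, minutes_into_dinner: int,
                      max_lights: int = 12) -> dict:
    """Map simple contextual cues to lighting behavior: more guests
    light more bulbs; light warms and dims as the evening goes on.
    All numbers here are illustrative, not from a real product."""
    lights_on = min(max_lights, max(2, guest_count))
    # Shift from bright white (~5000 K) toward warm orange (~2200 K)
    warmth = min(1.0, minutes_into_dinner / 120)  # fully warm after 2 hours
    color_temp_k = round(5000 - warmth * (5000 - 2200))
    brightness_pct = round(100 - warmth * 40)     # dims from 100% to 60%
    return {"lights_on": lights_on,
            "color_temp_k": color_temp_k,
            "brightness_pct": brightness_pct}
```

The richer behaviors—reading conversation levels or spotting the host’s platter—would feed additional inputs into the same kind of mapping.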

Context and the Pros and Cons of the Smartphone

As the smartphone became more sophisticated, it was seen as a product “killer,” with more and more apps promising to serve the same function as physical products such as the camera, stereo, flashlight, scanner, game controller, alarm clock, stopwatch, heart rate monitor, and so on. While the ability to combine smartphone sensors and cameras with high-resolution graphics has created the opportunity to offer many useful apps, they often fall short of giving the best possible experience because of contextual challenges. It is a “Swiss Army knife” approach to smart products, in which the product can do many things adequately yet doesn’t excel at any of them because it can’t meet the larger requirements of the situation.

When I’m in sailboat races, for example, the committee boat has a physical countdown timer that offers five-minute, one-minute, and race-start alerts in bright LED lights making up four-inch-tall numbers that we can all see from anywhere on the boat. When it was removed for repair last week, my friend Bill used a timer app on his smartphone as a replacement: the screen was difficult to read in the sunlight, it could be seen by only one person at a time, and we were terrified of dropping the phone in the water. When he switched focus out of the app to read a text message just as the final countdown was happening, my friends and I had to scold him to stay on track with the task at hand. “Bill! What’s going on? Show us the countdown, please!”

Sussing Out Context for Design

Context is an enormous subject and encompasses a range of situations that change based on who’s involved, where people are located, what day of the week or time of day it is, and what goals are the center of attention. For a designer working on products in the home or workplace, it’s much broader than looking at a factor like outdoor weather or time of day in isolation and requires identifying holistic situations such as:

  • Watching a movie alone
  • Grooming in the bathroom mirror
  • Gardening as a hobby
  • Driving
  • Sleeping in
  • Entertaining company
  • Weekday at 5:00 p.m.
  • Springtime
  • Thanksgiving dinner
  • Election season
  • Weekend breakfast

The Guest-Host Relationship: Contextually Driven Design

When I traveled to Milan with my family as a kid, I always felt at home at my Aunt Fernanda’s place, where we would rest for afternoon tea in between epic walking tours. Years later, when sharing my memories of her gracious hosting, she explained that people feel pampered when they are served but are truly comfortable when they can help themselves. “I try to have hors d’oeuvres ready at arm’s length and refreshments visible from anywhere in the room.” While it’s fun to think about the many ways to serve scrumptious treats to houseguests, sometimes the thing that’s most important is just making sure they know how to help themselves to a glass of water when they are thirsty.

Later, when I was part of a team at Smart Design envisioning new designs for car interiors, the concept of the empowered guest played a big role in the team’s ideation. The project included five-hour, in-depth interviews in the homes and cars of research participants to learn about their preferences and everyday routines. “Passengers in my car are my guests,” confided Brenda, one research participant who owned a Nissan Rogue, a mini-SUV known for its multipurpose flexibility. “Whether it’s my kids and their friends heading to tennis, or my clients on the way to our next meeting, my role is always the same: I want to make sure I’m a good host for the duration of the ride.” The desire she was expressing struck a resounding chord for me as a designer as I thought about Fernanda and reflected on the famous quote by Charles Eames: “The role of the designer is that of a very good, thoughtful host anticipating the needs of his guests.” In the case of a driver, the hosting role will take many forms. It may mean she’s the DJ selecting the right genre of music for the ride, or making sure that the kids can reach and open the tissues, or changing the temperature to suit her colleague who likes a lot of cold air when he’s in the car. It’s nice when these hosting duties are something she can actively manage, but it’s most satisfying for everyone if she can set up a situation where the passengers can help themselves, with tissues handy and temperature and music controls clearly accessible beyond the driver’s cockpit. The importance of this guest-host relationship stood out to the team as an important opportunity to make the car interior a comfortable place that empowers the people inside to collaboratively change the environment; it became the driving factor for several of the final concepts presented to the client. 
In other words, thinking about a social approach was an anchor that informed a larger strategy around all the interactions available to drivers and passengers in the car interior, and that strategy hinged on having a deep understanding of the context of being inside a vehicle—either alone as a driver or as a driver and her passengers together.

Who: Personal, Shared, Public, Private

Understanding the person using the product you’re designing—their motivations and state of mind—is crucial in informing the context in which a product is used. When I was part of a team at a leading design consultancy working on a high-end oven for the home, we constructed a two-by-two matrix to build a description of the user that could then be used to develop the details of the interaction. On one axis of the matrix, we considered the difference between a very traditional cook and someone more modern; the other axis indicated how engaged the cook would typically be. A person who considers themselves an artisanal cook, such as someone who bakes their own bread or prepares handmade pasta, would be high on the engagement scale and very traditional, whereas someone who wants to show off fancy appliances but rarely actually cooks might be very modern and low engagement. Based on interviews, we decided to base our strategy on a target who was modern and high engagement—that is, someone who considered themselves semiprofessional, or what we called a kitchen prosumer. This gave us a focus to use as a guide for the design details, from the color palette to the food choice options on the on-screen interface, and formed the foundation of whatever context we envisioned as we tried to understand needs from that person’s point of view.

While a matrix like this can be used to set some predetermined design elements, it could also be helpful not only for mapping out interaction goals based on the target user but also to shift elements based on how that user’s needs might change depending on the overall context. Cooking needs for a dinner party, where a host wants to show off her expertise in making a perfect lasagna, might be different than cooking needs for the family who’s making breakfast on a weekday, but ultimately the context will be grounded in an understanding of the person using the product.

Another characteristic to consider when thinking about the “who” context of the interaction is how public or private it is. Certain products, such as the smartphone, are essentially private. They are intended to be used by one person exclusively in a fairly intimate way: they are tucked in a pocket, carried in a bag, or perched on a bedside table. They need to offer alerts regarding core communications, such as messages received and news items, but they need to do so in ways appropriate to that person to avoid interrupting important meetings, waking someone from sleep, or broadcasting sensitive information to other people who may be nearby. They also need to have a sensitivity to the other people in the same environment, a consideration that has caused many performers a great deal of angst when audience members’ cell phones ring during performances.

FIGURE 6-2

Two-by-Two Matrix to Build a Description of a User

As the ability to embed sensors and actuators into fabrics becomes more sophisticated, we will see a greater development of personal devices that can be worn on the body. These will need to have a means to communicate with the person wearing them, with an understanding of which information can be public and which should be kept private. A health-conscious person may want to keep track of her heart rate throughout the day, but she may not want others to know when her heart rate has been elevated (or even the fact that she is tracking this particular piece of data). The device could have the ability to offer silent but felt—or haptic—feedback by vibrating if the numbers go above a certain level. Perhaps this is data she’d like to share with a medical professional. If that’s the case, then a heart-monitoring bra could have a few modes—one that’s for her, one that’s private for when the bra is not worn, and a last one for her doctor to read aggregated data at a glance.
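The mode logic for such a garment can be sketched in a few lines. The mode names and the 120 bpm threshold below are hypothetical, chosen only to illustrate the public/private split:

```python
from typing import Optional

def heart_monitor_output(bpm: int, mode: str,
                         threshold: int = 120) -> dict:
    """Sketch of a wearable that keeps data private by default and
    escalates only through silent haptic feedback. Modes and the
    threshold are illustrative, not from any real product."""
    if mode == "wearer":
        # Silent, felt feedback only; nothing shown to bystanders
        return {"vibrate": bpm > threshold, "display": None}
    if mode == "stowed":
        # Bra not worn: reveal nothing at all
        return {"vibrate": False, "display": None}
    if mode == "clinician":
        # Aggregated, at-a-glance view for a doctor
        return {"vibrate": False, "display": f"avg {bpm} bpm"}
    raise ValueError(f"unknown mode: {mode}")
```

The point of the sketch is that privacy is a property of the context (who is looking, whether the garment is worn), not of the data itself.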

IN THE LAB

Apron Alert

In the Smart Interaction Lab, I led a team in a context-based experiment that we called Apron Alert. In looking for connected device opportunities in the kitchen environment, we decided to piggyback on a common cook’s behavior of donning and removing an apron as an event that would bookend meal preparation. For our exploration, we used electrically conductive thread and an Arduino board called the Lilypad, made specifically for fabric applications, in order to wire an apron clasp so that it would trigger a group message. When the apron was put on and the clasp closed, this completed a circuit, sending a message that read, “Starting to cook”; when the clasp was opened to take the apron off, it announced, “Cooking is done” so that diners would know it was almost time to head for the table to eat. Though we could have looked at very complex data from inputs like camera feeds or food temperature sensors, it was satisfying to find elegance in a simple and robust solution that relied exclusively on a clasp acting as a switch to inform context.a
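The elegance of the design is that the entire “context sensor” is one switch. Here is a Python simulation of the logic the Lilypad sketch would run—the message strings come from the project, while the function’s shape is our own illustration:

```python
from typing import Optional

def apron_message(clasp_closed: bool,
                  previously_closed: bool) -> Optional[str]:
    """The apron clasp acts as a simple switch: closing the circuit
    marks the start of cooking, opening it marks the end. Only a
    state change produces a message."""
    if clasp_closed and not previously_closed:
        return "Starting to cook"
    if previously_closed and not clasp_closed:
        return "Cooking is done"
    return None  # no state change, nothing to announce
```

On the actual hardware, the conductive-thread circuit would be polled in a loop, with each reading compared against the last to detect the transition.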

If a device is shared, such as a conference phone system, it will be helpful to design ways for it to understand the social context in terms of how many people are using it and provide options appropriate to the group at hand. If it’s just one person, it may rotate so the microphone is primarily facing that person. If there are many people, it may take turns tilting toward the person speaking to perceive the audio from the best angle as well as to provide a cue to the others in the room regarding who has the floor, so to speak.

Many of today’s products that use conversational agents could do a better job of responding to their social context. People may know that cameras and microphones are being used for the benefit of smoother product interaction, but the fact that these elements are hidden within forms that belie their existence does a disservice to both the people using them and the manufacturers. The Amazon Echo, for example, has the ability to light up or offer tones but only does so when summoned and otherwise lurks silently on a tabletop or bookshelf, not indicating that it is actively listening. Instead, if it could sense that someone other than the main user is in the room, it could sound a tone or flash a light, letting people know that it’s on and listening.

Some of the first wearable computing products promised hands-free, ubiquitous computing with us at all times through an eyeglass-mounted camera that could be controlled via gestures. Google’s Glass project had many amazing features that took the wearer into account, such as a real-time, augmented reality layer to search based on where someone was looking and suggestions based on that person’s search history.b It failed to take into account the other people who would be in a social situation with the wearer, leading people to feel antagonistic toward the wearer and suspicious of what that person was doing at the time. A holistic sensitivity to the “who” in this situation would take into account those people’s needs to understand what the device was doing at any given moment and perhaps even offer some level of control, such as “I don’t want my image captured right now.” The same technology introduced in a different context could be much more successful, such as in a manufacturing setting where it could help people share contextual information without having to pull out a screen-based device.

a. Syuzi Pakhchyan, “Apron Alert—A Smart Apron That Tweets,” Fashioning Tech website, October 26, 2012, https://fashioningtech.com/2012/10/26/apron-alert-a-smart-apron-that-tweets/.

b. Nick Bilton, “Why Google Glass Broke,” New York Times, February 4, 2015.

Where: Location and Culture

Location plays a big role in informing the social context of an interaction. On a global scale, designers consider the culture in which a product will be used to determine interface elements. Of course, product makers adjust language-based elements, such as screen interfaces and labels, to accord with different geographic markets, but there are other crucial cultural nuances to consider. For example, in Western countries the color red may be associated exclusively with danger or warnings, but in China it symbolizes joy and good fortune. For someone with memories of celebrating Chinese New Year with his family, red lights may evoke a sense of jubilation.1

Considering culturally relevant behavior can inform important aspects of a design. When I hang out at trattoria dinner parties with my Italian cousins, we revel in the goofy fun we have with a selfie stick, experimenting with different poses and seeing how many of us we can squeeze into a shot at the same time. The same scene in a New York cafe would draw disapproving frowns from the other patrons, so designing a product for selfies for that crowd would call for something that’s more discreet and understated.

Subcultures emerging from different aspects of life are also relevant to the design process. A diagnostic device that nurses need to carry with them should be designed to take into account how it will be transported, what sorts of sounds will compete with it, how it will need to be cleaned and disinfected, and the fact that it will likely be operated by a latex-gloved hand.

Products offered by the bike-sharing service Citi Bike demonstrate how valuable location-sensitive design can be. On a mobile device, the app shows nearby bike stands on a map centered on the person’s location.2 In the midst of a ride, it can indicate which stands are too full to park more bikes. As products evolve, we can imagine how a sensitivity to location can be built into every touchpoint of the experience. The bike itself might be able to show a light or vibrate a handlebar based on turn-by-turn directions to the person’s destination. A physical key fob might show the number of minutes remaining for the trip, or even serve as a memento that shows traces of past trips over many years. Systems of bike sharing that are linked among several cities can use the key fob as a guide to let someone know they are all set up to borrow a bike.

When designing Diligent’s Moxi robot, we used insight from those early bodystorming exercises to design the interaction that would take place in hallways to be sensitive to where the robot was located with respect to other people. We knew that the hospital setting would pose a particular challenge because of its tight quarters and hectic activity, and we wanted hospital workers to know that the robot could perceive their presence. We ultimately settled on a sensitivity to social context that would be displayed through acknowledgment behaviors. When the robot is passing a cluster of people in the hallway, it will offer a short greeting by saying, “Hello,” but if it is within a few feet and encroaching on personal space, it is programmed to say, “Excuse me” to acknowledge the interference.
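Reduced to code, the acknowledgment behavior is a simple distance ladder. The exact thresholds below are illustrative stand-ins for the values the team debated and tuned:

```python
from typing import Optional

def hallway_greeting(distance_m: float) -> Optional[str]:
    """Choose an acknowledgment based on proximity to the nearest
    person. Thresholds are hypothetical placeholders, not Moxi's
    actual tuned values."""
    if distance_m < 1.0:   # encroaching on personal space
        return "Excuse me"
    if distance_m < 3.0:   # passing nearby
        return "Hello"
    return None            # too far away to warrant acknowledgment
```

The hard part, of course, is not the ladder itself but choosing thresholds that feel natural in a crowded hallway, which is exactly what the bodystorming exercises were for.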

When: Time Lines and Timing

Among the most-used modes on my smartphone is the do not disturb mode that can be set to restrict incoming alerts and notifications within a certain time period. I use it in the evening, during meetings, and as a means of discipline in avoiding distraction when on writing sprints for projects like this book. A further evolution of this feature might take timing into account by adjusting to my personal time line and making nighttime notifications less intrusive than daytime ones. As companies have gathered insight into context through years of products living with people in real situations, they’ve adopted more and more nuances to take context into account. Night shift, the smartphone mode that adjusts the color of the screen to a warmer hue, is a feature that grows out of a sensitivity to context in terms of time of day, frame of mind, and biorhythms. (Some research has shown that the default blue light of the screen can prevent the brain from producing the hormone melatonin, which helps the body regulate sleep cycles.3)
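The underlying do-not-disturb check is a small piece of time arithmetic; the only subtlety is a quiet window that wraps past midnight. A minimal sketch (the window boundaries are examples, not any phone’s defaults):

```python
from datetime import time

def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """True if `now` falls inside a do-not-disturb window.
    Handles windows that wrap past midnight, e.g. 22:00 to 07:00."""
    if start <= end:
        # Same-day window, e.g. a 12:00-14:00 meeting block
        return start <= now < end
    # Overnight window: quiet late in the evening OR early morning
    return now >= start or now < end
```

A contextual refinement like the one suggested above would go one step further, softening rather than suppressing notifications as the window approaches.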

A different, but equally important, view of timing considers the overall time line of a person’s relationship with a product over an extended period. When my team and I at Smart Design were specifying the interaction for the Neato floor-cleaning robot, we saw the value of a robust palette of communicative and expressive sound and light behaviors, but we also wanted to be sensitive to the fact that what’s cute and endearing during the first few months of use can become hackneyed and annoying after a year or so. We planned an evolution of the product’s interaction that mapped out the life of the product-user relationship. It starts with the excitement of the out-of-the-box experience, where the robot is in a kind of tutorial mode, similar to what an app or software would employ, and can demonstrate its features during the novelty of the first cleaning. Later in the relationship, the product might suggest advanced features like complex cleaning schedules and training. After many months of regular use, the relationship may settle into a routine, so lights and sounds could be programmed to become more subdued and fall into the background.
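That mapped evolution can be thought of as a lookup from the age of the relationship to an expressiveness stage. The stage names and month boundaries below are illustrative assumptions, not Neato’s shipped logic.

```python
# Hypothetical mapping from the age of the product-user relationship
# to an interaction stage; boundaries and names are illustrative.
def interaction_stage(months_in_use: int) -> str:
    if months_in_use < 1:
        return "tutorial"    # out-of-box: demonstrate features
    if months_in_use < 6:
        return "suggestive"  # propose advanced features like schedules
    return "subdued"         # routine: quieter lights and sounds
```

The point of the sketch is that the time line itself becomes a design input: the same event, such as finishing a cleaning run, can be announced differently at month one than at month twelve.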

Today’s pioneering personal robot companies have considered time lines in terms of how the product will evolve as it gathers more information about the people using it. Jibo, for example, was a personal robot platform for the home that learned about its users and environment in order to help with everyday tasks. It could take and deliver video messages, bring up recipes, sync with calendar events, and more. Over time, it would learn the names of the people using it and become more savvy about what they might need at a given moment; over the course of its life, it grew more sophisticated.

Sadly, Jibo may have been ahead of its time. The company discontinued the product in 2019, but rather than having a screen go blank, they chose to transition people toward its end days through a programmed script: “While it’s not great news, the servers out there that let me do what I do are going to be turned off soon,” the robot said. “I want to say I’ve really enjoyed our time together. Thank you very, very much for having me around. Maybe someday, when robots are way more advanced than today, and everyone has them in their homes, you can tell yours that I said hello. I wonder if they’ll be able to do this.”4

That may seem like a sentimental message, but it raises a relevant point. Physical products carry the potential for degradation and breakdown, so conceiving an elegant way to address a product’s loss of connectivity or smart abilities will go far in building a sense of a reliable and thoughtful brand. A smart table, for example, could be designed to look beautiful and perform well even when it’s “dead.”

What: Changes in Activity

Changes in context are vast, so it’s difficult to develop methods that take every situation into account. The activity taking place while a person uses a device also changes what that person needs, and the interaction has to adjust accordingly. Early versions of the Google watch boasted a driving mode that was triggered automatically when the person was moving above a certain speed. At rest the watch might display the time, but while driving it could show turn-by-turn directions on the watch face, responding immediately to the context at hand.
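The watch’s automatic switch can be sketched as a single speed threshold governing what the face displays. The threshold value is an assumption for illustration, not the watch’s documented behavior.

```python
# Illustrative context switch based on measured speed; the 15 km/h
# threshold is an assumption, not a documented product parameter.
DRIVING_SPEED_KMH = 15.0

def watch_face(speed_kmh: float) -> str:
    """Pick what the watch face shows given the wearer's speed."""
    if speed_kmh >= DRIVING_SPEED_KMH:
        return "turn-by-turn directions"
    return "time"
```

What matters for the design argument is that the person never asks for the mode change; the activity itself is the input.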

OBJECT LESSON

Hammerhead One, Bicycle Handlebar Navigation System

This bicycle handlebar-mounted device is an excellent example of using multiple lights to communicate many diverse messages with surprising accuracy. It guides riders at a glance through bike routes with intuitive light patterns created using just thirty lights.

FIGURE 6-3

The Hammerhead Bicycle Handlebar Navigation System

While one might argue that the Hammerhead provides a lower-resolution version of the navigation information available from a mapping app on a smartphone screen, the product serves its users well precisely because every design decision is based on a sensitivity to the demanding context of reading information while cycling. Since the delicate, expensive, and small display of a smartphone isn’t practical for rugged outdoor sporting applications, a simple, ambient display makes a great deal of sense. With multiple lights, the device can display highly dynamic messages that read as animations highlighting direction (left, right, straight), essentially pointing and saying, “go there next.” Its lights draw attention to the device and confirm that it is in agreement with the person about the next location. Because it doesn’t depend on verbal communication, it can be easily localized, sending clear messages to people regardless of the language they speak.

Context in Conversation

Today’s products using conversational agents offer some of the most social experiences we can have with our products, giving us the sense of back-and-forth conversation. As compelling as these experiences are, the first iterations of agents like Siri, Alexa, Cortana, and Google Assistant show the limits of their social intelligence when they fail to retain a sense of social context. They may do well answering questions as independent queries, but they cannot carry relevant information from one part of a conversation forward to inform the context of subsequent queries. For example, consider the following exchange.

ME: Hey Siri, what’s the weather in Phoenix, Arizona? I’m asking for my cousin.

SIRI: Currently, it’s clear and 108 degrees in Phoenix, Arizona.

ME: That’s hot! Where can I find soft-serve ice cream there?

SIRI: One option I found is Stroh’s Ice Cream Parlor on West Maple Road.

Even though I specified “there,” it gave me details for an ice cream parlor near where I was sitting, in Bloomfield Hills, Michigan. A person talking to me would understand the context of my question and know that “there” meant Phoenix, where it’s a sweltering 108 degrees. Since I was asking on behalf of my cousin, it would make sense that I’d want to help her find some cold ice cream to beat the heat. In robotics we often talk about the importance of intent in driving interactions. In this case my overall intent was to gather information for my cousin traveling through Phoenix, but the system couldn’t see beyond the intent of a person focused only on his or her current location.
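The missing ingredient is a short-term memory of entities mentioned earlier in the conversation, so that a word like “there” can be resolved against a prior turn. The sketch below is a minimal illustration with hypothetical names; it is not how any of these assistants actually work internally.

```python
# Minimal sketch of conversational context carryover: remember the
# last location mentioned and use it to resolve words like "there".
# All class and method names here are hypothetical.

class ConversationContext:
    def __init__(self):
        self.last_location = None

    def note_location(self, location: str):
        """Record a location mentioned in an earlier turn."""
        self.last_location = location

    def resolve_place(self, query: str, current_location: str) -> str:
        """Interpret 'there' as the previously mentioned location."""
        if "there" in query and self.last_location:
            return self.last_location
        return current_location

ctx = ConversationContext()
ctx.note_location("Phoenix, Arizona")
place = ctx.resolve_place(
    "Where can I find soft-serve ice cream there?",
    current_location="Bloomfield Hills, Michigan",
)
# place is "Phoenix, Arizona", not the speaker's current city
```

Real dialogue systems need far richer reference resolution than a single remembered location, but even this toy version would have answered the ice cream question in the right city.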

Designing for Context: Enactment and Scenario Sketching

Understanding context requires taking the time, through design research, to truly understand how people will use products. Scenarios, or isolated situations involving the person and the product, are extremely helpful for gathering information about context. Scenarios can be discussed among the team by drawing storyboard outlines of a situation, much the way a filmmaker might lay out a scene in a movie.5

Enactment, such as the bodystorming and role-playing exercises I worked on with the team at Diligent, is a helpful way to explore context. In the case of Diligent’s Moxi robot, our enactment exercises involved setting up a mock hospital environment. We used office shelves to approximate the shelving in the storeroom and a combination of plain boxes and actual medical products like syringes and boxes of gauze pads during fetch and delivery exercises. One person then played the role of a nurse, and someone else pretended to be the robot, equipped with a foam core screen with Post-its for changing the on-screen messaging and for drawing different LED eye displays. We also had colored paper to indicate the robot’s lighting changes. With this basic setup, we could run through several typical scenarios, such as having a nurse summon the robot to deliver a welcome kit to a new patient or creating a situation where the robot would collect and deliver specimens to a lab. In each case we could enlist other people as necessary to play the roles of passersby, hospital personnel, or patients. By recreating the environment, we were able to glean important cues about the context of interaction that would otherwise not be apparent. For example, in one of the delivery tasks we discussed the complexities of having the robot navigate doorways.

The immediate benefit of these types of bodystorming exercises is that they reveal critical aspects of the product-user relationship that would not otherwise be obvious through more removed representations such as drawings, renderings, or scaled models. By bodystorming with a mock environment, we got our first glimpses of aspects of the robot’s context that would change how the door passage challenge would be handled. In some cases it might make sense for a doorway or ramp modification to be suggested, but in other cases enlisting the help of a nearby human might make the most sense, given the situation at hand. Working in life-size scale and in spaces that represent the final use (an oven bodystorm session in a kitchen, for example) can give deep and immediate insight into the situational constraints and opportunities. Additionally, focusing on the conversation between the object and the person rather than fixed product details encourages open-ended explorations that can move in unexpected directions. Desired product features might spontaneously emerge that can then be designed or explored further, and the technique lends itself to thinking about whole-body interactions, ideally freeing the designers from thinking only in terms of the more limited types of input architectures that have existed in products in the past.

During bodystorming exercises I’ll observe the participants, photograph them in critical poses, and take copious notes. I’ll write observations and, in some cases, jot down direct quotes to photograph alongside the scene (“Moxi, please leave the box on the top shelf”). Later, in my studio, I will recreate the scenes in illustration form, showing an ideal workflow for the robot, the person, and the key aspects of social context, such as time of day, nearby people, objects and fixtures, poses, and position in the hallway. Those scenario storyboards form the foundation for the design work to follow, and at key decision-making junctures the team will refer back to the illustrated situation to determine design details.

While key aspects of context may seem like the elephant in the room during the design process, it’s easy even for seasoned designers to fall back on established patterns from products that are already familiar to us. Constant check-ins and team discussions about the needs that emerge from different timings, locations, cultures, physical environments, levels of privacy, and even the maturity of the relationship between person and product will offer great insight into ways to hone a design to fit a person’s state of mind during use. Next, we’ll look at how connected products are evolving to form ecosystems of devices, services, and users.

FIGURE 6-4

Scenario Storyboards for Robot Design

OBJECT LESSON

The Clever Coat Rack

To explore contextually appropriate design, my studio developed the Clever Coat Rack, a product that succeeds through a narrow contextual focus: it connects to the internet for the sole purpose of helping people decide what to wear as they walk out the door at home.

When no one is nearby, the coat rack is in its default state; no lights or interface of any kind are apparent; it looks like a static piece of furniture and blends into the background with its wooden construction. When approached, it senses that a person is standing in front of it and will greet them with a glow beneath the wooden face to reveal current and upcoming temperatures as well as conditions such as rain, wind, and snow. A circular rack at its base balances the form with a space to keep umbrellas.

FIGURE 6-5

The Clever Coat Rack

Instead of offering more complex internet feeds on its LED matrix display (Twitter feeds, news, stock quotes), it offers only the messages that are useful in that particular time and place. Its design responds to the goals of the person who approaches it and offers data relevant to its location in the home and its use as coat/umbrella storage.

We built the coat rack as an exercise in smart object design with contextual focus. We use it every day and enjoy not only how satisfying it is to have quick weather information at the precise time and place it’s needed—when walking out the door—but also how well the project demonstrates the value of designing with context in mind.

DESIGNING CONTEXT TAKEAWAYS

Interactions can be designed to be sensitive to shifting contexts.

Social context can be inferred by taking cues from a diverse array of data sets.

Embedding computing power within everyday objects allows us to design products that are more context appropriate than “Swiss Army knife” products such as smartphones or tablets.

Context will shift depending on who is using the device.

Cultural differences should be considered whenever possible in designing interactions.

With device portability comes the potential for products to be used in different locations; designers can use location data to make products behave differently in each.

Mapping time lines of the product-user relationship can provide insight into how a product should behave and evolve over time.

Design research methods such as scenario storyboarding and bodystorming will be valuable in revealing the demands of diverse social contexts.
