© Geoff Hulten 2018
Geoff Hulten, Building Intelligent Systems, https://doi.org/10.1007/978-1-4842-3432-7_5

5. The Components of Intelligent Experiences

Geoff Hulten, Lynnwood, Washington, USA
At the core of every Intelligent System is a connection between the intelligence and the user. This connection is called the intelligent experience. An effective intelligent experience will:
  • Present the intelligence to the user by choosing how they perceive it, balancing the quality of the intelligence with the forcefulness of the experience to engage users.
  • Achieve the system’s objectives by creating an environment where users behave in ways that advance those objectives—without getting upset.
  • Minimize any intelligence flaws by reducing the impact of small errors and helping users discover and correct any serious problems that do occur.
  • Create data to grow the system by shaping the interactions so they produce unbiased data that is clear, frequent, and accurate enough to improve the system’s intelligence.
Achieving these objectives and keeping them balanced as the Intelligent System evolves is as difficult as any other part of creating an Intelligent System, and it is critical for achieving success.
This chapter will introduce these components of intelligent experiences. Following chapters will explore key concepts in much more detail.

Presenting Intelligence to Users

Intelligence will make predictions about the world, the user, and what is going to happen next. The intelligent experience must use these predictions to change what the user sees and interacts with. Imagine:
  • The intelligence identifies 10 pieces of content a user might like to explore. How should the experience present these to the user?
  • The intelligence thinks the user is making an unsafe change to their computer’s settings. What should the experience do?
  • The intelligence determines the user is in a room where it is a little too dark for humans to see comfortably. How should the automated home’s experience respond?
The experiences in these examples could be passive, giving gentle hints to the user, or they could be forceful, flashing big red lights in the user’s face. They might even automate things. Choosing the right balance is one of the key challenges when building intelligent experiences.
An intelligent experience might:
  • Automate : By taking an action on the user’s behalf. For example, when the intelligence is very sure the user is asleep, the experience might automatically turn down the volume on the television.
  • Prompt : By asking the user if an action should be taken. For example, when the intelligence thinks the user might have forgotten their wife’s birthday, the experience might ask: “Would you like me to order flowers for your wife’s birthday?”
  • Organize : By presenting a set of items in an order that might be useful. For example, when the intelligence thinks the user is on a diet, the experience might organize the electronic-menu at their favorite restaurant by putting the healthiest things the user might actually eat on top (and hiding everything that is a little too delicious).
  • Annotate : By adding information to a display. For example, when the intelligence thinks the user is running late for a meeting, the experience might display a little flashing running-man icon on the corner of the screen.
Or an intelligent experience might use some combination of these methods, for example annotating when the issue is uncertain but automating when something bad is very likely to happen soon.
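To make this concrete, here is a minimal sketch, in Python, of how an experience layer might pick among these presentation modes based on the intelligence’s confidence and on how costly a mistake would be. The function name, thresholds, and categories are illustrative assumptions, not part of any particular system.

# Hypothetical sketch: choosing a presentation mode from a single prediction.
# Thresholds and names are illustrative, not taken from a real product.
def choose_presentation(probability, mistake_is_hard_to_undo):
    # probability             -- the intelligence's confidence that action is needed (0.0 to 1.0)
    # mistake_is_hard_to_undo -- True when acting wrongly would be expensive for the user
    if mistake_is_hard_to_undo:
        # Never automate something that is hard to undo; at most, ask.
        return "prompt" if probability > 0.9 else "annotate"
    if probability > 0.95:
        return "automate"   # very confident and cheap to undo: act on the user's behalf
    if probability > 0.7:
        return "prompt"     # fairly confident: ask before acting
    if probability > 0.4:
        return "annotate"   # uncertain: add a passive hint to the display
    return "do_nothing"     # too unsure to bother the user at all

In a real product the thresholds would be tuned against how accurate the intelligence actually is and how much each kind of mistake costs the user.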
The challenge is that the experience should get the most value out of the intelligence when the intelligence is right, and it should cause only an acceptable amount of trouble when the intelligence makes a mistake.
Any Intelligent System (regardless of the intelligence) can use any of these approaches to experience. And recall, the intelligence in an Intelligent System will change over time, becoming better as more users use it (or worse as the problem changes). The right experience might change over time as well.

An Example of Presenting Intelligence

As an example, let’s consider a smart-home product that automates the lights in rooms of users’ homes. A user installs a few sensors to detect the light level, a few sensors to detect motion, and a few computer-controlled light bulbs. Now the user is going to want to be amazed. They are going to want to impress their friends with their new toy. They are going to want to never have to think about a light switch again.
So how could it work? First, let’s consider the intelligence. The role of the intelligence is to interpret the sensor data and make predictions. For example: given the recent motion sensor data and the recent light sensor readings, what is the probability a user would want to adjust the lights in the near future? The intelligence might know things like:
  • If the room is dark, and someone just entered it, the lights are 95% likely to turn on soon.
  • If the lights are on, and someone is leaving the room, the lights are 80% likely to turn off soon.
  • If the lights are on, and no one has been around for four hours, the next person into the room is likely to yell at the kids about how much power used to cost in their day and about the meaning of respect and about groundings—and then turn off the lights.
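As a concrete illustration of the prediction interface this example assumes, here is a minimal Python sketch. The sensor fields, rule thresholds, and probabilities are placeholders that only mirror the examples above; a deployed system would use a trained model rather than hand-written rules.

from dataclasses import dataclass

@dataclass
class RoomSensorReadings:
    light_level: float                # measured brightness in the room (0.0 = dark)
    motion_in_last_minute: bool       # did a motion sensor fire very recently?
    minutes_since_last_motion: float  # how long the room has appeared empty
    lights_are_on: bool

def probability_lights_should_change(readings: RoomSensorReadings) -> float:
    # Dark room, someone just walked in: the lights will probably turn on soon.
    if (not readings.lights_are_on and readings.light_level < 0.2
            and readings.motion_in_last_minute):
        return 0.95
    # Lights on, no one around for hours: the lights should probably turn off.
    if readings.lights_are_on and readings.minutes_since_last_motion > 240:
        return 0.80
    # Otherwise, probably leave things alone.
    return 0.10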
So what should the experience for the light-automation system do?
One choice would be to completely automate the lights. If the intelligence thinks the user would like to change the lights soon, then go ahead and change them. This is a pretty forceful experience, because it automates the raw intelligence output with no accommodation for mistakes. Will it be right? Will it make the user happy?
Well, what if the user is watching a movie? The light-automation system’s intelligence doesn’t know anything about the TV. If the user is watching a movie, and the motion sensor detects them—like maybe they stretch or reach for their drink—the lights might flip on. Not a great experience. Or what if the user is sleeping? Will the lights flash on every time they roll over? Or what if the user is having a romantic dinner?
In these cases, an experience that simply automates the lights is not likely to do the right thing. That’s partly because the intelligence doesn’t have enough information to make perfect decisions and partly because the intelligence wouldn’t make perfect decisions even if it had all the information in the world (because intelligence makes mistakes).
So are we doomed? Should we scrap the product, go back to the drawing board?
Well, we could. Or we could consider some other ways of presenting the intelligence to users.
For example, instead of automating the lights, the experience might prompt the user when the intelligence thinks the lights should change. For example, if the system thinks it is too dark, it could ask (in a pleasantly computerized female voice), “Would you like me to turn on the lights?”
This might be better than automation, because the mistakes the system makes are less irritating (a question in a nice voice instead of a suddenly-dark room). But will it be good enough? What if the intelligence makes too many mistakes? The voice is chiming in every few minutes, pleasantly asking: “Would you like me to mess up your lights now?”… “How about now?”… “Maybe now?”
The system would seem pretty stupid, asking the user if they would like to screw up their lighting multiple times per hour.
Another, even less intrusive, approach would expose the lighting information by providing annotations. For example, maybe a subtle “energy usage” warning on the user’s watch if there are a couple of lights on that the intelligence thinks should be off. Or maybe a subtle “health warning” if the intelligence thinks the user is in a room that is too dark. If the user notices the warning, and decides it is correct, they can change the lights.
These different experiences might all be built on the same intelligence—they may have exactly the same prediction in any particular situation—but they would seem very different from the user’s perspective.
So which one is right?
It depends on the system’s goals and the quality of the intelligence. If the intelligence is excellent, automation might work. If the intelligence is new (and not too accurate yet), the annotations might make more sense. One of the key goals of the intelligent experience is to take the user’s side and make something that works for them.

Achieve the System’s Objectives

An effective intelligent experience will present intelligence in a way that achieves the Intelligent System’s objective. The experience will be designed so users see the intelligence in ways that help them have good outcomes (and ways that help the business behind the Intelligent System achieve its objectives). Recall that the goals of an Intelligent System can include:
  • Organizational outcomes: sales and profit.
  • Leading indicators: engagement and sentiment.
  • User outcomes: achieving user objectives.
  • Model properties: having accurate intelligence.
The job of the intelligent experience is to take the output of intelligence and use it to achieve user outcomes, to improve leading indicators, and to achieve organizational outcomes. This means that when the intelligence is right, the experience should push users toward actions that achieve an objective. And when the intelligence is wrong, the experience should protect users from having too bad an experience.
Consider the home lighting automation system. If the goal is to save power, what should the system do when it thinks the lights should be off?
  • Play a soft click at the light-switch: If the user happens to hear it and wants to walk to the switch, they might turn the light off. Maybe.
  • Prompt the user: If we ask whether the user would like the light off, they might say yes, or they might be in the middle of something (and miss the question).
  • Prompt the user and then automate after a delay: Give the user a chance to stop the action, but then take the action anyway.
  • Automate the lights: Just turn them off.
  • Shut down power to the house’s main breaker: Because, come on, the user is wasting power; they are a menace to themselves and to others…
These experience options are increasingly likely to get the lights to turn (and stay) off. Choosing one that is too passive will make the system less effective at achieving its objective. Choosing one that is too extreme will make the system unpleasant to use.
In fact, the experience is just as important as the intelligence in accomplishing the Intelligent System’s objectives: an ineffective experience can mess up the system just as thoroughly as terribly inaccurate intelligence predictions.
An intelligent experience is free to pick and choose, to ignore unhelpful intelligence—whatever it takes to achieve the desired outcomes.

An Example of Achieving Objectives

Consider the automated light system again. Imagine two possible objectives for the product:
  1. To minimize the amount of power consumed.
  2. To minimize the chance of a senior citizen tripping over something and getting hurt.
Everything else is the same—the intelligence, the sensors, the way the system is implemented—only the objectives are different. Would the same experience be successful for both of these objectives? Probably not.
The system designed to conserve power might never turn lights on, even if the intelligence thinks having a light on would be appropriate. Instead, the experience might rely on users to use the switch when they want a light on, and simply turn lights off when it thinks the user doesn’t need the light any more.
The system designed to avoid senior-citizen-tripping might behave very differently. It might turn on lights whenever there is any chance there is a user in the room, and leave them on until it is very certain the room is empty.
These experiences are quite different, so different they might be sold as totally different products—packaged differently, marketed differently, priced differently—and they might each be perfectly successful at achieving their overall objectives. And they might both use (and contribute to) exactly the same underlying intelligence.
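A minimal sketch, again in Python with illustrative thresholds, of how the same prediction could drive these two very different experiences:

def power_saver_experience(p_lights_should_be_off, lights_are_on):
    # Objective: minimize power. Never turns lights on; turns them off fairly eagerly.
    if lights_are_on and p_lights_should_be_off > 0.8:
        return "turn_lights_off"
    return "do_nothing"   # users turn lights on themselves at the switch

def fall_prevention_experience(p_room_is_occupied, lights_are_on):
    # Objective: keep occupied rooms lit. Errs heavily toward leaving lights on.
    if not lights_are_on and p_room_is_occupied > 0.05:
        return "turn_lights_on"    # any real chance someone is there: light the room
    if lights_are_on and p_room_is_occupied < 0.01:
        return "turn_lights_off"   # go dark only when the room is almost surely empty
    return "do_nothing"

The only real difference between the two is where the thresholds sit, which is exactly where the product’s objective shows up.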

Minimize Intelligence Flaws

Intelligence makes mistakes. Lots of them. All different types of mistakes. An effective intelligent experience will function despite these mistakes, minimizing their impact and making it easy for users to recover from them.
When designing an intelligent experience it is important to understand all of the following:
  1. What types of mistakes the intelligence will make.
  2. How often it will make mistakes.
  3. What mistakes cost the user (and the organization):
    a. If the user notices the mistake and tries to correct it.
    b. If the user never notices that the mistake happened.
Then the intelligent experience can decide what to do about the mistakes. The experience can:
  1. Stay away from bad mistakes by choosing not to do things that are too risky.
  2. Control the number of interactions the user will have with the intelligence to control the number of mistakes the user will encounter.
  3. Take less forceful actions in situations where mistakes are hard to undo (for example, prompting the user to be sure they want to launch the nuclear missile, instead of launching it automatically).
  4. Provide the user with feedback about actions the system took and guidance on how to recover if something is wrong.
These techniques make an Intelligent System safer. They also reduce the potential of the Intelligent System by watering down interactions, and by demanding the user pay more attention to what it is doing. Creating a balance between achieving the objective and controlling mistakes is a key part of designing effective intelligent experiences.
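Two of these mitigations can be sketched in a few lines of Python: limiting how often the experience interrupts the user, and giving the user a chance to cancel before an action is automated. The class, function, and parameter names here are hypothetical, and the numbers are only placeholders.

import time

class PromptBudget:
    # Allow at most a few interruptions per day so mistakes stay bearable.
    # (Resetting the count at the start of each day is omitted for brevity.)
    def __init__(self, max_prompts_per_day=3):
        self.max_prompts = max_prompts_per_day
        self.prompts_today = 0

    def may_prompt(self):
        return self.prompts_today < self.max_prompts

    def record_prompt(self):
        self.prompts_today += 1

def automate_with_grace_period(action, notify_user, user_cancelled, delay_seconds=10):
    # Announce the action, wait, and only act if the user did not object.
    notify_user(f"About to {action} in {delay_seconds} seconds; say 'stop' to cancel.")
    time.sleep(delay_seconds)
    if user_cancelled():
        return "cancelled"
    return action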

Create Data to Grow the System

Intelligence needs data to grow. It needs to see examples of things happening in the world, and then see the outcomes that occurred. Was the interaction good for the user? For the business? Was it bad? Intelligence can use these examples of the world and of the outcomes to improve over time.
The experience plays a large role in making the data that comes from the Intelligent System valuable. Done right, an experience can produce large data sets, perfectly tailored for machine learning. Done wrong, an experience can make the data from the system useless.
An effective intelligent experience will interact with users in clear ways where the system can know:
  1. The context of the interaction.
  2. The action the user took.
  3. The outcome.
Sometimes the experience can make this interaction completely implicit; that is, the experience can be designed so elegantly that the user produces the right type of data simply by using the system.
At other times the experience may require users to participate explicitly in creating good data (maybe by leaving ratings). We will discuss techniques and tradeoffs for getting good data from intelligent experiences in great detail in a later chapter.
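As a sketch of what “good data” can mean in practice, here is one hypothetical shape for the record an intelligent experience might log for each interaction, capturing the context, the action, and the outcome listed above. The field names are illustrative.

from dataclasses import dataclass, asdict
import json

@dataclass
class InteractionRecord:
    timestamp: float        # when the interaction happened
    context: dict           # sensor readings / model inputs at the time
    prediction: float       # what the intelligence predicted
    experience_action: str  # what the experience did: "automate", "prompt", "annotate", ...
    user_response: str      # what the user did: "accepted", "reverted", "ignored", ...
    outcome_label: str      # label later used for training: "good", "bad", or "unknown"

def log_interaction(record: InteractionRecord, log_file):
    # Append one interaction as a line of JSON so it can feed model training later.
    log_file.write(json.dumps(asdict(record)) + "\n")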

An Example of Collecting Data

Consider the automated-light example.
The intelligence will examine the sensors and make a decision about what the lights should do. The experience will act on this intelligence, interacting with the user in some way. Then we need to know—was that interaction correct?
What if the lights were on, but the system thought they should be off? How would we know if the intelligence is right or not?
We might be right if:
  • The experience automatically turns off the lights and the user doesn’t turn them back on.
  • The experience prompts the user to turn off the lights, and the user says yes.
  • The experience provides a power warning, and the user turns off the lights.
We might be wrong if:
  • The experience automatically turns off the lights and the user immediately turns them back on.
  • The experience prompts the user to turn off the lights, but the user says no.
  • The experience provides a power warning, but the user doesn’t turn off the lights.
But this feedback isn’t totally clear. Users might not bother to correct mistakes. For example:
  • They might have preferred the lights remain on, but have given up fighting with the darn auto-lights and are trying to learn to see in the dark.
  • They might see the prompt to turn the lights off, but be in the middle of a level on their favorite game and be unable to respond.
  • They might have put their smart watch in a drawer and so they aren’t seeing any power warnings any more.
Data is best when outcomes are clear and users have an incentive to react to every interaction the experience initiates. This isn’t always easy, and it may come with some tradeoffs. But creating effective training data when users interact with intelligent experiences is a key way to unlock the value of an Intelligent System.
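One way to see the tradeoff is to sketch how logged interactions might be turned into training labels. The undo window and field names below are illustrative assumptions, and as the example above shows, some outcomes remain genuinely unknown no matter how the labeling rule is written.

UNDO_WINDOW_SECONDS = 120   # a quick reversal is treated as evidence of a mistake

def label_automated_light_change(turned_off_at, turned_back_on_at=None):
    # Label an automatic "lights off" action from what the user did afterward.
    if turned_back_on_at is None:
        return "probably_correct"   # never reverted; could also mean the user gave up
    if turned_back_on_at - turned_off_at <= UNDO_WINDOW_SECONDS:
        return "mistake"            # reverted right away: the system guessed wrong
    return "unknown"                # reverted much later; the context probably changed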

Summary

This chapter introduced intelligent experiences. The role of an intelligent experience is to present intelligence to users in a way that achieves the system’s objectives, minimizes intelligence flaws, and creates data to improve the intelligence.
Presenting intelligence involves selecting where it will appear, how often it will appear, and how forcefully it will appear. There are many options, and selecting the right combination for your situation is critical to having success with Intelligent Systems.
With the right experiences, a single intelligence can be used to achieve very different goals.
Experience can control the cost of mistakes by reducing their forcefulness, by making them easier to notice, by making them less costly, and by making them easier to correct.
An effective experience will support improving the intelligence. This can be done by creating interactions with users that collect frequent and unbiased samples of both positive and negative outcomes.
The intelligent experience must be on the user’s side and make them happy, engaged, and productive no matter how poor and quirky the intelligence might be.

For Thought…

After reading this chapter, you should:
  • Understand how user experience and intelligence come together to produce desirable impact.
  • Know the main goals of the intelligent experience and be able to explain some of the challenges associated with them.
You should be able to answer questions like these:
  • Consider your interactions with Intelligent Systems. What is the most forceful use of intelligence you have experienced? What is the most passive?
  • Name three ways you’ve provided data that helped improve an intelligent experience. Which was the most effective and why?