© Geoff Hulten 2018
Geoff Hulten, Building Intelligent Systems, https://doi.org/10.1007/978-1-4842-3432-7_7

7. Balancing Intelligent Experiences

Geoff Hulten
Lynnwood, Washington, USA
Designing a successful intelligent experience is a balancing act between:
  1. Achieving the desired outcome
  2. Protecting users from mistakes
  3. Getting data to grow the intelligence
When the intelligence is right, the system should shine, creating value, automating something, giving the user the choices they need, and encouraging the safest, most enjoyable, most profitable actions. The experience should strongly encourage the user to take advantage of whatever it is the intelligence was right about (as long as the user isn’t irritated and feels they are getting a good deal from the interaction).
When the intelligence is wrong, the system should minimize damage. This might involve allowing the user to undo any actions taken. It might involve letting the user look for more options. It might involve telling the user why it made the mistake. It might involve some way for the user to get help. It might involve ways for the user to avoid the particular mistake in the future.
The problem is that the experience won’t know if the intelligence is right or wrong. And so every piece of experience needs to be considered through two lenses: what should the user do if they got there because the intelligence was right; and what should the user do if they got there because the intelligence was wrong.
This creates tension, because the things that make an Intelligent System magical (like automating actions without any fuss) are at odds with the things that let a user cope with mistakes (like demanding their attention to examine every decision before the intelligence acts).
There are five main factors that affect the balance of an intelligent experience:
  • The forcefulness of the experience; that is, how strongly it encourages the user to do what the intelligence thinks they should.
  • The frequency of the experience; that is, how often the intelligent experience tries to interact with the user.
  • The value of the interaction when the intelligence is right; that is, how much the user thinks it benefits them, and how much it helps the Intelligent System achieve its goals.
  • The cost of the interaction when the intelligence is wrong; that is, how much damage the mistake does and how hard it is for the user to notice and undo the damage.
  • The quality of the intelligence; that is, how often the intelligence is right and how often it is wrong.
To create an effective intelligent experience you must understand these factors and how they are related. Then you must take the user’s side and build an experience that effectively connects them to the intelligence. This chapter will explore these factors in more detail.
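These trade-offs can be summarized as a simple expected-value calculation: an interaction is worth offering when the chance of being right, weighted by its benefit, outweighs the chance of being wrong, weighted by its cost. A minimal sketch in Python (the function name and inputs are illustrative, not from this book):

```python
def expected_interaction_value(p_right, value_if_right, cost_if_wrong):
    """Expected value of offering one intelligent interaction.

    p_right: estimated probability the intelligence is correct (0 to 1).
    value_if_right: benefit to the user when the intelligence is right.
    cost_if_wrong: damage (including recovery effort) when it is wrong.
    """
    return p_right * value_if_right - (1 - p_right) * cost_if_wrong
```

For example, a suggestion that is right 90% of the time, worth 10 when right and costing 5 when wrong, has an expected value of 8.5; the same suggestion at 50% accuracy is only worth 2.5. As intelligence quality drops, the balance shifts toward more passive, less frequent experiences.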

Forcefulness

An interaction is forceful if the user has a hard time ignoring (or stopping) it. An interaction is passive if it is less likely to attract the user’s attention or to affect them. For example, a forceful experience might:
  • Automate an action.
  • Interrupt the user and make them respond to a prompt before they can continue.
  • Flash a big, garish pop-up in front of the user every few seconds until they respond.
Forceful interactions are effective when:
  • The system is confident in the quality of the intelligence (that it is much more likely to be right than it is to be wrong).
  • The system really wants to engage the user’s attention.
  • The value of success is significantly higher than the cost of being wrong.
  • The value of knowing what the user thinks about the intelligence’s decision is high (to help create new intelligence).
A passive experience does not demand the user’s attention. It is easy for the user to choose to engage with a passive experience or not. Passive experiences include:
  • A subtle prompt that does not force the user to respond immediately.
  • A small icon in the corner of the screen that the user may or may not notice.
  • A list of recommended content on the bottom of the screen that the user can choose to click on or to ignore.
Passive interactions are effective when:
  • The system is not confident in the quality of the intelligence.
  • The system isn’t sure the value of the intelligent interaction is higher than what the user is currently doing.
  • The cost of a mistake is high.
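One way to apply these guidelines is to pick the interaction style from the system's confidence and the stakes involved. A sketch, where the thresholds and the style names are illustrative assumptions that a real system would tune against user data:

```python
def choose_forcefulness(confidence, mistake_cost, success_value):
    """Pick an interaction style from confidence and stakes.

    confidence: estimated probability (0 to 1) the intelligence is right.
    mistake_cost / success_value: estimated cost and benefit to the user.
    Thresholds (0.95, 0.80) are illustrative, not from the book.
    """
    if confidence > 0.95 and success_value > mistake_cost:
        return "automate"   # forceful: act without asking
    if confidence > 0.80 and mistake_cost <= success_value:
        return "prompt"     # forceful: ask the user to confirm
    return "annotate"       # passive: subtle hint the user may ignore
```

High confidence and low mistake cost justify forceful automation; shaky intelligence or expensive mistakes push the experience toward passive annotations.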
One way to think about the forcefulness of an interaction is as an advertisement on a web page. If the ad pops over the web page and won’t let you continue till you click it—that’s a forceful experience. You’ll probably click the stupid ad (because you have to). And if the ad isn’t interesting, you’ll be mad at the product that is being advertised, at the web page that showed you the ad, and at whoever programmed a web browser stupid enough to let such an irritating ad exist.
On the other hand, if the ad is blended tastefully into the page you may not notice it. You may or may not click the ad. If the ad is for something awesome you might miss out. But you are less likely to be irritated. You might go back to the web page again and again. Over time you may spend more time on the web page and end up clicking far more ads if they are passive and tasteful than if they are forceful and garish.

Frequency

An intelligent experience can choose to interact with the user, or it can choose not to. For example, imagine a smart map application that is giving you driving directions. This smart map might be getting millions of data points about traffic conditions, accidents, weather conditions, which lights are red and which are green, and so on. Whenever you come to an intersection, the directions app could say “turn left to save 17 seconds,” or “go faster to beat the next light and save 3 minutes,” or “pop a U and go back to the last intersection and turn right LIKE I TOLD YOU LAST TIME and you can still save 2 minutes.”
Or the app could choose to be more subtle, limiting itself to one suggestion per trip, and only making suggestions that save at least 10 minutes.
More frequent interactions tend to fatigue users, particularly if the frequent interactions are forceful ones. On the other hand, less frequent interactions have fewer chances to help users, and may be confusing when they do show up (if users aren’t accustomed to them).
Some ways to control frequency are to:
  1. Interact whenever the intelligence thinks it has a different answer. For example, fifteen seconds ago the intelligence thought the right direction was straight; now it has more information and thinks the right direction is right. And ten seconds later it changes its mind back to thinking you should go straight. Using intelligence output directly like this can result in very frequent interactions. This can be effective if the intelligence is usually right and the interaction doesn’t take too much user attention. Or it could drive users crazy.
  2. Interact whenever the intelligence thinks it has a significantly different answer. That is, only interact when the new intelligence will create a “large value” for the user. This trades off some potential value for a reduction in user interruption. And the meaning of “large value” can be tuned over time to control the number of interactions.
  3. Explicitly limit the rate of interaction. For example, you might allow one interaction per hour, or ten interactions per day. This can be effective when you aren’t sure how users will respond. It allows limiting the potential cost and fatigue while gaining data to test assumptions and to improve the Intelligent System.
  4. Interact whenever the intelligence thinks the user will respond. This involves having some understanding of the user. Do they like the interactions you are offering? Do they tend to accept or ignore them? Are they doing something else or are they receptive to an interruption? Are they starting to get sick of all the interactions? Done well, this type of interaction mode can work well for users, meshing the Intelligent System with their style. But it is more complex. And there is always the risk of misunderstanding the user. For example, maybe one day the user has a guest in the car, so they ignore all the suggestions and focus on their guest. Then the intelligent experience (incorrectly) learns that the user doesn’t like interruptions at all. It stops providing improved directions. The user misses out on the value of the system because the system was trying to be too cute.
  5. Interact whenever the user asks for it. That is, do not interact until the user explicitly requests an interaction. This can be very effective at reducing user fatigue in the experience. Allowing users to initiate interaction is good as a backstop, allowing the system to interact a little bit less, but allowing the users to get information or actions when they want them. On the other hand, relying too heavily on this mode of interaction can greatly reduce the potential of the Intelligent System—what if the user never asks for an interaction? How would the user even know (or remember) how to ask for an interaction?
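Strategies 2 and 3 above can be combined in a small gatekeeper that interacts only when the new answer is significantly better and an explicit rate limit has not been hit. A sketch, where the class name, the value threshold, and the one-per-hour default are all illustrative assumptions:

```python
import time

class InteractionGate:
    """Gate interactions on significance (strategy 2) and rate (strategy 3).

    min_value: how big an improvement (e.g., minutes saved) justifies
    interrupting the user. max_per_hour: explicit rate limit. Both
    defaults are illustrative and would be tuned against user data.
    """

    def __init__(self, min_value=10.0, max_per_hour=1, clock=time.time):
        self.min_value = min_value
        self.max_per_hour = max_per_hour
        self.clock = clock
        self._shown = []  # timestamps of interactions in the last hour

    def should_interact(self, estimated_value):
        now = self.clock()
        # Forget interactions older than an hour.
        self._shown = [t for t in self._shown if now - t < 3600]
        if estimated_value < self.min_value:
            return False  # not a significantly better answer
        if len(self._shown) >= self.max_per_hour:
            return False  # rate limit reached
        self._shown.append(now)
        return True
```

Tuning `min_value` up or `max_per_hour` down trades potential value for reduced fatigue, which is exactly the balance this section describes.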
And keep in mind that humans get sick of things. Human brains are very good at ignoring things that nag at them. If your experience interacts with a user too much, they will start to ignore it. Be careful with frequent interactions—less can be more.

Value of Success

Users will be more willing to put up with mistakes and with problems if they feel they are getting value. When an Intelligent System is helping with something obviously important—like a life-critical problem or saving users a large amount of money and time—it will be easier to balance the intelligent experience in a way that users like, because users will be willing to engage more of their time on the decisions the Intelligent System is making. When the Intelligent System is providing smaller value—like helping users toast their bread or saving them a few pennies of power per day—it will be intrinsically harder to create a balanced intelligent experience that users feel is worth the effort to engage with. Users tend to find interactions valuable if they:
  • Notice that something happened.
  • Feel that the interaction solved a problem they care about, or believe it provided them some meaningful (if indirect) improvement.
  • Can connect the outcome with the interaction; they realize what the interaction was trying to do, and what it did do.
  • Trust that the system is on their side and not just trying to make someone else a buck.
  • Think the system is cool, smart, or amazing.
It is possible for an interaction to leave users in a better situation and have zero perceived value. It is also possible for an interaction to leave users in a worse situation, but leave them thinking the Intelligent System is great. The intelligent experience plays a large role in helping users feel they are getting value from an Intelligent System.
When the intelligence is right, an effective intelligent experience will be prominent, will make users feel good about what happened, and will take credit where credit is due. But a savvy intelligent experience will be careful—intelligence won’t always be right, and there is nothing worse than an intelligent experience taking credit for helping you when it actually made your life worse.
Interactions must also be valuable to the Intelligent System. In general, an interaction will be valuable to the Intelligent System if it achieves the system’s objectives; for example, when it:
  • Causes the user to use the Intelligent System more (increases engagement).
  • Causes the user to have better feelings about the Intelligent System (improves sentiment).
  • Causes the user to spend more money (creates revenue).
  • Creates data that helps the Intelligent System improve (grows intelligence).
In general, experiences that explicitly try to make money and to get data from users will be in conflict with making users feel they are getting a good value. An effective intelligent experience will be flexible to make trade-offs between these over the life-cycle of the Intelligent System to help everyone benefit.

Cost of Mistakes

Users don’t like problems; Intelligent Systems will have problems. A balanced intelligent experience will be sensitive to how much mistakes cost users and will do as much as possible to minimize those costs.
Mistakes have intrinsic costs based on the type of mistake. For example, a mistake that threatens human life or that costs a large amount of money and time is intrinsically very costly. And a mistake that causes a minor inconvenience, like causing a grill to be a few degrees colder than the user requests, is not so costly.
But most mistakes can be corrected. And mistakes that are easy to correct are less costly than ones that are hard (or impossible) to correct. An intelligent experience will help users notice when there is a mistake and it will provide good options for recovering from mistakes.
Sometimes the mistakes are pretty minor, in which case users might not care enough to know the mistake even happened. Or there may be no way to recover, in which case the Intelligent System might want to pretend nothing is wrong—no sense crying over spilt milk.

Knowing There Is a Mistake

The first step to solving a problem is knowing there is a problem. An effective intelligent experience will help users know there is a mistake in a way that:
  1. Doesn’t take too much of the user’s attention, especially when it turns out there isn’t a mistake.
  2. Finds the mistake while there is still time to recover from the damage.
  3. Makes the user feel better about the mistake and the overall interaction.
Sometimes the mistakes are obvious, as when an intelligent experience turns off the lights, leaving the user in the dark. The user knows that something is wrong the instant they are sitting in the dark. But sometimes the mistakes are not obvious, as when an intelligent experience changes the configuration of a computer system’s settings without any user interaction. The user might go for years with the suboptimal settings and never know that something is wrong. Some options to help users identify mistakes include these:
  • Informing the user when the intelligent experience makes a change. For example, the intelligent experience might automate an action, but provide a subtle prompt that the action was taken. When these notifications are forceful, they will demand the user’s attention and give the user a chance to consider and find mistakes. But this will also fatigue users and should be used sparingly.
  • Maintaining a log of the actions the intelligent experience took. For example, the “junk” folder in a spam filtering system is a log of messages the spam filter suppressed. This lets users track down problems but doesn’t require them to babysit every interaction. Note that the log does not need to be complete; it might only contain interactions where the intelligence was not confident.
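Such a log might look like the following sketch, which, as noted above, keeps only the low-confidence actions for later review. The class names and the 0.9 cutoff are illustrative assumptions, not from the book:

```python
import time
from dataclasses import dataclass, field

@dataclass
class LoggedAction:
    """One automated action the user might want to review later."""
    description: str
    confidence: float
    timestamp: float = field(default_factory=time.time)

class ActionLog:
    """Record automated actions, like a spam filter's junk folder.

    Only actions below the confidence cutoff are kept, since the log
    need not be complete. The 0.9 default is an illustrative choice.
    """

    def __init__(self, confidence_cutoff=0.9):
        self.cutoff = confidence_cutoff
        self.entries = []

    def record(self, description, confidence):
        if confidence < self.cutoff:  # only log uncertain decisions
            self.entries.append(LoggedAction(description, confidence))

    def review(self):
        """Return descriptions of logged actions for the user to inspect."""
        return [entry.description for entry in self.entries]
```

The user pays no attention cost at interaction time, but retains a place to look when something seems wrong.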

Recovering from a Mistake

Intelligent experiences can also help users recover from mistakes.
Some mistakes are easy to recover from—for example, when a light turns off, the user can go turn it back on. And some mistakes are much harder to recover from, as when the intelligent experience sends an email on the user’s behalf—once that email is beeping on the recipient’s screen there is no turning back.
The two elements of recovering from mistakes are: how much of the cost of the mistake can be recovered; and what the user has to do to recover from the mistake. To help make mistakes recoverable, an intelligent experience might:
  1. Limit the scope of a decision, for example, by taking a partial action to see if the user complains.
  2. Delay an action, for example, by giving the user time to reflect on their decision before taking destructive actions (like deleting all their files).
  3. Be designed not to take destructive actions at all, for example by limiting the experience to actions that can be undone.
If the cost of mistakes is high enough, the experience might want to go to great lengths to help make mistakes recoverable.
When the user wants to recover from a mistake, the intelligent experience can:
  1. Put an option to undo the action directly in the experience (with a single command).
  2. Force the user to undo the action manually (by tracking down whatever the action did and changing the various parts of the action back one by one).
  3. Provide an option to escalate to some support agent (when the action can only be undone by an administrator of the system).
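The first option, single-command undo, is often implemented by pairing each automated action with its reverse. A sketch (the class name and the light-switch example are illustrative, not from the book):

```python
class ReversibleAction:
    """Pair an automated action with a single-command undo."""

    def __init__(self, do, undo):
        self._do, self._undo = do, undo
        self._done = False

    def execute(self):
        self._do()
        self._done = True

    def undo(self):
        if self._done:  # only reverse actions that actually ran
            self._undo()
            self._done = False

# Usage sketch: an experience that turns a light off but can put it back.
light = {"on": True}
action = ReversibleAction(
    do=lambda: light.update(on=False),
    undo=lambda: light.update(on=True),
)
action.execute()  # the intelligence acts; the light is now off
action.undo()     # one command fully restores the previous state
```

Designing actions this way keeps mistakes cheap: the user notices, issues one command, and the system is back where it started.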
The best mistakes are ones that can be noticed easily, that don’t require too much user attention to discover, and that can be completely recovered from in a single interaction.
The worst are ones that are hard to find, that can only be partially recovered from, and where the user needs to spend a lot of time (and the Intelligent System needs to pay for support staff) to recover.

Intelligence Quality

The experience can’t control the quality of the intelligence. But to make an effective intelligent experience you had better understand the quality of the intelligence in detail. When the intelligence is good, you’ll want to be more frequent and forceful. When the intelligence is shaky, you’ll want to be cautious about when and how you interact with the user.
Mistakes come in many types. An intelligence might:
  • Make many essentially random mistakes. For example, it might have the right answer 90% of the time and the wrong answer 10% of the time. In this case, the experience can be balanced for all users simultaneously.
  • Make focused mistakes. For example, it might be more likely to have the right answer for some users than for others, such as making more mistakes with users who have glasses than with users who do not. Focused mistakes are quite common, and are often hard to identify. When focused mistakes become a problem it may be necessary to balance the intelligent experience for the lowest common denominator—the user-category that has the worst outcomes (or to abandon some users and focus on the ones where the intelligence works well).
Intelligence quality changes over the system’s life-cycle, and the experience will need to adapt as the intelligence does. For example, early in development, before the system has real customers, the intelligence will generally be poor. In some cases, the intelligence will be so poor that the best experience results from hiding the intelligence from users completely (while gathering data).
As the system is deployed, and customers come, the system will gather data to help the intelligence improve. And as the intelligence improves, the experience will have more options to find balance and to achieve the Intelligent System’s objectives. Keep in mind:
  1. Better intelligence can support more forceful, frequent experiences.
  2. Change is not always easy (even if the change comes from improved intelligence). Be careful to bring your users along with you.
  3. Sometimes intelligence will change in bad ways, maybe if the problem gets harder or new types of users start using the Intelligent System (or if the intelligence creators make some mistakes). Sometimes the experience will need to take a step backwards to support the broader system. That’s fine—life happens.

Summary

An effective intelligent experience will balance:
  • The forcefulness of the interactions it has with users.
  • The frequency with which it interacts with users.
  • The value that successful interactions have for the user and for the Intelligent System.
  • The cost that mistakes have for the user and for the Intelligent System.
  • The quality of intelligence.
Both the users and the Intelligent System must be getting a good deal. For users this means they must perceive the Intelligent System as being worth the time to engage with. For the Intelligent System this means it must be achieving objectives and getting data to improve.
When intelligence is good (very accurate), the intelligent experience can be frequent and forceful, creating a lot of value for everyone, and not making many mistakes.
When the intelligence is not good (or mistakes are very expensive), the intelligent experience must be more cautious about what actions it proposes and how it proposes them, and there must be ways for users to find mistakes and recover from them.
The frequency of interaction can be controlled by interacting:
  • Whenever the intelligence changes its mind.
  • Whenever the intelligence finds a large improvement.
  • A limited number of times.
  • Only when the system thinks the user will respond.
  • Only when the user asks for an interaction.
Interacting more often creates more opportunity for producing value. But interacting can also create fatigue and lead to users ignoring the intelligent experience.
Users find value when they understand what is going on and think they are getting a good deal. The intelligent experience is critical in helping users see the value in what the intelligence is providing them.
Intelligence will be wrong, and to make an effective intelligent experience you must understand how it is wrong and work to support it. The quality of intelligence will change over the life-cycle of an Intelligent System and as it does, the intelligent experience can rebalance to create more value.

For Thought…

After reading this chapter you should:
  • Know the factors that must be balanced to create an effective intelligent experience: one that is pleasant to use, achieves objectives and mitigates mistakes, and evolves as intelligence does.
  • Know how to balance intelligent experiences by controlling the key factors of: forcefulness, frequency, value, cost, and understanding intelligence quality.
You should be able to answer questions like these:
  • What is the most forceful intelligent experience you have ever interacted with? What is the most passive intelligent experience you can remember encountering?
  • Give an example of an intelligent experience you think would be more valuable if it were less frequent. Why?
  • List three ways intelligent experiences you’ve interacted with have helped you find mistakes. Which was most effective and why?