© Geoff Hulten 2018
Geoff Hulten, Building Intelligent Systems
https://doi.org/10.1007/978-1-4842-3432-7_8

8. Modes of Intelligent Interaction

Geoff Hulten
Lynnwood, Washington, USA
There are many, many ways to create user experiences, and just about all of them can be made intelligent. This chapter explores some broad approaches to interaction between intelligence and users and discusses how these approaches can be used to create well-balanced intelligent experiences. These approaches include:
  • Automating actions on behalf of the user.
  • Prompting users to see if they want to take an action.
  • Organizing the information a user sees to help them make better decisions.
  • Annotating other parts of the user experience with intelligent content.
  • Hybrids of these that interact differently depending on the intelligence.
The following sections will explore these approaches, providing examples and discussing their pros and cons.

Automate

An automated experience is one in which the system does something for the user without allowing the user to approve or to stop the action. For example:
  • You get into your car on a Monday morning, take a nap, and wake up in your office parking lot.
  • You lounge in front of your TV, take a bite of popcorn, and the perfect movie starts playing.
  • You log into your computer and your computer changes its settings to make it run faster.
  • You rip all the light switches out of your house, hook the bulbs up to an Intelligent System, and get perfectly correct lighting for all situations without ever touching a switch or wasting one watt of power again.
These are great, magical, delightful… if they work. But in order for experiences like these to produce good outcomes, the intelligence behind them needs to be exceptionally good. If the intelligence is not good, automated intelligent experiences can be disastrous.
Automated experiences are:
  • Very forceful, in that they force actions on the user.
  • Not always obvious, in that the user may not see what was done or why; they may require extra user experience to communicate this. Without that visibility, users may not perceive the value of the interactions, and mistakes are harder to notice.
  • Difficult to get training data from, in that users will generally give feedback only when mistakes happen. Sometimes the system can tie outcomes back to the automated action, but sometimes it can’t. Automated systems usually need careful thought about how to interpret user actions and outcomes as training data, and they often require additional user experiences that gather information about the quality of the outcomes (the sketch at the end of this section shows one way to log such signals).
Automation is best used when:
  • The intelligence is very good.
  • There is a long-term commitment to maintaining intelligence.
  • The cost of mistakes is not too high compared to the value of a correct automation.
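To make these trade-offs concrete, below is a minimal sketch of a confidence-gated automation step. Everything here is hypothetical (the names maybe_automate, AUTOMATE_THRESHOLD, and the log format are not from this book): the action fires only when the model is very confident, and the system records enough context to tie later outcomes—such as the user reversing the action—back to the decision as training data.

# Hypothetical sketch: automate only at very high confidence, and log
# enough context that later outcomes can be joined back as labels.
import json
import time

AUTOMATE_THRESHOLD = 0.99  # illustrative; set by the cost of a mistake


def maybe_automate(context, model, action_log):
    """Run the model and act only when it is very confident."""
    action, confidence = model(context)
    if confidence < AUTOMATE_THRESHOLD:
        return False  # fall back to a less forceful experience
    print(f"Automating: {action}")
    # Record the decision so a later signal (for example, the user
    # undoing the action) can become a training label.
    action_log.append({
        "time": time.time(),
        "context": context,
        "action": action,
        "confidence": confidence,
    })
    return True


def toy_model(context):
    """Stand-in model: confident for one context, unsure otherwise."""
    if context == "movie_started":
        return "dim_lights", 0.995
    return "dim_lights", 0.60


if __name__ == "__main__":
    log = []
    maybe_automate("movie_started", toy_model, log)   # automates
    maybe_automate("reading_a_book", toy_model, log)  # stays quiet
    print(json.dumps(log, indent=2))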

Prompt

Intelligence can initiate an interaction between the system and the user. For example:
  • If the intelligence suspects a user is about to do something wrong, it might ask them if they are sure they want to proceed.
  • If the intelligence thinks the user is entering a grocery store, it might ask if they would like to call their significant other to see if there is anything they need.
  • If the intelligence notices the user has a meeting soon and estimates they are going to be late, it might ask if the user would like to send a notice to the other participants.
These interactions demand the user’s attention. They also allow the user to consider the action, decide whether it is right, and approve or reject it. In this sense, experiences based on prompting allow the user to act as a back-stop for the intelligence, catching mistakes before they happen. Interactions based on prompting are:
  • Variably forceful. Depending on how the interaction is presented, it can be extremely forceful (for example, with a dialog box that the user must respond to before progressing), or it can be very passive (for example, by playing a subtle sound or showing a small icon for a few moments).
  • Usually obvious, in that the user is aware of the action that was taken and why it was taken. This helps the user perceive the value of the intelligent interaction. It also helps users notice and recover from mistakes. But frequent prompts can contribute to fatigue: if users are prompted too often, they will begin to ignore the prompts and become irritated, reducing the value of the Intelligent System over time.
  • Usually good to get training data from, in that the user responds to specific requests. The Intelligent System has visibility into exactly what is going on: it sees the context that led to the interaction, and it gets the user’s considered judgment on the action. This tells the system whether each prompt was good or bad, which it can use to improve the intelligence (a minimal sketch of capturing this signal appears at the end of this section).
These types of interactions are often used when:
  • The intelligence is unreliable or the system is missing context to make a definitive decision.
  • The intelligence is good enough that the prompts won’t seem stupid, and the prompts can be infrequent enough that they don’t lead to fatigue.
  • The cost of a mistake is high relative to the value of the action, so the system needs the user to take part in approving the action or changing behavior.
  • The action to take is outside the control of the system, for example when the user needs to get up and walk to their next meeting.
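To illustrate the training-data point above in code: the sketch below is hedged (all of the names are invented for this example, not taken from the book). It shows a prompt whose accept/reject answer becomes a labeled example tied to the exact context the intelligence saw.

# Hypothetical sketch: a prompt whose answer doubles as a training label.
training_data = []  # (context, proposed_action, user_accepted) triples


def prompt_user(question):
    """Stand-in for a real UI prompt; reads y/n from the console."""
    return input(f"{question} [y/n] ").strip().lower() == "y"


def prompt_and_learn(context, proposed_action):
    accepted = prompt_user(f"You seem to be {context}. {proposed_action}?")
    # The user saw the exact context and judged the action, so this is
    # a high-quality label for improving the intelligence.
    training_data.append((context, proposed_action, accepted))
    if accepted:
        print(f"Doing: {proposed_action}")


if __name__ == "__main__":
    prompt_and_learn("running late for a meeting",
                     "Send a notice to the other participants")
    print(training_data)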

Organize

The intelligence can be used to decide what information to present to the user, and in what order. For example:
  • If the intelligence thinks the user is querying for information about their favorite band, it might select the most relevant web pages to display.
  • If the intelligence thinks the user is shopping for a camera, it might select camera-related ads to tempt the user.
  • If the intelligence thinks the user is having a party, it might offer a bunch of upbeat 80s songs on the user’s smart-jukebox.
These types of experiences are commonly used when there are many, many potential options, so that even “good” intelligence would be unlikely to narrow the answer down to a single correct choice.
For example, at any time there might be 50 movies a user would enjoy watching, depending on their mood, how much time they have, and who they are with. Maybe the user is 10% likely to watch the top movie, 5% likely to watch the second, and so on. If the system showed the user just the top choice, it would be wrong 90% of the time.
Instead, these types of systems use intelligence to pre-filter the possible choices down to a manageable set and then present this set in some browsable or eye-catching way to achieve their objectives.
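A short calculation makes the arithmetic concrete. The distribution below is made up (it loosely follows the movie example): a single “best guess” is wrong most of the time, while a modest browsable set covers the user’s actual choice much more often.

# Assumed, illustrative distribution over how likely the user is to
# want each of the top-ranked options (top pick 10%, second 5%, ...).
probs = [0.10, 0.05] + [0.03] * 10

for k in (1, 5, 10):
    covered = sum(probs[:k])
    print(f"A top-{k} list contains the user's choice {covered:.0%} of the time")

Under these assumed numbers, a single automated pick succeeds only 10% of the time, while a five-item list covers the user’s choice about a quarter of the time—still far from certain, which is why these experiences stay browsable rather than forceful.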
Interactions that organize choices for the user are:
  • Not intrinsically forceful, in that they don’t take important actions for the user and they don’t demand the user’s attention. If the user experience presenting the organized information is big and jarring, the interaction might be perceived as forceful, but that is fully in the hands of the experience designer.
  • Somewhat obvious, in that the user will see the choices, but might or might not sense that there was significant intelligence behind the ordering of the information. It is also sometimes challenging for users to find mistakes—if an option isn’t presented, the user may not know whether it was excluded because the system doesn’t have it (the product, movie, song, and so on) or because the intelligence was wrong.
  • OK to get training data from, in that it is easy to see what the user interacts with (when the system did something right), but harder to know when the system got something wrong (when it suppressed the option the user would have selected). This can lead to bias: options users see more often will tend to be selected more. These systems often require some coordination between intelligence and experience to avoid bias, for example by occasionally testing options with users—the intelligence doesn’t know whether the user will like something, so the experience tries it out (see the sketch after the list below).
Interactions that organize information/options are best when:
  • There are a lot of potential options and the intelligence can’t reasonably detect a “best” option.
  • The intelligence is still able to find some good options, these good options are a small set, and the user is probably going to want one of them.
  • The problem is big and open-ended, so users can’t reasonably be expected to browse through all the options and find things on their own.
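One common way to do the “tries it out” coordination mentioned above is simple randomized exploration. The sketch below is an assumption on my part rather than a method this chapter prescribes: occasionally slot a rarely shown option into the presented set, so options the ranking never surfaces still get a chance to earn selections and produce less biased training data.

# Hypothetical sketch: occasionally show a rarely seen option so the
# system can learn about choices it would otherwise never test.
import random

EXPLORE_RATE = 0.1  # illustrative: explore on 10% of requests


def options_to_show(ranked_options, all_options, k=5):
    """Mostly show the top-ranked set; sometimes swap in a wildcard."""
    chosen = list(ranked_options[:k])
    if random.random() < EXPLORE_RATE:
        unseen = [o for o in all_options if o not in chosen]
        if unseen:
            chosen[-1] = random.choice(unseen)  # replace the last slot
    return chosen


if __name__ == "__main__":
    ranked = ["movie_a", "movie_b", "movie_c", "movie_d", "movie_e"]
    catalog = ranked + ["movie_f", "movie_g", "movie_h"]
    print(options_to_show(ranked, catalog))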

Annotate

The intelligence can add subtle information to other parts of the experience to help the user make better decisions. For example:
  • If the intelligence thinks the user has their sprinklers set to water for too long (maybe there is rain in the forecast), it can turn on a blinking yellow light on the sprinkler box.
  • If the intelligence thinks there is a small chance the user’s car will break down soon (maybe the engine is running a little hot), it might display a “get service” indicator on the dashboard.
  • If the intelligence finds a sequence of characters that doesn’t seem to be a word, it can underline it with a subtle red line to indicate the spelling might be wrong (a minimal sketch of this appears at the end of this section).
These types of experiences add a little bit of information, usually in a subtle way. When the information is correct, the user is smarter, can make better decisions, and can initiate an interaction to achieve a positive outcome. When the information is wrong, it is easy to ignore, and the cost of the mistake is generally small.
Interactions based on annotation are:
  • Generally passive, in that they don’t demand anything of the user and may not even be noticed. This can reduce the fatigue that can come with prompts. It can also lead to users never noticing or interacting with the experience at all.
  • Variably obvious, in that the user may or may not know where the annotation came from and why it is intelligent. Depending on how prominent the user experience is around the annotation, the user might never notice it. Users may or may not be able to understand and correct mistakes.
  • Difficult to get training data from, as it is often difficult for the system to know: 1) if the user noticed the annotation; 2) if they changed their behavior because of it; and 3) if the change in behavior was positive or negative. Interactions based on annotation can require additional user experience to understand what actions the user took, and to improve the intelligence.
Annotations work best when:
  • The intelligence is not very good and you want to expose it to users in a very limited way.
  • The Intelligent System is not able to act on the information, so users will have to use the information on their own.
  • The information can be presented in a way that isn’t too prominent but is easy for the user to find when they want it.
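As a concrete version of the spell-check example above, here is a minimal, assumed implementation: a dictionary lookup that returns the character spans to underline, leaving any decision entirely to the user.

# Hypothetical sketch of spell-check-style annotation: flag character
# spans that don't match a known word; the user decides what to do.
import re

DICTIONARY = {"the", "quick", "brown", "fox", "jumps"}  # toy word list


def spans_to_underline(text):
    """Return (start, end) spans of tokens that don't look like words."""
    return [
        (m.start(), m.end())
        for m in re.finditer(r"[A-Za-z]+", text)
        if m.group().lower() not in DICTIONARY
    ]


if __name__ == "__main__":
    print(spans_to_underline("the quikc brown fox jmups"))
    # -> [(4, 9), (20, 25)], the spans for "quikc" and "jmups"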

Hybrid Experiences

Intelligent experiences can be built by combining these types of experiences, using one type of experience in places where the intelligence is confident, and another type in places where the intelligence is not. For example, when the intelligence is really, really sure, the experience might automate an action. But when the intelligence is only sort of sure, the experience might prompt the user. And when the intelligence is unsure, the experience might annotate the user experience with some information.
An example of a hybrid intelligent experience is spam filtering. The ideal spam filter would delete all spam email messages before a user had to see them and would never, never, never delete a legitimate email. But this is difficult.
Sometimes the intelligence can be very sure that a message is spam, because some spammers don’t try very hard. They put the name of certain enhancement pills right in their email messages without bothering to scramble the letters at all. They send their messages from parts of the Internet known to be owned by abusers. In these cases it is easy for a spam filter to identify these obvious spam messages and be very certain that they aren’t legitimate. Most intelligent spam filters delete such obvious spam messages without any user involvement at all. Spam filtering systems have deleted billions of messages this way.
But some spammers are smart. They disguise their messages in ways designed to fool spam filtering systems—like replacing i’s with l’s or replacing text with images—and they are very good at tricking spam filters. When the intelligence encounters a carefully obscured spam email, it isn’t always able to distinguish it from legitimate emails. If the spam filter tried to delete these types of messages it would make lots of mistakes and delete a lot of legitimate messages too. If the spam filter put these types of messages into the inbox, skilled spammers would be able to circumvent most filters. And so most spam filtering systems have a junk folder. Difficult messages are moved to the junk folder, where the user is able to inspect them and rescue any mistakes.
These are two forms of automation, one extremely forceful (deleting a message), and one less forceful (moving a message to a junk folder). The intelligent experience chooses between them based on the confidence of the intelligence.
But many spam filters provide even more experiences than this. Consider the case when the spam filter thinks a message is good, but it is a little bit suspicious. Maybe the message seems to be a perfect personal communication from an old friend, but it comes from a part of the Internet that a lot of spammers use. In this case the experience might put the message into the user’s inbox but add a subtle banner to the top of the message that says “Reminder—do not share personal information or passwords over e-mail.”
In this example, the Intelligent System uses forceful automation where it is appropriate (deleting messages without user involvement). It uses less-forceful automation when it must (moving messages to a junk folder). And it uses some annotation occasionally when it thinks that will help (warnings on a few percent of messages).
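The spam-filter behavior just described maps naturally onto a confidence-based dispatch. The thresholds and tiers below are invented for illustration—real filters are far more sophisticated—but the structure is the same: forceful automation at very high confidence, softer automation in the middle, and annotation when a message is only mildly suspicious.

def handle_message(spam_probability):
    """Hypothetical hybrid dispatch keyed on the model's confidence."""
    if spam_probability >= 0.999:
        return "delete"              # very forceful automation
    if spam_probability >= 0.90:
        return "move_to_junk"        # less forceful; the user can rescue
    if spam_probability >= 0.50:
        return "inbox_with_warning"  # annotation: a subtle reminder banner
    return "inbox"                   # looks legitimate


if __name__ == "__main__":
    for p in (0.9999, 0.95, 0.60, 0.01):
        print(f"{p:6.4f} -> {handle_message(p)}")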
Hybrid experiences are common in large Intelligent Systems. They are very effective when:
  • The problem can be decoupled into clear parts, some of which are easy and some of which are hard.
  • The intelligence problem is difficult and needs many types of support from the experience.
  • You want to reduce the amount of attention you demand from users and avoid asking them questions where the intelligence is certain, or where the intelligence is particularly unreliable. This can make the questions you do ask the user much more meaningful and helpful (both to the user and as training data).

Summary

There are many ways to present intelligence to users, but important ones include these:
  • Automating actions
  • Prompting the user to action
  • Organizing information
  • Adding information
  • Hybrid experiences
Options like automating and prompting require very good intelligence (or low costs when there is a mistake). Other options are less visible to the user and can be good at masking mistakes when the intelligence is less reliable or the cost of a mistake is high.
Most large Intelligent Systems have hybrid experiences that automate easy things, prompt for high-quality interactions, and annotate for things they are less sure about.

For Thought…

After reading this chapter you should:
  • Know the most common ways to connect intelligence to users.
  • Have some intuition about which to use when.
You should be able to answer questions like these:
  • Find examples of all the types of intelligent experiences in systems you interact with regularly (automation, prompting, organizing, annotating, and a hybrid experience).
Consider the intelligent experience you interact with most.
  • What type of experience is it?
  • Re-imagine it with another interaction mode (for example, by replacing prompts with automation or annotation with organization).