©  Geoff Hulten 2018
Geoff Hulten, Building Intelligent Systems, https://doi.org/10.1007/978-1-4842-3432-7_11

11. The Components of an Intelligence Implementation

Geoff Hulten
Lynnwood, Washington, USA
In order to have impact, intelligence must be connected to the user.
That is, when the user interacts with the system, the intelligence must be executed with the proper inputs and the user experience must respond. When new intelligence is created, it must be verified and deployed to where it is needed. And there must be ways to determine if everything is working, and mechanisms to gather what is needed to improve the intelligence over time.
The intelligence implementation takes care of all of this. It is the foundation upon which an intelligent service is created.
As an analogy, consider a web server.
A web server implementation needs to accept network connections, do authentication, interpret requests and find the right content, process it (maybe running scripts), serve it, create logs—and a lot more.
Once a web server implementation is available, content creators can use it to produce all sorts of useful web sites.
This is similar to an implementation of an Intelligent System—it allows intelligence orchestrators to produce useful user interactions over time. Intelligence, artificial intelligence, and machine learning are changing the world, but proper implementations of Intelligent Systems put these into position to fulfill their promise.
This chapter introduces the components that make up an implementation of an Intelligent System.

An Example of Intelligence Implementation

Imagine an Intelligent System designed to help users know if a web page is funny or not.
The user browses to a page and a little smiley face pops up if the system thinks the user will laugh if they read the page. You know, so no one ever has to miss something funny ever again…
Simple enough. Let’s discuss how this Intelligent System might be implemented.
It starts with a program that can examine the contents of a web page and estimate if it is funny or not. This is probably a model produced by machine learning, but it could be some simple heuristics. For example, a very simple heuristic might say that any page containing the phrases “walked into a bar” or “who’s there” is funny, and every page that doesn’t contain those phrases is not funny.
And for some it might actually be true—maybe you have that uncle, too…
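The phrase-matching heuristic described above is simple enough to sketch in a few lines. This is an illustrative toy, not the book's actual implementation; the phrase list and function name are made up for the example.

```python
# Toy version of the "funny page" heuristic: a page is funny if it
# contains any known joke phrase. Phrases are illustrative only.
FUNNY_PHRASES = ("walked into a bar", "who's there")

def is_funny(page_text: str) -> bool:
    """Return True if the page contains any known joke phrase."""
    text = page_text.lower()
    return any(phrase in text for phrase in FUNNY_PHRASES)
```

A machine-learned model would replace the phrase list with learned parameters, but the plug-in could call it through the same boolean interface.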
Someone could write a plug-in for web browsers that includes this model. When a new web page is loaded, the plug-in takes the contents of the page, checks it against the model, and displays the appropriate user experience depending on what the model says (the smiley face if the model thinks the page is funny, nothing if the model thinks the page is not funny).
This plug-in is a very simple intelligence implementation.
But ship the plug-in, see users download it, and pretty soon someone is going to want to know if it’s working.
So maybe the plug-in includes a way for users to provide feedback. If they get to the bottom of a page the system said would be funny, but they didn’t laugh, there is a frowny face they can click to let the system know it made a mistake. If they read a page the system didn’t flag as funny, but find themselves laughing anyway, there is a laugh button they can click to let the system know. We might even want to know if users are more likely to read pages that are marked as funny compared to pages that aren’t, so maybe we measure how long users spend on pages we’ve flagged as funny, and how long they spend on ones we haven’t.
The plug-in gathers all of this user feedback and behavior and sends it back to the service as telemetry.
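One way the plug-in might bundle feedback and behavior into a telemetry record is sketched below. The schema (field names, feedback values) is hypothetical, invented for illustration.

```python
import json
import time

def make_telemetry_event(page_url, predicted_funny, user_feedback, dwell_seconds):
    """Bundle one interaction into a telemetry record (hypothetical schema).
    user_feedback is "frowny", "laugh", or None if the user clicked nothing."""
    return {
        "url": page_url,
        "predicted_funny": predicted_funny,
        "feedback": user_feedback,
        "dwell_seconds": dwell_seconds,
        "timestamp": time.time(),
    }

def send_telemetry(event, transport):
    """Serialize the event and hand it to a transport (e.g., an HTTP client)."""
    transport(json.dumps(event))
```

Injecting the transport keeps the sketch testable without a live service behind it.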
Adding telemetry improves the overall intelligence implementation because it lets us answer important questions about how good the intelligence is and how users perceive the overall system—whether we are achieving our objectives. It also sets us up to improve.
Because when the intelligence creators look at the telemetry data, they are going to find all sorts of places where their initial model didn’t work well. Maybe most funny things on the Internet are in images, not text. Maybe people in one country aren’t amused by the same things that people in another country find hilarious. That kind of thing.
The intelligence creators will spend some time looking at the telemetry; maybe they’ll crawl some of the pages where the initial system made mistakes, and build some new models. Then they are going to have a model they like better than the original one and they are going to want to ship it to users.
The intelligence implementation is going to have to take care of this. One option would be to ship a whole new version of the plug-in—the Funny Finder v2.0—which contains the new model. But users of the first version would need to find this new plug-in and choose to install it. Most of them won’t. And even the ones who do might take a long time to do it. This causes the intelligence to update slowly (if at all) and reduces the potential of the intelligence creators’ work. Further, the intelligence might change fast: maybe every week, maybe every day, maybe multiple times per hour. Unless the implementation can get new intelligence to users regularly and reliably, the Intelligent System won’t be very intelligent.
So the implementation might upgrade the plug-in to do automatic updates. Or better, the plug-in might be changed to update just the models (which are essentially data) while leaving all the rest of the code the same. Periodically, the plug-in checks a server, determines if there is some new intelligence, and downloads it.
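The periodic update check might look something like the sketch below. The version-comparison policy and the helper functions for fetching and downloading are assumptions for the example, not a real protocol.

```python
def maybe_update_model(current_version, fetch_latest_version, download_model):
    """Called periodically by the plug-in: if the server advertises a newer
    model version, download just the model data, leaving the plug-in code
    unchanged. Returns (version_in_use, new_model_or_None)."""
    latest = fetch_latest_version()
    if latest > current_version:
        new_model = download_model(latest)
        return latest, new_model
    return current_version, None
```

Passing in the fetch/download functions keeps the update policy separate from the network code that would do the actual transfer.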
Great. Now the implementation runs the intelligence, measures it, and updates it. The amount of code to service the intelligence at this point is probably more than the intelligence itself: a good portion of any Intelligent System is implementation. And this version of the system is complete, and closes the loop between the user and the intelligence, allowing the user to benefit from and improve the intelligence simply by using the system and providing feedback.
But some things are really not funny. Some things are offensive, horrible. We really, really don’t want our system to make the type of mistake that flags highly offensive content as funny.
So maybe there is a way for users to report when our system is making really terrible mistakes. A new button in the user experience that sends telemetry about offensive pages back to our service.
Maybe we build a little workflow and employ a few humans to verify that user-reported-offensive sites actually are terrible (and not some sort of comedian-war where they report each other’s content). This results in a list of “really, truly not funny” sites. The implementation needs to make sure clients get updated with changes to this list as soon as possible. This list could be updated when the model is updated. Or maybe that isn’t fast enough, and the plug-in needs to be more active about checking this list and combining the intelligence it contains with the intelligence the model outputs.
So now the plug-in is updated so that every time it visits a new page it makes a service call while it runs its local model. Then it combines the results of these two forms of intelligence (the server-based intelligence and the client-based intelligence). If the server says a site is “really, truly not funny,” it doesn’t matter what the client-side model says—that site is not funny.
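The override logic, where the server-side "really, truly not funny" list always wins over the client-side model, can be sketched as below. The sentinel value and function signatures are illustrative.

```python
def combined_verdict(url, page_text, client_model, server_lookup):
    """Combine server-based and client-based intelligence: if the server
    flags a site as truly not funny, that overrides whatever the local
    model says. Otherwise, trust the client-side model."""
    if server_lookup(url) == "not_funny_override":
        return False
    return client_model(page_text)
```

In practice the server call would run in parallel with the local model to hide latency, as the text describes.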
By this point the intelligence creators are going to have all sorts of ideas for how to build better models that can’t run in the plug-in. Maybe the new ideas can’t be in the client because they take too much RAM. Maybe they can’t be on the client because they required external lookups (for example, to language translation services) that introduce too much latency in the plug-in. Maybe the plug-in needs to run in seriously CPU-restrained environments, like in a phone, and the intelligence creators just want a bit more headroom.
These types of intelligences may not be runnable on the client, but they may be perfectly usable in the service’s back end.
For example, when a lot of users start visiting a new site, the back end could notice. It could crawl the site. It could run dozens of different algorithms—ones specialized for images, ones that look for particular types of humor, ones tuned to different languages or cultures—and ship the outputs to the client somehow.
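A back end running dozens of specialized analyzers needs some policy for combining their outputs. The simple vote-averaging below is one illustrative choice among many; the book does not prescribe a specific combination rule.

```python
def backend_score(page, analyzers):
    """Run every specialized analyzer (image humor, per-language models,
    humor-type detectors, ...) on a crawled page and average their votes
    into a single score to ship to clients. Averaging is illustrative."""
    votes = [analyzer(page) for analyzer in analyzers]
    return sum(votes) / len(votes)
```

Because this runs server-side, the analyzers can use as much RAM, CPU, and external lookup latency as they need.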
So now the plug-in is combining intelligence from multiple sources—some in the cloud, and some that it executes itself. It is managing experience. It is measuring the results of users interacting with the intelligence and collecting data for improvement. And more.
Not every Intelligent System implementation needs all of these components. And not every Intelligent System needs them implemented to the same degree. This part of the book will provide a foundation to help you know when and how to invest in various components so your Intelligent System has the greatest chance of achieving its goals—and doing it efficiently.

Components of an Intelligence Implementation

An intelligence implementation can be very simple, or it can be very complex. But there are some key functions that each Intelligent System implementation must address:
  • Execute the intelligence at runtime and light up the experience.
  • Ingest new intelligence, verify it, and get it where it needs to be.
  • Monitor the intelligence (and user outcomes) and get any telemetry needed to improve the system.
  • Provide support to the intelligence creators.
  • Provide controls to orchestrate the system, to evolve it in a controlled fashion, and to deal with mistakes.

The Intelligence Runtime

Intelligence must be executed and connected to the user experience.
In order to execute intelligence, the system must gather up the context of the interaction—all the things the intelligence needs to consider to make a good decision. This might include: what the user is doing; what the user has done recently; what the relevant sensors say; what the user is looking at on the screen; anything that might be relevant to the intelligence making its decision.
The context must be bundled up, converted, and presented to the model (or models) that represent the intelligence. The combination of context and model results in a prediction.
The runtime might be entirely in the client, or it might coordinate between a client and a service.
Then the prediction must be used to affect the system, light up the experience—create impact.
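The runtime pipeline just described (gather context, convert it, run the model, light up the experience) can be sketched as a single pass. Every function here is a stand-in for a system component, injected so the pipeline stays independent of any particular model or UI.

```python
def run_intelligence(gather_context, featurize, model, render_experience):
    """One runtime pass: bundle the context, convert it into the form the
    model expects, get a prediction, and use it to drive the experience."""
    context = gather_context()          # user activity, sensors, screen state...
    features = featurize(context)       # convert context for the model
    prediction = model(features)        # the combination yields a prediction
    render_experience(prediction)       # light up (or suppress) the experience
    return prediction
```

The same shape works whether the pieces all live in the client or are split between a client and a service.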

Intelligence Management

As new intelligence becomes available it must be ingested and delivered to where it is needed.
For example, if the intelligence is created in a lab at corporate headquarters, and the runtime needs to execute the intelligence on toasters all across America, the intelligence management system must ship the intelligence to all the toasters.
Or maybe the intelligence runs in a service.
Or maybe it runs partially in a back-end, and partially in the client.
There are many options for where intelligence can live, with pros and cons.
Along the way, the intelligence needs to be verified to make sure it isn’t going to do anything (too) crazy.
And Intelligent Systems usually rely on more than one source of intelligence. These might include multiple models, heuristics, and error-overrides. These must be combined, and their combination needs to be verified and delivered.
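Verifying intelligence before delivery, as mentioned above, often means checking a candidate model against held-out examples and refusing to ship one that regresses. The accuracy threshold below is an illustrative stand-in for whatever checks a real system would run.

```python
def verify_before_ship(candidate_model, validation_set, min_accuracy=0.8):
    """Sanity-check a candidate model on held-out (context, label) pairs
    before delivering it to clients. The 0.8 threshold is illustrative;
    a real gate might compare against the currently shipped model."""
    correct = sum(1 for context, label in validation_set
                  if candidate_model(context) == label)
    return correct / len(validation_set) >= min_accuracy
```

A combined intelligence (models plus heuristics plus error-overrides) would go through the same gate as a unit, since that combination is what users actually experience.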

Intelligence Telemetry Pipeline

Getting the right monitoring and telemetry is foundational to producing an intelligence that functions correctly and that can be improved over time.
Effective monitoring and telemetry includes knowing what contexts the intelligence is running in and what types of answers it is giving. It includes knowing what experiences the intelligence is producing and how users are responding. And an effective telemetry system will get appropriate data to grow new and better intelligence.
In large Intelligent Systems the telemetry and monitoring systems can produce a lot of data—a LOT.
And so the intelligence implementation must decide what to observe, what to sample, and how to digest and summarize the information to enable intelligence creation and orchestration among the various parts of the Intelligent System.
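One common pattern for taming telemetry volume is to keep every rare, high-value event while sampling routine ones at a low rate. The event schema, rate, and "always keep" list below are assumptions made for the sketch.

```python
import random

def should_keep_event(event, sample_rate=0.01,
                      always_keep=("reported_offensive",)):
    """Keep all rare, high-value events (e.g., user reports of offensive
    content); sample everything else at a low rate. The rate and the
    event names are illustrative, not from the book."""
    if event.get("feedback") in always_keep:
        return True
    return random.random() < sample_rate
```

Digesting and summarizing would then happen downstream, over the sampled stream rather than the raw firehose.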

The Intelligence Creation Environment

In order for an Intelligent System to succeed, there needs to be a great deal of coordination between the runtime, the delivery, the monitoring, and the intelligence creation.
For example, in order to produce accurate intelligence, the intelligence creator must be able to recreate exactly what happens at runtime. This means that:
  • The telemetry must capture the same context that the runtime uses (the information about what the user was doing, the content, the sensor output, and so on).
  • The intelligence creator must be able to process this context exactly the same way it is processed at runtime.
  • The intelligence creator must be able to connect contexts that show up in telemetry to the eventual outcome the user got from the interaction—good or bad.
This can be hard when intelligence creators want to innovate on what types of information/context their models examine (and they will; it’s called feature engineering and is a key part of the intelligence creation process). It can be hard when the monitoring system doesn’t collect exactly the right information, leading to mismatches between runtime and what the intelligence creator can see. It can be hard when the intelligence is executed on a type of device different from the intelligence creation device. Maybe the runtime has a different coprocessor, a different graphics card, or a different version of a math library from the ones in the intelligence creation environment.
Any one of these can lead to problems.
An intelligence implementation will create an environment that mitigates these problems, by providing as much consistency for intelligence creators as possible.
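One concrete way to provide that consistency is to share a single feature-extraction function between the runtime and the intelligence-creation environment, so training sees exactly what the model sees at runtime. The specific features below are invented for illustration.

```python
def featurize(context):
    """Single feature-extraction function, shared verbatim by the runtime
    and the intelligence-creation environment. Sharing one implementation
    avoids train/runtime mismatch. Features here are illustrative."""
    text = context.get("page_text", "").lower()
    return {
        "length": len(text),
        "has_bar_joke": "walked into a bar" in text,
        "exclamations": text.count("!"),
    }
```

Telemetry would then capture the raw context (not the features), so intelligence creators can re-run this exact function over it, even as they iterate on feature engineering.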

Intelligence Orchestration

The Intelligent System needs to be orchestrated.
  • If it gets into a bad state, someone needs to get it out.
  • If it starts making bad mistakes, someone needs to mitigate them while the root cause is investigated.
  • As the problem, the intelligence, and the user base evolve, someone needs to be able to take control and tune the intelligence and experience to achieve the desired results.
For example, if the intelligence creation produces a bad model (and none of the checks catch it) the model may start giving bad experiences to customers. The intelligence orchestration team should be able to identify the regression quickly in telemetry, track the problems down to the specific model, and disable or revert the model to a previous version.
If things go badly enough, the intelligence orchestrator might need to shut the whole thing down, and the implementation and experience should respond gracefully.
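The revert-and-shut-down controls described above can be sketched as a minimal versioned model registry with a kill switch. This is an illustrative data structure, not a real deployment system; names and behavior are assumptions.

```python
class ModelRegistry:
    """Minimal sketch of orchestration controls: versioned models that can
    be rolled back, plus a kill switch that degrades gracefully."""
    def __init__(self):
        self.versions = []      # list of (version_name, model) in deploy order
        self.enabled = True     # orchestrator's kill switch

    def deploy(self, version_name, model):
        self.versions.append((version_name, model))

    def rollback(self):
        """Revert to the previous model after a bad deployment."""
        if len(self.versions) > 1:
            self.versions.pop()

    def predict(self, context):
        if not self.enabled or not self.versions:
            return None         # experience must respond gracefully
        return self.versions[-1][1](context)
```

When telemetry shows a regression, the orchestrator calls `rollback()`; if things go badly enough, setting `enabled = False` shuts the intelligence off while the experience falls back to showing nothing.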
Providing good visibility and tools for orchestrating an Intelligent System allows intelligence creation to act with more confidence, take bigger risks, and improve more rapidly.

Summary

An Intelligent System needs to be implemented so that intelligence is connected with customers. This includes:
  • Executing the intelligence and lighting up the intelligent experience.
  • Managing the intelligence, shipping it where it needs to be.
  • Collecting telemetry on how the intelligence is operating and how users are responding (and to improve the intelligence).
  • Supporting the people who are going to have to create intelligence by allowing them to interact with contexts exactly the same way users will.
  • Helping orchestrate the Intelligent System through its life cycle, controlling its components, dealing with mistakes, and so on.
An Intelligent System is not like a traditional program that is completed, shipped, and walked away from. It is more like a service that must be run and improved over time. The implementation of the Intelligent System is the platform that allows this to happen.

For Thought…

After reading this chapter, you should:
  • Understand the properties of an effective intelligence implementation—what it takes to go from a piece of intelligence (for example, a machine-learned model) to a fully functional Intelligent System.
  • Be able to name and describe the key components that make up an implementation of an Intelligent System.
You should be able to answer questions like these:
Consider an activity you do daily:
  • What would a minimalist implementation of an Intelligent System to support the activity look like?
  • Which component—the runtime, the intelligence management, the telemetry, the intelligence creation environment, the orchestration—do you think would require the most investment? Why?