Chapter 10

Predicting the Future of Augmented Intelligence

Introduction

We are still at an early stage in the evolution of machine learning, artificial intelligence (AI), and augmented intelligence. The key questions that organizations are asking are:

  • Will AI take over the world and replace many of the jobs that people do?

  • Will AI-driven systems be able to think for us?

  • Will we be able to codify and leverage the knowledge in our organizations to enable professionals to do their jobs better?

Not surprisingly, the current business market for emerging technologies in the fields of AI and machine learning (ML) is quite confusing. The hype around the potential of AI and machine learning has risen to a level where some leaders assume that emerging technologies will be capable of automating every ordinary process a human can perform. Some experts have conjectured that deep learning systems will be designed to think like a human. Other experts assume that automated systems and robots will have the intelligence and ability to learn so that they will displace the vast majority of functions and jobs that people currently perform. In this chapter, we take a look into the future and provide our top predictions of what we can expect on the journey to augmented intelligence.

The Future of Governance and Compliance

Governance and compliance requirements will be built into models to satisfy governmental regulations and support the needs of management. Industry leaders will work with regulators to help codify rules and limitations on augmented intelligence systems. In addition, auditors will need to gain visibility into augmented intelligence systems to ensure that they are performing as expected. These governance capabilities will be designed to alert teams when something looks out of line with required compliance rules. However, automated functions will not prevail in every situation. When an alert is triggered, the system will provide guidance that experts can rely on to make better-informed decisions. At the same time, teams will also need to develop their own rules and follow their intuitions as well as their understanding of their areas of expertise.

Regulation of any AI system is an emerging area because corporations and governments cannot assume consistent rules and policies due to constant changes in technology and business. In addition, human regulation relies on teams to anticipate problems and to respond when unanticipated problems occur. So new regulations need to be supported as they emerge (potentially via a software update) and added to company-specific standards. Business rules will be able to specify when an automated response to a potential violation should be triggered and when, alternatively, an alert should be sent to a human agent to make a decision on a remedial action. Furthermore, teams will always need to be aware of outcomes that seem suspicious. In many cases, an alert will not be generated, so it is important that the team examine the results of automated systems.
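The pattern of business rules that either trigger an automated response or escalate to a human agent can be sketched in a few lines. This is a hedged illustration, not a real compliance engine: the rule names, predicates, and threshold are invented, but the sketch shows how new regulations could be added as new rules (potentially via a software update).

```python
# Each rule declares whether a violation can be remediated automatically
# or must be escalated to a human agent for a decision on remedial action.
# Rule names and thresholds are invented for illustration.
RULES = [
    # (name, predicate on a transaction, automated?)
    ("missing_consent", lambda t: not t.get("consent"), True),
    ("large_transfer",  lambda t: t.get("amount", 0) > 10_000, False),
]

def evaluate(transaction):
    """Return a list of (rule, action) pairs triggered by a transaction."""
    findings = []
    for name, predicate, automated in RULES:
        if predicate(transaction):
            action = "auto_remediate" if automated else "alert_human"
            findings.append((name, action))
    return findings

print(evaluate({"consent": False, "amount": 25_000}))
```

Because the rules live in data rather than in the model itself, a company-specific standard can be appended to the list without retraining anything.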

Professionals in all fields are asked to make decisions based on their knowledge and experience. An augmented system can support those decisions by providing research findings and best practices to the professional. There are times when the decision seems obvious. However, the obvious choice is not always the right answer. Future requirements include having an augmented system look for anomalies related to governance and privacy regulations, detect biases, and determine whether an AI-based decision contradicts a business best practice.

Emergence of Different Jobs

Will the hybrid collaboration of humans and machines result in fewer jobs for people? This is a subject upon which opinions differ widely. But what’s clear is that the nature of work is already changing and will continue to change, reflecting the tradeoffs between what machines do best and what people do best. When machines automate more routine tasks, human experts can focus more on handling exceptions. For example, a call center bot handles frequently occurring questions rapidly and precisely. But the bot must be configured to hand off more complex and more risky questions to human agents. Likewise, an auditing bot can handle huge workloads, such as reviewing all transactions for signals of irregularities. But the bot hands off tasks to human auditors to investigate potentially fraudulent cases. When humans handle exceptions, they must get an informative alert with context from the machine, often with a recommendation on how to proceed. This is the future of augmented intelligence.
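The bot-to-human handoff described above can be made concrete with a small sketch. The topics, canned answers, and risk labels are invented for illustration; the point is that an escalation carries an informative alert with context and a recommendation, not just a raw transfer.

```python
# Routine, low-risk questions are answered by the bot; complex or risky
# ones are escalated to a human agent with context and a suggested next
# step. Topics and answers are invented for illustration.
ROUTINE_ANSWERS = {"store_hours": "We are open 9am-5pm.",
                   "return_policy": "Returns accepted within 30 days."}

def handle_question(topic, risk):
    if topic in ROUTINE_ANSWERS and risk == "low":
        return {"handled_by": "bot", "answer": ROUTINE_ANSWERS[topic]}
    # Escalate with an informative alert, including a recommendation.
    return {"handled_by": "human",
            "alert": {"topic": topic, "risk": risk,
                      "recommendation": "review account history first"}}

print(handle_question("store_hours", "low")["handled_by"])       # bot
print(handle_question("billing_dispute", "high")["handled_by"])  # human
```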

There will be many new jobs that do not exist today. With the advance of augmented intelligence into more domains, there will be a greater need for regulatory, governance, and ethical frameworks. Machines are not the source of such standards—that is the responsibility of humans. The many jobs that will be needed to manage augmented intelligence and handle machine-discovered exceptions have not existed before.

We cannot be sure at this point whether the jobs lost will exceed the jobs gained. Most likely, the largest category of all will be the jobs that are transformed via augmented intelligence, resulting in a far different mix of tasks than ever before. Those people who do not adapt to the changed nature of work will be the people at the greatest risk of loss of employment. But the greatest challenge for society, from a labor perspective, will be the massive training of those who are displaced by intelligent systems to fill the new jobs that augmented intelligence makes available.

Machines Will Learn to Train Humans

In the future, it will be possible to have a model observe employee actions and look for ways to improve them. However, this capability will not become a ubiquitous process in the workplace, because of cultural issues. Observing people’s actions and training them to perform with “better results” may make workers feel that they are devalued. On the other hand, workers may be more willing to have a system suggest process improvements that will help their work become more successful.

Imagine that you can use ML to train humans based on their data. There are some areas in which this approach could be non-threatening. For example, think about programs that help train an individual to become a better chess player. You play a chess game against the computer. The computer analyzes your moves and then explains how you could have made different moves in order to get better results. As you continue to play, the program provides interactive advice as you progress. This same process can be applied to any field in which employees’ actions can be observed in order to improve the performance of tasks, and in which employees welcome ways to improve their performance. In essence, the system becomes an intelligent tutor.

This will lead to new applications that guide students based on the way they learn. The augmented intelligence system will be able to judge the areas in which the student needs additional help and will guide the learning process. Therefore, the system will first have to diagnose the expertise of the learner and then present the most appropriate lessons in the right sequence. In an augmented system, humans will help to determine the areas of performance in which employees might be open to new training. Business leaders will work in collaboration with employees to identify these areas of potential improvement. Thus, machine learning tools will be used in collaboration with the people making decisions about where best to utilize those tools. This sort of collaboration between different teams of humans and the augmented AI system will result in the best outcome for the business.
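The diagnose-then-sequence behavior of an intelligent tutor can be sketched as a toy: assess the learner from scored exercises, then present the weakest unmastered skill first. The skill names, scores, and mastery threshold are invented for illustration.

```python
# A toy intelligent tutor: diagnose the learner, then sequence lessons
# so the weakest skill comes first. Skills and scores are invented.
def diagnose(scores):
    """Rank skills from weakest to strongest based on exercise scores."""
    return sorted(scores, key=scores.get)

def next_lesson(scores, mastered_threshold=0.8):
    for skill in diagnose(scores):
        if scores[skill] < mastered_threshold:
            return skill          # present the weakest unmastered skill
    return None                   # everything is mastered

student = {"openings": 0.9, "endgames": 0.4, "tactics": 0.6}
print(next_lesson(student))       # the weakest skill: endgames
```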

New Techniques for Identifying Bias in Data

Emerging tools will support management’s ability to identify biases in data that may not be apparent. Often business decision makers don’t even realize that their decisions are biased. They assume that the data reflects a consistent, acceptable, and predictable model of reality. However, the data itself may be biased because it relies on previous and current data and best practices that are themselves biased against a certain group or organization.

A new generation of tools will be able to help recognize bias and recommend changes, such as adding new data sources and removing data sources that are biased. Recommendations are straightforward in cases in which a model is inspectable and bias can be detected directly. But in cases where a model is not inspectable (as is typically the case), there must be support to evaluate the outcomes of automated decisions or recommendations made by the model, determining whether the actual decisions or recommendations reflect differential, adverse treatment of a protected class. These protected groups of people are defined in anti-discrimination laws such as the US Civil Rights Act of 1964, as well as local regulations, and include groups within a total population based on characteristics such as age, gender, race, or national origin. Often biases are shared across an organization, perhaps unwittingly, and so the recognition of bias is even more difficult. Businesses will need new ways to evaluate their standards and processes to guard against hard-to-recognize biases.
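One simple outcome-based check compares a model’s favorable-outcome rates across groups, which works even when the model itself is not inspectable. The sketch below is hedged: the group names and decision lists are invented, and the 0.8 cutoff follows the “four-fifths rule” used in US employment-discrimination guidance.

```python
# When a model is a black box, we can still audit its outcomes:
# compare each group's favorable-outcome rate to the best-treated
# group's rate, and flag groups that fall below a threshold.
def adverse_impact_ratio(outcomes):
    """outcomes maps group name -> list of binary decisions (1 = favorable)."""
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],   # 75% favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% favorable
}
ratios = adverse_impact_ratio(decisions)
flagged = [g for g, r in ratios.items() if r < 0.8]  # four-fifths rule
print(flagged)  # group_b falls below the threshold
```

A flagged group does not prove discrimination by itself, but it tells the team where a human review of the model and its data sources is warranted.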

Emerging Techniques for Understanding Unlabeled Data

Today, it is not easy to understand and interpret unlabeled data because there is no gold standard for building a model based on unlabeled data. In the future, there will be new techniques that help data scientists understand models created from unlabeled data. This innovation will accelerate the ability of organizations to make use of unstructured data to better understand the context of information for decision making. These systems will not appear soon, however. Current research is focused on making progress on this topic, but so far there have been no compelling results. Bringing research from the lab to the business world can be a slow process, so although we predict that results in this area will come about, it will not be in the next few years.

Emerging Techniques for Training Data

New techniques will emerge that make using data to create new models faster and more efficient. One of the most complex tasks for organizations is to have enough of the right data to accurately train a model. Emerging techniques will provide pretrained models that most closely reflect the type of model being developed. Once created, these pretrained models can be updated to reflect the nuances of the specific problem being addressed.

This scheme works as follows. Typically, a model is trained to handle a more general problem. That same model can then be modified to handle a more specialized version of the general problem. The benefit is that the specialized version can be built with less new data. You only need to handle the differences that distinguish the specific case from the general situation that the original model was trained on. By providing the data based on the special case, the more general model can learn from its previous data plus the data for the special case. For example, suppose a clothing retailer has a model for recommending outerwear to customers who purchase certain clothing. The general recommendation model can be customized to be more specific during a promotional period, recommending only shoes and boots. Of course, this approach only makes sense if you know what the general problem is, and if you know that your problem is really a special case of the general problem. If the model were inspectable, this knowledge would be relatively straightforward to obtain, but in the case of models from black-box algorithms, you have to do a lot more guessing about the shape of the general model and whether your problem really is a special case of it. However, people working with the general model are likely to come to know what it does well, and so the human-in-the-loop can provide the assessment needed to judge what the general model really does.
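The retailer example can be sketched with a toy recommender: a “general” model (here just co-purchase counts) is reused and updated with a small amount of promotion-specific data, rather than being retrained from scratch. All product names and counts are invented; a real pretrained model would be a neural network whose weights are fine-tuned the same way in spirit.

```python
# A toy stand-in for transfer learning: reuse general knowledge
# (co-purchase counts) and specialize it with a little new data,
# restricted to the items allowed during the promotion.
from collections import Counter

# "General model": learned from a large purchase history (invented).
general = Counter({"coat": 40, "scarf": 25, "boots": 20, "shoes": 15})

def specialize(base, new_data, allowed):
    """Keep only allowed items from the general model, then update
    with the small amount of special-case data."""
    model = Counter({k: v for k, v in base.items() if k in allowed})
    model.update(new_data)          # only a little new data is needed
    return model

promo = specialize(general, Counter({"boots": 10, "shoes": 12}),
                   {"boots", "shoes"})
print(promo.most_common(1)[0][0])   # top promotional recommendation
```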

Reinforcement Learning Will Gain Huge Momentum

Although much of the focus in recent years has been around deep learning, deep reinforcement learning (RL) is emerging as a powerful technique that combines neural network modeling with reinforcement learning. The power of reinforcement learning comes from its ability to have a system learn to take its next action based on trial and error. It is a powerful technique when you need to determine a series of actions required to achieve a goal or reward. This technique has been successfully applied to games wherein the player takes an action and then must respond to the next action taken by another player. Two examples of where RL is commonly used in a business context are marketing and customer service. In marketing, reinforcement learning can help determine the next step to take as a customer or prospect progresses down the path toward a sale. In customer service, the system helps guide a service agent to the next best action to take when interacting with a customer. Deep learning provides the ability to analyze and learn from layers of hidden patterns. Combining reinforcement learning with neural networks could provide a much richer platform, one that understands context and learns from actions in order to transform business processes based on experience.
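The trial-and-error loop at the heart of RL can be shown with a tiny tabular Q-learning sketch of “next best action” in a customer journey. The states, actions, rewards, and dynamics are all invented for illustration; deep RL replaces the table with a neural network, but the learning loop is the same.

```python
# Tabular Q-learning on an invented customer journey:
# emails engage prospects, calls convert engaged customers.
import random

states = ["prospect", "engaged", "sale"]
actions = ["send_email", "offer_call"]

def step(state, action):
    if state == "prospect" and action == "send_email":
        return "engaged", 0
    if state == "engaged" and action == "offer_call":
        return "sale", 10          # reward only on conversion
    return state, -1               # wasted action

Q = {(s, a): 0.0 for s in states for a in actions}
random.seed(0)
for _ in range(500):               # trial-and-error episodes
    s = "prospect"
    while s != "sale":
        a = random.choice(actions)            # explore at random
        s2, r = step(s, a)
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += 0.1 * (r + 0.9 * best_next - Q[(s, a)])  # Q-update
        s = s2

# The learned policy: the best next action from each non-terminal state.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states[:2]}
print(policy)
```

After enough episodes the table learns the invented dynamics: email a prospect, then call once the prospect is engaged.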

The use of reinforcement learning to make business decisions has left the research lab and is in use in business. For example, this model can be applied to the loan industry, wherein an algorithm can help determine the best series of steps to follow to successfully encourage a person to pay back a debt.

Using deep neural networks to understand the policies (rules) chosen in RL would have huge implications for business, since it could help management understand why the RL algorithm created a specific rule. However, this area of research has barely started; if it succeeds, it will be very useful.

New Algorithms Will Improve Accuracy

The emergence of new algorithms will improve the accuracy of machine learning models. Currently, there are more than 40 key machine learning algorithms widely used for a variety of applications in science and business. Because organizations want to be able to integrate vision, speech, sound, and smell into their models, new algorithms will be developed, or combinations of existing ones used, that understand the nuances of these data types. One example is OpenAI’s new algorithm, called GPT-2, which is designed for language modeling and makes use of a program’s ability to predict the next word in a given sentence. This capability increases the ability to generate sentences and stories. Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt. Although this very recently developed algorithm does not integrate vision, speech, and so on, it indicates that new techniques developed over the next 10 years might have rather surprising capabilities.
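GPT-2 itself is a large neural network, but the task it is trained on, predicting the next word, can be illustrated with a toy bigram model: count which word follows which, then predict the most frequent successor. The tiny corpus here is invented for illustration and bears no resemblance to GPT-2’s training data.

```python
# A toy next-word predictor: count observed successors of each word,
# then predict the most frequent one. Real language models do the same
# task with neural networks over vast corpora.
from collections import Counter, defaultdict

corpus = "to be or not to be that is the question to be".split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1           # count each observed successor

def predict_next(word):
    return follows[word].most_common(1)[0][0]

print(predict_next("to"))   # the most common word after "to"
```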

Distributed Data Models Will Protect Data

One of the issues that organizations have to grapple with is the need to move sensitive data outside of their organization in order to execute machine learning models. Techniques that enable a business to move the model to the data rather than moving the data will provide more secure methods of protecting data security during analytic processing. An emerging approach is data virtualization. Data virtualization allows organizations to manage data access and manipulate and query data without having to move the data into a single repository or warehouse. In essence, data virtualization is a peer-to-peer architecture whereby queries are broken down and sent closer to the data sets. After all the subqueries are processed, results are combined along the way, thus eliminating the application entry point/service node as the bottleneck. Data virtualization allows organizations to analyze data where it resides rather than requiring that the data be moved to a different location.
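The subquery-and-combine idea behind data virtualization can be sketched conceptually: a query is split into subqueries that run next to each data set, and only small partial results travel back to be combined. The two “sources” below are in-memory stand-ins for remote systems, with invented data.

```python
# A conceptual sketch of data virtualization: raw rows never move;
# each subquery runs near its data set and returns only an aggregate,
# and the partial results are combined along the way.
def source_a_subquery():
    rows = [120, 80, 200]                 # invented local transaction amounts
    return {"count": len(rows), "sum": sum(rows)}

def source_b_subquery():
    rows = [50, 150]
    return {"count": len(rows), "sum": sum(rows)}

def combined_average(partials):
    """Merge partial aggregates into a global answer."""
    total = sum(p["sum"] for p in partials)
    count = sum(p["count"] for p in partials)
    return total / count

print(combined_average([source_a_subquery(), source_b_subquery()]))  # 120.0
```

Because only counts and sums cross the network, the sensitive rows stay where they reside, which is exactly the security benefit the approach promises.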

Explainability Will Become a Requirement

Providing guidance to experts through an augmented intelligence system requires that developers and business management understand how the results were arrived at and the level of confidence the model has in those results. There is a huge risk if an expert simply accepts a conclusion or answer blindly. Machines are only as good as the developers of the model. In the future, therefore, models will have greater transparency, or inspectability, so that there is an explanation for how the model reached its conclusion. This is necessary for dealing with legal challenges to decisions based on a model, where a consumer would seek to understand why he or she received a particular score that impacted credit or hiring.
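For an inspectable model, an explanation can be as direct as reporting each feature’s contribution to the score. The linear scoring sketch below is a hedged illustration; the feature names and weights are invented and do not describe any real credit-scoring system.

```python
# With a linear model, each feature's contribution to the score can be
# reported directly: the kind of explanation a consumer challenging a
# credit decision would need. Weights and features are invented.
WEIGHTS = {"payment_history": 0.5, "utilization": -0.3, "account_age": 0.2}

def score_with_explanation(features):
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    score = sum(contributions.values())
    # List the biggest drivers of the score first.
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, explanation

score, why = score_with_explanation(
    {"payment_history": 0.9, "utilization": 0.8, "account_age": 0.5})
print(round(score, 2), why[0][0])  # the score and its largest driver
```

Black-box models need extra machinery (surrogate models, outcome audits) to produce anything comparable, which is why inspectability is likely to become a design requirement rather than an afterthought.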

Linking Business Process to Machine Learning Models

The next stage in model creation is to have a way to link machine learning models for related business processes. For example, an insurance business that sells both mortgages and car loans may be able to link those models so that it is easier to make decisions based on how data about an individual or organization is related. If an individual has defaulted on a mortgage and is asking for a car loan, linking these models will help decision makers evaluate risk. The revelations from linking models will help developers evolve the models so that they are more accurate. An important concept to consider is that many businesses have over-relied on single sources of data, or data aggregated from similar sources, to link data.

For example, a business that extends credit to customers, such as a furniture store, an auto dealership, a landlord, or a financial institution, often relies on an individual’s credit score. A person’s credit score, which was intended to predict the likelihood that a borrower will repay a loan, has now also become a measure of an employee’s level of responsibility. Checking a potential employee’s credit score is now a common practice among many businesses. This expanded use of a credit score has penalized perfectly good customers or employees, and it might be seen as an unfair business decision. Regulatory bodies have been slow to respond to the broadening use of data such as credit scores. In general, the extension of the intended use of a data aggregate to another domain is hard to control (for example, the use of a drug for a non-intended purpose) and is a problem for the model developer who did not authorize the use but could be held responsible for the consequences.

Summary

We are at an exciting inflection point in the movement toward artificial intelligence and machine learning. Augmented intelligence has the potential to put these technologies to work in a way that opens up huge opportunities to create a hybrid collaboration between humans and machines for solving real-world challenges. There will be a variety of new techniques that will help augmentation become more reliable and predictable. Although some of these predictions are around the corner, other techniques will take time and attention before they become mainstream.
