Introduction: The Right Brain in the Right Place

The great feat of modern AI is programming algorithms that can adapt and change their behavior based on feedback. The goal of Designing Autonomous AI is to show you how to put these learning algorithms to work by teaching AI to make successful decisions in real, production environments.

What is Autonomous AI?

Autonomous AI is AI-powered automation that optimizes equipment and processes by sensing and responding in real time.

I consulted for a company that uses Computer Numerical Control (CNC) machines to make cell phone cases. Spinning tools cut metal stock into the shape of the phone. After each case is cut, the CNC machine door opens. A robotic arm loads the finished part onto a conveyor, then grasps the next part from a fixture and loads it into the CNC machine to be cut. If the part is not oriented in the fixture at precisely the right angle, or if the arriving case is even a little wider or narrower than expected, the robot arm will fail to grasp the part or will drop it before it reaches the CNC machine. The automated system is inflexible. An automated system is one that makes decisions by calculating, searching, or lookup. The robot arm controller was programmed by hand to travel from one fixed point to another and perform the task in a very specific way. It succeeds only if the phone case is the perfect width and sits in the fixture at the perfect angle.

Figure 1-1. Width and fixture angle variations that might challenge an automated system during a cell phone case manufacturing process

This organization needs more flexible and adaptable automation that can control the robot arm to successfully grasp cases of a wide variety of widths from a range of fixture orientations. This is a great application for Autonomous AI. Autonomous AI is flexible and adapts to what it perceives. For example, it can practice grasping cases of various widths that sit in the fixture at various angles and learn to succeed in a much wider variety of scenarios.
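
To make the contrast concrete, here is a minimal, purely illustrative sketch; the function names, tolerances, and adjustments are hypothetical, not taken from the real CNC cell. The automated routine replays a fixed motion and only works under its narrow programmed assumptions, while the autonomous policy takes what it perceives (width and fixture angle) as input and adapts its grasp.

```python
# Purely illustrative sketch contrasting a fixed, automated grasp routine
# with an adaptive policy that responds to what it perceives.
# All function names, numbers, and tolerances are hypothetical.

NOMINAL_WIDTH_MM = 75.0
NOMINAL_ANGLE_DEG = 0.0

def automated_grasp(width_mm: float, angle_deg: float) -> bool:
    """Automated system: replays a pre-programmed, point-to-point motion.

    It succeeds only when the part matches the exact conditions it was
    programmed for; otherwise it misses or drops the part.
    """
    within_tolerance = (
        abs(width_mm - NOMINAL_WIDTH_MM) <= 0.5
        and abs(angle_deg - NOMINAL_ANGLE_DEG) <= 1.0
    )
    return within_tolerance  # the fixed motion only happens to fit this case

def autonomous_grasp(width_mm: float, angle_deg: float) -> bool:
    """Autonomous AI: senses the part and adapts the grasp in response.

    A real brain would learn this mapping by practicing in simulation;
    an explicit adjustment stands in for the learned policy here.
    """
    gripper_opening_mm = width_mm + 2.0   # open wider for wider cases
    wrist_rotation_deg = angle_deg        # rotate to match the fixture angle
    return perform_grasp(gripper_opening_mm, wrist_rotation_deg)

def perform_grasp(opening_mm: float, rotation_deg: float) -> bool:
    # Placeholder for the robot motion: assume success when the gripper
    # opening and wrist rotation match the perceived part.
    return True

# A case that is slightly wide and tilted defeats the automated routine
# but not the adaptive one.
print(automated_grasp(76.2, 3.0), autonomous_grasp(76.2, 3.0))  # False True
```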

Figure 1-2. Width and fixture angle variations that an Autonomous AI might learn to adapt to for a cell phone case manufacturing process

Figure 1-3. Table of examples showing the difference between Machine Learning perception and AI decision making for several scenarios

The key differentiating factor of Autonomous AI brains (I will often refer to a specific instance of Autonomous AI as a brain) is that they can respond flexibly and adaptably to what they perceive is happening, much more than automated systems can. This allows Autonomous AI to make more human-like decisions and address a variety of common problems in manufacturing, logistics, and other areas.

Tip

An AI brain is an instance of Autonomous AI that has learned to perform a specific task.

This does not mean that Autonomous AI brains will replace human decision making. Sometimes Autonomous AI is used to directly control equipment, like drones or robotic arms, to perform tasks that are difficult for humans. Sberbank faced one such task: it needed a machine that could identify and grasp bags of coins and place them on a table for counting, a much more difficult task than you might expect. People at the bank were suffering repetitive stress injuries from lifting bags from the delivery carts onto the table where they counted the currency. The problem with automating this process is that bags of coins are flexible, which makes their features harder to see and makes them much harder to grasp than rigid objects. Automated systems cannot translate visual input into real-time control without extensive custom programming. AI and robotics researchers built the bank an Autonomous AI that practiced identifying and grasping bags in simulation and learned to grasp bags autonomously. It succeeds 97% of the time in real life, and it was designed using the principles in this book.

Other times, Autonomous AI is used to take over one function from an automated system while a human retains control of other functions. When a bulldozer driver “cuts” the dirt on a construction site so that it is flat and ready for construction, an automated system lifts and lowers the bulldozer blade to keep the cut flat. The automated system, which is based on technology that the US Navy invented in 1912, works very well for the kinds of dirt it was tuned to handle, but it doesn’t yield a flat cut when the dirt is too sandy, too wet, or too gravelly for what it was programmed to handle. When the operator arrives at the construction site, she re-tunes the controller if she finds the dirt to be outside the range that the controller will handle well. Bulldozers have to handle all sorts of terrain, but the automated systems built into them cannot handle the wide range of dirt conditions without manual calibration. The Autonomous AI brain built for this task learned to control multiple different bulldozer models (this is unheard of in industrial controls!), lifting and lowering the blade for a flat cut across many different types of dirt. It learned by practicing in simulation and responding to feedback.

Other times, brains train humans and help them make better decisions. Ashe Menon, an executive from NOV, asked me to design a brain to improve their CNC processes. Ashe didn’t want AI to replace people. In fact, he looked out into his community and saw young people doing repetitive, low-wage work who could have been building careers in CNC manufacturing, if only he had an AI brain that could help them succeed at the job. He wanted a brain that expert machinists could “download their experience” into. The experts get better results than engineers using automated recommendation software. Automated systems, unlike Autonomous AI, cannot use senses like visual perception or sound to control equipment or processes. The brain that we designed uses the sounds that the spinning tools make as they cut to determine how to control the equipment, just like human machinists do.

Looking for answers to a changing world

An executive from a global steel company asked me to travel to Indiana to examine part of their steel-making operation, determine where an AI brain could help, and design that brain. We arrived at the steel mill early in the morning and met with the site CTO, who pointed us to the building that housed the process he wanted us to focus on and gave us a bit of direction. Then we put on protective shoes, hard hats, and metal sleeves and went in for a tour. The foreman came out to meet us and took us on a tour of a “building” that could fit many tall buildings inside of it and was many city blocks wide and long. This was the last phase of the steel-making process, where a strip of steel rolled between what looked like paper towel rolls, through a furnace to temper it, and finally through a bath of molten zinc to protect it from rust.

We talked to the operator in each control room (the control rooms at steel mills are called pulpits), and I interviewed them about how they make decisions to run the machines: what information they use to make each decision and how they operate the machines differently under different scenarios. Then our hosts whisked me away to a research center, where I reported to the Chief Digital Officer (CDO) and a group of researchers my recommendations about which decisions AI could help improve. I recommended the galvanization step, the last step of the process.

Figure 1-4. Photograph of a steel mill

The operators control the coating equipment in real time to make sure that the zinc coating is even and the correct thickness. This job used to be a lot easier when the plant made most of its steel the same thickness, the same width, and with the same coating thickness for the big three US auto manufacturers. Now, many more customers ask for many different thicknesses and widths of steel for heating ducts, construction, and all kinds of other things, so the operators were having a hard time keeping the coating uniform and the thickness correct across all these variations. Some customers required wide, thin steel with a thin coating; others ordered narrow, thick steel with a thick coating; and there was everything in between. The world of steel manufacturing changed, and this company was looking to Autonomous AI for answers.

This company, and most of the hundreds of companies that have asked me to consult with them about AI, are facing a difficult situation. Their business environments (customers, markets, processes, equipment, and workers) are changing, and they are struggling to adapt their decision making in response. Often, their automated systems, which were built to automate repeatable, predictable processes, cannot change their programmed behavior in response to these changing environments. As conditions change, these systems make worse decisions, and sometimes they are taken out of service altogether because their decisions are no longer relevant or high quality.

Problems need solutions, not AI

Humans and automated systems are reaching the limit for improvements they can make to industrial processes. So, enterprises are turning to AI for answers. Unfortunately, much of the discussion about AI focuses on AI as fiction (overhyped and over-promised capability) or science fiction (whether AI will ever reach superintelligence and if it does, what are the philosophical and ethical implications). Neither of these discussions help organizations improve their operations. What enterprises need instead is a playbook for how to design useful AI into autonomous systems where it can make decisions more effectively than humans or automated systems.

When I first started designing Autonomous AI, I pitched “a new form of AI” that was different from other kinds of AI and Machine Learning. I quickly realized that the companies I consulted didn’t care about AI. They sought technology with unique capabilities to control and optimize their high-stakes business processes better than their existing solutions could. They cared that their operators and automated control systems were effective, but struggled to deliver additional process improvement. They understood that control and optimization technology is always evolving and that Autonomous AI is simply an evolution of control and optimization technology with unique differentiating characteristics.

What can AI do for me in real life?

The AI Index Report cites that over 120,000 AI-related peer-reviewed academic papers were published in 2019. More than a few of these papers were highly publicized in the press. Some call this the “Research to PR Pipeline” because of how companies shuttle research breakthroughs straight from the laboratory to the press in announcements. While it’s great to have access to cutting-edge research, this research-to-PR pipeline can make it seem that every new algorithm is ready to solve real-life problems. The challenge is that people and process concerns, combined with the uncertainties of real-life production processes, render many algorithms that seem very promising in controlled laboratory experiments practically useless. Let me give you an example.

A major US rental car company came to us asking whether AI could help them schedule the daily delivery of cars between their locations. Every day, in most major cities, about a dozen drivers shuttle cars from the rental outlet where they were dropped off to rental locations where they are needed for pickup. A human scheduler plans the routes for each driver to deliver the right vehicles to the right place. Those familiar with a field called Operations Research, which is very active in research on logistics and delivery problems, might call this the “Vehicle Routing Problem.” Then, they might tell you that there are various optimization algorithms that can search and find the “optimal” set of routes for each driver so that together, the drivers travel the shortest distance. So, what’s the problem? Why would this company be using human schedulers? Don’t they know about Dijkstra’s algorithm for finding routes that travel the shortest total distance? Wait a minute. It’s not that simple.

Dijkstra’s shortest-path algorithm searches possible routes and schedules routes for each driver that place each stop as close together as possible. So, if you are in a city where the best policy is to always schedule each next stop as close as possible, Dijkstra’s algorithm will give you the best possible answer every time. Here’s the problem. For most metropolitan cities, the determining factor for the time each trip leg takes is traffic, not distance. But Operations Research defines the vehicle routing problem without considering traffic. There are plenty of situations where the next best stop is not the closest one because of bad traffic conditions. This is especially true during rush hour. Each city has unique traffic patterns, and traffic varies based on a number of factors. Dijkstra’s algorithm doesn’t consider traffic at all, and it doesn’t change its scheduling behavior based on any of the factors that dictate traffic patterns. So, even if every rental car company knows how to program and utilize Dijkstra’s algorithm, it won’t effectively replace human route schedulers.
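
Here is a small, hedged sketch of that gap. The toy road network, distances, and rush-hour travel times below are invented for illustration only, but running a standard Dijkstra's search over them shows how the route that is shortest by distance stops being the quickest once traffic enters the picture.

```python
import heapq

# Hypothetical toy network: each edge weight is (distance_km, rush_hour_minutes).
# The numbers are invented purely to illustrate the distance-vs-traffic gap.
GRAPH = {
    "depot":     {"downtown": (3, 25), "ring_road": (8, 10)},
    "downtown":  {"airport": (10, 40)},
    "ring_road": {"airport": (9, 12)},
    "airport":   {},
}

def dijkstra(graph, start, goal, weight_index):
    """Standard Dijkstra's shortest-path search.

    weight_index selects which edge weight to minimize:
    0 = distance in km, 1 = rush-hour travel time in minutes.
    """
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weights in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(
                    frontier, (cost + weights[weight_index], neighbor, path + [neighbor])
                )
    return float("inf"), []

# Shortest by distance routes through downtown...
print(dijkstra(GRAPH, "depot", "airport", weight_index=0))  # (13, ['depot', 'downtown', 'airport'])
# ...but during rush hour the ring road is far quicker.
print(dijkstra(GRAPH, "depot", "airport", weight_index=1))  # (22, ['depot', 'ring_road', 'airport'])
```

The point isn't that Dijkstra's search is wrong; it answers a different, traffic-free question than the one the human schedulers actually face every morning.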

Instead, here’s a brain that might better adapt to traffic patterns than Dijkstra’s algorithm. Figure 1-5 below is just an example, not one that was designed for a real company, but using the techniques in this book, you can easily design similar brains and modify this brain design for similar applications.

Figure 1-5. Example brain for real-time scheduling of rental car deliveries in a major city

The example brain in Figure 1-5 works like Uber dispatch. Each time a driver arrives and delivers a car, the brain decides which location the driver should deliver their next car to. The goal is to deliver all cars to the locations where they are needed in the least amount of time.

Here’s how to read the brain design diagram. The yellow ovals represent the input and the output of the brain. The brain receives information about traffic, vehicles that need to be delivered, and delivery locations. For example, through its input node the brain might receive information that it is Wednesday during the morning rush hour commute, that 5 cars have been delivered so far, and that 98 cars still await delivery for the day. The modules represent skills that the brain learns in order to make scheduling decisions. We design a Machine Learning module into the brain (represented by a green hexagon) to predict the trip length to each possible destination based on traffic patterns for the city. This module works a lot like the algorithms in Google and Apple Maps that predict how long each trip will take. The blue rectangle represents an AI decision-making module that determines which destination to route the driver to. See “Visual Language of Brain Designs” in chapter 2 for more details on how we visually represent brain designs. The brain learns to make scheduling decisions that better adapt to traffic patterns and create schedules that deliver the daily stable of cars more quickly than Dijkstra’s algorithm.
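
As a rough sketch of that structure (the names, numbers, and scoring logic below are hypothetical stand-ins; in a real brain both modules would be learned from data and practice), the predictor module estimates trip times from traffic features and the decision module uses those estimates, plus which locations still need cars, to pick the next destination:

```python
# Minimal sketch of the two-module structure in Figure 1-5.
# Both modules here are hand-written stand-ins for learned components.

from dataclasses import dataclass

@dataclass
class State:
    day_of_week: str
    hour: int
    cars_delivered: int
    cars_waiting: dict  # destination -> number of cars still needed there

def predict_trip_minutes(origin: str, destination: str, state: State) -> float:
    """Machine Learning module: predicts trip time from traffic patterns.

    A trained model (like the ones behind mapping apps) would go here;
    this stand-in simply inflates travel times during rush hour.
    """
    base_minutes = 20.0
    rush_hour = state.hour in (7, 8, 9, 16, 17, 18)
    return base_minutes * (2.0 if rush_hour else 1.0)

def choose_destination(origin: str, state: State) -> str:
    """Decision-making module: picks where the driver delivers next.

    A learned policy would trade off predicted trip time against which
    locations most urgently need cars; this stand-in scores each option.
    """
    def score(dest: str) -> float:
        trip = predict_trip_minutes(origin, dest, state)
        need = state.cars_waiting.get(dest, 0)
        return trip - 2.0 * need  # prefer short trips to high-need locations

    candidates = [d for d, n in state.cars_waiting.items() if n > 0]
    return min(candidates, key=score)

state = State("Wednesday", 8, 5, {"airport": 12, "midtown": 3, "harbor": 7})
print(choose_destination("downtown", state))  # 'airport'
```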

This example doesn’t suggest that software algorithms are not useful for solving real-life problems. It’s a warning against picking a software algorithm or a technique that’s been demonstrated in research from a list and applying it to a real-world problem without considering all the requirements for a solution to that problem. Recently, while writing this chapter, I participated in a roundtable on AI in manufacturing hosted by the National Science Foundation. One of the guiding questions that the facilitators asked was: “Why hasn’t software solved more problems in manufacturing?” In my response, I explained the conundrum of picking from a “list of software algorithms” without deeply understanding the operations and the processes that you are trying to improve.

In his 1962 book The Structure of Scientific Revolutions, Thomas Kuhn describes research breakthroughs as the punctuation between long periods of incremental improvement and experimentation. For example, in 1687 Sir Isaac Newton made an important discovery about gravity. Later, in 1915, Albert Einstein made breakthroughs that provided a more accurate picture of gravity. Einstein’s breakthrough doesn’t contradict Newton’s law, but it provides a more comprehensive and nuanced view of how gravity works.

Throughout the history of AI and other research technologies, these periods of puzzle solving and incremental change between research breakthroughs have also served as opportunities for some of this research to spin off and become useful to industry. Take the expert system, for example (a method for making automated decisions based on human experience). Expert systems were developed during the second major wave of AI research. They are great at capturing existing knowledge about how to perform tasks, but they proved to be inflexible and difficult to maintain. At one point, some thought that expert systems would reach full human-comparable intelligence, and they comprised much of what was then considered AI research, but by the 1990s they had all but disappeared from AI research efforts. During this period between the breakthrough of expert systems as a new way to make automated decisions and the AI research breakthroughs that addressed many of the issues with expert systems (see “Deep Reinforcement Learning and Neural Network” in chapter 2 for more details), expert systems went underground, where companies democratized and mainstreamed the technology.

Expert systems are widely used today in finance and engineering. NASA developed a software language for writing them in the 1980s. And in this book, we’ll combine expert systems with other AI techniques as we design Autonomous AI. Research breakthroughs often aren’t ready to add value to production systems and processes until they mature to meet the people and process concerns of those who run those processes.
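
For readers who haven't met one, an expert system boils down to human know-how captured as explicit if-then rules that fire against the current facts. Here is a minimal, generic sketch of that idea (the rules are invented examples, not drawn from NASA's language or any real machine shop):

```python
# Minimal, generic sketch of an expert-system-style rule engine:
# human know-how captured as explicit if-then rules evaluated against facts.
# The rules themselves are invented examples, not from any real system.

rules = [
    {"if": lambda f: f["vibration"] > 0.8,    "then": "reduce spindle speed"},
    {"if": lambda f: f["tool_hours"] > 100,   "then": "schedule tool change"},
    {"if": lambda f: f["coolant_flow"] < 0.2, "then": "pause and check coolant"},
]

def infer(facts: dict) -> list:
    """Fire every rule whose condition matches the current facts."""
    return [rule["then"] for rule in rules if rule["if"](facts)]

print(infer({"vibration": 0.9, "tool_hours": 120, "coolant_flow": 0.5}))
# ['reduce spindle speed', 'schedule tool change']
```

The strength and the weakness are both visible here: the knowledge is explicit and auditable, but every new condition has to be written and maintained by hand.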

Figure 1-6. Scientific advancement over time: revolutions separated by periods of puzzle solving where mainstreaming and democratization occur

In the same way that software hasn’t solved more problems in manufacturing, the burgeoning field of Data Science hasn’t produced the anticipated sweeping positive effect on industry either. As VentureBeat reports, 87% of machine learning models never make it into production. That article makes multiple recommendations, but I have some observations of my own.

I was in Canada at a large nickel mine consulting with process experts about using AI to control a SAG mill (think of an eight-story-tall cement mixer). I stepped outside to take a phone call, and when I walked back down the hallway I found one of the data scientists arguing with one of the experts. It wasn’t anything that professionals can’t work through together, but the disagreement was about whether we should trust and respect the operators’ existing expertise about how to run the mill.

Another time, I spoke with an executive who had a Master’s Degree in Artificial Intelligence and oversaw optimization of processes at a manufacturing company. I explained to him that my approach to designing AI for industrial processes relies heavily on existing subject matter expertise about how to control or optimize the system. He thanked me for not being one of the many companies that have come in, told him that all they needed was some of his data, and that with that data they would build him an AI based control system. He didn’t believe that it was possible to ignore decades of human expertise and come up with a control system that would both function well and address all the people and process concerns related to running expensive safety-critical equipment. Neither do I.

There’s a flavor of data science that functions a little like colonialism. In colonialism, countries explore or even invade other territories claiming intentions to improve the societies they encounter, usually without consideration for the existing culture and values. A colonialistic mindset might ask, Why would I need to consult a more primitive society about what help they need from me? I should just be telling them what to do! That’s one of the most egregious perspectives of colonialism: the arrogance that you don’t need to learn or consider anything about the people that you are allegedly helping. Unfortunately, I see a similar perspective among some misguided data science practitioners who don’t see the need to slow down, listen, and learn about the process from people before attempting to design a superior solution. This is not to say that all data scientists practice this flavor of their trade; there are data scientists who are curious and practice great empathy. The humility and curiosity to inquire and learn what people already know about making decisions will go a long way when designing Autonomous AI.

Remember the bulldozer AI that I told you about? One of the subject matter experts, a PhD controls engineer named Francisco, thanked me during the AI design process. He felt condescended to and “treated like an idiot” by others who had consulted him about AI. What?! Francisco was brilliant in mathematics and control theory—why would anyone condescend to him about AI? The best brain designers are curious, humble, and resist the temptation to practice data science colonialism.

Any brain that you design to make real decisions for real processes should address the changing world, the changing workforce, and pressing problems.

Looking for answers to a changing workforce

When automated systems don’t perform well or don’t make good decisions, factories and processes revert to human control. Humans step in to make high-value decisions for some processes only when the automated systems are making bad decisions, but humans retain complete control of other processes that no one has figured out how to automate well. But experts are retiring at an alarming rate and taking decades of hard-earned knowledge about how to make industrial decisions out of the workforce with them. After talking to expert after expert and business after business, I realized that people look to AI for answers to their changing workforce because expertise is hard to acquire and equally hard to maintain, and because even though expertise is relatively easy to teach, it takes a lot of practice to master.

Expertise is hard to acquire

I visited a chemical company that makes plastic film for computer displays and other products on an extruder. An extruder takes raw material (soap, cornmeal for food, or in this case plastic pellets) and heats it up in a metal tube with a turning screw inside. The screw forces the material out through a slit to make the plastic. Then the plastic film (it looks just like Saran Wrap) gets stretched in both directions, cooled, and sometimes coated. The control room was filled with computer screens and keyboards to check measurements and make real-time adjustments. Can you guess how long an operator trains before they can “call the shots” in the control room as a Senior Operator? Seven years! Many operators put themselves through university Chemical Engineering programs during this time. It takes a whole lot of practice turning the knobs on a process until you can control it well for different products, across varying customer demand, types of plastic, types of coating, and machine wear. And after your experts get really, really good at controlling your process, it’s time for them to retire and you need a way to pass on this expertise to others who are less experienced.

Expertise is hard to maintain

Navasota, Texas, is a small town about two hours’ drive from Houston. I went there to help a company named NOV with their machine shop operations. We arrived in a pickup truck to a parking lot full of pickup trucks, and I felt out of place because I was one of only two people I saw that day who weren’t wearing cowboy boots. Our executive sponsor was a forward-thinking executive who wanted to use AI as a training tool. Many are afraid that AI will take away people’s jobs, but he told me the opposite: “I want to be able to hire a 16-year-old high school dropout, sit a brain next to him, and have him succeed as a machine operator.”

We sat down in a no-frills industrial conference room over strong coffee and he introduced me to a machinist named David. I prefer discussing AI in plain language instead of using research jargon, so I explained to this machinist with 35 years of experience that a new form of AI can learn by practicing and getting feedback, just like he has over all these years, and that we can even use his valuable expertise to teach the machine some of the things he already knows so that it gets better and faster as it practices.

You see, when David and other expert machinists control the cutting machines (give them instructions about where to move and how fast to spin the cutter), the cutting jobs get done quicker (much quicker) and at better quality than when the engineers use automated software to generate the instructions. David has practiced cutting many different kinds of parts using over 40 different machine makes and models. Some of the machines are new and some of the machines in the shop are over 20 years old. These machines all behave quite differently while cutting metal, and David learned how to get the best out of each machine by operating it differently.

NOV and many other companies want to capture and codify the best expertise from their seasoned operators, upload this experience into an AI brain, and sit that brain next to less experienced operators to help them get up to speed more quickly and perform more proficiently. This requires interviewing experts to identify the skills and strategies that they practiced in order to succeed at a task. Then you will be able to design an AI that will practice these same skills, get feedback, and also learn to succeed at the task.

An executive in the resources industry told me that their 20- and 30-year experts are retiring in large groups and that it feels like their hard-fought, valuable experience about how to best manage their business is walking out the door, never to return. Humans can learn how to control complex equipment that changes in really odd ways, but it takes a lot of practice time to build the nuances into our intuition. Most expert operators tell me that it took years or decades to learn to do their job well.

Tip

Designing Autonomous AI allows you to package expertise into AI as neat units of skill that can be passed on to other humans, saved for later, combined in new and interesting ways, or used to control processes autonomously.

Expertise is easy to teach, but requires practice

SCG Chemical is part of a 100-year-old company that manufactures plastic. For one type of plastic, they invented the process, learned to run the reactors efficiently, and even researched advanced chemistry to simulate the process. The operators practiced controlling the reactors well for all the different plastic products they make and for the catalysts they use to make them.

One of the first questions that I asked the experts was, “How do you teach new operators this complex skill of controlling reactors?” The answer was concise and easy to understand: there are two primary strategies that they teach every boardman (operators of all genders at SCG are called boardmen). Here’s the first strategy: continue to add ingredients until the density reaches the target range. Ignore the melting point measurements for the process while you are using this first strategy. Then, when the density of the plastic is in the target range, switch over to the second strategy. While using this strategy, ignore the density and add ingredients until the melting point for the plastic reaches product specification. Because of the way the chemistry works, if you work the strategies in that order, both the density and the melting point will turn out right. They invented this process, but even they don’t have all the chemistry that explains why it works this way. It works every time, so that’s what they teach their operators.
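
A hedged sketch of those two sequenced strategies, written as a simple control loop, might look like the following; the toy reactor model, targets, and step sizes are placeholders I made up, not the plant's real values or chemistry.

```python
# Hedged sketch of the two sequenced strategies, written as a control loop.
# The toy reactor, targets, and step sizes are invented placeholders.

class ToyReactor:
    """Toy stand-in for the reactor: each addition nudges both properties."""
    def __init__(self):
        self._density, self._melt = 0.90, 100.0
    def density(self):
        return self._density
    def melting_point(self):
        return self._melt
    def add_ingredients(self, amount):
        self._density += 0.005 * amount
        self._melt += 2.0 * amount

def run_batch(reactor, density_target, melt_target):
    # Strategy 1: add ingredients until density reaches the target range,
    # ignoring the melting-point measurement entirely.
    while reactor.density() < density_target:
        reactor.add_ingredients(amount=1.0)
    # Strategy 2: now ignore density and keep adding ingredients until the
    # melting point reaches the product specification.
    while reactor.melting_point() < melt_target:
        reactor.add_ingredients(amount=0.5)

reactor = ToyReactor()
run_batch(reactor, density_target=0.95, melt_target=130.0)
print(round(reactor.density(), 3), reactor.melting_point())
```

The sequencing is the easy part to write down; as the next paragraphs describe, the hard part is adjusting the amounts as plant conditions change.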

The supervising engineer, Pitak, writes customized recipes that boardmen can follow to successfully execute each of the strategies as the chemical plant conditions change over time. Even though the boardman knows the two strategies and has a procedure for how to use them, it takes a lot of practice to modify the strategies to match the changing process conditions. For example, a boardman might add ingredients (called reagents in chemistry) to the reactor in different amounts while making one kind of plastic using one type of catalyst, but might add ingredients to the reactor in slightly different amounts while making a different kind of plastic using a second type of catalyst.

This is very similar to what happens while baking (baking is a complex chemical reaction, after all). Your father might have taught you to mix the dough while adding the first set of ingredients until it feels sticky and smells like almonds. This is the first strategy. He also might have taught you that, next, you add a different set of ingredients and knead the dough until it’s firm. This is the second strategy. Your father taught you two strategies and how to sequence them.

Figure 1-7. Baking preparation process with two skills used in sequence

The strategies are pretty easy to teach and understand, but they take practice to master. That’s what recipes are for. They tell you exactly how much of each ingredient to add during each step of the process and recommend how long to mix and how long to knead. The problem with recipes (for baking, making plastic, and many other tasks) is that the recipe is rigid. An expert baker knows that if it’s hot and humid outside you will mix for a shorter period of time before you start kneading, the same way that Pitak knows that if it’s hotter and more humid outside, the boardman will need to add more reagents or more catalyst to the reactor. That’s why Pitak updates the recipes for the boardmen to follow as the temperature and humidity change over time. With a lot more practice, bakers and boardmen no longer need the recipes. They create their own recipes on the fly (bakers based on the feel and smell of the dough, boardmen based on the temperature and pressure in the reactor). This is why my Mom never uses recipes when she cooks. She started decades ago with a recipe for each dish, but now when she cooks each dish, she adjusts the ingredients to taste as she goes. When she first taught me how to make our family recipe chili, I followed the recipe “to the T,” but now I improvise while making chili just like she does!

Looking for answers to pressing problems

Climate Change is a pressing societal problem. Many companies have made pledges to take action to slow the effect of Climate Change. Is there a way that AI can help?

Well, less energy consumption means less need for energy from fossil fuels. Did you know that 50% of energy usage in buildings comes from Heating, Ventilation, and Air Conditioning (HVAC) systems? It turns out that this is an opportunity for AI to make a material difference on Climate Change. Many commercial HVAC systems, like the system that cools and heats your office building, rely on human engineers and operators to tune and control them.

Driving various rooms toward the right temperature while carefully managing energy usage is not as easy or intuitive as it appears. Managing energy consumption for a building or campus adds several layers of variability, like the controls for cooling towers, water pumps, and chillers. This is further complicated by occupants entering and leaving the building constantly throughout the day. There’s a pattern to it (imagine commute times and traffic conditions), but the patterns are complex to perceive. The price of energy changes throughout the day. There are peak times when energy is most expensive and off-peak times when energy is cheaper. You can recycle air to save the cost of heating outside air, but legal standards dictate how much carbon dioxide is allowed in the building, which limits how much air you can recycle. Each layer of complexity makes it harder for a human to understand how each variable will impact the outcome of a control setting.
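
To give a feel for why this is hard to reason about by hand, here is a hedged sketch of how a few of those layers might fold into a single hourly cost that an operator (or a brain) is implicitly trading off. Every number, limit, and name below is an invented placeholder, not from any real building.

```python
# Hedged sketch: a few competing HVAC trade-offs folded into one hourly cost.
# Every number, limit, and name is an invented placeholder.

CO2_LEGAL_LIMIT_PPM = 1000  # placeholder ceiling on indoor CO2

def hourly_cost(hour, kwh_used, indoor_temp, setpoint, recycled_air_fraction, occupancy):
    # Energy is more expensive at peak hours.
    price_per_kwh = 0.30 if 14 <= hour <= 19 else 0.12

    # Recycling more air saves the energy of conditioning outside air...
    energy_cost = price_per_kwh * kwh_used * (1.0 - 0.3 * recycled_air_fraction)

    # ...but occupancy plus recycled air pushes indoor CO2 toward the legal limit.
    co2_ppm = 400 + 600 * recycled_air_fraction * (occupancy / 100.0)
    if co2_ppm > CO2_LEGAL_LIMIT_PPM:
        return float("inf")  # not allowed, no matter how cheap

    # Discomfort penalty grows with distance from the temperature setpoint.
    comfort_penalty = 2.0 * abs(indoor_temp - setpoint)

    return energy_cost + comfort_penalty

# The same equipment settings look very different at 3 p.m. with a full
# building than at 9 p.m. with a nearly empty one.
print(hourly_cost(hour=15, kwh_used=500, indoor_temp=23.5, setpoint=22.0,
                  recycled_air_fraction=0.8, occupancy=95))
print(hourly_cost(hour=21, kwh_used=500, indoor_temp=23.5, setpoint=22.0,
                  recycled_air_fraction=0.8, occupancy=10))
```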

Microsoft built an Autonomous AI to control the HVAC systems on its Redmond West Campus. The campus had automated systems, but those systems could not make supervisory decisions based on occupancy and outdoor temperature in real time. My team worked with mechanical engineers to design a brain to make those decisions, and the new system is currently using about 15% less energy. Two years earlier, Google successfully tested an AI that reduced the energy used for cooling its data centers by 40%.

AI is a tool; use it for good

Every day I see people debating the ethics and perils of AI on social media. While I agree that ethics are important and that we should be very careful as a society about how we approach AI, the only way to ensure that AI gets used for good is to design and build AI that explicitly does useful, helpful things.

I just finished teaching my first Designing Autonomous AI cohort to underrepresented minority students in New York City with the Urban Arts Partnership. What an amazing experience working with such energetic and talented college students! As a Black man who works in AI research, I feel the weight of unequal access to advanced technologies like AI every day. If the Fourth Industrial Revolution can endow those who lead it with superpowers, tremendous wealth, and expansive opportunity, then unequal access to AI presents something of a calcifying caste system. Four percent of the workforce at Microsoft and at Facebook is Black; 2.5% of the workforce at Google is Black. Less than 20% of all AI professors are women, 18% of major research papers at AI conferences are written by women, and only 15% of Google AI research staff are women. Robert J. Shiller, 2013 Nobel laureate in economics, says it well: “You cannot wait until a house burns down to buy fire insurance on it. We cannot wait until there are massive dislocations in our society to prepare for the Fourth Industrial Revolution.”

Starting with the principles and techniques in this book, I intend to further democratize access to decision-making Autonomous AI and put it into the hands of the underrepresented and the underprivileged as a means for solving societal problems and economic advancement.

First, imagine an operator at the chemical company that I talked about above (the one with the plastic extruder) not just learning to control the extruder well, but designing and building AI that she will take with her into the control room to help her make decisions. Next, imagine a squad of chess players from inner city East Oakland, California, all minorities, who learned how to play chess by playing with and against Autonomous AI that they designed and taught. We have much work to do to fulfill the vision, but the progress is real and I invite you to use your skills designing Autonomous AI to do good in areas that you are passionate about.
