
1. AI Foundations

History Lessons

Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.

—Larry Page, the co-founder of Google Inc. and CEO of Alphabet1

In Fredric Brown’s 1954 short story, “Answer,” all of the computers across the 96 billion planets in the universe were connected into one super machine. It was then asked, “Is there a God?” to which it answered, “Yes, now there is a God.”

No doubt, Brown’s story was certainly clever—as well as a bit comical and chilling! Science fiction has been a way for us to understand the implications of new technologies, and artificial intelligence (AI) has been a major theme. Some of the most memorable characters in science fiction involve androids or computers that become self-aware, such as in Terminator, Blade Runner, 2001: A Space Odyssey, and even Frankenstein.

But with the relentless pace of new technologies and innovation nowadays, science fiction is starting to become real. We can now talk to our smartphones and get answers; our social media accounts provide us with the content we’re interested in; our banking apps provide us with reminders; and on and on. This kind of personalization almost seems magical but is quickly becoming a normal part of our everyday lives.

To understand AI, it’s important to have a grounding in its rich history. You’ll see how the development of this industry has been full of breakthroughs and setbacks. There is also a cast of brilliant researchers and academics, like Alan Turing, John McCarthy, Marvin Minsky, and Geoffrey Hinton, who pushed the boundaries of the technology. But through it all, there was constant progress.

Let’s get started.

Alan Turing and the Turing Test

Alan Turing is a towering figure in computer science and AI. He is often called the “father of AI.”

In 1936, he wrote a paper called “On Computable Numbers.” In it, he set forth the core concepts of a computer, which became known as the Turing machine. Keep in mind that real computers would not be developed until more than a decade later.

Yet it was his 1950 paper, “Computing Machinery and Intelligence,” that would become historic for AI. In it, he focused on the concept of an intelligent machine. But to explore this, there had to be a way to measure intelligence. What is intelligence—at least for a machine?

This is where he came up with the famous “Turing Test.” It is essentially a game with three players: two that are human and one that is a computer. The evaluator, a human, asks open-ended questions of the other two (one human, one computer) with the goal of determining which one is the human. If the evaluator cannot make a determination, then it is presumed that the computer is intelligent. Figure 1-1 shows the basic workflow of the Turing Test.
Figure 1-1. The basic workflow of the Turing Test

The genius of this concept is that there is no need to see if the machine actually knows something, is self-aware, or even if it is correct. Rather, the Turing Test indicates only that a machine can process large amounts of information, interpret language, and communicate with humans.

Turing believed that it would actually not be until about the turn of the century that a machine would pass his test. Yes, this was one of many predictions of AI that would come up short.

So how has the Turing Test held up over the years? Well, it has proven to be difficult to crack. Keep in mind that there are contests, such as the Loebner Prize and the Turing Test Competition, to encourage people to create intelligent software systems.

In 2014, there was a case where it did look like the Turing Test had been passed. It involved a chatbot that posed as a 13-year-old boy.2 Interestingly enough, the human judges were likely fooled because some of the answers contained errors, which made the program seem more human.

Then in May 2018 at Google’s I/O conference, CEO Sundar Pichai gave a standout demo of Google Assistant.3 Before a live audience, he used the device to call a local hairdresser to make an appointment. The person on the other end of the line acted as if she were talking to a person!

Amazing, right? Definitely. Yet it still probably did not pass the Turing Test. The reason is that the conversation was focused on one topic—not open ended.

As should be no surprise, there has been ongoing controversy with the Turing Test, as some people think it can be manipulated. In 1980, philosopher John Searle wrote a famous paper, entitled “Minds, Brains, and Programs,” where he set up his own thought experiment, called the “Chinese room argument” to highlight the flaws.

Here’s how it worked: Let’s say John is in a room and does not understand the Chinese language. However, he does have manuals that provide easy-to-use rules for translating it. Outside the room is Jan, who does understand the language and submits characters to John. After some time, she gets back an accurate translation from John. As such, it’s reasonable for Jan to believe that John can speak Chinese.

Searle’s conclusion:

The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.4

It was a pretty good argument—and has been a hot topic of debate in AI circles since.

Searle also believed there were two forms of AI:
  • Strong AI: This is when a machine truly understands what is happening. There may even be emotions and creativity. For the most part, it is what we see in science fiction movies. This type of AI is also known as Artificial General Intelligence (AGI). Note that there are only a handful of companies that focus on this category, such as Google’s DeepMind.

  • Weak AI: With this, a machine is pattern matching and usually focused on narrow tasks. Examples of this include Apple’s Siri and Amazon’s Alexa.

The reality is that AI is in the early phases of weak AI. Reaching the point of strong AI could easily take decades. Some researchers think it may never happen.

Given the limitations to the Turing Test, there have emerged alternatives, such as the following:
  • Kurzweil-Kapor Test: This is from futurologist Ray Kurzweil and tech entrepreneur Mitch Kapor. Their test requires that a computer carry on a conversation for two hours and that two of three judges believe it is a human talking. Kurzweil predicts a machine will achieve this by 2029; Kapor has bet that it will not.

  • Coffee Test: This is from Apple co-founder Steve Wozniak. According to the coffee test, a robot must be able to go into a stranger’s home, locate the kitchen, and brew a cup of coffee.

The Brain Is a…Machine?

In 1943, Warren McCulloch and Walter Pitts met at the University of Chicago, and they became fast friends even though their backgrounds were starkly different, as were their ages (McCulloch was 42 and Pitts was 18). McCulloch grew up in a wealthy Eastern Establishment family and had gone to prestigious schools. Pitts, on the other hand, grew up in a low-income neighborhood and was even homeless as a teenager.

Despite all this, the partnership would turn into one of the most consequential in the development of AI. McCulloch and Pitts developed new theories to explain the brain, which often went against the conventional wisdom of Freudian psychology. Both of them thought that logic could explain the brain’s power, and they also drew on the insights of Alan Turing. From this, they co-wrote a paper in 1943 called “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which appeared in the Bulletin of Mathematical Biophysics. The thesis was that the core functions of the brain, its networks of neurons and synapses, could be explained by logic and mathematics, say with logical operators like And, Or, and Not. With these, you could construct a complex network that could process information, learn, and think.
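
To make the idea concrete, here is a minimal sketch in Python of a McCulloch-Pitts-style threshold unit. The weights and thresholds are illustrative choices rather than anything from the 1943 paper; the point is simply that a unit that fires when its weighted inputs cross a threshold can reproduce operators like And, Or, and Not.

# A McCulloch-Pitts-style neuron: it fires (outputs 1) when the weighted
# sum of its binary inputs reaches a threshold. The weights and thresholds
# below are illustrative choices, not values from the 1943 paper.

def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def and_gate(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def or_gate(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

def not_gate(a):
    # A single inhibitory input: a negative weight with a threshold of 0.
    return mp_neuron([a], weights=[-1], threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "And:", and_gate(a, b), "Or:", or_gate(a, b))
    print("Not 0:", not_gate(0), "Not 1:", not_gate(1))

Chaining such units together gives, in miniature, the kind of network the paper argued could process information.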

Ironically, the paper did not get much traction with neurologists. But it did get the attention of those working on computers and AI.

Cybernetics

While Norbert Wiener created various theories, his most famous one was cybernetics. It focused on understanding control and communication in animals, people, and machines, showing the importance of feedback loops.

In 1948, Wiener published Cybernetics: Or Control and Communication in the Animal and the Machine. Even though it was a scholarly work—filled with complex equations—the book still became a bestseller, hitting the New York Times list.

It was definitely wide ranging. Some of the topics included Newtonian mechanics, meteorology, statistics, astronomy, and thermodynamics. This book would anticipate the development of chaos theory, digital communications, and even computer memory.

But the book would also be influential for AI. Like McCulloch and Pitts, Wiener compared the human brain to the computer. Furthermore, he speculated that a computer would be able to play chess and eventually beat grand masters. The main reason is that he believed that a machine could learn as it played games. He even thought that computers would be able to replicate themselves.

But Cybernetics was not utopian either. Wiener was also prescient in understanding the downsides of computers, such as the potential for dehumanization. He even thought that machines would make people unnecessary.

It was definitely a mixed message. But Wiener’s ideas were powerful and spurred the development of AI.

The Origin Story

John McCarthy’s interest in computers was spurred in 1948, when he attended a seminar, called “Cerebral Mechanisms in Behavior,” which covered the topic of how machines would eventually be able to think. Some of the participants included the leading pioneers in the field such as John von Neumann, Alan Turing, and Claude Shannon.

McCarthy continued to immerse himself in the emerging computer industry—including a stint at Bell Labs—and in 1956, he organized a ten-week research project at Dartmouth College. He called it a “study of artificial intelligence.” It was the first time the term had been used.

The attendees included academics like Marvin Minsky, Nathaniel Rochester, Allen Newell, O. G. Selfridge, Raymond Solomonoff, and Claude Shannon. All of them would go on to become major players in AI.

The goals for the study were definitely ambitious:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.5

At the conference, Allen Newell, Cliff Shaw, and Herbert Simon demoed a computer program called the Logic Theorist, which they developed at the Research and Development (RAND) Corporation. The main inspiration came from Simon (who would win the Nobel Prize in Economics in 1978). When he saw how computers printed out words on a map for air defense systems, he realized that these machines could do more than just process numbers. They could also work with images, characters, and symbols—all of which could lead to a thinking machine.

Regarding the Logic Theorist, the focus was on proving various math theorems from Principia Mathematica. One of the program’s proofs turned out to be more elegant than the original—and the co-author of the book, Bertrand Russell, was delighted.

Creating the Logic Theorist was no easy feat. Newell, Shaw, and Simon used an IBM 701, which had to be programmed in machine language. So they created a higher-level language, called IPL (Information Processing Language), that sped up the programming. For several years, it was the language of choice for AI.

The IBM 701 also did not have enough memory for the Logic Theorist. This led to another innovation: list processing. It allowed for dynamically allocating and deallocating memory as the program ran.

Bottom line: The Logic Theorist is considered the first AI program ever developed.

Despite this, it did not garner much interest! The Dartmouth conference was mostly a disappointment. Even the phrase “artificial intelligence” was criticized.

Researchers tried to come up with alternatives, such as “complex information processing.” But they were not catchy like AI was—and the term stuck.

As for McCarthy, he continued on his mission to push innovation in AI. Consider the following:
  • During the late 1950s, he developed the Lisp programming language, which was often used for AI projects because of the ease of using nonnumerical data. He also pioneered programming concepts like recursion, dynamic typing, and garbage collection. Lisp continues to be used today, such as with robotics and business applications. While McCarthy was developing the language, he also co-founded the MIT Artificial Intelligence Laboratory.

  • In 1961, he formulated the concept of time-sharing of computers, which had a transformative impact on the industry. This also led to the development of the Internet and cloud computing.

  • A few years later, he founded Stanford’s Artificial Intelligence Laboratory.

  • In 1969, he wrote a paper called “Computer-Controlled Cars,” in which he described how a person could enter directions with a keyboard and a television camera would navigate the vehicle.

  • He won the Turing Award in 1971. This prize is considered the Nobel Prize for Computer Science.

In a speech in 2006, McCarthy noted that he was too optimistic about the progress of strong AI. According to him, “we humans are not very good at identifying the heuristics we ourselves use.”6

Golden Age of AI

From 1956 to 1974, the AI field was one of the hottest spots in the tech world. A major catalyst was the rapid development in computer technologies. They went from being massive systems—based on vacuum tubes—to smaller systems run on integrated circuits that were much quicker and had more storage capacity.

The federal government was also investing heavily in new technologies. Part of this was due to the ambitious goals of the Apollo space program and the heavy demands of the Cold War.

As for AI, the main funding source was the Advanced Research Projects Agency (ARPA), which was launched in the late 1950s after the shock of Russia’s Sputnik. The spending on projects usually came with few requirements. The goal was to inspire breakthrough innovation. One of the leaders of ARPA, J. C. R. Licklider, had a motto of “fund people, not projects.” For the most part, the majority of the funding went to Stanford, MIT, Lincoln Laboratories, and Carnegie Mellon University.

Other than IBM, the private sector had little involvement in AI development. Keep in mind that—by the mid-1950s—IBM would pull back and focus on the commercialization of its computers. There was actually fear from customers that this technology would lead to significant job losses. So IBM did not want to be blamed.

In other words, much of the innovation in AI spun out from academia. For example, in 1959, Newell, Shaw, and Simon continued to push the boundaries in the AI field with the development of a program called “General Problem Solver.” As the name implied, it was about solving math problems, such as the Tower of Hanoi.

But there were many other programs that attempted to achieve some level of strong AI. Examples included the following:
  • SAINT or Symbolic Automatic INTegrator (1961): This program, created by MIT researcher James Slagle, helped to solve freshman calculus problems. It would be updated into other programs, called SIN and MACSYMA, that did much more advanced math. SAINT was actually the first example of an expert system, a category of AI we’ll cover later in this chapter.

  • ANALOGY (1963): This program was the creation of MIT professor Thomas Evans. The application demonstrated that a computer could solve analogy problems of an IQ test.

  • STUDENT (1964): Under the supervision of Minsky at MIT, Daniel Bobrow created this AI application for his PhD thesis. The system used Natural Language Processing (NLP) to solve algebra problems for high school students.

  • ELIZA (1965): MIT professor Joseph Weizenbaum designed this program, which instantly became a big hit. It even got buzz in the mainstream press. It was named after Eliza Doolittle (from George Bernard Shaw’s play Pygmalion) and served as a psychoanalyst. A user could type in questions, and ELIZA would provide counsel (this was the first example of a chatbot; a minimal sketch of its pattern-matching style appears after this list). Some people who used it thought the program was a real person, which deeply concerned Weizenbaum since the underlying technology was fairly basic. You can find examples of ELIZA on the web, such as at http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm.

  • Computer Vision (1966): In a legendary story, MIT’s Marvin Minsky told a student, Gerald Jay Sussman, to spend the summer linking a camera to a computer and getting the computer to describe what it saw. He did just that and built a system that detected basic patterns. It was the first use of computer vision.

  • Mac Hack (1968): MIT’s Richard D. Greenblatt created this program that played chess. It was the first to play in real tournaments, where it earned a class-C rating.

  • Hearsay I (Late 1960s): Professor Raj Reddy developed a continuous speech recognition system. Some of his students would then go on to create Dragon Systems, which became a major tech company.
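
ELIZA’s trick, as described above, was shallow pattern matching: find a keyword, reflect the user’s own words, and drop them into a canned template. Here is a minimal sketch of that general style in Python; the patterns, reflections, and responses are illustrative inventions, not Weizenbaum’s original script.

import re

# A tiny ELIZA-style responder: match a keyword pattern, reflect the
# user's words (I -> you, my -> your), and drop them into a canned
# template. The rules below are illustrative, not the original script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(text):
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no pattern matches

if __name__ == "__main__":
    print(respond("I am worried about my exams"))    # How long have you been worried about your exams?
    print(respond("My brother never listens to me"))  # Tell me more about your brother never listens to you.

Even this shallow matching was enough to convince some users that they were chatting with a person, which is exactly what troubled Weizenbaum.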

During this period, there was a proliferation of AI academic papers and books. Some of the topics included Bayesian methods, machine learning, and vision.

But there were generally two major theories about AI. One was led by Minsky, who said that there needed to be symbolic systems. This meant that AI should be based on traditional computer logic or preprogramming—that is, the use of approaches like If-Then-Else statements.

Next, there was Frank Rosenblatt, who believed that AI needed to use systems similar to the brain like neural networks (this field was also known as connectionism). But instead of calling the inner workings neurons, he referred to them as perceptrons. A system would be able to learn as it ingested data over time.

In 1957, Rosenblatt created the first computer program for this, called the Mark 1 Perceptron. It included a camera to help to differentiate between two images (at 20 × 20 pixels). The Mark 1 Perceptron would start with random weightings and then go through the following process:
  1. Take in an input and come up with the perceptron output.
  2. If the output does not match the desired answer, then
    a. If the output should have been 0 but was 1, decrease the weights for the inputs that were 1.
    b. If the output should have been 1 but was 0, increase the weights for the inputs that were 1.
  3. Repeat steps #1 and #2 until the results are accurate.

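Here is a minimal sketch in Python of that update rule, applied to a tiny made-up dataset (the OR function). The learning rate, epoch count, and data are illustrative choices; they are not the Mark 1 hardware or Rosenblatt’s actual experiments.

import random

# A minimal perceptron trainer following the update rule above: when the
# output disagrees with the desired 0/1 label, nudge the weights of the
# active inputs up or down. The tiny dataset (the OR function), learning
# rate, and epoch count are illustrative choices.

def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= 0 else 0

def train(samples, epochs=50, lr=0.1):
    n_inputs = len(samples[0][0])
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]  # random starting weights
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)  # +1, 0, or -1
            # Steps 2a/2b: adjust the weights when the output was wrong.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

if __name__ == "__main__":
    or_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    w, b = train(or_data)
    print([predict(w, b, x) for x, _ in or_data])  # expect [0, 1, 1, 1]

With linearly separable data like this, the weights settle on values that classify every example correctly. The trouble, as critics would soon point out, starts with problems that a single layer cannot separate.
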
This was definitely pathbreaking for AI. The New York Times even had a write-up for Rosenblatt, extolling “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”7

But there were still nagging issues with the perceptron. One was that the neural network had only one layer (primarily because of the lack of computation power at the time). Next, brain research was still in the nascent stages and did not offer much in terms of understanding cognitive ability.

Minsky would co-write a book, along with Seymour Papert, called Perceptrons (1969). The authors were relentless in attacking Rosenblatt’s approach, showing, for example, that a single-layer perceptron could not even compute the simple XOR function, and the approach quickly faded away. Note that in the early 1950s Minsky had developed a crude neural net machine, using hundreds of vacuum tubes and spare parts from a B-24 bomber. But he saw that the technology was nowhere near being workable.

Rosenblatt tried to fight back, but it was too late. The AI community quickly turned sour on neural networks. Rosenblatt would then die a couple years later in a boating accident. He was 43 years old.

Yet by the 1980s, his ideas would be revived—which would lead to a revolution in AI, primarily with the development of deep learning.

For the most part, the Golden Age of AI was freewheeling and exciting. Some of the brightest academics in the world were trying to create machines that could truly think. But the optimism often went to the extremes. In 1965, Simon said that within 20 years, a machine could do anything a human could. Then in 1970, in an interview with Life magazine, Minsky said this would happen in only 3–8 years (by the way, he was an advisor on the 2001: A Space Odyssey movie).

Unfortunately, the next phase of AI would be much darker. There were more academics who were becoming skeptical. Perhaps the most vocal was Hubert Dreyfus, a philosopher. In books such as What Computers Still Can’t Do: A Critique of Artificial Reason,8 he set forth his ideas that computers were not similar to the human brain and that AI would woefully fall short of the lofty expectations.

AI Winter

During the early 1970s, the enthusiasm for AI started to wane. This period would become known as the “AI winter,” which would last through 1980 or so (the term came from “nuclear winter,” a hypothesized aftermath of nuclear war in which the sun is blocked by dust and temperatures plunge across the world).

Even though many strides had been made with AI, they were still mostly academic and confined to controlled environments. At the time, computer systems were still limited. For example, a DEC PDP-11/45—which was common for AI research—had the ability to expand its memory to only 128K.

The Lisp language also was not ideal for computer systems. Rather, in the corporate world, the focus was primarily on FORTRAN.

Next, there were still many complex aspects to understanding intelligence and reasoning. Just one is disambiguation, which is the situation when a word has more than one meaning (for example, “bank” can refer to a financial institution or the side of a river). This adds to the difficulty for an AI program since it will also need to understand the context.

Finally, the economic environment in the 1970s was far from robust. There was persistent inflation, slow growth, and supply disruptions, such as the oil crisis.

Given all this, it should be no surprise that the US government was getting more stringent with funding. After all, for a Pentagon planner, how useful is a program that can play chess, solve a theorem, or recognize some basic images?

Not much, unfortunately.

A notable case is the Speech Understanding Research program at Carnegie Mellon University. The Defense Advanced Research Projects Agency (DARPA) thought this speech recognition system could be used by fighter pilots to issue voice commands. But it proved to be unworkable. One of the programs, which was called Harpy, could understand 1,011 words—roughly what a typical 3-year-old knows.

The officials at DARPA felt that they had been hoodwinked and eliminated the $3 million annual budget for the program.

But the biggest hit to AI came via a report—which came out in 1973—from Professor Sir James Lighthill. Funded by the UK Parliament, it was a full-on repudiation of the “grandiose objectives” of strong AI. A major issue he noted was the “combinatorial explosion,” the problem in which the number of possibilities a program must consider grows exponentially as problems get larger, making the models too complicated to adjust.

The report concluded: “In no part of the field have the discoveries made so far produced the major impact that was then promised.”9 He was so pessimistic that he did not believe computers would be able to recognize images or beat a chess grand master.

The report also led to a public debate that was televised on the BBC (you can find the videos on YouTube). It was Lighthill against Donald Michie, Richard Gregory, and John McCarthy.

Even though Lighthill had valid points—and evaluated large amounts of research—he did not see the power of weak AI. But it did not seem to matter as the winter took hold.

Things got so bad that many researchers changed their career paths. And as for those who still studied AI, they often referred to their work with other terms—like machine learning, pattern recognition, and informatics!

The Rise and Fall of Expert Systems

Even during the AI winter, there continued to be major innovations. One was backpropagation, which is essential for adjusting the weights in neural networks. Then there was the development of the recurrent neural network (RNN), whose connections loop back on themselves so that information can persist as a sequence is processed.

But in the 1980s and 1990s, there also was the emergence of expert systems. A key driver was the explosive growth of PCs and minicomputers.

Expert systems were based on the concepts of Minsky’s symbolic logic, with knowledge captured as rules and logical pathways. They were often developed by domain experts in particular fields like medicine, finance, and auto manufacturing.

Figure 1-2 shows the key parts of an expert system.
Figure 1-2. Key parts of an expert system

While there are expert systems that go back to the mid-1960s, they did not gain commercial use until the 1980s. An example was XCON (eXpert CONfigurer), which John McDermott developed at Carnegie Mellon University. The system allowed for optimizing the selection of computer components and initially had about 2,500 rules. Think of it as the first recommendation engine. From the launch in 1980, it turned out to be a big cost saver for DEC for its line of VAX computers (about $40 million by 1986).
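
To give a feel for how a rule-based configurer of this kind works, here is a toy forward-chaining sketch in Python. The facts and rules are invented for illustration; they are not XCON’s actual rules, which numbered in the thousands and were far more detailed.

# A toy forward-chaining rule engine: each rule inspects the known facts
# and may add new ones; the engine loops until nothing changes. The facts
# and rules are invented for illustration and are not XCON's actual rules.

def rule_big_disk(facts):
    return {"large disk"} if "database workload" in facts else set()

def rule_extra_memory(facts):
    if "large disk" in facts and "many users" in facts:
        return {"extra memory"}
    return set()

def rule_tall_cabinet(facts):
    return {"tall cabinet"} if "extra memory" in facts else set()

RULES = [rule_big_disk, rule_extra_memory, rule_tall_cabinet]

def infer(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            new_facts = rule(facts) - facts
            if new_facts:
                facts |= new_facts
                changed = True
    return facts

if __name__ == "__main__":
    order = {"database workload", "many users"}
    print(sorted(infer(order)))
    # Adds "large disk", "extra memory", and "tall cabinet" to the order.

The weaknesses described later in this section show up even in a toy like this: every new situation means hand-writing another rule, and nothing is learned from data.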

When companies saw the success of XCON, there was a boom in expert systems—turning into a billion-dollar industry. The Japanese government also saw the opportunity and invested hundreds of millions to bolster its home market. However, the results were mostly a disappointment. Much of the innovation was in the United States.

Consider that IBM used an expert system for its Deep Blue computer. In 1996, it beat chess grandmaster Garry Kasparov in one game of a six-game match (Kasparov won that match, but Deep Blue would win the 1997 rematch). Deep Blue, which IBM had been developing since 1985, processed 200 million positions per second.

But there were issues with expert systems. They were often narrow and difficult to apply across other categories. Furthermore, as the expert systems got larger, it became more challenging to manage them and feed data. The result was that there were more errors in the outcomes. Next, testing the systems often proved to be a complex process. Let’s face it, there were times when the experts would disagree on fundamental matters. Finally, expert systems did not learn over time. Instead, there had to be constant updates to the underlying logic models, which added greatly to the costs and complexities.

By the late 1980s, expert systems started to lose favor in the business world, and many startups merged or went bust. Actually, this helped cause another AI winter, which would last until about 1993. PCs were rapidly eating into higher-end hardware markets, which meant a steep reduction in Lisp-based machines.

Government funding for AI, such as from DARPA, also dried up. Then again, the Cold War was rapidly coming to a quiet end with the fall of the Soviet Union.

Neural Networks and Deep Learning

As a teenager, Geoffrey Hinton wanted to be a professor and to study AI. He came from a family of noted academics (his great-great-grandfather was George Boole). His mom would often say, “Be an academic or be a failure.”10

Even during the first AI winter, Hinton was passionate about AI and was convinced that Rosenblatt’s neural network approach was the right path. So in 1972, he began a PhD on the topic at the University of Edinburgh.

But during this period, many people thought that Hinton was wasting his time and talents. AI was essentially considered a fringe area. It wasn’t even thought of as a science.

But this only encouraged Hinton more. He relished his position as an outsider and knew that his ideas would win out in the end.

Hinton realized that the biggest hindrance to AI was computer power. But he also saw that time was on his side. Moore’s Law predicted that the number of components on a chip would double about every 18 months.

In the meantime, Hinton worked tirelessly on developing the core theories of neural networks—something that eventually became known as deep learning. In 1986, he wrote—along with David Rumelhart and Ronald J. Williams—a pathbreaking paper, called “Learning Representations by Back-propagating Errors.” It set forth key processes for using backpropagation to train multilayer neural networks. The result was a significant improvement in accuracy, such as with predictions and visual recognition.
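
As a rough illustration of what backpropagation does, here is a minimal NumPy sketch that trains a tiny two-layer network on the XOR problem. The architecture, learning rate, and iteration count are illustrative choices, not the experiments from the 1986 paper.

import numpy as np

# A tiny two-layer network trained with backpropagation on the XOR
# problem. The layer sizes, learning rate, and iteration count are
# illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the output error back through the
    # hidden layer and adjust every weight a little.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(out.round(2))  # for most seeds this approaches [[0], [1], [1], [0]]

The key step is the backward pass: the output error is pushed back through the hidden layer so that every weight, even those far from the output, gets nudged in a direction that reduces the error.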

Of course, this did not happen in isolation. Hinton’s pioneering work was based on the achievements of other researchers who also were believers in neural networks. And his own research spurred a flurry of other major achievements:
  • 1980: Kunihiko Fukushima created the Neocognitron, a pattern recognition system that became the basis of convolutional neural networks. Its design was inspired by the visual cortex of animals.

  • 1982: John Hopfield developed “Hopfield Networks.” This was essentially a recurrent neural network.

  • 1989: Yann LeCun merged convolutional networks with backpropagation. This approach would find use in analyzing handwritten checks.

  • 1989: Christopher Watkins’ PhD thesis, “Learning from Delayed Rewards,” described Q-Learning. This was a major advance in helping with reinforcement learning.

  • 1998: Yann LeCun published “Gradient-Based Learning Applied to Document Recognition,” which used gradient descent algorithms to improve neural networks.

Technological Drivers of Modern AI

Besides advances in new conceptual approaches, theories, and models, AI had some other important drivers. Here’s a look at the main ones:
  • Explosive Growth in Datasets: The internet has been a major factor for AI because it has allowed for the creation of massive datasets. In the next chapter, we’ll take a look at how data has transformed this technology.

  • Infrastructure: Perhaps the most consequential company for AI during the past 15 years or so has been Google. To keep up with the indexing of the Web—which was growing at a staggering rate—the company had to come up with creative approaches to build scalable systems. The result has been innovation in commodity server clusters, virtualization, and open source software. Google was also one of the early adopters of deep learning, with the launch of the “Google Brain” project in 2011. Oh, and a few years later the company hired Hinton.

  • GPUs (Graphics Processing Units): This chip technology, which was pioneered by NVIDIA, was originally for high-speed graphics in games. But the architecture of GPUs would eventually be ideal for AI as well. Note that most deep learning research is done with these chips. The reason is that—with parallel processing—the speed is many times higher than with traditional CPUs. This means that training a model may take a day or two vs. weeks or even months.

All these factors reinforced themselves—adding fuel to the growth of AI. What’s more, these factors are likely to remain vibrant for many years to come.

Structure of AI

In this chapter, we’ve covered many concepts. Now it can be tough to understand the organization of AI. For instance, it is common to see terms like machine learning and deep learning get confused. But it is essential to understand the distinctions, which we will cover in detail in the rest of this book.

But on a high-level view of things, Figure 1-3 shows how the main elements of AI relate to each other. At the top is AI, which covers a wide variety of theories and technologies. You can then break this down into two main categories: machine learning and deep learning.
Figure 1-3. This is a high-level look at the main components of the AI world

Conclusion

There’s nothing new about AI being a buzzword. The term has seen various stomach-churning boom-bust cycles.

Maybe it will once again go out of favor? Perhaps. But this time around, there are true innovations with AI that are transforming businesses. Mega tech companies like Google, Microsoft, and Facebook consider the category to be a major priority. All in all, it seems like a good bet that AI will continue to grow and change our world.

Key Takeaways

  • Technology often takes longer to evolve than originally expected.

  • AI is not just about computer science and mathematics. There have been key contributions from fields like economics, neuroscience, psychology, linguistics, electrical engineering, and philosophy.

  • There are two main types of AI: weak and strong. Strong is where machines become self-aware, whereas weak is for systems that focus on specific tasks. Currently, AI is at the weak stage.

  • The Turing Test is a common way to test if a machine can think. It is based on whether a human judge can be convinced that he or she is conversing with another person rather than a machine.

  • Some of the key drivers for AI include new theories from researchers like Hinton, the explosive growth in data, new technology infrastructure, and GPUs.
