What Is AI?

(Excerpted from the report What Is Artificial Intelligence?)

Defining artificial intelligence isn’t just difficult: it’s impossible, not least because we don’t really understand human intelligence. Paradoxically, advances in AI will do more to define what human intelligence isn’t than to define what artificial intelligence is.

The Meaning of Intelligence

What we mean by “intelligence” is a fundamental question. In a Radar post from 2014, Beau Cronin did an excellent job of summarizing the many definitions of AI. What we expect from AI depends critically on what we want the AI to do.

If we assume that AI must be embodied in hardware that’s capable of motion, such as a robot or an autonomous vehicle, we get a different set of criteria. We’re asking the computer to perform a poorly defined task (like driving to the store) under its own control. We can already build AI systems that can do a better job of planning a route and driving than most humans. The one accident in which one of Google’s autonomous vehicles was at fault occurred because the algorithms were modified to drive more like a human, and to take risks that the AI system would not normally have taken.

We can define AI more simply by dispensing with the intricacies of conversational systems or autonomous robotic systems and saying that AI is solely about building systems that answer questions and solve problems. Systems that can answer questions and reason about complex logic are the “expert systems” that we’ve been building for some years now, most recently embodied in IBM’s Watson. (AlphaGo solves a different kind of problem.) However, as Beau Cronin points out, solving problems that humans find intellectually challenging is relatively easy; what’s much more difficult is solving the problems that humans find easy. Few three-year-olds can learn to play Go. All three-year-olds can recognize their parents—and without a substantial set of tagged images.

Assistants or Actors?

Press coverage of AI focuses on autonomous systems—machines that act on their own—with good reason: that’s the fun, sexy, and somewhat scary face of AI. It’s easy to watch AlphaGo, with a human servant to make its moves, and fantasize about a future dominated by machines. But there’s something more to AI than autonomous devices that make humans obsolete. Where is the real value—artificial intelligence or intelligence augmentation? AI or IA? That question has been asked since the first attempts at AI and is explored in depth by John Markoff in Machines of Loving Grace. We may not want an AI system to make decisions; we may want to reserve decision making for ourselves. We may want AI that augments our intelligence by providing us with information, predicting the consequences of any course of action, and making recommendations, but leaving decisions to the humans.

A GPS navigation system is an excellent example of an AI system that augments human intelligence. Given a good map, most humans can navigate from point A to point B, though our abilities leave a lot to be desired, particularly if we’re in unfamiliar territory. Plotting the best route between two locations is a difficult problem, particularly when you account for problems like bad traffic and road conditions. But, with the exception of autonomous vehicles, we’ve never connected the navigation engine to the steering wheel. A GPS is strictly an assistive technology: it gives recommendations, not commands. Whenever you hear the GPS saying “recalculating route,” a human has made a decision (or a mistake) that ignored the GPS recommendation, and the GPS is adapting.
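The route-planning step described above is, at its core, a shortest-path search over a road network. A minimal sketch of that idea follows, using Dijkstra's algorithm; the graph, location names, and travel times are illustrative assumptions, not data from any real navigation system:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a weighted graph given as
    {node: [(neighbor, cost), ...]}. Returns (total_cost, path)."""
    queue = [(0, start, [start])]  # (cost so far, current node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + step, neighbor, path + [neighbor]))
    return float("inf"), []  # goal unreachable

# Toy road network: edge weights are hypothetical travel times in minutes.
roads = {
    "home":    [("main_st", 5), ("back_rd", 2)],
    "main_st": [("store", 4)],
    "back_rd": [("main_st", 6), ("store", 9)],
}
cost, route = shortest_route(roads, "home", "store")
# cost is 9, via home -> main_st -> store
```

A real navigation engine layers live traffic and road-condition data onto the edge weights, which is what makes the problem hard in practice, but the "recalculating route" behavior is just this search rerun from the driver's current position.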

Over the past few years, we’ve seen many applications that qualify as AI, in one sense or another. Almost anything that falls under the rubric of “machine learning” qualifies as artificial intelligence: indeed, “machine learning” was the name given to the more successful parts of AI back when the discipline fell into disrepute. You don’t need to build something with a human voice, like Amazon’s Alexa, to be AI. Amazon’s recommendation engine is certainly AI. So is a web application like Stitch Fix, which augments choices made by fashion experts with choices made by a recommendation engine. We’ve become accustomed to (and are frequently annoyed by) chat bots that handle customer service calls, more or less accurately. You’ll probably end up talking to a human, but the secret is using the chat bot to get all the routine questions out of the way. There’s no point in requiring a human to transcribe your address, your policy number, and other standard information: a computer can do it at least as accurately, if not more so.

Always in the Future

Mark Zuckerberg recently said that AI will be better than humans at most basic tasks in 5 to 10 years. He might be correct, but it’s also clear that he’s talking about narrow intelligence: specific tasks like speech recognition, image classification, and, of course, game playing. He goes on to say, “That doesn’t mean that the computers will be thinking…” Depending on who you talk to, a real general intelligence is 10 to 50 years out. Given the difficulty of predicting the future of technology, the best answer is “more than 10 years,” and possibly much more. When will human-level machine intelligence (HLMI) be achieved? A recent survey of experts suggests that HLMI will occur (with 50% probability) sometime between 2040 and 2050. As Yann LeCun says, “Human-level general AI is several decades away.”

Will we ever be able to point to something and say, “Yes, that’s artificial intelligence”? Yes, certainly; we can do that now. What’s more important is that we will inevitably be surrounded by AI—bathed in it—even before we know it. We take plumbing for granted; we take electricity for granted; our children take streaming music for granted. We will take AI for granted, even as it becomes a larger part of our lives.
