CHAPTER 1

The Thinking Machine

CLOSE YOUR EYES and imagine a purple cow. Now close your eyes and imagine a yellow cow. Where did these thoughts come from? How did they get there? Is there a place in your brain that stores cow images, and another with an inventory of paint cans with which to color them? These are important questions, because before we can create something that thinks, we must understand how we think.

Supercomputers can search billions of records in a fraction of a second. They can remember hundreds of billions of facts perfectly. Their electronic circuits even react at least a million times as fast as the neurons in our brains.

Pretty smart, huh? So all we have to do is build more of those industrial-size microprocessor farms, packed with supercomputers, and someday we’ll have human intelligence capable of imagining purple cows. Right?

No. Not really. In fact, not at all.

The truth is that a supercomputer is too good to behave like a human brain. A computer could never answer the purple cow question—or even imagine the purple cow. It is too precise, too correct, too predictable, too Miss Goody Two-Shoes. The computer is like the ticking of a Swiss watch, and the human brain is more like a blues note on a bender.

To behave like the human brain, a computer would have to behave like this: start searching for an item with fierce concentration, then back off a little, then jump back in, then find itself staring blankly out the window (assuming there is a window), then off to a warm reverie—shafts of sunlight bouncing off the green grass or something like that—and then suddenly, bang, back to reality with an abrupt epiphany: “Got to put Puppy Chow on the grocery list!” Now that’s more like a human brain.

Even logical thinking—the kind you might expect from a rocket scientist or a McKinsey strategist—is more like a swallow wafting in the evening air, doing loop-de-loops and acrobatic tumbles, than an arrow sent shivering into a tree. We can’t help it. That’s the way the brain works.

For that reason, the human brain is also a lousy computer. And it’s not one that you would probably care to employ. Can you imagine having a handheld calculator that begins humming “Strawberry Fields Forever” in the middle of a calculation? Or an ABS brake sensor that wonders, just as you are going into a skid, what it must be like to be an air bag deployer?

What characterizes human thought? “A period of mulling,” says University of Chicago professor Howard Margolis, “followed by periods of recapitulation, in which we describe to ourselves what seems to have gone on during the mulling.”1 In other words, just like a swallow, the human mind thinks in a series of loop-de-loops.

But we haven’t gathered here today to badmouth the brain. Quite the contrary. The brain is capable of doing marvelous things—things that computers couldn’t imagine doing (if they could imagine doing anything). The appreciation of beauty, creativity, contemplation, imagination, and, yes, purple cows—these are all in the realm of the human brain.

When we talk about the creation of a thinking machine, in fact, this is the kind of intelligence we mean: not merely a machine that can calculate a sum to the billionth decimal point but one that has a sense of reason, balance, and intuition. A machine that can learn from its mistakes, as humans sometimes can, and even move the ball of civilization forward. Can we really build something like this, ever? This is an important question, but before we can answer it, we must ask ourselves, What do we mean by thinking? What do we mean by conscious thought? To answer those questions, I need to introduce you to Dan Dennett, my former philosophy professor and mentor.2



Daniel Dennett looks like what one might imagine a wizard would: part Oz, part Merlin. Like Hollywood’s version, he has a shiny pate, bushy arched eyebrows, an immense mustache, and a beard that wiggles when he speaks. His crinkled eyes laugh with him when he exhibits his frequent dry wit. You would expect Dennett to be found in a cluttered office in an ivory tower institution, and in this respect he does not disappoint. Dennett spends much of his time in an office at Tufts University. But his personality, and his life, is much more multidimensional than that.

Dennett also sails his own forty-two-foot yacht. He’s an excellent jazz pianist (who has played many a bar). He’s also an expert downhill skier, a sculptor, and a tennis champion. His father was a Harvard PhD who worked for the OSS (and later the CIA) during World War II. The elder Dennett was killed in an airplane crash on a secret mission in Ethiopia in 1947, when Dan was five.

Dennett can speak brilliantly on many topics, but his greatest skill is in explaining what is meant by human intelligence. It was Dennett, in fact, who posed the purple and yellow cow question in his best-selling book, Consciousness Explained, and then explained the conundrum:

“The trouble is that since these cows are just imagined cows, rather than real cows, or painted pictures of cows on canvas, or cow shapes on a color TV screen, it is hard to see what could be purple in the first instance and yellow in the second . . . Nothing roughly cow shaped in your brain (or in your eyeball) turns purple in one case and yellow in the other, and even if it did, this would not be much help, since it’s pitch black inside your skull, and, besides, you haven’t any eyes in there to see colors with.”3

Beyond wondering how the images got there in the first place, Dennett takes the question one step further. Who is looking at those cows, anyhow? Who is the audience? Is there someone in the brain? Says Dennett, “The problem with brains is that when you look into them, you discover that nobody’s home. No part of the brain is the thinker that does the thinking or the feeler that does the feeling.”4

So if a thought is not in the brain, nor is the audience, where is it? What is thinking all about? And how in the world can we build a machine to replicate what seems inexplicable?

Dennett’s answer is that there is nothing magical about the brain—no particular place where thoughts are created, as though in Santa’s workshop. The brain is merely a wrinkled mass of carbon molecules, with the consistency of chilled butter, that somehow creates thoughts, reason, and emotion.

Because there’s nothing magical in the brain, there is no reason we cannot create similar functionality—thought, reason, and emotion—in a thinking machine made of, say, clay or silicon. After all, just as there is no particular reason for this lump in our heads to appreciate fine wines and music, daydream, and aspire to greater things, there is no reason that silicon or some other fundamental substance (maybe even carbon someday) could not be coaxed into creating something similar. The brain is nothing special in terms of materials—only carbon molecules—so it is reasonable to assume that we can build a machine that replicates its functions.

But what characteristics of thought should we aspire to in this artificial brain? The answer is that any machine we create must have the same loopy, iterative process as is found in the brain.

In other words, we need to create a machine that stops calculating from time to time to gaze out the window. We don’t need the Incredible Hulk of supercomputers to do that. Dennett maintains that it can be done when “fixed, predesigned programs, running along railroad tracks with a few branch points, depending on the data, have been replaced by flexible, indeed volatile systems whose subsequent behavior is much more a function of the complex interactions between what the system is currently encountering and what it has encountered in the past.”5 Yes, we need a loop-de-loop.

Eric Clapton’s Loop-de-Loop Machine

How did loopiness first arise in humans? Dennett believes it started when a single human—call her Eve—cried out, perhaps in pain. When no one responded to her, she did it again and again. And one day, the external cry became internalized. With humans, this first cry evolved into trains of thought—almost constant thought—that keep us thinking always, even when we are alone, sometimes to the extent that we talk to ourselves incessantly.6 That, according to Dennett, is where consciousness, on the order of “I think, therefore I am,” came from. In other words, like a swallow wafting in the wind, doing loop-de-loops, thoughts traveled from brain to mouth to ear, to brain, round and round, until internal consciousness arose.

That thought is echoed in I Am a Strange Loop, a stunning book by Pulitzer Prize–winning cognitive scientist Douglas Hofstadter. In the book, he argues that consciousness is an endlessly changing loop, where the brain is constantly fed information and constantly edits it, in an existence that is as elusive and self-repeating as the image we see of ourselves in a hall of mirrors.7

This is also, not coincidentally, how humans learn. “We human beings have used our plasticity not only to learn, but to learn how to learn better,” Dennett says.8 Yet another endless loop-de-loop. We repeat and repeat and repeat something until we get it down better and better and better. In his recent autobiography, guitarist Eric Clapton wrote, “I’d listen carefully to the recording of whatever song I was working on, then copy it and copy it till I could match it. I remember trying to imitate the bell-like tone achieved by Muddy Waters on his song ‘Honey Bee’ . . . I had no technique, of course; I just spent hours mimicking it.”9 This is the primary thesis behind Malcolm Gladwell’s newest best-seller, Outliers, in which he argued that the greatest achievements of humankind have come not from genius or luck but from reiterative practice.

To create real intelligence, then, we must make it on the order of the human mind, constantly questioning, learning, yearning, and looping. Just as a river is richer for its meandering, so must the brain follow a recursive path, looping like a swallow and practicing like Clapton around and around, until it not only learns but also learns how to learn.

So how do we build an Eric Clapton machine? Oddly enough, if we could create a machine that guesses, fumbles, rounds off, and is not very good with numbers (no offense, Eric), we may be closer to something that replicates the human mind. What else do we need? A machine that is recursive, that edits itself continuously, that creates all kinds of little changes, tests them against problems, and discards the losers. We want a machine that learns through repetition and that would rather be half-right than completely right (not only because half-right is faster but also because completely right is a mental railroad track, devoid of the opportunistic loop-de-loops of real thinking). In other words, we need a prediction machine.
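The guess-test-discard loop described above can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the book: a simple hill climber that mutates a guess, keeps the winners, discards the losers, and stops as soon as its answer is "good enough" rather than exact. All of the names and numbers are invented for the example.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def hill_climb(score, guess, steps=1000, tolerance=0.05):
    """Mutate a guess repeatedly, keep improvements, discard losers.

    `score` maps a guess to a number in (0, 1]; the loop stops early
    once the answer is "good enough" (within `tolerance` of a perfect
    score), preferring a fast half-right answer over a slow exact one.
    """
    best, best_score = guess, score(guess)
    for _ in range(steps):
        candidate = best + random.uniform(-0.5, 0.5)  # a little change
        s = score(candidate)
        if s > best_score:                 # keep the winner...
            best, best_score = candidate, s
        if best_score >= 1.0 - tolerance:  # ...and settle for good enough
            break
    return best

# Toy problem: home in on the number 7 by guesswork alone.
target = 7.0
approx = hill_climb(lambda x: 1.0 / (1.0 + abs(x - target)), guess=0.0)
```

Run with these invented parameters, `approx` should land near 7 without the machine ever computing the answer directly: half-right quickly, then refined by repetition.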

How do we do that? For that answer, I must introduce you to another of my mentors, Jim Anderson, whom I first met when he was the chairman of the Cognitive Sciences department at Brown University.

A Brain in a Bottle

Jim Anderson is one of the world’s great brain scientists. His particular talent is his ability to take psychological functions, such as concept formation, reduce them to the biological level, and then model them with computers. He is uniquely qualified in this area, given that he has degrees in physics and biology from MIT. Anderson is also a true Renaissance man, having held professorships at Brown in brain science, cognitive science, applied mathematics, neuroscience, psychology, medicine, and biology. Anderson, in other words, is a brain.

For that reason, when I first walked into his office in the fall of 1998, hoping against hope that he would accept me for Brown’s doctoral program, I was nervous. There he was, a thoughtful-looking man behind thick glasses, surrounded by books and papers, with—I should have known it—a brain sitting on his desk. Not a toy brain or a plastic model, but a real brain in a bottle, listing slightly to one side in a sea of dark green formaldehyde.

At the time, I myself could hardly be considered a brain—and certainly not a brain scientist. I had studied under Dan Dennett, but that was all. However, I did have another experience that I hoped would play to my advantage.

In 1994, a few years after college, I responded to the inspirational urgings of my parents (“Get a job!”) and landed a job with GTE (which, after a series of mergers, changed its name to Verizon). As the (very) Young Turk in the office, I was handed a project no one else wanted. It had to do with a thing called “the Internet” and a project called “SuperPages.” SuperPages turned out to be one of the first Internet search engines, a program that would search Verizon’s yellow pages and help consumers find the phone numbers and addresses of businesses.

With each iteration of SuperPages, I began to see the connection between what I knew of the brain and what we were doing online. For example, how information was presented online directly affected how it was perceived and interpreted both by humans and by Internet software.10 The more closely we modeled our efforts on the brain, I realized, the better SuperPages performed. Moreover, when I began to study the emergent Internet as a whole, I had trouble finding areas where there were not analogies to the brain. It finally dawned on me that if I wanted to build Internet companies, I needed to know everything I could about the brain.

When I discussed this idea with Dan Dennett, he replied that “the brain” meant science and technology—and not only philosophy and psychology. What I needed to study was brain science, he said, and the place for that was Brown University, in Providence, Rhode Island. With that, he wrote me a letter of recommendation, and I applied for the PhD program. And so here I was, standing in the doorway of the office of Jim Anderson, with (I figured) about three or four minutes to talk my way in.

The meeting didn’t start well. Most of the other applicants had done years of brain research, and my application, in comparison, looked like a joke. But the fact that Dan Dennett had sent me probably kept Jim interested enough to humor me. He asked me a few questions about the science of the mind, of which I apparently had little knowledge, and then switched gears and asked me to tell him about the Internet.

So I took a breath.

“The Internet is a brain,” I said.

Not “The Internet is like a brain.” I said, “The Internet is a brain.” It was the wildest card I had. In fact, it was the only card I had. I figured it would probably get me bounced out of his office. But there it was; I had said it.

Jim was busy taking notes (more likely, grading papers), but when I said that, he looked up. He began to speak, and with every word he became more excited. Jim talked about the brain, technology, and evolution, but mostly he talked about the fact that he had always believed that telecommunications had followed the path of the brain. For more than an hour he continued, virtually uninterrupted, except for my occasional nod and “uh-huh.” He even showed me old lecture notes and slides linking the two. “It is a wonderful analogy,” he said.

It was a bit later, when Jim’s energy subsided, that I decided to get something else off my chest: I confessed that my real reason for applying to the PhD program at Brown was not to go into teaching or research but to start a company—many companies, in fact—that would apply brain science to the Internet.

Jim studied me across his cluttered desk. “You can join the program on one condition,” he said at last.

“What’s that?” I replied feebly.

“That when you start your company, you’ll make me your first employee.”

As they say, the rest is history. I did start a company, called Simpli.com, and Jim became one of the founders. Within a few months, we had hired a good chunk of the brain science department at Brown. We went on to develop search engine technology with George Miller, National Medal of Science winner from Princeton—technology that is now the basis for the advertising capabilities of numerous Internet firms.

Better still, we sold that company in March 2000, only weeks before the Internet bubble collapsed, for about $30 million (it had been worth as much as $100 million at one point, if you include the stock that the dot-com crash made nearly worthless).

Artificial Brains

If anyone is on the right path to making an artificial brain, it is Jim Anderson. And the way he is doing it closely follows Dennett’s definition of intelligence. In other words, Anderson is building a loopy brain. It’s loopy because Jim’s approach doesn’t work in a logical or serial manner (this plus this equals this). Rather, like the brain, it does a thing called parallel processing, in which lots of ideas loop around in our heads simultaneously. Parallel processing, in many ways, is the key to understanding and replicating intelligence.

One of the cornerstones of parallel processing is pattern recognition. As our brains incorporate information in parallel, they are constantly looking for patterns to use in making educated guesses as to which process is best. As a result, we are constantly creating patterns, filling in the blanks, and interpreting the world around us, as opposed to observing it independently. Take a look at figure 1-1, and try to avoid seeing the triangle that does not exist.

You can’t, of course, because as parts of your brain take in the information, other parts fill in the blanks. Now look at figure 1-2, which shows three pillars, all identical in size.

FIGURE 1-1


Try not to see a triangle


FIGURE 1-2


Which pillar is the longest?


Most people do not believe that these pillars are the same size (they are); instead, they see a group of columns cascading into the distance. In the real world, depth perception is a far more important visual cue than size. So this misperception is not necessarily a bad thing. We process the information in parallel, discard size, and focus on depth. Our survival, in many ways, has depended on allowing the most pertinent information to percolate to the forefront.

The brain has evolved to process almost everything in parallel, and this way of processing has enabled thought, foresight, and consciousness. The brain takes in information and then processes it. We do not do so in a linear series of “x, then y, then z” steps. Instead, our minds construct theories that compete with one another. Our one brain, it turns out, is really a multicomputational system.

As Anderson has noted, “One system is old, highly evolved, highly optimized, basically associative, perceptually oriented, memory driven, and logical.” The second is “recent, oddly contoured, unreliable, symbolic and rule based.”11 In other words, the first system is very much like a computer; the latter, as you will see, is more like the Internet.

It is that second system—the cerebral cortex—that brings us human intelligence. So the human mind has billions of neurons working together in parallel, allowing us to walk, chew gum, speak, and remember someone’s name, all at the same time. This is also how we will build real intelligence into the Internet. The only way to create the loopiness and iterative nature of the human mind, after all, is to emulate this quality.

With this, we can chip away at the mystery of human intelligence. Just as the microbe hunters of the past peered into their microscopes to reveal the secrets of organisms responsible for everything from smallpox to yellow fever, so can today’s neuron hunters harness the power of the Internet to study the secrets of the mind.

Artificial Brains, Real Intelligence

I recently stopped by the Ersatz Brain Project at Brown, which, as the name implies, is focused on building a “fake brain.” My favorite neuron hunter, Jim Anderson, was there. On his desk this time was not a human brain in a bottle, but a computer screen. Anderson and his colleagues were writing software that modeled neural networks.12 At some point they will feed this software into the school’s parallel processing supercomputer. And from that they hope some signs of intelligence, even at a primitive level, may appear.

As I watched, Anderson’s programmers were building “minicolumns” of synthetic neurons. Minicolumns, composed of 80 to 100 “neurons” each, are the basic unit of the cerebral cortex. The human cortex, by the twenty-sixth week of gestation, is composed of a large number of these minicolumns, all set in parallel vertical arrays.13 Of course, the brain has a lot of them—at least 10¹⁰ neurons, connected together by at least 10¹⁴ neural connections. In the brain, these minicolumns form clusters that bind together to form horizontal connections, or what Jim eloquently calls a “network of networks” (see figure 1-3).
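As a rough illustration of this “network of networks” idea, one might model minicolumns of on/off neurons grouped into clusters with horizontal links between the clusters. This is an invented sketch, not Anderson’s actual software, and the sizes are tiny for illustration; a real cortex is vastly larger.

```python
import random

# Toy "network of networks": minicolumns of ~100 on/off neurons,
# grouped into clusters, with horizontal links between clusters.
# All sizes are invented; a real cortex has on the order of
# 10**10 neurons and 10**14 connections.
NEURONS_PER_MINICOLUMN = 100
MINICOLUMNS_PER_CLUSTER = 10
CLUSTERS = 4

def make_minicolumn():
    # each neuron is simply on (1) or off (0)
    return [random.randint(0, 1) for _ in range(NEURONS_PER_MINICOLUMN)]

clusters = [[make_minicolumn() for _ in range(MINICOLUMNS_PER_CLUSTER)]
            for _ in range(CLUSTERS)]

# horizontal connections: link every cluster to every other cluster
links = [(i, j) for i in range(CLUSTERS) for j in range(CLUSTERS) if i < j]

total_neurons = CLUSTERS * MINICOLUMNS_PER_CLUSTER * NEURONS_PER_MINICOLUMN
```

Snapping on more components, in the Lego spirit Anderson describes, is then just a matter of growing the constants and the link list.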

In the brain, neurons work by switching themselves on and off. You might think of this process as black and white, but when you put a lot of neurons together and some are black and others are white, the overall expression becomes a certain shade of gray. We can do the same thing with computer chips. They are based on 1’s and 0’s. That’s our black and white. But to imitate networks of neurons, we can also shave the 1’s and 0’s into finer slices—say, 0.1 or 0.06—and thereby get shades of gray similar to those of the brain. That’s what Anderson’s team is hoping to do.
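The shades-of-gray idea fits in a few lines (again a hypothetical sketch, not Anderson’s code): averaging a population of black-or-white neurons yields a graded value, and a single unit can carry that same shade directly as a fraction between 0 and 1.

```python
# Ten black-or-white neurons; their population average is a shade of gray.
population = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
gray = sum(population) / len(population)  # 6 of 10 are on -> 0.6

# The same shade carried by one unit, with the 1's and 0's
# "shaved into finer slices," as the text puts it:
graded_unit = 0.6
```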

FIGURE 1-3



Network of networks modular architecture


Source: Courtesy of James Anderson.

Anderson admits that he’s starting small. His goal, he says, is to build a “shoddy, second-rate” brain. But if it can be built small, he can snap on additional components. It’s like working with Legos. The more Legos you put together, the more patterns begin to form. When you snap together different patterns simultaneously, a new pattern forms. This is how the mind works.

In fact, Anderson says that a massively parallel computer—replicating the human cortex—is now technically feasible. It will require about “a million simple CPUs and a terabyte of memory for connection between the CPUs.”14 In the end, Anderson hopes to have shaped an intelligence that is an “adequate approximation of reality.” For him that means it is “good enough.” But good enough is far beyond the capabilities of computers. Good enough would combine speech recognition, object recognition, face recognition, motor control, complex memory functions, and information processing.

Anderson is working on groundbreaking theory and application. But if we are looking for a second-rate brain, it is already being built on a massive scale. What is the world’s biggest parallel architecture—composed of millions of computers connected to one another? The Internet.

Imagine if the Internet were used to process information rather than merely pass it along. Well, it is being done by some of the biggest companies in the world.

Let’s move on to the Pacific Northwest and take a look.
