5. The Human Computer: How to Rewire and Turbo-Boost Your Ape Brain

According to a 2013 Gallup poll, 70 percent of Americans hate their jobs. A 2015 survey showed 70 percent of Britons dread the week ahead.

Across the pond in the UK, Kevin Warwick isn’t one of those people. He is a professor of cybernetics at the University of Reading in England, and when he goes to work he gets to do things such as grow miniature brains from mouse neurons and put them into robots to see what happens.

Yes, really.

How cool is that?

But wait, there’s more.

Besides that, he’s done incredible things such as connecting his nervous system to his wife’s, allowing the couple to communicate remotely using their thoughts.

Warwick is at the forefront of brain science. He’s also working with a team of researchers to develop a new brain stimulation device to treat neurological issues, such as the notoriously hard-to-treat Parkinson’s disease.

Then there’s this cool thing. In 2014, he witnessed the first computer program claimed to have passed the Turing test, fooling human judges into thinking it was human.

His colleagues across the world are doing equally impressive research in brain science. Dr. Adam Gazzaley is a leading neuroscientist developing technologies that work on the principles of brain plasticity. He is also a professor of cognitive neuroscience at the University of California, San Francisco, and he makes therapeutic video games. Sounds oxymoronic, but his suite of brain-training video games is designed to improve or reverse neurological disorders and build on human cognitive abilities.

Yes, video games that make you smarter. Go figure.

Then there is our friend Ray Kurzweil, whom we introduced in Chapter 1, “The Emergence of (You) the Human Machine.” Kurzweil is building a synthetic neocortex, the part of the brain that makes humans the smartest species on the planet.

If that does not amaze or excite you, the rest of this chapter will, because it’s all about what happens when the brain—the most complex human organ—gets wired up with technology to make it better than ever.

Here’s the bad news though.

As intriguing as all this is, brain science is pretty new. It has only made real inroads since the late twentieth century, and the big developments have come in the last two or three decades.

Until brain imaging came about in the 1970s, it was the Neuro-Dark Ages. Imaging is what kicked off any real understanding of how the gray goo between our ears works.

But let’s get real here. Quite frankly, the top experts still don’t know much about how the human brain works. When we asked Gazzaley how close scientists are to understanding how the brain works in its entirety, he said, “Without knowing exactly what the end point is, it’s hard to predict what percentage we’re at along the way.”


Image The Unmapped Continent

Scientific American eloquently spelled out the progress (or lack thereof) in neuroscience in an article in 2012:

“There is one largely unmapped continent, perhaps the most intriguing of them all, because it is the instrument of discovery itself: the human brain. It is the presumptive seat of our thoughts, and feelings, and consciousness. Even the clinical criteria for death feature the brain prominently, so it arbitrates human life as well.”


So what do scientists know? Well, a lot about the brain of Caenorhabditis elegans, a dumb little worm. In 2011, the worm’s connectome, the map of its neural pathways, was published. The roundworm is 1 millimeter in length (think: the width of a strand of spaghetti). Its behavior is basic. It travels from place to place, inching forward and back. It lives in soil or rotting vegetation, and it feeds on microbes such as bacteria.


Image Worms Our Heart

See a useful video on the C. elegans worm: http://superyou.link/dumblittleworm


To put that into perspective: The C. elegans worm possesses 0.3 percent of the intelligence of humans, though subjectively it might seem like more when compared to some of the candidates in the lead-up to the most recent presidential race. C. elegans has 302 neurons and 7,000 synapses. Humans have approximately 100 billion neurons and 100 trillion synapses. Humbling, yes.

“The C. elegans worm possesses 0.3 percent of the intelligence of humans.”
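If you want the humbling quantified, here is a quick back-of-the-envelope comparison. The ratios are ours; the counts are the approximate figures cited above:

```python
# Rough scale comparison between the C. elegans connectome and the
# human brain, using the approximate counts cited above.
worm_neurons, worm_synapses = 302, 7_000
human_neurons, human_synapses = 100e9, 100e12

print(f"Neuron ratio:  {human_neurons / worm_neurons:,.0f}x")    # ~331 million times
print(f"Synapse ratio: {human_synapses / worm_synapses:,.0f}x")  # ~14 billion times
```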

This is not to say mapping this tiny worm’s brain was not a major breakthrough in the grand scheme of neuroscience. We don’t want to diminish the achievement here.

But scientists still have a long way to go toward understanding (and mapping) the human brain. It’s still really early in the history of neuroscience. If this were the history of flight, we’d perhaps still be at the equivalent of crossing the Atlantic for the first time. Let’s rewind even further back and inspect the beginnings of neuroscience, because looking back we can see accelerating technologies develop along a timeline, and the shrinking gap between breakthroughs tells us how fast new developments will come.

A Brief Early History of the Brain

Approximately 200 million years ago, a pivotal evolutionary shift gave man the ability to learn about himself and his behavior. Before that time, it simply wasn’t possible. Man was not intelligent enough to study ... well, anything, including himself. This required a change in the structure of his brain’s anatomy. It grew, along with his forehead, to house an essential new structure called the neocortex (you might recall this from Chapter 1, “The Emergence of (You) the Human Machine”).

It’s the most recently evolved structure of the mammalian brain. And it’s an important one. It serves the major functions of sensory perception, spatial reasoning, thinking, and language. This allows for social behavior, tool making, and high-level consciousness.

The neocortex prompts humans to learn and study. To create science. To create art. And, most importantly, to create language.

Language was a crucial technological development: “The first thing we invented was a communication technology, called spoken language,” said Kurzweil. “This gave us one way of expanding our neocortex beyond the 300 million pattern recognition system modules. It allowed us to solve problems with people or a community of people by communicating with each other.”

Early records of man’s notions about how his biology and behavior were connected are speculative. Ancient cultures had no real technologies to help them understand the mechanisms of the human “machine.”

In fact, most of what was known about the brain came out of a need to treat brain injuries. The oldest medical treatise in history, the Edwin Smith Papyrus from 1700 b.c.e., contains the first written record about brain injury.

The true early pioneers of neuroscience were the Ancient Greeks. Hippocrates, who lived between 460 and 379 b.c.e., wrote On Injuries of the Head, the first book to suggest that the brain controlled the body. The Greeks discovered that an injury to the left hemisphere of the brain would affect the right side of the body, and vice versa.

These early discoveries were based on an ill-conceived theory of medicine known as humorism, which held that the body was made up of four bodily fluids—blood, phlegm, yellow bile, and black bile. When the four were in balance, the theory went, the body functioned normally. An overproduction of black bile, for example, was linked to the brain disease known as epilepsy. Today, of course, we know that is ridiculous. Unfortunately, humorism governed Western medical thinking for more than 2,000 years.

Greek philosopher Aristotle took a different approach. In 337 b.c.e., he proposed that the heart controlled all mental processes. The heart moves constantly, is centrally located, and is the first organ to develop in the embryo. For these reasons, he thought, it controlled man, and was responsible for epilepsy.

Luckily, this hypothesis was abandoned. In 177 a.d., a Greek surgeon named Galen of Pergamon began cutting heads open to understand the brain’s anatomy. (There’s really no pretty way to phrase that.)

Galen is known as the “Father of Anatomy.” He performed studies on the brains of animals and human cadavers. He discovered that the brain has four fluid-filled cavities, known as ventricles, that contain major communication networks. Each one contains cerebrospinal fluid, which protects the brain by bathing it in nutrients while also eliminating waste.

Galen figured the fluid contained “animal spirits,” weightless invisible miniature ghosts with their own wills and intentions.

He said the spirits were also the culprits responsible for psychiatric disorders. This was long before Scooby Doo, so in theory these animal spirits got away with it because there were no meddling kids to stop them.

Finally, in the seventeenth century, physician Thomas Willis decided to map out the brain’s physical anatomy. He’s the “Father of Neuroscience” and the man who coined the term “neurology.” He also published six books on brain anatomy. His research was revolutionary for its time.

Based on the structural differences of the human brain relative to other animal brains, Willis believed man had an immortal soul because he had a higher level of cognitive function than any other species. Said another way, Willis figured we were all pretty smart and this was his way of explaining consciousness, a phenomenon that scientists today still do not understand. (We will get to that later.)

Other scientists of that time took a similar approach. They believed in something called “dualism,” which is a view that the mind is a spiritual entity. People who bought into dualism believed the mind was something ethereal and godly whereas the body is purely physical ... and they are distinctly separate.

French philosopher René Descartes believed in dualism, but was the first to view the human body as a machine. He believed the mind—whatever the “mind” was—controlled the human body. And the human body fed the mind information it collected from the environment.


Image Cogito Ergo Sum

French philosopher René Descartes is the man who coined the famous phrase Cogito ergo sum, or “I think, therefore I am.”


This was progress. He based his theory on his observations of mechanical statues in the royal gardens at Saint-Germain in France. Powered by water, these statues would move in ways reminiscent of humans, even though the internal parts were springs rather than muscles. Watching the statues made him realize that human beings were, in essence, complicated machines, and that something far more complex was animating us.

Nevertheless, Descartes was the first to use technological principles to try to explain how the nervous system functioned. His use of technology to understand the functions of the brain was carried into the research of scientists John Walsh and Luigi Galvani. Their pioneering work contributed to the discovery that the brain is an electrical system.

One of the greatest (and most notoriously overlooked) breakthroughs in neuroscience occurred in 1773 when Walsh used strips of tin foil to generate a spark from an electric eel. The discovery was never documented, but Walsh was awarded a medal for his work. It’s also worth noting that with a bit more tin foil, he could also have discovered barbecued eels. But that’s another story.

Walsh’s work encouraged scientists such as Galvani to further probe the electrical nature of the brain. In 1781, the Italian scientist Luigi Galvani made a frog’s severed legs move using a current from a static electricity generator.

From that point on, brain function started to become better understood. German physiologist Johannes Peter Müller discovered that although all brain cells communicate by electrical impulse, the systems of communication for each sense and bodily function are different.

Building on this, French physiologist Pierre Flourens invented experimental ablation, a research method in which a specific region of the brain is interfered with or removed to learn how it’s connected to behavior.

This new research technique enabled new discoveries about the functions of specific brain regions. Neuroscience was gaining clout as a true science backed by empirical data, not mere assumption.

However, for some strange reason, brain science took a U-turn with phrenology, which gained popularity between 1810 and 1840. This now-debunked pseudoscience suggested that measurements of the head could determine the characteristics of an individual. The theory was a fad with no real evidence to support it. That didn’t stop people from using it to make baseless and sometimes extremely unfortunate judgments about a person’s character. In fact, as late as 1928, a phrenologist’s testimony was used to help convict a woman accused of murdering her husband. (Never trust bumpy-headed people!)

By the late nineteenth century, developments in brain imaging technology improved neuroscience dramatically. New devices gave scientists the ability to view the brain at work. Angelo Mosso invented the first neuroimaging technique in the late 1880s: a device called the “human circulation balance,” which measured the redistribution of blood to the brain in real time. This was the genesis of more advanced noninvasive brain imaging devices.

Notice here that it took a long time to get to this point from the time when the Ancient Greeks started their work. This technology, however, further accelerated the understanding of the human brain.

Fast forward to the 1970s through the 1990s, when there was an influx of brain imaging technologies. Many of them are still in use today, 20 to 25 years later, and they are instrumental in helping scientists understand the brain.

Thanks to these brain-measuring tactics, neuroscience made a major discovery in the twentieth century that changed the course of the entire field. Between the 1990s and the mid-2000s, studies led many scientists to conclude that the brain is plastic, meaning it can change and reconfigure itself. Until then, the accepted belief was that the anatomy of the brain was fixed once an individual reached adulthood.

The brains that humans have today evolved from what they were about 200 million years ago. Because humans are bipedal—meaning that our species stands upright on two legs—the upright posture limited the size of the cranium that could fit through a woman’s birth canal. So, when a baby is born, its brain is not fully developed.

Early scientists didn’t need much technology to figure that one out. The fact is that a baby can’t go to the toilet, feed itself, or clean grout between dirty floor tiles. And forget mathematical equations. If you ask a baby the formula for the circumference of a circle, it won’t respond with 2πR. The moment a child enters the world, it needs much more development. It’s not until late adolescence that the human brain reaches its fully developed weight of 1,400 grams. Although let’s face it, like, even then teens aren’t, you know, the sharpest knives in the proverbial drawer.

Early scientists believed that brain development stopped there. The former scientific model assumed each individual is born with a finite number of neurons. That is simply not the case.

It’s been scientifically proven that the brain’s anatomy goes through structural changes during adulthood simply by processing new information. This process is known as neuroplasticity.

Your Brain Is Plastic? What?

We are not saying your brain is made out of the same stuff as Tupperware. Plastic, in this context, means moldable or changeable.

Neuroplasticity is the brain’s miraculous ability to change its structure through information processing, or in other words, through thoughts.

Yes, you can think your way toward changing the physical structure of your brain.

And what’s cool is if you think your brain is not that super, the good news is it can be, because you can train it to redesign itself to be super smart.

“... if you think your brain is not that super, the good news is it can be, because you can train it to redesign itself to be super smart.”

In fact, your brain is constantly in a state of adaptation. It creates new communication pathways based on what it requires at a given moment in time. It is continually adjusting to the world around it, to be sure that both the brain and the body not only continue to survive, but also function in a way that’s optimal for both.

The brain is always in a state of optimization. Think of it like a car’s GPS system. You program in your destination, but a lot of little things can change along the way: you might find a detour on your route, you might not be paying attention and miss a turn, or you might just decide to take the scenic route. Whenever you deviate from the GPS’s original plan, it takes a moment to acknowledge the deviation and then creates a new plan based on the new variables, giving you fresh directions to your destination. The brain works in a similar way (except without the nagging mother-in-law voice).
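For the technically inclined, the rerouting idea fits in a few lines of code. This is our toy illustration of the general principle, with made-up roads and travel times, not anything from a real navigation system: compute the best route, and when conditions change, simply compute it again.

```python
import heapq

def shortest_path(graph, start, goal):
    """Plain Dijkstra search: find the cheapest route through a weighted graph."""
    queue, visited = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

roads = {"home": {"A": 5, "B": 2}, "A": {"work": 3}, "B": {"work": 9}}
print(shortest_path(roads, "home", "work"))  # (8, ['home', 'A', 'work'])

roads["A"]["work"] = 20                      # a detour makes the old route slow
print(shortest_path(roads, "home", "work"))  # (11, ['home', 'B', 'work'])
```

The brain, the analogy goes, does the biological equivalent: when a pathway stops paying off, it re-plans with the connections it has.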

For instance, when the body suffers an injury, such as a disease or stroke, the brain reorganizes its neural circuits to adjust to changes in the body and its new way of interacting with the world.

A classic example of neuroplasticity is evident among patients with phantom limb syndrome, a condition in which an amputee still feels the presence of a limb long after it’s been removed from the body.

This bizarre phenomenon is common in patients with a missing arm. They often feel pain at the site of the missing limb when they are touched on the face. They still feel the touch to the face, but anytime they receive a sensation in this region, the missing arm feels pain, too.

As it turns out, the brain region that recognizes a touch on the face is located next to the region that would normally receive sensory input from the missing limb. These two areas become crossed when the limb is removed. The region that’s connected to the missing limb becomes hungry for sensory input. So, it simply reorganizes itself to get what it needs by invading the neighbor region. It’s like a toddler who wants another kid’s blocks.

American neuroscientist Paul Bach-y-Rita understood this adaptation principle long before it was scientifically accepted. In 1969, he built a machine that allowed congenitally blind people to see by being touched.

It sounds impossible, but Bach-y-Rita simply rejigged the processes involved in sight. He understood that for a person to see, pulses must be sent from the back of the eye to the brain. That means the eye itself is simply a messenger: it converts information from the environment into signals at the retina, in the back of the eye. The brain’s job is to decode the message it receives, so that the person understands what’s in their visual field.

Pulses for seeing aren’t fundamentally different from the pulses involved in other senses, such as touch. What differs is the frequency of the pulses that get delivered to the brain. Each message tells the brain what it needs to understand.

A patient using Bach-y-Rita’s machine would sit in a chair and have small needles touch his or her back. The chair was connected to a camera that acted as a set of eyes. Based on what the camera saw, it provided information to the needles that touched the patient. That information was sent to the brain and allowed the participant to see what the camera saw. Watch a video that explains Bach-y-Rita’s revolutionary technology here: http://superyou.link/brilliantbachyrita.
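To make the principle concrete, here’s a minimal sketch of the kind of mapping the chair performed. This is our illustration, not the actual 1969 hardware: squash what the camera sees into a coarse grid, and let each cell’s brightness set how hard the corresponding needle presses.

```python
import numpy as np

def image_to_tactile(frame: np.ndarray, grid: int = 20) -> np.ndarray:
    """Downsample a grayscale camera frame (values 0-255) into a grid x grid
    array of needle intensities between 0.0 (no poke) and 1.0 (full poke)."""
    h, w = frame.shape
    frame = frame[: h - h % grid, : w - w % grid]  # trim so blocks divide evenly
    blocks = frame.reshape(grid, frame.shape[0] // grid,
                           grid, frame.shape[1] // grid)
    return blocks.mean(axis=(1, 3)) / 255.0        # brighter pixels press harder

# A fake 200x200 camera frame becomes a 20x20 pattern of pokes on the back.
pattern = image_to_tactile(np.random.randint(0, 256, (200, 200), dtype=np.uint8))
print(pattern.shape)  # (20, 20)
```

The hard part, of course, isn’t the mapping; it’s that the brain learns to read the pokes as vision.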

This study was revolutionary and ahead of its time. It took many more studies for scientists to conclude that the brain is plastic. Another popular experiment was conducted by Eleanor Maguire. In 2000, she discovered that a key brain region for processing memory—the hippocampus—was larger in the brains of London taxi drivers. The job forced the cabbies to learn and remember up to 400 different travel routes. This caused a redistribution of their gray matter so that their brains could better retain navigational information.

The fact that the brain is plastic also means humans can heal their own brain deficiencies through conscious thought and action to intentionally rewire pathways.

In the book The Brain’s Way of Healing, author and prominent Canadian psychiatrist Norman Doidge writes about John Pepper, a South African patient with Parkinson’s disease. Pepper retrained himself to walk using conscious thinking processes he developed himself. Using thought alone, he reignited areas of his brain that had fallen out of use.

Another example of this is David Webber, who went blind from an autoimmune disease, but then used meditation and hand-eye coordination exercises to restore his vision, thanks to the brain’s amazing ability to restructure itself.

The discovery that the brain can not only create new connections but also reverse disease means exercising the brain has never been more important. For this reason, in recent years, there’s been a reinvigorated focus on brain-training techniques that use thinking as a tool to expand cognitive function or reverse degradation.

Do-It-Yourself (DIY) Brain Technology

Because the brain is plastic and not static, there are techniques anyone can use to prevent cognitive decline and to enhance intelligence and ability.

This is also why in the last ten years there’s been an emphasis on developing new and less invasive technologies that improve brain chemistry. And it’s an exciting development for anyone dealing with brain issues such as Alzheimer’s disease, Parkinson’s disease, and more commonly, the decline that naturally occurs with old age.

If you want to stay smart and perky without the need for brain chips, drugs, or surgery, then you will want to integrate the following practices into your daily ritual.

Go Learn Something

Whether you are conscious of it or not, your brain is always adapting. It creates new connections, prunes dead pathways, and it all happens naturally and without effort when you do one simple thing—learn.

When you practice a new activity, a new group of neurons chatter to each other. This is known as an electrochemical pathway. Over time, repetition of the same learned task or idea creates a stronger, faster connection. Externally, it looks like you are getting better at a new skill or task.

Neuroscientists describe it this way: “Cells that fire together, wire together.” Over time, they communicate more efficiently. The more the same network is stimulated, the stronger it becomes.
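Computer scientists borrow this rule under the name Hebbian learning, and it fits in a few lines. The following is an illustration of the principle, not a simulation of real neurons: connections between units that are active at the same time get stronger, and every connection slowly decays, which is the pruning we describe below.

```python
import numpy as np

def hebbian_step(w, pre, post, lr=0.1, decay=0.01):
    """One Hebbian update: strengthen connections between units that are
    active together; let all connections decay slightly (unused ones fade)."""
    return w + lr * np.outer(post, pre) - decay * w

w = np.zeros((3, 3))                 # connection strengths, post x pre
pattern = np.array([1.0, 0.0, 1.0])  # two units that keep firing together

for _ in range(100):                 # repetition strengthens the pathway
    w = hebbian_step(w, pre=pattern, post=pattern)

print(np.round(w, 2))  # big weights link the co-active pair; the rest stay 0
```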

Consider a chain of stock boys unloading a delivery truck at the local grocery store who are working as a team to get the job done. Angelo stands inside the truck. He gives a box of tomatoes to Evan, who passes the box to Stefan, who loads the box in the warehouse. By the time this team has moved the sixty-eighth tomato box they’re likely to have sped up the time it takes for a box to move from the truck to the store. Actions among the team start to happen automatically. Everyone gets into a rhythm and they all know what to do.

Likewise, brain communication pathways work together to pass signals through, and with repetition they become more efficient and adept at delivering that signal. It’s like memorizing a secret code and learning it so well you can look at a line of apparent gibberish and immediately see the original message in all the mayhem.

Here’s the kicker: When you stop practicing certain skills, the brain learns to eliminate the pathways it once created. The point of this pruning is to get rid of the unused pathways to make room for new ones. When this happens, you forget the secret code, and have to manually decrypt every message by hand, in a time-consuming fashion.

Do you remember what your family’s phone number was when you were 6? If you haven’t thought about it in a very long time it might have been erased by this natural process. However, recalling it every year (or more often) will keep it around.

A musician can easily relate to this process. A young piano player who practices the same piece of music repeatedly will quickly learn a pattern of moving their fingers across the keyboard. If they stop playing for years and later sit down at a keyboard, they’ll still be able to play the piece but more slowly and it will involve greater concentration. If the same pianist were to relearn a piece through daily practice they could restore the connection fairly quickly because it’s already been learned once.

So, continued learning is crucial for ensuring that the brain connections fire quickly and accurately. The brain—like an unemployed stock boy on the couch in his Calvins—gets lazy when it’s not being used.

This process also explains how negative thinking patterns and behaviors get created and reinforced. The brain doesn’t differentiate. It simply processes the information. It’s always moving the individual toward pleasure and away from pain.

Consider the fundamental brain processes that lead to someone becoming an alcoholic. The first sip of a crantini sends a signal to the brain’s reward center to stimulate the production of a happy mood chemical called dopamine. The brain likes dopamine because it produces a sensation that registers as “feeling good.” Naturally, the brain wants more of the “feeling good,” so it tells the body to tell the bartender to pour another tasty drink.

Communication pathways speed up and strengthen because repeated communication stimulates the growth of a natural insulator called myelin. It’s a fatty, white material that grows around brain chemical pathways and helps facilitate connections.

This process happens a lot during childhood. Kids are little myelin factories. This is why they learn faster than adults. Researchers hope to use this knowledge to develop new technologies to help adults learn (once again) at the same rate as children.

The main takeaway here: Learn more. Learn often. Strengthen those connections to ward off brain disease. Or, take the advice of Kay’s 91-year-old Oma: “Do more crossvort puzzles and vatch a lot of Y-eopardy.” Crossword puzzles and Jeopardy keep your myelin growing!

Meditate on This

One of the earliest types of brain technology is meditation. People have been practicing it for more than 2,500 years. Some of those people, such as Siddhartha Gautama, a Nepalese philosopher (also known as a guy called Buddha), sought a phenomenon called “enlightenment.” In searching for that mystical thing—described as “liberation from human suffering”—a meditator’s brain is doing something very scientific: rewiring itself using the principles of neuroplasticity. This rewiring, as we have discussed, has many brain benefits.

An outsider might look upon a meditating human and regard them as somewhat silly, sitting there in silence, usually with legs crossed, body twisted like a salted pretzel. Practiced meditators sometimes use props such as odd-looking angled cushions. Sometimes they sit still for hours on end.

However, meditation is an active process. During the meditative act, the state of attention is highly alert but relaxed. Objectivity is practiced by being aware and acknowledging what is happening—the sensations that are being experienced—without making any judgments about them. An individual learns to “be with what is” in reality, with indifference to it.

For instance, a meditator sitting on a hill who feels a brush of wind would think about the wind like this: “Oh, I feel a brush of wind.” He or she would not engage in subjective thoughts such as, “It’s pretty cold,” or “Why the heck am I even on this hill anyway when I could be cleaning the grout between the tiles in my kitchen?”

Surprisingly, this type of structured thinking has many brain benefits. A successful meditator (because it is possible to do it wrong) is simply teaching his or her brain to better register and control the information it is processing. Externally, this slows down the automatic responses that are learned and happening between the brain and body.

To better explain the distinction, consider the comparison that follows.

Ned the nonmeditator gets rear-ended by an idiot driver. His immediate reaction is to spring out of the driver’s seat of his car, stomp over to the idiot driver, and scream at him. Ned’s reaction to bad things in life is to flare into a raging anger. His response is automatic. Sometimes it doesn’t work in his favor, such as when he argues with his wife because she withdraws her wifely services—like lunch making—which he is rather fond of.

Now there’s Molly the meditator. She’s driving one day and she gets rear-ended by an idiot driver. While Molly used to react like Ned, she’s been practicing 1 hour of meditation each day consistently for 2 years now. Her reaction is to feel the anger, understand it’s there, and choose what she’d like to do with it. Her anger doesn’t go away but she can better control her reaction.

That’s why Ned got punched in the face by the idiot driver, who was 6 foot 4 inches and 300 pounds, while Molly serenely noticed the snake tattoo on his neck and quickly exchanged insurance information and got on with her day.

It’s been suggested that meditators have a superior ability to regulate and control emotions. Meditation has also been linked to lowered stress, better sleep, improved attention and concentration, and greater levels of happiness.

In the twenty-first century, studies to better understand the benefits of meditation are part of a growing sector in neurological research. Brain imaging tools, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), are commonly used to watch activity in a meditator’s brain during a session.


Image Spatial Resolution Doubles Each Year

Spatial resolution of MRI technology is doubling every year. That means the voxels (the 3D pixels that represent visual information) are getting smaller: halve a voxel’s edge length and eight voxels fit where one fit before.


In 2015, a study from University of California, Los Angeles, reported that meditators who had practiced for more than 20 years had better brain health than nonmeditators of the same age.

In 2012, a team led by Dr. Judson Brewer of the Yale School of Medicine reported that regularly practicing mindful meditation decreased activity in an area of the brain called the default mode network (DMN). Control of the DMN is linked to better control of thoughts, increased focus on the present moment, and potentially a greater happiness level.

And, in 2011, a team led by Sara Lazar at Harvard concluded that eight weeks of a specific form of meditation known as mindfulness-based stress reduction (MBSR) produced structural changes in key brain regions—the hippocampus and the amygdala—that led to a higher resilience to stress.

The only apparent downside of meditation is that it takes time and effort. For an individual to reap the benefits, he or she must practice regularly for a minimum of 15 to 20 minutes per day, though more is recommended.

It might not be possible to fit that into a productive life between the (some would argue) more important stop for the Starbucks morning latte, the budget meeting, and the school pick-up of small, sniffly offspring. But, that’s okay, there are new technologies providing the same effects.

Brain Fixers

Eat well. Learn. Meditate. These are good things to do for brain health, but none of them is a foolproof way to prevent neurological illness. These methods only help when the brain functions as a complete system; that is, when all the brain’s mechanisms are working as they’re supposed to.

Because scientists don’t yet have a complete picture of all the inner-workings of the brain, little is known about what can be done to completely stave off brain illnesses such as Alzheimer’s, schizophrenia, bipolar disorder, or major depression, to name a few nasties.

What is known about brain issues is that they all involve a combination of environmental, lifestyle, and genetic factors. Unfortunately, there’s no specific formula that can predict the likelihood of brain disease or malfunction. At least not yet.

Medical conditions affecting the brain are classified as neurological disorders or mental illnesses. Included in this category are conditions involving damage to brain regions and their functions that are caused by deterioration or a genetic defect, or from a physical accident. These issues hamper an individual’s ability to interpret his or her world, behave socially, or operate the body efficiently.

A brain that doesn’t function as it should can have extremely negative consequences. The experience of life for the individual is altered. He or she can lose the ability to be happy or, worse, become a social outcast, unable to make a living or afford a home.

Consider what happened to famed railroad foreman Phineas Gage. In 1848, his head was impaled with a steel tamping iron while preparing a site for blasting. Improbably, he survived, and managed to live another 12 years following the accident, but he was forevermore a nasty jerk. The reason? The tamping iron punctured a brain region essential for social politeness and compassion. (Not coincidentally, this region does not fully develop until the age of four. This is why toddlers use the word “mine,” stomp their feet, and have 3-minute meltdowns when their mothers don’t feed them more chocolate milk.)

A brain that doesn’t function as it should can also be unpleasant to live with, which is why there’s a need for new technologies to treat these conditions. Prior to the development of technologies such as drugs and electrotherapy, the ways of treating mentally ill patients were barbaric.

Gage’s famous accident inspired an unfortunate treatment plan: the lobotomy, which involved a doctor putting a rod through a patient’s skull to calm them down. Other early methods were throwing patients into ice baths, poking holes in their skulls, or inducing insulin comas to see if that would do anything useful. When nothing could be done, patients would go to a hospital and stay there until they died. These solutions treated the adverse behaviors, not the actual brain issue.

In a 2015 TED Talk, neuroscientist Greg Gage (no relation to Phineas from what we can tell) said, “One out of five of us—that’s 20 percent of the entire world—will have a neurological disorder. And there are zero cures for these diseases.”


Image Zero Cures

See Greg Gage explain this in his TED Talk: http://superyou.link/greggage


He’s right. There are still no cures for brain diseases. There are, however, treatment options.

The two challenges to consider when creating technologies to treat brain illness are:

• The brain is located under the skull, so to fix it, the method needs to be something that can get to the brain. It usually has to be as invasive as cutting open the skull and tinkering with the rather complex organ. And the cells in the human brain (a.k.a. neurons) range in size from 4 microns (0.004 millimeter) to 100 microns (0.1 millimeter), so anything used to treat or work on them needs to be tiny itself.

“One out of five of us—that’s 20 percent of the entire world—will have a neurological disorder.”

• Then there is this perplexing problem: We still don’t fully understand how the brain works. So cutting it open and playing with the parts is not an ideal way of treating it.

That brings us to the nonsurgical treatments. Technologies such as brain drugs and devices that use electrical currents to rewire brain communication pathways are the other means of correcting a brain that has developed abnormal functions. These therapies are generally better than lobotomies because they often work fairly well. However, they don’t always.

Brain-Zapping Fixes

If you ask your local tech geek, they will often give you this advice when dealing with a computer problem: “Turn it off and on again and see if that fixes the problem.” It usually does for many common glitches.

When you reboot a computer, it temporarily stops all systems and then restarts them, clearing potential errors out of memory that might be causing the malfunction and resetting all processes. A similar process is used to treat brain issues. Consider that the brain is simply an electrical information processing system. To fix it, you want to reboot it to correct the issue. Here are some treatments that work on this principle.

Electroconvulsive Therapy

During an electroconvulsive treatment, a person receives a stream of electric currents to the head that go into the brain and induce a short seizure. The result is changes in the patient’s brain chemistry that might reignite lagging or malfunctioning communication processes. It might sound like a crude treatment, but it is highly effective for some people. And the newer treatments use better equipment and technology, so it’s much safer than it used to be.

In the late 1930s, early treatments involved extremely high doses of electricity that were applied without pain medications or anesthetics. This resulted in major issues such as fractured bones and memory loss. Not ideal.

Today, patients are given multiple treatments instead of one giant one. And often, treatment is administered on only one side of the head, which is called unilateral, instead of both sides simultaneously, which is called bilateral. The treatment is given under general anesthesia, so it is thankfully painless.

The patients are asleep for most of the treatment. When they wake up they might experience some side effects—headache, upset stomach, and muscle aches. Some patients experience minor memory loss. Most side effects are short-term and go away within a week or two. In most cases, unilateral electroconvulsive therapy (ECT) is safer and has fewer unpleasant side effects than the bilateral version.

The treatments are administered two to three times per week for three to four weeks. Most patients receive somewhere in the range of 6 to 12 treatments, with the number depending on the severity of symptoms.

There are more targeted forms of electrical-stimulation treatment that send signals to specific brain regions rather than rebooting the entire system. Some scientists believe that focusing on a specific brain area is more effective and reduces the risk of side effects commonly associated with ECT.

These targeted forms are:

Vagus Nerve Stimulation (VNS)—This technique is slightly more invasive than ECT, tDCS, and TMS (see descriptions that follow), as it requires a surgical implant under the skin. It sends electrical impulses through the vagus nerve, which carries messages from the brain to the body’s major organs such as the heart, lungs, and intestines, and to areas of the brain that control mood, sleep, and other functions.

Transcranial direct current stimulation (tDCS)—A technique that involves sending a constant, low level of electrical current through electrodes on the scalp to target specific brain regions. Depending on the direction of the current, it nudges activity in the targeted region up or down.

Transcranial magnetic stimulation (TMS)—A procedure that uses magnetic fields to activate specific brain cells. The pulses are delivered via a machine with a large electromagnetic coil placed against the scalp near the forehead, where the changing magnetic field induces small electrical currents in the brain tissue beneath.

Deep Brain Stimulation

A relatively new form of treatment that uses electrical currents to manipulate brain pathways is deep brain stimulation (DBS). It was first approved by the FDA in the late 1990s to treat tremors, and is now used to manage symptoms of Parkinson’s disease, dystonia (a disorder of abnormal muscular tone), and obsessive-compulsive disorder. Most recently, it has gained attention as a potential treatment for chronic pain issues and mood disorders, such as major depression.

DBS requires brain surgery so that miniature electrodes can be inserted into specific regions of the brain (see Figure 5.2). The electrodes are connected with wires to a surgically implanted battery pack that’s commonly placed near or under the collarbone. The treatment requires regular adjustments from a physician to tweak the level of stimulation to manage the patient’s current symptoms. Patients are also given a device that gives them some control. However, there are risks with DBS, such as coma, brain hemorrhage, cerebral spinal fluid leakage, seizures, and paralysis.

Image

(Illustration by Cornelia Svela.)

Figure 5.2 Deep brain stimulation requires the surgical insertion of electrodes deep in the brain. It is used to treat Parkinson’s disease and, in the future, possibly other neurological disorders.

Consider this typical trajectory for a Parkinson’s patient. He or she is prescribed a dopaminergic medication called levodopa to manage the motor issues that come with the disease. It works for approximately five years, but then the symptoms start to reappear. More (or other) medications might be required to control the disease. But, with higher levels of the drug come new side effects. A common one for Parkinson’s patients is dyskinesia, which causes involuntary motor movements. DBS treatment complements these drugs and gets rid of many of these terrible symptoms.

Dr. Helen Mayberg runs a research team at Emory University School of Medicine that has had a lot of success using DBS to treat severe cases of major depression.

In 2008, when trials began, 30 patients received implants. Mayberg’s team continues to track the progress of 27 of those patients. She said, “We have about a 75 percent sustained response rate now out to seven years, with continued stimulation.”

The electrodes they insert target a specific brain region called Area 25. (Not to be confused with Area 51, where the government keeps, you know, captured aliens.)

Area 25 is rich in transporters for serotonin, the mood chemical that also affects appetite and sleep. This area governs a number of key brain regions involved in processing memory and mood.

In a 2012 interview with CNN, Mayberg confessed, “To be brutally honest, we have no idea how this works.” She acknowledged that more study is required to understand which variables are at play. “Maybe we are doing something wrong. Maybe the electrodes aren’t positioned correctly. Or, maybe they are not the right patient. That means we’ve got to understand the biology better.”

At this point Mayberg’s trials are on hold while they consider using better biomarkers and improve targeting. This means that FDA approval for depression or other mood disorders using the technique won’t come anytime soon. But the work continues.

That said, aside from Mayberg’s project, there are new initiatives currently being developed with more sophisticated devices at the National Institutes of Health (NIH) and Defense Advanced Research Projects Agency (DARPA).

Brain Drugs

Pharmaceutical drugs are currently the number one treatment method used in Western medicine to treat neurological issues. Their major benefit is that they are less invasive than surgery.

Drugs alter brain communication pathways by changing the way chemical messengers, called neurotransmitters, talk to one another. There are approximately 100 of these neurotransmitters in the brain, though there might be more. Each neurotransmitter carries a unique message. A prescription drug acts by mimicking certain neurotransmitters to alter the body’s behavior.

For a drug to get to the brain, it has to enter through the bloodstream and cross what’s known as the blood-brain barrier (BBB). It is a semipermeable membrane, meaning it can selectively choose the substances it allows to pass through it. Scientists have yet to fully understand how it works.

What is known about the BBB is that it has three critical functions:

• It protects the brain from various substances in the blood that can injure it.

• It also acts as a gatekeeper, blocking hormones and neurotransmitters found in the body from affecting the brain.

• It keeps the brain safe in a protected environment.

Large molecules have trouble passing through the BBB, as do molecules with low lipid (fat) solubility. Highly fat-soluble molecules, however, can pass through the BBB easily.

An over-the-counter drug such as the pain reliever acetaminophen (more commonly known by the brand name Tylenol in the United States) works by interfering with the pain signals that cells in the body send to alert a person to pain. The result: The message is not able to make it from the cells of the body to the brain. This keeps a person from feeling pain.

Because the brain controls all processes of the body, including itself, drugs that act on it are among the most important medications we have, used to treat many illnesses, including those that affect the brain directly.

No New Brain Cells: Myth Busted

Many doctors practicing today learned in medical school that adults do not develop new brain cells. What you have is what you got. As we said earlier: Wrong!

The hippocampus, which is a structure deep in the brain, produces 700 new neurons per day in adults. This process is called neurogenesis.

“You might think this is not much, compared to the billions of neurons we have. But by the time we turn 50, we will have all exchanged the neurons we were born with in that structure with adult-born neurons,” said neural stem cell researcher Sandrine Thuret of King’s College in London, in her June 2015 TED Talk, also in London.
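The arithmetic behind that claim is easy to check:

```python
# At roughly 700 new hippocampal neurons per day, how many adult-born
# neurons have been swapped in by age 50?
new_neurons = 700 * 365 * 50
print(f"{new_neurons:,}")  # 12,775,000 -- about 12.8 million neurons
```

A rounding error next to 100 billion, but, per Thuret, a wholesale renovation of one small, memory-critical structure.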

Curiously, Thuret said depressed people have lower levels of neurogenesis, and that antidepressants increase neurogenesis. The great news is that if doctors can control neurogenesis, they could likely improve memory formation and mood therapeutically, and even prevent the neurological decline associated with aging or stress.

So what can be done to increase neurogenesis? See Table 5.1.

Image

Table 5.1 Growing New Brain Cells

Brain Enhancers

It’s unfortunate your authors wrote this brain chapter as one of our last. Learning about nootropics might have served us in the delivery of this book. (Or, potentially, months in advance.)

The term “nootropic” is used to classify a group of over-the-counter substances that enhance brain function with few (and in some cases zero) negative side effects. They can increase the supply of certain brain chemicals that expand the brain’s ability. They’re used to improve cognitive functions like memory, mood, intelligence, attention, concentration, motor-control, and self-discipline.

There are five criteria that need to be met in order for a substance to fall into the category of a nootropic:

• It enhances one or more cognitive functions.

• It has few side effects and is virtually nontoxic.

• It enables firing mechanisms and facilitates communication between brain cells.

• It protects the brain from physical assault (such as injury or concussion) or chemical assaults.

• It assists the brain in functioning under disruptive conditions, such as hypoxia (low oxygen) and electroconvulsive shock.

Nootropics are not new. The term was coined in 1972 by Romanian psychologist and chemist Corneliu E. Giurgea, who once said, “Man will not wait passively for millions of years for evolution to offer him a better brain.”

Within the last ten years, these cognitive enhancers have grown in popularity.

Americans are competing harder than ever at work and school to get to the top, especially in regions where technology is being developed, such as Silicon Valley. Career-obsessed developers are trying to “one-up” each other. Many young entrepreneurs use nootropics to hack their brains, reprogramming their biology to gain an edge.

Tim Ferriss, author of The 4-Hour Workweek, told CNN in an interview, “Just like an Olympic athlete who is willing to do anything—even if it shortens their life by five years to win a gold medal—(young Silicon Valley entrepreneurs) are going to think about what (they) can take. The difference between losing and making a million dollars, or a billion dollars, is your brain.”

Ferriss has admittedly used nootropic substances to increase his productivity levels. So has Dave Asprey, the creator of Bulletproof Coffee. Asprey is a famous biohacker, perhaps best known for buttering his coffee instead of his toast in the morning.


Image Butter Your Coffee, Not Your Toast

Cloud computing pioneer Dave Asprey first created his Bulletproof Coffee recipe after visiting Nepal and drinking tea made with yak butter.

It’s made of brewed coffee, unsalted butter, and a special blend of oils. And it’s become all the rage in Silicon Valley because:

1. It can trigger weight loss by kicking fat burning into hyperdrive.

2. It stops hunger.

3. It gives you clarity of mind in the morning.

It also gives you a quick way to consume fats and 460 calories without eating carbohydrate-loaded breakfast food.

The recipe is found on www.bulletproofexec.com.


He also takes approximately 15 bio-enhancing drugs each morning, most of which are considered to be safe. However, he told CNN that even if he learns later there are long-term health issues, his “quality of life is so much better now than it was 10 years ago, that it’s priceless.”

Asprey told CNN he has done his research in biohacking. He’s spent approximately $300,000 and more than 15 years learning what’s effective and claims to have increased his IQ by 20 points.

An important distinction to make about nootropics is they enhance cognitive ability without (any known) detrimental side effects. So, while caffeine is cited by some as a nootropic, it’s technically not.

Stimulants such as coffee that contain caffeine give you heightened energy, but come with adverse effects. Drinking ten cups of coffee, for example, would raise blood pressure levels and later lower mood and increase anxiety levels.

Prescription-only medications such as Modafinil and Adderall also are not nootropics. They are, however, commonly called “smart drugs” or cognitive enhancers. They are used to treat serious disorders such as attention-deficit/hyperactivity disorder (ADHD) and narcolepsy, but the number of off-label users is growing. These drugs are helping many people perform better at school and work.

In 2014, one popular study published by Kimberly R. Urban and Wen-Jun Gao, from the University of Delaware, reviewed the ramifications of a group of prescription drugs (methylphenidate [MPH], modafinil, and ampakines) that many young adults use as cognitive enhancers. Their study concludes there are “deeply concerning effects” related to reduction in the brain’s plasticity in teenagers or young adults with immature brains. But they conclude their research notes by citing a need for “further exploration.”

Considering taking a nootropic? Be sure to do your research beforehand. Take products that are pharmaceutical grade. And never overdose. Substances that boost brain power should be consumed with caution until more is known about their long-term impact.

“Substances that boost brain power should be consumed with caution until more is known about their long-term impact.”

Over-the-Counter Drugs to Improve Brain Plasticity

Dr. Takao Hensch, a professor of neurology at Harvard University, is developing a drug that reverts an adult brain back to its childlike state. When it comes to brain development, the period from birth to seven years old is one of the brain’s critical learning periods. During this stage, the brain builds many pathways facilitating complex processes that involve seeing, hearing, touch, language, and speech, and higher cognitive functions such as reading, mathematics, and critical thinking.

Learning new skills is easier during this time than it is in adulthood due to the brain’s need for stimulation and a greater level of plasticity. It’s also a crucial time for building neural circuits that determine who we become later in life, and the formation of skills such as athletic ability and bilingualism.

Hensch and his team discovered that this heightened stage of learning and plasticity, which occurs early in life, is actively inhibited as a person ages.

There’s an enzyme involved in the process, histone deacetylase (HDAC), that affects DNA by making it harder to switch genes on or off. By blocking HDAC with valproate, a drug used to treat bipolar disorder, Hensch’s team found it could get the brain’s early neuroplasticity levels to kick in again.

In 2012, the first trial involved a group of adult mice with a lazy-eye condition known as amblyopia. As in humans, this condition can only be corrected during the critical period of brain development, and only if an eye patch is worn over the strong eye so the weaker eye is strengthened. The adult mice with amblyopia received valproate. They learned to effectively use their weak eye.

A further trial was performed on 24 men with little-to-no musical training. The study involved a test for perfect pitch, a skill that can only be learned within the first six years of development.

Some of the men received a dose of valproate and the rest received a placebo. For a week, all the men watched a daily 10-minute video that taught the skills of perfect pitch. When they were quizzed at the end of the week, the men who took valproate correctly identified an average of 5.09 notes out of 18 on a test. The placebo group identified 3.5.

The results of the study are promising despite the small test group. The team is now working toward replicating the study with a larger group of participants. In addition, they are testing more drugs, including ones that involve different genes.

This Is Your Brain on Video Games

Using drugs to treat brain illnesses or to boost intelligence is less invasive than cutting open the skull and working on the brain, but drugs still alter brain chemistry using chemical substances. Also, some drugs have side effects that hamper other cognitive and motor functions. For example, some antidepressant medications to treat major depression cause erectile dysfunction.

Thankfully, there is perhaps an easier way to get smarter or to reverse cognitive issues: video games. Since the discovery of neuroplasticity, scientists have been focusing on developing new brain-training technologies, and there’s been an explosion in brain games. Popular companies such as Lumosity, BrainHQ, and Happify sell games that help improve many areas of cognitive function.

The Gazzaley Lab, led by Dr. Adam Gazzaley, is a cognitive neuroscience research lab at the University of California, San Francisco, which is at the forefront of the brain game movement. Gazzaley and his team develop different types of video games with two key focuses:

• To treat brain illnesses and cognitive decline by correcting deficits existing in impaired populations. They work with neurological disorders ranging from ADHD to Alzheimer’s to major depression.

• To enhance the function in the brains of healthy individuals, and to optimize it as far as it can go.

Gazzaley calls it a different class of medicine. It’s “what I would call ‘digital medicine,’” he said in an interview with the website 52 Insights (www.52-insights.com).

The games developed by the Gazzaley Lab set themselves apart from other manufacturers’ games because they are customized to the player’s personal experience.

Gazzaley told us, “The video games we build are very immersive and engaging. They are built at a high level with professionals from the video game industry. We validate them in our lab with neural recordings to understand what their impact is on the brain.”


Image Brain Games

See Adam Gazzaley talk about video games as digital medicine: http://superyou.link/gazzaleysgames


They use a mechanism he calls “adaptivity.”

“During game play, real-time performance is being recorded and being used to guide the challenge of the game. It is scaled in an appropriate way to the user’s ability. So, as they inch forward in terms of improving their performance, the game pushes them right to their maximum level,” he explained.

The games detect when the challenge is too hard. “If it pushes too far it pulls back and finds the player’s ‘sweet spot.’ It is completely individualized from the moment they start playing.”

The game adjusts to the play so that it’s not so hard that players will become frustrated and abandon the game, or so easy that they get bored. During the game, positive and negative feedback is also provided to help guide the user’s understanding of the tactics in the game.
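A minimal sketch of the adaptivity mechanism looks something like the code below. This is our illustration of the general idea, not the Gazzaley Lab’s actual implementation: nudge the difficulty up a little after each success and down a bit more after each failure, so play settles at the player’s sweet spot.

```python
class AdaptiveDifficulty:
    """Keep a game hovering at the player's 'sweet spot': harder after a
    success, easier (by a bigger step) after a failure."""

    def __init__(self, level=1.0, step_up=0.05, step_down=0.10):
        self.level = level          # current difficulty
        self.step_up = step_up      # small push forward on success
        self.step_down = step_down  # bigger pull back on failure

    def update(self, succeeded: bool) -> float:
        if succeeded:
            self.level += self.step_up
        else:
            self.level = max(0.1, self.level - self.step_down)
        return self.level

# With step_down twice step_up, difficulty settles where the player wins
# about two-thirds of the time: challenged, but not frustrated.
```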

In 2013, the Gazzaley Lab announced success with the game Neuroracer. The premise is to navigate a car along winding roads while shooting colored signs. After a month of regularly playing the game, a group of older adults dealing with cognitive decline showed improved attention and memory.

The lab is also working on a rhythm game called Rhythmicity. On this project, Gazzaley teamed up with Mickey Hart, drummer for the Grateful Dead, to develop a game that trains players in rhythm.

Gazzaley explained: “Our brains are rhythm machines. It’s a core part of its function.”

"When people have clinical conditions, there is also a timing and rhythmic dysfunction that occurs in the brain. So the hypothesis is: If we can make someone more rhythmic, will we see a change in their cognitive function?"

The team is trying to crack the “rhythm genome,” so they can use this information to develop new technologies and to learn more about the brain.

Along with Rhythmicity, the lab is building four new games aimed at solving various neurological issues. As we go to press in mid-2016, these games still haven't been released to the public. However, another Gazzaley-founded company, Akili Interactive Labs, has raised more than $30 million for further clinical development and for building a commercial infrastructure, with the hope of being on the market in 2017. In application, these tools will help improve and restore cognitive capability across a range of neurological issues.

When the newest games are ready, Gazzaley admits he will be testing some of his own technology. "I hope that [...] I'll start getting some benefits from the things we are building. I hope there will be cognitive enhancements I can document."

Gyms of the future might have to expand to include video games so people can work out their brains after they finish pumping iron. This field is exciting and we expect it to grow.

More Super Cool Brain Projects

In our research for this book, we spoke to a lot of scientists about a lot of cool projects. Here are a few that got our attention, in a kind of wildly distracted "was that a duck reciting the Gettysburg Address?" kind of way.

And you will see tons more crazy new projects referenced in the media and on the Web in the coming years as researchers continue to explore the brain.

Brain-Controlled Exoskeleton

If it is possible to connect a machine to a human brain, why not have a human brain control a machine? Brilliant idea, right? It's recently been done, and it involves World Cup-level sports. (And you thought this chapter couldn't get any better.)

At the 2014 World Cup (for soccer, or futebol, as Brazilians call it), a man named Juliano Pinto kicked out the first soccer ball at the Corinthians Arena in São Paulo. Considering Pinto was a 29-year-old paraplegic, this was super cool.

Brazilian neuroscientist Miguel Nicolelis of Duke University and a team of 150 researchers worked for more than a decade to develop the brain-machine interface that made this possible for Pinto. The robotic exoskeleton translates brain signals into commands the machine understands, so its wearer can bend it like Beckham (English readers keep reading; Americans, please see the factoid).


Image Bend It Like Who?

David Beckham is a famous English soccer player known for kicking the ball in a way that curved around defending players, a move fans referred to as "bending it like Beckham."


Gordon Cheng of the Technical University of Munich led the development of the exoskeleton, which includes artificial skin built on printed circuit boards. Each board contains pressure, speed, and temperature sensors. Tactile sensors on the robot's feet transmit signals to a device on the patient's arm. After some practice, the brain learns to associate the arm vibrations with leg movements. But how does the exoskeleton recognize what the brain wants?

Early in the research, Nicolelis and his team identified brain signals through recordings of movement, which sounded almost like radio static. Nicolelis said his son calls it the sound of popping popcorn while listening to a badly tuned AM radio station.

“We recorded more than 100 brain cells simultaneously. We could measure the electrical sparks of one hundred cells in the same animal,” he said in a 2012 TEDMed talk, adding, “We got a little snippet of a thought, and we could see it in front of us.”

The find prompted Nicolelis and his team to proceed with experiments involving a monkey named Aurora, which played a video game.

This is getting weird, right? Stick with it, science fans; it gets cooler. Aurora was required to move a cursor to a target on a video screen using a simple joystick. Working in a United States lab, she learned to hit the target successfully 97 percent of the time. The movements were mapped from Aurora's brain activity and transmitted to a robotic arm in Japan, which was playing the same game.

Eventually the monkey learned to play the game and operate the robotic arm with its brain alone. And the research progressed from there, to the day when Pinto kicked the ball with the engineered exoskeleton at the World Cup.


Image Bend It Like Juliano

Paraplegic Juliano Pinto talks about kicking a ball wearing a robotic exoskeleton in this video: http://superyou.link/julianoskick


The only really poopy thing about all this wicked science and research effort is that it didn't draw a lot of well-deserved attention, likely because it was poorly hyped at the World Cup event. The breakthrough is enormous, and c'mon, anyone who can get a monkey in America to operate a robotic arm in Japan just by thinking about it should pretty much get a Nobel Prize.

OpenWorm Project

Earlier, we talked about the tiny C. elegans worm. With only 302 neurons, it has the only brain that scientists have completely mapped.

OpenWorm is an international research project that has created a simulation engine built on computational models of the worm. The goal is to simulate the worm's entire nervous system, then use the worm as a model to make comparisons and answer fundamental questions about the human brain.

In 2014, Timothy Busbice, a programmer involved in the OpenWorm project, built a LEGO robot controlled by a simulation of the worm's brain. Before that point, Busbice had spent more than 20 years trying to create a connectome in software that would use an individual program to represent each neuron and mimic its unique set of functions.

Initial attempts failed due to the "limitations of the machines": a 32-bit computer could not run enough simultaneous processes. He overcame this challenge by using a 64-bit machine to support a system of programs that matched the 302 neurons in the C. elegans brain.


Image Duck Deliveries

Most computers these days have 64-bit processors, which move bigger chunks of data across the chip. Imagine a truck hauling 64 rubber duckies down a highway compared to a smaller truck hauling 32 rubber duckies at a time. Fill the highway with 64-bit trucks and that's a lot more duckies being delivered.


Busbice discovered he was able to turn the LEGO robot into a simulated worm. He was fascinated by "the recursiveness of the neurons": he found that neurons communicate by looping back to one another.

“It’s not that you push a button and neuron a talks to b, c, d and then it stops. That’s not how the nervous system works. The output is continuous. They all come back and activate (the other) neurons again and again,” said Busbice. “My hypothesis is that this recursiveness in our neural networks dictates our behavior.”
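Busbice's observation is easy to demonstrate with a toy model. The Python sketch below is our own illustration, not OpenWorm's code; the three neurons and their connection weights are invented. It wires simulated neurons into a loop, so a single input keeps circulating instead of stopping after one pass:

    # Toy recurrent network: output loops back as input, so activity never
    # simply stops after one pass. Neuron count and weights are invented;
    # the real C. elegans connectome has 302 neurons.

    connections = {          # neuron -> list of (target, weight)
        "A": [("B", 0.9)],
        "B": [("C", 0.9)],
        "C": [("A", 0.6)],   # the loop back to A is what makes this recursive
    }

    activity = {"A": 1.0, "B": 0.0, "C": 0.0}   # a single initial stimulus

    for step in range(6):
        nxt = {name: 0.0 for name in activity}
        for neuron, level in activity.items():
            for target, weight in connections[neuron]:
                nxt[target] += level * weight    # signal keeps circulating
        activity = nxt
        print(step, {name: round(value, 2) for name, value in activity.items()})

Run it and you can watch the activity chase itself around the loop, step after step, rather than dying out after one neuron-to-neuron handoff.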

Busbice's project is a great example of a bit of wild science. Emulating animal brains in bots to see what the outcome will be can teach us a lot about how brains, and by extension the human brain, work. Researchers don't always know where they'll end up, which we imagine is part of the thrill of being a scientist who works with artificial worm brains. Plus, how cool is it to kiss your spouse goodbye in the morning and say, "See you later, my little love muffin. I'm off to work to see if I can get an artificial worm brain to drive a LEGO robot."


Image Wormy Robot

See a robot that’s run by a worm brain: http://superyou.link/wormbrain.


Human Brains in Bots

These days, Kevin Warwick's team, among other cool stuff, conducts research with brains in bots. Early iterations of the project involved culturing miniature brains from rat neurons and using them to control small robots.

“We get rat embryos, take part of the cortex out of them, separate the neurons with different enzymes and lay them out in a small dish that has electrodes on it. Then, we put them in the incubator where they live and grow,” he said.

Growing the rat brains allows the researchers to gain insights on how they develop. Then, when they’re deemed ready, they are placed into what Warwick describes as “a little robot with ultrasonic sensors on it.”

The rat brain is connected to its machine body via a Bluetooth connection. Sensory signals from the robot stimulate the brain. Output from the brain stimulates actions in the robot. This connection allows the researchers to understand how the brain receives information from the environment and processes that information to produce behaviors.

“We learn simple things, like how the robot learns to move around and not bump into objects. We learn about the brain from what it’s doing as it’s going through the learning procedures.”
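In control-systems terms, what Warwick describes is a closed loop: sensor readings are encoded as stimulation, and whatever the culture fires back is decoded into a motor command. Here's a hypothetical Python mock-up of that loop; stimulate() and read_response() are stand-ins we invented for the electrode array, and the real rig passes these signals over Bluetooth:

    # Hypothetical mock-up of the robot/cultured-brain closed loop described
    # above. The two stub functions stand in for the electrode array.
    import random

    def stimulate(distance_cm):
        # Encode an ultrasonic reading as stimulation intensity (stub):
        # the closer the obstacle, the stronger the pulse.
        return max(0.0, 1.0 - distance_cm / 100.0)

    def read_response(intensity):
        # Stand-in for recording the culture's firing after stimulation.
        return intensity + random.uniform(-0.1, 0.1)   # noisy biological response

    for tick in range(5):
        distance = random.uniform(5, 95)   # ultrasonic sensor reading, in cm
        firing = read_response(stimulate(distance))
        command = "turn" if firing > 0.5 else "forward"
        print(f"obstacle at {distance:5.1f} cm -> firing {firing:.2f} -> {command}")

In the real experiment, of course, the interesting part is that the mapping from stimulation to response isn't a stub; it's living tissue that changes as it learns.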

These experiments teach researchers more about the brain so the information can be used to advance brain science. More specifically, this information can be used to treat disabilities. For example, one of Warwick’s team’s objectives is to learn how memories are created. This understanding will help develop cures for diseases like Alzheimer’s, where memories are lost over time.

The team has since moved on to using human brains. "We're culturing human brain tissue," said Warwick. "With rat neurons, we feel we get good results. But certainly, the surgeons we work with (say) rat brains are so unlike humans so the results don't really map that easily."

Brain-to-Brain Communication

Miguel Nicolelis and his team at Duke University have worked on a number of significant projects involving brain-to-brain communication. That is, the wiring of brains together through technology, using what's known as a brain-to-brain interface.

In 2013, his team linked two rat brains across the world, one in Durham, North Carolina, and one in Natal, Brazil. They had previously learned to connect a brain to a machine. The goal this time was to determine whether they could connect brain to brain through a machine to facilitate an information exchange.

Each rat was first taught a simple task: to press a lever for food when a light flashed in its cage.

Rat 1 functioned as the "encoder" rat. He was shown the signal (the light), and once he pressed the correct lever for food, his brain signals were transmitted to Rat 2 (who was not nicknamed Ratatouille, by the way).

Rat 2 was the “decoder” rat. He received no flashing light but was still expected to press the lever when he got the information that Rat 1 had seen the signal. And this is what he did.

The pair worked in collaboration 70 percent of the time. When the decoder rat made a mistake, the encoder rat adjusted its behavior in response. Nicolelis's team discovered that brains can communicate with one another through a network and are not limited by physical location.
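Strip away the biology and the experiment is an information channel: the encoder's brain activity becomes a signal, the signal is transmitted, and the decoder turns it into a single press-or-don't-press decision. This toy Python model is entirely our invention; the spike counts, noise, and threshold are made up, tuned only so the toy lands near the 70 percent figure the study reported:

    # Toy model of the encoder/decoder channel. All numbers are invented;
    # the point is the structure: brain signal -> transmission -> decision.
    import random

    def encoder_rat(sees_light):
        # Simulated motor-cortex spike count: higher when the light was seen.
        base = 40 if sees_light else 10
        return base + random.gauss(0, 8)   # biological noise

    def decoder_rat(spike_count, threshold=36):
        # Press the lever only if the transmitted activity crosses a threshold.
        return spike_count > threshold

    trials = 1000
    correct = sum(decoder_rat(encoder_rat(True)) for _ in range(trials))
    print(f"decoder pressed correctly on {correct / trials:.0%} of signaled trials")

The noisy threshold is why the real rats collaborated on most trials rather than all of them: a biological signal is never perfectly clean at the receiving end.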

In 2013, he told KurzweilAI.net that more research like this could lead to a new field he calls “neurophysiology of social interaction.” “To understand social interaction, we could record from animal brains while they are socializing and analyze how their brains adapt—for example when a new member of the colony is introduced,” he said.

In 2015, his team used the combined thinking power of three monkeys to successfully move a mechanical arm. Each monkey was individually taught how to move a virtual arm in a 3D space by picturing the motion in its mind. Then the monkeys shared control of the arm, collaborating to adjust movement and speed so they could collectively grab the digital ball. The old adage "two heads are better than one" proves true, at least in monkeys. This increased brain power has an application for expanding intelligence in humans. The coming work will be to connect humans together.

Human-to-Human Brain Communication

In 2002, Kevin Warwick had already linked two human nervous systems, but only because he and his wife were willing to use themselves in place of the monkeys. He wired his nervous system to his wife's so that the couple could communicate through thought.

For this project, he had circuitry surgically implanted into a nerve in his arm. BrainGate, as the technology is called, uses electrical signals to move robotic devices and is designed to improve cognitive or motor function in disabled individuals. Using BrainGate, an individual can move a limb just by thinking about doing it.

With the BrainGate implant in Warwick’s arm, and two electrodes placed in his wife’s nervous system, they were linked. Warwick explained: “When my wife closed her hand, my brain received a pulse. So if she tapped two or three times, my brain received two or three pulses.”

The communication between the couple was not complex, though it was effective. Warwick’s wife couldn’t tell him to stop at the grocery store to pick up a carton of milk on his way home, but she could tap her hand and he would know she was thinking about him.

“Think about a telegraph system, we were sending telegraphic signals, but it was directly from nervous system to nervous system,” he said.

“Sam Morse said when he was first coming up with Morse code, he was referring to signaling from brain-to-brain communication and he achieved 99 percent of that, except for this interface issue—how to get signals from the brain back to the wires and back again at the other end. So all we were doing is figuring out the one percent that Morse didn’t do.”

Signals were sent wirelessly using an Internet connection from lab to lab. Part of the reason for the experiment was to show how easy it was to connect nervous systems. But the team quickly realized that meant they could do much more with those captured signals—once they were no longer restricted to the body where they originated—such as send them vast distances over the Internet. Just like a Kim Kardashian selfie.

Warwick predicts that the next ten years will show major advances in thought communication technology.

“For me, thought communication in some basic form will be enormous. Just as the telephone has been enormous. Though, the telephone will only be a tenth of what thought communication will be like. I think the first experiments will have happened. How long until it’s a commercial success? That, I don’t know.”

Unsolved Mysteries of the Brain

As you can see, scientists have made great strides in neuroscience over the past 5,000 years, especially in the last couple of decades. The reality, however, is that they still know very little about the brain and how it works relative to what they would like to know.

There are some significant mysteries that remain unsolved, and many neuroscientists believe the answers will remain elusive for quite some time, unless brain science starts to see rates of improvement comparable to the exponential rates found in consumer technology. We think the pioneering work being done this decade in neuroscience will pay off in the next. Remember, significant breakthroughs accelerate the next set of breakthroughs.

Here’s a summary of some of what we don’t know, and what the issues are.

What Is Consciousness?

This is probably the greatest unsolved mystery in neuroscience. Humans are conscious, but we have no idea how or why we got this way. There are two opposing camps on the question:

• There are scientists who believe that consciousness can be solved and will be explained once we better understand the mechanics of how the brain works.

• The opposing group believes that consciousness is a law of the universe, like gravity, that can be understood but never explained.

Camp one aims to answer questions such as:

• Does consciousness arise when a number of neural connections is reached?

• Or, does it originate from a specific brain region?

Perhaps, as one quantum-mechanical theory suggests, it arises from microtubules inside neurons that work together as computing elements.

Why Do We Sleep?

There are still many unanswered questions about sleep. What scientists do understand is that sleep helps humans perform better. It aids our ability to process information, and it's essential for the body's repair and rejuvenation processes. Humans can't live without sleep; a complete lack of it will kill you. A rare genetic disorder known as fatal familial insomnia (FFI), a condition in which the brain can't enter a resting state, demonstrates this. Anyone who develops the condition suffers a dramatic breakdown of cognitive processes. It starts with headaches, moves to panic attacks and hallucinations, then to dementia, and ultimately ends in death, roughly 18 months after the initial onset of symptoms. The average age of those afflicted is 50, although the age range is 18 to 60.

The bottom line is: Sleep is crucial and life giving. And humans seem to do it in the most complex way, cycling through five stages of sleep, each with different important characteristics. Animals vary in their sleeping habits. The giraffe, for example, can reportedly go weeks without sleep. The bat or the sloth can sleep for an entire day. Dolphins sleep with one half of the brain at a time, which keeps them alert enough to ensure their survival.

Animals with a higher aptitude for intelligence, mammals in particular, spend more time in a stage of sleep called REM, for rapid eye movement, and humans spend the most time in REM of all. Babies spend 50 percent of their sleep in REM, while adults spend 20 percent. So obviously REM is important, too. But why?

Well, perhaps it's because dreams occur during REM. This opens up more questions: What are dreams? Why do we need them? We don't know.

Do We Have Free Will?

This question circles back to the debate about consciousness. Are humans complex machines run entirely by their brains? Or is there something else at play, something yet to be understood, or that man perhaps will never truly understand?

When your spouse asks you what you want for dinner, what are the thoughts you process to arrive at an answer? Is your brain processing information? Does it simply retrieve and collect memories that ignite your tastebuds and tell you to ask for sushi? Or are the thoughts you process a construct of your "mind"? More simply put: When you think about your thinking, are you an active agent? Or do you just think you are, grasshopper?

Yikes.

Two schools of thought on free will are:

Immaterialists are scientists who believe in free will. They suggest there is a component of active choice in the decision-making process. A neuroscientist, for example, using a brain-imaging technique to read neuronal activity can observe a patient's thinking process, but they can never know (yet!) what that person is thinking about. This suggests a personal choice is being made internally. Immaterialists believe the mind can never be fully understood.

Materialists, on the other hand, believe free will is merely an illusion. The brain controls everything. The thoughts the brain produces are merely based on what a person has learned from the past and their inferences about a given moment in time.

Research suggests there might also be a genetic component to free will. In 2013, Eeske van Roekel's team at the University of Groningen, Netherlands, discovered that a specific oxytocin receptor gene predisposed a group of girls to intense feelings of loneliness when in the presence of judgmental friends. So free will might also hinge on your biology and genetic makeup.

How Are Memories Processed?

Memories serve a crucial role in how each individual understands and engages with his or her world. Consider that from the moment a baby is born, it starts learning by connecting with its environment. Its brain decides how to encode, store, and retain new information by measuring that information against how safe its environment is.

After all, it is the brain’s ultimate job to keep the organism surviving. So memory plays a vital role in keeping man safe from perceived threats in his environment.

A boy who gets bitten by a dog, for instance, might learn "dogs aren't safe." He'll catalogue the event in his biological filing system. Later, when he sees another dog, the memory will be retrieved as a signal from the brain saying, "Remember what that other dog did to you? Watch out!" This also means he can start to relate to himself as someone who is "not a dog person." So memory is also a factor in how a person sees and describes themselves: their personality.

There are important things we know about memory. We know some of the functions it serves, as noted above. Scientists also understand many of the brain regions—like the hippocampus—that are involved in memory processing. There are different types of memory—declarative, nondeclarative (muscle memory), short-term, and long-term—that get stored in different ways.

But storing memories is incredibly complex. Neuroscientists do not yet know where a memory gets stored and how memory recall works. It’s also unknown how the entire system works together. For example, a person driving a car uses memory recall to control motor skills and to remember where they are going. Scientists have figured out pieces of the puzzle, but there’s still a lot to be learned to get the complete picture.

Is That It?

As you’d imagine these aren’t all the unanswered questions that neuroscientists are noodling around their heads. There are dozens of big questions. And some are much more complex than most of us can understand. Many of them relate to how the brain’s structures work and work together.

If you are curious, do a Google search on “top unanswered questions of neuroscience” to get a sense of how big the task is that neuroscientists face in understanding the human brain.

Discover magazine quotes 23 of them from California Institute of Technology neuroscientist Ralph Adolphs, who wrote about them for Trends in Cognitive Sciences in April 2015. Here's a sampler of questions Adolphs believes might be solved in the next 50 years:

• How do circuits of neurons compute?

• What is the complete connectome of the mouse brain (70,000,000 neurons)?

• How can we image a live mouse brain at cellular and millisecond resolution?

• What causes neurological illness?

• How do learning and memory work?

• Why do we sleep and dream?

• How do we make decisions?

• How does the brain represent abstract ideas?

With this part of science being helped along by the ever-accelerating improvement of technology, we think researchers will get there faster than they expect. If most of the breakthroughs in neuroscience from the last 5,000 years were achieved in the last 20 or 30 years, the next decade is going to be very, very surprising to a lot of researchers.

The Future

OK, so let's summarize our progress on the brain before we leap into some creepy stuff about the future that (spoiler alert) might have you sending your child to the liquor cabinet to get some special nerve medicine for mommy (or daddy).

Lesson 1: It took a very long time to figure out pretty much anything useful about how the brain sort of, kind of works. But, if we are really honest, we pretty much know squat about it.

Lesson 2: Most of the big progress in neuroscience has happened recently—like in the last 20 or 30 years.

Lesson 3: We’ve figured out worm brains and done some cool experiments.

Lesson 4: Monkeys playing video games + some very smart scientists × ten years = Paraplegics that can play soccer.

Lesson 5: Computers are still pretty dumb and unconscious.

Lesson 6: There’s a crap-load of stuff we don’t know about the brain, but ...

... Now, we’re trying to build super smart computers that might suddenly become conscious and take over the planet and enslave us all to their evil bidding. Or so some smart rich guys say. Though our friend Mr. Kurzweil and other uber-smart scientists say don’t worry about that. Seems about right? Good, let’s do this future thing.

Expanding Human Intelligence

About 200 million years ago, our early mammalian ancestors grew a neocortex, a new layer in the brain that would eventually provide early humans with the ability to develop technology in order to build and optimize the world around them.

All mammals have a neocortex, which allows for higher-level functions including sensory perception, generation of motor commands, and spatial reasoning. In humans, it also handles language and the capacity for art, music, and humor (yes, blame fart jokes on some of the more vulgar neocortexes out there). That amazing brain development was a quantum leap in the evolution of mankind (see Figure 5.3).

Image

(Illustration by Cornelia Svela.)

Figure 5.3 The evolving human brain.

This leap forward is about to happen again, at least if Google has anything to say about it. As we discussed in Chapter 1, “The Emergence of (You) the Human Machine,” natural evolution can take generations to enhance a human biologically, but technology can accelerate human abilities much faster. And perhaps technology is the new accelerated evolution.

The famed search engine company is hard at work building a synthetic neocortex, with Ray Kurzweil heading the project. Besides being an inventor and futurist, he is also a director of engineering at Google.

“We’re simulating how the neocortex works and developing mathematical models of it. If we can develop them through computers and develop a synthetic neocortex, then we can do the same kinds of things the neocortex does. For example, human language.”

This amazing resource won’t just be an in-house Google tool. Anyone will be able to tap into it just by thinking. That’s because by the mid-2030s—in 20 years or so—there will be thousands of nanomachines swimming around in our bodies, doing cleanup work, keeping us healthy, but also connecting our brains to the Internet, including resources such as Google’s synthetic neocortex.


Image You Say Nanobot, I Say Nanite?

You have likely heard different words for these tiny robots that will swim around in our bloodstream one day. Some call them nanobots, but they are variously referred to as "nanoids," "nanites," "nanomachines," "nanocytes," or even "nanomites."

Nanobot is the generic term for tiny robots that provide all kinds of services at the nano level. Nanocyte refers to a nano-sized cell, as "-cyte" comes from the Greek for cell. Nanite might be the most accurate term for the machines we describe here.


Here’s how it could work, said Kurzweil:

“Someone is approaching and I need to say something clever and my 300 million pattern recognition nodules (in my organic brain) are not going to cut it and I need a billion more,” he explained, “for two seconds, I could access that in the Cloud.”

This boosted brain power will massively expand our natural ability to process information. In the next 15 years, Kurzweil says, “we’ll become a hybrid of biological and nonbiological thinking. My model for that is not so much that we would put synthetic neocortex into our brains. We’ll basically put communication out from our neocortex into the Cloud.”

The "cloud" is geekspeak for the Internet, or rather for the remote computing power that provides services over the Internet instead of running them locally on your computer. Dropbox, Netflix, Flickr, and Google Drive are all examples of cloud services.

This new resource will take human evolution to the next level, he said. “What happened two million years ago allowed us to take a qualitative leap. And this will allow us to take another qualitative leap.”

Some people fear this technology will make us less human and more robot. One scientist concerned about this is Miguel Nicolelis, head of neuroengineering at Duke University. (He was also the researcher we mentioned earlier who connected the paraplegic man to an exoskeleton.)

“I think we’re facing a big danger—if we keep relying so much on computers, we will begin to resemble our machines,” he warned in an interview with The WorldPost, a Huffington Post project. “Our brains will assimilate to the way computers operate, causing a significant reduction in the range of behaviors that we normally produce.”

Kurzweil argues that such a resource will make us more human. “Evolution creates structures and patterns that over time are more complicated, more knowledgeable, more creative, more capable of expressing higher sentiments, like being loving,” he said in his WorldPost interview. “It’s moving in the direction of qualities that God is described as having without limit.

“So as we evolve, we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world—it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”

Of course, there are some obstacles to overcome to get there. Technologically, two decades should be sufficient to develop the nanites. However, as one pundit pointed out, it's the FDA that will have to approve medical brain-enhancing nanodevices, and that might take longer than a couple of decades.

The other issue is that these nanites need to be made of something our bodies will not reject. Tiny bits of nano-metal that float around in our bloodstream and interface us with the cloud will need to be biologically inert. You can't have your immune system trying to eliminate them.

One solution might be to engineer a form of disabled virus made from our own tissue, or generated from our own stem cells. If it's made from us, our systems won't reject it. In a way, vaccines are the beginnings of this kind of technology: they are actual disease agents, engineered not to cause the disease, injected into us so the body can learn to recognize and stomp out the real thing if it ever comes along.

Viruses, and even cancer cells, have the capability to interface with our cells to do bad things to us. But what if we can engineer these to do good things for us? That seems to be inevitably the future of nanite technology, at least from where we sit right now.

“... you can never see the actual future from the point of view of the soon-to-be primitive past.”

Of course you can never see the actual future from the point of view of the soon-to-be primitive past. So while we don’t plan to miss the mark on this one, the actual outcome might look very different.

Here's how that might play out. If you recall, The Jetsons was a prime-time cartoon (not a kids' show) that started airing on Sunday nights in 1962. It was designed to show the future 100 years out, in 2062. The Jetsons has been a remarkable forecaster of that future, with one massive exception that it completely missed.

Watch any of the initial 24 episodes and you'll see that it failed to grasp the impact of digital technologies. All the inventions it illustrated were largely analog.

And so it might be with the future we are predicting here. When we say nanites will swim around our brains and connect us to the cloud to expand our intelligence that sounds possible. The outcome is highly likely, but the execution might look very different.

Artificial Intelligence (AI)

In a 2014 interview with the BBC, Kevin Warwick, professor of cybernetics at the University of Reading (we mentioned him earlier in this chapter), said, "In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test."

The Turing Test was developed by Alan Turing, a British computer scientist most famously known for leading the team that cracked the Enigma machine during World War II, allowing Britain to decode encrypted communications between Nazi forces and their leadership. It ultimately helped the Allies win the war.

Turing proposed that by the year 2000, a computer would have the intellectual capability to fool humans into believing it was human at least 30 percent of the time. His famous Turing Test evaluates a machine's capability to exhibit behavior indistinguishable from that of a human.

During the test, a human judge holds a natural-language conversation with two hidden parties, one human and one machine, without knowing which is which. The conversation is conducted through a text-only typing channel, so the machine doesn't need to be able to speak.
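As a protocol, it's simple enough to mock up. The Python sketch below is entirely hypothetical (the canned replies are ours), but it shows the skeleton: the judge's questions go to two hidden parties over identical text channels, and the judge must guess which one is the machine:

    # A minimal mock of the Turing test's blind text channel. The two hidden
    # parties are stubs here; in a real test one is a person, one a program.
    import random

    def human_party(prompt):
        # Stand-in for the hidden human confederate.
        return "Hmm, honestly I'd have to think about that one."

    def machine_party(prompt):
        # Stand-in for the chatterbot: vague, evasive, superficially fluent.
        return "That is an interesting question. Please tell me more."

    def run_session(judge_questions):
        channels = [("A", human_party), ("B", machine_party)]
        random.shuffle(channels)        # the judge must not know which is which
        for question in judge_questions:
            print(f"Judge: {question}")
            for label, party in channels:
                print(f"  [{label}] {party(question)}")
        # The judge then guesses which label hides the machine. By Turing's bar,
        # the machine "passes" if it fools judges at least 30 percent of the time.

    run_session(["Is it possible to fold a watermelon?"])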

This was the scenario that took place at the Royal Society in London in June of 2014. A group of humans sat in one room, each at their own computer, engaged in text-only conversations with unseen partners in the room across the hall. In other words, they were unsure whether they were interacting with a human or with a "chatterbot," a computer program engineered to simulate human conversation. A panel of judges evaluated the conversations to see if they could decipher which participants were talking to a person and which were talking to a machine.

The result: 33 percent of the judges were tricked by a computer masquerading as a 13-year-old boy from Ukraine named Eugene Goostman. The Eugene Goostman chatterbot was developed in Saint Petersburg, Russia, by three programmers, and it was the first machine to pass the legendary Turing Test. It was a landmark moment in AI. Though to be fair, it doesn't mean that "Eugene" was actually conscious and capable of independent thought. It only proves that its algorithms were convincing enough to fool people on the other end. Eugene's broken English probably helped mask any deficiencies, too.

“Some will claim that the Test has already been passed,” said Warwick. “The words Turing Test have been applied to similar competitions around the world. However, this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversation.”

Warwick said Eugene Goostman had a broader ability to answer questions than other computers that have shown an aptitude for intelligence.

Watson, IBM’s supercomputer, is famously known for beating a pair of humans at a game of Jeopardy! in 2011. Watson is a cognitive technology, a natural extension of what a human is capable of doing when it comes to interpreting and responding to questions expressed in natural language.

Watson is more advanced than other artificially intelligent computers. Like other systems, when Watson is asked a question, it goes into a retrieval process to come up with an answer, but Watson's retrieval process involves interpreting the language of the question rather than just matching keywords.

Ray Kurzweil said that Watson's intelligence level shows great promise for the field of AI: "The language of Jeopardy! queries has all these subtle forms of language including riddles and metaphors and jokes. Watson got this right: 'A long frothy speech delivered by a pie topping,' and it quickly responded, 'What is a meringue harangue?'"

Another bot showing great promise is Bina48, a project developed in partnership between LifeNaut and Sirius XM Radio founder Martine Rothblatt and her wife, Bina Rothblatt.

Bina48 is a mechanical clone of Bina Rothblatt (at least her head and shoulders) and was created using video interviews, laser scanning, face recognition, and voice recognition to be a complete imitation of the real Bina.

Bina48 is an android that uses information she processes and stores about the real life Bina Rothblatt. She draws her knowledge from the real Bina’s “mind file”—an uploaded collection of her beliefs, memories, and personality traits.

However, her inventors say she is learning and growing at an exponential rate, and will eventually evolve into something beyond the original Bina.

The android is an early demonstration of the Terasem Hypothesis, which is defined as “a conscious analog of a person that is created by combining sufficiently detailed data about the person (a mindfile) using future consciousness software (mindware).”


Image File Your Mind

You can create your own Mind File on this site: http://superyou.link/makeamindfile



Image Domo Arigato Bina Roboto

See Bina interviewed by a musician: http://superyou.link/sheisbina


Will the Machines Rise Up?

Kurzweil says he's often asked questions such as, "Will computers like us?" and "Will they want to keep us around?" He always reassures them that "that's up to us."

“We create these things,” Kurzweil said. “They are part of human civilization. They are part of humanity. Man couldn’t reach the food on the highest branch thousands of years ago, so we invented tools to expand our physical reach. Now those physical tools allow us to build skyscrapers.”

Kurzweil says, “It’s part of who we are. It’s not us versus machines. We will become hybrid and ultimately the nonbiological portion will dominate.”

Zoltan Istvan, leader of the Transhumanist Party, sees the future of artificial intelligence in much the same way.

“The likelihood of a Terminator scenario is pretty Hollywoodish,” Istvan said. “The much greater likelihood is that we’ll have a society that interacts with robots and uses artificial intelligence all the time.”

When Kurzweil spoke at the 2015 Exponential Finance conference, hosted by Singularity University and CNBC, he said this to counter the worry: "We have these emerging existential risks and we also have emerging, so far, effective ways of dealing with them."

He suggests the concerns about AI will cease to exist when the positive benefits are shown and people gain more confidence with the new technologies. He also makes the argument that “[man’s] always used technology to ... transcend our limitations, and there’s always been dangers.”

In 2014, Neil Jacobstein, Singularity University’s cochair in AI and Robotics, spoke at Summit Europe of the incredible good AI will bring to the world. Super-intelligent computers will be able to assist man in solving the most complex questions, from illnesses to climate change.

At the same time, he cautioned: "These new systems will not think like we do. We'll have to exercise some control." Man will always have a moral responsibility to maintain control over these systems, and efforts to do so will likely require rigorous lab tests and programmed-in safeguards on machine behavior. Of a future with AI, Jacobstein said, "We have a very promising future ahead ... build the future boldly, but do it responsibly."

An even bigger concern, according to Istvan, is “who gets to artificial intelligence first.” He suggests this could change the power dynamic of the entire world. We’ve explored this discussion in greater detail in the final chapter of this book, “Human 2.0: The Future Is You.”

... And What About Conscious Robots? “GULP”

What concerns many people about AI is the unknown variable of consciousness. Put more specifically: Will robots that are as smart as humans be conscious? How will we know they are conscious, given that consciousness is a subjective experience?

The answer to this question requires us to first determine what consciousness is. As we mentioned earlier, it is one of the top unanswered questions in neuroscience. Will we ever understand what it is and the mechanisms that create it?

Some say yes. Some say no.

We say: Of course. Exponential growth in computing power is going to make understanding it inevitable. Or, at the least, we'll be able to conclude that we'll never truly understand it and accept it as a law, such as gravity.

The mystery neuroscientists are dealing with is where in the brain human consciousness originates. One theory is that consciousness occurs when we register, collect, calculate, and assess our experiences and our memories in a continuous mode that follows us through life. All these sensory inputs are combined by the brain to provide each one of us with a highly personal and subjective experience of the world.

We know consciousness exists because we each have an experience of our own. And we can observe others who say they can recognize their own, although we can’t experience someone else’s consciousness or say whether another person is conscious.

In his 2014 TED Talk, philosopher David Chalmers, a professor at New York University, said, "Our consciousness is a fundamental aspect of our existence ... there's nothing we know about more directly ... but at the same time it's the most mysterious phenomenon in the universe."

Up until recently, there has been very little scientific work on human consciousness. Human behavior can be studied objectively and neuroscientists have studied and continue to study the brain objectively. But the human consciousness remains uncharted.

Approximately two decades ago, neuroscientists such as Francis Crick and physicists such as Roger Penrose raised the idea of science investigating consciousness. It's now believed that consciousness might be explained by studying recognized processes in the brain.

There is also the theory, according to Chalmers, that consciousness is an existing fundamental and can be linked to other fundamentals of science, including space, time, mass, and physical processes.

To better understand this, Chalmers and other scientists have attempted to find correlations between brain activity and consciousness. The aim is to examine parts of the brain and how they influence the human ability to see faces or to experience pain or happiness. It is a science of correlation, but it fails to explain what makes human consciousness tick.

"We know that these brain areas go along with certain kinds of conscious experience, but we don't know why they do," said Chalmers. "But it doesn't address the real mystery at the core of this subject: Why is it that all that physical processing in a brain should be accompanied by consciousness at all? Why is there this inner subjective movie? Right now, we don't really have a lead on that."

A bigger question asked by many scientific thinkers is: Will robots or computers have a capacity for consciousness? Christof Koch, Chief Scientific Officer of the Allen Institute for Brain Science in Seattle, believes it is possible. He has spent almost a quarter of a century studying consciousness and is currently working at the Allen Institute to build a complete map of the mammalian brain. It's a $500-million initiative funded by Microsoft cofounder Paul Allen.

In 2014, Koch spoke at MIT about integrated information theory (IIT), developed by Giulio Tononi at the University of Wisconsin. Tononi's theory suggests consciousness arises when a system is complex enough to produce a rich "cause-effect" repertoire.

Koch told MIT Technology Review in a 2014 interview, "If you were to build a computer that had the same circuitry as the brain, this computer would also have consciousness associated with it. It would feel like something to be this computer."

Koch says that humans will likely build AI systems before they understand them. He is probably right. And it might be from building it that we can start to understand it.

Neuroscientist Michael Graziano, professor of neuroscience at Princeton University, has an interesting perspective on consciousness. Writing in Aeon Magazine (www.aeon.co), he said “Artificial intelligence is growing more intelligent every year, but we’ve never given our machines consciousness. People once thought that if you made a computer complicated enough it would just sort of ‘wake up’ on its own. But that hasn’t panned out (so far as anyone knows). Apparently, the vital spark has to be deliberately designed into the machine. And so the race is on to figure out what exactly consciousness is and how to build it.”

Part of the problem is consciousness has been viewed as a mystical process and that is a sticking point for researchers.

“Consciousness research has been stuck because it assumes the existence of magic,” said Graziano, who is also the author of Consciousness and the Social Brain (Oxford University Press).

“Nobody uses that word (magic) because we all know there’s not supposed to be such a thing. Yet scholars of consciousness—I would say most scientific scholars of consciousness—ask the question: ‘How does the brain generate that seemingly impossible essence, an internal experience?’”

His answer is to engineer for it.

“As long as scholars think of consciousness as a magic essence floating inside the brain, it won’t be very interesting to engineers. But if it’s a crucial set of information, a kind of map that allows the brain to function correctly, then engineers may want to know about it.”

So perhaps that’s the job at hand. If we build it, will it be self-aware? Wait 20 years or so. We’ll soon see.

Super Us? Here Come Virtual Helpers Armed with Strong AI

It's one thing to have AI available. It's largely available today: It runs in the financial markets. Search engines use it. It watches you when you appear on security cameras. In fact, you probably interacted with an AI-enabled machine multiple times today during your daily routine and didn't even know it. Heck, assistants such as Siri on the Apple iPhone or the discreet Google Now on Google's Android phones seemingly behave like artificial entities; however, it is easy to tell that "they" are not real humans. (Try asking: "When will pigs fly?" Siri on my iPhone answered: "On the twelfth of never.")

But the advent of strong AI is only a few decades away. By one measure, when it happens, that will be the technological singularity: the point where a machine's intellectual capability is functionally equal to a human's.

At that point, super smart machines will start to self-design and we humans can retire to the beach and drink mai tais. Or cower under the rusty shell of a corrugated metal roof as World War III rages around us. (It depends on who you ask.)

“This intelligence explosion, which thinkers believe will happen around 2045 or so, is going to fundamentally change the world as we know it.”

This intelligence explosion, which thinkers believe will happen around 2045 or so, is going to fundamentally change the world as we know it. After 2045 we can’t really predict what will happen. This is why the technological Singularity is the point at which events can start to become highly unpredictable or even unfathomable to our own human intelligence.

So are we on track for this?

To date, computer scientists are making progress. Consider these results, as reported by New Scientist magazine:

• In September 2015, an AI system called ConceptNet answered a preschooler IQ test and produced results on par with the little tykes.

• In 2014, a system called To-Robo passed the English section of Japan’s national college entrance exam.

• A system called Aristo at the Allen Institute for Artificial Intelligence (also known as AI2) in Seattle, Washington, is taking New York state school science exams. AI2 has also challenged other computer scientists to beat the exam for a cash prize. (AI2 was created by Microsoft cofounder Paul Allen.)

The key problem with these kinds of challenges is that computers that do well on them might struggle with everyday questions that humans might find simple to answer. Like, perhaps, how many catfish are at the Humane Society? Or as New Scientist put it, “Is it possible to fold a watermelon?”

Machines can have trouble with context, sarcasm, humor, and communication that requires interpretation beyond basic information retrieval. They are, however, good at pattern matching. Ask a machine "Who is Sundar Pichai?" and it will tell you he is the recently appointed CEO of Google. Regardless, intelligent machines, even if they don't yet use "strong AI," are already replacing human jobs.

Boston Consulting Group forecasts that by 2025, up to 25 percent of human jobs will vanish and be replaced by either smart software or robots, while a study from Oxford University has suggested that 35 percent of existing UK jobs are at risk of automation in the next 20 years.

If your job is repetitive, doesn't require super complex analysis or strategy, or consists mostly of processing information, you will likely be out of a job in the coming decade. Sorry! Automation will also replace those grumpy civil servants who can be miserable and rude. For them, we're not sorry.

As you can imagine, if we can reverse engineer the human brain it will readily help us understand how to build artificial brains that can make fairly advanced decisions accurately and with a lower error rate than a tired, grumpy human.

Don’t worry, though—we will figure all this out. Luckily Transhumanist Party president and United States Presidential Candidate Zoltan Istvan, who is super brilliant, has a plan for you, dear reader. It’s called Universal Basic Income. After you hear about it in Chapter 7, “In Hacks We Trust? The Political and Religious Backlash Against the Future,” you might be saying, “Heck, give the robots my stinky job,” then retire to a life in a sunny deckchair by an ocean of your choice.
