
Prime Directive

A perfect storm of change will arrive during the 2020s. We predict that we will see computing switch from staring into a four-inch piece of glass in our hands to using devices on our face that bring computing to every surface. Along with that, we will see advances such as vehicles moving without humans in them for the first time. A fourth paradigm of the personal computing age is upon us, Spatial Computing, and it is one that truly makes personal computers even more personal.

With the coming of Spatial Computing and its two purest members, Virtual and Augmented Reality, we believe that businesses and human cooperation need to be aimed at one thing: working together to build complex technologies to keep us around on this planet longer and in a more satisfied and productive state, while paying attention to the effects that these technologies have on ourselves and the planet. Spatial Computing in the 2020s will see immense challenges, but great opportunities will be available for brands to use new technologies for combined social good. Just what is it about Spatial Computing that is making human beings crave it?

Stadiums around the world are getting ready for it. We visited a place in Las Vegas because of it. Hollywood is getting ready for it. We visited a huge studio in Manhattan Beach, California, to see it. Investors and entrepreneurs are getting ready for it. We visited several in Seattle, New York, Los Angeles, and Silicon Valley. Hospitals, shopping malls, hotels, cities, automotive companies, power companies, banks, and more are investing in it. Tens of billions of dollars are being invested in it by both big and small tech companies.

That "it," which is Spatial Computing, is computing that you, virtual beings, or robots can move around in. It includes all the software and technology needed to move around in a digital 3D world.

That means software and technology associated with AI, including Machine Learning, Natural Language Processing, and Computer Vision, along with Augmented Reality, Virtual Reality, and all other apps that support the creation and maintenance of a digital 3D world. We see great strides that will be made in Spatial Computing uses for many industry verticals, including Technology, Media, and Telecommunications (TMT), Transportation, Manufacturing, Retail, Finance, Healthcare, and Education.

Before we dig into everything that's happening that caused us to write The Infinite Retina, let's back up and think about the Prime Directive that is driving billions of dollars in human effort into Spatial Computing. Why do human beings need robots delivering food and building things in factories, and why do we need Spatial Computing devices on our faces so that we can work, entertain ourselves, educate ourselves, and collaborate with each other in new ways?

What is the Prime Directive? Does it have something to do with why humans spend more and more on technology or tools every year? Are any new trends, like our changing understanding of climate change, causing this change? Does culture itself change in a major way because of the Prime Directive?

Photo credit: Robert Scoble. Attendees at the 2019 Game Developers Conference in San Francisco use a Magic Leap Spatial Computing headset.

What Makes Us Human?

Human beings are classified as Homo sapiens, which in Latin means "knowing man." Modern Homo sapiens are believed to have appeared a little over 300,000 years ago. The distinction between Homo sapiens and what came before has to do with the relatively sophisticated use of tools―tools that were used to survive more efficiently and with which humans gained control of their surroundings. Tools were also used by early humans to make art on cave walls and carve statuettes of female fertility goddesses. The tools served as augmenting devices―augmenting humans' chances of survival and also of expression.

With modern humans, this augmentation can take the form of education, which in turn is used to gain knowledge. With knowledge, our chances of survival should be better. In many ways, our Prime Directive is to know how to better survive and how to better express ourselves by willfully creating and using tools for those purposes. It is a dual directive, for it cannot be proven that one gives rise to the other; rather, both are mutually beneficial. And it is for both very practical and expressive reasons that tools have continued to be created from the time of early man to today. An example of human ingenuity that traverses both the practical and the expressive is the iterative invention of the writing "pen and paper" combination. This combination tool, which goes back millennia, started out with cave walls, some form of patchworked dried grasses, and stone serving as the "paper," and natural dye, a sturdy reed, or a stone or metal chisel serving as the "pen." "Pen and paper" has been used to record business and legal matters, nonfictional and fictional narrative, and poetry, as well as visual art, such as paintings, when the "pen" is conceived as pigments. With the advent of the typewriter, there was even more of a separation between the practical and textually expressive on the one hand and the visually expressive on the other. The typewriter was then replaced by the word processor and then by the computer. And here we are these days, utilizing our computers and their smaller counterparts, the smartphone. Computers not only replaced typewriters; they are also in the process of causing people to question the continued existence of physical books and newspapers, as well as movie theaters.

Our Prime Directive to know how to better survive and how to better express ourselves now has a new channel―Spatial Computing. With Spatial Computing, the uses of the technologies of Virtual Reality, Augmented Reality, and Artificial Intelligence eclipse those of the computer we know today. In the near future, we will no longer have to use a physical computer to do our work and browse the internet. And we will be able to do so much more with the three-dimensionality of Spatial Computing and speech recognition software. It turns out that our need to better express ourselves appears to include a need to experience a replicated reality.

The replication of reality in the forms of paintings, fiction, and films, among other forms, has existed for as long as human beings have had the need to express the conditions of both their individual and social existence in an effort to better understand themselves. Experiencing a replicated reality also turns out to be a very good way to acquire a new skill and to gain knowledge in general. Spatial Computing is the next generation of imaging that is able to replicate reality, moving us from two-dimensional imaging to three-dimensional. With three-dimensional imaging, the replication can come much closer to the reality it is trying to represent.

Human beings seem to get satisfaction out of presenting and experiencing narratives that have the appearance of being real. An example of this is a movie. It is difficult to say exactly why we get such pleasure out of viewing a "good" movie. Perhaps it is empathy, but the question still remains why empathizing with movie characters that appear to be real should make us feel good, much less entertained. With Spatial Computing, the visuals are even more true-to-life and we are able to move through them (Virtual Reality) or incorporate and manipulate non-real objects into our real world (Augmented Reality). Artificial Intelligence adds another layer to the existing reality by organizing previously unconnected data into meaningful systems that could then be utilized in Spatial Computing to feed our Prime Directive needs.

Photo credit: Robert Scoble. Here, you can see the slums and other residential buildings as seen from the Four Seasons luxury hotel in Mumbai, India. Billions of people live in similar accommodations around the world, and soon they will experience faraway parts of the world through Spatial Computing devices.

Drivers and Benefits

The benefits of Spatial Computing play right into our Prime Directive of knowing how to better survive and how to better express ourselves through our creation and use of tools. Our need to have replicated three-dimensional worlds and objects in order to master our understanding and manner of expression is one that could be served by the software and technologies of Augmented Reality, Virtual Reality, and Artificial Intelligence.

Noted investor and Netscape founder Marc Andreessen has told markets he has a contrarian view to Silicon Valley's belief that Augmented Reality represents a better investment than Virtual Reality. He noted that it is a privileged view―that in Silicon Valley, residents have tons of beautiful places within an hour's drive, from beaches to vineyards. Most people in the world, he said, don't have those advantages.

Walk through neighborhoods, even middle-class ones, and you will see millions living in small homes in high rises. Telling them that they will want to wear computing devices while walking through their own surroundings will be a harder sell. Instead, Andreessen sees a world where people will wear headsets to visit the natural beauty well out of reach somewhere else in the world. Even in the United States, only about 20 percent own a passport, so asking them to visit historic sites in, say, Egypt or Israel won't be possible for most. We can, instead, take them there with Spatial Computing.

However, unlike Andreessen, we don't frame the question as whether Virtual Reality or Augmented Reality will be the "winner" between the two. We see that, by the mid-2020s, our Spatial Computing devices will let us move fluidly between putting virtual things on top of the real world and replacing the real world entirely.

Where we are going with this argument is that the hunger for the devices, services, technologies, and experiences that Spatial Computing affords will probably be far greater among the billions who can't afford a private jet to fly them to a Davos, Switzerland ski vacation, or even a Tesla for a weekend jaunt to Napa or Yosemite. That seems to be Andreessen's greater point: the investment opportunity here is grand because it will not only improve the lives of billions, but may lead to us saving ourselves with new education and new approaches to living, including being able to take courses using Virtual Reality and "travel" to different locations in the world without having to jump on an airplane.

However, an argument could be made that Spatial Computing is too isolating, given that human beings are social and that our social natures have aided us greatly in our need to survive and thrive. With Spatial Computing, though, we could choose to experience and learn something solo or networked with others.

Spatial Computing on its own can serve as the medium to interface with ideas, locations, processes, people, and AI characters, or as Betaworks' John Borthwick likes to call them, synthetic characters. These synthetic characters will replace the Machine Learning text bots that are currently ubiquitous―in effect, putting a three-dimensional body to the words. The benefit of this is that we will feel like we are engaging with a real being that feeds our social natures.

Along these same lines, entertainment that utilizes Spatial Computing will be even more true-to-life. Characters in these new kinds of narratives will feel more real to us, allowing us to gain even more insight into the human condition. Spatial Computing is a major innovation, the latest in a long line of ideas and inventions aimed at improving the human condition. It makes our lives better, bringing knowledge to us faster and at less expense overall.

Potential Dangers

It could be said that with great knowledge comes great responsibility. There are several associated potential dangers that come with our use of Spatial Computing. Areas we will touch upon here include potential loss of personal control, dilution of the natural, and population segmentation.

In terms of potential loss of personal control, the major one at the front of everyone's mind, because of Facebook's transgressions, is the disappearance of privacy. Especially with Augmented Reality, unauthorized use of data and media by companies, or authoritarian use by governments, could be a problem. Location and spending data, along with video and speech recorded without the knowledge of those nearby while headsets or glasses are worn, could present another wave of Google Glass-like uproar. However, we do not think this is going to happen: the uses of Augmented Reality have been lauded over the last few years, and there seems to be better advance acceptance of glasses-like headsets, both because the public expects privacy issues to be addressed by Augmented Reality hardware and software companies and because of the much deeper utility that the technology affords. Companies will need to be especially clear about what their data policies are and have appropriate opt-in policies to meet the expectations of the public. There is much to be gained by providing data for Augmented Reality purposes; these rewards should be heralded, while understanding that there are those who would prefer not to share their data.

Another potential loss of personal control is over-advertising. As Keiichi Matsuda's 2016 six-minute nightmare concept film "Hyper-Reality" portrayed, a world where Augmented Reality advertising is constantly overlaid on the real world is unbearable.

Having an opt-in system should solve this, but it might still be an issue where some kind of visual goods are willfully exchanged for advertising. Companies will probably push the advertising threshold with potential consumers to see how far they can go.

Speaking of extremes, this brings up the addiction that has come so naturally with the advent of the smartphone. As with any other kind of addiction, personal loss comes attached to it. There are many cases of people who have died taking selfies while in the throes of their digital addictions. There is the possibility that a person walking around in a future blockbuster Augmented Reality game will walk into traffic or even off a cliff, so technical safeguards based on Computer Vision will definitely have to be built in as alert mechanisms.

Along with this kind of addiction comes a dilution of the significance of the natural objects and environments of the world as they appear in reality. This is relevant to both Augmented and Virtual Reality. Dystopian visions about this abound, ranging from people never wanting to be in the real world again, to the death of learning, to the abandonment of care about pollution and global warming. In Virtual Reality, a person could potentially hurt and even kill a digital character without feeling the full effect of what these actions would be like in real life. A worry might be that these actions via Augmented and Virtual Reality could become so commonplace that the line between the imaginary and the real could blur to the extent that real people would then get hurt and killed. Industry oversight organizations might spring up to rate the level of violent content in Spatial Computing experiences so that viewing of these experiences could be better managed and controlled. In this way, possible negative societal effects could be mitigated.

On the other side of this, the benefits of Spatial Computing will not be shareable by everyone on the planet, for economic reasons. Even in relatively well-off countries, there will be segments of the population that will not be able to afford Spatial Computing headsets or glasses. The result is that there will be great inequality in information and productivity between those who have the devices and those who do not. We believe that laptop and desktop computers, as well as tablets and phones, will be replaced by Spatial Computing headsets or glasses. Without Spatial Computing devices, both work and entertainment could prove to be difficult. Over time, we believe the cost of Spatial Computing devices will come down due to technical efficiencies and product commoditization, so that much more of the world's population will be able to afford them.

There are many uses for Spatial Computing devices, which is a main theme of this book. We will now provide a backdrop for understanding why Spatial Computing will have the impact on the world we believe it will.

Understanding the Natural World

Sometimes, the storm of change comes as a real storm. Visit Half Moon Bay, California, a sleepy coast-side town near San Francisco and Silicon Valley, and you'll probably meet "Farmer John." That's John Muller, who moved there in 1947 to open a small family-run farm that is still operating, mostly growing pumpkins. We met him a while back and he told us that when he started his farm, he had to guess the weather, mostly by "feel." Today, he told us, his son relies on satellite imagery and AI models to know what tomorrow's weather will hold, rather than holding a finger to the wind and trying to guess if a storm is on the way. By knowing the weather, he can better schedule people to plant or harvest, and that knowledge saves his family a great deal of money. Knowing a storm is coming a few days in advance can save much. Knowing that a heavy rainstorm that might flood fields is coming can save millions in seed cost, particularly in places like Kansas. Knowing a tornado will soon hit a neighborhood can save many lives.

Soon, Farmer John's family will be warned of changing weather with Augmented Reality glasses, and they will be able to see storms in 3D instead of watching Twitter accounts like many farmers around the world do today (we built a Twitter feed composed of meteorologists and scientists to watch our changing climate better at https://twitter.com/Scobleizer/lists/climate-and-weather1). The government and others have spent billions on this infrastructure. It is hardly the only expenditure humans have made to understand our changing environment.

In 2003, we had a discussion with Bill Hill, a computer scientist at Microsoft. He invented the font-smoothing technology that we use on all of our devices now, and he told us how he came up with the technique. He had an interest in animals, thanks to reading tons of books while growing up in poverty in Scotland, and he learned to track them through forests and meadows by looking at their footprints and other signs. While doing that, he realized that humans evolved over millions of years to do exactly what he was doing as a hobby. Those who survived had deep visual skills and generally could see tons of patterns, especially in the green grass or trees that our ancestors lived in. If you couldn't, you were attacked and eaten by an animal that was camouflaged in those trees. So, those who couldn't see patterns, especially in the green foliage around us, had their DNA taken out of the gene pool. He used that knowledge to figure out that he could hide visual information in the color fringes that surround each letter on your screen. Get out a magnifying glass and look at fonts on Microsoft Windows and you'll see his work.

Today, we no longer need to worry about a lion hiding in the grass waiting to eat us, but we have new challenges that continue to push us toward developing new tools that can help us continue this great human experiment that we are all in.

The lions of today might be those in the capital markets that are pushing our companies to be more profitable. Or they might be a disability that is slowing down our ability to be our best. We think that's all important, but what did our ancestors do after they survived the day walking in the jungle trying to find food to feed their families? They gathered around fires to tell stories to each other after a day of hunting and gathering. We can see some of those stories even today as drawings on cave walls, which show brave hunters facing down animals and dealing with the challenges of crossing mountains and streams. Today, we don't need to hunt to eat, but there are those who still want to do these things as sport, and what they are doing with technology demonstrates that we are under a Prime Directive to build better tools and experiences, even for things humans have done for many generations.

Let's look at another example of how people are utilizing technology in order to better understand their environment and augment their experiences. Vail, Colorado resident Ryan Thousand, https://www.linkedin.com/in/ryanthousand/, is an IT administrator for a healthcare company, Vail Health. Many evenings after work, and on weekends, he is a passionate fly fisherman who loves catching fish, not to eat, but to capture for his Instagram channel to brag to his friends, before he releases them back into the stream for others to experience the same joy. He saw a need to track what kind of lures he was using as stream conditions changed, so now he's building 3D models of the streams he's fishing in. While wading in the water, he wears smart boots that measure water flow, temperature, and other conditions. The data he gathers with those and other tools is collected on his phone and streamed to a database he's building, and also to the government, which is using data from sportspeople to track environmental ecosystems.

His early attempts to build useful tools for fly fishermen brought him to Oakley's ski goggles, which had a tiny little monitor built into them. He is seeing a world, coming soon, where he can move away from having to view data through a tiny little monitor that gets in the way of the real beauty he is usually surrounded by, and instead transition to wearing a headset like Microsoft's HoloLens while fishing so that he can capture stream conditions on top of the actual stream. Soon, he sees, people will be wearing a set of Spatial Computing glasses, which will include a virtual guide that will not just show you potential fishing spots in rivers and lakes before you go on a fishing vacation, but will then help you learn how to properly cast a fly, and even tell you where to stand in the stream to have the best chance of catching a fish. Thousand's Prime Directive is to make fishing better and to help everyone who visits Vail learn more about the environment, which will help them save it for many future generations.

The changes, he told us, could transform all human activity, even one as familiar as grabbing a fishing rod for a day of relaxation in a stream. That's due, in part, to things like new 3D sensors and new optics that will let us augment the real world with new kinds of digital displays.

Faster Wireless Leads to New Affordances

Watching hockey after the Spatial Computing storm hits will never be the same. John Bollen, https://www.linkedin.com/in/johnbollen/, seems to have a different Prime Directive. Not one of catching fish, but one of catching fans.

He is putting the finishing touches on the 5G infrastructure inside Las Vegas' T-Mobile Arena and has built hotel infrastructure for a few of Vegas' most modern hotels. If you check into a room at the Aria resort, you will see his work when you turn on the IoT-run lights in your room, but now he has a bigger project: to augment an entire stadium. He told us of 5G's advantages: more than a gigabit per second of bandwidth, which is about 200x LTE ("Long-Term Evolution" wireless broadband communication for mobile devices); very little latency, about two milliseconds from your phone to an antenna; and enough capacity that a stadium full of people holding phones or wearing glasses will be able to connect at the same time. That is what many of the first users of 5G are experiencing; the theoretical limits are far higher. If you've been in a packed stadium and weren't able to even send a text, you'll know what a big deal that is.

He is excited by this last advantage of 5G: that visitors to the stadium he's outfitting with 5G antennas will be able to all use their devices at the same time.

T-Mobile and other mobile carriers want to use Las Vegas to show off the advantages of using 5G. They have hired him and his team to hang those antennas and lace fiber optic cables above the hallways in the stadium. He dreams of a world where hockey fans will use Spatial Computing to see hockey in a whole new way: one where you can see the stats streaming off of your favorite player skating across the rink.

He also sees the cost advantages of augmenting concerts. To make the fan experience amazing, the teams he's working with often have to hang huge expensive screens. He dreams of a day when he can deliver virtual screens all over the concert venue without renting as many expensive, large, and heavy screens and paying crews to hang them in different configurations. That dream will take most of the 2020s to realize because he has to wait for enough fans to wear glasses to make that possible, but his short-term plans are no less ambitious.

Photo credit: Robert Scoble. John Bollen, left, gives a tour of the 5G infrastructure at Las Vegas' T-Mobile Arena to healthcare executives.

Soon, you will hold your phone in the air to see Augmented Reality holograms dance and perform along with the real performers on stage. Or, maybe they won't be real at all. In China, thousands of people have already attended concerts of holograms that were computer generated, singing and dancing and entertaining stadiums. He knows humans are changing their ideas of what entertainment itself is, and he and his new stadium are ready. There are new stories to be made. New stories that will put us in the game, or on stage at a concert.

What really is going on here is that soon everyone in the stadium will be streaming data up to massive new clouds, particularly the hockey players, who will be captured by new cameras that record the game in volumetric detail and who will be wearing sensor packages that show coaches and fans all sorts of biometric data, such as their current speed, and more. This Spatial Computing data will change not just hockey games, but how we run our businesses, how we shop for new things, and the games we play with each other.

Data, Data, Data Everywhere

Data clouds and data floods are coming to our factories and businesses, thanks in part to the 5G infrastructure that's being built. This will bring a storm of change and a need to see patterns in that data in a whole new way.

A few years ago, we visited the new Jameson Distillery near Cork, Ireland. At one point, we asked the chief engineer, who was proudly showing us around the new building and the machines that make the whiskey, "How many sensors are in this factory?"

"So many I don't even know the number." That was more than five years ago. Today, some warehouses and factories have tens of thousands of robots and hundreds of thousands of sensors. You won't find anything useful in the data that's streaming off by using Microsoft Excel; there's just too much data to look through grids of numbers.

Los Angeles-based Suzie Borders has a better idea: turn those millions of numbers streaming off of sensors into something that humans can better make sense of; for example, a simple virtual light on top of a factory machine. She showed us what she meant by putting us inside various devices, including a Magic Leap Spatial Computing headset and a variety of Virtual Reality headsets, and walking us around datasets. Datasets that don't look like datasets at all! They are more like a new kind of virtual interface to data that you can grab and manipulate.

Photo Credit: Robert Scoble. Suzie Borders, CEO/Founder of BadVR, sports her Magic Leap while showing us around a virtualized factory floor, all while we were having tea in San Francisco.

After getting a demo of a bunch of different ways to look at different businesses and different data (the data streaming off of a sensor is quite different, and needs to be seen by humans differently, than, say, the transaction data coming from a company's bank accounts or point-of-sale machines), we came away believing that an entirely new way of working will soon arrive: one that will demand new kinds of Spatial Computing devices.

Yes, people will resist wearing glasses, but those who dive in will find they get the raises and kudos for seeing new ways to make companies more effective and efficient, and we are betting a lot of those new jobs will be using Suzie's software.

The upgrade from 2D technologies to Spatial Computing's 3D technologies has its roots in the analog to digital transformation that happened prior. It is important to understand some of the impetus behind the prior change and what significance it has for our shift.

The Impulse to Capture, Understand, and Share

Paradigm shifts, which come in storms of new technologies, maim big companies and create new stars. The iPhone marginalized both Eastman Kodak and Nokia. Both companies were dominant at one point, but new technology created new stars. The same will happen in the Spatial Computing decade, which is just beginning. The storm will be violent and swift, but to understand why this is so, we need to go back to that earlier time.

Ansel Adams first visited Yosemite in 1916 when he was 12 years old, carrying a new camera he had been given as a gift. He was so struck by the beauty he saw everywhere that he tried to capture it, with the intent of sharing it with friends and family back home. He was frustrated, his son Michael told us, that his photos didn't properly capture anything close to what he experienced in real life, and that drove him to spend the rest of his life trying to find ways to improve photography so that his photos would more closely match what he experienced.

That act has been repeated by millions of visitors each year who stand in the spots where he captured Half Dome or El Capitan, but it is by visiting Adams' home in Carmel, California, that you see the ties between the photography of the past and where we are going next. His darkroom still holds his dodging tools, enlarger, and the bottles of chemicals Adams used to develop, stop, and fix the silver halides that we all see in his photography today. That was hardly the only technology Adams used in making his photographs, but it gives insight into his creative process, which is still studied by many photographers today.

In fact, that same creative process has been trained into many cameras' metering systems and, now, in modern phones from Huawei and Apple, has been taken further thanks to Computer Vision systems that make photos sharper, better exposed, and more colorful.

Photo credit: Robert Scoble. Michael Adams, Ansel Adams' son, shows us Yosemite Valley and tells us how, and where, his dad took famous photos, with him in tow.

What we also learned is that Adams created much of Kodak's advertising, by using a tripod he invented that enabled him to take "wrap around" photography by shooting several images one next to the other. A phone's panoramic mode basically does the same thing, without needing a tripod with notches in it to properly align the images. A computer in your phone now does the work that Ansel used to do. What would Adams think about digital photography and the hordes of people taking photos on phones and other devices? Or AIs that "improve" photography by searching large databases and replacing things, like blurry overexposed moons with great-looking properly exposed ones, something that Huawei's latest phones do? Adams' son says that if Ansel were alive today, he would be right there along with other innovators: pushing the technology of photography even harder in an attempt to get closer to the natural world.

Why? He, and many environmentalists such as John Muir, played key roles in protecting Yosemite as a National Park, as well as getting many to visit the park through their work. Their idea was that if they could just show people the natural world better, they would be able to get people to travel to see it. If they could travel to see it, they might change their attitudes toward nature and change their polluting ways. This point of view is more needed today now that we can see man's impact on Earth is far deeper and more dangerous to our long-term survival than even Adams could see 100 years ago.

Who fills Ansel's shoes today? Or who will need new forms of images the way Kodak needed Ansel's photos to hang in Penn Station, New York? People like Ross Finman. He runs an Augmented Reality lab in San Francisco, California, for Niantic, the company that built Pokémon Go. We can see a world where games built on Niantic's platforms will change our behavior quite deeply. We saw thousands of people running across New York's Central Park just to catch a coveted rare Pokémon. This is a new behavior-change platform, and Finman and his team are the ones building the technology underneath it. Isn't behavior change what advertisers are paying for? Niantic's platforms understand the world better than most and were started by the team behind Google Earth, so they've been at this for quite some time.

His dad, Paul, and mom, Lorna, met at Stanford University, where they both got PhDs: his in electrical engineering, hers in physics. Both are passionate about science and share a sizeable warehouse where Paul continues to develop new technologies. We visited this lab because it is where Ross developed his love of robotics (he later went on to Carnegie Mellon to study that very subject, and afterward started a company, Escher Reality, which was sold to Niantic; more on that in a bit). The first Niantic game to see his work was "Harry Potter: Wizards Unite," which shipped last year.

Outside of the lab, in a huge field, an autonomous tractor moves around, showing us that this isn't your ordinary Idaho farmer. This field is where Ross first developed his SLAM algorithms, which underlie what we now call the AR Cloud and are the basis for how Augmented Reality works (and how that tractor navigates around the field). SLAM stands for "Simultaneous Localization and Mapping"; it builds a 3D map of the field, which a computer can then navigate around.

Augmented Reality glasses and Autonomous Vehicles, along with other robots, and virtual beings, all use SLAM to "see."
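
Finman's SLAM work is far more sophisticated than anything we can show here, but the "mapping" half of the idea can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration in Python: it assumes the vehicle already knows its own pose (real SLAM has to estimate the pose and the map at the same time) and simply converts range-and-bearing sensor readings into points on a shared world map.

```python
import math

def landmark_to_world(pose, distance, bearing):
    """Convert one range/bearing observation, taken from a known robot
    pose (x, y, heading in radians), into a point in world coordinates."""
    x, y, heading = pose
    angle = heading + bearing
    return (x + distance * math.cos(angle),
            y + distance * math.sin(angle))

# Three noisy sightings of the same fence post, logged while the tractor
# drives a straight line: (pose it believes it is at, distance, bearing).
observations = [
    ((0.0, 0.0, 0.0), 5.00, math.radians(30.0)),
    ((2.0, 0.0, 0.0), 3.42, math.radians(47.0)),
    ((4.0, 0.0, 0.0), 2.52, math.radians(82.5)),
]

# Accumulate a simple point map of everything the sensor has seen.
world_map = [landmark_to_world(pose, d, b) for pose, d, b in observations]

for px, py in world_map:
    print(f"landmark seen near ({px:.2f}, {py:.2f})")
```

Run it and the three estimates cluster around the same spot; a real system fuses thousands of such observations per second, and also corrects the pose itself as the map improves.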

Photo credit: Robert Scoble. An autonomous tractor rolls around in a field surrounding Paul Finman's lab in Coeur d'Alene, Idaho. This field is where Ross Finman did his work building the Augmented Reality technology now being used in Niantic's Harry Potter game.

The SLAM/AR Cloud system that Finman is developing isn't directly connected to Ansel Adams' form of chemical-based photography, but there is a tie between Adams' impulse to capture and share and Finman's. Ansel's work was analog; Finman's work is digital. Unlike earlier digital photography, Finman's system captures the real world as a 3D copy instead of as a 2D grid, which was closer to what Ansel did. Finman is the Ansel Adams of this new polygon-based age. His work will let us be in the photograph. Unlike Adams, though, we can use that data to train AIs that will keep us, or at least something that looks, moves, and sounds like us, around, potentially forever.

This process of understanding the real world, converting it into digital information, and then processing it into a form that can be transmitted and displayed (developers call it a mesh) is the basis of everything you will read about in this book.
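
To make the word "mesh" concrete: a mesh is simply a list of 3D points plus a list of triangles that connect them. The fragment below is a minimal, generic sketch (our own illustration, not any particular engine's or SDK's format) showing how a captured patch of floor might be represented and measured.

```python
import math

# Four corners of a captured patch of floor (x, y, z), in meters.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]

# Two triangles, each listing indices into the vertex list above.
triangles = [(0, 1, 2), (0, 2, 3)]

def triangle_area(a, b, c):
    """Area of one triangle via the cross product of two edge vectors."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    cross = (ab[1] * ac[2] - ab[2] * ac[1],
             ab[2] * ac[0] - ab[0] * ac[2],
             ab[0] * ac[1] - ab[1] * ac[0])
    return 0.5 * math.sqrt(sum(v * v for v in cross))

# Total surface area of the mesh is just the sum of its triangles.
area = sum(triangle_area(*[vertices[i] for i in tri]) for tri in triangles)
print(f"mesh covers {area:.2f} square meters")  # 1.00 for this unit square
```

Everything a headset, robot, or game does with the captured world, from placing a virtual object on a table to routing a character around a room, ultimately operates on structures like this.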

Understanding the gap between the real world and the digital one is key. We think of the real world as the analog world, because the photons, that is, light, that hit our eyes from, say, a sunset, or the sound waves that hit our ears, arrive as smooth, continuous waves. The digital world that underpins Spatial Computing is very different: it has been sliced and diced into polygons or quantized into the streams of numbers playing on Spotify. You see the gap when you look at virtual beings, like avatars, and there isn't enough resolution to properly "fool" your mind. The industry calls this "the uncanny valley": the difference between our emotional response to seeing, say, a real human singer and an artificial one.

Finman studies this uncanny valley, and in his work at Niantic he is working to more completely understand the real world, build systems that have deeper situational awareness, and then close the gap between how we sense the real world and how we perceive, and emotionally respond to, this new digital world.

The impulse that Finman has to capture, understand, and share the world using new technology is the same one that Adams had with his camera, and it is what ties both of them into our Prime Directive to become better human beings.

Soon, Finman will bring us powers to capture, understand, and share our world that Adams probably could never have fathomed. Our children will be able to jump into family dinner remotely or, in the future, go back in time to experience what it was like to have their mom serve them dinner. A new way of remembering your life and everything in it is on the way with Spatial Computing.

Soon, too, analog experiences will be relegated to special places like racetracks or rare vacations to beautiful places, both of which you will increasingly need to be wealthy to experience. The storm of change is about to see everything turn digital, and that has deep implications for games, entertainment, and how we capture, understand, and share our own lives with others. Soon, we will be interacting with this virtual world, and innovation teams are already gaining new skills to build new interfaces to it.

We won't be surprised if, in a few decades, Finman and other innovators like him are celebrated the way we celebrate Adams today: as true pioneers who pushed technology to its ultimate edge. We also won't be shocked if a few huge companies or product lines disappear over the next decade.

To understand more of the gap between this polygon-based digital age and the analog age of media that is coming to a close, we need to visit Neil Young's studio. He invited us in to listen to some of his music on a two-inch analog tape. He wanted to show us what we lost when we moved music from analog tape and vinyl records to listening on Spotify on our phones.

His audio engineer, John Nowland, played us "Harvest Moon" on that analog tape, and then we listened to it in digital form after that analog master had been turned into digital slices: 600,000 of them a second, which is about 12 times more than you will hear on a CD. So, pretty high resolution.

We could still hear a difference. "The closer you were to the analog, the more natural it felt," Nowland told us. "You lose the nuances and detail when you squash it down." That interview is at https://youtu.be/Ta-RvERB6Ac.
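
That "slicing" is literally just sampling and quantization: measuring the smooth analog wave many thousands of times per second and rounding each measurement to a whole number. Here is a minimal Python sketch of the idea; the 440 Hz tone and 16-bit depth are our own illustrative choices, not Nowland's actual setup.

```python
import math

def digitize(frequency_hz, sample_rate_hz, bit_depth, duration_s=0.001):
    """Sample a pure sine tone and quantize each sample to an integer."""
    max_level = 2 ** (bit_depth - 1) - 1        # e.g. 32767 for 16-bit audio
    n_samples = int(sample_rate_hz * duration_s)
    return [round(max_level * math.sin(2 * math.pi * frequency_hz * n / sample_rate_hz))
            for n in range(n_samples)]

cd_quality = digitize(440, 44_100, 16)   # a CD takes 44,100 slices per second
high_res   = digitize(440, 600_000, 16)  # the master we heard: ~600,000 slices

print(len(cd_quality), "samples per millisecond at CD quality")
print(len(high_res), "samples per millisecond at the higher rate")
```

The higher the rate and the bit depth, the closer the staircase of numbers hugs the original wave; what Nowland hears as lost "nuance" is whatever falls between the steps.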

Photo credit: Rocky Barbanica. Robert Scoble meets with Neil Young's audio engineer, John Nowland, and listens to music in both analog and digital formats.

Dancing Into Different Worlds

The storm of change will mean that the nerds who used to build the computer software of old, by typing thousands of lines of code into a black box on a flat screen, might need to learn to dance.

Yeah, if you watch someone play in Virtual Reality you might think they are dancing, but that isn't what we are thinking about. There's a new type of computer science underway: one that uses choreography to train the AIs that control autonomous cars, robots, virtual beings, and even present humans in Augmented and Virtual Reality with interfaces that better serve us.

Machines can be cold. In the worst of cases, they can crush us on factory floors, or kill us in the streets, like a computer-controlled car developed by Uber did one unfortunate night in Arizona. In the best of cases, they can give us superpowers and make our lives easier, but even then they can often be made to better serve humans in a dance of sorts.

Did you know there is an entire conference for research on these new ways of training machines, called "Choreographic Interfaces"? We didn't either, until we were introduced to Catie Cuan, who is currently studying for a PhD at Stanford University on the topic. As she showed us around Stanford's AI and robotics labs, where she is doing her research, we discussed how innovation teams will soon be changed by her and by other people who are using techniques from dance choreography to train computers.

Photo credit: Robert Scoble. Catie Cuan, Stanford AI student, tells us about using dance choreography to train robots in the lobby of the Bill Gates building at Stanford University, which is the building where Google started.

What makes her unique among most of the computer scientists we've met? She's a former dancer. "Dance and choreography is all about moving with intention," she says, pointing out that she spent more than two decades dancing. She uses the knowledge she gained by studying how human bodies can be made to move in different ways and translates that training and knowledge into teaching robots and virtual beings to move.

Cuan is working for automobile companies that are developing autonomous cars, among others, as part of her studies (she did some of her schooling at the University of Illinois, which is where Tesla and PayPal started, among others). She told us that humans go through an intricate dance of sorts as they wave each other through intersections, for instance. Cars without human drivers can't communicate with humans on the street or in other cars that way, so they need to be trained both to recognize human gestures, say, a police officer standing in the middle of an intersection directing traffic, and to communicate to humans their intention to move or stop, and to do so in a pleasant, human way.

Now, as computing will be everywhere, we need new interfaces. Ones that respect us, understand us, and warn us properly, all in a very human way. Building these new interfaces requires new kinds of innovation teams, with even a dancer or two to help out. Why? Well, she says that she sees an optimistic future for human/machine interactions, and what better way to make the machines more fun and engaging to be around? As this storm of change comes, though, it may frustrate many people who want things to stay the way they are, or who can't wrap their heads around the fact that they need to work with someone doing choreographic work. Many pioneers face similar resistance in their careers. We suggest that if you feel yourself resisting this change, you might need to be the one to change, lest you, or your company, be left behind.

The Frustrated Pioneer

Doug Engelbart had a frustrated spirit when we first talked with him in 2005. If you search YouTube for "mother of all demos," you'll find the video of him demoing something that looked like the Macintosh. He guided a mouse cursor across the screen and showed off many new computing concepts that would dramatically change how we would view and use computing. The official title was "A research center for augmenting human intellect." That was way back in 1968. Almost 20 years later, the Macintosh was born from many of the ideas he demonstrated in that demo. He later went on to win many awards and accolades, and he was particularly proud of the National Medal of Technology, the United States' highest technology award. Yet here he was, sharing his frustration that he wouldn't be around to help humans see what he told us would be his most important work: augmenting human beings. In other words, giving them the tools to make themselves better.

"Why are you frustrated?" we asked him. That led to a discussion of how Engelbart was kicked out of his own research lab back in the 1970s (which is the same lab, SRI International, then named Stanford Research Institute). This is the lab that brought us Siri, HDTV, and played key roles in many, many other things, including the internet itself). Some of his coworkers say that he was too focused on the future and not on getting revenue for the lab, but Engelbart answered "people don't understand what I'm saying." He explained that he was dreaming of a better future, where humans and machines would join together. In one trip, he pulled out a glove that would let people touch and type faster than on keyboards.

Photo credit: Robert Scoble. Robert Scoble's son, Patrick, talks with Douglas Engelbart and plays with the original mouse.

When asked why they didn't understand, he answered that, for that, you have to go back to the context of the day. Back in the 1970s, before personal computers came along, computers were run either by data entry clerks or by PhDs and other highly trained people who wrote the code that ran them. When he told his coworkers and others that someday we'd have a supercomputer in our hands, he was laughed off as a crackpot.

He had always dreamed of a world decades away and talked with us about how computers would soon lead to different realities. As the inventor of the original mouse, he was always looking for ways for humans to use computers better than typing on a keyboard or stacking piles of punch cards into trays, which is what most computer users back in the 1960s were doing. Today, we all use the progeny of his work, and this pioneering spirit of dreaming of a better world, where humans would have better control of the technology they invented, continues to drive us to new and greater innovations.

His idea was that computing could go beyond merely performing calculations, and that it could be used to augment the capabilities of the human mind. Why was he frustrated? Because even as he was in the final phases of his life, he could see a world where people would see or sense computing all around them and be able to touch it, maybe with gloves, but hopefully, he told us, with sensors that could see the human hands, eyes, and maybe someday into their minds.

These are ideas that even today aren't widely used or popular. Virtual Reality has sold only a few million headsets at the time of writing this book. Most humans haven't spent even an hour in one, and here this man was telling us about a world he saw well past Virtual Reality, one where computing is on every surface and where you can use your hands, eyes, and voice to control it! His thinking on this topic led him to develop what came to be known as Engelbart's Law: that computing is increasing at an exponential rate, so we should be able to exponentially increase our performance as well. We see this as one of the key drivers behind the development of the technology that is about to disrupt seven industries, and probably more.

He saw a world where we would soon have technical capabilities beyond the imaginations of most people; for instance, cars that could drive themselves, or systems that would let us see a new digital layer making our understanding of the world far better than before. His ideas of augmenting human performance through technology changed all of our lives, and soon will bring us both new powers and a new understanding of what it means to be human; but to do that, he needed to wait for our machines to get new senses. We wonder if even Engelbart himself, though he usually was decades ahead of others in his thinking, could have imagined the 3D sensors that are now in our phones and cars. Could he imagine the databases and cultural changes that they are now bringing? Knowing how he thought, he might resist some things for a few microseconds, but then he'd smile, lose his frustration, and see that someone else is pushing the world toward his ideas. Let's meet one of those people now.

Breakthroughs in Sensors

David Hall had a building to fill. In the 1980s, the company he started, Velodyne, built subwoofers. Back in the day, they were the best subwoofers because they were the first to be digitally controlled. Audio fans lined up to get one, since they made your audio system dramatically better. But by around 2000, he realized that China had taken over manufacturing, so he sent his production to China since he couldn't get parts anymore to build them within the United States, as he told us in an interview here: https://youtu.be/2SPnovjRSVM.

Photo credit: Rocky Barbanica. Velodyne founder/CEO, David Hall, shows off his LIDARs and talks about his career as an inventor of everything from new kinds of subwoofers to self-leveling boats.

That left him with an empty building, so he started looking for new things to do. He heard of the DARPA Grand Challenge, a prize competition for American autonomous vehicles funded by the Defense Advanced Research Projects Agency (DARPA), a prominent research organization of the US Department of Defense, and readied a vehicle to enter in an attempt to win the $1 million prize. The military wanted to spur development of autonomous vehicles. Hall soon realized he didn't have the software engineers needed to win, because he was up against teams of university students from schools like Stanford and Carnegie Mellon, but that showed him a new market: making sensors for the other teams. He had the engineers and production ability, thanks to his empty building, to do that, and so he started building LIDARs.

LIDAR, which stands for Light Detection and Ranging, is the spinning thing you see on top of autonomous cars often seen driving around San Francisco or Silicon Valley. Most of them are from Velodyne. Google's early cars used his LIDARs, which spun dozens of lasers many times a second. The cars used that data to build a 3D model of the world around the car. "If they have a spinning one, that's mine," he told us.

Invisible light from his spinning device bounces off the road, signs, other vehicles, and the rest of the world, or in the case of his boat, waves in the ocean. A computer inside measures how long the light takes to return to the sensor and builds the 3D model of the world. Without these sensors, the car or robot can't see, so it can't navigate around the world. His technology plays an important role in Spatial Computing because it gave computer scientists new data to train new kinds of systems. Google Maps, for instance, is better, Google's head of research, Peter Norvig, told us, because as it used Hall's LIDARs, it trained a system to gather data about the street signs the car was passing.
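
The math a spinning LIDAR relies on is straightforward, even if building one is not: each pulse's round-trip time gives a distance (half the time multiplied by the speed of light), and the firing angles turn that distance into a 3D point. Below is a minimal, illustrative Python sketch of that conversion; it is our own simplification, not Velodyne's firmware.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def pulse_to_point(round_trip_s, azimuth_deg, elevation_deg):
    """Turn one laser pulse's round-trip time and firing angles into an
    (x, y, z) point, in meters, relative to the sensor."""
    distance = SPEED_OF_LIGHT * round_trip_s / 2.0   # light goes out and back
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A pulse that returns after ~100 nanoseconds hit something about 15 m away.
print(pulse_to_point(100e-9, azimuth_deg=45, elevation_deg=-2))
```

Multiply this by dozens of lasers spinning many times a second and you get the millions of points per second that the car's software assembles into its model of the street.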

"(Starting a LIDAR company) was over everyone's dead body," he said, because his company was so focused on making subwoofers. "I look for things that are electrical and mechanical." He convinced his team that they had a new market and a new thing to do, and now is an important player in the autonomous car space. He also used them to build a boat that won't make you sick (computers control hydraulic lifts to glide over waves).

In 2019, Google also used that data to turn on new Augmented Reality navigation features, where data from your phone's camera is compared with 3D models of the street built using data gathered from Hall's LIDARs. It is this impulse to build machines that can better see, and hence make humans more powerful, that enabled these new features, and for that, Hall is an American engineering star who is bringing us a whole new world. As we're about to see, this sort of development was not happening only in America, and with sensors attached to Spatial Computing devices much smaller than an autonomous car.

Thousands of miles away, in Tel Aviv, Israel, another innovator was working on similar 3D mapping technology, with a similar philosophy about how 3D sensors would soon play a key role in making human life better. That man is Aviad Maizels, and he started PrimeSense. PrimeSense was purchased by Apple and did much of the engineering on the sensor inside modern iPhones that sees your face in 3D, which lets the phone unlock just by seeing your face, and soon will do much more.

Maizels told us back in 2013 that 3D sensors would soon be on everything. We met Maizels at the company's first suite at the Consumer Electronics Show in Las Vegas, where his company showed what 3D sensors could do―everything from seeing you buy cereal boxes to letting you draw on a standard desk with your finger. Today's Amazon stores demonstrate that his vision was correct. In them, sensors watch you take products off the shelves and charge you properly, with no registers or lines to wait in, saving you time and hassle.

It is worthwhile, though, to go back to that booth and take a look at how large PrimeSense's sensor was back then. In just a few years, it would shrink to a twentieth of its size to fit in the notch on current iPhones. Soon, it may disappear altogether, since standard cameras can now build 3D models of the world, which has been proven by companies like the recently acquired 6D.ai, which can do much more with a standard camera than even what PrimeSense demonstrated in that suite back in 2013, as can be seen in this video: https://youtu.be/4VtXvj4X0CE.

Even in autonomous cars, this argument between using standard cameras and 3D sensors is still playing out. Tesla's engineers told us that they are betting on cameras instead of the more expensive LIDARs that companies like Velodyne are producing.

If you visit Silicon Valley, you will see lots of autonomous cars from companies like Cruise, now owned by General Motors, Waymo, formerly of Google, and others, all using LIDAR to sense the world around the car, while Tesla is betting on cameras, along with a few cheaper sensors like radar and ultrasonic sensors.

Photo Credit: Robert Scoble. PrimeSense founder/CEO Aviad Maizels shows off the 3D sensor he and his team invented. This technology has now been sold to Apple, which shrank it and included a similar sensor at the top of modern iPhones, where it senses your face to unlock your phone or do Augmented Reality "Animojis."

That is a tangential argument to the one that we are making. These innovators brought our machines new ways to see and because of that, we now have new ways to see ourselves and the world we live in, which will grant us superpowers, including ways to be remembered by future generations that only science fiction dreamed about before.

Driven to Improve

"Watch this," Elon Musk, CEO of Tesla, told us as he raced Jason Calacanis, driving a top-of-the-line Corvette through the streets of Santa Monica in the first street drag race done in a Tesla on February 18, 2008. Good thing the video hasn't survived, since speeds in excess of the speed limit were quickly reached and Calacanis was left in the dust (he soon became an early investor in, and owner of, his own Tesla). Elon was behind the wheel of serial model #1 of the first Roadster Tesla produced. What we learned on that ride has become a deep part of Tesla's story and gave us insights into Elon's philosophies early on. Yes, he was demonstrating the advantages of electronic motors. He bragged that a Tesla, with its high-torque electric motors, could accelerate faster from 0-60 mph than his million-dollar McLaren F1.

He bragged about something else, though, that stuck with us all these years: electric motors could be programmed never to slip, unlike most gas engines, which have to "wind up" 400+ parts to apply torque to the pavement. We got a look at this recently as the co-author, Robert Scoble, drove his Tesla Model 3 through the snow in Yosemite. It never slipped, even while going uphill in icy conditions.

Photo credit: Robert Scoble. Elon Musk, left, and Jason Calacanis check out the first Tesla only days after Elon got it off of the factory floor, before heading out for a street race where Elon demonstrated how much faster it was than Calacanis' new Corvette.

Today, Elon is bragging about Tesla's safety: every new Tesla comes with eight cameras, along with several other sensors, and he is adding new self-driving features every month to the fleet of hundreds of thousands of cars on the road, thanks to an over-the-air update system that is way ahead of other automakers'. In early 2019, he announced the production of a new kind of Machine Learning chip with 21 billion transistors on board; he says these will make Teslas the most advanced cars on the road and much safer than those that only have a human to steer them.

It's called Autopilot, because as of 2019, a human is still needed to drive the car in city traffic and needs to be available in case other situations arise. It doesn't navigate around new potholes, for instance. That's a temporary step, though, as Elon has demonstrated full self-driving features where the car can drive without human intervention.

Industry observers like Brad Templeton, who worked on Google's autonomous vehicle program and who also owns a Tesla Model 3, say that Elon is too aggressive with his timeline and that it may take most of the 2020s to make it safe enough to remove humans from the equation completely.

Photo Credit: Robert Scoble. Some of Tesla's Autopilot/Full Self-Driving programming team hang out with Scoble's son at a Salinas Tesla Supercharger and talk about the future of autonomous cars.

What does this have to do with Spatial Computing and, especially, our Prime Directive? Everything. The techniques Tesla's engineers are developing (along with others like GM's Cruise, startups like Zoox, and the former Google team that is now Waymo) are very similar to the techniques that Finman uses to enable a virtual Harry Potter to walk around the real world, and to the techniques that Apple, Microsoft, Facebook, and others will use to guide users around the real world in the Spatial Computing glasses that will soon come to market.

Teslas are already saving their owners' lives; on YouTube, you can watch them automatically stop or steer around potential accidents, even at high speeds. And don't discount the many hours given back to owners as their cars automatically drive in traffic during commutes. Those hours represent lives, too. Hours, er, lives, that can be used to do other things.

In Chapter 3, Vision One – Transportation Automates, we'll dig deeper into the other changes that will soon come once humans aren't needed for cars to move around and the Machine Learning/AI that runs the self-driving technology is put to work doing other things.

Things like going to an Amazon distribution center to pick up packages in a much more secure way than having a delivery person leave them on your front porch, where they are easy to steal. Even in 2019, the computers in a Tesla also watch for people tampering with the car and can automatically record, say, someone breaking a window to steal something inside, and share that footage with owners via an app on their mobile phones.

Hands-on Use

While Elon and his teams were working on cars that use Machine Learning to look at the road ahead, Andy Wilson was working in his lab inside Microsoft Research to come up with better technology tools, not for our roads, but for our hands.

Wilson was the first to show us how a computer could watch your hands for gestures. He demonstrated how a computer could see that you had pinched your thumb and finger together.

His first demo showed how you could "grab" a map in midair, zoom it, and twist it. That gesture survives, more than a decade later, on Microsoft's HoloLens.
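
To make the idea concrete, here is a minimal, hypothetical sketch of how a pinch gesture can be detected once a hand tracker reports fingertip positions. The names, thresholds, and structure are our own illustration, not Wilson's or Microsoft's actual code; real systems add filtering and per-user calibration.

```python
import math
from dataclasses import dataclass

@dataclass
class Point3D:
    x: float
    y: float
    z: float

def distance(a: Point3D, b: Point3D) -> float:
    """Euclidean distance between two tracked points, in meters."""
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

# Hypothetical thresholds: the pinch "closes" below 2 cm and "opens" above 3 cm.
# Using two thresholds (hysteresis) keeps the gesture from flickering when the
# fingertips hover right at the boundary.
PINCH_CLOSE_M = 0.02
PINCH_OPEN_M = 0.03

class PinchDetector:
    def __init__(self) -> None:
        self.pinching = False

    def update(self, thumb_tip: Point3D, index_tip: Point3D) -> bool:
        """Return True while the thumb and index fingertips are pinched together."""
        d = distance(thumb_tip, index_tip)
        if self.pinching and d > PINCH_OPEN_M:
            self.pinching = False
        elif not self.pinching and d < PINCH_CLOSE_M:
            self.pinching = True
        return self.pinching

# Example: feed in two frames of made-up fingertip positions from a hand tracker.
detector = PinchDetector()
print(detector.update(Point3D(0.0, 0.0, 0.4), Point3D(0.05, 0.0, 0.4)))  # False: fingers apart
print(detector.update(Point3D(0.0, 0.0, 0.4), Point3D(0.01, 0.0, 0.4)))  # True: fingers together
```

Once the pinch itself is recognized, the map manipulation Wilson showed reduces to tracking the pinched hand: moving it pans the map, and with two pinched hands, the changing distance and angle between them become the zoom and the twist.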

Wilson has been trusted by Microsoft for decades. Back when Bill Gates was giving speeches, Wilson built the demos that showed off the company's most advanced technology. That earned him his own Augmented Reality lab. Once when we visited, he had a set of projectors and sensors hung from trusses overhead. He demonstrated how he could put computing onto every part of your body, and even onto the room itself. "Here you go," he said, while dragging a virtual photo out of his pocket and handing it to us. It was the first time we had experienced Spatial Computing where we were "inside" the computer and it was interacting with us.

That research, along with work by others, came together to form the basis of HoloLens, where now, while wearing a $3,500 headset (instead of standing in a room with thousands of dollars' worth of projectors and sensors), you can do the same thing. It's amazing to see how long it takes some things to get commercialized. We first started visiting Wilson's lab back in 2005, but there's someone who has been working on this stuff for far longer: Tom Furness.

In the 1960s, Furness worked on airplane cockpit design, and that's when he built the new virtual interfaces that we think of today as Virtual Reality or Spatial Computing. Now, he is a professor at the University of Washington and founder of the Human Interface Technology Lab. That school did seminal work studying how Virtual Reality could be used to treat pain. Its results, at http://vrpain.com, show that Virtual Reality, when used with burn victims, is better at reducing pain than morphine is. That is something we hope becomes more widely adopted as we try to keep people from becoming addicted to opiates.

Photo Credit: Robert Scoble. Tom Furness, right, and Robert Scoble hang out at Digital Raign's Reality Summit.

He told us that he and his students are developing Spatial Computing to engage, enlighten, and even entertain people in new ways. We include him here not just because he is seen as the grandfather of Virtual Reality, but because he is a great example of the pattern of how innovation frequently happens: first for military uses, and only later for consumer applications. Many of the industry's leaders follow this pattern, which is why technology centers tend to cluster around strong military centers, whether in Tel Aviv, Silicon Valley, or Las Vegas, where the military tests out new kinds of airplanes and continues Furness' early work today. We visited Nellis Air Force Base, where we met several pilots who fly F-35s. Those pilots use a big, expensive Augmented Reality headset. One of them said something that stuck: "I'll never lose to an F-16. Why? Because I can see you and you can't see me, and I can stop." The F-35 not only has those magic headsets, which let pilots see the airspace in great detail, but is also designed to be stealthy against radar, so it can't be detected by other planes. It is equipped with engines that can direct the flow from the jets so the plane can literally stop in mid-air, which an F-16 can't do.

It isn't lost on us that there have recently been reports that newer F-35s may get rid of the pilot altogether, and that AI pilots usually beat humans because the computer can perform maneuvers that humans just can't handle due to the forces involved.

War handled by machines is a controversial thing, to be certain, and one we won't take sides on here, but it means fewer pilots coming home in a box due to accidents or being shot down, and the military sees that as just as good a thing as seeing fewer deaths on the road. No matter which way you look at these new technologies and their role, you can't deny that most of them were first designed with a military use in mind.

Beginning

In this chapter, we have shone a light on just a portion of the stories of innovators and the innovations they are bringing to market, all of which we will be using by 2030, and most likely by 2025. We've given you an initial indication of how versatile and useful Spatial Computing is―for farmers, fly fishermen, and businesspeople who use data analyses, as well as people who ride in Teslas and, soon, autonomous vehicles. Technologies such as Machine Learning and 5G will further strengthen the reach of Spatial Computing.

We've been telling people that we are just at the beginning of a new 50-year product cycle, one that brings a new kind of personal computing which many have said will lead into the Singularity―a deep merging of humans with machines. Already, researchers in R&D labs are showing a new kind of computing in which you will just "think" of doing something, like sending a message to a loved one; the message will appear in mid-air and you'll send it simply by thinking "send."

The road to Spatial Computing has been a very long one―one that humans have been on ever since they started making tools that could substantially improve their chances of survival and their quality of life. Where we stand now is in an iterative stage that will yield significant gains. It is as if we are back in 1976 and Wozniak and Jobs just showed us the prototypes of the Apple II. We are still seeing improvements to personal computers 40 years later; however, this new type of computing will be a major leap.

It appears that the closer a Spatial Computing experience is to real life, the more satisfaction a viewer derives from it, both from practical and entertainment perspectives. That utility makes sense given the tool nature of Spatial Computing. How, and how far, humans will take Spatial Computing is very much the reason for this book. We are approaching it from an enterprise perspective, focusing on the particulars of industry vertical use, although our discussion of particular technologies will make it clear that there is massive potential for wide consumer use of Spatial Computing.

For the rest of this book, we will detail and discuss what we think will happen in Spatial Computing within the next five years, give some major indications of what could happen in 10, talk about what it could mean for humanity in 25 years, focusing on culture and society, and give some far-reaching comments on what we might see in 50 years' time. We will look at seven industries that are about to undergo massive changes and radical disruptions, and, to understand why, we'll dive into the technology driving this perfect storm of change and take a look at some of the changes coming to each of them.
