11

Spatial Computing World-Makers

In our previous chapters, we covered seven verticals that will be impacted by Spatial Computing. Here, we introduce seven people who we feel will be especially instrumental in making Spatial Computing technologies successful and profitable. Let's meet Raffaella Camera, Dr. Brennan Spiegel, Rob Coneybeer, Ken Bretschneider, John Borthwick, Hugo Swart, and Sebastian Thrun.

The Market Illuminator

Raffaella Camera, Managing Director and Global Head of Innovation and Market Strategy, XR at Accenture Interactive

"Product placement and assortment in a store and on a shelf isn't just guesswork," says Raffaella Camera, Managing Director, Global Head of Innovation and Market Strategy, XR (Extended Reality) at Accenture Interactive.

She sees radical changes coming to retail due to Spatial Computing and, with her team, is already saving consumer products brands and retailers billions of dollars through her research. She is researching eye-tracking, which not only helps in finding better places for products to sit on the shelf, but would also aid a new kind of retail worker, one who has data on their side and uses new Spatial Computing technology to reduce costs. Camera noticed that the old way of designing stores and testing them out was very expensive.

A lot of older consulting work required building prototype stores, often in a warehouse somewhere. We visited one such prototype store that Sephora had set up in a San Francisco warehouse. She saw that this approach was expensive because it required executives from each brand to travel to a physical location to give feedback on layouts or to do testing with consumers. Instead, she pushed her clients to move to Virtual Reality headsets, which let both customers and all the stakeholders learn more, give earlier input on store layouts, and reduce costs. She told us that this work in VR led to major new insights that they wouldn't have gotten otherwise.

"We were able to prove that there was a high level of correlation between traditional testing and VR," she says, which is a huge return on investment (ROI) of these new technologies. This means that executives at brands can start to trust that results reached by studying customers in VR headsets will match real-world results.

The ROI gets better the more technology a store uses, Camera said her findings showed. She said that as stores build bigger libraries of 3D products and store layouts, they will see additional cost savings because they can redesign stores using existing libraries instead of having to spend heavily on scanning new displays or products.

Her biggest breakthrough, though, may be in the use of next-generation eye-tracking. She says that this field is bringing rafts of new data about how people shop. She is now able to get detailed data about what customers in research situations looked at, what they looked at prior to that, what action they were taking while they were looking at something, and whether they were picking up and turning the product around. How long did that last? Did they put the product in the cart? How did that relate to the products they looked at before? What she was testing wasn't the typical heat map that shows where people are looking, or hanging out, in a store. "We were inundated with data," she said, explaining that the data was giving her insights that were impossible to gather at scale before.
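
To make the shape of that data concrete, here is a minimal sketch, in Python, of what one record in such an eye-tracking event stream might look like and how it could be aggregated. The field names and structure are our own illustration, not Accenture's actual schema.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical record for one gaze event in a VR shopping study.
# Field names are illustrative, not Accenture's actual schema.
@dataclass
class GazeEvent:
    shopper_id: str
    product_id: str
    dwell_seconds: float   # how long the shopper looked at the product
    previous_product: str  # what they looked at immediately before
    picked_up: bool        # did they pick up and turn the product around?
    added_to_cart: bool    # did the product end up in the cart?

def dwell_by_product(events):
    """Sum total gaze time per product across all shoppers."""
    totals = defaultdict(float)
    for e in events:
        totals[e.product_id] += e.dwell_seconds
    return dict(totals)

events = [
    GazeEvent("s1", "poptarts_bites", 4.2, "poptarts_classic", True, True),
    GazeEvent("s1", "poptarts_classic", 1.5, "entry", False, False),
]
print(dwell_by_product(events))  # {'poptarts_bites': 4.2, 'poptarts_classic': 1.5}
```

Even this toy aggregation hints at why the raw event stream is so much richer than a heat map: every dwell can be tied back to the actions and products that preceded it.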

This data is leading to changes in everything from product packaging design to how products are grouped together on store shelves.

"We wanted to use technology, and specifically VR, or Virtual Reality, to reinvent how brands gather consumer data and perform research, allowing them to do it faster, more affordably, and at scale," she says, while talking about work she and her team at Accenture did for Kellogg's as it launched a new Pop-Tarts brand. Using that data, they discovered that it was more effective to place the new Pop-Tarts Bites product on lower shelves than on upper shelves. At scale, this could lead to millions of dollars of profit and reduced costs as inventory sits on store shelves for less time.

When we talked with Camera, she walked us through a number of new retail innovations that will soon come to shoppers, including AI that knows where every product is in stores. She sees a world where glasses will navigate shoppers directly to products in the most efficient way possible, but also AR visualizations that make shopping more experiential and fun, all while feeding data back to the retailer and to the brand owners.

She is one of the few, though, who are working with retail brands, helping them think through and design for the changes that are coming in the home as we get radical "virtual stores" that let us walk through virtual showrooms with both AR and VR headsets. Distribution changes are coming with robots and other autonomous vehicles that will make product delivery faster, more consistent, and cheaper, which will also further encourage more people to shop electronically from home. The systems she is working on for the physical stores will someday help robots find and pack products in stores and warehouses too.

When you talk with her, though, her real passion is understanding consumer shopping behavior, and she's most interested in getting more shoppers to use the eye sensors that will soon be included in lots of Spatial Computing wearable glasses. These eye sensors will enable her to understand even better how consumer beliefs and behaviors are changing and how packaging should change to grab consumers appropriately. She sees huge shifts coming as brands become able to "augment" their products too, adding new digital information to packages that will jump onto your glasses as your eyes pass over them, with new kinds of promotional tie-ins and educational experiences, all within a new feedback loop of eye-sensor-tracked consumers giving new data to retailers and brands.

Her work is gaining industry attention, and the project she worked on with Kellogg's won a Lumiere Award for best use of VR for merchandising and retail.

The Pain Reducer

Dr. Brennan Spiegel, Director of Health Services Research at Cedars-Sinai Health System and Professor of Medicine and Public Health at UCLA

Dr. Brennan Spiegel is a true believer that Virtual Reality and 360 Video can be used to manage pain in hospitalized patients. Cognitive behavioral therapy and mindfulness meditation have long been used to raise pain thresholds, along with, of course, opioids.

His belief is backed up by two formal studies that he led at the Cedars-Sinai Medical Center, one concluding in 2019 and another in 2017. In both, participants treated with 360 Video reported significantly greater pain relief than those receiving conventional pain therapy alone.

For the 2019 study, the 61 participants in the experimental group, out of a total test group of 120, used Samsung Gear VR headsets and software from AppliedVR, a company that produces VR and 360 Video experiences made specifically for medical use. They viewed experiences from a library of 21 options three times a day for 10 minutes, with additional viewings as needed during pain episodes.

Participants' pain was at least a level three on a scale of zero to 10 when the treatment began. With the 360 Video treatment, participants reporting pain at seven or above on the pain scale saw an average reduction of three levels. And for every additional 10 years of patient age, an average further reduction of 0.6 on the pain scale was observed. So, the more pain a participant felt and the older they were, the better the treatment worked.
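
As a toy illustration of how those two reported effects might combine, here is a short Python sketch. The study reports the effects separately; we are assuming, purely for illustration, that they add linearly, and the reference age is an arbitrary choice of ours.

```python
def estimated_pain_reduction(baseline_pain, age_years, reference_age=40):
    """Toy estimate of pain-scale reduction from 360 Video therapy.

    Assumes, for illustration only, that the study's two reported
    effects combine additively: a 3-level average reduction for severe
    pain (7 or above), plus 0.6 levels per decade of age above an
    arbitrary reference age.
    """
    base = 3.0 if baseline_pain >= 7 else 0.0  # severe-pain effect only
    age_bonus = 0.6 * (age_years - reference_age) / 10
    return base + max(age_bonus, 0)

# Under these assumptions, a 70-year-old reporting pain of 8 would be
# estimated at 3.0 + 0.6 * 3 = 4.8 levels of relief.
print(estimated_pain_reduction(8, 70))  # 4.8
```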

Opioids were still administered, along with the viewing of 360 Videos. Dr. Spiegel believes that "a future study may help answer if (VR) could be an alternative (to opioids)." He hopes that VR can be added not only to treatment with opioids but to any pain management program.

Additionally, in the future, the inclusion of lightly interactive VR experiences alongside 360 Video is thought to possibly produce an even better result, since interactivity could further engage patients, making them lose track of time more completely and taking their focus off the pain.

According to Dr. Spiegel, a more formal way of looking at the mechanism by which VR and 360 Video could reduce pain has to do with "the so-called gate theory of pain...if our mind, our prefrontal cortex is being overwhelmed by pain, what we have to do is introduce another stimulus that is even more compelling." This other stimulus should be of the visual kind, since "the visual cortex accounts for far more than fifty percent of sensory cortex." Dr. Spiegel views this visual stimulus to reduce pain as "a photonic Trojan Horse."

Bolstered by these results, Dr. Spiegel and his team have developed the largest and most widely documented medical VR program in the world at Cedars-Sinai. About 20 percent of Dr. Spiegel's work time is spent on this medical VR program. Dr. Spiegel has also founded the first international symposium dedicated to medical VR (www.virtualmedicine.health) and written a book coming out in October 2020, VRx: How Virtual Therapeutics Will Revolutionize Medicine.

Dr. Spiegel thinks that in the future VR and 360 Video therapy could be routinely used for cases of Alzheimer's disease and other kinds of dementia, schizophrenia, anorexia nervosa, and compulsive overeating.

For Alzheimer's disease and other kinds of dementia, as well as for anorexia nervosa and compulsive overeating, the mechanism that could allow VR and 360 Video therapy to work is related to reinforcement. That is, with these conditions there is a loss of connection with the patient's inner body―the mind and body have become disconnected. The VR and 360 Video experiences would be specially tailored with content that would be re-watched until the patient feels some kind of reconnection and relief.

For schizophrenia, according to Dr. Spiegel, the way VR therapy could routinely be used has to do with calibrating the alien voice or voices in a patient's head with their own voice in VR until the patient realizes that they can control the alien voice or voices.

"So when they get into Virtual Reality, when they see and hear their 'demons,' the entities are using their voice. And it's actually voiced by the therapist who sits in another room—through a voice that's been put through the computer to sound like their voice. And they can do this over time. Demonstrate to that patient that they can control their own voice, but they have to do it over 12 weeks or so. And eventually they gain providence over that voice. But the voice doesn't go away. The voice becomes a companion, somebody who's actually maybe there to help them out."

In the future, within the next five-to-ten years, Dr. Spiegel believes that there will be a new kind of clinician that he calls the "virtualist." This person will be one who is trained to use VR and 360 Video therapies and technologies within the context of clinical medicine and clinical psychiatry. In this way, the healthcare field will be able to more readily embrace and utilize Spatial Computing for patient wellbeing.

The Investing Visionary

Rob Coneybeer, Managing Director and Co-founder at Shasta Ventures

Rob Coneybeer of Shasta Ventures, an early-stage venture capital firm, has long seen the advantages of building machines with smarts in them. He was one of the first investors in Nest, which made a home thermostat connected to a cloud computing service. That connection made homes more efficient and let owners control them much more easily, even turning on the heat or air conditioning from a mobile phone before they got home.

Coneybeer, who has a background in aerospace engineering and is a race car enthusiast, co-founded Shasta Ventures with partners Ravi Mohan and Tod Francis. Shasta currently has more than $1 billion under management, with two-thirds of its portfolio enterprise-related and the rest consumer-related.

Among other investments, Coneybeer has invested in Fetch Robotics, a company that develops robots for logistics and other markets; Airspace Systems, currently the only drone security solution capable of identifying, tracking, and autonomously removing rogue drones from the sky; Starship Technologies, a company developing small self-driving robotic delivery vehicles; and Elroy Air, makers of hybrid-electric autonomous vertical take-off and landing (VTOL) aircraft for cargo transport.

These four Spatial Computing investments were spotted early by Coneybeer. Shasta invested in Fetch Robotics' Series A in 2015 and has continued to invest through its Series B and Series C, the latest round. Similarly, Shasta invested in Airspace Systems' Seed Round in 2016 as well as its Series A (the latest round), in Starship Technologies' Seed Round in 2017 and its Series A (the latest round), and in Elroy Air's Seed Round in 2017 (still at the Seed stage).

Coneybeer says that, even five years ago, there were very few companies offering robotics for commercial applications. LIDAR and other sensors were just not yet powerful or cheap enough to be put into smaller robots. Current companies like Shasta-invested Fetch Robotics and Starship Technologies, though, can take advantage of sensors that enable robots to be tele-operated―that is, supervised from another location by human operators when needed. The software for that is not easy to write, but it is very useful for the five-to-ten percent of the time when human intervention is needed. Covering that differential could make the difference between whether a company survives or not.

"When you have a system like that, what's beneficial about it is you start to be able to gather statistics and you know that if out of your ten robots on a daily basis there's a path that they use over and over and over again, you can see that you got a problem that you need a human to solve―then you can go ahead and have basically like a scoring algorithm to figure out which problems you solved and then be able to turn them into fully autonomous situations so that the robot can go through and solve those problems on your own because they're encountering them over and over and over again," Coneybeer says.

Coneybeer is most excited about this stage, where the interplay between humans and robots is crucial. It is during this time, over the next five-to-ten years, that neural nets and other algorithms will become more finely tuned as robot systems experience both repetitive and fresh tasks. Companies that take advantage of the need for humans during this time, Coneybeer feels, will be the winners.

An example of a technology that's currently being developed where human interplay is a core need is off-road four-wheel-drive autonomous driving, with a team of researchers at the University of Washington leading the effort. Coneybeer adds that some of the best places to find future Spatial Computing technologies and teams are in the research areas at universities such as Carnegie Mellon University and Stanford University, as well as the University of Washington.

Further out, in about ten-to-fifteen years, Coneybeer feels, is when we will really be able to get robotic locomotion and manipulation so fine-tuned that "you could have a household robot that can do things like go upstairs, open doors in your home, and pick things out of the refrigerator and then bring them to you." This kind of robotic dexterity in locomotion and grasping will also be very useful in enterprise situations, such as logistics and manufacturing.

"Advances in perception, driven by Deep Learning, Machine Vision, and inexpensive, high-performance cameras allow robots to safely navigate the real world, escape the manufacturing cages, and closely interact with humans," Coneybeer concludes.

The Immersive Genius

Ken Bretschneider, Founder and CEO of Evermore Park and The Grid, and Co-founder of The Void

In 2012, Ken Bretschneider sold DigiCert, an encryption security company that had grown to more than 1,000 employees globally. He was then on a mission to develop and open his immersive outdoor 11-acre space, Evermore Park. Before doing that, though, he co-founded the pioneering VR location-based company The Void, short for "The Vision of Infinite Dimensions," in 2014. Evermore Park opened in 2018. In 2019, Bretschneider opened The Grid, an immersive destination and dining space located very close to The Void's location in Pleasant Grove, Utah.

Effectively, both The Void and The Grid are more technically sophisticated extensions of what Evermore Park offers—Virtual Reality is core to their business model, while Evermore Park focuses on live actors and has no rides, but offers three different "seasons." The first, called Lore, is geared toward Halloween; the second, Aurora, has a Christmas theme; and the third, Mythos, is filled with magic and dragons.

Regarding the origins of Evermore Park, Bretschneider has said, "It started when I was a little five-year-old kid. I grew up in a really, really bad home situation where my father was very abusive, so it's not all a happy story. But I had (a) wonderful situation happen."

"That was so important for me as a kid―I needed escapism. I had to get out of that environment. It left such a huge impact on my entire life that I kept being drawn to this idea of imagination and creativity, and how it's so important for children and adults alike to be able to explore with their imagination and have an escape for a moment, to do something that's not part of the everyday grind."

Bretschneider had to pause work on Evermore Park for a few years because of its massive monetary and infrastructure development needs. Forming The Void let him put his artistic and technical abilities to work in the meantime. A few years after its formation, the Walt Disney Company accepted The Void into its 2017 Disney Accelerator program. The initial investment of a little more than $100,000, which was added to Bretschneider's own investment in the company, was not the main attraction―it was the support and Hollywood relationships that would prove useful, along with Disney's further investments in The Void at later dates.

The Void's first VR location-based experience, Ghostbusters: Dimension, was commissioned for the new Ghostbusters movie. It was released in 2016 and had a run at Madame Tussauds' New York location in Times Square. The actual interactive VR experience ran for only about 10 minutes and accommodated four players at a time. The VR headsets for The Void were custom made, along with the batteries needed to power them, a haptic vest, and a computer backpack. The Void was the first company to pull off a successful VR location-based experience.

Sometime in 2017, Bretschneider decided that he wanted more of his artistic freedom back, and he distanced himself from the increasingly Disney-operated The Void. He threw himself into getting Evermore Park up and running, which he did in 2018. Almost simultaneously, he started work on The Grid, a 100,000-square-foot "experience center, an electronic playground" featuring the second-largest indoor kart race track in the country, with vehicles capable of going 60 mph, and proprietary VR experiences coming soon―in addition to 4,000 square feet dedicated to The Void experiences and the One Up Restaurant and Lounge on a mezzanine level.

According to Bretschneider, other locations for The Grid, including Chicago, Houston, and Seattle, are already on the radar.

Back in 2017, Bretschneider was the first person who spoke to us about using Machine Learning to develop virtual beings for use in VR location-based experiences. He continues to amaze us.

The Synthetics Craftsman

John Borthwick, CEO of Betaworks

John Borthwick sees "synthetic" in the future of Spatial Computing. He's an investor who sees a huge opportunity in using AI to create the things we see and interact with. His name for this is "Synthetic Reality" for realities that include virtual beings walking around, and "Synthetic Media" for media, like music or videos, created by AI with minimal human help.

Borthwick has been writing $200,000 checks to a variety of companies that fit into this new form of AI-driven media creation and manipulation that he calls "synthetics." He told us he sees synthetics as a new world of virtual beings and other media, all created by Machine Learning. Think of virtual humans walking around you, either in a totally virtual world or on top of the real world in Augmented Reality, or of human-looking performers that play music and could bring you radical new art and video styles that weren't possible with human-created media alone.

One of his investments, Auxuman, showed us "Yona," a Machine Learning-driven singer-songwriter. Yona makes records in collaboration with human producers, performs live, and appears on social media. On screen, Yona looks like a human singing, but she is anything but human. Every note she sings or has playing in the background, and every action she makes, is created by a series of Machine Learning algorithms running underneath her. In other words, she is closer to a character you might see in a video game, except she has much more talent than a merely scripted one.

Borthwick has great interest in this new field, which includes Lucy, an animated child who interacts with you in VR, built by San Francisco's Fable Studio, and Mica, a humanoid figure that plays games with you inside a Magic Leap headset. Lucy has won a ton of awards at film festivals, and an experience that featured her won a Primetime Emmy for "Outstanding Innovation in Interactive Media." Magic Leap's Mica was first demonstrated at the 2019 Game Developers Conference (GDC), where she sat in a real chair across from humans who waited in line to meet her and then proceeded to play a puzzle game with the GDC attendees. Magic Leap was careful not to pitch Mica as an assistant, so as not to be construed as a Siri or Alexa competitor. Much would be needed if that were the objective, including some sophisticated Machine Learning tools. As of now, she doesn't even speak.

Borthwick doesn't see these other forms of synthetic characters as competition, but as the beginnings of a new art form that enables many new companies and entrepreneurs. He notes that while some music stars, like the Beatles, were hugely popular, it didn't keep many other performers from getting very rich on stage, and he sees this new world evolving like music, with vast opportunities for entrepreneurs to introduce new capabilities to users, from assistants to entertainers, and sometimes both in the same form.

Your first impression of Yona might cause you to write her off as not being able to compete with humans. We agree she won't win any singing competitions like TV's The Voice. Yet. Borthwick takes these criticisms in his stride, and the way that he gently helps us see that these are the future is one of the reasons he's won so many fans.

He says that the point of these early attempts at building synthetic beings isn't to compete with humans, but to make our coming virtual worlds more fun, or open up new opportunities that humans can't be present in. He noted that maybe soon there would be singing competitions with these synthetics and no humans. Since some virtual influencers and performers have already built huge audiences, including some in real life, we understand his point.

Borthwick sees a variety of business models potentially driving his investments, everything from licensing to corporations, to presenting synthetics as virtual add-ons in experiences in Spatial Computing glasses, to advertising. He sees lots of business models that most may miss, too. He notes that many retail stores and hotels pay music licensing fees for the "muzak" they play in the background. Machine Learning algorithms, like the ones that run underneath Yona, could generate muzak to be played in shopping malls, elevators, and other populated spaces.

Yona isn't alone in the synthetic stable of start-ups he's invested in. There's Comixify, which uses Machine Learning to turn videos or photos into comics; Deeptrace, which combats AI-synthesized media, known as "deepfakes"; Radical, an AI-powered 3D animation solution that requires no hardware or coding knowledge; and Resemble, which uses Deep Learning to create much better speech synthesis than current technologies; along with a few other companies.

Borthwick's start-ups are located inside the multi-story Betaworks offices that he runs in New York's Meatpacking District. He has long built start-ups this way, or guided them, having funded and incubated tons, including GIPHY and Bitly (GIPHY is a search engine for cute graphics that's used by hundreds of millions of people a month, and Bitly is a URL shortener used by many bloggers and social media fans). GIPHY was, as of 2019, still located on one of Betaworks' floors upstairs.

Walking through his offices, which now include an "open to entrepreneurs" club that he calls Betaworks Studios, you will meet quite a few of the start-ups Borthwick's firm has invested in. They get cheap rent here, and even if it weren't cheap, it's the place to be in New York because of the Studios part of Betaworks, a separate business that is a club where entrepreneurs from around the region are welcome to hang out. Here, Borthwick and his team throw many business and technical events every week. It is this openness that has put him at the center of New York's growing entrepreneurial community and given him the insights, not to mention the capital, to bring a range of new capabilities to Spatial Computing.

The Chip Innovator

Hugo Swart, VP and GM of XR (VR and AR) at Qualcomm

Nearly every smartphone and, soon, every Spatial Computing device will have Qualcomm technology inside. The company isn't well known by consumers, yet it makes many of the chips inside the phones we use, which gives it a hugely influential role in the world. Microsoft HoloLens and Nreal headsets use Qualcomm's chips, which are also inside the Oculus Quest VR headset, among others. Computation, graphics, AI, and wireless chips are its specialty. The company is widely seen as the best example of a "mobile-first" company, having started by building communications devices for trucking fleets. Its competitors, such as Intel and AMD, started by building processors for desktop computers that didn't need to be as small, light, or power-efficient as those in mobile devices, and that background has given Qualcomm a unique role in the industry and thrust one of its employees into a very public role: Hugo Swart. He runs the AR and VR efforts for the company, and he's frequently seen keynoting industry conferences, explaining its strategy.

In our discussions with Swart, he laid out what makes Qualcomm special: it has been evolving chips for mobile devices since literally the first day it opened. Its chips, he claims, use less power than competitors' and fit more flexibly into the small form factors needed for devices worn on people's faces.

In 2019, Qualcomm came out of a legal fight with Apple as a huge winner―its 5G radios will be included inside Apple's 5G iPhones introduced in 2020 and beyond. (Apple tried to go with radios from Intel, hoping to pay far less for them, but Intel's technology just wasn't as good as Qualcomm's, and that forced Apple back to the negotiating table.) For that, Qualcomm will receive $9 per phone, and it's putting that money into new research and development, a good chunk of it on Spatial Computing devices, since Swart says that he and the company see Spatial Computing as the key use case for 5G. Swart is planning for a world where hundreds of millions of users wear Spatial Computing glasses to do all the things you are reading about in this book.

He is effusive as he explains how we will live and work in the future. "Soon we will get to the holy grail: a headset that does both Virtual and Augmented Reality," he says. In late 2019, he and Qualcomm laid out an aggressive strategy to bring exactly that to market before 2025, announcing a new chipset and reference design, the XR2 (a reference design means Qualcomm has prototypes to show other manufacturers how to build a product with the chipset inside).

The XR2, he says, will enable all sorts of new Spatial Computing devices for people to wear. Along the rim of such a device will be up to seven cameras―two to track your eyes; one to watch your mouth, which helps voice assistants and lets avatars match what your mouth is doing; and four to see the world, both to record it and to place Augmented Reality images accurately.

Hugo has a background in engineering and started out as a manager in technical marketing in his home country of Brazil, learning the ropes and understanding how regional mobile operators worked with Qualcomm to deploy its technology. That led to a similar role inside Qualcomm. Then he moved into IoT and consumer electronics roles, which led to his current, visible role leading the company's efforts.

This holistic view pays off because, with the XR2, Qualcomm works with different original equipment manufacturers (OEMs), like Microsoft or Nreal, to build its technology into the very different products those two companies are planning. He told us that some OEMs might only want a piece of its reference design, which includes separate chips for graphics, AI, processing, wireless, audio, and other tasks. His pitch, though, is that for best results you'll want to get all of them from Qualcomm. "When we integrate them all together, we can make the whole system run faster," he told us. What he wouldn't discuss is how that changes the bill of materials cost (known in the industry as BOM), but we expect that negotiations over the entire package go better than if you try to get one chip from Qualcomm, another from Intel, and yet another from somewhere else. We'll see how those negotiations go as new products with the XR2 chipset inside hit the market at the end of 2020 and beyond.

This vantage point that Swart and Qualcomm have brings the best partners in the door, too, most notably Niantic, which as of early 2020 is the number one Augmented Reality company and is behind the Pokémon Go and Harry Potter AR games. Niantic and Qualcomm have announced that they are working on a future headset together, presumably with Niantic's software platform, built on data collected from hundreds of millions of users, running on Qualcomm's chips inside the headset.

Qualcomm doesn't only do chips, though; it has sizable software teams to make those chips do useful things. For instance, the 835 chipset inside the Oculus Quest has software that lets the four cameras on the front of the device do "inside-out VR tracking," which means the headset doesn't need external sensors to work, as previous generations did. We first saw inside-out tracking inside Qualcomm's labs two years before Facebook introduced that device.

Qualcomm is also building chips for autonomous cars, drones, robots, and other IoT devices for factories and homes, including medical devices. Increasingly, Swart says, Spatial Computing will be the way all of those other devices are controlled.

If Swart's vision is correct (we believe it is, which is why we wrote this book), then who knows how far Swart will rise in stature in our industry over the next decade?

The Future Flyer

Sebastian Thrun, CEO of Kitty Hawk, and Co-founder and President of Udacity

If we had been talking to Sebastian Thrun 15 years ago, when he was starting his career at Carnegie Mellon and Stanford University, we might have written him off as a wild dreamer telling us about a future of autonomous cars; today, we can no longer dismiss him or his ideas.

Now he is talking to us about flying cars, which he's building as founder of a new startup, Kitty Hawk.

His dreams of the future go far beyond autonomous vehicles or flying cars, though. He sees huge changes to cities and how people live thanks to Spatial Computing, with its new user interfaces on top of both physical and virtual worlds. He sees us talking to something like Alexa or Siri both to order up new transportation and to be navigated toward our vehicles.

Back at the beginning of his career, he ran a small experimental team at Stanford University that went on to win the DARPA Grand Challenge, a race through the desert to see if someone could really build a self-driving vehicle. That got the attention of Google's founders, who convinced him and his team of other students doing AI research to join. Today, you see their work in Waymo's self-driving vehicle fleet, which just recently (2019) got approval to drive without humans in a few cities.

Thrun's dreams were born out of tragedy, though. When he was growing up in Germany, a car accident killed a childhood friend, and later another accident claimed the life of a coworker; both were avoidable accidents. He told us that's what drove him to develop autonomous cars.

Now that the idea of autonomous vehicles isn't so futuristic anymore, he's moved on to building other parts of the ecosystem. First, he started Udacity, in part to help finish off autonomous vehicle development: this online education service trains many of the developers who go on to work in the industry, completing his dream.

Now he is seeing a new problem with all of this―as transportation costs come down, we'll see new congestion near city centers. The effects aren't just economic, either. Already, autonomous vehicles are letting us do more while being driven around, giving us minutes and sometimes hours of our lives back to do other things.

Either way, we will see humans use transportation a lot more, and over longer distances, than ever before, causing more traffic on our roads.

This congestion can also be helped, he says, by technology acting as an efficient "traffic control" system, perhaps limiting the number of cars allowed to come into a city per hour. He told us that you might need to "buy a slot" on the freeway. Leaving at, say, 8:05 a.m. to come into a city like San Francisco might cost more than leaving at 10 a.m., or you might even need to win a lottery to be able to come in at that time at all.
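
As a rough sketch of how such slot pricing might work, consider the following. The numbers, the capacity figure, and the quadratic demand rule are our invention for illustration, not anything Thrun proposed.

```python
# Toy congestion pricing: the price of a freeway "slot" rises with
# forecast demand for that departure window. All numbers are invented.
BASE_PRICE = 2.00          # dollars for an off-peak slot
CAPACITY_PER_HOUR = 8000   # slots the city will sell per hour

def slot_price(forecast_demand):
    """Price a slot so that demand above capacity gets progressively
    more expensive, nudging drivers toward off-peak windows."""
    if forecast_demand <= CAPACITY_PER_HOUR:
        return BASE_PRICE
    overload = forecast_demand / CAPACITY_PER_HOUR
    return round(BASE_PRICE * overload ** 2, 2)

print(slot_price(6000))   # 2.0 -- a 10 a.m. departure, light demand
print(slot_price(16000))  # 8.0 -- the 8:05 a.m. rush, twice capacity
```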

Coming to the rescue, Thrun's company Kitty Hawk aims to fix this congestion problem with vehicles that fly overhead, using Spatial Computing technology in unique ways to help these electric flying vehicles navigate the skies safely without a pilot on board. He sees that airspace can support many more passengers than a freeway can, and it'll be cheaper. He mapped out a 100-mile flight for us from Stockton to San Francisco, which today takes two or more hours on the freeway but could be done in about 20 minutes and would cost less than driving a car that far.

Thrun notes that some of these vehicles will be so light and quiet that they will potentially open up new flight paths (they are much quieter than helicopters, so will be accepted closer to homes).

He isn't alone in the belief that autonomous vehicles might increase congestion. Elon Musk, CEO of Tesla, sees this problem coming, too, and has started another company, The Boring Company, to dig tunnels under cities like Los Angeles to enable drivers to get from one side of the city to the other within minutes.

Thrun sees that as impractical. Tunnels are expensive to dig, and you can't dig enough of them quickly enough and in enough directions to really solve the problem. So, he looks to the sky. He sees unlimited transportation real estate there in three dimensions.

The advantages of looking to the air are numerous, he told us. You can―using new Spatial Computing technologies―pack more flying things into "air highways" than ever before. These new vehicles are lighter, which means they are cheaper to operate, and they are quieter, so they will be more accepted over residential neighborhoods than, say, helicopters or private jets. Both of these together add up to a game changer, he says.

When we talked with him, we pressed him on some of the many details that will need to get worked out. Governments will need to write new laws. People will need to change their beliefs about whether they need a human in the cockpit and will need to trust electric motors to safely deliver them.

New systems will be needed to ensure that accidents don't happen, and that the new "highways in the air" are effectively used by thousands of flying vehicles. Put all that together and it might scare away most entrepreneurs as too big a challenge.

Thrun's visions aren't done, though. He sees a world where everyone is wearing Spatial Computing glasses that you talk to, gesture to with your hands, control with your eyes, or all of the above. He sees them providing a new user interface for talking to the transportation system: finding a scooter, or a car that can take you further, or even scheduling one of Kitty Hawk's electric flying vehicles. This dream also lets him revisit the actual design of cities. After all, we won't need as many parking spaces, because we'll have robots rolling around sidewalks bringing us things, and even meeting rooms will change due to these technologies.

It is visionaries like Thrun who are dreaming of this new world and then building the companies to bring these visions to life who will end up dramatically changing human life in ways that 15 years ago seemed incomprehensible. After all, can you imagine seeing hundreds, if not thousands, of autonomous flying vehicles in the sky? One day, we think this will be so.

Spatial Computing Paths

In this chapter, we have presented exemplary Spatial Computing industry people and discussed how we think they will be moving the industry forward in the future. From retail to transportation, it's clear that visionaries are foreseeing huge changes in almost all areas of our lives. With such changes to the very way we live, the ethics surrounding this technology are sure to be significant. In our next and final chapter, we provide guidance on particular ethical issues surrounding Spatial Computing, including issues relating to privacy and security, identity and ownership, and human social good.
