10

The Always Predicted World

Humans are predictable. If we go to church, we are there at the same time every weekend. We head to work at the same time. Kids have to be dropped off at the same time, to the same place. Even things that seem somewhat random, like going to grocery stores or for dinner at restaurants, have a pattern to them.

What is your next best action? Humans are pretty good at figuring that out, right? But computers can assist you even more by warning you of, say, an accident ahead on your route, or some other pattern you can't see ahead of time. "Ahead, there's a new pothole in the left lane."

Here, we'll explore the predictive capabilities that will come with Spatial Computing and how these technologies will change almost every part of our world, from transportation to education.

The Predictive Breakthrough

Predictability goes beyond knowing that if it's 8:45 a.m. on a Monday, you probably are headed to work. Vic Gundotra, a former Google executive, told us that Google realized that if you walked into a store, that act alone predicted you would end up buying something there. That might seem obvious, but it was a breakthrough in thinking through how to advertise to users. Kelley Blue Book, among others, used that learning to put car advertising in front of you once you walked onto a car lot, via your mobile phone's notifications or via an email based on your previous login to Kelley Blue Book, sometimes even convincing you to leave that lot to get a better deal elsewhere.

If you are visiting a Volvo lot after doing a bunch of searches on Kelley Blue Book, another car company might even offer you $50 to come to their dealership and take a test drive in an attempt to get you to leave the Volvo lot without making a purchase.

Glasses That Understand You

Those early uses of contextual data and predictive systems will soon seem pretty simple when compared to what's coming next. That's a world where many users will wear Spatial Computing glasses, with tons of cameras and sensors to watch what the wearer is looking at, talking about, and interacting with. This world will arrive at about the same time as autonomous vehicles and delivery robots start showing up on our city streets. Each of those will carry a ton of sensors as well, gathering data about the street and sharing it with a new type of cloud computing that can learn exponentially and deliver radical new services and even virtual assistants, all fed by massive amounts of data collected about the real world and everything and everybody interacting in it.

The software of Spatial Computing glasses will get a very good understanding of the context that a user is in. We call the engines that will watch all of this "exponential learning systems." After all, you will want to do different things on computers if you are shopping than if you are trying to work in an office, and both of those are quite different from what you'll want if you are in a restaurant on a date night.

These glasses we soon will wear will have three new systems: data collection, contextual deciders, and exponential learning systems that constantly use data and user context, along with other predictors, such as who you are with, with the aim of making your life better. These systems have been under design for at least 15 years, but they are now getting much more complex due to new technology, particularly in autonomous vehicles, whose AI does all sorts of things, including predicting whether a pedestrian at the intersection ahead will cross the road or not.
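
To make the idea of a "contextual decider" concrete, here is a minimal sketch in Python, assuming a simple rule-based mapping from a sensed context to the features the glasses surface. The context fields and feature names are illustrative placeholders, not descriptions of any real product.

    from dataclasses import dataclass

    @dataclass
    class Context:
        location_type: str      # e.g. "store", "office", "restaurant"
        companions: list        # who the wearer is with
        time_of_day: str        # e.g. "morning", "evening"

    def decide_features(ctx):
        """Return the assistant features to surface for the current context."""
        if ctx.location_type == "store":
            return ["price_compare", "shopping_list", "product_info"]
        if ctx.location_type == "office":
            return ["calendar", "documents", "do_not_disturb"]
        if ctx.location_type == "restaurant" and ctx.time_of_day == "evening":
            return ["silence_notifications"]   # date night: keep the display quiet
        return ["default_assistant"]

    print(decide_features(Context("store", ["alone"], "morning")))
    # -> ['price_compare', 'shopping_list', 'product_info']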

At Microsoft Research, back in 2005, researchers showed us how they built a system that could predict traffic patterns in Seattle 24 hours a day. It knew, for instance, that it was pretty likely that you would get stuck in traffic coming across the I-90 bridge during rush hour. This might seem pretty obvious to most humans, but it was an important breakthrough for computers to be able to predict traffic conditions before they happened. With Spatial Computing glasses, similar Artificial Intelligence-based systems might make thousands of similar predictions for their wearer. These predictions, and the assistance they will give, will change human life pretty significantly, and provide the ability to build new assistants that go even further than the human assistants that executives use today.

(Human) Assistance Not Required

To understand how these changes will occur, we talked with someone who is an experienced executive assistant. She doesn't want to be quoted or have her name used, but she was an executive assistant to Steve Jobs and still is an assistant to an executive at Facebook. Executive assistants do everything from getting other executives on the phone, to doing complicated travel planning, to something as simple as ordering lunch for the team. She told us stories about getting the CEO of Sony on the phone and helping to fix problems for Steve Jobs. In other words, she cleaned up messes, since he was famous for getting angry when something didn't go right. She has a deep Rolodex of other executives' cell phone numbers and their personal assistants' details that she could use to make such meetings and other things happen.

The thing is, her kind of job is quickly going away. The Wall Street Journal reported that more than 40 percent of executive assistant positions have disappeared since the year 2000. Why? Well, think about your own life. In the 1970s, travel, for instance, was so complex that you probably would have used a travel agent to book it. Today, mobile phones and services like Google Flights have made that job much simpler, so you can do travel planning yourself. Heck, you can even get a ride to the airport by simply using a mobile app, and you can see whether that ride is on its way, which wasn't true just a decade ago. The same is happening to all parts of the executive assistant's job; after all, now that you can say "Hey Siri, call Jane Smith," you probably don't need an assistant like our source to get someone on the phone.

Soon, virtual assistants in our glasses will have a lot more data to use to help us than just the number of steps you are taking or your heartbeat. We've seen AI-based systems that know how many calories you are consuming just by taking a few images of the food you are looking at, and Apple has patents for its future Spatial Computing glasses that monitor a number of different health factors, using your voice, heart rate, and other biometric data to predict how your workout is going. Apple also has a set of patents covering sensors that look into the human eye for a variety of ailments.

Apple isn't alone―Robert Adams, the founder of Global e-dentity™, showed us how his patent, which looks at your vascular and bone structure, could be used for a number of things, from verifying your identity to assessing your health. Future glasses will have cameras aimed at different parts of your face for just these kinds of things. Our virtual assistants will have a number of helpful features made possible through Spatial Computing, which could allow them to see to most, if not all, of our assistance needs.

Pervasive

Now, imagine the other data that such a system would have if you are almost permanently wearing a camera on your face―developers could build systems to play games with you about your eating habits that would say to you, "Hey, don't eat that donut, but if you do, we will remove 400 points from your health score and that will require you to eat 10 salads and go on two extra walks this week."

Already we are seeing Computer Vision apps, like Chooch, that are starting to identify every object you aim the camera at. Chooch uses the same kind of Computer Vision that a self-driving car uses to know it is seeing a stoplight, a stop sign, or another object on the road (while writing this chapter, Robert's Tesla started recognizing garbage cans, of all things).

If you use Amazon's app on a mobile phone, there's a camera feature that does something similar. Aim the camera at a Starbucks cup and it will show you things you can buy from Starbucks. Aim it at someone's black-collared shirt and it will show you other collared shirts. In the Spatial Computing world, you will see these capabilities really come to bear in new kinds of shopping experiences that can take a single 3D object, like a scan of a coffee table, and show you lots of other things that will fit with that table.

What if, though, it knew you needed a shirt before you even started looking for one? How would that work? Well, if you put a new job interview on your calendar, a predictive system might surmise that you will need a new suit to look spiffy for that interview. Or, if you learn you are pregnant, such a system might know that you soon will need a new car, new insurance, and a new home. In fact, such predictive systems already are running all over the place and are the reasons why after a big life event, we get a ton of junk mail advertising just these kinds of things. The New York Times reported that a system at Target knew a woman was pregnant before she told her family because she started changing the products she was buying and a computer system recognized the change in her patterns.

We can see these systems at work in tons of other places, too, including inside email systems, like Gmail, that start answering our emails with us and giving us predetermined answers that often are pretty darn accurate.

The world we care about, though, isn't the one of email, it's the new computing world that's 3D and on our face. Here, predictive systems will be watching our eyes, our hands, our movements, and watching everything we touch and consume. Now, don't get all Black Mirror and dystopian on us.

Truth is, the early devices won't do that much, due to the small batteries inside them and the sizeable privacy concerns. Over the next decade, though, these devices will do more and more, and, when joined with devices in our homes, like Amazon Echo or Google Home, they will start doing a large number of new things, including changing the way we go to the store, using what we call "automatic shopping." Heck, we are pretty close to that already: in our research, we visited Amazon and its Amazon Go store, where you can buy things just by picking them up and walking out with them. There were hundreds of sensors watching every move in that real-world store. Soon we'll have versions of that store virtually in our living rooms.

Proactive

Going back to the human assistant: if you really had an assistant who knew you well, especially one who lived with you, that assistant would know if you were running out of milk or cereal. Well, already, making a shopping list is easier than it used to be. In our homes, we just say "Alexa, add Cheerios to my shopping list" and it does. But within a few years, the glasses you are wearing will simply note that you have had your seventh bowl of Cheerios this month and that there are only two bowls left in the box in your kitchen, so they will just add another box to your shopping list. Yeah, you can take it off if it gets too aggressive, but we bet that most of the time it'll be pretty accurate. Add in new delivery robots and you'll see an almost total change in how we stock our kitchens.
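
Here is a minimal sketch of how that kind of "automatic shopping" logic might work, assuming the glasses can count servings consumed and estimate how many remain in the box. The item, thresholds, and delivery lead time are made up for illustration.

    def servings_left(package_servings, servings_eaten):
        return max(package_servings - servings_eaten, 0)

    def should_reorder(package_servings, servings_eaten,
                       servings_per_week, lead_time_weeks=1.0):
        """Reorder when what's left won't outlast the delivery lead time."""
        remaining = servings_left(package_servings, servings_eaten)
        weeks_of_supply = remaining / servings_per_week if servings_per_week else float("inf")
        return weeks_of_supply <= lead_time_weeks

    # Seventh bowl this month, two bowls left in the box, roughly two bowls a week:
    if should_reorder(package_servings=9, servings_eaten=7, servings_per_week=2.0):
        print("Adding Cheerios to the shopping list")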

Already, when we were writing this book, our Tesla started doing something similarly automatic: "automatic navigation." Instead of having to put an address into the car's navigation system for, say, heading to a doctor's appointment, the car now looks at your Google Calendar and then automatically navigates to your next appointment when you simply get into the car. Hey, it's pretty accurate. After all, you did add that appointment to your calendar. Where it wasn't accurate or didn't have enough data to work, it trained us to make sure our calendars were accurately kept up to date with the street addresses of appointments.
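
A rough sketch of that calendar-to-navigation idea follows, assuming access to a list of upcoming calendar events and some navigation call. The event format and the three-hour window are our own assumptions, not any carmaker's actual API.

    from datetime import datetime, timedelta

    def next_event(events, now):
        """Return the next upcoming event that has a street address attached."""
        upcoming = [e for e in events if e["start"] > now and e.get("address")]
        return min(upcoming, key=lambda e: e["start"]) if upcoming else None

    def on_driver_enters_car(events, now):
        event = next_event(events, now)
        if event and event["start"] - now < timedelta(hours=3):
            print(f"Navigating to {event['address']} for '{event['title']}'")
        else:
            print("No imminent appointment with an address; waiting for input")

    events = [{"title": "Doctor", "start": datetime(2030, 5, 1, 10, 0),
               "address": "123 Main St, Saratoga, CA"}]
    on_driver_enters_car(events, now=datetime(2030, 5, 1, 9, 15))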

This two-way feedback system proves very effective, and its accuracy increases our trust in, and love of, our car. Other owners have reported the same. One thought it was the most amazing new feature because it saved her from having to find addresses while driving, or from touching the screen while her nails were drying on the way to work.

That's one level of prediction―a system that just does pretty simple stuff. What will really change radically over the next decade is what happens when you hook up the exponential learning systems of Artificial Intelligence and Computer Vision to millions of cameras and sensors moving around the real world.

Connected

What do we mean by "exponential learning system"? Well, back in 1982, a fellow student walking home from band practice one night was hit in front of the school and died. The next year, a stoplight was erected in front of Prospect High School in Saratoga; it is still there 35 years later. That was a linear-learning system. It improved that one intersection. If you lived in, say, France, though, it didn't improve your life at all.

Now, compare that to how a June Oven gets its information and makes appropriate changes. The June Oven has a camera inside and an Nvidia card that lets it do Machine Learning. Why do you want an oven like that? Well, if you put a piece of fish into the June Oven, it says "salmon" and correctly cooks it without you needing to look up the recipe. What if, though, you put something into the oven it can't recognize, such as Persian kabobs? Well, then you can set it manually, which teaches the oven about something new. The thing is, a photo of that kabob, along with the temperature and time settings, is sent up to June's servers, where programmers can then add that to everyone's oven, which makes everyone's oven exponentially better.
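
Below is a minimal sketch of that fleet-learning loop, with hypothetical Cloud and Oven classes standing in for June's real system: when an oven can't recognize a food, the user's manual settings are uploaded, and a later update shares the new label with every oven.

    class Cloud:
        def __init__(self):
            self.labeled_examples = []      # photos plus user-entered settings

        def upload(self, photo, label, temperature, minutes):
            self.labeled_examples.append((photo, label, temperature, minutes))

        def retrain_and_publish(self, ovens):
            # In reality this would retrain a vision model; here we just copy
            # the new labels to every oven so the whole fleet recognizes them.
            for photo, label, temperature, minutes in self.labeled_examples:
                for oven in ovens:
                    oven.known_foods[label] = (temperature, minutes)

    class Oven:
        def __init__(self, cloud):
            self.cloud = cloud
            self.known_foods = {"salmon": (375, 18)}

        def cook(self, photo, label_guess):
            if label_guess in self.known_foods:
                temp, minutes = self.known_foods[label_guess]
                print(f"Cooking {label_guess} at {temp}F for {minutes} min")
            else:
                # Unknown food: the user sets it manually, which teaches the fleet.
                temp, minutes = 400, 12
                print("Unknown food; cooking with manual settings and uploading")
                self.cloud.upload(photo, label_guess, temp, minutes)

    cloud = Cloud()
    ovens = [Oven(cloud) for _ in range(3)]
    ovens[0].cook(photo=b"...", label_guess="persian kabob")   # unknown, uploaded
    cloud.retrain_and_publish(ovens)
    ovens[2].cook(photo=b"...", label_guess="persian kabob")   # now the whole fleet knows it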

It's the same for Tesla cars. When we slam on our brakes in our Tesla, it uploads the video from that part of our drive, along with all sensor readings. Programmers there can then work that into new Machine Learning training, improving the system for, say, understanding what a pedestrian looks like, and rolling the improvement out to the entire fleet on the next update. These new products with neural networks inside learn exponentially and get dramatically better after you purchase them, and they do it fleetwide. So, unlike the stoplight example, which only improved one intersection, the next time something goes wrong, the programmers will fix the program for the entire fleet. It gets better exponentially, and as more users get these products, the time to new features goes down exponentially as well. Within a decade, these exponential learning systems will dramatically change almost every part of our lives, particularly inside Spatial Computing glasses, which will have an exponential learning system built in.

Back to Spatial Computing glasses, then. When you first get them, they might not recognize much in the world, but if millions of people have them and teach them one new thing a week, that means your glasses will recognize millions of things within a few weeks, exponentially learning and getting better with a rate that will speed up as more people get the glasses.

What this means is that we will soon have virtual assistants and coaches who get smarter and smarter every week. Our virtual running coach might start out only understanding simple runs, but within a few weeks might be taking us on back-country trails that are poorly marked.

Our virtual nutrition coaches might start out only understanding major brands but soon will recognize unusual candies from far off places. Our shopping services might start out not knowing our favorite brands, but within a few weeks, will learn exactly what brands and colors make you happiest.

Our Spatial Computing glasses will bring this kind of new utility, along with some that might even save your life and take the fall detection feature of the 2019 Apple Watch family way further. Let's dig into just how deeply human life will change because of this data collection, contextual decision-making, and processing, and the exponential learning systems that Spatial Computing will bring.

Data Dance With Seven Verticals

We'll present here some visions of how the intersection of data collection with Machine Learning and 3D visualization will benefit the seven verticals that we have been addressing in this book: Transportation; Technology, Media, and Telecommunications (TMT); Manufacturing; Retail; Healthcare; Finance; and Education. Let's begin by getting into how the world of transportation is set to change due to the influx of data, and our increasing capacity to utilize and learn from it.

Transportation

When you need transportation, you'll see that tons will have changed by 2030. Your Spatial Computing glasses will radically change the way you approach transport. These changes will be bigger than the ones that Uber, Lyft, and Didi brought as they enabled people to order a ride on their new mobile phones. Uber is worth looking back at, because we believe transportation, even the vehicle that sits in your own driveway, will act and work a lot more like Uber than like the Chevy that was memorialized in the 1971 anthem, "American Pie."

When Travis Kalanick was starting Uber, he told us that his goal was to have a single car near you. Not none, because that would mean waiting, which wouldn't be a good customer experience. Not more than one, because that would mean drivers were sitting around not getting paid and getting angry.

This is accomplished by data collection, smart data analysis, and proprietary algorithmic software. It knows, contextually, where rides are needed. In fact, drivers are urged to move around the city, and sometimes further, in anticipation of new demand. We watched this organization at work at the Coachella music festival, where thousands of drivers from across the Western United States converged on Palm Springs, directed by this system that controlled transportation.

The transportation system of 2030 will be different, though, because most of the pioneers we talked with expect autonomous systems to be well underway. In fact, as we open the 2020s, Waymo is just starting to drive people around in these vehicles, now named "robotaxis." At least it is in a few cities, like Phoenix, Arizona, and Mountain View, California. We cover these robotaxis in depth in the transportation chapter elsewhere in the book, but it's worth explaining how the data that these systems are slurping up everywhere will be used, processed, and displayed, all in an attempt to predict what the transportation needs are, so that Kalanick's impulse of having a vehicle waiting nearby whenever you need it will come true, even without a driver involved.

Further, your glasses, and possibly Augmented Reality windshields, will see layers of other utilities, all built by the data being gathered. For instance, it's very possible that you will be able to see what the street looks like during the day from these glasses or windshields. The maps that are being built by autonomous cars are so detailed, and have so much data about what surrounds the car, that you might be presented with a "day mode" so that you can more easily see what surrounds the car.

The maps themselves, which the industry calls "HD maps," for "high definition," already go way beyond the maps most of us use in mobile apps, whether from Google or Apple. They include every stoplight, stop sign, tree surrounding the route, lane information, speed information, and much, much more.

The autonomous car system needs all that data to properly be able to navigate you from your home to, say, work―stopping correctly at intersections, accelerating to the right speed on freeways, and turning smoothly on mountain roads. All this is controlled by the HD maps underneath, usually invisible to the driver, although if you are in a recent Tesla model, you get a taste of these maps on the car's display, which, as we are writing, now shows stoplights and even things on the road like garbage cans. The data these maps are collecting, though―or rather, that the autonomous car groups, whether Cruise at General Motors, Waymo, or Tesla, are gathering―goes way beyond what you might think of as data for moving vehicles around.

Because they need to track literally everything on the road, their AI systems have been taught to recognize everything from a ball to a child riding a bike, and their systems predict the next likely action of each of these things. These systems have also learned how to recognize parking meters, parking spaces, curbs, poles, and other things that they might need to navigate into and out of.

Now, think about when Spatial Computing glasses and autonomous cars work together. The car system itself can watch how many people are waiting in line at, say, a nightclub. It can tell how busy a downtown district is, or whether a restaurant is either lighter or busier than usual.

At Microsoft, 15 years ago, they showed us how predictive systems could use huge amounts of data to predict just how busy a freeway would be at a certain time, long before you would leave on your journey. Today that kind of prediction seems pretty quaint, as Spatial Computing glasses and autonomous vehicle systems are gathering 10,000 times or more the data that those early AI-based systems gathered, and the amount of data is increasing exponentially every year.

So, what is possible as these systems mature and as more and more data is gathered, by fleets that grow from hundreds of thousands of AI-outfitted cars to millions, all assisted, like an ant farm, by millions of people gathering other data on wearable computers that are also mapping the world out in 3D?

You'll be able to see literally everything about the street in real time. Is there a wreck ahead? You'll see it within seconds of it happening, and potentially even be able to watch it happen in 3D on your glasses. Lots of other news will also be displayed on maps this way in the future, everything from bank robberies to fires. Already, lots of cars can display video captured from dashcams or built-in cameras; a Tesla, for instance, has AI that senses when humans are moving around the car in a way that they shouldn't, so it starts playing loud music and records them in an attempt to get them to move away from the car.

Systems like Waze have long warned of things like potholes or objects on the road ahead, but in a world where cars have 5G radios and can transmit huge amounts of 3D data, by 2030 you will be able to see changing road conditions sent to you by cars imaging those conditions ahead. The thing is, most of us around that time won't care about road conditions at all, except maybe in some extreme weather situations where sensors stop working, or the road can't properly be sensed. Even then, the autonomous car network might "hand over" control of your vehicle to an employee wearing an expensive Virtual Reality headset and they would remote control your car without the passenger even knowing about it―all thanks to the data systems that are gathering, processing, and sharing a 3D view of both the real world and a digital twin, or copy, of the real world.

Speaking of that digital twin, with your Spatial Computing glasses, you will be able to see a whole raft of new kinds of games, along with new kinds of utilities. Pass by a historical marker? Well, grab a virtual version of it with your hand and pull it closer so you can read the sign, all while your car zooms past. Want a hamburger? Tell your glasses what you want and they will satisfy it, not just with a "McDonald's is 2.5 miles away" notice, but with a full 3D menu of the food options, so you can have your order waiting for you when your car navigates you there, and you'll know exactly what to expect, whether it's a salad, a hamburger, or something more complex like a Vietnamese soup.

The kids, too, will have their own Spatial Computing glasses and will invite you to join in their game, where they are manipulating a metaverse, or virtual world, that you are driving through. Toyota, and others, have already demonstrated such Virtual and Augmented Reality games that are under development.

It is the predictions that are possible because of all the data being collected as you navigate, stop for food, go to church, and head to work that are really set to shake things up. Remember, a system that has hundreds of thousands of cars, each with eight cameras and a variety of other sensors, can "see" patterns humans just can't. The system can predict what's next, knowing that a baseball game is about to finish, for instance, and can reroute everyone within seconds. Humans just aren't that good at knowing things like that. These predictions, done on a mass scale, will save millions of people a few seconds a day, sometimes minutes. Add those all up and it's the same as saving a life's worth of time every day, and at a bigger scale this will truly revolutionize transportation.

Already, Tesla is giving a taste of how magical predictive systems are. In late 2019, Tesla cars started automatically navigating you to your next appointment, thanks to hooking up to your work and personal calendars, whether on Google or Microsoft systems.

These predictive databases and the AIs that are working on your behalf will make the transportation system itself much more efficient, safer, and more affordable. For instance, if you were to take a car on a long trip, these systems would know that long before you left, probably because you put something about the trip on your calendar, or told social media friends "Hey, we are going to New Orleans next weekend. Anyone want to meet up?" Or, even without any signals, it would figure out that you were on a long-range trip as soon as you crossed a geofence 30 miles out of town. All the way, the master in the sky would be able to route you the fastest way and even hook you up in a convoy to save everyone some money on electricity for your vehicles.
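
The geofence part of that is simple enough to sketch: compare the car's position against a home location and flag a long-range trip once it passes some radius. The coordinates and the 30-mile radius below are illustrative assumptions.

    from math import radians, sin, cos, asin, sqrt

    def miles_between(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in miles (haversine)."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 3956 * 2 * asin(sqrt(a))

    HOME = (37.26, -122.02)          # approximate home location
    GEOFENCE_MILES = 30

    def crossed_geofence(lat, lon):
        return miles_between(HOME[0], HOME[1], lat, lon) > GEOFENCE_MILES

    # A position more than 30 miles from home triggers trip planning:
    if crossed_geofence(36.95, -121.55):
        print("Long-range trip detected: planning charging stops and convoy options")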

Fleets like those owned by UPS are already starting to move toward being completely automated and electric. UPS, in January 2020, announced that it will buy 10,000 electric vans and install Waymo's autonomous driving package onto each (Waymo is the autonomous vehicle start-up spun out of Google's R&D labs, and was the first self-driving vehicle system we saw on Silicon Valley roads).

The predictive system taking care of you on trips, and preparing the car for you automatically, saves a few seconds of typing an address into a phone-based map or clicking around on a screen to get it to navigate, an act that can be very dangerous if you attempt it while driving, as many people do. It seems like something minor, but over the next decade, our glasses and car screens will add dozens, if not hundreds, of little features like these to make our lives easier.

After all, we are creatures of habit. Sam Liang, who used to work on the Google Maps team, told us just that. He now runs an AI speech recognition engine. He recognized years ago that the context in which humans are trying to do something matters if computing systems are to work better.

After all, if you are wearing your glasses while driving a car, you will expect a different set of functions to be available to you than, say, when you are attending church, and that will be different again from when you are watching a movie.

The cloud computing infrastructure that is watching your glasses, and all the cars moving around you, is about to make life itself much different than it was a decade ago when Uber seemed like a big idea. Let's discover how it'll work, not just when you are ordering a car, but in the technology you utilize, the media you consume, and in the communications systems you use.

Technology, Media, and Telecommunications (TMT)

Where and what you pay attention to is already getting a lot of attention itself. Teams of engineers at Netflix, YouTube, Hulu, Spotify, Disney, and other media and entertainment services have developed systems that track what media choices you make and where and what you watch, and they have already built extensive prediction systems on top of that. You can see these systems come to life when your kids use your accounts for the first time and all of a sudden you start seeing suggestions that you watch more SpongeBob.

In the Spatial Computing world, though, the data collected will go far beyond what movie, TV show, or website link you click on. There are seven cameras on the devices that Qualcomm's new XR2 chipset enables: four for looking at the world, one for watching your face for sentiment, and two for watching your eyes.

It is the ability to see user sentiment and watch where users are looking that will open up many new predictive capabilities that will change even the entertainment we see. Knowing you actually looked at a door in a game or interactive movie of the future, for instance, could put you down a different path than if you looked elsewhere. Thanks to the incredible download speeds that wireless will soon bring with 5G, the device you are wearing could load quite complex 3D models before you even open the door to look through.

Already, games like Minecraft Earth are loading assets based on where you are located, and if you walk down the street, it predicts you will continue walking down the street and loads characters and game pieces ahead of when you need to see them.
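
A minimal sketch of that kind of predictive loading follows: project the player's current heading forward and prefetch the content tiles along that path before they are needed. The tile size, time horizon, and function names are assumptions for illustration, not any game's actual engine.

    TILE_METERS = 25

    def tile_for(x, y):
        return (int(x // TILE_METERS), int(y // TILE_METERS))

    def tiles_to_prefetch(x, y, vx, vy, horizon_s=30, step_s=5):
        """Project the current velocity forward and return tiles along that path."""
        tiles = set()
        for t in range(0, horizon_s + 1, step_s):
            tiles.add(tile_for(x + vx * t, y + vy * t))
        return tiles

    loaded = set()

    def update(x, y, vx, vy):
        for tile in sorted(tiles_to_prefetch(x, y, vx, vy)):
            if tile not in loaded:
                loaded.add(tile)
                print(f"Prefetching assets for tile {tile}")

    update(x=10.0, y=10.0, vx=0.0, vy=1.4)   # walking north at about 1.4 m/s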

Microsoft's Flight Simulator has worked this way for more than a decade. As you fly your virtual plane, the system loads scenery and city details ahead of you so that everything is sharp and doesn't stutter when you fly over, say, New York. Entertainment makers are planning out new kinds of entertainment that let you choose a variety of paths through the entertainment. Netflix already has shown how this will work with its TV project "Black Mirror: Bandersnatch." There, you can make choices for the characters that shape the story as you go. In Spatial Computing glasses, touching things, looking at other things, and moving around your real-world living room might impel movies and games you are enjoying to show you different story paths.

Lots of pioneers in the industry refer to an off-Broadway New York play, "Sleep No More," which is a new version of Shakespeare's Macbeth, except it isn't performed on a stage. Audience members wander through a huge warehouse, with the action happening all around them as they walk through the set.

Innovators like Edward Saatchi, who is the cofounder and CEO of Fable Studio, see this as foundational to the future of entertainment. Soon, he says, audiences will be walking through virtual sets, similar to those in "Sleep No More," and will be interacting with virtual beings that can talk with you, or even play games with you, amongst other interactions. Fable's "Wolves in the Walls" is an example of just this, introducing a virtual character, Lucy, who can do exactly that. This Emmy-winning project has Lucy talking with you and interacting with you, and is the first example of a virtual being we've seen that makes you feel like you are inside a movie.

Others are coming, says investor John Borthwick, who has invested in a variety of synthetic music and virtual being technologies. He sees the development of these new beings as a new kind of human-machine interface that over the next decade will become very powerful, bringing everything from new kinds of assistants, like a Siri or Alexa that we can see and almost touch, to new kinds of musical and acting performers for entertainment purposes. We spent a day at his Betaworks Studio, where we saw early work on an AI-based musical performer. Everything she sang and performed was developed by AI running on a computer. It is still pretty rough, but on other projects, Borthwick showed us AI that is good enough to automatically generate elevator "muzak." You know muzak as the crappy background music in elevators and other public spaces. If an AI can generate that, it'll decrease the cost that hotels and shopping malls have to pay for public versions of that music.

Virtual beings like Lil Miquela are entertaining millions of people, though that version of a virtual being is an impoverished preview of what the future will bring. Soon, in glasses, we will see volumetrically developed characters who entertain you.

It's hard to see how this will happen, because Lil Miquela isn't very good, isn't truly interactive, and isn't run by an AI-based system; rather, she's more like a puppet with humans pulling her strings. Still, she has caught the attention of marketers and others who use her (or it, since she exists only in virtual space) as an example of the advantages virtual beings bring to marketers and entertainment companies. She'll never get arrested, or die in an accident, and she won't complain about poor contracts or having to work around the clock. Plus, marketers can make sure she'll never damage their brands, or cause them to get woken up early on a Sunday morning because she said something disturbing on Twitter in a drunken rage.

These virtual beings will be programmed to interact with you, asking you questions, presenting you with choices, and constantly feeding an exponential learning machine running behind the scenes. "Would you like to listen to Taylor Swift or Elton John?" she might ask one day. The answers you give will change every interaction you have from then on, and you might not even know, or care, just how deeply your experience has been manipulated, or how much a company behind the scenes is learning about your preferences and passions.

Give an exponential learning machine enough of these interactions and it quite possibly could figure out that you are a conservative Republican, or that you will soon be a full-blown country music fan, or even that you are of a particular sexual orientation―even if you've not yet been entirely honest with yourself about it. Each of these insights can be used to feed you more of the things that interest you, but there's a dystopian fear that the system could, even by accident, divide you from other humans and radicalize you into having potentially negative beliefs. We'll discuss that more in Chapter 12, How Human?

For now, know that Spatial Computing glasses are arriving at the same time as extremely high-speed wireless networking and Artificial Intelligence that can drive cars and predict the future moves of hundreds of people, and animals, around you. Add this all together and you'll be experiencing new kinds of movies, new kinds of games, and will see the real world in quite different ways with many augmented layers that could show various things both above and below the street, including other people's information.

Don't believe us? Facebook has a strategy book that it built for the next five-to-ten years, in which it details how people will use Spatial Computing, describes how you will know whether someone on the street is a friend, or connected to a friend, and explains how you will be able to play new kinds of video games with them. Already, in Virtual Reality in Facebook's Oculus Quest product, you can play paintball and basketball, amongst dozens of other games, with friends and random people.

Next, let's think about how Spatial Computing and the data associated with it will impact the world of manufacturing.

Manufacturing

We predict that within the next five-to-seven years, Virtual Reality and Augmented Reality combined with Computer Vision and ML-massaged data will be used much more in manufacturing than they are currently. The drivers for making this happen have to do with efficiency, quality, and safety.

Efficiency, quality, and safety will all increase when more Spatial Computing systems are integrated into manufacturing processes, because the humans working in manufacturing could then spend less time on the repetitive aspects of their jobs and focus instead on tasks where human decision-making is actually needed. The negative effects of putting humans on repetitive tasks could be avoided―the lapses of focus and the tiredness that lead to slower productivity, lower quality assurance, and injuries.

Additionally, these Spatial Computing systems, most notably those using Augmented Reality, will be fully integrated to work with robotics and other machinery so that currently existing setups could be improved upon. Besides being used for making products, both VR and AR will be used for training purposes, as well as for assessing machine breakdowns and recommending solutions.

Porsche Cars North America has rolled out an AR-based solution for service technicians at dealerships, for instance. Why? Porsche, and its vendor that built the system, Atheer, says that the system connects dealership technicians to remote experts via smartglasses for a live interaction that can shorten service resolution times by up to 40 percent.

In complex assembly, Augmented Reality delivers even bigger gains. As Boeing builds a 747-8 Freighter, its workers keep track of 130 miles' worth of wiring with smart glasses and the Skylight platform from Upskill. The result? Boeing cut wiring production time by 25 percent and reduced error rates effectively to zero.

At BAE, they are using Microsoft HoloLens headsets to build the motors for their electric buses. The software system they used, Vuforia Studio from PTC, let them train new people 30 to 40 percent more efficiently and build these motors much faster and with lower error rates.

Once a few more large corporations adopt Spatial Computing in their manufacturing operations, they will become even more operationally efficient, which will lower their operational costs and give them market advantages. This will cause other companies to adopt Spatial Computing technology for their manufacturing operations until it becomes a business standard.

Having covered manufacturing, let's now move on to think about the world of retail.

Retail

As you walk into Sephora's R&D lab in San Francisco, you see the depth to which they are thinking of using technology to improve the customer's experience. They built an entire copy of the store so they can run tons of customers through it and study every interaction. This shows up in perfume displays that have been digitized, so you can smell each product without spraying it on your skin, along with Augmented Reality-based signs, and even virtual makeup that you can try on. All of these experiences are improved through the use of data on all sorts of things that happen in the store, from the dwell times that customers spend in front of displays, to real-time sales that provide feedback loops, to promotions and store changes.

Another example of a retail store that integrates Spatial Computing is Amazon's Go store. The Go store has hundreds of sensors overhead and on the shelves, with cameras tracking everything. When a person grabs a product off the shelf, the store automatically charges them through the Go app. Gathering data on people's buying habits has never been easier.

Today's e-commerce stores have trouble vividly showing you how a product will actually look when you receive it. Spatial Computing glasses change all that deeply, by bringing you 3D visuals of products, and we predict 3D stores where you will be able to grab products off of virtual shelves, spin them around in your hands, and even try them out.

This will give retailers whole new sets of data to study and use to improve their presentations and the products themselves, but it is the predictive possibilities of these new displays that have the retail industry most excited, says Raffaella Camera, Global Head of Innovation & Strategy, XR and Managing Director at Accenture Interactive.

What do we mean? Well, let's say you add a wedding or a baby shower to your calendar; these systems can bring new shopping experiences to life. Imagine a wedding registry, populated with 3D products that your friend has picked out, along with maybe 3D images of your friend explaining why they added that item. Touch an item or show some interest in it, and it can change the entire display. For instance, touch a china plate and it could instantly show you all sorts of other things that pair with it, from glassware to silverware that you might also purchase for your friend.
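
A tiny sketch of that interaction loop might look like the following, where a touch or long gaze on one product triggers a lookup of items that pair with it; the catalog and pairing data here are invented for illustration.

    PAIRINGS = {
        "china_plate": ["wine_glass", "silverware_set", "linen_napkins"],
        "wine_glass": ["decanter", "china_plate"],
    }

    def on_product_event(event_type, product_id):
        """React to a touch or long gaze by returning items to show next."""
        if event_type in ("touch", "gaze_dwell"):
            return PAIRINGS.get(product_id, [])
        return []

    # Shopper touches the china plate in the 3D registry:
    print(on_product_event("touch", "china_plate"))
    # -> ['wine_glass', 'silverware_set', 'linen_napkins']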

The industry is salivating at getting this new data, which includes where you look, what you touch, and what you buy.

"Give me eye-tracking data of you in the real world for a week, just going about your business," says Roope Rainisto, Chief Design Officer and cofounder at Varjo, which makes the high-end VR headset that Volvo is using to design new car interiors. "The value of that data for learning about your preferences is so immense―behaviors that are currently not visible to these online services."

What will that data be used for? Rainisto sees a new shopping assistant. "There is far too much information for one person to understand and to even keep track of. The personal assistant will swoop in, help us by telling us things we should know or would like to know, do menial tasks, like keeping track of the prices of particular goods, put items on a wish list for the future, etc.

"How does the assistant become good at its job?" he asks. "Through learning us. This learning is fundamentally the exact same learning as what is used in ad targeting. Once the assistant knows me, then it can properly help me. Before that, its services will 'feel like ads'―pushing stuff I do not find all that relevant or helpful."

Retail itself will be changed deeply by these predictive systems and by workers who use Spatial Computing devices. For instance, today if you buy something in, say, B&H Photo in New York, an order goes into a warehouse and a worker grabs the camera you want and puts it into a tray that goes on a conveyor belt to where you complete your purchase. If you are buying online, the same process happens, except your product goes to a separate packing warehouse. Soon these systems will be made more efficient thanks to workers who are directed by wearable computers that can see the products via Computer Vision and show the worker all sorts of detail via Augmented Reality.

These warehouse workers using Spatial Computing devices and hooked into a system of retail helpers (both real humans and virtual assistants) will radically change retail.

Logistics robots are already doing this, but not as efficiently as a human, because Computer Vision for a robot, along with Machine Learning, is not yet sophisticated enough to recognize novel objects and objects that are not uniform. Researchers are already working to make this possible, but it will take another five years.

When this happens, very few humans will be working in logistics. Even with today's limited technology, this process of automating everything is underway; we've met warehouse owners who tell us they turned on completely automated warehouses in 2019.

Like with transportation, retail too will have a cloud computing system watching literally everything. Soon, such a system might even save your life; let's talk healthcare.

Healthcare

The combination of data, ML, and 3D visualization will be a great boon to healthcare. The data and ML combination has been used by practitioners, as well as non-practitioners, as in the case of the Apple Watch, for a few years already. The inclusion of 3D visualization, paired with location and situational awareness, makes a big difference, even saving a few lives as the systems watching sensors can tell whether you have taken a terrible fall or are having a heart attack.

Healthcare practitioners will be able to view patients' current conditions, including 3D scans of their internal organs, as well as run simulations of conditions, to plan and predict how surgery and other kinds of interventional procedures should go for those patients. As we mentioned in our chapter on healthcare, a company called MediView is already using Augmented Reality actively during cancer ablation surgeries. In the future, decisions about whether or not to perform surgery could be made using Spatial Computing methods.

The big change that will enable this to happen is the massive digitization of patient data that is currently underway at hospitals and other organizations. The open issues are that different organizations use different systems that might not be universally accessible, and that bulk data raises privacy concerns. Without data being generally accessible, applying ML will not provide trustworthy outcomes. We believe that eventually, perhaps by the mid-2030s, medical data sharing via patient opt-ins will become more ubiquitous due to the immense benefit to society that would occur as a result.

Moving on from healthcare, let's take a look at how change is coming to finance.

Finance

We envision a day when many types of financial analyses and trading will be accomplished using Spatial Computing glasses or even contact lenses, as in the case of the Mojo Lens, a prototype smart contact lens from a Saratoga, California-based start-up, Mojo Vision.

Much of this could be accomplished because of the massive amounts of raw data streams that will be made available to inform decisions, as well as data that has been characterized by ML. All of this relevant data could be portrayed in 3D, so that decisions could be made even faster.

Virtual assistants could also be used in combination with voice querying, as well as proactive, opted-in suggestions honed by ML, to aid in deciding whether or not to participate in a trade.

In addition to financial analyses and trading, and 3D data representations, Spatial Computing can also be used for customer-financial service rep interfaces. The combination of the use of voice, data, ML, and 3D visuals could make figuring out thorny financial issues much easier and faster.

Add in cameras and Computer Vision, and now we have facial recognition technologies that use ML, as well as body gait analysis technologies. With these, security is greatly increased, and all kinds of financial transactions could be more efficiently accomplished.

Finance is truly one of those areas that will be completely disrupted by the use of Spatial Computing; as with any disruption of this scale, it is difficult to completely foresee all the different kinds of new apps and technologies that will emerge as a result of Spatial Computing's disruptive force.

Education

Not only will millions of people remain unemployed for a significant amount of time post-COVID-19, but the coming increase in automation in manufacturing and vehicles will bring even more upheaval to society. Truck drivers, for instance, will largely lose their jobs in the next few decades. For America alone, that means a pretty deep restructuring of the landscape, because trucks will no longer need to stop at small towns along interstates, the stops that keep hotel keepers and cafe workers alike employed. Millions of people will see their livelihoods threatened and will need retraining to remain relevant in the modern world.

Even if you aren't struggling to put food on the table, though, Spatial Computing will radically change what it means to "be educated." Everyday activities like travel, cooking, learning about the latest science, watching the news, and even participating in civic duties, will change deeply because of Spatial Computing. We can see education changing dramatically in our discussions with founders of companies who are already planning ahead with 3D lesson plans and are building exponential learning engines to teach us everything from how to save a life to how to cook a certain meal. Our June Oven already has a camera in it and an Nvidia card, running Artificial Intelligence so it can automatically detect the kind of food we place in it and properly set the cooking temperature and time. On its app, it is already exposing us to an exponential learning engine that gets better over time, showing us more and more recipes, and properly recognizing more and more food types, from different kinds of fish and meat to different kinds of bread.

In our chapter on education, we cover how extensively the field is being changed by Spatial Computing. Here, we explore some things that are coming due to always-connected data engines that we call "exponential learning engines," which will soon change every part of our world, including how we learn.

For instance, in China, students are being checked in via facial recognition. In other places in China, teachers are starting to be replaced by AI and Virtual Reality. Well, actually, not replaced, since there aren't enough teachers to keep up with demand. Soon, the same might happen here, and what we call exponential learning systems, which are a combination of data and applied Artificial Intelligence that constantly learns from an increasing collection of data, will take over the act of educating.

Even here in the United States, these exponential learning systems will take over the act of educating many people. Why? We won't have enough teachers to keep up with demand soon, either. Look at truck driving, which is the number one job in the United States, with 1.3 million people driving today. What will happen over the next decade or two when their jobs go away? Who will teach them to do something else? Already, companies like Caterpillar are using Augmented Reality glasses to teach people how to fix tractors in real time through visual overlays on top of the tractor itself. Mercedes-Benz is using AR to teach first responders how to cut apart a car without touching an electric line or fuel line. This app, designed by Germany's Re'flekt AR development firm, works on a tablet today and will work on glasses tomorrow; it shows firefighters arriving on the scene of a car wreck where it's safe to cut.

The use of AR alone will make education better, but think about any task: could it be made better by hooking it up to an exponential learning machine? Even the Mercedes app could be improved by studying, say, how a firefighter in Japan uses it on a custom rebuilt car. That learning could be shared with the cloud computer in the sky and passed instantly to all the other firefighters in the world, making everyone's skills and knowledge better.

Can such a system learn via ML that humans aren't doing a task right and gently nudge them, with visuals in their glasses, toward a better way of doing it? Even something as difficult as learning a language, playing an instrument, or making a complicated meal can be made better by joining an exponential learning engine with Spatial Computing glasses displaying Virtual Reality and Augmented Reality to the wearer, and mundane things, like cooking an egg or replacing a tire, are made much simpler with Augmented Reality assistants. Yes, you can look those up on YouTube already, but if you were wearing Spatial Computing glasses, you would be able to see assistant images and directions right on top of what you are trying to learn, and the sensors on the glasses could watch to make sure you took the right steps too.

Classroom teachers are currently getting a taste with the Merge Cube, a low-cost plastic cube you hold in front of a camera on a phone or tablet that overlays AR images. But this is a very elementary example of what could be accomplished with Spatial Computing in the future. Keeping you immersed in the digital world makes learning better, and letting you use your hands on top of a real task will enable new education that is hard to imagine today. Like the employees at Caterpillar, students will learn through information and media superimposed on top of the task they are trying to master in real time.

Imagine learning chemistry or surgery this way! It's already happening, but this will speed up dramatically over the coming decade and affect every part of human life, always making sure we have the latest knowledge and visualization to help us understand in 3D, which is how our brains think and learn―all with data that's constantly getting better and more detailed. Already, AI is finding cancer in medical scans more accurately than human doctors can. In the future, an exponential learning engine will connect all scanners and doctors together and all doctors utilizing the system will receive the benefits each time the system learns something new.

Can school children be taught by such a method? While we can't see a world where an adult isn't nearby or facilitating learning, we can see many lessons taught via Augmented and Virtual Reality. When we were in school in the 1970s and 1980s, many lessons were taught via film strips or movies. Soon those kinds of lessons will be taught by asking kids to put on a pair of "learning glasses." These glasses will teach everything from history to chemical reactions, and the connected exponential learning systems will keep track of every lesson and every student. The adults in the room could then still be notified if someone is falling behind, falling asleep, or goofing off, which is something we expect kids will still do in the future.

We've now considered the seven verticals. Let's zoom out to a wider perspective in the next and final section and think about something that we refer to as the "bubbleland."

The "Bubbleland"

We soon will live in a digital ant farm, or, as we put it, the "bubbleland." We came up with the term after watching how autonomous cars see the world. Each has a digital bubble around it, beyond which it cannot see. Humans work the same way, but we don't think of our perception systems that way, sometimes believing that we can even sense the unseen. In the digital world of Spatial Computing, though, the perception systems of glasses, robots, virtual beings, and autonomous vehicles are limited to what their cameras and 3D sensors can see.

Or are they really limited that way? If you throw millions of such little bubbles onto a city, are there many places that a bubble doesn't touch? No.

The Bubble Network

We see such a city as having an exponentially growing number of learning machines, each with a bubble of what it can sense, all communicating with a central cloud computing system that we call an exponential learning machine. Add a bubble, a new set of glasses, or a new autonomous vehicle, and this unified system gets more information that it can use to "see" and understand the world we live in better. Add even more and we'll get more services and more utility. It is a new network effect. In previous generations of networks, as more people joined in, the network got more interesting. This is why the web eventually got so complex that we needed search engines like Yahoo and Google to keep track of it all.

The new network effect, though, isn't about the information on pages; it's about these information bubbles moving around the real world. One such bubble might hold all the information circulating around people waiting in line at a nightclub. Another might surround someone pumping gas into their car. Another might be around those worshipping at church. Yet another could form when someone picks up groceries for tonight's family dinner.

Unified Learning, Shared Experiences

A unified exponential learning system can keep track of all of this, seeing inventory changes at grocery stores, lines at gas stations, activity at nightclubs, keeping track of the latest learning delivered at universities, and much more. Each bubble would be training the system and teaching AIs what to further look for. This unified exponential learning system eventually will turn even hungrier for data and might ask the people wearing the glasses, or the robot car, to move around and gather more data. "Hey, we haven't seen aisle three in a few hours, can you walk down there with your glasses and check out the cookie display?" our glasses might ask soon, maybe even offering a discount on a purchase to incent us to provide more data so that the system would have accurate cookie prices and inventory data to share with everyone on the network.

Soon maps will show these bubbles in real time, or at least the data that they are gathering. While today, when you see an accident on the road, it shows a red line to denote that traffic is backed up, or an icon where the accident is reported, tomorrow, you will see a 3D version of the wreck on the map and you will know the speed of each lane of traffic as reported by these bubbles.

It'll be the same for grocery stores, churches, malls, movie theaters, restaurants, laundromats, National Parks, and more. You might even be able to "dive into" the bubble of someone who is explicitly sharing it with the world. Wouldn't you love to do that with someone in the front row of a Lady Gaga concert? We know we would, and we would be happy to share such an experience ourselves by telling our glasses, "Hey Siri, share this with the world." Other experiences might be more specific: "Hey Siri, share this with Grandma." Or, "Hey Siri, share this with other parents at our school."

Soon the 360-degree video cameras that are currently on sale will seem very quaint indeed, as stupid, unconnected, individual data sources. The Spatial Computing glasses of the future, and the autonomous cars and robots, will take the concept a lot further, to where they teach exponential learning systems a lot more about the world as they are walked around, and present viewers with tons of interesting views of the real world, including new digital twins that someone thousands of miles away will be able to move around in, and interact with, and maybe even add to. This bubbleworld of the future is one that is very human, creative, and inevitable.

Pervasive and Proactive Future Assistance

As we've shown here, soon our technology will know that we didn't yet put out the garbage, or will prepare us for our next meeting in ways we never thought of, not to mention perhaps warning us about a health problem that's coming based on our biometric data. Our Tesla cars are already sensing when garbage cans are out, so this day isn't far away, and soon it won't only be Teslas performing this kind of object and pattern recognition, but also millions of people wearing new Spatial Computing glasses and little delivery robots rolling around stores and streets.

Next, let's look at a few of the pioneers who are already starting to use these technologies to change everything from transportation to shopping.
