Afterword

Building the Mirrorworld

VR. AR. MR. XR. AI. CV. ML. AR cloud… the list goes on.

This isn’t just a grab-bag of trendy tech buzzwords; together, these technologies form the foundation of a spatial computing future that is right around the corner.

We are moving to a new paradigm for accessing information, consuming entertainment, learning, doing our jobs, and communicating with each other. It’s a shift from 2D graphical representations viewed on flat screens—pinhole cameras into today’s incomprehensibly vast digital world—to immersive 3D visualizations of objects and spaces laid out all around us. This will not only imbue us with brand-new superpowers that allow us to transcend space and time; it will, generally, make these computer thingies that are inextricably enmeshed in our daily lives so much easier to use. We live in a 3D world: people move, think, and experience in three dimensions. Isn’t it time our computer interfaces got out of the way and let us do the same with digital information? It’s about the digital, made physical.

Perhaps more significantly, this step change is also about making the physical digital. Every mobile phone is already a camera; add another camera or two, and with a little help from computer vision algorithms powered by machine learning, we have digital x-ray vision capable of recognizing images and objects and laying bare their contents for all to see. Every real-world object becomes its own display surface that can be enhanced with animated fun or useful knowledge about its capabilities, price, provenance, or other interesting information.

This technology is on the market today, in the crude form of VR and MR headsets and AR-capable smartphones. Someday soon, these amazing new capabilities will be presented via sleek wearable devices like smart glasses that will have us looking at the world with our heads up again, and free up the hand that holds the phone. Further down the line, wearables will be supplanted by contact lenses, retinal projection, direct neural interfaces and/or holographic projection, so that we won’t even have to put a device on our heads at all. Someday.

Think Princess Leia on the tabletop, or the Holodeck. Or the holographic display for Jarvis, Tony Stark’s virtual assistant. Or, pick your favorite envisioning from the science fiction canon. However we imagine it, it will probably not look quite like that. But I can say with conviction that spatial computing will be the interface to everything, from a future version of Wikipedia to the entertainment center in the cabin of your self-driving car. Kevin Kelly recently revived the term mirrorworld, as apt a term as any to describe this blend of the physical and the virtual. It starts with an overlay of digital information on physical stuff, then moves to a full “digital twin” of the physical world around us that contains everything, reflects it, and enhances it—a 3D skin on the Internet of Things.

The infrastructure powering this transformation is rooted in real-time 3D graphics, computer vision and machine learning, and low-latency networking. The computer industry is taking its first steps toward building a global system composed of devices, software, and communication protocols to support this dream, but again, everything today is in crude form. There’s no ubiquitous device, or even one or two go-to products. And the mirrorworld of today consists of silos: purpose-built applications to solve a business problem; online stores for delivering entertainment content; walled-garden social communities with face-filter and animoji-based customization. Content creation is an arduous, coding-centric exercise of integrating myriad tools and SDKs, and managing fragmentation between devices and operating systems. The mirrorworld of tomorrow will be more integrated and fluid, a new spatial world-wide web, hyperlinked and with instant access to 3D information. Content creation will be just that: make some 3D stuff, tag it, drop it onto the digital twin of the physical world and, permissions depending, anyone can access it, annotate it, and share it using any spatial computing device.

To paraphrase William Gibson: the mirrorworld is already here, but it’s not evenly distributed. The good news is, we can start designing and building for it with today’s systems in anticipation of tomorrow’s reality. The broad collection of techniques and technologies you read about in this book is here to stay, though over time, the alphabet soup of acronyms will likely be absorbed into a set of core system capabilities that we all take for granted, much as we do today when developing for the web or mobile. At that point, the vexing VR/AR/MR distinction will be a thing of the past, and we’ll all have a common, colloquial term for it. (Who knows? Maybe we’ll be calling it mirrorworld.)

Until then, this book is a great place to start. Hopefully it can serve as a guidebook for years to come as you embark on your journey. See you in the mirrorworld.
