Appendix A. The Future of AR

The earlier parts of this short book look at the present state of industrial enterprise AR. If, like me, you enjoy emerging tech and ideas, though, what’s coming in the future may be even more intriguing.

Specifically, this appendix focuses on two things: developing areas you should pay attention to, and why the way “mixed reality” evolves (as a term and as a set of technologies) matters so deeply. Both of these will shape where this category goes.

This is a look at what’s just beginning to emerge—areas where research attention and investment dollars suggest real traction ahead. It also gives insight into the kinds of ideas that will make AR more useful and exciting.

Durable Reality: Things That Stay in Space

One emerging space that will grow is durable reality—digital things that persist, that stay in space. A few systems already exist that let you leave notes, images, and objects in space, just as you’d leave a sticky note in the physical world. For instance, you can leave a digital note on a machine for the next tech who comes by; it pings him and tags the specific location, telling him “it needs you here.” A few people featured in this short book have built basic systems of that sort. But there are a million other uses for durable reality as well.

  • Fashion designers can draw textured leather jackets and jeans, and then walk around “inside” their creation to see how it really looks and feels.

  • Artists can create and instantly change 3D sculptures and paintings that you can walk around inside, as they’re already beginning to do by experimenting with the new Tilt Brush.

  • You can leave your partner love notes on the mirror or windshield that the kids will never see because they are encoded just for his or her phone/app. :)

Imagine walking through the physical world and finding Easter eggs left there just for or by you.
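
Under the hood, the note-on-a-machine example boils down to a record pinned to a place, plus an access list saying who may see it. Here is a minimal Python sketch of that idea; the names and the raw-GPS proximity check are illustrative assumptions (real systems anchor to visual features of the space, not coordinates alone):

    from dataclasses import dataclass, field

    @dataclass
    class DurableNote:
        """A digital note that persists at a real-world location."""
        lat: float                # anchor position; a production system would
        lon: float                # use a visual/spatial anchor, not raw GPS
        payload: str              # text, or a reference to an image/3D asset
        author: str
        allowed_users: set = field(default_factory=set)  # who may see it

    def visible_notes(notes, user, user_lat, user_lon, radius=0.0005):
        """Return nearby notes that this user is allowed to see."""
        nearby = []
        for n in notes:
            if user != n.author and user not in n.allowed_users:
                continue  # the "love note the kids never see" case
            if abs(n.lat - user_lat) < radius and abs(n.lon - user_lon) < radius:
                nearby.append(n)
        return nearby

The interesting engineering problems live in the anchor itself: recognizing the same machine, mirror, or windshield again, from a new angle, possibly months later.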

Diminished Reality

There is another emerging category for AR: diminished reality. The idea has been discussed since 2011. Now, though few people have noticed, it’s starting to quietly and powerfully gain traction. How it works: rather than adding digital things into the space around you, you take real ones out. The possible uses are as broad as you can imagine.

It’s the “real-life” version of photo retouching, and demos of it show just how effective it can be. You can use it to make things disappear. Really.
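
Most demos of this rely on some form of inpainting: mask the object you want gone, then synthesize plausible background in its place. A single-frame sketch using OpenCV’s built-in inpainting shows the core idea (the file name and mask region are placeholders; real diminished reality has to do this per frame, with tracking):

    import cv2
    import numpy as np

    frame = cv2.imread("scene.jpg")                   # one camera frame
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)  # single-channel mask
    mask[120:260, 300:420] = 255                      # mark the object to "remove"

    # Fill the masked region from its surroundings.
    clean = cv2.inpaint(frame, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    cv2.imwrite("scene_diminished.jpg", clean)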

But we are not there yet.

This begins to get into the fuzzy parts of reality (mixed, augmented, virtual, and physical), as well as some deep questions of privacy, ethics, and physics that we all need to consider. For now, though, let’s focus on the more immediate changes in the AR world.

Gestural Control

Gestural control is all about using your hands to take actions in the physical world that show up in the virtual world. Everyone loves the Minority Report user interface. It was actually developed at MIT, and it has since been re-created using a Leap Motion Orion. Minority Report’s science advisor, John Underkoffler, demoed it at a 2010 TED conference. He called it a “spatial operating environment” interface, and his talk is brilliant and worth watching to see how far we have come in just six years.

Since then, people have built on the idea of controlling and displaying objects in space in various ways—some mimicking his approach, some adapting it. Obscura Digital has created some amazing and useful gesture-controlled systems, and companies like Navdy are playing in a similar realm with their heads-up displays for cars (this is real; thankfully, Sugarbeef is not).
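
To make the idea concrete: most gestural systems reduce to tracked hand landmarks plus a mapping from gesture to action. Here is a toy Python sketch of a pinch-to-grab interaction; the landmark format is a made-up stand-in for whatever your tracker actually reports:

    import math

    def is_pinch(landmarks, threshold_m=0.02):
        """True when thumb tip and index tip are close enough to be a pinch."""
        thumb = landmarks["thumb_tip"]   # hypothetical landmark names
        index = landmarks["index_tip"]
        return math.dist(thumb, index) < threshold_m

    class GrabController:
        """Maps the pinch gesture onto grab/release actions on virtual objects."""
        def __init__(self):
            self.holding = False

        def update(self, landmarks):
            pinched = is_pinch(landmarks)
            if pinched and not self.holding:
                self.holding = True       # attach nearest virtual object to hand
            elif not pinched and self.holding:
                self.holding = False      # drop it at the current hand position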

Enhanced Reality: Multisensory Experiences

Imagine using electrostatic energy, the way Disney Research did, to run your finger over a screen and feel the bark of a tree. Or mimicking touch: with a UK-based tool called Shoogleit, you can scrunch up fabric onscreen and see how it folds. (Fashion company buyers could use things like that.) Or imagine modernized and improved versions of way-too-early technologies like Audio Perfume, which used ultrasonic frequencies and sounds to augment your world.1

Experiences that play with touch, taste, sound, subtle energies, and smell will only add to an environment created by augmented reality. If you’re remotely repairing a car, you may not wish to hear all the sounds of the highway. But hearing the isolated sound of the key in the ignition can help you learn whether the issue is the alternator or the battery.
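
Isolating one sound from a noisy scene can be as simple as band-pass filtering, if you know roughly where the sound of interest lives in the spectrum. A minimal SciPy sketch; the frequency band here is an invented placeholder, not a real ignition signature:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def isolate_band(audio, sample_rate, low_hz, high_hz):
        """Keep only the frequency band of interest."""
        sos = butter(4, [low_hz, high_hz], btype="bandpass",
                     fs=sample_rate, output="sos")
        return sosfilt(sos, audio)

    sr = 16000
    t = np.arange(sr) / sr
    # A stand-in mix: low "highway rumble" plus a higher "ignition click."
    mixed = np.sin(2 * np.pi * 120 * t) + 0.4 * np.sin(2 * np.pi * 3000 * t)
    ignition_only = isolate_band(mixed, sr, 2500, 3500)

Production systems layer smarter source separation on top, but the principle is the same: subtract what distracts, keep what informs.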

Noise-cancelling and enhancing technologies, haptics and touchless haptics, new uses for music and color, and other types of multisensory experience make the “overlays” and augmentations to our physical world more interesting.

Google Nose was announced a few years ago as a funny April Fool’s joke. But remember: Pokémon GO started as a Google April Fool’s joke as well. We may not have smell-o-vision anytime soon; it’s still a technology in search of a real use case (what BMW is doing with AirTouch is interesting...but not necessary). Even so, it’s an area that will continue to develop.

Max Maxfield points out one simple haptic example: “One of the free applications that comes with the Oculus Rift is the Oculus Dreamdeck, which allows you to sample a suite of virtual experiences (just listen to the commentator’s excitement when he reaches the 3:20 mark in this video).”

Creating a more realistic virtual world will require engaging senses other than sight. We’ve already started down that path.

Language and Lightfields Will Shape What AR Becomes

As technology evolves, so does the language that describes it. It’s chicken and egg; one informs and helps shape the other. There is one term you should really pay attention to here: mixed reality.

If you’re a CTO or a research geek like me, you’ll want to follow the way that phrase is beginning to emerge and evolve, because it represents very different streams of thought in the industry, and because it is being used in radically different ways by different people.

What does mixed reality mean right now? It depends on the person talking. Here are two current, common working definitions. The first refers to a blending of AR and VR. The second refers to blending digital assets into the fabric of the world around us—what we currently view as “reality” itself.

Mixed Reality: Blending of AR and VR

Vuforia’s Product Manager, Manish Sirdeshmukh, subscribes to the first definition. In a July 5 presentation at Unite Europe ’16, he calls mixed reality a hybrid of virtual and augmented reality technologies involving headsets and AR triggers. Here’s what else he says about it—and how you can create mixed reality experiences with Vuforia (something that’s not currently possible in Unity alone):

A lot of people have been talking about mixed reality, augmented reality in the same fashion. But what I really mean is...when I talk about virtual reality, if you are immersed in an environment which is completely 3D. It’s a completely virtual environment. Augmented reality is when I see things through the camera or directly through my eyes, and when I’m augmenting a virtual object into the real world. And then if I am switching between augmented reality and virtual reality or using the data from the real world that is captured through the computer vision algorithms and using that as a reference within my virtual reality application, that’s what I call mixed reality.

A few examples: if you’re developing an application in Unity, you have two paths – do you want to create an augmented reality application or do you want to create a virtual reality application? Right now, you can’t create a single application that does both. That’s what mixed reality can do.

What it is: a transition between both experiences. For instance, you trigger an augmented reality experience from an image, and from there you switch into a virtual reality experience—that’s one definition.
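
In code terms, that transition is little more than a mode switch driven by triggers. The sketch below is a hypothetical illustration of the concept, not Vuforia’s actual API:

    class MixedRealityController:
        """Toy state machine that switches between AR and VR modes."""

        def __init__(self):
            self.mode = "AR"  # start by looking at the real world

        def on_image_target_found(self, target_id):
            # An AR trigger (say, a recognized poster) launches the VR scene.
            if self.mode == "AR":
                self.mode = "VR"
                return f"entering VR scene for target {target_id}"

        def on_exit_request(self):
            # Leaving the immersive scene drops back to the camera view.
            if self.mode == "VR":
                self.mode = "AR"
                return "back to AR view"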

How does Vuforia support mixed reality? In three ways:

Improved rendering
The first is through stereo rendering—left-eye and right-eye views. This also includes distortion correction for the lens: a lens is never optically flat; it is always curved, so they compensate for that curvature. Those are the rendering features they added.
Rotational device tracking
This is also referred to as 3DoF (three degrees of freedom: rotation only; see the sketch after this list). When you’re not looking at an image, or when you’re in the completely immersive environment of a VR experience, you don’t want to be restricted to looking at an image target. Even when the target is out of sight, you still need that extended tracking. It is baked into Vuforia as a feature.
Mixed reality controller API
That is basically an API that lets you control how you transition between the virtual and the augmented world.
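
As a side note on the rotational tracking above: 3DoF usually means integrating gyroscope rates into an orientation quaternion, with no positional component at all. A minimal numpy sketch of that integration step (an illustration of the general technique, not Vuforia’s implementation):

    import numpy as np

    def integrate_gyro(q, gyro_rad_s, dt):
        """Advance orientation quaternion q = [w, x, y, z] by one gyro sample.

        Rotation only: position is not observable from a gyroscope,
        which is why 6DoF systems need additional sensors.
        """
        wx, wy, wz = gyro_rad_s
        w, x, y, z = q
        # Quaternion derivative: q_dot = 0.5 * q * (0, wx, wy, wz)
        q_dot = 0.5 * np.array([
            -x * wx - y * wy - z * wz,
             w * wx + y * wz - z * wy,
             w * wy - x * wz + z * wx,
             w * wz + x * wy - y * wx,
        ])
        q = q + q_dot * dt
        return q / np.linalg.norm(q)  # renormalize to a unit quaternion

    # Example: rotate at 90 degrees/sec about the vertical axis, one 10 ms step.
    q = np.array([1.0, 0.0, 0.0, 0.0])
    q = integrate_gyro(q, (0.0, np.radians(90), 0.0), 0.01)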

This definition of mixed reality is the dominant, hardware-driven one right now. It is simply and beautifully described in an article from The Next Web that also discusses how artificial intelligence is connected to AR development. However, other people define mixed reality differently.

Mixed Reality: Blending Digital Assets into the Fabric of the World Around Us

In a talk called “The Dawn of Mixed Reality” at the WIRED Business Conference in 2016, Rony Abovitz—founder, president, and CEO of Magic Leap—calls what they do “a mixed reality lightfield.” As he thinks of it, mixed reality is not based on a blend of virtual reality and augmented reality as we know them today. It is not headsets and graphics.

It means walking through the world without any technology (except, perhaps, what’s embedded) using our visual cortex—and the power of our minds—as the display system. Abovitz is talking about our reality itself.

Here are some excerpts from that talk that give you insight into his current thinking about mixed reality, and how he plans to shape what it becomes:

Thinking about how we experience the world visually, without any technology, the idea was there’s this amazing display that we have in our brain already and it’s processed by our visual cortex. And I thought we would never build a better display than that, so how can we get into that. And that led to the studying of how the visual world outside of us, which we call an analog lightfield, how that interfaces with your brain—the sort of physics and neurobiology interface—and allows that display in your brain to create all the amazing imagery that we’re doing right now. And our digital lightfield is basically mimicking that process. It’s allowing the brain to be a display and not replacing the display you already have with something inferior. It’s trying to use what’s there.

So how does it work? The heavy lifting piece of what we do, we have to think nature and biology. I mean, we evolved this incredible brain that has a hundred trillion connections, hundreds of billions of neurons—somewhere around 40%+ is all about visual processing and perception. What we’re doing is we create a digital lightfield signal that is a biomimetic signal that is very similar to the analog lightfield that’s coming in. And our signal blends with that one.

So really you have this one integrated signal coming from your eye retina system into the visual cortex and you don’t have something on top of the world. You don’t have like a cellphone display in front of your eye like what people think of with VR. You just have something that feels like an integrated natural blending of digital and physical, and that’s what we call mixed reality lightfield. All of the details of how those hundred trillion neural connections work—we’ll be here for probably 10 years—but it is a blend of how the brain works plus our intense effort to mimic a signal very naturally into your eyebrain system.

In a July 12 tweet, Abovitz also said this: “Mixed Reality adoption will be a journey—think first iPod through the iPhone—but will be fun all along the way :)”

It will be.

It is still early days in the new AR frontier. “Reality” is changing. The digital and physical worlds have already merged. Dipping a relatively low-investment toe into AR technology makes sense if you’re in the enterprise (remember: that doesn’t necessarily mean getting smart glasses—start with what you have). Begin learning how AR works and how to use it as a tool. Don’t roll it out to consumers. Use it to help your design, manufacturing, sales, and ops teams. Augment the people behind the scenes to make their lives easier (and their time more efficient).

What we’re augmenting (especially in enterprise) is not reality, but the ability of humans. The digital and physical worlds have already blended.

Now, it’s just a matter of proportion.

1 Hat tip to my friend Todd Harple for these examples.
