Chapter 3. Augmented Reality Creators and Use Cases You Should Know

We’ve looked at why now is the right time for AR, and we’ve looked at how to invest time and money in AR, given its stage of development. These are the steps you should take to ensure that you are focusing your time and money in smart ways.

When building a business case for a company, however, the next big questions to ask are, “How is this actually being used in the world in meaningful ways?” and “What can I learn from those people and strategies?”

Here’s the best way to answer that: look at what other people are doing.

According to the investment bankers at Goldman Sachs, VR and AR technologies will generate $80 billion in revenue by 2025, $35 billion of that from software. That includes professional sectors like healthcare (projected at $5.1 billion) followed by engineering ($4.7 billion).

In this chapter, we also look at other categories of business that represent where AR is headed: how else it can be used, and what you should know as you plan your strategy for AR development.

Whom did I choose to talk with, and how? I looked for a range of companies and listened for those with unique insights and insider knowledge, meaning that they are saving front-line workers significant time and helping to save money. I also focused on those with massive potential, because they represent areas few others are exploring right now.

Some of these case studies are big-name companies, and some are from smaller players. All are projects that you might not have explored in detail, unless you have deep access to corporate and civic AR players.

The goal of these case studies is to give you insight into the thinking, processes, projects, and approaches being used.

I’ve included a broad range of industries. To prove or disprove the Goldman Sachs numbers mentioned earlier and give the broadest perspective possible, I looked at how AR is currently used in industry (a clear winner) as well as in architecture and design, education, arts and entertainment, and retail.

Each case study explores how a company is using AR and each has a specific focus:

AR Makes Time-Consuming Industrial Tasks Easier

Is now really the time for AR? In industrial enterprise, it is.

Particularly behind the scenes at big heavy-machinery companies like Caterpillar, aeronautics companies like Boeing, and automotive companies like BMW, AR has been refined for years and transformed into a tool that is actually saving people time and money. All that research and thought has really paid off in the past few years. With advances in hardware, breakthroughs in computer vision technologies, and the distillation of years of thought that have gone into how to build these systems, AR is the beneficiary. Said more simply: the underlying technology that powers AR has finally caught up with the promise of it. As you’ll discover, AR has already quietly taken hold in industry. We’ve begun to exit the R&D phase.

That is just in industrial use. And it’s why industrial AR is the inflection point.

If you’re working on your strategy or budget, one of the first questions you might ask is “What is AR’s place in my operations?” The best way to answer that is to look at how it is being used currently and how it will most likely be used in the future.

The Augmented Reality for Enterprise Alliance imagines use cases from warehouse picking to emergency response to aircraft cabin workflow. Some of these use cases are real. Some are hypothetical. The interviewees here, though, speak to a number of different uses of AR. They are corporate AR program directors for huge companies, AR builders, and experts of various types. All are technologists who have been exploring, refining, and creating different ways of using, building, and piloting AR programs.

Case Study: Index AR/Newport News Shipbuilding

For my first book, I spoke with Paul Davies, the gentleman who heads up Boeing’s landmark AR program, and asked him what other AR creators he admired. His answer: the guys over at Newport News Shipbuilding. This is part of the team he’s talking about. When I first spoke with Dan Arczynski, he said, “We’re the best AR guys that no one knows about.” It was a big claim. But as I listened to him lay out his perspective on AR, on how it has evolved and how it will evolve, he lived up to it. His company, Index AR, just released a report on AR that is one of the most thorough and thoughtful independent perspectives I’ve found.

When Arczynski was an executive at Newport News, he ran huge parts of the company. One day, one of his managers—a man with 1,000 or 2,000 people working for him—came to him to say that AR was going to be big. Bigger than Lean methodology. Big enough to totally change the way companies do business. Now, five-and-a-half years and 50 projects later, Arczynski and his team are proving that to be true. With Newport News Shipbuilding’s blessing and partnership, he left to start his own AR-focused company with the goal of not augmenting reality, but “augmenting people to make them more capable, safer, and more productive.”

When Arczynski and I spoke, he shared a great story about how he is using AR to save significant time and money in shipbuilding—in just one case study, he describes cutting inspection time for a critical ship component from 36 hours down to 90 minutes. (Imagine that multiplied for every cargo, cruise, and passenger ship around the world!)

Here’s how that happened, and more on what he has to say about how AR is evolving.

Cutting Inspection Time from 36 Hours to 90 Minutes

Let’s begin with some context about how massive ships are actually constructed. As it turns out, they are built in modules—huge ones. Each module can weigh between 600 and 1,000 tons—as big as a house or building itself. Because it costs less and is easier, the modules are assembled in the factory or on big, flat areas rather than on the ship itself. After they are built, they are transported to the ship, where giant cranes lift each module into place and workers weld it down. But this, as Arczynski says, is where it becomes interesting:

Because these big [modules] are really not made to be lifted, they have to add temporary steel [as reinforcement]—think of it as a frame you build around to make sure it doesn’t flex and bend as it’s going in. At the end, they have to go back in and find the temporary steel, because it’s all still in the module. And they have to pull it out.

It makes sense—you don’t want the extra weight added to the ship. However, finding and removing the temporary steel isn’t an easy process. That’s because each module has already been finished, painted, and insulated, with the extra steel inside, all covered up. Finding it requires an inspection. Historically, this has been one person’s job; in the story Arczynski told me, it was the job of well-respected shipbuilder and engineer Larry Munn. Arczynski and his team equipped Munn with an AR tool to detect and locate all of the excess steel. The inspection typically took Munn 36 hours. With the Index AR app developed for him, it took 90 minutes.

90 minutes. Let that sink in.
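To put that improvement in concrete terms, here is a quick back-of-the-envelope calculation using only the figures from Arczynski’s story (the numbers are his; the arithmetic is mine):

```python
# Inspection time before and after the Index AR app, per Arczynski's account.
baseline_hours = 36          # traditional manual inspection
ar_hours = 90 / 60           # 90 minutes with the AR app -> 1.5 hours

speedup = baseline_hours / ar_hours          # 24x faster
reduction = 1 - ar_hours / baseline_hours    # roughly 96% less time

print(f"{speedup:.0f}x speedup, {reduction:.1%} time reduction")
```

That is a 24-fold speedup: the AR-assisted inspection takes about 4 percent of the original time.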

As Arczynski points out,

If you’re a senior executive and you’re a $7 billion company, and if somebody said, we can save you three percent, I mean, we would’ve done that. Right? Three percent, that’s a lot of money! We only make 8 or 10 percent profit! That’s the kind of example of things that are possible with this technology.

It is just one example of how AR can make things easier on a human level—and have a massive impact on the bottom line.
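Arczynski’s “three percent” comment is worth unpacking. Here is an illustrative sketch of the arithmetic, using only the hypothetical $7 billion company and the 8 to 10 percent margin he cites:

```python
# Why a 3% cost saving excites a senior executive, per Arczynski's example.
revenue = 7_000_000_000                  # a $7 billion company
savings = 0.03 * revenue                 # 3% of revenue, about $210 million

# At an 8-10% profit margin, profit is $560-700 million, so a saving that
# flows straight to the bottom line lifts profit by roughly 30-37.5%.
profit_low, profit_high = 0.08 * revenue, 0.10 * revenue
lift = (savings / profit_high, savings / profit_low)

print(f"Savings: ${savings / 1e9:.2f}B; profit lift: {lift[0]:.0%} to {lift[1]:.1%}")
```

A cost saving worth nearly a third of total profit explains why this is not a marginal improvement but a board-level number.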

To be able to find tasks people do and to save that kind of time and energy is huge. After you start making changes to underlying processes like inspection and repair, things begin to be built faster and a lot cheaper. You change your entire competitive position when you can reduce the complexity and time for tasks that significantly.

I asked Arczynski whether, even though it is case specific, there are guidelines around when to use AR. When is it truly beneficial rather than just technology for its own sake? How do you know what to augment and what not to augment?

As it turns out, he’s thought that question through carefully. He outlined seven use cases—citing the first three as those he recommends to new customers because they usually show the biggest results. Here’s what he says:

The first one is inspection or quality assurance—that’s the one that the guy gave me 36 hours [and cut the work time down] to 90 minutes, first time. The second is work instructions. And then the third is training. For us, for a first project, we would encourage them to start there, because we can almost guarantee that it’s going to be a successful project. By the way, these are huge, broad categories; there are many things actually involved in each of these.

The next four, he says, are workflow management, operations, safety, and then logistics—things a company should take on after they’re comfortable with the technology. The great thing about AR prototypes and experiments is that they show return relatively quickly and are less expensive than many business systems. Even though AR hasn’t reached the level of ease and turn-it-on-now simplicity of Software as a Service (SaaS), it is easier to implement and turn on than enterprise business systems. Which makes it a sweet spot for those willing to test now—to potentially win big later.

As Arczynski says:

We brought in SAP, you know, that’s a big technology. You could invest hundreds of millions of dollars in SAP before you get a penny of value out of it, right? You’ve got to build out all these systems, buy all this software, and train all these people before you [even] turn it on. And when you turn it on, of course, there’s all kinds of problems, etc. Then finally after years and years, you start to get some return.

AR is more immediate. And, in industrial use, it is already demonstrating real value.

The question is: how would an industry or a CTO who is new to this even go about creating an AR pilot? One answer is simple: bring in an experienced AR consultancy like Index or a competitor to do a few test projects. A process like that takes a few months, which includes building an app and getting initial results. It uses your data and off-the-shelf hardware.

And then you get to choose. Do you want to continue to work with a consultancy? Can you do it less expensively or better in house? Are you generating value in the ways you wish? Do you want to stop right there?

With the ability to use AR systems on tablets (which nearly everyone has) and by working—initially—with someone who is experienced with enterprise AR systems, the value you can create is much greater than the investment. But, as Arczynski says, it requires you to take the first step; you need to test drive the technology:

In a few months, you can start to be exploring and seeing the value of AR in the company, and then—over time—you can grow and grow and grow into it. And that’s a real beautiful thing about this technology. That’s why I believe its S curve, once it hits, it’s going to be really sharp, because companies can really, with very little cost and risk (not having to pay hundreds of millions of dollars)…see what it can do. That’s why I think it’s really going to take off.

AR Helps Reduce Human Errors

In addition to managing day-to-day operations as cofounder, CEO, and CTO of Scope AR, serial entrepreneur Scott Montgomerie is one of the early pioneers in the industrial AR space.

He has worked with virtually every commercial head-mounted display on the market and has built his own. He has worked in the AR space for 15 years. That puts him in a really interesting position to speak to strategy (hint: even though he loves the head-mounted display companies, he chooses tablets).

Scott also works with huge companies in the aerospace and automotive industries. He and I geeked out about how the technology has transformed over time. We also discussed the real, human value of AR. Namely: helping people feel proud of a job well done, and avoiding the errors that might prevent that.

Case Study: Scope AR

Montgomerie began with a humorous story with a serious point. It is about one of his customers in the Middle East who used one of Scope AR’s core products to design a step-by-step maintenance program. The customer wanted to baseline workers’ performance, and thus split his crew into two groups:

  • One using Montgomerie’s AR system

  • One doing it the old-fashioned way

After the job was finished, Montgomerie says, the client called him with a surprising message: he was a little disappointed in the results—both of his groups finished the task in the same amount of time.

But the client wasn’t finished. As Montgomerie tells the story, the client also said, “Well, for you guys, there is one big benefit: the guys that didn’t use augmented reality made a lot of mistakes, and the guys that did use augmented reality made exactly zero.” And I’m like, “That’s great. That’s fantastic!” He said, “No, it’s not fantastic. These guys are professionals! They shouldn’t be making any mistakes. I can’t believe they made all these mistakes!”

Another error-reducing use case is in the construction industry: specifically, building information management. In Montgomerie’s world, building information management is a blanket term that covers the documents and graphical models of a building’s architecture—you can see the studs, the plumbing, the electrical, everything you need to know about how to build a structure. His team takes that information and overlays it directly on top of either a flat concrete plane or open space, so that a builder can visualize how the structure will be built. They are also starting to test concepts with a company whose framers wear eyewear that shows the location of the studs, so that they can hammer them into precisely the right location.

Montgomerie explains the benefits of this system and how they add up for builders:

[Right now], it takes about five minutes to run a single plumb line—the plumb line is where you drop a string with a weight on it, that gives you a straight vertical—it’s got chalk on it, so you pull it back slightly, and when it hits the wall, it leaves a chalk line. To do that, you’ve got to get the chalk line, then measure it again. On average, it takes five minutes. If we’re able to eliminate that step for every stud that goes in, it saves a ton of time and money in terms of construction. And that’s just one of the increased efficiencies by seeing the model in front of you, as it will be built rather than having to refer back to the paper blueprints.

He also points out that AR systems are great for collecting data, via the camera. You can look at things like how long it takes someone to complete an individual step within a larger procedure. And then you can use what you find out to understand how different workers complete that task—and in what timeframe. As Montgomerie explains:

You know, this guy’s doing it in half an hour, this guy’s doing it in 10 minutes. Is one guy just super slow or is he doing the right way and other guys are skipping steps? What’s going on here? There are a lot of really great insights that you’re able to get from collecting that data you never would normally get. And then with the augmented reality overlay, you’re able to point at various parts of the procedure that you should pay special attention to. “Make sure that this nut is properly screwed down to the torque,” or something like that. Amazingly simple steps that you would think that workers would be OK to remember, but they don’t.
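The kind of variance analysis Montgomerie describes is simple to run once an AR system has captured per-step timings. A minimal sketch (the worker names, times, and thresholds below are invented for illustration; they are not Scope AR data):

```python
from statistics import median

# Hypothetical per-step completion times, in minutes, as an AR system
# might log them for one procedure step across four workers.
step_times = {"worker_a": 30, "worker_b": 10, "worker_c": 18, "worker_d": 21}

typical = median(step_times.values())  # 19.5 minutes

# Flag anyone far from the typical time for supervisor review:
# unusually slow may mean a struggling worker; unusually fast
# may mean skipped steps.
flagged = {name: t for name, t in step_times.items()
           if t < 0.6 * typical or t > 1.5 * typical}

print(flagged)  # {'worker_a': 30, 'worker_b': 10}
```

The thresholds would be tuned per task in practice; the point is that the camera-collected data makes this kind of question answerable at all.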

To illustrate that point, Montgomerie tells the story of one of his oil and gas clients. It had two high-pressure pipes that were being clamped together with a simple clamp. The oil coming through the pipe is mixed with water at a boiling temperature, and if the seal isn’t properly set, the heat will cause it to just cut through the pipe, creating a spill and a lot of lost revenue for the company. As it turns out, this company was having a lot of failures on one clamp—a really simple clamp, made of two pieces of curved metal that go over two pipes. No one could figure out why.

Montgomerie’s team discovered the answer. The problem was that there was a prescribed procedure for tightening the bolts on this clamp that workers were simply not following because they didn’t understand the importance of the procedure. To do the job properly, workers had to tighten four bolts in a crisscross motion and then use a hammer, which would essentially loosen the clamp, and then you would do it again. They had to do that over and over again until achieving a specific torque measurement. “[But] the workers were just not doing this, and they were getting a ton of failures,” Montgomerie explained. “Because this is up in northern Canada—it’s 10 or minus 20 Fahrenheit—they just want to get the job done as fast as possible. If you’re not well versed in the ramifications of what you need to do, you might just rush through it.” AR can help speed and simplify that process. It can help people understand why things need to happen, as well as what to do.

In the words of Contextere CEO Gabe Batstone, “What creates a job well done is the right insight, delivered at the right time.” AR is a tool for getting information where it needs to be, right when it needs to be there.

During our conversation, Montgomerie also offered up some insightful context about the place of AR in the world. He pointed to the fact that for the past three million years we have interacted with our environment with our hands and our eyes. For the past three decades or so, computers have provided us enormous value. They’ve been able to get analytics for us, crunch numbers, and give us amazing insight, but we’ve always interacted with that information through conventional streams—on a desktop and, more recently, our phones and tablets. That has left a large portion of the world out of the benefits of computers. Montgomerie says:

There’s a whole lot of industries where people are out in the field working with their hands and interfacing with the environment with their eyes, and they don’t have the ability to interact with the amazing power of computers. Augmented reality is all about being able to present information in a more intuitive way, a more convenient way. We can bring the power of computers to the real world in industries like construction and manufacturing and all these really heavy industries…[to give workers] the ability to interact with computers and the internet.

To make his point, he referenced Caterpillar—a company I interviewed for my previous book. Specifically, he spoke to its Smart Iron initiative, in which every machine the company sells is going to be connected to the cloud to provide data. It’s the epitome of the Internet of Things (IoT): tractors and backhoes sending tons of information back to the cloud about temperature and pressure, as well as the engine, pistons, hydraulics, and brakes. After you have all of this information, you need a way to sift through it, translating the raw data into actionable information. That’s where AR comes in.

In Montgomerie’s words:

What I see coming forward is augmented reality as a way to interact with digital in a new paradigm. You’re going to get the ability to look at machines and have a visual dashboard. [You’ll] be able to see inside the machine of what’s happening, and all this information that’s coming through…with IoT. You’ll be able to see in an intuitive way. You’ll be able to see, for example, if you look at a radiator, where the fluid comes in, all the pressure and heat differentials going to the system and fixing where there’s a leak or where there’s a problem, a bottleneck, maybe a build-up that needs to get figured out.

This is where we are going, but we are really far away. We’re just at the beginning, which makes this exactly the time to plan and pilot. As you’ll read in the next case study, BMW, Volkswagen, Daimler, and Bosch are already doing just that.

Case Study: RE’FLEKT GmbH

Wolfgang Stelzle is CEO and founder of RE’FLEKT GmbH in Munich, one of Europe’s leading augmented- and virtual-reality firms. He founded RE’FLEKT in 2012 and heads the 40-member team together with managing director Kerim Ispir. Before that, Stelzle worked at Adidas in Germany and then as a consultant at a digital agency, creating interactive and immersive solutions for well-known brands.

Now, he is working with companies like Bosch, Daimler, Volkswagen, and BMW to create their AR programs, which gives him insight into things most other people never get to see.

AR for Visualizing Product Accessories and Doing Diagnostics

Stelzle explains the ways in which AR is being used, successfully, to create more revenue and help sales teams and engineers within the automotive space and beyond:

We work a lot with Bosch and in a unit called Bosch Automotive Service Solutions. They work on a daily basis with diagnostic system and service information. And for them the goal is quite clear, they really want to put all the traditional service information onto glasses in the long run. And because we work with them, we also work a lot with automotive customers in general like Daimler, VW, and BMW.

A couple use cases which were very successful are coming out of the marketing and sales domain, so talking for example about a car. You are at the dealership, and with AR you can easily augment accessories—for example—to the car, so you can visualize rims, how different rims look like. There’s a direct potential to create more revenue.

[Or] for example, if you work in the automotive industry and you have diagnostic data, you want to actually know where the error is coming from and where the error is located. It doesn’t really help to get a single text error code on the screen without knowing where the error is localized. AR can help to actually localize where the problem is coming from.

Stelzle also mentions diagnostics as a huge use case for AR—one that applies not only in the automotive industry, but in any situation for which seeing “into” an object can help you diagnose and head off issues. You can basically watch whatever is under the hood—whatever is not visible from the outside of a machine or a car—via an X-ray effect. “For example,” he explains, “visualizing wire harnesses underneath a car. Where are the wires? And where are the connectors? Because that’s where mainly errors are coming from. That’s where the technician knows how a cable is actually running through a car and where to put it. The same goes for machinery. If you have fuel pipes or oil pipes, you want to know where they’re running.”

You also want to know what they’re transporting and whether there’s some source of error. That’s where AR can really help. Stelzle also pointed out that not a lot of customers have rolled out AR on a global scale, and that is for several reasons. First, the tracking technology and the hardware have not been mature for long. The devices need to be good. The tracking algorithms need to be very accurate.

“It’s [only been] about a year or two where you can really say ‘Hey now we’re at a stage where we can use mobile AR on tablets and on phones,’” he says.

Another issue has been content creation. We address this in Chapter 5 and show you some techniques for creating content (beyond partnering with a consultancy like Stelzle’s or Index AR). I also give you some tips on how to start if you want to build from scratch. One jumpstart on that: if you do choose to develop enterprise AR yourself, it is critical that you build an effective team.

Stelzle and I talked about how to build the best team for creating enterprise AR apps, and how vital it is to ensure that those teams have manufacturing experience (or are partnered closely with people who do). Right now, a lot of companies are hiring teams from the gaming or entertainment world because they have the technological knowledge to build AR systems. A better way might be to hire people first for their manufacturing and construction background, and then for their engineering and software development ability. You want people who can walk into a facility and talk directly to the people running it, to help them change things for the better. That means understanding how plants work and where AR will be of benefit. Stelzle and his team have done this by partnering with the manufacturing experts at Bosch (as well as BMW, Daimler, and other companies). Others have done it by embedding those manufacturing experts in their teams. Either way, you need the manufacturing and construction expertise to show the value of your program—because it won’t last long if you don’t show results.

AR Trains Medical Professionals

While AR is not yet being broadly used in education, it is being used by a few progressive institutions as a powerful teaching tool. And more importantly, it is being used as a way to expand the knowledge base of teachers and scientists to discover new techniques for learning and healing.

Case Study: Case Western Reserve

Bob Sopko is an entrepreneurship professor and a tech connector who helped introduce AR at the Interactive Commons, a 50,000-square-foot public (free!) maker lab at Case Western Reserve University. The Interactive Commons was one of the first institutions to partner with Microsoft HoloLens to deeply explore AR applications for medicine, dentistry, education, and other industries. Erin Henninger is the executive director of the Interactive Commons, and she has this to say about the project: “At Case Western we have a 50,000-square-foot maker space. It was a $35.6 million project. We are open to the community. Free. Anybody in the community can come in and we’ll work them through a project.”

I spoke with both Erin and Bob over the span of a week. My main questions were: Why are you using AR—and how? What I discovered during those conversations is how powerful AR can be as a tool for community building as well as for medicine and engineering. 3D images can reveal things and help people learn in ways 2D simply cannot. That is one reason their team has become part of the $520 million medical, dental, and nursing school partnership with the Cleveland Clinic. And it is why part of that teaching will be done using the HoloLens instead of cadavers.

AR Offers New Ways to Learn Anatomy

Earlier this year, Henninger and the team at the Interactive Commons created the first third-party app for HoloLens, which was nominated for a prestigious Jackson Hole Science Media award. It’s a free anatomy and physiology primer that allows physicians-in-training or nurses-in-training to learn in a whole new way.

As Henninger describes it, each person puts on HoloLens gear and it “projects a hologram in your world that not only one person can interact with, but multiple people can interact with it.”

They can walk around this “body” and see the outline of it, then see the bones, the muscles, the nerves, and the different interactions among them. As someone using this app, you can stick your head inside the body to see the heart. You can also take live MRI images from a person and then project them into this HoloLens environment. As Sopko explains it, “Physicians can then don the HoloLens devices and walk around and take a look at…this is where this person’s tumor is, and this is how we’re going to attack it. Once they go [into surgery], they will have already seen it before, somewhat, in a 3D image.”

Doctors can also have the patient sit with them as they show and explain all of this. Because HoloLens has a gazing pointer that you control with your head, the lead physician or surgeon can point out different things, and everyone around them can see exactly where they’re pointing. People can talk with one another during the experience. It is an immersive and interactive experience that lends itself perfectly to classroom teaching.

To illustrate the benefit of this, Sopko told the story of a physician-in-training who spent more than eight hours analyzing and reading about the ankle and its bones. He came into the Interactive Commons, used the app, “and in 10 minutes stuck his head literally in the body and looked at it and goes, ‘Now I get it.’”

Henninger elaborates with the story of another doctor who is benefitting from AR. “Mark Griswold is director for the center where we’re leading the programming for HoloLens,” she says. “He is also director of MRI research. When he looked at brain tract data—it’s sort of a three-dimensional representation of the pathways in the brain—in HoloLens in full 3D structure, he was seeing structures that he didn’t realize were there when he was looking at it in two dimensions. [And] he’s been looking at MRI data for more than 10 years!”

“We’ve also had medical students that have come in and looked and they’ve told us, ‘Just the detail of the models that we can see in this device…this would have saved us hours in the cadaver lab—just looking at the bones and the muscles of the feet.’”

It’s a powerful testament to the value of moving from a 2D to 3D (and higher) perspective.

From Body Awareness to Other Structures

This use case looked at AR applied to the body. However, we can apply it to anything structural. We can use AR to look at the structure of bridges, or at buildings from an engineering standpoint—the different components of the steel and the way they would be formed into a building. As Sopko says, “You can basically take it apart and see it in a 3D component. In the arts and sciences, it applies to telling stories and being able to develop storylines and develop characters and be able to engage them and have them grow.”

Henninger elaborated on how HoloLens is now being used for exactly that by faculty in other disciplines as a teaching tool and a way to enhance the curriculum.

When HoloLens came to our campus with this expectation of anatomy learning, once word got ‘round that this technology was possible, that we had access to it here, we got inquiries from faculty from around our campus. We have a list of more than 50 projects that we’re just waiting to embark upon. To be able to look at data, with colleagues, in digital space—that’s something that’s never been possible before.

We tend to draw pictures on a white board in our classrooms, but some learners are just challenged to formulate that 3D picture in their mind. And when we can present it to them in three dimensions, and they can walk around it, they don’t have to do that work cognitively, you know, they can just visualize it. It simplifies the learning. It’s just an intuitive process that we don’t have to imagine; we can look.

You find applications in arts and sciences, engineering, music, dance, history, all of these. We’ve got many ideas.

As this shows, there is potential for AR in applications far beyond just a factory setting. We can use AR for collaborative review sessions, in medicine and in product design. We can use it for education at universities, museums…everywhere. And it is being used in some of these ways as well as to train medical professionals and, ultimately, save lives.

That is a good use of technology.

AR Enhances Museums

In the footer of his email, Ian Kelso, cofounder of Impossible Things, includes an Arthur C. Clarke quote: “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.” Along with his cofounder Alex Mayhew, that’s just what he does.

Kelso has a deep digital background. And Mayhew, with a background in both digital art and gaming, is a great example of the way modern technologists are working from cross-disciplinary knowledge to help AR stretch into new areas. As an indicator, he worked with Peter Gabriel for several years. He also worked with the Royal Shakespeare Company and MIT to create a game version of “The Tempest.” That’s some range.

When designing AR experiences, Kelso and Mayhew look for inspiration far outside the usual tech world. In addition, they explore an area that is beginning to emerge in a big way: AR technology as a tool to help museums reach new audiences. It is a great example of the way AR technology can bridge entertainment and education. It is also one of the more powerful use cases around the way AR can successfully be envisioned and implemented—with real results.

In July 2017, under the name “Impossible Labs,” Mayhew and Kelso opened an AR-enabled exhibit at the Art Gallery of Ontario. The exhibit is called ReBlink. Although there are articles about the exhibit, the best way to view it—without flying to Toronto to go to the museum—is via this Vimeo page. Mayhew talked about how the exhibit was born and why a museum would choose to integrate a technology like AR in the first place. What I like so much about this conversation is that as both an artist and a technologist, he speaks to AR from multiple perspectives. We began the conversation by talking about how the ReBlink project got started—an interesting look into how AR can help make “old” experiences “new” again:

Where did the origins of this project happen…at the end of a massive project, I was exhausted, and I spent a lot of time in the art museum. I found it very peaceful there. There was one painting in particular with three boys [sitting] on a wall, and they’re drawing lots, and I just used to think, “God, that is such a contrast to the way my life is with my friends—you’re just caught in traffic all the time, it gets noisy, frenetic.” I thought it would make a good [AR] concept: to reflect how we used to live in the past and how we live currently.

The Instagram generation, promotion of quick consumption of media, as well as these mobile devices accelerate through all of that content really quickly. What we wanted to do—and why we found this project to be successful when we tested—we got people to stop and pause when they didn’t stop before. We got them to truly reflect.

We’re very conscious of the way that we’re using AR. We’re not just making something come alive. We’re looking at the original paintings and the experience [while many people are just] looking at the digital intervention. Really, this comes down to the justification of the use of AR.

Yes [in our exhibit and AR app], things come alive, yes things animate, but it’s creating a lens into the past that can help us better understand the past. Also, from looking at the past, that can help us better understand the present. That’s what our particular use of AR enables us to do.

There’s a famous picture, [you’ve] probably seen it, but like three or four kids sat on a bench, using their mobile device, in front of a beautiful, big, classical painting, but they’re all looking at their cell phones [see Figure 3-1]. There is a concern that by bringing mobile phones into the gallery space you’re actually making the situation worse.


Figure 3-1. Looking AT a phone is a different experience than looking THROUGH a phone using augmented reality

What people might not be aware of regarding that picture is that those kids are actually engaging with an app that was intended to increase engagement in art galleries—they were all kind of sucked into it. But the trouble is, they’re looking down. And there’s a big difference between looking down in the gallery and looking up at the painting, even if it’s through a device. Mayhew elaborates:

We have a bit of a saying: “Always look up, never down.” You can get so into something that means you’re not still integrated with the actual reality in front of you. We believe that you need to be looking up.

Mayhew’s commentary makes sense from a humanist and an artist’s perspective—his primary interest is in getting people to pay attention to and notice art and the world. It also makes sense from a technologist’s perspective—working well-executed AR into the world of art/education/museum-going is a wise strategic move. Those areas are ripe for AR exploration and beginning to take off. The question that I asked myself, though, is why a museum, or any more traditional public institution, would choose to invest in AR as a technology. What is the draw for them? Mayhew had this to say:

They saw it as a way of refreshing their gallery experience, because some of those paintings had been hanging there for ages. Regular visitors see the same old thing. To [bring a new] painting in, just the insurance cost would be about $40,000, I believe. 40, 50—just for the insurance. That doesn’t include the shipping, doesn’t include the loan or the deeds. So AR, it’s potentially a cost-effective way of refreshing the experience. But the more important aspect is to create deeper engagement with the painting. It’s always being educational without being overly didactic.

There’s a big problem in museums: that the average time looking at a painting is 16 to 17 seconds. And that doesn’t include the people that just glance up, and casually walk past. We wanted to address that issue, and certainly with some of these paintings. They’re very old. People look at them—younger audiences—it meant nothing, they had nothing in common with [the paintings]. You know, there’s no kind of connection to how they live now.

That answer speaks to one reason AR is taking off as a part of apps like Snapchat and Facebook: demographics. It is about creating a new way to engage new audiences. And for museums and other institutions, it can take objects, ideas, artwork, or landmarks that are 100, 200, or 1,000 years old and make them new again. It can educate, subtly, and add a new layer of information, excitement, or relevance that make people see with new eyes. That is worthwhile.

I also asked Mayhew about who influenced his work in AR. Interestingly, his answer was far outside of technology—as is true of many creators. His instant response: Lewis Carroll. It speaks to the potential of digital to go beyond what the world was previously based on. Mayhew explains:

I studied film in school. Understanding the grammar and the language of film was a long process of discovery that started with the Lumière brothers, and that [visual of a] train coming into the station. It was shooting things on a proscenium stage, and taking the rules—the laws of physics, and the format, and the grammar of that other media—and learning to impose them. But, you know, it was a lot of big words. Mise-en-scene and montage were discovered as dramatic devices. And so, film developed and evolved. And it evolved into understanding the grammar that was implicit in the medium itself. I think every medium has a set of laws, or a grammar.

With AR—and this is its strength but it’s also its weakness—there’s a certain magic. “Like, wow, the magic of the technology.” It’s a little bit like when people first saw a train roll up, and people first experienced the cinema. That moment is not gonna last forever.

AR has got everything to do with technology, but the power of AR has really got very little to do with technology. It’s to do with creativity and realizing that you have the potential of [the] medium.

Mayhew and Kelso are clearly onto something. And their approach is now being recognized broadly: in October, Snapchat created a partnership with artist Jeff Koons and put out an all-call for artists to create content for its platform. This move is a smart one: not only does it map well to Snap’s demographics, but in the race to own the AR-for-art space, whoever first licenses or owns the content will be queen.

Case Study: Institute for the Future

Toshi Anders Hoo is the emerging media lab director at the Institute for the Future. Hoo is working on the edge of emerging media and currently building out the Institute for the Future’s proprietary AR and content management platform.

Also interesting is his primary field of study: embodied cognition. He is working on giving our bodies an interface into computing—one that allows new kinds of intelligence into the creation of our world. This type of leading-edge thinking is not surprising from a man who has worked closely with Ray Kurzweil. In 2007, Hoo codirected the feature-length documentary The Singularity Is Near with Kurzweil, based on Kurzweil’s best-selling book. Hoo explores not only the direct applications of emerging technologies, but also their wider implications and impact on individuals, organizations, and society at large.

When I asked him about where AR is going, Hoo referenced work being done at the Singapore Science Center and the Detroit Institute of Arts as well as ways AR is being used elsewhere to improve the museum experience. He points to museum and event enhancement as some of the premier use cases in the emerging world of consumer AR. He is right. New companies like Camera IQ are using AR for marketing at events like Coachella. As Hoo puts it:

We are working at the edge of emerging media, specifically looking for what is the communication importance of these new formats. If you see emerging media as having importance, they allow for metaphors and you see metaphor just builds lots of stories. The big question for me was—with all this work—is, what are the new metaphors that allow you to have the kinds of stories and conversations that you couldn’t have before? What are things you can talk about and express that weren’t expressible before?

That’s background on this. In terms of what we’re doing right now, the lab is focused a lot on a variety of virtual reality and augmented reality projects. Specifically in augmented reality, right now we are working on a year-long project at the Tech Museum in San Jose.

They hired us to do R&D on how to create a museum that can incorporate augmented reality technology. It’s going to be a supplementary, augmented overlay on top of—we’ll just say a “body world,” which is a class of a human…specimens. It’s essentially a physical anatomy exhibit. We created [an AR] exhibit that interweaves with that physical exhibit.

Now, in order to create [it], we built our own augmented reality operating and content management platform; it’s called Artifact. That’s a tool that we created. It’s [built on the] Tango operating platform, with content [tools that] allow us to build models and create basic virtual exhibits that you can place, using Tango phones, anywhere. It’s a pilot program, but of course this is easily expanded out to a much wider space context within the museum and anywhere outside of the museum.

Tango phones have multicamera capabilities, and they do work for this situation. But they’re not the only technology Hoo and his team could have chosen. Why did they specifically select Tango as a platform? One reason is what makes phones a better choice than a head-mounted display for many use cases: using a phone allows shared experiences. Hoo explains:

We did four months of testing around marker-based as well as head-mounted display technologies like HoloLens. We settled on Tango for a number of reasons. One, because this first project was more specifically around AR, we wanted to be able to consistently and specifically place our visualizations easily as would be similar to a standard museum exhibit. That’s why we’re leveraging the technology from Tango. Something like HoloLens can allow some of those experiences, but the HoloLens is going to be very expensive, delicate, and it’s also a solo experience. There’s a social aspect to a handheld device…It’s actually my perspective, there’s so many technical and deep end UX challenges that need to be solved for head-mounted display, whereas that’s the holy grail, we’re just [seeing] total immersion again, later. In the meantime, I think what we’re going to see is much more narrow use cases using cell phones. It’s going to be the equivalent of having a hi-def camera on your phone and a TV studio in your pocket now.

That’s just a little overview…in addition to that we’re also doing arts and education projects with AR. As soon as this platform is done with the museum, we’re not just going to use it for the museum, we’re going to also use it within the Institute for our own futurists and future scenarios and simulations.

AR Is at an Acceleration Point

AR is at an interesting moment in history. There are spaces developing that everyone should be paying attention to but that you might not be aware of yet. Hoo has been playing in some of these spaces. And as the head of emerging tech at a future-focused institute, he has a good read on what’s coming. My question for him: what are the acceleration points that will really transform what AR is, as opposed to just inching it forward? Here’s what he said:

I think we’re about to have an explosive moment. We do these forecasts at the institute, and three of the forecasts around AR [focus on] what are the acceleration points that are going to really transform what it is.

Those are, first, solo versus multiplayer experience. That’s a big part of the theory of using Tango, is to be able to have shared experience, not just be like, “I placed this visualization on my table and I can see it,” but everybody can see it through their own device. That’s going to transform the very nature of [AR]. As soon as we have multiple perspectives into these alternative worlds, they have not only more realism, but [also] more meaning and more significance. Then they can be realities that we can work from and share and collaborate on.

The second is the more easily usable content generation tools. Where we’re at, right now, the share of the clients, it’s very misleading. Especially in the VR space, it’s mostly tech demos and video games because it’s all created by computer engineers and game designers. But I think the radical shift is about to happen, where it’s going to become easy for people to place—even with AR—ambient objects, places, information. And that’s going to change the nature of it all from being in its demo, emergent state to [an] explosion in these cases—for teachers to doctors to surveyors to kids hanging out and just using it socially.

The places to watch for augmented reality are museums, stores—particularly big box stores—[their] retail experiences, and theme parks. Obviously, we can place AR anywhere, but those are going to be three spots where they’re going to be great test beds. I think we’re going to see a lot of innovation happening. They’re large enough that there’s discoverability, but they’re contained enough that it’s not endless. There’s also very specific information overlays that could be very valuable in all those situations for different reasons, and there’s very specific user journeys. You want to have it open ended, but there’s measurable goals in those scenarios. I think those are the three spots. That’s why we’re working on this museum project. We’re actually in talks with a park right now.

A lot of our institute clients are big retail.

In terms of themes [for AR], it’s not just doing what we did before, just with a new technology, but actually doing new things that we haven’t been able to do before. Being able to give our body an interface into our computing experience, for communication, for design, for simulation, and for creating or testing our models, inhabiting the models that we created, is going to basically allow whole other sides of our intelligences into the creation of our world, both physical and sociological, political.

The [third and final] forecast is breaking out of the siloed content contribution model. The ultimate goal for all this is that everybody—a variety of people and organizations—can create virtual worlds.

And these are all going to be different portals. You could look at each of these virtual worlds through a variety of different portals, whether it be on your phone, or a full immersion experience.

AR Adds Meaning in Retail

Retail is one area in which AR has great potential and is beginning to find its groove. In the consumer-facing realm of retail, companies have found a few ideal use cases for AR: testing, sampling, and decorating. The ultimate proof is that, in November 2017, Amazon began rolling out AR to allow shoppers to envision how furniture, home décor, electronics, and other purchases will look in their homes. It’s still early and needs refinement, but it is a large-scale deployment of the technology. And Amazon is far from alone:

  • Since 2013, IKEA has been letting customers test-drive furniture with an app—which just got a refresh and relaunch using Apple’s ARKit.

  • Taobao.com (the Chinese equivalent of Amazon.com) has AR cosmetics testing baked into its app.

  • Japanese beauty retailer Shiseido uses digital “cosmetic mirrors” in a similar way, to allow customers to sample makeup as well as receive beauty advice, product recommendations, and shopping lists.

  • The National Football League’s Baltimore Ravens have just announced a similar sort of magic mirror “makeup” app—for fans to wear Snapchat-style facepaint through the Ravens app.

  • Mothercare has an interesting AR app for envisioning what maternity and baby clothes actually look like on live bodies.

  • In the behind-the-scenes retail world, there are some good non-gimmicky examples from UK grocery chain Tesco and Chinese online grocer Yihaodian.

My assessment: AR is improving in the consumer-facing retail space. But now is the perfect time for AR in the backend of retail—in enterprise, sales, and design.

A CB Insights review from October 2016 discusses early-stage AR/VR startups that are building platforms for retailers “to better show off product or aid in-store design.”

Augment is one of the companies featured there. The company is based in Paris, and its founder and CEO is Jean-François Chianetta. Although the company is rarely reported on in the US, Augment has one of the biggest and most valuable corporate augmented reality rollouts in the world. What does that look like? While many AR players are still pilot testing, Augment created a partnership with Coca-Cola that has now rolled out across dozens of countries and thousands of salespeople. In addition, Chianetta actually wrote the book on industrial augmented reality—or one of them. How did he get into AR?

Originally from Belgium, Chianetta is a mechanical engineer with formal training in cars and planes who became interested in 3D modeling and simulation. His first job was with a US company in Paris doing simulation for microsystems. In 2010, he bought his first Android smartphone and began playing with augmented reality to see what was doable as a personal project. That was eight years ago. As of our interview, three million people have downloaded the Augment app, and the company’s clients include corporations from Nestlé Purina and Nokia Networks to T-fal and Coca-Cola.

Here’s how Chianetta explains it:

I did the first fundraising in 2013 for $220,000. After that, I raised a round for a million. Then, I got Salesforce Ventures as an investor for $3 million. What was first augmented reality in a 3D environment became Augment the way it is today. As we got more users on the platform that helped us understand what augmented reality could do for different industries. Augment the app has users in 200 countries. Three million people have downloaded the app. Since we started, we have 700 or 800 companies who have been users. It goes from surgeons using it during surgeries, to architects, big construction companies, coffin manufacturers, many of the big companies in the world.

The Coca-Cola case is interesting because it is a very big scale deployment that’s been going on for two years now. It’s thousands of sales people and tens of countries to show the fridges in the environment [where they’ll be placed]. When you go to see a customer at a bar or restaurant, they can’t always tell if the refrigerator will fit. This helps them convince the owner of the store or small shop. It’s consistent. When they see it in augmented reality it fits what they expect to get. In all those dimensions, the number of visits are reduced, because they have to see it only once. Then they have a photo of where the fridge should be, and there are fewer returns because they know what they’ll get. And it’s growing. We deploy in more and more countries and more and more salespeople—soon it will be 10,000.
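The value Chianetta describes—knowing before a single shipment whether a cooler fits the space—boils down to a dimension check that AR makes visual and trustworthy. A minimal sketch of that underlying check (the model names and dimensions here are illustrative assumptions, not Coca-Cola's actual catalog):

```python
# Check whether a cooler model fits a measured floor space, allowing a
# clearance margin (door swing, ventilation). Dimensions in centimeters.
# The catalog below is hypothetical, for illustration only.

COOLERS = {
    "single-door": {"width": 60, "depth": 65, "height": 185},
    "double-door": {"width": 112, "depth": 70, "height": 199},
}

def fits(model: str, space_w: float, space_d: float, space_h: float,
         clearance: float = 5.0) -> bool:
    """Return True if the cooler plus clearance fits the measured space."""
    c = COOLERS[model]
    return (c["width"] + clearance <= space_w
            and c["depth"] + clearance <= space_d
            and c["height"] + clearance <= space_h)

# A salesperson measures a 100 x 80 x 210 cm niche in a shop:
print(fits("single-door", 100, 80, 210))  # True
print(fits("double-door", 100, 80, 210))  # False: 112 + 5 > 100
```

An AR app does this same comparison implicitly—by rendering the cooler at true scale in the camera view—which is why, as Chianetta says, fewer visits and fewer returns follow: the fit question is answered on the first visit.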

The interesting thing here, to my mind, is scale and Augment’s ability to achieve it. Every industry is actively exploring AR. The key, as Jitjen Dajee from Deloitte said in his talk at the 2017 Augmented World Expo, is in the large-scale rollout. In this nine-minute video, he explains very well how to think through an AR roadmap from a business strategy perspective. In the talk, he says that very few companies have gone beyond the rollout phase into large-scale AR implementation. It is true. But there has been a huge breakthrough from a technical perspective that might change that very soon. Although it hasn’t been reported on yet, someone has actually found a way to bake AR capabilities into one of the largest enterprise manufacturing systems in the world—as standard operating procedure. Chapter 4 has an interview with Gaia Dempsey that speaks to that. It’s a HUGE step forward in making large-scale rollout both possible and relatively easy.

Case Study: AR Can Help Reinvent Fashion and Soft Goods

In the same way that Mark Zuckerberg started Facebook in a dorm room, some powerful new AR applications and developments are underway in schools and garages around the world. I found one such creator who has found a remarkable use case, one that has the potential to change the entire fashion industry by streamlining the way garments are made and eliminating rounds of expensive back-and-forth during creation of initial garment prototypes.

Connor Davies is a recent graduate of MassArt, part of a program focused on making arts and design accessible. He used AR as part of his senior thesis in industrial design innovation to look at service design. What he created from it is brilliant.

While his work is still a concept, it is perfectly designed from a user-experience standpoint, and the thinking behind it has the potential to entirely change several industries. It’s worth the read not only for the solution he created, but also for his thought process when building. He took an ethnographic approach, starting from user needs, to learn what to create.

Because AR can be difficult to show unless you are kitted out with the AR gear or an app, he made a video that shows (thanks to the magic of animation) how what he created works.1

Anyone in soft goods or fashion would do well to listen to what he has created—and either partner with him or fund him. I know fashion. And this is spot on. Here’s how he described the genesis of the project and his discovery methods. It is interesting to look at how he approached the discovery and learning phase of a process AR could significantly improve:

One day I went through the fashion department at school and I noticed they had a sewing machine from 1991. They also had one from 2014 and they were exactly the same, and I was like, “What the heck? Why don’t you guys have, like, anything new?” It’s like, “Oh, it’s just what we have. This is new.” I started talking to them and they’re, like, “Nothing’s changed since like the 1930s. Everything’s just been all the same.” It seemed stupid. People spending way too much time faffing around with old technology.

I had a good contact [in fashion]. She walked me through the process of making a garment. There are always a bunch of steps before you create something. You have to make the pattern. You have to drape it, and then you cut it. Then you put it down on the table, you put paper on it, then you put more fabric on it and then you fold it and cut it out again, sew it all together. Then you can see if it fits. This is [a] really long process, and there’s nothing to expedite that right now.

I watched how she was doing her work and then from that I started to think, “Oh crap, there’s like a lot of things wrong with this.” Seams are a huge pain in the butt. Basically, you have to measure them out, and if you want to change one it’s a huge waste of time. And a lot of iteration in fashion happens. People aren’t easily able to just mess around with fabric without spending a huge amount of time on it like someone who does 3D modeling can do. They can’t just go on there for an hour and just like play with it [in minutes].

I got a bunch of video reference on how you could make this very simple. Originally [I envisioned the solution] as virtual, but I found augmented lent itself better to this because a lot of people using it are not gamers, they’re not people who are used to a lot of technology or someone who isn’t really exposed to that. Augmented lent itself more because it felt more natural; you could actually see where you are in your room, this crazy like, copy the landscape around you, like set at home.

Then, I started to also talk to her about large-scale production. She works for TJX, TJ Maxx and HomeGoods and all those big companies. Then, she also put me in to some contacts with her friend and her boss—the person who runs children’s fashions and then also [the] costume department at TJX. I was talking to them about, like, how this actually works on a big scale. What they said is they create a 2D project—they make a 2D sketch, and then they ship it off to China, and then China would have to try to figure out what they did, and then build it. And then they send it back and then mail some money and then they send it back, and then they get the product. Just circling little pieces—it would go back and forth, like, three or four times for each one of these [garments or pieces]. She was basically saying it was always expedited, it was always overnight because they needed, like, really fast turnaround.

She was talking about the children’s line—they have 200 garments every season. And each one of those costs approximately $150 each way depending on the size. And then the costumes [in] huge boxes had to go overnight to China. Just on the shipping alone I was, like, “OK, this is months in prototyping fashion. How do I make that faster and less expensive from shipping cost alone?” It makes a huge dent, let alone the time spent actually designing a thing, and the time spent fixing errors for more complicated stuff. Sometimes, the big costumes might take, like, nine trips back and forth, and that will be $400 a trip for a big costume. That’s just one thing out of hundreds.

That is like one side of it; the other side is looking into other features they might want. Originally, [with the AR app I designed] it was, “You grab a t-shirt, you pull out the sleeves, and you get, like, a long sleeve t-shirt.” I’m like, “OK, that makes sense. That’s simple enough.” Then, it was turned into more of [an] ability to create the patterns. Not only can you sculpt and put on your sleeves and your buttons and your pockets, but you can also turn that into a pattern that you can use. Then, that leads to its own set of tooltips and you can send that out to, like, local crafters or makers or you can send it to Spoonflower and they can make it for you. It’s kind of nice how it just kind of kept spreading out into more little intricacies. It got bigger and bigger, which I am excited about.

Everyone I showed it to, they were, like, “Ooh, yeah. Please make this. When is this coming out?” I showed it to professors, one of them owns a pattern drafting company. She was, like, “Oh, this is amazing. This might put me out of business, but I love it so much.” I also talked to Erin from Project Runway. She’s the winner of it, and she was, like, “Oh, please do this. Please. That will be amazing; you just read my mind and all my little struggles.” It’s being, like, confirmed by big names.

I’m not a software guy, game designer, software engineer, so it’s going to be tricky.

But if I get something small at least going, I can build up excitement about it. A lot of the current VR things, they don’t do anything. There’s nothing changing other than a better way to convey information. They’re not tools.

This is an example of applying AR to a category that hasn’t yet been lifted by technology. And it is based on really listening to and observing how people work. Both of those things make it such a valuable story to hear. At its best, AR is a tool to help humans—and the broader the set of humans the better. Tools and concepts (like this one) that solve real problems around shipping costs, time savings, and ease of creation and production are deeply valuable to the world. And AR development that starts from needs and problems discovered through watching how people work is wise. Whoever chooses to build on Davies’ work in AR can make sewing and fashion production easier—and make a lot of money.

1 The Vimeo link is private, and the password is ASTRAPIA.
