You have made it to the final chapter. As we bring this book to a close, we will open the doors to what comes next. This means looking at the next steps for your design process as well as the next steps for the XR industry. Here is what we will be covering:
WHERE TO GO FROM HERE Now that you have walked through the design process, the next step is to continue to learn and expand your skills.
BREAKING OUT OF RECTANGLES With XR changing display technology, it is time to reimagine the future of digital computing.
WHY WE NEED XR RIGHT NOW With so many devices and so many screens calling for our attention and focus, we need to bring it all together into one space. This will allow us to look up and re-engage with the world around us.
Now that you have walked through each step of the design process, what is the next step? Create the future that you want to live in. Envision a way that XR can improve or enhance tasks throughout your daily life, and then work through the process of designing it. This can be small; remember, baby steps. Each concept will lead to another that will lead to another, and soon you will be walking through the creation and exploration of your ideas. The best ideas come from your own personal experiences, so use those to inspire you to start and to continue designing for XR.
Knowing that we have not yet reached the mass-adoption stage of XR should not serve as a reason to hold off on designing and creating your ideas. If anything, it is a reason to get started now. Although the perfect set of AR glasses may not be on the market yet, you can use many other technologies in the meantime to get ahead of the curve for when something better is available.
At this point it isn’t a matter of if it happens, but rather when. If you wait around for “the perfect device” to appear, then it will be too late—so it is better to start now. As you read through this book, you learned about different areas that you could focus on, and I encourage you to do so throughout your career:
Work on designing custom 3D models.
Learn and explore different ways to experience XR tech.
Experiment with color and type on various devices.
Observe the way people interact with 3D objects in physical space to inspire you.
Listen to the way sound changes your experiences.
Learn HTML and JavaScript.
Working on any and all of these skills will help you take the first steps toward becoming a stronger and more marketable immersive designer.
As you look at the future of XR, the best way to predict what will come next doesn’t involve any prediction at all. Instead, you can look at the clues already in front of you. Even just looking at Apple, you could prototype an experience that uses their current technology, perhaps by combining multiple products: head tracking from the AirPods (to create spatial audio), haptic feedback and gesture recognition from the Apple Watch, depth sensing from the LiDAR Scanner on the iPhone 12 Pro, and the mobile AR platform to tie it all into a full sensory experience. Each of these individual capabilities provides a cue to what is coming next: a device in which they all work together seamlessly to produce something exceptional. So, instead of waiting for the device that utilizes them all, you can start designing experiences that tap into the current potential.
This example involves one company’s technology, but you can choose others as use cases. By looking at a company’s current product lineup, you can design experiences based on the combination of products it already offers. This will put you one step ahead of the next “new” product release.
If you are looking for a way to get more XR work into your portfolio, you don’t have to wait for the perfect client to come knocking on your door. In fact, they won’t come. Unless you are highly recommended for a job through a strong network connection, it is unlikely that you will get hired to do a specific kind of work that isn’t in your portfolio. Hiring agencies and creative directors want to see examples that showcase the kind of work you are capable of and that show how you work. You may be thinking, “How can I get projects of this kind into my portfolio if no one will hire me?” Here’s the answer: Create them for yourself! You are actually the perfect client, because you will push yourself more than anyone else. You just have to make a habit of creating for yourself and stay committed to it.
Create a passion project, where you can explore, expand, and highlight your skills in 3D, interaction design, and AR, VR, or both. If you want to be hired to do immersive design, then you will need to showcase your understanding of designing for 3D space. What better way to practice than to do it every day? You can start by creating a 30-day challenge. For a month, spend 10 to 15 minutes each day creating something in this space. You could begin by building custom 3D models and then play with materials and lights.
For this to work, however, you have to do it every day without exception. Keep it short so it won’t feel as daunting. The reason I suggest 30 days is that it takes about 30 days to form a habit. I will warn you that around day 10 or so, the newness and excitement will start to dip, and you will regret that you ever set out on this challenge. Prepare for this. Save an extra exciting project for that day, something you have been looking forward to. After you get over that dip, you will re-energize and continue to climb. As your momentum grows, you will start to see results in your skills and even in the quality of the work. This should help propel you to the finish line.
After 30 days, you may decide to continue or create a new passion project. With each project you complete, you can add it to your portfolio site. These passion projects can help land you the next job, as they will showcase your skills, your vision, and your work ethic.
As you look to get better and better at 3D and immersive design, you will find you can be inspired by every environment you are in. Remember, you are an expert in the third dimension because you live and interact within that space every day. It is so normal to you that you may not stop to pay attention to the simple, everyday tasks and products you use. If you take the time to bring your awareness to your interactions in all the different spaces throughout the day, you can find new inspiration in the ordinary.
Within a kitchen, for example, you can consider:
How you turn appliances on
How you open the refrigerator
How you know how to open packages
How you hold utensils
How you know how to bring food accurately to your mouth
What angle to tip a cup to drink without splashing all of it onto your face
All of these tasks hold inspiration for designing interactions within the XR space. From designing UI, affordances, and gestures to creating sensory experiences, you have hundreds of answers to your design questions already around you. You just have to pay attention to them. Pulling inspiration from objects and interactions in physical spaces will make your designs more natural and easier for the user to learn and understand, because they are already familiar. As the artist Andy Warhol said, “You need to let the little things that would ordinarily bore you suddenly thrill you.” To do this, you need to pay attention to those little things. Change your perspective on them. View things from different angles, watch other people interact with objects, and even try interacting with objects in a way you never have before. FIGURE 14.1 shows how AR can enhance the cooking experience by utilizing the counter and the space above the pan to provide information in context. In addition, it provides step-by-step instructions within the UI view space that can be seen anywhere you look.
As you explore different ways to expand your immersive portfolio and your perception of interactions in the physical world, it is worth considering what can make your work stand out. The most obvious solution for this, of course, would be to create great visual work that is immersive. However, there is another approach that might leave an even stronger, lasting impression. Consider that even the most incredible AR experience with beautiful 3D graphics, rich and brilliant colors, and a seamless user experience is only great if people discover it. It is really important that you highlight how you will motivate users to enter your experience. In fact, the trigger that brings a user into an AR experience will leave a lasting impression—if you spend extra time designing it into your concept.
Consider a mother shopping in a store with a child. The child doesn’t want to be trapped inside the shopping cart and would rather explore, but the mother has a list of items she needs to buy. Each person has a different agenda. However, when they walk in, there is a sign with an image the child recognizes. The child points it out to the mother, and the mother glances at the sign and reads about a game of hide and seek that can occupy her child as she shops.
Somewhere in the store is the child’s favorite character (let’s call him Spot), and they need to find him. There are clues hidden throughout the store as well, so they need to pay attention. With a scan of the QR code (short for “Quick Response”; FIGURE 14.2) on the sign, the mother launches an AR experience. The child can then use the phone to scan the store in search of the hidden Spot, all while the mother gets her shopping done. Clues can be found throughout the store in both AR and printed form, each one offering an educational fact, perhaps. The search ends when the child spots Spot at the end of an aisle. As they approach, the child sees Spot hide behind a product. As the child searches, the phone recognizes the front of one product as an image target, which launches a celebratory animation and video: Spot has been found! The game is over, the child and mother can check out, and the child can be lifted happily (well, that is the goal anyway) out of the cart.
The real heroes of this story are the triggers that launched the various AR experiences throughout the store. The initial sign, which caught the attention of the child, had a QR code on it. There could be additional scannable signs throughout the store, in case the first one was overlooked. Then an image target on a specific product triggered the final animation, letting the child know they won the game. These entry points need as much thought and design as every other part of the experience; without them, the experience would never have been discovered.
For AR to become widely adopted, we need to cast a wider net of discovery options for potential participants. In our post-pandemic world, the concept of touchless has become an essential part of everyday life. This brought more attention to QR codes, and even to scannable codes on our phones for paying one another, without the risk of spreading germs. QR codes have become a common way to pull information up on your own device, such as a menu at a restaurant or a trail map at a park. This means you only have to touch your own device and nothing else. This social adaptation is an interesting one that can be leveraged to increase the discovery and launch of AR experiences within physical spaces.
Even young children, such as my 5-year-old, have learned to identify a QR code on a LEGO brochure: just scan it to see the instructions in full 3D, step by step. Aware of this interactivity, he points out QR codes everywhere now, just from that one experience. Although these codes have been around since 1994, they have been slow to be adopted. They were originally created within the automobile industry, first with Toyota, to keep track of car parts. QR codes are more powerful than their predecessor, the barcode, with the ability to hold more than 100 times as much data. QR codes have proven themselves to be a powerful trigger for AR and will likely continue to be used more prominently even in the post-pandemic world. Use this idea of discovery to explore the relationship between users in different spaces and ways to connect with the digital world.
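If you want to experiment with this kind of trigger yourself, generating a QR code for an experience takes only a few lines. Below is a minimal sketch, assuming the open-source qrcode npm package loaded through a bundler or import map; the URL, element ID, and options are placeholders you would swap for your own project.

```html
<!-- A minimal sketch: turning the URL of a WebAR experience into a scannable QR code.
     Assumes the open-source "qrcode" npm package (via a bundler or import map);
     the URL and element ID below are placeholders. -->
<canvas id="ar-trigger"></canvas>
<script type="module">
  import QRCode from 'qrcode';

  // Render the code into the canvas so it can be printed on signage or packaging.
  QRCode.toCanvas(
    document.getElementById('ar-trigger'),
    'https://example.com/spot-hide-and-seek',   // placeholder link to your AR experience
    { width: 256, errorCorrectionLevel: 'M' },  // medium error correction survives print wear
    (err) => { if (err) console.error(err); }
  );
</script>
```

Printed at a reasonable size, a code like this can sit on a sign, a shelf tag, or a product itself, giving people a one-tap path from the physical space into the digital one.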
AR is the future of the display. The advancement of wearables will change display technology as we know it. Instead of being limited to looking at rectangles that confine information, we can break out and engage with our information anywhere we look. Imagine:
You’re watching a movie sitting upright in a chair.
As you move to a different room, you are still watching that same movie.
When you lie down in bed, you see the same movie, this time on the ceiling.
You change your position to be more comfortable, and you can still view the movie without interruption.
Even more important than all of this, we can engage with these visuals in context. We don’t have to move from one screen to another; we can see the information right where we need it. As you cook in the kitchen, you can see the recipe on your counter as you prepare the meal. You can select the instructions for any step and even watch a video that demonstrates how to do it, in the same view as your food. The world is the new screen, and our environments are the new interface.
For us to break out of traditional rectangular screens, we need to break through current barriers to achieve:
Growth in the adoption of XR
Advancement in wearable technology
A streamlined release process that makes experiences easy to share
As we just discussed with triggers, the discovery of AR experiences is a necessary step to increase awareness of AR, and of XR as a whole. The more places people see experiences being shared and used, the more people will start to adopt it as a viable communication medium. It has started creeping into our lives through broadcast television, in both sports and family-focused programming. In 2020, Verizon and the Macy’s Thanksgiving Parade teamed up to enhance the viewing of the parade at home with AR balloons, in addition to the physical ones. This added a bit of extra magic to the tradition of watching the parade. AR enhancement has also become more prominent on social media and the web.
As live streaming continues to grow, more use cases for AR have arisen. AR has already been used across a variety of sporting events, but beyond sports, this form of presenting information has continued to grow and expand. With work-from-home and social distancing requirements, the need for video conferencing has grown immensely. With this came the use of some fun AR filters, including animated stickers and virtual backgrounds that can conceal a messy house and present a vacation paradise.
In the VR space, hangouts and even virtual office spaces have been making their debut as a new way to connect people, regardless of where they are physically. An example of this is VRChat, where you can interact with friends in your own customizable 3D digital form. Even better, you can access these hangouts via your web browser if you don’t own a VR headset.
Mobile is the future of XR and should be the main focus at this moment, because it currently exists and is already widely adopted. This gets especially exciting when you can combine the processing power of a computer with the mobility of AR, without wires, so that it can be brought anywhere you go. In the future, 5G should be able to bring even more processing power to mobile devices. We look forward to the perfect blend of high-resolution images with a 360° field of view. In the way that 3G stepped up the use of mobile video and 4G enabled the rise of social media and mobile apps, many see 5G as the way to usher AR and VR into the mainstream. FIGURE 14.3 shows how mobile AR can be used to explore your favorite art in your home, which became even more relevant during the COVID pandemic when traditional museum experiences were not possible.
Some of the earliest use cases of XR have been within the social space, so this should continue to help grow acceptance and adoption. Examples include adding filters and lenses to our selfies and to images of friends and colleagues. It also has been popular in gaming, specifically social gaming, where you can play with others in immersive digital spaces. As mobile AR has become more advanced, multiple users on separate devices can all view the same digital scene. This can make tabletop games more popular and opens up possibilities for where the genre can grow. Tilt Five is an example of how AR gaming can become a social event (FIGURE 14.4).
Having the ability to embed 3D scenes and AR experiences into a website, without the need for a stand-alone app, has made the spatial web a popular option. The web makes the content much more approachable and reduces barriers to discovery. Users may even enter AR by accident at times: They may scan a trigger and launch an immersive experience without realizing what is happening.
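To give a sense of how low that barrier can be, the sketch below shows roughly what entering a browser-based AR session looks like with the WebXR Device API. The element IDs are hypothetical, the rendering setup is omitted, and support still varies by browser and device.

```html
<!-- A rough sketch of entering a browser-based AR session with the WebXR Device API.
     The element IDs are hypothetical; rendering setup (WebGL layer, frame loop) is omitted. -->
<button id="enter-ar">View in your space</button>
<div id="ar-overlay"></div>
<script>
  async function startAR() {
    // Feature-detect first; not every browser or device supports immersive AR yet.
    if (!navigator.xr || !(await navigator.xr.isSessionSupported('immersive-ar'))) {
      alert('Immersive AR is not supported on this device or browser.');
      return;
    }
    const session = await navigator.xr.requestSession('immersive-ar', {
      optionalFeatures: ['hit-test', 'dom-overlay'],
      domOverlay: { root: document.getElementById('ar-overlay') }
    });
    session.addEventListener('end', () => console.log('AR session ended'));
    // From here you would create an XRWebGLLayer, request a reference space,
    // and start the render loop with session.requestAnimationFrame(...).
  }
  document.getElementById('enter-ar').addEventListener('click', startAR);
</script>
```

A tap on a button (or a scanned trigger that opens a page like this) is all it takes to move someone from a flat web page into their own space.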
As browsers continue to grow their support for this kind of experience, there will be more use cases for the spatial web platform. Advances such as Mozilla’s WebXR Viewer app and Google’s Model-Viewer show a promising future for mobile AR and a growing comfort with the spatial web.
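As one illustration, Google’s Model-Viewer lets you embed an AR-ready 3D model with a single HTML tag; on supported phones the same tag offers a button to view the model in your own space. The sketch below is a minimal example; the CDN URL is one common way to load the component, and the model file and alt text are placeholders for your own asset.

```html
<!-- A minimal sketch of embedding an AR-ready 3D model with the <model-viewer> web component.
     The .glb file name and alt text are placeholders. -->
<script type="module"
        src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

<model-viewer src="my-sculpture.glb"
              alt="A 3D sculpture you can place in your room"
              ar
              ar-modes="webxr scene-viewer quick-look"
              camera-controls
              auto-rotate>
</model-viewer>
```

Because it is just HTML, this kind of embed can live on a portfolio site, a product page, or a museum site, which is exactly why the spatial web is such an approachable entry point.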
Progress is slow, specifically for AR glasses, with many failed attempts and the buyouts of smaller companies by larger tech giants. There are many challenges to creating a solid product in this space. As such, the release of a really viable product is probably still a few years out. What are we waiting for?
Glasses that are sleek and fashionable and that don’t make you look like you are a character in a sci-fi movie. A balance of style and function is still needed.
Designs that work within a thermal range, so they can be worn comfortably on the face. The device needs to stay cool. To do that, it needs to offload processing to another platform or device, such as the AR Cloud or a smartphone.
Improved field of view and display quality that allows for a truly immersive experience.
Adequate battery life—without adding heat.
Comfort.
So, essentially, we are waiting for advancements and improvements in processors, displays, optics, sensors, and the overall fit for the user. The displays invented in the past have proven not to work for smartglasses. Exploration and innovation into a new kind of display is needed to make this work. Early attempts relied on refined versions of existing displays, and many of them failed. As of this writing, several companies have been working to stay at the forefront. For example, Vuzix (FIGURE 14.5) is working on a micro-LED display that is expected to work both indoors and outdoors. Others have partnered with companies that already know a lot about producing eyewear, such as Facebook partnering with Ray-Ban. These kinds of collaborations are good signs of people sharing what they know and do well to help make strides in this space.
While there are more platforms for producing XR content than ever before, it still requires significant resources to release and share an experience. This is an issue for many content creators, who often face unwanted restrictions on the final release of an experience due to budget or technology constraints.
As we have explored throughout the pages of this book, there are many different choices—programs, software, and hardware—to consider and test before even starting to create an experience. Which direction you choose may depend mostly on what you have access to or what the client wants.
For VR, the journey is a bit clearer, with Unity Pro and Unreal Engine taking the lead as the go-to software for builds that can be pushed to headsets. However, sharing VR apps still requires jumping through many hoops. Each headset manufacturer has its own process and requirements for how to submit an app for consideration. Oculus, for example, requires that you follow their content and proper-use-of-data guidelines, pass a virtual reality check to meet their performance requirements, and then pass an asset check to make sure all images meet their guidelines.
The release of the app depends on passing these checks. Many companies require a developer license to publish applications, which comes with a fee, although Oculus App Lab does provide access to early versions of an experience for user testers without having to go through the full store certification.
While all these guidelines and requirements provide quality control for the user, they do make the release process more challenging for those of us creating the experiences. The complexity makes it harder for smaller independent teams to successfully launch an experience.
For AR, there are many different software options, many of which we have discussed, but they also come with some added challenges. Adobe has stepped forward with the release of Adobe Aero as a design-first, no-code option for designers. The approachability of the process is great. Unfortunately, at this point, the functionality is limited and often glitchy.
The mobile AR space holds many more development options, but you have to choose between a stand-alone app and a web experience. This becomes a balance between control of the design and functionality versus the cost and time to create the experience.
WebAR is the faster and more cost-effective option in terms of development time. However, the available options at this time come with monthly fees and high commercial license fees. You also need to continue paying those monthly fees after release in order to keep the experience live, which means months of ongoing costs just to keep the experience available to users. Other platforms charge per user and per amount of data used; this could keep costs down if you have a smaller audience, but it makes the cost hard to budget for. Unexpected costs can be a challenge for a smaller company with a tighter budget.
As other technologies have shown, these early challenges will be resolved as time passes and more advancements are made. The challenges of today will become history in the near future. As more and more people adopt XR technology, more resources will be put into resolving these challenges in order to create the innovations that will shape spatial computing for years to come. We are at the beginning of an exciting time. It isn’t perfect, but that shouldn’t stop you from tapping into the potential that exists now.
As you observe human behavior walking down a city street, working in an office, or even interacting in a home, you will likely notice a consistent theme: computers. They may not all look like computers; they may be disguised as sensors, screens, buttons, and even chips. But regardless of their form, they are everywhere. There is a different device for every different task.
This Internet of Things has created a network of devices that invisibly connects everything around us: from our doorbells and our cars, to our thermostats and baby monitors (or doggie cameras), to our smart speakers, and even to our lights. We carry them with us in our bags, on our wrists, and in our hands to constantly stay connected to all the other things and people around us. They also connect us to our social networks.
We’ve just listed all of these devices, and we haven’t even gotten to our actual desktop computers and laptops. There is a benefit to having all these devices to help us with so many different tasks throughout our day. However, because they are all separate, each of these “computers” fights for our attention and pulls our focus from one to the next to the next. As I sit writing at my computer, I receive constant notifications on my phone and my watch, each one shifting my focus. It is a wonder we can get anything done with so many “things” alerting us elsewhere.
As the chimes, dings, pings, and vibrations become louder than the people next to us, we lose our ability to stay present. This is why it is often referred to as interruptive technology. Sure, you can turn notifications off, but how many of us do? And for how long? Many design decisions on these devices are made to alert the user and pull their active focus to that designer’s product. However, the more devices you have, the more notifications you receive, and the more screens you have to check.
This is why we need XR right now. We need to re-evaluate all the different screens and sensors in our lives and bring them together so they can be organized and controlled in one space, on one hub.
Technology may need to communicate with us, but it doesn’t need to be so loud. With XR, you can still receive information while keeping your attention in the same space. You can see a notification come in while staying on task, quickly determining whether it needs your attention or can wait. Instead of moving from one screen or device to another, all the devices could be connected into one pair of glasses, offering a single area of focus, often hands-free. You wouldn’t need to stop what you were doing to check on something else; it could all happen seamlessly in one space and one view. Notifications could sit off to the side, where they could be easily seen or ignored, while you went about your day, more present and aware of the people and places all around you.
This is the idea behind calm technology. The constant notifications and distractions have left us yearning for a break from our technology habits and for a sense of peace. Technology is intended to assist and help solve problems, not create more. The combination of our need for calmer technology and the potential of XR will together shape the future of communication technology.
Imagine all of your notifications, emails, messages, settings, and preferences in one place and available wherever you go, without taking you out of your environment or the task at hand (FIGURE 14.6). Designing calm technology means creating the experience with thoughtfulness toward the humans who use it. Doing this right takes time.
It is easy, especially in new territories of the industry such as XR, to hurry to be the first to do something new or innovative, as we discussed before. However, going too fast and not following the full design process all the way through testing to see how the experience will actually work for users often results in a bad experience. Think first, design second. It is worth taking more time if that means you get it right. If a device makes the user anxious, then who is going to want to use it? If the device restores a sense of organization, then it could become an essential tool in a user’s life.
We need AR now more than ever. People are so focused on all these distracting devices that we are constantly looking down at them. I watch people crossing the road reading a message instead of paying attention to the oncoming traffic. AR, and the potential future of AR glasses, will allow us to look up and re-engage with the world around us. Say a message really is so important that it needs to be read right away. If you must read it while crossing the street, with smartglasses you could at least do it safely, because you would be able to look up and see beyond the glowing rectangle held in your hand.
What about our posture and need for ergonomic comfort? XR will evolve to allow us to engage with technology with better spine positioning. Let the content adjust to our needs, instead of putting our needs second to those of our devices. Even when you feel your neck strain, you may continue to endure it because that is the only way to see your computer screen. Technology should work to help us, not the other way around. Let’s design experiences that put the comfort of the user first, where they can truly personalize and customize their physical space before adding digital components. With all the change that wearable AR and even VR make possible, it is safe to say that things are looking up.
I spend a lot of time in meetings. As an administrator, my role includes a lot of listening, learning, understanding, and discussion over a wide range of topics. Then, as a designer, I am in meetings discussing work to refine it before it is launched and shared with the world. As a professor, I meet with a lot of students to talk about work, life, the future, you name it.
In these meetings, if you look around the room, each person is on their own device, either taking notes, looking up information about the topic at hand, working on something else entirely, or making plans for the evening. In many of these meetings, people are working through an idea and discussing how to make it better. In theory, everyone should be working together collaboratively. In reality, everyone is focused on their own screen instead of a central one. Sure, there is often a projector displaying some information, perhaps to keep the group on track or to provide alternative context. However, there is nothing collaborative about this. Projected type is often too small to read, so you might pull up the projected document on your own computer, where you can read it more easily. Also, this way you can add your own notes to the document instead of retyping the content.
When the COVID-19 pandemic hit, I watched these meetings, which used to take place in person, go virtual. That shift made it clear that some things actually work better in this format. If everyone is already looking at their own screen to be part of the video conference, then what can we do with that space to engage everyone more effectively? People began using more shared documents, voting faster using polling, and having more voices heard through the use of the chat feature. Something crazy happened: instead of the meetings running long, they actually finished early—not every meeting, but many.
In the post-pandemic world, people may realize that there is value in keeping some of our events and meetings remote (FIGURE 14.7). If there is a need for everyone to be together in one physical space, then the event format should take advantage of that physical togetherness. Imagine if everyone sat around a table with an interactive projection that allowed each person to interact with the content to organize and plan. Instead of everyone on their own screens, everyone would be focused on the same shared space, focused on one goal, and actively participating. If this physical togetherness is not needed, then the reality is the group might be better off having the meeting remotely.
Of everything discussed in the previous pages and paragraphs, the common theme that ties it all together is the ability of XR to extend our physical space (FIGURE 14.8).
How we do that, why we do that, where we do that, who we do that for, what it looks like—that is all up to you, as the designer, to decide. How exciting is it to have the power to have such an impact on the physical environments of people all around the world?
It is important to remember that humanity needs time to embrace new technology. Each new application and device brings us one step closer to mass adoption of XR. Tap into the existing knowledge and comforts that your users bring to the experience. If everything feels new, it will become too overwhelming. Just as you use what you know to build an XR experience, allow the user to do the same. We construct new knowledge by building on previous knowledge.
Though it is easy to want to move quickly in this space, remember the design process that you have learned throughout this book. Each step is there to help create a better experience for the user. Remember the people powering your designs, the users, who will rely on them to engage using multiple modalities.
If you recall from the start of this book, hearing or reading a description of the Grand Canyon isn’t the same as being there; that is true for designing these immersive experiences as well. Now you need to go do it. Experience it, fully and immersively. Pay attention to how the experience changes as you focus on each of your senses. Let those experiences guide your ideas, and remember to keep your focus centered on the many unique users, all with different needs, whom you are designing for.
The reality is that our future includes XR. That includes AR, which augments where we are right now, and VR, which brings us somewhere new. Designing these 3D immersive experiences will extend our physical space into an enhanced digital world.