We’ve made it. Here begins the final chapter in our journey together. We’ve gone from an introduction of the vertex to the creation of interactive VR scenes with dynamic UI text. What’s taken me four years to understand, you hopefully have grasped within the confines of these pages. There’s still one more feature of VR design that we have not addressed.
Learn the principles of movement in VR.
Explore how to design VR for user comfort.
Create your own VR building and environment.
Add the ability to move around a virtual scene while wearing a headset.
Write a simple sorting algorithm to locate the nearest game object.
The Elusive Nature of Movement
Movement in VR is an odd concept. Throughout the exercises we have already completed, we have dealt with vectors, which define the length and direction of a line in space. The physics engine within Unity uses vectors to calculate physics properties like force. Unity even has functions that handle the creation of simulated gravity. All of these parameters affect what we, in our real lives, understand to be movement. Yet, movement in VR can be a very tricky thing.
The reasons for movement’s complicated relationship with VR are things we explore shortly. First, though, it’ll be helpful to come to a clear understanding of the types of movement in VR.
Movement vs. Motion
Movement, as I use it in the context of this chapter, is distinct from motion. Motion, for the sake of our conversation, is the act of objects traveling between two points over time. In Chapter 8, the action performed by our interactors on interactables included motion. With our controllers, we moved interactable shapes from a table to a bin. The shapes underwent the process of motion, which itself was the product of force, mass, and gravity. Movement, however, I define as something more psychological. Movement, in my definition for the basis of the exercises to follow, is the impact motion of the virtual camera in our environment has on the user. Movement, therefore, is subjective. It requires consciousness. It is the name of the valley between expectation and reality.
Perhaps the most obvious example of movement as a mental model is the distinction between 3DoF and 6DoF VR headsets. DoF is an acronym that stands for degrees of freedom. A VR headset that offers three degrees of freedom, like the Oculus Go or the Google Daydream View, tracks a user's head movement through the three axes of rotation. In a 3DoF headset, users can rotate their head in any direction. They cannot, however, translate their position; that is, they cannot move through space. If users were to try to translate their position while wearing a 3DoF headset, by walking, for example, they would immediately become dizzy. Although their brain tells them they are moving forward, the headset cannot update its image to match the translation of their position. The 3DoF headset can only track users' rotational head movement, not the movement of their body through space. On the other hand, 6DoF VR headsets, like the Oculus Rift, the Oculus Quest, and the HTC Vive, track not only a user's three degrees of rotation, but also their three degrees of movement through space: forward, backward, and sideways.
Although some VR experts might argue that headsets with more DoF allow for a deeper sense of immersion in a VR scene, I'd argue that it is how we, as VR developers, embrace the strengths and limitations of different headsets that influences the immersive power of a VR scene.
VRTK Tools for Movement
Whether you are designing for a 3DoF or 6DoF headset, VRTK offers specific tools in its Pointers and Locomotion libraries to help streamline the creation of movement actions for users. Pointers in VRTK refer to the design elements that allow a user to visualize direction and distance through the use of a touch controller. Straight VRTK pointers operate like laser pointers, whereas curved VRTK pointers function like parabolas. Locomotion in VRTK refers to the movement, or translation, of a user’s position in virtual space. Two popular examples of locomotion in VRTK are dash locomotion and teleportation.
Before we get into the shortcuts VRTK offers developers for creating user movement in a VR scene, let’s first build up the foundation of this chapter’s exercise.
Exercise: Virtual Tour Guide
Placido Farmiga is a world-renowned contemporary artist. He’s bringing his most recent collection, which wowed at the Pompidou Center in Paris, to the Milwaukee Contemporary Art Museum in Milwaukee, Wisconsin. As tickets for the event sold out in minutes, the museum has asked Penny Powers, an associate in the member services department, to help design a VR experience that museum members who were unable to purchase tickets can use to experience Placido’s work remotely.
Create a building environment using only primitive Unity objects.
Create a C# script that moves a virtual camera through a scene.
Create a button action that controls UI events.
Creating the Environment
As always, before we begin the exercise, create a new Unity 3D project. This time, however, we won’t immediately mark our project for VR support. We won’t install XR Legacy Input Mappings or the VRTK files, just yet. First, we’ll set up a standard 3D scene to begin our understanding of movement in VR.
Step 1: Create the Foundations of a Building
In previous examples we’ve imported 3D models created by different artists to serve as environments and props in our scenes. Convincing 3D models of large spaces, however, are difficult to find and even harder to create. The few that do exist for our use cost money. As I cannot in good conscience ask you to spend any more money than you’ve already spent purchasing this book, I will instead show you how you can prototype your own 3D environment inside Unity.
Transform Values for Primitive Cube Objects as the Walls for My Museum
Transform Values | Wall | Wall_2 | Wall_3 | Wall_4 | Wall_5 | Wall_6 | Wall_7 | Wall_8 | Wall_9 |
---|---|---|---|---|---|---|---|---|---|
Position x | -10.98 | -20.24 | 2.17 | 6.19 | 24.59 | -10.98 | -10.98 | -16.12 | 11.41 |
Position y | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 |
Position z | -12.60 | -12.17 | 2.62 | -13.28 | -11.99 | -24.48 | -0.18 | -25.91 | -25.63 |
Rotation x | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Rotation y | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 90 |
Rotation z | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Scale x | 1.02 | 1 | 1.09 | 1 | 1 | 1.02 | 1.02 | 1.02 | 1.01 |
Scale y | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
Scale z | 9.71 | 28.50 | 45.83 | 12.97 | 28.15 | 3.87 | 6.40 | 9.26 | 27.36 |
Step 2: Add Materials and Textures to the Environment
To add Materials to the Floor and Wall objects, first create a new folder called Materials in your Assets folder. Create a new Material object, name it Floor, and change its Albedo color to something you prefer in the Inspector. Do the same for a Material you name Wall. After creating both materials, add each to its respective game object in the Scene Hierarchy.
In the Scene Hierarchy, select the Wall object to which you’ve added the material component. In the Inspector, expand the Material component. With the Material components attached to the Floor and Wall objects in our scene, we can manipulate the textures of the objects to give our scene a bit more believability.
First, download the assets for this exercise from the course GitHub page at http://www.apress.com/source-code.
Select the radio button next to the Metallic property beneath Main Maps on the Material component, too. Navigate to the project asset called rough-plaster-metallic.psd. Select it and close the menu. Use the Smoothness slider beneath the Metallic property to set the value for your desired smoothness.
Finally, beneath the heading Secondary Maps on the Wall’s Material component, select the radio button next to the Normal Map property. Set the secondary normal map to the asset called rough-plaster-ao.png.
After you’ve set the material and texture parameters for the building in your scene, copy the objects in the Scene Hierarchy to which you’ve added the Material and Texture components by selecting them individually and pressing Ctrl+D. Creating all additional wall objects from the original wall object that holds material and texture information will streamline the prototyping process. You will only have to set the material and texture properties once. The properties will cascade to all clones of the object.
Step 3: Add Lights to the Environment
Transform Values for the Lights in the Scene
Transform | Light | Light 1 | Light 2 | Light 3 | Light 4 | Light 5 |
---|---|---|---|---|---|---|
Position x | -2.48 | -12.28 | -15.28 | -2.48 | 15.22 | 15.22 |
Position y | 1.37 | 1.37 | 1.37 | 1.37 | 1.37 | 1.37 |
Position z | -5.70 | -19.29 | -5.29 | -19.19 | -4.89 | -18.30 |
Rotation x | 90 | 90 | 90 | 90 | 90 | 90 |
Rotation y | 0 | 0 | 0 | 0 | 0 | 0 |
Rotation z | 0 | 0 | 0 | 0 | 0 | 0 |
Scale x | 1 | 1 | 1 | 1 | 1 | 1 |
Scale y | 1 | 1 | 1 | 1 | 0.67 | 1 |
Scale z | 1 | 1 | 1 | 1 | 1 | 1 |
Step 4: Add Sculptures to the Environment
Transform Values for Statue Objects in the Scene
Transform Values | Statue 1 | Statue 2 | Statue 3 |
---|---|---|---|
Position x | -3.49 | -1.11 | -11.91 |
Position y | -0.44 | -0.44 | -0.44 |
Position z | 11.03 | -6.57 | -5.14 |
Rotation x | 0 | 27.17 | 0 |
Rotation y | 0 | 113.85 | -76.15 |
Rotation z | 0 | -17.88 | 0 |
Scale x | 1 | 1 | 1 |
Scale y | 3.25 | 3.25 | 3.25 |
Scale z | 1 | 1 | 1 |
Introduce Movement to the Scene Through Keyboard Input
At this point in the exercise, your Unity project’s Scene Hierarchy should include, at the least, a main camera; a building game object with walls, floor, ceiling, and point light child objects; and a statue object.
To introduce movement to the scene we will add two components: a CharacterController and a script. Because we want our users to feel as if they are moving through our museum, we will add the components to the Main Camera game object.
Step 1: Add a CharacterController Component to the Main Camera
What’s a CharacterController?
The CharacterController is a class that inherits from the Collider class in the Physics Module of the UnityEngine library. Unity provides the CharacterController component as a contained, easy-to-use MonoBehaviour that we can attach to an object we'd like the user to control. As the CharacterController does not contain a Rigidbody component, it is not affected by forces in the scene. Movement of the object to which it is attached, therefore, will be under total control of the user.
The default settings of the CharacterController that appear in the Inspector are fine as is for this exercise. However, even though we have attached the CharacterController to the game object that we plan to move in our scene, we have not connected it to the input we aim to capture from our users. As we have done several times in this book, to create a connection between a game object and user input, we will create a C# script.
Step 2: Create a C# Script to Move the Camera Controller
The heart of the CameraController script, which I've repurposed from the Unity documentation on the CharacterController class, lies in its Vector3 variable moveDirection. Every frame update, the script builds a Vector3 value for moveDirection from calls to the Input.GetAxis() function, which takes as its argument the string name of the horizontal or vertical axis. As we know from the Unity Input Manager, the horizontal axis corresponds to the A and D keys and the Left and Right Arrow keys. The vertical axis corresponds to the S and W keys and the Down and Up Arrow keys. The Update() function of the CameraController script multiplies the moveDirection variable by the speed parameter set in the Editor and subtracts from its y component the value of gravity, also set in the Editor. Finally, the CameraController script calls a Move() function on the CharacterController component of the game object to which it is attached.
The Move() function is a method created by Unity, available in its UnityEngine namespace, which Unity describes as "a more complex move function taking absolute movement deltas" in a comment above its signature in the CharacterController class. If Unity describes a method as "more complex," then I feel confident concluding its details are beyond the scope of this exercise. Suffice it to say that the Move() function on the CharacterController component accepts the moveDirection variable as a parameter to create user-influenced character movement in our scene.
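To ground the description above, here is a sketch of what such a CameraController might look like, adapted from the example in Unity's CharacterController documentation. Treat it as a starting point rather than a definitive listing: the speed and gravity fields correspond to the Editor-exposed parameters mentioned, and you should tune their values during play-testing.

```csharp
using UnityEngine;

// A sketch of the CameraController described above, adapted from the
// example in Unity's CharacterController documentation. The speed and
// gravity fields are the parameters exposed in the Editor.
[RequireComponent(typeof(CharacterController))]
public class CameraController : MonoBehaviour
{
    public float speed = 6.0f;
    public float gravity = 20.0f;

    private Vector3 moveDirection = Vector3.zero;
    private CharacterController controller;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        if (controller.isGrounded)
        {
            // Build a direction from the Horizontal (A/D, Left/Right)
            // and Vertical (W/S, Up/Down) axes in the Input Manager.
            moveDirection = new Vector3(Input.GetAxis("Horizontal"), 0.0f,
                                        Input.GetAxis("Vertical"));
            moveDirection *= speed;
        }

        // Subtract gravity from the y component every frame.
        moveDirection.y -= gravity * Time.deltaTime;

        // Hand the result to the CharacterController's Move() function.
        controller.Move(moveDirection * Time.deltaTime);
    }
}
```

Because the script multiplies by Time.deltaTime, movement speed stays consistent regardless of frame rate, which matters once we move the project into VR.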
Step 3: Attach the CameraController Script to the Camera Game Object
Step 4: Play-Test the Scene
While playing the scene you should be able to move the virtual camera by pressing the horizontal and vertical keyboard inputs as defined in the Input Manager. The Main Camera game object moves through the scene according to the parameters we've set for the CharacterController and CameraController components. Because our project does not yet have VR support enabled, we can only experience the movement of our camera through the screens of our monitors or laptops.
I don’t know about you, but for me, the impact of the CameraController script and the mechanics of the CharacterController component are pretty good. The motion of the camera through space reminds me of video games from the early 2000s like GoldenEye and Halo. Let’s see how things translate into VR.
Continuous Movement of a Camera in VR
Now that we’ve connected user input to the movement of a camera in our scene, let’s apply what we’ve learned in the previous section to VR.
Step 1: Set Up the Unity Project for VR and VRTK
Duplicate the scene you created in the previous steps by highlighting its name in the Project window and pressing Ctrl+D. Rename the duplicated scene Scene2. Double-click Scene2 in the Project window to open it in the Scene view and the Scene Hierarchy.
Edit ➤ Project Settings ➤ VR Supported
Window ➤ Package Manager ➤ XR Legacy Input Mapping
Clone VRTK through GitHub
UnityXRCameraRig into the Scene Hierarchy
Step 2: Adapt the CameraController Script for the HMD
Duplicate the CameraController script you created in the earlier steps. Rename the cloned script VR_Camera_Controller. Open the script in your IDE and change the name of the class from CameraController to VR_Camera_Controller. Remember, the name of the script has to match the name of its class for Unity to read it as a MonoBehaviour component. Save the script and return to Unity.
Add a CharacterController component and the VR_Camera_Controller script to the UnityXRCameraRig in the Inspector. The default settings on the components will be fine to begin with, and you can tweak them during play-testing.
Step 3: Play-Test the Scene
Press the Play button on the Unity toolbar to test the impact of the Character and Camera controllers on the UnityXRCameraRig. Because we didn’t change the body of the CameraController class after renaming it, all the keyboard functionality related to movement remains. Press the keyboard inputs associated with the horizontal and vertical axes to experience continuous movement in VR.
Movement and VR: It’s Complicated
If you play-tested the scene while wearing an HMD, then you might have noticed the challenges that arise when we try to connect VR headsets with continuous movement in the forward, backward, and sideways directions. Although some have a higher tolerance than others, most users will experience some level of discomfort when moving through a virtual scene continuously. The reasons may vary and can be mitigated by different settings for variables such as speed and gravity. In my experience, after about 10 minutes in a VR experience with poorly designed continuous movement, I develop a headache. I believe my brain’s inability to reconcile the movement I see with the motion I feel causes the discomfort.
I have had positive experiences with continuous movement in VR experiences that forgo gravity. Games that take place in space or through flight, for example, have been fine for me. However, these games also are professionally designed, so the reduced impact of continuous movement on my senses could result from higher quality 3D assets, higher frame rates, more efficient processing, or any other combination of factors. The larger takeaway is that especially for prototyped projects created by small teams, users do not gain more than they lose with continuous movement in immersive experiences.
VRTK to the Rescue!
One solution VR developers use to enable a user’s movement through a VR scene is teleportation. Yes, it is appropriate that a futuristic medium like VR would include as part of its design a futuristic mode of transportation like teleporting. However, unlike the teleportation many of us might have become familiar with through Star Trek, teleportation in VR has its own limitations.
First, teleportation, as we will use it in this exercise, can only move the viewer to a location of a scene within a prescribed radius. Sure, we can create scripts that move our users across wide swaths of space instantaneously, but I would not call that teleportation. I’d call that cutting to a new scene. Teleportation, however, translates a user’s position in the same scene.
Second, teleportation, again, as we will use it in this exercise, remains under the control of the user. There’s no Scottie who will beam our user up. In one way, we, the developer, are the user’s Scottie. However, because we cannot be in the scene with the users as they experience our design, we must create mechanisms in our program that empower the users to determine the parameters of their own teleportation.
To demonstrate how users can define the destination for a translation of their position, let’s first place a VRTK object called a Pointer in our scene.
Adding a VRTK Pointer to the Scene
If You’re Only Seeing Black...
Because we are adding touch controller interaction to our scene, it might be helpful for you to replace the UnityXRCameraRig game object with the SDK-specific CameraRig game object for your system in the Scene Hierarchy. Although I have been able to execute this exercise in its entirety using a UnityXRCameraRig game object and the VRTK Oculus touch controller prefabs, I have encountered some hiccups in the execution. The exercise runs well when I use the Oculus VR Camera Rig prefab from the Oculus Integration package, which I downloaded from the Unity Asset Store. Please feel free to follow along using whatever virtual camera rig best suits your workflow. If you run into problems with the scene in your headset then you can troubleshoot the bug by using a different virtual camera object.
Step 1: Select the VRTK Curved Pointer Prefab from the Project Window
Step 2: Connect the Curved Pointer to a Controller
If you added an SDK-specific Camera Rig game object, like the OVRCameraRig for the Oculus Rift, for example, to your Scene then be sure to add a Linked Alias Collection component to the camera and connect it to the TrackedAlias Camera Rigs Elements list as we did in a previous exercise.
The Activation Action parameter accepts a game object to which a Boolean Action component is attached. Triggering the controller button that holds the Boolean Action connected to the Activation parameter of the Curved Pointer will, of course, activate the Curved Pointer in our scene. For this exercise, let’s connect the Curved Pointer’s activation to the thumbstick of the left controller.
Mapping Madness
Again, be mindful that I am using an Oculus Rift connected to a desktop with two wireless touch controllers. The mapping I use to connect my controller buttons to actions might not apply to the controllers with your system. As always, you can find the input mapping for your device’s controllers in the XR section of the Unity online documentation at https://docs.unity3d.com/Manual/xr_input.html.
In the Hierarchy, expand the UnityXR.LeftController prefab you just dragged from the Project window. Its child objects are prefabs representing the button actions available on the controller. Because I want to connect the Activation Action of my Curved Pointer to the left thumbstick of my Oculus touch controller, I will expand the Thumbstick child object. The Thumbstick prefab has two child objects, each an otherwise empty game object with a Unity Button Action attached. For the Oculus controller, these child objects are called Touch[16] and Press[8]. If you are using the UnityXR.OpenVR controller prefab, the corresponding child objects sit beneath the Trackpad object of the OpenVR controller prefab; they, too, are called Touch[16] and Press[8]. As always, if you’d like to know more about the default input mappings between Unity and your VR system’s touch controllers, refer to the Unity XR Input resources in the XR section of the online Unity documentation.
To create a selection event to trigger the user’s teleportation, we can use another Unity Button Action. This time we’ll use the Unity Button Action attached to the Press[8] child object on the UnityXR controller prefab in the Hierarchy. Drag and drop the Press[8] child game object from the Hierarchy onto the object field of the Selection Action parameter in the Curved Pointer Facade component (Figure 9-11).
Step 3: Play-Test
Because we have connected our VR_Camera_Controller and CharacterController components to our Tracked Alias, you can also move continuously in the scene by pushing the left thumbstick in the horizontal and vertical directions. If the continuous motion of the virtual camera makes you feel uncomfortable, then you can change the VR_Camera_Controller settings in the Unity Inspector. Reducing the value of the Speed property and increasing the value of the Gravity property might improve the experience. Of course, you might find you don’t even want the option to continuously move the virtual camera in the scene. VRTK’s teleport function could be all that you need.
The VRTK Teleporter Object
Now that we’ve added a VRTK Curved Pointer object to our scene, we can add a VRTK Teleporter to make full use of the Curved Pointer’s potential.
Step 1: Add a Teleporter Prefab to the Scene
Drag and drop the Teleporter.Instant prefab into the Scene Hierarchy.
Step 2: Configure the Teleporter Prefab in the Scene
The Target property on the Teleporter object instructs the Teleporter what game object to teleport, or move. Because we want our virtual camera to teleport, we could connect the Teleporter’s Target property to the active camera object in our scene. However, this will tie the Teleporter to a single camera object. Because part of the appeal of designing with VRTK is that we can target many VR SDKs, it’s best practice to connect the Teleporter to our TrackedAlias game object. Using this method, if we decide to activate our UnityXRCameraRig, or even SimulatedCameraRig for that matter, we do not have to change the Target setting on our Teleporter object.
Step 3: Set the Blink Parameter of the Teleporter
Step 4: Connect the Teleporter Object to the Curved Pointer’s Selection Action
Now that we have our Teleporter object set up in our scene, we need to connect it to the Curved Pointer object we set up earlier. You might recall in Step 2 of the earlier section “Adding a VRTK Pointer to the Scene” that we set up not only an Activation Action for the Curved Pointer, but also a Selection Action. The Activation Action, mapped to the touch sensor on the left thumbstick, makes the Curved Pointer’s parabola appear. The Selection Action, mapped to the press of the left thumbstick, triggers the Teleporter event. Because we have already connected the Button Action of the Curved Pointer’s Selection Action, we now only have to instruct the Curved Pointer which function to fire when the user triggers the Selection Action event.
Step 5: Play-Test
Just prior to play-testing the scene, I recommend navigating to the VR_Camera_Controller component on the TrackedAlias prefab, if you choose to attach it. There, change the value of the Speed setting of the VR_Camera_Controller to 1 and its Gravity value to 30. This will reduce the motion sickness that can occur for a user experiencing continuous movement in a headset.
After the settings for all your components are set to your preferences, save the scene. Make sure the requisite applications are running on your machine if your VR system requires a third-party program like Oculus or SteamVR. Then, press Play and test your scene.
If all goes according to plan, while play-testing your scene, you should be able to teleport through the scene by pressing the left controller thumbstick. If you kept the VR_Camera_Controller on your TrackedAlias object active, then you can fine-tune your position in virtual space using continuous movement, as well.
Create a Button Action to Display Statue Information
For the final section of our exercise, we will create a Button Action mapped to the right touch controller that toggles a text display of each statue on or off. We’ll make things a bit more complicated than anything we’ve done so far in this book. Consider this, then, your grand finale.
The desired goal for this action is to provide the user with the ability to toggle on and off a Canvas object that displays information for each statue. What makes this goal challenging is the logic we will use in the script to determine which statue is nearest when the user presses the Canvas toggle button. For example, if the user’s virtual camera is closest to Statue_2 then we do not want to toggle the Canvas for Statue_3. How, though, can we determine to which statue the user is nearest?
Fun with Vector Math
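Sketched in C#, the nearest-statue check compares the squared distance between the user's position and each statue, keeping whichever statue is closest. The public statues array and the Canvas lookup here are my assumptions for illustration; adjust the names and the toggling logic to match your own Menu_Controller listing.

```csharp
using UnityEngine;

// A sketch of the Menu_Controller's nearest-statue logic. The statues
// array is populated in the Inspector; each statue is assumed to have
// a Canvas child holding its informational text.
public class Menu_Controller : MonoBehaviour
{
    public GameObject[] statues;

    // Called by a Unity Button Action on the right controller.
    public void FindNearestStatue()
    {
        // The script sits on the PlayAreaAlias, so its own position
        // stands in for the user's position in 3D space.
        Vector3 currentPosition = transform.position;

        GameObject nearest = null;
        float closestDistanceSqr = Mathf.Infinity;

        foreach (GameObject statue in statues)
        {
            // Compare squared magnitudes to avoid computing a square
            // root for every statue in the array.
            Vector3 toStatue = statue.transform.position - currentPosition;
            float distanceSqr = toStatue.sqrMagnitude;
            if (distanceSqr < closestDistanceSqr)
            {
                closestDistanceSqr = distanceSqr;
                nearest = statue;
            }
        }

        if (nearest != null)
        {
            // Toggle the nearest statue's info Canvas on or off.
            Canvas canvas = nearest.GetComponentInChildren<Canvas>(true);
            if (canvas != null)
                canvas.gameObject.SetActive(!canvas.gameObject.activeSelf);
        }
    }
}
```

Using sqrMagnitude instead of Vector3.Distance() is a small optimization: because we only compare distances against each other, we never need their actual values, so we can skip the square root entirely.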
The preceding code is based on a solution to a question posed on the Unity message boards in 2014. The original poster’s Unity handle is EdwardRowe, and you can follow the entire thread at https://forum.unity.com/threads/clean-est-way-to-find-nearest-object-of-many-c.44315/.
In the preceding code for the Menu_Controller , I have presented the comments in italics. The comments in the code, preceded by //, describe the action performed by each line. Although I’ve done my best to use code you’ve already seen in this text, there might be some functions you don’t recognize. You can find more information regarding them in the online Unity documentation.
Copy the code from the Menu_Controller script and save it in a script of your own with the same name in Unity.
You will notice in the code an expression that sets a Vector3 variable called currentPosition to the position of the game object to which the script is attached. Because the positions of the statue objects are stored in the statues array in the code, the position represented by currentPosition is the user’s position in 3D space. If you recall from the step in this exercise in which we connected the PlayAreaAlias child object of the VRTK TrackedAlias prefab as the Target of our Teleporter function, then you’ll remember that the PlayArea is, effectively, what identifies the user’s position in the scene. Knowing this, we can attach our Menu_Controller script to our PlayAreaAlias object, too, as its position will always follow that of the user in the scene.
After saving the Menu_Controller script in your IDE, return to your Unity project. Add the Menu_Controller script as a component on the PlayAreaAlias object in the Hierarchy.
However, before we connect our statues to the Menu_Controller script, let’s first be sure that our statues contain both the Canvas and TextMeshPro objects to which the Menu_Controller script refers. If you set the statue objects in your scene according to the same Transform settings I used, then copying the Transform settings for my Canvas objects will track nicely to your project. If you set your own statue objects in the scene, you can easily attach Canvas objects and set their position to your liking.
Canvas 1:
Position (x,y,z): -2, 0.1, -0.6
Width x Height: 2 x 3
Pivot (x,y): 0.5, 0.5
Rotation (x,y,z): 0, -48, 0
Scale (x,y,z): 1, 0.3, 1

Canvas 2:
Position (x,y,z): -2, -0.2, -0.6
Width x Height: 2 x 3
Pivot (x,y): 0.5, 0.5
Rotation (x,y,z): 0, 0, 0
Scale (x,y,z): 1, 0.3, 1

Canvas 3:
Position (x,y,z): -2, 0.10, -0.6
Width x Height: 2 x 3
Pivot (x,y): 0.5, 0.5
Rotation (x,y,z): 0, -48, 0
Scale (x,y,z): 1, 0.3, 1
Further, add a TextMeshPro Text object to each canvas. Set the content of the Text field to any dummy text you’d like. Because I have tilted Statue_2 in my scene, its TextMeshPro transform requires a bit of tweaking.
Position (x,y,z): -0.05, -0.50, -0.90
Rotation (x,y,z): 5, -80, 30
Once our Statues are connected to our Menu_Controller script on the PlayAreaAlias object, all that’s left for us to do is connect the FindNearestStatue() function to a Button Action. As we did with the Curved Pointer earlier in this exercise, we will attach the object holding our desired method to the Button Action of a VRTK Controller prefab.
Play-Test
Complete! Congratulations! As Penny Powers, you completed the prototype for the Milwaukee Contemporary Art Museum’s Placido Farmiga exhibit. Your supervisor in the Members’ Benefits Department has forwarded the application on to the Art Department, where designers will replace the primitive assets you created with higher-end work digitized from Placido’s sculptures. The Museum was so impressed with your work, in fact, that it has commissioned a special photography team to capture a 3D scan of the museum’s interior to include in your project as an asset. Sit back and enjoy the countdown to the raise in salary you will inevitably receive for an excellent job.
Summary
Movement is a tricky beast in VR. Camera motion for some users might pose no problem; for others it could cause illness. The best course of action we, as developers, can follow is to provide our users with options. Tools provided by VRTK, like Pointers and Locomotion prefabs, allow us to provide different experiences for users. Fundamentally, movement in VR is not a make-or-break component of a piece. Like lights, textures, materials, and objects, movement is but one more instrument in our case, one more color on our palette as designers of VR.
In this chapter you learned how to prototype your own environment using only primitive objects in Unity. Applying materials and textures to primitive game objects can help developers prototype ideas quickly without sacrificing too much of their vision. You also used your familiarity with Unity Button Actions and the TrackedAlias game object to place a VRTK pointer and locomotion component in your scene. Finally, using original scripting you created two different custom actions: one to move the user through space, and another to create a smart function that not only determines the nearest statue to a user, but also toggles the state of its informational menu.
Conclusion
So ends our time together in this book. I hope you had as positive an experience learning how to use VRTK with Unity as I did passing on my lessons to you. VR continues to expand as a medium, and every day its promises grow. As the new media of VR, AR, and MR converge, the skills and knowledge you have picked up in these pages will serve you well. I am sure of it. Earlier in the book I offered you a guarantee that you would reach the final page confident that you could prototype any original VR experience you could imagine. I hope you are not disappointed. Although you might feel like you have only seen the tip of the iceberg, I assure you that there are very few additional skills or secrets that you do not know. The only difference between you, now, and a professional VR developer is time and practice. Fortunately, you already have the one thing that most certainly makes a strong VR developer: your own unique imagination.
If you close this book curious about what you can make in VR on your own, then I will have considered my job a success. If, however, you feel more confused than you did during Chapter 1, I encourage you to give it time. A year and a half ago, when I started to learn how to program C#, an instructor at a bootcamp told me that learning to code is really learning how to think about problems in a new way. The further I explore the systems and patterns of designing applications, the better I understand what he meant. Coding is not the language. It’s not even the rules of the interface of an application like Unity. Coding is thinking about a problem, breaking it into smaller problems, and solving each smaller problem step-by-step.
If you read all the chapters in this book, if you completed each exercise, then you already know the tools at your disposal to solve the problems you might face. The true, unbridled creativity of programming, especially in a medium like VR, in my opinion, lies in the innumerable ways you can decide to solve a problem. Some solutions might be easy, whereas some might require months of patient research. No matter the answers we find, however, every step of the way is a step into a new idea, a new version of ourselves. The fantasy of VR has been with us ever since man touched paint to stone. The only difference between then and now is that today creating VR is possible. Who knows what tomorrow will bring?