© Rakesh Baruah 2020
R. Baruah, Virtual Reality with VRTK4, https://doi.org/10.1007/978-1-4842-5488-2_9

9. Movement in VR

Rakesh Baruah, Brookfield, WI, USA

We’ve made it. Here begins the final chapter in our journey together. We’ve gone from an introduction of the vertex to the creation of interactive VR scenes with dynamic UI text. What’s taken me four years to understand, you hopefully have grasped within the confines of these pages. There’s still one more feature of VR design that we have not addressed.

In this chapter you will do the following:
  • Learn the principles of movement in VR.

  • Explore how to design VR for user comfort.

  • Create your own VR building and environment.

  • Add the ability to move around a virtual scene while wearing a headset.

  • Write a simple sorting algorithm to locate the nearest game object.

The Elusive Nature of Movement

Movement in VR is an odd concept. Throughout the exercises we have already completed, we have dealt with vectors, which define the length and direction of a line in space. The physics engine within Unity uses vectors to calculate physics properties like force. Unity even has functions that handle the creation of simulated gravity. All of these parameters affect what we, in our real lives, understand to be movement. Yet, movement in VR can be a very tricky thing.

The reasons for movement’s complicated relationship with VR are things we explore shortly. First, though, it’ll be helpful to come to a clear understanding of the types of movement in VR.

Movement vs. Motion

Movement, as I use it in the context of this chapter, is distinct from motion. Motion, for the sake of our conversation, is the act of objects traveling between two points over time. In Chapter 8, the action performed by our interactors on interactables included motion. With our controllers, we moved interactable shapes from a table to a bin. The shapes underwent the process of motion, which itself was the product of force, mass, and gravity. Movement, however, I define as something more psychological. Movement, in my definition for the basis of the exercises to follow, is the impact that the motion of the virtual camera in our environment has on the user. Movement, therefore, is subjective. It requires consciousness. It is the name of the valley between expectation and reality.

Perhaps the most obvious example of movement as a mental model is the distinction between 3DoF and 6DoF VR headsets. DoF is an acronym that stands for degrees of freedom. A VR headset that offers three degrees of freedom, like the Oculus Go or the Google Daydream View, tracks a user's head movement through the three axes of rotation. In a 3DoF headset, users can rotate their head in any direction. They cannot, however, translate their position. If users were to try to translate their position while wearing a 3DoF headset, by walking, for example, then they would immediately become dizzy. Their body tells them they are moving forward, but the headset cannot update its image to match the translation of their position. The 3DoF headset can only track users' rotational head movement, not the movement of their body through space. On the other hand, 6DoF VR headsets, like the Oculus Rift, the Oculus Quest, and the HTC Vive, track not only a user's three degrees of rotation, but also their three degrees of movement through space: forward, backward, and sideways.

Although some VR experts might argue that headsets with more DoF allow for a deeper sense of immersion in a VR scene, I'd argue that it is how we, as VR developers, embrace the strengths and limitations of different headsets that influences the immersive power of a VR scene.

VRTK Tools for Movement

Whether you are designing for a 3DoF or 6DoF headset, VRTK offers specific tools in its Pointers and Locomotion libraries to help streamline the creation of movement actions for users. Pointers in VRTK refer to the design elements that allow a user to visualize direction and distance through the use of a touch controller. Straight VRTK pointers operate like laser pointers, whereas curved VRTK pointers function like parabolas. Locomotion in VRTK refers to the movement, or translation, of a user’s position in virtual space. Two popular examples of locomotion in VRTK are dash locomotion and teleportation.

Before we get into the shortcuts VRTK offers developers for creating user movement in a VR scene, let’s first build up the foundation of this chapter’s exercise.

Exercise: Virtual Tour Guide

Placido Farmiga is a world-renowned contemporary artist. He’s bringing his most recent collection, which wowed at the Pompidou Center in Paris, to the Milwaukee Contemporary Art Museum in Milwaukee, Wisconsin. As tickets for the event sold out in minutes, the museum has asked Penny Powers, an associate in the member services department, to help design a VR experience that museum members who were unable to purchase tickets can use to experience Placido’s work remotely.

As Penny in this exercise, you will do the following:
  • Create a building environment using only primitive Unity objects.

  • Create a C# script that moves a virtual camera through a scene.

  • Create a button action that controls UI events.

Creating the Environment

As always, before we begin the exercise, create a new Unity 3D project. This time, however, we won’t immediately mark our project for VR support. We won’t install XR Legacy Input Mappings or the VRTK files, just yet. First, we’ll set up a standard 3D scene to begin our understanding of movement in VR.

Step 1: Create the Foundations of a Building

In previous examples we’ve imported 3D models created by different artists to serve as environments and props in our scenes. Convincing 3D models of large spaces, however, are difficult to find and even harder to create. The few that do exist for our use cost money. As I cannot in good conscience ask you to spend any more money than you’ve already spent purchasing this book, I will instead show you how you can prototype your own 3D environment inside Unity.

The process is tedious to describe and not unlike the creation of game objects we’ve gone through in previous chapters. Table 9-1 displays the transform settings I have used for primitive cubes to create the walls for my model of the fictional Milwaukee Contemporary Art Museum.
Table 9-1
Transform Values for Primitive Cube Objects as the Walls for My Museum

| Transform Values | Wall | Wall_2 | Wall_3 | Wall_4 | Wall_5 | Wall_6 | Wall_7 | Wall_8 | Wall_9 |
|---|---|---|---|---|---|---|---|---|---|
| Position x | -10.98 | -20.24 | 2.17 | 6.19 | 24.59 | -10.98 | -10.98 | -16.12 | 11.41 |
| Position y | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 | 0.122 |
| Position z | -12.60 | -12.17 | 2.62 | -13.28 | -11.99 | -24.48 | -0.18 | -25.91 | -25.63 |
| Rotation x | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Rotation y | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 90 |
| Rotation z | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Scale x | 1.02 | 1 | 1.09 | 1 | 1 | 1.02 | 1.02 | 1.02 | 1.01 |
| Scale y | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| Scale z | 9.71 | 28.50 | 45.83 | 12.97 | 28.15 | 3.87 | 6.40 | 9.26 | 27.36 |

Use Table 9-1 and the image in Figure 9-1 as a reference for the design of your own contemporary art museum. After you have created the Floor object of the museum and one Wall object, pause to add Materials and Textures to the assets.
Figure 9-1. This is an overhead view of the building layout
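
If you would rather not position every cube by hand, you can also spawn the walls from a short script. The sketch below is optional and covers only the first three entries from Table 9-1; the MuseumWallBuilder name and the idea of grouping the cubes under one parent object are my own scaffolding, not part of the exercise.

using UnityEngine;
// Optional helper: spawns primitive cubes with Transform values taken from Table 9-1.
// Attach to an empty GameObject and press Play once to see the layout.
public class MuseumWallBuilder : MonoBehaviour
{
    void Start()
    {
        // name, position, y rotation, scale -- only the first three walls are listed here
        CreateWall("Wall",   new Vector3(-10.98f, 0.122f, -12.60f), 0f, new Vector3(1.02f, 4f, 9.71f));
        CreateWall("Wall_2", new Vector3(-20.24f, 0.122f, -12.17f), 0f, new Vector3(1f, 4f, 28.50f));
        CreateWall("Wall_3", new Vector3(2.17f, 0.122f, 2.62f), 0f, new Vector3(1.09f, 4f, 45.83f));
    }
    void CreateWall(string wallName, Vector3 position, float yRotation, Vector3 scale)
    {
        GameObject wall = GameObject.CreatePrimitive(PrimitiveType.Cube);
        wall.name = wallName;
        wall.transform.SetParent(transform);          // keep the walls grouped under this object
        wall.transform.position = position;
        wall.transform.rotation = Quaternion.Euler(0f, yRotation, 0f);
        wall.transform.localScale = scale;
    }
}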

Step 2: Add Materials and Textures to the Environment

To add Materials to the Floor and Wall objects, first create a new folder called Materials in your Assets folder. Create a new Material object, name it Floor, and change its Albedo color to something you prefer in the Inspector. Do the same for a Material you name Wall. After creating both materials, add each to its respective game object in the Scene Hierarchy.

In the Scene Hierarchy, select the Wall object to which you’ve added the material component. In the Inspector, expand the Material component. With the Material components attached to the Floor and Wall objects in our scene, we can manipulate the textures of the objects to give our scene a bit more believability.

First, download the assets for this exercise from the book's source code page at http://www.apress.com/source-code.

Second, import the assets into the Assets folder of your Unity project. Locate the asset called rough-plaster-normal-ogl.png in the Project window. Select the Normal Map property radio button in the Main Maps section of the Material component (Figure 9-2). Navigate to the rough-plaster-normal-ogl.png file to select it as the Normal Map for the Wall material.
Figure 9-2. Set textures in the Material component

Select the radio button next to the Metallic property beneath Main Maps on the Material component, too. Navigate to the project asset called rough-plaster-metallic.psd. Select it and close the menu. Use the Smoothness slider beneath the Metallic property to set the value for your desired smoothness.

Finally, beneath the heading Secondary Maps on the Wall’s Material component, select the radio button next to the Normal Map property. Set the secondary normal map to the asset called rough-plaster-ao.png.

To apply textures to the Floor object in your scene, expand the object in the Hierarchy and its Material component in the Inspector. If you'd like to add the carpet texture I've provided in the chapter assets, then set the Main Albedo Map to worn-braided-carpet-albedo.png; apply the worn-braided-carpet-Normal-ogl.png asset to the Main Normal Map property; and set the worn-braided-carpet-Height.png as the Height Map (Figure 9-3). You can also use the Albedo color picker to change the color of the texture asset you've added as an Albedo. I downloaded all the texture assets for the objects from a web site called FreePBR.com, where PBR stands for physically based rendering. If you'd like to select other texture assets for your scene, you can find more at that site.
Figure 9-3. You can select Texture settings for the floor's material

After you’ve set the material and texture parameters for the building in your scene, copy the objects in the Scene Hierarchy to which you’ve added the Material and Texture components by selecting them individually and pressing Ctrl+D. Creating all additional wall objects from the original wall object that holds material and texture information will streamline the prototyping process. You will only have to set the material and texture properties once. The properties will cascade to all clones of the object.

Step 3: Add Lights to the Environment

Because materials and textures respond to light, add point light objects to the Hierarchy and place them around the scene. Deactivate the ceiling game object in the Inspector Window if it blocks your view of the building’s interior while you are fine-tuning the details of the environment. You can use the Transform settings I applied to the lights in my scene by referring to Table 9-2 and the image in Figure 9-4.
Table 9-2
Transform Values for the Lights in the Scene

| Transform | Light | Light 1 | Light 2 | Light 3 | Light 4 | Light 5 |
|---|---|---|---|---|---|---|
| Position x | -2.48 | -12.28 | -15.28 | -2.48 | 15.22 | 15.22 |
| Position y | 1.37 | 1.37 | 1.37 | 1.37 | 1.37 | 1.37 |
| Position z | -5.70 | -19.29 | -5.29 | -19.19 | -4.89 | -18.30 |
| Rotation x | 90 | 90 | 90 | 90 | 90 | 90 |
| Rotation y | 0 | 0 | 0 | 0 | 0 | 0 |
| Rotation z | 0 | 0 | 0 | 0 | 0 | 0 |
| Scale x | 1 | 1 | 1 | 1 | 1 | 1 |
| Scale y | 1 | 1 | 1 | 1 | 0.67 | 1 |
| Scale z | 1 | 1 | 1 | 1 | 1 | 1 |

Figure 9-4. This is an overhead view of the light objects in the scene
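
As with the walls, you can place the point lights from a script instead of dragging them in one at a time. This sketch is optional; it creates only the first two lights from Table 9-2, and the MuseumLightBuilder name, range, and intensity values are my own assumptions to be tuned in the Inspector.

using UnityEngine;
// Optional helper: creates point lights using the first two columns of Table 9-2.
public class MuseumLightBuilder : MonoBehaviour
{
    void Start()
    {
        CreatePointLight("Light",   new Vector3(-2.48f, 1.37f, -5.70f));
        CreatePointLight("Light 1", new Vector3(-12.28f, 1.37f, -19.29f));
    }
    void CreatePointLight(string lightName, Vector3 position)
    {
        GameObject lightObject = new GameObject(lightName);
        lightObject.transform.position = position;
        lightObject.transform.rotation = Quaternion.Euler(90f, 0f, 0f); // matches the rotation in Table 9-2
        Light light = lightObject.AddComponent<Light>();
        light.type = LightType.Point;   // point lights ignore rotation, but the value is kept for parity
        light.range = 10f;              // assumed value; adjust to taste in the Inspector
        light.intensity = 1f;           // assumed value
    }
}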

Step 4: Add Sculptures to the Environment

Because our user story defines the setting of our scene as a museum, let’s add primitive game objects that can serve as templates for art work in our scene. You can refer to Table 9-3 to use the Transform settings for three game objects I created called Statue_1, Statue_2, and Statue_3. Like the walls, I’ve formed the statues from primitive cube objects.
Table 9-3
Transform Values for Statue Objects in the Scene

| Transform Values | Statue 1 | Statue 2 | Statue 3 |
|---|---|---|---|
| Position x | -3.49 | -1.11 | -11.91 |
| Position y | -0.44 | -0.44 | -0.44 |
| Position z | 11.03 | -6.57 | -5.14 |
| Rotation x | 0 | 27.17 | 0 |
| Rotation y | 0 | 113.85 | -76.15 |
| Rotation z | 0 | -17.88 | 0 |
| Scale x | 1 | 1 | 1 |
| Scale y | 3.25 | 3.25 | 3.25 |
| Scale z | 1 | 1 | 1 |

Introduce Movement to the Scene Through Keyboard Input

At this point in the exercise, your Unity project’s Scene Hierarchy should include, at the least, a main camera; a building game object with walls, floor, ceiling, and point light child objects; and a statue object.

To introduce movement to the scene we will add two components: a CharacterController and a script. Because we want our users to feel as if they are moving through our museum, we will add the components to the Main Camera game object.

Step 1: Add a CharacterController Component to the Main Camera

Select the Main Camera game object in the Scene Hierarchy of your scene. In the Inspector, click Add Component, and search for CharacterController. Add the CharacterController component to the Main Camera game object as shown in Figure 9-5.
Figure 9-5. The CharacterController on the Main Camera object

What’s a CharacterController?

The CharacterController is a class that inherits from the Collider class in the Physics Module of the UnityEngine library. Unity provides the CharacterController component as a contained, easy-to-use component that we can attach to an object we'd like the user to control. As the CharacterController does not contain a Rigidbody component, it is not affected by forces in the scene. Movement of the object to which it is attached, therefore, will be under total control of the user.

The default settings of the CharacterController that appear in the Inspector are fine as is for this exercise. However, even though we have attached the CharacterController to the game object that we plan to move in our scene, we have not connected it to the input we aim to capture from our users. As we have done several times in this book, to create a connection between a game object and user input, we will create a C# script.

Step 2: Create a C# Script to Move the Camera Controller

Copy the following code into a new C# script named CameraController.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class CameraController : MonoBehaviour
{
    CharacterController characterController;
    public float speed = 6.0f;
    public float jumpSpeed = 8.0f;
    public float gravity = 20.0f;
    private Vector3 moveDirection = Vector3.zero;
    void Start()
    {
        characterController = GetComponent<CharacterController>();
    }
    void Update()
    {
        if (characterController.isGrounded)
        {
            // We are grounded, so recalculate
            // move direction directly from axes
            moveDirection = new Vector3(Input.GetAxis("Horizontal"), 0.0f, Input.GetAxis("Vertical"));
            moveDirection *= speed;
            if (Input.GetButton("Jump"))
            {
                moveDirection.y = jumpSpeed;
            }
        }
        // Apply gravity. Gravity is multiplied by deltaTime twice (once here, and once below
        // when the moveDirection is multiplied by deltaTime). This is because gravity should be applied
        // as an acceleration (ms^-2)
        moveDirection.y -= gravity * Time.deltaTime;
        // Move the controller
        characterController.Move(moveDirection * Time.deltaTime);
    }
}

The heart of the CameraController script, which I've repurposed from the Unity documentation on the CharacterController class, lies in its Vector3 variable moveDirection. Every frame, the Update() function rebuilds moveDirection from the Input.GetAxis() function, which takes as its argument the string name of the horizontal or vertical axis. As we know from the Unity Input Manager, the horizontal axis corresponds to the A and D keys and the left and right arrow keys; the vertical axis corresponds to the W and S keys and the up and down arrow keys. The Update() function then multiplies moveDirection by the speed parameter set in the Editor and subtracts from its y component the value of gravity, also set in the Editor, scaled by Time.deltaTime. Finally, the CameraController script calls the Move() function on the CharacterController component of the game object to which it is attached.

The Move() function is a method created by Unity, available in its UnityEngine namespace, which Unity describes as, “a more complex move function taking absolute movement deltas” in a comment above its signature in the CharacterController class. If Unity describes a method as “more complex,” then I feel confident concluding its details are beyond the scope of this exercise. Suffice it to say that the Move() function on the CharacterController component accepts the moveDirection variable as a parameter to create user-influenced character movement in our scene.
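
As an aside, if the manual gravity bookkeeping in Update() feels fussy, the CharacterController class also exposes a SimpleMove() method that applies gravity for you and ignores the y component of the vector you pass in. A rewritten Update() body might look like the sketch below; it trades the jump behavior for simplicity, so treat it as an alternative rather than a replacement for the script above.

    // Alternative Update() body using CharacterController.SimpleMove(),
    // which applies gravity automatically (jumping is no longer handled).
    void Update()
    {
        Vector3 moveDirection = new Vector3(Input.GetAxis("Horizontal"), 0.0f, Input.GetAxis("Vertical"));
        characterController.SimpleMove(moveDirection * speed);
    }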

Step 3: Attach the CameraController Script to the Camera Game Object

After saving the CameraController script in your IDE, return to the Unity project. Attach the CameraController script as a component to the Main Camera object as shown in Figure 9-6. You will see the public properties we set to default values in the script appear in the Inspector window. The moveDirection property does not appear because we set its scope to private in our script, which means it is not accessible from outside the CameraController script.
Figure 9-6. The Camera Controller script is added to the Main Camera object

Step 4: Play-Test the Scene

While playing the scene you should be able to move the virtual camera by pressing the horizontal and vertical keyboard inputs as defined in the Input Manager. The Main Camera game object moves through the scene according to the parameters we've set for the CharacterController and CameraController components. Because our project does not yet have VR support, we can only experience the movement of our camera through the screens of our monitors or laptops.

I don’t know about you, but for me, the impact of the CameraController script and the mechanics of the CharacterController component are pretty good. The motion of the camera through space reminds me of video games from the early 2000s like GoldenEye and Halo. Let’s see how things translate into VR.

Continuous Movement of a Camera in VR

Now that we’ve connected user input to the movement of a camera in our scene, let’s apply what we’ve learned in the previous section to VR.

Step 1: Set Up the Unity Project for VR and VRTK

Duplicate the scene you created in the previous steps by highlighting its name in the Project window and pressing Ctrl+D. Rename the duplicated scene Scene2. Double-click Scene2 in the Project window to open it in the Scene view and the Scene Hierarchy.

If you created the Unity project for this exercise without the trappings of VR support, then take the time now to activate the necessary ingredients with which we’ve become so familiar:
  • Edit ➤ Project Settings ➤ Player ➤ XR Settings ➤ Virtual Reality Supported

  • Window ➤ Package Manager ➤ XR Legacy Input Mappings

  • Clone VRTK through GitHub

  • Add a UnityXRCameraRig prefab to the Scene Hierarchy

Step 2: Adapt the CameraController Script for the HMD

Duplicate the CameraController script you created in the earlier steps. Rename the cloned script VR_Camera_Controller. Open the script in your IDE and change the name of the class from CameraController to VR_Camera_Controller. Remember, the name of the script has to match the name of its class for Unity to read it as a MonoBehaviour component. Save the script and return to Unity.
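
If it helps to see the result of the rename, the top of the duplicated script should end up looking like the snippet below; the body of the class stays exactly as it was in CameraController.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
// VR_Camera_Controller.cs -- the file name and the class name must match.
public class VR_Camera_Controller : MonoBehaviour
{
    // ...same fields, Start(), and Update() as the CameraController script...
}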

Add a CharacterController component and the VR_Camera_Controller script to the UnityXRCameraRig in the Inspector. The default settings on the components will be fine to begin with, and you can tweak them during play-testing.

Step 3: Play-Test the Scene

Press the Play button on the Unity toolbar to test the impact of the Character and Camera controllers on the UnityXRCameraRig. Because we didn’t change the body of the CameraController class after renaming it, all the keyboard functionality related to movement remains. Press the keyboard inputs associated with the horizontal and vertical axes to experience continuous movement in VR.

Movement and VR: It’s Complicated

If you play-tested the scene while wearing an HMD, then you might have noticed the challenges that arise when we try to connect VR headsets with continuous movement in the forward, backward, and sideways directions. Although some have a higher tolerance than others, most users will experience some level of discomfort when moving through a virtual scene continuously. The reasons may vary and can be mitigated by different settings for variables such as speed and gravity. In my experience, after about 10 minutes in a VR experience with poorly designed continuous movement, I develop a headache. I believe my brain’s inability to reconcile the movement I see with the motion I feel causes the discomfort.

I have had positive experiences with continuous movement in VR experiences that forgo gravity. Games that take place in space or through flight, for example, have been fine for me. However, these games also are professionally designed, so the reduced impact of continuous movement on my senses could result from higher quality 3D assets, higher frame rates, more efficient processing, or any other combination of factors. The larger takeaway is that especially for prototyped projects created by small teams, users do not gain more than they lose with continuous movement in immersive experiences.

VRTK to the Rescue!

One solution VR developers use to enable a user’s movement through a VR scene is teleportation. Yes, it is appropriate that a futuristic medium like VR would include as part of its design a futuristic mode of transportation like teleporting. However, unlike the teleportation many of us might have become familiar with through Star Trek, teleportation in VR has its own limitations.

First, teleportation, as we will use it in this exercise, can only move the viewer to a location of a scene within a prescribed radius. Sure, we can create scripts that move our users across wide swaths of space instantaneously, but I would not call that teleportation. I’d call that cutting to a new scene. Teleportation, however, translates a user’s position in the same scene.

Second, teleportation, again, as we will use it in this exercise, remains under the control of the user. There’s no Scottie who will beam our user up. In one way, we, the developer, are the user’s Scottie. However, because we cannot be in the scene with the users as they experience our design, we must create mechanisms in our program that empower the users to determine the parameters of their own teleportation.

To demonstrate how users can define the destination for a translation of their position in a scene, let’s first place a VRTK object called a Pointer in our scene.

Adding a VRTK Pointer to the Scene

If You’re Only Seeing Black...

Because we are adding touch controller interaction to our scene, it might be helpful for you to replace the UnityXRCameraRig game object with the SDK-specific CameraRig game object for your system in the Scene Hierarchy. Although I have been able to execute this exercise in its entirety using a UnityXRCameraRig game object and the VRTK Oculus touch controller prefabs, I have encountered some hiccups in the execution. The exercise runs well when I use the Oculus VR Camera Rig prefab from the Oculus Integration package, which I downloaded from the Unity Asset Store. Please feel free to follow along using whatever virtual camera rig best suits your workflow. If you run into problems with the scene in your headset then you can troubleshoot the bug by using a different virtual camera object.

Step 1: Select the VRTK Curved Pointer Prefab from the Project Window

In the Project window of Unity, navigate to Assets ➤ VRTK ➤ Prefabs ➤ Pointers as shown in Figure 9-7. Drag and drop the ObjectPointer.Curved prefab into the Scene Hierarchy.
Figure 9-7. Drag and drop the VRTK Curved Pointer prefab into your Scene Hierarchy

Step 2: Connect the Curved Pointer to a Controller

Select the Curved Pointer prefab in the Scene Hierarchy. In the Inspector window, locate the Follow Source field on the Pointer Facade component attached to the Curved Pointer prefab game object. Expand the TrackedAlias game object in the Hierarchy, and drag and drop the LeftControllerAlias prefab into the Curved Pointer’s Follow Source field (Figure 9-8).
Figure 9-8. Connect the LeftControllerAlias to the Curved Pointer object

If you added an SDK-specific Camera Rig game object, like the OVRCameraRig for the Oculus Rift, for example, to your Scene then be sure to add a Linked Alias Collection component to the camera and connect it to the TrackedAlias Camera Rigs Elements list as we did in a previous exercise.

With a Curved Pointer connected to a controller alias in our scene, we now have to tell Unity which button on our controller activates the Curved Pointer. Beneath the Follow Source parameter on the Curved Pointer’s Facade component, there is an empty object field called Activation Action (Figure 9-9).
Figure 9-9. The Activation Action field is shown on the Pointer object

The Activation Action parameter accepts a game object that has a Boolean Action component attached. Triggering the controller button that holds the Boolean Action connected to the Activation Action parameter of the Curved Pointer will, of course, activate the Curved Pointer in our scene. For this exercise, let’s connect the Curved Pointer’s activation to the thumb stick of the left controller.

Mapping Madness

Again, be mindful that I am using an Oculus Rift connected to a desktop with two wireless touch controllers. The mapping I use to connect my controller buttons to actions might not apply to the controllers with your system. As always, you can find the input mapping for your device’s controllers in the XR section of the Unity online documentation at https://docs.unity3d.com/Manual/xr_input.html.

To hook into Boolean Actions on our controllers we must first include references to our VR controllers in our scene. As we’ve done in previous exercises, we can use VRTK’s Controller prefabs to point to our touch controllers. To accomplish this, navigate to the Input Mappings folder in your Project window: Assets ➤ VRTK ➤ CameraRig ➤ UnityXRCameraRig ➤ Input Mappings. VRTK comes with four Controller prefabs: two for the Oculus runtime and two for the OpenVR runtime. From the Input Mappings folder, drag and drop the Controller prefabs appropriate for your system into the Scene Hierarchy (Figure 9-10).
Figure 9-10. Drag and drop the UnityXR controller prefabs into the Hierarchy

In the Hierarchy, expand the UnityXR.LeftController prefab you just dragged from the Project window. Its child objects are prefabs representing the button actions available on the controller. Because I want to connect the Activation Action of my Curved Pointer to the left thumbstick of my Oculus touch controller, I will expand the Thumbstick child object. For child objects, the Thumbstick prefab has two Unity Button Actions attached to two otherwise empty game objects. For the Oculus controller, the child objects are called Touch[16] and Press[8]. If you are using the UnityXR.OpenVR controller prefab, then the corresponding child objects are beneath the Trackpad object of the OpenVR controller prefab. They, too, are called Touch[16] and Press[8]. As always, if you’d like to know more about the default input mappings between Unity and your VR system’s touch controllers, refer to the Unity XR Input resources in the XR section of the online Unity documentation.

Drag and drop the Touch[16] game object from the Hierarchy onto the Activation Action field of the Curved Pointer’s Facade component (Figure 9-11). The Activation Action will draw the curved pointer from the touch controller’s virtual coordinates to a point on the Floor object in our scene.
Figure 9-11. Drag and drop Button Action objects onto the Pointer object

To create a selection event to trigger the user’s teleportation, we can use another Unity Button Action. This time we’ll use the Unity Button Action attached to the Press[8] child object on the UnityXR controller prefab in the Hierarchy. Drag and drop the Press[8] child game object from the Hierarchy onto the object field of the Selection Action parameter in the Curved Pointer Facade component (Figure 9-11).

Move the Camera Controller components onto the TrackedAlias object. Whether or not you replaced the UnityXRCameraRig game object with an SDK-specific virtual camera, move the CharacterController component and the VR_Camera_Controller script onto the TrackedAlias parent game object as components. If you attached these components to the UnityXRCameraRig object earlier in this exercise, you can delete them from the UnityXRCameraRig by clicking on the gear icon next to the name of the component in the Inspector and selecting Remove Component as shown in Figure 9-12.
Figure 9-12. Select the Remove Component command to delete a component

Confirm that you have added the CharacterController and VR_Camera_Controller components to the Tracked Alias parent game object. Further, make sure only one virtual camera object is activated in your scene. Finally, confirm that the one camera active in your scene is also included as an Element on the Tracked Alias game object, as shown in Figure 9-13.
Figure 9-13. This shows a deactivated UnityXRCameraRig in the Hierarchy

Step 3: Play-Test

While play-testing your scene, you should see a green parabola appear at the end of your left touch controller and fall somewhere on the Floor object when your thumb touches the thumb pad as displayed in Figure 9-14.
Figure 9-14. The VRTK Curved Pointer appears as a green parabola during playback

Because we have connected our VR_Camera_Controller and CharacterController components to our Tracked Alias, you can also move continuously in the scene by pressing the left thumb pad in the horizontal and vertical directions. If the continuous motion of the virtual camera makes you feel uncomfortable, then you can change the VR_Camera_Controller settings in the Unity Inspector. Reducing the value of the Speed property and increasing the value of the Gravity property might improve the experience. Of course, you might find you don’t even want the option to continuously move the virtual camera in the scene. VRTK’s teleport function could be all that you need.

The VRTK Teleporter Object

Now that we’ve added a VRTK Curved Pointer object to our scene, we can add a VRTK Teleporter to make full use of the Curved Pointer’s potential.

Step 1: Add a Teleporter Prefab to the Scene

In the Project window, navigate to the Teleporters folder in the VRTK library: Project Window ➤ Assets ➤ VRTK ➤ Prefabs ➤ Locomotion ➤ Teleporters. Inside the Teleporters folder, select the Teleporter.Instant prefab object as shown in Figure 9-15.
Figure 9-15. Select the VRTK Instant Teleporter prefab

Drag and drop the Teleporter.Instant prefab into the Scene Hierarchy.

Step 2: Configure the Teleporter Prefab in the Scene

Highlighting the Teleporter.Instant prefab in the Hierarchy opens its properties in the Inspector. Notice the first two parameters beneath the Teleporter Settings on the object’s Facade component (Figure 9-16).
Figure 9-16. Note the Target and Offset properties of the Instant Teleporter prefab

The Target property on the Teleporter object instructs the Teleporter what game object to teleport, or move. Because we want our virtual camera to teleport, we could connect the Teleporter’s Target property to the active camera object in our scene. However, this will tie the Teleporter to a single camera object. Because part of the appeal of designing with VRTK is that we can target many VR SDKs, it’s best practice to connect the Teleporter to our TrackedAlias game object. Using this method, if we decide to activate our UnityXRCameraRig, or even SimulatedCameraRig for that matter, we do not have to change the Target setting on our Teleporter object.

With the Teleporter.Instant prefab highlighted in the Scene Hierarchy, expand the TrackedAlias parent prefab. Beneath the Aliases child object, locate the PlayAreaAlias grandchild object. The PlayAreaAlias points to an Observable class in VRTK that contains the game objects to which the Alias can connect. Drag and drop the PlayAreaAlias object into the Target property of the Teleporter.Instant Facade component (Figure 9-17).
Figure 9-17. Connect the PlayAreaAlias as the Target object of the Teleporter

The Offset property on the Teleporter prefab refers to the difference between the user’s expected location and actual location at the Teleporter’s destination. For example, if the user is standing 2 feet to the left of the center of the Play Area, then he or she will still be standing 2 feet to the left of the center of the Play Area after the teleportation takes place. This creates a disruption in the immersion of the experience because most users expect the teleportation event to place them exactly on the point they selected with their curved pointer. The Offset property of the Teleporter prefab calculates the difference between the user’s headset position in relation to their Play Area. To correct for this disruption, drag and drop the HeadsetAlias object, which is a grandchild of the TrackedAlias object in the Scene Hierarchy, onto the Offset field on the Teleporter’s Facade component (Figure 9-18).
Figure 9-18. Drag the HeadsetAlias to the Teleporter’s Offset property

Step 3: Set the Blink Parameter of the Teleporter

Both the Instant and Dash Teleporters in VRTK offer an option to provide a “blink” before translating the user’s Play Area in the scene. A blink is akin to a quick camera fade-in, which can reduce the discomfort a user feels when moving through virtual space. To set the blink action on the Teleporter object, drag and drop into its Camera Validity field the SceneCameras grandchild object, which can be found beneath the Aliases object on the TrackedAlias prefab in the Hierarchy (Figure 9-19).
Figure 9-19. The Camera Validity property controls the fade-in following a teleport

Step 4: Connect the Teleporter Object to the Curved Pointer’s Selection Action

Now that we have our Teleporter object set up in our scene, we need to connect it to the Curved Pointer object we set up earlier. You might recall in Step 2 of the earlier section “Adding a VRTK Pointer to the Scene” that we set up not only an Activation Action for the Curved Pointer, but also a Selection Action. The Activation Action, mapped to the touch sensor on the left controller thumb pad, makes the Curved Pointer’s parabola appear. The Selection Action, mapped to the press of the left controller thumb stick, triggers the Teleporter event. Because we have already connected the Button Action of the Curved Pointer’s Selection Action, we now only have to instruct the Curved Pointer which function to fire when the user trips the Select Action event.

Select the ObjectPointer.Curved prefab in the Scene Hierarchy. In the Inspector, locate its Pointer Events. The final drop-down list in the Pointer Events list is the Selected (EventData) event. Click the + in the Selected (EventData) box. Into the empty game object field, drag and drop the Teleporter.Instant prefab from the Hierarchy. From the Selected Pointer Events function drop-down menu, select the function TeleporterFacade.Teleport (Figure 9-20).
Figure 9-20. Connect the Teleporter function to the Curved Pointer’s event handler

Step 5: Play-Test

Just prior to play-testing the scene, I recommend navigating to the VR_Camera_Controller component on the TrackedAlias prefab, if you choose to attach it. There, change the value of the Speed setting of the VR_Camera_Controller to 1 and its Gravity value to 30. This will reduce the motion sickness that can occur for a user experiencing continuous movement in a headset.

After the settings for all your components are set to your preferences, save the scene. Make sure the requisite applications are running on your machine if your VR system requires a third-party program like Oculus or SteamVR. Then, press Play and test your scene.

If all goes according to plan, while play-testing your scene, you should be able to teleport through the scene by pressing the left controller thumb pad button. If you kept the VR_Camera_Controller on your TrackedAlias object active, then you can fine-tune your position in virtual space using continuous movement, as well.

Create a Button Action to Display Statue Information

For the final section of our exercise, we will create a Button Action mapped to the right touch controller that toggles a text display of each statue on or off. We’ll make things a bit more complicated than anything we’ve done so far in this book. Consider this, then, your grand finale.

The desired goal for this action is to provide the user with the ability to toggle on and off a Canvas object that displays information for each statue. What makes this goal challenging is the logic we will use in the script to determine which statue is nearest when the user presses the Canvas toggle button. For example, if the user’s virtual camera is closest to Statue_2 then we do not want to toggle the Canvas for Statue_3. How, though, can we determine to which statue the user is nearest?

Fun with Vector Math

Finding the distance between two points in 3D space is similar to finding it in 2D space. Both rely on an application of the Pythagorean theorem. Our FindNearestStatue() function, then, will calculate the distance between the user and each statue in our scene, with that distance serving as the hypotenuse of a right triangle in 3D space. Of course, the Pythagorean theorem requires us to take a square root to find the length of the hypotenuse. Because the square root function can be costly to calculate in an immersive application, we will instead compare squared distances using Unity’s sqrMagnitude property. Because every distance is squared, their ordering relative to each other is preserved. A quick sketch of the math follows, and then the full script.
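Written out, the quantity the script compares is the 3D Pythagorean theorem with the final square root left off:

\[
d^2 = (x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2
\]

Because the square root is a monotonically increasing function, the statue with the smallest squared distance is also the statue with the smallest distance, so nothing is lost by skipping the square root. With that in mind, here is the full Menu_Controller script.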
using UnityEngine;
using System.Collections.Generic;
using TMPro;
public class Menu_Controller : MonoBehaviour
{
    // Create an array to hold the Transform components of each statue
    Transform[] statues = new Transform [3];
    // Define 3 public GameObject fields into which we can drag our statues
    public GameObject statue_1;
    public GameObject statue_2;
    public GameObject statue_3;
    void Start()
    {
        // Load the statue array with our scene's statues
        statues[0] = statue_1.transform;
        statues[1] = statue_2.transform;
        statues[2] = statue_3.transform;
    }
    // The function fired by the Button Action click event
    public void FindNearestStatue()
    {
        // Define an empty variable to hold the nearest statue
        Transform bestTarget = null;
        // Define a variable to hold a ridiculously large number
        float closestDistanceSqr = Mathf.Infinity;
        // Store the user's position in a variable
        Vector3 currentPosition = transform.position;
        // Iterate over the collection of statues in the scene
        foreach (Transform potentialTarget in statues)
        {
            // Store the distance between a statue and the user in a Vector3 variable
            Vector3 directionToTarget = potentialTarget.position - currentPosition;
            // Store the distance between a statue and the user squared
            float dSqrToTarget = directionToTarget.sqrMagnitude;
            // Compare the distance between the statue and user to the previously stored smallest distance
            if(dSqrToTarget < closestDistanceSqr)
            {
                // If the distance between the statue and the user is smaller than a previously calculated distance...
                // Store the current distance as the smallest distance
                closestDistanceSqr = dSqrToTarget;
                // Save the transform with the shortest distance as the value of the bestTarget variable
                bestTarget = potentialTarget;
            }
        }
        // Log the name of the closest statue for testing
        Debug.Log(bestTarget.name);
        // Store the canvas object of the nearest statue in a variable
        Canvas canvas = bestTarget.GetComponentInChildren<Canvas>();
        // Store the text object of the nearest object's canvas in a variable
        TextMeshPro text = canvas.GetComponentInChildren<TextMeshPro>();
        // Toggle the state of the nearest statue's canvas
        if (!canvas.enabled)
            canvas.enabled = true;
        else
            canvas.enabled = false;
    }
}

The preceding code is based on a solution to a question posed on the Unity message boards in 2014. The original poster’s Unity handle is EdwardRowe, and you can follow the entire thread at https://forum.unity.com/threads/clean-est-way-to-find-nearest-object-of-many-c.44315/.

In the preceding code for the Menu_Controller, the comments, each preceded by //, describe the action performed by each line. Although I’ve done my best to use code you’ve already seen in this text, there might be some functions you don’t recognize. You can find more information regarding them in the online Unity documentation.

Copy the code from the Menu_Controller script and save it in a script of your own with the same name in Unity.

You will notice in the code an expression that sets a Vector3 variable called currentPosition to the position of the game object to which the script is attached. Because the positions of the statue objects are stored in the statues array in the code, the position represented by currentPosition is the user’s position in 3D space. If you recall from the step in this exercise in which we connected the PlayAreaAlias child object of the VRTK TrackedAlias prefab as the Target of our Teleporter function, then you’ll remember that the PlayArea is, effectively, what identifies the user’s position in the scene. Knowing this, we can attach our Menu_Controller script to our PlayAreaAlias object, too, as its position will always follow that of the user in the scene.

After saving the Menu_Controller script in your IDE, return to your Unity project. Add the Menu_Controller script as a component on the PlayAreaAlias object in the Hierarchy.

In Figure 9-21, notice that the public properties identified in the Menu_Controller appear in the Inspector when the PlayAreaAlias object is highlighted in the Hierarchy. Dragging the statue game objects from our scene into these parameters will notify Unity of the positions of the statues in our scene.
Figure 9-21. The Menu_Controller’s public statue properties are shown here

However, before we connect our statues to the Menu_Controller script, let’s first be sure that our statues contain both the Canvas and TextMeshPro objects to which the Menu_Controller script refers. If you set the statue objects in your scene according to the same Transform settings I used, then copying the Transform settings for my Canvas objects will track nicely to your project. If you set your own statue objects in the scene, you can easily attach Canvas objects and set their position to your liking.

Statue_1 Canvas Rect Transform
  • Position (x,y,z): -2, 0.1, -0.6

  • Width x Height: 2 x 3

  • Pivot (x,y): 0.5, 0.5

  • Rotation (x,y,z): 0, -48, 0

  • Scale (x,y,z): 1, 0.3, 1

Statue_2 Canvas Rect Transform
  • Position (x,y,z): -2, -0.2, -0.6

  • Width x Height: 2 x 3

  • Pivot (x,y): 0.5, 0.5

  • Rotation: 0, 0, 0

  • Scale (x,y,z): 1, 0.3, 1

Statue_3 Canvas Rect Transform
  • Position (x,y,z): -2, 0.10, -0.6

  • Width x Height: 2 x 3

  • Pivot: 0.5, 0.5

  • Rotation (x,y,z): 0, -48, 0

  • Scale (x,y,z): 1, 0.3, 1

Further, add a TextMeshPro Text object to each canvas. Set the content of the Text field to any dummy text you’d like. Because I have tilted Statue_2 in my scene, its TextMeshPro transform requires a bit of tweaking. (A short, optional script for hiding the canvases at startup follows the transform values below.)

Statue_2 TextMeshPro Object Rect Transform
  • Position (x,y,z): -0.05, -0.50, -0.90

  • Rotation (x,y,z): 5, -80, 30
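
As promised above, here is one optional way to make sure each statue’s Canvas starts hidden so the first button press toggles it on. You can just as easily untick the Canvas component’s checkbox in the Inspector; the HideCanvasOnStart name below is my own, not something provided by VRTK or Unity.

using UnityEngine;
// Optional helper: attach to each Statue object so its information Canvas begins disabled.
// FindNearestStatue() will then enable it on the first press of the mapped button.
public class HideCanvasOnStart : MonoBehaviour
{
    void Start()
    {
        Canvas canvas = GetComponentInChildren<Canvas>();
        if (canvas != null)
        {
            canvas.enabled = false;
        }
    }
}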

With the Canvas and TextMeshPro objects set on each Statue object in the Scene Hierarchy (Figure 9-22), we can drag and drop them into the public Statue fields on the Menu_Controller script component attached to the PlayAreaAlias game object, as shown in Figure 9-23.
Figure 9-22. The Canvas and Text objects are connected to the statues in the scene

Figure 9-23. Drag and drop the Statue objects onto the Menu_Controller script

Once our Statues are connected to our Menu_Controller script on the PlayAreaAlias object, all that’s left for us to do is connect the FindNearestStatue() function to a Button Action. As we did with the Curved Pointer earlier in this exercise, we will attach the object holding our desired method to the Button Action of a VRTK Controller prefab.

We already have a UnityXR Left Controller prefab in our scene; we attached our Curved Pointer functions to it. If you haven’t already done so, drag and drop a UnityXR Right Controller into the Scene Hierarchy, too. I’m going to map the FindNearestStatue() function to the A button on my right Oculus touch controller, which VRTK’s UnityXR.Oculus controller prefab defines as ButtonOne. ButtonOne includes a press event called Press[0]. Selecting the Press[0] prefab in the Hierarchy opens it in the Inspector, where I see the familiar Unity Button Action component. Whether you are using the Oculus or OpenVR VRTK controller prefab, highlighting the button object will open a Button Action script in the Inspector. Expand the Activated (Boolean) event in the Button Action component and click the + to add an event. In the empty game object field that appears, drag and drop the PlayAreaAlias game object from the Hierarchy. In the Function pull-down menu on the Activated (Boolean) event, select the Menu_Controller script and the FindNearestStatue() function, as depicted in Figure 9-24. Once you’ve set these two properties on the Unity Button Action component, you have completed connecting the Menu_Controller to your scene.
Figure 9-24. Connect the FindNearestStatue() method to the Button Action event handler
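
If you want to sanity-check FindNearestStatue() without putting on the headset, a throwaway test hook like the one below can call it from the keyboard. This is purely a desktop debugging aid of my own, not part of the finished scene, and the T key binding is an arbitrary choice.

using UnityEngine;
// Optional debugging aid: attach to the PlayAreaAlias next to Menu_Controller
// and press the T key in Play mode to fire the same function the Button Action calls.
public class MenuControllerTestHook : MonoBehaviour
{
    private Menu_Controller menuController;
    void Start()
    {
        menuController = GetComponent<Menu_Controller>();
    }
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.T))
        {
            menuController.FindNearestStatue();
        }
    }
}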

Play-Test

When play-testing the scene, not only should you be able to use the Curved Pointer Teleportation functionality we connected to the left touch controller, but you should also be able to toggle the Canvas object of the statue nearest you by pressing the main button (A on the Oculus touch controller) on the right touch controller. Toggling a Canvas on should display the text you entered into the TextMeshPro object of the canvases attached to each statue (Figure 9-25).
Figure 9-25. The image on the left is the result of a statue object with Canvas toggled off. On the right, the Canvas is toggled on

If, for some reason, the Canvas objects do not appear in your scene when you press the main button, you can look at the Console window in Unity to check whether the Debug function we wrote into our script printed to the console. If your console shows the name of the Statue object to which you were nearest when you pressed the activation button on the right touch controller, as in Figure 9-26, then you know the FindNearestStatue() function is connected to your event handler.
Figure 9-26. Check the Console to see if the Debug function executed

Complete! Congratulations! As Penny Powers, you completed the prototype for the Milwaukee Contemporary Art Museum’s Placido Farmiga exhibit. Your supervisor in the Members’ Benefits Department has forwarded the application on to the Art Department, where designers will swap out the primitive assets you created with higher end work digitized from Placido’s sculptures. The Museum was so impressed with your work, in fact, that they have commissioned a special photography team to capture a 3D scan of the museum’s interior to include in your project as an asset. Sit back and enjoy the countdown to the raise in salary you will inevitably receive for an excellent job.

Summary

Movement is a tricky beast in VR. Camera motion for some users might pose no problem; for others it could cause illness. The best course of action we, as developers, can follow is to provide our users with options. Tools provided by VRTK, like Pointers and Locomotion prefabs, allow us to provide different experiences for users. Fundamentally, movement in VR is not a make-or-break component of a piece. Like lights, textures, materials, and objects, movement is but one more instrument in our case, one more color on our palette as designers of VR.

In this chapter you learned how to prototype your own environment using only primitive objects in Unity. Applying materials and textures to primitive game objects can help developers prototype ideas quickly without sacrificing too much of their vision. You also used your familiarity with Unity Button Actions and the TrackedAlias game object to place a VRTK pointer and locomotion component in your scene. Finally, using original scripting you created two different custom actions: one to move the user through space, and another to create a smart function that not only determines the nearest statue to a user, but also toggles the state of its informational menu.

Conclusion

So ends our time together in this book. I hope you had as positive of an experience learning how to use VRTK with Unity as I did passing on my lessons to you. VR continues to expand as a medium, and every day its promises grow. As the new media of VR, AR, and MR converge, the skills and knowledge you have picked up in these pages will serve you well. I am sure of it. Earlier in the book I offered you a guarantee that you would reach the final page confident that you could prototype any original VR experience you could imagine. I hope you are not disappointed. Although you might feel like you have only seen the tip of the iceberg, I assure you that there are very few additional skills or secrets that you do not know. The only difference between you, now, and a professional VR developer is time and practice. Fortunately, the one thing that for certain makes a strong VR developer you already have—your own unique imagination.

If you close this book curious about what you can make in VR on your own, then I will have considered my job a success. If, however, you feel more confused than you did during Chapter 1, I encourage you to give it time. A year and a half ago, when I started to learn how to program C#, an instructor at a bootcamp told me that learning to code is really learning how to think about problems in a new way. The further I explore the systems and patterns of designing applications, the better I understand what he meant. Coding is not the language. It’s not even the rules of the interface of an application like Unity. Coding is thinking about a problem, breaking it into smaller problems, and solving each smaller problem step-by-step.

If you read all the chapters in this book, if you completed each exercise, then you already know the tools at your disposal to solve the problems you might face. The true, unbridled creativity of programming, especially in a medium like VR, in my opinion, lies in the innumerable ways you can decide to solve a problem. Some solutions might be easy, whereas some might require months of patient research. No matter the answers we find, however, every step of the way is a step into a new idea, a new version of ourselves. The fantasy of VR has been with us ever since man touched paint to stone. The only difference between then and now is that today creating VR is possible. Who knows what tomorrow will bring?
