In the previous chapter we connected a VRTK 1D Axis Action component to the right trigger button of our touch controller. Unlike buttons, triggers communicate a continuum of data to our programs. However, many times the extra information conveyed through a range of float values is not only unnecessary for a VR application, but also unhelpful. Fortunately, VRTK offers a component to convert a trigger event from a collection of float values to a single boolean value of either on or off.
A Unity Button Action connects a user’s press of a button to a discrete action in our program.
A Unity 1D Axis Action connects a user’s pressure on a trigger to a continuous action in our program.
The Float to Bool action component serves to transform a 1D Axis Action into a Button Action through the medium of a Boolean Action component.
Add a VRTK Interactor to a scene.
Connect a VRTK Interactor to a VR touch controller.
Add an interactable 3D object to a scene.
Create the ability to pick up and move 3D objects using VRTK components.
Add text elements to a scene that respond to user actions.
Script actions triggered by the location of objects placed by a user.
Exercise: A Physician’s Cognitive Testing Tool
Part 1: The Scene
Part 1 of this exercise will address the visual elements of the scene, such as the dressing of our set and the appearance of our props. However, because VRTK prefabs offer functionality without the requirement of code, you can create interaction in a scene with only what we cover in Part 1. Part 2 takes us into the code behind the scene that drives the logic of the application. In concert, Parts 1 and 2 provide a complete picture of the elements that make up an interactive VR experience and the code that connects them.
Step 1: Set Up a VR-Supported 3D Project in Unity with VRTK and a VR SDK
Hopefully by now you feel a bit more comfortable setting up a Unity project with VRTK. If you still feel uneasy, refer back to the previous chapters for reference.
Step 2: Add a TrackedAlias Prefab and a Virtual Camera Rig
As we have done in previous exercises, add the TrackedAlias prefab and the virtual camera rig best suited to the needs of your system. If you are using an SDK-specific camera rig connected to an HMD, then you can forgo the UnityXRCameraRig. Refer to Chapter 6 to review linking an SDK-specific camera rig (e.g., the Oculus OVRCameraRig) to the TrackedAlias prefab.
Step 3: Add 3D Objects to the Scene for Setting
Because we are re-creating a doctor’s office, let’s sketch out the bare bones of the set dressing we will need. Create a plane object, and rename it floor. Create a cube object, and rename it table. Create a plane object, and rename it plane. Refer to my transform settings, shown here, to place your objects in the scene:
floor: Position (0, 0, 0), Scale (10, 0.5, 10)
table: Position (0, 0.25, 1), Scale (3, 0.5, 1)
plane: Position (-0.8, 0.05, -0.2), Scale (0.1, 0.1, 0.1)
Step 4: Add VRTK Interactors to the Scene
This step connects our VR touch controllers to the VRTK interface to facilitate communication between our VR SDK and Unity. To do this, drag and drop the VRTK Interactor prefabs from the Project window to the TrackedAlias’s controller aliases. Refer to Chapter 6 to review connecting Interactors to the TrackedAlias prefab.
Step 5: Set Up the Game Objects with Which the User Will Interact
Here, we do something new.
Because we would like three different shapes to meet Dr. Pallavi’s requirements, we will replace the default meshes of each Interactable with a unique shape.
Step 6: Set a Unique Default Mesh Property for Each Interactable Game Object
Highlight the topmost parent object of the cube mesh in the Scene Hierarchy. In the Inspector, change its name to Cube as well, and set its Transform.Position values to (-0.7, 0.75, 0.65).
Troubleshooting: My Transforms Are All Messed Up!
If you find that the transforms for your mesh filters and parent interactable objects don’t align, then you might have edited the position values of the mesh filter instead of the parent object. Because the properties of a parent object cascade down to its child and not the other way around, changing the position values on the mesh filter will not align with the position values of the parent object automatically. Because we only aim to change the scale property of the mesh filter component, we leave its position and rotation transform values at their defaults. Instead, we manipulate the transform position values of the master parent game object so that the position of all children objects remains aligned.
You might be asking yourself why we went through the work of naming our Interactable objects and their default mesh filters the same name. For example, why did we name the Cube Interactable’s mesh filter Cube, the Sphere’s Sphere, and the Cylinder’s Cylinder? In Part 2 we will write the C# code that drives the application logic of the scene. We will need a way to compare whether the object selected by the users is the same shape as the object they were instructed to pick up. By matching the name of the Interactable shape to its mesh filter, we create two values we can compare to test equality. We create the code for this logic in Step 4 of Part 2 of this exercise. You’ll know you’ve reached it when you come across the totally random keyword CUPCAKE.
Patient
Actions: Grab_Object, Release_Object
Evaluator
Actions: Start_Test, Issue_Object_To_Select, Determine_If_Correct
Marquee
Actions: Write_To_Screen
Collector
Actions: Identify_Released_Object
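Translated into plain C#, the class-and-action sketch above might look like the following stubs. These are purely illustrative; in our VR application, VRTK components will stand in for the Patient's actions, and the Evaluator's role will fall to the GameManager script we write in Part 2.

```csharp
// Illustrative class stubs for the design sketch above. All names are
// hypothetical; VRTK and the Part 2 scripts will play these roles instead.
public class Patient
{
    public void GrabObject() { /* handled by VRTK Interactors in our scene */ }
    public void ReleaseObject() { }
}

public class Evaluator
{
    public void StartTest() { }
    public void IssueObjectToSelect() { }

    // Compares the selected shape's name to the expected shape's name.
    public bool DetermineIfCorrect(string selected, string expected)
    {
        return selected == expected;
    }
}

public class Marquee
{
    public void WriteToScreen(string message) { }
}

public class Collector
{
    public string IdentifyReleasedObject(string objectName)
    {
        return objectName;
    }
}
```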
If we were creating a traditional 2D program, we would embellish the Patient class with code to define the actions the user will perform. However, because we are developing a 3D, VR application, we can avail ourselves of the GUI provided by Unity to facilitate our scripting.
Step 7: Create a User Grab Action
We have determined that the user must be able to, at the very least, grab and release game objects in our scene. Our input will be the user’s press of a trigger button on the controller that simulates a grab. Our output will be the visual feedback to the user of his or her hand holding a virtual object. How, then, can we connect a user’s action in the real world with an object in the virtual world?
The Interactor Facade component has two parameters in its Interactor Settings: Grab Action and Velocity Tracker. In this exercise we only concern ourselves with the Interactor Facade’s Grab Action. A Grab Action is a property unique to VRTK. Its data type is a VRTK-defined BooleanAction, which means the Grab Action property holds either a true or false value. The event wired as a publisher to the Grab Action property determines whether Grab Action represents a true BooleanAction or a false BooleanAction. What is the event for which a Boolean Grab Action listens? It is the press of the VR controller button that simulates for a user what it is like to grab an object. In this example, let’s make that button the right grip trigger for an Oculus Rift touch controller.
What’s a Facade?
Computer programming, like architecture, industrial engineering, and even writing for that matter, is as much an art as it is a science. Consequently, elegant solutions to complex problems are as subjective to the developer as drama is to playwrights. In short, there is no right answer. There can be, however, right answers. In scripting, a facade is a design pattern that uses one class as an interface for a subsystem of interconnected parts. The control panel in a cockpit implements a facade design pattern, for example. The moving parts responsible for the plane’s flight are not isolated in the nose of the plane, beneath the control panel. They, like the functions and code libraries in a Unity project, are distributed across the system. A facade, then, is the hub through which subcomponents connect, exposing their functionality to the user through a discrete interface.
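To make the idea concrete, here is a minimal, hypothetical C# sketch of a facade: the CockpitControls class exposes one simple method while hiding the subsystem of moving parts behind it. All class and method names here are invented for illustration.

```csharp
// Two subsystem classes distributed "across the plane."
class Engine
{
    public float Throttle;
    public void SetThrottle(float t) { Throttle = t; }
}

class Flaps
{
    public bool Extended;
    public void Extend() { Extended = true; }
}

// The facade: one class acting as the interface to the subsystem,
// like the control panel in a cockpit.
class CockpitControls
{
    public Engine Engine = new Engine();
    public Flaps Flaps = new Flaps();

    public void PrepareForLanding()
    {
        // The caller sees one discrete action; the moving parts stay hidden.
        Engine.SetThrottle(0.3f);
        Flaps.Extend();
    }
}
```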
Now, we have an event handler listening to our touch controller’s right grip trigger as mapped through the VRTK TrackedAlias prefab. The specific event to which we’d like to listen is the Value Changed (Single) event. In Chapter 7 we listened for this same event on our main trigger button to control the intensity of a point light. Listening to this input from the user will yield a float value correlated to the degree to which they’ve pressed their trigger.
Unlike dimming a light, however, grabbing and holding an object does not require a range of values. To grab an object, users only have to press the grip trigger past a certain point. Therefore, a continuous range of float values is not helpful toward our goal of allowing users to grab, hold, and move an object with their hand. We need a mechanism that converts the information communicated by the 1D Axis Action Value Changed event handler from a float value to a boolean value. The boolean value will allow us to determine, simply, whether the user has grabbed an interactable object or not.
Step 8: Convert the Float Value to a Boolean Value
VRTK offers a convenient component to help us accomplish this task. It is called, appropriately, Float to Boolean. To map the float value captured by the Unity Axis 1D Action component to a bool we must chain the 1D Action component to a Float to Boolean component. To do this we first create another empty game object in our Hierarchy called RightTriggerPressed. With the new game object highlighted in the Hierarchy, click Add Component in the Inspector and search for Float to Boolean.
The Positive Bounds field on the Float to Boolean component sets the threshold at which the Transformed event will fire. Pressing the right grip trigger only halfway, for example, will not trigger the event. The user must press the grip trigger at least 75% of the way down to trigger the function that converts the Unity Axis 1D Action float value to a transformed boolean. This serves our purpose well because we’d only like the users to be able to grab and hold an object if they intentionally clench their fist on the controller in the presence of an Interactable object.
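Under the hood, the conversion the Float to Boolean component performs amounts to a simple threshold test. The sketch below is an illustration only, not VRTK's actual implementation; the PositiveBound field stands in for the component's Positive Bounds setting.

```csharp
// An illustrative stand-in for VRTK's Float to Boolean component.
// This is not the real implementation, only the threshold idea.
public class FloatToBoolSketch
{
    // Comparable to the Positive Bounds setting in the Inspector.
    public float PositiveBound = 0.75f;

    public bool Transform(float triggerValue)
    {
        // Values at or past the bound count as a press; anything less does not.
        return triggerValue >= PositiveBound;
    }
}
```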
A Boolean Action is a VRTK-specific data type, which means it is a unique class VRTK has constructed from the elements of a base class. The creation of classes that build on the functionality of other, broader classes is called inheritance, and it is one of the main principles of the OOP paradigm.
The OOP Paradigm
Most developers agree that the four main principles of OOP are encapsulation, abstraction, inheritance, and polymorphism. We’ve addressed all four in this text, broadly, but I’ll leave it to the more ambitious among you to delve deeper into the numerous resources for OOP available online.
Trigger ➤ Axis 1D Action ➤ Float to Bool ➤ Boolean Action ➤ Grab Action
This is the chain of actions we are attempting to devise to avoid creating any original code to drive our users’ ability to grab, hold, and move objects in our scene. To connect our Boolean Action component to our Float to Bool component on our RightTriggerPressed game object, we need to call the Boolean Action Receive() method from the Float to Boolean component’s Transformed() event handler.
By this point, we’ve created a mechanism to measure the input from the user’s right grab trigger through the RightTriggerAxis game object’s Unity Axis 1D Action Component; we’ve created a mechanism to convert the float value from the Axis 1D Action component into a boolean value; and we’ve sent that boolean value to a Boolean Action component on the RightTriggerPressed game object. There are two steps remaining to complete the chain of events.
Step 9: Connect the RightTriggerPressed Object to the RightTriggerAxis Object
The first step is a step we glossed over to create our RightTriggerPressed game object. We haven’t yet defined how our float value captured by our RightTriggerAxis Axis 1D Action component will reach the Float to Boolean action component on the RightTriggerPressed game object. To do so, let’s add a listener to our Value Changed event handler on our RightTriggerAxis Unity Axis 1D Action component.
Step 10: Connect the Controller to the Interactor Facade
We also know that the last link in our chain of events we touched was the BooleanAction component on our RightTriggerPressed game object. Therefore, we know the value published by our Receive() method on our BooleanAction component on our RightTriggerPressed game object will be of type BooleanAction. To complete our chain of events, we simply need to connect the message sent from our BooleanAction.Receive() method to our Interactor object’s InteractorFacade.GrabAction property.
1. User pulls trigger.
2. Unity Axis 1D Action Value Changed event fires.
3. At 75% pressure, the Float to Boolean Transformed event fires.
4. The float value is converted to a bool.
5. The bool value is received by the Boolean Action Receive event.
6. The Boolean Action sends the bool value to the Grab Action property on the Interactor prefab.
7. The Interactor prefab attaches to the user's virtual hand to simulate grabbing and moving an object.
Save the scene and press Play to test the application. When you are through, if everything has worked according to plan, and you are able to pick up the interactable shapes by gripping the correct trigger on the controller, then repeat the steps we covered to create a Grab Action for the Left Trigger Grip button.
Step 11: Add User Interface Canvas Object to Scene
One fundamental element of VR design we have not addressed yet is UI design. The UI of an application is the client-facing portion that often handles visual input and output. Text, for example, is a common, simple component of UI design.
Set the Rect Transform properties of the Canvas object to the values shown in Figure 8-34.
Step 12: Add a Second TextMeshPro Object as a Child to the Canvas Object
With the Canvas object selected in the Hierarchy, right-click and add a second Text - TextMeshPro object. Name the first text object Text_go and the second object Text_response.
If you play-test the scene you should see the text fields appear slightly above your head to the left through your HMD. You can set the parameters of the Canvas and text object transforms to your liking.
The larger takeaway from adding the Canvas object and its TextMeshPro children is the role of the Render Mode property on the Canvas’s Canvas component. Placing a Canvas in a scene in World Space allows us to manipulate the size and appearance of the Canvas relative to the size and appearance of other objects in our scene. As it is a game object unto itself, the Canvas and its children reside in our scene just like every other game object, and like every other game object we can transform the value of its properties through scripting, even its text.
Part 2: The Code
By now you’ve learned all there is to know about connecting Interactors and Interactables in a Unity scene using VRTK. Without touching any C# code, you leveraged the Unity Event System to create event handlers, assign publishers, assign listeners, send delegates, and fire events.
What’s All This Talk of Delegates?
Delegates are usually discussed in advanced sections of C# instruction. For the scope of this book, it is sufficient to understand a delegate as a variable for a function. Much like variables, which can store the values of objects, delegates can store the value of a function. The principles of OOP frown on exposing the innards of classes to other classes. As a result, accessing the methods of classes from outside their scope can be difficult. The benefit is loosely coupled code; the cost is the overhead of extra planning. By storing the values of functions, delegates serve as a messenger service that can call functions between classes. VRTK Actions and Unity Events are a relative of delegates in this regard.
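A short, self-contained C# example can make the idea concrete. Here a delegate type stores a reference to the Announce() method and calls it later, just as a variable stores and recalls a value. The class and method names are invented for illustration.

```csharp
// A delegate is declared like a method signature; an instance of it can
// store any function matching that signature.
public delegate void MessageHandler(string message);

public class Loudspeaker
{
    public string LastMessage = "";
    public void Announce(string message) { LastMessage = message; }
}

public static class DelegateDemo
{
    public static string Run()
    {
        var speaker = new Loudspeaker();

        // Store the Announce method in a variable, much like storing a value.
        MessageHandler handler = speaker.Announce;

        // Invoking the delegate calls the function it stores.
        handler("Select the cube");

        return speaker.LastMessage;
    }
}
```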
What follows is a quick and dirty prototype of a program that executes the logic of Dr. Pallavi’s test. Whereas she, as the physician, can control the flow and pace of the exercise in person, in a virtual application we need software to serve the role of adjudicator. The detail of the code I have written is beyond the scope of our lessons together, but it is available on the course GitHub page with commentary. I have taken pains to keep the code limited to concepts we have already discussed in this book. Although the code is not production-ready, I do feel it adequately stays within the scope of our lessons. I include images of the complete code in this text, but I only sparingly draw attention to its details for the sake of both clarity and brevity.
Step 1: Set Up the GameManager Script
To begin, create a folder called Scripts in your Assets folder in the Project window and create a new C# script. Name the script GameManager and open it in your IDE. Beneath the class declaration, define eight public fields: three GameObjects named shape_1, shape_2, and shape_3; three Vector3 objects named origin_0, origin_1, and origin_2; and two public arrays, one of type GameObject and one of type Vector3.
1. Creates an array of three indexes called shapes_array, which holds three shape objects of type GameObject.
2. Creates an array of three indexes called origins_array, which holds three Vector3 objects that save the starting locations of the three interactable shape objects in our scene.
3. Stores references to the Plane, Collector, and Marquee objects in our scene.
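As a rough sketch, the GameManager described above might begin like the following. The field names are my assumptions; the complete, commented script is on the course GitHub page.

```csharp
using UnityEngine;

// A sketch of the GameManager's fields and setup. Field names are
// assumptions; the full script lives in the course repository.
public class GameManager : MonoBehaviour
{
    // Assigned in the Inspector by dragging in the interactable shapes.
    public GameObject shape_1, shape_2, shape_3;
    public Vector3 origin_0, origin_1, origin_2;

    public GameObject[] shapes_array;
    public Vector3[] origins_array;

    void Start()
    {
        // Gather the three shapes into one collection.
        shapes_array = new GameObject[] { shape_1, shape_2, shape_3 };

        // Save each shape's starting location so it can be restored later.
        origins_array = new Vector3[shapes_array.Length];
        for (int i = 0; i < shapes_array.Length; i++)
            origins_array[i] = shapes_array[i].transform.position;
    }
}
```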
What’s an Array?
An array is a primitive data structure that holds objects in indexes. The first index of an array is conventionally identified as index 0. Arrays commonly hold collections of values that together comprise a set. Notably, arrays in C# require an exact number of indexes specified on creation. Capping the size of an array at compile time allows the computer to allot the exact amount of memory required to store the collection.
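A quick C# illustration of these points: the array below is created with exactly three indexes, and its first element lives at index 0.

```csharp
// Arrays in C# are created with a fixed number of indexes, and
// indexing begins at 0.
public static class ArrayDemo
{
    public static string FirstShape()
    {
        string[] shapes = new string[3];  // exactly three slots, sized at creation
        shapes[0] = "Cube";               // the first index is 0, not 1
        shapes[1] = "Sphere";
        shapes[2] = "Cylinder";
        return shapes[0];
    }
}
```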
Step 2: Create the Marquee and Collector Classes
The Awake() function in the Marquee script executes before Unity calls the Start function in the GameManager class, even though the GameManager’s Start() function initializes the Marquee object. It is a good rule of thumb to use the Awake() method on a MonoBehaviour to initialize any components that need to be present before the game loop executes.
In the Marquee class, the Awake function essentially stores references to the TextMeshPro objects we placed as children of the Canvas object in the Scene Hierarchy. In this regard, the Marquee script controls the appearance of the billboard, or marquee, represented by the Canvas in the scene. Using the FindObjectOfType() method, which is also part of the UnityEngine code library, the Marquee class stores references to the other classes in our project.
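A hypothetical sketch of the Marquee's Awake() method might look like the following; the exact field names and lookup calls in the book's repository may differ.

```csharp
using TMPro;
using UnityEngine;

// A sketch of the Marquee script, assumed to sit on the Canvas object.
// Field names are assumptions based on the text objects we created.
public class Marquee : MonoBehaviour
{
    private TextMeshProUGUI text_go;
    private TextMeshProUGUI text_response;
    private GameManager gameManager;

    void Awake()
    {
        // Awake runs before any Start() method, so these references exist
        // by the time the GameManager's Start() initializes the Marquee.
        text_go = transform.Find("Text_go").GetComponent<TextMeshProUGUI>();
        text_response = transform.Find("Text_response").GetComponent<TextMeshProUGUI>();

        // FindObjectOfType stores references to other classes in the scene.
        gameManager = FindObjectOfType<GameManager>();
    }
}
```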
Beware: Tightly Coupled Classes
Storing references to other classes in a class is an easy way to stumble into bad coding habits. The result is what is called tightly coupled code. Tightly coupled code leads to spaghetti code, the dreaded condition of interdependent logic with the potential to break at every point of connection. As Grady Booch, a master of software architecture, noted, designing classes and their interactions is a fine art derived through iteration. It is nearly impossible to write loosely coupled code on a first pass. Because this application is an exercise in prototyping, I won't stress ideal software design composition. It is valuable to understand that although loosely coupled code is the goal, perfection cannot be the enemy of good when prototyping rapidly through iteration. As always, though, practice makes perfect.
If the GameManager is the brains of our application, setting everything up and coordinating the flow of control, and the Marquee class is the logic of our Canvas billboard, what then is the Collector class?
Step 3: Create a Trigger Collider
Selecting both Convex and Is Trigger creates an actionable area around the Plane object. Any object with a RigidBody component attached can trigger an event on entering the Plane’s mesh collider. Because the Interactable prefabs we dragged and dropped from the VRTK prefab folder have RigidBody components attached to them by default, they have the potential to trigger an action on entering the Plane’s mesh collider. We will use this trigger event handling feature of the Plane’s mesh collider to prompt a change in our Marquee object.
Step 4: Create an Event Handler for the Trigger Collider
In the OnTriggerEnter() method, the parameters of the BroadcastMessage() function, which comes from the Component class of the UnityEngine code library, are the name of the function to call and the value to send to it.
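A sketch of how that handler might read is shown below. The objectName field and the comparison are assumptions drawn from the surrounding description; only the OnTriggerEnter()/BroadcastMessage() pattern itself is fixed by Unity.

```csharp
using UnityEngine;

// A sketch of the Collector's trigger handler, attached to the Plane.
public class Collector : MonoBehaviour
{
    // Set by the GameManager: the name of the shape the user was told to select.
    public string objectName;

    void OnTriggerEnter(Collider other)
    {
        // BroadcastMessage takes the name of the function to call and the
        // value to send, and calls it on components of this game object.
        string collected_objectName = other.gameObject.name;
        BroadcastMessage("SetCorrect", collected_objectName == objectName);
    }
}
```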
Step 5: Create a Method to Tell User Which Shape to Select
The SetMarqueeObjectName() method takes an integer as a parameter. The integer represents the "round" of the test the user is in. Each round corresponds to one shape object in the GameManager's shapes_array collection, which comprises the interactable objects we placed in our scene: cube, sphere, and cylinder. As long as the round number is lower than the number of shapes in our array (considering we initialized our "round" variable to begin at zero), the Marquee object will broadcast the shape the user is to select to the billboard in our scene. The GameManager asks the Marquee to set the name of the object it would like the user to select by calling the Marquee's WriteToObject() method and passing as a parameter the name of the shape pulled from the array.
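In code, the round logic might be sketched like this; the method body is an assumption consistent with the description above, and the full version lives in the course repository.

```csharp
// Inside the GameManager class: a sketch of SetMarqueeObjectName().
// The marquee field and helper call are assumptions based on the text.
public void SetMarqueeObjectName(int round)
{
    // Each round indexes one shape in shapes_array; rounds start at 0.
    if (round < shapes_array.Length)
    {
        // Ask the Marquee to display the name of the shape to select.
        marquee.WriteToObject(shapes_array[round].name);

        // Tell the Collector which shape counts as correct (Step 6).
        SetCollectorObjectName(shapes_array[round].name);
    }
}
```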
Step 6: Create a Method to Tell Collector Which Shape Is Correct
Called by the SetMarqueeObjectName() method, SetCollectorObjectName() sets the objectName property on the Collector class equal to the name of the shape the Marquee has broadcast to the user to select.
If the value of the objectName variable matches the value of the collected_objectName value, then the Collector broadcasts a message to the other components attached to its game object, calling the SetCorrect method and passing as a parameter the value true.
Step 7: Define a Listener Function for the OnTriggerEnter Event Handler
Speaking of which, the BroadcastMessage() method called from the Collector’s OnTriggerEnter() event handler requests a message from all attached components within earshot (on the same game object) to call a method called SetCorrect and set as its argument either true or false, depending on whether the user has provided the same shape expected by the Marquee.
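The listener could be as simple as the hypothetical component below. In the actual project other scripts respond to the result, but any component on the same game object that defines a SetCorrect(bool) method qualifies as a receiver of the broadcast.

```csharp
using UnityEngine;

// A hypothetical listener for the Collector's broadcast. BroadcastMessage
// finds this method by name on components of the same game object.
public class ResultListener : MonoBehaviour
{
    public void SetCorrect(bool correct)
    {
        Debug.Log(correct ? "Correct shape!" : "Incorrect shape.");
    }
}
```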
Step 8: Create a GameOver() Method in the GameManager Class
Step 9: Define Methods in the Marquee Class to Write to the Canvas Object’s Text Fields
The GameManager’s GameOver() method makes a call to a function the GameManager has called once before in its SetMarqueeObjectName() method: marquee.WriteToObject().
Step 10: Connect the GameManager Script to an Empty Game Object
Finally, to connect our game logic to our Unity Scene, create an empty game object in the Scene Hierarchy in the Unity Editor and name it GameManager. If you have coded along with the exercise, attach the GameManager.cs script as a component to the GameManager game object.
Step 11: Drag and Drop the Interactable Shape Objects into the GameManager
Step 12: Play-test
What’s Up with My Velocity Tracking?
If you’re using the Oculus Rift touch controllers, like me, then you have to go through a few more steps to correctly calibrate the velocity trackers on your controllers with the VRTK TrackedAlias object. You can find details about this process in the documentation on the VRTK GitHub page. Velocity tracking for touch controllers is essential for re-creating the realistic physics of throwing an object, for example.
In a roundabout way, you’ve now seen every line of code in the application. The manner in which I presented it to you generally traces the path of execution of the program. It’s important to note that the code in this section of the chapter, Part 2, does not have any relationship to the VRTK components we set up in Part 1. You can apply interactivity to your VR experiences using only VRTK Interactors and Interactable prefabs. The code I’ve presented to you in Part 2 is a rudimentary example of how you could implement application logic and flow control behind the scenes of your experience. Once you are comfortable using out-of-the-box features of Unity scripting, like GameObject.Find() and GetComponent<T>(), then you can begin study of more advanced scripting techniques like custom delegates and events.
Until then, however, it is no small feat to prototype behavioral logic using Unity and C#, even if the design of your code or mine would not pass muster at a startup's code review. Design patterns such as Singletons and Mediators are tremendously helpful tools for creating functionality and should serve as goals for your further development. Meanwhile, the fundamentals we have addressed in this chapter and so far in this book provide you with the necessary skills to begin prototyping your own VR experiences immediately. After all, updating game objects, such as text, in response to a user's input is but one example of the many simple events, achievable through beginner-level code, that can facilitate convincing immersion in a VR scene.
Summary
Running Dr. Pallavi’s test with the VRTK components wired up and the GameManager script applied provides a reasonable prototype for the doctor. She approves of the work we’ve done for her and looks forward to iterating the project with more bells and whistles.
In this chapter you added Interactable prefabs from the VRTK library into a scene. You facilitated user interaction with those prefabs by connecting the user’s VR controllers with VRTK’s Interactor prefabs. Using the VRTK Float to Boolean component, you converted a user’s pressure on a trigger button into a yes/no event. Through C# code you created trigger events connected to game objects’ Collider components. You used built-in Unity functions to broadcast messages to game objects to create complex logic, and you configured code to write dynamic text to a canvas in VR world space.
In the next and final chapter, we address the last looming concept in VR design left for us to explore: getting around a scene in a way that collapses geography into geometry. In other words, how do we create movement without moving? Until VR-synced omnidirectional treadmills become commonplace, it is our responsibility as immersive experience designers to devise ways for users to travel through a scene while taking into account the limitations of their bodies, systems, and environments. Fortunately, VRTK has tools for that. They’re called Pointers and Locomotion, and we dive into them next.