© Rakesh Baruah 2020
R. Baruah, Virtual Reality with VRTK 4, https://doi.org/10.1007/978-1-4842-5488-2_8

8. Interactors and Interactables

Rakesh Baruah1 
(1)
Brookfield, WI, USA
 

In the previous chapter we connected a VRTK 1D Axis Action component to the right trigger button of our touch controller. Unlike buttons, triggers communicate a continuum of data to our programs. However, many times the extra information conveyed through a range of float values is not only unnecessary for a VR application, but also unhelpful. Fortunately, VRTK offers a component to convert a trigger event from a collection of float values to a single boolean value of either on or off.

The Float to Bool component in VRTK is the third and final input handler we will discuss; the other two are the Unity Button Action and the 1D Axis Action.
  • A Unity Button Action connects a user’s press of a button to a discrete action in our program.

  • A Unity 1D Axis Action connects a user’s pressure on a trigger to a continuous action in our program.

  • The Float to Bool action component serves to transform a 1D Axis Action into a Button Action through the medium of a Boolean Action component.

A Boolean Action component converts the float values from a Unity Axis Action and connects the Axis Action to an event that has either an on or off state (Figure 8-1). For example, if we created a program that simulated the behavior of a nail gun, then we would want to convert a user’s pressure on a trigger to one of two states: fire a nail or don’t fire a nail. The pressure a user places on a trigger is immaterial. We’d simply want to know if they pulled the trigger on a touch controller past a certain value to activate an event that fired a virtual nail. For such a purpose, the VRTK Float to Bool and Boolean Action components perform well. The Float to Bool component will tell our program if and when a user pulls a touch trigger past a certain point; the Boolean Action component will then trip the “fire nail” event in the program’s script.
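Stripped of VRTK and Unity, the nail-gun decision reduces to a single threshold test. The following engine-free C# sketch is only an illustration of that idea; the class and method names are hypothetical, not VRTK's:

```csharp
using System;

// A minimal sketch of the conversion described above: a trigger's
// 0-to-1 float reading becomes a single fire/don't-fire decision
// once it crosses a threshold.
public static class NailGunTrigger
{
    // Mirrors the 75% threshold we use later in this chapter.
    public const float FireThreshold = 0.75f;

    // True means "fire a nail"; false means "don't fire a nail".
    public static bool ShouldFire(float triggerValue)
    {
        return triggerValue >= FireThreshold;
    }
}
```

The pressure itself is discarded; only the crossing of the threshold matters, which is exactly the reduction the Float to Bool component performs for us.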
Figure 8-1

An Axis 1D Action component captures a range of float values. A Float to Bool component transforms the spectrum of float information into one of two states: true or false

Like many of the concepts conveyed in this text, it will be most helpful to put the theory into practice. Therefore, in this chapter you will do the following:
  • Add a VRTK Interactor to a scene.

  • Connect a VRTK Interactor to a VR touch controller.

  • Add an interactable 3D object to a scene.

  • Create the ability to pick up and move 3D objects using VRTK components.

  • Add text elements to a scene that respond to user actions.

  • Script actions triggered by the location of objects placed by a user.

Exercise: A Physician’s Cognitive Testing Tool

Dr. Tanisha Pallavi is a practicing neurologist. She specializes in seeing patients who suffer from cognitive impairment. One exercise she uses to test a patient’s reasoning abilities is a simple shape selector. She instructs a patient to pick up an object of a particular shape and place it in a bin. If the patient succeeds, she can rule out any degenerative diagnosis. However, if the patient either selects the incorrect shape or demonstrates an inability to place it in a bin, then she knows further examination of the patient’s cognitive and motor skills is prudent. To help Dr. Pallavi perform the test at scale and even administer it remotely, we will create a VR app that reproduces the requirements of the examination (Figure 8-2).
Figure 8-2

This is a Scene view of the final prototyped testing application

Part 1: The Scene

Part 1 of this exercise will address the visual elements of the scene, such as the dressing of our set and the appearance of our props. However, because VRTK prefabs offer functionality without the requirement of code, you can create interaction in a scene with only what we cover in Part 1. Part 2 takes us into the code behind the scene that drives the logic of the application. In concert, Parts 1 and 2 provide a complete picture of the elements that make up an interactive VR experience and the code that connects them.

Step 1: Set Up a VR-Supported 3D Project in Unity with VRTK and a VR SDK

Hopefully by now you feel a bit more comfortable setting up a Unity project with VRTK. If you still feel uneasy, refer back to the previous chapters for reference.

Step 2: Add a TrackedAlias Prefab and a Virtual Camera Rig

As we have done in previous exercises, add the TrackedAlias prefab and the virtual camera rig best suited to the needs of your system. If you are using an SDK-specific camera rig connected to an HMD, then you can forgo the UnityXRCameraRig. Refer to Chapter 6 to review linking an SDK-specific camera rig (e.g., the Oculus OVRCameraRig) to the TrackedAlias prefab.

Step 3: Add 3D Objects to the Scene for Setting

Because we are re-creating a doctor’s office, let’s sketch out the bare bones of the set dressing we will need. Create a plane object, and rename it Floor. Create a cube object, and rename it Table. Create a second plane object, and rename it Plane. Refer to my transform settings, shown here, to place your objects in the scene:

CameraRig
  • Position: 0, 0, 0

  • Scale: 1, 1, 1

TrackedAlias
  • Position: 0, 0, 0

  • Scale: 1, 1, 1

Floor
  • Position: 0, 0, 0

  • Scale: 10, 0.5, 10

Table
  • Position: 0, 0.25, 1

  • Scale: 3, 0.5, 1

Plane
  • Position: -0.8, 0.05, -0.2

  • Scale: 0.1, 0.1, 0.1

Further, you can set the color of the materials for each object to your preferences. Your scene should resemble Figure 8-3.
Figure 8-3

This is a bird’s-eye view of the game objects that define the setting of the scene

Step 4: Add VRTK Interactors to the Scene

This step connects our VR touch controllers to the VRTK interface to facilitate communication between our VR SDK and Unity. To do this, drag and drop the VRTK Interactor prefabs from the Project window to the TrackedAlias’s controller aliases. Refer to Chapter 6 to review connecting Interactors to the TrackedAlias prefab.

Step 5: Set Up the Game Objects with Which the User Will Interact

Here, we do something new.

VRTK offers prefab objects replete with the necessary features to facilitate user interaction. They are called Interactables, and, if you downloaded VRTK from GitHub correctly, they are available in the VRTK folder in your Unity Project window. Navigate to Assets ➤ VRTK ➤ Prefabs ➤ Interactions ➤ Interactables (Figure 8-4).
Figure 8-4

The VRTK library offers prefab objects containing within them the logic required to create items a user can grab

The final interactable object listed in my VRTK/Prefabs/Interactions/Interactables folder is called Interactable.Primary_Grab.Secondary_Swap (Figure 8-5).
Figure 8-5

VRTK offers interactable objects with logic suited for different actions

Drag and drop the object three times into the Scene Hierarchy so you have three instances of the object (Figure 8-6).
Figure 8-6

Create three instances of interactable objects by dragging them into the Scene Hierarchy from the VRTK folder in the Project window

Clicking the triangle to the left of one of the Interactable objects in the Hierarchy exposes its children, of which there are three (Figure 8-7).
Figure 8-7

Clicking the triangle to the left of the parent Interactable object reveals its three child objects

Expand the Meshes child object to reveal another child game object named DefaultMesh (Figure 8-8).
Figure 8-8

The DefaultMesh child object of a VRTK Interactable is shown

Highlighting DefaultMesh reveals its components in the Inspector. Each Interactable we dragged into our Hierarchy has a DefaultMesh called Cube (Figure 8-9).
Figure 8-9

A VRTK Interactable object has attached to it by default a Cube mesh

Because we would like three different shapes to meet Dr. Pallavi’s requirements, we will replace the default meshes of each Interactable with a unique shape.

Step 6: Set a Unique Default Mesh Property for Each Interactable Game Object

We will create three shapes to serve as the meshes for our Interactables: a cube, a sphere, and a cylinder. Because the cube mesh is the default mesh for the Interactable, we don’t need to change its shape, only its size. Edit the Scale property of the cube default mesh child game object to (0.2, 0.2, 0.2). Change its name to Cube (Figure 8-10).
Figure 8-10

Edit the properties of the Interactable’s mesh in the Inspector

Highlight the topmost parent object of the cube mesh in the Scene Hierarchy. In the Inspector, change its name to Cube as well, and set its Transform.Position values to (-0.7, 0.75, 0.65).

For the second Interactable object in our Scene Hierarchy, let’s change its DefaultMesh to a sphere. To do so, navigate to the child DefaultMesh object of the second Interactable object in the Hierarchy. With the DefaultMesh object highlighted in the Hierarchy, turn your attention to the Mesh Filter component in the Inspector (Figure 8-11). As we noted, the default mesh filter on a VRTK interactable is a cube. To change the shape of the mesh filter, click the small circle to the right of the object field in the Mesh Filter component.
Figure 8-11

Select the DefaultMesh child object of the second Interactable object to change its properties in the Inspector

Double-click the Sphere mesh to add it as the default mesh on the second Interactable game object (Figure 8-12). Set the Scale values of the DefaultMesh transform that holds the Sphere mesh filter to (0.2, 0.2, 0.2), and change the object’s name to Sphere (Figure 8-13).
Figure 8-12

Clicking the circle with your mouse will open a new Select Mesh window

Figure 8-13

Changing the DefaultMesh value of the second Interactable object to Sphere distinguishes its shape from the Cube interactable

Highlight the Interactable parent object of the Sphere mesh filter, and, in the Inspector, change its name to Sphere and its Transform.Position to (0, 0.9, 0.6) as shown in Figure 8-14.
Figure 8-14

Change the position information of an Interactable object’s parent game object to prevent misalignment with its mesh child object

For the third Interactable object in our Hierarchy, set its mesh filter to a Cylinder; rename the default mesh to Cylinder; set its x, y, and z scale values to 0.2; and rename its parent object Cylinder. Its names and properties should match those shown in Figure 8-15.
Figure 8-15

Make sure the name of the DefaultMesh matches the name of the parent Interactable object. Otherwise, the scripting for the application will not work correctly

Set the Transform.Position values of the parent Cylinder interactable object to (0.6, 0.85, 0.6). The properties of the Cylinder interactable object and the broader scene’s view should resemble Figure 8-16.
Figure 8-16

The final Transform settings for the Cylinder interactable and an example of the Scene view after all Interactable properties have been set are displayed

Troubleshooting: My Transforms Are All Messed Up!

If the transforms for your mesh filters and parent interactable objects don’t align, you might have edited the position values of the mesh filter instead of the parent object. A parent’s transform cascades down to its children, not the other way around, so repositioning the mesh filter will not automatically bring it back into alignment with its parent. Because we only want to change the scale of the mesh filter, we leave its position and rotation values at their defaults and instead move the parent game object, which keeps every child object aligned.
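The cascade the troubleshooting note describes can be seen in a tiny, engine-free sketch. The types here are hypothetical stand-ins for Unity's Transform hierarchy, which does the real work:

```csharp
using System;

public struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
    public static Vec3 Add(Vec3 a, Vec3 b)
    {
        return new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
    }
}

public class Node
{
    public Vec3 LocalPosition;
    public Node Parent;

    // A child's world position is its parent's world position plus the
    // child's own local offset, so moving the parent carries every
    // child along -- but moving the child never moves the parent.
    public Vec3 WorldPosition()
    {
        return Parent == null
            ? LocalPosition
            : Vec3.Add(Parent.WorldPosition(), LocalPosition);
    }
}
```

This is why editing the parent Interactable's position keeps the mesh aligned, while editing the mesh's position drags it out from under its parent.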

You might be asking yourself why we went through the work of naming our Interactable objects and their default mesh filters the same name. For example, why did we name the Cube Interactable’s mesh filter Cube, the Sphere’s Sphere, and the Cylinder’s Cylinder? In Part 2 we will write the C# code that drives the application logic of the scene. We will need a way to compare whether the object selected by the users is the same shape as the object they were instructed to pick up. By matching the name of the Interactable shape to its mesh filter, we create two values we can compare to test equality. We create the code for this logic in Step 4 of Part 2 of this exercise. You’ll know you’ve reached it when you come across the totally random keyword CUPCAKE.
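A hypothetical sketch of the comparison that Part 2's script will depend on: because each Interactable parent shares its name with its mesh ("Cube", "Sphere", "Cylinder"), the shape the patient released can be tested against the shape the evaluator requested with a simple string equality. The class and method names below are illustrative only:

```csharp
using System;

// Sketch of the equality test that name-matching makes possible.
public static class ShapeCheck
{
    public static bool IsCorrect(string requestedShape, string releasedObjectName)
    {
        return string.Equals(requestedShape, releasedObjectName, StringComparison.Ordinal);
    }
}
```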

Before we proceed, let’s better understand what it is we need to do. Dr. Pallavi has asked us to help her simulate her cognitive impairment exercise. The exercise requires a patient to select an object, grasp it, and place it in a bin. So, we know one entity in our program:
  • Patient

    Actions: Grab_Object, Release_Object

Second, because Dr. Pallavi will not be present for the virtual test, we will need something in our program that replaces her. What functions does Dr. Pallavi perform during the test? She begins the test, she instructs the patient which object to choose, and she evaluates if the patient responded correctly. If Dr. Pallavi were a machine, we could categorize this domain of her job during the test as follows:
  • Evaluator

    Actions: Start_Test, Issue_Object_To_Select, Determine_If_Correct

We also need an object in our scene that performs Dr. Pallavi’s function of communicating with the patient.
  • Marquee

    Actions: Write_To_Screen

Finally, we’ll need an object that captures the patient’s object on release so that the program can evaluate it:
  • Collector

    Actions: Identify_Released_Object

If we were creating a traditional 2D program, we would embellish the Patient class with code to define the actions the user will perform. However, because we are developing a 3D, VR application, we can avail ourselves of the GUI provided by Unity to facilitate our scripting.
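Had we gone the traditional route, the three non-player entities might sketch out as plain classes like these. Every name is hypothetical; in the actual project, Unity components and VRTK prefabs will carry most of these responsibilities for us:

```csharp
using System;

public class Evaluator
{
    static readonly string[] Shapes = { "Cube", "Sphere", "Cylinder" };
    readonly Random rng = new Random();
    public string CurrentTarget { get; private set; }

    // Start_Test and Issue_Object_To_Select, combined for brevity.
    public string StartTest()
    {
        CurrentTarget = Shapes[rng.Next(Shapes.Length)];
        return CurrentTarget;
    }

    // Determine_If_Correct.
    public bool DetermineIfCorrect(string shapeName)
    {
        return shapeName == CurrentTarget;
    }
}

public class Marquee
{
    public string LastMessage { get; private set; }

    // Write_To_Screen: in the scene this becomes a TextMeshPro update.
    public void WriteToScreen(string message)
    {
        LastMessage = message;
    }
}

public class Collector
{
    // Identify_Released_Object: in the scene, the bin's collider will
    // report the name of the object dropped into it.
    public string IdentifyReleasedObject(string objectName)
    {
        return objectName;
    }
}
```

The Patient entity is absent because its actions, Grab_Object and Release_Object, are exactly what VRTK's Interactors and Interactables will provide without code.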

Step 7: Create a User Grab Action

We have determined that the user must be able to, at the very least, grab and release game objects in our scene. Our input will be the user’s press of a trigger button on the controller that simulates a grab. Our output will be the visual feedback to the user of his or her hand holding a virtual object. How, then, can we connect a user’s action in the real world with an object in the virtual world?

If you expand the TrackedAlias object in the Scene Hierarchy and highlight the Interactor prefab you added previously to preview it in the Inspector, then you will see a component called Interactor Facade (Script) like Figure 8-17.
Figure 8-17

The Interactor prefab provided by VRTK includes a Facade component, which exposes events and properties into which we can hook user actions to facilitate grabbing an object in our scene

The Interactor Facade component has two parameters in its Interactor Settings: Grab Action and Velocity Tracker. In this exercise we only concern ourselves with the Interactor Facade’s Grab Action. A Grab Action is a property unique to VRTK. Its data type is a VRTK-defined BooleanAction, which means the Grab Action property holds either a true or false value. The event wired as a publisher to the Grab Action property determines whether Grab Action represents a true BooleanAction or a false BooleanAction. What is the event for which a Boolean Grab Action listens? It is the press of the VR controller button that simulates for a user what it is like to grab an object. In this example, let’s make that button the right grip trigger for an Oculus Rift touch controller.

What’s a Facade?

Computer programming, like architecture, industrial engineering, and even writing for that matter, is as much an art as it is a science. Consequently, elegant solutions to complex problems are as subjective to the developer as drama is to playwrights. In short, there is no right answer. There can be, however, right answers. In scripting, a facade is a design pattern that uses one class as an interface for a subsystem of interconnected parts. The control panel in a cockpit implements a facade design pattern, for example. The moving parts responsible for the plane’s flight are not isolated in the nose of the plane, beneath the control panel. They, like the functions and code libraries in a Unity project, are distributed across the system. A facade, then, is the hub through which subcomponents connect, exposing their functionality to the user through a discrete interface.
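A toy facade in the spirit of the cockpit analogy above, with all names hypothetical: the subsystems stay independent, and the facade exposes one simple entry point that coordinates them.

```csharp
using System;

class FuelSystem  { public bool Pressurize() { return true; } }
class Ignition    { public bool Spark() { return true; } }
class Instruments { public string Status; public void Display(string s) { Status = s; } }

class CockpitFacade
{
    readonly FuelSystem fuel = new FuelSystem();
    readonly Ignition ignition = new Ignition();
    readonly Instruments panel = new Instruments();

    // One call hides the ordered collaboration of three subsystems,
    // much as VRTK's Interactor Facade hides its event plumbing.
    public string StartEngine()
    {
        bool ok = fuel.Pressurize() && ignition.Spark();
        panel.Display(ok ? "ENGINE RUNNING" : "FAULT");
        return panel.Status;
    }
}
```

The caller never touches the fuel system or the ignition directly; it sees only the single, discrete interface.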

To prompt the cascade of events that culminate in an object responding to a user’s touch, we will begin with our old friend, the Unity Axis 1D Action component. In your Scene Hierarchy, create an empty game object and name it RightTriggerAxis. In the Inspector, click Add Component to add a Unity Axis 1D Action component (Figure 8-18).
Figure 8-18

Add an Axis 1D Action component to an empty game object called RightTriggerAxis

Navigate to the Unity Input Manager through Menu Bar ➤ Edit ➤ Project Settings. Locate the Input Axis named VRTK_Axis12_RightGrip (Figure 8-19).
Figure 8-19

Locate the Right Grip Axis information in the Input Manager

Copy the value of the Name property: VRTK_Axis12_RightGrip. Paste the name of the axis in the Axis Name field of the 1D Axis Action component connected to the RightTriggerAxis game object (Figure 8-20).
Figure 8-20

Pasting the name of the Right Grip Axis input as the Axis Name in a 1D Action component connects trigger input to a Unity event handler

Now, we have an event handler listening to our touch controller’s right grip trigger as mapped through the VRTK TrackedAlias prefab. The specific event to which we’d like to listen is the Value Changed (Single) event. In Chapter 7 we listened for this same event on our main trigger button to control the intensity of a point light. Listening to this input from the user will yield a float value correlated to the degree to which they’ve pressed their trigger.

Unlike dimming a light, however, grabbing and holding an object does not require a range of values. Users only have to press the grip trigger past a certain point for the program to decide whether they’ve grabbed the object. Therefore, a continuous range of float values is not helpful toward our goal of allowing users to grab, hold, and move an object with their hand. We need a mechanism that converts the information communicated by the 1D Axis Action Value Changed event handler from a float value to a boolean value. The boolean value will allow us to determine, simply, whether or not the user has grabbed an interactable object.

Step 8: Convert the Float Value to a Boolean Value

VRTK offers a convenient component to help us accomplish this task. It is called, appropriately, Float to Boolean. To map the float value captured by the Unity Axis 1D Action component to a bool we must chain the 1D Action component to a Float to Boolean component. To do this we first create another empty game object in our Hierarchy called RightTriggerPressed. With the new game object highlighted in the Hierarchy, click Add Component in the Inspector and search for Float to Boolean.

After adding a Float to Boolean component to the RightTriggerPressed game object, you will see the component contains two fields: (1) the Transformed (Boolean) event handler list, and (2) the Positive Bounds parameter. Set the Positive Bounds parameter to 0.75, as shown in Figure 8-21.
Figure 8-21

The Positive Bounds parameter of a VRTK Float to Boolean component sets the threshold at which a trigger event sparks an action. VRTK normalizes the boundary of a trigger input’s range: 0 is no pressure, and 1 is fully pressed

The Positive Bounds field on the Float to Boolean component sets the threshold at which the Transformed event will fire. Pressing the right grip trigger only halfway, for example, will not trigger the event. The user must press the grip trigger at least 75% of the way down to trigger the function that converts the Unity Axis 1D Action float value to a transformed boolean. This serves our purpose well because we’d only like the users to be able to grab and hold an object if they intentionally clench their fist on the controller in the presence of an Interactable object.

Once our Float to Boolean action transforms our float value to a boolean value we need to capture the boolean value somewhere to connect it to the Grab Action on our Interactor prefab in our TrackedAlias game object. To accomplish this, we will use the Boolean Action component we introduced in the Interest Calculator exercise of Chapter 6. Recall that in that exercise we connected the CalculateInterest script to a button on our controller by attaching it to our cake object and wiring it through the Activated (Boolean) event. For this exercise, attach a Boolean Action component to the RightTriggerPressed game object, the same game object to which we’ve added the Float to Boolean component (Figure 8-22).
Figure 8-22

Add a VRTK Boolean Action component to the same game object as a VRTK Float to Boolean component to receive the boolean value created by the Transformed event

A Boolean Action is a VRTK-specific data type, which means it is a unique class VRTK has constructed from the elements of a base class. The creation of classes that build on the functionality of other, broader classes is called inheritance, and it is one of the main principles of the OOP paradigm.

The OOP Paradigm

Most developers agree that the four main principles of OOP are encapsulation, abstraction, inheritance, and polymorphism. We’ve addressed all four in this text, broadly, but I’ll leave it to the more ambitious among you to delve deeper into the numerous resources for OOP available online.

The VRTK Boolean Action class builds on the broader VRTK Action class, which contains as part of its definition the methods we can see inside the Unity Editor’s Inspector: Activated, ValueChanged, and Deactivated (Figure 8-23).
Figure 8-23

A view of the abstract Action class in the VRTK code base, from which Boolean Action inherits

The Action class definition also contains methods we do not see in the Unity Editor directly. One of these methods, seen in Figure 8-24, is called Receive.
Figure 8-24

The method signature for the Receive function in the VRTK Action class

Because the BooleanAction class inherits from the more general Action class, it too has the Receive function as part of its definition. It is the Receive method from the BooleanAction class (and its component in our editor) that we would like to call to save the boolean value converted from our Axis 1D Action float value.
  • Trigger ➤ Axis 1D Action ➤ Float to Bool ➤ Boolean Action ➤ Grab Action

This is the chain of actions we are attempting to devise to avoid creating any original code to drive our users’ ability to grab, hold, and move objects in our scene. To connect our Boolean Action component to our Float to Bool component on our RightTriggerPressed game object, we need to call the Boolean Action Receive() method from the Float to Boolean component’s Transformed() event handler.
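The inheritance relationship described above can be mirrored in a simplified, hypothetical sketch. This is an illustration of the idea, not VRTK's actual source: a generic base class defines Receive(), and a boolean subclass gets it for free while adding its own edge detection.

```csharp
using System;

// A generic action base class: stores a value and raises an event,
// loosely modeled on the role VRTK's Action class plays.
public abstract class ActionSketch<T>
{
    public T Value { get; private set; }
    public event Action<T> ValueChanged;

    public virtual void Receive(T value)
    {
        Value = value;
        if (ValueChanged != null) ValueChanged(value);
    }
}

// A boolean subclass inherits Receive() and layers on "Activated"
// behavior: it fires only on the rising edge, false -> true.
public class BooleanActionSketch : ActionSketch<bool>
{
    public int ActivatedCount { get; private set; }

    public override void Receive(bool value)
    {
        bool wasActive = Value;
        base.Receive(value);
        if (value && !wasActive) ActivatedCount++;
    }
}
```

Because the subclass inherits Receive() from the base, anything that knows how to call Receive() on an Action can feed a BooleanAction without knowing its concrete type. That is what lets the Float to Boolean component hand its result to the Boolean Action component.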

With the RightTriggerPressed game object highlighted in the Scene Hierarchy, click + beneath the Float to Boolean Transformed (Boolean) event handler in the Inspector. Drag and drop the RightTriggerPressed game object from the Hierarchy into the empty game object field on the Transformed event handler in the Float to Boolean component. From the function pull-down menu in the Transformed (Boolean) event handler, select BooleanAction ➤ (Dynamic Bool) Receive as shown in Figure 8-25.
Figure 8-25

Dragging the RightTriggerPressed game object on to its Float to Boolean component chains the Boolean Action Receive method to the Transformed event handler

By this point, we’ve created a mechanism to measure the input from the user’s right grip trigger through the RightTriggerAxis game object’s Unity Axis 1D Action component; we’ve created a mechanism to convert the float value from the Axis 1D Action component into a boolean value; and we’ve sent that boolean value to a Boolean Action component on the RightTriggerPressed game object. Two steps remain to complete the chain of events.

Step 9: Connect the RightTriggerPressed Object to the RightTriggerAxis Object

The first step is a step we glossed over to create our RightTriggerPressed game object. We haven’t yet defined how our float value captured by our RightTriggerAxis Axis 1D Action component will reach the Float to Boolean action component on the RightTriggerPressed game object. To do so, let’s add a listener to our Value Changed event handler on our RightTriggerAxis Unity Axis 1D Action component.

With the RightTriggerAxis game object selected in the Hierarchy, click the + sign on the Unity Axis 1D Action component’s Value Changed (Single) event handler. Drag and drop the RightTriggerPressed game object on to the empty game object field, and in the event handler’s function pull-down menu select FloatToBoolean ➤ (Dynamic Float) DoTransform (Figure 8-26).
Figure 8-26

Dragging the RightTriggerPressed game object on to the RightTriggerAxis’s 1D Action component creates a listener on the Trigger Axis’s Value Changed event

Step 10: Connect the Controller to the Interactor Facade

The final step we take to complete the creation of our right grip trigger grab action is to connect our Boolean Action event on our RightTriggerPressed game object to the Grab Action property on our right controller interactor (Figure 8-27).
Figure 8-27

The Grab Action property on the Interactor object’s Facade component connects the value emitted by the RightTriggerPressed object’s Boolean Action Receive event

The Interactor prefab created by VRTK, which we added as a child object to our TrackedAlias game object in the Hierarchy early in this exercise, contains a VRTK component called Interactor Facade (Figure 8-28). Because we can attach the Interactor Facade to a game object, we know it inherits from the UnityEngine class MonoBehaviour. Therefore, it is a C# script. After opening the InteractorFacade script in an IDE by double-clicking the script’s name in the Interactor Facade component, we can see that the Grab Action property exposed in the Unity Editor’s Inspector window is of data type BooleanAction (Figure 8-29).
Figure 8-28

The Facade component attached to the VRTK Interactor prefab is a C# script we can view in an IDE

Figure 8-29

The Interactor Facade component is a C# class inheriting from Unity’s MonoBehaviour class. The data type of its GrabAction property is a BooleanAction, which means it can hold the result returned by a Boolean Action method like Receive()

We also know that the last link in our chain of events we touched was the BooleanAction component on our RightTriggerPressed game object. Therefore, we know the value published by our Receive() method on our BooleanAction component on our RightTriggerPressed game object will be of type BooleanAction. To complete our chain of events, we simply need to connect the message sent from our BooleanAction.Receive() method to our Interactor object’s InteractorFacade.GrabAction property.

The BooleanAction.Receive() method receives the boolean value from the Float to Bool component on the RightTriggerPressed game object. The Receive() method then passes that value as a parameter to a function called ProcessValue(), also in the VRTK Action class (Figure 8-30). The ProcessValue() method in turn invokes a delegate that updates the Interactor Facade’s Grab Action property with the most recent boolean value—true if the user has pressed the right grip trigger button down further than 75%, and false if they have not.
Figure 8-30

The Boolean Action Receive() method calls the ProcessValue() method, which invokes a delegate whose responsibility it is to update the Interactable object’s GrabAction property with a true or false BooleanAction value. This complex chain of event messaging is what VRTK conveniently, and invisibly, handles for us

We connect the BooleanAction.Receive() method with the InteractorFacade.GrabAction property by simply dragging and dropping the RightTriggerPressed game object, the game object to which we’ve attached the Boolean Action Receive() method, into the GrabAction field on the RightControllerAlias’s Interactor child’s Interactor Facade component (Figure 8-31).
Figure 8-31

Dragging and dropping the RightTriggerPressed game object into the Interactor Facade’s Grab Action field wires up all the communication scripting that VRTK handles behind the scenes. Such is the convenience provided by a facade design pattern

With the Grab Action property on our Interactor connected to the Axis 1D Action component, we have successfully wired up our chain of events from user input to interactable object. To review, the chain of events transpires as follows:
  1. User pulls the trigger.

  2. The Unity Axis 1D Action Value Changed event fires.

  3. At 75% pressure, the Float to Boolean Transformed event fires.

  4. The float value is converted to a bool.

  5. The bool value is received by the Boolean Action Receive event.

  6. The Boolean Action sends the bool value to the Grab Action property on the Interactor prefab.

  7. The Interactor prefab attaches to the user’s virtual hand to simulate grabbing and moving an object.

Save the scene and press Play to test the application. If everything has worked according to plan and you can pick up the interactable shapes by squeezing the grip trigger on the controller, repeat the steps above to create a Grab Action for the left grip trigger button.
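Before moving on, the whole chain of events can be compressed into plain C# events. Every class name below is a hypothetical stand-in for the component it mimics; in the project, the Inspector's event fields do this wiring for us:

```csharp
using System;

public class AxisActionSketch                 // stands in for Unity Axis 1D Action
{
    public event Action<float> ValueChanged;
    public void Poll(float triggerValue)
    {
        if (ValueChanged != null) ValueChanged(triggerValue);
    }
}

public class FloatToBooleanSketch             // stands in for VRTK Float to Boolean
{
    public float PositiveBounds = 0.75f;
    public event Action<bool> Transformed;
    public void DoTransform(float value)
    {
        if (Transformed != null) Transformed(value >= PositiveBounds);
    }
}

public class InteractorSketch                 // stands in for the Grab Action target
{
    public bool Grabbing { get; private set; }
    public void Receive(bool grab) { Grabbing = grab; }
}

public static class GrabChain
{
    // Wires the chain the same way we did in the Inspector:
    // Value Changed -> DoTransform, Transformed -> Receive.
    public static InteractorSketch Wire(AxisActionSketch axis, FloatToBooleanSketch converter)
    {
        var interactor = new InteractorSketch();
        axis.ValueChanged += converter.DoTransform;
        converter.Transformed += interactor.Receive;
        return interactor;
    }
}
```

Each subscription in Wire() corresponds to one drag-and-drop we performed in the Unity Editor, which is why no original script was needed.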

Step 11: Add User Interface Canvas Object to Scene

One fundamental element of VR design we have not addressed yet is UI design. The UI of an application is the client-facing portion that often handles visual input and output. Text, for example, is a common, simple component of UI design.

Unity offers a graphical text library through its package manager called TextMeshPro. Navigate to the Unity Package Manager through the Window tab of the menu and install the TextMeshPro library (Figure 8-32).
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig32_HTML.jpg
Figure 8-32

The TextMesh Pro library, available through the Unity Package Manager, provides features for creating custom UI assets

Once the library has been added to Unity, right-click in the Scene Hierarchy and select UI ➤ Text - TextMeshPro (Figure 8-33).
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig33_HTML.jpg
Figure 8-33

Right-click in the Scene Hierarchy to add a TextMeshPro object to the scene

A Canvas with a Text child and an EventSystem object will appear. With the Canvas object selected, turn your attention to the Inspector. A Canvas object in Unity comes with four components. The second component, Canvas, has a property called Render Mode, which has three options in its drop-down menu: Screen Space - Overlay, Screen Space - Camera, and World Space. Select World Space, leaving the Event Camera field empty as shown in Figure 8-34.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig34_HTML.jpg
Figure 8-34

The Render Mode parameter of the Canvas component determines where the Canvas will render its UI objects

Set the Rect Transform properties of the Canvas object to the values shown in Figure 8-34.

Step 12: Add a Second TextMeshPro Object as a Child to the Canvas Object

With the Canvas object selected in the Hierarchy, right-click and add a second Text - TextMeshPro object. Name the first text object Text_go and the second object Text_response.

Set the Rect Transform properties of each text object to the values shown in Figures 8-35 and 8-36.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig35_HTML.jpg
Figure 8-35

Rect Transform settings are shown for the Text_go game object

../images/488645_1_En_8_Chapter/488645_1_En_8_Fig36_HTML.jpg
Figure 8-36

Rect Transform settings are shown for the Text_response game object

Once you have set the Canvas and TextMesh objects’ properties in the Inspector, your scene should resemble Figure 8-37.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig37_HTML.jpg
Figure 8-37

This is a view of the Canvas object rendered in World Space

If you play-test the scene, you should see the text fields appear in your HMD slightly above your head and to the left. You can adjust the transforms of the Canvas and text objects to your liking.

The larger takeaway from adding the Canvas object and its TextMeshPro children is the role of the Render Mode property on the Canvas’s Canvas component. Placing a Canvas in a scene in World Space allows us to manipulate the size and appearance of the Canvas relative to the size and appearance of other objects in our scene. As it is a game object unto itself, the Canvas and its children reside in our scene just like every other game object, and like every other game object we can transform the value of its properties through scripting, even its text.

Part 2: The Code

By now you’ve learned all there is to know about connecting Interactors and Interactables in a Unity scene using VRTK. Without touching any C# code, you leveraged the Unity Event System to create event handlers, assign publishers, assign listeners, send delegates, and fire events.

What’s All This Talk of Delegates?

Delegates are usually discussed in advanced sections of C# instruction. For the scope of this book, it is sufficient to understand a delegate as a variable for a function. Much like variables, which store the values of objects, delegates store references to methods. The principles of OOP frown on exposing the innards of classes to other classes. As a result, accessing the methods of one class from outside its scope can be difficult. The benefit is loosely coupled code; the cost is the overhead of extra planning. By storing references to functions, delegates serve as a messenger service that can call functions between classes. VRTK Actions and Unity Events are relatives of delegates in this regard.
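The sidebar above can be illustrated with a minimal, plain C# sketch. The class and method names here are hypothetical, invented for illustration; they are not part of the book's project:

```csharp
using System;

public class Publisher
{
    // An Action<bool> delegate stores references to methods that take a bool.
    // The event keyword prevents outside classes from invoking it directly.
    public event Action<bool> TriggerPassedThreshold;

    public void Fire(bool state)
    {
        // Invoke every subscribed method, if any.
        TriggerPassedThreshold?.Invoke(state);
    }
}

public class Subscriber
{
    public void OnTrigger(bool state)
    {
        Console.WriteLine($"Received: {state}");
    }
}

// Usage:
//   var pub = new Publisher();
//   var sub = new Subscriber();
//   pub.TriggerPassedThreshold += sub.OnTrigger; // subscribe without exposing class internals
//   pub.Fire(true);                              // calls sub.OnTrigger(true)
```

The publisher never needs to know the subscriber's type; it only stores a reference to a method with a matching signature. This is the loose coupling that VRTK Actions and Unity Events provide through the Inspector.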

What follows is a quick and dirty prototype of a program that executes the logic of Dr. Pallavi’s test. Whereas she, as the physician, can control the flow and pace of the exercise in person, in a virtual application we need software to serve the role of adjudicator. The detail of the code I have written is beyond the scope of our lessons together, but it is available on the course GitHub page with commentary. I have taken pains to keep the code limited to concepts we have already discussed in this book. Although the code is not production-ready, I do feel it adequately stays within the scope of our lessons. I include images of the complete code in this text, but I only sparingly draw attention to its details for the sake of both clarity and brevity.

Step 1: Set Up the GameManager Script

To begin, create a folder called Scripts in your Assets folder in the Project window and create a new C# script. Name the script GameManager and open it in your IDE. Beneath the class title define eight public properties: three GameObjects called “shape” 1, 2, 3, respectively, and three Vector3 objects called “origin” 0, 1, 2, respectively. Create two public arrays, one of type GameObject and one of type Vector3.

Finally, create a Marquee property called marquee, a Collector property called collector, and an integer property called round, initialized to 0. The content of the script should appear as shown in Figure 8-38.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig38_HTML.jpg
Figure 8-38

Define the public properties of the GameManager class in Visual Studio 2017

The Start() function of the GameManager class does the following:
  1. Creates an array of three indexes called shapes_array, which holds three shape objects of type GameObject.
  2. Creates an array of three indexes called origins_array, which holds three Vector3 objects that save the starting location of the three interactable shape objects in our scene.
  3. Stores references to the Plane, Collector, and Marquee objects in our scene.
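Putting the properties and the Start() logic together, the GameManager might look something like the following. This is a sketch reconstructed from the description above; the figures show the authoritative version, and member names the text doesn't spell out (such as the exact field names) are my guesses:

```csharp
using UnityEngine;

public class GameManager : MonoBehaviour
{
    // Assigned in the Inspector by dragging in the interactable shapes.
    public GameObject shape1, shape2, shape3;
    public Vector3 origin0, origin1, origin2;

    public GameObject[] shapes_array;
    public Vector3[] origins_array;

    public Marquee marquee;
    public Collector collector;
    public int round = 0;

    void Start()
    {
        // 1. Collect the three interactable shapes into an array.
        shapes_array = new GameObject[] { shape1, shape2, shape3 };

        // 2. Record each shape's starting position so it can be reset later.
        origins_array = new Vector3[shapes_array.Length];
        for (int i = 0; i < shapes_array.Length; i++)
            origins_array[i] = shapes_array[i].transform.position;

        // 3. Attach the custom Marquee and Collector components to the Plane
        //    and keep references to them.
        GameObject plane = GameObject.Find("Plane");
        marquee = plane.AddComponent<Marquee>();
        collector = plane.AddComponent<Collector>();

        // Kick off the first round (this method is defined in a later step).
        this.SetMarqueeObjectName(round);
    }
}
```

The Marquee and Collector types referenced here are created in Step 2, which is why the script won't compile until those classes exist.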

What’s an Array?

An array is a primitive data structure that holds objects in indexes. The first index of an array is conventionally identified as index 0. Arrays commonly hold collections of values that together comprise a set. Notably, arrays in C# require an exact number of indexes to be specified when the array is created. Fixing the size of an array up front allows the computer to allot the exact amount of memory required to store the collection.
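A quick C# illustration of the fixed-size rule:

```csharp
// The size must be supplied when the array is created...
string[] shapes = new string[3];

// ...and each index, starting at 0, can then be assigned and read.
shapes[0] = "Cube";
shapes[1] = "Sphere";
shapes[2] = "Cylinder";

// shapes[3] = "Capsule"; // out of bounds: would throw IndexOutOfRangeException

// Initializer syntax sets the size from the element count instead.
int[] rounds = { 0, 1, 2 };
```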

You might also notice that in the Start() function of the GameManager class we added two custom components to the Plane object in our scene (Figure 8-39). These components are classes I created called Marquee and Collector. If you're following along in your own IDE, the Collector and Marquee data types probably appear with a red line beneath them, indicating an error. To correct this, let's create those classes.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig39_HTML.jpg
Figure 8-39

The GameManager’s Start() method adds two original components to the Plane object in our scene

Step 2: Create the Marquee and Collector Classes

Back in Unity, create two new scripts in your Scripts folder: Marquee.cs and Collector.cs. In the Marquee class, create the properties as seen in Figure 8-40.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig40_HTML.jpg
Figure 8-40

The properties of the Marquee class are shown here

In the Collector class, create the properties shown in Figure 8-41.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig41_HTML.jpg
Figure 8-41

Create the properties of the Collector class shown here
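Based on how the two classes are used later in the chapter, their properties might look roughly like this. This is a sketch, not a copy of the figures, and the field names are my assumptions; in Unity each class would live in its own file:

```csharp
using UnityEngine;
using TMPro;

public class Marquee : MonoBehaviour
{
    public TextMeshProUGUI textGo;        // reference to the Text_go object on the Canvas
    public TextMeshProUGUI textResponse;  // reference to the Text_response object
    public GameManager gameManager;       // reference to the scene's GameManager
    public string objectName;             // the shape the user is asked to select
    public bool isCorrect;                // result of the user's last answer
}

public class Collector : MonoBehaviour
{
    public string objectName;             // the correct shape's name, set by the GameManager
    public string collected_objectName;   // name of the shape the user actually dropped in
}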

In the Marquee class, create a function called void Awake() and copy its code block from Figure 8-42.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig42_HTML.jpg
Figure 8-42

The Awake() method is part of the UnityEngine library, like Start() and Update(). Awake() is like Start() except Unity calls the Awake method before the Start() method, which it calls immediately before the first Update.

The Awake() function in the Marquee script executes before Unity calls the Start function in the GameManager class, even though the GameManager’s Start() function initializes the Marquee object. It is a good rule of thumb to use the Awake() method on a MonoBehaviour to initialize any components that need to be present before the game loop executes.

In the Marquee class, the Awake function essentially stores references to the TextMeshPro objects we placed as children of the Canvas object in the Scene Hierarchy. In this regard, the Marquee script controls the appearance of the billboard, or marquee, represented by the Canvas in the scene. Using the FindObjectOfType() method, which is also part of the UnityEngine code library, the Marquee class stores references to the other classes in our project.
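A sketch of what that Awake() logic could look like, assuming the Marquee fields suggested earlier (the lookup-by-name strings are hypothetical; the figure shows the authoritative version):

```csharp
// Marquee.cs
void Awake()
{
    // FindObjectOfType locates an instance of a type anywhere in the scene.
    gameManager = FindObjectOfType<GameManager>();

    // Cache the two TextMeshPro children of the Canvas so the Marquee
    // can write to them later.
    textGo = GameObject.Find("Text_go").GetComponent<TextMeshProUGUI>();
    textResponse = GameObject.Find("Text_response").GetComponent<TextMeshProUGUI>();
}
```

Because Awake() runs before any Start() method, these references are guaranteed to exist by the time the GameManager begins the first round.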

Beware: Tightly Coupled Classes

Storing references to other classes in a class is a great way to stumble into bad coding behavior. The result is what is called tightly coupled code. Tightly coupled code leads to spaghetti code, the horrible condition of interdependent logic with the potential to break at every point of connection. As the master of software architecture, Grady Booch, noted, designing classes and their interactions is a fine art derived through iteration. It is nearly impossible to write loosely coupled code on a first pass. As this application is an exercise in prototyping, I won't stress ideal software design composition. It is valuable to understand that although loosely coupled code is the goal, perfection cannot be the enemy of good when prototyping rapidly through iteration. However, as always, practice makes perfect.

If the GameManager is the brains of our application, setting everything up and coordinating the flow of control, and the Marquee class is the logic of our Canvas billboard, what then is the Collector class?

Step 3: Create a Trigger Collider

Returning to the Scene Hierarchy in the Unity Editor, find the Plane game object we created in Step 3 of Part 1 of this exercise. Select the Plane object, open its properties in the Inspector, and notice the component called Mesh Collider. There are two check boxes at the top of the Mesh Collider component: Convex and Is Trigger. Select both check boxes as shown in Figure 8-43.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig43_HTML.jpg
Figure 8-43

Check the Convex and Is Trigger check boxes on the Plane object’s Mesh Collider component

Selecting both Convex and Is Trigger creates an actionable area around the Plane object. Any object with a Rigidbody component attached can trigger an event on entering the Plane's mesh collider. Because the Interactable prefabs we dragged and dropped from the VRTK prefab folder have Rigidbody components attached to them by default, they have the potential to trigger an action on entering the Plane's mesh collider. We will use this trigger event handling feature of the Plane's mesh collider to prompt a change in our Marquee object.

Step 4: Create an Event Handler for the Trigger Collider

Return to the Collector.cs script in your IDE and create a function called void OnTriggerEnter. Copy the signature and body of the OnTriggerEnter method from Figure 8-44.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig44_HTML.jpg
Figure 8-44

Unity implicitly understands a method named OnTriggerEnter listens for an event on the game object’s collider

When the Is Trigger and Convex check boxes are selected on a mesh collider, Unity knows to listen for an OnTriggerEnter event handler. We are then free to wire up the event handler to the action we’d like to drive as a result. The code I have written in the body of the OnTriggerEnter() method stores the name of the shape entering the Plane’s mesh collider and compares it to the name of the shape the Marquee is expecting for a correct answer (CUPCAKE!). If the shape the user drops into the collider matches the shape the billboard asked the user to select, then my script sends a message to the other components on its game object (Figure 8-45).
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig45_HTML.jpg
Figure 8-45

Interactable objects dropped on the Plane object fire the plane collider’s OnTriggerEnter() method, which exists in the Collector class attached to the Plane as a component

In the OnTriggerEnter() method, the parameters of the BroadcastMessage() function, which comes from the Component class of the UnityEngine code library, are the name of the function to call and the value to send to it.
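The handler described above could be sketched as follows, assuming the Collector fields suggested earlier (the figure shows the authoritative version):

```csharp
// Collector.cs
private void OnTriggerEnter(Collider other)
{
    // Store the name of the object that entered the Plane's trigger volume.
    collected_objectName = other.gameObject.name;

    // Compare it to the shape the Marquee asked for, then broadcast the
    // verdict to every component on this game object, including the Marquee.
    bool isMatch = collected_objectName == objectName;
    BroadcastMessage("SetCorrect", isMatch);
}
```

Unity calls OnTriggerEnter automatically whenever a Rigidbody enters the trigger collider; no manual wiring is required beyond the check boxes we set in Step 3.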

If you are wondering how the Collector class knows the name of the object the Marquee class is expecting for a correct answer, then look at the final line of the GameManager’s Start() method (Figure 8-46).
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig46_HTML.jpg
Figure 8-46

The GameManager, in its Start() method, calls a function on itself that sets the application in motion. The this keyword that precedes the function call refers to the fact that the method exists on the object that is calling it. In this context, this refers to the GameManager object created by Unity on the application’s start

Step 5: Create a Method to Tell User Which Shape to Select

The final line of the GameManager class’s Start() method calls a method on the GameManager class called SetMarqueeObjectName(). I define the signature and body of the method below the Start() method in the GameManager class (Figure 8-47).
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig47_HTML.jpg
Figure 8-47

Because the GameManager class’s Start() method stored a reference to the Marquee class, a method defined in the GameManager class can call a public method defined in the Marquee class. Events and delegates are more mature mechanisms for handling interclass communication than publicly defined methods

The SetMarqueeObjectName() method takes an integer as a parameter. The integer represents the "round" of the test the user is in. Each round corresponds to one shape object in the GameManager's shapes_array collection, which comprises the interactable objects we placed in our scene: the cube, sphere, and cylinder. As long as the round number is lower than the number of shapes in our array (recall that we initialized our "round" variable to 0), the Marquee object will broadcast to the billboard in our scene the name of the shape the user is to select. The GameManager asks the Marquee to set the name of the object it would like the user to select by calling the Marquee's WriteToObject() method and passing as a parameter the name of the shape pulled from the array.
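A sketch of the method as described (reconstructed from the text; Figure 8-47 shows the authoritative version):

```csharp
// GameManager.cs
public void SetMarqueeObjectName(int round)
{
    if (round < shapes_array.Length)
    {
        // Ask the Marquee to display the name of this round's shape...
        marquee.WriteToObject(shapes_array[round].name);

        // ...and tell the Collector which answer counts as correct
        // (this method is defined in the next step).
        this.SetCollectorObjectName(marquee);
    }
    else
    {
        // No shapes left: end the test (defined in Step 8).
        GameOver();
    }
}
```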

Step 6: Create a Method to Tell Collector Which Shape Is Correct

The SetMarqueeObjectName() method in the GameManager class also calls another function on the GameManager object through the code, this.SetCollectorObjectName(marquee). Beneath the SetMarqueeObjectName() method on the GameManager class, I have created another method called—you guessed it—SetCollectorObjectName(), as shown in Figure 8-48.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig48_HTML.jpg
Figure 8-48

Chaining methods to access public properties in different classes is the type of bad coding practice events and delegates exist to replace. However, understanding how to connect classes through events in a rudimentary fashion leads to deeper appreciation of the structural complexity the VRTK interface and the Unity Event system abstract away

Called by the SetMarqueeObjectName() method, SetCollectorObjectName() sets the objectName property on the Collector class equal to the name of the shape the Marquee has broadcast to the user to select.
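In sketch form, the method is a one-liner that copies the expected name from the Marquee to the Collector (Figure 8-48 shows the authoritative version):

```csharp
// GameManager.cs
public void SetCollectorObjectName(Marquee marquee)
{
    // The Collector now knows which dropped shape counts as correct.
    collector.objectName = marquee.objectName;
}
```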

When you’ve completed the scripts, the Marquee object will instruct the Canvas object to display the name of the first shape in the GameManager shapes_array in the TextMeshPro Text_go text object (Figure 8-49). By storing a reference to this shape’s name in its own member variable, the Collector object, attached to the Plane trigger collider game object, knows what shape entering the Plane’s mesh collider triggers a correct response.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig49_HTML.jpg
Figure 8-49

The name of the first interactable in our scene arrives on the Canvas object through an event passing pipeline provided by the Marquee class

The Collector's OnTriggerEnter method grabs the name of the Rigidbody entering the Plane's mesh collider trigger and stores it in its member variable collected_objectName, as shown in Figure 8-50. Because the GameManager has already set the value of the Collector object's objectName property in its SetCollectorObjectName() method, the Collector knows what shape the Marquee object has instructed the user to select through the Canvas's text field.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig50_HTML.jpg
Figure 8-50

The Collector class contains the logic that will evaluate whether the shape dropped by the user into the Plane collider matches the name of the shape published to the Marquee

If the value of the objectName variable matches the value of the collected_objectName variable, then the Collector broadcasts a message to the other components attached to its game object, calling the SetCorrect method and passing the value true as a parameter.

In its Start() method, the GameManager class also set the Collector and Marquee objects, which both derive from the MonoBehaviour class, as components of the Plane game object that holds the mesh collider trigger (Figure 8-51). Because the Collector and Marquee objects both exist as components on the same game object, Plane, they can communicate via the BroadcastMessage() function, which exists on every Unity Component class.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig51_HTML.jpg
Figure 8-51

The GameManager's Start() function attaches Marquee and Collector components to the Plane object to establish communication between the Canvas and the trigger collider

Step 7: Define a Listener Function for the OnTriggerEnter Event Handler

Speaking of which, the BroadcastMessage() method called from the Collector's OnTriggerEnter() event handler asks all components within earshot (on the same game object) to call a method named SetCorrect, passing either true or false depending on whether the user has provided the shape expected by the Marquee.

SetCorrect is a method I created in the Marquee class to control the value of its isCorrect member variable (Figure 8-52). The logic of the function isn’t germane to the larger lesson we’re addressing so I won’t go too deep into it. It is sufficient to explain that if the user’s answer is correct, then the Marquee writes “Correct” to the Canvas object (Figure 8-53), moves the application into the next “round,” and writes “Game Over” to the Canvas when no more shapes exist in the GameManager’s shapes_array.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig52_HTML.jpg
Figure 8-52

The SetCorrect() method on the Marquee class responds to the BroadcastMessage() function called from the Collector class’s OnTriggerEnter() method

../images/488645_1_En_8_Chapter/488645_1_En_8_Fig53_HTML.jpg
Figure 8-53

The Marquee’s SetCorrect() method writes the result of the Collector class’s evaluation method to the TextMesh_reference object on the Canvas. A correct response prompts the text shown

If the user’s answer is incorrect, the function writes “Wrong” to the Canvas (Figure 8-54), replaces the shape selected by the user back to its origin, and moves the game to the next round.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig54_HTML.jpg
Figure 8-54

The Marquee’s SetCorrect() method writes the result of the Collector class’s evaluation method to the TextMesh_reference object on the Canvas. An incorrect response prompts the text shown
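Putting the two outcomes together, SetCorrect() might be sketched like this. The structure follows the description above; the exact logic, including how the shape is returned to its origin, lives in Figure 8-52 and the book's repository:

```csharp
// Marquee.cs — invoked by name via the Collector's BroadcastMessage() call.
public void SetCorrect(bool correct)
{
    isCorrect = correct;

    if (correct)
    {
        WriteToResponse("Correct");
    }
    else
    {
        WriteToResponse("Wrong");
        // The dropped shape is returned to its recorded origin here,
        // using the GameManager's origins_array (details in the repository).
    }

    // Either way, advance to the next round; SetMarqueeObjectName()
    // decides whether to continue or call GameOver().
    gameManager.round++;
    gameManager.SetMarqueeObjectName(gameManager.round);
}
```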

Step 8: Create a GameOver() Method in the GameManager Class

Whether the user's answer is correct or incorrect, the Marquee class's SetCorrect() method moves the test into the next round by calling the GameManager's SetMarqueeObjectName() method, the definition of which can be seen in Figure 8-55. Of course, as we've already seen, if the "round" equals or exceeds the number of shapes in the scene, the program jumps to the GameManager's GameOver() method.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig55_HTML.jpg
Figure 8-55

The signature of the GameOver() method is defined in the GameManager class

Step 9: Define Methods in the Marquee Class to Write to the Canvas Object’s Text Fields

The GameManager’s GameOver() method makes a call to a function the GameManager has called once before in its SetMarqueeObjectName() method: marquee.WriteToObject().

The Marquee class’s WriteToObject() and WriteToResponse() methods set the words to appear in the TextMeshPro fields on the scene’s Canvas object (Figure 8-56). The Marquee class saved references to the TextMeshPro game objects in its Awake() method.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig56_HTML.jpg
Figure 8-56

The Marquee methods defined in the figure control the text that appears on the canvas
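The two write methods could be sketched as follows, assuming the TextMeshPro references cached in Awake() (parameter names are my assumptions; Figure 8-56 shows the authoritative versions):

```csharp
// Marquee.cs
public void WriteToObject(string name)
{
    objectName = name;   // remember which shape is expected this round
    textGo.text = name;  // display it in the Text_go field on the Canvas
}

public void WriteToResponse(string response)
{
    // Display feedback such as "Correct", "Wrong", or "Game Over"
    // in the Text_response field.
    textResponse.text = response;
}
```

Because these methods are public, the GameManager and the Marquee's own SetCorrect() logic can both drive the Canvas text through a single, narrow interface.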

Step 10: Connect the GameManager Script to an Empty Game Object

Finally, to connect our game logic to our Unity Scene, create an empty game object in the Scene Hierarchy in the Unity Editor and name it GameManager. If you have coded along with the exercise, attach the GameManager.cs script as a component to the GameManager game object.

Step 11: Drag and Drop the Interactable Shape Objects into the GameManager

In the Inspector, you will see the public properties exposed in the GameManager's class definition. Drag and drop the Interactable shape objects from the Scene Hierarchy into the fields of the GameManager's Shapes_array property (Figure 8-57). You can leave the values of the Origins_array property as they are, because Unity will define them at runtime.
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig57_HTML.jpg
Figure 8-57

Drag and drop the interactable shapes from the Scene Hierarchy into the GameManager object’s Game Manager script to create references to the objects in code

Step 12: Play-test

After making sure there are no errors in any of your scripts and that all files have been saved and compiled, press Play to test your scene. The scene should begin with the name of the shape to select broadcast to the Canvas in your HMD or monitor. If your touch controllers are activated, then holding the grip action triggers while hovering within the container of any interactable shape object should activate that shape’s Grab Action. Dropping the incorrect shape onto the Plane trigger should reset the shape to its original position and write the incorrect selection text to the Canvas marquee. Correct responses should trigger a correct message. After cycling through the shapes, the game loop should call Game Over in the Canvas’s text (Figure 8-58).
../images/488645_1_En_8_Chapter/488645_1_En_8_Fig58_HTML.jpg
Figure 8-58

The Game Manager’s GameOver() method sends a string to the Marquee class to print to the canvas

What’s Up with My Velocity Tracking?

If you’re using the Oculus Rift touch controllers, like me, then you have to go through a few more steps to correctly calibrate the velocity trackers on your controllers with the VRTK TrackedAlias object. You can find details about this process in the documentation on the VRTK GitHub page. Velocity tracking for touch controllers is essential for re-creating the realistic physics of throwing an object, for example.

In a roundabout way, you’ve now seen every line of code in the application. The manner in which I presented it to you generally traces the path of execution of the program. It’s important to note that the code in this section of the chapter, Part 2, does not have any relationship to the VRTK components we set up in Part 1. You can apply interactivity to your VR experiences using only VRTK Interactors and Interactable prefabs. The code I’ve presented to you in Part 2 is a rudimentary example of how you could implement application logic and flow control behind the scenes of your experience. Once you are comfortable using out-of-the-box features of Unity scripting, like GameObject.Find() and GetComponent<T>(), then you can begin study of more advanced scripting techniques like custom delegates and events.

Even so, prototyping behavioral logic using Unity and C# is no small feat, even if the design of your code or mine would not pass muster at a startup's code review. Design patterns such as Singletons and Mediators are tremendously helpful tools for creating functionality and should serve as goals for your further development. Meanwhile, the fundamentals we have addressed in this chapter and so far in this book provide you with the necessary skills to begin prototyping your own VR experiences immediately. After all, updating game objects, such as text, in response to a user's input is but one example of many simple events achievable through beginning-level code that can facilitate convincing immersion in a VR scene.

Summary

Running Dr. Pallavi’s test with the VRTK components wired up and the GameManager script applied provides a reasonable prototype for the doctor. She approves of the work we’ve done for her and looks forward to iterating the project with more bells and whistles.

In this chapter you added Interactable prefabs from the VRTK library into a scene. You facilitated user interaction with those prefabs by connecting the user’s VR controllers with VRTK’s Interactor prefabs. Using the VRTK Float to Boolean component, you converted a user’s pressure on a trigger button into a yes/no event. Through C# code you created trigger events connected to game objects’ Collider components. You used built-in Unity functions to broadcast messages to game objects to create complex logic, and you configured code to write dynamic text to a canvas in VR world space.

In the next and final chapter, we address the last looming concept in VR design left for us to explore: getting around a scene in a way that collapses geography into geometry. In other words, how do we create movement without moving? Until VR-synced omnidirectional treadmills become commonplace, it is our responsibility as immersive experience designers to devise ways for users to travel through a scene while taking into account the limitations of their bodies, systems, and environments. Fortunately, VRTK has tools for that. They’re called Pointers and Locomotion, and we dive into them next.
