Project setup

We will start by getting acquainted with the project; if you have not done so already, clone or download the repository for this book from https://github.com/PacktPublishing/Microsoft-HoloLens-By-Example. Once downloaded, launch Unity and load the Starter project for this chapter found in the Chapter6/Starter directory. 

Once loaded, your project should look similar to this:

Credit and thanks to Menagy for the use of the model used in this project; it is available at http://www.blendswap.com/blends/view/19629.

As mentioned already, scanning and placement have been implemented using the techniques we covered in the last two chapters, leaving us to focus solely on the ways of interacting with the hologram. In this section, we will walk through the main components to give you a sense of how the application is structured.

The scene states are managed by the SceneManager script, which hands over control to the PlayStateManager script once the hologram has been placed; PlayStateManager is the entry point from which we will begin our journey.
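To make this hand-off concrete, the following is a minimal sketch of how such a state broadcast might be wired up. Only the SceneManager and OnStateChangedEvent names come from this project; the States values and everything else are assumptions for illustration:

    using System;
    using UnityEngine;

    // Hypothetical sketch of the scene state hand-off; not the project's
    // actual code.
    public class SceneManager : MonoBehaviour
    {
        public enum States { Scanning, Placing, Playing }

        // Broadcast whenever the application moves to a new state.
        public static event Action<States> OnStateChangedEvent = delegate { };

        private States currentState = States.Scanning;

        public States CurrentState
        {
            get { return currentState; }
            set
            {
                currentState = value;
                OnStateChangedEvent(currentState);
            }
        }

        // Once the hologram has been placed, moving to the Playing state
        // effectively hands control over to the PlayStateManager, which
        // listens for this event.
        public void OnHologramPlaced()
        {
            CurrentState = States.Playing;
        }
    }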

The hologram we will be interacting with is a Kuka robot arm (shown in the preceding image), with the bulk of the logic encapsulated in the RobotController script attached to the root GameObject of the robot. The methods we are interested in are Rotate and MoveIKHandle. Rotate takes in the name of a child GameObject (obtained from a collision), a Euler angle containing the change in rotation, and the origin of rotation (local or world); this method simply finds the child transform using the name and applies the specified rotation relative to the specified origin. The MoveIKHandle method expects a translation vector, which is then applied to an external target GameObject that the arm will seek using a simple Inverse Kinematics (IK) solver. When rotating using the Rotate method, the IK solver must be disabled; it should only be enabled when manipulating the handle. This can be done by toggling the solverActive variable of RobotController.
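The following is a minimal sketch of how this contract might look. The names Rotate, MoveIKHandle, and solverActive come from the chapter; the parameter names and method bodies are assumptions for illustration, not the project's actual implementation:

    using UnityEngine;

    public class RobotController : MonoBehaviour
    {
        // When true, the IK solver drives the joints toward the handle;
        // it must be disabled while rotating parts directly via Rotate.
        public bool solverActive = false;

        // External target GameObject the arm seeks when the solver is active.
        public Transform ikHandle;

        // Rotates the named child part by the given Euler angle, relative
        // to either its own (local) space or world space.
        public void Rotate(string partName, Vector3 eulerAngles, Space relativeTo)
        {
            // Assumes the part is addressable by name below the root.
            Transform part = transform.Find(partName);
            if (part != null)
            {
                part.Rotate(eulerAngles, relativeTo);
            }
        }

        // Applies a translation to the IK target; the solver (when active)
        // will then pull the end-effector toward the handle's new position.
        public void MoveIKHandle(Vector3 translation)
        {
            ikHandle.position += translation;
        }
    }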

Inverse Kinematics (IK) refers to an algorithm used to determine the joint parameters that place the end-effector at a specific location. The algorithm used in this chapter is called Cyclic Coordinate Descent (CCD); it provides a simple solution by solving for only a single joint at a time, working from the end-effector (the tip of the robot) to the root (the base), with each iteration rotating each joint in the direction that reduces the angular distance between the end-effector and the target.
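To illustrate the idea, here is a self-contained sketch of an unconstrained CCD loop, assuming a joints array ordered from base to tip; it is illustrative only, and the chapter's actual solver additionally applies the per-part axis constraints described next:

    using UnityEngine;

    public class CCDSolver : MonoBehaviour
    {
        public Transform[] joints;      // ordered base -> tip
        public Transform endEffector;   // the tip of the robot
        public Transform target;        // the IK handle
        public int iterations = 10;

        void LateUpdate()
        {
            for (int i = 0; i < iterations; i++)
            {
                // Work from the end-effector back to the root, rotating each
                // joint so the end-effector moves toward the target.
                for (int j = joints.Length - 1; j >= 0; j--)
                {
                    Vector3 toEffector = endEffector.position - joints[j].position;
                    Vector3 toTarget = target.position - joints[j].position;

                    // Rotation that closes the angle between the two vectors.
                    Quaternion delta = Quaternion.FromToRotation(toEffector, toTarget);
                    joints[j].rotation = delta * joints[j].rotation;
                }
            }
        }
    }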

Each part is constrained to the axis around which it can rotate:

  • Base can be rotated around the world's y axis
  • Arm 1 and Arm 2 can each be rotated around the world's x axis
  • The tool acts as the end-effector of the inverse kinematics chain and rotates the Base, Arm 1, and Arm 2 toward its target, the handle, as described earlier
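One simple way to express such constraints in code is to map each part name to its permitted rotation axis and mask any requested rotation before applying it. The following helper is a hypothetical illustration (the part names and mapping are assumptions), not the project's implementation:

    using System.Collections.Generic;
    using UnityEngine;

    public static class RotationConstraints
    {
        // Hypothetical mapping of part names to the single axis each may
        // rotate around, mirroring the list above.
        private static readonly Dictionary<string, Vector3> allowedAxes =
            new Dictionary<string, Vector3>
            {
                { "Base", Vector3.up },    // world y axis
                { "Arm1", Vector3.right }, // world x axis
                { "Arm2", Vector3.right }  // world x axis
            };

        // Returns the requested rotation with all disallowed components zeroed.
        public static Vector3 Constrain(string partName, Vector3 eulerAngles)
        {
            Vector3 axis;
            if (!allowedAxes.TryGetValue(partName, out axis))
            {
                return Vector3.zero; // unknown parts cannot be rotated
            }
            return Vector3.Scale(eulerAngles, axis);
        }
    }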

The following figure illustrates these parts, along with the axes they can be manipulated around, starting at the bottom:

It will be our responsibility, via the PlayStateManager script, to interpret the user's intention using gestures and/or voice, and relay it to the RobotController. However, before we move on to adding gestures, let's quickly explore the PlayStateManager script to get familiar with it and its nuances. In the Unity editor, find the script by entering PlayStateManager into the search field of the Project panel; when it appears, double-click it to open it in Visual Studio.

Currently, the PlayStateManager is concerned with the following:

  • Knowing when it is active. This is done by registering, in the Start method, for the OnStateChangedEvent event that the SceneManager script broadcasts whenever the state changes; when it fires, the OnStateActiveChanged method is called. Later on, we can use this method to register for gestures and voice keywords.
  • Next is knowing what the user is currently gazing at, so that we know what to modify when the user performs a gesture or voice command. This is done by polling the GazeManager in the LateUpdate method and assigning the current FocusedObject to an equivalent local property. Within this property's setter, we call OnFocusedObjectChanged, passing in the previously and currently focused GameObjects. This method queries the currently focused GameObject for the Interactible component and, if one is attached, assigns it to the CurrentInteractible property, which calls GazeExited on any previously set Interactible and GazeEntered on the new CurrentInteractible.
  • Finally, PlayStateManager manages selecting the CurrentInteractible, which essentially locks the CurrentInteractible until it is deselected. Selecting is done by calling SelectCurrentInteractible when a CurrentInteractible is available, and DeselectCurrentInteractible is called to unselect it. A sketch of how these pieces might fit together follows the note below.
Like the Update method, LateUpdate is called once every frame, but is called after all the Update functions have been called. It is useful for anything that is dependent on something being updated, such as having a camera follow an object.
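Putting the three responsibilities together, the following skeleton shows how they might fit in code. The member names mentioned above (OnStateChangedEvent, OnStateActiveChanged, the LateUpdate polling, CurrentInteractible, SelectCurrentInteractible, and DeselectCurrentInteractible) are used as described; everything else, including the singleton GazeManager.Instance and the exact Interactible method signatures, is an assumption for illustration:

    using UnityEngine;

    public class PlayStateManager : MonoBehaviour
    {
        private GameObject focusedObject;
        private Interactible currentInteractible;
        private Interactible selectedInteractible;

        // Calls GazeExited on the previously set Interactible and
        // GazeEntered on the new one whenever the value changes.
        public Interactible CurrentInteractible
        {
            get { return currentInteractible; }
            set
            {
                if (currentInteractible == value) return;
                if (currentInteractible != null) currentInteractible.GazeExited();
                currentInteractible = value;
                if (currentInteractible != null) currentInteractible.GazeEntered();
            }
        }

        void Start()
        {
            // 1. Know when this state is active.
            SceneManager.OnStateChangedEvent += OnStateActiveChanged;
        }

        void OnStateActiveChanged(SceneManager.States state)
        {
            // Gesture and voice registration will be added here later,
            // driven by whether the new state makes this manager active.
        }

        void LateUpdate()
        {
            // 2. Poll the GazeManager for what the user is currently gazing at.
            GameObject newFocus = GazeManager.Instance.FocusedObject;
            if (newFocus != focusedObject)
            {
                OnFocusedObjectChanged(focusedObject, newFocus);
                focusedObject = newFocus;
            }
        }

        void OnFocusedObjectChanged(GameObject previous, GameObject current)
        {
            // Ignore gaze changes while an Interactible is selected (locked).
            if (selectedInteractible != null) return;

            CurrentInteractible =
                current != null ? current.GetComponent<Interactible>() : null;
        }

        // 3. Selecting locks the CurrentInteractible until deselected.
        void SelectCurrentInteractible()
        {
            if (currentInteractible != null)
            {
                selectedInteractible = currentInteractible;
            }
        }

        void DeselectCurrentInteractible()
        {
            selectedInteractible = null;
        }
    }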

You should now have a high-level understanding of the relevant parts of this project, at least enough for us to move on and look at gestures and voice. To help make the concepts more concrete, build and deploy the application to the emulator or device to get a feel for how the application currently works (and that it does work); then, return here to continue with the next section, where we will look at adding our first form of interaction: gestures.
