Understanding and using the real world

One of the opportunities (and challenges) with MR is understanding enough of the real world to make it useful in the experiences you create. We have seen how we can make use of our understanding of the surrounding surfaces, but imagine how much more magical the experience would be if we could recognize objects and make appropriate use of them. For example, being able to recognize a chair would give us the ability to let virtual characters sit on the chair or know that a holographic phone (the ones that sit on a desk, if they still exist) is best placed on a desk.

This process of recognizing objects from a digital camera feed sits in the field of computer vision. Luckily for us, there are a few libraries out there that bundle together a suite of tools to help us better understand the image; one of those libraries is Vuforia. A computer vision library born from the research labs at Qualcomm, it quickly became the de facto AR tool amongst budding mobile application developers during the early 2010s. Now owned by PTC, Vuforia remains popular and provides us with a straightforward way of implementing our proof of concept: manipulating our hologram using physical objects, which is the topic of this section.

To use Vuforia, you will first need to sign up as a developer. It's free (as far as I'm aware), so head to https://developer.vuforia.com/ and register (or log in if you already have an account). To make use of Vuforia, we need to obtain a license key and register/create targets for our application. Licensing is simply a matter of registering your application and its purpose. Let's do that now. Once logged in, head to https://developer.vuforia.com/targetmanager/licenseManager/licenseListing and click on the Add License Key button. Next, select the Development - my app is in development option and enter a compelling name for your application. Once you click Next and agree to the terms and conditions, you will be taken back to the License Manager screen with, assuming everything went smoothly, your application on the list. Click on the name to see the details, including the license key, and make note of this as you will need it shortly.

As mentioned before, Vuforia is a library that provides a way to recognize and track targets. Targets can be anything from a planar (flat) image to a cuboid (box), cylinder, or 3D object, and are normally predefined. To register our targets, we will use Vuforia's online Target Manager tab, which, at the time of writing, is visible next to License Manager. Click to open the relevant page:

Next, we need to create a Database, which is essentially a repository for a set of targets; create this database by clicking on the Add Database button, as shown. This will open a dialog prompting you to enter the details for this database. Enter a name and leave Type as Device:

Type refers to the type of target used and where the targets will reside; Device and Cloud are self-explanatory, while VuMark is Vuforia's proprietary bar code. For more information about target types, refer to the documentation at https://library.vuforia.com/articles/Training/Getting-Started-with-the-Vuforia-Target-Manager.

After clicking Create, you will be able to access the newly created database. The last few steps are to add our targets and then download the convenient Unity package, which we will import into our project. Back on the Target Manager page, click on the database name you just created. To create a new target, click on the Add Target button. This will open a dialog, where you define your target. As mentioned before, there are a few types of target, and you can learn all about them from the link shown earlier. For us, we simply want Single Image, so leave Type at its default.

Next, we need to provide it with an image. There are many approaches to recognizing objects from a camera frame; Vuforia favours images with a lot of features, where features are distinct corners. You can use any image or use one of the images I have included in the Texture folder of the project. Next, we need to provide the Width, that is, the physical size the target will have in augmented reality. For this project, I have chosen 0.066 meters (or 6.6 centimeters). Height can be ignored, as it is inferred from the aspect ratio of your image and the given width. The last piece of data required is the target's Name. Enter a meaningful name in the Name field shown in the following screenshot:

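To make the inference concrete, here is a minimal sketch (the image dimensions are hypothetical) of how the height follows from the width you enter and the image's aspect ratio:

float targetWidth = 0.066f; // meters, as entered in the Target Manager
float imageWidth = 1024f;   // pixel dimensions of the uploaded image (example values)
float imageHeight = 768f;
// The height simply preserves the image's aspect ratio
float targetHeight = targetWidth * (imageHeight / imageWidth); // = 0.0495 meters
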
After clicking Add, you will be returned to the database's Target page with your newly created target visible. Along with its Name, Type, and Date Modified, it includes a Status; this provides some indication of how confidently Vuforia is able to recognize the target. We will be using one target for rotation and another for translation, so go ahead and repeat the process to add one more target (with a different image).

The final task is to download the package. To do this, simply click on the Download Database (All) button and select Unity Editor as the development platform, as in the following screenshot:

We now have our targets but have yet to add the SDK to make use of them; let's do that now. Near the top of the page, click on Downloads. This will take you to the SDK section of the available downloads. Click on the Download for Unity link and accept the terms and conditions to begin downloading. Once downloaded, it is simply a matter of importing the package, either by double-clicking on the package within Windows Explorer or importing from within the Unity Editor using the Assets | Import Package | Custom Package... menu item. Import the whole package, or deselect the iOS and Android dependencies if you want to save a bit of room. Next, we can import the targets we downloaded earlier using the same process; either double-click on the Unity package we downloaded or import it from within the Unity Editor.

We are now ready to give our users the ability to manipulate the hologram using physical objects. Of course, you will need to print out the targets we have just created and fasten them to some physical object that can easily rotate without obstructing the target too much.

Now is probably a good time to pause for a moment and revisit our goal. What we are trying to achieve is providing our users with the ability to manipulate the hologram using physical objects. Ideally (and it is feasible), we would achieve this without any artificial artefacts (that is, our targets/markers). These physical objects are assumed to be in close proximity to the user and, in this example, we will have two of them: one controlling rotation around the y-axis and the other controlling translation along the y-axis. The user will make adjustments (rotation or translation) by rotating the physical object. With the concept now fleshed out, it's time to make it real.

First, we will need to add ARCamera to the scene. This is a camera Vuforia will use to capture the camera feed from the HoloLens. Enter ARCamera in the Project panel's Search bar and, once visible, drag the prefab onto the scene. Select it from the Hierarchy panel to make its properties visible in the Inspector panel and, from within the Vuforia Behaviour component, click on Open Vuforia configuration and make the following amendments:

  • Copy and paste your application's license key into the App License Key field.
  • Set Max Simultaneous Tracked Images to 2; this will allow both of our targets to be recognized and tracked at the same time.
  • Expand Digital Eyewear; select Optical See-Through for Type and HoloLens for See Through Config.
  • Expand Datasets and check Load <Name of Vuforia Database>. Once checked, the Activate option will become visible; check this also.
  • Expand Webcam and check Disable Vuforia Play Mode.

Return to the properties of ARCamera by selecting the associated GameObject from within the Hierarchy panel. With the properties now visible in the Inspector panel, make sure the Vuforia Behaviour component's World Center Mode is set to CAMERA (this determines how the camera updates its position) and, finally, click and drag Main Camera from the Hierarchy panel onto the Center Anchor Point field below (the reference camera).

Now that we have our camera in place, our next task is to add the targets we're interested in tracking. Vuforia has made this easy for us by providing a prefab with most of the work done. From within the Project panel, type ImageTarget into the search box and, once it's visible, click and drag the prefab into the Scene. We will need a prefab for each target; I'll walk through setting up one and leave the second as an exercise for you.

Click the newly added GameObject (ImageTarget) from within the Hierarchy panel to bring up its properties in the Inspector panel and make the following changes to the Image Target Behaviour component:

  • Select <Name of Vuforia Database> from the Database drop-down.
  • Next, select your first target (I have called mine dial_1 and the other dial_2) from the Image Target drop-down. Once set, the Width and Height will be automatically populated.
  • Uncheck Enable Extended Tracking if checked.
  • Uncheck Enable Smart Terrain if checked.
  • Remove the Default Trackable Event Handler component; we will build our own version of this script.

If the GameObject SmartTerrain_ImageTarget is present in your scene, delete it. This, as the name suggests, is specifically for the Smart Terrain functionality Vuforia offers, a feature less applicable to the HoloLens.

It's worth noting that, for our example, we have disabled Extended Tracking; Vuforia introduced this as a way to take advantage of the solid tracking the HoloLens provides. Without the HoloLens, Vuforia handles tracking by performing fairly expensive processing on each camera frame. Extended Tracking transfers this responsibility to the HoloLens, leaving Vuforia concerned only with the recognition and hand-off of the targets.

We have disabled it in this example, leaving Vuforia responsible for both recognition and tracking. We have done this for two main reasons: the first is that HoloLens tracking performs poorly at anything less than 0.8 meters (and our objects will likely be closer than that); the second is that we want to detect changes in rotation and, during development, I found these weren't detected when Extended Tracking was enabled.

If you look at your Scene, you will be presented with a white plane where your target is located. This is due to malformed import settings on the associated textures. Although this is purely cosmetic, the texture provides a useful visual aid when navigating around your scene. You can resolve this by expanding Editor/QCAR/ImageTargetTextures/<Name of Vuforia Database>/ from within your Project panel. For each texture, update the following properties:

  • Set Texture Type to Default 
  • Set Texture Shape to 2D 

Once updated, click on the Apply button; you should then see the change in your scene (if not, try selecting the target from within the Hierarchy panel).
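
If you have many textures, you can optionally apply the same settings in bulk with a small editor script. The following is just a convenience sketch; the menu name and folder path are assumptions, so substitute your own database name:

using UnityEditor;

public static class FixTargetTextures
{
    [MenuItem("Tools/Fix Vuforia Target Textures")]
    private static void Fix()
    {
        // Path assumed; replace MyDatabase with the name of your Vuforia database
        string folder = "Assets/Editor/QCAR/ImageTargetTextures/MyDatabase";
        foreach (string guid in AssetDatabase.FindAssets("t:Texture2D", new[] { folder }))
        {
            string path = AssetDatabase.GUIDToAssetPath(guid);
            var importer = (TextureImporter)AssetImporter.GetAtPath(path);
            importer.textureType = TextureImporterType.Default;     // Set Texture Type to Default
            importer.textureShape = TextureImporterShape.Texture2D; // Set Texture Shape to 2D
            importer.SaveAndReimport();
        }
    }
}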

The next step is to add some visual feedback for the user, that is, a virtual representation of the dial. This allows us to report the current state (degrees of change, in our case) as well as keep the user informed about the tracking state of each image. Within the App/Models/ folder of the Project panel, there is a model that we will use. Once located, click and drag it onto the ImageTarget we have just created, ensuring its Position is set to X: 0, Y: 0, Z: 0 and Rotation is set to X: -90, Y: 0, Z: 0. This is a simple model made up of two parts, a circle and an arrow, with the intention of portraying the current offset by using the arrow to point in the GameObject's initial forward direction. Your project should look similar to the following; now is a good time to build and deploy to your HoloLens to test out the tracking:

Of course, nothing happens when you rotate the targets. This is our next task. For this, we will create a script, to be attached to each target, that will be responsible for detecting and broadcasting changes in rotation. SceneManager will observe these events and, when a change is detected, send the appropriate operation to the BlenderLIVE service.

Create a new script by clicking on Create from within the Project panel and selecting C# Script. Give the script the name ARUIDial and double-click to open it in Visual Studio. First, we will need to include the Vuforia namespace. Add the following at the top of the script: 

using Vuforia;

Next, have the class implement the interface ITrackableEventHandler. This interface has a single method that Vuforia will call when the state of tracking changes; let's also add this: 

public class ARUIDial : MonoBehaviour, ITrackableEventHandler
{
    void Start()
    {

    }

    void Update()
    {

    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus, TrackableBehaviour.Status newStatus)
    {

    }
}

As mentioned earlier, we will notify interested parties of changes via an event that holds a reference to the ARUIDial that raised it, along with the change, that is, the number of degrees the object has been rotated. Add the following delegate and event at the top of the ARUIDial class:

public delegate void ARUIDialChanged(ARUIDial dial, float change);
public event ARUIDialChanged OnARUIDialChanged = delegate { };

In order to be useful, ARUIDial needs to know whether it's currently being tracked or not. To do this, we will register for tracking events using the TrackableBehaviour component attached to the same GameObject. Because we will be frequently referencing TrackableBehaviour, we will use a variable to cache its reference. Add the following variable to the ARUIDial class:

private TrackableBehaviour trackableBehaviour;

And now, add the following code to the Start method that will register itself to receive tracking events (and obtain a reference to TrackableBehaviour): 

void Start()
{
    // Cache the TrackableBehaviour and register for tracking events
    trackableBehaviour = GetComponent<TrackableBehaviour>();

    if (trackableBehaviour)
    {
        trackableBehaviour.RegisterTrackableEventHandler(this);
    }
}

We will receive tracking events via the OnTrackableStateChanged method. In addition, we will create a property to provide us with a convenient way of knowing if we are currently tracking the target or not. Add the following property to the ARUIDial class: 

public bool IsTracking
{
    get
    {
        if (trackableBehaviour)
        {
            return trackableBehaviour.CurrentStatus ==
                       TrackableBehaviour.Status.DETECTED ||
                   trackableBehaviour.CurrentStatus ==
                       TrackableBehaviour.Status.TRACKED ||
                   trackableBehaviour.CurrentStatus ==
                       TrackableBehaviour.Status.EXTENDED_TRACKED;
        }

        return false;
    }
}

The term tracking refers to a concept in computer vision where you have recognized an object and are able to identify and report on its current location.

The reason we are interested in knowing whether the target is being tracked is mainly to show or hide the visual components of the target, in our case, the model of the dial. When TrackableBehaviour notifies us of a state change, we will simply show or hide any attached MeshRenderer accordingly. The obvious place for this code is the callback itself. Make the following changes to the OnTrackableStateChanged method:

public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus, TrackableBehaviour.Status newStatus)
{
    if (IsStatusApproxTracking(previousStatus) ==
        IsStatusApproxTracking(newStatus))
    {
        return; // no substantial change, so ignore
    }

    if (IsStatusApproxTracking(newStatus))
    {
        OnTrackingFound();
    }
    else
    {
        OnTrackingLost();
    }
}

Here, we are calling a helper method that generalises the state, as we did with the IsTracking property. If there is no change, then we simply ignore the callback; otherwise, we call OnTrackingFound when tracking has been found and OnTrackingLost when tracking has been lost. Let's add all three methods (IsStatusApproxTracking, OnTrackingFound, and OnTrackingLost), starting with the following code in the ARUIDial class:

private bool IsStatusApproxTracking(TrackableBehaviour.Status status)
{
    return status == TrackableBehaviour.Status.DETECTED ||
           status == TrackableBehaviour.Status.TRACKED ||
           status == TrackableBehaviour.Status.EXTENDED_TRACKED;
}

Similar to our IsTracking property, whenever the target has been identified, it is considered as being tracked, as we are not too concerned with the details:

private void OnTrackingFound()
{
    Renderer[] rendererComponents = GetComponentsInChildren<Renderer>(true);
    Collider[] colliderComponents = GetComponentsInChildren<Collider>(true);

    foreach (Renderer component in rendererComponents)
    {
        component.enabled = true;
    }

    foreach (Collider component in colliderComponents)
    {
        component.enabled = true;
    }
}

private void OnTrackingLost()
{
    Renderer[] rendererComponents = GetComponentsInChildren<Renderer>(true);
    Collider[] colliderComponents = GetComponentsInChildren<Collider>(true);

    foreach (Renderer component in rendererComponents)
    {
        component.enabled = false;
    }

    foreach (Collider component in colliderComponents)
    {
        component.enabled = false;
    }
}

Here, we implement the methods responsible for enabling and disabling all the attached and nested Colliders and Renderers.

The last task for ARUIDial is to monitor and broadcast rotational changes, along with updating its visual state to make it more apparent to the user what is happening. For this example, we have taken a very simplistic approach; when the target is first detected, we will remember the GameObject's current forward and up direction.

If any significant change is detected in the up direction, we will assume the user is moving the object and ignore the change; otherwise, we will compare the transform's current forward direction with the previous one and use this to determine any change. Let's do this now; start by adding the following variables to the ARUIDial class:

private bool initialised = false;

private Vector3 previousUp = Vector3.zero;
private Vector3 previousForward = Vector3.zero;

We will use previousUp and previousForward to store the previous directions of the GameObject between updates, and use initialised to flag when we are ready to make the comparison; all of this will occur within the Update method. Make the following amendments:

void Update()
{
    if (IsTracking)
    {
        if (initialised)
        {
            // A large change in the up direction means the object itself is
            // being moved, so reinitialise rather than report a change
            if (Vector3.Dot(previousUp, transform.up) < 0.8f)
            {
                initialised = false;
            }
            else
            {
                float change = Vector3.Angle(transform.forward, previousForward);
                Vector3 cross = Vector3.Cross(transform.forward, previousForward);
                if (cross.y < 0) change = -change;

                if (Mathf.Abs(change) > 5f)
                {
                    OnARUIDialChanged(this, change);
                    previousForward = transform.forward;
                }
            }
        }
        else
        {
            initialised = true;
            previousUp = transform.up;
            previousForward = transform.forward;
        }
    }
}

As mentioned earlier, we are only concerned when the object is being tracked. If it is being tracked, we first test the current and previous up direction of the GameObject's transform; if we detect a large change, then we assume tracking is unstable and set initialised to false to force the script to reinitialize the up direction. If considered stable (the transform's up direction hasn't changed significantly), we compare the GameObject's current forward direction with the previous forward direction and use the difference to determine the change in angle. If this change is greater than 5 degrees, we make the subscribed listeners aware by firing the OnARUIDialChanged event.

If not initialized, we simply update previousUp and previousForward for processing in subsequent updates. 
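
Vector3.Angle always returns an unsigned angle between 0 and 180 degrees, so it is the y component of the cross product that gives the rotation its direction. The following minimal, standalone sketch (the vectors are hypothetical) illustrates the same signed-angle technique used in Update above:

// Vector3.Angle is unsigned; the sign of the cross product's y component
// tells us which way around the y-axis the rotation went
Vector3 previous = Vector3.forward;
Vector3 current = Quaternion.Euler(0, 30f, 0) * previous; // rotated 30 degrees about y

float change = Vector3.Angle(current, previous);   // 30 (always positive)
Vector3 cross = Vector3.Cross(current, previous);
if (cross.y < 0) change = -change;                 // negate for this direction of rotation
Debug.Log(change);                                 // prints -30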

Thus far, we have detected the changes but provided no feedback to the user. For this, we will leave the outer ring to rotate freely with the target (in the direction the user is rotating) but keep the child GameObject (the arrow) locked to the initialised forward direction. First things first, we need a reference to the GameObject we will be locking and a variable to store its rotation on initialisation. At the top of the ARUIDial class, add the following variables:

public GameObject lockedGameObject;
private Quaternion lockedGameObjectRotation = Quaternion.identity;

We will assign the nested object to this variable once back in the Editor; for now, we will continue updating the code. Our next task is to keep track of the rotation on initialization; as you would expect, this is done when setting initialised to true. Jump back into the Update method and make the following amendment:

void Update()
{
    if (IsTracking)
    {
        if (initialised)
        {
            // detect displacement by checking the current up with the previous
            if (Vector3.Dot(previousUp, transform.up) < 0.8f)
            {
                initialised = false;
            }
            else
            {
                float change = Vector3.Angle(transform.forward, previousForward);
                Vector3 cross = Vector3.Cross(transform.forward, previousForward);
                if (cross.y < 0) change = -change;

                if (Mathf.Abs(change) > 5f)
                {
                    OnARUIDialChanged(this, change);
                    previousForward = transform.forward;
                }
            }
        }
        else
        {
            initialised = true;
            previousUp = transform.up;
            previousForward = transform.forward;

            // Remember the arrow's rotation so we can hold it fixed in LateUpdate
            if (lockedGameObject)
            {
                lockedGameObjectRotation = lockedGameObject.transform.rotation;
            }
        }
    }
}

Here, we are simply caching the rotation of the assigned lockedGameObject. Next, we will force this rotation by continuously resetting the lockedGameObject's rotation to this cached value. To avoid conflicts with Vuforia trying to update its rotation, we will perform our update in the LateUpdate method, essentially allowing us to have the last say. Add the following to the ARUIDial class:

private void LateUpdate()
{
    if (IsTracking)
    {
        if (initialised)
        {
            if (lockedGameObject)
            {
                lockedGameObject.transform.rotation = lockedGameObjectRotation;
            }
        }
    }
}

Here, we are simply checking whether we are currently tracking the target and are in a valid state before resetting lockedGameObject.transform.rotation to the cached value. This concludes our ARUIDial script. Jump back into the Unity Editor and attach this script to each of our targets by clicking on each from within the Hierarchy panel, then clicking on the Add Component button in the Inspector panel and typing ARUIDial. With the script attached to both, expand each GameObject in the Hierarchy panel and drag the Arrow GameObject onto the Locked Game Object field. After this, your target GameObject properties should look similar to the screenshot shown:

With our targets now wired up, the final task is to subscribe to the associated events and relay the adjustments to BlenderLiveManager. As mentioned earlier, this will be the responsibility of SceneManager. So, with that in mind, let's open SceneManager in Visual Studio and make it responsible. Start off by adding the following instance variables:

public ARUIDial rotationUIDial;
public ARUIDial translateXUIDial;

public float rotationPerDegree = 1.0f;
public float metersPerDegree = 0.00138f;

Here, we have declared two variables, one for each of our targets: rotationUIDial for rotating and translateXUIDial for translating the hologram (as discussed earlier). Next, we declare rotationPerDegree, which defines the transformative relationship between the physical and virtual object; setting it to 1 means that a change of 1 degree of rotationUIDial will rotate the hologram by 1 degree. Similarly, metersPerDegree defines the relationship between a change in degrees of the target and the corresponding translation in meters; at 0.00138 meters per degree, a full turn of the dial moves the hologram roughly half a meter (0.00138 x 360 ≈ 0.5). We will register for the relevant events in the Start method:

void Start()
{
    if (rotationUIDial)
    {
        rotationUIDial.OnARUIDialChanged += RotationUIDial_OnARUIDialChanged;
    }

    if (translateXUIDial)
    {
        translateXUIDial.OnARUIDialChanged += TranslateXUIDial_OnARUIDialChanged;
    }

    ...
}

And finally, we will write the code to handle each of the events, starting with the event associated with rotationUIDial. Add the following method to the SceneManager class: 

private void RotationUIDial_OnARUIDialChanged(ARUIDial dial, float change)
{
    float rotation = rotationPerDegree * change;

    var bgoNames = BlenderServiceManager.Instance.GetAllBlenderGameObjectNames();

    if (bgoNames.Count == 0)
    {
        return;
    }

    var bgo = BlenderServiceManager.Instance.GetBlenderGameObjectWithName(bgoNames[0]);
    BlenderServiceManager.Instance.SendOperation(bgo,
        BlenderServiceManager.ObjectOperations.Rotate, new Vector3(0, 0, rotation));
}

When the associated target is rotated, it broadcasts the event, passing along the change in degrees. We use this with the associated weight to determine the change to be applied to the imported Blender object. After this, we verify that we have at least one Blender object loaded (in this example, we are assuming only a single model will be loaded). After obtaining a reference to the first Blender object, we send the request by calling SendOperation on BlenderServiceManager. Once the BlenderLIVE service receives the operation, it will execute it locally and broadcast the changes to all connected clients. We do exactly the same for the translation dial, but using a different weight and operation. Add the following method to your SceneManager class:

private void TranslateXUIDial_OnARUIDialChanged(ARUIDial dial, float change)
{
    float displacement = metersPerDegree * change;

    var bgoNames = BlenderServiceManager.Instance.GetAllBlenderGameObjectNames();

    if (bgoNames.Count == 0)
    {
        return;
    }

    var bgo = BlenderServiceManager.Instance.GetBlenderGameObjectWithName(bgoNames[0]);
    BlenderServiceManager.Instance.SendOperation(bgo,
        BlenderServiceManager.ObjectOperations.Translate, new Vector3(0, displacement, 0));
}

With this in place, we have now realized the functionality to manipulate the hologram using physical objects. Build, deploy, and test to see it in action before moving on to the next section, where we talk about how we can create a shared virtual environment between multiple users.
