
7. HoloLens Development

Alexander Meijers, Rijswijk, The Netherlands

This chapter goes into more detail about HoloLens development using Unity and Visual Studio, covering the architecture of your application and what you will need to debug it.

Application Architecture

To understand how your applications run and behave, you will need to understand their architecture and lifecycle. There are several parts you need to grasp to get a grip on how your application acts under certain conditions. A mixed reality application is built using Unity and Visual Studio. The application runs mainly on the Unity engine, using scripts on GameObjects and possibly code written outside the Unity space to perform functionality that is not available via Unity. In our case, that means using DLLs to run asynchronous calls to web services, due to the single-threaded nature of Unity.

Scripting Lifetime

Unity works with an event system using event functions. Event functions are called at a certain time, once or more depending on the function, to execute scripting in a MonoBehaviour-derived class in the currently running scene. This is called the order of execution. The order in which those event functions are called influences how your application responds, so you need to understand that order and when each function is called as you build a mixed reality application using Unity. Which events are called also differs between running the application inside and outside the Unity Editor.

There is no particular order in which objects inside a scene receive the events. You can’t assume that object X is called before object Y. To resolve such ordering issues, you can make use of the Singleton pattern. Do not use the Singleton pattern on a class derived from MonoBehaviour, since that will result in unexpected behavior due to timing issues.
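As a minimal sketch, a plain C# singleton could look as follows; the GameState name is purely illustrative:
public class GameState
{
    private static GameState instance;
    // Accessible from any script, regardless of the order in which
    // objects in the scene receive their events.
    public static GameState Instance
    {
        get { return instance != null ? instance : instance = new GameState(); }
    }
    private GameState()
    {
    }
}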

In Figure 7-1 you see a diagram of the different phases of the scripting lifecycle in Unity. The diagram also shows some of the most important user callbacks and the order in which they are called during each phase.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig1_HTML.jpg
Figure 7-1

Scripting lifecycle in Unity for applications

The several phases shown in the diagram are described as follows; after the list, a minimal script illustrates the order in which the most common event functions fire:
  • Initialization: The Initialization phase is called for each object in the scene when the scene is first loaded. Objects added to the scene later are handled the same way. These events are all called once, before the object proceeds to the next phase. The following phases are then passed through during each frame cycle.

  • Physics: The Physics phase handles all events related to the physics of objects in the scene. Think of processing animations, triggers, and collisions, but also resuming coroutines that yield return new WaitForFixedUpdate().

  • Input events: This phase handles the implemented OnMouse events.

  • Game Logic: The Game Logic phase is meant for handling all the game logic in your application. Think of updating the location of an object, updating data presented from a source, or other specific functions. This phase also processes moves for animations. Coroutine resumptions like yield return null, yield return new WaitForSeconds(), and yield return StartCoroutine() are handled in this phase as well. Most of your code will run in the Update() method. But keep in mind that since these methods are called every frame cycle, heavy code can cause disruption and performance issues in your application.

  • Rendering: The Rendering phase is used for rendering the scene, the graphical user interface, and Gizmos. Gizmos can be used as a visual aid for debugging your scene, allowing you to draw all kinds of graphical objects like lines, cubes, and text. Gizmos can only be used when running in the Unity Editor; they are not available in your application outside it. In that case, you will need to write your own code to draw lines, cubes, and other graphical output.

  • Decommissioning: This phase takes care of cleaning everything up. The events called are mostly for quitting the application, and disabling and destroying objects.
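To see this order for yourself, the following minimal script logs the most common event functions; the LifecycleLogger name is purely illustrative. Attach it to a GameObject and watch the Console while the application runs:
using UnityEngine;
public class LifecycleLogger : MonoBehaviour
{
    void Awake()       { Debug.Log("Awake: Initialization, called once"); }
    void OnEnable()    { Debug.Log("OnEnable: called when the object is enabled"); }
    void Start()       { Debug.Log("Start: called once, before the first frame update"); }
    void FixedUpdate() { Debug.Log("FixedUpdate: Physics phase, fixed timestep"); }
    void Update()      { Debug.Log("Update: Game Logic phase, once per frame"); }
    void LateUpdate()  { Debug.Log("LateUpdate: after all Update calls"); }
    void OnDestroy()   { Debug.Log("OnDestroy: Decommissioning"); }
}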

Custom Events

It is also possible to create your own events in Unity by using UnityEvents or C# events. Both can be used for different types of functionality like content-driven callbacks, predefined calling events, and decoupling of systems. Decoupling of systems is a very important one. Unity scripting code can easily and quickly become entangled and hard to read, growing into complex systems where objects rely on each other through deep links. Using an event system can prevent this.

Both event systems use the same principle. You define and create an event. Then you add listeners to your event. Listeners are callbacks that you register for a certain event. As soon as the event is invoked, all registered listeners are called.

Let’s first start by explaining UnityEvents. There are two types of supported function calls with UnityEvents:
  • Static: These function calls are set through the UI of Unity and use preconfigured values defined in that UI. When the event is invoked, the preconfigured value is used.

  • Dynamic: These function calls use the arguments that are sent from the scripting code and are bound to the type of the UnityEvent. The UI filters the available callbacks and shows only valid calls for the event.

UnityEvents are added to a class derived from MonoBehaviour. Their behavior is the same as standard .NET delegates.

It requires you to create, for example, a new script called EventManager. The script needs to import UnityEngine.Events and define a UnityEvent property as follows:
using UnityEngine;
using UnityEngine.Events;
public class EventManager : MonoBehaviour
{
    // The event exposed in the Inspector window.
    public UnityEvent actionEvent;
    // Callback without arguments.
    public void DoAction()
    {
    }
    // Callback with an argument; for a static function call,
    // its value is preconfigured in the UI.
    public void DoAction(int id)
    {
    }
}
Add the script to an empty GameObject in your scene and open the Inspector window. Add a new listener through the UI by clicking the + symbol, as shown in Figure 7-2.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig2_HTML.jpg
Figure 7-2

A static function event added through the UI of Unity

Select the type Runtime Only or Editor and Runtime, which determines whether the listener works only at runtime or in both the editor and at runtime. In this example, I have dragged the EventManager GameObject into the Object field, which allowed me to select the DoAction(int id) method defined in that class. Since this is a static function call, I must define its value in the UI.
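Listeners do not have to be registered through the UI. As a minimal sketch, the same registration can be done from code, for example in a Start() method added to the EventManager class:
void Start()
{
    // Register a callback from code instead of through the Inspector.
    actionEvent.AddListener(DoAction);
    // Invoking the event calls all registered listeners.
    actionEvent.Invoke();
}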

By default, an event is registered without arguments and binds to a void function. It is also possible to use arguments for your event. UnityEvents can be specified with up to four arguments. For that, you will need to define a custom UnityEvent<> class that supports the arguments. Open the EventManager script file and add the following code above the definition of the EventManager class:
using System;
using UnityEngine.Events;
[Serializable]
public class IDEvent : UnityEvent<int>
{
}
Add an instance of this custom UnityEvent class to the EventManager class.
public class EventManager : MonoBehaviour
{
    public IDEvent idEvent;
}
Select the EventManager GameObject in the hierarchy and check the Inspector window. The event will now appear in the Event Manager script component, as shown in Figure 7-3. Press the + sign to add a new event. Leave Runtime only as type. Drag the EventManager GameObject into the Object field.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig3_HTML.jpg
Figure 7-3

A dynamic event is added through the UI of Unity

In this case, you will notice that when you select the function, all dynamic functions are shown at the top of the dropdown. That is done through the filtering mechanism, based on your custom-defined event. You also do not need to specify a value; the value is passed through the Invoke() method on the event, by calling idEvent.Invoke(23);.

Make sure that the custom UnityEvent class is defined with [Serializable]. Otherwise, it will not appear in the Inspector window of the UI of Unity.
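As a minimal sketch, a second script could then raise the dynamic event from code; the EventRaiser name is purely illustrative:
using UnityEngine;
public class EventRaiser : MonoBehaviour
{
    // Drag the EventManager GameObject into this field in the Inspector.
    public EventManager eventManager;
    void Start()
    {
        // All listeners registered for idEvent receive the value 23.
        eventManager.idEvent.Invoke(23);
    }
}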

Using UnityEvents is easy and allows you to create an event system for your application using only the UI of Unity. UnityEvents are convenient for people who prefer drag and drop, or when creating Unity Editor plugins.

But UnityEvents are not built as native code, and therefore perform less well and use more memory than C# events. If you do not need to use UnityEvents, use C# events instead. The following code shows an example of how to use C# events:
using System;
public class NativeEventManager
{
    private static NativeEventManager instance;
    public event Action myEvent;
    public event Action<int> myIdEvent;
    public static NativeEventManager Instance
    {
        get
        {
            return instance != null ? instance : instance = new NativeEventManager();
        }
    }
    protected NativeEventManager()
    {
        myEvent += DoAction;
        myIdEvent += DoAction;
    }
    // C# events can only be invoked from inside the declaring class,
    // so public methods are exposed to raise them from elsewhere.
    public void RaiseMyEvent()
    {
        myEvent?.Invoke();
    }
    public void RaiseMyIdEvent(int id)
    {
        myIdEvent?.Invoke(id);
    }
    public void DoAction()
    {
    }
    public void DoAction(int id)
    {
    }
}
As you can see, it is almost the same principle as working with UnityEvents. But it allows me to use the Singleton pattern, since the class does not have to be derived from MonoBehaviour. And because C# events can only be invoked from inside the declaring class, the public raise methods allow me to raise an event from any location in my application.
NativeEventManager.Instance.RaiseMyEvent();
NativeEventManager.Instance.RaiseMyIdEvent(23);

Time and Framerate

Most of the work takes place in the Update() event of MonoBehaviour-derived classes. The event allows you to handle data and use scripts to execute certain tasks. In some cases these tasks require timing; think of moving a hologram over time or other time-based actions. While the normal framerate for Microsoft HoloLens is around 60 frames per second, its stability and actual value are influenced by several factors:
  • The current load on the CPU

  • Video streaming or Miracast, which cuts the framerate in half

  • Complex, long-running code in the Update() event that must execute within a single frame

Even if the system is stable, there is no guarantee that the length of time between two Update() event calls is always the same. Moving an object from the Update() event will therefore cause an irregular speed, since the time between two Update() calls is not constant.

This can be resolved by scaling the size of the movement by the frame time, which is available through the Time.deltaTime property. The following is an example of code using this property to compensate for the irregularity of the framerate:
using UnityEngine;
public class MoveObject : MonoBehaviour
{
    // Distance the object should travel per second, in meters.
    public float distancePerSecond = 0.1f;
    void Update()
    {
        // Scale the movement by the time elapsed since the last frame,
        // so the speed stays constant regardless of the framerate.
        gameObject.transform.Translate(0, 0, distancePerSecond * Time.deltaTime);
    }
}

The property distancePerSecond specifies the distance that the object moves per second. Multiplying it by the frame’s delta time ensures that the object moves at the same speed over time; in this case, 10 centimeters per second.

Physics in the application uses a fixed timestep to maintain accuracy and consistency while the application runs. The system works like an alarm that defines when the physics calculations are performed: physics is updated each time the alarm goes off. It is possible to change the length of the fixed timestep through the Time.fixedDeltaTime property. A lower value results in more frequent physics updates, but also in more load on the CPU.
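Physics-related code therefore belongs in the FixedUpdate() event, which runs on the fixed timestep rather than once per rendered frame. A minimal sketch, assuming the GameObject has a Rigidbody component attached:
using UnityEngine;
public class ForwardForce : MonoBehaviour
{
    public float force = 1f;
    private Rigidbody body;
    void Awake()
    {
        body = GetComponent<Rigidbody>();
    }
    void FixedUpdate()
    {
        // Called every Time.fixedDeltaTime seconds (0.02 by default),
        // independently of the rendering framerate.
        body.AddForce(Vector3.forward * force);
    }
}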

Thanks to the fixed timestep, the system provides an accurate simulation of physics. But if the framerate drops for whatever reason, the regular physics updates may not keep up, which can result in frozen objects or graphics that get out of sync. As soon as a frame takes longer than the maximum allowed timestep, the physics engine stops and gives room to the other steps taking place in the Update() event.

It is also possible to slow down time, to allow animations and scripts to respond at a slower rate. For this, Unity has a Time Scale property, which controls how fast time passes relative to real time. A value above 1 speeds up execution, while a value below 1 slows it down. Time Scale does not really slow down the execution itself; it changes the values of both Time.deltaTime and Time.fixedDeltaTime to achieve the same result. Time Scale is set through the Time.timeScale property.
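A minimal sketch of how this could look in a script; running at half speed is just an example value:
using UnityEngine;
public class SlowMotionToggle : MonoBehaviour
{
    // Run the application at half speed while slow motion is enabled.
    public float slowScale = 0.5f;
    public void EnableSlowMotion()
    {
        Time.timeScale = slowScale;
        // Keep the physics timestep in sync with the scaled time
        // (0.02 is Unity's default fixed timestep).
        Time.fixedDeltaTime = 0.02f * Time.timeScale;
    }
    public void DisableSlowMotion()
    {
        Time.timeScale = 1f;
        Time.fixedDeltaTime = 0.02f;
    }
}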

With Microsoft HoloLens, you sometimes want to record or stream the video output. Both situations drop the current framerate by half, which means your application runs at only 30 frames per second. This is caused by the capturing mechanism of the camera. Performance will drop as well, and other visual effects like drifting can occur. Unity seems to offer a solution for this through the Time.captureFramerate property. When the property is set higher than zero, execution time is slowed down and frame updates are executed at a precise, regular interval, which can result in better recording or streaming. Nowadays it is possible to stream the output of the Microsoft HoloLens 2 using Miracast to other devices like a laptop without a framerate drop.
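As a minimal sketch, enabling this behavior is a single assignment, for example in an initialization script; the target of 30 frames per second is just an example value for recording:
// Produce frames at a fixed, regular interval of 30 per second.
// A value of 0 (the default) disables this behavior again.
Time.captureFramerate = 30;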

Unity DOTS

DOTS stands for Data-Oriented Technology Stack and is, at the time of writing, still in preview. With DOTS it is possible to fully utilize the latest multicore processors without needing knowledge of complex multithreaded programming. Figure 7-4 shows an architectural overview of Unity DOTS.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig4_HTML.jpg
Figure 7-4

An architectural overview of Unity DOTS

DOTS includes several features:
  • C# Job System: This system is used for running multithreaded code efficiently using C# scripting. It allows users to write safe and fast code by exposing the native C++ Job System through C# scripting.

  • Entity Component System: This system allows programmers to write high-performance code. It takes care of everything for you and allows you to focus mainly on writing your code for data and behavior in your application.

  • Burst compiler: This compiler can produce highly optimized native code across multiple platforms by using a new LLVM-based backend compiler.

Due to these features, programmers can write multithreaded code with massive performance gains while running inside a safe sandbox. There are several advantages to using this stack. It is easier to build scenes with complex simulations that run on a large range of hardware, and code can be reused more easily due to the move from object-oriented to data-oriented design.
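As a minimal sketch of the C# Job System, assuming the Jobs and Collections packages are installed, a job struct implements IJob, and scheduling it hands the work to Unity’s worker threads:
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;
// A job that adds an offset to every element of an array.
public struct AddOffsetJob : IJob
{
    public NativeArray<float> values;
    public float offset;
    public void Execute()
    {
        for (int i = 0; i < values.Length; i++)
        {
            values[i] += offset;
        }
    }
}
public class JobRunner : MonoBehaviour
{
    void Update()
    {
        var data = new NativeArray<float>(1024, Allocator.TempJob);
        var job = new AddOffsetJob { values = data, offset = 1f };
        // Schedule the job on a worker thread...
        JobHandle handle = job.Schedule();
        // ...and wait for it to complete before reading the results.
        handle.Complete();
        data.Dispose();
    }
}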

Debugging

Creating applications for HoloLens will not always go as smoothly as you want. This section explains how you can debug your application, even when it is running on a HoloLens device or in the HoloLens emulator.

Lifecycle of Application

Deploying a Unity project results in a Visual Studio project that uses IL2CPP, which stands for Intermediate Language to C++. The problem with that is that your script files are converted to native code in the generated project. When you look at that project, you will notice it does not contain the same classes, functions, and properties, which makes it very hard to debug your code. You instead want to debug the script files that are written in C#. Debugging your scripts can be done at two levels:
  • Debugging your script files in the Unity Editor

  • Debugging your script files when your app runs on a device or emulator

Debugging with Unity

Debugging with Unity is one of the easiest ways of debugging your C# scripts with a managed debugger. But it requires you to run your solution in the Unity Editor. While that is fine for testing basic things like scene setup, the camera, and other Unity-specific functions, as soon as you are accessing external code or want to test gestures that are not available in the Unity Editor, you will need to start debugging scripts on your device or emulator.

This method of debugging does not require any additional configuration except for having the Unity Editor and Visual Studio. Open your project in Unity and double-click a C# script that you would like to debug, as shown in Figure 7-5.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig5_HTML.jpg
Figure 7-5

Double-click the script you wish to debug

That will start Visual Studio, which allows you to run a managed debugger. When Visual Studio has loaded all the scripts from the project, the double-clicked script is selected.

Press the play button in the Unity Editor to start your project in Unity, as shown in Figure 7-6.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig6_HTML.jpg
Figure 7-6

Start the project in the Unity Editor by pressing the play button

Now switch back to Visual Studio and place your breakpoint in the code. Select Debug in the configuration dropdown of the debug toolbar. Finally, click Attach to Unity in the same toolbar, as shown in Figure 7-7.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig7_HTML.jpg
Figure 7-7

Set the breakpoint and attach the managed debugger to Unity

The managed debugger will start, and your breakpoint is hit as soon as the code is executed, as shown in Figure 7-8.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig8_HTML.jpg
Figure 7-8

The managed debugger hits the breakpoint, allowing you to debug the C# script

Debugging at Your Device or Emulator

The following describes how to debug your scripts when the application runs on a Microsoft HoloLens 2 device or in the Microsoft HoloLens 2 emulator. Before we can start debugging, some settings and prerequisites must be in place.

We need to configure the player settings in the project settings of our Unity project, as shown in Figure 7-9.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig9_HTML.jpg
Figure 7-9

Configuring the Player settings of the Unity project

Select the menu option Edit ➤ Project Settings. The Project Settings tab opens next to the Inspector window. Select the Player option on the left. In the Player settings, go to the Publishing Settings tab and view the Capabilities list. Make sure that the following capabilities are enabled:
  • InternetClient

  • InternetClientServer

  • PrivateNetworkClientServer

These capabilities allow the debugger to connect to the device or the emulator over the network.

The second thing we need to do is configure several settings in the build window before we start generating the Visual Studio solution. Select the menu option File ➤ Build Settings to open the build settings dialog, as shown in Figure 7-10.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig10_HTML.jpg
Figure 7-10

Configure build settings for managed debugging

Make sure that the options Copy References, Development Build, and Script Debugging are selected.

There is also an option called Wait for Managed Debugger. This option pops up a dialog on your device or emulator and waits until you have connected a managed debugger. Until then, your application will not continue unless you press the close button in the dialog. An example is shown in Figure 7-11. In some situations this can be handy, but for now we’ll leave the option disabled.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig11_HTML.jpg
Figure 7-11

This dialog will be shown when the option Wait For Managed Debugger is selected

The last thing you need to take care of is making sure that Visual Studio is not blocked by the firewall on your machine. Make sure that Visual Studio is allowed to communicate through the firewall via both TCP and UDP.

Build the Visual Studio solution and open it in Visual Studio. Depending on the target device, you will need to configure the dropdown boxes for building the solution in Visual Studio. This can be seen in Figure 7-12.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig12_HTML.jpg
Figure 7-12

Configure the build settings for building and deploying to the device or emulator

In this example, we will be debugging the app using a Microsoft HoloLens 2 Emulator. Select Debug and x86 from the dropdown boxes at the top. Select the HoloLens 2 Emulator as the target.

Build the application using the configured build settings in Visual Studio, and deploy and run it on the device or in the emulator. Although you can start it under the debugger, you don’t need to; our debug session will run from a second instance of Visual Studio. When the application has started, you will see the emulator as shown in Figure 7-13.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig13_HTML.jpg
Figure 7-13

The application is running in the Microsoft HoloLens 2 emulator

As soon as your application is running, go back to your Unity project. Select one of the scripts by double-clicking, as shown in Figure 7-14.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig14_HTML.jpg
Figure 7-14

Double-click the script you want to debug to open Visual Studio

This will open another Visual Studio, as shown in Figure 7-15.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig15_HTML.jpg
Figure 7-15

Visual Studio as the managed debugger with a breakpoint set in C#

This time, we have the C# scripts in front of us. Place your breakpoint at the line that you want to debug.

We need to attach the debugger to the running instance of the application on the device or the emulator. Select Debug ➤ Attach Unity Debugger from the menu. This pops up a dialog showing all running instances of Unity and applications, as shown in Figure 7-16.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig16_HTML.jpg
Figure 7-16

Select the instance running on the device or emulator

If you instead click Attach to Unity at the top of Visual Studio, it connects to the local Unity instance by default rather than showing all available instances.

There are some issues with different versions of the Microsoft HoloLens 2 emulator that cause the device not to appear when you want to connect your managed debugger. At the time of writing this book we used the Visual Studio 2019 version 16.4.2 and the Microsoft HoloLens 2 emulator version 10.0.18362.1042 together with Windows 10 version 1909 build 18363.535. Make sure you are using at least the same or higher versions.

Select the instance running on your device or emulator. The managed debugger will start and your breakpoint is hit as soon as the code is executed, as shown in Figure 7-17.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig17_HTML.jpg
Figure 7-17

The managed debugger hits the breakpoint, allowing you to debug the C# script

This technique allows you to debug your C# scripts using a managed debugger. Keep in mind that changing the C# scripts will require a solution build from Unity and a build and deploy from Visual Studio.

Performance Monitoring

Building applications for devices like the Microsoft HoloLens requires you to think beyond just programming and creating scenes in Unity. Everything you do has an impact on how the CPU, GPU, and HPU are utilized. Using more CPU, for example, can impact the performance of your application, but it can also impact the battery life of the device. It is possible to build applications that literally drain the batteries in less than an hour.

Heavy scenes will affect the framerate of your application. Normal applications should run at around 60 frames per second (fps). The lower the fps, the more unstable your application becomes. Instability can show up as drifting holograms, color changes, and even hanging holograms.

There are several techniques that can help you create better-performing applications for Microsoft HoloLens 2. This book will not go into detail on them; check the appendices for references on improving your application.

Microsoft HoloLens offers different performance tools through the Windows Device Portal:
  • Performance tracing: Performance tracing captures Windows Performance Recorder (WPR) traces from your device. There are several profiles available to choose from. To perform a trace, select a profile and start tracing. As soon as you want to stop tracing, click the stop link and wait until the complete trace file has been downloaded.

  • Viewing processes: You can view all running processes through the processes page. For each process, the memory in use, the CPU utilization, and the account used to run it are shown. The list contains both application processes and system processes.

  • System performance: This page shows current system metrics as real-time graphs of the different systems in your device: system-on-chip and system power utilization, GPU utilization, CPU percentage used, the number of I/O reads and writes, network connectivity, memory, and the number of frames per second. The information shown can differ between the emulator and a HoloLens device. A sketch of retrieving these metrics through the Device Portal’s REST interface follows Figure 7-18.

An example of the graphical interface for system performance is shown in Figure 7-18.
../images/486566_1_En_7_Chapter/486566_1_En_7_Fig18_HTML.jpg
Figure 7-18

An example of the system performance graphs on an emulator
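The Device Portal also exposes this data through a REST interface. The following is a minimal sketch, assuming the Device Portal is enabled and reachable at the given address with your Device Portal credentials; the /api/resourcemanager/systemperf endpoint is part of the core Windows Device Portal API:
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
public static class DevicePortalClient
{
    // Retrieves the raw system performance data from the Device Portal.
    public static async Task<string> GetSystemPerfAsync(string deviceAddress, string user, string password)
    {
        var handler = new HttpClientHandler
        {
            Credentials = new NetworkCredential(user, password),
            // The Device Portal uses a self-signed certificate by default;
            // accepting it blindly is only acceptable for a quick local test.
            ServerCertificateCustomValidationCallback = (message, cert, chain, errors) => true
        };
        using (var client = new HttpClient(handler))
        {
            return await client.GetStringAsync($"https://{deviceAddress}/api/resourcemanager/systemperf");
        }
    }
}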
