Chapter 4: Creating an AR User Framework

In this chapter, we will develop a framework for building Augmented Reality (AR) applications that manage user interaction modes and the corresponding user interface (UI). The framework includes important user experience (UX) steps when starting up the AR session at runtime and interacting with AR features. This framework will form the basis for new scenes for projects later in this book.

This is a Unity framework for building mode-based applications. It generalizes some of the scene structure that I have found myself repeating from one project to the next. For example, when an AR app first starts, it must verify that the device supports AR. Once the AR session is initialized, the app may prompt the user to begin scanning the environment to establish tracking. At some point later in the application, the user might be prompted to tap the screen to place a virtual object, often in Add-object mode. These steps are common to many AR applications, including the projects in this book, so we will set up some infrastructure beforehand in a scene that may be used as a template.

This chapter involves some advanced C# coding. If you're already an intermediate or advanced programmer, you should be able to follow along fairly easily. If you're a novice, you can just copy/paste the code provided here and learn from it. Or, you have the option of skipping the chapter altogether and using the scene template from this chapter found in this book's GitHub repository.

In this chapter, we will cover the following topics:

  • Installing prerequisite assets for our framework
  • Starting with a new scene
  • Creating the UI canvas and panels
  • Creating the UI controller, using a Singleton class
  • Creating an interaction modes controller
  • Creating the interaction modes, including startup, scan, main, and non-AR modes
  • Using the Unity onboarding UX assets
  • Creating a scene template for new scenes

By the end of the chapter, you'll have a scene template, named ARTemplate, with AR onboarding features, and a user interaction framework that can be used as a starting point for other AR projects.

Technical requirements

To implement the project in this chapter, you need Unity installed on your development computer, connected to a mobile device that supports AR applications. We'll use the Unity project set up for AR development in Chapter 1, Setting Up for AR Development. In review, the project configuration included the following:

  • It created a new project (via Unity Hub) using the Universal Render Pipeline template.
  • It set the Target Platform to Android or iOS in Build Settings, along with the corresponding required Player Settings.
  • It installed an XR Plugin and the AR Foundation package, and configured the URP Forward Renderer for AR.
  • It installed the Input System package and set Active Input Handling (to Input System Package or Both).

The completed scene from this chapter can be found in this book's GitHub repository at https://github.com/PacktPublishing/Augmented-Reality-with-Unity-AR-Foundation.

Understanding AR interaction flow

In an Augmented Reality application, one of the first things the user must do is scan the environment with the device camera, slowly moving their device around until it detects geometry for tracking. This might be horizontal planes (floor, tabletop), vertical planes (walls), a human face, or other objects. A simplistic user flow given in many example scenes is shown in the following diagram:

Figure 4.1 – A simple AR onboarding user workflow

As shown in the preceding diagram, the app starts by checking for AR support, asking the user for permission to access the device camera and other initializations. Then, the app asks the user to scan the environment for trackable objects, and may need to report scanning problems, such as if the room is too dark or there's not enough texture to detect features. Once tracking is achieved, the user is prompted to tap the screen to place a virtual object in the scene.

This is great for demo scenes but is probably too simplistic for a real AR application. For example, in the Art Gallery app that we are going to build in Chapter 6, Gallery: Building an AR App, after the application starts, the environment is scanned for vertical planes (walls).

Then, the app enters Main mode, where the user must tap an Add button to add a new picture. That, in turn, displays a modal Select Image menu. With pictures added to the scene, the user can pick one and enter Edit mode to move, resize, or otherwise modify the virtual object. Part of this general interaction flow is shown in the following diagram:

Figure 4.2 – User interaction flow, including Main, Add, and Edit modes

Naturally, each application has its own interaction flows. The framework we are building in this chapter supports this scenario and can be adapted for other projects that require managing a current modal state and corresponding UI.

This framework implements a state machine design pattern, where the scene has a current state (interaction mode and visible UI). Specific conditions must be met to then transition from one state to another.
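To make the pattern concrete, here is a minimal, framework-independent sketch of such a state machine. This is an illustration only, not code we will use in this chapter; the state names and condition flags are hypothetical:

```csharp
using System;

// The app holds a current state and only transitions
// when the required condition for that state is met.
enum AppState { Startup, Scanning, Main }

class AppFlow
{
    public AppState Current { get; private set; } = AppState.Startup;

    // Hypothetical conditions; a real app would query the AR session
    public void Step(bool arSessionReady, bool trackablesFound)
    {
        switch (Current)
        {
            case AppState.Startup:
                if (arSessionReady) Current = AppState.Scanning;
                break;
            case AppState.Scanning:
                if (trackablesFound) Current = AppState.Main;
                break;
            // Main is a terminal state in this sketch
        }
    }
}
```

The framework we build below follows the same idea, but represents each state as a GameObject plus a UI panel rather than an enum value.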

There are two major areas of this framework – the UI panels and the interaction modes. Generally, there will be a one-to-one correlation between the modes and the UI used by the modes. For example, in Main mode, there will be the main menu UI. In Add-object mode, there will be a UI prompt for the user to tap to place an object in the scene. This implements a design pattern called view-controller, with UI views and mode controllers.

Let's now begin to implement this basic workflow in our scene by adding a number of additional prerequisite packages to the project.

Installing prerequisite assets

Our user interaction framework uses several additional packages that need to be installed in your project, namely, TextMeshPro, DOTween, and Serialized Dictionary Lite. In this section, I will also include some utility assets. Let's install them now.

TextMeshPro

TextMeshPro provides high-quality text assets that replace the built-in text element. It is not mandatory, but I strongly recommend it. To import TextMeshPro, if you haven't installed it yet in your project, perform the following steps:

  1. Go to Window | TextMeshPro | Import TMP Essential Resources.
  2. In the Import Unity Package window, click Import.

The TextMeshPro package is now installed. You may also install the TMP Examples and Extras package, which includes additional fonts and other assets that may be useful and fun for your projects.

DOTween

DOTween is, in my opinion, an indispensable free package for doing small, lightweight animation effects on just about any MonoBehaviour property. Without it, you may need to write a dozen lines of code to do what DOTween does in one. Documentation for DOTween can be found online at http://dotween.demigiant.com/documentation.php.
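For example, once DOTween is installed (the steps follow), a typical tween is a single call. This is an illustrative sketch, not part of our framework:

```csharp
using UnityEngine;
using DG.Tweening;   // DOTween namespace

public class TweenDemo : MonoBehaviour
{
    void Start()
    {
        // Move this object to (0, 2, 0) over 1 second, with easing - one line
        transform.DOMove(new Vector3(0f, 2f, 0f), 1f);

        // Tweens work on many properties; for example, fading a CanvasGroup:
        // GetComponent<CanvasGroup>().DOFade(0f, 0.5f);
    }
}
```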

To add DOTween, perform the following steps:

  1. Go to its Unity Asset Store page: https://assetstore.unity.com/packages/tools/animation/dotween-hotween-v2-27676.
  2. Press Add to My Assets and/or Open In Unity.
  3. This will take you to the Package Manager window in your Unity project.
  4. Ensure My Assets is selected from the Packages filter dropdown in the upper-left corner of the Package Manager window.
  5. Search for DOTween using the search text input field in the upper-right corner of the Package Manager window.
  6. Select the DOTween package and then click Install.
  7. Once imported, you are prompted to Open DOTween Utility Panel to set up the package.
  8. Then, click the Setup DOTween button.

DOTween is now installed and set up on your project.

Serialized Dictionary Lite

A C# dictionary is a key-value list structure where values in the list can be referenced by a key value. For example, we will use dictionaries to look up a UI panel or interaction mode object by name. Unfortunately, Unity does not provide native support for dictionaries in the Editor's Inspector window. Serialized Dictionary Lite is a free extension to the Unity Editor that allows dictionaries to be edited using Inspector. To add Serialized Dictionary Lite to your project, perform the following steps:

  1. Go to its Unity Asset Store page, https://assetstore.unity.com/packages/tools/utilities/serialized-dictionary-lite-110992
  2. Press Add to My Assets and/or Open In Unity.
  3. This will take you to the Package Manager window in your Unity project.
  4. Ensure My Assets is selected from the Packages filter dropdown in the upper-left corner of the Package Manager window.
  5. Search for Serialized using the search text input field in the upper-right corner of the Package Manager window.
  6. Select the Serialized Dictionary Lite package and click Install (or, if prompted, click Download and then Import).

Serialized Dictionary Lite is now installed in your project.
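As a plain C# illustration of the dictionary concept described above (the keys and messages here are hypothetical, for demonstration only):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class DictionaryDemo : MonoBehaviour
{
    void Start()
    {
        // A dictionary maps a key (here, a string name) to a value
        var messages = new Dictionary<string, string>
        {
            { "Startup", "Initializing..." },
            { "Scan", "Scanning..." }
        };

        // TryGetValue safely looks up a key without throwing if it's missing
        string text;
        if (messages.TryGetValue("Scan", out text))
            Debug.Log(text);   // logs "Scanning..."
    }
}
```

Serialized Dictionary Lite lets lists like this be populated in the Inspector window, which plain C# dictionaries do not support.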

Other prerequisite assets

In addition to the aforementioned packages, we will assume that you have the following already added to your Unity project:

  • Assets from the Unity arfoundation-samples project imported from the ARF-samples.unitypackage file created in Chapter 2, Your First AR Scene.
  • In Chapter 2, Your First AR Scene, we also created an AR Input Actions asset containing an Action Map named ARTouchActions, including (at least) one PlaceObject action.

With our prerequisite assets present, we can get started with building the scene.

Starting with a new scene

We start this project with a new empty scene and set it up with the AR Foundation objects: AR Session and AR Session Origin. Create a new scene named ARFramework using the following steps:

  1. Create a new scene using File | New Scene.
  2. Choose the Basic (Built-in) template. Press Create.
  3. Save the scene using File | Save As, navigate to your Assets/Scenes/ folder, give it the name ARFramework, and then click Save.

Next, we'll set up the scene with the basic AR Foundation game objects as follows:

  1. Delete Main Camera from the Hierarchy window by right-clicking and selecting Delete (or pressing the Del key on your keyboard).
  2. Add an AR session by selecting GameObject from the main menu, and then XR | AR Session.
  3. Add an AR Session Origin object by selecting GameObject from the main menu, and then XR | AR Session Origin.
  4. Select the AR Session Origin object in the Hierarchy window. In the Inspector window, click Add Component, search for raycast, and then add an AR Raycast Manager component.
  5. Unfold AR Session Origin and select its child AR Camera. In the Inspector window, use the Tag selector in the upper-left corner to set its tag to MainCamera. (This is not required, but it is a good practice to have one camera in the scene tagged as MainCamera).
  6. In the Inspector window, click Add Component, search for audio listener, and add an Audio Listener component to the camera.

For demo purposes, we'll add an AR Plane Manager component for detecting and tracking horizontal planes. This may change based on the requirements of a specific project:

  1. With AR Session Origin selected in the Hierarchy window, click Add Component in the Inspector window, search for ar plane manager, and then add an AR Plane Manager component.
  2. Choose an AR plane visualizer prefab and add it to the Plane Prefab slot. For example, try the AR Plane Debug Visualizer prefab found in the ARF-samples/Prefabs folder.

We can also set up some basic AR light estimation as follows:

  1. Select the AR Camera (the child of AR Session Origin) in the Hierarchy window. On its AR Camera Manager component, set Light Estimation to Everything.
  2. In the Hierarchy window, select the Directional Light game object. In the Inspector window, click Add Component, search for light estimation, and then add a Basic Light Estimation component.
  3. Drag the AR Camera object from the Hierarchy window onto the Basic Light Estimation | Camera Manager slot.
  4. Save your work using File | Save.

We now have a scene named ARFramework with a few things set up, including the AR Session, AR Session Origin, AR Camera, and basic light estimation. We can now begin to construct our framework's UI panels.

Creating the UI canvas and panels

The main screen space UI canvas will contain various user interface panels that may be displayed at various times throughout the application. Presently, we'll include the following UI panels:

  • The Startup UI panel with any initialization messages
  • The Scan UI panel, which prompts the user to scan for trackable features
  • The Main UI panel for the main mode that could display the main menu buttons
  • The NonAR UI panel, which could be shown when the device does not support Augmented Reality

Creating the screen space canvas

First, we need to create a Canvas to contain these panels. Follow these steps:

  1. From the main menu, select GameObject | UI | Canvas and rename the Canvas UI Canvas. We can leave the default Render Mode as Screen Space – Overlay. This will also add an Event System game object to the scene if one is not already present.
  2. By default, the new Canvas is in screen space, and this is what we want here. Some people prefer to change Canvas Scaler UI Scale Mode from Constant Pixel Size to Scale With Screen Size.
  3. To edit a Screen Space canvas, let's switch the Scene window to a 2D view by clicking the 2D button in the Scene window toolbar. Then, double-click the UI Canvas object in the Hierarchy window to focus the Scene view on this object.
  4. It's also helpful to arrange the Game window and Scene window side by side. Because we're developing for AR, set the Game window's display to a fixed portrait aspect ratio, such as 2160x1080 Portrait using the dimension select list in the Game window's top toolbar.

On this canvas, we will add the separate panels. First, let's add an app title at the top of the screen.

Adding an app title

Let's add a placeholder for an app title as a text panel positioned at the top of the screen. Add the title using the following steps:

  1. Right-click on UI Canvas and select UI | Panel. Rename the panel App Title Panel.
  2. With the App Title Panel object selected, in its Inspector window, open the Anchor Presets menu (found in the upper-left corner of the Rect Transform component), and click the Stretch-Top button. The Anchor Presets menu is shown open in the following screenshot, to the left of the Rect Transform component:
    Figure 4.3 – Anchor Presets menu for App Title Panel set to Top-Stretch

  3. Then, hold Shift + Alt and click Stretch-Top again to also set its pivot and position.
  4. Set Rect Transform | Height to 100.
  5. Next, right-click on App Title Panel, select UI | Text – TextMeshPro, and rename the object Title Text.
  6. In its TextMeshPro – Text component, set Text Input to My AR Project.
  7. Using the Anchor Presets menu in the upper-left corner of Rect Transform, select Stretch-Stretch. Then, hold Shift + Alt and click Stretch-Stretch to also set its pivot and position.
  8. Set Alignment to Center and Middle.
  9. You may also choose to adjust the Font Size and Vertex Color fields as you wish.

There isn't much to see yet, but the Game window, showing the app title, appears in the following screenshot:

Figure 4.4 – Game window (cropped) with the App Title panel anchored as Top-Stretch

Now that you have experience using the Anchor Presets menu, I'll abbreviate the instructions going forward. Next, we'll add a panel for the startup mode.

Creating the UI panels

We'll now create the UI panels for each of the initial interaction modes supported by the framework. Since they are all very similar, we'll create the first one, and then duplicate and modify it for the others.

The first UI panel, Startup UI, will be a text panel displayed when the app is initializing. Create it using the following steps:

  1. In the Hierarchy window, right-click the UI Canvas object and select UI | Panel. Rename it Startup UI.
  2. We don't need a background image so, in the Inspector window, remove the Image component using the 3-dot context menu | Remove Component.
  3. Click the Add Component button, search for canvas group, and add a Canvas Group component to the panel. We're going to use this component to fade panels on and off later in this chapter.
  4. Right-click the Startup UI object and select UI | Text – TextMeshPro.
  5. Set Text Input to Initializing….
  6. Using its Anchor Presets menu, select Stretch-Stretch. Then, hold Shift + Alt and click Stretch-Stretch.
  7. Set Alignment to Center and Middle.

Next, we can add a panel that can be displayed if the device we're running on does not support AR. Create this panel as follows:

  1. Right-click the Startup UI panel and select Duplicate. Rename it to NonAR UI.
  2. Unfold the object and select its child text object. Change the text content to Augmented reality not supported on this device.

The Scan UI panel will be used to prompt the user to scan the room while the app tries to detect AR features. Create the panel by following these steps:

  1. Right-click the Startup UI panel and select Duplicate. Rename it to Scan UI.
  2. Unfold the object and select its child text object. Change the text content to Scanning… Please move device slowly.

Lastly, we'll add a placeholder panel for the main mode UI. This panel could later include, for example, a main menu for the app:

  1. Right-click the Startup UI panel and select Duplicate. Rename it to Main UI.
  2. Unfold the object and select its child text object. For development purposes, change the text content to Main Mode Running.

The current UI Canvas hierarchy is shown in the following screenshot:

Figure 4.5 – UI Canvas hierarchy

So far, we have created a simple hierarchy of UI panels under a screen space UI Canvas. The panels are mostly acting as placeholders, each containing a text element so that you can see which panel is active at runtime. As you build your own apps from this scene, you'll fill in the panels with app-specific UI elements.

Next, we'll create the UI controller script.

Creating the UI controller

It will be convenient to have a script with a small API that makes it easy to switch between UI panels. For the controller scripts in our framework, I've decided to define them as singletons.

A singleton is a software design pattern that ensures there is only a single instance of a script object at runtime. Then, the object's instance can be more easily referenced, using a static reference to Instance in the class definition. Learn more at https://wiki.unity3d.com/index.php/Singleton.

Then, we'll write a UIController script that controls the visibility of your UI panels. Lastly, we'll implement some code to fade in and out for a more pleasing user experience when we hide and show the panels.

Creating a Singleton class script

We'll begin by writing a Singleton class to use (or, if you already have a favorite, feel free to use that Singleton class definition instead). You can find some singleton implementations available as packages in the Unity Asset Store, but all we need is a short script that you can now create as follows:

  1. In your Project window, create a new C# script in your Scripts/ folder by right-clicking and selecting Create | C# Script, and name it Singleton.
  2. Write the script as follows:

    using UnityEngine;

    /// <summary>
    ///     Singleton behaviour class, used for components that should only have one instance.
    /// </summary>
    /// <typeparam name="T"></typeparam>
    public class Singleton<T> : MonoBehaviour where T : Singleton<T>
    {
        public static T Instance { get; private set; }

        /// <summary>
        ///     Returns whether the instance has been initialized or not.
        /// </summary>
        public static bool IsInitialized
        {
            get { return Instance != null; }
        }

        /// <summary>
        ///     Base awake method that sets the singleton's unique instance.
        /// </summary>
        protected virtual void Awake()
        {
            if (Instance != null)
                Debug.LogError($"Trying to instantiate a second instance of singleton class {GetType().Name}");
            else
                Instance = (T)this;
        }

        protected virtual void OnDestroy()
        {
            if (Instance == this)
                Instance = null;
        }
    }

  3. Save the file.

    Info: A singleton as an anti-pattern

    Note that the singleton pattern can be abused, and some programmers are adamantly opposed to using it, as it can cause problems down the road should your application grow and get more complex. But it's a powerful tool when you are certain that the app will only ever require one instance of the class, as will be the case in this interaction framework. One of the main advantages of singletons is that you can then reference the object instance as a static variable on the object class itself. An alternative technique is to find the instance of the component at runtime, for example, by calling FindObjectOfType<T>() from the script's Start() function.
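For comparison, that alternative technique might look like the following. This is a hypothetical sketch (the UIController class is written later in this chapter):

```csharp
using UnityEngine;

public class FindInstanceDemo : MonoBehaviour
{
    UIController uiController;

    void Start()
    {
        // Instead of a static UIController.Instance reference,
        // search the loaded scene for the one component at runtime
        uiController = FindObjectOfType<UIController>();

        if (uiController == null)
            Debug.LogError("No UIController found in the scene");
    }
}
```

FindObjectOfType is relatively slow, so it is best called once (for example, in Start) and cached, as shown here.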

This script can be used to declare a singleton's MonoBehaviour class, as we'll see next in UIController and other scripts.

Writing the UIController script

With our Singleton class in hand, we can now write a UI controller. This component provides a way to switch between UI panels visible to the user. Perform the following steps to write the UIController class:

  1. Begin by creating a new script in your Project Scripts/ folder by right-clicking and selecting Create | C# Script. Name the script UIController.
  2. Double-click the file to open it for editing and replace the default content, starting with the following declarations:

    using UnityEngine;
    using RotaryHeart.Lib.SerializableDictionary;

    [System.Serializable]
    public class UIPanelDictionary : SerializableDictionaryBase<string, CanvasGroup> { }

    public class UIController : Singleton<UIController>
    {
        [SerializeField] UIPanelDictionary uiPanels;

        CanvasGroup currentPanel;

    At the top, we declare a serializable dictionary, UIPanelDictionary, using the Serializable Dictionary Lite package's base class (we installed this package as a prerequisite earlier in this chapter. See https://assetstore.unity.com/packages/tools/utilities/serialized-dictionary-lite-110992 and the associated Unity Forum for documentation). The dictionary lookup key is the UI's name, and its value is a reference to the UI panel's CanvasGroup component.

    Instead of declaring UIController as a MonoBehaviour class, we declare it a Singleton (which itself derives from MonoBehaviour). Don't worry about the syntax of the declaration, public class UIController : Singleton<UIController>. This is what our Singleton class expects.

    The script declares a uiPanels variable as a UIPanelDictionary. We also declare a currentPanel variable to track which panel is presently active.

  3. Next, add the following functions to the script, which ensure all the UI panels are disabled when the app is started, by iterating through the uiPanels list and calling SetActive(false):

        protected override void Awake()
        {
            base.Awake();
            ResetAllUI();
        }

        void ResetAllUI()
        {
            foreach (CanvasGroup panel in uiPanels.Values)
            {
                panel.gameObject.SetActive(false);
            }
        }

    Note that Awake calls base.Awake() because the parent Singleton class also has an Awake that must be called in order for this to work. Then it calls ResetAllUI.

  4. Then, add the following functions to the script:

        public static void ShowUI(string name)
        {
            Instance?._ShowUI(name);
        }

        void _ShowUI(string name)
        {
            CanvasGroup panel;
            if (uiPanels.TryGetValue(name, out panel))
            {
                ChangeUI(panel);
            }
            else
            {
                Debug.LogError("Undefined UI panel " + name);
            }
        }

        void ChangeUI(CanvasGroup panel)
        {
            if (panel == currentPanel)
                return;

            if (currentPanel)
                currentPanel.gameObject.SetActive(false);

            currentPanel = panel;

            if (panel)
                panel.gameObject.SetActive(true);
        }

    _ShowUI is an instance function that, given a panel name, calls ChangeUI. ChangeUI hides the current panel and then activates the requested one (note that I'm using an underscore prefix to distinguish private instance functions from the public one). The C# dictionary method TryGetValue looks up the value for the given key.

The static ShowUI class function simply calls the instance's _ShowUI function. In this way, another script can show a panel by calling UIController.ShowUI(panelname); without requiring a direct reference to the instance. It uses the null-conditional operator (https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/member-access-operators#null-conditional-operators--and-) as a shortcut to make sure the instance is defined before we reference it.
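The null-conditional call is just shorthand for an explicit null check. Here is a standalone C# illustration (unrelated to the framework code):

```csharp
using System;

class NullConditionalDemo
{
    static void Main()
    {
        string label = null;

        // label?.Length evaluates to null instead of throwing a
        // NullReferenceException when label is null
        Console.WriteLine(label?.Length);   // prints an empty line

        label = "AR";
        Console.WriteLine(label?.Length);   // prints 2

        // Equivalent long form of Instance?._ShowUI(name):
        // if (Instance != null) Instance._ShowUI(name);
    }
}
```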

Now, add the script as a component on the UI Canvas and set up its properties by performing the following steps:

  1. In the Hierarchy window, select UI Canvas.
  2. Drag the UIController script onto UI Canvas, adding it as a component.
  3. In the Inspector window, on the UI Controller component, unfold the UI Panels dictionary list.
  4. Click the + button in the bottom-right corner of the UI Panels list.
  5. In the element's Id slot, write Startup.
  6. Unfold the element and then, from the Hierarchy window, drag the Startup UI game object onto the Value slot.
  7. Repeat steps 4 – 6 three times for each of the following: NonAR : NonAR UI, Scan : Scan UI, and Main : Main UI.

The UI Controller component should now look like the following:

Figure 4.6 – UI Controller component populated with UI panel references

Thus far, we have created a simple UI for an AR application, organized on one canvas as a set of separate panels. Our plan is to present only one panel at a time to the user, depending on what the application is doing. We also wrote a UIController script to handle switching between panels.

Fading the UI panels

An improvement we can make is to fade the UI in and out while transitioning instead of abruptly hiding/showing a panel. Presently, we call SetActive to change the panel's visibility. Instead, we can use the panel's CanvasGroup component and animate its Alpha value, and the DOTween library is very handy for this. (You can skip this modification if you do not want to install DOTween). To do this, follow these steps:

  1. Open the UIController script for editing and add the following declaration at the top of the file:

    using DG.Tweening;

  2. Add these two fader helper functions at the bottom of the class:

        void FadeIn(CanvasGroup panel)
        {
            panel.gameObject.SetActive(true);
            panel.DOFade(1f, 0.5f);
        }

        void FadeOut(CanvasGroup panel)
        {
            panel.DOFade(0f, 0.5f).OnComplete(() => panel.gameObject.SetActive(false));
        }

  3. Then, modify the ChangeUI function to call the fader helpers instead of SetActive, as shown here (the commented-out lines are the ones being replaced):

        void ChangeUI(CanvasGroup panel)
        {
            if (panel == currentPanel)
                return;

            if (currentPanel)
                FadeOut(currentPanel);
                //currentPanel.gameObject.SetActive(false);

            currentPanel = panel;

            if (panel)
                FadeIn(panel);
                //panel.gameObject.SetActive(true);
        }

Now, when you run the scene, the UI panels will fade in and out as they are shown and hidden.

Next, we will write an Interaction Controller that handles the application interaction modes and uses the UI Controller to display the specific UI it needs.

Creating an interaction modes controller

For our user framework, we will make clever use of GameObjects with mode scripts on them to represent interaction modes. Modes will be enabled (and disabled) by enabling (and disabling) the corresponding objects. We'll organize these objects in a hierarchy, like the UI panels we created in the previous section, but kept separate so that the "controllers" stay apart from the "views," as prescribed by the controller/view software pattern. Presently, we'll include the following modes:

  • Startup mode: Active while the AR session is initializing, and then it initiates Scan mode.
  • NonAR mode: A placeholder should you want your application to run even if the device does not support AR.
  • Scan mode: This prompts the user to scan for trackable features until the AR session is ready, and then it initiates Main mode.
  • Main mode: This displays the main menu and handles non-modal interactions.

First, we'll create the object hierarchy representing each of these modes, under an Interaction Controller game object. With separate GameObjects representing each mode, we'll be able to enable one mode or another separately.

Creating the interaction mode hierarchy

To create the interaction mode hierarchy, perform the following steps:

  1. From the main menu, select GameObject | Create Empty, and rename the object Interaction Controller.
  2. Right-click the Interaction Controller object and select Create Empty. Rename it Startup Mode.
  3. Repeat step 2 three more times to create objects named NonAR Mode, Scan Mode, and Main Mode.

The mode hierarchy game objects now look like the following:

Figure 4.7 – Interaction Controller modes hierarchy

Now we can write and set up the InteractionController script.

Writing the Interaction Controller

The role of our Interaction Controller is to manage the top-level user interaction of the application. We'll begin by writing the script as follows:

  1. Create a new script in your Project Scripts/ folder by right-clicking and selecting Create | C# Script. Name the script InteractionController.
  2. Double-click the file to open it for editing and replace the default content, starting with the following declarations:

    using System.Collections;
    using UnityEngine;
    using RotaryHeart.Lib.SerializableDictionary;

    [System.Serializable]
    public class InteractionModeDictionary : SerializableDictionaryBase<string, GameObject> { }

    public class InteractionController : Singleton<InteractionController>
    {
        [SerializeField] InteractionModeDictionary interactionModes;

        GameObject currentMode;

    At the top, we declare a serializable dictionary, InteractionModeDictionary, using the Serializable Dictionary Lite package's base class. The dictionary key is the mode's name, and its value is a reference to the mode game object.

    Instead of declaring InteractionController as a MonoBehaviour class, we declare it a Singleton (which itself derives from MonoBehaviour).

    Then we declare the interactionModes variable as this type of dictionary. We also declare a currentMode variable that tracks the current enabled mode.

  3. Next, add the following functions to the script, which ensure all the modes are disabled when the app is started, by iterating through the interactionModes list and calling SetActive(false):

        protected override void Awake()
        {
            base.Awake();
            ResetAllModes();
        }

        void ResetAllModes()
        {
            foreach (GameObject mode in interactionModes.Values)
            {
                mode.SetActive(false);
            }
        }

    Note that Awake calls base.Awake() because the parent Singleton class also has an Awake that must be called in order for this to work. It then calls ResetAllModes.
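    As a reminder, the Singleton base class was written earlier in this chapter, in the Creating the UI controller, using a Singleton class section. A minimal version looks something like the following sketch (your implementation may differ in detail, but the important parts are the static Instance property and the virtual Awake that registers the instance):

    ```csharp
    using UnityEngine;

    // Minimal Singleton sketch for reference only -- use the version you
    // wrote earlier in this chapter; details may differ.
    public class Singleton<T> : MonoBehaviour where T : MonoBehaviour
    {
        public static T Instance { get; private set; }

        // Virtual so derived classes (like InteractionController) can
        // override it -- which is why they must call base.Awake().
        protected virtual void Awake()
        {
            if (Instance != null && Instance != this)
            {
                Destroy(gameObject);  // enforce a single instance
                return;
            }
            Instance = this as T;
        }
    }
    ```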

  4. Then, add the following functions to the script:

        public static void EnableMode(string name)
        {
            Instance?._EnableMode(name);
        }

        void _EnableMode(string name)
        {
            GameObject modeObject;
            if (interactionModes.TryGetValue(name, out modeObject))
            {
                StartCoroutine(ChangeMode(modeObject));
            }
            else
            {
                Debug.LogError("undefined mode named " + name);
            }
        }

        IEnumerator ChangeMode(GameObject mode)
        {
            if (mode == currentMode)
                yield break;

            if (currentMode)
            {
                currentMode.SetActive(false);
                yield return null;
            }
            currentMode = mode;
            mode.SetActive(true);
        }

    _EnableMode is an instance function that, given a mode name, calls ChangeMode. ChangeMode disables the current mode and then activates the requested one.

    Note that ChangeMode is called as a coroutine to allow the current mode an extra frame to be disabled before activating the new one. (To learn more about coroutines, see https://docs.unity3d.com/Manual/Coroutines.html).

    The static EnableMode class function simply calls the instance's _EnableMode function. In this way, another script can change modes by calling InteractionController.EnableMode(modeName); without requiring a direct reference to the instance. It uses the null-conditional operator (https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/member-access-operators#null-conditional-operators--and-) as a shortcut to make sure the instance is defined before we reference it.
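    For example, a hypothetical button handler script (not part of the framework, and the class and method names here are my invention) could request a mode change with a single call:

    ```csharp
    using UnityEngine;

    // Hypothetical example: any script in the scene can request a mode
    // change without holding a reference to the InteractionController.
    public class RescanButton : MonoBehaviour
    {
        // Wired to a UI Button's OnClick event, for instance.
        public void OnRescanClicked()
        {
            InteractionController.EnableMode("Scan");
        }
    }
    ```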

  5. Lastly, assuming we want the app to start in Startup mode, add the following:

        void Start()
        {
            _EnableMode("Startup");
        }

    This assumes we will include a "Startup" mode in the interactionModes dictionary.

The InteractionController will contain references to each of the app's mode game objects. When the app needs to switch modes, it will call InteractionController.EnableMode(modeName) with the name of the mode. The current mode will be disabled, and the requested one will be enabled.

Add the script as a component on the Interaction Controller game object and set up its properties by following these steps:

  1. In the Hierarchy window, select the Interaction Controller game object.
  2. Drag the InteractionController script onto the Interaction Controller, adding it as a component.
  3. In the Inspector window, on the Interaction Controller component, unfold the Interaction Modes dictionary list.
  4. Click the + button in the bottom-right corner of the Interaction Modes list.
  5. In the element's Id slot, type Startup.
  6. Unfold the element and then, from the Hierarchy window, drag the Startup Mode game object onto the Value slot.
  7. Repeat steps 4 – 6 three times for each of the following: NonAR : NonAR Mode, Scan : Scan Mode, and Main : Main Mode.

    The Interaction Controller component should now look like the following:

    Figure 4.8 – Interaction Controller component populated with interaction mode object references


  8. The Interaction Controller component will be responding to user input, so we need to add a Player Input component (assuming your project is using the new Input system).

    With Interaction Controller selected in the Hierarchy window, click Add Component in the Inspector window.

  9. Search for player input and add a Player Input component.
  10. Locate the AR Input Actions asset in your Project window (for example, the Inputs/ folder) and drag it to the Player Input | Actions slot. (As noted in the Technical requirements earlier in the chapter, I assume you already have this asset as created in Chapter 2, Your First AR Scene).
  11. Set Player Input | Behavior to Broadcast Messages.

    THIS IS IMPORTANT! We need to make sure the player actions are forwarded to the child mode objects.

In this section, we have created a hierarchy for interaction modes, organized under one Interaction Controller game object that has a script for enabling/disabling mode objects. Our plan is to allow only one mode to be active at a time. Of course, we still need to write the scripts that control each mode, and handle conditions when it's time to transition from one particular mode to a different one.

Creating the interaction modes behavior

When the app enables a mode, it will enable the corresponding game object, which has a script that controls the behavior of that mode. When the app changes modes, the current mode object will be disabled, and the new one enabled. Each mode is responsible for the following:

  • Displaying its corresponding UI
  • Transitioning to a different mode when specific conditions are met

We will write mode scripts for each of the modes.

The StartupMode script

Startup mode begins when the application starts (it's enabled from the InteractionController Start() function). It displays the Startup UI panel. Then it waits for the ARSession state to become ready, and transitions to Scan mode. Or, if the ARSession reports that AR is not supported on the current device, it transitions to NonAR mode.

Follow these steps to create Startup mode:

  1. Create a new script in your Project Scripts/ folder by right-clicking and selecting Create | C# Script, and name the script StartupMode.
  2. Drag the StartupMode script onto the Startup Mode game object in the Hierarchy window.
  3. Double-click the StartupMode script file to open it for editing and write it as follows:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    public class StartupMode : MonoBehaviour
    {
        [SerializeField] string nextMode = "Scan";

        void OnEnable()
        {
            UIController.ShowUI("Startup");
        }

        void Update()
        {
            if (ARSession.state == ARSessionState.Unsupported)
            {
                InteractionController.EnableMode("NonAR");
            }
            else if (ARSession.state >= ARSessionState.Ready)
            {
                InteractionController.EnableMode(nextMode);
            }
        }
    }

The script uses AR Foundation's static ARSession.state property to determine when the session is initialized, or whether AR is unsupported. The state is an ARSessionState enum with one of the following values:

  • None: The session has not yet been initialized.
  • Unsupported: The device does not support AR.
  • CheckingAvailability: The session is in the process of checking availability.
  • NeedsInstall: The device needs to install or update AR support software.
  • Installing: The device is in the process of installing AR support software.
  • Ready: The device supports AR and you can enable the ARSession component.
  • SessionInitializing: The AR session is scanning the environment and trying to detect trackable objects.
  • SessionTracking: The AR session has found trackable objects and can determine the device's location within the real-world 3D environment.

When state is Unsupported, we transition to NonAR mode.

When state is Ready (or higher), we transition to Scan mode.
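As an aside, instead of polling ARSession.state in Update, you could subscribe to AR Foundation's ARSession.stateChanged event. The following is a sketch of that event-driven variation (our framework uses polling, so this is shown only for comparison):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch of an event-driven alternative to polling ARSession.state
// in Update. Our framework polls instead; shown here for comparison.
public class StartupModeEvents : MonoBehaviour
{
    void OnEnable()
    {
        UIController.ShowUI("Startup");
        ARSession.stateChanged += OnStateChanged;
    }

    void OnDisable()
    {
        // Always unsubscribe to avoid dangling handlers.
        ARSession.stateChanged -= OnStateChanged;
    }

    void OnStateChanged(ARSessionStateChangedEventArgs args)
    {
        if (args.state == ARSessionState.Unsupported)
            InteractionController.EnableMode("NonAR");
        else if (args.state >= ARSessionState.Ready)
            InteractionController.EnableMode("Scan");
    }
}
```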

The ScanMode script

Scan mode is enabled when the device is scanning the environment, trying to detect trackable features in the real world. It displays a prompt asking the user to point the camera into the room and slowly move the device.

The conditions for ending Scan mode may vary depending on the AR application. For example, it may wait until at least one horizontal or vertical plane has been detected, or a reference image has been recognized, or a selfie face is being tracked. For now, we'll check the ARPlaneManager to see whether any trackable planes have been detected.

Perform the following steps to create Scan mode:

  1. Create a new script in your Project Scripts/ folder by right-clicking and selecting Create | C# Script and name the script ScanMode.
  2. Drag the ScanMode script onto the Scan Mode game object in the Hierarchy window.
  3. Double-click the ScanMode script file to open it for editing and write it as follows:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    public class ScanMode : MonoBehaviour
    {
        [SerializeField] ARPlaneManager planeManager;

        void OnEnable()
        {
            UIController.ShowUI("Scan");
        }

        void Update()
        {
            if (planeManager.trackables.count > 0)
            {
                InteractionController.EnableMode("Main");
            }
        }
    }

  4. Drag the AR Session Origin object from the Hierarchy window onto the Scan Mode | Plane Manager slot.

When Scan mode is enabled, the Scan UI panel is shown. The script then waits until at least one trackable plane has been detected by the AR system, checking for planeManager.trackables.count > 0, before switching to Main mode.
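If your project needs a stricter condition before leaving Scan mode, you could, for example, require a plane of a minimum size before switching. The following is a sketch of an alternative Update for ScanMode; the 0.25 square-meter threshold is an arbitrary example value:

```csharp
// Example variation of ScanMode's Update: only leave Scan mode once a
// sufficiently large plane is tracked. The threshold is arbitrary.
void Update()
{
    foreach (ARPlane plane in planeManager.trackables)
    {
        // ARPlane.size is the plane's 2D dimensions in meters.
        if (plane.size.x * plane.size.y > 0.25f)
        {
            InteractionController.EnableMode("Main");
            return;
        }
    }
}
```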

The MainMode script

Main mode, as its name implies, is the main operating mode of the application. It may display the main menu, for example, and handle main user interactions. For our default framework, there's not much to do yet apart from display the Main UI panel.

Perform the following steps to create Main mode:

  1. Create a new script in your Project's Scripts/ folder by right-clicking and selecting Create | C# Script and name the script MainMode.
  2. Drag the MainMode script onto the Main Mode game object in the Hierarchy window.
  3. Double-click the MainMode script file to open it for editing and write it as follows:

    using UnityEngine;

    public class MainMode : MonoBehaviour
    {
        void OnEnable()
        {
            UIController.ShowUI("Main");
        }
    }

Lastly, we define NonAR mode.

The NonARMode script

NonAR mode will be enabled when the device you're running does not support AR. You might simply notify the user that the app cannot run, and gracefully exit. Alternatively, you may continue to run the app without AR capabilities if that makes sense for your project.
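For example, a hypothetical Exit button on the NonAR UI panel could be wired to a handler like the following sketch (Application.Quit is ignored in the Unity Editor, so the sketch logs a message there instead):

```csharp
using UnityEngine;

// Hypothetical handler for an Exit button on the NonAR UI panel.
public class ExitButton : MonoBehaviour
{
    // Wired to a UI Button's OnClick event.
    public void OnExitClicked()
    {
#if UNITY_EDITOR
        Debug.Log("Application.Quit() called (ignored in the Editor)");
#else
        Application.Quit();
#endif
    }
}
```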

Perform the following steps to create a NonAR mode placeholder:

  1. Create a new script in your Project's Scripts/ folder by right-clicking and selecting Create | C# Script, and name the script NonARMode.
  2. Drag the NonARMode script onto the NonAR Mode game object in the Hierarchy window.
  3. Double-click the NonARMode script file to open it for editing and write it as follows:

    using UnityEngine;

    public class NonARMode : MonoBehaviour
    {
        void OnEnable()
        {
            UIController.ShowUI("NonAR");
        }
    }

That about does it. We've created a hierarchy with each of the interaction modes as children of Interaction Controller. To enable a mode, you'll call InteractionController.EnableMode(), which disables the current mode and activates a new one. When a mode is enabled, its mode script begins running, showing its UI, and potentially interacting with the user until specific conditions are met, and then transitions to a different mode. Let's try running the scene on your device.

Testing it out

Now is a good time to Build And Run the scene to make sure things are working as expected so far. Perform the following steps:

  1. First, be sure to save your work by using File | Save.
  2. Select File | Build Settings to open the Build Settings window.
  3. Click Add Open Scenes to add the ARFramework scene to Scenes In Build, and ensure it is the only scene in the list with a checkmark.
  4. Ensure that your target device is plugged into a USB port and that it is ready.
  5. Click Build And Run to build the project.

Once the project builds without errors, it launches on your device in Startup mode. You'll first see the words Initializing… from the Startup UI panel.

Once the AR Session is started, the app transitions to Scan mode and you will see the words Scanning... Please move device slowly.

Once a horizontal plane is being tracked, Scan mode transitions to Main mode. You will then see on the screen the words Main Mode Running....

If all goes well, the framework is working as intended. To accomplish this, we have implemented the Canvas UI and child panels for the user interface. We have implemented the Interaction Controller and child mode controllers with scripts that implement the UI and interactions required in each mode. And it's all wired together. This is a basic framework for an AR project that we will use for projects in this book.

There are many ways in which we can improve and build on this framework. For one, we can make the UI a little more interesting by replacing some of the text prompts with animated graphics from the AR Onboarding UX from Unity.

Using the Unity onboarding UX assets

Unity provides a set of AR onboarding UX assets useful for prompting users in an AR application. Onboarding refers to the user experience when your app starts up and prompts the user to interact with AR features. First, I'll explain some of what this package provides. Then we'll prepare the assets for use in our own projects.

Introducing the onboarding assets

The onboarding UX assets are part of the AR Foundation Demos project found at https://github.com/Unity-Technologies/arfoundation-demos. (This is different from the AR Foundation Samples project we explored in Chapter 2, Your First AR Scene). And its documentation can be found on that project's GitHub page.

The onboarding UX assets include icons and video graphics for prompting the user when scanning is required. They automatically tell the user the reasons why tracking may be failing, such as the room being too dark, or the camera view not seeing sufficient detail. The package provides components that manage this process, composed into an example prefab named ScreenspaceUI, which can be customized to match the look and feel of your own project.

For example, when the app is scanning, you can use an animated graphic prompt to Move Device Slowly while scanning the room. If there's a problem, it will display the reason, as shown in the left-side panel of the following image (where I have my finger covering the camera lens). It says Look for more textures or details in the area. If you want to prompt the user to tap the screen to place an object, there's a Tap to Place animated graphic, and so on:

Figure 4.9 – Using the onboarding UX assets


Furthermore, the package supports localization of the text prompts, should your project require multi-language support for various countries. It also includes some good default assets for visualizing AR planes and point clouds that you can use.

The package includes the following components.

  • ARUX Animation Manager: This displays instructional graphic animations to prompt the user to find a plane or tap to place, for example.
  • ARUX Reasons Manager: This checks the AR Session's status and displays reasons why tracking may be failing as hints to the user.
  • Localization Manager: This supports localized text and graphics for adapting the instructional and reasons UI to different languages.
  • UI Manager: This is an example script for managing the user workflow.

    Info: The UI Manager script is an example script

    The UIManager script from the AR Foundation Demos project is a useful control script, but it is only an example of how to interface with ARUXAnimationManager. Reading it is informative, but it is not directly reusable. In our framework, we have implemented our own solution for the user flow that replaces the UIManager script.

UI Manager lets you set up one or two goals via the Inspector window. A goal may be Found a Plane or Placed an Object. You then set the instructional UI to prompt the user to perform the current activity until its goal has been completed.

Preparing the Unity AR onboarding assets

While the onboarding UX assets are also available as a package in the Unity Asset Store, I recommend you clone the GitHub project version because it has more examples and assets, including Universal Render Pipeline (URP) shader-graph shaders. Both versions are full Unity projects, so either way, you will need to open it in a new Unity project and then export the assets into a package that you can import into your own projects.

We will clone the project and then export the AR Foundation Demos assets into a .unitypackage file that we can import into our own project. I will also provide a copy of this Unity package with the files for this book in the GitHub repository.

To clone the project and export the folders we want, perform the following steps:

  1. Clone a copy of the project from GitHub to your local machine. The project can be found at https://github.com/Unity-Technologies/arfoundation-demos. Please use whatever cloning method you prefer, for example, GitHub Desktop (https://desktop.github.com/) or Command Line (https://git-scm.com/download/).
  2. Open the Unity Hub application on your desktop.
  3. Add the project to Unity Hub by selecting Projects | Add, navigating to the cloned project's root folder, and then press Select Folder.
  4. In the Unity Hub projects list, if you see a yellow warning icon indicating that the Unity version used by the cloned project is not presently installed on your system, use the Unity Version selection to choose a newer version of the editor that you do have installed (preferably the same major release number).
  5. Open the project by selecting it from the Unity Hub projects list.
  6. We're going to move selected folders into a root folder named ARFoundationDemos that we can export into a package.

    In Unity, in the Project window, create a new folder using the + button in the top-left of the Project window and name it ARFoundationDemos.

  7. With your mouse, move the following four folders into this ARFoundationDemos/ folder: AddressableAssetsData, Common, Shaders, and UX.
  8. In the Project window, right-click on the ARFoundationDemos/ folder and select Export Package.
  9. The Exporting Package window will open. Click Export.
  10. Choose a directory outside of this project's root and name the file (such as ARF-OnboardingUX). Then, click Save.

Before you close the ARFoundationDemos project, you may want to look in the Package Manager window and note the AR Foundation package version used in the given project, to make sure your own project uses the same or later version of AR Foundation.

You can close the ARFoundationDemos project now. You now have an asset package you can use in this and other projects.

Installing dependency packages

The AR onboarding UX has some dependencies on other Unity packages that you must install in your own project: Addressables and Localization. Open your AR project and install them now.

The Addressable Asset system simplifies loading assets at runtime with a unified scheme. Assets can be loaded from any location with a unique address, whether they reside in your application or on a content delivery network, and can be accessed via direct references, traditional asset bundles, or Resources folders. The Addressables package is required by the onboarding UX assets. To learn more, see https://docs.unity3d.com/Packages/com.unity.addressables@latest.

To import the Addressables package, perform the following steps:

  1. Open the Package Manager window by using Window | Package Manager.
  2. Ensure Unity Registry is selected from the Packages filter dropdown in the upper-left corner of the Package Manager window.
  3. Search for Addressables using the search text input field in the upper-right corner of the Package Manager window.
  4. Select the Addressables package and click Install.

The Addressables package is now installed.

The Localization package translates text strings and other assets into local languages (see https://docs.unity3d.com/Packages/com.unity.localization@latest). To import the Localization package, perform the following steps (these steps may have changed by the time you read this):

  1. If you have not already done so, enable Preview Packages by navigating to the Edit | Project Settings | Package Manager settings and checking the Enable Preview Packages checkbox.
  2. Then, in the Package Manager window, use the + button in the top-left corner and select Add Package From Git URL.
  3. Then, type com.unity.localization to begin installing the package.

    Info: Using Preview packages and Git URLs

    As I write this, the Localization package is in preview, that is, not yet fully released by Unity, and it is not yet included in the Unity package registry. To enable preview packages, you must check Enable Preview Packages in Project Settings. If a package is not included in the built-in Unity registry, you can add it from a Git URL, from disk, or from a tarball file.

The Localization package is now installed. We can now install the AR onboarding UX assets themselves.

Importing the OnboardingUX package

We saved the assets exported from the AR Foundation Demos project into a .unitypackage file (named ARF-OnboardingUX.unitypackage in the preceding steps). Importing the package is straightforward. Back in your own Unity project, do the following:

  1. Select Assets | Import Package | Custom Package and locate the file. Alternatively, drag the ARF-OnboardingUX.unitypackage file from your Explorer or Finder directly into the Unity Project window.
  2. In the Import Unity Package window, click Import.
  3. The assets include materials that use the built-in render pipeline. Since our project is using the URP, you need to convert the materials by selecting Edit | Render Pipeline | Universal Render Pipeline | Upgrade Project Materials to URP.

    Tip: SphereObject shadow material in the URP

    The SphereObject prefab that comes with the onboarding UX demo assets is configured to cast a shadow using the built-in render pipeline, not the URP. As such, the shadow appears as a missing shader and is colored magenta. To fix this, locate the material named ShadowMat in the ARFoundationDemos/Common/Materials/ folder and, in the Inspector window, change its Shader, using the drop-down menu, to ShaderGraphs/BlurredShadowPlane.

The onboarding UX assets are now imported into your project. We can now add it to our framework scene.

Currently, our app renders a UI panel with text to prompt the user to scan the environment. This panel is a game object that is enabled when needed. Basically, we want to replace the panel text with animated graphics.

Writing the AnimatedPrompt script

Let's start by writing a new script, AnimatedPrompt, that displays a specific animation when it is enabled and hides the animation when disabled:

  1. Create a new script in your Project Scripts/ folder by right-clicking and selecting Create | C# Script and name the script AnimatedPrompt.
  2. Double-click the file to open it for editing and replace the default content, starting with the following declarations:

    using UnityEngine;

    public class AnimatedPrompt : MonoBehaviour
    {
        public enum InstructionUI
        {
            CrossPlatformFindAPlane,
            FindAFace,
            FindABody,
            FindAnImage,
            FindAnObject,
            ARKitCoachingOverlay,
            TapToPlace,
            None
        };

        [SerializeField] InstructionUI instruction;

        [SerializeField] ARUXAnimationManager animationManager;

        bool isStarted;

    In this script, we declare a serialized field, instruction, whose value is an InstructionUI enum indicating which animation to play (the enum values are borrowed from the UIManager script in the onboarding assets, for consistency). We also declare a reference to the scene's ARUXAnimationManager component.

  3. When the script is started or enabled, it will initiate the animated graphics. Inversely, when the object is disabled, the graphics are turned off:

        void Start()
        {
            ShowInstructions();
            isStarted = true;
        }

        void OnEnable()
        {
            if (isStarted)
                ShowInstructions();
        }

        void OnDisable()
        {
            animationManager.FadeOffCurrentUI();
        }

    The isStarted flag is a small fix to ensure the animation does not start twice when both Start and OnEnable are called at startup (OnEnable runs before Start the first time the object is activated).

  4. When the script is enabled, it calls the helper function, ShowInstructions, which calls a corresponding function in ARUXAnimationManager:

        void ShowInstructions()
        {
            switch (instruction)
            {
                case InstructionUI.CrossPlatformFindAPlane:
                    animationManager.ShowCrossPlatformFindAPlane();
                    break;
                case InstructionUI.FindAFace:
                    animationManager.ShowFindFace();
                    break;
                case InstructionUI.FindABody:
                    animationManager.ShowFindBody();
                    break;
                case InstructionUI.FindAnImage:
                    animationManager.ShowFindImage();
                    break;
                case InstructionUI.FindAnObject:
                    animationManager.ShowFindObject();
                    break;
                case InstructionUI.TapToPlace:
                    animationManager.ShowTapToPlace();
                    break;
                default:
                    Debug.LogError("instruction switch missing, please edit AnimatedPrompt.cs " + instruction);
                    break;
            }
        }
    }

Now we can add this to our scene.

Integrating the onboarding graphics

To integrate the onboarding graphics, we can add the demo prefab (unfortunately named ScreenspaceUI) from the AR Foundation Demos package. Follow these steps:

  1. In the Project window, navigate to the ARFoundationDemos/UX/Prefabs/ folder and drag the ScreenspaceUI prefab into the Hierarchy window root of the scene.
  2. Give it a more indicative name; rename the object OnboardingUX.
  3. Our framework replaces the demo UI Manager component, so you should remove this.

    With the OnboardingUX object selected in Hierarchy, click the 3-dot context menu in the top-right corner of the UI Manager component in the Inspector window and select Remove Component.

We can now use AnimatedPrompt to replace the text in our UI prompt panels. To use it, perform the following steps:

  1. In the Hierarchy window, right-click the Scan UI panel object, select Create Empty, and rename the new child object Animated Prompt.
  2. With the Animated Prompt object selected, drag the new AnimatedPrompt script from the Project window onto the object.
  3. Set the Animated Prompt | Instruction to Cross-Platform Find A Plane.
  4. From the Hierarchy window, drag the OnboardingUX object into the Inspector window and drop it on to the Animation Manager slot.
  5. You can disable the Text (TMP) child element of Scan Prompt Panel so that it won't be rendered.

If you Build And Run the project again, when it enters Scan mode, you will be greeted with nice, animated graphics instead of the text prompt.

With a working AR user framework, let's make this scene into a template that we can use when creating new scenes.

Creating a scene template for new scenes

We can save this ARFramework scene we've been working on as a template to use for starting new scenes in this Unity project. To create a scene template, perform the following steps:

  1. With the ARFramework scene open, select File | Save As Scene Template.
  2. In the Save window, navigate to your Scenes/ folder, verify the template name (ARFramework.scenetemplate), and then press Save.
  3. Subsequently, when you want to start a new AR scene, use this template. By default, Unity will duplicate any dependencies within the scene into a separate folder. In our case, this is generally not what we want to do.

    To prevent cloning the scene dependencies when the template is used, click on this new scene template file in your Project Assets/ window.

  4. In its Inspector window, in the Dependencies panel, uncheck each of the assets you do not want to be cloned and want to be shared between your scenes. In our case, we do not want to clone any, so use the checkbox at the top of the Clone column to change the checkboxes in bulk.

    Tip: Updating a template scene

    A Unity scene template contains metadata used when selecting and instantiating a new scene via the File | New Scene menu. It does not include the scene's GameObjects. Rather, it contains a reference to your prototype Unity scene. If you want to modify this prototype, don't re-save the scene as a new template. You simply edit the scene it is referencing. In this case, that will be the scene named ARFramework. Just remember to check the Dependencies list in the template if you've added any new assets to the scene as these will default to be cloned.

To use the template when creating a new scene in this project, use File | New Scene as usual. The dialog box will now contain the ARFramework template as an option. Select the location in your assets folder and press Create. If the template specifies any assets to be cloned, those copies will be added to a subfolder with the same name as the new scene.

We are now ready to build upon the work we did in this chapter, using the ARFramework template for new project scenes.

Summary

In this chapter, we developed a framework for building AR applications and saved it as a template we can use for projects in this book. The framework provides a state-machine structure for implementing modes and identifying the conditions when to transition to a different mode. The framework also offers a controller-view design pattern where, when a mode is active, its corresponding UI is visible, keeping the mode control objects separate from the UI view objects.

For the framework template, we implemented four modes: Startup mode, Scan mode, Main mode, and NonAR mode, along with four UI panels: Startup UI, Scan UI, Main UI, and NonAR UI. Scan mode uses the onboarding UX assets from the AR Foundation Demos project to prompt the user to scan for trackable features and report problems with detection and the AR session.

In the next chapter, I will demonstrate the use of this framework with a simple demo project and then build upon the framework more extensively in subsequent chapters.
