Chapter 5: Using the AR User Framework

In this chapter, we will learn how to use the Augmented Reality (AR) user framework that we set up in the previous chapter, Chapter 4, Creating an AR User Framework. Starting with the ARFramework scene template, we will add a main menu for placing virtual objects in the environment. If you skipped that chapter or just read through it, you can find the scene template and assets in the files provided on this book's GitHub repository.

For this project, we'll extend the framework with a new PlaceObject-mode that prompts the user to tap to place a virtual object in the room. The user will have a choice of objects from the main menu.

In the latter half of the chapter, I'll discuss some advanced AR application issues including making an AR-optional project, determining whether a device supports a specific AR feature, and adding localization to your User Interface (UI).

This chapter will cover the following topics:

  • Planning the project
  • Starting with the ARFramework scene template
  • Adding a main menu
  • Adding PlaceObject mode and instructional UI
  • Wiring the menu buttons
  • Doing a Build And Run
  • Hiding tracked objects when not needed
  • Making an AR-optional project
  • Determining whether a device supports specific AR features at runtime
  • Adding localization features to a project

By the end of the chapter, you'll be more familiar with the AR user framework developed for this book, which we'll use in subsequent chapters as we build a variety of different AR application projects.

Technical requirements

To implement the project in this chapter, you need Unity installed on your development computer, with a mobile device connected that supports AR applications (see Chapter 1, Setting Up for AR Development, for instructions), including the following:

  • Universal Render Pipeline
  • Input System package
  • XR Plugin for your target device
  • AR Foundation package

We assume you have imported the assets from the Unity arfoundation-samples project via the ARF-samples.unitypackage file created in Chapter 2, Your First AR Scene.

Also in Chapter 2, Your First AR Scene, we created an AR Input Actions asset that we'll use in this project. It contains an action map named ARTouchActions with (at least) a PlaceObject action.

We also assume you have the ARFramework scene template created in Chapter 4, Creating an AR User Framework, along with all the prerequisite Unity packages detailed at the beginning of that chapter. A copy of the template and assets can be found in this book's GitHub repository at https://github.com/PacktPublishing/Augmented-Reality-with-Unity-AR-Foundation (not including the third-party packages, which you should install yourself).

Specifically, the AR user framework requires the following packages, as detailed in Chapter 4, Creating an AR User Framework:

  • The Addressables package
  • The Localization package
  • TextMesh Pro
  • The DOTween package from the Asset Store
  • The Serialized Dictionary Lite package from the Asset Store

The completed scene for this chapter can also be found in the GitHub repository.

Planning the project

For this project, we'll create a simple demo AR scene, starting with the ARFramework scene template and building on the user framework structure we have set up.

With the framework, when the app first starts, Startup-mode is enabled and the AR Session is initialized. Once the session is ready, it transitions to Scan-mode.

If the AR Session determines that the current device does not support AR, Scan-mode will transition to NonAR-mode instead. Presently this just puts a text message on the screen. See the Making an AR-optional project section near the end of this chapter for more information.

In Scan-mode, the user is prompted to use their device camera to slowly scan the room until AR features are detected, namely, horizontal planes. The ScanMode script checks for any tracked planes and then transitions to Main-mode.
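To recap how mode scripts drive these transitions, a minimal ScanMode could look like the following. This is only a sketch; the version you wrote in Chapter 4 may differ in detail, and it assumes the framework's UIController.ShowUI and InteractionController.EnableMode static methods used throughout this chapter:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ScanMode : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;

    void OnEnable()
    {
        // Show the scanning prompt UI when this mode becomes active
        UIController.ShowUI("Scan");
    }

    void Update()
    {
        // Once at least one horizontal plane is tracked, move on to Main-mode
        if (planeManager.trackables.count > 0)
            InteractionController.EnableMode("Main");
    }
}
```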

Given this, our plan is to add the following features:

  • The AR session will be configured to detect and track horizontal planes. We'll also render point clouds.
  • Main-mode will show a main menu with buttons that let the user choose objects to place in the real-world environment. We'll include three buttons for a cube, a sphere, and a virus (created in Chapter 2, Your First AR Scene), but you can find your own models to use instead.
  • When a place-object button is selected, it will enable a new PlaceObject-mode that prompts the user to tap to place the object onto a detected plane.
  • Tapping on a tracked horizontal plane will create an instance of the object in the scene. The app then goes back to Main-mode.
  • Tracked AR features (planes and point clouds) will be hidden in Main-mode, and visible in PlaceObject-mode.

I have chosen to provide a cube, a sphere, and a virus (the virus model was created in Chapter 2, Your First AR Scene). Feel free to find and use your own models instead. The prefab assets I will be using are the following:

  • AR Placed Cube (found in the Assets/ARF-samples/Prefabs/ folder)
  • AR Placed Sphere (found in the Assets/ARF-samples/Prefabs/ folder)
  • Virus (found in Assets/_ARFBookAssets/Chapter02/Prefabs/ folder)

This is a simple AR demo that will help you become more familiar with the AR user framework we developed and will use in subsequent projects in this book.

Let's get started.

Starting with the ARFramework scene template

To begin, we'll create a new scene named FrameworkDemo using the ARFramework scene template, using the following steps:

  1. Select File | New Scene.
  2. In the New Scene dialog box, select the ARFramework template.
  3. Press Create.
  4. Select File | Save As. Navigate to the Scenes/ folder in your project's Assets folder, give it the name FrameworkDemo, and press Save.

    Note: Unintended clone dependencies

    When creating a new scene from a scene template, if you're prompted right away for a name to save the file under, this indicates your scene template has some clone dependencies defined. If this is not your intention, cancel the creation, select the template asset in your Project window, and ensure all the Clone checkboxes are cleared in the Dependencies list. Then try creating your new scene again.

The new AR scene already has the following game objects included from the template:

  • The AR Session game object
  • The AR Session Origin rig with the raycast manager and plane manager components
  • UI Canvas, a screen space canvas with Startup UI, Scan UI, Main UI, and NonAR UI child panels, along with the UI Controller component script that we wrote
  • Interaction Controller, a game object with the Interaction Controller component script we wrote, which switches between interaction modes (Startup, Scan, Main, and NonAR). It also has a Player Input component configured with the AR Input Actions asset we previously created
  • The OnboardingUX prefab from the AR Foundation Demos project, which provides AR session status and feature detection status messages, and animated onboarding graphics prompts

Set up the app title now as follows:

  1. In the Hierarchy window, unfold the UI Canvas object, and unfold its child App Title Panel.
  2. Select the Title Text object.
  3. In its Inspector, change its text content to Place Object Demo.

The default AR Session Origin already has an AR Plane Manager component. Let's ensure it's only detecting horizontal planes, and add a point cloud visualization too. Follow these steps:

  1. In the Hierarchy window, select the AR Session Origin object.
  2. In the Inspector, set the AR Plane Manager | Detection Mode to Horizontal by first selecting Nothing (to clear the list) and then selecting Horizontal.
  3. Click the Add Component button, search for ar point cloud, then add an AR Point Cloud Manager component.
  4. Find a point cloud visualizer prefab and set the Point Cloud Prefab slot (for example, AR Point Cloud Debug Visualizer can be found in the Assets/ARF-samples/Prefabs/ folder).
  5. Save your work with File | Save.

We've created a new scene based on the ARFramework template and added AR trackables managers for point clouds and horizontal planes. Next, we'll add the main menu.

Adding a main menu

The main menu UI resides under the Main UI panel (under UI Canvas) in the scene hierarchy. We will create a menu sub-panel with three horizontally arranged buttons that let the user add a cube, a sphere, or a virus. Follow these steps:

  1. In the Hierarchy, unfold the UI Canvas, and unfold its child Main UI object.
  2. First, remove the temporary Main mode text element. Right-click the child Text object and select Delete.
  3. Right-click the Main UI and select UI | Panel, then rename it Main Menu.
  4. On the Main Menu panel, use the Anchor Presets to set Bottom-Stretch, and use Shift + Alt + click Bottom-Stretch to make a bottom panel. Then set Rect Transform | Height to 175.
  5. I set my background Image | Color to opaque white (Alpha: 255).
  6. Select Add Component, search for layout, then select Horizontal Layout Group.
  7. On the Horizontal Layout Group component check the Control Child Size | Width and Height checkboxes (leave the others at their default values, Use Child Scale unchecked, and Child Force Expand checked). The Main Menu panel looks like this in the Inspector:
Figure 5.1 – The Main Menu panel settings

Now we'll add three buttons to the menu using the following steps:

  1. Right-click the Main Menu, select UI | Button – TextMeshPro, and rename it to Cube Button.
  2. Select its child text object, and set the Text value to Cube and Font Size to 48.
  3. Right-click the Cube Button and select Duplicate (or press Ctrl + D). Rename it Sphere Button and change its text to Sphere.
  4. Repeat step 3 again, renaming it Virus Button, and changing the text to Virus.

The resulting scene hierarchy of the Main Menu is shown in the following screenshot:

Figure 5.2 – Main Menu hierarchy

I decided to go further and add a sprite image of each model to the buttons. I created the images by screen-capturing a view of each model, editing them in Photoshop, and saving them as PNG files; in Unity, I made sure each image's Texture Type was set to Sprite (2D and UI). I then added a child Image element to each button. The result is shown in the following image of my menu:

Figure 5.3 – Main Menu with icon buttons

Thus far we have created a Main Menu panel with menu buttons under the Main UI. When the app goes into Main-mode, this menu will be displayed.

Next, we'll add a UI panel that prompts the user to tap the screen to place an object into the scene.

Adding PlaceObject-mode with instructional UI

When the user picks an object from the main menu, the app will enable PlaceObject-mode. For this mode, we need a UI panel to prompt the user to tap the screen to place the object. Let's create the UI panel first.

Creating the PlaceObject UI panel

The PlaceObject UI panel should be similar to the Scan UI one, so we can duplicate and modify it using the following steps:

  1. In the Hierarchy window, unfold the UI Canvas.
  2. Right-click the Scan UI game object and select Duplicate. Rename the new object PlaceObject UI.
  3. Unfold PlaceObject UI and select its child Animated Prompt.
  4. In the Inspector, set the Animated Prompt | Instruction to Tap To Place. The resulting component is shown in the following screenshot:
    Figure 5.4 – Animated Prompt settings for the PlaceObject UI panel

  5. Now we add the panel to the UI Controller.

    In the Hierarchy, select the UI Canvas object.

  6. In the Inspector, at the bottom-right of the UI Controller component, click the + button to add an item to the UI Panels dictionary.
  7. Enter PlaceObject as text in the Id field.
  8. Drag the PlaceObject UI game object from the Hierarchy onto the Value slot. The UI Controller component now looks like the following:
Figure 5.5 – UI Controller's UI Panels list with PlaceObject added

We added an instructional user prompt for the PlaceObject UI. When the user chooses to add an object to the scene, this panel will be displayed. Next, we'll add the PlaceObject mode and script.

Creating the PlaceObject mode

To add a mode to the framework, we create a child GameObject under the Interaction Controller and write a mode script. The mode script will show the mode's UI, handle any user interactions, and then transition to another mode when it is done. For PlaceObject-mode, it will display the PlaceObject UI panel, wait for the user to tap the screen, instantiate the prefab object, and then return to Main-mode.

Let's write the PlaceObjectMode script as follows:

  1. Begin by creating a new script in your project's Scripts/ folder using right-click | Create | C# Script, and name the script PlaceObjectMode.
  2. Double-click the file to open it for editing and replace the default content, starting with the following declarations:

    using System.Collections;

    using System.Collections.Generic;

    using UnityEngine;

    using UnityEngine.InputSystem;

    using UnityEngine.XR.ARFoundation;

    using UnityEngine.XR.ARSubsystems;

    public class PlaceObjectMode : MonoBehaviour

    {

        [SerializeField] ARRaycastManager raycaster;

        GameObject placedPrefab;

        List<ARRaycastHit> hits = new List<ARRaycastHit>();

    The script will use APIs from ARFoundation and ARSubsystems, so we specify these in the using statements at the top of the script. It will use the ARRaycastManager to determine which tracked plane the user has tapped, and then instantiate the placedPrefab into the scene.

  3. When the mode is enabled, we will show the PlaceObject UI panel:

        void OnEnable()

        {

            UIController.ShowUI("PlaceObject");

        }

  4. When the user selects an object from the Main Menu, we need to tell PlaceObjectMode which prefab to instantiate, given the following code:

        public void SetPlacedPrefab(GameObject prefab)

        {

            placedPrefab = prefab;

        }

  5. Then when the user taps the screen, the Input System triggers an OnPlaceObject event (given the AR Input Actions asset we previously set up), using the following code:

        public void OnPlaceObject(InputValue value)

        {

            Vector2 touchPosition = value.Get<Vector2>();

            PlaceObject(touchPosition);

        }

        void PlaceObject(Vector2 touchPosition)

        {

            if (raycaster.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))

            {

                Pose hitPose = hits[0].pose;

                Instantiate(placedPrefab, hitPose.position, hitPose.rotation);

                InteractionController.EnableMode("Main");

            }

        }

    }

    When a touch event occurs, we pass the touchPosition to the PlaceObject function, which does a Raycast to find the tracked horizontal plane. If found, we Instantiate the placedPrefab at the hitPose location and orientation. And then the app returns to Main-mode.

  6. Save the script and return to Unity.

We can now add the mode to the Interaction Controller as follows:

  1. In the Hierarchy window, right-click the Interaction Controller game object and select Create Empty. Rename the new object PlaceObject Mode.
  2. Drag the PlaceObjectMode script from the Project window onto the PlaceObject Mode object adding it as a component.
  3. Drag the AR Session Origin object from the Hierarchy onto the Place Object Mode | Raycaster slot.

    Now we'll add the mode to the Interaction Controller.

  4. In the Hierarchy, select the Interaction Controller object.
  5. In the Inspector, at the bottom-right of the Interaction Controller component, click the + button to add an item to the Interaction Modes dictionary.
  6. Enter PlaceObject as text in the Id field.
  7. Drag the PlaceObject Mode game object from the Hierarchy onto the Value slot. The Interaction Controller component now looks like the following:
Figure 5.6 – The Interaction Controller's Interaction Modes list with PlaceObject added

We have now added a PlaceObject Mode to our framework. It will be enabled by the Interaction Controller when EnableMode("PlaceObject") is called by another script or, in our case, by a main menu button. When enabled, the script shows the PlaceObject instructional UI, then listens for an OnPlaceObject input action event. Upon the input event, we use Raycast to determine where in the 3D space the user wants to place the object, then the script instantiates the prefab and returns to Main-mode.

The final step is to wire up the main menu buttons.

Wiring the menu buttons

When the user presses a main menu button to add an object to the scene, the button will tell PlaceObjectMode which prefab is to be instantiated. Then PlaceObject mode is enabled, which prompts the user to tap to place the object and handles the user input action. Let's set up the menu buttons now using the following steps:

  1. Unfold the Main Menu game object in the Hierarchy by navigating to UI Canvas / Main UI / Main Menu and select the Cube Button object.
  2. In its Inspector, on the Button component, in its OnClick section, press the + button in the bottom right to add an event action.
  3. From the Hierarchy, drag the PlaceObject Mode object onto the OnClick Object slot.
  4. In the Function selection list, choose PlaceObject Mode | SetPlacedPrefab.
  5. In the Project window, locate a cube model prefab to use. For example, navigate to your Assets/ARF-samples/Prefabs/ folder and drag the AR Placed Cube prefab into the Game Object slot for this click event in the Inspector.
  6. Now let the button enable PlaceObject Mode. In its Inspector, on the Button component, in its OnClick section, press the + button in the bottom right to add another event action.
  7. From the Hierarchy, drag the Interaction Controller object onto the OnClick event's Object slot.
  8. In the Function selection list, choose InteractionController | EnableMode.
  9. In the string parameter field, enter PlaceObject.

The Cube Button object's Button component now has the following OnClick event settings:

Figure 5.7 – The OnClick events for the Cube Button

Repeat these steps for the Sphere Button and Virus Button. As a shortcut, we can copy and paste the component settings as follows:

  1. With the Cube Button selected in the Hierarchy, over in the Inspector, click the three-dot context menu for the Button component, and select Copy Component.
  2. In the Hierarchy, select the Sphere Button object.
  3. In its Inspector, click the three-dot context menu for the Button component, and select Paste Component Values.
  4. In the Project window, locate a sphere model prefab to use. For example, navigate to your Assets/ARF-samples/Prefabs/ folder and drag the AR Placed Sphere prefab into the Game Object slot for this click event in the Inspector.
  5. Likewise, repeat steps 1-4 for the Virus Button, and set the GameObject to the Virus prefab (perhaps located in your own Prefabs folder).
  6. Save your work using File | Save.

Everything should be set up now. We created a new scene using the ARFramework template, added a main menu with buttons, added the PlaceObject-mode with instructional user prompt, wrote the PlaceObjectMode script that handles user input actions and instantiates the prefab, and wired it all up to the main menu buttons. Let's try it out!

Doing a Build And Run

To build and run the project, use the following steps:

  1. Open the Build Settings window using File | Build Settings.
  2. Click the Add Open Scenes button if the current scene (FrameworkDemo) is not already in the Scenes In Build list.
  3. Ensure that the FrameworkDemo scene is the only one checked in the Scenes In Build list.
  4. Click Build And Run to build the project.

When the project builds successfully, it starts up in Startup-mode while the AR Session is initializing. Then it goes into Scan-mode, which prompts the user to scan the environment until at least one horizontal plane is detected and tracked. Then it goes into Main-mode and displays the main menu. Screen captures of the app running on my phone in each of these modes are shown in the following figure:

Figure 5.8 – Screen captures of Startup-mode, Scan-mode, and Main-mode

On pressing one of the menu buttons, the app goes into PlaceObject-mode, prompting the user to tap to place an object. Tapping the screen instantiates the object at the specified location in the environment. Then the app returns to Main-mode.

We now have a working demo AR application for placing various virtual objects onto horizontal surfaces in your environment. One improvement might be to hide the trackable objects in Main-mode and only display them when needed in PlaceObject-mode.

Hiding tracked objects when not needed

When the app first starts tracking, we show the trackable planes and point clouds. This is useful feedback to the user when the app first starts, and subsequently when placing an object. But once we have objects placed in the scene, these trackable visualizations can be distracting and unwanted. Let's only show the trackables while in PlaceObject-mode, and hide them after at least one virtual object has been placed.

In AR Foundation, hiding the trackables requires two separate things: hiding the existing trackables that have already been detected, and preventing new trackables from being detected and visualized. We will implement both.

To implement this, we can write a separate component on the PlaceObject Mode object that shows the trackables when enabled and hides them when disabled. Follow these steps:

  1. Create a new C# script in your Scripts/ folder named ShowTrackablesOnEnable and open it for editing.
  2. At the top of the class, add variable references to ARSessionOrigin, ARPlaneManager, and ARPointCloudManager, along with an isStarted flag, and initialize them in Awake, as follows:

    using UnityEngine;

    using UnityEngine.XR.ARFoundation;

    public class ShowTrackablesOnEnable : MonoBehaviour

    {

        [SerializeField] ARSessionOrigin sessionOrigin;

        ARPlaneManager planeManager;

        ARPointCloudManager cloudManager;

        bool isStarted;

        void Awake()

        {

            planeManager = sessionOrigin.GetComponent<ARPlaneManager>();

            cloudManager = sessionOrigin.GetComponent<ARPointCloudManager>();

        }

        private void Start()

        {

            isStarted = true;

        }

    I've also added an isStarted flag that we'll use to prevent the visualizers from being hidden when the app starts up.

    Info: OnEnable and OnDisable can be called before Start

    In the life cycle of a MonoBehaviour component, OnEnable is called when the object becomes enabled and active. OnDisable is called when the script object becomes inactive. Start is called on the first frame the script is enabled, just before Update. See https://docs.unity3d.com/ScriptReference/MonoBehaviour.Awake.html.

    In our app, it is possible for OnDisable to get called before Start (when we're initializing the scene from InteractionController). To prevent ShowTrackables(false) from getting called before the scene has started, we use an isStarted flag in this script.

  3. We will show the trackables when the mode is enabled and hide them when disabled using the following code:

        void OnEnable()

        {

            ShowTrackables(true);

        }

        void OnDisable()

        {

            if (isStarted)

            {

                ShowTrackables(false);

            }

        }

  4. These call ShowTrackables, which we implement as follows:

        void ShowTrackables(bool show)

        {

            if (cloudManager)

            {

                cloudManager.SetTrackablesActive(show);

                cloudManager.enabled = show;

            }

            if (planeManager)

            {

                planeManager.SetTrackablesActive(show);

                planeManager.enabled = show;

            }

        }

    }

    Calling SetTrackablesActive(false) will hide all the existing trackables. Disabling the trackable manager component itself prevents new trackables from being added. We check for null managers in case a component is not present on the AR Session Origin.

  5. Save the script.
  6. Back in Unity, select the PlaceObject Mode game object in the Hierarchy.
  7. Drag the ShowTrackablesOnEnable script onto the PlaceObject Mode object.
  8. Drag the AR Session Origin object from the Hierarchy into the Inspector and drop it onto the Show Trackables On Enable | Session Origin slot.
  9. Save the scene using File | Save.

Now when you click Build And Run again, the trackables will be shown when PlaceObject-mode is enabled and hidden when it is disabled. Thus, the trackables will be visible when Main-mode is first enabled, but after an object has been placed and the app returns to Main-mode, the trackables will be hidden. This is the behavior we want. The PlaceObject-mode and subsequent Main-mode are shown in the following screen captures of the project running on my phone:

Figure 5.9 – Screen captures of PlaceObject-mode, and subsequent Main-mode with trackables hidden

Tip: Disable trackables by modifying the plane detection mode

To disable plane detection, the method I'm using is to disable the manager component. This is the technique given in the example PlaneDetectionController.cs script in the AR Foundation Samples project. Alternatively, the Unity ARCore XR Plugin docs (https://docs.unity3d.com/Packages/[email protected]/manual/index.html) recommend disabling plane detection by setting the ARPlaneManager detection mode to PlaneDetectionMode.None.
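That alternative could be wrapped in a small component like the following sketch. The component name and methods are hypothetical; the requestedDetectionMode property and PlaneDetectionMode enum are part of the AR Foundation 4.x API:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hypothetical helper: toggles plane detection without disabling the manager
public class PlaneDetectionToggle : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;

    public void StopDetection()
    {
        // Stop detecting new planes; existing trackables are unaffected
        planeManager.requestedDetectionMode = PlaneDetectionMode.None;
    }

    public void StartDetection()
    {
        // Resume detecting horizontal planes
        planeManager.requestedDetectionMode = PlaneDetectionMode.Horizontal;
    }
}
```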

We've now completed a simple AR project to place various virtual objects on horizontal planes detected in the environment, using our AR user framework.

Further improvements you could add to the project include the following:

  • A reset button in the main menu to remove any virtual objects already placed in the scene.
  • Only allow one instance of a virtual object to be placed in the scene at a time.
  • The ability to move and resize an existing object (see Chapter 7, Gallery: Editing Virtual Objects).
  • Can you think of more improvements? Let us know.

In the rest of this chapter, we'll discuss some advanced onboarding and user experience features you may want to include in your projects at a later time.

Advanced onboarding issues

In this section, we'll review some other issues related to AR onboarding, AR sessions, and devices, including the following:

  • Making an AR-optional project
  • Determining whether the device supports a specific AR feature
  • Adding localization to your project

Making an AR-optional project

Some applications are intended to be run specifically using AR features and should just quit (after a friendly notification to the user) if AR is not supported. But other applications may want to behave like an ordinary mobile app, with the extra, optional capability of supporting AR features.

For example, a game I recently created, Epoch Resources (available for Android at https://play.google.com/store/apps/details?id=com.parkerhill.EpochResources&hl=en_US&gl=US, and iOS at https://apps.apple.com/us/app/epoch-resources/id1455848902) is a planetary evolution incremental game with a 3D planet you mine for resources. It offers an optional AR-viewing mode where you can "pop" the planet into your living room and continue playing the game in AR, as shown in the following image.

Figure 5.10 – Epoch Resources is an AR-optional game

For an AR-optional application, your app will probably start up as an ordinary non-AR app. Then at some point the user may choose to turn on AR-specific features. That's when you'll activate the AR Session and handle the onboarding UX.
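A hedged sketch of such a startup check follows, using AR Foundation's ARSession.CheckAvailability() and ARSession.state API; the ShowARButton method is a hypothetical hook into your own UI:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class AROptionalStartup : MonoBehaviour
{
    IEnumerator Start()
    {
        // Ask the device whether AR is supported before offering AR features
        if ((ARSession.state == ARSessionState.None) ||
            (ARSession.state == ARSessionState.CheckingAvailability))
        {
            yield return ARSession.CheckAvailability();
        }

        // Only expose the AR viewing option when the device supports it
        ShowARButton(ARSession.state != ARSessionState.Unsupported);
    }

    void ShowARButton(bool show)
    {
        // Hypothetical: enable or hide your app's "view in AR" UI here
    }
}
```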

None of the projects in this book implement AR-optional, so this is an informational discussion only. To start, you'll tell the XR plug-in that AR is optional by going to Edit | Project Settings | XR Plug-in Management and selecting Requirement | Optional (instead of Required) for each of your platforms (ARCore and ARKit are set separately).

You will need a mechanism for running with or without AR. One approach is to have separate AR and non-AR scenes that are loaded as needed (see https://docs.unity3d.com/ScriptReference/SceneManagement.SceneManager.html).

In the case of the Epoch Resources game, we did not create two separate scenes. Rather the scene contains two cameras, the normal default camera for non-AR mode and the AR Session Origin (with child camera) for AR mode. We then flip between the two cameras when the user toggles viewing modes.
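A minimal sketch of that camera-flipping approach might look like this (the class and field names are hypothetical, not the actual Epoch Resources code):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hypothetical sketch: switch between a normal camera and an AR rig
public class ARViewToggle : MonoBehaviour
{
    [SerializeField] Camera nonARCamera;       // the normal scene camera
    [SerializeField] GameObject sessionOrigin; // AR Session Origin with child AR camera
    [SerializeField] ARSession arSession;

    public void SetARMode(bool useAR)
    {
        arSession.enabled = useAR;                // run or pause the AR session
        sessionOrigin.SetActive(useAR);           // AR camera on or off
        nonARCamera.gameObject.SetActive(!useAR); // and the reverse for the normal camera
    }
}
```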

Another issue you may run into is determining whether the user's device supports a specific AR feature at runtime.

Determining whether the device supports a specific AR feature

It is possible that your app requires a specific AR feature that is not supported by all devices. We can ask the Unity AR subsystems what features are supported by getting the subsystem descriptor records.

For example, suppose we are interested in detecting vertical planes. Some older devices may support AR but only horizontal planes. The following code illustrates how to get and check plane detection support:

using System.Collections.Generic;

using UnityEngine;

using UnityEngine.XR.ARSubsystems;

public class CheckPlaneDetectionSupport : MonoBehaviour

{

    void Start()

    {

        var planeDescriptors = new List<XRPlaneSubsystemDescriptor>();

        SubsystemManager.GetSubsystemDescriptors(planeDescriptors);

        Debug.Log("Plane descriptors count: " + planeDescriptors.Count);

        if (planeDescriptors.Count > 0)

        {

            foreach (var planeDescriptor in planeDescriptors)

            {

                Debug.Log("Support horizontal: " + planeDescriptor.supportsHorizontalPlaneDetection);

                Debug.Log("Support vertical: " + planeDescriptor.supportsVerticalPlaneDetection);

                Debug.Log("Support arbitrary: " + planeDescriptor.supportsArbitraryPlaneDetection);

                Debug.Log("Support classification: " + planeDescriptor.supportsClassification);

            }

        }

    }

}

The types of descriptors available in AR Foundation include the following (their purpose is self-evident from their names):

  • XRPlaneSubsystemDescriptor
  • XRRaycastSubsystemDescriptor
  • XRFaceSubsystemDescriptor
  • XRImageTrackingSubsystemDescriptor
  • XREnvironmentProbeSubsystemDescriptor
  • XRAnchorSubsystemDescriptor
  • XRObjectTrackingSubsystemDescriptor
  • XRParticipantSubsystemDescriptor
  • XRDepthSubsystemDescriptor
  • XROcclusionSubsystemDescriptor
  • XRCameraSubsystemDescriptor
  • XRSessionSubsystemDescriptor
  • XRHumanBodySubsystemDescriptor

Documentation for the AR Subsystems API and these descriptor records can be found at https://docs.unity3d.com/Packages/[email protected]/api/UnityEngine.XR.ARSubsystems.html. For example, the XRPlaneSubsystemDescriptor record we used here is documented at https://docs.unity3d.com/Packages/[email protected]/api/UnityEngine.XR.ARSubsystems.XRPlaneSubsystemDescriptor.Cinfo.html.

If you are planning to distribute your application in different countries, you may also be interested in localization.

Adding localization

Localization is the translation of text strings and other assets into local languages. It can also specify date and currency formatting, alternative graphics for national flags, and so on, to accommodate international markets and users. The Unity Localization package provides a standard set of tools and data structures for localizing your application. More information can be found at https://docs.unity3d.com/Packages/[email protected]/manual/QuickStartGuide.html. We do not use localization in any projects in this book, except where already supported by imported assets such as the Onboarding UX assets from the AR Foundation Demos project.

The Unity Onboarding UX assets have built-in support for localization of the user prompts and explanations of scanning problems. The ReasonsUX localization tables provided with the Onboarding UX project, for example, can be opened by selecting Window | Asset Management | Localization Tables, as shown in the following screenshot. You can see, for example, that the second-row INIT key says, in English, Initializing augmented reality, along with the same string translated into many other languages:

Figure 5.11 – The ReasonsUX localization tables included in Onboarding UX assets

In the code, the Initializing augmented reality message, for example, is retrieved with a call like this:

string localizedInit = reasonsTable.GetEntry("INIT").GetLocalizedString();
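Alternatively, the Localization package can look up a string through its string database without holding a direct table reference. This is a sketch under the assumption that a string table named ReasonsUX with an INIT key exists (as in the Onboarding UX assets); the lookup is asynchronous because tables may need to be loaded first:

```csharp
using UnityEngine;
using UnityEngine.Localization.Settings;

// Illustrative sketch: fetch a localized string via the string database
public class LocalizedInitMessage : MonoBehaviour
{
    void Start()
    {
        // Asynchronously look up the INIT entry in the ReasonsUX table
        var handle = LocalizationSettings.StringDatabase
            .GetLocalizedStringAsync("ReasonsUX", "INIT");
        handle.Completed += op =>
            Debug.Log("Localized: " + op.Result);
    }
}
```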

When we added the onboarding UX prefab (ARFoundationDemos/UX/Prefabs/ScreenspaceUI) to our scene, I had you disable the Localization Manager component because it gives runtime errors until it is set up. Provided you've installed the Localization package via Package Manager as described earlier in this chapter, we can set it up now for the project using the following steps:

  1. Open the Localization settings window by going to Edit | Project Settings | Localization.
  2. In the Project window, navigate to Assets/ARFoundationDemos/Common/Localization/ and drag the LocalizationSettings asset onto the Localization Settings slot (or use the doughnut icon to open the selection dialog box).
  3. In the settings window, click Add All.
  4. In the Hierarchy window, select the OnboardingUX object and in the Inspector, enable the Localization Manager component.
  5. Open the Addressables Groups window using Window | Asset Management | Addressables | Groups.
  6. From the Addressables Groups menu bar, select Build | New Build | Default Build Script. You will need to do this for each target platform you are building for (for example, once for Android and once for iOS).

As you can see in this last step, the Localization package uses Unity's new Addressables system for managing, packing, and loading assets from any location locally or over the internet (https://docs.unity3d.com/Packages/[email protected]/manual/index.html).

Note that as I'm writing this, the Onboarding UX LocalizationManager script does not select the language at runtime. The language must be set in the Inspector and compiled into your build.
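If you do want to switch the language at runtime yourself, the Localization package exposes the selected locale directly. A minimal sketch, assuming the locales have been added to the project as in the steps above (the class and method names here are hypothetical, for illustration only):

```csharp
using UnityEngine;
using UnityEngine.Localization;
using UnityEngine.Localization.Settings;

// Illustrative sketch: change the active language at runtime
public class RuntimeLocaleSwitcher : MonoBehaviour
{
    // Set the active language by locale code, for example "fr" or "es"
    public void SetLocale(string code)
    {
        Locale locale =
            LocalizationSettings.AvailableLocales.GetLocale(code);
        if (locale != null)
            LocalizationSettings.SelectedLocale = locale;
        else
            Debug.LogWarning("Locale not available: " + code);
    }
}
```

Such a method could be wired to a language-picker button in your settings UI.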

The AR UI framework we built in this chapter can be used as a template for new scenes. Unity makes it easy to set that up.

Summary

In this chapter, we got a chance to use the AR user framework we developed in the previous chapter, Chapter 4, Creating an AR User Framework, in a simple AR Place Object Demo project. We created a new scene using the ARFramework scene template that implements a state machine mechanism for managing user interaction modes. It handles user interaction with a controller-view design pattern, separating the control scripts from the UI graphics.

By default, the scene includes the AR Session and AR Session Origin components required by AR Foundation. The scene is set up with a Canvas UI containing separate panels that will be displayed for each interaction mode. It also includes an Interaction Controller that references separate mode objects, one for each interaction mode.

The modes (and corresponding UI) given with the template are Startup, Scan, Main, and NonAR. An app using this framework first starts in Startup-mode while the AR Session is initializing. Then it goes into Scan-mode, prompting the user to scan the environment for trackable features, until a horizontal plane is detected. Then it goes into Main-mode and displays the main menu.

For this project, we added a main menu that is displayed during Main-mode and that contains buttons for placing various virtual objects in the environment. Pressing a button enables a new PlaceObject-mode that we added to the scene. When PlaceObject-mode is enabled, it displays an instructional animated prompt for the user to tap to place an object in the scene. After an object is added, the app returns to Main-mode, and the trackables are hidden so you can see your virtual objects in the real world without any extra distractions.

In the next chapter, we will go beyond a simple demo project and begin to build a more complete AR application – a photo gallery where you can place framed photos of your favorite pictures on the drab walls in your home or office.
