In this chapter, we will learn how to use the Augmented Reality (AR) user framework that we set up in the previous chapter, Chapter 4, Creating an AR User Framework. Starting with the ARFramework scene template, we will add a main menu for placing virtual objects in the environment. If you skipped that chapter or just read through it, you can find the scene template and assets in the files provided on this book's GitHub repository.
For this project, we'll extend the framework with a new PlaceObject-mode that prompts the user to tap to place a virtual object in the room. The user will have a choice of objects from the main menu.
In the latter half of the chapter, I'll discuss some advanced AR application issues including making an AR-optional project, determining whether a device supports a specific AR feature, and adding localization to your User Interface (UI).
This chapter will cover the following topics:
By the end of the chapter, you'll be more familiar with the AR user framework developed for this book, which we'll use in subsequent chapters as we build a variety of different AR application projects.
To implement the project in this chapter, you need Unity installed on your development computer, with a mobile device connected that supports AR applications (see Chapter 1, Setting Up for AR Development, for instructions), including the following:
We assume you have imported the assets from the Unity arfoundation-samples project via the ARF-samples.unitypackage we created in Chapter 2, Your First AR Scene.
Also in Chapter 2, Your First AR Scene, we created an AR Input Actions asset that we'll use in this project, containing an action map named ARTouchActions that includes (at least) a PlaceObject action.
We also assume you have the ARFramework scene template created in Chapter 4, Creating an AR User Framework, along with all the prerequisite Unity packages detailed at the beginning of that chapter. A copy of the template and assets can be found in this book's GitHub repository at https://github.com/PacktPublishing/Augmented-Reality-with-Unity-AR-Foundation (not including the third-party packages, which you should install yourself).
The AR user framework requires the following prerequisites, as detailed in Chapter 4, Creating an AR User Framework:
The completed scene for this chapter can also be found in the GitHub repository.
With the framework, when the app first starts, Startup-mode is enabled and the AR Session is initialized. Once the session is ready, it transitions to Scan-mode.
If the AR Session determines that the current device does not support AR, Scan-mode will transition to NonAR-mode instead. Presently this just puts a text message on the screen. See the Making an AR-optional project section near the end of this chapter for more information.
In Scan-mode, the user is prompted to use their device camera to slowly scan the room until AR features are detected, namely, horizontal planes. The ScanMode script checks for any tracked planes and then transitions to Main-mode.
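For reference, that check can be as simple as polling the plane manager each frame until a plane is tracked. The following is a minimal sketch of what such a ScanMode script might look like; the planeManager and interactionController field names, and the EnableMode call, are assumptions based on the framework from Chapter 4, so your actual implementation may differ:

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ScanMode : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;
    [SerializeField] InteractionController interactionController;

    void Update()
    {
        // Once at least one horizontal plane is tracked, move on to Main-mode
        if (planeManager.trackables.count > 0)
            interactionController.EnableMode("Main");
    }
}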
Given this, our plan is to add the following features:
I have chosen to provide a cube, a sphere, and a virus (the virus model was created in Chapter 2, Your First AR Scene). Feel free to find and use your own models instead. The prefab assets I will be using are the following:
This is a simple AR demo that will help you become more familiar with the AR user framework we developed and will use in subsequent projects in this book.
Let's get started.
Note: Unintended clone dependencies
When creating a new scene from a scene template, if you're prompted right away for a name to save the file under, this indicates your scene template has some clone dependencies defined. If this is not your intention, cancel the creation, select the template asset in your Project window, and ensure all the Clone checkboxes are cleared in the Dependencies list. Then try creating your new scene again.
The new AR scene already has the following game objects included from the template:
Set up the app title now as follows:
The default AR Session Origin already has an AR Plane Manager component. Let's ensure it's detecting only horizontal planes, and let's add a point cloud visualization too. Follow these steps:
The main menu UI resides under the Menu UI panel (under UI Canvas) in the scene hierarchy. We will add a menu panel with three buttons to let you add a cube, a sphere, and a virus. We'll create a menu sub-panel and arrange the menu buttons horizontally. Follow these steps:
Now we'll add three buttons to the menu using the following steps:
I decided to go further and add a sprite image of each model to the buttons. I created the images by screen-capturing a view of each model, editing them in Photoshop, saving them as PNG files, and, in Unity, setting each image's Texture Type to Sprite (2D and UI). I then added a child Image element to the buttons. The result is shown in the following image of my menu:
Thus far we have created a Main Menu panel with menu buttons under the Main UI. When the app goes into Main-mode, this menu will be displayed.
Next, we'll add a UI panel that prompts the user to tap the screen to place an object into the scene.
When the user picks an object from the main menu, the app will enable PlaceObject-mode. For this mode, we need a UI panel to prompt the user to tap the screen to place the object. Let's create the UI panel first.
In the Hierarchy, select the UI Canvas object.
To add a mode to the framework, we create a child GameObject under the Interaction Controller and write a mode script. The mode script will show the mode's UI, handle any user interactions, and then transition to another mode when it is done. For PlaceObject-mode, it will display the PlaceObject UI panel, wait for the user to tap the screen, instantiate the prefab object, and then return to Main-mode.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class PlaceObjectMode : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycaster;

    GameObject placedPrefab;
    List<ARRaycastHit> hits = new List<ARRaycastHit>();
The script will use APIs from ARFoundation and ARSubsystems so we specify these in the using statements at the top of the script. It will use the ARRaycastManager to determine which tracked plane the user has tapped. Then it will instantiate the placedPrefab into the scene.
    public void SetPlacedPrefab(GameObject prefab)
    {
        placedPrefab = prefab;
    }

    public void OnPlaceObject(InputValue value)
    {
        Vector2 touchPosition = value.Get<Vector2>();
        PlaceObject(touchPosition);
    }
    void PlaceObject(Vector2 touchPosition)
    {
        if (raycaster.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))
        {
            // Use the nearest hit; raycast hits are sorted by distance
            Pose hitPose = hits[0].pose;
            Instantiate(placedPrefab, hitPose.position, hitPose.rotation);
        }
    }
}
When a touch event occurs, we pass the touchPosition to the PlaceObject function, which performs a raycast to find the tracked horizontal plane the user tapped. If one is found, we Instantiate the placedPrefab at the hitPose position and rotation, and then the app returns to Main-mode.
We can now add the mode to the Interaction Controller as follows:
We have now added a PlaceObject-mode to our framework. It will be enabled by the Interaction Controller when EnableMode("PlaceObject") is called by another script or, in our case, by a main menu button. When enabled, the script shows the PlaceObject instructional UI, then listens for an OnPlaceObject input action event. Upon the input event, the script raycasts to determine where in 3D space the user wants to place the object, instantiates the prefab, and returns to Main-mode.
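For context, here's a minimal sketch of how a mode-switching method like EnableMode can be implemented, assuming each mode is a child GameObject of the Interaction Controller named after its mode. This is illustrative only; your Chapter 4 implementation may differ:

using UnityEngine;

public class InteractionController : MonoBehaviour
{
    // Activate the child mode object whose name matches, deactivate the rest
    public void EnableMode(string modeName)
    {
        foreach (Transform mode in transform)
            mode.gameObject.SetActive(mode.name == modeName);
    }
}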
The final step is to wire up the main menu buttons.
When the user presses a main menu button to add an object to the scene, the button will tell PlaceObjectMode which prefab is to be instantiated. Then PlaceObject-mode is enabled, which prompts the user to tap to place the object and handles the user input action. Let's set up the menu buttons now using the following steps:
The Cube Button object's Button component now has the following OnClick event settings:
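As an aside, if you'd rather make these same OnClick connections from a script instead of in the Inspector, a minimal sketch might look like the following; the field names here are hypothetical:

using UnityEngine;
using UnityEngine.UI;

public class PlaceObjectMenuButton : MonoBehaviour
{
    [SerializeField] Button button;
    [SerializeField] GameObject prefab;
    [SerializeField] PlaceObjectMode placeObjectMode;
    [SerializeField] InteractionController interactionController;

    void Start()
    {
        button.onClick.AddListener(() =>
        {
            // Equivalent to the Inspector OnClick settings:
            // pass the prefab, then switch modes
            placeObjectMode.SetPlacedPrefab(prefab);
            interactionController.EnableMode("PlaceObject");
        });
    }
}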
Everything should be set up now. We created a new scene using the ARFramework template, added a main menu with buttons, added the PlaceObject-mode with instructional user prompt, wrote the PlaceObjectMode script that handles user input actions and instantiates the prefab, and wired it all up to the main menu buttons. Let's try it out!
When the project builds successfully, it starts up in Startup-mode while the AR Session is initializing. Then it goes into Scan-mode that prompts the user to scan the environment, until at least one horizontal plane is detected and tracked. Then it goes into Main-mode and displays the main menu. Screen captures of the app running on my phone in each of these modes are shown in the following figure:
On pressing one of the menu buttons, the app goes into PlaceObject-mode, prompting the user to tap to place an object. Tapping the screen instantiates the object at the specified location in the environment. Then the app returns to Main-mode.
We now have a working demo AR application for placing various virtual objects onto horizontal surfaces in your environment. One improvement might be to hide the trackable objects in Main-mode and only display them when needed in PlaceObject-mode.
When the app first starts tracking, we show the trackable planes and point clouds. This is useful feedback to the user when the app first starts and subsequently when placing an object. But once we have objects placed in the scene, these trackable visualizations can be distracting and unwanted. Let's show the trackables only while in PlaceObject-mode and hide them after at least one virtual object has been placed.
In AR Foundation, hiding the trackables requires two separate things: hiding the existing trackables that have already been detected, and preventing new trackables from being detected and visualized. We will implement both.
To implement this, we can write a separate component on the PlaceObject mode object that shows the trackables when it's enabled and hides them when it's disabled. Follow these steps:
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ShowTrackablesOnEnable : MonoBehaviour
{
    [SerializeField] ARSessionOrigin sessionOrigin;

    ARPlaneManager planeManager;
    ARPointCloudManager cloudManager;
    bool isStarted;

    void Awake()
    {
        planeManager = sessionOrigin.GetComponent<ARPlaneManager>();
        cloudManager = sessionOrigin.GetComponent<ARPointCloudManager>();
    }

    private void Start()
    {
        isStarted = true;
    }
Info: OnEnable and OnDisable can be called before Start
In the life cycle of a MonoBehaviour component, OnEnable is called when the object becomes enabled and active. OnDisable is called when the script object becomes inactive. Start is called on the first frame the script is enabled, just before Update. See https://docs.unity3d.com/ScriptReference/MonoBehaviour.Awake.html.
In our app, it is possible for OnDisable to get called before Start (when we're initializing the scene from InteractionController). To prevent ShowTrackables(false) from getting called before the scene has started, we use an isStarted flag in this script.
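Given that, the OnEnable and OnDisable handlers in this same ShowTrackablesOnEnable script can be written as in this sketch, with the isStarted guard applied where ShowTrackables(false) would otherwise run too early:

    void OnEnable()
    {
        ShowTrackables(true);
    }

    void OnDisable()
    {
        // OnDisable can fire during scene initialization, before Start
        if (isStarted)
            ShowTrackables(false);
    }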
    void ShowTrackables(bool show)
    {
        if (planeManager != null)
        {
            planeManager.enabled = show;
            planeManager.SetTrackablesActive(show);
        }
        if (cloudManager != null)
        {
            cloudManager.enabled = show;
            cloudManager.SetTrackablesActive(show);
        }
    }
}
Calling SetTrackablesActive(false) hides all the existing trackables, while disabling the trackable manager component itself prevents new trackables from being detected and added. We check for null managers in case one of the components is not present on the AR Session Origin.
Now when you click Build And Run again, the trackables will be shown when PlaceObject-mode is enabled and hidden when it is disabled. Thus, the trackables will be visible when Main-mode is first enabled, but after an object has been placed and the app returns to Main-mode, the trackables will be hidden. This is the behavior we want. The PlaceObject-mode and subsequent Main-mode are shown in the following screen captures of the project running on my phone:
Tip: Disable trackables by modifying the plane detection mode
To disable plane detection, the method I'm using is to disable the manager component. This is the technique given in the example PlaneDetectionController.cs script in the AR Foundation Samples project. Alternatively, the Unity ARCore XR Plugin docs (https://docs.unity3d.com/Packages/com.unity.xr.arcore@latest/manual/index.html) recommend disabling plane detection by setting the ARPlaneManager detection mode to the value PlaneDetectionMode.None.
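That alternative might look like the following sketch, assuming a serialized reference to the ARPlaneManager (the requestedDetectionMode property name assumes AR Foundation 4.x):

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class PlaneDetectionToggle : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;

    // Alternative to disabling the component: change the requested detection mode
    public void SetDetection(bool on)
    {
        planeManager.requestedDetectionMode =
            on ? PlaneDetectionMode.Horizontal : PlaneDetectionMode.None;
    }
}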
In the rest of this chapter, we'll discuss some advanced onboarding and user experience features you may want to include in your projects at a later time.
Some applications are intended to run specifically using AR features and should simply quit (after a friendly notification to the user) if AR is not supported. But other applications may want to behave like an ordinary mobile app, with the extra, optional capability of supporting AR features.
For example, a game I recently created, Epoch Resources (available for Android at https://play.google.com/store/apps/details?id=com.parkerhill.EpochResources&hl=en_US&gl=US, and iOS at https://apps.apple.com/us/app/epoch-resources/id1455848902), is a planetary evolution incremental game with a 3D planet you mine for resources. It offers an optional AR-viewing mode where you can "pop" the planet into your living room and continue playing the game in AR, as shown in the following image.
For an AR-optional application, your app will probably start up as an ordinary non-AR app. Then at some point the user may choose to turn on AR-specific features. That's when you'll activate the AR Session and handle the onboarding UX.
None of the projects in this book implement AR-optional so this is an informational discussion only. To start, you'll tell the XR Plugin that AR is optional by going to Edit | Project Settings | XR Plug-in Management and selecting Requirement | Optional (instead of Required) for each of your platforms (ARCore and ARKit are set separately).
You will need a mechanism for running with or without AR. One approach is to have separate AR and non-AR scenes that are loaded as needed (see https://docs.unity3d.com/ScriptReference/SceneManagement.SceneManager.html).
In the case of the Epoch Resources game, we did not create two separate scenes. Rather, the scene contains two cameras: the normal default camera for non-AR mode and the AR Session Origin (with its child camera) for AR mode. We then flip between the two cameras when the user toggles viewing modes.
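A camera-flipping toggle along those lines might look like the following sketch, which also uses AR Foundation's ARSession.CheckAvailability() to confirm the device supports AR before switching. All the field names here are hypothetical:

using System.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ARViewToggle : MonoBehaviour
{
    [SerializeField] ARSession arSession;
    [SerializeField] GameObject standardCamera;  // the normal, non-AR camera
    [SerializeField] GameObject arSessionOrigin; // AR Session Origin with child AR camera

    public void SetARMode(bool useAR)
    {
        StartCoroutine(Toggle(useAR));
    }

    IEnumerator Toggle(bool useAR)
    {
        if (useAR)
        {
            // Make sure this device actually supports AR before switching views
            yield return ARSession.CheckAvailability();
            if (ARSession.state == ARSessionState.Unsupported)
                yield break;
        }
        arSession.enabled = useAR;
        standardCamera.SetActive(!useAR);
        arSessionOrigin.SetActive(useAR);
    }
}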
Another issue you may run into is determining whether the user's device supports a specific AR feature at runtime.
It is possible that your app requires a specific AR feature that is not supported by all devices. We can ask the Unity AR subsystems what features are supported by getting the subsystem descriptor records.
For example, suppose we are interested in detecting vertical planes. Some older devices may support AR but only horizontal planes. The following code illustrates how to get and check plane detection support:
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARSubsystems;

public class CheckPlaneDetectionSupport : MonoBehaviour
{
    void Start()
    {
        // Collect the descriptor records for all available plane subsystems
        var planeDescriptors = new List<XRPlaneSubsystemDescriptor>();
        SubsystemManager.GetSubsystemDescriptors(planeDescriptors);
        Debug.Log("Plane descriptors count: " + planeDescriptors.Count);
        if (planeDescriptors.Count > 0)
        {
            foreach (var planeDescriptor in planeDescriptors)
            {
                Debug.Log("Support horizontal: " + planeDescriptor.supportsHorizontalPlaneDetection);
                Debug.Log("Support vertical: " + planeDescriptor.supportsVerticalPlaneDetection);
                Debug.Log("Support arbitrary: " + planeDescriptor.supportsArbitraryPlaneDetection);
                Debug.Log("Support classification: " + planeDescriptor.supportsClassification);
            }
        }
    }
}
Documentation for the AR Subsystems API and these descriptor records can be found at https://docs.unity3d.com/Packages/com.unity.xr.arsubsystems@latest/api/UnityEngine.XR.ARSubsystems.html. For example, the XRPlaneSubsystemDescriptor record we used here is documented at https://docs.unity3d.com/Packages/com.unity.xr.arsubsystems@latest/api/UnityEngine.XR.ARSubsystems.XRPlaneSubsystemDescriptor.Cinfo.html.
If you are planning to distribute your application in different countries, you may also be interested in localization.
Localization is the translation of text strings and other assets into local languages. It can also specify date and currency formatting, alternative graphics such as national flags, and so on, to accommodate international markets and users. The Unity Localization package provides a standard set of tools and data structures for localizing your application. More information can be found at https://docs.unity3d.com/Packages/com.unity.localization@latest/manual/QuickStartGuide.html. We do not use localization in any projects in this book, except where it's already supported by imported assets such as the Onboarding UX assets from the AR Foundation Demos project.
The Unity Onboarding UX assets have built-in support for localization of the user prompts and explanations of scanning problems. The ReasonsUX localization table provided with the Onboarding UX project, for example, can be opened by selecting Window | Asset Management | Localization Tables and is shown in the following screenshot. You can see, for example, that the second-row INIT key says, in English, Initializing augmented reality, along with the same string translated into many other languages:
In the code, the Initializing augmented reality message, for example, is retrieved with a call like this:
string localizedInit = reasonsTable.GetEntry("INIT").GetLocalizedString();
When we added the onboarding UX prefab (ARFoundationDemos/UX/Prefabs/ScreenspaceUI) to our scene, I had you disable the Localization Manager component because it gives runtime errors until it is set up. Provided you've installed the Localization package via Package Manager as described earlier in this chapter, we can set it up now for the project using the following steps:
As you can see in this last step, the Localization package uses Unity's new Addressables system for managing, packing, and loading assets from any location locally or over the internet (https://docs.unity3d.com/Packages/com.unity.addressables@latest/manual/index.html).
Note that as I'm writing this, the Onboarding UX LocalizationManager script does not select the language at runtime. The language must be set in the Inspector and compiled into your build.
The AR UI framework we built in this chapter can be used as a template for new scenes. Unity makes it easy to set that up.
In this chapter, we got a chance to use the AR user framework we developed in the previous chapter, Chapter 4, Creating an AR User Framework, in a simple AR Place Object demo project. We created a new scene using the ARFramework scene template, which implements a state machine mechanism for managing user interaction modes. It handles user interaction with a controller-view design pattern, separating the control scripts from the UI graphics.
By default, the scene includes the AR Session and AR Session Origin components required by AR Foundation. The scene is set up with a Canvas UI containing separate panels that will be displayed for each interaction mode. It also includes an Interaction Controller that references separate mode objects, one for each interaction mode.
The modes (and corresponding UI) given with the template are Startup, Scan, Main, and NonAR. An app using this framework first starts in Startup-mode while the AR Session is initializing. Then it goes into Scan-mode, prompting the user to scan the environment for trackable features, until a horizontal plane is detected. Then it goes into Main-mode and displays the main menu.
For this project, we added a main menu that is displayed during Main-mode and that contains buttons for placing various virtual objects in the environment. Pressing a button enables a new PlaceObject-mode that we added to the scene. When PlaceObject-mode is enabled, it displays an instructional animated prompt for the user to tap to place an object in the scene. After an object is added, the app returns to Main-mode, and the trackables are hidden so you can see your virtual objects in the real world without any extra distractions.
In the next chapter, we will go beyond a simple demo project and begin to build a more complete AR application – a photo gallery where you can place framed photos of your favorite pictures on the drab walls in your home or office.