Chapter 7: Gallery: Editing Virtual Objects

In this chapter, we will continue building the project we started previously in Chapter 6, Gallery: Building an AR App, where we created an AR gallery that lets users place virtual framed photos on their real-world walls. In this chapter, we will build out more features related to interacting with and editing virtual objects that have already been added to a scene. Specifically, we'll let users select an object for editing, including moving, resizing, deleting, and replacing the image in the picture frame. In the process, we'll add new input actions, make use of Unity collision detection, and see more C# coding techniques using the Unity API.

In this chapter, we will cover the following topics:

  • Detecting collisions to avoid intersecting objects
  • Building an edit mode and edit menu UI
  • Using a physics raycast to select an object
  • Adding touch input actions to drag to move and pinch to scale
  • C# coding and the Unity API, including collision hooks and vector geometry

By the end of this chapter, you'll have a working AR application with many user interactions implemented.

Technical requirements

To complete the project in this chapter, you will need Unity installed on your development computer, connected to a mobile device that supports augmented reality applications (see Chapter 1, Setting Up for AR Development, for instructions). We will also assume you have created the ARGallery scene that we started in Chapter 6, Gallery: Building an AR App, where you'll also find additional dependencies detailed for you in the Technical requirements section. You can find that scene, as well as the one we will build in this chapter, in this book's GitHub repository at https://github.com/PacktPublishing/Augmented-Reality-with-Unity-AR-Foundation.

Note that in this book's repository, some of the scripts (and classes) for this chapter have been post-fixed with 07, such as AddPictureMode07, to distinguish them from the corresponding scripts that were written for the previous chapter. In your own project, you can leave the un-post-fixed name as is when you edit the existing scripts described in this chapter.

Creating an Edit mode

To get started with this chapter, you should have the ARGallery scene open in Unity where we left off at the end of Chapter 6, Gallery: Building an AR App. To recap, after launching the app, it starts by initializing the AR session and scanning to detect features in your real-world environment. Once the vertical planes (walls) have been detected, the main menu will be presented. Here, the user can tap the Add button, which opens an image select menu where the user can pick a photo to use. Then, the user will be prompted to tap on a trackable vertical plane to place the framed photo on. Once the photo is hanging on their wall, the user is returned to Main-mode.

In this chapter, we'll let users modify existing virtual framed photos that have been added to the scene. The first step is for the user to select an existing object to edit from Main-mode, which then activates EditPicture-mode for the selected object. When an object is selected and being edited, it should be highlighted so that it's apparent which object has been selected.

Using the AR user framework that's been developed for this book, we will start by adding an EditPicture-mode UI to the scene. First, we'll create the edit menu user interface, including multiple buttons for various edit functions, and an Edit-mode controller script for managing it.

Creating an edit menu UI

To create the UI for editing a placed picture, we'll make a new EditPicture UI panel. It's simpler to duplicate the existing Main UI and adapt it. Perform the following steps:

  1. In the Hierarchy window, right-click Main UI (child of UI Canvas) and select Duplicate. Rename the copy EditPicture UI. Delete any child objects, including Add Button, by right-clicking | Delete.
  2. Create a subpanel for the menu by right-clicking EditPicture UI and selecting UI | Panel. Rename it Edit Menu.
  3. Using the Anchor Presets menu, select Bottom-Stretch while holding Shift + Alt so that the anchors, pivot, and position are all set, making it a bottom panel. Then, set its Rect Transform | Height value to 175.
  4. I set my background Image | Color to white with its Alpha at 55, making the panel semi-transparent.
  5. Select Add Component, search for layout, and select Horizontal Layout Group.
  6. On the Horizontal Layout Group component, check the Control Child Size | Width and Height checkboxes. (Leave the others at their default values, Use Child Scale unchecked, and Child Force Expand checked). The Edit Menu panel looks like this in the Inspector window:
    Figure 7.1 – The Edit Menu panel settings

  7. Now, we will add four buttons to the menu. Begin by right-clicking Edit Menu and selecting UI | Button – TextMeshPro. Rename it Replace Image Button.
  8. Select its child text object, set the Text value to Replace Image, and set Font Size to 48.
  9. Right-click the Replace Image button and select Duplicate (or Ctrl + D). Repeat this two more times so that there are four buttons in total.
  10. Rename the buttons and change the text on the buttons so that they read as Replace Frame, Remove Picture, and Done.
  11. We won't implement the Replace Frame feature just yet, so disable that button by unchecking its Interactable checkbox in the Button component. The resulting menu will look as follows:
Figure 7.2 – Edit Menu buttons

Add the panel to the UI Controller, as follows:

  1. To add the panel to the UI Controller, in the Hierarchy window, select the UI Canvas object.
  2. In the Inspector window, at the bottom right of the UI Controller component, click the + button to add an item to the UI Panels dictionary.
  3. Enter EditPicture in the Id field.
  4. Drag the EditPicture UI game object from the Hierarchy window onto the Value slot.

The next step is to create an EditPicture mode object and controller script.

Creating EditPicture mode

As you now know, our framework manages interaction modes by activating game objects under the Interaction Controller. Each mode has a control script that displays the UI for that mode and handles any user interactions until certain conditions are met; then, it transitions to a different mode. In terms of our EditPicture-mode, its control script will have a currentPicture variable that specifies which picture is being edited, a DoneEditing function that returns the user to Main-mode, among other features.

Create a new C# script named EditPictureMode and begin to write it, as follows:

using UnityEngine;

public class EditPictureMode : MonoBehaviour

{

    public FramedPhoto currentPicture;

    void OnEnable()

    {

        UIController.ShowUI("EditPicture");

    }

}

Now, we can add it to our Interaction Controller object, as follows:

  1. In the Hierarchy window, right-click the Interaction Controller game object and select Create Empty. Rename the new object EditPicture Mode.
  2. Drag the EditPictureMode script from the Project window onto the EditPicture Mode object, adding it as a component.
  3. Now, we'll add the mode to the Interaction Controller. In the Hierarchy window, select the Interaction Controller object.
  4. In the Inspector window, at the bottom right of the Interaction Controller component, click the + button to add an item to the Interaction Modes dictionary.
  5. Enter EditPicture in the Id field.
  6. Drag the EditPicture Mode game object from the Hierarchy window onto the Value slot.

With that, we have created an EditPicture UI containing edit buttons that is controlled by UIController. After this, we created an EditPicture Mode game object with an EditPictureMode script that is controlled by InteractionController.

With this set up, the next thing we must do is enhance Main-mode so that it detects when the user taps on an existing FramedPhoto and can start EditPicture-mode for the selected object.

Selecting a picture to edit

While in Main-mode, the user should be able to tap on an existing picture to edit it. Utilizing the Unity Input System, we will add a new SelectObject input action. Then, we'll have the MainMode script listen for that action's messages, find which picture was tapped using a Raycast, and enable Edit-mode on that picture. Let's get started!

Defining a SelectObject input action

We will start by adding a SelectObject action to the AR Input Actions asset by performing the following steps:

  1. In the Project window, locate and double-click the AR Input Actions asset we created previously (it may be in the Assets/Inputs/ folder) to open it for editing (alternatively, use its Edit Asset button).
  2. In the middle Actions section, select + and name it SelectObject.
  3. In the rightmost Properties section, select Action Type | Value and Control Type | Vector 2.
  4. In the middle Actions section, select the <No Binding> child. Then, in the Properties section, select Path | Touchscreen | Primary Touch | Position to bind this action to a primary screen touch.
  5. Press Save Asset (unless Auto-Save is enabled).

The updated AR Input Actions asset is shown in the following screenshot:

Figure 7.3 – AR Input Actions asset with the SelectObject action added

Although we're defining this action with the same touchscreen binding that we used for the PlaceObject action we created earlier (Touchscreen Primary Position), it serves a somewhat different purpose (tap-to-select versus tap-to-place). For example, perhaps, in the future, if you decide to use a double-tap for selecting an item instead of a single tap, you can simply change its input action.

Now, we can add the code for this action.

Replacing the MainMode script

First, because we're deviating from the default MainMode script provided in the ARFramework template, we should make a new, separate script for this project. Perform the following steps to copy and edit the new GalleryMainMode script:

  1. In the Project window's Scripts/ folder, select the MainMode script. Then, from the main menu bar, select Edit | Duplicate.
  2. Rename the new file GalleryMainMode.
  3. You'll see a compile error in the Console window (the namespace already contains a definition for MainMode) because we now have two files defining the MainMode class.

    Open GalleryMainMode for editing and change the class name to GalleryMainMode, as highlighted here:

    using UnityEngine;

    using UnityEngine.InputSystem;

    public class GalleryMainMode : MonoBehaviour

    {

        void OnEnable()

        {

            UIController.ShowUI("Main");

        }

    }

  4. Save the script. Then, back in Unity, in the Hierarchy window, select the Main Mode game object (under Interaction Controller).
  5. Drag the GalleryMainMode script onto the Main Mode object, adding it as a new component.
  6. Remove the previous Main Mode component from the Main Mode object.

Now, we're ready to enhance the behavior of Main-mode.

Selecting an object from Main-mode

When the user taps the screen, the GalleryMainMode script will get the touch position and use a Raycast to determine whether one of the placed FramedPhoto objects was selected. If so, it will enable EditPicture-mode on that picture.

We have seen Raycasts previously in our tap-to-place scripts, including AddPictureMode. In that case, our scripts used the AR Raycast Manager class's version of the function because we were only interested in hitting a tracked AR plane. But in this case, we're interested in selecting a regular GameObject – an instantiated FramedPhoto prefab. For this, we'll use the Physics.Raycast function (https://docs.unity3d.com/ScriptReference/Physics.Raycast.html). As part of the Unity Physics system, it requires the raycast-able object to have a Collider (which FramedPhoto does, and I'll show you soon).

Also, we will be using the AR Camera's ScreenPointToRay function to define the 3D ray that corresponds to the touch position, which we will then Raycast into the scene.

To add this, open the GalleryMainMode script for editing and follow these steps:

  1. We're going to listen for Input System events, so to begin, we need to add a using statement for that namespace. Ensure the following line is at the top of the file:

    using UnityEngine.InputSystem;

  2. We need a reference to tell EditPictureMode which object to edit. Add it to the top of the class, as follows:

    public class GalleryMainMode : MonoBehaviour

    {

        [SerializeField] EditPictureMode editMode;

  3. We're going to be using AR Camera here, so it's good practice to get a reference to that at the start by using the Camera.main shortcut. (This requires the AR Camera to be tagged as MainCamera, which should be done from the scene template.) Add a private variable at the top of the class and initialize it using Start:

        Camera camera;

        void Start()

        {

            camera = Camera.main;

        }

  4. Now for the meat of our task – add the following OnSelectObject and FindObjectToEdit functions:

        public void OnSelectObject(InputValue value)

        {

            Vector2 touchPosition = value.Get<Vector2>();

            FindObjectToEdit(touchPosition);

        }

        void FindObjectToEdit(Vector2 touchPosition)

        {

            Ray ray = camera.ScreenPointToRay(touchPosition);

            RaycastHit hit;

            int layerMask = 1 << LayerMask.NameToLayer("PlacedObjects");

            if (Physics.Raycast(ray, out hit, Mathf.Infinity, layerMask))

            {

                FramedPhoto picture = hit.collider.GetComponentInParent<FramedPhoto>();

                editMode.currentPicture = picture;

                InteractionController.EnableMode("EditPicture");

            }

        }

Let's walk through this code together. The OnSelectObject function is automatically called when the SelectObject Input System action is used (the On prefix is a standard Unity convention for event interfaces). It grabs Vector2 touchPosition from the input value and passes it to our private FindObjectToEdit function. (You don't need to separate this into two functions, but I did for clarity.)

FindObjectToEdit gets the 3D ray corresponding to the touch position by calling camera.ScreenPointToRay. This is passed to Physics.Raycast to find an object in the scene that intersects with the ray. Rather than casting to every possible object, we'll limit it to ones on a layer named PlacedObjects using its layermask. (For this, we need to make sure FramedPhoto is assigned to this layer, which we'll do soon.)

Information – Layer Name, Layer Number, and Layermask

A layermask uses the binary bits of a 32-bit integer to identify up to 32 layers, one bit each. We define the mask by getting the layer number from its name (LayerMask.NameToLayer) and shifting one bit to the left that many times. To manage the layers in your project and see what name has been assigned to each layer number, click the Layers button in the top-right corner of the Editor.
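As a sketch of that bit arithmetic (the "UI" layer here is just an illustrative second layer, not one this project requires):

```csharp
// If "PlacedObjects" is, say, layer 6, NameToLayer returns 6, and
// shifting 1 left six times yields binary 1000000 (decimal 64)
int placedLayer = LayerMask.NameToLayer("PlacedObjects");
int layerMask = 1 << placedLayer;

// Masks can be combined with bitwise OR to raycast against
// several layers at once
int combinedMask = (1 << placedLayer) | (1 << LayerMask.NameToLayer("UI"));
```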

If the raycast gets a hit, we must grab a reference to the FramedPhoto component in the prefab and pass it to the EditPictureMode component. Then, the app will transition to EditPicture-mode.

Save the script. Now, let's fix the housekeeping things on our game objects that I mentioned: set the camera tag to MainCamera, set the FramedPhoto object so that it resides on the PlacedObjects layer, and ensure FramedPhoto has a collider component. In Unity, do the following:

  1. In the Hierarchy window, with the Main Mode game object selected, drag the EditPicture Mode object from the Hierarchy window into the Inspector window and drop it onto the Gallery Main Mode | Edit Mode slot.
  2. In the scene Hierarchy, unfold AR Session Origin and select its child AR Camera. At the top of the Inspector window, verify that Tag is set to MainCamera. If not, set it now.
  3. Next, open the FramedPhoto prefab for editing by double-clicking the asset in the Project window.
  4. With its root FramedPhoto object selected, in the top right of its Inspector window, click the Layer drop-down list and select PlacedObjects.

    If the layer named PlacedObjects doesn't exist, select Add Layer to open the Layers manager window. Add PlacedObjects to one of the empty slots. In the Hierarchy window, click the FramedPhoto Prefab object to get back to its Inspector window. Again, using the Layers drop-down list, select PlacedObjects.

    You will then be prompted with the question Do you want to set layer to PlacedObjects for all child objects as well?. Click Yes, Change Children.

  5. While we're here, let's also verify that the prefab has a collider, as required for Physics.Raycast. If you recall, when we constructed the prefab, we started with an Empty game object for the root and added another Empty child for AspectScaler. Then, we added a 3D Cube for the Frame object. Click this Frame object.
  6. In the Inspector window, you will see that the Frame object already has a Box Collider. Perfect. Note that if you press its Edit Collider button, you can see (and edit) the collider's shape in the Scene window, as shown in the following screenshot, where its edges are outlined and there are little handles to move the faces. But there's no need for us to change it here:
    Figure 7.4 – Editing the Box Collider of the Frame object

  7. Save the prefab and exit the prefab editor to get back to the Scene hierarchy.

If you were to Build and Run the scene now, and then add a picture to a wall, when you tap on that picture, it should hide the main menu and show the edit menu. Now, we need a way to get back from Edit-mode to Main-mode. Let's wire up the Done button.

Wiring the Done edit button

In this section, we will set up the Done button to switch from EditPicture-mode to Main-mode. It simply needs to call EnableMode in InteractionController. Follow these steps:

  1. In the Hierarchy window, select the Done button, which should be located under UI Canvas | EditPicture UI | Edit Menu.
  2. In the Inspector window, click the + button on the bottom right of the Button | OnClick area to add a new event action.
  3. Drag the Interaction Controller object from the Hierarchy window and drop it onto the Object slot of the OnClick action.
  4. In the function select list, choose InteractionController | EnableMode.
  5. Type Main into the mode string parameter slot.
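Equivalently, you could wire the button from a script rather than in the Inspector. The following is a minimal sketch, assuming a hypothetical component with a doneButton field assigned in the Inspector (InteractionController.EnableMode is the same framework call used above):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class DoneButtonWiring : MonoBehaviour
{
    // Hypothetical reference to the Done button, assigned in the Inspector
    [SerializeField] Button doneButton;

    void OnEnable()
    {
        // Same effect as the OnClick entry configured in the steps above
        doneButton.onClick.AddListener(ReturnToMain);
    }

    void OnDisable()
    {
        doneButton.onClick.RemoveListener(ReturnToMain);
    }

    void ReturnToMain()
    {
        InteractionController.EnableMode("Main");
    }
}
```

Wiring in the Inspector keeps the behavior visible to designers; wiring in code keeps it under version control and refactorable. Either works here.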

Now, if you Build and Run the scene where you have a picture instantiated in the scene and tap the picture, you'll switch to Edit-mode and see the edit menu. Tap the Done button to get back to Main-mode.

This is progress. But if there's more than one picture on your wall, it's not obvious which one is currently being edited. We need to highlight the currently selected picture.

Highlighting the selected picture

There are many ways to highlight objects in Unity to indicate that an object has been selected by the user. Often, you'll find that a custom shader will do the trick (there are many on the Asset Store). The decision comes down to what "look" you want. Do you want to change the selected object's color tint, draw a wireframe outline, or create some other effect? Instead of doing this and to keep things easy, I'll just introduce a "highlight" game object in the FramedPhoto prefab as a thin yellow box that extends from the edges of the frame. Let's make that now:

  1. Open the FramedPhoto prefab for editing by double-clicking it in the Project window.
  2. In the Hierarchy window, right-click on the AspectScaler object and select 3D Object | Cube. Rename the cube Highlight.
  3. Set its Transform | Scale setting to (1.05, 1.05, 0.005) so that it is thin and extends past the edges of the frame.
  4. Set its Transform | Position setting to (0, 0, -0.025).
  5. Create a yellow material. In the Project window, right-click in your Materials/ folder (create one if needed) and select Create | Material. Rename it Highlight Material.
  6. Set Highlight Material | Shader | Universal Render Pipeline | Unlit.
  7. Set its Base Map color (using the color swatch) to yellow.
  8. Drag Highlight Material onto the Highlight game object. The Scene view should now look as follows:
Figure 7.5 – FramedPhoto with highlight

We can now control this from the FramedPhoto script. You may want to highlight the picture for different reasons, but for this project, I've decided that when the object is selected and highlighted, that means it is being edited. So, we can toggle the highlight when making the object editable. Open the script in your editor and make the following changes:

  1. Declare a variable for highlightObject:

        [SerializeField] GameObject highlightObject;

        bool isEditing;

  2. Add a function to toggle the highlight:

        public void Highlight(bool show)

        {

            if (highlightObject) // handle no object, or object destroyed at app shutdown

                highlightObject.SetActive(show);

        }

  3. Ensure the picture isn't highlighted at the beginning:

        void Awake()

        {

            Highlight(false);

        }

  4. Add a BeingEdited function. This will be called when the object is being edited. It'll highlight the object and enable other editing behavior later. Likewise, when we stop editing and pass a false value, the object will be un-highlighted:

        public void BeingEdited(bool editing)

        {

            Highlight(editing);

            isEditing = editing;

        }

  5. Save the script. In Unity, select the root FramedPhoto object.
  6. Drag the Highlight object from the Hierarchy window onto the Framed Photo | Highlight Object slot.

This is great! Now, we can update EditPictureMode to tell the picture when it's being edited or not. Open the EditPictureMode script and make the following edits:

  1. Add the BeingEdited call to OnEnable:

       void OnEnable()

        {

            UIController.ShowUI("EditPicture");

            if (currentPicture)

                currentPicture.BeingEdited(true);

        }

  2. Also, add the BeingEdited call to OnDisable for when it's not being edited; that is, when Edit-mode has been exited:

        void OnDisable()

        {

            if (currentPicture)

                currentPicture.BeingEdited(false);

        }

    Notice that although we would never intentionally enter Edit-mode without currentPicture defined, I've added null checks in case the mode is activated or deactivated during the app startup or teardown sequences.

If you play the scene now and add a picture, when you tap the picture via Main-mode, Edit-mode will become enabled, and the picture will be highlighted. When you exit back to Main-mode, the picture will be un-highlighted.

Let's keep going. Suppose you have multiple pictures on your walls. Currently, when you're editing one picture and you want to edit a different one, you must press Done to exit Edit-mode and then select the other picture from Main-mode. To let the user switch directly between objects while editing, we can add that code to the EditPictureMode script.

Selecting an object from Edit mode

When in Edit-mode for one picture, to let the user choose a different picture without exiting Edit-mode, we can use the same SelectObject input action we used in Main-mode. In fact, the code is mostly the same. Open the EditPictureMode script for editing and make the following changes:

  1. We're going to listen for Input System events, so to begin, we need to add a using statement for that namespace. Ensure the following line is at the top of the file:

    using UnityEngine.InputSystem;

  2. Add a private camera variable at the top of the class and initialize it in Start:

        Camera camera;

        void Start()

        {

            camera = Camera.main;

        }

  3. The OnSelectObject action listener will call FindObjectToEdit. Like in GalleryMainMode, it does a Raycast on the PlacedObjects layer. But now, we must check whether it has hit an object other than the current picture. If so, we must stop editing currentPicture and make the new selection current:

        public void OnSelectObject(InputValue value)

        {

            Vector2 touchPosition = value.Get<Vector2>();

            FindObjectToEdit(touchPosition);

        }

        void FindObjectToEdit(Vector2 touchPosition)

        {

            Ray ray = camera.ScreenPointToRay(touchPosition);

            RaycastHit hit;

            int layerMask = 1 << LayerMask.NameToLayer("PlacedObjects");

            if (Physics.Raycast(ray, out hit, 50f, layerMask))

            {

                FramedPhoto picture = hit.collider.GetComponentInParent<FramedPhoto>();

                // Compare FramedPhoto components, not GameObjects – the
                // collider lives on the Frame child, not the prefab root
                if (picture != currentPicture)

                {

                    currentPicture.BeingEdited(false);

                    currentPicture = picture;

                    picture.BeingEdited(true);

                }

            }

        }

To summarize, when you have more than one FramedPhoto instantiated in the scene and you are editing one, if you tap on a different picture, the current one will be un-highlighted and the new one will be highlighted and become the currentPicture object being edited.

Here's another problem: if you've been playing with the project, you may have noticed that you can place pictures on top of one another, or actually, inside one another, as they do not seem to have any physical presence! Oops. Let's fix this.

Avoiding intersecting objects

In Unity, to specify that an object should participate in the Unity Physics system, you must add a Rigidbody component to the GameObject. Adding a Rigidbody gives an object mass, velocity, collision detection, and other physical properties. We can use this to prevent objects from intersecting. In many games and XR apps, Rigidbody is important for applying motion forces to objects to let them bounce when they collide, for example.

In our project, if a picture collides with another picture, it should simply move out of the way so that they're never intersecting. But it should also stay flush with the wall plane. Although a Rigidbody allows you to constrain movement along any of the X, Y, and Z directions, these are the orthogonal world-space axes, not the arbitrarily angled wall plane. In the end, I decided to position the picture manually when a collision is detected rather than using physics forces. My solution is to constrain the position (and rotation) of all the pictures so that physics forces won't move them. Then, I can use the collision as a trigger to manually move the picture out of the way.

Information – Collision Versus Trigger Detection

When two GameObjects with Rigidbody and Collider collide, physics forces will be applied to the objects, sending them in different directions. You can add constraints and other properties to limit this behavior. In that case, you can write functions for OnCollisionEnter, OnCollisionStay, and OnCollisionExit to hook into these events.

However, you can completely disable Unity applying physical forces by marking a Collider as Is Trigger. When it's a trigger, you would instead write functions for OnTriggerEnter, OnTriggerStay, and OnTriggerExit to hook into these events.
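The two sets of hooks have parallel signatures; as a sketch, note that collision hooks receive a Collision (which carries contact points), while trigger hooks receive only the other Collider:

```csharp
using UnityEngine;

public class CollisionHooksExample : MonoBehaviour
{
    // Called when physics forces are applied (Is Trigger unchecked)
    void OnCollisionEnter(Collision collision)
    {
        // Collision carries contact points and relative velocity
        Debug.Log($"Collided with {collision.gameObject.name}");
    }

    // Called when this collider is marked Is Trigger – no forces applied
    void OnTriggerEnter(Collider other)
    {
        Debug.Log($"Entered trigger of {other.gameObject.name}");
    }
}
```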

To add collision detection to the FramedPhoto prefab, follow these steps:

  1. In the Project window, locate and double-click on the FramedPhoto prefab to open it for editing.
  2. Ensure you have selected the root FramedPhoto object in the Hierarchy window.
  3. In the Inspector window, click Add Component, search for rigidbody, and add a Rigidbody to the object.
  4. Unfold the Constraints properties and check all six boxes; that is, Freeze Position: X, Y, Z and Freeze Rotation: X, Y, Z.
  5. Uncheck its Use Gravity checkbox. (This is not necessary since we set constraints, but I like to be clear about this anyway.)
  6. We need a Collider. As we've seen, there is one on the Frame child object. So, select the Frame game object.
  7. In the Inspector window, in its Box Collider component, check the Is Trigger checkbox.
  8. To avoid any problems, disable (or remove) other colliders in the prefab. Namely, remove Mesh Collider from Image and Box Collider from Highlight.
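For reference, the same Rigidbody configuration could be done from code. This is just a sketch of the equivalent API calls (ConfigureFrameBody is a hypothetical component name; in this project, we set these properties in the prefab's Inspector instead):

```csharp
using UnityEngine;

public class ConfigureFrameBody : MonoBehaviour
{
    void Awake()
    {
        Rigidbody body = gameObject.AddComponent<Rigidbody>();
        body.useGravity = false;

        // Freeze all position and rotation axes so physics forces
        // never move the picture; we will reposition it manually
        body.constraints = RigidbodyConstraints.FreezeAll;
    }
}
```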

Now, we can handle the collision trigger and move the picture out of the way when another picture is in the same space. We just want to make sure it moves along the wall. We can make use of the fact that the wall plane's normal vector (the vector that's perpendicular to the surface of the plane) is also the forward direction vector of our picture prefab since we originally placed it there. Also, we only want to consider collisions with objects on the PlacedObjects layer (for example, not the AR tracked plane objects).

My algorithm determines the distance between this picture and the other intersecting picture, in 3D. Then, it finds the direction to move this picture in by projecting the distance vector onto the wall plane and scaling it. The picture will continue moving away from the other frames until it is no longer intersecting.
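Under the hood, projecting onto a plane just subtracts the component of the vector that lies along the plane's normal. A minimal sketch, assuming a unit-length normal (transform.forward is unit length, so this matches what Vector3.ProjectOnPlane computes for us):

```csharp
using UnityEngine;

public static class ProjectionDemo
{
    // Equivalent to Vector3.ProjectOnPlane for a unit-length normal:
    // remove the component of v along n, leaving only motion in the plane
    public static Vector3 ProjectOnPlane(Vector3 v, Vector3 unitNormal)
    {
        return v - Vector3.Dot(v, unitNormal) * unitNormal;
    }
}
```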

Let's write the code for this. Open the FramedPhoto script for editing and follow these steps:

  1. Begin by adding a reference to the collider and layer numbers at the top of the class, as follows:

        [SerializeField] Collider boundingCollider;

        int layer;

  2. Initialize the layer number from its name. It's good to initialize this ahead of time because OnTriggerStay may be called every frame:

        void Awake()

        {

            layer = LayerMask.NameToLayer("PlacedObjects");

            Highlight(false);

        }

  3. We'll use OnTriggerStay here, which is called with each update while the object is colliding with another object, as follows:

        void OnTriggerStay(Collider other)

        {

            const float spacing = 0.1f;

            if (isEditing && other.gameObject.layer == layer)

            {

                Bounds bounds = boundingCollider.bounds;

                if (other.bounds.Intersects(bounds))

                {

                    Vector3 centerDistance = bounds.center - other.bounds.center;

                    Vector3 distOnPlane = Vector3.ProjectOnPlane(centerDistance, transform.forward);

                    Vector3 direction = distOnPlane.normalized;

                    float distanceToMoveThisFrame = bounds.size.x * spacing;

                    // Translate in world space – direction was computed
                    // from world-space bounds
                    transform.Translate(direction * distanceToMoveThisFrame, Space.World);

                }

            }

        }

  4. Save the script. In Unity, drag the Frame object (which has a Box Collider) from the Hierarchy window onto the Framed Photo | Bounding Collider slot. The Framed Photo component now looks as follows:
    Figure 7.6 – Framed Photo component properties, including Bounding Collider

  5. Save the prefab and return to the scene hierarchy.

When you play the scene now, place a picture on a wall, and then place another picture in the same space, the new picture will move away from the first one until they're no longer colliding.

Now that we can have many pictures on our walls, you might want to learn how to remove one from the scene. We'll look at this in the next section.

Deleting a picture

Deleting the picture that is being edited is straightforward. We just need to destroy the currentPicture GameObject and go back to Main-mode. Perform the following steps:

  1. Open the EditPictureMode script and add the following function:

        public void DeletePicture()

        {

            GameObject.Destroy(currentPicture.gameObject);

            InteractionController.EnableMode("Main");

        }

  2. Save the script.
  3. In Unity, in the Hierarchy window, select Remove Button (located under UI Canvas | EditPicture UI | Edit Menu).
  4. In the Inspector, click the + button at the bottom right of the Button | OnClick area.
  5. Drag the EditPicture Mode object from the Hierarchy window onto the OnClick Object slot.
  6. From the function selection, choose EditPictureMode | DeletePicture.

When you play the scene, create a picture, go into EditPicture-mode, and then tap the Remove Picture button, the picture will be deleted from the scene, and you will be back in Main-mode.

We now have two of the Edit menu buttons operating – Remove Picture and Done. Now, let's add the feature that lets you change the picture in an existing FramedPhoto from the Image Select menu panel.

Replacing the picture's image

When you add a picture from the Main menu, the Select Image menu is displayed. From here, you can pick a picture. At this point, you will be prompted to add a FramedPhoto to the scene using the image you selected. We implemented this by adding a separate SelectImage Mode. We now want to make that mode serve two purposes. It's called from Main-mode when you're adding a new, framed photo to the scene, and it's called from EditPicture-mode when you want to replace the image of an existing framed photo that's already in the scene. This requires us to refactor the code.

Currently, when we build the Select Image buttons (in the ImageButtons script), each button's handler configures and enables AddPicture-mode directly. Instead, the response now needs to depend on how SelectImage-mode is being used, so we'll move that code from ImageButtons to SelectImageMode, as follows:

  1. Edit the SelectImageMode script and add a reference to AddPictureMode at the top of the class:

        [SerializeField] AddPictureMode addPicture;

  2. Then, add a public ImageSelected function:

        public void ImageSelected(ImageInfo image)

        {

            addPicture.imageInfo = image;

            InteractionController.EnableMode("AddPicture");

        }

  3. Edit the ImageButtons script and add a reference to SelectImageMode at the top of the class:

        [SerializeField] SelectImageMode selectImage;

  4. Then, replace the OnClick code with a call to ImageSelected, which we just wrote:

        void OnClick(ImageInfo image)

        {

            selectImage.ImageSelected(image);

        }

    This refactoring has not added any new functionality, but it restructures the code so that SelectImageMode decides how the modal menu will be used. Now, let's edit SelectImageMode again and add support for replacing the currentPicture image.

  5. At the top of the SelectImageMode script, add the following declarations:

        [SerializeField] EditPictureMode editPicture;

        public bool isReplacing = false;

  6. Then, update the ImageSelected function, as follows:

        public void ImageSelected(ImageInfo image)
        {
            if (isReplacing)
            {
                editPicture.currentPicture.SetImage(image);
                InteractionController.EnableMode("EditPicture");
            }
            else
            {
                addPicture.imageInfo = image;
                InteractionController.EnableMode("AddPicture");
            }
        }

    So, now, when the menu is being used for replacing, it sends the selected image data to the edit mode's currentPicture object. Otherwise, it behaves as it did previously for AddPicture-mode.

    Now, we need to make sure the isReplacing flag is set to false when adding and set to true when replacing. Again, this requires some refactoring. Currently, the main menu's Add button enables SelectImage-mode directly. Let's replace this with a SelectImageToAdd function in the GalleryMainMode script.

  7. At the top of the GalleryMainMode class, add a reference to SelectImageMode:

        [SerializeField] SelectImageMode selectImage;

  8. Then, add a SelectImageToAdd function, as follows:

        public void SelectImageToAdd()
        {
            selectImage.isReplacing = false;
            InteractionController.EnableMode("SelectImage");
        }

    We just need to remember to update the Add button OnClick action before we're done.

  9. Likewise, now, we can add a SelectImageToReplace function to the EditPictureMode script. Declare selectImage at the top of the class:

        [SerializeField] SelectImageMode selectImage;

    Then, add the function, as follows:

        public void SelectImageToReplace()

        {

            selectImage.isReplacing = true;

            InteractionController.EnableMode("SelectImage");

        }

Save all the scripts. Now, we need to connect it up in Unity, including setting the Add and Replace Image buttons' OnClick actions, and then setting the new SelectImage Mode parameters. Back in Unity, starting with the Add button, follow these steps:

  1. In the Hierarchy window, select the Add button under UI Canvas | Main UI.
  2. From the Hierarchy window, drag the Main Mode game object (under Interaction Controller) onto the Button | OnClick action's Object slot.
  3. In the Function selector, choose Gallery Main Mode | Select Image To Add.
  4. Now, we'll wire up the Replace Image button, which is located under UI Canvas | EditPicture UI | Edit Menu.
  5. In the Inspector window, on its Button component, click the + button at the bottom right of the OnClick actions.
  6. From the Hierarchy window, drag the EditPicture Mode game object onto the OnClick Object slot.
  7. In the Function selector, choose Edit Picture Mode | Select Image To Replace.

    The buttons are now set up. All we have to do now is assign the other references.

  8. In the Hierarchy window, select the Main Mode game object (under Interaction Controller).
  9. Drag the SelectImage Mode object from the Hierarchy window onto the Select Image slot.
  10. In the Hierarchy window select the SelectImage Mode game object (under Interaction Controller).
  11. Drag the AddPicture Mode object from the Hierarchy window onto the Add Picture slot.
  12. Drag the EditPicture Mode object from the Hierarchy window onto the Edit Picture slot.
  13. In the Hierarchy window, select the EditPicture Mode game object (under Interaction Controller).
  14. Drag the SelectImage Mode object from the Hierarchy window onto the Select Image slot.
  15. In the Hierarchy window, select the Image Buttons game object (under UI Canvas | SelectImage UI).
  16. Drag the SelectImage Mode object from the Hierarchy window onto the Select Image slot.

That should do it!

In summary, we have refactored the ImageButtons script to call SelectImageMode.ImageSelected when a button is pressed. SelectImageMode knows whether the user is adding a new picture or replacing the image of an existing one: in the former case, the modal was called from Main-mode; in the latter, it was called from EditPicture-mode with the isReplacing flag set.

Go ahead and Build and Run the scene. Add a picture and then edit it. Then, tap the Replace Image button. The Select Image menu should appear. At this point, you can pick another image, and it will replace the one in the currently selected FramedPhoto. There are more features you could add to this project, including letting the user choose a different frame for their pictures.

Replacing the frame

The last Edit button we must implement is Replace Frame. I will leave this feature for you to build since, at this point, you have the skills to work through this challenge on your own. A basic solution would be to keep the current FramedPhoto prefab and let the user simply pick a different color for the frame. Alternatively, you could define separate frame objects within the FramedPhoto prefab, perhaps using models found on the Asset Store or elsewhere, and enable one or another frame object based on the user's choice.

So far, we've been interacting with the placed object indirectly through the Edit menu buttons. Next, we'll consider directly interacting with the virtual object.

Interacting to edit a picture

We will now implement the ability to move and resize a virtual object we have placed in the AR scene. For this, I've decided to give the object being edited responsibility for its own interactions. That is, when FramedPhoto is being edited, it'll listen for input action events and move or resize itself.

I've also decided to implement these features as separate components, MovePicture and ResizePicture, on the FramedPhoto prefab. These will only be enabled while the FramedPhoto is being edited. First, let's ensure that instantiated FramedPhoto objects receive Input Action messages so that they can respond to user input.

Ensuring FramedPhoto objects receive Input Action messages

We are currently using the Unity Input System, which lets you define and configure user input actions and listen for those action events with a Player Input component. Currently, the scene has one Player Input component, attached to the Interaction Controller game object. The component is configured to broadcast messages down the local hierarchy. Therefore, if we want the FramedPhoto script to receive input action messages (which we now do), we must make sure the FramedPhoto object instances are children of the Interaction Controller. Let's simply parent the FramedPhoto objects under the AddPicture Mode game object, where they're instantiated, as follows:

  1. Edit the AddPictureMode script.
  2. In the PlaceObject function, set the spawned object's parent as the AddPicture Mode game object by adding this line of code:

                GameObject spawned = Instantiate(placedPrefab, position, rotation);
                spawned.transform.SetParent(transform);

The instantiated FramedPhoto prefabs will now be parented by the AddPicture Mode game object.

Information – Scene Organization and Input Action Messages

It's advisable to consider how you will organize your scene object hierarchy and where to place instantiated objects. For example, generally, I'd prefer to keep all our FramedPhotos in a separate root object container. If we did that now, we would have to set Player Input Behavior to invoke events, instead of broadcasting messages down the local hierarchy. And then, scripts responding to those input actions would subscribe (add listeners) to those messages (see https://docs.unity3d.com/Packages/[email protected]/manual/Components.html#notification-behaviors). On the other hand, for tutorial projects such as the ones in this book, I've decided that using the built-in input action messages is cleaner and more straightforward to explain.
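For reference, here is a sketch of what that event-based alternative could look like. Assume the Player Input component's Behavior is set to invoke events rather than broadcast messages; the "MoveObject" action name matches our project, but this listener class and its wiring are illustrative, not part of this chapter's scripts:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Hypothetical listener that subscribes to a PlayerInput action
// directly, instead of relying on broadcast messages down the
// local hierarchy. The object can then live anywhere in the scene.
public class MoveListener : MonoBehaviour
{
    [SerializeField] PlayerInput playerInput;  // assigned in the Inspector

    void OnEnable()
    {
        playerInput.actions["MoveObject"].performed += OnMovePerformed;
    }

    void OnDisable()
    {
        playerInput.actions["MoveObject"].performed -= OnMovePerformed;
    }

    void OnMovePerformed(InputAction.CallbackContext context)
    {
        Vector2 touchPosition = context.ReadValue<Vector2>();
        // ...move the object, as the MovePicture script will do...
    }
}
```

Note how the listener unsubscribes in OnDisable; forgetting to remove the handler is a common source of callbacks firing on destroyed objects.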

Let's start by creating the empty scripts and adding them to the scene. Then, we'll build them out.

Adding the interaction components

To expedite the implementation, we must create the script files first by performing the following steps:

  1. In your Project assets, create a new C# script named MovePicture.
  2. Create another new C# script named ResizePicture.
  3. Open the FramedPhoto prefab for editing.
  4. Drag the MovePicture script and the ResizePicture script from the Project assets folder onto the root FramedPhoto object.
  5. Edit the FramedPhoto script in your code editor. Add the following declarations at the top of the class:

    MovePicture movePicture;

    ResizePicture resizePicture;

  6. Initialize it in Awake and start with the components disabled:

        void Awake()
        {
            movePicture = GetComponent<MovePicture>();
            resizePicture = GetComponent<ResizePicture>();
            movePicture.enabled = false;
            resizePicture.enabled = false;
            layer = LayerMask.NameToLayer("PlacedObjects");
            Highlight(false);
        }

  7. Then, enable these components when editing:

        public void BeingEdited(bool editing)
        {
            Highlight(editing);
            movePicture.enabled = editing;
            resizePicture.enabled = editing;
            isEditing = editing;
        }

We've now prepared ourselves to add the move and resize direct manipulation features to the FramedPhoto object. These will be separate components that are enabled only while the picture is in EditPicture mode.

OK. Let's start by interactively moving the picture along the wall by dragging it with our finger on the screen.

Using our finger to move the picture

We will start by implementing the drag-to-move feature by adding a MoveObject action to the AR Input Actions asset. Like the SelectObject action (and PlaceObject) that we already have, this will be bound to the touchscreen's primary touch position. We'll keep this action separate from the others, for example, should you decide to use a different interaction technique, such as a touch and hold, to start the dragging operation. But for now, we can just copy the other one, as follows:

  1. In the Project window, double-click the AR Input Actions asset (in the Assets/Inputs/ folder) to open it for editing (or use its Edit Asset button).
  2. In the middle section, right-click the SelectObject action and select Duplicate.
  3. Rename the new one MoveObject.
  4. Press Save Asset (unless Auto-Save is enabled).

Now, we can add the code that will listen for this action. Edit the MovePicture script and write the following:

using System.Collections.Generic;

using UnityEngine;

using UnityEngine.EventSystems;

using UnityEngine.InputSystem;

using UnityEngine.XR.ARFoundation;

using UnityEngine.XR.ARSubsystems;

public class MovePicture : MonoBehaviour
{
    ARRaycastManager raycaster;
    List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Start()
    {
        raycaster = FindObjectOfType<ARRaycastManager>();
    }

    public void OnMoveObject(InputValue value)
    {
        if (!enabled) return;
        if (EventSystem.current.IsPointerOverGameObject(0)) return;
        Vector2 touchPosition = value.Get<Vector2>();
        MoveObject(touchPosition);
    }

    void MoveObject(Vector2 touchPosition)
    {
        if (raycaster.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))
        {
            ARRaycastHit hit = hits[0];
            Vector3 position = hit.pose.position;
            Vector3 normal = -hit.pose.up;
            Quaternion rotation = Quaternion.LookRotation(normal, Vector3.up);
            transform.position = position;
            transform.rotation = rotation;
        }
    }
}

This code is very similar to that in the AddPictureMode script. It uses AR Raycast Manager to find a trackable plane and places the object so that it's flush with the plane and upright. The difference is that we're not instantiating a new object; we're just updating the transform of the existing one. And we're doing this continuously, for as long as input action events are being generated (that is, as long as the user is touching the screen).

The OnMoveObject function is skipped if the input action message is received but this component is not enabled. It also checks that the user is not tapping a UI element (an event system object), such as one of our edit menu buttons.

Try it out. If you play the scene, create a picture, and begin editing it, you should be able to drag the picture with your finger and it will move along the wall plane. In fact, since we are raycasting each update, it could find a newer, refined tracked plane as you're dragging, or even move the picture to a different wall.

As we mentioned previously, if you tap the screen on any tracked plane, the current picture will "jump" to that location. If that is not your desired behavior, we can check that the initial touch is on the current picture before we start updating the transform position. The modified code is as follows:

  1. Declare and initialize references to camera and layerMask:

        Camera camera;
        int layerMask;

        void Start()
        {
            raycaster = FindObjectOfType<ARRaycastManager>();
            camera = Camera.main;
            layerMask = 1 << LayerMask.NameToLayer("PlacedObjects");
        }

  2. Add a raycast to MoveObject to ensure the touch is on a picture before you move it:

    void MoveObject(Vector2 touchPosition)
    {
        Ray ray = camera.ScreenPointToRay(touchPosition);
        if (Physics.Raycast(ray, Mathf.Infinity, layerMask))
        {
            if (raycaster.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))
            {
                ARRaycastHit hit = hits[0];
                Vector3 position = hit.pose.position;
                Vector3 normal = -hit.pose.up;
                Quaternion rotation = Quaternion.LookRotation(normal, Vector3.up);
                transform.position = position;
                transform.rotation = rotation;
            }
        }
    }

Currently, we only have the tracked planes visible in AddPicture-mode. I think it would be useful to also show them in Edit-mode. We can use the same ShowTrackablesOnEnable script we wrote in a previous chapter that's already been applied to the AddPicture Mode game object. Add this as follows:

  1. In the Hierarchy window, select the EditPicture Mode game object (under Interaction Controller).
  2. Locate the ShowTrackablesOnEnable script in your Project Scripts/ folder.
  3. Drag the script onto the EditPicture Mode object, adding it as a component.
  4. From the Hierarchy window, drag the AR Session Origin game object onto the Show Trackables On Enable | Session Origin slot.

Now, when EditPicture Mode is enabled, the trackable planes will be displayed. When it's disabled and you go back to Main-mode, they'll be hidden again.

Next, we'll implement the pinch-to-resize feature.

Pinching to resize the picture

To implement pinch-to-resize, we'll also use an Input Action, but this will require a two-finger touch. As such, the action is not simply returning a single value (for example, Vector2). So, this time, we'll use a PassThrough Action Type. Add it by performing the following steps:

  1. Edit the AR Input Actions asset, as we did previously.
  2. In the middle Actions section, select + and name it ResizeObject.
  3. In the rightmost Properties section, select Action Type | Pass Through, and Control Type | Vector 2.
  4. In the middle Actions section, select the <No Binding> child. Then, in the Properties section, select Properties | Path | Touchscreen | Touch #1 | Position to bind this action to a second finger screen touch.
  5. Press Save Asset (unless Auto-Save is enabled).

Now, we can add the code to listen for this action. Edit the ResizePicture script and write it as follows. In the first part of the script, we declare several properties that we can use to tweak the behavior of the script from the Unity Inspector. pinchSpeed lets you adjust the sensitivity of the pinch, while minimumScale and maximumScale limit how small or big the user can make the picture, respectively. Follow these steps:

  1. Begin the script with the following code:

    using UnityEngine;

    using UnityEngine.EventSystems;

    using UnityEngine.InputSystem;

    public class ResizePicture : MonoBehaviour

    {

        [SerializeField] float pinchSpeed = 1f;

        [SerializeField] float minimumScale = 0.1f;

        [SerializeField] float maximumScale = 1.0f;

        float previousDistance = 0f;

        void Start() { }

    Note that I declared an empty Start() function. This is needed because Unity only displays the Enable checkbox in the Inspector for a MonoBehaviour component that has a Start, Update, or similar function (you'll see this for yourself if you remove Start from the code and look at it in the Inspector window: the Enable checkbox will be missing).

  2. The OnResizeObject function is the listener for the input action messages. Because we specified the Action Type as Pass Through, there are no incoming arguments to the function. Instead, we can read the current state of Touchscreen to get the first and second finger touches. Then, we can pass those touch positions to our TouchToResize function:

        public void OnResizeObject()
        {
            if (!enabled) return;
            if (EventSystem.current.IsPointerOverGameObject(0)) return;
            Touchscreen ts = Touchscreen.current;
            if (ts.touches[0].isInProgress && ts.touches[1].isInProgress)
            {
                Vector2 pos = ts.touches[0].position.ReadValue();
                Vector2 pos1 = ts.touches[1].position.ReadValue();
                TouchToResize(pos, pos1);
            }
            else
            {
                previousDistance = 0;
            }
        }

  3. The TouchToResize algorithm is straightforward. It gets the distance between the two finger touches (in screen pixels) and compares it against the previous distance. Dividing the new distance by the previous distance gives us the percentage change, which we can use to directly modify the transform scale. It seems to work pretty well for me:

        void TouchToResize(Vector2 pos, Vector2 pos1)
        {
            float distance = Vector2.Distance(pos, pos1);
            if (previousDistance != 0)
            {
                float scale = transform.localScale.x;
                float scaleFactor = (distance / previousDistance) * pinchSpeed;
                scale *= scaleFactor;
                if (scale < minimumScale)
                    scale = minimumScale;
                if (scale > maximumScale)
                    scale = maximumScale;
                Vector3 localScale = transform.localScale;
                localScale.x = scale;
                localScale.y = scale;
                transform.localScale = localScale;
            }
            previousDistance = distance;
        }
    }

Try it out. If you play the scene, create a picture, and begin editing it, you should be able to use two fingers to resize the picture, pinching your fingers together to make it smaller and spreading them apart to make it larger. Here's a screen capture from my phone, with pictures of various sizes arranged on my dining room wall:

Figure 7.7 – Virtual framed photos arranged on my dining room wall


In this section, we looked at how to directly interact with virtual objects. Using input actions, we added features using the touchscreen to drag and move a picture on a wall, as well as pinching to resize a picture.

We could improve this by adding a Cancel Edit feature that restores the picture to its pre-edited state. One way to do this is to make a temporary copy of the object when it enters edit mode, and then restore or discard it if the user cancels or saves their changes, respectively.
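A minimal sketch of that idea, written as a hypothetical component (it is not part of this project's scripts): record the transform state when editing begins, and restore it if the user cancels.

```csharp
using UnityEngine;

// Hypothetical helper for a Cancel Edit feature: snapshot the
// picture's transform when editing starts, and restore it if the
// user cancels instead of tapping Done.
public class EditSnapshot : MonoBehaviour
{
    Vector3 savedPosition;
    Quaternion savedRotation;
    Vector3 savedScale;

    // Call when entering EditPicture-mode
    public void TakeSnapshot()
    {
        savedPosition = transform.position;
        savedRotation = transform.rotation;
        savedScale = transform.localScale;
    }

    // Call when the user cancels their edits
    public void Restore()
    {
        transform.position = savedPosition;
        transform.rotation = savedRotation;
        transform.localScale = savedScale;
    }
}
```

A fuller version would also snapshot the currently selected image so that a Replace Image action could be undone as well.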

Another feature worth considering is persisting the picture object arrangements between sessions, so that the app saves your pictures when you exit the app and restores them when you restart the app. This is an advanced topic that I will not cover in this book since it is outside of Unity AR Foundation itself. Each provider has its own proprietary solutions. If you're interested, take a look at ARCore Cloud Anchors, which is supported by Unity ARCore Extensions (https://developers.google.com/ar/develop/unity-arf/cloud-anchors/overview) and ARKit ARWorldMap (https://developer.apple.com/documentation/arkit/arworldmap), as exposed in the Unity ARKit XR Plugin (https://docs.unity3d.com/Packages/[email protected]/api/UnityEngine.XR.ARKit.ARWorldMap.html).

This concludes our exploration of, and building, an AR photo gallery project.

Summary

In this chapter, you expanded on the AR gallery project we began in Chapter 6, Gallery: Building an AR App. That project left us with the ability to place framed photos on our walls. In this chapter, you added the ability to edit virtual objects in the scene.

You implemented the ability to select an existing virtual object in Main-mode, where the selected object is highlighted and the app goes into EditPicture-mode. Here, there is an edit menu with buttons for Replace Image, Replace Frame, Remove Picture, and Done (return to Main-mode). The Replace Image feature displayed the same SelectImage modal menu that is used when we're creating (adding) new pictures. We had to refactor the code to make it reusable.

While placing and moving a picture on the wall, you implemented a feature to avoid overlapping or colliding objects, automatically moving the picture away from the other ones. After that, you implemented some direct interactions with the virtual objects by using touch events to drag a picture to a new location. You also implemented pinching to resize pictures on the wall. Finally, you learned how to use more Unity APIs from C#, including collision trigger hooks and vector geometry.

In the next chapter, we'll begin a new project while using a different AR tracking mechanism – tracked images – as we build a project for visualizing 3D data; namely, the planets in our Solar System.
