In this chapter, we will begin building a full Augmented Reality (AR) app, an AR art gallery that lets you hang virtual framed photos on your real-world walls.
First, we'll define the goals of the project and discuss the importance of project planning and user experience (UX) design. When the user presses the Add button in the main menu, they'll see a Select Image menu. When they pick one, they'll be prompted to place a framed copy of the image on their real-world wall.
To implement the project, we will start with the AR user framework scene template that we created earlier in this book. We'll build a Select Image UI panel and interaction mode, and define the image data used by the app.
In this chapter, we will cover the following topics:
By the end of the chapter, you'll have a working prototype of the app that implements one scenario: placing pictures on the wall. Then we'll continue to build and improve the project in the next chapter.
To implement the project in this chapter, you need Unity installed on your development computer, connected to a mobile device that supports AR applications (see Chapter 1, Setting Up for AR Development, for instructions). We also assume that you have the ARFramework template and its prerequisites installed; see Chapter 5, Using the AR User Framework. The completed project can be found in this book's GitHub repository, https://github.com/PacktPublishing/Augmented-Reality-with-Unity-AR-Foundation.
An important step before beginning any new project is to do some design and specifications ahead of time. This often entails writing it down in a document. For games, this may be referred to as the Game Design Document (GDD). For applications, it may be a Software Design Document (SDD). Whatever you call it, the purpose is to put into writing a blueprint of the project before development begins. A thorough design document for a Unity AR project might include details such as the following:
Separately, you may also include UI graphic designs that define actual style guides and graphics, for example, color schemes, typography, button graphics, and so on.
For very large projects, these sections could be separate documents. For small projects, the entire thing may only be a few pages long with bullet points. Just keep in mind that the main purpose is to think through your plans before committing to code. That said, don't over-design. Keep in mind one of my favorite quotes from Albert Einstein:
Assume things can and will change as the project progresses. Rapid iteration, frequent feedback from stakeholders, and engaging real users may reaffirm your plans, or it may expose serious shortcomings in the original design and take the project in new, better directions. As I tell my clients and students:
In this book, I'll provide an abbreviated design plan at the beginning of each project that tries to capture the most important points without going into a lot of detail. Let's start with this AR Gallery project, and spec out the project objective, use cases, a UX design, and a set of user stories that define the key features of the project.
We are going to build an AR art gallery project that allows users to place their favorite photos on walls of their home or office as virtual framed images using AR.
Persona: Jack. Jack works from home and doesn't have time to decorate his drab apartment. He wants to spruce up the place by hanging some nice pictures, but his landlord doesn't allow nails in the walls. Jack also wants to be able to change his hung pictures frequently. He already spends many hours a day on his mobile phone, so viewing his walls through the phone suits him fine.
Persona: Jill. Jill has a large collection of favorite photos. She would like to hang them on the walls of her office, but that's not really appropriate for a work environment. She is also a bit obsessive and would like to frequently rearrange the photos and swap the pictures.
The user experience (UX) for this application must include the following requirements and scenarios:
I asked a professional UX designer (and friend of mine) Kirk Membry (https://kirkmembry.com/) to prepare UX wireframe sketches specifically for this book's project. The following image shows a few frames of a full storyboard:
The leftmost frame shows the image gallery menu that appears when the user has chosen to add a new photo into the scene. The middle frame depicts the user choosing a location to hang the photo on a wall. And the rightmost frame shows the user editing an existing picture, including finger gestures to move and resize, and a menu of other edit options on the bottom of the screen.
A storyboard like this can be used to communicate the design intent to graphic designers, coders, and stakeholders alike. It can form the basis of discussion for ironing out kinks in the user workflow and inconsistencies in the user interface, and it can go a long way toward making project management more efficient by preventing unnecessary rework at the point when rework is most costly: after features have been implemented.
With enough of the design drafted, we can now select some of the assets we'll use while building the project.
It is useful to break up the features into a set of "user stories", or bite-sized features that can be implemented incrementally, building up the project a piece at a time. In an agile-managed project, the team may choose a specific set of stories to accomplish in one- or two-week sprints, and these stories can be managed and tracked on a shared project board such as Trello (https://trello.com/) or Jira (https://www.atlassian.com/software/jira). Here is a set of stories for this project:
That seems like a good set of features. We'll try to get through the first half of them in this chapter and complete the rest in the next. Let's get started.
To begin, we'll create a new scene named ARGallery using the ARFramework scene template, with the following steps:
The new AR scene already has the following objects:
We now have a plan for the AR gallery project, including a statement of objectives, use cases, and a UX design with some user stories to implement. With this scene, we're ready to go. Let's find a collection of photos we can work with and add them to the project.
In Unity, images can be imported for use in a variety of purposes. Textures are images that can be used for texturing the materials for rendering the surface of 3D objects. The UI uses images as sprites for button and panel graphics. For our framed photos, we're going to use images as… images.
The most basic approach to using images in your application is to import them into your Assets folder and reference them as Unity textures. A more advanced solution would be to dynamically find and load them at runtime. In this chapter, we'll use the former technique and build the list of images into the application. Let's start by importing the photos you want to use.
Go ahead and choose some images for your gallery from your favorites. Or you can use the images included with the files in this book's GitHub repository: a collection of freely usable nature photos from Unsplash.com (https://unsplash.com/), along with a photo of my own named WinterBarn.jpg.
To import images into your project, use the following steps:
Now we'll add a way to reference your images in the scene.
To add the image data to the scene, we'll create an empty GameObject with an ImagesData script that contains a list of images. First, create a new C# script in your project's Scripts/ folder, name it ImagesData, and write it as follows:
using UnityEngine;
[System.Serializable]
public struct ImageInfo
{
public Texture texture;
public int width;
public int height;
}
public class ImagesData : MonoBehaviour
{
public ImageInfo[] images;
}
The script starts by defining an ImageInfo data structure containing the image Texture and the pixel dimensions of the image. It is public so it can be referenced from other scripts. Then the ImagesData class declares an array of this data in the images variable. The ImageInfo structure requires a [System.Serializable] directive so it will appear in the Unity Inspector.
Now we can add the image data to the scene, using the following steps:
My Images Data looks like this in the Inspector:
Using ScriptableObjects
A different, and probably better, approach to providing the list of images is to use ScriptableObjects instead of GameObjects. ScriptableObjects are data container objects that live in your Assets/ folder rather than in the scene hierarchy. You can learn more about ScriptableObjects at https://docs.unity3d.com/Manual/class-ScriptableObject.html and https://learn.unity.com/tutorial/introduction-to-scriptable-objects.
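If you'd like to experiment with that approach, a minimal sketch might look like the following. The class name and asset menu path here are my own illustrative choices, not something defined by this project:

```csharp
using UnityEngine;

// A hypothetical ScriptableObject version of ImagesData. Instances are
// created as assets via Assets | Create | AR Gallery | Images Data, and
// then referenced from any script that needs the image list.
[CreateAssetMenu(fileName = "ImagesData", menuName = "AR Gallery/Images Data")]
public class ImagesDataAsset : ScriptableObject
{
    public ImageInfo[] images;
}
```

Because the data lives in an asset rather than the scene, the same image list can be shared across multiple scenes without duplicating a GameObject in each one.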
It is a little tedious having to manually enter the pixel dimensions of each image; it would be nice if there were a better way.
Unfortunately, when Unity imports an image as a texture, it resizes it to a power of two to optimize runtime performance and compression, and the original dimension data is not preserved. There are several ways around this, none of which are very pretty:
Given that, we'll stick with the manual approach in this chapter, and you can explore the other options on your own.
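As one illustration of a (still imperfect) workaround, you could add an OnValidate method that copies the texture's current dimensions into the ImageInfo fields whenever you edit the component in the Inspector. This is a hedged sketch, not the approach we use in this chapter:

```csharp
using UnityEngine;

// Hypothetical variant of ImagesData that auto-fills width/height in the
// Editor. OnValidate runs whenever a value changes in the Inspector.
// Caveat: texture.width/height report the *imported* size, which matches
// the source file only if the import settings leave the texture un-resized
// (for example, Non-Power of 2 set to None and a large enough Max Size).
public class ImagesDataAutoSize : MonoBehaviour
{
    public ImageInfo[] images;

    void OnValidate()
    {
        if (images == null) return;
        for (int i = 0; i < images.Length; i++)
        {
            // Only fill in dimensions that haven't been entered manually
            if (images[i].texture != null && images[i].width == 0)
            {
                images[i].width = images[i].texture.width;
                images[i].height = images[i].texture.height;
            }
        }
    }
}
```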
Perhaps you're also wondering, what if I don't want to build the images into my project and want to find and load them at runtime?
Loading assets at runtime from outside your build is an advanced topic and outside the scope of this chapter. There are several different approaches that I will briefly describe, and I will point you to more information:
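As a taste of one of these approaches, here is a hedged sketch of downloading an image texture from the web at runtime using Unity's UnityWebRequestTexture. The URL is a placeholder, and real code would need error handling and caching:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: fetch an image at runtime instead of building it into the app.
// The caller supplies a URL and a callback that receives the texture.
public class RuntimeImageLoader : MonoBehaviour
{
    public IEnumerator LoadTexture(string url, System.Action<Texture2D> onLoaded)
    {
        using (UnityWebRequest request = UnityWebRequestTexture.GetTexture(url))
        {
            yield return request.SendWebRequest();
            // On older Unity versions, check isNetworkError/isHttpError instead
            if (request.result == UnityWebRequest.Result.Success)
            {
                onLoaded(DownloadHandlerTexture.GetContent(request));
            }
            else
            {
                Debug.LogWarning("Image download failed: " + request.error);
            }
        }
    }
}
```

You would start it as a coroutine, for example `StartCoroutine(loader.LoadTexture(someUrl, tex => { /* use tex */ }));`.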
If you want to implement these features, I'll leave that up to you.
We have now imported the photos we plan to use, created a C# ImageInfo data structure including the pixel dimensions of each image, and populated this image data in the scene. Let's create a framed photo prefab containing a default image and a picture frame that we can place on a wall plane.
The user will be placing a framed photo on their walls. So, we need to create a prefab game object that will be instantiated. We want to make it easy to change images and frames, as well as resize them for various orientations (landscape versus portrait) and image aspect ratios. For the default frame, we'll create a simple block from a flattened 3D cube and mount the photo on the face of it. For the default image, you may choose your own or use one that's included with the files for this chapter in the GitHub repository.
First, create an empty prefab named FramedPhoto in your project's Assets/ folder. Follow these steps:
We're now editing the empty prefab.
In the Project window, navigate to your Materials/ folder (create one if needed). Then right-click in the folder and select Create | Material. Rename the new material Black Frame Material.
The current frame properties are shown in the following screenshot:
Next, we'll add a default image to the FramedPhoto rig. I'm using the one named WinterBarn.jpg that is included with the files for this book. Use the following steps to create an image object with a material that uses this photo as its texture image:
The prefab hierarchy now looks like the following screenshot, where the image is currently selected and visible in the Inspector:
Next, let's add a simple script that will help our other code set the image of a FramedPhoto object.
We are going to need to set various properties of each instance of the FramedPhoto prefab. Specifically, the user will be able to choose which image belongs in the frame of each picture. So, we'll provide a SetImage function that receives the image data for the picture.
Create a new C# script named FramedPhoto, open it for editing, and write the script as follows:
using UnityEngine;
public class FramedPhoto : MonoBehaviour
{
[SerializeField] Transform scalerObject;
[SerializeField] GameObject imageObject;
ImageInfo imageInfo;
public void SetImage(ImageInfo image)
{
imageInfo = image;
Renderer renderer = imageObject.GetComponent<Renderer>();
Material material = renderer.material;
material.SetTexture("_BaseMap", imageInfo.texture);
}
}
At the top of the FramedPhoto class, we declare two properties. The imageObject is a reference to the child Image object, for when the script needs to set its image texture. scalerObject is a reference to the AspectScaler for when the script needs to change its aspect ratio (we do this at the end of this chapter).
When a FramedPhoto gets instantiated, we are going to call SetImage to change the Image texture to the one that should be displayed. The code required to do this takes a few steps. If you look at the Image object in the Unity Inspector, you can see it has a Renderer component that references its Material component. Our script gets the Renderer, then gets its Material, and then sets its base texture.
We can now add this script to the prefab as follows:
Our prefab is now almost ready to be used. Of course, the picture we're using isn't really supposed to be square, so let's scale it.
The photo I'm using by default is landscape orientation, but our frame is square, so it looks squished. To fix it, we need to get the original pixel size of the image and calculate its aspect ratio. For example, the WinterBarn.jpg image included on GitHub for this book is 4,032x3,024 (width x height), or 3:4 (height:width landscape ratio). Let's scale it now for the image's aspect ratio (0.75). Follow these steps:
The properly scaled prefab now looks like the following:
When assembling a prefab, thinking through how it will be used can head off gotchas later.
In this section, we created a scalable FramedPhoto prefab made from a cube and an image mounted on the face of the frame block that we can now add to our scene. It is saved in the project Assets folder so copies can be instantiated in the scene when the user places a picture on a wall. The prefab includes a FramedPhoto script that manages some aspects of the behavior of the prefab, including setting its image texture. This script will be expanded later in the chapter. We now have a FramedPhoto prefab with a frame. We're ready to add the user interaction for placing pictures on your walls.
For this project, the app scans the environment for vertical planes. When the user wants to hang a picture on the wall, we'll show a UI panel that instructs the user to tap to place the object, using an animated graphic. Once the user taps the screen, the AddPicture mode instantiates a FramedPhoto prefab, so it appears to hang on the wall, upright and flush against the wall plane. Many of these steps are similar to what we did in Chapter 5, Using the AR User Framework, so I'll offer a little less explanation here. We'll start with a similar script and then enhance it.
Given the AR Session Origin already has an AR Plane Manager component (provided in the default ARFramework template), use the following steps to set up the scene to scan for vertical planes (instead of horizontal ones):
Now let's create the AddPicture UI panel that prompts the user to tap a vertical plane to place a new picture.
The AddPicture UI panel is similar to the Scan UI one included with the scene template, so we can duplicate and modify it as follows:
We added an instructional user prompt for the AddPicture UI. When the user chooses to add a picture to the scene, we'll go into AddPicture mode, and this panel will be displayed. Let's create the AddPicture mode now.
To add a mode to the framework, we create a child GameObject under the Interaction Controller and write a mode script. The mode script will show the mode's UI, handle any user interactions, and then transition to another mode when it is done. For AddPicture mode, it will display the AddPicture UI panel, wait for the user to tap the screen, instantiate the prefab object, and then return to main mode.
The script starts out like the PlaceObjectMode script we wrote in Chapter 5, Using the AR User Framework. Then we'll enhance it to ensure the framed picture object is aligned with the wall plane, facing into the room, and hanging straight.
Let's write the AddPictureMode script, as follows:
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;
public class AddPictureMode : MonoBehaviour
{
[SerializeField] ARRaycastManager raycaster;
[SerializeField] GameObject placedPrefab;
List<ARRaycastHit> hits = new List<ARRaycastHit>();
void OnEnable()
{
UIController.ShowUI("AddPicture");
}
public void OnPlaceObject(InputValue value)
{
Vector2 touchPosition = value.Get<Vector2>();
PlaceObject(touchPosition);
}
void PlaceObject(Vector2 touchPosition)
{
if (raycaster.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))
{
Pose hitPose = hits[0].pose;
Instantiate(placedPrefab, hitPose.position, hitPose.rotation);
InteractionController.EnableMode("Main");
}
}
}
At the top of AddPictureMode, we declare a placedPrefab variable that will reference the FramedPhoto prefab asset we created. We also declare a serialized reference to the ARRaycastManager and a private list of ARRaycastHit results that we'll use in the PlaceObject function.
When the mode is enabled, we show the AddPicture UI panel. Then, when there's an OnPlaceObject user input action event, PlaceObject does a Raycast on the trackable planes. If there's a hit, it instantiates a copy of the FramedPhoto into the scene, and then goes back to main mode.
Let's go with this initial script for now and fix any problems we discover later. The next step is to add the AddPicture mode to the app.
We can now add the AddPicture mode to the scene by creating an AddPicture Mode object under the Interaction Controller, as follows:
We now have an AddPicture mode that will be enabled from Main mode when the user clicks an Add button. Let's create this button now.
When the app is in Main mode, the Main UI panel is displayed. On this panel, we'll have an Add button for the user to press when they want to place a new picture in the scene. I'll use a large plus sign as its icon, with the following steps:
Our button now looks like the following:
The button's On Click property now looks like this:
We have now added AddPicture Mode to our framework. It will be enabled by the Interaction Controller when the Add button is clicked. When enabled, the script shows the AddPicture instructional UI, then waits for a PlaceObject input action event. Then it uses Raycast to determine where in 3D space the user wants to place the object, instantiates the prefab, and then returns to Main mode. Let's try it out.
Save the scene. If you want to try and see how it looks, you can now Build And Run, as follows:
The app will start and prompt you to scan the room. Slowly move your device around to scan the room, concentrating on the general area of the walls where you want to place the photos.
What makes for good plane detection?
AR plane detection uses the device's built-in visible-light camera to scan the 3D environment, so it relies on good visual fidelity in the camera image. The room should be well lit. The surfaces being scanned should have distinctive, random textures to assist the detection software. For example, our AR Gallery project may have difficulty detecting vertical planes if your walls are too smooth. (Newer devices may include other sensors, such as laser-based LIDAR depth sensors, that don't suffer from these limitations.) If your device has trouble detecting vertical wall planes, try strategically adding some sticky notes or other markers on the walls to make the surfaces more distinctive to the software.
When at least one vertical plane is detected, the scan prompt will disappear, and you'll see the Main UI Add button. Tapping the Add button will enable AddPicture Mode, showing the AddPicture UI panel with its tap-to-place instructional graphic. When you tap a tracked plane, the FramedPhoto prefab will be instantiated in the scene. Here's what mine looks like, on the left side:
Oops! The picture is sticking out of the wall perpendicularly, as shown in the preceding screenshot (on the left side). We want it to hang like a picture on the wall like in the right-hand image. Let's update the script to take care of this.
There are a number of improvements we need to implement to complete the AddPictureMode script, including the following:
The AddPictureMode script sets the rotation to hitPose.rotation in the following line of code:
Instantiate(placedPrefab, hitPose.position, hitPose.rotation);
As you can see in the previous screenshot, the "up" direction of a tracked plane is perpendicular to the surface of the plane, so with this code the picture appears to be sticking out of the wall. It makes sense to instantiate a placed object using this default up direction for horizontal planes, where you want your object standing up on the floor or a table. But in this project, we don't want to do that. We want the picture to be facing in the same direction as the wall. And we want it hanging straight up/down.
Instead of using the hit.pose.rotation, we should calculate the rotation using the plane's normal vector (pose.up). Then we call the Quaternion.LookRotation function to create a rotation with the specified forward and upward directions (see https://docs.unity3d.com/ScriptReference/Quaternion.LookRotation.html).
Quaternions
A quaternion is a mathematical construct that can be used to represent rotations in computer graphics. As a Unity developer, you simply need to know that rotations in Transforms use the Quaternion class. See https://docs.unity3d.com/ScriptReference/Quaternion.html. However, if you'd like an explanation of the underlying math, check out the great videos by 3Blue1Brown such as Quaternions and 3D rotation, explained interactively at https://www.youtube.com/watch?v=zjMuIxRvygQ.
Another thing we need is the ability to tell the FramedPhoto which image to display. We'll add a public variable for the imageInfo that will be set by the Image Select menu (developed in the next section of this chapter).
Also, we will add a defaultScale property that scales the picture when it's instantiated. If you recall, we defined our prefab as normalized to 1 unit max size, which would make it 1 meter wide on the wall unless we scale it. We're only scaling the X and Y axes, leaving the Z at 1.0 so that the frame's depth is not scaled too. I'll set the default scale to 0.5, but you can change it later in the Inspector.
Modify the AddPictureMode script as follows:
public ImageInfo imageInfo;
[SerializeField] float defaultScale = 0.5f;
void PlaceObject(Vector2 touchPosition)
{
if (raycaster.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))
{
ARRaycastHit hit = hits[0];
Vector3 position = hit.pose.position;
Vector3 normal = -hit.pose.up;
Quaternion rotation = Quaternion.LookRotation(normal, Vector3.up);
GameObject spawned = Instantiate(placedPrefab, position, rotation);
FramedPhoto picture = spawned.GetComponent<FramedPhoto>();
picture.SetImage(imageInfo);
spawned.transform.localScale = new Vector3(defaultScale, defaultScale, 1.0f);
InteractionController.EnableMode("Main");
}
}
Note that I had to negate the wall plane normal vector (-hit.pose.up), because when we created our prefab, by convention, the picture is facing in the minus-Z direction.
When you place a picture, it should now hang properly upright and be flush against the wall, as shown in the right-hand panel of the screenshot at the top of this section.
Another enhancement might be to hide the tracked planes while in Main mode and show them while in AddPicture mode. This would allow the user to enjoy their image gallery without that distraction. Take a look at how we did that in the Hiding tracked object when not needed topic of Chapter 5, Using the AR User Framework. At that time, we wrote a script, ShowTrackablesOnEnable, that we can use now too. Follow these steps:
That is all we need to implement this feature.
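In case you don't have the Chapter 5 version of ShowTrackablesOnEnable handy, a minimal sketch of such a script might look like the following (assuming a serialized reference to the scene's AR Plane Manager):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch of the ShowTrackablesOnEnable helper: shows the tracked planes
// while the GameObject this component lives on is active, and hides them
// again when it is disabled.
public class ShowTrackablesOnEnable : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;

    void OnEnable()
    {
        planeManager.SetTrackablesActive(true);
    }

    void OnDisable()
    {
        planeManager.SetTrackablesActive(false);
    }
}
```

Attached to the AddPicture Mode object, this shows the plane visualizations only while the user is placing a picture.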
To recap, we configured the scene to detect and track vertical planes, for the walls of your room. Then we created an AddPicture UI panel that prompts the user with an instructional graphic to tap to place. Next, we created an AddPicture mode, including the interaction AddPicture Mode game object and added a new AddPictureMode script. The script instantiates a copy of the FramedPhoto prefab when the user taps on a vertical plane. Then we improved the script by ensuring the picture is oriented flat on the wall and upright. The script also lets us change the image in the frame and its scale. Lastly, we display the trackable planes when in AddPicture mode and hide them when we return to Main mode.
The next step is to give the user a choice to select an image before hanging a new picture on the wall. We can now go ahead and create an image select menu for the user to pick one to use.
The next thing we want to do is create an image select menu containing image buttons for the user to choose a photo before adding it to the scene. When the Add button is pressed, rather than immediately prompting the user to place a picture on the wall, we'll now present a menu of pictures to select from before hanging the image chosen on the wall. I'll call this SelectImage mode. We'll need to write an ImageButtons script that builds the menu using the Images list you've already added to the project (the Image Data game object). And then we'll insert the SelectImage mode before AddPicture mode, so the selected image is the one placed on the wall. Let's define the SelectImage mode first.
When SelectImage mode is enabled by the user, all we need to do is display the SelectImage UI menu panel with buttons for the user to pick which image to use. Clicking a button will notify the mode script by calling the public function, SetSelectedImage, that in turn tells the AddPictureMode which image to use.
Create a new C# script named SelectImageMode and write it as follows:
using UnityEngine;
public class SelectImageMode : MonoBehaviour
{
void OnEnable()
{
UIController.ShowUI("SelectImage");
}
}
Simple. When SelectImageMode is enabled, we display the SelectImage UI panel (containing the buttons menu).
Now we can add it to the Interaction Controller as follows:
Next, we'll add the UI for this mode.
To create the SelectImage UI panel, we'll duplicate the existing Main UI and adapt it. The panel will include a Header title and Cancel button. Follow these steps:
The header of the SelectImage UI panel is shown in the following screenshot:
Next, we'll add a panel to contain the image buttons that will display photos for the user to pick. These will be laid out in a grid. Use the following steps:
We now have an ImageSelect UI panel with a header and a container for the image buttons. Parts of the current hierarchy are shown in the following screenshot:
Lastly, we need to add the panel to the UI Controller as follows:
We now have a UI panel with a container for the image buttons. To make the buttons, we'll create a prefab and then write a script to populate the Image Buttons panel.
We will define an Image Button as a prefab so it can be duplicated for each image that we want to provide to the user in the selection menu. Create the button as follows:
UI Image versus Raw Image
An Image component takes a sprite for its graphic. A Raw Image component takes a texture for its graphic. Sprites are small, highly efficient, preprocessed images used for UI and 2D applications. Textures tend to be larger, with more pixel depth and fidelity, and are used for 3D rendering and photographic images. You can change an imported image between these and other types using the image file's Inspector properties. To use the same photo assets for both the FramedPhoto prefab and the buttons, we're using a Raw Image component on the buttons.
Next, we'll write a script to populate the buttons with actual images we want to use.
The ImageButtons script will be a component on the Image Buttons panel. Its job is to generate the image buttons with pictures of the corresponding images. Create a new C# script named ImageButtons, open it for editing, and write it as follows:
using UnityEngine;
using UnityEngine.UI;
public class ImageButtons : MonoBehaviour
{
[SerializeField] GameObject imageButtonPrefab;
[SerializeField] ImagesData imagesData;
[SerializeField] AddPictureMode addPicture;
void Start()
{
for (int i = transform.childCount - 1; i >= 0; i--)
{
GameObject.Destroy( transform.GetChild(i).gameObject);
}
foreach (ImageInfo image in imagesData.images)
{
GameObject obj = Instantiate(imageButtonPrefab,transform);
RawImage rawimage = obj.GetComponent<RawImage>();
rawimage.texture = image.texture;
Button button = obj.GetComponent<Button>();
button.onClick.AddListener(() => OnClick(image));
}
}
void OnClick(ImageInfo image)
{
addPicture.imageInfo = image;
InteractionController.EnableMode("AddPicture");
}
}
Let's go through this script. At the top of the class, we declare three variables. imageButtonPrefab is a reference to the ButtonPrefab that we will instantiate. imagesData is a reference to the object containing our list of images. And addPicture is a reference to AddPictureMode, so each button can tell it which image has been selected.
The first thing Start() does is clear out any child objects in the buttons panel. For example, we created a number of duplicates of the button to help us develop and visualize the panel, and they'll still be in the scene when it runs unless we remove them first.
Then, Start loops through each of the images, and for each one, creates an Image Button instance and assigns the image to the button's RawImage texture. And it adds a listener to the button's onClick events.
When one of the buttons is clicked, our OnClick function will be called, with that button's image as a parameter. We pass this image data to the AddPictureMode that will be used when AddPictureMode instantiates a new FramedPhoto object.
Add the script to the scene as follows:
The Image Buttons component now looks like the following screenshot:
OK. When the app starts up, the Image Buttons menu will be populated from the Images list in Images Data. Then, when the user presses an image button, it'll tell AddPictureMode which image was selected and then enable AddPicture mode.
There is just one last step before we can try it out. Currently, the main menu's Add button enables AddPicture mode directly. We need to change it to call SelectImage instead, as follows:
If you got all this right, you should be able to Build and Run the scene and run through the complete scenario: pressing the Add button presents a Select Image menu. When you tap an image, the select panel is replaced with a prompt to tap to place the image, with its frame, on a wall. The following screenshots from my phone show the Select Image menu on the left. After selecting an image and placing it on the wall, the result is shown on the right. Then the app returns to the main menu:
To summarize, in this section we added the Select Image menu to the scene by first creating the UI panel and adding it to the UI Controller. Then we created an Image Button prefab and wrote the ImageButtons script that instantiates buttons for each image we want to include in the app. Clicking one of the buttons will pass the selected image data to AddPicture mode. When the user taps to place and a FramedPhoto is instantiated, we set the image to the one the user has selected. We also included a Cancel button in the menu so the user can cancel the add operation.
This is looking good so far. One problem we have is all the pictures are rendered in the same sized landscape frame and thus may look distorted. Let's fix that.
Currently, we're ignoring the actual size of the images and making them all fit into a landscape orientation with a 3:4 aspect ratio. Fortunately, we've included the actual (original) pixel dimensions of the image with our ImageInfo. We can use that now to scale the picture accordingly. We can make this change to the FramedPhoto script that's on the FramedPhoto prefab.
The algorithm for calculating the aspect ratio can be separated as a utility function in the ImagesData script. Open the ImagesData script and add the following code:
public static Vector2 AspectRatio(float width, float height)
{
Vector2 scale = Vector2.one;
if (width == 0 || height == 0)
return scale;
if (width > height)
{
scale.x = 1f;
scale.y = height / width;
}
else
{
scale.x = width / height;
scale.y = 1f;
}
return scale;
}
When the width is larger than height, the image is landscape, so we'll keep the X scale at 1.0 and scale down Y. When the height is larger than the width, it is portrait, so we'll keep the Y scale at 1.0 and scale down X. If they're the same or zero, we return (1,1). The function is declared static so it can be called using the ImagesData class name.
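As a quick sanity check of the math (using the WinterBarn.jpg dimensions mentioned earlier), calling the function with landscape and portrait inputs behaves like this:

```csharp
// 4032x3024 is landscape: width stays at 1, height scales down to 0.75
Vector2 landscape = ImagesData.AspectRatio(4032f, 3024f);
// landscape == (1.0, 0.75)

// Swapping the dimensions gives the portrait case
Vector2 portrait = ImagesData.AspectRatio(3024f, 4032f);
// portrait == (0.75, 1.0)
```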
Open the FramedPhoto script for editing and make the changes highlighted in the following:
public void SetImage(ImageInfo image)
{
imageInfo = image;
Renderer renderer = imageObject.GetComponent<Renderer>();
Material material = renderer.material;
material.SetTexture("_BaseMap", imageInfo.texture);
AdjustScale();
}
public void AdjustScale()
{
Vector2 scale = ImagesData.AspectRatio(imageInfo.width, imageInfo.height);
scalerObject.localScale = new Vector3(scale.x, scale.y, 1f);
}
If you recall, the SetImage function is called by AddPictureMode immediately after a FramedPhoto object is instantiated. After SetImage sets the texture, it now calls AdjustScale to correct the aspect ratio. AdjustScale uses ImagesData.AspectRatio to get the new local scale and updates the scalerObject transform.
You may notice that the frame width is slightly different on the horizontal versus vertical sides when the picture is not square. Fixing this requires an additional adjustment to the Frame object's scale. For example, for a landscape orientation, try setting the child Frame object's Scale X to 1.0 - 0.01/aspectRatio. I'll leave that implementation up to you.
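If you do attempt this, here's a rough sketch of what the compensation might look like in the FramedPhoto script, called after the aspect scale has been computed. The frameObject reference to the child Frame is an assumption, and 0.01 is the border offset suggested above; treat the formula as a starting point rather than a worked-out solution:

```csharp
// Hypothetical helper: even out the visible border width on all sides.
// frameObject is an assumed serialized reference to the child Frame object.
[SerializeField] Transform frameObject;

void AdjustFrameBorder(Vector2 scale)
{
    // One component of scale is always 1; the other is the aspect ratio.
    float ratio = Mathf.Min(scale.x, scale.y);   // e.g., 0.75 for 4:3
    Vector3 frameScale = frameObject.localScale;
    if (scale.x > scale.y)        // landscape: narrow the horizontal border
        frameScale.x = 1.0f - 0.01f / ratio;
    else if (scale.y > scale.x)   // portrait: narrow the vertical border
        frameScale.y = 1.0f - 0.01f / ratio;
    frameObject.localScale = frameScale;
}
```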
When you run the project again and place a picture on your wall, it'll be the correct aspect ratio according to the photo you picked. One improvement you could add is to scale the images on the Select Image Panel buttons so they too are not squished. I'll leave that exercise up to you.
At the beginning of this chapter, I gave you the requirements and a plan for this AR gallery project, including a statement of the project objectives, use cases, UX design, and user stories. You started the implementation using the ARFramework template created in Chapter 4, Creating an AR User Framework, and built upon it to implement new features for placing a framed photo on your walls.
To implement this feature, you created a SelectImage UI panel, a SelectImage Mode interaction mode, and populated a list of images data. After the app starts up and AR is tracking vertical planes, when the user presses the Add button in the main menu, it opens a Select Image menu showing images to pick from. The image buttons grid was generated from your image data using an ImageButton prefab you created. Clicking an image, you're prompted to tap an AR tracked wall, and a new framed photo of that image is placed on the wall, correctly scaled to the image's aspect ratio.
We now have a fine start to an interesting project. There is a lot more that can be done. For example, presently pictures can be placed on top of one another, which would be a mistake. Also, it would be good to be able to move, resize, and remove pictures. We'll add that functionality in the next chapter.