Spatial understanding

In the previous section, we walked through one approach to placing a hologram: breaking the surface mesh into a set of planes, identifying an appropriate plane, and then scoring each position on that plane before using the position with the best score. As you would expect, understanding and incorporating the physical world into MR applications is a common task. Fortunately, this task has already been solved for us and made available in the HoloToolkit. In this section, we will walk through how to make use of it in our application.

Asobo Studios faced the problem of better understanding the physical environment and finding suitable places for holograms when developing Young Conker, a HoloLens platform game in which a holographic character interacts and reacts naturally to the real world. Their solution has been packaged up and made available in the HoloToolkit under the namespace HoloToolkit.SpatialUnderstanding. It comprises three primary modules:

  • Topology for simple surface and spatial queries
  • Shape for object detection
  • Object placement solver for constraint-based placement

In this section, we will be mainly concerned with the object placement solver, but I encourage you to experiment with the other modules, using the example scene bundled with the HoloToolkit repository for reference.
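
For example, the topology module can be asked directly for flat floor areas of a given minimum size. The following is a rough sketch based on the usage in that example scene; treat the method and parameter names (QueryTopology_FindPositionsOnFloor, TopologyResult) as assumptions to verify against the version of HoloToolkit you are using:

    // Rough sketch only: ask the topology module for floor areas of at least
    // 1m x 1m. Assumes the System (IntPtr) and HoloToolkit.Unity namespaces
    // and a completed scan.
    const int maxResultCount = 32;
    SpatialUnderstandingDllTopology.TopologyResult[] results =
        new SpatialUnderstandingDllTopology.TopologyResult[maxResultCount];

    // Pin the managed array so the native library can write into it.
    IntPtr resultsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.PinObject(results);

    int locationCount = SpatialUnderstandingDllTopology.QueryTopology_FindPositionsOnFloor(
        1.0f,           // minimum length of floor space (meters)
        1.0f,           // minimum width of floor space (meters)
        results.Length, // capacity of the results buffer
        resultsPtr);

    // Each returned TopologyResult describes a candidate location (position, normal, and size).
    Debug.Log(string.Format("Found {0} floor positions", locationCount));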

The flow will be similar to the one implemented in the previous section: we will scan, process, and search for suitable places based on some specified criteria. Instead of planes, we will be inspecting the custom mesh generated by the SpatialUnderstanding library and, once we have sufficiently scanned the environment, we will prompt the user to perform an air tap gesture before using the object placement solver from SpatialUnderstanding, a utility that searches for places given a set of constraints.

Let's start by disabling and commenting out the dependent parts from the previous section that are not required here. In the Unity Editor, select the SpatialProcessing GameObject from the Hierarchy panel and disable it by clicking on the checkbox at the top left of the Inspector panel. Now, to disable the associated code, double-click on the SceneController script from within the Project panel to open it up in Visual Studio and make the following changes.

Note the ScanningStateRoutine method:

   IEnumerator ScanningStateRoutine() 
    { 
        StatusText.Instance.Text = "Look around to scan your play area"; 
 
        while (!PlaneFinder.Instance.Finished) yield return null; 
 
        StatusText.Instance.Text = ""; 
 
        CurrentState = State.Placing; 
    } 

Update it to the following:

    IEnumerator ScanningStateRoutine()
    {
        PlaySpaceScanner.Instance.Scan();

        while (PlaySpaceScanner.Instance.CurrentState != PlaySpaceScanner.State.Finished)
        {
            if (PlaySpaceScanner.Instance.CurrentState == PlaySpaceScanner.State.Scanning)
            {
                StatusText.Instance.Text = "Look around to scan your play area";
            }
            else if (PlaySpaceScanner.Instance.CurrentState == PlaySpaceScanner.State.ReadyToFinish)
            {
                StatusText.Instance.Text = "Air tap when ready";
            }
            else if (PlaySpaceScanner.Instance.CurrentState == PlaySpaceScanner.State.Finalizing)
            {
                StatusText.Instance.Text = "Finalizing scan (please wait)";
            }

            yield return null;
        }

        StatusText.Instance.Text = "";

        CurrentState = State.Placing;
    }

This is functionally similar to the original but reports more states and delegates the task of scanning the environment to PlaySpaceScanner. Now, replace the PlacingStateRoutine method with the following:

    IEnumerator PlacingStateRoutine()
    {
        PlaySpacePlaceFinder.Instance.FindPlaces();

        while (!PlaySpacePlaceFinder.Instance.Finished) yield return null;

        PlaySpacePlaceFinder.Instance.PlaceGameObjects();

        CurrentState = State.Playing;
    }

Like the previous method, we are retaining the functional flow but delegating the task to a new class.

Let's now address the compile errors by creating and stubbing out the classes PlaySpaceScanner and PlaySpacePlaceFinder. In the Editor, click on the Create button within the Project panel and select C# Script; enter the name PlaySpaceScanner. Again, click on the Create button, select C# Script, and enter the name PlaySpacePlaceFinder. Double-click on PlaySpaceScanner to open it up in Visual Studio. For now, we will just stub out each class method to remove compile-time errors and will return shortly to fill in the details. With that in mind, add the following code to PlaySpaceScanner, remembering to add a using directive for the namespace HoloToolkit.Unity:

    public class PlaySpaceScanner : Singleton<PlaySpaceScanner>
    {
        public enum State
        {
            Undefined,
            Scanning,
            ReadyToFinish,
            Finalizing,
            Finished
        }

        public State CurrentState
        {
            get;
            private set;
        }

        public void Scan()
        {

        }
    }

Next, open the PlaySpacePlaceFinder script and add the following code, again adding a using directive for the namespace HoloToolkit.Unity:

    public class PlaySpacePlaceFinder : Singleton<PlaySpacePlaceFinder>
    {
        public bool Finished { get; private set; }

        public void FindPlaces()
        {

        }

        public void PlaceGameObjects()
        {

        }
    }

With our classes now stubbed out, let's return to the Editor to prepare our scene to use SpatialUnderstanding.

As we did in the previous section, we will create an empty GameObject to host all the components necessary for SpatialUnderstanding. Let's create that now. Click on the Create button from the Hierarchy panel, select Create Empty, and enter the name SpatialUnderstanding. With our newly created GameObject selected, click on the Add Component button from within the Inspector panel, and enter and select Spatial Understanding. This will also include its dependencies SpatialUnderstandingCustomMesh and SpatialUnderstandingSourceMesh. Let's now briefly inspect each component, starting with SpatialUnderstandingCustomMesh.

SpatialUnderstandingCustomMesh is responsible for pulling through and reconstructing the retopologized surface mesh from the underlying library.

The scanning process is an important step in the flow, as the higher-level functions in the SpatialUnderstanding library depend on surfaces being flat and walls meeting at right angles. The SpatialUnderstanding library stores the space as a grid of 8 cm voxel cubes, and the mesh is extracted approximately once a second by computing an isosurface from the voxel volume.

Setting the DrawProcessedMesh property to true will render the mesh using the material assigned to Mesh Material. Speaking of which, click on the object selector (circle button) next to Mesh Material to open the material dialog, then search for and select SpatialUnderstandingSurface.
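
If you prefer to configure these settings from code rather than the Inspector, a minimal sketch might look like the following. ProcessedMeshConfig is a hypothetical helper (not part of our app), and ImportMeshPeriod, the field controlling how often the mesh is imported from the native library, is an assumed name to check against your HoloToolkit version:

    // Minimal sketch, assuming a scene that already contains SpatialUnderstandingCustomMesh.
    using HoloToolkit.Unity;
    using UnityEngine;

    public class ProcessedMeshConfig : MonoBehaviour
    {
        public Material scanningMaterial; // e.g. SpatialUnderstandingSurface

        void Start()
        {
            var customMesh = FindObjectOfType<SpatialUnderstandingCustomMesh>();
            customMesh.DrawProcessedMesh = true;        // render the retopologized mesh
            customMesh.MeshMaterial = scanningMaterial; // material applied to the surface mesh
            customMesh.ImportMeshPeriod = 1.0f;         // seconds between mesh imports (assumed field name)
        }
    }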

The next component, SpatialUnderstandingSourceMesh, is tasked with passing the observed surface mesh from SpatialMappingObserver to the SpatialUnderstanding library, which in turn processes the mesh.

And finally, the SpatialUnderstanding script is responsible for managing the scanning process. While we are here, uncheck Auto Begin Scanning, as we will manage the starting and stopping of the scanning based on the application state.

Let's now add our components PlaySpaceScanner and PlaySpacePlaceFinder to the SpatialUnderstanding GameObject. With the SpatialUnderstanding GameObject selected in the Hierarchy panel, click on the Add Component button from within the Inspector panel, and enter and select PlaySpaceScanner, and again for PlaySpacePlaceFinder. We will now return to the code to flesh out our PlaySpaceScanner script. Double-click on the PlaySpaceScanner script in the Project panel to open it in Visual Studio.

As the name suggests, our PlaySpaceScanner class will be responsible for scanning the environment; it is triggered by the SceneController when the application enters the Scanning state. It will continue scanning until we have scanned enough, where enough is defined as having scanned surfaces that meet the specified minimum area thresholds. Once the criteria have been met, we will wait for the user to perform an air tap gesture before finalizing the scan and setting CurrentState to Finished, signaling to the SceneController to proceed to the next state. Add the following variables and properties to the PlaySpaceScanner class:

 
    public float minAreaForComplete = 50f;

    public float minHorizontalAreaForComplete = 10f;

    public float minVerticalAreaForComplete = 10f;

    private SpatialMappingObserver _mappingObserver;

    public SpatialMappingObserver MappingObserver
    {
        get
        {
            if (_mappingObserver == null)
            {
                _mappingObserver = FindObjectOfType<SpatialMappingObserver>();
            }
            return _mappingObserver;
        }
    }

    private SpatialUnderstandingCustomMesh _spatialUnderstandingCustomMesh;

    public SpatialUnderstandingCustomMesh SpatialUnderstandingCustomMesh
    {
        get
        {
            if (_spatialUnderstandingCustomMesh == null)
            {
                _spatialUnderstandingCustomMesh = FindObjectOfType<SpatialUnderstandingCustomMesh>();
            }
            return _spatialUnderstandingCustomMesh;
        }
    }

Our criteria, which determine whether we have sufficiently scanned the environment, include a minimum total area (minAreaForComplete) and minimum horizontal and vertical areas (minHorizontalAreaForComplete and minVerticalAreaForComplete). MappingObserver and SpatialUnderstandingCustomMesh are convenience getters that we will use within this class to get references to the respective instances. Next, add the following code for the Start method:

    void Start() 
    { 
        MappingObserver.SetObserverOrigin(Camera.main.transform.position); 
        SpatialUnderstanding.Instance.ScanStateChanged += Instance_ScanStateChanged; 
    } 

Here, we set the observer's origin to the camera's current position. This is used when setting the observer's bounding volume before each surface observation update, something we discussed in the previous chapter. We then register for state changes on the SpatialUnderstanding class, which we use to progress the experience forward. Let's define this delegate now. Add the following delegate to your PlaySpaceScanner class:

    void Instance_ScanStateChanged()
    {
        switch (SpatialUnderstanding.Instance.ScanState)
        {
            case SpatialUnderstanding.ScanStates.Scanning:
                SpatialUnderstandingCustomMesh.DrawProcessedMesh = true;
                CurrentState = State.Scanning;
                break;
            case SpatialUnderstanding.ScanStates.Finishing:
                CurrentState = State.Finalizing;
                break;
            case SpatialUnderstanding.ScanStates.Done:
                SpatialUnderstandingCustomMesh.DrawProcessedMesh = false;
                CurrentState = State.Finished;
                break;
        }
    }

Most of what we are doing here is simply mapping the current state of SpatialUnderstanding to an equivalent state defined in the PlaySpaceScanner class with the addition of flagging whether to show or hide the processed mesh.

Before we move on, there is a slight issue with our current approach of showing and hiding the processed surface mesh. To make a hologram feel like part of the environment, it should behave like a real object. For example, it should be occluded by real-world objects that obstruct the user's view of it. Our current approach doesn't achieve this; by hiding the surface mesh, we essentially remove the virtual representation of the physical world. We should instead update how the mesh is rendered so that it can occlude virtual objects that appear behind it (relative to the user). Let's fix this now. Head back to the top of the PlaySpaceScanner class and add the following code:

    public Material defaultSurfaceMaterial;

    public Material scanningSurfaceMaterial;

    private Material _surfaceMaterial;

    public Material SurfaceMaterial
    {
        get
        {
            return _surfaceMaterial;
        }
        set
        {
            _surfaceMaterial = value;

            SpatialUnderstandingCustomMesh.MeshMaterial = _surfaceMaterial;

            foreach (var surfaceObject in SpatialUnderstandingCustomMesh.SurfaceObjects)
            {
                surfaceObject.Renderer.material = _surfaceMaterial;
            }
        }
    }

Here, we declare two variables to hold references for the materials to be assigned to the processed surface mesh, one used while scanning and the other used as the default. We also define a property that holds the current material and propagates changes to all the surface renderers when set. Next, return to the Start method and make the following changes:

    void Start()
    {
        MappingObserver.SetObserverOrigin(Camera.main.transform.position);
        SpatialUnderstanding.Instance.ScanStateChanged += Instance_ScanStateChanged;

        SpatialUnderstandingCustomMesh.DrawProcessedMesh = true;
        SurfaceMaterial = defaultSurfaceMaterial;
    }

Within the Start method, we explicitly set the DrawProcessedMesh of SpatialUnderstandingCustomMesh to true and set the default surface material. Now, update the Instance_ScanStateChanged method with the following changes:

    void Instance_ScanStateChanged() 
    { 
        switch (SpatialUnderstanding.Instance.ScanState) 
        { 
            case SpatialUnderstanding.ScanStates.Scanning: 
                SurfaceMaterial = scanningSurfaceMaterial; 
                CurrentState = State.Scanning; 
                break; 
            case SpatialUnderstanding.ScanStates.Finishing: 
                CurrentState = State.Finalizing; 
                break; 
            case SpatialUnderstanding.ScanStates.Done: 
                SurfaceMaterial = defaultSurfaceMaterial; 
                CurrentState = State.Finished; 
                break; 
        } 
    } 

Instead of disabling the processed surface mesh renderer, we update its material. Before continuing with the code, let's assign the appropriate materials to the defaultSurfaceMaterial and scanningSurfaceMaterial variables.

Back in the Editor, select the SpatialUnderstanding GameObject from the Hierarchy panel and, within the Play Space Scanner component in the Inspector panel, click on the Default Surface Material object selector (the right-most circle button), then search for and select the Occlusion material. Once selected, click on the Scanning Surface Material object selector and search for and select the SpatialUnderstandingSurface material.

With that now done, return to Visual Studio and let's continue fleshing out our PlaySpaceScanner class. The next method we will implement is the Scan method. Find the method you declared before and make the following amendments:

    public void Scan()
    {
        CurrentState = State.Scanning;

        if (!SpatialMappingManager.Instance.IsObserverRunning())
        {
            SpatialMappingManager.Instance.StartObserver();
        }

        if (SpatialUnderstanding.Instance.AllowSpatialUnderstanding &&
            SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.None)
        {
            SpatialUnderstanding.Instance.RequestBeginScanning();
        }
    }

When the SceneController calls the Scan method, we start the SurfaceObserver and SpatialUnderstanding processes and then constantly poll the SpatialUnderstanding instance to check whether we have sufficiently scanned the environment. We do this within the Update method. Let's add this now:

    void Update()
    {
        if (CurrentState == State.Scanning)
        {
            IntPtr statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();

            if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) > 0)
            {
                SpatialUnderstandingDll.Imports.PlayspaceStats stats =
                    SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStats();

                if ((stats.TotalSurfaceArea > minAreaForComplete) ||
                    (stats.HorizSurfaceArea > minHorizontalAreaForComplete) ||
                    (stats.WallSurfaceArea > minVerticalAreaForComplete))
                {
                    CurrentState = State.ReadyToFinish;
                }
            }
        }
    }

If currently scanning, we query the SpatialUnderstanding library for the current playspace statistics and compare them with the specified criteria to determine whether we have scanned enough. The first call, SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr(), returns a pointer to a reusable PlayspaceStats instance (creating and pinning it if it doesn't already exist), which QueryPlayspaceStats asks the native library to populate. The populated PlayspaceStats object is then returned by GetStaticPlayspaceStats and contains metadata about our environment, such as the horizontal and vertical surface areas, the number of floors, platforms, and more.
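
If you are curious about what else is reported, you can temporarily log a few of the fields from inside the if block once QueryPlayspaceStats has succeeded. TotalSurfaceArea, HorizSurfaceArea, and WallSurfaceArea are used above; NumFloor, NumCeiling, and NumPlatform are assumed field names to verify against your HoloToolkit version:

    // Hedged sketch: dump a few playspace statistics while scanning.
    var currentStats = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStats();
    Debug.Log(string.Format(
        "Scanned {0:0.0} m2 total ({1:0.0} m2 horizontal, {2:0.0} m2 wall); floors: {3}, ceilings: {4}, platforms: {5}",
        currentStats.TotalSurfaceArea,
        currentStats.HorizSurfaceArea,
        currentStats.WallSurfaceArea,
        currentStats.NumFloor,       // assumed field name
        currentStats.NumCeiling,     // assumed field name
        currentStats.NumPlatform));  // assumed field name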

The core of the SpatialUnderstanding library is written in C++ (unmanaged); memory management, data types, and so on are handled differently between managed and unmanaged code. One way of communicating between the two is using pointers (IntPtr). Refer to Microsoft's page An Overview of Managed/Unmanaged Code Interoperability to learn more about managed/unmanaged interoperability.
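
To make the idea concrete, here is a minimal, generic sketch of pinning a managed array so its address can be handed to native code as an IntPtr; HoloToolkit's UnderstandingDLL.PinObject helper presumably wraps a similar pattern for us:

    // Generic illustration only (uses System and System.Runtime.InteropServices):
    // pin a managed array so the garbage collector cannot move it, then pass its
    // address to unmanaged code as an IntPtr.
    float[] buffer = new float[16];
    GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
    try
    {
        IntPtr bufferPtr = handle.AddrOfPinnedObject();
        // ... call into the native DLL with bufferPtr ...
    }
    finally
    {
        handle.Free(); // always release the pin when done
    }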

If our criteria are satisfied, then we will update the state to ReadyToFinish. In this state, we push the responsibility to the user to signal when to finish scanning. For this signal, we are expecting the air tap gesture but, currently, we haven't registered for these events. Let's fix this now. Add the following code to the Instance_ScanStateChanged method:

    void Instance_ScanStateChanged() 
    { 
        switch (SpatialUnderstanding.Instance.ScanState) 
        { 
            case SpatialUnderstanding.ScanStates.Scanning: 
                InteractionManager.SourcePressed += OnAirTap; 
                SurfaceMaterial = scanningSurfaceMaterial; 
                CurrentState = State.Scanning; 
                break; 
            case SpatialUnderstanding.ScanStates.Finishing: 
                InteractionManager.SourcePressed -= OnAirTap; 
                CurrentState = State.Finalizing; 
                break; 
            case SpatialUnderstanding.ScanStates.Done: 
                SurfaceMaterial = defaultSurfaceMaterial; 
                CurrentState = State.Finished; 
                break; 
        } 
    } 

When scanning, we register for the air tap gesture and unregister when we exit. When we detect the air tap, we will request the SpatialUnderstanding library to finish scanning, which will update its ScanState to Finishing and then, finally, Done. Add the gesture delegate to PlaySpaceScanner, as shown:

    void OnAirTap(InteractionSourceState state)
    {
        if (CurrentState != State.ReadyToFinish ||
            SpatialUnderstanding.Instance.ScanStatsReportStillWorking)
        {
            return;
        }

        SpatialUnderstanding.Instance.RequestFinishScan();
    }

That concludes our PlaySpaceScanner class. When the scene transitions into the scanning state, it will begin scanning the environment. Once the environment has been sufficiently scanned, the user will be able to air tap to finalize the scan. Once finished, the scene will transition into the placing state, which is the topic of the next section.

The following image shows the result of a scan of a hotel room performed by the SpatialUnderstanding library:

To find locations to place objects, we will be using the object placement solver of the SpatialUnderstanding library, passing in a query made up of a set of rules and constraints. The object placement solver then searches the playspace to find the most suitable place. It's worth noting that places are persisted until explicitly removed; this allows for multi-object placement while avoiding holograms overlapping each other. The query consists of the following parts (a short example of assembling one follows the list):

  • PlacementType defines the type of surface to place the hologram on. Some examples include Place_OnFloor, Place_OnWall, Place_UnderFurnitureEdge, and many others.
  • One or more ObjectPlacementRule define hard rules, meaning that the place cannot violate them. Some examples of placement rules include Rule_AwayFromPosition, Rule_AwayFromWalls, and Rule_AwayFromOtherObject.
  • One or more ObjectPlacementConstraint are like the object placement rules but are considered soft, meaning that the place is not required to satisfy them, but doing so is likely to increase its chance of being selected. Some examples include Constraint_NearPoint, Constraint_NearCenter, Constraint_AwayFromWall, and many others.
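
As a quick illustration (separate from the FindPlaces method we will write shortly), a query asking for a floor position at least half a meter away from the user, with a soft preference for somewhere near the center of the room, could be assembled roughly as follows. Create_OnFloor and Create_AwayFromPosition also appear later in this section; Create_NearCenter is an assumed builder name to verify against your HoloToolkit version:

    // Rough sketch of assembling an object placement query
    // (assumes System.Collections.Generic and HoloToolkit.Unity).
    var halfDims = new Vector3(0.25f, 0.25f, 0.25f); // half-dimensions of the hologram's bounds

    var definition = SpatialUnderstandingDllObjectPlacement.ObjectPlacementDefinition
        .Create_OnFloor(halfDims);

    var rules = new List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementRule>()
    {
        // Hard rule: at least 0.5m away from the user's current position.
        SpatialUnderstandingDllObjectPlacement.ObjectPlacementRule
            .Create_AwayFromPosition(Camera.main.transform.position, 0.5f)
    };

    var constraints = new List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementConstraint>()
    {
        // Soft constraint (assumed builder name): prefer positions near the playspace center.
        SpatialUnderstandingDllObjectPlacement.ObjectPlacementConstraint.Create_NearCenter()
    };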

In the previous section, we decoupled the criteria from the class. In this section, we will take a simpler approach, helping keep the focus on working with the SpatialUnderstanding library rather than the supplementary code. Jump back into Visual Studio and open the PlaySpacePlaceFinder script; we will start by adding the class variables and properties:

    public enum States 
    { 
        None,  
        Processing,  
        Finished  
    } 
 
    public GameObject prefab; 
 
    public float distanceFromUser = 0.6f; 
 
    public bool Finished { get; private set; } 
 
    public States State { get; private set; } 
 
    bool solverInitialized = false; 
 
    List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementResult> queryPlacementResults =
        new List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementResult>();

Searching can be computationally expensive and is therefore offloaded to a separate thread. To help manage the process, we store the current state in the State property.

Before being used, the object placement solver needs to be initialized. We use the variable solverInitialized to flag whether this step has been performed or not, and initialize before any queries are performed. prefab is the prefab of the hologram we want to place. The variable distanceFromUser will be a value assigned to a rule added to our query. Our last variable is a list used to store the results of the query. ObjectPlacementResult stores the name of the query, position, orientation, and bounds that we will use to place and orientate our hologram.

Initialization is simply a matter of calling SpatialUnderstandingDllObjectPlacement.Solver_Init(). We wrap this in a method that handles the logic of only calling it once and deals with cases where initialization fails; add the following method to the PlaySpacePlaceFinder class:

    bool InitializeSolver()
    {
        if (solverInitialized || !SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
        {
            return solverInitialized;
        }

        if (SpatialUnderstandingDllObjectPlacement.Solver_Init() > 0)
        {
            solverInitialized = true;
        }

        return solverInitialized;
    }

As mentioned earlier, the places found (the results of previous queries) are persisted, allowing subsequent queries to take the currently occupied places into account. Because we are only placing a single hologram, we will clear any reserved places and the previous results each time we make a query. Add the following method to handle this:

    void Reset()
    {
        queryPlacementResults.Clear();

        if (SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
        {
            SpatialUnderstandingDllObjectPlacement.Solver_RemoveAllObjects();
        }
    }

Here, we are simply removing all the results and clearing all persisted places from the object placement solver. Let's now revisit the FindPlaces method and fill in the details; add the following code to the method:

public void FindPlaces()
{
    if (!InitializeSolver())
    {
        return;
    }

    Reset();

    Bounds bounds = GetBoundsForObject(prefab);
    Vector3 halfDims = new Vector3(bounds.size.x * 0.5f, bounds.size.y * 0.5f, bounds.size.z * 0.5f);

    SpatialUnderstandingDllObjectPlacement.ObjectPlacementDefinition placementDefinition =
        SpatialUnderstandingDllObjectPlacement.ObjectPlacementDefinition.Create_OnFloor(halfDims);

    List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementRule> placementRules =
        new List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementRule>()
    {
        SpatialUnderstandingDllObjectPlacement.ObjectPlacementRule.Create_AwayFromPosition(
            Camera.main.transform.position, distanceFromUser),
    };

    AsyncRunQuery(placementDefinition, placementRules);
}

In this method, once initialized and reset, we obtain the bounds of the prefab and, because the object placement solver expects half dimensions, halve each axis. We next construct our query parameters by creating a placement definition and a list of placement rules. To make life easier, the ObjectPlacementDefinition, ObjectPlacementRule, and ObjectPlacementConstraint structures expose a set of static builder methods that simplify the process of building a query, which we make use of. We then pass these through to the AsyncRunQuery method. The process can be computationally expensive, so the task should ideally be offloaded from the main thread, which is the purpose of this method. Let's add this now:

bool AsyncRunQuery(SpatialUnderstandingDllObjectPlacement.ObjectPlacementDefinition placementDefinition, 
        List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementRule> placementRules = null, 
        List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementConstraint> placementConstraints = null) 
    { 
#if UNITY_WSA && !UNITY_EDITOR 
        System.Threading.Tasks.Task.Run(() => 
            { 
                RunQuery(placementDefinition, placementRules, placementConstraints);  
            } 
        ); 
 
        return true;  
#else 
        return RunQuery(placementDefinition, placementRules, placementConstraints); 
#endif 
    } 

The most significant part of this code is not the code itself but the preprocessor directives. Here, we run the query on a separate thread if running on the HoloLens device; otherwise, we fall back to a synchronous flow. Both paths pass the query down to the RunQuery method, which is responsible for executing it. Let's now implement this method:

bool RunQuery(SpatialUnderstandingDllObjectPlacement.ObjectPlacementDefinition placementDefinition,
        List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementRule> placementRules = null,
        List<SpatialUnderstandingDllObjectPlacement.ObjectPlacementConstraint> placementConstraints = null)
{
    if (SpatialUnderstandingDllObjectPlacement.Solver_PlaceObject(this.name,
            SpatialUnderstanding.Instance.UnderstandingDLL.PinObject(placementDefinition),
            (placementRules != null) ? placementRules.Count : 0,
            ((placementRules != null) && (placementRules.Count > 0)) ? SpatialUnderstanding.Instance.UnderstandingDLL.PinObject(placementRules.ToArray()) : IntPtr.Zero,
            (placementConstraints != null) ? placementConstraints.Count : 0,
            ((placementConstraints != null) && (placementConstraints.Count > 0)) ? SpatialUnderstanding.Instance.UnderstandingDLL.PinObject(placementConstraints.ToArray()) : IntPtr.Zero,
            SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticObjectPlacementResultPtr()) > 0)
    {
        SpatialUnderstandingDllObjectPlacement.ObjectPlacementResult placementResult =
            SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticObjectPlacementResult();

        queryPlacementResults.Add(placementResult.Clone() as SpatialUnderstandingDllObjectPlacement.ObjectPlacementResult);
    }

    // Signal that the search has completed, whether or not a place was found.
    Finished = true;
    State = States.Finished;

    return true;
}

Here, we pass the query to the object placement Dynamic-Link Library (DLL) wrapper, SpatialUnderstandingDllObjectPlacement, to perform the search. As mentioned earlier, because the underlying library is written in C++ and compiled as an unmanaged DLL, the parameters need to be marshaled (pinned) before being passed across. If the query was successful, we obtain the result; in either case, we update the state to Finished so that the SceneController can proceed.

Our final method for querying is GetBoundsForObject, used by the FindPlaces method to get the bounds of the prefab. It returns the attached collider's bounds if one exists; otherwise, it creates bounds encapsulating all of the prefab's renderers:

    Bounds GetBoundsForObject(GameObject prefab)
    {
        Bounds? bounds = null;

        if (prefab.GetComponent<Collider>() != null)
        {
            bounds = prefab.GetComponent<Collider>().bounds;
        }
        else
        {
            Renderer[] renderers = prefab.GetComponentsInChildren<Renderer>();
            foreach (Renderer renderer in renderers)
            {
                if (bounds.HasValue)
                {
                    // Bounds is a struct, so grow a local copy and assign it back.
                    Bounds grownBounds = bounds.Value;
                    grownBounds.Encapsulate(renderer.bounds);
                    bounds = grownBounds;
                }
                else
                {
                    bounds = renderer.bounds;
                }
            }
        }

        return new Bounds(bounds.HasValue ? bounds.Value.center : Vector3.zero,
                          bounds.HasValue ? bounds.Value.size : Vector3.zero);
    }

Opportunity to refactor: because we have now seen this code twice, we should ask ourselves where we could move it to make it accessible to other classes. One option is a utility class; the other is an extension method on the GameObject class, which would be my preferred choice.
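
For reference, such an extension method might look like the following sketch (GetEncapsulatedBounds is a name of my choosing, not something defined by HoloToolkit):

    using UnityEngine;

    public static class GameObjectExtensions
    {
        public static Bounds GetEncapsulatedBounds(this GameObject gameObject)
        {
            // Prefer an attached collider's bounds if one exists.
            Collider collider = gameObject.GetComponent<Collider>();
            if (collider != null)
            {
                return collider.bounds;
            }

            // Otherwise, grow a bounds around all child renderers.
            Bounds? bounds = null;
            foreach (Renderer renderer in gameObject.GetComponentsInChildren<Renderer>())
            {
                if (bounds.HasValue)
                {
                    Bounds grownBounds = bounds.Value;
                    grownBounds.Encapsulate(renderer.bounds);
                    bounds = grownBounds;
                }
                else
                {
                    bounds = renderer.bounds;
                }
            }

            return bounds ?? new Bounds(Vector3.zero, Vector3.zero);
        }
    }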

Once the state has been updated to Finished, SceneController calls the PlaceGameObjects method on this class, so let's implement this now. Find the PlaceGameObjects method and add the following code:

    public bool PlaceGameObjects() 
    { 
        if(queryPlacementResults.Count == 0) 
        { 
            return false;  
        } 
         
        var objectPlacementResult = queryPlacementResults.First(); 
        GameObject go = Instantiate(prefab); 
        go.transform.position = objectPlacementResult.Position; 
        go.transform.up = objectPlacementResult.Up; 
 
        return true;  
    } 

At this point, all the hard work has been done. Our final task was to retrieve and use the results from the query to position and orientate our hologram. With this method now complete, we have finished placement using the SpatialUnderstanding library. 
