Understanding how ARKit tracks the physical environment

To understand how ARKit renders content, it's essential to understand how ARKit makes sense of the physical environment the user is in. When you implement an AR experience, you use an ARKit session. An ARKit session is represented by an instance of ARSession. Every ARSession uses an instance of an ARConfiguration subclass to describe the kind of tracking it should perform on the environment. The following diagram depicts the relationship between all of the objects involved in an ARKit session:

The preceding image shows how the session configuration is passed to the session. The session is then passed to a view that is responsible for rendering the scene. If you use SpriteKit to render the scene, the view is an instance of ARSKView; if you use SceneKit, it is an instance of ARSCNView. Both the view and the session have a delegate that is informed about certain events that can occur during an ARKit session. You will learn more about these delegates later, when you implement your AR gallery.
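To make this relationship a little more concrete, the following is a minimal sketch of a SceneKit-based setup. The ARGalleryViewController name and the sceneView outlet are placeholders, not part of ARKit itself:

```swift
import UIKit
import ARKit

class ARGalleryViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    // The view owns an ARSession; assigning both delegates lets this
    // view controller respond to rendering and session events.
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self          // rendering-related events
        sceneView.session.delegate = self  // frame and tracking updates
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // The configuration tells the session what kind of tracking to perform.
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```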

There are several different tracking options that you can configure on a session. One of the most basic tracking configurations is AROrientationTrackingConfiguration. This configuration tracks only the device's orientation, not the user's movement through the environment. This kind of tracking monitors the device using three degrees of freedom; more specifically, it tracks the device's rotation around the x, y, and z axes. This kind of tracking is perfect if you're implementing something such as a 3D video, where the user's movements can be ignored.
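If your experience only needs orientation tracking, running the session could look something like the following short sketch, which reuses the sceneView property from the earlier example:

```swift
// Orientation tracking follows only the device's rotation (three degrees of
// freedom), so walking around does not change the user's position in the scene.
let configuration = AROrientationTrackingConfiguration()
sceneView.session.run(configuration)
```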

A more complex tracking configuration is ARWorldTrackingConfiguration, also known as world tracking. This type of configuration tracks the user's movements as well as the device's orientation. This means that a user can walk around an AR object to see it from all sides. World tracking uses the device's motion sensors to determine the user's movements and the device's orientation. Motion sensors are very accurate for short, small movements, but they aren't accurate enough on their own to track movement over longer distances and periods of time. To make sure the AR experience remains as precise as possible, world tracking also performs advanced computer vision tasks, analyzing the camera feed to determine the user's location in the environment.
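Because tracking quality depends on both the motion sensors and what the camera can see, ARKit reports changes in tracking quality to the session's delegate. The following sketch shows one way you might observe these changes; the print statements are placeholders for your own handling:

```swift
// ARSessionObserver callback, available to both the session and view delegates.
// ARKit calls it whenever the quality of world tracking changes.
func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    switch camera.trackingState {
    case .normal:
        print("World tracking is running normally")
    case .limited(let reason):
        // For example, .excessiveMotion or .insufficientFeatures.
        print("World tracking is limited: \(reason)")
    case .notAvailable:
        print("World tracking is not available")
    }
}
```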

In addition to tracking the user's movements, world tracking also uses computer vision to make sense of the environment that the AR session exists in. By detecting certain points of interest in the camera feed, world tracking can compare and analyze the position of these points in relation to the user's motion to determine the distances and sizes of objects. This technique also allows world tracking to detect surfaces such as walls and floors.
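For example, you can ask a world-tracking configuration to detect horizontal and vertical surfaces. Detected surfaces are delivered to the view's delegate as ARPlaneAnchor instances, as the following sketch shows, again assuming the SceneKit view from earlier:

```swift
// Enable plane detection on top of regular world tracking.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)

// ARSCNViewDelegate callback, called when ARKit adds an anchor for a detected plane.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    print("Detected a plane of \(planeAnchor.extent.x) by \(planeAnchor.extent.z) meters")
}
```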

The world tracking configuration stores everything it learns about the environment in an ARWorldMap. This map contains all ARAnchor instances that represent different objects and points of interest that exist in the session.
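If you want to inspect or persist this map, you can ask a running session for it, roughly as follows. Note that the call fails while ARKit hasn't gathered enough information about the environment yet:

```swift
// Ask the session for a snapshot of everything it has learned so far.
sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else {
        print("World map unavailable: \(error?.localizedDescription ?? "unknown error")")
        return
    }
    // Every anchor that exists in the session is part of the map.
    print("The world map contains \(map.anchors.count) anchors")
}
```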

There are several other, more specialized tracking types that you can use in your app. For instance, you can use ARFaceTrackingConfiguration on devices with a TrueDepth camera to track a user's face. This kind of tracking is perfect if you want to recreate Apple's Animoji feature that was introduced with the iPhone X.
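Because face tracking requires a TrueDepth camera, it's a good idea to check whether the configuration is supported before running it, as in this short sketch:

```swift
// Face tracking is only supported on devices with a TrueDepth camera.
if ARFaceTrackingConfiguration.isSupported {
    let configuration = ARFaceTrackingConfiguration()
    sceneView.session.run(configuration)
} else {
    print("Face tracking is not supported on this device")
}
```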

You can also configure your session so it automatically detects certain objects or images in a scene. To implement this, you can use ARObjectScanningConfiguration to scan real-world objects so they can be recognized later, or ARImageTrackingConfiguration to detect and track still images.
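As a sketch of what image tracking might look like, the following snippet loads reference images from an asset catalog group; the "AR Resources" group name is an assumption, not something ARKit requires:

```swift
// Image tracking looks for known reference images in the camera feed.
let configuration = ARImageTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: .main) {
    configuration.trackingImages = referenceImages
    configuration.maximumNumberOfTrackedImages = 2
}
sceneView.session.run(configuration)
```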

Before you get your hands dirty with implementing an ARKit session, let's explore the new AR Quick Look feature to see how simple it is to allow users of your app to preview items in AR.
