Motion tracking in depth

ARCore implements motion tracking using an algorithm known as visual-inertial odometry (VIO). VIO combines the identification of image features from the device's camera with readings from the device's internal motion sensors to track the device's orientation and position relative to where it started. Tracking orientation and position together lets us locate a device in 6 degrees of freedom, or what we will often refer to as the device's (or object's) pose. Let's take a look at what a pose looks like in the following diagram:

6 Degrees of Freedom, Pose

We will use the term pose frequently when identifying an object's position and orientation in 3D. If you recall from Chapter 4, ARCore on the Web, a pose can also be expressed in a mathematical notation called a matrix. Rotation can also be expressed in a special form of complex math called a quaternion, which lets us define all aspects of 3D rotation in a compact form. Again, we won't worry about the specific math here; we will just mention how it is used.
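To make this concrete, here is a minimal sketch in plain three.js (not part of the ARCore sample) showing a pose as a position plus a quaternion rotation, and the same pose packed into a single 4 x 4 matrix:

var position = new THREE.Vector3(0.5, 1.2, -2.0);               // translation in meters (x, y, z)
var orientation = new THREE.Quaternion();                        // rotation stored as a quaternion
orientation.setFromEuler(new THREE.Euler(0, Math.PI / 2, 0));    // a 90-degree yaw
var pose = new THREE.Matrix4();                                  // the same pose as a 4 x 4 matrix
pose.compose(position, orientation, new THREE.Vector3(1, 1, 1)); // translation + rotation + unit scale

Every three.js object stores its pose this way internally, which is why we can read camera.position and camera.rotation directly in the next exercise.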

Perhaps it will be more helpful if we can see how this works in a modified ARCore sample. Open up the spawn-at-surface.html example from the Android/three.ar.js/examples folder in a text editor and follow the given steps:

  1. Scroll down or search for the update function.
  2. Locate the following line of code:
camera.updateProjectionMatrix();
  3. Add the following lines of code right after that line:
var pos = camera.position;
var rot = camera.rotation;
console.log("Device position (X:" + pos.x + ",Y:" + pos.y + ",Z:" + pos.z + ")");
console.log("Device orientation (pitch:" + rot._x + ",yaw:" + rot._y + ",roll:" + rot._z + ")");
  4. Save the file. The code we added extracts the camera's position and orientation (rotation) into the helper variables pos and rot, then outputs the values to the console with the console.log function. As it happens, the camera also represents the device's view. (An optional tweak for more readable output follows these steps.)
  5. Open a Command Prompt or shell window.
  6. Launch http-server from your android folder by entering the following:
cd /android
http-server -d -p 9999
  7. Launch the Chrome debugging tools and connect remotely to your device.
  8. Open the spawn-at-surface.html file using the WebARCore browser app on your device.
  9. Switch back to the Chrome tools and click on Inspect.
  10. Wait for the new window to open and click on Console. Move your device around while running the AR app (spawn-at-surface.html), and you should see the Console tab updated with messages about the device's position and orientation. Here's an example of how this should look:
Console output showing device position and orientation being tracked
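If the raw radian values in the log are hard to interpret, here is the optional tweak mentioned in step 4, shown only as a sketch and assuming the three.js build bundled with the samples includes the standard THREE.Math helpers, that logs the orientation in degrees instead:

var rot = camera.rotation;                                      // Euler angles in radians
console.log("Device orientation in degrees (pitch:" + THREE.Math.radToDeg(rot.x) +
    ",yaw:" + THREE.Math.radToDeg(rot.y) +
    ",roll:" + THREE.Math.radToDeg(rot.z) + ")");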

The code we added in this example tracks the camera which, in an AR app, represents the view projected through the device. In 3D graphics, the camera defines the view of a scene. A 3D scene can have multiple cameras, but, typically, we only use one in AR. The following is a diagram of how we define a camera, or view projection, in 3D:



Viewing frustum of a 3D camera

The main task of a camera is to project or flatten the 3D virtual objects into a 2D image, which is then displayed on the device. If you scroll near the middle of the spawn-at-surface.html file, you will see the following code, which creates the camera for the scene:

camera = new THREE.ARPerspectiveCamera(
vrDisplay,
60,
window.innerWidth / window.innerHeight,
vrDisplay.depthNear,
vrDisplay.depthFar
);

Here, vrDisplay is the WebVR display object that represents the device's camera and motion tracking, 60 is the vertical field of view in degrees, window.innerWidth / window.innerHeight is the aspect ratio, and vrDisplay.depthNear and vrDisplay.depthFar are the near and far clipping plane distances. The near and far planes, together with the field of view and aspect ratio, define the view frustum; only objects inside the view frustum are rendered. Feel free to change those parameters to see what effect they have on the scene view when running the app.
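As a rough sketch of that experiment, assuming ARPerspectiveCamera exposes the standard three.js PerspectiveCamera properties it extends, you could widen the field of view and rebuild the projection matrix like this:

camera.fov = 90;                  // widen the field of view from 60 to 90 degrees
camera.near = 0.1;                // bring the near clipping plane closer
camera.far = 100;                 // push the far clipping plane out to 100 meters
camera.updateProjectionMatrix();  // required for the new frustum to take effect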

We use a field of view of 60 degrees in this setting to give a more natural perspective to the objects in the scene. Feel free to experiment with larger and smaller angles to see the visual effect this has on the scene objects.
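To see what projecting, or flattening, means in code, the following sketch (not part of the sample) pushes a 3D world point through the camera and converts the result into 2D screen pixels:

var worldPoint = new THREE.Vector3(0, 0, -2);        // a point two meters in front of the origin
var ndc = worldPoint.clone().project(camera);        // normalized device coordinates, -1 to 1
var screenX = (ndc.x + 1) / 2 * window.innerWidth;   // map to pixel coordinates
var screenY = (1 - ndc.y) / 2 * window.innerHeight;  // y is flipped between NDC and screen space
console.log("Projected to screen at (" + screenX + ", " + screenY + ")");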

Now that we have a better understanding of how we can track our device around a scene, we will extend our example. In the next section, we will introduce 3D spatial sound.
