11 Navigation and Movement in VR
So far, we have looked at how to specify 3D scenes and the shape and appearance of virtual objects. We've investigated the principles of how to render realistic images as if we were looking at the scene from any direction and using any type of camera. Rendering static scenes, even if they result in the most realistic images one could imagine, is not enough for VR work. Nor is rendering active scenes with a lot of movement if they result from predefined actions or the highly scripted behavior of camera, objects or environment. This is the sort of behavior one might see in computer-generated movies. Of course, in VR, we still expect to be able to move the viewpoint and objects around in three dimensions, and so the same theory and implementation detail of such things as interpolation, hierarchical movement, physical animation and path following still apply.
What makes VR different from computer animation is that we have to be able to put ourselves (and others) into the virtual scene. We have to appear to interact with the virtual elements, move them around and get them to do things at our command. To achieve this, our software must be flexible enough to do everything one would expect of a traditional computer animation package, but spontaneously, not just by pre-arrangement. However, on its own, software is not enough. VR requires a massive contribution from the human interface hardware. To put ourselves in the scene, we have to tell the software where we are and what we are doing. This is not easy. We have to synchronize the real with the virtual, and in the context of navigation and movement,
this requires motion tracking. Motion-tracking hardware will allow us to determine where we (and other real objects) are in relation to a real coordinate frame of reference. The special hardware needed to do this has its own unique set of practical complexities (as we discussed in Section 4.3). But once we can acquire movement data in real time and feed it into the visualization software, it is not a difficult task to match the real coordinate system with the virtual one so that we can appear to touch, pick up and throw a virtual object. Or, more simply, by knowing where we are and in what direction we are looking, a synthetic view of a virtual world can be fed to the display we are using, e.g., a head-mounted display.
In this chapter, we start by looking at those aspects of computer animation which are important for describing and directing movement in VR. Principally, we will look at inverse kinematics (IK), which is important when simulating the movement of articulated linkages, such as animal/human movement and robotic machinery (so different to look at, but morphologically so similar). We will begin by quickly reviewing how to achieve smooth motion and rotation.
11.1 Computer Animation
The most commonly used approach to 3D computer animation is keyframe in-betweening (tweening), which is most useful for basic rigid-body motion. The idea of the keyframe is well known to paper-and-pencil animators. It is a description of a scene at one instant of time, a key instant. Between key instants, it is assumed that nothing "startling" happens. It is the role of the key animators to draw the key scenes (called keyframes), which are used by a team of others to draw a series of scenes filling in the gaps between the keys so that jumps and discontinuities do not appear. This is called tweening (derived from the rather long and unpronounceable word inbetweening).
The task of tweening is a fairly monotonous and repetitive one. Thus, it is ideally suited to some form of automation with a computer. A half-hour animated movie may only need a few thousand keyframes, about four percent of the total length. Some predefined and commonly used actions described by library scripts might cut the work of the animators even further. For example, engineering designers commonly need to visualize their design rotating in front of the camera; a script or template for rotation about some specified location at a fixed distance from the camera will reduce the animator's work still more.
Thus, in computer animation, the basic idea is:
Set up a description of a scene (place models, lights and cameras
in three dimensions) for each keyframe. Then use the computer
to calculate descriptions of the scene for each frame in between
the keyframes and render appropriate images.
Most (if not all) computer-animation application programs give their
users the task of describing the state of the action in keyframes, and then
they do their best to describe what happens in the snapshots taken during
the intervening frames. This is invariably done by interpolating between the
description of at least two, but possibly three or four keyframes.
For applications in VR, we cannot use the keyframe concept in the same way, because events are taking place in real time. However, we still have to render video frames, and we must synchronize what is rendered with a real-time clock. So suppose we need to simulate the movement of a flight simulator which represents an aircraft flying at 10 m/s along a path 10 km long over a virtual terrain. The total flight time will take approximately 1,000 seconds. This means that if the graphics card is refreshing at a rate of 100 Hz (or every one hundredth of a second), our visualization software will have to render 100,000 frames during this 1,000-second flight. In addition, if we get external course correction information from joystick hardware every second and use it to update the aircraft's current position, we are still left with the task of having to smoothly interpolate the viewpoint's virtual position and orientation 99 times every second (so that all the video frames can be rendered without looking jerky). This is analogous to what happens in computer animation when tweening between two keyframes to generate 100 pictures.¹
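The per-second interpolation described above can be sketched in a few lines. This is an illustrative fragment, not code from the book: the positions are hypothetical, and a real simulator would also interpolate orientation.

```python
def lerp(a, b, t):
    """Linearly interpolate between position vectors a and b, for t in [0, 1]."""
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

# Two consecutive course-correction positions, one second apart
# (hypothetical values: the aircraft covers 10 m along x in that second).
p1 = (0.0, 0.0, 0.0)
p2 = (10.0, 0.0, 0.0)

# At 100 Hz the renderer needs a viewpoint for every frame in that second,
# so the 99 frames between the two updates are filled in by interpolation.
frames = [lerp(p1, p2, k / 100.0) for k in range(100)]
```

Each frame then moves the viewpoint 0.1 m, matching the figures used in the example.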
In practical terms, our rendering software has a harder job than that of its equivalent in computer animation because of the need to synchronize to a real-time clock. Going back to our simple flight simulator example, if we cannot render 100 pictures per second, we shall have to make larger steps in viewpoint position. Therefore, instead of rendering 100 frames per second (fps) and moving the viewpoint 0.1 m at each frame, we might render 50 fps and move the viewpoint 0.2 m. Unfortunately, it is not usually possible to know ahead of time how long it is going to take to render a
¹There is a small subtle difference. In the animation example, we know the starting and ending positions and interpolate; in the flight-simulator example, we use the current position and the last position to extrapolate.
video frame. For one view it might take 10 ms, for another 50 ms. As we indicated in Chapter 7, the time it takes to render a scene depends on the scene complexity, roughly proportional to the number of vertices within the scene.
As we shall see in Part II of the book, application programs designed for real-time interactive work are usually written using multiple threads² of execution so that rendering, scene animation, motion tracking and haptic feedback can all operate at optimal rates and with appropriate priorities. If necessary, the rendering thread can skip frames. These issues are discussed in Chapter 15.
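A minimal sketch of this idea: a tracking thread publishes pose updates through a queue while a separate rendering thread consumes the most recent one. The structure, names and the simulated poses are all illustrative assumptions, not the book's architecture.

```python
import queue
import threading

poses = queue.Queue()

def tracking_thread():
    # Stand-in for polling motion-tracking hardware at its own rate:
    # publish three simulated pose samples.
    for k in range(3):
        poses.put(("pose", k))

def render_thread(rendered):
    # Consume poses independently of the tracker; a real renderer would
    # take only the latest pose and skip frames if it fell behind.
    for _ in range(3):
        rendered.append(poses.get())

rendered = []
t1 = threading.Thread(target=tracking_thread)
t2 = threading.Thread(target=render_thread, args=(rendered,))
t1.start(); t2.start()
t1.join(); t2.join()
```

The queue decouples the two loops, which is what allows each activity to run at its own rate.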
11.2 Moving and Rotating in 3D
Rigid motion is the simplest type of motion to simulate, because each object
is considered an immutable entity, and to have it move smoothly from one
place to another, one only has to specify its position and orientation. Rigid
motion applies to objects such as vehicles, airplanes, or even people who do
not move their arms or legs about. All fly-by and walkthrough-type camera
movements fall into this category.
A point in space is specified by a position vector, p = (x, y, z). To specify orientation, three additional values are also required. There are a number of possibilities available to us, but the scheme where values of heading, pitch and roll (φ, θ, ρ) are given is a fairly intuitive description of a model's orientation (these values are also referred to as the Euler angles). Once the six numbers (x, y, z, φ, θ, ρ) have been determined for each object or a viewpoint, a transformation matrix can be calculated from them. The matrix is used to place the object at the correct location relative to the global frame of reference within which all the scene's objects are located.
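A sketch of building such a matrix from the six numbers follows. Note that the axis assigned to each Euler angle, and the order in which the rotations are composed, are conventions that vary between systems; the choice below (heading about y, pitch about x, roll about z, applied roll first) is one common assumption, not necessarily the book's.

```python
import math

def rot_y(a):  # heading: rotation about the vertical (y) axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def rot_x(a):  # pitch: rotation about the transverse (x) axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rot_z(a):  # roll: rotation about the longitudinal (z) axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(x, y, z, heading, pitch, roll):
    """4x4 matrix placing an object at (x, y, z) with the given Euler angles."""
    T = [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]
    R = matmul(rot_y(heading), matmul(rot_x(pitch), rot_z(roll)))
    return matmul(T, R)
```

With all three angles zero, the rotation part reduces to the identity and the matrix is a pure translation.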
11.2.1 Moving Smoothly
Finding the position of any object (camera, etc.) at any time instant t, as it moves from one place to another, involves either mathematical interpolation or extrapolation. Interpolation is more commonly utilized in computer animation, where we know the starting position of an object (x₁, y₁, z₁) and its
²Any program in which activities are not dependent can be designed so that each activity is executed by its own separate and independent thread of execution. This means that if we have multiple processors, each thread can be executed on a different processor.
orientation (φ₁, θ₁, ρ₁) at time t₁ and the final position (x₂, y₂, z₂) and orientation (φ₂, θ₂, ρ₂) at t₂. We can then determine the position and orientation of the object through interpolation at any time where t₁ ≤ t ≤ t₂. Alternatively, if we need to simulate real-time motion then it is likely that we will need to use two or more positions in order to predict or extrapolate the next position of the object at a time in the future where t₁ < t₂ ≤ t.
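Both cases reduce to the same arithmetic on the two known samples; only the value of the blending parameter differs. A sketch under a straight-line-motion assumption (the sample positions and times are hypothetical):

```python
def position_at(p1, t1, p2, t2, t):
    """Position at time t from samples (p1, t1) and (p2, t2), assuming
    straight-line motion. For t1 <= t <= t2 this interpolates; for
    t > t2 the parameter u exceeds 1 and the motion is extrapolated."""
    u = (t - t1) / (t2 - t1)
    return tuple(a + u * (b - a) for a, b in zip(p1, p2))

p1, p2 = (0.0, 0.0, 0.0), (2.0, 0.0, 0.0)   # samples one second apart
mid = position_at(p1, 0.0, p2, 1.0, 0.5)     # interpolation: halfway
nxt = position_at(p1, 0.0, p2, 1.0, 1.5)     # extrapolation: beyond t2
```

Extrapolation simply projects the most recent motion forward, which is why it can mispredict if the object changes course between updates.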
Initially, let's look at position interpolation (angular interpolation is discussed separately in Section 11.2.2) in cases where the movement is along a smooth curve. Obviously, the most appropriate way to determine any required intermediate locations, in between our known locations, is by using a mathematical representation of a curved path in 3D space—also known as a spline.
It is important to appreciate that for spline interpolation, the path must
be specified by at least four points. When a path is laid out by more than four
points, they are taken in groups of four. Splines have the big advantage that
they are well-behaved and they can have their flexibility adjusted by specified
parameters so that it is possible to have multiple splines or paths which go
through the same control points. Figure 11.1 illustrates the effect of increasing
and decreasing the tension in a spline.
The equation for any point p lying on a cubic spline segment, such as that illustrated in Figure 11.2, is written in the form

    p(τ) = K₃τ³ + K₂τ² + K₁τ + K₀,    (11.1)
Figure 11.1. Changes in the flexibility (or tension) of a spline allow it to represent
many paths through the same control points.
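One concrete way to obtain the coefficients in Equation (11.1) from four control points is the Catmull-Rom construction, where a tension parameter scales the tangents, giving the adjustable flexibility described above. The book's own coefficient derivation may differ; this sketch only illustrates the idea.

```python
def spline_point(p0, p1, p2, p3, tau, tension=0.5):
    """Evaluate p(tau) = K3*tau^3 + K2*tau^2 + K1*tau + K0 on the segment
    between control points p1 and p2, for tau in [0, 1].
    tension scales the tangents; 0.5 gives the standard Catmull-Rom curve."""
    point = []
    for a, b, c, d in zip(p0, p1, p2, p3):
        m1 = tension * (c - a)          # tangent at p1
        m2 = tension * (d - b)          # tangent at p2
        K0 = b
        K1 = m1
        K2 = -3 * b + 3 * c - 2 * m1 - m2
        K3 = 2 * b - 2 * c + m1 + m2
        point.append(((K3 * tau + K2) * tau + K1) * tau + K0)
    return tuple(point)

# Four collinear control points (hypothetical): the segment from p1 to p2
# is then a straight line, so tau = 0.5 lands midway between them.
p = spline_point((0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0), 0.5)
```

By construction p(0) = p1 and p(1) = p2, so consecutive segments built from overlapping groups of four points join without gaps.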