144 6. A Pocket 3D Theory Reference
Figure 6.18. The field of view φ governs how much of the scene is visible to a camera
located at the viewpoint. Narrowing the field of view is equivalent to using a zoom
lens.
the range 0 to (X_max − 1) horizontally and 0 to (Y_max − 1) vertically, where
X_max and Y_max refer to the number of pixels in the rendered image in the
horizontal and vertical direction, respectively. The coordinate origin (0, 0) is
in the top-left corner (see Figure 6.17).
The distance of the projection plane from the viewpoint can be chosen
arbitrarily; setting it to a value of 1 simplifies the calculations. Thus, if the
plane of projection is located at (1, 0, 0) and orientated parallel to the yz-
plane (i.e., the viewer is looking along the x-axis), then the screen coordinates
(X_s, Y_s) for the projection of a point (x, y, z) are given by

    X_s = X_max/2 + (y/x) s_x,    (6.7)
    Y_s = Y_max/2 − (z/x) s_y.    (6.8)
The parameters s_x and s_y are scale values to allow for different aspect ratios
and fields of view. This effectively lets us change the zoom settings for the
camera. Obviously, X_s and Y_s must satisfy 0 ≤ X_s < X_max and
0 ≤ Y_s < Y_max.
If f_f (measured in mm) is the focal length of the desired camera lens and
the aspect ratio is A_x : A_y, then

    s_x = (X_max/2) (f_f/21.22) A_x,
    s_y = (Y_max/2) (f_f/21.22) A_y.
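As a concrete illustration, the scale factors and the projection of Equations (6.7) and (6.8) might be sketched as follows. This is a minimal sketch, not code from the text; the function name and the 640 × 480 image size are illustrative assumptions:

```python
def perspective_project(x, y, z, x_max, y_max, f=50.0, a_x=1.0, a_y=1.0):
    """Project the world point (x, y, z) to screen coordinates (X_s, Y_s).

    Assumes the viewer looks along the x-axis with the projection plane
    at x = 1; f is the lens focal length in mm, and a_x : a_y is the
    aspect ratio, following Equations (6.7) and (6.8).
    """
    if x < 1:
        raise ValueError("point lies behind the projection plane; clip it first")
    s_x = (x_max / 2) * (f / 21.22) * a_x   # horizontal scale factor
    s_y = (y_max / 2) * (f / 21.22) * a_y   # vertical scale factor
    xs = x_max / 2 + (y / x) * s_x          # Equation (6.7)
    ys = y_max / 2 - (z / x) * s_y          # Equation (6.8)
    return xs, ys

# A point straight ahead of the camera projects to the image center.
print(perspective_project(10.0, 0.0, 0.0, 640, 480))  # → (320.0, 240.0)
```

Note that for an on-axis point the y/x and z/x terms vanish, so the result is the screen center regardless of the focal length.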
The numerical factor 21.22 is a constant that allows us to specify f_f in
standard mm units. For a camera lens of focal length f_f, the field of view φ can
be expressed as

    φ = 2 ATAN2(21.22, f_f).
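This field-of-view expression can be evaluated directly; the sketch below is an illustration only, and the 50 mm and 25 mm focal lengths are example values, not from the text:

```python
import math

def field_of_view_deg(f_mm):
    """Field of view in degrees for a lens of focal length f_mm (in mm),
    using phi = 2 * ATAN2(21.22, f_mm)."""
    return math.degrees(2.0 * math.atan2(21.22, f_mm))

print(field_of_view_deg(50.0))  # a 50 mm lens gives roughly a 46-degree view
print(field_of_view_deg(25.0))  # a shorter lens widens the field of view
```

As expected for a zoom lens, increasing the focal length narrows the field of view.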
Any point (x, y, z) for which x < 1 will not be transformed correctly by
Equations (6.7) and (6.8). The reason for this is that we have placed the
projection plane at x = 1, and therefore these points will be behind the
projection plane. As such, steps must be taken to eliminate these points before
the projection is applied. This process is called clipping. We talk more about
clipping in the next chapter.
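A minimal sketch of this pre-projection cull might look like the following (full clipping, which also handles primitives that straddle the plane, is deferred to the next chapter; the function name here is an illustrative assumption):

```python
def cull_behind_plane(points):
    """Discard points with x < 1, which lie behind the projection plane
    at x = 1 and cannot be projected by Equations (6.7) and (6.8)."""
    return [p for p in points if p[0] >= 1.0]

pts = [(10.0, 0.0, 0.0), (0.5, 2.0, 1.0), (-3.0, 1.0, 0.0)]
print(cull_behind_plane(pts))  # → [(10.0, 0.0, 0.0)]
```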
6.7 Summary
Let us take a quick review of what we have covered thus far. We know that
to construct a virtual environment, we need to be able to connect different
primitive shapes together. We then need to be able to assign properties to
these shapes, for example, color, lighting, and textures. Once we have con-
structed our virtual environment, we then need to be able to produce single
images of it, which we do by a process of rendering.
Whilst we have not covered the details of the rendering process yet, we do
know that before we render an image, we need to be able to determine what
we will actually see of the virtual environment. Some objects or primitives will
fall into this view and others will not. In order to determine what we can see,
we need to be able to perform some 3D geometry. In this chapter, we covered
the basics of this. In particular, we discussed determining the intersection
points of planes and lines and setting up viewing transformations.
So now that we have looked at how the viewing transformation is con-
structed, it is a good idea to look at the process of utilizing this to produce a
2D picture of our 3D environment. It is to the interesting topic of rendering
that we now turn our attention.