particularly, primitives that are behind the eye—can end up being rasterized, lead-
ing to incorrect results. For instance, consider the triangle shown in Figure 8.7.
Two vertices are in the view volume, but the third is behind the eye. The projec-
tion transformation maps this vertex to a nonsensical location behind the far plane,
and if this is allowed to happen the triangle will be rasterized incorrectly. For this
reason, rasterization has to be preceded by a clipping operation that removes parts
of primitives that could extend behind the eye.
Clipping is a common operation in graphics, needed whenever one geometric
entity “cuts” another. For example, if you clip a triangle against the plane x = 0,
the plane cuts the triangle into two parts if the signs of the x-coordinates of the
vertices are not all the same. In most applications of clipping, the portion of the
triangle on the “wrong” side of the plane is discarded. This operation for a single
plane is shown in Figure 8.8.
Figure 8.8. A polygon is clipped against a clipping plane. The portion “inside” the
plane is retained.
In clipping to prepare for rasterization, the “wrong” side is the side outside
the view volume. It is always safe to clip away all geometry outside the view
volume—that is, clipping against all six faces of the volume—but many systems
manage to get away with only clipping against the near plane.
This section discusses the basic implementation of a clipping module. Those
interested in implementing an industrial-speed clipper should see the book by
Blinn mentioned in the notes at the end of this chapter.
The two most common approaches for implementing clipping are
1. in world coordinates using the six planes that bound the truncated viewing
pyramid,
2. in the 4D transformed space before the homogeneous divide.
Either possibility can be effectively implemented (J. Blinn, 1996) using the fol-
lowing approach for each triangle:
for each of six planes do
if (triangle entirely outside of plane) then
break (triangle is not visible)
else if triangle spans plane then
clip triangle
if (quadrilateral is left) then
break into two triangles
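As a concrete illustration, here is a minimal Python sketch of this per-triangle loop,
assuming each vertex is a 3-tuple, each clipping plane is a pair (n, D) with
f(p) = n · p + D and f(p) < 0 counting as “inside,” and spanning polygons are clipped
one plane at a time (the helper names are ours; a production clipper would also
interpolate vertex attributes and guard against degenerate cases):

def signed_distance(plane, p):
    # plane is (n, D); f(p) = n . p + D, with negative meaning "inside"
    n, D = plane
    return n[0]*p[0] + n[1]*p[1] + n[2]*p[2] + D

def clip_polygon_against_plane(poly, plane):
    # One Sutherland-Hodgman pass: keep the part of poly with f(p) <= 0.
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        fa, fb = signed_distance(plane, a), signed_distance(plane, b)
        if fa <= 0:
            out.append(a)                 # a is inside; keep it
        if (fa <= 0) != (fb <= 0):        # edge spans the plane
            t = fa / (fa - fb)            # same parameter as derived in Section 8.1.6
            out.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return out

def clip_triangle(tri, planes):
    # Returns a list of triangles: empty if culled, one if untouched or clipped
    # to a triangle, two if clipping left a quadrilateral.
    poly = list(tri)
    for plane in planes:
        f = [signed_distance(plane, p) for p in poly]
        if all(fi > 0 for fi in f):       # entirely outside: not visible
            return []
        if any(fi > 0 for fi in f):       # spans the plane: clip it
            poly = clip_polygon_against_plane(poly, plane)
    # fan-triangulate whatever convex polygon is left
    return [(poly[0], poly[i], poly[i + 1]) for i in range(1, len(poly) - 1)]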
8.1.4 Clipping Before the Transform (Option 1)
Option 1 has a straightforward implementation. The only question is, “What are
the six plane equations?” Because these equations are the same for all triangles
rendered in the single image, we do not need to compute them very efficiently.
For this reason, we can just invert the transform shown in Figure 5.11 and apply
it to the eight vertices of the transformed view volume:
(x, y, z) = (l, b, n)
(r, b, n)
(l, t, n)
(r, t, n)
(l, b, f)
(r, b, f)
(l, t, f)
(r, t, f)
The plane equations can be inferred from here. Alternatively, we can use vector
geometry to get the planes directly from the viewing parameters.
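For example, once the eight corners above have been mapped back through the
inverse transform, each face plane can be taken through three of its corners. A
sketch of that idea in Python (points as 3-tuples; the function names are ours, and
each normal is oriented so that the centroid of the eight corners counts as “inside,”
i.e., gives f < 0):

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def plane_through(p0, p1, p2, inside_point):
    # Plane f(p) = n . p + D through p0, p1, p2, oriented so that
    # f(inside_point) < 0.
    n = cross(sub(p1, p0), sub(p2, p0))
    D = -dot(n, p0)
    if dot(n, inside_point) + D > 0:      # flip so the volume is on the negative side
        n, D = (-n[0], -n[1], -n[2]), -D
    return (n, D)

def view_volume_planes(corners):
    # corners: the eight world-space points corresponding, in order, to
    # (l,b,n), (r,b,n), (l,t,n), (r,t,n), (l,b,f), (r,b,f), (l,t,f), (r,t,f).
    centroid = tuple(sum(p[k] for p in corners) / 8.0 for k in range(3))
    lbn, rbn, ltn, rtn, lbf, rbf, ltf, rtf = corners
    faces = [(lbn, rbn, rtn),   # near
             (lbf, rbf, rtf),   # far
             (lbn, ltn, ltf),   # left
             (rbn, rtn, rtf),   # right
             (lbn, rbn, rbf),   # bottom
             (ltn, rtn, rtf)]   # top
    return [plane_through(p0, p1, p2, centroid) for p0, p1, p2 in faces]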
8.1.5 Clipping in Homogeneous Coordinates (Option 2)
Surprisingly, the option usually implemented is that of clipping in homogeneous
coordinates before the divide. Here the view volume is 4D, and it is bounded by
3D volumes (hyperplanes). These are:
x + lw = 0
x − rw = 0
y + bw = 0
y − tw = 0
z + nw = 0
z − fw = 0
These planes are quite simple, so the efficiency is better than for Option 1. They
still can be improved by transforming the view volume [l, r] × [b, t] × [f, n] to
[0, 1]³. It turns out that the clipping of the triangles is not much more complicated
than in 3D.
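Because each boundary function above is linear in (x, y, z, w), the edge-intersection
computation of the next section carries over unchanged to homogeneous points. A
sketch (only the plane evaluation and the intersection are shown; which sign counts
as “inside” depends on the projection matrix in use, and the names are ours):

def boundary_planes(l, r, b, t, n, f):
    # Each hyperplane as coefficients (A, B, C, D); a homogeneous point
    # p = (x, y, z, w) lies on the plane when A*x + B*y + C*z + D*w = 0.
    return [(1, 0, 0,  l),    # x + lw = 0
            (1, 0, 0, -r),    # x - rw = 0
            (0, 1, 0,  b),    # y + bw = 0
            (0, 1, 0, -t),    # y - tw = 0
            (0, 0, 1,  n),    # z + nw = 0
            (0, 0, 1, -f)]    # z - fw = 0

def evaluate(plane, p):
    return sum(plane[k] * p[k] for k in range(4))

def clip_edge_homogeneous(a, b, plane):
    # a and b are homogeneous points (x, y, z, w) on opposite sides of the plane.
    fa, fb = evaluate(plane, a), evaluate(plane, b)
    t = fa / (fa - fb)                    # linear, so the 3D formula carries over
    return tuple(a[k] + t * (b[k] - a[k]) for k in range(4))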
8.1.6 Clipping against a Plane
No matter which option we choose, we must clip against a plane. Recall from
Section 2.5.5 that the implicit equation for a plane through point q with normal
n is
f(p) = n · (p − q) = 0.
This is often written
f(p) = n · p + D = 0.    (8.2)
Interestingly, this equation not only describes a 3D plane, but it also describes a
line in 2D and the volume analog of a plane in 4D. All of these entities are usually
called planes in their appropriate dimension.
If we have a line segment between points a and b, we can “clip” it against
a plane using the techniques for cutting the edges of 3D triangles in BSP tree
programs described in Section 12.4.3. Here, the points a and b are tested to
determine whether they are on opposite sides of the plane f(p) = 0 by checking
whether f(a) and f(b) have different signs. Typically f(p) < 0 is defined to be
“inside” the plane, and f(p) > 0 is “outside” the plane. If the plane does split the
line, then we can solve for the intersection point by substituting the equation for
the parametric line,
p = a + t(b − a),
into the f(p) = 0 plane of Equation (8.2). This yields
n · (a + t(b − a)) + D = 0.
Solving for t gives
t = (n · a + D) / (n · (a − b)).
We can then find the intersection point and “shorten” the line.
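A small Python sketch of this segment clipping, assuming the plane is given by its
normal n and offset D as in Equation (8.2), with f(p) < 0 meaning “inside” (the
function names are ours):

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def clip_segment(a, b, n, D):
    # Returns the portion of segment ab with f(p) = n . p + D <= 0,
    # or None if the whole segment is outside.
    fa, fb = dot(n, a) + D, dot(n, b) + D
    if fa > 0 and fb > 0:
        return None                       # entirely outside
    if fa <= 0 and fb <= 0:
        return (a, b)                     # entirely inside; nothing to do
    # t = (n . a + D) / (n . (a - b)), exactly as derived above
    t = fa / (fa - fb)
    p = tuple(a[k] + t * (b[k] - a[k]) for k in range(3))
    return (a, p) if fa <= 0 else (p, b)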
To clip a triangle, we again can follow Section 12.4.3 to produce one or two
triangles.
8.2 Operations Before and After Rasterization
Before a primitive can be rasterized, the vertices that dene it must be in screen
coordinates, and the colors or other attributes that are supposed to be interpolated
across the primitive must be known. Preparing this data is the job of the vertex-
processing stage of the pipeline. In this stage, incoming vertices are transformed
by the modeling, viewing, and projection transformations, mapping them from
their original coordinates into screen space (where, recall, position is measured
in terms of pixels). At the same time, other information, such as colors, surface
normals, or texture coordinates, is transformed as needed; we’ll discuss these
additional attributes in the examples below.
After rasterization, further processing is done to compute a color and depth
for each fragment. This processing can be as simple as just passing through an in-
terpolated color and using the depth computed by the rasterizer; or it can involve
complex shading operations. Finally, the blending phase combines the fragments
generated by the (possibly several) primitives that overlapped each pixel to com-
pute the final color. The most common blending approach is to choose the color
of the fragment with the smallest depth (closest to the eye).
The purposes of the different stages are best illustrated by examples.
8.2.1 Simple 2D Drawing
The simplest possible pipeline does nothing in the vertex or fragment stages, and
in the blending stage the color of each fragment simply overwrites the value of the
previous one. The application supplies primitives directly in pixel coordinates,
and the rasterizer does all the work. This basic arrangement is the essence of
many simple, older APIs for drawing user interfaces, plots, graphs, and other 2D
content. Solid color shapes can be drawn by specifying the same color for all
vertices of each primitive, and our model pipeline also supports smoothly varying
color using interpolation.
8.2.2 A Minimal 3D Pipeline
To draw objects in 3D, the only change needed to the 2D drawing pipeline is a
single matrix transformation: the vertex-processing stage multiplies the incoming
vertex positions by the product of the modeling, camera, projection, and viewport
matrices, resulting in screen-space triangles that are then drawn in the same way
as if they’d been specified directly in 2D.
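A sketch of such a vertex stage, assuming the modeling, camera, projection, and
viewport matrices are available as 4 × 4 NumPy arrays (the function name is ours;
a real pipeline composes the matrix once per object rather than once per vertex):

import numpy as np

def process_vertex(p_object, M_model, M_cam, M_proj, M_vp):
    # Transform an object-space position all the way to screen space.
    M = M_vp @ M_proj @ M_cam @ M_model
    x, y, z, w = M @ np.array([p_object[0], p_object[1], p_object[2], 1.0])
    # The homogeneous divide yields pixel coordinates (x, y) and a depth z
    # that can be interpolated across the primitive later.
    return np.array([x / w, y / w, z / w])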
One problem with the minimal 3D pipeline is that in order to get occlusion
relationships correct—to get nearer objects in front of farther away objects—
primitives must be drawn in back-to-front order. This is known as the painter’s
algorithm for hidden surface removal, by analogy to painting the background of
a painting first, then painting the foreground over it. The painter’s algorithm is
a perfectly valid way to remove hidden surfaces, but it has several drawbacks.
Figure 8.9. Two occlusion cycles, which cannot be drawn in back-to-front order.
It cannot handle triangles that intersect one another, because there is no correct
order in which to draw them. Similarly, several triangles, even if they don’t inter-
sect, can still be arranged in an occlusion cycle, as shown in Figure 8.9, another
case in which the back-to-front order does not exist. And most importantly, sort-
ing the primitives by depth is slow, especially for large scenes, and disturbs the
efficient flow of data that makes object-order rendering so fast. Figure 8.10 shows
the result of this process when the objects are not sorted by depth.
Figure 8.10. The result of drawing two spheres of identical size using the minimal
pipeline. The sphere that appears smaller is farther away but is drawn last, so it
incorrectly overwrites the nearer one.
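For completeness, a sketch of the back-to-front ordering just described, assuming
each triangle can report its camera-space vertex depths and that a draw routine
overwrites the framebuffer (both callbacks are placeholders; as noted above, no
single per-triangle sort key can handle intersecting triangles or occlusion cycles):

def painters_draw(triangles, camera_z, draw):
    # camera_z(tri) returns the camera-space z of each vertex; with the camera
    # looking down -z, more negative means farther away. draw(tri) rasterizes
    # one triangle, overwriting whatever is already in the framebuffer.
    ordered = sorted(triangles, key=lambda tri: min(camera_z(tri)))
    for tri in ordered:                   # farthest first, nearer ones paint over
        draw(tri)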
8.2.3 Using a z-Buffer for Hidden Surfaces
In practice the painter’s algorithm is rarely used; instead a simple and effective
hidden surface removal algorithm known as the z-buffer algorithm is used. The
method is very simple: at each pixel we keep track of the distance to the closest
surface that has been drawn so far, and we throw away fragments that are farther
away than that distance. The closest distance is stored by allocating an extra value
for each pixel, in addition to the red, green, and blue color values, which is known
as the depth, or z-value. The depth buffer, or z-buffer, is the name for the grid of
depth values.
The z-buffer algorithm is implemented in the fragment blending phase, by
comparing the depth of each fragment with the current value stored in the z-buffer.
If the fragment’s depth is closer, both its color and its depth value overwrite the
values currently in the color and depth buffers. If the fragment’s depth is farther
away, it is discarded. (Of course there can be ties in the depth test, in which case the
order may well matter.) To ensure that the first fragment will pass the depth test, the
z-buffer is initialized to the maximum depth (the depth of the far plane). Irrespective
of the order in which surfaces are drawn, the same fragment will win the depth
test, and the image will be the same.
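A sketch of this blending rule, assuming fragments arrive with integer pixel
coordinates, an interpolated color, and an interpolated depth, with larger depth
meaning farther from the eye (the buffer layout and names are ours):

def make_buffers(width, height, far_depth):
    color = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    # Every depth starts at the far plane so the first fragment at each
    # pixel passes the test.
    depth = [[far_depth] * width for _ in range(height)]
    return color, depth

def blend_fragment(color, depth, frag):
    # frag is (x, y, rgb, z); smaller z is closer to the eye.
    x, y, rgb, z = frag
    if z < depth[y][x]:                   # closer than anything drawn so far
        depth[y][x] = z
        color[y][x] = rgb
    # otherwise the fragment is discarded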
The z-buffer algorithm requires each fragment to carry a depth. This is done
simply by interpolating the z-coordinate as a vertex attribute, in the same way that
color or other attributes are interpolated.
The z-buffer is such a simple and practical way to deal with hidden surfaces in
object-order rendering that it is by far the dominant approach. It is much simpler
than geometric methods that cut surfaces into pieces that can be sorted by depth,
because it avoids solving any problems that don’t need to be solved. The depth
order only needs to be determined at the locations of the pixels, and that is all
that the z-buffer does. It is universally supported by hardware graphics pipelines
and is also the most commonly used method for software pipelines. Figure 8.11
shows an example result.
Figure 8.11. The result of drawing the same two spheres using the z-buffer.
Precision Issues
In practice, the z-values stored in the buffer are non-negative integers. This is
preferable to true floats because the fast memory needed for the z-buffer is some-
what expensive and is worth keeping to a minimum.
The use of integers can cause some precision problems. If we use an integer
range having B values {0, 1, . . . , B − 1}, we can map 0 to the near clipping plane