Figure 8.12. A z-buffer rasterizing two triangles in each of two possible orders. The first
triangle is fully rasterized. The second triangle has every pixel computed, but for three of the
pixels the depth-contest is lost, and those pixels are not drawn. The final image is the same
regardless.
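To make the "depth contest" concrete, here is a minimal sketch of the per-pixel z-buffer test (illustrative only, not code from the book; the buffer names and the convention that smaller z means closer are assumptions):

import numpy as np

def write_fragment(color_buf, z_buf, x, y, z, color):
    # The fragment is drawn only if it is closer than the depth already
    # stored at (x, y); losing fragments are discarded, so the final
    # image is the same regardless of the order the triangles arrive in.
    if z < z_buf[y, x]:
        z_buf[y, x] = z
        color_buf[y, x] = color

# Tiny 4x4 buffers: colors start black, depths start at "infinity".
color_buf = np.zeros((4, 4, 3))
z_buf = np.full((4, 4), np.inf)
write_fragment(color_buf, z_buf, 1, 2, 0.5, (1.0, 0.0, 0.0))
write_fragment(color_buf, z_buf, 1, 2, 0.8, (0.0, 1.0, 0.0))  # loses the contest; red stays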
z = n and B − 1 to the far clipping plane z = f. Note that, for this discussion, we assume z, n, and f are positive. This gives the same results as the negative case, but the details of the argument are easier to follow. We send each z-value to a “bucket” with depth Δz = (f − n)/B. We would not use the integer z-buffer if memory were not at a premium, so it is useful to make B as small as possible.
If we allocate b bits to store the z-value, then B = 2^b. We need enough bits to make sure any triangle in front of another triangle will have its depth mapped to distinct depth bins.
For example, if you are rendering a scene where triangles have a separation of at least one meter, then Δz < 1 should yield images without artifacts. There are two ways to make Δz smaller: move n and f closer together or increase b. If b is fixed, as it may be in APIs or on particular hardware platforms, adjusting n and f is the only option.
The precision of z-buffers must be handled with great care when perspective
images are created. The value Δz above is used after the perspective divide.
Recall from Section 7.3 that the result of the perspective divide is
z = n + f − fn/z_w.
The actual bin depth is related to z_w, the world depth, rather than z, the post-perspective-divide depth. We can approximate the bin size by differentiating both sides:

Δz ≈ fn Δz_w / z_w^2.
Bin sizes vary in depth. The bin size in world space is

Δz_w ≈ z_w^2 Δz / (fn).
Note that the quantity Δz is as discussed before. The biggest bin will be for z_w = f, where

Δz_w^max ≈ f Δz / n.
Note that choosing n = 0, a natural choice if we don’t want to lose objects right in front of the eye, will result in an infinitely large bin, a very bad condition. To make Δz_w^max as small as possible, we want to minimize f and maximize n. Thus, it is always important to choose n and f carefully.
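As a rough numeric sketch (not from the book; the plane distances and bit depth below are arbitrary examples), the worst-case world-space bin size f Δz / n can be computed directly, which makes the benefit of pushing the near plane out easy to see:

def worst_case_bin(n, f, b):
    # Delta z is the post-divide bin size (f - n) / 2^b; at z_w = f the
    # world-space bin size is about f * Delta z / n.
    dz = (f - n) / (2 ** b)
    return f * dz / n

# 24-bit z-buffer, far plane at 1000 units:
print(worst_case_bin(n=0.1,  f=1000.0, b=24))   # about 0.6: risky near the far plane
print(worst_case_bin(n=10.0, f=1000.0, b=24))   # about 0.006: a larger n helps a lot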
8.2.4 Per-vertex Shading
So far the application sending triangles into the pipeline is responsible for setting
the color; the rasterizer just interpolates the colors and they are written directly
into the output image. For some applications this is sufficient, but in many cases
we want 3D objects to be drawn with shading, using the same illumination equa-
tions that we used for image-order rendering in Chapter 4. Recall that these equa-
tions require a light direction, an eye direction, and a surface normal to compute
the color of a surface.
One way to handle shading computations is to perform them in the vertex
stage. The application provides normal vectors at the vertices, and the positions
and colors of the lights are provided separately (they don’t vary across the surface,
so they don’t need to be specified for each vertex). For each vertex, the direction
to the viewer and the direction to each light are computed based on the positions
of the camera, the lights, and the vertex. The desired shading equation is evaluated
to compute a color, which is then passed to the rasterizer as the vertex color. Per-vertex shading is sometimes called Gouraud shading.
One decision to be made is the coordinate system in which shading com-
putations are done. World space or eye space are good choices. It is impor-
tant to choose a coordinate system that is orthonormal when viewed in world
space, because shading equations depend on angles between vectors, which are
not preserved by operations like nonuniform scale that are often used in the mod-
eling transformation, or perspective projection, often used in the projection to the
canonical view volume. Shading in eye space has the advantage that we don’t need to keep track of the camera position: in perspective projection the camera is always at the origin in eye space, and in orthographic projection the view direction is always +z.
Per-vertex shading has the disadvantage that it cannot produce any details in the shading that are smaller than the primitives used to draw the surface, because it only computes shading once for each vertex and never in between vertices. For instance, in a room with a floor that is drawn using two large triangles and illuminated by a light source in the middle of the room, shading will be evaluated only at the corners of the room, and the interpolated value will likely be much too dark in the center. Also, curved surfaces that are shaded with specular highlights must be drawn using primitives small enough that the highlights can be resolved. Figure 8.13 shows our two spheres drawn with per-vertex shading.

Figure 8.13. Two spheres drawn using per-vertex (Gouraud) shading. Because the triangles are large, interpolation artifacts are visible.
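As a sketch of the idea (not the book’s code; the single point light and the Lambertian model are assumptions), per-vertex shading evaluates the shading equation once per vertex and hands the resulting colors to the rasterizer for interpolation:

import numpy as np

def shade_vertices(positions, normals, light_pos, light_color, diffuse):
    # Evaluate diffuse (Lambertian) shading once per vertex; the rasterizer
    # then simply interpolates these colors across each triangle.
    colors = []
    for p, n in zip(positions, normals):
        n = n / np.linalg.norm(n)
        to_light = light_pos - p
        to_light = to_light / np.linalg.norm(to_light)
        colors.append(diffuse * light_color * max(0.0, float(n @ to_light)))
    return colors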
8.2.5 Per-fragment Shading
Per-fragment shading is
sometimes called Phong
shading, which is confusing
because the same name
is attached to the Phong
illumination model.
To avoid the interpolation artifacts associated with per-vertex shading, we can
avoid interpolating colors by performing the shading computations after the in-
terpolation, in the fragment stage. In per-fragment shading, the same shading
equations are evaluated, but they are evaluated for each fragment using interpo-
lated vectors, rather than for each vertex using the vectors from the application.
In per-fragment shading the geometric information needed for shading is
passed through the rasterizer as attributes, so the vertex stage must coordinate
with the fragment stage to prepare the data appropriately. One approach is to in-
terpolate the eye-space surface normal and the eye-space vertex position, which
then can be used just as they would in per-vertex shading.
Figure 8.14 shows our two spheres drawn with per-fragment shading.
Figure 8.14. Two spheres
drawn using per-fragment
shading. Because the trian-
gles are large, interpolation
artifacts are visible.
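A corresponding sketch of per-fragment shading (again only illustrative, with the same assumed Lambertian model) interpolates eye-space attributes using the barycentric weights supplied by the rasterizer and evaluates the shading equation at each fragment:

import numpy as np

def shade_fragment(bary, vert_pos, vert_norm, light_pos, light_color, diffuse):
    # bary: barycentric weights of the fragment within its triangle
    # vert_pos, vert_norm: 3x3 arrays of eye-space vertex positions/normals
    p = bary @ vert_pos              # interpolated eye-space position
    n = bary @ vert_norm             # interpolated normal must be renormalized
    n = n / np.linalg.norm(n)
    to_light = light_pos - p
    to_light = to_light / np.linalg.norm(to_light)
    return diffuse * light_color * max(0.0, float(n @ to_light))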
8.2.6 Texture Mapping
Textures (discussed in Chapter 11) are images that are used to add extra detail to
the shading of surfaces that would otherwise look too homogeneous and artificial.
The idea is simple: each time shading is computed, we read one of the values
used in the shading computation—the diffuse color, for instance—from a texture
instead of using the attribute values that are attached to the geometry being ren-
dered. This operation is known as a texture lookup: the shading code specifies a
texture coordinate, a point in the domain of the texture, and the texture-mapping
system finds the value at that point in the texture image and returns it. The texture
value is then used in the shading computation.
The most common way to define texture coordinates is simply to make the
texture coordinate another vertex attribute. Each primitive then knows where it
lives in the texture.
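A minimal nearest-neighbor texture lookup (a sketch, not the book’s implementation; Chapter 11 covers proper filtering) shows how a texture coordinate (u, v) selects a value from the texture image:

import numpy as np

def texture_lookup(texture, u, v):
    # texture: an H x W x 3 array; (u, v) in [0, 1]^2.
    # The returned texel would replace, e.g., the diffuse color in shading.
    h, w = texture.shape[:2]
    i = min(int(v * h), h - 1)   # row from v
    j = min(int(u * w), w - 1)   # column from u
    return texture[i, j]

tex = np.zeros((64, 64, 3))
tex[:32] = (1.0, 0.0, 0.0)          # top half red
print(texture_lookup(tex, 0.5, 0.25))  # samples the red half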
8.2.7 Shading Frequency
The decision about where to place shading computations depends on how fast the
color changes—the scale of the details being computed. Shading with large-scale
features, such as diffuse shading on curved surfaces, can be evaluated fairly infre-
quently and then interpolated: it can be computed with a low shading frequency.
Shading that produces small-scale features, such as sharp highlights or detailed
textures, needs to be evaluated at a high shading frequency. For details that need
to look sharp and crisp in the image, the shading frequency needs to be at least
one shading sample per pixel.
So large-scale effects can safely be computed in the vertex stage, even when
the vertices defining the primitives are many pixels apart. Effects that require a
high shading frequency can also be computed at the vertex stage, as long as the
vertices are close together in the image; alternatively, they can be computed at the
fragment stage when primitives are larger than a pixel.
For example, a hardware pipeline as used in a computer game, generally us-
ing primitives that cover several pixels to ensure high efficiency, normally does
most shading computations per fragment. On the other hand, the PhotoRealistic
RenderMan system does all shading computations per vertex, after first subdivid-
ing, or dicing, all surfaces into small quadrilaterals called micropolygons that are
about the size of pixels. Since the primitives are small, per-vertex shading in this
system achieves a high shading frequency that is suitable for detailed shading.
8.3 Simple Antialiasing
Just as with ray tracing, rasterization will produce jagged lines and triangle edges
if we make an all-or-nothing determination of whether each pixel is inside the
primitive or not. In fact, the set of fragments generated by the simple triangle
rasterization algorithms described in this chapter, sometimes called standard or
aliased rasterization, is exactly the same as the set of pixels that would be mapped
to that triangle by a ray tracer that sends one ray through the center of each pixel.
Also as in ray tracing, the solution is to allow pixels to be partly covered by a
primitive (Crow, 1978). In practice this form of blurring helps visual quality,
especially in animations. This is shown as the top line of Figure 8.15.
There are a number of different approaches to antialiasing in rasterization
applications. Just as with a ray tracer, we can produce an antialiased image by
setting each pixel value to the average color of the image over the square area
belonging to the pixel, an approach known as box filtering. (There are better filters than the box, but a box filter will suffice for all but the most demanding applications.) This means we have to think of all drawable entities as having well-defined areas. For example, the line in Figure 8.15 can be thought of as approximating a one-pixel-wide rectangle.
Figure 8.15. An antialiased and a jaggy line viewed at close range so individual pixels are
visible.
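One simple way to approximate the box filter in practice (a sketch under the assumption that we can evaluate a primitive’s color or coverage at arbitrary points; the grid size is arbitrary) is to average several samples taken inside each pixel’s square:

def box_filtered_pixel(sample_color, px, py, n=4):
    # Average sample_color over an n x n grid of points inside the unit
    # square of pixel (px, py); this approximates the pixel's average color.
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            total += sample_color(x, y)
    return total / (n * n)

# Example: coverage of a half-plane edge x < 10.3 passing through pixel (10, 0).
inside = lambda x, y: 1.0 if x < 10.3 else 0.0
print(box_filtered_pixel(inside, 10, 0))  # 0.25 with a 4x4 grid (true coverage 0.3)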