The easiest way to implement box-filter antialiasing is by supersampling: create
images at very high resolutions and then downsample. For example, if our
goal is a 256 × 256 pixel image of a line with width 1.2 pixels, we could rasterize
a rectangle version of the line with width 4.8 pixels on a 1024 × 1024 screen,
and then average 4 × 4 groups of pixels to get the colors for each of the 256 ×
256 pixels in the “shrunken” image. This is an approximation of the actual box-
filtered image, but works well when objects are not extremely small relative to the
distance between pixels.
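Assuming a NumPy-style image array, the 4 × 4 averaging step can be sketched as follows (the function and variable names here are illustrative, not from any particular library):

```python
import numpy as np

def downsample_box(image, factor):
    """Box-filter downsample: average each factor x factor block of pixels."""
    h, w = image.shape[:2]
    assert h % factor == 0 and w % factor == 0
    # Reshape so each factor x factor block gets its own axes, then average them.
    blocks = image.reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3))

# A 1024 x 1024 supersampled rendering averaged down to 256 x 256.
hi_res = np.random.rand(1024, 1024, 3)
lo_res = downsample_box(hi_res, 4)
```

Because every output pixel is an unweighted average of a 4 × 4 block, this is exactly the box filter described above, applied at a 4× sampling rate.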
Supersampling is quite expensive, however. Because the sharp edges that cause
aliasing normally arise at the boundaries of primitives, rather than from
sudden variations in shading within a primitive, a widely used optimization is
to sample visibility at a higher rate than shading. If information about coverage
and depth is stored for several points within each pixel, very good antialiasing
can be achieved even if only one color is computed. In systems like RenderMan
that use per-vertex shading, this is achieved by rasterizing at high resolution: it is
inexpensive to do so because shading is simply interpolated to produce colors for
the many fragments, or visibility samples. In systems with per-fragment shading,
such as hardware pipelines, multisample antialiasing is achieved by storing for
each fragment a single color plus a coverage mask and a set of depth values.
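A toy sketch of the resolve step for such a scheme, assuming four visibility samples per pixel and a per-fragment coverage mask (the names and the simplified data layout are ours; real hardware also resolves depth per sample against other fragments):

```python
SAMPLES = 4  # assumed number of visibility samples per pixel

def resolve_pixel(color, coverage, background):
    """Blend the single shaded color by the fraction of covered samples."""
    frac = sum(coverage) / SAMPLES
    return tuple(frac * c + (1 - frac) * b for c, b in zip(color, background))

# One red fragment covering 3 of the 4 samples, over a black background.
pixel = resolve_pixel((1.0, 0.0, 0.0), [True, True, True, False], (0.0, 0.0, 0.0))
# pixel is (0.75, 0.0, 0.0): a partially covered edge pixel.
```

Only one color is shaded per fragment, yet the edge still blends smoothly because coverage was sampled four times.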
8.4 Culling Primitives for Efficiency
The strength of object-order rendering, that it requires a single pass over all the
geometry in the scene, is also a weakness for complex scenes. For instance, in a
model of an entire city, only a few buildings are likely to be visible at any given
time. A correct image can be obtained by drawing all the primitives in the scene,
but a great deal of effort will be wasted processing geometry that is behind the
visible buildings, or behind the viewer, and therefore doesn’t contribute to the
final image.
Identifying and throwing away invisible geometry to save the time that would
be spent processing it is known as culling. Three commonly implemented culling
strategies (often used in tandem) are:
• view volume culling—the removal of geometry that is outside the view
volume;
• occlusion culling—the removal of geometry that may be within the view
volume but is obscured, or occluded, by other geometry closer to the
camera;
• backface culling—the removal of primitives facing away from the camera.
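As an illustration of the third strategy, a triangle with counterclockwise winding faces away from the camera when its normal points away from the eye; a minimal sketch (the helper names are illustrative):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_backfacing(v0, v1, v2, eye):
    """True if a counterclockwise-wound triangle faces away from the eye."""
    normal = cross(sub(v1, v0), sub(v2, v0))
    # If the normal and the eye-to-triangle direction agree, the triangle
    # faces away from the camera and can be discarded.
    return dot(normal, sub(v0, eye)) > 0
```

For closed opaque surfaces this test alone typically discards roughly half of all triangles before rasterization.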