18 Using Graphics Hardware
Peter Willemsen
Throughout most of this book, the focus has been on the fundamentals underlying
computer graphics rather than on implementation details. This chapter takes a
slightly different route and blends the details of using graphics hardware with the
practical issues associated with programming that hardware.
This chapter, however, is not written to teach you OpenGL, other graphics
APIs, or even the nitty-gritty specifics of graphics hardware programming. The
purpose of this chapter is to introduce the basic concepts and thought processes
that are necessary when writing programs that use graphics hardware.
18.1 What Is Graphics Hardware
Graphics hardware describes the hardware components necessary to quickly ren-
der 3D objects as pixels on your computer’s screen using specialized rasterization-
based hardware architectures. The use of this term is meant to elicit a sense of
the physical components necessary for performing these computations. In other
words, we’re talking about the chipsets, transistors, buses, and processors found
on many current video cards. As we will see in this chapter, current graphics
hardware is very good at processing descriptions of 3D objects and transforming
them into the colored pixels that fill your monitor.
One thing has been certain with graphics hardware: it changes very quickly
with new extensions and features being added continually! One explanation for
the fast pace is the video game industry and its economic momentum. Essentially
Figure 18.1. The basic graphics hardware pipeline consists of stages that transform 3D
data into 2D screen objects ready for rasterizing and coloring by the pixel processing stages.
(Diagram: User Program supplies primitives to Geometry Processing, which produces 2D
screen coordinates for Pixel Processing.)
what this means is that each new graphics card provides better performance and
processing capabilities. As a result, graphics hardware is being used for tasks
that support a much richer use of 3D graphics. For instance, researchers are
using graphics hardware to perform ray tracing (Purcell et al., 2002) and even to
solve the Navier-Stokes equations to simulate fluid flow (Harris, 2004).
Real-Time Graphics: By real-time graphics, we generally mean that the
graphics-related computations are being carried out fast enough that the results
can be viewed immediately. Being able to conduct operations at 60 Hz is
considered real time. Once the time to refresh the display (frame rate) drops
below 15 Hz, the speed is considered more interactive than it is real-time, but
this distinction is not critical. Because the computations need to be fast, the
equations used to render the graphics are often approximations to what could be
done if more time were available.
Most graphics hardware has been built to perform a set of fixed operations
organized as a pipeline designed to push vertices and pixels through different
stages. The fixed functionality of the pipeline ensures that basic coloring, lighting,
and texturing can occur very quickly—often referred to as real-time graphics.
Figure 18.1 illustrates the real-time graphics pipeline. The important things
to note about the pipeline follow:
- The user program, or application, supplies the data to the graphics hardware
in the form of primitives, such as points, lines, or polygons describing the
3D geometry. Images or bitmaps are also supplied for use in texturing
surfaces.
- Geometric primitives are processed on a per-vertex basis and are transformed
from 3D coordinates to 2D screen triangles.
- Screen objects are passed to the pixel processors, rasterized, and then colored
on a per-pixel basis before being output to the frame buffer, and eventually
to the monitor.
18.2 Describing Geometry for the Hardware
As a graphics programmer, you need to be concerned with how the data associ-
ated with your 3D objects is transferred onto the memory cache of the graphics
hardware. Unfortunately (or maybe fortunately), as a programmer you don’t have
complete control over this process. There are a variety of ways to place your
data on the graphics hardware, and each has its own advantages which will be
discussed in this section. Any of the APIs you might use to program your video
card will provide different methods to load data onto the graphics hardware mem-
ory. The examples that follow are presented in pseudocode that is based loosely
on the C function syntax of OpenGL,
TM
but semantically the examples should be
applicable to other graphics APIs.
Primitives: The three primitives (points, lines, and polygons) are the only
primitives available! Even when creating spline-based surfaces, such as NURBS,
the surfaces are tessellated into triangle primitives by the graphics hardware.
Most graphics hardware works with specific sets of geometric primitives. The
primitive types trade primitive complexity for processing speed on the graphics
hardware. Simpler primitives can be processed very fast. The caveat is that
the primitive types need to be general purpose so as to model a wide range of
geometry from very simple to very complex. On typical graphics hardware, the
primitive types are limited to one or more of the following:
- points—single vertices used to represent points or particle systems;
- lines—pairs of vertices used to represent lines, silhouettes, or edge-highlighting;
Point Rendering: Point and line primitives may initially appear to be limited in
use, but researchers have used points to render very complex geometry
(Rusinkiewicz & Levoy, 2000; Dachsbacher et al., 2003).
- polygons—triangles, triangle strips, indexed triangles, indexed triangle
strips, quadrilaterals, general convex polygons, etc., used for describing
triangle meshes, geometric surfaces, and other solid objects, such as spheres,
cones, cubes, or cylinders.
These three primitives form the basic building blocks for most geometry you
will define. (An example of a triangle mesh is shown in Figure 18.2.) Using these
primitives, you can build descriptions of your geometry using one of the graphics
APIs and send the geometry to the graphics hardware for rendering. For instance,
Figure 18.2. How your geometry is organized will affect the performance of your
application. This wireframe depiction of the Little Cottonwood Canyon terrain dataset
shows tens of thousands of triangles organized in a triangle mesh running at real-time
rates. The image is rendered using the VTerrain Project terrain system courtesy of Ben
Discoe.
to transfer the description of a line to the graphics hardware, we might use the
following:
beginLine();
vertex( x1, y1, z1 );
vertex( x2, y2, z2 );
endLine();
In this example, two things occur. First, one of the primitive types is declared and
made active by the beginLine() function call. The line primitive is then made
inactive by the endLine() function call. Second, all vertices declared between
these two functions are copied directly to the graphics card for processing with
the vertex function calls.
A second example creates a set of triangles grouped together in a strip (refer
to Figure 18.3); we could use the following code:
Figure 18.3. A triangle strip composed of five vertices (v0–v4) defining three triangles
(t0–t2).
beginTriangleStrip();
vertex( x0, y0, z0 );
vertex( x1, y1, z1 );
vertex( x2, y2, z2 );
vertex( x3, y3, z3 );
vertex( x4, y4, z4 );
endTriangleStrip();
In this example, the primitive type, TriangleStrip, is made active and the set
of vertices that define the triangle strip are copied to the graphics card memory for
processing. Note that ordering does matter when describing geometry. In the tri-
angle strip example, connectivity between adjacent triangles is embedded within
the ordering of the vertices. Triangle t0 is constructed from vertices (v0, v1, v2),
triangle t1 from vertices (v1, v3, v2), and triangle t2 from vertices (v2, v3, v4).
The key point to learn from these simple examples is that geometry is defined
for rendering on the graphics hardware using a primitive type along with a set of
vertices. The previous examples are simple and push the vertices directly onto
the graphics hardware. However, in practice, you will need to make conscious
decisions about how you will push your data to the graphics hardware. These
issues will be discussed shortly.
As geometry is passed to the graphics hardware, additional data can be
specified for each vertex. This extra data is useful for defining state attributes that
might represent the color of the vertex, the normal direction at the vertex, texture
coordinates at the vertex, or other per-vertex data. For instance, to set the color
and normal state parameters at each vertex of a triangle strip, we might use the
following code:
beginTriangleStrip();
color( r0, g0, b0 ); normal( n0x, n0y, n0z );
vertex( x0, y0, z0 );
color( r1, g1, b1 ); normal( n1x, n1y, n1z );
vertex( x1, y1, z1 );
color( r2, g2, b2 ); normal( n2x, n2y, n2z );
vertex( x2, y2, z2 );
color( r3, g3, b3 ); normal( n3x, n3y, n3z );
vertex( x3, y3, z3 );
color( r4, g4, b4 ); normal( n4x, n4y, n4z );
vertex( x4, y4, z4 );
endTriangleStrip();
Here, the color and normal direction at each vertex are specified just prior to the
vertex being defined. Each vertex in this example has a unique color and normal
direction. The color function sets the active color state using an RGB 3-tuple.
The normal direction state at each vertex is set by the normal function. Both the
color and normal functions affect the current rendering state on the graphics
hardware. Any vertices defined after these state attributes are set will be bound
with those state attributes.
This is a good moment to mention that the graphics hardware maintains a
fairly elaborate set of state parameters that determine how vertices and other com-
ponents are rendered. Some state is bound to vertices, such as color, normal direc-
tion, and texture coordinates, while another state may affect pixel level rendering.
The graphics state at any particular moment describes a large set of internal hard-
ware parameters. This aspect of graphics hardware is important to consider when
you write 3D applications. As you might suspect, making frequent changes to the
graphics state affects performance at least to some extent. However, attempting
to minimize graphics state changes is only one of many areas where thoughtful
programming should be applied. You should attempt to minimize state changes
when you can, but it is unlikely that you can group all of your geometry to com-
pletely eliminate state context switches. One data structure that can help minimize
state changes, especially on static scenes, is the scene graph data structure. Prior
to rendering any geometry, the scene graph can re-organize the geometry and as-
sociated graphics state in an attempt to minimize state changes. Scene graphs are
described in Chapter 12.
color( r, g, b );
normal( nx, ny, nz );
beginTriangleStrip();
vertex( x0, y0, z0 );
vertex( x1, y1, z1 );
vertex( x2, y2, z2 );
vertex( x3, y3, z3 );
vertex( x4, y4, z4 );
endTriangleStrip();