460 18. Using Graphics Hardware
on the CPU and each vertex would have been re-sent to the graphics hardware
on each iteration of your application. The ability to perform computations on the
vertices already stored in the graphics hardware memory is a big performance
win.
One of the simplest vertex shaders transforms a vertex into clip coordinates
and assigns the front-facing color to the color attribute associated with the vertex.
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
}
In this example, gl_ModelViewProjectionMatrix is a built-in uniform
variable supplied by the GLSL run-time environment. The variables gl_Vertex
and gl_Color are built-in vertex attributes; the special output variables,
gl_Position and gl_FrontColor, are used by the vertex shader to set the
transformed position and the vertex color.
A more interesting vertex shader that implements the surface-shading equations
developed in Chapter 10 illustrates the effect of per-vertex shading using the
Phong shading algorithm.
void main(void)
{
    vec4 v = gl_ModelViewMatrix * gl_Vertex;
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 l = normalize(gl_LightSource[0].position.xyz - v.xyz);
    vec3 h = normalize(l - normalize(v.xyz));
    float p = 16.0;
    vec4 cr = gl_FrontMaterial.diffuse;
    vec4 cl = gl_LightSource[0].diffuse;
    vec4 ca = vec4(0.2, 0.2, 0.2, 1.0);
    vec4 color;
    if (dot(h,n) > 0.0)
        color = cr * (ca + cl * max(0.0, dot(n,l))) +
                cl * pow(dot(h,n), p);
    else
        color = cr * (ca + cl * max(0.0, dot(n,l)));
    gl_FrontColor = color;
    gl_Position = ftransform();
}
18.3. Processing Geometry into Pixels 461
From the code presented in this shader, you should be able to gain a sense of
shader programming and how it resembles C-style programming. Several things
are happening with this shader. First, we create a set of variables to hold the
vectors necessary for computing Phong shading: v, n, l, and h. Note that the
computation in the vertex shader is performed in eye-space. This is done for a va-
riety of reasons, but one reason is that the light-source positions accessible within
a shader have already been transformed into the eye coordinate frame. When you
create shaders, the coordinate system that you decide to use will likely depend
on the types of computations being performed; this is an important factor to con-
sider. Also, note the use of built-in functions and data structures in the example.
In particular, there are several functions used in this shader: normalize, dot,
max, pow, and ftransform. These functions are provided with the shader
language. Additionally, the graphics state associated with materials and light-
ing can be accessed through built-in uniform variables: gl_FrontMaterial
and gl_LightSource[0]. The diffuse component of the material and light
is accessed through the diffuse member of these variables. The color at the
vertex is computed using Equation (10.8) and then stored in the special output
variable gl_FrontColor. The vertex position is transformed using the function
ftransform, which is a convenience function that performs the multiplication
with the modelview and projection matrices.

Figure 18.7. Each sphere is rendered using only a vertex shader that computes Phong
shading. Because the computation is being performed on a per-vertex basis, the Phong
highlight only begins to appear accurate after the amount of geometry used to model the
sphere is increased drastically. (See also Plate VIII.)

Figure 18.7 shows the results
from running this vertex shader with differently tessellated spheres. Because the
computations are performed on a per-vertex basis, a large amount of geometry is
required to produce a Phong highlight on the sphere that appears correct.
18.3.4 Fragment Shader Example
Fragment shaders are written in a manner very similar to vertex shaders, and to
emphasize this, Equation (10.8) will be implemented with a fragment shader.
In order to do this, we first will need to write a vertex shader to set the stage for
the fragment shader.
The vertex shader required for this example is fairly simple, but introduces the
use of varying variables to communicate data to the fragment shader.
varying vec4 v;
varying vec3 n;

void main(void)
{
    v = gl_ModelViewMatrix * gl_Vertex;
    n = normalize(gl_NormalMatrix * gl_Normal);
    gl_Position = ftransform();
}
Recall that varying variables will be set on a per-vertex basis by a vertex shader,
but when they are accessed in a fragment shader, the values will vary (i.e., be
interpolated) across the triangle, or geometric primitive. In this case, the vertex
position in eye-space v and the normal at the vertex n are calculated at each
vertex. The final computation performed by the vertex shader is to transform the
vertex into clip coordinates since the fragment shader will compute the lighting
at each fragment. It is not necessary to set the front-facing color in this vertex
shader.
The fragment shader program computes the lighting at each fragment using
the Phong shading model.
varying vec4 v;
varying vec3 n;

void main(void)
{
    vec3 l = normalize(gl_LightSource[0].position.xyz - v.xyz);
    vec3 h = normalize(l - normalize(v.xyz));
    float p = 16.0;
    vec4 cr = gl_FrontMaterial.diffuse;
    vec4 cl = gl_LightSource[0].diffuse;
    vec4 ca = vec4(0.2, 0.2, 0.2, 1.0);
    vec4 color;
    if (dot(h,n) > 0.0)
        color = cr * (ca + cl * max(0.0, dot(n,l))) +
                cl * pow(dot(h,n), p);
    else
        color = cr * (ca + cl * max(0.0, dot(n,l)));
    gl_FragColor = color;
}
The first thing you should notice is the similarity between the fragment shader
code in this example and the vertex shader code presented in Section 18.3.3.

Figure 18.8. The results of running the fragment shader from Section 18.3.4. Note that
the Phong highlight does appear on the left-most model, which is represented by a single
polygon. In fact, because lighting is calculated at the fragment, rather than at each vertex,
the more coarsely tessellated sphere models also demonstrate appropriate Phong shading.
(See also Plate IX.)

The
main difference is in the use of the varying variables, v and n. In the fragment
shader, the view vectors and normal values are interpolated across the surface of
the model between neighboring vertices. The results are shown in Figure 18.8.
Immediately, you should notice the Phong highlight on the quadrilateral, which
only contains four vertices. Because the shading is being calculated at the frag-
ment level using the Phong equation with the interpolated (i.e., varying) data,
more consistent and accurate Phong shading is produced with far less geometry.
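One subtlety is worth noting: linear interpolation does not preserve unit length, so the varying normal (and view vector) arriving at each fragment is generally slightly shorter than a unit vector. A minimal sketch of the usual fix is to re-normalize inside the fragment shader; the snippet below uses the same varying names as the example above and, for brevity, computes only the diffuse term.

varying vec4 v;
varying vec3 n;

void main(void)
{
    // Re-normalize the interpolated normal; interpolating
    // between unit vectors shortens them slightly.
    vec3 nn = normalize(n);
    vec3 l = normalize(gl_LightSource[0].position.xyz - v.xyz);
    // Diffuse term only; the full Phong computation proceeds
    // as before with nn in place of n.
    gl_FragColor = gl_FrontMaterial.diffuse * max(0.0, dot(nn, l));
}

The error is small for finely tessellated models but becomes visible on coarse geometry, exactly the case where per-fragment shading is most useful.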
18.3.5 General Purpose Computing on the GPU
After studying the vertex and fragment shader examples, you may be wondering
if you can write programs to perform other types of computations on the GPU.
Obviously, the answer is yes, as many problems can be coded to run on the GPU
given the various languages available for programming on the GPU. However, a
few facts are important to remember. Foremost, floating point math processing
on graphics hardware is not currently double-precision. Secondly, you will likely
need to transform your problem into a form that fits within a graphics-related
framework. In other words, you will need to use the graphics APIs to set up the
problem, use texture maps as data rather than traditional memory, and write vertex
and fragment shaders to frame and solve your problem.
Having stated that, the GPU may still be an attractive platform for computa-
tion, since the ratio of transistors that are dedicated to performing computation
is much higher on the GPU than it is on the CPU. In many cases, algorithms
running on GPUs run faster than on a CPU. Furthermore, GPUs perform SIMD
computation, especially at the fragment-processing level. In fact,
it can often help to think about the computation occurring on the fragment pro-
cessor as a highly parallel version of a generic foreach construct, performing
simultaneous operations on a set of elements.
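As a concrete illustration of this foreach view, here is a minimal sketch of a GPGPU-style fragment-shader kernel. It assumes that two input streams of data have been bound as textures under the illustrative uniform names streamA and streamB, and that a screen-aligned quadrilateral has been drawn so that one fragment is generated per data element.

uniform sampler2D streamA;  // first input stream, stored as a texture
uniform sampler2D streamB;  // second input stream

void main(void)
{
    // The body of the "foreach": executed once per fragment,
    // i.e., once per element of the output stream.
    vec4 a = texture2D(streamA, gl_TexCoord[0].st);
    vec4 b = texture2D(streamB, gl_TexCoord[0].st);
    gl_FragColor = a + b;   // element-wise sum written to the output
}

Rendering the quadrilateral once performs the entire element-wise sum in parallel; the result can then be read back or bound as a texture that feeds a subsequent pass.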
There has been a large amount of investigation into performing general-purpose
computation on GPUs, often referred to as GPGPU. Among other things, re-
searchers are using the GPU as a means to simulate the dynamics of clouds (Harris
et al., 2003), implement ray tracers (Purcell et al., 2002; N. A. Carr et al., 2002),
compute radiosity (Coombe et al., 2004), perform 3D segmentation using level
sets (A. E. Lefohn et al., 2003), or solve the Navier-Stokes equations (Harris,
2004).
General purpose computation is often performed on the GPU using multiple
rendering “passes,” and most computation is done using the fragment processor
due to its highly data-parallel setup. Each pass, called a kernel, completes a por-
tion of the computation. Kernels work on streams of data with several kernels