Chapter 9. Emulating OpenGL Fixed Functionality

The programmability of OpenGL opens many new possibilities for never-before-seen rendering effects. Programmable shaders can provide results that are superior to OpenGL fixed functionality, especially in the area of realism. Nevertheless, it can still be instructive to examine how some of OpenGL’s fixed functionality rendering steps could be implemented with OpenGL shaders. While simplistic, these code snippets may be useful as stepping stones to bigger and better things.

This chapter describes OpenGL shader code that mimics the behavior of the OpenGL fixed functionality vertex and fragment processing. The shader code snippets are derived from the Full OpenGL Pipeline and Pixel Pipeline shaders developed by Dave Baldwin for inclusion in the white paper OpenGL 2.0 Shading Language. Further refinement of this shader code occurred for the first edition of this book. These code snippets were then verified and finalized with a tool called ShaderGen that takes a description of OpenGL’s fixed functionality state and automatically generates the equivalent shaders. ShaderGen was implemented by Inderaj Bains and Joshua Doss and is available from the 3Dlabs Web site.

The goal of the shader code in this chapter is to faithfully represent OpenGL fixed functionality. The code examples reference existing OpenGL state wherever possible through built-in variables. In your own shaders, feel free to provide these values as user-defined uniform variables rather than accessing existing OpenGL state. By doing so, you will be prepared to throw off the shackles of the OpenGL state machine and extend your shaders in exciting new ways. But don’t get too enamored with the shaders presented in this chapter. In later chapters of this book, we explore a variety of shaders that provide better results than those discussed here.
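For example, instead of referencing gl_ModelViewProjectionMatrix, a vertex shader might take the matrix as a user-defined uniform. The following is a minimal sketch of this approach; the name MVPMatrix is invented here, and the application would be responsible for loading it (e.g., with glUniformMatrix4fv):

// Sketch: a user-defined uniform in place of built-in OpenGL state.
// MVPMatrix is a hypothetical name; the application must supply it.
uniform mat4 MVPMatrix;

void main()
{
    gl_Position = MVPMatrix * gl_Vertex;
}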

Transformation

The features of the OpenGL Shading Language make it very easy to express transformations between the coordinate spaces defined by OpenGL. We’ve already seen the transformation that will be used by almost every vertex shader. The incoming vertex position must be transformed into clipping coordinates for use by the fixed functionality stages that occur after vertex processing. This is done in one of two ways, either this:

// Transform vertex to clip space
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

or this:

gl_Position = ftransform();

The only difference between these two methods is that the second case is guaranteed to compute the transformed position in exactly the same way as the fixed functionality method. Some implementations may have different hardware paths that result in small differences between the transformed position as computed by the first method and as computed by fixed functionality. This can cause problems in rendering if a multipass algorithm is used to render the same geometry more than once. In this case, the second method is preferred because it produces the same transformed position as the fixed functionality.

OpenGL specifies that light positions are transformed by the modelview matrix when they are provided to OpenGL, which means they are stored in eye coordinates. Because it is often convenient to perform lighting computations in eye space, the incoming vertex position must frequently be transformed into eye coordinates as well, as shown in Listing 9.1.

Example 9.1. Computation of eye coordinate position

vec4 ecPosition;
vec3 ecPosition3;    // in 3 space

// Transform vertex to eye coordinates
if (NeedEyePosition)
{
    ecPosition  = gl_ModelViewMatrix * gl_Vertex;
    ecPosition3 = (vec3(ecPosition)) / ecPosition.w;
}

This snippet of code computes the homogeneous point in eye space (a vec4) as well as the nonhomogeneous point (a vec3). Both values are useful as we shall see.

To perform lighting calculations in eye space, incoming surface normals must also be transformed. A built-in uniform variable is available to access the normal transformation matrix, as shown in Listing 9.2.

Example 9.2. Transformation of normal

normal = gl_NormalMatrix * gl_Normal;

In many cases, the application may not know anything about the characteristics of the surface normals that are being provided. For the lighting computations to work correctly, each incoming normal must be normalized so that it is unit length. For OpenGL fixed functionality, normalization is a mode in OpenGL that we can control by providing the symbolic constant GL_NORMALIZE to glEnable or glDisable. In an OpenGL shader, if normalization is required, we do it as shown in Listing 9.3.

Example 9.3. Normalization of normal

normal = normalize(normal);

Sometimes an application always sends normals that are unit length and uses a modelview matrix that performs only uniform scaling. In this case, rescaling can replace normalization, avoiding the potentially expensive square root operation. If the rescaling factor is supplied by the application through the OpenGL API, the normal can be rescaled as shown in Listing 9.4.

Example 9.4. Normal rescaling

normal = normal * gl_NormalScale;

The rescaling factor is stored as state within OpenGL and can be accessed from within a shader by the built-in uniform variable gl_NormalScale.

Texture coordinates can also be transformed. A texture matrix is defined for each texture coordinate set in OpenGL and can be accessed with the built-in uniform matrix array variable gl_TextureMatrix. Incoming texture coordinates can be transformed in the same manner as vertex positions, as shown in Listing 9.5.

Example 9.5. Texture coordinate transformation

gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;

Light Sources

The lighting computations defined by OpenGL are somewhat involved. Let’s start by defining a function for each of the types of light sources defined by OpenGL: directional lights, point lights, and spotlights. We pass in variables that store the total ambient, diffuse, and specular contributions from all light sources. These must be initialized to 0 before any of the light source computation routines are called.

Directional Lights

A directional light is assumed to be at an infinite distance from the objects being lit. According to this assumption, all light rays from the light source are parallel when they reach the scene. Therefore a single direction vector can be used for every point in the scene. This assumption simplifies the math, so the code to implement a directional light source is simpler and typically runs faster than the code for other types of lights. Because the light source is assumed to be infinitely far away, the direction of maximum highlights is the same for every point in the scene. This direction vector can be computed ahead of time for each light source i and stored in gl_LightSource[i].halfVector. This type of light source is useful for mimicking the effects of a light source like the sun.

The directional light function shown in Listing 9.6 computes the cosine of the angle between the surface normal and the light direction, as well as the cosine of the angle between the surface normal and the half angle between the light direction and the viewing direction. The former value is multiplied by the light’s diffuse color to compute the diffuse component from the light. The latter value is raised to the power indicated by gl_FrontMaterial.shininess before being multiplied by the light’s specular color.

Example 9.6. Directional light source computation

void DirectionalLight(in int i,
                      in vec3 normal,
                      inout vec4 ambient,
                      inout vec4 diffuse,
                      inout vec4 specular)
{
     float nDotVP;         // normal . light direction
     float nDotHV;         // normal . light half vector
     float pf;             // power factor

     nDotVP = max(0.0, dot(normal,
                   normalize(vec3(gl_LightSource[i].position))));
     nDotHV = max(0.0, dot(normal, vec3(gl_LightSource[i].halfVector)));

     if (nDotVP == 0.0)
         pf = 0.0;
     else
         pf = pow(nDotHV, gl_FrontMaterial.shininess);

     ambient  += gl_LightSource[i].ambient;
     diffuse  += gl_LightSource[i].diffuse * nDotVP;
     specular += gl_LightSource[i].specular * pf;
}

A diffuse or specular reflection component can be present only if the angle between the light source direction and the surface normal is in the range [−90°, 90°]. We determine this by examining nDotVP, which is set to the greater of 0 and the cosine of the angle between the light source direction and the surface normal. If this value ends up being 0, the value that determines the amount of specular reflection is set to 0 as well. Our directional light function assumes that the vectors of interest are normalized, so the dot product between two vectors results in the cosine of the angle between them.

Point Lights

Point lights mimic lights that are near or within the scene, such as lamps, ceiling lights, or streetlights. There are two main differences between point lights and directional lights. First, with a point light source, the direction of maximum highlights must be computed at each vertex rather than taken from the precomputed value in gl_LightSource[i].halfVector. Second, light received at the surface is expected to decrease as the point light source gets farther away. This is called ATTENUATION. Each light source has constant, linear, and quadratic attenuation factors that are taken into account when the lighting contribution from a point light is computed.

These differences show up in the first few lines of the point light function (see Listing 9.7). The first step is to compute the vector from the surface to the light position; its length, computed with the length function, is the distance between the two. Next, we normalize VP so that we can use it in a dot product operation to compute a proper cosine value. We then compute the attenuation factor and the direction of maximum highlights as required. The remaining code is the same as for our directional light function except that the ambient, diffuse, and specular terms are multiplied by the attenuation factor.

Example 9.7. Point light source computation

void PointLight(in int i,
                in vec3 eye,
                in vec3 ecPosition3,
                in vec3 normal,
                inout vec4 ambient,
                inout vec4 diffuse,
                inout vec4 specular)
{
    float nDotVP;         // normal . light direction
    float nDotHV;         // normal . light half vector
    float pf;             // power factor
    float attenuation;    // computed attenuation factor
    float d;              // distance from surface to light source
    vec3  VP;             // direction from surface to light position
    vec3  halfVector;     // direction of maximum highlights

    // Compute vector from surface to light position
    VP = vec3(gl_LightSource[i].position) - ecPosition3;

    // Compute distance between surface and light position
    d = length(VP);

    // Normalize the vector from surface to light position
    VP = normalize(VP);

    // Compute attenuation
    attenuation = 1.0 / (gl_LightSource[i].constantAttenuation +
                         gl_LightSource[i].linearAttenuation * d +
                         gl_LightSource[i].quadraticAttenuation * d * d);

    halfVector = normalize(VP + eye);

    nDotVP = max(0.0, dot(normal, VP));
    nDotHV = max(0.0, dot(normal, halfVector));

    if (nDotVP == 0.0)
        pf = 0.0;
    else
        pf = pow(nDotHV, gl_FrontMaterial.shininess);

    ambient += gl_LightSource[i].ambient * attenuation;
    diffuse += gl_LightSource[i].diffuse * nDotVP * attenuation;
    specular += gl_LightSource[i].specular * pf * attenuation;
}

One optimization that we could make is to have two point light functions, one that computes the attenuation factor and one that does not. If the values for the constant, linear, and quadratic attenuation factors are (1, 0, 0) (the default values), we could use the function that does not compute attenuation and get better performance.
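Such a variant might look like the following sketch. It mirrors Listing 9.7 but omits the distance and attenuation computation entirely; the function name PointLightNoAttenuation is invented for this illustration:

void PointLightNoAttenuation(in int i,
                             in vec3 eye,
                             in vec3 ecPosition3,
                             in vec3 normal,
                             inout vec4 ambient,
                             inout vec4 diffuse,
                             inout vec4 specular)
{
    float nDotVP;      // normal . light direction
    float nDotHV;      // normal . light half vector
    float pf;          // power factor
    vec3  VP;          // direction from surface to light position
    vec3  halfVector;  // direction of maximum highlights

    // Compute normalized direction from surface to light position;
    // the distance itself is not needed when attenuation is skipped
    VP = normalize(vec3(gl_LightSource[i].position) - ecPosition3);

    halfVector = normalize(VP + eye);

    nDotVP = max(0.0, dot(normal, VP));
    nDotHV = max(0.0, dot(normal, halfVector));

    pf = (nDotVP == 0.0) ? 0.0 : pow(nDotHV, gl_FrontMaterial.shininess);

    ambient  += gl_LightSource[i].ambient;
    diffuse  += gl_LightSource[i].diffuse * nDotVP;
    specular += gl_LightSource[i].specular * pf;
}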

Spotlights

In stage and cinema, spotlights project a strong beam of light that illuminates a well-defined area. The illuminated area can be further shaped through the use of flaps or shutters on the sides of the light. OpenGL includes light attributes that simulate a simple type of spotlight. Whereas point lights are modeled as sending light equally in all directions, OpenGL models spotlights as light sources that are restricted to producing a cone of light in a particular direction.

The first and last parts of our spotlight function (see Listing 9.8) look the same as our point light function (see Listing 9.7). The differences occur in the middle of the function. A spotlight has a focus direction (gl_LightSource[i].spotDirection), and this direction is dotted with the vector from the light position to the surface (-VP). The resulting cosine value is compared to the precomputed cosine cutoff value (gl_LightSource[i].spotCosCutoff) to determine whether the position on the surface is inside or outside the spotlight’s cone of illumination. If it is outside, the spotlight attenuation is set to 0; otherwise, this value is raised to a power specified by gl_LightSource[i].spotExponent. The resulting spotlight attenuation factor is multiplied by the previously computed attenuation factor to give the overall attenuation factor. The remaining lines of code are the same as they were for point lights.

Example 9.8. Spotlight computation

void SpotLight(in int i,
               in vec3 eye,
               in vec3 ecPosition3,
               in vec3 normal,
               inout vec4 ambient,
               inout vec4 diffuse,
               inout vec4 specular)
{
    float nDotVP;           // normal . light direction
    float nDotHV;           // normal . light half vector
    float pf;               // power factor
    float spotDot;          // cosine of angle between spot direction and -VP
    float spotAttenuation;  // spotlight attenuation factor
    float attenuation;      // computed attenuation factor
    float d;                // distance from surface to light source
    vec3 VP;                // direction from surface to light position
    vec3 halfVector;        // direction of maximum highlights

    // Compute vector from surface to light position
    VP = vec3(gl_LightSource[i].position) - ecPosition3;

    // Compute distance between surface and light position
    d = length(VP);

    // Normalize the vector from surface to light position
    VP = normalize(VP);

    // Compute attenuation
    attenuation = 1.0 / (gl_LightSource[i].constantAttenuation +
                         gl_LightSource[i].linearAttenuation * d +
                         gl_LightSource[i].quadraticAttenuation * d * d);

    // See if point on surface is inside cone of illumination
    spotDot = dot(-VP, normalize(gl_LightSource[i].spotDirection));

    if (spotDot < gl_LightSource[i].spotCosCutoff)
        spotAttenuation = 0.0; // light adds no contribution
    else
        spotAttenuation = pow(spotDot, gl_LightSource[i].spotExponent);

    // Combine the spotlight and distance attenuation.
    attenuation *= spotAttenuation;

    halfVector = normalize(VP + eye);

    nDotVP = max(0.0, dot(normal, VP));
    nDotHV = max(0.0, dot(normal, halfVector));

    if (nDotVP == 0.0)
        pf = 0.0;
    else
        pf = pow(nDotHV, gl_FrontMaterial.shininess);

    ambient  += gl_LightSource[i].ambient * attenuation;
    diffuse  += gl_LightSource[i].diffuse * nDotVP * attenuation;
    specular += gl_LightSource[i].specular * pf * attenuation;
}

Material Properties and Lighting

OpenGL lighting calculations require knowing the viewing direction in the eye coordinate system in order to compute specular reflection terms. By default, the view direction is assumed to be parallel to and in the direction of the -z axis. OpenGL also has a mode that requires the viewing direction to be computed from the origin of the eye coordinate system (local viewer). To compute this, we can transform the incoming vertex into eye space by using the current modelview matrix. The x, y, and z coordinates of this point are divided by the homogeneous coordinate w to get a vec3 value that can be used directly in the lighting calculations. The computation of this eye coordinate position (ecPosition3) was illustrated in Section 9.1. To get a unit vector corresponding to the viewing direction, we normalize and negate the eye space position. Shader code to implement these computations is shown in Listing 9.9.

Example 9.9. Local viewer computation

if (LocalViewer)
    eye = -normalize(ecPosition3);
else
    eye = vec3(0.0, 0.0, 1.0);

With the viewing direction calculated, we can initialize the variables that accumulate the ambient, diffuse, and specular lighting contributions from all the light sources in the scene. We can then call the functions defined in the previous section to compute the contributions from each light source. In the code in Listing 9.10, we assume that all lights with an index less than the constant NumEnabledLights are enabled. Directional lights are distinguished by having a position parameter with a homogeneous (w) coordinate equal to 0 at the time they were provided to OpenGL. (These positions are transformed by the modelview matrix when the light is specified, so the w coordinate remains 0 after transformation if the bottom row of the modelview matrix is the typical (0 0 0 1).) Point lights are distinguished by having a spotlight cutoff angle equal to 180.

Example 9.10. Loop to compute contributions from all enabled light sources

// Clear the light intensity accumulators
amb  = vec4(0.0);
diff = vec4(0.0);
spec = vec4(0.0);

// Loop through enabled lights, compute contribution from each
for (i = 0; i < NumEnabledLights; i++)
{
    if (gl_LightSource[i].position.w == 0.0)
        DirectionalLight(i, normal, amb, diff, spec);
    else if (gl_LightSource[i].spotCutoff == 180.0)
        PointLight(i, eye, ecPosition3, normal, amb, diff, spec);
    else
        SpotLight(i, eye, ecPosition3, normal, amb, diff, spec);
}

One of the changes made to OpenGL in version 1.2 was to add functionality to compute the color at a vertex in two parts: a primary color that contains the combination of the emissive, ambient, and diffuse terms as computed by the usual lighting equations; and a secondary color that contains just the specular term as computed by the usual lighting equations. If this mode is not enabled (the default case), the primary color is computed with the combination of emissive, ambient, diffuse, and specular terms.

Computing the specular contribution separately allows specular highlights to be applied after texturing. Because the specular value is added to the texture-modulated color, the highlights take on the color of the light source rather than the color of the surface. Listing 9.11 shows how to compute the surface color (according to OpenGL rules) with everything but the specular contribution:

Example 9.11. Surface color computation, omitting the specular contribution

color = gl_FrontLightModelProduct.sceneColor +
            amb * gl_FrontMaterial.ambient +
            diff * gl_FrontMaterial.diffuse;

The OpenGL Shading Language conveniently provides a built-in variable (gl_FrontLightModelProduct.sceneColor) that contains the emissive material property for front-facing surfaces plus the product of the ambient material property for front-facing surfaces and the global ambient light for the scene (i.e., gl_FrontMaterial.emission + gl_FrontMaterial.ambient * gl_LightModel.ambient). We add this to the accumulated ambient and diffuse light intensities, each multiplied by the corresponding material reflectance.
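If you provide material and light-model values through your own uniform variables instead of built-in state, this scene color term is easy to compute directly. In the following one-line sketch, Emission, Ambient, and LightModelAmbient are hypothetical user-defined uniforms supplied by the application:

vec4 sceneColor = Emission + Ambient * LightModelAmbient;

Next, we can do the appropriate computations, depending on whether the separate specular color mode is indicated, as shown in Listing 9.12.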

Example 9.12. Final surface color computation

if (SeparateSpecular)
    gl_FrontSecondaryColor = vec4(spec *
                                  gl_FrontMaterial.specular, 1.0);
else
    color += spec * gl_FrontMaterial.specular;
gl_FrontColor = color;

There is no need to perform clamping on the values assigned to gl_FrontSecondaryColor and gl_FrontColor because these are automatically clamped by definition.
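With the separate specular color computed in the vertex shader, a fragment shader can add it back after texture application so that highlights retain the light’s color. This is a minimal sketch, assuming GL_MODULATE-style texturing and a user-supplied sampler uniform named tex0:

// Fragment shader sketch: modulate the interpolated color by the
// texture, then add the interpolated secondary (specular) color.
uniform sampler2D tex0;   // hypothetical user-supplied sampler

void main()
{
    vec4 color = gl_Color * texture2D(tex0, gl_TexCoord[0].xy);
    color.rgb += gl_SecondaryColor.rgb;
    gl_FragColor = color;
}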

Two-Sided Lighting

To mimic OpenGL’s two-sided lighting behavior, you need to invert the surface normal and perform the same computations as defined in the preceding section, using the back-facing material properties. You can probably do it more cleverly than this, but it might look like Listing 9.13. The functions DirectionalLight, PointLight, and SpotLight referenced in this code segment are identical to the functions described in Section 9.2 except that the value gl_BackMaterial.shininess is used in the computations instead of the value gl_FrontMaterial.shininess. (A sketch of one way to avoid duplicating these functions appears at the end of this section.)

Example 9.13. Two-sided lighting computation

normal = -normal;

// Clear the light intensity accumulators
amb  = vec4(0.0);
diff = vec4(0.0);
spec = vec4(0.0);

// Loop through enabled lights, compute contribution from each
for (i = 0; i < NumEnabledLights; i++)
{
    if (gl_LightSource[i].position.w == 0.0)
        DirectionalLight(i, normal, amb, diff, spec);
    else if (gl_LightSource[i].spotCutoff == 180.0)
        PointLight(i, eye, ecPosition3, normal, amb, diff, spec);
    else
        SpotLight(i, eye, ecPosition3, normal, amb, diff, spec);
}

color = gl_BackLightModelProduct.sceneColor +
        amb * gl_BackMaterial.ambient +
        diff * gl_BackMaterial.diffuse;

if (SeparateSpecular)
    gl_BackSecondaryColor = vec4(spec *
                                 gl_BackMaterial.specular, 1.0);
else
    color += spec * gl_BackMaterial.specular;

gl_BackColor = color;

There is no need to perform clamping on the values assigned to gl_BackSecondaryColor and gl_BackColor because these are automatically clamped by definition.
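As noted above, the back-facing light functions differ from the front-facing ones only in which material’s shininess they use. One way to avoid maintaining two nearly identical sets of functions is to pass the shininess in as a parameter. The following is just a sketch of such a refactoring of Listing 9.6, not part of OpenGL’s fixed functionality:

// Sketch: one directional light function shared by front and back
// lighting. The caller passes gl_FrontMaterial.shininess or
// gl_BackMaterial.shininess as appropriate.
void DirectionalLight(in int i,
                      in float shininess,
                      in vec3 normal,
                      inout vec4 ambient,
                      inout vec4 diffuse,
                      inout vec4 specular)
{
    float nDotVP = max(0.0, dot(normal,
                       normalize(vec3(gl_LightSource[i].position))));
    float nDotHV = max(0.0, dot(normal,
                       vec3(gl_LightSource[i].halfVector)));
    float pf = (nDotVP == 0.0) ? 0.0 : pow(nDotHV, shininess);

    ambient  += gl_LightSource[i].ambient;
    diffuse  += gl_LightSource[i].diffuse * nDotVP;
    specular += gl_LightSource[i].specular * pf;
}

The point light and spotlight functions could be parameterized the same way.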

No Lighting

If no enabled lights are in the scene, it is a simple matter to pass the per-vertex color and secondary color on for further processing with the commands shown in Listing 9.14.

Example 9.14. Setting final color values with no lighting

if (SecondaryColor)
    gl_FrontSecondaryColor = gl_SecondaryColor;

// gl_FrontColor will be clamped automatically by OpenGL
gl_FrontColor = gl_Color;

Fog

In OpenGL, DEPTH-CUING and fog effects are controlled by fog parameters. A fog factor is computed according to one of three equations, and this fog factor is used to perform a linear blend between the fog color and the computed color for the fragment. The depth value used in the fog equation can be either the fog coordinate passed in as a standard vertex attribute (gl_FogCoord) or the distance from the eye in eye coordinates. In the latter case, it is usually sufficient to approximate the depth value as the absolute value of the z-coordinate in eye space (i.e., abs(ecPosition.z)). When there is a wide angle of view, this approximation may cause a noticeable artifact (too little fog) near the edges. If this is the case, you could compute z as the true distance from the eye to the fragment with length(ecPosition3). (This method involves a square root computation, so the code may run slower as a result.) The choice of which depth value to use would normally be made in the vertex shader as follows:

if (UseFogCoordinate)
    gl_FogFragCoord = gl_FogCoord;
else
    gl_FogFragCoord = abs(ecPosition.z);
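If the more accurate distance is desired instead, the second assignment could read as follows (again at the cost of a square root, and using the nonhomogeneous eye-space position from Listing 9.1):

gl_FogFragCoord = length(ecPosition3);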

A linear computation (which corresponds to the traditional computer graphics operation of depth-cuing) can be selected in OpenGL with the symbolic constant GL_LINEAR. For this case, the fog factor f is computed with the following equation:

f = (end − z) / (end − start)

start, end, and z are all distances in eye coordinates. start is the distance to the start of the fog effect, end is the distance to the end of the effect, and z is the value stored in gl_FogFragCoord. We can explicitly provide the start and end positions as uniform variables, or we can access the current values in OpenGL state by using the built-in variables gl_Fog.start and gl_Fog.end. The shader code to compute the fog factor with the built-in variables for accessing OpenGL state is shown in Listing 9.15.

Example 9.15. GL_LINEAR fog computation

fog = (gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale;

Because 1.0 / (gl_Fog.end - gl_Fog.start) doesn’t depend on any per-vertex or per-fragment state, this value is precomputed and made available as the built-in variable gl_Fog.scale.

We can achieve a more realistic fog effect with an exponential function. With a negative exponent value, the exponential function will model the diminishing of the original color as a function of distance. A simple exponential fog function can be selected in OpenGL with the symbolic constant GL_EXP. The formula corresponding to this fog function is

f = e^(−density · z)

The z value is computed as described for the previous function, and density is a value that represents the density of the fog. density can be provided as a uniform variable, or the built-in variable gl_Fog.density can be used to obtain the current value from OpenGL state. The larger this value becomes, the “thicker” the fog becomes. For this function to work as intended, density must be greater than or equal to 0.

The OpenGL Shading Language has a built-in exp (base e) function that we can use to perform this calculation. Our OpenGL shader code to compute the preceding equation is shown in Listing 9.16.

Example 9.16. GL_EXP fog computation

fog = exp(-gl_Fog.density * gl_FogFragCoord);

The final fog function defined by OpenGL is selected with the symbolic constant GL_EXP2 and is defined as

f = e^(−(density · z)^2)

This function changes the slope of the exponential decay function by squaring the exponent. The OpenGL shader code to implement it is similar to the previous function (see Listing 9.17).

Example 9.17. GL_EXP2 fog computation

fog = exp(-gl_Fog.density * gl_Fog.density *
           gl_FogFragCoord * gl_FogFragCoord);

OpenGL also requires the final value for the fog factor to be limited to the range [0,1]. We can accomplish this with the statement in Listing 9.18.

Example 9.18. Clamping the fog factor

fog = clamp(fog, 0.0, 1.0);

Any of these three fog functions can be computed in either a vertex shader or a fragment shader. Unless you have very large polygons in your scene, you probably won’t see any difference if the fog factor is computed in the vertex shader and passed to the fragment shader as a varying variable. This will probably also give you better performance overall, so it’s generally the preferred approach. In the fragment shader, when the (almost) final color is computed, the fog factor can be used to compute a linear blend between the fog color and the (almost) final fragment color. The OpenGL shader code in Listing 9.19 does the trick by using the fog color saved as part of current OpenGL state.

Example 9.19. Applying fog to compute final color value

color = mix(vec3(gl_Fog.color), color, fog);
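If the fog factor is computed per-vertex as suggested above, the work splits across the two shader stages. The following is a minimal sketch for the GL_LINEAR case; FogFactor is a user-defined varying whose name is invented here:

// Vertex shader (sketch)
varying float FogFactor;   // user-defined varying

void main()
{
    vec4  ecPosition = gl_ModelViewMatrix * gl_Vertex;
    float fogCoord   = abs(ecPosition.z);   // approximate eye distance

    FogFactor = clamp((gl_Fog.end - fogCoord) * gl_Fog.scale, 0.0, 1.0);

    gl_FrontColor = gl_Color;
    gl_Position   = ftransform();
}

// Fragment shader (sketch)
varying float FogFactor;

void main()
{
    vec3 color = mix(vec3(gl_Fog.color), gl_Color.rgb, FogFactor);
    gl_FragColor = vec4(color, gl_Color.a);
}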

The code presented in this section achieves the same results as OpenGL’s fixed functionality. But with programmability, you are free to use a completely different approach to compute fog effects.

Texture Coordinate Generation

OpenGL can be set up to compute texture coordinates automatically, based only on the incoming vertex positions. Five methods are defined, and each can be useful for certain purposes. The texture generation mode specified by GL_OBJECT_LINEAR is useful for cases in which a texture is to remain fixed to a geometric model, such as in a terrain modeling application. GL_EYE_LINEAR is useful for producing dynamic contour lines on an object. Examples of this usage include a scientist studying isosurfaces or a geologist interpreting seismic data. GL_SPHERE_MAP can generate texture coordinates for simple environment mapping. GL_REFLECTION_MAP and GL_NORMAL_MAP can work in conjunction with cube map textures. GL_REFLECTION_MAP passes the reflection vector as the texture coordinate. GL_NORMAL_MAP simply passes the computed eye space normal as the texture coordinate.

A function that generates sphere map coordinates according to the OpenGL specification is shown in Listing 9.20.

Example 9.20. GL_SPHERE_MAP computation

vec2 SphereMap(in vec3 ecPosition3, in vec3 normal)
{
   float m;
   vec3 r, u;
   u = normalize(ecPosition3);
   r = reflect(u, normal);
   m = 2.0 * sqrt(r.x * r.x + r.y * r.y + (r.z + 1.0) * (r.z + 1.0));
   return vec2(r.x / m + 0.5, r.y / m + 0.5);
}

A function that generates reflection map coordinates according to the OpenGL specification looks almost identical to the function shown in Listing 9.20. The difference is that it returns the reflection vector as its result (see Listing 9.21).

Example 9.21. GL_REFLECTION_MAP computation

vec3 ReflectionMap(in vec3 ecPosition3, in vec3 normal)
{
   vec3 u = normalize(ecPosition3);
   return reflect(u, normal);
}

Listing 9.22 shows the code for selecting between the five texture generation methods and computing the appropriate texture coordinate values.

Example 9.22. Texture coordinate generation computation

// Compute sphere map coordinates if needed
if (TexGenSphere)
    sphereMap = SphereMap(ecPosition3, normal);

// Compute reflection map coordinates if needed
if (TexGenReflection)
    reflection = ReflectionMap(ecPosition3, normal);

// Compute texture coordinate for each enabled texture unit
for (i = 0; i < NumEnabledTextureUnits; i++)
{
    if (TexGenObject)
    {
        gl_TexCoord[i].s = dot(gl_Vertex, gl_ObjectPlaneS[i]);
        gl_TexCoord[i].t = dot(gl_Vertex, gl_ObjectPlaneT[i]);
        gl_TexCoord[i].p = dot(gl_Vertex, gl_ObjectPlaneR[i]);
        gl_TexCoord[i].q = dot(gl_Vertex, gl_ObjectPlaneQ[i]);
    }

    if (TexGenEye)
    {
        gl_TexCoord[i].s = dot(ecPosition, gl_EyePlaneS[i]);
        gl_TexCoord[i].t = dot(ecPosition, gl_EyePlaneT[i]);
        gl_TexCoord[i].p = dot(ecPosition, gl_EyePlaneR[i]);
        gl_TexCoord[i].q = dot(ecPosition, gl_EyePlaneQ[i]);
    }

    if (TexGenSphere)
        gl_TexCoord[i] = vec4(sphereMap, 0.0, 1.0);

    if (TexGenReflection)
        gl_TexCoord[i] = vec4(reflection, 1.0);

    if (TexGenNormal)
        gl_TexCoord[i] = vec4(normal, 1.0);
}

In this code, we assume that each texture unit with an index less than NumEnabledTextureUnits is enabled. If this value is 0, the whole loop is skipped. Otherwise, each texture coordinate that is needed is computed in the loop.

Because the sphere map and reflection computations do not depend on any texture unit state, they can be performed once and the results used for all texture units. For the GL_OBJECT_LINEAR and GL_EYE_LINEAR methods, there is a plane equation for each component of each set of texture coordinates. For the former case, we generate the components of gl_TexCoord[i] by taking the dot product of the plane equation coefficients for each component with the incoming vertex position. For the latter case, we compute the components of gl_TexCoord[i] by taking the dot product of the plane equation coefficients with the eye coordinate position of the vertex. Depending on what type of texture access is done during fragment processing, it may not be necessary to compute the t, p, or q texture component,[1] so these computations could be eliminated.

User Clipping

To take advantage of OpenGL’s user clipping (which remains as fixed functionality between vertex processing and fragment processing in programmable OpenGL), a vertex shader must transform the incoming vertex position into the same coordinate space as that in which the user clip planes are stored. The usual case is that the user clip planes are stored in eye space coordinates, so the OpenGL shader code shown in Listing 9.23 can provide the transformed vertex position.

Example 9.23. User-clipping computation

gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;

Texture Application

The built-in texture functions read values from texture memory, and the values read can be used in a variety of ways. OpenGL fixed functionality includes support for texture application formulas enabled with the symbolic constants GL_REPLACE, GL_MODULATE, GL_DECAL, GL_BLEND, and GL_ADD. These modes operate differently, depending on the format of the texture being accessed. The following code illustrates the case in which an RGBA texture is accessed with the sampler tex0. The variable color is initialized to gl_Color and then modified as needed so that it contains the color value that results from texture application.

GL_REPLACE is the simplest texture application mode of all. It simply replaces the current fragment color with the value read from texture memory. See Listing 9.24.

Example 9.24. GL_REPLACE computation

color = texture2D(tex0, gl_TexCoord[0].xy);

GL_MODULATE causes the incoming fragment color to be multiplied by the value retrieved from texture memory. This is a good texture function to use if lighting is computed before texturing (e.g., the vertex shader performs the lighting computation, and the fragment shader does the texturing). White can be used as the base color for an object rendered with this technique, and the texture then provides the diffuse color. This technique is illustrated with the OpenGL shader code in Listing 9.25.

Example 9.25. GL_MODULATE computation

color *= texture2D(tex0, gl_TexCoord[0].xy);

GL_DECAL is useful for applying an opaque image to a portion of an object. For instance, you might want to apply a number and company logos to the surfaces of a race car or tattoos to the skin of a character in a game. When an RGBA texture is accessed, the texel’s alpha value controls a linear blend between the incoming fragment’s RGB value and the texture’s RGB value. The incoming fragment’s alpha value is used as is. The code for implementing this mode is in Listing 9.26.

Example 9.26. GL_DECAL computation

vec4 texture = texture2D(tex0, gl_TexCoord[0].xy);
vec3 col = mix(color.rgb, texture.rgb, texture.a);
color = vec4(col, color.a);

GL_BLEND is the only texture application mode that takes the current texture environment color into account. The RGB values read from the texture control a per-component blend between the RGB values of the incoming fragment and the texture environment color. We compute the new alpha value by multiplying the alpha of the incoming fragment by the alpha read from the texture. The OpenGL shader code is shown in Listing 9.27.

Example 9.27. GL_BLEND computation

vec4 texture = texture2D(tex0, gl_TexCoord[0].xy);
vec3 col = mix(color.rgb, gl_TextureEnvColor[0].rgb, texture.rgb);
color = vec4(col, color.a * texture.a);

GL_ADD computes the sum of the incoming fragment color and the value read from the texture. The two alpha values are multiplied together to compute the new alpha value. This is the only traditional texture application mode for which the resulting values can exceed the range [0,1], so we clamp the final result (see Listing 9.28).

Example 9.28. GL_ADD computation

vec4 texture = texture2D(tex0, gl_TexCoord[0].xy);
color.rgb += texture.rgb;
color.a   *= texture.a;
color = clamp(color, 0.0, 1.0);

The texture-combine environment mode that was added in OpenGL 1.3 and extended in OpenGL 1.4 defines a large number of additional simple ways to perform texture application. A variety of new formulas, source values, and operands were defined. The mapping of these additional modes into OpenGL shader code is straightforward but tiresome, so it is omitted here.
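As a single hedged illustration, one common combiner setup, GL_INTERPOLATE with the sources GL_TEXTURE and GL_PREVIOUS and the texture’s alpha as the blend factor, might map to shader code like this (tex0 and color are used as in the previous listings):

// Sketch of one GL_COMBINE case: GL_INTERPOLATE computes
// Arg0 * Arg2 + Arg1 * (1 - Arg2), which is a mix operation
vec4 texel = texture2D(tex0, gl_TexCoord[0].xy);
color.rgb  = mix(color.rgb, texel.rgb, texel.a);

The other combiner functions, sources, and operands translate in a similarly mechanical way.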

Summary

The rendering formulas specified by OpenGL have been reasonable ones to implement in fixed functionality hardware for the past decade or so, but they are not necessarily the best ones to use in your shaders. We look at better-performing and more realistic shaders for lighting and reflection in subsequent chapters. Still, it can be instructive to see how these formulas can be expressed in shaders written in the OpenGL Shading Language. The shader examples presented in this chapter demonstrate the expression of these fixed functionality rendering formulas, but they should not be considered optimal implementations. Take the ideas and the shading code illustrated in this chapter and adapt them to your own needs.

Further Information

3Dlabs has made available a nifty tool for comparing fixed functionality behavior with equivalent shaders. With this application, called ShaderGen, you can set up OpenGL state and view fixed functionality behavior, and then, with a single mouse click, cause the application to automatically generate equivalent GLSL shaders. You can then examine, edit, compile, and link the generated shaders. You can easily switch between fixed functionality mode and programmable shader mode and compare results. Through the graphical user interface, you can also modify the state that affects rendering. Full source code for this application is also available.

The OpenGL Programming Guide, Fifth Edition, by the OpenGL Architecture Review Board, Woo, Neider, Davis, and Shreiner (2005), contains more complete descriptions of the various formulas presented in this chapter. The functionality is defined in the OpenGL specification, The OpenGL Graphics System: A Specification (Version 2.0), by Mark Segal and Kurt Akeley, edited by Jon Leech and Pat Brown (2004). Basic graphics concepts like transformation, lighting, fog, and texturing are also covered in standard graphics texts such as Introduction to Computer Graphics by Foley, van Dam, et al. (1994).

Real-Time Rendering, by Akenine-Möller and Haines (2002), also contains good descriptions of these basic topics.

  1. 3Dlabs developer Web site. http://developer.3dlabs.com

  2. Akenine-Möller, Tomas, E. Haines, Real-Time Rendering, Second Edition, A K Peters, Ltd., Natick, Massachusetts, 2002. http://www.realtimerendering.com

  3. Baldwin, Dave, OpenGL 2.0 Shading Language White Paper, Version 1.0, 3Dlabs, October, 2001.

  4. Foley, J.D., A. van Dam, S.K. Feiner, J.H. Hughes, and R.L. Philips, Introduction to Computer Graphics, Addison-Wesley, Reading, Massachusetts, 1994.

  5. Foley, J.D., A. van Dam, S.K. Feiner, and J.H. Hughes, Computer Graphics: Principles and Practice, Second Edition in C, Addison-Wesley, Reading, Massachusetts, 1996.

  6. OpenGL Architecture Review Board, Dave Shreiner, J. Neider, T. Davis, and M. Woo, OpenGL Programming Guide, Fifth Edition: The Official Guide to Learning OpenGL, Version 2, Addison-Wesley, Reading, Massachusetts, 2005.

  7. OpenGL Architecture Review Board, OpenGL Reference Manual, Fourth Edition: The Official Reference to OpenGL, Version 1.4, Editor: Dave Shreiner, Addison-Wesley, Reading, Massachusetts, 2004.

  8. Segal, Mark, and Kurt Akeley, The OpenGL Graphics System: A Specification (Version 2.0), Editor (v1.1): Chris Frazier, (v1.2–1.5): Jon Leech, (v2.0): Jon Leech and Pat Brown, Sept. 2004. http://www.opengl.org/documentation/spec.html



[1] For historical reasons, the OpenGL texture coordinate components are named s, t, r, and q. Because of the desire to have single-letter, component-selection names in the OpenGL Shading Language, components for textures are named s, t, p, and q. This lets us avoid using r, which is needed for selecting color components as r, g, b, and a.
