Chapter 13. Shadows

 

I have a little shadow that goes in and out with me,
And what can be the use of him is more than I can see.

 --From "My Shadow" by Robert Louis Stevenson

Like Robert Louis Stevenson, have you ever wondered what shadows are good for? The previous chapter discussed lighting models, and wherever there is light, there are also shadows. Well, maybe this is true in the real world, but it is not always true in computer graphics. We have talked a lot about illumination already and have developed a variety of shaders that simulate light sources. But so far we have not described any shaders that generate shadows. This lack of shadows is part of the classic computer graphics “look” and is one obvious clue that a scene is synthetic rather than real.

What are shadows good for? Shadows help define the spatial relationships between objects in a scene. Shadows tell us when a character’s feet make contact with the floor as he is running. The shape and movement of a bouncing ball’s shadow gives us a great deal of information about the ball’s location at any point in time. Shadows on objects help reveal shape and form. In film and theater, shadows play a vital role in establishing mood. And in computer graphics, shadows help enhance the apparent realism of the scene.

Although computing shadows adds complexity and slows performance, the increase in comprehension and realism is often worth it. In this chapter, we explore some relatively simple techniques that produce shadows and shadow effects.

Ambient Occlusion

The lighting model built into OpenGL that we explored in Chapter 9 has a simple assumption for ambient light, namely, that it is constant over the entire scene. But if you look around your current location, you will probably see that this is almost always a bad assumption as far as generating a realistic image is concerned. Look underneath the table, and you will see an area that is darker than other areas. Look at the stack of books on the table, and you will see that there are dark areas between the books and probably a darker area near the base of the books. Almost every scene in real life contains a variety of complex shadows.

The alternative lighting models described in the previous chapter are an improvement over OpenGL’s fixed function lighting model, but they still fall short. If you look carefully at Color Plate 21D, you can see that the lighting still does not look completely realistic. The area under the chin, the junctions of the legs and the boots, and the creases in the clothing are brightly lit. If this were a real scene, we know that these areas would be darker because they would be obscured by nearby parts of the model. Hence, this object looks fake.

A relatively simple way to add realism to computer-generated imagery is a technique called AMBIENT OCCLUSION. This technique uses a precomputed occlusion (or accessibility) factor to scale the calculated direct diffuse illumination factor. It can be used with a variety of illumination methods, including hemisphere lighting and image-based lighting, as discussed in the preceding chapter. It results in soft shadows that darken parts of the object that are only partially accessible to the overall illumination in a scene.

The basic idea with ambient occlusion is to determine, for each point on an object, how much of the potentially visible hemisphere is actually visible and how much is obstructed by nearby parts of the object. The hemisphere that is considered at each point on the surface is in the direction of the surface normal at that point. For instance, consider the venerable teapot in Figure 13.1. The top of the knob on the teapot’s lid receives illumination from the entire visible hemisphere. But a point partway down inside the teapot’s spout receives illumination only from a very small percentage of the visible hemisphere, in the direction of the small opening at the end of the spout.

Figure 13.1. A 2D representation of the process of computing occlusion (accessibility) factors. A point on the top of the knob on the teapot’s lid has nothing in the way of the visible hemisphere (accessibility = 1.0) while a point inside the spout has its visible hemisphere mostly obscured (accessibility nearer to 0).

For a specific model we can precompute these occlusion factors and save them as per-vertex attribute values. Alternatively, we can create a texture map that stores these values for each point on the surface. One method for computing occlusion factors is to cast a large number of rays from each vertex and keep track of how many intersect another part of the object and how many do not. The percentage of such rays that are unblocked is the accessibility factor. The top of the lid on the teapot has a value of 1 since no other part of the model blocks its view of the visible hemisphere. A point partway down the spout has an accessibility value near 0, because its visible hemisphere is almost completely obscured.
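
The following C sketch illustrates one way a preprocessing tool might compute a per-vertex accessibility factor by casting random rays over the hemisphere. It is a minimal sketch, not the actual code from any tool: rayIntersectsModel stands in for whatever intersection test the tool's ray tracer provides, and the small offset along the normal is a common trick to avoid the ray hitting the surface it starts on.

#include <stdlib.h>
#include <math.h>

/* Hypothetical intersection test supplied by the preprocessing tool's
   ray tracer: returns nonzero if the ray hits any part of the model. */
extern int rayIntersectsModel(const float origin[3], const float dir[3]);

/* Pick a random unit direction on the hemisphere around the normal
   by rejection-sampling the unit sphere. */
static void randomHemisphereDir(const float normal[3], float dir[3])
{
    float len2;
    do {
        dir[0] = 2.0f * rand() / RAND_MAX - 1.0f;
        dir[1] = 2.0f * rand() / RAND_MAX - 1.0f;
        dir[2] = 2.0f * rand() / RAND_MAX - 1.0f;
        len2 = dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2];
    } while (len2 > 1.0f || len2 == 0.0f ||
             dir[0]*normal[0] + dir[1]*normal[1] + dir[2]*normal[2] <= 0.0f);
    len2 = sqrtf(len2);
    dir[0] /= len2; dir[1] /= len2; dir[2] /= len2;
}

/* Accessibility = fraction of hemisphere rays that escape without
   hitting another part of the model. */
float computeAccessibility(const float vertex[3], const float normal[3],
                           int numRays)
{
    int i, j, unblocked = 0;
    for (i = 0; i < numRays; i++) {
        float dir[3], origin[3];
        randomHemisphereDir(normal, dir);
        /* Start slightly above the surface to avoid self-intersection. */
        for (j = 0; j < 3; j++)
            origin[j] = vertex[j] + 0.001f * normal[j];
        if (!rayIntersectsModel(origin, dir))
            unblocked++;
    }
    return (float)unblocked / (float)numRays;
}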

We then multiply the computed accessibility (or occlusion) factor by our computed diffuse reflection value. This has the effect of darkening areas that are obscured by other parts of the model. It is simple enough to use this value in conjunction with our other lighting models. For instance, the hemisphere lighting vertex shader that we developed in Section 12.1 can incorporate ambient occlusion with a few simple changes, as shown in Listing 13.1.

Example 13.1. Vertex shader for hemisphere lighting with ambient occlusion

uniform vec3 LightPosition;
uniform vec3 SkyColor;
uniform vec3 GroundColor;

attribute float Accessibility;

varying vec3  DiffuseColor;

void main()
{
    vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 tnorm      = normalize(gl_NormalMatrix * gl_Normal);
    vec3 lightVec   = normalize(LightPosition - ecPosition);
    float costheta  = dot(tnorm, lightVec);
    float a         = 0.5 + 0.5 * costheta;

    DiffuseColor = mix(GroundColor, SkyColor, a) * Accessibility;

    gl_Position     = ftransform();
}

The only change made to this shader is to pass in the accessibility factor as an attribute variable and use this to attenuate the computed diffuse color value. The results are quite a bit more realistic, as you can see by comparing Color Plate 21D and G. The overall appearance is too dark, but this can be remedied by choosing a mid-gray for the ground color rather than black. Color Plate 21F shows ambient occlusion with a simple diffuse lighting model.

The same thing can be done to the image-based lighting shader that we developed in Section 12.2 (see Listing 13.2) and to the spherical harmonic lighting shader that we developed in Section 12.3 (see Listing 13.3). In the former case, the lighting is done in the fragment shader, so the per-vertex accessibility factor must be passed to the fragment shader as a varying variable. (Alternatively, the accessibility values could have been stored in a texture that could be accessed in the fragment shader.)

Example 13.2. Fragment shader for image-based lighting

uniform vec3  BaseColor;
uniform float SpecularPercent;
uniform float DiffusePercent;

uniform samplerCube SpecularEnvMap;
uniform samplerCube DiffuseEnvMap;

varying vec3  ReflectDir;
varying vec3  Normal;
varying float Accessibility;

void main()
{
    // Look up environment map values in cube maps

    vec3 diffuseColor =
        vec3(textureCube(DiffuseEnvMap,  normalize(Normal)));

    vec3 specularColor =
        vec3(textureCube(SpecularEnvMap, normalize(ReflectDir)));

    // Add lighting to base color and mix

    vec3 color = mix(BaseColor, diffuseColor*BaseColor, DiffusePercent);
    color     *= Accessibility;
    color      = mix(color, specularColor + color, SpecularPercent);

    gl_FragColor = vec4(color, 1.0);
}

Example 13.3. Vertex shader for spherical harmonics lighting

varying vec3    DiffuseColor;
uniform float   ScaleFactor;
attribute float Accessibility;

const float C1 = 0.429043;
const float C2 = 0.511664;
const float C3 = 0.743125;
const float C4 = 0.886227;
const float C5 = 0.247708;

// Constants for Old Town Square lighting
const vec3 L00  = vec3( 0.871297,  0.875222,  0.864470);
const vec3 L1m1 = vec3( 0.175058,  0.245335,  0.312891);
const vec3 L10  = vec3( 0.034675,  0.036107,  0.037362);
const vec3 L11  = vec3(-0.004629, -0.029448, -0.048028);
const vec3 L2m2 = vec3(-0.120535, -0.121160, -0.117507);
const vec3 L2m1 = vec3( 0.003242,  0.003624,  0.007511);
const vec3 L20  = vec3(-0.028667, -0.024926, -0.020998);
const vec3 L21  = vec3(-0.077539, -0.086325, -0.091591);
const vec3 L22  = vec3(-0.161784, -0.191783, -0.219152);

void main()
{
    vec3 tnorm    = normalize(gl_NormalMatrix * gl_Normal);

    DiffuseColor =  C1 * L22 * (tnorm.x * tnorm.x - tnorm.y * tnorm.y) +
                    C3 * L20 * tnorm.z * tnorm.z +
                    C4 * L00 -
                    C5 * L20 +
                    2.0 * C1 * L2m2 * tnorm.x * tnorm.y +
                    2.0 * C1 * L21 * tnorm.x * tnorm.z +
                    2.0 * C1 * L2m1 * tnorm.y * tnorm.z +
                    2.0 * C2 * L11 * tnorm.x +
                    2.0 * C2 * L1m1 * tnorm.y +
                    2.0 * C2 * L10 * tnorm.z;

    DiffuseColor *= ScaleFactor;
    DiffuseColor *= Accessibility;

    gl_Position   = ftransform();
}

Results for ambient occlusion shaders are shown in Color Plate 21C, F, G, H, and I. These images come from a GLSL demo program called deLight, written by Philip Rideout. Philip also wrote the ray tracer that generated per-vertex accessibility information for a number of different models.

Ambient occlusion is a view-independent technique, but the computation of the occlusion factors assumes that the object is rigid. If the object has moving parts, the occlusion factors would need to be recomputed for each position. Work has been done recently on methods for computing the occlusion factors in real time (see Dynamic Ambient Occlusion and Indirect Lighting by Michael Bunnell in the book GPU Gems 2).

During the preprocessing stage, we can also compute an attribute called a BENT NORMAL. We obtain this value by averaging all the nonoccluded rays from a point on a surface. It represents the average direction of the available light arriving at that particular point on the surface. Instead of using the surface normal to access an environment map, we use the bent normal to obtain the color of the light from the appropriate portion of the environment map. We can simulate a soft fill light with a standard point or spotlight by using the bent normal instead of the surface normal and then multiplying the result by the occlusion factor.
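
A sketch of how the bent normal could be precomputed, reusing the hypothetical rayIntersectsModel and randomHemisphereDir helpers from the accessibility sketch earlier in this section: the unblocked ray directions are accumulated and the sum is renormalized.

/* Average the unblocked ray directions to form the bent normal. */
void computeBentNormal(const float vertex[3], const float normal[3],
                       int numRays, float bentNormal[3])
{
    int i, j;
    float len;
    bentNormal[0] = bentNormal[1] = bentNormal[2] = 0.0f;
    for (i = 0; i < numRays; i++) {
        float dir[3], origin[3];
        randomHemisphereDir(normal, dir);
        for (j = 0; j < 3; j++)
            origin[j] = vertex[j] + 0.001f * normal[j];
        if (!rayIntersectsModel(origin, dir))
            for (j = 0; j < 3; j++)
                bentNormal[j] += dir[j];
    }
    len = sqrtf(bentNormal[0]*bentNormal[0] + bentNormal[1]*bentNormal[1] +
                bentNormal[2]*bentNormal[2]);
    if (len > 0.0f)   /* fully occluded points have no usable average */
        for (j = 0; j < 3; j++)
            bentNormal[j] /= len;
}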

Occlusion factors are not only useful for lighting but are also useful for reducing reflections from the environment in areas that are occluded. Hayden Landis of Industrial Light & Magic has described how similar techniques have been used to control reflections in films such as Pearl Harbor and Jurassic Park III. The technique is modified still further to take into account the type of surface that is reflecting the environment. Additional rays used along the main reflection vector provide an average (blurred) reflection. For diffuse surfaces (e.g., rubber), the additional rays are spread out more widely from the main reflection vector so that the reflection appears more diffuse. For more specular surfaces, the additional rays are nearer the main reflection vector, so the reflection is more mirrorlike.

Shadow Maps

Ambient occlusion is quite useful for improving the realism of rigid objects under diffuse lighting conditions, but often a scene will need to incorporate lighting from one or more well-defined light sources. In the real world, we know that strong light sources cause objects to cast well-defined shadows. Producing similar shadows in our computer-generated scenes will make them seem more realistic. How can we accomplish this?

The amount of computer graphics literature that discusses generation of shadows is vast. This is partly because no single shadowing algorithm is optimal for all cases. There are numerous trade-offs in performance, quality, and simplicity. Some algorithms work well for only certain types of shadow-casting objects or shadow-receiving objects. Some algorithms work for certain types of light sources or lighting conditions. Some experimentation and adaptation may be needed to develop a shadow-generation technique that is optimal for a specific set of conditions.

OpenGL and the OpenGL Shading Language include facilities for a generally useful shadowing algorithm called SHADOW MAPPING. In this algorithm, the scene is rendered multiple times—once for each light source that is capable of causing shadows, and once to generate the final scene, including shadows. Each of the per-light passes is rendered from the point of view of the light source. The results are stored in a texture that is called a SHADOW MAP or a DEPTH MAP. This texture is essentially a visible surface representation from the point of view of the light source. Surfaces that are visible from the point of view of this light source are fully illuminated by the light source. Surfaces that are not visible from the point of view of this light source are in shadow. Each of the textures so generated is accessed during the final rendering pass to create the final scene with shadows from one or more light sources. During the final rendering pass, the distance from the fragment to each light is computed and compared to the depth value in the shadow map for that light. If the distance from the fragment to the light is greater than the depth value in the shadow map, the fragment receives no contribution from that light source (i.e., it is in shadow); otherwise, the fragment is subjected to the lighting computation for that particular light source.

Because this algorithm involves an extra rendering pass for each light source, performance is a concern if a large number of shadow-casting lights are in the scene. But for interactive applications, it is quite often the case that shadows from one or two lights add sufficiently to the realism and comprehensibility of a scene. More than that and you may be adding needless complexity. And, just like other algorithms that use textures, shadow mapping is prone to severe aliasing artifacts unless proper care is taken.

The depth comparison can also lead to problems. Since the values being compared were generated in different passes with different transformation matrices, it is possible to have a small difference in the values. Therefore, you must use a small epsilon value in the comparison. You can use the glPolygonOffset command to bias depth values as the shadow map is being created. You must be careful, though, because too much bias can cause a shadow to become detached from the object casting the shadow.

One way to avoid depth precision problems on illuminated faces is to draw only back faces when you are building the shadow map. The depth values recorded for back faces are usually quite different from the depths of the lit front faces tested during the final pass, so there is little chance that precision errors will incorrectly darken those surfaces.
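
The following sketch shows the sort of state an application might set for the shadow map pass, combining the polygon offset bias and the back-face-only approach just described. The specific offset values and the drawScene routine are illustrative assumptions; suitable bias values are scene dependent.

/* Render the scene from the light's point of view into the depth map. */
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);

/* Bias the stored depth values slightly away from the light; too much
   bias detaches the shadow from the object casting it. */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(2.0f, 4.0f);

/* Alternatively (or additionally), draw only back faces so that lit
   front-face depths differ clearly from the values in the map. */
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);

drawScene();   /* hypothetical scene-drawing routine */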

Precision problems are still possible on surfaces facing away from the light. You can avoid these problems by testing the surface normal—if it points away from the light, then the surface is in shadow. There will still be problems when the back and front faces have similar depth values, such as at the edge of the shadow. A carefully weighted combination of normal test plus depth test can provide artifact-free shadows even on gently rounded objects. However, this approach does not handle occluders that aren’t simple closed geometry.

Despite its drawbacks, shadow mapping is still a popular and effective means of generating shadows. A big advantage is that it can be used with arbitrarily complex geometry. It is supported in RenderMan and has been used to generate images for numerous movies and interactive games. OpenGL supports shadow maps (they are called depth component textures in OpenGL) and a full range of depth comparison modes that can be used with them. Shadow mapping can be performed in OpenGL with either fixed functionality or the programmable processing units. The OpenGL Shading Language contains corresponding functions for accessing shadow maps from within a shader (shadow2D, shadow2DProj, and the like).

Application Setup

As mentioned, the application must create a shadow map for each light source by rendering the scene from the point of view of the light source. For objects that are visible from the light’s point of view, the resulting texture map contains depth values representing the distance to the light. (The source code for the example program deLight available from the 3Dlabs Web site illustrates specifically how this is done.)

For the final pass of shadow mapping to work properly, the application must create a matrix that transforms incoming vertex positions into projective texture coordinates for each light source. The vertex shader is responsible for performing this transformation, and the fragment shader uses the interpolated projective coordinates to access the shadow map for each light source. To keep things simple, we look at shaders that deal with just a single light source. (You can use arrays and loops to extend the basic algorithm to support multiple light sources.)

We construct the necessary matrix by concatenating

  • The modeling matrix (M) that transforms modeling coordinates into world coordinates (this is the same modeling matrix that would be used to render the object normally)

  • A view matrix (Vlight) that rotates and translates world coordinates into a coordinate system that has the light source at the origin and is looking at the point (0, 0, 0) in world coordinates

  • A projection matrix (Plight) that defines the frustum for the view from the light source (i.e., field of view, aspect ratio, near and far clipping planes)

  • A scale and bias matrix that takes the clip space coordinates (i.e., values in the range [–1,1]) from the previous step into values in the range [0,1] so that they can be used directly as the index for accessing the shadow map.

The equation for the complete transformation looks like this:

    ShadowMatrix = ScaleBias * Plight * Vlight * M

where ScaleBias is the scale and bias matrix

    | 0.5  0.0  0.0  0.5 |
    | 0.0  0.5  0.0  0.5 |
    | 0.0  0.0  0.5  0.5 |
    | 0.0  0.0  0.0  1.0 |

We can actually use the OpenGL texture generation capabilities to generate shadows with OpenGL fixed functionality. We store the transformation matrix as a texture transformation matrix to produce the proper texture coordinates for use in the texture access operation. By performing this transformation in a vertex shader, we can have shadows and can add any desired programmable functionality as well. Let’s see how this is done in a vertex shader. Philip Rideout developed some shaders for the deLight demo that use shadow mapping. They have been adapted slightly for inclusion in this book.
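
As a concrete illustration, here is roughly how an application might load texture matrix 1 with the concatenation described above by using the fixed-function matrix stack. This is a sketch, not the demo's actual code: the frustum parameters, lightPos, and modelMatrix are placeholder assumptions.

/* Build ScaleBias * Plight * Vlight in texture matrix 1; the modeling
   matrix M is multiplied in as each object is drawn. */
glActiveTexture(GL_TEXTURE1);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();

/* Scale and bias: map clip-space [-1,1] into texture-space [0,1]. */
glTranslatef(0.5f, 0.5f, 0.5f);
glScalef(0.5f, 0.5f, 0.5f);

/* Plight: the frustum for the view from the light (example values). */
gluPerspective(60.0, 1.0, 1.0, 100.0);

/* Vlight: look from the light position toward the world origin. */
gluLookAt(lightPos[0], lightPos[1], lightPos[2],   /* eye */
          0.0, 0.0, 0.0,                           /* center */
          0.0, 1.0, 0.0);                          /* up */

/* M: the object's modeling matrix, applied per object. */
glMultMatrixf(modelMatrix);

glMatrixMode(GL_MODELVIEW);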

Vertex Shader

The OpenGL Shading Language defines that all varying variables are interpolated in a perspective-correct manner. We can use this fact to perform the perspective division that is necessary to prevent objectionable artifacts during animation. To get perspective-correct projective texture coordinates, we need to end up with per-fragment values of s/q, t/q, and r/q. This is analogous to homogeneous clip coordinates where we divide through by the homogeneous coordinate w to end up with x/w, y/w, and z/w. Instead of interpolating the (s, t, r, q) projected texture coordinate directly and doing the division in the fragment shader, we divide the first three components by w in the vertex shader and then interpolate s/w, t/w, and r/w. The net result of the perspective-correct interpolation is then (s/w)/(q/w) = s/q and (t/w)/(q/w) = t/q, which is exactly what we want for projective texturing.

The vertex shader in Listing 13.4 shows how this is done. We use ambient occlusion in this shader, so these values are passed in as vertex attributes through the attribute variable Accessibility. These values attenuate the incoming per-vertex color values after a simple diffuse lighting model has been applied. The alpha value is left unmodified by this process. Using light source 0 as defined by OpenGL state makes it convenient for the application to draw shadows by using either the fixed functionality path or the programmable path. The matrix that transforms modeling coordinates to light source coordinates is stored in texture matrix 1 for the same reason and is accessed through the built-in uniform variable gl_TextureMatrix[1]. This matrix transforms the incoming vertex, and the resulting value has its first three components divided by the fourth component to make the interpolated values turn out correctly, as we have just discussed.

Example 13.4. Vertex shader for generating shadows

attribute float Accessibility;

varying vec4 ShadowCoord;

// Ambient and diffuse scale factors.
const float As = 1.0 / 1.5;
const float Ds = 1.0 / 3.0;

void main()
{
    vec4 ecPosition = gl_ModelViewMatrix * gl_Vertex;
    vec3 ecPosition3 = (vec3(ecPosition)) / ecPosition.w;
    vec3 VP = vec3(gl_LightSource[0].position) - ecPosition3;
    VP = normalize(VP);
    vec3 normal = normalize(gl_NormalMatrix * gl_Normal);
    float diffuse = max(0.0, dot(normal, VP));

    float scale = min(1.0, Accessibility * As + diffuse * Ds);

    vec4 texCoord = gl_TextureMatrix[1] * gl_Vertex;
    ShadowCoord   = texCoord / texCoord.w;

    gl_FrontColor  = vec4(scale * gl_Color.rgb, gl_Color.a);
    gl_Position    = ftransform();
}

Fragment Shader

A simple fragment shader for generating shadows is shown in Listing 13.5. The main function calls a subroutine named lookup to do the shadow map lookup, giving it offsets of 0 in both the x and y directions. These offset values are scaled by the uniform variable Epsilon (which should be on the order of the size of a single texel in the shadow map) and added to the interpolated projective texture coordinate; the result is used to perform a texture access on the shadow map (depth texture) specified by ShadowMap. When shadow2DProj is used to access a texture, the third component of the texture coordinate (ShadowCoord.p) is compared with the depth value stored in the shadow map. The comparison function is the one specified for the texture object indicated by ShadowMap. If the comparison results in a value of true, shadow2DProj returns 1.0; otherwise, it returns 0.0. If shadow2DProj returns 0.0, the lookup function returns a value of 0.75 (shadowed); otherwise, it returns a value of 1.0 (fully illuminated). The value returned by the lookup function is used to attenuate the red, green, and blue components of the fragment's color. Fragments that are fully illuminated are unchanged, while fragments that are shadowed are multiplied by a factor of 0.75 to make them darker.

Example 13.5. Fragment shader for generating shadows

uniform sampler2DShadow ShadowMap;
uniform float Epsilon;

varying vec4 ShadowCoord;

float lookup(float x, float y)
{
    float depth = shadow2DProj(ShadowMap,
                      ShadowCoord + vec4(x, y, 0.0, 0.0) * Epsilon).x;
    return depth != 1.0 ? 0.75 : 1.0;
}

void main()
{
    float shadeFactor = lookup(0.0, 0.0);
    gl_FragColor = vec4(shadeFactor * gl_Color.rgb, gl_Color.a);
}

Chances are that as soon as you execute this shader, you will be disappointed by the aliasing artifacts that appear along the edges of the shadows. We can do something about this, and we can customize the shader for a specific viewing situation to get a more pleasing result. Michael Bunnell and Fabio Pellacini describe a method for doing this in an article called Shadow Map Antialiasing in the book GPU Gems. Philip Rideout implemented this technique in GLSL, as shown in Listing 13.6 and Listing 13.7.

The shader in Listing 13.6 adds a couple of things. The first thing is that the main function assigns a value to Illumination based on a Boolean uniform variable. This shader essentially distinguishes between two types of shadows—those that are generated by the object itself and those that are generated by another object in the scene. The self-shadows are made a little lighter than other cast shadows for aesthetic reasons. The result of this conditional statement is that where the object shadows itself, the shadows are relatively light. And where the object’s shadow falls on the floor, the shadows are darker. (See Color Plate 22.)

The second difference is that the shadow map is sampled four times for each fragment. The purpose of sampling multiple times is to do a better job of determining the shadow boundary. This lets us apply a smoother transition between shadowed and unshadowed regions, thus reducing the jagged edges of the shadow. Note that it would be incorrect to average four depth values read from the shadow map and then perform a single comparison, because this can result in rendering errors. Instead, each sample is compared separately, the result of each comparison selects either Illumination or 1.0 as the return value of lookup, and those four values are then averaged.

Example 13.6. Fragment shader for generating shadows with antialiased edges, using four samples per pixel

uniform sampler2DShadow ShadowMap;
uniform float Epsilon;
uniform bool  SelfShadowed;
uniform float SelfShadowedVal;
uniform float NonSelfShadowedVal;

varying vec3 ShadowCoord;

float Illumination;

float lookup(float x, float y)
{
    float depth = shadow2D(ShadowMap,
                        ShadowCoord + vec3(x, y, 0) * Epsilon).x;
    return depth != 1.0 ? Illumination : 1.0;
}

void main()
{
    // lighten up the self-shadows
    Illumination = SelfShadowed ? SelfShadowedVal : NonSelfShadowedVal;

    float sum = 0.0;

    sum += lookup(-0.5, -0.5);
    sum += lookup( 0.5, -0.5);

    sum += lookup(-0.5, 0.5);
    sum += lookup( 0.5, 0.5);

    gl_FragColor = vec4(sum * 0.25 * gl_Color.rgb, gl_Color.a);
}

This shader can be extended in the obvious way to take even more samples per pixel and thus further improve the quality of the shadow boundaries. However, the more texture lookups we perform in the shader, the slower it will run.

Using a method akin to dithering, we can spread four samples somewhat farther apart and achieve antialiasing quality similar to that of using quite a few more than four samples per pixel. In Listing 13.7, we include code for computing offsets in x and y from the current pixel location. These offsets form a regular dither pattern that is used to access the shadow map. Four dithered samples per pixel provide much better quality than four standard samples, though not quite as good as using 16 samples per pixel.

Example 13.7. Fragment shader for generating shadows, using four dithered samples

uniform sampler2DShadow ShadowMap;
uniform float Epsilon;
uniform bool  SelfShadowed;
uniform float SelfShadowedVal;
uniform float NonSelfShadowedVal;

varying vec3 ShadowCoord;

float Illumination;

float lookup(vec2 offset)
{
    float depth = shadow2D(ShadowMap,
                       ShadowCoord + vec3(offset, 0.0) * Epsilon).x;
    return depth != 1.0 ? Illumination : 1.0;
}

void main()
{
    // lighten up the self-shadows
    Illumination = SelfShadowed ? SelfShadowedVal : NonSelfShadowedVal;

    // use modulo to vary the sample pattern
    vec2 o = mod(floor(gl_FragCoord.xy), 2.0);

    float sum = 0.0;

    sum += lookup(vec2(-1.5, 1.5) + o);
    sum += lookup(vec2( 0.5, 1.5) + o);
    sum += lookup(vec2(-1.5, -0.5) + o);
    sum += lookup(vec2( 0.5, -0.5) + o);

    gl_FragColor = vec4(sum * 0.25 * gl_Color.rgb, gl_Color.a);
}

Sample images using these shaders are shown in Color Plate 22. A small area of the shadow has been enlarged by 400% to show the differences in quality at the edge of the shadow.

Deferred Shading for Volume Shadows

With contributions by Hugh Malan and Mike Weiblen

One of the disadvantages of shadow mapping as discussed in the previous section is that the performance depends on the number of lights in the scene that are capable of casting shadows. With shadow mapping, a rendering pass must be performed for each of these light sources. These shadow maps are utilized in a final rendering pass. All these rendering passes can reduce performance, particularly if a great many polygons are to be rendered.

It is possible to do higher-performance shadow generation with a rendering technique that is part of a general class of techniques known as DEFERRED SHADING. With deferred shading, the idea is to first quickly determine the surfaces that will be visible in the final scene and apply complex and time-consuming shader effects only to the pixels that make up those visible surfaces. In this sense, the shading operations are deferred until it can be established just which pixels contribute to the final image. A very simple and fast shader can render the scene into an offscreen buffer with depth buffering enabled. During this initial pass, the shader stores whatever information is needed to perform the necessary rendering operations in subsequent passes. Subsequent rendering operations are applied only to pixels that are determined to be visible in the high-performance initial pass. This technique ensures that no hardware cycles are wasted performing shading calculations on pixels that will ultimately be hidden.

To render soft shadows with this technique, we need to make two passes. In the first pass, we do two things:

  1. We use a shader to render the geometry of the scene without shadows or lighting into the framebuffer.

  2. We use the same shader to store a normalized camera depth value for each pixel in a separate buffer. (This separate buffer is accessed as a texture in the second pass for the shadow computations.)

In the second pass, the shadows are composited with the existing contents of the framebuffer. To do this compositing operation, we render the shadow volume (i.e., the region in which the light source is occluded) for each shadow-casting object. In the case of a sphere, computing the shadow volume is relatively easy. The sphere's shadow is in the shape of a truncated cone, where the apex of the cone is at the light source. One end of the truncated cone is at the center of the sphere (see Figure 13.2). (It is somewhat more complex to compute the shadow volume for an object defined by polygons, but the same principle applies.)

Figure 13.2. The shadow volume for a sphere
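
For the sphere case in Figure 13.2, an application could construct the bounding cone geometry along these lines. This is a minimal sketch under stated assumptions, not the demo's actual code: emitVertex stands in for whatever vertex output mechanism is used, farDist is a scene-dependent cutoff, and the end-cap polygons are omitted.

#include <math.h>

/* Hypothetical vertex sink; a real implementation would more likely
   fill a vertex array. */
extern void emitVertex(const float p[3]);

static void normalize3(float v[3])
{
    float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= len; v[1] /= len; v[2] /= len;
}

/* Emit the side of the truncated cone bounding a sphere's shadow:
   apex at the light, near cap through the sphere's center, far cap
   at distance farDist from the light. */
void buildSphereShadowCone(const float light[3], const float center[3],
                           float radius, float farDist, int numSides)
{
    const float TWO_PI = 6.2831853f;
    float axis[3], u[3], v[3];
    float nearDist, tanA;
    int i, j;

    for (j = 0; j < 3; j++)
        axis[j] = center[j] - light[j];
    nearDist = sqrtf(axis[0]*axis[0] + axis[1]*axis[1] + axis[2]*axis[2]);
    normalize3(axis);

    /* The cone tangent to the sphere has half-angle a, where
       sin(a) = radius/nearDist, so tan(a) = r / sqrt(d*d - r*r). */
    tanA = radius / sqrtf(nearDist*nearDist - radius*radius);

    /* Build an orthonormal basis (u, v) perpendicular to the axis. */
    if (fabsf(axis[2]) < 0.9f) {
        u[0] = -axis[1]; u[1] = axis[0]; u[2] = 0.0f;
    } else {
        u[0] = 0.0f; u[1] = -axis[2]; u[2] = axis[1];
    }
    normalize3(u);
    v[0] = axis[1]*u[2] - axis[2]*u[1];
    v[1] = axis[2]*u[0] - axis[0]*u[2];
    v[2] = axis[0]*u[1] - axis[1]*u[0];

    /* Emit a triangle strip of near/far vertex pairs around the ring;
       the ring radius at distance d along the axis is d * tan(a). */
    for (i = 0; i <= numSides; i++) {
        float ang = TWO_PI * (float)i / (float)numSides;
        float c = cosf(ang), s = sinf(ang);
        float p[3];
        for (j = 0; j < 3; j++)
            p[j] = light[j] + nearDist * (axis[j] + tanA * (c*u[j] + s*v[j]));
        emitVertex(p);
        for (j = 0; j < 3; j++)
            p[j] = light[j] + farDist * (axis[j] + tanA * (c*u[j] + s*v[j]));
        emitVertex(p);
    }
}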

We composite shadows with the existing geometry by rendering the polygons that define the shadow volume. This allows our second pass shader to be applied only to regions of the image that might be in shadow.

To draw a shadow, we use the texture map shown in Figure 13.3. This texture map expresses how much a visible surface point is in shadow relative to a shadow-casting object (i.e., how much its value is attenuated) based on a function of two values: 1) the squared distance from the visible surface point to the central axis of the shadow volume, and 2) the distance from the visible surface point to the center of the shadow-casting object. The first value is used as the s coordinate for accessing the shadow texture, and the second value is used as the t coordinate. The net result is that shadows are relatively sharp when the shadow-casting object is very close to the fragment being tested and the edges become softer as the distance increases.

Figure 13.3. A texture map used to generate soft shadows

In the second pass of the algorithm, we do the following:

  1. Draw the polygons that define the shadow volume. Only the fragments that could possibly be in shadow are accessed during this rendering operation.

  2. For each fragment rendered,

    1. Look up the camera depth value for the fragment as computed in the first pass.

    2. Calculate the coordinates of the visible surface point in the local space of the shadow volume. In this space, the x axis is the axis of the shadow volume and the origin is at the center of the shadow-casting object. The x component of this coordinate corresponds to the distance along the axis from the center of the shadow-casting object and is used directly as the second coordinate for the shadow texture lookup.

    3. Compute the squared distance between the visible surface point and the x axis of the shadow volume. This value becomes the first coordinate for the texture lookup.

    4. Access the shadow texture by using the computed index values to retrieve the light attenuation factor and store this in the output fragment’s alpha value. The red, green, and blue components of the output fragment color are each set to 0.

    5. Use blending to darken the existing framebuffer value by the attenuation factor: enable fixed functionality blending, set the blend source function to GL_ONE, and set the blend destination function to GL_SRC_ALPHA. Because the fragment color is black, the framebuffer color is simply scaled by the alpha value computed in the previous step.

Because the shadow (second pass) shader is effectively a 2D compositing operation, the texel it reads from the depth texture must exactly match the pixel in the framebuffer it affects. So the texture coordinate and other quantities must be bilinearly interpolated without perspective correction. We interpolate by ensuring that w is constant across the polygon—dividing x, y, and z by w and then setting w to 1.0 does the job. Another issue is that when the viewer is inside the shadow volume, all faces are culled. We handle this special case by drawing a screen-sized quadrilateral since the shadow volume would cover the entire scene.

Shaders for First Pass

The shaders for the first pass of the volume shadow algorithm are shown in Listings 13.8 and 13.9. In the vertex shader, to accomplish the standard rendering of the geometry (which in this specific case is all texture mapped), we just call ftransform and pass along the texture coordinate. The other lines of code compute the normalized value for the depth from the vertex to the camera plane. The computed value, CameraDepth, is stored in a varying variable so that it can be interpolated and made available to the fragment shader.

To render into two buffers by using a fragment shader, the application must call glDrawBuffers and pass it a pointer to an array containing symbolic constants that define the two buffers to be written. In this case, we might pass the symbolic constant GL_BACK_LEFT as the first value in the array and GL_AUX0 as the second value. This means that gl_FragData[0] will be used to update the value in the soon-to-be-visible framebuffer (assuming we are double-buffering) and the value for gl_FragData[1] will be used to update the value in auxiliary buffer number 0. Thus, the fragment shader for the first pass of our algorithm contains just two lines of code (Listing 13.9).
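
For example, the application might configure the two outputs like this; the buffer choices mirror the discussion above.

/* gl_FragData[0] -> back left (visible) buffer,
   gl_FragData[1] -> auxiliary buffer 0 (the depth map) */
GLenum buffers[2] = { GL_BACK_LEFT, GL_AUX0 };
glDrawBuffers(2, buffers);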

Example 13.8. Vertex shader for first pass of soft volume shadow algorithm

uniform vec3  CameraPos;
uniform vec3  CameraDir;
uniform float DepthNear;
uniform float DepthFar;

varying float CameraDepth;  // normalized camera depth
varying vec2 TexCoord;

void main()
{
    // offset = vector to vertex from camera's position
    vec3 offset = (gl_Vertex.xyz / gl_Vertex.w) - CameraPos;

    // z = distance from vertex to camera plane
    float z = -dot(offset, CameraDir);

    // Depth from vertex to camera, mapped to [0,1]
    CameraDepth = (z - DepthNear) / (DepthFar - DepthNear);

    // typical interpolated coordinate for texture lookup
    TexCoord = gl_MultiTexCoord0.xy;

    gl_Position = ftransform();
}

Example 13.9. Fragment shader for first pass of soft volume shadow algorithm

uniform sampler2D TextureMap;

varying float CameraDepth;
varying vec2  TexCoord;

void main()
{
    // draw the typical textured output to visual framebuffer
    gl_FragData[0] = texture2D(TextureMap, TexCoord);

    // write "normaliized vertex depth" to the depthmap's alpha.
    gl_FragData[1] = vec4(vec3(0.0), CameraDepth);
}

Shaders for Second Pass

The second pass of our shadow algorithm is responsible for compositing shadow information on top of what has already been rendered. After the first pass has been completed, the application must arrange for the depth information rendered into auxiliary buffer 0 to be made accessible for use as a texture. There are several ways we can accomplish this. One way is to set the current read buffer to auxiliary buffer 0 by calling glReadBuffer with the symbolic constant GL_AUX0 and then call glCopyTexImage2D to copy the values from auxiliary buffer 0 to a texture that can be accessed in the second pass of the algorithm. (A higher-performance method that avoids an actual data copy is possible if the EXT_framebuffer_object extension is used. This extension is expected to be promoted to the OpenGL core in OpenGL 2.1.)
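
A minimal sketch of the copy-based approach, assuming depthTex is a texture object the application created earlier and width and height match the window size:

/* Make the first pass's depth values (stored in AUX0's alpha channel)
   available as a texture for the second pass. */
glReadBuffer(GL_AUX0);
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);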

In the second pass, the only polygons rendered are the ones that define the shadow volumes for the various objects in the scene. We enable blending by calling glEnable with the symbolic constant GL_BLEND, and we set the blend function by calling glBlendFunc with a source factor of GL_ONE and a destination factor of GL_SRC_ALPHA. The fragment shader outputs the shadow color and an alpha value obtained from a texture lookup operation. This alpha value blends the shadow color value into the frame buffer.
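
The corresponding blending setup is just two calls. Because the shadow color written by the fragment shader is black, this configuration simply scales the existing framebuffer color by the attenuation factor k stored in the fragment's alpha value.

/* result = src * 1 + dst * src_alpha = framebuffer color * k */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_ALPHA);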

The vertex shader for the second pass (see Listing 13.10) is responsible for computing the coordinates for accessing the depth values that were computed in the first pass. We accomplish the computation by transforming the incoming vertex position, dividing the x, y, and z components by the w component, and then scaling and biasing the x and y components to transform them from the range [–1,1] into the range [0,1]. Values for ShadowNear and ShadowDir are also computed. These are used in the fragment shader to compute the position of the fragment relative to the shadow-casting object.

Example 13.10. Vertex shader for second pass of soft volume shadow algorithm

uniform mat3 WorldToShadow;
uniform vec3 SphereOrigin;

uniform vec3 CameraPos;
uniform vec3 CameraDir;
uniform float DepthNear;
uniform float DepthFar;

varying vec2 DepthTexCoord;
varying vec3 ShadowNear;
varying vec3 ShadowDir;

void main()
{
    vec4 tmp1 = ftransform();
    gl_Position = tmp1;

    // Predivide out w to avoid perspective-correct interpolation.
    // The quantities being interpolated are screen-space texture
    // coordinates and vectors to the near and far shadow plane,
    // all of which have to be bilinearly interpolated.
    // This could potentially be done by setting glHint,
    // but it wouldn't be guaranteed to work on all hardware.

    gl_Position.xyz /= gl_Position.w;
    gl_Position.w = 1.0;

    // Grab the transformed vertex's XY components as a texcoord
    // for sampling from the depth texture from pass 1.
    // Normalize them from [0,0] to [1,1]

    DepthTexCoord = gl_Position.xy * 0.5 + 0.5;

    // offset = vector to vertex from camera's position
    vec3 offset = (gl_Vertex.xyz / gl_Vertex.w) - CameraPos;

    // z = distance from vertex to camera plane
    float z = -dot(offset, CameraDir);

    vec3 shadowOffsetNear = offset * DepthNear / z;
    vec3 shadowOffsetFar  = offset * DepthFar / z;

    vec3 worldPositionNear = CameraPos + shadowOffsetNear;
    vec3 worldPositionFar  = CameraPos + shadowOffsetFar;

    vec3 shadowFar  = WorldToShadow * (worldPositionFar - SphereOrigin);
    ShadowNear = WorldToShadow * (worldPositionNear - SphereOrigin);
    ShadowDir = shadowFar - ShadowNear;
}

The fragment shader for the second pass is shown in Listing 13.11. In this shader, we access the cameraDepth value computed by the first pass by performing a texture lookup. We then map the fragment’s position into the local space of the shadow volume. The mapping from world to shadow space is set up so that the center of the occluding sphere maps to the origin, and the circle of points on the sphere at the terminator between light and shadow maps to a circle in the YZ plane.

The variables d and l are respectively the distance along the shadow axis and the squared distance from it. These values are used as texture coordinates for the lookup into the texture map defining the shape of the shadow.

With the mapping described above, points on the terminator map to a circle in the YZ plane. The texture map has been painted with the transition from light to shadow occurring at s=0.5; to match this, the mapping from world to shadow is set up so that the terminator circle maps to a radius of sqrt(0.5).

Finally, the value retrieved from the shadow texture is used as the alpha value for blending the shadow color with the geometry that has already been rendered into the frame buffer.

Example 13.11. Fragment shader for second pass of soft volume shadow algorithm

uniform sampler2D DepthTexture;
uniform sampler2D ShadowTexture;

varying vec2 DepthTexCoord;
varying vec3 ShadowNear;
varying vec3 ShadowDir;

const vec3 shadowColor = vec3(0.0);

void main()
{
    // read from DepthTexture
    // (depth is stored in texture's alpha component)
    float cameraDepth = texture2D(DepthTexture, DepthTexCoord).a;

    vec3 shadowPos = (cameraDepth * ShadowDir) + ShadowNear;
    float l = dot(shadowPos.yz, shadowPos.yz);
    float d = shadowPos.x;

    // k = shadow density: 0=opaque, 1=transparent
    // (use texture's red component as the density)
    float k = texture2D(ShadowTexture, vec2(l, d)).r;

    gl_FragColor = vec4(shadowColor, k);
}

Figure 13.4 shows the result of this multipass shading algorithm in a scene with several spheres. Note how the shadows for the four small spheres get progressively softer edges as the spheres increase in distance from the checkered floor. The large sphere that is farthest from the floor casts an especially soft shadow.

Figure 13.4. Screen shot of the volume shadows shader in action. Notice that spheres that are farther from the surface have shadows with softer edges.

The interesting part of this deferred shading approach is that the volumetric effects are implemented by rendering geometry that bounds the volume of the effect. This almost certainly means processing fewer vertices and fewer fragments. The shaders required are relatively simple and quite fast. Instead of rendering the geometry once for each light source, the geometry is rendered just once, and all the shadow volumes due to all light sources can be rendered in a single compositing pass. Localized effects such as shadow maps, decals, and projective textures can be accomplished easily. Instead of having to write tricky code to figure out the subset of the geometry to which the effect applies, you write a shader that is applied to each pixel and use that shader to render geometry that bounds the effect. This technique can be extended to render a variety of different effects—volumetric fog, lighting, and improved caustics to name a few.

Summary

There are a number of techniques for generating shadows, and this chapter described several that particularly lend themselves to real-time usage. Ambient occlusion is a technique that complements the global illumination techniques described in Chapter 12 by adding soft shadows that would naturally appear in the corners and crevices of objects in a scene. Shadow mapping is a technique that is well suited to implementation with OpenGL shaders on today’s graphics hardware. A number of variations to shadow mapping can be used to improve its quality. We looked at a couple of methods that produce antialiased shadow edges. Finally, we also looked at a method that uses a deferred shading approach to render shadow volumes in order to produce soft shadows.

Further Information

The SIGGRAPH 2002 course notes contained the article Production-Ready Global Illumination, by Hayden Landis. This document describes ambient environments, reflection occlusion, and ambient occlusion and explains how they are used in the ILM computer graphics production environment. The article Ambient Occlusion, by Matt Pharr and Simon Green, provides further details about the preprocessing step and gives example shaders written in Cg. The GPU Gems 2 book contains an article by Michael Bunnell that describes efforts to compute occlusion factors in real time.

Frank Crow pioneered the development of shadow algorithms for computer graphics. Mark Segal and others described the basics of using texture mapping hardware to generate shadows in a 1992 SIGGRAPH paper. Randima Fernando and Mark Kilgard discuss a Cg implementation of these techniques in the book The Cg Tutorial. Eric Haines wrote a survey of real-time shadow algorithms and presented this information at GDC in 2001. Some of this material is also in the book Real-Time Rendering by Akenine-Möller and Haines.

Deferred shading has recently been a hot topic in computer games development. In the book GPU Gems 2, Oles Shishkovtsov discusses how this approach was used for the computer game S.T.A.L.K.E.R. His article also mentions presentations from the 2004 Game Developer’s Conference.

  1. Akenine-Möller, Tomas, and E. Haines, Real-Time Rendering, Second Edition, AK Peters, Ltd., Natick, Massachusetts, 2002. http://www.realtimerendering.com

  2. Bunnell, Michael, Dynamic Ambient Occlusion and Indirect Lighting, in GPU Gems 2: Programming Techniques for High-Performance Graphics and General-Purpose Computation, Editor: Matt Pharr, Addison-Wesley, Reading, Massachusetts, 2005. http://download.nvidia.com/developer/GPU_Gems_2/GPU_Gems2_ch14.pdf

  3. Bunnell, Michael, and Fabio Pellacini, Shadow Map Antialiasing, in GPU Gems: Programming Techniques, Tips, and Tricks for Real-Time Graphics, Editor: Randima Fernando, Addison-Wesley, Reading, Massachusetts, 2004. http://developer.nvidia.com/object/gpu_gems_home.html

  4. Crow, Franklin C., Shadow Algorithms for Computer Graphics, Computer Graphics (SIGGRAPH ’77 Proceedings), vol. 11, no. 2, pp. 242–248, July 1977.

  5. Fernando, Randima, and Mark J. Kilgard, The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics, Addison-Wesley, Boston, Massachusetts, 2003.

  6. Haines, Eric, Real-Time Shadows, GDC 2001 Presentation. http://www.gdconf.com/archives/2001/haines.pdf

  7. Landis, Hayden, Production-Ready Global Illumination, SIGGRAPH 2002 Course Notes, course 16, RenderMan In Production. http://www.debevec.org/HDRI2004/landis-S2002-course16-prodreadyGI.pdf

  8. Pharr, Matt and Simon Green, Ambient Occlusion, in GPU Gems: Programming Techniques, Tips, and Tricks for Real-Time Graphics, Editor: Randima Fernando, Addison-Wesley, Reading, Massachusetts, 2004. http://developer.nvidia.com/object/gpu_gems_home.html

  9. Reeves, William T., David H. Salesin, and Robert L. Cook, Rendering Antialiased Shadows with Depth Maps, Computer Graphics (SIGGRAPH ’87 Proceedings), vol. 21, no. 4, pp. 283–291, July 1987.

  10. Segal, Mark, C. Korobkin, R. van Widenfelt, J. Foran, and P. Haeberli, Fast Shadows and Lighting Effects Using Texture Mapping, Computer Graphics (SIGGRAPH ’92 Proceedings), vol. 26, no. 2, pp. 249–252, July 1992.

  11. Shishkovtsov, Oles, Deferred Shading in S.T.A.L.K.E.R., in GPU Gems 2: Programming Techniques for High-Performance Graphics and General-Purpose Computation, Editor: Matt Pharr, Addison-Wesley, Reading, Massachusetts, 2005. http://developer.nvidia.com/object/gpu_gems_2_home.html

  12. Woo, Andrew, P. Poulin, and A. Fournier, A Survey of Shadow Algorithms, IEEE Computer Graphics and Applications, vol. 10, no. 6, pp.13–32, November 1990.

  13. Zhukov, Sergei, A. Iones, G. Kronin, An Ambient Light Illumination Model, Proceedings of Eurographics Rendering Workshop ’98.
