Chapter 12. Lighting

In the real world, we see things because they reflect light from a light source or because they are light sources themselves. In computer graphics, just as in real life, we won’t be able to see an object unless it is illuminated or emits light. To generate more realistic images, we need to have more realistic models for illumination, shadows, and reflection than those we’ve discussed so far.

In this chapter and the next two, we explore how the OpenGL Shading Language can help us implement such models so that they can execute at interactive rates on programmable graphics hardware. In this chapter, we look at some lighting models that provide more flexibility and give more realistic results than those built into OpenGL’s fixed functionality rendering pipeline. Much has been written on the topic of lighting in computer graphics. We only examine a few methods in this chapter. Hopefully, you’ll be inspired to try implementing some others on your own.

Hemisphere Lighting

In Chapter 9, we looked carefully at the fixed functionality lighting model built into OpenGL and developed shader code to mimic the fixed functionality behavior. However, this model has a number of flaws, and these flaws become more apparent as we strive for more realistic rendering effects. One problem is that objects in a scene do not typically receive all their illumination from a small number of specific light sources. Interreflections between objects often make noticeable and important contributions to objects in the scene. The traditional computer graphics illumination model attempts to account for this phenomenon through an ambient light term. However, this ambient light term is usually applied equally across an object or an entire scene. The result is a flat and unrealistic look for areas of the scene that are not affected by direct illumination.

Another problem with the traditional illumination model is that light sources in real scenes are not point lights or even spotlights—they are area lights. Consider the indirect light coming in through a window and illuminating the floor, or the long fluorescent light bulbs behind a rectangular translucent panel. For an even more common case, consider the illumination outdoors on a cloudy day. In this case, the entire visible hemisphere is acting like an area light source. In several presentations and tutorials, Chas Boyd, Dan Baker, and Philip Taylor of Microsoft described this situation as HEMISPHERE LIGHTING and discussed how to implement it in DirectX. Let’s look at how we might create an OpenGL shader to simulate this type of lighting environment.

The idea behind hemisphere lighting is that we model the illumination as two hemispheres. The upper hemisphere represents the sky, and the lower hemisphere represents the ground. A location on an object with a surface normal that points straight up gets all of its illumination from the upper hemisphere, and a location with a surface normal pointing straight down gets all of its illumination from the lower hemisphere (see Figure 12.1). By picking appropriate colors for the two hemispheres, we can make the sphere look as though locations with normals pointing up are illuminated and those with surface normals pointing down are in shadow.

Figure 12.1. A sphere illuminated using the hemisphere lighting model. A point on the top of the sphere (the black “x”) receives illumination only from the upper hemisphere (i.e., the sky color). A point on the bottom of the sphere (the white “x”) receives illumination only from the lower hemisphere (i.e., the ground color). A point right on the equator would receive half of its illumination from the upper hemisphere and half from the lower hemisphere (e.g., 50% sky color and 50% ground color).

To compute the illumination at any point on the surface, we must compute the integral of the illumination received at that point:

Color = a · SkyColor + (1 − a) · GroundColor

where

a = 1.0 − (0.5 · sin(θ)) for θ ≤ 90°

a = 0.5 · sin(θ) for θ > 90°

θ = angle between surface normal and north pole direction

But we can actually calculate a in another way that is simpler but roughly equivalent:

a = 0.5 + (0.5 · cos(θ))
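As a quick check (our arithmetic, not in the original text): both formulas agree at the extremes, giving a = 1 at θ = 0°, a = 0.5 at θ = 90°, and a = 0 at θ = 180°. In between they diverge somewhat; at θ = 45°, the exact form gives a ≈ 0.65 while the approximation gives a ≈ 0.85.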

This approach eliminates the need for a conditional. Furthermore, we can easily compute the cosine of the angle between two unit vectors by taking the dot product of the two vectors. This is an example of what Jim Blinn likes to call “the ancient Chinese art of chi ting.” In computer graphics, if it looks good enough, it is good enough. It doesn’t really matter whether your calculations are physically correct or a colossal cheat. The difference between the two functions is shown in Figure 12.2. The shape of the two curves is similar. One is the mirror of the other, but the area under the curves is the same. This general equivalency is good enough for the effect we’re after, and the shader is simpler and will likely execute faster as well.

Figure 12.2. Comparing the actual analytic function for hemisphere lighting to a similar but higher-performance function.
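To make the comparison concrete, here is a small GLSL sketch of our own (not from the book's listings). exactFactor implements the piecewise sine formula, and fastFactor implements the dot-product shortcut used in Listing 12.1; both assume n and pole are normalized.

float exactFactor(vec3 n, vec3 pole)
{
    float cosTheta = dot(n, pole);
    float sinTheta = sqrt(max(1.0 - cosTheta * cosTheta, 0.0));

    // 1.0 - 0.5 * sin(theta) for theta <= 90 degrees,
    // 0.5 * sin(theta) for theta > 90 degrees
    return (cosTheta >= 0.0) ? 1.0 - 0.5 * sinTheta : 0.5 * sinTheta;
}

float fastFactor(vec3 n, vec3 pole)
{
    return 0.5 + 0.5 * dot(n, pole);   // no conditional needed
}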

For the hemisphere shader, we need to pass in uniform variables for the sky color and the ground color. We can also consider the “north pole” to be our light position. If we pass this in as a uniform variable, we can light the model from different directions.

Listing 12.1 shows a vertex shader that implements hemisphere lighting. As you can see, the shader is quite simple. Its main purpose is to compute the diffuse color value and pass it on to fixed functionality fragment processing so that it can be written into the framebuffer. We accomplish this by storing the computed color value in the built-in varying variable gl_FrontColor. Results for this shader are shown in Color Plate 21D and G. Compare this to the results of shading with a single directional light source shown in Color Plate 21A and B. Not only is the hemisphere shader simpler and more efficient, it also produces a much more realistic lighting effect. This lighting model can be used for tasks like model preview, where it is important to examine all the details of a model. It can also be used in conjunction with the traditional computer graphics illumination model: point, directional, or spot lights can be added on top of the hemisphere lighting model to provide more illumination to important parts of the scene (a minimal sketch of this follows the listing).

Listing 12.1. Vertex shader for hemisphere lighting

uniform vec3 LightPosition;
uniform vec3 SkyColor;
uniform vec3 GroundColor;
void main()
{
    vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 tnorm = normalize(gl_NormalMatrix * gl_Normal);
    vec3 lightVec = normalize(LightPosition - ecPosition);
    float costheta = dot(tnorm, lightVec);
    float a = 0.5 + 0.5 * costheta;
    
    gl_FrontColor = vec4(mix(GroundColor, SkyColor, a), 1.0);
    
    gl_Position = ftransform();
}
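As mentioned above, traditional light sources can be layered on top of the hemisphere term. Here is a minimal sketch of that idea, replacing the gl_FrontColor assignment in Listing 12.1; DirLightColor and DirLightDir are assumed uniforms that are not part of the original listing (DirLightDir is a normalized direction in eye coordinates).

uniform vec3 DirLightColor;   // assumed: color of the added directional light
uniform vec3 DirLightDir;     // assumed: normalized direction, eye coordinates

// ... inside main(), after computing tnorm and a as in Listing 12.1:
vec3 hemi   = mix(GroundColor, SkyColor, a);
float nDotL = max(dot(tnorm, DirLightDir), 0.0);
gl_FrontColor = vec4(hemi + DirLightColor * nDotL, 1.0);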

One of the issues with this model is that it doesn’t account for self-occlusion. Regions that should really be in shadow because of the geometry of the model appear too bright. We remedy this in Chapter 13.

Image-Based Lighting

Back in Chapter 10 we looked at shaders to perform environment mapping. If we’re trying to achieve realistic lighting in a computer graphics scene, why not just use an environment map for the lighting? This approach to illumination is called IMAGE-BASED LIGHTING; it has been popularized in recent years by researcher Paul Debevec at the University of Southern California. Churches and auditoriums may have dozens of light sources on the ceiling. Rooms with many windows also have complex lighting environments. It is often easier and much more efficient to sample the lighting in such environments and store the results in one or more environment maps than it is to simulate numerous individual light sources.

The steps involved in image-based lighting are

  1. Use a LIGHT PROBE (e.g., a reflective sphere) to capture (e.g., photograph) the illumination that occurs in a real-world scene. The captured omnidirectional, high-dynamic range image is called a LIGHT PROBE IMAGE.

  2. Use the light probe image to create a representation of the environment (e.g., an environment map).

  3. Place the synthetic objects to be rendered inside the environment.

  4. Render the synthetic objects by using the representation of the environment created in step 2.

On his Web site (http://www.debevec.org/), Debevec offers a number of useful things to developers. For one, he has made available a number of images that can be used as high-quality environment maps to provide realistic lighting in a scene. These images are high dynamic range (HDR) images that represent each color component with a 32-bit floating-point value. Such images can represent a much greater range of intensity values than can 8-bit-per-component images. For another, he makes available a tool called HDRShop that manipulates and transforms these environment maps. Through links to his various publications and tutorials, he also provides step-by-step instructions on creating your own environment maps and using them to add realistic lighting effects to computer graphics scenes.

Following Debevec’s guidance, I purchased a 2-inch chrome steel ball from McMaster-Carr Supply Company (http://www.mcmaster.com). We used this ball to capture a light probe image from the center of the square outside our office building in downtown Fort Collins, Colorado (Color Plate 10A). We then used HDRShop to create a lat-long environment map (Color Plate 9) and a cube map (Color Plate 10B) of the same scene. The cube map and lat-long map can be used to perform environment mapping as described in Chapter 10. The shader in that chapter simulated a surface with an underlying base color and diffuse reflection characteristics, covered by a transparent mirror-like layer that reflected the environment flawlessly.

We can simulate other types of objects if we modify the environment maps before they are used. A point on the surface that reflects light in a diffuse fashion reflects light from all the light sources that are in the hemisphere in the direction of the surface normal at that point. We can’t really afford to access the environment map a large number of times in our shader. What we can do instead is similar to what we discussed for hemisphere lighting. Starting from our light probe image, we can construct an environment map for diffuse lighting. Each texel in this environment map will contain the weighted average (i.e., the convolution) of other texels in the visible hemisphere as defined by the surface normal that would be used to access that texel in the environment.
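Stated as an equation (our formulation of the description above), the value stored in the diffuse environment map for the texel whose direction corresponds to surface normal n is

DiffuseMap(n) = ∫ L(ω) · (n · ω) dω

where L(ω) is the radiance of the light probe image arriving from direction ω and the integral runs over the hemisphere of directions around n.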

Again, HDRShop has exactly what we need. We can use HDRShop to create a lat-long image from our original light probe image. We can then use a command built into HDRShop that performs the necessary convolution. This operation can be time consuming, because at each texel in the image, the contributions from half of the other texels in the image must be considered. Luckily, we don’t need a very large image for this purpose. The effect is essentially the same as creating a very blurry image of the original light probe image. Since there is no high frequency content in the computed image, a cube map with faces that are 64 × 64 or 128 × 128 works just fine. An example of a diffuse environment map is shown in Color Plate 10C.

A single texture access into this diffuse environment map provides us with the value needed for our diffuse reflection calculation. What about the specular contribution? A surface that is very shiny will reflect the illumination from a light source just like a mirror. This is what we saw in the environment mapping shader from Chapter 10. A single point on the surface reflects a single point in the environment. For surfaces that are rougher, the highlight defocuses and spreads out. In this case, a single point on the surface reflects several points in the environment, though not the whole visible hemisphere like a diffuse surface. HDRShop lets us blur an environment map by providing a Phong exponent—a degree of shininess. A value of 1.0 convolves the environment map to simulate diffuse reflection, and a value of 50 or more convolves the environment map to simulate a somewhat shiny surface. An example of the Old Town Square environment map that has been convolved with a Phong exponent value of 50 is shown in Color Plate 10D.

The shaders that implement these concepts end up being quite simple and quite fast. In the vertex shader, all that is needed is to compute the reflection direction at each vertex. This value and the surface normal are sent to the fragment shader as varying variables. They are interpolated across each polygon, and the interpolated values are used in the fragment shader to access the two environment maps in order to obtain the diffuse and the specular components. The values obtained from the environment maps are combined with the object’s base color to arrive at the final color for the fragment. The shaders are shown in Listing 12.2 and Listing 12.3. Examples of images created with this technique are shown in Color Plate 18.

Listing 12.2. Vertex shader for image-based lighting

varying vec3 ReflectDir;
varying vec3 Normal;

void main()
{
    gl_Position = ftransform();
    Normal = normalize(gl_NormalMatrix * gl_Normal);
    vec4 pos = gl_ModelViewMatrix * gl_Vertex;
    vec3 eyeDir = pos.xyz;
    ReflectDir = reflect(eyeDir, Normal);
}

Listing 12.3. Fragment shader for image-based lighting

uniform vec3 BaseColor;
uniform float SpecularPercent;
uniform float DiffusePercent;

uniform samplerCube SpecularEnvMap;
uniform samplerCube DiffuseEnvMap;

varying vec3 ReflectDir;
varying vec3 Normal;

void main()
{
    // Look up environment map values in cube maps

    vec3 diffuseColor =
        vec3(textureCube(DiffuseEnvMap, normalize(Normal)));
    
    vec3 specularColor =
        vec3(textureCube(SpecularEnvMap, normalize(ReflectDir)));
    
    // Add lighting to base color and mix

    vec3 color = mix(BaseColor, diffuseColor * BaseColor, DiffusePercent);
    color = mix(color, specularColor + color, SpecularPercent);

    gl_FragColor = vec4(color, 1.0);
}

The environment maps that are used can reproduce the light from the whole scene. Of course, objects with different specular reflection properties require different specular environment maps. And producing these environment maps requires some manual effort and lengthy preprocessing. But the resulting quality and performance make image-based lighting a great choice in many situations.

Lighting with Spherical Harmonics

In 2001, Ravi Ramamoorthi and Pat Hanrahan presented a method that uses spherical harmonics for computing the diffuse lighting term. This method reproduces accurate diffuse reflection, based on the content of a light probe image, without accessing the light probe image at runtime. The light probe image is preprocessed to produce coefficients that are used in a mathematical representation of the image at runtime. The mathematics behind this approach is beyond the scope of this book (see the references at the end of this chapter if you want all the details). Instead, we lay the necessary groundwork for this shader by describing the underlying mathematics in an intuitive fashion. The result is remarkably simple, accurate, and realistic, and it can easily be codified in an OpenGL shader. This technique has already been used successfully to provide real-time illumination for games and has applications in computer vision and other areas as well.

Spherical harmonics provides a frequency space representation of an image over a sphere. It is analogous to the Fourier transform on the line or circle. This representation of the image is continuous and rotationally invariant. Using this representation for a light probe image, Ramamoorthi and Hanrahan showed that you could accurately reproduce the diffuse reflection from a surface with just nine spherical harmonic basis functions. These nine spherical harmonics are obtained with constant, linear, and quadratic polynomials of the normalized surface normal.
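For reference, here are the nine basis functions written out for a unit normal (x, y, z); these constants come from Ramamoorthi and Hanrahan’s paper rather than from the text of this chapter:

Y00 = 0.282095
Y1m1 = 0.488603 · y      Y10 = 0.488603 · z      Y11 = 0.488603 · x
Y2m2 = 1.092548 · xy     Y2m1 = 1.092548 · yz     Y21 = 1.092548 · xz
Y20 = 0.315392 · (3z² − 1)
Y22 = 0.546274 · (x² − y²)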

Intuitively, we can see that it is plausible to accurately simulate the diffuse reflection with a small number of basis functions in frequency space since diffuse reflection varies slowly across a surface. With just nine terms used, the average error over all surface orientations is less than 3% for any physical input lighting distribution. With Debevec’s light probe images, the average error was shown to be less than 1% and the maximum error for any pixel was less than 5%.

Each spherical harmonic basis function has a coefficient that depends on the light probe image being used. The coefficients are different for each color channel, so you can think of each coefficient as an RGB value. A preprocessing step is required to compute the nine RGB coefficients for the light probe image to be used. Ramamoorthi makes the code for this preprocessing step available for free on his Web site. I used this program to compute the coefficients for all the light probe images in Debevec’s light probe gallery as well as the Old Town Square light probe image and summarized the results in Table 12.1.

Table 12.1. Spherical harmonic coefficients for light probe images (each cell gives the red, green, and blue coefficients)

Coefficient   Old Town Square    Grace Cathedral    Eucalyptus Grove   St. Peter's Basilica   Uffizi Gallery
L00            .87  .88  .86      .79  .44  .54      .38  .43  .45      .36  .26  .23          .32  .31  .35
L1m1           .18  .25  .31      .39  .35  .60      .29  .36  .41      .18  .14  .13          .37  .37  .43
L10            .03  .04  .04     -.34 -.18 -.27      .04  .03  .01     -.02 -.01  .00          .00  .00  .00
L11           -.00 -.03 -.05     -.29 -.06  .01     -.10 -.10 -.09      .03  .02  .00         -.01 -.01 -.01
L2m2          -.12 -.12 -.12     -.11 -.05 -.12     -.06 -.06 -.04      .02  .01  .00         -.02 -.02 -.03
L2m1           .00  .00  .01     -.26 -.22 -.47      .01 -.01 -.05     -.05 -.03 -.01         -.01 -.01 -.01
L20           -.03 -.02 -.02     -.16 -.09 -.15     -.09 -.13 -.15     -.09 -.08 -.07         -.28 -.28 -.32
L21           -.08 -.09 -.09      .56  .21  .14     -.06 -.05 -.04      .01  .00  .00          .00  .00  .00
L22           -.16 -.19 -.22      .21 -.05 -.30      .02  .00 -.05     -.08 -.03  .00         -.24 -.24 -.28

Coefficient   Galileo's Tomb     Vine Street Kitchen   Breezeway          Campus Sunset      Funston Beach Sunset
L00           1.04  .76  .71      .64  .67  .73         .32  .36  .38      .79  .94  .98       .68  .69  .70
L1m1           .44  .34  .34      .28  .32  .33         .37  .41  .45      .44  .56  .70       .32  .37  .44
L10           -.22 -.18 -.17      .42  .60  .77        -.01 -.01 -.01     -.10 -.18 -.27      -.17 -.17 -.17
L11            .71  .54  .56     -.05 -.04 -.02        -.10 -.12 -.12      .45  .38  .20      -.45 -.42 -.34
L2m2           .64  .50  .52     -.10 -.08 -.05        -.13 -.15 -.17      .18  .14  .05      -.17 -.17 -.15
L2m1          -.12 -.09 -.08      .25  .39  .53        -.01 -.02  .02     -.14 -.22 -.31      -.08 -.09 -.10
L20           -.37 -.28 -.29      .38  .54  .71        -.07 -.08 -.09     -.39 -.40 -.36      -.03 -.02 -.01
L21           -.17 -.13 -.13      .06  .01 -.02         .02  .03  .03      .09  .07  .04       .16  .14  .10
L22            .55  .42  .42     -.03 -.02 -.03        -.29 -.32 -.36      .67  .67  .52       .37  .31  .20

The equation for diffuse reflection using spherical harmonics is

Diffuse = c1 · L22 · (x² − y²) + c3 · L20 · z² + c4 · L00 − c5 · L20
          + 2c1 · (L2m2 · xy + L21 · xz + L2m1 · yz)
          + 2c2 · (L11 · x + L1m1 · y + L10 · z)

The constants c1–c5 result from the derivation of this formula and are shown in the vertex shader code in Listing 12.4. The L coefficients are the nine basis function coefficients computed for a specific light probe image in the preprocessing phase. The x, y, and z values are the coordinates of the normalized surface normal at the point that is to be shaded. Unlike low dynamic range images (e.g., 8 bits per color component) that have an implicit minimum value of 0 and an implicit maximum value of 255, high dynamic range images represented with a floating-point value for each color component don’t contain well-defined minimum and maximum values. The minimum and maximum values for two HDR images may be quite different from each other, unless the same calibration or creation process was used to create both images. It is even possible to have an HDR image that contains negative values. For this reason, the vertex shader contains an overall scaling factor to make the final effect look right.

The vertex shader that encodes the formula for the nine spherical harmonic basis functions is actually quite simple. When the compiler gets hold of it, it becomes simpler still. An optimizing compiler typically reduces all the operations involving constants. The resulting code is quite efficient because it contains a relatively small number of addition and multiplication operations that involve the components of the surface normal.
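To see what that folding amounts to, here is a minimal sketch of our own (not from the book): each product of the constants declared in Listing 12.4 below collapses to a single vec3 at compile time, leaving only a handful of multiply-adds on the normal's components.

// Assumed folded form; C1..C5 and L00..L22 as declared in Listing 12.4.
const vec3 K22  = C1 * L22;             // scales x*x - y*y
const vec3 K20  = C3 * L20;             // scales z*z
const vec3 K00  = C4 * L00 - C5 * L20;  // pure constant term
const vec3 K2m2 = 2.0 * C1 * L2m2;      // scales x*y
const vec3 K21  = 2.0 * C1 * L21;       // scales x*z
const vec3 K2m1 = 2.0 * C1 * L2m1;      // scales y*z
const vec3 K11  = 2.0 * C2 * L11;       // scales x
const vec3 K1m1 = 2.0 * C2 * L1m1;      // scales y
const vec3 K10  = 2.0 * C2 * L10;       // scales z

// DiffuseColor then reduces to:
// K22*(x*x - y*y) + K20*z*z + K00 + K2m2*x*y + K21*x*z + K2m1*y*z
//                 + K11*x + K1m1*y + K10*z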

Listing 12.4. Vertex shader for spherical harmonics lighting

varying vec3 DiffuseColor;
uniform float ScaleFactor;

const float C1 = 0.429043;
const float C2 = 0.511664;
const float C3 = 0.743125;
const float C4 = 0.886227;
const float C5 = 0.247708;

// Constants for Old Town Square lighting
const vec3 L00  = vec3( 0.871297,  0.875222,  0.864470);
const vec3 L1m1 = vec3( 0.175058,  0.245335,  0.312891);
const vec3 L10  = vec3( 0.034675,  0.036107,  0.037362);
const vec3 L11  = vec3(-0.004629, -0.029448, -0.048028);
const vec3 L2m2 = vec3(-0.120535, -0.121160, -0.117507);
const vec3 L2m1 = vec3( 0.003242,  0.003624,  0.007511);
const vec3 L20  = vec3(-0.028667, -0.024926, -0.020998);
const vec3 L21  = vec3(-0.077539, -0.086325, -0.091591);
const vec3 L22  = vec3(-0.161784, -0.191783, -0.219152);

void main()
{

    vec3 tnorm    = normalize(gl_NormalMatrix * gl_Normal);
    
    DiffuseColor =  C1 * L22 * (tnorm.x * tnorm.x - tnorm.y * tnorm.y) +
                    C3 * L20 * tnorm.z * tnorm.z +
                    C4 * L00 -
                    C5 * L20 +
                    2.0 * C1 * L2m2 * tnorm.x * tnorm.y +
                    2.0 * C1 * L21  * tnorm.x * tnorm.z +
                    2.0 * C1 * L2m1 * tnorm.y * tnorm.z +
                    2.0 * C2 * L11  * tnorm.x +
                    2.0 * C2 * L1m1 * tnorm.y +   
                    2.0 * C2 * L10  * tnorm.z;
    
    DiffuseColor *= ScaleFactor;
    
    gl_Position = ftransform();
}

Listing 12.5. Fragment shader for spherical harmonics lighting

varying vec3 DiffuseColor;

void main()
{
    gl_FragColor = vec4(DiffuseColor, 1.0);
}

Once again, our fragment shader has very little work to do. Because the diffuse reflection typically changes slowly, for scenes without large polygons we can reasonably compute it in the vertex shader and interpolate it during rasterization. As with hemisphere lighting, we can add procedurally defined point, directional, or spot lights on top of the spherical harmonics lighting to provide more illumination to important parts of the scene. Results of the spherical harmonics shader are shown in Color Plate 19. Compare Color Plate 19A with the Old Town Square environment map in Color Plate 9. Note that the top of the dog’s head has a bluish cast, while there is a brownish cast on his chin and chest. Coefficients for some of Paul Debevec’s light probe images provide even greater color variations. We could make the diffuse lighting from the spherical harmonics computation more subtle by blending it with the object’s base color, as sketched below.
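A hedged sketch of that blend, reusing the mix() pattern from Listing 12.3; BaseColor and DiffusePercent are assumed uniforms that are not part of Listing 12.4.

uniform vec3 BaseColor;        // assumed: the object's base color
uniform float DiffusePercent;  // assumed: 0.0 = base color only, 1.0 = full SH lighting

// ... at the end of main() in Listing 12.4:
DiffuseColor *= ScaleFactor;
DiffuseColor  = mix(BaseColor, DiffuseColor * BaseColor, DiffusePercent);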

The trade-offs in using image-based lighting versus procedurally defined lights are similar to the trade-offs between using stored textures versus procedural textures, as discussed in Chapter 11. Image-based lighting techniques can capture and recreate complex lighting environments relatively easily. It would be exceedingly difficult to simulate such an environment with a large number of procedural light sources. On the other hand, procedurally defined light sources do not use up texture memory and can easily be modified and animated.

The ÜberLight Shader

So far in this chapter we’ve discussed lighting algorithms that simulate the effect of global illumination for more realistic lighting effects. Traditional point, directional, and spotlights can be used in conjunction with these global illumination effects. However, the traditional light sources leave a lot to be desired in terms of their flexibility and ease of use.

Ronen Barzel of Pixar Animation Studios wrote a paper in 1997 that described a much more versatile lighting model specifically tailored for the creation of computer-generated films. This lighting model has so many features and controls compared to the traditional graphics hardware light source types that its RenderMan implementation became known as the “überlight” shader (i.e., the lighting shader that has everything in it except the proverbial kitchen sink). Larry Gritz wrote a public domain version of this shader that was published in Advanced RenderMan: Creating CGI for Motion Pictures, which he coauthored with Tony Apodaca. A Cg version of this shader was published by Fabio Pellacini and Kiril Vidimice of Pixar in the book GPU Gems, edited by Randima Fernando. The full-blown überlight shader has been used successfully in a variety of computer-generated films, including Toy Story, Monsters, Inc., and Finding Nemo. Because of the proven usefulness of the überlight shader, this section looks at how to implement its essential features in the OpenGL Shading Language.

ÜberLight Controls

In movies, lighting helps to tell the story in several different ways. Sharon Calahan gives a good overview of this process in the book Advanced RenderMan: Creating CGI for Motion Pictures. This description includes five important fundamentals of good lighting design that were derived from the book Matters of Light & Depth by Ross Lowell:

  • Directing the viewer’s eye

  • Creating depth

  • Conveying time of day and season

  • Enhancing mood, atmosphere, and drama

  • Revealing character personality and situation

Because of the importance of lighting to the final product, movies have dedicated lighting designers. To light computer graphics scenes, lighting designers must have an intuitive and versatile lighting model to use.

For the best results in lighting a scene, it is crucial to make proper decisions about the shape and placement of the lights. For the überlight lighting model, lights are assigned a position in world coordinates. The überlight shader uses a pair of superellipses to determine the shape of the light. A superellipse is a function that varies its shape from an ellipse to a rectangle, based on the value of a roundness parameter. By varying the roundness parameter, we can shape the beam of illumination in a variety of ways (see Figure 12.3 for some examples). The superellipse function is defined as

|x/a|^(2/d) + |y/b|^(2/d) = 1

where a and b set the extent of the shape along the x and y axes and d is the roundness parameter.


Figure 12.3. A variety of light beam shapes produced with the überlight shader. We enabled barn shaping and varied the roundness and edge width parameters of the superellipse shaping function. The top row uses edge widths of 0 and the bottom row uses 0.3. From left to right, the roundness parameter is set to 0.2, 0.5, 1.0, 2.0, and 4.0.

As the value for d nears 0, this function becomes the equation for a rectangle, and when d is equal to 1, the function becomes the equation for an ellipse. Values in between create shapes in between a rectangle and an ellipse, and these shapes are also useful for lighting. In the shader, this is referred to as barn shaping, since the devices used in the theater for shaping light beams are called barn doors.

It is also desirable to have a soft edge to the light, in other words, a gradual drop-off from full intensity to zero intensity. We accomplish this by defining a pair of nested superellipses. Inside the innermost superellipse, the light has full intensity. Outside the outermost superellipse, the light has zero intensity. In between, we can apply a gradual transition by using the smoothstep function. See Figure 12.3 for examples of lights with and without such soft edges.
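At its core, the soft edge is a single smoothstep between the two boundaries. A one-dimensional GLSL sketch (a hypothetical helper of ours, not part of the überlight shader itself):

float softEdge(float r, float inner, float outer)
{
    // 1.0 inside the inner boundary, 0.0 outside the outer boundary,
    // and a smooth Hermite transition in between
    return 1.0 - smoothstep(inner, outer, r);
}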

Two more controls that add to the versatility of this lighting model are the near and far distance parameters, also known as the cuton and cutoff values. These define the region of the beam that actually provides illumination (see Figure 12.4). Again, smooth transition zones are desired so that the lighting designer can control the transition. Of course, this particular control has no real-world analogy, but it has proved to be useful for softening the lighting in a scene and preventing the light from reaching areas where no light is desired. See Figure 12.5 for an example of the effect of modifying these parameters.

Figure 12.4. Effects of the near and far distance parameters for the überlight shader

Figure 12.5. Dramatic lighting effects achieved by alteration of the depth cutoff parameters of the überlight shader. In the first frame, the light barely reaches the elephant model. By simply adjusting the far depth edge value, we can gradually bathe our model in light.

Vertex Shader

Listing 12.6 shows the code for the vertex shader for the überlight model. The main purpose of the vertex shader is to transform vertex positions, surface normals, and the viewing (camera) position into the lighting coordinate system. In this coordinate system the light is at the origin and the z axis is pointed toward the origin of the world coordinate system. This allows us to more easily perform the lighting computations in the fragment shader. The computed values are passed to the fragment shader in the form of the varying variables LCpos, LCnorm, and LCcamera.

To perform these calculations, the application must provide ViewPosition, the position of the camera in world space, and WCLightPos, the position of the light source in world coordinates.

To do the necessary transformations, we need matrices that transform points from modeling coordinates to world coordinates (MCtoWC) and from world coordinates to the light coordinate system (WCtoLC). The corresponding matrices for transforming normals between the same coordinate systems are the inverse transpose matrices (MCtoWCit and WCtoLCit).

Listing 12.6. Überlight vertex shader

uniform vec3 WCLightPos;       // Position of light in world coordinates
uniform vec4 ViewPosition;     // Position of camera in world space
uniform mat4 WCtoLC;           // World to light coordinate transform
uniform mat4 WCtoLCit;         // World to light inverse transpose
uniform mat4 MCtoWC;           // Model to world coordinate transform
uniform mat4 MCtoWCit;         // Model to world inverse transpose

varying vec3 LCpos;            // Vertex position in light coordinates
varying vec3 LCnorm;           // Normal in light coordinates
varying vec3 LCcamera;         // Camera position in light coordinates

void main()
{
    gl_Position = ftransform();

    // Compute world space position and normal
    vec4 wcPos = MCtoWC * gl_Vertex;
    vec3 wcNorm = (MCtoWCit * vec4(gl_Normal, 0.0)).xyz;

    // Compute light coordinate system camera position,
    // vertex position and normal
    LCcamera = (WCtoLC * ViewPosition).xyz;
    LCpos = (WCtoLC * wcPos).xyz;
    LCnorm = (WCtoLCit * vec4(wcNorm, 0.0)).xyz;
}

Fragment Shader

With the key values transformed into the lighting coordinate system for the specified light source, the fragment shader (Listing 12.7) can perform the necessary lighting computations. One subroutine in this shader (superEllipseShape) computes the attenuation factor of the light across the cross section of the beam. This value is 1.0 for fragments within the inner superellipse, 0 for fragments outside the outer superellipse, and a value between 0 and 1.0 for fragments between the two superellipses. Another subroutine (distanceShape) computes a similar attenuation factor along the direction of the light beam. These two values are multiplied together to give us the illumination factor for the fragment.

The computation of the light reflection is done in a manner similar to shaders we’ve examined in previous chapters. Because the computed normals may become denormalized by linear interpolation, we must renormalize them in the fragment shader to obtain more accurate results. After the attenuation factors are computed, we perform a simple reflection computation that gives a plastic appearance. You could certainly modify these computations to simulate the reflection from some other type of material.

Listing 12.7. Überlight fragment shader

uniform vec3 SurfaceColor;

// Light parameters
uniform vec3 LightColor;
uniform vec3 LightWeights;

// Surface parameters
uniform vec3 SurfaceWeights;
uniform float SurfaceRoughness;
uniform bool AmbientClamping;

// Super ellipse shaping parameters
uniform bool BarnShaping;
uniform float SeWidth;
uniform float SeHeight;
uniform float SeWidthEdge;
uniform float SeHeightEdge;
uniform float SeRoundness;

// Distance shaping parameters
uniform float DsNear;
uniform float DsFar;
uniform float DsNearEdge;
uniform float DsFarEdge;

varying vec3 LCpos;          // Vertex position in light coordinates
varying vec3 LCnorm;         // Normal in light coordinates
varying vec3 LCcamera;       // Camera position in light coordinates

float superEllipseShape(vec3 pos)
{
   if (!BarnShaping)
       return 1.0;
   else
   {
      // Project the point onto the z = 1.0 plane
      vec2 ppos = pos.xy / pos.z;
      vec2 abspos = abs(ppos);

      float w = SeWidth;
      float W = SeWidth + SeWidthEdge;
      float h = SeHeight;
      float H = SeHeight + SeHeightEdge;

      float exp1 = 2.0 / SeRoundness;
      float exp2 = -SeRoundness / 2.0;

      float inner = w * h * pow(pow(h * abspos.x, exp1) +
                                pow(w * abspos.y, exp1), exp2);
      float outer = W * H * pow(pow(H * abspos.x, exp1) +
                                pow(W * abspos.y, exp1), exp2);

      return 1.0 - smoothstep(inner, outer, 1.0);
   }
}

float distanceShape(vec3 pos)
{
   float depth;

   depth = abs(pos.z);

   float dist = smoothstep(DsNear - DsNearEdge, DsNear, depth) *
                (1.0 - smoothstep(DsFar, DsFar + DsFarEdge, depth));
   return dist;
}

void main()
{
      vec3 tmpLightColor = LightColor;
      
      vec3 N = normalize(LCnorm);
      vec3 L = -normalize(LCpos);
      vec3 V = normalize(LCcamera-LCpos);
      vec3 H = normalize(L + V);
      
      vec3 tmpColor = tmpLightColor;

      float attenuation = 1.0;
      attenuation *= superEllipseShape(LCpos);
      attenuation *= distanceShape(LCpos);

      float ndotl = dot(N, L);
      float ndoth = dot(N, H);

      vec3 litResult;

      litResult[0] = AmbientClamping ? max(ndotl, 0.0) : 1.0;
      litResult[1] = max(ndotl, 0.0);
      litResult[2] = litResult[1] * max(ndoth, 0.0) * SurfaceRoughness;
      litResult *= SurfaceWeights * LightWeights;

      vec3 ambient = tmpLightColor * SurfaceColor * litResult[0];
      vec3 diffuse = tmpColor * SurfaceColor * litResult[1];
      vec3 specular = tmpColor * litResult[2];
      gl_FragColor = vec4(attenuation *
                         (ambient + diffuse + specular), 1.0);
}

An example of using this shader is shown in Color Plate 20, along with a screen shot of a user interface designed by Philip Rideout for manipulating its controls. The überlight shader as described by Barzel and Gritz actually has several additional features. It can support multiple lights, but our example shader showed just one for simplicity. The key parameters can be defined as arrays, and a loop can be executed to perform the necessary computations for each light source. In the following chapter, we show how to add shadows to this shader.
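A minimal sketch (ours, not from the published shader) of that multi-light extension: the shaping parameters and varying inputs become arrays, and the shaping functions take a light index. NumLights is an assumed constant, and the helper bodies are elided; they would read light i's parameters instead of the globals used in Listing 12.7.

const int NumLights = 2;             // assumed light count
uniform vec3 LightColor[NumLights];  // per-light parameter arrays
varying vec3 LCpos[NumLights];       // position in each light's coordinate system

// Per-light versions of the shaping functions from Listing 12.7 (bodies elided)
float superEllipseShape(int i, vec3 pos);
float distanceShape(int i, vec3 pos);

void main()
{
    vec3 color = vec3(0.0);
    for (int i = 0; i < NumLights; i++)
    {
        float attenuation = superEllipseShape(i, LCpos[i]) *
                            distanceShape(i, LCpos[i]);
        color += attenuation * LightColor[i];  // plus the surface terms of Listing 12.7
    }
    gl_FragColor = vec4(color, 1.0);
}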

Summary

The summary of this chapter is “Just say NO!” to the traditional computer graphics lighting model. Now that programmable graphics hardware has freed us from the shackles of the traditional hardware lighting equations, we are free to implement and experiment with a variety of new techniques. Some of the techniques we explored are both faster and more realistic than the traditional methods.

Hemisphere lighting is a simple way to approximate global illumination in a scene. Environment maps are very useful tools for simulating complex lighting environments. It is neither expensive nor difficult to capture images of real-world lighting conditions. Such light probe images can either be preprocessed and used to perform image-based lighting directly, or they can be preprocessed to compute spherical harmonic basis function coefficients that can be used for simple and high-performance lighting.

We’ve also seen that the traditional OpenGL fixed functionality lighting model leaves a lot to be desired in terms of flexibility and ease of use. Lighting models such as the one defined by the überlight shader are much more versatile and easier for artists to use.

Further Information

Hemisphere lighting has been popularized by Microsoft and is described in several presentations on DirectX. An online article, Per-Pixel Lighting, by Microsoft’s Phil Taylor describes this technique. Material in this article was derived from a talk given by Dan Baker and Chas Boyd at Meltdown 2001.

Image-based lighting builds on the foundations of texture mapping and reflection mapping first discussed by Jim Blinn and Martin Newell in 1976. Paul Debevec has recently popularized this area and maintains a Web site (http://www.debevec.org/) with lots of useful information on this topic, including a gallery of light probe images, the history of reflection mapping, electronic versions of his publications, and much more. A related Web site is http://www.hdrshop.com from which you can download the free (for personal and educational use) version of HDRShop or obtain the commercial version. There are also tutorials that helped me create the light probe images and environment maps described in this chapter.

Spherical harmonic lighting was described by Ravi Ramamoorthi and Pat Hanrahan in their 2001 SIGGRAPH paper. A lengthy and more tractable discussion of the details of this approach is available in a white paper by Robin Green of Sony.

The überlight shader for RenderMan is discussed in the book Advanced RenderMan: Creating CGI for Motion Pictures by Apodaca and Gritz. A Cg version of this shader is described by Fabio Pellacini and Kiril Vidimice in the book GPU Gems, edited by Randima Fernando.

  1. Apodaca, Anthony A., and Larry Gritz, Advanced RenderMan: Creating CGI for Motion Pictures, Morgan Kaufmann Publishers, San Francisco, 1999. http://www.renderman.org/RMR/Books/arman/materials.html

  2. Baker, Dan, and C. Boyd, Advanced Shading and Lighting, Microsoft Corp. Meltdown 2001 Presentation. http://www.microsoft.com/mscorp/corpevents/meltdown2001/ppt/DXGLighting.ppt

  3. Barzel, Ronen, Lighting Controls for Computer Cinematography, Journal of Graphics Tools, 2(1), 1997, pp. 1–20.

  4. Blinn, James, and M.E. Newell, Texture and Reflection in Computer Generated Images, Communications of the ACM, vol. 19, no. 10, pp. 542–547, October 1976.

  5. Debevec, Paul, Image-Based Lighting, IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 26–34. http://www.debevec.org/CGAIBL2/ibl-tutorial-cga2002.pdf

  6. Debevec, Paul, personal Web site. http://www.debevec.org

  7. Debevec, Paul, and J. Malik, Recovering High Dynamic Range Radiance Maps from Photographs, Computer Graphics (SIGGRAPH ’97 Proceedings), pp. 369–378. http://www.debevec.org/Research/HDR/

  8. Debevec, Paul, Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography, Computer Graphics (SIGGRAPH ’98 Proceedings), pp. 189–198. http://athens.ict.usc.edu/Research/IBL/

  9. GPU Gems: Programming Techniques, Tips, and Tricks for Real-Time Graphics, Editor: Randima Fernando, Addison-Wesley, Reading, Massachusetts, 2004. http://developer.nvidia.com/object/gpu_gems_home.html

  10. Green, Robin, Spherical Harmonic Lighting: The Gritty Details, GDC 2003 Presentation. http://www.research.scea.com/gdc2003/spherical-harmoniclighting.html

  11. Lowell, Ross, Matters of Light & Depth, Lowel-Light Manufacturing, 1992.

  12. Ramamoorthi, Ravi, and P. Hanrahan, An Efficient Representation for Irradiance Environment Maps, Computer Graphics (SIGGRAPH 2001 Proceedings), pp. 497–500. http://www1.cs.columbia.edu/~ravir/papers/envmap/index.html

  13. Taylor, Philip, Per-Pixel Lighting, Microsoft Corp., Nov. 2001.
