Chapter 14. Surface Characteristics

Up to this point, we have primarily been modeling surface reflection in a simplistic way. The traditional reflection model is simple enough to compute and gives reasonable results, but it reproduces the appearance of only a small number of materials. In the real world, there is enormous variety in the way objects interact with light. To simulate the interaction of light with materials such as water, metals, metallic car paints, CDs, human skin, butterfly wings, and peacock feathers, we need to go beyond the basics.

One way to achieve greater degrees of realism in modeling the interaction between light and surfaces is to use models that are more firmly based on the physics of light reflection, absorption, and transmission. Such models have been the pursuit of graphics researchers since the 1970s. With the programmable graphics hardware of today, we are now at a point where such models can be used in real-time graphics applications to achieve unprecedented realism. A performance optimization that is often employed is to precompute these functions and store the results in textures that can be accessed from within a shader. A second way to achieve greater degrees of realism is to measure or photograph the characteristics of real materials and use these measurements in our shading algorithms. In this chapter, we look at shaders based on both approaches.

In addition, we have not yet looked at any shaders that allow for the transmission of light through a surface. That is where we begin as we look at several shaders that model materials with differing surface characteristics.

Refraction

Refraction is the bending of light as it passes through a boundary between surfaces with different optical densities. You can easily see this effect by looking through the side of an aquarium or at a straw in a glass of water. Light bends by different amounts as it passes from one material to another because light travels at different speeds in different types of materials. This characteristic of a material is called its INDEX OF REFRACTION, and this value has been determined for many common materials that transmit light. It is easy to model refraction in our shaders with the built-in function refract. The key parameter that is required is the ratio of the indices of refraction for the two materials forming a boundary where refraction occurs. The application can compute this ratio and provide it to the OpenGL shader as a uniform variable. Given an incident vector, a surface normal, and the aforementioned ratio, the refract function applies Snell’s law to compute the refracted vector. We can use the refracted vector in a fragment shader to access a cube map to determine the surface color for a transparent object.
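
If the shader obtained this ratio from a uniform variable, the application-side setup might look like the following sketch. The program object name programObj and the uniform name Eta are illustrative assumptions; the shaders later in this section simply hard-code the ratio as a constant instead.

// Compute the ratio of indices of refraction for an air-glass boundary
// and hand it to the shader; "programObj" is a previously linked
// program object (hypothetical name).
GLfloat airIndex   = 1.000293f;   /* index of refraction of air   */
GLfloat glassIndex = 1.52f;       /* index of refraction of glass */

glUseProgram(programObj);
glUniform1f(glGetUniformLocation(programObj, "Eta"),
            airIndex / glassIndex);   /* the eta argument to refract() */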

Once again, our goal is to produce results that are “good enough.” In other words, we’re after a refraction effect that looks plausible, rather than a physically accurate simulation. One simplification that we make is that we model the refraction effect at only one surface boundary. When light goes from air through glass, it is refracted once at the air-glass boundary, transmitted through the glass, and refracted again at the glass-air boundary on the other side. We satisfy ourselves with simulating the first refraction effect. The results of refraction are complex enough that most people would not be able to tell the difference in the final image.

If we go ahead and write a shader that performs refraction, we will likely be somewhat disappointed in the results. It turns out that most transparent objects exhibit both reflection and refraction. The surface of a lake reflects the mountains in the distance if you are looking at the lake from one side. But if you get into your boat and go out into the lake and look straight down, you may see fish swimming around underneath the surface. This is known as the FRESNEL EFFECT. The Fresnel equations describe the reflection and refraction that occur at a material boundary as a function of the angle of incidence, the polarization and wavelength of the light, and the indices of refraction of the materials involved. It turns out that many materials exhibit a higher degree of reflectivity at extremely shallow (grazing) angles. Even a material such as nonglossy paper exhibits this phenomenon. For instance, hold a sheet of paper (or a book) so that you are looking at a page at a grazing angle and looking towards a light source. You will see a specular (mirrorlike) reflection from the paper, something you wouldn’t see at steeper angles.

Because the Fresnel equations are relatively complex, we make the simplifying assumptions that (A) the light in our scene is not polarized, (B) all light is of the same wavelength (but we loosen this assumption later in this section), and (C) it is sufficient to use an approximation to the Fresnel equations rather than the exact equations themselves. An approximation for the ratio between reflected light and refracted light, created by Christophe Schlick, is

F = f + (1 − f)(1 − V·N)^5

In this equation, V is the direction of view, N is the surface normal, and f is the reflectance of the material when θ is 0 (normal incidence), given by

f = (1 − n1/n2)^2 / (1 + n1/n2)^2

where n1 and n2 are the indices of refraction for materials 1 and 2.
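
To make this concrete with the values used in the shaders below: for an air-to-glass boundary, n1/n2 = 1.000293/1.52 ≈ 0.66, so f = (1 − 0.66)^2 / (1 + 0.66)^2 = 0.1156/2.7556 ≈ 0.042. In other words, glass viewed head-on reflects only about 4 percent of the incident light, and the (1 − V·N)^5 term then drives F toward 1.0 as the viewing angle approaches grazing.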

Let’s put this together in a shader. Figure 14.1 shows the relevant parameters in two dimensions. For the direction of view V, we want to compute a reflected ray and a refracted ray. We use each of these to access a texture in a cube map. We linearly blend the two values with a ratio we compute using the Fresnel approximations described above.


Figure 14.1. The geometry of refraction

In The Cg Tutorial, Randima Fernando and Mark Kilgard describe Cg shaders for refraction that can easily be implemented in GLSL. The code for our vertex shader is shown in Listing 14.1. The ratio of indices of refraction for the two materials is precomputed and stored in the constant Eta. A value of 0.66 represents a boundary between air (index of refraction 1.000293) and glass (index of refraction 1.52). We can allow the user to control the amount of reflectivity at grazing angles by using a variable for the Fresnel power. Lower values provide higher degrees of reflectivity at grazing angles, whereas higher values reduce this effect. The value for f in the equations above is also stored as a constant. (We could have the application provide Eta and FresnelPower as uniform variables. This would then require the application to compute and pass F as well.)

The vertex shader uses the viewing position and the surface normal to compute a reflected ray and a refracted ray. The vertex position is transformed into eye coordinates. The reflect and refract functions both require an incident vector. This is just the vector going in the direction opposite of V in Figure 14.1. We compute this vector (i) by subtracting the viewing position (which is defined as being at (0, 0, 0) in the eye coordinate system) from the eye coordinate position and normalizing the result. We also transform the surface normal into the eye coordinate system and normalize it (n).

To compute the angle θ, we really need the vector V as shown in Figure 14.1 instead of i so that we can perform a dot product operation. We get this vector by negating i. We plug the values into the Fresnel approximation equation to get the ratio between the reflective and refractive components.

The values for i and n are sent to the built-in functions reflect and refract to compute a reflected vector and a refracted vector. These are used in the fragment shader to access the environment map. The application that uses these shaders allows the environment map to be rotated independently of the geometry. This transformation is stored in one of OpenGL’s texture matrices. The resulting rays must be transformed with this matrix to access the proper location in the rotated environment.
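
As a sketch of the application side (the angle variable envAngle is a hypothetical application-controlled value), the rotation can be loaded into texture matrix 0, which the shaders read through gl_TextureMatrix[0]:

// Load the environment rotation into the texture matrix for unit 0.
glActiveTexture(GL_TEXTURE0);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glRotatef(envAngle, 0.0f, 1.0f, 0.0f);   /* spin the cube map about y */
glMatrixMode(GL_MODELVIEW);              /* restore the usual mode    */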

Listing 14.1. Vertex shader for Fresnel reflection/refraction effect

const float Eta = 0.66;         // Ratio of indices of refraction
const float FresnelPower = 5.0;

const float F  = ((1.0-Eta) * (1.0-Eta)) / ((1.0+Eta) * (1.0+Eta));

varying vec3  Reflect;
varying vec3  Refract;
varying float Ratio;

void main()
{

    vec4 ecPosition  = gl_ModelViewMatrix * gl_Vertex;
    vec3 ecPosition3 = ecPosition.xyz / ecPosition.w;

    vec3 i = normalize(ecPosition3);
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);

    Ratio   = F + (1.0 - F) * pow((1.0 - dot(-i, n)), FresnelPower);

    Refract = refract(i, n, Eta);
    Refract = vec3(gl_TextureMatrix[0] * vec4(Refract, 1.0));

    Reflect = reflect(i, n);
    Reflect = vec3(gl_TextureMatrix[0] * vec4(Reflect, 1.0));

    gl_Position = ftransform();
}

The corresponding fragment shader is shown in Listing 14.2. All the hard work has been done in the vertex shader. All that remains for the fragment shader is to perform the two environment map lookups and to use the computed ratio to blend the two values.

Listing 14.2. Fragment shader for Fresnel reflection/refraction effect

varying vec3  Reflect;
varying vec3  Refract;
varying float Ratio;

uniform samplerCube Cubemap;

void main()
{
    vec3 refractColor = vec3(textureCube(Cubemap, Refract));
    vec3 reflectColor = vec3(textureCube(Cubemap, Reflect));

    vec3 color   = mix(refractColor, reflectColor, Ratio);

    gl_FragColor = vec4(color, 1.0);
}

With a small modification, we can get our reflection/refraction shader to perform another cool effect, although we stray a bit further from realistic physics. As stated earlier, the refraction of light is wavelength dependent. We made the simplifying assumption that all our light was a single wavelength, and this allowed us to compute a single refracted ray. In reality, there would be a continuum of refracted rays, one for each constituent wavelength of the light source. The breaking up of a light source into its constituent components, for example, with a prism, is called CHROMATIC DISPERSION. In camera lenses, this effect is undesirable and is called CHROMATIC ABERRATION.

We can model our light as though it contains three wavelengths of light: red, green, and blue. By providing a slightly different index of refraction for each of red, green, and blue, we can compute three slightly different refraction rays (see Listing 14.3). These three rays are passed to the fragment shader, where they perform three environment map accesses. The RefractR ray obtains just the red component of the final refracted color, and RefractG and RefractB obtain the green and blue components similarly. The result is used as the refracted color value. The remainder of the fragment shader is the same (see Listing 14.4).

Listing 14.3. Vertex shader for chromatic aberration effect

const float EtaR = 0.65;
const float EtaG = 0.67;         // Ratio of indices of refraction
const float EtaB = 0.69;
const float FresnelPower = 5.0;

const float F  = ((1.0-EtaG) * (1.0-EtaG)) / ((1.0+EtaG) * (1.0+EtaG));

varying vec3  Reflect;
varying vec3  RefractR;
varying vec3  RefractG;
varying vec3  RefractB;
varying float Ratio;

void main()
{
    vec4 ecPosition  = gl_ModelViewMatrix * gl_Vertex;
    vec3 ecPosition3 = ecPosition.xyz / ecPosition.w;

    vec3 i = normalize(ecPosition3);
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);

    Ratio   = F + (1.0 - F) * pow((1.0 - dot(-i, n)), FresnelPower);

    RefractR = refract(i, n, EtaR);
    RefractR = vec3(gl_TextureMatrix[0] * vec4(RefractR, 1.0));
    RefractG = refract(i, n, EtaG);
    RefractG = vec3(gl_TextureMatrix[0] * vec4(RefractG, 1.0));

    RefractB = refract(i, n, EtaB);
    RefractB = vec3(gl_TextureMatrix[0] * vec4(RefractB, 1.0));

    Reflect  = reflect(i, n);
    Reflect  = vec3(gl_TextureMatrix[0] * vec4(Reflect, 1.0));

    gl_Position = ftransform();
}

Listing 14.4. Fragment shader for chromatic aberration effect

varying vec3  Reflect;
varying vec3  RefractR;
varying vec3  RefractG;
varying vec3  RefractB;
varying float Ratio;

uniform samplerCube Cubemap;

void main()
{
    vec3 refractColor, reflectColor;

    refractColor.r = vec3(textureCube(Cubemap, RefractR)).r;
    refractColor.g = vec3(textureCube(Cubemap, RefractG)).g;
    refractColor.b = vec3(textureCube(Cubemap, RefractB)).b;

    reflectColor   = vec3(textureCube(Cubemap, Reflect));

    vec3 color     = mix(refractColor, reflectColor, Ratio);

    gl_FragColor   = vec4(color, 1.0);
}

Results of these shaders are shown in Color Plate 17. Notice the color fringes that occur on the character’s knee and chest and on the top of his arm.

Diffraction

by Mike Weiblen

DIFFRACTION is the effect of light bending around a sharp edge. A device called a DIFFRACTION GRATING leverages that effect to efficiently split white light into the rainbow of its constituent colors. Jos Stam described how to approximate this effect, first with assembly language shaders (in a SIGGRAPH ’99 paper) and then with Cg (in an article in the book GPU Gems). Let’s see how we can approximate the behavior of a diffraction grating with an OpenGL shader.

First, let’s quickly review the wave theory of light and diffraction gratings. One way of describing the behavior of visible light is as waves of electromagnetic radiation. The distance between crests of those waves is called the wavelength, usually represented by the Greek letter lambda (λ).

The wavelength determines the color we perceive when the light hits the sensing cells on the retina of the eye. The human eye is sensitive to wavelengths ranging from about 400 nanometers (nm) for deep violet up to about 700 nm for dark red. Within that range are what humans perceive as all the colors of the rainbow.

A diffraction grating is a tool for separating light based on its wavelength, similar in effect to a prism but using diffraction rather than refraction. Diffraction gratings typically are very closely spaced parallel lines in an opaque or reflective material. They were originally made with a mechanical ruling engine that precisely scribed parallel lines onto the surface of a mirror. Modern gratings are usually created with photographic processes.

The lines of a grating have a spacing roughly on the order of the wavelengths of visible light. Because of the difference in path length when white light is reflected from adjacent mirrored lines, the different wavelengths of reflected light reinforce or cancel, depending on whether the waves interfere constructively or destructively.

For a given wavelength, if the path length of light reflecting from two adjacent lines differs by an integer number of wavelengths (meaning that the crests of the waves reflected from each line coincide), that color of light constructively interferes and reinforces in intensity. If the path difference is an integer number of wavelengths plus half a wavelength (meaning that crests of waves from one line coincide with troughs from the other line), those waves destructively interfere and extinguish at that wavelength. That interference condition varies according to the wavelength of the light, the spacing of the grating lines, and the angle of the light’s path (both incident and reflected) with respect to the grating surface. Because of that interference, white light breaks into its component colors as the light source and eyepoint move with respect to the diffracting surface.
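
Stated as an equation (using the usual sign convention, with the angles measured in the plane perpendicular to the grating lines), constructive interference for a grating with line spacing d occurs at wavelengths satisfying

mλ = d(sin θi + sin θr),  m = 1, 2, 3, ...

where each integer m is called a spectral order. This is essentially the computation performed by the shader in Listing 14.5: because the halfway vector H = L + V is left unnormalized there, the projection u = |T·H| equals sin θi + sin θr, and the shader’s loop solves for the reinforced wavelength λ = d·u/m for the first few spectral orders.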

Everyday examples of diffraction gratings include compact discs, novelty “holographic” gift-wrapping papers, and the rainbow logos on modern credit cards used to discourage counterfeiting.

To demonstrate this shader, we use the everyday compact disc as a familiar example; extending this shader for other applications is straightforward.

While everyone is familiar with the overall physical appearance of a CD, let’s look at the microscopic characteristics that make it a functional diffraction grating. A CD consists of one long spiral of microscopic pits embossed onto one side of a sheet of mirrored plastic. The dimensions of those pits are on the order of several hundred nanometers, the same order of magnitude as the wavelengths of visible light. The track pitch of the spiral of pits (i.e., the spacing between each winding of the spiral) is nominally 1600 nanometers. The range of those dimensions, being so close to visible wavelengths, is what gives a CD its rainbow light-splitting qualities.

Our diffraction shader computes two independent output color components per vertex:

  1. An anisotropic glint that reflects the color of the light source

  2. A color based on accumulation of the wavelengths that constructively interfere for the given light source and eyepoint locations.

We can do this computation just by using a vertex shader. No fragment processing beyond typical OpenGL fixed functionality is necessary. Therefore, we write this vertex shader to take advantage of OpenGL’s capability to combine programmable and fixed functionality processing. The vertex shader writes to the built-in varying variable gl_FrontColor and the special output variable gl_Position, and no programmable fragment processing is necessary.

The code for the diffraction vertex shader is shown in Listing 14.5. To render the diffraction effect, the shader requires the application to send a normal and tangent for each vertex. For this shader, the tangent is defined to be parallel to the orientation of the simulated grating lines. In the case of a compact disc (which has a spiral of pits on its mirrored surface), the close spacing of that spiral creates a diffraction grating of basically concentric circles, so the tangent is tangent to those circles.

Since the shader uses wavelength to compute the constructive interference, we need to convert wavelength to OpenGL’s RGB representation of color. We use the function lambda2rgb, which approximates the conversion of wavelength to RGB by using a bump function. We begin the conversion by mapping the range of visible wavelengths to a normalized 0.0 to 1.0 range. From that normalized wavelength, we create a vec3 by subtracting an offset for each of the red/green/blue bands. Then for each color component, we compute the contribution with the bump expression 1 − Cx^2, clamped to the range [0, 1]. The C term controls the width of the bump and is selected for best appearance by allowing the bumps to overlap somewhat, approximating a relatively smooth rainbow spread. This bump function is quick and easy to implement, but we could take another approach to the wavelength-to-RGB conversion, for example, using the normalized wavelength to index into a lookup table or into a 1D texture tuned for enhanced spectral qualities.

More than one wavelength can satisfy the constructive interference condition at a vertex for a given set of lighting and viewing conditions, so the shader must accumulate the contribution from each of those wavelengths. Using the condition that constructive interference occurs at path differences of integer wavelength, the shader iterates over those integers to determine the reinforced wavelength. That wavelength is converted to an RGB value by the lambda2rgb function and accumulated in diffColor.

A specular glint of HighlightColor is reflected from the grating lines in the region where diffractive interference does not occur. The SurfaceRoughness term controls the width of that highlight to approximate the scattering of light from the microscopic pits.

The final steps of the shader consist of the typical vertex transformation to compute gl_Position and the summing of the lighting contributions to determine gl_FrontColor. The diffAtten term attenuates the diffraction color slightly to prevent the colors from being too intensely garish.

One simplification we made in this shader: Rather than attempt to represent the spectral composition of the HighlightColor light source, we assume the incident light is a flat spectrum of white light.

Because this is solely a vertex shader, the coloring is computed only at vertices. Since diffraction gratings can produce dramatic changes in color over small displacements, there is an opportunity for artifacts caused by insufficient tessellation. Depending on the choice of performance trade-offs, this shader could easily be ported to a fragment shader if per-pixel shading is preferred.

Results from the diffraction shader are shown in Figure 14.2 and Color Plate 17.


Figure 14.2. The diffraction shader simulates the look of a vinyl phonograph record (3Dlabs, Inc.)

Listing 14.5. Vertex shader for diffraction effect

attribute vec3 Tangent;     // parallel to grating lines at each vertex

// values set by the application
uniform float GratingSpacing;    // spacing of the grating lines
uniform float SurfaceRoughness;  // width control for the specular glint
uniform vec3  HighlightColor;    // color of the anisotropic glint

// map a visible wavelength [nm] to OpenGL's RGB representation

vec3 lambda2rgb(float lambda)
{
    const float ultraviolet = 400.0;
    const float infrared    = 700.0;

    // map visible wavelength range to 0.0 -> 1.0
    float a = (lambda-ultraviolet) / (infrared-ultraviolet);

    // bump function for a quick/simple rainbow map
    const float C = 7.0;        // controls width of bump
    vec3 b = vec3(a) - vec3(0.75, 0.5, 0.25);
    return max((1.0 - C * b * b), 0.0);
}
void main()
{
    // extract the light position from built-in state;
    // the viewer is at the origin in eye coordinates
    vec3 lightPosition = gl_LightSource[0].position.xyz;
    vec3 eyePosition   = vec3(0.0);

    // H = halfway vector between light and viewer from vertex
    vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 L = normalize(lightPosition - P);
    vec3 V = normalize(eyePosition - P);
    vec3 H = L + V;

    // accumulate contributions from constructive interference
    // over several spectral orders.
    vec3 T  = gl_NormalMatrix * Tangent;
    float u = abs(dot(T, H));
    vec3 diffColor = vec3(0.0);
    const int numSpectralOrders = 3;
    for (int m = 1; m <= numSpectralOrders; ++m)
    {
        float lambda = GratingSpacing * u / float(m);
        diffColor += lambda2rgb(lambda);
    }

    // compute anisotropic highlight for zero-order (m = 0) reflection.
    vec3  N = gl_NormalMatrix * gl_Normal;
    float w = dot(N, H);
    float e = SurfaceRoughness * u / w;
    vec3 hilight = exp(-e * e) * HighlightColor;

    // write the values required for fixed function fragment processing
    const float diffAtten = 0.8; // attenuation of the diffraction color
    gl_FrontColor = vec4(diffAtten * diffColor + hilight, 1.0);
    gl_Position = ftransform();
}

BRDF Models

The traditional OpenGL reflectance model and the one that we have been using for most of the previous shader examples in this book (see, for example, Section 6.2) consists of three components: ambient, diffuse, and specular. The ambient component is assumed to provide a certain level of illumination to everything in the scene and is reflected equally in all directions by everything in the scene. The diffuse and specular components are directional in nature and are due to illumination from a particular light source. The diffuse component models reflection from a surface that is scattered in all directions. The diffuse reflection is strongest where the surface normal points directly at the light source, and it drops to zero where the surface normal is pointing 90° or more away from the light source. Specular reflection models the highlights caused by reflection from surfaces that are mirrorlike or nearly so. Specular highlights are concentrated on the mirror direction.

But relatively few materials have perfectly specular (mirrorlike) or diffuse (Lambertian) reflection characteristics. To model more physically realistic surfaces, we must go beyond the simplistic lighting/reflection model that is built into OpenGL. This model was developed empirically and is not physically accurate. Furthermore, it can realistically simulate the reflection from only a relatively small class of materials.

For more than two decades, computer graphics researchers have been rendering images with more realistic reflection models called BIDIRECTIONAL REFLECTANCE DISTRIBUTION FUNCTIONS, or BRDFS. A BRDF model for computing the reflection from a surface takes into account the input direction of incoming light and the outgoing direction of reflected light. The elevation and azimuth angles of these direction vectors are used to compute the relative amount of light reflected in the outgoing direction (the fixed functionality OpenGL model uses only the elevation angle). A BRDF model renders surfaces with ANISOTROPIC reflection properties (i.e., surfaces that are not rotationally invariant in their surface reflection properties). Instruments have been developed to measure the BRDF of real materials. In some cases, the measured data has been used to create a function with a few parameters that can be modified to model the reflective characteristics of a variety of materials. In other cases, the measured data has been sampled to produce texture maps that reconstruct the BRDF function at runtime. A variety of different measuring, sampling, and reconstruction methods have been devised to use BRDFs in computer graphics, and this is still an area of active research.

Generally speaking, the amount of light that is reflected to a particular viewing position depends on the position of the light, the position of the viewer, and the surface normal and tangent. If any of these changes, the amount of light reflected to the viewer may also change. The surface characteristics also play a role because different wavelengths of light may be reflected, transmitted, or absorbed, depending on the physical properties of the material. Shiny materials have concentrated, near-mirrorlike specular highlights. Rough materials have specular highlights that are more spread out. Metals have specular highlights that are the color of the metal rather than the color of the light source. The color of reflected light may change as the reflection approaches a grazing angle with the surface. Materials with small brush marks or grooves reflect light differently as they are rotated, and the shapes of their specular highlights also change. These are the types of effects that BRDF models are intended to accurately reproduce.

A BRDF is a function of two pairs of angles as well as the wavelength and polarization of the incoming light. The angles are the altitude and azimuth of the incident light vector (θi, φi) and the altitude and azimuth of the reflected light vector (θr, φr). Both sets of angles are given with respect to a given tangent vector. For simplicity, some BRDF models omit polarization effects and assume that the function is the same for all wavelengths. Because the incident and reflected light vectors are measured against a fixed tangent vector in the plane of a surface, BRDF models can reproduce the reflective characteristics of anisotropic materials such as brushed or rolled metals. And because both the incident and reflected light vectors are considered, BRDF models can also reproduce the changes in specular highlight shapes or colors that occur when an object is illuminated by a light source at a grazing angle.
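
Formally, a BRDF is defined as the ratio of the radiance reflected in the outgoing direction to the irradiance arriving from the incident direction:

ρbd(θi, φi, θr, φr) = dLr(θr, φr) / (Li(θi, φi) cos θi dωi)

This definition is why the function has units of inverse steradians and why a cos θi factor accompanies the BRDF whenever it appears in an illumination equation, as we see later in this section.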

BRDF models can either be theoretical or empirical. Theoretical models attempt to model the physics of light and materials in order to reproduce the observed reflectance properties. In contrast, an empirical model is a function with adjustable parameters that is designed to fit measured reflectance data for a certain class of materials. The volume of measured data typically prohibits its direct use in a computer graphics environment, and this data is often imperfect or incomplete. Somehow, the measured data must be boiled down to a few useful values that can be plugged into a formula or used to create textures that can be accessed during rendering. A variety of methods for reducing the measured data have been developed.

One such model was described by Greg Ward in a 1992 SIGGRAPH paper. He and his colleagues at Lawrence Berkeley Laboratory built a device that was relatively efficient in collecting reflectance data from a variety of materials. The measurements were the basis for creating a mathematical reflectance model that provided a reasonable approximation to the measured data. Ward’s goal was to produce a simple empirical formula that was physically valid and fit the measured reflectance data for a variety of different materials. Ward measured the reflectivity of various materials to determine a few key values with physical meaning and plugged those values into the formula he developed to replicate the measured data in a computer graphics environment.

To understand Ward’s model, we should first review the geometry involved, as shown in Figure 14.3. This diagram shows a point on a surface and the relevant direction vectors that are used in the reflection computation:


Figure 14.3. The geometry of reflection

  • N is the unit surface normal.

  • L is the unit vector in the direction of the simulated light source.

  • V is the unit vector in the direction of the viewer.

  • R is the unit vector in the direction of reflection from the simulated light source.

  • H is the unit angular bisector of V and L (sometimes called the halfway vector).

  • T is a unit vector in the plane of the surface that is perpendicular to N (i.e., the tangent).

  • B is a unit vector in the plane of the surface that is perpendicular to both N and T (i.e., the binormal).

The formula developed by Ward is based on a Gaussian reflectance model. The key parameters of the formula are the diffuse reflectivity of the surface (ρd), the specular reflectivity of the surface (ρs), and the standard deviation of the surface slope (α). The final parameter is a measure of the roughness of a surface. The assumption is that a surface is made up of tiny microfacets that reflect in a specular fashion. For a mirrorlike surface, all the microfacets line up with the surface. For a rougher surface, some random orientation of these microfacets causes the specular highlight to spread out more. The fraction of facets that are oriented in the direction of H is called the facet slope distribution function, or the surface slope. Several possibilities for this function have been suggested.

With only a single value for the surface slope, the mathematical model is limited in its ability to reproduce materials exhibiting anisotropic reflection. To deal with this, Ward’s model includes two α values: one for the standard deviation of the surface slope in the x direction (i.e., in the direction of the surface tangent vector T) and one for the standard deviation of the surface slope in the y direction (i.e., in the direction of the surface binormal vector B). The formula used by Ward to fit his measured reflectance data, with the key parameters derived from that data, is

ρbd(θi, φi, θr, φr) = ρd/π + ρs · (1 / √(cos θi cos θr)) · (1 / (4π αx αy)) · exp(−2((H·T/αx)^2 + (H·B/αy)^2) / (1 + H·N))

This formula looks a bit onerous, but Ward has supplied values for ρd, ρs, αx, and αy for several materials, and all we need to do is code the formula in the OpenGL Shading Language. The result of this BRDF is plugged into the overall equation for illumination, which looks like this

Lr(θr, φr) = (ρd/π) · I + ρs · Ls + Σ(i=1 to N) ωi · Li · cos θi · ρbd(θi, φi, θr, φr)

This formula basically states that the reflected radiance is the sum of a general indirect radiance contribution, plus an indirect semispecular contribution, plus the radiance from each of N light sources in the scene. I is the indirect radiance, Ls is the radiance from the indirect semispecular contribution, and Li is the radiance from light source i. For the remaining terms, ωi is the solid angle in steradians of light source i, and ρbd is the BRDF defined in the previous equation.

This all translates quite easily into OpenGL Shading Language code. To get higher-quality results, we compute all the vectors in the vertex shader, interpolate them, and then perform the reflection computations in the fragment shader.

The application is expected to provide four attributes for every vertex. Two of them are standard OpenGL attributes and need not be defined by our vertex program: gl_Vertex (position) and gl_Normal (surface normal). The other two attributes are a tangent vector and a binormal vector, which the application computes. These two attributes should be provided to OpenGL with either the glVertexAttrib function or a generic vertex array. The location to be used for these generic attributes can be bound to the appropriate attribute in our vertex shader with glBindAttribLocation. For instance, if we choose to pass the tangent values in vertex attribute location 3 and the binormal values in vertex attribute location 4, we would set up the binding with these lines of code:

glBindAttribLocation(programObj, 3, "Tangent");
glBindAttribLocation(programObj, 4, "Binormal");

If the variable tangent is defined to be an array of three floats and binormal is also defined as an array of three floats, we can pass in these generic vertex attributes by using the following calls:

glVertexAttrib3fv(3, tangent);
glVertexAttrib3fv(4, binormal);

Alternatively, we could pass these values to OpenGL by using generic vertex arrays.
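
A minimal sketch of that alternative, assuming tangents and binormals are hypothetical tightly packed arrays of three floats per vertex:

// Point generic attributes 3 and 4 at per-vertex arrays instead of
// setting a single current value with glVertexAttrib3fv.
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, tangents);
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 0, binormals);
glEnableVertexAttribArray(3);
glEnableVertexAttribArray(4);

/* ... draw with glDrawArrays or glDrawElements ... */

glDisableVertexAttribArray(3);
glDisableVertexAttribArray(4);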

Listing 14.6 contains the vertex shader. Its primary job is to compute and normalize the vectors shown in Figure 14.3, namely, the unit vectors N, L, V, H, R, T, and B. We compute the values for N, T, and B by transforming the application-supplied normal, tangent, and binormal into eye coordinates. We compute the reflection vector R by using the built-in function reflect. We determine L by normalizing the direction to the light source. Because the viewing position is defined to be at the origin in eye coordinates, we compute V by transforming the viewing position into eye coordinates and subtracting the surface position in eye coordinates. H is the normalized sum of L and V. All of these values except V are stored in varying variables that will be interpolated and made available to the fragment shader.

Listing 14.6. Vertex shader for rendering with Ward’s BRDF model

attribute vec3 Tangent;
attribute vec3 Binormal;

uniform vec3 LightDir;  // Light direction in eye coordinates
uniform vec4 ViewPosition;

varying vec3 N, L, H, R, T, B;

void main()
{
    vec3 V, eyeDir;
    vec4 pos;

    pos    = gl_ModelViewMatrix * gl_Vertex;
    eyeDir = pos.xyz;

    N = normalize(gl_NormalMatrix * gl_Normal);
    L = normalize(LightDir);
    V = normalize((gl_ModelViewMatrix * ViewPosition).xyz - pos.xyz);
    H = normalize(L + V);
    R = normalize(reflect(eyeDir, N));
    T = normalize(gl_NormalMatrix * Tangent);
    B = normalize(gl_NormalMatrix * Binormal);

    gl_Position = ftransform();
}

It is then up to the fragment shader to implement the equations defined previously. The values that parameterize a material (ρd, ρs, αx, αy) are passed as the uniform variables P and A. We can use the values from the table published in Ward’s paper or try some values of our own. The base color of the surface is also passed as a uniform variable (Ward’s measurements did not include color for any of the materials). Instead of dealing with the radiance and solid angles of light sources, we just use a uniform variable to supply coefficients that manipulate these terms directly.
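
A sketch of that application-side setup follows. The numeric values are illustrative placeholders for a mildly anisotropic material, not entries from Ward’s published table, and programObj is again an assumed program object name.

// Supply material and scaling uniforms for the Ward BRDF shaders.
glUseProgram(programObj);
glUniform4f(glGetUniformLocation(programObj, "SurfaceColor"),
            0.8f, 0.75f, 0.6f, 1.0f);  /* base color                  */
glUniform2f(glGetUniformLocation(programObj, "P"),
            0.1f, 0.3f);               /* rho_d, rho_s                */
glUniform2f(glGetUniformLocation(programObj, "A"),
            0.05f, 0.16f);             /* alpha_x, alpha_y            */
glUniform3f(glGetUniformLocation(programObj, "Scale"),
            1.0f, 4.0f, 0.5f);         /* weights for the three terms */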

The vectors passed as varying variables become denormalized during interpolation, but if the polygons in the scene are all relatively small, this effect is hard to notice. For this reason, we can usually skip the step of renormalizing these values in the fragment shader. The first three lines of code in the fragment shader (Listing 14.7) compute the expression in the exp function from Ward’s BRDF. The next two lines obtain the necessary cosine values by computing the dot product of the appropriate vectors. We then use these values to compute the value for brdf, which is the same as ρbd in the equations above. The next equation puts it all together into an intensity value that attenuates the base color for the surface. The attenuated value becomes the final color for the fragment.

Listing 14.7. Fragment shader for rendering with Ward’s BRDF model

const float PI = 3.14159;
const float ONE_OVER_PI = 1.0 / PI;

uniform vec4 SurfaceColor; // Base color of surface
uniform vec2 P;            // Diffuse (x) and specular reflectance (y)
uniform vec2 A;            // Slope distribution in x and y
uniform vec3 Scale;        // Scale factors for intensity computation

varying vec3 N, L, H, R, T, B;

void main()
{
    float e1, e2, E, cosThetaI, cosThetaR, brdf, intensity;

    e1 = dot(H, T) / A.x;
    e2 = dot(H, B) / A.y;
    E = -2.0 * ((e1 * e1 + e2 * e2) / (1.0 + dot(H, N)));

    cosThetaI = dot(N, L);
    cosThetaR = dot(N, R);

    brdf = P.x * ONE_OVER_PI +
           P.y * (1.0 / sqrt(cosThetaI * cosThetaR)) *
           (1.0 / (4.0 * PI * A.x * A.y)) * exp(E);

    intensity = Scale[0] * P.x * ONE_OVER_PI +
                Scale[1] * P.y * cosThetaI * brdf +
                Scale[2] * dot(H, N) * P.y;

    vec3 color = intensity * SurfaceColor.rgb;

    gl_FragColor = vec4(color, 1.0);
}

Some results from this shader are shown in Color Plate 23. It certainly would be possible to extend the formula and surface parameterization values to operate on three channels instead of just one. The resulting shader could be used to simulate materials whose specular highlight changes color depending on the viewing angle.

Polynomial Texture Mapping with BRDF Data

This section describes the OpenGL Shading Language BRDF shaders that use the Polynomial Texture Mapping technique developed by Hewlett-Packard. The shaders presented are courtesy of Brad Ritter, Hewlett-Packard. The BRDF data is from Cornell University. It was obtained by measurement of reflections from several types of automotive paints that were supplied by Ford Motor Co.

One reason this type of rendering is important is that it achieves realistic rendering of materials whose reflection characteristics vary as a function of view angle and light direction. Such is the case with these automotive paints. To a car designer, it is extremely important to be able to visualize the final “look” of the car, even when it is painted with such a material. One of the paint samples tested by Cornell, Mystique Lacquer, has the peculiar property that the color of its specular highlight changes as a function of viewing angle. This material cannot be adequately rendered with conventional texture-mapping techniques alone.

The textures used in this example are called POLYNOMIAL TEXTURE MAPS, or PTMs. PTMs are essentially light-dependent texture maps, described in a 2001 SIGGRAPH paper by Malzbender, Gelb, and Wolters. PTMs reconstruct the color of a surface under varying lighting conditions. When a surface is rendered with a PTM, it takes on different illumination characteristics depending on the direction of the light source. As with bump mapping, this behavior helps viewers by providing perceptual clues about the surface geometry. But PTMs go beyond bump maps in that they capture surface variations resulting from self-shadowing and interreflections. PTMs are generated from real materials and preserve the visual characteristics of the actual materials. Polynomial texture mapping is an image-based technique that does not require bump maps or the modeling of complex geometry.

The image in Color Plate 27A shows two triangles from a PTM demo developed by Hewlett-Packard. The triangle on the upper right has been rendered with a polynomial texture map, and the triangle on the lower left has been rendered with a conventional 2D texture map. The objects that were used in the construction of the texture maps were a metallic bezel with the Hewlett-Packard logo on it and a brushed metal notebook cover with an embossed 3Dlabs logo. As you move the simulated light source in the demo, the conventional texture looks flat and somewhat unrealistic, whereas the PTM texture faithfully reproduces the highlights and surface shadowing that occur on the real-life objects. In the image captured here, the light source is a bit in front and above the two triangles. The PTM shows realistic reflections, but the conventional texture can only reproduce the lighting effect from a single lighting angle (in this case, as if the light were directly in front of the object).

The PTM technique developed by HP requires as input a set of images of the desired object, with the object illuminated by a light source of a different known direction for each image, all captured from the same viewpoint. For each texel of the PTM, these source images are sampled and a least-squares biquadric curve fit is performed to obtain a polynomial that approximates the lighting function for that texel. This part of the process is partly science and partly art (a bit of manual intervention can improve the end results). The biquadric equation generated in this manner allows runtime reconstruction of the lighting function for the source material. The coefficients stored in the PTM are A, B, C, D, E, and F, as shown in this equation:

Au^2 + Bv^2 + Cuv + Du + Ev + F

One use of PTMs is for representing materials with surface properties that vary spatially across the surface. Things like brushed metal, woven fabric, wood, and stone are all materials that reflect light differently depending on the viewing angle and light source direction. They may also have interreflections and self-shadowing. The PTM technique captures these details and reproduces them at runtime. There are two variants for PTMs: luminance (LRGB) and RGB. An LRGB PTM uses the biquadric polynomials to determine the brightness of each rendered pixel. Because each texel in an LRGB PTM has its own biquadric polynomial function, the luminance or brightness characteristics of each texel can be unique. An RGB PTM uses a separate biquadric polynomial for each of the three colors: red, green, and blue. This allows objects rendered with an RGB PTM to vary in color as the light position shifts. Thus, color-shifting materials such as holograms can be accurately reconstructed with an RGB PTM.

The key to creating a PTM for these types of spatially varying materials is to capture images of them as lit from a variety of light source directions. Engineers at Hewlett-Packard have developed an instrument—a dome with multiple light sources and a camera mounted at the top—to do just that. This device can automatically capture 50 images of the source material from a single fixed camera position as illuminated by light sources in different positions. A photograph of this picture-taking device is shown in Figure 14.4.


Figure 14.4. A device for capturing images for the creation of polynomial texture maps (© Copyright 2003, Hewlett-Packard Development Company, L.P., reproduced with permission)

The image data collected with this device is the basis for creating a PTM for the real-world texture of a material (e.g., automobile paints). These types of PTMs have four degrees of freedom. Two of these represent the spatially varying characteristics of the material. These two degrees of freedom are controlled by the 2D texture coordinates. The remaining two degrees of freedom represent the light direction. These are the two independent variables in the biquadric polynomial.

A BRDF PTM is slightly different from a spatially varying PTM. BRDF PTMs model homogeneous materials—that is, they do not vary spatially. BRDF PTMs use two degrees of freedom to represent the light direction, and the remaining two degrees of freedom represent the view direction. The parameterized light direction (Lu,Lv) is used for the independent variables of the biquadric polynomial, and the parameterized view direction (Vu,Vv) is used as the 2D texture coordinate.

No single parameterization works well for all BRDF materials. A further refinement to enhance quality for BRDF PTMs of the materials we are trying to reproduce is to reparameterize the light and view vectors as a half-angle vector (Hu,Hv) and a difference vector (Du,Dv). In the BRDF PTM shaders discussed in the next section, (Du,Dv) are the independent variables of the biquadric polynomial, and (Hu,Hv) is the 2D texture coordinate. A large part of the vertex shader’s function is to calculate (Hu,Hv) and (Du,Dv).

BRDF PTMs can be created as either LRGB or RGB PTMs. The upcoming example shows how an RGB BRDF PTM is rendered with OpenGL shaders. RGBA textures with 8 bits per component are used because the PTM file format and tools developed by HP are based on this format.

Application Setup

To render BRDF surfaces using the following shaders, the application must set up a few uniform variables. The vertex shader must be provided with values for uniform variables that describe the eye direction (i.e., an infinite viewer) and the position of a single light source (i.e., a point light source). The fragment shader requires the application to provide values for scaling and biasing the six polynomial coefficients. (These values were prescaled when the PTM was created to preserve precision, and they must be rescaled with the scale and bias factors that are specific to that PTM.)

The application is expected to provide the surface normal, vertex position, tangent, and binormal in exactly the same way as the BRDF shader discussed in the previous section. Before rendering, the application should also set up seven texture maps: three 2D texture maps to hold the A, B, and C coefficients for the red, green, and blue components of the PTM; three 2D texture maps to hold the D, E, and F coefficients for the red, green, and blue components of the PTM; and a 1D texture map to hold a lighting function.
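
A sketch of the sampler setup implied by the texture unit assignments noted in the comments of Listing 14.9 (programObj denotes the linked PTM program object, an assumed name):

// Tie each sampler uniform to the texture unit that holds its texture.
glUseProgram(programObj);
glUniform1i(glGetUniformLocation(programObj, "ABCred"), 0);
glUniform1i(glGetUniformLocation(programObj, "DEFred"), 1);
glUniform1i(glGetUniformLocation(programObj, "ABCgrn"), 2);
glUniform1i(glGetUniformLocation(programObj, "DEFgrn"), 3);
glUniform1i(glGetUniformLocation(programObj, "ABCblu"), 4);
glUniform1i(glGetUniformLocation(programObj, "DEFblu"), 5);
glUniform1i(glGetUniformLocation(programObj, "Lighttexture"), 6);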

This last texture is set up by the application whenever the lighting state is changed. The light factor texture solves four problems (a sketch of code that builds such a texture follows the list):

  1. The light factor texture is indexed with LdotN, which is positive for front-facing vertices and negative for back-facing vertices. As a first level of complexity, the light texture can solve the front-facing/back-facing discrimination problem by storing 1.0 for positive index values and 0.0 for negative ones.

  2. We’d like to be able to light BRDF PTM shaded objects with colored lights. As a second level of complexity, the light texture (which has three channels, R, G, and B) uses a light color instead of 1.0 for positive index values.

  3. An abrupt transition from front-facing to back-facing looks awkward and unrealistic on rendered images. As a third level of complexity, we apply a gradual transition in the light texture values from 0 to 1.0. We use a sine or cosine curve to determine these gradual texture values.

  4. There is no concept of ambient light for PTM rendering. It can look very unrealistic to render back-facing pixels as (0,0,0). Instead of using 0 values for negative indices, we use values such as 0.1.
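
Putting these four requirements together, here is a sketch of how an application might build such a texture. The resolution, the 0.1 back-facing floor, the cosine ramp, and the white light color are all illustrative choices rather than values required by the shaders.

#include <math.h>

#define LIGHT_TEX_SIZE 256

// Fill a 1D RGB light factor texture indexed by dot(L, N) * 0.5 + 0.5.
// Back-facing texels receive a small floor instead of black; front-facing
// texels ramp smoothly up to the light color along a cosine curve.
GLubyte lightTex[LIGHT_TEX_SIZE][3];
GLfloat lightColor[3] = { 1.0f, 1.0f, 1.0f };
GLuint  lightTexName;
int     i, c;

for (i = 0; i < LIGHT_TEX_SIZE; i++)
{
    GLfloat LdotN  = (i / (GLfloat) (LIGHT_TEX_SIZE - 1)) * 2.0f - 1.0f;
    GLfloat factor = 0.1f;                    /* back-facing floor */

    if (LdotN > 0.0f)                         /* smooth front-facing ramp */
        factor = 0.1f + 0.9f * (0.5f - 0.5f * cosf(LdotN * 3.14159f));

    for (c = 0; c < 3; c++)
        lightTex[i][c] = (GLubyte) (255.0f * factor * lightColor[c]);
}

glGenTextures(1, &lightTexName);
glBindTexture(GL_TEXTURE_1D, lightTexName);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, LIGHT_TEX_SIZE, 0,
             GL_RGB, GL_UNSIGNED_BYTE, lightTex);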

Vertex Shader

The BRDF PTM vertex shader is shown in Listing 14.8. This shader produces five varying values:

  • gl_Position, as required by every vertex shader

  • TexCoord, which is used to access our texture maps to get the two sets of polynomial coefficients

  • Du, a float that contains the cosine of the angle between the light direction and the tangent vector

  • Dv, a float that contains the cosine of the angle between the light direction and the binormal vector

  • LdotN, a float that contains the cosine of the angle between the incoming surface normal and the light direction

The shader assumes a viewer at infinity and one point light source.

Listing 14.8. Vertex shader for rendering BRDF-based polynomial texture maps

//
// PTM vertex shader by Brad Ritter, Hewlett-Packard
// and Randi Rost, 3Dlabs.
//
// © Copyright 2003 3Dlabs, Inc., and
// Hewlett-Packard Development Company, L.P.,
// Reproduced with Permission
//
uniform vec3 LightPos;
uniform vec3 EyeDir;
attribute vec3 Tangent;
attribute vec3 Binormal;

varying float Du;
varying float Dv;
varying float LdotN;
varying vec2  TexCoord;

void main()
{

    vec3 lightTemp;
    vec3 halfAngleTemp;
    vec3 tPrime;
    vec3 bPrime;

    // Transform vertex
    gl_Position = ftransform();
    lightTemp = normalize(LightPos - gl_Vertex.xyz);

    // Calculate the Half Angle vector
    halfAngleTemp = normalize(EyeDir + lightTemp);

    // Calculate T' and B'
    //    T' = |T - (T.H)H|
    tPrime = Tangent - (halfAngleTemp * dot(Tangent, halfAngleTemp));
    tPrime = normalize(tPrime);

    //    B' = H x T'
    bPrime = cross(halfAngleTemp, tPrime);

    Du = dot(lightTemp, tPrime);
    Dv = dot(lightTemp, bPrime);

    // Multiply the Half Angle vector by NOISE_FACTOR
    // to avoid noisy BRDF data
    halfAngleTemp = halfAngleTemp * 0.9;

    // Hu = Dot(HalfAngle, T)
    // Hv = Dot(HalfAngle, B)
    // Remap [-1.0..1.0] to [0.0..1.0]
    TexCoord.s = dot(Tangent, halfAngleTemp) * 0.5 + 0.5;
    TexCoord.t = dot(Binormal, halfAngleTemp) * 0.5 + 0.5;

    // "S" Text Coord3: Dot(Light, Normal);
    LdotN = dot(lightTemp, gl_Normal) * 0.5 + 0.5;
}

The light source position and eye direction are passed in as uniform variables by the application. In addition to the standard OpenGL vertex and normal vertex attributes, the application is expected to pass in a tangent and a binormal per vertex, as described in the previous section. These two generic attributes are defined with appropriate names in our vertex shader.

The first line of the vertex shader transforms the incoming vertex value by the current modelview-projection matrix. The next line computes the light source direction for our positional light source by subtracting the vertex position from the light position. Because LightPos is defined as a vec3 and the built-in attribute gl_Vertex is defined as a vec4, we must use the .xyz component selector to obtain the first three elements of gl_Vertex before doing the vector subtraction operation. The result of the vector subtraction is then normalized and stored as our light direction.

The following line of code computes the half angle by summing the eye direction vector and the light direction vector and normalizing the result.

The next few lines of code compute the 2D parameterization of our half angle and difference vector. The goal here is to compute values for u (Du) and v (Dv) that can be plugged into the biquadric equation in our fragment shader. The technique we use is called Gram-Schmidt orthonormalization. H (half angle), T′, and B′ are the orthogonal axes of a coordinate system. T′ and B′ maintain a general alignment with the original T (tangent) and B (binormal) vectors. Where T and B lie in the plane of the triangle being rendered, T′ and B′ are in a plane perpendicular to the half angle vector. More details on the reasons for choosing H, T′, and B′ to define the coordinate system are available in the paper Interactive Rendering with Arbitrary BRDFs Using Separable Approximations by Jan Kautz and Michael McCool (1999).

BRDF data often has noisy data values for extremely large incidence angles (i.e., close to 180°), so in the next line of code, we avoid the noisy data in a somewhat unscientific manner by applying a scale factor to the half angle. This effectively causes these values to be ignored.

Our vertex shader code then computes values for Hu and Hv and places them in the varying variable TexCoord. These values hold our parameterized half-angle vector; the fragment shader uses them to look up the required polynomial coefficients from the texture maps, so they are mapped into the range [0,1].

Finally, we compute a value that applies the lighting effect. This value is simply the cosine of the angle between the surface normal and the light direction. It is also mapped into the range [0,1] because it is the texture coordinate for accessing a 1D texture to obtain the lighting factor that is used.

Fragment Shader

The fragment shader for our BRDF PTM surface rendering is shown in Listing 14.9.

Listing 14.9. Fragment shader for rendering BRDF-based polynomial texture maps

//
// PTM fragment shader by Brad Ritter, Hewlett-Packard
// and Randi Rost, 3Dlabs.
//
// © Copyright 2003 3Dlabs, Inc., and
// Hewlett-Packard Development Company, L.P.,
// Reproduced with Permission
//
uniform sampler2D ABCred;          // = 0
uniform sampler2D DEFred;          // = 1
uniform sampler2D ABCgrn;          // = 2
uniform sampler2D DEFgrn;          // = 3
uniform sampler2D ABCblu;          // = 4
uniform sampler2D DEFblu;          // = 5
uniform sampler1D Lighttexture;    // = 6

uniform vec3 ABCscale, ABCbias;
uniform vec3 DEFscale, DEFbias;

varying float Du;        // passes the computed L*tPrime value
varying float Dv;        // passes the computed L*bPrime value
varying float LdotN;     // passes the computed L*Normal value
varying vec2 TexCoord;   // passes s, t, texture coords

void main()
{
    vec3    ABCcoef, DEFcoef;
    vec3    ptvec;

    // Read coefficient values for red and apply scale and bias factors
    ABCcoef = (texture2D(ABCred, TexCoord).rgb - ABCbias) * ABCscale;
    DEFcoef = (texture2D(DEFred, TexCoord).rgb - DEFbias) * DEFscale;

    // Compute red polynomial
    ptvec.r = ABCcoef[0] * Du * Du +
              ABCcoef[1] * Dv * Dv +
              ABCcoef[2] * Du * Dv +
              DEFcoef[0] * Du +
              DEFcoef[1] * Dv +
              DEFcoef[2];

    // Read coefficient values for green and apply scale and bias factors
    ABCcoef = (texture2D(ABCgrn, TexCoord).rgb - ABCbias) * ABCscale;
    DEFcoef = (texture2D(DEFgrn, TexCoord).rgb - DEFbias) * DEFscale;

    // Compute green polynomial
    ptvec.g = ABCcoef[0] * Du * Du +
              ABCcoef[1] * Dv * Dv +
              ABCcoef[2] * Du * Dv +
              DEFcoef[0] * Du +
              DEFcoef[1] * Dv +
              DEFcoef[2];

    // Read coefficient values for blue and apply scale and bias factors
    ABCcoef = (texture2D(ABCblu, TexCoord).rgb - ABCbias) * ABCscale;
    DEFcoef = (texture2D(DEFblu, TexCoord).rgb - DEFbias) * DEFscale;

    // Compute blue polynomial
    ptvec.b = ABCcoef[0] * Du * Du +
              ABCcoef[1] * Dv * Dv +
              ABCcoef[2] * Du * Dv +
              DEFcoef[0] * Du +
              DEFcoef[1] * Dv +
              DEFcoef[2];
    // Multiply result * light factor
    ptvec *= texture1D(Lighttexture, LdotN).rgb;

    // Assign result to gl_FragColor
    gl_FragColor = vec4(ptvec, 1.0);
}

This shader is relatively straightforward if you’ve digested the information in the previous three sections. The values in the s and t components of TexCoord hold a 2D parameterization of the half-angle vector. TexCoord indexes into each of our coefficient textures and retrieves the values for the A, B, C, D, E, and F coefficients. The BRDF PTMs are stored as mipmap textures, and, because we’re not providing a bias argument, the computed level of detail is used directly. Using vector operations, we scale and bias the six coefficients by using values passed from the application through uniform variables.

We then use these scaled, biased coefficient values together with our parameterized difference vector (Du and Dv) in the biquadric polynomial to compute the red value for the surface. We repeat the process to compute the green and blue values as well. We compute the lighting factor by accessing the 1D light texture, using the remapped cosine of the angle between the light direction and the surface normal. Finally, we multiply our polynomial vector by the lighting factor and use an alpha value of 1.0 to produce the final fragment color.

The image in Color Plate 27B shows our BRDF PTM shaders rendering a torus with the BRDF PTM created for the Mystique Lacquer automotive paint. The basic color of this paint is black, but, in the orientation captured for the still image, the specular highlight shows up as mostly white with a reddish-brown tinge on one side of the highlight and a bluish tinge on the other. As the object is moved around or as the light is moved around, our BRDF PTM shaders properly render the shifting highlight color.

Summary

This chapter looked at how shaders model the properties of light that arrives at a particular point on a surface. Light can be transmitted, reflected, or absorbed. We developed shaders that model the reflection and refraction of light based on an approximation to the Fresnel equations, a shader that simulates diffraction, and shaders that implement a bidirectional reflectance distribution function. Finally, we studied a shader that uses image-based methods to reproduce varying lighting conditions and self-shadowing for a variety of materials.

Pardon the pun, but the shaders presented in this chapter (as well as in the preceding two chapters) only begin to scratch the surface of the realistic rendering effects that are possible with the OpenGL Shading Language. The hope is that by developing shaders to implement a few examples of lighting, shadows, and reflection, you will be equipped to survey the literature and implement a variety of similar techniques. The shaders we’ve developed can be further streamlined and optimized for specific purposes.

Further Information

A thorough treatment of reflectance and lighting models can be found in the book Real-Time Shading, by Marc Olano et al. (2002). Real-Time Rendering by Akenine-Möller and Haines also contains discussions of Fresnel reflection and the theory and implementation of BRDFs. The Fresnel approximation we used was published by Christophe Schlick as part of Eurographics ’94. Cg shaders that utilize this approximation are described in The Cg Tutorial and GPU Gems books. The diffraction shader is based on work presented by Jos Stam in a 1999 SIGGRAPH paper called Diffraction Shaders. Stam later developed a diffraction shader in Cg and discussed it in the book GPU Gems.

The specific paper that was drawn upon heavily for the BRDF reflection section was Gregory Ward’s Measuring and Modeling Anisotropic Reflection, which appeared in the 1992 SIGGRAPH conference proceedings. A classic early paper that set the stage for this work was the 1981 SIGGRAPH paper, A Reflectance Model for Computer Graphics, by Cook and Torrance.

The SIGGRAPH 2001 proceedings contain the paper Polynomial Texture Maps by Tom Malzbender, Dan Gelb, and Hans Wolters. Additional information is available at the Hewlett-Packard Laboratories Web site, http://www.hpl.hp.com/ptm/. At this site, you can find example data files, a PTM viewing program, the PTM file format specification, and utilities to assist in creating PTMs.

The book Physically Based Rendering: From Theory to Implementation by Pharr and Humphreys is a thorough treatment of techniques for achieving realism in computer graphics.

  1. Akenine-Möller, Tomas, and E. Haines, Real-Time Rendering, Second Edition, AK Peters, Ltd., Natick, Massachusetts, 2002. http://www.realtimerendering.com

  2. Cook, Robert L., and Kenneth E. Torrance, A Reflectance Model for Computer Graphics, Computer Graphics (SIGGRAPH ’81 Proceedings), pp. 307–316, July 1981.

  3. Fernando, Randima, and Mark J. Kilgard, The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics, Addison-Wesley, Boston, Massachusetts, 2003.

  4. Hewlett-Packard, Polynomial Texture Mapping, Web site. http://www.hpl.hp.com/ptm

  5. Kautz, Jan, and Michael D. McCool, Interactive Rendering with Arbitrary BRDFs Using Separable Approximations, 10th Eurographics Workshop on Rendering, pp. 281–292, June 1999. http://www.mpi-sb.mpg.de/~jnkautz/publications

  6. Malzbender, Tom, Dan Gelb, and Hans Wolters, Polynomial Texture Maps, Computer Graphics (SIGGRAPH 2001 Proceedings), pp. 519–528, August 2001. http://www.hpl.hp.com/research/ptm/papers/ptm.pdf

  7. Olano, Marc, John Hart, Wolfgang Heidrich, and Michael McCool, Real-Time Shading, AK Peters, Ltd., Natick, Massachusetts, 2002.

  8. Pharr, Matt and Greg Humphreys, Physically Based Rendering: From Theory to Implementation, Morgan Kaufmann, San Francisco, 2004. http://pbrt.org/

  9. Schlick, Christophe, An Inexpensive BRDF Model for Physically Based Rendering, Eurographics ’94, published in Computer Graphics Forum, vol. 13., no. 3, pp. 149–162, September, 1994.

  10. Stam, Jos, Diffraction Shaders, Computer Graphics (SIGGRAPH ’99 Proceedings), pp. 101–110, August 1999. http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/diff.pdf

  11. Stam, Jos, Simulating Diffraction, in GPU Gems: Programming Techniques, Tips, and Tricks for Real-Time Graphics, Editor: Randima Fernando, Addison-Wesley, Reading, Massachusetts, 2004. http://developer.nvidia.com/object/gpu_gems_home.html

  12. Ward, Gregory, Measuring and Modeling Anisotropic Reflection, Computer Graphics (SIGGRAPH ’92 Proceedings), pp. 265–272, July 1992. http://radsite.lbl.gov/radiance/papers/sg92/paper.html
