*Animation can explain whatever the mind of man can conceive. This facility makes it the most versatile and explicit means of communication yet devised for quick mass appreciation.*

—Walt Disney

As Walt Disney noted, animation is largely about explanation. Rotating an object provides information about shape and form. Doing a simulated walkthrough of a building gives a sense of how it would feel to be inside the building. Watching a character’s facial expressions provides insight into the emotions the character is feeling.

With the latest graphics hardware and the OpenGL Shading Language, we can do even more realistic rendering in real time on low-cost graphics hardware. I was somewhat surprised when I found out how easy it was to animate programmable shaders. By developing shaders that modify their behavior over time, we can turn over even more of the rendering task to the graphics hardware.

This chapter describes how a shader can be written to produce an animation effect. Typically, the animation effect is controlled by one or more uniform variables that convey an application’s sense of time. These variables pass a frame count, an elapsed time value, or some other value that can be used as the basis for chronology. If there are multiple objects in a scene and they are expected to animate in different ways, each object is rendered with its own unique shader. If objects have identical characteristics other than the animation effect, the source code for each of the shaders can be the same except for the portion that defines the animation effect.

A shader can be written to behave differently depending on these control values, so if they are updated at each frame, the scene is drawn slightly differently in each frame to produce the animation effect. Discontinuities in cyclical motion can be avoided by design, with a smooth overlap whereby the end of the cycle wraps back to the beginning. By controlling the animation effect in an OpenGL shader, the application need not perform any complex computations to achieve this motion. All it needs to do is update the control value(s) at each frame and redraw the object. However, the application can perform animation computations in addition to the animation effects built into a shader to produce extremely interesting effects.

Interesting animation effects can be created with stored textures, procedural textures, noise, and other mechanisms. In this chapter, we describe simple animation effects and discuss shaders for modifying the shape and position of an object and for simulating an oscillating motion.

Suppose you have an object in the scene that must be drawn two different ways, such as a neon restaurant sign saying “OPEN” that repeatedly flashes on and off. It would certainly be possible to write two different shaders for rendering this object, one for when the sign is “on” and one for when the sign is “off.” However, in a situation like this, it may actually be easier to write a single shader that takes a control variable specifying whether the object is to be drawn as on or off. The shader can be written to render one way if the control variable is on and another way if it is off.

To do this, the application would set up a uniform variable that indicates the on/off state. The application would update this variable as needed. To achieve a sequence of 3 seconds on and 1 second off, the application would write the variable with the value for on, update the value to off 3 seconds later, update the value to on 1 second later, and repeat this sequence. In the interim, the application just continually redraws the geometry every frame. The decoupling of the rendering from the animation effect might make the application a little simpler and a little more maintainable as a result.

An improvement to the “on/off” animation is to have the application pass the shader one or more values that are tested against one or more threshold values within the shader. Using one control value and two threshold values, you could write a shader with three behaviors: one for when the control value is less than the first threshold value, one for when the control value is between the two threshold values, and one for when the control value is greater than the second threshold value.

In the case just described, you actually may have a transition period when the neon light is warming up to full brightness or dissipating to its off condition. This type of transition helps to “soften” the change between two states so that the transition appears more natural. The smoothstep function is handy for calculating such a transition.
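The GLSL smoothstep function clamps its input between two edge values and evaluates the Hermite polynomial 3t² − 2t³, which has zero slope at both edges. A C model of it (a sketch for illustration, not any driver's actual implementation) makes the behavior easy to verify:

```c
/* C model of the GLSL smoothstep function: clamp x to [edge0, edge1],
 * then apply the Hermite polynomial 3t^2 - 2t^3, which eases in and
 * out with zero slope at both edges -- the "soft" transition described
 * above. */
float smoothStep(float edge0, float edge1, float x)
{
    float t = (x - edge0) / (edge1 - edge0);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t);
}
```

For the neon sign, something like smoothStep(tOn, tOn + 0.25, time), with a hypothetical quarter-second warm-up, would ramp the brightness smoothly instead of switching it abruptly.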

We can achieve a simple animation effect for stored textures or procedural textures just by adding an offset to the texture access calculation. For instance, if we wanted to have procedural clouds that drift slowly across the sky, we could make a simple change to the cloud shader that we discussed in Section 15.4. Instead of using the object position as the index into our 3D noise texture, we add an offset value. The offset is defined as a uniform variable and can be updated by the application at each frame. If we want the clouds to drift slowly from left to right, we just subtract a small amount from the x component of this uniform variable each frame. If we want the clouds to move rapidly from bottom to top, we just subtract a larger amount from the y component of this value. To achieve a more complex effect, we might modify all three coordinates each frame. We could use a noise function in computing this offset to make the motion more natural and less like a scrolling effect.

The cloud shader, modified to make it animatable, is shown in Listing 16.1.

**Listing 16.1. Animatable fragment shader for cloudy sky effect**

```glsl
varying float LightIntensity;
varying vec3  MCposition;

uniform sampler3D Noise;
uniform vec3 SkyColor;    // (0.0, 0.0, 0.8)
uniform vec3 CloudColor;  // (0.8, 0.8, 0.8)
uniform vec3 Offset;      // updated each frame by the application

void main()
{
    vec4 noisevec = texture3D(Noise, MCposition + Offset);

    float intensity = (noisevec[0] + noisevec[1] +
                       noisevec[2] + noisevec[3]) * 1.5;

    vec3 color = mix(SkyColor, CloudColor, intensity) * LightIntensity;
    gl_FragColor = vec4(color, 1.0);
}
```
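On the application side, the per-frame update of Offset can be as simple as the following sketch (the function name and drift amounts are our own; the result would then be loaded into the Offset uniform, for example with glUniform3fv):

```c
/* Sketch of the application-side update described above: each frame,
 * drift the offset by a small amount per component. Subtracting from
 * x moves the clouds left to right; subtracting from y moves them
 * bottom to top. */
void advanceCloudOffset(float offset[3], float dx, float dy, float dz)
{
    offset[0] -= dx;
    offset[1] -= dy;
    offset[2] -= dz;
}
```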

Another cool animation effect, called morphing, gradually blends between two things. This could be used to mix two effects over a sequence of frames. A complete animation sequence can be created by performing *KEY-FRAME INTERPOLATION*. Important frames of the animation are identified, and the frames in between them can be generated with substantially less effort. Instead of the application doing complex calculations to determine the proper way to render the “in between” object or effect, it can all be done automatically within the shader.

You can blend between the geometry of two objects to create a tweened (inbetween) version or do a linear blend between two colors, two textures, two procedural patterns, and so on. All it takes is a shader that uses a control value that is the ratio of the two items being blended and that is updated each frame by the application. In some cases, a linear blend is sufficient. For an oscillating effect, you’ll probably want to have the application compute the interpolation factor by using a spline function to avoid jarring discontinuities in the animation. (You could have the shader compute the interpolation value, but it’s better to have the application compute it once per frame rather than have the vertex shader compute it once per vertex or have the fragment shader compute it needlessly at every fragment.)

For instance, using generic vertex attributes, you can actually pass the geometry for two objects at a time. The geometry for the first object would be passed through the usual OpenGL calls (glVertex, glColor, glNormal, etc.). A second set of vertex information can be passed by means of generic vertex attributes 0, 1, 2, etc. The application can provide a blending factor through a uniform variable, and the vertex shader can use this blending factor to do a weighted average of the two sets of vertex data. The tweened vertex position is the one that actually gets transformed, the tweened normal is the one actually used for lighting calculations, and so on.
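The weighted average itself is just the GLSL mix operation. Modeled in C for clarity (a sketch, with our own names), it is applied componentwise to each pair of attributes:

```c
/* The blend the vertex shader performs with mix():
 * result = a*(1 - blend) + b*blend, applied componentwise to the two
 * sets of vertex data (positions, normals, colors, and so on). */
void tween(const float a[3], const float b[3], float blend, float out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = a[i] * (1.0f - blend) + b[i] * blend;
}
```

With blend = 0 the first object is drawn, with blend = 1 the second, and intermediate values produce the inbetween geometry.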

To animate a character realistically, you need to choose the right number of key frames as well as the proper number of inbetweens to use. In their classic book, *Disney Animation—The Illusion of Life,* Frank Thomas and Ollie Johnston (1995, pp. 64–65) describe this concept as “Timing,” and explain it in the following way:

Just two drawings of a head, the first showing it leaning toward the right shoulder and the second with it over on the left and its chin slightly raised, can be made to communicate a multitude of ideas, depending entirely on the Timing used. Each inbetween drawing added between these two “extremes” gives a new meaning to the action.

No inbetweens

The character has been hit by a tremendous force. His head is nearly snapped off.

One inbetween

. . . has been hit by a brick, rolling pin, frying pan.

Two inbetweens

. . . has a nervous tic, a muscle spasm, an uncontrollable twitch.

Three inbetweens

. . . is dodging the brick, rolling pin, frying pan.

Four inbetweens

. . . is giving a crisp order, “Get going!” “Move it!”

Five inbetweens

. . . is more friendly, “Over here.” “Come on—hurry!”

Six inbetweens

. . . sees a good-looking girl, or the sports car he has always wanted.

Seven inbetweens

. . . tries to get a better look at something.

Eight inbetweens

. . . searches for the peanut butter on the kitchen shelf.

Nine inbetweens

. . . appraises, considering thoughtfully.

Ten inbetweens

. . . stretches a sore muscle.

The shader in Listing 16.2, developed by Philip Rideout, morphs between two objects—a square that is generated by the application and a sphere that is procedurally generated in the vertex shader. The sphere is defined entirely by a single value—its radius—provided by the application through a uniform variable. The application passes the geometry defining the square to the vertex shader with the standard built-in attributes gl_Normal and gl_Vertex. The vertex shader computes the corresponding vertex and normal on the sphere with a subroutine called sphere. The application provides a time-varying variable (Blend) for morphing between these two objects. Because we are using the two input vertex values to compute a third, inbetween, value, we cannot use the ftransform function. We’ll transform the computed vertex directly within the vertex shader.

**Listing 16.2. Vertex shader for morphing between a plane and a sphere**

```glsl
varying vec4 Color;

uniform vec3 LightPosition;
uniform vec3 SurfaceColor;

const float PI = 3.14159;
const float TWO_PI = PI * 2.0;

uniform float Radius;
uniform float Blend;

vec3 sphere(vec2 domain)
{
    vec3 range;
    range.x = Radius * cos(domain.y) * sin(domain.x);
    range.y = Radius * sin(domain.y) * sin(domain.x);
    range.z = Radius * cos(domain.x);
    return range;
}

void main()
{
    vec2 p0 = gl_Vertex.xy * TWO_PI;
    vec3 normal = sphere(p0);
    vec3 r0 = Radius * normal;
    vec3 vertex = r0;

    normal = mix(gl_Normal, normal, Blend);
    vertex = mix(gl_Vertex.xyz, vertex, Blend);

    normal = normalize(gl_NormalMatrix * normal);
    vec3 position = vec3(gl_ModelViewMatrix * vec4(vertex, 1.0));
    vec3 lightVec = normalize(LightPosition - position);
    float diffuse = max(dot(lightVec, normal), 0.0);

    if (diffuse < 0.125)
        diffuse = 0.125;

    Color = vec4(SurfaceColor * diffuse, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(vertex, 1.0);
}
```

In this shader, a simple lighting model is used. The color value that is generated by the vertex shader is simply passed through the fragment shader to be used as our final fragment color.

The sphere is somewhat unique in that it can be procedurally generated. Another way to morph between two objects is to specify the geometry for one object, using the normal OpenGL mechanisms, and to specify the geometry for the second object, using generic vertex attributes. The shader then just has to blend between the two sets of geometry in the same manner as described for the sphere morph shader.

Another blending effect gradually causes an object to disappear over a sequence of frames. The control value could be used as the alpha value to cause the object to be drawn totally opaque (alpha is 1.0), totally invisible (alpha is 0), or partially visible (alpha is between 0 and 1.0).

You can also fade something in or out by using the **discard** keyword. The lattice shader described in Section 11.3 discards a specific percentage of pixels in the object each time it is drawn. You could vary this percentage from 0 to 1.0 to make the object disappear, or from 1.0 to 0 to make it appear. Alternatively, you could evaluate a noise function at each location on the surface and compare the control value with the noise value instead. In this way, you can cause an object to erode or rust away over time.
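The erosion test described here reduces to a simple comparison per fragment. A C model of the predicate (names are ours; in the fragment shader, the failing case would execute **discard**):

```c
/* A fragment survives if the surface's noise value exceeds a threshold
 * that the application raises from 0.0 toward 1.0 over time. As the
 * threshold rises, ever-larger regions of the surface are discarded,
 * producing the erosion effect. */
int fragmentSurvives(float noiseValue, float erosionThreshold)
{
    return noiseValue >= erosionThreshold;
}
```

Because the noise varies smoothly over the surface, the discarded region grows in irregular patches rather than all at once.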

In the previous chapter, we talked about some of the interesting and useful things that can be done with noise. Listing 16.3 shows a vertex shader by Philip Rideout that calls the built-in noise3 function in the vertex shader and uses it to modify the shape of the object over time. The result is that the object changes its shape irregularly.

**Listing 16.3. Vertex shader using noise to modify and animate an object's shape**

```glsl
uniform vec3 LightPosition;
uniform vec3 SurfaceColor;
uniform vec3 Offset;
uniform float ScaleIn;
uniform float ScaleOut;

varying vec4 Color;

void main()
{
    vec3 normal = gl_Normal;
    vec3 vertex = gl_Vertex.xyz +
                  noise3(Offset + gl_Vertex.xyz * ScaleIn) * ScaleOut;

    normal = normalize(gl_NormalMatrix * normal);
    vec3 position = vec3(gl_ModelViewMatrix * vec4(vertex, 1.0));
    vec3 lightVec = normalize(LightPosition - position);
    float diffuse = max(dot(lightVec, normal), 0.0);

    if (diffuse < 0.125)
        diffuse = 0.125;

    Color = vec4(SurfaceColor * diffuse, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(vertex, 1.0);
}
```

The key to this shader is the call to noise3 with a value (Offset) that changes over time. The vertex itself is also used as input to the noise3 function so that the effect is repeatable. The ScaleIn and ScaleOut factors control the amplitude of the effect. The result of this computation is added to the incoming vertex position to compute a new vertex position. Because of this, the vertex shader must compute gl_Position by performing the transformation explicitly rather than by calling ftransform.

A new type of rendering primitive was invented by Bill Reeves and his colleagues at Lucasfilm in the early 1980s as they struggled to come up with a way to animate the fire sequence called “The Genesis Demo” in the motion picture *Star Trek II: The Wrath of Khan*. Traditional rendering methods were more suitable for rendering smooth, well-defined surfaces. What Reeves was after was a way to render a class of objects he called “fuzzy”—things like fire, smoke, liquid spray, comet tails, fireworks, and other natural phenomena.

These things are fuzzy because none of them have a well-defined boundary and the components typically change over time.

The technique that Reeves invented to solve this problem was described in the 1983 paper, *Particle Systems—A Technique for Modeling a Class of Fuzzy Objects.* *PARTICLE SYSTEMS* had been used in rendering before, but Reeves realized that he could get the particles to behave the way he wanted them to by giving each particle its own set of initial conditions and by establishing a set of probabilistic rules that governed how particles would change over time.

There are three main differences between particle systems and traditional surface-based rendering techniques. First, rather than being defined with polygons or curved surfaces, an object is represented by a cloud of primitive particles that defines its volume. Second, the object is considered dynamic rather than static. The constituent particles come into existence, evolve, and then die. During their lifetime, they can change position and form. Finally, objects defined in this manner are not completely specified. A set of initial conditions is specified, along with rules for birth, death, and evolution. Stochastic processes influence all three stages, so the shape and appearance of the object is nondeterministic.

Some assumptions are usually made to simplify the rendering of particle systems, among them:

- Particles do not collide with other particles.
- Particles do not reflect light; they emit light.
- Particles do not cast shadows on other particles.

Particle attributes often include position, color, transparency, velocity, size, shape, and lifetime. For rendering a particle system, each particle’s attributes are used along with certain global parameters to update its position and appearance at each frame. Each particle’s position might be updated on the basis of the initial velocity vector and the effects from gravity, wind, friction, and other global factors. Each particle’s color (including transparency), size, and shape can be modified as a function of global time, the age of the particle, its height, its speed, or any other parameter that can be calculated.

What are the benefits of using particle systems as a rendering technique? For one thing, complex systems can be created with little human effort. For another, the complexity can easily be adjusted. And as Reeves says in his 1983 paper, “The most important thing about particle systems is that they move: good dynamics are quite often the key to making things look real.”

My goal for this shader was to produce a “confetti cannon”—something that spews out a large quantity of small, brightly colored pieces of paper. They don’t come out all at once, but in a steady stream until none are left. Initial velocities are somewhat random, but there is a general direction that points up and away from the origin. Gravity influences these particles and eventually brings them back to earth.

The code in Listing 16.4 shows the C subroutine that I used to create the initial values for my particle system. To accomplish the look I was after, I decided that for each particle I needed its initial position, a randomly generated color, a randomly generated initial velocity (with some constraints), and a randomly generated start time.

The subroutine createPoints lets you create an arbitrary-sized, two-dimensional grid of points for the particle system. There’s no reason for a two-dimensional grid, but I was interested in seeing the effect of particles “popping off the grid” like pieces of popcorn. It would be even easier to define the particle system as a 1D array, and all of the vertex positions could have exactly the same initial value (for instance (0,0,0)).

But I set it up as a 2D array, and so you can pass in a width and height to define the number of particles to be created. After the memory for the arrays is allocated, a nested loop computes the values for each of the particle attributes at each grid location. Each vertex position has a *y*-coordinate value of 0, and the *x* and *z* coordinates vary across the grid. Each color component is assigned a random number in the range [0.5,1.0] so that mostly bright pastel colors are used. The velocity vectors are assigned random numbers to which I gave a strong upward bias by multiplying the *y* coordinate by 10. The general direction of the particles is aimed away from the origin by the addition of 3 to both the *x*- and the *z*- coordinates. Finally, each particle is given a start-time value in the range [0,10].

**Listing 16.4. C subroutine to create vertex data for particles**

```c
static GLint arrayWidth, arrayHeight;
static GLfloat *verts      = NULL;
static GLfloat *colors     = NULL;
static GLfloat *velocities = NULL;
static GLfloat *startTimes = NULL;

void createPoints(GLint w, GLint h)
{
    GLfloat *vptr, *cptr, *velptr, *stptr;
    GLfloat i, j;

    /* Free any previously allocated arrays so repeated calls don't leak */
    if (verts != NULL)      free(verts);
    if (colors != NULL)     free(colors);
    if (velocities != NULL) free(velocities);
    if (startTimes != NULL) free(startTimes);

    verts      = malloc(w * h * 3 * sizeof(float));
    colors     = malloc(w * h * 3 * sizeof(float));
    velocities = malloc(w * h * 3 * sizeof(float));
    startTimes = malloc(w * h * sizeof(float));

    vptr   = verts;
    cptr   = colors;
    velptr = velocities;
    stptr  = startTimes;

    for (i = 0.5 / w - 0.5; i < 0.5; i = i + 1.0 / w)
        for (j = 0.5 / h - 0.5; j < 0.5; j = j + 1.0 / h)
        {
            *vptr       = i;
            *(vptr + 1) = 0.0;
            *(vptr + 2) = j;
            vptr += 3;

            *cptr       = ((float) rand() / RAND_MAX) * 0.5 + 0.5;
            *(cptr + 1) = ((float) rand() / RAND_MAX) * 0.5 + 0.5;
            *(cptr + 2) = ((float) rand() / RAND_MAX) * 0.5 + 0.5;
            cptr += 3;

            *velptr       = ((float) rand() / RAND_MAX) + 3.0;
            *(velptr + 1) = ((float) rand() / RAND_MAX) * 10.0;
            *(velptr + 2) = ((float) rand() / RAND_MAX) + 3.0;
            velptr += 3;

            *stptr = ((float) rand() / RAND_MAX) * 10.0;
            stptr++;
        }

    arrayWidth  = w;
    arrayHeight = h;
}
```

OpenGL has built-in attributes for vertex position, which we use to pass the initial particle position, and for color, which we use to pass the particle’s color. We need to use generic vertex attributes to specify the particle’s initial velocity and start time. Let’s pick indices 3 and 4 and define the necessary constants:

```c
#define VELOCITY_ARRAY   3
#define START_TIME_ARRAY 4
```

After we have created a program object, we can bind a generic vertex attribute index to a vertex shader attribute variable name. (We can do this even before the vertex shader is attached to the program object.) These bindings are checked and go into effect at the time glLinkProgram is called. To bind the generic vertex attribute index to a vertex shader variable name, we do the following:

```c
glBindAttribLocation(ProgramObject, VELOCITY_ARRAY,   "Velocity");
glBindAttribLocation(ProgramObject, START_TIME_ARRAY, "StartTime");
```

After the shaders are compiled, attached to the program object, and linked, we’re ready to draw the particle system. All we need to do is call the drawPoints function shown in Listing 16.5. In this function, we set the point size to 2 to render somewhat larger points. The next four lines of code set up pointers to the vertex arrays that we’re using. In this case, we have four: one for vertex positions (i.e., initial particle position), one for particle color, one for initial velocity, and one for the particle’s start time (i.e., birth). After that, we enable the arrays for drawing by making calls to glEnableClientState for the standard vertex attributes and glEnableVertexAttribArray for the generic vertex attributes. Next we call glDrawArrays to render the points, and finally, we clean up by disabling each of the enabled vertex arrays.

**Listing 16.5. C subroutine to draw particles as points**

```c
void drawPoints()
{
    glPointSize(2.0);

    glVertexPointer(3, GL_FLOAT, 0, verts);
    glColorPointer(3, GL_FLOAT, 0, colors);
    glVertexAttribPointer(VELOCITY_ARRAY, 3, GL_FLOAT,
                          GL_FALSE, 0, velocities);
    glVertexAttribPointer(START_TIME_ARRAY, 1, GL_FLOAT,
                          GL_FALSE, 0, startTimes);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableVertexAttribArray(VELOCITY_ARRAY);
    glEnableVertexAttribArray(START_TIME_ARRAY);

    glDrawArrays(GL_POINTS, 0, arrayWidth * arrayHeight);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableVertexAttribArray(VELOCITY_ARRAY);
    glDisableVertexAttribArray(START_TIME_ARRAY);
}
```

To achieve the animation effect, the application must communicate its notion of time to the vertex shader, as shown in Listing 16.6. Here, the variable ParticleTime is incremented once each frame and loaded into the uniform variable Time. This allows the vertex shader to perform computations that vary (animate) over time.
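A minimal sketch of the kind of per-frame update described (the step size and the uniform-loading call shown in the comment are our assumptions, not a reproduction of Listing 16.6):

```c
#include <math.h>

/* ParticleTime advances by a fixed step each frame and is loaded into
 * the Time uniform; resetting it to 0 replays the animation from the
 * beginning. */
static float ParticleTime = 0.0f;

float advanceParticleTime(float step)
{
    ParticleTime += step;
    return ParticleTime;
}

/* In the draw loop, the result would be loaded with something like:
 *   glUniform1f(glGetUniformLocation(ProgramObject, "Time"),
 *               ParticleTime);
 */
```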

The vertex shader (see Listing 16.7) is the key to this example of particle system rendering. Instead of simply transforming the incoming vertex, we use it as the initial position to compute a new position based on a computation involving the uniform variable Time. It is this newly computed position that is actually transformed and rendered.

This vertex shader defines the attribute variables Velocity and StartTime. In the previous section, we saw how generic vertex attribute arrays were defined and bound to these vertex shader attribute variables. As a result of this, each vertex has an updated value for the attribute variables Velocity and StartTime, as well as for the standard vertex attributes specified by gl_Vertex and gl_Color.

The vertex shader starts by computing the age of the particle. If this value is less than zero, the particle has not yet been born. In this case, the particle is just assigned the color provided through the uniform variable Background. (If you actually want to see the grid of yet-to-be-born particles, you could provide a color value other than the background color. And if you want to be a bit more clever, you could pass the value t as a varying variable to the fragment shader and let it discard fragments for which t is less than zero. For our purposes, this wasn’t necessary.)

If a particle’s start time is less than the current time, the following kinematic equation determines its current position:

*P* = *P_i* + **v***t* + ½**a***t*²

In this equation, *P_i* represents the initial position of the particle, **v** represents the initial velocity, *t* represents the elapsed time, **a** represents the acceleration, and *P* represents the final computed position. For the acceleration, we use the value of acceleration due to gravity on Earth, which is 9.8 meters per second squared.
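Evaluated on the CPU for a single particle (a sketch with our own function name), the equation looks like this; with **a** = (0, −9.8, 0), the acceleration term reduces to the 4.9·t² factor used in the vertex shader:

```c
#include <math.h>

/* P = Pi + v*t + (1/2)*a*t^2 for one particle, with gravity acting
 * only on the y component: 0.5 * 9.8 = 4.9. */
void particlePosition(const float pi[3], const float v[3], float t,
                      float p[3])
{
    p[0] = pi[0] + v[0] * t;
    p[1] = pi[1] + v[1] * t - 4.9f * t * t;
    p[2] = pi[2] + v[2] * t;
}
```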

After this, all that remains is to transform the computed vertex and store the result in gl_Position.

**Listing 16.7. Confetti cannon (particle system) vertex shader**

```glsl
uniform float Time;        // updated each frame by the application
uniform vec4 Background;   // constant color equal to background

attribute vec3 Velocity;   // initial velocity
attribute float StartTime; // time at which particle is activated

varying vec4 Color;

void main()
{
    vec4 vert;
    float t = Time - StartTime;

    if (t >= 0.0)
    {
        vert = gl_Vertex + vec4(Velocity * t, 0.0);
        vert.y -= 4.9 * t * t;
        Color = gl_Color;
    }
    else
    {
        vert = gl_Vertex;     // initial position
        Color = Background;   // "pre-birth" color
    }

    gl_Position = gl_ModelViewProjectionMatrix * vert;
}
```

The color value computed by the vertex shader is simply passed through the fragment shader to become the final color of the fragment to be rendered. Some frames from the confetti cannon animation sequence are shown in Figure 16.1.

**Figure 16.1. Several frames from the animated sequence produced by the particle system shader. In this animation, the particle system contains 10,000 points with randomly assigned initial velocities and start times. The position of the particle at each frame is computed entirely in the vertex shader according to a formula that simulates the effects of gravity. (3Dlabs, Inc.)**

There’s a lot that you can do to make this shader more interesting. You might pass the t value from the vertex shader to the fragment shader as suggested earlier and make the color of the particle change over time. For instance, you could make the color change from yellow to red to black to simulate an explosion. You could reduce the alpha value over time to make the particle fade out. You might also provide a “time of death” and extinguish the particle completely at a certain time or when a certain distance from the origin is reached. Instead of drawing the particles as points, you might draw them as short lines so that you could blur the motion of each particle. You could also vary the size of the point (or line) over time to create particles that grow or shrink. You can make the physics model a lot more sophisticated than the one illustrated. To make the particles look better, you could render them as point sprites, another new feature in OpenGL 2.0. (A point sprite is a point that is rendered as a textured quadrilateral that always faces the viewer.)

The real beauty in doing particle systems within a shader is that the computation is done completely in graphics hardware rather than on the host CPU. If the particle system data is stored in a vertex buffer object, there’s a good chance that it will be stored in the on-board memory of the graphics hardware, so you won’t even be using up any I/O bus bandwidth as you render the particle system each frame. With the OpenGL Shading Language, the equation for updating each particle can be arbitrarily complex. And, because the particle system is rendered like any other 3D object, you can rotate it around and view it from any angle while it is animating. There’s really no end to the effects (and the fun!) that you can have with particle systems.

The previous three examples discussed animating the geometry of an object and used the vertex processor to achieve this animation (because the geometry of an object cannot be modified by the fragment processor). The fragment processor can also create animation effects. The main purpose of most fragment shaders is to compute the fragment color, and any of the factors that affect this computation can be varied over time. In this section, we look at a shader that perturbs the texture coordinates in a time-varying way to achieve an oscillating or wobbling effect. With the right texture, this effect can make it very simple to produce an animated effect to simulate a gelatinous surface or a “dancing” logo.

This shader was developed to mimic the wobbly 2D effects demonstrated in some of the real-time graphics demos that are available on the Web (see http://www.scene.org for some examples). Its author, Antonio Tejada, wanted to use the OpenGL Shading Language to create a similar effect.

The central premise of the shader is that a sine function is used in the fragment shader to perturb the texture coordinates before the texture lookup operation. The amount and frequency of the perturbation can be controlled through uniform variables sent by the application. Because the goal of the shader was to produce an effect that looked good, the accuracy of the sine computation was not critical. For this reason and because the sine function had not been implemented at the time he wrote this shader, Antonio chose to approximate the sine value by using the first two terms of the Taylor series for sine. The fragment shader would have been simpler if the built-in sin function had been used, but this approach demonstrates that numerical methods can be used as needed within a shader. (As to whether using two terms of the Taylor series would result in better performance than using the built-in sin function, it’s hard to say. It probably varies from one graphics hardware vendor to the next, depending on how the sin function is implemented.)

For this shader to work properly, the application must provide the frequency and amplitude of the wobbles, as well as a light position. In addition, the application increments a uniform variable called StartRad at each frame. This value is used as the basis for the perturbation calculation in the fragment shader. By incrementing the value at each frame, we animate the wobble effect. The application must provide the vertex position, the surface normal, and the texture coordinate at each vertex of the object to be rendered.

The vertex shader for the wobble effect is responsible for a simple lighting computation based on the surface normal and the light position provided by the application. It passes along the texture coordinate without modification. This is exactly the same as the functionality of the Earth vertex shader described in Section 10.2.2, so we can simply use that vertex shader.

The fragment shader to achieve the wobbling effect is shown in Listing 16.8. It receives as input the varying variable LightIntensity as computed by the vertex shader. This variable is used at the very end to apply a lighting effect to the fragment. The uniform variable StartRad provides the starting point for the perturbation computation in radians, and it is incremented by the application at each frame to animate the wobble effect. We can make the wobble effect go faster by using a larger increment value, and we can make it go slower by using a smaller increment amount. We found that an increment value of about 1° gave visually pleasing results.

The frequency and amplitude of the wobbles can be adjusted by the application with the uniform variables Freq and Amplitude. These are defined as **vec2** variables so that the x and y components can be adjusted independently. The final uniform variable defined by this fragment shader is WobbleTex, which specifies the texture unit to be used for accessing the 2D texture that is to be wobbled.

For the Taylor series approximation of sine to give precise results, the value for which sine is computed must be brought into the range [−π/2, π/2]. The constants C_PI (π), C_2PI (2π), C_2PI_I (1/(2π)), and C_PI_2 (π/2) are defined to assist in this process.

The first half of the fragment shader computes a perturbation factor for the *x* direction. We want to end up with a perturbation factor that depends on both the s and the t components of the texture coordinate. To this end, the local variable rad is computed as a linear function of the s and t values of the texture coordinate. (A similar but different expression computes the *y* perturbation factor in the second half of the shader.) The current value of StartRad is added. Finally, the x component of Freq is used to scale the result.

The value for rad increases as the value for StartRad increases. As the scaling factor Freq.x increases, the frequency of the wobbles also increases. The scaling factor should be increased as the size of the texture increases on the screen to keep the apparent frequency of the wobbles the same at different scales. You can think of the Freq uniform variable as the Richter scale for wobbles. A value of 0 results in no wobbles whatsoever. A value of 1.0 results in gentle rocking, a value of 2.0 causes jiggling, a value of 4.0 results in wobbling, and a value of 8.0 results in magnitude 8.0 earthquake-like effects.

The next seven lines of the shader bring the value of rad into the range [−π/2, π/2]. Once this is accomplished, we can approximate **sin**(rad) with the first two terms of the Taylor series for sine, x − x³/3! (that is, x − x³/6). The result of this computation is multiplied by the x component of Amplitude. The computed sine value will be in the range [−1, 1]. If we simply added this value to the texture coordinate as the perturbation factor, it would *really* perturb the texture coordinate. We want a wobble, not an explosion! Multiplying the computed sine value by 0.05 results in reasonably sized wobbles. Increasing this scale factor makes the wobbles bigger, and decreasing it makes them smaller. You can think of this value as how far the texture coordinate is stretched from its original value: a factor of 0.05 means that the perturbation alters the original texture coordinate by no more than ±0.05, and a factor of 0.5 means that it alters the coordinate by no more than ±0.5.

With the *x* perturbation factor computed, the whole process is repeated to compute the *y* perturbation factor. This computation is also based on a linear function of the s and t texture coordinate values, but it differs from the one used for the *x* perturbation factor. Computing the *y* perturbation value differently avoids symmetry between the *x* and *y* perturbation factors in the final wobbling effect; such symmetry doesn't look as good when animated.

With the perturbation factors computed, we can finally do our (perturbed) texture access. The color value that is retrieved from the texture map is multiplied by LightIntensity to compute the final color value for the fragment. Several frames from the animation produced by this shader are shown in Color Plate 29. These frames show the shader applied to a logo to illustrate the perturbation effects more clearly in static images. But the animation effect is also quite striking when the texture used looks like the surface of water, lava, slime, or even animal/monster skin.

**Listing 16.8. Fragment shader for wobble effect**

```glsl
// Constants
const float C_PI    = 3.1415;
const float C_2PI   = 2.0 * C_PI;
const float C_2PI_I = 1.0 / (2.0 * C_PI);
const float C_PI_2  = C_PI / 2.0;

varying float LightIntensity;

uniform float StartRad;
uniform vec2  Freq;
uniform vec2  Amplitude;
uniform sampler2D WobbleTex;

void main()
{
    vec2  perturb;
    float rad;
    vec3  color;

    // Compute a perturbation factor for the x-direction
    rad = (gl_TexCoord[0].s + gl_TexCoord[0].t - 1.0 + StartRad) * Freq.x;

    // Wrap to -2*PI, 2*PI
    rad = rad * C_2PI_I;
    rad = fract(rad);
    rad = rad * C_2PI;

    // Center in -PI, PI
    if (rad >  C_PI) rad = rad - C_2PI;
    if (rad < -C_PI) rad = rad + C_2PI;

    // Center in -PI/2, PI/2
    if (rad >  C_PI_2) rad =  C_PI - rad;
    if (rad < -C_PI_2) rad = -C_PI - rad;

    perturb.x = (rad - (rad * rad * rad / 6.0)) * Amplitude.x;

    // Now compute a perturbation factor for the y-direction
    rad = (gl_TexCoord[0].s - gl_TexCoord[0].t + StartRad) * Freq.y;

    // Wrap to -2*PI, 2*PI
    rad = rad * C_2PI_I;
    rad = fract(rad);
    rad = rad * C_2PI;

    // Center in -PI, PI
    if (rad >  C_PI) rad = rad - C_2PI;
    if (rad < -C_PI) rad = rad + C_2PI;

    // Center in -PI/2, PI/2
    if (rad >  C_PI_2) rad =  C_PI - rad;
    if (rad < -C_PI_2) rad = -C_PI - rad;

    perturb.y = (rad - (rad * rad * rad / 6.0)) * Amplitude.y;

    color = vec3(texture2D(WobbleTex, perturb + gl_TexCoord[0].st));

    gl_FragColor = vec4(color * LightIntensity, 1.0);
}
```

With the fixed functionality in previous versions of OpenGL, animation effects were strictly in the domain of the application and had to be computed on the host CPU. With programmability, it has become easy to specify animation effects within a shader and let the graphics hardware do this work. Just about any aspect of a shader—position, shape, color, texture coordinates, and lighting, to name just a few—can be varied according to a global definition of current time.

When you develop a shader for an object that will be in motion, you should also consider how much of the animation effect you can encode within the shader. Encoding animation effects within a shader can offload the CPU and simplify the code in the application. This chapter described some simple ways for doing this. On and off, scrolling, and threshold effects are quite easy to do within a shader. Key-frame interpolation can be supported in a simple way through the power of programmability. Particles can be animated, including their position, color, velocity, and any other important attributes. Objects and textures can be made to oscillate, move, grow, or change based on mathematical expressions.

Animation is a powerful tool for conveying information, and the OpenGL Shading Language provides another avenue for expressing animation effects.

If you’re serious about animated effects, you really should read *Disney Animation: The Illusion of Life*, by two of the “Nine Old Men” of Disney animation fame, Frank Thomas and Ollie Johnston (1981). This book is loaded with color images and insight into the development of the animation art form at Disney Studios. It contains several decades’ worth of information about making great animated films. If you can, try to find a used copy of the original printing from Abbeville Press rather than the reprint by Hyperion. A softcover version was also printed by Abbeville, but this version eliminates much of the history of Disney Studios. A brief encapsulation of some of the material in this book can be found in the 1987 SIGGRAPH paper, *Principles of Traditional Animation Applied to 3D Computer Animation*, by John Lasseter.

Rick Parent’s 2001 book, *Computer Animation: Algorithms and Techniques*, contains descriptions of a variety of algorithms for computer animation. The book *Game Programming Gems*, edited by Mark DeLoura (2000), also has several pertinent sections on animation.

Particle systems were first described by Bill Reeves in his 1983 SIGGRAPH paper, *Particle Systems—A Technique for Modeling a Class of Fuzzy Objects*. In 1998, Jeff Lander wrote an easy-to-follow description of particle systems, titled “The Ocean Spray in Your Face,” in his column for *Game Developer Magazine*. He also made source code available for a simple OpenGL-based particle system demonstration program that he wrote.

DeLoura, Mark, ed., *Game Programming Gems*, Charles River Media, Hingham, Massachusetts, 2000.

Lander, Jeff, “The Ocean Spray in Your Face,” *Game Developer Magazine*, vol. 5, no. 7, pp. 13–19, July 1998. http://www.darwin3d.com/gdm1998.htm

Lasseter, John, “Principles of Traditional Animation Applied to 3D Computer Animation,” *Computer Graphics* (SIGGRAPH ’87 Proceedings), pp. 35–44, July 1987.

Parent, Rick, *Computer Animation: Algorithms and Techniques*, Morgan Kaufmann Publishers, San Francisco, 2001.

Reeves, William T., “Particle Systems—A Technique for Modeling a Class of Fuzzy Objects,” *ACM Transactions on Graphics*, vol. 2, no. 2, pp. 91–108, April 1983.

Reeves, William T., and Ricki Blau, “Approximate and Probabilistic Algorithms for Shading and Rendering Structured Particle Systems,” *Computer Graphics* (SIGGRAPH ’85 Proceedings), pp. 313–322, July 1985.

Thomas, Frank, and Ollie Johnston, *Disney Animation—The Illusion of Life*, Abbeville Press, New York, 1981.

Thomas, Frank, and Ollie Johnston, *The Illusion of Life—Disney Animation*, Revised Edition, Hyperion, 1995.

Watt, Alan H., and Mark Watt, *Advanced Animation and Rendering Techniques: Theory and Practice*, Addison-Wesley, Reading, Massachusetts, 1992.
