Chapter 6

Will It Blend?

Yes! It blends!

—Tom Dickson, owner of the Blendtec blender company

In 2006, Tom Dickson posted a goofy video to YouTube illustrating how tough his company's blenders were by blending some marbles into powder. Since then, his frequent videos have been viewed more than 100 million times and have featured blendings of everything from a tiki torch and a laser pointer to a Justin Bieber doll and a new camcorder. Tom's kind of blending has nothing to do with our kind of blending, though, unless the sadistic and unmerciful pulverization of a couple of iPads and an iPhone 4 counts. After all, they are OpenGL ES devices—devices that have their own form of blending, albeit not nearly as destructive. (Yes, it's a stretch.)

Blending plays an important role in OpenGL ES applications. It is the process used to create translucent objects that can be used for something as simple as a window to something as complicated as a pond. Other uses include the addition of atmospherics such as fog or smoke, the smoothing out of aliased lines, and the simulation of various sophisticated lighting effects. OpenGL ES 2 has a complex mechanism that uses small modules called shaders to do specialized blending effects among other things. But before shaders there were blending functions, which were not nearly as versatile but considerably easier to use.

In this chapter, you'll learn the basics of blending functions and how to apply them for both color and alpha blending. After that, you'll use a different kind of blending involving multiple textures, used for far more sophisticated effects such as shadowing. Finally, I'll show how we can apply these effects in the solar-system project.

Alpha Blending

You have no doubt noticed the color quadruplet of RGBA. As mentioned earlier, the A part is the alpha channel, and it is traditionally used for specifying translucency in an image. In a bitmap used for texturing, the alpha layer forms an image of sorts, which can be translucent in one section, transparent in another, and completely opaque in a third. If an object isn't using texturing but instead has its color specified via its vertices, lighting, or overall global coloring, alpha will let the entire object or scene have translucent properties. A value of 1.0 means the object or pixel is completely opaque, while 0 means it is completely invisible.

For alpha to work, as with any blending model, you work with both a source image and a destination image. Because this topic is best understood through examples, we're going to start with the first one now.

First let's go back to the original bouncy square exercise from Chapter 3. Then use Listing 6-1 in place of the original drawInRect() method, making sure you call setClipping in your initializer as before. Solid squares of colors are used here first, instead of textured ones, because it makes for a simpler example.

Listing 6-1. The new drawInRect() method

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    static const GLfloat squareVertices[] =                                     //1
    {
        -0.5, -0.5, 0.0,
         0.5, -0.5, 0.0,
        -0.5,  0.5, 0.0,
         0.5,  0.5, 0.0
    };

    static float transY = 0.0;

    glClearColor(0.0, 0.0, 0.0, 1.0);                                           //2

    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    //Do square one bouncing up and down.

    glTranslatef(0.0, (GLfloat)(sinf(transY)/2.0), -4.0);                       //3

    glVertexPointer(3, GL_FLOAT, 0, squareVertices);
    glEnableClientState(GL_VERTEX_ARRAY);

    //SQUARE 1

    glColor4f(0.0, 0.0, 1.0, 1.0);                                              //4

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    //SQUARE 2

    glColor4f(1.0, 0.0, 0.0, 0.5);                                              //5
    glLoadIdentity();
    glTranslatef((GLfloat)(sinf(transY)/2.0), 0.0, -3.0);                       //6

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                                      //7

    transY += 0.075;                                                            //8
}

-(void)setClipping
{
    float aspectRatio;
    const float zNear = .01;
    const float zFar = 100;
    const float fieldOfView = 30.0;
    GLfloat size;

    CGRect frame = [[UIScreen mainScreen] bounds];

    //h/w clamps the fov to the height; flipping it would make it relative to the width.

    aspectRatio = (float)frame.size.height / (float)frame.size.width;

    //Set the OpenGL projection matrix.

    glMatrixMode(GL_PROJECTION);

    size = zNear * tanf((fieldOfView/57.3) / 2.0);

    glFrustumf(-size, size, -size * aspectRatio,
               size * aspectRatio, zNear, zFar);

    glViewport(0, 0, frame.size.width, frame.size.height);

    //Make the OpenGL modelview matrix the default.

    glMatrixMode(GL_MODELVIEW);
}

And as before, let's take a close look at the code:

  • You should now recognize the bouncy square's coordinates. And in this case, the z component is added to make a 3D bouncy square.
  • Of course, in line 2, the buffer is cleared. But make the background black instead of the default gray.
  • In line 3 the square is moved back by 4 units.
  • Because there is no coloring per vertex, this call to glColor4f() in line 4 will set the entire square to blue. However, notice the last component of 1.0. That is the alpha, and it will be addressed shortly. Immediately following glColor4f() is the call to actually draw the square.
  • But we want two squares to show how they can blend. So in line 5, the color is changed to red and is given an alpha of .5, half that of the blue one.
  • Following that is a translation of only 3 units in line 6, and as a result, the red square will be larger because it is closer. Also, notice that the x value is now being translated. Instead of the up and down movement of the blue square, the closer red one will move left and right.
  • And in line 7, the red square is rendered. Because there is no depth buffer being used right now, the only reason why the red square covers the blue one is that it is drawn after the blue square.
  • In line 8 the translation increment is kept small so as to slow the motion, making it a little easier to catch the blending effects when they're turned on.

If all works, you should have something that looks like Figure 6-1.


Figure 6-1. The blue square goes up and down; the red one goes left and right.

It's not much to look at, but this will be the framework for the next several experiments. The first will switch on the default blending function.

As with so many other OpenGL features, you turn blending on with the call glEnable(GL_BLEND). Add that anywhere before the first call to glDrawArrays(). Recompile, and what do you see? Nothing, or at least nothing has changed. It still looks like Figure 6-1. That's because there's more to blending than shaking your fist at the monitor shouting “Blend!” We must specify a blending function as well, which describes how the source colors (as expressed via their fragments or pixels) mix with those at the destination. The default, of course, is for the source fragments to simply replace those at the destination, when depth cueing is off. As a matter of fact, blending can take place only when z-buffering is switched off.
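To make the placement concrete, here's a minimal sketch of the relevant lines in drawInRect(), using the names from Listing 6-1:

    //SQUARE 1

    glColor4f(0.0, 0.0, 1.0, 1.0);

    glEnable(GL_BLEND);    //Anywhere before the first draw call will do.

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);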

Blending Functions

To change the default blending, we must resort to using glBlendFunc(), which comes with two parameters. The first tells just what to do with the source, and the second specifies what to do with the destination. To picture what goes on, note that all that's ultimately happening is that each of the RGBA source components is added, subtracted, or whatever, with each of the destination components. That is, the source's red channel is mixed with the destination's red channel, the source's green is mixed with the destination's green, and so on. This is usually expressed the following way: call the source RGBA values Rs, Gs, Bs, and As, and call the destination values Rd, Gd, Bd, and Ad. But we also need both source and destination blending factors, expressed as Sr, Sg, Sb, and Sa and Dr, Dg, Db, and Da. (It's not as complicated as it seems, really.) And here's the formula for the final composite color:

blended color = (Rs*Sr + Rd*Dr, Gs*Sg + Gd*Dg, Bs*Sb + Bd*Db, As*Sa + Ad*Da)

In other words, multiply the source color by its blending factor and add it to the destination color multiplied by its blending factor.

One of the most common forms of blending is to overlay a translucent face on top of stuff that has already been drawn—that is, the destination. As before, that can be a simulated window pane, a heads-up display for a flight simulator, or other graphics that might just look nicer when mixed with the existing imagery. (The latter is used a lot in Distant Suns for a number of the elements such as the constellation names, the outlines, and so on.) Depending on the purpose, you may want the overlay to be nearly opaque, using an alpha approaching 1.0, or very tenuous, with an alpha approaching 0.0.

In this basic blending task, the source's colors are first multiplied by the alpha value, its blending factor. So if the source red is maxed out at 1.0 and the alpha is 0.75, the result is derived by simply multiplying 1.0 by 0.75. The same would be used for both green and blue. On the other hand, the destination colors are multiplied by 1.0 minus the source's alpha. Why? That effectively yields a composite color that can never exceed the maximum value of 1.0; otherwise, all sorts of color distortion could happen. Or imagine it this way: the source's alpha value is the proportion of the color “width” of 1.0 that the source is permitted to fill. The leftover space then becomes 1.0 minus the source's alpha. The larger the alpha, the greater the proportion of the source color that can be used, with an increasingly smaller proportion reserved for the destination color. So as the alpha approaches 1.0, the greater the amount of the source color that is copied to the frame buffer, replacing the destination color.
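Since it's just arithmetic, the operation is easy to verify by hand. Here is a minimal sketch in C of the per-channel calculation described above (the function name is mine, not part of OpenGL):

    //One channel of the "source-over" blend, all values normalized to 0..1.
    //This is what glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) computes.
    float blendChannel(float source, float dest, float sourceAlpha)
    {
        return source*sourceAlpha + dest*(1.0f - sourceAlpha);
    }

    //Example: a source red of 1.0 at an alpha of 0.75 over a destination red
    //of 0.25 yields 1.0*0.75 + 0.25*0.25, or 0.8125.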

Note In these examples, normalized color values are used because they make the process much easier to follow than unsigned bytes, which would express the colors from 0 to 255.

Now we can examine that in the next example. To set up the blending functions described earlier, you would use the following call:

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA are the blending factors described earlier. And remember that the first parameter is the source's blending, the object being written currently. Place that line immediately after where you enable blending; then compile and run. Do you see Figure 6-2?


Figure 6-2. The red square has an alpha of .5, and the blue has an alpha of 1.0.

So, what's happening? The blue has an alpha of 1.0, so each blue fragment completely replaces anything in the background. Then the red with an alpha of .5 means that 50 percent of the red is written to the destination. Over the black background the result is a dim red, only 50 percent of the specified value of 1.0 given in glColor4f(). So far, so good. And on top of the blue, 50 percent of the red value mixes with 50 percent of the blue value:

Blended color = Source color*Source alpha + (1.0-Source alpha)*Destination color

Or looking at each component based on the values in the earlier red square example, here are the calculations:

Red=1.0*0.5+(1.0-0.5)*0.0

Green=0.0*0.5+(1.0-0.5)*0.0

Blue=0.0*0.5+(1.0-0.5)*1.0

So, the final color of the fragment's pixels should be 0.5,0.0,0.5, or magenta. Now, the red and resulting magenta are a little on the dim side. What would you do if you wanted to make this much brighter? It would be nice if there were a means of blending the full intensities of the colors. Would you use alpha values of 1.0? Nope. Why? Well, with blue as the destination and a source alpha of 1.0, the preceding blue channel equation would be 0.0*1.0+(1.0-1.0)*1.0. And that equals 0, while the red would be 1.0, or solid. What you would want is to have the brightest red when writing to the black background, and the same for the blue. For that you would use a blending function that writes both colors at full intensity, such as GL_ONE. That means the following:

glBlendFunc(GL_ONE, GL_ONE);

Going back to the equations using the source triplet of red=1, green=0, blue=0 and the destination of red=0, green=0, blue=1 (with alpha defaulting to 1.0), the calculations would be as follows:

Red=1*1+0*1

Green=0*1+0*1

Blue=0*1+1*1

And that yields a color in which red=1, green=0, blue=1. And that, my friends, is magenta, as shown in Figure 6-3.


Figure 6-3. Blending full intensities of red and blue

Now it's time for another experiment of sorts. Take the code from the previous example, set both alphas to 0.5, and reset the blend function back to the traditional values for transparency:

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

After you run this modified code, take note of the combined color, and notice that the farther square (the blue one, at -4.0 away) is the first to be rendered, with the red one second. Now reverse the order in which the two are drawn, and run again. What's wrong? You should get something like Figure 6-4.


Figure 6-4. The image on the left is drawn with blue first, while the one on the right is drawn with red first.

The intersections are slightly different colors. This shows one of the mystifying gotchas in OpenGL: as with most 3D frameworks, the blending will be slightly different depending on the order in which the faces and colors are rendered. In this case, it is actually quite simple to figure out what's going on. In Figure 6-4 (left), the blue square is drawn first with an alpha of .5. So, even though the blue color triplet is defined as 0,0,1, the alpha value will knock that down to 0,0,.5 as it is written to the frame buffer. Now add the red square with similar properties. Naturally, the red will write to the black part of the frame buffer in the same manner as the blue, so the final value will be .5,0,0. But note what happens when red writes on top of the blue. Since the blue is already at half of its intensity, the blending function will cut that down even further, to .25, as a result of the destination part of the blending function, (1.0-source alpha)*blue+destination, or (1.0-.5)*.5+0, or .25. The final color is then .5,0,.25. With the lower intensity of the blue, it contributes less to the composite color, leaving red to dominate. Now in Figure 6-4 (right), the order is reversed, so the blue dominates with a final color of .25,0,.5.

Table 6-1 has all of the allowable OpenGL ES blending factors, although not all are supported by both source and destination. As you can see, there is ample room for tinkering, with no set rules of what creates the best-looking effect. This will be highly reliant on your individual tastes and needs. It is a lot of fun to try different values, though. Make sure to fill the background with a dim gray, because some of the combinations will just yield black when written to a black background.

Table 6-1. The Available Blending Factors

    Blend Factor               Multiplies By                       Available For
    GL_ZERO                    (0, 0, 0, 0)                        source and destination
    GL_ONE                     (1, 1, 1, 1)                        source and destination
    GL_SRC_COLOR               (Rs, Gs, Bs, As)                    destination only
    GL_ONE_MINUS_SRC_COLOR     (1-Rs, 1-Gs, 1-Bs, 1-As)            destination only
    GL_DST_COLOR               (Rd, Gd, Bd, Ad)                    source only
    GL_ONE_MINUS_DST_COLOR     (1-Rd, 1-Gd, 1-Bd, 1-Ad)            source only
    GL_SRC_ALPHA               (As, As, As, As)                    source and destination
    GL_ONE_MINUS_SRC_ALPHA     (1-As, 1-As, 1-As, 1-As)            source and destination
    GL_DST_ALPHA               (Ad, Ad, Ad, Ad)                    source and destination
    GL_ONE_MINUS_DST_ALPHA     (1-Ad, 1-Ad, 1-Ad, 1-Ad)            source and destination
    GL_SRC_ALPHA_SATURATE      (f, f, f, 1), f = min(As, 1-Ad)     source only

In Chapter 5, we took a look at the GL extensions that OpenGL ES on iOS devices supported. Several of those are for more sophisticated blending solutions such as GL_OES_blend_equation_separate, GL_OES_blend_func_separate, GL_OES_blend_subtract, and GL_EXT_blend_minmax. These values are used with the methods glBlendEquation() and glBlendEquationSeparate().

Look back at the default blending equation, where the final color is determined by a source value+a dest value. But what if you wanted the source to subtract the destination instead of add? Calling glBlendEquation(GL_FUNC_SUBTRACT) will do the job. Add that line right below glBlendFunc(), ensure both squares have an alpha of .5, and reset the colors back to the original with red in front, compile and run. The results may be slightly nonobvious, as in Figure 6-5 (left). What is happening is that the operation really is “subtracting” blue from the red source, but there is no blue component in the red square's color. The math yields a final color with red=.5, green=0, and blue=-.25. But because negative colors do not occur in this plane of existence (or at least in San Jose, California), the system clamps the blue component to 0. The result is a solid red where the intersection is. So, in order to see something here, the front square needs to be drawn with some blue already. So, change red's color to be 1,0,1, or magenta. Now when run, Figure 6-5 (right) is the result, because the blue destination can subtract from the blue in the source, leaving a positive value that the system understands. And in this case the value of the intersection is .5,0,.25, which is why we don't have a pure red but more of a magenta-ish red. Try importing it into a paint program, and verify the actual colors using the eyedropper function.
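For reference, the full setup might look like the following sketch. Note that in the ES 1.1 headers the equation call is declared with the OES suffix (from the GL_OES_blend_subtract extension in glext.h), so use whichever spelling your project resolves:

    #import <OpenGLES/ES1/glext.h>

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBlendEquationOES(GL_FUNC_SUBTRACT_OES);    //Source minus destination instead of the sum.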


Figure 6-5a,b. On the left, no blending takes place using the subtract operation, while it succeeds on the right.

There are still two other function calls in the extended set: glBlendEquationSeparateOES() and glBlendFuncSeparateOES(). These functions allow you to modify the RGB channels separately from alpha. The OES suffix marks them as extensions to OpenGL ES 1.1 (they are standard in 2.0, so there you don't need the OES at the end), and they are defined in glext.h. One way in which this is useful is to counteract the effects of rendering order that Figure 6-4 illustrates.
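I won't work through a full example of these, but the general shape of a call is sketched below; the first pair of factors applies to the RGB channels and the second pair to alpha alone. The factors shown are merely illustrative:

    //Normal transparency for the colors, while the alpha channels are
    //simply summed, independent of the RGB settings.
    glBlendFuncSeparateOES(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,    //RGB factors
                           GL_ONE, GL_ONE);                         //alpha factors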

And one final method here that might be really handy in some blending operations is that of glColorMask(). This function lets you block one or more color channels from being written to the destination. To see this in action, modify the red square's colors to be 1,1,0,1; set the two blend functions back to GL_ONE; and comment out the line glBlendEquation(GL_FUNC_SUBTRACT);. You should see something like Figure 6-6 (left) when run. The red square is now yellow and, when blended with blue, yields white at the intersection. Now add the following line:

        glColorMask(GL_TRUE, GL_FALSE, GL_TRUE, GL_TRUE);

The preceding line masks, or turns off, the green channel when being drawn to the frame buffer. When run, you should see Figure 6-6 (right), which looks remarkably like Figure 6-3. And as a matter of fact, logically they are identical.


Figure 6-6. The left one doesn't use glColorMask, so all colors are in play, while the right one masks off the green channel.

Multicolor Blending

Now we can spend a few minutes looking at the effect of blending functions when the squares are defined with individual colors for each vertex. Add Listing 6-2 to the venerable drawInRect(). The first color set defines yellow, magenta, and cyan (the three complementary colors to the standard red-green-blue specified in the second set).

Listing 6-2. Vertex Colors for the Two Squares

static const GLfloat squareColorsYMCA[] =
{
        1.0, 1.0,   0, 1.0,
          0, 1.0, 1.0, 1.0,
          0,   0,   0, 1.0,
        1.0,   0, 1.0, 1.0,
};

 static const GLfloat squareColorsRGBA[] =
{
        1.0,   0,   0, 1.0,
          0, 1.0,   0, 1.0,
          0,   0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
};

Assign the first color array to the first square (which has been the blue one up until now), and assign the second to the former red square. And don't forget to enable the use of the color array. You should be familiar enough now to know what to do. Also, notice that the arrays are now normalized as a bunch of GLfloats as opposed to the previously used unsigned bytes, so you'll have to tweak the calls to glColorPointer(). The solution is left up to the reader (I've always wanted to say that). With the blending disabled, you should see Figure 6-7 (left), and when enabled using the traditional function for transparency, Figure 6-7 (center) should be the result. What? It isn't? You say it still looks like the first figure? Why would that be?

Look back at the color arrays. Notice how the last value in each row, alpha, is at its maximum of 1.0. Remember that with this blending mode, the destination values are multiplied by (1.0 - source alpha), which here is 0.0, so the source color reigns supreme, as you saw in a previous example. One solution to seeing some real transparency would be to use the following:

glBlendFunc(GL_ONE, GL_ONE);

This works because it ditches the alpha channel altogether. If you want alpha with the “standard” function, merely change the 1.0 values to something else, such as .5. And the result is Figure 6-7 (right).


Figure 6-7. No blending (left), GL_ONE blending (center), alpha blending (right), respectively

Texture Blending

Now, with fear and trembling, we can approach the blending of textures. Initially this seems much like the earlier alpha blending, but all sorts of interesting things can be done by using multitexturing.

First let's rework the earlier code to support two textures at once and do vertex blending. Listing 6-3 merges some of the code from Chapter 5 with the framework from this chapter's examples.

Listing 6-3. The drawInRect() Method Rejiggered to Support Two Textured Squares

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    static const GLfloat squareVertices[] =
    {
        -0.5, -0.5, 0.0,
         0.5, -0.5, 0.0,
        -0.5,  0.5, 0.0,
         0.5,  0.5, 0.0
    };

    static const GLfloat squareColorsYMCA[] =
    {
        1.0, 1.0,   0, 1.0,
          0, 1.0, 1.0, 1.0,
          0,   0,   0, 1.0,
        1.0,   0, 1.0, 1.0,
    };

    static const GLfloat squareColorsRGBA[] =
    {
        1.0,   0,   0, 1.0,
          0, 1.0,   0, 1.0,
          0,   0, 1.0, 1.0,
        1.0, 1.0, 1.0, 1.0,
    };

    static GLfloat textureCoords[] =
    {
        0.0, 0.0,
        1.0, 0.0,
        0.0, 1.0,
        1.0, 1.0
    };

    static float transY = 0.0;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    [self setClipping];

    glClearColor(0.0, 0.0, 0.0, 1.0);

    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    //Set up for using textures.

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, m_Texture0.name);
    glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    //Do square one bouncing up and down.

    glTranslatef(0.0, (GLfloat)(sinf(transY)/2.0), -4.0);

    glVertexPointer(3, GL_FLOAT, 0, squareVertices);
    glEnableClientState(GL_VERTEX_ARRAY);

    //glEnable(GL_BLEND);

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    //SQUARE 1

    //glEnableClientState(GL_COLOR_ARRAY);

    glColorPointer(4, GL_FLOAT, 0, squareColorsYMCA);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    //SQUARE 2

    glLoadIdentity();
    glTranslatef((GLfloat)(sinf(transY)/2.0), 0.0, -3.0);

    glColorPointer(4, GL_FLOAT, 0, squareColorsRGBA);
    glBindTexture(GL_TEXTURE_2D, m_Texture1.name);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    transY += 0.075f;
}

In addition to this, make sure to add loadTexture() from the Chapter 5 examples, and initialize it in the usual place. Because we need two different textures, initialize the first as m_Texture0 and the second as m_Texture1. You will likely notice that while I have both blending and color stuff, I commented out some lines just for this first run-through to ensure that the basic stuff is working. If it's working, you should see something like Figure 6-8 (left). And if that works, unleash the vertex colors by uncommenting glEnableClientState(GL_COLOR_ARRAY) and glEnable(GL_BLEND), which should yield Figure 6-8 (center). And for Figure 6-8 (right), the Golden Gate Bridge is colored with a solid red. I shall let you, dear reader, figure out how to do this.

Using a single bitmap and colorizing it is a common practice to save memory. If you are doing some UI components in the OpenGL layer, consider using a single image and colorizing it using these techniques. You might ask why it is a solid red as opposed to merely being tinted red, which would allow for some variation in colors. What is happening here is that the vertex's colors are being multiplied by the colors of each fragment. For the red, I've used the RGB triplet of 1.0,0.0,0.0. So when each fragment is calculated in a channel-wise multiplication, the green and blue channels are multiplied by 0 and completely filtered out, leaving just the red. If you wanted to let some of the other colors leak through, you would specify vertices with a more neutral tone, with the desired tint color a little higher than the others, such as 1.0,0.7,0.7.
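A hedged sketch of the colorizing trick, assuming the default GL_MODULATE texture environment and no vertex color array:

    //Each texel is multiplied channel-wise by (1, 0, 0), so green and blue
    //are filtered out entirely, leaving a solid red version of the texture.
    glDisableClientState(GL_COLOR_ARRAY);
    glColor4f(1.0, 0.0, 0.0, 1.0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    //A more neutral tint that lets the other channels leak through:
    //glColor4f(1.0, 0.7, 0.7, 1.0);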


Figure 6-8. On the left, only the textures are displayed. In the center, they're blended with color, and for the one on the right they're solid red.

You can also add translucency to textures quite easily. To do this, I'll introduce a small simplifying shortcut: you can colorize a textured face with a single color by simply using glColor4f(), eliminating the need for a vertex color array. Setting the alpha to less than 1.0 results in a see-through texture, as shown in Figure 6-9.
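A minimal sketch, assuming blending is enabled with the traditional transparency function:

    glColor4f(1.0, 1.0, 1.0, 0.5);    //White at half alpha: the texture keeps its
                                      //colors but becomes 50 percent see-through.
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);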


Figure 6-9. The image on the left has an alpha of .5, while the figure on the right has an alpha of .75.

Multitexturing

So now we've covered blending for colors and mixed mode with textures and colors, but what about combining two textures to make a third? Such a technique is called multitexturing. Multitexturing can be used for layering one texture on top of another while performing certain mathematical operations. More sophisticated applications include simple image processing. But let's go for the low-hanging fruit first.

Multitexturing requires the use of texture combiners and texture units. Texture combiners let you combine and manipulate textures that are bound to one of the hardware's texture units, the specific part of the graphics chip that wraps an image around an object. Before the iPhone 3GS, you had only two texture units to deal with, which was a limitation of the PowerVR MBX graphics chip from Imagination Technologies. When the 3GS came out, Apple switched to the more powerful SGX chip, which increased that to a total of eight texture units. If you anticipate using combiners in a big way, you might want to verify the supported total with glGetIntegerv(GL_MAX_TEXTURE_UNITS, &numberTextureUnits), where numberTextureUnits is declared as a GLint.
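The query itself is only a couple of lines; a sketch:

    GLint numberTextureUnits;

    glGetIntegerv(GL_MAX_TEXTURE_UNITS, &numberTextureUnits);
    NSLog(@"This device supports %d texture units.", (int)numberTextureUnits);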

To set up a pipeline to handle multitexturing, we need to tell OpenGL what textures to use and how to mix them together. The process isn't that much different (in theory at least) than defining the blend functions when dealing with the alpha and color blending operations previously. It does involve heavy use of the glTexEnvf() call, another one of OpenGL's wildly overloaded methods. (If you don't believe me, check out its official reference page on the OpenGL site.) This sets up the texture environment that defines each stage of the multitexturing process.

Figure 6-10 illustrates the combiner chain. Each combiner refers to the previous texture fragment (P0 or Pn) or the incoming fragment for the first combiner. It then takes a fragment from a “source” texture (called S0), combines it with P0, and hands it off to the next combiner if needed (called C1); then the cycle repeats.


Figure 6-10. The texture combiner chain

The best way to tackle this topic is like any others: go to the code. In the following example, two textures are loaded together, bound to their respective texture units, and merged into a single output texture. Several kinds of methods used to combine the two images are tried with the results of each shown and examined in depth.

First, we revisit our old friend, drawInRect(). We're back to only a single square, going up and down. The color support has also been stripped out. So, you should have something like Listing 6-4. And make sure that you are still loading a second texture.

Listing 6-4. drawInRect() Revisited, Modified for Multitexture Support

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
     static const GLfloat squareVertices[] =         
    {
         -0.5, -0.5, 0.0,
          0.5, -0.5, 0.0,
         -0.5,  0.5, 0.0,
          0.5,  0.5, 0.0
    };
    
    static  GLfloat textureCoords[] =
    {                
        0.0, 0.0,
        1.0, 0.0,
        0.0, 1.0,
        1.0, 1.0
    };
    
    static float transY = 0.0;
    
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    
    [self setClipping];
    
    glClearColor(0.0, 0.0,0.0, 1.0);
    
    glClear(GL_COLOR_BUFFER_BIT);
    
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    
    //Set up for using textures.

    glEnable(GL_TEXTURE_2D);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, squareVertices);

    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glClientActiveTexture(GL_TEXTURE0);                                             //1
    glTexCoordPointer(2, GL_FLOAT,0,textureCoords);

    glClientActiveTexture(GL_TEXTURE1);                                             //2
    glTexCoordPointer(2, GL_FLOAT,0,textureCoords);
    
    glLoadIdentity();
    glTranslatef(0.0, (GLfloat)(sinf(transY)/2.0), -2.5);
    [self multiTexture:m_Texture0.name tex1:m_Texture1.name];

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    
    transY += 0.075f;    
}

There is a new call here, shown in lines 1 and 2: glClientActiveTexture(), which sets the texture unit to operate on. This is on the client side, not the hardware side of things, and indicates which texture unit is to receive the texture coordinate array. Don't confuse this with glActiveTexture(), used in Listing 6-5 below, which actually turns a specific texture unit on.
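Because the two are so easy to mix up, it may help to see them side by side:

    //Client side: selects which texture unit receives the texture-coordinate array.
    glClientActiveTexture(GL_TEXTURE0);
    glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);

    //Server side: selects which texture unit subsequent calls such as
    //glBindTexture() and glTexEnvf() operate on.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, m_Texture0.name);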

The other additional method we need is multiTexture, shown in Listing 6-5. This is a very simple default case. The fancy stuff comes later.

Listing 6-5. Sets Up the Texture Combiners

-(void)multiTexture:(GLuint)tex0 tex1:(GLuint)tex1
{
     GLfloat combineParameter=GL_MODULATE;                                          //1

    // Set up the first texture.

     glActiveTexture(GL_TEXTURE0);                                                  //2
     glBindTexture(GL_TEXTURE_2D, tex0);                                            //3

    // Set up the second texture.

     glActiveTexture(GL_TEXTURE1);
     glBindTexture(GL_TEXTURE_2D, tex1);

    // Set the texture environment mode for this texture to combine.

     glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, combineParameter);              //4
}

Here's what is going on:

  • Line 1 specifies what the combiners should do. Table 6-2 lists all the possible values.
  • glActiveTexture() makes active a specific hardware texture unit in line 2.
  • Line 3 should not be a mystery, because you have seen it before. In this example, the first texture is bound to a specific hardware texture unit. The following two lines do the same for the second texture.
  • Now tell the system what to do with the textures in line 4. Table 6-2 lists all the possible parameters. (In the table, P is previous, S is source, subscript a is alpha, and c is color and is used only when color and alpha have to be considered separately.)

Table 6-2. The Texture Environment Modes

    Mode           Function
    GL_REPLACE     Output = S
    GL_MODULATE    Output = P*S (the default)
    GL_DECAL       Output color = Pc*(1-Sa) + Sc*Sa; output alpha = Pa
    GL_BLEND       Output color = Pc*(1-Sc) + C*Sc, where C is the texture
                   environment color; output alpha = Pa*Sa
    GL_ADD         Output color = Pc + Sc; output alpha = Pa*Sa
    GL_COMBINE     Uses the separate combiner settings described later

Now compile and run. Your display should superficially resemble the results of Figure 6-11.


Figure 6-11. Hedly is the “previous” texture on the left, while the Jackson Pollock-ish painting is the “source.” When using GL_MODULATE, the results are on the right.

Now it's time to play with other combiner settings. Try GL_ADD for the texture mode, followed by GL_BLEND and GL_DECAL. My results are shown in Figure 6-12. For addition, notice how the white part of the overlay texture is opaque. Because white is 1.0 for all three colors, it will always yield a 1.0 color so as to block out anything underneath. For the nonwhite shades, you should see a little of the Hedly texture poke through. GL_BLEND, as shown in Figure 6-12 (center), is not quite as obvious. Why cyan splats in place of the red? Simple. Say the red value is 1.0, its highest. Consider the equation for GL_BLEND:

Output = Pn*(1-Sn) + Sn*C

The first section would be zero for red, because red's value of 1 is subtracted from the 1 in the equation, and by gosh, the second one would be too, providing that the default environment color of black is used. Now consider the green channel. Assume that the background image has a value of .5 for green (the “previous” color), while keeping the splat color (the source) a solid red, so there is no blue or green in the splat. The first section of the equation becomes .5*(1.0-0.0), or .5. That is, the .5 value for green in the previous texture, Hedly, is multiplied against 1.0-minus-green in the source texture. Because both the green and blue channels in the source's red splats would be 0.0, the combination of green and blue without any red gives a cyan shading, because cyan is the inverse of red. And if you look really closely at Figure 6-12 (center), you can just make out a piece of Hedly poking through. The same holds true for the magenta and yellow splats.

In Figure 6-12 (right), GL_DECAL is used. It can serve many of the same duties that decals for plastic models had, namely, the application of signs or symbols that block out anything behind them. So for decals, typically the alpha channel would be set to 1.0 for the actual image part of the texture, while it would be 0.0 for any part that was not of the desired image. Typically the background would be black, and in your paint program you would generate an alpha channel based on luminosity or on the part of the image that has a nonzero color. In the case of the splat, because the background was white, I had to invert the colors first to turn it black, generate the mask, and merge it with the normal positive image. The image actually used is the rgb_splats_masked.256.color.png file you can find in the project download. Some alpha that is slightly less than 1.0 was generated for the green channel, and as a result, you can see a little part of Hedly showing through.
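Incidentally, the C in the GL_BLEND equation is the texture environment color, which defaults to black. If you'd like to experiment with it, a sketch:

    //A dark red constant color for GL_BLEND to mix in place of black.
    GLfloat envColor[4] = {0.5, 0.0, 0.0, 1.0};

    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, envColor);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND);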

Note On older pre-iPhone 3GS/pre-iPod touch third-generation devices, Apple lists a number of caveats in its OpenGL ES programming guide. If you want to ensure your creation will work on earlier devices, you should check it out.


Figure 6-12. On the left, GL_ADD was used, GL_BLEND was added for the center, and GL_DECAL was added on the right.

One further task would be to animate the second texture. Add the following to drawInRect():

    for(i=0;i<8;i++)
    {
          textureCoords2[i]+=.01;
    }

Then make a duplicate of the original textureCoords array, and name it textureCoords2. The latter coordinates are specific to the second texture, so modify the second call to glTexCoordPointer() to use the new data. And finally, declare the index i somewhere. You should see texture 2 scrolling wildly on top of texture 1.
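Assembled, the additions might look like the following sketch, with textureCoords2 and i being the new names just described:

    static GLfloat textureCoords2[] =
    {
        0.0, 0.0,
        1.0, 0.0,
        0.0, 1.0,
        1.0, 1.0
    };
    int i;

    //Scroll the second set of coordinates a little each frame.
    for(i = 0; i < 8; i++)
    {
        textureCoords2[i] += .01;
    }

    glClientActiveTexture(GL_TEXTURE1);
    glTexCoordPointer(2, GL_FLOAT, 0, textureCoords2);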

An effect like this could be used to animate rain or snow in a cartoonlike setting or a cloud layer surrounding a planet. The latter would be cool if you had two additional textures, one for the upper deck of clouds and one for the lower, moving at different rates.

As mentioned, the environment parameter GL_COMBINE needs an additional family of settings to get working, because it lets you operate on a much more precise level with the combiner equations. If you were to do nothing more than just using GL_COMBINE, it defaults to GL_MODULATE, so you'd see no difference between the two. Using Arg0 and Arg1 means the input sources are set up by using something like the following line, where GL_SOURCE0_RGB is the argument 0 or Arg0, referenced in Table 6-3:

glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);

And similarly you'd use GL_SOURCE1_RGB for Arg1.
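Putting those pieces together, a sketch of a GL_COMBINE setup that explicitly reproduces the GL_MODULATE default:

    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);

    //Arg0 * Arg1, the same result plain GL_MODULATE would give.
    glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
    glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);     //Arg0 is this unit's texture.
    glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PREVIOUS);    //Arg1 is the previous unit's output.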

Table 6-3. The GL_COMBINE Operations

    Operation          Function
    GL_REPLACE         Arg0
    GL_MODULATE        Arg0*Arg1 (the default)
    GL_ADD             Arg0 + Arg1
    GL_ADD_SIGNED      Arg0 + Arg1 - 0.5
    GL_INTERPOLATE     Arg0*Arg2 + Arg1*(1 - Arg2)
    GL_SUBTRACT        Arg0 - Arg1
    GL_DOT3_RGB        4*((Arg0r-0.5)*(Arg1r-0.5) + (Arg0g-0.5)*(Arg1g-0.5)
                       + (Arg0b-0.5)*(Arg1b-0.5)), written to the RGB channels
    GL_DOT3_RGBA       The same dot product, written to all four channels

Mapping with Bumps

You can do many extremely sophisticated things with textures; bump mapping is just one. So, what follows is a discussion of exactly what “bumps” are and why anyone should be concerned with mapping them.

As previously pointed out, much of the challenge in computer graphics is to make complicated-looking visuals using clever hacks behind the scenes. Bump mapping is just one of those tricks, and in OpenGL ES 1, it can be implemented with texture combiners.

Just as textures were “tricks” to layer complexity onto a simple face, bump mapping is a technique to add an apparent third dimension to the texture. It's used to give a roughness to the overall surface of an object, producing some surprisingly realistic highlights when illuminated. It might be used to simulate waves on a lake, the surface of a tennis ball, or a planetary surface.

Roughness in an object's surface is perceived by the way it plays with both light and shadow. For example, consider a full moon vs. a gibbous moon, as shown in Figure 6-13. The moon is full when the sun is directly in front of it, and as a result, the surface is little more than varying shades of gray. No shadows whatsoever are visible. It's not much different from looking at the ground with the sun directly behind you: around the shadow of your head, the surface looks flat. Now, if the light source is moved to the side of things, suddenly all sorts of details pop out. Figure 6-13 (right) shows a gibbous moon with the sun toward the left, over the moon's eastern limb. It's a completely different story, isn't it?


Figure 6-13. Relatively little detail shows on the left, while with oblique lighting, a lot more shows on the right.

Understanding how highlights and shadows work together is absolutely critical to the training of fine artists and illustrators.

Adding real surface displacement to replicate the entire lunar surface would likely require many gigabytes of data and is out of the question for the current generation of small handheld devices, from both a memory and a CPU standpoint. Thus the rather elegant hack of bump mapping takes center stage.

If you remember from Chapter 4 on lighting, you had to add an array of “face normals” to the sphere model. Normals are merely vectors perpendicular to the face, showing the direction the face is pointing. It is the angle of the normal to any of the light sources that largely determines just how bright or dark the face will be: the more directly the face is oriented toward the light, the brighter it will be. So, what if you had a compact way to encode normals not on a face-by-face basis, because a model might have relatively few faces, but on, say, a pixel-by-pixel basis? And what if you could combine that encoded normal array with a real image texture and process it in a way that could brighten or darken a pixel from the image, based on the direction of incoming light?

This brings us back to the texture combiners. In Table 6-3, notice the last two combiner types: GL_DOT3_RGB and GL_DOT3_RGBA. Now, reach back, way back to your high-school geometry classes. Remember the dot product of two vectors? Both the dot products and cross products were those things that you scorned with the whine “Teacherrrrr?? Why do I need to know this?” Well, now you are going to get your answer.

The dot product is a single value that measures how closely two vectors are aligned: the product of their lengths scaled by the cosine of the angle between them. Still not following? Consider Figure 6-14 (left). The dot product is the “amount” of the normal vector that is aiming toward the light, and that value is used to directly illuminate the face. In Figure 6-14 (right), the face is at a right angle to the direction of the sun, so it is not illuminated.


Figure 6-14. On the left, the face is illuminated; on the right, it is not.

With this in mind, the “cheat” that bump mapping uses is as follows. Take the actual texture you want to use, and add a special second companion texture to it. This second texture encodes normal information in place of the RGB colors. So, instead of using floats that are 4 bytes each, it uses 1-byte values for the xyz of normal vectors that conveniently fit inside a single 4-byte pixel. Since the vectors usually don't have to be super accurate, the 8-bit resolution is just fine and is very memory efficient. So, these normals are generated in a way to map directly to the vertical features you want highlighted.

Because normals can have negative components as well as positive (negative when pointing away from the light), the xyz values must be shifted to fit the range of 0 to 1. That is, components from -1 to +1 are mapped to between 0 and 1. So, the “red” component, which is typically the x part of the vector, would be calculated as follows:

red = x*0.5 + 0.5

And of course this is similar for the green and blue bits.
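As an illustration only (none of these names come from OpenGL), here's a sketch in C of how a tool might pack a unit normal into a pixel:

    typedef struct { float x, y, z; } Vector3;

    //Map a component from the range [-1, 1] into a byte in [0, 255].
    static unsigned char encodeComponent(float v)
    {
        return (unsigned char)((v*0.5f + 0.5f)*255.0f);
    }

    static void encodeNormal(Vector3 n, unsigned char pixel[4])
    {
        pixel[0] = encodeComponent(n.x);    //red
        pixel[1] = encodeComponent(n.y);    //green
        pixel[2] = encodeComponent(n.z);    //blue
        pixel[3] = 255;                     //alpha, unused here
    }

    //A normal pointing straight out of the surface, (0, 0, 1), encodes to
    //about (127, 127, 255), which explains the purple cast of normal maps.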

Now look at the formula expressed in the GL_DOT3_RGB entry of Table 6-3. This takes the RGB triplet as the vector and returns its length. N is the normal vector, and L is the light vector, so the length is solved as follows:

Output = 4*((Nr-0.5)*(Lr-0.5) + (Ng-0.5)*(Lg-0.5) + (Nb-0.5)*(Lb-0.5))

So if the face is aimed directly toward the light along the x-axis, the normal's red would be 1.0, and the light's red or x value would also be 1.0. The green and blue bits would be .5, which is the encoded form of 0. Plugging that into the earlier equation would look like this:

Output = 4*((1.0-0.5)*(1.0-0.5) + (0.5-0.5)*(0.5-0.5) + (0.5-0.5)*(0.5-0.5)) = 4*(0.25) = 1.0

This is exactly what we'd expect. And if the normal is pointing up and away from the surface in the z direction, encoded in the blue byte, the answer should be 0 because the normals are largely aimed up away from the texture's X and Y planes. Figure 6-15 (left) shows a bit of our earth map, while Figure 6-15 (right) shows its corresponding normal map.


Figure 6-15. The left side is our image; the right is the matching normal map.

And why is the normal map primarily purple? The straight-up vector pointing away from the earth's surface is encoded such that red=.5, green=.5, and blue=1. (Keep in mind that .5 is actually 0.)

When the texture combiner is set to the DOT3 mode, it uses the normal and a lighting vector to determine the intensity of each texel. That value is then used to modulate the color of the real image texture.

Now it's time to recycle the previous multitexture project. We'll need to add a second texture composed of the bump map, available from the Apress site, and change the way the combiners are set.

To the viewDidLoad() method, load the normal map for this example into m_Texture0, followed by the companion earth texture as m_Texture1. Then add the new routine, multiTextureBumpMap(), as shown in Listing 6-6.

Listing 6-6. Setting Up the Combiners for Bump Mapping

-(void)multiTextureBumpMap:(GLuint)tex0 tex1:(GLuint)tex1
{
    GLfloat x,y,z;
    static float lightAngle=0.0;

    lightAngle+=1.0;                                                                //1

    if(lightAngle>180)
        lightAngle=0;

    // Set up the light vector.

    x = sin(lightAngle * (3.14159 / 180.0));                                        //2
    y = 0.0;
    z = cos(lightAngle * (3.14159 / 180.0));

    // Half shifting to have a value between 0.0 and 1.0.

    x = x * 0.5 + 0.5;                                                              //3
    y = y * 0.5 + 0.5;
    z = z * 0.5 + 0.5;

    glColor4f(x, y, z, 1.0);                                                        //4

    //The color and normal map are combined.

    glActiveTexture(GL_TEXTURE0);                                                   //5
    glBindTexture(GL_TEXTURE_2D, tex0);

    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);                     //6
    glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);                         //7
    glTexEnvf(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);                             //8
    glTexEnvf(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PREVIOUS);                            //9

    // Set up the Second Texture, and combine it with the result of the Dot3 combination.

    glActiveTexture(GL_TEXTURE1);                                                   //10
    glBindTexture(GL_TEXTURE_2D, tex1);

    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);                    //11

}

The preceding operation takes place using two stages. The first blends the bump map with the primary color, which is established using the glColor4f call. The second takes the results of that and combines it with the color image using our old friend GL_MODULATE.

So, let's examine it piece by piece:

  • In line 1 we define lightAngle that will cycle between 0 and 180 degrees around the texture to show how the highlights look under varying lighting conditions.
  • Calculate the xyz values of the light vector in lines 2ff.
  • In line 3, the xyz components need to be scaled to match those of the bump map.
  • Line 4 colors the fragments using the light vector components.
  • Lines 5f set and bind the bump map first, which is tex0.
  • GL_COMBINE in line 6 tells the system to expect a combining type to follow.
  • In line 7, we specify that we're going to combine just the RGB values using GL_DOT3_RGB operations (GL_DOT3_RGBA includes the alpha but is not needed here).
  • Here we set up “stage 0,” the first of two stages.  The source of the first bit of data is specified in line 8. This says to use the texture from the current texture unit (GL_TEXTURE0) as the source for the bump map assigned in line 5.
  • Then line 9 tells it to blend with the previous color—in this case, the one set via glColor4f() in line 4. For stage 0, GL_PREVIOUS is the same as GL_PRIMARY_COLOR, because there is no previous texture to use.
  • Now set up stage 1 in line 10 and the following line. The argument, tex1, is the color image.
  • Now all we want to do is combine the image with the bump map, which is what line 11 does.

Now all you have to do is call the new method in place of multiTexture() used in the previous exercise. My source texture is selected so that you can easily see the results. When started, the light should move from left to right and illuminate the edges of the land masses, as shown in Figure 6-16.


Figure 6-16. Bump-mapped North America at morning, noon, and evening, respectively

Looks pretty cool, eh? But can we apply this to a spinning sphere? Give it a shot by recycling the solar-system model from the end of the previous chapter. To make the fine detail of the bump map easier to see, the sun is dropped in favor of a somewhat larger image of the earth. So, we'll load the bump map, move the earth to the center of the scene, tweak the lighting, and add the combiner support.

So first off, add a new parameter to the init method of Planet.m for the bump map so that it looks like the following line, and call it where you generate the earth object:

-(id)init:(GLint)stacks slices:(GLint)slices radius:(GLfloat)radius squash:(GLfloat)squash
     textureFile:(NSString *)textureFile bumpmapFile:(NSString *)bumpmapFile

Underneath where you allocate the main image, add the following:

if(bumpmapFile!=nil)
        m_BumpMapInfo=[self loadTexture:bumpmapFile];

And to the header add this:

GLKTextureInfo  *m_BumpMapInfo;

Exchange the initGeometry() method in your solar-system view controller for the one in Listing 6-7.

Listing 6-7. The New initGeometry() Method

-(void)initGeometry
{
    m_Eyeposition[X_VALUE]=0.0;
    m_Eyeposition[Y_VALUE]=0.0;
    m_Eyeposition[Z_VALUE]=3.0;

    m_Earth=[[Planet alloc] init:50 slices:50 radius:1.0 squash:1.0
textureFile:@"earth_light.png" bumpmapFile:@"earth_normal_hc.png"];
    [m_Earth setPositionX:0.0 Y:0.0 Z:0.0];
}

Meanwhile, use Listing 6-8 as the new execute() method to be placed in Planet.m and called from the bump mapping controller's executePlanet() routine. This mainly sets things up for the texture combiners and calls multiTextureBumpMap().

Listing 6-8. The Modified Execute in Planet.m that Calls multiTextureBumpMap() for Bump Mapping

-(bool)execute
{
      glMatrixMode(GL_MODELVIEW);
      glEnable(GL_CULL_FACE);
      glCullFace(GL_BACK);
      glEnable(GL_LIGHTING);

      glFrontFace(GL_CW);

      glEnable(GL_TEXTURE_2D);
      glEnableClientState(GL_VERTEX_ARRAY);
      glVertexPointer(3, GL_FLOAT, 0, m_VertexData);

      glEnableClientState(GL_TEXTURE_COORD_ARRAY);
      glClientActiveTexture(GL_TEXTURE0);

      glBindTexture(GL_TEXTURE_2D, m_TextureInfo.name);

      glTexCoordPointer(2, GL_FLOAT, 0, m_TexCoordsData);

      glClientActiveTexture(GL_TEXTURE1);
      glTexCoordPointer(2, GL_FLOAT,0,m_TexCoordsData);

      glMatrixMode(GL_MODELVIEW);

      glEnableClientState(GL_NORMAL_ARRAY);
      glNormalPointer(GL_FLOAT, 0, m_NormalData);

       glColorPointer(4, GL_UNSIGNED_BYTE, 0, m_ColorData);

       [self multiTextureBumpMap:m_BumpMapInfo.name tex1:m_TextureInfo.name];

       glDrawArrays(GL_TRIANGLE_STRIP, 0, (m_Slices+1)*2*(m_Stacks-1)+2);

     return true;
}

Make sure to copy over multiTextureBumpMap() from the previous exercise to Planet.m.

Now go to where you initialize the lights in your solar-system controller, and comment out the call to create the specular material. Bump mapping and specular reflections don't get along too well.

And in your solar-system's controller, replace its current execute() and executePlanet() methods with Listing 6-9. This dumps the sun, moves the earth into the center of things, and places the main light off to the left.

Listing 6-9. The New Execute Routine that Places the Earth in the Center

-(void)execute
{
       GLfloat posFill1[]={-8.0,0.0,5.0,1.0};
       GLfloat cyan[]={0.0,1.0,1.0,1.0};
       static GLfloat angle=0.0;
       GLfloat orbitalIncrement=.5;
       GLfloat sunPos[4]={0.0,0.0,0.0,1.0};

       glLightfv(SS_FILLLIGHT1,GL_POSITION,posFill1);

       glEnable(GL_DEPTH_TEST);

       glClearColor(0.0, 0.25f, 0.35f, 1.0);
       glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

       glPushMatrix();

       glTranslatef(-m_Eyeposition[X_VALUE], -m_Eyeposition[Y_VALUE],
                    -m_Eyeposition[Z_VALUE]);
       glLightfv(SS_SUNLIGHT,GL_POSITION,sunPos);

       glEnable(SS_FILLLIGHT1);
       glDisable(SS_FILLLIGHT2);

       glPushMatrix();

       angle+=orbitalIncrement;

        [self executePlanet:m_Earth];

       glPopMatrix();

       glPopMatrix();
}
-(void)executePlanet:(Planet *)planet
{
       GLfloat posX, posY, posZ;
       static GLfloat angle=0.0;

       glPushMatrix();

       [planet getPositionX:&posX Y:&posY Z:&posZ];

       glTranslatef(posX,posY,posZ);

       glRotatef(angle,0.0,1.0,0.0);

       [planet execute];

       glPopMatrix();

       angle+=.4;
}

If you now see something like Figure 6-17, you may officially pat yourself on the back.


Figure 6-17. The bumpy Earth

OK, now for an experiment. Move the light's position so that it comes in from the right instead of the left. Figure 6-18 is the unexpected result. What's going on here? Now the mountains look like valleys.


Figure 6-18. Huh?

What's happening is that we are going where no combiner has gone before. By sticking in our own lighting, the effect of the simulated lighting as provided by the light vector is removed. With our light on the left, it just happens to look good, mainly by luck. Bump mapping here works OK if the lighting of your scene is relatively static; it doesn't like multiple light sources. In fact, the pseudolighting effect specified via the light vector is ignored in favor of the “real” light sources. Furthermore, if you turn off those sources, the light vector ignores any of the shading on the object altogether. In this case, you would see the entire planet lighten up and darken uniformly, because that's what is happening to the texture itself, which is merely a 2D surface. If part of it is lit, all is lit. So, what's a GL nerd to do? Shaders, my friend. Shaders. And that is where OpenGL ES 2 and the iOS 5 extensions come into play; they are covered in Chapter 10.

Summary

In this chapter, you learned about the blending capabilities supplied by OpenGL ES 1. Blending has its own unique language, as expressed through the blending functions and combiners. You've learned about translucency and how and when to apply it. Also covered were some of the neat tricks available using both blending and textures for animation and bump mapping. In the next chapter, I'll start to apply some of these tricks and show others that can make for a more interesting 3D universe.
