Chapter 10

OpenGL ES 2, Shaders, and…

Her angel's face, As the great eye of heaven shined bright, And made a sunshine in the shady place.

—Edmund Spenser

There are two different versions of the OpenGL ES graphics library on your iOS devices. This book has largely dealt with the higher-level one, known as OpenGL ES 1, sometimes referred to as 1.1 or 1.x. The second version is the somewhat confusingly named OpenGL ES 2. The first is by far the easier of the two; it handles much of the 3D mathematics and all of the lighting, coloring, and shading on your behalf. ES 2 eschews all of those niceties and is referred to as the “programmable pipeline” version, as opposed to ES 1's “fixed function” design. The fixed pipeline is generally sneered at by the true pixel-jockeys who prefer more control over their imagery, usually for immersive 3D game environments where every little visual detail counts. For them, OpenGL ES 2 was released.

In this chapter, we'll touch ever so briefly on shaders, just enough to give you a general feel for them. Afterward, I'll go into some more of the GLKit goodness not covered in previous chapters.

Version 1 is relatively close to the desktop variety of OpenGL, making porting applications, particularly vintage ones, a little less painful than having a badger gnaw your face off. The things that were left out were done so to keep the footprint small on limited-resource devices and to ensure performance was as good as could be.

Version 2 defenestrated compatibility altogether and concentrated on performance and flexibility-oriented features aimed primarily at entertainment software. Among the things left out were glRotatef(), glTranslatef(), the matrix stack operations, and so on. But what we got in return are some delightful little morsels, such as a programmable pipeline via the use of shaders. And the lost transformation functions have been replaced by the new iOS 5 GLKit math libraries, so the learning curve is just a little less steep now.
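
For example, where version 1 let you call glTranslatef() and glRotatef() directly, under version 2 you build the equivalent Modelview matrix yourself with the GLKit math calls and then hand it to your shader as a uniform. A minimal sketch (angleInDegrees stands in for whatever rotation your app happens to be tracking):

    //OpenGL ES 1 style:
    //    glTranslatef(0.0f, 0.0f, -3.0f);
    //    glRotatef(angleInDegrees, 0.0f, 1.0f, 0.0f);

    //roughly the same transform built with the iOS 5 GLKit math library:
    GLKMatrix4 modelview = GLKMatrix4MakeTranslation(0.0f, 0.0f, -3.0f);
    modelview = GLKMatrix4Rotate(modelview, GLKMathDegreesToRadians(angleInDegrees),
        0.0f, 1.0f, 0.0f);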

As is to be expected, version 2 is way too large to cover in a single chapter, so what follows is a general overview that should give you a good feel for the topic and whether it is something you'd want to tackle at some point.

Shaded Pipelines

If you have even a passing familiarity with OpenGL or Direct3D, the mere mention of the term shaders might give you the shivers. They seem like mysterious talismans belonging to the innermost circles of the graphics priesthood.

Not so.

The “fixed function” pipeline of version 1 refers to the lighting and coloring of the vertices and fragments. For example, you are permitted to have up to eight lights, and each light has various properties. The lights illuminate surfaces, each with their own properties called materials. Combining the two, we get a fairly nice, but constrained, general-purpose lighting model. But what if you wanted to do something different? What if you wanted to have a surface fade to transparency depending on its relative illumination? The darker, the more transparent? What if you wanted to accurately model shadows of, say, the rings of Saturn, thrown upon its cloud decks, or the pale luminescent light you get right before sunrise? All of those would be next to impossible given the limitations of the fixed-function model, especially the final one, because the lighting equations are quite complicated once you start taking into consideration the effect of moisture in the atmosphere, backscattering, and so on. Well, a programmable pipeline that lets you model those precise equations without using any tricks such as texture combiners is exactly what version 2 gives us.

Back to Where We Started

Let's go back to the very first example given in Chapter 1, the two cubes. You have already tweaked one of the shaders and lived to tell about it, but now we can go a little deeper.

The pipeline architecture of ES 2 permits you to have different access points in the geometric processing, as shown in Figure 10-1. The first hands you each vertex along with the various attributes (for example, xyz coordinates, colors, and opacity). This is called the vertex shader, and you have already played with one in the first chapter. At this point, it is up to you to determine what this vertex should look like and how it should be rendered with the supplied attributes and the state of your 3D world. When done, the vertex is handed back to the hardware, rasterized with the data you calculated, and passed on as 2D fragments to your fragment shader. It is here where you can combine the textures as needed, do any final processing, and hand it back to eventually be rendered in the frame buffer.

If this sounds like a lot of work for each fragment of each object in your scene roaring along at 60 fps, you are right. But fundamentally, shaders are small programs that are actually loaded and run on the graphics hardware itself and, as a result, are very, very fast.


Figure 10-1. Overview of OpenGL ES 2 architecture

Shader Structure

Both vertex and fragment shaders are similar in structure and look a little like a small C program. The entry point is always called main() as in C (and Objective-C), while the syntax is likewise very C-ish.

The shader language, called GLSL (not to be confused with its Direct3D counterpart, HLSL), contains a rich set of built-in functions that belong to one of three main categories:

  • Math operations oriented toward graphics processing such as matrix, vector, trig, derivative, and logic functions
  • Texture sampling
  • Small helper utilities such as modulo, comparisons, and valuators

Values are passed to and from shaders in the following types:

  • Uniforms, which are values passed from the calling program. These might include the matrices for transformations or projection. They are available in both the vertex and fragment shaders, and they must be declared as the same type in each place.
  • Varying variables (yes, it is a dumb-sounding name), which are variables defined in the vertex shader that are passed on to the fragment shader, interpolated across the face of each triangle along the way.

Variables may be defined as the usual numeric primitives or as graphics-oriented types based on vectors and matrices, as shown in Table 10-1.

Table 10-1. Basic GLSL Variable Types

  • float, int, bool: The usual scalar primitives
  • vec2, vec3, vec4: Floating-point vectors of two, three, or four components
  • ivec2, ivec3, ivec4: Integer vectors
  • bvec2, bvec3, bvec4: Boolean vectors
  • mat2, mat3, mat4: Floating-point square matrices (2×2, 3×3, and 4×4)
  • sampler2D, samplerCube: Handles used to access 2D textures and cube-map textures

In addition to these types, you can supply qualifiers to define the precision of int- and float-based types. These can be highp (24 bit), mediump (16 bit), or lowp (10 bit). In the vertex shader, highp is the default; in the fragment shader there is no default float precision at all, which is why the fragment shaders later in this chapter begin with a precision mediump float; statement. All transformations should be done in highp, while colors need only mediump. (It beats me why there is no precision qualifier for bools, though.)

Any basic type can be declared as a constant variable, such as const float x = 1.0;.

Structures are also allowed and look just like their C counterparts.
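
Putting those pieces together, the global declarations of a vertex shader might look something like the following sketch. The uniform, attribute, and varying names match the ones used later in this chapter, while the constant and the LightInfo struct are made up purely for illustration:

    precision mediump float;                    //default float precision for this shader

    const float kPi = 3.14159265;               //a constant scalar

    uniform mat4 modelViewProjectionMatrix;     //passed in from the calling program
    attribute vec4 position;                    //one entry per vertex
    varying lowp vec4 colorVarying;             //handed off to the fragment shader

    struct LightInfo                            //structures look just like their C counterparts
    {
        vec3 position;
        vec4 color;
    };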

Restrictions

Since shaders reside on the GPU, they naturally have a number of restrictions limiting their complexity. They may be limited by instruction count, the number of uniforms permitted (typically 128), the number of temporary variables, and the depth of loop nesting. Some of these limits can be queried at runtime with glGetIntegerv() (GL_MAX_VERTEX_UNIFORM_VECTORS and friends), but others, such as the instruction count, cannot, so the safest policy is simply to keep your shaders as small as possible.

There are also differences between what the vertex and fragment shaders support. For example, highp support is optional in the fragment shader, whereas it is mandatory in the vertex shader. Bah.
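
For the limits that are queryable, a quick check such as the following sketch can tell you how much headroom a particular device offers (the log wording is mine):

    GLint maxVertexUniforms, maxFragmentUniforms, maxVaryings;

    //these enums are part of core OpenGL ES 2.0
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_VECTORS, &maxVertexUniforms);
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_VECTORS, &maxFragmentUniforms);
    glGetIntegerv(GL_MAX_VARYING_VECTORS, &maxVaryings);

    NSLog(@"uniform vectors: vertex=%d, fragment=%d; varying vectors=%d",
          maxVertexUniforms, maxFragmentUniforms, maxVaryings);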

Back to the Spinning Cubes

So, now let's jump back to the original example of the dueling cubes and break down how a basic OpenGL ES 2 program is structured. As you'll see, the process of generating a shader is not unlike generating most any other application. You have your basic compile, link, and load sequence. Listing 10-1 demonstrates the first part of that process, compiling the thing. In Apple's example, all of these steps are placed in a view controller, but they can go anywhere.

Listing 10-1. Compiling a Shader

- (BOOL)compileShader:(GLuint *)shader                                              //1
        type:(GLenum)type file:(NSString *)file
{
    GLint status;
    const GLchar *source;
    
    source = (GLchar *)[[NSString stringWithContentsOfFile:file
                        encoding:NSUTF8StringEncoding error:nil] UTF8String];
    
    if (!source)
    {
        NSLog(@"Failed to load vertex shader");
        return NO;
    }
    
    *shader = glCreateShader(type);                                                 //2
    
    glShaderSource(*shader, 1, &source, NULL);                                      //3
    glCompileShader(*shader);                                                       //4
    
#if defined(DEBUG)
    
    GLint logLength;
    glGetShaderiv(*shader, GL_INFO_LOG_LENGTH, &logLength);                         //5
    
    if (logLength > 0)
    {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetShaderInfoLog(*shader, logLength, &logLength, log);                    //6
        NSLog(@"Shader compile log: %s", log);
        free(log);
    }
    
#endif
    
    glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);                             //7
    
    if (status == 0)
    {
        glDeleteShader(*shader);                                                    //8
        return NO;
    }
    
    return YES;
}

When you get away from all of the error handling code, the process boils down to creating a shader, passing in the source, and compiling.

  • In the argument list of line 1, an address is passed to receive the newly generated shader's handle. A type value is also supplied, which can be either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER. And finally, a file name is specified. You don't have to supply the shader from a file; it is just as common to specify the shader source as a static string defined inside the body of the code.
  • In line 2, glCreateShader() generates an empty shader object and returns its handle.
  • Line 3 hands the shader's source text to the newly created object; a shader object's real job is simply to hold the text strings that define the shader.
  • Next we compile the thing in line 4.
  • Now we should check the compilation status. Since there is no means to debug shaders once they are on the GPU, a nice error-management system is provided that can send fairly detailed strings back to the calling program. Line 5 gets the length of the log string, while line 6 gets the actual contents. For example, in Shader.vsh, I supplied a variable that was never defined and received the following:
    ERROR: 0:19: Use of undeclared identifier 'normalX'
  • But instead of parsing a string to determine what to do, you can also get a numerical status: GL_TRUE if the compile succeeded or GL_FALSE otherwise, as shown in line 7. If it failed, delete the shader, as in line 8.

The next step in the process is to link the program, as shown in Listing 10-2. glLinkProgram() is the only call of any significance, with the rest being error handling.

Listing 10-2. Linking the Newly Created Shader Program

- (BOOL)linkProgram:(GLuint)prog
{
    GLint status;
    glLinkProgram(prog);                         
    
#if defined(DEBUG)
    GLint logLength;
    
    glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
    
    if (logLength > 0)
    {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetProgramInfoLog(prog, logLength, &logLength, log);
        NSLog(@"Program link log: %s", log);
        free(log);
    }
#endif
    
    glGetProgramiv(prog, GL_LINK_STATUS, &status);
    
    if (status == 0)
    {
        return NO;
    }
    
    return YES;
}

After linking, it is customary to “validate” the program. Validation is a way for the OpenGL implementors to return information about any aspects of your code, such as recommended improvements. You would use this primarily during the development process, as shown in Listing 10-3. And as before, it is largely error handling.

Listing 10-3. Program Validation

- (BOOL)validateProgram:(GLuint)prog
{
    GLint logLength, status;
    
    glValidateProgram(prog);                        
    glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
    
    if (logLength > 0)
    {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetProgramInfoLog(prog, logLength, &logLength, log);
        NSLog(@"Program validate log: %s", log);
        free(log);
    }
    
    glGetProgramiv(prog, GL_VALIDATE_STATUS, &status);        
    
    if (status == 0)
    {
        return NO;
    }
    
    return YES;
}

The final routine, loadShaders(), as shown in Listing 10-4, ties together the three routines from earlier and binds our attributes and parameters to the program. That way, we can pass an array of vertex information or parameters and specify their names on both sides of the fence.

Listing 10-4. Loading the Shaders and Resolving Parameters

- (BOOL)loadShaders                        
{
    GLuint vertShader, fragShader;
    NSString *vertShaderPathname, *fragShaderPathname;

    _program = glCreateProgram();                                                   //1
    
    vertShaderPathname = [[NSBundle mainBundle]
                          pathForResource:@"Shader" ofType:@"vsh"];
    
    if (![self compileShader:&vertShader                                            //2
                        type:GL_VERTEX_SHADER file:vertShaderPathname])
    {
        NSLog(@"Failed to compile vertex shader");
        return NO;
    }
    
    fragShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader"  
    ofType:@"fsh"];
    
    if (![self compileShader:&fragShader
                        type:GL_FRAGMENT_SHADER file:fragShaderPathname])
    {
        NSLog(@"Failed to compile fragment shader");
        return NO;
    }
    
    glAttachShader(_program, vertShader);                                           //3
    glAttachShader(_program, fragShader);
    
    glBindAttribLocation(_program, ATTRIB_VERTEX, "position");                      //4
    glBindAttribLocation(_program, ATTRIB_NORMAL, "normal");
    
    if (![self linkProgram:_program])                                               //5
    {
        NSLog(@"Failed to link program: %d", _program);
        
        if (vertShader)
        {
            glDeleteShader(vertShader);
            vertShader = 0;
        }
        
        if (fragShader)
        {
            glDeleteShader(fragShader);
            fragShader = 0;
        }
        
        if (_program)
        {
            glDeleteProgram(_program);
            _program = 0;
        }
        
        return NO;
    }
        
    uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX] =                                  //6
        glGetUniformLocation(_program, "modelViewProjectionMatrix");
    
    uniforms[UNIFORM_NORMAL_MATRIX] =                 
        glGetUniformLocation(_program, "normalMatrix");
        
    if (vertShader)                                                                 //7
    {
        glDetachShader(_program, vertShader);
        glDeleteShader(vertShader);
    }
    
    if (fragShader)
    {
        glDetachShader(_program, fragShader);
        glDeleteShader(fragShader);
    }
    
    return YES;
}

Here's what's happening:

  • Line 1 generates a program handle and creates an empty program object. You keep this handle around and use it to specify which program you want to use for a specific piece of geometry, because you can have multiple programs and swap them back and forth as needed.
  • Now the shaders are compiled in lines 2ff.
  • Lines 3f attach each shader to the new program object. Each program object must have exactly one vertex and one fragment shader.
  • In lines 4ff, we bind whatever attributes we want to the program. In the actual vertex shader code, you can see attributes by those names defined for use:
    attribute vec4 position;
    attribute vec3 normal;

    The names can be nearly anything you want; there is nothing special about the use of position or normal.

  • Line 5 links both shaders.
  • Besides binding the attributes to named entities in the shaders, you can also do the same with uniforms, as demonstrated in line 6. Remember that uniforms are merely values passed from the calling program. (They differ from attributes in that attributes are mapped one-to-one with each vertex.) In this case, we are supplying two matrices and naming them modelViewProjectionMatrix and normalMatrix. Looking again in the vertex shader, you can see the following:
    uniform mat4 modelViewProjectionMatrix;
    uniform mat3 normalMatrix;
  • Lines 7ff address a memory optimization quirk. Once linked, the shader code is copied into the program, so the shader objects as we have them are no longer needed. Since they are reference counted, glDetachShader() decreases the count by one, and, of course, when it reaches 0, they can be safely deleted.
  • As a side effect, if you change a shader in any way, the program will have to be relinked before the changes take effect. And because you may have to relink things, the driver may hold onto some cached information to use later. Otherwise, the detach/delete process can help the driver reuse that cached memory.

As you can see, the actual calls required are pretty minimal, but Apple's demo includes all of the error handling as well, which is why it was retained.

Now with that out of the way, we can look at the two shaders. Listing 10-5 is the vertex shader, Shader.vsh. Note that shader pairs share the same prefix, with the vertex shader having a suffix of .vsh while the fragment shader uses .fsh.

Listing 10-5. The Demo's Vertex Shader

attribute vec4 position;                                                            //1
attribute vec3 normal;

varying lowp vec4 colorVarying;                                                     //2

uniform mat4 modelViewProjectionMatrix;                                             //3
uniform mat3 normalMatrix;

void main()
{
    vec3 eyeNormal = normalize(normalMatrix * normal);                              //4
    vec3 lightPosition = vec3(0.0, 0.0, 1.0);                                       //5
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);                                   //6
    
    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));              //7
                 
    colorVarying = diffuseColor * nDotVP;                                           //8
    
    gl_Position = modelViewProjectionMatrix * position;                             //9
}

Here's a closer look:

  • Lines 1f declare the attributes that we specified in the calling code. Remember that attributes are arrays of data mapping directly to each vertex and are available only in the vertex shader.
  • In line 2, a varying vector variable is declared. This will be used to pass color information down to the fragment shader.
  • Lines 3f declare two uniforms that were originally specified in loadShaders() earlier.
  • In Line 4, the normal is multiplied by the normalMatrix. You can't use the Modelview matrix in this case, because normals do not need to be translated, and any scaling done in the Modelview would distort the normal. As it turns out, you can use the inverse and transposed Modelview matrix for the normals. With that in hand, the result is normalized.
  • Line 5 supplies the position of the light, while line 6 supplies the default color. Normally you wouldn't embed that data inside a shader, but it is likely done this way just as a convenience.
  • Now, in line 7, the dot product of the normal (now called eyeNormal) and the normalized direction of the light is taken; for unit vectors, that dot product is the cosine of the angle between the two. The max() function ensures that the return value is clamped to >=0 to handle faces pointing away from the light.
  • By simply multiplying the dot product by the diffuse color, as shown in line 8, we get the luminosity of a face relative to the local position of the light. The closer the normal aims toward the light, the brighter the face should be. As it aims away from the light, it gets darker, finally going to 0.
  • gl_Position is a built-in output variable in GLSL and is used to pass the transformed vertex's position back to the driver.

The fragment shader in this example is the most basic there is. It simply passes the color info from the vertex shader through, untouched. gl_FragColor is another built-in output variable, and it is here where any final calculations would be made, as shown in Listing 10-6.

Listing 10-6. The Simplest Fragment Shader

varying lowp vec4 colorVarying;

void main()
{
    gl_FragColor = colorVarying;
}

Now we're ready to use our shaders, which turns out to be surprisingly straightforward. First, glUseProgram() sets the current program, followed by the glUniform* functions that pass values from your app to the shaders on the GPU. The attributes are usually supplied at the same time as the geometry, via calls such as glVertexAttribPointer().
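
At draw time, that typically boils down to something like the following sketch. The uniform handles are the ones resolved in Listing 10-4; _modelViewProjectionMatrix and _normalMatrix stand in for whatever GLKit matrices your app has computed for the current frame:

    glUseProgram(_program);

    //hand the freshly computed matrices to the uniforms resolved in loadShaders()
    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, GL_FALSE,
        _modelViewProjectionMatrix.m);
    glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, GL_FALSE,
        _normalMatrix.m);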

One additional note regarding this example is to be found in its setupGL() method. This was briefly touched upon in Chapter 1, but now is a good time to take a little closer look at how the data is actually passed to the GPU in an OpenGL ES 2 program. Vertex array objects (VAOs), not to be confused with vertex buffer objects, represent a collection of information that describes a specific state of your scene. As with other objects, creating and using VAOs follows the same path: generate a unique ID, bind it, fill it up, and then unbind it until needed. Many VAOs can be generated, each hauling about the pointers to the geometry and attributes for different aspects of your world. In the cube example, consider Listing 10-7. After the VAO ID is generated and bound as the current VAO, a vertex buffer object is created for the interleaved geometric data. Afterward, the VAO is told how the VBO data is organized; in this case, just the positions and normals are addressed.

Listing 10-7. Creating the Vertex Array Object

    glGenVertexArraysOES (1, &_vertexArray);
    glBindVertexArrayOES(_vertexArray);
    
    glGenBuffers(1, &_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData), gCubeVertexData,
    GL_STATIC_DRAW);
    
    //each interleaved vertex is 24 bytes: 3 floats of position followed by 3 floats of normal
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24,
    BUFFER_OFFSET(0));                          //positions start at offset 0
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24,
    BUFFER_OFFSET(12));                         //normals follow 12 bytes in
    
    glBindVertexArrayOES(0);

When it comes time to draw, the VAO handle is bound as the current one, and the normal glDrawArrays() routine is called.
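
For the cube example, that amounts to little more than the following; the count of 36 is the cube's 6 faces × 2 triangles × 3 vertices, so adjust it for your own geometry:

    glBindVertexArrayOES(_vertexArray);
    glUseProgram(_program);
    //...set the uniforms as shown earlier...
    glDrawArrays(GL_TRIANGLES, 0, 36);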

Earth at Night

Let's start with our earth model and see how shaders can be used to make it more interesting. You're familiar with the image used for the earth's surface, as shown in Figure 10-2 (left), but you may have also seen a similar image of the earth at night, as shown in Figure 10-2 (right). What would be neat is if we could show the night texture on the dark side of the earth, instead of just a dark version of the regular texture map.


Figure 10-2. The daytime earth (left) vs. the nighttime earth (right)

Under OpenGL ES 1.1, this would be very tricky to accomplish, if it could be done at all. The algorithm itself is relatively simple: render two spheres of exactly the same dimensions, one with the night image and the other with the day image. Then vary the alpha channel of the day-side texture based on the illumination: where the illumination reaches 0, the day side becomes completely transparent, and the night portion shows through. Under OpenGL ES 2, you can code the shaders to match that algorithm almost exactly.

So, I started with the cube template from Apple, dumped the cube-specific stuff, and added the Planet.mm and Planet.h files. setupGL() was changed to Listing 10-8. Notice the loading of the two textures and the two shader programs.

Listing 10-8. Setting Up to Show Earth at Night

- (void)setupGL
{   
    int planetSize=20;
    
    [EAGLContext setCurrentContext:self.context];
    
    [self loadShaders:&m_NightsideProgram shaderName:@"nightside"];
    [self loadShaders:&m_DaysideProgram shaderName:@"dayside"];          
    
    float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
    m_ProjectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f),
    aspect, 0.1f, 100.0f);
    
    glEnable(GL_DEPTH_TEST);
    
    m_EyePosition=GLKVector3Make(0.0,0.0,65.0);

    m_WorldModelViewMatrix=GLKMatrix4MakeTranslation(-m_EyePosition.x,-m_EyePosition.y,-
    m_EyePosition.z);

    m_Sphere=[[Planet alloc] init:planetSize slices:planetSize radius:10.0f squash:1.0f
    textureFile:NULL];    
    [m_Sphere setPositionX:0.0 Y:0.0 Z:0.0];
    
    m_EarthDayTexture=[self loadTexture:@"earth_light.png"];
    m_EarthNightTexture=[self loadTexture:@"earth_night.png"];

    m_LightPosition=GLKVector3Make(100.0, 10,100.0);   //behind the earth
                                                         
}

In loadShaders() I merely added one more attribute, namely, texCoord, the texture coordinates. These are picked up in the vertex shader and passed through to the fragment shader:

    glBindAttribLocation(*program, ATTRIB_VERTEX, "position");
    glBindAttribLocation(*program, ATTRIB_NORMAL, "normal");
    glBindAttribLocation(*program, GLKVertexAttribTexCoord0, "texCoord");

I also pass the light's position as a uniform, instead of hard-coding it in the vertex shader. This is set up in a couple of steps:

  • First, add it to the shader: uniform vec3 lightPosition;.
  • Then in loadShaders(), you fetch its “location” using glGetUniformLocation(), as sketched just after this list. The location is merely a unique ID for this session that is then used when setting or getting data from the shader.
  • The light's position can then be set by using this:
glUniform3fv(uniforms[UNIFORM_LIGHT_POSITION],1,m_LightPosition.v);
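
The location fetch itself is a one-liner; assuming a UNIFORM_LIGHT_POSITION slot in the uniforms array (the same slot used in the call above), it would look something like this:

    uniforms[UNIFORM_LIGHT_POSITION] = glGetUniformLocation(*program, "lightPosition");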

Then change the call to add two parameters so that it can be called with different shader names, and add a pointer to a program handle. And remember to change the code to support the parameters instead of the temp variables:

- (BOOL)loadShaders:(GLuint *)program  shaderName:(NSString *)shaderName

Now in Listing 10-9, both sides of the earth are drawn, with the night side going first, while the daylight side goes second. The programs are swapped as needed.

Listing 10-9. The drawInRect() Method to Handle This Situation

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    GLfloat gray=0.0;
    static int frame=0;

    glClearColor(gray,gray,gray, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    
    //nightside
    
    [self useProgram:m_NightsideProgram];                    

    [m_Sphere setBlendMode:PLANET_BLEND_MODE_SOLID];
    [m_Sphere execute:m_EarthNightTexture.name];
    
    //dayside
    
    [self useProgram:m_DaysideProgram];
    
    [m_Sphere setBlendMode:PLANET_BLEND_MODE_FADE];
    [m_Sphere execute:m_EarthDayTexture.name];

    //atmosphere
    
    glCullFace(GL_FRONT);
    glEnable(GL_CULL_FACE);
    glFrontFace(GL_CW);
    
    frame++;
}

On the day side of the earth, I use the program m_DaysideProgram, while on the night side, I use another one, called m_NightsideProgram. Both use the identical vertex shader, as shown in Listing 10-10.

Listing 10-10. The Vertex Shader for Both the Day and Night Sides of the Earth

attribute vec4 position;                        
attribute vec3 normal;
attribute vec2 texCoord;                                                            //1

varying vec2 v_texCoord;  

varying lowp vec4 colorVarying;

uniform mat4 modelViewProjectionMatrix;                
uniform mat3 normalMatrix;
uniform vec3 lightPosition;                                                         //2

void main()
{
    v_texCoord=texCoord;                                                            //3
    
    vec3 eyeNormal = normalize(normalMatrix * normal);            
    vec4 diffuseColor = vec4(1.0, 1.0, 1.0, 1.0);
        
    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));
                 
    colorVarying = diffuseColor * nDotVP;
    
    gl_Position = modelViewProjectionMatrix * position;
}

This is almost identical to Apple's template, but we've added a couple of things:

  • Line 1 serves to pass an additional attribute, namely, the texture coordinates for each vertex. This is then passed straight through to the fragment shader via line 3 using the v_texCoord varying.
  • Line 2 declares the new uniform, which you may recall from the view controller's code, holding the position of the light.

Listing 10-11 shows the fragment shader for the daylight side of the earth, while Listing 10-12 does the same but for the night side.

Listing 10-11. The Fragment Shader for the Daylight Side of the Earth

varying lowp vec4 colorVarying;                                                     //1

precision mediump float;                            
varying vec2 v_texCoord;                                                            //2        
uniform sampler2D s_texture;                                                        //3

void main()                                         
{       
    gl_FragColor = texture2D( s_texture, v_texCoord )*colorVarying;                 //4
}

You can see how simple these are for such beautiful results:

  • Line 1 picks up the varying variable, colorVarying, from the vertex shader.
  • Line 2 does the same for the texture coordinates, followed by line 3, which declares the texture sampler. A sampler2D, as shown in line 3, is a special uniform type that identifies which texture unit is being used.
  • Finally, in line 4, the built-in function texture2D extracts the value from the texture referenced by s_texture at the coordinates of v_texCoord. That value is then multiplied by colorVarying, the “real” color of the fragment. The smaller colorVarying is, the darker the fragment becomes.

Listing 10-12 shows how to do the night side of the earth.

Listing 10-12. Rendering the Night Side

varying lowp vec4 colorVarying;                    

precision mediump float;                                            
varying vec2 v_texCoord;                                          
uniform sampler2D s_texture;                     

void main()                                         
{       
    vec4 newColor;
  
    newColor=1.0-colorVarying;                                                      //1
        
    gl_FragColor = texture2D( s_texture, v_texCoord )*newColor;                     //2
}

Here in line 1, we're merely taking the opposite of what was in the day-side shader. As the color increases because of the sun, the dark-side texture fades to nothing. This might be overkill, because the night-side texture would be washed out by the other, but the lights are just a little too visible after the terminator for my taste.

There's one final thing to do, and that is to modify your planet object so that it can be drawn with a vertex array object. Yes, it's yet another interminable listing, as shown in Listing 10-13. The data must first be packed into the more efficient interleaved form referenced in Chapter 9. Afterward, a VAO is generated as a wrapper of sorts.

Listing 10-13. Creating a VAO for the Planet

-(void)createInterleavedData
{
    int i;
    GLfloat *vtxPtr;
    GLfloat *norPtr;
    GLfloat *colPtr;
    GLfloat *textPtr;
    int xyzSize;
    int nxyzSize;
    int rgbaSize;
    int textSize;
    
    struct VAOInterleaved *startData;                    
    
    int structSize=sizeof(struct VAOInterleaved);
    long allocSize=structSize*m_NumVertices;
    
    m_InterleavedData=(struct VAOInterleaved *)malloc(allocSize);                   //1
    
    startData=m_InterleavedData;
    
    vtxPtr=m_VertexData;                        
    norPtr=m_NormalData;
    colPtr=m_ColorData;
    textPtr=m_TexCoordsData;

    xyzSize=sizeof(GLfloat)*NUM_XYZ_ELS;
    nxyzSize=sizeof(GLfloat)*NUM_NXYZ_ELS;
    rgbaSize=sizeof(GLfloat)*NUM_RGBA_ELS;
    textSize=sizeof(GLfloat)*NUM_ST_ELS;
    
    for(i=0;i<m_NumVertices;i++)                                                    //2
    {       
        memcpy(&startData->xyz,vtxPtr,xyzSize);     //geometry
        memcpy(&startData->nxyz,norPtr,nxyzSize);   //normals
        memcpy(&startData->rgba,colPtr,rgbaSize);   //colors
        memcpy(&startData->st,textPtr,textSize);   //texture coords

        startData++;
        
        vtxPtr+=NUM_XYZ_ELS;
        norPtr+=NUM_NXYZ_ELS;
        colPtr+=NUM_RGBA_ELS;
        textPtr+=NUM_ST_ELS;
    }
}


-(void)createVAO
{
    GLuint numBytesPerXYZ,numBytesPerNXYZ,numBytesPerRGBA;
    GLuint structSize=sizeof(struct VAOInterleaved);
    
    [self createInterleavedData];
    
    //note that the context is set in the parent object

    glGenVertexArraysOES(1, &m_VertexArrayName);
    glBindVertexArrayOES(m_VertexArrayName);
    
    numBytesPerXYZ=sizeof(GLfloat)*NUM_XYZ_ELS;
    numBytesPerNXYZ=sizeof(GLfloat)*NUM_NXYZ_ELS;
    numBytesPerRGBA=sizeof(GLfloat)*NUM_RGBA_ELS;

    glGenBuffers(1, &m_VertexBufferName);
    glBindBuffer(GL_ARRAY_BUFFER, m_VertexBufferName);
    glBufferData(GL_ARRAY_BUFFER, sizeof(struct VAOInterleaved)*m_NumVertices,
        m_InterleavedData, GL_STATIC_DRAW);

    //don't forget the positions, which sit at the start of each interleaved vertex
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, NUM_XYZ_ELS, GL_FLOAT, GL_FALSE,
        structSize, BUFFER_OFFSET(0));

    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, NUM_NXYZ_ELS, GL_FLOAT, GL_FALSE,
        structSize, BUFFER_OFFSET(numBytesPerXYZ));
                      
    glEnableVertexAttribArray(GLKVertexAttribColor);                 
    glVertexAttribPointer(GLKVertexAttribColor, NUM_RGBA_ELS, GL_FLOAT,
        GL_FALSE, structSize, BUFFER_OFFSET(numBytesPerNXYZ+numBytesPerXYZ));

    glEnableVertexAttribArray(GLKVertexAttribTexCoord0);      
           
    glVertexAttribPointer(GLKVertexAttribTexCoord0,NUM_ST_ELS, GL_FLOAT,  GL_FALSE,  
        structSize,  
        BUFFER_OFFSET(numBytesPerNXYZ+numBytesPerXYZ+numBytesPerRGBA));
}

  • In line 1, we allocate an array of structures to carry each of the components. Here the structure is defined in Planet.h:
    struct VAOInterleaved
            {
                GLfloat xyz[NUM_XYZ_ELS];
                GLfloat nxyz[NUM_NXYZ_ELS];
                GLfloat rgba[NUM_RGBA_ELS];
                GLfloat st[NUM_ST_ELS];
            };
  • Lines 2ff scan through all the data and copy it to the interleaved array.
  • Down in the next method, the VAO is created. Much like the earlier example for the cubes, the only new elements are the addition of the texture coordinates and the RGBA color data to the mix.

Now with that out of the way, check the results in Figure 10-3.


Figure 10-3. Illuminating the darkness one texel at a time

But What About Specular Reflections?

Just as with any other shiny thing (and the earth is shiny in the blue parts), you might expect to see some sort of reflection of the sun in the water. Well, you'd be right. Figure 10-4 shows a real image of the earth, and right in the middle is the reflection of the sun. Let's try it on our own earth.


Figure 10-4. Earth seen from space as it reflects the sun

Naturally we are going to have to write our own specular reflection shader (or in this case, add it to the existing daylight shader).

Swap the old vertex shader for Listing 10-14, and swap the fragment shader for the one in Listing 10-15. Here I precalculate the specular information along with normal diffuse coloring, but the two are kept separate until the fragment shader. The reason is that not all parts of the earth are reflective, so the land masses shouldn't get the specular treatment.

Listing 10-14. Vertex Shader for the Specular Reflection

attribute vec4 position;
attribute vec3 normal;
attribute vec2 texCoord;

varying vec2 v_texCoord;  

varying lowp vec4 colorVarying;
varying lowp vec4 specularColorVarying;                                             //1

uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;
uniform vec3 eyePosition;

void main()
{
    float shininess=100.0;
    float balance=.75;

    vec3 normalDirection = normalize(normalMatrix * normal);                       //2
    vec3 eyeNormal = normalize(eyePosition);

    vec3 lightDirection;

    float specular=0.0;
    
    v_texCoord=texCoord;
    
    vec4 diffuseColor = vec4(1.0, 1.0, 1.0, 1.0);
        
    lightDirection=normalize(lightPosition);
    
    float nDotVP = max(0.0, dot(normalDirection,lightDirection));

    float nDotVPReflection = dot(reflect(-lightDirection,normalDirection),eyeNormal);   //3

    specular = pow(max(0.0,nDotVPReflection),shininess)*.75;                        //4
    specularColorVarying=vec4(specular,specular,specular,0.0);                      //5        
                            
    colorVarying = diffuseColor * nDotVP;
    
    gl_Position = modelViewProjectionMatrix * position;
}

Here's what is going on:

  • Line 1 declares a varying variable to hand the specular illumination off to the fragment shader.
  • Next, in line 2, we get a normalized normal transformed by the normalmatrix (yup, still sounds funny), which is needed to get the proper specular value.
  • We now need to get the dot product of the reflection of the light and the normalized normal, multiplied normally by the normalmatrix in a normal fashion. See line 3. Notice the use of the reflect() function, which is another one of the niceties in the shader language. reflect() generates a reflected vector based on the negative light direction and the local normal. That is then dotted with the eyeNormal.
  • In line 4 that dot value is used to generate the actual specular component. You will also see our old friend shininess, and just as in version 1 of OpenGL ES, the higher the value, the narrower the reflection.
  • Since we can consider the sun's color just to be white, the specular color in line 5 can be made to have all its components set to the same value.

Now the fragment shader can be used to refine things even further, as shown in Listing 10-15.

Listing 10-15. The Fragment Shader That Handles the Specular Reflection

precision mediump float;                            

varying lowp vec4 colorVarying;
varying lowp vec4 specularColorVarying;                                             //1

uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;

varying vec2 v_texCoord;                            
uniform sampler2D s_texture;

void main()                                         
{       
    vec4 finalSpecular=vec4(0,0,0,1);
    vec4 surfaceColor;
    float halfBlue;
    
    surfaceColor=texture2D( s_texture, v_texCoord );

    halfBlue=0.5*surfaceColor[2];                                                   //2    
    
    if(halfBlue>1.0)                                                                //3
        halfBlue=1.0;
    
    if((surfaceColor[0]<halfBlue) && (surfaceColor[1]<halfBlue))                    //4
        finalSpecular=specularColorVarying;
        
    gl_FragColor = surfaceColor*colorVarying+colorVarying*finalSpecular;            //5
}

The main task here is to determine which fragments represent sea and which do not. It's pretty easy: the blue stuff is water (powerful wet stuff, that water!), and everything that isn't, isn't.

  • First in line 1, we pick up the specularColorVarying variable.
  • In line 2, we take half of the blue component, clamping it in line 3, since no color can actually go above full intensity.
  • Line 4 does the filtering. If the red and green components were both less than half that of the blue, then it's a pretty safe bet that we can draw the specular glint over the water, instead of some place like Chad.
  • The specular piece is now added to the fragment color in the last line, after first multiplying it with the colorVarying, because that will modulate it with everything else.

Figure 10-5 shows the results.


Figure 10-5. A close-up on the right of the earth/water interface

Bring in the Clouds

So, it certainly seems as if something else is missing. Oh, yeah. Those cloud things. Well, we're in luck, because shaders can very easily manage that as well. In the downloadable project files, I've added a cloud map of the entire earth, as shown in Figure 10-6. The land masses are a little hard to see, but in the lower right is Australia, while in the left half you can barely see South America. So, our job is to overlay it on top of the color landscape map and drop out all of the low-contrast bits.


Figure 10-6. Full-earth cloud patterns

Not only are we going to add clouds to our model, but we'll also see how to handle multitexturing using shaders, as in, how does one tell a shader to use more than one texture? Remember the lesson about texture units in Chapter 6? They come in really handy right now, because that is where the textures are stored, ready for the fragment shader to pick them up. Normally, for a single texture, the system defaults in a way that no additional setup is needed, save for the normal call to glBindTexture(). However, if you want to use more than one, there is some setup required. The steps are as follows:

  1. Load the new texture in your main program.
  2. Add a second uniform sampler2D to your fragment shader to support a second texture and pick it up via glGetUniformLocation().
  3. Tell the system which texture unit to use with which sampler.
  4. Activate and bind the desired textures to the specified TUs while in the main loop, drawInRect().

Now to a few specifics: you already know how to load textures. That is, of course, a no-brainer. So, in step 2, you will want to add something like the following to the fragment shader, the same one used for the previous couple of exercises:

        uniform sampler2D cloud_texture;

And to loadShaders():

        uniforms[UNIFORM_SAMPLER1] = glGetUniformLocation(*program, "cloud_texture");
        uniforms[UNIFORM_SAMPLER0] = glGetUniformLocation(*program, "s_texture");

Step 3 is added in the view controller's setupGL(). The glUniform1i() call takes the “location” of the sampler uniform in the fragment shader as the first argument and the actual TU number as the second. So, in this case, sampler0 is bound to texture unit 0, while sampler1 goes to texture unit 1. Since a single texture and the first sampler both default to TU0, this setup code is not needed when only one texture is in play.

        glUseProgram(m_DaysideProgram);
        glUniform1i(uniforms[UNIFORM_SAMPLER0],0);
        glUniform1i(uniforms[UNIFORM_SAMPLER1],1);

        glUseProgram(m_NightsideProgram);
        glUniform1i(uniforms[UNIFORM_SAMPLER0],0);
        glUniform1i(uniforms[UNIFORM_SAMPLER1],1);

When running the main loop, in step 4, you can do the following:

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D,m_EarthNightTexture.name);
    
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D,m_EarthCloudTexture.name);
   
        [self useProgram:m_NightsideProgram];
   
        [m_Sphere setBlendMode:PLANET_BLEND_MODE_SOLID];
        [m_Sphere execute:m_EarthNightTexture.name];

glActiveTexture() specifies which TU to use, followed by a call to bind the texture. Afterward, the program can be used to the desired effect.

The cloud-luv'n fragment shader should now look something like Listing 10-16 to perform the actual blending.

Listing 10-16. Blends a Second Texture with Clouds on Top of the Others

precision mediump float;                            

varying lowp vec4 colorVarying;
varying lowp vec4 specularColorVarying;


uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;

varying vec2 v_texCoord;                            
uniform sampler2D s_texture;
uniform sampler2D cloud_texture;                                                    //1

void main()                                         
{       
    vec4 finalSpecular=vec4(0,0,0,1);
    vec4 surfaceColor;
    vec4 cloudColor;
    
    float halfBlue;            //a value used to detect a mainly blue fragment.
    
    surfaceColor=texture2D( s_texture, v_texCoord );
    cloudColor=texture2D(cloud_texture, v_texCoord );                               //2

    halfBlue=0.5*surfaceColor[2];
    
    if(halfBlue>1.0)
        halfBlue=1.0;
    
    if((surfaceColor[0]<halfBlue) && (surfaceColor[1]<halfBlue))
        finalSpecular=specularColorVarying;
  
    if(cloudColor[0]>0.15)                                                          //3
    {
        cloudColor[3]=1.0;
        gl_FragColor=(cloudColor*1.3+surfaceColor*.4)*colorVarying;
    }
   else
        gl_FragColor=(surfaceColor+finalSpecular)*colorVarying;
}

Here's what is happening:

  • Line 1 is merely declaring the new cloud_texture.
  • In Line 2, we pick up the cloud color from the cloud sampler object.
  • The new color is filtered and merged with the earth's surface image, lines 3ff. The numbers used are quite arbitrary, but they give the best image. Naturally much of the finer detail will have to be cut out to ensure the colored land masses show through.

    Since the clouds are grayscale objects, I need to pick up only a single color component to test, because the normal RGB values are identical. So, I opted to handle all texels brighter than 0.15, as shown in line 3. Then I ensure that the alpha channel is 1.0 and combine all three components. The cloudColor is given a slight boost with the 1.3 multiplier, while the underlying surface uses only .4, so as to emphasize the clouds a little more while still making them relatively opaque.

I hope you'll see something like Figure 10-7. Now it's starting to look like a real planet.


Figure 10-7. Putting it all together

This is just one very simple example of using a shader. When it comes to space themes, for example, you might generate a hazy atmosphere around a planet or use 3D volumetric textures to simulate galaxies or nebulae. If only I had another ten chapters….

More Fun and Games with GLKit

As mentioned previously, the introduction of GLKit in iOS 5 was largely designed to make working in OpenGL ES a little easier. The kit supplies new functionality in four areas, three of which you have already dealt with and which are handy in either version 1 or 2:

  • GLKView and GLKViewController (hiding some of the messiness when dealing with the drawing surface)
  • Texture management
  • Math libraries (rich and standardized math API)
  • Effects (standard means to encapsulate shader-based effects)

Of the four, the latter two were specifically targeted to make working with OpenGL ES 2 a little easier. It's the final one, however, that adds a little bit of extra flash that we're going to cover now.

GLKEffects

The GLKEffects library was created as a way to manage shader-based effects. At the time of this writing, GLKit comes with two prebuilt effects classes, and I am sure we'll see more. The core of this is the GLKBaseEffect class. GLKBaseEffect incorporates, and to use Apple's term “mimics,” much of what OpenGL ES 1 users had to leave behind when making the jump to 2. This includes the following:

  • The basic lighting model from OpenGL ES 1, but with only three lights at a time, vs. eight under version 1.
  • Materials, using the GLKEffectPropertyMaterial class, with all of their respective qualities
  • Fog
  • Multitexturing

These are the two subclasses:

  • GLKReflectionMapEffect: Turns an object into a shiny toy
  • GLKSkyboxEffect: Creates a 360-degree panorama

Both the reflection and skybox are standard effects used often in games and elsewhere. The skybox is very useful in flight simulators so you can look anywhere and be immersed in the artificial world.

GLKReflectionMap

Sometimes called environment mapping, reflection mapping is used to make an object look like it is made out of the most polished metal or glass. Because it is cool-looking and relatively easy, it is commonly found in games and elsewhere.

The reflection effect largely comes from having a fixed texture with some geometry moving or rotating underneath. That means the texture coordinates for the reflective surfaces will vary dynamically to counteract any motion the underlying object might have.

The texture most commonly used as the “environment” typically comes in the form of what's called a cube map. A cube map is simply a texture that can be subdivided into six squares and then reassembled in cube form around the reflecting object. Why a cube instead of a sphere? Less fuss, mainly. A full 360-degree spherical texture would give a reflection that is as pure and distortion free as possible, but a cube texture is much easier to create.

Think of what a cube made out of paper would look like unfolded, using Hedly and his pals as the subject, as shown in Figure 10-8.


Figure 10-8. Hedly and his friends. They're a quiet bunch.

So, how is a cube map used? First, get used to texture lookups that take three components instead of two: a direction vector that determines both which face of the cube is hit and which texel on that face to use. Cube mapping makes a number of assumptions:

  • The environment in the reflection is very far away, so no parallax will be visible.
  • The discontinuities are largely difficult to notice unless you know what you're looking for.
  • The object cannot reflect any part of itself, unless you use some special image maps to compensate.
  • You have a curved surface, because cube maps don't look right on a flat surface like a mirror.

To draw an object with a reflection/cube map, OpenGL casts a ray from your viewpoint, bounces it off the target object, and figures out where it hits the cube map. The intersection point on the object determines which texture coordinates are used, and the intersection on the cube map picks out which texel to use and which of the six faces was hit.
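
GLKReflectionMapEffect generates that lookup for you, but to make the idea concrete, a hand-rolled fragment shader doing the same thing might look like this sketch. The uniform and varying names are invented here; the varying would be the reflected view vector computed in the vertex shader:

    uniform samplerCube u_envMap;          //the six-faced cube map
    varying mediump vec3 v_reflectDir;     //reflected view vector from the vertex shader

    void main()
    {
        //textureCube() picks the face and the texel from the 3D direction
        gl_FragColor = textureCube(u_envMap, v_reflectDir);
    }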

With that in mind, let's add a cube map to the earth using GLKReflectionMapEffect.

You can start again with the standard template project and then add Planet.mm to it. You will need to modify both the view controller and the planet code. First, let's handle your setupGL() method in the view controller by substituting Listing 10-17 for the template code.

Listing 10-17. Setting Up Your Reflection Map

-(NSString *)imagePath:(NSString *)image
{    
     return [[NSBundle mainBundle] pathForResource:image ofType:NULL];    
}

- (void)setupGL
{
    int planetSize=50;


    NSArray *images = [[NSMutableArray alloc] initWithObjects:                       //1
                       [self imagePath:@"hedly1.png"],
                       [self imagePath:@"hedly2.png"],
                       [self imagePath:@"hedly3.png"],
                       [self imagePath:@"hedly4.png"],
                       [self imagePath:@"hedly5.png"],
                       [self imagePath:@"hedly6.png"],
                       nil];
    
    [EAGLContext setCurrentContext:self.context];

    NSDictionary *options=
    [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES],GLKTextureLoaderOriginBottomLeft,nil];
    
    GLKTextureInfo *info=                                                            //2
    [GLKTextureLoader cubeMapWithContentsOfFiles:images options:options error:nil];

    
    self.effect = [[GLKReflectionMapEffect alloc] init];                             //3
    self.effect.light0.enabled = GL_TRUE;            
    self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 1.0f, 1.0f, 1.0f);
    self.effect.light0.specularColor = GLKVector4Make(1.0f, 1.0f, 1.0f, 1.0f);
    self.effect.material.shininess = 15.0f;
    self.effect.lightingType = GLKLightingTypePerPixel;


    self.effect.textureCubeMap.name =info.name;
    
    self.effect.light0.position=GLKVector4Make(-5.0f, 5.0f, 10.0f, 1.0);
    
    glEnable(GL_DEPTH_TEST);
    
    m_Eyeposition.x=0.0;
    m_Eyeposition.y=0.0;
    m_Eyeposition.z=5.0;
    
    m_Earth=[[Planet alloc] init:planetSize slices:planetSize                        //4
    radius:.5f squash:1.0f
    textureFile:@"earth_light.png"];    
    
    [m_Earth setPositionX:0.0 Y:0.0 Z:-3.0];                                                               
}

Bet you want to know what's going on here?

  • The six-sided cube map is specified by creating an array of the six images in lines 1ff.
  • Line 2 generates the GLKTextureInfo object and uses its cube map support to fetch the six needed files.
  • The new effects object is allocated in line 3. After that, the lighting, materials, and position info are filled in, not at all unlike good ol' OpenGL ES 1.
  • And finally, the earth is generated just like before, in line 4.

Now we need the update() and drawInRect() methods, as shown in Listing 10-18.

Listing 10-18. Updating the Effect

- (void)update
{
    GLfloat scale=2.0;
    float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);//1

    GLKMatrix4 projectionMatrix =                         
        GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
     
    self.effect.transform.projectionMatrix = projectionMatrix;
     
    GLKMatrix4 baseModelViewMatrix = GLKMatrix4Identity;  
    GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;  

    baseModelViewMatrix = GLKMatrix4Scale(baseModelViewMatrix,scale,scale,scale);    //2
    baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0, 0.5,
    0.0f);
    modelViewMatrix = GLKMatrix4MakeTranslation(0.0, 0.0, -3.0);
    modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, baseModelViewMatrix);
     
    self.effect.transform.modelviewMatrix = modelViewMatrix;        
    self.effect.matrix=GLKMatrix3Identity;                                           //3        
    
    _rotation+=0.03;
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    GLfloat gray=0.2;
    
    glClearColor(gray,gray,gray, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    
    [self.effect prepareToDraw];                                                     //4
    [m_Earth execute:self.effect];                    
}

In the update() function, you'll see how we now need to rely on the matrixy functions from the new math library; there's no glRotatef() or glTranslatef(), glPushMatrix(), or glPopMatrix() in this universe.

  • Lines 1ff specify the projection matrix, what would normally be given over to glFrustum() in the alternate universe of version 1.
  • Line 2 and those following create the matrices we need for the transformations, ultimately assigning the final modelViewMatrix object to the transform property of the effect's own GLKEffectPropertyTransform object. GLKEffectPropertyTransform holds the Modelview and projection matrices, along with the normal matrix derived from them.
  • Not only does the “effect” have its own transformation matrix, it can also have an additional matrix to handle specific components of that effect. Line 3 highlights this extra matrix. The Modelview matrix is for the geometry of the effect, just like it is in version 1, but this new one can be used to transform other things; in this case, it could be used to rotate the cube map itself (see the one-line sketch after Figure 10-9). Setting it to the identity keeps the cube map static, letting just the earth model rotate.
  • When ready, call the prepareToDraw() method of the effect's class, and it will apply the new settings, after which you may render the object itself, with the results shown in Figure 10-9.

Figure 10-9. Reflection mapping the earth
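
If you did want the environment itself to spin, a hypothetical one-line change in update() would swap the identity assignment for a rotation:

    self.effect.matrix = GLKMatrix3MakeRotation(_rotation, 0.0f, 1.0f, 0.0f);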

For complicated objects such as the earth model, you would be better off using a simpler cube map. The most basic ones typically show a horizon, ground, and sky, usually produced with different gradients.

Summary

In this final chapter, you learned a little about OpenGL ES 2, the programmable-pipeline version of ES; learned how and where shaders fit in; and used them to add some extra detail to the earth. (For extra credit, try porting the rest of the simulator to version 2.) The final exercise used the OpenGL ES 2–exclusive GLKit effects objects to create a cube map and a shiny earth, rounding out the GLKit introduction. I also advise watching Apple's superb GLKit presentation from WWDC 2011; iTunes has all of the talks online.

Throughout this book, you've learned basic 3D theory, both in the math involved and in the overall principles. I'd like to think it's given you a basic understanding of the topic, even knowing that the book could be many times larger, considering that we've barely touched 3D graphics.

The Khronos Group, the keepers of all things officially OpenGL, has published several extensive books on the subject. Affectionately known by the color of their covers, there's the Red Book (the official programming guide), the Blue Book (tutorials and reference), the Orange Book (shading language), the Green Book (OpenGL on the Mac), and the Sort-of-Purplish Book (OpenGL ES 2). There are also numerous other third-party books that go much deeper than I've been able to go here. Likewise, there are many web sites dedicated to OpenGL tutorials; nehe.gamedev.net is by far one of the best, with nearly 50 different tutorials as of this writing.

And as you're going over the work of other authors, be it from other books or on the web, just remember that this book is the one that gave you the sun, the earth, and the stars. Not many others can claim that.
