Chapter 5


The true worth of a man is not to be found in man himself but in the colours and textures that come alive in others.

—Albert Schweitzer

People would be a rather dull bunch without texture in their lives. Removing those interesting little foibles and eccentricities would remove a little of the sheen in our daily wanderings, be they odd but beguiling little habits or unexpected talents. Imagine the high-school janitor who happens to be an excellent ballroom dancer, the famous comedian who must wear only new white socks every day, the highly successful game engineer who's afraid to write letters by hand—all can make us smile and add just a little bit of wonder through the day. And so it is when creating artificial worlds. The visual perfection that computers can generate might be pretty, but it just doesn't feel right if you want to create a sense of authenticity in your scenes. That's where texturing comes in.

Texture makes that which is perfect become that which is real. The American Heritage Dictionary describes it this way: “The distinctive physical composition or structure of something, especially with respect to the size, shape, and arrangement of its parts.” Nearly poetic, huh?

In the world of 3D graphics, texturing is as vital as lighting in creating compelling images and can be incorporated with surprisingly little effort nowadays. Much of the work in the graphics chip industry is rooted in rendering increasingly detailed textures at higher rates than each previous generation of hardware.

Because texturing in OpenGL ES is such a vast topic, this chapter will be confined to the basics, with more advanced topics and techniques reserved for the next chapter. With that in mind, let's get started.

The Language of Texturing

Say you wanted to create an airstrip in a game you're working on. How would you do that? Simple: take a couple of black triangles and stretch them really long. Bang! You've got your landing strip! Not so fast there, sport. What about the lines painted down the center of the strip? How about a bunch of small white faces? That could work. But don't forget those yellow chevrons at the very end. Well, add a bunch of additional faces and color them yellow. And don't forget about the numbers. How about the curved lines leading to the tarmac? Pretty soon you might be up to hundreds of triangles, but that still wouldn't help with the oil spots, repairs, skid marks, and roadkill. Now it starts getting complicated. Getting all of the fine detail could require thousands if not tens of thousands of faces. Meanwhile, your buddy, Arthur, is also creating a strip. You compare notes, telling him about polygon counts, and you haven't even gotten to the roadkill yet. Arthur says all he needed was a couple of triangles and one image. You see, he used texture maps, and using texture maps can create a highly detailed surface such as an airstrip, brick walls, armor, clouds, creaky weathered wooden doors, a cratered terrain on a distant planet, or the rusting exterior of a '56 Buick.

In the early days of computer graphics, texturing (or texture mapping) used up two of the most precious resources: CPU cycles and memory. Texture mapping was used sparingly, and all sorts of little tricks were done to save on both resources. With memory now virtually free (compared to 20 years ago) and with modern chips having seemingly limitless speed, using textures is no longer a decision one should ever have to stay up all night and struggle with.

All About Textures (Mostly)

Textures come in two broad types: procedural and image. Procedural textures are generated on the fly based on some algorithm. There are “equations” for wood, marble, asphalt, stone, and so on. Nearly any kind of material can be reduced to an algorithm and hence drawn onto an object, as shown in Figure 5-1.


Figure 5-1. A golden chalice (left). By using procedural texture mapping (right), the chalice can be made up of gold ore instead, while the cone uses a marble map.

Procedural textures are very powerful because they can produce an infinite variety of scalable patterns that can be enlarged to reveal increasingly more detail, as shown in Figure 5-2. Otherwise, this would require a massive static image.
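As a concrete illustration of the idea, here is a minimal sketch of a procedural texture: a checkerboard evaluated entirely from a formula, with no stored image at all. The function name and the 8×8 grid size are arbitrary choices for this example, not part of any real texturing API.

```c
#include <stdint.h>

/* Evaluate a procedural checkerboard at normalized coordinates (s,t).
   Because the color comes from a formula rather than stored texels, it
   can be sampled at any resolution without ever losing detail. */
static uint8_t checker_texel(float s, float t)
{
    const int cells = 8;                  /* checker squares per side */
    int x = (int)(s * cells);
    int y = (int)(t * cells);
    return ((x + y) % 2 == 0) ? 255 : 0;  /* alternate white and black */
}
```

Real procedural materials such as marble or wood simply swap in a fancier formula (typically layered noise) for the modulo test.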


Figure 5-2. Close-up on the goblet from Figure 5-1 (right). Notice the fine detailing that would need a very large image to accomplish.

The 3D rendering application Strata Design 3D-SE, which was used for the images in Figure 5-2, supports both procedural and image-based textures. Figure 5-3 shows the dialog used to specify the parameters of the gold ore texture depicted in Figure 5-2.


Figure 5-3. All of the possible settings used to produce the gold ore texture in Figure 5-2.

Procedural textures, and to a lesser degree image textures, can be classified in a spectrum of complexity from random to structured. Random, or stochastic textures, can be thought of as “looking like noise,” like a fine-grained material such as sand, dust, gravel, the grain in paper, and so on. Near stochastic could be flames, grass, or the surface of a lake. On the other hand, structured textures have broad recognizable features and patterns. A brick wall, wicker basket, plaid, or herd of geckos would be structured.

Image Textures

As referenced earlier, image textures are just that. They can serve as a surface or material texture such as mahogany wood, steel plating, or leaves scattered across the ground. If done right, these can be seamlessly tiled to cover a much larger surface than the original image would suggest. And because they come from real life, they don't need the sophisticated software used for the procedural variety. Figure 5-4 shows the chalice scene, but this time with wood textures, mahogany for the chalice, and alder for the cone, while the cube remains gold.


Figure 5-4. Using real-world image textures

Besides using image textures as materials, they can be used as pictures themselves in your 3D world. A rendered image of an iPad can have a texture dropped into where the screen is. A 3D city could use real photographs for windows on the buildings, for billboards, or for family photos in a living room.

OpenGL ES and Textures

When OpenGL ES renders an object, such as the mini solar system in Chapter 4, it draws each triangle and then lights and colorizes them based on the three vertices that make up each face. Afterward it merrily goes to the next one, singing a jaunty little tune no doubt. A texture is nothing more than an image. As you learned earlier in the chapter, it can be generated on the fly to handle context-sensitive details (such as cloud patterns), or it can be a JPEG, PNG, or anything else. It is made up of pixels, of course, but when operating as a texture, they are called texels. You can think of an OpenGL ES texture as a bunch of little colored “faces” (the texels), each of the same size and stitched together in one sheet of, say, 256 such faces on a side. Each face is the same size as each other one and can be stretched or squeezed so as to work on surfaces of any size or shape. They don't have corner geometry to waste memory storing xyz values, can come in a multitude of lovely colors, and give a lot of bang for the buck. And of course they are extraordinarily versatile.

Like your geometry, textures have their own coordinate space. Where geometry denotes locations of its many pieces using the trusty Cartesian coordinates known as x, y, and z, textures use s and t. The process that applies a texture to some geometric object is called UV mapping. (s and t are the convention in the OpenGL world, whereas others use u and v. Go figure.)

So, how is this applied? Say you have a square tablecloth that you must make fit a rectangular table. You need to attach it firmly along one side and then tug and stretch it along the other until it just barely covers the table. You can attach just the four corners, but if you really want it to “fit,” you can attach other parts along the edge or even in the middle. That's a little how a texture is fitted to a surface.

Texture coordinate space is normalized; that is, both s and t range from 0 to 1. They are unitless entities, abstracted so as not to rely on either the dimensions of the source or the destination. So, the face to be textured will carry around with its vertices s and t values that lie between 0.0 and 1.0, as shown in Figure 5-5.


Figure 5-5. Texture coordinates go from 0 to 1.0, no matter what the texture is.

In the most elementary example, we can apply a rectangular texture to a rectangular face and be done with it, as illustrated in Figure 5-5. But what if you wanted only part of the texture? You could supply a PNG that had only the bit you wanted, which is not very convenient if you wanted to have many variants of the thing. However, there's another way. Merely change the s and t coordinates of the destination face. Let's say all you wanted was the upper-left quarter of the Easter Island statue I call Hedly. All you need to do is change the coordinates of the destination, and those coordinates are based on the proportion of the image section you want, as shown in Figure 5-6. That is, because you want the image to be cropped halfway down the s-axis, the s coordinate will no longer go from 0 to 1 but instead from 0 to .5. And the t coordinate would then go from .5 to 1.0. If you wanted the lower-left quarter instead, you'd use the 0 to .5 range for both the s and t coordinates.

Also note that the texture coordinate system is resolution independent. That is, the center of an image that is 512 on a side would be (.5,.5), just as it would be for an image 128 on a side.
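That resolution independence is just a division. The helper below (an illustrative name, not an OpenGL call) converts a pixel position in a source image to normalized s,t values; the same pixel fraction always yields the same coordinate regardless of image size.

```c
/* Map a pixel position inside a width x height image to normalized
   texture coordinates in the 0..1 range. */
typedef struct { float s, t; } TexCoord;

static TexCoord pixel_to_st(int px, int py, int width, int height)
{
    TexCoord tc;
    tc.s = (float)px / (float)width;
    tc.t = (float)py / (float)height;
    return tc;
}
```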


Figure 5-6. Clipping out a portion of the texture by changing the texture coordinates

Textures are not limited to rectilinear objects. With careful selections of the st coordinates on your destination face, you can do some of the more colorful shapes depicted in Figure 5-7.


Figure 5-7. Mapping an image to unusual shapes

If you keep the image coordinates the same across the vertices of your destination, the image's corners will follow those of the destination, as shown in Figure 5-8.


Figure 5-8. Distorting images can give a 3D effect on 2D surfaces.

Textures can also be tiled so as to replicate patterns that depict wallpaper, brick walls, sandy beaches, and so on, as shown in Figure 5-9. Notice how the coordinates actually go beyond the upper limit of 1.0. All that does is start the texture repeating so that, for example, an s of .6 is equivalent to an s of 1.6, 2.6, and so on.


Figure 5-9. Tiled images are useful for repeated patterns such as those used for wallpaper or brick walls.

Besides the tiling model shown in Figure 5-9, texture tiles can also be “mirrored,” or clamped, which is a mechanism for dealing with s and t when outside of the 0 to 1.0 range.

Mirrored tiling repeats textures as above, but also flips columns/rows of alternating images, as shown in Figure 5-10 (left). Clamping an image means that the last row or column of texels repeats, as shown in Figure 5-10 (right). Clamping looks like a total mess with my sample image but is useful when the image has a neutral border. In that case, you can prevent any image repetition on either or both axes if s or t exceeds its normal bounds.


Figure 5-10. A mirrored-repeat for just the s-axis (left), the texture clamped (right)

Note The problem with the right edge in Figure 5-10 suggests that textures designed to be clamped should have a 1-pixel-wide border to match the colors of the object to which they are bound. Unless you think it's really cool, then of course that trumps nearly everything.

OpenGL ES, as you know by now, doesn't do quadrilaterals—that is, faces with four sides (as opposed to its big desktop brother). So, we have to fabricate them using two triangles, giving us structures such as the triangle strips and fans that we experimented with in Chapter 3. Applying textures to this “fake” quadrilateral is a simple affair. One triangle has texture coordinates of (0,0), (1,0), and (0,1), while the other has coordinates of (1,0), (1,1), and (0,1). It should make more sense if you study Figure 5-11.


Figure 5-11. Placing a texture across two faces

And finally, let's take a look at how a single texture can be stretched across a whole bunch of faces, as shown in Figure 5-12, and then we can get back to the fun stuff.


Figure 5-12. Stretching a texture across many faces

Image Formats

OpenGL ES supports many different image formats, and I'm not talking about PNG vs. JPEG here, but rather the form and layout of the image in memory. The standard is 32 bits, which assigns 8 bits of memory each for red, green, blue, and alpha. Referred to as RGBA, it is the standard used for most of the exercises. It is also the “prettiest” because it provides more than 16 million colors and translucency. However, you can often get away with 16-bit or even 8-bit images. In doing that, with careful selection of images, you can save a lot of memory and crank up the speed quite a bit. See Table 5-1 for some of the more popular formats.
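The memory math is simple: bytes = width × height × bytes per texel. A hypothetical helper makes the savings concrete; a 256×256 image drops from 256 KB at 32 bits to half that at 16 bits.

```c
/* Bytes consumed by an uncompressed width x height texture at the given
   bit depth (ignoring mipmaps and any driver padding). */
static unsigned int texture_bytes(unsigned int w, unsigned int h,
                                  unsigned int bits_per_texel)
{
    return w * h * (bits_per_texel / 8);
}
```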


A format requirement of sorts: generally, OpenGL can use only texture images that are a power of two on each side. Some systems can get around that, such as iOS with certain limitations, but for the time being, just stick with the standard.

So, with all of this stuff out of the way, it's time to start coding.

Back to the Bouncy Square One

Let's take a step back and fetch the generic bouncy square example again, which we first worked on in Chapter 3. We'll apply a texture to it and then manipulate it to show off some of the tricks detailed in this chapter, such as repeating, animating, and distortion.

Prior to iOS 5, programmers needed to create their own texture conversion code or use code supplied by Apple, which took nearly 40 lines of Core Graphics calls to convert a .png to an OpenGL-compatible format. Now we have two nice shiny new toys to play with: GLKTextureLoader and GLKTextureInfo.

In your view controller add Listing 5-1.

Listing 5-1. Loading and converting a texture to OpenGL format.

-(GLKTextureInfo *)loadTexture:(NSString *)filename
{
    NSError *error;
    GLKTextureInfo *info;

    NSString *path=[[NSBundle mainBundle]pathForResource:filename ofType:NULL];
    info=[GLKTextureLoader textureWithContentsOfFile:path options:NULL error:&error];

    return info;
}

This can now be initialized from viewDidLoad() using the following:

    [EAGLContext setCurrentContext:self.context];
    m_Texture=[self loadTexture:@"hedly.png"];

My image, hedly.png, is a photo of one of the mysterious huge stone heads on Easter Island in the Pacific. For ease of testing, use a power-of-two (POT) image, 32 bits, RGBA.

Note By default, OpenGL requires each row of texels in the image data to be aligned on a 4-byte boundary. Our RGBA textures adhere to that; for other formats, consider using the call glPixelStorei(GL_UNPACK_ALIGNMENT, x), where x can be 1, 2, 4, or 8 bytes for alignment. Use 1 to cover all cases.

Note that there is usually a size limitation for textures, which depends on the actual graphics hardware used. On both the first- and second-generation iPhones (the original and 3G) and iPod touch devices, textures were limited to no larger than 1024×1024 because of the PowerVR MBX platform. On all others, the newer PowerVR SGX is used, which doubles the max size of textures to 2048×2048. You can find out how big a texture a particular platform can use by calling the following, where maxSize is an integer, and then compensate at runtime:

GLint maxSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

Now change the drawInRect() routine, as shown in Listing 5-2. Most of this you have seen before, with the new stuff detailed below. And while you're at it, go ahead and add GLKTextureInfo *m_Texture to the header.

Listing 5-2. Render the geometry with the texture

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    static const GLfloat squareVertices[] =
    {
        -0.5f, -0.33f,
         0.5f, -0.33f,
        -0.5f,  0.33f,
         0.5f,  0.33f,
    };

    static const GLubyte squareColors[] =
    {
        255, 255,   0, 255,
        0,   255, 255, 255,
        0,     0,   0,   0,
        255,   0, 255, 255,
    };

    static const GLfloat textureCoords[] =                                          //1
    {
        0.0, 0.0,
        1.0, 0.0,
        0.0, 1.0,
        1.0, 1.0
    };

    static float transY = 0.0f;

    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glLoadIdentity();
    glTranslatef(0.0f, (GLfloat)(sinf(transY)/2.0f), 0.0f);

    transY += 0.075f;

    glVertexPointer(2, GL_FLOAT, 0, squareVertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, squareColors);
    glEnableClientState(GL_COLOR_ARRAY);

    glEnable(GL_TEXTURE_2D);                                                        //2
    glEnable(GL_BLEND);                                                             //3
    glBlendFunc(GL_ONE, GL_SRC_COLOR);                                              //4
    glBindTexture(GL_TEXTURE_2D,;                                    //5
    glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);                               //6
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);                                    //7

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                                          //8

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);                                   //9
}

So, what's going on here?

  • The texture coordinates are defined at line 1. Notice that, as referenced earlier, they are all between 0 and 1. We'll play with these values a little later.
  • In line 2, the GL_TEXTURE_2D target is enabled. Desktop OpenGL also supports 1D and 3D textures, but OpenGL ES does not.
  • In line 3, blending is enabled. Blending is where the source color of the image and the destination color of the background are blended (mixed) according to the equation selected in line 4.

    The blend function determines how the source and destination pixels/fragments are mixed together. The most common form is where the source overwrites the destination, but others can create some interesting effects. Because this is such a large topic, it deserves its own chapter, which as it turns out is Chapter 6.

  • Line 5 ensures that the texture we want is the current one. Like other OpenGL objects, textures are assigned a “name” (a unique integer ID number), which will be referenced until it's deleted.
  • Line 6 is where the texture coordinates are handed off to the hardware.
  • And just as you had to tell the client to handle the colors and vertices, you need to do the same for the texture coordinates here in line 7.
  • Line 8 you'll recognize, but this time besides drawing the colors and the geometry, it now takes the information from the current texture, matches up the four texture coordinates to the four corners specified by the squareVertices[] array (each vertex of the textured object needs to have a texture coordinate assigned to it), and blends it using the values specified in line 4.
  • Finally, disable the client state for texture, line 9, the same way it was disabled for color and vertices.
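For the curious, the glBlendFunc(GL_ONE, GL_SRC_COLOR) combination in line 4 works out, per color channel (normalized to 0..1), to result = source × 1 + destination × source. A sketch of that arithmetic, with an illustrative function name:

```c
/* One channel of the GL_ONE/GL_SRC_COLOR blend: the source is taken at
   full strength and the destination is modulated by the source color.
   OpenGL clamps the sum to 1.0. */
static float blend_one_srccolor(float src, float dst)
{
    float r = src + dst * src;
    return (r > 1.0f) ? 1.0f : r;
}
```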

If everything works right, you should see something like Figure 5-13 (left). You don't, you say? It's upside down? Depending on the format used, your texture could very well be inverted, with its internal origin in the upper-left corner instead of the lower left. The fix is easy. Change loadTexture to look like this:

-(GLKTextureInfo *)loadTexture:(NSString *)filename
{
    NSError *error;
    GLKTextureInfo *info;
    NSDictionary *options=[NSDictionary dictionaryWithObjectsAndKeys:
                          [NSNumber numberWithBool:YES],
                          GLKTextureLoaderOriginBottomLeft,nil];

    NSString *path=[[NSBundle mainBundle]pathForResource:filename ofType:NULL];

    info=[GLKTextureLoader textureWithContentsOfFile:path options:options error:&error];

    return info;
}
What you're telling the loader to do is to flip the origin of the texture so that it is anchored at the bottom left. Now does it look like Figure 5-13 (left)?

Notice how the texture is also picking up the colors from the vertices? Comment out the line glEnableClientState(GL_COLOR_ARRAY) in drawInRect(), and you should now see Figure 5-13 (right). If you don't see any image, double-check your file and ensure that it really is a power-of-two in size, such as 128×128 or 256×256.


Figure 5-13. Applying texture to the bouncy square. Using vertex colors (left) and not (right).

So, now we can replicate some of the examples in the first part of this chapter. The first is to pick out only a portion of the texture to display. Change textureCoords in drawInRect to the following:

static GLfloat textureCoords[] =
{
    0.0, 0.5,
    0.5, 0.5,
    0.0, 1.0,
    0.5, 1.0
};

Did you get Figure 5-14?


Figure 5-14. Cropping the image using s and t coordinates

The mapping of the texture coordinates to the real geometric coordinates looks like Figure 5-15. Spend a few minutes to understand what is happening here if you're not quite clear yet. Simply put, there's a one-to-one mapping of the texture coordinates in their array with the geometric coordinates in theirs.


Figure 5-15. The texture coordinates have a one-to-one mapping with the geometric ones.

Now change the texture coordinates to the following. Can you guess what will happen (Figure 5-16)?

static GLfloat textureCoords[] =
{
    0.0, 0.0,
    2.0, 0.0,
    0.0, 2.0,
    2.0, 2.0
};

Figure 5-16. Repeating the image is convenient when you need to do repetitive patterns such as wallpaper.

Now let's distort the texture by changing the vertex geometry, and to make things visually clearer, restore the original texture coordinates to turn off the repeating:

static const GLfloat squareVertices[] =
{
    -0.5f, -0.33f,
     0.5f, -0.15f,
    -0.5f,  0.33f,
     0.5f,  0.15f,
};

This should pinch the right side of the square and take the texture with it, as shown in Figure 5-17.


Figure 5-17. Pinching down the right side of the polygon

Armed with all of this knowledge, what would happen if you changed the texture coordinates dynamically? Add the following code to drawInRect—anywhere should work:

        static float texIncrease=0.01;
        int i;

        for(i=0; i<8; i++)                   // nudge all eight s and t values
            textureCoords[i]+=texIncrease;   // (textureCoords must not be const)

This will increase the texture coordinates just a little from frame to frame. Run, and stand in awe. This is a really simple trick to get animated textures. A marquee in a 3D world might use this. You could create a texture that was like a strip of movie film with a cartoon character doing something and change the s and t values to jump from frame to frame like a little flip book. Another is to create a texture-based font. Because OpenGL has no native font support, it's up to us, the long-suffering engineers of the world, to add it ourselves. Sigh. This could be done by placing the characters of the desired font onto a single mosaic texture, called a “font atlas,” and then selecting them by carefully using texture coordinates.


Mipmaps

Mipmaps are a means of specifying multiple levels of detail for a given texture. That can help in two ways: it can smooth out the appearance of a textured object as its distance to the viewpoint varies, and it can save resource usage when textured objects are far away.

For example, in Distant Suns, I may use a texture for Jupiter that is 1024×512. But that would be a waste of both memory and CPU if Jupiter was so far away that it was only a few pixels across. Here is where mipmapping can come into play. So, what is a mipmap?

From the Latin phrase “multum in parvo” (literally: “much in little”), a mipmap is a family of textures of varying levels of detail. Your root image might be 128 on a side, but when a part of a mipmap, it would have textures that were also 64, 32, 16, 8, 4, 2, and 1 pixel on a side, as shown in Figure 5-18.
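The chain's cost is easy to tally. These two helpers (names are mine, not OpenGL's) count the levels for a square power-of-two texture and the total texels across the whole chain; the extra levels add only about one-third to the base image's footprint.

```c
/* Number of mipmap levels for a square power-of-two texture. */
static int mip_levels(int size)
{
    int levels = 1;
    while (size > 1) {
        size /= 2;
        levels++;
    }
    return levels;
}

/* Total texels in the full chain, base level included. */
static long mip_chain_texels(int size)
{
    long total = 0;
    while (size >= 1) {
        total += (long)size * size;
        size /= 2;
    }
    return total;
}
```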


Figure 5-18. Hedly the head, the mipmapped edition

In iOS 5, switching on mipmapping is done by adding only one additional parameter to the GLKTextureLoader textureWithContentsOfFile: call. So swap in the following options dictionary in place of the previous one:

NSDictionary *options=[NSDictionary dictionaryWithObjectsAndKeys:
                           [NSNumber numberWithBool:TRUE],GLKTextureLoaderGenerateMipmaps,nil];

Naturally, drawInRect() also needs some changes. Swap out your old drawInRect() for the new and improved version in Listing 5-5. This will cause the z value to oscillate back and forth.

Listing 5-5. Substitute this for your drawInRect().

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    static int counter=1;
    static float direction=-1.0;
    static float transZ=-1.0;
    static GLfloat rotation=0;
    static bool initialized=NO;

    static const GLfloat squareVertices[] =
    {
        -0.5f, -0.5f, -0.5f,
         0.5f, -0.5f, -0.5f,
        -0.5f,  0.5f, -0.5f,
         0.5f,  0.5f, -0.5f
    };

    static GLfloat textureCoords[] =
    {
        0.0, 0.0,
        1.0, 0.0,
        0.0, 1.0,
        1.0, 1.0
    };

    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);

    glVertexPointer(3, GL_FLOAT, 0, squareVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    rotation += 1.0;
}
Grab a copy of setClipping() and move it over here; then call it from your initialization code in viewDidLoad().

If that compiles and runs OK, you should see something like Figure 5-19.


Figure 5-19. Two images, both mipmapped, but one's better looking. What gives?

You've probably noticed that the two images look a little different. The top image is shimmery, and the bottom image is noticeably smoother, making it easier to look at. That brings me to the topic of filtering.


Filtering

An image, when used as a texture, may exhibit various artifacts depending on its content and final size when projected onto the screen. Very detailed images might be seen with an annoying shimmering effect. However, it is possible to dynamically modify an image to minimize these effects through a process called filtering. Filtering is typically used in conjunction with mipmapping because the former can make use of the latter's multiple images.

Let's say you have a texture that is 128×128 but the texture face is 500 pixels on a side. What should you see? Obviously the image's original pixels, now called texels, are going to be much larger than any of the screen pixels. This is a process referred to as magnification. Conversely, you could have a case where the texels are much smaller than a pixel, and that is called minification. Filtering is the process used to determine how to correlate a pixel's color with the underlying texel, or texels. Tables 5-3 and 5-4, respectively, show the possible variants of this.



There are three main approaches to filtering:

  • Point sampling (called nearest in OpenGL lingo): A pixel's color is based on the texel that is nearest to the pixel's center. This is the simplest, is the fastest, and naturally yields the least satisfactory image.
  • Bilinear sampling, otherwise called just linear: A pixel's coloring is based on a weighted average of a 2×2 array of texels nearest to the pixel's center. This can smooth out an image considerably.
  • Trilinear sampling: Requires mipmaps and takes the two closest mipmap levels to the final rendering on the screen, performs a bilinear selection on each, and then takes a weighted average of the two individual values.

It's the trilinear sampling that you saw in action in the previous exercise, and it results in a pretty dramatic increase in perceived image quality.
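The bilinear step at the heart of both linear and trilinear filtering is just a weighted average. Here's a sketch for one channel, where c00 through c11 are the four surrounding texels and fx, fy give the sample's fractional position within that 2×2 cell:

```c
/* Bilinear interpolation of four texel values: blend horizontally along
   the s direction first, then vertically along t. */
static float bilinear(float c00, float c10, float c01, float c11,
                      float fx, float fy)
{
    float top    = c00 + (c10 - c00) * fx;
    float bottom = c01 + (c11 - c01) * fx;
    return top + (bottom - top) * fy;
}
```

Trilinear filtering runs this twice, once on each of the two nearest mipmap levels, and then blends those two results the same way.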

Figure 5-20 (left) shows a close-up of Hedly with the filtering off, while Figure 5-20 (right) has it switched on. If you go back and look at the mipmap code, you'll see that both images are actually using filtering. The shimmery one on top is using the GL_LINEAR and GL_NEAREST filtering. The bottom is doing the same but using the additional information that the mipmaps provide. Just for kicks, you might want to do comparisons of some of the other settings. For example, which is better: GL_NEAREST or GL_LINEAR?

Filtering might eventually go the way of 8-bit coloring as the retina-level displays become more common.

One more thing: if you look really closely at the bottom image, you might actually see it swap to another texture size. It's subtle, but it's there.


Figure 5-20. All filtering turned off (left), bilinear filtering turned on (right)

OpenGL Extensions and PVRTC Compression

Even though OpenGL is a standard, it was designed with extensibility in mind, letting various hardware manufacturers add their own special sauce to the 3D soup using extension strings. In OpenGL, developers can poll for possible extensions and then use them if they exist. To get a look at this, use the following line of code:

const GLubyte *extensionList = glGetString(GL_EXTENSIONS);

This will return a space-separated list of the various extra options in iOS for OpenGL ES, looking something like this (from iOS 4.3):

GL_OES_blend_equation_separate GL_OES_blend_func_separate GL_OES_blend_subtract
GL_OES_compressed_paletted_texture GL_OES_depth24 GL_OES_draw_texture
GL_OES_fbo_render_mipmap GL_OES_framebuffer_object GL_OES_mapbuffer
GL_OES_matrix_palette GL_OES_packed_depth_stencil GL_OES_point_size_array
GL_OES_point_sprite GL_OES_read_format GL_OES_rgb8_rgba8 GL_OES_stencil_wrap
GL_OES_stencil8 GL_OES_texture_mirrored_repeat GL_OES_vertex_array_object
GL_EXT_blend_minmax GL_EXT_discard_framebuffer GL_EXT_read_format_bgra
GL_EXT_texture_filter_anisotropic GL_EXT_texture_lod_bias
GL_APPLE_framebuffer_multisample GL_APPLE_texture_2D_limited_npot
GL_APPLE_texture_format_BGRA8888 GL_APPLE_texture_max_level GL_IMG_read_format
GL_IMG_texture_compression_pvrtc

One of these extensions, GL_IMG_texture_compression_pvrtc, points out that iOS can handle a special compressed texture format called PVRTC, used by the brand of graphics processing units (GPUs) in iOS devices. The first two generations of iPhones and iPod touches used a PowerVR MBX chip, while the later ones use the more powerful PowerVR SGX GPU. The advantage of the latter is that it can accept highly compressed textures in their own format and display them on the fly. This can save substantial memory while increasing framerate, by compressing textures down to as little as 1/16th their uncompressed size.

Of course, this comes with one main caveat: images must have a square power-of-two (POT) form. The compression works best on photographic type of images as opposed to contrasty graphics.

Note Another interesting extra feature is found in the string GL_APPLE_texture_2D_limited_npot. NPOT means “non-power-of-two.” Remember that more recent versions of iOS can use NPOT images? So if you have reason to use an NPOT image, check the extensions beforehand and handle the results accordingly.

Here we're going to generate and import a PVRTC file. First you will have to compress your existing files down to the PVR format using a nice little tool from Imagination Technologies, the manufacturer of the PowerVR graphics chips used in all iOS devices.

You can fetch it from the Imagination Technologies developer site; look for PowerVR Insider Utilities under the developer's section. It is called PVRTexTool.

Note Apple also supplies a texture convertor called texturetool. While it is only a command-line based tool, it is very powerful in its own right and could be used to handle large batch jobs if you had a lot of files to compress at once.

Do not be alarmed when you launch it! It might look like Windows NT, but there is nothing wrong with your picture. It actually uses the X11 windowing platform, which makes it usable across many different operating systems.

To convert a texture to PVRTC, simply load it into the editor and select the Encode Current Texture button. That will open a new dialog that lets you select which 3D platform you want to encode for; in this case, select the OpenGL ES 1.x tab. Select either the PVRTC 2BPP or PVRTC 4BPP button in the Compressed Formats section, and then click the Encode button on the bottom.

That's it!

Table 5-5 shows the formats generated by the tool. Even though you have a selection of only two, four are possible, depending on whether the source bitmap has an alpha channel. The 2BPP format means two bits per pixel, while 4BPP means, well, you guessed it, four bits per pixel.
Table 5-5. The Four Possible PVRTC Formats

GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG       4 bits/pixel, no alpha
GL_COMPRESSED_RGB_PVRTC_2BPPV1_IMG       2 bits/pixel, no alpha
GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG      4 bits/pixel, with alpha
GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG      2 bits/pixel, with alpha

Many other image formats are supported as well, and those are covered in Chapter 9 in the discussion on performance issues.


Say you have a 512×512 PNG texture that would consume 1 MB of memory. The lightest compression using PVRTexTool takes it to less than 200 KB. The most aggressive format, 2 bits/pixel with alpha, yields a mere 64 KB.
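That arithmetic can be sketched as a small helper, shown here in plain C. The minimum-size clamps come from the IMG_texture_compression_pvrtc extension specification; the function name is mine:

```c
/* Compressed size, in bytes, of a PVRTC texture. The clamps reflect the
   minimum data sizes given in the IMG_texture_compression_pvrtc spec;
   they have no effect at typical texture sizes. */
static unsigned pvrtcSizeBytes(unsigned width, unsigned height, unsigned bpp)
{
    if (bpp == 2)
        return (width < 16 ? 16 : width) * (height < 8 ? 8 : height) * 2 / 8;

    /* 4 bpp */
    return (width < 8 ? 8 : width) * (height < 8 ? 8 : height) * 4 / 8;
}
```

For comparison, a 512×512 RGBA8888 image is 512×512×4 = 1,048,576 bytes uncompressed, while pvrtcSizeBytes(512, 512, 4) gives 131,072 bytes (128 KB) and pvrtcSizeBytes(512, 512, 2) gives 65,536 bytes (64 KB).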

PVRTC textures are also readable by GLKTextureLoader but will fail to load if you specify GLKTextureLoaderGenerateMipmaps in the options dictionary. However, you can use PVRTexTool to embed a mipmap chain in the host file for you. Because the compression is lossy, you can see the various resolution levels pop in and out when using the preceding code, more readily than with the sharper, uncompressed images.
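As a quick sketch (the file name earth.pvr is hypothetical), loading a PVRTC file looks just like loading a PNG, as long as the mipmap option is left out:

```objective-c
NSString *path = [[NSBundle mainBundle] pathForResource:@"earth" ofType:@"pvr"];
NSError *error = nil;

// Note: passing an options dictionary containing
// GLKTextureLoaderGenerateMipmaps here would make the load fail for
// PVRTC data; embed the mipmap chain with PVRTexTool instead.
GLKTextureInfo *info = [GLKTextureLoader textureWithContentsOfFile:path
                                                           options:nil
                                                             error:&error];
if (info == nil)
    NSLog(@"PVRTC load failed: %@", error);
```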

Note Because PVRTC is hardware-specific, Apple has issued a precautionary note not to rely on PVRTC support in future devices. This simply means that Apple may at some point switch to a GPU from a different manufacturer, one that is not likely to support a competitor's format.

More Solar System Goodness

Now we can go back to our solar-system model from the previous chapter and add a texture to the Earth so that it can really look like the Earth. Examine Planet.m, and swap out init() for Listing 5-6.

Listing 5-6. Modified sphere generator with texture support added

-(id) init:(GLint)stacks slices:(GLint)slices radius:(GLfloat)radius                //1
        squash:(GLfloat)squash textureFile:(NSString *)textureFile
{
        unsigned int colorIncrement=0;
        unsigned int blue=0;
        unsigned int red=255;
        int numVertices=0;

        if(textureFile!=nil)
                m_TextureInfo=[self loadTexture:textureFile];                       //2

        m_Scale=radius;
        m_Squash=squash;

        colorIncrement=255/stacks;

        if ((self = [super init]))
        {
                m_Stacks = stacks;
                m_Slices = slices;
                m_VertexData = nil;
                m_TexCoordsData = nil;

                //Vertices

                GLfloat *vPtr = m_VertexData =
                        (GLfloat*)malloc(sizeof(GLfloat) * 3 * ((m_Slices*2+2) *
                        (m_Stacks)));

                //Color data

                GLubyte *cPtr = m_ColorData =
                        (GLubyte*)malloc(sizeof(GLubyte) * 4 * ((m_Slices*2+2) *
                        (m_Stacks)));

                //Normal pointers for lighting

                GLfloat *nPtr = m_NormalData = (GLfloat*)
                        malloc(sizeof(GLfloat) * 3 * ((m_Slices*2+2) * (m_Stacks)));

                GLfloat *tPtr=nil;                                                  //3

                if(textureFile!=nil)
                {
                        tPtr=m_TexCoordsData =
                                (GLfloat *)malloc(sizeof(GLfloat) * 2 *
                                ((m_Slices*2+2) * (m_Stacks)));
                }

                unsigned int phiIdx, thetaIdx;

                //Latitude

                for(phiIdx=0; phiIdx < m_Stacks; phiIdx++)
                {
                        //Starts at -1.57 and goes up to +1.57 radians.

                        //The first circle

                        float phi0 = M_PI * ((float)(phiIdx+0) *
                                (1.0/(float)(m_Stacks)) - 0.5);

                        //The second one

                        float phi1 = M_PI * ((float)(phiIdx+1) *
                                (1.0/(float)(m_Stacks)) - 0.5);
                        float cosPhi0 = cos(phi0);
                        float sinPhi0 = sin(phi0);
                        float cosPhi1 = cos(phi1);
                        float sinPhi1 = sin(phi1);

                        float cosTheta, sinTheta;

                        //Longitude

                        for(thetaIdx=0; thetaIdx < m_Slices; thetaIdx++)
                        {
                                //Increment along the longitude circle each "slice."

                                float theta = -2.0*M_PI * ((float)thetaIdx) *
                                        (1.0/(float)(m_Slices-1));
                                cosTheta = cos(theta);
                                sinTheta = sin(theta);

                                //We're generating a vertical pair of points, such
                                //as the first point of stack 0 and the first point
                                //of stack 1 above it. This is how TRIANGLE_STRIPs
                                //work, taking a set of 4 vertices and essentially
                                //drawing two triangles at a time. The first is
                                //v0-v1-v2, and the next is v2-v1-v3, etc.

                                //Get x-y-z for the first vertex of stack.

                                vPtr[0] = m_Scale*cosPhi0 * cosTheta;
                                vPtr[1] = m_Scale*sinPhi0*m_Squash;
                                vPtr[2] = m_Scale*(cosPhi0 * sinTheta);

                                //The same but for the vertex immediately above
                                //the previous one.

                                vPtr[3] = m_Scale*cosPhi1 * cosTheta;
                                vPtr[4] = m_Scale*sinPhi1*m_Squash;
                                vPtr[5] = m_Scale*(cosPhi1 * sinTheta);

                                //Normal pointers for lighting.

                                nPtr[0] = cosPhi0 * cosTheta;
                                nPtr[2] = cosPhi0 * sinTheta;
                                nPtr[1] = sinPhi0;

                                nPtr[3] = cosPhi1 * cosTheta;
                                nPtr[5] = cosPhi1 * sinTheta;
                                nPtr[4] = sinPhi1;

                                if(tPtr!=nil)                                       //4
                                {
                                        GLfloat texX = (float)thetaIdx *
                                                (1.0f/(float)(m_Slices-1));
                                        tPtr[0] = texX;
                                        tPtr[1] = (float)(phiIdx+0) *
                                                (1.0f/(float)(m_Stacks));
                                        tPtr[2] = texX;
                                        tPtr[3] = (float)(phiIdx+1) *
                                                (1.0f/(float)(m_Stacks));
                                }

                                cPtr[0] = red;
                                cPtr[1] = 0;
                                cPtr[2] = blue;
                                cPtr[4] = red;
                                cPtr[5] = 0;
                                cPtr[6] = blue;
                                cPtr[3] = cPtr[7] = 255;

                                cPtr += 2*4;
                                vPtr += 2*3;
                                nPtr += 2*3;

                                if(tPtr!=nil)                                       //5
                                        tPtr += 2*2;
                        }

                        //Degenerate triangle to connect stacks and maintain
                        //winding order.

                        vPtr[0] = vPtr[3] = vPtr[-3];
                        vPtr[1] = vPtr[4] = vPtr[-2];
                        vPtr[2] = vPtr[5] = vPtr[-1];

                        nPtr[0] = nPtr[3] = nPtr[-3];
                        nPtr[1] = nPtr[4] = nPtr[-2];
                        nPtr[2] = nPtr[5] = nPtr[-1];

                        if(tPtr!=nil)
                        {
                                tPtr[0] = tPtr[2] = tPtr[-2];                       //6
                                tPtr[1] = tPtr[3] = tPtr[-1];
                        }

                        vPtr += 2*3;
                        nPtr += 2*3;
                        cPtr += 2*4;

                        if(tPtr!=nil)
                                tPtr += 2*2;

                        blue+=colorIncrement;
                        red-=colorIncrement;
                }

                numVertices=(vPtr-m_VertexData)/3;
        }

        return self;
}

So, here is what's happening:

  • A file name for the image is added to the end of the parameter list in line 1. Remember to add it to init's declaration in Planet.h as well.
  • In line 2, the texture is created, and its GLKTextureInfo object is saved for later binding.
  • In lines 3ff, the array for the texture coordinates is allocated, but only when a texture file has actually been supplied.
  • Starting at line 4, the texture coordinates are calculated. Because the sphere has x slices and y stacks and the coordinate space goes only from 0 to 1, we need to advance each value by increments of 1/m_Slices for s and 1/m_Stacks for t. Notice that this covers two pairs of coordinates, one above the other, matching the layout of the triangle strips, which also produce stacked pairs of vertices.
  • In line 5, the pointer into the coordinate array is advanced to hold the next set of values.
  • And finally, line 6 ties up some loose threads in preparation for going to the next stack in the loop.
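The coordinate math in the bullets above can be isolated into a small helper that mirrors the expressions in Listing 5-6 (the function name sphereTexCoord is mine):

```c
/* s sweeps 0..1 around one latitude ring; t sweeps 0..1 from the bottom
   stack to the top, mirroring the texture-coordinate math in Listing 5-6. */
static void sphereTexCoord(int thetaIdx, int phiIdx, int slices, int stacks,
                           float *s, float *t)
{
    *s = (float)thetaIdx * (1.0f / (float)(slices - 1));
    *t = (float)phiIdx * (1.0f / (float)stacks);
}
```

For a sphere with 10 slices, the last slice lands at s = 1.0, so the right edge of the texture meets its left edge and the map wraps around seamlessly.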

Next, update Planet.h by adding the following to the interface:

#import <GLKit/GLKit.h>

Also add the following:

GLKTextureInfo *m_TextureInfo;
GLfloat *m_TexCoordsData;

Copy over the loadTexture() method from the first example to the planet object, and modify the header as needed. Feel free to remove the mipmap support if you like, but there's no harm in leaving it in; it's just not essential for this exercise.

For an earth texture, note that the image will wrap around the entire sphere model, so not just any image will do; it should resemble Figure 5-21. The one I use for this exercise is available from the Apress website, or you might want to check NASA first.


Figure 5-21. Textures typically fill out the entire frame, edge to edge. Planets use a Mercator projection (a cylindrical map).

When you've found a suitable image, add it to your project and hand it off to the planet object when allocated back in your solar system's controller. Because you don't need a texture for the sun, you can just pass a nil pointer. And of course we'll need to update the execute() method, as shown in Listing 5-7.

Listing 5-7. Ready to handle the new texture



-(bool)execute
{
        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);

        if(m_TexCoordsData!=nil)
        {
                glEnable(GL_TEXTURE_2D);
                glEnableClientState(GL_TEXTURE_COORD_ARRAY);

                if(m_TextureInfo!=nil)
                        glBindTexture(GL_TEXTURE_2D, m_TextureInfo.name);

                glTexCoordPointer(2, GL_FLOAT, 0, m_TexCoordsData);
        }

        glVertexPointer(3, GL_FLOAT, 0, m_VertexData);
        glNormalPointer(GL_FLOAT, 0, m_NormalData);

        glColorPointer(4, GL_UNSIGNED_BYTE, 0, m_ColorData);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, (m_Slices+1)*2*(m_Stacks-1)+2);

        return true;
}

The main difference here is the addition of code to enable texturing, to set the current texture, and to hand off the pointer to OpenGL.
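As a sanity check on that glDrawArrays() count: the generator allocates (slices*2+2) vertices per stack, while the draw call consumes (slices+1)*2*(stacks-1)+2 of them, so the draw never overruns the allocation. A quick sketch of the two counts:

```c
/* Vertices allocated by the sphere generator in Listing 5-6. */
static int allocatedVertices(int slices, int stacks)
{
    return (slices * 2 + 2) * stacks;
}

/* Vertices consumed by the glDrawArrays() call in Listing 5-7. */
static int drawnVertices(int slices, int stacks)
{
    return (slices + 1) * 2 * (stacks - 1) + 2;
}
```

For a 10×10 sphere, that is 200 vertices drawn out of 220 allocated.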

Compile and run, and ideally you'll see something like Figure 5-22.


Figure 5-22. Sun and Earth

If you examine the actual artwork used for this exercise, you'll notice that it is fairly bright but low in contrast. The main reason is that the real oceans are actually quite dark and just did not look right under the lighting conditions.


Summary

This chapter served as an introduction to textures and their uses. It covered basic texture theory, how texture coordinates are expressed, how mipmaps can be used for greater fidelity, and how textures can be filtered to smooth them out. The solar-system model was updated so that the earth now really looks like the earth, using a texture map. In the next chapter, we'll continue with textures, putting to use the iPhone's multiple texture units, along with blending techniques.
