In this chapter you will be introduced to the classes that combine to form the foundation of the game engine. Then, you’ll create the sprite classes that draw and control the game characters on the screen.
Beginning with this chapter, you’ll be dealing with a lot of code. Instead of writing the code from scratch, you can follow along with the completed code by downloading it from the book’s website, www.peachpit.com/iOSGames. While examining the code, it will be useful to identify the key sections and to understand their purpose.
You learned a bit about controllers in the previous chapter. In Raiders, controllers are classes that manage the game flow, and are responsible for rendering scenes and managing sprite objects.
Open Raiders.xcodeproj in Xcode 4 and look at the files pane to the left. You will notice a lot of new files and what appear to be folders (Figure 4.1).
Note
From this point forward, files will be described by their class names, unless otherwise stated. You can assume that all files have an .m and an .h file.
Now, let’s look at the new files in increasing order of complexity.
AbstractSceneController is the abstract class for the scenes. It is a very simple class with only two external methods and one private method.

playScene is implemented by its child classes, and each implementation is particular to its scene, because it includes the rendering code for that scene.

addSprite is used internally by the class and adds a sprite to the list of sprites in the scene. The list of sprites is used in updateScene.

updateScene loops through the list of sprites in the current scene and creates the transformations required to render them. You'll learn more about updateScene in the "Drawing Sprites" section in this chapter.
In this project, the GameController class has a property called currentScene, of type AbstractSceneController, that holds the active scene; and it has a method called playCurrentScene that calls the playScene method of the currentScene.
You may find something in GameController that you have not seen before. At the top of the .m file is the following:
@interface GameController (private)
- (void)initScene;
@end
This code is called a category in Objective-C. It adds methods to a class without the need to subclass. We could have declared initScene in the .h file, but then it would be available to other code using the GameController class. Declaring it in a category signals that this method is private (internal) to the GameController class and shouldn't be called from outside the class.
In this specific case, initScene is called when the GameController class is first instantiated, and it assigns the MenuSceneController to the currentScene property, which allows the menu scene to appear automatically when the app is launched.
The other line of code that might catch your eye is the following:
SYNTHESIZE_SINGLETON_FOR_CLASS(GameController);
This code uses a #define macro that is defined in SynthesizeSingleton.h (thanks to Matt Gallagher of www.CocoaWithLove.com). The macro takes the class name as a parameter and generates the singleton construction code for us. As a result, you don't have to write the same boilerplate code in every class that needs to be a singleton. Use this approach in other projects to save a lot of time "reinventing the wheel."
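The macro's output is Objective-C, but the idea it saves you from retyping can be sketched in plain C. Everything below (the GameController struct, the current_scene_id field, the shared_game_controller function) is a hypothetical stand-in for illustration, not the actual code the macro generates:

```c
#include <stddef.h>

/* Hypothetical C analogue of singleton boilerplate: a lazily
   initialized, process-wide single instance with one accessor. */
typedef struct {
    int current_scene_id;   /* placeholder state */
} GameController;

GameController *shared_game_controller(void) {
    static GameController instance;    /* storage created once, lives for the program */
    static GameController *shared = NULL;
    if (shared == NULL) {
        shared = &instance;            /* first call "constructs" the singleton */
        shared->current_scene_id = 0;
    }
    return shared;
}
```

Every caller gets the same object back, which is the whole point: one game controller directs the entire game.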
The last method is updateWorld, in which all of the objects in the "world" are updated.
The biggest change from the Chapter 3 code is in ViewController. Previously, ViewController handled all the setup, transformation, and rendering code. In this chapter, other classes handle that code, and the GameController class takes over the rendering tasks. A new variable, sharedGameController, of type GameController, is a link to the game controller; and if you look in glkView:drawInRect:, you'll see that the code has been replaced with a single call:

[sharedGameController playCurrentScene];

In update, you'll see a single call to [sharedGameController updateWorld].
This is the power of abstract classes. In the earlier version of the project, these methods contained a lot of transformation and rendering code; now that code is abstracted into other classes. In the current project, whichever scene is current gets rendered, which in this case is the Menu scene.
This approach makes reading and maintaining code much easier because each distinct class is responsible for rendering its own scene. It’s therefore easier to go to the appropriate class to understand the flow of the app because you know the code relating to the menu will be in the menu scene.
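The scene-controller idea can be sketched in C, with a function pointer standing in for the playScene method that each subclass overrides. The names here (Scene, play_current_scene, and so on) are hypothetical stand-ins for the project's Objective-C classes, not code from the book:

```c
#include <string.h>

/* Hypothetical sketch: each "scene controller" supplies its own
   playScene implementation; the game controller just invokes
   whatever scene is current, without knowing its concrete type. */
typedef struct Scene {
    const char *name;
    void (*play_scene)(struct Scene *self);  /* overridden per scene */
} Scene;

static const char *last_played = NULL;

static void menu_play(Scene *self) { last_played = self->name; }

static Scene menu_scene = { "menu", menu_play };
static Scene *current_scene = &menu_scene;   /* set at startup, like initScene */

void play_current_scene(void) {
    current_scene->play_scene(current_scene);  /* dynamic dispatch */
}
```

Swapping scenes is then just reassigning current_scene; the rendering call site never changes.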
OpenGL ES is a powerful graphics rendering API for creating 2D and 3D graphics. Because all objects in OpenGL ES are made up of triangles, you can create any object by combining triangles of various sizes. For example, a square or rectangle consists of two triangles combined.
Note
This next section is going to get quite intense, so it is recommended that you find a quiet place to read. Concentration will be required.
A triangle is made up of individual points, each called a vertex, which is a geometric point in space. In Raiders, a vertex will be an x,y point; but, in reality, each vertex in the game is a point in 3D space that always has a z of 0. Because the game is in 2D, the rendering is performed on a single plane in 3D space.
To create a square from two triangles, you need only four vertices, as shown in Figure 4.2. OpenGL is smart enough to close the triangles from vertices 1-3 and 4-2. Once you start adding triangles, you can see the power of this. To create a house image from the square, you would add another triangle but create only one more vertex (Figure 4.3).
Figure 4.2 Square made from two triangles with four vertices
Figure 4.3 Adding another triangle creates only one more vertex.
A 2D game like Raiders only requires you to create quads, which are two triangles combined to create a square.
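The vertex-sharing arithmetic can be made concrete in C. The house geometry below is illustrative only (the coordinates are assumptions, not taken from the book's figures): three triangles are described by nine indices, yet only five distinct vertices are stored.

```c
/* A "house": a quad plus a roof triangle. Three triangles (nine
   index slots) reuse just five distinct x,y vertices. */
static const float house_vertices[] = {
    0.0f, 0.0f,   /* 0: bottom-left  */
    1.0f, 0.0f,   /* 1: bottom-right */
    0.0f, 1.0f,   /* 2: top-left     */
    1.0f, 1.0f,   /* 3: top-right    */
    0.5f, 1.5f,   /* 4: roof peak    */
};
static const unsigned short house_indices[] = {
    0, 2, 1,   /* first half of the square  */
    1, 2, 3,   /* second half of the square */
    2, 4, 3,   /* roof triangle             */
};

int house_vertex_count(void)   { return sizeof(house_vertices) / (2 * sizeof(float)); }
int house_triangle_count(void) { return sizeof(house_indices) / (3 * sizeof(unsigned short)); }
```

Without index sharing, three standalone triangles would need nine vertices; indexing cuts that to five.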
By itself, a quad is just a collection of points of a certain size. Once the quad is created, a texture is needed for it. A texture is a graphic that is stretched over the quad's vertices, much like a dust jacket covers a book or a tea cosy covers a teapot. This technique is called texture mapping.
OpenGL ES imposes a limitation on all textures. A texture's width and height must each be a power of two: 2, 4, 8, 16, 32, 64, 128, 256, 512, or 1024. Because iOS devices older than the iPad, iPad 2, and iPhone 4 cannot handle textures larger than 1024 × 1024, you should treat that as the maximum texture size for an iOS game. The image doesn't need to be square, however, so it could be, for example, 64 × 32 pixels in size.
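A quick C check makes the rule concrete. The function names are my own (not from the project); the bit trick works because a power of two has exactly one bit set:

```c
/* A power of two has exactly one bit set, so n & (n - 1) is zero. */
int is_power_of_two(unsigned int n) {
    return n != 0 && (n & (n - 1)) == 0;
}

/* Valid for the devices discussed above: each dimension must be a
   power of two and no larger than 1024 pixels. */
int is_valid_texture_size(unsigned int width, unsigned int height) {
    return is_power_of_two(width) && is_power_of_two(height)
        && width <= 1024 && height <= 1024;
}
```

So a 64 × 32 image passes, but a 640 × 480 screenshot or a 2048 × 2048 atlas would be rejected.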
In the file Sprite.m, you will see the following arrays:
static const GLfloat vertices[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};

static const GLfloat texCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};

static const GLushort cubeIndices[] = {
    0, 2, 1,
    1, 2, 3,
};
The first array defines the vertices of the quad. The second array defines the texture mapping coordinates.
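You can see how the index array drives the geometry by expanding it by hand. The arrays below are the ones from Sprite.m; the helper function is mine, added for illustration:

```c
/* The vertices and cubeIndices arrays from Sprite.m (GLfloat and
   GLushort replaced with their underlying C types). */
static const float vertices[] = {
    0.0f, 0.0f,   /* 0 */
    1.0f, 0.0f,   /* 1 */
    0.0f, 1.0f,   /* 2 */
    1.0f, 1.0f,   /* 3 */
};
static const unsigned short cubeIndices[] = { 0, 2, 1, 1, 2, 3 };

/* Copies the x,y pair referenced by index slot i into out[0], out[1].
   Slots 0-2 are the first triangle, slots 3-5 the second; the two
   triangles share vertices 1 and 2 along the quad's diagonal. */
void corner_for_slot(int i, float out[2]) {
    unsigned short v = cubeIndices[i];
    out[0] = vertices[2 * v];
    out[1] = vertices[2 * v + 1];
}
```

Walking slots 0 through 5 visits six corners but touches only the four stored vertices.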
A texture is always mapped using a quad with the range of 0-1 in height and width (Figure 4.4), which is called a texture coordinate system. The texture coordinate system is usually referred to as the (s,t) system, while the geometry system is referred to as the (u,v) system. So when mapping a texture to a quad, it is said that you are mapping (s,t) coordinates onto the (u,v) coordinates, which is sometimes called UV mapping in 3D applications.
Figure 4.4 Coordinates for a texture to be mapped
Using Figure 4.2 as the reference for the (u,v) coordinates and Figure 4.4 for the (s,t) coordinates, vertex 1 would map to coordinates of (0,0), vertex 2 of (0,1), and so on.
You don't need to use the full 0-1 range to map the texture. For example, you could map half the texture by using the (s,t) coordinates (0,0), (1,0), (0, 0.5), (1, 0.5).
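Partial mapping generalizes to any pixel rectangle inside a texture. The helper below is a hypothetical sketch (its name and signature are mine, and it assumes the rectangle's origin is measured from the texture's t = 0 edge): it converts a pixel rectangle inside a w-by-h texture into 0-1 (s,t) values.

```c
/* Maps a pixel rectangle (x, y, rect_w, rect_h) inside a tex_w-by-tex_h
   texture into normalized (s,t) values: out = {s_min, t_min, s_max, t_max}. */
void tex_coords_for_rect(float x, float y, float rect_w, float rect_h,
                         float tex_w, float tex_h, float out[4]) {
    out[0] = x / tex_w;               /* s min */
    out[1] = y / tex_h;               /* t min */
    out[2] = (x + rect_w) / tex_w;    /* s max */
    out[3] = (y + rect_h) / tex_h;    /* t max */
}
```

For a 64 × 64 texture, asking for the 64 × 32 lower region reproduces the half-texture example above: s spans 0-1 and t spans 0-0.5. This is also the basis of sprite sheets, where many images share one texture.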
In most beginning examples of OpenGL ES, it is usual to have separate arrays for vertex points and texture vertex points. In fact, Raiders does just that, as you can see in Figure 4.4.
This is a reasonable approach and is relatively easy to read and maintain, which is all that is necessary for simple quads with textures.
For the sake of full understanding, however, you should know that for more complicated models, Apple recommends using interleaved vertex data, which stores the per-vertex attributes in a single array of structs rather than in a series of separate arrays. All the information for vertex 1 is stored together, then the information for vertex 2, and so on (Figure 4.5). This layout gives each vertex memory locality and performs better than separate arrays (Figure 4.6).
Figure 4.5 Interleaved vertex data
Figure 4.6 Separate vertex arrays
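In C terms, interleaving looks like the struct below. The struct and field names are hypothetical (Raiders itself uses the separate arrays shown earlier); the point is that the stride between vertices becomes the struct size and each attribute sits at a fixed byte offset:

```c
#include <stddef.h>

/* Hypothetical interleaved layout: position and texture coordinates
   for one vertex live side by side in memory. */
typedef struct {
    float position[2];   /* x, y */
    float texCoord[2];   /* s, t */
} Vertex;

/* With interleaving, the stride between consecutive vertices is the
   struct size, and each attribute starts at a fixed offset within it. */
size_t vertex_stride(void)    { return sizeof(Vertex); }
size_t tex_coord_offset(void) { return offsetof(Vertex, texCoord); }
```

These are exactly the stride and offset values you would hand to glVertexAttribPointer if you adopted this layout, instead of the stride of 0 used with separate arrays.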
With that background in place, it's time to look at Raiders' Sprite class. The Sprite class is instantiated with the method initWithImageNamed:, which takes an NSString parameter: the name of an image file to be used as a texture.

Look at the code in initWithImageNamed: (code first, then description):
textureInfo = [GLKTextureLoader textureWithCGImage: [UIImage imageNamed:imageName].CGImage options:nil error:nil];
name = textureInfo.name;
self.width = textureInfo.width;
self.height = textureInfo.height;
Prior to the introduction of GLKit, it took many, many lines of code to load images into a texture buffer. As you can see above, you can now do this with only one line of iOS 5 code.
The next line assigns the texture’s OpenGL-friendly name to a property for later use. The width and height of the image are also stored for later use.
[self initVertexInfo];
[self initEffect];
The next two lines allocate and assign the vertex information into memory, and initialize a GLKit effect that will create the appropriate shaders.
initVertexInfo sets up the buffers for holding the quad and texture vertices.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
These two lines of code are needed to allow the textures to be mapped with a transparent background.
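That blend function has a simple per-channel formula: result = source × sourceAlpha + destination × (1 − sourceAlpha), with all values in the 0-1 range. A one-line C sketch (the function name is mine, for illustration):

```c
/* GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blending for one color channel,
   with src, dst, and src_alpha all in the 0-1 range. */
float blend_channel(float src, float dst, float src_alpha) {
    return src * src_alpha + dst * (1.0f - src_alpha);
}
```

A fully transparent texel (alpha 0) leaves the background untouched, and a fully opaque one (alpha 1) replaces it, which is why sprites with transparent backgrounds render cleanly over the scene.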
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, vertices);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, texCoords);
glBindVertexArrayOES(0);
These lines are the meat of this method. They set up and assign the quad vertices and the texture coordinates into memory buffers. The GLKit constants GLKVertexAttribPosition and GLKVertexAttribTexCoord0 identify which shader attributes receive the vertex positions and texture coordinates, whether the shader is hand coded or one that GLKit creates automatically.
As you've learned, in "vanilla" OpenGL ES 2.0 code, you must write shaders that calculate rendering effects. Shaders are written in their own language, and while they can be complicated to write and understand, they aren't really necessary for a simple 2D game. Luckily, Apple has abstracted shaders away in GLKit by introducing a set of effects classes that handle shading, lighting, and texturing.
effect = [[GLKBaseEffect alloc] init];
effect.texture2d0.name = name;
effect.texture2d0.enabled = GL_TRUE;
effect.texture2d0.target = GLKTextureTarget2D;
effect.light0.enabled = GL_FALSE;
The effect usage in Raiders is minimal. Because Raiders has no lighting or shading and uses only 2D textures, the code is fairly simple. The texture name that was assigned in initWithImageNamed: is assigned to the effect, and the lighting is switched off. You can apply different lighting effects to the world, such as diffuse or spot lighting. This isn't needed for our 2D game, but in a 3D game, lighting can give a sense of depth and extra realism.
updateTransforms creates the projection matrix, which transforms an object's coordinates from vector space to screen space. Raiders uses an orthographic projection, which has no vanishing point or perspective and keeps an object's parallel lines at a constant distance. Because no perspective is applied in this projection, objects can't get distorted.
GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0.0f, 320, 0.0f, 480, 0.0f, 1.0f);
effect.transform.projectionMatrix = projectionMatrix;
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeScale(self.width, self.height, -1.0f);
modelViewMatrix = GLKMatrix4Multiply(transformation, modelViewMatrix);
effect.transform.modelviewMatrix = modelViewMatrix;
The first line sets up the projection matrix to the width and height of the screen. Then the matrix is assigned to the effect.
Another matrix is created to act as the transformation matrix for the sprite. It is first scaled to the size of the image, then multiplied by the translation matrix that is created when the sprite needs to be positioned at specific screen coordinates (explained in the next section). This new matrix is then assigned to the effect.
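To see what the orthographic projection actually does to a point, here is a simplified sketch in C. It is not the GLKMatrix4MakeOrtho implementation; it just applies the same linear mapping for x and y (ignoring z), assuming the 0-320 and 0-480 bounds used above:

```c
/* Maps a point with x in [0,320] and y in [0,480] into OpenGL's
   normalized device range of -1..1 on each axis, as the orthographic
   projection above does (z omitted for simplicity). */
void ortho_project(float x, float y, float out[2]) {
    const float left = 0.0f, right = 320.0f, bottom = 0.0f, top = 480.0f;
    out[0] = 2.0f * (x - left) / (right - left) - 1.0f;
    out[1] = 2.0f * (y - bottom) / (top - bottom) - 1.0f;
}
```

The center of the screen, (160, 240), lands at (0, 0), and the corners land at (±1, ±1), which is exactly the coordinate range OpenGL expects after projection.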
drawAtPosition allows the sprite to be rendered at a specific point. The position parameter passed in is stored in a property for later use, and a Boolean variable called dirtyBit is set to YES. Finally, the draw method is called.
The draw method does the actual sprite rendering.
if (dirtyBit) {
    transformation = GLKMatrix4MakeTranslation(position.x, 480.0 - position.y - self.height, 0.0f);
}
First, the method checks whether the dirtyBit is set. If so, a translation matrix is created based on the position at which the sprite should be drawn. Because OpenGL ES has coordinates opposite to screen coordinates, the y coordinate must be subtracted from the height of the viewport. The transformation matrix is then used by the update method from the GLKView.
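The y-axis flip in that translation is easy to verify in isolation. The helper below is mine (not from the project) and assumes the 480-point-tall viewport used above:

```c
/* Converts a y coordinate measured from the top-left (screen space)
   into OpenGL's bottom-left system, accounting for the sprite's
   height so the sprite's top edge lands where the caller asked. */
float gl_y_for_screen_y(float screen_y, float sprite_height,
                        float viewport_height) {
    return viewport_height - screen_y - sprite_height;
}
```

A 32-point-tall sprite asked to draw at screen y = 0 (the top) ends up at OpenGL y = 448, which is 32 points below the top of a 480-point viewport, exactly where its bottom-left origin must sit.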
[effect prepareToDraw];
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, cubeIndices);
dirtyBit = NO;
This renders the quad and its texture on the screen, using the cubeIndices array of vertex indices defined earlier. The call to prepareToDraw hands the effect's state to OpenGL, and glDrawElements then renders everything that was previously set up. Finally, the dirtyBit is cleared so the translation matrix isn't rebuilt until the position changes.
Sprites can now be drawn onto the screen, so all you need is a scene in which to do so. Most games start with a menu screen, and Raiders will be no different. The last class to explore is MenuSceneController.
MenuSceneController is the first scene that the player will see after opening the game. At this stage, we will include just a background sprite and a sprite to act as a play button.
At the moment the play button doesn’t do anything. We’ll save that for the next chapter.
You have learned a lot of new techniques, code, and terminology in this chapter. This will serve as core knowledge to be built on in upcoming chapters. If you didn’t quite grasp any of the concepts, you might want to review the chapter, along with the project source code.
In this chapter, you have seen how the game controller is the game’s director, directing resources and the flow of scenes. You have learned about points, vertices, textures, (u,v) and (s,t) mapping, interleaved vertex data, and how to use all of these to render sprites to the screen.
In the next chapter, you will create the game character classes, so the game will respond to touches, and you’ll learn how to move between scenes.