7. Creating Games with WebGL and Three.js

In previous chapters, we took great pains to learn the hard way first before diving into libraries and frameworks that make our coding life simpler. WebGL is a beast. Based on OpenGL ES 2.0, which itself is a simplification of the much older Open GL API, it’s still rather large. Although it stands to reason that one might use Canvas 2D or SVG without the benefit of a framework, the same is not the case with WebGL. There are many concerns to be taken into account, including lighting, texturing, depth of field, particle systems, and collision detection and response, that make operating without a framework much like walking a tight rope without a net. The framework we will be using extensively in this chapter is Three.js. Three.js is a 3D graphics library for modern browsers that, as mentioned before, can render to Canvas, WebGL, and SVG. It supports all that is possible in WebGL while allowing you to use the same code for all renderers, with some exceptions. The compatibility layer won’t excuse you from doing extensive testing, but at the very least it gives you a possible fallback option if the client’s computer isn’t exactly the latest and greatest. Three.js abstracts many of the pointy edges away—for example, in dealing with materials and shaders. It has built-in helpers for some of the more common 3D geographic shapes such as spheres, cubes, and cylinders, and a full-featured particle system, texture mapping, and basic collision detection. In this chapter, we will endeavor to stay at a high, general level in some cases but dive into the low-level details in others—whatever seems appropriate for the task at hand. Let’s begin the chapter by discussing OpenGL ES, the technology behind WebGL.

OpenGL ES, or OpenGL for Embedded Systems, is a specification for 3D graphics APIs running on devices such as mobile phones, tablets, and video game consoles. Most mobile device platforms (Android, iOS, BlackBerry/QNX, webOS) support some form of the spec. Video game implementers include Nintendo (Nintendo 3DS), Sony (PlayStation 3), and OpenPandora (Pandora). Although the names are similar and versions of OpenGL ES track to OpenGL, OpenGL ES is not OpenGL; it is a subset of OpenGL. Some key differences are the removal of glBegin/glEnd and glVertex* for drawing primitives and the deprecation of display lists in favor of vertex buffers. We won’t be using vertex buffers directly because Three.js abstracts them away, but it is important to be aware of the differences.
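
To make the vertex-buffer difference concrete, here is a minimal sketch of how raw WebGL expects geometry to be supplied. The variable names and the triangle data are illustrative, and gl is assumed to be a WebGL rendering context already obtained from a canvas element.

// Upload a triangle's vertex positions to the GPU once; no glBegin/glEnd equivalent exists in WebGL.
var triangleVertices = new Float32Array([
     0.0,  1.0, 0.0,   // top
    -1.0, -1.0, 0.0,   // bottom left
     1.0, -1.0, 0.0    // bottom right
]);

var vertexBuffer = gl.createBuffer();           // allocate a buffer object on the GPU
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);   // make it the active array buffer
gl.bufferData(gl.ARRAY_BUFFER, triangleVertices, gl.STATIC_DRAW);   // copy the vertex data into it
// Drawing later happens with gl.drawArrays(gl.TRIANGLES, 0, 3) once a shader program is bound.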

Moving to Three Dimensions

When we were drawing on the 2D Canvas context, we didn’t have to worry about an object’s depth or its position, either near or far from us. We just had a rectangular viewport representing the things we could see. For WebGL, we do have to consider the depth, and this unsurprisingly makes our transformation calculations a bit more complicated. Don’t worry—I’m not going into matrix math again. At least not yet. If you’re a bit hazy on all this, take a moment and turn back to Chapter 5, “Creating Games with the Canvas Tag.” We can control depth with the z-axis, as shown in Figure 7-1.

Figure 7-1. XYZ-axes

Image

The addition of the third axis gives us a new construct for representing a point in space: the vertex. The following snippet shows the code needed to create a vertex in Three.js:

new THREE.Vertex(new THREE.Vector3(0, 0, 0));

Draw two vertices and you have a line. Draw three vertices and you can have a triangle. The options are limitless. Right now we can draw a bunch of vertices on the screen, but we won’t be able to see them. That’s because they don’t have any relationship between them; at this point, they are just a bunch of random points in space. A mesh is a collection of vertices that describe an object. These vertices are arranged into faces that are composed of three or more vertices. To create a triangle, we need to perform the following tasks:

• Create a Geometry object to store the vertices.

• Add faces to tell the vertices how to arrange themselves.

• Create a mesh with the Geometry object and a material.

Don’t worry about how to define the material just yet. We’ll be covering materials a bit later in the chapter. We can see sample output in Figure 7-2 and the corresponding code to produce the mesh in Listing 7-1.

Figure 7-2. Triangle mesh

Image

Listing 7-1. Creating a Triangle


geometry = new THREE.Geometry();
// three vertices of the triangle
geometry.vertices.push(new THREE.Vertex(new THREE.Vector3(0, 10, 0)));
geometry.vertices.push(new THREE.Vertex(new THREE.Vector3(-10, -10, 0)));
geometry.vertices.push(new THREE.Vertex(new THREE.Vector3(10, -10, 0)));
// one face referencing the vertices above by index
geometry.faces.push(new THREE.Face3(0, 1, 2));
// geoMaterial is a material defined elsewhere; materials are covered later in the chapter
var triangle = new THREE.Mesh(geometry, geoMaterial);


Giving Your Objects Some Swagger with Materials and Lighting

We now have a context of how the different vertices relate to each other, but we still won’t be able to see them. Why? We haven’t told the world what they should look like. We do that with materials and lighting.

Understanding Lighting

Besides keeping humanity from freezing to death, lighting in Three.js isn’t that dissimilar to what the Sun does in the real world. Three.js has three types of lighting objects:

AmbientLight—Ambient lighting is the average of all the light generated from all light sources in an area. Objects rendered with only ambient lighting will appear two-dimensional because all vertices receive the same amount of light. One way to look at it is to consider ambient lighting to be like the thermostat on your air conditioner/furnace. Turning it on, in general, doesn’t make individual rooms cooler or warmer; it brings them all to approximately the same temperature.

PointLight—Point lighting is attenuated light coming from a specific location in world space. Light is emitted in all directions from the point and does make objects look more 3D. As an object moves farther from the light source, the amount of light that reaches it gets less and less (attenuation, or colloquially “dropoff”). Point lights can also cause or contribute to specular reflections.

DirectionalLight—Directional lighting can be viewed as similar to shining a bunch of lamps on a subject from the same direction. Whereas point lights will attenuate over distance, directional lights deliver the same intensity as they stretch toward infinity or the specified maximum distance.

Listing 7-2 shows examples of the three light types, starting with AmbientLight, which has a sole parameter. intensity corresponds to how bright the light rays should be, and distance refers to the longest light ray before total falloff. castShadow is a Boolean that determines whether or not objects illuminated by the DirectionalLight will cast shadows. Although not listed in the constructors, you can set the position on PointLight and DirectionalLight. Parameters listed in brackets are optional.

Listing 7-2. Lighting Examples


new THREE.AmbientLight(hexColor);
new THREE.PointLight(hexColor, [intensity], [distance]);
new THREE.DirectionalLight(hexColor, [intensity], [distance], [castShadow]);
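
As a usage sketch, the lights might be created and positioned as shown below. The intensity, distance, and position values are arbitrary, and adding the lights to the scene is assumed to go through the scene’s addLight method used by the Three.js releases this chapter targets.

// One dim ambient light plus a white point light placed above and in front of the origin.
var ambient = new THREE.AmbientLight(0x404040);          // soft gray base illumination
var point = new THREE.PointLight(0xFFFFFF, 1.5, 200);    // bright white, attenuating over 200 units
point.position.set(0, 50, 50);                           // world-space position of the point light

scene.addLight(ambient);
scene.addLight(point);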


Using Materials and Shaders

Materials make our objects less ordinary by giving them colors and textures. We are covering materials after going over lighting because the lighting of a scene greatly affects how a material appears to the user. Many atmospheric components go into determining the final color of a vertex or face, including, but not limited to, the following:

• Lighting (ambient, point, and directional)

• Shadows

• Shaders

• The blend mode

• Occlusion

We’ve already covered lighting, and shadows don’t need much description, so let’s press on with shaders. A shader is a small program that calculates rendering effects, generally on the Graphics Processing Unit (GPU), although the work can also be done in software on the CPU. There are generally three types of shaders:

Vertex shaders—Vertex shaders are run for each vertex in a mesh. They can alter properties such as the position, color, normals, lighting, and texture coordinates.

Pixel or fragment shaders—Fragment shaders calculate the color and other properties for each pixel in a mesh when rendered onscreen.

Geometry shaders—Geometry shaders are used to add or remove vertices on a mesh. One use is to add LOD (level of detail) effects to a scene—that is, to increase or decrease the number of vertices in a mesh as an object gets closer to or farther from the camera, respectively.

WebGL and Three.js focus programmatically on the vertex and fragment shaders. The LOD aspect of geometry shaders is handled in Three.js by a discrete object, but it won’t be covered in any detail in this chapter. Before we get to the shading algorithms, there’s one important concept you need to learn about: the normal. A normal is a vector that is perpendicular to a surface or vertex. Normals are used heavily in lighting calculations, which in turn affect how materials appear, and they can also be used to create greater detail in models without increasing the polygon count. Several shading algorithms are either supported natively in Three.js or easily implementable. Let’s look at them briefly, from easy to difficult.
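
Before moving on to those algorithms, here is a quick illustration of where a face normal comes from. The sketch below is plain JavaScript rather than a Three.js API call; it computes the (unnormalized) normal of the triangle from Listing 7-1 by taking the cross product of two of its edges.

// Compute the normal of a triangle defined by points a, b, and c, each given as {x, y, z}.
// The normal is the cross product of the edges a->b and a->c.
function faceNormal(a, b, c) {
        var ux = b.x - a.x, uy = b.y - a.y, uz = b.z - a.z;   // edge a->b
        var vx = c.x - a.x, vy = c.y - a.y, vz = c.z - a.z;   // edge a->c
        return {
                x: uy * vz - uz * vy,
                y: uz * vx - ux * vz,
                z: ux * vy - uy * vx
        };
}

// For the triangle in Listing 7-1, the result points straight down the z-axis.
var n = faceNormal({x: 0, y: 10, z: 0}, {x: -10, y: -10, z: 0}, {x: 10, y: -10, z: 0});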

Flat Shading

Flat shading shades an object based on each polygon normal in a mesh. For very regular shapes such as rectangles, the calculations won’t be that different from some of the most advanced algorithms we’ll discuss. The problem with flat shading is that any model with a reasonable number of polygons will look blocky, and users will be able to easily see where one polygon ends and another begins.

Lambertian Shading

Lambertian shading, put simply, reflects light equally in all directions. This causes models to look the same irrespective of the viewer’s point of view. Unfinished wood, concrete, and other matte surfaces have this characteristic. Because of the lack of dynamism, Lambertian shading, like flat shading, is very easy on the GPU.

Gouraud Shading

Gouraud shading, at every vertex, applies a normal that is the average of all the surface normals for the polygons that the vertex touches. This gives us smoother illumination than flat shading at a low cost to the CPU/GPU. Because the applied normal is an average of several surface normals, it is only an estimate, and some amount of error is introduced. This can be especially evident in anything that causes specular highlights (bright spots on reflective objects). Figure 7-3 shows an infographic of how a plane normal is calculated.

Figure 7-3. Gouraud shading normal calculation

Image

Phong Shading

Phong shading is the most costly of the four. Also called per-pixel shading, it takes the vertex normals and calculates the intermediate normal values for each pixel. This produces a lighting model that improves greatly on the results from Gouraud shading but is costly to the processor.

Somewhat related, but technically not a shading model per se, is Phong reflection. Phong reflection considers that very few objects are perfectly shiny or perfectly rough. As such, the surfaces on an object are a combination of the two properties (specular and diffuse reflection, respectively). The components for Phong reflection are as follows:

• An ambient color for the amount of light evenly distributed through the object

• A diffuse color that scatters light in all directions

• A specular color for the highlights, presumably caused by the light source

You might also specify a shininess or opacity. All of these work in tandem to produce the final material. For example, a totally unshiny material isn’t going to consider the specular values.
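
In Three.js, these components map fairly directly onto the parameters of MeshPhongMaterial. The sketch below is a minimal example; it assumes the ambient/specular/shininess parameter names used by the Three.js releases this chapter targets, and the color values are arbitrary.

// A shiny blue Phong material: ambient, diffuse (color), and specular terms plus a shininess exponent.
var shinyMaterial = new THREE.MeshPhongMaterial({
        ambient: 0x111111,    // small, even contribution from ambient light
        color: 0x2255FF,      // diffuse color scattered in all directions
        specular: 0xFFFFFF,   // color of the highlight
        shininess: 30         // higher values give a tighter, brighter highlight
});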

Creating Your First Three.js Scene

One of the first things I learned to draw in OpenGL was a snowman scene from some random online tutorial. Snowmen are great because no one hates them but the sun, and it is easy to draw something that looks reasonably close to the subject. It also gives us a chance to discuss some of the built-in 3D shapes.

Setting Up the View

Before we get started, we need to do a little housekeeping to set up our environment to draw the snowman. In our truncated init function, we instantiate a renderer and scene. As mentioned before, Three.js can render to several different environments. Changing the renderer to CanvasRenderer or SVGRenderer is how you would do that. We also need to set the dimensions of the renderer. Three.js makes a best-effort attempt to render the scene with the given renderer. Although many effects and materials can be rendered with all the renderers, some are specific to WebGL. Below the init function, we have a pair of functions that handle our animation and rendering. requestAnimationFrame attempts to refresh the drawing as close to the monitor’s refresh rate as possible while short-circuiting drawing if the window is not visible. You can see this code in Listing 7-3.

Listing 7-3. Setting Up the Environment


function init() {
        // ...
        // create a renderer and scene
        renderer = new THREE.WebGLRenderer();
        renderer.setSize(WIDTH, HEIGHT);
        // some code
}

function animate() {
        requestAnimationFrame(animate);
        render();
}



function render() {
        renderer.render(scene, camera);
}
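
For reference, a slightly fuller version of the elided setup might look like the following sketch. The WIDTH and HEIGHT values and the container element are assumptions carried over from the listings, and the camera parameters are explained later in this chapter in “Viewing the World.”

function init() {
        renderer = new THREE.WebGLRenderer();    // swap in CanvasRenderer or SVGRenderer for a fallback
        renderer.setSize(WIDTH, HEIGHT);
        document.getElementById("container").appendChild(renderer.domElement); // attach the drawing surface

        scene = new THREE.Scene();               // the scene graph that will hold our objects
        camera = new THREE.Camera(45, WIDTH / HEIGHT, 1, 10000); // explained in "Viewing the World"
        camera.position.z = 100;

        animate();                               // kick off the render loop
}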


In this chapter, you might notice the use of both “scene” and “scene graph.” The scene describes all the objects in the environment. The scene graph describes how the objects are arranged in relationship to each other. A scene graph is a collection of nodes arranged in a graph or tree structure, where each node may have child nodes. The essential point you need to know is that because Three.js maintains a scene graph, we only have to keep track of the objects we want to change. Otherwise, we add the node to the scene graph, call render, and don’t worry about it.

On the off-chance that a few of you have never seen a snowman or even snow, a snowman is shown in Figure 7-4. We can see that his body is more bulbous than humanoid and that he has sticks for arms and a carrot nose.

Looks pretty daunting, huh? That complete scene can be drawn with only three shapes: spheres, cylinders, and planes. We use those shapes in different sizes and with different transformations to create the snowman. A sphere, shown in Figure 7-5, is a perfectly round shape that is the result of taking a circle and spinning it around one of its diameters. Basketballs, baseballs, and the Earth are all spherical shapes.

Figure 7-4. Snowman

Image

Figure 7-5. Sphere

Image

Listing 7-4 shows the code needed to draw the snow parts of the snowman’s body. We start by instantiating a white material. Next, we draw three spheres, each successive one smaller than the one that preceded it. The first parameter in the Sphere constructor is the radius, followed by the number of steps to draw for the width and height. More steps give you a more round sphere but also add a lot more vertices to draw. Fewer steps yield an object that is quicker to draw but the lighting will not look as good. Lastly, we use the transformation functions on THREE.Mesh to move the spheres into place and then finish by adding them to the scene graph.

Listing 7-4. Drawing the Snowman’s Body


var topSegment, middleSegment, bottomSegment;
var whiteMaterial;

whiteMaterial = new THREE.MeshLambertMaterial({
        color:0xFFFFFF
});
bottomSegment = new THREE.Mesh(
        new THREE.Sphere(8, 16, 16), whiteMaterial
);
middleSegment = new THREE.Mesh(
        new THREE.Sphere(6, 16, 16), whiteMaterial
);
middleSegment.translateY(10);
topSegment = new THREE.Mesh(
        new THREE.Sphere(5, 16, 16), whiteMaterial
);
topSegment.translateY(19);

scene.addChild(topSegment);
scene.addChild(middleSegment);
scene.addChild(bottomSegment);


The second shape we will use in the scene is a cylinder, as shown in Figure 7-6. A cylinder is formed by taking a circle and extruding it along a straight line. “Extruding” is just a fancy way of saying “duplicate the object, move it a little bit, and then rinse and repeat.” Soup cans and some cups are cylindrical.

Figure 7-6. Cylinder

Image

Listing 7-5 shows the code to draw the stick-like arms of the snowman. We start by declaring a brownish material. Next, we create the arms. The parameter list starts with how many steps to use, followed by the starting and ending radii, and the length of the cylinder. We have to do a bit more work this time to move them into place.

Listing 7-5. Adding Arms


var arm, arm2, armMaterial;

armMaterial = new THREE.MeshLambertMaterial({
        color: 0x8B5A00
});

arm = new THREE.Mesh(
        new THREE.Cylinder(20, 0.3, 0.3, 10),
        armMaterial
);
arm2 = new THREE.Mesh(
        new THREE.Cylinder(20, 0.3, 0.3, 10),
        armMaterial
);

arm.rotation.x = 30;
arm.rotation.y = 10;
arm.translateX(8);
arm.translateZ(1);
arm.translateY(15);

arm2.rotation.x = -30;
arm2.rotation.y = 10;
arm2.translateX(-7);
arm2.translateZ(1);
arm2.translateY(15);

scene.addChild(arm);
scene.addChild(arm2);


A plane, shown in Figure 7-7, gives our scene a little bit of depth. Think of it as a giant sheet of paper or, better yet, as a big rug.

Figure 7-7. Plane

Image

Like the sphere and cylinder before it, the plane allows you to control how detailed the mesh is. After declaring the width and the height, you can set the respective step values. The code to draw a plane is shown in Listing 7-6.

Listing 7-6. Drawing a Plane


plane = new THREE.Mesh(
        new THREE.Plane(500,500, 20, 20),
        planeMaterial
);


Although we have three main types of shapes in the drawing, the snowman’s nose can be thought of as both a cylinder and a new shape type: a cone, as shown in Figure 7-8. Some graphics libraries will differentiate between the two, but the relation is similar to that of rectangles and squares: All cones are cylinders but not all cylinders are cones. The extrusion is the same, but the radius of each subsequent circle gets smaller and smaller until it reaches zero.

Figure 7-8. Cone

Image

You can see in Listing 7-7 that the only point of distinction between a cone and a cylinder is that one of the radii is close to zero.

Listing 7-7. Drawing a Nose


nose = new THREE.Mesh(
        new THREE.Cylinder(20, 0.8, 0.01, 3),
        noseMaterial
);


Viewing the World

If you tried to run the code we’ve written so far, you wouldn’t see anything on the page and might suspect that it is broken. The reason we can’t see anything is that we haven’t told Three.js where we will be located and what we will be looking at. Placing a Camera in the scene is how we view it. Based somewhat on the human eye, cameras have attributes that determine what can or cannot be seen. The Camera object signature is shown here:

var cam = new THREE.Camera( fov, aspect, near, far, [target - optional])

The most important parameter, and also the first, is fov, or the field of view (FOV). The FOV determines the amount of the world that can be seen at one time. It is often talked about in degrees. We can calculate it by putting a camera (or an eye, if you want to make it gruesome) on a tripod and pointing it out into the distance toward some objects. Looking through the viewfinder without moving the camera, note the rightmost object you can see and draw a line from your position to that point. Do the same for the leftmost object. Now draw a line between the two objects. We now have a triangle, and the angle at the camera’s corner is the FOV. An infographic of this is shown in Figure 7-9. Rest assured that you do not have to calculate this...ever. The computer does all the work for you.

Figure 7-9. Calculating the field of view (FOV)

Image

The next parameter is aspect, for the aspect ratio. You might recognize the term from the specs on your monitor or television. Aspect ratio is the ratio of the width of your viewport to its height. Most screens are somewhere in the area of 4:3 (standard definition) or 16:9 (widescreen). Aspect ratio works together with the FOV to determine how much of the world is cropped from view.

The next two parameters, near and far, represent the clipping planes for your world. In a scene with thousands of objects and textures being drawn at once, it would be taxing on the CPU and GPU to try to show everything. Even worse, it would be wasteful to draw the things you can’t even see. The near clipping plane is usually relatively close to the user, whereas the far clipping plane is somewhere off in the distance. As objects cross the far plane, they spontaneously appear or disappear. Some games use fog to make the appearance and disappearance of objects more realistic. target is an optional parameter that allows you to designate an object to look at. Figure 7-10 shows how the first four parameters combine to make the viewing frustum.

Figure 7-10. Viewing frustum

Image

But, wait a minute, why didn’t we have to deal with any of this when we were drawing on the 2D Canvas? The short answer is that we were; Canvas was doing it for us behind the scenes. A frustum for a 2D scene can be created by making a camera with the near and far values being the same, which is also the z value for all drawn objects. In effect, that camera is constrained to seeing a specific slice of the world.
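
Putting the four frustum parameters together, a typical camera for this chapter’s scenes could be created as in the sketch below. The 45-degree FOV, the clipping distances, and the position values are just reasonable defaults rather than values required by Three.js, and WIDTH and HEIGHT are assumed to match the renderer dimensions set earlier.

var FOV = 45;                    // vertical field of view, in degrees
var ASPECT = WIDTH / HEIGHT;     // matches the renderer dimensions
var NEAR = 0.1;                  // anything closer than this is clipped
var FAR = 10000;                 // anything farther than this is clipped

camera = new THREE.Camera(FOV, ASPECT, NEAR, FAR);
camera.position.y = 10;          // raise the camera slightly...
camera.position.z = 60;          // ...and pull it back so the snowman sits inside the frustum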

Loading 3D Models with Three.js

Having to create everything in code would get tedious very fast. You might be happy to know that Three.js supports loading 3D models in its own JSON format and has file exporter scripts for Autodesk 3ds Max and Blender.

Autodesk 3ds Max—or colloquially 3D Studio MAX or just MAX—is widely recognized as the industry standard for creating, animating, and rendering 3D models. It is widely used not only by artists for games but also in TV and film. Built in to the product is a scripting language called MAXScript, which can be used to build client-side plugins.

Blender (www.blender.org) is a cross-platform, free, and open-source advanced 3D modeling application. It can create complex effects supported in commercial 3D modeling software, such as UV unwrapping, texturing, bones and rigging, and particle system effects. It also bundles a nonlinear editor and a Python API for in-application scripting. Blender is managed by the non-profit Blender Foundation, chaired by Blender’s creator Ton Roosendaal and developed by the Blender community (www.blenderartists.org/forum/). There is even a community magazine (http://blenderart.org/).

Part of the outreach the Blender Foundation conducts includes the Blender Conference (held annually in Amsterdam), the Suzanne Awards (which are competitively awarded to animators), and the production of several short films. Commercially, Blender has been used in television commercials, History Channel shows, and in the pre-production of Spider-Man 2.

The export scripts for both applications are located in the utils/exporters directory of Three.js. Check the appropriate vendor websites to find out how to install plugins, paying close attention to the application version the plugin states it requires. If you use a newer version than the one required, the plugin might not work correctly.

Listing 7-8 shows the code to asynchronously load a JSON model and add it to our scene graph. As you can see in Listing 7-9, the JSON model file can include materials that we can modify or use. The last line of createScene1 may look a bit daunting at first, but let’s break it down. Skipping the first two self-explanatory parameters, we have the global scale of the object, its x, y, and z positions, its rotation on the x, y, and z axes, followed by the material to use.

Listing 7-8. Loading a Model File


function drawCube() {
        var loader = new THREE.JSONLoader();
        loader.load( {model: "cube.js", callback: createScene1 });
}

function createScene1(geometry) {
        geometry.materials[0][0].shading = THREE.FlatShading;
        mesh = THREE.SceneUtils.addMesh( scene, geometry,
            250, 400, 0, 0, 0, 0, 0, geometry.materials[0] );
}


Truncated to the essence of what comprises the model file, Listing 7-9 gives you the gist of what is output from the Blender exporter. You can see the areas for the vertices, normals, materials, and faces. We’ll hold off on explaining some of the other areas of the file until later in this chapter.

Listing 7-9. Truncated Cube.js File


var model = {

    "version" : 2,
    "scale" : 1.00,
    "materials": [        {
        "DbgColor" : 15658734,
        "DbgIndex" : 0,
        "DbgName" : "Material",
        "colorAmbient" : [0.0, 0.0, 0.0],
        "colorDiffuse" : [0.64, 0.64, 0.64],
        "colorSpecular" : [0.5, 0.5, 0.5],
        "shading" : "Lambert",
        "specularCoef" : 50,
        "transparency" : 1.0,
        "vertexColors" : false
        }],

    "vertices": [1.00...],
    "morphTargets": [],
    "normals": [0.577349,..],
    "colors": [],
    "uvs": [[]],
    "faces": [35,...],
    "edges" : []
};

postMessage( model );
close();


Programming Shaders and Textures

If you want to forgo using the built-in material features in Three.js and possibly create more advanced effects, you can dip into the WebGL features and create your own vertex and fragment shaders. In Three.js, you would create a material as demonstrated earlier in the chapter and attach your shaders. To write shader code, you need to learn a little bit about OpenGL Shader Language, or GLSL for short.

GLSL is a high-level language with a C-like syntax. Although structured like programs, and even called that in some cases (the combination of a vertex shader and a fragment shader is a “program”), shaders are not precompiled but passed around as strings. They can be created at runtime, as some are in Three.js, or read in from files or <script> tags on a web page, as in the listings that follow. Some of the more dangerous features of C, such as pointers, are not present in GLSL, but it closely matches the feature set of C, uses the operators of C and C++, and can do pretty much anything you’d want to do, including flow control and creating/calling functions. GLSL also has some bundled graphics-processing-specific convenience functions.

Listings 7-10 and 7-11 show the code for a GLSL program that colors all vertices with a white color. Listing 7-10 is where we assign the color using a vec4 to represent the desired red, green, blue, and alpha values.

Listing 7-10. Sample Fragment Shader


<script id="shader-fs" type="x-shader/x-fragment">
    #ifdef GL_ES
    precision highp float;
    #endif

    void main(void) {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
</script>


Listing 7-11 shows the code for a vertex shader. When we view vertices, although drawn in 3D space, they are projected into 2D space when they are shown on the screen. gl_Position locates the final position of the vertex onscreen by multiplying the projection and model view matrices by the location of the vertex. Don’t worry about the extra 1.0. That is present because in matrix multiplication, the dimensions have to match. projectionMatrix, modelViewMatrix, and position are all injected by Three.js for us. If you are adapting a GLSL program from another platform, you might see those programs explicitly declare these variables.

Listing 7-11. Sample Vertex Shader


<script id="shader-vs" type="x-shader/x-vertex">
    #ifdef GL_ES
    precision highp float;
    #endif



    void main(void) {
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
</script>


To use the shaders in our application, we need to create a shader material for the objects that will use it. After this is instantiated, we can use it like any other material. Listing 7-12 shows the code to create a MeshShaderMaterial using the bare minimum properties. The code uses a jQuery-like library to get the contents of the script tags.

Listing 7-12. Creating a MeshShaderMaterial


var shaderMaterial = new THREE.MeshShaderMaterial({
                vertexShader: $('#vertexShader').get(0).innerHTML,
                fragmentShader: $('#fragmentShader').get(0).innerHTML
});


Often, shaders will need to do some advanced calculations, or you might want to pass data from your host application to your GLSL program. Shader variables let you do just that. There are three basic types:

Uniform—The value stays the same during a render of a frame and is available to both shaders.

Attribute—Read-only variables available to the vertex shader.

Varying—Allows the vertex and fragment shaders to share data.

In Listing 7-11, projectionMatrix and modelViewMatrix are uniforms and position is an attribute. When creating our own variables, it is important to note that GLSL programs are not JavaScript and require explicit declaration of types. In addition to the primitive types available in C/C++, there are also some GLSL-specific ones. Table 7-1 shows the possible vector types, whereas Tables 7-2 and 7-3 show the matrix and texture types, respectively.
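
To make the three qualifiers concrete, the sketch below extends Listings 7-10 and 7-11 with a varying handed from the vertex shader to the fragment shader and a uniform supplied by the host application. The tint uniform is an invented name; its value would be bound from JavaScript (for example, through the uniforms parameter of MeshShaderMaterial), and the exact binding syntax depends on the Three.js version in use.

<script id="shader-vs-vars" type="x-shader/x-vertex">
    // position, projectionMatrix, and modelViewMatrix are injected by Three.js;
    // vColor is a varying we declare ourselves.
    varying vec3 vColor;

    void main(void) {
        vColor = normalize(position) * 0.5 + 0.5;   // derive a color from the vertex position
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
</script>

<script id="shader-fs-vars" type="x-shader/x-fragment">
    #ifdef GL_ES
    precision highp float;
    #endif

    uniform vec3 tint;       // stays constant for the whole frame; set by the host application
    varying vec3 vColor;     // interpolated per pixel from the vertex shader outputs

    void main(void) {
        gl_FragColor = vec4(vColor * tint, 1.0);
    }
</script>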

Table 7-1. GLSL Vector Types

Image

Table 7-2. GLSL Matrix Types

Image

Table 7-3. GLSL Texture Types

Image

Using Textures

Textures can range in shape and size, so we can’t always map a 1:1 relationship between the size of a texture and the face it will be applied to. To compensate for this, instead of mapping with the actual pixel sizes, we map the relationship between the texture and the face it will cover with texels. A texel, also known as a texture coordinate or texture pixel, is a pair of values that range from 0.0 to 1.0 along the x- and y-axes of the texture. We assign a texel for each vertex in the object we are texturing. You can see an example of this in Figure 7-11.

Figure 7-11. Texels

Image
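
Because texels are just normalized positions within the texture image, converting from pixel coordinates is a simple division. The helper below is plain JavaScript, and the 256x128 image size is only an example.

// Convert a pixel position inside a texture image to a texel (UV) coordinate in the 0.0-1.0 range.
function toTexel(px, py, imageWidth, imageHeight) {
        return {
                u: px / (imageWidth - 1),
                v: py / (imageHeight - 1)
        };
}

// The center of a 256x128 image maps to roughly (0.5, 0.5).
var uv = toTexel(127.5, 63.5, 256, 128);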

Instead of attempting to texture a complex object all at once like you would with a color, you can instead texture the individual faces of the object. This can optimize the texturing process and allow greater control over the look of the textured object. This process is called “UV mapping.”  You can see a simplified form of this in Figure 7-12 as a cube map. A cube map consists of six textures, one for the top, bottom, and four sides of the cube.

Figure 7-12. Cube map

Image

If we supply one texture, it is repeated on all the sides of the cube. If a face in the object to be textured isn’t present, the texture data for that face is discarded. Cube mapping allows us to use the same texture information for a cube, sphere, cylinder, or any other object.

With UV mapping, a template is created by unwrapping the triangles and laying them all flat. The artist could then paint on each individual face. Once textured, the object has the desired “skin.” Take, for instance, the games of the Dead Rising franchise, which allow the protagonists to change their clothes at will. The game developer did this by layering multiple UV maps. There could be a map for the body and skin texture, then another for undergarments, and another for outer clothing.

Listing 7-13 demonstrates how to apply a texture to a sphere. To avoid needing extra lights to illuminate the model, we use a white ambient color on the material. Three.js will do a lot of the heavy lifting for us, leaving us mostly free to hand it a texture using the map property, and it then constructs the cube map for us. Figure 7-13 shows the model produced by the code in Listing 7-13.

Listing 7-13. Texturing a Sphere


function drawScene() {
        var texture   = THREE.ImageUtils.loadTexture(
            "200407-bluemarble.jpg" );
        var material = new THREE.MeshPhongMaterial( {
            color: 0xFFFFFF, ambient: 0xFFFFFF, map:texture
        } );

        sphere = new THREE.Mesh(new THREE.Sphere(32, 32, 32), material);
        scene.addObject(sphere);
}


In the case of Listing 7-13, which shows how to texture a sphere, the UV coordinates were created for us. The Three.js JSON format has support for UV coordinates, and all of the packaged exporter plugins can export them as well.

Figure 7-13. Textured sphere

Image

Creating a Game with Three.js

Conway’s Game of Life is a cellular automaton simulation where individual cells (or, in our version, spheres) abide by certain rules to determine their transition between different states. Conway’s Game of Life is a great project because we don’t have to worry so much about game play, but it requires us to manage many objects in the scene at the same time. Here are the general rules:

• A live cell with fewer than a certain number of live neighbors dies.

• A live cell with more than a certain number of live neighbors dies.

• A dead cell with a certain number of live neighbors comes alive.

• A live cell with a certain number of neighbors lives on.

Conway’s original game is strictly 2D; cells are born with exactly three live neighbors, die with fewer than two or more than three, and live on with two or three. Because we are using 3D and possibly different birth/death rules, we have to call the game “Life-like.” Unlike the original, where the simulation has a reasonably large space to “roam,” our simulation will be constrained to a customizable square grid with configurable birth/death rules. An added feature, borrowed from the Java3D application that inspired our app, is coloring each cell based on its age. A screenshot of the running game is shown in Figure 7-14.

The code runs one cycle of 100 alive/dead transitions for each cell.

Figure 7-14. Life-like game screenshot

Image
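
The heart of such a simulation is counting the live neighbors of each cell in the 3D grid and applying configurable birth/survival rules. The sketch below is a plain JavaScript illustration of that rule; the grid layout and the example rule arrays are assumptions, not the exact code from the demo.

// cells is a 3D array of 0/1 values; birth and survive are arrays of neighbor counts, such as [4] and [4, 5].
function nextCellState(cells, x, y, z, birth, survive) {
        var liveNeighbors = 0;
        for (var dx = -1; dx <= 1; dx++) {
                for (var dy = -1; dy <= 1; dy++) {
                        for (var dz = -1; dz <= 1; dz++) {
                                if (dx === 0 && dy === 0 && dz === 0) continue;   // skip the cell itself
                                var nx = x + dx, ny = y + dy, nz = z + dz;
                                if (cells[nx] && cells[nx][ny] && cells[nx][ny][nz]) {
                                        liveNeighbors++;
                                }
                        }
                }
        }
        if (cells[x][y][z]) {
                return survive.indexOf(liveNeighbors) !== -1 ? 1 : 0;   // live cell survives or dies
        }
        return birth.indexOf(liveNeighbors) !== -1 ? 1 : 0;             // dead cell may come alive
}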

Simulating the Real World with Game Physics

In Chapter 4, “How Games Work,” we discussed some very basic ways to do collision detection and create particle systems. In that chapter, we used informal bounding boxes to determine whether a collision had occurred, which resulted in a very constrained response. We also dabbled a bit with particle systems. In this chapter, we apply physics properties to the particles in a more formal way.

When we think of what makes humanoid-based games look realistic, we are seeing physics engines at work—or more specifically, rigid body dynamics. One way to think of this is to consider rigid bodies to be like bones in the human body. The bones themselves don’t bend but can be articulated at the joints and have a maximum of six degrees of freedom (translation and rotation in x, y, and z). We can also add constraints to restrict the motion so that our models aren’t dislocating joints when they move. These constraints could be as simple as making an object immovable, like a wall or the ground, or they could take the form of joints. Joints allow us to connect and constrain rigid bodies. The two types of joints that most physics systems support are hinge and ball-and-socket. Hinge joints restrict motion to one axis; examples in the human body are knees and fingers. Ball-and-socket joints, on the other hand, allow a freer range of motion, potentially across all six degrees of freedom. Your shoulders and hip joints are good examples of ball-and-socket joints.

Rigid bodies also have physical properties such as mass, inertia, velocity, and so on. Whereas your bones only account for 30% to 40% of your body’s mass, rigid bodies contain 100% of the simulated object’s mass. Rigid bodies don’t have to be bone-like structures; they can be immovable objects such as walls and the ground, or shapes encompassing one or many moving bodies.

Physics engines use a combination of primitive shapes to detect collisions, such as spheres, boxes, capsules, and free-form meshes. You are free to use as many as you want; for instance, you could detect a general collision with the right hand with one collision primitive and then drill down to the exact finger that was affected. After the primitives are all defined, the physics engine starts and slowly steps through time to check for collisions and changes to the transformation matrices of the objects and rigid bodies. These changes are reported to the render functions and combined with player input. There is another major type of dynamics called soft-body dynamics. Any type of cloth as well as fur, hair, and feathers would use soft-body dynamics. We won’t be covering this in any detail as it is an advanced topic. However, it warrants mentioning.

The physics engine we’ll be using for our demos is JigLibJS, a port of the Java variant of the popular JigLib library; it is available at www.jiglibjs.org/. Although the original library is written in C++, JigLib has also been ported to C# and ActionScript in addition to JavaScript. Listing 7-14 shows the initial setup of our physics world. We begin by retrieving an instance of the PhysicsSystem and setting the gravity for the world and the type of solver to use. In this case, we used FAST, but the other options are NORMAL and ACCUMULATED. You give up a little accuracy by choosing FAST over NORMAL and ACCUMULATED, but usually there isn’t a noticeable difference.

Listing 7-14. Setting Up JigLibJS


function initJigLib() {
        system = jigLib.PhysicsSystem.getInstance();
        system.setSolverType("FAST");
        system.setGravity( jigLib.Vector3DUtil.create(0, -9.8, 0, 0) );
}


After we create the physics system, we need to create a mesh for the ground and a corresponding physics rigid body to represent the ground. The properties we need to set on the rigid body, for the most part, match what we set on the mesh, with the exception of the additional call set_movable(false), which makes the ground unaffected by external forces. This code is shown in Listing 7-15.

Listing 7-15. Drawing the Ground


// create the ground
var plane = new THREE.Mesh(
        new THREE.Plane(75,75,10,10),
        new THREE.MeshLambertMaterial({
            color:0x222222
        })
);
plane.translateY(-10);
plane.rotation.x = -70;
scene.addObject(plane);

var ground = new jigLib.JPlane();
ground.set_y(-10);
ground.set_rotationX(-70);
ground.set_movable(false);
system.addBody(ground);
plane.rigidBody = ground;


Next, we need to draw our sphere and assign a rigid body to it. The code to create a Three.js sphere is pretty straightforward, so I’ve excluded it from Listing 7-16. If you need a refresher, check out the sources. We begin by instantiating the JSphere, passing it null for the skin and a value for the radius. After setting the mass on the object, we can orient it in 3D space using the moveTo function and the Vector3DUtil class. We could have set the point to move to using the individual set_x, set_y, and set_z functions, as we did with the ground, or by just passing in an array of values, as shown in the commented-out line. One reason why you might want to construct a Vector3D is if you plan to somehow transform it after you create it.

Listing 7-16. Drawing a Sphere Rigid Body


// create rigid body
var body = new jigLib.JSphere(null, 8);
body.set_mass(8);
body.moveTo(jigLib.Vector3DUtil.create(
        sphere.position.x, sphere.position.y, sphere.position.z, 0));
//body.moveTo([sphere.position.x, sphere.position.y, sphere.position.z, 0]);

system.addBody(body);
sphere.rigidBody = body;


Lastly, we need some code to update our objects. The updateDynamicsWorld function in Listing 7-17 begins with a calculation to find the elapsed time since the previous run. The physics system uses the elapsed time to figure out how to apply forces. We next iterate over all the objects, checking to see if they have rigid bodies attached. If so, we alter the transformations of the meshes to match. In Listing 7-17, we are dealing only with the translation and rotation, but the listed orientation variable is how we would access the full transformation matrix.


Frames per Second Versus Time-Based Animation

When games are created based solely on frames per second, they might be optimized for a particular class of system or processor and thus run differently on others. You might notice this if you dust off some of your old 3.5-inch or 5.25-inch floppies. If the game was developed for a 20MHz 386 computer, it might be unplayable on a 3GHz machine. Also, frame-based animation inappropriately assumes that it won’t have to share the processor with any other applications. CPUs, by design, rapidly switch between running processes. If a processor-hungry app is running in the background, such as a music player, you might see your frame rate rapidly decline.
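
A sketch of the time-based alternative is shown below: every movement is scaled by the elapsed time, so an object covers the same distance per second regardless of the frame rate. The speed value is arbitrary, and the mesh, renderer, scene, and camera variables are assumed to exist as in the earlier listings.

var lastTime = new Date().getTime();
var unitsPerSecond = 20;                    // desired speed, independent of frame rate

function animateTimeBased() {
        requestAnimationFrame(animateTimeBased);

        var now = new Date().getTime();
        var elapsedSeconds = (now - lastTime) / 1000;
        lastTime = now;

        mesh.position.x += unitsPerSecond * elapsedSeconds;   // same distance per second on any machine
        renderer.render(scene, camera);
}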


Listing 7-17. Updating the World


function updateDynamicsWorld() {
        // find elapsed time from last update
        var t1 = new Date().getTime();
        var elapsedTime = t1 - t0;
        t0 = t1;

        system.integrate(elapsedTime/1000);
        for (var i = 0; i<scene.objects.length; i++) {
                var mesh = scene.objects[i];
                if (mesh.rigidBody) {
                   var state = mesh.rigidBody.get_currentState();
                   var position = state.position;
                   var orientation = state.get_orientation().glmatrix;

                   mesh.position.x = position[0];
                   mesh.position.y = position[1];
                   mesh.position.z = position[2];

                   mesh.rotation.x = mesh.rigidBody.get_rotationX();
                   mesh.rotation.y = mesh.rigidBody.get_rotationY();
                   mesh.rotation.z = mesh.rigidBody.get_rotationZ();
                }
        }
}


Revisiting Particle Systems

Particle systems in Three.js deal solely with 2D objects. They use a concept called billboarding, wherein the textured face of the sprite always faces the camera and/or the user’s viewport. Billboarding allows two-dimensional shapes to appear to have depth in a 3D world. It also lets us save polygons where possible and is not limited to particle systems; some LOD algorithms will transition between not drawing an object, drawing it with a billboard, drawing it with a low-poly model, and drawing it with the high-resolution model. To create a particle system in Three.js, we first need to create a THREE.Geometry object to hold the vertex locations of the particles to draw. We then pass that geometry, along with a ParticleBasicMaterial to use for the particles, to a ParticleSystem object. In this case, we loaded a PNG image for the particles and set the size in the material to the dimensions of the image. In Listing 7-18, you can see that we don’t ever instantiate the individual particles. After the geometry is assigned to the ParticleSystem, adding more particles has no effect, even though doing so doesn’t raise an error and appears to work. Any transformations are applied to the particle system as a whole, and each particle system can only have a single texture for all its particles.

Listing 7-18. Bubble Particle System


// create texture
ballTexture = THREE.ImageUtils.loadTexture( "ball.png" );
var material = new THREE.ParticleBasicMaterial({
        size: 52, depthTest: false,
        transparent: true, map: ballTexture
});


// create vertices; v() is assumed to be a small helper that wraps
// a position in a THREE.Vertex
geometry = new THREE.Geometry();
for (var i = 0; i < numParticles; i++) {
        // give each particle its own random position
        var randX = Math.random() * 100;
        var randY = Math.random() * 100;
        var randZ = Math.random() * 100;
        geometry.vertices.push(v(randX, randY, randZ));
}
particleSystem = new THREE.ParticleSystem(geometry, material);

scene.addObject(particleSystem);


Creating Scenes

Loading a single model is all fine and good, but what about loading a whole scene at once? Can we do that? We sure can. SceneLoader takes a JSON file and asynchronously loads the assets in it, freeing us to use ASCII and binary model files where appropriate.

Selecting Objects in a Scene

Figuring out which object on the screen is selected, also known as “picking,” is a bit more difficult in 3D space than it is in 2D. In 2D, we could read the x and y mouse positions and easily check them against our object constraints. In 3D space, the fact that we’re dealing with a 3D world projected onto 2D space creates a bit more work for us. One method of picking is to render the objects each in a unique color and then check the pixel color at the mouse’s position. You could do this by using the scene graph to render an offscreen canvas exclusively for picking. Provided the polygon counts aren’t that high or you are using a solely 2D canvas, color picking will perform very well. We could further increase speed by using lower-detail models for our picking algorithms.
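
A minimal sketch of the read-back step for color picking is shown below. It assumes the scene has already been rendered into an offscreen WebGL context (pickingGl) with each object drawn in a unique flat color, and that colorToObject maps packed colors back to objects; both names are illustrative.

// Read the single pixel under the mouse from the picking context and look up which object owns that color.
function pickObject(pickingGl, mouseX, mouseY, canvasHeight, colorToObject) {
        var pixel = new Uint8Array(4);
        // WebGL's y-axis starts at the bottom of the canvas, so flip the mouse y coordinate.
        pickingGl.readPixels(mouseX, canvasHeight - mouseY, 1, 1,
                pickingGl.RGBA, pickingGl.UNSIGNED_BYTE, pixel);

        var key = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];   // pack RGB into a single id
        return colorToObject[key];                                 // undefined if nothing was hit
}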

One of the drawbacks of color picking is that although it is great at locating the object that was picked, it is bad at telling us where on that object the selection point lands in 3D space. To do this, we would have to give each face of each object a unique color. Doable, but not fun.

A more advanced method is to use ray casting, which is a means of testing for intersection by firing a beam toward a surface and reacting based on the first polygon encountered. A fork of Three.js (https://github.com/mindlapse/three.js/) contains code for picking using ray casting.

Animating Models

Making your models move like humans do is one way to make your games more realistic. A means of doing this is called “rigging,” where, in addition to the mesh of vertices, we also give the object an armature of bones, each of which affects the vertices around it with a given weight. When you pick up a coffee cup, your brain has to make several calculations on how to move your shoulder, upper arm, forearm, and wrist to finally know how to move your hand to grab the cup. This is known as forward kinematics. Inverse kinematics uses the final position of the hand to backtrack and figure out where the rest of the joints will be. Instead of calculating the exact position for each frame of an animation, we calculate a couple of them, also known as keyframes, and interpolate the values between them. Keyframes help guide the animation in the right direction. Too few keyframes, especially if the start and end states are vastly different, can cause unnatural limb contortions. Although MAX and Blender both support inverse kinematics and forward kinematics, the export scripts do not. That gives us two choices: We can write an export script to get the rigging information, or we can pose our objects in the 3D modeling application, export the individual keyframes, and then stitch them together in the application.

When they deal with a single object, keyframes can also be described with a more precise term: morph target. Whereas a keyframe can be a snapshot of an object’s transformation or of the individual locations of its vertices, a morph target only describes the latter. To produce smooth animations between two targets, morph target influences are used. If one target represents 2 seconds into the animation and the next represents 4 seconds, and the current time is 2.25 seconds, then the target at t = 2 will have a greater influence than the one at t = 4. Morph targets are a new and lightly documented area of Three.js (as in about 3 days old at the time this chapter was being finished). Check the Three.js sources for examples and more details.

Sourcing 3D Models

Although it isn’t hard to create inanimate objects such as trees, trunks, and basic furniture, creating photorealistic models and textures is outside the skillset of most people. If you know Photoshop or GIMP like the back of your hand and live and breathe Blender, Autodesk 3ds Max, or Maya, feel free to skip this section.

TurboSquid (www.turbosquid.com), formerly known as the Gamasutra Exchange, is an online marketplace for 2D/3D models, textures, materials, and application plugins. Over 200,000 models are available for download in a range of formats for open-source and commercial applications. The information page for each asset clearly lists the licensing terms.

If you use Google SketchUp (http://sketchup.google.com) to create models, you might be interested in the 3D Warehouse (http://sketchup.google.com/3dwarehouse). Even if you don’t use SketchUp, you might be interested because the 3D Warehouse is a great source for models of historic and significant buildings as well as objects from the real world. For example, if you were doing a spy game based in London, the 3D Warehouse would be a great place to get models of Big Ben and Westminster Abbey. Limited exporter scripts are bundled with the free version of SketchUp. You can either pony up for the commercial version ($495) or try to find some community-sourced scripts on the Internet.

For the adventurous, there is MakeHuman (www.makehuman.org). It is an open-source project that started as a Blender plugin and allows you to create highly customized human models by specifying the ethnic features, gender, age, body tone, weight, and stature. These models are also fully rigged and textured, allowing you to quickly integrate them into your games. You can even edit facial expressions, and there is support for importing BVH (BioVision Hierarchy) files, one of the industry standards for providing motion-capture data that can be used with rigged models. Figure 7-15 shows the basic MakeHuman application.

Figure 7-15. MakeHuman home screen

Image

Benchmarking Your Games

Programming with WebGL takes an incredible amount of processing power and isn’t always the most forgiving medium. One mistake that you might make early on is drawing too much. Whether you use raw WebGL or Three.js, drawing 100 spheres that are the same size doesn’t mean you need to create a new set of vertices for each one. That is a sure way to use up more processing power than needed. An unoptimized version of the Game of Life demo did just that and used up 1.4GB of RAM before crashing the browser tab. WebGL will happily use a copy of vertices over and over again with different transformation matrices to produce objects. Let’s discuss a couple of tools that will help you optimize and benchmark your applications.

Checking Frame Rate with Stats.js

Included in the sources for Three.js is a small utility library to measure the frame rate in WebGL scenes. Listing 7-19 shows the code to create a Stats object with some optional CSS to position the element at the upper-left corner of the window. Call stats.update() somewhere in the animate function and you’re all set.

Listing 7-19. Creating a Stats Element


stats = new Stats();
stats.domElement.style.position = 'absolute';
stats.domElement.style.top = '0px';
$("#container").appendChild(stats.domElement);


Using the WebGL Inspector

What Firebug and the Chrome/Safari Developer Tools are to HTML/CSS/JS, the WebGL Inspector (https://github.com/benvanik/WebGL-Inspector) aims to be for WebGL. The project, which is available as a Chrome extension or a Firefox plugin and can also be bundled directly into an application, surfaces a panel to monitor what is going on in your WebGL application. You are able to see all the referenced textures and shader programs as well as capture individual frames. You can even walk through the calls that are being made on an individual frame, step by step. Figure 7-16 shows the WebGL Inspector running in a browser window.

Figure 7-16. WebGL Inspector

Image

Summary

It would be impossible to cover all there is to know about WebGL in one chapter. Instead we took a more practical approach, discussing low-level details and concepts when needed while for the most part remaining high level, leveraging Three.js for the hard work. You can think of raw WebGL code as assembly code: many learn it, but few have to use it from day to day. You learned how to leverage the low-level APIs for items such as shaders and how to balance this with the abstractions for materials, texturing, and lighting that Three.js provides. You also learned how to integrate physics and 3D models into games. After we created a game with Three.js, you learned about some tools to help us optimize and benchmark our code.

Exercises

1. Load a model file and retrieve the material data from it.

2. Describe how to use texture coordinates to texture a triangle.

3. Write the code to texture a cylinder.

You can download chapter code and answers to the chapter exercises at www.informit.com/title/9780321767363.
