© Ted Hagos, Mario Zechner, J.F. DiMarzio and Robert Green 2020
T. Hagos et al., Beginning Android Games Development, https://doi.org/10.1007/978-1-4842-6121-7_9

9. Introduction to OpenGL ES

Ted Hagos1 , Mario Zechner2, J. F. DiMarzio3 and Robert Green4
(1)
Makati, Philippines
(2)
Graz, Steiermark, Austria
(3)
Kissimmee, FL, USA
(4)
Portland, OR, USA
 
What we’ll cover:
  • About OpenGL ES

  • OpenGL ES theories

  • GLSurfaceView and GLSurfaceView.Renderer

  • Using Blender data in OpenGL ES

Starting from API level 11 (Android 3.0), the 2D rendering pipeline already supports hardware acceleration. When you draw on the Canvas (which is what we used in the last two games we built), the drawing operation is done on the GPU; but this also means the app consumes more RAM because of the extra resources required to enable hardware acceleration.

Building games using the Canvas isn’t a bad choice of tech if the game you’re building isn’t that complex; but when the level of visual complexities rises, the Canvas might run out of juice and won’t be able to keep up with your game requirements. You’ll need something more substantial. This is where OpenGL ES comes in.

What’s OpenGL ES

Open Graphics Library (OpenGL) came from Silicon Graphics (SGI); they were makers of high-end graphics workstations and mainframes. Initially, SGI had a proprietary graphics framework called IRIS GL (which grew to be an industry standard), but as competition increased, SGI opted to turn IRIS GL into an open framework. IRIS GL was stripped of non-graphics-related functions and hardware-dependent features and became OpenGL.

OpenGL is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D graphics. It’s a lean mean machine for rendering polygons; it’s written in C as an API for interacting with a graphics processing unit (GPU) to achieve hardware accelerated rendering. It’s a very low-level hardware abstraction.

As small handheld devices became more and more common, OpenGL for Embedded Systems (OpenGL ES) was developed. OpenGL ES is a stripped-down version of the desktop API; it removed many of the more redundant API calls and simplified other elements to make it run efficiently on less powerful mobile hardware. As a result, OpenGL ES was widely adopted on many platforms, such as HP webOS, Nintendo 3DS, iOS, and Android.

OpenGL ES is now an industry standard for (3D) graphics programming. It is maintained by the Khronos Group, which is an industry consortium whose members include, among others, ATI, NVIDIA, and Intel; together, these companies define and extend the standard.

Currently, there are six incremental versions of OpenGL ES: versions 1.0, 1.1, 2.0, 3.0, 3.1, and 3.2.
  • OpenGL ES 1.0 and 1.1—This API specification is supported by Android 1.0 and higher.

  • OpenGL ES 2.0—This API specification is supported by Android 2.2 (API level 8) and higher.

  • OpenGL ES 3.0—This API specification is supported by Android 4.3 (API level 18) and higher.

  • OpenGL ES 3.1—This API specification is supported by Android 5.0 (API level 21) and higher.

  • OpenGL ES 3.2—This API specification is supported by Android 7.0 (API level 24) and higher.

There are still developers, especially those who focus on games that run on multiple platforms, who write for OpenGL ES 1.0; this is because of its simplicity, flexibility, and standard implementation. All Android devices support OpenGL ES 1.0, some devices support 2.0, and most devices released after Jelly Bean support OpenGL ES 3.0. At the time of writing, more than half of activated Android devices already support OpenGL ES 3.0 or higher. Table 9-1 shows the distribution, and Figure 9-1 shows a pie chart to go with it; this data was taken from https://developer.android.com/about/dashboards#OpenGL.
Table 9-1

OpenGL ES version distribution

OpenGL ES Version    Distribution
GL 1.1 only          0.0%
GL 2.0               14.5%
GL 3.0               18.6%
GL 3.1               9.8%
GL 3.2               57.2%

Figure 9-1

OpenGL ES version distribution

Note

Support for one particular version of OpenGL ES also implies support for any lower version (e.g., support for version 2.0 also implies support for 1.1).

It’s important to note that OpenGL ES 2.0 breaks compatibility with the 1.x versions. You can use either 1.x or 2.0, but not both at the same time. The reason for this is that the 1.x versions use a programming model called fixed-function pipeline, while versions 2.0 and up let you programmatically define parts of the rendering pipeline via shaders.

What does OpenGL ES do

The short answer is that OpenGL ES just renders triangles on the screen, and it gives you some control over how those triangles are rendered. It’s probably best to describe right away what OpenGL ES is not. It is not
  • A scene management API

  • A ray tracer

  • A physics engine

  • A game engine

  • A photorealistic rendering engine

OpenGL ES just renders triangles. Not much else.

Think of OpenGL ES as working like a camera. To take a picture, you have to go to the scene you want to photograph. Your scene is composed of objects that all have a position and orientation relative to your camera as well as different materials and textures. Glass is translucent and reflective; a table is probably made out of wood; a magazine has some photo of a face on it; and so on. Some of the objects might even move around (e.g., cars or people). Your camera also has properties, such as focal length, field of view, image resolution, size of the photo that will be taken, and a unique position and orientation within the world (relative to some origin). Even if both the objects and the camera are moving, when you press the shutter release, you catch a still image of the scene. For that small moment, everything stands still and is well defined, and the picture reflects exactly all those configurations of position, orientation, texture, materials, and lighting. Figure 9-2 shows an abstract scene with a camera, light, and three objects with different materials.
Figure 9-2

Abstract scene

Each object has a position and orientation relative to the scene’s origin. The camera, indicated by the eye, also has a position in relation to the scene’s origin. The pyramid in Figure 9-2 is called the view volume or view frustum, which shows how much of the scene the camera captures and how the camera is oriented. The little white ball with the rays is the light source in the scene, which also has a position relative to the origin.

We can map this scene to OpenGL ES, but to do so, we need to define (1) models or objects, (2) lights, (3) camera, and (4) viewport.

Models or Objects

OpenGL ES is a triangle rendering machine. OpenGL ES objects are a collection of points in 3D space; each point’s location is defined by three values (its X, Y, and Z coordinates). These points are joined together to form faces, which are flat triangular surfaces. The triangles are then joined together to form objects or pieces of objects (polygons).

The resolution of your shapes can be improved by increasing the number of polygons in them. Figure 9-3 shows various shapes with varying numbers of polygons.
Figure 9-3

From simple shapes to complex shapes

On the far left of Figure 9-3 is a simple sphere; it doesn’t really pass as a sphere if you look closely at it. The shape next to it (to the right) is also a sphere but with more polygons. The shapes, as they progress to the right, form more complex contours; this is achieved by increasing the number of polygons in the shape.

Lights

OpenGL ES offers a couple of different light types with various attributes. They are just mathematical objects with positions and/or directions in 3D space, plus attributes such as color.

Camera

This is also a mathematical object that has a position and orientation in 3D space. Additionally, it has parameters that govern how much of the image we see, similar to a real camera. All these things together define a view volume or view frustum (indicated by the pyramid with the top cut off in Figure 9-2). Anything inside this pyramid can be seen by the camera; anything outside will not make it into the final picture.

Viewport

This defines the size and resolution of the final image. Think of it as the type of film you put into your analog camera or the image resolution you get for pictures taken with your digital camera.

Projections

OpenGL ES can construct a 2D bitmap of a scene from the camera’s point of view. While everything is defined in 3D space, OpenGL maps the 3D space to 2D via something called projections. A single triangle has three points defined in 3D space. To render such a triangle, OpenGL ES needs to know the coordinates of these 3D points within the pixel-based coordinate system of the framebuffer, so it can figure out which framebuffer pixels lie inside the triangle.

Matrices

OpenGL ES expresses projections in the form of matrices. The internals of matrices are quite involved; for our introductory purposes, we don’t need to dig into them; we simply need to know what they do with the points we define in our scene.
  • A matrix encodes transformations to be applied to a point. A transformation can be a projection, a translation (in which the point is moved around), a rotation around another point and axis, or a scale, among other things.

  • By multiplying such a matrix with a point, we apply the transformation to the point. For example, multiplying a point with a matrix that encodes a translation by 10 units on the x axis will move the point 10 units on the x axis and thereby modify its coordinates (see the sketch after this list).

  • We can concatenate transformations stored in separate matrices into a single matrix by multiplying the matrices. When we multiply this single concatenated matrix with a point, all the transformations stored in that matrix will be applied to that point. The order in which the transformations are applied is dependent on the order in which we multiplied the matrices.
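
To make this concrete, here’s a minimal sketch (not part of this chapter’s project; the point and the 10-unit translation are just illustrative values) that uses Android’s android.opengl.Matrix helper class to apply transformations to a point:

import android.opengl.Matrix;
// somewhere inside a method
float[] translation = new float[16];
Matrix.setIdentityM(translation, 0);            // start from the identity matrix
Matrix.translateM(translation, 0, 10f, 0f, 0f); // encode "move 10 units on the x axis"
float[] point = {1f, 2f, 3f, 1f};               // a point: x, y, z, w (w = 1 for positions)
float[] result = new float[4];
Matrix.multiplyMV(result, 0, translation, 0, point, 0);
// result is now {11f, 2f, 3f, 1f}; the point moved 10 units on the x axis
// To concatenate transformations, multiply the matrices themselves:
float[] rotation = new float[16];
Matrix.setRotateM(rotation, 0, 90f, 0f, 0f, 1f); // 90 degrees around the z axis
float[] combined = new float[16];
Matrix.multiplyMM(combined, 0, translation, 0, rotation, 0); // applies the rotation first, then the translation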

There are three different matrices in OpenGL ES that apply to the points in our models:
  • Model-view matrix—This matrix is used to place a model somewhere in the “world.” For example, if you have a model of a sphere and you want it located 100 meters to the east, you will use the model matrix to do this. We can use this matrix to move, rotate, or scale the points of our triangles (this is the model part of the model-view matrix). This matrix is also used to specify the position and orientation of our camera (this is the view part). If we want to view our sphere, which is 100 meters to the east, we have to move ourselves 100 meters to the east as well. Another way to think about this is that we remain stationary and the rest of the world moves 100 meters to the west.

  • Projection matrix—This is the view frustum of our camera. Since our screens are flat, we need to do a final transformation to “project” our view onto our screen and get that nice 3D perspective. This is what the projection matrix is used for.

  • Texture matrix—This matrix allows us to manipulate texture coordinates.

There’s a lot more theory we need to absorb in OpenGL ES programming, but let’s explore some of it alongside a simple coding exercise.

Rendering a Simple Sphere

OpenGL ES APIs are built into the Android framework, so we don’t need to import any other libraries or include any other dependencies into the project.

OpenGL ES is widely supported among Android devices, but just to be prudent, if you want to exclude Google Play users whose devices do not support OpenGL ES, you need to add a uses-feature element in the Android Manifest file, like this:
<uses-feature android:glEsVersion="0x00020000"
              android:required="true" />

The manifest entry is basically saying that the app expects the device to support OpenGL ES 2, which is practically all devices at the time of writing.

Additionally (and optionally), if your application uses texture compression, you must also declare it in the manifest so that the app only installs on compatible devices; Listing 9-1 shows how to do this in the Android Manifest.
<supports-gl-texture android:name="GL_OES_compressed_ETC1_RGB8_texture" />
<supports-gl-texture android:name="GL_OES_compressed_paletted_texture" />
Listing 9-1

AndroidManifest.xml, texture compression

Assuming you’ve already created a project with an empty Activity and a default activity_main layout file, the first thing to do is to add GLSurfaceView to the layout file. Modify activity_main.xml to match the contents of Listing 9-2.
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
  xmlns:app="http://schemas.android.com/apk/res-auto"
  xmlns:tools="http://schemas.android.com/tools"
  android:layout_width="match_parent"
  android:layout_height="match_parent"
  tools:context=".MainActivity">
  <android.opengl.GLSurfaceView
    android:layout_width="400dp"
    android:layout_height="400dp"
    android:id="@+id/gl_view"
    />
</androidx.constraintlayout.widget.ConstraintLayout>
Listing 9-2

activity_main.xml

I removed the default TextView object and inserted a GLSurfaceView element with a 400dp by 400dp size. Let’s keep it square for now so that our shape won’t skew; OpenGL ES works in a square, normalized coordinate system, and since we won’t correct for the aspect ratio in our projection, a non-square surface would stretch the shape.

Figure 9-4 shows the activity_main layout in design mode.
Figure 9-4

activity_main.xml in design mode

The GLSurfaceView is an implementation of the SurfaceView class that uses a dedicated surface for displaying OpenGL rendering; this object manages a surface, which is a special piece of memory that can be composited into the Android view system. The GLSurfaceView runs on a dedicated thread to separate the rendering performance from the main UI thread.
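
Because the GLSurfaceView manages its own rendering thread, it’s good practice to pause and resume that thread together with the Activity. The sample in this chapter doesn’t do this, but a minimal sketch (using the glView reference we obtain in Listing 9-3, just below) looks like this:

@Override
protected void onPause() {
  super.onPause();
  glView.onPause();   // pauses the rendering thread while the Activity isn't visible
}
@Override
protected void onResume() {
  super.onResume();
  glView.onResume();  // resumes the rendering thread (the surface is recreated if needed)
}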

Next, in MainActivity, let’s get a reference to the GLSurfaceView we just created. We can create a member variable on MainActivity that’s of type GLSurfaceView, then in the onCreate() method, we’ll get a reference to it using findViewByID. The code is shown in Listing 9-3.
public class MainActivity extends AppCompatActivity {
  private GLSurfaceView glView;
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    glView = findViewById(R.id.gl_view);
  }
}
Listing 9-3

Get a reference to the GLSurfaceView

Next, still on MainActivity, let’s determine if there’s support for OpenGL ES 2.0. This can be done by using an ActivityManager object which lets us interact with the global system state; we can use this to get the device configuration info, which in turn can tell us if the device supports OpenGL ES 2. The code to do this is shown in Listing 9-4.
ActivityManager am = (ActivityManager)
                     getSystemService(Context.ACTIVITY_SERVICE);
ConfigurationInfo ci = am.getDeviceConfigurationInfo();
boolean isES2Supported = ci.reqGlEsVersion >= 0x20000;
Listing 9-4

Determine support for OpenGL ES 2.0

Once we know if the device supports OpenGL ES 2 (or not), we tell the surface that we’d like an OpenGL ES 2 compatible surface, and then we pass it in a custom renderer. The runtime will call this renderer whenever it’s time to adjust the surface or draw a new frame. Listing 9-5 shows the annotated code for MainActivity.
import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.ConfigurationInfo;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import androidx.appcompat.app.AppCompatActivity;
public class MainActivity extends AppCompatActivity {
  private GLSurfaceView glView;
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    glView = findViewById(R.id.gl_view);
    ActivityManager am = (ActivityManager)
         getSystemService(Context.ACTIVITY_SERVICE);
    ConfigurationInfo ci = am.getDeviceConfigurationInfo();
    boolean isES2Supported = ci.reqGlEsVersion >= 0x20000;
    if(isES2Supported) {  ❶
      glView.setEGLContextClientVersion(2);  ❷
      glView.setRenderer(new GLSurfaceView.Renderer() {  ❸
        @Override
        public void onSurfaceCreated(GL10 gl10, EGLConfig eglConfig) {
          glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); ❹
          // statements ❺
        }
        @Override
        public void onSurfaceChanged(GL10 gl10, int width, int height) {
          GLES20.glViewport(0,0, width, height); ❻
        }
        @Override
        public void onDrawFrame(GL10 gl10) {
          // statements ❼
        }
      });
    }
    else {
    }
  }
}
Listing 9-5

MainActivity, creation of OpenGL ES 2 environment

Once we know OpenGL ES 2 is supported, we proceed to creating an OpenGL ES 2 environment.

We tell the surface view that we want an OpenGL ES 2 compatible surface.

We create a custom renderer using an anonymous class, then passing an instance of that class to the setRenderer() method of the surface view.

We’re setting the render mode to draw only when there is a change to the drawing data.

This is a good place to create objects you will use for drawing; think of this as the equivalent of the Activity’s onCreate() method. This method may also be called again if the surface context is lost and later recreated.

The runtime calls this method once after the surface has been created and subsequently whenever the size of the surface changes. This is where you set the viewport, because by the time this is called, we’ve got the dimensions of the surface. Think of this as the equivalent of the onSizeChanged() method of the View class. This may also be called when the device switches orientation, for example, from portrait to landscape.

This is where we do our drawing. This is called when it’s time to draw a new frame.

The onDrawFrame() method of the Renderer is where we tell OpenGL ES to draw something on the surface. We’ll do this by passing an array of numbers which represents positions, colors, and so on. In our case, we’re going to draw a sphere. We can hand-code the arrays of numbers—which represent X,Y,Z coordinates of the vertices—that we need to pass OpenGL ES, but that may not help us to envision what we’re trying to draw. So, instead, let’s use a 3D creation suite like Blender (www.blender.org) to draw a shape.

Blender is open source; you can use it freely. Once you’re done with the download and installation, you can launch Blender, then delete the default cube by pressing X; next, press Shift+A and select Mesh ➤ Ico Sphere, as shown in Figure 9-5.
Figure 9-5

Create an Icosphere

Now we’ve got a moderately interesting object with a good number of vertices; it would be cumbersome to hand-code all of them, which is why we took the Blender route.

To use the sphere in our app, we must export it as a Wavefront object. A Wavefront object is a geometry definition file format. It’s an open format that has been adopted by many 3D graphics application vendors. It’s a simple data format that represents 3D geometry, namely, the position of each vertex and the faces that make up each polygon, defined as a list of vertex indices. For our purposes, we’re only interested in the positions of the vertices and the faces.

In Blender, go to File ➤ Export ➤ Wavefront (.obj) as shown in Figure 9-6. In the following screen, give it a name (sphere.obj) and save it in a location of your choice. Don’t forget to note the export settings of Blender; check only the following:
  • Export as OBJ object

  • Triangulate faces

  • Keep vertex order

Figure 9-6

Export the sphere to Wavefront object format

These are the settings I found to be easy to work with, especially when you’re about to parse the exported vertex and faces data.

The resulting object file is actually a text file; Listing 9-6 shows a partial listing of that sphere.obj.
# Blender v2.82 (sub 7) OBJ File: 'sphere.blend'
# www.blender.org
o Icosphere
v 0.000000 -1.000000 0.000000
v 0.723607 -0.447220 0.525725
v -0.276388 -0.447220 0.850649
v -0.894426 -0.447216 0.000000
v -0.276388 -0.447220 -0.850649
v 0.723607 -0.447220 -0.525725
v 0.276388 0.447220 0.850649
s off
f 1 14 13
f 2 14 16
f 1 13 18
f 1 18 20
f 1 20 17
f 2 16 23
f 3 15 25
f 4 19 27
f 5 21 29
Listing 9-6

Partial sphere.obj

Notice how each line starts with either a “v” or an “f”. A line that starts with a “v” represents a single vertex, and a line that starts with an “f” represents a face. The vertex lines have the X, Y, and Z coordinates of a vertex, while the face lines have the indices of the three vertices (which together form a face).

To keep things organized, let’s create a class that will represent our sphere object—we don’t really want to write all the drawing code inside the onDrawFrame() method now, do we?

Let’s create a new class and add it to the project. You can do this by using Android Studio’s context menu; right-click the package name (as shown in Figure 9-7), then choose New ➤ Java Class.
Figure 9-7

Create a new class

In the screen that follows, provide the name of the class (Sphere), as shown in Figure 9-8.
Figure 9-8

Provide a name for the class

We’ll build the Sphere class as a basic POJO that contains all the data that OpenGL ES requires to draw a shape. Listing 9-7 shows the starting code for Sphere.java.
public class Sphere {
  private List<String> vertList;
  private List<String> facesList;
  private Context ctx;
  private final String TAG = getClass().getName();
  public Sphere(Context context) {
    ctx = context;
    vertList = new ArrayList<>();
    facesList = new ArrayList<>();
  }
}
Listing 9-7

Sphere.java

The Sphere class has two List objects which will hold the vertices and faces data (which we will load from the OBJ file). Apart from that, there’s a Context object and a String object:
  • Context ctx—The context object will be needed by some of our methods, so I made it a member variable.

  • String TAG—I just need an identifying String for when we do some logging.

The idea is to read the exported Wavefront OBJ file and load the vertices and faces data into their corresponding List objects. Before we can read the file, we need to add it to the project. We can do that by creating an assets folder. An assets folder gives us the ability to add external files to the project and make them accessible to our code. If your project doesn’t have an assets folder, you can create one. To do that, use the context menu; right-click “app” in the Project tool window (as shown in Figure 9-9), then select New ➤ Folder ➤ Assets Folder.
Figure 9-9

Create an assets folder

In the window that follows, click Finish, as shown in Figure 9-10.
Figure 9-10

New Android component

Gradle will perform a “sync” after you’ve added a folder to the project. Figure 9-11 shows the Project tool window with the newly created assets folder.
Figure 9-11

Assets folder created

Next, right-click the assets folder, then choose Reveal in Finder (as shown in Figure 9-12)—this is the prompt I got because I’m using macOS. If you’re on Windows, you will see “Show in Explorer” instead.
Figure 9-12

Reveal in Finder or Show in Explorer (for Windows users)

You can now transfer the sphere.obj file to the assets folder of the project.

Alternatively, you can copy the sphere.obj file to the assets folder using the Terminal of Android Studio (as shown in Figure 9-13).
Figure 9-13

Copy files using Terminal

Use whichever way is more convenient for you. Some prefer the GUI way, and some prefer the command line. Use the tools you’re more familiar with.

Now we can read the contents of the OBJ file and load them onto the ArrayList objects. In the Sphere class, add a method named loadVertices() and modify it to match Listing 9-8.
import java.util.Scanner;
// class definition and other statements
private void loadVertices() {
  try {
    Scanner scanner = new Scanner(ctx.getAssets().open("sphere.obj")); ❶
    while(scanner.hasNextLine()) {  ❷
      String line = scanner.nextLine(); ❸
      if(line.startsWith("v ")) {
        vertList.add(line);  ❹
      } else if(line.startsWith("f ")) {
        facesList.add(line); ❺
      }
    }
    scanner.close();
  }
  catch(IOException ioe) {
    Log.e(TAG, ioe.getMessage()); ❻
  }
}
Listing 9-8

loadVertices()

Create a new Scanner object and open the sphere.obj text file.

hasNextLine() returns true as long as there is another line to read, so this loop runs until we reach the end of the file.

Read the contents of the current line and save it to the line variable.

If the line starts with “v ” (a vertex line), add it to the vertList ArrayList.

If the line starts with “f ” (a face line), add it to the facesList ArrayList.

We’re coding our app using the Java language, but you need to remember that OpenGL ES is actually a bunch of C APIs. We can’t simply pass our list of vertices and faces to OpenGL ES directly. We need to convert our vertices and faces data into something OpenGL ES will understand.

Java and the native system might not store their bytes in the same order, so we use a special set of buffer classes: we create a ByteBuffer large enough to hold our data and tell it to store its data using the native byte order. This is an extra step we need to do before passing our data to OpenGL. To do that, let’s add another method to the Sphere class; Listing 9-9 shows the contents of the createBuffers() method.
private FloatBuffer vertBuffer;  ❶
private ShortBuffer facesBuffer;
// some other statements
private void createBuffers() {
  // BUFFER FOR VERTICES
  ByteBuffer buffer1 = ByteBuffer.allocateDirect(vertList.size() * 3 * 4); ❷
  buffer1.order(ByteOrder.nativeOrder());
  vertBuffer = buffer1.asFloatBuffer();
  // BUFFER FOR FACES
  ByteBuffer buffer2 = ByteBuffer.allocateDirect(facesList.size() * 3 * 2); ❸
  buffer2.order(ByteOrder.nativeOrder());
  facesBuffer = buffer2.asShortBuffer();
  for(String vertex: vertList) {  ❹
    String coords[] = vertex.split(" ");  ❺
    float x = Float.parseFloat(coords[1]);
    float y = Float.parseFloat(coords[2]);
    float z = Float.parseFloat(coords[3]);
    vertBuffer.put(x);
    vertBuffer.put(y);
    vertBuffer.put(z);
  }
  vertBuffer.position(0);  ❻
  for(String face: facesList) {
    String vertexIndices[] = face.split(" ");  ❼
    short vertex1 = Short.parseShort(vertexIndices[1]);
    short vertex2 = Short.parseShort(vertexIndices[2]);
    short vertex3 = Short.parseShort(vertexIndices[3]);
    facesBuffer.put((short)(vertex1 - 1)); ❽
    facesBuffer.put((short)(vertex2 - 1));
    facesBuffer.put((short)(vertex3 - 1));
  }
  facesBuffer.position(0);
}
Listing 9-9

createBuffers()

You have to add FloatBuffer and ShortBuffer member variables to the Sphere class. We will use these to hold the vertices and faces data.

Initialize the buffer using the allocateDirect() method. We’re allocating 4 bytes for each coordinate (because they are float numbers). Once the buffer is created, we convert it to a FloatBuffer by calling the asFloatBuffer() method.

Similarly, we initialize a ByteBuffer for the faces, but this time, we allocate only 2 bytes for each vertex index, because the indices are unsigned short. Next, we call the asShortBuffer() method to convert the ByteBuffer to a ShortBuffer.

To parse the vertices List object, we go through it using Java’s enhanced for-loop.

Each entry in the vertices List object is a line that holds the X, Y, Z position of the vertex, like v 0.723607 -0.447220 0.525725; the values are separated by spaces. So, we use the split() method of the String object with a space as the delimiter. This call returns an array of Strings whose first element is the “v” marker and whose next three elements are the coordinates. We convert those three elements to float numbers and populate the FloatBuffer.

Reset the position of the buffer.

Same drill as with the vertices List: we split each face line into array elements, but this time convert them to short.

The OBJ indices start from 1 (not 0), so we subtract 1 from each converted value before we add it to the ShortBuffer.

The next step is to create the shaders. We can’t render our 3D sphere if we don’t create the shaders; we need a vertex shader and a fragment shader. A shader is written in a C-like language called OpenGL Shading Language (GLSL for short).

A vertex shader is responsible for a 3D object’s vertices, while a fragment shader (also called a pixel shader) handles the coloring of the 3D object’s pixels.

To create the vertex shader, add a file to the project’s assets folder and name it vertex_shader.txt, as shown in Figure 9-14.
Figure 9-14

New file

In the window that follows (Figure 9-15), enter the name of the file.
Figure 9-15

Enter a new file name

Modify the newly created vertex_shader.txt to match the contents of Listing 9-10.
attribute vec4 position; ❶
uniform mat4 matrix; ❷
void main() {
    gl_Position = matrix * position; ❸
}
Listing 9-10

vertex_shader.txt

The attribute global variable receives the vertex position data from our Java program.

This uniform global variable receives the view-projection matrix from our Java code.

Inside the main() function, we set the value of gl_Position (a GLSL built-in variable) to the product of the uniform matrix and the attribute position.

Next, we create the fragment shader. Like what we did in vertex_shader, add a file to the project and name it fragment_shader.txt. Modify the contents of the fragment shader program to match Listing 9-11.
precision mediump float;
void main() {
    gl_FragColor = vec4(0.481,1.000,0.865,1.000);
}
Listing 9-11

fragment_shader.txt

It’s a minimalistic fragment shader; it basically assigns a light green color to all the pixels.

The next step is to load these shaders into our Java program and compile them. We will add another method to the Sphere class named createShaders(); its contents are shown in Listing 9-12.
// class definition and other statements
private int vertexShader;  ❶
private int fragmentShader;
private void createShaders() {
  try {
    Scanner scannerFrag = new Scanner(ctx.getAssets()
                              .open("fragment_shader.txt")); ❷
    Scanner scannerVert = new Scanner(ctx.getAssets()
                              .open("vertex_shader.txt")); ❸
    StringBuilder sbFrag = new StringBuilder(); ❹
    StringBuilder sbVert = new StringBuilder();
    while (scannerFrag.hasNext()) {
      sbFrag.append(scannerFrag.nextLine()); ❺
    }
    while(scannerVert.hasNext()) {
      sbVert.append(scannerVert.nextLine());
    }
    String vertexShaderCode = new String(sbVert.toString()); ❻
    String fragmentShaderCode = new String(sbFrag.toString());
    Log.d(TAG, vertexShaderCode);
    vertexShader = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER); ❼
    GLES20.glShaderSource(vertexShader, vertexShaderCode);
    fragmentShader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
    GLES20.glShaderSource(fragmentShader, fragmentShaderCode);
    GLES20.glCompileShader(vertexShader);  ❽
    GLES20.glCompileShader(fragmentShader);
  }
  catch(IOException ioe) {
    Log.e(TAG, ioe.getMessage());
  }
}
Listing 9-12

createShaders()

Add member variable declarations for vertexShader and fragmentShader.

Open fragment_shader.txt for reading.

Open vertex_shader.txt for reading.

Create a StringBuilder to hold the partial Strings we will read from the Scanner object; do this for both fragment_shader.txt and vertex_shader.txt.

Append the current line to the StringBuilder (do this for both StringBuilder objects).

When all the lines in the Scanner object have been read and appended to the StringBuilder, we create a new String object. Do this for both StringBuilders.

The shader’s code must be added to the shader objects of OpenGL ES. We create a new shader using the glCreateShader() method, then we set the shader source using the newly created shader and the shader program code; do this for both vertex_shader and fragment_shader.

Finally, compile the shaders.
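
Note that glCompileShader() doesn’t throw an exception when the GLSL code is invalid; it just marks the shader as failed. The chapter’s listing skips this check, but a short, optional sketch using the GLES20 query methods looks like this (shown for the vertex shader; the fragment shader check is identical):

int[] compileStatus = new int[1];
GLES20.glGetShaderiv(vertexShader, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
if (compileStatus[0] == 0) {
  // the info log describes what went wrong in the GLSL source
  Log.e(TAG, GLES20.glGetShaderInfoLog(vertexShader));
}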

Before we can use the shaders, we need to link them to a program; we can’t use the shaders directly. The program is what connects the output of the vertex shader with the input of the fragment shader. It’s also what lets us pass input from our Java code and use the shaders to draw our shapes.

We’ll create a new program object, and if that turns out well, we’ll attach the shaders. Let’s add a new method to the Sphere class and name it runProgram(); the code for this method is shown in Listing 9-13.
private int program; ❶
// other statements
private void runProgram() {
  program = GLES20.glCreateProgram(); ❷
  GLES20.glAttachShader(program, vertexShader); ❸
  GLES20.glAttachShader(program, fragmentShader); ❹
  GLES20.glLinkProgram(program); ❺
  GLES20.glUseProgram(program);
}
Listing 9-13

runProgram()

You need to create the program as a member variable in the Sphere class.

Use the glCreateProgram() method to create a program.

Attach the vertex shader to the program.

Attach the fragment shader to the program.

To start using the program, we need to link it using the glLinkProgram() method and put it to use via the glUseProgram() method.
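
Linking can fail as well, for example, when the vertex and fragment shaders don’t match up. This check isn’t part of the chapter’s listing either, but it follows the same pattern as the shader compile check:

int[] linkStatus = new int[1];
GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linkStatus, 0);
if (linkStatus[0] == 0) {
  Log.e(TAG, GLES20.glGetProgramInfoLog(program));
}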

Now that all the buffers and the shaders are ready, we can finally draw something to the screen. Let’s add another method to the Sphere class and name it draw(); the code for this method is shown in Listing 9-14.
import android.opengl.Matrix; ❶
// class definition and other statements
public void draw() {
  int position = GLES20.glGetAttribLocation(program, "position"); ❷
  GLES20.glEnableVertexAttribArray(position);
  GLES20.glVertexAttribPointer(position, 3, GLES20.GL_FLOAT, false, 3 * 4, vertBuffer); ❸
  float[] projectionMatrix = new float[16]; ❹
  float[] viewMatrix = new float[16];
  float[] productMatrix = new float[16];
  Matrix.frustumM(projectionMatrix, 0, -1, 1, -1, 1, 2, 9); ❺
  Matrix.setLookAtM(viewMatrix, 0, 0, 3, -4, 0, 0, 0, 0, 1, 0f); ❻
  Matrix.multiplyMM(productMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
  int matrix = GLES20.glGetUniformLocation(program, "matrix"); ❼
  GLES20.glUniformMatrix4fv(matrix, 1, false, productMatrix, 0);
  GLES20.glDrawElements(GLES20.GL_TRIANGLES, facesList.size() * 3,
                        GLES20.GL_UNSIGNED_SHORT, facesBuffer); ❽
  GLES20.glDisableVertexAttribArray(position);
}
Listing 9-14

draw()

You need to import the Matrix class.

If you remember, in vertex_shader.txt we defined a position variable that’s supposed to receive vertex position data from our Java code; we’re about to send that data to this position variable. To do that, we must first get a reference to the position variable in the vertex shader. We do that using the glGetAttribLocation() method, and then we enable it using the glEnableVertexAttribArray() method.

Point the position handle to the vertices buffer. The glVertexAttribPointer() method also expects the number of coordinates per vertex and the stride, which is the number of bytes from the start of one vertex to the start of the next. Each coordinate is a 4-byte float and there are three coordinates per vertex, so the stride is 3 * 4.

Our vertex shader expects a view-projection matrix, which is the product of the view and projection matrices. A view matrix allows us to specify the location of the camera and the point it’s looking at. A projection matrix lets us map the 3D scene onto the square, normalized coordinates of the surface and also specify the near and far planes of the viewing frustum. We simply create float arrays for these matrices.

Initialize the projection matrix using the frustumM() method of the Matrix class. You need to pass some arguments to this method; it expects the locations of the left, right, bottom, top, near, and far clip planes. Because the GLSurfaceView we defined in the activity_main layout file is square, we can use the values -1 and 1 for the left/right and bottom/top clip planes; the near and far planes here are 2 and 9.

The setLookAtM() method is used to initialize the view matrix. It expects the position of the camera, the point it is looking at, and an up vector. Then we calculate the product matrix using the multiplyMM() method.

To pass the product matrix to the shader, we first get a handle to the matrix uniform using the glGetUniformLocation() method; once we have the handle (the matrix variable), we point it to the product matrix using the glUniformMatrix4fv() method.

The glDrawElements() method lets us use the faces buffer to create triangles; its arguments expect the total number of vertex indices, the type of each index, and the faces buffer.
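
One more note on the frustumM() call in ❺: the fixed -1 and 1 values only look right because our GLSurfaceView is square. If you later make the surface non-square, a common adjustment (a sketch, not part of the chapter’s code) is to fold the surface’s aspect ratio into the left and right clip planes, using the width and height reported by onSurfaceChanged():

float aspect = (float) width / height;  // width and height come from onSurfaceChanged()
Matrix.frustumM(projectionMatrix, 0, -aspect, aspect, -1, 1, 2, 9);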

Now that we’ve got the methods to load the vertices from the exported Blender file, create all the buffers, compile the shaders, and create an OpenGL program, we can tie all these methods together in the constructor of the Sphere class, as shown in Listing 9-15.
public Sphere(Context context) {
  ctx = context;
  vertList = new ArrayList<>();
  facesList = new ArrayList<>();
  loadVertices();
  createBuffers();
  createShaders();
  runProgram();
}
Listing 9-15

Constructor of the Sphere class

After adding all these methods, it may be difficult to keep the code straight. So, I’m showing all the contents of the Sphere class in Listing 9-16, for your reference.
import android.content.Context;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import android.opengl.GLES20;
import android.opengl.Matrix;
import android.util.Log;
public class Sphere {
  private FloatBuffer vertBuffer;
  private ShortBuffer facesBuffer;
  private List<String> vertList;
  private List<String> facesList;
  private Context ctx;
  private final String TAG = getClass().getName();
  private int vertexShader;
  private int fragmentShader;
  private int program;
  public Sphere(Context context) {
    ctx = context;
    vertList = new ArrayList<>();
    facesList = new ArrayList<>();
    loadVertices();
    createBuffers();
    createShaders();
    runProgram();
  }
  private void loadVertices() {
    try {
      Scanner scanner = new Scanner(ctx.getAssets().open("sphere.obj"));
      while(scanner.hasNextLine()) {
        String line = scanner.nextLine();
        if(line.startsWith("v ")) {
          vertList.add(line);
        } else if(line.startsWith("f ")) {
          facesList.add(line);
        }
      }
      scanner.close();
    }
    catch(IOException ioe) {
      Log.e(TAG, ioe.getMessage());
    }
  }
  private void createBuffers() {
    // BUFFER FOR VERTICES
    ByteBuffer buffer1 = ByteBuffer.allocateDirect(vertList.size() * 3 * 4);
    buffer1.order(ByteOrder.nativeOrder());
    vertBuffer = buffer1.asFloatBuffer();
    // BUFFER FOR FACES
    ByteBuffer buffer2 = ByteBuffer.allocateDirect(facesList.size() * 3 * 2);
    buffer2.order(ByteOrder.nativeOrder());
    facesBuffer = buffer2.asShortBuffer();
    for(String vertex: vertList) {
      String coords[] = vertex.split(" ");
      float x = Float.parseFloat(coords[1]);
      float y = Float.parseFloat(coords[2]);
      float z = Float.parseFloat(coords[3]);
      vertBuffer.put(x);
      vertBuffer.put(y);
      vertBuffer.put(z);
    }
    vertBuffer.position(0);
    for(String face: facesList) {
      String vertexIndices[] = face.split(" ");
      short vertex1 = Short.parseShort(vertexIndices[1]);
      short vertex2 = Short.parseShort(vertexIndices[2]);
      short vertex3 = Short.parseShort(vertexIndices[3]);
      facesBuffer.put((short)(vertex1 - 1));
      facesBuffer.put((short)(vertex2 - 1));
      facesBuffer.put((short)(vertex3 - 1));
    }
    facesBuffer.position(0);
  }
  private void createShaders() {
    try {
      Scanner scannerFrag = new Scanner(ctx.getAssets()
                                .open("fragment_shader.txt"));
      Scanner scannerVert = new Scanner(ctx.getAssets()
                                .open("vertex_shader.txt"));
      StringBuilder sbFrag = new StringBuilder();
      StringBuilder sbVert = new StringBuilder();
      while (scannerFrag.hasNext()) {
        sbFrag.append(scannerFrag.nextLine());
      }
      while(scannerVert.hasNext()) {
        sbVert.append(scannerVert.nextLine());
      }
      String vertexShaderCode = new String(sbVert.toString());
      String fragmentShaderCode = new String(sbFrag.toString());
      Log.d(TAG, vertexShaderCode);
      vertexShader = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
      GLES20.glShaderSource(vertexShader, vertexShaderCode);
      fragmentShader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
      GLES20.glShaderSource(fragmentShader, fragmentShaderCode);
      GLES20.glCompileShader(vertexShader);
      GLES20.glCompileShader(fragmentShader);
    }
    catch(IOException ioe) {
      Log.e(TAG, ioe.getMessage());
    }
  }
  private void runProgram() {
    program = GLES20.glCreateProgram();
    GLES20.glAttachShader(program, vertexShader);
    GLES20.glAttachShader(program, fragmentShader);
    GLES20.glLinkProgram(program);
    GLES20.glUseProgram(program);
  }
  public void draw() {
    int position = GLES20.glGetAttribLocation(program, "position");
    GLES20.glEnableVertexAttribArray(position);
    GLES20.glVertexAttribPointer(position, 3, GLES20.GL_FLOAT, false, 3 * 4, vertBuffer);
    float[] projectionMatrix = new float[16];
    float[] viewMatrix = new float[16];
    float[] productMatrix = new float[16];
    Matrix.frustumM(projectionMatrix, 0, -1, 1, -1, 1, 2, 9);
    Matrix.setLookAtM(viewMatrix, 0, 0, 3, -4, 0, 0, 0, 0, 1, 0f);
    Matrix.multiplyMM(productMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
    int matrix = GLES20.glGetUniformLocation(program, "matrix");
    GLES20.glUniformMatrix4fv(matrix, 1, false, productMatrix, 0);
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, facesList.size() * 3, GLES20.GL_UNSIGNED_SHORT, facesBuffer);
    GLES20.glDisableVertexAttribArray(position);
  }
}
Listing 9-16

Complete code for the Sphere class

Now that all of the code for the Sphere class is complete, we can go back to MainActivity. Remember in MainActivity that we created a Renderer object using an anonymous inner class. We created that renderer because a GLSurfaceView needs a renderer object so that it can, well, render 3D graphics. Listing 9-17 shows the complete code for MainActivity.
public class MainActivity extends AppCompatActivity {
  private GLSurfaceView glView;
  private Sphere sphere;  ❶
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    glView = findViewById(R.id.gl_view);
    ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
    ConfigurationInfo ci = am.getDeviceConfigurationInfo();
    boolean isES2Supported = ci.reqGlEsVersion >= 0x20000;
    if(isES2Supported) {
      glView.setEGLContextClientVersion(2);
      glView.setRenderer(new GLSurfaceView.Renderer() {
        @Override
        public void onSurfaceCreated(GL10 gl10, EGLConfig eglConfig) {
          glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
          sphere = new Sphere(getApplicationContext()); ❷
        }
        @Override
        public void onSurfaceChanged(GL10 gl10, int width, int height) {
          GLES20.glViewport(0,0, width, height);
        }
        @Override
        public void onDrawFrame(GL10 gl10) {
          sphere.draw(); ❸
        }
      });
    }
    else {
    }
  }
}
Listing 9-17

MainActivity, complete

Create a member variable as a reference to the sphere object we’re about to create.

Create the sphere object; pass the current context as an argument.

Call the draw() method of the sphere.

At this point, you’re ready to run the app. Figure 9-16 shows the app at runtime.
Figure 9-16

Icosphere rendered in OpenGL ES

After nearly 300 lines of code, all we got was a little green Icosphere without much definition. Welcome to OpenGL ES programming. This should give you an idea of how involved it is and how much work goes into an OpenGL ES game.

Key Takeaways

  • Starting with Android 3 (API level 11), drawings done on the Canvas already enjoy hardware acceleration, so it’s not a bad choice of tech for game programming. However, if the visual complexities of your game exceed the capabilities of the Canvas, you should consider drawing the graphics using OpenGL ES.

  • OpenGL ES is really good at just drawing triangles, not much else. It gives you a lot of control, though, over how you draw those triangles. With it, you can control the camera, the light source, and the texture, among other things.

  • The Android SDK already has built-in support for OpenGL ES. The GLSurfaceView, which is what you will typically use for drawing OpenGL ES objects, is already included in the SDK.
