Point sprites with the geometry shader

Point sprites are simple quads (usually texture mapped) that are aligned such that they are always facing the camera. They are very useful for particle systems in 3D (refer to Chapter 9, Particle Systems and Animation) or 2D games. Point sprites are specified by the OpenGL application as single point primitives, via the GL_POINTS rendering mode. This simplifies the process, because the quad itself and the texture coordinates for the quad are determined automatically. The OpenGL side of the application can effectively treat them as point primitives, avoiding the need to compute the positions of the quad vertices.

The following screenshot shows a group of point sprites. Each sprite is rendered as a point primitive. The quad and texture coordinates are generated automatically (within the geometry shader) and aligned to face the camera.

[Screenshot: a group of point sprites rendered as camera-facing quads]

OpenGL already has built-in support for point sprites in the GL_POINTS rendering mode. When rendering point primitives using this mode, the points are rendered as screen-space squares whose side length is defined by the glPointSize function. In addition, OpenGL will automatically generate texture coordinates for the fragments of the square. These coordinates run from zero to one in each direction (left-to-right for s, bottom-to-top for t), and are accessible in the fragment shader via the gl_PointCoord built-in variable.

There are various ways to fine-tune the rendering of point sprites within OpenGL. One can define the origin of the automatically generated texture coordinates using the glPointParameter functions. The same set of functions can also be used to tweak the way that OpenGL defines the alpha value for points when multisampling is enabled.

The built-in support for point sprites does not allow the programmer to rotate the screen-space squares, or define them as different shapes such as rectangles or triangles. However, one can achieve similar effects with creative use of textures and transformations of the texture coordinates. For example, we could transform the texture coordinates using a rotation matrix to create the look of a rotating object even though the geometry itself is not actually rotating. In addition, the size of the point sprite is a screen-space size. In other words, the point size must be adjusted with the depth of the point sprite if we want to get a perspective effect (sprites get smaller with distance).
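The texture-coordinate rotation mentioned above can be sketched in plain C++ (the function and type names here are hypothetical, for illustration only). The same math, applied to gl_PointCoord in a fragment shader, would make a sprite appear to spin even though the underlying square never rotates:

```cpp
#include <cmath>

// Illustrative CPU-side sketch (not part of the recipe's shaders): rotate a
// point-sprite texture coordinate about the center of the [0,1] x [0,1]
// square. The names Vec2 and rotateTexCoord are hypothetical.
struct Vec2 { float s, t; };

Vec2 rotateTexCoord(Vec2 tc, float angleRadians)
{
    float c = std::cos(angleRadians);
    float s = std::sin(angleRadians);
    // Translate so the rotation pivots about the square's center (0.5, 0.5)
    float x = tc.s - 0.5f;
    float y = tc.t - 0.5f;
    // Standard 2D rotation, then translate back
    return { c * x - s * y + 0.5f,
             s * x + c * y + 0.5f };
}
```

For example, rotating the coordinate (1.0, 0.5) by 90 degrees counter-clockwise about the center yields approximately (0.5, 1.0).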

If these (and possibly other) issues make the default support for point sprites too limiting, we can use the geometry shader to generate our point sprites. In fact, this technique is a good example of using the geometry shader to generate different kinds of primitives than it receives. The basic idea here is that the geometry shader will receive point primitives (in camera coordinates) and will output a quad centered at the point and aligned so that it is facing the camera. The geometry shader will also automatically generate texture coordinates for the quad.

If desired, we could generate other shapes such as hexagons, or we could rotate the quads before they are output from the geometry shader. The possibilities are endless. Implementing the primitive generation within the geometry shader gives us a great deal of flexibility, but possibly at the cost of some efficiency. The default OpenGL support for point sprites is highly optimized and is likely to be faster in general.
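As a sketch of the hexagon idea, the rim-vertex offsets can be computed with cos/sin at 60-degree steps (a geometry shader variant could emit a center vertex plus these six offsets, for example as a triangle fan or a reordered strip, with max_vertices raised accordingly). The names here are hypothetical:

```cpp
#include <cmath>

// Illustrative sketch: offsets for a hexagonal sprite instead of a quad,
// computed in plain C++. Offset and hexagonOffsets are hypothetical names.
struct Offset { float x, y; };

void hexagonOffsets(float radius, Offset out[6])
{
    const float pi = 3.14159265f;
    for (int i = 0; i < 6; ++i) {
        float angle = 2.0f * pi * i / 6.0f;   // 60-degree steps around the center
        out[i] = { radius * std::cos(angle),
                   radius * std::sin(angle) };
    }
}
```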

Before jumping directly into the code, let's take a look at some of the mathematics. In the geometry shader, we'll need to generate the vertices of a quad that is centered at a point and aligned with the camera's coordinate system (eye coordinates). Given the point location (P) in camera coordinates, we can generate the vertices of the corners of the quad by simply translating P in a plane parallel to the x-y plane of the camera's coordinate system as shown in the following figure:

[Figure: the quad's corners generated by offsetting P within a plane parallel to the camera's x-y plane]

The geometry shader will receive the point location in camera coordinates, and output the quad as a triangle strip with texture coordinates. The fragment shader will then just apply the texture to the quad.
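The corner math described above can be sketched in plain C++ (not shader code; the type and function names are hypothetical). Given P in camera coordinates and half the quad's width, the four corners lie in the plane z = P.z, offset along the camera's x and y axes, which is exactly what the geometry shader below does before multiplying by the projection matrix:

```cpp
// Illustrative sketch of the quad-corner computation in eye coordinates.
// Vec3 and quadCorners are hypothetical names for illustration.
struct Vec3 { float x, y, z; };

void quadCorners(Vec3 p, float size2, Vec3 corners[4])
{
    corners[0] = { p.x - size2, p.y - size2, p.z };  // bottom-left
    corners[1] = { p.x + size2, p.y - size2, p.z };  // bottom-right
    corners[2] = { p.x - size2, p.y + size2, p.z };  // top-left
    corners[3] = { p.x + size2, p.y + size2, p.z };  // top-right
}
```

Note that this ordering (bottom-left, bottom-right, top-left, top-right) matches the emission order used in the geometry shader's triangle strip.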

Getting ready

For this example, we'll need to render a number of point primitives. The positions can be sent via attribute location 0. There's no need to provide normal vectors or texture coordinates for this one.

The following uniform variables are defined within the shaders, and need to be set within the OpenGL program:

  • Size2: This should be half the width of the sprite's square
  • SpriteTex: This is the texture unit containing the point sprite texture

As usual, uniforms for the standard transformation matrices are also defined within the shaders, and need to be set within the OpenGL program.

How to do it...

To create a shader program that can be used to render point primitives as quads, use the following steps:

  1. Use the following code for the vertex shader:
    layout (location = 0) in vec3 VertexPosition;
    
    uniform mat4 ModelViewMatrix;
    uniform mat3 NormalMatrix;
    uniform mat4 ProjectionMatrix;
    
    void main()
    {
        gl_Position = ModelViewMatrix * 
                      vec4(VertexPosition,1.0);
    }
  2. Use the following code for the geometry shader:
    layout( points ) in;
    layout( triangle_strip, max_vertices = 4 ) out;
    
    uniform float Size2;   // Half the width of the quad
    
    uniform mat4 ProjectionMatrix;
    
    out vec2 TexCoord;
    
    void main()
    {
        mat4 m = ProjectionMatrix;  // Reassign for brevity
    
        gl_Position = m * (vec4(-Size2,-Size2,0.0,0.0) + 
                           gl_in[0].gl_Position);
        TexCoord = vec2(0.0,0.0);
        EmitVertex();
    
        gl_Position = m * (vec4(Size2,-Size2,0.0,0.0) + 
                           gl_in[0].gl_Position);
        TexCoord = vec2(1.0,0.0);
        EmitVertex();
    
        gl_Position = m * (vec4(-Size2,Size2,0.0,0.0) + 
                           gl_in[0].gl_Position);
        TexCoord = vec2(0.0,1.0);
        EmitVertex();
    
        gl_Position = m * (vec4(Size2,Size2,0.0,0.0) + 
                           gl_in[0].gl_Position);
        TexCoord = vec2(1.0,1.0);
        EmitVertex();
    
        EndPrimitive();
    }
  3. Use the following code for the fragment shader:
    in vec2 TexCoord;  // From the geometry shader
    
    uniform sampler2D SpriteTex;
    
    layout( location = 0 ) out vec4 FragColor;
    
    void main()
    {
        FragColor = texture(SpriteTex, TexCoord);
    }
  4. Within the OpenGL render function, render a set of point primitives.

How it works...

The vertex shader is almost as simple as it can get. It converts the point's position to camera coordinates by multiplying by the model-view matrix, and assigns the result to the built-in output variable gl_Position.

In the geometry shader, we start by defining the kind of primitive that this geometry shader expects to receive. The first layout statement indicates that this geometry shader will receive point primitives.

layout( points ) in;

The next layout statement indicates the kind of primitives produced by this geometry shader, and the maximum number of vertices that will be output.

layout( triangle_strip, max_vertices = 4 ) out;

In this case, we want to produce a single quad for each point received, so we indicate that the output will be a triangle strip with a maximum of four vertices.
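The strip-to-triangle expansion can be sketched in plain C++ (Tri and expandStrip are hypothetical names). With the shader's emission order of bottom-left, bottom-right, top-left, top-right, the strip rule (flipping the winding of every other triangle) yields two counter-clockwise triangles that cover the quad:

```cpp
// Illustrative sketch: how an n-vertex triangle strip expands into n - 2
// triangles, swapping the first two indices of odd triangles so all
// triangles keep a consistent winding. Returns the triangle count.
struct Tri { int a, b, c; };

int expandStrip(int n, Tri tris[])
{
    int count = 0;
    for (int i = 0; i + 2 < n; ++i) {
        if (i % 2 == 0)
            tris[count++] = { i, i + 1, i + 2 };
        else
            tris[count++] = { i + 1, i, i + 2 };
    }
    return count;
}
```

For a four-vertex strip this produces the triangles (0, 1, 2) and (2, 1, 3).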

The input primitive is available to the geometry shader via the built-in input variable gl_in. Note that it is an array of structures. You might be wondering why this is an array since a point primitive is only defined by a single position. Well, in general the geometry shader can receive triangles, lines, or points (and possibly adjacency information). So, the number of values available may be more than one. If the input were triangles, the geometry shader would have access to three input values (associated with each vertex). In fact, it could have access to as many as six values when triangles_adjacency is used (more on that in a later recipe).

Note

The gl_in variable is an array of structs. Each struct contains the following fields: gl_Position, gl_PointSize, and gl_ClipDistance[]. In this example, we are only interested in gl_Position. However, the others can be set in the vertex shader to provide additional information to the geometry shader.

Within the main function of the geometry shader, we produce the quad (as a triangle strip) in the following way. For each vertex of the triangle strip we execute the following steps:

  1. Compute the attributes for the vertex (in this case the position and texture coordinate), and assign their values to the appropriate output variables (gl_Position and TexCoord). Note that the position is also transformed by the projection matrix. We do this because the variable gl_Position must be provided in clip coordinates to later stages of the pipeline.
  2. Emit the vertex (send it down the pipeline) by calling the built-in function EmitVertex().

Once we have emitted all vertices for the output primitive, we call EndPrimitive() to finalize the primitive and send it along.

Note

It is not strictly necessary to call EndPrimitive() in this case because it is implicitly called when the geometry shader finishes. However, like closing files, it is good practice to do so anyway.

The fragment shader is also very simple. It just applies the texture to the fragment using the (interpolated) texture coordinate provided by the geometry shader.

There's more...

This example is fairly straightforward and is intended as a gentle introduction to geometry shaders. We could expand on this by allowing the quad to rotate or to be oriented in different directions. We could also use the texture to discard fragments (in the fragment shader) in order to create point sprites of arbitrary shapes. The power of the geometry shader opens up plenty of possibilities!
