We have implemented simple lighting recipes in previous chapters. These, however, only approximate some aspects of lighting; effects such as global illumination are not handled by the basic lights, as discussed earlier. Several techniques have therefore been developed over the years to fake global illumination effects. One such technique is Screen Space Ambient Occlusion (SSAO), which we will implement in this recipe.
As the name suggests, this method works in screen space. For any given pixel onscreen, the amount of occlusion due to its neighboring pixels can be estimated by looking at the difference in their depth values. To reduce sampling artefacts, the neighbor coordinates are randomly offset. Pixels whose depth values are close to one another belong to geometry that lies spatially close together. Based on the difference of the depth values, an occlusion value is determined. In pseudocode, the algorithm is as follows:
Get the position (p), normal (n) and depth (d) value at current pixel position
For each pixel in the neighborhood of current pixel
    Get the position (p0) of the neighborhood pixel
    Call proc. CalcAO(p, p0, n)
End for
Return the ambient occlusion amount as color
The ambient occlusion procedure is defined as follows:
const float DEPTH_TOLERANCE = 0.00001;

proc CalcAO(p, p0, n)
    diff = p0 - p - DEPTH_TOLERANCE;
    v = normalize(diff);
    d = length(diff) * scale;
    return max(0.1, dot(n,v) - bias) * (1.0/(1.0+d)) * intensity;
end proc
Note that we have three artist control parameters: scale, bias, and intensity. The scale parameter controls the size of the occlusion area, bias shifts the occlusion, and intensity controls the strength of the occlusion. The DEPTH_TOLERANCE constant is added to remove depth-fighting artefacts.
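For reference, the procedure above can be sketched in plain C++, with a minimal vec3 type standing in for GLSL's; the default values for scale, bias, and intensity are arbitrary illustrative choices, not the demo's settings:

```cpp
#include <algorithm>
#include <cmath>

// Minimal stand-in for GLSL's vec3 and the built-ins the procedure uses.
struct vec3 { float x, y, z; };
vec3 operator-(const vec3& a, const vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
vec3 operator-(const vec3& a, float s) { return {a.x - s, a.y - s, a.z - s}; }
float dot(const vec3& a, const vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
float length(const vec3& v) { return std::sqrt(dot(v, v)); }
vec3 normalize(const vec3& v) { float l = length(v); return {v.x/l, v.y/l, v.z/l}; }

const float DEPTH_TOLERANCE = 0.00001f;

// Occlusion contributed by neighbor position p0 to the point p with normal n.
float calcAO(const vec3& p, const vec3& p0, const vec3& n,
             float scale = 1.0f, float bias = 0.1f, float intensity = 1.0f)
{
    vec3 diff = (p0 - p) - DEPTH_TOLERANCE;
    vec3 v = normalize(diff);
    float d = length(diff) * scale;
    // Neighbors close to p and in front of its surface occlude the most;
    // the 1/(1+d) term attenuates the contribution with distance.
    return std::max(0.1f, dot(n, v) - bias) * (1.0f / (1.0f + d)) * intensity;
}
```

Note how a nearby neighbor along the normal yields a larger occlusion value than a distant one, which is exactly the behavior the screen space loop relies on.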
The whole recipe proceeds as follows. We load our 3D model and render it into an offscreen texture using an FBO. We use two FBOs: one for storing the eye space normals and depth, and another for filtering of intermediate results. For both the color attachment and the depth attachment of the first FBO, floating point texture formats are used: GL_RGBA32F for the color attachment and GL_DEPTH_COMPONENT32F for the depth texture. Floating point texture formats are used because we require more precision; otherwise truncation errors will show up in the rendering result. The second FBO is used for separable Gaussian smoothing, as was carried out in the Implementing variance shadow mapping recipe in Chapter 4, Lights and Shadows. This FBO has two color attachments with the floating point texture format GL_RGBA32F.
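To see why fixed-point formats are insufficient here, consider how an 8-bit channel quantizes a depth value. A small illustration in plain C++, independent of OpenGL:

```cpp
#include <cmath>

// Simulate storing a normalized value in an 8-bit fixed-point channel
// (what a GL_RGBA8 attachment would do). An 8-bit channel can only hold
// 256 distinct values, so nearby depths collapse to the same value;
// GL_RGBA32F / GL_DEPTH_COMPONENT32F keep full float precision instead.
float quantize8(float v) {
    return std::round(v * 255.0f) / 255.0f;
}
```

Two depths such as 0.500 and 0.501 quantize to the same 8-bit value, so a depth-difference test like the one in CalcAO could no longer tell them apart.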
In the rendering function, the scene is first rendered normally. Then, the first shader is used to output the eye space normals; these are stored in the color attachment, while the depth values are stored in the depth attachment of the first FBO. After this step, the filtering FBO is bound and the second shader is used, which reads the depth and normal textures from the first FBO to calculate the ambient occlusion result. Since the neighbor points are randomly offset, noise is introduced. The noisy result is then smoothed by applying separable Gaussian smoothing. Finally, the filtered result is blended with the existing rendering using conventional alpha blending.
The code for this recipe is contained in the Chapter6/SSAO folder. We will be using the Obj model viewer from Chapter 5, Mesh Model Formats and Particle Systems. We will add SSAO to the Obj model.
Let us start the recipe by following these simple steps:
Create an ObjLoader object. Call the ObjLoader::Load function, passing it the name of the OBJ file. Pass vectors to store the meshes, vertices, indices, and materials contained in the OBJ file.

Next, set up the offscreen rendering FBO with eye space normal and depth attachments, using floating point texture formats (GL_RGBA32F and GL_DEPTH_COMPONENT32F) for these. In addition, we create a second FBO for Gaussian smoothing of the SSAO output. We are using multiple texture units here as the second shader expects the normal and depth textures to be bound to texture units 1 and 3 respectively.

glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_FRAMEBUFFER, fboID);
glGenTextures(1, &normalTextureID);
glGenTextures(1, &depthTextureID);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, normalTextureID);
//set texture parameters
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WIDTH, HEIGHT, 0, GL_BGRA, GL_FLOAT, NULL);
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, depthTextureID);
//set texture parameters
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WIDTH, HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, normalTextureID, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTextureID, 0);

glGenFramebuffers(1, &filterFBOID);
glBindFramebuffer(GL_FRAMEBUFFER, filterFBOID);
glGenTextures(2, blurTexID);
for(int i=0;i<2;i++) {
    glActiveTexture(GL_TEXTURE4+i);
    glBindTexture(GL_TEXTURE_2D, blurTexID[i]);
    //set texture parameters
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, RTT_WIDTH, RTT_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0+i, GL_TEXTURE_2D, blurTexID[i], 0);
}
Bind the first FBO and render the scene using the first shader so that the eye space normals and depth are written to its attachments:

glBindFramebuffer(GL_FRAMEBUFFER, fboID);
glViewport(0, 0, RTT_WIDTH, RTT_HEIGHT);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glBindVertexArray(vaoID);
{
    ssaoFirstShader.Use();
    glUniformMatrix4fv(ssaoFirstShader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV));
    glUniformMatrix3fv(ssaoFirstShader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
    for(size_t i=0;i<materials.size();i++) {
        Material* pMat = materials[i];
        if(materials.size()==1)
            glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, 0);
        else
            glDrawElements(GL_TRIANGLES, pMat->count, GL_UNSIGNED_SHORT, (const GLvoid*)(&indices[pMat->offset]));
    }
    ssaoFirstShader.UnUse();
}
The first vertex shader (Chapter6/SSAO/shaders/SSAO_FirstStep.vert) outputs the eye space normal, as shown in the following code snippet:
#version 330 core
layout(location = 0) in vec3 vVertex;
layout(location = 1) in vec3 vNormal;
uniform mat4 MVP;
uniform mat3 N;
smooth out vec3 vEyeSpaceNormal;
void main() {
vEyeSpaceNormal = N*vNormal;
gl_Position = MVP*vec4(vVertex,1);
}
The fragment shader (Chapter6/SSAO/shaders/SSAO_FirstStep.frag) returns the interpolated normal as the fragment color, as shown in the following code snippet:
#version 330 core

smooth in vec3 vEyeSpaceNormal;
layout(location=0) out vec4 vFragColor;

void main() {
    vFragColor = vec4(normalize(vEyeSpaceNormal)*0.5 + 0.5, 1);
}
Set up a second fragment shader (Chapter6/SSAO/shaders/SSAO_SecondStep.frag). This shader does the actual SSAO calculation. The input to the shader is the normals texture from step 3. This shader is invoked on a full screen quad.

glBindFramebuffer(GL_FRAMEBUFFER, filterFBOID);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBindVertexArray(quadVAOID);
ssaoSecondShader.Use();
glUniform1f(ssaoSecondShader("radius"), sampling_radius);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
ssaoSecondShader.UnUse();
Set up the separable Gaussian smoothing shaders (Chapter6/SSAO/shaders/GaussH.frag and Chapter6/SSAO/shaders/GaussV.frag). The separable Gaussian smoothing is applied to smooth out the ambient occlusion result.

glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBindVertexArray(quadVAOID);
gaussianV_shader.Use();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);

glDrawBuffer(GL_COLOR_ATTACHMENT0);
gaussianH_shader.Use();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
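The two Gauss shaders apply a 1D kernel first vertically, then horizontally; two 1D passes are equivalent to the full 2D Gaussian convolution at a fraction of the cost. The kernel weights can be generated as follows (a sketch; the tap count and sigma here are illustrative, not necessarily the values used in GaussH.frag/GaussV.frag):

```cpp
#include <cmath>
#include <vector>

// Normalized 1D Gaussian weights for a (2*radius+1)-tap kernel.
std::vector<float> gaussianWeights(int radius, float sigma) {
    std::vector<float> w(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i) {
        w[i + radius] = std::exp(-(i * i) / (2.0f * sigma * sigma));
        sum += w[i + radius];
    }
    for (float& x : w) x /= sum;   // normalize so the taps sum to 1
    return w;
}
```

The weights are symmetric around the center tap and sum to 1, so the blur preserves the overall brightness of the occlusion buffer.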
Finally, use another shader (Chapter6/SSAO/shaders/final.frag) to blend the output from steps 3 and 5. This shader simply renders the final output from the filtering stage using a full-screen quad.

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, WIDTH, HEIGHT);
glDrawBuffer(GL_BACK_LEFT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
finalShader.Use();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
finalShader.UnUse();
glDisable(GL_BLEND);
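Since the SSAO pass writes black with the occlusion amount in the alpha channel, the blend configured by glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) simply darkens the scene in proportion to the occlusion. As a scalar sanity check (an illustrative helper, not part of the recipe's code):

```cpp
// One color channel of the fixed-function blend
//   result = src * srcAlpha + dst * (1 - srcAlpha)
// With src = 0 (black) and srcAlpha = ao, this reduces to dst * (1 - ao):
// fully unoccluded pixels keep their color, occluded pixels get darker.
float blendChannel(float src, float srcAlpha, float dst) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```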
There are three steps in the SSAO calculation. The first step is the preparation of inputs, that is, the view space normals and depth. The normals are stored using the first step vertex shader (Chapter6/SSAO/shaders/SSAO_FirstStep.vert).
vEyeSpaceNormal_Depth = N*vNormal;
vec4 esPos = MV*vec4(vVertex,1);
gl_Position = P*esPos;
The fragment shader (Chapter6/SSAO/shaders/SSAO_FirstStep.frag) then outputs these values. The depth is extracted from the depth attachment of the FBO.
The second step is the actual SSAO calculation. We use a fragment shader (Chapter6/SSAO/shaders/SSAO_SecondStep.frag) to perform this by first rendering a screen-aligned quad. Then, for each fragment, the corresponding normal and depth values are obtained from the render targets of the first step. Next, a loop compares the depth values of the neighboring fragments and estimates an occlusion value.
float depth = texture(depthTex, vUV).r;
if(depth < 1.0) {
    vec3 n = normalize(texture(normalTex, vUV).xyz*2.0 - 1.0);
    vec4 p = invP*vec4(vUV, depth, 1);
    p.xyz /= p.w;
    vec2 random = normalize(texture(noiseTex, viewportSize/random_size * vUV).rg * 2.0 - 1.0);
    float ao = 0.0;
    for(int i = 0; i < NUM_SAMPLES; i++) {
        float npw = (pw + radius * samples[i].x * random.x);
        float nph = (ph + radius * samples[i].y * random.y);
        vec2 uv = vUV + vec2(npw, nph);
        //get similar depth points from the neighborhood
        //and calculate the ambient occlusion amount
        vec4 p0 = invP * vec4(uv, texture(depthTex, uv).r, 1.0);
        p0.xyz /= p0.w;
        ao += calcAO(p0, p, n);
    }
    ao *= INV_NUM_SAMPLES/8.0;
    vFragColor = vec4(vec3(0), ao);
}
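The unprojection in this snippet relies on the perspective divide: multiplying (x, y, depth, 1) by the inverse projection matrix and dividing by w recovers the eye space position. A minimal round-trip sketch of that idea (assuming coordinates already remapped to normalized device coordinates; in the recipe, invP presumably folds in the [0,1] to [-1,1] remapping of vUV and depth):

```cpp
#include <cmath>

// The four non-trivial terms of a standard right-handed perspective
// projection (the layout glm::perspective builds): P00, P11, P22, P23.
struct Persp { float a, b, c, d; };

Persp makePersp(float fovyRadians, float aspect, float zn, float zf) {
    float f = 1.0f / std::tan(fovyRadians * 0.5f);
    return { f / aspect, f, (zf + zn) / (zn - zf), 2.0f * zf * zn / (zn - zf) };
}

// Project an eye space point to normalized device coordinates.
void project(const Persp& P, float ex, float ey, float ez,
             float& nx, float& ny, float& nz) {
    float cw = -ez;                        // clip space w
    nx = P.a * ex / cw;
    ny = P.b * ey / cw;
    nz = (P.c * ez + P.d) / cw;
}

// Recover the eye space point from NDC; this is what the shader's
// "p = invP * vec4(...); p.xyz /= p.w;" computes via the inverse matrix.
void unproject(const Persp& P, float nx, float ny, float nz,
               float& ex, float& ey, float& ez) {
    ez = -P.d / (nz + P.c);
    ex = nx * (-ez) / P.a;
    ey = ny * (-ez) / P.b;
}
```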
After the second shader, we filter the SSAO output using separable Gaussian convolution. The default draw buffer is then restored, and the Gaussian filtered SSAO output is alpha blended with the normal rendering.
The demo application implementing this recipe shows the scene with three blocks on a planar quad. When run, the output is as shown in the following screenshot:
Pressing the Space bar disables SSAO to produce the following output. As can be seen, ambient occlusion provides shading cues that help convey how near or far objects are from one another. We can also change the sampling radius by using the + and - keys.