Implementing volume rendering using 3D texture slicing

Volume rendering is a special class of rendering algorithms that allows us to portray fuzzy phenomena, such as smoke. There are numerous algorithms for volume rendering. To start our quest, we will focus on the simplest method, called 3D texture slicing. This method approximates the volume-density function by slicing the dataset into proxy slices in front-to-back or back-to-front order and then blending these slices using hardware-supported blending. Since it relies on the rasterization hardware, this method is very fast on modern GPUs.

The pseudo code for view-aligned 3D texture slicing is as follows:

  1. Get the current view direction vector.
  2. Calculate the min/max distances of the unit cube vertices by taking the dot product of each unit cube vertex with the view direction vector.
  3. Calculate the intersection parameters (λ) of the planes perpendicular to the view direction with all edges of the unit cube, going from the nearest to the farthest vertex, using the min/max distances from step 2 (see the sketch after this list).
  4. Use the intersection parameters λ (from step 3) to move in the viewing direction and find the intersection points. Three to six intersection vertices are generated per slice.
  5. Store the intersection points in the specified order to generate triangular primitives, which serve as the proxy geometry.
  6. Update the buffer object memory with the new vertices.
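
The core of steps 3 and 4 is a ray-plane intersection: for an edge starting at start with direction dir, and a slicing plane at distance plane_dist along the view direction, the parameter λ solves dot(viewDir, start + λ*dir) = plane_dist. The following is a minimal sketch of this solve for a single edge (an illustrative helper, not part of the recipe's code):

#include <glm/glm.hpp>

//intersection parameter of one cube edge with a slicing plane; returns a
//negative value if the edge is (nearly) parallel to the plane
float EdgePlaneLambda(const glm::vec3& start, const glm::vec3& dir,
                      const glm::vec3& viewDir, float plane_dist)
{
    float denom = glm::dot(dir, viewDir);
    if (1.0f + denom == 1.0f)  //denominator too small for a meaningful division
        return -1.0f;
    return (plane_dist - glm::dot(start, viewDir)) / denom;
}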

Getting ready

The code for this recipe is in the Chapter7/3DTextureSlicing directory.

How to do it…

Let us start our recipe by following these simple steps:

  1. Load the volume dataset by reading the external volume datafile and passing the data into an OpenGL texture; also enable hardware mipmap generation. Typically, volume datafiles store densities obtained from a cross-sectional imaging modality such as a CT or MRI scan. Each CT/MRI scan is a 2D slice; we accumulate these slices in the Z direction to obtain a 3D texture, which is simply an array of 2D textures. The density values distinguish different material types; for example, values ranging from 0 to 20 are typically occupied by air. As we have an 8-bit unsigned dataset, we store it in a local array of GLubyte type. If we had an unsigned 16-bit dataset, we would store it in a local array of GLushort type. In the case of 3D textures, in addition to the S and T parameters, we have an additional parameter, R, that controls which slice we are at in the 3D texture.
    std::ifstream infile(volume_file.c_str(), std::ios_base::binary);
    if(infile.good()) {
       //read the raw volume densities
       GLubyte* pData = new GLubyte[XDIM*YDIM*ZDIM];
       infile.read(reinterpret_cast<char*>(pData), XDIM*YDIM*ZDIM*sizeof(GLubyte));
       infile.close();

       //generate the 3D texture and set its parameters
       //(GL_CLAMP_TO_EDGE is used since GL_CLAMP is not available
       //in the OpenGL 3.3 core profile)
       glGenTextures(1, &textureID);
       glBindTexture(GL_TEXTURE_3D, textureID);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_BASE_LEVEL, 0);
       glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, 4);

       //upload the data to the GPU and generate the mipmaps
       glTexImage3D(GL_TEXTURE_3D, 0, GL_RED, XDIM, YDIM, ZDIM, 0, GL_RED, GL_UNSIGNED_BYTE, pData);
       glGenerateMipmap(GL_TEXTURE_3D);

       //the texture now owns a copy of the data
       delete [] pData;
       return true;
    } else {
       return false;
    }

    The filtering parameters for 3D textures are similar to the 2D texture parameters we have seen before. Mipmaps are collections of down-sampled versions of a texture that are used for level-of-detail (LOD) functionality: they allow a down-sampled version of the texture to be used when the viewer is far from the object on which the texture is applied, which helps improve the performance of the application. The base level (GL_TEXTURE_BASE_LEVEL) denotes the first (largest) mipmap level, used when the object is closest, while the max level (GL_TEXTURE_MAX_LEVEL) is the index of the highest (smallest) mipmap level that will be generated and used.

    The glGenerateMipmap function works by generating each derived mipmap array through a filtered reduction of the previous level. So let's say we have three mipmap levels above the base, and our 3D texture has a resolution of 256×256×256 at level 0. For the level 1 mipmap, the level 0 data is reduced by filtered reduction to 128×128×128; for level 2, the level 1 data is reduced to 64×64×64; and for level 3, the level 2 data is reduced to 32×32×32.
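
    In general, mipmap level k of this texture has resolution (256/2^k)³, and a full chain for an N³ texture has 1 + floor(log2(N)) levels. A quick way to compute the number of levels for an arbitrary volume (a small sketch, not part of the recipe's code) is as follows:

    #include <algorithm>
    #include <cmath>

    //number of mipmap levels in a full chain for the given volume extents
    int FullMipmapLevels(int xdim, int ydim, int zdim) {
        int maxDim = std::max(xdim, std::max(ydim, zdim));
        return 1 + static_cast<int>(std::floor(std::log2(maxDim)));
    }
    //for a 256x256x256 volume this returns 9 (levels 0 to 8); the recipe
    //caps GL_TEXTURE_MAX_LEVEL at 4, so only levels 0 to 4 are generated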

  2. Set up a vertex array object and a vertex buffer object to store the geometry of the proxy slices. Make sure that the buffer object usage is specified as GL_DYNAMIC_DRAW, since the proxy slice geometry changes whenever the view rotates. The initial glBufferData call allocates GPU memory for the maximum number of slices. The vTextureSlices array is defined globally and stores the vertices produced by the texture slicing operation for triangulation. glBufferData is passed 0 (a null pointer) for the data argument because the data will be filled in dynamically at runtime.
    const int MAX_SLICES = 512;
    //each slice yields at most 6 intersection points = 4 triangles = 12 vertices
    glm::vec3 vTextureSlices[MAX_SLICES*12];

    glGenVertexArrays(1, &volumeVAO);
    glGenBuffers(1, &volumeVBO);
    glBindVertexArray(volumeVAO);
    glBindBuffer(GL_ARRAY_BUFFER, volumeVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vTextureSlices), 0, GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glBindVertexArray(0);
  3. Implement the slicing of the volume by finding the intersections of a unit cube with the proxy slices perpendicular to the viewing direction. This is carried out by the SliceVolume function. We use a unit cube because our dataset has equal extents along all three axes, that is, 256×256×256. For a dataset with unequal extents, we can scale the unit cube appropriately.
        //max_dist, min_dist, and max_index are determined as shown
        //in the How it works… section
        glm::vec3 vecStart[12];
        glm::vec3 vecDir[12];
        float lambda[12];
        float lambda_inc[12];
        float denom = 0;
        float plane_dist = min_dist;
        float plane_dist_inc = (max_dist-min_dist)/float(num_slices);

        //vecStart, vecDir, lambda, and lambda_inc are filled in as shown
        //in the How it works… section
        glm::vec3 intersection[6];
        float dL[12];

        for(int i=num_slices-1;i>=0;i--) {
            for(int e = 0; e < 12; e++) {
                dL[e] = lambda[e] + i*lambda_inc[e];
            }

            if((dL[0] >= 0.0) && (dL[0] < 1.0)) {
                intersection[0] = vecStart[0] + dL[0]*vecDir[0];
            }
            //likewise for all six intersection points

            //triangulate the slice polygon as a fan of four triangles
            int indices[]={0,1,2, 0,2,3, 0,3,4, 0,4,5};
            for(int j=0;j<12;j++)
                vTextureSlices[count++]=intersection[indices[j]];
        }
        //update the buffer object with the new vertices
        glBindBuffer(GL_ARRAY_BUFFER, volumeVBO);
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vTextureSlices), &(vTextureSlices[0].x));
  4. In the render function, enable over blending, bind the volume vertex array object, bind the shader, and then call the glDrawArrays function.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); 
    glBindVertexArray(volumeVAO);	
    shader.Use(); 
    glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
    glDrawArrays(GL_TRIANGLES, 0, sizeof(vTextureSlices)/sizeof(vTextureSlices[0]));
    shader.UnUse(); 
    glDisable(GL_BLEND);

How it works…

Volume rendering using 3D texture slicing approximates the volume rendering integral by alpha-blending textured proxy slices. The first step is loading the volume data and generating a 3D texture from it. Next, the volume is sliced using proxy slices oriented perpendicular to the viewing direction, which requires finding the intersections of the slice planes with the unit cube boundaries. This is carried out by the SliceVolume function. Note that slicing is carried out only when the view is rotated.
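
Since re-slicing is comparatively expensive, the demo drives it from the rotation handler. The following is a minimal sketch of this gating; the flag name bViewRotated and the wrapper function are hypothetical, not the recipe's actual code:

void SliceVolume();          //the recipe's slicing function

bool bViewRotated = false;   //hypothetical dirty flag, set by the rotation handler

//called once per frame: re-slice only when the view has changed
void UpdateSlicesIfNeeded() {
   if(bViewRotated) {
      SliceVolume();         //recompute the proxy slice vertices for the new view
      bViewRotated = false;
   }
}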

We first obtain the view direction vector (viewDir) from the rotation part of the model-view matrix, which also encodes the camera's right and up vectors (a sketch of the extraction is given after the following snippet). We will now detail how the SliceVolume function works internally. We find the nearest and farthest unit cube vertices in the current viewing direction by calculating the minimum and maximum distances of the eight unit cube vertices along the viewing direction. These distances are obtained using the dot product of each unit cube vertex with the view direction vector:

float max_dist = glm::dot(viewDir, vertexList[0]);
float min_dist = max_dist;
int max_index = 0;
int count = 0;
for(int i=1;i<8;i++) {
   float dist = glm::dot(viewDir, vertexList[i]);
   if(dist > max_dist) {
      max_dist = dist;
      max_index = i;
   }
   if(dist<min_dist)
      min_dist = dist;
}
int max_dim = FindAbsMax(viewDir);
//expand the range slightly so that the first and last slices are not missed
min_dist -= EPSILON;
max_dist += EPSILON;
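
For reference, viewDir itself can be extracted from the model-view matrix MV as in the following sketch. It assumes GLM's column-major indexing (MV[column][row]) and the usual OpenGL convention of the camera looking down the negative z axis, so the third row of the rotation part, negated, points from the eye into the scene:

#include <glm/glm.hpp>

//object-space view direction from the model-view matrix
glm::vec3 GetViewDirection(const glm::mat4& MV) {
   return -glm::vec3(MV[0][2], MV[1][2], MV[2][2]);
}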

There are only three unique paths when going from the nearest vertex to the farthest vertex from the camera. We store all possible paths for each vertex into an edge table, which is defined as follows:

int edgeList[8][12]={{ 0,1,5,6,   4,8,11,9,  3,7,2,10 }, // v0 is front
                     { 0,4,3,11,  1,2,6,7,   5,9,8,10 }, // v1 is front
                     { 1,5,0,8,   2,3,7,4,   6,10,9,11}, // v2 is front
                     { 7,11,10,8, 2,6,1,9,   3,0,4,5  }, // v3 is front
                     { 8,5,9,1,   11,10,7,6, 4,3,0,2  }, // v4 is front
                     { 9,6,10,2,  8,11,4,7,  5,0,1,3  }, // v5 is front
                     { 9,8,5,4,   6,1,2,0,   10,7,11,3}, // v6 is front
                     { 10,9,6,5,  7,2,3,1,   11,4,8,0 }  // v7 is front
};
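
The edge indices in edgeList refer to an edges table that maps each cube edge to its two end vertex indices in vertexList. That table is not shown in the excerpt above; a plausible definition, assuming the conventional vertex ordering for the unit cube (vertices 0 to 3 on the bottom face, 4 to 7 on the top face — an assumption on our part), is:

//maps each of the 12 cube edges to its two end vertex indices in vertexList
int edges[12][2]= {{0,1},{1,2},{2,3},{3,0},   //bottom face edges
                   {0,4},{1,5},{2,6},{3,7},   //vertical edges
                   {4,5},{5,6},{6,7},{7,4}};  //top face edges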

Next, the plane intersection parameters are estimated for the 12 edges of the unit cube:

glm::vec3 vecStart[12];
glm::vec3 vecDir[12];
float lambda[12];
float lambda_inc[12];
float denom = 0;
float plane_dist = min_dist;
float plane_dist_inc = (max_dist-min_dist)/float(num_slices);
for(int i=0;i<12;i++) {
    vecStart[i] = vertexList[edges[edgeList[max_index][i]][0]];
    vecDir[i]   = vertexList[edges[edgeList[max_index][i]][1]] - vecStart[i];
    denom = glm::dot(vecDir[i], viewDir);
    //the (1.0 + denom != 1.0) test guards against a denominator so small
    //that the division is meaningless (edge parallel to the slicing plane)
    if (1.0 + denom != 1.0) {
      lambda_inc[i] =  plane_dist_inc/denom;
      lambda[i]     = (plane_dist - glm::dot(vecStart[i],viewDir))/denom;
    } else {
        lambda[i]     = -1.0;
        lambda_inc[i] =  0.0;
    }
}

Finally, the interpolated intersections with the unit cube edges are computed by moving back to front in the viewing direction. After the proxy slices have been generated, the vertex buffer object is updated with the new data.

for(int i=num_slices-1;i>=0;i--) {
   for(int e = 0; e < 12; e++) {
      dL[e] = lambda[e] + i*lambda_inc[e];
   }
   if  ((dL[0] >= 0.0) && (dL[0] < 1.0))  {
      intersection[0] = vecStart[0] + dL[0]*vecDir[0];
   } else if ((dL[1] >= 0.0) && (dL[1] < 1.0))  {
      intersection[0] = vecStart[1] + dL[1]*vecDir[1];
   } else if ((dL[3] >= 0.0) && (dL[3] < 1.0))  {
      intersection[0] = vecStart[3] + dL[3]*vecDir[3];
   } else continue;

   if ((dL[2] >= 0.0) && (dL[2] < 1.0)){
      intersection[1] = vecStart[2] + dL[2]*vecDir[2];
   } else if ((dL[0] >= 0.0) && (dL[0] < 1.0)){
      intersection[1] = vecStart[0] + dL[0]*vecDir[0];
   } else if ((dL[1] >= 0.0) && (dL[1] < 1.0)){
      intersection[1] = vecStart[1] + dL[1]*vecDir[1];
   } else {
      intersection[1] = vecStart[3] + dL[3]*vecDir[3];
   }
   //similarly for the other edges until intersection[5]
   int indices[]={0,1,2, 0,2,3, 0,3,4, 0,4,5};
   for(int j=0;j<12;j++)
      vTextureSlices[count++]=intersection[indices[j]];
}
glBindBuffer(GL_ARRAY_BUFFER, volumeVBO);
glBufferSubData(GL_ARRAY_BUFFER, 0,  sizeof(vTextureSlices), &(vTextureSlices[0].x));

In the rendering function, the appropriate shader is bound. The vertex shader calculates the clip-space position by multiplying the object-space vertex position (vVertex) with the combined model view projection (MVP) matrix. It also calculates the 3D texture coordinates (vUV) for the volume data. Since we render a unit cube, the minimum vertex position is (-0.5,-0.5,-0.5) and the maximum vertex position is (0.5,0.5,0.5). Since the 3D texture lookup requires coordinates from (0,0,0) to (1,1,1), we add (0.5,0.5,0.5) to the object-space vertex position to obtain the correct 3D texture coordinates.

#version 330 core
layout(location = 0) in vec3 vVertex; //object-space vertex position
uniform mat4 MVP;                     //combined model view projection matrix
smooth out vec3 vUV;                  //3D texture coordinates
void main() {
    gl_Position = MVP*vec4(vVertex.xyz,1);
    vUV = vVertex + vec3(0.5);
}

The fragment shader then uses the 3D texture coordinates to sample the volume data (accessed through a new sampler type, sampler3D, for 3D textures) and outputs the density. At the time of creation of the 3D texture, we specified the internal format as GL_RED (the third parameter of the glTexImage3D function). Therefore, we access our densities through the red channel of the texture sampler. To get a shade of grey, we set the same value for the green, blue, and alpha channels as well.

#version 330 core
layout(location = 0) out vec4 vFragColor; //fragment output color
smooth in vec3 vUV;                       //interpolated 3D texture coordinates
uniform sampler3D volume;                 //the volume dataset
void main(void) {
   //replicate the red (density) channel into all four channels
   vFragColor = texture(volume, vUV).rrrr;
}

In previous OpenGL versions, we would store the volume densities using the special internal format GL_INTENSITY. This is deprecated in the OpenGL 3.3 core profile, so we now store the densities in a single-channel internal format such as GL_RED and replicate the channel ourselves.
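
Instead of replicating the channel in the shader with .rrrr, one alternative (a sketch of an option, not what this recipe does) is to set a texture swizzle once at texture creation time, so that a plain texture() lookup already returns the density in all four channels:

//make reads of the green, blue, and alpha channels return the red channel,
//emulating the old GL_INTENSITY behavior (OpenGL 3.3 core profile)
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_3D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);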

There's more…

The demo application for this recipe volume-renders the engine dataset using 3D texture slicing. In the demo code, we can change the number of slices by pressing the + and - keys.

The following screenshot shows successive 3D texture slicing results for the same viewing direction, from 8 slices all the way up to 256 slices. The wireframe view is shown in the top row, whereas the alpha-blended result is shown in the bottom row.

As can be seen, increasing the number of slices improves the volume rendering result. Beyond a total of 256 slices, there is no significant difference in the rendering result; however, performance drops sharply as the total number of slices increases beyond 350, because more geometry has to be transferred to the GPU.

Note the black halo around the volume dataset. This is due to acquisition artifacts, for example, noise or air that was recorded during the scanning of the engine dataset. Such artifacts can be removed either by applying a transfer function that maps the unwanted densities to zero opacity or by simply discarding the unwanted densities in the fragment shader, as we will do in the Implementing volumetric lighting using half-angle slicing recipe later.
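
As a quick illustration of the second approach, the fragment shader can reject low densities before blending. The following is a minimal sketch; the cutoff of 20.0/255.0 corresponds to the air density range mentioned in the How to do it… section and is an assumption, not the recipe's actual code:

#version 330 core
layout(location = 0) out vec4 vFragColor;
smooth in vec3 vUV;
uniform sampler3D volume;
void main(void) {
   float density = texture(volume, vUV).r;
   //discard fragments whose density falls in the range occupied by air
   if(density < 20.0/255.0)
      discard;
   vFragColor = vec4(density);
}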

See also

  • The Viewport-Aligned Slices section (Section 3.5.2) in Chapter 3, GPU-based Volume Rendering, of Real-Time Volume Graphics, A K Peters/CRC Press, pages 73 to 79