Previously, we tried different ad hoc approaches to store and handle 3D geometry data in our graphical applications. The mesh data layout for vertex and index buffers was hardcoded into each of our demo apps. By doing so, it was easier to focus on other important parts of the graphics pipeline. As we move into the territory of more complex graphics applications, we will require additional control over the storage of different 3D meshes within system memory and GPU buffers. However, our focus remains on guiding you through the main principles and practices rather than on pure efficiency.
In this chapter, you will learn how to store and handle mesh geometry data in a more organized way. We will cover the following recipes:
Here is what it takes to run the code from this chapter on your Linux or Windows PC. You will need a GPU with recent drivers supporting OpenGL 4.6 and Vulkan 1.1. The source code can be downloaded from https://github.com/PacktPublishing/3D-Graphics-Rendering-Cookbook.
To run the demo applications of this chapter, you are advised to download and unpack the entire Amazon Lumberyard Bistro dataset from the McGuire Computer Graphics Archive. You can find this at http://casual-effects.com/data/index.html. Of course, you can use smaller meshes if you cannot download the 2.4 GB package.
In Chapter 3, Getting Started with OpenGL and Vulkan and Chapter 4, Adding User Interaction and Productivity Tools, we used fixed formats for our meshes, which changed between demos and also implicitly included a description of the material; for example, a hardcoded texture was used to provide color information. Let's define a unified mesh storage format that covers all use cases for the remainder of this book.
A triangle mesh is defined by indices and vertices. Each vertex is defined as a set of floating-point attributes. All of the auxiliary physical properties of an object, such as collision detection data, mass, and moments of inertia, can be derived from the mesh geometry. In contrast, other information, such as surface material properties, can be stored outside of the mesh as external metadata.
This recipe describes the basic data structures that we will use to store mesh data for the remainder of this book. The full corresponding source code is located in the shared/scene/VtxData.h header.
A vector of homogenous vertex attributes stored contiguously is called a vertex stream. Examples of such attributes include vertex positions, texture coordinates, and normal vectors, with each of the three representing one attribute. Each attribute can consist of one or multiple floating-point components. Vertex positions have three components, texture coordinates usually have two components, and so on.
A level of detail (LOD) is an index buffer of reduced size that reuses the existing vertices and, therefore, can be used directly for rendering with the original vertex buffer.
We define a mesh as a collection of all vertex data streams and a collection of all index buffers – one for each LOD. The length of all vertex data streams is the same and is called the "vertex count." Put simply, we always use 32-bit offsets for our data.
All of the vertex data streams and LOD index buffers are packed into a single blob. This allows us to load data in a single fread() call or even use memory mapping to allow direct data access. This simple vertex data representation also enables us to directly upload the mesh to a GPU. The most interesting aspect is the ability to combine the data for multiple meshes in a single file (or, equivalently, into two large buffers – one for indices and the other for vertex attributes). This will come in very handy later when we learn how to implement a LOD switching technique on GPU.
In this recipe, we will only deal with geometrical data. The LOD creation process is covered in the Generating LODs using MeshOptimizer recipe, and the material data export process is covered in subsequent chapters. Let's get started by declaring the main data structure for our mesh:
constexpr const uint32_t kMaxLODs = 8;
constexpr const uint32_t kMaxStreams = 8;
struct Mesh final {
uint32_t lodCount;     // number of LODs in this mesh
uint32_t streamCount;  // number of vertex data streams
uint32_t materialID;   // abstract material identifier (not used in this chapter)
uint32_t meshSize;     // total size, in bytes, of all vertex and index data of this mesh
uint32_t vertexCount;  // number of vertices in each stream
// Byte offsets into the index data, one per LOD; the entry after the last
// LOD is used as a marker so that lodSize() can compute the size of the last LOD
uint32_t lodOffset[kMaxLODs];
inline uint64_t lodSize(uint32_t lod) {
return lodOffset[lod+1] - lodOffset[lod];
}
Important note
Besides the element size, we might want to store the element type, such as byte, short integer, or float. This information is important for performance reasons in real-world applications. To simplify the code in this book, we will not do it here; a possible extension is sketched right after the structure definition below.
uint64_t streamOffset[kMaxStreams];      // byte offset of each vertex data stream
uint32_t streamElementSize[kMaxStreams]; // size, in bytes, of one vertex element in each stream
};
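As mentioned in the note above, one possible extension is to store the element type of each stream next to its element size. The following is only a sketch of our own, not part of the format used in this book:
enum class VertexElementType : uint32_t { Byte, UShort, Float };
// Hypothetical extra field inside Mesh, next to streamElementSize:
// VertexElementType streamElementType[kMaxStreams];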
Note
For this book, we assume tightly packed (interleaved) vertex attribute streams only. However, it is not difficult to extend the proposed schema to support non-interleaved data storage. One major drawback is that such a data reorganization would require us to change all of the vertex-pulling code in the vertex shaders. If you are developing production code, measure which storage format works faster on your target hardware before committing to one particular approach.
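To illustrate the difference, here is the layout of the single interleaved stream produced by the geometry conversion tool later in this chapter when positions, texture coordinates, and normals are all exported, compared against a hypothetical non-interleaved alternative:
// Interleaved (one stream, 8 floats per vertex):
//   x y z u v nx ny nz | x y z u v nx ny nz | ...
// Non-interleaved (one stream per attribute):
//   stream 0: x y z x y z ...
//   stream 1: u v u v ...
//   stream 2: nx ny nz nx ny nz ...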
Our mesh data file begins with a simple header to allow for the rapid fetching of the mesh list. Let's take a look at how it is declared:
struct MeshFileHeader {
uint32_t magicValue;           // a magic number used to quickly reject invalid files
uint32_t meshCount;            // number of Mesh descriptors in this file
uint32_t dataBlockStartOffset; // byte offset where the index and vertex data blocks begin
uint32_t indexDataSize;        // size of the index data block, in bytes
uint32_t vertexDataSize;       // size of the vertex data block, in bytes
};
The file continues with the list of Mesh structures. After the header and a list of individual mesh descriptors, we store a large index and vertex data block that can be loaded all at once.
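To make this layout explicit, here is a minimal sketch of how each block can be located once the whole file has been loaded, or memory-mapped, into a single blob. The pointer names, and the mappedFile placeholder, are our own:
// Assumed layout: [MeshFileHeader][Mesh x meshCount][index data][vertex data]
const uint8_t* blob = reinterpret_cast<const uint8_t*>(mappedFile); // start of the file in memory
const MeshFileHeader* hdr = reinterpret_cast<const MeshFileHeader*>(blob);
const Mesh* meshList = reinterpret_cast<const Mesh*>(blob + sizeof(MeshFileHeader));
const uint8_t* indexBlock  = blob + hdr->dataBlockStartOffset;
const uint8_t* vertexBlock = indexBlock + hdr->indexDataSize;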
Let's go through all of the remaining data structures that are required to store our meshes. To use a mesh file in a rendering application, we need to have an array of mesh descriptions and two arrays with index and vertex data:
std::vector<Mesh> meshes;
std::vector<uint8_t> indexData;
std::vector<uint8_t> vertexData;
The pseudocode for loading such a file is just four fread() calls. They appear as follows:
FILE *f = fopen("data/meshes/test.meshes", "rb");
MeshFileHeader header;
fread(&header, 1, sizeof(header), f);
meshes.resize(header.meshCount);
fread(meshes.data(), sizeof(Mesh), header.meshCount, f);
indexData.resize(header.indexDataSize);
vertexData.resize(header.vertexDataSize);
fread(indexData.data(), 1, header.indexDataSize, f);
fread(vertexData.data(), 1, header.vertexDataSize, f);
Alternatively, index and vertex buffers can be combined into a single large byte buffer. We will leave it as an exercise for the reader.
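A minimal sketch of that exercise, with variable names of our own choosing, could replace the last two fread() calls with a single one:
std::vector<uint8_t> meshData(header.indexDataSize + header.vertexDataSize);
if (fread(meshData.data(), 1, meshData.size(), f) != meshData.size())
  exit(255);
// Indices start at meshData.data(); vertices start at meshData.data() + header.indexDataSize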
Later, the indexData and vertexData containers can be uploaded into the GPU directly and accessed as data buffers from shaders to implement programmable vertex pulling, as described in Chapter 2, Using Essential Libraries. We will return to this in later recipes.
This geometry data format is pretty straightforward for the purpose of storing static mesh data. If the meshes can be changed, reloaded, or loaded asynchronously, we can store separate meshes into dedicated files.
Since it is impossible to predict all use cases, and since this book is all about rendering and not general-purpose game engine creation, it is up to the reader to make decisions about adding extra features such as mesh skinning. One simple example of such a decision is the addition of material data directly inside the mesh file. Technically, all we need to do is add a materialCount field to the MeshFileHeader structure and store a list of material descriptions right after the list of meshes. Even doing such a simple thing immediately raises more questions. Should we pack texture data in the same file? If yes, then how complex should the texture format be? What material model should we use? And so forth. For now, we will just leave the mesh geometry data separated from the material descriptions. We will come back to materials in Chapter 7, Graphics Rendering Pipeline.
In the previous chapters, we learned how to use the Assimp library to load and render 3D models stored in different file formats. In real-world graphics applications, the loading of a 3D model can be a tedious and multistage process. Besides just loading, we might want to preprocess a mesh in a specific way, such as optimizing geometry data or computing LODs for meshes. This process might become slow for sizable meshes, so it makes perfect sense to preprocess meshes offline, before an application starts, and load them later in the app, as described in the Organizing the storage of mesh data recipe. Let's learn how to implement a skeleton for a simple offline mesh conversion tool.
The source code for the geometry conversion tool described in this chapter can be found in the Chapter5/MeshConvert folder. The entire project is covered in several recipes, including Implementing a geometry conversion tool and Generating LODs using MeshOptimizer.
Let's examine how the Assimp library is used to export mesh data and save it inside a binary file using the data structures defined in the Organizing the storage of mesh data recipe:
#include <vector>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <assimp/cimport.h>
#include "shared/VtxData.h"
bool verbose = true;
std::vector<Mesh> meshes;
std::vector<uint32_t> indexData;
std::vector<float> vertexData;
uint32_t indexOffset = 0;
uint32_t vertexOffset = 0;
bool exportTextures = false;
bool exportNormals = false;
uint32_t numElementsToStore = 3;
The main mesh conversion logic of this tool is implemented in the convertAIMesh() function, which takes in an Assimp mesh and converts it into our mesh representation. Let's take a look at how it is implemented:
Mesh convertAIMesh(const aiMesh* m)
{
const bool hasTexCoords = m->HasTextureCoords(0);
const uint32_t numIndices = m->mNumFaces * 3;
const uint32_t numElements = numElementsToStore;
const uint32_t streamElementSize = static_cast<uint32_t>( numElements * sizeof(float));
const uint32_t meshSize = static_cast<uint32_t>( m->mNumVertices * streamElementSize + numIndices * sizeof(uint32_t) );
const Mesh result = {
.lodCount = 1,
.streamCount = 1,
.materialID = 0,
.meshSize = meshSize,
.vertexCount = m->mNumVertices,
.lodOffset = { indexOffset * sizeof(uint32_t), (indexOffset + numIndices) * sizeof(uint32_t) },
.streamOffset = { vertexOffset * streamElementSize },
.streamElementSize = { streamElementSize }
};
for (size_t i = 0; i != m->mNumVertices; i++) {
const aiVector3D& v = m->mVertices[i];
const aiVector3D& n = m->mNormals[i];
const aiVector3D& t = hasTexCoords ? m->mTextureCoords[0][i] : aiVector3D();
vertexData.push_back(v.x);
vertexData.push_back(v.y);
vertexData.push_back(v.z);
if (exportTextures) {
vertexData.push_back(t.x);
vertexData.push_back(t.y);
}
if (exportNormals) {
vertexData.push_back(n.x);
vertexData.push_back(n.y);
vertexData.push_back(n.z);
}
}
for (size_t i = 0; i != m->mNumFaces; i++) {
const aiFace& F = m->mFaces[i];
indexData.push_back(F.mIndices[0] + vertexOffset);
indexData.push_back(F.mIndices[1] + vertexOffset);
indexData.push_back(F.mIndices[2] + vertexOffset);
}
indexOffset += numIndices;
vertexOffset += m->mNumVertices;
return result;
}
Processing the file comprises loading the scene and converting each mesh into an internal format. Let's take a look at the loadFile() function to learn how to do it:
bool loadFile(const char* fileName) {
if (verbose) printf("Loading '%s'... ", fileName);
const unsigned int flags = aiProcess_JoinIdenticalVertices | aiProcess_Triangulate | aiProcess_GenSmoothNormals | aiProcess_PreTransformVertices | aiProcess_RemoveRedundantMaterials | aiProcess_FindDegenerates | aiProcess_FindInvalidData | aiProcess_FindInstances | aiProcess_OptimizeMeshes;
const aiScene* scene = aiImportFile(fileName, flags);
if (!scene || !scene->HasMeshes()) {
printf("Unable to load '%s' ", fileName);
return false;
}
meshes.reserve(scene->mNumMeshes);
for (size_t i = 0; i != scene->mNumMeshes; i++)
meshes.push_back( convertAIMesh(scene->mMeshes[i]));
return true;
}
Saving converted meshes inside our file format is the reverse process of reading meshes from the file described in the Organizing the storage of mesh data recipe:
inline void saveMeshesToFile(FILE* f) {
const MeshFileHeader header = {
  .magicValue = 0x12345678,
  .meshCount = (uint32_t)meshes.size(),
  .dataBlockStartOffset = (uint32_t)(sizeof(MeshFileHeader) + meshes.size() * sizeof(Mesh)),
  .indexDataSize = (uint32_t)(indexData.size() * sizeof(uint32_t)),
  .vertexDataSize = (uint32_t)(vertexData.size() * sizeof(float))
};
fwrite(&header, 1, sizeof(header), f);
fwrite( meshes.data(), header.meshCount, sizeof(Mesh), f);
fwrite( indexData.data(), 1, header.indexDataSize, f);
fwrite( vertexData.data(), 1, header.vertexDataSize, f);
}
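On the loading side, it is sensible to validate magicValue before trusting the rest of the header. A small sketch of such a check, which is not part of the loading pseudocode shown earlier, could look like this:
MeshFileHeader header;
if (fread(&header, 1, sizeof(header), f) != sizeof(header) ||
    header.magicValue != 0x12345678) {
  printf("Unrecognized mesh file\n");
  exit(255);
}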
Let's put all of this code into a functioning mesh converter app:
int main(int argc, char** argv) {
if (argc < 3) {
printf("Usage: meshconvert <input> <output> [--export-texcoords | -t] [--export-normals | -n]\n");
printf("Options:\n");
printf("  --export-texcoords | -t: export texture coordinates\n");
printf("  --export-normals | -n: export normals\n");
exit(255);
}
Note
This sort of manual command-line parsing is tedious and error-prone. It is used for simplicity in this book. In real-world applications, normally, you would use a command-line parsing library. We recommend that you try Argh! from https://github.com/adishavit/argh.
for (int i = 3 ; i < argc ; i++) {
exportTextures |= !strcmp(argv[i], "--export-texcoords") || !strcmp(argv[i], "-t");
exportNormals |= !strcmp(argv[i], "--export-normals") || !strcmp(argv[i], "-n");
const bool exportAll = !strcmp(argv[i], "-tn") || !strcmp(argv[i], "-nt");
exportTextures |= exportAll;
exportNormals |= exportAll;
}
if (exportTextures) numElementsToStore += 2;
if (exportNormals ) numElementsToStore += 3;
if ( !loadFile(argv[1]) ) exit(255);
After loading and converting all of the meshes, we save the output file:
FILE *f = fopen(argv[2], "wb");
saveMeshesToFile(f);
fclose(f);
return 0;
}
To use the mesh conversion tool, let's invoke it to convert one of the Lumberyard Bistro meshes into our mesh format. That can be done with the following command:
Ch5_Tool05_MeshConvert_Release exterior.obj exterior.mesh -tn
The output mesh is saved inside the exterior.mesh file. Let's go through the rest of this chapter to learn how to render this mesh with Vulkan.
The complete source code of the converter can be found in the Chapter5/MeshConvert folder. The final version of the tool contains LOD-generation functionality, which will be discussed later in the Generating LODs using MeshOptimizer recipe.
Indirect rendering is the process of issuing drawing commands to the graphics API, where most of the parameters to those commands come from GPU buffers. It is a part of many modern GPU usage paradigms, and it exists in all contemporary rendering APIs in some form. For example, we can do indirect rendering with OpenGL using the glDraw*Indirect*() family of functions. Instead of dealing with OpenGL here, let's get more technical and learn how to combine indirect rendering in Vulkan with the mesh data format that we introduced in the Organizing the storage of mesh data recipe.
Once we have defined the mesh data structures, we also need to render them. To do this, we allocate GPU buffers for the vertex and index data using the previously described functions, upload all the data to GPU, and, finally, fill the command buffers to render these buffers at each frame.
The whole point of the previously defined Mesh data structure is the ability to render multiple meshes in a single Vulkan command. Since version 1.0 of the API, Vulkan supports the technique of indirect rendering. This means we do not need to issue the vkCmdDraw() command for each and every mesh. Instead, we create a GPU buffer and fill it with an array of VkDrawIndirectCommand structures, fill these structures with appropriate offsets into our index and vertex data buffers, and, finally, emit a single vkCmdDrawIndirect() call.
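For reference, each element of that buffer is a VkDrawIndirectCommand structure, which the Vulkan headers define as follows:
typedef struct VkDrawIndirectCommand {
  uint32_t vertexCount;
  uint32_t instanceCount;
  uint32_t firstVertex;
  uint32_t firstInstance;
} VkDrawIndirectCommand;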
Before we proceed with rendering, let's introduce a data structure to represent an individual mesh instance in our 3D world. We will use it to specify which meshes we want to render, how to transform them, and which material and LOD level should be used:
struct InstanceData {
float transform[16];
uint32_t meshIndex;
uint32_t materialIndex;
uint32_t LOD;
uint32_t indexOffset;
};
As mentioned in Chapter 4, Adding User Interaction and Productivity Tools, we implement another layer for our frame composition system:
class MultiMeshRenderer: public RendererBase {
public:
MultiMeshRenderer( VulkanRenderDevice& vkDev, const char* meshFile, const char* instanceFile, const char* materialFile, const char* vtxShaderFile, const char* fragShaderFile);
private:
std::vector<InstanceData> instances;
std::vector<Mesh> meshes;
std::vector<uint32_t> indexData;
std::vector<float> vertexData;
VulkanRenderDevice& vkDev;
VkBuffer storageBuffer_;
VkDeviceMemory storageBufferMemory_;
uint32_t maxVertexBufferSize_, maxIndexBufferSize_;
uint32_t maxInstances_;
uint32_t maxInstanceSize_, maxMaterialSize_;
VkBuffer materialBuffer_;
VkDeviceMemory materialBufferMemory_;
std::vector<VkBuffer> indirectBuffers_;
std::vector<VkDeviceMemory> indirectBuffersMemory_;
std::vector<VkBuffer> instanceBuffers_;
std::vector<VkDeviceMemory> instanceBuffersMemory_;
bool createDescriptorSet(VulkanRenderDevice& vkDev);
void updateUniformBuffer(VulkanRenderDevice& vkDev, size_t currentImage, const mat4&m)
{
uploadBufferData(vkDev, uniformBuffersMemory_[currentImage], 0, glm::value_ptr(m), sizeof(mat4));
}
void updateInstanceBuffer(VulkanRenderDevice& vkDev, size_t currentImage, uint32_t instanceSize, const void* instanceData)
{
uploadBufferData(vkDev, instanceBuffersMemory_[currentImage], 0, instanceData, instanceSize);
}
void updateGeometryBuffers( VulkanRenderDevice& vkDev, uint32_t vertexCount, uint32_t indexCount, const void* vertices, const void* indices)
{
uploadBufferData(vkDev, storageBufferMemory_, 0, vertices, vertexCount);
uploadBufferData(vkDev, storageBufferMemory_, maxVertexBufferSize_, indices, indexCount);
}
void updateIndirectBuffers( VulkanRenderDevice& vkDev, size_t currentImage)
{
VkDrawIndirectCommand* data = nullptr;
vkMapMemory(vkDev.device, indirectBuffersMemory_[currentImage], 0, maxInstances_ * sizeof(VkDrawIndirectCommand), 0, (void **)&data);
for (uint32_t i = 0 ; i < maxInstances_ ; i++) {
const uint32_t j = instances[i].meshIndex;
data[i] = { .vertexCount = static_cast<uint32_t>( meshes[j].lodSize( instances[i].LOD) / sizeof(uint32_t)),
.instanceCount = 1,
.firstVertex = static_cast<uint32_t>( meshes[j].streamOffset[0] / meshes[j].streamElementSize[0]), .firstInstance = i };
}
vkUnmapMemory(vkDev.device, indirectBuffersMemory_[currentImage]);
}
virtual void fillCommandBuffer( VkCommandBuffer commandBuffer, size_t currentImage) override
{
beginRenderPass(commandBuffer, currentImage);
vkCmdDrawIndirect(commandBuffer, indirectBuffers_[currentImage], 0, maxInstances_, sizeof(VkDrawIndirectCommand));
vkCmdEndRenderPass(commandBuffer);
}
virtual ~MultiMeshRenderer() {
VkDevice device = vkDev.device;
vkDestroyBuffer(device, storageBuffer_, nullptr);
vkFreeMemory( device, storageBufferMemory_, nullptr);
for (size_t i = 0; i < swapchainFramebuffers_.size(); i++)
{
vkDestroyBuffer( device, instanceBuffers_[i], nullptr);
vkFreeMemory( device, instanceBuffersMemory_[i], nullptr);
vkDestroyBuffer( device, indirectBuffers_[i], nullptr);
vkFreeMemory( device, indirectBuffersMemory_[i], nullptr);
}
vkDestroyBuffer(device, materialBuffer_, nullptr);
vkFreeMemory( device, materialBufferMemory_, nullptr);
destroyVulkanImage(device, depthTexture_);
}
The longest part of the code is the constructor. To describe the initialization process, we need to define two helper functions:
void MultiMeshRenderer::loadInstanceData( const char* instanceFile)
{
FILE* f = fopen(instanceFile, "rb");
fseek(f, 0, SEEK_END);
size_t fsize = ftell(f);
fseek(f, 0, SEEK_SET);
After determining the size of the input file, we should calculate the number of instances in this file:
maxInstances_ = static_cast<uint32_t>( fsize / sizeof(InstanceData));
instances.resize(maxInstances_);
A single fread() call gets the instance data loading job done:
if (fread(instances.data(), sizeof(InstanceData), maxInstances_, f) != maxInstances_)
{
printf("Unable to read instance data ");
exit(255);
}
fclose(f);
}
MeshFileHeader MultiMeshRenderer::loadMeshData( const char* meshFile)
{
MeshFileHeader header;
FILE* f = fopen(meshFile, "rb");
The loading process is the same as in the pseudocode from the Implementing a geometry conversion tool recipe:
if (fread(&header, 1, sizeof(header), f) != sizeof(header)) {
printf("Unable to read mesh file header ");
exit(255);
}
meshes.resize(header.meshCount);
After reading the file header, we read individual mesh descriptions:
if (fread(meshes.data(), sizeof(Mesh), header.meshCount, f) != header.meshCount) {
printf("Could not read mesh descriptors ");
exit(255);
}
Two more fread() calls read the mesh indices and vertex data:
indexData.resize( header.indexDataSize / sizeof(uint32_t));
vertexData.resize( header.vertexDataSize / sizeof(float));
if ((fread(indexData.data(), 1,header.indexDataSize, f) != header.indexDataSize) ||
(fread(vertexData.data(),1,header.vertexDataSize, f) != header.vertexDataSize))
{
printf("Unable to read index/vertex data ");
exit(255);
}
fclose(f);
To ensure the correct initialization within the constructor, we return the initialized MeshFileHeader object:
return header;
}
We are ready to describe the initialization procedure of the MultiMeshRenderer class:
MultiMeshRenderer::MultiMeshRenderer( VulkanRenderDevice& vkDev, const char* meshFile, const char* instanceFile, const char* materialFile, const char* vtxShaderFile, const char* fragShaderFile)
: RendererBase(vkDev, VulkanImage()), vkDev(vkDev)
{
In the same way as our other renderers, we create a render pass object:
if (!createColorAndDepthRenderPass(vkDev, false, &renderPass_, RenderPassCreateInfo()))
{
printf("Failed to create render pass ");
exit(EXIT_FAILURE);
}
framebufferWidth_ = vkDev.framebufferWidth;
framebufferHeight_ = vkDev.framebufferHeight;
createDepthResources(vkDev, framebufferWidth_, framebufferHeight_, depthTexture_);
loadInstanceData(instanceFile);
MeshFileHeader header = loadMeshData(meshFile);
const uint32_t indirectDataSize = maxInstances_ * sizeof(VkDrawIndirectCommand);
maxInstanceSize_ = maxInstances_ * sizeof(InstanceData);
maxMaterialSize_ = 1024;
instanceBuffers_.resize( vkDev.swapchainImages.size());
instanceBuffersMemory_.resize( vkDev.swapchainImages.size());
indirectBuffers_.resize( vkDev.swapchainImages.size());
indirectBuffersMemory_.resize( vkDev.swapchainImages.size());
For this recipe, we do not need materials or textures. So, we will just allocate the buffer for the material data and avoid using it for now:
if (!createBuffer(vkDev.device, vkDev.physicalDevice, maxMaterialSize_, VK_BUFFER_USAGE_STORAGE_BUFFER_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT, materialBuffer_, materialBufferMemory_))
{
printf("Cannot create material buffer ");
exit(EXIT_FAILURE);
}
maxVertexBufferSize_ = header.vertexDataSize;
maxIndexBufferSize_ = header.indexDataSize;
In the previous chapters, we were lucky that the size of our vertex data was a multiple of 16 bytes. Now, we want to store arbitrary arrays of mesh vertices and face indices, so this forces us to support arbitrary offsets of GPU sub-buffers. Our descriptor set for MultiMeshRenderer has two logical storage buffers for index and vertex data. In the following snippet, we pad the vertex data with zeros so that its size has the necessary alignment properties:
VkPhysicalDeviceProperties devProps;
vkGetPhysicalDeviceProperties( vkDev.physicalDevice, &devProps);
const uint32_t offsetAlignment = devProps.limits.minStorageBufferOffsetAlignment;
if ((maxVertexBufferSize_&(offsetAlignment-1)) != 0)
{
int floats = (offsetAlignment - (maxVertexBufferSize_&(offsetAlignment-1))) / sizeof(float);
for (int ii = 0; ii < floats; ii++)
vertexData.push_back(0);
maxVertexBufferSize_ = (maxVertexBufferSize_+offsetAlignment) & ~(offsetAlignment - 1);
}
if (!createBuffer(vkDev.device, vkDev.physicalDevice, maxVertexBufferSize_ + maxIndexBufferSize_, VK_BUFFER_USAGE_STORAGE_BUFFER_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT, storageBuffer_, storageBufferMemory_))
{
printf("Cannot create vertex/index buffer ");
exit(EXIT_FAILURE);
}
updateGeometryBuffers(vkDev, header.vertexDataSize, header.indexDataSize, vertexData.data(), indexData.data());
for (size_t i = 0; i < vkDev.swapchainImages.size(); i++)
{
if (!createBuffer(vkDev.device, vkDev.physicalDevice, indirectDataSize, VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT,
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT, indirectBuffers_[i], indirectBuffersMemory_[i]))
{
printf("Cannot create indirect buffer ");
exit(EXIT_FAILURE);
}
updateIndirectBuffers(vkDev, i);
In the demo application code snippet at the end of this recipe, we do not update this buffer during runtime. However, it might be necessary to do so if we want to set the LOD for the meshes when the camera position changes.
if (!createBuffer(vkDev.device, vkDev.physicalDevice, maxInstanceSize_, VK_BUFFER_USAGE_STORAGE_BUFFER_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT, instanceBuffers_[i], instanceBuffersMemory_[i]))
{
printf("Cannot create instance buffer ");
exit(EXIT_FAILURE);
}
updateInstanceBuffer( vkDev, i, maxInstanceSize_, instances.data());
}
This completes the description of our initialization process. Now, let's turn to the shader source code:
#version 460
layout(location = 0) out vec3 uvw;
struct ImDrawVert {
float x, y, z;
};
struct InstanceData {
mat4 xfrm;
uint mesh;
uint matID;
uint lod;
};
struct MaterialData {
uint tex2D;
};
layout(binding = 0) uniform UniformBuffer { mat4 inMtx; } ubo;
layout(binding = 1) readonly buffer SBO {ImDrawVert data[];} sbo;
layout(binding = 2) readonly buffer IBO {uint data[];} ibo;
layout(binding = 3) readonly buffer InstBO {InstanceData data[];} instanceBuffer;
vec3 colors[4] = vec3[]( vec3(1.0, 0.0, 0.0), vec3(0.0, 1.0, 0.0), vec3(0.0, 0.0, 1.0), vec3(0.0, 0.0, 0.0));
void main() {
uint idx = ibo.data[gl_VertexIndex];
ImDrawVert v = sbo.data[idx];
uvw = (gl_BaseInstance >= 4) ? vec3(1,1,0): colors[gl_BaseInstance];
mat4 xfrm = transpose( instanceBuffer.data[gl_BaseInstance].xfrm);
gl_Position = ubo.inMtx * xfrm * vec4(v.x, v.y, v.z, 1.0);
}
The data/shaders/chapter05/VK01.frag fragment shader simply outputs the color passed in the uvw variable from the vertex shader. In the subsequent Chapter 7, Graphics Rendering Pipeline, we will use the material information buffer and read material parameters from there. For now, a solid color is enough to run our multi-mesh rendering code:
#version 460
layout(location = 0) in vec3 uvw;
layout(location = 0) out vec4 outColor;
void main()
{
outColor = vec4(uvw, 0.5);
}
The vkCmdDrawIndirect() function is part of core Vulkan. However, issuing multiple draws from a single indirect buffer and using non-zero firstInstance values require the multiDrawIndirect and drawIndirectFirstInstance device features, which must be explicitly enabled during the Vulkan render device initialization phase:
void initVulkan()
{
createInstance(&vk.instance);
if (!setupDebugCallbacks( vk.instance, &vk.messenger, &vk.reportCallback))
{
exit(EXIT_FAILURE);
}
if (glfwCreateWindowSurface( vk.instance, window, nullptr, &vk.surface))
{
exit(EXIT_FAILURE);
}
if (!initVulkanRenderDevice(vk, vkDev, kScreenWidth, kScreenHeight, isDeviceSuitable, { .multiDrawIndirect = VK_TRUE, .drawIndirectFirstInstance = VK_TRUE }))
{
exit(EXIT_FAILURE);
}
clear = std::make_unique<VulkanClear>( vkDev, VulkanImage());
finish = std::make_unique<VulkanFinish>( vkDev, VulkanImage());
multiRenderer = std::make_unique<MultiMeshRenderer>( vkDev, "data/meshes/test.cubes", "data/meshes/test.grid", "", "data/shaders/chapter05/VK01.vert", "data/shaders/chapter05/VK01.frag");
}
multiRenderer->updateUniformBuffer( vkDev, imageIndex, mtx);
multiRenderer->fillCommandBuffer( commandBuffer, imageIndex);
It might be challenging to write a modern Vulkan renderer from scratch. For those who are interested, we would like to recommend an open source project, https://github.com/zeux/niagara, by Arseny Kapoulkine, which tries to achieve exactly that. Many advanced Vulkan topics are covered in his YouTube streaming sessions.
In the previous recipes of this chapter, we learned how to organize geometry storage in a more systematic way. To debug our applications, it is useful to have a visible representation of the coordinate system so that a viewer can quickly infer the camera orientation and position just by looking at a rendered image. A natural way to represent a coordinate system in an image is to render an infinite grid where the grid plane is aligned with one of the coordinate planes. Let's learn how to implement a decent-looking grid in GLSL.
The full C++ source code for this recipe can be found in Chapter5/GL01_Grid. The corresponding GLSL shaders are located in the data/shaders/chapter05/GL01_grid.frag and data/shaders/chapter05/GL01_grid.vert files.
To parametrize our grid, we should introduce some parameters. They can be found and tweaked in the data/shaders/chapter05/GridParameters.h GLSL include file:
float gridSize = 100.0;
float gridCellSize = 0.025;
vec4 gridColorThin = vec4(0.5, 0.5, 0.5, 1.0);
vec4 gridColorThick = vec4(0.0, 0.0, 0.0, 1.0);
const float gridMinPixelsBetweenCells = 2.0;
layout (location=0) out vec2 uv;
const vec3 pos[4] = vec3[4]( vec3(-1.0, 0.0, -1.0), vec3( 1.0, 0.0, -1.0), vec3( 1.0, 0.0, 1.0), vec3(-1.0, 0.0, 1.0));
const int indices[6] = int[6](0, 1, 2, 2, 3, 0);
void main() {
vec3 vpos = pos[indices[gl_VertexID]]* gridSize;
gl_Position = proj * view * vec4(vpos, 1.0);
uv = vpos.xz;
}
The fragment shader is somewhat more complex. It will calculate a programmatic texture that looks like a grid. The grid lines are rendered based on how fast the uv coordinates change in the image space to avoid the Moiré pattern. Therefore, we are going to need screen space derivatives:
float log10(float x) {
return log(x) / log(10.0);
}
float satf(float x) {
return clamp(x, 0.0, 1.0);
}
vec2 satv(vec2 x) {
return clamp(x, vec2(0.0), vec2(1.0));
}
float max2(vec2 v) {
return max(v.x, v.y);
}
vec2 dudv = vec2(
length(vec2(dFdx(uv.x), dFdy(uv.x))),
length(vec2(dFdx(uv.y), dFdy(uv.y)))
);
float lodLevel = max(0.0, log10((length(dudv) * gridMinPixelsBetweenCells) / gridCellSize) + 1.0);
float lodFade = fract(lodLevel);
Besides the LOD value itself, we are going to need a fading factor to render smooth transitions between adjacent levels. This can be obtained by taking the fractional part of the floating-point LOD level. A base-10 logarithm is used because each successive LOD covers 10 times more cells than the previous one.
float lod0 = gridCellSize * pow(10.0, floor(lodLevel+0));
float lod1 = gridCellSize * pow(10.0, floor(lodLevel+1));
float lod2 = gridCellSize * pow(10.0, floor(lodLevel+2));
dudv *= 4.0;
float lod0a = max2( vec2(1.0) - abs(satv(mod(uv, lod0) / dudv) * 2.0 - vec2(1.0)) );
float lod1a = max2( vec2(1.0) - abs(satv(mod(uv, lod1) / dudv) * 2.0 - vec2(1.0)) );
float lod2a = max2( vec2(1.0) - abs(satv(mod(uv, lod2) / dudv) * 2.0 - vec2(1.0)) );
vec4 c = lod2a > 0.0 ? gridColorThick : lod1a > 0.0 ? mix(gridColorThick, gridColorThin, lodFade) : gridColorThin;
float opacityFalloff = (1.0 - satf(length(uv) / gridSize));
c.a *= lod2a > 0.0 ? lod2a : lod1a > 0.0 ? lod1a : (lod0a * (1.0-lodFade));
c.a *= opacityFalloff;
out_FragColor = c;
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
...
const PerFrameData perFrameData = { .view = view, .proj = p, .cameraPos = glm::vec4(camera.getPosition(), 1.0f) };
glNamedBufferSubData(perFrameDataBuffer, 0, kUniformBufferSize, &perFrameData);
glDrawArraysInstancedBaseInstance( GL_TRIANGLES, 0, 6, 1, 0);
View the complete example at Chapter5/GL01_Grid for a self-contained demo app. The camera can be controlled with the WSAD keys and a mouse. The resulting image should appear similar to the following screenshot:
Figure 5.1 – The GLSL grid
Besides only considering the distance to the camera to calculate the antialiasing falloff factor, we can use the angle between the viewing vector and the grid line. This will make the overall look and feel of the grid more visually pleasing and can be an interesting improvement if you want to implement a grid not only as an internal debugging tool but also as a part of a customer-facing product, such as an editor. Please refer to the Our Machinery blog for additional details about how to implement a more complicated grid (https://ourmachinery.com/post/borderland-between-rendering-and-editor-part-1).
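As a rough illustration of that idea, the alpha could additionally be attenuated at grazing angles. This is only a sketch of our own; the cameraPos uniform in the fragment shader and the exact falloff curve are assumptions, not part of the shipped shaders:
// The grid lies in the XZ plane, so the fragment's world position is (uv.x, 0, uv.y)
vec3 viewDir = normalize(cameraPos.xyz - vec3(uv.x, 0.0, uv.y));
// viewDir.y is the cosine of the angle to the grid plane normal;
// fade the lines out when the grid is viewed at grazing angles
c.a *= 1.0 - pow(1.0 - abs(viewDir.y), 8.0);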
In the previous recipes, we learned how to build a mesh preprocessing pipeline and convert 3D meshes from data exchange formats, such as .obj or .gltf2, into our runtime mesh data format and render it via the Vulkan API. Let's switch gears and examine how to render this converted data using OpenGL.
The full source code for this recipe is located in Chapter5/GL03_MeshRenderer. It is recommended that you revisit the Implementing a geometry conversion tool recipe before continuing further.
Let's implement a simple GLMesh helper class to render our mesh using OpenGL:
class GLMesh final {
public:
GLMesh(const uint32_t* indices, uint32_t indicesSizeBytes, const float* vertexData, uint32_t verticesSizeBytes)
: numIndices_(indicesSizeBytes / sizeof(uint32_t))
, bufferIndices_(indicesSizeBytes, indices, 0)
, bufferVertices_(verticesSizeBytes, vertexData, 0)
{
glCreateVertexArrays(1, &vao_);
glVertexArrayElementBuffer( vao_, bufferIndices_.getHandle());
glVertexArrayVertexBuffer(vao_, 0, bufferVertices_.getHandle(), 0, sizeof(vec3));
glEnableVertexArrayAttrib(vao_, 0);
glVertexArrayAttribFormat( vao_, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao_, 0, 0);
}
void draw() const {
glBindVertexArray(vao_);
glDrawElements(GL_TRIANGLES, static_cast<GLsizei>(numIndices_), GL_UNSIGNED_INT, nullptr);
}
~GLMesh() {
glDeleteVertexArrays(1, &vao_);
}
private:
GLuint vao_;
uint32_t numIndices_;
GLBuffer bufferIndices_;
GLBuffer bufferVertices_;
};
Now we should read the converted mesh data from our file and load it into a new GLMesh object. Let's discuss how to do this next. Perform the following steps:
FILE* f = fopen("data/meshes/test.meshes", "rb");
if (!f) {
printf("Unable to open mesh file ");
exit(255);
}
MeshFileHeader header;
if (fread(&header, 1, sizeof(header), f) != sizeof(header)) {
printf("Unable to read mesh file header ");
exit(255);
}
std::vector<Mesh> meshes1;
const auto meshCount = header.meshCount;
meshes1.resize(meshCount);
if (fread(meshes1.data(), sizeof(Mesh), meshCount, f) != meshCount) {
printf("Could not read meshes ");
exit(255);
}
std::vector<uint32_t> indexData;
std::vector<float> vertexData;
const auto idxDataSize = header.indexDataSize;
const auto vtxDataSize = header.vertexDataSize;
indexData.resize(idxDataSize / sizeof(uint32_t));
vertexData.resize(vtxDataSize / sizeof(float));
if ((fread(indexData.data(), 1, idxDataSize, f) != idxDataSize) || (fread(vertexData.data(), 1, vtxDataSize, f) != vtxDataSize)) {
printf("Unable to read index/vertex data ");
exit(255);
}
fclose(f);
GLMesh mesh(indexData.data(), idxDataSize, vertexData.data(), vtxDataSize);
Now we can go ahead and configure the OpenGL for rendering. To do that, we follow these simple steps:
GLShader shdGridVertex( "data/shaders/chapter05/GL01_grid.vert");
GLShader shdGridFragment( "data/shaders/chapter05/GL01_grid.frag");
GLProgram progGrid(shdGridVertex, shdGridFragment);
GLShader shaderVertex( "data/shaders/chapter05/GL03_mesh_inst.vert");
GLShader shaderGeometry( "data/shaders/chapter05/GL03_mesh_inst.geom");
GLShader shaderFragment( "data/shaders/chapter05/GL03_mesh_inst.frag");
GLProgram program( shaderVertex, shaderGeometry, shaderFragment);
const mat4 m(1.0f);
GLBuffer modelMatrices( sizeof(mat4), value_ptr(m), GL_DYNAMIC_STORAGE_BIT);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, modelMatrices.getHandle());
layout(std430, binding = 2) restrict readonly buffer Matrices
{
mat4 in_Model[];
};
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
Inside the frame rendering loop, we first draw the mesh with depth testing enabled and blending disabled, and then render the grid on top of it:
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
program.useProgram();
mesh.draw();
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
progGrid.useProgram();
glDrawArraysInstancedBaseInstance (GL_TRIANGLES, 0, 6, 1, 0);
The running application will render an image of the Lumberyard Bistro mesh that looks similar to the following screenshot:
Figure 5.2 – The Amazon Lumberyard Bistro mesh geometry loaded and rendered
Earlier in this chapter, in the Implementing a geometry conversion tool recipe, we talked about preprocessing a mesh so that we can store it in a runtime efficient data format. One important part of the preprocessing pipeline is optimizing geometry and generating simplified meshes for real-time discrete LOD algorithms that we might want to use later. Let's learn how to generate simplified meshes using the MeshOptimizer library.
It is recommended that you revisit the Introducing MeshOptimizer recipe from Chapter 2, Using Essential Libraries. The complete source code for this recipe can be found in Chapter5/MeshConvert.
We are going to add a processLODs() function to our MeshConverter tool so that we can generate all of the necessary LOD meshes for a specified set of indices and vertices. Let's go through this function step by step to learn how to do it:
void processLODs( std::vector<uint32_t> indices, const std::vector<float>& vertices, std::vector<std::vector<uint32_t>>& outLods)
{
size_t verticesCountIn = vertices.size() / 3;
size_t targetIndicesCount = indices.size();
uint8_t LOD = 1;
printf(" LOD0: %i indices", int(indices.size()));
outLods.push_back(indices);
while ( targetIndicesCount > 1024 && LOD < 8 ) {
targetIndicesCount = indices.size()/2;
bool sloppy = false;
size_t numOptIndices = meshopt_simplify( indices.data(), indices.data(), (uint32_t)indices.size(), vertices.data(), verticesCountIn, sizeof( float ) * 3, targetIndicesCount, 0.02f );
if (static_cast<size_t>(numOptIndices * 1.1f) > indices.size()) {
if (LOD > 1) {
numOptIndices = meshopt_simplifySloppy( indices.data(), indices.data(), indices.size(), vertices.data(), verticesCountIn, sizeof(float) * 3, targetIndicesCount);
sloppy = true;
if ( numOptIndices == indices.size() ) break;
}
else break;
}
indices.resize( numOptIndices );
meshopt_optimizeVertexCache( indices.data(), indices.data(), indices.size(), verticesCountIn );
printf(" LOD%i: %i indices %s", int(LOD), int(numOptIndices), sloppy ? "[sloppy]" : "");
LOD++;
outLods.push_back(indices);
}
}
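Here is a hypothetical usage sketch; the srcIndices and srcVertices names are our own and stand for the already-converted index and vertex arrays of a single mesh:
std::vector<std::vector<uint32_t>> lods;
processLODs(srcIndices, srcVertices, lods);
// lods[0] is the original index buffer; lods[1], lods[2], ... are progressively simplified
for (size_t l = 0; l < lods.size(); l++)
  printf("LOD%u: %u indices\n", (unsigned)l, (unsigned)lods[l].size());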
This code will generate up to eight LOD meshes for a given set of indices and vertices, and it will store them inside our runtime mesh format data structures. We will learn how to make use of these LODs in Chapter 10, Advanced Rendering Techniques and Optimizations.
The MeshOptimizer library contains many other useful algorithms, such as triangle strip generation, index and vertex buffer compression, mesh animation data compression, and more. All of these might be very useful for your geometry preprocessing stage, depending on the kind of graphics software you are writing. Please refer to the official documentation and releases page to view the latest features. You can find this at https://github.com/zeux/meshoptimizer.
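For example, a typical preprocessing pass might also reorder the vertex buffer so that vertices are fetched in roughly sequential memory order. The following sketch reuses the indices and vertices arrays from the processLODs() function above; check the MeshOptimizer headers of your version for the exact signatures:
const size_t vertexCount = vertices.size() / 3;
std::vector<float> reorderedVertices(vertices.size());
// Reorder indices for better vertex cache utilization, then reorder the vertices
// to match the new index order
meshopt_optimizeVertexCache(indices.data(), indices.data(), indices.size(), vertexCount);
meshopt_optimizeVertexFetch(reorderedVertices.data(), indices.data(), indices.size(),
                            vertices.data(), vertexCount, sizeof(float) * 3);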
Now, let's switch gears and learn how to integrate hardware tessellation functionality into the OpenGL 4.6 graphics rendering pipeline.
Hardware tessellation is a feature that was introduced in OpenGL 4.0. It is implemented as two new shader stages in the graphics pipeline. The first stage is called the tessellation control shader, and the second stage is called the tessellation evaluation shader. The tessellation control shader operates on a set of vertices, which are called control points and define a geometric surface called a patch. The shader can manipulate the control points and calculate the required tessellation level. The tessellation evaluation shader can access the barycentric coordinates of the tessellated triangles and can use them to interpolate any per-vertex attributes that are required, such as texture coordinates, colors, and more. Let's go through the code to examine how these OpenGL pipeline stages can be used to tessellate a mesh depending on its distance from the camera.
The complete source code for this recipe is located in Chapter5/GL02_Tessellation.
Before we can tackle the actual GLSL shaders, we should augment our OpenGL shader loading code with a new shader type:
GLenum GLShaderTypeFromFileName(const char* fileName)
{
if (endsWith(fileName, ".vert")) return GL_VERTEX_SHADER;
if (endsWith(fileName, ".frag")) return GL_FRAGMENT_SHADER;
if (endsWith(fileName, ".geom")) return GL_GEOMETRY_SHADER;
if (endsWith(fileName, ".tesc")) return GL_TESS_CONTROL_SHADER;
if (endsWith(fileName, ".tese")) return GL_TESS_EVALUATION_SHADER;
if (endsWith(fileName, ".comp")) return GL_COMPUTE_SHADER;
assert(false);
return 0;
}
GLProgram(const GLShader& a, const GLShader& b, const GLShader& c, const GLShader& d, const GLShader& e);
GLProgram::GLProgram( const GLShader& a, const GLShader& b, const GLShader& c, const GLShader& d, const GLShader& e)
: handle_(glCreateProgram())
{
glAttachShader(handle_, a.getHandle());
glAttachShader(handle_, b.getHandle());
glAttachShader(handle_, c.getHandle());
glAttachShader(handle_, d.getHandle());
glAttachShader(handle_, e.getHandle());
glLinkProgram(handle_);
printProgramInfoLog(handle_);
}
What we want to do now is write shaders that will calculate per-vertex tessellation levels based on the distance to the camera. In this way, we can render more geometrical details in the areas that are closer to the viewer. To do that, we should start with a vertex shader, such as data/shaders/chapter05/GL02_duck.vert, which will compute the world positions of the vertices and pass them down to the tessellation control shader:
#version 460 core
layout(std140, binding = 0) uniform PerFrameData {
mat4 view;
mat4 proj;
vec4 cameraPos;
float tessellationScale;
};
struct Vertex {
float p[3];
float tc[2];
};
layout(std430, binding = 1) restrict readonly buffer Vertices
{
Vertex in_Vertices[];
};
layout(std430, binding = 2) restrict readonly buffer Matrices
{
mat4 in_Model[];
};
vec3 getPosition(int i) {
return vec3( in_Vertices[i].p[0], in_Vertices[i].p[1], in_Vertices[i].p[2]);
}
vec2 getTexCoord(int i) {
return vec2( in_Vertices[i].tc[0], in_Vertices[i].tc[1]);
}
layout (location=0) out vec2 uv_in;
layout (location=1) out vec3 worldPos_in;
void main() {
mat4 MVP = proj * view * in_Model[gl_DrawID];
vec3 pos = getPosition(gl_VertexID);
gl_Position = MVP * vec4(pos, 1.0);
uv_in = getTexCoord(gl_VertexID);
worldPos_in = ( in_Model[gl_DrawID] * vec4(pos, 1.0) ).xyz;
}
Now we can move on to the next shader stage and view the tessellation control shader, data/shaders/chapter05/GL02_duck.tesc:
#version 460 core
layout (vertices = 3) out;
layout (location = 0) in vec2 uv_in[];
layout (location = 1) in vec3 worldPos_in[];
layout(std140, binding = 0) uniform PerFrameData {
mat4 view;
mat4 proj;
vec4 cameraPos;
float tessellationScale;
};
in gl_PerVertex {
vec4 gl_Position;
} gl_in[];
out gl_PerVertex {
vec4 gl_Position;
} gl_out[];
struct vertex {
vec2 uv;
};
layout(location = 0) out vertex Out[];
float getTessLevel(float distance0, float distance1) {
const float distanceScale1 = 7.0;
const float distanceScale2 = 10.0;
const float avgDistance = (distance0 + distance1) * 0.5;
if (avgDistance <= distanceScale1 * tessellationScale)
return 5.0;
else if (avgDistance <= distanceScale2 * tessellationScale)
return 3.0;
return 1.0;
}
void main() {
gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
Out[gl_InvocationID].uv = uv_in[gl_InvocationID];
vec3 c = cameraPos.xyz;
float eyeToVertexDistance0 = distance(c, worldPos_in[0]);
float eyeToVertexDistance1 = distance(c, worldPos_in[1]);
float eyeToVertexDistance2 = distance(c, worldPos_in[2]);
gl_TessLevelOuter[0] = getTessLevel( eyeToVertexDistance1, eyeToVertexDistance2);
gl_TessLevelOuter[1] = getTessLevel( eyeToVertexDistance2, eyeToVertexDistance0);
gl_TessLevelOuter[2] = getTessLevel( eyeToVertexDistance0, eyeToVertexDistance1);
gl_TessLevelInner[0] = gl_TessLevelOuter[2];
};
Let's take a look at the data/shaders/chapter05/GL02_duck.tese tessellation evaluation shader:
#version 460 core
layout(triangles, equal_spacing, ccw) in;
struct vertex {
vec2 uv;
};
in gl_PerVertex {
vec4 gl_Position;
} gl_in[];
layout(location = 0) in vertex In[];
out gl_PerVertex {
vec4 gl_Position;
};
layout (location=0) out vec2 uv;
vec2 interpolate2(in vec2 v0, in vec2 v1, in vec2 v2){
return v0 * gl_TessCoord.x + v1 * gl_TessCoord.y + v2 * gl_TessCoord.z;
}
vec4 interpolate4(in vec4 v0, in vec4 v1, in vec4 v2){
return v0 * gl_TessCoord.x + v1 * gl_TessCoord.y + v2 * gl_TessCoord.z;
}
void main() {
gl_Position = interpolate4(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_in[2].gl_Position);
uv = interpolate2(In[0].uv, In[1].uv, In[2].uv);
};
The next stage of our hardware tessellation graphics pipeline is the data/shaders/chapter05/GL02_duck.geom geometry shader. We use it to generate barycentric coordinates for all of the small tessellated triangles. It is used to render a nice antialiased wireframe overlay on top of our colored mesh, as described in Chapter 2, Using Essential Libraries:
#version 460 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;
layout (location=0) in vec2 uv[];
layout (location=0) out vec2 uvs;
layout (location=1) out vec3 barycoords;
void main() {
const vec3 bc[3] = vec3[]( vec3(1.0, 0.0, 0.0), vec3(0.0, 1.0, 0.0), vec3(0.0, 0.0, 1.0) );
for ( int i = 0; i < 3; i++ ) {
gl_Position = gl_in[i].gl_Position;
uvs = uv[i];
barycoords = bc[i];
EmitVertex();
}
EndPrimitive();
}
The final stage of this rendering pipeline is the data/shaders/chapter05/GL02_duck.frag fragment shader:
#version 460 core
layout (location=0) in vec2 uvs;
layout (location=1) in vec3 barycoords;
layout (location=0) out vec4 out_FragColor;
layout (location=0) uniform sampler2D texture0;
float edgeFactor(float thickness) {
vec3 a3 = smoothstep(vec3(0.0), fwidth(barycoords) * thickness, barycoords);
return min( min( a3.x, a3.y ), a3.z );
}
void main() {
vec4 color = texture(texture0, uvs);
out_FragColor = mix( color * vec4(0.8), color, edgeFactor(1.0) );
};
The GLSL shader part of our OpenGL hardware tessellation pipeline is over. Now it is time to look at the C++ code. The source code is located in the Chapter5/GL02_Tessellation/src/main.cpp file:
GLShader shaderVertex( "data/shaders/chapter05/GL02_duck.vert");
GLShader shaderTessControl( "data/shaders/chapter05/GL02_duck.tesc");
GLShader shaderTessEval( "data/shaders/chapter05/GL02_duck.tese");
GLShader shaderGeometry( "data/shaders/chapter05/GL02_duck.geom");
GLShader shaderFragment( "data/shaders/chapter05/GL02_duck.frag");
GLProgram program(shaderVertex, shaderTessControl, shaderTessEval, shaderGeometry, shaderFragment);
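One detail that is easy to miss in the snippets above: when a tessellation control shader is attached to the program, geometry must be submitted as patches rather than triangles. A minimal sketch of the corresponding state and draw call, where numIndices is a placeholder for the index count of the loaded mesh, might look like this:
glPatchParameteri(GL_PATCH_VERTICES, 3); // 3 control points per patch, matching layout(vertices = 3)
glDrawElements(GL_PATCHES, numIndices, GL_UNSIGNED_INT, nullptr);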
The data/rubber_duck/scene.gltf mesh loading code is identical to that of the previous chapter, so we will skip it here. What's more important is how we render the ImGui widget to control the tessellation scale factor:
ImGuiGLRenderer rendererUI;
io.DisplaySize = ImVec2((float)width, (float)height);
ImGui::NewFrame();
ImGui::SliderFloat("Tessellation scale", &tessellationScale, 1.0f, 2.0f, "%.1f");
ImGui::Render();
rendererUI.render( width, height, ImGui::GetDrawData() );
Here is a screenshot of the running demo application:
Figure 5.3 – A tessellated duck
Note how the different tessellation levels vary based on the distance to the camera. Try playing with the control slider to emphasize the effect.
This recipe can be used as a cornerstone for hardware mesh tessellation techniques in your OpenGL applications. One natural step forward would be to apply a displacement map to the fine-grained tessellated vertices along the direction of the normal vectors. Please refer to https://www.geeks3d.com/20100804/test-opengl-4-tessellation-with-displacement-mapping for inspiration. If you want to get serious about the adaptive tessellation of subdivision surfaces, there is a chapter in the GPU Gems 2 book that covers this advanced topic in more detail.