The most important properties for a mesh, other than the vertices themselves, are material and lighting. With the ability to specify the texture, color, and the way the surface behaves with lighting, we can generate much more realistic scenes.
In the previous recipe, we already added two of the three input vectors needed for calculating the lighting: the surface normal and the camera position (from which the per-pixel view direction is derived).
The final input vector is light direction. For this sample, we are going to create a simple directional light, that is, a light that has a constant direction anywhere in the scene (like the sun). We will define this light with a color and direction.
We also introduce the following material properties: the ambient, diffuse, specular (with a specular power), and emissive colors, along with a flag indicating whether a texture is present and a UV transform matrix.
From the three input vectors and the previous material properties, we determine the four lighting output components: the ambient reflection, diffuse reflection, specular reflection, and emissive lighting.
For this recipe, we need the vertices to include a normal vector and the supporting changes from the previous recipe.
The completed project can be found in the companion code as Ch02_02MaterialAndLighting.
The first thing we will do is make changes to Shaders\Common.hlsl to include a new per material constant buffer and add a directional light to the per frame constant buffer to store the light's color and direction.

First, create a structure for the directional light; this must be placed before the PerFrame structure.

// A simple directional light (e.g. the sun)
struct DirectionalLight
{
    float4 Color;
    float3 Direction;
};
Next, update the PerFrame structure to include the light:

cbuffer PerFrame : register(b1)
{
    DirectionalLight Light;
    float3 CameraPosition;
};
Now add the new per material constant buffer (assigned to slot 2):

cbuffer PerMaterial : register(b2)
{
    float4 MaterialAmbient;
    float4 MaterialDiffuse;
    float4 MaterialSpecular;
    float MaterialSpecularPower;
    bool HasTexture;
    float4 MaterialEmissive;
    float4x4 UVTransform;
};
Open Shaders\VS.hlsl and add the highlighted code so that the material diffuse is combined with the vertex color:

result.Diffuse = vertex.Color * MaterialDiffuse;
If we do not set a vertex color or material diffuse color, the result will be black, as the two colors are multiplied together. Therefore, it is important that both are provided with a value. If the diffuse color should have no impact upon the final color (for example, if the texture sample provides all the necessary surface color), set both the vertex and material diffuse colors to white (1.0f, 1.0f, 1.0f, 1.0f). Alternatively, if the vertex color is to be ignored, set it to white (or a grayscale value to control brightness) and let the material diffuse provide the color, and vice versa.
Still within the vertex shader, apply the material's UVTransform matrix to the UV coordinates. The following code shows how this is done:

// Apply material UV transformation
result.TextureUV = mul(float4(vertex.TextureUV.x, vertex.TextureUV.y, 0, 1), (float4x2)UVTransform).xy;
Within ConstantBuffers.cs, we need to update the PerFrame structure to also include the directional light, and create a new structure, PerMaterial, while keeping in mind the HLSL 16-byte alignment.

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct DirectionalLight
{
    public SharpDX.Color4 Color;
    public SharpDX.Vector3 Direction;
    float _padding0;
}

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct PerFrame
{
    public DirectionalLight Light;
    ...
}

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct PerMaterial
{
    public Color4 Ambient;
    public Color4 Diffuse;
    public Color4 Specular;
    public float SpecularPower;
    public uint HasTexture;    // Has texture 0 false, 1 true
    Vector2 _padding0;
    public Color4 Emissive;
    public Matrix UVTransform; // Support UV transforms
}
Within the D3DApp class, create a new private member field, perMaterialBuffer, and initialize the constant buffer in CreateDeviceDependentResources. As the material constant buffer will be used by both the vertex and pixel shaders, assign the buffer to each of them using SetConstantBuffer with the slot 2. We're now ready to update the render loop.
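The buffer creation and binding described above can be sketched as follows. This is a minimal sketch using the SharpDX API; the ToDispose helper comes from the book's resource-management base class, and the exact surrounding code will depend on your D3DApp implementation:

```csharp
// Within D3DApp.CreateDeviceDependentResources:
// Create the per material constant buffer (its contents are
// filled later via context.UpdateSubresource in the render loop)
perMaterialBuffer = ToDispose(new SharpDX.Direct3D11.Buffer(
    device,
    Utilities.SizeOf<ConstantBuffers.PerMaterial>(),
    ResourceUsage.Default,
    BindFlags.ConstantBuffer,
    CpuAccessFlags.None,
    ResourceOptionFlags.None,
    0));

// The material is used by both shader stages, so bind it to
// slot 2 (register b2) of the vertex and pixel shaders
context.VertexShader.SetConstantBuffer(2, perMaterialBuffer);
context.PixelShader.SetConstantBuffer(2, perMaterialBuffer);
```

Binding to slot 2 here must match the register(b2) declaration in the HLSL PerMaterial constant buffer.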
Within the render loop, initialize a default material and update the per material buffer:

var perMaterial = new ConstantBuffers.PerMaterial();
perMaterial.Ambient = new Color4(0.2f);
perMaterial.Diffuse = Color.White;
perMaterial.Emissive = new Color4(0);
perMaterial.Specular = Color.White;
perMaterial.SpecularPower = 20f;
perMaterial.HasTexture = 0;
perMaterial.UVTransform = Matrix.Identity;
context.UpdateSubresource(ref perMaterial, perMaterialBuffer);
Also update the perFrame variable with the following lines of code:

perFrame.Light.Color = Color.White;
var lightDir = Vector3.Transform(new Vector3(1f, -1f, -1f), worldMatrix);
perFrame.Light.Direction = new Vector3(lightDir.X, lightDir.Y, lightDir.Z);
We will now implement three lighting shaders: diffuse (using Lambert's cosine law), Phong, and Blinn-Phong.
Follow the given steps for implementing diffuse shaders:
Within Shaders\Common.hlsl, add the following function to determine the diffuse reflection:

float3 Lambert(float4 pixelDiffuse, float3 normal, float3 toLight)
{
    // Calculate diffuse color (Lambert's Cosine Law - dot
    // product of light and normal). Saturate to clamp the
    // value within 0 to 1.
    float3 diffuseAmount = saturate(dot(normal, toLight));
    return pixelDiffuse.rgb * diffuseAmount;
}
Create a new pixel shader file, Shaders\DiffusePS.hlsl (again, remember the encoding), add the include directive #include "Common.hlsl", and then declare a texture and texture sampler:

Texture2D Texture0 : register(t0);
SamplerState Sampler : register(s0);
Within the PSMain function, include this code:

// After interpolation the values are not necessarily
// normalized
float3 normal = normalize(pixel.WorldNormal);
float3 toEye = normalize(CameraPosition - pixel.WorldPosition);
float3 toLight = normalize(-Light.Direction);

// Texture sample (use white if no texture)
float4 sample = (float4)1.0f;
if (HasTexture)
    sample = Texture0.Sample(Sampler, pixel.TextureUV);

float3 ambient = MaterialAmbient.rgb;
float3 emissive = MaterialEmissive.rgb;
float3 diffuse = Lambert(pixel.Diffuse, normal, toLight);

// Calculate final color component
// We saturate ambient+diffuse to ensure there is no over-
// brightness on the texture sample if the sum is greater
// than 1 (we would not do this for HDR rendering)
float3 color = (saturate(ambient + diffuse) * sample.rgb)
               * Light.Color.rgb + emissive;

// Calculate final alpha value
float alpha = pixel.Diffuse.a * sample.a;
return float4(color, alpha);
Within D3DApp.CreateDeviceDependentResources, create the pixel shader as per the simple and depth pixel shaders.

Follow the given steps for implementing Phong shaders:
Within Shaders\Common.hlsl, we will now add a function for determining the specular reflection using the Phong reflection model:

float3 SpecularPhong(float3 normal, float3 toLight, float3 toEye)
{
    // R = reflect(i,n) => R = i - 2 * n * dot(i,n)
    float3 reflection = reflect(-toLight, normal);

    // Calculate the specular amount (smaller specular power =
    // larger specular highlight). Cannot allow a power of 0
    // otherwise the model will appear black and white
    float specularAmount = pow(saturate(dot(reflection, toEye)),
        max(MaterialSpecularPower, 0.00001f));
    return MaterialSpecular.rgb * specularAmount;
}
Create a new pixel shader file, Shaders\PhongPS.hlsl. After the include directive, create the PSMain function with the same contents as the diffuse shader, except for the following two changes for calculating the color component:

float3 specular = SpecularPhong(normal, toLight, toEye);

float3 color = (saturate(ambient + diffuse) * sample.rgb
               + specular) * Light.Color.rgb + emissive;
Within D3DApp.CreateDeviceDependentResources, create the pixel shader as per the simple and depth pixel shaders.

Follow the given steps for implementing Blinn-Phong shaders:
Within Shaders\Common.hlsl, we will add a function that implements the Blinn-Phong shading model. This is similar to the Phong reflection model; however, instead of the costly reflection calculation per pixel, we use a halfway vector:

float3 SpecularBlinnPhong(float3 normal, float3 toLight, float3 toEye)
{
    // Calculate the halfway vector
    float3 halfway = normalize(toLight + toEye);

    // Saturate is used to prevent backface light reflection
    // Calculate specular (smaller power = larger highlight)
    float specularAmount = pow(saturate(dot(normal, halfway)),
        max(MaterialSpecularPower, 0.00001f));
    return MaterialSpecular.rgb * specularAmount;
}
Create a new pixel shader file, Shaders\BlinnPhongPS.hlsl. After the include directive, Texture2D, and SamplerState declarations, add the PSMain function as per the Phong shader, except in this case, call SpecularBlinnPhong instead of SpecularPhong. Again, create the pixel shader object in your D3DApp class.

We first added the light's color and direction to our per frame constant buffer. This groups the camera location and light together, as in most situations they will not change between the start and end of a frame. The new per material buffer, on the other hand, could change many times per frame, but not necessarily for each object.
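The pixel shader objects referred to in the steps above are created in CreateDeviceDependentResources. The following sketch uses the SharpDX.D3DCompiler API; the file and entry point names follow the recipe, while the field name blinnPhongShader and the ToDispose resource helper are assumptions from the book's framework:

```csharp
// Within D3DApp.CreateDeviceDependentResources - compile and
// create one of the new pixel shaders (repeat for each shader)
using (var bytecode = ShaderBytecode.CompileFromFile(
    @"Shaders\BlinnPhongPS.hlsl", "PSMain", "ps_5_0"))
{
    blinnPhongShader = ToDispose(new PixelShader(device, bytecode));
}
```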
We have set the light's direction in world space as we are performing all our light calculations in this space.
We have added a new structure for storing material properties. These properties are based on the information that we will be loading from the Visual Studio graphics content pipeline CMO file.
By adding the UV transformation matrix to the per material constant buffer, we have completed support for UV mapping for Visual Studio CMO meshes. The UV transform is used to rotate or flip the vertex UV coordinates. This is necessary depending on how the mesh has been converted by the Visual Studio graphics content pipeline. The UV transform can also be useful when changing a mesh's vertex winding order.
UV mapping is the process of unwrapping a mesh and assigning 2D texture coordinates to vertices in such a way that when rendered in 3D space, the texture wraps around the object. This process is performed within the 3D modeling software and looks something like the following screenshot. From a rendering point of view, we are interested in the UV coordinates assigned and the UV transform applied to the mesh.
When performing the UV unwrapping process, it is important to consider the impact that mip-mapping will have on the final render result, as it can lead to color bleeding. Mip-mapping is the process of sampling from lower resolution versions of a texture to control the level of detail for objects that are further away. For example, if a UV coordinate borders two colors with little room to spare, at a lower resolution the two colors may get blended together.
Mipmaps can be created for a DDS texture within the Visual Studio graphics editor, or at runtime when loading a texture by configuring the texture description with the ResourceOptionFlags.GenerateMipMaps flag.

var desc = new Texture2DDescription();
...
desc.OptionFlags = ResourceOptionFlags.GenerateMipMaps;
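Note that the GenerateMipMaps option flag only makes the texture eligible for mipmap generation; the lower mip levels are actually filled by calling GenerateMips on the device context, and Direct3D 11 requires such a texture to be bindable as both a shader resource and a render target. A minimal sketch (the dimensions and format here are illustrative):

```csharp
// Sketch: a texture that supports runtime mip generation
var desc = new Texture2DDescription
{
    Width = 512,
    Height = 512,
    ArraySize = 1,
    MipLevels = 0, // 0 = create a full mip chain
    Format = Format.R8G8B8A8_UNorm,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    // GenerateMips requires both of these bind flags
    BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget,
    OptionFlags = ResourceOptionFlags.GenerateMipMaps,
};
var texture = new Texture2D(device, desc);
var srv = new ShaderResourceView(device, texture);

// ... copy the top-level image into the texture, then:
context.GenerateMips(srv); // fills the lower mip levels
```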
In our common shader file, we have implemented three lighting formulas. The Lambert function calculates the diffuse reflection using Lambert's Cosine law, while the SpecularPhong
and SpecularBlinnPhong
methods calculate the specular reflection using the Phong and Blinn-Phong models respectively. Our pixel shaders then combine the components for ambient light, emissive light, and diffuse, and in the case of the two specular light models, specular reflection.
Blinn-Phong is a modification of the Phong reflection model that produces a slightly larger specular highlight when using the same specular power as Phong. This can easily be corrected by increasing the specular power when using the Blinn-Phong shader. The Blinn-Phong shader generally produces more desirable results than the Phong method (see http://people.csail.mit.edu/wojciech/BRDFValidation/index.html).
Both specular formulas use the normal, light, and view unit vectors shown in the previous figure (note that the Light and View vectors point towards the light and the viewer respectively). The Phong model requires the calculation of a reflection vector (using the normal and the from-light vector, not the to-light vector). The dot product of the reflection vector and the view vector (or eye vector) is then used, along with the specular power material constant, to determine the specular amount.
float3 Reflection = -Light - 2 * Normal * dot(-Light, Normal)
Blinn-Phong uses the vector halfway between the view and the light instead; this is then used in the dot product between the normal and halfway vectors.
float3 halfway = normalize(Light + View);
Although Blinn-Phong is generally less efficient than pure Phong (the additional normalization of the halfway vector involves a square root calculation), for directional lights the halfway vector can be calculated outside of the pixel shader, as it does not rely on the normal vector. In most other cases, where lights are not treated as being at infinity (as they are for directional lights), the Phong model will be faster.

// normalize(v) expands to the following, including a square root:
float3 normalizedV = v / sqrt(dot(v, v));
Currently we are saturating the result of the lighting operations to keep the value within the 0.0 to 1.0 range. If instead, we are supporting high-dynamic-range (HDR) rendering, we could choose a render target format that supports more than 8 bits per component (for example, Format.R16G16B16A16_Float
) and stores a larger range of values. To display the result, this technique requires further processing called tone mapping to map colors from HDR to a lower dynamic range that matches the capabilities of the display device.
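As a sketch of what such a floating-point render target might look like (illustrative only; the names and dimensions are placeholders, and a complete HDR pipeline would also need the tone-mapping pass described above):

```csharp
// A floating-point render target capable of storing HDR values
var hdrDesc = new Texture2DDescription
{
    Width = 1024,
    Height = 768,
    ArraySize = 1,
    MipLevels = 1,
    Format = Format.R16G16B16A16_Float, // 16 bits per component
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
};
var hdrTexture = new Texture2D(device, hdrDesc);
var hdrRtv = new RenderTargetView(device, hdrTexture);

// Render the scene into hdrRtv (values may exceed 1.0), then bind
// hdrTexture as a shader resource for the tone-mapping pass that
// maps the result back into the display's range.
```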
Included in the companion source code is an additional project (Ch03_02WithCubeMapping.csproj) that demonstrates how to sample a texture cube (or cube map) using:

float4 sample = CubeMap.Sample(SamplerClamp, normal);
The support for sampling the texture cube was added by the following methods:
- Creating the cube map texture using the DirectX Texture Tool (DxTex.exe) found in the June 2010 DirectX SDK
- Sampling a TextureCube in the pixel shader, passing the normal vector as the UVW coordinate