Lighting is one of the most important features to add to any shader if you want to add a sense of realism to your game. Players use visual cues such as size and shape to determine the relative position of objects in a 3D game, and lighting and shadows are two important cues that help players judge the depth of objects in relation to one another. That said, it’s not just games with realistic graphics that rely on lighting – heavily stylized games also benefit from this added information. In this chapter, we will see how lighting can be added to objects, starting with relatively simple lighting models and gradually building up to a complicated lighting model based on the physical properties of your objects.
Lighting Models
A lighting model is our way of describing the way light sources interact with the surfaces of objects in the game. Typically, lighting models can’t perfectly recreate the lighting from a real-world scene, but they are a close approximation. Broadly speaking, we can split the light falling onto an object into local or direct illumination, which is the result of a direct interaction between the surface of an object and a light source, and global illumination, which occurs when a proportion of light reflects off a surface and shines on another surface. Let’s discuss several types of light before we work with them inside a shader.
Ambient Light
If you sit in an enclosed room in the daytime, even if your curtains are shut, the room will still be lit because light shines through the gaps in the curtains and bounces all over the room. As a result, most of the objects in the room will have roughly the same level of illumination despite none of them being directly illuminated by the sun. Similarly, even shadowed areas on a bright day still receive a significant amount of light. This indirect, scattered light is known as ambient light.
The amount of ambient light applied to each object is a setting we can manually change at will. If our lighting model only considered ambient light, then the equation to calculate the final color of an object would look like this:
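Sketching that equation (using $L$ for light values, in keeping with the additive model we build up over this section): the total light is just the ambient term, and the object’s final color is its base color multiplied by this light.

```latex
L_{total} = L_{ambient}
```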
That’s not very interesting so far! It’s worth noting at this point that the light can be any color, so this isn’t just a single floating-point value – it’s an RGB color, just like any other. To build a more interesting model, let’s add different types of direct illumination.
Diffuse Light
The dot product between n and l gives us the proportion of diffuse light acting on the surface. These vectors should be normalized prior to any lighting calculation so that the result always lies between –1 and 1. Since the dot product can be negative, we clamp negative values to zero; otherwise, we will encounter visual errors. This value is then multiplied by the color of the light source to give us the total diffuse lighting contribution. We can model the diffuse light with the following equation:
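In symbols, with $\hat{n}$ the surface normal, $\hat{l}$ the normalized direction from the surface toward the light, and $C_{light}$ the light color, the diffuse term is:

```latex
L_{diffuse} = \max(0,\ \hat{n} \cdot \hat{l}) \, C_{light}
```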
Let’s update our lighting model from Equation 10-1 to include diffuse lighting too. Lighting is additive, so all we need to do to calculate the total amount of lighting acting on the object is to sum the individual types of light.
Diffuse lighting is perhaps the most noticeable type of lighting, especially when you move the light source or the object. However, there are other types of lighting that depend on the position of the viewer.
Specular Light
When you view a shiny object, the position and strength of the reflective highlight change whenever you move the object or the angle at which you look at it. This is called specular lighting, and it occurs when the surface of an object is smooth. With diffuse light, the surface typically has imperceptible bumps and other imperfections, which means reflected light is scattered in all directions. With specular light, on the other hand, all or most of the light rays that reach the object’s surface are reflected at the same angle. This means there is always a part of the surface that strongly reflects many rays directly into the viewer, which is why you see very bright highlights on a small section of the surface.
The result of the dot product is raised by a power, α, where a higher power represents a higher degree of shininess. This value is also multiplied by the light color to obtain the final specular light value.
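As a sketch of the classic Phong formulation (assuming the standard definitions: $\hat{r}$ is the light direction mirrored about the surface normal, and $\hat{v}$ points from the surface toward the viewer):

```latex
\hat{r} = 2(\hat{n} \cdot \hat{l})\,\hat{n} - \hat{l}, \qquad
L_{specular} = \max(0,\ \hat{r} \cdot \hat{v})^{\alpha} \, C_{light}
```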
Then, the specular lighting can be obtained using the dot product between n and h. We still need to raise the result by a power, α, although this method typically requires higher powers for a similar result to the first approach.
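Sketched in the same notation, the half vector is the normalized sum of the light and view directions, and the Blinn-Phong specular term becomes:

```latex
\hat{h} = \frac{\hat{l} + \hat{v}}{\lVert \hat{l} + \hat{v} \rVert}, \qquad
L_{specular} = \max(0,\ \hat{n} \cdot \hat{h})^{\alpha} \, C_{light}
```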
Now that we can calculate specular highlights on our objects, we can add specular lighting to the lighting model.
Most basic lighting models would stop here and use only these types of light, but there is another type of light that I am quite fond of including in my shaders, so we will briefly cover it too.
Fresnel Light
When you view objects at a very shallow angle, sometimes they will appear bright. You may have seen this effect before in real life in places like large bodies of clear water or the surface of a polished table. The steeper the angle, the less bright the surface will appear. This is called Fresnel lighting (pronounced like “fruh-nell”). Fresnel light typically isn’t included in many classical lighting models, but I like to include it in many of my shaders, especially if my game uses a stylized aesthetic.
In games, it is common to supply a power, β, to control the influence of Fresnel lighting, like we did for specular lighting. When you increase the power, the Fresnel light gets less prominent. Fresnel light is also multiplied by the light color, just like diffuse and specular light were.
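A common sketch of this term uses the dot product between the normal and view vectors, which is largest when viewing the surface head-on, so subtracting it from one brightens the grazing angles:

```latex
L_{fresnel} = \left(1 - \max(0,\ \hat{n} \cdot \hat{v})\right)^{\beta} \, C_{light}
```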
If you so choose, your lighting model can include Fresnel lighting in addition to the others.
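Since lighting is additive, the complete model is simply the sum of all four terms:

```latex
L_{total} = L_{ambient} + L_{diffuse} + L_{specular} + L_{fresnel}
```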
You should now know the theory behind several types of lighting that interact with the surface of an object. Now let’s look at how to incorporate them into shaders.
Blinn-Phong Reflection Model
The Phong reflection model was developed by Bui Tuong Phong in the mid-1970s to model the way light interacts with the surface of an object, based on the properties of the object itself. It combines ambient, diffuse, and specular lighting to approximate how lighting operates in a real-world scene, as seen in Equation 10-6. The Blinn-Phong reflection model implements Blinn’s alternative calculation of specular lighting into the existing Phong reflection model, as discussed previously. These reflection models work by taking the surface properties of many points on the object’s surface (such as the base color, shininess, and normal vector) and the light vector, view vector, and half vector relative to those points and calculating the final color at each of those points.
Flat shading, Gouraud shading, and Phong shading are three methods for evaluating the amount of light on a surface. Each technique evaluates different locations on the object’s surface and uses different interpolation techniques to obtain the final light amount on each pixel. Let’s see how each of these techniques works.
Flat Shading
Flat shading methods use a single lighting value for each face of the mesh. Since every pixel in a triangle has the same amount of light falling on it, each face of the mesh appears flat, hence the name. To achieve flat shading, all pixels belonging to a particular triangle use the same vectors for the lighting calculations so that the final lighting value is the same for each of those pixels. However, we can still use textures for the base color of the object, so the pixels of a given face can still have different output colors.
Flat shading is a very efficient rendering technique, so it can be used to minimize the performance impact of your game; combined with low-poly meshes, it can also be used to achieve a stylized aesthetic. Now that we know what flat shading is, let’s see an example of how to use it in both HLSL shader code and Shader Graph. This shader will support basic texture mapping and will tint each triangle of the mesh based on its exposure to the primary directional light present in the scene. For now, this is the only light we will consider.
Flat Shading in HLSL
Code skeleton for the FlatShading shader
The Properties block
Declaring properties in HLSL in the built-in pipeline
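A minimal sketch of those declarations (the property names _BaseColor and _BaseTex are assumptions based on the texture-and-tint setup this shader uses):

```hlsl
// Built-in pipeline declarations matching the Properties block.
// sampler2D is the built-in texture type; the _ST variable stores the
// texture's tiling and offset for use with the TRANSFORM_TEX macro.
sampler2D _BaseTex;
float4 _BaseTex_ST;
float4 _BaseColor;
```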
Declaring properties in HLSL in URP
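In URP, the equivalent sketch declares the texture and sampler separately and wraps the material properties in a constant buffer (again, the property names are assumptions):

```hlsl
// Declaring the texture and its sampler with URP's macros, then placing
// the per-material values in the UnityPerMaterial constant buffer so the
// shader stays compatible with the SRP Batcher.
TEXTURE2D(_BaseTex);
SAMPLER(sampler_BaseTex);

CBUFFER_START(UnityPerMaterial)
    float4 _BaseTex_ST;
    float4 _BaseColor;
CBUFFER_END
```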
The appdata struct
The normalOS input takes the normal vectors attached to each vertex of the mesh and automatically uploads them to the shader for us to use.
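A sketch of the struct described above (member names other than normalOS are assumptions):

```hlsl
// Per-vertex inputs supplied by Unity via their semantics. The NORMAL
// semantic fills normalOS with the object-space normal automatically.
struct appdata
{
    float4 positionOS : POSITION;
    float2 uv : TEXCOORD0;
    float3 normalOS : NORMAL;
};
```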
The v2f struct requires the clip-space position and the UVs to be passed to the fragment shader. Since we are using flat shading, we can calculate all the lighting inside the vertex shader and send it to the fragment shader inside the v2f struct, so we won’t need to also include the normal vector in v2f. However, flat shading requires us to calculate the lighting only once per triangle, so we must prevent the lighting value from being interpolated between the vertices of the triangle by using the nointerpolation keyword. There is no special semantic to use for lighting values, so we’ll just use the next available general-use interpolator, which is TEXCOORD1.
Remember that TEXCOORD0, TEXCOORD1, and so on are known as “interpolators.” However, this doesn’t mean their values must be interpolated (mixed) between vertices. Shader terminology can often be confusing! The nointerpolation keyword prevents interpolation from taking place, which means the result from the first vertex of each triangle is used in the v2f struct and sent to every fragment for that triangle.
The v2f struct
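A sketch of the struct (the member name flatLighting is an assumption):

```hlsl
// The nointerpolation modifier stops the rasterizer from blending the
// lighting value across the triangle, so every fragment of a given
// triangle receives the same lighting.
struct v2f
{
    float4 positionCS : SV_POSITION;
    float2 uv : TEXCOORD0;
    nointerpolation float3 flatLighting : TEXCOORD1;
};
```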
The fragment shader
Finally, let’s handle the vertex shader, which will be doing most of the heavy lifting for this shader. The code we’ll use to access lighting information differs wildly between the built-in pipeline and URP, so let’s deal with both versions separately.
Accessing Lights in the Built-In Pipeline
Using the ForwardBase LightMode in the built-in pipeline
Including Lighting.cginc and UnityCG.cginc in the built-in pipeline
The vert function skeleton in the built-in pipeline
Converting from object- to world-space normals
Ambient lighting in the built-in pipeline
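One common way to approximate ambient light in the built-in pipeline, sketched here assuming a world-space normal called normalWS, is to sample the scene’s spherical harmonics with ShadeSH9 from the built-in includes:

```hlsl
// ShadeSH9 evaluates the ambient/light-probe spherical harmonics for
// the given world-space normal.
float3 ambientLighting = ShadeSH9(half4(normalWS, 1.0));
```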
The diffuse lighting comes next. We can access the color of the primary directional light with the _LightColor0 variable, defined in Lighting.cginc, as well as its direction with the _WorldSpaceLightPos0 variable. Since it is a directional light, the positioning of the light relative to the object doesn’t matter. Once we have those values, we can use Equation 10-2 to calculate the amount of diffuse light.
The name _WorldSpaceLightPos0 might be confusing because it’s called “pos” but it’s getting the light direction in this example. Essentially, this variable contains details about the most prominent light in the scene. That’s usually a directional light, in which case the variable returns its direction. If it is a different type of light, like a point light, this variable does indeed contain its position in world space.
Diffuse lighting with one directional light in the built-in pipeline
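A sketch of that calculation (again assuming the world-space normal is called normalWS):

```hlsl
// For a directional light, _WorldSpaceLightPos0.xyz holds the light's
// direction and _LightColor0 its color. Clamp the dot product to zero
// to avoid negative lighting, per Equation 10-2.
float3 lightDir = normalize(_WorldSpaceLightPos0.xyz);
float3 diffuseLighting = _LightColor0.rgb * max(0.0, dot(normalWS, lightDir));
```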
Adding together lighting components in the built-in pipeline
You should now see flat shading on objects in your scene, just like in Figure 10-9. Now let’s see how to do all this in URP instead.
Accessing Lights in URP
Specifying URP inside the Tags block in the SubShader
Using UniversalForward in URP
Including Core.hlsl and Lighting.hlsl
The vert function skeleton in URP
Object- to world-space calculation for normals
Ambient lighting in URP
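In URP, a sketch of the same idea samples the ambient spherical harmonics with SampleSH (assuming a world-space normal named normalWS):

```hlsl
// SampleSH, from the URP ShaderLibrary, evaluates ambient lighting for
// the supplied world-space normal.
float3 ambientLighting = SampleSH(normalWS);
```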
Diffuse lighting with one directional light in URP
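URP bundles the main light’s properties into a Light struct, so a sketch of the diffuse calculation looks like this (normalWS is assumed):

```hlsl
// GetMainLight, from Lighting.hlsl, returns the main light's direction,
// color, and attenuation values.
Light mainLight = GetMainLight();
float3 diffuseLighting = mainLight.color * max(0.0, dot(normalWS, mainLight.direction));
```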
Adding together lighting components
With that, you should now see flat shading just like in Figure 10-9 in your scene. We have now seen how flat shading works in shader code, so let’s move on to Shader Graph and see how we can implement flat shading there.
Flat Shading in Shader Graph
This graph is going to look a bit different from the ones we’ve made so far. In previous examples, each graph we have seen has been an Unlit graph, which means Unity does not automatically apply lighting to the object. However, now that we’re starting to incorporate lighting into our shaders, it’s time to start thinking about Shader Graph’s Lit option. Let’s create a new Lit graph and name it “FlatShading.shadergraph”.
A Lit shader applies the lighting model automatically to objects, but instead of using the Blinn-Phong lighting model we’ve discussed previously, it uses Physically Based Rendering (PBR). We will explore PBR lighting later in this chapter, but for now we will focus on getting the flat shading effect to work. It is a lot of work to get true Blinn-Phong lighting to work inside Shader Graph, and it’s a little bit overkill for this effect, so for now, we will implement flat shading using PBR. The upshot is that the only thing we need to modify is the normal vector output in the Fragment section of the master stack. By replacing the normal vector, which by default is a per-pixel normal vector that has been interpolated across the surface of the mesh, with a per-triangle normal vector, which we can calculate, we’ll end up with the flat shading we desire.
Now we’ll calculate the new normals. We don’t have access to anything like the nointerpolation modifier that we used in HLSL code, so we must calculate the per-triangle normal vector ourselves inside the fragment stage. In this stage of rendering, the shader has no knowledge of where the other vertices of the triangle are, so we can’t just calculate the normals based on that information. However, we do know that any triangle face is always flat, and we can exploit that fact. The ddx and ddy functions in shaders, known as partial derivative functions, evaluate any input on the current pixel and on an adjacent pixel (horizontally for ddx and vertically for ddy) and return the difference between the two values. The equivalent nodes in Shader Graph are called DDX and DDY.
For example, if we input the world-space position to the ddx and ddy functions, we would obtain two small vectors, perpendicular to each other, that lie on the triangle’s surface. The shrewd among you may have realized we can use the cross product on both those vectors to obtain the normal vector pointing away from the triangle, which is exactly what we wanted. The nice thing about this calculation is that, because the triangle is flat, we obtain the same normal vector for each pixel of the triangle, which results in a flat-shaded object. We just have to be careful with the order in which we use the two values with the cross product – the ddy comes first and then ddx.
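In HLSL terms, the node setup corresponds to something like this sketch (positionWS is assumed to be the interpolated world-space position):

```hlsl
// The screen-space derivatives of the world position give two small
// vectors lying on the triangle; their cross product (ddy first, then
// ddx, so the normal faces outward) is the flat per-triangle normal.
float3 flatNormalWS = normalize(cross(ddy(positionWS), ddx(positionWS)));
```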
This technique allows us to generate per-triangle normals in Shader Graph, but this is far more expensive than the HLSL equivalent, because rather than calculating them once for each triangle and passing that value to each fragment for the lighting calculations like HLSL does, in Shader Graph we must recalculate the normal vector for each fragment, which includes an expensive cross product calculation. However, it’s not so prohibitively expensive that you would notice a significant slowdown in your game using this method.
You now know how to implement flat shading into your game, no matter whether you are using HLSL code or Shader Graph. Next, let’s look at another type of shading: Gouraud shading.
Gouraud Shading
The advantage of Gouraud shading over flat shading is that we get a lighting gradient between each vertex due to the use of interpolation and we can implement specular lighting now. However, it’s slightly more resource-intensive than flat shading, and specular lighting still suffers from severe artifacts that can be avoided by using a high-poly object to obtain higher-“resolution” reflections. That introduces problems of its own, however – when we cover Phong shading, we’ll see that subdividing the geometry is unnecessary. But we’re getting ahead of ourselves – let’s see how Gouraud shading works in shader code and then in Shader Graph.
The Gouraud shading effect is interesting as a curiosity or if you are explicitly going for a retro 3D look for your game, but otherwise, you will probably want to use Phong shading instead, which I cover next.
Gouraud Shading in HLSL
Skeleton code for the GouraudShading shader
Gouraud shading properties
Declaring properties in the built-in pipeline
Declaring properties in URP
The v2f struct with split lighting components
Fragment shader adding diffuse and specular lighting contributions
Now we come to the vertex shader, where most of the calculations take place. The code is very different between the built-in pipeline and URP, so I’ll split the rest of the example into two sections.
Gouraud Vertex Shader in the Built-In Pipeline
Calculating vectors required for Gouraud shading in the built-in pipeline
Half vector and specular lighting calculations in the built-in pipeline
Setting diffuseLighting and specularLighting values
All the parts of the vertex shader are in place now, so the shader is complete for the built-in pipeline, and you will see Gouraud shading as in Figure 10-14 on your objects. Let’s see how to write the vertex shader in URP.
Gouraud Vertex Shader in URP
Calculating vectors required for Gouraud shading in URP
Half vector, specular lighting, and v2f lighting variables in URP
The main differences between this and the built-in pipeline version of the shader are just a matter of different function and variable names, but the resulting visuals (as in Figure 10-14) should be almost identical. Let’s see how this effect can be made in Shader Graph now.
Gouraud Shading in Shader Graph
Implementing Gouraud shading in Shader Graph entails more work than doing the same thing in HLSL. In fact, per-vertex lighting is impossible to achieve in Shader Graph versions prior to 12.0 (Unity 2021.2) because we only have access to three interpolators in the vertex stage: the Position, Normal, and Tangent vectors. With Shader Graph 12.0, we get access to custom interpolators that let us funnel custom data from the vertex stage to the fragment stage, like how the v2f struct in shader code gives us full control over the data passed between the vert and frag functions.
Although I claim it’s impossible, you probably can implement per-vertex lighting in older versions of Shader Graph. However, it will require “hacky” ways of getting round Shader Graph’s limitations and will probably require injecting a lot of code using the Custom Function node, so it’s de facto impossible using the intended behavior of Shader Graph. Regrettably, that means the GouraudShading effect only works in Shader Graph 12.0 (Unity 2021.2) and above.
Another roadblock we will encounter is the fact that Shader Graph does not yet have a built-in node that grabs lighting data such as position, direction, and color from the main light or any additional lights. Therefore, we will need to create a Custom Function node to obtain that information, for which we need to write a short section of shader code. Apologies to anyone wishing to avoid code entirely! We’ll wrap that function inside a Sub Graph so that it’s easy to access this behavior in any future graph that requires it. Using that Sub Graph, we can carry out the diffuse and specular lighting calculations required for the shader.
A built-in Get Main Light node, or equivalent, has been “under consideration” by Unity for a while now. One day it might be available in the base Shader Graph package for URP! In lieu of this node, as of the writing of this book, a Get Main Light Direction node is available in Shader Graph 13.0 (for Unity 2022.1) and up. You still won’t have access to the light color or shadowing, but it’s a start.
GetMainLight Sub Graph
Let’s do this step by step. First, I’m going to write a shader file containing the code to access the lighting information. Then, I’ll create a Sub Graph with relevant inputs and outputs that we will be able to use in any graph that requires lighting information. Finally, I’ll create a Custom Function node inside that Sub Graph, which accesses the custom HLSL code, and wire up the Sub Graph inputs and outputs to that node.
We need to append the variable precision to the function name after an underscore, that is, FunctionName_float or FunctionName_half.
We put all the input and output variables inside the parameter list in the function signature:
Inputs look like any regular function parameter (such as “float3 input”).
Outputs use the out keyword (such as “out float3 output”).
The function uses the void return type.
GetMainLight custom function signature
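Putting those rules together, the signature might look like this sketch (the name GetMainLight_float follows the FunctionName_float convention):

```hlsl
// All inputs and outputs live in the parameter list, outputs use the
// out keyword, and the return type is void.
void GetMainLight_float(float3 WorldPos, out float3 Direction, out float3 Color,
    out float DistanceAtten, out float ShadowAtten)
{
    // Function body goes here.
}
```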
This function takes the world-space position as input and outputs the direction, color, distance attenuation, and shadow attenuation of the main light at that position. In almost every case, the main light will be a directional light. We won’t need the last two outputs just yet, but we’ll include them here for completeness.
Dummy values for inside a Shader Graph preview window
Using GetMainLight inside a custom function
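A sketch of the complete function: the SHADERGRAPH_PREVIEW branch supplies dummy values for preview windows, while the other branch queries URP’s GetMainLight.

```hlsl
void GetMainLight_float(float3 WorldPos, out float3 Direction, out float3 Color,
    out float DistanceAtten, out float ShadowAtten)
{
#ifdef SHADERGRAPH_PREVIEW
    // Preview windows have no scene lights, so return sensible defaults.
    Direction = normalize(float3(0.5, 0.5, 0.0));
    Color = float3(1.0, 1.0, 1.0);
    DistanceAtten = 1.0;
    ShadowAtten = 1.0;
#else
    // Query the main light via URP's Lighting.hlsl.
    Light mainLight = GetMainLight();
    Direction = mainLight.direction;
    Color = mainLight.color;
    DistanceAtten = mainLight.distanceAttenuation;
    ShadowAtten = mainLight.shadowAttenuation;
#endif
}
```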
That’s all we need to do for the code. Let’s now create the Sub Graph that will contain the Custom Function node that uses this code – name it “GetMainLight.shadersubgraph”. You’ll see an interface similar to a main graph, except the output stack will contain only a single float4 output by default. The first thing we’ll do is set up the inputs and outputs of the Sub Graph.
If you get red errors on the Custom Function node, you might need to go into the Graph Settings and change the Precision of the graph to Single. We can now use main light information in our graphs, which will be enough for the diffuse and specular lighting, but we still need to handle ambient light. We can use the same process to create a Sub Graph just for ambient light too. Although Shader Graph ships with an Ambient node, it only exposes three modes of ambient light – sky, equator, and ground – and none of them works as well as the method we used in the HLSL version of the shader, so we’re going to use the same code here.
GetAmbientLight Sub Graph
GetAmbientLight function
Create the Sub Graph by right-clicking the Project View and choosing Create ➤ Shader ➤ Sub Graph and naming it “GetAmbientLight”.
Add a Custom Function node to the graph and change the File option to GetAmbientLight.hlsl and the Function option to “AmbientLight”.
Create a WorldNormal Vector3 input and an IsVertex Boolean input for both the Sub Graph and the Custom Function node.
Create an Ambient Vector3 output for both the Sub Graph and the Custom Function node.
Connect the input nodes and Output node to the Custom Function node as required.
These Sub Graphs can now be used in any future graph that requires access to either main light or ambient light information. Without further ado, let’s write a graph that can perform Gouraud shading.
Gouraud Shading with Custom Interpolators
Start by creating a new Unlit graph. This time, we’re going back to an Unlit graph so that we can perform the lighting calculations ourselves in the vertex stage and avoid Unity applying a second layer of lighting to the object automatically. Name this graph “GouraudShading.shadergraph”.
We will be doing the lighting calculations in the vertex shader and will need to pass them to the fragment shader. To do this, we need to make use of Shader Graph’s custom interpolator feature. If you right-click inside the Vertex section of the master stack and choose Add Block Node, a Custom Interpolator option will show up. Add two of those and name them “DiffuseLighting” and “SpecularLighting”, respectively, in the Node Settings. Both use the Vector3 type.
Perform the Dot Product between the Normal Vector and the Direction output of GetMainLight to obtain the amount of diffuse light.
Saturate the result to clamp any negative values to zero.
Multiply by the Color output of GetMainLight to tint the diffuse light with the main light’s color.
Add the GetAmbientLight result to the diffuse light.
Output the result to the DiffuseLighting custom interpolator on the master stack’s Vertex section.
Add the View Vector and the Direction output of GetMainLight together and then Normalize the result to obtain the half vector.
Take the Dot Product between the Normal Vector and that half vector to obtain the amount of specular light.
Saturate the result to clamp any negative values to zero.
Raise the result to the power of the Gloss Power property using a Power node.
Multiply by the Color output of GetMainLight to tint the specular light with the main light’s color.
Output the result to the SpecularLighting custom interpolator on the master stack’s Vertex section.
Sample the Base Texture with a Sample Texture 2D node and multiply the output by Base Color.
Multiply the result by the DiffuseLighting custom interpolator value.
Add the SpecularLighting custom interpolator value.
Output the result to the Base Color output on the master stack’s fragment section.
With those steps followed, you’ll see a result just like in Figure 10-14. This is a relatively complicated graph, so the nodes might be hard to see all on-screen at once! Now that we have seen how Gouraud shading works in both HLSL and Shader Graph, let’s see how Phong shading improves on Gouraud shading.
Phong Shading
First, let’s clarify what we mean by Phong shading. We’ve talked about the Phong reflection model, where we add ambient, diffuse, and specular contributions at different points to calculate the amount of light incident on an object’s surface. Phong shading is a different concept, but annoyingly it shares the same name, so the two are often conflated.
Let’s see how Phong shading can be implemented in HLSL and then in Shader Graph.
Phong Shading in HLSL
Skeleton code for the PhongShading shader
Modifying the v2f struct to include the normal vector and view vector
The code looks very different between the built-in pipeline and URP from here on, so I’ll split this section in two according to the render pipeline you’re using.
Phong Shading in the Built-In Pipeline
The vertex shader for Phong shading in the built-in pipeline
Renormalizing the normal and view vectors
Calculating lighting in the fragment shader in the built-in pipeline
Once you’ve moved the lighting code to the fragment shader like this, you’ll instantly notice a better specular highlight on objects, as seen in Figure 10-23. The diffuse lighting calculations are also more accurate, but generally they’re not as noticeable as the specular highlight improvement. Let’s see how this works in URP.
Phong Shading in URP
The vertex shader for Phong shading in URP
Calculating lighting in the fragment shader in URP
The URP version of the shader now works the same as the built-in pipeline version as seen in Figure 10-23, and you should see a huge improvement in the specular highlight on objects with this shader. We can add Fresnel light to the equation with small additions to the shader.
Fresnel Light Modification
Adding a _FresnelPower property to the Properties block
Declaring _FresnelPower in the built-in pipeline
Declaring _FresnelPower in URP
Adding Fresnel lighting support in the built-in pipeline
Adding Fresnel lighting support in URP
Once these lines of code are added, our objects will look like those in Figure 10-24. To round off this section, let’s see how Phong shading works in Shader Graph, complete with Fresnel lighting support at the end.
Phong Shading in Shader Graph
Phong shading in Shader Graph looks a lot like Gouraud shading in Shader Graph, except the calculations can now be done in the fragment stage rather than the vertex stage. We’ll also be adding Fresnel lighting support to the graph.
Add the same properties as seen in Figure 10-19.
Add the set of nodes seen in Figure 10-20 for the diffuse light and ambient light calculations. Do not connect those nodes to a custom interpolator.
Add the set of nodes seen in Figure 10-21 for the specular light calculations. Do not connect those nodes to a custom interpolator either.
Add the nodes seen in Figure 10-22 for the final lighting calculations, except:
Replace the DiffuseLighting custom interpolator node with the output of the diffuse light node group.
Replace the SpecularLighting custom interpolator node with the output of the specular light node group.
Output the result of the final addition to Base Color on the master stack.
Now we have covered lighting objects in shaders by simulating the amount of ambient, diffuse, specular, and Fresnel lighting acting on the object. Next, we will see what functionality Unity provides out of the box to help us light objects without needing to calculate all the lighting ourselves.
Physically Based Rendering
Physically Based Rendering is exactly what it sounds like: rendering objects based on the physical properties of a surface, such as its albedo color, roughness/smoothness, and metallicity. Since 2015, with the release of Unity 5, the built-in pipeline has supported PBR through the Standard shader, and in both URP and HDRP, the Lit shader supports PBR. Let’s see the common features of a PBR shader and then create our own PBR shaders using the helper functions and macros provided by Unity.
Smoothness
Diffuse and specular light arise due to the way light reflects off a surface. Diffuse light occurs because incoming light rays get reflected in all directions due to complex interactions between the light and the surface of the object. A perfectly diffuse surface, one that reflects light equally in all directions, is called Lambertian; in fact, we have been using Lambertian reflectance as our model for diffuse light reflection in the shaders we have written. Although real-world materials don’t reflect light equally in all directions, Lambertian reflection is still a good approximation for diffuse reflection.
Smoothness is only one part of PBR lighting. Typically, PBR shaders include two modes, which provide extra control over how an object is rendered: metallic mode, which lets us model objects on a scale between fully metallic and non-metallic, and specular mode, which gives us direct control over the color of specular reflections, rather than leaving it to other physical properties of the object. Let’s see how both modes work.
It’s mostly down to personal preference which of the two workflows you choose. I prefer the metallic workflow because it is easier to design materials by looking up real-world values from lookup tables online, which list the metallic and smoothness ranges of real materials.
Metallic Mode
Metallic mode is one of two workflows commonly used with PBR. Specular mode may provide benefits if you want direct control over the specular highlight.
Specular Mode
So far, we’ve seen that some components of our lighting models rely on the normal vector of the object’s surface. The next feature we’ll look at that is commonly featured in PBR shaders will let us modify the normal vector to simulate different surface shapes.
Normal Mapping
Normal mapping is a technique that lets us simulate detailed surfaces on an otherwise low-detail mesh using a texture (called a normal map). This allows us to add imperfections and other surface elements without needing to vastly increase the polygon count of the mesh. The advantage is that small details, which would otherwise require hundreds or even thousands of additional triangles to represent, can now be replaced by a texture that takes up comparatively little graphics memory, and we can use the same low-poly model with different normal maps if we want to swap out the surface details on an object easily.
Although normal maps are not exclusive to PBR materials, they are certainly related to lighting, and they are used to mimic the physical properties of the surface, so I think it’s useful to introduce them alongside PBR materials. That said, you could create a shader that uses normal mapping with Blinn-Phong lighting if you wanted to. Normal maps have an indirect influence on the way lighting gets applied to an object. There is another kind of texture we can use to directly control the strength of ambient light on the object.
Ambient Occlusion
Now that we have seen a few textures that control the way external lights interact with the surface of the object, let’s see how objects can directly control light emission from their own surface.
Emission
To see color bleeding from an emissive object, you must have a bloom filter attached to the camera. We will cover bloom and other post-process effects in the next chapter.
You should now know about the most common components of a PBR shader and how they work in theory. Using this knowledge, let’s create a PBR shader in Unity for each pipeline in shader code and Shader Graph.
PBR in the Built-In Pipeline
In the built-in pipeline, we can use Unity’s surface shaders to help us with the lighting. Surface shaders are a feature, exclusive to the built-in pipeline, that lets us define a lighting model (or use one included with Unity) and then define the surface properties of the object (such as albedo color, smoothness, emission, and the other properties listed previously). Unity will then automatically carry out the lighting calculations for us. Let’s see how they work.
Surface shaders are exclusive to the built-in pipeline. There isn’t a direct parallel in code in any other pipeline at the time of writing.
PBR shader skeleton for the built-in pipeline
There’s a lot of unfamiliar code in this snippet, so let’s break down what’s happening. The first key difference is that the shader code is encased in a CGPROGRAM block rather than an HLSLPROGRAM block. Typically, modern Unity shaders are written entirely in HLSL, but surface shaders were designed at a time when Cg was the primary shading language in Unity, and as such, many of the built-in surface shader features are designed around it. Some of the built-in structs use the fixed data type, which doesn’t exist in HLSL. The fixed data type has at least 10 bits of fractional precision, although some hardware uses up to 32 bits, the same as a float. Otherwise, most shader syntax we’ve seen so far will work the same as in HLSL. Also, the code block is placed directly inside a SubShader rather than inside a Pass, because Unity may generate several shader passes based on the surface shader code.
Surface shader #pragma statement
Lambert – Ambient and diffuse lighting only
BlinnPhong – Ambient, diffuse, and specular lighting with the Blinn-Phong lighting model
Standard – Physically Based Rendering in metallic mode
StandardSpecular – Physically Based Rendering in specular mode
In the preceding example, the Standard lighting model was used, which means PBR lighting will automatically be applied to the object. After specifying the lighting model, we can optionally add other shader features. In this example, the fullforwardshadows option means that Unity will generate shadows for multiple lights (we will focus on shadows later in this chapter).
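As a sketch, a directive using the Standard lighting model and the fullforwardshadows option would look like this (assuming the surface function is named surf, as in the rest of this section):

```hlsl
// Use the surf function for surface outputs, the Standard (PBR) lighting
// model, and generate full shadow support for all forward-rendered lights.
#pragma surface surf Standard fullforwardshadows
```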
float3 viewDir – A vector in world space from the surface of the object to the camera
float4 screenPos – The position of a point on the surface in screen space
float3 worldPos – The position of a point on the surface in world space
UVs work slightly differently from regular shaders – to include them in Input, we must specify a float2 with the name of a texture prefixed with “uv”. For example, to include UVs that are set up to use the tiling and offset settings of the _BaseTex texture, we must include an entry in the Input struct called uv_BaseTex, which you can see in the preceding code snippet. In fact, we are going to include only this variable inside the Input struct.
If you want, you can add sets of UVs for other textures to the Input struct, but I am going to use only one set of UVs, as you’ll see. This means each texture supplied to a material using this shader should have all details lined up the same way on each texture.
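Under those assumptions, the Input struct needs only a single member:

```hlsl
// The "uv" prefix tells Unity to generate UVs using the tiling and
// offset settings of the _BaseTex texture.
struct Input
{
    float2 uv_BaseTex;
};
```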
The SurfaceOutputStandard struct
_BaseColor – This is a regular non-HDR Color property that controls the albedo color of the object.
_BaseTex – A Texture2D that also controls the albedo color.
_MetallicTex – A grayscale Texture2D that controls the metallicity of each part of the object. By default, it is white.
_MetallicStrength – A Float between 0 and 1 that acts as a multiplier to the Metallic Map values. By default, it is 0.
_Smoothness – A Float between 0 and 1 that defines how smooth the object is. By default, it is 0.5.
_NormalTex – A Texture2D that can be used to add normal mapping to the shader. The default Mode option should be Normal Map, so that if no map is chosen, a flat normal map is used.
_NormalStrength – A Float that acts as a modifier to the values sampled from the normal map. The higher the value is, the more strongly the normal map influences the lighting. By default, it is 1.
USE_EMISSION_ON – This Boolean keyword property can be used to toggle emission on and off. We will use a Float property called _EmissionOn to control it.
_EmissionTex – A Texture2D that controls whether any portions of the object glow, even in low-light conditions. By default, this is a white texture.
_EmissionColor – A Color that acts as a multiplier to the values used for the emission map. It should be HDR-enabled so that it can use an extended range of color values. By default, it is black, corresponding to no emissive light.
_AOTex – A Texture2D that is used to dim parts of the mesh that are obscured by small details. By default, this should be white, corresponding to full ambient lighting for all parts of the mesh.
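Putting the preceding list together, a Properties block might look like the following sketch. The property names match the list, but the display names are my own choice, and your defaults may differ:

```hlsl
Properties
{
    _BaseColor ("Base Color", Color) = (1, 1, 1, 1)
    _BaseTex ("Base Texture", 2D) = "white" {}
    _MetallicTex ("Metallic Map", 2D) = "white" {}
    _MetallicStrength ("Metallic Strength", Range(0, 1)) = 0
    _Smoothness ("Smoothness", Range(0, 1)) = 0.5
    // The "bump" default gives a flat normal map if none is assigned.
    [Normal] _NormalTex ("Normal Map", 2D) = "bump" {}
    _NormalStrength ("Normal Strength", Float) = 1
    // The Toggle attribute ties this Float property to the keyword.
    [Toggle(USE_EMISSION_ON)] _EmissionOn ("Use Emission?", Float) = 0
    _EmissionTex ("Emission Map", 2D) = "white" {}
    [HDR] _EmissionColor ("Emission Color", Color) = (0, 0, 0, 0)
    _AOTex ("Ambient Occlusion Map", 2D) = "white" {}
}
```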
The Properties block in the built-in pipeline
Declaring properties inside the CGPROGRAM block for a surface shader
Declaring the USE_EMISSION_ON keyword
Now, whenever we toggle the Boolean value of the _EmissionOn property on the material, Unity will turn the USE_EMISSION_ON keyword on or off according to the property’s value. We can use that to control whether emission is used in the shader. Speaking of which, we can now turn our attention to the surf function.
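The declaration itself is a single directive inside the CGPROGRAM block. As a sketch, assuming the [Toggle(USE_EMISSION_ON)] attribute is attached to _EmissionOn in the Properties block:

```hlsl
// Compile two variants of the shader: one with emission, one without.
// Unity picks the variant based on whether the keyword is enabled.
#pragma shader_feature_local USE_EMISSION_ON
```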
Albedo, Metallic, and Smoothness outputs in a surface shader
Normal mapping in a surface shader
Emission output in a surface shader
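Bringing those outputs together, a surf function covering albedo, metallic, smoothness, normals, emission, and occlusion could look like the following sketch. It assumes the property names listed earlier and the single uv_BaseTex UV set; the exact listing in your shader may differ:

```hlsl
void surf(Input IN, inout SurfaceOutputStandard o)
{
    // Albedo comes from the base texture tinted by the base color.
    float4 albedo = tex2D(_BaseTex, IN.uv_BaseTex) * _BaseColor;
    o.Albedo = albedo.rgb;

    // Metallic is sampled from a grayscale map and scaled by a strength value.
    o.Metallic = tex2D(_MetallicTex, IN.uv_BaseTex).r * _MetallicStrength;
    o.Smoothness = _Smoothness;

    // Unpack the tangent-space normal and scale its strength.
    float3 normal = UnpackNormal(tex2D(_NormalTex, IN.uv_BaseTex));
    normal.xy *= _NormalStrength;
    o.Normal = normalize(normal);

#ifdef USE_EMISSION_ON
    o.Emission = tex2D(_EmissionTex, IN.uv_BaseTex).rgb * _EmissionColor.rgb;
#else
    o.Emission = 0;
#endif

    // Occlusion darkens parts of the mesh obscured by small details.
    o.Occlusion = tex2D(_AOTex, IN.uv_BaseTex).r;
}
```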
The surface shader is now complete, so you will now be able to attach it to a material and see PBR shading on your object. Try changing the material settings to see how different surface properties influence the appearance of the object. Now let’s see how PBR works in URP.
PBR in URP
PBR shader skeleton for URP
Properties in HLSL in URP
Inside the Lighting.hlsl file, which is included in this shader, Unity provides a function called UniversalFragmentPBR that calculates and applies PBR lighting for us.
The function takes two structs as input: InputData and SurfaceData, which themselves are included in the InputData.hlsl and SurfaceData.hlsl files, respectively.
We must include SurfaceData.hlsl ourselves, but InputData.hlsl is already included in Core.hlsl.
The InputData struct can primarily be filled in with variables passed from the vertex shader. This means the v2f struct will contain many entries.
The SurfaceData struct can be filled with similar data to the surface shader outputs we wrote for the built-in pipeline, such as albedo, emission, metallic, and so on. This should be calculated in the fragment shader.
Shader features are turned on or off with #pragma keyword directives. We will include one of our own to control whether emission is active, plus a few that are required by Unity to add certain functionality to the shader.
The USE_EMISSION_ON keyword is one we’re adding to control whether emission should be used in the shader or not.
The _MAIN_LIGHT_SHADOWS, _MAIN_LIGHT_SHADOWS_CASCADE, and _MAIN_LIGHT_SHADOWS_SCREEN keywords all control how shadows from the main light interact with the object.
The _ADDITIONAL_LIGHTS_VERTEX and _ADDITIONAL_LIGHTS keywords control how light from all lights except the primary light gets applied to the object.
The _ADDITIONAL_LIGHT_SHADOWS keyword controls how shadows from the additional lights get applied to the object.
_REFLECTION_PROBE_BLENDING and _REFLECTION_PROBE_BOX_PROJECTION allow Unity to blend reflection probes if the object is between two probes.
_SHADOWS_SOFT can be used to soften the edges of shadows. Otherwise, the object will have a hard border between shadowed and lit regions.
The _SCREEN_SPACE_OCCLUSION keyword enables screen-space ambient occlusion. This is separate from our occlusion texture.
LIGHTMAP_SHADOW_MIXING, SHADOWS_SHADOWMASK, DIRLIGHTMAP_COMBINED, LIGHTMAP_ON, and DYNAMICLIGHTMAP_ON all relate to lightmapping.
Lighting keywords in URP
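Declared as pragma directives, the keyword set described above looks roughly like this. The exact multi_compile versus multi_compile_fragment split varies between URP versions, so treat this as a sketch:

```hlsl
// Our own keyword for toggling emission.
#pragma shader_feature_local USE_EMISSION_ON

// Main light shadows, shadow cascades, and screen-space shadows.
#pragma multi_compile _ _MAIN_LIGHT_SHADOWS _MAIN_LIGHT_SHADOWS_CASCADE _MAIN_LIGHT_SHADOWS_SCREEN

// Per-vertex or per-pixel additional lights, plus their shadows.
#pragma multi_compile _ _ADDITIONAL_LIGHTS_VERTEX _ADDITIONAL_LIGHTS
#pragma multi_compile_fragment _ _ADDITIONAL_LIGHT_SHADOWS

// Reflection probe blending and box projection.
#pragma multi_compile_fragment _ _REFLECTION_PROBE_BLENDING
#pragma multi_compile_fragment _ _REFLECTION_PROBE_BOX_PROJECTION

// Soft shadow edges and screen-space ambient occlusion.
#pragma multi_compile_fragment _ _SHADOWS_SOFT
#pragma multi_compile_fragment _ _SCREEN_SPACE_OCCLUSION

// Lightmapping-related keywords.
#pragma multi_compile _ LIGHTMAP_SHADOW_MIXING
#pragma multi_compile _ SHADOWS_SHADOWMASK
#pragma multi_compile _ DIRLIGHTMAP_COMBINED
#pragma multi_compile _ LIGHTMAP_ON
#pragma multi_compile _ DYNAMICLIGHTMAP_ON
```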
When you see a giant pile of unfamiliar keywords like this, it’s very easy to feel overwhelmed. When this happens, I try to bear in mind that each individual keyword is digestible, so rather than adding them all to the shader at the same time, try adding one or two and see how they impact the final shader. Then comment those out and add one or two others to see how they work. It’s a lot easier to understand what each keyword does when you see individual changes like that.
The appdata struct
The v2f struct
As we have seen previously, the TRANSFORM_TEX macro applies tiling and offset parameters of a given texture to the UVs.
The GetVertexPositionInputs and GetVertexNormalInputs functions convert the position, normal, and tangent vectors to several different spaces for us.
The GetWorldSpaceNormalizeViewDir function, which we have also seen before, computes the normalized world-space view direction from a world-space position.
The OUTPUT_LIGHTMAP_UV macro applies tiling and offset to the lightmap UVs using the values in unity_LightmapST. Think of it as TRANSFORM_TEX but for lightmaps instead.
The OUTPUT_SH macro sets up spherical harmonics, which are used for ambient lighting evaluation.
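Combining those helpers, the vert function could look like the following sketch. The member names used for appdata and v2f (positionOS, normalOS, tangentOS, lightmapUV, vertexSH, and so on) are assumptions, so adapt them to match your own structs:

```hlsl
v2f vert(appdata v)
{
    v2f o;

    // Apply _BaseTex's tiling and offset to the UVs.
    o.uv = TRANSFORM_TEX(v.uv, _BaseTex);

    // Convert positions and normals/tangents to the spaces we need.
    VertexPositionInputs posInputs = GetVertexPositionInputs(v.positionOS.xyz);
    VertexNormalInputs normInputs = GetVertexNormalInputs(v.normalOS, v.tangentOS);

    o.positionCS = posInputs.positionCS;
    o.positionWS = posInputs.positionWS;
    o.normalWS = normInputs.normalWS;
    // Preserve the tangent's w component, which stores the bitangent sign.
    o.tangentWS = float4(normInputs.tangentWS, v.tangentOS.w);
    o.viewDirWS = GetWorldSpaceNormalizeViewDir(posInputs.positionWS);

    // Lightmap UVs (like TRANSFORM_TEX, but for lightmaps) and
    // spherical harmonics for ambient lighting.
    OUTPUT_LIGHTMAP_UV(v.lightmapUV, unity_LightmapST, o.lightmapUV);
    OUTPUT_SH(o.normalWS, o.vertexSH);

    return o;
}
```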
The vert function
The SurfaceData struct
Zeroing out the SurfaceData struct members
The full createSurfaceData function
The important members of the InputData struct
As with the SurfaceData struct, we don’t need to populate every member of this struct, but it should give you a good idea of the features that you could add to the shader if you want. I’ve included a createInputData function to populate the struct. It takes a v2f as a parameter along with the tangent-space normal vector from SurfaceData. The most intensive part of this function is calculating the tangentToWorld matrix, which requires calculating the bitangent vector. Recall that the bitangent vector is perpendicular to both the normal and tangent vectors, and we can calculate it by taking the cross product of those two existing vectors.
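As a sketch of that function, assuming the v2f member names used above (your listing may populate more or fewer members):

```hlsl
void createInputData(v2f i, float3 normalTS, out InputData inputData)
{
    // Zero out every member so unused fields hold sensible defaults.
    inputData = (InputData)0;

    inputData.positionWS = i.positionWS;

    // The bitangent is perpendicular to both the normal and the tangent,
    // so it is their cross product, flipped by the sign in tangentWS.w.
    float3 bitangent = i.tangentWS.w * cross(i.normalWS, i.tangentWS.xyz);
    inputData.tangentToWorld = float3x3(i.tangentWS.xyz, bitangent, i.normalWS);

    // Bend the interpolated normal using the tangent-space normal map sample.
    inputData.normalWS = TransformTangentToWorld(normalTS, inputData.tangentToWorld);
    inputData.normalWS = NormalizeNormalPerPixel(inputData.normalWS);

    inputData.viewDirectionWS = SafeNormalize(i.viewDirWS);
    inputData.shadowCoord = TransformWorldToShadowCoord(inputData.positionWS);

    // Baked lighting from lightmaps or spherical harmonics.
    inputData.bakedGI = SAMPLE_GI(i.lightmapUV, i.vertexSH, inputData.normalWS);
    inputData.normalizedScreenSpaceUV = GetNormalizedScreenSpaceUV(i.positionCS);
}
```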
The full createInputData function
The frag function
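With the two helper functions in place, the fragment shader itself stays short. As a sketch, assuming createSurfaceData fills a SurfaceData struct from the v2f in the way described earlier:

```hlsl
float4 frag(v2f i) : SV_Target
{
    // Gather the surface properties (albedo, metallic, emission, etc.).
    SurfaceData surfaceData;
    createSurfaceData(i, surfaceData);

    // Gather the positional/vector data, using the tangent-space normal
    // from the surface data to bend the world-space normal.
    InputData inputData;
    createInputData(i, surfaceData.normalTS, inputData);

    // Unity performs the full PBR lighting calculation for us.
    return UniversalFragmentPBR(inputData, surfaceData);
}
```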
With that, the PBR shader for URP is now complete, and you will see PBR lighting on any objects whose material uses this shader. As with the built-in pipeline version, try changing the properties of the material to see how the appearance of each object changes. Now that we have covered the PBR shader in shader code, let’s move on to Shader Graph.
PBR in Shader Graph
Now that all these properties have been connected, you will see PBR lighting on objects. As with the code versions of this shader effect, try tweaking the material properties to see how the appearance of the object changes.
Thankfully, each of these PBR shaders applies shadows from other objects when calculating the amount of lighting on the object. However, these shaders are not yet able to cast shadows themselves. In the next section, we’ll see how to add shadow-casting support to our shaders.
Shadow Casting
Shadow Casting in the Built-In Pipeline
Adding a shadow caster pass to an existing shader
A shadow-casting pass in the built-in pipeline
The ShadowCaster LightMode tells Unity that this pass is exclusively to be used as a shadow-casting pass.
The vertShadowCaster function works like any other vertex shader and deals with the object position and texture offsets.
The fragShadowCaster function renders the object to the shadow map. Other objects then use the shadow map to determine whether they should receive shadows.
The multi_compile_shadowcaster directive sets up the macros that are required to make shadow casting work.
The multi_compile_instancing directive makes this pass work with GPU instancing.
The UnityStandardShadow.cginc include file is where most of these features are included.
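Assembling those pieces, the pass could look like the following sketch, which mirrors the shadow caster pass of Unity’s own Standard shader (exact contents may vary between Unity versions):

```hlsl
Pass
{
    Name "ShadowCaster"
    Tags { "LightMode" = "ShadowCaster" }

    ZWrite On
    ZTest LEqual

    CGPROGRAM
    // Set up the macros required for shadow casting and GPU instancing.
    #pragma multi_compile_shadowcaster
    #pragma multi_compile_instancing

    // The vertex and fragment functions come from this include file.
    #pragma vertex vertShadowCaster
    #pragma fragment fragShadowCaster

    #include "UnityStandardShadow.cginc"
    ENDCG
}
```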
Once you’ve added this pass to your shader, your objects should start to cast shadows, as seen in Figure 10-42. Now let’s see how the process of adding real-time shadows differs in URP.
Shadow Casting in URP
A shadow-casting pass in URP
The ShadowCaster LightMode tells Unity that this is a shadow-casting pass so that it can be run at the correct point in the graphics pipeline.
The ShadowPassVertex function, the vertex shader, positions objects correctly on-screen and applies shadow biasing to ensure shadows render correctly.
The ShadowPassFragment function, the fragment shader, renders values into the shadow map.
The ShadowCasterPass.hlsl file contains these two functions and other helper functions used within. ShadowCasterPass.hlsl requires some of the contents of Common.hlsl, CommonMaterial.hlsl, and SurfaceInput.hlsl.
The multi_compile_instancing directive makes this pass work with GPU instancing.
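Put together, the URP shadow caster pass could look like the following sketch. The include paths match current URP package layouts, but they can shift between versions, and the exact set of includes depends on how your shader declares its material properties:

```hlsl
Pass
{
    Name "ShadowCaster"
    Tags { "LightMode" = "ShadowCaster" }

    ZWrite On
    ZTest LEqual

    HLSLPROGRAM
    // The vertex and fragment functions are defined in ShadowCasterPass.hlsl.
    #pragma vertex ShadowPassVertex
    #pragma fragment ShadowPassFragment
    #pragma multi_compile_instancing

    // Dependencies required by ShadowCasterPass.hlsl.
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/CommonMaterial.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/SurfaceInput.hlsl"
    #include "Packages/com.unity.render-pipelines.universal/Shaders/ShadowCasterPass.hlsl"
    ENDHLSL
}
```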
Once the shadow caster pass has been added to your shader, you will see shadows like those in Figure 10-42.
Shadow Casting in Shader Graph
It’s as easy as that – just untick a box! With that, you should know how to enable and disable shadow casting in each of Unity’s pipelines and in Shader Graph.
Summary
The total amount of light falling on a surface can be modeled as the sum of individual types of light, such as ambient, diffuse, specular, and Fresnel light.
Ambient light is used to approximate global illumination from the environment.
Diffuse light is proportional to the cosine of the angle between the normal vector on the surface of an object and the light vector, which we obtain with their dot product.
Specular light depends on the angle between the reflected light vector and the view vector, with the dot product raised to a power to control the size of the highlight. The Blinn approximation removes the costly reflection step by using the dot product of the half vector and the normal vector instead. The half vector is halfway between the view vector and the light vector.
Flat shading evaluates light once per triangle.
Gouraud shading evaluates light once per vertex and interpolates the result across fragments.
Phong shading interpolates the normal vector across fragments and calculates lighting per fragment for more realistic results.
Physically Based Rendering uses the physical properties of a surface – its albedo color, smoothness, metallicity, specularity, normals, and occlusion – in the lighting calculation.
Shadow casting can be enabled on objects by using Unity’s built-in code or by using the tick boxes provided in Shader Graph.