© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
D. Ilett, Building Quality Shaders for Unity®, https://doi.org/10.1007/978-1-4842-8652-4_3

3. Your Very First Shader

Daniel Ilett, Coventry, UK

It’s finally time to make our first shader! If you’ve ever followed a tutorial to learn a new programming language, then your first program was probably logging “Hello, World!” to the console or the screen. It’s a bit harder to do that with shaders, so we’ll settle for displaying a mesh and applying a basic color to it. In this chapter, I will show you how to set up your scene and explain which components need to be attached to objects. Then, we will see how shaders in Unity work, and we will write our very first vertex and fragment shader functions. We will also explore the differences between shaders in each of Unity’s render pipelines. Finally, I will cover the basic shader syntax that you will be seeing throughout the book.

Shader code isn’t the only way to write shaders in Unity. If you prefer to avoid programming, then the next chapter will introduce Shader Graph, Unity’s visual shader editor. However, you will still find the first part of this chapter useful. To prepare for writing our very first shader, let’s set up a new Unity project.

Project Setup

Before we can jump into writing our first shader, we have an important decision to make: which render pipeline will we use? I briefly described these in Chapter 1, but to recap, there are three pipelines:
  • The built-in render pipeline was Unity’s only rendering code up until 2017. Most older learning resources available regarding shaders use the built-in pipeline, although some are very outdated.

  • The Universal Render Pipeline, URP, is a more “lightweight” pipeline aimed at lower-end machines and cross-platform development. It used to be named the “Lightweight Render Pipeline,” or LWRP.

  • The High-Definition Render Pipeline, HDRP, is intended for games that require high-fidelity rendering and lighting quality. A restricted number of platforms support HDRP.

If you don’t know much about each render pipeline, then it can be very tricky to make this decision up front. After all, while it is possible to switch from one pipeline to another mid-development, it can be difficult to untangle your work from the original pipeline and rework it into the new one. With that in mind, here are a few use cases that I hope will assist you in making your choice:
  • If you’re just starting out with Unity and are here to learn shaders without having a specific project in mind, I recommend starting with URP. Unity intends to make URP the default pipeline for new projects in the future, so an increasing proportion of learning resources will move away from the built-in pipeline and toward URP.

  • If you are working on a multi-platform project targeting high-end consoles (e.g., PS5, PS4, Xbox One, Xbox Series X/S) and PC, then you can choose any pipeline, subject to other requirements. These are the platforms that currently support HDRP, which is the best choice if you want to use cutting-edge graphics.

  • If you plan to target mobile, web, or other consoles (e.g., Nintendo Switch), then do not use HDRP.

  • If you plan only to use shader code and not Shader Graph, then I recommend not using HDRP. Although code-based shaders are possible in HDRP, learning resources are lacking compared with the other two pipelines; Unity themselves recommend using Shader Graph when working in HDRP. That’s what I’ll be doing throughout the book!

  • If you are left with a choice between the built-in pipeline and URP, it can be difficult to fall on either side of the fence. I still recommend URP because it is going to receive the most active development in the future, but if you do pick the built-in pipeline, then rest assured almost all the book’s examples will still work!

Once you have picked a pipeline, let’s create a new project via the Unity Hub.

Creating a Project

When you open the Unity Hub, you will see a screen like the one in Figure 3-1, which lists all your existing Unity projects if you have any. I’ve blurred out my project names – sorry if you wanted to snoop on what I’ve been working on!


Figure 3-1

The Unity Hub, which opens on the Projects tab

Click the New project button in the top-right corner of this window. You’ll see another window where we are presented with many templates, as shown in Figure 3-2. I’m going to show you how to build shaders primarily in 3D, so let’s pick the respective 3D option for the pipeline you will be using:
  • If you’re using the built-in pipeline, pick the template simply called “3D.”

  • In URP, pick the template called “3D Sample Scene (URP).” This template contains an example scene with a few assets already set up for you.

  • Similarly, in HDRP, pick “3D Sample Scene (HDRP).”


Figure 3-2

The New project screen of the Unity Hub

Type a project name and save location into the fields on the right-hand side of the screen and click the Create project button. Unity will create a new folder and populate it with the template files. Then the Unity Editor will open. We can now set up a scene ready to create and test our shaders.

Setting Up the Scene

If we are going to start writing shaders, we’ll need to attach them to an object in the scene to see what it looks like at runtime. I usually add a humble sphere to test most of my 3D shaders, so let’s add one now by following these steps:
  • Add the Unity primitive sphere to your scene, which you can do via the toolbar through GameObject ➤ 3D Object ➤ Sphere. You may find it useful at times to use other meshes, but a sphere is perfect for quickly testing most effects.

  • We use materials in Unity to attach shaders to objects. Materials are contained in the Assets folder alongside other assets like textures, scripts, and meshes. Create one by right-clicking in the Project View and selecting Create ➤ Material. Name it something like “HelloWorld”, since we’ll be making our very first shader.
    • Unity automatically uses a default shader: the Standard shader in the built-in pipeline and the Lit shader in URP and HDRP.

  • Drag the material onto the sphere mesh you added. The appearance of the sphere may change slightly when you do so.

Note

At this point, if you are using HDRP or want to only use Shader Graph, skip to the next chapter to make your first shader. Writing shaders with code is possible in HDRP, but it is an order of magnitude more difficult due to the lack of learning resources and the increased complexity of HDRP.

  • Create a new shader file by right-clicking in the Project View and choosing Create ➤ Shader ➤ Unlit Shader, which copies a template shader into the new file with a .shader file extension. Name it “HelloWorld.shader” since it’s our first shader. You don’t need to type the extension.

  • Select the HelloWorld material we created previously, and in the Shader drop-down at the top of the Inspector window, find “Unlit/HelloWorld”. Figure 3-3 shows an example material in Unity.


Figure 3-3

A material created in Unity. From top to bottom, this window features a drop-down to pick the shader used by the material, a section for shader properties that we can tweak, and a preview window at the bottom

Tip

You can change how the preview window on a material behaves using the row of buttons just above the preview. Click the play button to animate the material over time. Use the next button along to change to a different preview mesh. Use the button with yellow dots on it to tweak how many light sources are simulated. Use the drop-down with the half-blue sphere icon to specify a reflection probe for the preview. And use the menu on the right-hand side to dock and undock the preview window.

The template shader is already a completed shader, but it’s no use to start off with a completed shader since we’re here to learn. Open the shader file by double-clicking it in the Project View and delete the file contents – any material using the shader will turn magenta, which happens whenever the shader fails to compile properly. Now that our scene is set up, we can focus on writing the shader file. We will start by discussing shader syntax.

Note

If you installed Unity with the default settings, you most likely have Visual Studio or Visual Studio Code installed. Therefore, when you double-click a shader file, Unity will open it in one of those. You can customize which editor is used via Preferences ➤ External Tools.

Writing a Shader in Unity

There are several popular graphics APIs (Application Programming Interfaces) that are used by programs, such as Unity, to handle graphics for us. Each graphics API has a corresponding shading language:
  • The OpenGL API, a popular cross-platform graphics library, uses a shading language called GLSL (for OpenGL Shading Language).

  • DirectX, which is designed for use on Microsoft’s platforms, uses HLSL (High-Level Shading Language).

  • Cg, a deprecated shading language developed by Nvidia, uses the common feature set of GLSL and HLSL and can cross-compile to each depending on the target hardware.

The shading language is what you write shader code in. A game or game engine will compile shaders written in one of those languages to run on the GPU. Although it is possible to write shaders using any one of GLSL, HLSL, or Cg, modern Unity shaders are written in HLSL.

Note

In the past, Unity shaders used Cg as the primary shading language. Over time, the default has switched to HLSL. Unity will automatically cross-compile your shader code for the target platform.

There is an extra layer to it in Unity. Unity uses a proprietary language called ShaderLab, which acts as a wrapper around the shading languages I just mentioned. All code-based shaders in Unity are written in ShaderLab syntax, and it achieves several aims at once:
  • ShaderLab provides ways to communicate between the Unity Editor, C# scripts, and the underlying shader language.

  • It provides an easy way to override common shader settings. In other game engines, you might need to delve into settings windows or write graphics API code to change blend, clipping, or culling settings, but in Unity, we can write those commands directly in ShaderLab.

  • ShaderLab provides a cascading system that allows us to write several shaders in the same file, and Unity will pick the first compatible one to run. This means we can write shaders for different hardware or render pipelines, and the one that matches the user’s hardware and your project’s chosen render pipeline will get picked.

It’ll become a lot easier to understand how this all works with a practical example, so let’s start writing some ShaderLab.

Writing ShaderLab Code

In this example, we will write a shader to display an object with a single color, and we’ll add the option to change that color from within the Unity Editor. Most of the code required for this shader is the same between the built-in and Universal render pipelines, but there are a few differences, which I will explain when we reach them.

Note

When there is a difference between the code required for each pipeline, I will present you with two code blocks labeled with the pipeline they are intended for. Choose only the one for your pipeline.

Open the HelloWorld.shader file. Inside the file, we’ll start by naming the shader using the Shader keyword. This name will appear when viewing any material in the Inspector if you use the Shader drop-down at the top of the material (see Figure 3-3). After declaring the name, the rest of the shader is enclosed within a pair of curly braces, so we will put any subsequent code inside these braces.

Tip

You can include folders inside the name – for example, naming the shader “Examples/HelloWorld” places the shader under the folder “Examples” alongside any other shaders that use that folder in their path.

Shader "Examples/HelloWorld"
{
}
Listing 3-1

Beginning a shader file

Inside the braces, we will declare a list of material properties. These can be thought of as the shader’s variables, as this list of properties will appear in the Unity Editor on any selected material that uses this shader. Properties are powerful, because they let us create several materials that use the same shader, but with different variable values. There are several types of property we will see throughout the book, but to start with, we will add a single Color property. The syntax is similar to declaring the Shader – we will write the Properties keyword, followed by a set of curly braces. The syntax for the properties themselves looks a bit strange at first.
Shader "Examples/HelloWorld"
{
      Properties
      {
            _BaseColor("Base Color", Color) = (1,1,1,1)
      }
}
Listing 3-2

Declaring properties in ShaderLab

The line of code declaring the property has lots of parts to it, so let’s break down the weird syntax:
  • Conventionally, shader property names start with an underscore, and we capitalize the start of each word. In this case, _BaseColor is the computer-readable name of the property – the name we will use to refer to it in shader code later.

  • Inside the parentheses, we first specify a human-readable name in double quotes, which Unity uses in the Inspector. In this case, the name we chose is “Base Color”, which closely mirrors the code name anyway.

  • Next comes the type of the variable, which is Color. Properties can also have other types, such as 2D (for textures), Cube (for cubemaps), Float, and so on – we will see all of these later, and there’s a brief sketch of a couple of them after this list.

  • Finally, we give the property a default value after the equals sign, which is used when you create a new material with this shader.
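
For a taste of those other property types, here is a quick sketch of how they might be declared – our shader only needs the color, so you don’t need to add these:

_MainTex("Main Texture", 2D) = "white" {}
_Strength("Strength", Range(0, 1)) = 0.5

The "white" default tells Unity to substitute a plain white texture until you assign one, and Range gives the Inspector a slider between the two bounds.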

Note

It seems strange to have two different names for each property, but it’s useful to have both types of names like this because we might want to use certain technical names within the code, but another person working with this shader to create materials in the Inspector might not understand (or need to know) what the code name means. A human-readable name makes it clearer what the property is for.

Colors are made up of four components: red, green, blue, and alpha/transparency. In this example, the color is opaque white by default since all four components have a value of 1. You may have encountered different standards for color values before, so here’s how Unity deals with them:
  • Colors are often stored as unsigned (positive) integers between 0 and 255. This requires 8 bits of storage space. 0 means no color, and 255 means full color.

  • Colors are made up of a mix of red, green, and blue. Plus we have an “alpha” value that represents transparency. Therefore, we use four channels of 8 bits each for colors.

  • In Unity, especially in shaders, we instead use floating-point values between 0 and 1 to represent each color channel. A floating-point number has a fractional part. A color value of (1, 1, 1, 1) means all four channels use the maximum value, which appears as fully opaque white.

  • We use this floating-point representation in shaders for higher precision. All you need to remember is that a regular color value is between 0 and 1 (see the quick conversion sketch after this list).
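
If you’re more used to 0–255 values, dividing each channel by 255 gives the equivalent 0–1 color – a throwaway sketch, not something this shader needs:

// 8-bit orange (255, 128, 0, 255) expressed as 0-1 floats.
float4 orange = float4(255.0f / 255.0f, 128.0f / 255.0f, 0.0f / 255.0f, 255.0f / 255.0f);
// orange = (1.0f, 0.502f, 0.0f, 1.0f), approximately.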

Now that we’ve dealt with the Properties block, we will add a SubShader.

Adding a SubShader

With ShaderLab, we can add several SubShader blocks with different features to ensure that this shader will work on different kinds of hardware or different render pipelines, but in this example, we will only add one.
Shader "Examples/HelloWorld"
{
      Properties { ... }
      SubShader
      {
      }
}
Listing 3-3

Adding a SubShader in ShaderLab

If you define multiple SubShader blocks, Unity picks the first one that works on your combination of hardware and render pipeline. When your hardware is incompatible with every SubShader, the shader will fail to compile, and Unity will display the error material, which is magenta.

Note

Always put the SubShader with the highest requirements first. There doesn’t seem to be a hard limit on the number of SubShaders you can include in one file, but you’ll find it difficult to maintain the file if you add too many.

There’s also the Fallback system, which you can use to specify the name of an alternative shader to use if every SubShader in this shader file is incompatible – Unity carries out the same process of checking every SubShader in that file (and, if none of those work, every Fallback too) and picks the first that works. The Fallback should be specified after the closing brace of the final SubShader block. If you decide you don’t want to use a Fallback, you can omit the keyword or explicitly write Fallback Off.
Shader "Examples/HelloWorld"
{
      Properties { ... }
      SubShader { ... }
      SubShader { ... }
      Fallback "Unlit/Color"
}
Listing 3-4

Specifying the Unlit/Color shader as a fallback

Inside the SubShader, we will start to add settings that control how the shader will operate – there are a lot of possible options, but we’ll add only one for now: Tags.

SubShader Tags

The Tags block lets us specify whether the shader is opaque or transparent, set whether this object is rendered after others, and specify which render pipeline this SubShader works with. Each tag is a key-value pair of two strings, where the first string is the name of the tag and the second string is its value. Let’s add a RenderType tag to specify we want to use opaque rendering for this object.

Note

We can add code comments in ShaderLab in a similar manner to C-style languages: single-line comments start with a double forward slash //, and multiline comments are enclosed between /* and */.

SubShader
{
      Tags
      {
            // Render alongside other opaque objects.
            "RenderType" = "Opaque"
      }
}
Listing 3-5

Adding Tags inside a SubShader in ShaderLab

Inside the Tags block, we can also specify the Queue to determine when this object gets drawn. Earlier, I gave a simplified explanation of how Unity draws objects: all opaque objects first and then all transparent objects. It’s a bit more in-depth than that. The Queue is an integer value, where lower values get rendered first. There are a few preset values:
  • Background = 1000

  • Geometry = 2000

  • AlphaTest = 2450

  • Transparent = 3000

  • Overlay = 4000

If you would like to use a value other than these presets, you can add to or subtract from them. To set a Queue value of 1500, we can say Background+500 or Geometry-500. Using these default values, you can see that any objects in the Background, Geometry, or AlphaTest queue get rendered before anything in the Transparent queue. For opaque objects like this one, we usually stick with Geometry, so we will insert the following line inside the Tags block.
Tags
{
      "RenderType" = "Opaque"
      "Queue" = "Geometry"
}
Listing 3-6

Setting the rendering queue in the Tags block
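
As an aside, the Queue tag accepts the offset arithmetic described above directly in its value – a sketch only, since our shader sticks with plain Geometry:

"Queue" = "Geometry+1"   // Drawn just after the default opaque geometry.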

The last tag I want to add for now is the RenderPipeline tag. This tag can be used to restrict a SubShader to a specific pipeline, which is extremely useful if you’re using features or syntax exclusive to one pipeline. You can even include multiple SubShaders in the file, each one supporting a different pipeline. Here are the values you should use:
  • For URP, the tag value is “UniversalPipeline”.

  • In HDRP, the tag value is “HDRenderPipeline”.

  • In the built-in pipeline, there is no corresponding tag value. Place any SubShader blocks for the built-in pipeline at the bottom of the list.

With that in mind, if you are using URP, we’ll add the following tag to our shader. This is the first pipeline-dependent piece of code we’re adding. In the built-in pipeline, you shouldn’t add this.
Tags
{
      "RenderType" = "Opaque"
      "Queue" = "Geometry"
      "RenderPipeline" = "UniversalPipeline"
}
Listing 3-7

Adding a RenderPipeline tag in the Tags block in URP

With the Tags block out of the way, the last bit of ShaderLab we need to add is a Pass block.

Adding a Pass

A Pass is where we add the “proper” shader code and start making things appear on-screen. A pass is one complete cycle of rendering an object; a SubShader can contain multiple Pass blocks, and if there is more than one, then Unity will run all of them from top to bottom.
SubShader
{
      Tags { ... }
      Pass
      {
      }
}
Listing 3-8

Creating a shader pass inside the SubShader

It’s also possible to add a second Tags block inside a Pass block. The most common reason for doing so is to label passes with a LightMode tag, which tells Unity what the pass will be used for. We don’t always need to add one, although when I’m working in URP, I like to explicitly add one to each pass because URP only allows you to add one pass with each valid LightMode tag. We’ll explore those tags in later sections. For now, if you are using URP, we will add a LightMode tag called UniversalForward, which is used for “standard” geometry rendering with the Forward Renderer. In the built-in pipeline, don’t worry about adding a LightMode tag for now.
Pass
{
      Tags
      {
            "LightMode" = "UniversalForward"
      }
}
Listing 3-9

Using the UniversalForward LightMode tag in URP

Inside the Pass, we will also specify which shading language we are using. In the past, Unity used the Cg language for its shaders, but that language has since been discontinued, and Unity shaders now use HLSL (although it is still possible to write GLSL shaders). I’m mentioning this here because we are going to use two enclosing keywords to wrap our shader code – HLSLPROGRAM and ENDHLSL.

Note

You might find tutorials online that still use the Cg language, which requires code to be enclosed in CGPROGRAM and ENDCG. Most of the syntax is identical between Cg and HLSL, but we’re going to exclusively do things the modern way in HLSL.

SubShader
{
      Tags { ... }
      Pass
      {
            HLSLPROGRAM
                  // HLSL code goes in here.
            ENDHLSL
      }
}
Listing 3-10

Specifying the shading language

We’re finally ready to write some HLSL code. How exciting! From this point, all code will be written between the HLSLPROGRAM and ENDHLSL keywords. We’ll no longer be writing in Unity’s proprietary ShaderLab language and will instead be writing in the HLSL shading language. Next, let’s do some setup for our shader.

Pragma Directives and Includes

We’re going to write vertex and fragment functions to determine what the shader does. I usually name them vert and frag, respectively. These are just regular HLSL functions that we can name however we want, so to tell Unity which functions are the vertex and fragment shaders, respectively, we use special preprocessing directives like the following.
HLSLPROGRAM
      #pragma vertex vert
      #pragma fragment frag
ENDHLSL
Listing 3-11

#pragma statements for declaring vertex and fragment functions

#pragma statements pop up quite often when writing shaders. We use them to define shader functions, like Listing 3-11, as well as to compile shaders for certain platforms or require certain hardware features. We also use a different preprocessor statement, #include, to include other shader files inside this one. It’s in the name really! Quite helpfully, Unity provides a large number of shader include files containing useful functions, matrices, and macros that we frequently need. The location of these files differs depending on the pipeline you’re using:
  • In the built-in pipeline, it’s not easy to access these within the engine directly. They can be found at [Unity root installation folder]/Editor/Data/CGIncludes.
    • The most important and frequently used file is UnityCG.cginc. Don’t let the cginc file extension confuse you – it’s still compatible with HLSL.

  • In URP and HDRP, include files can be accessed in-Editor. In the Project View, scroll down to the Packages section and find the following folders:
    • Core RP Library/ShaderLibrary contains core shader files common to both pipelines.

    • Universal RP/ShaderLibrary contains URP’s shader files.

    • High Definition RP/Runtime/Material contains HDRP’s shader files in a series of subfolders.

In each shader, we will include a standard library helper file containing the most useful macros and functions that are key to writing shaders. In the built-in pipeline, we must include the UnityCG.cginc file I mentioned, and in URP, we’ll include the Core.hlsl file from the URP shader library. Pick the following code that corresponds to your pipeline.
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
Listing 3-12

Including Unity’s standard shader library in the built-in pipeline

#pragma vertex vert
#pragma fragment frag
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
Listing 3-13

Including Unity’s standard shader library in URP

The first step of the graphics pipeline involves collecting all the data from the scene to pass to the shader, so we need to devise some way of obtaining the data here on the shader side. We’ll do that via structs.

Controlling Data Flow with Structs

We pass data between shader stages via containers called structs, which contain a bunch of variables. The first struct contains all the data we want to pull from the mesh and pass to the vertex shader.

The appdata Struct

We usually name this struct appdata, VertexInput, or Attributes; I will stick with the name appdata throughout the book because Unity’s built-in structs are named similarly, although you can name this whatever you want. Each instance of appdata contains data about one vertex of the mesh, and for now, all we need is the position of the vertex. Vertex positions are defined in object space, where each position is relative to the origin point of the mesh (for a refresher on object space, see Figure 2-10).

HLSL requires us to add what’s called a semantic to each variable. It’s just a bit of added information that tells Unity what each variable will be used for in the next shader stage – for example, vertex positions need to use the POSITION semantic. Semantic names don’t need to be capitalized, although most documentation will use capitalized names. We will make it clear that the vertex position is in object space by naming the variable positionOS.

Note

A full list of semantics can be found on the Microsoft HLSL website. At the time of writing, it can be found here: https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-semantics.

#include "include-file-for-your-pipeline"
struct appdata
{
      float4 positionOS : POSITION;
};
Listing 3-14

The appdata struct for passing data to the vertex shader

Take note of the semicolon after the closing brace! The type of the positionOS variable is float4 because we are using floating-point values to represent each component of the position, and there are four components. We will cover the core types in HLSL and how to use them later in the chapter. While we are thinking about structs, we will also write the struct for data being passed between the vertex and fragment shaders.

The v2f Struct

This struct is commonly called v2f, VertexOutput, or Varyings, but I will be sticking with v2f, which stands for “vertex-to-fragment.” Recall that the rasterization step happens after the vertex shader and before the fragment shader, so we need to know which types of data will be output from the vertex shader. This might not be the same as the data input to the vertex shader – for instance, we may calculate or generate our own types of data from scratch inside the vertex shader. For our first shader, we’ll only be outputting the clip-space position of each vertex, so we will name this variable positionCS (see Figure 2-15 for a look at clip space).
struct appdata { ... };
struct v2f
{
      float4 positionCS : SV_POSITION;
};
Listing 3-15

The v2f struct for passing data from the vertex shader to the fragment shader

You’ll notice that the semantic is different here. HLSL makes a distinction between a position being input to and output by the vertex shader, so we use the SV_POSITION semantic instead. Like all semantics, other learning resources might choose not to capitalize the name. Next, we will deal with variables.

Variables in HLSL

Although we declared the _BaseColor property back in the Properties block, we need to declare it again inside HLSL. It’s also possible to declare variables here that are not specified in the Properties block – in that case, we would need to use C# scripting to set the values of those variables rather than modifying values in the material’s Inspector. _BaseColor is, obviously, a color, which doesn’t have a special type in HLSL. It’s just a four-element vector of floating-point numbers, for which we use the float4 type. We declare these variables just below the structs we just wrote.

Note

Unity may also generate certain shader variables for us. We need to declare some of them inside HLSL, but we won’t need to include them in Properties or pass the data to the shader ourselves with scripting. An example of this kind of variable is _CameraDepthTexture, which we will see later.

struct v2f { ... };
float4 _BaseColor;
Listing 3-16

Declaring variables in HLSL in the built-in pipeline

The rules regarding variables are slightly different in URP. This code will still work, but once we’ve finished the shader, I’ll include a section explaining how to tweak the code to make use of features exclusive to URP and HDRP. With that change aside, everything is set up for us to start writing the two shader functions, vert and frag.

The Vertex Shader

The vertex shader function needs to transform the vertex positions from object space to clip space, which would usually involve a series of transformations from object to world space, then from world to view space, and then from view to clip space. A full description of this process is available in Chapter 2. The combined transformation is called the model-view-projection transformation, and Unity provides a function to apply the transformation for us.
  • In the built-in pipeline, this function is called UnityObjectToClipPos:
    • The name is long, but it intends to clarify what it is doing: it’s carrying out the object-to-clip transformation, and it’s operating on positions.

    • There are similarly named functions in the built-in pipeline, such as UnityObjectToWorldDir, which performs the object-to-world transformation and operates on direction vectors.

  • In URP, this function is called TransformObjectToHClip:
    • Similarly, the name is meant to tell you what the function is doing, and other functions in the core shader library are named using similar conventions.

  • In both pipelines, the respective function takes the object-space position as input and returns the clip-space position as output.

The vert function, which is our vertex shader, is just like any regular function, with a return type (in this case, v2f) and a list of parameters (we accept an appdata instance as input). Pick the correct code for the pipeline you’re using.
v2f vert (appdata v)
{
      v2f o;
      o.positionCS = UnityObjectToClipPos(v.positionOS);
      return o;
}
Listing 3-17

The vertex shader in the built-in pipeline

v2f vert (appdata v)
{
      v2f o;
      o.positionCS = TransformObjectToHClip(v.positionOS);
      return o;
}
Listing 3-18

The vertex shader in URP

Finally, we will write the fragment shader function, frag.

The Fragment Shader

The only argument to the function is the v2f struct that was output by the vert function, and we have a float4 return type, because the fragment shader will calculate and return the color of each fragment. The key difference between the two functions is that we need to specify a semantic for the fragment output, which is SV_TARGET. Inside the function, the only thing we need to do is return the _BaseColor, which was input to the shader as a property.
float4 frag (v2f i) : SV_TARGET
{
      return _BaseColor;
}
Listing 3-19

The fragment shader

Although we didn’t use any of the data from the v2f input ourselves, Unity automatically uses the variable with the SV_POSITION semantic to rasterize the object into fragments, so we didn’t set up the v2f struct for nothing! If you’ve followed each step correctly, then your shader will compile, and the Inspector should display the correct shader properties when you select the material, as seen in Figure 3-4. Success!


Figure 3-4

Our first shader attached to a material

We can change the behavior of the material in a few ways through the Inspector window:
  • You can see the Base Color property on the material, which we can tweak to change the color of the preview at the bottom of the window and any object that uses this material.

  • It is also possible to override the Queue we defined within the shader – instead of using the Geometry queue, which has a value of 2000, we can set any integer value here to modify how Unity renders the object. There may be edge cases where this is necessary, but I usually leave this field alone and let it inherit the value from the shader.

  • If we tick the Double Sided Global Illumination option, then Unity will account for both sides of each face of the mesh while lightmapping, even if only one side of each face is rendered normally.

We have successfully written a shader that renders an object in a single color with no lighting, which is about as “Hello World” as you can get. Congratulations for making it to this stage! Now that we’ve written our first shader, let’s revisit one of the key differences between writing shaders for the built-in pipeline and URP.

The SRP Batcher and Constant Buffers

As we have seen, there are sometimes differences between shaders designed for each of Unity’s render pipelines. Some of these differences amount to changing a specific function name because the core libraries differ slightly between pipelines, and other differences represent a fundamental change in how the pipelines operate. In this section, I want to provide an overview of one difference in particular: the SRP Batcher.

Note

If you are just starting out with Unity, you may find it useful to stick with the built-in pipeline for now and come back to the Universal Render Pipeline later, since a lot of tutorials out there were written for the built-in pipeline. However, if you are planning on using Shader Graph, then you will require URP or HDRP. Eventually, URP will become the default for new projects in Unity, and future learning materials will focus on it.

The SRP Batcher is a system supported by all Scriptable Render Pipelines (including URP and HDRP) that renders objects more efficiently than traditional methods, but our shaders need to conform to a handful of rules to be compatible with it. Namely, we need to include most of our variable declarations inside a special structure called a constant buffer.

This type of buffer lets us specify that the variables within stay constant throughout the shader pass – in other words, the value of the variables won’t abruptly change at any point while the shader is running. We use a pair of macros, CBUFFER_START(name) and CBUFFER_END, to enclose our constant buffer, and we provide the special name UnityPerMaterial for the buffer containing all properties that might change between materials. In place of Listing 3-16, we will use the following code instead.
struct v2f { ... };
CBUFFER_START(UnityPerMaterial)
      float4 _BaseColor;
CBUFFER_END
Listing 3-20

The CBUFFER for declaring variables in URP

We don’t need to change the variable names or types in any way – we just need to enclose them in the constant buffer. By making this change, Unity can batch together objects that use the same shader and the same mesh and render them using a single draw call. I’ll be going into much more detail about optimizations like this in Chapter 13, but this is one SRP-exclusive feature I want to keep in mind throughout the book!
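Before moving on, here is a sketch of how the finished URP shader file might look with the constant buffer in place, assembled from the listings in this chapter (built-in pipeline users would drop the RenderPipeline and LightMode tags, include UnityCG.cginc instead, call UnityObjectToClipPos, and declare _BaseColor without the buffer macros):

Shader "Examples/HelloWorld"
{
      Properties
      {
            _BaseColor("Base Color", Color) = (1,1,1,1)
      }
      SubShader
      {
            Tags
            {
                  "RenderType" = "Opaque"
                  "Queue" = "Geometry"
                  "RenderPipeline" = "UniversalPipeline"
            }
            Pass
            {
                  Tags { "LightMode" = "UniversalForward" }

                  HLSLPROGRAM
                  #pragma vertex vert
                  #pragma fragment frag
                  #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

                  struct appdata
                  {
                        float4 positionOS : POSITION;
                  };

                  struct v2f
                  {
                        float4 positionCS : SV_POSITION;
                  };

                  CBUFFER_START(UnityPerMaterial)
                        float4 _BaseColor;
                  CBUFFER_END

                  v2f vert (appdata v)
                  {
                        v2f o;
                        // .xyz avoids an implicit float4-to-float3 truncation warning.
                        o.positionCS = TransformObjectToHClip(v.positionOS.xyz);
                        return o;
                  }

                  float4 frag (v2f i) : SV_TARGET
                  {
                        return _BaseColor;
                  }
                  ENDHLSL
            }
      }
}

Now that we have written our first shader, let’s cover basic shader features that we didn’t see in that shader.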

Common Shader Syntax

Shaders come with many types, operators, and functions, which we’ll be seeing a lot throughout the book, so I will introduce the most important ones here. By the end of this section, you should understand the difference between similar variable types and what they are used for, as well as operators that work on those types.

Scalar Types

There are three main data types used for floating-point numbers: float, double, and half. We’ve seen float before – it uses 32 bits to represent numbers. The half type uses 16 bits, which makes it great for low-precision values with roughly three significant decimal digits, although some platforms map it to float anyway. And the double type uses 64 bits, but you will rarely need that much precision. When writing numbers inside a shader (a raw number written down is called a literal), we can add a postfix character to specify the type – f for float, h for half, and d for double.
float x = 3.7f;
half y = 9.4h;
double z = 27.047d;
Listing 3-21

The float, half, and double data types

You may also see the fixed type in some older shaders, which is supported by Cg and uses 11 bits for representing low-precision data like non-HDR colors, but it’s not part of HLSL. There are also integer data types – int and uint represent signed and unsigned integers, respectively. Both types use 32 bits. A signed integer can represent negative numbers, while an unsigned integer cannot; both can represent the same number of values, but over different ranges – int covers −2,147,483,648 to 2,147,483,647, while uint covers 0 to 4,294,967,295.
int a = -7;
uint b = 9;
Listing 3-22

The int and uint types

We can use common math operators with these types. If we mix types, then HLSL will intelligently infer what type the output should be. We can add using the + operator, subtract using -, multiply with *, and divide using the / operator. The unary minus operator is also defined, and we can use it to negate a single value. These operators follow the same precedence rules as regular math: brackets are evaluated first, then multiplication and division, and then addition and subtraction. There’s also the modulus operator, %, which returns the remainder after division. Unlike many languages, HLSL defines the modulus operation for both integer and floating-point types.
7 * 4       // = 28
-19         // = -19
6.4f + 9    // = 15.4f
14 / 4      // = 3 (integer types truncate the fractional part)
7 % 3       // = 1
7.4f % 3    // = 1.4f
Listing 3-23

Valid math operations in HLSL

Vector Types

Vector types in HLSL are made by combining the scalar types we just covered. The way we construct these types is simple: take the name of the scalar type we are basing the vector type on and add a number to the end that represents the number of elements. If we need a two-element vector of floating-point numbers, we use the float2 type. A three-element vector of integers? That’s an int3.

Note

Personally, I wish conventional programming languages commonly supported these kinds of types out of the box.

To declare any of these types, we can use a constructor like the following.
float2 x = float2(5.4f, 9.2f);
int3 y = int3(2, -4, 7);
uint1 z = uint1(3);
Listing 3-24

Vector types in HLSL

The number of elements must be between 1 and 4 inclusive. As you can see in Listing 3-24, it’s possible to use a vector type with one element, like int1, which is practically the same as the scalar type it is based on. We already saw float4 being used to represent positions when we wrote the example shader earlier in this chapter. Vectors are used to represent all manner of things in HLSL such as colors, positions, and directions, so there are operations on vectors that you should be aware of. Like scalars, we can use common math operators with vectors. The +, -, *, /, and % operators work element-wise. This means, for example, that the * operator works very differently from the dot product, which is typically thought of as “multiplying” vectors.
float2 x = float2(1.2f, 2.4f);
float2 y = float2(-3.1f, 4.6f);
x * y;     // = (-3.72f, 11.04f)
x – y;     // = (4.3f, -2.2f)
y + x;     // = (-1.9f, 7.0f)
y / x;     // = (-2.58333f, 1.91666f)
Listing 3-25

Vector math operations
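
If you do want the dot product rather than the element-wise product, HLSL provides the dot intrinsic – a quick sketch reusing the vectors above:

// dot(x, y) = (1.2f * -3.1f) + (2.4f * 4.6f)
float d = dot(x, y);   // = 7.32f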

We can access the individual elements of a vector in many ways. A vector has between one and four components, and we can access them using {VectorName}.x, .y, .z, and .w to get the first, second, third, and fourth components, respectively, as long as the vector contains the component you’re trying to access. Vectors can also contain color data, so, helpfully, we can use {VectorName}.r, .g, .b, and .a to get the same four components – the two naming systems are aliases for one another. Otherwise, we can use array indexing syntax like a classic programming language to access elements, where the indices start from zero. We can perform operations on those individual components as if they were scalar values or assign values to those components.
float3 example = float3(1.9f, -2.7f, 3.5f);
example.x + example.z;     // = 5.4f
example.y = 2.4f;          // example = (1.9f, 2.4f, 3.5f)
example.b;                 // = 3.5f
example[0] + example[2];   // = 5.4f
Listing 3-26

Accessing vector components

What if we need to access multiple components of a vector? There are several ways to do that. For instance, if we need to convert the first three parts of a float4 into a new float3, then we can access those components one by one using any of the methods we just covered.
float4 example = float4(1.2f, 2.4f, -5.2f, -0.7f);
float3 other = float3(example.x, example.y, example.z);
Listing 3-27

Accessing multiple vector components

However, this is cumbersome to type out every time. Thankfully, it is possible to access multiple vector components at once, in any order, with possible repetition – this is called swizzling, and frankly, it’s one of the best features of shading languages. This allows us to create a new vector of up to four components by mixing and matching the components of an existing vector. The following are all valid statements in HLSL.
float4 example = float4(1.2f, 2.4f, -5.2f, -0.7f);
float3 ex1 = example.xyz;  // = (1.2f, 2.4f, -5.2f)
float3 ex2 = example.rgb;  // = (1.2f, 2.4f, -5.2f)
float3 ex3 = example.xxx;  // = (1.2f, 1.2f, 1.2f)
float4 ex4 = example.wzyx; // = (-0.7f, -5.2f, 2.4f, 1.2f)
float4 ex5 = example.yyxx; // = (2.4f, 2.4f, 1.2f, 1.2f)
Listing 3-28

Using swizzling to access multiple vector components
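
As an aside, swizzles can also appear on the left-hand side of an assignment, provided no component is repeated – a small sketch:

float4 v = float4(1.0f, 2.0f, 3.0f, 4.0f);
v.xy = float2(9.0f, 8.0f);   // v = (9.0f, 8.0f, 3.0f, 4.0f)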

Swizzling makes writing shaders far quicker than it otherwise would be, as you will constantly need to access more than one vector component like this. Unfortunately, you can’t swizzle using array index syntax, so you don’t tend to see that syntax used as often in shaders as the other accessing methods. You may also need to create vectors that are made up of parts of multiple existing vectors, which is also easy to do in HLSL.
float4 example1 = float4(-7.4f, 2.1f, 3.2f, 3.3f);
float3 example2 = float3(2.2f, 8.9f, 9.0f);
float4 example3 = float4(example1.xy, example2.xy);
// example3 = (-7.4f, 2.1f, 2.2f, 8.9f)
Listing 3-29

Combining parts of multiple vectors into new vectors

Matrix Types

Matrix types are a bit more complicated than vector types. We can define matrices in a similar way to vectors by writing the scalar type that the matrix will contain, followed by the dimension. As you may recall from Chapter 2, matrices are referred to by their size by writing the number of rows by the number of columns, so a 3 × 2 matrix of floating-point numbers has three rows and two columns, and we use the type float3x2 to represent it. We construct matrices by listing the elements of each row in a list, like the following.
float3x3 example = float3x3
(
     1, 0, 0,         // First row
     0, 1, 0,         // Second row
     0, 0, 1          // Third row
);
Listing 3-30

Constructing the 3 × 3 identity matrix in HLSL

It’s not necessary to space out a matrix like this by stating each row on a separate line. We can just as easily condense Listing 3-30 onto a single line, but you will find it easier to read a matrix that has been written out like so. Now that we have a matrix, how do we access its elements? We can access elements using array indexing syntax like we could with vectors. However, like with vectors, it’s not possible to swizzle with that syntax. Since matrices are two-dimensional structures, we need two array indices to access a single element: first for the row and second for the column. Or, if we want to grab an entire row at a time, we can supply just one index for the row.
float ex1 = example[0][0];                   // = 1
float ex2 = example[0][1] + example[0][2];   // = 0
float ex3 = example[2][2];                   // = 1
float ex4 = example[3][3];                   // invalid: index out of range
float3 ex5 = example[0];                     // = (1, 0, 0)
Listing 3-31

Accessing matrix elements in HLSL

So what about swizzling? There are two other ways to access matrix elements. First, we can say {MatrixName}._mxy, where xy are the zero-indexed row and column you want to access. For instance, example._m00 gets the top-left element of the example matrix. The syntax is a bit unwieldy, but bear with it! The other way to access elements is to say {MatrixName}._xy, where xy are now the one-indexed row and column you want. Writing example._11 also gets the top-left element of the example matrix. If you don’t like the discrepancy between both those incredibly similar schemes, I’d say learn one and stick to it – throughout this book, I will use the zero-indexed version that uses m in the name. But be aware that other resources could use either syntax. Using these two methods, we can swizzle to create new vectors or matrices (up to four elements can be pulled from the matrix at a time using swizzling).
float2 ex1 = example._m11_m11;             // = (1, 1)
float4 ex2 = example._m00_m01_m10_m11;     // = (1, 0, 0, 1)
float2x2 ex3 = example._m00_m01_m10_m11;   // = 2x2 identity
Listing 3-32

Swizzling matrix elements in HLSL

Finally, like vectors, matrices support common math operators, which work per component. This means, among other things, that the * operator is not the same as doing matrix multiplication as I described in Chapter 2. There’s a dedicated mul function for matrix multiplication, which we will see later.
float3x3 a = float3x3
(
     1, 0, 1,
     0, 3, 0,
     0, 0, 2
);
float3x3 b = float3x3
(
     2, 0, 0,
     0, 2, 0,
     0, 0, 2
);
float3x3 result = a * b;
// result = ( 2, 0, 0,
//            0, 6, 0,
//            0, 0, 4 )
Listing 3-33

The * operator on matrices
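
For contrast, true matrix multiplication is performed with the mul function, which we will see properly later – here is a brief sketch using the uniform-scale matrix b from above:

float3 v = float3(1, 2, 3);
// mul performs row-by-column multiplication, treating v as a column vector.
float3 scaled = mul(b, v);   // = (2, 4, 6)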

Included Variables

Unity includes several variables to aid your shader programming. While it is possible to send arbitrary data to a shader ourselves through C# scripting, Unity sends a lot of data to the shader automatically, such as time-based variables, transformation matrices, and camera properties. Some of these will be explored in detail in their respective chapters, so we will cover only a selection of the variables here.

Transformation Matrices

Transformation matrices are the backbone of the graphics pipeline, so it makes sense for Unity to declare the key matrices for us and make them available inside shaders. The syntax for the name of most of these matrices is UNITY_MATRIX_{NAME}. Table 3-1 is a non-exhaustive list of the most important matrices that are available; each one is of type float4x4.
Table 3-1

Matrices provided by Unity

UNITY_MATRIX_M / unity_ObjectToWorld – The model matrix, which transforms from object space to world space. The two names are aliases for one another.

UNITY_MATRIX_I_M / unity_WorldToObject – The inverse model matrix, which transforms from world space to object space. The two names are aliases only in URP; in the built-in pipeline, only unity_WorldToObject exists.

UNITY_MATRIX_V – The view matrix, which transforms from world space to view/camera space.

UNITY_MATRIX_P – The projection matrix, which transforms from view space to clip space.

UNITY_MATRIX_MV – The model-view matrix, which transforms from object space directly to view space.

UNITY_MATRIX_VP – The view-projection matrix, which transforms from world space to clip space. This can be considered the “camera matrix,” since both the view and projection transformations depend on the camera’s properties.

UNITY_MATRIX_MVP – The model-view-projection matrix, which transforms from object space directly to clip space. This matrix is often used in the vertex shader.
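
As a sketch of how these matrices are used, the vertex shader we wrote earlier could be expressed with mul and UNITY_MATRIX_MVP directly – the transform helper functions we called essentially wrap this multiplication:

v2f vert (appdata v)
{
      v2f o;
      // Transform the object-space position straight to clip space.
      o.positionCS = mul(UNITY_MATRIX_MVP, v.positionOS);
      return o;
}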

Time-Based Variables

Shaders can be animated over time without requiring us to write external time data to the shader. Unity already provides plenty of time variables for our shaders, covering the time since the level was loaded and the time since the last frame execution. Let’s see these variables in action.

The variable _Time is a float4 that contains the time since level load in four commonly used formats. Let’s say the time since level load is called t. The x-component of _Time stores t/20, which is useful if you need a slow timer in your shader. The y-component stores the unedited t value, so you’ll use _Time.y if you need the exact number of seconds since level load. _Time.z stores 2t, and _Time.w stores 3t, which are both useful if you need a fast timer in your shader. Of course, none of these fixed speeds will suit every case, so a good solution is to include a property in your shader, perhaps called _Speed, and use _Time.y * _Speed in your calculations. The advantage of writing your shaders in that way is that you can modify the speed of animations per material.
float t = _Time.y;
float fastT = _Time.w;
Listing 3-34

The _Time variable

Another useful application of _Time is to create a clock that ticks up to a certain value and loops back round to zero. The following code snippet will count to 1 second and then loop back to zero and start counting to 1 again.
float loopedTime = _Time.y % 1.0f;
Listing 3-35

Creating a looping timer using _Time

Surprisingly often, you will use sin(_Time.y) or cos(_Time.y) in your shaders. A shorthand for both functions can be found in the additional variables _SinTime and _CosTime, respectively. Both are of type float4, and they respectively contain the sine and cosine of t/8, t/4, t/2, and t, in that order.
float sineTime1 = sin(_Time.y);
float sineTime2 = _SinTime.w;
Listing 3-36

Using the sine of _Time; the two statements are equivalent

Finally, we can access the time since the previous frame was rendered, which is conventionally called delta time. This is also a float4, where each value contains the time in seconds since last frame in different formats. Let’s call the delta time in seconds dt. The variable unity_DeltaTime stores dt in the x-component and 1/dt in the y-component. We can also access the smoothed delta time, which is dt averaged out over a handful of frames – this avoids the value of dt spiking temporarily when a single frame takes unusually long to process. Let’s call the smoothed delta time sdt. Unity stores sdt in unity_DeltaTime.z and 1/sdt in unity_DeltaTime.w. I don’t often find myself needing delta time in shaders, but it’s useful to know it’s there.
float deltaTime = unity_DeltaTime.x;
Listing 3-37

Using delta time in shaders
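
To tie time back to our HelloWorld shader, here is a sketch of a fragment shader that pulses the base color – note that _Speed is a hypothetical property we would have to declare ourselves in both the Properties block and the HLSL code; it is not built in:

float4 frag (v2f i) : SV_TARGET
{
      // sin returns values in [-1, 1]; remap to [0, 1] so the color never inverts.
      float pulse = (sin(_Time.y * _Speed) + 1.0f) * 0.5f;
      return _BaseColor * pulse;
}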

Summary

In this chapter, we saw how to create a basic unlit color shader in Unity in the built-in and Universal render pipelines. Shaders must take data about the scene and transform the position of each vertex into a different coordinate space using the functions provided by Unity. Then, we can color each pixel, or fragment. A language called ShaderLab acts as a wrapper around the shader code, and it lets us define the macroscopic features of the shader and provides an interface between the Unity Editor and the shader. In the next chapter, we will see these same concepts in the context of Shader Graph. In this chapter, we learned the following points:
  • Shaders must be attached to a material to be applied to a mesh.

  • HLSL, GLSL, and Cg are examples of shader languages. The standard shader language in Unity is HLSL, since Cg has been deprecated.

  • Unity’s proprietary language, ShaderLab, wraps around shader code and provides an interface between the shader and the rest of Unity.

  • Properties are shader variables that we can edit on a material.

  • A ShaderLab file can contain many SubShaders, and Unity picks the first one that is valid on the hardware.

  • Tags can be used to customize the rendering order and render pipeline for a specific SubShader or Pass.

  • Unity provides helpful macros and functions that are commonly used in shaders.

  • URP shaders must declare most variables inside a constant buffer.

  • HDRP uses Shader Graph instead of code for most user-generated shaders.

  • There are several core variable types in HLSL that represent scalars, vectors, and matrices of different dimensions.

  • Swizzling can be used as a shorthand to access vector components in any order or combination, with possible repetition.
