1. Introduction to Shaders in Unity

Daniel Ilett, Coventry, UK

Everybody wants to make gorgeous games, and shaders are one of the most powerful tools in the technical artist’s toolkit to achieve just that. In this chapter, we will take a step back and explore, briefly, how rendering in Unity works and where shaders fit into the story. By the end of the chapter, you will understand the basics of the rendering process, what Unity’s different render pipelines are, and how we will use programs called shaders to control how our games look.

Rendering

A game engine is nothing but a toolbox filled with software geared toward making games, and the bit of software we’re most interested in is the renderer – the code that takes meshes, sprites, and shaders and turns them all into the images you see on your screen. Unity is no different from other engines in this regard. At a high level, the renderer processes a set of data through a series of distinct steps, as seen in Figure 1-1. Some of the stages are handled automatically by Unity, so we won’t need to worry about them, while others are controlled entirely by using shaders that we will write. As we will see, each stage has a specific purpose, and some are even optional. Let’s briefly explore the history of the graphics pipeline and see how shaders came about in the first place.

Figure 1-1

The stages of the graphics pipeline: raw vertex data passes through the vertex shader, the tessellation and geometry stages, rasterization, and the fragment shader, followed by per-fragment operations. The stages surrounded by a dotted line indicate parts of the pipeline we can customize using shaders

In ancient times (the 1990s), with the advent of real-time 3D graphics, the functionality afforded to graphics programmers was limited. We had what’s called the fixed-function pipeline, where we could configure a relatively small set of specialized functions (such as “do lighting,” “add textures,” and so on) that ran on the GPUs of the time. This was faster than previous software rendering methods, which ran on the CPU, but came with heavy limitations. The fixed-function pipeline is akin to building a car with five options for the body shape, seven for color, and three for the engine size; it covers a lot of use cases, but we need finer control to build a car of any shape imaginable, in all the colors of the rainbow and beyond.

Enter the programmable pipeline. In modern computer graphics, we can completely control the behavior of certain parts of the rendering process using small programs called shaders. No longer are we constrained to a limited set of functions – now, we can write C-style code to instruct the graphics card to do any combination of effects. The impossible became possible. Whereas the fixed-function pipeline is like flicking a bunch of levers someone else made for us, the programmable pipeline is like building an entire machine from scratch using raw materials to our exact specifications.

As you can see from Figure 1-1, there are several stages, each of which carries out a specific purpose:
  • The vertex shader lets us control how objects get placed on-screen. This type of shader typically receives vertex data from a mesh and uses mathematical transformations to position each vertex correctly on-screen. You can also use this shader stage to animate vertex positions over time.

  • The tessellation shader can be used to subdivide a mesh into smaller triangles, creating additional vertices. This is useful when combined with vertex shaders that offset the positions of vertices, as you can get more detailed results.

  • The geometry shader can be used to create whatever new primitive shapes you want, anywhere you want. Although this stage can incur a performance hit, it can do things that are difficult to achieve with the other shader stages.

  • The fragment shader is used to color each pixel of the object. Here, we can apply color tint, texture, and lighting to each part of the object – for that reason, most of the “interesting” work is often carried out in the fragment shader.

  • Compute shaders are another kind of shader that exist outside the pipeline and can be used for arbitrary computation on the GPU. These computations don’t necessarily need to involve graphics at all, although compute shaders still have a use in some graphics-only contexts.

The vertex shader and fragment shader stages are the two most important stages that we have control over, so we’ll be seeing them in almost every shader we write. The other stages are optional. In Unity, we will primarily be using HLSL (High-Level Shading Language) to write our shader code. There are several kinds of shading language available, and the way we write shaders in Unity has changed over time, so we will explore this in greater detail in Chapter 3 when we write our very first shader file. Now that we know what a shader is, let’s look deeper at the flow of data throughout the rendering process.
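
To make these stages concrete, here is a minimal sketch of what a complete shader file can look like in the built-in render pipeline, containing just a vertex and a fragment stage. The shader name, the _BaseColor property, and the function names are illustrative choices of my own; we will write real shader files like this properly in Chapter 3.

Shader "Examples/MinimalUnlit"
{
    Properties
    {
        _BaseColor ("Base Color", Color) = (1, 1, 1, 1)
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 _BaseColor;

            struct appdata
            {
                float4 vertex : POSITION;   // object-space vertex position from the mesh
            };

            struct v2f
            {
                float4 pos : SV_POSITION;   // clip-space position used by the rasterizer
            };

            // Vertex stage: transform each vertex from object space to clip space.
            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            // Fragment stage: output a flat color for every fragment of the object.
            float4 frag (v2f i) : SV_Target
            {
                return _BaseColor;
            }
            ENDCG
        }
    }
}

Even this tiny example follows the shape of Figure 1-1: the vertex function runs once per vertex, the rasterizer fills in the triangles, and the fragment function runs once per fragment.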

Game Data

At the very start of the pipeline, we have a bunch of data. When we place objects such as characters or landscapes into a scene in Unity, we are implicitly adding data that needs to be passed to the first stages of the graphics pipeline. Here’s a brief list of the kinds of data the graphics pipeline needs:
  • The position and orientation of the camera define how every other object will appear on the screen. Some objects will be obscured from view if they are behind the camera, outside its field of view, or too close to or too far from the camera.

  • Meshes, or 3D models, are defined by a set of vertices (points) connected by edges, where three connected vertices form a triangle. These vertices are passed to the vertex shader alongside data such as vertex colors and texture coordinates (see the sketch of a typical vertex input after this list). In Unity, the Mesh Renderer component is responsible for passing this data to the shader.

  • Sprites in 2D can be considered flat quads, each made up of two triangles. Unity’s Sprite Renderer component passes those two triangles to the vertex shader.

  • Objects on the UI (user interface), such as text or images, use specialized components to pass data to the shader in a similar manner.
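
As a sketch of the kind of per-vertex data a Mesh Renderer supplies to the vertex shader, here is a typical vertex input struct in HLSL, a slightly richer version of the one in the earlier example. The struct and field names are illustrative; only the semantics (POSITION, NORMAL, COLOR, TEXCOORD0) carry fixed meaning to the graphics API.

struct appdata
{
    float4 vertex : POSITION;   // object-space position of the vertex
    float3 normal : NORMAL;     // surface normal, often used for lighting
    float4 color  : COLOR;      // per-vertex color
    float2 uv     : TEXCOORD0;  // texture coordinates
};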

Objects are processed one at a time through each stage of the graphics pipeline. Before processing, Unity may sort the data in certain ways. For instance, a typical graphics pipeline will render all opaque objects and then will render semitransparent objects over the top, starting from the transparent object furthest from the camera (i.e., back to front). Once this sorting is complete, the next step is to render each object in turn, starting with the vertex shader.

The Vertex Shader

As I mentioned, Unity’s renderer components are responsible for passing data to the vertex shader. Here’s where things start to get exciting for technical artists like us! Most vertex shaders will look the same: we take vertex positions, which start out in object space (relative to the object being rendered), and use a series of matrix transformations to put them in the correct position on-screen. We can also implement some effects at the vertex stage. For example, we can generate waves to animate a water plane or expand the mesh to create an inflation or explosion effect.

Alongside positions, the vertex shader also transforms other vertex data. Meshes can have several pieces of information attached to each vertex, such as vertex colors, texture coordinates, and other arbitrary data we might choose to add. The vertex shader doesn’t just perform these transformations; it also passes the data to subsequent shader stages, so we can modify data however we want. For example, in some of the shaders we’ll see later, we will pass the world position of vertices so we can use it in the fragment shader.
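
As a brief sketch of that idea, using built-in pipeline HLSL and the vertex input struct sketched earlier (the names here are my own), a vertex shader can compute the world-space position and hand it to the fragment shader through an interpolator struct:

struct v2f
{
    float4 pos      : SV_POSITION;  // clip-space position for the rasterizer
    float3 worldPos : TEXCOORD0;    // world-space position passed on to the fragment shader
};

v2f vert (appdata v)
{
    v2f o;
    // Transform from object space to clip space for on-screen placement.
    o.pos = UnityObjectToClipPos(v.vertex);
    // Also transform to world space and pass it along for use in later stages.
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    return o;
}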

Between the vertex and fragment shader stages, a process called rasterization takes place, in which the triangles that make up a mesh are sliced into a 2D grid of fragments. In most cases, a fragment corresponds to one pixel on your screen. Think of the rasterizer as an advanced version of MS Paint, which takes the triangles of the mesh and converts them into an image the size of the game window – that image is called the frame buffer. During rasterization, other vertex properties are interpolated across each triangle. For example, if we consider an edge where the two vertices have black and white vertex colors, respectively, then the fragments along that edge will receive varying shades of gray. Once rasterization has finished, we move on to the fragment shader.

The Fragment Shader

This is sometimes called the pixel shader, and it is perhaps the most powerful and flexible stage of the graphics pipeline. The fragment shader is responsible for coloring each pixel on the screen, so we can implement a wide variety of effects here, from texturing and lighting to transparency and beyond. Special kinds of shaders called post-processing shaders, which operate on the entire screen, can be used for additional effects, such as simple color mapping, screen animations, depth-based effects, and special types of screen-space shading.
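
As a small sketch of what a fragment shader might look like in the built-in pipeline, here is one that samples a texture and applies a color tint; the _MainTex and _Tint names and the interpolator struct are illustrative:

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;   // interpolated texture coordinate from the vertex shader
};

sampler2D _MainTex;
float4 _Tint;

float4 frag (v2f i) : SV_Target
{
    // Sample the texture at this fragment's interpolated UV coordinate.
    float4 texColor = tex2D(_MainTex, i.uv);
    // Apply a color tint and output the final fragment color.
    return texColor * _Tint;
}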

Once the fragment shader has finished, a final round of processing occurs. These processes include depth testing, where opaque fragments may be discarded if they would otherwise be drawn behind another opaque fragment from another object, and blending, where the colors of semitransparent objects are mixed – blended – with colors that have already been drawn to the screen. We may also use a stencil, which stops certain pixels from being rendered to the screen, as you can see in stage 6 of Figure 1-1.
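
In Unity’s ShaderLab syntax, these per-fragment operations are configured with render state commands rather than shader code. The following sketch shows the kind of settings a semitransparent, stencil-tested pass might use; the specific values are illustrative rather than a recipe:

Shader "Examples/BlendAndStencilSketch"
{
    SubShader
    {
        // Draw after opaque geometry, sorted back to front with other transparent objects.
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

        Pass
        {
            ZWrite Off                        // transparent objects usually skip depth writes
            ZTest LEqual                      // discard fragments hidden behind opaque geometry
            Blend SrcAlpha OneMinusSrcAlpha   // mix this color with what is already on screen

            Stencil
            {
                Ref 1
                Comp Equal   // only render where the stencil buffer already contains 1
            }

            // The CGPROGRAM block with the vertex and fragment shaders goes here.
        }
    }
}

Attaching a shader with these settings to a material is enough for Unity to sort the object into the transparent queue and blend it over whatever has already been drawn.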

Of course, I’ve simplified many of the stages for this brief primer. Later on in the book, we’ll explore the vertex and fragment shaders to the fullest, and we’ll see optional types of shader designed for highly specialized tasks. That said, most of the shaders you will write throughout your shader career will involve moving vertices around and coloring fragments. So far, we’ve seen how the graphics pipeline operates in general, but there are a few other things to be aware of before we dive in.

Unity’s Render Pipelines

Now we come to the elephant in the room. Before 2017, Unity had a single rendering pipeline for all use cases: modern high-end PC and console, virtual reality, low-end mobile devices, and everything in between. According to Unity themselves, this involved compromises that sacrificed performance for flexibility. On top of that, Unity’s rendering code was something of a “black box”: even with comprehensive documentation and an active developer community, it was impenetrable without a Unity source code license.

Unity chose to overhaul its renderer by introducing Scriptable Render Pipelines (SRPs). To keep things brief, an SRP gives developers control over exactly how Unity renders everything, letting us add or remove stages to or from the rendering loop as required. Realizing that not all developers want to spend the time building a renderer from scratch (indeed, one of the key reasons many people choose a game engine in the first place is that all the work on the rendering code is done for you), Unity provides two template render pipelines: the High-Definition Render Pipeline (HDRP), which targets high-end console and PC gaming, and the Universal Render Pipeline (URP), which is designed for lower-end machines and mobile devices, although it can also run on more powerful hardware. All SRPs, including custom SRPs you write and the two template render pipelines, bring exclusive support for new systems, which I will mention throughout the book.

The legacy rendering system is also available for those who already started their projects in older Unity versions and is now called the built-in render pipeline. For most new projects targeting a broad set of hardware, it is recommended that you start a project with URP – eventually, Unity will make this the default for new projects. Unfortunately for us, shaders sometimes differ slightly between all three pipelines, which is why I feel it’s important to make the distinction between them early on. In this book, I will do my best to explain the differences between each and present you with shader examples that work in all three where possible.

Note

Although it is possible to swap pipelines partway through development, it can be painful to do so, especially with larger projects. If you have already started a project, I recommend sticking with the render pipeline you chose unless there is a feature only supported by a different pipeline that you absolutely require.

Shader Graph

With the advent of the SRPs came a few exclusive features. For us, none of those features are quite as impactful as Shader Graph, Unity’s node-based shader editor. Traditionally, shaders have existed only as code, which puts them firmly on the “programmer” side of the “programmer-to-artist” spectrum. But in the last decade or so, one of the biggest innovations in the field of technical art has been the shift to visual editors for shaders. These editors are somewhat akin to visual scripting tools: they replace lines of code with nodes, bundles of functionality that can be connected into a graph. For many, visual editors like these are far easier to get to grips with than code, because you can visualize the progression of a shader at each step. Whereas code shaders give you no such feedback as you write them, a visual editor can preview what your shader looks like at each node, so you can debug your game’s visuals with ease.

Note

Originally, Shader Graph was exclusive to SRP-based pipelines. In Unity 2021.2, however, support for Shader Graph was ported to the built-in pipeline. Unity seems to be keeping it a bit quiet, as most of the online documentation for Shader Graph avoids saying so!

Throughout this book, I will show you examples in both shader code and Shader Graph, because I believe that both will be important to technical artists going forward. Chapter 3 will focus on shader code, while Chapter 4 will serve as your introduction to Shader Graph.

Summary

We covered a lot in this chapter! You should now be aware of the key terminology that will be used throughout the rest of this book. Here’s a rundown of what we learned:
  • The rendering/graphics pipeline is a series of stages that operate on data.

  • Vertex shaders are used to position objects on-screen.

  • Triangle faces are converted to fragments/pixels during rasterization.

  • Data is interpolated between vertices during the rasterization stage.
