1

A Primer to the Third Dimension

Welcome!

It’s a pleasure to have you join us on this journey to learn the fundamentals of 3D game development. Firstly, we will introduce you to the team who wrote this book.

  • Travis Baptiste (3D Artist) directed the art, modeled every model in the game, rigged the character, and helped define the design of the story.
  • Russell Craig (Sr. Software Engineer) created the scripts for the mechanics.
  • Ryan Stunkel (Sound Designer) created and implemented all the sounds throughout the project.
  • Anthony Davis (Sr. Technical Artist) wrote the book, managed the project, built effects and shaders, and polished the project.

Bringing out the best of our collective experience of over 50 years (with four brains behind every page in this book) was a roller coaster each day, and too much fun. We spent over six months and two full revisions of the book (as well as hundreds of GIFs exchanged along the way) finding the most suitable use cases to explain new concepts and, most importantly, a teaching approach that works. In the end, we believe we’ve created the book that would have shaped the trajectory of our own careers in game development and pushed us ahead by at least 3-5 years.

This book will equip you with all the tools you’ll need to start building; however, you might need more support and advice en route to turn your ideas into creations.

That’s where our Discord server comes into play. It introduces an element of interactivity, letting us connect, read the book together, and talk about your 3D game projects. I am available on Discord more than ever to ensure you get through the book with ease, so please feel free to come say hi and ask any questions!

Don’t forget to drop a quick intro in the #introduce-yourself channel when you join: https://packt.link/unity3dgamedev

Well, let’s get started!

Goal of this book

Our goal with this book is to enable every reader to build the right mindset to think about 3D games, and then show them all the steps we took to create ours. An absolute beginner is welcome to work through this book; however, the topics may ramp up in difficulty quite quickly. Though it gets difficult, if you stick with it, you will have taken multiple steps toward mastery in game development. The main target audience for this book is those with some prior knowledge of game development, though regardless of your experience, we hope to create an enjoyable learning journey for you. The concepts we cover soon grow complex as we get into characters, programming, design patterns, and more.

To make the best use of the book, I’d recommend you follow the approach below:

  • Read through the chapters, deliberately taking breaks to think about the concepts.
  • When something is brand new, check our project on GitHub to see if viewing it in action helps explain it further. If it doesn’t, take to Google and do your own research on it.
  • If something isn’t available in the project, send me a message over Discord or seek help from peers in the community server—the link is shared above.
  • Move on to the next section and repeat!

This approach will allow you to take ownership of the areas you struggle with; once you have gone through the process, you can seek help from peers. The problems that you encounter may also be encountered by others. Solving them and bringing them to the Discord, or having your peers help with the solution, strengthens the overall knowledge of the community.

This book is designed for you to read through our approach and then look into the project to understand all the underpinnings. It’s more important to first understand why we designed things the way we did. We take time to go over the fundamentals of the Unity interface as well, but the tech can be learned over time with plenty of resources online.

Some things you will not find in here are how to model characters, or how to rig or animate them. We say very little about that process as it is its own training. We do go over why we designed our character the way we did, to help you on your journey to do the same. The project contains all the animations and cinematics, so the final results of our work are there for you to see. This is a strong way to learn: we teach you why things are done the way they are, you get to see the end result, and you’re free to be creative, bring your own thinking to the design, and work through the process with new tools as you make your way through the chapters.

Lastly, before we sink our teeth into the content, we’d like to advise you to open the GitHub repo, navigate to the Builds folder, and play the game for yourself. This will help you see what our small team put together in its complete form. After playing it through, you can visualize what we went through while building this project from the start.

Let’s dive into what topics we will cover in this chapter:

  • Coming around to 3D
  • Essential Unity concepts
  • The Unity interface

Let’s get started by familiarizing ourselves with the basic components of 3D game development.

Coming around to 3D

We will be going over a basic understanding of 3D work within this section. From coordinate systems to the makeup of how the 3D model is rendered, we will only go surface-level to ensure that you fully understand the foundations as you progress through this journey. By reading through this, you will gain a strong understanding of how Unity displays items.

Coordinate systems

3D coordinate systems are not the same in every 3D application! As demonstrated in Figure 1.1, Unity uses a left-handed world coordinate system with +y facing upward. Looking at Figure 1.1, you can visualize the difference between left-handed and right-handed systems.

Figure 1.1: Coordinate systems

While we work within these coordinate systems, you will see the positions of objects represented in an array of three values within parentheses as follows:

(0, 100, 0)

These values represent (x, y, z) respectively. Getting used to this notation is a good habit, as scripts use very similar syntax when writing positions. When we talk about position, it is commonly handled by the transform inside whichever Digital Content Creation (DCC) tool you’re using. In Unity, the transform holds position, rotation, and scale.
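
To make this concrete, here is a minimal sketch, written by us rather than taken from the project, of reading and writing a position from a Unity C# script:

using UnityEngine;

// A minimal sketch: reading and writing a GameObject's position from a script.
public class PositionExample : MonoBehaviour
{
    void Start()
    {
        // Positions are written as (x, y, z), just as in the text above.
        Vector3 spawnPoint = new Vector3(0f, 100f, 0f);

        // The Transform component holds position, rotation, and scale.
        transform.position = spawnPoint;

        Debug.Log($"Now standing at {transform.position}");
    }
}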

Now we understand world coordinates, (x, y, z), and that each axis starts at 0, represented by (0, 0, 0). In Figure 1.2 below, the point where the colored lines meet is (0, 0, 0) in the world. The cube has its own transform, which encompasses that object’s position, rotation, and scale. Keep in mind that the transform holds the local position, rotation, and scale; world transforms are calculated from these values by following the object’s hierarchy.

Figure 1.2: 3D coordinate system

The cube in Figure 1.2 is at (1, 1.5, 2). This is called world space as the item’s transform is being represented through the world’s coordinates starting from (0, 0, 0).

Figure 1.3: World space vs local space

Now that we know the cube’s transform is in relation to the world (0, 0, 0), we will go over the parent-child relationship that describes local space. In Figure 1.3 above, the sphere is a child of the cube. The sphere’s local position is (0, 1, 0) in relation to the cube. Interestingly, if you now move the cube, the sphere will follow, as it is only offset from the cube, and its local position will remain (0, 1, 0) in relation to the cube.
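
As a small illustration of this parenting behavior in code, the following sketch assumes a scene that contains GameObjects named Cube and Sphere, mirroring Figure 1.3; the script itself is hypothetical and not part of the project:

using UnityEngine;

// Hypothetical setup mirroring Figure 1.3: a sphere parented to a cube.
public class ParentChildExample : MonoBehaviour
{
    void Start()
    {
        Transform cube = GameObject.Find("Cube").transform;     // assumes a "Cube" exists in the scene
        Transform sphere = GameObject.Find("Sphere").transform; // assumes a "Sphere" exists in the scene

        // Make the sphere a child of the cube, offset 1 unit up in local space.
        sphere.SetParent(cube);
        sphere.localPosition = new Vector3(0f, 1f, 0f);

        // Moving the cube moves the sphere too; its local position stays (0, 1, 0).
        cube.position += new Vector3(5f, 0f, 0f);
        Debug.Log(sphere.localPosition); // still (0.0, 1.0, 0.0)
    }
}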

Vectors

Traditionally, a vector is a quantity that has both a magnitude and a direction. In a 3D setting, a Vector3 looks very similar to what we’ve worked with so far: (0, 0, 0) is a Vector3! Vectors are used in a great many solutions for game elements and logic. Usually, a developer will normalize a vector so that its magnitude always equals 1. This makes the data easy to work with: 0 is the start of the vector, 0.5 is halfway along it, and 1 is its end.
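
Here is a small sketch of normalizing a vector in Unity C#; the numbers are chosen purely for illustration:

using UnityEngine;

// A sketch of normalizing a vector so its magnitude equals 1.
public class VectorExample : MonoBehaviour
{
    void Start()
    {
        Vector3 toTarget = new Vector3(3f, 0f, 4f); // magnitude 5
        Vector3 direction = toTarget.normalized;    // magnitude 1, same direction

        Debug.Log(direction.magnitude); // 1

        // Scale the unit direction to step halfway (0.5) or all the way (1) along the vector.
        Vector3 halfway = direction * (toTarget.magnitude * 0.5f);
        Debug.Log(halfway); // (1.5, 0.0, 2.0)
    }
}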

Cameras

Cameras are incredibly useful components! They humbly show us their perspective, which allows our players to experience what we are trying to convey to them. As you may have guessed, a camera also has a transform, just like all GameObjects (which we will describe later in the chapter) in the hierarchy. Cameras also have several parameters that can be changed to obtain different visual effects.

Different game elements and genres use cameras in different ways. For example, the game Resident Evil uses static cameras to give a sense of tension, not knowing what’s outside the window or around the corner, while Tomb Raider pulls the camera in close while the player character Lara goes through caverns, giving a sense of intimacy and emotional understanding, with her face looking uncomfortable in tight spaces.

Cameras are essential to the experience you will be creating for your users. Take time to play with them and learn compositional concepts to maximize the push of emotions in the player’s experience.

Faces, edges, vertices, and meshes

3D objects are made up of multiple parts, as seen in Figure 1.4. Vertices, represented by the green circles, are points in space relative to the world (0, 0, 0). Each object has a list of these vertices and their corresponding connections.

Two vertices connected make an edge, represented by a red line. A face is made when either three or four edges connect to make a triangle or a quad. Sometimes quads are called a plane when not connected to any other faces. When all of these parts are together, you have a mesh.

Figure 1.4: Vertices, edges, faces, and meshes
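
If you are curious what that data looks like from a script, this small sketch of ours reads the vertex and triangle lists from a MeshFilter on the same GameObject:

using UnityEngine;

// A sketch that reads the mesh data described above from a MeshFilter.
[RequireComponent(typeof(MeshFilter))]
public class MeshInfo : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Vertices are stored as a list of points; triangles index into that list,
        // three indices per triangular face.
        Debug.Log($"Vertices: {mesh.vertexCount}");
        Debug.Log($"Faces (triangles): {mesh.triangles.Length / 3}");
    }
}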

Materials, textures, and shaders

Now that you know what a mesh is composed of in all DCC tools, let’s look into how Unity displays that mesh to you. At the base level is a shader. Shaders can be thought of as small programs, written in their own language and run on the GPU, that let Unity render the objects in your scene to your screen. You can think of a shader as a large template from which materials are created.

The next level up is materials. A material is a set of attributes, defined by the shader, that can be manipulated to control what the object looks like. Each rendering pipeline has its own shaders: Built-in, Universal Render Pipeline (URP), or High Definition Render Pipeline (HDRP). For this book, we are using the second option, which is also the most widely used: URP.

Figure 1.5 shows an example of a material using the URP’s Standard Lit shader. This allows us to manipulate surface options, inputs for that surface, and some advanced options. For now, let’s just talk about Base Map, the first item in the Surface Inputs section. The term Base Map is being used here as a combination of the Diffuse/Albedo and Tint. Diffuse/Albedo is used to define the base color (red) that will be applied to the surface—in this case, white.

If you place a texture into this map, either by dragging a texture onto the square (green) to the left of Base Map or by clicking on the circle (blue) between the box and the name, you can then tint the surface with the color if any adjustments are needed.

Figure 1.5: Base material attributes

Figure 1.6 shows a simple example of what a cube would look like with a tint, texture, and the same texture with the tint changed. As we progress through the book, we will unlock more and more functions of materials, shaders, and textures.

Figure 1.6: Tint and texture base color
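
As a hedged sketch of doing the same thing from code, the snippet below applies a texture and a tint to a renderer’s material; the property names _BaseMap and _BaseColor are the ones exposed by URP’s Lit shader, and the baseTexture field is our own placeholder:

using UnityEngine;

// A sketch of tinting and texturing a material from a script (URP Lit shader properties).
public class TintExample : MonoBehaviour
{
    [SerializeField] private Texture2D baseTexture; // assign any texture in the inspector

    void Start()
    {
        Material material = GetComponent<Renderer>().material;

        material.SetTexture("_BaseMap", baseTexture); // the texture slot
        material.SetColor("_BaseColor", Color.red);   // the tint
    }
}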

Textures can provide incredible detail for your 3D model.

When creating a texture, the resolution is an important consideration. The first part of the resolution that needs to be understood is “power of 2” sizes. Powers of 2 are as follows:

2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, etc.

These numbers represent the pixel size for both width and height. There are cases where you may need to mix the sizes as long as they fit the power of 2 scale. Examples are:

  • 256×256
  • 1024×1024
  • 256×1024 (this is less common to see, but is valid)

The second consideration regarding resolution is the size itself. The easiest way to work through this consideration is by thinking about how large the 3D object will be on your screen. If you have a 1920x1080 screen resolution, that is 1920 pixels wide by 1080 pixels tall. If the object in question is only going to take up 10% of the screen and will rarely be seen any closer, you may consider a 256x256 texture: 10% of a 1920-pixel-wide screen is only about 192 pixels, and 256 is the nearest power of 2 that covers it. By contrast, if you are making an emotional, character-driven game where emotions and facial expressions matter, you may want a 4096x4096, or 4K, texture on just the face during those cutscenes.

Rigidbody physics

Unity assumes that, by default, a GameObject does not need to be evaluated for physics every frame. Unity uses NVIDIA’s PhysX engine for its physics simulations. To get any calculated physics responses, the GameObject needs a Rigidbody component added.

By adding the Rigidbody component to a GameObject, you add the properties seen in the inspector in Figure 1.7 below.

Figure 1.7: Rigidbody

One Unity unit of mass is equal to 1 kg of mass. Mass affects the physics calculations that happen upon collisions. Drag acts like air resistance, reducing the velocity over time. Angular drag is similar, but constrained to rotation speed only. Use Gravity turns gravity on or off for this Rigidbody; gravity defaults to standard Earth gravity, (0, -9.81, 0), so that the mass values make sense! Sometimes you may not want Earth gravity, in which case you can change the physics settings to make gravity whatever you would like.
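
As a rough sketch, with values chosen purely for illustration, this is how the same Rigidbody properties, and the project-wide gravity, can be set from a script:

using UnityEngine;

// A sketch of configuring the Rigidbody properties described above from code.
public class RigidbodySetup : MonoBehaviour
{
    void Start()
    {
        Rigidbody body = gameObject.AddComponent<Rigidbody>();

        body.mass = 2f;           // 2 kg
        body.drag = 0.5f;         // slows linear velocity over time
        body.angularDrag = 0.05f; // slows rotation over time
        body.useGravity = true;   // pulled down by the project's gravity setting

        // Project-wide gravity can be changed if Earth gravity isn't what you want.
        Physics.gravity = new Vector3(0f, -3.7f, 0f); // roughly Mars-like, for example
    }
}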

A thorough explanation of Rigidbody will be worked through in Chapter 7, Rigidbodies and Physics Interaction. We will be using Rigidbodies in the creation of characters as well as environments and interactive gameplay.

Collision detection

A GameObject with a Rigidbody but without any colliders will not fully utilize the physics simulation; with gravity turned on, it will simply fall through the world. There are quite a few colliders to play with to best suit your game’s needs. In Figure 1.8 below, you can see that there are separate colliders for 2D. These use a different physics system from 3D. If your game is 2D only, make sure to use the 2D colliders.

Figure 1.8: Collider component options

You are also welcome to add multiple colliders—with the basic options seen in Figure 1.8 above—to an object to best suit the shape of the GameObject. It is very common to see colliders on empty GameObjects that are children of the primary object, to allow the easy transformation of the colliders. We will see this in practice in Chapter 4, Characters, and Chapter 5, Environment.
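
A minimal sketch of that child-collider pattern, with names and sizes of our own choosing, might look like this:

using UnityEngine;

// A sketch of placing a collider on an empty child object so it can be
// positioned and sized independently of the parent GameObject.
public class ChildColliderSetup : MonoBehaviour
{
    void Start()
    {
        GameObject colliderHolder = new GameObject("Collider");
        colliderHolder.transform.SetParent(transform, false);
        colliderHolder.transform.localPosition = new Vector3(0f, 0.5f, 0f);

        BoxCollider box = colliderHolder.AddComponent<BoxCollider>();
        box.size = new Vector3(1f, 2f, 1f); // a taller box, e.g. for a character's body
    }
}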

The Unity interface

The interface for Unity is separated into several major components. In Figure 1.9 below, we will go over the scene (red) and the items within its interface as well as how to manipulate their properties in the inspector (orange). Then we will go into items that aren’t active in the scene but are available to add in the project window (yellow). Finally, we will go over the game view (green) and the package manager (separate from Figure 1.9).

Figure 1.9: Overall interface

Scene view and hierarchy

The scene view and hierarchy work in tandem. The hierarchy lists the GameObjects that make up the scene and will be rendered when the game is played. The scene view allows you to manipulate those GameObjects and their values in real time. Furthermore, when the editor is in Play mode, the game can make changes to the GameObjects in the hierarchy.

When GameObjects are manipulated in Play mode, including any changes you make yourself in the scene view, they will revert to the state they were in before play started as soon as you stop the game.

Figure 1.10: Scene and hierarchy

In Figure 1.10 above, there is a lot of information that can be seen right away. On the left, in the hierarchy, you can see that there are objects in the scene. These objects all have a transform, which places them in the world. If you double-click an item, or click on it, put your mouse over the scene view, and then press F, the viewport will focus on that GameObject, centering it in the scene view.

When you have an item selected, you will see a tool with colored arrows at the object’s pivot point, usually the center of the object. This tool allows you to position the GameObject in space. You can also move the object along a plane by selecting the little square between two of the axes.

In the upper right of Figure 1.10, you will see a camera gizmo. This little gizmo allows you to easily orient the viewport camera to the front, sides, top, or bottom, or to switch between an isometric and a perspective view, with a single click.

Now that you have seen the item in the scene, selected by left-clicking in the scene or the hierarchy, you may want to change some properties or add components to that GameObject. This is where the inspector comes into play.

Inspector

When you select a GameObject in the scene or hierarchy, the inspector updates to show the properties and components you can change on that GameObject.

Figure 1.11: Inspector window

The inspector window in Figure 1.11 shows a good amount of information about the selected item. At the top, the name is Cube, and the blue cube icon to the left denotes that it is a prefab. You are able to make changes to the prefab itself by clicking the Open button just below the name. This will open a new scene view that shows the prefab only. When you make changes to the prefab, the changes apply to every instance of that prefab in any scene that references it.

The transform component shows the position, rotation, and scale of the prefab in the scene.

The mesh filter shows the vertices, edges, and faces that make up that polygon.

Below that is the mesh renderer. This component renders the mesh referenced by the mesh filter component. Here we can set the material, as well as other options that pertain to this item’s specific lighting and probes, which we will cover in Chapter 12, Final Touches.

Now, below this is a collider and a Rigidbody. These work in tandem and help this object to react to physics in real time, according to the settings on the components.
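
From a script, the same components you see stacked in the inspector can be fetched with GetComponent; the snippet below is a small sketch of ours, not code from the project:

using UnityEngine;

// A sketch of reaching the components shown in the inspector at runtime.
public class InspectorAccess : MonoBehaviour
{
    void Start()
    {
        // GetComponent returns the component of that type on this GameObject, or null.
        Rigidbody body = GetComponent<Rigidbody>();
        BoxCollider box = GetComponent<BoxCollider>();

        if (body != null)
        {
            body.mass = 5f; // tweak the same value you would edit in the inspector
        }

        if (box != null)
        {
            box.isTrigger = false;
        }
    }
}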

We’ve talked a lot about items in the scene and their properties, but where are they housed outside of the scene if they’re only referenced items? The Project window will answer this question.

The Project window

Here you will find assets that will be instanced in the scene or used as a component to fully realize the game you are building.

Figure 1.12: Project window

This window is the physical representation of the assets that get referenced in your scenes. All of the items in the Assets folder seen in Figure 1.12 are physically on your hard drive. Unity makes meta files that house all of the properties of the items.

The interesting thing about having the raw files in the Project window is that you can make changes to the items, and when you focus on the Unity project again (click on the Unity app), it will readjust the meta files and reload the items in the scene. This means you can iterate on scripts and art faster!

We’ve looked at the GameObjects in the scene, placed them by manipulating the transforms, and know where the GameObjects were referenced from. Now we should look at the game view to know how the game itself looks.

Game view

The game view is similar to the scene view; however, it follows the rules that you have built into the scene. The game will automatically render scene content through the main camera unless you define a different camera to render through.

Figure 1.13: Game view

You can see that this looks very similar to the scene window, but the top has different options. At the top left, we can see the Display dropdown. This allows us to change cameras if we have multiple in the scene. The aspect ratio dropdown is to the right of that, which is helpful for targeting certain devices. Scale, to the right of the aspect ratio, is helpful to quickly make the window larger or zoom in for debugging. Maximize On Play will maximize the game view on play to take advantage of the full screen. Mute Audio mutes the game’s audio. Stats will give a small overview of the statistics in the game view.

Later on in this project, during optimization, we will go through profiling for a much more in-depth way to look at what may be causing issues within the gameplay in terms of memory usage and other optimization opportunities.

Figure 1.14: Game statistics

Continuing on to the right is Gizmos. This is a set of items that show in the game view in Figure 1.14, which you might not want to see. In this menu, you are able to turn them off or on depending on your needs.

Package Manager

Your Unity ID will house the packages you’ve bought from the Unity Asset Store as well as the packages you may have on your hard drive or GitHub! You can use the package manager to import the packages into your project.

You can get to these packages under Window > Package Manager as seen in Figure 1.15 below.

Figure 1.15: Package Manager path

After you open the package manager, you will initially be shown what packages are in the project. You can change the top-left dropdown to see what is standard in Unity or what packages you have bought in the Unity Asset Store.

Figure 1.16: Package Manager

By choosing Unity Registry, you’ll see a list of the Unity-tested packages that come free and are part of the Unity platform, available if you need them. You can read up on every package in the documentation provided via the View documentation link on the right-hand side when you click on a package on the left.

If you select In Project, it will show you what packages are already installed with the current project that is loaded. This is helpful when you want to uninstall a package that may not be needed.

My Assets lists the assets that you’ve bought previously and that are associated with the Unity ID you are signed in with.

Built-in packages come standard with any project. You may need to enable or disable a built-in package depending on what your needs are. Explore them and disable what is not needed; a tidy project now leads to less optimization work later.

Essential Unity concepts

In the first section, we already went over some Unity concepts. We will go over them in a bit more detail here, now that you’ve read about where several of them are used. Unity takes a very modular approach to the items that are housed within the game development environment.

Assets

Unity treats every file as an asset: a 3D model, a texture file, a sprite, a particle system, and so on. In your project, you will have an Assets folder as the base folder to house all of your project items. These could be textures, 3D models, particle systems, materials, shaders, animations, sprites, and the list goes on. As we add more to our project, the Assets folder should stay organized and ready to grow. It is strongly recommended to keep your folder structure organized so that you or your team aren’t wasting time trying to find that one texture that was left in a random folder by accident.

Scenes

A scene houses all of the gameplay logic, GameObjects, cinematics, and everything else that your game will reference to render or interact with.

Scenes are also used to cut gameplay up into sections to bring down load times. If you imagine trying to load every single asset of a modern game every time you started it up, it would take far too much precious gaming time.

GameObjects

Most assets that are referenced in a scene will be a GameObject (GO). There are some instances in which an asset can only be a component of a GO. The one common factor that you will see with all GOs is that they have the Transform component. As we read at the beginning of this chapter, a transform holds the local position, rotation, and scale, and world transforms are calculated from these by following the hierarchy. GOs can have a long list of components attached to provide functionality, or data for scripts to use as your mechanics grow.

Components

GOs have the ability to house multiple pieces of functionality attached as “components.” Each component has its own unique properties. The entire list of components you can add is fairly extensive, as you can see in Figure 1.17 below.

Figure 1.17: Component list

Each of these sections has smaller subsections. We will go over quite a few of them in this book. When you add an asset to the scene hierarchy that requires certain components, Unity will add them by default. An example of this default behavior is when you drag a 3D mesh into the hierarchy: the GO will automatically have a mesh renderer component attached to it.

Scripts

One component that is often used on a GameObject is a script. This is where all of the logic and mechanics will be built onto your GameObjects. Whether you want to change the color, jump, change the time of day, or collect an item, you will need to add that logic in a script on the object.

In Unity, the primary language is C# (pronounced “C sharp”). This is a strongly typed programming language, meaning that there must be a type assigned to any variable that is being manipulated.
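
As a small, self-contained sketch (not a script from our project), here is what a typical MonoBehaviour looks like; notice that every variable carries an explicit type:

using UnityEngine;

// A hypothetical script: press space to change the object's color.
public class ColorChanger : MonoBehaviour
{
    [SerializeField] private Color targetColor = Color.blue; // editable in the inspector

    private Renderer cachedRenderer;

    void Start()
    {
        cachedRenderer = GetComponent<Renderer>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            cachedRenderer.material.color = targetColor;
        }
    }
}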

We will be using scripts in a multitude of ways and I know you are excited to get right into coding, but first, we need to get into other Unity standard processes.

Prefabs

Utilizing the modular and strong object-oriented nature of Unity, we can put together a grouping of items with default values set on their components, which can be instanced in the scene at any time and house their own values.

To make a prefab, you drag a GameObject from the hierarchy in the scene into the asset browser. This creates a new prefab and also turns that GameObject into an instance of the newly created prefab. The GameObject will also turn blue by default in the hierarchy, as seen in Figure 1.18.

Figure 1.18: Prefab in hierarchy
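
Prefabs can also be instanced from code. Here is a hedged sketch; the cubePrefab field is a placeholder you would fill by dragging the prefab asset onto it in the inspector:

using UnityEngine;

// A sketch of instancing a prefab from a script.
public class PrefabSpawner : MonoBehaviour
{
    [SerializeField] private GameObject cubePrefab; // hypothetical prefab reference

    void Start()
    {
        // Each instance starts from the prefab's default values but keeps its own afterward.
        Instantiate(cubePrefab, new Vector3(1f, 1.5f, 2f), Quaternion.identity);
    }
}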

Packages

To take the modular components to a whole new level, Unity can take a package with all of its dependencies and export them out so you can bring them into other projects! Even better, you can sell your packages to other game developers from the Unity Asset Store!

Now that you have a solid foundation in 3D and Unity terms, and have toured the most common pieces of the Unity interface, let’s recap what we’ve covered in this chapter.

Summary

Together, we went over several key areas to begin your journey in game development. In this chapter, we laid the foundation for what is to come by going over some fundamental features of three primary topics. For the third dimension, we went over the coordinate system, vectors, cameras, 3D meshes, and the basics of Rigidbody physics and collision detection. This was enough of the basics to allow us to get into Unity concepts, such as assets and GameObjects, followed by scripting in C# and prefab basics. To end this chapter, we went through a virtual tour of the Unity interface—scenes, the hierarchy, inspectors, and the package manager.

In the next chapter, we will be going over design and prototyping fundamentals. This will allow you to follow along while we describe our thought processes for the project being created throughout this book. It will also lay the foundational knowledge for you to follow when you make your own projects, following your completion of this book.
