3 Adding enemies and projectiles to the 3D game

This chapter covers

  • Taking aim and firing, both for the player and for enemies

  • Detecting and responding to hits

  • Making enemies that wander around

  • Spawning new objects in the scene

The movement demo from the previous chapter was pretty cool but still not really a game. Let’s turn that movement demo into a first-person shooter. If you think about what else we need now, it boils down to the ability to shoot and having things to shoot at.

First, we’re going to write scripts that enable the player to shoot objects in the scene. Then, we’re going to build enemies to populate the scene, including code to both wander around aimlessly and react to being hit. Finally, we’re going to enable the enemies to fight back, emitting fireballs at the player. None of the scripts from chapter 2 need to change; instead, we’ll add scripts to the project—scripts that handle the additional features.

I’ve chosen a first-person shooter for this project for a couple of reasons. One is simply that FPS games are popular: people like shooting games, so let’s make a shooting game. A subtler reason has to do with the techniques you’ll learn; this project is a great way to learn about several fundamental concepts in 3D simulations. For example, shooting games are a great way to teach raycasting. In a bit, we’ll get into the specifics of what that is, but for now, you need to know only that it’s a useful concept for many tasks in 3D simulations. Although raycasting is useful in a wide variety of situations, it just so happens that using raycasting makes the most intuitive sense for shooting.

Creating wandering targets to shoot at gives us a great excuse to explore code for computer-controlled characters, as well as to use techniques for sending messages and spawning objects. In fact, this wandering behavior is another place that raycasting is valuable, so we’re already going to be looking at a different application of the technique after having first learned about it with shooting. Similarly, the approach to sending messages that’s demonstrated in this project is also useful elsewhere. In future chapters, you’ll see other applications for these techniques, and even within this one project we’ll go over alternative situations.

Ultimately, we’ll approach this project one new feature at a time, with the game playable at every step but always with an obvious missing piece to work on next. This road map breaks the work into small, understandable changes, adding only one new feature at a time:

  1. Write code enabling the player to shoot into the scene.

  2. Create static targets that react to being hit.

  3. Make the targets wander around.

  4. Spawn the wandering targets automatically.

  5. Enable the targets/enemies to shoot fireballs at the player.

NOTE This chapter’s project assumes you already have a first-person movement demo to build on. We created a movement demo in chapter 2, but if you skipped straight to this chapter, you will need to download the sample files for chapter 2.

3.1 Shooting via raycasts

The first new feature to introduce into the 3D demo is shooting. Looking around and moving are certainly crucial features for a first-person shooter, but it’s not a game until players can affect the simulation and apply their skills. Shooting in 3D games can be implemented with a few different approaches; one of the most important is raycasting.

3.1.1 What is raycasting?

As the name indicates, raycasting casts a ray into the scene. Clear, right? Well, okay, so what exactly is a ray?

DEFINITION A ray is an imaginary or invisible line in the scene that starts at a point of origin and extends out in a specific direction.

In raycasting, you create a ray and then determine what intersects it. Figure 3.1 illustrates the concept. Consider what happens when you fire a bullet from a gun: the bullet starts at the position of the gun and then flies forward in a straight line until it hits something. A ray is analogous to the path of the bullet, and raycasting is analogous to firing the bullet and seeing what it hits.


Figure 3.1 A ray is an imaginary line, and raycasting is finding where that line intersects.

As you can imagine, the math behind raycasting often gets complicated. Not only is it tricky to calculate the intersection of a line with a 3D plane, but you need to do that for all polygons of all mesh objects in the scene (remember, a mesh object is a 3D visual constructed from lots of connected lines and shapes). Fortunately, Unity handles the difficult math behind raycasting, but you still have to worry about higher-level concerns such as where the ray is being cast from and why.
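As a taste of the math Unity is handling for you, a ray can be described as a start point plus a direction scaled by a distance t, and intersecting it with a single flat plane boils down to solving for t. The following plain C# sketch (hypothetical Vec3 and RayMath types, no Unity API) shows that one calculation; Unity performs something like this against every relevant polygon in the scene:

```csharp
using System;

// Minimal stand-in vector type, not Unity's Vector3.
public struct Vec3 {
    public float x, y, z;
    public Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    public static float Dot(Vec3 a, Vec3 b) => a.x * b.x + a.y * b.y + a.z * b.z;
    public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.x + b.x, a.y + b.y, a.z + b.z);
    public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.x - b.x, a.y - b.y, a.z - b.z);
    public static Vec3 operator *(float s, Vec3 a) => new Vec3(s * a.x, s * a.y, s * a.z);
}

public static class RayMath {
    // Plane: all points p where Dot(normal, p - planePoint) == 0.
    // Ray: origin + t * direction, with t >= 0.
    // Returns true (and the hit point) if the ray meets the plane in front of it.
    public static bool RayPlane(Vec3 origin, Vec3 dir, Vec3 planePoint, Vec3 normal, out Vec3 hit) {
        hit = origin;
        float denom = Vec3.Dot(normal, dir);
        if (Math.Abs(denom) < 1e-6f) return false;   // ray is parallel to the plane
        float t = Vec3.Dot(normal, planePoint - origin) / denom;
        if (t < 0) return false;                     // plane is behind the ray's origin
        hit = origin + t * dir;
        return true;
    }
}
```

For example, a ray starting at (0, 2, 0) and pointing straight down hits the floor plane y = 0 at the point (0, 0, 0). Unity's real implementation must do this for every triangle of every mesh collider the ray might cross, which is why it's nice that the engine does it for us.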

In this project, the answer to the latter question (why) is to simulate a bullet being fired into the scene. For a first-person shooter, the ray generally starts at the camera position and then extends out through the center of the camera view. In other words, you’re checking for objects straight in front of the camera; Unity provides commands to make that task simple. Let’s look at these commands.

3.1.2 Using the ScreenPointToRay command for shooting

You’ll implement shooting by projecting a ray that starts at the camera and extends forward through the center of the view. Unity provides the ScreenPointToRay() method to perform this action.

Figure 3.2 illustrates what happens when this method is invoked. It creates a ray that starts at the camera and projects at an angle, passing through the given screen coordinates. Usually, the coordinates of the mouse position are used for mouse picking (selecting the object under the mouse), but for first-person shooting, the center of the screen is used. Once you have a ray, it can be passed to the Physics.Raycast() method to perform raycasting using that ray.


Figure 3.2 ScreenPointToRay() projects a ray from the camera through the given screen coordinates.

Let’s write code that uses the methods we just discussed. In Unity, create a new C# script, call it RayShooter, attach that script to the camera (not the player object), and then write the code from this listing in it.

Listing 3.1 RayShooter script to attach to the camera

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
 
public class RayShooter : MonoBehaviour {
  private Camera cam;
 
  void Start() {
    cam = GetComponent<Camera>();                                          
  }
 
  void Update() {
    if (Input.GetMouseButtonDown(0)) {                                     
      Vector3 point = new Vector3(cam.pixelWidth/2, cam.pixelHeight/2, 0); 
      Ray ray = cam.ScreenPointToRay(point);                               
      RaycastHit hit;
      if (Physics.Raycast(ray, out hit)) {                                 
        Debug.Log("Hit " + hit.point);                                     
      }
    }
  }
}

Access other components attached to the same object.

Respond to the left (first) mouse button.

The middle of the screen is half its width and height.

Create the ray at that position by using ScreenPointToRay().

The raycast fills a referenced variable with information.

Retrieve coordinates where the ray hit.

You should note several things in this code listing. First, the Camera component is retrieved in Start(), just like the CharacterController in the previous chapter. Then, the rest of the code is put in Update() because it needs to check the mouse repeatedly, as opposed to just one time. The Input.GetMouseButtonDown() method returns true or false, depending on whether the mouse has been clicked, so putting that command in a conditional means the enclosed code runs only when the mouse has been clicked. You want to shoot when the player clicks the mouse—hence the conditional check of the mouse button.

A vector is created to define the screen coordinates for the ray (remember that a vector is several related numbers stored together). The camera’s pixelWidth and pixelHeight values give you the size of the screen, so dividing those values in half gives you the center of the screen. Although screen coordinates are 2D, with only horizontal and vertical components and no depth, a Vector3 was created because ScreenPointToRay() requires that data type (presumably because calculating the ray involves arithmetic on 3D vectors). ScreenPointToRay() was called with this set of coordinates, resulting in a Ray object (a code object, not a game object; the two are easy to confuse).

The ray is then passed to the Raycast() method, but it’s not the only object passed in. There’s also a RaycastHit data structure; RaycastHit is a bundle of information about the intersection, including where the intersection happened and what object was intersected. The C# keyword out ensures that the data structure manipulated inside the method is the same object that exists outside the method, rather than the two scopes working on separate copies.
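If C#’s out keyword is new to you, here’s a minimal standalone illustration (nothing Unity-specific; TryHalve is a made-up method): like Physics.Raycast(), the method returns a bool while also writing a second result into the caller’s variable.

```csharp
public static class OutDemo {
    // Returns true/false like Physics.Raycast(), while also filling in
    // a second result through the out parameter.
    public static bool TryHalve(int value, out int half) {
        half = value / 2;        // writes into the caller's variable, not a copy
        return value % 2 == 0;   // true only for even numbers
    }
}
```

Calling `int h; bool even = OutDemo.TryHalve(10, out h);` leaves even as true and h as 5. Note that the caller’s variable doesn’t need to be initialized first, which is why listing 3.1 can declare hit and immediately pass it to Raycast().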

With those parameters in place, the Physics.Raycast() method can do its work. This method checks for intersections with the given ray, fills in data about the intersection, and returns true if the ray hit anything. Because a Boolean value is returned, this method can be put in a conditional check, just as you used Input.GetMouseButtonDown() earlier.

For now, the code emits a console message to indicate when an intersection occurred. This console message displays the 3D coordinates of the point where the ray hit (the x, y, z values we discussed in chapter 2). But it can be hard to visualize where exactly the ray hit; similarly, it can be hard to tell where the center of the screen is (the location where the ray shoots through). Let’s add visual indicators to address both problems.

3.1.3 Adding visual indicators for aiming and hits

Our next step is to add two kinds of visual indicators: an aiming spot at the center of the screen and a mark in the scene where the ray hit. For a first-person shooter, the latter is usually bullet holes, but for now, you’re going to put a blank sphere on the spot (and use a coroutine to remove the sphere after 1 second). Figure 3.3 shows what you’ll see.

DEFINITION Coroutines are a way of handling tasks that execute incrementally over time. In contrast, most functions make the program wait until they finish.

First, let’s add indicators to mark where the ray hits. Listing 3.2 shows the script after making this addition. Run around the scene, shooting; it’s pretty fun seeing the sphere indicators!


Figure 3.3 Shooting repeatedly after adding visual indicators for aiming and hits

Listing 3.2 RayShooter script with sphere indicators added

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
 
public class RayShooter : MonoBehaviour {
  private Camera cam;
 
  void Start() {
    cam = GetComponent<Camera>();
  }
 
  void Update() {                                             
    if (Input.GetMouseButtonDown(0)) {
      Vector3 point = new Vector3(cam.pixelWidth/2, cam.pixelHeight/2, 0);
      Ray ray = cam.ScreenPointToRay(point);
      RaycastHit hit;
      if (Physics.Raycast(ray, out hit)) {
        StartCoroutine(SphereIndicator(hit.point));           
      }
    }
  }
 
  private IEnumerator SphereIndicator(Vector3 pos) {          
    GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
    sphere.transform.position = pos;
 
    yield return new WaitForSeconds(1);                       
 
    Destroy(sphere);                                          
  }
}

This function is mostly the same raycasting code from listing 3.1.

Launch a coroutine in response to a hit.

Coroutines use IEnumerator functions.

The yield keyword tells coroutines where to pause.

Remove this GameObject and clear its memory.

The additions are the new SphereIndicator() method, plus a one-line modification in the existing Update() method. SphereIndicator() creates a sphere at a point in the scene and then removes that sphere a second later. Calling it from the raycasting code ensures that visual indicators appear exactly where the ray hit. The function is declared with the IEnumerator return type, and that type is tied in with the concept of coroutines.

Technically, coroutines aren’t asynchronous (asynchronous operations don’t stop the rest of the code from running; think of downloading an image in the script of a website), but through clever use of enumerators, Unity makes coroutines behave similarly to asynchronous functions. The secret sauce in coroutines is the yield keyword; that keyword causes the coroutine to temporarily pause, handing back the program flow and picking up again from that point in the next frame. In this way, coroutines seemingly run in the background of a program, through a repeated cycle of running partway and then returning to the rest of the program.

As the name indicates, StartCoroutine() sets a coroutine in motion. Once a coroutine is started, it keeps running until the function is finished, pausing along the way. Note the subtle but significant point that the method passed to StartCoroutine() has a set of parentheses following the name: this syntax means you’re calling that function, as opposed to passing its name. The called function runs until it hits a yield command, at which point it pauses.

SphereIndicator() creates a sphere at a specific point, pauses for the yield statement, and then destroys the sphere after the coroutine resumes. The length of the pause is controlled by the value returned at yield. A few types of return values work in coroutines, but the most straightforward is to return a specific length of time to wait. Returning WaitForSeconds(1) causes the coroutine to pause for 1 second. Create a sphere, pause for 1 second, and then destroy the sphere: that sequence sets up a temporary visual indicator.
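To demystify yield a bit, here’s a plain C# sketch (no Unity API; the names Countdown and RunAllSteps are made up for illustration) showing how an IEnumerator function pauses at each yield and resumes where it left off. Stepping the enumerator manually is roughly what Unity’s engine does with your coroutine once per frame:

```csharp
using System.Collections;
using System.Collections.Generic;

public static class CoroutineDemo {
    // A coroutine-style function: it runs to the first yield, pauses,
    // and resumes from that exact point the next time it's advanced.
    public static IEnumerator Countdown(List<string> log) {
        log.Add("started");
        yield return 1;             // pause here, handing back control
        log.Add("resumed");
        yield return 2;
        log.Add("finished");
    }

    // Roughly what Unity does with a coroutine: advance it one step
    // per "frame" until it reports there's nothing left to run.
    public static List<string> RunAllSteps() {
        var log = new List<string>();
        IEnumerator routine = Countdown(log);   // nothing runs yet; the body is lazy
        while (routine.MoveNext()) {            // one call per "frame"
            log.Add("yielded " + routine.Current);
        }
        return log;
    }
}
```

RunAllSteps() produces the trace started, yielded 1, resumed, yielded 2, finished: the function body is interleaved with the caller’s loop, which is exactly the “run partway, then return to the rest of the program” cycle described above. Unity adds one more trick on top of this: when the yielded value is a WaitForSeconds, the engine simply skips calling MoveNext() until that much time has passed.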

Listing 3.2 gave you indicators to mark where the ray hits. But you also want an aiming spot in the center of the screen.

Listing 3.3 Visual indicator for aiming

...
void Start() {
  cam = GetComponent<Camera>();
 
  Cursor.lockState = CursorLockMode.Locked;          
  Cursor.visible = false;                            
}
 
void OnGUI() {
  int size = 12;                                     
  float posX = cam.pixelWidth/2 - size/4;
  float posY = cam.pixelHeight/2 - size/2;
  GUI.Label(new Rect(posX, posY, size, size), "*");  
}
...

Hide the mouse cursor at the center of the screen.

This is just the rough size of this font.

The GUI.Label() command displays text onscreen.

Another new method has been added to the RayShooter class: OnGUI(). Unity comes with both a basic UI system and a more advanced one. Because the basic system has a lot of limitations, we’ll build a more flexible UI with the advanced system in future chapters, but for now, it’s much easier to display a point in the center of the screen by using the basic UI. Much like Start() and Update(), every MonoBehaviour automatically responds to an OnGUI() method. That method runs every frame right after the 3D scene is rendered, so everything drawn during OnGUI() appears on top of the 3D scene (imagine stickers applied to a painting of a landscape).

DEFINITION Render is the action of the computer drawing the pixels of the 3D scene. Although the scene is defined using x-, y-, and z-coordinates, the actual display on your monitor is a 2D grid of colored pixels. To display the 3D scene, the computer needs to calculate the color of all the pixels in the 2D grid; running that algorithm is referred to as rendering.

Inside OnGUI(), the code defines 2D coordinates for the display (shifted slightly to account for the size of the label) and then calls GUI.Label(). That method displays a text label. Because the string passed to the label is an asterisk (*), you end up with that character displayed in the center of the screen. Now it’s much easier to aim in our nascent FPS game!

Listing 3.3 also adds cursor settings to the Start() method. All that’s happening is that values are being set for cursor visibility and locking. The script will work perfectly fine if you omit the cursor settings, but they make first-person controls feel a bit smoother: the mouse cursor stays at the center of the screen and, to avoid cluttering the view, turns invisible, reappearing only when you press Esc.

WARNING Always remember that you can press Esc to unlock the mouse cursor in order to move it away from the middle of the Game view. While the mouse cursor is locked, it’s impossible to click the Play button and stop the game.

That wraps up the first-person shooting code . . . well, that wraps up the player’s end of the interaction, anyway, but we still need to take care of targets.

3.2 Scripting reactive targets

Being able to shoot is all well and good, but at the moment, players don’t have anything to shoot at. We’re going to create a target object and give it a script that will respond to being hit. Or rather, we’ll slightly modify the shooting code to notify the target when hit, and then the script on the target will react when notified.

3.2.1 Determining what was hit

First, you need to create a new object to shoot at. Create a new cube object (GameObject > 3D Object > Cube) and then scale it up vertically by setting the Y scale to 2 and leaving X and Z at 1. Position the new object at 0, 1, 0 to put it on the floor in the middle of the room, and name the object Enemy.

Create a new script called ReactiveTarget and attach that to the newly created box. Soon, you’ll write code for this script, but leave it as the default for now; you’re creating this script file ahead of time because the next code listing requires it to exist in order to compile.

Go back to RayShooter and modify the raycasting code according to the following listing. Run the new code and shoot the new target; debug messages appear in the console instead of sphere indicators in the scene.

Listing 3.4 Detecting whether the target object was hit

...
if (Physics.Raycast(ray, out hit)) {
  GameObject hitObject = hit.transform.gameObject;                   
  ReactiveTarget target = hitObject.GetComponent<ReactiveTarget>();
  if (target != null) {                                              
    Debug.Log("Target hit");
  } else {
    StartCoroutine(SphereIndicator(hit.point));
  }
}
...

Retrieve the object the ray hit.

Check for the ReactiveTarget component on the object.

Notice that you retrieve the object from RaycastHit, just as the coordinates were retrieved for the sphere indicators. Technically, the hit information doesn’t return the game object hit; it indicates the Transform component hit. You can then access gameObject as a property of transform.

Then, you use the GetComponent() method on the object to check whether it’s a reactive target (that is, whether it has the ReactiveTarget script attached). As you saw previously, that method returns a component of a specific type that’s attached to the GameObject. If no component of that type is attached to the object, GetComponent() returns null. You check whether null was returned and run different code in each case.

If the hit object is a reactive target, the code emits a debug message instead of starting the coroutine for sphere indicators. Now let’s inform the target object about the hit so it can react.

3.2.2 Alerting the target that it was hit

All that’s needed in the code is a one-line change, as shown next.

Listing 3.5 Sending a message to the target object

...
if (target != null) {
  target.ReactToHit();        
} else {
  StartCoroutine(SphereIndicator(hit.point));
}
...

Call a method of the target instead of just emitting the debug message.

Now the shooting code calls a method of the target, so let’s write that target method. In the ReactiveTarget script, write in the code from the next listing. The target object will fall over and disappear when you shoot it; refer to figure 3.4.

Listing 3.6 ReactiveTarget script that dies when hit

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
 
public class ReactiveTarget : MonoBehaviour {
 
  public void ReactToHit() {                
    StartCoroutine(Die());
  }
 
  private IEnumerator Die() {               
    this.transform.Rotate(-75, 0, 0);
    
    yield return new WaitForSeconds(1.5f);
    
    Destroy(this.gameObject);               
  }
}

Method called by the shooting script

Topple the enemy, wait 1.5 seconds, and then destroy the enemy.

A script can destroy itself (just as it could a separate object).

Most of this code should be familiar to you from previous scripts, so we’ll go over it only briefly. First, you define the ReactToHit() method, because that’s the method name called in the shooting script. This method starts a coroutine that’s similar to the sphere indicator code from earlier; the main difference is that it operates on the object of this script rather than creating a separate object. Expressions like this.gameObject refer to the GameObject that this script is attached to (and the this keyword is optional, so code could refer to gameObject without anything in front of it).

The first line of the coroutine function makes the object tip over. As discussed in chapter 2, rotations can be defined as an angle around each of the three coordinate axes, x, y, and z. Because we don’t want the object to rotate side to side, leave Y and Z as 0 and assign an angle to the X rotation.


Figure 3.4 The target object falling over when hit

NOTE The transform is applied instantly, but you may prefer seeing the movement when objects topple over. Once you start looking beyond this book for more advanced topics, you might want to look up tweens, systems used to make objects move smoothly over time.

The second line of the method uses the yield keyword that’s so significant to coroutines, pausing the function there and returning the number of seconds to wait before resuming. Finally, the game object destroys itself in the last line of the function. Destroy(this.gameObject) is called after the wait time, just as the code called Destroy(sphere) before.

WARNING Be sure to call Destroy() on this.gameObject and not simply this! Don’t get confused between the two; this refers only to this script component, whereas this.gameObject refers to the object the script is attached to.

The target now reacts to being shot—great! But it doesn’t do anything else on its own, so let’s add more behavior to make this target a proper enemy character.

3.3 Basic wandering AI

A static target isn’t terribly interesting, so let’s write code that’ll make the enemy wander around. Code for wandering around is pretty much the simplest example of artificial intelligence (AI), or computer-controlled entities. In this case, the entity is an enemy in a game, but it could also be a robot in the real world or a program that plays chess, for example.

3.3.1 Diagramming how basic AI works

Multiple approaches to AI exist (seriously, AI is a major area of research for computer scientists). For our purposes, we’ll stick with a simple one. As you become more experienced and your games get more sophisticated, you’ll probably want to explore the various approaches to AI.

Figure 3.5 depicts the basic process. In every frame, the AI code will scan around its environment to determine whether it needs to react. If an obstacle appears in its way, the enemy turns to face a different direction. Regardless of whether the enemy needs to turn, it will always move forward steadily. As such, the enemy will ping-pong around the room, always moving forward and turning to avoid walls.


Figure 3.5 Basic AI: cyclical process of moving forward and avoiding obstacles

The code will look pretty familiar, because it moves enemies forward by using the same commands as moving the player forward. The AI code will also use raycasting, similar to, but in a different context from, shooting.

3.3.2 “Seeing” obstacles with a raycast

As you saw in the introduction to this chapter, raycasting is useful for multiple tasks within 3D simulations. Shooting is the most easily grasped task, but another task raycasting is useful for is scanning the surroundings. Because scanning is a step in our AI loop, raycasting shows up in the AI code, too.

Earlier, you created a ray that originated from the camera, because that’s where the player was looking from. This time, you’ll create a ray that originates from the enemy. The first ray shot out through the center of the screen, but this time the ray will shoot forward in front of the character; figure 3.6 illustrates this. Then, just as the shooting code used RaycastHit information to determine whether anything was hit and where, the AI code will use RaycastHit information to determine whether anything is in front of the enemy and, if so, how far away.


Figure 3.6 Using raycasting to “see” obstacles

One difference between raycasting for shooting and raycasting for AI is the radius of the ray. For shooting, the ray was treated as infinitely thin, but for AI, the ray will be treated as having a large cross section. In terms of the code, this means using the SphereCast() method instead of Raycast(). The reason for this difference is that bullets are tiny, whereas checking for obstacles in front of the character requires us to account for the character’s width.

Create a new script called WanderingAI, attach that to the target object (alongside the ReactiveTarget script), and write the code from the next listing. Play the scene now and you should see the enemy wandering around the room; you can still shoot the target, and it will react the same way as before.

Listing 3.7 Basic WanderingAI script

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
 
public class WanderingAI : MonoBehaviour {
  public float speed = 3.0f;                                   
  public float obstacleRange = 5.0f;
 
  void Update() {
    transform.Translate(0, 0, speed * Time.deltaTime);         
 
    Ray ray = new Ray(transform.position, transform.forward);  
    RaycastHit hit;
    if (Physics.SphereCast(ray, 0.75f, out hit)) {             
      if (hit.distance < obstacleRange) {
        float angle = Random.Range(-110, 110);                 
        transform.Rotate(0, angle, 0);
      }
    }
  }
}

Values for the speed of movement and the distance at which to react to obstacles

Move forward continuously every frame, regardless of turning.

A ray at the same position and pointing in the same direction as the character

Perform raycasting with a circular volume around the ray.

Turn toward a semi-random new direction.

This listing adds a couple of variables to represent the speed of movement and the distance at which the AI reacts to obstacles. Then, transform.Translate() is added in the Update() method to move forward continuously (including the use of deltaTime for frame rate-independent movement). In Update(), you’ll also see raycasting code that looks a lot like the shooting script from earlier; again, the same technique of raycasting is being used here to see instead of shoot. The ray is created using the enemy’s position and direction, instead of using the camera.

As explained earlier, the raycasting calculation is done using the Physics.SphereCast() method. This method takes a radius parameter to determine how far around the ray to detect intersections, but in every other respect, it’s exactly the same as Physics.Raycast(). The hit information is filled in just as before, and the code checks the hit’s distance property so that the enemy reacts only when it gets near an obstacle (as opposed to a wall across the room).

When the enemy has a nearby obstacle right in front of it, the code rotates the character a semi-random amount toward a new direction. I say semi-random because the values are constrained to the minimum and maximum values that make sense for this situation. Specifically, we use the Random.Range() method, which Unity provides for obtaining a random value between constraints. In this case, the constraints were just slightly beyond an exact left or right turn, allowing the character to turn sufficiently to avoid obstacles.

3.3.3 Tracking the character’s state

One oddity of the current behavior is that the enemy keeps moving forward after falling over from being hit. That’s because, right now, the Translate() method runs every frame no matter what. Let’s make small adjustments to the code to keep track of whether the character is alive—or to put it in another (more technical) way, we want to track the alive state of the character.

Having the code keep track of and respond differently to the current state of the object is a common code pattern in many areas of programming, not just AI. More sophisticated implementations of this approach are referred to as state machines, or possibly even finite-state machines.

DEFINITION A finite-state machine (FSM) is a code structure in which the current state of the object is tracked, well-defined transitions exist between states, and the code behaves differently based on the state.
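As a rough sketch of the pattern (the states and method names here are hypothetical, not code from this project), a tiny FSM in plain C# might look like this:

```csharp
// A minimal finite-state machine: the current state is tracked,
// transitions between states are explicit, and behavior depends on state.
public class EnemyFsm {
    private enum State { Wandering, Chasing, Dead }   // hypothetical states
    private State current = State.Wandering;

    // Well-defined transitions: only certain state changes are allowed.
    public void OnPlayerSpotted() {
        if (current == State.Wandering) current = State.Chasing;
    }
    public void OnKilled() {
        current = State.Dead;   // any state can transition to Dead
    }

    // Behavior branches on the current state.
    public string Act() {
        switch (current) {
            case State.Wandering: return "roam around";
            case State.Chasing:   return "run at player";
            default:              return "do nothing";
        }
    }
}
```

Our WanderingAI script is a degenerate version of this with just two states, alive and dead, which is why a single Boolean is enough.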

We’re not going to implement a full FSM, but it’s no coincidence that a common place to see the initials FSM is in discussions of AI. A full FSM would have many states for the many behaviors of a sophisticated AI application, but in this basic AI, we need to track only whether the character is alive. The next listing adds a Boolean value, isAlive, toward the top of the script, and the code needs occasional conditional checks of that value. With those checks in place, the movement code runs only while the enemy is alive.

Listing 3.8 WanderingAI script with alive state added

...
private bool isAlive;                                  
 
void Start() {
  isAlive = true;                                      
}
 
void Update() {
  if (isAlive) {                                       
    transform.Translate(0, 0, speed * Time.deltaTime);
    ...
  }
}
 
public void SetAlive(bool alive) {                     
  isAlive = alive;
}
...

Boolean value to track whether the enemy is alive

Initialize that value.

Move only if the character is alive.

Public method allowing outside code to affect the “alive” state

The ReactiveTarget script can now tell the WanderingAI script whether the enemy is alive.

Listing 3.9 ReactiveTarget tells WanderingAI when it dies

...
public void ReactToHit() {
    WanderingAI behavior = GetComponent<WanderingAI>();
    if (behavior != null) {                              
        behavior.SetAlive(false);
    }
    StartCoroutine(Die());
}
...

Check if this character has a WanderingAI script; it might not.

AI code structure

The AI code in this chapter is contained within a single class so that learning and understanding it is straightforward. This code structure is perfectly fine for simple AI needs, so don’t be afraid that you’ve done something wrong and that a more complex code structure is an absolute requirement. For more complex AI needs (such as a game with a wide variety of highly intelligent characters), a more robust code structure can help facilitate developing the AI.


As alluded to in chapter 1’s example of composition versus inheritance, sometimes you’ll want to split chunks of the AI into separate scripts. Doing so will enable you to mix and match components, generating unique behavior for each character. Think about the similarities and differences among your characters, and those differences will guide you as you design your code architecture. For example, if your game has some enemies that move by charging headlong at the player and some that slink around in the shadows, you may want to make Locomotion a separate component. Then you can create scripts for both LocomotionCharge and LocomotionSlink, and use different Locomotion components on different enemies.


The exact AI code structure you want depends on the design of your specific game; there’s no one right way to do it. Unity makes it easy to design flexible code architectures like this.
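To make the component idea concrete, here's a minimal plain-C# sketch of the composition pattern, written outside Unity (so it uses an ordinary interface instead of MonoBehaviour; the Locomotion names are the hypothetical ones from the example above):

```csharp
using System;

// Hypothetical locomotion component; each enemy carries one implementation.
public interface ILocomotion {
    string Move();
}

public class LocomotionCharge : ILocomotion {
    public string Move() => "charging straight at the player";
}

public class LocomotionSlink : ILocomotion {
    public string Move() => "slinking through the shadows";
}

// The enemy is composed from whatever locomotion it's given,
// rather than inheriting movement behavior from a base class.
public class Enemy {
    private readonly ILocomotion locomotion;
    public Enemy(ILocomotion locomotion) { this.locomotion = locomotion; }
    public string Step() => "Enemy is " + locomotion.Move();
}
```

In Unity, you'd attach one component or the other in the Inspector rather than passing it to a constructor, but the mix-and-match principle is the same.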

3.4 Spawning enemy prefabs

At the moment, only one enemy is in the scene, and when it dies, the scene is empty. Let’s make the game spawn enemies so that whenever the enemy dies, a new one appears. This is easily done in Unity by using prefabs.

3.4.1 What is a prefab?

Prefabs are a flexible approach to visually defining interactive objects. In a nutshell, a prefab is a fully fleshed-out game object (with components already attached and set up) that doesn’t exist in any specific scene but rather exists as an asset that can be copied into any scene.

This copying can be done manually, to ensure that the enemy object (or other prefab) is the same in every scene. More importantly, though, prefabs can also be spawned from code; you can place copies of the object into the scene by using commands in scripts and not only by doing so manually in the visual editor.

DEFINITION An asset is any file that shows up in the Project view; these could be 2D images, 3D models, code files, scenes, and so on. I mentioned this term briefly in chapter 1 but didn’t emphasize it until now.

A copy of a prefab is called an instance, analogous to instance referring to a specific code object created from a class. Try to keep the terminology straight: prefab refers to the game object existing outside of any scene; instance refers to a copy of the object that’s placed in a scene.

DEFINITION Also analogous to object-oriented terminology, instantiate is the action of creating an instance.
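The analogy maps directly onto plain C#: the class plays the role of the prefab (a template existing outside any scene), and each object created with new plays the role of an instance. A trivial sketch (the names here are made up for illustration):

```csharp
using System;

// Plays the role of the prefab: a template, not itself "in the scene."
public class EnemyTemplate {
    public int health = 2;
}

public static class PrefabAnalogy {
    public static int Demo() {
        // Each `new` plays the role of instantiating: it produces an
        // independent copy, so changing one copy doesn't affect another.
        EnemyTemplate a = new EnemyTemplate();
        EnemyTemplate b = new EnemyTemplate();
        a.health = 0;
        return b.health;  // still 2
    }
}
```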

3.4.2 Creating the enemy prefab

To create a prefab, first create an object in the scene that will become the prefab. Because our enemy object will become a prefab, we’ve already done this first step. Now all we do is drag the object down from the Hierarchy view and drop it in the Project view; this will automatically save the object as a prefab (see figure 3.7).

CH03_F07_Hocking3

Figure 3.7 Drag objects from Hierarchy to Project to create prefabs.

Back in the Hierarchy view, the original object’s name will turn blue to signify that it’s now linked to a prefab. We don’t actually want the object in the scene anymore (we’re going to spawn the prefab, not use the instance already in the scene), so delete the enemy object now. If you want to edit the prefab further, just double-click the prefab in the Project view to open it and then click the back arrow at the top left of the Hierarchy view to close it again.

WARNING The interface for working with prefabs has improved a lot since earlier versions of Unity, but editing prefabs can still cause confusion. For example, after you double-click a prefab, you are not technically in any scene, so remember to click the back arrow in the Hierarchy view when you are done editing. In addition, if you nest prefabs (so that one prefab contains other prefabs), working with them can get tricky.

Now we have the actual prefab object to spawn in the scene, so let’s write code to create instances of the prefab.

3.4.3 Instantiating from an invisible SceneController

Although the prefab itself doesn't exist in the scene, an object must exist in the scene for the enemy-spawning code to attach to. We'll create an empty game object and attach the script to that; the empty object won't be visible in the scene.

TIP The use of empty GameObjects for attaching script components is a common pattern in Unity development. This trick is used for abstract tasks that don’t apply to any specific object in the scene. Unity scripts are intended to be attached to visible objects, but not every task makes sense that way.

Choose GameObject > Create Empty, rename the new object Controller, and ensure that its position is 0, 0, 0. (Technically, the position doesn’t matter because the object isn’t visible, but putting it at the origin will make life simpler if you ever parent anything to it.) Create a script called SceneController.

Listing 3.10 SceneController that spawns the enemy prefab

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
 
public class SceneController : MonoBehaviour {
  [SerializeField] GameObject enemyPrefab;                  
  private GameObject enemy;                                 
 
  void Update() {
                                      
    if (enemy == null) {                                    
      enemy = Instantiate(enemyPrefab) as GameObject;       
      enemy.transform.position = new Vector3(0, 1, 0);
      float angle = Random.Range(0, 360);
      enemy.transform.Rotate(0, angle, 0);
    }
  }
}

Serialized variable for linking to the prefab object

Private variable to keep track of the enemy instance in the scene

Spawn a new enemy only if one isn’t already in the scene.

Method that copies the prefab object

Attach this script to the controller object, and in the Inspector you’ll see a variable slot for the enemy prefab. This works similarly to public variables, but there’s an important difference.

TIP To reference objects in Unity’s editor, I recommend decorating variables with SerializeField instead of declaring them to be public. As explained in chapter 2, public variables show up in the Inspector (in other words, they’re serialized by Unity), so most tutorials and sample code you’ll see use public variables for all serialized values. But these variables can also be modified by other scripts (these are public variables, after all), whereas the SerializeField attribute allows you to keep the variables private. C# defaults to private if a variable isn’t explicitly made public, and that’s better in most cases because you want to expose that variable in the Inspector but don’t want the value to be changed by other scripts.

WARNING Prior to version 2019.4, Unity had a bug in which SerializeField would cause the compiler to emit a warning about that field not being initialized. If you ever encounter this bug, the script still functions fine, so technically you can just ignore those warnings or get rid of them by adding = null to those fields.

Drag the prefab asset up from Project to the empty variable slot. When the mouse gets near, you should see the slot highlight to indicate that the object can be linked there (see figure 3.8). Once the enemy prefab is linked to the SceneController script, play the scene to see the code in action. An enemy will appear in the middle of the room just as before, but now if you shoot the enemy, it will be replaced by a new enemy. That’s much better than just one enemy that’s gone forever!

CH03_F08_Hocking3

Figure 3.8 Link the enemy prefab to the script’s prefab slot.

TIP This approach of dragging objects onto the Inspector’s variable slots is a handy technique that comes up in a lot of scripts. Here we linked a prefab to the script, but you can also link to objects in the scene and can even link to specific components (rather than the overall GameObject). In future chapters, we’ll use this technique often.

The core of this script is the Instantiate() method, so take note of that line. When we instantiate the prefab, that creates a copy in the scene. By default, Instantiate() returns the new object as a generic Object type, but Object is pretty useless directly, and we need to handle it as a GameObject. In C#, use the as keyword for typecasting to convert from one type of code object into another type (written with the syntax original-object as new-type).
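The behavior of as can be seen in isolation in plain C# (the Describe method below is just for illustration): unlike a parenthesized cast, a failed as conversion yields null rather than throwing an exception, which is why the result is typically followed by a null check.

```csharp
using System;

public static class AsCastDemo {
    public static string Describe(object obj) {
        // `as` converts to the target type, yielding null instead of
        // throwing when the object isn't actually that type.
        string s = obj as string;
        if (s == null) {
            s = "(not a string)";
        }
        return s;
    }
}
```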

The instantiated object is stored in enemy, a private variable of the GameObject type. (Keep the distinction between a prefab and an instance of the prefab straight: enemyPrefab stores the prefab; enemy stores the instance.) The if statement that checks the stored object ensures that Instantiate() is called only when enemy is empty (or null, in coder-speak). The variable starts out empty, so the instantiating code runs once right from the beginning of the session. The object returned by Instantiate() is then stored in enemy so that the instantiating code won’t run again.

Because the enemy destroys itself when shot, that empties the enemy variable and causes Instantiate() to be run again. In this way, an enemy is always in the scene.

Destroying GameObjects and memory management

It’s somewhat unexpected for existing references to become null when an object destroys itself. In a memory-managed programming language like C#, normally you aren’t able to directly destroy objects; you can only dereference them so that they can be destroyed automatically. This is still true within Unity, but the way GameObjects are handled behind the scenes makes it look like they were destroyed directly.


To display objects in the scene, Unity has to keep a reference to every object in its scene graph. As such, even if you removed all references to a GameObject in your code, this scene graph reference would still prevent the object from being destroyed automatically. Because of this, Unity provides the Destroy() method to tell the game engine, "Remove this object from the scene graph." As part of that behind-the-scenes functionality, Unity also overloads the == operator to return true when a destroyed object is compared to null. Technically, the object still exists in memory, but it may as well not exist any longer, so Unity has it appearing as null. You could confirm this by calling GetInstanceID() on the destroyed object.


Note that the developers of Unity have considered changing this behavior to more standard memory management. If they do, this spawning code will need to change as well, probably by swapping the (enemy == null) check for something like (enemy.isDestroyed).


(If most of this discussion was Greek to you, just don’t worry about it; this was a tangential technical discussion for people interested in these obscure details.)

3.5 Shooting by instantiating objects

All right, let’s add another bit of functionality to the enemies. Just as with the player, we first made them move; now let’s make them shoot! As I mentioned when introducing raycasting, that was just one approach to implementing shooting. Another approach involves instantiating prefabs, so let’s use that approach to make the enemies shoot back. The goal of this section is to see figure 3.9 when playing.

CH03_F09_Hocking3

Figure 3.9 Enemy shooting a fireball at the player.

3.5.1 Creating the projectile prefab

This time, shooting will involve a projectile in the scene. Shooting with raycasting was basically instantaneous, registering a hit the moment the mouse was clicked, but this time enemies are going to emit fireballs that fly through the air. Admittedly, they’ll be moving pretty fast, but not instantaneously, giving the player a chance to dodge out of the way. Instead of using raycasting to detect hits, we’ll use collision detection (the same collision system that keeps the moving player from passing through walls).

The code will spawn fireballs in the same way that enemies spawn: by instantiating a prefab. As explained in the previous section, the first step when creating a prefab is to create an object in the scene that will become the prefab, so let’s create a fireball.

To start, choose GameObject > 3D Object > Sphere. Rename the new object Fireball. Now create a new script, also called Fireball, and attach that script to this object. Eventually, we’ll write code in this script, but leave it as the default for now while we work on a few other parts of the Fireball object. So that it appears like a fireball and not just a gray sphere, we’re going to give the object a bright orange color. Surface properties such as color are controlled using materials.

DEFINITION A material is a packet of information that defines the surface properties of any 3D object that the material is attached to. These surface properties can include color, shininess, and even subtle roughness.

Choose Assets > Create > Material. Name the new material something like Flame and drag it onto the object in the scene. Select the material in the Project view in order to see the material’s properties in the Inspector. As figure 3.10 shows, click the color swatch labeled Albedo (that’s a technical term that refers to the main color of a surface). Clicking that will bring up a color picker in its own window; slide both the rainbow-colored ring and the main picking area to set the color to orange.

CH03_F10_Hocking3

Figure 3.10 Setting the color of a material

We’re also going to brighten the material to make it look more like fire. Adjust the Emission value (one of the other attributes in the Inspector). The check box is off by default, so turn it on to brighten up the material.

Now you can turn the fireball object into a prefab by dragging the object down from Hierarchy into Project, just as you did with the enemy prefab. As with the enemy, we need only the prefab now, so delete the instance in the Hierarchy. Great—we have a new prefab to use as a projectile! Next up is writing code to shoot using that projectile.

3.5.2 Shooting the projectile and colliding with a target

Let’s adjust the enemy so it can emit fireballs. The code that recognizes the player will require a new script (just as the code that recognized the target required ReactiveTarget), so first create a new script and name it PlayerCharacter. Attach this script to the player object in the scene. Now open WanderingAI and add the code from this listing.

Listing 3.11 WanderingAI additions for emitting fireballs

...
[SerializeField] GameObject fireballPrefab;                    
private GameObject fireball;
...
if (Physics.SphereCast(ray, 0.75f, out hit)) {
  GameObject hitObject = hit.transform.gameObject;
  if (hitObject.GetComponent<PlayerCharacter>()) {             
    if (fireball == null) {                                    
      fireball = Instantiate(fireballPrefab) as GameObject;    
      fireball.transform.position =
        transform.TransformPoint(Vector3.forward * 1.5f);      
      fireball.transform.rotation = transform.rotation;
    }
  }
  else if (hit.distance < obstacleRange) {
    float angle = Random.Range(-110, 110);
    transform.Rotate(0, angle, 0);
  }
}
...

Add these two fields before any methods, just as in SceneController.

Player is detected in the same way as the target object in RayShooter.

Same null GameObject logic as in SceneController

Instantiate() method here is just as it was in SceneController.

Place the fireball in front of the enemy and point it in the same direction.

You’ll notice that all the annotations in this listing refer to similar (or the same) bits in previous scripts. Previous code listings showed everything needed for emitting fireballs; now we’re mashing together and remixing bits of code to fit in the new context.

Just as in SceneController, you need to add two GameObject fields toward the top of the script: a serialized variable for linking the prefab to, and a private variable for keeping track of the instance created by the code. After doing a raycast, the code checks for the PlayerCharacter on the object hit; this works just as the shooting code checking for ReactiveTarget on the object hit. The code that instantiates a fireball when there isn’t already one in the scene works like the code that instantiates an enemy. The positioning and rotation are different, though; this time, you place the instance just in front of the enemy and point it in the same direction.

Once all the new code is in place, a new Fireball Prefab slot will appear in the Inspector when you select the Enemy prefab, like the Enemy Prefab slot in the SceneController component. Click the Enemy prefab in the Project view (a single click selects it; double-clicking would open the prefab), and the Inspector will show that object’s components, as if you’d selected an object in the scene. Although the earlier warning about interface awkwardness often applies when editing prefabs, adjusting the components on a prefab without opening it is easy, and that’s all we’re doing here. As shown in figure 3.11, drag the Fireball prefab from Project onto the Fireball Prefab slot in the Inspector (again, just as you did with SceneController).

CH03_F11_Hocking3

Figure 3.11 Link the fireball prefab to the script’s prefab slot.

Now the enemy will fire at the player when the player is directly ahead of it . . . well, it will try to fire. The bright orange sphere appears in front of the enemy but just sits there, because we haven’t written its script yet. Let’s do that now.

Listing 3.12 Fireball script that reacts to collisions

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
 
public class Fireball : MonoBehaviour {
  public float speed = 10.0f;
  public int damage = 1;
 
  void Update() {
    transform.Translate(0, 0, speed * Time.deltaTime);              
  }
 
  void OnTriggerEnter(Collider other) {                             
    PlayerCharacter player = other.GetComponent<PlayerCharacter>();
    if (player != null) {                                           
      Debug.Log("Player hit");
    }
    Destroy(this.gameObject);
  }
}

Move forward in the direction it faces.

Called when another object collides with this trigger

Check if the other object is a PlayerCharacter.

The crucial new bit of this code is the OnTriggerEnter() method, which is called automatically when the object collides with something, such as the walls or the player. At the moment, this code won’t work entirely: if you run it, the fireball will fly forward thanks to the Translate() line, but the trigger will never fire, so the fireball won’t destroy itself (and therefore a new one will never be spawned). A couple of adjustments need to be made to components on the Fireball object. The first change is making the collider a trigger: in the Inspector, select the Is Trigger check box in the Sphere Collider component.

TIP A collider component set as a trigger will still react to touching/overlapping other objects but will no longer stop other objects from physically passing through.

The fireball also needs a Rigidbody, a component used by the physics system in Unity. By giving the fireball a Rigidbody component, you ensure that the physics system is able to register collision triggers for that object. Click Add Component at the bottom of the Inspector and choose Physics > Rigidbody. In the component that’s added, deselect Use Gravity (see figure 3.12) so that the fireball won’t be pulled down by gravity.

CH03_F12_Hocking3

Figure 3.12 Turn off gravity in the Rigidbody component.

Play now, and fireballs are destroyed when they hit something. Because the fireball-emitting code runs whenever a fireball isn’t already in the scene, the enemy will shoot more fireballs at the player. Now just one more thing remains for shooting at the player: making the player react to being hit.

3.5.3 Damaging the player

Earlier, you created a PlayerCharacter script but left it empty. Now you’ll write code to have the player react to being hit.

Listing 3.13 Player that can take damage

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
 
public class PlayerCharacter : MonoBehaviour {
  private int health;
 
  void Start() {
    health = 5;                           
  }
 
  public void Hurt(int damage) {
    health -= damage;                     
    Debug.Log($"Health: {health}");       
  }
}

Initialize the health value.

Decrement the player’s health.

Construct the message by using string interpolation.

The listing defines a field for the player’s health and reduces the health on command. In later chapters, we’ll go over text displays to show information on the screen, but for now, we can display information about the player’s health only by using debug messages.

DEFINITION String interpolation is a mechanism to insert the evaluation of code (for example, the value of a variable) into a string. Several programming languages support string interpolation, including C#. For example, look at the health message in listing 3.13.
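As a standalone illustration of the mechanism (the HealthMessage method is hypothetical, not part of the project code): prefixing a string literal with $ makes the compiler evaluate any expression in braces and splice the result into the string.

```csharp
using System;

public static class InterpolationDemo {
    public static string HealthMessage(int health) {
        // The $ prefix enables interpolation; {health} is replaced
        // by the variable's value at runtime.
        return $"Health: {health}";
    }
}
```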

Now you need to go back to the Fireball script to call the player’s Hurt() method. Replace the debug line in the Fireball script with player.Hurt(damage) to tell the player they’ve been hit. And that’s the final bit of code we need!

Whew! That was a pretty intense chapter, with lots of code introduced. Combining the previous chapter with this one, you now have most of the functionality in place for a first-person shooter.

Summary

  • A ray is an imaginary line projected into the scene.

  • Raycasting operations are useful for both shooting and sensing obstacles.

  • Making a character wander around involves basic AI.

  • New objects are spawned by instantiating prefabs.

  • Coroutines are used to spread out functions over time.
