Sensing – hearing

The hearing we'll implement is one of the simpler models available. It is less direct than vision and requires a different approach. We'll assume that hearing is defined by a hearingRange, and that the hearing ability falls off linearly to that radius. We'll also assume that a moving object emits a sound (in this case, footsteps) whose volume is proportional to the object's velocity. This makes sense in a stealth game, where sneaking should emit less sound than running. Sound is not blocked by obstacles or modified in any other way; only the distance between the emitter and the listener matters.

How to do it...

We will start by defining a class that all sound-emitting objects will use. This requires the following steps:

  1. We create a class called SoundEmitterControl, extending AbstractControl.
  2. It needs three fields: a Vector3f called lastPosition, a float called noiseEmitted, and another float called maxSpeed.
  3. In the controlUpdate method, we sample the spatial's velocity. This is the distance between the current worldTranslation and lastPosition; dividing it by tpf (time-per-frame) gives us the distance per second, as shown in the following code:
    float movementSpeed = lastPosition.distance(spatial.getWorldTranslation()) / tpf;
  4. If the spatial is actually moving, we compare its speed to maxSpeed. Normalized to the range 0 to 1, this value becomes noiseEmitted, as shown in the following code:
    movementSpeed = Math.min(movementSpeed, maxSpeed);
    noiseEmitted = movementSpeed / maxSpeed;
  5. Finally, we set lastPosition to the current worldTranslation.

Now we will implement the changes to detect sound in AIControl. We start by defining a float called hearingRange. In the sense() method, we iterate over the list of targetableObjects and check whether each one has a SoundEmitterControl. If it does, we measure the distance between it and the AI using the following code:
    float distance = s.getWorldTranslation().distance(spatial.getWorldTranslation());
  6. We get the noiseEmitted value from SoundEmitterControl and see how much of it is picked up by the AI, as shown in the following code:
    float distanceFactor = 1f - Math.min(distance, hearingRange) / hearingRange;
    float soundHeard = distanceFactor * noiseEmitted;
  7. If soundHeard exceeds the threshold of 0.25f, the AI has heard the sound and will react.
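The steps above boil down to a small amount of arithmetic. The following standalone sketch (plain Java, with the jMonkeyEngine Spatial and Vector3f machinery stripped out; the 20-unit hearingRange is an assumed tuning value, while maxSpeed = 25 and the 0.25f threshold come from the recipe) shows the noise emission and the hearing check side by side:

```java
public class HearingSketch {
    // maxSpeed and the threshold follow the recipe; hearingRange is assumed.
    static final float MAX_SPEED = 25f;
    static final float HEARING_RANGE = 20f;
    static final float THRESHOLD = 0.25f;

    // Steps 3-4: distance moved this frame becomes speed per second,
    // clamped and normalized to a 0..1 noise value.
    static float noiseEmitted(float distanceMovedThisFrame, float tpf) {
        float movementSpeed = distanceMovedThisFrame / tpf;
        movementSpeed = Math.min(movementSpeed, MAX_SPEED);
        return movementSpeed / MAX_SPEED;
    }

    // Step 6: linear falloff over hearingRange, factored with the emitted noise.
    static float soundHeard(float distance, float noiseEmitted) {
        float distanceFactor = 1f - Math.min(distance, HEARING_RANGE) / HEARING_RANGE;
        return distanceFactor * noiseEmitted;
    }

    public static void main(String[] args) {
        // An emitter moving 0.5 units in a 1/60 s frame moves at 30 units/s,
        // which is clamped to maxSpeed, so noiseEmitted is 1.0.
        float noise = noiseEmitted(0.5f, 1f / 60f);
        // A listener 10 units away gets a distance factor of 0.5.
        float heard = soundHeard(10f, noise);
        System.out.println(heard > THRESHOLD); // prints "true"
    }
}
```

Because both values are normalized to 0..1, the product soundHeard is also in 0..1, which makes a fixed threshold such as 0.25f easy to reason about.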

How it works...

The SoundEmitterControl class is meant to define how much sound a moving character makes. It does this by measuring the distance traveled each frame and translating it to units per second by dividing by the time-per-frame. It's been adapted slightly to work with the free-flying camera used in the test case; that's why maxSpeed is set to 25. It uses maxSpeed to define how much noise the spatial is causing, on a scale of 0 to 1.

In the AI control class, we use the sense() method to test whether the AI has heard anything. It has a hearingRange field; the AI's sensitivity falls off linearly with distance from its location, and outside this range no sound is detected at all.

The method measures the distance to the sound-emitting spatial and factors the resulting falloff with the noise value the spatial emits. For this example, a threshold of 0.25 is used to decide whether the sound is loud enough for the AI to react.
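To make the linear falloff and the threshold concrete, here is a small worked check (plain Java, reusing the hearing formula from the steps above; the 20-unit hearingRange and the 0.5 noise value are assumed sample inputs):

```java
public class ThresholdExample {
    // Hearing formula from the recipe: linear falloff times emitted noise.
    static float soundHeard(float distance, float hearingRange, float noiseEmitted) {
        float distanceFactor = 1f - Math.min(distance, hearingRange) / hearingRange;
        return distanceFactor * noiseEmitted;
    }

    public static void main(String[] args) {
        float hearingRange = 20f;  // assumed range in world units
        float noiseEmitted = 0.5f; // an emitter moving at half maxSpeed

        // At distance 0 the full noise arrives; at half the range, half of it;
        // at the edge of the range, nothing.
        for (float distance : new float[] {0f, 10f, 15f, 20f}) {
            float heard = soundHeard(distance, hearingRange, noiseEmitted);
            System.out.printf("distance=%.0f soundHeard=%.3f reacts=%b%n",
                    distance, heard, heard > 0.25f);
        }
    }
}
```

Note that an emitter at half maxSpeed is only "heard" while it is close: at 10 units, soundHeard is exactly 0.25, which does not exceed the threshold, so the AI ignores it.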
