Controlling groups of AI

In this recipe, we'll kill two birds with one stone: we'll implement an interface for group AI management and look at weighted decision making.

In many ways, the architecture will be similar to the Decision making – Finite State Machine recipe. It's recommended to have a look at it before attempting this one. The big difference from the normal state machine is that instead of the states having definite outcomes, an AI Manager will look at the current needs and assign units to different tasks.

This recipe will also make use of an extended AI control class, AIControl_RTS, which builds on the AIControl that can be found in the Creating a reusable AI control class recipe.

As an example, we'll use resource gathering units in an RTS. In this simplified game, there are two resources, wood and food. Food is consumed continuously by the workers and is the driving force behind the decision. The AI Manager will try to keep the levels of the food storage at a set minimum level, taking into account the current consumption rate. The scarcer the food becomes, the more units will be assigned to gather it. Any unit not occupied by food gathering will be assigned to wood gathering instead.

How to do it...

We'll start by defining a GatherResourceState class. It extends the same AIState we defined in the Decision making – Finite State Machine recipe (the rest of the code in this recipe refers to this state base class as AIStateRTS). This will be implemented by performing the following steps:

  1. First of all it needs access to the AIControl called aiControl.
  2. It needs two additional fields, a Spatial defining something to pick up called resource, and an integer called amountCarried.
  3. In the controlUpdate method, we define two branches. The first applies when the unit isn't carrying anything (amountCarried == 0). In this case, the unit should move towards resource. Once it gets close enough, it should pick up some, and amountCarried should be increased, as shown in the following code:
    Vector3f direction = resource.getWorldTranslation().subtract(this.spatial.getWorldTranslation());
    if(direction.length() > 1f){
      direction.normalizeLocal();
      aiControl.move(direction, true);
    } else {
      amountCarried = 10;
    }
  4. In the other case, amountCarried is more than 0. Now, the unit should move towards the HQ instead. Once it's close enough, finishTask() is called (a full sketch of this state follows the list).
  5. The finishTask method calls the AI Manager via aiControl to increase the resource that the state handles by the amount carried, as follows:
    aiControl.getAiManager().onFinishTask(this.getClass(), amountCarried);
    amountCarried = 0;
  6. Finally, we create two new classes that extend this class, namely GatherFoodState and GatherWoodState.
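Put together, the gathering state might look something like the following sketch. It assumes that AIStateRTS extends AbstractControl and exposes the spatial and aiControl fields to its subclasses, and that the HQ sits at the origin, as mentioned in the How it works... section; the setResource setter matches the call made by the manager later on. Treat it as a sketch rather than the definitive implementation:

    public class GatherResourceState extends AIStateRTS {

      private Spatial resource;     // where this resource can be picked up
      private int amountCarried;    // how much the unit currently carries
      private Vector3f hqPosition = Vector3f.ZERO;  // drop-off point (HQ assumed at 0,0,0)

      public void setResource(Spatial resource) {
        this.resource = resource;
      }

      @Override
      protected void controlUpdate(float tpf) {
        if (amountCarried == 0) {
          // Not carrying anything: walk to the resource and pick some up.
          Vector3f direction = resource.getWorldTranslation().subtract(spatial.getWorldTranslation());
          if (direction.length() > 1f) {
            direction.normalizeLocal();
            aiControl.move(direction, true);
          } else {
            amountCarried = 10;
          }
        } else {
          // Carrying something: walk back to the HQ and deliver it.
          Vector3f direction = hqPosition.subtract(spatial.getWorldTranslation());
          if (direction.length() > 1f) {
            direction.normalizeLocal();
            aiControl.move(direction, true);
          } else {
            finishTask();
          }
        }
      }

      protected void finishTask() {
        // Report the delivered amount to the manager, keyed by the concrete
        // state class (GatherFoodState or GatherWoodState).
        aiControl.getAiManager().onFinishTask(this.getClass(), amountCarried);
        amountCarried = 0;
      }
    }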

With the new states handled, we can focus on the AIControl_RTS class. It will follow the pattern established elsewhere in the chapter, but it needs some new functionality. This will be implemented by performing the following three steps:

  1. It needs two new fields. The first is an AIAppState called aiManager, which is a reference to the AI manager. It also needs to keep track of its current state in an AIStateRTS field called currentState (accessors for these fields are sketched after this list).
  2. In the setSpatial method, we add the two gathering states to our control, and make sure they're disabled, as shown in the following code:
    this.spatial.addControl(new GatherFoodState());
    this.spatial.addControl(new GatherWoodState());
    this.spatial.getControl(GatherFoodState.class).setEnabled(false);
    this.spatial.getControl(GatherWoodState.class).setEnabled(false);
  3. We also add a method to set the state, setCurrentState. Sidestepping conventions, it does not take an instance of a state; instead, it enables one of the state controls already attached to the spatial, while disabling the previous state (if any), as shown in the following code:
    public void setCurrentState(Class<? extends AIStateRTS> newState) {
      if(this.currentState != null && this.currentState.getClass() != newState){
        this.currentState.setEnabled(false);
      }
      this.currentState = this.spatial.getControl(newState);
      this.currentState.setEnabled(true);
    }
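For completeness, here is a sketch of the remaining pieces of AIControl_RTS that the manager relies on later: the getCurrentState and getAiManager accessors, plus a setter for wiring in the manager (how the manager reference is actually supplied is an assumption here). setSpatial and setCurrentState are implemented as shown in steps 2 and 3:

    public class AIControl_RTS extends AIControl {

      private AIAppState aiManager;     // the manager that hands out tasks
      private AIStateRTS currentState;  // the state this unit is currently in

      public AIStateRTS getCurrentState() {
        return currentState;
      }

      public AIAppState getAiManager() {
        return aiManager;
      }

      // Assumed wiring: the application passes the manager in after creating the control.
      public void setAiManager(AIAppState aiManager) {
        this.aiManager = aiManager;
      }

      // setSpatial(...) and setCurrentState(...) as shown in steps 2 and 3.
    }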

Now we have to write a class that manages the units. It will be based on the AppState pattern, and is implemented in the following steps:

  1. We begin by creating a new class called AIAppState that extends AbstractAppState.
  2. It needs a List<AIControl_RTS> of the units it controls, called aiList. We also add a Map<Class<? extends AIStateRTS>, Spatial> called resources, which contains the resources in the world that can be gathered.
  3. It then needs to keep track of its stock of wood and food. There are also fields for the current foodConsumption value per second, the minimumFoodStorage it would like to keep, and a timer for how long to wait before reevaluating its decisions.
  4. The update method is pretty simple. It starts by subtracting the food consumed this frame, foodConsumption * tpf, from the storage. Then, if timer has reached 0, it will call the evaluate method, as shown in the following code:
    food -= foodConsumption * tpf;
    food = Math.max(0, food);
    timer -= tpf;
    if(timer <= 0f){
      evaluate();
      timer = 5f;
    }
  5. In the evaluate method, we begin by establishing the food requirement, as shown in the following code:
    float foodRequirement = foodConsumption * 20f + minimumFoodStorage;
  6. Then we decide how urgent food gathering is, on a factor of 0.0 - 1.0, as shown in the following code:
    float factorFood = 1f - (Math.min(food, foodRequirement)) / foodRequirement;
  7. Now we decide how many workers should be assigned to food gathering by taking that factor and multiplying it by the total amount of workers, as shown in the following code:
    int numWorkers = aiList.size();
    int requiredFoodGatherers = (int) Math.round(numWorkers * factorFood);
    int foodGatherers = workersByState(GatherFoodState.class);
  8. We create a helper method, called workersByState, that returns the number of workers assigned to a given state, as shown in the following code:
    private int workersByState(Class<? extends AIStateRTS> state){
      int amount = 0;
      for(AIControl_RTS ai: aiList){
        if(ai.getCurrentState() != null && ai.getCurrentState().getClass() == state){
          amount++;
        }
      }
      return amount;
    }
  9. Comparing the current gatherers with the required amount, we know whether to increase or decrease the number of food gatherers. We then set the state to change to, according to whether more or fewer food gatherers are required, as shown in the following code:
    int toSet = requiredFoodGatherers - foodGatherers;
    Class<? extends AIStateRTS> state = null;
    if(toSet > 0){
      state = GatherFoodState.class;
    } else if (toSet < 0){
      state = GatherWoodState.class;
      toSet = -toSet;
    }
  10. We can create another method, called setWorkerState, that loops through aiList and calls setCurrentState on the first available worker. It returns true if it has successfully set the state of a unit (a sketch of the evaluate method that calls it follows this list), as shown in the following code:
    private boolean setWorkerState(Class<? extends AIStateRTS> state){
      for(AIControl_RTS ai: aiList){
        if(ai.getCurrentState() == null || ai.getCurrentState().getClass() != state){
          ai.setCurrentState(state);
          ((GatherResourceState)ai.getCurrentState()).setResource(resources.get(state));
          return true;
        }
      }
      return false;
    }
  11. The example implementation also requires that we set the resource for each state in the form of a spatial, so that the units know where they can pick up some of the resource. This can be done somewhere in the application, as shown in the following code:
    aiAppState.setResource(GatherFoodState.class, foodSpatial);
    aiAppState.setResource(GatherWoodState.class, woodSpatial);
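Tying the steps together, the manager's evaluate method, along with onFinishTask and setResource, might look like the following sketch. The loop that actually reassigns workers one at a time is not spelled out in the steps above and is an assumption here, as are the initial field values; update, workersByState, and setWorkerState are as shown in steps 4, 8, and 10:

    public class AIAppState extends AbstractAppState {

      private List<AIControl_RTS> aiList = new ArrayList<>();
      private Map<Class<? extends AIStateRTS>, Spatial> resources = new HashMap<>();
      private float food = 100f, wood = 0f;   // illustrative starting stock
      private float foodConsumption = 1f;     // food consumed per second
      private float minimumFoodStorage = 50f; // storage level to aim for
      private float timer = 0f;               // counted down in update (step 4)

      private void evaluate() {
        // How much food we would like to have on hand.
        float foodRequirement = foodConsumption * 20f + minimumFoodStorage;
        // Urgency of food gathering, between 0.0 (plenty) and 1.0 (none left).
        float factorFood = 1f - (Math.min(food, foodRequirement)) / foodRequirement;

        int numWorkers = aiList.size();
        int requiredFoodGatherers = (int) Math.round(numWorkers * factorFood);
        int foodGatherers = workersByState(GatherFoodState.class);

        int toSet = requiredFoodGatherers - foodGatherers;
        Class<? extends AIStateRTS> state = null;
        if (toSet > 0) {
          state = GatherFoodState.class;
        } else if (toSet < 0) {
          state = GatherWoodState.class;
          toSet = -toSet;
        }
        // Assumption: reassign one worker at a time until the target is met.
        for (int i = 0; i < toSet && state != null; i++) {
          if (!setWorkerState(state)) {
            break; // no more workers available to switch
          }
        }
      }

      public void onFinishTask(Class<? extends AIStateRTS> state, int amount) {
        // Credit the delivered resources to the matching stockpile.
        if (state == GatherFoodState.class) {
          food += amount;
        } else if (state == GatherWoodState.class) {
          wood += amount;
        }
      }

      public void setResource(Class<? extends AIStateRTS> state, Spatial resource) {
        resources.put(state, resource);
      }

      // update (step 4), workersByState (step 8), and setWorkerState (step 10)
      // are implemented as shown above.
    }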

How it works...

At the beginning of the game, we add one green food resource and one brown wood resource, some distance away from the HQ (at 0,0,0). The AIAppState starts by looking at the current food storage; seeing that it's low, it will assign an AI to go to the food resource and bring back food.

The AIAppState's evaluate method starts by establishing the need for food gathering. It does this by dividing the food stores by the current requirement and subtracting the result from 1.0, so that an empty storage yields maximum urgency. By clamping the food value so that it can never exceed the requirement, we make sure we get a figure between 0.0 and 1.0. For example, with, say, a foodConsumption of 1 per second, a minimumFoodStorage of 50, and 30 food in storage, the requirement is 1 * 20 + 50 = 70, and factorFood works out to 1 - 30/70, or roughly 0.57.

It then takes the number of units available and decides how many of those should be gathering food, based on the factorFood figure, rounding off to the nearest integer.

The result is compared to how many units are currently on a food-gathering mission, and the manager adjusts the number to suit the current need, assigning workers to either food or wood gathering.

The worker AIs are completely controlled by the state the manager sets them to, and in this recipe, all they can do is move to one resource or the other. They have no idle state and are expected to always have some task.

The two states we use in the recipe are functionally identical. Both resources are gathered in the same way, and GatherFoodState and GatherWoodState are only used as identifiers. In a real game, they might well behave differently from each other. If not, it might be a good idea to use a parameterized version of GatherResourceState instead.

There's more...

This recipe has only two different states, where one of them drives the decision. What do we do if we have, let's say, five equally important resources or tasks to consider? The principles are very much the same:

  • Begin by normalizing the need for each task between 0.0 and 1.0. This makes it easier to balance things.
  • Next, add all the values together, and divide each value by the sum. Now, the values are balanced relative to each other, and their total is 1.0 (see the sketch at the end of this section).

In this recipe, the evaluation is done continuously, but it might just as well be applied when an AI has finished a task, to see what it should do next. In that case, the task could be picked at random among the distributed values to make it more dynamic.
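As a sketch of how that could look, assuming each task's raw need has already been computed between 0.0 and 1.0, the following hypothetical helper normalizes the values and then picks a task at random, weighted by its share of the total (the class name and signature are illustrative, not part of the recipe):

    public class TaskPicker {

      public static Class<? extends AIStateRTS> pickTask(
          Map<Class<? extends AIStateRTS>, Float> needs, Random random) {
        // Sum the raw needs (each assumed to be between 0.0 and 1.0).
        float sum = 0f;
        for (float need : needs.values()) {
          sum += need;
        }
        if (sum <= 0f) {
          return null; // nothing is needed right now
        }
        // Each task occupies a slice of [0, 1] proportional to need / sum;
        // a single random roll then selects a slice.
        float roll = random.nextFloat();
        float accumulated = 0f;
        Class<? extends AIStateRTS> last = null;
        for (Map.Entry<Class<? extends AIStateRTS>, Float> entry : needs.entrySet()) {
          accumulated += entry.getValue() / sum;
          last = entry.getKey();
          if (roll <= accumulated) {
            return entry.getKey();
          }
        }
        return last; // guard against floating-point rounding
      }
    }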
