In this recipe, we'll kill two birds with one stone: we'll implement an interface for group AI management and look at weighted decision making.
In many ways, the architecture will be similar to the one in the Decision making – Finite State Machine recipe. It's recommended to have a look at that recipe before attempting this one. The big difference from the normal state machine is that instead of the states having definite outcomes, an AI Manager will look at the current needs and assign units to different tasks.
This recipe will also make use of an AIControl class, which is an extension of the AIControl class that can be found in the Creating a reusable AI control class recipe.
As an example, we'll use resource gathering units in an RTS. In this simplified game, there are two resources, wood and food. Food is consumed continuously by the workers and is the driving force behind the decisions. The AI Manager will try to keep the food storage at a set minimum level, taking the current consumption rate into account. The scarcer the food becomes, the more units will be assigned to gather it. Any unit not occupied by food gathering will be assigned to wood gathering instead.
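Before diving into the engine-side classes, the core weighting can be sketched in plain Java. The field names and the 20-second consumption horizon mirror the ones used in the recipe's evaluate method; the class itself is just an illustration:

```java
// Standalone sketch of the recipe's food-weighting formula.
// foodRequirement covers 20 seconds of consumption plus a minimum buffer,
// and factorFood is the 0.0-1.0 share of workers that should gather food.
public class FoodFactorSketch {
    public static float factorFood(float food, float foodConsumption, float minimumFoodStorage) {
        float foodRequirement = foodConsumption * 20f + minimumFoodStorage;
        return 1f - Math.min(food, foodRequirement) / foodRequirement;
    }

    public static void main(String[] args) {
        // Empty stores: every worker should gather food.
        System.out.println(factorFood(0f, 1f, 30f));   // 1.0
        // Stores at half the requirement: half the workers gather food.
        System.out.println(factorFood(25f, 1f, 30f));  // 0.5
        // Stores at the full requirement: nobody needs to gather food.
        System.out.println(factorFood(50f, 1f, 30f));  // 0.0
    }
}
```

The clamping with Math.min is what keeps the factor inside 0.0 to 1.0 even when the stores overflow the requirement.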
We'll start by defining a GatherResourceState class. It extends the same AIState class we defined in the Decision making – Finite State Machine recipe. This will be implemented by performing the following steps:
1. Apart from aiControl and spatial, the class needs a Spatial field defining something to pick up, called resource, and an integer called amountCarried.
2. In the controlUpdate method, we define two branches. The first is for when the unit isn't carrying anything, that is, amountCarried == 0. In this case, the unit should move towards resource. Once it gets close enough, it should pick some up, and amountCarried should be increased, as shown in the following code:

Vector3f direction = resource.getWorldTranslation().subtract(this.spatial.getWorldTranslation());
if (direction.length() > 1f) {
    direction.normalizeLocal();
    aiControl.move(direction, true);
} else {
    amountCarried = 10;
}
3. The second branch applies when amountCarried is more than 0. Now, the unit should move towards the HQ instead. Once it's close enough, finishTask() is called.
4. The finishTask method calls the AI Manager via aiControl to increase the resource amount that the state handles by the supplied amount, as follows:

aiControl.getAiManager().onFinishTask(this.getClass(), amountCarried);
amountCarried = 0;
5. Finally, we create two empty subclasses of GatherResourceState, called GatherFoodState and GatherWoodState.

With the new states handled, we can focus on the AIControl class. It will follow the pattern established elsewhere in the chapter, but it needs some new functionality. This will be implemented by performing the following three steps:
1. The class needs a reference to the AIAppState, called aiManager. It also needs to keep track of its current state in an AIStateRTS field called currentState.
2. In the setSpatial method, we add the two gathering states to our control and make sure they're disabled, as shown in the following code:

this.spatial.addControl(new GatherFoodState());
this.spatial.addControl(new GatherWoodState());
this.spatial.getControl(GatherFoodState.class).setEnabled(false);
this.spatial.getControl(GatherWoodState.class).setEnabled(false);
3. We also need a method called setCurrentState. Sidestepping conventions, it should not take an instance of a state, but enable an existing state the AI control class has, while disabling the previous state (if any). Since the states were added as controls on the spatial, the new state is looked up with getControl, as shown in the following code:

public void setCurrentState(Class<? extends AIStateRTS> newState) {
    if (this.currentState != null && this.currentState.getClass() != newState) {
        this.currentState.setEnabled(false);
    }
    this.currentState = this.spatial.getControl(newState);
    this.currentState.setEnabled(true);
}
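The enable/disable switching pattern can be tried out in isolation with a stripped-down sketch. The stand-in AIState classes and the map of controls below are simplifications of jME's Spatial and getControl lookup, not actual engine classes:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the jME control pattern: states live on the "spatial"
// and are switched by toggling their enabled flag, never by re-instantiating.
public class StateSwitchSketch {
    static class AIState { boolean enabled; void setEnabled(boolean e) { enabled = e; } }
    static class GatherFoodState extends AIState {}
    static class GatherWoodState extends AIState {}

    private final Map<Class<? extends AIState>, AIState> controls = new HashMap<>();
    private AIState currentState;

    StateSwitchSketch() {
        controls.put(GatherFoodState.class, new GatherFoodState());
        controls.put(GatherWoodState.class, new GatherWoodState());
    }

    // Mirrors the recipe's setCurrentState: disable the old state, enable the new one.
    public void setCurrentState(Class<? extends AIState> newState) {
        if (currentState != null && currentState.getClass() != newState) {
            currentState.setEnabled(false);
        }
        currentState = controls.get(newState);
        currentState.setEnabled(true);
    }

    public AIState getCurrentState() { return currentState; }
    public AIState stateFor(Class<? extends AIState> c) { return controls.get(c); }

    public static void main(String[] args) {
        StateSwitchSketch unit = new StateSwitchSketch();
        unit.setCurrentState(GatherFoodState.class);
        unit.setCurrentState(GatherWoodState.class);
        // Only the most recently set state remains enabled.
        System.out.println(unit.getCurrentState().getClass().getSimpleName()); // GatherWoodState
    }
}
```

Reusing pre-created, disabled states avoids allocating a new control every time the manager reshuffles its workers.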
Now we have to write the class that manages the units. It will be based on the AppState pattern, and implementing it consists of the following steps:
1. We define a class called AIAppState that extends AbstractAppState.
2. The class keeps a List<AIControl> of the units it controls, called aiList. We also add a Map<Class<? extends AIStateRTS>, Spatial> called resources that contains the resources in the world that can be gathered.
3. There are fields for the stored wood and food. There are also fields for the current foodConsumption value per second, the minimumFoodStorage it would like to keep, and a timer for how long to wait before reevaluating its decisions.
4. The update method is pretty simple. It starts by subtracting foodConsumption from the storage. Then, if timer has reached 0, it will call the evaluate method, as shown in the following code:

food -= foodConsumption * tpf;
food = Math.max(0, food);
timer -= tpf;
if (timer <= 0f) {
    evaluate();
    timer = 5f;
}
5. In the evaluate method, we begin by establishing the food requirement, as shown in the following code:

float foodRequirement = foodConsumption * 20f + minimumFoodStorage;

6. Next, we turn this into a factor between 0.0 and 1.0 describing how urgent food gathering is:

float factorFood = 1f - (Math.min(food, foodRequirement)) / foodRequirement;

7. Based on this factor, we decide how many of the workers should be gathering food:

int numWorkers = aiList.size();
int requiredFoodGatherers = (int) Math.round(numWorkers * factorFood);
int foodGatherers = workersByState(GatherFoodState.class);
8. The workersByState method is a convenience method that returns the number of workers assigned to a given state, as shown in the following code:

private int workersByState(Class<? extends AIStateRTS> state) {
    int amount = 0;
    for (AIControl_RTS ai : aiList) {
        if (ai.getCurrentState() != null && ai.getCurrentState().getClass() == state) {
            amount++;
        }
    }
    return amount;
}
9. Back in the evaluate method, we compare the required number of food gatherers with the current number, and decide which state (if any) should be assigned, as shown in the following code:

int toSet = requiredFoodGatherers - foodGatherers;
Class<? extends AIStateRTS> state = null;
if (toSet > 0) {
    state = GatherFoodState.class;
} else if (toSet < 0) {
    state = GatherWoodState.class;
    toSet = -toSet;
}
10. The setWorkerState method loops through aiList and calls setCurrentState on the first available worker. It returns true if it has successfully set the state of a unit, as shown in the following code:

private boolean setWorkerState(Class<? extends AIStateRTS> state) {
    for (AIControl_RTS ai : aiList) {
        if (ai.getCurrentState() == null || ai.getCurrentState().getClass() != state) {
            ai.setCurrentState(state);
            ((GatherResourceState) ai.getCurrentState()).setResource(resources.get(state));
            return true;
        }
    }
    return false;
}
11. Finally, from the application, we supply the manager with the spatials representing each resource, as shown in the following code:

aiAppState.setResource(GatherFoodState.class, foodSpatial);
aiAppState.setResource(GatherWoodState.class, woodSpatial);
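The reassignment arithmetic in the evaluate method can be illustrated with a standalone sketch, using hypothetical numbers (10 workers and two different food factors):

```java
// Worked example of the reassignment arithmetic from the evaluate method:
// a positive toSet means workers must switch to food gathering,
// a negative one means the surplus switches to wood gathering instead.
public class ReassignSketch {
    public static int toSet(int numWorkers, float factorFood, int currentFoodGatherers) {
        int requiredFoodGatherers = (int) Math.round(numWorkers * factorFood);
        return requiredFoodGatherers - currentFoodGatherers;
    }

    public static void main(String[] args) {
        // Food is plentiful (factor 0.3) but 5 of 10 workers gather it.
        System.out.println(toSet(10, 0.3f, 5)); // -2: move two workers to wood
        // Food is scarce (factor 0.9) and only 5 of 10 workers gather it.
        System.out.println(toSet(10, 0.9f, 5)); //  4: move four workers to food
    }
}
```

Each evaluation then calls setWorkerState toSet times, nudging the workforce towards the required split rather than reassigning everyone at once.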
At the beginning of the game, we add one green food resource and one brown wood resource, some distance away from the HQ (at 0,0,0). The AIAppState starts by looking at the current food storage; seeing that it's low, it will assign an AI to go to the food resource and bring back food.
The AIAppState's evaluate method starts by establishing the need for food gathering. It does this by dividing the food stores by the current requirement. By clamping the food value so that it can't exceed the requirement, we make sure we get a figure between 0.0 and 1.0.
It then takes the number of units available and decides how many of them should be gathering food, based on the factorFood figure, rounded off to the nearest integer. The result is compared to how many units are currently on a food-gathering mission, and the assignments are adjusted to suit the current need, sending workers to either food or wood gathering.
The worker AIs are completely controlled by the state the manager sets them to, and in this recipe, all they can do is move to one resource or the other. They have no idle state and are expected to always have some task.
The two states we use in this recipe are effectively the same class. Both resources are gathered in the same way, and GatherFoodState and GatherWoodState are only used as identifiers. In a real game, they might well behave differently from each other. If not, it might be a good idea to use a parameterized version of GatherResourceState instead.
This recipe only has two different states, where one is the deciding one. What do we do if we have, let's say, five equally important resources or tasks to consider? The principles are very much the same: compute a comparable need factor for each resource, and distribute the workers according to those factors.
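As a sketch of how this could look, assuming each resource exposes a 0.0-1.0 need factor like factorFood: normalize the factors and split the workforce proportionally, handing any leftover workers to the tasks with the largest fractional remainders. This is an illustration, not code from the recipe:

```java
import java.util.Arrays;

// Hypothetical extension to N resources: normalize each resource's 0-1 urgency
// factor and split the workforce proportionally, giving leftover workers to the
// tasks with the largest fractional remainders so the counts always add up.
public class MultiResourceSketch {
    public static int[] distribute(int numWorkers, float[] factors) {
        float total = 0f;
        for (float f : factors) total += f;
        int[] assigned = new int[factors.length];
        if (total <= 0f) return assigned; // nothing urgent: leave everyone unassigned
        float[] remainder = new float[factors.length];
        int used = 0;
        for (int i = 0; i < factors.length; i++) {
            float share = numWorkers * factors[i] / total;
            assigned[i] = (int) share;          // whole workers first
            remainder[i] = share - assigned[i]; // fractional part for later
            used += assigned[i];
        }
        // Hand out the remaining workers to the largest remainders.
        for (int left = numWorkers - used; left > 0; left--) {
            int best = 0;
            for (int i = 1; i < factors.length; i++) {
                if (remainder[i] > remainder[best]) best = i;
            }
            assigned[best]++;
            remainder[best] = -1f; // don't pick the same task twice
        }
        return assigned;
    }

    public static void main(String[] args) {
        // Five equally important tasks, twelve workers.
        System.out.println(Arrays.toString(distribute(12, new float[]{1, 1, 1, 1, 1})));
        // One task twice as urgent as two others, two tasks idle.
        System.out.println(Arrays.toString(distribute(8, new float[]{2, 1, 1, 0, 0})));
    }
}
```

The largest-remainder step matters: plain rounding per task can over- or under-commit the workforce when the shares don't divide evenly.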
In this recipe, the evaluation is done continuously, but it might just as well be applied when an AI has finished a task, to see what it should do next. In that case, the task could be picked at random among the distributed values to make it more dynamic.
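A minimal sketch of that random variant, assuming the same per-task need factors: treat the factors as weights and sample the finished worker's next task proportionally.

```java
import java.util.Random;

// Sketch of the random variant mentioned above: when a worker finishes a task,
// pick its next task at random, weighted by each resource's urgency factor.
public class WeightedPickSketch {
    public static int pick(float[] factors, Random rng) {
        float total = 0f;
        for (float f : factors) total += f;
        float r = rng.nextFloat() * total;
        for (int i = 0; i < factors.length; i++) {
            r -= factors[i];
            if (r <= 0f) return i; // landed inside this task's slice
        }
        return factors.length - 1; // guard against rounding at the far edge
    }

    public static void main(String[] args) {
        float[] factors = {0.8f, 0.2f};
        int[] counts = new int[2];
        Random rng = new Random(42);
        for (int i = 0; i < 10000; i++) counts[pick(factors, rng)]++;
        // Roughly an 80/20 split is expected over many draws.
        System.out.println(counts[0] + " vs " + counts[1]);
    }
}
```

Over many decisions the workforce still converges on the weighted split, but individual units behave less mechanically than with a strict quota.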