Representing states with numerical values: Markov system

Having learned about fuzzy logic, it is worth mixing approaches and extending that functionality with finite-state machines. However, fuzzy logic does not work directly with crisp values; they have to be defuzzified before they have meaning within its scope. A Markov chain is a mathematical system that allows us to build a decision-making system that can be seen as a fuzzy state machine.

Getting ready

This recipe uses the matrix and vector classes that come with Unity (Matrix4x4 and Vector4) to illustrate the theoretical approach with a working example. It can be adapted to our own matrix and vector classes, provided they implement the required member functions, such as vector-matrix multiplication.
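
Everything in the recipe reduces to multiplying the state vector by a transition matrix. The following minimal sketch (the values are assumptions chosen only for illustration) shows how Unity's built-in types compose for that purpose:
    // a state vector with all of its weight on the first component
    Vector4 state = new Vector4(1f, 0f, 0f, 0f);
    // a transition matrix that splits that weight between the first two components
    Matrix4x4 transition = Matrix4x4.identity;
    transition[0, 0] = 0.5f;
    transition[1, 0] = 0.5f;
    Vector4 nextState = transition * state; // (0.5, 0.5, 0, 0)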

How to do it...

  1. Create the parent class for handling transitions:
    using UnityEngine;
    using System.Collections;
    
    public class MarkovTransition : MonoBehaviour
    {
        // transition matrix applied to the state vector when triggered
        public Matrix4x4 matrix;
        // behaviour to enable when this transition fires
        public MonoBehaviour action;
    }
  2. Implement the IsTriggered member function. This is a stub meant to be overridden by child classes (a hypothetical override is sketched after this list):
    public virtual bool IsTriggered()
    {
        // child classes override this with their actual trigger condition
        return false;
    }
  3. Define the Markov state machine with its member variables:
    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;
    
    public class MarkovStateMachine : MonoBehaviour
    {
        public Vector4 state;             // current game state vector
        public Matrix4x4 defaultMatrix;   // matrix applied when the timer expires
        public float timeReset;           // countdown duration in seconds
        public float timeCurrent;         // time remaining on the countdown
        public List<MarkovTransition> transitions;
        private MonoBehaviour action;     // action enabled by the last transition
    }
  4. Define the Start function for initialization:
    void Start()
    {
        timeCurrent = timeReset;
    }
  5. Implement the Update function, disabling the previous action before looking for a new transition:
    void Update()
    {
        // disable the action from the previous frame, if any
        if (action != null)
            action.enabled = false;
    
        MarkovTransition triggeredTransition = null;
        // next steps go here
    }
  6. Look for a triggered transition:
    foreach (MarkovTransition mt in transitions)
    {
        if (mt.IsTriggered())
        {
            triggeredTransition = mt;
            break;
        }
    }
  7. If a transition was triggered, apply its matrix to the game state and reset the timer:
    if (triggeredTransition != null)
    {
        timeCurrent = timeReset;
        Matrix4x4 matrix = triggeredTransition.matrix;
        state = matrix * state;
        action = triggeredTransition.action;
    }
  8. Otherwise, update the countdown timer and, when it reaches zero, apply the default matrix to the game state:
    else
    {
        timeCurrent -= Time.deltaTime;
        if (timeCurrent <= 0f)
        {
            state = defaultMatrix * state;
            timeCurrent = timeReset;
            action = null;
        }
    }
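
The IsTriggered stub from step 2 is designed to be overridden. The following sketch is a hypothetical child class (its name, target field, and radius are assumptions, not part of the recipe) that triggers when a target transform comes within a given distance:
    using UnityEngine;
    
    public class PlayerNearTransition : MarkovTransition
    {
        // hypothetical trigger: fires when the target is within radius units
        public Transform target;
        public float radius = 5f;
    
        public override bool IsTriggered()
        {
            if (target == null)
                return false;
            return Vector3.Distance(transform.position, target.position) <= radius;
        }
    }
Such a component would be attached to the same game object as the state machine and added to its transitions list in the Inspector.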

How it works...

We define a game state based on the numerical values of the Vector4 member variable, with each component corresponding to a single state. The values in the game state change according to the matrix attached to each transition; when a transition is triggered, its matrix is multiplied into the state. We also keep a countdown timer to handle a default transition and change the game state accordingly. This is useful when we need to reset the game state after a period of time, or simply to apply a regular transformation.
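
As a concrete sketch of the default transition (the matrix and state values below are assumptions for illustration only), a default matrix whose first row is all ones pulls every component's weight back into the first state once the countdown expires:
    // hypothetical reset matrix: collects all weight into the first component
    Matrix4x4 reset = Matrix4x4.zero;
    reset.SetRow(0, new Vector4(1f, 1f, 1f, 1f));
    Vector4 collected = reset * new Vector4(0.2f, 0.5f, 0.3f, 0f);
    // collected is now (1, 0, 0, 0)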

See also

For more theoretical insight into the application of Markov processes to game AI, please refer to Ian Millington's book, Artificial Intelligence for Games.
