Negascouting

Introducing a search strategy also makes room for new challenges. Negascouting is the result of narrowing the search by improving the pruning heuristic. It is based on a concept called the search window, which is the interval between the alpha and beta values; the narrower the search window, the greater the chance of a branch being pruned.

Getting ready…

It is important to know the difference between a dynamic (instance) member function and a static member function, as well as how recursion works. A dynamic member function is bound to an instance of the class, while a static member function is bound to the class itself, so a static method can be called without instantiating an object. This is great for general-purpose algorithms, such as the one we are developing in this recipe.
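
As a quick illustration, here is a minimal sketch contrasting the two kinds of members; the ScoreHelper class and its members are hypothetical and not part of the recipe:

    using UnityEngine;

    public class ScoreHelper
    {
        public int basePoints = 10;

        // Instance (dynamic) method: it needs an object because it reads instance state
        public int GetBonus(int multiplier)
        {
            return basePoints * multiplier;
        }

        // Static method: bound to the class, callable without an instance
        public static int Clamp01(int value)
        {
            return Mathf.Clamp(value, 0, 1);
        }
    }

    // ScoreHelper helper = new ScoreHelper();
    // int bonus = helper.GetBonus(3);        // requires an instance
    // int clamped = ScoreHelper.Clamp01(5);  // called on the class itself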

In the case of recursion, it's important to remember that, unlike iteration, it requires a base case (also called the stop condition) and a recursive case (the one that keeps calling the function).
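
For example, a small factorial function (not part of the recipe, just an illustration of the two cases) looks like this:

    public static class RecursionExample
    {
        public static int Factorial(int n)
        {
            // Base case (stop condition): no further calls are made
            if (n <= 1)
                return 1;
            // Recursive case: the function calls itself with a smaller input
            return n * Factorial(n - 1);
        }
    }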

How to do it…

We will add a new function to the BoardAI class as follows (a complete, assembled version of the function is shown right after the steps):

  1. Create the ABNegascout function:
    public static float ABNegascout (
            Board board,
            int player,
            int maxDepth,
            int currentDepth,
            ref Move bestMove,
            float alpha,
            float beta)
    {
        // next steps here
    }
  2. Validate the base case:
    if (board.IsGameOver() || currentDepth == maxDepth)
        return board.Evaluate(player);
  3. Set the initial values:
    bestMove = null;
    float bestScore = Mathf.NegativeInfinity;
    float adaptiveBeta = beta;
  4. Loop through every available move and return the best score:
    foreach (Move m in board.GetMoves())
    {
        // next steps here
    }
    return bestScore;
  5. Create a new game state from the current move:
    Board b = board.MakeMove(m);
  6. Set the values for the recursion:
    Move currentMove = null;
    float recursedScore;
    int depth = currentDepth + 1;
    float max = Mathf.Max(alpha, bestScore);
  7. Call the recursion, negating the window so that the child node is evaluated from the opponent's point of view:
    recursedScore = ABNegamax(b, player, maxDepth, depth, ref currentMove, -adaptiveBeta, -max);
  8. Set the current score and validate it:
    float currentScore = -recursedScore;
    if (currentScore > bestScore)
    {
        // next steps here
    }
  9. If the scout pass used the full window (or we are close to the maximum depth), keep the score and the move that produced it:
    if (adaptiveBeta == beta || currentDepth >= maxDepth - 2)
    {
        bestScore = currentScore;
        bestMove = m;
    }
  10. Otherwise, take a look around by re-searching with the full window. The move returned by the recursive call applies to the child board, so we pass a throwaway reference and keep m as this level's best move:
    else
    {
        Move tempMove = null;
        float negativeBest;
        negativeBest = ABNegascout(b, player, maxDepth, depth, ref tempMove, -beta, -currentScore);
        bestScore = -negativeBest;
        bestMove = m;
    }
  11. Stop the loop if necessary. Otherwise, update the adaptive value:
    if (bestScore >= beta)
        return bestScore;
        
    adaptiveBeta = Mathf.Max(alpha, bestScore) + 1f;
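
Putting the steps together, the whole function looks roughly as follows. This is a sketch that assumes the Board and Move types, as well as the ABNegamax function, from the previous recipes:

    public static float ABNegascout(
            Board board,
            int player,
            int maxDepth,
            int currentDepth,
            ref Move bestMove,
            float alpha,
            float beta)
    {
        // Base case: the game is over or the maximum depth has been reached
        if (board.IsGameOver() || currentDepth == maxDepth)
            return board.Evaluate(player);
        bestMove = null;
        float bestScore = Mathf.NegativeInfinity;
        float adaptiveBeta = beta;
        foreach (Move m in board.GetMoves())
        {
            // Create the next game state and run the scout pass
            Board b = board.MakeMove(m);
            Move currentMove = null;
            int depth = currentDepth + 1;
            float max = Mathf.Max(alpha, bestScore);
            float recursedScore = ABNegamax(b, player, maxDepth, depth, ref currentMove, -adaptiveBeta, -max);
            float currentScore = -recursedScore;
            if (currentScore > bestScore)
            {
                if (adaptiveBeta == beta || currentDepth >= maxDepth - 2)
                {
                    // The full window was used; accept the result
                    bestScore = currentScore;
                    bestMove = m;
                }
                else
                {
                    // The scout pass failed high; re-search with the full window
                    Move tempMove = null;
                    float negativeBest = ABNegascout(b, player, maxDepth, depth, ref tempMove, -beta, -currentScore);
                    bestScore = -negativeBest;
                    bestMove = m;
                }
                // Prune if we are outside the window
                if (bestScore >= beta)
                    return bestScore;
                // Otherwise, narrow the window for the next scout pass
                adaptiveBeta = Mathf.Max(alpha, bestScore) + 1f;
            }
        }
        return bestScore;
    }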

How it works…

This algorithm works by examining the first move of each node with the full search window. The following moves are examined using a scout pass with a narrower window based on the score of the first move. If a scout pass fails (that is, the move turns out to be better than expected), it is repeated using the full-width window. Since most scout passes succeed, a large number of branches are pruned while the search still returns the correct value.
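
As a usage sketch, the following hypothetical wrapper (not part of the original recipe) shows how the function could be called from the BoardAI class to start the search at the root with the widest possible window:

    // Hypothetical entry point added next to the other recipes' functions
    public static Move GetNegascoutMove(Board board, int player, int maxDepth)
    {
        Move bestMove = null;
        ABNegascout(board, player, maxDepth, 0, ref bestMove,
            Mathf.NegativeInfinity, Mathf.Infinity);
        return bestMove;
    }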

See also

  • The AB Negamaxing recipe