CHAPTER 12

Decision Trees

Chapters 10 and 11 described tree algorithms in general and balanced trees in particular. They explained algorithms that you can use to build and maintain trees, but they didn't describe any algorithms that use trees to solve a particular problem.

This chapter describes decision trees, which you can use to model situations where you can solve a problem by making a series of decisions. Each branch in the tree represents a single choice. A leaf node represents a complete set of decisions that produces a final solution. The goal is to find the best possible set of choices or the best leaf node in the tree.

For example, suppose you want to divide a set of objects of various weights into two piles that have the same total weight. You could model this problem with a binary tree where the left branch at level K of the tree corresponds to including the Kth object in the first pile and the right branch corresponds to including the Kth object in the second pile. A complete path through the tree corresponds to a complete assignment of objects to the two piles. The goal is to find a path that gives an even distribution of weight.

Decision trees are extremely useful and can model all sorts of situations where you can use a series of steps to produce a solution. Unfortunately, decision trees are often truly enormous. For example, the binary tree described in the preceding paragraph representing the division of N objects into two piles has 2ᴺ leaf nodes, so searching the entire tree may be impossible. A tree representing the division of 50 objects, for instance, has approximately 1.13 × 10¹⁵ leaf nodes. Even if you could examine 1 million of those nodes per second, it would take more than 35 years to examine every node.

This chapter describes some different kinds of decision trees. It explains techniques you can use to search these trees efficiently so that you can find solutions to larger problems than you could find by using a brute-force approach. It also explains heuristic methods you can use to find approximate solutions to some problems when searching a tree completely isn't feasible.

The following section starts the discussion of decision tree search algorithms by covering a very specific kind of search: game tree searches.

Searching Game Trees

You can model games such as chess, checkers, Go, and tic-tac-toe (noughts and crosses) with a game tree where each branch represents a move by one of the players. If at some point in the game a player has 10 possible moves, the tree at that point has 10 possible branches. A complete path through the tree corresponds to a complete game.

Like all decision trees, game trees grow extremely quickly. For example, suppose a chess game lasts 40 moves (each player moves 20 times) and has an average of about 30 possible moves per turn. The total number of paths through the game tree is roughly 30⁴⁰ ≈ 1.2 × 10⁵⁹. Exhaustively searching such a tree with a computer that could examine 1 billion possible paths per second would take roughly 3.8 × 10⁴² years. (See http://en.wikipedia.org/wiki/Shannon_number for a discussion of the Shannon number, an estimate of the complexity of chess.)

Tic-tac-toe is a more tractable problem, although the game tree is still huge. In the first move, the X player has nine choices. In the second move, player O has eight choices. At each move, the current player has one fewer choice than the other player had in the previous move, so a total of 9 × 8 × 7 × ... × 1 = 9! = 362,880 paths are possible through the game tree.

Some of those paths are illegal. For example, if X takes the top three squares in the first five moves, the game is over, so any paths through the tree that begin with X taking those squares don't go all the way to the ninth level of the tree.

If you remove all the paths that end early, the game tree still contains roughly a quarter million leaf nodes, so the tree is still fairly large.
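If you want to check these counts, the tic-tac-toe game tree is small enough to enumerate on a computer. The following Python sketch (my own illustration, not part of the text) counts every possible game, treating a game as over as soon as a player completes a row, column, or diagonal or the board fills up. Positions are cached so that the count finishes almost instantly:

```python
from functools import lru_cache

# Winning lines: rows, columns, diagonals (squares numbered 0-8).
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

@lru_cache(maxsize=None)
def count_games(board, player):
    """Count the complete games reachable from this position.
    A game ends as soon as someone wins or the board fills up."""
    opponent = 'O' if player == 'X' else 'X'
    total = 0
    for square in range(9):
        if board[square] == ' ':
            new = board[:square] + (player,) + board[square + 1:]
            if any(all(new[i] == player for i in line) for line in WINS):
                total += 1          # this move ends the game with a win
            elif ' ' not in new:
                total += 1          # board is full, so the game is a draw
            else:
                total += count_games(new, opponent)
    return total

print(count_games((' ',) * 9, 'X'))  # 255,168 possible games
```

The result, 255,168, is the exact count behind the "roughly a quarter million" figure.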

The following sections describe algorithmic techniques you can use to search a tic-tac-toe game tree. The discussion uses tic-tac-toe because that problem is reasonably small, but the same techniques apply to any similar game, such as chess, checkers, or Go.

Minimax

To decide whether one move is preferable to another during a game, you need to decide what value the different board positions have. For example, if you can place an X in a particular square in a tic-tac-toe game, and doing so lets you win, that board position has a high value. Conversely, if placing an X in a different position allows O to win, that board position has a low value.

Different games use different board position values that can depend on many factors, such as whether you win, whether your opponent wins, whether your pieces occupy certain parts of the board, and whether your pieces can threaten certain positions. In tic-tac-toe, you can define four board values:

  • 4: The board position will end in a win for this player.
  • 3: It's unclear whether the current board position will result in a win, loss, or draw.
  • 2: The board position will end in a draw.
  • 1: The board position will end in a loss for this player.

Figure 12-1 shows board positions demonstrating each of these values. In the upper-left board position, X will win in the next move. The board position in the upper right gives a loss to X, because O will win no matter where X goes in the next turn. The lower-left board position is uncertain, assuming that you can search only a few levels into the game tree. Finally, the board position in the lower right will end in a draw no matter where X and O move on their final moves.


Figure 12-1: To pick a move, the program must assign values to board positions.

There's an obvious relationship among these values. If player 1 wins, player 2 loses. If a game ends in a draw for player 1, it ends in a draw for player 2. If the board value is unknown for player 1, it's unclear for player 2 also.

For complicated games, the outcome of a particular board position is often uncertain because the program cannot search the game tree thoroughly enough to examine all the possible outcomes. In cases such as those, the program must assign approximate values to different positions so that the program can pick the best one.

On a reasonably fast computer, a tic-tac-toe program can search the entire game tree, so the value 3 isn't really necessary. It is included here so that you can see how to handle more complicated games. (You can get the same effect in a tic-tac-toe program by not allowing the program to search more than a few levels through the game tree.)

NOTE Because you can search the entire tic-tac-toe game tree, it's fairly obvious that, starting from the first move, X can force a win, O can force a win, or one of the players can force a draw. If both players understand the game tree completely, there's really no game. The only way there could be any doubt about the outcome is if one of the players makes a mistake.

It's much less obvious that the same is true for more complicated games such as chess. If the players had perfect knowledge of the game tree, one or the other could force a win or draw with no doubt about the outcome. It's the fact that the game tree is too big to understand completely that makes the game interesting.

Minimax is a game tree search strategy in which at each move you try to minimize the maximum value your opponent can achieve. For example, if you can make two moves, the first giving your opponent a win and the second giving your opponent a loss, you should take the second move.

The following pseudocode shows the minimax algorithm at a high level:

// Find the best move for player1.
Minimax(Board: board_position, Move: best_move, Value: best_value,
        Player: player1, Player: player2, Integer: depth, Integer: max_depth)
    // See if we have exceeded our allowed depth of recursion.
    If (depth > max_depth) Then
        // We have exceeded the maximum allowed depth of recursion.
        // The outcome for this board position is unknown.
        best_value = Unknown
        Return
    End If

    // Find the move that gives player2 the lowest value.
    Value: lowest_value = Infinity
    Move: lowest_move
    For Each <possible test move>
        <Update board_position to make the test move>

        // Evaluate this board position.
        If <this is a win, loss, or draw> Then
            <Set lowest_value and lowest_move appropriately>
        Else
            // Recursively try other future moves.
            Value: test_value
            Move: test_move
            Minimax(board_position, test_move, test_value,
                    player2, player1, depth + 1, max_depth)

            // See if we found a worse move for player2.
            If (test_value < lowest_value) Then
                // This is an improvement. Save it.
                lowest_value = test_value
                lowest_move = test_move
            End If
        End If

        <Restore board_position to unmake the test move>
    Next <possible test move>

    // Save the best move.
    best_move = lowest_move

    // Convert board values for player2 into values for player1.
    If (lowest_value == Win) best_value = Loss
    Else If (lowest_value == Loss) best_value = Win
    Else ...
    End If
End Minimax

The algorithm starts by checking its depth of recursion. If it has exceeded its maximum allowed depth of recursion, the algorithm cannot determine the game's eventual outcome from this board position, so it sets best_value to Unknown and returns.

To find the best move for player1, the algorithm must find the move that gives player2 the worst board value. The algorithm creates variables to keep track of the lowest board value found so far for player2. It sets lowest_value equal to Infinity so that any board value it finds replaces the initial value of Infinity.

Next the algorithm loops through all the moves player1 could make. The Minimax algorithm makes a move and then recursively calls itself to find the best move player2 could make after player1 makes that test move.

After the recursive call returns, the algorithm compares the best result player2 could obtain with the value saved in lowest_value. If the test value is lower, the algorithm updates lowest_value and lowest_move, so it knows that this move is preferable (to player1).

After it finishes examining all the possible test moves, the algorithm knows which move player1 should make to give player2 the worst possible board position. It saves that move and then converts the value of the board for player2 into the value for player1. For example, if the best board position makes player2 lose, it makes player1 win, and vice versa.

In cases where player2 doesn't win or lose, it's a little less clear how to convert from a player2 value to a player1 value. For tic-tac-toe, the Unknown and Draw values are the same for both players. For example, if a board position gives player2 a draw, it gives player1 a draw as well.

For a more complicated game such as chess, a board position's value might be a number between −100 and +100, where +100 represents a win and −100 represents a loss. In that case, player2's value for a board position might simply be the negative of player1's value for the same board position.
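When the scoring is symmetric like this, minimax collapses into the compact "negamax" form, in which each player's value for a position is simply the negative of the opponent's. The following Python sketch (an illustration with my own names, scoring +1 for a forced win, 0 for a forced draw, and -1 for a forced loss, and caching positions so that the full tic-tac-toe tree searches quickly) shows the idea:

```python
from functools import lru_cache

# Winning lines: rows, columns, diagonals (squares numbered 0-8).
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

@lru_cache(maxsize=None)
def negamax(board, player):
    """Value of this position for the player about to move:
    +1 = forced win, 0 = forced draw, -1 = forced loss."""
    opponent = 'O' if player == 'X' else 'X'
    best = -2                                # worse than any real outcome
    for square in range(9):
        if board[square] == ' ':
            new = board[:square] + (player,) + board[square + 1:]
            if any(all(new[i] == player for i in line) for line in WINS):
                value = 1                    # this move wins immediately
            elif ' ' not in new:
                value = 0                    # board is full: a draw
            else:
                # The opponent's best result is this player's worst.
                value = -negamax(new, opponent)
            best = max(best, value)
    return best
```

Evaluating `negamax((' ',) * 9, 'X')` returns 0, confirming that tic-tac-toe is a draw under perfect play.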

One side effect of a simple minimax strategy that can sometimes be a problem is that the program considers all moves that produce the same board value equally desirable. To see why that can be a problem, suppose a game is close enough to the end for the program to realize that it will lose no matter what it does. In that case, it selects the first move it considers while searching the game tree, because all moves give the same result. That move may seem random or even foolish. For example, the program might pick a move that gives its opponent a win in the next move when a different move might have delayed the inevitable for two or three more moves. In contrast, a human would probably pick a move that made the game last longer, hoping the opponent will make a mistake or won't realize that the game is as good as over.

Conversely, the program might find a way to win in six moves and pick that over another strategy that would win in only two moves.

You can address these problems by favoring longer sequences of moves that lead to losses or ties and shorter sequences of moves that lead to wins.

A simple minimax strategy is enough for a winning tic-tac-toe game, but for more complicated games a program cannot search the entire game tree. The following sections describe some strategies you can use to search larger game trees.

Initial Moves and Responses

One way to reduce the size of the game tree is to store precomputed initial moves and responses. If you search the game tree ahead of time to find the best possible initial move, you can simply have the program make that move if it has the first turn. Instead of spending a noticeable amount of time searching for a first move, the program can move instantly.

The user moves next, so the computer doesn't need to move again until two moves have been made. The size of the game tree at that point depends on the particular moves made, but the tree will be much smaller than the original game tree. For example, the entire tic-tac-toe game tree contains 255,168 possible games. If X picks the upper-left square and O picks the upper-middle square, the remaining game tree contains only 3,668 possible games. That may still be too many to enumerate by hand, but it's a very small tree for a computer to search.

If the user moves first, the game tree also shrinks dramatically. If the user picks the upper-left square for the first move, the remaining game tree contains only 27,732 possible games. This is a lot more than the number of games after the second move, but it's a lot smaller than the entire game tree. With one additional change, you can make that number even smaller.

X has only nine choices for a first move. If you precalculate all the best responses to those first moves, you can make the program simply look up the appropriate response. Instead of searching a game tree containing 27,732 possible games, the program only needs to look up a response.

The user then moves again, so the program doesn't need to search the game tree until three moves have been made—one by the user, one a precalculated response, and another by the user. At that point the game tree is much smaller. For example, if X takes the upper-left square, O takes the upper-middle square, and X takes the upper-right square, the remaining game tree contains only 592 possible games. That's actually small enough that you could search the tree by hand if you wanted to.

In a more complicated game like chess, the game tree is infinitely large for practical purposes, so trimming the top few levels of the tree won't help as much. Skipping three moves might let you reduce the number of possible games from around 1.2 × 10⁵⁹ to roughly 4.5 × 10⁵⁴, but that's still much too big to search completely.

Using precalculated moves and responses does let a chess program make its first few moves quickly, however. It also lets you spend lots of time studying game openings so that you can invest extra time in planning those moves. It also lets the program avoid openings that would give it a big initial disadvantage.
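As a sketch of how precalculated responses might be stored, the following Python fragment uses a simple lookup table keyed by the opponent's first move. The entries here come from the common rule of thumb "take the center if it's free; if the center is taken, take a corner" rather than from a stored game-tree search, so treat the exact values as illustrative:

```python
# Hypothetical opening book for O. Keys are X's possible first moves
# (square indices 0-8, left to right and top to bottom); values are O's
# precalculated replies. Entries follow the "center, else corner" rule
# of thumb and stand in for responses found by searching the game tree.
OPENING_BOOK = {
    0: 4, 1: 4, 2: 4,
    3: 4, 4: 0, 5: 4,
    6: 4, 7: 4, 8: 4,
}

def respond_to_opening(first_move):
    """Look up O's reply to X's first move instead of searching."""
    return OPENING_BOOK[first_move]
```

With the table in place, the program's first response costs a single dictionary lookup instead of a search over tens of thousands of games.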

Game Tree Heuristics

The game trees for all but the simplest games are much too big to search completely, so in general there's no way to know if a particular move will lead to a better solution than another. Although you can't always know for certain that a particular move will be beneficial, sometimes you can use a heuristic to indicate a move's value.

A heuristic (pronounced “hyoo-riss-tik”) is an algorithm that is likely to produce a good result but that is not guaranteed to do so. Heuristics can't help you search the entire game tree, but they can give you rules for deciding which parts of the tree to avoid and which parts deserve special attention.

One type of game heuristic is to look for patterns in the board position. For example, one heuristic that some chess players use is “When ahead, trade mercilessly.” This means if you have the advantage and you can trade one of your pieces for a piece of equal value, you should do so. That can make your relative advantage greater and makes the game tree smaller so that it's easier to search in the future.

Other patterns that a chess program may look for include long sequences of trades, castling moves, moves that threaten multiple pieces, discovered check, moves that threaten the king or queen, promotion, en passant, and so forth.

When a program recognizes one of these patterns, it can alter the strategy it uses to search the game tree. For example, if the program sees a long series of exchanges, it might exceed its normal maximum depth of recursion to follow the exchange to the end to see if it will come out ahead.

Another kind of heuristic assigns numeric values to locations on the board and then modifies a board's total value based on the values of the locations occupied or threatened by a player's pieces. In tic-tac-toe, for example, you might assign each square a number indicating the number of winning lines that include it. The upper-left corner would have the value 3, because there are three ways to win by using that square. Figure 12-2 shows the square values for this heuristic.


Figure 12-2: The value of a tic-tac-toe square is the number of ways you can use that square to win.
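The values in Figure 12-2 follow directly from the list of winning lines: each square's value is the number of lines that pass through it. A quick Python check (squares numbered 0 through 8, left to right and top to bottom):

```python
# Winning lines: rows, columns, diagonals (squares numbered 0-8).
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

# For each square, count the winning lines that include it.
square_values = [sum(square in line for line in WINS)
                 for square in range(9)]
# Corners lie on 3 lines, edges on 2, and the center on 4:
# [3, 2, 3,
#  2, 4, 2,
#  3, 2, 3]
```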

In chess, the center four squares occupy a critical location, so you might give those squares more value. You also might want to assign different values for squares occupied by a piece and squares threatened by a piece.

In most games, the values of the board locations would change over time. For example, in the early stages of a chess game the central four squares are important. At the very end of the game, however, it is whether a player can achieve a checkmate that is important, not whether the player controls those squares.

NOTE Writing a Reversi game is an interesting exercise in game programming. The rules are much simpler than those for chess, but the game tree is much larger than the tree for tic-tac-toe, so you can't search it completely. The way pieces move is much simpler than chess, so some patterns at least are easier to recognize. By using board location values alone and some tree searching, you can build a reasonably strong Reversi program. For more information on Reversi, including the rules and some notes about strategy, see http://en.wikipedia.org/wiki/Reversi.

The later section “Decision Tree Heuristics” has more to say about heuristics.

Searching General Decision Trees

By modeling a game's moves as a tree, you convert the problem of picking a good move into a search for the best path through the tree. Similarly, you can model many other decision processes with a tree.

For example, consider the partition problem. You have a collection of objects of a given weight (or cost or value or some other measure), and you need to divide them into two groups that have the same total weight. In some cases, this is easy. If you have four objects with weights 2, 4, 1, and 1, it's obvious that you can put the large object in the first group and the other objects in the second group. Similarly, if you have an even number of objects that all have the same weight, you can simply place half in one group and half in the other.

The problem is much harder if you have a large number of objects with varying weights. In that case, you can model the process of deciding which objects go in which group with a binary decision tree. Here the Kth level of the tree represents a decision about the Kth object. A left branch represents putting the object in the first group, and a right branch represents putting the object in the second group.

Figure 12-3 shows a complete decision tree for a partition problem with four objects having weights 2, 4, 1, and 1. A path through the tree represents a complete assignment of objects to the two groups. For example, the path that follows the root's left branch and then the next three right branches puts the first object (weight 2) in the first group and the other objects (weights 4, 1, and 1) in the second group. The numbers below the tree show the total weights of the two groups—in this case, 2 and 6.


Figure 12-3: You use a decision tree to model the partition problem.

Notice that only two leaf nodes in Figure 12-3 correspond to dividing the objects' weights equally so that both groups have a total weight of 4. The two solutions are basically the same solution with the objects in the two groups switched.

NOTE In fact, any solution you find will have a complementary solution with the two groups switched. If you arbitrarily pick an item and place it in the first group before starting the search, you can shorten the tree by one level. That eliminates solutions that have the chosen item in the second group, but the tree will still contain solutions if there are any.

The decision tree shown in Figure 12-3 is fairly large even though it represents a problem with only four objects. For larger problems, the decision tree is enormous. For example, if you need to divide 50 objects into two groups, the tree holds 2⁵⁰ leaf nodes, representing roughly 1.13 × 10¹⁵ possible solutions. If only a few of the arrangements produce an even division of weight, it could be very difficult to find a good solution.

The following section explains the difference between two versions of problems such as the partition problem — one that is very hard to solve, and one that is extremely hard to solve. The sections after that explain general methods you can use to search decision trees efficiently.

Optimization Problems

Problems such as the partition problem often come in two closely related forms. The first form asks if a particular solution is possible. The second form asks you to find the best solution possible.

For the partition problem, the first question asks whether you can divide the objects into two groups with equal total weights. The second question asks you to divide the objects into two groups with total weights as close to equal as possible. The second question is called an optimization problem because you can divide the objects in many ways, and you must find the optimum division.

The optimization version of the problem is in some ways easier because it allows approximate solutions. The other version of the problem requires a strictly yes-or-no answer.

For example, suppose you need to divide 100 items with a total combined weight of 400 into two groups. If you search the decision tree and find an exactly equal division, you know the answer to the first question is yes. However, you might search the tree for hours or even days and never find a division that is exactly equal. In that case, you cannot conclude that no such division exists, only that you haven't found one.

In contrast, you can easily find solutions to the optimization version of the problem. Those solutions may not be very good, but at least you can find an answer that approximates the best possible solution. If you search the decision tree long enough, usually you can find a solution that is reasonably good, even if it isn't perfect. Of course, you might get lucky and find a solution that divides the objects exactly evenly. If you don't find such a solution, you cannot conclude that no such solution exists, but at least you've found an approximate solution.

The following sections discuss methods you can use to search decision trees. The first two describe methods you can use to solve either the optimization or nonoptimization version of a problem. The final section, on decision tree heuristics, works only on the optimization version of a problem.

Exhaustive Search

The simplest way to search a decision tree is to visit all its nodes, looking for the best solution. Note that you don't actually need to build a decision tree to search it. You just need a way to keep track of where you are in the tree. Many algorithms use recursion to pick branches at different levels of the tree, and those recursive calls can keep track of their positions in the tree.

For example, the following pseudocode shows a basic high-level algorithm that exhaustively searches for a solution to the optimization version of the partition problem:

StartExhaustiveSearch()
    <Initialize best solution so it is replaced by the first test solution>
    ExhaustiveSearch(0)
End StartExhaustiveSearch

ExhaustiveSearch(Integer: next_index)
    // See if we are done.
    If <next_index > max_index> Then
        // We have assigned all items, so we are at a leaf node.
        <If the test solution is better than the best solution found
         so far, save it>
    Else
        // We have not assigned all items, so we are not at a leaf node.
        <Assign item next_index to group 0>
        ExhaustiveSearch(next_index + 1)
        <Unassign item next_index from group 0>

        <Assign item next_index to group 1>
        ExhaustiveSearch(next_index + 1)
        <Unassign item next_index from group 1>
    End If
End ExhaustiveSearch

The StartExhaustiveSearch method initializes the best solution found so far. Normally it simply sets the value of that solution (which, in the case of the partition problem, is the difference between the weights of the two groups) to a very large number, so the first valid test solution will be an improvement.

The StartExhaustiveSearch method then calls ExhaustiveSearch to do all the real work.

The ExhaustiveSearch method takes as a parameter the index of the item that it should assign to a group. This is the same as the depth of recursion and the level in the decision tree.

If ExhaustiveSearch has assigned all the items to one group or another, it compares the test solution to see if it is better than the best solution found so far. If the test solution is an improvement, the method saves it as the new best solution.

If ExhaustiveSearch has not yet assigned every item to a group, it tries assigning item number next_index to group 0 and then calls itself recursively to assign the remaining items. After the recursive call returns, the method tries assigning item number next_index to group 1 and again calls itself recursively to assign the remaining items.

Eventually the recursive calls work their way down the tree until they reach leaf nodes and update the best solution if appropriate.

This basic algorithm is fairly flexible and can be adapted for many different problems.

For the partition problem, you can use an array to store the test solution and the best solution found so far. The Kth entry in the array should be a 0 or 1 to indicate whether the Kth item is assigned to group 0 or group 1. When the algorithm reaches a leaf node, it should add up the weights of the items in each group and compare the difference to the best distance found so far.
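As a concrete illustration of that array-based bookkeeping (the function and variable names are my own, not from the text), the exhaustive search might look like this in Python:

```python
def exhaustive_partition(weights):
    """Try every assignment of items to groups 0 and 1. Return the
    smallest achievable difference between the groups' total weights
    and one assignment that achieves it."""
    n = len(weights)
    assignment = [0] * n      # assignment[k]: which group item k is in
    best = {'diff': float('inf'), 'assignment': None}

    def search(next_index):
        if next_index == n:
            # Leaf node: every item is assigned. Compare to the best.
            group0 = sum(w for w, g in zip(weights, assignment) if g == 0)
            group1 = sum(weights) - group0
            diff = abs(group0 - group1)
            if diff < best['diff']:
                best['diff'] = diff
                best['assignment'] = assignment[:]
        else:
            # Try both groups for the next item.
            for group in (0, 1):
                assignment[next_index] = group
                search(next_index + 1)

    search(0)
    return best['diff'], best['assignment']
```

For the four weights used earlier, `exhaustive_partition([2, 4, 1, 1])` finds a perfect split: the item of weight 4 in one group and the rest in the other, for a difference of 0.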

This algorithm is reasonably straightforward and works, but the fact that it searches the entire decision tree makes it relatively slow. This method will never be fast, but you can make one improvement that sometimes shortens the search considerably.

If the algorithm ever reaches a leaf node where the test assignment makes two groups with exactly equal total weights, it can stop without searching the rest of the decision tree. If the tree contains many optimal solutions, “short circuiting” a search in this way may let the algorithm find a solution relatively quickly and skip searching much of the tree.

For example, in one test, while trying to divide 20 items into two groups of equal weight, a full exhaustive search visited 2,097,150 nodes. When allowed to stop the search after finding an optimal solution, the algorithm visited only 4,098 nodes. The results vary greatly depending on the specific weights.

Branch and Bound

Branch and bound is a technique for searching trees more effectively than an exhaustive search does. After it moves down a branch in the tree, the algorithm calculates the best possible outcome it can achieve down that branch. If the best possible outcome won't be an improvement over the best solution that has already been found, the algorithm abandons that branch and doesn't continue down its subtree. Depending on the specific data values, this can save a huge amount of time.

For example, suppose a partition problem algorithm keeps track of the current total weight in each of the two groups it is building and the total weight of the items that have not yet been assigned to a group. Now suppose the algorithm has reached a point where group 0 has a total weight of 100, group 1 has a total weight of 50, and the unassigned items have a total weight of 20. Suppose also that the algorithm has already found a solution in which the two groups have weights that differ by 20.

If the algorithm were to assign all the remaining items to group 1, group 0 would have a total weight of 100, and group 1 would have a total weight of 70, a difference of 30. But the algorithm has already found a solution in which the difference is only 20. The current test solution cannot be improved enough to make it better than the current best solution. In that case, the algorithm can stop working on its current solution without assigning the rest of the items.

The following pseudocode shows a high-level branch and bound algorithm for the optimization version of the partition problem:

StartBranchAndBound()
    <Initialize best solution so it is replaced by the first test solution>
    BranchAndBound(0)
End StartBranchAndBound

BranchAndBound(Integer: next_index)
    // See if we are done.
    If <next_index > max_index> Then
        // We have assigned all items, so we are at a leaf node.
        <If the test solution is better than the best solution, save it>
    Else
        // We have not assigned all items, so we are not at a leaf node.
        If <the test solution cannot be improved enough to beat
            the current best solution> Then Return

        <Assign item next_index to group 0>
        BranchAndBound(next_index + 1)
        <Unassign item next_index from group 0>

        <Assign item next_index to group 1>
        BranchAndBound(next_index + 1)
        <Unassign item next_index from group 1>
    End If
End BranchAndBound

This algorithm is similar to the exhaustive search algorithm, except that it determines whether the test solution can be improved enough to beat the current best solution, and it returns without recursion if it can't.
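A Python version of the same idea (again with illustrative names) keeps running totals for both groups and prunes whenever even a perfect split of the unassigned weight couldn't beat the best difference found so far:

```python
def branch_and_bound_partition(weights):
    """Like the exhaustive search, but abandon any branch whose best
    possible outcome cannot improve on the best solution so far."""
    n = len(weights)
    total = sum(weights)
    assignment = [0] * n
    best = {'diff': float('inf'), 'assignment': None}

    def search(next_index, weight0, weight1):
        unassigned = total - weight0 - weight1
        # Even if every remaining item went into the lighter group,
        # the difference could not drop below this bound.
        bound = max(abs(weight0 - weight1) - unassigned, 0)
        if bound >= best['diff']:
            return              # prune: no improvement is possible here
        if next_index == n:
            diff = abs(weight0 - weight1)
            if diff < best['diff']:
                best['diff'] = diff
                best['assignment'] = assignment[:]
        else:
            w = weights[next_index]
            assignment[next_index] = 0
            search(next_index + 1, weight0 + w, weight1)
            assignment[next_index] = 1
            search(next_index + 1, weight0, weight1 + w)

    search(0, 0, 0)
    return best['diff'], best['assignment']
```

In the scenario described above (group 0 at 100, group 1 at 50, 20 unassigned, best difference 20 so far), the bound works out to 100 - 50 - 20 = 30, so this version would prune that branch immediately.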

Branch and bound often trims many branches and their subtrees from the decision tree, so it can be much faster than an exhaustive search.

For example, in one test, while trying to divide 20 items into two groups of equal weight, a full exhaustive search visited 2,097,150 nodes, but a branch and bound search visited only 774,650 nodes. When both algorithms were allowed to use the “short circuit” described in the preceding section to stop early, the exhaustive search visited 4,082 nodes, but the branch and bound search visited only 298 nodes.

Branch and bound is a very useful technique, but I want to mention two important facts before moving on to decision tree heuristics. First, branch and bound searches any path through the tree that might lead to a solution better than the best solution found so far. That means, like exhaustive search, it always finds the optimal solution.

The second important fact is that, although branch and bound often avoids searching large parts of the decision tree, decision trees can be enormous, so branch and bound can still be fairly slow.

In one test, exhaustive search could search a decision tree for a 25-item partition problem in roughly 6.6 seconds. Branch and bound could search the same tree in roughly 2 seconds. That's a big improvement, but adding a new item to the problem roughly doubles the tree's size. Adding one more item made branch and bound take about 4 seconds, and adding a second item made it take 7.9 seconds.

Branch and bound is much faster than exhaustive search, but it still isn't fast enough to search a really big decision tree such as the 2.2 trillion node tree representing the partition problem with 40 items.

Decision Tree Heuristics

Exhaustive search and branch and bound find the best possible solution. Unfortunately, decision trees are so large that those algorithms work only for relatively small problems.

To search larger trees, you need to use heuristics. A heuristic won't necessarily find the best possible solution, but it may find a fairly good solution—at least for the optimization version of a problem where approximate solutions make sense.

The following sections describe several heuristics for use with the partition problem.

Random Search

One of the simplest heuristics for searching a decision tree is to follow random paths through it. At each node, simply pick a branch to follow randomly. If you try enough random paths, you may stumble across a reasonably good solution.

The following pseudocode shows how you might search a decision tree randomly:

RandomSearch()
    <Initialize best solution so it is replaced by the first test solution>
    For i = 1 To num_trials
        For index = 0 To max_index
            <Randomly assign item number index to group 0 or 1>
        Next index

        // See if this solution is an improvement.
        <If the test solution is better than the best solution, save it>
    Next i
End RandomSearch

The algorithm starts by initializing the best solution as usual. It then enters a loop that it executes for some number of trials.

For each trial, the algorithm loops over the indices of the items to be partitioned and randomly assigns each item to either group 0 or group 1.

After it has randomly assigned every item to a group, the algorithm checks the solution to see if it is better than the best solution found so far and, if it is, saves the new solution.

If you are trying to partition N weights, each trial takes only N steps, so this heuristic is extremely fast. That's good, because in a large decision tree, the odds of your finding a good solution may be small, so you need to run a lot of trials.
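Here is one way the pseudocode might look as runnable Python. This is a sketch; the function name, the seeded random generator, and the difference measure are my own choices.

```python
import random

def random_search(weights, num_trials, rng=random.Random(0)):
    """Try num_trials random partitions; keep the one with the smallest
    difference between the two groups' total weights."""
    total = sum(weights)
    best_assignment, best_diff = None, float("inf")
    for _ in range(num_trials):
        # Randomly assign each item to group 0 or group 1.
        assignment = [rng.randrange(2) for _ in weights]
        group0 = sum(w for w, g in zip(weights, assignment) if g == 0)
        diff = abs(2 * group0 - total)  # |group0 - group1|
        if diff < best_diff:
            best_assignment, best_diff = assignment, diff
    return best_assignment, best_diff

assignment, diff = random_search([3, 4, 1, 5, 6], 1000)
print(assignment, diff)
```

With only 2^5 = 32 possible assignments here, 1,000 trials almost certainly stumble across an optimal partition; in a large tree the same budget covers only a tiny fraction of the leaves.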

There are several ways you could pick the number of trials to run. You could just run a fixed number of trials—say, 1,000. That will work for small decision trees, but it might be better to pick a number that depends on the tree's size.

Another strategy is to make the number of trials a polynomial function of the number of weights being partitioned. For example, if you are partitioning N weights, you could use num_trials = 3 × N^3. The function 3 × N^3 grows quickly as N increases, but not nearly as quickly as 2^N, so this still searches only a tiny fraction of the decision tree.

Another approach is to continue trying random paths until a certain number of random paths in a row fail to find an improvement. Then the algorithm won't stop as long as it's fairly easy to find improvements.

Perhaps the ideal approach is to let the algorithm run continuously, updating its best solution when it finds improvements, until you stop it. That way if you don't need a solution in a hurry, you can let the algorithm run for hours or possibly even days.

Improving Paths

You can make random path selection more effective if you pick a random path and then try to improve it. Start with a random path. Then randomly pick an item, and switch it from the group it is in to the other group. If that improves the partitioning, keep that change. If that change doesn't help, undo it and try again. Repeat this process many times until you can't improve the path any more.

This technique has many variations. For example, instead of swapping random items, you could try swapping each item one at a time. You might want to repeat that process several times, because swapping one item may change the weights of the two groups so that it is now possible to swap some other item that you could not swap before.

The following pseudocode shows this algorithm:

MakeImprovements()
    <Initialize best solution so it is replaced by the first test solution>
    For i = 1 To num_trials
        // Make a random initial solution.
        For index = 0 To max_index
            <Randomly assign item number index to group 0 or 1>
        Next index

        // Try to improve the solution.
        Boolean: had_improvement = True
        While (had_improvement)
            // Assume this time we won't have any improvement.
            had_improvement = False

            // Try swapping items.
            For index = 0 To max_index
                <Swap item number index into the other group>

                // See if this improves the test solution.
                If <this swap improves the test solution> Then
                    had_improvement = True
                Else
                    <Swap the item back>
                End If
            Next index
        Loop

        // See if this solution is an improvement.
        <If the test solution is better than the best solution, save it>
    Next i
End MakeImprovements

The algorithm enters a loop to perform a certain number of trials. For each trial, it picks a random test solution.

It then enters a loop that executes as long as the algorithm finds an improvement for the random test solution. Each time through this improvement loop, the algorithm tries swapping each item into the group to which it isn't currently assigned. If that swap improves the test solution, the algorithm keeps it. If that swap does not improve the test solution, the algorithm undoes it.

After it can find no more improvements, the algorithm compares the test solution to the best solution found so far and keeps it if it is better.
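A runnable Python sketch of this improvement loop might look like the following. The names and the seeded random generator are my own; the logic follows the pseudocode.

```python
import random

def make_improvements(weights, num_trials, rng=random.Random(1)):
    """Pick random partitions and improve each one by single-item swaps
    until no swap reduces the difference in group weights."""
    total = sum(weights)

    def diff(assignment):
        group0 = sum(w for w, g in zip(weights, assignment) if g == 0)
        return abs(2 * group0 - total)

    best, best_diff = None, float("inf")
    for _ in range(num_trials):
        assignment = [rng.randrange(2) for _ in weights]
        current = diff(assignment)
        improved = True
        while improved:
            improved = False
            for i in range(len(weights)):
                assignment[i] = 1 - assignment[i]      # try swapping item i
                new_diff = diff(assignment)
                if new_diff < current:
                    current = new_diff                  # keep the swap
                    improved = True
                else:
                    assignment[i] = 1 - assignment[i]  # undo the swap
        if current < best_diff:
            best, best_diff = list(assignment), current
    return best, best_diff

best, best_diff = make_improvements([6, 5, 5, 5, 3, 3], 20)
print(best, best_diff)
```

Because single swaps can get stuck in local optima (as the example below shows), the random restarts do real work here: different starting assignments lead the improvement loop to different local optima.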

You can pick the number of trials to run in the same ways you can for the random heuristic described in the previous section: run a fixed number of trials, run a number of trials that depends on the number of weights being partitioned, run until a certain number of trials in a row finds no improved best solution, or run until you decide to stop the algorithm.

Sometimes it is not possible to improve a path by making a single swap. For example, suppose you are partitioning the weights 6, 5, 5, 5, 3, and 3. Suppose also that you pick a random path that makes the two groups {6, 3, 3} and {5, 5, 5} so that the groups have total weights of 12 and 15. Therefore, their total weights differ by 3.

Moving an item from the first group to the second only makes the difference greater, so that won't improve the solution.

If you moved an item with weight 5 from the second group to the first, the groups would be {6, 5, 3, 3} and {5, 5}, so their total weights would be 17 and 10—not an improvement.

No single swap can improve this solution. But if you move an item with weight 3 from the first group to the second, and you also move an item with weight 5 from the second group to the first, you get the groups {6, 5, 3} and {5, 5, 3}. The groups would then have weights 14 and 13, an improvement over the original solution.

The single swap strategy described in this section won't find this improvement, because it requires you to make two swaps at the same time. Other improvement strategies try swapping two items at the same time. Of course, there are also improvements you cannot make by swapping two items that you can make by swapping three items, so that strategy doesn't always work either. Still, swapping two items at a time isn't too difficult and may result in some improvements, so it is worth implementing.

Simulated Annealing

Simulated annealing is an improved version of the simple improvement heuristic described in the preceding section. Simulated annealing initially makes large changes to a solution and then over time makes smaller and smaller changes to try to improve the solution.

As mentioned in the preceding section, one problem with the original improvement heuristic is that sometimes moving a single item from one group to the other won't let you improve the solution, but moving two items at the same time might. Even that method has limits. There may be cases where moving two items at the same time won't get you an improvement but moving three will.

Simulated annealing addresses this issue by allowing the algorithm to make large changes to the initial solution. Over time the size of the allowed changes is reduced. The algorithm tries smaller and smaller changes until finally it reaches a test solution that it compares to the best solution found so far.

NOTE Simulated annealing is modeled on the way crystals grow in a cooling metal or mineral. When the material is very hot, the molecules move quickly, so their arrangement changes a lot. As the material cools, the molecular motion decreases and structures form, but there's still enough energy to allow some structures to merge with others if that forms a more stable relationship. Eventually the material cools enough that there isn't enough energy to disrupt the molecular structure. If the cooling happened slowly enough, the material should contain only a few very large crystals representing a very stable arrangement of molecules.

Another way to implement simulated annealing is to consider random changes of any complexity. If a change results in an improvement, the algorithm accepts it and continues. If a change doesn't result in an improvement, the algorithm accepts it anyway with a certain probability. Over time that probability decreases, so initially the algorithm may make the solution worse so that it can later reach a better end result. Eventually the probability of accepting a nonimproving change decreases until the algorithm accepts only changes that improve the solution.
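A minimal Python sketch of that second variant follows: worsening single-item moves are accepted with probability exp(-delta / T), and the "temperature" T decays so that late moves must improve. The starting temperature, step count, and cooling rate are illustrative assumptions, not prescriptions.

```python
import math
import random

def simulated_annealing(weights, steps=5000, start_temp=10.0,
                        rng=random.Random(2)):
    """Anneal a random partition: accept worsening moves with
    probability exp(-delta / temp); temp decays toward zero."""
    total = sum(weights)

    def diff(assignment):
        group0 = sum(w for w, g in zip(weights, assignment) if g == 0)
        return abs(2 * group0 - total)

    assignment = [rng.randrange(2) for _ in weights]
    current = diff(assignment)
    best = current
    temp = start_temp
    for _ in range(steps):
        i = rng.randrange(len(weights))
        assignment[i] = 1 - assignment[i]       # propose moving item i
        new = diff(assignment)
        delta = new - current
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = new                        # accept the move
            best = min(best, current)
        else:
            assignment[i] = 1 - assignment[i]   # reject: undo the move
        temp *= 0.999                            # cooling schedule
    return best

print(simulated_annealing([6, 5, 5, 5, 3, 3]))
```

Early on, when the temperature is high, the walk accepts almost any move, so it can escape the local optima that trap the simple improvement heuristic.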

Hill Climbing

Imagine you're a lost hiker. It's nighttime, so you can't see very far, and you need to find the top of the mountain. One strategy you could use would be to always move up the steepest slope. If the mountain has a reasonably smooth shape with no small peaks or hills on its side, you'll eventually reach the top. If there is a smaller hill on the side of the mountain, however, you may become stuck there and not know which way to go until morning.

In a hill-climbing heuristic, the algorithm always makes a choice that moves it closer to a better solution. For the partitioning problem, that means placing the next item in the group that minimizes the difference in the groups' weights. That's equivalent to adding the item to the group that has the smaller total weight.

For example, suppose the items have weights 3, 4, 1, 5, and 6. The first item can go in either group, so suppose it's placed in the first group.

Now the algorithm considers the second item, with weight 4. If the algorithm places the second item in the first group, the groups are {3, 4} and {}, so the difference between their total weights is 7. If the algorithm places the second item in the second group, the groups are {3} and {4}, so the difference between their total weights is 1. To make the best choice at the time, the algorithm places the item in the second group.

Next the algorithm considers the third item, with weight 1. If the algorithm places this item in the first group, the groups are {3, 1} and {4}, so the difference between their total weights is 0. If the algorithm places the item in the second group, the groups are {3} and {4, 1}, so the difference between their total weights is 2. To make the best choice at the time, the algorithm places the item in the first group.

The algorithm continues in this manner until it has placed all the items in a group.

The following pseudocode shows the hill-climbing algorithm:

HillClimbing()
    For index = 0 To max_index
        Integer: difference_0 = <difference in group weights if item number index is in group 0>
        Integer: difference_1 = <difference in group weights if item number index is in group 1>
        If (difference_0 < difference_1) Then
            <Place item number index in group 0>
        Else
            <Place item number index in group 1>
        End If
    Next index
End HillClimbing

If you are partitioning N weights, this algorithm performs only N steps, so it is extremely fast. In a large decision tree, it is unlikely to find the best possible solution, but sometimes it finds a reasonable solution.

Hill climbing is so fast that you could spend some extra time improving its solution. For example, you could try using the techniques described in the preceding section to improve the initial solution.
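The hill-climbing pseudocode translates almost directly into Python. This sketch (names are mine) reproduces the worked example above: placing each item in whichever group is currently lighter is the same as minimizing the resulting weight difference.

```python
def hill_climbing(weights):
    """Place each item, in the given order, into the lighter group."""
    groups = [[], []]
    totals = [0, 0]
    for w in weights:
        # Adding w to the lighter group minimizes the new difference.
        g = 0 if totals[0] <= totals[1] else 1
        groups[g].append(w)
        totals[g] += w
    return groups, abs(totals[0] - totals[1])

groups, diff = hill_climbing([3, 4, 1, 5, 6])
print(groups, diff)  # → [[3, 1, 5], [4, 6]] 1
```

For the weights 3, 4, 1, 5, and 6, this produces the groups {3, 1, 5} and {4, 6}, with total weights 9 and 10, matching the choices described in the text.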

Sorted Hill Climbing

One easy way to improve the hill-climbing algorithm is to sort the weights and then consider them in order of decreasing size. The idea is that the early stages of the algorithm place the heavier objects in groups and then the later stages use the smaller items to balance the groups.

The following pseudocode shows this algorithm:

SortedHillClimbing()
    <Sort the items in order of decreasing weight>
    For index = 0 To max_index
        Integer: difference_0 = <difference in group weights if item number index is in group 0>
        Integer: difference_1 = <difference in group weights if item number index is in group 1>
        If (difference_0 < difference_1) Then
            <Place item number index in group 0>
        Else
            <Place item number index in group 1>
        End If
    Next index
End SortedHillClimbing

This is the same as the hill-climbing algorithm described in the preceding section, with the addition of the sorting step.

This may seem like a small modification, but sorted hill climbing often finds a better solution than hill climbing.

If you are partitioning N weights, the sorted hill-climbing algorithm takes O(N log N) steps to sort the weights and then N steps to generate its solution. The sorting step makes it slower than the normal hill-climbing algorithm, but it's still extremely fast.

In fact, sorted hill climbing is so fast that you could spend some extra time improving its solution, just as you can improve the normal hill-climbing algorithm's solution.
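As a Python sketch, the only change from plain hill climbing is the sort (again, the names are mine):

```python
def sorted_hill_climbing(weights):
    """Sort descending, then place each item into the lighter group."""
    totals = [0, 0]
    for w in sorted(weights, reverse=True):
        g = 0 if totals[0] <= totals[1] else 1
        totals[g] += w
    return abs(totals[0] - totals[1])

print(sorted_hill_climbing([3, 4, 1, 5, 6]))
```

Placing the heavy items first leaves the small items to fine-tune the balance at the end, which is why this version often beats the unsorted one.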

Other Decision Tree Problems

This chapter has focused on the partition problem, but you can use decision trees to model many other difficult problems. The following sections describe some algorithmic problems you can study with decision trees.

Many of the problems come in pairs—one problem that asks whether something is possible, and another that asks for an optimal solution.

Generalized Partition Problem

In the partition problem, the goal is to divide a set of objects into two groups with equal weight. In the generalized partition problem, the goal is to divide a set of objects into K groups with equal weight.

The decision tree for this problem has K branches at each node corresponding to putting the item at that level of the tree into one of the K different partitions. If you have N items, the tree is N levels tall, so it contains K^N leaf nodes.

The same heuristics that work for the partition problem also work for the generalized partition problem, although they are more complicated. For example, a random improvement for the partition problem might try moving an object from one group to the other. In the generalized partition problem, it would need to consider moving the object from one group into any of the other K − 1 groups.

The optimization version of the generalized partition problem asks you to find a way to divide the items into K groups, but you need to decide how to judge the best solution. For example, you might try to minimize the sum of the absolute values of the differences between the groups' weights and the average group weight. Suppose you have four groups with total weights 15, 18, 22, and 25. The average of those weights is 20, so the absolute values of the differences between the group weights and the average are 5, 2, 2, and 5, making the sum of those differences 14.

Alternatively, you might want to minimize the sum of the squares of the differences between the groups' weights and the average. For the preceding example, the squared differences would be 25, 4, 4, and 25, so the sum would be 58. This measurement would favor solutions where all the group weights are close to the average.
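The two scoring rules are easy to compute side by side. This short Python sketch (the function name is mine) reproduces the numbers from the example above:

```python
def spread_scores(group_weights):
    """Score how evenly K groups are balanced: sum of absolute
    deviations from the average, and sum of squared deviations."""
    avg = sum(group_weights) / len(group_weights)
    abs_sum = sum(abs(w - avg) for w in group_weights)
    sq_sum = sum((w - avg) ** 2 for w in group_weights)
    return abs_sum, sq_sum

print(spread_scores([15, 18, 22, 25]))  # → (14.0, 58.0)
```

Because squaring penalizes large deviations more heavily, the second score pushes solutions toward having every group near the average rather than a few far from it.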

Subset Sum

In the subset sum problem, you have a set of numbers, and you want to determine whether there is a subset whose sum is 0. For example, consider the set {−11, −7, −5, −3, 4, 6, 9, 12, 14}. This set has the zero-sum subset {−7, −5, −3, 6, 9}. A related optimization version of the problem would ask you to find a subset with a sum close to 0.

You can model this problem with a decision tree similar to the one you use for the partition problem. Essentially, you need to divide the items into two groups—one that holds objects to go into the zero-sum set, and one that holds objects that will be discarded.

Like the decision tree for the partition problem, if you are working with N items, this tree is N levels tall. Each node has two branches—one corresponding to adding an item to the zero-sum set, and one corresponding to discarding the item—so the tree has 2^N leaf nodes.

You can use branch and bound and heuristics on the optimization version of this problem but not on the nonoptimization version.
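For small instances, exhaustively searching this tree is straightforward. The following Python sketch (names mine) simply tries every nonempty subset, which corresponds to visiting every leaf of the 2^N-leaf tree:

```python
from itertools import combinations

def zero_sum_subset(numbers):
    """Exhaustively search all nonempty subsets for one summing to 0.
    Returns a zero-sum subset, or None if there is none."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == 0:
                return subset
    return None

print(zero_sum_subset([-11, -7, -5, -3, 4, 6, 9, 12, 14]))
```

For the nine-number example set, this checks at most 2^9 - 1 = 511 subsets, but the same approach becomes hopeless once N grows past 30 or so.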

Bin Packing

In the bin-packing problem, you have a set of items of different weights and a series of bins that have the same capacity. (In a generalized version, the bins could have different capacities.) The goal is to pack the items into the bins so that you use as few bins as possible.

You could model this as a decision tree in which each branch corresponds to putting an item in a particular bin. If you have N items and K bins, the tree would have N levels with K branches at each node, so the tree would have K^N leaf nodes.

This is an optimization problem. You can use branch and bound and heuristics to try to find good solutions.

A related problem is to find a way to pack the items into bins so that you use only ⌈<total weight of all items> ÷ <bin capacity>⌉ bins, where ⌈ ⌉ means to round up. For example, if the total weight of the items is 115, and the bins have a capacity of 20, can you find a way to pack the items into only six bins? You can use heuristics to try to find a good solution, but if you don't find a solution, that doesn't mean one doesn't exist.
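That lower bound is a one-line calculation. In this Python sketch (the function name and example weights are mine), the example set has total weight 115 and bin capacity 20, giving the six-bin target mentioned above:

```python
import math

def min_bins(weights, capacity):
    """Lower bound on the number of bins needed:
    ceil(total weight / bin capacity)."""
    return math.ceil(sum(weights) / capacity)

print(min_bins([20, 15, 15, 15, 20, 15, 15], 20))  # total 115 → 6 bins
```

Note that this is only a lower bound: a packing using exactly that many bins may not exist, because the items may not divide evenly among the bins.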

Cutting Stock

The cutting stock problem is basically a two-dimensional version of the bin-packing problem. In this problem, you need to cut out a collection of shapes (usually rectangles) from a set of boards, pieces of cloth, or other stock. The goal is to use as few pieces of stock as possible.

Modeling this problem as a decision tree is much harder than it is for the bin-packing problem, because how you position shapes on a piece of stock changes the number of pieces you can fit on that piece. That means assigning a shape to a piece of stock isn't enough. You must also assign a position to the shape within the stock. If you have K pieces of stock, there are more than K places where you can position a shape, so a node may have more than K branches.

If you make some simplifying assumptions, you can still use a decision tree for this problem. For example, if the pieces of stock are 36 inches by 72 inches, and you allow a shape to be positioned at only an (X, Y) position where X and Y are integer numbers of inches, there are 36 × 72 = 2,592 positions where you can place a shape on a particular piece of stock. That means each node in the tree would have K × 2,592 branches.

Fortunately, many of those branches are easy to trim from the tree. For example, some branches place a shape so close to an edge of the stock that it won't fit. Other branches make a shape overlap with another shape. If you avoid following those kinds of branches, you can search the tree to find at least some solution.

The tree will still be extremely large, however, so you'll need to use heuristics to find reasonable solutions.

Also note that the simplifying assumptions may exclude some solutions. For example, suppose you want to fit five 7-inch-by-7-inch squares on 20-inch-by-20-inch sheets of stock. If you place the squares so that their edges are parallel to the sides of the stock, you can fit only two squares vertically and horizontally on each piece of stock, as shown on the left in Figure 12-4. If you rotate one of the squares, however, you can fit all five squares on a single piece of stock, as shown on the right.


Figure 12-4: If you don't allow rotation, some solutions are impossible.

A common variation of the cutting stock problem involves a single very long piece of stock, such as a roll of paper. The goal is to minimize the number of linear inches of stock used.

Knapsack

In the knapsack problem, you are given a set of objects that each have a weight and a value, and a knapsack that holds a certain amount of weight. Your goal is to find the items with the maximum value that will fit in the knapsack. For example, you might fill the knapsack with a few heavy items that have high values, or you may be better off filling it with lots of lighter items that have lower values.

The knapsack problem is similar to the partition problem, in which you try to divide the items into two groups—one with items to go into the knapsack, and one with items to remain outside. In the knapsack problem, the goal is to make the first group as valuable as possible and also to ensure that the first group fits inside the knapsack.

Because the knapsack's basic structure is similar to that of the partition problem, you can use a similar decision tree. The main differences are that not all assignments are legal due to the weight constraint, and the goal is different.

The same techniques that work well with the partition problem also work with the knapsack problem. Random solutions, improved solutions, and simulated annealing work in much the same way they do for the partition problem.

Branch and bound could stop examining a partial solution if its total weight already exceeded the knapsack's capacity.

Branch and bound could also stop if the total value of the unconsidered items was insufficient to raise the current solution's value enough to beat the best solution found so far. For example, suppose you've found a solution worth $100 and you're examining a partial solution that has a value of $50. If the total value of all the remaining items is only $20, you cannot improve this solution enough to beat the current best solution.

At each step, a hill-climbing heuristic would add the highest-value item that could still fit in the knapsack to the solution.

A sorted hill-climbing heuristic might consider the items in order of decreasing weight so that later selections could fill in any remaining room in the knapsack.

Probably a better sorted hill-climbing heuristic would consider the items in order of decreasing value-to-weight ratio, so it would first consider the items worth the most dollars per pound (or whatever the units are).
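A Python sketch of that ratio-based heuristic follows (the names and example items are mine). The example also shows the heuristic's limits: it picks the $60 item first because it has the best dollars-per-pound ratio, even though skipping it would allow a more valuable combination.

```python
def greedy_knapsack(items, capacity):
    """Sorted hill climbing for the knapsack problem: consider items in
    decreasing value-to-weight order, taking each one that still fits."""
    chosen, weight, value = [], 0, 0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight + w <= capacity:
            chosen.append((v, w))
            weight += w
            value += v
    return chosen, value

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight) pairs
chosen, value = greedy_knapsack(items, 50)
print(chosen, value)
```

Here the heuristic packs the first two items for a value of 160, while the optimal choice is the last two items, worth 220, so some follow-up improvement strategy is still worthwhile.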

The decision tree for a knapsack problem with N items contains 2^N leaf nodes, so this isn't an easy problem, but at least you can try some heuristics.

Traveling Salesman Problem (TSP)

Suppose you're a traveling salesperson who must visit certain locations on a map and then return to your starting point. The traveling salesman problem is to visit those locations in an order that minimizes the total distance covered.

NOTE TSP has important practical implications for businesses that have fleets of vehicles. For example, the U.S. Postal Service's letter carriers and truck drivers travel 1.3 billion miles per year. If better routing can shave even a fraction of a percent off that total, the savings in fuel and wear on the vehicles could be huge.

To model this problem as a decision tree, the Kth level of the tree corresponds to selecting the location to visit Kth in the completed tour. If there are N locations to visit, the root node has N branches, corresponding to visiting each of the locations first. The nodes at the second level of the tree have N − 1 branches, corresponding to visiting next any of the locations not yet visited. The nodes at the third level of the tree have N − 2 branches, and so on down to the leaves, which have no branches. The total number of leaves in the tree would be N × (N − 1) × (N − 2) × ... × 1 = N!.

This tree is so big that you need to use heuristics to find good solutions.
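To see the N! explosion in miniature, here is a Python sketch that exhaustively searches the tour tree for a handful of points (names mine; the start location is fixed to cut out rotations of the same tour). Already at N = 12 this would need 11! ≈ 40 million orderings.

```python
import itertools
import math

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def best_tour(points):
    """Exhaustive search over tour orderings, with the start fixed at
    point 0 to avoid counting rotations of the same tour."""
    n = len(points)
    return min((tour_length(points, (0,) + perm), (0,) + perm)
               for perm in itertools.permutations(range(1, n)))

length, order = best_tour([(0, 0), (0, 1), (1, 1), (1, 0)])
print(length, order)
```

For the four corners of a unit square, the best tour is the perimeter, with length 4. Anything much larger than a dozen locations needs the heuristic approaches instead.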

Satisfiability (SAT)

Given a logical statement such as “A and B or (A and not C),” the satisfiability problem is to decide whether there is a way to assign the values true and false to the variables A, B, and C to make the statement true. A related problem asks you to find such an assignment.

You can model this problem with a binary decision tree in which the left and right branches represent setting a variable to true or false. Each leaf node represents an assignment of values to all the variables and determines whether the statement as a whole is true or false.

This problem is harder than the partition problem because there are no approximate solutions. Any leaf node makes the statement either true or false. The statement cannot be “approximately true.” (However, if you use probabilistic logic, in which variables have probabilities of truth rather than being definitely true or false, you might be able to find a way to make the statement probably true.)

A random search of the tree may find a solution, but if you don't find a solution, you cannot conclude that there isn't one.

You can try to improve random solutions, but any change simply makes the statement either true or false, so there's no way to improve a solution gradually. That means you cannot really use the path improvement strategy described earlier. Because you can't improve solutions gradually, you also can't use simulated annealing or hill climbing. In general, you also can't tell whether a partial assignment makes the statement false, so you can't use branch and bound either.

With exhaustive and random search as your only real options, you can solve satisfiability for only relatively small problems.
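An exhaustive search of the assignment tree is easy to write for small statements. In this Python sketch (names mine), the statement is represented as a Boolean function of the variables, and every leaf of the 2^N-leaf tree is tried in turn:

```python
from itertools import product

def satisfy(statement, variables):
    """Try every True/False assignment of the variables; return the
    first assignment that satisfies the statement, or None."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if statement(**assignment):
            return assignment
    return None

# The statement "(A and B) or (A and not C)".
result = satisfy(lambda A, B, C: (A and B) or (A and not C),
                 ["A", "B", "C"])
print(result)
```

Because each leaf either satisfies the statement or doesn't, the search can stop at the first success, but in the worst (unsatisfiable) case it must visit all 2^N leaves.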

NOTE You may wonder why anyone would want to solve SAT. It turns out that you can reduce many other problems to an instance of SAT. For an instance of some other problem, such as the partition problem, you can build a logical expression such that a solution to SAT corresponds to a solution to the partition problem. In other words, if you can solve one, you can solve the other.

By proving that different problems can be reduced to SAT, you show that they are as hard as SAT. So the fact that there is no known easy way to solve SAT means that there is no known easy way to solve the other problems.

SAT is related to the 3-satisfiability problem (3SAT). In 3SAT, the logical statement consists of terms combined with the And operator. Each term involves three variables or their negations combined with Or. For example, the statement “(A or B or not C) and (B or not A or D)” is in the 3SAT format.

With some work that is far outside the scope of this book, you can show that SAT and 3SAT are equivalent, so they are equally difficult to solve.

Note that the same kind of decision tree will solve either version of the problem.

Summary

Decision trees are a powerful tool for approaching complex problems, but they are not the best approach for every problem. For example, you could model the eight queens problem described in Chapter 9 with a decision tree that was eight levels tall and that had 64 branches per node, giving a total of 64^8 ≈ 2.8 × 10^14 leaf nodes. By taking advantage of the problem's structure, however, you can eliminate most of the tree's branches without exploring them and explore only 113 board positions. Thinking about the problem as a general decision tree may be a mistake, because it might make you miss the simplifications that let you solve the problem efficiently.

Still, decision trees are a powerful technique that you should at least consider if you don't know of a better approach to a problem.

This chapter and the previous two described trees and algorithms that build, maintain, and search trees. The next two chapters discuss networks. Like trees, networks are linked data structures. Unlike trees, networks are not hierarchical, so they can contain loops. That makes them a bit more complicated than trees, but it also lets them model and solve some interesting problems that trees can't.

Exercises

Asterisks indicate particularly difficult problems.

  1. Write a program that exhaustively searches the tic-tac-toe game tree and counts the number of times X wins, O wins, and the game ends in a tie. What are those counts, and what is the total number of games possible? Do the numbers give an advantage to one player or the other?
  2. Modify the programs you wrote for Exercise 1 so that the user can place one or more initial pieces on a tic-tac-toe board and then count the number of possible X wins, O wins, and ties from that point. How many total games are possible starting from each of the nine initial moves by X? (Do you need to calculate all nine?)
  3. Write a program that lets the player play tic-tac-toe against the computer. Let the player be either X or O. Provide three skill levels: Random (the computer moves randomly), Beginner (the computer uses minimax with only three levels of recursion), and Expert (the computer uses a full minimax search).
  4. Write a program that uses exhaustive search to solve the optimizing partition problem. Allow the user to create a list of random weights between bounds set by the user. Also allow the user to check a box indicating whether the algorithm is allowed to short circuit and stop early.
  5. Extend the program you wrote for Exercise 4 so that you can perform branch and bound with and without short circuits.
  6. *Use the program you wrote for Exercise 4 to solve the partition problem using exhaustive search and branch and bound with the values 1 through 5, 1 through 6, 1 through 7, and so on up to 1 through 25. Then graph the number of nodes visited versus the number of weights being partitioned for the two methods. Finally, on a separate graph, show the logarithm of the number of nodes visited versus the number of weights for the results. What can you conclude about the number of nodes visited for the two methods?
  7. Extend the program you wrote for Exercise 5 to use a random heuristic to find solutions.
  8. Extend the program you wrote for Exercise 7 to use an improvement heuristic to find solutions.
  9. What groups would a partitioning program find if it used the hill-climbing heuristic for the weights 7, 9, 7, 6, 7, 7, 5, 7, 5, and 6? What are the groups' total weights and the difference between the total weights?
  10. Extend the program you wrote for Exercise 8 to use a hill-climbing heuristic to find solutions.
  11. Repeat Exercise 9 using a sorted hill-climbing heuristic.
  12. Repeat Exercise 9 using exhaustive search.
  13. Extend the program you wrote for Exercise 10 to use a sorted hill-climbing heuristic to find solutions.