Negamax
Negamax search is a variant form of minimax search that relies on the zero-sum property of a two-player game.
This algorithm relies on the fact that max(a, b) = −min(−a, −b) to simplify the implementation of the minimax algorithm. More precisely, the value of a position to player A in such a game is the negation of the value to player B. Thus, the player on move looks for a move that maximizes the negation of the value resulting from the move: this successor position must by definition have been valued by the opponent. This reasoning holds regardless of whether A or B is on move, so a single procedure can be used to value both positions. This is a coding simplification over minimax, which requires that A selects the move with the maximum-valued successor while B selects the move with the minimum-valued successor.
It should not be confused with negascout, an algorithm to compute the minimax or negamax value quickly by clever use of alpha–beta pruning discovered in the 1980s. Note that alpha–beta pruning is itself a way to compute the minimax or negamax value of a position quickly by avoiding the search of certain uninteresting positions.
Most adversarial search engines are coded using some form of negamax search.
Negamax base algorithm
Negamax operates on the same game trees as those used with the minimax search algorithm. Each node in the tree, including the root, represents a game state (such as a game board configuration) of a two-player game. Transitions to child nodes represent the moves available to the player who is about to play from a given node.
The negamax search objective is to find the node score value for the player who is playing at the root node. The pseudocode below shows the negamax base algorithm,[1] with a configurable limit for the maximum search depth:
function negamax(node, depth, color) is
    if depth = 0 or node is a terminal node then
        return color × the heuristic value of node
    value := −∞
    for each child of node do
        value := max(value, −negamax(child, depth − 1, −color))
    return value
(* Initial call for Player A's root node *)
negamax(rootNode, depth, 1)

(* Initial call for Player B's root node *)
negamax(rootNode, depth, −1)
The root node inherits its score from one of its immediate child nodes. The child node that ultimately sets the root node's best score also represents the best move to play. Although the negamax function shown only returns the node's best score, practical negamax implementations retain and return both the best move and the best score for the root node. For non-root nodes, only the best score is essential; the best move need not be retained or returned.
What can be confusing is how the heuristic value of the current node is calculated. In this implementation, this value is always calculated from the point of view of player A, whose color value is one. In other words, higher heuristic values always represent situations more favorable for player A. This is the same behavior as the normal minimax algorithm. The heuristic value is not necessarily the same as a node's return value due to value negation by negamax and the color parameter. The negamax node's return value is a heuristic score from the point of view of the node's current player.
Negamax scores match minimax scores for nodes where player A is about to play, and where player A is the maximizing player in the minimax equivalent. Negamax always searches for the maximum value for all its nodes. Hence for player B nodes, the minimax score is a negation of its negamax score. Player B is the minimizing player in the minimax equivalent.
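For illustration, the base algorithm can be sketched in Python using a toy game tree in which a leaf is simply a number (the heuristic value from player A's point of view) and an inner node is a list of children. The tree representation and helper names below are illustrative assumptions, not part of the algorithm itself; a root-level wrapper is included to show how a practical implementation keeps both the best score and the best move.

import math

def is_terminal(node):
    # In this toy representation a leaf is just a number: the heuristic
    # value of the position from player A's point of view.
    return not isinstance(node, list)

def heuristic_value(node):
    return node

def children(node):
    return node

def negamax(node, depth, color):
    """Negamax score of `node` for the player identified by `color`
    (+1 for player A, -1 for player B)."""
    if depth == 0 or is_terminal(node):
        return color * heuristic_value(node)
    value = -math.inf
    for child in children(node):
        value = max(value, -negamax(child, depth - 1, -color))
    return value

def best_root_move(root, depth, color):
    """Practical root wrapper: keep both the best score and the child
    (move) that produced it."""
    best_score, best_child = -math.inf, None
    for child in children(root):
        score = -negamax(child, depth - 1, -color)
        if score > best_score:
            best_score, best_child = score, child
    return best_child, best_score

# Toy tree: player A moves at the root, player B at the next level.
tree = [[3, 5], [2, 9]]
print(negamax(tree, depth=2, color=1))   # 3: A picks the branch whose worst case is best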
Negamax variant with no color parameter
Negamax can be implemented without the color parameter. In this case, the heuristic evaluation function must return values from the point of view of the node's current player (for example, in a chess game where White is winning, the evaluation should be positive if it is White's turn but negative if it is Black's turn).
function negamax(node, depth) is
    if depth = 0 or node is a terminal node then
        return evaluatePosition() // From current player's perspective
    value := −∞
    for each child of node do
        value := max(value, −negamax(child, depth − 1))
    return value
// Example picking the best move in a chess game using the negamax function above
function think(boardState) is
    allMoves := generateLegalMoves(boardState)
    bestMove := null
    bestEvaluation := −∞
    for each move in allMoves do
        boardState.apply(move)
        evaluateMove := −negamax(boardState, depth=3)
        boardState.undo(move)
        if evaluateMove > bestEvaluation then
            bestMove := move
            bestEvaluation := evaluateMove
    return bestMove
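As a self-contained sketch of this variant, the following Python code plays a toy take-away game (remove 1 to 3 stones from a pile; whoever takes the last stone wins). The game, depth default, and "unknown" cutoff value are illustrative assumptions; the key point is that the terminal evaluation of −1 is from the point of view of the player to move, as the variant requires.

import math

def negamax(pile, depth):
    """Score of the position from the point of view of the player to move:
    +1 for a win, -1 for a loss (no draws in this toy game)."""
    if pile == 0:
        return -1          # previous player took the last stone: we lost
    if depth == 0:
        return 0           # out of search depth: treat as unknown/neutral
    value = -math.inf
    for take in (1, 2, 3):
        if take <= pile:
            value = max(value, -negamax(pile - take, depth - 1))
    return value

def think(pile, depth=10):
    """Pick the move (number of stones to take) with the best negamax score."""
    best_move, best_eval = None, -math.inf
    for take in (1, 2, 3):
        if take <= pile:
            score = -negamax(pile - take, depth - 1)
            if score > best_eval:
                best_move, best_eval = take, score
    return best_move

print(think(5))   # 1: leaving a pile of 4 is a losing position for the opponent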
Negamax with alpha beta pruning
Algorithm optimizations for minimax are equally applicable to negamax. Alpha–beta pruning can decrease the number of nodes the negamax algorithm evaluates in a search tree in a manner similar to its use with the minimax algorithm.
The pseudocode for depth-limited negamax search with alpha–beta pruning follows:[1]
function negamax(node, depth, α, β, color) is
    if depth = 0 or node is a terminal node then
        return color × the heuristic value of node
    childNodes := generateMoves(node)
    childNodes := orderMoves(childNodes)
    value := −∞
    for each child in childNodes do
        value := max(value, −negamax(child, depth − 1, −β, −α, −color))
        α := max(α, value)
        if α ≥ β then
            break (* cut-off *)
    return value
(* Initial call for Player A's root node *)
negamax(rootNode, depth, −∞, +∞, 1)
Alpha (α) and beta (β) represent lower and upper bounds for child node values at a given tree depth. Negamax sets the arguments α and β for the root node to the lowest and highest values possible. Other search algorithms, such as negascout and MTD(f), may initialize α and β with alternate values to further improve tree search performance.
When negamax encounters a child node value outside an alpha/beta range, the negamax search cuts off, thereby pruning portions of the game tree from exploration. Cut-offs are implicit, based on the node return value. A node value found within the range of its initial α and β is the node's exact (or true) value. This value is identical to the result the negamax base algorithm would return, without cut-offs and without any α and β bounds. If a node return value is out of range, then the value represents an upper (if value ≤ α) or lower (if value ≥ β) bound for the node's exact value. Alpha–beta pruning eventually discards such bound values; they neither contribute to nor affect the negamax value at the root node.
This pseudocode shows the fail-soft variation of alpha–beta pruning. Fail-soft never returns α or β directly as a node value. Thus, a node value may be outside the initial α and β range bounds set with a negamax function call. In contrast, fail-hard alpha–beta pruning always limits a node value in the range of α and β.
This implementation also shows optional move ordering prior to the loop that evaluates child nodes. Move ordering[2] is an optimization for alpha–beta pruning that attempts to guess the child nodes most likely to yield the node's score, and searches those child nodes first. Good guesses produce earlier and more frequent alpha/beta cut-offs, thereby pruning additional game tree branches and remaining child nodes from the search tree.
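The fail-soft variant can be sketched in Python on the same toy tree representation used in the earlier sketch (a leaf is a number scored from player A's view, an inner node is a list of children). The order_moves placeholder and the tree representation are illustrative assumptions only.

import math

def order_moves(child_nodes):
    # Placeholder for move ordering: a real engine would sort the children
    # so that the most promising ones are searched first.
    return child_nodes

def negamax(node, depth, alpha, beta, color):
    """Fail-soft alpha-beta negamax: the returned value may lie outside
    the [alpha, beta] window passed in by the caller."""
    if depth == 0 or not isinstance(node, list):
        return color * node
    value = -math.inf
    for child in order_moves(node):
        value = max(value, -negamax(child, depth - 1, -beta, -alpha, -color))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # cut-off: the opponent will never allow this line
    return value

tree = [[3, 5], [2, 9]]
print(negamax(tree, 2, -math.inf, math.inf, 1))   # 3, same result as plain negamax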
Negamax with alpha beta pruning and transposition tables
Transposition tables selectively memoize the values of nodes in the game tree. The term transposition refers to the fact that a given game board position can be reached in more than one way, through differing game move sequences.
When negamax searches the game tree and encounters the same node multiple times, a transposition table can return a previously computed value of the node, skipping potentially lengthy duplicate re-computation of the node's value. Negamax performance improves particularly for game trees in which many paths lead to the same nodes.
The pseudocode that adds transposition table functions to negamax with alpha/beta pruning is given as follows:[1]
function negamax(node, depth, α, β, color) is
    alphaOrig := α

    (* Transposition Table Lookup; node is the lookup key for ttEntry *)
    ttEntry := transpositionTableLookup(node)
    if ttEntry.is_valid and ttEntry.depth ≥ depth then
        if ttEntry.flag = EXACT then
            return ttEntry.value
        else if ttEntry.flag = LOWERBOUND then
            α := max(α, ttEntry.value)
        else if ttEntry.flag = UPPERBOUND then
            β := min(β, ttEntry.value)
        if α ≥ β then
            return ttEntry.value

    if depth = 0 or node is a terminal node then
        return color × the heuristic value of node

    childNodes := generateMoves(node)
    childNodes := orderMoves(childNodes)
    value := −∞
    for each child in childNodes do
        value := max(value, −negamax(child, depth − 1, −β, −α, −color))
        α := max(α, value)
        if α ≥ β then
            break

    (* Transposition Table Store; node is the lookup key for ttEntry *)
    ttEntry.value := value
    if value ≤ alphaOrig then
        ttEntry.flag := UPPERBOUND
    else if value ≥ β then
        ttEntry.flag := LOWERBOUND
    else
        ttEntry.flag := EXACT
    ttEntry.depth := depth
    ttEntry.is_valid := true
    transpositionTableStore(node, ttEntry)

    return value
(* Initial call for Player A's root node *)
negamax(rootNode, depth, −∞, +∞, 1)
Alpha/beta pruning and maximum search depth constraints in negamax can result in partial, inexact, and entirely skipped evaluation of nodes in a game tree. This complicates adding transposition table optimizations for negamax. It is insufficient to track only the node's value in the table, because value may not be the node's true value. The code therefore must preserve and restore the relationship of value with alpha/beta parameters and the search depth for each transposition table entry.
Transposition tables are typically lossy and may omit or overwrite previous values of certain game tree nodes. This is necessary since the number of nodes negamax visits often far exceeds the transposition table size. Lost or omitted table entries are non-critical and will not affect the negamax result. However, lost entries may require negamax to re-compute certain game tree node values more frequently, thus affecting performance.
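The same logic can be sketched in Python on a toy tree of nested tuples, so that positions are hashable table keys. The sketch uses an ordinary unbounded dict as the table, whereas a practical engine would use a fixed-size table with a replacement scheme and a key that identifies the full position, including the side to move; the tree representation is an illustrative assumption only.

import math
from dataclasses import dataclass

EXACT, LOWERBOUND, UPPERBOUND = 0, 1, 2

@dataclass
class TTEntry:
    value: float
    depth: int
    flag: int

table = {}   # transposition table: hashable position -> TTEntry

def negamax(node, depth, alpha, beta, color):
    """Alpha-beta negamax with a transposition table. A leaf is a number
    (score from player A's view); an inner node is a tuple of children."""
    alpha_orig = alpha

    # Transposition table lookup; `node` is the lookup key.
    entry = table.get(node)
    if entry is not None and entry.depth >= depth:
        if entry.flag == EXACT:
            return entry.value
        elif entry.flag == LOWERBOUND:
            alpha = max(alpha, entry.value)
        elif entry.flag == UPPERBOUND:
            beta = min(beta, entry.value)
        if alpha >= beta:
            return entry.value

    if depth == 0 or not isinstance(node, tuple):
        return color * node

    value = -math.inf
    for child in node:
        value = max(value, -negamax(child, depth - 1, -beta, -alpha, -color))
        alpha = max(alpha, value)
        if alpha >= beta:
            break

    # Transposition table store: record whether `value` is exact or a bound.
    if value <= alpha_orig:
        flag = UPPERBOUND
    elif value >= beta:
        flag = LOWERBOUND
    else:
        flag = EXACT
    table[node] = TTEntry(value, depth, flag)

    return value

tree = ((3, 5), (2, 9))
print(negamax(tree, 2, -math.inf, math.inf, 1))   # 3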
References
- George T. Heineman; Gary Pollice & Stanley Selkow (2008). "Chapter 7: Path Finding in AI". Algorithms in a Nutshell. O'Reilly Media. pp. 213–217. ISBN 978-0-596-51624-6.
- John P. Fishburn (1984). "Appendix A: Some Optimizations of α-β Search". Analysis of Speedup in Distributed Algorithms (revision of 1981 PhD thesis). UMI Research Press. pp. 107–111. ISBN 0-8357-1527-2.
- Breuker, Dennis M. (1998). Memory versus Search in Games. Maastricht University, October 16, 1998.
- Schaeffer, Jonathan (1989). "The History Heuristic and Alpha–Beta Search Enhancements in Practice". IEEE Transactions on Pattern Analysis and Machine Intelligence. 11 (11): 1203–1212. doi:10.1109/34.42858.