MTD(f)

A Minimax Algorithm faster than NegaScout

MTD(f) is a new minimax search algorithm, simpler and more efficient than previous algorithms. In tests with a number of tournament game playing programs for chess, checkers and Othello it performed better, on average, than NegaScout/PVS (the AlphaBeta variant used in practically all good chess, checkers, and Othello programs). One of the strongest chess programs of the moment, MIT's parallel chess program Cilkchess, uses MTD(f) as its search algorithm, replacing NegaScout, which was used in StarSocrates, the previous version of the program.

You want proof of how simple MTD(f) is? Here's the code (my apologies to the language purists for mixing C and Pascal; block structure is indicated by indentation only):

function MTDF(root : node_type; f : integer; d : integer) : integer;
    g := f;
    upperbound := +INFINITY;
    lowerbound := -INFINITY;
    repeat
        if g == lowerbound then beta := g + 1 else beta := g;
        g := AlphaBetaWithMemory(root, beta - 1, beta, d);
        if g < beta then upperbound := g else lowerbound := g;
    until lowerbound >= upperbound;
    return g;

That's right. The entire algorithm is only ten lines of code.

The algorithm works by calling AlphaBetaWithMemory a number of times with a search window of zero size. The search works by zooming in on the minimax value. Each AlphaBeta call returns a bound on the minimax value. The bounds are stored in upperbound and lowerbound, forming an interval around the true minimax value for that search depth. Plus and minus INFINITY are shorthand for values outside the range of leaf values. When the upper and lower bounds meet, the minimax value is found.

MTD(f) gets its efficiency from doing only zero-window alpha-beta searches, and using a "good" bound (variable beta) to do those zero-window searches. Conventionally AlphaBeta is called with a wide search window, as in AlphaBeta(root, -INFINITY, +INFINITY, depth), making sure that the return value lies between the value of alpha and beta. In MTD(f) a window of zero size is used, so that on each call AlphaBeta will either fail high or fail low, returning a lower bound or an upper bound on the minimax value, respectively. Zero window calls cause more cutoffs, but return less information - only a bound on the minimax value. To nevertheless find it, MTD(f) has to call AlphaBeta a number of times, converging towards it. The overhead of re-exploring parts of the search tree in repeated calls to AlphaBeta disappears when using a version of AlphaBeta that stores and retrieves the nodes it sees in memory.

In order to work, MTD(f) needs a "first guess" as to where the minimax value will turn out to be. The better that first guess is, the more efficient the algorithm will be, on average, since the better it is, the fewer passes the repeat-until loop has to make to converge on the minimax value. If you feed MTD(f) the minimax value to start with, it will only do two passes, the bare minimum: one to find an upper bound of value x, and one to find a lower bound of the same value.

Typically, one would call MTD(f) in an iterative deepening framework. A natural choice for a first guess is to use the value of the previous iteration, like this:

function iterative_deepening(root : node_type) : integer;
    firstguess := 0;
    for d = 1 to MAX_SEARCH_DEPTH do
        firstguess := MTDF(root, firstguess, d);
        if times_up() then break;
    return firstguess;

In a real program you're not only interested in the value of the minimax tree, but also in the best move that goes with it. In the interest of brevity that is not shown in the above pseudo code.

In case your program shows a strong oscillation in the values it finds for odd and even search depths, you might be better off feeding MTD(f) its return value of two plies ago, not one, as the above code does. MTD(f) works best with a stable Principal Variation (but doesn't every program?). Although the transposition table greatly reduces the cost of a re-search, it is still a good idea not to re-search excessively. As a rule, in the deeper iterations of quiet positions in good programs, MTD(f) typically performs between 5 and 15 passes before it finds the minimax value.
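
As an illustration, here is a sketch in C of such a two-ply driver (the MTDF, times_up and MAX_SEARCH_DEPTH names mirror the pseudo code above; the C signatures are my assumptions, not part of the original code):

/* Sketch: iterative deepening that feeds MTD(f) the value found two
   plies ago, to cope with odd/even score oscillation. */
int iterative_deepening_two_ply(node_type *root)
{
    int guess[2] = { 0, 0 };   /* last value found at even and at odd depths */
    int value = 0;

    for (int d = 1; d <= MAX_SEARCH_DEPTH; d++) {
        value = MTDF(root, guess[d & 1], d);   /* first guess from depth d - 2 */
        guess[d & 1] = value;
        if (times_up())
            break;
    }
    return value;
}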


Some Background

The name of the algorithm is short for MTD(n, f), which stands for something like Memory-enhanced Test Driver with node n and value f. MTD is the name of a group of driver-algorithms that search minimax trees using zero window AlphaBetaWithMemory calls. Judea Pearl has named zero window AlphaBeta calls "Test", in his seminal papers on the Scout algorithm (the basis for Reinefeld's NegaScout). Adding memory to Test makes it possible to use it in re-searches, creating a group of simple yet efficient algorithms.

MTD(f) is simple in that it only does zero window AlphaBeta calls, making reasoning about the parts of the tree that get traversed easier than with algorithms that use wide window calls, such as NegaScout and the standard AlphaBeta. Actually, the difficulty in analyzing ordinary AlphaBeta was precisely the reason why Pearl introduced his Test in the first place. The AlphaBeta versions shown on this page can be simplified to use a single input bound, instead of both alpha and beta, since alpha is always one less than beta.
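
As a small illustration, such a single-bound call (Pearl's Test with memory) can be written as a thin wrapper around the two-bound searcher; a sketch in C, assuming an AlphaBetaWithMemory with the same signature as the pseudo code further down this page:

/* Sketch: a single-bound "Test" wrapper around a zero window call.
   It returns a lower bound on the minimax value if that value is >= gamma,
   and an upper bound otherwise. */
int MT(node_type *n, int gamma, int d)
{
    /* alpha is always one less than beta, so only gamma has to be passed */
    return AlphaBetaWithMemory(n, gamma - 1, gamma, d);
}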

Especially in a parallel setting the simplicity of MTD(f) compared to NegaScout is valuable. Designing and debugging a parallel search routine is a complex affair. MTD(f) only needs a zero window search, a Test. Instead of two bounds, MTD(f) needs one. In NegaScout, when new values for the search window become available they have to be communicated asynchronously to the child processes; in MTD(f) you simply abort an entire subtree when a cutoff happens. Furthermore, the recursive search code does not spawn re-searches anymore. All re-searching is done at the root, where things are simpler than down in the parallel tree. The large body of research on parallelizing AlphaBeta and NegaScout is directly applicable to MTD instances, since they use zero-window AlphaBeta calls to do the tree searching. See for example Bradley Kuszmaul's Jamboree search or Rainer Feldmann's Young Brothers Wait Concept. If your AlphaBeta is parallel, then your MTD(f) is parallel.

Incidentally, one of the MTD instances is equivalent to SSS*, George Stockman's best-first minimax algorithm that promised to be more efficient than AlphaBeta. (By equivalent I mean that the two algorithms look different, but search the same nodes.) This SSS*-MTD made the first practical tests of SSS* in full-fledged game playing programs feasible, shedding new and unexpected light, after more than 15 years, on the questions posed in Stockman's 1979 article. See our 1996 article in Artificial Intelligence for more on this subject. (And yes, MTD(f) is also better than SSS*, in case you wondered.)

Another instance of the MTD framework is equivalent to K. Coplan's C* algorithm. Jean-Christophe Weill has published a number of papers on experiments with a negamax version of C*. In MTD terms, the idea of C* is to bisect the interval formed by the upper and lower bounds, reducing the number of AlphaBetaWithMemory calls. On the down side, bisection yields a value for the search window, beta, that turns out to be not as efficient as MTD(f)'s choice. Still, Weill's work indicates that it is worthwhile to experiment with variants of MTD(f)'s choice of pivot value, leaving ample room for more research.
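
To make the contrast concrete, here is a sketch in C of a bisecting MTD driver in the spirit of C* (names and signatures are assumptions; INFINITY stands for a large finite constant, as in the pseudo code):

/* Sketch: a C*-like driver in the MTD framework. Each pass probes the
   midpoint of the current [lowerbound, upperbound] interval, instead of
   re-using the last returned bound as MTD(f) does. */
int MTD_bisect(node_type *root, int d)
{
    int lowerbound = -INFINITY;
    int upperbound = +INFINITY;
    int g;

    do {
        int beta = lowerbound + (upperbound - lowerbound + 1) / 2;  /* bisect */
        g = AlphaBetaWithMemory(root, beta - 1, beta, d);
        if (g < beta)
            upperbound = g;    /* fail low: g is an upper bound */
        else
            lowerbound = g;    /* fail high: g is a lower bound */
    } while (lowerbound < upperbound);

    return g;
}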


AlphaBetaWithMemory

Note that the MTD(f) code calls an AlphaBeta version that stores the nodes it searches in memory as soon as their values are determined, and retrieves those values in a re-search. If AlphaBeta did not do that, each pass of MTD(f) would re-explore most of those nodes. For MTD(f) to be efficient, your AlphaBeta has to store the nodes it has searched. An ordinary transposition table of reasonable size suffices, as our experiments showed (see further reading).

To be sure, here's a minimax version of the pseudo code of AlphaBetaWithMemory. The transposition table access code is the same as what is used in most tournament chess, checkers, and Othello programs.

function AlphaBetaWithMemory(n : node_type; alpha , beta , d : integer) : integer;
    if retrieve(n) == OK then /* Transposition table lookup */
        if n.lowerbound >= beta then return n.lowerbound;
        if n.upperbound <= alpha then return n.upperbound;
        alpha := max(alpha, n.lowerbound);
        beta := min(beta, n.upperbound);
    if d == 0 then g := evaluate(n); /* leaf node */
    else if n == MAXNODE then
        g := -INFINITY;  a := alpha; /* save original alpha value */
        c := firstchild(n);
        while (g < beta) and (c != NOCHILD) do
            g := max(g, AlphaBetaWithMemory(c, a, beta, d - 1));
            a := max(a, g);
            c := nextbrother(c);
    else /* n is a MINNODE */
        g := +INFINITY;  b := beta; /* save original beta value */
        c := firstchild(n);
        while (g > alpha) and (c != NOCHILD) do
            g := min(g, AlphaBetaWithMemory(c, alpha, b, d - 1));
            b := min(b, g);
            c := nextbrother(c);
    /* Traditional transposition table storing of bounds */
    /* Fail low result implies an upper bound */
    if g <= alpha then
        n.upperbound := g;
        store n.upperbound;
    /* Found an accurate minimax value - will not occur if called with zero window */
    if g > alpha and g < beta then
        n.lowerbound := g;
        n.upperbound := g;
        store n.lowerbound, n.upperbound;
    /* Fail high result implies a lower bound */
    if g >= beta then
        n.lowerbound := g;
        store n.lowerbound;
    return g;

Transposition table access takes place in the retrieve and store calls. The lines around retrieve make sure that if a value is present in the table, it is used instead of continuing the search. The store function is needed to make sure that the table is filled with values as they become available. In a real program you would also store the best move in the transposition table, and, upon retrieval, search it first. In the interest of brevity that is not shown in this code.
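
For readers who wonder what retrieve and store could look like, here is a minimal sketch in C of a transposition table that holds the two bounds (the entry layout, names and always-replace scheme are my assumptions; real programs also store the best move and verify entries more carefully):

/* Sketch of a minimal transposition table for AlphaBetaWithMemory.
   A fail low stores an upper bound, a fail high a lower bound; the
   unknown bound is passed as -INFINITY or +INFINITY respectively. */
typedef struct {
    unsigned long long key;    /* hash key of the position (e.g. Zobrist) */
    int depth;                 /* search depth the bounds are valid for */
    int lowerbound;
    int upperbound;
} tt_entry;

#define TT_SIZE (1 << 20)      /* number of entries; tune to available memory */
static tt_entry table[TT_SIZE];

/* retrieve: return the entry if it is present and searched deeply enough */
tt_entry *retrieve(unsigned long long key, int depth)
{
    tt_entry *e = &table[key % TT_SIZE];
    if (e->key == key && e->depth >= depth)
        return e;
    return 0;                  /* not usable: search on */
}

/* store: always replace the slot with the new bounds */
void store(unsigned long long key, int depth, int lowerbound, int upperbound)
{
    tt_entry *e = &table[key % TT_SIZE];
    e->key = key;
    e->depth = depth;
    e->lowerbound = lowerbound;
    e->upperbound = upperbound;
}

A design note: keeping both bounds in one entry matches the pseudo code above; many programs instead store a single value plus a flag (exact, lower bound, or upper bound), which works equally well for MTD(f).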

Text books on Artificial Intelligence typically discuss a version of AlphaBeta that does not use memory. Therefore, to avoid any confusion, and even though the use of transposition tables is standard practice in the game playing community, the fact that MTD(f) needs a memory-enhanced searcher is stressed here. (Yes, I dislike the name AlphaBetaWithMemory too. Life would be so much simpler if AI text books discussed useful AlphaBeta versions.) The AlphaBetaWithMemory code is given in the interest of completeness. If you already have a chess program that uses AlphaBeta or NegaScout (minimax or negamax makes no difference) and it uses a transposition table, then in all likelihood it will work right away. In none of the programs that I tried did I have to change the existing AlphaBeta code (actually, in all cases a negamax version of NegaScout) to get the transposition table to work properly.


Implementation Tips

The coarser the grain of eval, the fewer passes MTD(f) has to make to converge to the minimax value. Some programs have a fine grained evaluation function, where positional knowledge can be worth as little as one hundredth of a pawn. Big score swings can become costly for these programs. It may help to dynamically increase the step size: instead of using the previous bound, one can, for example, add an extra few points in the search direction (for failing high, or searching upward, adding the bonus; for failing low, or searching downward, subtracting it) every two passes or so. (Don Dailey found that a scheme like this works well in a version of Cilkchess.) At the end, if you overshoot the minimax value, you have to make a small search in the opposite direction, using the previous search bound without the extra bonus, to make the final convergence. Also, it can be quite instructive to experiment with different evaluation function grain sizes. Sometimes coarse grain functions work better than fine grain ones, both for NegaScout and MTD(f).
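
A sketch of such a stepped driver in C could look as follows (STEP_BONUS and the every-other-pass rhythm are made-up tuning knobs, not part of MTD(f) proper):

/* Sketch: an MTD(f) driver that pushes the probe a few points further in
   the direction of the search every second pass, to cross big score
   swings in fewer passes with a fine grained evaluation function. */
#define STEP_BONUS 25          /* made-up value; tune for your evaluation */

int MTDF_stepped(node_type *root, int f, int d)
{
    int g = f;
    int lowerbound = -INFINITY;
    int upperbound = +INFINITY;
    int pass = 0;

    while (lowerbound < upperbound) {
        int beta = (g == lowerbound) ? g + 1 : g;
        if (pass > 0 && pass % 2 == 0) {
            if (g == lowerbound)
                beta += STEP_BONUS;    /* failing high: probe further up */
            else
                beta -= STEP_BONUS;    /* failing low: probe further down */
        }
        /* keep the probe inside the known interval, so convergence is safe */
        if (beta > upperbound) beta = upperbound;
        if (beta <= lowerbound) beta = lowerbound + 1;

        g = AlphaBetaWithMemory(root, beta - 1, beta, d);
        if (g < beta)
            upperbound = g;
        else
            lowerbound = g;
        pass++;
    }
    return g;
}

Because the probe is clamped to the known interval and the bonus is only added every other pass, the final convergence automatically falls back to plain zero window probes, as described above.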

Some programs unroll the search of the root node, for example, to do extra move sorting at the root. In MTD(f) you can do this too. In the interest of cleanliness this is not shown in the pseudo code.
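
As an illustration (all helper names here are hypothetical), an unrolled, negamax-style root loop for one zero window pass could look like this; the scores it gathers can be used to re-sort the root moves between passes:

/* Sketch: the root node unrolled for one zero window pass, negamax style.
   root_moves[] keeps a score per move so the list can be re-sorted between
   MTD(f) passes. make_move, undo_move and the negamax searcher are
   hypothetical helpers. */
int root_search(node_type *root, int alpha, int beta, int d)
{
    int best = -INFINITY;

    for (int i = 0; i < n_root_moves; i++) {
        make_move(root, root_moves[i].move);
        int score = -NegamaxAlphaBetaWithMemory(root, -beta, -alpha, d - 1);
        undo_move(root, root_moves[i].move);

        root_moves[i].score = score;       /* remember for move ordering */
        if (score > best)
            best = score;
        if (best > alpha)
            alpha = best;
        if (alpha >= beta)
            break;                         /* cut-off */
    }
    return best;
}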

Sometimes forward pruning or search extensions that depend on alpha and beta values react in surprising ways to a search that consists of zero window calls only. You may have to do some re-tuning to get it all in synch again. Keep in mind that the size of the search tree is quite sensitive to the value of the search window; a strongly oscillating "first guess" is a bad thing. Also, if it weren't for the transposition table, MTD(f)'s re-searches would cause a lot of overhead. Make sure that the transposition table works properly. MTD(f) does more re-searches than NegaScout, and tends to take a bigger penalty from a badly functioning transposition table. Consider storing leaf and quiescence nodes in the table. Experiment with different sizes. There is, however, a limit beyond which expanding the table becomes pointless. That limit should be about the same for NegaScout and MTD(f).

Some tips to keep in mind when doing experiments to better understand the behavior of your search algorithm: The size of the search tree can differ significantly from position to position. Use a large test set in your experiments. Try different set-ups, such as: without null-move pruning, without extensions, disregard counting of quiescence nodes, different transposition table sizes and storage schemes, disable some parts of the move ordering. As in all debugging, if you want to understand the search better, the idea is to disable as much smarts as possible, to be able to study the behavior of a clean, noise-less, algorithm. (Sure, this will take a lot of time, but it can be quite rewarding to get to understand your program better.)


Summary

To summarize, the core ideas of MTD(f) are:

- Zero window AlphaBeta searches cause more cutoffs than wide window searches, so use only zero window searches.
- Each zero window search returns only a bound on the minimax value, so repeat the searches, starting from a first guess and using the returned bounds to converge on the value.
- Store the searched nodes in memory (a transposition table), so that the re-searches only explore nodes they have not seen before.


NegaScout

Here's a negamax version of the NegaScout code from the homepage of Alexander Reinefeld, the creator of the algorithm. Note that you have to add the transposition table access code in the appropriate places yourself. (It's a "NegaScoutWithoutMemory".)
int NegaScout ( position p, int alpha, int beta )
{                        /* compute minimax value of position p */
   int a, b, t, i;

   determine successors p_1,...,p_w of p;
   if ( w == 0 )
      return ( Evaluate(p) );                      /* leaf node */
   a = alpha;
   b = beta;
   for ( i = 1; i <= w; i++ ) {
      t = -NegaScout ( p_i, -b, -a );
      if ( (t > a) && (t < beta) && (i > 1) && (d < maxdepth-1) )
         a = -NegaScout ( p_i, -beta, -t );        /* re-search */
      a = max( a, t );
      if ( a >= beta ) 
         return ( a );                               /* cut-off */
      b = a + 1;                         /* set new null window */
   }
   return ( a );
}

Further reading

Several publications describe MTD(f) and why it works. These papers, and more, with their full reference information, can be found on this list of publications. The work on MTD(f) was part of an extraordinarily fruitful and pleasant cooperation with Jonathan Schaeffer, Wim Pijls, and Arie de Bruin, which I gratefully acknowledge. The project also benefited from the kind support of Jaap van den Herik and Henri Bal.


Links

If you are fascinated by game playing programs, and would like to know more about how to write one, then you are in luck, because there are many good books and links available, for novice and expert alike. The links below can help you further. Follow them, and you are bound to learn more, and find a more complete answer than by firing off your question by email.

If you have a question on MTD(f) and minimax research you are welcome to send a note to

Aske Plaat
Vrije Universiteit
Mathematics and Computer Science
De Boelelaan 1081a
1081 HV AMSTERDAM
The Netherlands

Please use correct postage, and do not use email. (Postcards with modern art are highly appreciated.) Bear in mind, however, that I have other duties besides answering questions. The speed of a response increases with its relevance to my research. Please take some time to look for answers yourself, browsing the web or your library. The preceding list of links was put together in the hope that it will be used, that it will prevent basic questions from being emailed, and that it will provide you with an easy way to find answers yourself. I am, however, definitely interested in your experience with MTD(f), and in research on search algorithms and parallel algorithms in general.


Thanks to Peter Kouwenhoven and Yngvi Bjornsson for suggestions for improvement of this page. Thanks to Toru Yamazato for pointing out a bug in AlphaBetaWithMemory.

Back to Aske Plaat.
Back to Cilkchess.
Back to Cilk.


Aske Plaat, Dec 3, 1997, aske@cs.vu.nl