Game-playing programs typically consist of two main elements: 1) search-tree node generation using search techniques, to traverse relevant game positions, and 2) an evaluation scheme for assessing the value of individual positions, known as the heuristic function (or evaluation function).
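The two elements above can be sketched together in a few lines: a depth-limited negamax search that traverses game positions and falls back on a heuristic evaluation function at the horizon. This is a generic illustration, not code from the dissertation; the game interface (legal_moves, apply_move, evaluate) is an assumption.

```python
def negamax(position, depth, legal_moves, apply_move, evaluate):
    """Depth-limited negamax: search-tree traversal plus leaf evaluation."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        # Search horizon or terminal node: use the heuristic evaluation.
        return evaluate(position)
    # The side to move picks the move that minimizes the opponent's best reply.
    return max(-negamax(apply_move(position, m), depth - 1,
                        legal_moves, apply_move, evaluate)
               for m in moves)
```

For example, plugging in a toy take-1-or-2 Nim game (terminal positions scored -1 for the player to move) correctly identifies multiples of three as losing positions.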
Genetic Programming (GP) is a sub-class of evolutionary algorithms, in which a population of solutions to a given problem, embodied as LISP expressions, is improved over time by applying the principles of Darwinian evolution. At each stage, or generation, the quality of every solution is measured and assigned a numerical value, called fitness. During the course of evolution, natural (or, in our case, artificial) selection takes place, wherein individuals with high fitness values are more likely to generate offspring.
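The generational loop just described can be illustrated with a minimal tree-based GP sketch: individuals are expression trees (here nested tuples rather than LISP lists), fitness scores each tree, and tournament selection plus subtree crossover produce the next generation. The target function, operator set, and all parameters below are illustrative assumptions, not those used in the dissertation.

```python
import random

FUNCS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
TERMS = ["x", 1.0]

def random_tree(depth):
    """Grow a random expression tree over FUNCS and TERMS."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(FUNCS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Interpret an expression tree at point x."""
    if tree == "x":
        return x
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    return FUNCS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Lower is better: summed error against an assumed target, x*x + x.
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def tournament(pop, k=3):
    # Artificial selection: the fittest of k random individuals wins.
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Crude subtree crossover: graft a subtree of b into a.
    if not isinstance(a, tuple):
        return b if not isinstance(b, tuple) else random.choice(b[1:])
    op, left, right = a
    donor = b if not isinstance(b, tuple) else random.choice(b[1:])
    return (op, donor, right) if random.random() < 0.5 else (op, left, donor)

def evolve(generations=20, pop_size=60):
    pop = [random_tree(3) for _ in range(pop_size)]
    for _ in range(generations):
        pop = [crossover(tournament(pop), tournament(pop))
               for _ in range(pop_size)]
    return min(pop, key=fitness)
```

Note that the exact target x*x + x is representable in this primitive set, so a perfect individual has fitness zero.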
In this dissertation, we explore the application of Genetic Programming to the development of search heuristics, for several hard, generalized combinatorial games, including Chess, Rush Hour, and FreeCell.
We start by applying GP to the evolution of strategies for playing a group of chess endgames. Our first set of experiments gives rise to GP individuals capable of drawing (or even winning) against CRAFTY, a world-class chess program.
We then turn to analysing the strategic capabilities of our evolved players and show that some of them are emergent. In the second set of experiments we devise new measures for determining the effectiveness of each GP terminal, which include testing it both singly and in conjunction with other terminals. Results show that the whole (embodied as a full-fledged GP individual) is greater than the sum of its parts (the terminals).
Since one of the main conclusions of our analysis is that search must be incorporated into our players, our next set of experiments deals with a novel way to combine search and knowledge using GP, by means of search-inducing terminals and functions. We report our experiments with the Mate-in-N problem in chess, in which we demonstrate how the amount of search effort, measured by the number of nodes visited by CRAFTY (when solving non-trivial problems with N = 4 and N = 5), can be reduced by up to 46%, which is no mean feat when compared with such a strong chess program.
In the next part of this dissertation we attack a somewhat different type of problem, namely, single-player games. We show that, although the search algorithms used with these problems differ (specifically, Iterative-Deepening A* and Heineman's Staged Deepening), evolution of heuristics with GP can be applied successfully across the board.
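For reference, Iterative-Deepening A* (IDA*), the first of the two single-agent search algorithms mentioned above, can be sketched as a series of cost-bounded depth-first searches, where an evolved heuristic would be supplied as h. The puzzle interface (is_goal, successors) is an illustrative assumption.

```python
def ida_star(root, is_goal, successors, h):
    """Return the cost of an optimal solution, or None if none exists."""
    bound = h(root)                     # initial threshold: f-value of the root
    while True:
        t = _search(root, 0, bound, is_goal, successors, h)
        if t == "FOUND":
            return bound
        if t == float("inf"):
            return None                 # search space exhausted
        bound = t                       # next threshold: smallest f that overflowed

def _search(node, g, bound, is_goal, successors, h):
    f = g + h(node)
    if f > bound:
        return f                        # prune: report the overflowing f-value
    if is_goal(node):
        return "FOUND"
    minimum = float("inf")
    for child, cost in successors(node):
        t = _search(child, g + cost, bound, is_goal, successors, h)
        if t == "FOUND":
            return "FOUND"
        minimum = min(minimum, t)
    return minimum
```

On a toy number-line puzzle (move +1 or -1 at unit cost, goal at 5, h(n) = |5 - n|), the search returns the optimal cost of 5.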
We evolve the first reported solver for the Rush Hour puzzle, a PSPACE-Complete problem, using a new approach of evolving value-returning policies. Our evolved solvers successfully compete both with non-evolved search and human players for the most difficult known instances of Rush Hour. Additionally, we advance the state-of-the-art of the most difficult known instances by co-evolving solvable 8x8 configurations, requiring over 15,000,000 nodes to solve with blind search. We demonstrate the efficacy of our method for these instances as well, showing that the search effort required to solve them may also be greatly reduced.
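One plausible reading of a value-returning policy, sketched below under assumptions (the rule representation and the features it tests are hypothetical, not taken from the dissertation), is an ordered list of (condition, value) rules: the first rule whose condition holds on a state determines the heuristic value that guides the search, with a default heuristic as fallback.

```python
def apply_policy(rules, default, state):
    """Evaluate a state with an ordered list of (condition, value) rules."""
    for condition, value in rules:
        if condition(state):
            return value(state)   # first matching rule wins
    return default(state)         # fall back on a default heuristic
```

Such a policy can be handed to a search algorithm wherever a heuristic function is expected, so search and evolved knowledge combine without changing the search code.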
We then apply our methods to the game of FreeCell, an NP-Complete problem used as a standard benchmark domain in several International Planning Competitions (IPCs). In practice this problem is much more difficult than Rush Hour, owing to its typically large instances, a fact evidenced by the utter failure of methods such as A* and IDA* on this problem. We challenge the best solver to date, Heineman's Staged-Deepening algorithm, tailored specifically for this problem. We demonstrate that GP-evolved policies, when equipped with several hand-crafted heuristics, again greatly reduce the search effort of the best algorithm to date, as measured in multiple ways, including the time and space required for search and the percentage of problems solved.
In the final chapter we draw conclusions from both types of problems, and discuss the variants of interactions between search and knowledge when evolving solvers for them. We also propose some future research directions.
The work described in this dissertation was published in [69-74, 160], and won three Humie awards: Bronze awards in 2005 and 2009, and a Silver award in 2007.