This thesis studies grammar-based approaches for applying Estimation of Distribution Algorithms (EDA) to the tree representation widely used in Genetic Programming (GP).
Evolutionary Computation (EC), inspired by evolution in the natural world, has become one of the most widely used Machine Learning techniques because of its effectiveness and versatility. Partly due to the disruptiveness of the genetic operators, there has been growing interest in Estimation of Distribution Algorithms (EDA). The basic idea of EDA is to capture the interactions among genes, which represent the internal structure of problem solutions, and in this way to estimate the distribution of good solutions directly, rather than relying on genetic operators. More specifically, in each generation a model, typically a probabilistic model, is inductively learnt from good individuals (the training examples), and is then sampled to create the individuals of the next generation. A population is not usually maintained between generations, and genetic operators are omitted from EDAs.
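The generation loop described above can be sketched with the simplest linear-representation EDA, the Univariate Marginal Distribution Algorithm (UMDA). This bitstring example is purely illustrative and is not drawn from the thesis; the parameter values are arbitrary assumptions:

```python
import random

def umda(fitness, n_bits=20, pop_size=100, n_select=30, generations=50):
    """Minimal UMDA sketch: estimate a per-bit probability model from the
    best individuals, then sample the next population from that model.
    No crossover or mutation is used, and no population is carried over."""
    probs = [0.5] * n_bits  # initial model: each bit is 1 with p = 0.5
    best = None
    for _ in range(generations):
        # Sample a whole population from the current probabilistic model.
        pop = [[1 if random.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        # Re-learn the model from the selected (good) individuals.
        selected = pop[:n_select]
        probs = [sum(ind[i] for ind in selected) / n_select
                 for i in range(n_bits)]
    return best

# Usage: maximise the number of ones (the OneMax toy problem).
solution = umda(fitness=sum)
```

Tree representations complicate every step of this loop: both the model class and the sampling procedure must respect tree structure, which is the difficulty the thesis addresses.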
Although EDA is becoming one of the most active fields in EC, the solution representation in most EDAs is a Genetic Algorithms (GA) style linear representation (a one-dimensional array, known as the chromosome in the GA literature). The more complex tree representation used in Genetic Programming (GP) has received only limited exploration, perhaps as a result of its intrinsic complexity. This is unfortunate, because tree representations provide a natural and expressive way of representing solutions to many problems. The resulting gap in EDA and GP research is significant, because EDA not only has the potential to improve GP performance, but also provides a new perspective for understanding GP. This thesis aims to help fill this gap by exploring grammar-based approaches to extending Estimation of Distribution Algorithms to GP-style tree representations.
Extending conventional EDA to tree representations is not trivial, because of the complexity of tree structures. Tree representations provide an alternative way of representing solutions that is important for a large class of problems, but they also introduce significant complications, including larger search spaces and more complex interactions among tree nodes. This thesis proposes a framework, called Program Distribution Estimation with Grammar Models (PRODIGY), which significantly extends previous applications of EDA to GP-style tree representations. The core of this framework is a Stochastic Context-Free Grammar (SCFG) model. In this research, we confirm both theoretically and empirically that PRODIGY is able to learn a wide range of the dependencies among nodes in tree-form individuals that are believed to be important in the current GP literature. Two approaches under the PRODIGY framework are proposed and implemented. The first iteratively refines the grammar model (but requires additional constraints on the search), while the second builds a full grammar model, including both its probability and structure components, directly from the training samples. Both approaches performed well on the problems studied, exhibiting search characteristics complementary to those of GP. Specifically, the contributions of the thesis are as follows.
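To make the SCFG idea concrete, the following toy sketch samples arithmetic expressions from a hand-written stochastic grammar. The grammar, its production probabilities, and the depth cap are illustrative assumptions only, not the models learnt by PRODIGY:

```python
import random

# A toy stochastic context-free grammar (SCFG): each nonterminal maps to a
# list of (probability, right-hand side) pairs. Probabilities per nonterminal
# sum to 1, so the grammar defines a distribution over derivation trees.
GRAMMAR = {
    "Expr": [
        (0.4, ["Expr", "+", "Expr"]),
        (0.2, ["Expr", "*", "Expr"]),
        (0.4, ["Term"]),
    ],
    "Term": [
        (0.5, ["x"]),
        (0.5, ["1"]),
    ],
}

def sample(symbol="Expr", max_depth=6):
    """Sample one derivation from the SCFG as a flat token string.
    Recursion is capped by forcing deep Expr nodes down to Term."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol: emit as-is
    if max_depth <= 0 and symbol == "Expr":
        symbol = "Term"  # depth cap: stop growing the tree
    r, acc = random.random(), 0.0
    for p, rhs in GRAMMAR[symbol]:
        acc += p
        if r <= acc:
            return " ".join(sample(s, max_depth - 1) for s in rhs)
    # Floating-point safety net: fall back to the last production.
    return " ".join(sample(s, max_depth - 1) for s in GRAMMAR[symbol][-1][1])
```

In an SCFG-based EDA, the learning step would re-estimate these production probabilities (and, in the structure-learning variant, the productions themselves) from the good individuals of each generation, rather than keeping them fixed as here.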
1. Providing a comprehensive survey of current research on EDA, with emphasis on EDA with GP-style tree representations. We attempt to clarify the relationship between EDAs with conventional linear representations and those with GP-style tree representations, and to reveal the unique difficulties that face this research.
2. Identifying desirable properties of probabilistic models for EDA with GP-style tree representation, and deriving the PRODIGY framework as a consequence.
3. Proposing Program Evolution with Explicit Learning (PEEL) as one implementation of PRODIGY. PEEL's incremental general-to-specific grammar learning method balances the effectiveness and efficiency of the grammar learning.
4. Proposing Grammar Model-based Program Evolution (GMPE) as another implementation of PRODIGY. GMPE realises the PRODIGY framework by introducing elegant inference methods from the formal grammar field. It provides good performance on some problems, but also provides a means to better understand some aspects of conventional GP, especially the building block hypothesis.
5. Proposing Swift GMPE (sGMPE), which is an extension of GMPE, aiming at reducing the computational cost.
6. Deriving a more accurate Minimum Message Length (MML) metric for grammar learning in PRODIGY. This metric leads to improved performance in the GMPE system, but may also be useful in grammar learning in general. It is also relevant to the learning of other probabilistic graphical models.
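The thesis's metric itself is specific to grammar models, but the underlying two-part MML principle (pay for encoding the model, then pay for encoding the data under that model) can be sketched generically. All names and numbers below are hypothetical:

```python
import math

def message_length(model_cost_bits, sample_log_probs):
    """Two-part MML score in bits: the cost of stating the model itself plus
    the cost of the training samples encoded under it. Lower is better; a
    richer grammar pays more for itself but less per sample, so the metric
    trades model complexity against fit and penalises overfitting."""
    # Convert natural-log sample probabilities to bits and negate.
    data_cost_bits = -sum(lp / math.log(2) for lp in sample_log_probs)
    return model_cost_bits + data_cost_bits

# Hypothetical comparison of two candidate grammars over 100 samples:
# a small general grammar vs. a larger, more specific one.
simple  = message_length(model_cost_bits=40.0,
                         sample_log_probs=[-8.0] * 100)
complex_ = message_length(model_cost_bits=120.0,
                          sample_log_probs=[-5.0] * 100)
best = min(simple, complex_)  # the grammar with the shorter total message
```

Under these made-up numbers the more specific grammar wins: its extra model cost is outweighed by the shorter encoding it gives each sample.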
Genetic Programming entries for Yin Shan