Despite the above advantages of WNNs, the optimisation of their parameters and the estimation of the number of hidden neurons strongly affect their performance. Not all WNN parameters are easily differentiable, and such parameters are therefore usually excluded from training. At present, standard gradient-based algorithms are used to optimise the various network parameters. Moreover, the initialisation of the hidden neurons plays a critical role in capturing the variability of the data.
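To make the parameters in question concrete, the following is a minimal sketch of a single-hidden-layer wavelet network, assuming a Mexican-hat (Ricker) mother wavelet and per-neuron dilation and translation parameters; the thesis may use a different wavelet family and layout.

    # Illustrative sketch of a wavelet neural network hidden layer (not the
    # thesis implementation): each hidden neuron applies a mother wavelet to
    # a dilated and translated projection of the input. The dilations and
    # translations are examples of the WNN parameters discussed above.
    import numpy as np

    def ricker(z):
        """Mexican-hat (Ricker) mother wavelet, one common choice for WNNs."""
        return (1.0 - z**2) * np.exp(-0.5 * z**2)

    def wnn_forward(x, W, translations, dilations, v, bias):
        """x: (n_features,), W: (n_hidden, n_features),
        translations/dilations/v: (n_hidden,), bias: scalar."""
        z = (W @ x - translations) / dilations   # per-neuron dilation/translation
        hidden = ricker(z)                       # wavelet activation
        return hidden @ v + bias                 # linear output layer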
Evolutionary algorithms have been used as gradient-free optimisation methods in many research problems where derivatives are unavailable, unreliable, or impractical to obtain. They were therefore an effective choice for WNN parameter optimisation. Furthermore, in order to devise a bloat-free evolutionary programming method, a Cartesian genetic programming (CGP) model was used. Such models are based on using and evolving a fixed set of resources, namely nodes and the connection links between them. This property is beneficial where the number of hidden neurons must adapt, since the appropriate number varies from one system to another.
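As a rough illustration of this style of search, below is a generic (1 + lambda) evolutionary loop over a fixed-length genotype, the scheme commonly paired with CGP. The real-valued genome, Gaussian mutation, and fitness interface here are assumptions for the sketch, not the encoding or operators used in the thesis.

    # Generic (1 + lambda) evolutionary strategy over fixed resources:
    # one flat genome of fixed length, point mutation, keep-best-or-equal.
    import numpy as np

    rng = np.random.default_rng(0)

    def evolve(fitness, genome_len, generations=200, lam=4, mut_rate=0.05):
        parent = rng.standard_normal(genome_len)
        parent_fit = fitness(parent)
        for _ in range(generations):
            for _ in range(lam):
                child = parent.copy()
                mask = rng.random(genome_len) < mut_rate   # mutate a few genes
                child[mask] += rng.standard_normal(mask.sum())
                child_fit = fitness(child)
                if child_fit >= parent_fit:                # ties accepted (neutral drift)
                    parent, parent_fit = child, child_fit
        return parent, parent_fit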
The proposed evolutionary WNN (EWNN) was first applied to the standard two-spiral task. This benchmark provided a clear picture of the operation and response of EWNNs, highlighting their potential for separating classes that are not linearly separable.
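For reference, the two-spiral benchmark consists of two interleaved spirals that no linear boundary can separate. A common generator is sketched below using the classical CMU benchmark parameters; the exact variant used in the thesis is not stated in this abstract.

    # Two-spiral benchmark data: two point-reflected spirals, one per class.
    import numpy as np

    def two_spirals(n_points=97):
        i = np.arange(n_points)
        angle = i * np.pi / 16.0
        radius = 6.5 * (104 - i) / 104.0
        spiral_a = np.column_stack([radius * np.sin(angle),
                                    radius * np.cos(angle)])   # class 0
        spiral_b = -spiral_a                                   # class 1
        X = np.vstack([spiral_a, spiral_b])
        labels = np.concatenate([np.zeros(n_points), np.ones(n_points)])
        return X, labels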
An EWNN was then applied to the classification of three publicly available biomedical datasets on breast cancer and Parkinson's disease. The process of feature pruning during evolution, and the effect of training all of the network parameters, were studied in detail.
To further improve the classification performance of EWNNs, an ensemble EWNN (EWNN-e) was proposed, in which a genetic algorithm prunes the set of trained EWNN classifiers for the three previously investigated datasets. The EWNN-e was found to be more accurate than the standalone EWNN classifier.
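The following is a minimal sketch of GA-based ensemble pruning under assumed interfaces: a binary chromosome selects which trained classifiers remain in the ensemble, and fitness is the accuracy of their majority vote. The selection and mutation operators shown are placeholders, not the thesis configuration.

    # GA-style ensemble pruning: evolve a bitmask over trained classifiers.
    import numpy as np

    rng = np.random.default_rng(0)

    def majority_vote(preds):                  # preds: (n_members, n_samples) of 0/1
        return (preds.mean(axis=0) >= 0.5).astype(int)

    def prune_ensemble(member_preds, y, pop_size=30, generations=100, mut_rate=0.1):
        n_members = member_preds.shape[0]
        pop = rng.integers(0, 2, size=(pop_size, n_members))

        def fitness(mask):
            if mask.sum() == 0:
                return 0.0
            return (majority_vote(member_preds[mask.astype(bool)]) == y).mean()

        for _ in range(generations):
            fits = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(fits)[-pop_size // 2:]]   # truncation selection
            children = parents.copy()
            flip = rng.random(children.shape) < mut_rate       # bit-flip mutation
            children[flip] ^= 1
            pop = np.vstack([parents, children])
        fits = np.array([fitness(m) for m in pop])
        return pop[fits.argmax()]                              # best subset mask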
The final focus of this thesis was the performance of EWNNs in learning control behaviour on a benchmark control problem, the acrobot. The performance of any reinforcement learning algorithm depends on the action space in which it operates, i.e. discrete or continuous, with a discrete action space being significantly less challenging than a continuous one. Both discrete and continuous action spaces were investigated for EWNNs, and their performance was compared with a state-of-the-art deep reinforcement learning algorithm. The EWNNs produced robust acrobot controllers that were independent of the type of action space.
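As an illustration of how an evolved controller might be scored on the discrete-action acrobot, here is a sketch using the Gymnasium "Acrobot-v1" environment; the controller interface and the average-return fitness are assumptions, not the thesis setup.

    # Evaluate a candidate controller on Acrobot-v1 (discrete actions {0, 1, 2}).
    import gymnasium as gym
    import numpy as np

    def episode_return(controller, episodes=5, seed=0):
        env = gym.make("Acrobot-v1")
        total = 0.0
        for ep in range(episodes):
            obs, _ = env.reset(seed=seed + ep)
            done = False
            while not done:
                action = int(controller(obs))      # maps 6-D state to a torque choice
                obs, reward, terminated, truncated, _ = env.step(action)
                total += reward
                done = terminated or truncated
        env.close()
        return total / episodes                    # average return as evolutionary fitness

    # Placeholder policy standing in for an evolved EWNN controller.
    print(episode_return(lambda obs: np.random.randint(3)))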
Supervisors: Stephan K. Chalup and Alexandre Mendes