Summary for Lay Audience
Artificial neural networks (ANNs) have become popular tools for implementing many kinds of machine learning and artificially intelligent systems. Despite their popularity, many questions remain open about how ANNs should be structured and how they should be trained. Of particular interest is the branch of machine learning called reinforcement learning, which focuses on training artificial agents to perform complex, sequential tasks, such as playing video games or navigating a maze. This thesis presents three contributions to research at the intersection of ANNs and reinforcement learning: first, a mathematical language that generalizes multiple contemporary ways of describing neural network organization; second, an evolutionary algorithm that uses this mathematical language to define a machine learning algorithm for ANNs in which the network's architecture can be modified by the algorithm during training; and third, a related algorithm that experiments with novelty search, an alternative method for training ANNs for reinforcement learning that promotes behavioural diversity over greedy reward-seeking behaviour. Experimental results indicate that evolutionary algorithms, a form of random search guided by evolutionary principles of selection pressure, are competitive alternatives to conventional deep learning algorithms such as error backpropagation. Results also show that architectural mutability, the ability for network architectures to change automatically during training, can dramatically improve learning performance in games over contemporary methods.
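To make the combination of ideas in the summary more concrete, the sketch below shows a minimal neuroevolution loop that couples novelty search (selection by behavioural distance to an archive rather than by reward) with a simple form of architectural mutability (occasionally growing a hidden unit). This is an illustrative toy in plain Python/NumPy, not the thesis's actual CGP-based method; all names (Genome, behaviour, novelty, novelty_search) and parameter choices are assumptions made for the example.

```python
# Hypothetical sketch: novelty search over small networks whose
# architecture can mutate. Illustrative only; not the thesis's algorithm.
import random
import numpy as np

class Genome:
    """A tiny feed-forward network whose hidden width can mutate."""
    def __init__(self, n_in=4, n_hidden=4, n_out=2):
        self.w1 = np.random.randn(n_in, n_hidden) * 0.5
        self.w2 = np.random.randn(n_hidden, n_out) * 0.5

    def act(self, obs):
        h = np.tanh(obs @ self.w1)
        return int(np.argmax(h @ self.w2))

    def mutate(self):
        child = Genome(self.w1.shape[0], self.w1.shape[1], self.w2.shape[1])
        child.w1 = self.w1 + np.random.randn(*self.w1.shape) * 0.1
        child.w2 = self.w2 + np.random.randn(*self.w2.shape) * 0.1
        # Architectural mutability: occasionally add a hidden unit.
        if random.random() < 0.2:
            child.w1 = np.hstack([child.w1, np.random.randn(child.w1.shape[0], 1) * 0.5])
            child.w2 = np.vstack([child.w2, np.random.randn(1, child.w2.shape[1]) * 0.5])
        return child

def behaviour(genome, steps=60):
    """Behaviour descriptor: actions taken on a fixed batch of inputs."""
    rng = np.random.default_rng(0)  # fixed seed so descriptors are comparable
    obs = rng.normal(size=(steps, genome.w1.shape[0]))
    return np.array([genome.act(o) for o in obs], dtype=float)

def novelty(descriptor, archive, k=5):
    """Novelty score: mean distance to the k nearest archived descriptors."""
    if not archive:
        return float("inf")
    dists = sorted(np.linalg.norm(descriptor - a) for a in archive)
    return float(np.mean(dists[:k]))

def novelty_search(generations=20, pop_size=16):
    population = [Genome() for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        scored = [(novelty(behaviour(g), archive), g) for g in population]
        scored.sort(key=lambda t: t[0], reverse=True)          # most novel first
        archive.extend(behaviour(g) for _, g in scored[:2])    # archive the most novel
        parents = [g for _, g in scored[: pop_size // 2]]
        population = parents + [random.choice(parents).mutate()
                                for _ in range(pop_size - len(parents))]
    return population

if __name__ == "__main__":
    final_pop = novelty_search()
    print("hidden widths after evolution:", [g.w1.shape[1] for g in final_pop])
```

The key design point the summary describes is visible here: selection is driven by how different an agent behaves from what has been seen before, not by reward, and the search space itself can grow because mutation may change the network's architecture as well as its weights.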
Mentions GP and CGP
Genetic Programming entries for Ethan Charles Jackson