abstract = "In this paper, we investigate the use of nested
evolution in which each step of one evolutionary
process involves running a second evolutionary process.
We apply this approach to build an evolutionary system
for reinforcement learning (RL) problems. Genetic
programming based on a descriptive encoding is used to
evolve the neural architecture, while an evolution
strategy is used to evolve the connection weights. We
test this method on a non-Markovian RL problem
involving an autonomous foraging agent, finding that
the evolved networks significantly outperform a
rule-based agent serving as a control. We also
demonstrate that nested evolution, partitioning into
subpopulations, and crossover operations all act
synergistically in improving performance in this
context.",
notes = "GECCO-2009 A joint meeting of the eighteenth
international conference on genetic algorithms
(ICGA-2009) and the fourteenth annual genetic
programming conference (GP-2009).",