Emergent Discovery of Reinforced Programs using Q-Learning and Planning: A Proof of Concept
@InProceedings{Sealy:2024:CEC,
  author =       "Noah Sealy and Malcolm I. Heywood",
  title =        "Emergent Discovery of Reinforced Programs using
                  {Q}-Learning and Planning: {A} Proof of Concept",
  booktitle =    "2024 IEEE Congress on Evolutionary Computation (CEC)",
  year =         "2024",
  editor =       "Bing Xue",
  address =      "Yokohama, Japan",
  month =        "30 " # jun # " - 5 " # jul,
  publisher =    "IEEE",
  keywords =     "genetic algorithms, genetic programming, Q-learning,
                  negative feedback, Monte Carlo methods, navigation,
                  programming, planning, reinforcement learning,
                  grid world",
  isbn13 =       "979-8-3503-0837-2",
  DOI =          "doi:10.1109/CEC60901.2024.10611881",
  size =         "8 pages",
abstract = "While applying genetic programming to reinforcement
learning tasks, little attempt is made to incorporate
local state specific reward information. This reduces
the sample efficiency of the approach when tasks are
rich in rewards. In this work, we provide a proof of
concept for using local rewards under grid world tasks
to incrementally parameterize teams of programs. A
planning step is then introduced to propagate Qvalues
through the team of programs. No interaction with the
task is necessary to perform the planning step.
Benchmarking is performed on n by n; n is {5; 10; 20;
50} grid world navigation tasks to illustrate the
utility of the approach.",
  notes =        "also known as \cite{10611881}
                  WCCI 2024",
}
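
The abstract pairs local, state-specific rewards (Q-learning updates) with a planning step that propagates Q-values without further task interaction. As a point of reference only, the sketch below shows that general combination in its simplest tabular form, in the style of Dyna-Q, on an n-by-n grid world. It is not a reconstruction of the paper's team-of-programs method; the reward scheme, goal placement, and all parameter values here are assumptions.

# Illustrative sketch only: tabular Dyna-Q on an n-by-n grid world.
# NOT the paper's method (which parameterizes teams of programs); it only
# demonstrates combining local rewards (Q-learning updates) with a
# planning step that propagates Q-values without further task interaction.
# Reward scheme, goal placement, and parameter values are all assumptions.
import random

N = 5                      # grid side length (paper benchmarks n in {5, 10, 20, 50})
GOAL = (N - 1, N - 1)      # assumed goal cell in the far corner
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
PLANNING_STEPS = 20        # model-based updates per real step

Q = {((r, c), a): 0.0 for r in range(N) for c in range(N) for a in range(4)}
model = {}                 # learned model: (state, action) -> (reward, next, done)

def step(state, a):
    """One grid-world transition: -1 per move, +10 on reaching the goal."""
    dr, dc = ACTIONS[a]
    nr = min(max(state[0] + dr, 0), N - 1)
    nc = min(max(state[1] + dc, 0), N - 1)
    nxt = (nr, nc)
    return (10.0, nxt, True) if nxt == GOAL else (-1.0, nxt, False)

def greedy(state):
    return max(range(4), key=lambda a: Q[(state, a)])

for episode in range(200):
    s, done = (0, 0), False
    while not done:
        a = random.randrange(4) if random.random() < EPS else greedy(s)
        r, s2, done = step(s, a)
        # Q-learning update from the local, state-specific reward.
        target = r + (0.0 if done else GAMMA * Q[(s2, greedy(s2))])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        model[(s, a)] = (r, s2, done)
        # Planning: replay remembered transitions to propagate Q-values
        # through the value table without touching the task again.
        for _ in range(PLANNING_STEPS):
            (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
            ptarget = pr + (0.0 if pdone else GAMMA * Q[(ps2, greedy(ps2))])
            Q[(ps, pa)] += ALPHA * (ptarget - Q[(ps, pa)])
        s = s2

# After training, the greedy policy should head straight for the goal.
s, moves = (0, 0), 0
while s != GOAL and moves < 4 * N:
    _, s, _ = step(s, greedy(s))
    moves += 1
print(f"greedy policy reaches the goal in {moves} moves")

Here the model-replay loop stands in for the abstract's planning step: Q-values spread through the table using only remembered transitions, so no additional episodes in the task are required.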