Interpretable Policies for Reinforcement Learning by Genetic Programming
Created by W. Langdon from gp-bibliography.bib Revision: 1.8051
@Misc{journals/corr/abs-1712-04170,
  author       = "Daniel Hein and Steffen Udluft and Thomas A. Runkler",
  title        = "Interpretable Policies for Reinforcement Learning by
                  Genetic Programming",
  howpublished = "ArXiv",
  year         = "2018",
  month        = "4 " # apr,
  edition      = "V2",
  keywords     = "genetic algorithms, genetic programming,
                  interpretable, reinforcement learning, model-based,
                  symbolic regression, industrial benchmark",
  URL          = "https://arxiv.org/abs/1712.04170",
  size         = "15 pages",
abstract = "The search for interpretable reinforcement learning
policies is of high academic and industrial interest.
Especially for industrial systems, domain experts are
more likely to deploy autonomously learnt controllers
if they are understandable and convenient to evaluate.
Basic algebraic equations are supposed to meet these
requirements, as long as they are restricted to an
adequate complexity. Here we introduce the genetic
programming for reinforcement learning (GPRL) approach
based on model-based batch reinforcement learning and
genetic programming, which autonomously learns policy
equations from pre-existing default state-action
trajectory samples. GPRL is compared to a
straight-forward method which uses genetic programming
for symbolic regression, yielding policies imitating an
existing well-performing, but non-interpretable policy.
Experiments on three reinforcement learning benchmarks,
i.e., mountain car, cart-pole balancing, and industrial
benchmark, demonstrate the superiority of our GPRL
approach compared to the symbolic regression method.
GPRL is capable of producing well-performing
interpretable reinforcement learning policies from
pre-existing default trajectory data.",
- }
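
The abstract describes the general GPRL pattern: evolve compact algebraic policy expressions with genetic programming, and score each candidate by rolling it out on a model of the system. Below is a minimal, self-contained Python sketch of such a loop, assuming the classic mountain-car dynamics as a stand-in for a world model learned from batch trajectory data; all names and parameter values are illustrative, not taken from the paper.

import math
import random

# Primitive set: arithmetic operators over state variables x (position),
# v (velocity), and a constant terminal.
OPS = [("+", lambda a, b: a + b),
       ("-", lambda a, b: a - b),
       ("*", lambda a, b: a * b)]
TERMS = ["x", "v", "1.0"]

def random_tree(depth=3):
    """Grow a random algebraic expression tree over x and v."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    name, _ = random.choice(OPS)
    return (name, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x, v):
    """Evaluate a policy expression tree at state (x, v)."""
    if isinstance(tree, str):
        return {"x": x, "v": v, "1.0": 1.0}[tree]
    name, left, right = tree
    return dict(OPS)[name](evaluate(left, x, v), evaluate(right, x, v))

def rollout_return(tree, steps=200):
    """Score a policy by a rollout on the mountain-car dynamics
    (standing in for a model learned from trajectory data)."""
    x, v, total = -0.5, 0.0, 0.0
    for _ in range(steps):
        a = max(-1.0, min(1.0, evaluate(tree, x, v)))   # clip action
        v = max(-0.07, min(0.07, v + 0.001 * a - 0.0025 * math.cos(3 * x)))
        x = max(-1.2, min(0.6, x + v))
        total -= 1.0                                    # -1 per step
        if x >= 0.5:                                    # reached the goal
            break
    return total

def mutate(tree, depth=2):
    """Replace a random subtree with a freshly grown one."""
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(depth)
    name, left, right = tree
    if random.random() < 0.5:
        return (name, mutate(left, depth), right)
    return (name, left, mutate(right, depth))

def show(tree):
    """Render a tree as a readable (interpretable) equation."""
    if isinstance(tree, str):
        return tree
    name, left, right = tree
    return f"({show(left)} {name} {show(right)})"

random.seed(0)
pop = [random_tree() for _ in range(50)]
for gen in range(30):
    # Truncation selection: keep the 10 best, refill by mutation.
    pop.sort(key=rollout_return, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
best = max(pop, key=rollout_return)
print("best policy:", show(best), "return:", rollout_return(best))

The printed expression is the point of the exercise: unlike a neural controller, the resulting policy is a short equation a domain expert can read and check before deployment.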
Genetic Programming entries for Daniel Hein, Steffen Udluft, Thomas A. Runkler