Symbolic regression in dynamic scenarios with gradually changing targets
Created by W.Langdon from
gp-bibliography.bib Revision:1.8129
@Article{ZEGKLITZ:2019:ASC,
  author =       "Jan Zegklitz and Petr Posik",
  title =        "Symbolic regression in dynamic scenarios with
                 gradually changing targets",
  journal =      "Applied Soft Computing",
  volume =       "83",
  pages =        "105621",
  year =         "2019",
  ISSN =         "1568-4946",
  DOI =          "doi:10.1016/j.asoc.2019.105621",
  URL =          "http://www.sciencedirect.com/science/article/pii/S1568494619304016",
  keywords =     "genetic algorithms, genetic programming, Symbolic
                 regression, Dynamic optimization, Backpropagation,
                 Reinforcement learning",
  abstract =     "Symbolic regression is a machine learning task: given
                 a training dataset with features and targets, find a
                 symbolic function that best predicts the target given
                 the features. This paper concentrates on dynamic
                 regression tasks, i.e. tasks where the goal changes
                 during the model fitting process. Our study is
                 motivated by dynamic regression tasks originating in
                 the domain of reinforcement learning: we study four
                 dynamic symbolic regression problems related to
                 well-known reinforcement learning benchmarks, with data
                 generated from the standard Value Iteration algorithm.
                 We first show that in these problems the target
                 function changes gradually, with no abrupt changes.
                 Even these gradual changes, however, are a challenge to
                 traditional Genetic Programming-based Symbolic
                 Regression algorithms because they rely only on
                 expression manipulation and selection. To address this
                 challenge, we present an enhancement to such algorithms
                 suitable for dynamic scenarios with gradual changes,
                 namely the recently introduced type of leaf nodes
                 called Linear Combination of Features. This type of
                 leaf node, aided by the error backpropagation technique
                 known from artificial neural networks, enables the
                 algorithm to better fit the data by using the error
                 gradient to its advantage rather than searching blindly
                 using only the fitness values. This setup is compared
                 with a baseline of the core algorithm without any of
                 our improvements and also with a classic evolutionary
                 dynamic optimization technique: hypermutation. The
                 results show that the proposed modifications greatly
                 improve the algorithm's ability to track a gradually
                 changing target.",
}
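
The abstract states that the dynamic-regression data are generated by the standard Value Iteration algorithm, whose successive sweeps change the target function gradually. The sketch below is only an illustration of that setup on a toy chain MDP, with hypothetical names such as value_iteration_targets; it is not one of the benchmarks used in the paper.

    # Sketch: successive Value Iteration sweeps yield a sequence of regression
    # targets V_t(s) that drift gradually. Toy 1-D chain MDP; illustrative only.
    import numpy as np

    def value_iteration_targets(n_states=20, gamma=0.95, sweeps=30):
        """Yield a (features, targets) snapshot after each sweep."""
        V = np.zeros(n_states)
        rewards = np.zeros(n_states)
        rewards[-1] = 1.0                       # reward only in the right-most state
        features = np.arange(n_states, dtype=float).reshape(-1, 1)  # state index as feature
        for _ in range(sweeps):
            left = np.roll(V, 1)                # value after moving left
            right = np.roll(V, -1)              # value after moving right
            left[0], right[-1] = V[0], V[-1]    # stay put at the borders
            V = rewards + gamma * np.maximum(left, right)  # Bellman optimality backup
            yield features, V.copy()            # one dynamic-regression snapshot

    # Consecutive snapshots differ only slightly: this is the gradual target
    # drift, with no abrupt changes, that the paper studies.
    for t, (X, y) in enumerate(value_iteration_targets()):
        if t % 10 == 0:
            print(t, np.round(y[:5], 3))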
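The core enhancement described in the abstract is a leaf node holding a Linear Combination of Features whose weights are tuned by backpropagating the prediction error through the expression tree. The following sketch uses a hand-coded two-node tree, hypothetical class names, and plain gradient descent on squared error to show the idea of combining structural search with gradient-based leaf tuning; it is not the authors' implementation, and a GP system would evolve the tree structure around such leaves.

    # Sketch: an LCF leaf, w . x + b, tuned by backpropagating squared error
    # through a fixed tiny tree: model(x) = sin(lcf1(x)) + lcf2(x).
    import numpy as np

    class LCFLeaf:
        def __init__(self, n_features, rng):
            self.w = rng.normal(scale=0.1, size=n_features)
            self.b = 0.0

        def forward(self, X):
            self.X = X                          # cache input for the backward pass
            return X @ self.w + self.b

        def backward(self, grad_out, lr=0.01):
            # grad_out = dE/d(leaf output); one gradient-descent update.
            self.w -= lr * self.X.T @ grad_out / len(grad_out)
            self.b -= lr * grad_out.mean()

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(200, 2))
    y = np.sin(2.0 * X[:, 0]) + 0.5 * X[:, 1]   # static target for brevity

    lcf1, lcf2 = LCFLeaf(2, rng), LCFLeaf(2, rng)
    for _ in range(2000):
        a = lcf1.forward(X)                     # inner LCF
        pred = np.sin(a) + lcf2.forward(X)      # fixed tree: sin(lcf1) + lcf2
        err = pred - y                          # dE/dpred for 0.5 * MSE
        lcf1.backward(err * np.cos(a))          # chain rule through sin
        lcf2.backward(err)                      # identity path
    print("MSE:", float(np.mean((np.sin(lcf1.forward(X)) + lcf2.forward(X) - y) ** 2)))

When the target drifts gradually, the same gradient steps can be reapplied after each change, which is what lets the leaf weights follow the moving target instead of relying on selection alone.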
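The baseline dynamic-optimization technique the abstract compares against, hypermutation, temporarily raises the mutation rate when a change in the environment is detected so the population can re-diversify. The sketch below uses real-valued genomes and illustrative names (base_sigma, hyper_sigma, a crude fitness-drop detector) rather than the GP expression trees of the paper.

    # Sketch: triggered hypermutation in a dynamic optimisation loop.
    # A sudden drop in the best fitness is treated as a detected change;
    # the mutation rate is boosted and then decays back to its base level.
    import numpy as np

    rng = np.random.default_rng(1)
    POP, DIM = 30, 5
    base_sigma, hyper_sigma = 0.05, 0.5

    def fitness(x, t):
        # Moving-optimum toy problem: the target drifts gradually with time t.
        target = np.full(DIM, 0.01 * t)
        return -np.sum((x - target) ** 2)

    pop = rng.normal(size=(POP, DIM))
    sigma = base_sigma
    prev_best = -np.inf
    for t in range(500):
        fits = np.array([fitness(ind, t) for ind in pop])
        best = fits.max()
        if best < prev_best - 0.05:             # crude change detection
            sigma = hyper_sigma                 # trigger hypermutation
        prev_best = best
        elite = pop[np.argsort(fits)[-POP // 2:]]             # keep the better half
        children = elite + rng.normal(scale=sigma, size=elite.shape)
        pop = np.vstack([elite, children])
        sigma = max(base_sigma, sigma * 0.9)    # decay back to the base rate
    print("final best fitness:", round(float(best), 4))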
Genetic Programming entries for Jan Zegklitz Petr Posik