Machine learning control - explainable and analyzable methods
Created by W.Langdon from gp-bibliography.bib Revision:1.8051
@Article{QUADE:2020:PDNP,
  author =   "Markus Quade and Thomas Isele and Markus Abel",
  title =    "Machine learning control - explainable and analyzable methods",
  journal =  "Physica D: Nonlinear Phenomena",
  volume =   "412",
  pages =    "132582",
  year =     "2020",
  ISSN =     "0167-2789",
  DOI =      "doi:10.1016/j.physd.2020.132582",
  URL =      "http://www.sciencedirect.com/science/article/pii/S0167278920300026",
  keywords = "genetic algorithms, genetic programming, Explainable AI, XAI, Machine learning control, Dynamical systems, Synchronization control",
  abstract = "Recently, the term explainable AI came into discussion as an approach to produce models from artificial intelligence which allow interpretation. For a long time, symbolic regression has been used to produce explainable and mathematically tractable models. In this contribution, we extend previous work on symbolic regression methods to infer the optimal control of a dynamical system given one or several optimization criteria, or cost functions. In earlier publications, network control was achieved by automated machine learning control using genetic programming. Here, we focus on the subsequent path continuation analysis of the mathematical expressions which result from the machine learning model. In particular, we use AUTO to analyze the solution properties of the controlled oscillator system which served as our model. As a result, we show that there is a considerable advantage of explainable symbolic regression models over less accessible neural networks. In particular, the roadmap of future works may be to integrate such analyses into the optimization loop itself to filter out robust solutions by construction.",
}
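The abstract describes inferring closed-form control laws for a dynamical system by symbolic regression with genetic programming, and then analyzing the resulting expressions (here with the continuation package AUTO). The sketch below is not the authors' code: it illustrates only the first idea, scoring a small fixed pool of candidate control expressions for a damped oscillator under a quadratic cost, in place of a full genetic programming loop. The oscillator parameters, cost weights, and candidate expressions are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not from the paper): rank candidate symbolic
# control laws u(x, v) for a damped oscillator by an integrated quadratic cost.
import numpy as np

def simulate(control, T=20.0, dt=0.01, omega=1.0, gamma=0.1):
    """Integrate x'' + 2*gamma*x' + omega**2*x = u(x, v) with forward Euler
    and accumulate a cost penalizing state deviation and actuation effort."""
    n = int(T / dt)
    x, v = 1.0, 0.0                      # perturbed initial state, drive to rest
    cost = 0.0
    for _ in range(n):
        u = control(x, v)
        a = -2.0 * gamma * v - omega**2 * x + u
        x, v = x + dt * v, v + dt * a
        cost += dt * (x**2 + v**2 + 0.01 * u**2)
    return cost

# Hand-picked closed-form candidates standing in for the expressions that
# genetic programming would normally generate and recombine.
candidates = {
    "u = 0":            lambda x, v: 0.0,
    "u = -x":           lambda x, v: -x,
    "u = -v":           lambda x, v: -v,
    "u = -x - v":       lambda x, v: -x - v,
    "u = -x**3":        lambda x, v: -x**3,
    "u = -np.tanh(v)":  lambda x, v: -np.tanh(v),
}

scores = {name: simulate(f) for name, f in candidates.items()}
best = min(scores, key=scores.get)
print("best control law:", best, "cost =", round(scores[best], 3))
```

The point the abstract makes follows from the form of the winner: a symbolic law such as u = -x - v can be handed directly to a continuation tool like AUTO for bifurcation and robustness analysis, whereas a neural-network controller offers no comparably accessible expression.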
Genetic Programming entries for
Markus Quade
Thomas Isele
Markus W Abel