Created by W.Langdon from
gp-bibliography.bib Revision:1.8120
@PhdThesis{DissertationZaef18,
  author =       "Martin Zaefferer",
  title =        "Surrogate models for discrete optimization problems",
  school =       "Fakultaet fuer Informatik, Technische Universitaet
                 Dortmund",
  year =         "2018",
  type =         "Doktor der Ingenieurwissenschaften",
  address =      "Germany",
  month =        "19 " # dec,
  keywords =     "genetic algorithms, genetic programming, ddc:004,
                 surrogate modelling, discrete optimisation,
                 combinatorial optimisation, kriging, kernel",
  bibsource =    "OAI-PMH server at eldorado.uni-dortmund.de",
  language =     "eng",
  oai =          "oai:eldorado.tu-dortmund.de:2003/37870",
  URL =          "http://hdl.handle.net/2003/37870
                 https://eldorado.tu-dortmund.de/bitstream/2003/37870/1/DissertationZaef18.pdf",
  DOI =          "10.17877/DE290R-19857",
  size =         "237 pages",
  abstract =     "Surrogate models are crucial tools for many real-world
                 optimisation problems. An optimisation algorithm can
                 evaluate a data-driven surrogate model instead of an
                 expensive objective function. While surrogate models
                 are well established in the continuous optimisation
                 domain, they are less frequently applied to more
                 complex search spaces with discrete or combinatorial
                 solution representations. The main goal of this thesis
                 is to fill this gap. We develop and improve methods for
                 data-driven surrogate modelling in discrete search
                 spaces. After an initial review of existing approaches,
                 this work focuses on a similarity-based, or
                 kernel-based, model: Kriging. The intuitive idea is to
                 change the underlying kernel, thus adapting Kriging to
                 arbitrary data types. However, the model is sensitive
                 to the employed kernel. Hence, methods for combining or
                 selecting kernels are required. To that end, this
                 thesis discusses various methods based on concepts such
                 as correlation, likelihood, and cross-validation. Our
                 experiments show that choosing or constructing the
                 right kernel determines the success of the optimisation
                 algorithm. Another challenge is that kernels are often
                 desired to be positive semi-definite (e.g., correlation
                 functions) or conditionally negative semi-definite
                 (distances). Analytical proofs of a kernel's
                 definiteness are possible, yet not always feasible in
                 practice. At the same time, these properties can have a
                 significant effect on model accuracy. Hence, we propose
                 a randomized, empirical search procedure to determine
                 whether a kernel is definite. Once a kernel is
                 determined to be indefinite, appropriate
                 counter-measures have to be taken to avoid performance
                 losses. Taking inspiration from the field of support
                 vector learning, we use eigenspectrum transformations
                 and related methods to correct the kernel matrices. We
                 add to the state of the art by considering various
                 repair mechanisms that are linked to additional
                 requirements imposed by Kriging models. We test most of
                 our methods on sets of simple benchmark problems. To
                 extend this, we also investigate two more complex
                 problems. Firstly, we consider solutions represented by
                 tree structures, which are of interest in genetic
                 programming. We discuss different kernels on tree
                 structures and test them on symbolic regression tasks.
                 A second, more complex test case is given by
                 hierarchical search spaces, where some variables are
                 only active when other variables satisfy a condition.
                 We design several new kernels for hierarchical search
                 spaces and compare them to an existing kernel. Even
                 with these more complex test cases, it remains unclear
                 how performance estimates translate to real-world
                 problems. To address this, we propose a data-driven
                 test function generator based on Kriging simulation. In
                 contrast to estimation, simulation may avoid smoothing
                 the training data and is inherently suited to
                 generating randomized test instances that reflect
                 real-world behaviour.",
  notes =        "Supervisors: Prof. Dr. Thomas Bartz-Beielstein and
                 Prof. Dr. Guenter Rudolph",
}