Improving Generalisation of AutoML Systems with Dynamic Fitness Evaluations
@InProceedings{Evans:2020:GECCO,
  author =       "Benjamin P. Evans and Bing Xue and Mengjie Zhang",
  title =        "Improving Generalisation of {AutoML} Systems with
                  Dynamic Fitness Evaluations",
  year =         "2020",
  editor =       "Carlos Artemio {Coello Coello} and
                  Arturo Hernandez Aguirre and Josu Ceberio Uribe and
                  Mario Garza Fabre and Gregorio {Toscano Pulido} and
                  Katya Rodriguez-Vazquez and Elizabeth Wanner and
                  Nadarajen Veerapen and Efren Mezura Montes and
                  Richard Allmendinger and Hugo Terashima Marin and
                  Markus Wagner and Thomas Bartz-Beielstein and
                  Bogdan Filipic and Heike Trautmann and Ke Tang and
                  John Koza and Erik Goodman and William B. Langdon and
                  Miguel Nicolau and Christine Zarges and Vanessa Volz and
                  Tea Tusar and Boris Naujoks and Peter A. N. Bosman and
                  Darrell Whitley and Christine Solnon and Marde Helbig and
                  Stephane Doncieux and Dennis G. Wilson and
                  Francisco {Fernandez de Vega} and Luis Paquete and
                  Francisco Chicano and Bing Xue and Jaume Bacardit and
                  Sanaz Mostaghim and Jonathan Fieldsend and
                  Oliver Schuetze and Dirk Arnold and Gabriela Ochoa and
                  Carlos Segura and Carlos Cotta and Michael Emmerich and
                  Mengjie Zhang and Robin Purshouse and Tapabrata Ray and
                  Justyna Petke and Fuyuki Ishikawa and Johannes Lengler and
                  Frank Neumann",
  isbn13 =       "9781450371285",
  publisher =    "Association for Computing Machinery",
  publisher_address = "New York, NY, USA",
  URL =          "https://doi.org/10.1145/3377930.3389805",
  DOI =          "10.1145/3377930.3389805",
  booktitle =    "Proceedings of the 2020 Genetic and Evolutionary
                  Computation Conference",
  pages =        "324--332",
  size =         "9 pages",
  keywords =     "genetic algorithms, genetic programming, regularized
                  evolution, AutoML, regularization, automated machine
                  learning, generalisation, dynamic fitness evaluations",
  address =      "internet",
  series =       "GECCO '20",
  month =        jul # " 8-12",
  organisation = "SIGEVO",
  abstract =     "A common problem machine learning developers face is
                  overfitting: fitting a pipeline so closely to the
                  training data that its performance degrades on unseen
                  data. Automated machine learning aims to free (or at
                  least ease) the developer from the burden of pipeline
                  creation, but the overfitting problem can persist.
                  Indeed, it can worsen as we iteratively optimise
                  performance measured by an internal cross-validation
                  (most often k-fold). While this internal
                  cross-validation is intended to reduce overfitting, we
                  show that the search can still overfit to the
                  particular folds used. In this work, we remedy this
                  problem by introducing dynamic fitness evaluations,
                  which approximate repeated k-fold cross-validation at
                  little extra cost over a single k-fold, and at far
                  lower cost than typical repeated k-fold. The results
                  show that, when time is equated, the proposed fitness
                  function gives a significant improvement over the
                  current state-of-the-art baseline method, which uses a
                  single internal k-fold. Furthermore, the proposed
                  extension is very simple to implement on top of
                  existing evolutionary computation methods, and provides
                  an essentially free boost in generalisation/testing
                  performance.",
  notes =        "Also known as \cite{10.1145/3377930.3389805}
                  GECCO-2020 A Recombination of the 29th International
                  Conference on Genetic Algorithms (ICGA) and the 25th
                  Annual Genetic Programming Conference (GP)",
}
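
The abstract's key mechanism, re-sampling the internal k-fold partition as the evolutionary search progresses so that the run as a whole approximates repeated k-fold cross-validation, can be sketched in a few lines. The sketch below illustrates that general idea only, not the authors' implementation: the scikit-learn pipeline, the generation-seeded KFold, and the dynamic_fitness helper are all assumptions made for this example.

# Minimal sketch (not the paper's code): re-seed the k-fold split each
# generation so that, across generations, candidates are scored on many
# different partitions of the training data, approximating repeated
# k-fold cross-validation at roughly the cost of a single k-fold.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def dynamic_fitness(pipeline, X, y, generation, k=10):
    # Seeding the shuffle with the generation index keeps comparisons fair
    # within a generation (all candidates see the same folds) while
    # changing the partition from one generation to the next.
    cv = KFold(n_splits=k, shuffle=True, random_state=generation)
    return cross_val_score(pipeline, X, y, cv=cv).mean()

X, y = load_breast_cancer(return_X_y=True)
candidate = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
for generation in range(3):
    print(generation, dynamic_fitness(candidate, X, y, generation))

Because each candidate is still evaluated with only one k-fold pass per generation, the extra cost over a fixed internal k-fold is negligible, which matches the "essentially free boost" claim in the abstract.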