Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization
Created by W.Langdon from
gp-bibliography.bib Revision:1.8098
@InProceedings{Gonzalez:2020:CEC,
  author =    "Santiago Gonzalez and Risto Miikkulainen",
  booktitle = "2020 IEEE Congress on Evolutionary Computation (CEC)",
  title =     "Improved Training Speed, Accuracy, and Data
               Utilization Through Loss Function Optimization",
  year =      "2020",
  editor =    "Yaochu Jin",
  month =     "19-24 " # jul,
  keywords =  "genetic algorithms, genetic programming",
  isbn13 =    "978-1-7281-6929-3",
  DOI =       "doi:10.1109/CEC48606.2020.9185777",
  abstract =  "As the complexity of neural network models has grown,
               it has become increasingly important to optimize their
               design automatically through meta-learning. Methods for
               discovering hyperparameters, topologies, and learning
               rate schedules have led to significant increases in
               performance. This paper shows that loss functions can
               be optimized with meta-learning as well, and result in
               similar improvements. The method, Genetic Loss function
               Optimization (GLO), discovers loss functions de novo,
               and optimizes them for a target task. Leveraging
               techniques from genetic programming, GLO builds loss
               functions hierarchically from a set of operators and
               leaf nodes. These functions are repeatedly recombined
               and mutated to find an optimal structure, and then a
               covariance matrix adaptation evolution strategy
               (CMA-ES) is used to find optimal coefficients. Networks
               trained with GLO loss functions are found to outperform
               the standard cross-entropy loss on standard image
               classification tasks. Training with these new loss
               functions requires fewer steps, results in lower test
               error, and allows for smaller datasets to be used. Loss
               function optimization thus provides a new dimension of
               meta-learning, and constitutes an important step towards
               AutoML.",
  notes =     "Also known as \cite{9185777}",
}
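The abstract describes loss functions built hierarchically as expression trees from operators and leaf nodes, then recombined and mutated. A minimal sketch of that representation (not the authors' code; the node encoding, operator set, and mutation rule here are illustrative assumptions):

```python
import math
import random

# Illustrative operator set for GLO-style loss-function trees.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def evaluate(node, y_true, y_pred):
    """Recursively evaluate an expression tree on one (y_true, y_pred) pair."""
    kind = node[0]
    if kind == "y_true":
        return y_true
    if kind == "y_pred":
        return y_pred
    if kind == "const":
        return node[1]
    if kind == "log":  # clamp the argument to keep log defined
        return math.log(max(evaluate(node[1], y_true, y_pred), 1e-12))
    return OPS[kind](evaluate(node[1], y_true, y_pred),
                     evaluate(node[2], y_true, y_pred))

# Per-class cross-entropy, -y_true * log(y_pred), encoded as a tree:
xent = ("mul", ("const", -1.0),
        ("mul", ("y_true",), ("log", ("y_pred",))))

def mutate(node, rng):
    """Point mutation: perturb constants, leave the tree structure intact.
    (GLO separately tunes coefficients with CMA-ES; a Gaussian nudge
    stands in for that step here.)"""
    if node[0] == "const":
        return ("const", node[1] + rng.gauss(0.0, 0.1))
    if node[0] in ("y_true", "y_pred"):
        return node
    if node[0] == "log":
        return ("log", mutate(node[1], rng))
    return (node[0], mutate(node[1], rng), mutate(node[2], rng))
```

A full GLO run would also recombine subtrees between parents and fit the constants with CMA-ES; this sketch only shows the tree encoding that makes those operations possible.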