Grammatical Evolution-Driven Algorithm for Efficient and Automatic Hyperparameter Optimisation of Neural Networks
Created by W.Langdon from gp-bibliography.bib Revision:1.8168
@Article{Vaidya:2023:Algorithms,
  author =       "Gauri Vaidya and Meghana Kshirsagar and Conor Ryan",
  title =        "Grammatical Evolution-Driven Algorithm for Efficient
                 and Automatic Hyperparameter Optimisation of Neural
                 Networks",
  journal =      "Algorithms",
  year =         "2023",
  volume =       "16",
  number =       "7",
  pages =        "article number 319",
  keywords =     "genetic algorithms, genetic programming, Grammatical
                 Evolution, ANN, search space pruning, machine learning,
                 combinatorial optimisation, computer vision,
                 metaheuristics, hyperparameter tuning, deep learning,
                 neural networks",
  ISSN =         "1999-4893",
  URL =          "https://www.mdpi.com/1999-4893/16/7/319",
  DOI =          "doi:10.3390/a16070319",
  size =         "16 pages",
  abstract =     "... Neural networks have revolutionised the way we
                 approach problem solving across multiple domains;
                 however, their effective design and efficient use of
                 computational resources remain a challenging task. One
                 of the most important factors influencing this process
                 is the model hyperparameters, which vary significantly
                 across models and datasets. Recently, there has been an
                 increased focus on automatically tuning these
                 hyperparameters to reduce complexity and to optimise
                 resources. From traditional human-intuitive tuning
                 methods to random search, grid search, Bayesian
                 optimisation, and evolutionary algorithms, significant
                 advancements have been made in this direction that
                 promise improved performance while using fewer
                 resources. We propose HyperGE, a two-stage model for
                 automatically tuning hyperparameters, driven by
                 grammatical evolution (GE), a bioinspired
                 population-based machine learning algorithm. GE
                 provides an advantage in that it allows users to define
                 their own grammar for generating solutions, making it
                 ideal for defining search spaces across datasets and
                 models. We test HyperGE to fine-tune the VGG-19 and
                 ResNet-50 pre-trained networks using three benchmark
                 datasets. We demonstrate that the search space is
                 significantly reduced, by a factor of about ninety
                 percent, in Stage 2, with fewer trials. HyperGE could
                 become an invaluable tool within the deep learning
                 community, allowing practitioners greater freedom when
                 exploring complex problem domains for hyperparameter
                 fine-tuning.",
  notes =        "also known as \cite{a16070319}",
}