Genetic Programming, Validation Sets, and Parsimony Pressure
Created by W.Langdon from gp-bibliography.bib Revision:1.8051
@TechReport{oai:hal.ccsd.cnrs.fr:inria-00000996_v1,
  title =        "Genetic Programming, Validation Sets, and Parsimony
                 Pressure",
  author =       "Christian Gagn{\'e} and Marc Schoenauer and
                 Marc Parizeau and Marco Tomassini",
  publisher =    "HAL - CCSd - CNRS",
  year =         "2006",
  month =        jan # "~09",
  institution =  "l'Equipe TAO INRIA Futurs",
  type =         "ARTCOLLOQUE",
  number =       "inria-00000996",
  address =      "LRI Bat. 490, Universite Paris Sud, 91405 Orsay CEDEX,
                 France",
  annote =       "Christian Gagn{\'e}",
  bibsource =    "OAI-PMH server at hal.ccsd.cnrs.fr",
  contributor =  "Christian Gagn{\'e}",
  identifier =   "inria-00000996 (version 1)",
  oai =          "oai:hal.ccsd.cnrs.fr:inria-00000996_v1",
  keywords =     "genetic algorithms, genetic programming, Computer
                 Science/Learning",
  URL =          "http://hal.inria.fr/inria-00000996/en/",
  URL =          "http://hal.ccsd.cnrs.fr/docs/00/05/44/78/PDF/gagne-paper.pdf",
  URL =          "http://arxiv.org/abs/cs/0601044",
  size =         "12 pages",
abstract = "Fitness functions based on test cases are very common
in Genetic Programming (GP). This process can be
assimilated to a learning task, with the inference of
models from a limited number of samples. This paper is
an investigation on two methods to improve
generalization in GP-based learning: 1) the selection
of the best-of-run individuals using a three data sets
methodology, and 2) the application of parsimony
pressure in order to reduce the complexity of the
solutions. Results using GP in a binary classification
setup show that while the accuracy on the test sets is
preserved, with less variances compared to baseline
results, the mean tree size obtained with the tested
methods is significantly reduced.",
  notes =        "See also
                 \cite{eurogp06:GagneSchoenauerParizeauTomassini}",
}
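For readers unfamiliar with the two techniques named in the abstract, the Python sketch below illustrates the general idea only: candidate best-of-run individuals are compared on a separate validation set, with a tie-break on tree size standing in for parsimony pressure (the paper may apply parsimony pressure differently). The Individual class, the error helper, and the data splits are illustrative assumptions, not the authors' code.

    # Minimal sketch, not the authors' implementation: best-of-run selection
    # with a train/validation/test split and a size-based tie-break.
    from dataclasses import dataclass
    from typing import Callable, List, Sequence

    @dataclass
    class Individual:
        predict: Callable[[Sequence[float]], int]  # evolved GP classifier (placeholder)
        size: int                                  # tree size (node count)

    def error(ind: Individual,
              X: Sequence[Sequence[float]],
              y: Sequence[int]) -> float:
        """Binary classification error rate of an individual on one data set."""
        wrong = sum(1 for xi, yi in zip(X, y) if ind.predict(xi) != yi)
        return wrong / len(y)

    def select_best_of_run(candidates: List[Individual],
                           X_val: Sequence[Sequence[float]],
                           y_val: Sequence[int]) -> Individual:
        """Pick the best-of-run individual on the validation set, preferring
        smaller trees when validation errors are equal (a simple form of
        parsimony pressure)."""
        return min(candidates, key=lambda ind: (error(ind, X_val, y_val), ind.size))

    # Usage outline: evolve on the training set, select the best-of-run
    # individual on the validation set, then report generalization on the
    # held-out test set, e.g.
    #   best = select_best_of_run(hall_of_fame, X_val, y_val)
    #   print("test error:", error(best, X_test, y_test))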
Genetic Programming entries for Christian Gagne, Marc Schoenauer, Marc Parizeau, Marco Tomassini