Enhancement of Model Generalisation in Multiobjective Genetic Programming
@PhdThesis{jnThesis_1.0.5,
  author =   "Ji Ni",
  title =    "Enhancement of Model Generalisation in Multiobjective
              Genetic Programming",
  school =   "Electronic and Electrical Engineering, University of
              Sheffield",
  year =     "2013",
  address =  "UK",
  month =    dec,
  keywords = "genetic algorithms, genetic programming, MOGP",
  URL =      "http://etheses.whiterose.ac.uk/5021/1/jnThesis_1.0.5.pdf",
  URL =      "http://etheses.whiterose.ac.uk/5021/1/jnThesis_1.0.5.docx",
  URL =      "http://etheses.whiterose.ac.uk/5021/",
  URL =      "http://ethos.bl.uk/OrderDetails.do?did=32&uin=uk.bl.ethos.589326",
  size =     "133 pages",
abstract = "Multi-objective genetic programming (MOGP) is a
powerful evolutionary algorithm that requires no human
pre-fixed model sets to handle regression and
classification problems in the machine learning area.
We aim to improve the model generalisation of MOGP in
both regression and classification tasks. The work in
this thesis has three main contributions. First, we
propose replacing the division operator used in genetic
programming with an analytic quotient (AQ) operator in
regression to systematically achieve lower mean squared
error due principally to removing the discontinuities
or singularities caused by conventional protected or
unprotected division. Further, this AQ operator is
differentiable. Second, we propose using Tikhonov
regularisation, in conjunction with node count (using
an extension of Pareto comparison from vectors to
tuples) as a general complexity measure in MOGP. We
demonstrate that employing this general complexity
yields mean squared test error measures over a range of
regression problems which are typically superior to
those from conventional node count. We further analysed
the reason why our new method outperforms the
conventional complexity measure and conclude that it
forms a decision mechanism which balances both
syntactic and semantic information. Third, we propose
using a loss measure complementary to Vapnik's
statistical learning theory, which can effectively
stabilise classifiers trained by MOGP. We demonstrate
that this loss measure has a number of attractive
properties and has a better correlation with
generalisation error compared to 0/1 loss, so that
better generalisation performance is achievable.",
  notes =    "Supervisor: Dr. Peter I. Rockett

              uk.bl.ethos.589326",
}
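
The AQ operator named in the first contribution is, in the authors' related published work, defined as AQ(a, b) = a / sqrt(1 + b^2): it is finite and smooth for every real b, whereas protected division jumps to a fixed fallback value near b = 0. A minimal Python sketch of the two operators follows; the epsilon threshold and the fallback value 1.0 in protected_div are common illustrative conventions, not taken from the thesis.

    import math

    def protected_div(a, b, eps=1e-6):
        # Conventional protected division: falls back to 1.0 when |b|
        # is tiny, creating the discontinuity the thesis criticises.
        return a / b if abs(b) > eps else 1.0

    def analytic_quotient(a, b):
        # AQ(a, b) = a / sqrt(1 + b^2): defined for all real b, smooth
        # and differentiable everywhere, and close to a/b for |b| >> 1.
        return a / math.sqrt(1.0 + b * b)

    # Near b = 0, protected division jumps while AQ varies smoothly.
    for b in (0.0, 1e-5, 0.5, 2.0):
        print(b, protected_div(1.0, b), analytic_quotient(1.0, b))

At b = 1e-5 protected division returns 100000.0 while AQ stays near 1.0, which is the singular behaviour the abstract says the AQ operator removes.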
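
The second contribution compares individuals on a fitness in which the complexity element is itself a tuple (a Tikhonov regularisation measure paired with node count), with Pareto comparison extended from vectors to tuples. One plausible reading of that extension, sketched here as an assumption rather than the thesis's definition, applies the usual "no worse everywhere, strictly better somewhere" rule recursively, so a tuple element is compared componentwise:

    def weakly_better(a, b):
        # True if element a is no worse than b under minimisation.
        # Tuple elements are compared componentwise by the same rule
        # (an illustrative assumption about the thesis's extension).
        if isinstance(a, tuple):
            return all(weakly_better(x, y) for x, y in zip(a, b))
        return a <= b

    def dominates(u, v):
        # u Pareto-dominates v: no worse in every element and not
        # weakly matched back in at least one.
        return (all(weakly_better(a, b) for a, b in zip(u, v)) and
                any(not weakly_better(b, a) for a, b in zip(u, v)))

    # Hypothetical records: (training MSE, (Tikhonov measure, node count)).
    print(dominates((0.42, (1.3, 17)), (0.45, (1.3, 25))))  # True
    print(dominates((0.42, (1.3, 17)), (0.40, (1.3, 25))))  # False: trade-off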
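
The third contribution claims its loss measure correlates better with generalisation error than 0/1 loss. The thesis's own measure is not reproduced here; as a hedged illustration of why a graded loss can discriminate where 0/1 loss cannot, the sketch below substitutes the standard hinge loss, which also penalises correct but low-margin predictions:

    def zero_one_loss(margins):
        # 0/1 loss counts misclassifications only; it ignores how close
        # the correct predictions sit to the decision boundary.
        return sum(m <= 0 for m in margins) / len(margins)

    def hinge_loss(margins):
        # Hinge loss (a stand-in, not the thesis's measure) also charges
        # correct predictions whose margin falls below 1.
        return sum(max(0.0, 1.0 - m) for m in margins) / len(margins)

    # Margins y*f(x): positive is correct, magnitude reflects confidence.
    confident = [2.1, 1.8, 1.5, -0.2]
    marginal = [0.1, 0.2, 0.1, -0.2]
    print(zero_one_loss(confident), zero_one_loss(marginal))  # 0.25 0.25
    print(hinge_loss(confident), hinge_loss(marginal))        # 0.3 0.95

The two classifiers are indistinguishable under 0/1 loss but separated by the graded loss, the kind of extra stability signal the abstract attributes to its measure.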