Neural Programming and an Internal Reinforcement Policy
Created by W.Langdon from gp-bibliography.bib Revision:1.8051
@InProceedings{teller:1996:npirpSV,
  author    = "Astro Teller and Manuela Veloso",
  title     = "Neural Programming and an Internal Reinforcement Policy",
  booktitle = "International Conference Simulated Evolution and Learning",
  year      = "1996",
  publisher = "Springer-Verlag",
  keywords  = "genetic algorithms, genetic programming, ANN",
  URL       = "http://www.cs.cmu.edu/afs/cs/usr/astro/public/papers/AS.ps",
  URL       = "http://www.cs.cmu.edu/afs/cs/usr/astro/mosaic/astroseal/astro/seal.html",
  size      = "8 pages",
  abstract  = "An important reason for the continued popularity of
               Artificial Neural Networks (ANNs) in the machine learning
               community is that the gradient-descent backpropagation
               procedure gives ANNs a locally optimal change procedure and,
               in addition, a framework for understanding the ANN learning
               performance. Genetic programming (GP) is also a successful
               evolutionary learning technique that provides powerful
               parameterized primitive constructs. Unlike ANNs, though, GP
               does not have such a principled procedure for changing parts
               of the learned system based on its current performance. This
               paper introduces Neural Programming, a connectionist
               representation for evolving programs that maintains the
               benefits of GP. The connectionist model of Neural Programming
               allows for a regression credit-blame procedure in an
               evolutionary learning system. We describe a general method
               for an informed feedback mechanism for Neural Programming,
               Internal Reinforcement. We introduce an Internal
               Reinforcement procedure and demonstrate its use through an
               illustrative experiment.",
  notes     = "HTML version available from http://www.cs.cmu.edu/~astro/
               SEAL, PADO, bucket-brigade. IRNP reaches a given level of
               performance in 30% of the generations taken by NP",
}
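The abstract's central claim is that a connectionist program representation lets credit and blame be pushed backwards through a program's dataflow graph, in the spirit of the bucket-brigade credit assignment mentioned in the notes. The toy Python sketch below illustrates only that general idea; it is not the paper's Internal Reinforcement procedure, and the example graph, decay factor, and even-split update rule are illustrative assumptions.

    # Hypothetical illustration only (not the paper's algorithm): distribute a
    # scalar credit backwards through a small program dataflow graph, roughly
    # in the spirit of bucket-brigade credit assignment.
    from collections import defaultdict, deque

    def propagate_credit(preds, output_node, credit_at_output, decay=0.9):
        """preds maps node -> list of predecessor nodes (its inputs)."""
        # Count successors of each reachable node so a node's credit is only
        # passed on once every share flowing into it has arrived.
        succ_count = defaultdict(int)
        seen, stack = set(), [output_node]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            for p in preds.get(n, []):
                succ_count[p] += 1
                stack.append(p)
        # Kahn-style pass from the output towards the inputs.
        credit = defaultdict(float)
        credit[output_node] = credit_at_output
        ready = deque([output_node])
        while ready:
            n = ready.popleft()
            inputs = preds.get(n, [])
            if not inputs:
                continue
            share = credit[n] * decay / len(inputs)   # split credit evenly
            for p in inputs:
                credit[p] += share
                succ_count[p] -= 1
                if succ_count[p] == 0:
                    ready.append(p)
        return dict(credit)

    if __name__ == "__main__":
        # Toy graph: read feeds both add and mul, which feed the output node.
        graph = {"output": ["add", "mul"], "add": ["read"], "mul": ["read"]}
        print(propagate_credit(graph, "output", credit_at_output=1.0))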
Genetic Programming entries for Astro Teller, Manuela Veloso