Abstract
NeuroEvolution (NE) is the application of evolutionary algorithms to Artificial Neural Networks (ANNs). This paper reports on an investigation into the relative importance of weight evolution and topology evolution when training ANNs using NE. The investigation used the NE technique Cartesian Genetic Programming of Artificial Neural Networks (CGPANN). The results presented show that the choice of topology has a dramatic impact on the effectiveness of NE when only weights are evolved; an issue not faced when both weights and topology are manipulated. This paper also presents the surprising result that topology evolution alone is far more effective when training ANNs than weight evolution alone. This is a significant result, as many methods which train ANNs manipulate only the weights.
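The two regimes compared in the abstract can be pictured with a short sketch. This is not the authors' implementation: the simplified feed-forward CGP-style genotype, the tanh node function, the toy XOR task, the mutation rate and the (1 + 4)-ES with neutral acceptance are all illustrative assumptions. The only thing carried over from the abstract is the distinction between mutating weight genes and mutating connection (topology) genes.

```python
import math
import random

ARITY = 2       # connection genes per node (assumption)
N_NODES = 10    # nodes in the genotype (assumption)
N_INPUTS = 2    # the toy XOR task has two inputs

def random_genotype():
    """Each node carries ARITY connection genes and ARITY weight genes."""
    nodes = []
    for i in range(N_NODES):
        sources = [random.randrange(N_INPUTS + i) for _ in range(ARITY)]
        weights = [random.uniform(-1.0, 1.0) for _ in range(ARITY)]
        nodes.append((sources, weights))
    output_gene = random.randrange(N_INPUTS + N_NODES)
    return nodes, output_gene

def evaluate(genotype, inputs):
    """Decode the feed-forward genotype for one input pattern."""
    nodes, output_gene = genotype
    values = list(inputs)
    for sources, weights in nodes:
        total = sum(w * values[s] for w, s in zip(weights, sources))
        values.append(math.tanh(total))
    return values[output_gene]

XOR = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def fitness(genotype):
    """Negative squared error over the XOR patterns (higher is better)."""
    return -sum((evaluate(genotype, x) - target) ** 2 for x, target in XOR)

def mutate(genotype, evolve_weights, evolve_topology, rate=0.1):
    """Point-mutate weight genes, connection genes, or both."""
    nodes, output_gene = genotype
    new_nodes = []
    for i, (sources, weights) in enumerate(nodes):
        sources = [random.randrange(N_INPUTS + i)
                   if evolve_topology and random.random() < rate else s
                   for s in sources]
        weights = [random.uniform(-1.0, 1.0)
                   if evolve_weights and random.random() < rate else w
                   for w in weights]
        new_nodes.append((sources, weights))
    if evolve_topology and random.random() < rate:
        output_gene = random.randrange(N_INPUTS + N_NODES)
    return new_nodes, output_gene

def evolve(evolve_weights, evolve_topology, generations=2000):
    """(1 + 4)-ES; offspring of equal fitness replace the parent (neutral drift)."""
    parent = random_genotype()
    parent_fitness = fitness(parent)
    for _ in range(generations):
        children = [mutate(parent, evolve_weights, evolve_topology) for _ in range(4)]
        scored = [(fitness(c), c) for c in children]
        best_fitness, best_child = max(scored, key=lambda pair: pair[0])
        if best_fitness >= parent_fitness:
            parent, parent_fitness = best_child, best_fitness
    return parent_fitness

random.seed(0)
print("weights only :", evolve(evolve_weights=True, evolve_topology=False))
print("topology only:", evolve(evolve_weights=False, evolve_topology=True))
```

In the weight-only run the topology stays fixed at whatever structure the initial random genotype happened to encode, which is why, as the abstract reports, the choice of topology matters so much in that regime; in the topology-only run the randomly initialised weights are left untouched.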
Notes
1. Both CGP and ANNs can also be structured in a recurrent form.
2. Fully connected between layers, i.e. a node in hidden layer two receives an input from every node in hidden layer one.
3. If the arity is set high enough, however, all topologies are possible, as each node can effectively lower its own arity by utilizing only the first of multiple connections between the same two nodes (illustrated in the sketch below).
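Note 3 can be illustrated with a minimal sketch, reusing the genotype convention from the example after the abstract. The node and source indices are arbitrary, and the edge-set view is an assumption for illustration rather than the paper's decoding routine: duplicate connection genes between the same pair of nodes contribute no new edge, so a node can behave as if its arity were lower.

```python
def distinct_edges(node_index, sources):
    """Set of distinct (source, node) edges contributed by one node's connection genes."""
    return {(source, node_index) for source in sources}

print(distinct_edges(5, [2, 2, 2]))  # {(2, 5)}: arity 3, but effectively a single connection
print(distinct_edges(5, [0, 2, 4]))  # three distinct connections
```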
© 2013 Springer International Publishing Switzerland
Cite this paper
Turner, A.J., Miller, J.F. (2013). The Importance of Topology Evolution in NeuroEvolution: A Case Study Using Cartesian Genetic Programming of Artificial Neural Networks. In: Bramer, M., Petridis, M. (eds) Research and Development in Intelligent Systems XXX. SGAI 2013. Springer, Cham. https://doi.org/10.1007/978-3-319-02621-3_15