ABSTRACT
Modern applications of Artificial Neural Networks (ANNs) largely feature networks organized into layers of nodes. Each layer contains an arbitrary number of nodes, and these nodes share edges only with nodes in certain other layers, as determined by the network's topology. ANN topologies are frequently designed by human intuition, because no versatile method exists for determining the best topology for a given problem. Previous attempts to automate the discovery of network topologies have used evolutionary computing [6]. Evolution in these systems built networks on a node-by-node basis, making larger, layered topologies unlikely to emerge. This paper provides an overview of Growth from Embryo of Layered Neural Networks (GELNN), which attempts to evolve neural network topologies in terms of layers and inter-layer connections rather than individual nodes and edges.
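The core idea of a layer-level encoding can be sketched as follows. This is an illustrative sketch only, not GELNN's actual implementation: the names (`LayeredGenome`, `mutate`) and the genome structure (a list of hidden-layer sizes plus a set of inter-layer connection pairs) are assumptions made for the example, and the mutation operators shown simply resize, insert, or delete whole layers rather than individual nodes or edges.

```python
from dataclasses import dataclass, field
import random

@dataclass
class LayeredGenome:
    """Hypothetical layer-level genome: sizes of hidden layers plus
    (from_layer, to_layer) connection pairs between layers."""
    layer_sizes: list
    connections: set = field(default_factory=set)

def mutate(genome, rng=random):
    """Mutate at layer granularity: resize, insert, or delete a whole layer.
    Returns a new genome; the parent is left unchanged."""
    g = LayeredGenome(list(genome.layer_sizes), set(genome.connections))
    op = rng.choice(["resize", "insert", "delete"])
    if op == "resize" and g.layer_sizes:
        i = rng.randrange(len(g.layer_sizes))
        g.layer_sizes[i] = max(1, g.layer_sizes[i] + rng.choice([-2, -1, 1, 2]))
    elif op == "insert":
        i = rng.randrange(len(g.layer_sizes) + 1)
        g.layer_sizes.insert(i, rng.randint(1, 8))
    elif op == "delete" and len(g.layer_sizes) > 1:
        g.layer_sizes.pop(rng.randrange(len(g.layer_sizes)))
    # Rebuild simple feed-forward connectivity between adjacent layers so the
    # topology stays valid after any layer-level change.
    g.connections = {(i, i + 1) for i in range(len(g.layer_sizes) - 1)}
    return g
```

Because each mutation manipulates an entire layer, a few generations can plausibly reach deep, layered topologies that node-by-node construction would take far longer to assemble.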
REFERENCES
- M. F. et al. An extendible package for data exploration, classification and correlation. Institute of Pharmaceutical and Food Analysis and Technologies.
- W. B. et al. Genetic Programming: An Introduction. 1998.
- B. Farley and W. Clark. Simulation of self-organizing systems by digital computer. Transactions of the IRE Professional Group on Information Theory, 4(4):76--84, 1954.
- G. E. Hinton. Reducing the dimensionality of data with neural networks. Science, 313(5786):504--507, 2006.
- G. E. Hinton. Learning multiple layers of representation. Trends in Cognitive Sciences, 11(10):428--434, 2007.
- A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 2012.
- M. Lichman. UCI Machine Learning Repository. University of California, School of Information and Computer Science, 2013.
- S. Luke and L. Spector. Evolving graphs and networks with edge encoding: Preliminary report. Late-Breaking Papers at the Genetic Programming 1996 Conference, 1996.
- M. Minsky and S. Papert. Perceptrons: An Introduction to Computational Geometry. The MIT Press, Cambridge, expanded edition, 1988 (first published 1969).
- F. Pagnotta and H. M. Amran. Using data mining to predict secondary school student alcohol consumption. Department of Computer Science, University of Camerino.
- R. Poli, W. B. Langdon, and N. F. McPhee. A Field Guide to Genetic Programming. 2008.
- D. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by error propagation. Readings in Cognitive Science, pages 399--421, 1988.
- L. Spector and A. Robinson. Genetic programming and autoconstructive evolution with the Push programming language. Genetic Programming and Evolvable Machines, 3, 2002.
- T. Helmuth, L. Spector, and J. Matheson. Solving uncompromising problems with lexicase selection. IEEE Transactions on Evolutionary Computation, 19(5):630--643, 2015.
- K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99--127, 2002.
Index Terms
- Evolution of Layer Based Neural Networks: Preliminary Report