Abstract
In this paper we conduct a comparative study of hybrid methods for optimizing multilayer perceptrons: a model that optimizes the architecture and initial weights of multilayer perceptrons; a parallel approach to the same architecture-and-weights optimization; a method that searches for the parameters of the training algorithm; and an approach for cooperative co-evolutionary optimization of multilayer perceptrons.
The results show that the co-evolutionary model obtains results similar to or better than those of the specialized approaches, while needing far fewer training epochs and therefore much less simulation time.
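To make the idea of evolving the architecture and initial weights of a multilayer perceptron concrete, the following is a minimal illustrative sketch, not any of the compared systems (G-Prop, SA-Prop, or the co-evolutionary model). It assumes a toy setting: genomes are (hidden units, learning rate, weight-initialisation seed) triples, fitness is the mean squared error of a one-hidden-layer perceptron after brief backpropagation training on XOR, and the genetic loop is simple truncation selection plus mutation. All names and parameter choices here are hypothetical.

```python
import math
import random

def train_mlp(hidden, lr, seed, epochs=300):
    """Train a 2-input, single-hidden-layer MLP on XOR with plain
    stochastic backpropagation; return the final mean squared error."""
    rng = random.Random(seed)                      # genome's seed fixes the initial weights
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    Y = [0, 1, 1, 0]
    # w1 rows: weight from x0, from x1, and the hidden bias; w2 has a trailing output bias.
    w1 = [[rng.uniform(-1, 1) for _ in range(hidden)] for _ in range(3)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))

    def forward(x0, x1):
        h = [sig(x0 * w1[0][j] + x1 * w1[1][j] + w1[2][j]) for j in range(hidden)]
        o = sig(sum(h[j] * w2[j] for j in range(hidden)) + w2[hidden])
        return h, o

    for _ in range(epochs):
        for (x0, x1), y in zip(X, Y):
            h, o = forward(x0, x1)
            d_o = (o - y) * o * (1 - o)            # output delta (sigmoid derivative)
            for j in range(hidden):
                d_h = d_o * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * d_o * h[j]
                w1[0][j] -= lr * d_h * x0
                w1[1][j] -= lr * d_h * x1
                w1[2][j] -= lr * d_h
            w2[hidden] -= lr * d_o
    return sum((forward(x0, x1)[1] - y) ** 2 for (x0, x1), y in zip(X, Y)) / len(X)

def evolve(pop_size=6, generations=3, seed=0):
    """Truncation-selection GA over (hidden units, learning rate, init seed)."""
    rng = random.Random(seed)
    pop = [(rng.randint(2, 8), rng.uniform(0.1, 1.0), rng.randrange(10**6))
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=lambda g: train_mlp(*g))[:pop_size // 2]
        # Refill by mutating elites: nudge the hidden-layer size and learning
        # rate, and draw a fresh seed so the initial weights are re-sampled.
        pop = elite + [(max(2, h + rng.choice([-1, 0, 1])),
                        min(1.5, max(0.05, lr * rng.uniform(0.7, 1.3))),
                        rng.randrange(10**6))
                       for (h, lr, _) in elite]
    return min(pop, key=lambda g: train_mlp(*g))

best = evolve()
print("best genome (hidden, lr, seed):", best)
```

The systems compared in the paper differ mainly in what the genome encodes (full weight vectors, training-algorithm parameters, or cooperating subpopulations in the co-evolutionary case) and in whether evaluation is distributed, but the evaluate-select-mutate loop above is the shared skeleton.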
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Castillo, P.A., Arenas, M.G., Merelo, J.J., Romero, G., Rateb, F., Prieto, A. (2004). Comparing Hybrid Systems to Design and Optimize Artificial Neural Networks. In: Keijzer, M., O’Reilly, UM., Lucas, S., Costa, E., Soule, T. (eds) Genetic Programming. EuroGP 2004. Lecture Notes in Computer Science, vol 3003. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24650-3_22
DOI: https://doi.org/10.1007/978-3-540-24650-3_22
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-21346-8
Online ISBN: 978-3-540-24650-3