
Comparing Hybrid Systems to Design and Optimize Artificial Neural Networks

  • Conference paper
Genetic Programming (EuroGP 2004)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3003)


Abstract

In this paper we conduct a comparative study of hybrid methods for optimizing multilayer perceptrons: a model that optimizes the architecture and initial weights of the network; a parallel approach to the same architecture-and-weights optimization; a method that searches for the parameters of the training algorithm; and an approach based on cooperative co-evolutionary optimization of multilayer perceptrons.

The results show that the co-evolutionary model obtains results similar to or better than those of the specialized approaches, while requiring far fewer training epochs and therefore far less simulation time.
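All of these hybrid methods share the same skeleton: an evolutionary loop whose fitness function is a short training run of the candidate network. The Python fragment below is a minimal sketch of that idea only, not any of the implementations compared in the paper; the toy dataset, the genome (hidden-layer size plus an initial-weight scale, a simplification of evolving the full initial weights), and all numeric settings are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): a genetic algorithm searching
# over an MLP's hidden-layer size and initial-weight scale, with fitness
# given by the validation error after a few epochs of plain backprop.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem standing in for a real benchmark dataset.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def make_mlp(hidden, scale):
    """Draw random initial weights for a 4-hidden-1 network."""
    return {"W1": rng.normal(0, scale, (4, hidden)),
            "W2": rng.normal(0, scale, (hidden, 1))}

def fitness(net, epochs=20, lr=0.5):
    """A short backprop run; lower validation MSE = fitter genome."""
    W1, W2 = net["W1"].copy(), net["W2"].copy()
    for _ in range(epochs):
        H = np.tanh(X_tr @ W1)                      # hidden activations
        out = 1.0 / (1.0 + np.exp(-(H @ W2)))       # sigmoid output
        delta = (out - y_tr) * out * (1.0 - out)    # output-layer error
        dW2 = H.T @ delta
        dW1 = X_tr.T @ ((delta @ W2.T) * (1.0 - H**2))
        W1 -= lr * dW1 / len(X_tr)
        W2 -= lr * dW2 / len(X_tr)
    H = np.tanh(X_va @ W1)
    out = 1.0 / (1.0 + np.exp(-(H @ W2)))
    return float(np.mean((out - y_va) ** 2))

# Evolutionary loop: truncation selection plus mutation of survivors.
pop = [(int(rng.integers(2, 16)), float(rng.uniform(0.1, 2.0)))
       for _ in range(10)]
for gen in range(5):
    scored = sorted(pop, key=lambda g: fitness(make_mlp(*g)))
    parents = scored[:5]
    pop = parents + [(max(2, h + int(rng.integers(-2, 3))),   # mutate size
                      abs(s + rng.normal(0.0, 0.2)))          # mutate scale
                     for h, s in parents]
    print(f"gen {gen}: best hidden={scored[0][0]}, scale={scored[0][1]:.2f}")
```

In the approaches the paper actually compares, the genome is richer (the complete set of initial weights, or the parameters of the training algorithm itself), and in the co-evolutionary variant subcomponents of the network are evolved in cooperating populations; the evaluate-train-select loop, however, keeps this same shape.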






Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Castillo, P.A., Arenas, M.G., Merelo, J.J., Romero, G., Rateb, F., Prieto, A. (2004). Comparing Hybrid Systems to Design and Optimize Artificial Neural Networks. In: Keijzer, M., O’Reilly, UM., Lucas, S., Costa, E., Soule, T. (eds) Genetic Programming. EuroGP 2004. Lecture Notes in Computer Science, vol 3003. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24650-3_22

  • DOI: https://doi.org/10.1007/978-3-540-24650-3_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-21346-8

  • Online ISBN: 978-3-540-24650-3

  • eBook Packages: Springer Book Archive
