
The Evolutionary Buffet Method

Part of the book series: Genetic and Evolutionary Computation ((GEVO))

Abstract

Within the fields of Genetic Algorithms (GA) and Artificial Intelligence (AI), a variety of computational substrates with the power to find solutions to a large variety of problems have been described. Research has specialized in different computational substrates that each excel in different problem domains. For example, Artificial Neural Networks (ANN) (Russell et al., Artificial intelligence: a modern approach, vol 2. Prentice Hall, Upper Saddle River, 2003) have proven effective at classification; Genetic Programs (by which we mean mathematical tree-based genetic programming, abbreviated GP) (Koza, Stat Comput 4:87–112, 1994) are often used to find complex equations that fit data; NeuroEvolution of Augmenting Topologies (NEAT) (Stanley and Miikkulainen, Evolut Comput 10:99–127, 2002) is good at robotics control problems (Cully et al., Nature 521:503, 2015); and Markov Brains (MB) (Edlund et al., PLoS Comput Biol 7:e1002236, 2011; Marstaller et al., Neural Comput 25:2079–2107, 2013; Hintze et al., Markov brains: a technical introduction. arXiv:1709.05601, 2017) are used to test hypotheses about evolutionary behavior (Olson et al., J R Soc Interf 10:20130305, 2013), among many other examples. Given the wide range of problems and the vast number of computational substrates, practitioners of GA and AI face the difficulty that every new problem requires an assessment to find an appropriate computational substrate, as well as specific parameter tuning to achieve optimal results.
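To make the tuning burden described above concrete, the following is a minimal sketch of a generational genetic algorithm over bit-string genomes. It is not the chapter's buffet method, nor the MABE framework; all names and parameter values (`pop_size`, `mut_rate`, the OneMax toy fitness) are illustrative assumptions. The point is that every hyperparameter below, and the choice of the bit-string substrate itself, would typically need to be reassessed for each new problem.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50,
           mut_rate=0.1, seed=0):
    """Minimal generational GA over bit-string genomes.

    Every argument here is a tuning knob that a practitioner must
    adjust per problem; the bit-string representation itself may be
    a poor substrate for, e.g., control or classification tasks.
    """
    rng = random.Random(seed)
    # Random initial population of 0/1 genomes.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half as parents.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)        # one-point crossover
            child = a[:cut] + b[cut:]
            # Per-bit mutation: flip each bit with probability mut_rate.
            child = [g ^ 1 if rng.random() < mut_rate else g for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Toy problem ("OneMax"): maximize the number of 1-bits in the genome.
best = evolve(fitness=sum)
```

Swapping in a harder fitness function, or a substrate such as an ANN or a Markov Brain, would generally require re-tuning these parameters from scratch, which is exactly the assessment cost the abstract identifies.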


References

  1. Adami, C., Brown, C.T.: Evolutionary learning in the 2D artificial life system Avida. In: Artificial Life IV, vol. 1194, pp. 377–381. MIT Press, Cambridge, MA (1994)

  2. Adami, C., Schossau, J., Hintze, A.: Evolutionary game theory using agent-based methods. Physics of Life Reviews 19, 1–26 (2016)

  3. Albantakis, L., Hintze, A., Koch, C., Adami, C., Tononi, G.: Evolution of integrated causal structures in animats exposed to environments of increasing complexity. PLoS Computational Biology 10, e1003966 (2014)

  4. Barto, A.G., Sutton, R.S., Anderson, C.W.: Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics 13, 834–846 (1983)

  5. Beer, R.D., et al.: Toward the evolution of dynamical neural networks for minimally cognitive behavior. From Animals to Animats 4, 421–429 (1996)

  6. Bohm, C., CG, N., Hintze, A.: MABE (Modular Agent Based Evolver): A framework for digital evolution research. In: Proceedings of the European Conference on Artificial Life (2017)

  7. Cully, A., Clune, J., Tarapore, D., Mouret, J.B.: Robots that can adapt like animals. Nature 521, 503 (2015)

  8. Edlund, J.A., Chaumont, N., Hintze, A., Koch, C., Tononi, G., Adami, C.: Integrated information increases with fitness in the evolution of animats. PLoS Computational Biology 7, e1002236 (2011)

  9. Elman, J.L.: Finding structure in time. Cognitive Science 14, 179–211 (1990)

  10. Goldman, B.W., Punch, W.F.: Parameter-less population pyramid. In: GECCO '14: Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, pp. 785–792. ACM, Vancouver, BC, Canada (2014)

  11. Grabowski, L.M., Bryson, D.M., Dyer, F.C., Ofria, C., Pennock, R.T.: Early evolution of memory usage in digital organisms. In: ALIFE, pp. 224–231. Citeseer (2010)

  12. Hintze, A., et al.: Markov brains: A technical introduction. arXiv preprint arXiv:1709.05601 (2017)

  13. Hintze, A., Miromeni, M.: Evolution of autonomous hierarchy formation and maintenance. Artificial Life 14, 366–367 (2014)

  14. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E.: Adaptive mixtures of local experts. Neural Computation 3, 79–87 (1991)

  15. James, D., Tucker, P.: A comparative analysis of simplification and complexification in the evolution of neural network topologies. In: Proceedings of the Genetic and Evolutionary Computation Conference (2004)

  16. Jordan, M.I.: Serial order: A parallel distributed processing approach. In: Advances in Psychology, vol. 121, pp. 471–495. Elsevier (1997)

  17. Kaelbling, L.P., Littman, M.L., Cassandra, A.R.: Planning and acting in partially observable stochastic domains. Artificial Intelligence 101, 99–134 (1998)

  18. Koza, J.R.: Genetic programming as a means for programming computers by natural selection. Statistics and Computing 4, 87–112 (1994)

  19. Kvam, P., Cesario, J., Schossau, J., Eisthen, H., Hintze, A.: Computational evolution of decision-making strategies. arXiv preprint arXiv:1509.05646 (2015)

  20. Lehman, J., Stanley, K.O.: Exploiting open-endedness to solve problems through the search for novelty. In: ALIFE, pp. 329–336 (2008)

  21. Marstaller, L., Hintze, A., Adami, C.: The evolution of representation in simple cognitive networks. Neural Computation 25, 2079–2107 (2013)

  22. Merrild, J., Rasmussen, M.A., Risi, S.: HyperENTM: Evolving scalable neural Turing machines through HyperNEAT. arXiv preprint arXiv:1710.04748 (2017)

  23. Miller, J.F.: Cartesian genetic programming. In: Cartesian Genetic Programming, pp. 17–34. Springer (2011)

  24. Mouret, J.B., Clune, J.: Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909 (2015)

  25. Olson, R.S., Hintze, A., Dyer, F.C., Knoester, D.B., Adami, C.: Predator confusion is sufficient to evolve swarming behaviour. Journal of The Royal Society Interface 10, 20130305 (2013)

  26. openAI.com: OpenAI Gym Toolkit (2018). URL https://gym.openai.com/envs/. [Online; accessed 1-Jan-2018]

  27. Real, E., Moore, S., Selle, A., Saxena, S., Suematsu, Y.L., Tan, J., Le, Q., Kurakin, A.: Large-scale evolution of image classifiers. arXiv preprint arXiv:1703.01041 (2017)

  28. Russell, S.J., Norvig, P., Canny, J.F., Malik, J.M., Edwards, D.D.: Artificial Intelligence: A Modern Approach, vol. 2. Prentice Hall, Upper Saddle River (2003)

  29. Schaffer, C.: A conservation law for generalization performance. In: Proceedings of the 11th International Conference on Machine Learning, pp. 259–265 (1994)

  30. Schossau, J., Adami, C., Hintze, A.: Information-theoretic neuro-correlates boost evolution of cognitive systems. Entropy 18, 6 (2015)

  31. Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., Dean, J.: Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538 (2017)

  32. Sheneman, L., Hintze, A.: Evolving autonomous learning in cognitive networks. Scientific Reports 7, 16712 (2017)

  33. Smith, A.W.: NEAT-Python (2015). URL http://neat-python.readthedocs.io/en/latest/index.html. [Online; accessed 10-31-2017]

  34. Stanley, K.O., D'Ambrosio, D.B., Gauci, J.: A hypercube-based encoding for evolving large-scale neural networks. Artificial Life 15, 185–212 (2009)

  35. Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10, 99–127 (2002)

  36. Thornton, C., Hutter, F., Hoos, H.H., Leyton-Brown, K.: Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13, pp. 847–855. ACM, New York, NY, USA (2013)

  37. Trujillo, L., Muñoz, L., Naredo, E., Martínez, Y.: NEAT, there's no bloat. In: European Conference on Genetic Programming, pp. 174–185. Springer (2014)

  38. Wikipedia: Inverted pendulum (2018). URL https://en.wikipedia.org/wiki/Inverted_pendulum. [Online; accessed 1-Jan-2018]

  39. Wolpert, D.H.: The lack of a priori distinctions between learning algorithms. Neural Computation 8, 1341–1390 (1996)

  40. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1, 67–82 (1997)

  41. Wolpert, D.H., Macready, W.G.: Coevolutionary free lunches. IEEE Transactions on Evolutionary Computation 9, 721–735 (2005)

  42. Wolpert, D.H., Macready, W.G., et al.: No free lunch theorems for search. Technical Report SFI-TR-95-02-010, Santa Fe Institute (1995)

Acknowledgements

This work was in part funded by the NSF BEACON Center for the Study of Evolution in Action, DBI-0939454. We thank Ken Stanley, Joel Lehman, and Randal Olson for insightful discussions on HyperNEAT and Markov Brain crossovers.

Author information

Correspondence to Arend Hintze.

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Hintze, A., Schossau, J., Bohm, C. (2019). The Evolutionary Buffet Method. In: Banzhaf, W., Spector, L., Sheneman, L. (eds) Genetic Programming Theory and Practice XVI. Genetic and Evolutionary Computation. Springer, Cham. https://doi.org/10.1007/978-3-030-04735-1_2

  • DOI: https://doi.org/10.1007/978-3-030-04735-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04734-4

  • Online ISBN: 978-3-030-04735-1
