Abstract
Artificially intelligent machines have to explore their environment, store information about it, and use this information to improve future decision making. The quest is therefore either to provide these systems with internal models of their environment or to imbue them with the ability to create their own models, ideally the latter. These models are mental representations of the environment, and we have previously shown that neuroevolution is a powerful method to create artificially intelligent machines (also referred to as agents) that form such representations. Furthermore, we have shown that representations can be quantified and that this quantity can be used to augment the performance of a genetic algorithm: instead of optimizing for performance alone, one can also positively select for agents that have better representations. This neuroevolutionary approach, which improves performance while letting agents develop representations, works well for Markov Brains, a form of Cartesian Genetic Programming network. Conventional artificial neural networks and their recurrent counterparts, RNNs and LSTMs, are, however, primarily trained by backpropagation rather than evolved, and they behave differently with respect to their ability to form representations. When evolved, RNNs and LSTMs do not form sparse and distinct representations; instead, they “smear” the information about individual concepts of the environment over all nodes in the system, which ultimately makes these systems more brittle and less capable. The question we seek to address here is how we can create systems that evolve meaningful representations while being prevented from smearing them. Genetic programming trees are an interesting computational paradigm in this regard: they can take in a great deal of information through their many leaves, yet they condense their computation into a single root node. We hypothesize that this computational condensation could also prevent the smearing of information. Here, we explore how these tree structures evolve and form representations, and we test to what degree these systems “smear” or condense information.
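To illustrate how such a representation measure can enter selection, the following is a minimal sketch, not the chapter's code: it estimates R as the conditional mutual information I(E; M | S) between environment states E and memory states M given sensor states S, following the definition in Marstaller et al. [19] (see note 1), and adds it as a weighted second term to task fitness. All states are assumed to be discrete and sampled over an agent's lifetime; the names and the weight alpha are illustrative.

    from collections import Counter
    from math import log2

    def entropy(samples):
        # Shannon entropy (in bits) of a sequence of hashable symbols.
        n = len(samples)
        return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

    def representation(E, S, M):
        # R = I(E; M | S) = H(E,S) + H(M,S) - H(S) - H(E,M,S):
        # what memory M knows about the environment E beyond the sensors S.
        return (entropy(list(zip(E, S))) + entropy(list(zip(M, S)))
                - entropy(list(S)) - entropy(list(zip(E, M, S))))

    def augmented_fitness(task_score, E, S, M, alpha=0.5):
        # Hypothetical augmented objective: task performance plus weighted R.
        return task_score + alpha * representation(E, S, M)

    # Example: discrete states sampled at each time step of a lifetime.
    E = [0, 0, 1, 1, 0, 1]  # environment feature the agent should track
    S = [0, 1, 1, 0, 0, 1]  # sensor reading
    M = [0, 0, 1, 1, 0, 1]  # hidden/memory state
    print(augmented_fitness(10.0, E, S, M))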
Notes
1. Note that in computer science the term representation sometimes also refers to the structure of data, or to how, for example, an algorithm is encoded. We mean neither; we use the term representation for the information a cognitive system has about its environment, as defined in Marstaller et al. [19]. The term, as we use it, is adapted from the fields of psychology and philosophy.
2. Inspired by the multiple trees used to encode memory in Langdon [18]; a sketch of this multi-tree memory follows these notes.
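To make the multi-tree memory idea of note 2 concrete, here is a minimal sketch under our own illustrative assumptions, not the chapter's implementation: a forest in which some trees compute actions while the remaining trees each write one hidden (memory) value, and those hidden values are fed back as extra leaves on the next time step. Plain Python functions stand in for evolved trees; all names are hypothetical.

    def step(forest, sensors, hidden):
        # Evaluate every tree on (current sensors + previous hidden values);
        # the first trees yield actions, the rest yield the new hidden state.
        inputs = list(sensors) + list(hidden)
        outputs = [tree(inputs) for tree in forest]
        n_actions = len(forest) - len(hidden)
        return outputs[:n_actions], outputs[n_actions:]

    # Toy forest over 2 sensors (leaves 0-1) and 2 hidden leaves (leaves 2-3):
    forest = [
        lambda x: x[0] + x[3],        # action tree: sensor 0 plus hidden 1
        lambda x: max(x[1], x[2]),    # hidden tree 0: running max of sensor 1
        lambda x: 0.9 * x[3] + x[0],  # hidden tree 1: leaky sum of sensor 0
    ]
    actions, hidden = step(forest, sensors=[1.0, 0.5], hidden=[0.0, 0.0])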
References
Banzhaf, W., Nordin, P., Keller, R.E., Francone, F.D.: Genetic Programming - An Introduction. Morgan Kaufmann, San Francisco CA (1998)
Beer, R.D.: The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11(4), 209–243 (2003)
Beer, R.D., et al.: Toward the evolution of dynamical neural networks for minimally cognitive behavior. In: From Animals to Animats, vol. 4, pp. 421–429 (1996)
Bengio, Y., Frasconi, P.: An input output HMM architecture. In: Advances in Neural Information Processing Systems, pp. 427–434 (1995)
Bohm, C., CG, N., Hintze, A.: MABE (modular agent based evolver): A framework for digital evolution research. In: Proceedings of the European Conference on Artificial Life (2017)
Brooks, R.A.: Intelligence without representation. Artificial Intelligence 47(1–3), 139–159 (1991)
Clune, J., Stanley, K.O., Pennock, R.T., Ofria, C.: On the performance of indirect encoding across the continuum of regularity. IEEE Transactions on Evolutionary Computation 15(3), 346–367 (2011)
Deb, K.: Multi-objective optimization using evolutionary algorithms. John Wiley & Sons (2001)
Floreano, D., Dürr, P., Mattiussi, C.: Neuroevolution: From architectures to learning. Evolutionary Intelligence 1(1), 47–62 (2008)
Handley, S.G.: The automatic generation of plans for a mobile robot via genetic programming with automatically defined functions. In: Advances in Genetic Programming, vol. 18, pp. 391–407. MIT Press (1994)
Hintze, A., Edlund, J.A., Olson, R.S., Knoester, D.B., Schossau, J., Albantakis, L., Tehrani-Saleh, A., Kvam, P., Sheneman, L., Goldsby, H., Bohm, C., Adami, C.: Markov brains: A technical introduction. arXiv preprint arXiv:1709.05601 (2017)
Hintze, A., Kirkpatrick, D., Adami, C.: The structure of evolved representations across different substrates for artificial intelligence. In: Artificial Life Conference Proceedings, pp. 388–395. MIT Press (2018)
Hintze, A., Schossau, J., Bohm, C.: The evolutionary Buffet method. In: Genetic Programming Theory and Practice XVI, pp. 17–36. Springer (2019). https://link.springer.com/chapter/10.1007/978-3-030-04735-1_2
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997)
Kirkpatrick, D., Hintze, A.: The role of ambient noise in the evolution of robust mental representations in cognitive systems. In: Artificial Life Conference Proceedings, vol. 31, pp. 432–439 (2019). https://doi.org/10.1162/isal_a_00198
Koza, J.R.: Genetic programming as a means for programming computers by natural selection. Statistics and Computing 4(2), 87–112 (1994)
Koza, J.R., Rice, J.P.: Automatic programming of robots using genetic programming. In: AAAI, vol. 92, pp. 194–207 (1992)
Langdon, W.B.: Evolving data structures with genetic programming. In: Int. Conference on Genetic Algorithms, pp. 295–302 (1995)
Marstaller, L., Hintze, A., Adami, C.: The evolution of representation in simple cognitive networks. Neural Computation 25(8), 2079–2107 (2013)
Merritt, D.J., Brannon, E.M.: Nothing to it: Precursors to a zero concept in preschoolers. Behavioural Processes 93, 91–97 (2013)
Merritt, D.J., Rugani, R., Brannon, E.M.: Empty sets as part of the numerical continuum: conceptual precursors to the zero concept in rhesus monkeys. Journal of Experimental Psychology: General 138(2), 258 (2009)
Miller, J.F.: Cartesian Genetic Programming. Springer (2011)
Miller, J.F.: Cartesian genetic programming. In: Cartesian Genetic Programming, pp. 17–34. Springer (2011)
Nieder, A.: Honey bees zero in on the empty set. Science 360(6393), 1069–1070 (2018)
Nordin, P.: A compiling genetic programming system that directly manipulates the machine code. In: Advances in Genetic Programming, vol. 1, pp. 311–331. MIT Press (1994)
Nordin, P., Banzhaf, W.: Genetic programming controlling a miniature robot. In: Working Notes for the AAAI Symposium on Genetic Programming, pp. 61–67. AAAI, Cambridge, MA (1995)
Nordin, P., Banzhaf, W.: An on-line method to evolve behavior and to control a miniature robot in real time with genetic programming. Adaptive Behavior 5(2), 107–140 (1997)
Nordin, P., Banzhaf, W.: Real time control of a Khepera robot using genetic programming. Control and Cybernetics 26, 533–562 (1997)
Reynolds, C.W.: An evolved, vision-based behavioral model of coordinated group motion. In: From Animals to Animats, vol. 2, pp. 384–392 (1993)
Reynolds, C.W.: Evolution of obstacle avoidance behavior: using noise to promote robust solutions. In: Advances in Genetic Programming, vol. 1, pp. 221–241. MIT Press, Cambridge, MA (1994)
Russell, S.J., Norvig, P., Canny, J.F., Malik, J.M., Edwards, D.D.: Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River (2003)
Schossau, J., Adami, C., Hintze, A.: Information-theoretic neuro-correlates boost evolution of cognitive systems. Entropy 18(1), 6 (2015)
Spector, L., Robinson, A.: Genetic programming and autoconstructive evolution with the push programming language. Genetic Programming and Evolvable Machines 3(1), 7–40 (2002)
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1), 1929–1958 (2014)
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10(2), 99–127 (2002)
Teller, A.: The evolution of mental models. In: Advances in Genetic Programming, pp. 199–220. MIT Press (1994)
Bäck, T.: Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York (1996)
Yao, X.: Evolving artificial neural networks. Proceedings of the IEEE 87(9), 1423–1447 (1999)
Zhou, A., Qu, B.Y., Li, H., Zhao, S.Z., Suganthan, P.N., Zhang, Q.: Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm and Evolutionary Computation 1(1), 32–49 (2011)
Acknowledgements
We thank Stephan Winkler for insightful discussions on hidden states in genetic programming trees, and for implementing the prototype for GP-Forests in EvoSphere.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this chapter
Kirkpatrick, D., Hintze, A. (2020). The Evolution of Representations in Genetic Programming Trees. In: Banzhaf, W., Goodman, E., Sheneman, L., Trujillo, L., Worzel, B. (eds) Genetic Programming Theory and Practice XVII. Genetic and Evolutionary Computation. Springer, Cham. https://doi.org/10.1007/978-3-030-39958-0_7
Print ISBN: 978-3-030-39957-3
Online ISBN: 978-3-030-39958-0