Abstract
This paper examines the use of state and memory in autonomous vehicle controllers. The controllers are computer programs evolved under the genetic programming paradigm. Specifically, the performance of controllers using implicit state is compared with that of controllers using explicit state on a dynamic obstacle avoidance task. A control group of stateless controllers is used as a baseline. The results indicate that controllers using implicit state outperformed the stateless controllers, while controllers using explicit state proved superior to both other models. A reason for the performance difference is discussed in terms of a trade-off between the number of states representable in a program of fixed size and the number of instructions executed in determining each action.
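The three controller models compared in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the sensor model, action names, and helper functions below are hypothetical, chosen only to contrast a stateless (reactive) controller, one whose state is implicit in its control flow, and one that reads and writes an explicit indexed memory (in the style of Teller, 1994).

```python
def stateless_controller(sensor):
    # Purely reactive: the action depends only on the current sensor reading.
    return "turn" if sensor == "obstacle" else "forward"

def make_implicit_controller():
    # Implicit state: behaviour depends on position within a fixed action
    # sequence, so the "memory" is hidden in the program's control flow
    # rather than stored in addressable cells.
    sequence = ["forward", "forward", "turn"]
    step = {"i": 0}
    def controller(sensor):
        action = "turn" if sensor == "obstacle" else sequence[step["i"] % len(sequence)]
        step["i"] += 1
        return action
    return controller

def make_explicit_controller(cells=4):
    # Explicit state: the program reads and writes an indexed memory,
    # so an evolved program can store arbitrary flags between actions.
    memory = [0] * cells
    def controller(sensor):
        if sensor == "obstacle":
            memory[0] = 1          # write: remember the recent obstacle
            return "turn"
        if memory[0] == 1:         # read: finish turning away once more
            memory[0] = 0
            return "turn"
        return "forward"
    return controller
```

The trade-off named in the abstract is visible even here: adding memory cells enlarges the set of representable states, but every read/write consumes instructions in a fixed-size program, leaving fewer for computing the action itself.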
Copyright information
© 1997 Springer-Verlag Berlin Heidelberg
Cite this paper
Raik, S.E., Browne, D.G. (1997). Evolving state and memory in genetic programming. In: Yao, X., Kim, JH., Furuhashi, T. (eds) Simulated Evolution and Learning. SEAL 1996. Lecture Notes in Computer Science, vol 1285. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0028523
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-63399-0
Online ISBN: 978-3-540-69538-7