The fundamental statement of this thesis is that we cannot capture the essence of learning by relying on a small number of algorithms. On the contrary, a whole repertoire of context-dependent learning strategies is needed to acquire domain-specific information by using information that is already available. Because of the complexity and richness of these strategies and their triggering conditions, the obvious escape seems to be to give a system the ability to learn the methods by which it learns, too. A system with such meta-learning capabilities should view every problem as consisting of at least two problems: solving it, and improving the strategies employed to solve it (a minimal illustrative sketch of this two-level view follows below). Of course we do not want to stop at the first meta-level!
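The following sketch is not from the thesis; it merely illustrates, under assumed names (base_strategy, meta_learn, step_size), the two-level view just described: a base level solves individual problems, while a meta-level improves the strategy used to solve them.

```python
# Illustrative sketch only: every problem is treated as (1) solving it and
# (2) improving the strategy employed to solve it. All names are hypothetical.
import random

def base_strategy(problem, step_size):
    """Hypothetical base learner: simple 1-D hill climbing on a scalar problem."""
    x = 0.0
    for _ in range(100):
        candidate = x + random.uniform(-step_size, step_size)
        if problem(candidate) < problem(x):  # keep improvements only
            x = candidate
    return x

def meta_learn(problems, step_sizes):
    """Meta-level: select the strategy parameter that solved past problems best."""
    scores = {s: sum(p(base_strategy(p, s)) for p in problems) for s in step_sizes}
    return min(scores, key=scores.get)  # improved strategy for future problems

if __name__ == "__main__":
    problems = [lambda x, t=t: (x - t) ** 2 for t in (1.0, 2.0, 3.0)]
    best = meta_learn(problems, step_sizes=(0.01, 0.1, 1.0))
    print("selected step size:", best)
```

Here the meta-level is only one step deep; the thesis's point is precisely that this loop need not stop at the first meta-level.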
The only approach to achieving meta-capacity seems to be closing (feeding back) some initial strategies onto themselves so that more complicated and better-suited ones can evolve (see the sketch below). This requires an initial representation of the system that allows it to introspect and manipulate all of its relevant parts. Furthermore, some kind of evolutionary pressure is needed to force the system to organise itself in a way that makes sense in the environment it lives in. The fundamental role of the very general principle called evolution and its deep interrelation with the field of learning will be emphasized; connections to von Weizsaecker's understanding of pragmatic information as well as to Piaget's model of equilibration will be shown.
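A minimal sketch (again illustrative, not the thesis's implementation) of what "closing a strategy onto itself" under evolutionary pressure can mean: in self-adaptive evolution strategies, the mutation operator is applied to its own mutation rate, so the learning strategy evolves under the same selection pressure as the solutions it produces.

```python
# Illustrative sketch: each individual carries a solution x AND its own
# strategy parameter sigma; the strategy mutates itself before mutating x.
import random

def evolve(fitness, generations=200, pop_size=20):
    pop = [(random.uniform(-5, 5), 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for x, sigma in pop:
            new_sigma = sigma * random.lognormvariate(0, 0.2)  # strategy applied to itself
            new_x = x + random.gauss(0, new_sigma)             # ...then applied to the solution
            offspring.append((new_x, new_sigma))
        # selection acts on solutions, dragging the strategies along with them
        pop = sorted(pop + offspring, key=lambda ind: fitness(ind[0]))[:pop_size]
    return pop[0]

if __name__ == "__main__":
    x, sigma = evolve(lambda x: (x - 3.0) ** 2)
    print("best x:", round(x, 3), "evolved step size:", round(sigma, 4))
```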
Two approaches to the goal of learning how to learn will be presented, both inspired by seemingly rather different corners of artificial intelligence and cognitive science: one source of ideas is found in symbol-manipulating learning programs such as EURISKO and CYRANO, the other in work on neural nets, associative networks, genetic algorithms and other weak methods (an analogy to geometric fractals will be drawn). In this context it is argued that object-oriented programming and neural nets have more in common than is usually assumed.
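One way to make the claimed kinship concrete (an assumption of this sketch, not an argument taken from the thesis): a network unit can be modelled as an object whose "messages" are weighted signals and whose "method dispatch" is activation.

```python
# Illustrative sketch: a neural unit as a message-passing object.
class Unit:
    def __init__(self):
        self.inbox = 0.0
        self.out_edges = []  # list of (target Unit, weight)

    def connect(self, target, weight):
        self.out_edges.append((target, weight))

    def receive(self, signal):             # message passing = weighted input
        self.inbox += signal

    def fire(self):                        # activation = responding to messages
        activation = max(0.0, self.inbox)  # simple rectifying threshold
        for target, w in self.out_edges:
            target.receive(activation * w)
        self.inbox = 0.0
        return activation

if __name__ == "__main__":
    a, b, c = Unit(), Unit(), Unit()
    a.connect(c, 0.5); b.connect(c, -1.0)
    a.receive(2.0); b.receive(1.0)
    a.fire(); b.fire()
    print("c activation:", c.fire())  # 2.0*0.5 + 1.0*(-1.0) = 0.0
```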
The second approach, which leads to the notion of self-referential associating learning mechanisms (SALMs, PSALMs), is illustrated by the implementation of a simple self-referential and self-extending language and by a few empirical results obtained by putting pressure on that language to organise itself. However, these results are not suited to showing concrete cases of self-reference. It will be made obvious that the available machine capacity is clearly below the level that would be necessary to make the creation of semantic self-reference likely (on the basis of the second approach and within a reasonable time). Thus this paper is intended to be inspiring in character rather than to offer practical guidance towards universal learning capabilities.
A table of contents is supplied at the end of this work.