abstract = "Over the last decades Genetic Algorithms (GA) and
Genetic Programming (GP) have proven to be efficient
tools for a wide range of applications. However, in
order to solve human-competitive problems they require
large amounts of computational power, particularly
during fitness calculations. In this paper I propose
implementing a massively parallel model in hardware
to speed up GP. This fine-grained
diffusion architecture has the advantage over the
popular Island model of being VLSI-friendly and is
therefore small and portable, without sacrificing
scalability or effectiveness. The diffusion
architecture consists of a large number of independent
processing nodes, connected through an X-net topology,
that evolve a large number of small, overlapping
sub-populations. Every node has its own embedded CPU,
which executes a linear machine code representation of
the individuals. Preliminary simulation results
(low-level VHDL simulation) indicate a performance of
10,000 generations per second (depending on the
application). One node requires 10,000-20,000 gates
including the CPU (also application dependent), which
makes it possible to fit up to 2,000 individuals in one
FPGA (Virtex XC2V10000).",
notes = "ICES-2001
A1 Dalarna University, Sweden sven.eklund@ieee.org",