Created by W.Langdon from gp-bibliography.bib Revision:1.8806
https://theses.hal.science/tel-04137256
https://theses.hal.science/tel-04137256v1/file/FONTBONNE_Nicolas_theseV2_2023.pdf
We first study systems where agents individually receive a local reward adapted to their actions and must converge towards an optimal collective behaviour. We introduce a distributed evolutionary learning algorithm called Horizontal Information Transfer (HIT) that tackles this particular issue. Agents interact online in their environment and must learn their control policy with an embedded evolutionary algorithm and a parameter-exchange system. This approach has the advantage of coping with the limited computation and communication capabilities of the low-cost robots often used in swarm robotics. We analyse this algorithm's characteristics and learning dynamics on a foraging task.
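The HIT scheme described above combines two on-board mechanisms: an embedded evolutionary step driven by each agent's local reward, and an occasional parameter exchange between agents that meet. The following minimal sketch illustrates this combination on a toy objective; the `Agent` class, the `local_reward` function, the transfer rule (worse performer copies a random subset of the better performer's parameters), and all numeric settings are illustrative assumptions, not the thesis's actual implementation.

```python
import random

DIM = 8
TARGET = [0.5] * DIM  # hypothetical optimum standing in for a foraging task

def local_reward(params):
    # Toy local reward: closer to TARGET is better.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

class Agent:
    def __init__(self, rng):
        self.rng = rng
        self.params = [rng.uniform(-1.0, 1.0) for _ in range(DIM)]
        self.reward = local_reward(self.params)

    def mutate(self, sigma=0.05):
        # Embedded (1+1)-style evolutionary step run on board each agent:
        # keep the Gaussian-perturbed candidate only if it scores better.
        candidate = [p + self.rng.gauss(0.0, sigma) for p in self.params]
        if local_reward(candidate) > self.reward:
            self.params = candidate
            self.reward = local_reward(self.params)

def horizontal_transfer(a, b, rate=0.5):
    # Parameter exchange when two agents meet: the worse performer
    # copies a random subset of the better performer's parameters.
    donor, receiver = (a, b) if a.reward >= b.reward else (b, a)
    for i in range(DIM):
        if receiver.rng.random() < rate:
            receiver.params[i] = donor.params[i]
    receiver.reward = local_reward(receiver.params)

rng = random.Random(0)
swarm = [Agent(rng) for _ in range(20)]
before = sum(a.reward for a in swarm) / len(swarm)
for step in range(200):
    for agent in swarm:
        agent.mutate()
    a, b = rng.sample(swarm, 2)  # a random pairwise encounter
    horizontal_transfer(a, b)
after = sum(a.reward for a in swarm) / len(swarm)
```

Both mechanisms use only a single candidate evaluation and a short message per encounter, which is what makes this style of algorithm compatible with the limited compute and communication budgets of low-cost swarm robots.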
We then study systems where the reward is given globally to the entire team. This evaluation therefore does not necessarily reflect each agent's performance, and computing an individual contribution can be challenging. We introduce a centralized cooperative co-evolutionary algorithm (CCEA) that modulates the number of agents whose policies are modified at each step, trading off evaluation quality against execution speed. This modulation also helps complete tasks where improving team performance requires multiple agents to update in a synchronized manner. We use a multi-robot resource selection problem and a simulated multi-rover exploration problem to provide experimental validation of the proposed algorithms.
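The trade-off described above can be sketched as follows: perturbing one agent's policy at a time makes credit assignment from the global reward cleaner, while perturbing several at once allows synchronized improvements but blurs attribution. The code below is a minimal illustration under assumed details: the coupled `team_reward` objective, the accept/reject loop, and the simple success/failure rule for adapting `k` (the number of agents perturbed per generation) are all hypothetical stand-ins for the thesis's actual modulation mechanism.

```python
import random

N_AGENTS, DIM = 5, 4
TARGET = [1.0] * DIM

def team_reward(policies):
    # Global team-level reward: only the combined behaviour is scored,
    # so individual contributions are not directly observable.
    combined = [sum(p[d] for p in policies) / N_AGENTS for d in range(DIM)]
    return -sum((c - t) ** 2 for c, t in zip(combined, TARGET))

rng = random.Random(1)
policies = [[rng.uniform(-1.0, 1.0) for _ in range(DIM)]
            for _ in range(N_AGENTS)]
best = team_reward(policies)
initial = best
k = 1  # how many agents' policies are perturbed per generation

for gen in range(400):
    chosen = rng.sample(range(N_AGENTS), k)
    trial = [list(p) for p in policies]
    for i in chosen:
        trial[i] = [p + rng.gauss(0.0, 0.1) for p in trial[i]]
    r = team_reward(trial)
    if r > best:
        policies, best = trial, r
        # Success: prefer smaller updates with cleaner credit assignment.
        k = max(1, k - 1)
    else:
        # Stagnation: allow more agents to update in a synchronized way.
        k = min(N_AGENTS, k + 1)
```

Growing `k` on stagnation lets the search escape situations where no single-agent change improves the team score, while shrinking it on success keeps evaluations informative; this mirrors, in spirit, the evaluation-quality versus execution-speed compromise the abstract describes.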
PersonId: 1264018, IdRef: 269779930
Supervisors: Nicolas Bredeche and Nicolas Maudet
Genetic Programming entries for Nicolas Fontbonne