Gegelati: Lightweight Artificial Intelligence through Generic and Evolvable Tangled Program Graphs
Created by W.Langdon from
gp-bibliography.bib Revision:1.8081
@InProceedings{Desnos:2021:DASIP,
  author =     "Karol Desnos and Nicolas Sourbier and
                Pierre-Yves Raumer and Olivier Gesny and Maxime Pelcat",
  title =      "Gegelati: Lightweight Artificial Intelligence through
                Generic and Evolvable Tangled Program Graphs",
  booktitle =  "Workshop on Design and Architectures for Signal and
                Image Processing, DASIP",
  year =       "2021",
  pages =      "35--43",
  address =    "Budapest, Hungary",
  month =      jan # " 18-20",
  publisher =  "Association for Computing Machinery",
  keywords =   "genetic algorithms, genetic programming",
  isbn13 =     "9781450389013",
  language =   "en",
  oai =        "oai:HAL:hal-03057652v2",
  identifier = "hal-03057652",
  URL =        "https://arxiv.org/abs/2012.08296",
  URL =        "https://hal.archives-ouvertes.fr/hal-03057652",
  URL =        "https://hal.archives-ouvertes.fr/hal-03057652v2/document",
  URL =        "https://hal.archives-ouvertes.fr/hal-03057652v2/file/dasip.pdf",
  DOI =        "10.1145/3441110.3441575",
  size =       "9 pages",
  abstract =   "Tangled Program Graph (TPG) is a reinforcement
                learning technique based on genetic programming
                concepts. On state-of-the-art learning environments,
                TPGs have been shown to offer comparable competence
                with Deep Neural Networks (DNNs), for a fraction of
                their computational and storage cost. This lightness of
                TPGs, both for training and inference, makes them an
                interesting model to implement Artificial Intelligences
                (AIs) on embedded systems with limited computational
                and storage resources. In this paper, we introduce the
                Gegelati library for TPGs. Besides introducing the
                general concepts and features of the library, two main
                contributions are detailed in the paper: 1/ The
                parallelization of the deterministic training process
                of TPGs, for supporting heterogeneous Multiprocessor
                Systems-on-Chips (MPSoCs). 2/ The support for
                customizable instruction sets and data types within the
                genetically evolved programs of the TPG model. The
                scalability of the parallel training process is
                demonstrated through experiments on architectures
                ranging from a high-end 24-core processor to a
                low-power heterogeneous MPSoC. The impact of
                customizable instructions on the outcome of a training
                process is demonstrated on a state-of-the-art
                reinforcement learning environment.",
}