Accelerating Genetic Programming using GPUs
Created by W. Langdon from gp-bibliography.bib Revision: 1.8051
@Misc{DBLP:journals/corr/abs-2110-11226,
  author       = "Vimarsh Sathia and Venkataramana Ganesh and
                  Shankara Rao Thejaswi Nanditale",
  title        = "Accelerating Genetic Programming using {GPUs}",
  howpublished = "arXiv",
  volume       = "abs/2110.11226",
  year         = "2021",
  month        = "15 " # oct,
  keywords     = "genetic algorithms, genetic programming, GPU",
  URL          = "https://arxiv.org/abs/2110.11226",
  eprinttype   = "arXiv",
  eprint       = "2110.11226",
  timestamp    = "Thu, 28 Oct 2021 01:00:00 +0200",
  biburl       = "https://dblp.org/rec/journals/corr/abs-2110-11226.bib",
  bibsource    = "dblp computer science bibliography, https://dblp.org",
  size         = "10 pages",
  abstract     = "Genetic Programming (GP), an evolutionary learning
                  technique, has multiple applications in machine
                  learning such as curve fitting, data modeling, feature
                  selection, classification etc. GP has several inherent
                  parallel steps, making it an ideal candidate for GPU
                  based parallelisation. This paper describes a GPU
                  accelerated stack-based variant of the generational GP
                  algorithm which can be used for symbolic regression and
                  binary classification. The selection and evaluation
                  steps of the generational GP algorithm are parallelised
                  using CUDA. We introduce representing candidate
                  solution expressions as prefix lists, which enables
                  evaluation using a fixed-length stack in GPU memory.
                  CUDA based matrix vector operations are also used for
                  computation of the fitness of population programs. We
                  evaluate our algorithm on synthetic datasets for the
                  Pagie Polynomial (ranging in size from 4096 to 16
                  million points), profiling training times of our
                  algorithm with other standard symbolic regression
                  libraries viz. gplearn, TensorGP and KarooGP. In
                  addition, using 6 large-scale regression and
                  classification datasets usually used for comparing
                  gradient boosting algorithms, we run performance
                  benchmarks on our algorithm and gplearn, profiling the
                  training time, test accuracy, and loss. On an NVIDIA
                  DGX-A100 GPU, our algorithm outperforms all the
                  previously listed frameworks, and in particular,
                  achieves average speedups of 119x and 40x against
                  gplearn on the synthetic and large-scale datasets
                  respectively.",
  notes        = "Indian Institute of Technology Madras",
}
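The abstract's central idea, representing each candidate program as a prefix list and evaluating it with a fixed-length stack held in GPU memory, one thread per data point, can be illustrated with a minimal CUDA sketch. This is an assumption-laden illustration, not the paper's code: the token encoding and all names (evaluate_prefix, OP_ADD, OP_MUL, VAR_X, MAX_STACK) are invented here for clarity, and the paper's other components (parallel selection, CUDA matrix-vector fitness computation) are omitted.

// Hedged sketch of stack-based prefix evaluation on the GPU.
// Encoding (illustrative, not from the paper): codes[t] < 0 marks an
// operator or the input variable; codes[t] >= 0 marks a constant whose
// value is stored in tokens[t].
#include <cstdio>
#include <cuda_runtime.h>

#define MAX_STACK 64        // fixed-length per-thread stack, as in the abstract
#define OP_ADD  (-1)
#define OP_MUL  (-2)
#define VAR_X   (-3)        // placeholder for the input variable x

// One thread per data point; the prefix list is scanned right-to-left,
// constants/variables are pushed, operators pop two values and push the result.
__global__ void evaluate_prefix(const float *tokens, const int *codes, int len,
                                const float *x, float *out, int n_points)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_points) return;

    float stack[MAX_STACK];
    int sp = 0;

    for (int t = len - 1; t >= 0; --t) {
        int c = codes[t];
        if (c == VAR_X) {
            stack[sp++] = x[i];               // push the data point
        } else if (c >= 0) {
            stack[sp++] = tokens[t];          // push a literal constant
        } else {
            float a = stack[--sp];            // operator: pop two, push result
            float b = stack[--sp];
            stack[sp++] = (c == OP_ADD) ? a + b : a * b;
        }
    }
    out[i] = stack[0];                        // final value = program output
}

int main()
{
    // Example program (+ (* x x) 2.5), i.e. f(x) = x*x + 2.5, as a prefix list.
    int   h_codes[]  = {OP_ADD, OP_MUL, VAR_X, VAR_X, 0};
    float h_tokens[] = {0.f, 0.f, 0.f, 0.f, 2.5f};
    const int len = 5, n = 8;
    float h_x[n], h_out[n];
    for (int i = 0; i < n; ++i) h_x[i] = (float)i;

    int *d_codes; float *d_tokens, *d_x, *d_out;   // error checks omitted for brevity
    cudaMalloc(&d_codes, len * sizeof(int));
    cudaMalloc(&d_tokens, len * sizeof(float));
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_codes, h_codes, len * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_tokens, h_tokens, len * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);

    evaluate_prefix<<<1, 32>>>(d_tokens, d_codes, len, d_x, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i) printf("f(%g) = %g\n", h_x[i], h_out[i]);
    cudaFree(d_codes); cudaFree(d_tokens); cudaFree(d_x); cudaFree(d_out);
    return 0;
}

In a full generational GP system along the lines described above, a kernel like this would be launched over the whole population each generation, with the remaining steps (selection, fitness reduction via matrix-vector operations) also running on the GPU.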
Genetic Programming entries for
Vimarsh Sathia
Venkataramana Ganesh
Shankara Rao Thejaswi Nanditale