Taylor Polynomial Enhancer Using Genetic Programming for Symbolic Regression
Created by W. Langdon from gp-bibliography.bib Revision:1.7325
@InProceedings{chang:2023:GECCOcomp,
  author =       "Chi-Hsien Chang and Tu-Chin Chiang and Tzu-Hao Hsu and
                  Ting-Shuo Chuang and Wen-Zhong Fang and Tian-Li Yu",
  title =        "Taylor Polynomial Enhancer Using Genetic Programming
                  for Symbolic Regression",
  booktitle =    "Proceedings of the 2023 Genetic and Evolutionary
                  Computation Conference",
  year =         "2023",
  editor =       "Sara Silva and Luis Paquete and Leonardo Vanneschi and
                  Nuno Lourenco and Ales Zamuda and Ahmed Kheiri and
                  Arnaud Liefooghe and Bing Xue and Ying Bi and
                  Nelishia Pillay and Irene Moser and Arthur Guijt and
                  Jessica Catarino and Pablo Garcia-Sanchez and
                  Leonardo Trujillo and Carla Silva and Nadarajen Veerapen",
  pages =        "543--546",
  address =      "Lisbon, Portugal",
  series =       "GECCO '23",
  month =        "15-19 " # jul,
  organisation = "SIGEVO",
  publisher =    "Association for Computing Machinery",
  publisher_address = "New York, NY, USA",
  keywords =     "genetic algorithms, genetic programming, symbolic
                  regression, taylor polynomial: Poster",
  isbn13 =       "9798400701191",
  DOI =          "doi:10.1145/3583133.3590591",
  size =         "4 pages",
abstract = "Unlike most research of symbolic regression with
genetic programming (GP) concerning black-box
optimization, this paper focuses on the scenario where
the underlying function is available, but due to
limited computational resources or product
imperfection, the function needs to be approximated
with simplicity to fit measured data. Taylor polynomial
(TP) is commonly used in such scenario; however, its
performance drops drastically away from the expansion
point. On the other hand, solely using GP does not
utilize the knowledge of the underlying function, even
though possibly inaccurate. This paper proposes using
GP as a TP enhancer, namely TPE-GP, to combine the
advantages from TP and GP. Specifically, TPE-GP
utilizes infinite-order operators to compensate the
power of TP with finite order. Empirically, on
functions that are expressible by TP, TP outperformed
both gplearn and TPE-GP as expected, while TPE-GP
outperformed gplearn due to the use of TP. On functions
that are not expressible by TP but expressible by the
function set (FS), TPE-GP was competitive with gplearn
while both outperformed TP. Finally, on functions that
are not expressible by both TP and FS, TPE-GP
outperformed both TP and gplearn, indicating the hybrid
did achieve the synergy effect from TP and GP.",
  notes =        "GECCO-2023 A Recombination of the 32nd International
                  Conference on Genetic Algorithms (ICGA) and the 28th
                  Annual Genetic Programming Conference (GP)",
}
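The abstract above describes combining a finite-order Taylor polynomial with GP-evolved corrections. The Python sketch below illustrates that general idea only, not the authors' actual TPE-GP algorithm (its infinite-order operators and function set are not reproduced here): a truncated Taylor polynomial is kept as the base model and gplearn's SymbolicRegressor is used to evolve a correction term fitted to the residual. The target function, expansion order, and hyperparameters are illustrative assumptions.

# Sketch of a TP + GP hybrid: Taylor polynomial base model plus a
# GP-evolved residual correction (assumes gplearn and numpy are installed).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)

# "Measured" data: the underlying function sin(x) plus small noise,
# standing in for product imperfection.
X = np.linspace(-4.0, 4.0, 200).reshape(-1, 1)
y = np.sin(X[:, 0]) + rng.normal(scale=0.01, size=X.shape[0])

# Third-order Taylor polynomial of sin(x) about x0 = 0: x - x^3/6.
# It is accurate near 0 but degrades towards the ends of the interval.
def taylor_poly(x):
    return x - x**3 / 6.0

# GP evolves an expression for the residual left over by the Taylor
# polynomial; the hybrid prediction is TP plus the evolved correction.
residual = y - taylor_poly(X[:, 0])
gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=('add', 'sub', 'mul', 'div', 'sin', 'cos'),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X, residual)

hybrid_pred = taylor_poly(X[:, 0]) + gp.predict(X)
print("Taylor-only RMSE :", np.sqrt(np.mean((taylor_poly(X[:, 0]) - y) ** 2)))
print("Hybrid TP+GP RMSE:", np.sqrt(np.mean((hybrid_pred - y) ** 2)))

On this toy problem the hybrid typically reduces the error far from the expansion point while keeping the Taylor polynomial's accuracy near it, which is the kind of synergy the paper reports for TPE-GP.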
Genetic Programming entries for
Chi-Hsien Chang
Tu-Chin Chiang
Tzu-Hao Hsu
Ting-Shuo Chuang
Wen-Zhong Fang
Tian-Li Yu