AddGBoost: A gradient boosting-style algorithm based on strong learners
@Article{Sipper:2022:mlwa,
  author =   "Moshe Sipper and Jason H. Moore",
  title =    "AddGBoost: A gradient boosting-style algorithm based
              on strong learners",
  journal =  "Machine Learning with Applications",
  year =     "2022",
  volume =   "7",
  pages =    "100243",
  keywords = "genetic algorithms, genetic programming, gradient
              boosting, regression",
  ISSN =     "2666-8270",
  URL =      "https://www.sciencedirect.com/science/article/pii/S2666827021001225",
  DOI =      "10.1016/j.mlwa.2021.100243",
  size =     "4 pages",
  abstract = "We present AddGBoost, a gradient boosting-style
              algorithm, wherein the decision tree is replaced by a
              succession of (possibly) stronger learners, which are
              optimized via a state-of-the-art hyperparameter
              optimizer. Through experiments over 90 regression
              datasets we show that AddGBoost emerges as the top
              performer for 33\% (with 2 stages) up to 42\%
              (with 5 stages) of the datasets, when compared with
              seven well-known machine-learning algorithms:
              KernelRidge, LassoLars, SGDRegressor, LinearSVR,
              DecisionTreeRegressor, HistGradientBoostingRegressor,
              and LGBMRegressor.",
  notes =    "Not GP?",
}
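The abstract describes a stage-wise additive scheme: each stage fits a (possibly strong) learner to the residuals left by the preceding stages. The following is a minimal, hypothetical Python sketch of that idea, not the authors' released implementation; the class name and learner choices are illustrative, and the per-stage hyperparameter optimization mentioned in the abstract is omitted.

    import numpy as np
    from sklearn.base import clone
    from sklearn.datasets import make_regression
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.linear_model import SGDRegressor
    from sklearn.svm import LinearSVR

    class AddGBoostSketch:
        """Gradient boosting-style additive model: one strong learner per stage."""

        def __init__(self, learners):
            self.learners = learners  # one unfitted regressor per stage

        def fit(self, X, y):
            self.fitted_ = []
            residual = np.asarray(y, dtype=float)
            for learner in self.learners:
                # Fit a fresh clone of this stage's learner to the current residuals.
                model = clone(learner).fit(X, residual)
                residual = residual - model.predict(X)
                self.fitted_.append(model)
            return self

        def predict(self, X):
            # The prediction is the sum of all stage outputs.
            return np.sum([m.predict(X) for m in self.fitted_], axis=0)

    # Usage on synthetic data, with three stages of non-tree learners:
    X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)
    model = AddGBoostSketch([KernelRidge(alpha=1.0), LinearSVR(max_iter=5000),
                             SGDRegressor(random_state=0)])
    preds = model.fit(X, y).predict(X)

In the paper itself, the learner used at each stage is also selected and tuned by a hyperparameter optimizer rather than fixed in advance as done here.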