Automated learning of interpretable models with quantified uncertainty
Created by W.Langdon from
gp-bibliography.bib Revision:1.8098
@Article{BOMARITO:2023:cma,
  author =   "G. F. Bomarito and P. E. Leser and N. C. M. Strauss and
              K. M. Garbrecht and J. D. Hochhalter",
  title =    "Automated learning of interpretable models with
              quantified uncertainty",
  journal =  "Computer Methods in Applied Mechanics and Engineering",
  volume =   "403",
  pages =    "115732",
  year =     "2023",
  ISSN =     "0045-7825",
  DOI =      "10.1016/j.cma.2022.115732",
  URL =      "https://www.sciencedirect.com/science/article/pii/S0045782522006879",
  keywords = "genetic algorithms, genetic programming, interpretable
              machine learning, symbolic regression, Bayesian model
              selection, fractional Bayes factor",
  abstract = "Interpretability and uncertainty quantification in
              machine learning can provide justification for
              decisions, promote scientific discovery, and lead to a
              better understanding of model behavior. Symbolic
              regression provides inherently interpretable machine
              learning, but relatively little work has focused on the
              use of symbolic regression on noisy data and the
              accompanying necessity to quantify uncertainty. A new
              Bayesian framework for genetic-programming-based
              symbolic regression (GPSR) is introduced that uses
              model evidence (i.e., marginal likelihood) to formulate
              replacement probability during the selection phase of
              evolution. Model parameter uncertainty is automatically
              quantified, enabling probabilistic predictions with
              each equation produced by the GPSR algorithm. Model
              evidence is also quantified in this process, and its
              use is shown to increase interpretability, improve
              robustness to noise, and reduce overfitting when
              compared to a conventional GPSR implementation on both
              numerical and physical experiments.",
}
Genetic Programming entries for
Geoffrey F Bomarito
Patrick E Leser
Nolan Craig McGee Strauss
Karl Michael Garbrecht
Jacob Dean Hochhalter