Learning a Formula of Interpretability to Learn Interpretable Formulas
Created by W. Langdon from gp-bibliography.bib Revision: 1.8051
@InProceedings{Virgolin:2020:PPSN,
  author =    "Marco Virgolin and Andrea {De Lorenzo} and
               Eric Medvet and Francesca Randone",
  title =     "Learning a Formula of Interpretability to Learn
               Interpretable Formulas",
  booktitle = "16th International Conference on Parallel Problem
               Solving from Nature, Part II",
  year =      "2020",
  editor =    "Thomas Baeck and Mike Preuss and Andre Deutz and
               Hao Wang and Carola Doerr and Michael Emmerich and
               Heike Trautmann",
  volume =    "12270",
  series =    "LNCS",
  pages =     "79--93",
  address =   "Leiden, The Netherlands",
  month =     "7-9 " # sep,
  publisher = "Springer",
  keywords =  "genetic algorithms, genetic programming, explainable
               artificial intelligence, XAI, interpretable machine
               learning, symbolic regression, multi-objective",
  isbn13 =    "978-3-030-58114-5",
  URL =       "https://arxiv.org/abs/2004.11170",
  DOI =       "10.1007/978-3-030-58115-2_6",
  code_url =  "https://github.com/MaLeLabTs/GPFormulasInterpretability",
  size =      "15 pages",
  abstract =  "Many risk-sensitive applications require Machine
               Learning (ML) models to be interpretable. Attempts to
               obtain interpretable models typically rely on tuning,
               by trial and error, hyper-parameters of model
               complexity that are only loosely related to
               interpretability. We show that it is instead possible
               to take a meta-learning approach: an ML model of
               non-trivial Proxies of Human Interpretability (PHIs)
               can be learned from human feedback, and this model can
               then be incorporated within an ML training process to
               directly optimize for interpretability. We show this
               for evolutionary symbolic regression. We first design
               and distribute a survey aimed at finding a link
               between features of mathematical formulas and two
               established PHIs, simulatability and decomposability.
               Next, we use the resulting dataset to learn an ML model
               of interpretability. Lastly, we query this model to
               estimate the interpretability of evolving solutions
               within bi-objective genetic programming. We perform
               experiments on five synthetic and eight real-world
               symbolic regression problems, comparing against the
               traditional use of solution size minimization. The
               results show that the use of our model leads to
               formulas that are, for the same level of the
               accuracy-interpretability trade-off, either
               significantly more accurate or equally accurate.
               Moreover, the formulas are also arguably more
               interpretable. Given these very positive results, we
               believe that our approach represents an important
               stepping stone for the design of next-generation
               interpretable (evolutionary) ML algorithms.",
  notes =     "v2 28 May 2020 https://arxiv.org/abs/2004.11170 says
               'Accepted at PPSN2020'.
               PPSN2020",
}
Genetic Programming entries for Marco Virgolin, Andrea De Lorenzo, Eric Medvet, Francesca Randone