Explainable Approaches for Forecasting Building Electricity Consumption
Created by W.Langdon from gp-bibliography.bib Revision:1.8051
@Article{sakkas:2023:Energies,
  author =    "Nikos Sakkas and Sofia Yfanti and Pooja Shah and
               Nikitas Sakkas and Christina Chaniotakis and
               Costas Daskalakis and Eduard Barbu and Marharyta Domnich",
  title =     "Explainable Approaches for Forecasting Building
               Electricity Consumption",
  journal =   "Energies",
  year =      "2023",
  volume =    "16",
  number =    "20",
  pages =     "Article No. 7210",
  keywords =  "genetic algorithms, genetic programming",
  ISSN =      "1996-1073",
  URL =       "https://www.mdpi.com/1996-1073/16/20/7210",
  DOI =       "doi:10.3390/en16207210",
  abstract =  "Building electric energy is characterised by a
               significant increase in its uses (e.g., vehicle
               charging), a rapidly declining cost of all related data
               collection, and a proliferation of smart grid concepts,
               including diverse and flexible electricity pricing
               schemes. Not surprisingly, an increased number of
               approaches have been proposed for its modelling and
               forecasting. In this work, we place our emphasis on
               three forecasting-related issues. First, we look at
               forecasting explainability, that is, the ability to
               understand and explain to the user what shapes the
               forecast. To this end, we rely on concepts and
               approaches that are inherently explainable, such as the
               evolutionary approach of genetic programming (GP) and
               its associated symbolic expressions, as well as the
               so-called SHAP (SHapley Additive eXplanations) values,
               a well-established model-agnostic approach to
               explainability, especially in terms of feature
               importance. Second, we investigate the impact of the
               training timeframe on forecasting accuracy; this is
               driven by the realization that fast training would
               allow for faster deployment of forecasting in real-life
               solutions. Third, we explore the concept of
               counterfactual analysis on actionable features, that
               is, features that the user can really act upon and
               which therefore present an inherent advantage when it
               comes to decision support. We have found that SHAP
               values can provide important insights into model
               explainability. In our analysis, GP models demonstrated
               superior performance compared to neural network-based
               models (with a 20-30% reduction in Root Mean
               Square Error (RMSE)) and time series models (with a
               20-40% lower RMSE), but a rather questionable
               potential to produce crisp and insightful symbolic
               expressions allowing better insight into the model.
               We have also found, and report here, an important
               potential of counterfactuals built on actionable
               features, and of short training timeframes, especially
               for practical decision support.",
  notes =     "also known as \cite{en16207210}",
}
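The abstract's use of SHAP values for feature-importance-based explainability of a consumption forecaster could look like the following minimal sketch. It is an illustration only, not the authors' pipeline: the synthetic features (temperature, hour of day, occupancy, EV charging), the GradientBoostingRegressor model, and the use of the shap and scikit-learn packages are all assumptions.

```python
# Hypothetical sketch: explain a building electricity-consumption forecaster
# with SHAP values (feature importance), as described in the abstract.
# Synthetic data and model choice are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(15, 8, n),      # outdoor temperature [deg C]
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 2, n),     # occupancy flag
    rng.exponential(2.0, n),   # EV charging load [kW]
])
feature_names = ["temperature", "hour", "occupied", "ev_charging"]

# Synthetic hourly consumption: heating/cooling, occupancy, EV charging, noise
y = 0.4 * np.abs(X[:, 0] - 18) + 3.0 * X[:, 2] + X[:, 3] + rng.normal(0, 0.5, n)

model = GradientBoostingRegressor().fit(X, y)

# SHAP attributes each prediction to the input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global feature importance = mean absolute SHAP value per feature
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")
```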
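The GP route to explainability, evolving a crisp symbolic expression for consumption, might be sketched as below. The paper does not state which GP implementation it uses; gplearn's SymbolicRegressor is an assumed stand-in, and the synthetic data mirrors the SHAP sketch above so the example stays self-contained.

```python
# Hypothetical sketch: GP symbolic regression for consumption forecasting,
# reporting the evolved expression and a held-out RMSE. Implementation
# (gplearn) and data are assumptions, not the paper's setup.
import numpy as np
from gplearn.genetic import SymbolicRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(15, 8, n),      # temperature
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 2, n),     # occupancy flag
    rng.exponential(2.0, n),   # EV charging load
])
y = 0.4 * np.abs(X[:, 0] - 18) + 3.0 * X[:, 2] + X[:, 3] + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gp = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div", "abs"),
    parsimony_coefficient=0.01,   # pressure towards short, readable expressions
    random_state=0,
)
gp.fit(X_train, y_train)

# The evolved model is a human-readable symbolic expression over X0..X3
print("Evolved expression:", gp._program)

rmse = mean_squared_error(y_test, gp.predict(X_test), squared=False)
print(f"Test RMSE: {rmse:.3f}")
```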