abstract = "While Artificial Intelligence (AI) is making giant
steps, it is also raising concerns about its
trustworthiness, due to the fact that widely-used
black-box models cannot be exactly understood by
humans. One of the ways to improve human trust in
AI is to use interpretable AI models, i.e., models that
can be thoroughly understood by humans, and thus
trusted. However, interpretable AI models are not
typically used in practice, as they are thought to be
to perform worse than black-box models. This is more
evident in Reinforcement Learning, where relatively
little work addresses the problem of performing
Reinforcement Learning with interpretable models. We
address this gap by proposing methods for Interpretable
Reinforcement Learning. For this purpose, we optimize
Decision Trees by combining Reinforcement Learning with
Evolutionary Computation techniques, which allows us to
overcome some of the challenges tied to optimizing
Decision Trees in Reinforcement Learning scenarios. The
experimental results show that these approaches are
competitive with the state of the art while being
significantly easier to interpret. Finally, we show the
practical importance of Interpretable AI by digging
into the inner workings of the solutions obtained.",
notes = "Defended on 27-apr-2023. In English.
p111 7.2 'Neuro-Symbolic Analysis of COVID-19 Patients
Data'