Mathematical expression exploration with graph representation and generative graph neural network
@Article{Liu:2025:neunet,
  author =       "Jingyi Liu and Weijun Li and Lina Yu and Min Wu and
                 Wenqiang Li and Yanjie Li and Meilan Hao",
  title =        "Mathematical expression exploration with graph
                 representation and generative graph neural network",
  journal =      "Neural Networks",
  year =         "2025",
  volume =       "187",
  pages =        "107405",
  keywords =     "genetic algorithms, genetic programming, Symbolic
                 regression, Directed acyclic graph, Graph neural
                 network, ANN, Reinforcement learning",
  ISSN =         "0893-6080",
  URL =          "https://www.sciencedirect.com/science/article/pii/S0893608025002849",
  DOI =          "doi:10.1016/j.neunet.2025.107405",
  abstract =     "Symbolic Regression (SR) methods in tree
                 representations have exhibited commendable outcomes
                 across Genetic Programming (GP) and deep learning
                 search paradigms. Nonetheless, the tree representation
                 of mathematical expressions occasionally embodies
                 redundant substructures, whereas representing
                 expressions as computation graphs is more succinct and
                 intuitive. Although graph representation has been
                 adopted in evolutionary strategies within SR, it
                 remains under-explored in deep learning paradigms.
                 Acknowledging the profound advancements of deep
                 learning in tree-centric SR approaches, we advocate
                 addressing SR tasks using the Directed Acyclic Graph
                 (DAG) representation of mathematical expressions,
                 complemented by a generative graph neural network. We
                 name the proposed method Graph-based Deep Symbolic
                 Regression (GraphDSR). We vectorize node types and
                 employ an adjacency matrix to delineate connections.
                 The graph neural network constructs the DAG
                 incrementally, sampling node types and graph
                 connections conditioned on the previous DAG at every
                 step. At each sampling step, a validity check is
                 applied to avoid meaningless samples, and four
                 domain-agnostic constraints are adopted to further
                 streamline the search. This process culminates once a
                 coherent expression emerges. Constants undergo
                 optimisation by the SGD and BFGS algorithms, and
                 rewards refine the graph neural network through
                 reinforcement learning. A comprehensive evaluation
                 across 110 benchmarks underscores the potency of our
                 approach.",
}
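
The abstract's two most concrete ingredients lend themselves to a short illustration: encoding an expression as a DAG via one-hot node-type vectors plus an adjacency matrix, and fitting the expression's constants with BFGS. The Python sketch below is not the authors' GraphDSR code; the node vocabulary, the example expression c0*sin(x) + x, and the synthetic data are illustrative assumptions, and the sampling network, validity checks, domain constraints, and reinforcement-learning loop are omitted.

import numpy as np
from scipy.optimize import minimize

# Hypothetical node-type vocabulary; the paper's actual operator set may differ.
NODE_TYPES = ["x", "const", "add", "mul", "sin"]

def one_hot(node_type):
    # "We vectorize node types": one-hot feature vector per node.
    v = np.zeros(len(NODE_TYPES))
    v[NODE_TYPES.index(node_type)] = 1.0
    return v

# DAG for c0*sin(x) + x, listed in topological order. Unlike a tree, node 0
# ("x") feeds two consumers -- the sharing that makes the DAG more succinct.
nodes = ["x", "const", "sin", "mul", "add"]
node_features = np.stack([one_hot(t) for t in nodes])  # 5 x 5 feature matrix

# Adjacency matrix: adj[i, j] = 1 means node j consumes the output of node i.
adj = np.zeros((len(nodes), len(nodes)))
adj[0, 2] = 1  # x -> sin
adj[1, 3] = 1  # const -> mul
adj[2, 3] = 1  # sin(x) -> mul
adj[3, 4] = 1  # const*sin(x) -> add
adj[0, 4] = 1  # x -> add (reuse of the shared "x" node)

def evaluate(consts, x):
    # Evaluate nodes in order, reading each node's inputs off the adjacency matrix.
    out = [None] * len(nodes)
    for j, t in enumerate(nodes):
        parents = [out[i] for i in range(len(nodes)) if adj[i, j]]
        if t == "x":
            out[j] = x
        elif t == "const":
            out[j] = np.full_like(x, consts[0])
        elif t == "sin":
            out[j] = np.sin(parents[0])
        elif t == "mul":
            out[j] = parents[0] * parents[1]
        elif t == "add":
            out[j] = parents[0] + parents[1]
    return out[-1]

# Synthetic data with a target constant of 2.5; BFGS recovers it, as in
# "constants undergo optimisation by the SGD and BFGS algorithms".
rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, size=200)
y = 2.5 * np.sin(x) + x

def mse(consts):
    return np.mean((evaluate(consts, x) - y) ** 2)

fit = minimize(mse, x0=np.array([1.0]), method="BFGS")
print(f"recovered constant: {fit.x[0]:.4f}")  # approximately 2.5

Here the DAG is built by hand to stand in for what GraphDSR samples step by step; in the paper's setting the node-feature matrix and adjacency matrix would be produced incrementally by the generative graph neural network.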