Real-Time Scheduling for Flexible Job Shop With AGVs Using Multiagent Reinforcement Learning and Efficient Action Decoding
Created by W.Langdon from
gp-bibliography.bib Revision:1.8360
@Article{Li:2025:TSMC,
  author =       "Yuxin Li and Qingzheng Wang and Xinyu Li and
                 Liang Gao and Ling Fu and Yanbin Yu and Wei Zhou",
  title =        "Real-Time Scheduling for Flexible Job Shop With {AGVs}
                 Using Multiagent Reinforcement Learning and Efficient
                 Action Decoding",
  journal =      "IEEE Transactions on Systems, Man, and Cybernetics:
                 Systems",
  year =         "2025",
  volume =       "55",
  number =       "3",
  pages =        "2120--2132",
  month =        mar,
  keywords =     "genetic algorithms, genetic programming, Job shop
                 scheduling, Dynamic scheduling, Real-time systems,
                 Production, Logistics, Conferences, Costs, Vehicle
                 dynamics, Reinforcement learning, Metaheuristics,
                 Automated guided vehicle (AGV), disturbance events,
                 flexible job shop, multiagent reinforcement learning
                 (MARL)",
  ISSN =         "2168-2232",
  DOI =          "doi:10.1109/TSMC.2024.3520381",
  abstract =     "The application of automated guided vehicles (AGVs)
                 greatly improves the production efficiency of
                 workshops. However, machine flexibility and limited
                 logistics equipment increase the complexity of
                 collaborative scheduling, and frequent dynamic events
                 bring uncertainty. Therefore, this article proposes a
                 real-time scheduling method for the dynamic flexible
                 job shop scheduling problem with AGVs using multiagent
                 reinforcement learning (MARL). Specifically, a
                 real-time scheduling framework is proposed in which a
                 multiagent scheduling architecture is designed for
                 achieving task selection, machine allocation, and AGV
                 allocation. Then, an action space and an efficient
                 action decoding algorithm are proposed, which enable
                 agents to explore the high-quality solution space and
                 improve learning efficiency. In addition, a state
                 space with generalisation, a reward function
                 considering machine idle time, and a strategy for
                 handling four disturbance events are designed to
                 minimise the total tardiness cost. Comparison
                 experiments show that the proposed method outperforms
                 priority dispatching rules, genetic programming, and
                 four popular reinforcement learning (RL)-based
                 methods, with performance improvements mostly
                 exceeding 10\%. Furthermore, experiments considering
                 four disturbance events demonstrate that the proposed
                 method has strong robustness and can provide
                 appropriate schedules for uncertain manufacturing
                 systems.",
  notes =        "Also known as \cite{10829984}",
}