Offline reinforcement learning for job-shop scheduling problems

Imanol Echeverria*, Maialen Murua, Roberto Santana

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in deep learning have shown significant potential for solving combinatorial optimization problems in real time. Unlike traditional methods, deep learning can generate high-quality solutions efficiently, which is crucial for applications like routing and scheduling. However, existing approaches such as deep reinforcement learning (RL) and behavioral cloning have notable limitations: deep RL suffers from slow learning, while behavioral cloning relies solely on expert actions, which can lead to generalization issues and neglect of the optimization objective. Offline RL addresses these challenges by learning from fixed datasets while leveraging reward signals, making it especially suitable for constrained combinatorial problems where online exploration is impractical. This paper introduces a novel offline RL method designed for combinatorial optimization problems with complex constraints, where the state is represented as a heterogeneous graph and the action space is variable. Our approach encodes actions in edge attributes and balances expected rewards with the imitation of expert solutions. We demonstrate the effectiveness of this method on job-shop scheduling and flexible job-shop scheduling benchmarks, achieving superior performance compared to state-of-the-art techniques.
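To make the two central ideas of the abstract concrete, the sketch below shows (a) scoring a variable-size action set by reading each candidate action off an edge's attribute vector, and (b) a training loss that blends expert imitation with a reward signal. The abstract does not specify the exact objective, so this uses advantage-weighted imitation, a common offline RL formulation, as one plausible instantiation; `EdgeActionScorer`, `offline_loss`, `beta`, and the `advantage` value are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code) of edge-attribute action
# scoring plus a reward-weighted imitation loss for offline RL.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeActionScorer(nn.Module):
    """Scores each candidate action from its edge attributes, so the
    action space may differ in size from one state to the next."""

    def __init__(self, edge_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(edge_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, edge_attr: torch.Tensor) -> torch.Tensor:
        # edge_attr: (num_candidate_actions, edge_dim) -> logits: (num_actions,)
        return self.mlp(edge_attr).squeeze(-1)


def offline_loss(logits, expert_idx, advantage, beta=1.0):
    """Advantage-weighted imitation (assumed form): clone the dataset
    action, scaling the imitation signal by its estimated advantage."""
    log_probs = F.log_softmax(logits, dim=-1)
    weight = torch.exp(beta * advantage).clamp(max=20.0)  # keep weights bounded
    return -(weight * log_probs[expert_idx])


# Toy usage on one decision point with 5 feasible scheduling actions,
# e.g. candidate (operation, machine) edges in a heterogeneous graph.
torch.manual_seed(0)
scorer = EdgeActionScorer(edge_dim=8)
edge_attr = torch.randn(5, 8)     # attributes of 5 candidate edges
expert_idx = torch.tensor(2)      # action taken in the expert dataset
advantage = torch.tensor(0.4)     # hypothetical normalized reward advantage
loss = offline_loss(scorer(edge_attr), expert_idx, advantage)
loss.backward()
```

Reading the policy off edge attributes, rather than from a fixed output head, is what lets a single network handle the variable action sets that arise as a schedule is built; the advantage weight is one way to keep the optimization objective in the loss rather than imitating expert actions blindly.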

Original language: English
Article number: 113736
Journal: Applied Soft Computing Journal
Volume: 184
DOIs
Publication status: Published - Dec 2025

Keywords

  • Deep neural networks
  • Graph neural networks
  • Heterogeneous data
  • Job-shop scheduling problem
  • Offline reinforcement learning
