Offline reinforcement learning for job-shop scheduling problems

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in deep learning have shown significant potential for solving combinatorial optimization problems in real time. Unlike traditional methods, deep learning can generate high-quality solutions efficiently, which is crucial for applications like routing and scheduling. However, existing approaches such as deep reinforcement learning (RL) and behavioral cloning have notable limitations: deep RL suffers from slow learning, while behavioral cloning relies solely on expert actions, which can lead to generalization issues and neglect of the optimization objective. Offline RL addresses these challenges by learning from fixed datasets while leveraging reward signals, making it especially suitable for constrained combinatorial problems where online exploration is impractical. This paper introduces a novel offline RL method designed for combinatorial optimization problems with complex constraints, where the state is represented as a heterogeneous graph and the action space is variable. Our approach encodes actions in edge attributes and balances expected rewards with the imitation of expert solutions. We demonstrate the effectiveness of this method on job-shop scheduling and flexible job-shop scheduling benchmarks, achieving superior performance compared to state-of-the-art techniques.
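The abstract's core idea of balancing expected rewards with imitation of expert solutions can be illustrated with a minimal sketch. The code below is a hypothetical stand-in, not the paper's actual objective: it assumes a discrete set of feasible scheduling actions, policy `logits` over them, per-action return estimates `returns` (standing in for a learned critic), and a trade-off weight `alpha`. The loss combines a cross-entropy imitation term toward the expert's action with a policy-weighted expected-return term.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of scores."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def offline_rl_loss(logits, expert_action, returns, alpha=0.5):
    """Hypothetical objective balancing imitation and expected return.

    logits        -- policy scores over the currently feasible actions
    expert_action -- index of the action chosen by the expert solver
    returns       -- estimated return of each feasible action (assumed
                     to come from the offline dataset / a critic)
    alpha         -- weight on the imitation term (1 - alpha on reward)
    """
    probs = softmax(np.asarray(logits, dtype=float))
    # Imitation term: cross-entropy against the expert's action.
    imitation = -np.log(probs[expert_action] + 1e-12)
    # Reward term: expected return under the current policy.
    expected_return = float(probs @ np.asarray(returns, dtype=float))
    # Minimizing this trades off copying the expert vs. maximizing return.
    return alpha * imitation - (1.0 - alpha) * expected_return
```

For example, a policy whose logits favor the expert's (high-return) action incurs a lower loss than one favoring a low-return alternative, so gradient descent pushes the policy toward expert-like, reward-aware decisions.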

Original language: English
Article number: 113736
Journal: Applied Soft Computing Journal
Volume: 184
DOI
Status: Published - Dec 2025

