TY - GEN
T1 - Enhanced Generalization Through Prioritization and Diversity in Self-Imitation Reinforcement Learning Over Procedural Environments with Sparse Rewards
AU - Andres, Alain
AU - Zha, Daochen
AU - Del Ser, Javier
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Exploration poses a fundamental challenge in Reinforcement Learning (RL) with sparse rewards, limiting an agent's ability to learn optimal decision-making due to a lack of informative feedback signals. Self-Imitation Learning (self-IL) has emerged as a promising approach for exploration, leveraging a replay buffer to store and reproduce successful behaviors. However, traditional self-IL methods, which rely on high-return transitions and assume singleton environments, struggle to generalize, especially in procedurally generated (PCG) environments. More recent self-IL methods therefore rank which experiences to persist, yet they replay transitions uniformly regardless of their significance and do not address the diversity of the stored demonstrations. In this work, we propose tailored self-IL sampling strategies that prioritize transitions in different ways and extend prioritization techniques to PCG environments. We also address diversity loss through modifications that counteract the impact of generalization requirements and the bias introduced by prioritization techniques. Our experimental analysis, conducted over three PCG sparse-reward environments, including MiniGrid and ProcGen, highlights the benefits of our proposed modifications, achieving new state-of-the-art performance on the MiniGrid-MultiRoom-N12-S10 environment.
AB - Exploration poses a fundamental challenge in Reinforcement Learning (RL) with sparse rewards, limiting an agent's ability to learn optimal decision-making due to a lack of informative feedback signals. Self-Imitation Learning (self-IL) has emerged as a promising approach for exploration, leveraging a replay buffer to store and reproduce successful behaviors. However, traditional self-IL methods, which rely on high-return transitions and assume singleton environments, struggle to generalize, especially in procedurally generated (PCG) environments. More recent self-IL methods therefore rank which experiences to persist, yet they replay transitions uniformly regardless of their significance and do not address the diversity of the stored demonstrations. In this work, we propose tailored self-IL sampling strategies that prioritize transitions in different ways and extend prioritization techniques to PCG environments. We also address diversity loss through modifications that counteract the impact of generalization requirements and the bias introduced by prioritization techniques. Our experimental analysis, conducted over three PCG sparse-reward environments, including MiniGrid and ProcGen, highlights the benefits of our proposed modifications, achieving new state-of-the-art performance on the MiniGrid-MultiRoom-N12-S10 environment.
KW - Diversity
KW - Experience Replay Buffer
KW - Generalization
KW - Reinforcement Learning
KW - Self-Imitation Learning
UR - http://www.scopus.com/inward/record.url?scp=85182923052&partnerID=8YFLogxK
U2 - 10.1109/SSCI52147.2023.10371796
DO - 10.1109/SSCI52147.2023.10371796
M3 - Conference contribution
AN - SCOPUS:85182923052
T3 - 2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023
SP - 1414
EP - 1420
BT - 2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023
Y2 - 5 December 2023 through 8 December 2023
ER -