Towards Improving Exploration in Self-Imitation Learning using Intrinsic Motivation

Alain Andres*, Esther Villar-Rodriguez, Javier Del Ser

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding > Conference contribution > peer-review

5 Citations (Scopus)

Abstract

Reinforcement Learning has emerged as a strong alternative for solving optimization tasks efficiently. The use of these algorithms depends heavily on the feedback signals provided by the environment, which indicate how good (or bad) the decisions made by the learning agent are. Unfortunately, in a broad range of problems the design of a good reward function is not trivial, so sparse reward signals are adopted instead. The lack of a dense reward function poses new challenges, mostly related to exploration. Imitation Learning has addressed those problems by leveraging demonstrations from experts. In the absence of an expert (and its subsequent demonstrations), an option is to prioritize well-suited exploration experiences collected by the agent in order to bootstrap its learning process with good exploration behaviors. However, this solution depends heavily on the ability of the agent to discover such trajectories in the early stages of its learning process. To tackle this issue, we propose to combine imitation learning with intrinsic motivation, two of the most widely adopted techniques to address problems with sparse rewards. In this work, intrinsic motivation is used to encourage the agent to explore the environment based on its curiosity, whereas imitation learning allows repeating the most promising experiences to accelerate the learning process. This combination is shown to yield improved performance and better generalization in procedurally-generated environments, outperforming previously reported self-imitation learning methods and achieving equal or better sample efficiency with respect to intrinsic motivation in isolation.
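As illustration of the general idea described in the abstract, the sketch below combines a curiosity-style intrinsic bonus with a self-imitation buffer that keeps the highest-return episodes for replay. It is a minimal, hypothetical example, not the authors' implementation: the names (`intrinsic_reward`, `total_reward`, `SelfImitationBuffer`) and the NumPy placeholder used in place of a learned forward model are assumptions for illustration only.

```python
# Minimal sketch (not the paper's implementation): curiosity-style
# intrinsic bonus + self-imitation buffer of the best episodes.
import numpy as np


def intrinsic_reward(predicted_next_obs, next_obs):
    """Curiosity bonus: prediction error of a (learned) forward model.
    Here the forward model is abstracted away as a plain array."""
    return 0.5 * float(np.sum((predicted_next_obs - next_obs) ** 2))


def total_reward(extrinsic, predicted_next_obs, next_obs, beta=0.01):
    """Sparse extrinsic reward plus a scaled curiosity bonus."""
    return extrinsic + beta * intrinsic_reward(predicted_next_obs, next_obs)


class SelfImitationBuffer:
    """Keeps the top-k episodes by return; the agent later imitates
    transitions sampled from these stored episodes."""

    def __init__(self, capacity=10, seed=0):
        self.capacity = capacity
        self.episodes = []  # list of (return, trajectory) pairs
        self.rng = np.random.default_rng(seed)

    def add(self, trajectory, episode_return):
        # Insert the episode and keep only the best `capacity` ones.
        self.episodes.append((episode_return, trajectory))
        self.episodes.sort(key=lambda e: e[0], reverse=True)
        del self.episodes[self.capacity:]

    def sample(self):
        """Uniformly sample one stored high-return trajectory."""
        _, trajectory = self.episodes[self.rng.integers(len(self.episodes))]
        return trajectory


# Toy usage: store two episodes and replay the better one.
buffer = SelfImitationBuffer(capacity=2)
buffer.add(trajectory=[("s0", "a0", 0.0)], episode_return=0.0)
buffer.add(trajectory=[("s0", "a1", 1.0)], episode_return=1.0)
demo = buffer.sample()  # trajectory for the self-imitation update
r = total_reward(extrinsic=0.0,
                 predicted_next_obs=np.zeros(4),
                 next_obs=np.ones(4))  # shaped reward for the RL update
```

In a full agent, the prediction error would come from a trained forward model (as in curiosity-driven exploration methods), and the buffered trajectories would feed an auxiliary self-imitation loss alongside the standard policy update.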

Original language: English
Host publication title: Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022
Editors: Hisao Ishibuchi, Chee-Keong Kwoh, Ah-Hwee Tan, Dipti Srinivasan, Chunyan Miao, Anupam Trivedi, Keeley Crockett
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 890-899
Number of pages: 10
ISBN (electronic): 9781665487689
DOI
Status: Published - 2022
Event: 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022 - Singapore, Singapore
Duration: 4 Dec 2022 - 7 Dec 2022

Publication series

Name: Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022

Conference

Conference: 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022
Country/Territory: Singapore
City: Singapore
Period: 4/12/22 - 7/12/22

Funding

Funders and funder numbers:
Department of Education of the Basque Government: IT1456-22
Eusko Jaurlaritza
