Collaborative Exploration and Reinforcement Learning between Heterogeneously Skilled Agents in Environments with Sparse Rewards

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

3 Citations (Scopus)

Abstract

A critical goal in Reinforcement Learning is to minimize the time an agent needs to learn to solve a given environment. In this context, collaborative reinforcement learning refers to improving this learning process through interaction between agents, which usually yields better results than training each agent in isolation. Most studies in this area have focused on homogeneous agents, namely, agents equally skilled for undertaking their task. By contrast, heterogeneity among agents can arise from differences in how they sense the environment and/or the actions they can perform. These differences can hinder both the learning process and the sharing of information between agents. The issue becomes even harder to address in hard-exploration scenarios, where the extrinsic rewards collected from the environment are sparse. This work sheds light on the impact of leveraging collaborative learning strategies between heterogeneously skilled agents in hard-exploration scenarios. Our study centers on how to share and exploit knowledge between agents so as to mutually improve their learning procedures, further considering mechanisms to cope with sparse rewards. We assess the performance of these strategies via extensive simulations over modifications of the ViZDoom environment, which allow us to examine their benefits and drawbacks when dealing with agents endowed with different behavioral policies. Our results uncover the inherent problems of ignoring the agents' skill heterogeneity in the knowledge sharing strategy, and open up a manifold of research directions aimed at circumventing these issues.
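The paper itself studies deep RL agents in ViZDoom; purely as a loose, hypothetical illustration of the core idea in the abstract — heterogeneously skilled agents that can only exchange knowledge about the skills they have in common, in a sparse-reward task — here is a toy tabular Q-learning sketch. All names, the chain environment, and the max-based sharing rule are assumptions for illustration, not the authors' method:

```python
import random

random.seed(0)

class ChainEnv:
    """Sparse-reward chain: the only nonzero reward is at the last cell."""
    def __init__(self, length=8):
        self.length = length
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        # 0: step left, 1: step right, 2: jump two cells right (a skill
        # only one of the two agents possesses -> heterogeneity).
        if action == 0:
            self.pos = max(0, self.pos - 1)
        elif action == 1:
            self.pos = min(self.length - 1, self.pos + 1)
        else:
            self.pos = min(self.length - 1, self.pos + 2)
        done = self.pos == self.length - 1
        return self.pos, (1.0 if done else 0.0), done

class QAgent:
    def __init__(self, actions, alpha=0.5, gamma=0.95):
        self.actions, self.alpha, self.gamma = actions, alpha, gamma
        self.q = {}  # (state, action) -> estimated value
    def greedy(self, s):
        return max(self.actions, key=lambda a: self.q.get((s, a), 0.0))
    def update(self, s, a, r, s2, done):
        best = 0.0 if done else max(self.q.get((s2, b), 0.0) for b in self.actions)
        td = r + self.gamma * best - self.q.get((s, a), 0.0)
        self.q[(s, a)] = self.q.get((s, a), 0.0) + self.alpha * td

def share_common_actions(a, b):
    """Naive sharing rule (an assumption, not the paper's): exchange value
    estimates only for actions both agents can execute. Skill-specific
    actions (here, the jump) cannot be transferred directly."""
    common = set(a.actions) & set(b.actions)
    for key in set(a.q) | set(b.q):
        if key[1] in common:
            v = max(a.q.get(key, 0.0), b.q.get(key, 0.0))
            a.q[key] = b.q[key] = v

env = ChainEnv()
walker = QAgent(actions=[0, 1])      # can only step left/right
jumper = QAgent(actions=[0, 1, 2])   # extra jump skill -> heterogeneous pair
for episode in range(300):
    for agent in (walker, jumper):
        s, done = env.reset(), False
        for _ in range(100):
            a = random.choice(agent.actions)  # heavy exploration: reward is sparse
            s2, r, done = env.step(a)
            agent.update(s, a, r, s2, done)
            s = s2
            if done:
                break
    if episode % 10 == 0:
        share_common_actions(walker, jumper)

def greedy_steps(agent, max_steps=20):
    """Steps the greedy policy needs to reach the goal, or None if it fails."""
    s, done, steps = env.reset(), False, 0
    while not done and steps < max_steps:
        s, _, done = env.step(agent.greedy(s))
        steps += 1
    return steps if done else None

print("walker:", greedy_steps(walker), "jumper:", greedy_steps(jumper))
```

Note how the max-based exchange lets the walker inherit value estimates bootstrapped through the jumper's extra skill, which it can never execute — a toy instance of the mismatch the abstract attributes to ignoring skill heterogeneity in the sharing strategy.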

Original language: English
Title of host publication: IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (electronic): 9780738133669
DOI
Publication status: Published - 18 Jul 2021
Event: 2021 International Joint Conference on Neural Networks, IJCNN 2021 - Virtual, Shenzhen, China
Duration: 18 Jul 2021 - 22 Jul 2021

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2021-July

Conference

Conference: 2021 International Joint Conference on Neural Networks, IJCNN 2021
Country/Territory: China
City: Virtual, Shenzhen
Period: 18/07/21 - 22/07/21

Funding

Funders: Eusko Jaurlaritza
Funder number: T1294-19, KK-2020/00049
