Collaborative Exploration and Reinforcement Learning between Heterogeneously Skilled Agents in Environments with Sparse Rewards

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

A critical goal in Reinforcement Learning is to minimize the time an agent needs to learn to solve a given environment. In this context, collaborative reinforcement learning refers to improving this learning process through interaction between agents, which usually yields better results than training each agent in isolation. Most studies in this area have focused on homogeneous agents, namely, agents equally skilled at undertaking their task. By contrast, heterogeneity among agents can arise from differences in how they sense the environment and/or the actions they can perform. These differences can hinder both the learning process and the sharing of information between agents. The issue becomes even harder to address in hard-exploration scenarios, where the extrinsic rewards collected from the environment are sparse. This work sheds light on the impact of collaborative learning strategies between heterogeneously skilled agents in hard-exploration scenarios. Our study centers on how to share and exploit knowledge between agents so as to mutually improve their learning procedures, further considering mechanisms to cope with sparse rewards. We assess the performance of these strategies via extensive simulations on modifications of the ViZDoom environment, which allow us to examine their benefits and drawbacks when dealing with agents endowed with different behavioral policies. Our results uncover the problems inherent in ignoring the agents' skill heterogeneity in the knowledge-sharing strategy, and open up a range of research directions aimed at circumventing these issues.
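The abstract lists intrinsic motivation and curiosity among the mechanisms for coping with sparse rewards. As an illustrative sketch only (not the paper's specific method), one common form of intrinsic motivation is a count-based exploration bonus that decays as a state is revisited; the class and parameter names below (`CountBasedBonus`, `beta`) are our own, chosen for illustration.

```python
import math
from collections import defaultdict

class CountBasedBonus:
    """Adds an exploration bonus inversely proportional to the square
    root of the number of times a state has been visited."""

    def __init__(self, beta=0.1):
        self.beta = beta                      # scale of the intrinsic bonus
        self.visit_counts = defaultdict(int)  # per-state visit counter

    def shaped_reward(self, state, extrinsic_reward):
        # Assumes states are hashable (e.g. discretized observations).
        self.visit_counts[state] += 1
        intrinsic = self.beta / math.sqrt(self.visit_counts[state])
        return extrinsic_reward + intrinsic

bonus = CountBasedBonus(beta=0.1)
# First visit to state (0, 0): bonus is beta / sqrt(1) = 0.1
r1 = bonus.shaped_reward((0, 0), extrinsic_reward=0.0)
bonus.shaped_reward((0, 0), 0.0)
bonus.shaped_reward((0, 0), 0.0)
# Fourth visit: bonus decays to beta / sqrt(4) = 0.05
r4 = bonus.shaped_reward((0, 0), extrinsic_reward=0.0)
```

Even when the environment returns zero extrinsic reward for long stretches, the agent still receives a diminishing learning signal for discovering novel states, which is the core idea behind curiosity-driven exploration in sparse-reward settings.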

Original language: English
Title of host publication: IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9780738133669
DOIs
Publication status: Published - 18 Jul 2021
Event: 2021 International Joint Conference on Neural Networks, IJCNN 2021 - Virtual, Shenzhen, China
Duration: 18 Jul 2021 - 22 Jul 2021

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2021-July

Conference

Conference: 2021 International Joint Conference on Neural Networks, IJCNN 2021
Country/Territory: China
City: Virtual, Shenzhen
Period: 18/07/21 - 22/07/21

Keywords

  • collaborative training
  • curiosity
  • Deep Reinforcement Learning
  • heterogeneous agents
  • intrinsic motivation
  • sparse rewards
