Natural multimodal communication for human-robot collaboration

Iñaki Maurtua, Izaskun Fernández, Alberto Tellaeche, Johan Kildal, Loreto Susperregi, Aitor Ibarguren, Basilio Sierra

Research output: Contribution to journal › Article › peer-review

60 Citations (Scopus)

Abstract

This article presents a semantic approach for multimodal interaction between humans and industrial robots to enhance the dependability and naturalness of the collaboration between them in real industrial settings. The fusion of several interaction mechanisms is particularly relevant in industrial applications in which adverse environmental conditions might affect the performance of vision-based interaction (e.g. poor or changing lighting) or voice-based interaction (e.g. environmental noise). Our approach relies on the recognition of speech and gestures for the processing of requests, dealing with information that can potentially be contradictory or complementary. For disambiguation, it uses semantic technologies that describe the robot characteristics and capabilities as well as the context of the scenario. Although the proposed approach is generic and applicable in different scenarios, this article explains in detail how it has been implemented in two real industrial cases in which a robot and a worker collaborate in assembly and deburring operations.
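The fusion step the abstract describes — combining speech and gesture requests that may be complementary or contradictory — could look roughly like the following minimal sketch. All names, the confidence scores, and the threshold policy here are illustrative assumptions, not the authors' implementation (which relies on semantic technologies for disambiguation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    """A recognized request from one modality, with a confidence score."""
    command: str          # e.g. "pick", "stop" (hypothetical command set)
    confidence: float     # 0.0 .. 1.0

def fuse(speech: Optional[Hypothesis],
         gesture: Optional[Hypothesis],
         threshold: float = 0.6) -> Optional[str]:
    """Combine speech and gesture hypotheses into one request.

    Complementary inputs (same command) reinforce each other;
    contradictory inputs fall back to the more confident modality,
    or to None (ask the worker to repeat) when neither is reliable.
    """
    if speech and gesture:
        if speech.command == gesture.command:
            # Complementary: both modalities agree on the request.
            return speech.command
        # Contradictory: trust the more confident modality, if reliable enough.
        best = max(speech, gesture, key=lambda h: h.confidence)
        return best.command if best.confidence >= threshold else None
    # Only one modality available (e.g. noise suppressed the other).
    single = speech or gesture
    if single and single.confidence >= threshold:
        return single.command
    return None
```

This kind of fallback is why multimodal fusion helps in adverse industrial conditions: when noise degrades speech or poor lighting degrades gesture recognition, the remaining modality can still carry the request.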

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: International Journal of Advanced Robotic Systems
Volume: 14
Issue number: 4
DOIs
Publication status: Published - Jul 2017

Keywords

  • Collaborative robots
  • Fusion
  • Multimodal interaction
  • Natural communication
  • Reasoning
  • Safe human-robot collaboration
  • Semantic web technologies

