Natural multimodal communication for human-robot collaboration

Iñaki Maurtua*, Izaskun Fernández, Alberto Tellaeche, Johan Kildal, Loreto Susperregi, Aitor Ibarguren, Basilio Sierra

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

60 Citations (Scopus)

Abstract

This article presents a semantic approach for multimodal interaction between humans and industrial robots to enhance the dependability and naturalness of the collaboration between them in real industrial settings. The fusion of several interaction mechanisms is particularly relevant in industrial applications in which adverse environmental conditions might affect the performance of vision-based interaction (e.g. poor or changing lighting) or voice-based interaction (e.g. environmental noise). Our approach relies on the recognition of speech and gestures for the processing of requests, dealing with information that can potentially be contradictory or complementary. For disambiguation, it uses semantic technologies that describe the robot characteristics and capabilities as well as the context of the scenario. Although the proposed approach is generic and applicable in different scenarios, this article explains in detail how it has been implemented in two real industrial cases in which a robot and a worker collaborate in assembly and deburring operations.

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: International Journal of Advanced Robotic Systems
Volume: 14
Issue: 4
DOI
Status: Published - Jul 2017
