Multimodal human-robot interaction framework for a personal robot

  • Javi F. Gorostiza
  • Ramón Barber
  • Alaa M. Khamis
  • María Malfaz
  • Rakel Pacheco
  • Rafael Rivas
  • Ana Corrales
  • Elena Delgado
  • Miguel A. Salichs

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

58 Citations (Scopus)

Abstract

This paper presents a framework for multimodal human-robot interaction. The proposed framework is being implemented in a personal robot called Maggie, developed at the RoboticsLab of the University Carlos III of Madrid for social interaction research. The control architecture of this personal robot is a hybrid control architecture called AD (Automatic-Deliberative) that incorporates an Emotion Control System (ECS). Maggie's main goal is to establish a peer-to-peer relationship with humans. To achieve this goal, a set of human-robot interaction skills is developed based on the proposed framework. The human-robot interaction skills involve tactile, visual, remote voice, and sound modes. Multimodal fusion and synchronization are also presented in this paper.
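The multimodal fusion the abstract mentions can be pictured as grouping percepts from the different interaction modes (tactile, visual, voice, sound) into a single combined interpretation. The sketch below is purely illustrative and not taken from the paper: the `ModeEvent` type, the `fuse_events` function, and the one-second fusion window are hypothetical names and parameters, assuming a simple time-window fusion strategy.

```python
from dataclasses import dataclass

@dataclass
class ModeEvent:
    mode: str         # e.g. "tactile", "visual", "voice", "sound"
    payload: str      # what the mode perceived
    timestamp: float  # seconds since interaction start

def fuse_events(events, window=1.0):
    """Group mode events whose timestamps fall within the same
    time window, yielding one multimodal percept per window.
    (Hypothetical fusion rule, not the paper's actual algorithm.)"""
    fused, current = [], []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if current and ev.timestamp - current[0].timestamp > window:
            fused.append({e.mode: e.payload for e in current})
            current = []
        current.append(ev)
    if current:
        fused.append({e.mode: e.payload for e in current})
    return fused

events = [
    ModeEvent("voice", "hello", 0.2),
    ModeEvent("tactile", "head touched", 0.5),
    ModeEvent("visual", "face detected", 2.1),
]
percepts = fuse_events(events)
# The voice and tactile events fall in one window and fuse into a
# single percept; the later visual event forms a percept of its own.
```

In a real skill-based architecture like AD, each interaction skill would publish such events asynchronously, and the fusion component would also handle synchronization across modes; this sketch only shows the grouping step.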

Original language: English
Title of host publication: Proceedings - RO-MAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication
Pages: 39-44
Number of pages: 6
DOIs
Publication status: Published - 2006
Externally published: Yes
Event: RO-MAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication - Hatfield, United Kingdom
Duration: 6 Sept 2006 – 8 Sept 2006

Publication series

Name: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication

Conference

Conference: RO-MAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication
Country/Territory: United Kingdom
City: Hatfield
Period: 6/09/06 – 8/09/06

