Modelling and Measuring Trust in Human–Robot Collaboration

Erlantz Loizaga*, Leire Bastida, Sara Sillaurren, Ana Moya, Nerea Toledo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Recognizing trust as a pivotal element for success within Human–Robot Collaboration (HRC) environments, this article examines its nature, exploring the different dimensions of trust, analysing the factors affecting each of them, and proposing alternatives for trust measurement. To do so, we designed an experimental procedure involving 50 participants interacting with a modified ‘Inspector game’ while we monitored their brain, electrodermal, respiratory, and ocular activities. This procedure allowed us to map the dispositional (static, individual baseline) and learned (dynamic, based on prior interactions) dimensions of trust, considering both demographic and psychophysiological aspects. Our findings challenge traditional assumptions regarding the dispositional dimension of trust and establish clear evidence that the first interactions are critical for the trust-building process and the temporal evolution of trust. By identifying the most significant psychophysiological features for trust detection and underscoring the importance of individualized trust assessment, this research contributes to understanding the nature of trust in HRC. Such insights are crucial for enabling more seamless human–robot interaction in collaborative environments.

Original language: English
Article number: 1919
Journal: Applied Sciences (Switzerland)
Volume: 14
Issue number: 5
DOIs
Publication status: Published - Mar 2024

Keywords

  • experimental process
  • Human–Robot Collaboration (HRC)
  • trust dimensions
  • trust dynamics

