A Robot Should Compensate for Its Mistakes: An Exploration of the Dynamics of Trust Violation and Repair Strategies in Human-Robot Collaboration

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Human-robot interactions are becoming prevalent across a wide range of fields, and trust is essential for efficient collaboration between humans and robots. Robots, just like humans, are bound to make mistakes, leading to violations of trust. Research investigating how to repair this broken trust has produced mixed results. This work investigates the effects of five communicative trust repair strategies (apology, denial, explanation, compensation, and silence) on participants' trust in a robot following two kinds of trust violation (moral and performance). In an online between-subjects experiment, participants engaged in a collaborative task with a robot that repeatedly committed trust-violating acts and responded with a repair message. The findings indicate that moral violations have a more severe impact on moral trust and on willingness to collaborate in the future, and that compensation is the most effective repair strategy, enhancing trust and willingness to collaborate while also reducing discomfort. This work advances the understanding of trust relationships in collaborative HRI contexts.
Original language: English
Article number: 22
Number of pages: 34
Journal: ACM Transactions on Human-Robot Interaction
Volume: 15
Issue number: 1
DOIs
Publication status: Published - 28 Oct 2025

Keywords

  • Human-Robot Interaction
  • Collaborative HRI
  • Trust
  • Trust violation
  • Trust repair
