Exploring the Impact of Fault Justification in Human-Robot Trust


Abstract With the growing interest in human-robot collaboration, the development of robotic partners that we can trust has to consider the impact of error situations. In particular, human-robot trust has been identified as mainly affected by the robot's performance and, as such, we believe that in a collaborative setting, trust towards a robotic partner may be compromised after a faulty behaviour. This paper contributes a user study exploring how a technical failure of an autonomous social robot affects trust during a collaborative scenario in which participants play the Tangram game in turns with the robot. More precisely, in a 2x2 (plus control) experiment we investigated two different recovery strategies, justifying the failure or ignoring it, after two different consequences of the failure, either compromising the collaborative task or not. Overall, the results indicate that a faulty robot is perceived as significantly less trustworthy. However, the recovery strategy of justifying the failure was able to mitigate the negative impact of the failure when the consequence was less severe. We also found an interaction effect between the two factors considered. These findings raise new implications for the development of reliable and trustworthy robots in human-robot collaboration.
Year 2018
Keywords cooperation, error recovery, faulty robots, recovery strategy, social human-robot interaction, technical failure, trust; Social Robotic Companions
Authors Filipa Correia, Carla Guerra, Samuel Mascarenhas, Francisco S. Melo, Ana Paiva
Booktitle Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems
Pages 507--513
Series AAMAS '18
Publisher International Foundation for Autonomous Agents and Multiagent Systems
Address Stockholm, Sweden
BibTeX

@inproceedings{correia18,
  author    = {Filipa Correia and Carla Guerra and Samuel Mascarenhas and Francisco S. Melo and Ana Paiva},
  title     = {Exploring the Impact of Fault Justification in Human-Robot Trust},
  booktitle = {Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems},
  series    = {AAMAS '18},
  year      = {2018},
  pages     = {507--513},
  address   = {Stockholm, Sweden},
  publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
  keywords  = {cooperation, error recovery, faulty robots, recovery strategy, social human-robot interaction, technical failure, trust; Social Robotic Companions},
  abstract  = {With the growing interest in human-robot collaboration, the development of robotic partners that we can trust has to consider the impact of error situations. In particular, human-robot trust has been identified as mainly affected by the robot's performance and, as such, we believe that in a collaborative setting, trust towards a robotic partner may be compromised after a faulty behaviour. This paper contributes a user study exploring how a technical failure of an autonomous social robot affects trust during a collaborative scenario in which participants play the Tangram game in turns with the robot. More precisely, in a 2x2 (plus control) experiment we investigated two different recovery strategies, justifying the failure or ignoring it, after two different consequences of the failure, either compromising the collaborative task or not. Overall, the results indicate that a faulty robot is perceived as significantly less trustworthy. However, the recovery strategy of justifying the failure was able to mitigate the negative impact of the failure when the consequence was less severe. We also found an interaction effect between the two factors considered. These findings raise new implications for the development of reliable and trustworthy robots in human-robot collaboration.}
}
