While humans are used to reasoning about other humans' behavior, they are not readily able to understand the decision processes of artificial agents. This can be harmful in human-robot interaction (HRI) settings, where a user may suspect erroneous or, even worse, intentionally non-cooperative behavior, resulting in reduced acceptance of the robot. To mitigate such negative effects, autonomous robots may be equipped with the ability to adequately explain their behavior. To that end, a robot must be able to (1) robustly detect a user's need for explanation and (2) identify the situation-specific nature of that explanation need. Further, it needs to be endowed with (3) communicative capabilities in order to deliver suitable explanations and ensure sufficient understanding. This extended abstract presents recent work towards endowing a social robot with such qualities and discusses how robots can meet users' explanation needs more adequately.