This paper addresses the question of whether robots should adhere to the same social norms that apply to human-human interaction when they explain their behavior. Specifically, it investigates how the ascription of intentions to robots' behavior and robots' explainability intertwine in the context of social interactions. We argue that robots should be able to contextually guide users towards adopting the most appropriate interpretative framework by providing explanations that refer to intentions, reasons, and objectives, as well as to different kinds of causes (e.g., mechanical, accidental). We support our argument with use cases grounded in real-world applications.