An important aspect of transparency is enabling a user to understand what a robot might do in different circumstances. An elderly person might be very unsure about robots, so it is important that her assisted living robot is helpful, predictable (it never does anything that puzzles or frightens her) and, above all, safe. It should be easy for her to learn what the robot does and why, in different circumstances, so that she can build a mental model of her robot. An intuitive approach would be for the robot to explain itself, in natural language, in response to spoken requests such as “Robot, why did you just do that?” or “Robot, what would you do if I fell down?” In this talk, I will outline current work, within project RoboTIPS [1], to apply recent research on artificial theory of mind [2] to the challenge of providing social robots with the ability to explain themselves.