People often have reciprocal habits, almost automatically responding to others' actions. A robot that interacts with humans may also reciprocate, in order to come across as natural and be predictable. We aim to facilitate decision support that advises on utility-efficient habits in these interactions. To this end, given a model of reciprocation behavior with parameters that represent habits, we define a game that describes which habit one should adopt to increase the utility of the process. This paper concentrates on two agents. In the model we use, an agent's action is a weighted combination of the other's previous actions (reacting) and either i) her innate kindness, or ii) her own previous action (inertia). In order to analyze what happens when everyone reciprocates rationally, we define a game where an agent may choose her habit, which is either her reciprocation attitude (i or ii), or both her reciprocation attitude and her weight. We characterize the Nash equilibria of these games and consider their efficiency. We find that the less kind agents should adjust to the kinder agents to improve both their own utility and the social welfare. This constitutes advice on improving cooperation and explains real-life phenomena in human interaction, such as the societal benefit of adopting the behavior of the kindest person, or becoming more polite as one grows up.
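To make the dynamics concrete, the following is a minimal simulation sketch of the two-agent reciprocation process described above, under assumed simplifications: each agent's next action is a weighted combination of only the other's most recent action and either her innate kindness (attitude i, "fixed") or her own previous action (attitude ii, "inertia"). The function names, the specific utility (sum of actions received from the other agent), and the initial actions are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def simulate(k, r, attitudes, steps=50, x0=None):
    """Simulate two reciprocating agents (hypothetical parameterization).

    k[i]         : innate kindness of agent i
    r[i]         : reciprocation weight of agent i (habit parameter)
    attitudes[i] : 'fixed'   -> mix the other's last action with own kindness
                   'inertia' -> mix the other's last action with own last action
    Returns the sequence of action pairs over time.
    """
    x = np.array(x0 if x0 is not None else k, dtype=float)
    history = [x.copy()]
    for _ in range(steps):
        nxt = np.empty(2)
        for i in range(2):
            j = 1 - i
            anchor = k[i] if attitudes[i] == 'fixed' else x[i]
            nxt[i] = r[i] * x[j] + (1 - r[i]) * anchor  # weighted combination
        x = nxt
        history.append(x.copy())
    return np.array(history)

def utility(history, i):
    """Assumed utility of agent i: the sum of actions received from the other agent."""
    return history[:, 1 - i].sum()

# Example: a kinder agent (k=0.9) interacting with a less kind one (k=0.2).
hist = simulate(k=[0.9, 0.2], r=[0.5, 0.5], attitudes=['fixed', 'inertia'])
print("utilities:", utility(hist, 0), utility(hist, 1))
print("social welfare:", utility(hist, 0) + utility(hist, 1))
```

Varying the attitudes and weights in such a simulation illustrates the game analyzed in the paper: each agent's choice of habit changes where the action sequences converge, and hence both individual utilities and the social welfare.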