As computer system capabilities increase, the opportunities for human-computer and human-robot conflicts grow in turn. In the past, members of the public seldom came into conflict with highly complex agents, robots, and systems; in the future, such conflicts will be common occurrences. These conflicts vary in how much negotiation is possible, how much time is available for assessment and decision-making, the severity of the consequences, and whether there is an objective way to judge the choices available. Empirical research on how people make decisions in such situations is required so that systems can be designed to interact well with people. With computational agents being integrated into many areas of life, including shopping, driving, and even policing, careful design and philosophical consideration are necessary. Designing agents with conflicts and context in mind will make it possible to avoid or resolve some of these conflicts in more beneficial ways, and to reduce the risks posed by conflicts in critical situations or under time pressure. Designing agents is inherently a value-laden process, and the values embedded in them must therefore be chosen with regard to social wellbeing, in order to advance pro-social aims. This requires both user and expert input informed by empirical research, especially for systems used under time pressure or in high-risk environments.