This paper addresses ethical and epistemological challenges of data security, information protection, and privacy in social robotics. By analyzing the characteristics of asymmetric “new wars,” including cyberwar and information warfare, the IT-based problems of social robots are elaborated as Social Robotic Information- and Cyberwar. It is argued that a security policy based on tacit knowledge (Tacit Security) is one possible answer to current questions in robot ethics and robot philosophy concerning data security.
The Aristotelian concept of phronesis captures the kind of situated knowledge that is needed for us to understand and act morally in the specific situations in which we find ourselves. Against this background, it is discussed whether an ‘as if’ version of phronesis, understood as situational awareness, might enable us to design a virtuous robot with ‘as if’ capabilities of the phronimos. It is argued that we might eventually see this kind of virtuous robot, but its ‘as if’ qualities would not be sufficient for it to count as an ethical agent, since phronesis is presumably not computationally tractable.
In part one of this paper I turn to Don Ihde to show how a technological object can occupy the role that “the other” plays for Hegel in his phenomenology, since the structural features of Hegel's analyses of self-other relations can be found in Ihde's analyses of human-technology relations. I then turn to Singer's Wired for War and Gertz's Philosophy of War and Exile. Using these texts I show how the way soldiers treat robots, naming them, protecting them, and even risking their lives to save them, illustrates Hegel's central claim: ethical life develops through the discovery that to recognize others (whether human or technological) is to recognize ourselves, and that to misrecognize others is to misrecognize ourselves. I conclude by offering suggestions as to how this understanding of ethical life as based on recognition and misrecognition can be applied to design ethics.
We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to adapt arguments concerning collective responsibility to questions of robot responsibility. On the basis of Alfred R. Mele's history-sensitive account of autonomy and responsibility, it can be argued that even if robots were to have all the capacities usually required of moral agency, their history as products of engineering would undermine their autonomy and thus their responsibility.
Attitudes towards robots in elderly care are systematically sceptical: a central worry is that a robot caretaker will rob the elderly of their human contacts. Are such worries justified? Will robotics change something relevant concerning the human dignity of elders? Are some specific robots especially dubious, or can robotics, as a generic technology, change the practices of care so that human dignity would be under threat? In this paper, we ask what human dignity entails in elderly care, and what kinds of threats and possibilities social robotics may bring with it. Earlier studies have approached this question, for example, in light of the capability theories of human life, consistent with human dignity. We take as our starting point theories of the recognition of persons, which distinguish three main kinds of needs for recognition: the need to be respected as a person, the need to feel esteemed as a contributor to the common good, and the need to be loved.
This paper examines the potential for structural discrimination to be woven into the fabric of autonomous vehicle development, a potential that remains underexplored and undiscussed. The prospect of structural discrimination arises from the coordinated modes of autonomous vehicle behaviour that are prescribed by the vehicle's code. This leads to the potential for individuated outcomes to be networked and thereby multiplied consistently across any number of vehicles implementing such code. The aggregated effects of such algorithmic policy preferences will thus culminate in the reallocation of benefits and burdens to certain categories of persons in a relatively stable manner. The spectre of implicit structural discrimination is therefore raised by the orderly and stable rearrangement of biases that may be expressed by the controlling algorithm. The potential for a much more pernicious form of active structural discrimination looms with the possibility of crash-optimisation impulses in which a protective shield is cast over those individuals whom society may have a vested interest in prioritising or safeguarding. A stark dystopian scenario is introduced to sketch the contours whereby personal beacons signal individual identity, and potentially relative worth, to autonomous vehicles engaging in a crash damage calculus. At the risk of introducing these ideas into the development of autonomous vehicles, this paper hopes to spark a debate that forecloses these eventualities.
This paper examines ethical issues related to the use of robots as companions or caregivers for older adults. While so-called doom scenarios that depict myriad negative effects of increased robot presence and expanded human-robot interaction (HRI) raise engaging concerns, this paper seeks to defuse some of those concerns and examine the potential impact of an increased robot presence and HRI on human-human interaction (HHI). Dystopian scenarios that focus almost exclusively on HRI neglect to acknowledge that humans will likely continue to interact, perhaps in novel ways, and fail to incorporate the possible beneficial effects of robot presence on HHI. The importance of supporting HHI must be kept in view when speculating about the future of HRI.
Using Roger Crisp's arguments for well-being as the ultimate source of moral reasoning, this paper argues that there are no ultimate, non-derivative reasons to program robots with moral concepts such as moral obligation, moral wrongness, or moral rightness. Although these moral concepts should not be used to program robots, they should not be abandoned by humans, since there are still reasons to keep using them: as an assessment of the agent, to take a stand, or to motivate and reinforce behaviour. Because robots are completely rational agents, they do not need these additional motivations; a concept of what promotes well-being suffices. How a robot knows which action promotes well-being to the greatest degree is still up for debate, but a combination of top-down and bottom-up approaches seems to be the best way.
Eduard Fosch-Villaronga, Alex Barco, Beste Özcan, Jainendra Shukla
Socially Assistive Robotics (SAR) aims to provide robot-assisted therapy for physical as well as cognitive rehabilitation. The paper analyzes two distinct use cases of cognitive rehabilitation therapy, one involving children with Traumatic Brain Injury (TBI) and a second involving individuals with Intellectual Disability (ID), and raises concerns regarding emotional adaptation, personalization, design, and the ethical, legal, and societal (ELS) issues of human-robot interaction in such cases. The paper's aim is to provide some guidance on how social robots should be designed in order to accommodate emotions in HRI as well as to respect the rights of persons with disabilities. We argue that addressing the concerns highlighted here is critically important for empowering robots with empathetic behavior and for delivering effective cognitive rehabilitation therapies.
In order to investigate whether robots, or, more generally, artificial systems, can have emotions, I shed light on Giovanna Colombetti's enactive theory of emotions, because the enactive approach, especially the role it grants biology, seems to conflict with the idea of emotional artificial systems. I examine some points of contact between the enactive approach to emotions and artificial systems, first and foremost the enactive notions of autonomy and “sense-making”. In what way may these concepts be realized in artificial systems as well? This entails the question of what living systems are and what distinguishes them from artificial systems, in general but especially with regard to their emotions. Having analyzed these concepts, is there any case left in which we can speak of genuine emotions of an artificial system? If not, what kind of emotions may artificial systems then have?
Jaana Parviainen, Lina van Aerschot, Tuomo Särkikoski, Satu Pekkarinen, Helinä Melkas, Lea Hennala
This paper seeks to answer the question of how the interactive capabilities of social robots are related to their embodied character. Contributing to the discussions on the role of physical appearance in robotics, we apply a phenomenological theory of the body to develop a new understanding of the robot body. Drawing on Edmund Husserl's phenomenological distinction between the material and the lived body, we consider the robot body as “double” since it entails both objective and subjective aspects. We assume that the expressivity of “double bodies” can be seen as central to understanding the phenomenon of aliveness in social robots.
Eduard Fosch-Villaronga, Vishwas Kalipalya-Mruthyunjaya
The aim of this paper is to mold and materialize the future of learning. The paper introduces a Modular Cognitive Educator System (MCES), which aims to help people learn the cognitive and ethical capabilities needed to face one of the indirect impacts of the robot revolution, namely its impact on the educational system. MCES stresses the importance of an agile mindset in future learning processes, extended by the inclusion of other values and skills such as effort, perseverance, adaptability, and creativity. Subsequently, MCES interconnects new-age technologies and education to induce new approaches to thinking and learning.
Introducing social robots in educational institutions comes with many challenges regarding the pedagogues involved, their available time, and their lack of technological skills, as well as general technical issues such as poor internet connections. In this paper, we describe how Blue Ocean Robotics works with end users in our Innovation Projects, which focus on the long-term implementation of social robots in Danish educational institutions.
Sofia Serholt, Wolmet Barendregt, Dennis Küster, Aidan Jones, Patrícia Alves-Oliveira, Ana Paiva
As robots are becoming increasingly common in society and education, it is expected that autonomous and socially adaptive classroom robots may eventually be given responsible roles in primary education. In this paper, we present the results of a questionnaire study carried out with students enrolled in compulsory education in three European countries. The study aimed to explore students' normative perspectives on classroom robots pertaining to roles and responsibilities, student-robot relationships, and perceptive and emotional capabilities in robots. The results suggest that, although students are generally positive toward the existence of classroom robots, certain aspects are deemed more acceptable than others.
In this paper I argue that the rationality characterizing strategic action in game theory is computable. In making this argument I discuss the parametric ordinal decision theory developed by Kenneth J. Arrow and the parametric expected-utility rankings of John von Neumann and Oskar Morgenstern. I next discuss von Neumann and Morgenstern's two-person zero-sum game theory. I argue that even though von Neumann and Morgenstern introduce rational decision-making predicated on a randomizing device, this procedure is nonetheless subject to computation. Moreover, whereas Arrovian actors and von Neumann-Morgenstern expected-utility-maximizing agents can be subject to indecisiveness due to indifference among elements of an optimal choice set, if we assume repeating choice contexts, then von Neumann and Morgenstern's introduction of randomization in mixed strategies can solve the problem of computing decisions in these cases. This argument is a fundamental part of a larger project arguing that the strategic rationality formalized by von Neumann and Morgenstern is computable in the sense of the Church thesis. If this is true, then insofar as strategic rationality (also called rational choice) is paradigmatic of instrumental rationality, such agents are in principle no different from artificial intelligences with the same instructions for action (rules linking choices and outcomes) and identical preferences and beliefs.
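To make the computability claim concrete, here is a minimal sketch, using the standard textbook reduction rather than anything specific to the paper, showing that the value and an optimal mixed strategy of a finite two-person zero-sum game reduce to a finite linear program and are therefore effectively computable; the function name `solve_zero_sum` and the example payoff matrix are illustrative assumptions.

```python
# A minimal sketch: by von Neumann's minimax theorem, an optimal mixed
# strategy of a finite two-person zero-sum game is the solution of a
# finite linear program, hence computable in the Church-thesis sense.
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Return (game value, row player's optimal mixed strategy).

    payoff[i][j] is the row player's payoff when row i meets column j.
    """
    A = np.asarray(payoff, dtype=float)
    m, n = A.shape
    # Variables: x_1..x_m (strategy probabilities) and v (game value).
    # Maximize v  <=>  minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every opposing column j:  v - sum_i A[i, j] * x_i <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The probabilities must sum to one.
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]

# Matching pennies: the computed value is 0 and the strategy (0.5, 0.5),
# the randomizing device made explicit as an output of an algorithm.
value, strategy = solve_zero_sum([[1, -1], [-1, 1]])
print(value, strategy)
```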
The introduction of social robots into society will require that they follow ethical principles that go beyond consequentialism. In this paper, I show how to apply the principle of double effect to solve an ethical dilemma involving robots studied by Alan Winfield and colleagues. The principle of double effect states conditions for ethically acceptable behavior when an action has both positive and negative consequences. I propose a formal semantics with actions, causes, intentions, and utilities based upon the work of Judea Pearl, John Horty, and others. With this formal semantics, the question of whether an action is permitted according to the principle of double effect is reduced to deciding whether a certain formula is true.
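As a rough illustration of how such a permissibility question becomes a decidable check, here is a minimal sketch assuming a simplified propositional reading of the four classical double-effect conditions; the `Action` data structure and the condition tests are illustrative assumptions, not the paper's Pearl-style causal semantics.

```python
# A minimal sketch of a double-effect permissibility check under a
# simplified reading of the four classical conditions; this is NOT the
# paper's formal semantics, only a toy decision procedure.
from dataclasses import dataclass, field

@dataclass
class Action:
    act_utility: float     # utility of the act itself, apart from its effects
    good_effects: list     # utilities of the good consequences (each >= 0)
    bad_effects: list      # utilities of the bad consequences (each < 0)
    intended_effects: list = field(default_factory=list)  # what the agent aims at
    bad_is_means_to_good: bool = False  # does the harm bring about the good?

def permitted_by_double_effect(a: Action) -> bool:
    conditions = [
        a.act_utility >= 0,                            # 1. the act itself is not wrong
        all(e >= 0 for e in a.intended_effects),       # 2. only good effects are intended
        not a.bad_is_means_to_good,                    # 3. harm is not a means to the good
        sum(a.good_effects) + sum(a.bad_effects) > 0,  # 4. proportionality
    ]
    return all(conditions)

# A trolley-style case: diverting a threat away from two people toward one.
divert = Action(act_utility=0, good_effects=[2], bad_effects=[-1],
                intended_effects=[2], bad_is_means_to_good=False)
print(permitted_by_double_effect(divert))  # True under these toy numbers
```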
I refute Bringsjord's attempted refutation of Searle, who has argued against two recent visions: Bostrom's super-intelligence (post-humanism) and Floridi's info-spheres (information revolution). My refutation derives from the impossibility of Turing machines computing consequential information that is not linked with observations of their output. Placing post-humanism and the information revolution under a philosophical perspective leads to the identification of an unspoken presupposition in both: the universalism of meaning. A philosophical theory of information needs a semiotic theory of signs and representations that takes information to be a property of signs linked with their interpreting minds.
We sketch an inference architecture that permits linguistic aspects of politeness to be interpreted; we do so by applying the ideas of politeness theory to the SCARE corpus of task-oriented dialogues, a type of dialogue of particular relevance to robotics. The fragment of the SCARE corpus we analyzed contains 77 uses of politeness strategies: our inference architecture covers 58 of them using classical AI planning techniques; the remainder require other forms of means-ends inference. By the end of the paper we will thus have discussed in some detail how to automatically interpret different forms of politeness; but should we do so? We conclude with some brief remarks on the issues involved.
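To indicate the flavor of such planning-based interpretation, here is a minimal sketch of handling one politeness strategy, conventional indirectness ("Can you X?"), by checking the utterance against the hearer's action repertoire and the joint task goal; the patterns, the toy domain, and the function `interpret` are illustrative assumptions, not the paper's architecture or the SCARE data.

```python
# A minimal sketch of interpreting conventionally indirect requests via a
# means-ends check: a question about ability contributes nothing to the
# task, but executing a goal-relevant action does, so the question is
# reinterpreted as a request.
import re

# Toy task-oriented domain: actions the hearer can execute, and which of
# them advance the joint goal.
ACTIONS = {"open the cabinet", "press the red button", "pick up the key"}
GOAL_RELEVANT = {"press the red button", "pick up the key"}

def interpret(utterance: str) -> str:
    """Map a (possibly polite) utterance onto a literal or directive reading."""
    m = re.match(r"(?:can|could|would) you (.+?)\??$", utterance.lower())
    if m and m.group(1) in ACTIONS:
        act = m.group(1)
        if act in GOAL_RELEVANT:
            return f"REQUEST({act})"       # indirect request, not a real question
        return f"ABILITY-QUESTION({act})"  # no means-ends link to the goal
    return f"LITERAL({utterance})"

print(interpret("Could you press the red button?"))  # REQUEST(press the red button)
print(interpret("Can you open the cabinet?"))        # ABILITY-QUESTION(open the cabinet)
```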