
Ebook: Sociable Robots and the Future of Social Relations

The robotics industry is growing rapidly, and to a large extent the development of this market sector is due to the area of social robotics – the production of robots that are designed to enter the space of human social interaction, both physically and semantically. Since social robots present a new type of social agent, they have been aptly classified as a disruptive technology, i.e. the sort of technology which affects the core of our current social practices and might lead to profound cultural and social change.
Due to its disruptive and innovative potential, social robotics not only raises questions about utility, ethics, and legal aspects, but also calls for “robo-philosophy” – comprehensive philosophical reflection from the perspectives of all philosophical disciplines. This book presents the proceedings of the first conference in this new area, “Robo-Philosophy 2014 – Sociable Robots and the Future of Social Relations”, held in Aarhus, Denmark, in August 2014. The short papers and abstracts collected here address questions of social robotics from the perspectives of philosophy of mind, social ontology, ethics, meta-ethics, political philosophy, aesthetics, intercultural philosophy, and metaphilosophy.
Social robotics is still in its early stages, but it is precisely now that we need to reflect on its possible cultural repercussions. This book is accessible to a wide readership and will be of interest to everyone involved in the development and use of social robotics applications, from social roboticists to policy makers.
We would like to thank our colleagues and administrative staff at Aarhus University who supported us at different stages of the realization of the conference Robo-Philosophy 2014. We are grateful to all members of the conference program committee: Mark Bickhard, Charles Ess, Ezio di Nucci, Raffaele Rodogno, Jens-Christian Bjerring, Martin Mose Bentzen, Klaus Robering, and Carola Eschenbach for helping us select submissions. We would like to thank Gitte Grønning Munk and Ib Jensen at AU-Communication/Conference Support for help with many practical tasks, and their colleague Nikolai Lander for the conference graphics. When we realized that the conference foyer was lacking ambience, Stefan Larsen created an automated info-booth with the Telenoid robot in record time. We are particularly indebted to our student onsite conference managers, Thea Frederiksen and Rikke Mayland Olesen, for the smooth practical realization of the conference, together with our student staff, Anna Frida Vind Andersen, Søren Toft Høyner, Cecilie Kjær Rimdal, Louise Rognlien, and Niklas Tørring. We are grateful to Thea Frederiksen for competent help in preparing the manuscript for the proceedings. Finally, we would like to thank our head of department, Bjarke Pårup, for financial and moral support. The conference was made possible by a grant from the VELUX Foundation in the context of funding for a larger research project.
Johanna Seibt, Raul Hakli, Marco Nørskov
Aarhus, October 15, 2014
For more than a decade, the field of human-robot interaction has generated many valuable contributions of interest to the robotics community at large. The field is vast, ranging all the way from perception to action and decision. At the same time, human-human joint action has become a topic of intense research in cognitive psychology and philosophy, bringing concepts and even architectural hints that aid our understanding of joint action. In this paper, we analyse some findings from these disciplines and connect them to the case of human-robot joint action. For us, this work is a first step toward the definition of a framework dedicated to human-robot interaction.
Socially aware robots have to coordinate their actions considering the spatial requirements of the humans with whom they interact. We propose a general framework based on the notion of affordances that generalizes geometrical accounts to the problem of human-aware placement of robot activities. The framework provides a conceptual instrument to take into account the heterogeneous abilities and affordances of humans, robots, and environmental entities. We discuss how affordance knowledge can be used in various reasoning tasks relevant to human-robot interaction.
There is increasing agreement in the cognitive sciences that human cognition is embodied – to some significant extent. However, there is much less agreement regarding in what sense(s) cognition is embodied. One point of broad agreement is that sensorimotor interaction with the environment is fundamental to cognition. From a historical perspective, this emphasis on the sensorimotor body is at least partly due to the crucial role that the conceptual shift in artificial intelligence (AI) research – from computational to robotic models – has played in the overall development of embodied cognitive science. Most embodied AI research, however, in particular work on symbol grounding and related approaches, reduces the body to a mere sensorimotor interface for internal processes that are still just as computational as they were 30–40 years ago. In Harnad's terms, this type of AI has only gone from a computational to a robotic functionalism. In theory, this could be limited to AI research as such, but in practice the view of the physical body as the computational mind's sensorimotor interface to the world still pervades much of cognitive science and philosophy of mind. The argument presented here is that there are good reasons to say that at least today's robots are in fact not embodied – in any sense that would allow for anything even close to human embodied cognition and intentionality – and that this has implications for social interactions between humans and robots.
What does it mean to be an agent, and how do we perceive others as such? Do the same rules apply when interacting with others who are radically different from ourselves? We typically perceive the agency of others through their behavior, as they engage various aspects of their affordance field. The affordance concept refers to an organism's environmentally anchored action possibilities, but questions abound as to how, more precisely, to understand the relational, modal, future-directed, and dynamic aspects of this notion. These complexities might be seen as intensified in social interaction, where we might also perceive others' agency through reciprocal negotiations and sharing of affordances. Via Merleau-Ponty's analysis of social perception, I try to bring together a re-interpretation of affordances and of perceptible agency, which might begin to give us some tools for understanding interactions with agents truly other than ourselves.
While there is general consensus that robust forms of social learning enable the possibility of human cultural evolution, the specific nature, origins, and development of such learning mechanisms remain an open issue. The current paper offers an action-based approach to the study of social learning in general and imitation learning in particular. From this action-based perspective, imitation itself undergoes learning and development and is modeled as an instance of social meta-learning – children learning how to use others as a resource for further learning. This social meta-learning perspective is then applied empirically to an ongoing debate about why children imitate causally unnecessary actions while learning about a new artifact (i.e., over-imitate). Results suggest that children over-imitate because doing so is in the nature of learning about social realities, of which cultural artifacts are a central aspect.
Social cognition research has focused on the debate about the nature of the mechanisms underlying social abilities. However, the competing views in the debate share a basic assumption: that the attribution of mental states is central to social cognition. The aim of this paper is twofold: firstly, I present an alternative framework known as mindshaping, according to which human beings are biologically predisposed to learn and teach cultural and rational norms and complex cultural patterns of behaviour that enhance social cognition. Secondly, I highlight how this new framework can open new perspectives for research in the area of social robotics.
It is clear that people can interact with programs and robots in ways that appear to be, and can seem to participants to be, social. Asking the question of whether or not such interactions could be genuinely social requires examining the nature of sociality and further examining what requirements are involved for the participants in such interactions to co-constitutively engage in genuine social realities — to constitute genuine social agents. I will attempt to address both issues.
A further question is “Why ask the question?” Isn't “sociality” like a program in that simulating the running of a program is the running of a program — so sufficiently simulated sociality is genuine sociality? What more could be relevant and why?
There are at least two sorts of answers: 1) to better understand the metaphysics of sociality, and thereby its potentialities and the ways in which “merely” simulated sociality might fall short, especially of the developmental and historical potentialities of sociality, and 2) to better understand the ethical issues surrounding interactions among and between humans and robots.
The philosophical tradition offers us numerous variations on the claim that, for human agents, individuality is prior to sociality: what I will call the Priority of Individuality Thesis (POI). I argue that, as an implicit presupposition, POI not only vitiates attempts to account for sociality but also undermines the category of the individual person itself. This paper problematizes POI. Though I ultimately want to show that sociality is prior to, or at least co-constitutive of, individuality, my primary objective here will be critical. I will challenge POI on two grounds. First, in most of its variants, POI turns on an equivocal notion of priority. There are at least three distinct kinds of priority at work in the philosophical literature: epistemic priority, ontological priority, and diachronic temporal priority. There are also numerous problematic equivocations among them. Second, I consider two examples from the philosophical canon in which POI can be seen to have pathological, even paradoxical, consequences. By individuating moral agency by recourse to autonomous individual human subjects, both Kantian and Utilitarian accounts of morality erase the possibility of morally significant differences among such subjects. Furthermore, two classic solutions to the problem of other minds both presuppose that the problem is the epistemological one of identifying individual subjects as such. And as Merleau-Ponty has suggested, both “solve” the problem of other minds by essentially erasing otherness: there may be multiple instances of mind, but qua instances of mind, there are no significant differences between them.
Much of the ethical debate about social robotics applications hinges on the ontological classification of our interactions with robots, but a detailed ontological account of simulated social interaction is still missing. In this paper I briefly explain why characterizations of human-robot interactions are best undertaken in a neutral, ‘technical’ idiom. Then I define five modes of simulation (functionally replicating, imitating, mimicking, displaying, and approximating) in terms of relations between processes. I sketch how these notions can be used to describe more precisely what a certain robot ‘can do.’ In conclusion, I outline a general strategy for developing a fine-grained taxonomy of social interactions.
I study the implications of the use of social robotics for our concepts of social interaction, both in everyday usage and in philosophical theories of social action. If people sometimes conceive of their activities with robots as cases of social interaction, even though they do not attribute to robots all the capacities that philosophers take to be necessary requirements for participating in social interaction, these requirements may need to be reconsidered. For instance, some analyses of social interaction require that the parties to the interaction are jointly committed to the activity in question in ways that involve obligations. However, such normative concepts as commitments and obligations may not be attributable to robots, and yet people may still conceive of themselves as being involved in social interaction with them. If this is the case, there is a tension between how social interaction is understood in everyday contexts and how it is analysed in philosophy. I study different ways to understand this tension in terms of alternative methodological orientations towards the conceptual analysis of social interaction.
In this paper I offer a way to think about artificial agents in terms of their capacities, or competence, and I work out what this approach means for their status and for the way we ought to treat such agents. The discussion draws largely on the work that Christian List and Philip Pettit have done on group agency.