Preface
This volume contains the Proceedings of the conference Robophilosophy 2016 / TRANSOR 2016: What Social Robots Can and Should Do (October 17–21, Aarhus University, Denmark). After Robo-Philosophy 2014—Sociable Robots and the Future of Social Relations (August 19–22, Aarhus University, Denmark), the first research event in “robophilosophy” worldwide, it was possible to establish a biennial conference series in Robophilosophy that is scheduled to run well into the 2020s in different locations.
The decision to combine Robophilosophy 2016 with an international research conference of the Research Network in Transdisciplinary Studies in Social Robotics (TRANSOR) was motivated by the prospect of increased volume and impact—with 13 plenary talks and about 70 talks in sessions and workshops, Robophilosophy 2016 / TRANSOR 2016 turned out to be the hitherto largest event worldwide on multi- and interdisciplinary research on human-robot interaction that predominantly engages the perspectives of the Humanities. We felt that this was the right time to send out a strong signal that the rapid development of social robotics calls for a concerted and integrated effort across the disciplines of the Humanities to understand the transformative potentials of human-robot interactions.
There are various theoretical and practical-ethical reasons for expanding, at least this time round, the invitation to robophilosophy to a more inclusive call for Humanities research in relation to social robotics. Since the research landscape pertaining to human-robot interaction is currently very much in the making, it might be worthwhile to set out these reasons, together with some pointers on how the contributions to the Proceedings mirror this view of the current research situation.
To begin with the theoretical reasons: the term “robophilosophy”—introduced in 2014—is a piece of metaphilosophical terminology with a determinate origin and definition [1–4]. The term identifies an ongoing “fundamental systematic reconfiguration of philosophy in the face of artificial social agency” that involves three research dimensions—it is “philosophy of, for, and by social robotics” [3, 5]. Robophilosophy is (i) the philosophical reflection on the socio-cultural and ethical impact of social robots; (ii) the employment of philosophical methods (conceptual and phenomenological analysis, formal theory construction, rational value discourse, etc.) for conceptual and methodological problems arising with artificial social agency; and (iii) experimental philosophy undertaken not merely with the familiar (quantitative, qualitative, experimental) methods of empirical research but also by construction (i.e. the design and programming of physical and kinematic appearance and interactive capabilities).
This third dimension of robophilosophy, philosophy by social robotics, opens up a wide scope of interdisciplinary collaborations in robophilosophy beyond the interaction between philosophy and robotics. For it amounts to a far-reaching methodological repositioning in which the standard philosophical methodologies lose the relative autonomy traditionally credited to them. What social robots can do depends, to a large extent, on how they are perceived, and how they are perceived depends, to a considerable extent, on how they are conceptually framed, e.g. as “companions”, “machines”, “assistive technology”, etc. Robophilosophy acknowledges that the conceptual norms that determine whether such framings are admissible or overly metaphorical and misleading are themselves undergoing revision in the course of new human practices with artificial social agents. If empirical research in cognitive science reveals that the neurophysiological processes that are distinctive elements of human social cognition are also triggered in human-robot interaction [see e.g. 6, 7], philosophers (social ontologists) may need to adjust the traditional premise that social interactions can only take place among humans. Similar mutual feedback relations hold between, on the one hand, ontological or phenomenological descriptions of human-robot interaction in philosophy and, on the other hand, empirical research on human-robot interactions in psychology, anthropology, linguistics, and sociology.
In short, since social robotics creates new interdependencies between the facts and the concepts of social interaction, robophilosophy must explicitly view itself as a constitutive part of a wide-scope transdisciplinary engagement with the phenomena of human-robot interaction.
Most of the conference contributions collected here display this currently ongoing process of methodological reorientation, in which philosophy and the other Humanities try to find their stance towards social robotics and their place in HRI (Human-Robot Interaction Studies). Contributions that address methodological issues head-on form the largest group of session papers (Part II: Methodological Issues), but in many other contributions as well (see Part III: Robots in the Wild (workshop); Part II: Perception of Social Robots (session); and the paper by J. Parviainen et al.) the authors argue directly for the inclusion of certain disciplinary perspectives, concepts, and methods in order to arrive at a more accurate description of the complex phenomena of human-robot interaction. Other contributions (see Part II: Emotions in Human-Robot Interaction; Education, Art and Innovation; Social Norms and Robot Sociality (sessions)) make this methodological case more indirectly, by showing that the inclusion of Humanities research in HRI can be highly productive or even indispensable in clarifying what social robots can and cannot do. This is, for example, the central agenda of two of the six conference workshops, Commitment and Agency Management in Joint Action and Phronēsis for Machine Ethics? (both in Part III), which promote interactions between philosophers and roboticists on questions of capacity. Similarly, methodological considerations also loom large in those contributions that explore the significance of art for a deeper, innovative understanding of human-robot interaction or human sociality (see Part III: Co-Designing Child-Robot Interactions (workshop); Part I: the plenary by S. Penny; Part II: the paper by B. Romic).
This brings us to the practical-ethical reasons for interdisciplinary engagement with the phenomena of human-robot interaction across the Humanities. The question of what social robots can do can be asked in two ways. On the one hand, it can be asked as a question about the capacities of an artificial agent, in the sense of a ‘classical AI question’. This is the focus of the workshop on artificial phronēsis as well as of the workshop Artificial Empathy (both in Part III), which features HRI research with the more ‘classical’ interdisciplinary combination of robotics, neuroscience, and psychology. On the other hand, and this is the crucial difference between classical AI and robotics, on one side, and social robotics, on the other, the focus of the ‘can do’ question may turn on the nature of the interaction: what kinds of interactions can be engendered by putting so-called ‘social’ robots into the space of human social interaction? Most of the conference contributions explicitly or implicitly perform this ontological turn—in philosophical terminology: from substance to process—and redirect their attention from the object, the robot, to human-robot interactions as such. As explained in [8], this shift has a momentous consequence. Unlike objects, social interactions are subject not only to descriptive norms (are they correctly categorized or not?) but also to practical-ethical norms (is it practically rational or ethical to create or promote such interactions?). In other words, when we shift the investigative focus to interactions, descriptive questions of what social robots can do are very closely connected to normative questions of what social robots should do. Of course, as the plenaries by J. Robertson and K. Richardson (Part I) illustrate, we can still dissociate descriptive research on human-robot interactions from normative investigations. But in view of the potentially devastating socio-economic and socio-cultural impact of the “robot revolution,” from massive job loss to dystopian scenarios of “robot smog” [9], there are good practical reasons to keep descriptive and normative research on the expectable cultural change in close vicinity.
This perception of the current situation is documented by the fact that half of the plenaries and a large group of session talks (Part II: Ethical Tasks and Implications and Social Norms and Robot Sociality) address ethical and normative aspects. Moreover, and perhaps even more importantly, the need for close contact with normative aspects is reflected in the unusual development of roboticists themselves turning to philosophers and other scholars of the Humanities in order to gauge the cultural impact of (social) robotics and to initiate a movement towards value-oriented design and “responsible robotics.” Two new initiatives to this effect, the IEEE's Global Initiative for Ethical Considerations in Autonomous Systems and the Foundation for Responsible Robotics, decided to use the conference as a platform to promote these ideas (with a short session on value-oriented design, whose description is not included in the Proceedings, and the workshop Responsible Robotics (Part III)).
To summarize, there are good reasons, currently at least, to engage a wide spectrum of Humanities research in the investigation of the potentials of social robotics, and for robophilosophers to seek collaborative contacts across the Humanities as well. These reasons also promote an outlook on social robotics research in which descriptive and normative inquiries are kept in close vicinity. We call this outlook “Integrative Social Robotics” and argue directly for a value-driven paradigm in social robotics that turns on what social robots can and should do, from the very beginning and throughout [see 8]. But even if not everyone may yet join this vision of a new pro-active role of the Humanities in society, the contributions to Robophilosophy 2016 / TRANSOR 2016 seem to endorse a shared conception that right now we are living in “the robotic moment” of human history, and that all experts on culture and cultural dynamics need to be involved when we jointly respond to the challenge of determining “who we are and who we are willing to become” [10, p. 26].
References
[1] J. Seibt, R. Hakli, M. Nørskov (eds.). 2014. Sociable Robots and the Future of Social Relations–Proceedings of Robophilosophy 2014. IOS Press, Amsterdam.
[2] M. Nørskov (ed.). 2015. Sociable Robots–Boundaries, Potentials, Challenges. Ashgate, Farnham, UK.
[3] J. Seibt. 2016. Robophilosophy. In: R. Braidotti, M. Hlavajova (eds.). Posthuman Glossary, forthcoming.
[4] R. Hakli, J. Seibt (eds.). 2016. Sociality and Normativity for Robots–Philosophical Investigations. Springer, New York, forthcoming.
[5] J. Seibt, R. Hakli, M. Nørskov (eds.). 2017. Robophilosophy–Philosophy of, for, and by Social Robotics. MIT Press, Cambridge, MA.
[6] T. Chaminade, M. Zecca, S. Blakemore, A. Takanishi, C.D. Frith, S. Micera, P. Dario, G. Rizzolatti, V. Gallese, M.A. Umiltà. 2010. “Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures”, PLoS ONE, vol. 5, no. 7, e11577.
[7] L.M. Oberman, E.M. Hubbard, J.P. McCleery, E.L. Altschuler, V.S. Ramachandran, J.A. Pineda. 2005. “EEG evidence for mirror neuron dysfunction in autism spectrum disorders”, Cognitive Brain Research, vol. 24, no. 2, pp. 190–198.
[8] J. Seibt. 2016. Integrative Social Robotics–A New Method Paradigm to Solve the Description Problem and the Regulation Problem? In: J. Seibt, M. Nørskov, S. Schack Andersen (eds.), this volume.
[9] I.R. Nourbakhsh. 2013. Robot Futures, MIT Press, Cambridge, MA.
[10] S. Turkle. 2012. Alone Together: Why We Expect More from Technology and Less from Each Other, Basic Books, New York.