
Ebook: Envisioning Robots in Society – Power, Politics, and Public Space

Robots are predicted to play a role in many aspects of our lives in the future, affecting work, personal relationships, education, business, law, medicine and the arts. As they become increasingly intelligent, autonomous, and communicative, they will be able to function in ever more complex physical and social surroundings, transforming the practices, organizations, and societies in which they are embedded.
This book presents the proceedings of the Robophilosophy 2018 conference, held in Vienna, Austria, from 14 to 17 February 2018. The third event in the Robophilosophy Conference Series, the conference was entitled Envisioning Robots in Society – Politics, Power, and Public Space. It focused on the societal, economic, and political issues related to social robotics. The book is divided into two parts and an Epilogue. Part I, entitled Keynotes, contains abstracts of the keynotes and two longer papers. Part II is divided into 7 subject sections containing 37 papers. Subjects covered include robots in public spaces; politics and law; work and business; military robotics; and policy.
The book provides an overview of the questions, answers, and approaches that are currently at the heart of both academic and public discussions. The contributions collected here will be of interest to researchers and policy makers alike, as well as other stakeholders.
The five editors of this volume are delighted to present here the Proceedings of the third event in the Robophilosophy Conference Series, held February 14–17 at the University of Vienna, Austria. After the first two events in this series, which took place in Aarhus in 2014 and 2016, this volume collects the results of an intense, exciting, and very productive research exchange that featured close to 100 research presentations and brought together about 250 researchers from all over the world.
The Robophilosophy 2018 conference, entitled Envisioning Robots in Society—Politics, Power, and Public Space, prominently focused on societal, economic, and political issues related to social robotics, including the organization of work and labor, policy, education, economics, law, medicine and care, and the arts. Within these and other socio-political domains social robots will appear in ever more intelligent, connectable, and extensive ways, producing artificial agents that function in ever more complex physical and social surroundings, and transform the practices and organizations in which they are embedded. This raises a host of difficult and highly complex questions for policy-makers, engineers, and researchers. Which socio-political, socio-cultural, economic, and ethical challenges will we humans be confronted with as social robots are integrated into a growing number of contexts of everyday life? How can philosophy and other disciplines contribute to asking these questions and addressing these challenges?
Our conference was intended to send yet another signal to researchers, policy makers, engineers, and corporations that this is the time to (pro)actively engage with these issues and to realize that they jointly share the burden of responsibility for shaping the course of the “robot revolution”.
The papers in this volume address the difficult questions of the impending socio-cultural changes due to the ‘robot revolution’. They ask these questions in different ways, ranging from reflections on the future of the economy and work to ethical questions about responsibility and philosophical discussions about the moral status of artificial agents. The Proceedings offer an interesting spectrum of the questions, answers, and approaches that are currently at the center of both academic and public discussions. We are confident that the short contributions collected here advance the state of the art and are helpful to researchers and policy makers alike, as well as of interest to other stakeholders.
Like the Robophilosophy Conference Series in general, these Proceedings aim to present philosophical and interdisciplinary humanities research in and on social robotics that can inform and engage with policy making and political agendas—critically and constructively. During the conference we explored how academia and the private sector can work hand in hand to assess the benefits and risks of future production formats and employment conditions. We talked about how research in the humanities, including art and art research, and research in the social and human sciences can contribute to imagining and envisioning the potentials of future social interactions in the public space. We hope that the contributions in this volume will further discussion and exchange on the difficult yet eminently important questions that arise when we envision the introduction of robots into our societies.
Service robots are becoming ever more pervasive in society at large. They are present in our apartments and on our streets. They are found in hotels, hospitals, and care homes, in shopping malls, and on company grounds. Their growing presence gives rise to various challenges. Service robots consume energy, and they take up space in ever more crowded cities, sometimes leading us to collide with them or stumble over them. They monitor us, communicate with us, and retain our secrets on their data drives. They can also be hacked, kidnapped, and abused. The first section of this article presents different types of service robots—such as security, transport, therapy, and care robots—and discusses the moral implications that arise from their existence. Information ethics and machine ethics form the basis for interrogating these moral implications. The second section discusses a draft patient declaration, by which people can determine whether and how they want to be treated and cared for by a robot. However, individual specifications may violate personal interests or the business interests of the hospital or nursing home. The author argues that such a patient declaration will be vital in a world ever more affected by these service robots.
“Integrative Social Robotics” (ISR) is a new approach or general method for generating social robotics applications that are culturally sustainable (Seibt 2016). The paper briefly recapitulates the primary motivation for ISR. Currently social robotics is caught in a compounded version of the Collingridge dilemma—a triple gridlock of description, evaluation, and regulation tasks. In a second step we describe how ISR can overcome this gridlock, presenting five principles that should guide the research, design, and development (RDD) process for applications in social robotics. Characteristic of ISR is the intertwining of a mixed-method approach (i.e., conducting experimental, quantitative, qualitative, and phenomenological research for the same envisaged application) with conceptual and axiological analysis in philosophy; moreover, ISR is value-driven and abides by the “non-replacement principle”: social robots may only do what humans should but cannot do. In conclusion we suggest, with reference to a classification of different formats of pluridisciplinary research by Nersessian and Newstetter (2013), that ISR may establish social robotics as a new transdiscipline.
The integration of social robots into human societies requires that they be capable of taking decisions that may affect the lives of people around them. In order to ensure that these robots will behave according to shared ethical principles, an important shift in the design and development of social robots is needed: one whose main goal is to improve ethical transparency rather than technical performance, and to place human values at the core of robot design. In this abstract, we discuss the concept of ethical decision making and how to achieve trust according to the principles of Autonomy, Responsibility, and Transparency (ART).
The field of artificial intelligence and robotics has long adopted an anthropocentric view, taking the intelligence structures of humans as the guiding requirements for developing artificial intelligence. This paper uses observations of a robotic lawnmower to demonstrate how Jakob von Uexküll's Umwelt theory can be applied to describe robots and robot behavior, furthering our understanding of the behavior of different kinds of robots.
In the area of consumer robots that need to have rich social interactions with humans, one of the challenges is the complexity of computing the appropriate interactions in a cognitive, social, and physical context. We propose a novel approach for social robots based on the concept of Social Practices. Using social practices, robots can be aware of their own social identities (given by their role in the social practice) and the identities of others, and can identify the different social contexts and the appropriate social interactions that go along with those contexts and identities.
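To make the idea of computing with social practices more concrete, here is a minimal, hypothetical sketch in Python (not taken from the paper itself): a social practice is encoded as a structure linking observable context cues to roles and the interactions expected of each role, and the robot selects its behavior by matching the cues it observes against known practices. All names, cues, and actions below are illustrative assumptions.

    # Hypothetical sketch (not from the paper): a social practice links a context,
    # the roles involved, and the interactions expected of each role, so a robot can
    # select behavior by matching observed context cues to a known practice.
    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class SocialPractice:
        name: str
        context_cues: set[str]  # observable features that signal this practice
        expected_interactions: dict[str, list[str]] = field(default_factory=dict)  # role -> actions

        def matches(self, observed_cues: set[str]) -> bool:
            # A practice is considered active when all of its defining cues are observed.
            return self.context_cues <= observed_cues

    # Illustrative, invented examples of practices a consumer robot might know about.
    PRACTICES = [
        SocialPractice(
            name="hotel_checkin",
            context_cues={"reception_desk", "guest_present"},
            expected_interactions={
                "receptionist_robot": ["greet guest", "request reservation details", "hand over key card"],
                "guest": ["state name", "provide identification"],
            },
        ),
        SocialPractice(
            name="corridor_passing",
            context_cues={"narrow_corridor", "person_approaching"},
            expected_interactions={
                "service_robot": ["yield right of way", "signal intended path"],
                "pedestrian": ["continue walking"],
            },
        ),
    ]

    def select_interactions(observed_cues: set[str], own_role: str) -> list[str]:
        """Return the interactions appropriate to the robot's role in the active practice."""
        for practice in PRACTICES:
            if practice.matches(observed_cues) and own_role in practice.expected_interactions:
                return practice.expected_interactions[own_role]
        return []  # no known practice applies; fall back to default behavior

    if __name__ == "__main__":
        cues = {"reception_desk", "guest_present"}
        print(select_interactions(cues, "receptionist_robot"))
        # -> ['greet guest', 'request reservation details', 'hand over key card']

The design choice illustrated here is that the robot's identity and appropriate behavior are derived from the active practice rather than computed from scratch for each encounter, which is one way to read the authors' claim that social practices reduce the complexity of interaction planning.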
In three sections, some interactions at the workshop “YuMi in Action! Ethics and Engineering as Transdisciplinary Robotics Performance” and related later reflections are summarized. The primary emphasis is to illustrate what transdisciplinarity is and how transdisciplinarity could work as a certain form of scientific cooperation. To this end, the four principles of biomedical ethics are applied to human-robot interactions, as is the VDI codex of engineering ethics. Methodological fundamentals of transdisciplinary cooperation are discussed in relation to concrete medical applications of the robot “YuMi.” In the last section, the question of transdisciplinary research is (experimentally) related to the metaphor of “social submarines” and to concrete practical issues that endanger real transdisciplinarity. Here we close with the question of whether children's more or less unbiased imagination could be seen as a requirement of real transdisciplinarity.
Debates around robots, both scientific and non-scientific, mostly put the human being in their focus. This is important and necessary to produce machines that humans can operate and interact with, and to do responsible research. We think, however, that the phenomenon of robots, and generally machines, is only fully comprehensible if we, the observers, step back and try to understand machines from another, unusual perspective: the machines themselves.
In this paper I try to illustrate, quite roughly and indicatively, the interconnections between automation technology and social organization. Central to this analysis are the notions of automation, increased productivity in a capitalist society, labor, equality, global inequality and the modern culture of technology. I will end the paper with brief critical remarks on the question of ‘robot rights’.
The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such an approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), in 2. I introduce the main points of the methodological debate which opposes pragmatism and essentialism in the regulation of robotics, and I examine how legal fictions are framed from a pragmatist, functional perspective. Since this approach entails a neat separation of ontological analysis and legal reasoning, in 3. I discuss whether considerations on robots' essence are actually put into brackets when the pragmatist approach is endorsed. Finally, in 4. I address the problem of the social valence of legal fictions in order to suggest a possible limit of the pragmatist approach. My conclusion (5.) is that in the specific case of regulating robotics it may be very difficult to separate ontological considerations from legal reasoning—and vice versa—both on an epistemological and a social level. This calls for great caution in the recourse to anthropomorphic legal fictions.
This article examines the compatibility of law and robotics by comparing robotic and human legal decision-making. In a scenario where a robot and a person make exactly the same legal decision, with the same factual consequences, there would still be an important difference between the robotic and human decision-making processes. People will be able to relate to the human decision-maker, and this capacity shapes the judgment individuals have over the fairness of the decision and its outcome. At the same time, individuals are not able to relate to the robot in the same scenario, and reproducing the conditions of relatability in a robot is not foreseeable given the current state of the cognitive sciences. Our capacity to judge the fairness or unfairness of the actions of others shapes our acceptance of and faith in the legal system. Robots making legal decisions would not recreate the same conditions of trust in the fairness of the legal system, which is one source of incompatibility between law and robotics.
The workshop was the fifth event in the series of meetings organized by the Research Network for Transdisciplinary Studies in Social Robotics (TRANSOR, www.transor.org). In line with previous TRANSOR events it served the general aim of including the Humanities in full-scale interdisciplinary or even transdisciplinary research on Human-Robot Interaction and Social Robotics. The specific aim of this workshop was to contribute to a better understanding of the possible socio-cultural, psychological, and ethical-existential implications of the increased use of social robots in the workplace. The contributions investigated human work experience in different forms and modes of human-robot co-working. Two papers presented classificatory frameworks for distinguishing forms of working with robots (human-robot collaboration) and forms of working alongside artificial social agents. Other papers presented empirical work on new classificatory frameworks.