Ebook: Culturally Sustainable Social Robotics
The subject of social robotics has enormous projected economic significance. However, social robots not only present us with novel opportunities but also with novel risks that go far beyond safety issues. Social robotics is a potentially highly disruptive technology that could negatively affect the most valuable parts of the fabric of human social interactions in irreparable ways. Since engineering education does not yet provide the competences necessary to analyze, holistically assess, and constructively mitigate these risks, new alliances must be established between engineering and SSH disciplines, with special emphasis on the humanities (i.e. the disciplines specializing in the analysis of socio-cultural interactions and human experience). The Robophilosophy Conference Series was established in 2014 to create a new forum and to catalyze the research discussion in this important area of applied humanities research, with a focus on robophilosophy.
Robophilosophy conferences have been the world’s largest venues for humanities research in and on social robotics. The book at hand presents the proceedings of Robophilosophy Conference 2020: Culturally Sustainable Social Robotics, the fourth event in the international, biennial Robophilosophy Conference Series, which brought together close to 400 participants from 29 countries. The speakers of the conference, whose contributions are collected in this volume, were invited to offer concrete proposals for how the Humanities can help to shape a future where social robotics is guided by the goals of enhancing socio-cultural values rather than by utility alone. The book is divided into three parts: Abstracts of Plenaries, which contains six plenary sessions; Session Papers, with 44 papers under eight thematic categories; and Workshops, containing 25 items on five selected topics.
Providing concrete proposals from philosophers and other SSH researchers for new models and methods, this book will be of interest to all those involved in developing artificial ‘social’ agents in a culturally sustainable way that is also – a fortiori – ethically responsible.
This volume contains the Proceedings of Robophilosophy 2020: Culturally Sustainable Social Robotics, the fourth event in the biennial Robophilosophy Conference Series. The series presents interdisciplinary research in philosophy (and other Humanities) in and on social robotics. Past events in the series (Robophilosophy 2014: Sociable Robots and the Future of Social Relations, Aarhus; Robophilosophy 2016: What Social Robots Can and Should Do, Aarhus; Robophilosophy 2018: Envisioning Robots in Society—Power, Politics, and Public Space, Vienna) featured 60–100 research presentations and attracted 150–250 international participants. That these past conferences, as well as Robophilosophy 2020, are to date still the world’s largest conferences in Humanities research in and on social robotics stands in problematic disproportion to the enormous projected economic significance of social robotics. However, participation numbers have been steadily rising and the research community of the robophilosophy conferences is expanding. Moreover, supported by a new methodological reflectiveness in Human-Robot Interaction (HRI) research, it appears that the changes in the research landscape that motivated the institution of the Robophilosophy Conferences in the first place are, very slowly, taking place: if engineering products are to participate in human social interactions, new alliances must be established between engineering and SSH disciplines, with special emphasis on the humanities. The particular expertise of the humanities is the analysis of the symbolic and normative space of human interaction—what it means for individuals, communities, and societies to be engaged in this or that interaction with natural or technical systems, or with other human agents—and it is an expertise that social robotics and HRI research ultimately cannot do without.
This realization may have come more slowly than one might have wished—among both engineering and Humanities researchers—but as “human-centered AI and robotics” now begins to find explicit attention in research funding programs, the goal of producing “culturally sustainable” technology based on Humanities expertise will hopefully gain momentum.
These were the thoughts that went through the minds of the organizers of RP2020 in March 2020, at the onset of the COVID-19 pandemic, when we decided not to postpone the conference. While the value of personal encounters in formal and informal discussions at a conference cannot be overstated, we felt that it was even more important to ensure that the unusually large number of submissions by younger researchers would receive a timely publication outlet. We decided that we would try to recapture some of the benefits of a live conference by creating a hybrid event—with live and pre-recorded content online, accessible over an extended period of time via an interactive webpage. We asked the speakers of the session papers to provide short video recordings of their talks, which registered participants could comment on during a period of 10 days; these comments, together with other questions, were taken up in ten focused live online discussion sessions. Together with six live plenaries and five live workshops, moderated from a live conference studio, this large-scale research exchange filled the period from August 10 to August 21, with live sessions occurring during the last three days.
As an online event, the Robophilosophy 2020 conference literally took place around the world. Due to a special advertising effort—which our plenary speakers generously supported by collaborating with us on video teasers of their talks—close to 400 researchers from 29 countries around the globe participated. Colleagues in Europe, the US, Japan, New Zealand, Australia, the Philippines, China, and several countries in South America and the Baltics joined in a debate about how to address the challenges of creating culturally sustainable social robotics. Thus, the COVID-19 pandemic, which will likely accelerate the global interest in social robotics, also forced researchers to adapt to a communicative format that reveals the concerns of robophilosophy as global concerns.
Which applications of social robotics (if any) could we rationally want? This may be the shortest formulation of the core question of robophilosophy. The term “robophilosophy” identifies an ongoing “fundamental systematic reconfiguration of philosophy in the face of artificial social agency” that involves three research dimensions—it is “philosophy of, for, and by social robotics”. (Seibt J. Robophilosophy. In: Posthuman Glossary. R. Braidotti, M. Hlavajova, editors. Bloomsbury; 2017. p. 390–4.) Robophilosophy is (i) philosophical reflection on the socio-cultural and ethical impact of social robots; (ii) the employment of philosophical methods (conceptual and phenomenological analysis, formal theory construction, rational value discourse, etc.) for conceptual and methodological problems arising with artificial social agency; and (iii) experimental philosophy undertaken not merely with the familiar (quantitative, qualitative, experimental) methods of empirical research but also by construction (i.e. the design and programming of physical and kinematic appearance and interactive capabilities).
Each robophilosophy conference articulates the core question from an angle that, in the perception of the local organizers, ties in with focal points of the current research discussion and public debate. The main agenda of RP2014 was to communicate the need for Humanities expertise in social robotics and HRI research. RP2016 served to delineate robophilosophy more clearly from roboethics and put the focus on the entanglement of theoretical, methodological, and practical-normative problems arising with social robotics. RP2018 stressed the larger socio-political implications and cultural dimension of the role of social robotics. The aim of RP2020 was to direct the challenge back to the research community in the Humanities—relative to recent developments in the research debate and a greater opening towards the Humanities, it seemed the right time to shift from critique to construction. Instead of criticizing omissions in HRI and social robotics research, we wanted to invite our colleagues to offer concrete proposals for how the Humanities can contribute to shaping a future where social robotics is guided by the goals of enhancing socio-cultural values rather than mere utilities.
After a decade of interdisciplinary research into social robotics and Human-Robot Interaction (HRI), we still lack a clear understanding of, and regulative directives for, how to ensure that social robotics will contribute to a community’s resources for human well-being—to the practices in which members of a community experience justice, dignity, autonomy, privacy, security, authenticity, knowledge, freedom, beauty, friendship, sensitivity, empathy, compassion, creativity, and other socio-cultural core values, as these may be shared, or vary, across cultures. In the Call for Papers for RP2020, we invited philosophers and more broadly Humanities researchers to offer constructive answers to questions of method and procedure:
Precisely what, in terms of conceptual tools and research methods, can Humanities researchers, who are trained in the analysis of the experiential complexity of human social interactions, contribute to the task of producing culturally sustainable applications of social robotics?
Precisely how can Humanities research assist us in determining which socio-cultural values we wish to sustain or even to enhance?
Precisely how can philosophers and other Humanities researchers assist engineers in exploring what interacting with ‘social’ robots will come to mean to us, as individuals and societies?
And even more constructively, precisely how can we create cultural dynamics with or through social robots that will not impact our value landscape negatively? How could we design human-robot interactions in ways that will positively cultivate the values we, or people anywhere, care about?
Answers to these and related questions, by over 100 authors of 74 research contributions, are contained in these Proceedings. (Videos of the plenary lectures are available at the Robophilosophy YouTube channel.)
The systematic structure of these Proceedings deviates somewhat from that of the conference, whose structure was partly dictated by practical constraints (time zones). Here we have put conceptual, methodological, and design issues in front, in order to emphasize that the normative problems arising with social robotics in most cases cannot be addressed without clarifying conceptual issues beforehand or alongside.
While three of the five conference workshops advanced central themes in robophilosophy—sociality, moral standing, and trust—the remaining two workshops presented something novel. The workshop “Robots in Religious Contexts” might mark the beginning of a new research line—robotheology. The workshop “Think-and-Perform Tank”, on the other hand, introduced a new heuristic methodology for the development of culturally sustainable social robotics applications by crossing aesthetic and theoretical epistemologies in interactive improvisation between humans and machines.
The inclusion of art, as a distinctive epistemic pathway to the ‘truth’ of human-robot interaction, has been a characteristic of all Robophilosophy Conferences, and for RP2020 we had invited the German theater ensemble “Meinhardt & Krauss” to perform their newest production “ELIZA—Uncanny Love” at the Music Hall in Aarhus. In order to include the symbolic trajectories of this play in some fashion, the artists agreed to produce a film version of selected scenes, and in an “artist-audience” dialogue some insights and impressions could be shared. However, here more than elsewhere direct physical experience is decisive, and we are thus looking forward to the live performance of this play at the next conference in 2022.
To conclude with a look ahead, the tasks of robophilosophy cannot be fruitfully addressed from armchairs, ivory towers, or any reflective stance that isolates itself from the dynamics of praxis. As we develop artificial ‘social’ agents, the Humanities need to take on a new role and become pro-active, in order to help us to create technological futures worth living. There are currently two main strategies in robophilosophy.
On the one hand, some researchers engage in the wide-scope commentary of cultural criticism that reflects the role of social robotics in a larger cultural context—this is philosophy of social robotics. The audience of these reflections is typically society at large, and they aim to engender a shift away from pure profit maximization by changing the minds of individuals, and thereby changing practices. Let us call this the edification strategy.
On the other hand, as documented by the majority of the contributions in these Proceedings, we see an increase in philosophy for and by social robotics: concrete proposals for new models and methods, presented by philosophers and other SSH researchers, for how to develop social interactions with robots in culturally sustainable ways. This may be called the optimistic-subversive strategy—it rests on the trust that by changing the conceptual tools and paradigms of production processes we can change the products, and thereby change practices and minds in one step.
Only time can show which strategy will be more successful, and we probably need both. We hope, however, and are encouraged by the increasing number of participating non-philosophers, from the empirical sciences and from engineering, that the next robophilosophy conferences will focus on strengthening the second strategy. In our view, the research actions we need in the future are not at the level of polite or antagonistic dialogue between technology development and the Humanities, but at the level of direct practical collaboration. Cultural sustainability is not something to be deduced from fixed premises, but something to co-develop in praxis.
Social robots are designed to coexist with people and learn through their interactions. We, in turn, are expected to develop ways of behaving, communicating, and organizing that support robots. Inspired by this co-evolving relationship, this talk will explore social robots as “companion artifacts”, focusing critical attention on how our concepts of self, cultural practices, social organizations, and sociotechnical infrastructures are co-constructed with existing and imagined social robots. I discuss how “Japanese culture” is repeatedly assembled in relation to social robots, what it means to “domesticate” robotic technologies, and how community-based methods can incorporate diverse sociocultural values into social robotics.
One thing that distinguishes robots from other machines is the extent to which they operate at the level of meaning and not just mechanism. In this presentation, I will suggest that the embodied nature of robots means that they sometimes function as icons and thus convey meanings in a different manner from other media forms. Moreover, the representational content of social robots sometimes implicates their designers in difficult ethical dilemmas. In order to try to address these, engineers need to think more consciously about the politics of the meanings their robots rely on and convey.
The conversation about social robots and ethics has matured considerably over the years, moving beyond two inadequate poles: superficially utilitarian analyses of ethical ‘risks’ of social robots that fail to question the underlying sociotechnical systems and values driving robotics development, and speculative, empirically unfounded fears of robo-pocalypses that likewise leave those underlying systems and values unexamined and unchallenged. Today our perspective in the field is normatively richer and more empirically grounded. However, there is still work to be done. In the transition from risk-mitigation that accepts the social status quo, to deeper thinking about how to design different worlds in which we might flourish with social robots, we nevertheless have not reckoned with the moral and social debt already accumulated in existing robotics systems and our broader culture of sociotechnical innovation. We relish our creative and philosophical imaginings of a future in which we live well with robots, yet we do so without a serious reckoning with the past and present, and with the legacies of harm and neglect that must be redressed and repaired in order for those futures to be possible and sustainable. This talk explores those legacies and their accumulated debts, and what it will take to liberate social robotics from them.
An important aspect of transparency is enabling a user to understand what a robot might do in different circumstances. An elderly person might be very unsure about robots, so it is important that her assisted living robot is helpful, predictable—never does anything that puzzles or frightens her—and above all safe. It should be easy for her to learn what the robot does and why, in different circumstances, so that she can build a mental model of her robot. An intuitive approach would be for the robot to be able to explain itself, in natural language, in response to spoken requests such as “Robot, why did you just do that?” or “Robot, what would you do if I fell down?” In this talk, I will outline current work, within project RoboTIPS, to apply recent research on artificial theory of mind to the challenge of providing social robots with the ability to explain themselves.
This paper provides an analysis of social robots from a care ethics perspective, with relationality and reciprocity at the center. By investigating the impact of social robots on relational reciprocity, I identify a specific kind of deception: humans are deceived into believing that the robot is deserving of reciprocity by the robot’s appearance of responsiveness. Addressing the impact of social robots on reciprocity as a political ideal, the risk identified is a re-direction of resources from humans towards robots; social robots may threaten the ability to reciprocate to care workers, and may further weaken the incentive to give back to them.
Human societies have, historically, undergone a number of moral revolutions. Some of these have been precipitated by technological changes. Will the integration of robots into our social lives precipitate a new moral revolution? In this keynote, I will look at the history of moral revolutions and the role of techno-social change in facilitating those revolutions. I will examine the structural properties of human moral systems and how those properties might be affected by social robots. I will argue that much of our current social morality is agency-centric and that social robots, as non-standard agents, will disrupt that model.
The recent rise of artificial intelligence (AI) systems has led to intense discussions of their ability to achieve higher-level mental states and of the ethics of their implementation. One question, which so far has been neglected in the literature, is whether AI systems are capable of action. While the philosophical tradition appeals to intentional mental states, others have argued for a widely inclusive theory of agency. In this paper, I will argue for a gradual concept of agency, because both traditional concepts of agency fail to differentiate the agential capacities of AI systems.
Whether it is humanoid robots or chatbots: their thinking is on the surface and their promises are on the deep end. Superficial expression and activity data are used to make internal emotions, motives, and attitudes available for development—mostly based on technologies that are not introspective, i.e. that do not know any impressions or experiential qualities. This paper translates this ambivalence into an analytics of the interconnectivity of intelligence types. Humans, but also machines, become visible here only as designs of the coupling of various intelligences—as carriers that have to be arranged in such a way that different types of intelligence can complement one another. An ethnographic case study builds on these theoretical considerations by comparing two situations: a) statistically and algorithmically modeling future users, and b) repairing robot motions in a playful way. In both scenarios, algorithmic and hermeneutical intelligences complement each other through constitutively different modes of interoperability.
People often make ascriptions that they believe to be literally false. A robot, for example, may be treated as if it were a dog, or as if it had certain intentions, emotions, or personality traits. How can one do this while also believing that robots cannot really have such traits? In this paper we explore how Kendall Walton’s theory of make-believe might account for this apparent paradox. We propose several extensions to Walton’s theory, some implications for how we make attributions and use mental models, and an informal account of human-robot interaction from the human’s perspective.
Social robotics and HRI are in need of a unified and differentiated theoretical framework where, relative to interaction context, robotic properties can be related to types of human experiences and interactive dispositions. The aim of this paper is to contribute to this task by providing new descriptive tools. In social robotics and HRI it is commonly assumed that social interactions with robots are due to ‘anthropomorphizing’. We challenge this assumption and argue, on conceptual and empirical grounds, that social interactions with robots are not always the result of anthropomorphizing, i.e., the projection of imaginary or fictional human social capacities, but of sociomorphing, i.e., the perception of actual non-human social capacities. Sociomorphing can take many forms which phenomenally manifest themselves in various types of experienced sociality. We very briefly sketch core elements of the descriptive framework OASIS (the Ontology of Asymmetric Social Interactions) in order to show how one might develop a classificatory system for types of experienced sociality.
We argue against the view that human behavior is the benchmark of robotic performance for every kind of social interaction. To the contrary, it is rather human agents who, in what we call ‘functional social interactions,’ aim at simulating social automatons. An important aspect of this simulation is the agent’s attempt to suppress every indication of the existence of a difference between what she experiences from the “‘I’-perspective” and what is perceived by other agents, the “‘me’-perspective”. Although experiencing this difference is not needed for realizing functional interactions, it is needed for what we call “close interhuman relationships”.
Effects of anthropomorphism or zoomorphism in social robotics motivate two opposing tendencies in the philosophy and ethics of robots: a ‘rational’ tendency that discourages excessive anthropomorphism because it is based on an illusion and a ‘visionary’ tendency that promotes the relational reality of human-robot interaction. I argue for two claims: First, the opposition between these tendencies cannot be resolved and leads to a kind of technological antinomy. Second, we can deal with this antinomy by way of an analogy between our treatment of robots as social interactors and the perception of objects in pictures according to a phenomenological theory of image perception. Following this analogy, human- or animal-likeness in social robots is interpreted neither as a psychological illusion, nor as a relational reality. Instead, robots belong to a special ontological category shaped by perception and interaction, similar to objects in images.
In February 2012, Robonaut R2 and Dan Burbank performed the first human-humanoid handshake in space. The handshake welcomed R2 as a crewmember—a social agent rather than a thing. In Heidegger’s terms, it is experienced not only as present-at-hand or ready-to-hand, but also as a quasi-Dasein. Extending Ihde’s concept of alterity relations we argue it is experienced in lively alterity relations, given respect as an other capable of initiating and reacting to contact. R2 is capable not only of executing programs, but of also playing its part in choreographed and improvised collaborative performances within a shared social milieu.
Due to its interdisciplinary nature, the field of HRI uses many concepts typical of the social sciences and humanities, in addition to terms that are usually associated with technology. In this paper, I analyse the problems that arise when we use the term ‘empathy’ to describe and explain the interaction between robots and humans. I argue that this not only raises questions about the possibility of applying this term in situations in which only one of the participants of the interaction is a traditionally understood social subject but also requires answers to questions about such problematic concepts as values and culture.
In this paper, I engage in a deep wonder at the apparent animacy of robot technology. I elucidate the nature of movement in living organisms, and how this movement is different from the movement of robots. Through the illustration of the Umwelt of a single-celled protozoan and that of a robot lawnmower, I identify fundamental differences in the purpose behind their movements.
According to a tradition that we hold variously today, the relational person lives most personally in affective and cognitive empathy, whereby we enter subjective communion with another person. Near-future social AIs, including social robots, will give us this experience without possessing any subjectivity of their own. They will also be consumer products, designed to be subservient instruments of their users’ satisfaction. This would seem inevitable. Yet we cannot live as personal when caught between instrumentalizing apparent persons (slaveholding) and numbly dismissing the apparent personalities of our instruments (mild sociopathy). This paper analyzes and proposes a step toward ameliorating this dilemma by way of the thought of a 5th-century North African philosopher and theologian, Augustine of Hippo, who is among those essential in giving us our understanding of relational persons. Augustine’s semiotics, deeply intertwined with our affective life, suggest that, if we are to own persuasive social robots humanely, we must join our instinctive experience of empathy for them to an empathic acknowledgment of the real unknown relational persons whose emails, text messages, books, and bodily movements will have provided the training data for the behavior of near-future social AIs. In so doing, we may see simulation as simulation (albeit persuasive), while expanding our empathy to include those whose refracted behavioral moments are the seedbed of this simulation. If we naïvely stop at the social robot as the ultimate object of our cognitive and affective empathy, we will suborn the sign to ourselves, undermining rather than sustaining a culture that prizes empathy and abhors the instrumentalization of persons.
The implementation of culturally sustainable social robotics (SR) puts high requirements on the design of social human-robot interaction. This paper proposes the concept of empowerment technology (ET) as a value-driven framework for advancing the interlocking of human values and computational modeling. A capability-based model of the interactive unity of humans and robots is introduced and applied to a robotic childcare system. This case study shows that culturally sustainable SR in terms of ET is possible if SR addresses the values held by local stakeholders and ensures the support of human empowerment in terms of these values.
Once we have an idea of what culturally sustainable robotic behaviour looks like, we face the problem of how to get a robot to actually behave as such. We argue that for a robot to exhibit behaviour that conforms to the cultural values of the human environment in which it operates, it must be equipped with the capability to mindread. Our argument follows from the observation that cultural norms can only be correctly applied when certain conditions are met, and that those conditions can refer to the internal states of the agents taking part in the interaction. Consequently, for an artificial agent to correctly apply a cultural norm, it must infer the internal states of other agents. If a cultural norm is incorrectly applied, then a human agent could consider the resulting behaviour inappropriate. This renders mindreading essential to produce behaviour that respects cultural expectations.
In this paper, through the prism of the notion of workplace identity, we critically reflect on potential challenges of working alongside social service robots in service industries. From feminist studies of workplace identity, we adopt concepts of naturalization and normalization, and discuss how service robots’ “imprisonment” in the role of a friendly and consistent helper may present psychological and political challenges to how service employees relate to and perform their workplace identity.
In the attempt to make robots culturally diverse, social robotics research is overwhelmed by cultural stereotypes. Many researchers introduce concepts such as Culturally Robust Robots to account for the dynamic and flexible nature of culture. These concepts are grounded in an implicit assumption: that current AI methods are epistemologically adequate to represent and reason about “culture”. This paper questions that assumption by looking at two knowledge representation and reasoning (KR&R) methods used in intelligent robotics, arguing for the inadequacy of current methods, and calling for a critical revision of the use of KR&R in social robotics.