Ebook: Social Robots in Social Institutions
Social institutions emerge from social practices that coordinate activities through the explicit statement of rules, goals, and values. When artificial social actors are introduced into the physical and symbolic space of institutions, will this affect or transform institutional structures and practices? And how can social robotics, as an interdisciplinary endeavor, contribute to the ability of our institutions to perform their functions in society?
This book presents the Proceedings of Robophilosophy 2022, the 5th event in the biennial Robophilosophy conference series, held in Helsinki, Finland, from 16 to 19 August 2022. The theme of this edition of the conference was Social Robots in Social Institutions, and it featured international multidisciplinary research from the humanities, social sciences, Human-Robot Interaction, and social robotics. The 63 papers, 41 workshop papers, and 5 posters included in this book are divided into 4 sections: plenaries, sessions, workshops, and posters, with the 41 papers in the ‘Sessions’ section grouped into 13 subdivisions covering topics such as elderly care, healthcare, law, education and art, as well as ethics and religion. These papers explore the anticipated conceptual and practical changes that will come about in the course of introducing social robotics into public and private institutions, such as public services, legal systems, social and healthcare services, and educational institutions.
The research contributions collected here offer cutting-edge explorations of the societal significance of social robots for the future of social institutions. They will be of interest to researchers in Robophilosophy, Human-Robot Interaction, and robotics, as well as to private companies and policy makers aiming to place artificial social agents in social institutions.
This volume contains the Proceedings of Robophilosophy 2022: Social Robots in Social Institutions, the fifth event in the biennial Robophilosophy Conference Series, held in Helsinki, Finland, August 16–19, 2022. The conference series was initiated by an interdisciplinary research group at Aarhus University in August 2014, with an event that marked the beginning of the new field of robophilosophy. Robophilosophy is defined as “philosophy of, for, and by social robotics” – a new area of applied philosophy undertaken in close interdisciplinary contact with empirical research in HRI, technical design, and robotics engineering. After the second event in 2016, which also took place in Aarhus, the series began to travel internationally, with periodic changes of conference location. In 2018, the conference was held in Vienna, Austria. The 2020 conference was again organized by Aarhus University, but it was held fully online because of the COVID-19 pandemic.

In 2020, at the early stage of planning the Helsinki conference, we expected that by 2022 the pandemic would have fully subsided and that we could return to the traditional in-person conference format. However, the pandemic lasted much longer than we expected. In late 2021, when the call for papers had already been sent out, yet another wave of the pandemic hit, and we began to worry that we would not receive enough submissions unless we allowed for online presentations as well. Hence, we found ourselves back at the drawing board, now saddled with the task of organizing a hybrid conference instead of a purely on-site event.

In February 2022, Russia invaded Ukraine. Any war is deeply shocking and reason to despair of humanity, but this war, coming after more than 75 years of peace following the devastation of the Second World War, affected Europeans profoundly. It also raised the concern that potential participants might not feel comfortable travelling to a country sharing a long border with the aggressor. However, while such extra-academic worries were constantly in the background, in our small world all went very well: we had around 200 participants from 29 countries and around 100 high-quality research presentations. Best of all, most of the authors were willing to attend in person, and they created a very concentrated, friendly, and intellectually inspiring atmosphere. Even the fickle Finnish weather was cooperative and allowed for discussions in street cafés long into the night. Simply put, the conference was a success on all levels – and yet we were always aware that it constituted a four-day bubble offering the delight of joint academic work, a temporary mental safe haven in the midst of the horrible war that is unfortunately still raging in Ukraine as we write this.
Each Robophilosophy Conference has a special theme, and for 2022 we chose Social Robots in Social Institutions. The aim of social robotics is to create entities capable of social interaction with humans. This raises questions about the notion of sociality, because our standard notions of sociality presuppose that the participants in interactions are persons, not robots or other artificial agents. In human societies, many forms of social interaction have been institutionalised. Broadly speaking, institutions emerge from social practices that coordinate activities by establishing formal and informal rules which, in turn, state the goals and values they serve and assign roles and positions with corresponding rights and responsibilities. Institutions guide individuals to coordinate their actions and to cooperate in ways that stabilise the institutions and serve the goals of the society. One of the aims of the conference was to understand and critically evaluate how social robotics can be expected to transform, and in part is already transforming, institutional structures, institutional practices, and institution–citizen interaction, for instance in the fields of social and health care, education, science, media, and law.
After almost two decades of interdisciplinary research into social robotics and Human–Robot Interaction (HRI), we still lack a clear understanding of, and regulative directives for, how to ensure that social robotics will contribute to a community’s resources for affording human well-being – to the practices and institutions in which members of a community experience justice, dignity, autonomy, privacy, security, authenticity, knowledge, freedom, beauty, friendship, sensitivity, empathy, compassion, creativity, and other socio-cultural core values, as these may be shared, or vary, across cultures. Central questions concerning the larger societal significance of social robots are more urgent than ever: How does the introduction of social robots into our social institutions shape those institutions and their ability to perform their functions in society? How should we understand sociality in the context of human-robot interactions, and what becomes of social roles and practices? If we use social robots in social institutions, what are the effects on trust in institutions, on the allocation of responsibility, and on transparency? What are the ecological and environmental consequences of large-scale use of social robotics and its technological prerequisites, such as smart devices, networks, and computing and memory resources? How can potentially harmful consequences be prevented institutionally? How can an understanding of human social institutions, including their normative aspects, be implemented in the development of social robots? Robophilosophy 2022 advanced the discussion of these and related questions in plenaries, session talks, workshops, and the individual exchanges that only an in-person event can enable.
The Robophilosophy Conference Series aims at promoting interdisciplinary Humanities research in and on social robotics. We were therefore particularly pleased that researchers from other disciplines participated, especially from the multidisciplinary areas of HRI studies and social robotics. These cross-overs between overlapping interdisciplinary areas are needed to combine all relevant perspectives and results and to approach the complex task of finding pathways towards developing social robots in a responsible fashion. This, in fact, is the core message of the Robophilosophy Conference Series: only if humanities and social science researchers join forces with the research community and practitioners in social robotics and HRI can we create futures worth living.
With the organization of the conference in Helsinki we aimed to contribute to this mission by exploring and debating themes, topics, and questions related to “Social Robots in Social Institutions”, embracing both theoretical and practical angles, as is characteristic of all Robophilosophy conferences.
The articles and abstracts collected in these Proceedings, which comprise almost all the research contributions presented at the conference, investigate social robots in various institutional contexts, including religious, legal, and care institutions. The conference was an invitation and an opportunity for philosophers and other researchers in the humanities and social sciences, as well as researchers in social robotics and HRI, to explore together how interdisciplinary research can contribute to shaping a future in which social robotics is guided by the goal of enhancing socio-cultural values rather than mere economic utility.
Raul Hakli, Pekka Mäkelä, and Johanna Seibt
In this paper, I discuss what I call a new control problem related to AI in the form of humanoid robots, and I compare it to what I call the old control problem related to AI more generally. The old control problem – discussed by authors such as Alan Turing, Norbert Wiener, and Roman Yampolskiy – concerns the worry that we might lose control over advanced AI technologies, which is seen as something that would be instrumentally bad. The new control problem is that there might be certain types of AI technologies – in particular, AI technologies in the form of lifelike humanoid robots – for which there is something problematic, at least from a symbolic point of view, about wanting to control them completely. The reason is that such robots might be seen as symbolizing human persons, and wanting to control them might therefore be seen as symbolizing something non-instrumentally bad: persons controlling other persons. A more general statement of the new control problem is that it is the problem of describing under what circumstances having complete control over AI technologies is unambiguously good from an ethical point of view. This paper sketches an answer by also discussing AI technologies that do not take the form of humanoid robots and over which control can be conceptualized as a form of extended self-control.
Every day we see news about advances in AI and its societal impact. AI is changing the way we work, live, and solve challenges, but concerns about fairness, transparency, and privacy are also growing. Ensuring AI ethics is about more than designing systems whose results can be trusted. It is about the way we design them, why we design them, and who is involved in designing them. In order to develop and use AI responsibly, we need to work towards technical, societal, institutional, and legal methods and tools that provide concrete support to AI practitioners, as well as awareness and training that enable the participation of all, to ensure the alignment of AI systems with our societies’ principles and values.
The nexus of advances in robotics, natural language understanding (NLU), and machine learning has created opportunities for personalized robots in everyday domains: workplaces, schools, healthcare contexts, and homes. At the same time, the current pandemic has both caused and exposed unprecedented levels of need in health and wellness, education, and training worldwide. Socially assistive robotics has the potential to contribute significantly to addressing those challenges without amplifying concerns about the future of work. This talk will discuss HRI methods for socially assistive robotics that utilize multi-modal interaction data and expressive and persuasive robot behavior to monitor, coach, and motivate users to engage in health, wellness, education, and training activities. Methods and results will be presented covering the modeling, learning, and personalization of user motivation, engagement, and coaching for healthy children and adults, stroke patients, Alzheimer’s patients, and children with autism spectrum disorders, in short- and long-term (month+) deployments in schools, therapy centers, and homes. Finally, implications for human work, care, and purpose will be discussed.
In this paper the role of robots in institutional settings is considered and, in particular, the possibility of robots occupying institutional roles. It is argued that robots are not rational agents and, therefore, cannot choose their ultimate ends, including the ultimate collective ends of institutions. Moreover, robots are not moral agents and cannot exercise the moral judgments, including discretionary moral judgments, required of institutional role occupants. Rather, robots can only be organisational role occupants, performing a restricted range of specialised tasks that do not require moral judgments, and doing so in circumscribed domains under the tight control of human beings. Robots in institutional roles are, or ought to be, technological instruments under the control of human agents in the service of the collective goods definitive of the institutions in question.
This talk will address some key decisional issues that arise for a cognitive and interactive robot which shares space and tasks with humans. We adopt a constructive approach based on the identification and effective implementation of individual and collaborative skills. The system is comprehensive in that it aims to deal with a complete set of abilities, articulated so that the robot controller is effectively able to conduct, in a flexible and fluent manner, a human-robot joint action understood as collaborative problem solving and task achievement. These abilities include geometric reasoning and situation assessment based essentially on perspective-taking and affordances; management and exploitation of each agent’s (human and robot) knowledge in a separate cognitive model; human-aware task planning; and interleaved execution of shared plans. We will also discuss the key issues linked to the pertinence and acceptability of the robot’s behaviour to the human, and how these qualitatively influence the robot’s decisional, planning, control, and communication processes.
So-called “killer robots”, i.e., lethal autonomous weapon systems (LAWS) capable of selecting and attacking military targets without human intervention or control, are a particularly controversial topic in machine ethics. Many prominent AI researchers and scientists are calling for a ban on these devices. The paper will discuss three ethical objections to LAWS: (1) the argument from the responsibility gap (Sparrow), (2) the argument from human agency (Leveringhaus), and (3) the argument from moral duty (Misselhorn). These three arguments raise fundamental ethical concerns about LAWS. They are supposed to show not just that lethal autonomous weapon systems would have morally bad consequences, but that the use of killer robots is morally wrong in itself.
Substituting machines for humans in elderly care and companionship sounds nonsensical in the best of cases and like an absolute lack of humanity in the worst. However, the advancements of robotics offer novel ways to understand what machines could do for the elderly, and perhaps it is time to rethink our established assumption that the elderly can find no friends or companions in social robots. This contribution argues that the social acceptance and integration of social robots into elderly people’s lives could be supported by what the author calls “a right to robot” that every elderly person should have. This right to robot could also be our way of complying with our duty to the older generation to respect and ensure its autonomy and freedom for as long as possible, and to provide this generation with the most advanced and novel ways to live one’s life independently and in line with individual preferences.
Research and development of robots for health and elder care is guided by political, economic, and technological visions that imagine social robots as a way to address the future care crisis caused by demographic change. Within that vision, robots were originally imagined as substitutes or assistants for caregivers, thereby increasing efficiency and mitigating the lack of personnel. On a practical level, however, this overarching image of robots performing genuine care is neither technologically feasible nor aligned with practices of good care. Yet although this image has been more or less abandoned in professional discourse, the expectation that the use of robots will somehow address the care crisis remains. This discrepancy creates special problems for embedded ethicists. We present findings from interviews and workshops with care workers that question the vision of robots as a technological solution for a societal challenge. In conclusion, this development seems to severely limit social human-robot interaction to a merely instrumental role.
In this work I discuss the possibilities and limitations of two objective-list approaches to human dignity for assessing the impact of the use of carebots in aged-care facilities, focusing on their application in the context of the German legal framework and health care regime. Both the capabilities approach in Martha Nussbaum’s interpretation (CA) and a definition of human dignity from the perspective of the psychology of shame (SD) have been proposed as ethical frameworks for improving the quality of care with the overarching goal of preserving human dignity. I will first demonstrate that the CA conflicts with German law with respect to patients in a persistent vegetative state and – due to its definition of the human being based on abilities and active striving – runs the risk of not recognizing a large majority of people in need of care as human beings and thus as bearers of dignity. With respect to the SD, the robot introduces a new kind of shame – Promethean shame – into the complex network of stakeholder relationships, which the SD cannot grasp in its current definition.
Companion-type social robots are expected to support elders socially and emotionally. Whilst studies show promising results, ethical concerns have also been raised. Yet there are only a few studies that investigate ethical issues empirically. The current study investigates, through an ethical lens, elders’ expectations about the companion-type robot Pleo and how far these were fulfilled after prolonged interaction, thereby also targeting differences between elders living independently and with assistance. In the study, N = 33 elders living with and without assistance, in the community or in nursing homes, interacted with the robot dinosaur Pleo as it suited them in their home environment for two weeks. Expectations regarding (a) the robot’s capabilities and (b) the robot’s impact on elders’ lives were assessed beforehand by means of open-ended interview questions. After two weeks, elders rated the fulfillment of their individual expectations on a 7-point scale. Overall experiences, as recorded in interviews after the interaction period, were also evaluated. The results show that elders expected the robot to behave almost like a living being. Whilst participants overall expected some therapeutic effects, elders living with assistance anticipated deriving fun and enjoyment from the robot. Negative effects such as undue responsibility, fading of enjoyment, or anger and frustration were not uncommon. The results are discussed in the light of their ethical implications.
Elder-care robots have been suggested as a solution to rising elder-care needs. Although many elder-care agents are commercially available, there are concerns about the behaviour of these robots in ethically charged situations; indeed, we do not find any evidence of ethical reasoning in commercial offerings. Assuming that this is due to the lack of agreed-upon standards, we offer a set of ethical ‘whetstones’ on which such robots can hone their abilities. We believe that this will help to build ethically sensitive elder-care robots, and also to understand a robot’s behaviour before making it part of an elder-care organisation.
The development of social robots for medicine is an important area of robotics. It is possible that in the future robots will become able to (partly) replace physicians. Several authors think robots ought not to replace physicians because they cannot be empathic, and empathy is necessary for good care. In this paper, I show that although widely accepted, this argument rests on two questionable assumptions. The first is that because empathy is highly beneficial to care, it is necessary for good care. The second is that because empathy is necessary for good care performed by humans, it is also necessary for good care performed by robots. I discuss these two assumptions and show that the empathy-based argument against the use of social robots in medicine is not as convincing as we might originally have thought. I conclude that we need to explore further what good care is and the role that empathy plays in it.
The present study explores the question of intercultural differences in attitudes towards robotic assistance for care work, using the concept of “norms of care” as a tertium comparationis for systematic comparative analysis. The main hypothesis is that normative expectations of what constitutes appropriate care work affect the way professional caregivers interpret the offers of technological assistance from care robotics. Based on this concept, we have empirically investigated how the reception of assistive robot technology in nursing care takes place and which phenomena emerge as significant intercultural differences. For this purpose, we conducted HRI experiments with robotic arms designed for tele-manipulated assistance with nursing tasks at the bedside, followed by individual interviews with the test subjects. Our comparative analysis shows significant intercultural differences in the framing of the norm of care: while the German caregivers focus on whether the sphere of the interpersonal trust relationship can be maintained if care services are robotized, and problematize the potential threat to the dignity of individuals, the non-German caregivers generally follow a collective principle of care relationships and apply it as a central criterion for the moral evaluation of technology use.
Social robots and chatbots are becoming increasingly significant for clinical applications in mental health services. As a means of mental health enhancement, robots can be used as counselors to help users make better decisions and improve their wellbeing. In this context, the ethical aspects of applying both robots and chatbots as mental healthcare assistants are crucially important. In this paper, we discuss some ethical challenges associated with counselor bots and chatbots by considering the most important ethical principles of counseling and psychotherapy, such as autonomy, confidentiality, intimacy, responsibility, and reciprocity. We explore how these ethical values are put at risk when bots are used in mental healthcare practices and draw attention to the need to adjust and reset boundaries and ethical rules in the design of human-bot interaction.
As societies across the developed world deal with the problems associated with aging populations, a promising solution has emerged in the form of robotics technologies that support elderly people in their daily healthcare. However, emerging technologies are like a double-edged sword. Although healthcare robots can provide elderly and disabled people with different levels of assistive support, e.g., by monitoring their health in real time for prompt intervention or by communicating with people to reduce their anxiety, they also bring with them many concerns from an ethical, legal, and societal perspective. Among these, one serious issue is privacy and data protection. When healthcare robots are powered by machine learning and distributed databases, “data-driven” networked healthcare robots will be able to gather a huge amount of personal data in physical environments through their interactions with humans. There are several alternative approaches to data protection for “data-driven” networked healthcare robots, including privacy by design, de-identification of data, and informed consent. In this article my focus is on the issue of informed consent in human-robot interaction. My argument is that the specific conditions of intelligent robots (i.e., embodiment) mean that the principle of informed consent cannot simply be copied and applied to “data-driven” networked healthcare robots. I will compare the two types of informed consent to clarify the targeted notion of “informed consent in human-robot interaction”. Furthermore, there is a need to discuss the potential legal conflicts of this new type of informed consent when it is applied in different countries and their respective legal regimes. Hence, in this article I will conduct a comparative legal analysis of European, American, and Japanese data protection law to investigate how such differences might influence the implementation of informed consent for data-driven healthcare robots.
There are two main goals I set out to achieve here. On the one hand, I intend to delve deeper into the exploration of the sense of touch, which has been largely ignored by philosophers, including empirically minded ones. On the other hand, my claim, albeit not decisively conclusive, is that there are important limits to the implementation of social robots in care settings such as long-term care facilities, based on interactive limitations. The limitations in the interactions between human subjects (both care recipients and caregivers) and social robots are most visible where touch is involved, hence my preoccupation with it.
This paper presents the findings of an exploratory, qualitative case study of dental professionals’ and care clients’ experiences of and expectations for robot accent. Our research focuses on humanoid, social, co-located robots advising clients in standard Sweden-Swedish on preventive dental self-care. As steps in a two-year co-creation project, we performed interviews within the linguistic context of Finland’s minority Finland-Swedish population. Our aim was to explore stakeholders’ experiences of and expectations for robot accents in a care context. Thematic analysis revealed six main themes: expressions of, benefits of, and barriers to robot accent as a conveyor of the message; robot accent as support; robot accent as disturbance; and, lastly, the legitimacy of robot accents. The paper demonstrates the central role of robot accent in experiences of HRI in a Finland-Swedish setting, manifestations of in- and out-groupness, and the multifaceted expectations for robot accent recognition and speech.
This article is devoted to the question of how robots are used in policing and what opportunities and risks arise in social terms. It begins by briefly explaining the characteristics of modern police work. It then relates service robots and social robots to each other and outlines the relevant disciplines. The article also lists the types of robots that are, or could be, relevant in the present context. It then gives examples from different countries of the use of robots in police work and security services. From these, it derives the central tasks of robots in this area and their most important technical features. A discussion from social, ethical, and technical perspectives seeks to clarify how robots are changing the police as a social institution, along with its social actions and relationships, and what challenges need to be addressed.
The paper argues that we should grant negative rights to humanoid robots. These are rights that relate to non-interference, e.g., freedom from violence or freedom from discrimination. Doing so will prevent moral degradation in our human society. The consideration of robot moral status has seen a progression towards the consideration of robot rights. This is a controversial debate, with many scholars seeing the question of robot rights in black and white; it is, however, valuable to take a nuanced approach. This paper highlights the value of such an approach by arguing that we should consider negative rights for humanoid robots. Whereas much of the discussion about robot rights centres on the possibility of robot consciousness, which would warrant protecting robots by rights for their own moral sake, this paper takes a human-centred approach. It argues that we should grant negative rights to humanoid robots at least for the sake of human beings, and not necessarily only for the sake of robots. This is because, given the human-likeness of humanoid robots, we relate to them in a human-like way. Should we, in the context of these relations, treat these robots immorally, there is a concern that we may damage our own moral fibre or, more collectively, society’s moral fibre. Thus, inhibiting the immoral treatment of robots protects the moral fibre of society, thereby preventing moral degradation in our human society.
How do we develop artificial intelligence (AI) systems that adhere to the norms and values of our human practices? Is it a promising idea to develop systems based on the principles of normative frameworks such as consequentialism, deontology, or virtue ethics? According to many researchers in machine ethics – a subfield exploring the prospects of constructing moral machines – the answer is yes. In this paper, I challenge this methodological strategy by exploring the difference between normative ethics – its use and abuse – in human practices and in the context of machines. First, I discuss the purpose of normative theory in human contexts, including its main strengths and drawbacks. I then describe several moral resources central to the success of normative ethics in human practices. I argue that machines, currently and in the foreseeable future, lack the resources needed to justify the very use of normative theory. Instead, I propose that machine ethicists should pay closer attention to the multifaceted ways normativity serves and functions in human practices, and to how artificial systems can be designed and deployed to foster the moral resources that allow such practices to prosper.
This paper seeks to bring a new social perspective to the concept of robot literacy in a second language (L2) learning context. We studied the kind of representations that learners created of a robot after it was introduced as a new peer learner and integrated into teaching and learning practices. We focused on the development of long-term relationships to shed light on how representations of the robot changed during a school term. In addition, we analysed how learners acquired interaction skills with the robot, contributing to the social aspect of HRI. The robot, both as a means of learning and an object of learning, thus contributes a new perspective to robot literacy. The semester-long pedagogical experiment resulted in a new learning circle. Therefore, we advocate a new interpretation of sociality in the classroom based on this new digital dimension.
Against the background of recent concerns regarding online education in times of pandemic, and of a growing pedagogical divide in terms of unequal access to skilled teachers, we consider it timely to open a debate on the use of social robots in education in a role that is anchored in the institution of pedagogues in Antiquity and has been somewhat left aside in contemporary inquiry: the pedagogical role of supporting and complementing the teaching activity. We develop our conceptual philosophical contribution to this debate around the following question: is the use of social robots in primary and lower secondary education an intervention that can contribute positively to bridging the pedagogical divide? We offer a moderately positive answer to this question within the normative framework of Aristotelian virtue ethics. Namely, we argue that social robots in the form of collaborative robots (cobots) can be co-designed as pedagogical enabling devices that support children in acquiring the intellectual virtues necessary in the educational process, and thus contribute to solving part of the pedagogical divide.