Ebook: Social Robots with AI: Prospects, Risks, and Responsible Methods
The novel capacities of multimodal generative AI have suddenly brought us much closer to the longstanding vision of ubiquitous social robotics. Robots may soon become part of everyday life, performing many services as well as or better than humans. We have entered a decisive phase in the robotic moment of human cultural history, and it is more urgent than ever that we determine “who we are and who we are willing to become” (Sherry Turkle).
This book presents the proceedings of RP2024, the sixth event in the biennial Robophilosophy Conference Series, held from 20 to 23 August 2024 in Aarhus, Denmark. Robophilosophy conferences are the world’s largest events for fully interdisciplinary social-robotics research, featuring contributions from the humanities and social sciences as well as from HRI, robotics, AI research, and cognitive science, together with art events.
RP2024 explored the questions of socio-cultural transformation that can be expected to ensue from the new technological potential of social robotics. As is characteristic of the conferences in this series, RP2024 addressed not only questions of concrete practice, but also the deeper theoretical and existential issues that reach far beyond safety and privacy concerns into the conceptual and normative fabric of our societies and our individual self-comprehension.
The book is divided into 3 parts. Part 1 contains abstracts of the 8 plenary sessions; Part 2 contains 55 session papers divided into 8 sections; and Part 3 contains details of the 10 workshops which formed part of the conference.
The book showcases the way in which technical, empirical, conceptual, and phenomenological research can make a concrete contribution to the necessary collaborative effort, and it will be of interest to all those working in the field.
Robophilosophy 2024 (RP2024) was the sixth event in the biennial Robophilosophy Conference Series, inaugurated ten years ago in 2014 here at Aarhus University. In content, disciplinary scope, spirit, and societal relevance, it proved to be just the kind of intense event for international research exchange that we had hoped for as a celebration of a decade of robophilosophy. In view of this anniversary, we want to present this year’s conference within its temporal context, looking back at the development of the research area as manifested in the events of the RP series, and looking ahead to the decade to come.
There is one factor which – rather surprisingly – has remained constant throughout the past decade. From its outset until today, the RP series has ranked among ‘the world’s largest conferences for humanities research in and on social robotics’. Robophilosophy conferences are typically multi-track events with 80-120 research contributions and 200-400 participants. While large for the humanities, events of this size cannot compare to the enormous annual professional meetings in other disciplines. Given that social robotics has been hailed as the signature technology of the automation age, and given that the humanities arguably play an indispensable role in successful social robotics, it is surprising that the comparatively small events of the RP series are still the largest international events facilitating research exchange in this area. This year’s event followed the pattern of the series in terms of size and organization: RP2024 was a multi-track conference featuring 128 research talks (8 plenaries, 62 research talks in 18 sessions, and 58 workshop talks in 11 workshops, as well as 5 contributions to art and performance) contributed by 174 researchers from 26 countries and four continents, and presented to an audience of close to 300 participants, both in person and online, from all around the world.
Another factor that has consistently characterised Robophilosophy conferences is a unique aspiration to reach for the wide interdisciplinary scope that is required to address the empirical, technical, and political, but especially also the ethical, cultural, and existential questions of the technological revolution that we are currently witnessing. Here we can discern a positive development: the goal of genuine cross-faculty interdisciplinarity is increasingly well realised, as more researchers from the technical sciences are joining. With contributors from 39 different academic disciplines, RP2024 had a greater interdisciplinary scope than previous events; importantly, over a third of the contributors to RP2024 have their professional training in technical and scientific disciplines (social robotics, cognitive robotics, biorobotics, cognitive science, neuroscience, computer science, etc.).
Judging from the statistics of the conference series, this development towards greater interdisciplinarity is supported by a change in the research landscape of the applied humanities themselves: while at RP2014 contributors predominantly had traditional departmental affiliations, at RP2024 most of the contributors from the humanities and social sciences were also members of interdisciplinary research centres or project teams.
In our view, such changes in affiliation go hand in hand with the changes in the research focus of robophilosophy over the past decade. The label ‘robophilosophy’ covers three dimensions of philosophical research; it is defined as ‘philosophy of, for, and by social robotics’. The term ‘philosophy’ in this definition refers to the research activities of philosophers, but at RP conferences the term ‘robophilosophy’ is typically taken in a wider sense, as a stand-in for research in the social sciences and humanities (SSH) in and on social robotics more broadly. The three dimensions (of, for, and by) of research intentions (reflective, pro-active/collaborative, and revisionary) apply to robophilosophy in both its narrow and wider readings, and the following description of the past development of the research landscape of robophilosophy in its narrow sense also has resonances in the development of SSH research in and on social robotics more broadly. (For a more detailed exposition of the three dimensions of robophilosophy, see chapter 1 of Robophilosophy: Philosophy of, for, and by Social Robotics, ed. J. Seibt, R. Hakli, and M. Nørskov, MIT Press, 2025.)
In the early days of robo-ethics (introduced by Gianmarco Veruggio in 2002) and robophilosophy (introduced in 2013), researchers mainly contributed to the dimension of the philosophy of social robotics, offering reflections on the ethical and, more broadly, socio-cultural significance of the vision of social robots, i.e., robots that are designed to be able to move and act in the physical and symbolic spaces of human social interactions. Increasingly, however, and certainly from 2016 onwards, the purely reflective stance of the philosophy of social robotics was first accompanied, and soon largely replaced, by the collaborative stance of philosophy for social robotics. Here the specific methods and theoretical tools of philosophy (and SSH research) are used for the sake of the responsible (culturally sustainable, positive, etc.) development of technology in social robotics. Importantly, however, this second dimension of robophilosophy does not draw on philosophical ethics alone. For example, in 2010 Selma Šabanović (anthropology) was early to promote a collaborative design programme for the ‘mutual shaping of society and technology’, supported by the use of SSH methods (ethnographic research and other methods of qualitative research) during the development of technology in social robotics. Similarly, in 2016, Kerstin Fischer (linguistics) demonstrated the productive use of methods of conversation analysis for human-robot interaction (HRI) research. Moreover, philosophers increasingly applied the analytical methods and concepts of the theoretical disciplines within philosophy (ontology and social ontology in particular, as well as philosophy of mind, phenomenology and social phenomenology, and the philosophy of science and technology) to support the research discourse in HRI. These conceptual tools were offered (i) to increase terminological precision in HRI and social robotics, and/or (ii) to support the interdisciplinary integration of these young multidisciplinary research areas. These resources and methods, produced by the collaborative stance of philosophy (SSH research) for social robotics, have met with a growing response: since RP2020 we have seen a trend towards greater receptiveness in HRI and social-robotics research to the contributions of SSH research, with more direct involvement of researchers from robotics. The demography and content of RP2024 also demonstrate this quite clearly: half of the workshops and close to 50% of the session talks present the results of research collaborations on technology design and development, integrating expertise from the technical sciences and SSH research.
The third dimension of robophilosophy, philosophy by social robotics, has also received more attention since 2020, in tandem with the technological advances in AI research. This metaphilosophical dimension of robophilosophy could currently even be said to be in the foreground, propelled by the impressive achievements of multimodal generative AI systems, as documented by the content of this year’s conference. Philosophy by social robotics means that philosophers and SSH researchers take a self-critical and constructive stance, and investigate how the new type of highly intelligent social robot challenges traditional core assumptions in philosophy and other SSH disciplines about mind, consciousness, agency, autonomy, and emotional intelligence, together with the role of these capacities in social interaction.
Of course, the three dimensions of robophilosophy, the of, for, and by, are always intertwined. At RP2014 (Social Robotics and the Future of Social Relations) we explored the metaphilosophical significance of social robotics, pointing out the clash between the empirical results of HRI and the traditional (Cartesian) model of subjectivity, i.e., the dominant traditional metaphysical division into subjects and objects, which precludes normative competence for social actions in non-conscious agents. In the past decade, genuine progress has been made along the second dimension (philosophy for social robotics) by rejecting the traditional model of subjectivity, both for practical purposes (the issues of robot rights, sentimentalism, nudging) and for the theoretical modelling of human-robot interactions. Likewise, the assault on the traditional idea of sociality as an allegedly exceptional capacity of humans (philosophy by social robotics) has led to productive new accounts of sociality (philosophy for social robotics).
Ten years later, the metaphilosophical challenges reach even deeper into the foundations of our self-comprehension. The achievements of multimodal AI systems, which, particularly when embodied in robots, complete the loop between perception, thought, and action, seem to amount to an empirical proof of behaviourism and functionalism. Several of the short papers collected here explore the limits of what current simulations of mentality can achieve, but also, in the sense of philosophy for social robotics, how a differentiated description of human mental capacities can assist us in gauging more precisely what social robots equipped with AI can and should do.
Looking into the future: which of the three dimensions of robophilosophy will be most important in the years to come? Will the next decade push us robophilosophers (i.e., philosophers and SSH researchers) back to a purely reflective stance where we merely observe, report, and comment on deeply transformative socio-cultural changes? Will politics allow technology companies to proceed independently of socio-cultural expertise and value considerations, driven by monetary gain alone or, even worse, by the TESCREAL ideology of Silicon Valley (https://www.dair-institute.org/tescreal/)? Or will politics or the social-robotics community find their way towards paradigms of R&D processes that operate with multidisciplinary developer teams in which philosophers and SSH researchers can work for successful value-preserving or value-enhancing social-robotics applications? Will leading researchers in robotics who are also protagonists of an ethical, reflective approach and have, throughout the years, supported the RP series, such as Raja Chatila, Alan Winfield, Aurélie Clodic, and Rachid Alami, be able to convince their colleagues of the benefits of integrating SSH expertise into application development? Will people be obliged to adjust to robots, or will we discover ways to create human-robot interactions that preserve and enhance what we value about humanity?
Predictions are impossible in the current climate of unusual political uncertainty. The outlook for the next decade can therefore only take the form of a commitment: to try to uphold all three dimensions of robophilosophical research, and to work, in particular, for the second dimension of pro-active collaboration.
From its very beginning the RP conference series has stood under the banner of Sherry Turkle’s observation that we currently ‘live the robotic moment’ of human cultural history, when ‘we need to decide who we are and who we are willing to become’ (S. Turkle, Alone Together, New York: Basic Books, 2011, p. 26). It seemed to us, during the planning stage of the conference in September 2023, that the robotic moment had reached a decisive phase. Multimodal generative AI systems bring us closer to the longstanding vision of personalising robots and using them everywhere in our lives, both at work and at home. Thus, for the first time, we have somewhat widened the scope of an RP conference to include discussion of AI systems, albeit in relation to their embodiment in robots. Accordingly, many of the research contributions to RP2024 collected here take account, explicitly and implicitly, of this new development, investigating the socio-cultural and ethical implications of social robots with AI systems, as well as of such systems with simulated social and mental capacities.
In this sense, RP2024 has already provided a new perspective from which to consider social robotics into the next decade. There are three further features of RP2024 that are characteristic of the series, and which will, we hope, accompany it into the future. We particularly want to mention them here as they are not documented in the proceedings. First, RP2024 hosted 11 workshops which prompted intense and well-focused discussion. Even where they are supplemented by short papers from the contributors, the descriptions of the workshops collected here do not, and could not, capture the performative productivity of these encounters. Similarly, the conference included five art sessions, and even though two descriptions are included, these artworks will live on in the experiential memory of the audience as performances and experiences. Finally, RP2024 attracted many early-career researchers and newcomers to robophilosophy, while also benefitting from the presence of established international protagonists in the field. This enabled constructive engagement, not only across disciplines but also across academic generations. The friendly, collaborative atmosphere typical of all RP conferences so far, together with a steadily growing community spirit, will serve us well as we move into the next decade. AI systems may surpass us in all we can do, but never in all we can be for each other.
Aarhus, October 2024,
Johanna Seibt, Peter Fazekas, and Oliver Santiago Quick
The so-called symbol-grounding problem (SGP) has long plagued cognitive robotics (and AI). If Rob, a humanoid household robot, is asked to remove and discard the faded rose from among the dozen in the vase, and accedes, does Rob grasp the formulae/data he processed to get the job done? Does he, for instance, really understand the formula inside him that logicizes “There’s exactly one faded rose in the vase”? Some (e.g., Searle, Harnad, Bringsjord) have presented and pressed a negative answer, and have held that the problem of engineering a robot for whom the answer is ‘Yes’ is, or at least may well be, insoluble. This negativity increases if Rob must understand that giving a faded rose to someone as a sign of love might not be socially adept.
We change the landscape by bringing to bear, in a cognitive robot, an unprecedented, intertwined quartet of capacities that make all the difference: namely, (i) social planning; (ii) multi-modal perception; (iii) premeditated attention to guide such perception; and (iv) automated defeasible reasoning about causation. In other words, a genuinely social robot that senses in varied ways under the guidance of how it directs its attention, and that adjudicates among competing arguments about what it perceives, solves the SGP, or at least a version thereof. An exemplar of such a robot is our PERI.2, which we demonstrate in an environment called ‘Logi-Forms’, intelligent navigation of which requires social reasoning.
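To make the division of labour among the four capacities concrete, here is a minimal, purely illustrative sketch of how such a perceive-reason-act loop could be wired together. All names, the toy sensor readings, and the additive scoring rule are hypothetical illustrations, not drawn from the PERI.2 system itself.

```python
# Hypothetical sketch (not the PERI.2 implementation): one way the four
# capacities named in the abstract could interlock in a perceive-reason-act loop.
from dataclasses import dataclass

@dataclass
class Percept:
    modality: str    # e.g. "vision" or "touch"
    claim: str       # the proposition this percept supports
    strength: float  # evidential weight of the percept

def attend(goal: str, available: list[str]) -> list[str]:
    """(iii) Premeditated attention: select the modalities relevant to the goal."""
    relevant = {"vision", "touch"}  # toy relevance judgment for a manipulation task
    return [m for m in available if m in relevant]

def perceive(modalities: list[str]) -> list[Percept]:
    """(ii) Multi-modal perception: stubbed sensor readings."""
    readings = [Percept("vision", "rose_3_is_faded", 0.9),
                Percept("touch", "rose_3_is_faded", 0.6),
                Percept("vision", "rose_7_is_faded", 0.2)]
    return [p for p in readings if p.modality in modalities]

def adjudicate(percepts: list[Percept]) -> str:
    """(iv) Defeasible reasoning: the best-supported claim defeats its rivals."""
    support: dict[str, float] = {}
    for p in percepts:
        support[p.claim] = support.get(p.claim, 0.0) + p.strength
    return max(support, key=support.get)

def socially_plan(belief: str) -> str:
    """(i) Social planning: act on the winning belief while respecting social
    norms, e.g. discard the faded rose rather than offer it as a gift."""
    return f"discard the object named in '{belief}' (and do not gift it)"

if __name__ == "__main__":
    attended = attend("remove the faded rose", ["vision", "touch", "audio"])
    print(socially_plan(adjudicate(perceive(attended))))
```

The point of the sketch is the ordering: attention constrains perception, perception feeds defeasible adjudication, and only the winning belief reaches the social planner.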
This paper considers symbol grounding in its practical and theoretical aspects. Taking up the theoretical perspective, we begin by considering the relative inefficiency of large language models in acquiring language. A framework is introduced based on the concept of morphological computation and formalised with reference to conditional Kolmogorov complexity: the form of embodied experience scaffolds human language acquisition. This argument is extended to the symbol grounding problem, with particular reference to the origin of language in both the individual and the historical sense. It is argued that, while humans also make use of statistical learning, the process of symbol grounding via morphological computation is essential at the origins of language and during early development. It provides a minimal ontology in terms of objects, containers, processes, etc.: basic features which language models must instead brute-force by statistical means. The paper closes by reconsidering the symbol grounding problem in light of recent advances, particularly the promise of multimodal models and robotics, and ultimately concludes that the status of the symbol grounding problem depends upon our aims in the pursuit of artificial intelligence.
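A hedged formal gloss on the abstract’s central claim (in notation that is not necessarily the paper’s own formalism): let $L$ be a minimal description of linguistic competence and $E$ a description of embodied, morphologically structured experience. The scaffolding thesis can then be read as

$$K(L \mid E) \ll K(L),$$

i.e., conditioning on embodied experience drastically shortens the description of language, whereas a disembodied statistical learner must approximate the full cost $K(L)$ from data alone. (That $K(L \mid E) \le K(L) + O(1)$ holds in general is standard for conditional Kolmogorov complexity; the substantive empirical claim is the large gap.)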
The fusion of Large Language Models (LLMs) and robotic systems has led to a transformative paradigm in the robotics field, offering unparalleled capabilities not only in the communication domain but also in skills like multimodal input handling, high-level reasoning, and plan generation. The grounding of LLMs’ knowledge in the empirical world has been considered a crucial pathway to exploiting the efficiency of LLMs in robotics. Nevertheless, connecting LLMs’ representations to the external world with multimodal approaches or with robots’ bodies is not enough to let them understand the meaning of the language they are manipulating. Taking inspiration from humans, this work draws attention to three elements necessary for an agent to grasp and experience the world. The roadmap for LLM grounding is envisaged in an active bodily system as the reference point for experiencing the environment, a temporally structured experience for a coherent, self-related interaction with the external world, and social skills for acquiring a shared experience with common ground.
On the basis of Ned Block’s distinction between cognitive accessibility and phenomenology (previously known as access consciousness and phenomenal consciousness), this paper argues that social robots equipped with mere cognitive accessibility but no phenomenal consciousness could nonetheless be morally competent to engage in moral deliberation and decision-making in scenarios involving moral dilemmas. Inspired by Bertram F. Malle’s advocacy of moral competence, this paper aims to establish the moral competence of social robots without assuming that they have achieved the status of moral agents. Drawing on a survey conducted by the author with the human-in-the-loop methodology, the paper presents sample scenarios involving ethical dilemmas in assisted suicide, truth-telling, rescue operations, and law-enforcement intervention, and argues that social robots with sufficiently constructed cognitive access will have the resources (the ability to cognitively access and evaluate the relevant information and context) to handle these dilemmas in alignment with human values. What is required for social robots to obtain moral competence is not the ability to feel, to empathize, or to know what it is like to be them. It is rather the cognitive architecture of reasoning, information processing, and verbal communication, aided by an appropriate moral framework. The moral framework this paper employs is Confucian virtue ethics.
In this paper, I review the recent debate on the prospect of AI consciousness and assess its ethical relevance. Intuitively, when a being is conscious, this is sufficient to ground its non-derivative moral status. However, the semantic content of ‘consciousness’ is ambiguous, particularly when the term is attributed to non-biotic entities. Standard ethical accounts concerning the moral status of sentient beings presuppose affective or valenced consciousness. Yet, recent speculations about artificial consciousness have largely been driven by considerations of cognitive architecture, often neglecting the affective aspect of subjective experience. Crucially, no persuasive narrative currently exists about how valenced states may emerge in artificial systems. This significantly reduces the ethical relevance of current AI consciousness claims. As ethicists, we are well-advised to treat such claims with care, demanding precision and rigor when consciousness is attributed to artificial systems. This involves developing logically coherent claims about the emergence of artificial valence.
Autonomy is the ability to act, decide, and govern oneself independently. From the individual sphere to social dynamics, autonomy appears as a common thread that guides our choices. Traditionally confined to human beings, the concept has expanded to include Artificial Intelligence (AI) systems. The same terminology is therefore used to designate two different types of autonomy. How can we define and differentiate between them? And, conversely, what brings them together and justifies the use of the same term? In Section 1, we explore the essence of autonomy and its traditional philosophical definitions, as well as its applications in biomedical ethics; we call this natural autonomy. In Section 2, we define artificial autonomy by explaining what an autonomous system is and by giving examples to illustrate our account. Finally, in Section 3, we compare these two forms of autonomy.
Whether artificial agents “understand” some activity or idea is a perennial question in the philosophy of AI and robotics. In this paper, I review two ways philosophers have traditionally discussed understanding, and how tensions between these approaches complicate and frustrate the attribution of understanding to the artificial agents of today, like self-driving cars or generative AI. To move past these tensions, I propose an account of understanding as a participatory activity, that is, as an activity that characteristically involves multiple agents. While this account is perhaps surprising, I argue that it handles the challenges of quasi-agents like self-driving cars and LLMs in an intuitive and satisfying way from the perspective of common-sense psychology.
In this paper we show that, with the increasing integration of social robots into daily life, concerns arise regarding their potential to create emotional dependency. Using findings from the literature in Human-Robot Interaction, Human-Computer Interaction, Internet studies, and Political Economics, we argue that current design and governance paradigms incentivize the creation of emotionally dependent relationships between humans and robots. To counteract this, we introduce Interaction Minimalism, a design philosophy that aims to minimize unnecessary interactions between humans and robots and instead promote human-human relationships, thereby mitigating the risk of emotional dependency. By focusing on functionality without fostering dependency, this approach encourages autonomy, enhances human-human interactions, and advocates for minimal data extraction. Through hypothetical design examples, we demonstrate the viability of Interaction Minimalism in promoting healthier human-robot relationships. Our discussion extends to the implications of this design philosophy for future robot development, emphasizing the need for a shift towards more ethical practices that prioritize human well-being and privacy.
In the following work, we introduce the mathematical and theoretical underpinnings of dynamical systems theory and enactivism and extend them to child-robot interaction. We believe this approach leads to more tangible methods for studying such interactions. Dynamical Systems Theory (DST) is described and applied through Participatory Sense-Making (PSM), an enactive approach to social cognition. While PSM does well to lay out a new level of analysis for social interactions between humans, robots do not fit cleanly within it. We propose here that a child’s perception of a robot as a genuine sense-maker allows the interaction process to be considered a dynamic and coupled system. Perceived sociality is integral to creating and sustaining a meaningfully dynamic interaction, and we call for finer distinctions to that end when studying child-robot interaction. Our proposed spectrum of perceived sociality, informed and grounded by a dynamical systems approach to child development and by theories aimed at socially categorizing a robot interactor, such as sociomorphing, operationalizes the study of child-robot interactions. This approach enhances dynamical and enactive systems methodologies in developmental research and human-robot interaction studies.
In social interactions, interpersonal distance influences relationships, provides protection, and regulates arousal. Despite the intuitive nature of adopting specific distances, little is known about comfortable interpersonal distances with social robots. Here, 66 participants saw individuals standing face-to-face with a robot at different distances and pressed a button upon seeing a woman or a man (in different blocks). In line with the negativity-bias hypothesis, which predicts quicker reaction times to negative stimuli, participants showed a preference for increased distances, reflected in longer reaction times. The human-likeness of the robots moderated the link between distance and arousal: the most human-like robot was less liked and evoked higher arousal. These findings have implications for designing social robots and optimizing interactions, particularly in educational or medical contexts.
I argue that considering interactions with artificial agents in terms of emotionally loaded scripts can contribute to explaining our attribution of emotional states to social robots, as well as our emotional reactions during interactions with them. Moreover, it helps us identify the normative components of such interactions. Evidence suggests that we attribute emotions to artificial agents, and that we experience emotions towards them, despite knowing that they do not experience emotions in the human sense. I propose that these situations activate scripts and schemata that come with expectations about how agents should behave and feel. Scripts contain information about expected emotional reactions, and their activation prescribes both the attribution of emotions and their interpretation in normative ways. I thus suggest that, when interacting with social robots, our behaviors and emotions, as well as our attributions, are highly normatively regulated.