

In this paper we show that the increasing integration of social robots into daily life raises concerns about their potential to create emotional dependency. Drawing on findings from the literature in Human-Robot Interaction, Human-Computer Interaction, Internet studies, and Political Economy, we argue that current design and governance paradigms incentivize emotionally dependent relationships between humans and robots. To counteract this, we introduce Interaction Minimalism, a design philosophy that minimizes unnecessary human-robot interactions and instead promotes human-human relationships, thereby mitigating the risk of emotional dependency. By focusing on functionality without fostering dependency, this approach encourages autonomy, enhances human-human interaction, and advocates for minimal data extraction. Through hypothetical design examples, we demonstrate the viability of Interaction Minimalism in promoting healthier human-robot relationships. We conclude by discussing the implications of this design philosophy for future robot development, emphasizing the need for a shift towards more ethical practices that prioritize human well-being and privacy.