

To operate successfully in the human social world, robots need to collect and represent information not only about the physical environment, but also about other agents and their beliefs and goals, about groups of agents with shared beliefs and goals, and about institutional structures such as social norms, rules, and conventions. The workshop studies what kinds of robot architectures and knowledge representation and reasoning mechanisms are needed to deal with such complexity. Will it suffice to work within the conceptual paradigm of intentional agency and BDI-style approaches that are based on the conceptual primitives of beliefs, desires, and intentions, or will it be necessary to add new primitive concepts, such as obligations or commitments, or group-based concepts, such as collective beliefs or we-intentions? Or are there viable alternatives to the intentional agency paradigm that would be sophisticated enough to enable robots to operate in complex social environments without recourse to the attribution of intentional states altogether? These are some of the questions that the workshop aims to tackle.