Whether artificial agents “understand” some activity or idea is a perennial question in the philosophy of AI and robotics. In this paper, I review two ways philosophers have traditionally discussed understanding and show how tensions between these approaches complicate and frustrate the attribution of understanding to today’s artificial agents, such as self-driving cars and generative AI. To move past these tensions, I propose an account of understanding as a participatory activity, that is, as an activity that characteristically involves multiple agents. While this account is perhaps surprising, I argue that it handles the challenges posed by quasi-agents like self-driving cars and LLMs in a way that is intuitive and satisfying from the perspective of common-sense psychology.