We study models of coordination, negotiation and collaboration in multi-agent systems (MAS). More specifically, we investigate scalable models and protocols for various distributed consensus and coordination problems in large-scale MAS. Examples of such problems include conflict avoidance, leader election and coalition formation. We are particularly interested in application domains where robotic or unmanned vehicle agents interact with each other in real time as they try to jointly complete various tasks in complex, dynamic environments, and where decisions often need to be made “on the fly”. Such MAS applications, we argue, necessitate a multi-tiered approach to learning how to coordinate effectively. One such collaborative MAS application domain is ensembles of autonomous micro unmanned aerial vehicles (micro-UAVs). A large ensemble of micro-UAVs on a complex, multi-stage mission comprising many diverse tasks with varying time and other resource requirements provides an excellent setting for studying multi-tiered learning of effective coordination. The variety of tasks and their resource demands, the complexity and unpredictability of the overall environment, the types of coordination problems that the UAVs may encounter in the course of their mission, the multiple time scales at which the overall system can use learning and adaptation in order to perform better in the future, and the multiple logical and organizational levels at which large ensembles of micro-UAVs can be analyzed and optimized all suggest the need for a multi-tiered approach to learning. We outline a theoretical and conceptual framework that integrates reinforcement learning and meta-learning, and discuss the potential benefits that this framework could provide in enabling autonomous micro-UAVs (and other types of autonomous vehicles) to coordinate more effectively.
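To make the reinforcement-learning tier of such a framework concrete, the sketch below shows one micro-UAV learning, via stateless Q-learning, to avoid a communication-channel conflict with a peer that (purely for simplicity) always transmits on channel 0. This is a minimal illustrative toy under assumed parameters, not the framework described in the paper; the names `peer_channel`, `N_CHANNELS`, and the learning-rate and exploration constants are all hypothetical.

```python
import random

# Illustrative assumption: 3 shared channels; the peer always uses channel 0.
N_CHANNELS = 3
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 500

def peer_channel() -> int:
    # Fixed peer behavior; a full multi-agent version would make this a learner too.
    return 0

def train(seed: int = 0) -> list:
    rng = random.Random(seed)
    q = [0.0] * N_CHANNELS  # one Q-value per channel (stateless task)
    for _ in range(EPISODES):
        # epsilon-greedy channel choice
        if rng.random() < EPSILON:
            a = rng.randrange(N_CHANNELS)
        else:
            a = max(range(N_CHANNELS), key=q.__getitem__)
        # reward 1 for a conflict-free transmission, 0 on a collision
        r = 0.0 if a == peer_channel() else 1.0
        q[a] += ALPHA * (r - q[a])  # incremental update toward observed reward
    return q

q = train()
best = max(range(N_CHANNELS), key=q.__getitem__)
```

In a multi-tiered setup, a meta-learning layer could then adjust quantities such as `ALPHA` and `EPSILON` across missions, while each vehicle runs a learner of this kind at the tactical level.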