In recent work, we showed that stable formations can emerge through negotiation in a lower-dimensional latent space (which we called geometric embeddings). Appropriately constructed embeddings yield globally stable equilibria based solely on local observations and decisions. We extend this work by applying learning techniques to optimize the geometry of the swarm along the resultant equilibrium manifold.

Specifically, we apply reinforcement learning (RL) to control the orientation of the swarm. As a proof of concept, we implemented this idea using Continuous Action Learning Automata (CALA) to learn the optimal orientation (azimuth and altitude angles) of the embedding plane. In this implementation, collective learning is coordinated by a randomly selected leader agent.
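The core of a CALA-style learner is a Gaussian action distribution per parameter whose mean and spread are nudged by a scalar reward. A minimal sketch of that update loop, applied to a two-parameter (azimuth, altitude) search, might look like the following. Note the reward function, learning rate, and target angles here are illustrative assumptions for a toy problem, not the actual swarm objective or the authors' implementation:

```python
import random

def cala_optimize(reward, mu, sigma, *, lr=0.05, sigma_min=0.05, steps=2000, seed=0):
    """CALA-style stochastic search over continuous parameters.

    Each parameter keeps a Gaussian action distribution N(mu, sigma).
    An action is sampled, scored, and the mean is pulled toward actions
    that beat the reward at the current mean; exploration noise adapts
    alongside. Hypothetical toy version for illustration.
    """
    rng = random.Random(seed)
    mu = list(mu)
    sigma = list(sigma)
    for _ in range(steps):
        action = [rng.gauss(m, max(s, sigma_min)) for m, s in zip(mu, sigma)]
        beta = reward(action)   # reinforcement for the sampled action
        beta_ref = reward(mu)   # reference: reward at the current mean
        for i in range(len(mu)):
            s = max(sigma[i], sigma_min)
            # Pull the mean toward the sampled action in proportion to
            # its reward improvement over the reference point.
            mu[i] += lr * (beta - beta_ref) * (action[i] - mu[i]) / s
            # Grow exploration when better actions lie far from the
            # mean; shrink it as the search converges.
            sigma[i] += lr * (beta - beta_ref) * ((action[i] - mu[i]) ** 2 / s - s)
            sigma[i] = max(sigma[i], sigma_min)
    return mu

# Toy reward (assumed): best plane orientation at azimuth 1.0 rad,
# altitude 0.5 rad, with reward falling off quadratically.
def toy_reward(angles):
    az, alt = angles
    return -((az - 1.0) ** 2 + (alt - 0.5) ** 2)

best = cala_optimize(toy_reward, mu=[0.0, 0.0], sigma=[0.5, 0.5])
```

In the leader-coordinated setting described above, one would expect the leader to run updates of this form while broadcasting the current mean orientation to the rest of the swarm.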

Updates on progress will be posted here.

This will also be the topic of an upcoming talk at Ingenuity Labs Research Institute. Time/location TBD.