In centralized multi-agent systems, often modeled as multi-agent partially observable Markov decision processes (MPOMDPs), the action and observation spaces grow exponentially with the number of agents, rendering the value and belief estimation techniques of single-agent online planning ineffective. Prior work partially tackles value estimation by exploiting the inherent structure of multi-agent settings via so-called coordination graphs. Additionally, belief estimation has been improved by incorporating the likelihood of observations into the approximation. However, the challenges of value estimation and belief estimation have only been tackled individually, which prevents existing methods from scaling to settings with many agents. Therefore, we address these challenges simultaneously. First, we introduce weighted particle filtering to a sample-based online planner for MPOMDPs. Second, we present a scalable approximation of the belief. Third, we develop novel online planning algorithms for MPOMDPs that exploit the typical locality of agent interactions by operating on a so-called sparse particle filter tree. Our experimental evaluation against several state-of-the-art baselines shows that our methods (1) are competitive in settings with only a few agents and (2) improve over the baselines in the presence of many agents.
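
To make the idea of weighted particle filtering concrete, the sketch below shows one belief-update step in which propagated particles are reweighted by the likelihood of the received joint observation rather than rejected on a mismatch. This is a minimal illustration of the general technique, not the paper's implementation; the simulator interface (sample_transition, observation_likelihood) and the resampling threshold are assumptions made for the example.

import random

def weighted_particle_filter(particles, weights, joint_action, joint_obs,
                             sample_transition, observation_likelihood,
                             resample_threshold=0.5):
    """One belief-update step of a weighted particle filter.

    particles: list of sampled states approximating the current belief.
    weights:   importance weights, one per particle.
    sample_transition(s, a) -> s'          (assumed simulator interface)
    observation_likelihood(o, s', a) -> p  (assumed simulator interface)
    """
    new_particles, new_weights = [], []
    for s, w in zip(particles, weights):
        s_next = sample_transition(s, joint_action)
        # Reweight each propagated particle by the likelihood of the
        # received joint observation instead of discarding mismatches.
        new_particles.append(s_next)
        new_weights.append(
            w * observation_likelihood(joint_obs, s_next, joint_action))

    total = sum(new_weights)
    if total == 0.0:
        # Particle deprivation: fall back to uniform weights.
        new_weights = [1.0 / len(new_particles)] * len(new_particles)
    else:
        new_weights = [w / total for w in new_weights]

    # Resample when the effective sample size drops too low.
    ess = 1.0 / sum(w * w for w in new_weights)
    if ess < resample_threshold * len(new_particles):
        new_particles = random.choices(new_particles, weights=new_weights,
                                       k=len(new_particles))
        new_weights = [1.0 / len(new_particles)] * len(new_particles)
    return new_particles, new_weights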

Citation

  Galesloot, M., Simão, T. D., Junges, S., & Jansen, N. (2024). Factored Online Planning in Many-Agent POMDPs. AAAI.

@inproceedings{Galesloot2024factored,
  author = {Galesloot, Maris and Sim\~{a}o, Thiago D. and Junges, Sebastian and Jansen, Nils},
  title = {Factored Online Planning in Many-Agent POMDPs},
  booktitle = {AAAI},
  year = {2024}
}