Publication: Learning Cooperative and Scalable Behavior for Decentralized Drone Swarms in Adversarial Environments
Abstract
Coordinating autonomous drone swarms in decentralized environments presents a significant challenge, especially when designing strategies that scale effectively. In this thesis, we propose a graph-based multi-agent reinforcement learning (MARL) framework that enables drone swarms to autonomously learn cooperative interception behaviors in pursuit-evasion scenarios. Our approach employs graph neural networks (GNNs) to enforce permutation invariance, accommodate varying team sizes, and support decentralized decision-making under limited observability. Each agent operates without access to global state information, relying solely on local observations and limited-range communication. We first outline the relevant background concepts and then detail the proposed methodology. Evaluations in benchmark scenarios demonstrate generalization to unseen team sizes and the emergence of decentralized cooperative strategies.
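The abstract's claim that GNNs enforce permutation invariance and accommodate varying team sizes can be illustrated with a minimal sketch. The layer below is a generic sum-aggregation message-passing step, not the thesis's actual architecture: the function names, weight shapes, and communication graph are illustrative assumptions. Because each agent aggregates neighbor features with a symmetric sum, reordering the agents only permutes the output rows, and the same weights apply to any swarm size.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gnn_layer(features, adjacency, w_self, w_neigh):
    """One round of permutation-equivariant message passing (illustrative).

    features:  (n_agents, d) local observation embeddings
    adjacency: (n_agents, n_agents) 1 where agents are within comms range
    The sum over neighbors is order-independent, so the mapping commutes
    with any permutation of the agents and works for any team size.
    """
    messages = adjacency @ features  # sum of each agent's neighbor features
    return relu(features @ w_self + messages @ w_neigh)

rng = np.random.default_rng(0)
d = 4
w_self = rng.normal(size=(d, d))
w_neigh = rng.normal(size=(d, d))

# Three agents on a line graph: limited-range comms, no global state.
x = rng.normal(size=(3, d))
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)

out = gnn_layer(x, adj, w_self, w_neigh)

# Permuting the agents (and the comms graph) permutes the outputs identically.
perm = np.array([2, 0, 1])
out_perm = gnn_layer(x[perm], adj[np.ix_(perm, perm)], w_self, w_neigh)
assert np.allclose(out[perm], out_perm)
```

The same layer also runs unchanged on a five-agent swarm, since no dimension depends on the number of agents; this is the property that lets a trained policy generalize to unseen team sizes.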