Princeton University users: to view a senior thesis while away from campus, connect to the campus network via the Global Protect virtual private network (VPN). Unaffiliated researchers: please note that requests for copies are handled manually by staff and require time to process.
 

Publication:

Learning Cooperative and Scalable Behavior for Decentralized Drone Swarms in Adversarial Environments


Files

Chang_David.pdf (834.3 KB)

Date

2025-04-14

Abstract

Coordinating autonomous drone swarms in decentralized environments presents a significant challenge, especially when designing strategies that scale effectively. In this thesis, we propose a graph-based multi-agent reinforcement learning (MARL) framework that enables drone swarms to autonomously learn cooperative interception behaviors in pursuit-evasion scenarios. Our approach employs graph neural networks (GNNs) to enforce permutation invariance, accommodate varying team sizes, and support decentralized decision-making under limited observability. Each agent operates without access to global state information, relying solely on local observations and limited-range communication. We begin by outlining the relevant background, then detail our proposed methodology, and finally evaluate the model in benchmark pursuit-evasion scenarios, demonstrating generalization to unseen team sizes and the emergence of decentralized cooperative strategies.
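The abstract does not specify the thesis's exact network architecture, but the core idea it names (permutation invariance across agents via GNN message passing over a limited-range communication graph) can be illustrated with a minimal sketch. All names below (`gnn_layer`, the mean aggregator, the weight shapes) are illustrative assumptions, not the author's implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gnn_layer(features, adjacency, W_self, W_msg):
    """One permutation-equivariant message-passing step (illustrative sketch).

    Each agent combines its own local observation with the *mean* of its
    communication neighbors' features. Because the mean is symmetric in
    neighbor order, relabeling agents simply permutes the output rows,
    and the same weights work for any team size.
    """
    deg = adjacency.sum(axis=1, keepdims=True)
    deg = np.where(deg > 0, deg, 1.0)             # isolated agents: avoid divide-by-zero
    neighbor_mean = (adjacency @ features) / deg  # mean over in-range neighbors
    return relu(features @ W_self + neighbor_mean @ W_msg)

# Toy setup: 4 agents, 3-dim local observations, limited-range comms graph
rng = np.random.default_rng(0)
obs = rng.normal(size=(4, 3))
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
W_self = rng.normal(size=(3, 8))
W_msg = rng.normal(size=(3, 8))

out = gnn_layer(obs, adj, W_self, W_msg)

# Equivariance check: permuting the agents permutes the outputs identically
perm = np.array([2, 0, 3, 1])
out_perm = gnn_layer(obs[perm], adj[np.ix_(perm, perm)], W_self, W_msg)
assert np.allclose(out_perm, out[perm])
```

In a decentralized deployment, each drone would evaluate only its own row of this computation using messages received from in-range neighbors, which is what allows the policy to run without global state.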
