ICUAS'23 Paper Abstract


Paper ThC5.1

Silveira, Jefferson (Queen's University), Cabral, Kleber (Queen's University), Rabbath, Camille Alain (Defence Research and Development Canada), Givigi, Sidney (Queen's University)

Deep Reinforcement Learning Solution of Reach-Avoid Games with Superior Evader in the Context of Unmanned Aerial Systems

Scheduled for presentation during the Regular Session "UAS Applications V" (ThC5), Thursday, June 8, 2023, 14:00−14:20, Room 466

2023 International Conference on Unmanned Aircraft Systems (ICUAS), June 6-9, 2023, Lazarski University, Warsaw, Poland


Keywords: UAS Applications, Autonomy, Simulation

Abstract

This paper presents a deep reinforcement learning (DRL) approach to solve a reach-avoid problem that commonly arises in air defence systems. The focus of this paper is to improve the defender's ability to pursue a more capable (faster) attacker that is trying to evade the defender while aiming for a target. We propose and analyze the resulting DRL strategy for scenarios with one and two pursuers against one evader, for two types of aircraft, multirotor and fixed-wing, in the two-dimensional plane. During training, the pursuers face a faster evader that executes a saddle-point optimal strategy obtained by analytically solving the problem as a differential game (DG). We compare the win rate of the DRL policy with that of pursuers using the DG strategy against the faster evader. Even though the DG strategy is optimal when both aircraft have the same speed, its performance quickly deteriorates from the pursuer's perspective when the evader is faster. In contrast, the learned policy degrades more slowly, yielding higher win rates than the DG strategy in many scenarios with faster evaders.
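
To make the win-rate comparison concrete, the sketch below simulates a toy 2D reach-avoid scenario in which a single pursuer tries to intercept a faster evader before the evader reaches a fixed target. The geometry, the evader's straight-to-target behaviour, and the pure-pursuit defender are illustrative assumptions only; they stand in for the paper's DG and DRL strategies, which are not reproduced here.

import numpy as np

def simulate(evader_speed=1.2, pursuer_speed=1.0, capture_radius=0.2,
             target=(0.0, 0.0), target_radius=0.2,
             dt=0.05, max_steps=2000, rng=None):
    """Run one toy reach-avoid episode; return which side wins."""
    rng = rng if rng is not None else np.random.default_rng()
    target = np.asarray(target, dtype=float)
    # Evader starts well above the target, defender starts near it.
    evader = rng.uniform(-5.0, 5.0, size=2) + np.array([0.0, 8.0])
    pursuer = rng.uniform(-2.0, 2.0, size=2)
    for _ in range(max_steps):
        if np.linalg.norm(evader - pursuer) < capture_radius:
            return "capture"          # defender (pursuer) wins
        if np.linalg.norm(evader - target) < target_radius:
            return "target_reached"   # attacker (evader) wins
        # Evader: naive attacker heading straight for the target.
        to_target = target - evader
        evader = evader + evader_speed * dt * to_target / np.linalg.norm(to_target)
        # Pursuer: pure pursuit toward the evader's current position.
        to_evader = evader - pursuer
        pursuer = pursuer + pursuer_speed * dt * to_evader / np.linalg.norm(to_evader)
    return "timeout"

def pursuer_win_rate(n_trials=500, **kwargs):
    """Fraction of episodes in which the defender captures the attacker."""
    rng = np.random.default_rng(0)
    wins = sum(simulate(rng=rng, **kwargs) == "capture" for _ in range(n_trials))
    return wins / n_trials

if __name__ == "__main__":
    # Sweep the evader/pursuer speed ratio (illustrative values only).
    for ratio in (1.0, 1.1, 1.2, 1.3):
        rate = pursuer_win_rate(evader_speed=ratio, pursuer_speed=1.0)
        print(f"speed ratio {ratio:.1f}: pursuer win rate {rate:.2f}")

With these stand-in policies, one would expect the defender's win rate to fall as the speed ratio grows, which is the qualitative trend the abstract describes for the DG baseline and the degradation the learned DRL policy is meant to slow.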

