REDUAS 2019 Paper Abstract


Paper MoD15T1.3

Choo, Wai Keong (Cranfield University), Shin, Hyo-Sang (Cranfield University), Tsourdos, Antonios (Cranfield University)

Reinforcement Learning for Autonomous Aircraft Avoidance

Scheduled for presentation during the Regular Session "Airspace Control" (MoD15T1), Monday, November 25, 2019, 16:20−16:40, Room T1

2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS), November 25-27, 2019, Cranfield University, Cranfield, UK

This information is tentative and subject to change. Compiled on April 24, 2024

Keywords See-and-avoid Systems, Simulation, Reliability of UAS

Abstract

An effective collision avoidance strategy is crucial for the operation of any unmanned aerial vehicle. To maximise safety and effectiveness, the strategy must select the best action in any given situation. In this paper, the traditional control method is replaced by a Reinforcement Learning (RL) method, the Deep Q-Network (DQN), and the performance of DQN in aerial collision avoidance is investigated. This paper formulates the collision avoidance process as a Markov Decision Process (MDP). DQN is trained in two simulated scenarios to approximate the best policy, which yields the best action for performing the collision avoidance. The first simulation is a head-to-head collision scenario, followed by a head-to-head scenario with an additional crossing aircraft.
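The core idea sketched in the abstract, formulating an encounter as an MDP and learning a Q-function that maps states to avoidance actions, can be illustrated with a toy head-to-head encounter. The sketch below uses tabular Q-learning rather than the paper's neural DQN, since the underlying temporal-difference Q-update is the same; all states, actions, rewards, and numbers here are illustrative assumptions, not taken from the paper.

```python
import random

# Toy head-to-head encounter MDP (illustrative, not the paper's model).
# State: (own altitude, distance to intruder). The intruder flies level at
# altitude 0 and closes 1 unit per step. Actions: descend, hold, climb.
ACTIONS = (-1, 0, +1)
START_DIST = 5

def step(alt, dist, action):
    """One MDP transition: apply the altitude command, close the range."""
    alt += ACTIONS[action]
    dist -= 1
    done = dist == 0
    reward = -0.1 * abs(ACTIONS[action])   # small cost for manoeuvring
    if done and abs(alt) < 2:
        reward -= 10.0                     # loss of separation at the crossing
    return alt, dist, reward, done

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Epsilon-greedy TD(0) Q-learning; Q is a tabular stand-in for a DQN."""
    rng = random.Random(seed)
    Q = {}
    q = lambda s: Q.setdefault(s, [0.0, 0.0, 0.0])
    for _ in range(episodes):
        alt, dist = 0, START_DIST
        while dist > 0:
            s = (alt, dist)
            if rng.random() < eps:
                a = rng.randrange(3)                       # explore
            else:
                a = max(range(3), key=lambda i: q(s)[i])   # exploit
            alt, dist, r, done = step(alt, dist, a)
            target = r if done else r + gamma * max(q((alt, dist)))
            q(s)[a] += alpha * (target - q(s)[a])          # Q-update
    return Q

def fly_greedy(Q):
    """Follow the learned policy and return the separation at the crossing."""
    alt, dist = 0, START_DIST
    while dist > 0:
        a = max(range(3), key=lambda i: Q.get((alt, dist), [0, 0, 0])[i])
        alt, dist, _, _ = step(alt, dist, a)
    return abs(alt)
```

After training, the greedy policy climbs (or descends) early enough to pass the intruder with at least 2 units of vertical separation, trading a small manoeuvre cost against the large collision penalty. The paper's DQN replaces the table with a neural network so the same update generalises to continuous, high-dimensional states.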


All Content © PaperCept, Inc.
