ICUAS 2021 Paper Abstract

Paper WeA2.3

Sadhu, Arup Kumar (Tata Consultancy Services Ltd), Shukla, Shubham (Tata Consultancy Services Ltd), Sortee, Sarvesh (Tata Consultancy Services Ltd), Ludhiyani, Mohit (Tata Consultancy Services Ltd), Dasgupta, Ranjan (Tata Consultancy Services Ltd)

Simultaneous Learning and Planning Using Rapidly Exploring Random Tree* and Reinforcement Learning

Scheduled for presentation during the Regular Session "Learning Methods I" (WeA2), Wednesday, June 16, 2021, 11:10−11:30, Kozani

2021 International Conference on Unmanned Aircraft Systems (ICUAS), June 15-18, 2021, Athens, Greece

This information is tentative and subject to change. Compiled on April 26, 2024

Keywords: Path Planning

Abstract

The paper proposes an approach to learn and plan simultaneously in a partially known environment. The proposed framework exploits the Voronoi-bias property of Rapidly-exploring Random Tree* (RRT*), which balances exploration and exploitation in Reinforcement Learning (RL). RL is employed to learn a policy (a sequence of actions) while RRT* plans simultaneously. Once a policy is learned for a fixed start and goal, repeated planning for the identical start and goal can be avoided. In case of environmental uncertainties, RL dynamically adapts the learned policy with the help of RRT*. In effect, learning and planning complement each other to handle environmental uncertainties dynamically, in real time and online. Interestingly, the longer the proposed algorithm runs, the more its efficiency increases over the contender algorithm (RRT*) in terms of planning time and uncertainty-handling capability. Simulation results demonstrate the efficacy of the proposed approach.
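The coupling the abstract describes can be illustrated with a minimal sketch, assuming a toy setting: tabular Q-learning on a small grid in which the exploratory action steers toward a uniformly sampled free-space point, mimicking the Voronoi bias of RRT*'s extension step. All names, rewards, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): Q-learning whose exploration
# step borrows RRT*-style uniform sampling, so exploration is biased toward
# regions the way RRT*'s Voronoi bias is. Parameters are assumptions.
import random

ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down
SIZE, START, GOAL = 5, (0, 0), (4, 4)

def step(state, action):
    """Apply an action, clipping to the grid (stand-in for collision checks)."""
    return (min(max(state[0] + action[0], 0), SIZE - 1),
            min(max(state[1] + action[1], 0), SIZE - 1))

def voronoi_biased_action(state):
    """Choose the action that moves closest to a uniform random sample,
    analogous to RRT* steering toward samples in unexplored regions."""
    sx, sy = random.randrange(SIZE), random.randrange(SIZE)
    return min(ACTIONS, key=lambda a: (step(state, a)[0] - sx) ** 2 +
                                      (step(state, a)[1] - sy) ** 2)

def train(episodes=800, alpha=0.5, gamma=0.9, eps=0.3, seed=1):
    """Tabular Q-learning whose exploratory moves come from RRT*-style sampling."""
    random.seed(seed)
    q = {}
    for _ in range(episodes):
        s = START
        for _ in range(50):
            if random.random() < eps:
                a = voronoi_biased_action(s)  # explore via sampling
            else:
                a = max(ACTIONS, key=lambda b: q.get((s, b), 0.0))  # exploit
            s2 = step(s, a)
            r = 1.0 if s2 == GOAL else -0.01  # goal reward, small step cost
            target = r if s2 == GOAL else r + gamma * max(
                q.get((s2, b), 0.0) for b in ACTIONS)
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            if s2 == GOAL:
                break
            s = s2
    return q

def rollout(q):
    """Greedy rollout: once the policy is learned for this start-goal pair,
    no replanning is needed, matching the reuse the abstract describes."""
    s, path = START, [START]
    for _ in range(50):
        if s == GOAL:
            break
        s = step(s, max(ACTIONS, key=lambda b: q.get((s, b), 0.0)))
        path.append(s)
    return path
```

In this toy version, environmental changes would be handled by resuming `train` from the stored Q-table rather than planning from scratch, which is the spirit of the claimed efficiency gain over re-running RRT*.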

All Content © PaperCept, Inc.
