ICUAS 2020 Paper Abstract


Paper ThC3.1

Yang, Chenhao (University of Tübingen), Liu, Yuyi (Kyoto University), Zell, Andreas (University of Tübingen)

RCPNet: Deep-Learning Based Relative Camera Pose Estimation for UAVs

Scheduled for presentation during the Regular Session "UAS Applications III" (ThC3), Thursday, September 3, 2020, 17:00−17:20, Edessa

2020 International Conference on Unmanned Aircraft Systems (ICUAS), September 1-4, 2020 (Postponed from June 9-12, 2020), Athens, Greece

This information is tentative and subject to change. Compiled on September 25, 2020

Keywords: UAS Applications, Navigation, Path Planning


In this paper, we propose a deep neural-network-based regression approach, combined with a 3D structure-based computer vision method, to solve the relative camera pose estimation problem for autonomous navigation of UAVs. Unlike existing learning-based methods that train and test camera pose estimation in the same scene, our method succeeds in estimating relative camera poses across various urban scenes with a single trained model. We also built the Tuebingen Buildings dataset of RGB images collected by a drone in eight urban scenes. The dataset contains over 10,000 images with corresponding 6DoF poses, as well as 300,000 image pairs with their relative translational and rotational information. We evaluate the accuracy of our method both within a single scene and across scenes, using the Cambridge Landmarks dataset and the Tuebingen Buildings dataset, and compare its performance with the existing learning-based pose regression methods PoseNet and RPNet on these two benchmarks.
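As background to the dataset described above, the relative translation and rotation between an image pair can be derived from the two absolute 6DoF poses. The sketch below illustrates this under common assumptions (camera-to-world poses, unit quaternions in `[w, x, y, z]` order); the function names are illustrative and not taken from the paper's code.

```python
import numpy as np

def quat_conj(q):
    """Conjugate of a unit quaternion [w, x, y, z] (its inverse)."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q: v' = q (0, v) q*."""
    qv = np.concatenate([[0.0], np.asarray(v, dtype=float)])
    return quat_mul(quat_mul(q, qv), quat_conj(q))[1:]

def relative_pose(q1, t1, q2, t2):
    """Relative pose of camera 2 with respect to camera 1.

    Assumes (q_i, t_i) are camera-to-world poses; the relative
    rotation and translation are expressed in camera 1's frame.
    """
    q_rel = quat_mul(quat_conj(q1), q2)
    t_rel = quat_rotate(quat_conj(q1), np.asarray(t2) - np.asarray(t1))
    return q_rel, t_rel
```

With camera 1 at the world origin and identity orientation, the relative pose reduces to camera 2's absolute pose, which gives a quick sanity check of the conventions.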


