ICUAS 2019 Paper Abstract


Paper ThC2.2

Wu, Hsiang-Huang (Prairie View A&M University)

Real-Time Single Object Detection on the UAV

Scheduled for presentation during the Regular Session "UAS Applications V" (ThC2), Thursday, June 13, 2019, 16:20−16:40, Heritage C

2019 International Conference on Unmanned Aircraft Systems (ICUAS), June 11-14, 2019, Athens, Greece


Keywords: UAS Applications, Training, Autonomy

Abstract

The demand for mission-critical tasks on UAVs, especially tracking, has been increasing due to their superior mobility. Processing large images becomes necessary for object detection and tracking with UAVs, because an object cannot be recognized clearly in a low-resolution image when the UAV flies at high altitude. Consequently, low latency and the lack of internet access under some circumstances become the major challenges. In this paper, we present a CNN modeling method dedicated to single-object detection on a UAV that does not rely on any transfer-learning model. Because it is not limited to the features learned by a transfer-learning model, the single object can be selected arbitrarily and specifically, and can even be distinguished from other objects in the same category. Our modeling method introduces an inducing neural network that follows the traditional CNN and guides the training quickly and efficiently with respect to training convergence and model capacity. Using the dataset released by DAC 2018, which contains 98 classes and 96,408 images taken by UAVs, we show how our modeling method develops the inducing neural network, integrating multi-task learning drawn from state-of-the-art works, to achieve about 50% IoU (Intersection over Union of the ground-truth and predicted bounding boxes) and 20 FPS on an NVIDIA Jetson TX2. In the experiment, we collect images from the drone and train a model for detecting a car. With our model, the drone can run inference on a 720x1280 image and navigate itself to track the car using the inference result within one second.
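The IoU metric reported above can be stated concretely. As a minimal sketch (not taken from the paper; the function name and the (x1, y1, x2, y2) box convention are illustrative assumptions), IoU for two axis-aligned bounding boxes is the area of their intersection divided by the area of their union:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes.

    Each box is (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    Returns a value in [0, 1]: 0 for disjoint boxes, 1 for identical boxes.
    """
    # Coordinates of the overlap rectangle (may be empty).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp negative extents to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0
```

For example, two 10x10 boxes offset by 5 pixels in each direction overlap in a 5x5 region, giving IoU = 25 / 175. A detection is typically counted as correct when its IoU with the ground-truth box exceeds a threshold such as 0.5, which matches the roughly 50% IoU reported here.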

