ICUAS 2020 Paper Abstract


Paper ThA1.4

Tsiourva, Maria (University of Nevada, Reno), Papachristos, Christos (University of Nevada, Reno)

LiDAR Imaging-Based Attentive Perception

Scheduled for presentation during the Regular Session "See and Avoid Systems I" (ThA1), Thursday, September 3, 2020, 11:00−11:20, Macedonia Hall

2020 International Conference on Unmanned Aircraft Systems (ICUAS), September 1-4, 2020 (Postponed from June 9-12, 2020), Athens, Greece


Keywords: See-and-avoid Systems, Navigation

Abstract

In this paper we present a novel approach to attentive robotic perception by developing a saliency model for LiDAR imaging. Modern LiDAR sensors provide access to a multitude of structured images, namely intensity, reflectivity, ambient, and range. These images can in turn be fused, and a saliency model can be developed in a manner analogous to the human attentive system but tailored to the uniqueness of LiDAR perception. The derived LiDAR-based saliency model follows a bottom-up approach: the reflectivity, intensity, range, and ambient images are compared to each other and, through a sequence of image processing steps, multiple conspicuity maps are acquired, which in turn give rise to a unified saliency map that efficiently encodes which objects are most important and worth further analysis. This attentive perception system on LiDAR imaging enables efficient obstacle detection for robotic systems, provides 360° coverage, and allows millisecond-order execution times on embedded processors. The derived model is experimentally evaluated on datasets from ground and flying robots.
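
The abstract describes the pipeline only at a high level. The minimal Python sketch below illustrates one plausible reading of it, assuming Itti-Koch-style center-surround contrast per channel and plain averaging for the fusion step; all function names, filter sizes, and the fusion rule are illustrative assumptions, not the paper's implementation.

    # Hypothetical sketch (not the authors' code): a bottom-up,
    # Itti-Koch-style saliency pipeline over the four structured
    # LiDAR image channels (intensity, reflectivity, ambient, range).
    import numpy as np
    from scipy.ndimage import uniform_filter

    def normalize(m):
        """Rescale a map to [0, 1], guarding against constant maps."""
        span = m.max() - m.min()
        return (m - m.min()) / span if span > 0 else np.zeros_like(m, dtype=float)

    def conspicuity(channel, sizes=(3, 7, 15)):
        """Center-surround contrast via differences of box filters (assumed sizes)."""
        ch = normalize(channel.astype(float))
        out = np.zeros_like(ch)
        for center, surround in zip(sizes[:-1], sizes[1:]):
            out += np.abs(uniform_filter(ch, center) - uniform_filter(ch, surround))
        return normalize(out)

    def saliency(intensity, reflectivity, ambient, range_img):
        """Fuse per-channel conspicuity maps into a unified saliency map."""
        maps = [conspicuity(c) for c in (intensity, reflectivity, ambient, range_img)]
        return normalize(np.mean(maps, axis=0))  # assumed fusion: plain averaging

Thresholding or peak extraction on the fused map would then flag the regions worth further analysis; since each channel needs only a few separable box filters, a pipeline of this shape could plausibly reach the millisecond-order embedded runtimes the abstract reports.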
