ICUAS'17 Paper Abstract


Paper ThC2.3

Tulpan, Dan (National Research Council Canada), Bouchard, Cajetan (National Research Council Canada), Ellis, Kristopher (National Research Council Canada), Minwalla, Cyrus (National Research Council Canada)

Detection of Clouds in Sky/cloud and Aerial Images Using Moment Based Texture Segmentation

Scheduled for presentation during the "See-and-avoid Systems - II" (ThC2), Thursday, June 15, 2017, 16:20−16:40, Salon AB

2017 International Conference on Unmanned Aircraft Systems, June 13-16, 2017, Miami Marriott Biscayne Bay, Miami, FL, USA

This information is tentative and subject to change. Compiled on April 12, 2021

Keywords: See-and-avoid Systems, UAS Applications


Unmanned aircraft flying beyond line of sight in uncontrolled airspace must maintain adequate separation from local inclement weather for regulatory compliance and operational safety. Although commercial solutions for weather avoidance exist, they are tailored to manned aviation and either lack the required accuracy or exceed the size, weight, and power (SWaP) constraints of small unmanned aerial systems (UAS). Detection of and ranging to the cloud ceiling is a key component of weather avoidance. Proposed herein is a computer vision approach to cloud detection consisting of feature extraction and machine learning. Six image moments computed on local texture regions were extracted and fused within a classification algorithm to discriminate cloud pixels. Three popular classifiers were evaluated for efficacy, and two publicly available datasets of all-sky images were used as training and test sets. The proposed approach was compared quantitatively to five well-known thresholding techniques; results indicate that it consistently outperformed these thresholding methods across all tested images. Among the classifiers, random forests achieved the highest training accuracy, while multilayer perceptrons showed better prediction accuracy on the test set. When the method was extended to realistic images containing background clutter, the random forest classifier demonstrated the best training accuracy (100%) and the best prediction accuracy (96%). Although computationally more expensive, the random forest classifier also produced the fewest false positives. A sensitivity analysis over window sizes is presented to validate the chosen approach and showed improved detection accuracy.
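The pipeline the abstract describes (per-window moment features fused by a classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not specify which six moments or which window size were used, so the normalized central moments, the 8x8 window, and the synthetic sky/cloud image below are all assumptions for demonstration.

```python
# Sketch of moment-based texture segmentation for cloud detection.
# Assumptions (not from the paper): the six features are the window mean plus
# normalized central moments of orders 2-6; windows are 8x8 and non-overlapping.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_moments(patch):
    """Return six moment-based texture features for a grayscale patch."""
    x = patch.astype(float).ravel()
    mu = x.mean()
    c = x - mu
    sd = c.std() + 1e-9  # guard against flat patches
    # Mean plus standardized central moments of orders 2..6 (assumed set).
    return [mu] + [np.mean((c / sd) ** k) for k in range(2, 7)]

def extract_features(img, win=8):
    """Slide a non-overlapping window over the image; one feature row per window."""
    feats, coords = [], []
    h, w = img.shape
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            feats.append(window_moments(img[i:i + win, j:j + win]))
            coords.append((i, j))
    return np.array(feats), coords

# Synthetic demo: dark noisy "sky" stacked above bright smooth "cloud".
rng = np.random.default_rng(0)
sky = rng.normal(60, 25, (64, 64))
cloud = rng.normal(200, 5, (64, 64))
img = np.vstack([sky, cloud])

X, coords = extract_features(img, win=8)
y = np.array([1 if i >= 64 else 0 for i, _ in coords])  # 1 = cloud window

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
train_acc = clf.score(X, y)
```

On this easily separable synthetic image the random forest fits the training windows almost perfectly; the paper's reported 100% training / 96% test accuracy came from real all-sky and cluttered imagery, which this toy example does not reproduce.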


