NASA Astrophysics Data System (ADS)
Guo, Shiyi; Mai, Ying; Zhao, Hongying; Gao, Pengqi
2013-05-01
The airborne video streams of small UAVs are commonly plagued by distracting jitter and shake, disorienting rotations, noisy and distorted images, and other unwanted motions. Together these problems make it very difficult for observers to obtain useful information from the video. Because of the small payload of small UAVs, improving image quality by means of electronic image stabilization is a priority. When a small UAV makes a turn, however, its flight characteristics cause the video to become oblique, which poses considerable difficulty for electronic image stabilization. A homography model performs well for oblique image motion estimation, but it makes intentional motion estimation much harder. In this paper we therefore focus on stabilizing video captured while small UAVs bank and turn. We assume that the small UAV flies along an arc of fixed turning radius. Accordingly, after a series of experimental analyses of the flight characteristics and turning paths of small UAVs, we present a new method for estimating the intentional motion, in which the path of the frame center is used to fit the moving track of the video. Meanwhile, dynamic mosaicking of the image sequence is performed to compensate for the limited field of view. Finally, the proposed algorithm was implemented and validated on actual airborne videos. The results show that the proposed method effectively stabilizes the oblique video of small UAVs.
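The fixed-turning-radius assumption above suggests fitting a circular arc to the frame-center trajectory to recover the intentional motion. A minimal sketch using an algebraic (Kasa) least-squares circle fit is shown below; this is our own illustrative formulation, not the authors' exact model, and all names are ours.

```python
import numpy as np

def fit_turn_arc(centers):
    """Least-squares (Kasa) circle fit to a sequence of frame-center
    coordinates, modelling a turn of fixed radius. Returns (cx, cy, r)."""
    pts = np.asarray(centers, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # (x-cx)^2 + (y-cy)^2 = r^2 rearranges to a linear system in
    # cx, cy and c = r^2 - cx^2 - cy^2:
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

# Synthetic frame centers sampled from a turn of radius 100 about (50, -20)
theta = np.linspace(0.1, 1.2, 30)
centers = np.column_stack([50 + 100 * np.cos(theta),
                           -20 + 100 * np.sin(theta)])
cx, cy, r = fit_turn_arc(centers)
```

The fitted center and radius then define the smooth intentional path from which unwanted motion can be separated.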
Real-time UAV trajectory generation using feature points matching between video image sequences
NASA Astrophysics Data System (ADS)
Byun, Younggi; Song, Jeongheon; Han, Dongyeob
2017-09-01
Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach to the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find corresponding points is one of the most important steps in the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find matching points between video image sequences, and removed mismatches using Preemptive RANSAC, which divides all matching points into inliers and outliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results on simulated video image sequences show that our approach has good potential for application to the automatic geo-localization of UAV systems.
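The inlier/outlier split performed by RANSAC can be sketched with a deliberately simplified motion model: a 2-D translation stands in for the paper's epipolar-geometry estimation so the example stays self-contained. All names and parameters are our own illustrative choices.

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, thresh=2.0, seed=0):
    """Split putative matches into inliers and outliers with a minimal
    RANSAC loop; one match suffices as a minimal sample for a translation."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))              # draw a minimal sample
        resid = np.linalg.norm(dst - src - (dst[i] - src[i]), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)    # refit on the consensus set
    return t, best

# 20 consistent matches displaced by (5, -3) plus 5 gross mismatches
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, (25, 2))
dst = src + np.array([5.0, -3.0])
dst[20:] += rng.uniform(30.0, 60.0, (5, 2))     # simulated false matches
t, inliers = ransac_translation(src, dst)
```

In the paper's setting, the minimal sample would be a set of point pairs constraining the essential matrix rather than a single translation hypothesis.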
Design of UAV high resolution image transmission system
NASA Astrophysics Data System (ADS)
Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng
2017-02-01
To address the bandwidth limitation of UAV image transmission systems, a scheme with image compression technology for mini UAVs is proposed, based on the requirements of high-definition UAV image transmission. The H.264 video coding standard and its key technologies were analyzed and studied for UAV area video communication. Building on this research into high-resolution image encoding/decoding and wireless transmission methods, a high-resolution image transmission system was designed on an Android architecture with a video codec chip. The constructed system was verified by laboratory experiments: the bit rate can be controlled easily, the QoS is stable, and the latency is low enough to meet most application requirements, for military as well as industrial uses.
Three Dimensional Reconstruction of Large Cultural Heritage Objects Based on UAV Video and TLS Data
NASA Astrophysics Data System (ADS)
Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.
2016-06-01
This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) for 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, which carries a consumer digital camera, collects dynamic video to overcome its limited endurance. A 3D point cloud is then generated from the video image sequences using automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect information beyond the reach of UAV imaging, e.g., parts of the building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds (from UAV image reconstruction and TLS scanning) into a complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm is introduced that uses local terrain-invariant regions in the combined registration. The experimental study was conducted on the Tulou cultural heritage buildings in Fujian province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
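One iteration of the point-to-point ICP underlying the point-cloud integration can be sketched as follows. This is the generic textbook step (nearest-neighbour matching plus a Kabsch/SVD rigid fit), not the paper's terrain-invariant variant; the region-selection refinement is omitted.

```python
import numpy as np

def icp_step(src, dst):
    """One basic point-to-point ICP iteration: match each source point to
    its nearest destination point, then solve the optimal rigid transform
    with the Kabsch (SVD) method."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)  # brute-force NN
    nn = dst[d2.argmin(axis=1)]
    mu_s, mu_d = src.mean(0), nn.mean(0)
    H = (src - mu_s).T @ (nn - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1
    D = np.diag([1.0] * (src.shape[1] - 1)
                + [float(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return (R @ src.T).T + t, R, t

# A 5x4 grid of 2-D points, displaced by a small pure translation
gx, gy = np.meshgrid(np.arange(5.0), np.arange(4.0))
src = np.column_stack([gx.ravel(), gy.ravel()])
dst = src + np.array([0.1, -0.05])
moved, R, t = icp_step(src, dst)
```

In practice the step is iterated until the residual stops decreasing; the paper's variant restricts the matching to locally terrain-invariant regions for robustness.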
Autonomous target tracking of UAVs based on low-power neural network hardware
NASA Astrophysics Data System (ADS)
Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe
2014-05-01
Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem™ neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs, and its superior performance and power advantages for real-time, autonomous target tracking.
Video change detection for fixed wing UAVs
NASA Astrophysics Data System (ADS)
Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa
2017-10-01
In this paper we continue the work of Bartelsen et al. [1]. We present a draft process chain for image-based change detection designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be achieved simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change between "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system comprising a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented [2, 3], as far as we are able to judge, the limits of this important issue have not yet been estimated. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed wing UAV and to synthetic data.
For the image processing and change detection, we use the approach of Müller [4]. Although it was developed for unmanned ground vehicles (UGVs), it enables near real-time video change detection for aerial videos. Concluding, we discuss the demands on sensor systems with respect to change detection.
UAV field demonstration of social media enabled tactical data link
NASA Astrophysics Data System (ADS)
Olson, Christopher C.; Xu, Da; Martin, Sean R.; Castelli, Jonathan C.; Newman, Andrew J.
2015-05-01
This paper addresses the problem of enabling Command and Control (C2) and data exfiltration functions for missions using small, unmanned, airborne surveillance and reconnaissance platforms. The authors demonstrated the feasibility of using existing commercial wireless networks as the data transmission infrastructure to support Unmanned Aerial Vehicle (UAV) autonomy functions such as transmission of commands, imagery, metadata, and multi-vehicle coordination messages. The authors developed and integrated a C2 Android application for ground users with a common smart phone, a C2 and data exfiltration Android application deployed on-board the UAVs, and a web server with database to disseminate the collected data to distributed users using standard web browsers. The authors performed a mission-relevant field test and demonstration in which operators commanded a UAV from an Android device to search and loiter; and remote users viewed imagery, video, and metadata via web server to identify and track a vehicle on the ground. Social media served as the tactical data link for all command messages, images, videos, and metadata during the field demonstration. Imagery, video, and metadata were transmitted from the UAV to the web server via multiple Twitter, Flickr, Facebook, YouTube, and similar media accounts. The web server reassembled images and video with corresponding metadata for distributed users. The UAV autopilot communicated with the on-board Android device via on-board Bluetooth network.
NASA Astrophysics Data System (ADS)
Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.
2015-08-01
Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important for small multi-rotor UAVs because of their low endurance due to short battery life. Images can be stored on board with either still-image or video compression. Still-image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms, which fail when the motion vectors are long and the overlap between subsequent frames is small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed to refine the motion field estimated from the metadata alone. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system, and thus maximize encoder performance. Experiments are performed on both simulated and real-world video sequences.
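The idea of predicting global image motion from INS metadata alone can be illustrated for the simplest case: a nadir-pointing camera over flat terrain, where the position delta divided by the ground sampling distance gives the expected pixel shift. The flat-terrain assumption and all parameter names below are our own illustrative choices, not the paper's model.

```python
def predict_global_motion(dx_m, dy_m, altitude_m, focal_mm, pixel_um):
    """Predict the global frame-to-frame image translation, in pixels, from
    INS position deltas for a nadir-pointing camera over flat terrain."""
    # Ground sampling distance: metres on the ground covered by one pixel
    gsd_m = altitude_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)
    return dx_m / gsd_m, dy_m / gsd_m

# 2 m of forward motion at 100 m altitude with an 8 mm lens and 4 um pixels:
# GSD = 100 * 4e-6 / 8e-3 = 0.05 m/pixel, hence a 40-pixel predicted shift
shift = predict_global_motion(2.0, 0.0, 100.0, 8.0, 4.0)
```

Such a metadata-only prediction seeds the encoder's motion search; the low-complexity image analysis the paper describes then refines it.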
The application of micro UAV in construction project
NASA Astrophysics Data System (ADS)
Kaamin, Masiri; Razali, Siti Nooraiin Mohd; Ahmad, Nor Farah Atiqah; Bukari, Saifullizan Mohd; Ngadiman, Norhayati; Kadir, Aslila Abd; Hamid, Nor Baizura
2017-10-01
Every outstanding construction project relies on effective construction management, which allows the project to be implemented according to plan. Every construction project must document its progress, usually in reports created by the site engineer; documenting the progress of works is one of the requirements of construction management, and a progress report necessarily includes visual images as evidence. The conventional method of photographing a construction site uses a common digital camera, which has several drawbacks compared with a Micro Unmanned Aerial Vehicle (UAV). In addition, site engineers face recurring difficulties in monitoring high reach points and viewing the construction site as a whole. The purpose of this paper is to provide a concise review of Micro UAV technology for monitoring progress on construction sites through a visualization approach. The aim of this study is to replace the conventional method of photographing construction sites with a Micro UAV, which can portray the whole view of the building, especially high reach points, produce better images, videos and 3D models, and help the site engineer monitor works in progress. The Micro UAV was flown around the building construction according to Ground Control Points (GCPs) to capture images and record videos. The images taken by the Micro UAV were processed to generate a 3D model and analysed to visualize the building construction, monitor the progress of construction work, and provide immediate, reliable data for project estimation. It has been proven that the better images and videos obtained with a Micro UAV give a better overview of the construction site and reveal defects in high reach building structures. Moreover, with a Micro UAV the progress of the construction site is tracked more efficiently and kept on schedule.
Extended image differencing for change detection in UAV video mosaics
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang; Schumann, Arne
2014-03-01
Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e., observations taken at time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to a larger scene coverage.
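The core of the extended image differencing described above can be sketched in a few lines: combine the intensity and gradient-magnitude difference images linearly, then binarize with an adaptive threshold. The weights and the mean + k*std threshold below are our own illustrative choices; the paper does not publish its exact parameters.

```python
import numpy as np

def change_mask(prev, curr, w_int=1.0, w_grad=1.0, k=3.0):
    """Change mask from a linear combination of the intensity and
    gradient-magnitude difference images, adaptively thresholded."""
    prev, curr = np.asarray(prev, float), np.asarray(curr, float)

    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    d = (w_int * np.abs(curr - prev)
         + w_grad * np.abs(grad_mag(curr) - grad_mag(prev)))
    return d > d.mean() + k * d.std()   # adaptive threshold

# A flat scene in which one bright "parked vehicle" appears
prev = np.zeros((64, 64))
curr = prev.copy()
curr[30:34, 30:36] = 200.0
mask = change_mask(prev, curr)
```

On registered mosaic pairs, the same operation runs over the full stitched extent rather than a single frame.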
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2016-06-01
Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
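The two "directed" change masks can be sketched directly from per-pixel feature-strength maps (e.g. a corner response): "new object" where the current response is strong and the previous one weak, "vanished object" for the converse. The threshold and ratio below are illustrative values of ours, not the paper's.

```python
import numpy as np

def directed_change_masks(feat_prev, feat_curr, thresh=10.0, ratio=2.0):
    """Derive 'new object' and 'vanished object' masks from per-pixel
    feature-strength maps of the previous and current image."""
    new_obj = (feat_curr > thresh) & (feat_curr > ratio * feat_prev)
    vanished = (feat_prev > thresh) & (feat_prev > ratio * feat_curr)
    return new_obj, vanished

prev = np.zeros((8, 8))
curr = np.zeros((8, 8))
curr[2, 2] = 50.0     # a corner that appears  -> "new object"
prev[5, 5] = 40.0     # a corner that vanishes -> "vanished object"
new_obj, vanished = directed_change_masks(prev, curr)
```

Merging these two masks with the undirected differencing mask yields the combined color mask described in the abstract.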
Experimental application of simulation tools for evaluating UAV video change detection
NASA Astrophysics Data System (ADS)
Saur, Günter; Bartelsen, Jan
2015-10-01
Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on short time scale, i.e. the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the area of interest, and the relevant changes are, e.g., recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are versatile objects like trees and compression or transmission artifacts. To enable the use of automatic change detection within an interactive workflow of a UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene backgrounds and changing influence parameters (e.g. image quality, sensor and flight parameters), including image metadata and ground truth data, are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view, the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection are described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips.
For the selected examples, the images could be registered, the modelled changes could be extracted and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data could be considered to be realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.
UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment.
Chen, Jessie Y C
2010-08-01
A military reconnaissance environment was simulated to examine the performance of ground robotics operators who were instructed to utilise streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robot to the locations of the targets. The effects of participants' spatial ability on their performance and workload were also investigated. Results showed that participants' overall performance (speed and accuracy) was better when they had access to images from larger UAVs with fixed orientations, compared with the other UAV conditions (baseline with no UAV, micro air vehicle, and UAV with orbiting views). Participants experienced the highest workload when the UAV was orbiting. Individuals with higher spatial ability performed significantly better and reported lower workload than those with lower spatial ability. The results of the current study will further the understanding of ground robot operators' target search performance based on streaming video from UAVs. The results will also facilitate the implementation of ground/air robots in military environments and will be useful to the future military system design and training community.
Intergraph video and images exploitation capabilities
NASA Astrophysics Data System (ADS)
Colla, Simone; Manesis, Charalampos
2013-08-01
The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence, such as satellite imagery and aerial photos, to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.
Short-term change detection for UAV video
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2012-11-01
In the last years, there has been an increased use of unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken between several days, weeks, or even years. Examples for relevant changes we are looking for are recently parked or moved vehicles. As a pre-requisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine-registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation based approach and the multivariate alteration detection.
The algorithms are adapted to be used in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer IOSB, see Heinze et al. (2010) [1]. In a further step we plan to incorporate more information from the video sequences into the change detection input images, e.g., by image enhancement or by along-track stereo, which are available in the ABUL system.
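The local neighborhood search that suppresses differences caused by small residual misalignment can be sketched as follows: for each pixel, take the minimum absolute difference over all shifts of the previous image within a small window. This minimal version uses intensity only; the paper applies the same idea to gradient-magnitude images as well.

```python
import numpy as np

def local_min_difference(prev, curr, radius=1):
    """Difference image robust to small misalignment: per-pixel minimum
    absolute difference over all shifts of `prev` within +/- radius pixels."""
    prev, curr = np.asarray(prev, float), np.asarray(curr, float)
    h, w = curr.shape
    best = np.full((h, w), np.inf)
    p = np.pad(prev, radius, mode='edge')
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            shifted = p[dy:dy + h, dx:dx + w]
            best = np.minimum(best, np.abs(curr - shifted))
    return best

# A 1-pixel registration error produces large plain differences,
# but the neighborhood search suppresses them entirely.
prev = np.zeros((16, 16)); prev[4:8, 4:8] = 100.0
curr = np.zeros((16, 16)); curr[5:9, 4:8] = 100.0   # same object, 1 px lower
d = local_min_difference(prev, curr, radius=1)
```

A genuinely new object, by contrast, cannot be explained away by any shift inside the window and survives in the difference image.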
Real-time target tracking and locating system for UAV
NASA Astrophysics Data System (ADS)
Zhang, Chao; Tang, Linbo; Fu, Huiquan; Li, Maowen
2017-07-01
In order to achieve real-time target tracking and locating for UAVs, a reliable processing system is built on an embedded platform. First, video imagery is acquired in real time by the electro-optical system on the UAV. When the target information is known, the KCF tracking algorithm is adopted to track the target. The servo is then controlled to rotate with the target, and when the target is at the center of the image, the laser ranging module is activated to obtain the distance between the UAV and the target. Finally, these measurements are combined with UAV flight parameters obtained from the BeiDou navigation system, and a target location algorithm computes the geodetic coordinates of the target. The results show that the system tracks and locates targets stably in real time.
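The geometry behind the target-location step can be illustrated with a simplified flat-earth model: the UAV position, the camera line-of-sight angles and the laser-measured slant range determine the target in local coordinates. This is our own stand-in for the paper's geodetic target-location algorithm; all names and conventions are illustrative.

```python
import math

def locate_target(uav_enu, yaw_deg, pitch_deg, range_m):
    """Locate a target in local ENU coordinates from the UAV position,
    the line-of-sight yaw (from north) and pitch (below the horizon),
    and the laser-measured slant range."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    d_horiz = range_m * math.cos(pitch)           # ground-plane distance
    east = uav_enu[0] + d_horiz * math.sin(yaw)
    north = uav_enu[1] + d_horiz * math.cos(yaw)
    up = uav_enu[2] - range_m * math.sin(pitch)
    return east, north, up

# UAV at 100 m altitude, camera due north and 45 degrees down,
# slant range 100*sqrt(2) m -> target ~100 m north at ground level
e, n, u = locate_target((0.0, 0.0, 100.0), 0.0, 45.0, 100.0 * math.sqrt(2.0))
```

A full implementation would convert the local result to geodetic coordinates using the BeiDou-derived UAV position and attitude.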
Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles
Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.
2017-01-01
Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (e.g., bridges and buildings) is often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985
Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.
Kedzierski, Michal; Delis, Paulina
2016-06-23
The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.
Annotation of UAV surveillance video
NASA Astrophysics Data System (ADS)
Howlett, Todd; Robertson, Mark A.; Manthey, Dan; Krol, John
2004-08-01
Significant progress toward the development of a video annotation capability is presented in this paper. Research and development of an object tracking algorithm applicable to UAV video is described. Object tracking is necessary for attaching the annotations to the objects of interest. A methodology and format are defined for encoding video annotations using the SMPTE Key-Length-Value (KLV) encoding standard. This provides the following benefits: non-destructive annotation, compliance with existing standards, video playback in systems that are not annotation-enabled, and support for a real-time implementation. A model real-time video annotation system is also presented, at a high level, using the MPEG-2 Transport Stream as the transmission medium. This work was accomplished to meet the Department of Defense's (DoD's) need for a video annotation capability. Current practice for creating annotated products is to capture a still image frame, annotate it using an Electric Light Table application, and then pass the annotated image on as a product. That is not adequate for reporting or downstream cueing: it is too slow and there is a severe loss of information. This paper describes a capability for annotating directly on the video.
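As a sketch of what KLV packaging involves, the following is a minimal encoder for a single KLV triplet using SMPTE 336M-style BER length encoding. The 16-byte key below is a placeholder, not a registered SMPTE label; real annotation metadata would follow the applicable standard's key and value definitions.

```python
def ber_length(n: int) -> bytes:
    """BER length field: short form below 128, long form otherwise."""
    if n < 0:
        raise ValueError("length must be non-negative")
    if n < 128:
        return bytes([n])                      # short form: one byte
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body    # long form: byte count, then length

def encode_klv(key: bytes, value: bytes) -> bytes:
    """One KLV triplet: 16-byte Universal Label, BER length, value."""
    if len(key) != 16:
        raise ValueError("SMPTE Universal Labels are 16 bytes")
    return key + ber_length(len(value)) + value

# Placeholder 16-byte key (NOT a registered SMPTE label).
demo_key = bytes.fromhex("060e2b34") + bytes(12)
packet = encode_klv(demo_key, b"annotation payload")
```

Because the annotation rides alongside the video essence as KLV, players that do not understand it can skip the triplet by its length field, which is what makes the approach non-destructive.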
Using crowd sourcing to combat potentially illegal or dangerous UAV operations
NASA Astrophysics Data System (ADS)
Tapsall, Brooke T.
2016-10-01
The UAV (Unmanned Aerial Vehicle) industry is growing exponentially, at a pace that policy makers, individual countries and law enforcement agencies are finding difficult to keep up with. The UAV market is large; as such, the number of UAVs being operated in potentially dangerous situations is significant and rapidly increasing. Media continually report 'near-miss' incidents between UAVs and commercial aircraft, UAVs breaching security in sensitive areas, or UAVs invading public privacy. One major challenge for law enforcement agencies is gaining tangible evidence against potentially dangerous or illegal UAV operators, due to the rapidity with which UAV operators are able to enter, fly and exit a scene before authorities can arrive or before they can be located. DroneALERT, an application available via the Airport-UAV.com website, allows users to capture potentially dangerous or illegal UAV activity using their mobile device as the incident is occurring. A short online DroneALERT Incident Report (DIR) is produced and emailed to the user and the Airport-UAV.com custodians. The DIR can be used to aid authorities in their investigations. It contains details such as images and videos, the location, time and date of the incident, the drone model, and its distance and height. By analysing information from the DIR, photos or video, law enforcement authorities have a high potential to use this evidence to identify the type of UAV used, triangulate the location of the potentially dangerous UAV and its operator, create a timeline of events, identify potential areas of operator exit, and determine the legalities breached. All of this provides crucial evidence for identifying and prosecuting a UAV operator.
3D Modeling of Industrial Heritage Building Using COTSs System: Test, Limits and Performances
NASA Astrophysics Data System (ADS)
Piras, M.; Di Pietra, V.; Visintini, D.
2017-08-01
The role of UAV systems in applied geomatics is continuously increasing in several applications, such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. On the technology side, commercial and even COTS (Commercial Off-The-Shelf) UAVs have been in very wide use for some years. Moreover, these UAVs make it easy to acquire oblique images, giving the possibility to overcome the limitations of the nadir approach related to the field of view and occlusions. In order to test the potential and issues of COTS systems, the Italian Society of Photogrammetry and Topography (SIFET) has organised the SBM2017 benchmark, a shared experience open to all participants. This benchmark, called "Photogrammetry with oblique images from UAV: potentialities and challenges", collects considerations from the users, highlights the potential of these systems, defines the critical aspects and the technological challenges, and compares distinct approaches and software. The case study is the "Fornace Penna" in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) were acquired from three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of this benchmark is to generate the 3D model of the "Fornace Penna", with an analysis considering different software, imaging geometries and processing strategies. This paper describes the surveying strategies, the methodologies and five different photogrammetric results obtained (sensor calibration, external orientation, dense point cloud and two orthophotos), using, separately, the single images and the frames extracted from the video acquired with the DJI system.
Precise Target Geolocation Based on Integration of Thermal Video Imagery and RTK GPS in UAVs
NASA Astrophysics Data System (ADS)
Hosseinpoor, H. R.; Samadzadegan, F.; Dadras Javan, F.
2015-12-01
There is an increasingly large number of uses for Unmanned Aerial Vehicles (UAVs), ranging from surveillance and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as C/A-code GPS and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 meters. This low accuracy means the data cannot be used in applications that require cm-level precision. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data is filtered using a linear Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the onboard IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process.
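The linear Kalman filtering step can be illustrated with a generic constant-velocity filter over noisy 2D target positions; the state model and noise levels below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def kalman_cv_track(zs, dt=1.0, q=1e-2, r=1.0):
    """Smooth noisy 2D target positions with a constant-velocity Kalman filter.

    zs: (N, 2) array of measured (x, y) ground positions.
    Returns an (N, 4) array of filtered states [x, y, vx, vy].
    """
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                       # position += velocity * dt
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1  # only position is observed
    Q = q * np.eye(4)                            # process noise covariance
    R = r * np.eye(2)                            # measurement noise covariance
    x = np.array([zs[0][0], zs[0][1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

The velocity components of the state are what the abstract refers to as the smoothed estimate of target velocity: they are inferred from the position track rather than measured directly.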
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. Moving targets occupy only a small fraction of the pixels in HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV itself. The high computational cost of the detection algorithm prevents running it at the full frame resolution. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for the problem.
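A minimal sketch of the frame-differencing stage, assuming the background registration step has already aligned the two frames (the registration itself, and the CPU-GPU partitioning, are not shown):

```python
import numpy as np

def detect_moving_pixels(prev, curr, thresh=25):
    """Frame-difference detection on two registered grayscale frames.

    prev, curr: 2D uint8 arrays assumed already registered (background
    registration, e.g. via a global homography, happens upstream).
    Returns a boolean mask of pixels whose intensity changed by more
    than thresh.
    """
    # widen to int16 so the subtraction cannot wrap around uint8
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh
```

In a full pipeline the mask would then be cleaned up (e.g. morphologically) and grouped into blobs; the threshold here is an arbitrary illustrative value.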
Precise Target Geolocation and Tracking Based on UAV Video Imagery
NASA Astrophysics Data System (ADS)
Hosseinpoor, H. R.; Samadzadegan, F.; Dadrasjavan, F.
2016-06-01
There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), ranging from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as C/A-code GPS and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 meters. Such low accuracy is insufficient for applications that require cm-level precision. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data is filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the onboard IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process. Compared with ordinary code-based GPS, the results of this study indicate that RTK observations with the proposed method improve target geolocation accuracy by more than a factor of ten.
Automated multiple target detection and tracking in UAV videos
NASA Astrophysics Data System (ADS)
Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie
2010-04-01
In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
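The overlap-rate-based data association can be sketched as greedy matching on intersection-over-union between tracked-target boxes and newly detected blobs. The greedy strategy and threshold below are illustrative simplifications; the paper additionally handles splitting and merging of observations.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedily pair each track with at most one detection by overlap rate."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    used_t, used_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < min_iou:
            break  # remaining pairs overlap too little to associate
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti); used_d.add(di)
    return matches
```

Unmatched detections would spawn candidate tracks, and unmatched tracks would coast on their Kalman prediction until re-associated or dropped.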
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan Hruska
Currently, small Unmanned Aerial Vehicles (UAVs) are primarily used for capturing and down-linking real-time video. To date, their role as a low-cost airborne platform for capturing high-resolution, georeferenced still imagery has not been fully utilized. On-going work within the Unmanned Vehicle Systems Program at the Idaho National Laboratory (INL) is attempting to exploit this small-UAV-acquired still imagery potential. Initially, a UAV-based still imagery work flow model was developed that includes initial UAV mission planning, sensor selection, UAV/sensor integration, and imagery collection, processing, and analysis. Components to support each stage of the work flow are also being developed. Critical to the use of acquired still imagery is the ability to detect changes between images of the same area over time. To enhance the analysts' change detection ability, a UAV-specific, GIS-based change detection system called SADI, or System for Analyzing Differences in Imagery, is under development. This paper will discuss the associated challenges and approaches to collecting still imagery with small UAVs. Additionally, specific components of the developed work flow system will be described and graphically illustrated using varied examples of small UAV-acquired still imagery.
Automated UAV-based video exploitation using service oriented architecture framework
NASA Astrophysics Data System (ADS)
Se, Stephen; Nadeau, Christian; Wood, Scott
2011-05-01
Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.
Image-based tracking and sensor resource management for UAVs in an urban environment
NASA Astrophysics Data System (ADS)
Samant, Ashwin; Chang, K. C.
2010-04-01
Coordination and deployment of multiple unmanned air vehicles (UAVs) requires considerable human resources to carry out a successful mission. The complexity of such a surveillance mission is significantly increased in an urban environment, where targets can easily escape from a UAV's field of view (FOV) due to intervening buildings and line-of-sight obstruction. In the proposed methodology, we focus on the control and coordination of multiple UAVs, each with a gimbaled video sensor onboard, for tracking multiple targets in an urban environment. We developed optimal path planning algorithms with emphasis on dynamic target prioritization and persistent target updates. The command center is responsible for target prioritization and autonomous control of multiple UAVs, enabling a single operator to monitor and control a team of UAVs from a remote location. The results are obtained using extensive 3D simulations in Google Earth using tangent-plus-Lyapunov vector field guidance for target tracking.
A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images
Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong
2016-01-01
A new hybrid vehicle detection scheme which integrates the Viola-Jones (V-J) and linear SVM classifier with HOG feature (HOG + SVM) methods is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to on-road vehicles' in-plane rotation, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction, so the original V-J or HOG + SVM method can be directly applied to achieve fast detection and high accuracy. To address the issue of decreasing detection speed for V-J and HOG + SVM, the proposed scheme further develops an adaptive switching strategy which integrates the V-J and HOG + SVM methods based on their different descending trends of detection speed to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, can significantly improve the efficiency and effectiveness of vehicle detection from UAV images. The results also show that the proposed vehicle detection method is competitive compared with other existing vehicle detection methods. Furthermore, since the proposed vehicle detection method can be performed on videos captured from moving UAV platforms without the need for image registration or an additional road database, it has great potential for field applications. Future research will focus on expanding the current method to detect other transportation modes such as buses, trucks, motors, bicycles, and pedestrians. PMID:27548179
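The roadway orientation adjustment idea, rotating the image so the road runs horizontally, can be sketched for point coordinates as follows. A real implementation would warp the whole image, and in image coordinates the y-axis typically points down; the angle convention here is an assumption for illustration.

```python
import math

def align_road_horizontal(points, road_angle_deg):
    """Rotate 2D points so a road oriented at road_angle_deg (from the
    horizontal axis) becomes horizontal. Sketch of the roadway orientation
    adjustment idea; in practice the UAV image itself is rotated with an
    affine warp."""
    t = math.radians(-road_angle_deg)   # rotate by the negative road angle
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y) for x, y in points]
```

After this normalization, the upright V-J or HOG + SVM detectors can be applied without retraining on rotated vehicle appearances.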
2010-09-01
Figure 26. Image of the phased array antenna. Figure 38. Computation of correction angle from array factor and sum/difference beams. Figure 39. Front panel of the tracking algorithm.
Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX
2007-05-17
including the QNX real-time operating system. The video overlay board is useful to display the onboard camera's image with important information such as... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication
Yellow River Icicle Hazard Dynamic Monitoring Using UAV Aerial Remote Sensing Technology
NASA Astrophysics Data System (ADS)
Wang, H. B.; Wang, G. H.; Tang, X. M.; Li, C. H.
2014-02-01
Monitoring the response of Yellow River icicle hazard change requires accurate and repeatable topographic surveys. A new method based on unmanned aerial vehicle (UAV) aerial remote sensing technology is proposed for real-time data processing in Yellow River icicle hazard dynamic monitoring. The monitoring area is located in the Yellow River ice intensive care area in southern BaoTou, Inner Mongolia autonomous region. The monitoring period ran from 20 February to 30 March 2013. Using the proposed video data processing method, automatic extraction of 1832 video key frame images covering an area of 7.8 km2 took 34.786 seconds. The stitching and correction time was 122.34 seconds and the accuracy was better than 0.5 m. By comparing precisely processed stitched images from the video sequence, the method determines changes in the Yellow River ice and accurately locates the ice bar, improving on the traditional visual method by more than 100 times. The results provide accurate decision-aid information for the Yellow River ice prevention headquarters. Finally, the effect of the dam break was repeatedly monitored, and through accurate monitoring and evaluation analysis the ice break was calculated with five-meter accuracy.
A Cloud Robotics Based Service for Managing RPAS in Emergency, Rescue and Hazardous Scenarios
NASA Astrophysics Data System (ADS)
Silvagni, Mario; Chiaberge, Marcello; Sanguedolce, Claudio; Dara, Gianluca
2016-04-01
Cloud robotics and cloud services are revolutionizing not only the ICT world but also the robotics industry, giving robots more computing capability, storage and connection bandwidth while opening new scenarios that blend the physical and the digital world. In this vision, new IT architectures are required to manage robots, retrieve data from them and create services to interact with users. Among all robots, this work is mainly focused on flying robots, better known as drones, UAVs (Unmanned Aerial Vehicles) or RPAS (Remotely Piloted Aircraft Systems). The cloud robotics approach shifts the concept of a single local "intelligence" for every UAV, as a unique device that carries out all computation and storage processes onboard, to a more powerful "centralized brain" located in the cloud. This breakthrough opens new scenarios where UAVs are agents, relying on remote servers for most of their computational load and data storage, creating a network of devices where they can share knowledge and information. Many applications using UAVs are growing, as they are interesting and suitable devices for environment monitoring. Many services can be built by fetching data from UAVs, such as telemetry, video streaming, pictures or sensor data. These services, part of the IT architecture, can be accessed via web by other devices or shared with other UAVs. As test cases of the proposed architecture, two examples are reported. In the first, a search and rescue or emergency management scenario, where UAVs are required for monitoring intervention, is shown. In case of emergency or aggression, the user requests the emergency service from the IT architecture, providing GPS coordinates and an identification number. The IT architecture uses a UAV (choosing among the available ones according to distance, service status, etc.) to reach him/her for monitoring and support operations.
In the meantime, an officer will use the service to see the current position of the UAV, its telemetry and the video stream from its camera. Data are stored for further use and documentation and can be shared with all the involved personnel or services. The second case refers to an imaging survey. An investigation area is selected using a map or a set of coordinates by a user who can be in the field or in a management facility. The cloud system elaborates these data and automatically computes a flight plan that considers the survey data requirements (e.g., picture ground resolution, overlap) as well as several environmental constraints (e.g., no-fly zones, possible hazardous areas, known obstacles). Once the flight plan is loaded into the selected UAV, the mission starts. During the mission, if suitable data network coverage is available, the UAV transmits acquired images (typically low-quality images to limit bandwidth) and shooting poses in order to perform a preliminary check during the mission and minimize survey failures; if not, all data are uploaded asynchronously after the mission. The cloud servers perform all the tasks related to image processing (mosaics, ortho-photos, geo-referencing, 3D models) and data management.
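The automatic flight-plan computation mentioned above rests on standard photogrammetric relations between ground resolution, overlap and camera geometry. A minimal sketch, assuming nadir imagery over flat terrain:

```python
def flight_plan_spacing(gsd_m, img_w_px, img_h_px, forward_overlap, side_overlap):
    """Photo base and strip spacing for a nadir survey at a target GSD.

    The ground footprint is the GSD times the image size in pixels;
    consecutive exposures and adjacent flight lines are spaced so the
    requested overlap fractions remain. (Which image axis is along-track
    depends on camera mounting; here height is taken as along-track.)
    """
    footprint_w = gsd_m * img_w_px                 # across-track footprint (m)
    footprint_h = gsd_m * img_h_px                 # along-track footprint (m)
    base = footprint_h * (1.0 - forward_overlap)   # distance between exposures
    strip = footprint_w * (1.0 - side_overlap)     # distance between flight lines
    return base, strip

def flight_height(gsd_m, focal_mm, pixel_um):
    """Flying height above ground for a target GSD: H = GSD * f / pixel_size."""
    return gsd_m * (focal_mm * 1e-3) / (pixel_um * 1e-6)
```

For example, a 2 cm GSD with a 4000 x 3000 px sensor at 80% forward and 60% side overlap yields a 12 m photo base and 32 m strip spacing; the planner would then clip the resulting lines against no-fly zones and obstacles.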
NASA Astrophysics Data System (ADS)
Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos
2017-09-01
Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs produces a vast amount of video to be transmitted, stored, analyzed and visualized in a real-time way. As a result, the introduction and development of systems able to handle huge amounts of data become a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (e.g., position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework, we execute a use case in which surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is a subject of this evaluation as well.
The use of open data from social media for the creation of 3D georeferenced modeling
NASA Astrophysics Data System (ADS)
Themistocleous, Kyriacos
2016-08-01
There is a great deal of open-source video on the internet that is posted by users on social media sites. With the release of low-cost unmanned aerial vehicles, many hobbyists are uploading videos from different locations, especially in remote areas. Using open-source data that is available on the internet, this study utilized structure from motion (SfM) as a range imaging technique to estimate three-dimensional landscape features from two-dimensional image sequences extracted from video, and applied image distortion correction and geo-referencing. This type of documentation may be necessary for cultural heritage sites that are inaccessible or difficult to document, where we can access video from Unmanned Aerial Vehicles (UAVs). The resulting 3D models can be viewed in Google Earth and used to create orthoimages, drawings and digital terrain models for cultural heritage and archaeological purposes in remote or inaccessible areas.
Robust real-time horizon detection in full-motion video
NASA Astrophysics Data System (ADS)
Young, Grace B.; Bagnall, Bryan; Lane, Corey; Parameswaran, Shibin
2014-06-01
The ability to detect the horizon on a real-time basis in full-motion video is an important capability to aid and facilitate real-time processing of full-motion videos for purposes such as object detection, recognition and other video/image segmentation applications. In this paper, we propose a method for real-time horizon detection that is designed to be used as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion videos captured by ship/harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs) or any other method of surveillance for Maritime Domain Awareness (MDA). Unlike existing horizon detection work, we cannot assume a priori the angle or nature (e.g., a straight line) of the horizon, due to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle and irrespective of objects appearing close to and/or occluding the horizon line (e.g., trees, vehicles at a distance) by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon detection operation. In this paper, we present real-time horizon detection results using our algorithm on real-world full-motion video data from a variety of surveillance sensors, such as UAVs and ship-mounted cameras, confirming the real-time applicability of this method and its ability to detect the horizon with no a priori assumptions.
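The first, coarse stage of such a two-stage approach can be sketched as picking the image row that best splits the frame into two regions of maximally different mean intensity. This grayscale statistic is a stand-in for the paper's color-based features, and it assumes a roughly horizontal horizon, an assumption the finer second stage would then relax.

```python
import numpy as np

def coarse_horizon_row(gray):
    """First-stage horizon localization on a 2D grayscale array.

    Picks the row index r that maximizes the difference between the mean
    intensity of rows above r and rows at/below r (sky vs. sea/land).
    """
    rows = gray.astype(np.float64).mean(axis=1)  # per-row mean intensity
    csum = np.cumsum(rows)
    n = len(rows)
    best_r, best_score = 1, -1.0
    for r in range(1, n - 1):
        above = csum[r - 1] / r                  # mean of rows [0, r)
        below = (csum[-1] - csum[r - 1]) / (n - r)  # mean of rows [r, n)
        score = abs(above - below)
        if score > best_score:
            best_score, best_r = score, r
    return best_r
```

The fine-grained stage would then search only a band around this row, fitting a possibly curved horizon within it.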
Detection of unmanned aerial vehicles using a visible camera system.
Hu, Shuowen; Goldman, Geoffrey H; Borel-Donohue, Christoph C
2017-01-20
Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.
Gonzalez, Luis F.; Montes, Glen A.; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J.
2016-01-01
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification. PMID:26784196
An Augmented Virtuality Display for Improving UAV Usability
2005-01-01
cockpit. For a more universally-understood metaphor, we have turned to virtual environments of the type represented in video games. Many of the...people who have the need to fly UAVs (such as military personnel) have experience with playing video games. They are skilled in navigating virtual...Another aspect of tailoring the interface to those with video game experience is to use familiar controls. Microsoft has developed a popular and
Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows
USDA-ARS?s Scientific Manuscript database
Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...
Achieving an Optimal Medium Altitude UAV Force Balance in Support of COIN Operations
2009-02-02
and execute operations. UAS with common data links and remote video terminals (RVTs) provide input to the common operational picture (COP) and...full-motion video (FMV) is intuitive to many tactical warfighters who have used similar sensors in manned aircraft. Modern data links allow the video...Document (AFDD) 2-9. Intelligence, Surveillance, and Reconnaissance Operations, 17 July 2007. Baldor, Lolita C. "Increased UAV reliance evident in
Evaluation of experimental UAV video change detection
NASA Astrophysics Data System (ADS)
Bartelsen, J.; Saur, G.; Teutsch, C.
2016-10-01
During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are easily overlooked when change detection is performed manually. Depending on the circumstances, these kinds of changes may be an indication of sabotage, terrorist activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Krüger [1] and Saur et al. [2], and have built upon the ideas of Saur and Bartelsen [3]. The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition with respect to flight path, weather conditions and objects within the scene, and to obtain synthetic videos. Video frames which depict the same part of the scene, including "before" and "after" changes and not necessarily taken from the same perspective, are registered pixel-wise against each other by a photogrammetric concept based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects.
The primary concern of this paper is to thoroughly evaluate the possibilities and limitations of our current approach to image-based change detection with respect to the flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimated the precision and recall of automatically detected changes. In addition, we illustrate the results of change detection on short, but real, video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material suitable for change detection will be acquired.
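At its core, the difference-analysis step described above compares two registered frames pixel by pixel. A minimal sketch, not the paper's implementation (the function name and threshold value are assumptions):

```python
import numpy as np

def change_mask(before, after, thresh=30):
    """Pixel-wise difference analysis of two already-registered frames.
    Returns a boolean mask of pixels whose gray value changed by more
    than `thresh`; a crude stand-in for the paper's difference step."""
    diff = np.abs(after.astype(np.int16) - before.astype(np.int16))
    return diff > thresh

before = np.zeros((5, 5), dtype=np.uint8)
after = before.copy()
after[2, 2] = 200            # simulated "after" change, e.g. a new object
mask = change_mask(before, after)
print(mask.sum())            # exactly one changed pixel
```

A real pipeline applies this only after the homography-based registration, since any misalignment shows up directly as spurious changes in the mask.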
Moving object detection in top-view aerial videos improved by image stacking
NASA Astrophysics Data System (ADS)
Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen
2017-08-01
Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
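The redundancy argument above can be illustrated with a minimal median-stacking sketch; `median_stack` is a hypothetical helper that assumes the frames are already registered:

```python
import numpy as np

def median_stack(frames):
    """Fuse a list of pre-aligned gray-value frames (H x W arrays) by
    taking the per-pixel median, which suppresses impulsive noise."""
    stack = np.stack(frames, axis=0)          # shape (N, H, W)
    return np.median(stack, axis=0)

# Toy example: a constant scene with one corrupted pixel in one frame.
scene = np.full((4, 4), 100.0)
noisy = scene.copy()
noisy[1, 1] = 255.0                           # salt noise in one frame
fused = median_stack([scene, noisy, scene])
print(fused[1, 1])                            # noise removed -> 100.0
```

The median is preferred over the mean here because a single outlier frame cannot pull the fused value away from the majority.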
Adaptive pattern for autonomous UAV guidance
NASA Astrophysics Data System (ADS)
Sung, Chen-Ko; Segor, Florian
2013-09-01
The research done at the Fraunhofer IOSB in Karlsruhe within the AMFIS project focuses on a mobile system to support rescue forces in accidents or disasters. The system consists of a ground control station which has the capability to communicate with a large number of heterogeneous sensors and sensor carriers, and provides several open interfaces to allow easy integration of additional sensors into the system. Within this research we focus mainly on UAVs such as VTOL (Vertical Take-Off and Landing) systems because of their ease of use and their high maneuverability. To increase the positioning capability of the UAV, different onboard processing chains for image exploitation, enabling real-time detection of patterns on the ground, and the interfacing technology for controlling the UAV from the payload during flight were examined. The earlier proposed static ground pattern was extended by an adaptive component which adds a visual communication channel to the aircraft. For this purpose, different components were conceived to transfer additional information using changeable patterns on the ground. The adaptive ground patterns and their suitability for this application had to be tested under external influences. Besides the adaptive ground pattern, the onboard processing chains and their adaptation to the demands of changing patterns are introduced in this paper. The tracking of the guiding points, the UAV navigation and the conversion of the guiding point positions from image to real-world coordinates in video sequences, as well as the use limits and possibilities of an adaptable pattern, are examined.
Image restoration for civil engineering structure monitoring using imaging system embedded on UAV
NASA Astrophysics Data System (ADS)
Vozel, Benoit; Dumoulin, Jean; Chehdi, Kacem
2013-04-01
Nowadays, civil engineering structures are periodically surveyed by qualified technicians (i.e., alpinists) performing visual inspection from heavy mechanical pods. This method is far from safe, not only for the monitoring staff, but also for users of the structures. Due to the unceasing increase in traffic, making diversions or closing lanes on a bridge becomes more and more difficult. New inspection methods have to be found. One of the most promising techniques is to develop an inspection method using images acquired by a dedicated monitoring system operating around the civil engineering structure, without disturbing the traffic. In that context, the use of images acquired with a UAV that flies around the structure is of particular interest. The UAV can be equipped with different vision systems (digital camera, infrared sensor, video, etc.). Nonetheless, the detection of small distresses in images (like cracks of 1 mm or less) depends on image quality, which is sensitive to internal parameters of the UAV (vibration modes, video exposure times, etc.) and to external parameters (turbulence, bad illumination of the scene, etc.). Though progress has been made at the UAV level and at the sensor level (i.e., optics), image deterioration is still an open problem. These deteriorations mainly take the form of motion blur, possibly coupled with out-of-focus blur and observation noise in the acquired images. In practice, the deteriorations are unknown if no a priori information is available or no dedicated additional instrumentation is set up at the UAV level. Image restoration processing is therefore required. This is a difficult problem [1-3] which has been intensively studied over the last decades [4-12]. Image restoration can be addressed by following a blind approach or a myopic one. In both cases, it includes two processing steps that can be implemented in sequential or alternate mode.
The first step carries out the identification of the blur impulse response, and the second makes use of this estimated blur kernel to perform the deconvolution of the acquired image. In the present work, different regularization methods, mainly based on the aforementioned Total Variation pseudo-norm, are studied and analysed. The key points of their respective implementations, their properties and their limits are investigated in this particular applicative context. References: [1] J. Hadamard. Lectures on Cauchy's Problem in Linear Partial Differential Equations. Yale University Press, 1923. [2] A. N. Tikhonov. On the resolution of incorrectly posed problems and the regularisation method (in Russian). Doklady A. N. SSSR, 151(3), 1963. [3] C. R. Vogel. Computational Methods for Inverse Problems. SIAM, 2002. [4] A. K. Katsaggelos, J. Biemond, R. W. Schafer, and R. M. Mersereau, "A regularized iterative image restoration algorithm," IEEE Transactions on Signal Processing, vol. 39, no. 4, pp. 914-929, 1991. [5] J. Biemond, R. L. Lagendijk, and R. M. Mersereau, "Iterative methods for image deblurring," Proceedings of the IEEE, vol. 78, no. 5, pp. 856-883, 1990. [6] D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Processing Magazine, vol. 13, no. 3, pp. 43-64, 1996. [7] Y. L. You and M. Kaveh, "A regularization approach to joint blur identification and image restoration," IEEE Transactions on Image Processing, vol. 5, no. 3, pp. 416-428, 1996. [8] T. F. Chan and C. K. Wong, "Total variation blind deconvolution," IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 370-375, 1998. [9] S. Chardon, B. Vozel, and K. Chehdi, "Parametric blur estimation using the GCV criterion and a smoothness constraint on the image," Multidimensional Systems and Signal Processing, vol. 10, pp. 395-414, 1999. [10] B. Vozel, K. Chehdi, and J. Dumoulin, "Myopic image restoration for civil structures inspection using UAV" (in French), in GRETSI, 2005. [11] L. Bar, N. Sochen, and N. Kiryati, "Semi-blind image restoration via Mumford-Shah regularization," IEEE Transactions on Image Processing, vol. 15, no. 2, 2006. [12] J. H. Money and S. H. Kang, "Total variation minimizing blind deconvolution with shock filter reference," Image and Vision Computing, vol. 26, no. 2, pp. 302-314, 2008.
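The Total Variation pseudo-norm underlying several of the cited methods has a simple discrete form. A minimal sketch of its anisotropic variant (the regularizer only, not a restoration algorithm):

```python
import numpy as np

def total_variation(u):
    """Anisotropic discrete total variation: sum of absolute horizontal
    and vertical first differences. Small for smooth images, and it
    penalizes noise while tolerating sharp edges."""
    dx = np.abs(np.diff(u, axis=1)).sum()   # horizontal differences
    dy = np.abs(np.diff(u, axis=0)).sum()   # vertical differences
    return dx + dy

flat = np.full((4, 4), 10.0)                # perfectly smooth image
edge = flat.copy()
edge[:, 2:] = 20.0                          # one sharp vertical edge
print(total_variation(flat), total_variation(edge))  # 0.0 40.0
```

In TV-regularized restoration, this quantity is added to the data-fit term and minimized, which is why the result suppresses noise without blurring edges away.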
Applications of UAVs for Remote Sensing of Critical Infrastructure
NASA Technical Reports Server (NTRS)
Wegener, Steve; Brass, James; Schoenung, Susan
2003-01-01
The surveillance of critical facilities and national infrastructure such as waterways, roadways, pipelines and utilities requires advanced technological tools to provide timely, up to date information on structure status and integrity. Unmanned Aerial Vehicles (UAVs) are uniquely suited for these tasks, having large payload and long duration capabilities. UAVs also have the capability to fly dangerous and dull missions, orbiting for 24 hours over a particular area or facility providing around the clock surveillance with no personnel onboard. New UAV platforms and systems are becoming available for commercial use. High altitude platforms are being tested for use in communications, remote sensing, agriculture, forestry and disaster management. New payloads are being built and demonstrated onboard the UAVs in support of these applications. Smaller, lighter, lower power consumption imaging systems are currently being tested over coffee fields to determine yield and over fires to detect fire fronts and hotspots. Communication systems that relay video, meteorological and chemical data via satellite to users on the ground in real-time have also been demonstrated. Interest in this technology for infrastructure characterization and mapping has increased dramatically in the past year. Many of the UAV technological developments required for resource and disaster monitoring are being used for the infrastructure and facility mapping activity. This paper documents the unique contributions from NASA's Environmental Research Aircraft and Sensor Technology (ERAST) program to these applications. ERAST is a UAV technology development effort by a consortium of private aeronautical companies and NASA. Details of demonstrations of UAV capabilities currently underway are also presented.
Colour-based Object Detection and Tracking for Autonomous Quadrotor UAV
NASA Astrophysics Data System (ADS)
Kadouf, Hani Hunud A.; Mohd Mustafah, Yasir
2013-12-01
With robotics becoming a fundamental aspect of modern society, further research and consequent application is ever increasing. Aerial robotics, in particular, covers applications such as surveillance in hostile military zones or search and rescue operations in disaster-stricken areas, where ground navigation is impossible. The increased visual capacity of UAVs (Unmanned Aerial Vehicles) is also applicable in the support of ground vehicles to provide supplies for emergency assistance, for scouting purposes, or to extend communication beyond insurmountable land or water barriers. The quadrotor, a small UAV, has its lift generated by four rotors and can be controlled by altering the speeds of its motors relative to each other. The four rotors allow for a higher payload than single- or dual-rotor UAVs, which makes it safer and more suitable for carrying camera and transmitter equipment. An onboard camera is used to capture and transmit images of the quadrotor's First Person View (FPV) while in flight, in real time, wirelessly to a base station. The aim of this research is to develop an autonomous quadrotor platform capable of transmitting real-time video signals to a base station for processing. The result from the image analysis will be used as feedback in the quadrotor positioning control. To validate the system, the algorithm should have the capacity to make the quadrotor identify, track or hover above stationary or moving objects.
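The positioning feedback described above ultimately needs an offset of the detected object from the image centre. A toy sketch under the assumption that colour segmentation has already produced a binary mask; `target_offset` is a hypothetical helper, not the authors' algorithm:

```python
import numpy as np

def target_offset(mask):
    """Centroid of a binary colour mask, returned as the (dx, dy)
    offset from the image centre. The offset can drive the position
    controller: steer until both components approach zero."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                      # target not in view
    cy, cx = ys.mean(), xs.mean()
    h, w = mask.shape
    return cx - (w - 1) / 2, cy - (h - 1) / 2

mask = np.zeros((9, 9), dtype=bool)
mask[2:4, 6:8] = True                    # target right of and above centre
print(target_offset(mask))               # (2.5, -1.5)
```

Hovering above the object then reduces to a closed loop that commands pitch/roll proportional to this offset each frame.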
NASA Astrophysics Data System (ADS)
Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter
This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to pair an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data combined with live video images from an onboard camera to register the local video images against a priori registered orthophotos. This yields a precise, driftless absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.
Integrating critical interface elements for intuitive single-display aviation control of UAVs
NASA Astrophysics Data System (ADS)
Cooper, Joseph L.; Goodrich, Michael A.
2006-05-01
Although advancing levels of technology allow UAV operators to give increasingly complex commands with expanding temporal scope, it is unlikely that the need for immediate situation awareness and local, short-term flight adjustment will ever be completely superseded. Local awareness and control are particularly important when the operator uses the UAV to perform a search or inspection task. There are many different tasks which would be facilitated by search and inspection capabilities of a camera-equipped UAV. These tasks range from bridge inspection and news reporting to wilderness search and rescue. The system should be simple, inexpensive, and intuitive for non-pilots. An appropriately designed interface should (a) provide a context for interpreting video and (b) support UAV tasking and control, all within a single display screen. In this paper, we present and analyze an interface that attempts to accomplish this goal. The interface utilizes a georeferenced terrain map rendered from publicly available altitude data and terrain imagery to create a context in which the location of the UAV and the source of the video are communicated to the operator. Rotated and transformed imagery from the UAV provides a stable frame of reference for the operator and integrates cleanly into the terrain model. Simple icons overlaid onto the main display provide intuitive control and feedback when necessary but fade to a semi-transparent state when not in use to avoid distracting the operator's attention from the video signal. With various interface elements integrated into a single display, the interface runs nicely on a small, portable, inexpensive system with a single display screen and simple input device, but is powerful enough to allow a single operator to deploy, control, and recover a small UAV when coupled with appropriate autonomy. As we present elements of the interface design, we will identify concepts that can be leveraged into a large class of UAV applications.
Introducing a Low-Cost Mini-Uav for - and Multispectral-Imaging
NASA Astrophysics Data System (ADS)
Bendig, J.; Bolten, A.; Bareth, G.
2012-07-01
The trend to miniaturize electronic devices also applies to Unmanned Airborne Vehicles (UAVs) as well as to sensor technologies and imaging devices. Consequently, it is not surprising that UAVs are already part of our daily life, and the current pace of development will increase civil applications. A well-known and already widespread example is the so-called flying video game based on Parrot's AR.Drone, which is remotely controlled by an iPod, iPhone, or iPad (http://ardrone.parrot.com). The latter can be considered a low-weight and low-cost Mini-UAV. In this contribution, a Mini-UAV is considered to weigh less than 5 kg and to be able to carry 0.2 kg to 1.5 kg of sensor payload. While up to now Mini-UAVs like Parrot's AR.Drone have mainly been equipped with RGB cameras for videotaping or imaging, the development of such carrier systems is clearly also moving toward multi-sensor platforms like the ones introduced for larger UAVs (5 to 20 kg) by Jaakkola et al. (2010) for forestry applications or by Berni et al. (2009) for agricultural applications. The problem when designing a Mini-UAV for multi-sensor imaging is the payload limit of up to 1.5 kg and the total weight of the whole system remaining below 5 kg. Consequently, the Mini-UAV without sensors, but including the navigation system and GPS sensors, must weigh less than 3.5 kg. A Mini-UAV system with these characteristics is HiSystems' MK-Okto (www.mikrokopter.de). Its total weight including battery but without sensors is less than 2.5 kg. The payload of an MK-Okto is approx. 1 kg and its maximum speed is around 30 km/h. The MK-Okto can be operated up to a wind speed of less than 19 km/h, which corresponds to Beaufort scale number 3. In our study, the MK-Okto is equipped with a handheld low-weight NEC F30IS thermal imaging system. The F30IS, which was developed for veterinary applications, covers 8 to 13 μm, weighs only 300 g, and captures the temperature range between -20 °C and 100 °C.
Flying at a height of 100 m, the camera's image covers an area of approx. 50 by 40 m. The sensor's resolution is 160 x 120 pixels and the field of view is 28° (H) x 21° (V). According to the producer, the absolute accuracy for temperature is ±1 °C and the thermal sensitivity is >0.1 K. Additionally, the MK-Okto is equipped with Tetracam's Mini MCA, in our study a four-band multispectral imaging system. Its total weight is 700 g and its spectral characteristics can be modified by filters between 400 and 1000 nm. In this study, three bands with a width of 10 nm (green: 550 nm, red: 671 nm, NIR1: 800 nm) and one band of 20 nm width (NIR2: 950 nm) have been used. Even though the MK-Okto is able to carry both sensors at the same time, the imaging systems were used separately for this contribution. First results of a combined thermal and multispectral MK-Okto campaign in 2011 are presented and evaluated for a sugar beet field experiment examining pathogens and drought stress.
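The quoted ground coverage follows from the pinhole footprint relation w = 2h·tan(FOV/2) per axis. A quick check of the stated numbers (the helper function is illustrative):

```python
import math

def footprint(altitude_m, fov_h_deg, fov_v_deg):
    """Ground coverage of a nadir-pointing camera at a given altitude:
    width and height of the imaged area, each 2*h*tan(FOV/2)."""
    w = 2 * altitude_m * math.tan(math.radians(fov_h_deg) / 2)
    h = 2 * altitude_m * math.tan(math.radians(fov_v_deg) / 2)
    return w, h

w, h = footprint(100, 28, 21)       # 100 m altitude, 28° x 21° FOV
print(round(w, 1), round(h, 1))     # ~49.9 x 37.1 m, i.e. the "50 by 40 m"
```

Dividing by the 160 x 120 pixel resolution also gives the ground sampling distance, roughly 0.3 m per pixel at this altitude.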
Estimation of velocities via optical flow
NASA Astrophysics Data System (ADS)
Popov, A.; Miller, A.; Miller, B.; Stepanyan, K.
2017-02-01
This article presents an approach to using optical flow (OF) as a general navigation means providing information about a vehicle's linear and angular velocities. The term "OF" comes from opto-electronic devices, where it corresponds to a video sequence of images related to the camera's motion over static surfaces or a set of objects. Even if the positions of these objects are unknown in advance, one can estimate the camera motion from the video sequence itself together with some metric information, such as the distance between the objects or the range to the surface. This approach is applicable to any passive observation system which is able to produce a sequence of images, such as a radio locator or sonar. Here the UAV application of the OF is considered since it is historically
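The basic pinhole relation behind using optical flow as a velocity sensor can be sketched as follows; this is a deliberate simplification that ignores camera rotation and tilt (which a full approach must compensate), and the function name is illustrative:

```python
def velocity_from_flow(flow_px_per_s, altitude_m, focal_px):
    """Ground speed of a downward-looking camera over flat terrain:
    v = (pixel flow rate) * altitude / focal length. This is the metric
    information (range to the surface) that scales the raw flow."""
    return flow_px_per_s * altitude_m / focal_px

# 50 px/s of measured flow, 100 m altitude, 1000 px focal length:
print(velocity_from_flow(50.0, 100.0, 1000.0))   # 5.0 m/s
```

The same relation shows why OF alone cannot resolve scale: without the altitude (or another range measurement), only the ratio v/h is observable.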
Real-Time 3d Reconstruction from Images Taken from AN Uav
NASA Astrophysics Data System (ADS)
Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.
2015-08-01
We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense and true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For its characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in those applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences
NASA Astrophysics Data System (ADS)
Wierzbicki, Damian
2017-12-01
UAV photogrammetry is currently characterized by largely automated and efficient data processing. Low-altitude imaging is increasingly significant in applications such as city mapping, corridor mapping, road and pipeline inspection, and the mapping of large areas, e.g., forests. Additionally, high-resolution video (HD and larger) is more often used for low-altitude imaging: on the one hand it delivers many details and characteristics of ground surface features, and on the other hand it presents new challenges in data processing. Therefore, the determination of the elements of exterior orientation plays a substantial role in the detail of Digital Terrain Models and in artefact-free orthophoto generation. In parallel, research is being conducted on the quality of images acquired from UAVs and on the quality of products such as orthophotos. Despite the fast development of UAV photogrammetry, it is still necessary to perform Automatic Aerial Triangulation (AAT) on the basis of GPS/INS observations and ground control points. During a low-altitude photogrammetric flight, the approximate elements of exterior orientation registered by the UAV are burdened by shift/bias errors. In this article, methods for determining the shift/bias error are presented. In the process of digital aerial triangulation, two solutions are applied. In the first method, the shift/bias error is determined together with the drift/bias error, the elements of exterior orientation, and the coordinates of ground control points. In the second method, the shift/bias error is determined together with the elements of exterior orientation and the coordinates of ground control points, with the drift/bias error set to 0. When the two methods are compared, the difference in the shift/bias error is more than ±0.01 m for all terrain coordinates XYZ.
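As a toy illustration of what a constant shift/bias means: when the GPS/INS-recorded projection centres differ from their adjusted values by a constant offset, the per-axis mean residual recovers it exactly. This is a drastic simplification of the paper's joint bundle-adjustment formulation, and the helper name is an assumption:

```python
import numpy as np

def estimate_shift_bias(gps_positions, adjusted_positions):
    """Constant shift/bias between GPS/INS-recorded camera positions
    and their (here: assumed known) adjusted values, estimated as the
    per-axis mean residual."""
    residuals = np.asarray(adjusted_positions) - np.asarray(gps_positions)
    return residuals.mean(axis=0)

gps = [[0.0, 0.0, 50.0], [10.0, 0.0, 50.0], [20.0, 0.0, 50.0]]
adj = [[0.3, -0.1, 50.5], [10.3, -0.1, 50.5], [20.3, -0.1, 50.5]]
bias = estimate_shift_bias(gps, adj)
print(bias)   # recovers the constant [0.3, -0.1, 0.5] offset
```

In the actual AAT, the shift (and optionally drift) terms are unknowns estimated alongside the exterior orientation rather than computed against known truth.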
2015-01-31
from a wireless joystick console broadcasting at 2.4 GHz. Figure 6. GTRI Airborne Unmanned Sensor System As shown in Figure 7 the autopilot has a...generating wind turbines, and video reconnaissance systems on unmanned aerial vehicles (UAVs). The most basic decision problem in designing a...chosen test UAV case was the GTRI Aerial Unmanned Sensor System (GAUSS) aircraft. The GAUSS platform is a small research UAV with a widely used
Emergency response to landslide using GNSS measurements and UAV
NASA Astrophysics Data System (ADS)
Nikolakopoulos, Konstantinos G.; Koukouvelas, Ioannis K.
2017-10-01
Landslide monitoring can be performed using many different methods: classical geotechnical measurements such as inclinometers, topographical survey measurements with total stations or GNSS sensors, and photogrammetric techniques using airphotos or high-resolution satellite images. However, all these methods are expensive or difficult to deploy immediately after a landslide is triggered. In contrast, airborne technology, and especially the use of Unmanned Aerial Vehicles (UAVs), makes the response to a landslide disaster easier, as UAVs can be launched quickly over dangerous terrain and send data about the sliding areas to responders on the ground, either as RGB images or as videos. In addition, the emergency response to a landslide is critical for further monitoring. For proper displacement identification, all the above-mentioned monitoring methods need a high-resolution and very accurate representation of the relief. The ideal solution for the accurate and quick mapping of a landslide is the combined use of UAV photogrammetry and GNSS measurements. UAVs started their development as expensive toys but have become a very valuable tool in the large-scale mapping of sliding areas. The purpose of this work is to demonstrate an effective solution for initial landslide mapping immediately after the occurrence of the phenomenon, and the possibility of periodical assessment of the landslide. Three different landslide cases from Greece are presented in the current study. All three landslides have different characteristics: they occurred in different geomorphologic environments, were triggered by different causes, and had different geologic bedrock. In all three cases we performed detailed GNSS measurements of the landslide area, and we generated orthophotos as well as Digital Surface Models (DSMs) at an accuracy of better than ±10 cm. Slide direction and velocity, mass balances, as well as protection and mitigation measures can be derived from the application of the UAVs. In addition, those data are accurate, cost-effective and time-effective.
A method of fast mosaic for massive UAV images
NASA Astrophysics Data System (ADS)
Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong
2014-11-01
With the development of UAV technology, UAVs are widely used in multiple fields such as agriculture, forest protection, mineral exploration, natural disaster management and the surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use. Users can thus obtain massive image data with UAVs, but processing these data requires a lot of time; for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. Disaster management and many other fields, however, require a quick response, which is hard to achieve with massive image data. Aiming at reducing the high time consumption and manual interaction, this article presents a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original images from the UAV: flight belts and the relations between belts and images are recognized automatically by the program, and useless images are discarded at the same time. This speeds up the search for match points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied to shorten the time of global optimization notably. Besides the traditional mosaic result, the system can also generate a superoverlay result for Google Earth, which provides a fast and easy way to display the result data. In order to verify the feasibility of this method, a fast mosaic system for massive UAV images was developed, which is fully automated and needs no manual interaction after the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system reduces time consumption by 35%-50% in contrast to traditional methods, and rapidly increases the response speed of UAV image processing.
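The belt-recognition step can be sketched as a course-reversal test on the GPS track; `group_into_belts` and its reversal criterion are illustrative assumptions, not the authors' algorithm:

```python
def group_into_belts(track):
    """Group an ordered UAV exposure track (x, y positions) into flight
    belts: an image starts a new belt when its course reverses, i.e. has
    a negative dot product with the current belt's initial course.
    Matching is then only attempted within (and between adjacent) belts."""
    belts = [[0]]
    belt_dir = None
    for i in range(1, len(track)):
        dx = track[i][0] - track[i - 1][0]
        dy = track[i][1] - track[i - 1][1]
        if belt_dir is None:
            belt_dir = (dx, dy)
        elif dx * belt_dir[0] + dy * belt_dir[1] < 0:   # course reversed
            belts.append([])
            belt_dir = (dx, dy)
        belts[-1].append(i)
    return belts

# Two parallel strips flown in opposite directions; the turn exposure
# (index 3, perpendicular course) stays with the first belt.
track = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
print(group_into_belts(track))   # [[0, 1, 2, 3], [4, 5]]
```

Restricting pairwise matching to images inside the same or neighbouring belts is what cuts the match-point search from quadratic in the whole dataset to roughly linear.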
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott G. Bauer; Matthew O. Anderson; James R. Hanneman
2005-10-01
The proven value of DOD Unmanned Aerial Vehicles (UAVs) will ultimately transition to National and Homeland Security missions that require real-time aerial surveillance, situation awareness, force protection, and sensor placement. Public service first responders, who routinely risk personal safety to assess and report a situation for emergency actions, will likely be the first to benefit from these new unmanned technologies. 'Packable' or 'portable' small-class UAVs will be particularly useful to the first responder. They require the least amount of training, no fixed infrastructure, and are capable of being launched and recovered from the point of emergency. All UAVs require wireless communication technologies for real-time applications. Typically, on a small UAV, a low-bandwidth telemetry link is required for command and control (C2) and systems health monitoring. If the UAV is equipped with a real-time Electro-Optical or Infrared (EO/IR) video camera payload, a dedicated high-bandwidth analog/digital link is usually required for reliable high-resolution imagery. In most cases, both the wireless telemetry and real-time video links will be integrated into the UAV with unity-gain omni-directional antennas. With limited onboard power and payload capacity, a small UAV is limited in the amount of radio-frequency (RF) energy it can transmit to the users. Therefore, 'packable' and 'portable' UAVs will have limited useful operational ranges for first responders. This paper will discuss the limitations of small UAV wireless communications. The discussion will present an approach of utilizing a dynamic, ground-based, real-time tracking high-gain directional antenna to provide extended-range stand-off operation, potential RF channel reuse, and assured telemetry and data communications from low-powered UAV-deployed wireless assets.
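The range benefit of a high-gain tracking ground antenna can be quantified with the standard Friis free-space link budget; the numbers below are illustrative, not from the paper:

```python
import math

def friis_received_dbm(tx_dbm, gtx_dbi, grx_dbi, freq_hz, dist_m):
    """Received power (dBm) over a free-space link: Ptx + Gtx + Grx - FSPL,
    with FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55 (d in m, f in Hz)."""
    fspl_db = 20 * math.log10(dist_m) + 20 * math.log10(freq_hz) - 147.55
    return tx_dbm + gtx_dbi + grx_dbi - fspl_db

# 2.4 GHz video link at 1 km: 20 dBm transmitter, 2 dBi UAV omni antenna.
omni = friis_received_dbm(20, 2, 2, 2.4e9, 1000)         # 2 dBi ground omni
directional = friis_received_dbm(20, 2, 14, 2.4e9, 1000)  # 14 dBi ground dish
print(round(omni, 1), round(directional, 1))
```

In free space, every 6 dB of extra link margin doubles the usable range, so the 12 dB of added ground-antenna gain here roughly quadruples it without touching the UAV's transmit power.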
Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery
Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng
2016-01-01
Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low resolution of the imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering, and then classifies the detected blobs using a linear Support Vector Machine (SVM) with a hybrid descriptor, which combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs a feature tracker with updates of the detected pedestrian locations to track pedestrian objects in the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated on multiple datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as multimodal traffic performance measurement, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564
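A hybrid HOG+DCT descriptor of the kind described can be sketched as follows. This is a simplified stand-in (a single global, unnormalized orientation histogram and an explicit DCT-II matrix), not the authors' exact feature pipeline; patches are assumed square.

```python
import numpy as np

def hog_like(patch, nbins=9):
    """Simplified global histogram of oriented gradients (not the full
    block-normalized HOG of Dalal & Triggs; an illustrative stand-in)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)
    hist, _ = np.histogram(ang, bins=nbins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)

def dct2_lowfreq(patch, k=4):
    """Top-left k x k coefficients of a 2-D DCT-II, built from an
    explicit transform matrix (square patch assumed, no scipy needed)."""
    n = patch.shape[0]
    i = np.arange(n)
    C = np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    coeffs = C @ patch.astype(float) @ C.T
    return coeffs[:k, :k].ravel()

def hybrid_descriptor(patch):
    """Concatenate shape (HOG-like) and texture (DCT) features."""
    return np.concatenate([hog_like(patch), dct2_lowfreq(patch)])

patch = np.zeros((16, 16))
patch[:, 8:] = 1.0            # synthetic blob with a vertical edge
desc = hybrid_descriptor(patch)
```

In the paper's setup, such descriptors computed on candidate blobs would then be fed to a linear SVM for the pedestrian / non-pedestrian decision.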
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise readout (<2 e-). Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection
NASA Astrophysics Data System (ADS)
Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2016-10-01
Recent technological advancements in hardware systems have enabled higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps). Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial that we conducted in August 2015.
Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow.
Zhang, Weilong; Guo, Bingxuan; Li, Ming; Liao, Xuan; Li, Wenzhuo
2018-04-16
Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaics. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images, so that they can be transformed into the same coordinate system. Then, the gray-scale difference, the gradient minimum, and the optical flow value of pixels in the overlapped area of adjacent images are calculated in a neighborhood and used to create an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search for optimal seam-lines to complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which can find a more ideal splicing path for adjacent UAV images and better avoid crossing ground objects. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in panoramic UAV images.
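The dynamic-programming seam search can be illustrated with the classic minimal-cost-seam recurrence over an energy map. The energy map below is synthetic; the paper's actual energy function (gray-scale difference, gradient minimum, optical flow) and its adaptive aggregation strategy are not reproduced here.

```python
import numpy as np

def find_seam(energy):
    """Minimal-cost top-to-bottom seam through an energy map via
    dynamic programming; each row's column may shift by at most one
    from the previous row. Returns one column index per row."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for r in range(1, h):
        left = np.r_[np.inf, cost[r - 1, :-1]]    # cost from upper-left
        right = np.r_[cost[r - 1, 1:], np.inf]    # cost from upper-right
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):                # backtrack upward
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam

energy = np.ones((5, 5))
energy[:, 2] = 0.0            # a cheap path straight down column 2
seam = find_seam(energy)
```

Blending the two images along the returned seam, rather than along a straight overlap boundary, is what suppresses ghosting across moving or misaligned ground objects.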
Uav-Based 3d Urban Environment Monitoring
NASA Astrophysics Data System (ADS)
Boonpook, Wuttichai; Tan, Yumin; Liu, Huaqing; Zhao, Binbin; He, Lingfeng
2018-04-01
Unmanned Aerial Vehicle (UAV) based remote sensing can be used for three-dimensional (3D) mapping with great flexibility, in addition to providing high-resolution images. In this paper we propose a quick change-detection method for UAV images that combines altitude from a Digital Surface Model (DSM) with texture analysis of the images. Cases of UAV images with and without georeferencing are both considered. Research results show that the accuracy of change detection can be enhanced by georeferencing, and that change detection on UAV images collected both vertically and obliquely, even without georeferencing, also performs well.
The remote characterization of vegetation using Unmanned Aerial Vehicle photography
USDA-ARS?s Scientific Manuscript database
Unmanned Aerial Vehicles (UAVs) can fly in place of piloted aircraft to gather remote sensing information on vegetation characteristics. The type of sensors flown depends on the instrument payload capacity available, so that, depending on the specific UAV, it is possible to obtain video, aerial phot...
Multi-Sensor Fusion and Enhancement for Object Detection
NASA Technical Reports Server (NTRS)
Rahman, Zia-Ur
2005-01-01
This was a brief effort to investigate the ability to detect changes along the flight path of an unmanned airborne vehicle (UAV) over time. Video was acquired by the UAV during several passes over the same terrain. Concurrently, GPS data and UAV attitude data were also acquired. The purpose of the research was to use information from all of these sources to detect whether any change had occurred in the terrain encompassed by the flight path.
High Scalability Video ISR Exploitation
2012-10-01
Surveillance, ARGUS) rates at level 6 on the National Image Interpretability Rating Scale (NIIRS). Ultra-high quality cameras like the Digital Cinema 4K (DC-4K), which recognizes objects smaller than people, will be available for purchase for use in the field. However, even if such a UAV sensor with a DC-4K was flown
Volumetric calculation using low cost unmanned aerial vehicle (UAV) approach
NASA Astrophysics Data System (ADS)
Rahman, A. A. Ab; Maulud, K. N. Abdul; Mohd, F. A.; Jaafar, O.; Tahar, K. N.
2017-12-01
Unmanned Aerial Vehicle (UAV) technology has evolved dramatically in the 21st century. It is used by both the military and the general public for recreational purposes and mapping work. The operating cost of a UAV is much lower than that of a conventional aircraft, and it does not require a large workspace. UAV systems offer functions similar to LiDAR and satellite imaging technologies, which demand high cost, labour and time to produce elevation and dimension data. Measurement of difficult objects, such as a water tank, can also be done using a UAV. The purpose of this paper is to show the capability of a UAV to compute the volume of a water tank from different numbers of images and control points. The results were compared with the actual volume of the tank to validate the measurement. In this study, image acquisition was done using a Phantom 3 Professional, which is a low-cost UAV. The analysis compares volume computations using two and four control points with various sets of UAV images. The results show that more images provide a better-quality measurement: with 95 images and four GCPs, the error relative to the actual volume is about 5%. Four control points are enough to obtain good results, but more images are needed, an estimated 115 to 220. All in all, it can be concluded that a low-cost UAV has the potential to be used for water volume and dimension measurement.
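Once a photogrammetric surface model exists, the volume computation itself reduces to summing heights above a base plane over each grid cell's footprint. A minimal sketch with a synthetic tank-shaped DSM; the paper's accuracy analysis against ground truth is not modeled here.

```python
import numpy as np

def volume_above_base(dsm, base_height, cell_size_m):
    """Volume (m^3) between a DSM and a flat base plane: sum of
    positive height differences times the cell footprint area."""
    heights = np.clip(dsm - base_height, 0.0, None)
    return float(heights.sum() * cell_size_m ** 2)

# A synthetic 4 m x 4 m tank, 2 m tall, sampled on a 0.5 m grid:
dsm = np.zeros((20, 20))
dsm[6:14, 6:14] = 2.0         # 8 x 8 cells of 0.5 m -> 4 m x 4 m footprint
vol = volume_above_base(dsm, 0.0, 0.5)
```

With denser image coverage the DSM cells approximate the true surface better, which is consistent with the paper's finding that more images improve the measured volume.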
Micro-UAV tracking framework for EO exploitation
NASA Astrophysics Data System (ADS)
Browning, David; Wilhelm, Joe; Van Hook, Richard; Gallagher, John
2012-06-01
Historically, the Air Force's research into aerial platforms for sensing systems has focused on low-, mid-, and high-altitude platforms. Though these systems are likely to comprise the majority of the Air Force's assets for the foreseeable future, they have limitations. Specifically, these platforms, their sensor packages, and their data exploitation software are unsuited for close-quarter surveillance, such as in alleys and inside of buildings. Micro-UAVs have been gaining in popularity, especially non-fixed-wing platforms such as quad-rotors. These platforms are much more appropriate for confined spaces. However, the types of video exploitation techniques that can effectively be used are different from those for the typical nadir-looking aerial platform. This paper discusses the creation of a framework for testing existing and new video exploitation algorithms, and describes a sample micro-UAV-based tracker.
Near Real-Time Georeference of Umanned Aerial Vehicle Images for Post-Earthquake Response
NASA Astrophysics Data System (ADS)
Wang, S.; Wang, X.; Dou, A.; Yuan, X.; Ding, L.; Ding, X.
2018-04-01
The rapid collection of Unmanned Aerial Vehicle (UAV) remote sensing images plays an important role in quickly submitting disaster information and monitoring seriously damaged objects after an earthquake. However, for the hundreds of UAV images collected in one flight sortie, the traditional processing methods are image stitching and three-dimensional reconstruction, which take one to several hours and slow the disaster response. If images are instead searched manually, much more time is spent selecting them, and the selected images carry no spatial reference. Therefore, a near-real-time rapid georeference method for UAV remote sensing disaster data is proposed in this paper. The UAV images are georeferenced using the position and attitude data collected by the UAV flight control system, and the georeferenced data are organized by means of world files, a format developed by ESRI. Rapid georeference software for UAV images was written in C#, using the Geospatial Data Abstraction Library (GDAL). The results show that up to one thousand UAV images can be georeferenced within one minute, which meets the demand of rapid disaster response and is of great value in disaster emergency applications.
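The six-line ESRI world-file format used to organize the georeferenced images can be written directly. A minimal sketch assuming a north-up image with zero rotation terms; real flights would derive the rotation terms from the UAV attitude data.

```python
def world_file_text(pixel_size, x_origin, y_origin, rot_x=0.0, rot_y=0.0):
    """Return the six-line content of an ESRI world file (e.g. a .jgw
    sidecar for a .jpg): x pixel size, row rotation, column rotation,
    negative y pixel size (north-up), then the map coordinates of the
    center of the upper-left pixel."""
    values = [pixel_size, rot_x, rot_y, -pixel_size, x_origin, y_origin]
    return "\n".join(f"{v:.10f}" for v in values) + "\n"

# 5 cm ground sample distance, illustrative UTM origin:
text = world_file_text(0.05, 500000.0, 4000000.0)
```

Because a world file is just a text sidecar, each image becomes usable in GIS software the moment its six parameters are known, which is what makes the sub-minute georeferencing of a thousand images feasible.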
A distributed automatic target recognition system using multiple low resolution sensors
NASA Astrophysics Data System (ADS)
Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj
2008-04-01
In this paper, we propose a multi-agent system which uses swarming techniques to perform high-accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can cooperatively share the information from low-resolution images of different looks and use this information to perform high-accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates the processing capabilities, combines detection reporting with live video exchange, and provides swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bitshifts and additions, yet achieves a stunning 16X pixel resolution enhancement, which is moreover parallelizable. We develop advanced, adaptive particle-filtering-based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the received images from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm is proposed to address this, using homography decomposition and ground plane surface estimation.
NASA Astrophysics Data System (ADS)
Ham, S.; Oh, Y.; Choi, K.; Lee, I.
2018-05-01
Detecting unregistered buildings from aerial images is an important task for urban management, such as inspecting illegal buildings in greenbelts or updating GIS databases. Moreover, the data acquisition platform of photogrammetry is evolving from manned aircraft to UAVs (Unmanned Aerial Vehicles). However, it is costly and time-consuming to detect unregistered buildings from UAV images, since the interpretation of aerial images still relies on manual effort. To overcome this problem, we propose a system which automatically detects unregistered buildings from UAV images based on deep learning methods. Specifically, we train a deconvolutional network with publicly available geospatial data, semantically segment a given UAV image into a building probability map, and compare the building map with existing GIS data. Through this procedure, we can detect unregistered buildings from UAV images automatically and efficiently. We expect that the proposed system can be applied to various urban management tasks, such as monitoring illegal buildings or illegal land-use change.
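The final comparison step (network output versus existing GIS footprints) can be sketched as a simple mask difference. The threshold and the toy arrays below are illustrative; the paper's deconvolutional network itself is not reproduced.

```python
import numpy as np

def unregistered_mask(building_prob, gis_mask, threshold=0.5):
    """Pixels the network believes are buildings (probability above
    threshold) but that carry no footprint in the existing GIS layer."""
    predicted = building_prob >= threshold
    return predicted & ~gis_mask.astype(bool)

# Toy 3x3 example: the left column is a registered building; the
# middle column is built but absent from the GIS layer.
prob = np.array([[0.9, 0.8, 0.1],
                 [0.9, 0.7, 0.1],
                 [0.2, 0.1, 0.1]])
gis = np.array([[1, 0, 0],
                [1, 0, 0],
                [0, 0, 0]])
flagged = unregistered_mask(prob, gis)
```

In practice the flagged pixel regions would be cleaned up (e.g. by minimum-area filtering) before being reported as candidate unregistered buildings.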
Wrap-Around Out-the-Window Sensor Fusion System
NASA Technical Reports Server (NTRS)
Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.
2009-01-01
The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as it would be seen from the cockpit of a crewed spacecraft, aircraft, or remote control of a ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or from satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
Embedded, real-time UAV control for improved, image-based 3D scene reconstruction
Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul
2016-01-01
Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with the UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...
Uncooled microbolometer sensors for unattended applications
NASA Astrophysics Data System (ADS)
Kohin, Margaret; Miller, James E.; Leary, Arthur R.; Backer, Brian S.; Swift, William; Aston, Peter
2003-09-01
BAE SYSTEMS has been developing and producing uncooled microbolometer sensors since 1995. Recently, uncooled sensors have been used on Pointer Unmanned Aerial Vehicles and considered for several unattended sensor applications, including DARPA Micro-Internetted Unattended Ground Sensors (MIUGS), Army Modular Acoustic Imaging Sensors (MAIS), and Redeployable Unattended Ground Sensors (R-UGS). This paper describes recent breakthrough uncooled sensor performance at BAE SYSTEMS and how this improved performance has been applied to a new Standard Camera Core (SCC) that is ideal for these unattended applications. Video imagery from a BAE SYSTEMS 640x480 imaging camera flown in a Pointer UAV is provided. Recent performance results are also provided.
NASA Astrophysics Data System (ADS)
Sankey, T.; Donald, J.; McVay, J.
2015-12-01
High-resolution remote sensing images and datasets are typically acquired at large cost, which poses a big challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV, so we can now generate our own images with the instrument. The UAV has the unique capability to carry a large payload, including a hyperspectral sensor, which images the Earth's surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in three dimensions. Both sensors represent the newest available technology, with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatments. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3D estimates. The UAV image-derived variables are then correlated with the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.
Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.
Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki
2015-03-19
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is therefore an important task for improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of this research is a novel hazy UAV image degradation model that considers the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis for differentiating visually important regions from others based on the turbidity and the merged classification results.
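The enhancement step ultimately inverts the standard haze image-formation model I = J·t + A·(1 - t). A minimal single-channel sketch; the paper's wavelength-adaptive, per-class transmission map is not reproduced, and the single scalar airlight is an assumption.

```python
import numpy as np

def dehaze(hazy, transmission, airlight, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t) per pixel:
    J = (I - A) / max(t, t_min) + A. The floor t_min avoids
    amplifying noise where the transmission estimate is tiny."""
    t = np.maximum(transmission, t_min)
    return (hazy - airlight) / t + airlight

# Synthetic round trip: haze a flat scene, then recover it.
scene = np.full((4, 4), 0.3)      # true radiance
t = np.full((4, 4), 0.6)          # transmission map
A = 0.9                           # atmospheric light (airlight)
hazy = scene * t + A * (1 - t)
restored = dehaze(hazy, t, A)
```

A wavelength-adaptive method in the spirit of the paper would use a different transmission map per color channel, since turbidity attenuates short wavelengths more strongly.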
A debugging method of the Quadrotor UAV based on infrared thermal imaging
NASA Astrophysics Data System (ADS)
Cui, Guangjie; Hao, Qian; Yang, Jianguo; Chen, Lizhi; Hu, Hongkang; Zhang, Lijun
2018-01-01
High-performance UAVs have been popular and in great demand in recent years. This paper introduces a new method for debugging quadrotor UAVs. Based on infrared thermal technology and heat transfer theory, a UAV under debugging hovers above a hot-wire grid composed of 14 heated nichrome wires. The airflow propelled by the rotating rotors influences the temperature distribution of the hot-wire grid. An infrared thermal imager below observes the distribution and captures thermal images of the hot-wire grid. With the assistance of a mathematical model and some experiments, the paper discusses the relationship between the thermal images and the speed of the rotors. By putting debugged UAVs through the test, reference information and standard thermal images can be acquired. The paper demonstrates that, by comparison with the standard thermal images, a UAV being debugged in the same test can yield critical data directly or after interpolation. The results are shown in the paper and the advantages are discussed.
Experiences of using UAVs for monitoring levee breaches
NASA Astrophysics Data System (ADS)
Brauneck, J.; Pohl, R.; Juepner, R.
2016-11-01
During floods, technical protection facilities are subjected to high loads and may fail, as several examples have shown in the past. During the major 2002 and 2013 floods in the catchment area of the Elbe River (Germany), breaching levees caused large inundations in the hinterland. In such situations the emergency forces need comprehensive and reliable real-time information about the situation, especially the breach enlargement and discharge, the spatial and temporal development of the inundation, and the damages. After impressive recent progress, unmanned aerial vehicles (UAVs), also called remotely piloted aircraft systems (RPAS), are now highly capable of collecting precise information from inaccessible areas and transmitting it to the task force very quickly. Using the example of the Breitenhagen levee failure near the Saale-Elbe junction in Germany in June 2013, the processing steps needed to go from visual UAV flight information to a hydro-numeric model are explained. Modelling of the breach was implemented using photogrammetric ranging methods, such as structure from motion and dense image matching. These methods use conventional digital multiple-view images or videos, recorded by either a moving aerial platform or terrestrial photography, and allow the construction of 3D point clouds, digital surface models and orthophotos. At Breitenhagen, a UAV recorded the beginning of the levee failure. Due to the dynamic character of the breach and the moving aerial platform, four different surface models yield valid data, with extrapolated breach widths of 9 to 40 meters. From these calculations the flow rate through the breach has been determined. The procedure has also been tested in a physical model, whose results will be presented as well.
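Estimating discharge through a breach of known width and upstream head is commonly done with a weir-type formula. A minimal sketch; the broad-crested-weir coefficient below is a textbook assumption, not the value used in this study, and real breach hydraulics are considerably more complex.

```python
def breach_discharge(width_m, head_m, c=1.7):
    """Broad-crested weir approximation for flow through a levee
    breach: Q ~ c * b * H**1.5 in SI units, with c ~ 1.7 m^0.5/s a
    typical discharge coefficient (an assumption, not this paper's)."""
    return c * width_m * head_m ** 1.5

# Illustrative: the paper's extrapolated breach widths of 9-40 m,
# with an assumed 1.5 m head over the breach crest:
q_narrow = breach_discharge(9.0, 1.5)
q_wide = breach_discharge(40.0, 1.5)
```

This is why the photogrammetrically extrapolated breach width matters: discharge scales linearly with width, so the 9-40 m uncertainty band translates directly into the flow-rate estimate fed to the hydro-numeric model.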
DAZZLE project: UAV to ground communication system using a laser and a modulated retro-reflector
NASA Astrophysics Data System (ADS)
Thueux, Yoann; Avlonitis, Nicholas; Erry, Gavin
2014-10-01
The advent of the Unmanned Aerial Vehicle (UAV) has generated the need for reduced size, weight and power (SWaP) requirements for communications systems with a high data rate, enhanced security and quality of service. This paper presents the current results of the DAZZLE project run by Airbus Group Innovations. The specifications, integration steps and initial performance of a UAV-to-ground communication system using a laser and a modulated retro-reflector are detailed. The laser operates at a wavelength of 1550 nm and at power levels that keep it eye-safe. It is directed using a FLIR pan-and-tilt unit driven by an image-processing-based system that tracks the UAV in flight at a range of a few kilometers. The modulated retro-reflector is capable of a data rate of 20 Mbps over short distances, using 200 mW of electrical power. The communication system was tested at the Pershore Laser Range in July 2014. Video data from a flying octocopter was successfully transmitted over 1200 m. During the next phase of the DAZZLE project, the team will attempt to produce a modulated retro-reflector capable of 1 Gbps in partnership with the research institute Acreo, based in Sweden. A high-speed laser beam steering capability based on a Spatial Light Modulator will also be added to the system to improve beam pointing accuracy.
Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheriyadat, Anil M
2011-01-01
Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide-area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.
Comparison of UAV and WorldView-2 imagery for mapping leaf area index of mangrove forest
NASA Astrophysics Data System (ADS)
Tian, Jinyan; Wang, Le; Li, Xiaojuan; Gong, Huili; Shi, Chen; Zhong, Ruofei; Liu, Xiaomeng
2017-09-01
Unmanned Aerial Vehicle (UAV) remote sensing has opened the door to new sources of data to effectively characterize vegetation metrics at very high spatial resolution and at flexible revisit frequencies. Successful estimation of the leaf area index (LAI) in precision agriculture with UAV images has been reported in several studies. However, in most forests, the challenges associated with interference from a complex background and a variety of vegetation species have hindered research using UAV images. To the best of our knowledge, very few studies have mapped forest LAI with a UAV image. In addition, the drawbacks and advantages of estimating forest LAI with UAV versus high-spatial-resolution satellite images remain a knowledge gap in the existing literature. Therefore, this paper aims to map LAI in a mangrove forest with a complex background and a variety of vegetation species using a UAV image, and to compare it with a WorldView-2 (WV2) image. In this study, three representative NDVIs, average NDVI (AvNDVI), vegetated specific NDVI (VsNDVI), and scaled NDVI (ScNDVI), were acquired with UAV and WV2 to predict the plot-level (10 × 10 m) LAI. The results showed that AvNDVI achieved the highest accuracy for WV2 (R2 = 0.778, RMSE = 0.424), whereas ScNDVI obtained the optimal accuracy for UAV (R2 = 0.817, RMSE = 0.423). In addition, an overall comparison of the WV2- and UAV-derived LAIs indicated that UAV obtained better accuracy than WV2 in plots covered with homogeneous mangrove species or with low LAI, because the UAV can effectively eliminate the influence of the background and of the vegetation species owing to its high spatial resolution. However, WV2 obtained slightly higher accuracy than UAV in plots covered with a variety of mangrove species, because the UAV sensor provides a less favorable spectral response function (SRF) than WV2 for mangrove LAI estimation.
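A plot-level average NDVI of the kind compared here ('AvNDVI') is straightforward to compute from red and near-infrared reflectance. The reflectance values below are illustrative, and the subsequent NDVI-to-LAI regression fitted in the paper is not reproduced.

```python
import numpy as np

def average_ndvi(red, nir):
    """Plot-level average NDVI: mean of the per-pixel
    (NIR - red) / (NIR + red) over all pixels in the plot."""
    ndvi = (nir - red) / (nir + red + 1e-9)   # epsilon guards divide-by-zero
    return float(ndvi.mean())

# Illustrative reflectances for a small, densely vegetated plot:
red = np.array([[0.05, 0.06], [0.05, 0.07]])
nir = np.array([[0.45, 0.44], [0.46, 0.42]])
avndvi = average_ndvi(red, nir)
```

The VsNDVI variant would first mask out non-vegetated pixels before averaging, which is one way high-resolution UAV imagery can suppress the background influence discussed above.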
NASA Astrophysics Data System (ADS)
Chiabrando, F.; Teppati Losè, L.
2017-08-01
The use of UAV platforms is now standard for image and video acquisition from an aerial point of view. In response to the enormous growth in demand, production of COTS (Commercial Off-the-Shelf) platforms and systems has increased to meet market requirements. In recent years, different platforms have been developed and sold at low-to-medium cost, and nowadays the offer of interesting systems is very large. One of the most important companies producing UAVs and other imaging systems is DJI (Dà-Jiāng Innovations Science and Technology Co., Ltd), founded in 2006 and headquartered in Shenzhen, China. The platforms realized by the company range from low-cost systems up to professional equipment tailored for the high-resolution acquisitions needed for film-making purposes. Given the characteristics of the latest low-cost DJI platforms, their onboard sensors, and the performance of modern photogrammetric software based on Structure from Motion (SfM) algorithms, these systems are nowadays employed for performing 3D surveys from the small up to the large scale. The present paper aims to test the image quality, flight operations, flight planning and final-product accuracy of three COTS platforms realized by DJI: the Mavic Pro, the Phantom 4 and the Phantom 4 PRO. The test site chosen was the Chapel of San Giuliano in the municipality of Savigliano (Cuneo, Italy), a small church with two aisles dating back to the early eleventh century.
Automated geographic registration and radiometric correction for UAV-based mosaics
USDA-ARS's Scientific Manuscript database
Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties to s...
Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles
Gökçe, Fatih; Üçoluk, Göktürk; Şahin, Erol; Kalkan, Sinan
2015-01-01
Detection and distance estimation of micro unmanned aerial vehicles (mUAVs) is crucial for (i) the detection of intruder mUAVs in protected environments; (ii) sense-and-avoid purposes on mUAVs or on other aerial vehicles and (iii) multi-mUAV control scenarios, such as environmental monitoring, surveillance and exploration. In this article, we evaluate vision algorithms as alternatives for detection and distance estimation of mUAVs, since other sensing modalities entail certain limitations on the environment or on the distance. For this purpose, we test Haar-like features, histogram of gradients (HOG) and local binary patterns (LBP) using cascades of boosted classifiers. Cascaded boosted classifiers allow fast processing by performing detection tests at multiple stages, where only candidates passing earlier simple stages are processed at the subsequent, more complex stages. We also integrate a distance estimation method with our system, utilizing geometric cues with support vector regressors. We evaluated each method on indoor and outdoor videos that were collected in a systematic way and also on videos having motion blur. Our experiments show that, using boosted cascaded classifiers with LBP, near real-time detection and distance estimation of mUAVs are possible in about 60 ms indoors (1032×778 resolution) and 150 ms outdoors (1280×720 resolution) per frame, with an F-score of 0.96. However, the cascaded classifiers using Haar-like features lead to better distance estimation, since they can position the bounding boxes on mUAVs more accurately. On the other hand, our time analysis shows that the cascaded classifiers using HOG train and run faster than the other algorithms. PMID:26393599
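The geometric cue behind the distance estimation step is the pinhole relation between apparent and physical size. The abstract's support-vector regressor learns this mapping from data; a first-order sketch, with a hypothetical focal length and mUAV wingspan, looks like:

```python
def distance_from_bbox(bbox_width_px, focal_px, target_width_m):
    """Pinhole model: an object of known physical width W appears smaller
    with distance: distance = f * W / w  (f in pixels, W in metres,
    w the detected bounding-box width in pixels)."""
    return focal_px * target_width_m / bbox_width_px

# Hypothetical numbers: 1000 px focal length, 0.5 m wingspan, 25 px box
d = distance_from_bbox(25, 1000.0, 0.5)   # -> 20.0 m
```

This also explains why the Haar cascades, which localize the bounding box more tightly, gave the better distance estimates in the study.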
Runway Detection From Map, Video and Aircraft Navigational Data
2016-03-01
Master's thesis by Jose R. Espinosa Gloria, March 2016. Thesis Advisor: Roberto Cristi; Co-Advisor: Oleg... In the Mexican Navy, unmanned aerial vehicles (UAV) have been equipped with daylight and infrared cameras. Processing the video information obtained from these
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-08-12
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m and the RS-image data at altitudes of 60 and 100 m, was able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-01-01
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m and the RS-image data at altitudes of 60 and 100 m, was able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights. PMID:26274960
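Resampling a high-resolution UAV image to a coarser ground sampling distance, as done above to emulate higher flight altitudes, can be sketched with simple block averaging (a stand-in for whatever resampling kernel the study actually used):

```python
import numpy as np

def resample_block_mean(img, factor):
    """Downsample a 2-D image by an integer factor using block averaging,
    approximating the coarser ground sampling of a higher flight altitude."""
    h, w = img.shape
    h2, w2 = h // factor * factor, w // factor * factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
low = resample_block_mean(img, 2)   # 2x2 image of 2x2-block means
```

A 30 m flight resampled by factors of 2 and ~3.3 would approximate the 60 m and 100 m acquisitions compared in the paper.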
Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images
Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki
2015-01-01
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767
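The underlying image formation model is the standard haze equation I = J·t + A·(1 − t), where J is the scene radiance, A the airlight and t the transmission. A minimal single-band inversion, with a crude intensity-based transmission map standing in for the paper's wavelength- and class-adaptive one, might look like:

```python
import numpy as np

def dehaze(hazy, airlight, omega=0.95, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t) on a single band.
    Transmission is estimated from the normalised intensity (brighter
    pixels are assumed hazier; a single-band dark-channel surrogate,
    not the paper's segmentation-based map)."""
    t = 1.0 - omega * (hazy / airlight)
    t = np.clip(t, t_min, 1.0)          # avoid division blow-up
    return (hazy - airlight) / t + airlight

hazy = np.array([[0.9, 0.5], [0.7, 0.3]])
clear = dehaze(hazy, airlight=1.0)
```

The wavelength-adaptive model in the paper would, in effect, use a different effective t per spectral band, since atmospheric scattering is wavelength dependent.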
NASA Astrophysics Data System (ADS)
Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.
2016-06-01
Aerial photography with an unmanned aerial vehicle (UAV) system has typically relied on a remote-control link to a ground control system over a radio frequency (RF) modem in the roughly 430 MHz band. However, this RF-modem approach has limitations in long-distance communication. In this study, a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi interfaces were used to implement a UAV communication module, with which close-range aerial photogrammetry with automatic shooting was carried out. The automatic shooting system consists of an image-capturing device on the drone for the area that needs imaging, together with software for operating and managing the smart camera. The system is composed of automatic shooting using the smart camera's sensors and a shooting catalogue that manages the captured images and their information. The UAV imagery was processed with Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used included Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.
NASA Astrophysics Data System (ADS)
Li, Wenzhuo; Sun, Kaimin; Li, Deren; Bai, Ting
2016-07-01
Unmanned aerial vehicle (UAV) remote sensing technology has come into wide use in recent years. The poor stability of the UAV platform, however, produces more inconsistencies in hue and illumination among UAV images than other more stable platforms. Image dodging is a process used to reduce these inconsistencies caused by different imaging conditions. We propose an algorithm for automatic image dodging of UAV images using two-dimensional radiometric spatial attributes. We use object-level image smoothing to smooth foreground objects in images and acquire an overall reference background image by relative radiometric correction. We apply the Contourlet transform to separate high- and low-frequency sections for every single image, and replace the low-frequency section with the low-frequency section extracted from the corresponding region in the overall reference background image. We apply the inverse Contourlet transform to reconstruct the final dodged images. In this process, a single image must be split into reasonable block sizes with overlaps due to large pixel size. Experimental mosaic results show that our proposed method reduces the uneven distribution of hue and illumination. Moreover, it effectively eliminates dark-bright interstrip effects caused by shadows and vignetting in UAV images while maximally protecting image texture information.
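The low-frequency replacement idea above can be sketched with a box blur standing in for the Contourlet decomposition: keep each image's high-frequency detail, but take the low-frequency illumination from the reference background image. All names and sizes below are illustrative:

```python
import numpy as np

def box_blur(img, k):
    """Simple separable box blur as a stand-in low-pass filter
    (the paper uses a Contourlet decomposition instead)."""
    kernel = np.ones(k) / k
    pad = k // 2
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), kernel, 'valid'), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), kernel, 'valid'), 0, tmp)

def dodge(img, reference, k=5):
    """Keep the image's high-frequency detail but take the low-frequency
    illumination from a reference background image."""
    high = img - box_blur(img, k)
    return box_blur(reference, k) + high

# Dark image with one bright detail, dodged toward a brighter reference level
img = np.full((10, 10), 0.5)
img[5, 5] += 0.2
ref = np.full((10, 10), 0.8)
out = dodge(img, ref)
```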
Aerial vehicles collision avoidance using monocular vision
NASA Astrophysics Data System (ADS)
Balashov, Oleg; Muraviev, Vadim; Strotov, Valery
2016-10-01
In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, a system of equations relating object coordinates in space to the observed image is solved. The solution of this system gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
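One classical monocular cue for time to collision, consistent with (but not taken from) the approach above, uses only the growth rate of the target's apparent size and needs no knowledge of its true dimensions:

```python
def time_to_collision(width_now_px, width_prev_px, dt):
    """For a target on a collision course, apparent size grows as 1/range,
    so TTC ~= w / (dw/dt); the target's physical size cancels out."""
    growth = (width_now_px - width_prev_px) / dt
    if growth <= 0:
        return float('inf')      # not closing
    return width_now_px / growth

ttc = time_to_collision(22.0, 20.0, dt=0.1)   # -> 1.1 s
```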
IR sensors and imagers in networked operations
NASA Astrophysics Data System (ADS)
Breiter, Rainer; Cabanski, Wolfgang
2005-05-01
"Network-centric Warfare" is a common slogan describing an overall concept of networked operation of sensors, information and weapons to gain command and control superiority. Referring to IR sensors, integration and fusion of different channels like day/night or SAR images or the ability to spread image data among various users are typical requirements. Looking for concrete implementations the German Army future infantryman IdZ is an example where a group of ten soldiers build a unit with every soldier equipped with a personal digital assistant (PDA) for information display, day photo camera and a high performance thermal imager for every unit. The challenge to allow networked operation among such a unit is bringing information together and distribution over a capable network. So also AIM's thermal reconnaissance and targeting sight HuntIR which was selected for the IdZ program provides this capabilities by an optional wireless interface. Besides the global approach of Network-centric Warfare network technology can also be an interesting solution for digital image data distribution and signal processing behind the FPA replacing analog video networks or specific point to point interfaces. The resulting architecture can provide capabilities of data fusion from e.g. IR dual-band or IR multicolor sensors. AIM has participated in a German/UK collaboration program to produce a demonstrator for day/IR video distribution via Gigabit Ethernet for vehicle applications. In this study Ethernet technology was chosen for network implementation and a set of electronics was developed for capturing video data of IR and day imagers and Gigabit Ethernet video distribution. The demonstrator setup follows the requirements of current and future vehicles having a set of day and night imager cameras and a crew station with several members. 
Replacing the analog video path by a digital video network also makes it easy to implement embedded training by simply feeding the network with simulation data. The paper addresses the special capabilities, requirements and design considerations of IR sensors and imagers in applications like thermal weapon sights and UAVs for networked operating infantry forces.
Comprehensive UAV agricultural remote-sensing research at Texas A&M University
NASA Astrophysics Data System (ADS)
Thomasson, J. Alex; Shi, Yeyin; Olsenholler, Jeffrey; Valasek, John; Murray, Seth C.; Bishop, Michael P.
2016-05-01
Unmanned aerial vehicles (UAVs) have advantages over manned vehicles for agricultural remote sensing. Flying UAVs is less expensive, is more flexible in scheduling, enables lower altitudes, uses lower speeds, and provides better spatial resolution for imaging. The main disadvantage is that, at lower altitudes and speeds, only small areas can be imaged. However, on large farms with contiguous fields, high-quality images can be collected regularly by using UAVs with appropriate sensing technologies that enable high-quality image mosaics to be created with sufficient metadata and ground-control points. In the United States, rules governing the use of aircraft are promulgated and enforced by the Federal Aviation Administration (FAA), and rules governing UAVs are currently in flux. Operators must apply for appropriate permissions to fly UAVs. In the summer of 2015 Texas A&M University's agricultural research agency, Texas A&M AgriLife Research, embarked on a comprehensive program of remote sensing with UAVs at its 568-ha Brazos Bottom Research Farm. This farm is made up of numerous fields where various crops are grown in plots or complete fields. The crops include cotton, corn, sorghum, and wheat. After gaining FAA permission to fly at the farm, the research team used multiple fixed-wing and rotary-wing UAVs along with various sensors to collect images over all parts of the farm at least once per week. This article reports on details of flight operations and sensing and analysis protocols, and it includes some lessons learned in the process of developing a UAV remote-sensing effort of this sort.
Tracking, aiming, and hitting the UAV with ordinary assault rifle
NASA Astrophysics Data System (ADS)
Racek, František; Baláž, Teodor; Krejčí, Jaroslav; Procházka, Stanislav; Macko, Martin
2017-10-01
The usage of small unmanned aerial vehicles (UAVs) is increasing significantly nowadays. They are being used as carriers of military spy and reconnaissance devices (taking photos, live video streaming and so on), or as carriers of potentially dangerous cargo (intended for destruction and killing). Both ways of utilizing the UAV create the necessity to disable it. From the military point of view, to disable the UAV means to bring it down with the weapon of an ordinary soldier, that is, the assault rifle. This task can be challenging for the soldier because he needs to visually detect and identify the target, track it visually and aim at it. The final success of the soldier's mission depends not only on these visual tasks, but also on the properties of the weapon and ammunition. The paper deals with possible methods for predicting the probability of hitting UAV targets.
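A textbook first-order model for such predictions, assuming a circular target and a circular bivariate normal dispersion of impact points centred on the aim point (not necessarily the paper's model), is the Rayleigh form:

```python
import math

def hit_probability(target_radius_m, sigma_m):
    """Single-shot hit probability on a circular target of radius r when
    the impact point scatters with a circular bivariate normal of standard
    deviation sigma about the aim point: P = 1 - exp(-r^2 / (2*sigma^2))."""
    return 1.0 - math.exp(-target_radius_m**2 / (2.0 * sigma_m**2))

# Hypothetical: 0.3 m effective target radius, 0.5 m total dispersion at range
p = hit_probability(0.3, 0.5)
```

Here sigma would aggregate weapon and ammunition dispersion plus the soldier's aiming and tracking error at the given range.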
NASA Astrophysics Data System (ADS)
Yuan, X.; Wang, X.; Dou, A.; Ding, X.
2014-12-01
As UAVs are widely used in earthquake disaster prevention and mitigation, the efficiency of UAV image processing determines the effectiveness of its application to pre-earthquake disaster prevention, post-earthquake emergency rescue, and disaster assessment. Because of bad weather conditions after a destructive earthquake, wide-field cameras capture images with a serious vignetting phenomenon, which can significantly affect the speed and efficiency of image mosaicking, especially the extraction of pre-earthquake building and geological structure information, and also the accuracy of post-earthquake quantitative damage extraction. In this paper, an improved radial gradient correction method (IRGCM) was developed to reduce the influence of the random distribution of land surface objects on the images, based on the radial gradient correction method (RGCM, Y. Zheng, 2008; 2013). First, a mean-value image was obtained by averaging serial UAV images. It was used as the calibration image instead of single images to obtain the comprehensive vignetting function using RGCM. Then each UAV image was corrected by the comprehensive vignetting function. A case study was done to correct a UAV image sequence obtained in Lushan County after the Ms7.0 Lushan, Sichuan, China earthquake that occurred on April 20, 2013. The results show that the comprehensive vignetting function generated by IRGCM is more robust and accurate in expressing the specific optical response of the camera in a particular setting. Thus it is particularly useful for the correction of a mass of UAV images with non-uniform illumination. Also, the correction process was simplified, and it is faster than conventional methods. After correction, the images have better radial homogeneity and clearer details, which, to a certain extent, reduces the difficulty of image mosaicking and provides a better basis for further analysis and damage information extraction.
Further tests also show that better results were obtained by applying the comprehensive vignetting function to other UAV image sequences from different regions. The research was supported by projects NO.2012BAK15B02 and 2013IES010106.
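The mean-image idea can be sketched as follows: averaging many frames suppresses the (randomly placed) scene content and leaves the stable vignetting falloff, which is then divided out of each frame. This is a simplified stand-in for the IRGCM/RGCM procedure, run on synthetic data with a known radial falloff:

```python
import numpy as np

def vignetting_gain(frames):
    """Average a stack of frames; scene content averages out and the
    stable radial falloff remains. Normalise so the gain peaks at 1."""
    mean_img = np.mean(frames, axis=0)
    return mean_img / mean_img.max()

def correct(frame, gain, eps=1e-6):
    """Divide out the estimated per-pixel vignetting gain."""
    return frame / (gain + eps)

# Synthetic stack: random scenes attenuated by a known radial falloff
rng = np.random.default_rng(1)
h = w = 32
y, x = np.mgrid[0:h, 0:w]
r2 = ((y - h / 2) ** 2 + (x - w / 2) ** 2) / (h / 2) ** 2
falloff = 1.0 - 0.4 * r2 / r2.max()          # corners 40% darker
frames = np.array([rng.uniform(0.4, 0.6, (h, w)) * falloff for _ in range(200)])
gain = vignetting_gain(frames)
fixed = correct(frames[0], gain)             # radially homogeneous again
```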
Spectral Imaging from UAVs Under Varying Illumination Conditions
NASA Astrophysics Data System (ADS)
Hakala, T.; Honkavaara, E.; Saari, H.; Mäkynen, J.; Kaivosoja, J.; Pesonen, L.; Pölönen, I.
2013-08-01
Rapidly developing unmanned aerial vehicles (UAV) have provided the remote sensing community with a new, rapidly deployable tool for small-area monitoring. The progress of small-payload UAVs has introduced greater demand for lightweight aerial payloads. For applications requiring aerial images, a simple consumer camera provides acceptable data. For applications requiring more detailed spectral information about the surface, a new Fabry-Perot interferometer based spectral imaging technology has been developed. This new technology produces tens of successive images of the scene at different wavelength bands in a very short time. These images can be assembled into spectral data cubes with stereoscopic overlaps. In the field, weather conditions vary, and the UAV operator often has to decide between flying in suboptimal conditions and not flying at all. Our objective was to investigate methods for quantitative radiometric processing of images taken under varying illumination conditions, thus expanding the range of weather conditions during which successful imaging flights can be made. A new method based on in situ measurement of irradiance, either on the UAV platform or on the ground, was developed. We tested the methods in a precision agriculture application using realistic data collected in difficult illumination conditions. The internal homogeneity of the original image data (average coefficient of variation in overlapping images) was 0.14-0.18. In the corrected data, the homogeneity was 0.10-0.12 with a correction based on broadband irradiance measured on the UAV, 0.07-0.09 with a correction based on spectral irradiance measured on the ground, and 0.05-0.08 with a radiometric block adjustment based on image data. Our results were very promising, indicating that quantitative UAV-based remote sensing could be operational in diverse conditions, which is a prerequisite for many environmental remote sensing applications.
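At its core, the irradiance-based correction scales each image to a common illumination level using the irradiance measured at exposure time. A minimal sketch, with hypothetical irradiance values in arbitrary units:

```python
import numpy as np

def irradiance_correct(image_dn, irradiance, reference_irradiance):
    """Scale image values to a common illumination level using the
    irradiance measured at exposure time (UAV- or ground-based sensor)."""
    return image_dn * (reference_irradiance / irradiance)

# Two images of the same surface under different illumination
sunny = np.array([100.0, 200.0])
cloudy = np.array([60.0, 120.0])
a = irradiance_correct(sunny, irradiance=1000.0, reference_irradiance=800.0)
b = irradiance_correct(cloudy, irradiance=600.0, reference_irradiance=800.0)
```

After correction the two acquisitions of the same surface agree, which is what reduces the coefficient of variation in the overlapping images.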
Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images
NASA Astrophysics Data System (ADS)
Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao
2016-11-01
Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
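The spanning-tree core of the SCN construction can be sketched with a plain maximum spanning tree over an overlap-weighted camera graph; the paper's hierarchical degree bounds and 3-view guarantee are omitted here:

```python
def maximum_spanning_tree(n, edges):
    """Prim-style maximum spanning tree over a camera connectivity graph.
    edges: dict {(i, j): overlap_weight}; returns the kept edge set.
    Keeping only these edges limits tie-point matching to the most
    strongly connected image pairs."""
    adj = {}
    for (i, j), w in edges.items():
        adj.setdefault(i, []).append((j, w))
        adj.setdefault(j, []).append((i, w))
    in_tree, kept = {0}, set()
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v, w in adj[u]:
                if v not in in_tree and (best is None or w > best[2]):
                    best = (u, v, w)
        u, v, w = best
        in_tree.add(v)
        kept.add(frozenset((u, v)))
    return kept

# Four cameras; weights are hypothetical overlap scores from flight data
edges = {(0, 1): 50, (1, 2): 80, (0, 2): 20, (2, 3): 60, (1, 3): 10}
tree = maximum_spanning_tree(4, edges)
```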
Automatic detection of blurred images in UAV image sets
NASA Astrophysics Data System (ADS)
Sieberth, Till; Wackrow, Rene; Chandler, Jim H.
2016-12-01
Unmanned aerial vehicles (UAV) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing.
Creating a comparison image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), does not on its own provide an absolute number for judging whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets. Two datasets are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values are visually correct, making the algorithm applicable for UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends the field of application of the algorithm.
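The re-blur comparison idea can be sketched as follows: blur the image once more and measure how much its content changes; a sharp image changes much more than one that is already blurred. This is loosely modelled on the SIEDS concept, not the exact published metric:

```python
import numpy as np

def box_blur(img, k=3):
    """Separable box blur used both to build the comparison image
    and to simulate motion blur in the demo below."""
    kernel = np.ones(k) / k
    pad = k // 2
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), kernel, 'valid'), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), kernel, 'valid'), 0, tmp)

def reblur_score(img, k=3):
    """Std of the difference between the image and its re-blurred copy.
    High for sharp images, low for already-blurred ones; like SIEDS,
    it is only meaningful relative to other images of the same dataset."""
    return float(np.std(img - box_blur(img, k)))

rng = np.random.default_rng(2)
sharp = rng.uniform(0, 1, (32, 32))       # stands in for a sharp frame
motion_blurred = box_blur(sharp, 7)       # stands in for a blurred frame
```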
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
Automated content and quality assessment of full-motion-video for the generation of meta data
NASA Astrophysics Data System (ADS)
Harguess, Josh
2015-05-01
Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
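The simplest content-assessment cue of this kind is the inter-frame difference; the thresholds below are illustrative assumptions, and a real system would use motion detection and optical flow as the abstract notes:

```python
import numpy as np

def motion_score(frames):
    """Mean absolute inter-frame difference over a segment; near zero
    flags 'no motion', very large values flag fast camera motion
    (a crude stand-in for optical-flow analysis)."""
    diffs = [np.mean(np.abs(b - a)) for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

def tag_segment(frames, static_thr=0.01, fast_thr=0.25):
    """Meta-data tag for a segment of normalised [0, 1] frames."""
    s = motion_score(frames)
    if s < static_thr:
        return 'static'
    if s > fast_thr:
        return 'fast-motion'
    return 'usable'

static = [np.full((8, 8), 0.5)] * 10                      # no motion at all
rng = np.random.default_rng(3)
shaky = [rng.uniform(0, 1, (8, 8)) for _ in range(10)]    # decorrelated frames
```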
A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching
Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Zhang, Peng
2017-01-01
Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images. PMID:28885547
A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.
Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng
2017-09-08
Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images.
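The single-channel dynamic-programming baseline that both the classical Duplaquet method and the paper's dual-channel variant build on can be sketched as a minimal seam search through a per-pixel difference map of the overlap region:

```python
import numpy as np

def dp_seam(cost):
    """Classic top-to-bottom dynamic-programming seam: each row extends
    the cheapest of the three neighbours above; backtracking from the
    cheapest bottom cell yields the seam (one column index per row)."""
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    seam = [int(np.argmin(acc[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]

# Toy difference map: the cheap diagonal is the obvious seam
cost = np.array([[1., 9., 9.],
                 [9., 1., 9.],
                 [9., 9., 1.]])
seam = dp_seam(cost)   # -> [0, 1, 2]
```

The paper's contribution replaces this single accumulation pass with a stereo dual-channel accumulation and a direction-independent traversal, which avoids the directional bias of the baseline above.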
Coordinated Autonomy for Persistent Presence in Harbor and Riverine Environments
2007-09-30
estimators, and methods designed to deal with real-world problems such as video transmission noise; • OpenCV for basic computer vision functionality as...awareness and forward surveillance of Rocky’s intended path. Aerial video was transmitted to the UAV ground station, where an operator using GIS
Investigation of 1:1,000 Scale Map Generation by Stereo Plotting Using UAV Images
NASA Astrophysics Data System (ADS)
Rhee, S.; Kim, T.
2017-08-01
Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate. It is necessary to adjust initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photographing. Unstable image acquisition may bring uneven stereo coverage, which will eventually result in accuracy loss. Oblique stereo pairs will also cause eye fatigue. The third aspect is the small coverage of UAV images, which raises an efficiency issue for stereo plotting and, more importantly, makes contour generation from UAV images very difficult. This paper discusses effects related to these three aspects. In this study, we tried to generate a 1:1,000 scale map from the dataset using EOPs generated from software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process. We could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist.
In order to analyse the accuracy of the map drawing using stereoscopic vision, we compared the horizontal and vertical position differences between adjacent models after drawing a specific model. The results of the analysis showed that the errors were within the 1:1,000 map specification. Although the Y-parallax can be eliminated, it is still necessary to improve the absolute ground position accuracy in order to apply this technique to actual work. There are a few models in which the difference in height between adjacent models is about 40 cm. We analysed the stability of UAV images by checking angle differences between adjacent images. We also analysed the average area covered by one stereo model and discussed the possible difficulty associated with this narrow coverage. In future work we will consider how to reduce position errors and improve map drawing performance from UAVs.
Supervisory Control of Unmanned Vehicles
2010-04-01
than-ideal video quality (Chen et al., 2007; Chen and Thropp, 2007). Simpson et al. (2004) proposed using a spatial audio display to augment UAV...operator’s SA and discussed its utility for each of the three SA levels. They recommended that both visual and spatial audio information should be...presented concurrently. They also suggested that presenting the audio information spatially may enhance UAV operator’s sense of presence (i.e
Feasibility Study On Missile Launch Detection And Trajectory Tracking
2016-09-01
Vehicles (UAVs) in military operations, their role in a missile defense operation is not well defined. The simulation program discussed in this thesis ...targeting information to an attacking UAV to reliably intercept the missile. B. FURTHER STUDIES The simulation program can be enhanced to improve the...intercept the threat. This thesis explores the challenges in creating a simulation program to process video footage from an unstable platform and the
2015-06-01
GEOINT geospatial intelligence GFC ground force commander GPS global positioning system GUI graphical user interface HA/DR humanitarian...transport stream UAS unmanned aerial system . See UAV. UAV unmanned aerial vehicle. See UAS. VM virtual machine VMU Marine Unmanned Aerial Vehicle... Unmanned Air Systems (UASs). Current programs promise to dramatically increase the number of FMV feeds in the near future. However, there are too
Detection of the power lines in UAV remote sensed images using spectral-spatial methods.
Bhola, Rishav; Krishna, Nandigam Hari; Ramesh, K N; Senthilnath, J; Anand, Gautham
2018-01-15
In this paper, detection of power lines in images acquired by Unmanned Aerial Vehicle (UAV) based remote sensing is carried out using spectral-spatial methods. Spectral clustering was performed using Kmeans and the Expectation Maximization (EM) algorithm to classify the pixels into power line and non-power line classes. The spectral clustering methods used in this study are parametric in nature; to automate the choice of the number of clusters, the Davies-Bouldin index (DBI) is used. The UAV remote sensed image is clustered into the number of clusters determined by DBI, and the k-clustered image is then merged into 2 clusters (power lines and non-power lines). Further, spatial segmentation was performed using morphological and geometric operations to eliminate the non-power line regions. In this study, UAV images acquired at different altitudes and angles were analyzed to validate the robustness of the proposed method. It was observed that EM with spatial segmentation (EM-Seg) performed better than Kmeans with spatial segmentation (Kmeans-Seg) on most of the UAV images. Copyright © 2017 Elsevier Ltd. All rights reserved.
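The clustering-with-DBI step described above can be sketched with scikit-learn; the cluster range and parameters here are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def cluster_pixels(image, k_range=range(2, 7)):
    """Cluster image pixels by spectral value, choosing the number of
    clusters via the Davies-Bouldin index (lower is better) -- a rough
    analogue of the paper's DBI-automated spectral clustering step.

    `image` is (H, W, C); returns the (H, W) label map of the best k.
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    best_labels, best_score = None, np.inf
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
        score = davies_bouldin_score(pixels, labels)
        if score < best_score:
            best_score, best_labels = score, labels
    return best_labels.reshape(image.shape[:2])
```

The resulting label map would then be merged into two classes and cleaned up by the morphological/geometric spatial segmentation the abstract describes.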
NASA Astrophysics Data System (ADS)
Chen, Y. L.
2015-12-01
Measurement technologies for the velocity of river flow are divided into intrusive and nonintrusive methods. Intrusive methods require in-field operations; their measuring processes are time-consuming and risk damage to both operator and instrument. Nonintrusive methods require fewer operators and reduce instrument damage by avoiding direct contact with the flow. Nonintrusive measurements may use radar or image velocimetry to measure the velocities at the surface of the water flow. Image velocimetry, such as large scale particle image velocimetry (LSPIV), yields not only point velocities but the velocities over an area simultaneously. Flow properties over an area hold the promise of providing spatial information on flow fields. This study constructs a mobile system, UAV-LSPIV, by combining an unmanned aerial vehicle (UAV) with LSPIV to measure flows in the field. The mobile system consists of a six-rotor UAV helicopter, a Sony NEX-5T camera, a gimbal, an image transfer device, a ground station and a remote control device. The actuated gimbal helps keep the camera lens orthogonal to the water surface and reduces image distortion. The image transfer device allows the captured images to be monitored instantly. The operator controls the UAV with the remote control device through the ground station and can retrieve flight data such as flying height and the GPS coordinates of the UAV. The mobile system was then applied to field experiments. The deviation between velocities measured by UAV-LSPIV and by a handheld Acoustic Doppler Velocimeter (ADV) in the field experiments was under 8%. These results suggest that UAV-LSPIV can be effectively applied to surface flow studies.
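The core of any PIV/LSPIV pipeline is estimating the displacement of water-surface texture between consecutive frames, typically via cross-correlation of interrogation windows; velocity is then displacement × ground sample distance / frame interval. A minimal FFT-based sketch (integer-pixel shifts only, no sub-pixel refinement, not the system described in the paper):

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Estimate the integer pixel displacement between two interrogation
    windows via FFT cross-correlation: the peak of the correlation
    surface gives the shift that best aligns frame_b onto frame_a."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap offsets larger than half the window size to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

In an LSPIV application this would be run per window over the orthorectified frames to build a surface velocity field.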
Application Possibility of Smartphone as Payload for Photogrammetric UAV System
NASA Astrophysics Data System (ADS)
Yun, M. H.; Kim, J.; Seo, D.; Lee, J.; Choi, C.
2012-07-01
A smartphone can not only be operated under a 3G network environment anytime and anyplace but also costs less than existing photogrammetric UAV payloads, while providing high-resolution images and real-time 3D location and attitude data from a variety of built-in sensors. This study aims to assess the possibility of using a smartphone as a payload for a photogrammetric UAV system. Prior to such assessment, a smartphone-based photogrammetric UAV system application was developed, through which real-time image, location and attitude data were obtained using the smartphone under both static and dynamic conditions. Subsequently, the accuracy of the location and attitude data obtained and sent by this system was assessed. The smartphone images were converted into ortho-images through image triangulation, which was conducted both with and without consideration of the interior orientation (IO) parameters determined by camera calibration. When IO parameters were taken into account in the static experiment, the triangulation results for every smartphone type were within 1.5 pixels (RMSE), an improvement of at least 35% compared to when IO parameters were not taken into account. In contrast, the improvement in triangulation accuracy from considering IO parameters in the dynamic experiment was not significant compared to the static experiment. This was due to the significant impact of vibration and sudden UAV attitude changes on the autofocus actuator of the smartphone's built-in camera under dynamic conditions, which also appears to have a negative impact on image-based DEM generation. Considering these findings, it is suggested that a smartphone is very feasible as a payload for a UAV system. It is also expected that smartphones may be mounted on existing UAVs, playing significant direct or indirect roles.
Positional Quality Assessment of Orthophotos Obtained from Sensors Onboard Multi-Rotor UAV Platforms
Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer
2014-01-01
In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart. PMID:25587877
Optimal UAV Path Planning for Tracking a Moving Ground Vehicle with a Gimbaled Camera
2014-03-27
micro SD card slot to record all video taken at 1080P resolution. This feature allows the team to record the high definition video taken by the...Inequality constraints 64 h=[]; %Equality constraints 104 Bibliography 1. “ DIY Drones: Official ArduPlane Repository”, 2013. URL https://code
Terrorist and Insurgent Unmanned Aerial Vehicles: Use, Potentials, and Military Implications
2015-08-01
Strategic. While the drone swarms of normal and micro-sized UAVs projected in this threat scenario may still be a few decades out and possibly...craft for reconnaissance and propaganda video purposes. Such groups are still very much in an experimental phase of using these craft and possess...technology trends influencing their potential uses, three red teaming threat scenarios have been created for early warning purposes: 1) Single UAV
NASA Astrophysics Data System (ADS)
Chianucci, Francesco; Disperati, Leonardo; Guzzi, Donatella; Bianchini, Daniele; Nardino, Vanni; Lastri, Cinzia; Rindinella, Andrea; Corona, Piermaria
2016-05-01
Accurate estimates of forest canopy are essential for the characterization of forest ecosystems. Remotely-sensed techniques provide a unique way to obtain estimates over spatially extensive areas, but their application is limited by the spectral and temporal resolution available from these systems, which is often not suited to meet regional or local objectives. The use of unmanned aerial vehicles (UAVs) as remote sensing platforms has recently gained increasing attention, but their applications in forestry are still at an experimental stage. In this study we describe a methodology to obtain rapid and reliable estimates of forest canopy from a small UAV equipped with a commercial RGB camera. The red, green and blue digital numbers were converted to the green leaf algorithm (GLA) index and to the CIE L*a*b* colour space to obtain estimates of canopy cover, foliage clumping and leaf area index (L) from aerial images. Canopy attributes were compared with in situ estimates obtained from two digital canopy photographic techniques (cover and fisheye photography). The method was tested in beech forests. UAV images accurately quantified canopy cover even in very dense stand conditions, despite a tendency not to detect small within-crown gaps, so that the measured quantity was much closer to the crown cover estimated from in situ cover photography. Estimates of L from UAV images agreed significantly with those obtained from fisheye images, but the accuracy of the UAV estimates depends on an appropriate assumption about the leaf angle distribution. We conclude that true colour UAV images can be effectively used to obtain rapid, cheap and meaningful estimates of forest canopy attributes at medium-large scales. UAVs combine the advantage of high resolution imagery with quick turnaround times, and are therefore suitable for routine forest stand monitoring and real-time applications.
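The GLA conversion mentioned above is a simple band ratio, GLA = (2G − R − B) / (2G + R + B); a sketch of it together with a threshold-based canopy-cover estimate (the threshold of 0 is an illustrative choice, not necessarily the authors'):

```python
import numpy as np

def green_leaf_index(rgb):
    """Green Leaf Algorithm (GLA) index from RGB digital numbers:
    GLA = (2G - R - B) / (2G + R + B). Vegetation pixels tend to come
    out positive and soil/background non-positive."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    denom = 2 * g + r + b
    return np.divide(2 * g - r - b, denom,
                     out=np.zeros_like(denom), where=denom != 0)

def canopy_cover(rgb, threshold=0.0):
    """Fraction of pixels classified as canopy by thresholding GLA."""
    return float((green_leaf_index(rgb) > threshold).mean())
```

Gap-fraction-based quantities such as foliage clumping and L would then be derived from the classified gap pattern, which this sketch does not attempt.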
Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel
2013-01-01
A new aerial platform has recently emerged for image acquisition: the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible-light camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI index. These results suggest that a balance between spectral and spatial resolution is needed to optimise the flight mission according to each agronomic objective, as affected by the size of the smallest object to be discriminated (weed plants or weed patches).
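The sensitivity of pixel size to flight altitude follows directly from the ground sample distance (GSD) relation GSD = altitude × pixel pitch / focal length; a small sketch with illustrative sensor parameters (not those of the cameras used in the study):

```python
def ground_sample_distance(altitude_m, pixel_pitch_um, focal_length_mm):
    """Ground sample distance in cm/pixel for a nadir image:
    GSD = altitude * pixel pitch / focal length. Units are converted
    so the inputs match common datasheet figures."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0
```

For example, a hypothetical 5 µm pixel pitch and 10 mm lens give 1.5 cm/pixel at 30 m but 5 cm/pixel at 100 m, which is why the finest weed-discrimination results were obtained at the lowest altitude, at the cost of many more images per field.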
UAV Photogrammetric Workflows: A Best Practice Guideline
NASA Astrophysics Data System (ADS)
Federman, A.; Santana Quintero, M.; Kretz, S.; Gregg, J.; Lengies, M.; Ouimet, C.; Laliberte, J.
2017-08-01
The increasing commercialization of unmanned aerial vehicles (UAVs) has opened the possibility of performing low-cost aerial image acquisition for the documentation of cultural heritage sites through UAV photogrammetry. The flying of UAVs in Canada is regulated by Transport Canada and requires a Special Flight Operations Certificate (SFOC). Various image acquisition techniques are explored in this review, as well as the software used to register the data. A general workflow procedure has been formulated based on the literature reviewed. A case study of using UAV photogrammetry at Prince of Wales Fort is discussed, specifically in relation to the data acquisition and processing. Some gaps in the literature reviewed highlight the need for streamlining the SFOC application process and incorporating UAVs into cultural heritage documentation courses.
Augmented Reality Tool for the Situational Awareness Improvement of UAV Operators
Ruano, Susana; Cuevas, Carlos; Gallego, Guillermo; García, Narciso
2017-01-01
Unmanned Aerial Vehicles (UAVs) are being extensively used nowadays. Therefore, pilots of traditional aerial platforms should adapt their skills to operate them from a Ground Control Station (GCS). Common GCSs provide information in separate screens: one presents the video stream while the other displays information about the mission plan and information coming from other sensors. To avoid the burden of fusing information displayed in the two screens, an Augmented Reality (AR) tool is proposed in this paper. The AR system has two functionalities for Medium-Altitude Long-Endurance (MALE) UAVs: route orientation and target identification. Route orientation allows the operator to identify the upcoming waypoints and the path that the UAV is going to follow. Target identification allows fast target localization, even in the presence of occlusions. The AR tool is implemented following the North Atlantic Treaty Organization (NATO) standards so that it can be used in different GCSs. The experiments show that the AR tool significantly improves the situational awareness of UAV operators. PMID:28178189
Assessing the Accuracy of Ortho-images Using a Photogrammetric Unmanned Aerial System
NASA Astrophysics Data System (ADS)
Jeong, H. H.; Park, J. W.; Kim, J. S.; Choi, C. U.
2016-06-01
A smart camera can not only be operated under a network environment anytime and anyplace but also costs less than existing photogrammetric UAV payloads, since it provides high-resolution images and real-time 3D location and attitude data from a variety of built-in sensors. In this study's proposed UAV photogrammetric method, a low-cost UAV and a smart camera were used. The elements of interior orientation were acquired through camera calibration, and image triangulation was conducted both with and without consideration of the interior orientation (IO) parameters determined by the calibration. A Digital Elevation Model (DEM) was constructed using the image data photographed over the target area and the results of the ground control point survey. This study also analyzes the proposed method's applicability by comparing an ortho-image with the results of the ground control point survey. Considering these findings, it is suggested that a smart camera is very feasible as a payload for a UAV system and may be mounted on existing UAVs, playing significant direct or indirect roles.
Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network
NASA Astrophysics Data System (ADS)
Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao
2018-03-01
Aiming at the shortcomings of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the AGA-RBF algorithm and its solving steps are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network separately with the AGA and LMS algorithms. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, high running speed and strong generalization ability.
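For context, the conventional baseline that such learned schemes improve on is a polynomial geometric correction fitted to ground control points by least squares; a second-order numpy sketch (the AGA-RBF network itself is not reproduced here):

```python
import numpy as np

def poly2_design(xy):
    """Second-order polynomial terms [1, x, y, x^2, xy, y^2] per point."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def fit_poly_correction(image_xy, ground_xy):
    """Least-squares fit of a 2nd-order polynomial mapping from image
    coordinates to ground coordinates using control points -- the
    classical geometry correction an RBF-based model aims to beat.
    Returns a function mapping (N, 2) image points to ground points."""
    coef, *_ = np.linalg.lstsq(poly2_design(image_xy), ground_xy, rcond=None)
    return lambda xy: poly2_design(xy) @ coef
```

Any affine distortion is reproduced exactly by this model (affine maps are a subset of second-order polynomials); the appeal of an RBF network is handling locally varying distortion that a single global polynomial cannot.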
NASA Astrophysics Data System (ADS)
Murtiyoso, A.; Grussenmeyer, P.; Freville, T.
2017-02-01
Close-range photogrammetry is an image-based technique which has often been used for the 3D documentation of heritage objects. Recently, advances in the field of image processing and UAVs (Unmanned Aerial Vehicles) have resulted in a renewed interest in this technique. However, commercially ready-to-use UAVs are often equipped with smaller sensors in order to minimize payload, and the quality of the documentation is still an issue. In this research, two commercial UAVs (the Sensefly Albris and DJI Phantom 3 Professional) were set up to record the 19th century St-Pierre-le-Jeune church in Strasbourg, France. Several software solutions (commercial and open source) were used to compare both UAVs' images in terms of calibration, accuracy of external orientation, as well as dense matching. Results show some instability with regard to the calibration of the Phantom 3, while the Albris had issues with its aerotriangulation results. Despite these shortcomings, both UAVs succeeded in producing dense point clouds accurate to within a few centimeters, which is largely sufficient for the purposes of a city 3D GIS (Geographical Information System). The acquisition of close-range images using UAVs also provides greater LoD flexibility in processing. These advantages over other methods such as TLS (Terrestrial Laser Scanning) or terrestrial close-range photogrammetry can be exploited so that these techniques complement each other.
An Intuitive Graphical User Interface for Small UAS
2013-05-01
reduced from two to one. The stock displays, including video with text overlay on one and FalconView on the other, are replaced with a single, graphics...INTRODUCTION Tactical UAVs such as the Raven, Puma and Wasp are often used by dismounted warfighters on missions that require a high degree of mobility by...the operators on the ground. The current ground control stations (GCS) for the Wasp, Raven and Puma tactical UAVs require two people and two user
USDA-ARS?s Scientific Manuscript database
We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m2 rectangular dry lot, either in pairs (pilot tests) or individually (...
Authenticity and privacy of a team of mini-UAVs by means of nonlinear recursive shuffling
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Ming-Kai; Baier, Patrick; Lee, Ting N.; Buss, James R.; Madan, Rabinder N.
2006-04-01
We have developed a real-time EOIR video counter-jittering sub-pixel image correction algorithm for a single mini-Unmanned Air Vehicle (m-UAV) for surveillance and communication (Szu et al., SPIE Proc. Vol. 5439, pp. 183-197, April 12, 2004). In this paper, we wish to plan and execute the next challenge: a team of m-UAVs. The minimum unit for robust chain-saw communication must have the connectivity of five second-nearest-neighbor members with a sliding, arbitrary center. The team members require an authenticity check (AC) among a unit of five in order to carry out jittering mosaic image processing (JMIP) on board every m-UAV without gimbals. The JMIP does not use any NSA security protocol ("cardinal rule: no-man, no-NSA codec"). Besides team flight dynamics (Szu et al., "Nanotech applied to aerospace and aeronautics: swarming," AIAA 2005-6933, Sept. 26-29, 2005), several new modules (AOA, AAM, DSK, AC, FPGA) are designed, and the JMIP must provide its own command, control and communication system, safeguarded by the authenticity and privacy checks presented in this paper. We propose a Nonlinear Invertible (deck-of-cards) Shuffler (NIS) algorithm, which has a Feistel structure similar to the Data Encryption Standard (DES) developed by Feistel et al. at IBM in the 1970s; but DES is modified here by a set of chaotic dynamical shuffler keys (DSK), re-computable lookup tables generated by each on-board Chaotic Neural Network (CNN). The initializations of the CNN are periodically provided by the private version of RSA from the ground control to team members, to avoid any inadvertent failure of a broken chain among m-UAVs. Efficient utilization of communication bandwidth is necessary for a constantly moving and jittering m-UAV platform; e.g., the wireless LAN protocol wastes bandwidth due to a constant need for hand-shaking procedures (as demonstrated by NRL; though sensible for PCs and 3rd-generation mobile phones).
Thus, the chaotic DSK must be embedded in a fault-tolerant Neural Network Associative Memory for the error-resilient concealment of re-sent mosaic image chips. However, the RSA public and private keys, chaos type and initial value are given on-site or sent to each m-UAV so that each platform knows only its own private key. AC among the five team members is possible using a reverse RSA protocol: a hashed image chip is coded with the sender's private key, which nobody else knows, before being sent to neighbors; the receiver can check the content by using the sender's public key and comparing the decrypted result with on-board image chips. We discover a fundamental problem of the digital chaos approach in a finite state machine, for which a fallacy test of the discrete version with a finite number of bits is needed, as James Yorke advocated early on. Thus, our proposed chaotic NIS for bit-stream protection becomes desirable for further mixing the digital CNN outputs. The fault tolerance and parallelism of an Artificial Neural Network Associative Memory are necessary attributes for neighborhood-smooth image restoration. The associated computational cost of O(N²) is deemed worthwhile, because the N-D chaotic CNN can further provide privacy for just the lost image chip (N = 8x8) re-sent at the request of its neighbors, and it performs better than a simple 1-D logistic map. We give a preliminary design of low-end FPGA firmware indicating that computing everything on board is feasible.
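The Feistel structure referred to above is invertible for any round function, which is exactly what lets a chaotic key schedule replace DES's fixed one. A minimal sketch with a logistic-map key generator (the round function and map parameters are our own illustrative choices, not the authors'):

```python
def logistic_keys(seed, rounds, r=3.99):
    """Derive 32-bit round keys from a logistic map x -> r*x*(1-x),
    the kind of chaotic generator behind a dynamical shuffler key."""
    x, keys = seed, []
    for _ in range(rounds):
        x = r * x * (1.0 - x)
        keys.append(int(x * 0xFFFFFFFF) & 0xFFFFFFFF)
    return keys

def _round(half, key):
    # Illustrative round function (multiplicative hash mixed with the
    # key); Feistel networks are invertible whatever this function is.
    return (half * 2654435761 ^ key) & 0xFFFFFFFF

def encrypt(block64, keys):
    """Feistel pass over a 64-bit block: (L, R) -> (R, L ^ F(R, k))."""
    left, right = block64 >> 32, block64 & 0xFFFFFFFF
    for k in keys:
        left, right = right, left ^ _round(right, k)
    return (left << 32) | right

def decrypt(block64, keys):
    """Undo the rounds in reverse order using the same round function."""
    left, right = block64 >> 32, block64 & 0xFFFFFFFF
    for k in reversed(keys):
        right, left = left, right ^ _round(left, k)
    return (left << 32) | right
```

Note this is a toy for illustrating invertibility, not a vetted cipher; the abstract's point is precisely that chaotic round keys can be regenerated on board rather than distributed.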
Multiple Event Localization in a Sparse Acoustic Sensor Network Using UAVs as Data Mules
2012-12-01
necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. Path Acoustic Sensor Communication Footprint...a Microhard radio to forward the ToAs to the mule-UAV. Two Procerus Unicorn UAVs were used with different payloads. The imaging-UAV was equipped
NASA Astrophysics Data System (ADS)
Murtiyoso, A.; Koehl, M.; Grussenmeyer, P.; Freville, T.
2017-08-01
Photogrammetry has seen an increase in the use of UAVs (Unmanned Aerial Vehicles) for both large and smaller scale cartography. The use of UAVs is also advantageous because it may be used for tasks requiring quick response, including in the case of the inspection and monitoring of buildings. The objective of the project is to study the acquisition and processing protocols which exist in the literature and to adapt them for UAV projects. This implies a study on the calibration of the sensors, flight planning, comparison of software solutions, data management, and analysis on the different products of a UAV project. Two historical buildings of the city of Strasbourg were used as case studies: a part of the Rohan Palace façade and the St-Pierre-le-Jeune Catholic church. In addition, a preliminary test was performed on the Josephine Pavilion. Two UAVs were used in this research; namely the Sensefly Albris and the DJI Phantom 3 Professional. The experiments have shown that the calibration parameters tend to be unstable for small sensors. Furthermore, the dense matching of images remains a particular problem to address in a close range photogrammetry project, more so in the presence of noise on the images. Data management in cases where the number of images is high is also very important. The UAV is nevertheless a suitable solution for the surveying and recording of historical buildings because it is able to take images from points of view which are normally inaccessible to classical terrestrial techniques.
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations by using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher quality images and, accordingly, higher accuracy classification results.
Automated UAV-based mapping for airborne reconnaissance and video exploitation
NASA Astrophysics Data System (ADS)
Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre
2009-05-01
Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and other tasks. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received data. At MDA, we have developed a suite of tools towards automated video exploitation including calibration, visualization, change detection and 3D reconstruction. The on-going work is to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field-of-view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting any improvised explosive devices (IEDs). However, it is tedious and difficult to compare video clips to look for differences manually. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand the scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.
The use of unmanned aerial vehicle imagery in intertidal monitoring
NASA Astrophysics Data System (ADS)
Konar, Brenda; Iken, Katrin
2018-01-01
Intertidal monitoring projects are often limited in their practicality because traditional methods, such as visual surveys or removal of biota, are limited in the spatial extent over which data can be collected. Here, we used imagery from a small unmanned aerial vehicle (sUAV) to test its potential use in rocky intertidal and intertidal seagrass surveys in the northern Gulf of Alaska. Images captured by the sUAV in the high, mid and low intertidal strata on a rocky beach and within a seagrass bed were compared to data derived concurrently from observer visual surveys and to images taken by observers on the ground. Observer visual data always resulted in the highest taxon richness, but when observer data were aggregated to the lower taxonomic resolution obtained by the sUAV images, overall community composition was mostly similar between the two methods. Ground camera images and sUAV images yielded mostly comparable community composition despite the typically higher taxonomic resolution obtained by the ground camera. We conclude that monitoring goals or research questions that can be answered at a relatively coarse taxonomic level can benefit from an sUAV-based approach because it allows much larger spatial coverage within the time constraints of a low-tide interval than is possible by observers on the ground. We demonstrated this large-scale applicability by using sUAV images to develop maps that show the distribution patterns and patchiness of seagrass.
SITHON: An Airborne Fire Detection System Compliant with Operational Tactical Requirements
Kontoes, Charalabos; Keramitsoglou, Iphigenia; Sifakis, Nicolaos; Konstantinidis, Pavlos
2009-01-01
In response to the urgent need of fire managers for timely information on fire location and extent, the SITHON system was developed. SITHON is a fully digital thermal imaging system, integrating INS/GPS and a digital camera, designed to provide timely positioned and projected thermal images and video data streams rapidly integrated into the GIS operated by Crisis Control Centres. This article presents in detail the hardware and software components of SITHON, and demonstrates the first encouraging results of test flights over the Sithonia Peninsula in Northern Greece. It is envisaged that the SITHON system will soon be operated onboard various airborne platforms, including fire brigade airplanes and helicopters, as well as on UAV platforms owned and operated by the Greek Air Force. PMID:22399963
Building Damage Extraction Triggered by Earthquake Using the Uav Imagery
NASA Astrophysics Data System (ADS)
Li, S.; Tang, H.
2018-04-01
When extracting building damage information, we can only determine whether a building has collapsed from post-earthquake satellite images. Even though satellite images have sub-meter resolution, identifying slightly damaged buildings remains a challenge. As complementary data to satellite images, UAV images have unique advantages, such as greater flexibility and higher resolution. In this paper, according to the spectral features of UAV images and the morphological features of the reconstructed point clouds, building damage was classified into four levels: basically intact buildings, slightly damaged buildings, partially collapsed buildings and totally collapsed buildings, and rules for the damage grades are given. In particular, slightly damaged buildings are identified using detected roof holes. To verify the approach, we conducted experiments on the cases of the Wenchuan and Ya'an earthquakes. By analyzing the post-earthquake UAV images of the two earthquakes, building damage was classified into the four levels, and quantitative statistics of the damaged buildings are given in the experiments.
Correction of projective distortion in long-image-sequence mosaics without prior information
NASA Astrophysics Data System (ADS)
Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie
2010-04-01
Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. 
The proposed method is shown to be effective and suitable for real-time implementation.
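The scale-reset step described above can be sketched as follows (a minimal illustration with assumed values, not the authors' implementation): the 2x2 linear part of the estimated affine model is decomposed by SVD, the overall scale factor is taken as the geometric mean of the singular values, and the matrix is divided by that factor so the transformed frame size stays unchanged.

```python
import numpy as np

def remove_affine_scale(A):
    """Sketch of scale removal for the 2x2 linear part of an affine
    inter-frame transform. SVD gives A = U @ diag(s) @ Vt; the overall
    scale is taken as the geometric mean of the singular values and
    divided out, so the corrected transform preserves image size."""
    U, s, Vt = np.linalg.svd(A)
    scale = np.sqrt(s[0] * s[1])   # overall scale factor, typically near 1
    return A / scale, scale        # rescaled matrix has unit scale

# Example: a near-identity affine with a slight shrink per frame
A = np.array([[0.98, 0.01],
              [-0.01, 0.98]])
A_fixed, scale = remove_affine_scale(A)
```

After this correction, chaining many inter-frame transforms no longer shrinks the pasted frames, at the cost of a small, usually acceptable matching error.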
2013-06-01
fixed sensors located along the perimeter of the FOB. The video is analyzed for facial recognition to alert the Network Operations Center (NOC...the UAV is processed on board for facial recognition and video for behavior analysis is sent directly to the Network Operations Center (NOC). Video...captured by the fixed sensors are sent directly to the NOC for facial recognition and behavior analysis processing. The multi-directional signal
Construction of an unmanned aerial vehicle remote sensing system for crop monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seungtaek; Ko, Jonghan; Kim, Mijeong; Kim, Jongkwon
2016-04-01
We constructed a lightweight unmanned aerial vehicle (UAV) remote sensing system and determined the ideal method for equipment setup, image acquisition, and image processing. Fields of rice paddy (Oryza sativa cv. Unkwang) grown under three different nitrogen (N) treatments of 0, 50, or 115 kg/ha were monitored at Chonnam National University, Gwangju, Republic of Korea, in 2013. A multispectral camera was used to acquire UAV images from the study site. Atmospheric correction of these images was completed using the empirical line method, and three-point (black, gray, and white) calibration boards were used as pseudo references. Evaluation of our corrected UAV-based remote sensing data revealed that correction efficiency and root mean square errors ranged from 0.77 to 0.95 and 0.01 to 0.05, respectively. The time series maps of simulated normalized difference vegetation index (NDVI) produced using the UAV images reproduced field variations of NDVI reasonably well, both within and between the different N treatments. We concluded that the UAV-based remote sensing technology utilized in this study is potentially an easy and simple way to quantitatively obtain reliable two-dimensional remote sensing information on crop growth.
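The empirical line correction described above can be sketched as a per-band linear fit between the digital numbers (DN) of the three calibration boards and their known reflectances; the panel reflectances and DNs below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

# Empirical line method (minimal sketch): fit DN -> surface reflectance
# per band using three calibration panels (black, gray, white).
panel_reflectance = np.array([0.03, 0.22, 0.85])     # assumed panel reflectances
panel_dn          = np.array([410.0, 9200.0, 34500.0])  # assumed at-sensor DNs

# Least-squares line: reflectance = gain * DN + offset
gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)

def dn_to_reflectance(dn):
    """Apply the fitted empirical line to raw digital numbers."""
    return gain * dn + offset

# Apply to an image band (placeholder pixel values)
band = np.array([[500.0, 12000.0], [20000.0, 34000.0]])
refl = dn_to_reflectance(band)
```

The same fit would be repeated independently for each spectral band before computing indices such as NDVI.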
Implementation and Testing of Low Cost Uav Platform for Orthophoto Imaging
NASA Astrophysics Data System (ADS)
Brucas, D.; Suziedelyte-Visockiene, J.; Ragauskas, U.; Berteska, E.; Rudinskas, D.
2013-08-01
Implementation of Unmanned Aerial Vehicles for civilian applications is rapidly increasing. Technologies which were expensive and available only for military use have recently spread to the civilian market. There is a vast number of low-cost open-source components and systems available for implementation on UAVs. The use of low-cost hobby and open-source components considerably decreases UAV price, though in some cases it compromises reliability. At the Space Science and Technology Institute (SSTI), in collaboration with Vilnius Gediminas Technical University (VGTU), research has been performed on the construction and implementation of small UAVs composed of low-cost open-source components (and in-house developments). The most obvious and simple application of such UAVs is orthophoto imaging, with data download and processing after the flight. The construction and implementation of the UAVs, flight experience, data processing and data implementation are covered further in the paper and presentation.
2008-03-01
wearing eyeglasses or contacts to achieve 20/20 vision would not constitute an automatic rejection to operate a UAV. Therefore, the reduced medical...Current selection methods may in fact not provide the fit for Predator needs because they do not really test what the Predator pilot really requires to do...but more importantly, how the information fits into what we already know-- our knowledge which has been previously obtained based on our experiences
Preliminary Study on Earthquake Surface Rupture Extraction from Uav Images
NASA Astrophysics Data System (ADS)
Yuan, X.; Wang, X.; Ding, X.; Wu, X.; Dou, A.; Wang, S.
2018-04-01
Because of their advantages of low cost, light weight and the ability to photograph below cloud cover, UAVs have been widely used in seismic geomorphology research in recent years. An earthquake surface rupture is a typical seismotectonic landform that reflects the dynamic and kinematic characteristics of crustal movement. Quick identification of earthquake surface ruptures is of great significance for understanding the mechanism of earthquake occurrence and the distribution and scale of disasters. Using an integrated differential UAV platform, an image series with accurate POS data was acquired around the former urban area (Qushan town) of Beichuan County, an area seriously stricken by the 2008 Wenchuan Ms8.0 earthquake. Based on multi-view 3D reconstruction, a high-resolution DSM and DOM were obtained from the differential UAV images. From the shaded-relief map and aspect map derived from the DSM, the earthquake surface rupture was extracted and analyzed. The results show that the surface rupture can still be identified from the UAV images even though a long time has elapsed since the earthquake; its middle segment is characterized by vertical movement caused by compressional deformation on the fault planes.
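The shaded-relief and aspect products used above can be sketched from a DSM via elevation gradients; the tiny DSM, sun geometry, and axis conventions below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# Sketch of DSM-derived shaded-relief (hillshade) and aspect maps.
# Conventions assumed: columns increase eastward, rows southward,
# aspect is the compass bearing of the downslope (0 = north, 90 = east).
def hillshade_and_aspect(dsm, cellsize=1.0,
                         sun_azimuth_deg=315.0, sun_altitude_deg=45.0):
    dz_drow, dz_dcol = np.gradient(dsm, cellsize)
    slope = np.arctan(np.hypot(dz_dcol, dz_drow))
    aspect = np.arctan2(-dz_dcol, dz_drow)         # downslope compass bearing
    az = np.radians(sun_azimuth_deg)
    alt = np.radians(sun_altitude_deg)
    # Slopes facing the sun (aspect near az) are brightened
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0), np.degrees(aspect) % 360.0

# A tiny synthetic DSM whose heights rise eastward (west-facing slope)
dsm = np.array([[10.0, 10.5, 11.0],
                [10.0, 10.5, 11.0],
                [10.0, 10.5, 11.0]])
shade, aspect = hillshade_and_aspect(dsm)          # aspect is 270 everywhere
```

Linear rupture traces show up as abrupt discontinuities in both products, which is what makes them useful for extraction.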
SUSI 62 A Robust and Safe Parachute Uav with Long Flight Time and Good Payload
NASA Astrophysics Data System (ADS)
Thamm, H. P.
2011-09-01
In many research areas in the geo-sciences (erosion, land use, land cover change, etc.) or applications (e.g. forest management, mining, land management etc.) there is a demand for remote sensing images of a very high spatial and temporal resolution. Due to the high costs of classic aerial photo campaigns, the use of a UAV is a promising option for obtaining the desired remotely sensed information at the time it is needed. However, the UAV must be easy to operate, safe, robust and should have a high payload and long flight time. For that purpose, the parachute UAV SUSI 62 was developed. It consists of a steel frame with a powerful 62 cm³ 2-stroke engine and a parachute wing. The frame can be easily disassembled for transportation or to replace parts. On the frame there is a gimbal-mounted sensor carrier where different sensors, standard SLR cameras and/or multi-spectral and thermal sensors can be mounted. Due to the design of the parachute, the SUSI 62 is very easy to control. Two different parachute sizes are available for different wind speed conditions. The SUSI 62 has a payload of up to 8 kg, providing options to use different sensors at the same time or to extend flight duration. The SUSI 62 needs a runway of between 10 m and 50 m, depending on the wind conditions. The maximum flight speed is approximately 50 km/h. It can be operated in a wind speed of up to 6 m/s. The design of the system utilising a parachute UAV makes it comparatively safe, as a failure of the electronics or the remote control only results in the UAV coming to the ground at a slow speed. The video signal from the camera, the GPS coordinates and other flight parameters are transmitted to the ground station in real time. An autopilot is available, which guarantees that the area of investigation is covered at the desired resolution and overlap.
The robustly designed SUSI 62 has been used successfully in Europe, Africa and Australia for scientific projects and also for agricultural, forestry and industrial applications.
Cross Validation on the Equality of Uav-Based and Contour-Based Dems
NASA Astrophysics Data System (ADS)
Ma, R.; Xu, Z.; Wu, L.; Liu, S.
2018-04-01
Unmanned Aerial Vehicles (UAVs) have been widely used for Digital Elevation Model (DEM) generation in geographic applications. This paper proposes a novel framework for generating DEMs from UAV images. It starts with the generation of point clouds by image matching, where the flight control data are used as a reference when searching for corresponding images, leading to a significant time saving. Besides, a set of ground control points (GCPs) obtained from field surveying are used to transform the point clouds to the user's coordinate system. Following that, we use a multi-feature based supervised classification method for discriminating non-ground points from ground ones. In the end, we generate the DEM by constructing triangular irregular networks and rasterization. The experiments are conducted in the east of Jilin province in China, which has suffered from soil erosion for several years. The quality of the UAV-based DEM (UAV-DEM) is compared with that generated from contour interpolation (Contour-DEM). The comparison shows the higher resolution, as well as higher accuracy, of UAV-DEMs, which contain more geographic information. In addition, the RMSEs of the UAV-DEMs generated from point clouds with and without GCPs are ±0.5 m and ±20 m, respectively.
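The GCP-based accuracy check reported above (RMSE of DEM heights against surveyed check points) can be sketched as follows; all elevation values are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Sketch: RMSE of DEM elevations against field-surveyed check points.
def dem_rmse(dem_elev, check_elev):
    diff = np.asarray(dem_elev) - np.asarray(check_elev)
    return float(np.sqrt(np.mean(diff ** 2)))

gcp_z       = np.array([102.4, 98.7, 110.2, 105.0])   # surveyed heights (m), assumed
dem_with    = np.array([102.1, 99.0, 110.6, 104.6])   # DEM built with GCPs
dem_without = np.array([122.0, 118.3, 130.7, 124.4])  # DEM without GCPs (datum shift)

rmse_with = dem_rmse(dem_with, gcp_z)        # sub-metre
rmse_without = dem_rmse(dem_without, gcp_z)  # tens of metres
```

A large RMSE without GCPs mostly reflects a systematic datum offset rather than random noise, which is why the GCP transformation step matters.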
Firefly: A HOT camera core for thermal imagers with enhanced functionality
NASA Astrophysics Data System (ADS)
Pillans, Luke; Harmer, Jack; Edwards, Tim
2015-06-01
Raising the operating temperature of mercury cadmium telluride infrared detectors from 80K to above 160K creates new applications for high performance infrared imagers by vastly reducing the size, weight and power consumption of the integrated cryogenic cooler. Realizing the benefits of Higher Operating Temperature (HOT) requires a new kind of infrared camera core with the flexibility to address emerging applications in handheld, weapon mounted and UAV markets. This paper discusses the Firefly core developed to address these needs by Selex ES in Southampton UK. Firefly represents a fundamental redesign of the infrared signal chain reducing power consumption and providing compatibility with low cost, low power Commercial Off-The-Shelf (COTS) computing technology. This paper describes key innovations in this signal chain: a ROIC purpose built to minimize power consumption in the proximity electronics, GPU based image processing of infrared video, and a software customisable infrared core which can communicate wirelessly with other Battlespace systems.
USDA-ARS?s Scientific Manuscript database
An unmanned aerial vehicle was used to capture videos of cattle in pastures to determine the efficiency of this technology for use by Mounted Inspectors in the Permanent Quarantine zone (PQZ) of the Cattle Fever Tick Eradication Program in south Texas along the U.S.-Mexico Border. These videos were ...
Radar sensing via a Micro-UAV-borne system
NASA Astrophysics Data System (ADS)
Catapano, Ilaria; Ludeno, Giovanni; Gennarelli, Gianluca; Soldovieri, Francesco; Rodi Vetrella, Amedeo; Fasano, Giancarmine
2017-04-01
In recent years, the miniaturization of flight control systems and payloads has contributed to a fast and widespread diffusion of micro-UAVs (Unmanned Aircraft Vehicles). While micro-UAVs can be powerful tools in several civil applications, such as environmental monitoring and surveillance, unleashing their full potential for societal benefit requires augmenting their sensing capability beyond the realm of active/passive optical sensors [1]. In this frame, radar systems are drawing attention since they allow missions to be performed in all-weather, day/night conditions and, thanks to the ability of microwaves to penetrate opaque media, they enable the detection and localization not only of surface objects but also of sub-surface/hidden targets. However, micro-UAV-borne radar imaging still represents a new frontier, since it is much more than a matter of technology miniaturization or payload installation, which can take advantage of newly developed ultralight systems. Indeed, micro-UAV-borne radar imaging entails scientific challenges in terms of electromagnetic modeling and knowledge of flight dynamics and control. As a consequence, although Synthetic Aperture Radar (SAR) imaging is a traditional remote sensing tool, its adaptation to micro-UAVs is an open issue, and so far only a few case studies concerning the integration of SAR and UAV technologies have been reported worldwide [2]. In addition, only early results concerning subsurface imaging by means of a UAV-mounted radar are available [3]. As a contribution to radar imaging via autonomous micro-UAVs, this communication presents a proof-of-concept experiment. This experiment represents the first step towards the development of a general methodological approach that exploits expertise in (sub-)surface imaging and aerospace systems with the aim of providing high-resolution images of the surveyed scene.
In detail, at the conference we will present the results of a flight campaign carried out using a single radar-equipped drone. The system is built around a commercial radar whose mass, size, power and cost budgets are compatible with installation on a micro-UAV. The radar has been mounted on a DJI 550 UAV, a flexible hexacopter allowing both complex flight operations and static flight, and has been equipped with small-size log-periodic antennas having a 6 dB gain over the frequency range from 2 GHz to 11 GHz. An ad-hoc signal processing chain has been adopted to process the collected raw data and obtain an image of the investigated scenario providing accurate target detection and localization. This chain involves an SVD-based noise filtering procedure and an advanced data processing approach, which assumes a linear model of the underlying scattering phenomenon. REFERENCES: [1] K. Whitehead, C. H. Hugenholtz, "Remote sensing of the environment with small unmanned aircraft systems (UASs), part 1: a review of progress and challenges", J. Unmanned Vehicle Systems, vol. 2, pp. 69-85, 2014. [2] K. Ouchi, "Recent trend and advance of synthetic aperture radar with selected topics", Remote Sens., vol. 5, pp. 716-807, 2013. [3] D. Altdor et al., "UAV-borne electromagnetic induction and ground-penetrating radar measurements: a feasibility test", 74th Annual Meeting of the Deutsche Geophysikalische Gesellschaft, Karlsruhe, Germany, March 9-13, 2014.
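The SVD-based noise filtering step mentioned above can be sketched as follows: decompose the raw radargram (rows = time samples, columns = antenna positions), zero the dominant singular components, which capture trace-invariant clutter such as antenna crosstalk, and reconstruct. The data and parameters here are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

def svd_filter(radargram, n_remove=1):
    """Sketch of an SVD noise/clutter filter: drop the n_remove
    largest singular components and rebuild the data matrix."""
    U, s, Vt = np.linalg.svd(radargram, full_matrices=False)
    s_f = s.copy()
    s_f[:n_remove] = 0.0            # suppress dominant clutter components
    return (U * s_f) @ Vt

rng = np.random.default_rng(0)
# Clutter identical at every antenna position (rank-1), plus weak target signal
clutter = np.outer(np.sin(np.linspace(0, 6, 64)), np.ones(32))
target = 0.05 * rng.standard_normal((64, 32))
filtered = svd_filter(clutter + target, n_remove=1)
```

Because the clutter is nearly identical from trace to trace, it concentrates in the first singular component, so removing it leaves mostly the target response.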
Feasibility of employing a smartphone as the payload in a photogrammetric UAV system
NASA Astrophysics Data System (ADS)
Kim, Jinsoo; Lee, Seongkyu; Ahn, Hoyong; Seo, Dongju; Park, Soyoung; Choi, Chuluong
2013-05-01
Smartphones can be operated in a 3G network environment at any time or location, and they also cost less than existing photogrammetric UAV systems while providing high-resolution images and 3D location and attitude data from a variety of built-in sensors. This study aims to assess the feasibility of using a smartphone as the payload for a photogrammetric UAV system. To carry out the assessment, a smartphone-based photogrammetric UAV system was developed and utilized to obtain image, location, and attitude data under both static and dynamic conditions. The accuracy of the location and attitude data obtained and sent by this system was then evaluated. The smartphone images were converted into ortho-images via image triangulation, which was carried out both with and without consideration of the interior orientation (IO) parameters determined by camera calibration. In the static experiment, when the IO parameters were taken into account, the triangulation results were less than 1.28 pixels (RMSE) for all smartphone types, an improvement of at least 47% compared with the case when IO parameters were not taken into account. In the dynamic experiment, on the other hand, the accuracy of smartphone image triangulation was not significantly improved by considering IO parameters. This was because the electronic rolling shutter of the complementary metal-oxide semiconductor (CMOS) sensor built into the smartphone and the actuator of the voice coil motor (VCM)-type auto-focusing system were affected by the vibration and speed of the UAV, which is likely to have a negative effect on image-based digital elevation model (DEM) generation. However, considering that these results were obtained using a single smartphone, this suggests that a smartphone is not only feasible as the payload for a photogrammetric UAV system but may also play a useful role when installed in existing UAV systems.
a Comparison of Uav and Tls Data for Soil Roughness Assessment
NASA Astrophysics Data System (ADS)
Milenković, M.; Karel, W.; Ressl, C.; Pfeifer, N.
2016-06-01
Soil roughness represents fine-scale surface geometry that figures in many geophysical models. While static photogrammetric techniques (terrestrial images and laser scanning) have recently been proposed as a new source for deriving roughness heights, there is still a need to overcome acquisition-scale and viewing-geometry issues. In contrast to the static techniques, images taken from unmanned aerial vehicles (UAVs) can maintain near-nadir viewing geometry over scales of several agricultural fields. This paper presents a pilot study on high-resolution soil roughness reconstruction and assessment from UAV images over an agricultural plot. As a reference method, terrestrial laser scanning (TLS) was applied on a 10 m x 1.5 m subplot. The UAV images were self-calibrated and oriented within a bundle adjustment, and processed further up to a dense-matched digital surface model (DSM). The analysis of the UAV- and TLS-DSMs was performed in the spatial domain based on the surface autocorrelation function and the correlation length, and in the frequency domain based on the roughness spectrum and the surface fractal dimension (spectral slope). The TLS- and UAV-DSM differences were found to be under ±1 cm, while the UAV-DSM showed a systematic pattern below this scale, which was explained by weakly tied sub-blocks of the bundle block. The results also confirmed that the existing TLS methods lead to roughness assessment at up to 5 mm resolution. For our UAV data this was not achievable, though it was shown that at spatial scales of 12 cm and larger both methods appear to be usable. Additionally, this paper suggests a method to propagate measurement errors to the correlation length.
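The spatial-domain statistics described above (autocorrelation function and correlation length) can be sketched on a 1D height profile; the 1/e threshold and the synthetic profile below are common-convention assumptions, not the paper's exact procedure.

```python
import numpy as np

def correlation_length(heights, spacing):
    """Sketch: normalised height autocorrelation of a detrended profile;
    the correlation length is the first lag where the ACF drops below 1/e
    (a common convention)."""
    z = heights - heights.mean()
    acf = np.correlate(z, z, mode="full")[z.size - 1:]  # lags 0..N-1
    acf = acf / acf[0]                                  # ACF(0) = 1
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * spacing if below.size else np.inf

rng = np.random.default_rng(1)
# Correlated synthetic surface: moving-average smoothing of white noise
profile = np.convolve(rng.standard_normal(500), np.ones(15) / 15, mode="same")
L = correlation_length(profile, spacing=0.005)  # assumed 5 mm point spacing
```

On a 2D DSM the same statistic would be computed along rows and columns (or radially), but the 1D case carries the idea.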
Roadside IED detection using subsurface imaging radar and rotary UAV
NASA Astrophysics Data System (ADS)
Qin, Yexian; Twumasi, Jones O.; Le, Viet Q.; Ren, Yu-Jiun; Lai, C. P.; Yu, Tzuyang
2016-05-01
Modern improvised explosive device (IED) and mine detection sensors using microwave technology are based on ground penetrating radar operated from a ground vehicle. Vehicle size, road conditions, and obstacles along the troop marching direction limit the operation of such sensors. This paper presents a new conceptual design using a rotary unmanned aerial vehicle (UAV) to carry subsurface imaging radar for roadside IED detection. We have built a UAV flight simulator with the subsurface imaging radar running in a laboratory environment and tested it with non-metallic and metallic IED-like targets. From the initial lab results, we can detect an IED-like target 10 cm below the road surface while carried by a UAV platform. One of the challenges is to design the radar and antenna system for a very small payload (less than 3 lb). The motion compensation algorithm is also critical to the imaging quality. In this paper, we also demonstrate the algorithm simulation and experimental imaging results with different IED target materials, sizes, and clutter.
NASA Astrophysics Data System (ADS)
Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor
2016-06-01
The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system because of its ability to deploy to different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D. The process then continued by publishing to a web server to compare the effectiveness of the different 3D terrain data (contour data) draped with UAV images. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine. This benefits decision makers and planners in this field in deciding which contour interval is applicable for their task.
Smart Cruise Control: UAV sensor operator intent estimation and its application
NASA Astrophysics Data System (ADS)
Cheng, Hui; Butler, Darren; Kumar, Rakesh
2006-05-01
Due to their long endurance, superior mobility and the low risk posed to the pilot and sensor operator, UAVs have become the preferred platform for persistent ISR missions. However, currently most UAV-based ISR missions are conducted through manual operation. Even the simplest tasks, such as vehicle tracking, route reconnaissance and site monitoring, need the sensor operator's undivided attention and constant adjustment of the sensor control. The lack of autonomous behaviour greatly limits the effectiveness and capability of UAV-based ISR, especially the simultaneous use of a large number of UAVs. Although a fully autonomous UAV-based ISR system is desirable, it is still a distant dream due to the complexity and diversity of combat and ISR missions. In this paper, we propose a Smart Cruise Control system that can learn a UAV sensor operator's intent and use it to complete tasks automatically, such as route reconnaissance and site monitoring. Using an operator attention model, the proposed system can estimate the operator's intent from how they control the sensor (e.g. camera) and the content of the imagery that is acquired. For example, from the operator initially manually controlling the UAV sensor to follow a road, the system can learn in real time not only the preferred operation, "tracking", but also the road appearance, "what to track". The learnt models of both the road and the desired operation can then be used to complete the task automatically. We have demonstrated the Smart Cruise Control system using real UAV videos where roads need to be tracked and buildings need to be monitored.
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but rather low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery from satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, a simulation of the panchromatic band from the RGB data, based on a linear combination of the spectral channels, was conducted. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
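The pan-band simulation described above can be sketched as a linear combination of the RGB channels; the weights here are assumptions for illustration, not the authors' coefficients.

```python
import numpy as np

def simulate_pan(rgb, weights=(0.3, 0.5, 0.2)):
    """Sketch: simulate a panchromatic band as a normalised linear
    combination of the R, G, B channels of an (H, W, 3) image."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                     # normalise to keep radiometry comparable
    return np.tensordot(rgb, w, axes=([-1], [0]))

# Placeholder RGB image standing in for a UAV orthomosaic tile
rgb = np.random.default_rng(2).uniform(0.0, 1.0, size=(4, 4, 3))
pan = simulate_pan(rgb)                 # shape (4, 4)
```

The simulated pan band would then take the place of a true panchromatic channel as the high-resolution input to Gram-Schmidt pansharpening of the satellite bands.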
3D Surface Generation from Aerial Thermal Imagery
NASA Astrophysics Data System (ADS)
Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.
2015-12-01
Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in precise 3D measurement of objects. In this paper the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated based on thermal images covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, equipped with a 25 mm lens, which was mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show comparable accuracy of the 3D model generated from thermal images with respect to the DSM generated from visible images, although the thermal-based DSM is somewhat smoother with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) value is smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.
Autonomous agricultural remote sensing systems with high spatial and temporal resolutions
NASA Astrophysics Data System (ADS)
Xiang, Haitao
In this research, two novel agricultural remote sensing (RS) systems, a Stand-alone Infield Crop Monitor RS System (SICMRS) and an autonomous Unmanned Aerial Vehicle (UAV) based RS system, have been studied. A high-resolution digital color and multi-spectral camera was used as the image sensor for the SICMRS system. An artificially intelligent (AI) controller based on an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) was developed. Morrow Plots corn field RS images in the 2004 and 2006 growing seasons were collected by the SICMRS system. The field site contained 8 subplots (9.14 m x 9.14 m) planted with corn, and three different fertilizer treatments were used among those subplots. The raw RS images were geometrically corrected, resampled to 10 cm resolution, stripped of soil background, and calibrated to actual reflectance. The RS images from the two growing seasons were studied and 10 different vegetation indices were derived from each day's image. The results of the image processing demonstrated that the vegetation indices have temporal effects. To achieve high-quality RS data, one has to utilize the right indices and capture the images at the right time in the growing season. Maximum variations among the image data set are within the V6-V10 stages, which indicates that these stages are the best period to identify the spatial variability caused by nutrient stress in the corn field. The derived vegetation indices were also used to build yield prediction models via linear regression. All of the yield prediction models were evaluated by comparing their R2-values, and the best index model from each day's image was picked based on the highest R2-value. It was shown that the green normalized difference vegetation index (GNDVI) based model is more sensitive for yield prediction than other index-based models.
During the VT-R4 stages, the GNDVI-based models were able to explain more than 95% of potential corn yield consistently for both seasons. The VT-R4 stages are the best period of time to estimate the corn yield. The SICMRS system is only suitable for RS research at a fixed location. In order to provide more flexibility in RS image collection, a novel UAV-based system has been studied. The UAV-based agricultural RS system used a light helicopter platform equipped with a multi-spectral camera. The UAV control system consisted of an on-board subsystem and a ground station subsystem. For the on-board subsystem, an Extended Kalman Filter (EKF) based UAV navigation system was designed and implemented. The navigation system, using low-cost inertial sensors, a magnetometer, GPS and a single-board computer, was capable of providing continuous estimates of UAV position and attitude at 50 Hz using sensor fusion techniques. The ground station subsystem was designed to be an interface between a human operator and the UAV to implement mission planning, flight command activation, and real-time flight monitoring. The navigation system is controlled by the ground station and able to navigate the UAV in the air to reach predefined waypoints and trigger the multi-spectral camera. By doing so, the aerial images at each point could be captured automatically. The developed UAV RS system can provide maximum flexibility in crop field RS image collection. It is essential to perform geometric correction and geocoding before an aerial image can be used for precision farming. An automatic (no Ground Control Point (GCP) needed) UAV image georeferencing algorithm was developed. This algorithm can perform automatic image correction and georeferencing based on the real-time navigation data and a camera lens distortion model. The accuracy of the georeferencing algorithm was better than 90 cm according to a series of tests.
The accuracy that has been achieved indicates that not only is the position solution good, but the attitude error is extremely small. Waypoint planning for UAV flight was investigated. It suggested that a 16.5% forward overlap and a 15% lateral overlap were required to avoid missing the desired mapping area when the UAV flies above 45 m altitude with a 4.5 mm lens. A whole-field mosaic image can be generated according to the individual image georeferencing information. A 0.569 m mosaic error has been achieved, and this accuracy is sufficient for many of the intended precision agricultural applications. With careful interpretation, the UAV images are an excellent source of high spatial and temporal resolution data for precision agricultural applications. (Abstract shortened by UMI.)
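The GNDVI index and the linear-regression yield models described above can be sketched as follows. This is a minimal illustration under assumed inputs; the band arrays and the `fit_yield_model` helper are hypothetical names, not the study's code.

```python
import numpy as np

def gndvi(nir, green, eps=1e-9):
    """Green normalized difference vegetation index:
    (NIR - Green) / (NIR + Green), computed per pixel or per plot."""
    nir = np.asarray(nir, dtype=float)
    green = np.asarray(green, dtype=float)
    return (nir - green) / (nir + green + eps)

def fit_yield_model(gndvi_vals, yields):
    """Linear yield model of the kind fitted in the study:
    yield ~ a * GNDVI + b, with a and b from least squares."""
    a, b = np.polyfit(gndvi_vals, yields, 1)
    return a, b
```

In practice the index would be computed per subplot from the calibrated reflectance mosaics, and the model with the highest R2-value retained, as the abstract describes.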
Unmanned aerial vehicles (UAVs) for surveying marine fauna: a dugong case study.
Hodgson, Amanda; Kelly, Natalie; Peel, David
2013-01-01
Aerial surveys of marine mammals are routinely conducted to assess and monitor species' habitat use and population status. In Australia, dugongs (Dugong dugon) are regularly surveyed and long-term datasets have formed the basis for defining habitat of high conservation value and risk assessments of human impacts. Unmanned aerial vehicles (UAVs) may facilitate more accurate, human-risk-free, and cheaper aerial surveys. We undertook the first Australian UAV survey trial in Shark Bay, western Australia. We conducted seven flights of the ScanEagle UAV, mounted with a digital SLR camera payload. During each flight, ten transects, covering a 1.3 km² area frequently used by dugongs, were flown at 500, 750 and 1000 ft. Image (photograph) capture was controlled via the Ground Control Station and the capture rate was scheduled to achieve a prescribed 10% overlap between images along transect lines. Images were manually reviewed post hoc for animals and scored according to sun glitter, Beaufort sea state and turbidity. We captured 6243 images, 627 containing dugongs. We also identified whales, dolphins, turtles and a range of other fauna. Of all possible dugong sightings, 95% (CI = 90%, 98%) were subjectively classed as 'certain' (unmistakably dugongs). Neither our dugong sighting rate, nor our ability to identify dugongs with certainty, were affected by UAV altitude. Turbidity was the only environmental variable significantly affecting the dugong sighting rate. Our results suggest that UAV systems may not be limited by sea state conditions in the same manner as sightings from manned surveys. The overlap between images proved valuable for detecting animals that were masked by sun glitter in the corners of images, and for identifying animals initially captured at awkward body angles. This initial trial of a basic camera system has successfully demonstrated that the ScanEagle UAV has great potential as a tool for marine mammal aerial surveys.
Heredia, Guillermo; Caballero, Fernando; Maza, Iván; Merino, Luis; Viguria, Antidio; Ollero, Aníbal
2009-01-01
This paper presents a method to increase the reliability of Unmanned Aerial Vehicle (UAV) sensor Fault Detection and Identification (FDI) in a multi-UAV context. Differential Global Positioning System (DGPS) and inertial sensors are used for sensor FDI in each UAV. The method uses additional position estimations that augment each UAV's individual FDI system. These additional estimations are obtained using images of the same planar scene taken from two different UAVs. Since the accuracy and noise level of the estimation depend on several factors, dynamic replanning of the multi-UAV team can be used to obtain a better estimation in the case of faults caused by slowly growing errors in absolute position estimation that cannot be detected using local FDI in the UAVs. Experimental results with data from two real UAVs are also presented.
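The core idea, comparing the onboard position estimate against the vision-derived estimate and flagging persistent disagreement, can be sketched as a residual-threshold check. This is a hypothetical illustration, not the authors' implementation, and the threshold and window length are assumed parameters.

```python
import numpy as np

def fdi_residual_check(pos_onboard, pos_vision, threshold, window=5):
    """Flag a sensor fault when the distance between the onboard
    (DGPS/inertial) position estimate and the vision-based estimate
    exceeds `threshold` for `window` consecutive samples.  Persistence
    is required so that single noisy estimates do not trigger a fault."""
    residuals = np.linalg.norm(
        np.asarray(pos_onboard, dtype=float) - np.asarray(pos_vision, dtype=float),
        axis=1)
    run = 0
    for over in residuals > threshold:
        run = run + 1 if over else 0
        if run >= window:
            return True
    return False
```

A slowly growing bias in the absolute position, invisible to purely local consistency checks, eventually drives this cross-vehicle residual over the threshold.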
Emergency Response Fire-Imaging UAS Missions over the Southern California Wildfire Disaster
NASA Technical Reports Server (NTRS)
DelFrate, John H.
2008-01-01
Objectives include: Demonstrate capabilities of UAS to overfly and collect sensor data on widespread fires throughout Western US. Demonstrate long-endurance mission capabilities (20-hours+). Image multiple fires (greater than 4 fires per mission), to showcase extendable mission configuration and ability to either linger over key fires or station over disparate regional fires. Demonstrate new UAV-compatible, autonomous sensor for improved thermal characterization of fires. Provide automated, on-board, terrain and geo-rectified sensor imagery over OTH satcom links to national fire personnel and Incident commanders. Deliver real-time imagery (within 10-minutes of acquisition). Demonstrate capabilities of OTS technologies (GoogleEarth) to serve and display mission-critical sensor data, coincident with other pertinent data elements to facilitate information processing (WX data, ground asset data, other satellite data, R/T video, flight track info, etc).
Emergency Response Fire-Imaging UAS Missions over the Southern California Wildfire Disaster
NASA Technical Reports Server (NTRS)
Cobleigh, Brent R.
2007-01-01
Objectives include: Demonstrate capabilities of UAS to overfly and collect sensor data on widespread fires throughout Western US. Demonstrate long-endurance mission capabilities (20-hours+). Image multiple fires (greater than 4 fires per mission), to showcase extendable mission configuration and ability to either linger over key fires or station over disparate regional fires. Demonstrate new UAV-compatible, autonomous sensor for improved thermal characterization of fires. Provide automated, on-board, terrain and geo-rectified sensor imagery over OTH satcom links to national fire personnel and Incident commanders. Deliver real-time imagery (within 10-minutes of acquisition). Demonstrate capabilities of OTS technologies (GoogleEarth) to serve and display mission-critical sensor data, coincident with other pertinent data elements to facilitate information processing (WX data, ground asset data, other satellite data, R/T video, flight track info, etc).
Sensor-Oriented Path Planning for Multiregion Surveillance with a Single Lightweight UAV SAR
Li, Jincheng; Chen, Jie; Wang, Pengbo; Li, Chunsheng
2018-01-01
In the surveillance of regions of interest by unmanned aerial vehicle (UAV), system performance relies greatly on the motion control strategy of the UAV and the operating characteristics of the onboard sensors. This paper investigates the 2D path planning problem for the lightweight UAV synthetic aperture radar (SAR) system in an environment of multiple regions of interest (ROIs), the sizes of which are comparable to the radar swath width. Taking into account the special requirements of the SAR system on the motion of the platform, we model path planning for UAV SAR as a constrained multiobjective optimization problem (MOP). Based on the fact that the UAV route can be designed in the map image, an image-based path planner is proposed in this paper. First, the neighboring ROIs are merged by a morphological operation. Then, the parts of the routes for data collection of the ROIs can be located according to the geometric features of the ROIs and the observation geometry of UAV SAR. Lastly, the route segments for ROI surveillance are connected by a path planning algorithm named the sampling-based sparse A* search (SSAS) algorithm. Simulation experiments in real scenarios demonstrate that the proposed sensor-oriented path planner greatly improves the reconnaissance performance of lightweight UAV SAR compared with the conventional zigzag path planner. PMID:29439447
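The route-connection step uses the authors' sampling-based sparse A* search (SSAS). As an illustration of the underlying search family only, a plain A* on a 4-connected occupancy grid might look like this; it is the textbook algorithm, not the SSAS variant, which additionally samples sparse successor states subject to platform motion constraints.

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected occupancy grid (1 = obstacle).
    Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came, g_best = {}, {start: 0}
    while open_set:
        f, g, node, parent = heapq.heappop(open_set)
        if node in came:                 # already expanded with a better g
            continue
        came[node] = parent
        if node == goal:                 # reconstruct path via parents
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float('inf')):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None
```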
Path planning and Ground Control Station simulator for UAV
NASA Astrophysics Data System (ADS)
Ajami, A.; Balmat, J.; Gauthier, J.-P.; Maillot, T.
In this paper we present a Universal and Interoperable Ground Control Station (UIGCS) simulator for fixed and rotary wing Unmanned Aerial Vehicles (UAVs), and all types of payloads. One of the major constraints is to operate and manage multiple legacy and future UAVs, taking into account compliance with the NATO Combined/Joint Services Operational Environment (STANAG 4586). Another purpose of the station is to assign the UAV a certain degree of autonomy, via autonomous planning/replanning strategies. The paper is organized as follows. In Section 2, we describe the non-linear models of the fixed and rotary wing UAVs that we use in the simulator. In Section 3, we describe the simulator architecture, which is based upon interacting modules programmed independently. This simulator is linked with an open-source flight simulator to simulate the video flow and the moving target in 3D. To conclude this part, we briefly tackle the problem of connecting the Matlab/Simulink software (used to model the UAV's dynamics) with the simulation of the virtual environment. Section 5 deals with the flight path control module of the UAV. The control system is divided into four distinct hierarchical layers: flight path, navigation controller, autopilot, and flight control surfaces controller. In Section 6, we focus on the trajectory planning/replanning question for fixed wing UAVs. Indeed, one of the goals of this work is to increase the autonomy of the UAV. We propose two types of algorithms, based upon 1) the method of the tangent and 2) an original Lyapunov-type method. These algorithms allow the UAV either to join a fixed pattern or to track a moving target. Finally, Section 7 presents simulation results obtained on our simulator, concerning a rather complicated mission scenario.
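The guidance layer that steers the vehicle toward a waypoint can be illustrated with a minimal proportional heading controller on a point-mass model. This is a toy sketch with assumed speeds and gains; the paper's tangent-based and Lyapunov-type algorithms are considerably more elaborate.

```python
import math

def wrap(angle):
    """Wrap an angle to [-pi, pi]."""
    return (angle + math.pi) % (2 * math.pi) - math.pi

def fly_to_waypoint(x, y, psi, wp, v=10.0, k=1.0, dt=0.1, steps=400):
    """Steer the heading psi toward the bearing of waypoint `wp` with a
    proportional turn-rate command, integrating a constant-speed
    point-mass model.  Returns the closest approach to the waypoint."""
    min_dist = float('inf')
    for _ in range(steps):
        bearing = math.atan2(wp[1] - y, wp[0] - x)
        psi += k * wrap(bearing - psi) * dt     # turn-rate command
        x += v * math.cos(psi) * dt
        y += v * math.sin(psi) * dt
        min_dist = min(min_dist, math.hypot(wp[0] - x, wp[1] - y))
    return min_dist
```

In a layered architecture like the one described above, such a guidance law would sit between the flight-path layer and the autopilot, emitting heading or turn-rate references.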
NASA Astrophysics Data System (ADS)
Yahyanejad, Saeed; Rinner, Bernhard
2015-06-01
The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectra, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in the case of multiple image pairs.
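Once corresponding regions have been identified across spectra, the registration itself reduces to fitting a geometric transform to the matched points. A least-squares 2D affine fit over correspondences might be sketched as follows; this is only the standard model-fitting back-end, not the paper's interspectral descriptor, and the point arrays are hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.
    Stacks one pair of linear equations per correspondence and solves
    for the six parameters [[a11, a12, tx], [a21, a22, ty]]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src      # x' = a11*x + a12*y + tx
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src      # y' = a21*x + a22*y + ty
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an (n, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

In practice the fit would be wrapped in a RANSAC-style loop to reject mismatches before the final least-squares estimate.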
Critical infrastructure monitoring using UAV imagery
NASA Astrophysics Data System (ADS)
Maltezos, Evangelos; Skitsas, Michael; Charalambous, Elisavet; Koutras, Nikolaos; Bliziotis, Dimitris; Themistocleous, Kyriacos
2016-08-01
The constant technological evolution in Computer Vision has enabled the development of new techniques which, in conjunction with the use of Unmanned Aerial Vehicles (UAVs), may extract high-quality photogrammetric products for several applications. Dense Image Matching (DIM) is a Computer Vision technique that can generate a dense 3D point cloud of an area or object. The use of UAV systems and DIM techniques is not only a flexible and attractive solution to produce accurate and high-quality photogrammetric results but also a major contribution to cost-effectiveness. In this context, this study aims to highlight the benefits of the use of UAVs in critical infrastructure monitoring applying DIM. A Multi-View Stereo (MVS) approach using multiple images (RGB digital aerial and oblique images), to fully cover the area of interest, is implemented. The application area is an Olympic venue in Attica, Greece, covering an area of 400 acres. The results of our study indicate that the UAV+DIM approach responds very well to the increasing demand for accurate and cost-effective applications, providing a 3D point cloud and orthomosaic.
Autonomous unmanned air vehicles (UAV) techniques
NASA Astrophysics Data System (ADS)
Hsu, Ming-Kai; Lee, Ting N.
2007-04-01
UAVs (Unmanned Air Vehicles) have great potential in different civilian applications, such as oil pipeline surveillance, precision farming, forest fire fighting (yearly), search and rescue, border patrol, etc. UAV-related industries can generate billions of dollars each year. However, a roadblock to adopting UAVs is that their operation conflicts with FAA (Federal Aviation Administration) and ATC (Air Traffic Control) regulations. In this paper, we have reviewed the latest technologies and research on UAV navigation and obstacle avoidance. We have proposed a system design of Jittering Mosaic Image Processing (JMIP) with stereo vision and optical flow to fulfill the functionalities of autonomous UAVs.
NASA Astrophysics Data System (ADS)
Marcaccio, J. V.; Markle, C. E.; Chow-Fraser, P.
2015-08-01
With recent advances in technology, personal aerial imagery acquired with unmanned aerial vehicles (UAVs) has transformed the way ecologists can map seasonal changes in wetland habitat. Here, we use a multi-rotor (consumer quad-copter, the DJI Phantom 2 Vision+) UAV to acquire a high-resolution (< 8 cm) composite photo of a coastal wetland in summer 2014. Using validation data collected in the field, we determine if a UAV image and a SWOOP (Southwestern Ontario Orthoimagery Project) image (collected in spring 2010) differ in their classification of dominant vegetation type and percent cover of three plant classes: submerged aquatic vegetation, floating aquatic vegetation, and emergent vegetation. The UAV imagery was more accurate than available SWOOP imagery for mapping percent cover of the submergent and floating vegetation categories, but both were able to accurately determine the dominant vegetation type and percent cover of emergent vegetation. Our results underscore the value and potential for affordable UAVs (complete quad-copter system < 3,000 CAD) to revolutionize the way ecologists obtain imagery and conduct field research. In Canada, new UAV regulations make this an easy and affordable way to obtain multiple high-resolution images of small (< 1.0 km²) wetlands, or portions of larger wetlands, throughout a year.
USDA-ARS?s Scientific Manuscript database
Unmanned aerial vehicles (UAVs) offer an attractive platform for acquiring imagery for rangeland monitoring. UAVs can be deployed quickly and repeatedly, and they can obtain sub-decimeter resolution imagery at lower image acquisition costs than with piloted aircraft. Low flying heights result in ima...
UAVs Being Used for Environmental Surveying
Chung, Sandra
2017-12-09
UAVs are much more sophisticated than your typical remote-controlled plane. INL robotics and remote sensing experts have added state-of-the-art imaging and wireless technology to the UAVs to create intelligent remote surveillance craft that can rapidly survey a wide area for damage and track down security threats.
Automated geographic registration and radiometric correction for UAV-based mosaics
NASA Astrophysics Data System (ADS)
Thomasson, J. Alex; Shi, Yeyin; Sima, Chao; Yang, Chenghai; Cope, Dale A.
2017-05-01
Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties in the science-based utilization of such mosaics are geographic registration and radiometric calibration. In our current research project, image files are taken to the computer laboratory after the flight, and semi-manual pre-processing is applied to the raw image data, including ortho-mosaicking and radiometric calibration. Ground control points (GCPs) are critical for high-quality geographic registration of images during mosaicking. Applications requiring accurate reflectance data also require radiometric-calibration references so that reflectance values of image objects can be calculated. We have developed a method for automated geographic registration and radiometric correction with targets that are installed semi-permanently at distributed locations around fields. The targets are a combination of black (≈5% reflectance), dark gray (≈20% reflectance), and light gray (≈40% reflectance) sections that provide for a transformation of pixel value to reflectance in the dynamic range of crop fields. The exact spectral reflectance of each target is known, having been measured with a spectrophotometer. At the time of installation, each target is measured for position with a real-time kinematic GPS receiver to give its precise latitude and longitude. Automated location of the reference targets in the images is required for precise, automated geographic registration, and automated calculation of the digital-number-to-reflectance transformation is required for automated radiometric calibration. To validate the system for radiometric calibration, a calibrated UAV-based image mosaic of a field was compared to a calibrated single image from a manned aircraft.
Reflectance values in selected zones of each image were strongly linearly related, and the average error of UAV-mosaic reflectances was 3.4% in the red band, 1.9% in the green band, and 1.5% in the blue band. Based on these results, the proposed physical system and automated software for calibrating UAV mosaics show excellent promise.
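The digital-number-to-reflectance transformation derived from the three reference targets is essentially an empirical line fit, which might be sketched as follows. The DN values in the example are hypothetical; only the three nominal target reflectances come from the abstract.

```python
import numpy as np

def empirical_line(dn_targets, refl_targets):
    """Fit a linear digital-number -> reflectance transformation from
    reference targets of known reflectance (e.g. ~5%, ~20%, ~40% panels)
    and return a function applying it to imagery, one band at a time."""
    gain, offset = np.polyfit(dn_targets, refl_targets, 1)
    return lambda dn: gain * np.asarray(dn, dtype=float) + offset
```

With targets located automatically in each mosaic, the fit can be recomputed per flight, absorbing illumination differences between acquisitions.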
A Metadata-Based Approach for Analyzing UAV Datasets for Photogrammetric Applications
NASA Astrophysics Data System (ADS)
Dhanda, A.; Remondino, F.; Santana Quintero, M.
2018-05-01
This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Utilizing user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
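The overlap-based filtering can be sketched for a single flight line as follows. This is a simplified one-dimensional model with hypothetical names; the actual algorithm works from per-image metadata (position, altitude, camera geometry) over full grid flights.

```python
def forward_overlap(d, footprint):
    """Fractional forward overlap between consecutive images whose
    centres are `d` metres apart, given a ground footprint length."""
    return max(0.0, 1.0 - d / footprint)

def filter_images(positions, footprint, target_overlap):
    """Greedy filter along one flight line: keep the first image, then
    from each kept image jump to the farthest image that still
    maintains the target overlap.  Returns kept indices."""
    kept = [0]
    i = 0
    while i < len(positions) - 1:
        j = i + 1
        # advance while skipping the next image would still leave enough overlap
        while j + 1 < len(positions) and \
                forward_overlap(positions[j + 1] - positions[i], footprint) >= target_overlap:
            j += 1
        kept.append(j)
        i = j
    return kept
```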
Gong, Mali; Guo, Rui; He, Sifeng; Wang, Wei
2016-11-01
The security threats caused by multi-rotor unmanned aerial vehicles (UAVs) are serious, especially in public places. To detect and control multi-rotor UAVs, knowledge of their IR characteristics is necessary. The IR characteristics of a typical commercial quad-rotor UAV are investigated in this paper through thermal imaging with an IR camera. Combining the 3D geometry and IR images of the UAV, a 3D IR characteristics model is established so that the radiant power from different views can be obtained. An estimate of the operating range to detect the UAV is calculated theoretically using the signal-to-noise ratio as the criterion. Field experiments are implemented with an uncooled IR camera at an ambient temperature of 12°C against a uniform background. For the front view, the operating range is about 150 m, which is close to the simulation result of 170 m.
A Three-Dimensional Simulation and Visualization System for UAV Photogrammetry
NASA Astrophysics Data System (ADS)
Liang, Y.; Qu, Y.; Cui, T.
2017-08-01
Nowadays, UAVs are widely used for large-scale surveying and mapping. Compared with manned aircraft, UAVs are more cost-effective and responsive. However, UAVs are usually more sensitive to wind conditions, which greatly influence their positions and orientations. The flight height of a UAV is relatively low, and the relief of the terrain may result in serious occlusions. Moreover, the observations acquired by the Position and Orientation System (POS) are usually less accurate than those acquired in manned aerial photogrammetry. All of these factors introduce uncertainties into UAV photogrammetry. To investigate these uncertainties, a three-dimensional simulation and visualization system has been developed. The system is demonstrated with flight plan evaluation, image matching, POS-supported direct georeferencing, and ortho-mosaicking. Experimental results show that the presented system is effective for flight plan evaluation. The generated image pairs are accurate and false matches can be effectively filtered. The presented system dynamically visualizes the results of direct georeferencing in three dimensions, which is informative and effective for real-time target tracking and positioning. The dynamically generated orthomosaic can be used in emergency applications. The presented system has also been used for teaching the theory and applications of UAV photogrammetry.
The advanced linked extended reconnaissance and targeting technology demonstration project
NASA Astrophysics Data System (ADS)
Cruickshank, James; de Villers, Yves; Maheux, Jean; Edwards, Mark; Gains, David; Rea, Terry; Banbury, Simon; Gauthier, Michelle
2007-06-01
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the future Canadian Army's Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as beyond line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to assist the operator in recognizing poorly visible targets. Automatic target detection and recognition algorithms processing both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine-generated information display requirements are presented along with the human factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and steps to project completion.
NASA Astrophysics Data System (ADS)
Mantecón, Tomás; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso
2014-06-01
New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human-machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique adapted to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control. A tradeoff between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures developed especially for the validation of the proposed system.
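A rough sketch of the LBP building block only (the spatio-temporal extension and the SVM classifier from the abstract are omitted): a basic 8-neighbour LBP applied to a depth map, producing the kind of fixed-length histogram descriptor one could feed to a classifier. Function names are hypothetical:

```python
import numpy as np

def lbp_image(depth):
    """Basic 8-neighbour Local Binary Pattern of a 2-D depth map.
    Each interior pixel gets a byte whose bits mark which neighbours
    are >= the centre value."""
    h, w = depth.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    centre = depth[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        nb = depth[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= centre).astype(int) << bit
    return codes

def lbp_histogram(depth, bins=256):
    """Normalised LBP code histogram: a fixed-length descriptor
    suitable as input features for an SVM."""
    hist = np.bincount(lbp_image(depth).ravel(), minlength=bins).astype(float)
    return hist / hist.sum()
```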
Real-Time Multi-Target Localization from Unmanned Aerial Vehicles
Wang, Xuan; Liu, Jinghong; Zhou, Qianfei
2016-01-01
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multiple targets are calculated using the homogeneous coordinate transformation. On this basis, two methods that can improve the accuracy of the multi-target localization are proposed: (1) a real-time zoom lens distortion correction method; and (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude was 1140 m. The multi-target localization results are within the range of allowable error. After applying the lens distortion correction method to a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs that need multi-target geo-location functions. PMID:28029145
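The RLS smoothing idea can be illustrated with a much-simplified scalar estimator (a constant-state model with a forgetting factor; names and parameters are illustrative, not the paper's formulation):

```python
class RLSFilter:
    """Scalar recursive least-squares estimator with forgetting factor.
    A simplified stand-in for RLS smoothing of per-frame target
    coordinates; lam < 1 discounts older measurements."""
    def __init__(self, lam=0.98):
        self.lam = lam
        self.x = 0.0     # current estimate
        self.P = 1e6     # estimate covariance (large = uninformative prior)

    def update(self, z, r=1.0):
        # Gain balancing prior uncertainty P against measurement noise r
        k = self.P / (self.P + r)
        self.x += k * (z - self.x)
        self.P = (1.0 - k) * self.P / self.lam
        return self.x

# Noisy per-frame estimates of one target coordinate converge to ~10:
f = RLSFilter()
for z in [10.2, 9.8, 10.1, 9.9]:
    est = f.update(z)
```

Running RLS over several frames is what lets the location estimate improve relative to a single-image fix, as the abstract's 25% CEP reduction illustrates.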
Drones at the Beach - Surf Zone Monitoring Using Rotary Wing Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Rynne, P.; Brouwer, R.; de Schipper, M. A.; Graham, F.; Reniers, A.; MacMahan, J. H.
2014-12-01
We investigate the potential of rotary wing Unmanned Aerial Vehicles (UAVs) to monitor the surf zone. In recent years, the arrival of lightweight, high-capacity batteries, low-power electronics and compact high-definition cameras has driven the development of commercially available UAVs for hobbyists. Moreover, the low operation costs have increased their potential for scientific research, as these UAVs are extremely flexible surveying platforms. The UAVs can fly for ~12 min with a mean loiter radius of 1 - 3.5 m and a mean loiter error of 0.75 - 4.5 m, depending on the environmental conditions, flying style, battery type and vehicle type. Our experiments using multiple, alternating UAVs show that it is possible to obtain near-continuous imagery data with similar Fields Of View. The images obtained from the UAVs (Fig. 1a), in combination with surveyed Ground Control Points (GCPs) (Fig. 1b, red squares and white circles), can be geo-rectified (Fig. 1c) to a pixel resolution between 0.01 and 1 m, with a reprojection error, i.e. the difference between the surveyed GPS location of a GCP and the location of the GCP obtained from the geo-rectified image, of O(1 m). These geo-rectified images provide data on a variety of coastal aspects, such as beach width (Wb(x,t)), surf zone width (Wsf(x,t)), wave breaking location (rectangle B), beach usage (circle C) and location of dune vegetation (rectangle D), amongst others. Additionally, the possibility of acquiring consecutive, high frequency (up to 2 Hz) rectified images makes the UAVs a great data instrument for spatially and temporally variable systems, such as the surf zone. Our first observations with the UAVs reveal the potential to quickly obtain surf zone and beach characteristics in response to storms or for day-to-day beach information, as well as for the scientific pursuits of surf zone kinematics on different spatial and temporal scales, and dispersion and advection estimates of pollutants/dye.
A selection of findings from several field experiments using multiple optical instruments will be presented at the meeting, along with a discussion of the new possibilities rotary wing UAVs can offer for surf zone research.
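Geo-rectification with surveyed GCPs, as described above, typically reduces to fitting a plane-to-plane homography. A minimal sketch using the direct linear transform (DLT); the point values are made up for illustration:

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Estimate the 3x3 homography mapping image pixels to ground
    coordinates via the direct linear transform (DLT).
    Needs >= 4 ground control points."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    # The homography is the null vector of A (last right singular vector).
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, x, y):
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]
```

The reprojection error quoted in the abstract is exactly the residual of this mapping at each GCP: the distance between the surveyed ground position and the position predicted from the rectified image.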
UAV based hydromorphological mapping of a river reach to improve hydrodynamic numerical models
NASA Astrophysics Data System (ADS)
Lükő, Gabriella; Baranya, Sándor; Rüther, Nils
2017-04-01
Unmanned Aerial Vehicles (UAVs) are increasingly used in the field of engineering surveys. In river engineering, or in general, water resources engineering, UAV based measurements have a huge potential. For instance, indirect measurements of the flow discharge using e.g. large-scale particle image velocimetry (LSPIV), particle tracking velocimetry (PTV), space-time image velocimetry (STIV) or radar have become a real alternative to direct flow measurements. Besides flow detection, topographic surveys are also essential for river flow studies, as the channel and floodplain geometry is the primary steering feature of the flow. UAVs can play an important role in this field, too. The widely used laser based topographic survey method (LIDAR) can be deployed on UAVs; moreover, the application of the Structure from Motion (SfM) method, which is based on images taken by UAVs, might be an even more cost-efficient alternative to reveal the geometry of distinct objects in the river or on the floodplain. The goal of this study is to demonstrate the utilization of photogrammetry and videogrammetry from airborne footage to provide geometry and flow data for a hydrodynamic numerical simulation of a 2 km long river reach in Albania. First, the geometry of the river is revealed from photogrammetry using the SfM method. Second, a more detailed view of the channel bed at low water level is taken. Using the fine resolution images, a MATLAB based code, BASEGrain, developed at ETH Zürich, will be applied to determine the grain size characteristics of the river bed. This information will be essential to define the hydraulic roughness in the numerical model. Third, flow mapping is performed using UAV measurements and the LSPIV method to quantitatively assess the flow field at the free surface and to estimate the discharge in the river.
All data collection and analysis will be carried out using a simple, low-cost UAV, moreover, for all the data processing, open source, freely available software will be used leading to a cost-efficient methodology. The results of the UAV based measurements will be discussed and future research ideas will be outlined.
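The core operation behind LSPIV is locating a surface texture patch in the next frame by cross-correlation. A single-patch, integer-displacement sketch (real LSPIV adds orthorectification, sub-pixel peak fitting, and many interrogation windows; names are illustrative):

```python
import numpy as np

def patch_displacement(frame_a, frame_b, y, x, size=8, search=4):
    """Find the integer displacement of a patch between two frames by
    maximising normalised cross-correlation over a small search window.
    Dividing the displacement by the frame interval gives a surface
    velocity estimate in pixels per second."""
    tpl = frame_a[y:y + size, x:x + size].astype(float)
    tpl = tpl - tpl.mean()
    best, best_dv = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = frame_b[y + dy:y + dy + size, x + dx:x + dx + size].astype(float)
            win = win - win.mean()
            score = (tpl * win).sum() / (
                np.linalg.norm(tpl) * np.linalg.norm(win) + 1e-12)
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv

# Synthetic check: the second frame is the first shifted by (2, 1).
rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, (2, 1), axis=(0, 1))
```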
Design and implementation of a remote UAV-based mobile health monitoring system
NASA Astrophysics Data System (ADS)
Li, Songwei; Wan, Yan; Fu, Shengli; Liu, Mushuang; Wu, H. Felix
2017-04-01
Unmanned aerial vehicles (UAVs) play increasing roles in structure health monitoring. With growing mobility in modern Internet-of-Things (IoT) applications, the health monitoring of mobile structures becomes an emerging application. In this paper, we develop a UAV-carried vision-based monitoring system that allows a UAV to continuously track and monitor a mobile infrastructure and transmit back the monitoring information in real time from a remote location. The monitoring system uses a simple UAV-mounted camera and requires only a single feature located on the mobile infrastructure for target detection and tracking. The computation-effective vision-based tracking solution based on a single feature is an improvement over existing vision-based lead-follower tracking systems that either have poor tracking performance due to the use of a single feature, or have improved tracking performance at the cost of using multiple features. In addition, a UAV-carried aerial networking infrastructure using directional antennas is used to enable robust real-time transmission of monitoring video streams over a long distance. Automatic heading control is used to self-align the headings of the directional antennas to enable robust communication while in motion. Compared to existing omni-directional communication systems, the directional communication solution significantly increases the operation range of remote monitoring systems. In this paper, we develop the integrated modeling framework of camera and mobile platforms, design the tracking algorithm, develop a testbed of UAVs and mobile platforms, and evaluate system performance through both simulation studies and field tests.
Efficient structure from motion for oblique UAV images based on maximal spanning tree expansion
NASA Astrophysics Data System (ADS)
Jiang, San; Jiang, Wanshou
2017-10-01
The primary contribution of this paper is an efficient Structure from Motion (SfM) solution for oblique unmanned aerial vehicle (UAV) images. First, an algorithm, considering spatial relationship constraints between image footprints, is designed for match pair selection with the assistance of UAV flight control data and oblique camera mounting angles. Second, a topological connection network (TCN), represented by an undirected weighted graph, is constructed from initial match pairs, which encodes the overlap areas and intersection angles into edge weights. Then, an algorithm, termed MST-Expansion, is proposed to extract the match graph from the TCN, where the TCN is first simplified by a maximum spanning tree (MST). By further analysis of the local structure in the MST, expansion operations are performed on the vertices of the MST for match graph enhancement, which is achieved by introducing critical connections in the expansion directions. Finally, guided by the match graph, an efficient SfM solution is proposed. Its performance is verified through extensive analysis and comparison using three oblique UAV datasets captured with different multi-camera systems. Experimental results demonstrate that the efficiency of image matching is improved, with speedup ratios ranging from 19 to 35, and competitive orientation accuracy is achieved from both relative bundle adjustment (BA) without GCPs (Ground Control Points) and absolute BA with GCPs. At the same time, images in the three datasets are successfully oriented. For the orientation of oblique UAV images, the proposed method can be a more efficient solution.
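The maximum-spanning-tree simplification of the topological connection network can be sketched with Kruskal's algorithm on a plain edge list (the expansion step and the overlap/intersection-angle weighting are omitted; names are illustrative):

```python
def maximum_spanning_tree(n, edges):
    """Kruskal's algorithm with weights taken in descending order: keep
    the heaviest edges that do not form a cycle.  `edges` is a list of
    (weight, u, v) with vertices 0..n-1; returns the retained edges."""
    parent = list(range(n))

    def find(a):
        # Union-find with path halving.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    mst = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```

Keeping the heaviest edges means keeping the most reliable match pairs (largest overlap, best intersection geometry), while the tree property guarantees every image stays connected with the minimum number of pairs to match.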
Wireless Command-and-Control of UAV-Based Imaging LANs
NASA Technical Reports Server (NTRS)
Herwitz, Stanley; Dunagan, S. E.; Sullivan, D. V.; Slye, R. E.; Leung, J. G.; Johnson, L. F.
2006-01-01
Dual airborne imaging system networks were operated using a wireless line-of-sight telemetry system developed as part of a 2002 unmanned aerial vehicle (UAV) imaging mission over the USA's largest coffee plantation on the Hawaiian island of Kauai. A primary mission objective was the evaluation of commercial-off-the-shelf (COTS) 802.11b wireless technology for reduction of payload telemetry costs associated with UAV remote sensing missions. Predeployment tests with a conventional aircraft demonstrated successful wireless broadband connectivity between a rapidly moving airborne imaging local area network (LAN) and a fixed ground station LAN. Subsequently, two separate LANs with imaging payloads, packaged in exterior-mounted pressure pods attached to the underwing of NASA's Pathfinder-Plus UAV, were operated wirelessly by ground-based LANs over independent Ethernet bridges. Digital images were downlinked from the solar-powered aircraft at data rates of 2-6 megabits per second (Mbps) over a range of 6.5-9.5 km. An integrated wide area network enabled payload monitoring and control through the Internet from a range of ca. 4000 km during parts of the mission. The recent advent of 802.11g technology is expected to boost the system data rate by about a factor of five.
Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.
Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han
2017-09-07
Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.
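A toy illustration of the width-measurement idea after binarization: take the longest dark run along a scanline, then scale pixels to millimetres with the measured working distance (the paper's hybrid binarization is not reproduced; function and parameter names are hypothetical):

```python
def crack_width_px(row, threshold=128):
    """Estimate crack width along one image row as the longest run of
    dark (below-threshold) pixels after binarisation."""
    best = run = 0
    for v in row:
        run = run + 1 if v < threshold else 0
        best = max(best, run)
    return best

def px_to_mm(width_px, distance_m, focal_px):
    """Convert a pixel width to millimetres with a pinhole camera model,
    using the working distance from the ultrasonic sensor."""
    return width_px * distance_m / focal_px * 1000.0
```

This makes explicit why the ultrasonic distance sensor matters: without the working distance, a pixel count cannot be converted to a physical crack width.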
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
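The IHS fusion step can be illustrated in a few lines: scale an upsampled low-resolution colour image so its intensity matches a high-resolution intensity channel (a simplified linear IHS substitution, not the paper's full SR pipeline; names are illustrative):

```python
import numpy as np

def ihs_fuse(rgb_lr, intensity_hr):
    """Intensity-hue-saturation fusion, simplified: replace the intensity
    of an (already upsampled) low-resolution colour image with a
    high-resolution intensity channel, preserving the channel ratios
    (i.e. hue and saturation) where no clipping occurs."""
    I = rgb_lr.mean(axis=2, keepdims=True)            # current intensity
    ratio = intensity_hr[..., None] / np.maximum(I, 1e-6)
    return np.clip(rgb_lr * ratio, 0.0, 1.0)

# Constant-colour toy image: fusing with a brighter intensity channel
# doubles every channel while keeping their ratios.
rgb = np.zeros((2, 2, 3))
rgb[...] = [0.1, 0.2, 0.3]
out = ihs_fuse(rgb, np.full((2, 2), 0.4))
```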
Shigaraki UAV-Radar Experiment (ShUREX): overview of the campaign with some preliminary results
NASA Astrophysics Data System (ADS)
Kantha, Lakshmi; Lawrence, Dale; Luce, Hubert; Hashiguchi, Hiroyuki; Tsuda, Toshitaka; Wilson, Richard; Mixa, Tyler; Yabuki, Masanori
2017-12-01
The Shigaraki unmanned aerial vehicle (UAV)-Radar Experiment (ShUREX) is an international (USA-Japan-France) observational campaign, whose overarching goal is to demonstrate the utility of small, lightweight, inexpensive, autonomous UAVs in probing and monitoring the lower troposphere and to promote synergistic use of UAVs and very high frequency (VHF) radars. The 2-week campaign, lasting from June 1 to June 14, 2015, was carried out at the Middle and Upper Atmosphere (MU) Observatory in Shigaraki, Japan. During the campaign, the DataHawk UAV, developed at the University of Colorado, Boulder, and equipped with high-frequency response cold wire and pitot tube sensors (as well as an iMET radiosonde), was flown near and over the VHF-band MU radar. Measurements in the atmospheric column in the immediate vicinity of the radar were obtained. Simultaneous and continuous operation of the radar in range imaging mode enabled fine-scale structures in the atmosphere to be visualized by the radar. It also permitted the UAV to be commanded to sample interesting structures, guided in near real time by the radar images. This overview provides a description of the ShUREX campaign and some interesting but preliminary results of the very first simultaneous and intensive probing of turbulent structures by UAVs and the MU radar. The campaign demonstrated the validity and utility of the radar range imaging technique in obtaining very high vertical resolution (~20 m) images of echo power in the atmospheric column, which display evolving fine-scale atmospheric structures in unprecedented detail. The campaign also permitted for the very first time the evaluation of the consistency of turbulent kinetic energy dissipation rates in turbulent structures inferred from the spectral broadening of the backscattered radar signal and direct, in situ measurements by the high-frequency response velocity sensor on the UAV.
The data also enabled other turbulence parameters, such as the temperature structure function parameter C_T^2 and the refractive index structure function parameter C_n^2, to be measured by sensors on the UAV, along with the radar-inferred refractive index structure function parameter C_{n,radar}^2. The comprehensive dataset collected during the campaign (from the radar, the UAV, the boundary layer lidar, the ceilometer, and radiosondes) is expected to help obtain a better understanding of turbulent atmospheric structures, as well as arrive at a better interpretation of the radar data.
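Estimating C_T^2 from an in-situ temperature series follows from the inertial-subrange relation D_T(r) = C_T^2 r^(2/3), where D_T is the second-order structure function. A simplified sketch (lag choices and averaging are illustrative, not the campaign's processing):

```python
import numpy as np

def ct2_estimate(temps, dx, r_lags=(1, 2, 4)):
    """Estimate the temperature structure-function parameter C_T^2 from a
    uniformly sampled temperature series (sample spacing dx, in metres):
    compute D_T(r) = <(T(x+r) - T(x))^2> at a few separations r and
    divide out the Kolmogorov r**(2/3) scaling."""
    vals = []
    for lag in r_lags:
        d2 = np.mean((temps[lag:] - temps[:-lag]) ** 2)   # D_T(r)
        vals.append(d2 / (lag * dx) ** (2.0 / 3.0))
    return float(np.mean(vals))
```

The same recipe with refractive index samples gives C_n^2, which is what allows the direct UAV measurements to be compared against the radar-inferred values.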
a Fast Approach for Stitching of Aerial Images
NASA Astrophysics Data System (ADS)
Moussa, A.; El-Sheimy, N.
2016-06-01
The last few years have witnessed an increasing volume of aerial image data because of the extensive improvements of Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have led to a wide variety of applications. A fast assessment of the achieved coverage and overlap of the images acquired during a UAV flight mission is of great help to save the time and cost of the further steps. A fast automatic stitching of the acquired images can help to visually assess the achieved coverage and overlap during the flight mission. This paper proposes an automatic image stitching approach that creates a single overview stitched image using the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images that are typically involved in such scenarios. A short flight mission with an image acquisition frequency of one second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the image stitching procedure by exploiting the initial knowledge about the image positions provided by the navigation sensors. The proposed approach also avoids solving for all the transformation parameters of all the photos together, to save the expected long computation time if all the parameters were considered simultaneously. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the initial image coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation helps to match only the neighbor images and therefore reduces the time-consuming feature matching step. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process.
The pre-estimated transformation parameters of the images are employed successively in a growing fashion to create the stitched image and the coverage image. The proposed approach is implemented and tested using the images acquired through a UAV flight mission and the achieved results are presented and discussed.
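The neighbourhood idea, matching only images that are close in the navigation-derived positions, can be sketched without the triangulation itself; a simple radius test below stands in for the incremental constrained Delaunay triangulation described above, but achieves the same pruning effect of avoiding O(n^2) feature matching:

```python
import math

def neighbour_pairs(positions, radius):
    """Candidate match pairs from initial navigation positions: only
    image pairs closer than `radius` are passed to the expensive SIFT
    feature matching step, instead of all O(n^2) pairs."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) <= radius:
                pairs.append((i, j))
    return pairs
```

With a triangulation, the neighbour set adapts to the local image density rather than a fixed radius, which is why the paper builds a Delaunay structure instead.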
The Practical Application of Uav-Based Photogrammetry Under Economic Aspects
NASA Astrophysics Data System (ADS)
Sauerbier, M.; Siegrist, E.; Eisenbeiss, H.; Demir, N.
2011-09-01
Nowadays, small size UAVs (Unmanned Aerial Vehicles) have reached a level of practical reliability and functionality that enables this technology to enter the geomatics market as an additional platform for spatial data acquisition. Though one could imagine a wide variety of interesting sensors to be mounted on such a device, here we will focus on photogrammetric applications using digital cameras. In practice, UAV-based photogrammetry will only be accepted if it (a) provides the required accuracy and added value, and (b) is competitive in terms of economic application compared with other measurement technologies. While (a) was already proven by the scientific community and results were published comprehensively during the last decade, (b) still has to be verified under real conditions. For this purpose, a test data set representing a realistic scenario provided by ETH Zurich was used to investigate cost effectiveness and to identify weak points in the processing chain that require further development. Our investigations are limited to UAVs carrying digital consumer cameras; for larger UAVs equipped with medium format cameras the situation has to be considered as significantly different. Image data was acquired during flights using a Microdrones MD4-1000 quadrocopter equipped with an Olympus PE-1 digital compact camera. From these images, a subset of 5 images was selected for processing, in order to record the time required for the whole production chain of photogrammetric products. We see the potential of mini UAV-based photogrammetry mainly in smaller areas, up to a size of ca. 100 hectares. Larger areas can be efficiently covered by small airplanes with few images, reducing processing effort drastically. In the case of smaller areas of a few hectares only, it depends more on the products required. UAVs can be an enhancement or alternative to GNSS measurements, terrestrial laser scanning and ground based photogrammetry.
We selected the above mentioned test data from a project featuring an area of interest within the practical range for mini UAVs. While flight planning and flight operation are already quite efficient processes, the bottlenecks identified are mainly related to image processing. Although we used specific software for image processing, the identified gaps in the processing chain today are valid for most commercial photogrammetric software systems on the market. An outlook proposing improvements for a practicable workflow applicable in projects in private economy will be given.
COBRA ATD minefield detection results for the Joint Countermine ACTD Demonstrations
NASA Astrophysics Data System (ADS)
Stetson, Suzanne P.; Witherspoon, Ned H.; Holloway, John H., Jr.; Suiter, Harold R.; Crosby, Frank J.; Hilton, Russell J.; McCarley, Karen A.
2000-08-01
The Coastal Battlefield Reconnaissance and Analysis (COBRA) system described here was a Marine Corps Advanced Technology Demonstration (ATD) development consisting of an unmanned aerial vehicle (UAV) airborne multispectral video sensor system and a ground station which processes the multispectral video data to automatically detect minefields along the flight path. After successful completion of the ATD, the residual COBRA ATD system participated in the Joint Countermine (JCM) Advanced Concept Technology Demonstration (ACTD) Demo I, held at Camp Lejeune, North Carolina in conjunction with JTFX97, and Demo II, held in Stephenville, Newfoundland in conjunction with MARCOT98. These exercises demonstrated the COBRA ATD system in an operational environment, detecting minefields that included several different mine types in widely varying backgrounds. The COBRA system performed superbly during these demonstrations, detecting mines under water, in the surf zone, on the beach, and inland, and has transitioned to an acquisition program. This paper describes the COBRA operation and performance results for these demonstrations, which represent the first demonstrated capability for remote tactical minefield detection from a UAV. The successful COBRA technologies and techniques demonstrated for tactical UAV minefield detection in the Joint Countermine Advanced Concept Technology Demonstrations have formed the technical foundation for future developments in Marine Corps, Navy, and Army tactical remote airborne mine detection systems.
Classical Photogrammetry and UAV - Selected Aspects
NASA Astrophysics Data System (ADS)
Mikrut, S.
2016-06-01
The UAV technology seems to be highly future-oriented due to its low costs as compared to traditional aerial images taken from classical photogrammetric aircraft. The AGH University of Science and Technology in Cracow - Department of Geoinformation, Photogrammetry and Environmental Remote Sensing focuses mainly on the geometry and radiometry of recorded images. Various scientific research centres all over the world have been conducting the relevant research for years. The paper presents selected aspects of processing digital images made with the UAV technology. It provides, using a practical example, a comparison between a digital image taken from a classical airborne height and one taken from UAV level. In his research the author is trying to answer the question: to what extent does the UAV technology diverge today from classical photogrammetry, and what are the advantages and disadvantages of both methods? The flight plan was made over the Tokarnia Village Museum (more than 0.5 km2) for two separate flights: the first was made by a UAV (the FT-03A system built by FlyTech Solution Ltd.); the second was made with a classical photogrammetric Cessna aircraft furnished with an airborne photogrammetric camera (Ultra Cam Eagle). Both sets of photographs were taken with a pixel size of about 3 cm, in order to have reliable data allowing the two systems to be compared. Aerotriangulation was carried out independently for the two flights. The DTM was generated automatically, and the last step was the generation of an orthophoto. The geometry of the images was checked during the aerotriangulation process. To compare the accuracy of the two flights, control and check points were used, and RMSE values were calculated. The radiometry was checked by a visual method and using the author's own algorithm for feature extraction (defining edges with subpixel accuracy). After initial pre-processing of the data, the images were put together and shown side by side.
Buildings and strips on the road were selected from the whole dataset for the comparison of edges and details. The details on UAV images were no worse than those on classical photogrammetric ones, which suggests that they were also geometrically correct. The results of aerotriangulation confirm this as well: final results were at the level of RMS = 1 pixel (about 3 cm). In general it can be said that photographs from UAVs are no worse than classical ones. In the author's opinion, geometric and radiometric qualities are at a similar level for this kind of area (a small village). This is a very significant result as regards mapping. It means that UAV data can be used in map production.
NASA Astrophysics Data System (ADS)
Rango, A.; Vivoni, E. R.; Browning, D. M.; Anderson, C.; Laliberte, A. S.
2013-12-01
It is taking longer than expected to realize the immense potential of Unmanned Aerial Vehicles (UAVs) for civil applications due to the complexity of regulations being developed by the Federal Aviation Administration (FAA) that can be applied to both manned and unmanned flight in the National Airspace System (NAS). As a result, the FAA has required that for all UAV flights in the NAS, an external pilot must maintain line-of-sight contact with the UAV. Properly trained observers must also be present to assist the external pilot in collision avoidance. Additionally, in order to fly in the NAS, formal approval must be requested from the FAA through application for a Certificate of Authorization (COA) for government applicants or a Special Airworthiness Certificate (SAC) in the experimental category for non-government applicants. Flight crews of UAVs must pass exams also required for manned airplane pilots. Although flight crews for UAVs are not required to become manned airplane pilots, UAV flight missions are much more efficient if one or two of the UAV flight crew are also manned aircraft pilots, so they can serve as the UAV mission commander. Our group has performed numerous UAV flights within the Jornada Experimental Range in southern New Mexico. Two developments with Jornada UAVs can be recommended to other UAV operators that would increase flight time experience and the study areas covered by UAV images. First, do not overlook the possibility of obtaining permission to fly in Restricted Military Airspace (RMA). At the Jornada, our airspace is approximately 50% NAS and 50% RMA. With experiments ongoing in both types of airspace, we can fly in both areas and continue to increase UAV flights. Second, we have developed an air-and-ground vehicle approach for long distance, continuous pilot transport that always maintains line-of-sight requirements.
This allows flying several target areas on a single mission and increasing the number of acquired UAV images; over 90,000 UAV images have now been acquired at Jornada. Most of our UAV flights have taken place over rangelands or watersheds in the western U.S. These flights have been successfully used for classification of vegetation cover and type, measuring gaps between vegetation patches, identifying locations of potentially erosive soil, deriving digital elevation models, and monitoring plant phenology. These measurements can be directly compared to more costly and time-intensive traditional techniques used in rangeland health determinations. New UAVs are becoming available with increased sensor payload capacity. At Jornada we have concentrated on flying at low altitudes (~215 m) to acquire hyperspatial resolution of about 5-6 cm with digital cameras. We also fly a six-band multispectral camera with a spatial resolution of ~13 cm. We have recently acquired a larger Bat-4 UAV to go with the Bat-3 UAV. The major improvement associated with this upgrade is an increase in sensor payload from 1.4 kg to 14 kg. We are surveying the types of sensors that we could add to best increase our information content.
Autonomous Control of a Quadrotor UAV Using Fuzzy Logic
NASA Astrophysics Data System (ADS)
Sureshkumar, Vijaykumar
UAVs are being used more today than ever before in both military and civil applications. They are heavily preferred in "dull, dirty or dangerous" mission scenarios. Increasingly, UAVs of all kinds are being used in policing, fire-fighting, inspection of structures, pipelines, etc. Recently, the FAA gave its permission for UAVs to be used on film sets for motion capture and high-definition video recording. The rapid development in MEMS and actuator technology has made possible a plethora of UAVs that are suited for commercial applications in an increasingly cost-effective manner. An emerging popular rotary-wing UAV platform is the quadrotor. A quadrotor is a helicopter with four rotors, which make it more stable but more complex to model and control. Characteristics that provide a clear advantage over fixed-wing UAVs are VTOL and hovering capabilities as well as greater maneuverability. It is also simpler in construction and design compared to a scaled single rotorcraft. Flying such UAVs using a traditional radio transmitter-receiver setup can be a daunting task, especially in high-stress situations. In order to make such platforms widely applicable, a certain level of autonomy is imperative to the future of such UAVs. This thesis presents a methodology for the autonomous control of a quadrotor UAV using fuzzy logic. Fuzzy logic control has been chosen over conventional control methods because it can deal effectively with highly nonlinear systems, allows for imprecise data, and is extremely modular. Modularity and adaptability are the key cornerstones of FLC. The objective of this thesis is to present the steps of designing, building and simulating an intelligent flight control module for a quadrotor UAV. In the course of this research effort, a quadrotor UAV was developed in-house utilizing the resources of an online open-source project called AeroQuad. System design is dealt with comprehensively.
A mathematical model of the quadrotor is developed and a simulation environment is built in the MATLAB/Simulink framework. The fuzzy flight controller development is discussed in detail. Validation of the mathematical model is presented using actual flight data. Excellent attitude tracking is demonstrated for near-hover flight regimes. The responses are analyzed and future work involving implementation is discussed.
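The kind of fuzzy inference such a controller performs can be sketched in a few lines. The membership ranges, rule table and output singletons below are illustrative placeholders (a generic Sugeno-style stabilizer), not the tuned values from the thesis:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_attitude_cmd(err, err_rate):
    """One fuzzy inference step for roll/pitch stabilization.
    err (rad) and err_rate (rad/s) in; motor correction in [-1, 1] out."""
    # Fuzzify: Negative / Zero / Positive sets for each input.
    sets = lambda x, s: {
        "N": tri(x, -2 * s, -s, 0.0),
        "Z": tri(x, -s, 0.0, s),
        "P": tri(x, 0.0, s, 2 * s),
    }
    e, de = sets(err, 0.5), sets(err_rate, 1.0)
    # Output singletons and a 3x3 rule base (illustrative values).
    out = {"NB": -1.0, "NS": -0.5, "Z": 0.0, "PS": 0.5, "PB": 1.0}
    rules = {
        ("N", "N"): "NB", ("N", "Z"): "NS", ("N", "P"): "Z",
        ("Z", "N"): "NS", ("Z", "Z"): "Z",  ("Z", "P"): "PS",
        ("P", "N"): "Z",  ("P", "Z"): "PS", ("P", "P"): "PB",
    }
    num = den = 0.0
    for (a, b), label in rules.items():
        w = min(e[a], de[b])      # rule firing strength (AND = min)
        num += w * out[label]     # weighted singleton output
        den += w
    return num / den if den > 0 else 0.0
```

The modularity claimed for FLC shows up directly here: retuning the controller means editing the rule table or set widths, not rederiving a model.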
NASA Astrophysics Data System (ADS)
Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.
2016-03-01
Efficient mapping from unmanned aerial platforms cannot rely on aerial triangulation using known ground control points. The cost and time of setting ground control, added to the need for increased overlap between flight lines, severely limits the ability of small VTOL platforms, in particular, to handle mapping-grade missions of all but the very smallest survey areas. Applanix has brought its experience in manned photogrammetry applications to this challenge, setting out the requirements for increasing the efficiency of mapping operations from small UAVs, using survey-grade GNSS-Inertial technology to accomplish direct georeferencing of the platform and/or the imaging payload. The Direct Mapping Solution for Unmanned Aerial Vehicles (DMS-UAV) is a complete and ready-to-integrate OEM solution for Direct Georeferencing (DG) on unmanned aerial platforms. Designed as a solution for systems integrators to create mapping payloads for UAVs of all types and sizes, the DMS produces directly georeferenced products for any imaging payload (visual, LiDAR, infrared, multispectral imaging, even video). Additionally, DMS addresses the airframe's requirements for high-accuracy position and orientation for such tasks as precision RTK landing and Precision Orientation for Air Data Systems (ADS), Guidance and Control. This paper presents results using a DMS comprising an Applanix APX-15 UAV with a Sony a7R camera to produce highly accurate orthorectified imagery without Ground Control Points on a Microdrones md4-1000 platform, in tests conducted by Applanix and Avyon. APX-15 UAV is a single-board, small-form-factor GNSS-Inertial system designed for use on small, lightweight platforms. The Sony a7R is a prosumer digital RGB camera with a 36MP, 4.9-micron CMOS sensor producing images at 7360 columns by 4912 rows. It was configured with a 50mm AF-S Nikkor f/1.8 lens and subsequently with a 35mm Zeiss Sonnar T* FE F2.8 lens.
Both the camera/lens combinations and the APX-15 were mounted to a Microdrones md4-1000 quad-rotor VTOL UAV. The Sony A7R and each lens combination were focused and calibrated terrestrially using the Applanix camera calibration facility, and then integrated with the APX-15 GNSS-Inertial system using a custom mount specifically designed for UAV applications. The mount is constructed in such a way as to maintain the stability of both the interior orientation and IMU boresight calibration over shock and vibration, thus turning the Sony A7R into a metric imaging solution. In July and August 2015, Applanix and Avyon carried out a series of test flights of this system. The goal of these test flights was to assess the performance of DMS APX-15 direct georeferencing system under various scenarios. Furthermore, an examination of how DMS APX-15 can be used to produce accurate map products without the use of ground control points and with reduced sidelap was also carried out. Reducing the side lap for survey missions performed by small UAVs can significantly increase the mapping productivity of these platforms. The area mapped during the first flight campaign was a 250m x 300m block and a 775m long railway corridor in a rural setting in Ontario, Canada. The second area mapped was a 450m long corridor over a dam known as Fryer Dam (over Richelieu River in Quebec, Canada). Several ground control points were distributed within both test areas. The flight over the block area included 8 North-South lines and 1 cross strip flown at 80m AGL, resulting in a ~1cm GSD. The flight over the railway corridor included 2 North-South lines also flown at 80m AGL. Similarly, the flight over the dam corridor included 2 North-South lines flown at 50m AGL. The focus of this paper was to analyse the results obtained from the two corridors. 
Test results from both areas were processed using Direct Georeferencing techniques, and then compared for accuracy against the known positions of ground control points in each test area. The GNSS-Inertial data collected by the APX-15 was post-processed in Single Base mode, using a base station located in the project area via POSPac UAV. For the block and railway corridor, the basestation's position was precisely determined by processing a 12-hour session using the CSRS-PPP Post Processing service. Similarly, for the flight over Fryer Dam, the base-station's position was also precisely determined by processing a 4-hour session using the CSRS-PPP Post Processing service. POSPac UAV's camera calibration and quality control (CalQC) module was used to refine the camera interior orientation parameters using an Integrated Sensor Orientation (ISO) approach. POSPac UAV was also used to generate the Exterior Orientation parameters for images collected during the test flight. The Inpho photogrammetric software package was used to develop the final map products for both corridors under various scenarios. The imagery was first imported into an Inpho project, with updated focal length, principal point offsets and Exterior Orientation parameters. First, a Digital Terrain/Surface Model (DTM/DSM) was extracted from the stereo imagery, following which the raw images were orthorectified to produce an orthomosaic product.
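The core of direct georeferencing is intersecting each image ray, oriented by the GNSS-Inertial exterior orientation, with the terrain. A minimal flat-terrain sketch under a pinhole model; the omega-phi-kappa convention and all numbers are generic assumptions, not the Applanix/POSPac implementation:

```python
import numpy as np

def rot_matrix(omega, phi, kappa):
    """Camera-to-ground rotation from omega-phi-kappa angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def georeference(xy_img, eo_pos, eo_ang, f, terrain_z):
    """Intersect an image ray with a horizontal plane at terrain_z.
    xy_img: image-plane coords (mm); eo_pos: camera X,Y,Z (m, numpy array)
    from GNSS-INS; eo_ang: omega,phi,kappa from the IMU; f: focal length (mm)."""
    R = rot_matrix(*eo_ang)
    ray = R @ np.array([xy_img[0], xy_img[1], -f])   # ray in ground frame
    scale = (terrain_z - eo_pos[2]) / ray[2]          # metres per mm of ray
    return eo_pos[:2] + scale * ray[:2]
```

For a nadir view at 80 m with a 50 mm lens, a 1 mm image offset maps to 1.6 m on the ground, consistent with the scale-by-height-over-focal-length intuition.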
Textured digital elevation model formation from low-cost UAV LADAR/digital image data
NASA Astrophysics Data System (ADS)
Bybee, Taylor C.; Budge, Scott E.
2015-05-01
Textured digital elevation models (TDEMs) have valuable use in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Being a low-cost UAV, only a coarse knowledge of position and attitude is known, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.
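Once correspondences between adjacent texel-image swaths are found, 3D registration reduces to estimating a rigid transform. A standard least-squares (Kabsch/SVD) sketch, shown here as a generic building block rather than the authors' specific registration pipeline:

```python
import numpy as np

def kabsch_align(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.
    P, Q: (N, 3) arrays of corresponding 3D points from adjacent swaths."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With only coarse position/attitude available, the coarse pose seeds the correspondence search and a solver like this refines the swath-to-swath alignment.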
USDA-ARS?s Scientific Manuscript database
With the rapid development of small imaging sensors and unmanned aerial vehicles (UAVs), remote sensing is undergoing a revolution with greatly increased spatial and temporal resolutions. While more relevant detail becomes available, it is a challenge to analyze the large number of images to extract...
Towards collaboration between unmanned aerial and ground vehicles for precision agriculture
NASA Astrophysics Data System (ADS)
Bhandari, Subodh; Raheja, Amar; Green, Robert L.; Do, Dat
2017-05-01
This paper presents the work being conducted at Cal Poly Pomona on the collaboration between unmanned aerial and ground vehicles for precision agriculture. The unmanned aerial vehicles (UAVs), equipped with multispectral/hyperspectral cameras and RGB cameras, take images of the crops while flying autonomously. The images are post-processed or can be processed onboard. The processed images are used in the detection of unhealthy plants. Aerial data can be used by the UAVs and unmanned ground vehicles (UGVs) for various purposes, including care of crops, harvest estimation, etc. The images can also be useful for optimized harvesting by isolating low-yielding plants. These vehicles can be operated autonomously with limited or no human intervention, thereby reducing cost and limiting human exposure to agricultural chemicals. The paper discusses the autonomous UAV and UGV platforms used for the research, sensor integration, and experimental testing. Methods for ground-truthing the results obtained from the UAVs will also be employed. The paper will also discuss equipping the UGV with a robotic arm for removing the unhealthy plants and/or weeds.
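Detection of unhealthy plants from multispectral imagery is commonly based on a vegetation index such as NDVI; a minimal sketch (the 0.4 threshold is an illustrative assumption, not a value from the paper, and real crops need calibrated thresholds):

```python
import numpy as np

def ndvi_mask(red, nir, threshold=0.4):
    """NDVI from red and near-infrared bands; flag low-vigour pixels.
    Returns the NDVI image and a boolean mask of potentially stressed pixels."""
    red = red.astype(float)
    nir = nir.astype(float)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)  # avoid divide-by-zero
    return ndvi, ndvi < threshold
```

The stressed-pixel mask is the kind of product a UGV could consume to plan where to inspect or remove plants.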
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue because proper motion estimation has not been considered. Existing moving object detection approaches for UAV aerial images do not use motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either frame differencing or segmentation separately. There are two main purposes for this research: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area around a moving object rather than searching the whole frame. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
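The frame-difference-plus-dilation idea can be sketched with plain NumPy; this is a generic illustration of the two ingredients, not the authors' DMM/SUED implementation:

```python
import numpy as np

def dilate3x3(mask):
    """Binary dilation with a 3x3 structuring element (zero-padded borders)."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

def moving_object_mask(prev, curr, thresh=25):
    """Threshold the absolute frame difference, then dilate the result
    to close small gaps around the moving object."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return dilate3x3(diff > thresh)
```

In the paper's scheme the search for such blobs is restricted to the DMM-provided windows instead of the whole frame, which is where the speed-up comes from.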
Tian, Jinyan; Li, Xiaojuan; Duan, Fuzhou; Wang, Junqian; Ou, Yang
2016-01-01
The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing conforms to the increasing demand for low-altitude very high resolution (VHR) image data. However, high processing speed for massive UAV data has become an indispensable prerequisite for its applications in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging was introduced to adjust the difference in brightness between the two matched images, and the parameters of the algorithm were derived in this study. Second, a Gaussian distance weight distribution method was proposed to fuse the two matched images in the overlap region based on the First Law of Geography, which spreads the partial dislocation at the seam over the whole overlap region with a smooth-transition effect. This method was validated at a study site located in Hanwang (Sichuan, China), an area seriously damaged in the 12 May 2008 Wenchuan Earthquake. Then, a performance comparison between WD-GDWE and five other classical seam elimination algorithms in terms of efficiency and effectiveness was conducted. Results showed that WD-GDWE is not only efficient but also effective. This method is promising for advancing applications in the UAV industry, especially in emergency situations. PMID:27171091
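Both steps have simple global analogues that can be sketched directly. The locally adaptive parameters derived in the paper are replaced here by a global mean/std match and a fixed Gaussian width, so this is an illustration of the two ideas, not WD-GDWE itself:

```python
import numpy as np

def wallis_match(img, ref):
    """Wallis-style brightness/contrast transfer: map img so its global
    mean and standard deviation match those of ref."""
    m, s = img.mean(), img.std()
    mt, st = ref.mean(), ref.std()
    return (img - m) * (st / max(s, 1e-9)) + mt

def gaussian_blend(left, right, sigma_frac=0.3):
    """Fuse two equally sized overlap strips with a Gaussian distance weight,
    spreading the seam dislocation across the whole overlap."""
    w = left.shape[1]
    x = np.arange(w)
    d = np.exp(-0.5 * (x / (sigma_frac * w)) ** 2)   # weight of the left image
    d = (d - d.min()) / (d.max() - d.min())          # 1 at left edge, 0 at right
    return left * d + right * (1 - d)
```

The weight decays smoothly from one image to the other across the overlap, which is what produces the smooth-transition effect the abstract describes.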
The pan-sharpening of satellite and UAV imagery for agricultural applications
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Woroszkiewicz, Malgorzata
2016-10-01
Remote sensing techniques are widely used in many different areas of interest, i.e. urban studies, environmental studies, agriculture, etc., due to the fact that they provide rapid and accurate information over large areas with optimal temporal, spatial and spectral resolutions. Agricultural management is one of the most common applications of remote sensing methods nowadays. Monitoring agricultural sites and creating information regarding the spatial distribution and characteristics of crops are important tasks in providing data for precision agriculture, crop management and registries of agricultural lands. For monitoring of cultivated areas many different types of remote sensing data can be used; the most popular is multispectral satellite imagery. Such data allow for generating land use and land cover maps, based on various methods of image processing and remote sensing. This paper presents the fusion of satellite and unmanned aerial vehicle (UAV) imagery for agricultural applications, especially for distinguishing crop types. The authors present selected data fusion methods for satellite images and data obtained from low altitudes. Moreover, the authors describe pan-sharpening approaches and apply selected pan-sharpening methods to the multiresolution fusion of satellite and UAV imagery. For this purpose, satellite images from the Landsat-8 OLI sensor and data collected during various UAV flights (with a mounted RGB camera) were used. In this article, the authors not only show the potential of fusing satellite and UAV images, but also present the application of pan-sharpening in crop identification and management.
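One classical pan-sharpening method applicable to such multiresolution fusion is the Brovey transform; a minimal three-band sketch (a standard technique offered as an example, not necessarily among the methods the authors chose):

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey transform pan-sharpening.
    ms: (H, W, 3) multispectral image already resampled to the pan grid;
    pan: (H, W) high-resolution panchromatic (or UAV RGB intensity) band."""
    ms = ms.astype(float)
    intensity = ms.sum(axis=2)
    ratio = pan / np.maximum(intensity, 1e-9)   # per-pixel intensity match
    return ms * ratio[..., None]                # rescale each band
```

Each output band keeps the multispectral band ratios while adopting the high-resolution intensity, which is why Brovey preserves relative color at the cost of radiometric fidelity.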
Context-Based Urban Terrain Reconstruction from Uav-Videos for Geoinformation Applications
NASA Astrophysics Data System (ADS)
Bulatov, D.; Solbrig, P.; Gross, H.; Wernerus, P.; Repasi, E.; Heipke, C.
2011-09-01
Urban terrain reconstruction has many applications in areas of civil engineering, urban planning, surveillance and defense research. Therefore the need to cover ad-hoc demand and perform close-range urban terrain reconstruction with miniaturized and relatively inexpensive sensor platforms is constantly growing. Using (miniaturized) unmanned aerial vehicles, (M)UAVs, represents one of the most attractive alternatives to conventional large-scale aerial imagery. We cover in this paper a four-step procedure for obtaining georeferenced 3D urban models from video sequences. The four steps of the procedure, orientation, dense reconstruction, urban terrain modeling and geo-referencing, are robust, straightforward, and nearly fully automatic. The last two steps, namely urban terrain modeling from almost-nadir videos and co-registration of models, represent the main contribution of this work and are therefore covered in more detail. The essential substeps of the third step include digital terrain model (DTM) extraction, segregation of buildings from vegetation, and instantiation of building and tree models. The last step is subdivided into quasi-intrasensorial registration of Euclidean reconstructions and intersensorial registration with a geo-referenced orthophoto. Finally, we present reconstruction results from a real data set and outline ideas for future work.
Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing
Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu
2017-01-01
Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides images of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%. PMID:28880254
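A crude version of the width estimate, the area of the binarized crack divided by its centerline length, converted to millimetres with the measured working distance, can be sketched as follows. The pinhole conversion and the focal length in pixels are assumptions for illustration, not the paper's calibrated values or its hybrid binarization:

```python
import numpy as np

def crack_width_mm(binary_crack, skeleton_len_px, distance_mm,
                   focal_px=3000.0):
    """Rough crack width: crack area / centerline length in pixels,
    scaled to mm using the ultrasonic working distance (pinhole model).
    focal_px is an assumed camera calibration value."""
    area_px = binary_crack.sum()
    width_px = area_px / max(skeleton_len_px, 1)
    mm_per_px = distance_mm / focal_px   # ground sampling at that range
    return width_px * mm_per_px
```

The working distance from the ultrasonic sensor is exactly what makes the pixel-to-millimetre conversion possible without ground control.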
BgCut: automatic ship detection from UAV images.
Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong
2014-01-01
Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images taken under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching against these templates with a region-growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation process requires no iteration. The effectiveness of our proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area taken by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also produces good segmentation. Furthermore, the model in this paper can be well applied in the automated processing of industrial images for related research.
NASA Astrophysics Data System (ADS)
Shao, Yanhua; Mei, Yanying; Chu, Hongyu; Chang, Zhiyuan; He, Yuxuan; Zhan, Huayi
2018-04-01
Pedestrian detection (PD) is an important application domain in computer vision and pattern recognition. Unmanned Aerial Vehicles (UAVs) have become a major field of research in recent years. In this paper, an algorithm for robust pedestrian detection based on the combination of the infrared HOG (IR-HOG) feature and an SVM is proposed for highly complex outdoor scenarios, on the basis of airborne IR image sequences from a UAV. The basic flow of our application is as follows. First, the thermal infrared imager (TAU2-336) installed on our Outdoor Autonomous Searching (OAS) UAV takes pictures of the designated outdoor area. Second, image sequence collection and processing are accomplished on a high-performance embedded system, with a Samsung ODROID-XU4 as the core and Ubuntu as the operating system, and IR-HOG features are extracted. Finally, an SVM is used to train the pedestrian classifier. Experiments show that our method yields promising results under complex conditions, including strong noise corruption and partial occlusion.
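The building block of any HOG-style descriptor, including IR-HOG, is a magnitude-weighted orientation histogram per cell. A minimal unsigned-gradient sketch (generic HOG with assumed bin count and normalization, not the authors' exact IR-HOG parameters):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram for one HOG cell.
    Unsigned gradients (0-180 degrees), magnitude-weighted votes,
    L2-normalized so the feature is contrast-invariant."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist = np.zeros(n_bins)
    bin_idx = (ang / (180.0 / n_bins)).astype(int) % n_bins
    for i, m in zip(bin_idx.ravel(), mag.ravel()):
        hist[i] += m                 # magnitude-weighted vote per pixel
    return hist / max(np.linalg.norm(hist), 1e-9)
```

Concatenating such histograms over a detection window gives the fixed-length vector fed to the SVM classifier.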
AirSTAR: A UAV Platform for Flight Dynamics and Control System Testing
NASA Technical Reports Server (NTRS)
Jordan, Thomas L.; Foster, John V.; Bailey, Roger M.; Belcastro, Christine M.
2006-01-01
As part of the NASA Aviation Safety Program at Langley Research Center, a dynamically scaled unmanned aerial vehicle (UAV) and associated ground based control system are being developed to investigate dynamics modeling and control of large transport vehicles in upset conditions. The UAV is a 5.5% (seven foot wingspan), twin turbine, generic transport aircraft with a sophisticated instrumentation and telemetry package. A ground based, real-time control system is located inside an operations vehicle for the research pilot and associated support personnel. The telemetry system supports over 70 channels of data plus video for the downlink and 30 channels for the control uplink. Data rates are in excess of 200 Hz. Dynamic scaling of the UAV, which includes dimensional, weight, inertial, actuation, and control system scaling, is required so that the sub-scale vehicle will realistically simulate the flight characteristics of the full-scale aircraft. This testbed will be utilized to validate modeling methods, flight dynamics characteristics, and control system designs for large transport aircraft, with the end goal being the development of technologies to reduce the fatal accident rate due to loss-of-control.
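Dynamic scaling of this kind follows standard Froude similitude: holding the Froude number constant fixes how velocity, time, mass and inertia scale with the length ratio. A sketch of the textbook relations (standard similitude factors at equal air density, not values quoted in the abstract):

```python
import math

def froude_scale(n):
    """Froude scaling factors for an n-scale dynamically scaled model,
    where n = model length / full-scale length (e.g. n = 0.055 for a
    5.5% model). Assumes equal air density at model and full scale."""
    return {
        "length": n,
        "velocity": math.sqrt(n),       # V ~ sqrt(L) keeps Froude number fixed
        "time": math.sqrt(n),           # dynamics play out faster on the model
        "mass": n ** 3,
        "inertia": n ** 5,
        "frequency": 1 / math.sqrt(n),  # control loops must run faster
    }
```

At n = 0.055 the frequency factor is about 4.3, which is consistent with the abstract's emphasis on high telemetry and control rates for the sub-scale vehicle.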
Cameras and settings for optimal image capture from UAVs
NASA Astrophysics Data System (ADS)
Smith, Mike; O'Connor, James; James, Mike R.
2017-04-01
Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
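Two of the planning quantities discussed, ground sample distance and the shutter speed needed to bound forward motion blur, follow directly from the pinhole model. A sketch; the pixel pitch, focal length and blur budget in the example are illustrative inputs, not recommendations from the paper:

```python
def gsd_cm(sensor_pixel_um, focal_mm, altitude_m):
    """Ground sample distance (cm/pixel) for a nadir camera (pinhole model)."""
    return sensor_pixel_um * 1e-6 * altitude_m / (focal_mm * 1e-3) * 100

def max_shutter_s(gsd_m, ground_speed_ms, blur_px=0.5):
    """Longest shutter time keeping forward motion blur under blur_px pixels."""
    return blur_px * gsd_m / ground_speed_ms
```

For example, a 4.9 um pixel behind a 50 mm lens at 80 m gives a GSD of about 0.78 cm, and at 10 m/s ground speed a 1 cm GSD allows roughly a 1/2000 s exposure before blur exceeds half a pixel.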
How Should the Joint Force Handle the Command and Control of Unmanned Aircraft Systems?
2008-11-18
personnel, and control apparatus. Collectively these are the unmanned aircraft system (UAS). The outputs of a UAS can range from full motion video ...reconnaissance aircraft, like the pilotless Predator drone that provides real-time surveillance video to the battlefield.”55 He continued, “While...www.foxnews.com/story/0,2933,351964,00.html [accessed July 7, 2008]. Baldor, Lolita C. Associated Press. “Increased UAV Reliance Evident in 2009 Budget
Optimal trajectory planning for a UAV glider using atmospheric thermals
NASA Astrophysics Data System (ADS)
Kagabo, Wilson B.
An Unmanned Aerial Vehicle glider (UAV glider) uses atmospheric energy in its different forms to remain aloft for extended flight durations. This UAV glider's aim is to extract atmospheric thermal energy and use it to supplement its battery energy usage and increase the mission period. Given an atmospheric thermal of known strength and location identified by an infrared camera; the current wind speed and direction; the current battery level; the altitude and location of the UAV glider; and an estimate of the expected altitude gain from the thermal, is it possible to make an energy-efficiency-based decision to fly to that thermal so as to extend the UAV glider's flight time? For this work, an infrared thermal camera aboard the UAV glider takes continuous forward-looking ground images of "hot spots". Through image processing, the strength and location of a candidate atmospheric thermal are estimated. An Intelligent Decision Model incorporates this information with the current UAV glider status and weather conditions to provide an energy-based recommendation to modify the flight path of the UAV glider. Research, development, and simulation of the Intelligent Decision Model are the primary focus of this work. Three models are developed: (1) a Battery Usage Model, (2) an Intelligent Decision Model, and (3) an Altitude Gain Model. The Battery Usage Model derives from the candidate flight trajectory, the wind speed and direction, and the aircraft dynamic model. The Intelligent Decision Model uses a fuzzy-logic-based approach. The Altitude Gain Model requires the strength and size of the thermal, which are found a priori.
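The energy-based recommendation can be illustrated by the simplest possible balance: divert only if the expected climb in the thermal exceeds the altitude sunk flying there and back. All parameters below are illustrative assumptions, not the thesis's Battery Usage or Altitude Gain models:

```python
def worth_diverting(divert_dist_m, climb_m, sink_ms=0.7,
                    airspeed_ms=12.0):
    """Crude altitude-budget test for a thermal-soaring decision:
    divert_dist_m: one-way distance to the candidate thermal;
    climb_m: expected altitude gain in the thermal;
    sink_ms / airspeed_ms: assumed glider sink rate and cruise speed."""
    transit_time = 2 * divert_dist_m / airspeed_ms  # out and back
    alt_lost = sink_ms * transit_time               # sink during the detour
    return climb_m > alt_lost
```

A fuzzy-logic decision model, as used in the thesis, would soften this hard threshold with battery level, wind, and thermal-confidence terms.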
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which has the characteristics of strong independence and high precision and is not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep-space probes and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision technology. In China, with the development of many types of UAVs and the start of the third phase of the lunar exploration project, there has been significant progress in the study of visual navigation. The paper reviews the development of computer-vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied in three areas, as follows. (1) Acquisition of UAV navigation parameters. The parameters, including UAV attitude, position and velocity, can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantaneously matched images and reference images, and the relationship between the carrier's velocity and features of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision, including feature matching, template matching, image-frame analysis and so on, are mainly introduced. (3) Target tracking and positioning. Using the obtained images, the UAV position is calculated using the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) The high-speed visual system.
It uses parallel structure, with which image detection and processing are carried out at high speed. The system is applied to rapid response system. (2) The visual system of distributed network. There are several discrete image data acquisition sensor in different locations, which transmit image data to the node processor to increase the sampling rate. (3) The visual system combined with observer. The system combines image sensors with the external observers to make up for lack of visual equipment. To some degree, these systems overcome lacks of the early visual system, including low frequency, low processing efficiency and strong noise. In the end, the difficulties of navigation based on computer version technology in practical application are briefly discussed. (1) Due to the huge workload of image operation , the real-time performance of the system is poor. (2) Due to the large environmental impact , the anti-interference ability of the system is poor.(3) Due to the ability to work in a particular environment, the system has poor adaptability.
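Several of the parameter-estimation and tracking methods named above (optical flow velocities, Kalman filtering of visual fixes) share a common filtering core. As a minimal, hedged sketch, not tied to any specific system in the review, a one-dimensional Kalman filter smoothing noisy vision-derived position fixes could look like this (all noise parameters are illustrative):

```python
import numpy as np

def kalman_1d(z, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Minimal 1-D constant-position Kalman filter: smooths noisy
    position fixes (e.g. from scene matching) into a state estimate.
    q = process noise variance, r = measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for zk in z:
        p = p + q                    # predict: inflate uncertainty
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # update with the measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# noisy fixes scattered around a true position of 5.0
rng = np.random.default_rng(0)
z = 5.0 + 0.5 * rng.standard_normal(200)
est = kalman_1d(z)
```

With these settings the steady-state gain is small, so the estimate tracks the true position far more tightly than any single fix.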
Employing UAVs to Acquire Detailed Vegetation and Bare Ground Data for Assessing Rangeland Health
NASA Astrophysics Data System (ADS)
Rango, A.; Laliberte, A.; Herrick, J. E.; Winters, C.
2007-12-01
Because of its value as a historical record (extending back to the mid 1930s), aerial photography is an important tool in many rangeland studies. However, these historical photos are not very useful for detailed analysis of rangeland health because of inadequate spatial resolution and scheduling limitations. These issues are now being resolved by flying Unmanned Aerial Vehicles (UAVs) over rangeland study areas. Spatial resolution has improved rapidly in the last 10 years, from the QuickBird satellite through improved aerial photography to the new UAV coverage, through better sensors and the simpler approach of low-altitude flights. Our rangeland health experiments have shown that low-altitude UAV digital photography is preferred by rangeland scientists because it allows them, for the first time, to identify vegetation and land surface patterns and patches, gap sizes, bare soil percentages, and vegetation type. This hyperspatial imagery (imagery with a resolution finer than the object of interest) is obtained at about 5 cm resolution by flying at an altitude of 150 m above the surface of the Jornada Experimental Range in southern New Mexico. Additionally, the UAV provides improved temporal flexibility, such as flights immediately following fires, floods, and other catastrophic disturbances, because the flight capability is located near the study area and the vehicles are under the direct control of the users, eliminating the additional steps associated with budgets and contracts.
There are significant challenges to improve the data to make them useful for operational agencies, namely, image distortion with inexpensive, consumer grade digital cameras, difficulty in detecting sufficient ground control points in small scenes (152m by 114m), accuracy of exterior UAV information on X,Y, Z, roll, pitch, and heading, the sheer number of images collected, and developing reliable relationships with ground-based data across a broad range of topographies and plant communities. Our efforts are currently focused on developing a complete and efficient workflow for UAV operational missions consisting of flight planning, image acquisition, image rectification and mosaicking, and image classification. The remote sensing capability is being incorporated into existing rangeland health assessment and monitoring protocols.
Assessing the consistency of UAV-derived point clouds and images acquired at different altitudes
NASA Astrophysics Data System (ADS)
Ozcan, O.
2016-12-01
Unmanned Aerial Vehicles (UAVs) offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and satellite remote sensing systems. UAVs, which bridge the gap between satellite-scale and field-scale applications, are now being used in various application areas to acquire hyperspatial, high temporal resolution imagery, thanks to their operational capacity and short acquisition times compared with conventional photogrammetric methods. UAVs have been used in fields such as the creation of 3-D earth models, production of high-resolution orthophotos, network planning, field monitoring and agriculture. Thus, the geometric accuracy of orthophotos and the volumetric accuracy of point clouds are of capital importance for land surveying applications. Correspondingly, Structure from Motion (SfM) photogrammetry, which is frequently used in conjunction with UAVs, has recently appeared in the environmental sciences as an impressive tool for creating 3-D models from unstructured imagery. This study aimed to reveal the spatial accuracy of the images acquired from the integrated digital camera and the volumetric accuracy of Digital Surface Models (DSMs) derived from UAV flight plans at different altitudes using SfM methodology. Low-altitude multispectral overlapping aerial photography was collected at altitudes of 30 to 100 meters and georeferenced with RTK-GPS ground control points. These altitudes yield hyperspatial imagery with resolutions of 1-5 cm, depending on the sensor being used. Preliminary results revealed that the vertical comparison of UAV-derived point clouds with GPS measurements showed average distances at the cm level, with larger values in areas where abrupt changes in the surface are present.
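The vertical comparison described above can be illustrated with a small sketch. The function below is an assumption about the general approach, not the authors' code: each GPS checkpoint elevation is compared with the mean elevation of nearby point-cloud points, and a vertical RMSE is reported.

```python
import numpy as np

def vertical_errors(cloud_xyz, checkpoints_xyz, radius=0.5):
    """For each GPS checkpoint, compare its elevation with the mean
    elevation of cloud points within `radius` (m) horizontally,
    and return the per-checkpoint errors and their RMSE."""
    errors = []
    for cx, cy, cz in checkpoints_xyz:
        d2 = (cloud_xyz[:, 0] - cx) ** 2 + (cloud_xyz[:, 1] - cy) ** 2
        near = cloud_xyz[d2 <= radius ** 2]
        if len(near):
            errors.append(near[:, 2].mean() - cz)
    errors = np.asarray(errors)
    rmse = float(np.sqrt(np.mean(errors ** 2)))
    return errors, rmse

# synthetic check: a flat surface with 3 cm vertical noise
rng = np.random.default_rng(1)
g = np.arange(0.0, 10.25, 0.25)
xx, yy = np.meshgrid(g, g)
cloud = np.column_stack([xx.ravel(), yy.ravel(),
                         0.03 * rng.standard_normal(xx.size)])
checkpoints = np.array([[2.0, 2.0, 0.0], [5.0, 5.0, 0.0], [8.0, 8.0, 0.0]])
errs, rmse = vertical_errors(cloud, checkpoints)
```

On this flat synthetic surface the RMSE comes out at the cm level, mirroring the kind of comparison reported in the abstract.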
Uav Borne Low Altitude Photogrammetry System
NASA Astrophysics Data System (ADS)
Lin, Z.; Su, G.; Xie, F.
2012-07-01
In this paper, the three major aspects of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry, i.e., the flying platform, the imaging sensor system and the data processing software, are discussed. First, according to the technical requirements on minimum cruising speed, shortest taxiing distance, level of flight control and performance in turbulent flight, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters and unmanned airships) are compared and analyzed. Secondly, considering the restrictions on platform payload weight and sensor resolution, together with the exposure equation and the theory of optical information, emphasis is placed on the principles of designing self-calibrating, self-stabilizing combined wide-angle digital cameras (e.g., double-combined and four-combined cameras). Finally, a software package named MAP-AT, designed around the particularities of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT puts more effort into automatic extraction, automatic checking and operator-assisted addition of tie points for images with large tilt angles. Based on the process for low-altitude photogrammetry with UAVs recommended in this paper, more than ten aerial photogrammetry missions have been accomplished, and the accuracies of their aerial triangulation, digital orthophotos (DOM) and digital line graphs (DLG) meet the standard requirements of 1:2000, 1:1000 and 1:500 mapping.
NASA Astrophysics Data System (ADS)
Gohatre, Umakant Bhaskar; Patil, Venkat P.
2018-04-01
In computer vision applications, multiple-object detection and tracking in real time is an important research field that has gained a lot of attention in recent years for finding non-stationary entities in image sequences. Object detection is the step toward following a moving object in video, and object representation is the step that enables tracking. Recognizing multiple objects in a video sequence is a challenging task. Image registration has long been used as a basis for detecting moving objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. However, image registration is not well suited to handling events that can result in potentially missed objects. In this paper, to address such problems, we propose a novel approach. Video frames are segmented using a region adjacency graph of visual appearance and geometric properties; matching between graph sequences is then performed using multi-graph matching, and matched-region labeling is obtained by a proposed graph coloring algorithm which assigns a foreground label to each respective region. The design is robust to unknown transformations, with significant improvement over existing work on real-time detection of multiple moving objects.
Lessons Learned from NASA UAV Science Demonstration Program Missions
NASA Technical Reports Server (NTRS)
Wegener, Steven S.; Schoenung, Susan M.
2003-01-01
During the summer of 2002, two airborne missions were flown as part of a NASA Earth Science Enterprise program to demonstrate the use of uninhabited aerial vehicles (UAVs) to perform earth science. One mission, the Altus Cumulus Electrification Study (ACES), successfully measured lightning storms in the vicinity of Key West, Florida, during storm season using a high-altitude Altus(TM) UAV. In the other, a solar-powered UAV, the Pathfinder Plus, flew a high-resolution imaging mission over coffee fields in Kauai, Hawaii, to help guide the harvest.
Possibilities of Use of UAVS for Technical Inspection of Buildings and Constructions
NASA Astrophysics Data System (ADS)
Banaszek, Anna; Banaszek, Sebastian; Cellmer, Anna
2017-12-01
In recent years, Unmanned Aerial Vehicles (UAVs) have been used in various sectors of the economy. This is due to the development of new technologies for acquiring and processing geospatial data. The paper presents the results of experiments using UAV, equipped with a high resolution digital camera, for a visual assessment of the technical condition of the building roof and for the inventory of energy infrastructure and its surroundings. The usefulness of digital images obtained from the UAV deck is presented in concrete examples. The use of UAV offers new opportunities in the area of technical inspection due to the detail and accuracy of the data, low operating costs and fast data acquisition.
UAV Trajectory Modeling Using Neural Networks
NASA Technical Reports Server (NTRS)
Xue, Min
2017-01-01
A large number of small Unmanned Aerial Vehicles (sUAVs) is projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrain, restricted areas, variable winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, where accurate trajectory models play an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important given the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as intellectual property. This brings challenges to trajectory modeling for sUAVs: how can a vehicle's trajectory be modeled when its control system is unknown? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses under numerous conditions.
Once fully trained, given the current vehicle states, winds, and desired future trajectory, the neural network should be able to predict the vehicle's states at the next time step. A complete 4-D trajectory is then generated step by step using the trained neural network. Experiments in this work show that the neural network can approximate the sUAV's model and predict the trajectory accurately.
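The step-by-step generation described above is an autoregressive rollout: the model maps (current state, wind, desired waypoint) to the next state, and its outputs are fed back in. In the sketch below, toy_step is a hand-written stand-in for the trained neural network; the gains, wind values and waypoints are all illustrative.

```python
import numpy as np

def rollout(step_model, state0, winds, waypoints):
    """Generate a trajectory step by step: at each time step the
    (trained) model predicts the next state from the current state,
    the local wind, and the next desired waypoint."""
    states = [np.asarray(state0, float)]
    for wind, wp in zip(winds, waypoints):
        states.append(step_model(states[-1], wind, wp))
    return np.array(states)

# Placeholder "model" standing in for the trained neural network:
# move a fixed fraction toward the waypoint, then drift with the wind.
def toy_step(state, wind, waypoint, gain=0.3, dt=1.0):
    return state + gain * (np.asarray(waypoint) - state) + dt * np.asarray(wind)

traj = rollout(toy_step,
               state0=[0.0, 0.0, 30.0],            # x, y, altitude (m)
               winds=[[0.5, 0.0, 0.0]] * 20,       # steady 0.5 m/s wind
               waypoints=[[100.0, 0.0, 30.0]] * 20)
```

The rollout converges toward the waypoint with a small steady-state offset induced by the wind, the kind of behavior a learned step model would be expected to capture.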
McEvoy, John F; Hall, Graham P; McDonald, Paul G
2016-01-01
The use of unmanned aerial vehicles (UAVs) for ecological research has grown rapidly in recent years, but few studies have assessed the disturbance impacts of these tools on focal subjects, particularly when observing easily disturbed species such as waterfowl. In this study we assessed the level of disturbance that a range of UAV shapes and sizes had on free-living, non-breeding waterfowl surveyed in two sites in eastern Australia between March and May 2015, as well as the capability of airborne digital imaging systems to provide adequate resolution for unambiguous species identification of these taxa. We found little or no obvious disturbance effects on wild, mixed-species flocks of waterfowl when UAVs were flown at least 60m above the water level (fixed wing models) or 40m above individuals (multirotor models). Disturbance in the form of swimming away from the UAV through to leaving the water surface and flying away from the UAV was visible at lower altitudes and when fixed-wing UAVs either approached subjects directly or rapidly changed altitude and/or direction near animals. Using tangential approach flight paths that did not cause disturbance, commercially available onboard optical equipment was able to capture images of sufficient quality to identify waterfowl and even much smaller taxa such as swallows. Our results show that with proper planning of take-off and landing sites, flight paths and careful UAV model selection, UAVs can provide an excellent tool for accurately surveying wild waterfowl populations and provide archival data with fewer logistical issues than traditional methods such as manned aerial surveys.
NASA Astrophysics Data System (ADS)
Reineman, B. D.; Lenain, L.; Statom, N.; Melville, W. K.
2012-12-01
We have developed instrumentation packages for unmanned aerial vehicles (UAVs) to measure ocean surface processes along with momentum fluxes and latent, sensible, and radiative heat fluxes in the marine atmospheric boundary layer (MABL). The packages have been flown over land on BAE Manta C1s and over water on Boeing-Insitu ScanEagles. The low altitude required for accurate surface flux measurements (< 30 m) is below the typical safety limit of manned research aircraft; however, with advances in laser altimeters, small-aircraft flight control, and real-time kinematic differential GPS, low-altitude flight is now within the capability of small UAV platforms. Fast-response turbulence, hygrometer, and temperature probes permit turbulent flux measurements, and short- and long-wave radiometers allow the determination of net radiation, surface temperature, and albedo. Onboard laser altimetry and high-resolution visible and infrared video permit observations of surface waves and fine-scale (O(10) cm) ocean surface temperature structure. Flight tests of payloads aboard ScanEagle UAVs were conducted in April 2012 at the Naval Surface Warfare Center Dahlgren Division (Dahlgren, VA), where measurements of water vapor, heat, and momentum fluxes were made from low-altitude (31-m) UAV flights over water (Potomac River). ScanEagles are capable of ship-based launch and recovery, which can extend the reach of research vessels and enable scientific measurements out to ranges of O(10-100) km and altitudes up to 5 km. UAV-based atmospheric and surface observations can complement observations of surface and subsurface phenomena made from a research vessel and avoid the well-known problems of vessel interference in MABL measurements. We present a description of the instrumentation, summarize results from flight tests, and discuss potential applications of these UAVs for ship-based MABL studies.
NASA Astrophysics Data System (ADS)
Chen, Su-Chin; Hsiao, Yu-Shen; Chung, Ta-Hsien
2015-04-01
This study aims to determine landslide and driftwood potentials in the Shenmu area of Taiwan using an Unmanned Aerial Vehicle (UAV). High-resolution orthomosaics and digital surface models (DSMs) are obtained from several practical UAV surveys using a red-green-blue (RGB) camera and a near-infrared (NIR) camera, respectively. Several artificial aerial survey targets are used for ground control in the photogrammetry. The algorithm for this study is based on logistic regression. Eight main factors (elevation, terrain slope, terrain aspect, terrain relief, terrain roughness, distance to roads, distance to rivers, and land utilization) are taken into consideration in our logistic regression model. The results from the UAV are compared with those from traditional photogrammetry. Overall, the study focuses on monitoring the distribution of areas with high landslide and driftwood potential in the Shenmu area using fixed-wing UAV-borne RGB and NIR images. We also further analyze the relationship between forests, landslides, disaster potentials and upper river areas.
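A minimal sketch of a logistic-regression susceptibility model of this kind, with synthetic data and only two stand-in factors instead of the paper's eight, might look like:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Fit P(landslide | factors) = sigmoid(w.x + b) by gradient
    descent on the log-loss. Columns of X are the conditioning
    factors (elevation, slope, distances, ...), here synthetic."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                       # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# synthetic example: steeper slopes -> higher landslide probability
rng = np.random.default_rng(2)
slope = rng.uniform(0, 1, 500)          # informative factor
other = rng.uniform(0, 1, 500)          # uninformative factor
X = np.column_stack([slope, other])
y = (slope + 0.1 * rng.standard_normal(500) > 0.5).astype(float)
w, b = fit_logistic(X, y)
```

On this synthetic data the fitted weight on the slope factor dominates the uninformative one, which is the behavior a susceptibility model exploits when mapping factor grids to probability maps.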
On a Fundamental Evaluation of a Uav Equipped with a Multichannel Laser Scanner
NASA Astrophysics Data System (ADS)
Nakano, K.; Suzuki, H.; Omori, K.; Hayakawa, K.; Kurodai, M.
2018-05-01
Unmanned aerial vehicles (UAVs), which have been widely used in various fields such as archaeology, agriculture, mining, and construction, can acquire high-resolution images at the millimetre scale. It is possible to obtain realistic 3D models using high-overlap images and 3D reconstruction software based on computer vision technologies such as Structure from Motion and Multi-view Stereo. However, it remains difficult to obtain key points from surfaces with limited texture such as new asphalt or concrete, or from areas like forests that may be concealed by vegetation. A promising method for conducting aerial surveys is through the use of UAVs equipped with laser scanners. We conducted a fundamental performance evaluation of the Velodyne VLP-16 multi-channel laser scanner mounted on a DJI Matrice 600 Pro UAV at a construction site. Here, we present our findings with respect to both the geometric and radiometric aspects of the acquired data.
A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes
Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung
2016-01-01
Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156
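A simplified version of camera-based target positioning can be sketched by projecting a detected pixel to the ground, assuming a nadir-pointing camera and flat terrain. All parameters below are illustrative placeholders, not the paper's system:

```python
import numpy as np

def locate_target(uav_pos, heading_rad, px, py, img_w, img_h,
                  focal_px, altitude_agl):
    """Project a detected target's pixel coordinates to ground
    coordinates (east, north), assuming a nadir-pointing camera
    over flat terrain, with heading 0 meaning flying due north."""
    # offset from the image centre, converted to metres on the ground
    dx = (px - img_w / 2) * altitude_agl / focal_px   # right of track
    dy = (img_h / 2 - py) * altitude_agl / focal_px   # along track
    # rotate the camera-frame offset into the world frame by heading
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    east  = uav_pos[0] + dx * c + dy * s
    north = uav_pos[1] - dx * s + dy * c
    return east, north

# example: target seen 500 px right of centre from 100 m AGL
east, north = locate_target((100.0, 200.0), 0.0,
                            px=1500, py=750, img_w=2000, img_h=1500,
                            focal_px=1000.0, altitude_agl=100.0)
```

A real system would additionally correct for camera tilt, lens distortion and terrain elevation; this sketch only shows the core geometry.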
Hurricane Harvey Building Damage Assessment Using UAV Data
NASA Astrophysics Data System (ADS)
Yeom, J.; Jung, J.; Chang, A.; Choi, I.
2017-12-01
Hurricane Harvey, an extremely destructive major hurricane, struck southern Texas, U.S.A. on August 25, causing catastrophic flooding and storm damage. We visited Rockport, which suffered severe building destruction, and conducted UAV (Unmanned Aerial Vehicle) surveying for building damage assessment. UAVs provide very high resolution images compared with traditional remote sensing data. In addition, prompt and cost-effective damage assessment can be performed without several of the limitations of other remote sensing platforms, such as the revisit interval of satellites, complicated flight planning in aerial surveying, and cloud cover. In this study, a UAV flight and GPS surveying were conducted two weeks after the hurricane damage to generate an orthomosaic image and a DEM (Digital Elevation Model). A 3D region growing scheme is proposed to quantitatively estimate building damage by considering the elevation change and spectral difference of building debris. The results showed that the proposed method can be used for high-definition building damage assessment in a time- and cost-effective way.
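A simplified 2-D sketch of such a region-growing scheme follows; the paper's method is 3-D, and the thresholds and inputs here are illustrative. Cells join a debris region when both their elevation change and spectral difference exceed thresholds, and connected cells are grouped by flood fill:

```python
import numpy as np
from collections import deque

def grow_regions(dz, spec_diff, dz_thresh=0.5, spec_thresh=0.2):
    """Label connected debris regions: a cell qualifies when its
    elevation change and spectral difference both exceed thresholds
    (a 2-D stand-in for the paper's 3-D region growing)."""
    mask = (dz > dz_thresh) & (spec_diff > spec_thresh)
    labels = np.zeros(mask.shape, int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1                 # start a new region
                q = deque([(i, j)])
                labels[i, j] = current
                while q:                     # 4-connected flood fill
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if (0 <= x < mask.shape[0] and 0 <= y < mask.shape[1]
                                and mask[x, y] and labels[x, y] == 0):
                            labels[x, y] = current
                            q.append((x, y))
    return labels, current

# two separate 2x2 debris patches on a 10x10 grid
dz = np.zeros((10, 10)); dz[1:3, 1:3] = 1.0; dz[6:8, 6:8] = 1.0
spec = np.ones((10, 10))
labels, n_regions = grow_regions(dz, spec)
```

Each labeled region's cell count and summed elevation change could then be turned into a per-building debris volume estimate.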
3D Modeling with Photogrammetry by UAVs and Model Quality Verification
NASA Astrophysics Data System (ADS)
Barrile, V.; Bilotta, G.; Nunnari, A.
2017-11-01
This paper deals with a test led by the Geomatics laboratory (DICEAM, Mediterranea University of Reggio Calabria) concerning the application of UAV photogrammetry for survey, monitoring and checking. The case study concerns the surroundings of the Department of Agriculture Sciences. In recent years, this area was affected by landslides, and survey activities were carried out to keep the phenomenon under control. For this purpose, a set of digital images was acquired through a UAV equipped with a digital camera and GPS. Subsequently, processing for the production of a 3D georeferenced model was performed using the commercial software Agisoft PhotoScan. Similarly, a terrestrial laser scanning technique allowed producing dense clouds and 3D models of the same area. To assess the accuracy of the UAV-derived 3D models, a comparison between image- and range-based methods was performed.
An accelerated image matching technique for UAV orthoimage registration
NASA Astrophysics Data System (ADS)
Tsai, Chung-Hsien; Lin, Yu-Ching
2017-06-01
Using an Unmanned Aerial Vehicle (UAV) with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme comprising an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points. To determine a match between two input images, feature descriptors from one image are compared with those from the other. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the feature matching stage, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively, and 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.
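The binary-descriptor matching that BRISK-style methods rely on can be sketched with a brute-force Hamming-distance matcher. This is illustrative only; it omits the paper's Sorting Ring acceleration and uses randomly generated descriptors:

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors
    (BRISK-style, packed as uint8 rows) by Hamming distance."""
    # pairwise Hamming distances via XOR + bit counting
    x = desc_a[:, None, :] ^ desc_b[None, :, :]
    dist = np.unpackbits(x, axis=2).sum(axis=2)
    nn = dist.argmin(axis=1)
    return [(i, j) for i, (j, d) in enumerate(zip(nn, dist.min(axis=1)))
            if d <= max_dist]

# sanity check: the same descriptors in reversed order match up exactly
rng = np.random.default_rng(3)
desc_a = rng.integers(0, 256, size=(5, 64), dtype=np.uint8)
desc_b = desc_a[::-1].copy()
matches = hamming_match(desc_a, desc_b, max_dist=0)
```

XOR plus popcount is why binary descriptors match so much faster than floating-point SIFT vectors, whose comparison needs Euclidean distances.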
NASA Technical Reports Server (NTRS)
2010-01-01
Topics covered include: Burnishing Techniques Strengthen Hip Implants; Signal Processing Methods Monitor Cranial Pressure; Ultraviolet-Blocking Lenses Protect, Enhance Vision; Hyperspectral Systems Increase Imaging Capabilities; Programs Model the Future of Air Traffic Management; Tail Rotor Airfoils Stabilize Helicopters, Reduce Noise; Personal Aircraft Point to the Future of Transportation; Ducted Fan Designs Lead to Potential New Vehicles; Winglets Save Billions of Dollars in Fuel Costs; Sensor Systems Collect Critical Aerodynamics Data; Coatings Extend Life of Engines and Infrastructure; Radiometers Optimize Local Weather Prediction; Energy-Efficient Systems Eliminate Icing Danger for UAVs; Rocket-Powered Parachutes Rescue Entire Planes; Technologies Advance UAVs for Science, Military; Inflatable Antennas Support Emergency Communication; Smart Sensors Assess Structural Health; Hand-Held Devices Detect Explosives and Chemical Agents; Terahertz Tools Advance Imaging for Security, Industry; LED Systems Target Plant Growth; Aerogels Insulate Against Extreme Temperatures; Image Sensors Enhance Camera Technologies; Lightweight Material Patches Allow for Quick Repairs; Nanomaterials Transform Hairstyling Tools; Do-It-Yourself Additives Recharge Auto Air Conditioning; Systems Analyze Water Quality in Real Time; Compact Radiometers Expand Climate Knowledge; Energy Servers Deliver Clean, Affordable Power; Solutions Remediate Contaminated Groundwater; Bacteria Provide Cleanup of Oil Spills, Wastewater; Reflective Coatings Protect People and Animals; Innovative Techniques Simplify Vibration Analysis; Modeling Tools Predict Flow in Fluid Dynamics; Verification Tools Secure Online Shopping, Banking; Toolsets Maintain Health of Complex Systems; Framework Resources Multiply Computing Power; Tools Automate Spacecraft Testing, Operation; GPS Software Packages Deliver Positioning Solutions; Solid-State Recorders Enhance Scientific Data Collection; Computer Models Simulate Fine 
Particle Dispersion; Composite Sandwich Technologies Lighten Components; Cameras Reveal Elements in the Short Wave Infrared; Deformable Mirrors Correct Optical Distortions; Stitching Techniques Advance Optics Manufacturing; Compact, Robust Chips Integrate Optical Functions; Fuel Cell Stations Automate Processes, Catalyst Testing; Onboard Systems Record Unique Videos of Space Missions; Space Research Results Purify Semiconductor Materials; and Toolkits Control Motion of Complex Robotics.
NASA Astrophysics Data System (ADS)
Piermattei, Livia; Bozzi, Carlo Alberto; Mancini, Adriano; Tassetti, Anna Nora; Karel, Wilfried; Pfeifer, Norbert
2017-04-01
Unmanned aerial vehicles (UAVs) in combination with consumer grade cameras have become standard tools for photogrammetric applications and surveying. The recent generation of multispectral, cost-efficient and lightweight cameras has fostered a breakthrough in the practical application of UAVs for precision agriculture. For this application, multispectral cameras typically use Green, Red, Red-Edge (RE) and Near Infrared (NIR) wavebands to capture both visible and invisible images of crops and vegetation. These bands are very effective for deriving characteristics like soil productivity, plant health and overall growth. However, the quality of results is affected by the sensor architecture, the spatial and spectral resolutions, the pattern of image collection, and the processing of the multispectral images. In particular, collecting data with multiple sensors requires an accurate spatial co-registration of the various UAV image datasets. Multispectral processed data in precision agriculture are mainly presented as orthorectified mosaics used to export information maps and vegetation indices. This work aims to investigate the acquisition parameters and processing approaches of this new type of image data in order to generate orthoimages using different sensors and UAV platforms. Within our experimental area we placed a grid of artificial targets, whose position was determined with differential global positioning system (dGPS) measurements. Targets were used as ground control points to georeference the images and as checkpoints to verify the accuracy of the georeferenced mosaics. The primary aim is to present a method for the spatial co-registration of visible, Red-Edge, and NIR image sets. To demonstrate the applicability and accuracy of our methodology, multi-sensor datasets were collected over the same area and approximately at the same time using the fixed-wing UAV senseFly "eBee". 
The images were acquired with the camera Canon S110 RGB, the multispectral cameras Canon S110 NIR and S110 RE and with the multi-camera system Parrot Sequoia, which is composed of single-band cameras (Green, Red, Red Edge, NIR and RGB). Imagery from each sensor was georeferenced and mosaicked with the commercial software Agisoft PhotoScan Pro and different approaches for image orientation were compared. To assess the overall spatial accuracy of each dataset the root mean square error was computed between check point coordinates measured with dGPS and coordinates retrieved from georeferenced image mosaics. Additionally, image datasets from different UAV platforms (i.e. DJI Phantom 4Pro, DJI Phantom 3 professional, and DJI Inspire 1 Pro) were acquired over the same area and the spatial accuracy of the orthoimages was evaluated.
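One common way to co-register two band mosaics from matched control points is a least-squares 2-D similarity transform (scale, rotation, translation). The sketch below is a generic illustration of that idea, not the processing chain used in the study:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform mapping control points
    `src` onto `dst`, of the kind used to co-register band mosaics
    from different sensors."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    # complex-number formulation: z_d ~= a * z_s, a = scale * e^{i theta}
    zs = s[:, 0] + 1j * s[:, 1]
    zd = d[:, 0] + 1j * d[:, 1]
    a = (np.conj(zs) @ zd) / (np.conj(zs) @ zs)
    scale, theta = abs(a), np.angle(a)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def apply_similarity(scale, R, t, pts):
    return scale * pts @ R.T + t

# recover a known transform from six synthetic control points
rng = np.random.default_rng(4)
src = rng.uniform(0.0, 100.0, size=(6, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 1.2 * src @ R_true.T + np.array([5.0, -2.0])
scale, R, t = fit_similarity(src, dst)
```

The residuals at the checkpoints after applying the fitted transform are exactly the quantities that feed the RMSE accuracy figures described in the abstract.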
Photogrammetric mapping using unmanned aerial vehicle
NASA Astrophysics Data System (ADS)
Graça, N.; Mitishita, E.; Gonçalves, J.
2014-11-01
Nowadays, Unmanned Aerial Vehicle (UAV) technology has attracted attention for aerial photogrammetric mapping. The low cost and the ability to fly automatically along commanded waypoints can be considered the main advantages of this technology in photogrammetric applications. Using GNSS/INS technologies, the images are taken at the planned position of the exposure station, and the exterior orientation parameters of the images (position Xo, Yo, Zo and attitude ω, φ, κ) can be determined directly. However, common off-the-shelf UAVs do not replace the traditional aircraft platform. Overall, the main shortcomings are related to: difficulties in obtaining authorization to perform flights in urban and rural areas, platform stability, flight safety, stability of the image block configuration, the high number of images, and inaccuracies in the direct determination of the exterior orientation parameters of the images. This paper presents the results obtained from a photogrammetric mapping project using aerial images from the SIMEPAR UAV system. The PIPER J3 UAV Hydro aircraft was used. It has a MicroPilot MP2128g autopilot, fully integrated with 3-axis gyros/accelerometers, GPS, a pressure altimeter and pressure airspeed sensors. A Sony Cyber-shot DSC-W300 was calibrated and used to acquire the image block. The flight height was close to 400 m, resulting in a GSD of nearly 0.10 m. The state of the art of the technology used, the methodologies and the obtained results are shown and discussed. Finally, the advantages and shortcomings found in the study and the main conclusions are presented.
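The reported figures (400 m flying height, roughly 0.10 m GSD) follow from the standard ground-sample-distance relation: GSD = flying height × physical pixel size / focal length. The camera parameters below are hypothetical placeholders chosen to reproduce those numbers, not the calibrated DSC-W300 values:

```python
def ground_sample_distance(height_m, focal_mm, pixel_um):
    """GSD (m/pixel) = flying height * physical pixel size / focal
    length, all converted to metres."""
    return height_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)

# e.g. a hypothetical 6 mm lens with 1.5 um pixels at 400 m altitude
gsd = ground_sample_distance(400.0, 6.0, 1.5)
```

Flying lower or using a longer focal length shrinks the GSD proportionally, which is the basic trade-off behind UAV flight planning.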
Feasibility study of a novel miniaturized spectral imaging system architecture in UAV surveillance
NASA Astrophysics Data System (ADS)
Liu, Shuyang; Zhou, Tao; Jia, Xiaodong; Cui, Hushan; Huang, Chengjun
2016-01-01
Spectral imaging technology can analyze the spectral and spatial geometric characteristics of a target at the same time. To break through the limitations imposed by the size, weight and cost of traditional spectral imaging instruments, a novel miniaturized spectral imager based on CMOS processing has been introduced to the market. This technology has made it possible to apply spectral imaging on UAV platforms. In this paper, the relevant technology and possible applications are presented towards a quick, flexible and more detailed remote sensing system.
NASA Astrophysics Data System (ADS)
Bohlman, S.; Park, J.; Muller-Landau, H. C.; Rifai, S. W.; Dandois, J. P.
2017-12-01
Phenology is a critical driver of ecosystem processes. There is strong evidence that phenology is shifting in temperate ecosystems in response to climate change, but tropical tree and liana phenology remains poorly quantified and understood. A key challenge is that tropical forests contain hundreds of plant species with a wide variety of phenological patterns. Satellite-based observations, an important source of phenology data in northern latitudes, are hindered by frequent cloud cover in the tropics. To quantify phenology over a large number of individuals and species, we collected bi-weekly images from unmanned aerial vehicles (UAVs) in the well-studied 50-ha forest inventory plot on Barro Colorado Island, Panama. Between October 2014 and December 2015 and again in May 2015, we collected a total of 35 sets of UAV images, each with continuous coverage of the 50-ha plot, where every tree ≥ 1 cm DBH is mapped. Spectral, texture, and image information was extracted from the UAV images for individual tree crowns and used as input to a machine learning algorithm to predict percent leaf and branch cover. We obtained the species identities of 2000 crowns in the images via field mapping. The objectives of this study are to (1) determine whether machine learning algorithms, applied to UAV images, can effectively quantify changes in leaf cover, which we term "deciduousness"; (2) determine how liana cover affects deciduousness; and (3) test how well UAV-derived deciduousness patterns match satellite-derived temporal patterns. Machine learning algorithms trained on a variety of image parameters could effectively determine leaf cover, despite variation in lighting and viewing angles. Crowns with higher liana cover have less overall deciduousness (tree + liana together) than crowns with lower liana cover. Individual crown deciduousness, summed over all crowns measured in the 50-ha plot, showed a similar seasonal pattern as MODIS EVI composited over 10 years.
However, MODIS EVI phenology greened up earlier than UAV-based deciduousness, perhaps reflecting new leaf flushing late in the dry season that increases EVI but not overall leaf cover. We discuss potential mechanisms that explain variation among species and between trees and lianas, and the consequences of this variation for ecosystem processes and modeling.
Monitoring landslide dynamics using timeseries of UAV imagery
NASA Astrophysics Data System (ADS)
de Jong, S. M.; Van Beek, L. P.
2017-12-01
Landslides occur worldwide and can have large economic impacts, sometimes resulting in fatalities. Multiple factors are important in landslide processes and can make an area prone to landslide activity. Human factors such as drainage and removal of vegetation or land clearing are examples of factors that may cause a landslide. Other environmental factors, such as topography and the shear strength of the slope material, are more difficult to control. Landslides are typically triggered by heavy rainfall events or, in some cases, by earthquakes or undercutting by a river. The collection of data about existing landslides in a given area is important for predicting future landslides in that region. We have set up a monitoring program for landslides using cameras aboard Unmanned Aerial Vehicles (UAVs). UAVs with cameras can collect ultra-high-resolution images and can be operated in a very flexible way; they fit in the back of a car. In this study we used UAVs to collect a time series of high-resolution images over landslides in France and Australia. The UAV images were processed into orthomosaics and orthoDEMs using Structure from Motion (SfM). The process generally results in centimeter precision in the horizontal and vertical directions. Such multi-temporal datasets enable the detection of the landslide area, the leading-edge slope, temporal patterns and volumetric changes of particular areas of the landslide. We measured and computed surface movement of the landslide using the COSI-Corr image correlation algorithm with ground validation. Our study shows the possibilities of generating accurate Digital Surface Models (DSMs) of landslides using images collected with a UAV. The technique is robust and repeatable, such that a substantial time series of datasets can be routinely collected.
It is shown that a time series of UAV images can be used to map landslide movements with centimeter accuracy. We also found that there can be a cyclical nature to the slope of the leading edge of the landslide, suggesting that the steepness of the slope can be used to predict the next forward surge of the leading edge.
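COSI-Corr itself performs sub-pixel frequency-domain correlation; as a minimal stand-in, the integer-pixel displacement between two co-registered image patches can be estimated by maximizing normalized cross-correlation over a small search range. The images and shift below are synthetic:

```python
import numpy as np

def ncc_shift(ref, cur, max_shift=3):
    """Integer-pixel displacement of `cur` relative to `ref`, found by
    maximizing normalized cross-correlation over a small search range
    (a toy stand-in for the sub-pixel correlation done by COSI-Corr)."""
    best, best_score = (0, 0), -np.inf
    h, w = ref.shape
    m = max_shift
    a = ref[m:h - m, m:w - m]
    a = (a - a.mean()) / (a.std() + 1e-12)   # zero-mean, unit-variance patch
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            b = cur[m + dy:h - m + dy, m + dx:w - m + dx]
            b = (b - b.mean()) / (b.std() + 1e-12)
            score = (a * b).mean()           # normalized cross-correlation
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, (2, 1), axis=(0, 1))  # simulate a (2, 1) surface shift
print(ncc_shift(img, shifted))  # → (2, 1)
```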
NASA Astrophysics Data System (ADS)
Rango, A.; Vivoni, E. R.; Anderson, C. A.; Perini, N. A.; Saripalli, S.; Laliberte, A.
2012-12-01
A common problem in many natural resource disciplines is the lack of high-enough spatial resolution images that can be used for monitoring and modeling purposes. Advances have been made in the utilization of Unmanned Aerial Vehicles (UAVs) in hydrology and rangeland science. By utilizing low flight altitudes and velocities, UAVs are able to produce high resolution (5 cm) images as well as stereo coverage (with 75% forward overlap and 40% sidelap) to extract digital elevation models (DEM). Another advantage of flying at low altitude is that the potential problems of atmospheric haze obscuration are eliminated. Both small fixed-wing and rotary-wing aircraft have been used in our experiments over two rangeland areas in the Jornada Experimental Range in southern New Mexico and the Santa Rita Experimental Range in southern Arizona. The fixed-wing UAV has a digital camera in the wing and six-band multispectral camera in the nose, while the rotary-wing UAV carries a digital camera as payload. Because we have been acquiring imagery for several years, there are now > 31,000 photos at one of the study sites, and 177 mosaics over rangeland areas have been constructed. Using the DEM obtained from the imagery we have determined the actual catchment areas of three watersheds and compared these to previous estimates. At one site, the UAV-derived watershed area is 4.67 ha which is 22% smaller compared to a manual survey using a GPS unit obtained several years ago. This difference can be significant in constructing a watershed model of the site. From a vegetation species classification, we also determined that two of the shrub types in this small watershed(mesquite and creosote with 6.47 % and 5.82% cover, respectively) grow in similar locations(flat upland areas with deep soils), whereas the most predominant shrub(mariola with 11.9% cover) inhabits hillslopes near stream channels(with steep shallow soils). 
The positioning of these individual shrubs throughout the catchment using UAV image classifications is required as input to detailed watershed modeling There are multiple advantages to UAVs for use in hydrology and rangeland science, including that coverage is less expensive while just as accurate as conventional ground measurements. The UAV guidance systems can also guarantee returning to the same location for change detection analysis. UAV capabilities also have advantages over manned aircraft because they are safer, less expensive, and can respond in a timelier manner to new flight requests. As a result, the use of UAVs for watershed and rangeland monitoring and modeling is a rapidly expanding civil application in natural resources.
Small Unmanned Aerial Vehicles; DHS’s Answer to Border Surveillance Requirements
2013-03-01
5 of more than 4000 illegal aliens, including the seizure of more than 15,000 pounds of marijuana.13 In addition to the Predator UAVs being...payload includes two color video cameras, an infrared camera that offers night vision capability and synthetic aperture radar that provides high
Flight Testing the X-48B at the Dryden Flight Research Center
NASA Technical Reports Server (NTRS)
Cosenito, Gary B.
2010-01-01
Topics discussed include: a) UAVs at NASA Dryden, Past and Present; b) Why Do We Flight Test?; c) The Blended (or Hybrid) Wing-Body Advantage; d) Program Objectives; e) The X-48B Vehicle and Ground Control Station; and f) Flight Test Highlights & Video.
Estimating evaporation with thermal UAV data and two-source energy balance models
NASA Astrophysics Data System (ADS)
Hoffmann, H.; Nieto, H.; Jensen, R.; Guzinski, R.; Zarco-Tejada, P.; Friborg, T.
2016-02-01
Estimating evaporation is important when managing water resources and cultivating crops. Evaporation can be estimated using land surface heat flux models and remotely sensed land surface temperatures (LST), which have recently become obtainable at very high resolution using lightweight thermal cameras and Unmanned Aerial Vehicles (UAVs). In this study a thermal camera was mounted on a UAV and applied to heat flux and hydrological studies by concatenating thermal images into mosaics of LST and using these as input for the two-source energy balance (TSEB) modelling scheme. Thermal images were obtained with a fixed-wing UAV overflying a barley field in western Denmark during the growing season of 2014, and a spatial resolution of 0.20 m was obtained in the final LST mosaics. Two models are used: the original TSEB model (TSEB-PT) and a dual-temperature-difference (DTD) model. In contrast to the TSEB-PT model, the DTD model accounts for the bias that is likely present in remotely sensed LST. TSEB-PT and DTD have already been well tested, however only during sunny weather conditions and with satellite images serving as thermal input. The aim of this study is to assess whether a lightweight thermal camera mounted on a UAV can provide data of sufficient quality to serve as model input and thus attain accurate surface energy heat fluxes at high spatial and temporal resolution, with special focus on latent heat flux (evaporation). Furthermore, this study evaluates the performance of the TSEB scheme during cloudy and overcast weather conditions, which is feasible due to the low flying altitude of the UAV, in contrast to satellite thermal data that are only available during clear-sky conditions. TSEB-PT and DTD fluxes are compared and validated against eddy covariance measurements, and the comparison shows that both TSEB-PT and DTD simulations are in good agreement with the eddy covariance measurements, with DTD obtaining the best results.
The DTD model provides results comparable to studies estimating evaporation with similar experimental setups, but with LST retrieved from satellites instead of a UAV. Further, systematic irrigation patterns on the barley field provide confidence in the veracity of the spatially distributed evaporation revealed by model output maps. Lastly, this study outlines and discusses the thermal UAV image processing that results in mosaics suited for model input. This study shows that the UAV platform and the lightweight thermal camera provide high spatial and temporal resolution data valid for model input and for other potential applications requiring high-resolution and consistent LST.
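The latent heat flux that both TSEB variants ultimately deliver is the residual of the surface energy balance. A one-line sketch of that closure, with illustrative midday flux values:

```python
def latent_heat_flux(rn, g, h):
    """Latent heat flux LE (the evaporation term) as the residual of the
    surface energy balance Rn = G + H + LE, all fluxes in W m^-2."""
    return rn - g - h

# illustrative midday values over a cereal field
print(latent_heat_flux(rn=500.0, g=50.0, h=150.0))  # → 300.0
```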
Feature-aided multiple target tracking in the image plane
NASA Astrophysics Data System (ADS)
Brown, Andrew P.; Sullivan, Kevin J.; Miller, David J.
2006-05-01
Vast quantities of EO and IR data are collected on airborne platforms (manned and unmanned) and terrestrial platforms (including fixed installations, e.g., at street intersections), and can be exploited to aid in the global war on terrorism. However, intelligent preprocessing is required to enable operator efficiency and to provide commanders with actionable target information. To this end, we have developed an image plane tracker which automatically detects and tracks multiple targets in image sequences using both motion and feature information. The effects of platform and camera motion are compensated via image registration, and a novel change detection algorithm is applied for accurate moving target detection. The contiguous pixel blob on each moving target is segmented for use in target feature extraction and model learning. Feature-based target location measurements are used for tracking through move-stop-move maneuvers, close target spacing, and occlusion. Effective clutter suppression is achieved using joint probabilistic data association (JPDA), and confirmed target tracks are indicated for further processing or operator review. In this paper we describe the algorithms implemented in the image plane tracker and present performance results obtained with video clips from the DARPA VIVID program data collection and from a miniature unmanned aerial vehicle (UAV) flight.
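The motion-compensated change detection step described above can be illustrated in miniature: register the previous frame to the current one (here by a known integer shift standing in for full image registration) and threshold the frame difference. The scene and shift are hypothetical:

```python
import numpy as np

def moving_target_mask(prev, cur, shift, thresh=0.2):
    """Toy change detection: compensate platform motion by a known
    integer shift (standing in for full image registration), then
    threshold the absolute frame difference."""
    registered = np.roll(prev, shift, axis=(0, 1))
    return np.abs(cur - registered) > thresh

bg = np.zeros((8, 8))
prev = bg.copy()
cur = np.roll(bg, (1, 0), axis=(0, 1))  # camera moved down one pixel
cur[4, 4] = 1.0                         # a moving target appears
mask = moving_target_mask(prev, cur, shift=(1, 0))
print(int(mask.sum()))  # → 1  (only the target pixel survives)
```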
Incorrect Match Detection Method for Arctic Sea-Ice Reconstruction Using Uav Images
NASA Astrophysics Data System (ADS)
Kim, J.-I.; Kim, H.-C.
2018-05-01
Shapes and surface roughness, which are considered key indicators for understanding Arctic sea-ice, can be measured from the digital surface model (DSM) of the target area. Unmanned aerial vehicles (UAVs) flying at low altitudes in principle enable accurate DSM generation. However, the characteristics of sea-ice, with its textureless surface and incessant motion, make image matching difficult for DSM generation. In this paper, we propose a method for effectively detecting incorrect matches before correcting a sea-ice DSM derived from UAV images. The proposed method variably adjusts the size of a search window to analyze the matching results of the generated DSM and distinguishes incorrect matches. Experimental results showed that the sea-ice DSM exhibited large errors along textureless surfaces, and that the incorrect matches could be effectively detected by the proposed method.
UAV Monitoring for Environmental Management in Galapagos Islands
NASA Astrophysics Data System (ADS)
Ballari, D.; Orellana, D.; Acosta, E.; Espinoza, A.; Morocho, V.
2016-06-01
In the Galapagos Islands, where 97% of the territory is protected and ecosystem dynamics are highly vulnerable, timely and accurate information is key for decision making. An appropriate monitoring system must meet two key requirements: on the one hand, being able to capture information on a systematic and regular basis, and on the other, being able to quickly gather information on demand for specific purposes. The lack of such a system for geographic information limits the ability of the Galapagos Islands' institutions to evaluate and act upon environmental threats such as invasive species spread and vegetation degradation. In this context, the use of UAVs (unmanned aerial vehicles) for capturing georeferenced images is a promising technology for environmental monitoring and management. This paper explores the potential of UAV images for monitoring degradation of littoral vegetation in Puerto Villamil (Isabela Island, Galapagos, Ecuador). Imagery was captured using two camera types: Red Green Blue (RGB) and Near-Infrared Red Green (NIR). First, vegetation presence was identified through NDVI. Second, object-based classification was carried out to characterize vegetation vigor. The results demonstrate the feasibility of UAV technology for baseline studies and monitoring of the amount and vigor of littoral vegetation in the Galapagos Islands. It is also shown that UAV images are useful not only for visual interpretation and object delineation, but also for timely production of useful thematic information for environmental management.
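The NDVI step used above for detecting vegetation presence is the standard band ratio (NIR − Red)/(NIR + Red); a minimal sketch with illustrative reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)  # epsilon guards zero-sum pixels

# vigorous vegetation reflects strongly in NIR and weakly in red;
# bare or stressed surfaces give values near zero (illustrative values)
print(ndvi([0.45, 0.20], [0.05, 0.18]))
```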
3-D model-based tracking for UAV indoor localization.
Teulière, Céline; Marchand, Eric; Eck, Laurent
2015-05-01
This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where a GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.
NASA Astrophysics Data System (ADS)
Ramos, Antonio L. L.; Shao, Zhili; Holthe, Aleksander; Sandli, Mathias F.
2017-05-01
The introduction of System-on-Chip (SoC) technology has brought exciting new opportunities for the development of smart, low-cost embedded systems spanning a wide range of applications. Currently available SoC devices are capable of performing high-speed digital signal processing tasks in software while featuring relatively low development costs and reduced time-to-market. Unmanned aerial vehicles (UAVs) are an application example that has shown tremendous potential in an increasing number of scenarios, ranging from leisure to surveillance as well as search and rescue missions. Video capture from UAV platforms is a relatively straightforward task that requires almost no preprocessing. However, that does not apply to audio signals, especially in cases where the data is to be used to support real-time decision making. In fact, the enormous amount of acoustic interference from the surroundings, including the noise from the UAV's propellers, becomes a huge problem. This paper discusses a real-time implementation of the NLMS adaptive filtering algorithm applied to enhancing acoustic signals captured from UAV platforms. The model relies on a combination of acoustic sensors and a computationally inexpensive algorithm running on a digital signal processor. Given its simplicity, this solution can be incorporated into the main processing system of a UAV using SoC technology and run concurrently with other required tasks, such as flight control and communications. Simulations and real-time DSP-based implementations have shown significant signal enhancement results by efficiently mitigating the interference from the noise generated by the UAV's propellers as well as from other external noise sources.
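A minimal NLMS sketch, assuming a single noise-reference channel and a short FIR model of the propeller-noise path (both assumptions for illustration; the paper's sensor arrangement is not specified here):

```python
import numpy as np

def nlms(x, d, order=4, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: estimate d from reference x,
    returning the error signal e = d - y (the 'cleaned' output when
    x is correlated with the noise in d)."""
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]                # most recent samples first
        y = w @ u                               # filter output
        e[n] = d[n] - y                         # estimation error
        w += mu * e[n] * u / (u @ u + eps)      # normalized weight update
    return e

rng = np.random.default_rng(1)
noise = rng.standard_normal(5000)               # propeller-noise reference
d = 0.9 * np.roll(noise, 1)                     # noise leaking into the mic
e = nlms(noise, d)
print(float(np.mean(e[-500:] ** 2)) < 1e-3)     # residual power after convergence
```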
Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images
Ortega-Terol, Damian; Hernandez-Lopez, David; Ballesteros, Rocio; Gonzalez-Aguilera, Diego
2017-01-01
Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in 3D reconstruction processes for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded due to sun reflection effects, which reduce the image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with a UAV, based on a photogrammetric strategy included in a flight planning and control software developed by the authors. In particular, two main consequences are derived from the approach developed: (i) different areas of the images can be excluded since they contain sun reflection problems; (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomical parameters computed (e.g., normalized difference vegetation index, NDVI) are improved since radiometrically defective pixels are not considered. Finally, an accuracy assessment was performed in order to analyse the error in the detection process, yielding errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that the precision in the detection of sun reflections can be guaranteed using this approach and current low-cost UAV technology. PMID:29036930
Hall, Graham P.; McDonald, Paul G.
2016-01-01
The use of unmanned aerial vehicles (UAVs) for ecological research has grown rapidly in recent years, but few studies have assessed the disturbance impacts of these tools on focal subjects, particularly when observing easily disturbed species such as waterfowl. In this study we assessed the level of disturbance that a range of UAV shapes and sizes had on free-living, non-breeding waterfowl surveyed at two sites in eastern Australia between March and May 2015, as well as the capability of airborne digital imaging systems to provide adequate resolution for unambiguous species identification of these taxa. We found little or no obvious disturbance effects on wild, mixed-species flocks of waterfowl when UAVs were flown at least 60 m above the water level (fixed-wing models) or 40 m above individuals (multirotor models). Disturbance, in the form of swimming away from the UAV through to leaving the water surface and flying away from the UAV, was visible at lower altitudes and when fixed-wing UAVs either approached subjects directly or rapidly changed altitude and/or direction near animals. Using tangential approach flight paths that did not cause disturbance, commercially available onboard optical equipment was able to capture images of sufficient quality to identify waterfowl and even much smaller taxa such as swallows. Our results show that with proper planning of take-off and landing sites, flight paths and careful UAV model selection, UAVs can provide an excellent tool for accurately surveying wild waterfowl populations and provide archival data with fewer logistical issues than traditional methods such as manned aerial surveys. PMID:27020132
Towards establishing compact imaging spectrometer standards
Slonecker, E. Terrence; Allen, David W.; Resmini, Ronald G.
2016-01-01
Remote sensing science is currently undergoing a tremendous expansion in the area of hyperspectral imaging (HSI) technology. Spurred largely by the explosive growth of Unmanned Aerial Vehicles (UAVs), sometimes called Unmanned Aircraft Systems (UAS), or drones, HSI capabilities that once required access to one of only a handful of very specialized and expensive sensor systems are now miniaturized and widely available commercially. Small compact imaging spectrometers (CIS) now on the market offer a number of hyperspectral imaging capabilities in terms of spectral range and sampling. The potential uses of HSI/CIS on UAVs/UASs seem limitless. However, the rapid expansion of unmanned aircraft and small hyperspectral sensor capabilities has created a number of questions related to technological, legal, and operational capabilities. Lightweight sensor systems suitable for UAV platforms are being advertised in the trade literature at an ever-expanding rate with no standardization of system performance specifications or terms of reference. To address this issue, both the U.S. Geological Survey and the National Institute of Standards and Technology are developing draft standards to meet these needs. This paper presents the outline of a combined USGS/NIST cooperative strategy to develop and test a characterization methodology to meet the needs of a new and expanding UAV/CIS/HSI user community.
Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso
2016-11-01
This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red-green-blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%-50% and 70%-40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaic. The results indicate that balancing all the proposed parameters is useful for optimizing mission planning and image processing, with altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE).
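The ratio defined above (orthomosaic RMSE over image GSD) can be sketched as follows; the pixel pitch and focal length are illustrative values, not the specifications of the sensor used in the study:

```python
def gsd_cm(agl_m, pixel_pitch_um=1.55, focal_mm=3.61):
    """Ground sample distance in cm for a nadir-pointing frame camera
    at a given altitude above ground level (AGL); sensor values assumed."""
    return agl_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3) * 100.0

def rmse_to_gsd_ratio(rmse_cm, agl_m, **camera):
    """Spatial-quality ratio: orthomosaic RMSE expressed in units of GSD."""
    return rmse_cm / gsd_cm(agl_m, **camera)

# e.g. a hypothetical 5.2 cm RMSE orthomosaic flown at 60 m AGL
print(round(gsd_cm(60), 2), round(rmse_to_gsd_ratio(5.2, 60), 2))
```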
An Integrative Object-Based Image Analysis Workflow for Uav Images
NASA Astrophysics Data System (ADS)
Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong
2016-06-01
In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained with an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
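The SLIC-then-BPT idea can be illustrated in miniature: start from a regular-grid over-segmentation (standing in for SLIC) and greedily merge the most similar adjacent regions (one step of BPT construction). This toy sketch uses mean intensity as the region model; the image is synthetic:

```python
import numpy as np

def grid_partition(img, block=2):
    """Initial over-segmentation: regular blocks standing in for SLIC."""
    h, w = img.shape
    labels = np.arange((h // block) * (w // block)).reshape(h // block, w // block)
    return np.kron(labels, np.ones((block, block), dtype=int))

def merge_once(img, labels):
    """One BPT-style step: merge the pair of 4-adjacent regions whose
    mean intensities are closest."""
    means = {l: img[labels == l].mean() for l in np.unique(labels)}
    pairs = set()
    a, b = labels[:, :-1].ravel(), labels[:, 1:].ravel()    # horizontal adjacency
    pairs |= {tuple(sorted(p)) for p in zip(a, b) if p[0] != p[1]}
    a, b = labels[:-1, :].ravel(), labels[1:, :].ravel()    # vertical adjacency
    pairs |= {tuple(sorted(p)) for p in zip(a, b) if p[0] != p[1]}
    l1, l2 = min(pairs, key=lambda p: abs(means[p[0]] - means[p[1]]))
    labels = labels.copy()
    labels[labels == l2] = l1
    return labels

img = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0, 1.0],
                [0.0, 0.1, 1.0, 1.0],
                [0.1, 0.0, 1.0, 1.0]])
labels = grid_partition(img)       # four 2x2 blocks
labels = merge_once(img, labels)   # merges the two identical right-hand blocks
print(len(np.unique(labels)))  # → 3
```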
NASA Astrophysics Data System (ADS)
Ding, J.; Wang, G.; Xiong, L.; Zhou, X.; England, E.
2017-12-01
Coastal regions are naturally vulnerable to impact from long-term coastal erosion and episodic coastal hazards caused by extreme weather events. Major geomorphic changes can occur within a few hours during storms. Prediction of storm impact, costal planning and resilience observation after natural events all require accurate and up-to-date topographic maps of coastal morphology. Thus, the ability to conduct rapid and high-resolution-high-accuracy topographic mapping is of critical importance for long-term coastal management and rapid response after natural hazard events. Terrestrial laser scanning (TLS) techniques have been frequently applied to beach and dune erosion studies and post hazard responses. However, TLS surveying is relatively slow and costly for rapid surveying. Furthermore, TLS surveying unavoidably retains gray areas that cannot be reached by laser pulses, particularly in wetland areas where lack of direct access in most cases. Aerial mapping using photogrammetry from images taken by unmanned aerial vehicles (UAV) has become a new technique for rapid topographic mapping. UAV photogrammetry mapping techniques provide the ability to map coastal features quickly, safely, inexpensively, on short notice and with minimal impact. The primary products from photogrammetry are point clouds similar to the LiDAR point clouds. However, a large number of ground control points (ground truth) are essential for obtaining high-accuracy UAV maps. The ground control points are often obtained by GPS survey simultaneously with the TLS survey in the field. The GPS survey could be a slow and arduous process in the field. This study aims to develop methods for acquiring a huge number of ground control points from TLS survey and validating point clouds obtained from photogrammetry with the TLS point clouds. A Rigel VZ-2000 TLS scanner was used for developing laser point clouds and a DJI Phantom 4 Pro UAV was used for acquiring images. 
The aerial images were processed with the photogrammetric mapping software Agisoft PhotoScan. A workflow for conducting rapid TLS and UAV surveys in the field and integrating the point clouds obtained from TLS and UAV surveying will be introduced. Keywords: UAV photogrammetry, ground control points, TLS, coastal morphology, topographic mapping
The Earth and Environmental Systems Podcast, and the Earth Explorations Video Series
NASA Astrophysics Data System (ADS)
Shorey, C. V.
2015-12-01
The Earth and Environmental Systems Podcast, a complete overview of the theoretical basics of Earth Science in 64 episodes, was completed in 2009, but has continued to serve the worldwide community as evidenced by listener feedback (e.g. "I am a 65 year old man. I have been retired for awhile and thought that retirement would be nothing more than waiting for the grave. However I want to thank you for your geo podcasts. They have given me a new lease on life and taught me a great deal." - FP, 2015). My current project is a video series on the practical basics of Earth Science titled "Earth Explorations". Each video is under 12 minutes long and tackles a major Earth Science concept. These videos go beyond a talking head, or even voice-over with static pictures or white-board graphics. Moving images are combined with animations created with Adobe After Effects, and aerial shots using a UAV. The dialog is scripted in a way to make it accessible at many levels, and the episodes as they currently stand have been used in K-12, and Freshman college levels with success. Though these videos are made to be used at this introductory level, they are also designed as remedial episodes for upper level classes, freeing up time given to review for new content. When completed, the series should contain close to 200 episodes, and this talk will cover the full range of resources I have produced, plan to produce, and how to access these resources. Both resources are available on iTunesU, and the videos are also available on YouTube.
NASA Astrophysics Data System (ADS)
Alexakis, Dimitrios; Seiradakis, Kostas; Tsanis, Ioannis
2016-04-01
This article presents a remote sensing approach for spatio-temporal monitoring of both soil erosion and roughness using an Unmanned Aerial Vehicle (UAV). Soil erosion by water is commonly known as one of the main causes of land degradation. Gully erosion causes considerable soil loss and soil degradation. Furthermore, quantification of soil roughness (irregularities of the soil surface due to soil texture) is important, as it affects surface storage and infiltration. Soil roughness is one of the soil characteristics most susceptible to variation in time and space, and depends on parameters such as cultivation practices and soil aggregation. A UAV equipped with a digital camera was employed to monitor soil in terms of erosion and roughness in two different study areas in Chania, Crete, Greece. The UAV followed predefined flight paths computed by the relevant flight planning software. The photogrammetric image processing enabled the development of sophisticated Digital Terrain Models (DTMs) and ortho-image mosaics with very high resolution at the sub-decimeter level. The DTMs were developed using photogrammetric processing of more than 500 images acquired with the UAV from different heights above ground level. As the geomorphic formations can be observed from above using UAVs, shadowing effects do not generally occur and the generated point clouds have very homogeneous and high point densities. The DTMs generated from the UAV were compared in terms of vertical absolute accuracy with a Global Navigation Satellite System (GNSS) survey. The developed data products were used for quantifying gully erosion and soil roughness in 3D as well as for the analysis of the surrounding areas. The significant elevation changes from multi-temporal UAV elevation data were used for diachronically estimating soil loss and sediment delivery without installing sediment traps.
Concerning roughness, statistical indicators of surface elevation point measurements were estimated, and various parameters such as the standard deviation of the DTM, the deviation of residuals and the standard deviation of prominence were calculated directly from the extracted DTM. Sophisticated statistical filters and elevation indices were developed to quantify both soil erosion and roughness. The applied methodology for monitoring both soil erosion and roughness provides an optimum way of reducing the existing gap between the field scale and the satellite scale. Keywords: UAV, soil, erosion, roughness, DTM
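The roughness indicators mentioned above can be illustrated with a minimal sketch. The grid values and the 4-neighbour "prominence" definition below are illustrative assumptions for a toy DTM, not the authors' exact formulas:

```python
import math

def roughness_stats(dtm):
    """Simple DTM-based roughness indicators: standard deviation of
    elevation over the grid, and standard deviation of 'prominence'
    (cell elevation minus the mean of its 4-neighbours)."""
    z = [v for row in dtm for v in row]
    mean = sum(z) / len(z)
    sd_elev = math.sqrt(sum((v - mean) ** 2 for v in z) / len(z))

    rows, cols = len(dtm), len(dtm[0])
    prom = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neigh = (dtm[i-1][j] + dtm[i+1][j] + dtm[i][j-1] + dtm[i][j+1]) / 4.0
            prom.append(dtm[i][j] - neigh)
    mp = sum(prom) / len(prom)
    sd_prom = math.sqrt(sum((p - mp) ** 2 for p in prom) / len(prom))
    return sd_elev, sd_prom

# Illustrative 4 x 4 elevation grid (metres).
dtm = [[0.10, 0.12, 0.09, 0.11],
       [0.11, 0.15, 0.10, 0.12],
       [0.08, 0.13, 0.11, 0.09],
       [0.10, 0.11, 0.12, 0.10]]
sd_elev, sd_prom = roughness_stats(dtm)
```

On a real sub-decimeter DTM these statistics would be computed per moving window rather than over the whole raster.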
Solid images generated from UAVs to analyze areas affected by rock falls
NASA Astrophysics Data System (ADS)
Giordan, Daniele; Manconi, Andrea; Allasia, Paolo; Baldo, Marco
2015-04-01
The study of areas affected by rock falls is usually based on the recognition of the principal joint families and the localization of potentially unstable sectors. This requires the acquisition of field data, although the areas are often barely accessible and field inspections can be very dangerous. For this reason, remote sensing systems can be considered a suitable alternative. Recently, Unmanned Aerial Vehicles (UAVs) have been proposed as platforms to acquire the necessary information. Indeed, mini UAVs (in particular in the multi-rotor configuration) provide the versatility to acquire, from different points of view, a large number of high-resolution optical images, which can be used to generate high-resolution digital models of the study area. Considering the recent development of powerful, user-friendly software and algorithms to process images acquired from UAVs, there is now a need to establish robust methodologies and best-practice guidelines for the correct use of 3D models generated in the context of rock fall scenarios. In this work, we show how multi-rotor UAVs can be used to survey areas affected by rock falls during real emergency contexts. We present two examples of application located in northwestern Italy: the San Germano rock fall (Piemonte region) and the Moneglia rock fall (Liguria region). We acquired data from both terrestrial LiDAR and UAV, in order to compare digital elevation models generated with different remote sensing approaches. We evaluated the volume of the rock falls, identified the potentially unstable areas, and recognized the main joint families. The use of UAVs for this purpose is not yet well developed, but this approach may be the best solution for the structural investigation of large rock walls. We propose a methodology that jointly considers the Structure from Motion (SfM) approach for the generation of 3D solid images, and a geotechnical analysis for the identification of joint families and potential failure planes.
Observing changes at Santiaguito Volcano, Guatemala with an Unmanned Aerial Vehicle (UAV)
NASA Astrophysics Data System (ADS)
De Angelis, S.; von Aulock, F.; Lavallée, Y.; Hornby, A. J.; Kennedy, B.; Lamb, O. D.; Kendrick, J. E.
2016-12-01
Santiaguito Volcano (Guatemala) is one of the most active volcanoes in Central America, having produced several ash-venting explosions per day for almost 100 years. Lahars, lava flows, and dome and flank collapses that produce major pyroclastic density currents also present a major hazard to nearby farms and communities. Optical observations of both the vent and the lava flow fronts can provide scientists and local monitoring staff with important information on the current state of volcanic activity and hazard. Due to the strong activity and difficult terrain, unmanned aerial vehicles (UAVs) can help provide valuable data on the activity of the volcano from a safe distance. We collected a series of images and video footage of the active vent of Caliente and the front of the active lava flow and its associated lahar channels, in May 2015 and in December 2015-January 2016. Images of the crater and the lava flows were used for the reconstruction of 3D terrain models using structure-from-motion. These models can be used to constrain topographical changes and the distribution of ballistics via point cloud comparisons. The preliminary aerial images and videos of the summit crater (during two separate ash venting episodes) and the lava flow fronts indicate the following differences in activity between the two field campaigns: - A recorded explosive event in December 2015 initiated at subparallel linear faults near the centre of the dome, with a later, separate, and more ash-laden burst occurring from an off-centre fracture. - A comparison of the point clouds before and after a degassing explosion shows minor subsidence of the dome surface and the formation of several small craters at the main venting locations. - The lava flow fronts did not advance more than a few meters between May and December 2015.
- Damming of river valleys by the lava flows has established new stream channels that have modified the established pathways of the recurring lahars, one of the major hazards of Santiaguito Volcano. The preliminary results of this study from two field trips to Santiaguito Volcano exemplify the breadth of applications of UAVs in volcano monitoring, and we urge funding agencies and legislative bodies to consider the value of these scientific instruments in future decisions and allocations of funding.
Performance Evaluation of 3D Modeling Software for UAV Photogrammetry
NASA Astrophysics Data System (ADS)
Yanagi, H.; Chikatsu, H.
2016-06-01
UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black boxes, and only a few studies have evaluated their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.
NASA Astrophysics Data System (ADS)
Li, Q. S.; Wong, F. K. K.; Fung, T.
2017-08-01
A lightweight unmanned aerial vehicle (UAV) loaded with novel sensors offers a low-cost and minimum-risk solution for data acquisition in complex environments. This study assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for the classification of 13 species in a wetland area of Hong Kong. Multiple feature reduction methods and different classifiers were compared. The best result was obtained when transformed components from minimum noise fraction (MNF) and the DSM were combined in a support vector machine (SVM) classifier. Wavelength regions at the chlorophyll-absorption green peak, red, red edge and oxygen absorption in the near infrared were identified for better species discrimination. In addition, the input of DSM data reduced overestimation of low plant species and misclassification due to the shadow effect and inter-species morphological variation. This study establishes a framework for quick survey and update of the wetland environment using a UAV system. The findings indicate that UAV-borne hyperspectral data and derived tree height information provide a solid foundation for further research such as biological invasion monitoring and bio-parameter modelling in wetlands.
Feasibility of Using Synthetic Aperture Radar to Aid UAV Navigation
Nitti, Davide O.; Bovenga, Fabio; Chiaradia, Maria T.; Greco, Mario; Pinelli, Gianpaolo
2015-01-01
This study explores the potential of Synthetic Aperture Radar (SAR) to aid Unmanned Aerial Vehicle (UAV) navigation when Inertial Navigation System (INS) measurements are not accurate enough to eliminate drifts from a planned trajectory. This problem can affect the medium-altitude long-endurance (MALE) UAV class, which permits heavy and wide payloads (as required by SAR) and flights of thousands of kilometres that accumulate large drifts. The basic idea is to infer the position and attitude of an aerial platform by inspecting both the amplitude and the phase of SAR images acquired onboard. For the amplitude-based approach, the system navigation corrections are obtained by matching the actual coordinates of ground landmarks with those automatically extracted from the SAR image. When the use of SAR amplitude is unfeasible, the phase content can be exploited through SAR interferometry by using a reference Digital Terrain Model (DTM). A feasibility analysis was carried out to derive system requirements by exploring both radiometric and geometric parameters of the acquisition setting. We showed that MALE UAVs, specific commercial navigation sensors and SAR systems, typical landmark position accuracies and classes, and available DTMs allow UAV coordinates to be estimated with errors bounded within ±12 m, thus making the proposed SAR-based backup system feasible. PMID:26225977
Brief communication: Landslide motion from cross correlation of UAV-derived morphological attributes
NASA Astrophysics Data System (ADS)
Peppa, Maria V.; Mills, Jon P.; Moore, Phil; Miller, Pauline E.; Chambers, Jonathan E.
2017-12-01
Unmanned aerial vehicles (UAVs) can provide observations of high spatio-temporal resolution to enable operational landslide monitoring. In this research, the construction of digital elevation models (DEMs) and orthomosaics from UAV imagery is achieved using structure-from-motion (SfM) photogrammetric procedures. The study examines the additional value that the morphological attribute of openness, amongst others, can provide to surface deformation analysis. Image cross-correlation functions and DEM subtraction techniques are applied to the SfM outputs. Through the proposed integrated analysis, the automated quantification of a landslide's motion over time is demonstrated, with implications for the wider interpretation of landslide kinematics via UAV surveys.
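The image cross-correlation step can be illustrated with a hedged 1-D sketch: a normalised cross-correlation of a template from one epoch against a profile from the next, whose best-matching offset approximates the displacement in pixels. The profile values below are invented for the example and the authors' method operates on 2-D rasters of morphological attributes:

```python
import math

def ncc(a, b):
    """Normalised cross-correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_shift(template, profile):
    """Slide a template (e.g. an openness profile from epoch 1) along a
    profile from epoch 2 and return the offset with the highest score."""
    best, score = 0, -2.0
    for s in range(len(profile) - len(template) + 1):
        c = ncc(template, profile[s:s + len(template)])
        if c > score:
            best, score = s, c
    return best, score

# A feature at position 0 in epoch 1 reappears at position 2 in epoch 2.
shift, score = best_shift([1.0, 3.0, 2.0], [0.0, 0.1, 1.0, 3.0, 2.0, 0.0])
```

A perfect match yields a score of 1.0; in practice sub-pixel displacement is estimated by interpolating around the correlation peak.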
The use of UAVs for monitoring land degradation
NASA Astrophysics Data System (ADS)
Themistocleous, Kyriacos
2017-10-01
Land degradation is one of the causes of desertification of drylands in the Mediterranean. UAVs can be used to monitor and document the various variables that cause desertification in drylands, including overgrazing, aridity, vegetation loss, etc. This paper examines the use of UAVs and accompanying sensors to monitor overgrazing, vegetation stress and aridity in the study area. UAV images can be used to generate digital elevation models (DEMs) to examine changes in microtopography, as well as orthophotos to detect changes in vegetation patterns. The combined data of the digital elevation models and the orthophotos can be used to identify the mechanisms of desertification in the study area.
Surveillance of ground vehicles for airport security
NASA Astrophysics Data System (ADS)
Blasch, Erik; Wang, Zhonghai; Shen, Dan; Ling, Haibin; Chen, Genshe
2014-06-01
Future surveillance systems will work in complex and cluttered environments, which require systems engineering solutions for applications such as airport ground surface management. In this paper, we highlight the use of an L1 video tracker for monitoring activities at an airport. We present methods of information fusion, entity detection, and activity analysis using airport videos for runway detection and airport terminal events. For coordinated airport security, automated ground surveillance enhances efficient and safe maneuvers for aircraft, unmanned air vehicles (UAVs) and unmanned ground vehicles (UGVs) operating within airport environments.
Development of UAV Photogrammetry Method by Using Small Number of Vertical Images
NASA Astrophysics Data System (ADS)
Kunii, Y.
2018-05-01
This new and efficient photogrammetric method for unmanned aerial vehicles (UAVs) requires only a few images taken in the vertical direction at different altitudes. The method includes an original relative orientation procedure which can be applied to images captured along the vertical direction. The final orientation determines the absolute orientation for every parameter and is used for calculating the 3D coordinates of every measurement point. The measurement accuracy was checked at the UAV test site of the Japan Society for Photogrammetry and Remote Sensing. Five vertical images were taken at 70 to 90 m altitude. The 3D coordinates of the measurement points were calculated. The plane and height accuracies were ±0.093 m and ±0.166 m, respectively. These values are of higher accuracy than the results of the traditional photogrammetric method. The proposed method can measure 3D positions efficiently and would be a useful tool for construction and disaster sites and for other field surveying purposes.
Reinforcement Learning with Autonomous Small Unmanned Aerial Vehicles in Cluttered Environments
NASA Technical Reports Server (NTRS)
Tran, Loc; Cross, Charles; Montague, Gilbert; Motter, Mark; Neilan, James; Qualls, Garry; Rothhaar, Paul; Trujillo, Anna; Allen, B. Danette
2015-01-01
We present ongoing work in the Autonomy Incubator at NASA Langley Research Center (LaRC) exploring the efficacy of a data set aggregation approach to reinforcement learning for small unmanned aerial vehicle (sUAV) flight in dense and cluttered environments with reactive obstacle avoidance. The goal is to learn an autonomous flight model using training experiences from a human piloting a sUAV around static obstacles. The training approach uses video data from a forward-facing camera that records the human pilot's flight. Various computer vision based features are extracted from the video relating to edge and gradient information. The recorded human-controlled inputs are used to train an autonomous control model that correlates the extracted feature vector to a yaw command. As part of the reinforcement learning approach, the autonomous control model is iteratively updated with feedback from a human agent who corrects undesired model output. This data-driven approach to autonomous obstacle avoidance is explored for simulated forest environments, furthering research on autonomous flight under the tree canopy. This enables flight in previously inaccessible environments of interest to NASA researchers in the Earth and atmospheric sciences.
NASA Astrophysics Data System (ADS)
Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun
2017-09-01
Haze removal is a nontrivial task for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance and large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with the aforementioned problems of medium-altitude UAVs. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model based on the aerosol distribution, which directly benefits image distortion correction. Second, considering the characteristics of long-distance imaging, we calculate the depth map, an essential clue for the model, on the basis of UAV metadata information. An accurate depth map reduces the color distortion compared with the depth of field obtained by other existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of fuzzy details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
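The classic single-scattering model that such dehazing methods build on can be sketched as follows. The uniform scattering coefficient and airlight value are illustrative assumptions; the paper's nonuniform model with metadata-derived depth is more elaborate:

```python
import math

def transmission(depth_m, beta=0.001):
    """Beer-Lambert transmission t = exp(-beta * d) for scene depth d in
    metres; beta is an assumed (spatially uniform) scattering coefficient."""
    return math.exp(-beta * depth_m)

def dehaze_pixel(I, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t) to
    recover scene radiance J; t is clamped to avoid amplifying noise."""
    t = max(t, t_min)
    return (I - A * (1.0 - t)) / t

# A hazy pixel synthesised with J = 0.5, A = 1.0, t = 0.8 ...
I = 0.5 * 0.8 + 1.0 * (1.0 - 0.8)
# ... is recovered exactly when the same t and A are used.
J = dehaze_pixel(I, A=1.0, t=0.8)
```

The practical difficulty, which the abstract's depth map addresses, is estimating t and A from a single image rather than knowing them in advance.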
Multi-UAV Routing for Area Coverage and Remote Sensing with Minimum Time
Avellar, Gustavo S. C.; Pereira, Guilherme A. S.; Pimenta, Luciano C. A.; Iscold, Paulo
2015-01-01
This paper presents a solution for the problem of minimum time coverage of ground areas using a group of unmanned air vehicles (UAVs) equipped with image sensors. The solution is divided into two parts: (i) the task modeling as a graph whose vertices are geographic coordinates determined in such a way that a single UAV would cover the area in minimum time; and (ii) the solution of a mixed integer linear programming problem, formulated according to the graph variables defined in the first part, to route the team of UAVs over the area. The main contribution of the proposed methodology, when compared with the traditional vehicle routing problem’s (VRP) solutions, is the fact that our method solves some practical problems only encountered during the execution of the task with actual UAVs. In this line, one of the main contributions of the paper is that the number of UAVs used to cover the area is automatically selected by solving the optimization problem. The number of UAVs is influenced by the vehicles’ maximum flight time and by the setup time, which is the time needed to prepare and launch a UAV. To illustrate the methodology, the paper presents experimental results obtained with two hand-launched, fixed-wing UAVs. PMID:26540055
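The trade-off that fixes the team size (each extra UAV shortens the flight share but adds setup time) can be sketched with a simple enumeration. The sequential-launch timing model and the numbers below are assumptions for illustration, not the paper's mixed integer linear programming formulation:

```python
def choose_team_size(single_uav_flight_time, setup_time, max_flight_time,
                     k_max=10):
    """Enumerate team sizes k and return (k, completion_time) minimising
    k * setup_time + single_uav_flight_time / k, subject to each UAV's
    share of the coverage path fitting within its endurance."""
    best = None
    for k in range(1, k_max + 1):
        share = single_uav_flight_time / k
        if share > max_flight_time:
            continue  # one UAV's share would exceed its endurance
        total = k * setup_time + share
        if best is None or total < best[1]:
            best = (k, total)
    return best

# A 60 min single-UAV coverage path, 5 min setup per UAV, 25 min endurance.
team = choose_team_size(60.0, 5.0, 25.0)
```

Here one or two UAVs are infeasible (their share exceeds the endurance), and three UAVs give the shortest completion time.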
Portable Imagery Quality Assessment Test Field for UAV Sensors
NASA Astrophysics Data System (ADS)
Dąbrowski, R.; Jenerowicz, A.
2015-08-01
Nowadays the imagery data acquired from UAV sensors are a main source of data used in various remote sensing applications, photogrammetry projects and imagery intelligence (IMINT), as well as in other tasks such as decision support. Therefore quality assessment of such imagery is an important task. The research team from the Military University of Technology, Faculty of Civil Engineering and Geodesy, Geodesy Institute, Department of Remote Sensing and Photogrammetry has designed and prepared a special test field, the Portable Imagery Quality Assessment Test Field (PIQuAT), that provides quality assessment in field conditions of images obtained with sensors mounted on UAVs. The PIQuAT consists of 6 individual segments which, when combined, allow the radiometric, spectral and spatial resolution of images acquired from UAVs to be determined. All segments of the PIQuAT can be used together in various configurations or independently. All elements of the Portable Imagery Quality Assessment Test Field were tested in laboratory conditions in terms of their radiometry and spectral reflectance characteristics.
NASA Astrophysics Data System (ADS)
Zhou, Hao; Hirose, Mitsuhito; Greenwood, William; Xiao, Yong; Lynch, Jerome; Zekkos, Dimitrios; Kamat, Vineet
2016-04-01
Unmanned aerial vehicles (UAVs) can serve as a powerful mobile sensing platform for assessing the health of civil infrastructure systems. To date, the majority of their uses have been dedicated to vision- and laser-based spatial imaging using on-board cameras and LiDAR units, respectively. Comparatively less work has focused on the integration of other sensing modalities relevant to structural monitoring applications. The overarching goal of this study is to explore the ability of UAVs to deploy a network of wireless sensors on structures for controlled vibration testing. The study develops a UAV platform with an integrated robotic gripper that can be used to install wireless sensors on structures, drop a heavy weight for the introduction of impact loads, and uninstall wireless sensors for reinstallation elsewhere. A pose estimation algorithm is embedded in the UAV to estimate its location during sensor placement and impact load introduction. The Martlet wireless sensor network architecture is integrated with the UAV to provide it a mobile sensing capability. The UAV is programmed to command field-deployed Martlets, aggregate and temporarily store data from the wireless sensor network, and communicate data to a fixed base station on site. This study demonstrates the integrated UAV system in the lab using a simply supported beam with Martlet wireless sensors placed by the UAV and impact load testing performed. The study verifies the feasibility of the integrated UAV-wireless monitoring system architecture, with accurate modal characteristics of the beam estimated by modal analysis.
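The modal-characteristics step can be illustrated with a minimal sketch: picking the dominant frequency of a free-vibration record from a DFT magnitude peak. The sampling rate and synthetic signal are invented for the example; the study's actual modal analysis of Martlet data is more involved:

```python
import cmath, math

def dominant_frequency(samples, fs):
    """Pick the dominant frequency (Hz) of a vibration record from the
    magnitude peak of a direct DFT (O(n^2), fine for short records)."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        X = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(X) > best_mag:
            best_k, best_mag = k, abs(X)
    return best_k * fs / n

# Synthetic free-vibration record: a 5 Hz mode sampled at 100 Hz for 2 s.
fs = 100.0
sig = [math.sin(2 * math.pi * 5.0 * t / fs) for t in range(200)]
f = dominant_frequency(sig, fs)
```

Real impact-test records contain several modes plus damping and noise, so peaks are picked from a windowed FFT of the measured acceleration instead.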
Towards a More Efficient Detection of Earthquake-Induced Façade Damages Using Oblique UAV Imagery
NASA Astrophysics Data System (ADS)
Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.
2017-08-01
Urban search and rescue (USaR) teams require a fast and thorough building damage assessment to focus their rescue efforts accordingly. Unmanned aerial vehicles (UAVs) are able to capture relevant data in a short time frame and survey otherwise inaccessible areas after a disaster, and have thus been identified as useful when coupled with RGB cameras for façade damage detection. Existing literature focuses on the extraction of 3D and/or image features as cues for damage. However, little attention has been given to the efficiency of the proposed methods, which hinders their use in an urban search and rescue context. The framework proposed in this paper aims at more efficient façade damage detection using UAV multi-view imagery. This was achieved by directing all damage classification computations only to the image regions containing the façades, hence discarding the irrelevant areas of the acquired images and consequently reducing the time needed for the task. To accomplish this, a three-step approach is proposed: i) building extraction from the sparse point cloud computed from the nadir images collected in an initial flight; ii) use of the latter as a proxy for façade location in the oblique images captured in subsequent flights; and iii) selection of the façade image regions to be fed to a damage classification routine. The results show that the proposed framework successfully reduces the extracted façade image regions to be assessed for damage six-fold, hence increasing the efficiency of subsequent damage detection routines. The framework was tested on a set of UAV multi-view images over a neighborhood of the city of L'Aquila, Italy, affected in 2009 by an earthquake.
NASA Astrophysics Data System (ADS)
Lendzioch, Theodora; Langhammer, Jakub; Jenicek, Michal
2017-04-01
A rapid and robust approach using Unmanned Aerial Vehicle (UAV) digital photogrammetry was applied to evaluate snow accumulation over different small localities (e.g. disturbed forest and open area) and for indirect field measurements of the Leaf Area Index (LAI) of coniferous forest within the Šumava National Park, Czech Republic. The approach was used to reveal impacts related to changes in forest and snowpack and to determine the winter effective LAI for monitoring the impact of forest canopy metrics on snow accumulation. Snow depth and volumetric changes of snow depth over the selected study areas were estimated at high spatial resolution (1 cm) by subtracting a snow-free digital elevation model (DEM) from a snow-covered DEM. Downward-looking UAV images, upward-looking digital hemispherical photography (DHP) and additional widely used LAI-2200 plant canopy analyser measurements were all applied to determine the winter LAI, which controls interception and transmitted radiation. For the downward-looking UAV images, the snow background was used instead of the sky fraction. The reliability of UAV-based LAI retrieval was tested against an independent data set taken during the snow cover mapping campaigns. The results showed the potential of digital photogrammetry for snow depth mapping and LAI determination by UAV techniques. The average difference between ground-based and UAV-based measurements of snow depth was 7.1 cm, with higher values obtained by UAV. The SD of 22 cm for the open area seems competitive with the typical precision of point measurements. In contrast, the average difference in the disturbed forest area was 25 cm, with lower values obtained by UAV, and an SD of 36 cm, which is in agreement with other studies. The UAV-based LAI measurements yielded the lowest effective LAI values and the LAI-2200 plant canopy analyser the highest.
The biggest bias in effective LAI was observed between the LAI-2200 and the UAV-based analyses. Since the LAI parameter is important for snowpack modelling, this method shows potential for simplifying LAI retrieval and mapping of snow dynamics while reducing running costs and time.
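The DEM-differencing step for snow depth can be sketched in a few lines. The elevation grids and cell size below are invented for illustration; real DEMs are large rasters that must be co-registered before subtraction:

```python
def snow_depth_stats(dem_snow, dem_bare, cell_size):
    """Per-cell snow depth as snow-covered minus snow-free elevation,
    with mean depth and total snow volume (depth sum * cell area)."""
    depths = [[s - b for s, b in zip(row_s, row_b)]
              for row_s, row_b in zip(dem_snow, dem_bare)]
    flat = [d for row in depths for d in row]
    mean_depth = sum(flat) / len(flat)
    volume = sum(flat) * cell_size ** 2
    return depths, mean_depth, volume

# Illustrative 2 x 2 grids (metres) with a 1 m cell size.
dem_bare = [[1.0, 1.0], [1.0, 1.0]]
dem_snow = [[1.2, 1.3], [1.1, 1.4]]
depths, mean_depth, volume = snow_depth_stats(dem_snow, dem_bare, cell_size=1.0)
```

Volumetric change between survey dates follows the same pattern with two snow-covered DEMs instead of a snow-free baseline.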
Cloud-Assisted UAV Data Collection for Multiple Emerging Events in Distributed WSNs.
Cao, Huiru; Liu, Yongxin; Yue, Xuejun; Zhu, Wenjian
2017-08-07
In recent years, UAVs (Unmanned Aerial Vehicles) have been widely applied for data collection and image capture. Specifically, UAVs have been integrated with wireless sensor networks (WSNs) to create data collection platforms with high flexibility. However, most studies in this domain focus on system architecture and UAV flight trajectory planning, while event-related factors and other important issues are neglected. To address these challenges, we propose a cloud-assisted data gathering strategy for UAV-based WSNs in the light of emerging events. We also provide a cloud-assisted approach for deriving a UAV's optimal flying and data acquisition sequence for a WSN cluster. We validate our approach through simulations and experiments. Our methodology outperforms conventional approaches in terms of flying time, energy consumption, and integrity of data acquisition. We also conducted a real-world experiment using a UAV to collect data wirelessly from multiple clusters of sensor nodes deployed on a farm for monitoring an emerging event. Compared with the traditional method, the proposed approach requires less than half the flying time and achieves almost perfect data integrity.
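The abstract does not specify the cloud-side optimiser, so as an illustrative baseline only, a flying sequence over cluster heads can be sketched with a greedy nearest-neighbour ordering; the coordinates below are invented:

```python
import math

def greedy_visit_order(start, cluster_heads):
    """Nearest-neighbour ordering of WSN cluster-head positions: a simple
    stand-in for a UAV flying/data-acquisition sequence."""
    order, pos = [], start
    remaining = list(range(len(cluster_heads)))
    while remaining:
        i = min(remaining, key=lambda j: math.dist(pos, cluster_heads[j]))
        order.append(i)
        pos = cluster_heads[i]
        remaining.remove(i)
    return order

# Illustrative cluster-head coordinates (metres) relative to the launch point.
order = greedy_visit_order((0.0, 0.0), [(5.0, 0.0), (1.0, 0.0), (2.0, 0.0)])
```

A production planner would also weigh event urgency, radio contact time per cluster and battery margins, which is where the cloud-assisted optimisation earns its keep.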
Low-cost, quantitative assessment of highway bridges through the use of unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Ellenberg, Andrew; Kontsos, Antonios; Moon, Franklin; Bartoli, Ivan
2016-04-01
Many envision that in the near future the application of Unmanned Aerial Vehicles (UAVs) will impact the civil engineering industry. Use of UAVs is currently experiencing tremendous growth, primarily in military and homeland security applications. It is only a matter of time until UAVs will be widely accepted as platforms for implementing monitoring/surveillance and inspection in other fields. Most UAVs already have payloads as well as hardware/software capabilities to incorporate a number of non-contact remote sensors, such as high resolution cameras, multi-spectral imaging systems, and laser ranging systems (LIDARs). Of critical importance to realizing the potential of UAVs within the infrastructure realm is to establish how (and the extent to which) such information may be used to inform preservation and renewal decisions. Achieving this will depend both on our ability to quantify information from images (through, for example, optical metrology techniques) and to fuse data from the array of non-contact sensing systems. Through a series of applications to both laboratory-scale and field implementations on operating infrastructure, this paper will present and evaluate (through comparison with conventional approaches) various image processing and data fusion strategies tailored specifically for the assessment of highway bridges. Example scenarios that guided this study include the assessment of delaminations within reinforced concrete bridge decks, the quantification of the deterioration of steel coatings, assessment of the functionality of movement mechanisms, and the estimation of live load responses (inclusive of both strain and displacement).
The future of structural fieldwork - UAV assisted aerial photogrammetry
NASA Astrophysics Data System (ADS)
Vollgger, Stefan; Cruden, Alexander
2015-04-01
Unmanned aerial vehicles (UAVs), commonly referred to as drones, are opening new and low-cost possibilities to acquire high-resolution aerial images and digital surface models (DSM) for applications in structural geology. UAVs can be programmed to fly autonomously along a user-defined grid to systematically capture high-resolution photographs, even in difficult-to-access areas. The photographs are subsequently processed using software that employs SIFT (scale invariant feature transform) and SFM (structure from motion) algorithms. These photogrammetric routines allow the extraction of spatial information (3D point clouds, digital elevation models, 3D meshes, orthophotos) from 2D images. Depending on flight altitude and camera setup, sub-centimeter spatial resolutions can be achieved. By "digitally mapping" georeferenced 3D models and images, orientation data can be extracted directly and used to analyse the structural framework of the mapped object or area. We present UAV-assisted aerial mapping results from a coastal platform near Cape Liptrap (Victoria, Australia), where deformed metasediments of the Palaeozoic Lachlan Fold Belt are exposed. We also show how orientation and spatial information on brittle and ductile structures extracted from the photogrammetric model can be linked to the progressive development of folds and faults in the region. Even though there are both technical and legislative limitations, which might prohibit the use of UAVs without prior commercial licensing and training, the benefits that arise from the resulting high-resolution, photorealistic models can substantially contribute to the collection of new data and insights for applications in structural geology.
NASA Astrophysics Data System (ADS)
Steadman, Bob; Finklea, John; Kershaw, James; Loughman, Cathy; Shaffner, Patti; Frost, Dean; Deller, Sean
2014-06-01
Textron's Advanced MicroObserver(R) is a next-generation remote unattended ground sensor system (UGS) for border security, infrastructure protection, and small combat unit security. The original MicroObserver(R) is a sophisticated seismic sensor system with multi-node fusion that supports target tracking, and it has been deployed in combat theaters. The system's seismic sensor nodes are uniquely able to be completely buried (including antennas) for optimal covertness. The advanced version adds a wireless day/night Electro-Optic Infrared (EOIR) system, cued by seismic tracking, with sophisticated target discrimination and automatic frame capture features. Also new is a field-deployable Gateway configurable with a variety of radio systems and flexible networking, an important upgrade that enabled the research described herein. BattleHawk(TM) is a small tube-launched Unmanned Air Vehicle (UAV) with a warhead. Using transmitted video from its EOIR subsystem, an operator can search for and acquire a target day or night, select a target for attack, and execute a terminal dive to destroy the target. It is designed as a lightweight squad-level asset carried by an individual infantryman. Although BattleHawk has the best loiter time in its class, this time is still relatively short compared to large UAVs, and in its munition configuration BattleHawk is a one-shot asset. Therefore Textron Defense Systems conducted internally funded research to determine whether there is military utility in having the highly persistent MicroObserver(R) system cue BattleHawk's launch and vector it to beyond-visual-range targets for engagement. This paper describes that research, the system configuration implemented, and the results of field testing performed on a government range early in 2013. In the integrated system that was implemented, MicroObserver(R) seismic detections activated the system's camera, which then automatically captured images of the target.
The geo-referenced and time-tagged MicroObserver(R) target reports and images were then automatically forwarded to the BattleHawk Android-based controller. This allowed the operator to see the intruder (classified and geo-located) on the map-based display, assess the intruder as likely hostile (via the image), and launch BattleHawk with the pre-loaded target coordinates. The operator was thus able to quickly acquire the intended target (without a search) and initiate target engagement immediately. System latencies were a major concern encountered during the research.
USDA-ARS's Scientific Manuscript database
Advances in technologies associated with unmanned aerial vehicles (UAVs) has allowed for researchers, farmers and agribusinesses to incorporate UAVs coupled with various imaging systems into data collection activities and aid expert systems for making decisions. Multispectral imageries allow for a q...
NASA Astrophysics Data System (ADS)
Chenari, A.; Erfanifard, Y.; Dehghani, M.; Pourghasemi, H. R.
2017-09-01
Remotely sensed datasets offer a reliable means to precisely estimate biophysical characteristics of individual species sparsely distributed in open woodlands. Moreover, object-oriented classification has exhibited significant advantages over other classification methods for the delineation of tree crowns and recognition of species in various types of ecosystems. However, it is still unclear whether this widely used classification method retains its advantages when applied to unmanned aerial vehicle (UAV) digital images for mapping vegetation cover at the single-tree level. In this study, UAV orthoimagery was classified using an object-oriented classification method to map part of a wild pistachio nature reserve in the Zagros open woodlands, Fars Province, Iran. This research focused on recognizing the two main species of the study area (i.e., wild pistachio and wild almond) and estimating their mean crown area. The orthoimage of the study area consisted of 1,076 images with a spatial resolution of 3.47 cm, georeferenced using 12 ground control points (RMSE = 8 cm) gathered by the real-time kinematic (RTK) method. The results showed that the UAV orthoimagery classified by the object-oriented method efficiently estimated the mean crown area of wild pistachios (52.09±24.67 m²) and wild almonds (3.97±1.69 m²), with no significant difference from the observed values (α=0.05). In addition, the results showed that wild pistachios (accuracy of 0.90 and precision of 0.92) and wild almonds (accuracy of 0.90 and precision of 0.89) were well recognized by image segmentation. In general, we conclude that UAV orthoimagery can efficiently produce precise biophysical data on vegetation stands at the single-tree level, and is therefore suitable for the assessment and monitoring of open woodlands.
NASA Astrophysics Data System (ADS)
Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.
2014-08-01
Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of 3D geometric, radiometric and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for a multiangular measurement system designed for measuring object directional reflectance characteristics in a wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSM). These data are needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software; (2) optionally, calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software); and (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software, which is based on a semi-global matching algorithm and is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures such as forests.
Applications of UAVs in row-crop agriculture: advantages and limitations
NASA Astrophysics Data System (ADS)
Basso, B.; Putnam, G.; Price, R.; Zhang, J.
2016-12-01
The application of Unmanned Aerial Vehicles (UAVs) to monitor agricultural fields has increased over the last few years due to advances in the technology, sensors and post-processing software for image analysis, along with more favorable regulations that allow UAVs to be flown for commercial use. UAVs have several capabilities depending on the type of sensors mounted onboard. The most widely used application remains crop scouting to identify areas within fields where crops underperform for various reasons (nutritional status, water stress, presence of weeds, poor stands, etc.). In this talk, we present preliminary results of UAV-based field research to better understand the spatial and temporal variability of crop yield. The advantage of UAVs in providing timely information is critical, but adaptive management requires a systems approach to account for the interactions occurring between genetics, environment and management.
NASA Astrophysics Data System (ADS)
Ma, Yi; Zhang, Jie; Zhang, Jingyu
2016-01-01
The coastal wetland, a transitional zone between terrestrial and marine ecosystems, is of great value for ecosystem services. Over the last three decades, the area of coastal wetland in China has been decreasing and its ecological function has gradually degraded under rapid economic development, which in turn restricts the sustainable development of economy and society in China's coastal areas. Monitoring coastal wetlands to determine their distribution and dynamic change is therefore a major national demand. The UAV (unmanned aerial vehicle) is a new remote sensing platform. Compared with traditional satellite and manned aerial remote sensing, it offers flexible deployment, freedom from cloud cover, strong initiative, and low cost. Image-spectrum merging is one characteristic of hyperspectral remote sensing: at the time of imaging, the spectral curve of each pixel is obtained, which makes it suitable for quantitative remote sensing, fine classification, and target detection. Addressing this frontier of remote sensing monitoring technology and the demand for coastal wetland monitoring, this paper uses a UAV carrying a new hyperspectral imaging sensor and analyses the key technologies for UAV-based monitoring of coastal wetlands, on the basis of the current situation at home and abroad and an analysis of development trends.
Given the characteristics of UAV airborne hyperspectral data ("three highs and one many"), the key research directions proposed are as follows: 1) atmospheric correction of UAV hyperspectral imagery of coastal wetlands under complex and variable underlying-surface and geometry conditions; 2) the optimal observation scale of the UAV platform and methods of scale transformation when monitoring coastal wetland features; 3) high-precision classification and detection of typical features from multi-scale hyperspectral image time series. The results of this research will help move beyond the traditional concept of monitoring coastal wetlands with satellites and manned aircraft, lead this monitoring technology forward, and put forward a new technical proposal for determining the distribution and change trends of coastal wetlands and for their protection and management.
Vehicle tracking in wide area motion imagery from an airborne platform
NASA Astrophysics Data System (ADS)
van Eekeren, Adam W. M.; van Huis, Jasper R.; Eendebak, Pieter T.; Baan, Jan
2015-10-01
Airborne platforms, such as UAVs, with Wide Area Motion Imagery (WAMI) sensors can cover multiple square kilometers and produce large amounts of video data. Analyzing all of these data for information needs becomes increasingly labor-intensive for an image analyst. Furthermore, the capacity of the datalink in operational areas may be inadequate to transfer all data to the ground station. Automatic detection and tracking of people and vehicles makes it possible to send only the most relevant footage to the ground station and assists image analysts in effective data searches. In this paper, we propose a method for detecting and tracking vehicles in high-resolution WAMI images from a moving airborne platform. For vehicle detection we use a cascaded set of classifiers, trained with the Adaboost algorithm on Haar features. This detector works on individual images and therefore does not depend on image motion stabilization. For vehicle tracking we use a local template matching algorithm. This approach has two advantages. First, it does not depend on image motion stabilization and it counters the inaccuracy of the GPS data embedded in the video data. Second, it can find matches when the vehicle detector misses a detection. This results in long tracks even when the imagery has a low frame rate. In order to minimize false detections, we also integrate height information from a 3D reconstruction that is created from the same images. By using the locations of buildings and roads, we are able to filter out false detections and increase the performance of the tracker. In this paper we show that the vehicle tracks can also be used to detect more complex events, such as traffic jams and fast-moving vehicles. This enables the image analyst to perform a faster and more effective search of the data.
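The local template matching used for tracking can be illustrated with a toy sum-of-squared-differences (SSD) search over a small neighbourhood of the last known position. This is an illustrative stand-in, not the authors' implementation, and the frame and template values below are invented:

```python
def match_template(image, template, center, radius):
    """Local template matching by sum of squared differences (SSD).

    Searches placements whose top-left corner lies within `radius` of
    `center` in `image` (a list of rows of grey values) and returns the
    top-left corner of the best-matching placement of `template`.
    """
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), None
    cy, cx = center
    for y in range(max(0, cy - radius), min(len(image) - th, cy + radius) + 1):
        for x in range(max(0, cx - radius), min(len(image[0]) - tw, cx + radius) + 1):
            ssd = sum((image[y + i][x + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

# Invented 4x5 frame with a bright 2x2 "vehicle" at row 1, column 1
frame = [[0, 0, 0, 0, 0],
         [0, 9, 8, 0, 0],
         [0, 7, 9, 0, 0],
         [0, 0, 0, 0, 0]]
car = [[9, 8], [7, 9]]
print(match_template(frame, car, center=(0, 0), radius=2))  # (1, 1)
```

A production tracker would use a normalized correlation measure (robust to brightness changes) on real image patches, e.g. via an image processing library, but the search structure is the same.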
Landslide Mapping Using Imagery Acquired by a Fixed-Wing UAV
NASA Astrophysics Data System (ADS)
Rau, J. Y.; Jhan, J. P.; Lo, C. F.; Lin, Y. S.
2011-09-01
In Taiwan, the average annual rainfall is about 2,500 mm, about three times the world average. Hill slopes, which are mostly in meta-stable condition due to fragmented surface materials, can easily be disturbed by heavy typhoon rainfall and/or earthquakes, resulting in landslides and debris flows. An efficient data acquisition and disaster surveying method is therefore critical for decision making. Compared with satellites and airplanes, the unmanned aerial vehicle (UAV) is a portable and dynamic platform for data acquisition, particularly when only a small target area is involved. In this study, a fixed-wing UAV equipped with a consumer-grade digital camera (Canon EOS 450D), a flight control computer, a Garmin GPS receiver and an attitude heading reference system (AHRS) is used. The adopted UAV has a flight endurance of about two hours, a flight control range of 20 km and a payload of 3 kg, which is suitable for medium-scale mapping and surveying missions. In the paper, a test area 21.3 km² in size containing hundreds of landslides induced by Typhoon Morakot is used for landslide mapping. The flight height is around 1,400 meters and the ground sampling distance of the acquired imagery is about 17 cm. Aerial triangulation, ortho-image generation and mosaicking are first applied to the acquired images. An automatic landslide detection algorithm is proposed based on the object-based image analysis (OBIA) technique, using the color ortho-image and a digital elevation model (DEM). The ortho-images before and after the typhoon are utilized to identify new landslide regions. Experimental results show that the developed algorithm can achieve a producer's accuracy of up to 91%, a user's accuracy of 84%, and a Kappa index of 0.87. This demonstrates the feasibility of the landslide detection algorithm and the applicability of a fixed-wing UAV for landslide mapping.
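The Kappa index reported above is Cohen's kappa computed from the classification confusion matrix; a minimal sketch (the 2×2 counts below are hypothetical, not the paper's):

```python
def kappa(confusion):
    """Cohen's kappa from a square confusion matrix.

    Rows are reference classes, columns are classified classes. Kappa
    compares observed agreement with the agreement expected by chance.
    """
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical landslide / non-landslide counts (reference x classified)
cm = [[91, 9],
      [16, 84]]
print(round(kappa(cm), 2))  # 0.75
```

Producer's and user's accuracy come from the same matrix: the diagonal divided by the row sum and by the column sum, respectively.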
Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations
Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.
2017-01-01
Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.
NASA Astrophysics Data System (ADS)
Merlaud, Alexis; Tack, Frederik; Constantin, Daniel; Fayt, Caroline; Maes, Jeroen; Mingireanu, Florin; Mocanu, Ionut; Georgescu, Lucian; Van Roozendael, Michel
2015-04-01
The Small Whiskbroom Imager for atmospheric compositioN monitorinG (SWING) is an instrument dedicated to atmospheric trace gas retrieval from an Unmanned Aerial Vehicle (UAV). The payload is based on a compact visible spectrometer and a scanning mirror to collect scattered sunlight. Its weight, size, and power consumption are respectively 920 g, 27×12×12 cm³, and 6 W. The custom-built 2.5 m flying-wing UAV is electrically powered, has a typical airspeed of 100 km/h, and can operate at a maximum altitude of 3 km. Both the payload and the UAV were developed in the framework of a collaboration between the Belgian Institute for Space Aeronomy (BIRA-IASB) and the Dunarea de Jos University of Galati, Romania. We present here SWING-UAV test flights dedicated to NO2 measurements and performed in Romania on 10 and 11 September 2014, during the Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT) campaign. The UAV performed five flights in the vicinity of the large thermal power station of Turceni (44.67° N, 23.4° E). The UAV was operated in visual range during the campaign, up to 900 m AGL, downwind of the plant and crossing its exhaust plume. The spectra recorded in flight are analyzed with the Differential Optical Absorption Spectroscopy (DOAS) method. The retrieved NO2 Differential Slant Column Densities (DSCDs) are up to 1.5×10^17 molec/cm² and reveal the horizontal gradients around the plant. The DSCDs are converted to vertical columns and compared with coincident car-based DOAS measurements. We also present the near-future perspective of the SWING-UAV observation system, which includes flights in 2015 above the Black Sea to quantify ship emissions, the addition of SO2 as a target species, and autopilot flights at higher altitudes to cover a typical satellite pixel extent (10×10 km²).
Optical and acoustical UAV detection
NASA Astrophysics Data System (ADS)
Christnacher, Frank; Hengy, Sébastien; Laurenzis, Martin; Matwyschuk, Alexis; Naz, Pierre; Schertzer, Stéphane; Schmitt, Gwenael
2016-10-01
Recent world events have highlighted that the proliferation of UAVs is bringing with it a new and rapidly increasing threat for national defense and security agencies. Whilst many of the reported UAV incidents seem to indicate that there was no terrorist intent behind them, it is not unreasonable to assume that it may not be long before UAV platforms are regularly employed by terrorists or other criminal organizations. The flight characteristics of many of these mini- and micro-platforms present challenges for current systems, which have been optimized over time to defend against traditional air-breathing airborne platforms. Numerous programs to identify cost-effective measures for detection, classification, tracking and neutralization have begun in recent years. In this paper, ISL shows how the performance of a UAV detection and tracking concept based on acousto-optical technology can be substantially increased through active imaging.
Gimbal Influence on the Stability of Exterior Orientation Parameters of UAV Acquired Images.
Gašparović, Mateo; Jurjević, Luka
2017-02-18
In this paper, results from the analysis of the gimbal impact on the determination of the camera exterior orientation parameters of an Unmanned Aerial Vehicle (UAV) are presented and interpreted. Additionally, a new approach and methodology for testing the influence of gimbals on the exterior orientation parameters of UAV-acquired images is presented. The main motive of this study is to examine the possibility of obtaining better geometry and favorable spatial bundles of rays of images in UAV photogrammetric surveying. The subject is a 3-axis brushless gimbal based on a Storm32 controller board. Only two gimbal axes are taken into consideration: the roll and pitch axes. Testing was done in flight simulation, and in indoor and outdoor flight modes, to analyze the Inertial Measurement Unit (IMU) and photogrammetric data. Within these tests, the change of the exterior orientation parameters without the use of a gimbal is determined, as well as the potential accuracy of stabilization with the use of a gimbal. The results show that using a gimbal has considerable potential: significantly smaller discrepancies between data, up to four times smaller than in the other test modes, are observed when a gimbal is used in flight simulation mode. This test establishes the potential accuracy of a low-budget gimbal for application in real conditions.
NASA Astrophysics Data System (ADS)
Trizzino, Rosamaria; Caprioli, Mauro; Mazzone, Francesco; Scarano, Mario
2017-04-01
Unmanned Aerial Vehicle (UAV) systems are increasingly seen as an attractive alternative or supplement to aerial and terrestrial photogrammetry due to their low cost, flexibility, availability and readiness for duty. In addition, UAVs can be operated in hazardous or temporarily inaccessible locations. The combination of photogrammetric aerial and terrestrial recording methods using a mini UAV (also known as a "drone") opens a broad range of applications, such as surveillance and monitoring of the environment and infrastructural assets. In particular, these methods and techniques are of paramount interest for the documentation of cultural heritage sites and areas of natural importance facing threats from natural deterioration and hazards. In order to verify the reliability of these technologies, a UAV survey and a LIDAR survey were carried out along about 1 km of coast in the Salento peninsula, near the towns of San Foca, Torre dell'Orso and Sant'Andrea (Lecce, Southern Italy). This area is affected by serious environmental hazards due to the presence of dangerous rocky cliffs known as "falesie". The UAV platform was equipped with a photogrammetric measurement system that allowed us to obtain a mobile mapping of the fractured fronts of the dangerous rocky cliffs. The UAV image data were processed using dedicated software (Agisoft PhotoScan). The point clouds obtained from both the UAV and LIDAR surveys were processed using CloudCompare software, with the aim of testing the UAV results against the LIDAR ones. The analyses were done using the C2C algorithm, which provides good results in terms of Euclidean distances and highlights differences between the 3D models obtained from the two survey techniques. The total error obtained was of centimeter order, which is a very satisfactory result. In the second study area, the opportunities for obtaining more detailed documentation of cultural goods through UAV survey were investigated.
The study area is an ancient seventeenth-century Aragonese watchtower, located near the Abbey of San Vito in the countryside of Polignano a Mare (in the province of Bari, Southern Italy). The survey was carried out with a hexacopter equipped with a Canon EOS 550D. The image processing was carried out with photogrammetric and Structure from Motion software (Agisoft PhotoScan) and, as a result, a cloud of 524,607 points with a 0.010096 m/pix resolution was obtained from 330 nadiral and inclined images. In order to verify the suitability of this technique, we also carried out a terrestrial photogrammetric survey using three different photographic media: a reflex camera with integrated GPS, a compact digital camera and a smartphone camera. Three image data sets were obtained and then compared. In conclusion, the RPAS photogrammetric survey allowed us to highlight some peculiar features of the tower, such as the presence of a trapdoor and of a chimney at roof level, not detectable with a terrestrial survey, that could provide essential elements for restoration works aimed at the recovery of this cultural heritage.
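The C2C comparison used above computes, for each point of the compared cloud, the Euclidean distance to its nearest neighbour in the reference cloud. A brute-force sketch with hypothetical coordinates (real point clouds would use an octree or k-d tree for the neighbour search):

```python
import math

def c2c_distances(compared, reference):
    """Cloud-to-cloud (C2C) comparison: for each point of the compared
    cloud, the Euclidean distance to its nearest neighbour in the
    reference cloud. Brute force, O(len(compared) * len(reference))."""
    return [min(math.dist(p, q) for q in reference) for p in compared]

# Hypothetical UAV-derived points checked against LIDAR points (metres)
uav = [(0.0, 0.0, 0.02), (1.0, 0.0, 0.05)]
lidar = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(c2c_distances(uav, lidar))
```

Summary statistics of these per-point distances (mean, standard deviation) are what is typically reported as the "total error" between the two survey models.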
NASA Astrophysics Data System (ADS)
Merlaud, Alexis; Tack, Frederik; Constantin, Daniel; Georgescu, Lucian; Maes, Jeroen; Fayt, Caroline; Mingireanu, Florin; Schuettemeyer, Dirk; Meier, Andreas Carlos; Schönardt, Anja; Ruhtz, Thomas; Bellegante, Livio; Nicolae, Doina; Den Hoed, Mirjam; Allaart, Marc; Van Roozendael, Michel
2018-01-01
The Small Whiskbroom Imager for atmospheric compositioN monitorinG (SWING) is a compact remote sensing instrument dedicated to mapping trace gases from an unmanned aerial vehicle (UAV). SWING is based on a compact visible spectrometer and a scanning mirror to collect scattered sunlight. Its weight, size, and power consumption are respectively 920 g, 27 cm × 12 cm × 8 cm, and 6 W. SWING was developed in parallel with a 2.5 m flying-wing UAV. This unmanned aircraft is electrically powered, has a typical airspeed of 100 km h^-1, and can operate at a maximum altitude of 3 km. We present SWING-UAV experiments performed in Romania on 11 September 2014 during the Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT) campaign, which was dedicated to testing newly developed instruments in the context of air quality satellite validation. The UAV was operated up to 700 m above ground, in the vicinity of the large power plant of Turceni (44.67° N, 23.41° E; 116 m a.s.l.). These SWING-UAV flights were coincident with another airborne experiment using the Airborne imaging differential optical absorption spectroscopy (DOAS) instrument for Measurements of Atmospheric Pollution (AirMAP), and with ground-based DOAS, lidar, and balloon-borne in situ observations. The spectra recorded during the SWING-UAV flights are analysed with the DOAS technique. This analysis reveals NO2 differential slant column densities (DSCDs) up to (13 ± 0.6) × 10^16 molec cm^-2. These NO2 DSCDs are converted to vertical column densities (VCDs) by estimating air mass factors. The resulting NO2 VCDs are up to (4.7 ± 0.4) × 10^16 molec cm^-2. The water vapour DSCD measurements, up to (8 ± 0.15) × 10^22 molec cm^-2, are used to estimate a volume mixing ratio of water vapour in the boundary layer of 0.013 ± 0.002 mol mol^-1. These geophysical quantities are validated with the coincident measurements.
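The DSCD-to-VCD conversion mentioned above divides the slant column by an air mass factor (AMF): VCD = DSCD / AMF. A one-line sketch with illustrative numbers (the AMF value below is assumed for the example, not taken from the paper):

```python
def vertical_column(dscd, amf):
    """Convert a DOAS differential slant column density (DSCD) to a
    vertical column density (VCD): VCD = DSCD / AMF. The air mass
    factor (AMF) encodes the effective light path through the layer."""
    return dscd / amf

# e.g. an NO2 DSCD of 1.3e17 molec cm^-2 with an assumed AMF of 2.8
print(f"{vertical_column(1.3e17, 2.8):.2e}")  # 4.64e+16
```

The AMF itself comes from radiative transfer modelling and depends on viewing geometry, surface albedo, and the assumed trace gas profile, which is why it dominates the VCD uncertainty budget.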
Near Real Time Structural Health Monitoring with Multiple Sensors in a Cloud Environment
NASA Astrophysics Data System (ADS)
Bock, Y.; Todd, M.; Kuester, F.; Goldberg, D.; Lo, E.; Maher, R.
2017-12-01
A repeated near real time 3-D digital surrogate representation of critical engineered structures can be used to provide actionable data on subtle time-varying displacements in support of disaster resiliency. We describe a damage monitoring system of optimally-integrated complementary sensors, including Global Navigation Satellite Systems (GNSS), Micro-Electro-Mechanical Systems (MEMS) accelerometers coupled with the GNSS (seismogeodesy), light multi-rotor Unmanned Aerial Vehicles (UAVs) equipped with high-resolution digital cameras and GNSS/IMU, and ground-based Light Detection and Ranging (LIDAR). The seismogeodetic system provides point measurements of static and dynamic displacements and seismic velocities of the structure. The GNSS ties the UAV and LIDAR imagery to an absolute reference frame with respect to survey stations in the vicinity of the structure to isolate the building response to ground motions. The GNSS/IMU can also estimate the trajectory of the UAV with respect to the absolute reference frame. With these constraints, multiple UAVs and LIDAR images can provide 4-D displacements of thousands of points on the structure. The UAV systematically circumnavigates the target structure, collecting high-resolution image data, while the ground LIDAR scans the structure from different perspectives to create a detailed baseline 3-D reference model. UAV- and LIDAR-based imaging can subsequently be repeated after extreme events, or after long time intervals, to assess before and after conditions. The unique challenge is that disaster environments are often highly dynamic, resulting in rapidly evolving, spatio-temporal data assets with the need for near real time access to the available data and the tools to translate these data into decisions. The seismogeodetic analysis has already been demonstrated in the NASA AIST Managed Cloud Environment (AMCE) designed to manage large NASA Earth Observation data projects on Amazon Web Services (AWS). 
The Cloud provides distinct advantages in terms of extensive storage and computing resources required for processing UAV and LIDAR imagery. Furthermore, it avoids single points of failure and allows for remote operations during emergencies, when near real time access to structures may be limited.
High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.
2015-10-01
Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and for the surface characteristics of the catchment area as model input. While recent research in urban drainage has focused on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. Such methods are increasingly relevant because, in many parts of the globe, accurate land-use information is lacking and detailed image data are often unavailable. Modern unmanned aerial vehicles (UAVs) allow one to acquire high-resolution images on a local level at comparatively low cost, performing on-demand repetitive measurements and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and of using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is proposed and evaluated in a state-of-the-art urban drainage modelling exercise. In a real-life case study (Lucerne, Switzerland), we compare imperviousness maps generated using a fixed-wing consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their overall accuracy, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. We then evaluate the influence that different image data sources and their processing methods have on hydrological and hydraulic model performance.
We analyse the surface runoff of the 307 individual subcatchments with respect to relevant attributes, such as peak runoff and runoff volume. Finally, we evaluate the model's channel flow prediction performance through a cross-comparison with reference flow measured at the catchment outlet. We show that imperviousness maps generated from UAV images processed with modern classification methods achieve an accuracy comparable to standard, off-the-shelf aerial imagery. In the examined case study, we find that the different imperviousness maps have only a limited influence on predicted surface runoff and pipe flows when traditional workflows are used. We expect that they will have a substantial influence when more detailed modelling approaches are employed to characterize land use and to predict surface runoff. We conclude that UAV imagery represents a valuable alternative data source for urban drainage model applications, since it allows up-to-date aerial images to be acquired flexibly, at a quality comparable with off-the-shelf image products and at a competitive price. We believe that in the future, urban drainage models representing a higher degree of spatial detail will fully benefit from the strengths of UAV imagery.
NASA Astrophysics Data System (ADS)
Smigaj, M.; Gaulton, R.; Barr, S. L.; Suárez, J. C.
2015-08-01
Climate change has a major influence on forest health and growth, by indirectly affecting the distribution and abundance of forest pathogens, as well as the severity of tree diseases. Temperature rise and changes in precipitation may also allow the ranges of some species to expand, resulting in the introduction of non-native invasive species, which pose a significant risk to forests worldwide. The detection and robust monitoring of affected forest stands is therefore crucial for allowing management interventions to reduce the spread of infections. This paper investigates the use of a low-cost fixed-wing UAV-borne thermal system for monitoring disease-induced canopy temperature rise. Initially, camera calibration was performed, revealing a significant overestimation (by over 1 K) of the temperature readings and a non-uniformity (exceeding 1 K) across the imagery. These effects were minimised with a two-point calibration technique, ensuring that the offsets of mean image temperature readings from blackbody temperature did not exceed ± 0.23 K, whilst 95.4% of all image pixels fell within ± 0.14 K (average) of the mean temperature reading. The derived calibration parameters were applied to a test data set of UAV-borne imagery acquired over a Scots pine stand representing a range of Red Band Needle Blight infection levels. At canopy level, tree crown temperatures recorded by the UAV-borne infrared camera suggest a small temperature increase related to disease progression (R = 0.527, p = 0.001), indicating that UAV-borne cameras might be able to detect sub-degree temperature differences induced by disease onset.
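As an illustrative sketch of the two-point calibration described in this abstract, the per-pixel gain/offset correction can be written as follows; all function and variable names are hypothetical, and the blackbody reference values are placeholders rather than the study's data:

```python
import numpy as np

def two_point_calibration(raw_low, raw_high, temp_low, temp_high):
    """Derive per-pixel gain and offset from two reference frames taken
    while viewing a blackbody at known temperatures temp_low and temp_high."""
    gain = (temp_high - temp_low) / (raw_high - raw_low)
    offset = temp_low - gain * raw_low
    return gain, offset

def apply_calibration(raw_frame, gain, offset):
    """Map raw sensor readings to calibrated temperatures."""
    return gain * raw_frame + offset
```

Because the correction is per pixel, it removes both the global offset and the spatial non-uniformity in one step.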
NASA Astrophysics Data System (ADS)
Lussem, U.; Hollberg, J.; Menne, J.; Schellberg, J.; Bareth, G.
2017-08-01
Monitoring the spectral response of intensively managed grassland throughout the growing season allows optimizing fertilizer inputs by monitoring plant growth. For example, site-specific fertilizer application as part of precision agriculture (PA) management requires information within a short time. However, this requires field-based measurements with hyper- or multispectral sensors, which may not be feasible in day-to-day farming practice. Exploiting the information of RGB images from consumer-grade cameras mounted on unmanned aerial vehicles (UAVs) can offer cost-efficient as well as near-real-time analysis of grasslands with high temporal and spatial resolution. The potential of RGB imagery-based vegetation indices (VI) from consumer-grade cameras mounted on UAVs has been explored recently in several studies. However, for multitemporal analyses it is desirable to calibrate the digital numbers (DN) of RGB images to physical units. In this study, we explored the comparability of the RGBVI from a consumer-grade camera mounted on a low-cost UAV to well-established vegetation indices from hyperspectral field measurements for applications in grassland. The study was conducted in 2014 on the Rengen Grassland Experiment (RGE) in Germany. Image DN values were calibrated into reflectance by using the Empirical Line Method (Smith & Milton 1999). Depending on sampling date and VI, the correlation between the UAV-based RGBVI and VIs such as the NDVI yielded R2 values varying from no correlation up to 0.9. These results indicate that calibrated RGB-based VIs have the potential to support or substitute hyperspectral field measurements to facilitate management decisions on grasslands.
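The Empirical Line Method and an RGB-based vegetation index can be sketched as below; this is a minimal illustration under the common definition RGBVI = (G² - R·B) / (G² + R·B), and all names and sample values are assumptions, not taken from the study:

```python
import numpy as np

def empirical_line(dn, dn_dark, dn_bright, refl_dark, refl_bright):
    """Empirical Line Method: linear mapping from image digital numbers
    (DN) to reflectance, using two ground reference targets of known
    reflectance (after Smith & Milton 1999)."""
    slope = (refl_bright - refl_dark) / (dn_bright - dn_dark)
    return refl_dark + slope * (np.asarray(dn, dtype=float) - dn_dark)

def rgbvi(red, green, blue):
    """RGB vegetation index, here taken as (G^2 - R*B) / (G^2 + R*B)."""
    return (green ** 2 - red * blue) / (green ** 2 + red * blue)
```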
Uav Photogrammetry: Block Triangulation Comparisons
NASA Astrophysics Data System (ADS)
Gini, R.; Pagliari, D.; Passoni, D.; Pinto, L.; Sona, G.; Dosso, P.
2013-08-01
UAV systems represent a flexible technology able to collect a large amount of high-resolution information, both for metric and interpretation uses. In the frame of experimental tests carried out at Dept. ICA of Politecnico di Milano to validate vector-sensor systems and to assess metric accuracies of images acquired by UAVs, a block of photos taken by a fixed-wing system was triangulated with several software packages. The test field is a rural area included in an Italian park ("Parco Adda Nord"), useful for studying flight and imagery performances on buildings, roads, and cultivated and uncultivated vegetation. The UAV SenseFly, equipped with a Canon Ixus 220HS camera, flew autonomously over the area at a height of 130 m, yielding a block of 49 images divided into 5 strips. Sixteen pre-signalized Ground Control Points, surveyed in the area through GPS (NRTK survey), allowed the referencing of the block and accuracy analyses. Approximate values for exterior orientation parameters (positions and attitudes) were recorded by the flight control system. The block was processed with several software packages: Erdas-LPS, EyeDEA (Univ. of Parma), Agisoft Photoscan, and Pix4UAV, in assisted or automatic mode. Comparisons of results are given in terms of differences among digital surface models, differences in orientation parameters and accuracies, when available. Moreover, image and ground point coordinates obtained by the various packages were independently used as initial values in a comparative adjustment made by scientific in-house software, which can apply constraints to evaluate the effectiveness of different methods of point extraction and accuracies on ground check points.
NASA Astrophysics Data System (ADS)
Lu, Bing; He, Yuhong
2017-06-01
Investigating spatio-temporal variations of species composition in grassland is an essential step in evaluating grassland health conditions, understanding the evolutionary processes of the local ecosystem, and developing grassland management strategies. Space-borne remote sensing images (e.g., MODIS, Landsat, and Quickbird) with spatial resolutions varying from less than 1 m to 500 m have been widely applied for vegetation species classification at spatial scales from community to regional levels. However, the spatial resolutions of these images are not fine enough to investigate grassland species composition, since grass species are generally small in size and highly mixed, and vegetation cover is greatly heterogeneous. Unmanned Aerial Vehicle (UAV) as an emerging remote sensing platform offers a unique ability to acquire imagery at very high spatial resolution (centimetres). Compared to satellites or airplanes, UAVs can be deployed quickly and repeatedly, and are less limited by weather conditions, facilitating advantageous temporal studies. In this study, we utilize an octocopter, on which we mounted a modified digital camera (with near-infrared (NIR), green, and blue bands), to investigate species composition in a tall grassland in Ontario, Canada. Seven flight missions were conducted during the growing season (April to December) in 2015 to detect seasonal variations, and four of them were selected in this study to investigate the spatio-temporal variations of species composition. To quantitatively compare images acquired at different times, we establish a processing flow of UAV-acquired imagery, focusing on imagery quality evaluation and radiometric correction. The corrected imagery is then applied to an object-based species classification. Maps of species distribution are subsequently used for a spatio-temporal change analysis. 
Results indicate that UAV-acquired imagery is an unrivalled data source for studying fine-scale grassland species composition, owing to its high spatial resolution. The overall accuracy is around 85% for images acquired at different times. Species composition is spatially structured by topographical features and soil moisture conditions. The spatio-temporal variation of species composition reflects the growth and succession of different species, which is critical for understanding the evolutionary features of grassland ecosystems. Strengths and challenges of applying UAV-acquired imagery to vegetation studies are summarized at the end.
Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera
Qu, Yufu; Huang, Jianyu; Zhang, Xuan
2018-01-01
In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
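One plausible reading of the paper's PCA compression step can be sketched as follows; the exact construction of the three principal component points and the similarity measure are assumptions for illustration only:

```python
import numpy as np

def principal_points(keypoints):
    """Compress a set of 2-D feature point coordinates into three
    representative points: the centroid, plus one point along each
    principal axis scaled by its standard deviation."""
    pts = np.asarray(keypoints, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvals = np.clip(eigvals, 0.0, None)   # guard against tiny negative values
    order = np.argsort(eigvals)[::-1]       # largest variance first
    p1 = centroid + np.sqrt(eigvals[order[0]]) * eigvecs[:, order[0]]
    p2 = centroid + np.sqrt(eigvals[order[1]]) * eigvecs[:, order[1]]
    return np.vstack([centroid, p1, p2])

def overlap_score(pp_a, pp_b):
    """Cheap inter-image similarity: mean distance between two triplets
    of principal points (smaller suggests more overlap)."""
    return float(np.linalg.norm(pp_a - pp_b, axis=1).mean())
```

Comparing two images then costs three point distances rather than a full feature match, which is what makes key-image selection fast.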
Unmanned aerial vehicle: A unique platform for low-altitude remote sensing for crop management
USDA-ARS?s Scientific Manuscript database
Unmanned aerial vehicles (UAV) provide a unique platform for remote sensing to monitor crop fields that complements remote sensing from satellite, aircraft and ground-based platforms. The UAV-based remote sensing is versatile at ultra-low altitude to be able to provide an ultra-high-resolution imag...
A UAV and S2A data-based estimation of the initial biomass of green algae in the South Yellow Sea.
Xu, Fuxiang; Gao, Zhiqiang; Jiang, Xiaopeng; Shang, Weitao; Ning, Jicai; Song, Debin; Ai, Jinquan
2018-03-01
Previous studies have shown that the initial biomass of the green tide was the green algae attached to Pyropia aquaculture rafts in the Southern Yellow Sea. In this study, green algae were identified with an unmanned aerial vehicle (UAV), and a biomass estimation model was proposed for green algae in the radial sand ridge area based on a Sentinel-2A (S2A) image and UAV images. The results showed that green algae were detected with high accuracy using the normalized green-red difference index (NGRDI); approximately 1340 tons and 700 tons of green algae were attached to rafts and raft ropes respectively, and this lower biomass might be the main cause of the smaller scale of the green tide in 2017. In addition, UAVs play an important role in monitoring raft-attached green algae, and long-term research on its biomass would provide a scientific basis for the control and forecast of green tides in the Yellow Sea.
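The NGRDI used in this abstract has the standard form (G - R) / (G + R); a minimal sketch, with a placeholder detection threshold rather than the study's value:

```python
import numpy as np

def ngrdi(green, red, eps=1e-9):
    """Normalized green-red difference index: (G - R) / (G + R)."""
    green = np.asarray(green, dtype=float)
    red = np.asarray(red, dtype=float)
    return (green - red) / (green + red + eps)

def algae_mask(green, red, threshold=0.1):
    """Flag pixels as green algae where NGRDI exceeds a threshold;
    0.1 is a placeholder, not the paper's value."""
    return ngrdi(green, red) > threshold
```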
Distinguishing plant population and variety with UAV-derived vegetation indices
NASA Astrophysics Data System (ADS)
Oakes, Joseph; Balota, Maria
2017-05-01
Variety selection and seeding rate are two important choices that a peanut grower must make. High-yielding varieties can increase profit with no additional input costs, while seeding rate largely determines the input cost a grower incurs from seed. The overall purpose of this study was to examine the effect that seeding rate has on different peanut varieties. With the advent of new UAV technology, we now have the possibility to use indices collected with the UAV to measure emergence, seeding rate, and growth rate, and perhaps make yield predictions. This information could enable growers to make management decisions early in the season based on low plant populations due to poor emergence, and could be a useful tool for estimating plant population and growth rate in order to help achieve desired crop stands. Red-Green-Blue (RGB) and near-infrared (NIR) images were collected from a UAV platform starting two weeks after planting and continuing weekly for the next six weeks. Ground NDVI was also collected each time aerial images were collected. Vegetation indices were derived from both the RGB and NIR images. Greener area (GGA; the proportion of green pixels with a hue angle from 80° to 120°) and a* (the average red/green color of the image) were derived from the RGB images, while the Normalized Difference Vegetation Index (NDVI) was derived from the NIR images. Aerial indices were successful in distinguishing seeding rates and determining emergence during the first few weeks after planting, but not later in the season. However, at this point these aerial indices are not adequate predictors of peanut yield.
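The indices discussed above can be sketched as follows; the hue-based GGA computation and the NDVI form are standard definitions, but the implementation details are illustrative, not the study's code:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red bands."""
    return (nir - red) / (nir + red + eps)

def greener_area(rgb):
    """GGA: fraction of pixels whose hue angle lies in [80 deg, 120 deg].
    rgb: float array of shape (..., 3) with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    delta = np.where(mx > mn, mx - mn, 1.0)     # avoid division by zero
    # standard RGB -> hue conversion, piecewise by dominant channel
    hue = np.where(mx == r, (60 * (g - b) / delta) % 360,
          np.where(mx == g, 60 * (b - r) / delta + 120,
                            60 * (r - g) / delta + 240))
    hue = np.where(mx > mn, hue, 0.0)           # grey pixels: hue undefined
    return float(np.mean((hue >= 80) & (hue <= 120)))
```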
NASA Astrophysics Data System (ADS)
Themistocleous, K.; Papadavid, G.; Christoforou, M.; Agapiou, A.; Andreou, K.; Tsaltas, D.; Hadjimitsis, D. G.
2014-08-01
This paper provides the results obtained by using satellite imagery and UAV data for managing a degraded ecosystem over Randi Forest in Cyprus. Landsat TM/ETM+ and GeoEye images have been used to retrieve several indices with the main aim of managing the overgrazed area. Aerial photographs were acquired in order to document and monitor the overgrazed areas, including seasonal changes in vegetation and soil. UAVs were used to create ortho-photos and DEMs. Satellite images were used to derive NDVI maps of the study area. The resulting findings provide a detailed image of the specific location of overgrazed areas. The results of the study can be used by decision makers to establish effective strategies to avoid similar scenarios of overgrazing in other parts of Cyprus. This study was funded by the FP7 programme CASCADE Project on sudden and catastrophic shifts in dryland Mediterranean ecosystems (2012-2017).
Habib, Ayman; Han, Youkyung; Xiong, Weifeng; ...
2016-09-24
Low-cost Unmanned Airborne Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade and light-weight sensors. However, the geometric fidelity of derived information from push-broom hyperspectral scanners is quite sensitive to the available position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS)). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. The approach relies on utilizing the navigation data, together with a modified Speeded-Up Robust Feature (SURF) detector and descriptor, for automating the identification of conjugate features in the RGB and hyperspectral imagery.
The SURF modification takes into consideration the available direct geo-referencing information to improve the reliability of the matching procedure in the presence of repetitive texture within a mechanized agricultural field. Identified features are then used to improve the geometric fidelity of the previously ortho-rectified hyperspectral data. Lastly, experimental results from two real datasets show that the geometric rectification of the hyperspectral data was improved by almost one order of magnitude.
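As a hedged illustration of the final rectification step, conjugate features can be used to fit a correction transform by least squares; an affine model is used here as a generic stand-in, since the abstract does not specify the paper's transformation model:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src conjugate points to
    dst conjugate points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])       # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # shape (3, 2)
    return params

def apply_affine(params, pts):
    """Apply the fitted transform to an array of points."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params
```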
Experiment on Uav Photogrammetry and Terrestrial Laser Scanning for Ict-Integrated Construction
NASA Astrophysics Data System (ADS)
Takahashi, N.; Wakutsu, R.; Kato, T.; Wakaizumi, T.; Ooishi, T.; Matsuoka, R.
2017-08-01
In the 2016 fiscal year the Ministry of Land, Infrastructure, Transport and Tourism of Japan started a program integrating construction and ICT in earthwork and concrete placing. The new program, named "i-Construction" and focusing on productivity improvement, adopts such new technologies as UAV photogrammetry and TLS. We report a field experiment to investigate whether the procedures of UAV photogrammetry and TLS following the standards for "i-Construction" are feasible. In the experiment we measured an embankment of about 80 metres by 160 metres immediately after earthwork was completed on it. We used two UAV-camera combinations in the experiment: a larger UAV, the enRoute Zion QC730, with a Sony α6000 onboard camera, and a smaller UAV, the DJI Phantom 4, with its dedicated onboard camera. Moreover, we used a terrestrial laser scanner, the FARO Focus3D X330, based on the phase shift principle. The experiment results indicate that the procedures of UAV photogrammetry using a QC730 with an α6000 and TLS using a Focus3D X330 following the standards for "i-Construction" would be feasible. Furthermore, the experiment results show that UAV photogrammetry using the lower-priced Phantom 4 was unable to satisfy the accuracy requirement for "i-Construction." The cause of the low accuracy of the Phantom 4 is under investigation. We also found that the difference of image resolution on the ground would not have a great influence on the measurement accuracy in UAV photogrammetry.
Chosen Aspects of the Production of the Basic Map Using Uav Imagery
NASA Astrophysics Data System (ADS)
Kedzierski, M.; Fryskowska, A.; Wierzbicki, D.; Nerc, P.
2016-06-01
For several years there has been an increasing interest in the use of unmanned aerial vehicles for acquiring image data from a low altitude. Considering the cost-effectiveness of the flight time of UAVs vs. conventional airplanes, the use of the former is advantageous when generating large-scale accurate orthophotos. Through the development of UAV imagery, we can update large-scale basic maps. These maps are cartographic products which are used for registration, economic, and strategic planning. On the basis of these maps other cartographic products are derived, for example maps used for building planning. The article presents an assessment of the usefulness of orthophotos based on UAV imagery to update the basic map. In the research a compact, non-metric camera, mounted on a fixed-wing airframe powered by an electric motor, was used. The tested area covered flat, agricultural and woodland terrain. The processing and analysis of orthorectification were carried out with the INPHO UASMaster programme. Due to the effect of UAV instability on low-altitude imagery, the use of non-metric digital cameras and low-accuracy GPS-INS sensors, the geometry of the images is visibly poorer compared to conventional digital aerial photos (large values of phi and kappa angles). Therefore, typically, low-altitude images require large along- and across-track overlap - usually above 70 %. As a result of the research, orthoimages were obtained with a resolution of 0.06 m and a horizontal accuracy of 0.10 m. Digitized basic maps were used as the reference data. The accuracy of the orthoimages vs. the basic maps was estimated based on the study and on the available reference sources. As a result, it was found that the geometric accuracy and interpretative advantages of the final orthoimages allow the updating of basic maps. It is estimated that such an update of basic maps based on UAV imagery reduces processing time by approx. 40%.
UAV-Based Hyperspectral Remote Sensing for Precision Agriculture: Challenges and Opportunities
NASA Astrophysics Data System (ADS)
Angel, Y.; Parkes, S. D.; Turner, D.; Houborg, R.; Lucieer, A.; McCabe, M.
2017-12-01
Modern agricultural production relies on monitoring crop status by observing and measuring variables such as soil condition, plant health, fertilizer and pesticide effect, irrigation and crop yield. Managing all of these factors is a considerable challenge for crop producers. As such, providing integrated technological solutions that enable improved diagnostics of field condition to maximize profits, while minimizing environmental impacts, would be of much interest. Such challenges can be addressed by implementing remote sensing systems such as hyperspectral imaging to produce precise biophysical indicator maps across the various cycles of crop development. Recent progress in unmanned aerial vehicles (UAVs) has advanced traditional satellite-based capabilities, providing high spatial, spectral and temporal resolution. However, while some hyperspectral sensors have been developed for use onboard UAVs, significant investment is required to develop a system and data processing workflow that retrieves accurately georeferenced mosaics. Here we explore the use of a pushbroom hyperspectral camera that is integrated on board a multi-rotor UAV system to measure the surface reflectance in 272 distinct spectral bands across a wavelength range spanning 400-1000 nm, and outline the requirements for sensor calibration, integration onto a stable UAV platform enabling accurate positional data, flight planning, and development of data post-processing workflows for georeferenced mosaics. The provision of high-quality and geo-corrected imagery facilitates the development of metrics of vegetation health that can be used to identify potential problems such as production inefficiencies, diseases and nutrient deficiencies and other data-streams to enable improved crop management.
Immense opportunities remain to be exploited in the implementation of UAV-based hyperspectral sensing (and its combination with other imaging systems) to provide a transferable and scalable integrated framework for crop growth monitoring and yield prediction. Here we explore some of the challenges and issues in translating the available technological capacity into a useful and useable image collection and processing flow-path that enables these potential applications to be better realized.
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-08-30
Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also in everyday life for purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise if drones operate in areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.
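A marker can be located in a frame by normalized cross-correlation against a template; the brute-force search below is a generic stand-in for the paper's tracker, intended only to illustrate the matching principle:

```python
import numpy as np

def locate_marker(frame, template):
    """Locate a marker template in a grayscale frame by normalized
    cross-correlation, returning the (row, col) of the best-matching
    top-left corner. Brute-force O(H*W*h*w); real trackers use faster
    search or dedicated marker detectors."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            w = frame[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * t_norm
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos
```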
A customizable commercial miniaturized 320×256 indium gallium arsenide shortwave infrared camera
NASA Astrophysics Data System (ADS)
Huang, Shih-Che; O'Grady, Matthew; Groppe, Joseph V.; Ettenberg, Martin H.; Brubaker, Robert M.
2004-10-01
The design and performance of a commercial short-wave-infrared (SWIR) InGaAs microcamera engine is presented. The 0.9-to-1.7 micron SWIR imaging system consists of a room-temperature-TEC-stabilized, 320x256 (25 μm pitch) InGaAs focal plane array (FPA) and a high-performance, highly customizable image-processing set of electronics. The detectivity, D*, of the system is greater than 10^13 cm·√Hz/W at 1.55 μm, and this sensitivity may be adjusted in real time over 100 dB. It features snapshot-mode integration with a minimum exposure time of 130 μs. The digital video processor provides real-time pixel-to-pixel, 2-point dark-current subtraction and non-uniformity compensation along with defective-pixel substitution. Other features include automatic gain control (AGC), gamma correction, 7 preset configurations, adjustable exposure time, external triggering, and windowing. The windowing feature is highly flexible; the region of interest (ROI) may be placed anywhere on the imager and can be varied at will. Windowing allows for high-speed readout, enabling such applications as target acquisition and tracking; for example, a 32x32 ROI window may be read out at over 3500 frames per second (fps). Output video is provided as EIA170-compatible analog, or as 12-bit CameraLink-compatible digital. All the above features are accomplished in a small volume < 28 cm3, weight < 70 g, and with low power consumption < 1.3 W at room temperature using this new microcamera engine. Video processing is based on a field-programmable gate array (FPGA) platform with a soft-embedded processor that allows for ease of integration/addition of customer-specific algorithms, processes, or design requirements. The camera was developed with high-performance, space-restricted, power-conscious applications in mind, such as robotic or UAV deployment.
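The quoted detectivity relates to the detector's noise-equivalent power (NEP) through the standard definition D* = √(A·Δf) / NEP; a small helper, with illustrative numbers rather than this camera's actual specifications:

```python
import math

def detectivity(area_cm2, bandwidth_hz, nep_w):
    """Specific detectivity D* = sqrt(A * delta_f) / NEP, in cm*sqrt(Hz)/W,
    from detector area A (cm^2), noise bandwidth delta_f (Hz), and
    noise-equivalent power NEP (W)."""
    return math.sqrt(area_cm2 * bandwidth_hz) / nep_w
```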
Research on detection method of UAV obstruction based on binocular vision
NASA Astrophysics Data System (ADS)
Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao
2018-04-01
For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to address the noise and brightness differences in the actually captured images. The distance to the nearest obstacle is calculated from the disparity map generated by binocular vision. The contour of the obstacle is then extracted by post-processing of the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract obstacle contours automatically. Finally, safety distance measurement and obstacle positioning during UAV flight are achieved. Based on a series of tests, the distance measurement error stays within 2.24% over the measuring range from 5 m to 20 m.
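The ranging step above rests on the standard pinhole stereo relation Z = f·B/d: depth is focal length (in pixels) times baseline divided by disparity. The sketch below illustrates only that geometric relation with hypothetical calibration values; the abstract does not publish the system's camera parameters, so the function names and numbers here are assumptions, not the authors' implementation.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth in metres of a point observed with the given disparity,
    using the pinhole stereo relation Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def nearest_obstacle_depth(focal_px: float, baseline_m: float, disparities: list) -> float:
    """The nearest obstacle corresponds to the largest valid disparity in the map."""
    valid = [d for d in disparities if d > 0]
    return depth_from_disparity(focal_px, baseline_m, max(valid))

# Hypothetical calibration: 700 px focal length, 12 cm baseline.
print(nearest_obstacle_depth(700.0, 0.12, [4.2, 16.8, 8.4]))  # nearest point at 5.0 m
```

Note the inverse relation: ranging error grows with distance, which is consistent with the bounded 5-20 m measuring range reported above.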
Liu, Chenglong; Liu, Jinghong; Song, Yueming; Liang, Huaidan
2017-01-01
This paper provides a system and method for correcting relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy. Because the angular displacements influence the final accuracy, a measuring system is attached to the platform so that the texture image of the platform base bulkhead can be collected in real time. Through image registration, the displacement vector of the platform relative to its bulkhead can be calculated to determine the angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacements reduce the coordinate transformation errors and thus improve the localization accuracy. Even this relatively simple method improves localization accuracy by 14.3%. PMID:28273845
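For rotations about a single axis, superposing a measured angular displacement on the corresponding attitude angle is exact, since R(a)·R(b) = R(a + b). The sketch below illustrates that superposition step with a hypothetical yaw displacement; the angle values are invented for illustration and this is not the paper's measuring system.

```python
import math

def rot_z(yaw: float) -> list:
    """3x3 rotation matrix about the vertical (yaw) axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m: list, v: list) -> list:
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Attitude yaw reported by the UAV navigation system, plus the small angular
# displacement of the platform relative to the airframe recovered from image
# registration (both values hypothetical).
yaw_uav = math.radians(30.0)
yaw_disp = math.radians(1.5)

v_body = [1.0, 0.0, 0.0]                                   # platform line of sight
uncorrected = mat_vec(rot_z(yaw_uav), v_body)              # ignores the displacement
corrected = mat_vec(rot_z(yaw_uav + yaw_disp), v_body)     # displacement superposed
```

The residual angle between `uncorrected` and `corrected` is exactly the displacement; projected to the ground from the flight altitude, it becomes the localization error the correction removes.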
Alexandridis, Thomas K; Tamouridou, Afroditi Alexandra; Pantazi, Xanthoula Eirini; Lagopodi, Anastasia L; Kashefi, Javid; Ovakoglou, Georgios; Polychronos, Vassilios; Moshou, Dimitrios
2017-09-01
In the present study, the detection and mapping of Silybum marianum (L.) Gaertn. weed using novelty detection classifiers is reported. A multispectral camera (green-red-NIR) on board a fixed-wing unmanned aerial vehicle (UAV) was employed for obtaining high-resolution images. Four novelty detection classifiers were used to identify S. marianum among other vegetation in a field: One-Class Support Vector Machine (OC-SVM), One-Class Self-Organizing Maps (OC-SOM), Autoencoders and One-Class Principal Component Analysis (OC-PCA). As input features to the novelty detection classifiers, the three spectral bands and texture were used. S. marianum identification using OC-SVM reached an overall accuracy of 96%. The results show the feasibility of effective S. marianum mapping by means of novelty detection classifiers acting on multispectral UAV imagery.
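All four classifiers above share the one-class idea: train only on samples of the target class and flag anything that deviates too far from the learned model. As a deliberately simplified stand-in (a centroid-distance threshold, not OC-SVM or any of the paper's classifiers), the sketch below shows that idea on hypothetical green-red-NIR reflectance features.

```python
import math

class CentroidNoveltyDetector:
    """Train on target-class samples only; flag test samples whose distance
    to the training centroid exceeds a learned threshold (mean + k * std)."""

    def fit(self, samples: list, k: float = 3.0):
        n, dim = len(samples), len(samples[0])
        self.centroid = [sum(s[j] for s in samples) / n for j in range(dim)]
        dists = [self._dist(s) for s in samples]
        mean = sum(dists) / n
        std = math.sqrt(sum((d - mean) ** 2 for d in dists) / n)
        self.threshold = mean + k * std
        return self

    def _dist(self, x: list) -> float:
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, self.centroid)))

    def is_novel(self, x: list) -> bool:
        return self._dist(x) > self.threshold

# Hypothetical (green, red, NIR) reflectances of target-class pixels.
train = [[0.20, 0.10, 0.62], [0.22, 0.11, 0.60],
         [0.19, 0.09, 0.63], [0.21, 0.10, 0.61]]
det = CentroidNoveltyDetector().fit(train)
print(det.is_novel([0.21, 0.10, 0.61]))  # in-class pixel -> False
print(det.is_novel([0.55, 0.40, 0.20]))  # dissimilar vegetation -> True
```

The paper's classifiers replace the centroid-distance score with richer decision boundaries (kernel support regions, SOM quantization error, reconstruction error), but the fit-on-one-class workflow is the same.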
Bird's-Eye View of Sampling Sites: Using Unmanned Aerial Vehicles to Make Chemistry Fieldwork Videos
ERIC Educational Resources Information Center
Fung, Fun Man; Watts, Simon Francis
2017-01-01
Drones, or unmanned aerial vehicles (UAVs), typically small helicopters or airplanes, are commonly used for warfare, aerial surveillance, and recreation. In recent years, drones have become more accessible to the public as a platform for photography. In this report, we explore the use of drones as a new technological filming tool to enhance student learning…
Peña, José M; Torres-Sánchez, Jorge; Serrano-Pérez, Angélica; de Castro, Ana I; López-Granados, Francisca
2015-03-06
In order to optimize the application of herbicides in weed-crop systems, accurate and timely weed maps of the crop field are required. In this context, this investigation quantified the efficacy and limitations of remote images collected with an unmanned aerial vehicle (UAV) for early detection of weed seedlings. The ability to discriminate weeds was significantly affected by the spectral (type of camera), spatial (flight altitude) and temporal (date of the study) resolutions of the imagery. The colour-infrared images captured at 40 m and 50 days after sowing (date 2), when plants had 5-6 true leaves, had the highest weed detection accuracy (up to 91%). At this flight altitude, the images captured before date 2 had slightly better results than the images captured later. However, this trend changed in the visible-light images captured at 60 m and higher, which had notably better results on date 3 (57 days after sowing) because of the larger size of the weed plants. Our results showed the spectral and spatial resolution requirements for generating a suitable weed map early in the growing season, as well as the best moment for UAV image acquisition, with the ultimate objective of applying site-specific weed management operations.
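The flight-altitude tradeoff above is governed by the ground sampling distance of the imagery. Under the pinhole model, GSD = altitude × physical pixel size / focal length, so raising the platform from 40 m to 60 m coarsens the pixel footprint proportionally. The camera parameters below (6 μm pixels, 8 mm lens) are hypothetical, chosen only to illustrate the relation; the study's actual sensors are not specified here.

```python
def ground_sampling_distance(altitude_m: float, pixel_pitch_m: float,
                             focal_length_m: float) -> float:
    """GSD in metres/pixel: flight altitude * physical pixel size / focal length
    (pinhole camera model, nadir view)."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Hypothetical 6 um pixel pitch and 8 mm focal length at the study's two altitudes.
for h in (40.0, 60.0):
    gsd_cm = 100.0 * ground_sampling_distance(h, 6e-6, 8e-3)
    print(f"{h:.0f} m flight altitude -> {gsd_cm:.1f} cm/pixel")
```

Weed seedlings with 5-6 true leaves span only a few such pixels, which is why detection accuracy is so sensitive to both altitude and acquisition date.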
NASA Astrophysics Data System (ADS)
Kasprzak, Marek; Jancewicz, Kacper; Michniewicz, Aleksandra
2017-11-01
The paper presents an example of using photographs taken by unmanned aerial vehicles (UAV) and processed using the structure from motion (SfM) procedure in a geomorphological study of rock relief. Subject to analysis is a small rock city in the West Sudetes (SW Poland), known as Starościńskie Skały and developed in coarse granite bedrock. The aims of this paper were, first, to compare UAV/SfM-derived data with the cartographical image based on traditional geomorphological field-mapping methods and with the digital elevation model derived from airborne laser scanning (ALS), and second, to test whether the proposed combination of UAV and SfM methods may be helpful in recognizing the detailed structure of granite tors. As a result of the conducted UAV flights and digital image post-processing in AgiSoft software, it was possible to obtain datasets (dense point cloud, texture model, orthophotomap, bare-ground-type digital terrain model—DTM) which allowed detailed visualization of the surface of the study area. In consequence, it was possible to distinguish even the very small forms of rock surface microrelief: joints, aplite veins, rills and karren, weathering pits, etc., otherwise difficult to map and measure. The study also includes an evaluation of the particular datasets concerning microtopography and allows discussion of the indisputable advantages of using the UAV/SfM-based DTM in geomorphic studies of tors and rock cities, even those located within forest as in the presented case study.
Slic Superpixels for Object Delineation from Uav Data
NASA Astrophysics Data System (ADS)
Crommelinck, S.; Bennett, R.; Gerke, M.; Koeva, M. N.; Yang, M. Y.; Vosselman, G.
2017-08-01
Unmanned aerial vehicles (UAV) are increasingly investigated with regard to their potential to create and update (cadastral) maps. UAVs provide a flexible and low-cost platform for high-resolution data, from which object outlines can be accurately delineated. This delineation could be automated with image analysis methods to improve existing mapping procedures that are cost-, time- and labor-intensive and poorly reproducible. This study investigates a superpixel approach, namely simple linear iterative clustering (SLIC), in terms of its applicability to UAV data. The approach is investigated in terms of its applicability to high-resolution UAV orthoimages and in terms of its ability to delineate object outlines of roads and roofs. Results show that the approach is applicable to UAV orthoimages of 0.05 m GSD and extents of 100 million and 400 million pixels. Further, the approach delineates the objects with the high accuracy provided by the UAV orthoimages at completeness rates of up to 64%. The approach is not suitable as a standalone approach for object delineation. However, it shows high potential for combination with further methods that delineate objects at higher correctness rates in exchange for a lower localization quality. This study provides a basis for future work that will focus on the incorporation of multiple methods for an interactive, comprehensive and accurate object delineation from UAV data. This aims to support numerous application fields such as topographic and cadastral mapping.
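At the core of SLIC is a combined color-spatial distance, D = sqrt(d_c² + (d_s/S)²·m²), where d_c is the color distance, d_s the spatial distance, S the superpixel grid interval, and m the compactness weight. The sketch below shows that distance and a single pixel-assignment step on hypothetical (l, a, b, x, y) values; it is not the full iterative SLIC algorithm, which also updates cluster centers and restricts the search to a 2S×2S window.

```python
import math

def slic_distance(pixel: list, center: list, S: float, m: float) -> float:
    """Combined SLIC distance between a (l, a, b, x, y) pixel and a cluster center.
    S is the superpixel grid interval, m the compactness weight."""
    dc = math.sqrt(sum((pixel[i] - center[i]) ** 2 for i in range(3)))    # color part
    ds = math.sqrt(sum((pixel[i] - center[i]) ** 2 for i in range(3, 5)))  # spatial part
    return math.sqrt(dc ** 2 + (ds / S) ** 2 * m ** 2)

def assign(pixel: list, centers: list, S: float, m: float = 10.0) -> int:
    """Label a pixel with the index of its nearest cluster center under D."""
    return min(range(len(centers)), key=lambda k: slic_distance(pixel, centers[k], S, m))

# Two hypothetical cluster centers in (l, a, b, x, y) space, grid interval S = 20.
centers = [[50.0, 0.0, 0.0, 10.0, 10.0], [80.0, 5.0, 5.0, 40.0, 40.0]]
print(assign([52.0, 1.0, 0.0, 12.0, 11.0], centers, S=20.0))  # -> 0
print(assign([79.0, 4.0, 6.0, 38.0, 41.0], centers, S=20.0))  # -> 1
```

Larger m makes superpixels more compact and grid-like; smaller m lets them hug color boundaries, which is what makes SLIC attractive for outline delineation in orthoimages.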
Using Unmanned Aerial Vehicles (UAVs) to Modeling Tornado Impacts
NASA Astrophysics Data System (ADS)
Wagner, M.; Doe, R. K.
2017-12-01
Using Unmanned Aerial Vehicles (UAVs) to assess storm damage is a useful research tool. Benefits include their ability to access remote or impassable areas post-storm, identify unknown damage and assist with more detailed site investigations and rescue efforts. Technological advancement of UAVs means that they can capture high-resolution images, often at an affordable price. These images can be used to create 3D environments to better interpret and delineate damage across large areas that would have been difficult to cover in ground surveys. This research presents the results of a rapid-response site investigation of the 29 April 2017 Canton, Texas, USA, tornado using low-cost UAVs. This was a multiple, high-impact tornado event measuring EF4 at maximum. Rural farmland was chosen as a challenging location to test both equipment and methodology. Such locations provide multiple impacts at a variety of scales, including structural and vegetation damage and even animal fatalities. The 3D impact models allow for a more comprehensive study prior to clean-up. The results show previously unseen damage and better quantify damage impacts at the local level. 3D digital track swaths were created, allowing for a more accurate track width determination. These results demonstrate how effective the use of low-cost UAVs can be for rapid-response storm damage assessments, the high quality of data they can achieve, and how they can help us better visualize tornado site investigations.
NASA Astrophysics Data System (ADS)
Yastikli, N.; Özerdem, Ö. Z.
2017-11-01
The digital documentation of architectural heritage is important for monitoring, preserving and managing, as well as for 3D BIM modelling and time-space VR (virtual reality) applications. Unmanned aerial vehicles (UAVs) have been widely used in these applications thanks to rapid developments in technology which enable high-resolution images with resolutions at the millimetre level. Moreover, it has become possible to produce highly accurate 3D point clouds with structure from motion (SfM) and multi-view stereo (MVS), and to obtain a surface reconstruction of a realistic 3D architectural heritage model by using high-overlap images and 3D modelling software such as ContextCapture, Pix4Dmapper or PhotoScan. This study aims at the digital documentation of Otag-i Humayun (the Ottoman Empire Sultan's summer palace), located in Davutpaşa, Istanbul/Turkey, using a low-cost UAV. Data collection was performed with a low-cost 3DR Solo UAV carrying a GoPro Hero 4 camera with a fisheye lens. The data processing was accomplished using the commercial Pix4D software. Dense point clouds, a true orthophoto and a 3D solid model of Otag-i Humayun were produced as results. A quality check of the produced point clouds has been performed. The results obtained for Otag-i Humayun in Istanbul proved that a low-cost UAV with a fisheye lens can be successfully used for architectural heritage documentation.
High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.
2015-01-01
Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the area. While recent research in urban drainage has focused on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases because, in many parts of the globe, accurate land-use information is generally lacking where detailed image data are unavailable. Modern unmanned air vehicles (UAVs) allow acquiring high-resolution images on a local level at comparably low cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored for the purpose of the study. In this study, we investigate for the first time the possibility to derive high-resolution imperviousness maps for urban areas from UAV imagery and to use this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence which different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual subcatchments regarding relevant attributes, such as peak runoff and volume.
Finally, we evaluate the model's channel flow prediction performance through a cross-comparison with reference flow measured at the catchment outlet. We show that imperviousness maps generated using UAV imagery processed with modern classification methods achieve accuracy comparable with standard, off-the-shelf aerial imagery. In the examined case study, we find that the different imperviousness maps only have a limited influence on modelled surface runoff and pipe flows. We conclude that UAV imagery represents a valuable alternative data source for urban drainage model applications due to the possibility to flexibly acquire up-to-date aerial images at a superior quality and a competitive price. Our analyses furthermore suggest that spatially more detailed urban drainage models can even better benefit from the full detail of UAV imagery.
Using UAV data for soil surface change detection at a loess field plot
NASA Astrophysics Data System (ADS)
Eltner, Anette; Baumgart, Philipp
2014-05-01
Application of unmanned aerial vehicles (UAV) has attracted increasing interest in the geosciences due to major developments within the last years. Today, UAV are economical, reliable and flexible in usage. They provide a non-invasive method to measure the soil surface and its changes - e.g. due to erosion - with high resolution. Advances in digital photogrammetry and computer vision allow for fast and dense digital surface reconstruction from overlapping images. The study site is located in the Saxonian loess (Germany). The area is fragile due to erodible soils and intense agricultural utilisation. Hence, detectable soil surface changes are expected. The size of the field plot is 20 x 30 meters and the period of investigation lasted from October 2012 till July 2013, during which four surveys were performed. The UAV deployed in this study is equipped with a compact camera attached to an active stabilising camera mount. In addition, the micro drone integrates GPS and IMU, which enables autonomous surveys with programmed flight patterns. About 100 photos are needed to cover the study site at a minimal flying height of eight metres and 65%/80% image overlap. For multi-temporal comparison a stable local reference system is established. Total station control of the signalised ground control points confirms two mm accuracy for the study period. To estimate the accuracy of the digital surface models (DSM) derived from the UAV images, a comparison to DSM from terrestrial laser scanning (TLS) is conducted. The standard deviation of differences amounts to five millimetres. To analyse surface changes, image-processing methods are applied to the DSM. Erosion rills could be extracted for quantitative and qualitative consideration. Furthermore, volumetric changes are measured. First results indicate levelling processes during the winter season and reveal rill and inter-rill erosion during spring and summer seasons.
4D very high-resolution topography monitoring of surface deformation using UAV-SfM framework.
NASA Astrophysics Data System (ADS)
Clapuyt, François; Vanacker, Veerle; Schlunegger, Fritz; Van Oost, Kristof
2016-04-01
During the last years, exploratory research has shown that UAV-based image acquisition is suitable for environmental remote sensing and monitoring. Image acquisition with cameras mounted on a UAV can be performed at very high spatial resolution and high temporal frequency in the most dynamic environments. Combined with the Structure-from-Motion algorithm, the UAV-SfM framework is capable of providing digital surface models (DSM) which are highly accurate when compared to other very-high-resolution topographic datasets and highly reproducible for repeated measurements over the same study area. In this study, we aim at assessing (1) differential movement of the Earth's surface and (2) the sediment budget of a complex earthflow located in the Central Swiss Alps, based on three topographic datasets acquired over a period of 2 years. For three time steps, we acquired aerial photographs with a standard reflex camera mounted on a low-cost and lightweight UAV. Image datasets were then processed with the Structure-from-Motion algorithm in order to reconstruct a 3D dense point cloud representing the topography. Georeferencing of outputs has been achieved with the ground control point (GCP) extraction method, the GCPs having previously been surveyed in the field with an RTK GPS. Finally, a digital elevation model of differences (DOD) has been computed to assess the topographic changes between the three acquisition dates, while surface displacements have been quantified using image correlation techniques. Our results show that the digital elevation model of topographic differences is able to capture surface deformation at cm-scale resolution. The mean annual displacement of the earthflow is about 3.6 m, while the forefront of the landslide has advanced by ca. 30 meters over a period of 18 months. The 4D analysis permits identification of the direction and velocity of the earth movement. Stable topographic ridges condition the direction of the flow, with the highest downslope movement on steep slopes and diffuse movement due to lateral sediment flux in the central part of the earthflow.
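A DEM of Difference of the kind described above is, at its core, a cell-wise subtraction of two co-registered surface models, with changes below a level of detection (LoD) derived from the survey uncertainties masked out. The stdlib sketch below illustrates that computation on tiny hypothetical grids; the DSM values, precisions and cell size are invented, and real workflows (including this study's) operate on georeferenced rasters rather than nested lists.

```python
import math

def dem_of_difference(dem_new: list, dem_old: list,
                      sigma_new: float, sigma_old: float, t: float = 1.96) -> list:
    """Cell-wise elevation change (new - old), masking changes below the level
    of detection LoD = t * sqrt(sigma_new^2 + sigma_old^2)."""
    lod = t * math.sqrt(sigma_new ** 2 + sigma_old ** 2)
    return [[(b - a) if abs(b - a) > lod else 0.0
             for a, b in zip(row_old, row_new)]
            for row_old, row_new in zip(dem_old, dem_new)]

def volume_change(dod: list, cell_size_m: float) -> float:
    """Net volumetric change in cubic metres over all significant cells."""
    return sum(sum(row) for row in dod) * cell_size_m ** 2

# Hypothetical 3x3 DSM tiles (elevations in metres), 5 mm survey precision each.
old = [[10.00, 10.00, 10.00], [10.00, 10.00, 10.00], [10.00, 10.00, 10.00]]
new = [[10.00,  9.95, 10.00], [10.01,  9.90, 10.00], [10.00, 10.00, 10.05]]
dod = dem_of_difference(new, old, 0.005, 0.005)
print(volume_change(dod, cell_size_m=0.5))  # net change in m^3 (erosion is negative)
```

The 1 cm change in the example falls below the roughly 1.4 cm LoD and is discarded, which is exactly how a DOD separates real surface deformation from survey noise.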
The sky is the limit: reconstructing physical geography fieldwork from an aerial perspective
NASA Astrophysics Data System (ADS)
Williams, R.; Tooth, S.; Gibson, M.; Barrett, B.
2017-12-01
In an era of rapid geographical data acquisition, interpretations of remote sensing products (e.g. aerial photographs, satellite images, digital elevation models) are an integral part of many undergraduate geography degree schemes, but there are fewer opportunities for collection and processing of primary remote sensing data. Unmanned aerial vehicles (UAVs) provide a relatively cheap opportunity to introduce the principles and practice of airborne remote sensing into fieldcourses, enabling students to learn about image acquisition, data processing and interpretation of derived products. Three case studies illustrate how a low-cost DJI Phantom UAV can be used by students to acquire images that can be processed using off-the-shelf Structure-from-Motion photogrammetry software. Two case studies are drawn from an international fieldcourse that takes students to field sites that are the focus of current funded research, whilst a third case study is from a course in topographic mapping. Results from a student questionnaire and analysis of assessed student reports showed that using UAVs in fieldwork enhanced student engagement with themes on their fieldcourse and equipped them with data processing skills. The derivation of bespoke orthophotos and Digital Elevation Models also provided students with opportunities to gain insight into the various data quality issues that are associated with aerial imagery acquisition and topographic reconstruction, although additional training is required to maximise this potential. Recognition of the successes and limitations of this teaching intervention provides scope for improving exercises that use UAVs and other technologies in future fieldcourses. UAVs are enabling both a reconstruction of how we measure the Earth's surface and a reconstruction of how students do fieldwork.
NASA Astrophysics Data System (ADS)
Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan
2018-01-01
Depth measurement is the most basic measurement in machine vision applications such as automatic driving, unmanned aerial vehicles (UAV), robots and so on, and it has a wide range of uses. With the development of image processing technology and the improvement of hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 processor and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual-camera calibration, image matching and depth calculation have been studied and implemented on the hardware platform, and the hardware design and the rationality of the related algorithms have been tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can be up to 25 fps. The experimental results show that the optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware meets real-time depth measurement requirements while maintaining image resolution.
Demonstrating Acquisition of Real-Time Thermal Data Over Fires Utilizing UAVs
NASA Technical Reports Server (NTRS)
Ambrosia, Vincent G.; Wegener, Steven S.; Brass, James A.; Buechel, Sally W.; Peterson, David L. (Technical Monitor)
2002-01-01
A disaster mitigation demonstration, designed to integrate remote-piloted aerial platforms, a thermal infrared imaging payload, over-the-horizon (OTH) data telemetry and advanced image geo-rectification technologies, was initiated in 2001. Project FiRE incorporates the use of a remotely piloted Uninhabited Aerial Vehicle (UAV), thermal imagery, and over-the-horizon satellite data telemetry to provide geo-corrected data over a controlled burn to a fire management community in near real time. The experiment demonstrated the use of a thermal multi-spectral scanner, integrated on a large-payload-capacity UAV, distributing data over the horizon via satellite communication telemetry equipment, and precision geo-rectification of the resultant data on the ground for distribution to the Internet. The use of the UAV allowed remote-piloted flight (thereby reducing the potential for loss of human life during hazardous missions) and the ability to "linger and stare" over the fire for extended periods of time (beyond the capabilities of human-pilot endurance). Improved telemetry bit-rate capacity increased the amount, structure, and information content of the image data relayed to the ground. The integration of precision navigation instrumentation allowed improved accuracies in geo-rectification of the resultant imagery, easing data ingestion and overlay in a GIS framework. We focus on these technological advances and demonstrate how these emerging technologies can be readily integrated to support disaster mitigation and monitoring strategies regionally and nationally.
NASA Astrophysics Data System (ADS)
Zhu, Boqin
2015-08-01
The purpose of using unmanned aerial vehicle (UAV) remote sensing in the Five-hundred-meter Aperture Spherical Telescope (FAST) project is to dynamically record the construction process with high-resolution imagery, monitor the environmental impact, and provide services for local environmental protection and the residents resettled from the reserve. This paper introduces the UAV remote sensing system and the course design and implementation for the FAST site. Through the analysis of the time series data, we found that: (1) since 2012, the project has been widely carried out; (2) by 2013, the internal project had begun to take shape; (3) the extent of engineering excavation remained stable in 2014, when the initial scale of the FAST construction emerged and, in the meantime, vegetation recovery went well on the bare soil area; (4) in 2015, no environmental problems caused by the construction, nor other engineering geological disasters, were found in the work area through interpretation of the UAV images. This paper also suggests that the UAV technology needs some improvements to fulfil the requirements of surveying and mapping specifications, including new data acquisition and processing measures designed for highly diverse elevations, usage of a telephoto camera, hierarchical photography at different flying heights, and adjustment with terrain using a joint aerial triangulation method.
Crack identification for rigid pavements using unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Bahaddin Ersoz, Ahmet; Pekcan, Onur; Teke, Turker
2017-09-01
Pavement condition assessment is an essential part of modern pavement management systems, as rehabilitation strategies are planned based upon its outcomes. For proper evaluation of existing pavements, they must be continuously and effectively monitored using practical means. Conventionally, truck-based pavement monitoring systems have been in use for assessing the remaining life of in-service pavements. Although such systems produce accurate results, their use can be expensive and data processing can be time consuming, which makes them infeasible considering the demand for quick pavement evaluation. To overcome such problems, Unmanned Aerial Vehicles (UAVs) can be used as an alternative as they are relatively cheaper and easier to use. In this study, we propose a UAV-based pavement crack identification system for monitoring rigid pavements' existing conditions. The system consists of recently introduced image processing algorithms used together with conventional machine learning techniques, both of which are used to detect cracks on rigid pavements' surfaces and to classify them. Through image processing, the distinct features of labelled crack bodies are first obtained from the UAV-based images and then used for training a Support Vector Machine (SVM) model. The performance of the developed SVM model was assessed with a field study performed along a rigid pavement exposed to low traffic and serious temperature changes. Available cracks were classified using the UAV-based system, and the obtained results indicate that it provides a good alternative solution for pavement monitoring applications.
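Once trained, a linear SVM classifies a feature vector by the sign of its decision function, f(x) = w·x + b. The sketch below shows only that classification step; the weights, bias, and the three patch features are hypothetical stand-ins, not the paper's trained model or its actual feature set.

```python
def svm_decision(weights: tuple, bias: float, features: tuple) -> float:
    """Linear SVM decision value f(x) = w . x + b; its sign gives the class."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def classify_crack(features: tuple,
                   weights: tuple = (2.5, -1.8, 3.1), bias: float = -0.9) -> str:
    """Label an image patch 'crack' when the decision value is positive.
    Weights and bias are hypothetical, standing in for a trained model."""
    return "crack" if svm_decision(weights, bias, features) > 0 else "intact"

# Hypothetical per-patch features: (edge density, mean brightness, elongation).
print(classify_crack((0.60, 0.30, 0.80)))   # elongated dark edge region
print(classify_crack((0.05, 0.90, 0.10)))   # bright uniform patch
```

In practice the weights come from the training step described above, where features extracted from labelled crack bodies in the UAV images define the separating hyperplane; kernel SVMs replace w·x with a kernel expansion over support vectors.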
Gimbal Influence on the Stability of Exterior Orientation Parameters of UAV Acquired Images
Gašparović, Mateo; Jurjević, Luka
2017-01-01
In this paper, results from the analysis of the gimbal impact on the determination of the camera exterior orientation parameters of an Unmanned Aerial Vehicle (UAV) are presented and interpreted. Additionally, a new approach and methodology for testing the influence of gimbals on the exterior orientation parameters of UAV-acquired images is presented. The main motive of this study is to examine the possibility of obtaining better geometry and favorable spatial bundles of rays of images in UAV photogrammetric surveying. The subject is a 3-axis brushless gimbal based on a controller board (Storm32). Only two gimbal axes are taken into consideration: the roll and pitch axes. Testing was done in a flight simulation, and in indoor and outdoor flight modes, to analyze the Inertial Measurement Unit (IMU) and photogrammetric data. Within these tests, the change of the exterior orientation parameters without the use of a gimbal is determined, as well as the potential accuracy of the stabilization with the use of a gimbal. The results show that using a gimbal has huge potential: significantly smaller discrepancies between data are noticed when a gimbal is used in flight simulation mode, even four times smaller than in the other test modes. In this test, the potential accuracy of a low-budget gimbal for application in real conditions is determined. PMID:28218699
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Dai, Ming; Yin, Chuan-li
2017-09-01
Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. According to Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. Then, using the L0-norm sparse priors of the gradient and the dark channel to estimate the APSF blur kernel, the fast Fourier transform is used to recover the original clear image by Wiener filtering. By comparison with other state-of-the-art methods, the proposed method can correctly estimate the blur kernel, effectively remove the atmospheric degradation, preserve image detail and improve the quality evaluation indexes.
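The final restoration step above is Wiener filtering in the frequency domain: given the blurred spectrum Y and the (estimated) kernel spectrum H, the restored spectrum is X = Y·conj(H)/(|H|² + K), where K regularizes frequencies where H is small. The stdlib sketch below demonstrates that formula in 1-D with a naive DFT and a synthetic circular blur kernel; it does not implement the paper's APSF estimation, and real pipelines use a 2-D FFT with a noise-dependent K.

```python
import cmath

def dft(x: list, inverse: bool = False) -> list:
    """Naive O(n^2) discrete Fourier transform (stdlib-only illustration)."""
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def wiener_deconvolve(blurred: list, kernel: list, K: float = 1e-6) -> list:
    """Restore a circularly blurred signal: X = Y * conj(H) / (|H|^2 + K)."""
    Y, H = dft(blurred), dft(kernel)
    X = [y * h.conjugate() / (abs(h) ** 2 + K) for y, h in zip(Y, H)]
    return [v.real for v in dft(X, inverse=True)]

# Synthetic test: circularly blur an impulse, then restore it.
signal = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
kernel = [0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2]   # chosen so H has no zeros
n = len(signal)
blurred = [sum(signal[(t - s) % n] * kernel[s] for s in range(n)) for t in range(n)]
restored = wiener_deconvolve(blurred, kernel, K=1e-12)
```

With a tiny K and a kernel whose spectrum never vanishes, the restoration is near-exact; in the noisy, blind setting of the paper, K trades noise amplification against sharpness and H comes from the APSF estimate rather than being known.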
Smart Camera System for Aircraft and Spacecraft
NASA Technical Reports Server (NTRS)
Delgado, Frank; White, Janis; Abernathy, Michael F.
2003-01-01
This paper describes a new approach to situation awareness that combines video sensor technology and synthetic vision technology in a unique fashion to create a hybrid vision system. Our implementation of the technology, called "SmartCam3D" (SC3D), has been flight tested by both NASA and the Department of Defense with excellent results. This paper details its development and flight test results. Windshields and windows add considerable weight and risk to vehicle design, and because of this, many future vehicles will employ a windowless cockpit design. This windowless cockpit design philosophy prompted us to look at what would be required to develop a system that provides crewmembers and operations personnel an appropriate level of situation awareness. The system created to date provides a real-time 3D perspective display that can be used during all weather and visibility conditions. While the advantages of a synthetic vision only system are considerable, the major disadvantage of such a system is that it displays a synthetic scene created using "static" data acquired by an aircraft or satellite at some point in the past. The SC3D system we are presenting in this paper is a hybrid synthetic vision system that fuses live video stream information with a computer generated synthetic scene. This hybrid system can display a dynamic, real-time scene of a region of interest, enriched by information from a synthetic environment system, see figure 1. The SC3D system has been flight tested on several X-38 flight tests performed over the last several years and on an Army Unmanned Aerial Vehicle (UAV) ground control station earlier this year. Additional testing using an assortment of UAV ground control stations and UAV simulators from the Army and Air Force will be conducted later this year.
Configuration and Specifications of AN Unmanned Aerial Vehicle for Precision Agriculture
NASA Astrophysics Data System (ADS)
Erena, M.; Montesinos, S.; Portillo, D.; Alvarez, J.; Marin, C.; Fernandez, L.; Henarejos, J. M.; Ruiz, L. A.
2016-06-01
Unmanned Aerial Vehicles (UAVs) with multispectral sensors are increasingly attractive in the geosciences for data capture and map updating at high spatial and temporal resolutions. These autonomously flying systems can be equipped with different sensors, such as a six-band multispectral camera (Tetracam mini-MCA-6), a Ublox M8N GPS, MEMS gyroscopes, and miniaturized sensor systems for navigation, positioning, and mapping purposes, and can be used for data collection in precision viticulture. In this study, the efficiency of a light UAV system for data collection, processing, and map updating in small areas is evaluated, generating correlations between classification maps derived from remote sensing and production maps. Based on a comparison of the indices derived from UAVs carrying infrared sensors with those obtained by satellites (Sentinel-2A and Landsat 8), UAVs show promise for the characterization of vineyard plots with high spatial variability, despite the low vegetative coverage of these crops. Consequently, a procedure for zoning-map production based on UAV images could provide important information for farmers.
Design of rapid prototype of UAV line-of-sight stabilized control system
NASA Astrophysics Data System (ADS)
Huang, Gang; Zhao, Liting; Li, Yinlong; Yu, Fei; Lin, Zhe
2018-01-01
The line-of-sight (LOS) stabilized platform is a key technology of the UAV (unmanned aerial vehicle), as it can reduce the effects of aircraft vibration and maneuvering on imaging quality. According to the requirements of the LOS stabilization system (a combined inertial and optical-mechanical method) and the UAV's structure, a rapid prototype is designed based on an industrial computer, using Peripheral Component Interconnect (PCI) and Windows RTX to exchange information. The paper presents the control structure and the circuit system, including the inertial stabilization control circuit with gyro and voice-coil-motor drive circuit, the optical-mechanical stabilization control circuit with fast-steering-mirror (FSM) drive circuit and image-deviation acquisition system, the outer-frame rotary follower, and the information-exchange system on the PC. Test results show that the stabilization accuracy reaches 5 μrad, proving the effectiveness of the combined line-of-sight stabilization control system, and the real-time rapid prototype runs stably.
Baena, Susana; Moat, Justin; Whaley, Oliver; Boyd, Doreen S
2017-01-01
The Pacific Equatorial dry forest of northern Peru is recognised for its unique endemic biodiversity. Although highly threatened, the forest provides livelihoods and ecosystem services to local communities. As agro-industrial expansion and climatic variation transform the region, close ecosystem monitoring is essential for viable adaptation strategies. UAVs offer an affordable alternative to satellites for obtaining both colour and near-infrared imagery that meets the specific spatial- and temporal-resolution requirements of a monitoring system. Combined with their capacity to produce three-dimensional models of the environment, this provides an invaluable tool for species-level monitoring. Here we demonstrate that object-based image analysis of very high resolution UAV images can identify and quantify keystone tree species and their health across wide heterogeneous landscapes. The analysis exposes the state of the vegetation and serves as a baseline for monitoring and adaptive implementation of community-based conservation and restoration in the area.
NASA Astrophysics Data System (ADS)
Themistocleous, K.; Agapiou, A.; Hadjimitsis, D.
2016-10-01
The documentation of architectural cultural heritage sites has traditionally been expensive and labor-intensive. New innovative technologies, such as Unmanned Aerial Vehicles (UAVs), provide an affordable, reliable and straightforward method of capturing cultural heritage sites, thereby providing a more efficient and sustainable approach to the documentation of cultural heritage structures. In this study, hundreds of images of the Panagia Chryseleousa church in Foinikaria, Cyprus were taken using a UAV with an attached high resolution camera. The images were processed to generate an accurate digital 3D model using Structure from Motion techniques. A Building Information Model (BIM) was then used to generate drawings of the church. The methodology described in the paper provides an accurate, simple and cost-effective method of documenting cultural heritage sites and generating digital 3D models using novel techniques and innovative methods.
NASA Astrophysics Data System (ADS)
Chirayath, V.; Instrella, R.
2016-02-01
We present NASA ESTO FluidCam 1 & 2, Visible and NIR Fluid-Lensing-enabled imaging payloads for Unmanned Aerial Vehicles (UAVs). Developed as part of a focused 2014 earth science technology grant, FluidCam 1&2 are Fluid-Lensing-based computational optical imagers designed for automated 3D mapping and remote sensing of underwater coastal targets from airborne platforms. Fluid Lensing has been used to map underwater reefs in 3D at sub-cm scale from UAV platforms in American Samoa and Hamelin Pool, Australia, and has proven a valuable tool in modern marine research for marine biosphere assessment and conservation. We share FluidCam 1&2 instrument validation and testing results as well as preliminary processed data from field campaigns. Petabyte-scale aerial survey efforts using Fluid Lensing to image at-risk reefs demonstrate broad applicability to large-scale automated species identification, morphology studies and reef ecosystem characterization for shallow marine environments and terrestrial biospheres, of crucial importance to improving bathymetry data for physical oceanographic models and to understanding climate change's impact on coastal zones, global oxygen production, and carbon sequestration.
Image processing analysis of geospatial uav orthophotos for palm oil plantation monitoring
NASA Astrophysics Data System (ADS)
Fahmi, F.; Trianda, D.; Andayani, U.; Siregar, B.
2018-03-01
Unmanned Aerial Vehicle (UAV) is one of the tools that can be used to monitor palm oil plantations remotely. With geospatial orthophotos, it is possible to identify which parts of the plantation land are fertile, where planted crops grow perfectly; which parts are less fertile, where crops grow but imperfectly; and which parts of the plantation field are not growing at all. This information can be obtained easily and quickly with the use of UAV photos. In this study, we utilized image processing algorithms to process the orthophotos for more accurate and faster analysis. The resulting orthophoto images were processed using Matlab, including classification of fertile, infertile, and dead palm oil plants by using the Gray Level Co-occurrence Matrix (GLCM) method. The GLCM method was developed based on four direction parameters with specific degrees 0°, 45°, 90°, and 135°. From the results of research conducted with 30 image samples, it was found that the accuracy of the system can be reached by using the features extracted from the matrix as parameters: Contrast, Correlation, Energy, and Homogeneity.
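As a sketch of the texture features involved: the GLCM at a given offset counts co-occurring grey-level pairs, and Contrast, Correlation, Energy and Homogeneity are simple statistics of that matrix. This minimal NumPy version (the study itself used Matlab) assumes the image is already quantized to a small number of grey levels; the offset convention for the four angles is one common choice, not necessarily the authors':

```python
import numpy as np

def glcm(image, dx, dy, levels):
    """Normalized Gray-Level Co-occurrence Matrix for one (dx, dy) offset."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y, x], image[y2, x2]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, correlation, energy and homogeneity from a normalized GLCM."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    si = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sj = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (si * sj) if si * sj > 0 else 1.0
    return contrast, correlation, energy, homogeneity

# Offsets for the four directions 0, 45, 90 and 135 degrees (rows grow downward).
offsets = {0: (1, 0), 45: (1, -1), 90: (0, -1), 135: (-1, -1)}
img = (np.arange(36).reshape(6, 6) // 9) % 4   # toy quantized image, 4 grey levels
feats = {ang: glcm_features(glcm(img, dx, dy, 4)) for ang, (dx, dy) in offsets.items()}
```

Averaging the four directional feature vectors gives a rotation-tolerant texture descriptor for each image tile.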
Low-cost multispectral imaging for remote sensing of lettuce health
NASA Astrophysics Data System (ADS)
Ren, David D. W.; Tripathi, Siddhant; Li, Larry K. B.
2017-01-01
In agricultural remote sensing, unmanned aerial vehicle (UAV) platforms offer many advantages over conventional satellite and full-scale airborne platforms. One of the most important advantages is their ability to capture high spatial resolution images (1-10 cm) on demand and at different viewing angles. However, UAV platforms typically rely on the use of multiple cameras, which can be costly and difficult to operate. We present the development of a simple low-cost imaging system for remote sensing of crop health and demonstrate it on lettuce (Lactuca sativa) grown in Hong Kong. To identify the optimal vegetation index, we recorded images of both healthy and unhealthy lettuce, and used them as input in an expectation-maximization cluster analysis with a Gaussian mixture model. Results from unsupervised and supervised clustering show that, among four widely used vegetation indices, the blue wide-dynamic range vegetation index is the most accurate. This study shows that it is readily possible to design and build a remote sensing system capable of determining the health status of lettuce at a reasonably low cost.
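The analytic core here, expectation-maximization on index values, fits in a short sketch. The blue-WDRVI formula below assumes the standard WDRVI form with the blue band substituted for red (alpha is a tunable weight); the EM routine is a generic two-component 1-D Gaussian mixture, not the study's exact implementation, and the data are synthetic:

```python
import numpy as np

def blue_wdrvi(nir, blue, alpha=0.2):
    """Blue wide-dynamic-range vegetation index (assumed WDRVI form,
    blue band in place of red): (a*NIR - Blue) / (a*NIR + Blue)."""
    return (alpha * nir - blue) / (alpha * nir + blue)

def em_gmm_1d(x, iters=200):
    """EM for a two-component 1-D Gaussian mixture (healthy vs unhealthy)."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.full(2, x.var() + 1e-6)
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return mu, var, w

# Synthetic index values: unhealthy plants cluster low, healthy cluster high.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-0.4, 0.05, 200), rng.normal(0.5, 0.05, 200)])
mu, var, w = em_gmm_1d(x)
```

Each pixel is then assigned to the component with the higher responsibility, giving a healthy/unhealthy map.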
Multi-UAV Supervisory Control Interface Technology (MUSCIT)
2012-09-01
similar capability into the Vigilant Spirit Control Station (VSCS). During operations each vehicle is placed in its loiter mode. While in loiter mode... the vehicle will maintain a loiter over its designated loiter position. During previous spirals, VSCS included a Loiter Slave mode where the sensor... available control station features. Prior to the Spiral 3 simulation, VSCS developers had been working with Real Time Video Systems (RTVS) and had
Comparison of a UAV-derived point-cloud to Lidar data at Haig Glacier, Alberta, Canada
NASA Astrophysics Data System (ADS)
Bash, E. A.; Moorman, B.; Montaghi, A.; Menounos, B.; Marshall, S. J.
2016-12-01
The use of unmanned aerial vehicles (UAVs) is expanding rapidly in glaciological research as a result of technological improvements that make UAVs a cost-effective solution for collecting high resolution datasets with relative ease. The cost and difficult access traditionally associated with fieldwork in glacial environments make UAVs a particularly attractive tool. In the small, but growing, body of literature using UAVs in glaciology, the accuracy of UAV data is tested through the comparison of a UAV-derived DEM to measured control points. A field campaign combining simultaneous lidar and UAV flights over Haig Glacier in April 2015 provided the unique opportunity to directly compare UAV data to lidar. The UAV was a six-propeller Mikrokopter carrying a Panasonic Lumix DMC-GF1 camera with a 12 Megapixel Live MOS sensor and a Lumix G 20 mm lens, flown at a height of 90 m, resulting in sub-centimetre ground resolution per image pixel. Lidar data collection took place April 20, while UAV flights were conducted April 20-21. A set of 65 control points were laid out and surveyed on the glacier surface on April 19 and 21 using an RTK GPS with a vertical uncertainty of 5 cm. A direct comparison of lidar points to these control points revealed a 9 cm offset between the control points and the lidar points on average, but the difference changed distinctly between points collected on April 19 and those collected April 21 (7 cm and 12 cm, respectively). Agisoft Photoscan was used to create a point cloud from imagery collected with the UAV, and CloudCompare was used to calculate the difference between this and the lidar point cloud, revealing an average difference of less than 17 cm. This field campaign also highlighted some of the benefits and drawbacks of using a rotary UAV for glaciological research.
The vertical takeoff and landing capabilities, combined with quick responsiveness and higher carrying capacity, make the rotary vehicle favourable for high-resolution photos when working in mountainous terrain. Battery life is limited, however, compared to fixed-wing vehicles, making it more difficult to cover large areas in a short time. This analysis shows that UAVs are able to fill an important role in the future of glaciological research, when research goals are balanced with instrument accuracy and UAV platform selection.
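The point-cloud differencing step, done in the study with CloudCompare, amounts to nearest-neighbour distances between the two clouds. A minimal SciPy stand-in, with synthetic clouds offset vertically by the 17 cm figure quoted above (the clouds themselves are fabricated for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(cloud_a, cloud_b):
    """Distance from each point in cloud_a to its nearest neighbour in cloud_b."""
    tree = cKDTree(cloud_b)
    d, _ = tree.query(cloud_a)
    return d

# Synthetic stand-ins for the lidar and UAV clouds, offset 17 cm vertically.
rng = np.random.default_rng(1)
lidar = rng.uniform(0, 100, size=(5000, 3))
uav = lidar + np.array([0.0, 0.0, 0.17])
d = cloud_to_cloud_distance(uav, lidar)
```

Summary statistics of d (mean, median, percentiles) then quantify the agreement between the UAV-derived and lidar surfaces.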
Rapid mapping of landslide disaster using UAV- photogrammetry
NASA Astrophysics Data System (ADS)
Cahyono, A. B.; Zayd, R. A.
2018-03-01
Unmanned Aerial Vehicle (UAV) systems offer many advantages in several mapping applications, such as slope mapping and geohazard studies. This study utilizes a UAV system to map a landslide disaster that occurred in Jombang Regency, East Java. A rotary-wing UAV was chosen because rotor-wing units are stable and able to capture images easily. Aerial photographs were acquired in strips following standard aerial photography procedure; 60 photos were taken. Secondary data consisting of ground control points surveyed with geodetic GPS and check points established with a total station were used. The digital camera was calibrated using close-range photogrammetric software, and the recovered camera calibration parameters were then used in processing the digital images. All the aerial photographs were processed using digital photogrammetric software, and an orthophoto was produced. The final result is a 1:1500-scale orthophoto map from data processing with the SfM algorithm, with a GSD accuracy of 3.45 cm, and a calculated volume from contour-line delineation of 10527.03 m3. This differs from the terrestrial-method result by 964.67 m3, or 8.4%.
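The abstract does not spell out the volume computation; a common grid-based approach differences pre- and post-event elevation models and multiplies by cell area. The DEMs below are synthetic stand-ins (the real survey used the SfM products and contour delineation):

```python
import numpy as np

def landslide_volume(dem_before, dem_after, cell_size):
    """Displaced volume: summed elevation loss times cell area."""
    dz = dem_before - dem_after
    return float(np.sum(dz[dz > 0]) * cell_size ** 2)

# Synthetic DEMs at 0.5 m cell size with a 2 m-deep scar over 20x20 cells.
dem_before = np.full((100, 100), 10.0)
dem_after = dem_before.copy()
dem_after[40:60, 40:60] -= 2.0
vol = landslide_volume(dem_before, dem_after, cell_size=0.5)
```

Here the expected volume is depth x area = 2 m x (20 x 0.5 m)^2 = 200 m3, which the function reproduces.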
a Study on Automatic Uav Image Mosaic Method for Paroxysmal Disaster
NASA Astrophysics Data System (ADS)
Li, M.; Li, D.; Fan, D.
2012-07-01
Paroxysmal disasters, such as floods, can do great damage in a short time. Timely, accurate, and fast acquisition of sufficient disaster information is a prerequisite for disaster emergency response. Owing to the UAV's superiority in acquiring disaster data, UAV imagery, a rising source of remotely sensed data, has gradually become the first choice for disaster prevention and mitigation departments collecting disaster information at first hand. In this paper, a novel and fast strategy is proposed for registering and mosaicking UAV data. Firstly, the initial doubling of the original images at the start of the SIFT operator is skipped, and the total number of pyramid octaves in scale space is reduced to speed up the matching process; subsequently, RANSAC (Random Sample Consensus) is used to eliminate mismatched tie points. Then, bundle adjustment is introduced to solve all of the camera geometric calibration parameters jointly. Finally, a best-seamline searching strategy based on dynamic programming is applied to solve the dodging problem arising from the aircraft's side-looking geometry. Besides, a weighted fusion estimation algorithm is employed to eliminate the "fusion ghost" phenomenon.
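The mismatch-elimination step can be sketched without any imaging library: RANSAC repeatedly fits a transform to a minimal sample of matches and keeps the model with the most inliers. For brevity this sketch uses a 2-D affine model rather than the homography implied by the paper, and the matches are synthetic:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst (N >= 3 points)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params  # 3x2 matrix: dst ~ [x y 1] @ params

def ransac_affine(src, dst, n_iter=500, tol=1.0, seed=0):
    """RANSAC: fit on random 3-point samples, keep the largest inlier set.
    Degenerate (collinear) samples simply yield poor, low-scoring models."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(
            np.hstack([src, np.ones((len(src), 1))]) @ M - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Synthetic tie points: 80 correct matches under a rotation+translation,
# plus 20 gross mismatches.
rng = np.random.default_rng(2)
src = rng.uniform(0, 500, (100, 2))
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R.T + np.array([20.0, -10.0])
dst[80:] += rng.uniform(50, 200, (20, 2))
M, inliers = ransac_affine(src, dst)
```

Only the surviving inliers are passed on to bundle adjustment, which is why the final mosaic is insensitive to the initial mismatches.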
Aerial Images from AN Uav System: 3d Modeling and Tree Species Classification in a Park Area
NASA Astrophysics Data System (ADS)
Gini, R.; Passoni, D.; Pinto, L.; Sona, G.
2012-07-01
The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment): it starts from the need to enhance the natural, artistic and cultural heritage, to make it more usable by employing audiovisual movable systems of 3D reconstruction, and to improve monitoring procedures by using new media that integrate the fruition phase with preservation. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of goods (small buildings, agricultural fields, and different tree species and bushes). Multispectral high resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Tests were then performed to analyze the UAV images' quality for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed through GPS to allow accuracy analysis. Aerial Triangulations (ATs) were carried out with commercial photogrammetric software, Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of tie points, to pick out the pros and cons of each package in managing non-conventional aerial imagery as well as the differences in the modeling approach. Further analyses were done on the differences between the EO parameters and the corresponding data coming from the on-board UAV navigation system.
Towards distributed ATR using subjective logic combination rules with a swarm of UAVs
NASA Astrophysics Data System (ADS)
O'Hara, Stephen; Simon, Michael; Zhu, Qiuming
2007-04-01
In this paper, we present our initial findings demonstrating a cost-effective approach to Aided Target Recognition (ATR) employing a swarm of inexpensive Unmanned Aerial Vehicles (UAVs). We call our approach Distributed ATR (DATR). Our paper describes the utility of DATR for autonomous UAV operations, provides an overview of our methods, and the results of our initial simulation-based implementation and feasibility study. Our technology is aimed towards small and micro UAVs where platform restrictions allow only a modest quality camera and limited on-board computational capabilities. It is understood that an inexpensive sensor coupled with limited processing capability would be challenged in deriving a high probability of detection (Pd) while maintaining a low probability of false alarms (Pfa). Our hypothesis is that an evidential reasoning approach to fusing the observations of multiple UAVs observing approximately the same scene can raise the Pd and lower the Pfa sufficiently in order to provide a cost-effective ATR capability. This capability can lead to practical implementations of autonomous, coordinated, multi-UAV operations. In our system, the live video feed from a UAV is processed by a lightweight real-time ATR algorithm. This algorithm provides a set of possible classifications for each detected object over a possibility space defined by a set of exemplars. The classifications for each frame within a short observation interval (a few seconds) are used to generate a belief statement. Our system considers how many frames in the observation interval support each potential classification. A definable function transforms the observational data into a belief value. The belief value, or opinion, represents the UAV's belief that an object of the particular class exists in the area covered during the observation interval. The opinion is submitted as evidence in an evidential reasoning system. 
Opinions from observations over the same spatial area will have similar index values in the evidence cache. The evidential reasoning system combines observations of similar spatial indexes, discounting older observations based upon a parameterized information aging function. We employ Subjective Logic operations in the discounting and combination of opinions. The result is the consensus opinion from all observations that an object of a given class exists in a given region.
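The abstract names Subjective Logic combination without detail. The standard cumulative (consensus) fusion operator for two binomial opinions, which we assume is representative of the combination step described, looks like this (the example opinions are invented):

```python
def cumulative_fuse(op1, op2):
    """Josang's cumulative (consensus) fusion of two binomial opinions.

    Each opinion is (belief, disbelief, uncertainty, base_rate), with
    belief + disbelief + uncertainty == 1. Assumes both uncertainties > 0.
    """
    b1, d1, u1, a1 = op1
    b2, d2, u2, a2 = op2
    k = u1 + u2 - u1 * u2
    b = (b1 * u2 + b2 * u1) / k
    d = (d1 * u2 + d2 * u1) / k
    u = (u1 * u2) / k
    # Base-rate fusion; falls back to the average in the degenerate case.
    if abs(u1 + u2 - 2 * u1 * u2) > 1e-12:
        a = (a1 * u2 + a2 * u1 - (a1 + a2) * u1 * u2) / (u1 + u2 - 2 * u1 * u2)
    else:
        a = (a1 + a2) / 2.0
    return (b, d, u, a)

# Two UAVs each observe the same object with moderate confidence.
uav1 = (0.6, 0.1, 0.3, 0.5)
uav2 = (0.5, 0.2, 0.3, 0.5)
fused = cumulative_fuse(uav1, uav2)
```

Note how the fused uncertainty is lower than either input's: independent agreeing observations reinforce each other, which is exactly the mechanism the paper relies on to raise Pd.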
Optimal Path Planning and Control of Quadrotor Unmanned Aerial Vehicle for Area Coverage
NASA Astrophysics Data System (ADS)
Fan, Jiankun
An Unmanned Aerial Vehicle (UAV) is an aircraft without a human pilot on board. Its flight is controlled either autonomously by computers onboard the vehicle, remotely by a pilot on the ground, or by another vehicle. In recent years, UAVs have come into increasingly common use. Examples include the aero-camera, where a high-speed camera attached to a UAV serves as an airborne camera to obtain aerial video, and detecting events on the ground for tasks such as surveillance and monitoring, a common task during wars. Similarly, UAVs can relay communication signals in scenarios where the regular communication infrastructure has been destroyed. The objective of this thesis is motivated by civilian operations such as search and rescue or wildfire detection and monitoring. One scenario is search and rescue, where the UAV's objective is to geo-locate a person in a given area. The task is carried out with the help of a camera whose live feed is provided to search and rescue personnel. For this objective, the UAV needs to scan the entire area in the shortest time. The aim of this thesis is to develop algorithms that enable a UAV to scan an area in optimal time, a problem referred to as "Coverage Control" in the literature. The thesis focuses on a special kind of UAV called a "quadrotor", which is propelled by four rotors. The overall objective of this thesis is achieved by solving two problems. The first is to develop a dynamic control model of the quadrotor: a proportional-integral-derivative (PID) feedback control system is developed and implemented in MATLAB's Simulink, which helps track any given trajectory. The second problem is to design a trajectory that will fulfill the mission. 
The planned trajectory should ensure that the quadrotor scans the whole area without missing any part, so that the lost person in the area will be found. The generated trajectory should also be optimal. This is achieved by making some assumptions on the form of the trajectory and solving an optimization problem to obtain the optimal parameters of the trajectory. The proposed techniques are validated with the help of numerous simulations.
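The first problem, PID trajectory tracking, can be sketched with a scalar example: a PID loop commanding thrust to hold a quadrotor's altitude, with a toy double-integrator plant standing in for the Simulink model. The gains and plant are illustrative assumptions, not the thesis's values:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Hold a 2 m altitude setpoint against gravity (unit mass, Euler integration).
dt, z, vz = 0.01, 0.0, 0.0
ctrl = PID(kp=6.0, ki=3.0, kd=4.0, dt=dt)
for _ in range(3000):                # 30 s of simulated flight
    thrust = ctrl.step(2.0 - z)
    vz += (thrust - 9.81) * dt       # double-integrator height dynamics
    z += vz * dt
```

The integral term removes the steady-state offset that gravity would otherwise leave; the derivative term damps the response. A full quadrotor controller runs coupled loops like this for attitude and position.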
Evaluating the accuracy of orthophotos and 3D models from UAV photogrammetry
NASA Astrophysics Data System (ADS)
Julge, Kalev; Ellmann, Artu
2015-04-01
Rapid development of unmanned aerial vehicles (UAVs) in recent years has made their use for various applications more feasible. This contribution evaluates the accuracy and quality of different UAV remote sensing products (i.e. orthorectified images, point clouds and 3D models). Two different autonomous fixed-wing UAV systems were used to collect the aerial photographs: one a mass-produced commercial UAV system, the other a similar state-of-the-art UAV system. Three study areas with varying sizes and characteristics (including urban areas, forests, fields, etc.) were surveyed. The UAV point clouds, 3D models and orthophotos were generated with three different commercial and freeware software packages, and the performance of each was evaluated. The effect of flying height on the accuracy of the results was explored, as well as the optimum number and placement of ground control points. Results achieved when the only georeferencing data originate from the UAV system's on-board GNSS and inertial measurement unit are also investigated. Problems regarding the alignment of certain types of aerial photos (e.g. captured over forested areas) are discussed. The quality and accuracy of UAV photogrammetry products are evaluated by comparing them with GNSS control measurements on the ground, as well as with high-resolution airborne laser scanning data and other available orthophotos (e.g. those acquired for large-scale national mapping). Vertical comparisons are made on surfaces that have remained unchanged in all campaigns, e.g. paved roads. Planar comparisons are performed by control surveys of objects that are clearly identifiable on orthophotos. The statistics of these differences are used to evaluate the accuracy of UAV remote sensing. Some recommendations are given on how to conduct UAV mapping campaigns cost-effectively and with minimal time consumption while still ensuring the quality and accuracy of the UAV data products. 
The benefits and drawbacks of UAV remote sensing compared with more traditional methods (e.g. national mapping from airplanes or direct measurements on the ground with GNSS devices or total stations) are also outlined.
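The vertical comparison described above reduces to simple difference statistics between UAV-derived heights and reference heights at check points. A minimal sketch with synthetic numbers (the 3 cm bias and 5 cm noise are assumptions for illustration, not the study's results):

```python
import numpy as np

def accuracy_stats(uav_z, ref_z):
    """Vertical accuracy statistics of UAV heights against reference heights."""
    diff = uav_z - ref_z
    return {
        "mean": diff.mean(),           # systematic bias
        "std": diff.std(ddof=1),       # precision about the bias
        "rmse": np.sqrt(np.mean(diff ** 2)),
    }

# Synthetic check points: the UAV model sits 3 cm high with 5 cm noise.
rng = np.random.default_rng(3)
ref = rng.uniform(40, 60, 200)
uav = ref + 0.03 + rng.normal(0, 0.05, 200)
stats = accuracy_stats(uav, ref)
```

Separating the mean (bias) from the standard deviation matters in practice: a bias often traces to georeferencing, while the spread reflects the reconstruction itself.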
NASA Astrophysics Data System (ADS)
Laurenzis, Martin; Bacher, Emmanuel; Christnacher, Frank
2017-12-01
Laser imaging systems are prominent candidates for the detection and tracking of small unmanned aerial vehicles (UAVs) in current and future security scenarios. Laser reflection characteristics for laser imaging (e.g., laser gated viewing) of small UAVs are investigated to determine their laser radar cross section (LRCS) by analyzing the intensity distribution of laser reflections in high-resolution images. For the first time, LRCSs are determined in a combined experimental and computational approach using high-resolution laser gated viewing and three-dimensional rendering. An optimized simple surface model is calculated taking into account diffuse and specular reflectance properties based on the Oren-Nayar and Cook-Torrance reflectance models, respectively.
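As a sketch of the diffuse half of that surface model: the Oren-Nayar term reduces to Lambertian reflectance when the roughness is zero. The Cook-Torrance specular term is omitted for brevity, and the angles and roughness values below are illustrative, not fitted to the paper's UAV targets:

```python
import numpy as np

def oren_nayar(theta_i, theta_r, phi_diff, sigma, albedo=1.0):
    """Oren-Nayar diffuse reflectance (first-order qualitative model).

    theta_i, theta_r : incidence / reflection zenith angles (radians)
    phi_diff         : azimuth difference between incidence and reflection
    sigma            : surface roughness (std dev of facet slope, radians)
    """
    s2 = sigma ** 2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = np.maximum(theta_i, theta_r)
    beta = np.minimum(theta_i, theta_r)
    return (albedo / np.pi * np.cos(theta_i)
            * (A + B * np.maximum(0.0, np.cos(phi_diff))
               * np.sin(alpha) * np.tan(beta)))

# Compare a smooth and a rough facet at the same viewing geometry.
l_smooth = oren_nayar(0.6, 0.6, 0.0, sigma=0.0)
l_rough = oren_nayar(0.6, 0.6, 0.0, sigma=0.5)
```

Integrating such per-facet reflectances over a rendered 3D model of the airframe is what yields the simulated LRCS compared against the gated-viewing measurements.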
Increasing the UAV data value by an OBIA methodology
NASA Astrophysics Data System (ADS)
García-Pedrero, Angel; Lillo-Saavedra, Mario; Rodriguez-Esparragon, Dionisio; Rodriguez-Gonzalez, Alejandro; Gonzalo-Martin, Consuelo
2017-10-01
Recently, there has been a notable increase in the use of images acquired by unmanned aerial vehicles (UAVs) in different remote sensing applications. Sensors carried on UAVs have lower operational costs and complexity than other remote sensing platforms, quicker turnaround times, and higher spatial resolution. Concerning this last aspect, particular attention has to be paid to the limitations of classical pixel-based algorithms when they are applied to high-resolution images. The objective of this study is to investigate the capability of an OBIA methodology developed for the automatic generation of a digital terrain model of an agricultural area from a Digital Elevation Model (DEM) and multispectral images registered by a Parrot Sequoia multispectral sensor aboard an eBee SQ agricultural drone. The proposed methodology uses a superpixel approach to obtain context and elevation information, which is used for merging superpixels while eliminating objects such as trees, in order to generate a Digital Terrain Model (DTM) of the analyzed area. The obtained results show the potential of the approach, in terms of accuracy, when compared with a DTM generated by manually eliminating objects.
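The paper's superpixel merging is more involved than fits here, but the underlying idea of stripping above-ground objects from a surface model can be shown with a minimal raster analogue (an assumed simplification, not the authors' method): cells rising well above the local minimum are treated as objects and replaced by ground:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dsm_to_dtm(dsm, window=15, height_threshold=0.5):
    """Crude DTM extraction: cells more than `height_threshold` above the
    local minimum (trees, buildings) are replaced by that minimum."""
    local_min = minimum_filter(dsm, size=window)
    above = dsm - local_min > height_threshold
    dtm = dsm.copy()
    dtm[above] = local_min[above]
    return dtm

# Flat field at 100 m elevation with a 5 m "tree" blob.
dsm = np.full((60, 60), 100.0)
dsm[20:28, 20:28] += 5.0
dtm = dsm_to_dtm(dsm)
```

The window size must exceed the footprint of the largest object to remove; the OBIA approach avoids this fixed-window limitation by merging irregular superpixels instead.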
Zheng, Haijing; Bai, Tingzhu; Wang, Quanxi; Cao, Fengmei; Shao, Long; Sun, Zhaotian
2018-01-01
This study investigates multispectral characteristics of an unmanned aerial vehicle (UAV) at different observation angles by experiment. The UAV and its engine are tested on the ground in the cruise state. Spectral radiation intensities at different observation angles are obtained in the infrared band of 0.9–15 μm by a spectral radiometer. Meanwhile, infrared images are captured separately by long-wavelength infrared (LWIR), mid-wavelength infrared (MWIR), and short-wavelength infrared (SWIR) cameras. Additionally, orientation maps of the radiation area and radiance are obtained. The results suggest that the spectral radiation intensity of the UAV is determined by its exhaust plume and that the main infrared emission bands occur at 2.7 μm and 4.3 μm. At observation angles in the range of 0°–90°, the radiation area of the UAV in MWIR band is greatest; however, at angles greater than 90°, the radiation area in the SWIR band is greatest. In addition, the radiance of the UAV at an angle of 0° is strongest. These conclusions can guide IR stealth technique development for UAVs. PMID:29389880
Towards Autonomous Modular UAV Missions: The Detection, Geo-Location and Landing Paradigm
Kyristsis, Sarantis; Antonopoulos, Angelos; Chanialakis, Theofilos; Stefanakis, Emmanouel; Linardos, Christos; Tripolitsiotis, Achilles; Partsinevelos, Panagiotis
2016-01-01
Nowadays, various unmanned aerial vehicle (UAV) applications become increasingly demanding since they require real-time, autonomous and intelligent functions. Towards this end, in the present study, a fully autonomous UAV scenario is implemented, including the tasks of area scanning, target recognition, geo-location, monitoring, following and finally landing on a high speed moving platform. The underlying methodology includes AprilTag target identification through Graphics Processing Unit (GPU) parallelized processing, image processing and several optimized locations and approach algorithms employing gimbal movement, Global Navigation Satellite System (GNSS) readings and UAV navigation. For the experimentation, a commercial and a custom made quad-copter prototype were used, portraying a high and a low-computational embedded platform alternative. Among the successful targeting and follow procedures, it is shown that the landing approach can be successfully performed even under high platform speeds. PMID:27827883
Observing changes at Santiaguito Volcano, Guatemala with an Unmanned Aerial Vehicle (UAV)
NASA Astrophysics Data System (ADS)
von Aulock, Felix W.; Lavallée, Yan; Hornby, Adrian J.; Lamb, Oliver D.; Andrews, Benjamin J.; Kendrick, Jackie E.
2016-04-01
Santiaguito Volcano (Guatemala) is one of the most active volcanoes in Central America, producing several ash venting explosions per day for almost 100 years. Lahars, lava flows and dome and flank collapses that produce major pyroclastic density currents also present a major hazard to nearby farms and communities. Optical observations of both the vent and the lava flow fronts can provide scientists and local monitoring staff with important information on the current state of volcanic activity and hazard. Given the strong activity and difficult terrain, unmanned aerial vehicles can help to provide valuable data on the activities of the volcano from a safe distance. We collected a series of images and video footage of A.) the active vent of Caliente and B.) the flow front of the active lava flow and its associated lahar channels, both in May 2015 and in December 2015 - January 2016. Images of the crater and the lava flows were used for the reconstruction of 3D terrain models using structure-from-motion, supported by still frames from the video recordings. Video footage of the summit crater (during two separate ash venting episodes) and the lava flow fronts indicates the following differences in activity between the two field campaigns:
A.) At the active vent:
- A new breach opened on the east side of the crater rim, possibly during the collapse in November 2015.
- The active lava dome is now almost completely covered with ash, leaving only the largest blocks and faults exposed in times without gas venting.
- A recorded explosive event in December 2015 initiated at subparallel linear faults near the centre of the dome, rather than at arcuate or ring faults, with a later, separate and more ash-laden burst occurring from an off-centre fracture; however, other explosions during the observation period were seen to persist along the ring fault system observed on the lava dome since at least 2007, suggesting a diversification of explosive activity.
B.) At the lava flow fronts:
- The lava flow fronts did not advance more than a few metres between May and December 2015.
- The width and thickness of the lava flows can be estimated by relative comparison of the 3D models.
- Damming of river valleys by the lava flows has established new stream channels that have modified established pathways for the recurring lahars, one of the major hazards of Santiaguito Volcano.
The preliminary results of this study from two field trips to Santiaguito Volcano are exemplary of the plethora of applications of UAVs in the field of volcano monitoring, and we urge funding agencies and legislative bodies to consider the value of these scientific instruments in future decisions and allocation of funding.
Design of Smart Multi-Functional Integrated Aviation Photoelectric Payload
NASA Astrophysics Data System (ADS)
Zhang, X.
2018-04-01
To support small UAVs on reconnaissance missions, we have developed a smart multi-functional integrated aviation photoelectric payload. The payload weighs only 1 kg and comprises a two-axis stabilized platform with a visible-light task payload, an infrared task payload, laser pointers and a video tracker. The payload can complete reconnaissance tasks over the target area in both the visible and infrared bands. Its light weight, small size, full feature set and high level of integration greatly reduce the constraints on the UAV platform carrying it, making the payload suitable for a wide range of applications. Users of this smart multi-functional integrated aviation photoelectric payload will thus be better able to pinpoint ground targets, calibrate artillery, assess strike damage, and support customs inspection and other tasks.
Sensor data fusion for automated threat recognition in manned-unmanned infantry platoons
NASA Astrophysics Data System (ADS)
Wildt, J.; Varela, M.; Ulmke, M.; Brüggermann, B.
2017-05-01
To support a dismounted infantry platoon during deployment, we team it with several unmanned aerial and ground vehicles (UAVs and UGVs, respectively). The unmanned systems integrate seamlessly into the infantry platoon, providing automated reconnaissance during movement while keeping formation, as well as conducting close-range reconnaissance during halts. The sensor data each unmanned system provides is continuously analyzed in real time by specialized algorithms, which detect humans in live video from UAV-mounted infrared cameras and detect and localize gunshots with acoustic sensors. All recognized threats are fused into a consistent situational picture in real time, available to platoon and squad leaders as well as higher-level command and control (C2) systems. This gives friendly forces local information superiority and increased situational awareness without the need to constantly monitor the unmanned systems and sensor data.
System Considerations and Challenges in 3d Mapping and Modeling Using Low-Cost Uav Systems
NASA Astrophysics Data System (ADS)
Lari, Z.; El-Sheimy, N.
2015-08-01
In the last few years, low-cost UAV systems have been acknowledged as an affordable technology for geospatial data acquisition that can meet the needs of a variety of traditional and non-traditional mapping applications. In spite of its proven potential, UAV-based mapping is still lacking in terms of what is needed for it to become an acceptable mapping tool. In other words, a well-designed system architecture that considers payload restrictions as well as the specifications of the utilized direct geo-referencing component and the imaging systems, in light of the required mapping accuracy and intended application, is still required. Moreover, efficient data processing workflows, which are capable of delivering the mapping products with the specified quality while considering the synergistic characteristics of the sensors onboard, the wide range of potential users who might lack deep knowledge in mapping activities, and the time constraints of emerging applications, still need to be adopted. Therefore, the challenges introduced by having low-cost imaging and georeferencing sensors onboard UAVs with limited payload capability, the necessity of efficient data processing techniques for delivering the required products for the intended applications, and the diversity of potential users with insufficient mapping-related expertise need to be fully investigated and addressed by UAV-based mapping research efforts. This paper addresses these challenges and reviews system considerations, adaptive processing techniques, and quality assurance/quality control procedures for the achievement of accurate mapping products from these systems.
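The trade-off between payload restrictions, flying height and achievable mapping accuracy can be sketched with the standard ground-sample-distance relation; the camera parameters below are illustrative assumptions, not values from the paper.

```python
# Ground sample distance (GSD) and single-frame footprint for a UAV camera.
# All numeric values below are illustrative assumptions.

def ground_sample_distance(pixel_size_m, focal_length_m, altitude_m):
    """GSD = pixel size * flying height / focal length (nadir view, flat terrain)."""
    return pixel_size_m * altitude_m / focal_length_m

def footprint(sensor_width_m, focal_length_m, altitude_m):
    """Across-track ground coverage of one frame."""
    return sensor_width_m * altitude_m / focal_length_m

# Example: 4.5 um pixels on a 23.6 mm wide sensor behind a 16 mm lens at 100 m AGL
print(ground_sample_distance(4.5e-6, 16e-3, 100.0))  # metres per pixel
print(footprint(23.6e-3, 16e-3, 100.0))              # metres across track
```

Doubling the altitude doubles both the GSD and the footprint, which is exactly the coverage-versus-accuracy tension the paper describes for low-cost platforms.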
Volcano surveillance by ACR silver fox
Patterson, M.C.L.; Mulligair, A.; Douglas, J.; Robinson, J.; Pallister, J.S.
2005-01-01
Recent growth in the business of unmanned air vehicles (UAVs) both in the US and abroad has improved their overall capability, resulting in a reduction in cost, greater reliability and adoption into areas where they had previously not been considered. Uses in coastal and border patrol, forestry and agriculture have recently been evaluated in an effort to expand the observed area and reduce surveillance and reconnaissance costs for information gathering. The scientific community has both contributed and benefited greatly in this development. A larger suite of light-weight miniaturized sensors now exists for a range of applications which in turn has led to an increase in the gathering of information from these autonomous vehicles. In October 2004 the first eruption of Mount St Helens since 1986 caused tremendous interest among people worldwide. Volcanologists at the U.S. Geological Survey rapidly ramped up the level of monitoring using a variety of ground-based sensors deployed in the crater and on the flanks of the volcano using manned helicopters. In order to develop additional unmanned sensing methods that can be used in potentially hazardous and low visibility conditions, a UAV experiment was conducted during the ongoing eruption early in November. The Silver Fox UAV was flown over and inside the crater to perform routine observation and data gathering, thereby demonstrating a technology that could reduce physical risk to scientists and other field operatives. It was demonstrated that UAVs can be flown autonomously at an active volcano and can deliver real time data to a remote location. Although still relatively limited in extent, these initial flights provided information on volcanic activity and thermal conditions within the crater and at the new (2004) lava dome. The flights demonstrated that readily available visual and infrared video sensors mounted in a small and relatively low-cost aerial platform can provide useful data on volcanic phenomena. 
This was made possible by utilizing GPS and computer-controlled flight direction and stabilization to acquire and track target areas within the Mount St. Helens crater. It was also determined that additional light-weight sensor development will be needed to enable autonomous measurements of volcanic gasses and imaging in poor-weather conditions. Copyright © 2005 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-01-01
Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop, which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775
Direct Georeferencing of Uav Data Based on Simple Building Structures
NASA Astrophysics Data System (ADS)
Tampubolon, W.; Reinhardt, W.
2016-06-01
Unmanned Aerial Vehicle (UAV) data acquisition is more flexible compared with the more complex traditional airborne data acquisition. This advantage puts UAV platforms in a position as an alternative acquisition method in many applications, including Large Scale Topographical Mapping (LSTM). LSTM, i.e. map scales of 1:10.000 or larger, is one of a number of prominent priority tasks to be solved in an accelerated way, especially in developing countries such as Indonesia. As one component of fundamental geospatial data sets, large scale topographical maps are mandatory in order to enable detailed spatial planning. However, the accuracy of the products derived from the UAV data is normally not sufficient for LSTM, as it needs robust georeferencing, which requires additional costly efforts such as the incorporation of a sophisticated GPS Inertial Navigation System (INS) or Inertial Measurement Unit (IMU) on the platform and/or Ground Control Point (GCP) data on the ground. To reduce the costs and the weight on the UAV, alternative solutions have to be found. This paper outlines a direct georeferencing method for UAV data that derives image orientation parameters from simple building structures and presents results of an investigation into the achievable accuracy in an LSTM application. In this case, the image orientation determination has been performed through sequential images without any input from INS/IMU equipment. The simple building structures play a significant role in such a way that geometrical characteristics have been considered. Some instances are the orthogonality of the building's walls/rooftop and the local knowledge of the building orientation in the field. In addition, we include the Structure from Motion (SfM) approach in order to reduce the number of required GCPs, especially for the absolute orientation purpose. 
The SfM technique applied to the UAV data and simple building structures additionally presents an effective tool for the LSTM application at low cost. Our results show that image orientation calculations from building structures essentially improve the accuracy of the direct georeferencing procedure, adjusted also by the GCPs. To obtain three-dimensional (3D) point clouds in a local coordinate system, an extraction procedure has been performed using Agisoft PhotoScan. Subsequently, a Digital Surface Model (DSM) generated from the acquired data is the main output for LSTM, which has to be assessed using standard field and conventional mapping workflows. For an appraisal, our DSM is compared directly with a similar DSM obtained by conventional airborne data acquisition using a Leica RCD-30 metric camera as well as a Trimble Phase One (P65+) camera. The comparison reveals that our approach can achieve meter-level accuracy in both the planimetric and vertical dimensions.
A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.
Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi
2016-08-30
This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with a fixed-wing aircraft demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic and accurate UAV landing in Global Positioning System (GPS)-denied environments.
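The 3D tracking step of such a camera-array system ultimately rests on multi-view triangulation. A minimal two-camera linear (DLT) triangulation sketch, using synthetic projection matrices rather than the authors' actual calibration, looks like this:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two calibrated cameras.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras 1 m apart along x, identity intrinsics for simplicity
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))
```

A real system would triangulate over all cameras in the array and weight by detection confidence; the two-view case above is the smallest instance of the same least-squares problem.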
Medium Altitude Endurance Unmanned Air Vehicle
NASA Astrophysics Data System (ADS)
Ernst, Larry L.
1994-10-01
The medium altitude endurance unmanned air vehicle (MAE UAV) program (formerly the tactical endurance (TE) UAV program) is a new effort initiated by the Department of Defense to develop a ground-launched UAV that can fly out 500 miles, remain on station for 24 hours, and return. It will transmit high resolution optical, infrared, and synthetic aperture radar (SAR) images of well-defended target areas through satellite links. It will provide near-real-time, releasable, low cost/low risk surveillance, targeting and damage assessment complementary to that of satellites and manned aircraft. The paper describes specific objectives of the MAE UAV program (deliverables and schedule) and the program's unique position as one of several programs to streamline the acquisition process under the cognizance of the newly established Airborne Reconnaissance Office. I discuss the system requirements and operational concept and describe the technical capabilities and characteristics of the major subsystems (airframe, propulsion, navigation, sensors, communication links, ground station, etc.) in some detail.
Active landslide monitoring using remote sensing data, GPS measurements and cameras on board UAV
NASA Astrophysics Data System (ADS)
Nikolakopoulos, Konstantinos G.; Kavoura, Katerina; Depountis, Nikolaos; Argyropoulos, Nikolaos; Koukouvelas, Ioannis; Sabatakakis, Nikolaos
2015-10-01
An active landslide can be monitored using many different methods: classical geotechnical measurements like inclinometers, topographical survey measurements with total stations or GPS, and photogrammetric techniques using airphotos or high resolution satellite images. As aerial photo campaigns and the acquisition of very high resolution satellite data are quite expensive, the use of cameras on board a UAV can be an ideal solution. Small UAVs (Unmanned Aerial Vehicles) started their development as expensive toys but have currently become a very valuable tool in remote sensing monitoring of small areas. The purpose of this work is to demonstrate a cheap but effective solution for active landslide monitoring. We present the first experimental results of the synergistic use of UAV, GPS measurements and remote sensing data. A six-rotor aircraft with a total weight of 6 kg carrying two small cameras has been used. Very accurate digital airphotos, a high accuracy DSM, DGPS measurements and the data captured from the UAV are combined, and the results are presented in the current study.
Cloud-Assisted UAV Data Collection for Multiple Emerging Events in Distributed WSNs
Cao, Huiru; Liu, Yongxin; Yue, Xuejun; Zhu, Wenjian
2017-01-01
In recent years, UAVs (Unmanned Aerial Vehicles) have been widely applied for data collection and image capture. Specifically, UAVs have been integrated with wireless sensor networks (WSNs) to create data collection platforms with high flexibility. However, most studies in this domain focus on system architecture and UAVs' flight trajectory planning, while event-related factors and other important issues are neglected. To address these challenges, we propose a cloud-assisted data gathering strategy for UAV-based WSNs in light of emerging events. We also provide a cloud-assisted approach for deriving the UAV's optimal flying and data acquisition sequence over a WSN cluster. We validate our approach through simulations and experiments. It has been proved that our methodology outperforms conventional approaches in terms of flying time, energy consumption, and integrity of data acquisition. We also conducted a real-world experiment using a UAV to collect data wirelessly from multiple clusters of sensor nodes deployed on a farm to monitor an emerging event. Compared against the traditional method, this proposed approach requires less than half the flying time and achieves almost perfect data integrity. PMID:28783100
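The sequencing problem behind the UAV's flying order can be illustrated with the simplest possible baseline, a greedy nearest-neighbour ordering of cluster head positions. This toy sketch is not the paper's cloud-assisted optimisation; it only shows the problem the optimisation improves upon.

```python
import math

def visit_order(start, clusters):
    """Greedy nearest-neighbour ordering of cluster positions (x, y) in metres.
    A naive baseline: always fly to the closest unvisited cluster next."""
    order, pos, remaining = [], start, list(clusters)
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(pos, c))
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return order

# Hypothetical cluster head positions on a farm, launch point at the origin
clusters = [(100, 0), (10, 10), (50, 40), (5, 90)]
print(visit_order((0, 0), clusters))
```

Greedy ordering is fast but can produce globally poor tours; an event-aware, cloud-assisted planner such as the one proposed would additionally weight clusters by event urgency and data volume.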
NASA Astrophysics Data System (ADS)
Fernandez Galarreta, J.; Kerle, N.; Gerke, M.
2015-06-01
Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.
NASA Astrophysics Data System (ADS)
Hlotov, Volodymyr; Hunina, Alla; Siejka, Zbigniew
2017-06-01
The main purpose of this work is to confirm the feasibility of producing large-scale orthophotomaps with the Trimble UX5 unmanned aerial vehicle (UAV). Before the aerial survey, a planned altitude reference of the study area was established. The study area was marked with distinctive ground reference points in the form of triangles (0.5 × 0.5 × 0.2 m); the checkpoints used to verify the accuracy of the orthophotomap were marked with similar triangles. The coordinates of the marked reference points and checkpoints were determined by GNSS measurements in real-time kinematic (RTK) mode. The aerial survey was planned with the installed Trimble Access Aerial Imaging software, which was also used to operate the UX5. Imagery was acquired from the Trimble UX5 UAV with a SONY NEX-5R digital camera from altitudes of 200 m and 300 m. The survey data were processed with the photogrammetric software Pix4D, which was used to produce the orthophotomap of the surveyed objects. To assess the accuracy of the results, the checkpoint coordinates were read from the orthophotomap and the root mean square errors were calculated against the GNSS measurements. A priori accuracy estimates of the spatial coordinates derived from the survey data are mx = 0.11 m, my = 0.15 m, mz = 0.23 m in the village of Remeniv and mx = 0.26 m, my = 0.38 m, mz = 0.43 m in the town of Vynnyky. The accuracy of the checkpoint coordinates determined from the UAV images was investigated together with the root mean square errors of the reference points. A comparative analysis of the results shows that the root mean square errors do not exceed the a priori accuracy estimates.
The feasibility of using the Trimble UX5 UAV for producing large-scale orthophotomaps has thus been confirmed. UAV survey data can be used for monitoring objects potentially dangerous to people, controlling state borders and checking settlement plots, so it is important to control the accuracy of the results. Based on the analysis and experimental research, UAV surveying acquires data more efficiently than land surveying methods. As a result, the Trimble UX5 UAV can survey built-up territories with the accuracy required for orthophotomaps at scales of 1:2000, 1:1000 and 1:500.
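The per-axis accuracy figures (mx, my, mz) quoted above are root mean square errors of checkpoint coordinates against the GNSS-RTK reference. A minimal sketch of that computation, with hypothetical checkpoint values rather than the paper's survey data:

```python
import numpy as np

def rms_errors(map_xyz, gnss_xyz):
    """Per-axis RMSE (mx, my, mz) between checkpoint coordinates read from
    the orthophotomap/DSM and the GNSS-RTK reference coordinates."""
    d = np.asarray(map_xyz) - np.asarray(gnss_xyz)
    return np.sqrt((d ** 2).mean(axis=0))

# Hypothetical checkpoints in metres (illustrative values only)
ortho = [[10.08, 20.10, 5.15], [30.12, 40.05, 6.30], [50.03, 60.18, 7.05]]
gnss  = [[10.00, 20.00, 5.00], [30.00, 40.00, 6.00], [50.00, 60.00, 7.00]]
mx, my, mz = rms_errors(ortho, gnss)
print(mx, my, mz)
```

As in the study, the vertical component (mz) is typically the weakest, which is why the a priori z estimates above are the largest of the three.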
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) images, the sensor cannot capture high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts edges from the crude estimated transmission map and expands them. According to the expanded edges, the algorithm then sets a threshold to divide the crude estimated transmission map into different areas and applies guided filtering separately to each area to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filter, while its average computation time is around 40% of the latter's, and the detection ability on UAV images in fog and haze is improved effectively.
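The crude transmission map mentioned above comes from the dark channel prior. A minimal NumPy sketch of that estimation step (without the edge-guided refinement the paper adds, and with an assumed atmospheric light A):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior: per-pixel min over RGB, then a min filter over a patch."""
    m = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(m, pad, mode='edge')
    h, w = m.shape
    out = np.empty_like(m)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, A, omega=0.95, patch=15):
    """Crude transmission estimate t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / A, patch)

# Synthetic hazy patch: one dark pixel keeps the local dark channel near zero
img = np.full((20, 20, 3), 0.8)
img[10, 10] = 0.05
A = np.array([0.9, 0.9, 0.9])   # assumed atmospheric light
t = transmission(img, A)
print(t.min(), t.max())
```

The paper's contribution sits downstream of this step: instead of one global guided filter, the transmission map is split along expanded edges and filtered per area, which is where the reported ~60% time saving comes from.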
Uav-Based Automatic Tree Growth Measurement for Biomass Estimation
NASA Astrophysics Data System (ADS)
Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.
2016-06-01
Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically non-effective. Photogrammetric techniques can be deployed in order to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data was collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week. In-situ measurements were correlated with the UAV data acquisition. The correlation aimed at the investigation of optimal conditions for a flight and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm is developed in order to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually. The comparison allows for the evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
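The height-calculation step, ground level from the lowest points of a segment and tree top from the highest, can be sketched on a synthetic point cloud. The percentile choice and the test data below are illustrative assumptions, not the authors' algorithm parameters.

```python
import numpy as np

def tree_height(points, ground_percentile=2):
    """Estimate tree height from an N x 3 point-cloud segment:
    ground level from a low z-percentile (robust to noise), top from max z."""
    z = points[:, 2]
    return z.max() - np.percentile(z, ground_percentile)

# Synthetic pine segment: ground points near z = 0, canopy up to z = 2.5 m
rng = np.random.default_rng(0)
ground_pts = np.column_stack([rng.uniform(0, 1, (200, 2)),
                              rng.normal(0.0, 0.01, 200)])
canopy_pts = np.column_stack([rng.uniform(0, 1, (800, 2)),
                              rng.uniform(0.5, 2.5, 800)])
cloud = np.vstack([ground_pts, canopy_pts])
print(tree_height(cloud))
```

Using a low percentile instead of the absolute minimum makes the ground estimate robust to stray low outliers in the dense-matching output.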
NASA Astrophysics Data System (ADS)
Haubeck, K.; Prinz, T.
2013-08-01
The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small-frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - under the condition that an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions - at the same time pose the problem that two single aerial images do not always meet the required overlap to use them for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, the accuracy of the DTM however directly depends on the UAV flight altitude.
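The stated dependence of DTM accuracy on the base-height ratio and flight altitude follows from the standard stereo depth-error relation. The rig parameters below are assumptions chosen for illustration, not the camera actually flown.

```python
def height_error(altitude_m, base_m, focal_px, disparity_err_px=0.5):
    """Expected depth (height) error of a stereo pair:
    sigma_Z = Z^2 / (B * f) * sigma_d, with focal length f in pixels
    and sigma_d the matching (disparity) error in pixels."""
    return altitude_m ** 2 / (base_m * focal_px) * disparity_err_px

# Small fixed-base stereo rig (assumed 0.2 m base, f = 2000 px)
for alt in (20.0, 50.0):
    print(alt, height_error(alt, 0.2, 2000.0))
```

Because the base B is fixed by the rig geometry, the error grows with the square of the flying height, which is exactly why the paper ties DTM accuracy directly to UAV flight altitude.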
Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery.
Zhao, Yi; Ma, Jiale; Li, Xiaohui; Zhang, Jie
2018-02-27
An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) can provide direct georeferenced imagery, mapping an area with high resolution. So far, the major difficulty in wildfire image classification is the lack of unified identification marks: the fire features of color, shape and texture (smoke, flame, or both) and the background can vary significantly from one scene to another. Deep learning (e.g., a DCNN, or Deep Convolutional Neural Network) is very effective in high-level feature learning; however, a substantial training image dataset is required to optimize its weights and coefficients. In this work, we propose a new saliency detection algorithm for fast location and segmentation of the core fire area in aerial images. As the proposed method can effectively avoid the feature loss caused by direct resizing, it is used in data augmentation and in the formation of a standard fire image dataset 'UAV_Fire'. A 15-layered self-learning DCNN architecture named 'Fire_Net' is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with regard to its validation accuracy. The proposed architecture outperformed previous methods by achieving an overall accuracy of 98%. Furthermore, 'Fire_Net' guaranteed an average processing speed of 41.5 ms per image for real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 sampled images from wildfire news reports and all of them were accurately identified.
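The role of saliency detection here, locating and cropping the core fire region instead of resizing the whole frame, can be illustrated with a toy colour-rule mask. This is a deliberately crude stand-in for the paper's saliency algorithm, for illustration only.

```python
import numpy as np

def fire_saliency_mask(img):
    """Very rough colour rule for flame-like pixels (R high and R > G > B).
    A toy substitute for the paper's saliency detector."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r > 0.55) & (r > g) & (g > b)

def crop_to_salient(img, mask, margin=2):
    """Crop to the bounding box of salient pixels (plus a margin), which
    preserves fire detail that naive whole-frame resizing would lose."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, img.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, img.shape[1])
    return img[y0:y1, x0:x1]

img = np.zeros((32, 32, 3))
img[:] = [0.1, 0.3, 0.5]              # bluish background
img[10:14, 20:26] = [0.9, 0.5, 0.1]   # flame-coloured patch
mask = fire_saliency_mask(img)
print(crop_to_salient(img, mask).shape)
```

The cropped region, not the full frame, is what would then be fed to the classifier, which is the "avoid feature loss caused by direct resizing" idea in the abstract.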
Identifying species from the air: UAVs and the very high resolution challenge for plant conservation
Moat, Justin; Whaley, Oliver; Boyd, Doreen S.
2017-01-01
The Pacific Equatorial dry forest of Northern Peru is recognised for its unique endemic biodiversity. Although highly threatened, the forest provides livelihoods and ecosystem services to local communities. As agro-industrial expansion and climatic variation transform the region, close ecosystem monitoring is essential for viable adaptation strategies. UAVs offer an affordable alternative to satellites in obtaining both colour and near infrared imagery to meet the specific requirements of spatial and temporal resolution of a monitoring system. Combining this with their capacity to produce three-dimensional models of the environment provides an invaluable tool for species-level monitoring. Here we demonstrate that object-based image analysis of very high resolution UAV images can identify and quantify keystone tree species and their health across wide heterogeneous landscapes. The analysis exposes the state of the vegetation and serves as a baseline for monitoring and adaptive implementation of community-based conservation and restoration in the area. PMID:29176860
Comparison of a Fixed-Wing and Multi-Rotor Uav for Environmental Mapping Applications: a Case Study
NASA Astrophysics Data System (ADS)
Boon, M. A.; Drijfhout, A. P.; Tesfamichael, S.
2017-08-01
The advent and evolution of Unmanned Aerial Vehicles (UAVs) and photogrammetric techniques have provided the possibility of on-demand high-resolution environmental mapping. Orthoimages and three-dimensional products such as Digital Surface Models (DSMs) are derived from the UAV imagery and are amongst the most important spatial information tools for environmental planning. The two main types of UAVs on the commercial market are fixed-wing and multi-rotor. Both have their advantages and disadvantages, including their suitability for certain applications. Fixed-wing UAVs normally have longer flight endurance, while multi-rotors provide stable image capture and easy vertical take-off and landing. The objective of this study is therefore to assess the performance of a fixed-wing versus a multi-rotor UAV for environmental mapping applications through a specific case study. The aerial mapping of the Cors-Air model aircraft field, which includes a wetland ecosystem, was undertaken on the same day with a Skywalker fixed-wing UAV and a Raven X8 multi-rotor UAV equipped with similar sensors (a digital RGB camera) under the same weather conditions. We compared the derived datasets by applying the DTMs to basic environmental mapping purposes such as slope and contour mapping, and by utilising the orthoimages for the identification of anthropogenic disturbances. The ground spatial resolution obtained was slightly higher for the multi-rotor, probably due to a slower flight speed and a larger number of images. In terms of overall precision, the fixed-wing data were noticeably less accurate. In contrast, the orthoimages derived from the two systems showed only small variations. The multi-rotor imagery provided a better representation of vegetation, although the fixed-wing data were sufficient for the identification of environmental factors such as anthropogenic disturbances.
Differences were observed when utilising the respective DTMs for mapping the wetland slope and contours, including the representation of hydrological features within the wetland. Factors such as cost, maintenance and flight time are in favour of the Skywalker fixed-wing. The multi-rotor, on the other hand, is more favourable in terms of data accuracy, including for precision environmental planning purposes, although the quality of the fixed-wing data is satisfactory for most environmental mapping applications.
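Slope mapping from a gridded DTM, as used in the comparison above, reduces to finite differences on the elevation grid; a minimal sketch (the cell size and the synthetic tilted plane are illustrative, not the Cors-Air data):

```python
import numpy as np

def slope_deg(dem: np.ndarray, cell: float) -> np.ndarray:
    """Slope in degrees from a gridded DTM/DSM via central differences:
    arctan of the magnitude of the elevation gradient."""
    dzdy, dzdx = np.gradient(dem, cell)      # per-axis elevation gradients
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# synthetic tilted plane rising 1 m per metre in x, 1 m cells: 45 degrees everywhere
x = np.arange(50, dtype=float)
dem = np.tile(x, (50, 1))
print(round(float(slope_deg(dem, 1.0).mean()), 1))
```

Contours then follow by thresholding or tracing the same grid at fixed elevation intervals.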
UAV-based remote sensing of the Heumoes landslide, Austria Vorarlberg
NASA Astrophysics Data System (ADS)
Niethammer, U.; Joswig, M.
2009-04-01
The Heumoes landslide is located in the eastern Vorarlberg Alps, Austria, 10 km southeast of Dornbirn. The landslide extends about 2000 m from west to east and about 500 m at its widest extent from north to south. It occurs between elevations of 940 m in the east and 1360 m in the west; slope angles of more than 60% can be observed as well as almost flat areas. Its total volume is estimated at 9,400,000 cubic metres and its average velocities amount to a few centimetres per year. Surface signatures or 'photolineations' of creeping landslides, e.g. fractures and rupture lines in sediments and street pavings, and vegetation contrasts caused by changes of the water table in shallow vegetation, can in principle be resolved by remote sensing. The necessary ground cell resolution of a few centimetres, however, generally cannot be achieved by routine aerial or satellite imagery. The fast technological progress of unmanned aerial vehicles (UAVs) and the reduced payload of miniaturized optical cameras now allow UAV remote sensing applications well below the high financial limits of military intelligence. Even with 'low-cost' equipment, the necessary centimetre-scale ground cell resolution can be achieved by adapting the flight altitude to some ten to one hundred metres. Operated by scientists experienced with remote-control flight models, UAV remote sensing can now be performed routinely, and campaign-wise after any significant event of, e.g., heavy rainfall or partial mudflow. We have investigated a concept of UAV-borne remote sensing based on motorized gliders and four-propeller helicopters or 'quad-rotors'. Several missions were flown over the Heumoes landslide. Between 2006 and 2008, three series of UAV-borne photographs of the Heumoes landslide were taken and could be combined into ortho-mosaics of the slope area with a ground cell resolution of a few centimetres.
We will present the concept of our low-cost quad-rotor UAV system and first results of the image-processing-based evaluation of the acquired images to characterize spatial and temporal details of landslide behaviour. We will also sketch first schemes of joint interpretation or 'data fusion' of UAV-based remote sensing with the results from geophysical mapping of the underground distribution of soil moisture and fracture processes (Walter & Joswig, EGU 2009).
Scanning Rocket Impact Area with an UAV: First Results
NASA Astrophysics Data System (ADS)
Santos, C. C. C.; Costa, D. A. L. M.; Junior, V. L. S.; Silva, B. R. F.; Leite, D. L.; Junor, C. E. B. S.; Liberator, B. A.; Nogueira, M. B.; Senna, M. D.; Santiago, G. S.; Dantas, J. B. D.; Alsina, P. J.; Albuquerque, G. L. A.
2015-09-01
This paper presents the first subsystems developed for a UAV used in the safety procedures of sounding rocket campaigns. The aim of this UAV is to scan the rocket impact area in search of unexpected boats. To achieve this mission, the designers developed an image recognition algorithm, two human-machine interfaces and two communication links, one to control the drone and the other to receive telemetry data. This paper describes the major engineering decisions taken to overcome the project constraints. A secondary goal of the project is to encourage young people to take part in the Brazilian space program; for this reason, most of the designers are undergraduate students under the supervision of experts.
Using small unmanned aerial vehicle for instream habitat evaluation and modelling
NASA Astrophysics Data System (ADS)
Astegiano, Luca; Vezza, Paolo; Comoglio, Claudio; Lingua, Andrea; Spairani, Michele
2015-04-01
Recent advances in digital image collection and processing have led to the increased use of unmanned aerial vehicles (UAVs) for river research and management. In this paper, we assess the capabilities of a small UAV to characterize the physical habitat for fish in three river stretches of North-Western Italy. The main aim of the study was to identify the advantages and challenges of this technology for environmental river management, in the context of increasing river exploitation for hydropower production. The UAV used to acquire overlapping images was a small quadcopter with two different high-resolution (non-metric) cameras (Nikon J1™ and GoPro Hero 3 Black Edition™). The quadcopter was preprogrammed to fly set waypoints using a small tablet PC. With the acquired imagery, we constructed a 5-cm resolution orthomosaic image and a digital surface model (DSM). The two products were used to map the distribution of aquatic and riparian habitat features, i.e., wetted area, morphological unit distribution, bathymetry, water surface gradient, substrates and grain sizes, and shelters and cover for fish. The study assessed the quality of the collected data and used this information to identify key reach-scale metrics and important aspects of fluvial morphology and aquatic habitat. The potential and limitations of using UAVs for physical habitat surveys were evaluated, and the collected data were used to initialize and run common habitat simulation tools (MesoHABSIM). Several advantages of UAV-based imagery were found, including low cost, high resolution and efficiency in data collection. However, some challenges were identified for bathymetry extraction (vegetation obstructions, white water, turbidity) and grain size assessment (preprocessing of data and automatic object detection). The application domain and possible limitations for instream habitat mapping were defined and will be used as a reference for future studies.
Ongoing activities include the possibility of using topographic data and discharge measurements to extract average values of flow velocity in cross sections.
NASA Astrophysics Data System (ADS)
Fernández, T.; Pérez, J. L.; Cardenal, F. J.; López, A.; Gómez, J. M.; Colomo, C.; Delgado, J.; Sánchez, M.
2015-08-01
This paper presents a methodology for slope instability monitoring using photogrammetric techniques with very high resolution images from an unmanned aerial vehicle (UAV). An unstable area located in La Guardia (Jaen, Southern Spain), where an active mud flow has been identified, was surveyed between 2012 and 2014 by means of four UAV flights. These surveys were also compared with data from a previous conventional aerial photogrammetric and LiDAR survey. The UAV was an octocopter equipped with GPS, inertial units and a mirrorless interchangeable-lens camera. The flight height was 90 m, which allowed covering an area of about 250 x 100 m with a ground pixel size of 2.5 cm. The orientation of the UAV flights was carried out by means of ground control points measured with GPS, whereas the previous aerial photogrammetric/LiDAR flight was oriented by direct georeferencing with in-flight positioning and inertial data, although some common ground control points were used to adjust all flights to the same reference system. The DSMs of all surveys were obtained by automatic image correlation and the differential models were then calculated, allowing changes in the surface to be estimated. At the same time, orthophotos were obtained, so horizontal and vertical displacements between relevant points were registered. Significant displacements were observed between some campaigns (centimetres in the vertical and metres in the horizontal). Finally, we analysed the relation of the displacements to rainfall in the area in recent years, finding a significant temporal correlation between the two variables.
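The differential models mentioned above amount to subtracting co-registered DSMs cell by cell; a minimal sketch of the resulting surface change and net volume estimate (the grids, cell size and values are illustrative, not the La Guardia data):

```python
import numpy as np

def volume_change(dsm_t0: np.ndarray, dsm_t1: np.ndarray,
                  cell: float = 0.025) -> float:
    """Net volume change (m³) between two co-registered DSMs of the
    same extent; cell is the ground pixel size in metres (2.5 cm here)."""
    dz = dsm_t1 - dsm_t0               # the differential model
    return float(dz.sum()) * cell ** 2

# illustrative grids: 10 cm of surface lowering over half the patch
dsm_t0 = np.zeros((100, 100))
dsm_t1 = np.zeros((100, 100))
dsm_t1[:50] -= 0.10
print(round(volume_change(dsm_t0, dsm_t1), 4))  # negative = net erosion, m³
```

Per-cell dz maps localized lowering and bulging; summing it gives the volume balance between campaigns.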
Coastal areas mapping using UAV photogrammetry
NASA Astrophysics Data System (ADS)
Nikolakopoulos, Konstantinos G.; Kozarski, Dimitrios; Kogkas, Stefanos
2017-10-01
The coastal areas in the Patras Gulf suffer degradation due to sea action and other natural and human-induced causes. Changes in beaches, ports, and other man-made constructions need to be assessed, both after severe events and on a regular basis, to build models that can predict future evolution. Thus, reliable spatial data acquisition is a critical process in the identification of the coastline and the broader coastal zones for geologists and other scientists involved in the study of coastal morphology. In the past, high resolution satellite data, airphotos and airborne LiDAR provided the necessary data for coastline monitoring. High-resolution digital surface models (DSMs) and orthophoto maps have become a necessity in order to map with accuracy all the variations in coastal environments. Recently, unmanned aerial vehicle (UAV) photogrammetry has offered an alternative for the acquisition of high accuracy spatial data along the coastline. This paper presents the use of a UAV to map the coastline in the Rio area of Western Greece. Multiple photogrammetric aerial campaigns were performed. A small commercial UAV (DJI Phantom 3 Advanced) was used to acquire thousands of images with spatial resolutions better than 5 cm. Different photogrammetric software packages were used to orientate the images, extract point clouds, build a digital surface model and produce orthoimage mosaics. In order to achieve the best positional accuracy, signalised ground control points were measured with a differential GNSS receiver. The results of this coastal monitoring programme proved that UAVs can replace many conventional surveys, with considerable gains in the cost of data acquisition and without any loss of accuracy.
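The achievable spatial resolution follows from the usual pinhole relation, GSD = pixel pitch / focal length x flying height; a sketch with assumed small-sensor parameters (illustrative values, not the Phantom 3's exact specifications):

```python
def ground_sample_distance(pixel_pitch_mm: float, focal_mm: float,
                           height_m: float) -> float:
    """Ground sampling distance (m/pixel): sensor pixel pitch scaled by
    the ratio of flying height to focal length."""
    return pixel_pitch_mm / focal_mm * height_m

# assumed camera: 1.6 µm pixel pitch, 3.6 mm focal length, 40 m flying height
gsd = ground_sample_distance(0.0016, 3.6, 40.0)
print(round(gsd * 100, 2), "cm/pixel")
```

Halving the flying height halves the GSD, which is how sub-5-cm resolutions are reached with consumer cameras.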
NASA Astrophysics Data System (ADS)
Xie, Bing; Duan, Zhemin; Chen, Yu
2017-11-01
Scene-matching navigation can assist a UAV in achieving autonomous navigation and other missions. However, aerial multi-frame images from a UAV in a complex flight environment are easily affected by jitter, noise and exposure, which lead to image blur, deformation and other issues, and result in a decline in the detection rate of the region-of-interest target. To address this problem, we propose a graded sub-pixel motion estimation algorithm that combines time-domain characteristics with frequency-domain phase correlation. Experimental results prove the validity and accuracy of the proposed algorithm.
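Frequency-domain phase correlation, the building block named above, can be sketched at integer-pixel precision with NumPy FFTs (the graded sub-pixel refinement of the paper is not reproduced here):

```python
import numpy as np

def phase_correlate(a: np.ndarray, b: np.ndarray):
    """Estimate the integer translation of b relative to a from the
    normalized cross-power spectrum (phase correlation)."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the frame wrap around to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, (5, -3), axis=(0, 1))
print(phase_correlate(ref, moved))  # recovers the applied (5, -3) shift
```

Because only the spectral phase is kept, the correlation peak is sharp and robust to global intensity changes, which is why phase correlation suits jittery aerial frames.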
Rieucau, G; Kiszka, J J; Castillo, J C; Mourier, J; Boswell, K M; Heithaus, M R
2018-06-01
A novel image analysis-based technique applied to unmanned aerial vehicle (UAV) survey data is described to detect and locate individual free-ranging sharks within aggregations. The method allows rapid collection of data and quantification of fine-scale swimming and collective patterns of sharks. We demonstrate the usefulness of this technique in a small-scale case study exploring the shoaling tendencies of blacktip reef sharks Carcharhinus melanopterus in a large lagoon within Moorea, French Polynesia. Using our approach, we found that C. melanopterus displayed increased alignment with shoal companions when distributed over a sandflat where they are regularly fed for ecotourism purposes as compared with when they shoaled in a deeper adjacent channel. Our case study highlights the potential of a relatively low-cost method that combines UAV survey data and image analysis to detect differences in shoaling patterns of free-ranging sharks in shallow habitats. This approach offers an alternative to current techniques commonly used in controlled settings that require time-consuming post-processing effort. This article is protected by copyright. All rights reserved.
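One common way to quantify alignment in a shoal is the mean resultant length of the individuals' heading vectors; a minimal sketch (the study's exact alignment metric is not specified here, so this circular-statistics stand-in is an illustrative assumption):

```python
import math

def alignment(headings_deg):
    """Mean resultant length R of heading unit vectors:
    R = 1 for a perfectly aligned shoal, R ~ 0 for random orientations."""
    n = len(headings_deg)
    cx = sum(math.cos(math.radians(h)) for h in headings_deg) / n
    cy = sum(math.sin(math.radians(h)) for h in headings_deg) / n
    return math.hypot(cx, cy)

print(round(alignment([88, 90, 92, 91]), 2))   # tightly aligned headings
print(round(alignment([0, 90, 180, 270]), 2))  # no common direction
```

Headings extracted per shark from consecutive UAV frames could be compared this way between the sandflat and channel habitats.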
Kefauver, Shawn C; Vicente, Rubén; Vergara-Díaz, Omar; Fernandez-Gallego, Jose A; Kerfal, Samir; Lopez, Antonio; Melichar, James P E; Serret Molins, María D; Araus, José L
2017-01-01
With the commercialization and increasing availability of Unmanned Aerial Vehicles (UAVs), multi-rotor copters have expanded rapidly in plant phenotyping studies thanks to their ability to provide clear, high resolution images. As such, the traditional bottleneck of plant phenotyping has shifted from data collection to data processing. Fortunately, the necessarily controlled and repetitive design of plant phenotyping experiments allows for the development of semi-automatic computer processing tools that may sufficiently reduce the time spent in data extraction. Here we present a comparison of UAV- and field-based high throughput plant phenotyping (HTPP) using the free, open-source image analysis software FIJI (Fiji Is Just ImageJ), using RGB (conventional digital cameras), multispectral and thermal aerial imagery in combination with a matching suite of ground sensors, in a study of two hybrids and one conventional barley variety under ten different nitrogen treatments combining different fertilization levels and application schedules. A detailed correlation network for physiological traits and an exploration of the data comparing treatments and varieties provided insights into crop performance under different management scenarios. Multivariate regression models explained 77.8, 71.6, and 82.7% of the variance in yield from aerial, ground, and combined data sets, respectively.
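The reported explained-variance figures correspond to the R² of a multivariate linear regression; a minimal sketch with synthetic plot data (the predictors and response here are invented stand-ins for the aerial traits and yield, not the study's measurements):

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Fraction of variance in y explained by a multivariate
    linear model with intercept, fitted by least squares."""
    A = np.column_stack([np.ones(len(y)), X])      # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
X = rng.random((40, 3))                            # e.g. three indices per plot
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.random(40)  # synthetic "yield"
print(round(r_squared(X, y), 2))
```

Fitting the same model on aerial-only, ground-only and combined predictor sets yields the three explained-variance percentages compared in the abstract.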
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture the surface motion of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
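The Gauss-Newton scheme referenced above linearizes a non-linear least-squares objective at each step and solves the resulting normal equations; a one-parameter toy version (fitting an exponential, not the FlyCap surface-tracking objective) shows the update rule:

```python
import numpy as np

def gauss_newton(x: np.ndarray, y: np.ndarray, a0: float = 0.0,
                 iters: int = 10) -> float:
    """Minimal one-parameter Gauss-Newton: fit y ~ exp(a*x) by
    linearizing the residual and solving the normal equation."""
    a = a0
    for _ in range(iters):
        f = np.exp(a * x)
        r = y - f                  # current residuals
        J = x * f                  # Jacobian d f / d a (one column)
        a += (J @ r) / (J @ J)     # normal-equation update step
    return a

x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * x)                # synthetic data with true a = 0.7
print(round(gauss_newton(x, y), 3))
```

In FlyCap the same pattern applies with a high-dimensional parameter vector (surface deformation plus camera poses) and a sparse Jacobian instead of a scalar.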
Unmanned Aerial Vehicle (UAV) associated DTM quality evaluation and hazard assessment
NASA Astrophysics Data System (ADS)
Huang, Mei-Jen; Chen, Shao-Der; Chao, Yu-Jui; Chiang, Yi-Lin; Chang, Kuo-Jen
2014-05-01
Taiwan, due to its high seismicity and high annual rainfall, experiences numerous landslides every year, with severe impacts on the island. Concerning catastrophic landslides, key information, including the extent of the landslide, volume estimation and the subsequent evolution, is important when analyzing the triggering mechanism and for hazard assessment and mitigation. Thus, morphological analysis gives a general overview of landslides and is considered one of the most fundamental sources of information. We try to integrate several technologies, especially Unmanned Aerial Vehicle (UAV) photogrammetry and a multi-spectral camera, to decipher the consequences, the potential hazard, and the social impact. In recent years, remote sensing technology has improved rapidly, providing a wide range of imagery and essential, precious information. Benefiting from advances in informatics, remote sensing and electronic technologies, Unmanned Aerial Vehicle (UAV) photogrammetry has improved significantly. This study integrates several methods: 1) remote-sensing images gathered by Unmanned Aerial Vehicle (UAV) and aerial photos taken in different periods; 2) in-situ field geologic investigation; 3) differential GPS, RTK GPS and ground LiDAR in-situ geoinformatics measurements; 4) construction of DTMs before and after the landslide, as well as for subsequent periods, using UAV imagery and aerial photos; 5) application of the discrete element method to understand the geomaterial composing the slope failure, for predicting earthquake-induced and rainfall-induced landslide displacement. First of all, we evaluate the Digital Terrain Model (DTM) derived from Microdrones MD4-1000 UAV airphotos. The ground resolution of the DSM point cloud can be as high as 10 cm. With 4 ground control points integrated within an area of 56 hectares, comparison against the LiDAR DSM and field RTK-GPS surveying shows a mean error as low as 6 cm with a standard deviation of 17 cm.
The quality of the UAV DSM can thus be as good as LiDAR data, and it is ready for other applications. The dataset provides not only geoinformatics and GIS data on the hazards, but also essential geomorphologic information for other studies and for hazard mitigation and planning.
NASA Astrophysics Data System (ADS)
Tokarczyk, Piotr; Leitao, Joao Paulo; Rieckermann, Jörg; Schindler, Konrad; Blumensaat, Frank
2015-04-01
Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data on rainfall and the surface characteristics of the area. While recent research in urban drainage has focused on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases because, in many parts of the globe, accurate land-use information is generally lacking, as detailed image data are unavailable. Modern unmanned aerial vehicles (UAVs) allow acquiring high-resolution images at the local level at comparably lower cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and from standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison in which they are used as input for an urban drainage model. We then evaluate the influence that different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual sub-catchments with regard to relevant attributes, such as peak runoff and volume.
Finally, we evaluate the model's channel flow prediction performance through a cross-comparison with reference flow measured at the catchment outlet. We show that imperviousness maps generated using UAV imagery processed with modern classification methods achieve accuracy comparable with standard, off-the-shelf aerial imagery. In the examined case study, we find that the different imperviousness maps only have a limited influence on modelled surface runoff and pipe flows. We conclude that UAV imagery represents a valuable alternative data source for urban drainage model applications due to the possibility to flexibly acquire up-to-date aerial images at a superior quality and a competitive price. Our analyses furthermore suggest that spatially more detailed urban drainage models can even better benefit from the full detail of UAV imagery.
NASA Astrophysics Data System (ADS)
Langhammer, Jakub; Lendzioch, Theodora; Mirijovsky, Jakub
2016-04-01
Granulometric analysis is a traditional and important method for the description of sedimentary material, with various applications in sedimentology, hydrology and geomorphology. However, conventional granulometric field survey methods are time consuming, laborious, costly and invasive to the surface being sampled, which can be a limiting factor for their applicability in protected areas. Optical granulometry has recently emerged as an image analysis technique enabling non-invasive survey, employing semi-automated identification of clasts from calibrated digital imagery taken on site with a conventional high resolution digital camera and a calibrated frame. The image processing allows detection and measurement of mixed-size natural grains, their sorting and their quantitative analysis using standard granulometric approaches. Despite known limitations, the technique today presents a reliable tool, significantly easing and speeding up field survey in fluvial geomorphology. However, such a survey still has limitations in the spatial coverage of the sites and in its applicability to research at the multitemporal scale. In our study, we present a novel approach based on the fusion of two image analysis techniques, optical granulometry and UAV-based photogrammetry, bridging the gap between the need for high resolution structural information for granulometric analysis and the need for spatially accurate, seamless data coverage. We have developed and tested a workflow that uses a UAV imaging platform to deliver seamless, high resolution and spatially accurate imagery of the study site, from which the granulometric properties of the sedimentary material can be derived.
We set up a workflow modeling chain providing (i) the optimum flight parameters for UAV imagery, balancing the two key divergent requirements of imagery resolution and seamless spatial coverage, (ii) the workflow for processing UAV-acquired imagery by means of optical granulometry and (iii) the workflow for analysing the spatial distribution and temporal changes of granulometric properties across the point bar. The proposed technique was tested in a case study of an active point bar of a mid-latitude mountain stream in the Sumava mountains, Czech Republic, exposed to repeated flooding. UAV photogrammetry was used to acquire very high resolution imagery to build high-precision digital terrain models and an orthoimage. The orthoimage was then analyzed using the digital optical granulometric tool BaseGrain. This approach allowed us (i) to analyze the spatial distribution of grain size in seamless transects over an active point bar and (ii) to assess the multitemporal changes in the granulometric properties of the point bar material resulting from flooding. The tested framework proves the applicability of the proposed method for granulometric analysis with accuracy comparable to field optical granulometry. The seamless nature of the data enables the study of the spatial distribution of granulometric properties across the study sites as well as the analysis of multitemporal changes resulting from repeated imaging.
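Granulometric summary statistics such as d50 or d84 are simple percentiles of the measured grain-size sample; a minimal sketch (the clast sample is invented, and BaseGrain's own output format is not reproduced):

```python
import numpy as np

def grain_percentiles(b_axes_mm, percentiles=(50, 84, 90)):
    """Characteristic grain sizes (e.g. d50, d84, d90) from a sample of
    measured b-axis lengths, as produced by optical granulometry tools."""
    return {f"d{p}": float(np.percentile(b_axes_mm, p)) for p in percentiles}

# hypothetical clast b-axis sample from one point-bar transect (mm)
sample = [12, 18, 22, 25, 31, 35, 40, 48, 55, 70]
print(grain_percentiles(sample))
```

Computing these percentiles per transect and per survey date is what allows the spatial and multitemporal comparisons described above.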
Integration of multi-sensor data to measure soil surface changes
NASA Astrophysics Data System (ADS)
Eltner, Anette; Schneider, Danilo
2016-04-01
Digital elevation models (DEMs) of high resolution and accuracy covering a suitably sized area of interest can be a promising approach to help understand the processes of soil erosion. Thereby, the plot under investigation should remain undisturbed. The fragile marl landscape in Andalusia (Spain) is especially prone to soil detachment and transport, with unique sediment connectivity characteristics due to the soil properties and climatic conditions. A 600 m² field plot was established and monitored during three field campaigns (Sep. 2013, Nov. 2013 and Feb. 2014). Unmanned aerial vehicle (UAV) photogrammetry and terrestrial laser scanning (TLS) are suitable tools to generate the high resolution topographic data that describe soil surface changes at large field plots. Thereby, the advantages of both methods are utilised in a synergetic manner. On the one hand, TLS data are assumed to possess higher reliability in terms of consistent error behaviour than DEMs derived from overlapping UAV images. Therefore, global errors (e.g. the dome effect) and local errors (e.g. DEM blunders due to erroneous image matching) within the UAV data are assessed with the DEMs produced by TLS. Furthermore, TLS point clouds allow for fast and reliable filtering of vegetation spots, which is not as straightforward with the UAV data due to known image matching problems in areas displaying plant cover. On the other hand, systematic DEM errors linked to TLS are detected and possibly corrected utilising the DEMs reconstructed from overlapping UAV images. Furthermore, TLS point clouds are filtered according to the degree of point quality, which is estimated from parameters of the scan geometry (i.e. incidence angle and footprint size). This is especially relevant for this study because the area of interest is located on gentle hillslopes that are prone to soil erosion.
Thus, the view of the scanning device onto the surface results in an adverse angle, which is only slightly improved by the use of a 4 m high tripod. Surface roughness is considered as a further parameter for evaluating TLS point quality. The filtering tool allows choosing each data point from either the TLS or the UAV data according to the data acquisition geometry and surface properties. The filtered points are merged into one point cloud, which is finally processed to reduce remaining data noise. DEM analysis reveals a continuous decrease of soil surface roughness after tillage, the reappearance of former wheel tracks, and local patterns of erosion as well as accumulation.
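The footprint-size component of the TLS point-quality estimate can be approximated from range, beam divergence and incidence angle; a sketch under the common elliptical-footprint approximation (the divergence value is an assumption, not the instrument's specification):

```python
import math

def tls_footprint_mm(range_m: float, divergence_mrad: float = 0.3,
                     incidence_deg: float = 0.0) -> float:
    """Approximate laser footprint diameter on the surface: beam spread
    over range (m * mrad = mm), stretched by 1/cos(incidence) along the
    major axis for oblique hits."""
    d = range_m * divergence_mrad
    return d / math.cos(math.radians(incidence_deg))

# near-nadir versus the grazing views typical of a scanner on a 4 m
# tripod over a gentle hillslope (hypothetical 0.3 mrad divergence)
print(round(tls_footprint_mm(30.0), 1))
print(round(tls_footprint_mm(30.0, incidence_deg=75.0), 1))
```

The sharply growing footprint at high incidence angles is why points from grazing views are down-weighted or replaced by UAV-derived points in the fusion.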
NASA Astrophysics Data System (ADS)
Ismail, M. A. M.; Kumar, N. S.; Abidin, M. H. Z.; Madun, A.
2018-04-01
This study presents a systematic approach to photogrammetric survey applicable to the extraction of elevation data for geophysical surveys in hilly terrain using Unmanned Aerial Vehicles (UAVs). The outcome is the acquisition of high-quality geophysical data from areas of varying elevation by locating the best survey lines. The study area is located at the proposed construction site for the development of a water reservoir and related infrastructure in Kampus Pauh Putra, Universiti Malaysia Perlis. Seismic refraction surveys were carried out to model the subsurface for detailed site investigation. A study was carried out to identify the accuracy of the digital elevation model (DEM) produced from a UAV. At a 100 m altitude (flying height), over 135 overlapping images were acquired using a DJI Phantom 3 quadcopter. All acquired images were processed for automatic 3D photo-reconstruction using the Agisoft PhotoScan digital photogrammetric software, which was applied to all photogrammetric stages. The products generated included a 3D model, a dense point cloud, a mesh surface, a digital orthophoto, and a DEM. To validate the accuracy of the produced DEM, the coordinates of selected ground control points (GCPs) along the survey line in the imaged area were extracted from the generated DEM with the aid of the Global Mapper software. These coordinates were compared with the GCPs obtained using a real-time kinematic global positioning system. The maximum difference between the GCPs and the photogrammetric survey is 13.3%. UAVs are suitable for acquiring elevation data for geophysical surveys and can save time and cost.
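A typical accuracy check of DEM-extracted coordinates against RTK-GPS reference points is the root-mean-square error; a minimal sketch with invented GCP elevations (not the Kampus Pauh Putra values):

```python
import math

def rmse(observed, reference) -> float:
    """Root-mean-square error between DEM-extracted and
    RTK-GPS reference coordinate components."""
    n = len(observed)
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(observed, reference)) / n)

# hypothetical GCP elevations (m): photogrammetric DEM vs RTK-GPS
dem_z = [41.20, 39.85, 44.10, 42.55]
gps_z = [41.05, 39.90, 44.30, 42.40]
print(round(rmse(dem_z, gps_z), 3), "m")
```

The same computation per easting, northing and elevation component summarises how well the photogrammetric survey reproduces the GNSS control.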
NASA Astrophysics Data System (ADS)
Sun, D.; Zheng, J. H.; Ma, T.; Chen, J. J.; Li, X.
2018-04-01
Rodent disasters are one of the main biological disasters in the grasslands of northern Xinjiang. Eating and digging behaviors cause the destruction of ground vegetation, which seriously affects the development of animal husbandry and grassland ecological security. UAV low altitude remote sensing, an emerging technique with high spatial resolution, can effectively recognize burrows. However, how to select the appropriate spatial resolution to monitor the rodent disaster is the first problem to address. The purpose of this study is to explore the optimal spatial scale for identification of burrows by evaluating the impact of different spatial resolutions on burrow identification accuracy. In this study, we photographed burrows from different flight heights to obtain visible images of different spatial resolutions. An object-oriented method was then used to identify the burrows, and we evaluated the accuracy of the classification. We found that the average classification accuracy of burrows reached more than 80%; at an altitude of 24 m and a spatial resolution of 1 cm, the classification accuracy is highest. We have created a unique and effective way to identify burrows using UAV visible images. We draw the following conclusions: the best spatial resolution for burrow recognition is 1 cm using a DJI PHANTOM-3 UAV, and improving the spatial resolution does not necessarily lead to an improvement in classification accuracy. This study lays the foundation for future research and can be extended to similar studies elsewhere.
NASA Astrophysics Data System (ADS)
Bareth, G.; Bolten, A.; Gnyp, M. L.; Reusch, S.; Jasper, J.
2016-06-01
The development of UAV-based sensing systems for agronomic applications serves the improvement of crop management. The latter is the focus of precision agriculture, which aims to optimize yield, fertilizer input, and crop protection. Moreover, in some cropping systems vehicle-based sensing devices are less suitable because fields cannot be entered from certain growing stages onwards; this is true for rice, maize, sorghum, and many other crops. Consequently, UAV-based sensing approaches fill a niche of very high resolution data acquisition at the field scale in space and time. While mounting RGB digital compact cameras on low-weight UAVs (< 5 kg) is well established, the miniaturization of sensors in recent years also enables hyperspectral data acquisition from these platforms. From both RGB and hyperspectral data, vegetation indices (VIs) are computed to estimate crop growth parameters. In this contribution, we compare two different sensing approaches from a low-weight UAV platform (< 5 kg) for monitoring a nitrogen field experiment on winter wheat and a corresponding farmer's field in western Germany: (i) a standard digital compact camera was flown to acquire RGB images from which the RGBVI is computed, and (ii) the NDVI is computed from a newly modified version of the Yara N-Sensor. The latter is a well-established tractor-based hyperspectral sensor for crop management that has been available on the market for a decade; it was modified for this study to fit the requirements of UAV-based data acquisition. Consequently, we focus on three objectives in this contribution: (1) to evaluate the potential of the uncalibrated RGBVI for monitoring nitrogen status in winter wheat, (2) to investigate the UAV-based performance of the modified Yara N-Sensor, and (3) to compare the results of the two UAV-based sensing approaches for winter wheat.
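Both indices compared in this contribution have simple closed forms; the RGBVI as published by this group elsewhere is (G^2 - R*B)/(G^2 + R*B), and the NDVI is the classical (NIR - Red)/(NIR + Red). A per-pixel sketch with invented reflectance values:

```python
import numpy as np

def rgbvi(red, green, blue):
    """RGBVI = (G^2 - R*B) / (G^2 + R*B), computed per pixel."""
    g2 = green.astype(float) ** 2
    rb = red.astype(float) * blue.astype(float)
    return (g2 - rb) / (g2 + rb)

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# made-up reflectance patches: vigorous vegetation is green- and NIR-bright
red   = np.array([[0.05, 0.20]])
green = np.array([[0.30, 0.22]])
blue  = np.array([[0.04, 0.18]])
nir   = np.array([[0.60, 0.30]])

print("RGBVI:", np.round(rgbvi(red, green, blue), 2))
print("NDVI: ", np.round(ndvi(nir, red), 2))
```

Both return values in [-1, 1], with healthy canopy pixels pushed towards the positive end.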
NASA Astrophysics Data System (ADS)
Brown, Anthony M.
2018-01-01
Recent advances in unmanned aerial vehicle (UAV) technology have made UAVs an attractive possibility as an airborne calibration platform for astronomical facilities. This is especially true for arrays of telescopes spread over a large area, such as the Cherenkov Telescope Array (CTA). In this paper, the feasibility of using UAVs to calibrate CTA is investigated. Assuming a UAV at 1 km altitude above CTA, operating on astronomically clear nights with stratified, low atmospheric dust content, appropriate thermal protection for the calibration light source, and an onboard photodiode to monitor its absolute light intensity, inter-calibration of CTA's telescopes of the same size class is found to be achievable with a 6-8 % uncertainty. For cross-calibration of different telescope size classes, a systematic uncertainty of 8-10 % is found to be achievable. Importantly, equipping the UAV with a multi-wavelength calibration light source affords us the ability to monitor the wavelength-dependent degradation of CTA telescopes' optical systems, allowing us not only to maintain this 6-10 % uncertainty after the first few years of telescope deployment, but also to accurately account for the effect of multi-wavelength degradation on the cross-calibration of CTA by other techniques, namely with images of air showers and local muons. A UAV-based system thus provides CTA with several independent and complementary methods of cross-calibrating the optical throughput of individual telescopes. Furthermore, housing environmental sensors on the UAV system not only minimises the systematic uncertainty associated with the atmospheric transmission of the calibration signal, it also allows us to map the dust content above CTA and to monitor the temperature, humidity and pressure profiles of the first kilometre of atmosphere above CTA with each UAV flight.
Unmanned Aerial Vehicles for High-Throughput Phenotyping and Agronomic Research
Shi, Yeyin; Thomasson, J. Alex; Murray, Seth C.; Pugh, N. Ace; Rooney, William L.; Shafian, Sanaz; Rajan, Nithya; Rouze, Gregory; Morgan, Cristine L. S.; Neely, Haly L.; Rana, Aman; Bagavathiannan, Muthu V.; Henrickson, James; Bowden, Ezekiel; Valasek, John; Olsenholler, Jeff; Bishop, Michael P.; Sheridan, Ryan; Putman, Eric B.; Popescu, Sorin; Burks, Travis; Cope, Dale; Ibrahim, Amir; McCutchen, Billy F.; Baltensperger, David D.; Avant, Robert V.; Vidrine, Misty; Yang, Chenghai
2016-01-01
Advances in automation and data science have led agriculturists to seek real-time, high-quality, high-volume crop data to accelerate crop improvement through breeding and to optimize agronomic practices. Breeders have recently gained massive data-collection capability in genome sequencing of plants. Faster phenotypic trait data collection and analysis relative to genetic data leads to faster and better selections in crop improvement. Furthermore, faster and higher-resolution crop data collection leads to greater capability for scientists and growers to improve precision-agriculture practices on increasingly larger farms; e.g., site-specific application of water and nutrients. Unmanned aerial vehicles (UAVs) have recently gained traction as agricultural data collection systems. Using UAVs for agricultural remote sensing is an innovative technology that differs from traditional remote sensing in more ways than strictly higher-resolution images; it provides many new and unique possibilities, as well as new and unique challenges. Herein we report on processes and lessons learned from year 1 (the summer 2015 and winter 2016 growing seasons) of a large multidisciplinary project evaluating UAV images across a range of breeding and agronomic research trials on a large research farm. Included are team and project planning, UAV and sensor selection and integration, and data collection and analysis workflow. The study involved many crops and both breeding plots and agronomic fields. The project's goal was to develop methods for UAVs to collect high-quality, high-volume crop data with fast turnaround time to field scientists. The project included five teams: Administration, Flight Operations, Sensors, Data Management, and Field Research. Four case studies involving multiple crops in breeding and agronomic applications add practical descriptive detail. Lessons learned include critical information on sensors, air vehicles, and configuration parameters for both.
As the first and most comprehensive project of its kind to date, these lessons are particularly salient to researchers embarking on agricultural research with UAVs. PMID:27472222
Long-term monitoring of a large landslide by using an Unmanned Aerial Vehicle (UAV)
NASA Astrophysics Data System (ADS)
Lindner, Gerald; Schraml, Klaus; Mansberger, Reinfried; Hübl, Johannes
2015-04-01
Currently, UAVs are becoming increasingly important in various scientific areas, including forestry, precision farming, archaeology and hydrology. Using these drones in natural hazards research enables a completely new level of data acquisition that is flexible in site selection, repeatable in time, cost-efficient, and capable of arbitrary spatial resolution. In this study, a rotary-wing mini-UAV carrying a DSLR camera was used to acquire time series of overlapping aerial images. These photographs served as input for extracting digital surface models (DSMs) as well as orthophotos of the area of interest. The "Pechgraben" area in Upper Austria has a catchment area of approximately 2 km²; its geology is dominated by limestone and sandstone. Triggered by heavy rainfall in the late spring of 2013, an area of about 70 ha began to move towards the village in the valley. In addition to the urgent mitigation measures, the slow-moving landslide was monitored approximately every month over a period of more than 18 months, resulting in a detailed documentation of the change process. Movement velocities and height differences were quantified and validated using a dense network of ground control points (GCPs). In total, 14 image flights comprising about 10,000 photographs were performed to create multi-temporal geodata at sub-decimetre resolution for two selected areas of the landslide. A UAV proved to be an excellent choice for this application, as it allows short repetition times, low flying heights and high spatial resolution. Furthermore, the UAV operates almost independently of weather and highly autonomously. High-quality results can be expected within a few hours of the photo flight. The UAV system performs very well in an alpine environment. Time series of the acquired geodata detect changes in topography and provide long-term documentation of the measures taken to stop the landslide and protect infrastructure from damage.
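At its core, quantifying height change between two monthly epochs reduces to differencing co-registered DSM rasters (a "DEM of difference") and masking cells below a vertical level of detection. A minimal sketch with invented heights and an assumed 0.3 m detection limit:

```python
import numpy as np

# Two co-registered DSM epochs over the same grid (invented heights, metres)
dsm_t0 = np.array([[612.4, 613.1, 614.0],
                   [611.8, 612.6, 613.5]])
dsm_t1 = np.array([[612.6, 612.5, 614.0],
                   [611.8, 613.4, 613.6]])

dod = dsm_t1 - dsm_t0              # DEM of difference: + accumulation, - loss
lod = 0.3                          # assumed level of detection, metres
significant = np.abs(dod) > lod    # cells with detectable change

print("mean change: %.2f m, significant cells: %d" % (dod.mean(), significant.sum()))
```

Multiplying significant change depths by cell area gives the displaced volumes used in long-term landslide documentation.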
On-board computational efficiency in real time UAV embedded terrain reconstruction
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis
2014-05-01
In the last few years, there has been a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAVs). Specifications for constructing such UAVs are highly diverse, with contradictory characteristics including cost-efficiency, payload weight, flight time, mapping precision and real-time processing capability. In this work, a hexacopter UAV is employed for near real time terrain mapping. The main challenge addressed is to retain a low-cost flying platform with real time processing capabilities. The UAV weight limitation, which affects the overall flight time, makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a powerful processing unit on board. To merge these two contradicting aspects and allow customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as the OMAP3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access both to general purpose processors, such as the ARM11, and to specialized processors, such as a digital signal processor or a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, depending on the frame rate required, additional image processing may take place concurrently, such as image rectification and object detection. Lastly, the on-board positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground-truth geodetic measurements in order to assess the accuracy limitations of the overall process.
It is shown that with our proposed novel system, there is much potential for computational efficiency on board within optimized time constraints.
Employing unmanned aerial vehicle to monitor the health condition of wind turbines
NASA Astrophysics Data System (ADS)
Huang, Yishuo; Chiang, Chih-Hung; Hsu, Keng-Tsang; Cheng, Chia-Chi
2018-04-01
Unmanned aerial vehicles (UAVs) can gather the spatial information of large structures, such as wind turbines, that can be difficult to obtain with traditional approaches. In this paper, the UAV used in the experiments is equipped with a high-resolution camera and a thermal infrared camera. The high-resolution camera provides a series of images with resolution up to 10 megapixels, which are used to build a 3D model using digital photogrammetry. By comparing 3D scenes of the same wind turbine at different times, possible displacement of the supporting tower of the wind turbine, caused by ground movement or foundation deterioration, may be determined. The recorded thermal images are analyzed by applying image segmentation methods to the surface temperature distribution, separating a series of sub-regions by differences in surface temperature. The high-resolution optical image and the segmented thermal image are then fused so that surface anomalies of the wind turbines are more easily identified.
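The paper's segmentation method is not spelled out in the abstract, but the simplest form of separating sub-regions by surface temperature is statistical thresholding of the thermal frame. A sketch with an invented temperature grid and an assumed two-sigma anomaly rule:

```python
import numpy as np

# Invented surface-temperature grid (deg C) for one thermal frame: a mostly
# uniform surface with a single hot spot
temps = np.full((5, 5), 20.0)
temps[2, 2] = 35.0

# flag pixels far above the scene statistics as potential anomalies
threshold = temps.mean() + 2.0 * temps.std()
anomaly = temps > threshold

print("threshold: %.1f C, anomalous pixels: %d" % (threshold, anomaly.sum()))
```

The resulting boolean mask is what would be fused with the co-registered optical image to localise the anomaly on the structure.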
Stereo matching algorithm based on double components model
NASA Astrophysics Data System (ADS)
Zhou, Xiao; Ou, Kejun; Zhao, Jianxin; Mou, Xingang
2018-03-01
Thin wires pose a great threat to the safety of UAV flight. They occupy only a few pixels and are isolated far from the background, whereas most existing stereo matching methods require a support region of a certain area to improve robustness, or assume depth dependence between neighboring pixels to meet the requirements of global or semi-global optimization. Consequently, false alarms or outright failures occur when images contain thin wires. A new stereo matching algorithm based on a two-component model is proposed in this paper. According to texture type, the input image is decomposed into two independent component images: one contains only the sparse wire texture, and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments show that the algorithm can effectively compute the depth image of the complex scenes encountered by patrol UAVs, detecting thin wires as well as large objects. Compared with current mainstream methods it has obvious advantages.
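The paper's two-component matcher itself is not given in the abstract, but the support-region baseline it contrasts with can be sketched as winner-take-all SAD block matching along a rectified scanline; it is exactly this window that smears out few-pixel wires. All images and parameters below are invented:

```python
import numpy as np

def sad_disparity(left, right, x, y, win=1, max_d=8):
    """Winner-take-all SAD block matching along one rectified scanline."""
    patch = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        if x - d - win < 0:          # candidate window would leave the image
            break
        cand = right[y - win:y + win + 1,
                     x - d - win:x - d + win + 1].astype(float)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# synthetic rectified pair: the right image is the left shifted by 2 px
row = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8])
left = np.tile(row, (7, 1))
right = np.zeros_like(left)
right[:, :-2] = left[:, 2:]

print("disparity at (x=6, y=3):", sad_disparity(left, right, 6, 3))
```

For an isolated one-pixel wire, the 3x3 window is dominated by background, which is why the decomposition into a sparse wire component matched separately is the abstract's key idea.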
Thermal Remote Sensing with Uav-Based Workflows
NASA Astrophysics Data System (ADS)
Boesch, R.
2017-08-01
Climate change will have a significant influence on vegetation health and growth. Predictions of higher mean summer temperatures and prolonged summer droughts may pose a threat to agricultural areas and forest canopies. Rising canopy temperatures can be an indicator of plant stress because of the closure of stomata and a decrease in the transpiration rate. Thermal cameras have been available for decades, but are still often used only for single-image analysis, in an oblique-view manner, or with visual evaluation of video sequences. Remote sensing with a thermal camera can therefore be an important data source for understanding transpiration processes. Photogrammetric workflows allow thermal images to be processed similarly to RGB data, but the low spatial resolution of thermal cameras, significant optical distortion and typically low contrast require an adapted workflow. The temperature distribution in forest canopies is typically completely unknown and less distinct than in urban or industrial areas, where metal constructions and surfaces yield high contrast and sharp edge information. The aim of this paper is to investigate the influence of interior camera orientation, tie point matching and ground control points on the resulting accuracy of bundle adjustment and dense cloud generation with a typical photogrammetric workflow for UAV-based thermal imagery in natural environments.
UAV-based Natural Hazard Management in High-Alpine Terrain - Case Studies from Austria
NASA Astrophysics Data System (ADS)
Sotier, Bernadette; Adams, Marc; Lechner, Veronika
2015-04-01
Unmanned Aerial Vehicles (UAVs) have become a standard tool for geodata collection, as they allow on-demand mapping missions to be conducted in a flexible, cost-effective manner at an unprecedented level of detail. Easy-to-use, high-performance image matching software makes it possible to process the collected aerial images into orthophotos and 3D terrain models. Such up-to-date geodata have proven to be an important asset in natural hazard management: processes like debris flows, avalanches, landslides, fluvial erosion and rock-fall can be detected and quantified, and damages can be documented and evaluated. In the Alps, these processes mostly originate in remote areas which are difficult and hazardous to access, presenting a challenging task for RPAS data collection. In particular, the problems include finding suitable landing and piloting places, dealing with poor or absent GPS signals, and installing ground control points (GCPs) for georeferencing. At the BFW, RPAS have been used since 2012 to aid natural hazard management of various processes, of which three case studies are presented below. The first case study deals with the results of an attempt to employ UAV-based multi-spectral remote sensing to monitor the state of natural-hazard protection forests. Images in the visible and near-infrared (NIR) bands were collected using modified low-cost cameras combined with different optical filters. Several UAV flights were performed in 2014 in the 72 ha study site, which lies in the Wattental, Tyrol (Austria) between 1700 and 2050 m a.s.l. and where the main tree species are stone pine and mountain pine. The matched aerial images were analysed using different UAV-specific vitality indices, evaluating both single- and dual-camera UAV missions. To calculate the mass balance of a debris flow in the Tyrolean Halltal (Austria), an RPAS flight was conducted in autumn 2012.
The extreme alpine environment was challenging for both the mission and the evaluation of the aerial images: in the upper part of the steep channel no GPS signal was available because of the high surrounding rock faces, and the landing area consisted of coarse gravel. Therefore, only a manual flight with a high risk of damage was possible. With the calculated RPAS-based digital surface model, created from the 600 aerial images, a chronologically resolved back-calculation of the last big debris-flow event could be performed. In a third case study, aerial images from RPAS were used for a similar investigation in Virgen, Eastern Tyrol (Austria). A debris flow in the Firschnitzbach catchment caused severe damage to the village of Virgen in August 2012. An RPAS flight was performed in order to refine the estimated displaced debris mass for assessment purposes. The upper catchment of the Firschnitzbach is situated above the timberline and covers an area of 6.5 ha over a height difference of 1000 m; therefore, three separate flights were necessary to achieve sufficient image overlap. The central part of the Firschnitzbach consists of a steep and partly densely forested canyon, so no flight has been possible for this section so far. The evaluation of the surface model derived from the images showed that only half of the estimated debris mass came from the upper part of the catchment.
NASA 2007 Western States Fire Missions (WSFM)
NASA Technical Reports Server (NTRS)
Buoni, Greg
2008-01-01
This viewgraph presentation describes the Western States Fire Missions (WSFM) that occurred in 2007. The objectives of the mission were: (1) demonstrate the capability of UAS to overfly and collect sensor data on widespread fires throughout the western US; (2) demonstrate long-endurance mission capabilities (20+ hours); (3) image multiple fires (more than 4 fires per mission) to showcase an extendable mission configuration and the ability to either linger over key fires or station over disparate regional fires; (4) demonstrate a new UAV-compatible, autonomous sensor for improved thermal characterization of fires; (5) provide automated, on-board, terrain- and geo-rectified sensor imagery over OTH satcom links to national fire personnel and incident commanders; (6) deliver real-time imagery (within 10 minutes of acquisition); and (7) demonstrate the capability of off-the-shelf technologies (Google Earth) to serve and display mission-critical sensor data, coincident with other pertinent data elements (weather data, ground asset data, other satellite data, real-time video, flight track information, etc.) to facilitate information processing.
Planning and Management of Real-Time Geospatial UAS Missions Within a Virtual Globe Environment
NASA Astrophysics Data System (ADS)
Nebiker, S.; Eugster, H.; Flückiger, K.; Christen, M.
2011-09-01
This paper presents the design and development of a hardware and software framework supporting all phases of typical monitoring and mapping missions with mini and micro UAVs (unmanned aerial vehicles). The developed solution combines state-of-the-art collaborative virtual globe technologies with advanced geospatial imaging techniques and wireless data link technologies supporting the combined and highly reliable transmission of digital video, high-resolution still imagery and mission control data over extended operational ranges. The framework enables the planning, simulation, control and real-time monitoring of UAS missions in application areas such as monitoring of forest fires, agronomical research, border patrol or pipeline inspection. The geospatial components of the project are based on the Virtual Globe Technology i3D OpenWebGlobe of the Institute of Geomatics Engineering at the University of Applied Sciences Northwestern Switzerland (FHNW). i3D OpenWebGlobe is a high-performance 3D geovisualisation engine supporting the web-based streaming of very large amounts of terrain and POI data.
NASA Astrophysics Data System (ADS)
Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.
2015-12-01
The use of UAVs in photogrammetry to obtain image coverage and achieve the main objectives of photogrammetric mapping has boomed in the region. Images of the REGGIOLO region in the province of Reggio Emilia, Italy, taken by a UAV with a non-metric Canon Ixus camera at an average flying height of 139.42 m, were used to classify urban features. Using the SURE software and the images covering the study area, a dense point cloud, a DSM and an orthophoto with a spatial resolution of 10 cm were produced. A DTM of the area was developed using an adaptive TIN filtering algorithm. An NDSM of the area was prepared as the difference between the DSM and the DTM and added as a separate feature to the image stack. For feature extraction, the grey-level co-occurrence matrix features mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation were computed for each RGB band of the orthophoto. The classes used for the urban classification problem comprised buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces; the impervious-surface class includes features such as pavement, cement, cars and roofs. Pixel-based classification and selection of optimal features were performed with a pixel-based GA-SVM. To achieve classification results with higher accuracy, the spectral, textural and shape information of the orthophoto was combined, and a multi-scale segmentation method was used for the segmentation. The results of the proposed classification of urban features suggest the suitability of this method for classifying urban features from UAV images. The overall accuracy and kappa coefficient of the method proposed in this study were 93.47 % and 91.84 %, respectively.
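The grey-level co-occurrence texture features listed above (contrast, homogeneity, etc.) are straightforward to compute once the co-occurrence matrix is built. A minimal sketch over invented 8-level patches; a production pipeline would use a library implementation such as scikit-image's graycomatrix:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised grey-level co-occurrence matrix for one pixel offset."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for yy in range(h - dy):
        for xx in range(w - dx):
            m[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return m / m.sum()

def contrast(p):
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def homogeneity(p):
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())

# invented 8-level grey patches: one flat, one stripy
flat = np.zeros((6, 6), dtype=int)
stripes = np.tile(np.array([0, 7]), (6, 3))

for name, img in (("flat", flat), ("stripes", stripes)):
    p = glcm(img)
    print(f"{name:8s} contrast={contrast(p):6.2f} homogeneity={homogeneity(p):.2f}")
```

A flat patch yields zero contrast and unit homogeneity, while the stripy patch scores the opposite way, which is exactly what lets such features separate, say, roofs from vegetation.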
NASA Astrophysics Data System (ADS)
Brady, J. J.; Tweedie, C. E.; Escapita, I. J.
2009-12-01
There is a fundamental need to improve capacities for monitoring environmental change using remote sensing technologies. Recently, researchers have begun using Unmanned Aerial Vehicles (UAVs) to expand and improve upon remote sensing capabilities. Limitations of most non-military and relatively small-scale Unmanned Aircraft Systems (UASs) include the need for more reliable communications between ground and aircraft, tools to optimize flight control, real-time data processing, and means of visually ascertaining the quantity of data collected while in the air. Here we present a prototype software system that has enhanced communication between the ground and the vehicle, can synthesize near real time data acquired from sensors on board, can log operational data during flights, and can visually demonstrate the amount and quality of data for a sampling area. This software has the capacity to greatly improve the utilization of UAS in the environmental sciences. The software system is being designed for use on a paraglider UAV that carries a suite of sensors suitable for characterizing the footprints of eddy covariance towers situated in the Chihuahuan Desert and in the Arctic. Sensors on board relay operational flight data (airspeed, ground speed, latitude, longitude, pitch, yaw, roll, acceleration, and video) as well as data from a suite of customized sensors. Additional sensors can be added to an on-board laptop or a CR1000 data logger, thereby allowing data from these sensors to be visualized in the prototype software. This poster will describe the development, use and customization of our UAS, and multimedia will be available during AGU to illustrate the system in use. [Figures: UAV on workbench in the lab; UAV in flight.]
3D Reconstruction from Uav-Based Hyperspectral Images
NASA Astrophysics Data System (ADS)
Liu, L.; Xu, L.; Peng, J.
2018-04-01
Reconstructing a 3D profile from a set of UAV-based images yields hyperspectral information as well as the 3D coordinates of any point on the profile. Our images are captured with the Cubert UHD185 (UHD) hyperspectral camera, a new type of high-speed onboard imaging spectrometer that acquires a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, but each hyperspectral image provides considerable information on the spatial spectral distribution of the object. There is thus an opportunity to derive a high-quality 3D point cloud from the panchromatic images and rich spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database providing the hyperspectral information and 3D position of each point. First, we adopt VisualSFM, a free and open-source software package based on the structure-from-motion (SfM) algorithm, to recover a 3D point cloud from the panchromatic images. We then obtain the spectral information of each point from the hyperspectral images with a self-developed program written in MATLAB. The product can be used to support further research and applications.
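The core of attaching a spectrum to each reconstructed point is a pinhole projection of the 3D point into a hyperspectral frame followed by a pixel lookup. A sketch of that step; the camera intrinsics, pose and hypercube below are invented stand-ins, not UHD185 calibration values:

```python
import numpy as np

def project_point(X, K, R, t):
    """Pinhole projection of a world point X into pixel coordinates (u, v)."""
    x_cam = R @ X + t          # world -> camera frame
    uvw = K @ x_cam            # camera -> homogeneous image coordinates
    return uvw[:2] / uvw[2]

# Invented camera: 100 px focal length, principal point at (50, 50)
K = np.array([[100.0,   0.0, 50.0],
              [  0.0, 100.0, 50.0],
              [  0.0,   0.0,  1.0]])
R = np.eye(3)
t = np.zeros(3)

# Invented 125-band hypercube standing in for one hyperspectral frame
cube = np.random.rand(100, 100, 125)

u, v = project_point(np.array([0.5, -0.2, 2.0]), K, R, t)
spectrum = cube[int(round(v)), int(round(u)), :]   # nearest-pixel spectrum lookup
print("pixel:", (round(u, 1), round(v, 1)), "bands:", spectrum.shape[0])
```

In a real chain the poses come from the SfM solution and the lookup would interpolate between neighbouring hyperspectral pixels rather than taking the nearest one.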
Development of a Micro-UAV Hyperspectral Imaging Platform for Assessing Hydrogeological Hazards
NASA Astrophysics Data System (ADS)
Chen, Z.; Alabsi, M.
2015-12-01
The exacerbating global weather changes have cast significant impacts upon the proportion of water supplied to agriculture. Securing water for food is therefore one of the 21st-century Grand Challenges faced by the global population. However, soil-water behavior in an agricultural environment is complex; among other factors, one of the key properties we recognize is water repellence, or hydrophobicity, which affects many hydrogeological and hazardous conditions such as excessive water infiltration, runoff, and soil erosion. Under a US-Israel research program funded by the USDA and BARD (Israel), we have proposed the development of a novel micro unmanned aerial vehicle (micro-UAV, or drone) based hyperspectral imaging platform for identifying and assessing soil repellence at low altitudes with enhanced flexibility, much reduced cost, and ultimately easy use. This aerial imaging system consists of a generic micro-UAV, a hyperspectral sensor aided by GPS/IMU, on-board computing units, and a ground station. The target benefits of this system include: (1) programmable waypoint navigation and robotic control for multi-view imaging; (2) two- or three-dimensional scene reconstruction for complex terrains; and (3) fusion with other sensors to realize real-time diagnosis (e.g., of humidity and solar irradiation that may affect soil-water sensing). In this talk we present our methodology and processes in integrating hyperspectral imaging, on-board sensing and computing, and hyperspectral data modeling, together with preliminary field demonstration and verification of the developed prototype.
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter-type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft's Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
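The "theoretical principles of photogrammetry" against which such point clouds are checked include the normal-case depth-precision rule sigma_Z ≈ Z^2 / (c * B) * sigma_x, where Z is the object distance, c the focal length, B the stereo base and sigma_x the image-space measurement precision. A sketch with invented micro-UAV parameters, not values from the study:

```python
def depth_precision(Z, base, focal, sigma_img):
    """Normal-case stereo precision: sigma_Z = Z**2 / (focal * base) * sigma_img.

    All lengths in metres; sigma_img is the image-space measurement precision
    (pixel pitch times matching precision in pixels).
    """
    return Z ** 2 / (focal * base) * sigma_img

# invented micro-UAV camera and flight parameters
FOCAL = 0.0036        # 3.6 mm lens, assumed
PIXEL = 1.56e-6       # 1.56 um pixel pitch, assumed
SIGMA = 0.5 * PIXEL   # half-pixel matching precision, assumed

for Z, B in ((50, 15), (100, 30)):
    sz_cm = depth_precision(Z, B, FOCAL, SIGMA) * 100
    print(f"Z={Z:3d} m, base={B:2d} m -> sigma_Z ~{sz_cm:.1f} cm")
```

The quadratic growth with flying height is why high overlaps (short bases are compensated by many rays per point) matter so much for UAV point cloud accuracy.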
Sense and avoid technology for Global Hawk and Predator UAVs
NASA Astrophysics Data System (ADS)
McCalmont, John F.; Utt, James; Deschenes, Michael; Taylor, Michael J.
2005-05-01
The Sensors Directorate at the Air Force Research Laboratory (AFRL) along with Defense Research Associates, Inc. (DRA) conducted a flight demonstration of technology that could potentially satisfy the Federal Aviation Administration's (FAA) requirement for Unmanned Aerial Vehicles (UAVs) to sense and avoid local air traffic sufficient to provide an "...equivalent level of safety, comparable to see-and-avoid requirements for manned aircraft". This FAA requirement must be satisfied for autonomous UAV operation within the national airspace. The real-time on-board system passively detects approaching aircraft, both cooperative and non-cooperative, using imaging sensors operating in the visible/near infrared band and a passive moving target indicator algorithm. Detection range requirements for RQ-4 and MQ-9 UAVs were determined based on analysis of flight geometries, avoidance maneuver timelines, system latencies and human pilot performance. Flight data and UAV operating parameters were provided by the system program offices, prime contractors, and flight-test personnel. Flight demonstrations were conducted using a surrogate UAV (Aero Commander) and an intruder aircraft (Beech Bonanza). The system demonstrated target detection ranges out to 3 nautical miles in nose-to-nose scenarios and marginal visual meteorological conditions (VMC). This paper will describe the sense and avoid requirements definition process and the system concept (sensors, algorithms, processor, and flight test results) that has demonstrated the potential to satisfy the FAA sense and avoid requirements.
NASA Astrophysics Data System (ADS)
Huang, Haifeng; Long, Jingjing; Yi, Wu; Yi, Qinglin; Zhang, Guodong; Lei, Bangjun
2017-11-01
In recent years, unmanned aerial vehicles (UAVs) have become widely used in emergency investigations of major natural hazards over large areas; however, UAVs are less commonly employed to investigate single geo-hazards. Based on a number of successful investigations in the Three Gorges Reservoir area, China, a complete UAV-based method for performing emergency investigations of single geo-hazards is described. First, a customized UAV system that consists of a multi-rotor UAV subsystem, an aerial photography subsystem, a ground control subsystem and a ground surveillance subsystem is described in detail. The implementation process, which includes four steps, i.e., indoor preparation, site investigation, on-site fast processing and application, and indoor comprehensive processing and application, is then elaborated, and two investigation schemes, automatic and manual, that are used in the site investigation step are put forward. Moreover, some key techniques and methods - e.g., the layout and measurement of ground control points (GCPs), route planning, flight control and image collection, and the Structure from Motion (SfM) photogrammetry processing - are explained. Finally, three applications are given. Experience has shown that using UAVs for emergency investigation of single geo-hazards greatly reduces the time, intensity and risks associated with on-site work and provides valuable, high-accuracy, high-resolution information that supports emergency responses.
Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing
NASA Astrophysics Data System (ADS)
Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.
2016-06-01
In recent years, many studies revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g. including façades and building footprints). Such data are usually acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than generally used devices), for the realization of a high-level-of-detail architectural survey. The critical issues of the acquisitions from a common UAV (flight planning strategies, ground control points, check points distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in case of emergency problems or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in some commercial and open source software were tested. The achieved results are analysed and the discrepancies from some reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).
3D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality
NASA Astrophysics Data System (ADS)
Hwang, Jin-Tsong; Chu, Ting-Chen
2016-10-01
This study presents an approach wherein photographs with a high degree of overlap are captured using a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement the smartphone application service, a markerless AR of the building model can be built. This study is aimed at providing technical and design skills related to urban planning, urban designing, and building information retrieval using AR.
UAV-LiDAR accuracy and comparison to Structure from Motion photogrammetry
NASA Astrophysics Data System (ADS)
Kucharczyk, M.; Hugenholtz, C.; Zou, X.; Nesbit, P. R.; Barchyn, T.
2016-12-01
We compare the spatial accuracy of a UAV-LiDAR system with Structure from Motion (SfM) photogrammetry. UAV-based LiDAR remote sensing potentially offers advantages over SfM photogrammetry in vegetated terrain, particularly with respect to canopy penetration and related measurements of ground surface elevation and vegetation height; however, little quantitative evidence has been presented to date. To address this, we performed a case study at a field site in Alberta, Canada with six different land cover types: short grass, tall grass, short shrubs, tall shrubs, deciduous trees, and coniferous trees. Both UAV datasets were acquired on the same day. The SfM dataset was derived from images acquired by a senseFly eBee fixed-wing UAV equipped with a 16.1 megapixel RGB camera. The UAV-LiDAR system is a proprietary design that consists of a single-rotor helicopter (2-m rotor diameter) equipped with a Riegl VUX-1UAV laser scanner, KVH 1750 inertial measurement unit, and dual NovAtel GNSS receivers. We measured vegetation height from at least 30 samples in each land cover type and acquired check point measurements to determine horizontal and vertical accuracy. Vegetation height was measured manually for grasses and shrubs with a level staff, and with a total station for trees. Coordinates of horizontal and vertical check points were surveyed with real-time kinematic (RTK) GNSS. We followed standard methods for computing horizontal and vertical accuracies based on the 2015 guidelines from the American Society of Photogrammetry and Remote Sensing. Results will be presented at the AGU Fall Meeting.
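The ASPRS-based accuracy assessment mentioned in the abstract reduces to a few RMSE formulas applied to check-point residuals. The sketch below shows the standard conversions to 95%-confidence accuracies; the function names and inputs are illustrative, not taken from the study:

```python
import math

def rmse(errors):
    """Root-mean-square error of a list of check-point residuals."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def accuracy_95(dx, dy, dz):
    """Horizontal and vertical accuracy at the 95% confidence level,
    following the ASPRS positional accuracy standards:
      RMSE_r      = sqrt(RMSE_x^2 + RMSE_y^2)
      horiz (95%) = 1.7308 * RMSE_r  (assumes RMSE_x ~= RMSE_y)
      vert  (95%) = 1.9600 * RMSE_z  (assumes normally distributed errors)
    dx, dy, dz are per-check-point differences (measured minus reference)."""
    rmse_r = math.sqrt(rmse(dx) ** 2 + rmse(dy) ** 2)
    return 1.7308 * rmse_r, 1.9600 * rmse(dz)
```

The residuals themselves come from comparing surveyed RTK-GNSS check points against the corresponding LiDAR or SfM coordinates.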
The Use of UAV in Housing Renovation Identification: A Case Study at Taman Manis 2
NASA Astrophysics Data System (ADS)
Mustaffa, A. A.; Hasmori, M. F.; Sarif, A. S.; Ahmad, N. F.; Zainun, N. Y.
2018-04-01
Housing industry in Malaysia is growing rapidly due to the increase in population and the rising economic level of the Malaysian people. Most residential houses are built according to a standard residential design, which may lead to house renovation by the buyers after purchasing. A method of Unmanned Aerial Vehicle (UAV) monitoring was used to obtain information on the renovated houses directly on-site at Taman Manis 2, Parit Raja, Batu Pahat. Through comparison of the images captured by the UAV with the original house plans, we found that a total of 160 out of 336 housing units had undergone renovation. Surprisingly, 41 units had been renovated illegally, with renovation rates of 40% to 96%. From the analysis of the acquired data, it can be concluded that the method of using UAVs to obtain such information is highly recommended. The study is expected to help the Municipal Council detect improper and illegal renovations by residents in a residential area.
Uav Application in Coastal Environment, Example of the Oleron Island for Dunes and Dikes Survey
NASA Astrophysics Data System (ADS)
Guillot, B.; Pouget, F.
2015-08-01
The recent evolutions in civil UAV ease of use led the University of La Rochelle to conduct a UAV program around its own potential coastal applications. An application program involving La Rochelle University and the District of Oleron Island began in January 2015 and lasted through July of 2015. The aims were to choose 9 study areas and survey them during the winter season. The studies concerned surveying the dikes and coastal sand dunes of Oleron Island. During each flight, an action sport camera fixed on the UAV's brushless gimbal took a series of 150 pictures. After processing the photographs using a 3D reconstruction workflow in Photoscan, we were able to export high-resolution ortho-imagery, DSMs and 3D models. After applying GIS treatment to these images, volumetric evolutions between flights were revealed through a DDVM (Difference of Digital Volumetric Models), in order to study sand movements on coastal sand dunes.
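The DDVM step can be illustrated with a minimal sketch: subtract two gridded DSMs from successive flights and integrate the per-cell height differences into accretion and erosion volumes. The function and its inputs are hypothetical; the study performed this with GIS software:

```python
def dem_volume_change(dsm_t0, dsm_t1, cell_size):
    """Difference two DSM grids (same shape, heights in metres) and
    integrate the per-cell height changes into accretion and erosion
    volumes, given the grid cell size in metres."""
    cell_area = cell_size * cell_size
    accretion = erosion = 0.0
    for row0, row1 in zip(dsm_t0, dsm_t1):
        for z0, z1 in zip(row0, row1):
            dz = z1 - z0
            if dz > 0:
                accretion += dz * cell_area   # sand gained in this cell
            else:
                erosion += -dz * cell_area    # sand lost in this cell
    return accretion, erosion
```

In practice the two DSMs must first be co-registered (e.g. via shared ground control points) so that height differences reflect real sand movement rather than georeferencing error.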
NASA Astrophysics Data System (ADS)
Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y.; Martin, O.
2017-05-01
This article presents a coupled system consisting of a single-frequency GPS receiver and a light photogrammetric quality camera embedded in an Unmanned Aerial Vehicle (UAV). The aim is to produce high quality data that can be used in metrology applications. The issue of Integrated Sensor Orientation (ISO) of camera poses using only GPS measurements is presented and discussed. The accuracy reached by our system based on sensors developed at the French Mapping Agency (IGN) Opto-Electronics, Instrumentation and Metrology Laboratory (LOEMI) is qualified. These sensors are specially designed for close-range aerial image acquisition with a UAV. Lever-arm calibration and time synchronization are explained and performed to reach maximum accuracy. All processing steps are detailed from data acquisition to quality control of final products. We show that an accuracy of a few centimeters can be reached with this system, which uses a low-cost UAV and GPS module coupled with the IGN-LOEMI home-made camera.
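The lever-arm correction the abstract mentions amounts to rotating a body-frame offset vector (antenna to camera centre) into the mapping frame and subtracting it from the GPS antenna position. The sketch below assumes a yaw-pitch-roll rotation order and sign conventions that are illustrative, not necessarily those of the IGN-LOEMI system:

```python
import math

def lever_arm_correct(antenna_pos, lever_arm_body, yaw, pitch, roll):
    """Shift a GPS antenna position to the camera projection centre by
    rotating the body-frame lever-arm vector into the mapping frame.
    Angles in radians; rotation order (yaw-pitch-roll) and signs are
    illustrative assumptions."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # Body-to-mapping rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    return [antenna_pos[i] - sum(R[i][j] * lever_arm_body[j] for j in range(3))
            for i in range(3)]
```

With a centimetre-level ISO target, an uncalibrated lever arm of even a few centimetres would dominate the error budget, which is why the calibration step matters.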
System Aware Cybersecurity: A Multi-Sentinel Scheme to Protect a Weapons Research Lab
2015-12-07
In the simplified deployment scenario, some sensors report their output over a wireless link and other sensors are connected via CAT 5 (Ethernet...cable to reduce the chance of a wireless 'jamming' event impacting ALL sensors. In addition to this first sensor suite (Sensor Suite "A"), the team...generating wind turbines, and video reconnaissance systems on unmanned aerial vehicles (UAVs). The most basic decision problem in designing a systems
NASA Astrophysics Data System (ADS)
Higa, E.; Valencia, D.; Hunt, A.
2017-12-01
Over the past decade, the use of unmanned aerial vehicles (UAVs) has seen unprecedented growth in diverse research areas due to advances in UAV hardware and reduced total operating costs. These developments have given environmental investigators a new aerial data acquisition technique that can be used not only to survey large areas of terrain in a time-efficient and cost-effective manner but also to gather previously almost unattainable air quality data. Vertically resolved profiles of air pollutant data can be readily constructed. This project's goal is to produce a time-resolved (seasonal) aerial survey of a 150-acre section of a 1300-acre ecologically diverse park of bottomland forests, wetlands and prairies. This ecosystem provides abundant habitats for a diverse wildlife community. This section was chosen due to its close proximity to the city landfill located 0.5 miles due north of the chosen section. The process of collecting UAV aerial images at a constant altitude of approximately 200 ft on a bi-monthly basis (for a period of 6 months) has commenced. The UAV has been fitted with a custom-made mount to secure an Ultrafine Particle (UFP) counter; this is providing information on UFP levels over the study area as a proxy for airborne particle inputs to the site. Sediment samples will be taken from several runoff ponds within the survey area to evaluate possible anthropogenic contamination of the park. Post-processing imaging software, DroneDeploy, is being used to create an orthomosaic, topographic surface and 3D model that can be integrated with GIS platforms to create a comprehensive and cohesive multi-layered data set. Data sets of this nature will provide information on temporally constrained sources of runoff material to the pond areas in the preserve.
Searching Lost People with Uavs: the System and Results of the Close-Search Project
NASA Astrophysics Data System (ADS)
Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Skaloud, J.; Kornus, W.; Prades, R.; Aguilera, C.
2012-07-01
This paper will introduce the goals, concept and results of the project named CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost Search-And-Rescue (SAR) operations'. The main goal is to integrate a medium-size, helicopter-type Unmanned Aerial Vehicle (UAV), a thermal imaging sensor and an EGNOS-based multi-sensor navigation system, including an Autonomous Integrity Monitoring (AIM) capability, to support search operations in difficult-to-access areas and/or night operations. The focus of the paper is three-fold. Firstly, the operational and technical challenges of the proposed approach are discussed, such as the ultra-safe multi-sensor navigation system, the use of combined thermal and optical vision (infrared plus visible) for person recognition, and Beyond-Line-Of-Sight communications, among others. Secondly, the implementation of the integrity concept for UAV platforms is discussed herein through the AIM approach. Based on the potential of the geodetic quality analysis and on the use of the European EGNOS system as a navigation performance starting point, AIM approaches integrity from the precision standpoint; that is, the derivation of Horizontal and Vertical Protection Levels (HPLs, VPLs) from a realistic precision estimation of the position parameters is performed and compared to predefined Alert Limits (ALs). Finally, some results from the project test campaigns are described to report on particular project achievements. Together with actual Search-and-Rescue teams, the system was operated in realistic, user-chosen test scenarios. In this context, and especially focusing on the EGNOS-based UAV navigation, the AIM capability and also the RGB/thermal imaging subsystem, a summary of the results is presented.
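The precision-based AIM idea (protection levels derived from position precision and compared against Alert Limits) can be sketched as follows; the k multipliers are illustrative placeholders, not the project's calibrated values:

```python
import math

def protection_levels(sigma_e, sigma_n, sigma_u, k_h=6.0, k_v=5.33):
    """Precision-driven protection levels: scale the estimated position
    standard deviations (east, north, up) by multipliers chosen for the
    target integrity risk. The k factors here are illustrative."""
    hpl = k_h * math.sqrt(sigma_e ** 2 + sigma_n ** 2)
    vpl = k_v * sigma_u
    return hpl, vpl

def integrity_alert(hpl, vpl, hal, val):
    """Raise an alert when a protection level exceeds its Alert Limit (AL)."""
    return hpl > hal or vpl > val
```

The navigation solution is declared unavailable (or the UAV commanded to a safe state) whenever `integrity_alert` fires, which is the operational meaning of comparing HPL/VPL against the ALs.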
Hassanein, Mohamed; El-Sheimy, Naser
2018-01-01
Over the last decade, the use of unmanned aerial vehicle (UAV) technology has evolved significantly in different applications as it provides a special platform capable of combining the benefits of terrestrial and aerial remote sensing. Therefore, such technology has been established as an important source of data collection for different precision agriculture (PA) applications such as crop health monitoring and weed management. Generally, these PA applications depend on performing a vegetation segmentation process as an initial step, which aims to detect the vegetation objects in collected agriculture field images. The main result of the vegetation segmentation process is a binary image, where vegetation is presented in white and the remaining objects in black. Such a process can easily be performed using different vegetation indexes derived from multispectral imagery. Recently, to expand the use of UAV imagery systems for PA applications, it was important to reduce the cost of such systems through using low-cost RGB cameras. Thus, developing vegetation segmentation techniques for RGB images is a challenging problem. The proposed paper introduces a new vegetation segmentation methodology for low-cost UAV RGB images, which depends on using the Hue color channel. The proposed methodology follows the assumption that the colors in any agriculture field image can be divided into vegetation and non-vegetation colors. Therefore, four main steps are developed to detect five different threshold values using the hue histogram of the RGB image; these thresholds are capable of discriminating the dominant color, either vegetation or non-vegetation, within the agriculture field image. The achieved results of implementing the proposed methodology showed its ability to generate accurate and stable vegetation segmentation performance, with a mean accuracy of 87.29% and a standard deviation of 12.5%. PMID:29670055
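A minimal sketch of hue-based vegetation segmentation is shown below. It uses a single fixed green-hue interval rather than the paper's five adaptively derived histogram thresholds, so it only illustrates the general idea:

```python
# Hue-based vegetation masking sketch. The fixed hue bounds
# (70 deg to 160 deg, i.e. the green band) are an illustrative
# assumption, not the paper's adaptive thresholds.
import colorsys

def vegetation_mask(rgb_image, hue_lo=70 / 360, hue_hi=160 / 360):
    """rgb_image: rows of (r, g, b) tuples with channels in 0..255.
    Returns a binary mask (1 = vegetation) of the same shape."""
    mask = []
    for row in rgb_image:
        mask_row = []
        for r, g, b in row:
            h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            mask_row.append(1 if hue_lo <= h <= hue_hi else 0)
        mask.append(mask_row)
    return mask
```

Working in hue rather than raw RGB makes the mask largely insensitive to illumination changes, which is the property the methodology exploits.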
NASA Astrophysics Data System (ADS)
Ilehag, R.; Schenk, A.; Hinz, S.
2017-08-01
This paper presents a concept for classification of facade elements, based on the material and the geometry of the elements in addition to the thermal radiation of the facade with the usage of a multimodal Unmanned Aerial Vehicle (UAV) system. Once the concept is finalized and functional, the workflow can be used for energy demand estimations for buildings by exploiting existing methods for estimation of heat transfer coefficient and the transmitted heat loss. The multimodal system consists of a thermal, a hyperspectral and an optical sensor, which can be operational with a UAV. While dealing with sensors that operate in different spectra and have different technical specifications, such as the radiometric and the geometric resolution, the challenges that are faced are presented. Addressed are the different approaches of data fusion, such as image registration, generation of 3D models by performing image matching and the means for classification based on either the geometry of the object or the pixel values. As a first step towards realizing the concept, the result from a geometric calibration with a designed multimodal calibration pattern is presented.
Stereo Correspondence Using Moment Invariants
NASA Astrophysics Data System (ADS)
Premaratne, Prashan; Safaei, Farzad
Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAVs) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and land-based small vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information through pairs of stereo images, can still be computationally expensive and unreliable. This is mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, which reduces the computational complexity and improves the accuracy of the disparity measures, a result that is significant for use in UAVs and in small robotic vehicles.
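Matching regions by moment invariants can be sketched with the first two Hu invariants, which are invariant to translation and rotation (and, after normalisation, scale). The nearest-neighbour matching step here is an illustrative stand-in for the authors' full correspondence pipeline:

```python
# Feature-based correspondence via moment invariants (sketch).
def hu12(region):
    """First two Hu moment invariants of a binary region given as rows
    of 0/1 values."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(region):
        for x, v in enumerate(row):
            m00 += v; m10 += x * v; m01 += y * v
    xc, yc = m10 / m00, m01 / m00
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(region):
        for x, v in enumerate(row):
            mu20 += (x - xc) ** 2 * v
            mu02 += (y - yc) ** 2 * v
            mu11 += (x - xc) * (y - yc) * v
    n = m00 ** 2  # normalisation factor for moments of order p + q = 2
    e20, e02, e11 = mu20 / n, mu02 / n, mu11 / n
    return e20 + e02, (e20 - e02) ** 2 + 4 * e11 ** 2

def best_match(region, candidates):
    """Index of the candidate whose invariant vector is nearest."""
    h = hu12(region)
    dists = [sum((a - b) ** 2 for a, b in zip(h, hu12(c))) for c in candidates]
    return dists.index(min(dists))
```

Because the invariants summarise a whole region in two numbers, comparing them is far cheaper than dense pixel-wise correlation, which is the computational advantage the abstract claims.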
UNMANNED AERIAL VEHICLE (UAV) HYPERSPECTRAL REMOTE SENSING FOR DRYLAND VEGETATION MONITORING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nancy F. Glenn; Jessica J. Mitchell; Matthew O. Anderson
2012-06-01
UAV-based hyperspectral remote sensing capabilities developed by the Idaho National Lab and Idaho State University, Boise Center Aerospace Lab, were recently tested via demonstration flights that explored the influence of altitude on geometric error, image mosaicking, and dryland vegetation classification. The test flights successfully acquired usable flightline data capable of supporting classifiable composite images. Unsupervised classification results support vegetation management objectives that rely on mapping shrub cover and distribution patterns. Overall, supervised classifications performed poorly despite spectral separability in the image-derived endmember pixels. Future mapping efforts that leverage ground reference data, ultra-high spatial resolution photos and time series analysis should be able to effectively distinguish native grasses such as Sandberg bluegrass (Poa secunda), from invasives such as burr buttercup (Ranunculus testiculatus) and cheatgrass (Bromus tectorum).
Multi-Temporal Classification and Change Detection Using Uav Images
NASA Astrophysics Data System (ADS)
Makuti, S.; Nex, F.; Yang, M. Y.
2018-05-01
In this paper different methodologies for the classification and change detection of UAV image blocks are explored. UAV is not only the cheapest platform for image acquisition but it is also the easiest platform to operate in repeated data collections over a changing area like a building construction site. Two change detection techniques have been evaluated in this study: the pre-classification and the post-classification algorithms. These methods are based on three main steps: feature extraction, classification and change detection. A set of state of the art features have been used in the tests: colour features (HSV), textural features (GLCM) and 3D geometric features. For classification purposes Conditional Random Field (CRF) has been used: the unary potential was determined using the Random Forest algorithm while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings have been considered to assess the performance of these methods in such a challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method: measured by overall accuracy, post-classification achieved up to 62.6 % while pre-classification change detection achieved 46.5 %. These results represent a first useful indication for future work and developments.
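The post-classification approach boils down to classifying each epoch independently and then comparing labels pixel by pixel. A minimal sketch with hypothetical label maps (the paper's per-epoch classifier is a CRF-refined Random Forest, not shown here):

```python
def change_map(labels_t0, labels_t1):
    """Per-pixel change mask plus a from-to transition tally for two
    label maps of identical shape."""
    changes, transitions = [], {}
    for row0, row1 in zip(labels_t0, labels_t1):
        crow = []
        for a, b in zip(row0, row1):
            changed = a != b
            crow.append(1 if changed else 0)
            if changed:
                transitions[(a, b)] = transitions.get((a, b), 0) + 1
        changes.append(crow)
    return changes, transitions
```

The transition tally is what makes post-classification attractive on a construction site: it reports not just where change occurred but what changed into what.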
NASA Astrophysics Data System (ADS)
Cheong, M. K.; Bahiki, M. R.; Azrad, S.
2016-10-01
The main goal of this study is to demonstrate an approach to achieving collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an Optitrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithm based on image sensors. In the results, the QUAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance of the QUAV was better with obstacles having dull surfaces than with shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for application in short-range collision avoidance.
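The "common triangulation" used for the laser-spot depth estimate follows the standard rectified-stereo relation depth = f·B / disparity. The sketch below assumes a calibrated, rectified camera pair; the parameter values in the usage note are illustrative:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth (m) of a point seen at column x_left in the left image and
    x_right in the right image, for a rectified stereo pair with focal
    length in pixels and baseline in metres."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity
```

For example, with a hypothetical 1000 px focal length and 0.12 m baseline, a 20 px disparity corresponds to a 6 m obstacle distance; note how depth resolution degrades quadratically as disparity shrinks, which is why the method is limited to short range.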
SAR System for UAV Operation with Motion Error Compensation beyond the Resolution Cell
González-Partida, José-Tomás; Almorox-González, Pablo; Burgos-García, Mateo; Dorta-Naranjo, Blas-Pablo
2008-01-01
This paper presents an experimental Synthetic Aperture Radar (SAR) system that is under development in the Universidad Politécnica de Madrid. The system uses Linear Frequency Modulated Continuous Wave (LFM-CW) radar with a two antenna configuration for transmission and reception. The radar operates in the millimeter-wave band with a maximum transmitted bandwidth of 2 GHz. The proposed system is being developed for Unmanned Aerial Vehicle (UAV) operation. Motion errors in UAV operation can be critical. Therefore, this paper proposes a method for focusing SAR images with movement errors larger than the resolution cell. Typically, this problem is solved using two processing steps: first, coarse motion compensation based on the information provided by an Inertial Measuring Unit (IMU); and second, fine motion compensation for the residual errors within the resolution cell based on the received raw data. The proposed technique tries to focus the image without using data of an IMU. The method is based on a combination of the well known Phase Gradient Autofocus (PGA) for SAR imagery and typical algorithms for translational motion compensation on Inverse SAR (ISAR). This paper shows the first real experiments for obtaining high resolution SAR images using a car as a mobile platform for our radar. PMID:27879884
NASA Astrophysics Data System (ADS)
Cruden, A. R.; Vollgger, S.
2016-12-01
The emerging capability of UAV photogrammetry combines a simple and cost-effective method to acquire digital aerial images with advanced computer vision algorithms that compute spatial datasets from a sequence of overlapping digital photographs from various viewpoints. Depending on flight altitude and camera setup, sub-centimeter spatial resolution orthophotographs and textured dense point clouds can be achieved. Orientation data can be collected for detailed structural analysis by digitally mapping such high-resolution spatial datasets in a fraction of the time and with higher fidelity compared to traditional mapping techniques. Here we describe a photogrammetric workflow applied to a structural study of folds and fractures within alternating layers of sandstone and mudstone at a coastal outcrop in SE Australia. We surveyed this location using a downward looking digital camera mounted on a commercially available multi-rotor UAV that autonomously followed waypoints at a set altitude and speed to ensure sufficient image overlap, minimum motion blur and an appropriate resolution. The use of surveyed ground control points allowed us to produce a geo-referenced 3D point cloud and an orthophotograph from hundreds of digital images at a spatial resolution < 10 mm per pixel, and cm-scale location accuracy. Orientation data of brittle and ductile structures were semi-automatically extracted from these high-resolution datasets using open-source software. This resulted in an extensive and statistically relevant orientation dataset that was used to 1) interpret the progressive development of folds and faults in the region, and 2) to generate a 3D structural model that underlines the complex internal structure of the outcrop and quantifies spatial variations in fold geometries. Overall, our work highlights how UAV photogrammetry can contribute to new insights in structural analysis.
Comparative Analysis of Uninhibited and Constrained Avian Wing Aerodynamics
NASA Astrophysics Data System (ADS)
Cox, Jordan A.
The flight of birds has intrigued and motivated man for many years. Bird flight served as the primary inspiration of flying machines developed by Leonardo Da Vinci, Otto Lilienthal, and even the Wright brothers. Avian flight has once again drawn the attention of the scientific community as unmanned aerial vehicles (UAVs) are becoming not only more popular but also smaller. Birds are once again influencing the designs of aircraft. Small UAVs operating within flight conditions and low Reynolds numbers common to birds are not yet capable of the high levels of control and agility that birds display with ease. Many researchers believe the potential to improve small UAV performance can be obtained by applying features common to birds such as feathers and flapping flight to small UAVs. Although the effects of feathers on a wing have received some attention, the effects of localized transient feather motion and surface geometry on the flight performance of a wing have been largely overlooked. In this research, the effects of freely moving feathers on a preserved red tailed hawk wing were studied. A series of experiments were conducted to measure the aerodynamic forces on a hawk wing with varying levels of feather movement permitted. Angle of attack and air speed were varied within the natural flight envelope of the hawk. Subsequent identical tests were performed with the feather motion constrained through the use of externally-applied surface treatments. Additional tests involved the study of an absolutely fixed geometry mold-and-cast wing model of the original bird wing. Final tests were also performed after applying surface coatings to the cast wing. High speed videos taken during tests revealed the extent of the feather movement between wing models. Images of the microscopic surface structure of each wing model were analyzed to establish variations in surface geometry between models.
Recorded aerodynamic forces were then compared to the known feather motion and surface geometry to correlate the performance with these two features. The results of this study revealed that the performance of the bird wing was directly affected by feather motion. It was also found that the motion of the covert and secondary covert feathers had the greatest influence on performance. Increased coefficients of lift and drag were found when higher frequencies of motion of these feathers were observed. Noticeable reductions in the coefficient of drag were found to be associated with micron-level variations in the depth of surface features on the wing.
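The aerodynamic forces reported above are conventionally normalized into lift and drag coefficients by dividing by dynamic pressure times wing area. A minimal sketch of that normalization (the numbers below are illustrative only, not measured values from the hawk-wing study):

```python
def aero_coefficients(lift_n, drag_n, airspeed_ms, wing_area_m2, rho=1.225):
    """Return (C_L, C_D) from measured lift and drag forces in newtons.

    Uses the standard normalization C = F / (0.5 * rho * V^2 * S),
    with rho defaulting to sea-level air density in kg/m^3.
    """
    qs = 0.5 * rho * airspeed_ms**2 * wing_area_m2  # dynamic pressure * area
    return lift_n / qs, drag_n / qs

# Illustrative numbers (not data from the study):
cl, cd = aero_coefficients(lift_n=6.0, drag_n=0.9, airspeed_ms=10.0, wing_area_m2=0.10)
```

Comparing such coefficients across the free-feather, constrained, and cast wings removes the effect of airspeed so that only geometry and feather motion remain.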
Budget UAV Systems for the Prospection of Small- and Medium-Scale Archaeological Sites
NASA Astrophysics Data System (ADS)
Ostrowski, W.; Hanus, K.
2016-06-01
One popular use of UAVs in photogrammetry is archaeological documentation. The wide range of low-cost (consumer-grade) UAVs, together with user-friendly photogrammetric software that yields satisfactory results, facilitates the preparation of documentation for small archaeological sites. However, using solutions of this kind is much more problematic for larger areas. The limited possibilities of autonomous flight make it significantly harder to obtain data for areas too large to be covered during a single mission. Moreover, the platforms used are sometimes not equipped with telemetry systems, which makes navigating and guaranteeing a similar quality of data across separate flights difficult. The simplest solution is to use a better UAV; however, the cost of such devices often exceeds the financial capabilities of archaeological expeditions. The aim of this article is to present a methodology for obtaining data for medium-scale areas using only a basic UAV. The proposed methodology assumes a simple multirotor equipped with neither a flight planning system nor telemetry. Navigation of the platform is based solely on live-view images sent from the camera attached to the UAV. The presented survey was carried out using a simple GoPro camera which, from the perspective of photogrammetric use, was not the optimal configuration due to its fisheye geometry. Another limitation is the actual operational range of UAVs, which for cheaper systems rarely exceeds 1 kilometre and is in fact often much smaller. The surveyed area must therefore be divided into sub-blocks that correspond to the range of the drone. This is inconvenient since the blocks must overlap so that they can later be merged during processing, which increases both the length of the required flights and the computing power needed to process a greater number of images.
These issues make prospection inconvenient, but not impossible. Our paper presents our experiences through two case studies: surveys conducted in Nepal under the aegis of UNESCO, and work carried out as part of a Polish archaeological expedition in Cyprus, both of which show that the proposed methodology yields satisfactory results. The article is an important voice in the ongoing debate between commercial and academic archaeologists over the balance between the required standards of archaeological work and the economic capabilities of archaeological missions.
NASA Astrophysics Data System (ADS)
Vasterling, Margarete; Schloemer, Stefan; Fischer, Christian; Ehrler, Christoph
2010-05-01
Spontaneous combustion of coal and the resulting coal fires lead to very high temperatures in the subsurface. A large amount of the heat is transferred to the surface by convective and conductive transport, inducing a more or less pronounced thermal anomaly. During the past decade, satellite-based infrared imaging (ASTER, MODIS) was the method of choice for coal fire detection on local and regional scales. However, its resolution is by far too low for a detailed analysis of single coal fires, which is an essential prerequisite for corrective measures (i.e. fire fighting) and for calculating carbon dioxide emissions based on a complex correlation between energy release and CO2 generation. Consequently, within the framework of the Sino-German research project "Innovative Technologies for Exploration, Extinction and Monitoring of Coal Fires in Northern China", a new concept was developed and successfully tested. An unmanned aerial vehicle (UAV) was equipped with a lightweight camera for thermographic imaging (resolution 160 × 120 pixels, dynamic range -20 to 250°C) and for visual imaging. The UAV, designed as an octocopter, is able to hover at GPS-controlled waypoints during predefined flight missions. The application of a UAV has several advantages. Compared to point measurements on the ground, the thermal imagery quickly provides the spatial distribution of the temperature anomaly with much better resolution. Areas otherwise not accessible (due to topography, fire-induced cracks, etc.) can easily be investigated. The results of areal surveys of two coal fires in Xinjiang are presented. Georeferenced thermal and visual images were mosaicked together and analyzed. UAV-borne data compare well with temperatures measured directly on the ground and cover large areas in detail. However, measuring surface temperature alone is not sufficient. Simultaneous measurements made at the surface and at roughly 15 cm depth revealed substantial temperature gradients in the upper soil.
Thus the temperature measured at the surface underestimates the energy emitted by the subsurface coal fire. In addition, surface temperature is strongly influenced by solar radiation and the prevailing ambient conditions (wind, temperature, humidity). As a consequence, there is no simple correlation between surface and subsurface soil temperature. Efforts have been made to set up a coupled energy transport and energy balance model for the near surface, considering thermal conduction, solar irradiation, thermal radiative energy and ambient temperature. The model can help to validate space-borne and UAV-borne thermal imagery and link surface to subsurface temperature, but it depends on in-situ measurements for input parameter determination and calibration. The results obtained so far strongly necessitate the integration of different data sources (in-situ / remote; point / area; local / medium scale) to obtain a reliable energy release estimate, which is then used for coal fire characterization.
NASA Astrophysics Data System (ADS)
Themistocleous, K.; Agapiou, A.; Papadavid, G.; Christoforou, M.; Hadjimitsis, D. G.
2015-10-01
This paper focuses on the use of Unmanned Aerial Vehicles (UAVs) over the study area of Pissouri in Cyprus to document the sloping landscapes of the area. The study area has been affected by overgrazing, which has led to shifts in the vegetation patterns and a changing microtopography of the soil. The UAV images were used to generate digital elevation models (DEMs) to examine the changes in microtopography. In addition, orthophotos were used to detect changes in vegetation patterns. The combined data of the digital elevation models and the orthophotos will be used to detect the occurrence of catastrophic shifts and mechanisms for desertification in the study area due to overgrazing. This study is part of the "CASCADE - Catastrophic shifts in drylands" project.
Mobile 3d Mapping with a Low-Cost Uav System
NASA Astrophysics Data System (ADS)
Neitzel, F.; Klonowski, J.
2011-09-01
In this contribution it is shown how a UAV system can be built at low cost. The components of the system, the equipment, and the control software are presented. Furthermore, an implemented programme for photogrammetric flight planning and its execution are described. The main focus of this contribution is on the generation of 3D point clouds from digital imagery. For this purpose, web services and free software solutions are presented which automatically generate 3D point clouds from arbitrary image configurations. Possibilities of georeferencing are described and the achieved accuracy is determined. The presented workflow is finally used for the acquisition of 3D geodata. The example of a landfill survey shows that marketable products can be derived using a low-cost UAV.
Wetland Assessment Using Unmanned Aerial Vehicle (uav) Photogrammetry
NASA Astrophysics Data System (ADS)
Boon, M. A.; Greenfield, R.; Tesfamichael, S.
2016-06-01
The use of Unmanned Aerial Vehicle (UAV) photogrammetry is a valuable tool to enhance our understanding of wetlands. Accurate planning derived from this technological advancement allows for more effective management and conservation of wetland areas. This paper presents results of a study that investigated the use of UAV photogrammetry as a tool to enhance the assessment of wetland ecosystems. The UAV images were collected during a single flight of 2½ hours over a 100 ha area at the Kameelzynkraal farm, Gauteng Province, South Africa. An AKS Y-6 MKII multi-rotor UAV and a digital camera on a motion-compensated gimbal mount were utilised for the survey. Twenty ground control points (GCPs) were surveyed with a Trimble GPS to achieve geometric precision and georeferencing accuracy. Structure-from-Motion (SfM) computer vision techniques were used to derive ultra-high-resolution point clouds, orthophotos and 3D models from the multi-view photos. The geometric accuracy of the data based on the 20 GCPs was 0.018 m overall, with a vertical root mean squared error (RMSE) of 0.0025 m and an overall root mean square reprojection error of 0.18 pixel. The UAV products were then edited and subsequently analysed and interpreted, and key attributes were extracted using a selection of tools and software applications to enhance the wetland assessment. The results exceeded our expectations and provided a valuable and accurate enhancement to the wetland delineation, classification and health assessment, which would have been difficult to achieve even with detailed field studies.
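The accuracy figures quoted above are root mean squared errors over the surveyed GCPs. A minimal sketch of that computation (the coordinates below are hypothetical, not the study's data):

```python
import numpy as np

def rmse(estimated, surveyed):
    """Root mean squared error between model-derived and GPS-surveyed coordinates."""
    diff = np.asarray(estimated, float) - np.asarray(surveyed, float)
    return float(np.sqrt(np.mean(diff**2)))

# Hypothetical vertical (z) coordinates for a handful of GCPs, in metres:
z_model    = [100.002, 99.998, 100.003, 100.001]
z_surveyed = [100.000, 100.000, 100.000, 100.000]
vertical_rmse = rmse(z_model, z_surveyed)
```

The same function applied to horizontal residuals, or to reprojection residuals in pixel units, yields the other accuracy measures reported.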
Vanegas, Fernando; Bratanov, Dmitry; Powell, Kevin; Weiss, John; Gonzalez, Felipe
2018-01-17
Recent advances in remotely sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a UAV remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used (the sensors, the UAV, and the flight operations), the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne datasets with ground-based datasets. Finally, we present relevant results on the correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing and integrating multispectral, hyperspectral, ground and spatial data to remotely sense different variables in different applications, such as, in this case, plant pest surveillance. Such a methodology would provide researchers, agronomists, and UAV practitioners with reliable data collection protocols and methods to achieve faster processing techniques and to integrate multiple sources of data in diverse remote sensing applications.
Feature-Based Approach for the Registration of Pushbroom Imagery with Existing Orthophotos
NASA Astrophysics Data System (ADS)
Xiong, Weifeng
Low-cost Unmanned Aerial Vehicles (UAVs) are rapidly becoming suitable platforms for acquiring remote sensing data for a wide range of applications. For example, a UAV-based mobile mapping system (MMS) is emerging as a novel phenotyping tool that delivers several advantages to alleviate the drawbacks of conventional manual plant trait measurements. Moreover, UAVs equipped with directly geo-referenced frame cameras and pushbroom scanners can acquire geospatial data for comprehensive high-throughput phenotyping. UAV mobile mapping platforms are low-cost and easy to use, can fly closer to the objects, and fill an important gap between ground wheel-based and traditional manned airborne platforms. However, consumer-grade UAVs can carry only relatively light payloads, and their flying time is limited by battery life. These restrictions unfortunately force potential users to adopt lower-quality direct geo-referencing and imaging systems that may negatively impact the quality of the deliverables. Recent advances in sensor calibration and automated triangulation have made it feasible to obtain accurate mapping using low-cost camera systems equipped with consumer-grade GNSS/INS units. However, ortho-rectification of the data from a linear-array scanner is challenging for low-cost UAV systems, because the geo-location information derived from pushbroom sensors is quite sensitive to the performance of the direct geo-referencing unit. This thesis presents a novel approach for improving the ortho-rectification of hyperspectral pushbroom scanner imagery with the aid of orthophotos generated from frame cameras, through the identification of conjugate features while modeling the impact of residual artifacts in the direct geo-referencing information.
The experimental results qualitatively and quantitatively demonstrated the feasibility of the proposed methodology in improving the geo-referencing accuracy of real datasets collected over an agricultural field.
Interactive Cadastral Boundary Delineation from Uav Data
NASA Astrophysics Data System (ADS)
Crommelinck, S.; Höfle, B.; Koeva, M. N.; Yang, M. Y.; Vosselman, G.
2018-05-01
Unmanned aerial vehicles (UAVs) are evolving as an alternative tool to acquire land tenure data. UAVs can capture geospatial data at high quality and resolution in a cost-effective, transparent and flexible manner, from which visible land parcel boundaries, i.e., cadastral boundaries, are delineable. This delineation is currently not automated at all, even though physical objects that are automatically retrievable through image analysis methods mark a large portion of cadastral boundaries. This study proposes (i) a methodology that automatically extracts and processes candidate cadastral boundary features from UAV data, and (ii) a procedure for a subsequent interactive delineation. Part (i) consists of two state-of-the-art computer vision methods, namely gPb contour detection and SLIC superpixels, as well as a classification step assigning costs to each outline according to local boundary knowledge. Part (ii) allows a user-guided delineation by calculating least-cost paths along the previously extracted and weighted lines. The approach is tested on visible road outlines in two UAV datasets from Germany. Results show that all roads can be delineated comprehensively. Compared to manual delineation, the number of clicks per 100 m is reduced by up to 86%, while a similar localization quality is obtained. The approach shows promising potential to reduce the effort of the manual delineation currently employed for indirect (cadastral) surveying.
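The user-guided step above computes least-cost paths between user clicks over a cost surface in which likely boundary pixels are cheap. A minimal sketch of that idea, using Dijkstra's algorithm on a toy 4-connected cost grid (our own illustration, not the authors' implementation):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra's algorithm on a 2D grid; entering a cell adds that cell's cost.

    `cost` is a list of lists of non-negative numbers; low values mark
    likely boundary pixels, high values penalize leaving them.
    Returns (total_cost, path) where path is a list of (row, col) cells.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[goal], path[::-1]

# Toy grid: a cheap "boundary" runs along the top row and right column.
grid = [
    [1, 1, 1],
    [9, 9, 1],
    [9, 9, 1],
]
total, path = least_cost_path(grid, (0, 0), (2, 2))
```

The path snaps to the cheap cells, which is exactly the behaviour that lets a user trace a boundary with only a few clicks.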
UAV, DGPS, and Laser Transit Mapping of Microbial Mat Ecosystems on Little Ambergris Cay, B.W.I.
NASA Astrophysics Data System (ADS)
Stein, N.; Quinn, D. P.; Grotzinger, J. P.; Fischer, W. W.; Knoll, A. H.; Cantine, M.; Gomes, M. L.; Grotzinger, H. M.; Lingappa, U.; Metcalfe, K.; O'Reilly, S. S.; Orzechowski, E. A.; Riedman, L. A.; Strauss, J. V.; Trower, L.
2016-12-01
Little Ambergris Cay is a 6 km long, 1.6 km wide uninhabited island on the Caicos platform in the Turks and Caicos. Little Ambergris provides an analog for the study of microbial mat development in the sedimentary record. Recent field mapping during July of 2016 used UAV- and satellite-based images, differential GPS (DGPS), and total station theodolite (TST) measurements to characterize sedimentology and biofacies across the entirety of Little Ambergris Cay. Nine facies were identified in-situ during DGPS island transects including oolitic grainstone bedrock, sand flats, cutbank and mat-filled channels, hardground-lined bays with EPS-rich mat particles, mangroves, EPS mats, polygonal mats, and mats with blistered surface texture. These facies were mapped onto a 15 cm/pixel visible light orthomosaic of the island generated from more than 1500 nadir images taken by a UAV at 350 m standoff distance. A corresponding stereogrammetric digital elevation map was generated from drone images and 910 DGPS measurements acquired during several island transects. More than 1000 TST measurements provide additional facies elevation constraints, control points for satellite-based water depth calculations, and means to cross-calibrate and reconstruct the topographic profile of bedrock exposed at the beach. Additionally, the thickness of the underlying Holocene sediment fill was estimated over several island transects using a depth probe. Sub-cm resolution drone-based orthophotos of microbial mats were used to quantify polygonal mat size and textures. The mapping results highlight that sedimentary and bio-facies (including mat morphology and fabrics) correlate strongly with elevation. Notably, mat morphology was observed to be highly sensitive to cm-scale variations in topography and water depth. The productivity metric NDVI was computed for mat and vegetation facies using nadir images from a UAV-mounted two-band red-NIR camera. 
In combination with in situ facies mapping, these measurements provided ground truth for reduction of multispectral Landsat and Worldview-2 satellite images to evaluate mat distribution and diversity across a range of spatial and spectral facies variations.
LUNA: low-flying UAV-based forest monitoring system
NASA Astrophysics Data System (ADS)
Keizer, Jan Jacob; Pereira, Luísa; Pinto, Glória; Alves, Artur; Barros, Antonio; Boogert, Frans-Joost; Cambra, Sílvia; de Jesus, Cláudia; Frankenbach, Silja; Mesquita, Raquel; Serôdio, João; Martins, José; Almendra, Ricardo
2015-04-01
The LUNA project aims to develop an information system for precision forestry and, in particular, for the monitoring of eucalypt plantations, based first and foremost on multi-spectral imagery acquired using low-flying UAVs. The presentation will focus on the first phase of image acquisition, processing and analysis for a series of pot experiments addressing the main threats to early-stage eucalypt plantations in Portugal, i.e. acute, chronic and cyclic hydric stress, nutrient stress, fungal infections and insect plague attacks. The imaging results will be compared with spectroscopic measurements as well as with eco-physiological and plant morphological measurements. Furthermore, the presentation will show initial results of the project's second phase, comprising field tests in existing eucalypt plantations in north-central Portugal.
3D Tree Dimensionality Assessment Using Photogrammetry and Small Unmanned Aerial Vehicles
Gatziolis, Demetrios; Lienard, Jean F; Vogs, Andre; Strigul, Nikolay S
2015-01-01
Detailed, precise, three-dimensional (3D) representations of individual trees are a prerequisite for an accurate assessment of tree competition, growth, and morphological plasticity. Until recently, our ability to measure the dimensionality, spatial arrangement, shape of trees, and shape of tree components with precision has been constrained by technological and logistical limitations and cost. Traditional methods of forest biometrics provide only partial measurements and are labor intensive. Active remote technologies such as LiDAR operated from airborne platforms provide only partial crown reconstructions. The use of terrestrial LiDAR is laborious, has portability limitations and high cost. In this work we capitalized on recent improvements in the capabilities and availability of small unmanned aerial vehicles (UAVs), light and inexpensive cameras, and developed an affordable method for obtaining precise and comprehensive 3D models of trees and small groups of trees. The method employs slow-moving UAVs that acquire images along predefined trajectories near and around targeted trees, and computer vision-based approaches that process the images to obtain detailed tree reconstructions. After we confirmed the potential of the methodology via simulation we evaluated several UAV platforms, strategies for image acquisition, and image processing algorithms. We present an original, step-by-step workflow which utilizes open source programs and original software. We anticipate that future development and applications of our method will improve our understanding of forest self-organization emerging from the competition among trees, and will lead to a refined generation of individual-tree-based forest models. PMID:26393926
NASA Astrophysics Data System (ADS)
Possoch, M.; Bieker, S.; Hoffmeister, D.; Bolten, A.; Schellberg, J.; Bareth, G.
2016-06-01
Remote sensing of crop biomass is important for precision agriculture, which aims to improve nutrient use efficiency and to develop better stress and disease management. In this study, multi-temporal crop surface models (CSMs) were generated from UAV-based dense imaging in order to derive the plant height distribution and to determine forage mass. The low-cost UAV-based RGB imaging was carried out in a grassland experiment at the University of Bonn, Germany, in summer 2015. The test site comprised three consecutive growths including six different nitrogen fertilizer levels and three replicates, in total 324 plots with a size of 1.5 × 1.5 m. Each growth consisted of six harvesting dates. RGB images and biomass samples were taken at twelve dates, roughly biweekly, within two growths between June and September 2015. Images were taken with a DJI Phantom 2 in combination with a 2D Zenmuse gimbal and a GoPro Hero 3 (black edition). Overlapping images were captured at 13 to 16 m and overview images at approximately 60 m height, at 2 frames per second. The RGB vegetation index (RGBVI) was calculated as the normalized difference of the squared green reflectance and the product of blue and red reflectance from the non-calibrated images. The post-processing was done with Agisoft PhotoScan Professional (SfM-based) and Esri ArcGIS. 14 ground control points (GCPs) were located in the field, marked by 30 cm × 30 cm markers and measured with an RTK-GPS (HiPer Pro Topcon) with 0.01 m horizontal and vertical precision. The errors of the spatial resolution in the x-, y- and z-directions were on the order of 3-4 cm. From each survey, one distortion-corrected image was also georeferenced using the same GCPs and used for the RGBVI calculation. The results were used to analyse and evaluate the relationship between the estimated plant height derived with this low-cost UAV system and forage mass. Results indicate that plant height seems to be a suitable indicator for forage mass.
There is a robust correlation between crop height and dry matter (R² = 0.6). The RGBVI does not seem to be a suitable indicator for forage mass in grassland, although combining plant height and the RGBVI against dry matter provided a medium correlation (R² = 0.5).
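The RGBVI described above, the normalized difference of the squared green reflectance and the blue-red product, can be sketched as follows (our own illustration with made-up reflectance values):

```python
import numpy as np

def rgbvi(r, g, b, eps=1e-12):
    """RGB vegetation index: (G^2 - B*R) / (G^2 + B*R).

    Inputs are arrays of red/green/blue reflectance (or digital numbers);
    eps guards against division by zero on completely dark pixels.
    """
    r, g, b = (np.asarray(x, float) for x in (r, g, b))
    num = g**2 - b * r
    den = g**2 + b * r
    return num / (den + eps)

# Two illustrative pixels: vegetated (green-dominated) and bare soil.
index = rgbvi(r=[0.10, 0.30], g=[0.40, 0.25], b=[0.05, 0.20])
```

Green-dominated pixels push the index toward +1, while soil pixels with balanced or red-dominated reflectance stay near zero, which is why the index separates canopy from background even in uncalibrated imagery.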
The payload bay in the nose of NASA's Altair unmanned aerial vehicle (UAV) will be able to carry up
NASA Technical Reports Server (NTRS)
2002-01-01
The payload bay in the nose of NASA's Altair unmanned aerial vehicle (UAV), shown here during final construction at General Atomics Aeronautical Systems, Inc., (GA-ASI) facility at Adelanto, Calif., will be able to carry up to 700 lbs. of sensors, imaging equipment and other instruments for Earth science missions. General Atomics Aeronautical Systems, Inc., is developing the Altair version of its Predator B unmanned reconnaissance aircraft under NASA's Environmental Research Aircraft and Sensor Technology (ERAST) project. NASA plans to use the Altair as a technology demonstrator to validate a variety of command and control technologies for UAVs, as well as demonstrate the capability to perform a variety of Earth science missions. The Altair is designed to carry a 700-lb. payload of scientific instruments and imaging equipment for as long as 32 hours at up to 52,000 feet altitude. Eleven-foot extensions have been added to each wing, giving the Altair an overall wingspan of 86 feet with an aspect ratio of 23. It is powered by a 700-hp. rear-mounted TPE-331-10 turboprop engine, driving a three-blade propeller. Altair is scheduled to begin flight tests in the fourth quarter of 2002, and be acquired by NASA following successful completion of basic airworthiness tests in early 2003 for evaluation of over-the-horizon control, "detect, see and avoid" and other technologies required to allow UAVs to operate safely with other aircraft in the national airspace.
NASA Astrophysics Data System (ADS)
Lei, Tianjie; Zhang, Yazhen; Wang, Xingyong; Fu, Jun'e.; Li, Lin; Pang, Zhiguo; Zhang, Xiaolei; Kan, Guangyuan
2017-07-01
A remote sensing system fitted on an Unmanned Aerial Vehicle (UAV) can obtain clear images and high-resolution aerial photographs. It has the advantages of strong real-time capability, flexibility and convenience; it is largely free from the influence of the external environment, low in cost, able to fly low under clouds and able to operate around the clock. When an earthquake happens, it can safely and reliably reach places that human staff can hardly approach, such as areas hit by secondary geological disasters. The system can respond to secondary geological disaster monitoring in a timely and precise manner by obtaining first-hand information as quickly as possible, providing a unique emergency response capacity and a scientific basis for overall decision-making. It can greatly enhance the capability of an on-site disaster emergency team in data collection and transmission. These advantages give UAV remote sensing an irreplaceable role in monitoring the dynamics and impacts of secondary geological disasters. Taking landslides and barrier lakes as examples, this paper explores the basic application and process of UAV remote sensing in disaster emergency relief. UAV high-resolution remote sensing images were exploited to assess the situation in disaster-hit areas and to monitor secondary geological disasters rapidly, systematically and continuously. Furthermore, a rapid quantitative assessment of the distribution and size of landslides and barrier lakes was carried out. The monitoring results can support relevant government departments and rescue teams, providing detailed and reliable scientific evidence for disaster relief and decision-making.
UAV based mapping of variation in grassland yield for forage production in Arctic environments
NASA Astrophysics Data System (ADS)
Davids, C.; Karlsen, S. R.; Jørgensen, M.; Ancin Murguzur, F. J.
2017-12-01
Grassland cultivation for animal feed is the key agricultural activity in northern Norway. Even though the growing season has lengthened by at least a week in the last 30 years, grassland yields appear to have declined, probably due to more challenging winter conditions and changing agronomy practices. The ability to forecast local and regional crop productivity would assist farmers with management decisions and would provide local and national authorities with a better overview of productivity and potential problems due to e.g. winter damage. Remote sensing technology has long been used to estimate and map the variability of various biophysical parameters, but calibration is important. In order to establish the relationship between spectral reflectance and grass yield in northern European environments, we combine Sentinel-2 time series, UAV-based multispectral measurements and ground-based spectroradiometry with biomass analyses and observations of species composition. In this presentation we focus on the results of the UAV data acquisition. We used a multirotor UAV with different sensors (a multispectral Rikola camera, and NDVI and RGB cameras) to image a number of cultivated grasslands of different age and productivity in northern Norway in June/July 2016 and 2017. Following UAV data acquisition, 10 to 20 in situ measurements were made per field using a FieldSpec3 spectroradiometer (350-2500 nm). In addition, samples were taken to determine biomass and grass species composition. The imaging and sampling were done immediately prior to harvesting. The Rikola camera, when used as a stand-alone camera mounted on a UAV, can collect 15 bands with a spectral width of 10-15 nm in the range 500-890 nm. In an initial analysis of the 2016 data we investigated how well different vegetation indices correlated with biomass and showed that vegetation indices that include red-edge bands perform better than widely used indices such as NDVI.
We will extend the analysis with partial least squares regression once the 2017 data become available, and in this presentation we will show the results of both the partial least squares regression analysis and the vegetation indices for the pooled data from the 2016 and 2017 acquisitions.
Scaling forest phenology from trees to the landscape using an unmanned aerial vehicle
NASA Astrophysics Data System (ADS)
Klosterman, S.; Melaas, E. K.; Martinez, A.; Richardson, A. D.
2013-12-01
Vegetation phenology monitoring has yielded a decades-long archive documenting the impacts of global change on the biosphere. However, the coarse spatial resolution of remote sensing obscures the organism-level processes driving phenology, while point measurements on the ground limit the extent of observation. Unmanned aerial vehicles (UAVs) enable low-altitude remote sensing at higher spatial and temporal resolution than available from spaceborne platforms, and have the potential to elucidate the links between organism-scale processes and landscape-scale analyses of terrestrial phenology. This project demonstrates the use of a low-cost multirotor UAV, equipped with a consumer-grade digital camera, for observation of deciduous forest phenology and comparison to ground- and tower-based data as well as remote sensing. The UAV was flown approximately every five days during the spring green-up period in 2013, to obtain aerial photography over an area encompassing a 250 m resolution MODIS (Moderate Resolution Imaging Spectroradiometer) pixel at Harvard Forest in central Massachusetts, USA. The imagery was georeferenced and tree crowns were identified using a detailed species map of the study area. Image processing routines were used to extract canopy 'greenness' time series, which were used to calculate phenology transition dates corresponding to the early, middle, and late stages of spring green-up for the dominant canopy trees. Aggregated species-level phenology estimates from the UAV data, including the mean and variance of phenology transition dates within species in the study area, were compared to model predictions based on visual assessment of a smaller sample of individual trees, indicating the extent to which limited ground observations represent the larger landscape.
At an intermediate scale, the UAV data was compared to data from repeat digital photography, integrating over larger portions of canopy within and near the study area, as a validation step and to see how well tower-based approaches characterize the surrounding landscape. Finally, UAV data was compared to MODIS data to determine how tree crowns within a remote sensing pixel combine to create the aggregate landscape phenology measured by remote sensing, using an area weighted average of the phenology of all dominant crowns.
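The canopy 'greenness' measure described above is commonly computed as the green chromatic coordinate (GCC); below is a minimal sketch of that index and of threshold-based transition-date extraction, assuming a monotonic spring green-up curve. The exact index and curve-fitting method used in the study are not specified here, so treat this as an illustration only.

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """GCC = G / (R + G + B) per pixel; rgb is an (H, W, 3) float array."""
    total = rgb.sum(axis=2)
    safe = np.where(total > 0, total, 1.0)  # avoid divide-by-zero on black pixels
    return np.where(total > 0, rgb[:, :, 1] / safe, 0.0)

def transition_dates(doys, gcc, fractions=(0.1, 0.5, 0.9)):
    """Day-of-year at which mean canopy GCC crosses the given fractions of its
    seasonal amplitude (early / middle / late green-up); gcc is assumed to
    increase monotonically through spring."""
    gcc = np.asarray(gcc, float)
    gmin, gmax = gcc.min(), gcc.max()
    targets = [gmin + f * (gmax - gmin) for f in fractions]
    return [float(np.interp(t, gcc, doys)) for t in targets]
```

For example, a five-date GCC series rising from 0.33 to 0.43 yields three interpolated crossing dates, one per green-up stage.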
A Height Estimation Approach for Terrain Following Flights from Monocular Vision.
Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz
2016-12-06
In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information, to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy.
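The core geometric idea can be sketched as follows: for a downward-facing camera translating over flat terrain, the image flow magnitude scales inversely with height. This toy function illustrates only that pinhole-model relationship, not the authors' algorithm; the flat-ground assumption and all parameter names are ours.

```python
import numpy as np

def height_from_flow(flow_px_per_s, speed_m_per_s, focal_px):
    """For a nadir camera translating parallel to flat ground, ground-feature
    flow magnitude is ~ focal * speed / height (pinhole model), so
    height ~ focal * speed / flow. Illustrative only; the paper additionally
    uses a decision tree to flag unreliable estimates."""
    flow = np.median(np.asarray(flow_px_per_s, float))  # median is robust to outlier tracks
    if flow <= 0:
        raise ValueError("flow magnitude must be positive")
    return focal_px * speed_m_per_s / flow
```

With a 1000 px focal length, 5 m/s ground speed and 100 px/s observed flow, the estimated height is 50 m.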
Zhou, Zai Ming; Yang, Yan Ming; Chen, Ben Qing
2016-12-01
The effective management and utilization of the resources and ecological environment of coastal wetlands require high-precision investigation and analysis of the fractional vegetation cover of the invasive species Spartina alterniflora. In this study, Sansha Bay was selected as the experimental region, and visible and multi-spectral images obtained by a low-altitude UAV in the region were used to monitor the fractional vegetation cover of S. alterniflora. Fractional vegetation cover parameters in the multi-spectral images were then estimated by an NDVI index model, and the accuracy was tested against the visible images as references. Results showed that vegetation cover of S. alterniflora in the image area was mainly at the medium-high level (40%-60%) and high level (60%-80%). The root mean square error (RMSE) between the NDVI model estimates and the true values was 0.06, while the coefficient of determination R² was 0.92, indicating good consistency between the estimated and true values.
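NDVI-based fractional cover estimation is often implemented with the dimidiate pixel model; here is a sketch under that assumption. The abstract does not specify the exact model used, and the soil/vegetation endmember NDVI values are illustrative.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance arrays."""
    denom = nir + red
    safe = np.where(denom != 0, denom, 1.0)
    return np.where(denom != 0, (nir - red) / safe, 0.0)

def fractional_cover(ndvi_arr, ndvi_soil, ndvi_veg):
    """Dimidiate pixel model: FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil),
    clipped to [0, 1]; endmember values are scene-specific assumptions."""
    fvc = (np.asarray(ndvi_arr, float) - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)
```

A pixel with NIR 0.6 and red 0.2 gives NDVI 0.5; with bare-soil NDVI 0.1 and full-cover NDVI 0.9 that maps to 50% fractional cover.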
UAV observation of newly formed volcanic island, Nishinoshima, Japan, from a ship
NASA Astrophysics Data System (ADS)
Ohminato, T.; Kaneko, T.; Takagi, A.
2016-12-01
We conducted an aerial observation at Nishinoshima island, south of Japan, from Jun 7 to Jun 9, 2016, using an Unmanned Aerial Vehicle (UAV), a radio-controlled small helicopter. Takeoff and landing of the UAV were conducted on a ship. Nishinoshima is a small island, 130 km west of Chichijima in the Ogasawara Islands, Japan. A new eruption started in November 2013 in a shallow sea approximately 400 m southeast of the existing Nishinoshima Island. It started from a small islet and evolved with a discharge rate of 1-5 × 10⁵ m³/day (Maeno et al., 2016). In late December 2013, the islet coalesced with the existing Nishinoshima. In 16 months, the lava field reached 2.6 × 10⁶ m² and covered almost all of the existing Nishinoshima. Human landing on the newly formed part of the island is still prohibited due to the danger of sudden eruptions. Before our mission, some pumice and rock samples had been taken from the island, but their amount was not sufficient for detailed petrological analyses. The evolution of the lava field from the central cone has been well documented using images taken from satellites and airplanes. However, due to the limited resolution of satellite images and photos taken from distant airplanes, uncertainties remain in the detailed morphological evolution of the lava flows. The purposes of our observation were 1) sampling of pyroclasts near the central cone in order to investigate the condition of the magma chamber and the magma ascent process, and 2) taking high-resolution 4K images in order to clarify the characteristic morphology of the lava flow covering the island. During the three-day operation, we successfully sampled 250 g of pyroclasts and recorded 1.5 TB of 4K movies. Conducting the UAV's takeoff and landing on a ship was not an easy task. We used a marine research ship, Keifu-Maru, operated by the Japan Meteorological Agency. The ship's size is 1483 tons. On the ship's deck, there are several structures which can interfere with the helicopter flights.
During the UAV operation, the ship kept its lowest velocity, but it was always pitching, yawing and rolling. Takeoff and landing on the ship were far more difficult than on ground at complete rest. In the presentation, we will show the difficulties of operating the UAV from the ship and how we overcame them.
NASA Astrophysics Data System (ADS)
Ha, Jin Gwan; Moon, Hyeonjoon; Kwak, Jin Tae; Hassan, Syed Ibrahim; Dang, Minh; Lee, O. New; Park, Han Yong
2017-10-01
Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process the images captured by UAVs at low altitudes and to identify the infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%. It also outperformed the standard machine learning algorithm, obtaining 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
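The first segmentation stage pairs a softmax classifier with K-means clustering; a minimal pure-NumPy K-means sketch on pixel color vectors illustrates the clustering half. This is Lloyd's algorithm with random initialization, not the authors' exact pipeline, and the three-cluster setting (radish, bare ground, mulching film) is taken from the abstract.

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal Lloyd's k-means on (N, 3) color vectors; stands in for the
    clustering half of the paper's softmax + K-means field segmentation."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assign each pixel to its nearest center
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

On well-separated color clusters this recovers one label per region; in practice K-means++ seeding and more iterations improve robustness.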
UAV-Based Thermal Imaging for High-Throughput Field Phenotyping of Black Poplar Response to Drought
Ludovisi, Riccardo; Tauro, Flavia; Salvati, Riccardo; Khoury, Sacha; Mugnozza Scarascia, Giuseppe; Harfouche, Antoine
2017-01-01
Poplars are fast-growing, high-yielding forest tree species, whose cultivation as second-generation biofuel crops is of increasing interest and can efficiently meet emission reduction goals. Yet, breeding elite poplar trees for drought resistance remains a major challenge. Worldwide breeding programs are largely focused on intra/interspecific hybridization, whereby Populus nigra L. is a fundamental parental pool. While high-throughput genotyping has resulted in unprecedented capabilities to rapidly decode complex genetic architecture of plant stress resistance, linking genomics to phenomics is hindered by technically challenging phenotyping. Relying on unmanned aerial vehicle (UAV)-based remote sensing and imaging techniques, high-throughput field phenotyping (HTFP) aims at enabling highly precise and efficient, non-destructive screening of genotype performance in large populations. To efficiently support forest-tree breeding programs, ground-truthing observations should be complemented with standardized HTFP. In this study, we develop a high-resolution (leaf level) HTFP approach to investigate the response to drought of a full-sib F2 partially inbred population (termed here ‘POP6’), whose F1 was obtained from an intraspecific P. nigra controlled cross between genotypes with highly divergent phenotypes. We assessed the effects of two water treatments (well-watered and moderate drought) on a population of 4603 trees (503 genotypes) hosted in two adjacent experimental plots (1.67 ha) by conducting low-elevation (25 m) flights with an aerial drone and capturing 7836 thermal infrared (TIR) images. TIR images were undistorted, georeferenced, and orthorectified to obtain radiometric mosaics. Canopy temperature (Tc) was extracted using two independent semi-automated segmentation techniques, eCognition- and Matlab-based, to avoid the mixed-pixel problem. 
Overall, the results showed that UAV-based thermal imaging enables effective assessment of genotype variability under drought stress conditions. Tc derived from aerial thermal imagery presented a good correlation with ground-truth stomatal conductance (gs) for both segmentation techniques. Interestingly, the HTFP approach was instrumental in detecting a drought-tolerant response in 25% of the population. This study shows the potential of UAV-based thermal imaging for field phenomics of poplar and other tree species. This is anticipated to have tremendous implications for accelerating forest tree genetic improvement against abiotic stress. PMID:29021803
NASA Astrophysics Data System (ADS)
Holloway, John H., Jr.; Witherspoon, Ned H.; Miller, Richard E.; Davis, Kenn S.; Suiter, Harold R.; Hilton, Russell J.
2000-08-01
JMDT is a Navy/Marine Corps 6.2 Exploratory Development program that is closely coordinated with the 6.4 COBRA acquisition program. The objective of the program is to develop innovative science and technology to enhance future mine detection capabilities. Prior to transition to acquisition, the COBRA ATD was extremely successful in demonstrating a passive airborne multispectral video sensor system operating in the tactical Pioneer unmanned aerial vehicle (UAV), combined with an integrated ground station subsystem, to detect and locate minefields from the surf zone to inland areas. JMDT is investigating advanced technology solutions for future enhancements in minefield detection capability beyond the current COBRA ATD demonstrated capabilities. JMDT has recently been delivered next-generation, innovative hardware which was specified by the Coastal Systems Station and developed under contract. This hardware includes an agile-tuning multispectral, polarimetric, digital video camera and advanced multi-wavelength laser illumination technologies to extend the same sorts of multispectral detections from a UAV into the night and over shallow water and other difficult littoral regions. One of these illumination devices is an ultra-compact, highly efficient near-IR laser diode array. The other is a multi-wavelength range-gateable laser. Additionally, in conjunction with this new technology, algorithm enhancements are being developed in JMDT for future naval capabilities which will outperform the already impressive record of automatic detection of minefields demonstrated by the COBRA ATD.
[Object-oriented aquatic vegetation extraction approach based on visible vegetation indices].
Jing, Ran; Deng, Lei; Zhao, Wen Ji; Gong, Zhao Ning
2016-05-01
Using the estimation of scale parameters (ESP) image segmentation tool to determine the ideal segmentation scale, the optimal segmented image was created by the multi-scale segmentation method. Based on the visible vegetation indices derived from mini-UAV imaging data, we chose a set of optimal vegetation indices from a series of visible vegetation indices and built up a decision tree rule. A membership function was used to automatically classify the study area, and an aquatic vegetation map was generated. The results showed that the overall accuracy of supervised classification was 53.7%, while the overall accuracy of object-oriented image analysis (OBIA) was 91.7%. Compared with the pixel-based supervised classification method, the OBIA method significantly improved the image classification result and further increased the accuracy of extracting the aquatic vegetation. The Kappa value of the supervised classification was 0.4, and the Kappa value based on OBIA was 0.9. The experimental results demonstrated that the approach developed in this study, which extracts aquatic vegetation using visible vegetation indices derived from mini-UAV data and the OBIA method, is feasible and could be applied in other physically similar areas.
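The accuracy figures quoted (overall accuracy and Kappa) are derived from a confusion matrix; a small generic sketch of both metrics follows, with rows as reference classes and columns as predictions.

```python
def overall_accuracy(cm):
    """Fraction of correctly classified samples: trace / total."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def cohens_kappa(cm):
    """Kappa = (po - pe) / (1 - pe), where po is observed agreement and
    pe is the chance agreement expected from the row/column marginals."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n
    col_totals = [sum(cm[r][c] for r in range(len(cm))) for c in range(len(cm))]
    pe = sum(sum(cm[i]) * col_totals[i] for i in range(len(cm))) / n ** 2
    return (po - pe) / (1 - pe)
```

For a balanced two-class matrix [[45, 5], [5, 45]], overall accuracy is 0.9 and Kappa is 0.8, which shows how Kappa discounts chance agreement.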
Sandino, Juan; Wooler, Adam; Gonzalez, Felipe
2017-09-24
The increased technological development of Unmanned Aerial Vehicles (UAVs), combined with artificial intelligence and Machine Learning (ML) approaches, has opened the possibility of remote sensing of extensive areas of arid lands. In this paper, a novel approach to the detection of termite mounds using a UAV, hyperspectral imagery, ML and digital image processing is presented. A new pipeline process is proposed to detect termite mounds automatically and, consequently, to reduce detection times. For the classification stage, the outcomes of several ML classification algorithms were studied, with support vector machines selected as the best approach for the image classification of pre-existing termite mounds. Various test conditions were applied to the proposed algorithm, obtaining an overall accuracy of 68%. Images with satisfactory mound detection proved that the method is "resolution-dependent". These mounds were detected regardless of their rotation and position in the aerial image. However, image distortion reduced the number of detected mounds due to the inclusion of a shape analysis method in the object detection phase, and image resolution remains decisive for obtaining accurate results. Hyperspectral imagery demonstrated better capabilities for classifying a large set of materials than traditional segmentation methods on RGB images alone.
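The classification stage selects a support vector machine; as a deliberately simple stand-in, here is a classic linear perceptron on labeled feature vectors. It illustrates supervised linear classification on spectral features but is not an SVM (no margin maximization) and not the authors' implementation.

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Classic perceptron on (N, d) features with labels in {-1, +1}.
    It finds *a* separating hyperplane on linearly separable data,
    not the max-margin one an SVM would find."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified (or on the boundary)
                w = w + yi * xi
                b += yi
                errors += 1
        if errors == 0:  # converged on separable data
            break
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)
```

In practice one would use a tested SVM implementation with kernel and regularization options rather than this sketch.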
NASA Astrophysics Data System (ADS)
Ridenoure, Rex
2004-09-01
Space-borne imaging systems derived from commercial technology have been successfully employed on launch vehicles for several years. Since 1997, over sixty such imagers - all in the product family called RocketCamTM - have operated successfully on 29 launches involving most U.S. launch systems. During this time, these inexpensive systems have demonstrated their utility in engineering analysis of liftoff and ascent events, booster performance, separation events and payload separation operations, and have also been employed to support and document related ground-based engineering tests. Such views from various vantage points provide not only visualization of key events but stunning and extremely positive public relations video content. Near-term applications include capturing key events on Earth-orbiting spacecraft and related proximity operations. This paper examines the history to date of RocketCams on expendable and manned launch vehicles, assesses their current utility on rockets, spacecraft and other aerospace vehicles (e.g., UAVs), and provides guidance for their use in selected defense and security applications. Broad use of RocketCams on defense and security projects will provide critical engineering data for developmental efforts, a large database of in-situ measurements onboard and around aerospace vehicles and platforms, compelling public relations content, and new diagnostic information for systems designers and failure-review panels alike.
Wehrhan, Marc; Rauneker, Philipp; Sommer, Michael
2016-01-01
The advantages of remote sensing using Unmanned Aerial Vehicles (UAVs) are the high spatial resolution of images, temporal flexibility and narrow-band spectral data from different wavelength domains. This enables the detection of spatio-temporal dynamics of environmental variables, like plant-related carbon dynamics in agricultural landscapes. In this paper, we quantify spatial patterns of fresh phytomass and related carbon (C) export using imagery captured by a 12-band multispectral camera mounted on the fixed-wing UAV Carolo P360. The study was performed in 2014 at the experimental area CarboZALF-D in NE Germany. From radiometrically corrected and calibrated images of lucerne (Medicago sativa), the performance of four commonly used vegetation indices (VIs) was tested using band combinations of six near-infrared bands. The highest correlation between ground-based measurements of fresh phytomass of lucerne and VIs was obtained for the Enhanced Vegetation Index (EVI) using near-infrared band b899. The resulting map was transformed into dry phytomass and finally upscaled to total C export by harvest. The observed spatial variability at field- and plot-scale could be attributed in part to small-scale soil heterogeneity. PMID:26907284
Vanegas, Fernando; Weiss, John; Gonzalez, Felipe
2018-01-01
Recent advances in remotely sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a UAV remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used (the sensors, the UAV, and the flight operations), the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne with ground-based datasets. Finally, we present relevant results of correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing and integrating multispectral, hyperspectral, ground and spatial data to remotely sense different variables in different applications, such as, in this case, plant pest surveillance. The development of such a methodology would provide researchers, agronomists, and UAV practitioners with reliable data collection protocols and methods to achieve faster processing techniques and to integrate multiple sources of data in diverse remote sensing applications. PMID:29342101
Video Image Stabilization and Registration
NASA Technical Reports Server (NTRS)
Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)
2002-01-01
A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
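The per-block translation step can be illustrated with an exhaustive sum-of-absolute-differences (SAD) search; this sketch handles a single block and a small search window, whereas the patented method applies it to nested blocks to also recover rotation and magnification. Parameter names are illustrative.

```python
import numpy as np

def block_translation(key, new, top, left, size, search=5):
    """Find the (dy, dx) shift of a key-field pixel block in the new field by
    exhaustive sum-of-absolute-differences search over a +/- search window."""
    block = key[top:top + size, left:left + size]
    best_sad, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > new.shape[0] or x + size > new.shape[1]:
                continue  # candidate window falls outside the new field
            sad = np.abs(new[y:y + size, x:x + size] - block).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_shift = sad, (dy, dx)
    return best_shift
```

Repeating this for the nested sub-blocks yields a set of local translations from which global translation, rotation and magnification can be fitted.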
Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca
2013-01-01
The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images using a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map of an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide applications or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance. PMID:24146963
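The agreement statistics reported above (coefficient of determination and root mean square error between estimated and observed weed densities) can be computed as follows; this is a generic sketch, not the study's code.

```python
import numpy as np

def r_squared(observed, estimated):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs = np.asarray(observed, float)
    est = np.asarray(estimated, float)
    ss_res = ((obs - est) ** 2).sum()
    ss_tot = ((obs - obs.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def rmse(observed, estimated):
    """Root mean square error of the estimates."""
    obs = np.asarray(observed, float)
    est = np.asarray(estimated, float)
    return float(np.sqrt(((obs - est) ** 2).mean()))
```

A perfect estimate gives r² = 1 and RMSE = 0; the study reports r² = 0.89 and RMSE = 0.02 for its density estimates.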
Development of a UAV system for VNIR-TIR acquisitions in precision agriculture
NASA Astrophysics Data System (ADS)
Misopolinos, L.; Zalidis, Ch.; Liakopoulos, V.; Stavridou, D.; Katsigiannis, P.; Alexandridis, T. K.; Zalidis, G.
2015-06-01
Adoption of precision agriculture techniques requires the development of specialized tools that provide spatially distributed information. Both flying platforms and airborne sensors are continuously evolving to cover the needs of plant and soil sensing at affordable costs. Due to restrictions in payload, flying platforms are usually limited to carrying a single sensor on board. The aim of this work is to present the development of a vertical take-off and landing autonomous unmanned aerial vehicle (VTOL UAV) system for the simultaneous acquisition of high-resolution vertical images at visible, near-infrared (VNIR) and thermal infrared (TIR) wavelengths. A system was developed that has the ability to trigger two cameras simultaneously in a fully automated process with no pilot intervention. A commercial unmanned hexacopter UAV platform was optimized to increase reliability, ease of operation and automation. The designed system's communication platform is based on a reduced instruction set computing (RISC) processor running Linux OS with custom-developed drivers, implemented efficiently while keeping the cost and weight to a minimum. Special software was also developed for automated image capture, data processing and on-board data and metadata storage. The system was tested over a kiwifruit field in northern Greece, at flying heights of 70 and 100 m above the ground. The acquired images were mosaicked and geo-corrected. Images from both flying heights were of good quality and revealed unprecedented detail within the field. The normalized difference vegetation index (NDVI) was calculated along with the thermal image in order to provide information on the accurate location of stressors and other parameters related to crop productivity. Compared to other available sources of data, this system can provide low-cost, high-resolution and easily repeatable information to cover the requirements of precision agriculture.
Open source software and low cost sensors for teaching UAV science
NASA Astrophysics Data System (ADS)
Kefauver, S. C.; Sanchez-Bragado, R.; El-Haddad, G.; Araus, J. L.
2016-12-01
Drones, also known as UASs (unmanned aerial systems), UAVs (unmanned aerial vehicles) or RPAS (remotely piloted aircraft systems), are both useful advanced scientific platforms and recreational toys that are appealing to younger generations. As such, they can make excellent education tools as well as low-cost alternatives for scientific research projects. However, the move from taking pretty pictures to remote sensing science can be daunting if one is presented with only expensive software and sensor options. A number of open-source tools and low-cost platform and sensor options are available that can provide excellent scientific research results and, by often requiring more user involvement than commercial software and sensors, provide even greater educational benefits. These include scale-invariant feature transform (SIFT) algorithm implementations, such as the Microsoft Image Composite Editor (ICE), which can create quality 2D image mosaics with some motion and terrain adjustments, and VisualSFM (Structure from Motion), which can provide full image mosaicking with movement and orthorectification capacities. RGB image quantification using alternate color space transforms, such as the BreedPix indices, can be calculated via plugins in the open-source software Fiji (http://fiji.sc/Fiji; http://github.com/george-haddad/CIMMYT). Recent analyses of aerial images from UAVs over different vegetation types and environments have shown that RGB metrics can outperform more costly commercial sensors. Specifically, Hue-based pixel counts, the Triangle Greenness Index (TGI), and the Normalized Green Red Difference Index (NGRDI) consistently outperformed NDVI in estimating abiotic and biotic stress impacts on crop health. Also, simple kits are available for NDVI camera conversions.
Furthermore, suggestions for multivariate analyses of the different RGB indices in the "R program for statistical computing", such as classification and regression trees can allow for a more approachable interpretation of results in the classroom.
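The RGB indices named above are simple band arithmetic and are easy to reimplement outside Fiji. A minimal sketch, using the common published formulations (e.g. TGI with band centers near 480/550/670 nm), which may differ slightly from the BreedPix plugin's exact definitions:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index (needs an NIR band)."""
    return (nir - red) / (nir + red)

def ngrdi(green, red):
    """Normalized Green Red Difference Index (RGB-only)."""
    return (green - red) / (green + red)

def tgi(red, green, blue):
    """Triangle Greenness Index, using the common formulation with
    band centers at ~670/550/480 nm: -0.5*(190*(R-G) - 120*(R-B))."""
    return -0.5 * (190.0 * (red - green) - 120.0 * (red - blue))

# Example reflectance values for a healthy canopy pixel
r, g, b, nir = 0.05, 0.12, 0.03, 0.45
print(ndvi(nir, r))   # high for green vegetation
print(ngrdi(g, r))
print(tgi(r, g, b))
```

The same functions work elementwise on whole NumPy band arrays, which is how they would be applied to a mosaicked UAV image.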
Poblete, Tomas; Ortega-Farías, Samuel; Ryu, Dongryeol
2018-01-30
Water stress caused by water scarcity has a negative impact on the wine industry. Several strategies have been implemented for optimizing water application in vineyards. In this regard, midday stem water potential (SWP) and thermal infrared (TIR) imaging for crop water stress index (CWSI) have been used to assess plant water stress on a vine-by-vine basis without considering the spatial variability. Unmanned Aerial Vehicle (UAV)-borne TIR images are used to assess the canopy temperature variability within vineyards that can be related to the vine water status. Nevertheless, when aerial TIR images are captured over canopy, internal shadow canopy pixels cannot be detected, leading to mixed information that negatively impacts the relationship between CWSI and SWP. This study proposes a methodology for automatic coregistration of thermal and multispectral images (ranging between 490 and 900 nm) obtained from a UAV to remove shadow canopy pixels using a modified scale invariant feature transformation (SIFT) computer vision algorithm and Kmeans++ clustering. Our results indicate that our proposed methodology improves the relationship between CWSI and SWP when shadow canopy pixels are removed from a drip-irrigated Cabernet Sauvignon vineyard. In particular, the coefficient of determination (R²) increased from 0.64 to 0.77. In addition, values of the root mean square error (RMSE) and standard error (SE) decreased from 0.2 to 0.1 MPa and 0.24 to 0.16 MPa, respectively. Finally, this study shows that the negative effect of shadow canopy pixels was higher in those vines with water stress compared with well-watered vines.
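The CWSI in studies like this is typically the empirical form based on wet and dry reference temperatures; the sketch below shows that standard formulation (the paper's exact choice of reference temperatures is not given here):

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index from canopy temperature and wet/dry
    reference temperatures (empirical form): 0 = well-watered
    (canopy as cool as a fully transpiring surface), 1 = fully
    stressed (canopy as hot as a non-transpiring surface)."""
    if t_dry <= t_wet:
        raise ValueError("t_dry must exceed t_wet")
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Canopy at 31 C between references of 25 C (wet) and 35 C (dry)
print(cwsi(31.0, 25.0, 35.0))  # 0.6 -> moderately stressed
```

Removing shadowed canopy pixels before averaging canopy temperature, as the study does, changes `t_canopy` and hence the CWSI value per vine.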
NASA Astrophysics Data System (ADS)
Ryan, Jonathan C.; Hubbard, Alun; Box, Jason E.; Brough, Stephen; Cameron, Karen; Cook, Joseph M.; Cooper, Matthew; Doyle, Samuel H.; Edwards, Arwyn; Holt, Tom; Irvine-Fynn, Tristram; Jones, Christine; Pitcher, Lincoln H.; Rennermalm, Asa K.; Smith, Laurence C.; Stibal, Marek; Snooke, Neal
2017-05-01
Measurements of albedo are a prerequisite for modelling surface melt across the Earth's cryosphere, yet available satellite products are limited in spatial and/or temporal resolution. Here, we present a practical methodology to obtain centimetre resolution albedo products with accuracies of 5% using consumer-grade digital camera and unmanned aerial vehicle (UAV) technologies. Our method comprises a workflow for processing, correcting and calibrating raw digital images using a white reference target, and upward and downward shortwave radiation measurements from broadband silicon pyranometers. We demonstrate the method with a set of UAV sorties over the western, K-sector of the Greenland Ice Sheet. The resulting albedo product, UAV10A1, covers 280 km2, at a resolution of 20 cm per pixel and has a root-mean-square difference of 3.7% compared to MOD10A1 and 4.9% compared to ground-based broadband pyranometer measurements. By continuously measuring downward solar irradiance, the technique overcomes previous limitations due to variable illumination conditions during and between surveys over glaciated terrain. The current miniaturization of multispectral sensors and incorporation of upward facing radiation sensors on UAV packages means that this technique will likely become increasingly attractive in field studies and used in a wide range of applications for high temporal and spatial resolution surface mapping of debris, dust, cryoconite and bioalbedo and for directly constraining surface energy balance models.
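The calibration chain described above (white reference target plus broadband pyranometer tie-in) can be sketched roughly as follows; the function name, the simple linear rescaling step, and all input values are illustrative assumptions, not the paper's published workflow:

```python
import numpy as np

def image_albedo(dn, dn_white, albedo_white, up_sw, down_sw):
    """Per-pixel albedo sketch: digital numbers (dn) are converted to
    reflectance via a white reference target of known albedo, then the
    image is rescaled so its mean matches the broadband albedo from
    paired upward/downward shortwave pyranometer readings. The linear
    rescaling is an illustrative assumption."""
    reflectance = dn / dn_white * albedo_white   # target calibration
    broadband = up_sw / down_sw                  # pyranometer albedo
    scale = broadband / reflectance.mean()       # tie image to broadband
    return np.clip(reflectance * scale, 0.0, 1.0)

dn = np.array([[100.0, 200.0], [150.0, 250.0]])
alb = image_albedo(dn, dn_white=220.0, albedo_white=0.95,
                   up_sw=350.0, down_sw=500.0)
```

Continuously logging `down_sw` during the flight is what lets the method, as the abstract notes, cope with variable illumination between surveys.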
High Resolution UAV-based Passive Microwave L-band Imaging of Soil Moisture
NASA Astrophysics Data System (ADS)
Gasiewski, A. J.; Stachura, M.; Elston, J.; McIntyre, E. M.
2013-12-01
Due to long electrical wavelengths and aperture size limitations, scaling passive microwave remote sensing of soil moisture from low-resolution spaceborne applications to the high-resolution applications suitable for precision agriculture requires the use of low-flying aerial vehicles. This presentation summarizes a project to develop a commercial Unmanned Aerial Vehicle (UAV) hosting a precision microwave radiometer for mapping soil moisture in high-value shallow root-zone crops. The project is based on the use of the Tempest electric-powered UAV and a compact digital L-band (1400-1427 MHz) passive microwave radiometer developed specifically for extremely small and lightweight aerial platforms or man-portable, tractor- or tower-based applications. Notable in this combination are a highly integrated UAV/radiometer antenna design and the use of both the upwelling emitted signal from the surface and the downwelling cold-space signal for precise calibration using a lobe-correlating radiometer architecture. The system achieves a spatial resolution comparable to the altitude of the UAV above the ground while referencing upwelling measurements to the constant and well-known background temperature of cold space. The radiometer incorporates digital sampling and radio frequency interference mitigation, along with infrared, near-infrared, and visible (red) sensors for surface temperature and vegetation biomass correction. This NASA-sponsored project is being developed for commercial application in cropland water management, for L-band satellite validation, and for estuarine plume studies.
NASA Astrophysics Data System (ADS)
Frankl, Amaury; Stal, Cornelis; De Wit, Bart; De Wulf, Alain; Salvador, Pierre-Gil; Nyssen, Jan
2014-05-01
In erosion studies, accurate spatio-temporal data are required to fully understand the processes involved and their relationship with environmental controls. With cameras mounted on Unmanned Aerial Vehicles (UAVs), low-altitude aerial photographs can be collected over small catchments in a cost-effective and rapid way. From large data sets of overlapping aerial photographs, Structure from Motion - Multi View Stereo workflows, integrated in various software packages such as the PhotoScan used here, allow the production of detailed Digital Surface Models (DSMs) and ortho-mosaics. In this study we present the results from a survey carried out in a small agricultural catchment near Hallines, in northern France. A DSM and an ortho-mosaic of the catchment were produced using photographs taken from a low-cost radio-controlled microdrone (DroneFlyer Hexacopter). Photographs were taken with a Sony Nex 5 (16.1 M pixels) camera with a fixed normal lens of 50 mm. In the field, Ground Control Points were materialized by unambiguously identifiable targets, measured with a 1'' total station (Leica TS15i). Cross-sections of rills and ephemeral gullies were also quantified from total station measurements and from terrestrial image-based 3D modelling. These data allowed us to assess the accuracy of the DSM and the representation of the erosion features in it. The feasibility of UAV photographic surveys to improve our understanding of water-erosion processes such as sheet, rill and gully erosion is discussed. Keywords: Ephemeral gully, Erosion study, Image-based 3D modelling, Microdrone, Rill, UAVs.
Compact camera technologies for real-time false-color imaging in the SWIR band
NASA Astrophysics Data System (ADS)
Dougherty, John; Jennings, Todd; Snikkers, Marco
2013-11-01
Previously, real-time false-colored multispectral imaging was not available in a true snapshot single compact imager. Recent technology improvements now allow this technique to be used in practical applications. This paper covers those advancements as well as a case study of its use in UAVs, where the technology is enabling new remote sensing methodologies.
Video image stabilization and registration--plus
NASA Technical Reports Server (NTRS)
Hathaway, David H. (Inventor)
2009-01-01
A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
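The per-block translation step can be sketched as a generic exhaustive block-matching search over a small displacement window; this is a standard illustration of the matching idea, not the patented method's exact nested-subdivision procedure, which would call such a routine at every subdivision level:

```python
import numpy as np

def block_translation(ref, cur, top, left, size, search=3):
    """Estimate the (dy, dx) translation of one pixel block between
    two video fields by exhaustive sum-of-squared-differences (SSD)
    search within +/- 'search' pixels."""
    block = ref[top:top + size, left:left + size].astype(float)
    best, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[top + dy:top + dy + size,
                       left + dx:left + dx + size].astype(float)
            ssd = ((block - cand) ** 2).sum()
            if ssd < best:
                best, best_off = ssd, (dy, dx)
    return best_off

# Synthetic test: second field is the first shifted by (+2, -1)
rng = np.random.default_rng(0)
frame = rng.uniform(size=(40, 40))
shifted = np.roll(np.roll(frame, 2, axis=0), -1, axis=1)
print(block_translation(frame, shifted, top=10, left=10, size=8))
```

From many per-block translations, the patent then fits global magnification (blocks moving apart or together) and shear (rows/columns sliding relative to one another), which a single block match cannot distinguish from pure translation.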
Nearshore Measurements From a Small UAV.
NASA Astrophysics Data System (ADS)
Holman, R. A.; Brodie, K. L.; Spore, N.
2016-02-01
Traditional measurements of nearshore hydrodynamics and evolving bathymetry are expensive and dangerous and must be frequently repeated to track the rapid changes of typical ocean beaches. However, extensive research into remote sensing methods using cameras or radars mounted on fixed towers has resulted in increasingly mature algorithms for estimating bathymetry, currents and wave characteristics. This naturally raises questions about how easily and effectively these algorithms can be applied to optical data from low-cost, easily-available UAV platforms. This paper will address the characteristics and quality of data taken from a small, low-cost UAV, the DJI Phantom. In particular, we will study the stability of imagery from a vehicle 'parked' at 300 feet altitude, methods to stabilize remaining wander, and the quality of nearshore bathymetry estimates from the resulting image time series, computed using the cBathy algorithm. Estimates will be compared to ground truth surveys collected at the Field Research Facility at Duck, NC.
Tamouridou, Afroditi A.; Lagopodi, Anastasia L.; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios
2017-01-01
Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands of red, green and near-infrared (NIR), plus the texture layer resulting from local variance, were used as input. The S. marianum identification rates using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, so the results are specific to that year, although the accuracy shows the promising potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery. PMID:29019957
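The local-variance texture layer used as the fourth network input can be sketched as a sliding-window variance filter; the 3x3 window size and reflect-padding here are illustrative assumptions:

```python
import numpy as np

def local_variance(band, win=3):
    """Texture layer: variance of each pixel's win x win neighbourhood
    (image edges handled by reflection). The window size is an
    illustrative choice, not necessarily the study's."""
    pad = win // 2
    padded = np.pad(band.astype(float), pad, mode="reflect")
    out = np.empty(band.shape, dtype=float)
    h, w = band.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

band = np.array([[1.0, 1.0, 1.0],
                 [1.0, 9.0, 1.0],
                 [1.0, 1.0, 1.0]])
tex = local_variance(band)   # high at the bright central pixel
```

The resulting layer is stacked with the three spectral bands to form the per-pixel feature vector fed to the classifier.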
Method for the visualization of landform by mapping using low altitude UAV application
NASA Astrophysics Data System (ADS)
Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William
2018-05-01
Unmanned Aerial Vehicles (UAVs) and digital photogrammetry are rapidly evolving mapping technologies. The significance of, and need for, digital landform mapping has grown over the years. In this study, a mapping workflow is applied to obtain two different input data sets: the orthophoto and the DSM. Low Altitude Aerial Photography (LAAP) was captured using a refined flying technique. A low-altitude UAV (drone) with a fixed camera was utilized for imagery, while digital photogrammetric processing using PhotoScan was applied for cartographic data collection. Data processing through photogrammetric and orthomosaic workflows is the main application. High image quality is essential for the effectiveness and quality of standard mapping outputs such as the 3D model, Digital Elevation Model (DEM), Digital Surface Model (DSM) and orthoimages. The accuracy of the Ground Control Points (GCPs), the flight altitude and the resolution of the camera are essential for a good-quality DEM and orthophoto.
A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng
2017-12-01
A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaicking for UAV (Unmanned Aerial Vehicle) imagery. The whole workflow includes four steps: the initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is then detected based on DSM (Digital Surface Model) data; the vertices (conjunction nodes) of the initial network are relocated, since some of them lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm, based on the edge diagram and the relocated vertices. The method was tested on two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas, and outperforms state-of-the-art methods on these datasets.
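Weighted A* inflates the heuristic (f = g + w·h with w ≥ 1), trading guaranteed optimality for speed while bounding the seam cost to within a factor w of optimal. A grid sketch, with an illustrative cost map standing in for the edge diagram (the cost design is an assumption, not the paper's):

```python
import heapq

def weighted_astar(cost, start, goal, w=1.5):
    """Weighted A* (f = g + w*h, w >= 1) on a 4-connected grid.
    'cost' holds per-cell crossing costs, e.g. high where the edge
    diagram marks buildings, so the seam-line routes around them."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    rows, cols = len(cost), len(cost[0])
    frontier = [(w * h(start), 0.0, start, [start])]
    seen = {}
    while frontier:
        _, g, node, path_so_far = heapq.heappop(frontier)
        if node == goal:
            return path_so_far
        if node in seen and seen[node] <= g:
            continue
        seen[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]
                heapq.heappush(frontier,
                               (ng + w * h((nr, nc)), ng, (nr, nc),
                                path_so_far + [(nr, nc)]))
    return None

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]          # 9 = high-cost "building" cells
path = weighted_astar(grid, (0, 0), (0, 2))
```

On this grid the search detours around the high-cost column rather than cutting straight across, which is exactly the behavior wanted from a seam-line: follow cheap edge-diagram cells, avoid elevated objects.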
Time Series of Images to Improve Tree Species Classification
NASA Astrophysics Data System (ADS)
Miyoshi, G. T.; Imai, N. N.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.
2017-10-01
Tree species classification provides valuable information for forest monitoring and management. The high floristic variation of tree species is a challenging issue in tree species classification because vegetation characteristics change with the season. To help monitor this complex environment, imaging spectroscopy has been widely applied since the development of miniaturized sensors attached to Unmanned Aerial Vehicles (UAVs). Considering the seasonal changes in forests and the higher spectral and spatial resolution acquired with sensors attached to UAVs, we present the use of a time series of images to classify four tree species. The study area is an Atlantic Forest area located in the western part of São Paulo State. Images were acquired in August 2015 and August 2016, generating three data sets: one with the image spectra of 2015 only; one with the image spectra of 2016 only; and one with the layer stacking of images from 2015 and 2016. Four tree species were classified using the Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and Random Forest (RF). The results showed that SAM and SID caused an overfitting of the data, whereas RF showed better results, and the use of the layer stacking improved the classification, achieving a kappa coefficient of 18.26 %.
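SAM scores each pixel by the angle between its spectrum and a reference spectrum, which makes it insensitive to overall illumination scaling. A minimal sketch of the standard formulation:

```python
import math

def spectral_angle(x, r):
    """Spectral Angle Mapper: angle in radians between pixel spectrum
    x and reference spectrum r; smaller angle = more similar.
    Scaling x by a constant (illumination change) leaves it unchanged."""
    dot = sum(a * b for a, b in zip(x, r))
    nx = math.sqrt(sum(a * a for a in x))
    nr = math.sqrt(sum(b * b for b in r))
    return math.acos(max(-1.0, min(1.0, dot / (nx * nr))))

# A scaled copy of the reference spectrum has angle ~0
print(spectral_angle([0.2, 0.4, 0.6], [0.1, 0.2, 0.3]))
```

A pixel is assigned to whichever species' reference spectrum gives the smallest angle, which is why SAM can overfit when reference spectra are drawn from the same noisy scene being classified.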
Deriving Temporal Height Information for Maize Breeding
NASA Astrophysics Data System (ADS)
Malambo, L.; Popescu, S. C.; Murray, S.; Sheridan, R.; Richardson, G.; Putman, E.
2016-12-01
Phenotypic data such as height provide useful information to crop breeders to better understand their field experiments and the associated field variability. However, the measurement of crop height in many breeding programs is done manually, which demands significant effort and time and does not scale well when large field experiments are involved. Through structure from motion (SfM) techniques, small unmanned aerial vehicles (sUAVs) or drones offer tremendous potential for generating crop height data and other morphological data such as canopy area and biomass in a cost-effective and efficient way. We present results of an on-going UAV application project aimed at generating temporal height metrics for maize breeding at the Texas A&M AgriLife Research farm in Burleson County, Texas. We outline the activities involved, from the drone aerial surveys to image processing and the generation of crop height metrics. The experimental period ran from April (planting) through August (harvest) 2016 and involved 36 maize hybrids replicated over 288 plots (about 1.7 ha). During this period, crop heights were manually measured per plot at weekly intervals. Corresponding aerial flights were carried out using a DJI Phantom 3 Professional UAV at each interval, and the images captured were processed into point clouds and image mosaics using Pix4D (Pix4D SA; Lausanne, Switzerland) software. LiDAR data were also captured at two intervals (05/06 and 07/29) to provide another source of height information. To obtain height data per plot from the SfM point clouds and LiDAR data, percentile height metrics were then generated using FUSION software. Results of the comparison between SfM and field-measured heights show high correlation (R² > 0.7), indicating that the use of sUAVs can replace laborious manual height measurement and enhance plant breeding programs. Similar results were also obtained from the comparison of SfM and LiDAR heights.
Outputs of this project are helping plant breeders at Texas A&M automate routine height measurements in maize and quickly make actionable decisions and discover new hybrids.
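Percentile height metrics like those produced by FUSION reduce each plot's point cloud to a few summary heights. A sketch, assuming height-normalized points (ground = 0) and an illustrative near-ground cutoff:

```python
import numpy as np

def plot_height_metrics(z_normalized, percentiles=(50, 90, 99)):
    """Per-plot percentile height metrics from height-normalized
    point-cloud z values. The 0.1 m near-ground cutoff and the
    percentile choices are illustrative assumptions."""
    z = np.asarray(z_normalized, dtype=float)
    z = z[z > 0.1]                      # drop ground/near-ground returns
    return {f"p{p}": float(np.percentile(z, p)) for p in percentiles}

# Heights (m) of returns within one plot
z = [0.0, 0.05, 1.2, 1.5, 1.8, 2.0, 2.1]
print(plot_height_metrics(z))
```

Upper percentiles (p90, p99) track canopy-top height and correlate well with manual plot-height measurements, while being robust to occasional high outlier points.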
NASA Astrophysics Data System (ADS)
Bond, C. E.; Howell, J.; Butler, R.
2016-12-01
With an increase in flood and storm events affecting infrastructure, the role of weather systems in a changing climate, and their impact, is of increasing interest. Here we present a new workflow integrating crowd-sourced imagery from the public with UAV photogrammetry to create the first 3D hydrograph of a major flooding event. On December 30th 2015, Storm Frank brought high-magnitude rainfall to the Dee catchment in Aberdeenshire, resulting in the highest ever recorded river level for the Dee, with significant impact on infrastructure and river morphology. The worst of the flooding occurred during daylight hours and was digitally captured by the public on smart phones and cameras. After the flood event, a UAV was used to shoot photogrammetry to create a textured elevation model of the area around Aboyne Bridge on the River Dee. A media campaign solicited crowd-sourced digital imagery, resulting in over 1,000 images submitted by the public. EXIF time and date metadata were used to sort the images into a time series. Markers such as signs, walls, fences and roads within the images were used to determine river level height through the flood, and matched onto the elevation model to contour the change in river level. The resulting 3D hydrograph shows the build-up of water on the upstream side of the bridge that resulted in significant scouring and undermining during the flood. We have created the first known data-based 3D hydrograph for a river section, from a UAV photogrammetric model and crowd-sourced imagery. For future flood warning and infrastructure management, a solution that allows a real-time hydrograph to be created, utilising augmented reality to integrate the river level information in crowd-sourced imagery directly onto a 3D model, would significantly improve management planning and infrastructure resilience assessment.
Unmanned aerial vehicles for surveying marine fauna: assessing detection probability.
Hodgson, Amanda; Peel, David; Kelly, Natalie
2017-06-01
Aerial surveys are conducted for various fauna to assess abundance, distribution, and habitat use over large spatial scales. They are traditionally conducted using light aircraft with observers recording sightings in real time. Unmanned Aerial Vehicles (UAVs) offer an alternative with many potential advantages, including eliminating human risk. To be effective, this emerging platform needs to provide detection rates of animals comparable to traditional methods. UAVs can also acquire new types of information, and these new data require a reevaluation of the traditional analyses used in aerial surveys, including estimating the probability of detecting animals. We conducted 17 replicate UAV surveys of humpback whales (Megaptera novaeangliae) while simultaneously obtaining a 'census' of the population from land-based observations, to assess UAV detection probability. The ScanEagle UAV, carrying a digital SLR camera, continuously captured images (with 75% overlap) along transects covering the visual range of land-based observers. We also used the ScanEagle to conduct focal follows of whale pods (n = 12, mean duration = 40 min), to assess a new method of estimating availability. A comparison of the whale detections from the UAV to the land-based census provided an estimated UAV detection probability of 0.33 (CV = 0.25; incorporating both availability and perception biases), which was not affected by environmental covariates (Beaufort sea state, glare, and cloud cover). According to our focal follows, the mean availability was 0.63 (CV = 0.37), with pods including mother/calf pairs having a higher availability (0.86, CV = 0.20) than those without (0.59, CV = 0.38). The follows also revealed (and provided a potential correction for) a downward bias in group size estimates from the UAV surveys, which resulted from asynchronous diving within whale pods and a relatively short observation window of 9 s.
We have shown that UAVs are an effective alternative to traditional methods, providing a detection probability that is within the range of previous studies for our target species. We also describe a method of assessing availability bias that represents spatial and temporal characteristics of a survey, from the same perspective as the survey platform, is benign, and provides additional data on animal behavior. © 2017 by the Ecological Society of America.
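The two biases combine in the textbook way: overall detection probability is availability times perception, and a raw count is corrected as N-hat = n / p. A sketch using the reported values; whether the study applies exactly this product-form correction is not stated here, so treat it as illustrative:

```python
def corrected_abundance(n_detected, availability, perception):
    """Correct a raw count for animals that were unavailable
    (e.g. submerged) or present but missed in the imagery:
    p = availability * perception, N_hat = n / p."""
    p = availability * perception
    if not 0.0 < p <= 1.0:
        raise ValueError("detection probability must be in (0, 1]")
    return n_detected / p

# e.g. 10 pods counted; availability 0.63, overall detection 0.33,
# implying perception = 0.33 / 0.63
print(corrected_abundance(10, 0.63, 0.33 / 0.63))  # ~30 pods estimated
```

The large correction factor (1 / 0.33 ≈ 3) shows why estimating availability carefully, as the focal follows do, matters more than the raw count itself.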
NASA Astrophysics Data System (ADS)
Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg
2013-04-01
Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAVs) have been used operationally for 3D geodata acquisition in geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphological mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data were collected simultaneously by TLS using an Optech ILRIS3D and by a rotary UAV (an octocopter from twins.nrn) for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud was acquired from three scan positions. These were registered using the iterative closest point (ICP) algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used. These targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. 15 images were acquired by the UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using the APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud was finally co-registered with the TLS data to guarantee optimal preparation for the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm.
After registration and georeferencing, the level of detail of the roughness representation in both point clouds was compared considering elevation differences, roughness and the representation of different grain sizes. UAVs close the gap between aerial and terrestrial surveys in terms of resolution and acquisition flexibility. This is also true for data accuracy. Considering the data collection and data quality properties of both systems, each has its own merit in terms of scale, data quality, data collection speed and application.
NASA Astrophysics Data System (ADS)
Leitão, João P.; Moy de Vitry, Matthew; Scheidegger, Andreas; Rieckermann, Jörg
2016-04-01
Precise and detailed digital elevation models (DEMs) are essential to accurately predict overland flow in urban areas. Unfortunately, traditional sources of DEMs, such as airplane light detection and ranging (lidar) DEMs and point and contour maps, remain a bottleneck for detailed and reliable overland flow models, because the resulting DEMs are too coarse to inform urban overland flow models. Interestingly, technological developments suggest that unmanned aerial vehicles (UAVs) have matured enough to be a competitive alternative to satellites or airplanes. However, this has not been tested so far. In this study we therefore evaluated whether DEMs generated from UAV imagery are suitable for urban drainage overland flow modelling. Specifically, 14 UAV flights were conducted to assess the influence of four different flight parameters on the quality of the generated DEMs: (i) flight altitude, (ii) image overlap, (iii) camera pitch, and (iv) weather conditions. In addition, we compared the best-quality UAV DEM to a conventional lidar-based DEM. To evaluate both the quality of the UAV DEMs and the comparison to lidar-based DEMs, we performed regression analysis, using the flight parameters as explanatory variables, on several qualitative and quantitative metrics specifically tailored to assess overland flow modelling performance, such as elevation accuracy and the quality of object representation (e.g. buildings, walls and trees) in the DEM. Our results suggested that, first, as expected, flight altitude influenced DEM quality most, with lower flights producing better DEMs; similarly, overcast weather conditions are preferable, although weather conditions and other factors influence DEM quality much less. Second, we found that for urban overland flow modelling, the UAV DEMs performed competitively in comparison to a traditional lidar-based DEM.
An important advantage of using UAVs to generate DEMs in urban areas is their flexibility that enables more frequent, local, and affordable elevation data updates, allowing, for example, to capture different tree foliage conditions.
NASA Astrophysics Data System (ADS)
Keleshis, C.; Ioannou, S.; Vrekoussis, M.; Levin, Z.; Lange, M. A.
2014-08-01
Continuous advances in unmanned aerial vehicles (UAVs) and the increased complexity of their applications raise the demand for improved data acquisition systems (DAQ). These improvements may comprise low power consumption, low volume and weight, robustness, modularity and the capability to interface with various sensors and peripherals while maintaining high sampling rates and processing speeds. Such a system has been designed and developed and is currently integrated on the Autonomous Flying Platforms for Atmospheric and Earth Surface Observations (APAESO/NEA-YΠOΔOMH/NEKΠ/0308/09); however, it can easily be adapted to any UAV or any other mobile vehicle. The system consists of a single-board computer with a dual-core processor, rugged surface-mount memory and storage devices, analog and digital input-output ports and many other peripherals that enhance its connectivity with various sensors, imagers and on-board devices. The system is powered by a high-efficiency power supply board. Additional boards, such as frame-grabbers, differential global positioning system (DGPS) satellite receivers, and general packet radio service (3G-4G-GPRS) modems for communication redundancy, have been interfaced to the core system and are used whenever there is a mission need. The onboard DAQ system can be preprogrammed for automatic data acquisition, or it can be remotely operated during the flight from the ground control station (GCS) using a graphical user interface (GUI), which has been developed and will also be presented in this paper. The unique design of the GUI and the DAQ system enables the synchronized acquisition of a variety of scientific and UAV flight data in a single core location. The new DAQ system and the GUI have been successfully utilized in several scientific UAV missions.
In conclusion, the novel DAQ system provides the UAV and remote-sensing community with a new tool capable of reliably acquiring, processing, storing and transmitting data from any sensor integrated on a UAV.
NASA Astrophysics Data System (ADS)
Darmawan, H.; Walter, T. R.; Brotopuspito, K. S.; Subandriyo, S.; Nandaka, M. A.
2017-12-01
Six gas-driven explosions between 2012 and 2014 changed the morphology and structure of the Merapi lava dome. The explosions mostly occurred during the rainy season and caused NW-SE elongated open fissures that dissected the lava dome. In this study, we conducted UAV photogrammetry before and after the explosions to investigate the morphological and structural changes and to assess the quality of the UAV photogrammetry. The first UAV photogrammetry survey was conducted on 26 April 2012. After the explosions, we conducted a Terrestrial Laser Scanning (TLS) survey on 18 September 2014 and repeated the UAV photogrammetry on 6 October 2015. We applied a Structure from Motion (SfM) algorithm to reconstruct 3D SfM point clouds and photomosaics from the 2012 and 2015 UAV images. Topographic changes were analyzed by calculating the height difference between the 2012 and 2015 SfM point clouds, while structural changes were investigated by visual comparison of the 2012 and 2015 photomosaics. Moreover, a quality assessment of the UAV photogrammetry results was done by comparing the 3D SfM point clouds to the TLS dataset. Results show that the 2012 and 2015 SfM point clouds differ from the TLS point cloud by 0.19 m and 0.57 m, respectively. Furthermore, the topographic and structural changes reveal that the 2012-14 explosions were controlled by pre-existing structures. The volume of the 2012-14 explosions is 26,400 ± 1,320 m³ DRE. In addition, we find a structurally delineated unstable block at the southern front of the dome which may collapse in the future. We conclude that the 2012-14 explosions occurred due to interaction between magma intrusion and rain water and were facilitated by pre-existing structures. The unstable block potentially leads to a rock avalanche hazard. Furthermore, our drone photogrammetry results are very promising, and we therefore recommend using drones for topographic mapping of lava-dome-building volcanoes.
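Volume estimates like the DRE figure above are commonly obtained by differencing co-registered surface grids and summing the height change over the cell area. A generic sketch; the noise threshold here is an assumption loosely tied to the reported 0.19-0.57 m point-cloud accuracies:

```python
import numpy as np

def volume_change(dsm_before, dsm_after, cell_area, min_diff=0.2):
    """Net volume change (m^3) between two co-registered surface
    models: sum of per-cell height differences times cell area,
    suppressing differences below a detection threshold (m)."""
    diff = np.asarray(dsm_after, float) - np.asarray(dsm_before, float)
    diff[np.abs(diff) < min_diff] = 0.0   # ignore noise-level change
    return float(diff.sum() * cell_area)

# Toy 2x2 grids (elevations in m), 1 m^2 cells
before = np.array([[10.0, 10.0], [10.0, 10.0]])
after  = np.array([[ 8.0, 10.1], [ 9.0, 10.0]])
print(volume_change(before, after, cell_area=1.0))  # -3.0 (material lost)
```

The threshold keeps differences smaller than the co-registration error from accumulating into a spurious volume signal over many cells.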
NASA Astrophysics Data System (ADS)
Hu, Hui; Ning, Zhe
2016-11-01
Due to the auto-rotating behavior of maple seeds as they fall, the flow characteristics of rotating maple seeds have been studied by many researchers in recent years. In the present study, an experimental investigation was performed to explore maple-seed-inspired UAV propellers for improved aerodynamic and aeroacoustic performance. Inspired by the auto-rotating trait of maple seeds, the shape of a maple seed is leveraged for the planform design of UAV propellers. The aerodynamic and aeroacoustic performance of the maple-seed-inspired propellers is examined in great detail, in comparison with a commercially available UAV propeller purchased on the market (i.e., a baseline propeller). During the experiments, in addition to measuring the aerodynamic forces generated by the maple-seed-inspired propellers and the baseline propeller, a high-resolution Particle Image Velocimetry (PIV) system was used to quantify the unsteady flow structures in the wakes of the propellers. The aeroacoustic characteristics of the propellers were also evaluated by leveraging an anechoic chamber available at the Aerospace Engineering Department of Iowa State University. The research work is supported by the National Science Foundation under Award Number OSIE-1064235.
Informal settlement classification using point-cloud and image-based features from UAV data
NASA Astrophysics Data System (ADS)
Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.
2017-03-01
Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements, such as small, irregular buildings with heterogeneous roof material and a large presence of clutter, challenge state-of-the-art algorithms. Furthermore, it is of interest to analyse which fundamental attributes are suitable for describing these objects in different geographic locations. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain high classification accuracy in challenging classification problems for the analysis of informal settlements. UAV datasets from informal settlements in two different countries are compared in order to identify salient features for specific objects in heterogeneous urban environments. Findings show that the integration of 2D and 3D features leads to overall accuracies of 91.6% and 95.2% for informal settlements in Kigali, Rwanda, and Maldonado, Uruguay, respectively.
Object Based Building Extraction and Building Period Estimation from Unmanned Aerial Vehicle Data
NASA Astrophysics Data System (ADS)
Comert, Resul; Kaplan, Onur
2018-04-01
The aim of this study is to examine whether building periods can be estimated, for urban-scale seismic performance assessment, from building heights retrieved from unmanned aerial vehicle (UAV) data. For this purpose, a small area containing eight residential reinforced-concrete buildings was selected in the city center of Eskisehir (Turkey). In this paper, the possibilities of obtaining the building heights used in the estimation of building periods from UAV-based data are investigated. The investigation was carried out in three stages: (i) building boundary extraction with Object Based Image Analysis (OBIA); (ii) height calculation for the buildings of interest from the nDSM, with accuracy assessment against a terrestrial survey; and (iii) estimation of the building period from the height information. The average difference between the periods estimated from heights obtained by field measurements and from the UAV data is 2.86%, and the maximum difference is 13.2%. The results of this study show that building heights retrieved from UAV data can be used for building period estimation in urban-scale vulnerability assessments.
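The abstract does not state which period formula was used; as an illustration only, one common code-style empirical form relates the fundamental period to height as T = Ct·H^x. The sketch below uses ASCE 7-style coefficients for reinforced-concrete moment frames, which may differ from the code values applied in the study, and the two heights are hypothetical:

```python
# Sketch: empirical fundamental-period estimate from building height.
# T = Ct * H**x is a common code-style form; the coefficients below follow
# ASCE 7-style values for reinforced-concrete moment frames (metric) and are
# NOT necessarily those used in the paper, which does not state its formula.

def fundamental_period(height_m, ct=0.0466, exponent=0.9):
    return ct * height_m ** exponent

h_field, h_uav = 21.0, 20.4   # hypothetical heights: terrestrial survey vs. UAV
t_field = fundamental_period(h_field)
t_uav = fundamental_period(h_uav)
print(f"{abs(t_uav - t_field) / t_field:.1%} period difference")  # -> 2.6% period difference
```

Because T grows sublinearly with H, a small relative height error from the UAV data maps to an even smaller relative period error, consistent with the small average difference reported above.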
Kim, In-Ho; Jeon, Haemin; Baek, Seung-Chan; Hong, Won-Hwa; Jung, Hyung-Jo
2018-06-08
Bridge inspection using unmanned aerial vehicles (UAVs) with high-performance vision sensors has received considerable attention due to its safety and reliability. As bridges age, the number that must be inspected grows, along with the associated maintenance costs. A UAV-based inspection method with vision sensors is therefore a promising strategy for maintaining bridges. In this paper, a crack identification method using a commercial UAV with a high-resolution vision sensor is investigated on an aging concrete bridge. First, a point-cloud-based background model is generated in a preliminary flight. Then, cracks on the structural surface are detected with a deep learning algorithm, and their thickness and length are calculated. In the deep learning method, transfer learning based on regions with convolutional neural networks (R-CNN) is applied. As a result, a new network is generated from the pre-trained network using the 384 collected crack images of 256 × 256 pixel resolution. A field test was conducted to verify the proposed approach, and the experimental results show that UAV-based bridge inspection is effective at identifying and quantifying cracks on structures.
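Converting detected crack pixels into physical thickness and length requires a ground sample distance; a minimal sketch of that conversion, with illustrative sensor and flight parameters rather than the study's actual camera:

```python
# Sketch: converting a crack's pixel measurements to physical size.
# Ground sample distance (GSD) follows from sensor pixel pitch, focal length,
# and camera-to-surface distance; all numbers here are illustrative.

def ground_sample_distance(pixel_pitch_mm, focal_mm, distance_m):
    """Size of one pixel on the surface, in millimetres."""
    return pixel_pitch_mm * (distance_m * 1000.0) / focal_mm

gsd = ground_sample_distance(0.004, 8.8, 2.0)   # ~0.91 mm/pixel
crack_width_px, crack_length_px = 3, 540
print(round(crack_width_px * gsd, 2), "mm wide,",
      round(crack_length_px * gsd / 1000.0, 2), "m long")  # -> 2.73 mm wide, 0.49 m long
```

The same arithmetic explains why flight distance matters: doubling the stand-off distance doubles the GSD and halves the thinnest crack that spans at least one pixel.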
NASA Astrophysics Data System (ADS)
Shea, J. M.; Harder, P.; Pomeroy, J. W.; Kraaijenbrink, P. D. A.
2017-12-01
Mountain snowpacks represent a critical seasonal reservoir of water for downstream needs, and snowmelt is a significant component of mountain hydrological budgets. Ground-based point measurements are unable to describe the full spatial variability of snow accumulation and melt rates, and repeat Unmanned Air Vehicle (UAV) surveys provide an unparalleled opportunity to measure snow accumulation, redistribution, and melt in alpine environments. This study presents results from a UAV-based observation campaign conducted at the Fortress Mountain Snow Laboratory in the Canadian Rockies in 2017. Seven survey flights were conducted between April (maximum snow accumulation) and mid-July (bare ground) to collect imagery with both an RGB camera and a thermal infrared imager on the senseFly eBee RTK platform. UAV imagery is processed with structure-from-motion techniques, and orthoimages, digital elevation models, and surface temperature maps are validated against concurrent ground observations of snow depth, snow water equivalent, and snow surface temperature. We examine the seasonal evolution of snow depth and snow surface temperature, and explore the spatial covariances of these variables with respect to topographic factors and snow ablation rates. Our results have direct implications for scaling snow ablation calculations and for model resolution and discretization.
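Snow depth from repeat surveys is simply the difference between snow-on and snow-free elevation models, and a bulk-density assumption converts depth to snow water equivalent; a toy sketch with all values illustrative:

```python
# Sketch: snow depth from repeat UAV surveys and conversion to snow water
# equivalent (SWE). Depth is the snow-on DEM minus the snow-free DEM;
# the bulk density value is illustrative, not a measured one.

def snow_depth(dem_snow, dem_bare):
    return [[s - b for s, b in zip(rs, rb)] for rs, rb in zip(dem_snow, dem_bare)]

def swe_mm(depth_m, snow_density=300.0, water_density=1000.0):
    """SWE in millimetres of water for a snow depth in metres."""
    return depth_m * snow_density / water_density * 1000.0

depths = snow_depth([[2101.4, 2100.9]], [[2100.0, 2100.0]])
print([[round(d, 2) for d in row] for row in depths])   # -> [[1.4, 0.9]]
print(round(swe_mm(depths[0][0])))                      # -> 420
```

In practice the bare-ground DEM comes from the final mid-July flight, and density is taken from the concurrent snow water equivalent observations rather than assumed.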
An Innovative Unmanned System for Advanced Environmental Monitoring: Design and Development
NASA Astrophysics Data System (ADS)
Marsella, Ennio; Giordano, Laura; Evangelista, Lorenza; Iengo, Antonio; di Filippo, Alessandro; Coppola, Aniello
2015-04-01
The paper summarizes the design and development of new technology and tools for real-time coordination and control of unmanned vehicles for advanced environmental monitoring. A new unmanned system has been developed at the Institute for Coastal Marine Environment of the National Research Council (Italy), in the framework of two National Operational Programs (PON): the Technological Platform for Geophysical and Environmental Marine Survey (PITAM) and Integrated Systems and Technologies for Geophysical and Environmental Monitoring in Coastal-Marine Areas (STIGEAC). In particular, the system includes one Unmanned Aerial Vehicle (UAV) and two Unmanned Marine Vehicles (UMVs). The major innovations concern the implementation of a new architecture to control each drone and/or to allow cooperation between heterogeneous vehicles, the integration of distributed sensing techniques, and real-time image processing capabilities. Part of the research in these projects therefore involves an architecture in which the ground operator can communicate with the unmanned vehicles at various levels of abstraction using pointing devices and video viewing. In detail, a Ground Control Station (GCS) has been designed and developed to allow safe control of the drones within a range of up to twenty kilometers for airborne exploration and up to ten nautical miles for marine activities. The Ground Control Station has the following features: 1. a hardware/software system for the definition of mission profiles; 2. an autonomous and semi-autonomous control system operated by remote control (joystick or other) for the UAV and UMVs; 3. an integrated control system with comprehensive visualization, monitoring, and archiving of real-time data acquired from the scientific payload; 4. an open structure allowing future additions of systems, sensors, and/or additional vehicles.
In detail, the UAV is a dual-rotor architecture with an endurance ranging from 55 to 200 minutes, depending on payload weight (maximum 26 kg) and wind conditions, and the capability to survey an area of up to 5 km × 5 km. The UAV payload consists of three different types of sensors: a laser scanner, a thermal camera, and a gimbal-mounted reflex camera. The laser scanner has 10 mm survey-grade accuracy and a field of view of up to 330°. The thermal camera has a resolution of 640 × 480 pixels and a thermal sensitivity <20 mK (at 30 °C), while the reflex camera has a 22.3-megapixel full-frame sensor. In addition to common applications such as generating mapping, charting, and geodesy products, the system allows real-time survey and monitoring of different natural risks under dangerous conditions. The system also addresses environmental risk monitoring and prevention, industrial activity, and emergency interventions related to environmental crises (e.g., oil spills).
A UAV System for Observing Volcanoes and Natural Hazards
NASA Astrophysics Data System (ADS)
Saggiani, G.; Persiani, F.; Ceruti, A.; Tortora, P.; Troiani, E.; Giuletti, F.; Amici, S.; Buongiorno, M.; Distefano, G.; Bentini, G.; Bianconi, M.; Cerutti, A.; Nubile, A.; Sugliani, S.; Chiarini, M.; Pennestri, G.; Petrini, S.; Pieri, D.
2007-12-01
Fixed- or rotary-wing manned aircraft are currently the most commonly used platforms for airborne reconnaissance in response to natural hazards, such as volcanic eruptions, oil spills, wildfires, and earthquakes. Such flights are very often undertaken in hazardous flying conditions (e.g., turbulence, downdrafts, reduced visibility, close proximity to dangerous terrain) and can be expensive. To mitigate these two fundamental issues, safety and cost, we are exploring the use of small (less than 100 kg), relatively inexpensive, but effective unmanned aerial vehicles (UAVs) for this purpose. As an operational test, in 2004 we flew a small autonomous UAV in the airspace above and around Stromboli Volcano. Based in part on this experience, we are adapting the RAVEN UAV system for such natural hazard surveillance missions. RAVEN has a 50 km range, a 3.5 m wingspan, a main fuselage length of 4.60 m, and a maximum weight of 56 kg. It has autonomous flight capability and a ground control station for mission planning and control. It will carry a variety of imaging devices, including a visible camera and an IR camera. It will also carry an experimental Fourier micro-interferometer based on MOEMS technology (developed by the IMM Institute of CNR) to detect atmospheric trace gases. Such flexible, capable, and easy-to-deploy UAV systems may significantly shorten the time necessary to characterize the nature and scale of natural hazard threats if used from the outset of, and systematically during, natural hazard events. When appropriately utilized, such UAVs can provide a powerful new hazard mitigation and documentation tool for civil protection hazard responders. This research was carried out under the auspices of the Italian government and, in part, under contract to NASA at the Jet Propulsion Laboratory.
Application of high resolution images from unmanned aerial vehicles for hydrology and range science
USDA-ARS?s Scientific Manuscript database
A common problem in many natural resource disciplines is the lack of high-enough spatial resolution images that can be used for monitoring and modeling purposes. Advances have been made in the utilization of Unmanned Aerial Vehicles (UAVs) in hydrology and rangeland science. By utilizing low fligh...
Impact of Hurricane Irma on Little Ambergris Cay, Turks and Caicos
NASA Astrophysics Data System (ADS)
Stein, N.; Grotzinger, J. P.; Hayden, A.; Quinn, D. P.; Trower, L.; Lingappa, U.; Present, T. M.; Gomes, M.; Orzechowski, E. A.; Fischer, W. W.
2017-12-01
Little Ambergris Cay (21.3° N, 71.7° W) is a 6 km long, 1.6 km wide island on the Caicos platform. The island was the focus of mapping campaigns in July 2016, August 2017, and, following Hurricane Irma, in September 2017. The cay is lined with lithified upper-shoreface and eolian ooid grainstone forming a 1-4 m high bedrock rim that is locally breached, allowing tides to inundate an interior basin lined with extensive microbial mats. The island was mapped in July of 2016 using UAV- and satellite-based images and in situ measurements. Sedimentologic facies and biofacies were mapped onto a 15 cm/pixel visible-light orthomosaic of the cay made from more than 1500 UAV images, and a corresponding stereogrammetric digital elevation model (DEM) was used to track how microbial mat texture varies in response to water depth. An identical UAV-based visible-light map of the island was made in August 2017. On September 7th, 2017, the eye of Hurricane Irma passed directly over Little Ambergris Cay with sustained winds exceeding 170 mph. The island was remapped with a UAV on September 24th, yielding a 5 cm/pixel UAV-based visible-light orthomosaic and a corresponding DEM. In situ observations and comparison with previous UAV maps show that Irma caused significant channel and bedrock erosion, scouring and removal of broad tracts of microbial mats, and blanketing of large portions of the interior basin by ooid sediment, including smothering of mats by up to 1 m of sediment. The southern rim of the cay was overtopped by water and sediment, indicating a storm surge of at least 3 m. Blocks of rock more than 1 m in length and 50 cm thick were separated from bedrock on the north side of the island and washed higher to form imbricated boulder deposits. Hundreds of 5-30 cm diameter imbricated rip-up intraclasts of rounded microbial mat now line exposed bedrock in the interior basin.
Fresh ooid sediment and microbial mats were sampled from three sites: on desiccated mats 50 cm above tide level, on submerged mats in the interior basin, and on mats near the head of a newly incised channel. This work highlights how major disturbances alter sedimentological and biofacies distributions on carbonate platforms and provides insight into interpreting carbonate sedimentology and biosignatures in the rock record.
NASA Astrophysics Data System (ADS)
Chesley, J. T.; Leier, A. L.; White, S.; Torres, R.
2017-06-01
Recently developed data collection techniques allow for improved characterization of sedimentary outcrops. Here, we outline a workflow that utilizes unmanned aerial vehicles (UAVs) and structure-from-motion (SfM) photogrammetry to produce sub-meter-scale outcrop reconstructions in 3-D. SfM photogrammetry uses multiple overlapping images and an image-based terrain extraction algorithm to reconstruct the location of individual points from the photographs in 3-D space. The results of this technique can be used to construct point clouds, orthomosaics, and digital surface models that can be imported into GIS and related software for further study. The accuracy of the reconstructed outcrops, with respect to an absolute framework, is improved with geotagged images or independently gathered ground control points, and the internal accuracy of 3-D reconstructions is sufficient for sub-meter-scale measurements. We demonstrate this approach with a case study from central Utah, USA, where UAV-SfM data can help delineate complex features within Jurassic fluvial sandstones.
A New Paradigm for Matching UAV- and Aerial Images
NASA Astrophysics Data System (ADS)
Koch, T.; Zhuo, X.; Reinartz, P.; Fraundorfer, F.
2016-06-01
This paper investigates the performance of SIFT-based image matching under large differences in image scale and rotation, as is usually the case when matching images captured from UAVs and airplanes. This task is an essential step for image registration and 3D reconstruction applications. Various real-world examples presented in this paper show that SIFT, as well as A-SIFT, performs poorly or even fails in this matching scenario. Even if the scale difference between the images is known and eliminated beforehand, matching performance suffers from too few feature point detections, ambiguous feature point orientations, and the rejection of many correct matches when the ratio test is applied afterwards. Therefore, a new feature matching method is presented that overcomes these problems and delivers thousands of matches, using a novel feature point detection strategy, a one-to-many matching scheme, and geometric constraints in place of the ratio test to obtain geometrically correct matches in repetitive image regions. The method is designed for matching almost nadir-directed images with low scene depth, as is typical in UAV and aerial image matching scenarios. We tested the proposed method on different real-world image pairs. While standard SIFT failed for most of the datasets, plenty of geometrically correct matches could be found using our approach. Comparing the estimated fundamental matrices and homographies with ground-truth solutions, mean errors of a few pixels can be achieved.
Kim, Byeong Hak; Kim, Min Young; Chae, You Seong
2017-01-01
Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera, such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward-looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of NUC, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle-wave infrared (MWIR) and long-wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
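As a much-simplified stand-in for the background-registration-plus-RPCA pipeline described above, a per-pixel temporal median over registered frames already separates a static background from transient noise; a toy sketch, with frames assumed pre-registered and an illustrative threshold:

```python
# Sketch: per-pixel temporal-median background model with residual
# thresholding, a much-simplified stand-in for the paper's background
# registration + RPCA pipeline (frames assumed already registered).

from statistics import median

def background(frames):
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

def residual_mask(frame, bg, thresh=10):
    """Flag pixels whose deviation from the background exceeds thresh."""
    return [[abs(v - b) > thresh for v, b in zip(fr, br)]
            for fr, br in zip(frame, bg)]

frames = [[[50, 50]], [[52, 49]], [[51, 90]]]   # third frame: hot pixel at (0,1)
bg = background(frames)
print(bg)                               # -> [[51, 50]]
print(residual_mask(frames[2], bg))     # -> [[False, True]]
```

RPCA generalizes this idea by decomposing the registered frame stack into a low-rank background term and a sparse term capturing the non-fixed pattern noise.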
Integrated long-range UAV/UGV collaborative target tracking
NASA Astrophysics Data System (ADS)
Moseley, Mark B.; Grocholsky, Benjamin P.; Cheung, Carol; Singh, Sanjiv
2009-05-01
Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain sensing and increase opportunities for improving line-of-sight communications. While numerous military missions would benefit from coordinated UAV-UGV operations, the foundational capabilities that integrate stove-piped tactical systems and share available sensor data are required but not yet available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative capabilities for surveillance, targeting, and improved communications based on the PackBot UGV and Raven UAV platforms. We integrate newly available technologies into computational, vision, and communications payloads and develop sensing algorithms to support vision-based target tracking. We first simulated, and then deployed on real tactical platforms, an implementation of Decentralized Data Fusion, a novel technique for fusing track estimates from the PackBot and Raven platforms for a moving target in an open environment. In addition, integrating AeroVironment's Digital Data Link onto both air and ground platforms has extended the communications range at which the PackBot can be operated and increased video and data throughput. The system is brought together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides simultaneous waypoint navigation and traditional teleoperation. We also present several recent capability accomplishments toward PackBot-Raven coordinated operations, including single-OCU display design and operation, early target track results, and Digital Data Link integration efforts, as well as our near-term capability goals.
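The core of fusing track estimates from the two platforms can be illustrated with inverse-variance weighting in information form; a toy 1-D sketch, noting that real Decentralized Data Fusion also subtracts common prior information (omitted here) and that all numbers are hypothetical:

```python
# Sketch: fusing two independent 1-D target-position estimates in information
# form, the core idea behind Decentralized Data Fusion. Real DDF also removes
# shared prior information between nodes, which this toy version omits.

def fuse(est_a, var_a, est_b, var_b):
    """Combine two estimates weighted by inverse variance."""
    info = 1.0 / var_a + 1.0 / var_b
    return (est_a / var_a + est_b / var_b) / info, 1.0 / info

# Hypothetical target x-positions: UGV camera (close, precise) vs. UAV (coarse).
pos, var = fuse(10.2, 0.25, 11.0, 1.0)
print(round(pos, 2), round(var, 2))   # -> 10.36 0.2
```

The fused variance is always smaller than either input variance, which is why combining the air and ground tracks improves on what either platform reports alone.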
QWIP technology for both military and civilian applications
NASA Astrophysics Data System (ADS)
Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.
2001-10-01
Advanced thermal imaging infrared cameras have been a cost-effective and reliable method of obtaining the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with Caltech, is currently manufacturing the QWIP-Chip™, a 320 × 256 element, bound-to-quasibound QWIP FPA. The camera operates within the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multi-tasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 W. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent applications in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring, and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems, and advanced image processing for use in both military remote sensing and civilian applications currently being developed in road hazard monitoring.
Tiny videos: a large data set for nonparametric video retrieval and frame classification.
Karpenko, Alexandre; Aarabi, Parham
2011-03-01
In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation, an exemplar-based clustering algorithm, achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes
NASA Astrophysics Data System (ADS)
Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio
2017-12-01
A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating on the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one approach for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as a UAV, because the full spectrum is recorded time-sequentially. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands without orientation are then matched to the oriented bands, accounting for the 3D scene, to provide exterior orientations; afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI), on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5 pixel. The empirical assessment confirmed this performance and showed that, with the novel method, most of the band misalignments were less than one pixel. Furthermore, it was shown that the performance of the band alignment depended on the spatial distance from the reference band.
Efficient Use of Video for 3D Modelling of Cultural Heritage Objects
NASA Astrophysics Data System (ADS)
Alsadik, B.; Gerke, M.; Vosselman, G.
2015-03-01
Currently, there is rapid development in automated image-based modelling (IBM) techniques, especially in advanced structure-from-motion (SfM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques, because the latter requires thorough planning and proficiency. However, three main problems arise when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur caused by camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model comparable with models produced by still imaging. Two experiments, modelling a building and a monument, were carried out using a video image resolution of 1920 × 1080 pixels. Internal and external validation of the produced models was applied to determine the final predicted accuracy and the model's level of detail. Depending on object complexity and video imaging resolution, the tests show an achievable average accuracy of between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums, and low-detail documentation.
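Selecting the minimal set of video frames requires scoring blur; one standard proxy (not necessarily the authors' exact criterion) is the energy of a discrete Laplacian, sketched here on toy intensity grids:

```python
# Sketch: ranking video frames by a Laplacian-energy sharpness score so that
# shake-blurred frames can be dropped before 3D reconstruction. The 4-neighbour
# Laplacian and the toy images are illustrative, not the paper's criterion.

def sharpness(img):
    """Mean squared 4-neighbour Laplacian over the image interior
    (blur suppresses high frequencies, lowering this score)."""
    vals = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            lap = (img[r-1][c] + img[r+1][c] + img[r][c-1] + img[r][c+1]
                   - 4 * img[r][c])
            vals.append(lap * lap)
    return sum(vals) / len(vals)

sharp = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]       # high-contrast detail
blurred = [[4, 5, 4], [5, 5, 5], [4, 5, 4]]     # same pattern heavily smoothed
print(sharpness(sharp) > sharpness(blurred))    # -> True
```

Combined with a coverage criterion (keeping frames whose baselines add new object geometry), such a score lets the pipeline discard short-baseline, shaky frames without losing object coverage.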
Improved TDEM formation using fused ladar/digital imagery from a low-cost small UAV
NASA Astrophysics Data System (ADS)
Khatiwada, Bikalpa; Budge, Scott E.
2017-05-01
Formation of a Textured Digital Elevation Model (TDEM) is useful in many applications in the fields of agriculture, disaster response, terrain analysis, and more. Use of a low-cost small UAV system with a texel camera (fused lidar/digital imagery) can significantly reduce costs compared to conventional aircraft-based methods. This paper reports continued work on the problem reported in a previous paper by Bybee and Budge, with improvements in performance. A UAV fitted with a texel camera is flown at a fixed height above the terrain, and swaths of texel image data of the terrain below are taken continuously. Each texel swath has one or more lines of lidar data surrounded by a narrow strip of EO data. Texel swaths are captured such that there is some overlap between adjacent swaths. The GPS/IMU fitted to the camera also gives coarse knowledge of attitude and position. Using this coarse knowledge and the information in the texel images, the error in camera position and attitude is reduced, which helps produce an accurate TDEM. This paper improves on the original work by using multiple lines of lidar data per swath. The final results are shown and analyzed for numerical accuracy.
Gao, Mingxing; Xu, Xiwei; Klinger, Yann; van der Woerd, Jerome; Tapponnier, Paul
2017-08-15
The recent dramatic increase in millimeter- to centimeter-resolution topographic datasets obtained via multi-view photogrammetry raises the possibility of mapping detailed offset geomorphology and constraining the spatial characteristics of active faults. Here, for the first time, we applied this new method to acquire high-resolution imagery and generate topographic data along the Altyn Tagh fault, which is located in a remote, high-elevation area and preserves ancient earthquake surface ruptures. A digital elevation model (DEM) with a resolution of 0.065 m and an orthophoto with a resolution of 0.016 m were generated from these images. We identified piercing markers and reconstructed offsets based on both the orthoimage and the topography. The high-resolution UAV data were used to accurately measure the most recent seismic offset, which we determined to be 7 ± 1 m. Combined with high-resolution satellite imagery, we measured cumulative offsets of 15 ± 2 m, 20 ± 2 m, and 30 ± 2 m, which may record multiple paleo-earthquakes. UAV mapping can therefore provide fine-scale data for the assessment of seismic hazards.
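The offset-reconstruction idea — projecting a displaced linear marker from each side of the fault onto the fault trace and measuring the along-strike separation — can be sketched as follows. The channel geometry, coordinates, and function name are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

def strike_slip_offset(north_pts, south_pts, fault_y=0.0):
    """Fit a straight line to a linear marker (e.g. a channel thalweg) mapped
    on each side of a fault, extrapolate both to the fault trace, and return
    the along-fault separation between the two piercing points."""
    # x = along-fault coordinate, y = across-fault coordinate; fit x as x(y)
    bn = np.polyfit(north_pts[:, 1], north_pts[:, 0], 1)
    bs = np.polyfit(south_pts[:, 1], south_pts[:, 0], 1)
    x_n = np.polyval(bn, fault_y)   # piercing point, north side
    x_s = np.polyval(bs, fault_y)   # piercing point, south side
    return abs(x_n - x_s)

# Hypothetical channel digitized from a DEM, displaced 7 m across the fault (y = 0)
y_n = np.linspace(1, 20, 10)
north = np.column_stack([0.2 * y_n, y_n])           # x = 0 at the fault trace
y_s = np.linspace(-20, -1, 10)
south = np.column_stack([0.2 * y_s + 7.0, y_s])     # same trend, shifted 7 m
print(strike_slip_offset(north, south))             # ~7.0
```

On real topography the marker points would be digitized from the DEM and orthoimage, and the spread of fits over plausible piercing lines gives the reported uncertainty (e.g. ± 1 m).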
Ribeiro-Gomes, Krishna; Hernández-López, David; Ortega, José F; Ballesteros, Rocío; Poblete, Tomás; Moreno, Miguel A
2017-09-23
The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) is becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, because of the low payload capacity of UAVs, they must carry lightweight, uncooled thermal cameras in which the microbolometer is not stabilized at a constant temperature, which limits camera precision for many applications. Additionally, the low contrast of the thermal images makes the photogrammetric process inaccurate, resulting in large errors in the generation of orthoimages. In this research, we propose new calibration algorithms, based on neural networks, that take the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter to improve the quality of the photogrammetric process with structure-from-motion software. With the proposed calibration algorithm, the measurement accuracy improved from 3.55 °C with the original camera configuration to 1.37 °C. The Wallis filter increased the number of tie-points from 58,000 to 110,000 and decreased the total positioning error from 7.1 m to 1.3 m. PMID:28946606
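The Wallis filter mentioned above forces the local mean and standard deviation of an image toward target values, which stretches contrast in flat thermal frames and gives feature detectors more tie-points to work with. A simplified sketch is shown below; the window size, target statistics, and the synthetic frame are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(img, win=31, target_mean=127.0, target_std=60.0, eps=1.0):
    """Simplified Wallis filter: rescale each pixel so the local mean/std
    approach target values, boosting contrast in low-contrast imagery."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, win)                    # local mean
    sq_mean = uniform_filter(img * img, win)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))  # local std
    out = (img - mean) * (target_std / (std + eps)) + target_mean
    return np.clip(out, 0.0, 255.0)

# Low-contrast synthetic thermal frame: values crowded into a narrow band
rng = np.random.default_rng(1)
frame = rng.normal(120.0, 3.0, size=(64, 64))
enhanced = wallis_filter(frame)
print(frame.std(), enhanced.std())   # contrast is stretched substantially
```

The full Wallis formulation adds blending weights between the original and target statistics; this version drops them for brevity.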
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
In previous work by the author, effective persistent and pervasive sensing for recognition and tracking of battlefield targets was achieved using intelligent algorithms implemented by distributed mobile agents over a composite system of unmanned aerial vehicles (UAVs), for persistence, and a wireless network of unattended ground sensors, for pervasive coverage of the mission environment. While simulated performance results for the supervised algorithms of the composite system provide satisfactory target recognition over relatively brief periods of system operation, this performance can degrade by as much as 50% as target dynamics in the environment evolve beyond the period in which the training data are representative. To overcome this limitation, this paper applies the distributed mobile-agent approach to the network of ground-based wireless sensors alone, without the UAV subsystem, to provide persistent as well as pervasive sensing for target recognition and tracking. The supervised algorithms used in the earlier work are supplanted by unsupervised routines, including competitive-learning neural networks (CLNNs) and new versions of support vector machines (SVMs), for characterization of an unknown target environment. To capture the same physical phenomena from battlefield targets as the composite system, the suite of ground-based sensors can be expanded to include imaging and video capabilities. The spatial density of deployed sensor nodes is increased to allow more precise ground-based location and tracking of detected targets by active nodes. The "swarm" of mobile agents enabling WSN intelligence is organized into three processing stages: detection, recognition, and sustained tracking of ground targets.
Features formed from the compressed sensor data are down-selected according to an information-theoretic algorithm that reduces redundancy within the feature set, reducing the dimension of the samples used in the target recognition and tracking routines. Target tracking is based on simplified versions of Kalman filtering. The accuracy of recognition and tracking achieved by the implemented versions of the proposed suite of unsupervised algorithms is somewhat degraded from the ideal. Target recognition and tracking by supervised routines and by unsupervised SVM and CLNN routines in the ground-based WSN are evaluated in simulations using published system values and sensor data from vehicular targets in ground-surveillance scenarios. Results are compared with previously published performance for the composite system of the ground-based sensor network (GSN) and UAV swarm.
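A simplified Kalman tracker of the kind described above can be sketched with a 1-D constant-velocity model; the noise parameters, the synthetic measurement stream, and the function name are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.5):
    """Minimal 1-D constant-velocity Kalman filter over a sequence of noisy
    position measurements zs; returns the filtered position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # position is observed directly
    Q = q * np.eye(2)                          # process noise covariance
    R = np.array([[r]])                        # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])             # state: [position, velocity]
    P = np.eye(2)
    track = []
    for z in zs:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x            # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y                          # update
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0, 0])
    return np.array(track)

# Hypothetical noisy range measurements of a target moving 2 m per time step
rng = np.random.default_rng(2)
truth = 2.0 * np.arange(40)
zs = truth + rng.normal(0.0, 0.7, size=40)
est = kalman_track(zs)
print(np.abs(est - truth)[-10:].mean(), np.abs(zs - truth)[-10:].mean())
```

Once the velocity estimate converges, the filtered error falls well below the raw measurement error; the networked version in the paper would fuse detections from multiple active sensor nodes rather than a single measurement stream.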