Sample records for UAV video image

  1. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectory using a video image matching system based on SURF (Speeded-Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
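
    A minimal sketch of the matching and relative-pose step described above, using OpenCV. ORB features and OpenCV's standard RANSAC are used here as freely available stand-ins for the SURF and Preemptive RANSAC components named in the abstract; the camera matrix K is assumed to be known from calibration.

    ```python
    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Estimate rotation R and translation direction t between two video frames."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # RANSAC separates inliers from outliers while fitting the essential matrix.
        E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
        # Only the inliers contribute to the recovered relative pose (R, t up to scale).
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
        return R, t
    ```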

  2. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e., the observations are taken at time intervals from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames into a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those of single video frames and are useful for interactive image exploitation due to the larger scene coverage.
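
    The extended image differencing step can be illustrated as follows: a change mask from an adaptive threshold applied to a linear combination of the intensity difference and the gradient-magnitude difference of a registered image pair. The weights and the mean-plus-k-sigma threshold rule below are illustrative assumptions, not the authors' actual parameters.

    ```python
    import cv2
    import numpy as np

    def extended_difference_mask(prev_gray, curr_gray, w_int=0.6, w_grad=0.4, k=3.0):
        def grad_mag(img):
            gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
            return cv2.magnitude(gx, gy)

        d_int = cv2.absdiff(prev_gray, curr_gray).astype(np.float32)
        d_grad = np.abs(grad_mag(prev_gray) - grad_mag(curr_gray))

        combined = w_int * d_int + w_grad * d_grad
        # Adaptive threshold: mean plus k standard deviations of the combined difference.
        thr = combined.mean() + k * combined.std()
        return (combined > thr).astype(np.uint8) * 255
    ```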

  3. Video change detection for fixed wing UAVs

    NASA Astrophysics Data System (ADS)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al.1 We present the draft of a process chain for image-based change detection designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be ensured simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change for "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off the shelf (COTS) system, which comprises a differential GPS and autopilot system, to estimate the repetition accuracy of its trajectory. Although several similar approaches have been presented,2,3 as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed wing UAV and to synthetic data. For the

  4. Design of UAV high resolution image transmission system

    NASA Astrophysics Data System (ADS)

    Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng

    2017-02-01

    In order to solve the problem of the bandwidth limitation of the image transmission system on a UAV, an image compression scheme for mini UAVs is proposed, based on the requirements of a high-definition UAV image transmission system. The H.264 video coding standard and its key technologies were analyzed and studied for UAV video communication. Based on research into high-resolution image encoding and decoding techniques and wireless transmission methods, a high-resolution image transmission system was designed on an architecture combining Android and a video codec chip. The constructed system was verified by laboratory experiments: the bit rate can be controlled easily, the QoS is stable, and the low latency meets most application requirements, not only for military use but also for industrial applications.

  5. A method of intentional movement estimation of oblique small-UAV videos stabilized based on homography model

    NASA Astrophysics Data System (ADS)

    Guo, Shiyi; Mai, Ying; Zhao, Hongying; Gao, Pengqi

    2013-05-01

    The airborne video streams of small UAVs are commonly plagued by distracting jitter and shaking motions, disorienting rotations, noisy and distorted images and other unwanted movements. These problems collectively make it very difficult for observers to obtain useful information from the video. Due to the small payload of small UAVs, it is a priority to improve the image quality by means of electronic image stabilization. But when a small UAV makes a turn, its flight characteristics cause the video to become oblique. This creates considerable difficulties for electronic image stabilization. The homography model performs well for oblique image motion estimation, but poses great challenges for intentional motion estimation. Therefore, in this paper, we focus on stabilizing the video when small UAVs are banking and turning. We assume the small UAV flies along an arc of fixed turning radius. After a series of experimental analyses of the flight characteristics and turning paths of small UAVs, we present a new method to estimate the intentional motion in which the path of the frame center is used to fit the video motion track. Meanwhile, a dynamic mosaic of the image sequence is built to compensate for the limited field of view. Finally, the proposed algorithm was applied to and validated on actual airborne videos. The results show that the proposed method is effective for stabilizing the oblique video of small UAVs.
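
    The intentional motion estimation idea can be sketched as follows: accumulate the inter-frame homographies, track the trajectory of the frame centre, and fit a smooth path to it; the residual is the jitter to be compensated. A low-order polynomial fit over time is used below as a simple stand-in for the arc-of-fixed-turning-radius model used in the paper, and the mapping direction of the input homographies is an assumption.

    ```python
    import numpy as np

    def intentional_motion(homographies, frame_shape, poly_order=2):
        h, w = frame_shape[:2]
        centre = np.array([w / 2.0, h / 2.0, 1.0])

        # Accumulate homographies to express each frame centre in the first frame's geometry.
        H_acc = np.eye(3)
        path = []
        for H in homographies:
            H_acc = H @ H_acc
            p = H_acc @ centre
            path.append(p[:2] / p[2])
        path = np.array(path)

        # Fit a smooth trajectory to the centre path; the residual between the raw and
        # fitted path is the unwanted jitter to be removed by the stabilizer.
        t = np.arange(len(path))
        fit_x = np.polyval(np.polyfit(t, path[:, 0], poly_order), t)
        fit_y = np.polyval(np.polyfit(t, path[:, 1], poly_order), t)
        smooth = np.stack([fit_x, fit_y], axis=1)
        return smooth, path - smooth   # intentional path, per-frame jitter offsets
    ```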

  6. Change Detection in UAV Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations separated by a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples of non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
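
    An illustrative sketch of the directed change masks: compare a local corner-strength measure (a Harris response is used here as an assumption) between the registered previous and current images, and label pixels where features are present only in one of the two. The ratio and response thresholds are placeholders, not the authors' parameters.

    ```python
    import cv2
    import numpy as np

    def directed_change_masks(prev_gray, curr_gray, win=15, ratio=2.0, min_resp=1e-4):
        def corner_strength(img):
            r = cv2.cornerHarris(np.float32(img) / 255.0, blockSize=3, ksize=3, k=0.04)
            # Local maximum of the corner response in a neighbourhood around each pixel.
            return cv2.dilate(np.maximum(r, 0), np.ones((win, win), np.uint8))

        s_prev = corner_strength(prev_gray)
        s_curr = corner_strength(curr_gray)

        # "New object": feature in the current image, none or much weaker in the previous.
        new_obj = (s_curr > min_resp) & (s_curr > ratio * s_prev)
        # "Vanished object": feature in the previous image, none or much weaker in the current.
        vanished = (s_prev > min_resp) & (s_prev > ratio * s_curr)
        return new_obj.astype(np.uint8) * 255, vanished.astype(np.uint8) * 255
    ```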

  7. Annotation of UAV surveillance video

    NASA Astrophysics Data System (ADS)

    Howlett, Todd; Robertson, Mark A.; Manthey, Dan; Krol, John

    2004-08-01

    Significant progress toward the development of a video annotation capability is presented in this paper. Research and development of an object tracking algorithm applicable to UAV video is described. Object tracking is necessary for attaching the annotations to the objects of interest. A methodology and format is defined for encoding video annotations using the SMPTE Key-Length-Value (KLV) encoding standard. This provides the following benefits: a non-destructive annotation, compliance with existing standards, video playback in systems that are not annotation enabled, and support for a real-time implementation. A model real-time video annotation system is also presented, at a high level, using the MPEG-2 Transport Stream as the transmission medium. This work was accomplished to meet the Department of Defense's (DoD's) need for a video annotation capability. The current practice for creating annotated products is to capture a still image frame, annotate it using an electronic light table application, and then pass the annotated image on as a product. That is not adequate for reporting or downstream cueing: it is too slow and there is a severe loss of information. This paper describes a capability for annotating directly on the video.
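
    SMPTE KLV packing itself is compact enough to sketch: a 16-byte universal key, a BER-encoded length, then the value bytes. The key below is a placeholder, not a registered SMPTE universal label, and the payload format is purely illustrative.

    ```python
    def ber_length(n: int) -> bytes:
        """BER length field: short form below 128 bytes, long form otherwise."""
        if n < 128:
            return bytes([n])
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        return bytes([0x80 | len(body)]) + body

    def klv_pack(key16: bytes, value: bytes) -> bytes:
        assert len(key16) == 16, "SMPTE universal keys are 16 bytes"
        return key16 + ber_length(len(value)) + value

    # Example with a placeholder universal label and an annotation payload:
    key = bytes.fromhex("060E2B34" + "00" * 12)
    packet = klv_pack(key, b"annotation: track-01, 'vehicle of interest'")
    ```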

  8. Short-term change detection for UAV video

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2012-11-01

    In recent years, there has been increased use of unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken several days, weeks, or even years apart. Examples of relevant changes we are looking for are recently parked or moved vehicles. As a prerequisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and the multivariate alteration detection. The algorithms are adapted to be used in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer

  9. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    PubMed

    Kedzierski, Michal; Delis, Paulina

    2016-06-23

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery in which the external orientation of the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.
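
    For reference, the standard collinearity equations that form the starting point mentioned above are reproduced below in conventional notation; the paper's simplified model for a camera tilted to 90° is not reproduced here. Here (x, y) are image coordinates, (x0, y0, f) the interior orientation, (XS, YS, ZS) the projection centre, and rij the elements of the rotation matrix built from (ω, φ, κ).

    ```latex
    x - x_0 = -f\,
      \frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
           {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},
    \qquad
    y - y_0 = -f\,
      \frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
           {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
    ```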

  10. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    PubMed Central

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery in which the external orientation of the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954

  11. Three Dimensional Reconstruction of Large Cultural Heritage Objects Based on UAV Video and TLS Data

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.

    2016-06-01

    This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) in the 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, which carries a consumer digital camera, is used to collect dynamic video to overcome its limited endurance. Then, a 3D point cloud is generated from the video image sequences using automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect information beyond the reach of UAV imaging, e.g., parts of the building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds, from UAV image reconstruction and TLS scanning, into a complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm using local terrain-invariant regions is introduced in the combined registration. The experimental study is conducted on the Tulou cultural heritage buildings in Fujian Province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
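
    The point-cloud alignment step can be illustrated with a minimal point-to-point ICP (SVD-based rigid fit to nearest-neighbour correspondences). This is a plain stand-in for the coarse-to-fine, terrain-invariant ICP variant described above and assumes a reasonable initial alignment between the UAV and TLS clouds.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=30):
        """source, target: (N, 3) and (M, 3) arrays. Returns a 4x4 rigid transform."""
        T = np.eye(4)
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iters):
            # Nearest-neighbour correspondences from the current source to the target.
            _, idx = tree.query(src)
            tgt = target[idx]
            # Best-fit rigid transform via SVD (Kabsch), without scale.
            mu_s, mu_t = src.mean(0), tgt.mean(0)
            H = (src - mu_s).T @ (tgt - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t
            step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
            T = step @ T
        return T
    ```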

  12. Heterogeneous CPU-GPU moving targets detection for UAV video

    NASA Astrophysics Data System (ADS)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. Moving targets occupy only a small fraction of the pixels in the HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the algorithm prevents it from running at the full resolution of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough for the task.

  13. Experimental application of simulation tools for evaluating UAV video change detection

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Bartelsen, Jan

    2015-10-01

    Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on a short time scale, i.e., the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to a near-nadir overflight of the UAV above the area of interest, and the relevant changes are, e.g., recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are versatile objects like trees and compression or transmission artifacts. To enable the use of automatic change detection within the interactive workflow of a UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and altering influence parameters (e.g., image quality, sensor and flight parameters), including image metadata and ground truth data, are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view, the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection are described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. For the selected examples, the images could be registered, the modelled changes could be extracted and the

  14. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  15. Automated multiple target detection and tracking in UAV videos

    NASA Astrophysics Data System (ADS)

    Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie

    2010-04-01

    In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
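
    The per-target constant-velocity Kalman filter and the overlap-rate association measure described above can be sketched as follows; the state is [cx, cy, vx, vy] and the noise parameters are illustrative, not the authors' values.

    ```python
    import numpy as np

    class TargetKF:
        """Constant-velocity Kalman filter on a target's image position."""
        def __init__(self, cx, cy, dt=1.0):
            self.x = np.array([cx, cy, 0.0, 0.0])
            self.P = np.eye(4) * 10.0
            self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                               [0, 0, 1, 0], [0, 0, 0, 1]], float)
            self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
            self.Q = np.eye(4) * 0.01          # process noise (assumed)
            self.R = np.eye(2) * 1.0           # measurement noise (assumed)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, z):
            y = np.asarray(z, float) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    def overlap_rate(a, b):
        """Overlap rate (IoU) of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / union if union > 0 else 0.0
    ```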

  16. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAV) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight, the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  17. Precise Target Geolocation and Tracking Based on UAV Video Imagery

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as C/A code GPS and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 meters. This accuracy is insufficient for applications that require high-precision data at the cm level. This paper presents a precise process for the geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process. Compared with ordinary code-based GPS, the results of this study indicate that RTK observations with the proposed method improve the accuracy of target geolocation by more than a factor of ten.
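
    As a much-simplified illustration of target geolocation from a georeferenced camera, the sketch below intersects the image ray of a target pixel with a flat terrain plane, given the camera pose from RTK GPS/IMU. It is a stand-in for the photogrammetric bundle-adjustment formulation used in the paper; the camera model and axis conventions are assumptions.

    ```python
    import numpy as np

    def geolocate(pixel, K, R_cam_to_world, cam_pos, ground_z=0.0):
        """pixel: (u, v); K: 3x3 intrinsics; cam_pos: (X, Y, Z) in a local ENU frame."""
        uv1 = np.array([pixel[0], pixel[1], 1.0])
        ray_cam = np.linalg.inv(K) @ uv1           # viewing ray in camera coordinates
        ray_world = R_cam_to_world @ ray_cam       # rotate the ray into the world frame
        # Scale the ray so that it reaches the ground plane Z = ground_z.
        s = (ground_z - cam_pos[2]) / ray_world[2]
        return np.asarray(cam_pos, float) + s * ray_world   # target (X, Y, Z)
    ```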

  18. Evaluation of experimental UAV video change detection

    NASA Astrophysics Data System (ADS)

    Bartelsen, J.; Saur, G.; Teutsch, C.

    2016-10-01

    During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are easily overlooked when change detection is executed manually. Depending on the circumstances, these kinds of changes may be an indication of sabotage, terrorist activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Krüger,1 and Saur et al.,2 and have built upon the ideas of Saur and Bartelsen.3 The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene, and to obtain synthetic videos. Video frames which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect

  19. The application of micro UAV in construction project

    NASA Astrophysics Data System (ADS)

    Kaamin, Masiri; Razali, Siti Nooraiin Mohd; Ahmad, Nor Farah Atiqah; Bukari, Saifullizan Mohd; Ngadiman, Norhayati; Kadir, Aslila Abd; Hamid, Nor Baizura

    2017-10-01

    Every outstanding construction project relies on effective construction management, which allows the project to be implemented according to plan. Every construction project must document its work progress, and this record is usually created by the site engineer. Documenting the progress of works is one of the requirements of construction management, and a progress report needs visual images as evidence. The conventional method of photographing a construction site uses a common digital camera, which has several drawbacks compared with a micro Unmanned Aerial Vehicle (UAV). In addition, site engineers face ongoing issues with the limited ability to monitor high, hard-to-reach points and to view the construction site as a whole. The purpose of this paper is to provide a concise review of micro UAV technology for monitoring progress on a construction site through a visualization approach. The aim of this study is to replace the conventional method of photographing the construction site with a micro UAV, which can portray the whole view of the building, especially high, hard-to-reach points, produce better images, videos and 3D models, and make it easier for the site engineer to monitor work in progress. The micro UAV was flown around the building under construction according to the Ground Control Points (GCPs) to capture images and record videos. The images taken by the micro UAV were processed to generate a 3D model and analysed to visualize the building construction, monitor the construction progress, and provide immediate, reliable data for project estimation. It has been shown that, by using a micro UAV, better images and videos give a better overview of the construction site and allow defects at high, hard-to-reach points of the building structure to be monitored. Moreover, with a micro UAV the construction progress is tracked more efficiently and kept on schedule.

  20. Image restoration for civil engineering structure monitoring using imaging system embedded on UAV

    NASA Astrophysics Data System (ADS)

    Vozel, Benoit; Dumoulin, Jean; Chehdi, Kacem

    2013-04-01

    Nowadays, civil engineering structures are periodically surveyed by qualified technicians (i.e., climbers) performing visual inspection from heavy mechanical pods. This method is far from safe, not only for the monitoring staff but also for users. Due to the unceasing increase in traffic, making diversions or closing lanes on a bridge becomes more and more difficult. New inspection methods have to be found. One of the most promising techniques is to develop inspection methods using images acquired by a dedicated monitoring system operating around the civil engineering structure, without disturbing the traffic. In that context, the use of images acquired with a UAV flying around the structure is of particular interest. The UAV can be equipped with different vision systems (digital camera, infrared sensor, video, etc.). Nonetheless, the detection of small distresses in images (like cracks of 1 mm or less) depends on image quality, which is sensitive to internal parameters of the UAV (vibration modes, video exposure times, etc.) and to external parameters (turbulence, bad illumination of the scene, etc.). Though progress has been made at the UAV level and at the sensor level (i.e., optics), image deterioration is still an open problem. These deteriorations mainly take the form of motion blur, possibly coupled with out-of-focus blur and observation noise in the acquired images. In practice, the deteriorations are unknown if no a priori information is available or no dedicated additional instrumentation is set up at the UAV level. Image restoration processing is therefore required. This is a difficult problem [1-3] which has been intensively studied over the last decades [4-12]. Image restoration can be addressed by following a blind approach or a myopic one. In both cases, it includes two processing steps that can be implemented in sequential or alternating mode. The first step carries out the identification of the blur impulse response and the second one makes use of this
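
    Once the blur impulse response (PSF) has been identified, the non-blind restoration step can be sketched with a frequency-domain Wiener filter, as below. The blind/myopic identification step discussed above is not covered; the horizontal linear-motion PSF and the noise-to-signal ratio are assumptions.

    ```python
    import numpy as np

    def motion_psf(length, shape):
        """Horizontal linear-motion PSF padded to the image size."""
        psf = np.zeros(shape)
        psf[0, :length] = 1.0 / length
        return psf

    def wiener_deconvolve(blurred, psf, nsr=0.01):
        """Wiener filter: F_hat = conj(H) / (|H|^2 + NSR) * G, computed via FFT."""
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.fft.fft2(blurred)
        F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
        return np.real(np.fft.ifft2(F_hat))

    # Example: restored = wiener_deconvolve(blurred_image, motion_psf(15, blurred_image.shape))
    ```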

  1. Autonomous target tracking of UAVs based on low-power neural network hardware

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe

    2014-05-01

    Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements, while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging the emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMemTMneural network chip, for processing the video data acquired from a UAV camera on the y. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs and their superior performance and power advantages towards the real-time, autonomous target tracking.

  2. Precise Target Geolocation Based on Integration of Thermal Video Imagery and RTK GPS in UAVs

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadras Javan, F.

    2015-12-01

    There is an increasingly large number of uses for Unmanned Aerial Vehicles (UAVs), from surveillance and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as C/A code GPS and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 meters. This low accuracy means they cannot be used in applications that require high-precision data at the cm level. This paper presents a precise process for the geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using a linear Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process.

  3. Automatic detection of blurred images in UAV image sets

    NASA Astrophysics Data System (ADS)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    Unmanned aerial vehicles (UAV) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degrading effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently done manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating internally a comparable image makes the method independent of
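
    The "create a comparison image internally" idea can be illustrated as follows: re-blur the image artificially and compare gradient energy before and after. A sharp image loses much more gradient energy when re-blurred than an already blurred one, so the ratio can serve as a blur score. This is only an illustration of the principle, not the authors' algorithm, and the threshold is an assumption.

    ```python
    import cv2
    import numpy as np

    def blur_score(gray):
        """Ratio of gradient energy after/before an artificial re-blur."""
        reblurred = cv2.GaussianBlur(gray, (9, 9), 0)
        grad = lambda im: cv2.magnitude(cv2.Sobel(im, cv2.CV_32F, 1, 0),
                                        cv2.Sobel(im, cv2.CV_32F, 0, 1))
        e_orig, e_blur = grad(gray).sum(), grad(reblurred).sum()
        return e_blur / (e_orig + 1e-9)   # close to 1.0 => image was already blurred

    def is_blurred(gray, threshold=0.85):
        return blur_score(gray) > threshold
    ```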

  4. A method of fast mosaic for massive UAV images

    NASA Astrophysics Data System (ADS)

    Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong

    2014-11-01

    With the development of UAV technology, UAVs are widely used in multiple fields such as agriculture, forest protection, mineral exploration, natural disaster management and the surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use. Users can therefore obtain massive image data with UAVs, but processing these data takes a lot of time; for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. Disaster management and many other fields, however, require a quick response, which is hard to achieve with massive image data. To address the drawbacks of high time consumption and manual interaction, this article proposes a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original UAV images; flight strips and the relations between strips and images are recognized automatically by the program, and useless images are discarded at the same time. This speeds up the search for matching points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied to shorten the time of global optimization notably. Besides the traditional mosaic result, the system can also generate a super-overlay result for Google Earth, which provides a fast and easy way to display the result data. In order to verify the feasibility of this method, a fast mosaic system for massive UAV images was developed, which is fully automated and requires no manual interaction after the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system can reduce time consumption by 35%-50% in contrast to traditional methods, and greatly increases the response speed of UAV image processing.
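
    The GPS/POS-based pre-processing idea can be sketched as follows: only attempt feature matching between images whose recorded camera positions are close, which prunes the quadratic number of candidate pairs. The distance threshold is an assumption that depends on flight height and overlap.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def candidate_pairs(gps_xy, max_dist=80.0):
        """gps_xy: (N, 2) projected camera positions in metres.

        Returns only the image pairs close enough to plausibly overlap;
        only these pairs are passed on to feature matching.
        """
        tree = cKDTree(np.asarray(gps_xy, float))
        return sorted(tree.query_pairs(r=max_dist))
    ```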

  5. UAV field demonstration of social media enabled tactical data link

    NASA Astrophysics Data System (ADS)

    Olson, Christopher C.; Xu, Da; Martin, Sean R.; Castelli, Jonathan C.; Newman, Andrew J.

    2015-05-01

    This paper addresses the problem of enabling Command and Control (C2) and data exfiltration functions for missions using small, unmanned, airborne surveillance and reconnaissance platforms. The authors demonstrated the feasibility of using existing commercial wireless networks as the data transmission infrastructure to support Unmanned Aerial Vehicle (UAV) autonomy functions such as transmission of commands, imagery, metadata, and multi-vehicle coordination messages. The authors developed and integrated a C2 Android application for ground users with a common smart phone, a C2 and data exfiltration Android application deployed on-board the UAVs, and a web server with database to disseminate the collected data to distributed users using standard web browsers. The authors performed a mission-relevant field test and demonstration in which operators commanded a UAV from an Android device to search and loiter; and remote users viewed imagery, video, and metadata via web server to identify and track a vehicle on the ground. Social media served as the tactical data link for all command messages, images, videos, and metadata during the field demonstration. Imagery, video, and metadata were transmitted from the UAV to the web server via multiple Twitter, Flickr, Facebook, YouTube, and similar media accounts. The web server reassembled images and video with corresponding metadata for distributed users. The UAV autopilot communicated with the on-board Android device via on-board Bluetooth network.

  6. Introducing a Low-Cost Mini-UAV for Thermal- and Multispectral-Imaging

    NASA Astrophysics Data System (ADS)

    Bendig, J.; Bolten, A.; Bareth, G.

    2012-07-01

    The trend toward miniaturizing electronic devices applies to Unmanned Airborne Vehicles (UAVs) as well as to sensor technologies and imaging devices. Consequently, it is not surprising that UAVs are already part of our daily life, and the current pace of development will increase civil applications. A well-known and already widespread example is the so-called flying video game based on Parrot's AR.Drone, which is remotely controlled by an iPod, iPhone, or iPad (http://ardrone.parrot.com). The latter can be considered a low-weight and low-cost Mini-UAV. In this contribution, a Mini-UAV is considered to weigh less than 5 kg and to be able to carry 0.2 kg to 1.5 kg of sensor payload. While up to now Mini-UAVs like Parrot's AR.Drone have mainly been equipped with RGB cameras for videotaping or imaging, the development of such carrier systems is clearly also moving toward multi-sensor platforms like the ones introduced for larger UAVs (5 to 20 kg) by Jaakkolla et al. (2010) for forestry applications or by Berni et al. (2009) for agricultural applications. The problem when designing a Mini-UAV for multi-sensor imaging is the payload limitation of up to 1.5 kg and a total weight of the whole system below 5 kg. Consequently, the Mini-UAV without sensors, but including the navigation system and GPS sensors, must weigh less than 3.5 kg. A Mini-UAV system with these characteristics is HiSystems' MK-Okto (www.mikrokopter.de). Its total weight including battery but without sensors is less than 2.5 kg. The payload of a MK-Okto is approx. 1 kg and its maximum speed is around 30 km/h. The MK-Okto can be operated up to a wind speed of less than 19 km/h, which corresponds to Beaufort scale number 3. In our study, the MK-Okto is equipped with a handheld low-weight NEC F30IS thermal imaging system. The F30IS, which was developed for veterinary applications, covers 8 to 13 μm, weighs only 300 g

  7. Spectral Imaging from UAVs Under Varying Illumination Conditions

    NASA Astrophysics Data System (ADS)

    Hakala, T.; Honkavaara, E.; Saari, H.; Mäkynen, J.; Kaivosoja, J.; Pesonen, L.; Pölönen, I.

    2013-08-01

    Rapidly developing unmanned aerial vehicles (UAV) have provided the remote sensing community with a new, rapidly deployable tool for small-area monitoring. The progress of small-payload UAVs has introduced greater demand for lightweight aerial payloads. For applications requiring aerial images, a simple consumer camera provides acceptable data. For applications requiring more detailed spectral information about the surface, a new Fabry-Perot interferometer based spectral imaging technology has been developed. This new technology produces tens of successive images of the scene at different wavelength bands in a very short time. These images can be assembled into spectral data cubes with stereoscopic overlaps. In the field, weather conditions vary and the UAV operator often has to decide between flying in sub-optimal conditions and not flying at all. Our objective was to investigate methods for quantitative radiometric processing of images taken under varying illumination conditions, thus expanding the range of weather conditions during which successful imaging flights can be made. A new method based on in-situ measurement of irradiance, either on the UAV platform or on the ground, was developed. We tested the methods in a precision agriculture application using realistic data collected in difficult illumination conditions. The internal homogeneity of the original image data (average coefficient of variation in overlapping images) was 0.14-0.18. In the corrected data, the homogeneity was 0.10-0.12 with a correction based on broadband irradiance measured on the UAV, 0.07-0.09 with a correction based on spectral irradiance measured on the ground, and 0.05-0.08 with a radiometric block adjustment based on the image data. Our results were very promising, indicating that quantitative UAV based remote sensing could be operational in diverse conditions, which is a prerequisite for many environmental remote sensing applications.
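
    The simplest form of the irradiance-based correction described above scales each image by the ratio of a reference irradiance to the irradiance measured at its acquisition time (on the UAV or on the ground); band-wise spectral correction and the radiometric block adjustment are not covered by this sketch.

    ```python
    import numpy as np

    def irradiance_correct(image, irradiance, reference_irradiance):
        """image: float array of digital numbers; irradiance values in consistent units."""
        return np.asarray(image, float) * (reference_irradiance / irradiance)
    ```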

  8. A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images

    PubMed Central

    Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong

    2016-01-01

    A new hybrid vehicle detection scheme which integrates the Viola-Jones (V-J) and linear SVM classifier with HOG feature (HOG + SVM) methods is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to on-road vehicles' in-plane rotation, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction so the original V-J or HOG + SVM method can be directly applied to achieve fast detection and high accuracy. To address the issue of decreasing detection speed for V-J and HOG + SVM, the proposed scheme further develops an adaptive switching strategy which integrates the V-J and HOG + SVM methods based on their different decreasing trends of detection speed to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, can significantly improve the efficiency and effectiveness of vehicle detection from UAV images. The results also show that the proposed vehicle detection method is competitive compared with other existing vehicle detection methods. Furthermore, since the proposed vehicle detection method can be performed on videos captured from moving UAV platforms without the need for image registration or an additional road database, it has great potential for field applications. Future research will focus on extending the current method to detect other transportation modes such as buses, trucks, motorcycles, bicycles, and pedestrians. PMID:27548179
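
    The roadway orientation adjustment step can be sketched with a simple rotation, after which an upright-trained detector (a V-J cascade or HOG + SVM) can be applied directly. The road angle is assumed to be known here; in the paper it would come from a road-direction estimate.

    ```python
    import cv2

    def align_road_horizontal(image, road_angle_deg):
        """Rotate the image so a road at road_angle_deg (from the horizontal) runs horizontally."""
        h, w = image.shape[:2]
        M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), road_angle_deg, 1.0)
        return cv2.warpAffine(image, M, (w, h))
    ```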

  9. A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images.

    PubMed

    Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong

    2016-08-19

    A new hybrid vehicle detection scheme which integrates the Viola-Jones (V-J) and linear SVM classifier with HOG feature (HOG + SVM) methods is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to on-road vehicles' in-plane rotation, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction so the original V-J or HOG + SVM method can be directly applied to achieve fast detection and high accuracy. To address the issue of decreasing detection speed for V-J and HOG + SVM, the proposed scheme further develops an adaptive switching strategy which integrates the V-J and HOG + SVM methods based on their different decreasing trends of detection speed to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, can significantly improve the efficiency and effectiveness of vehicle detection from UAV images. The results also show that the proposed vehicle detection method is competitive compared with other existing vehicle detection methods. Furthermore, since the proposed vehicle detection method can be performed on videos captured from moving UAV platforms without the need for image registration or an additional road database, it has great potential for field applications. Future research will focus on extending the current method to detect other transportation modes such as buses, trucks, motorcycles, bicycles, and pedestrians.

  10. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    NASA Astrophysics Data System (ADS)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs produces a vast amount of video that has to be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data becomes a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (i.e., position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the associated metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework we execute a use case in which the surveillance of critical infrastructure and the detection of suspicious activities are performed. Transcoding of the collected video is part of this evaluation as well.

  11. Real-time target tracking and locating system for UAV

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Tang, Linbo; Fu, Huiquan; Li, Maowen

    2017-07-01

    In order to achieve real-time target tracking and locating for a UAV, a reliable processing system is built on an embedded platform. Firstly, the video image is acquired in real time by the electro-optical system on the UAV. When the target information is known, the KCF tracking algorithm is adopted to track the target. Then, the servo is controlled to rotate with the target; when the target is in the center of the image, the laser ranging module is activated to obtain the distance between the UAV and the target. Finally, combined with the UAV flight parameters obtained from the BeiDou navigation system, the target location algorithm calculates the geodetic coordinates of the target. The results show that the system performs stable real-time target tracking and positioning.
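
    The tracking step can be sketched with OpenCV's KCF tracker (available in opencv-contrib builds; the constructor lives under cv2.legacy in some versions). Servo control, laser ranging and the geolocation step are not shown.

    ```python
    import cv2

    def track(video_path, init_bbox):
        """init_bbox: (x, y, w, h) of the known target in the first frame."""
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        tracker = cv2.TrackerKCF_create()      # or cv2.legacy.TrackerKCF_create() in some builds
        tracker.init(frame, init_bbox)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            found, bbox = tracker.update(frame)
            if found:
                x, y, w, h = map(int, bbox)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("KCF tracking", frame)
            if cv2.waitKey(1) & 0xFF == 27:    # Esc to quit
                break
        cap.release()
        cv2.destroyAllWindows()
    ```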

  12. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper, a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require prior camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetric techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for the automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  13. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotor UAVs because of their low endurance due to short battery life. Images can be stored on board with either still-image or video data compression. Still-image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms, which fail when the motion vectors are significantly long and the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system and thereby maximize the encoder performance. Experiments are performed on both simulated and real-world video sequences.

  14. UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment.

    PubMed

    Chen, Jessie Y C

    2010-08-01

    A military reconnaissance environment was simulated to examine the performance of ground robotics operators who were instructed to utilise streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robot to the locations of the targets. The effects of participants' spatial ability on their performance and workload were also investigated. Results showed that participants' overall performance (speed and accuracy) was better when they had access to images from larger UAVs with fixed orientations, compared with other UAV conditions (baseline - no UAV, micro air vehicle and UAV with orbiting views). Participants experienced the highest workload when the UAV was orbiting. Those individuals with higher spatial ability performed significantly better and reported less workload than those with lower spatial ability. The results of the current study will further the understanding of ground robot operators' target search performance based on streaming video from UAVs. The results will also facilitate the implementation of ground/air robots in military environments and will be useful to the future military system design and training community.

  15. Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow

    PubMed Central

    Zhang, Weilong; Guo, Bingxuan; Liao, Xuan; Li, Wenzhuo

    2018-01-01

    Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaics. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images, so that they can be transformed into the same coordinate system. Then, the gray-scale difference, the gradient minimum, and the optical flow value of pixels in the overlapped area of adjacent images are calculated in a neighborhood, and these terms are used to create an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search the optimal seam-lines to complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which can find a better splicing path for adjacent UAV images and avoid crossing ground objects. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in panoramic UAV images. PMID:29659526
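
    The dynamic-programming seam search can be sketched as follows; the energy map is assumed to have been built already from the grey-level difference, gradient and optical-flow terms described above, and the plain column-wise recursion below is the textbook formulation rather than the authors' improved dual-strategy variant.

      import numpy as np

      def find_seam(energy):
          """Return one column index per row forming a minimum-energy vertical seam."""
          h, w = energy.shape
          cost = energy.astype(np.float64).copy()
          # Forward pass: each pixel accumulates the cheapest of its three upper neighbours.
          for i in range(1, h):
              left = np.r_[np.inf, cost[i - 1, :-1]]
              up = cost[i - 1]
              right = np.r_[cost[i - 1, 1:], np.inf]
              cost[i] += np.minimum(np.minimum(left, up), right)
          # Backward pass: trace the seam from the cheapest bottom pixel upwards.
          seam = np.empty(h, dtype=int)
          seam[-1] = int(np.argmin(cost[-1]))
          for i in range(h - 2, -1, -1):
              j = seam[i + 1]
              lo, hi = max(0, j - 1), min(w, j + 2)
              seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
          return seam

      seam = find_seam(np.random.rand(200, 300))  # stand-in energy map for the overlap region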

  16. Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow.

    PubMed

    Zhang, Weilong; Guo, Bingxuan; Li, Ming; Liao, Xuan; Li, Wenzhuo

    2018-04-16

    Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaics. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images, so that they can be transformed into the same coordinate system. Then, the gray-scale difference, the gradient minimum, and the optical flow value of pixels in the overlapped area of adjacent images are calculated in a neighborhood, and these terms are used to create an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search the optimal seam-lines to complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which can find a better splicing path for adjacent UAV images and avoid crossing ground objects. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in panoramic UAV images.

  17. Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images

    PubMed Central

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-01-01

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767
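
    As an illustration of how a transmission map is applied once estimated, the standard haze model I = J·t + A·(1 - t) can be inverted per pixel; the per-channel atmospheric light A and the transmission map t here are stand-ins for the outputs of the classification and wavelength-adaptive steps described in the abstract.

      import numpy as np

      def dehaze(image, transmission, airlight, t_min=0.1):
          """Invert the haze model I = J*t + A*(1 - t) to recover the scene radiance J."""
          img = image.astype(np.float64) / 255.0
          t = np.clip(transmission, t_min, 1.0)[..., None]   # avoid division blow-up in thick haze
          J = (img - airlight) / t + airlight
          return np.clip(J * 255.0, 0, 255).astype(np.uint8)

      # Stand-in inputs: a flat hazy frame, a smooth transmission map and a bright grey airlight estimate.
      hazy = np.full((100, 100, 3), 180, dtype=np.uint8)
      t_map = np.full((100, 100), 0.6)
      restored = dehaze(hazy, t_map, airlight=np.array([0.9, 0.9, 0.92]))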

  18. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-03-19

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.

  19. A debugging method of the Quadrotor UAV based on infrared thermal imaging

    NASA Astrophysics Data System (ADS)

    Cui, Guangjie; Hao, Qian; Yang, Jianguo; Chen, Lizhi; Hu, Hongkang; Zhang, Lijun

    2018-01-01

    High-performance UAVs have been popular and in great demand in recent years. The paper introduces a new method for debugging quadrotor UAVs. Based on infrared thermal technology and heat transfer theory, a UAV is debugged above a hot-wire grid which is composed of 14 heated nichrome wires. The air flow propelled by the rotating rotors influences the temperature distribution of the hot-wire grid. An infrared thermal imager below observes the distribution and captures thermal images of the hot-wire grid. With the assistance of a mathematical model and some experiments, the paper discusses the relationship between the thermal images and the speed of the rotors. By testing already-debugged UAVs, the standard information and thermal images can be acquired. The paper demonstrates that, by comparison with the standard thermal images, a UAV being debugged in the same test can yield critical data directly or after interpolation. The results are shown in the paper and the advantages are discussed.

  20. Semantic Segmentation and Unregistered Building Detection from UAV Images Using a Deconvolutional Network

    NASA Astrophysics Data System (ADS)

    Ham, S.; Oh, Y.; Choi, K.; Lee, I.

    2018-05-01

    Detecting unregistered buildings from aerial images is an important task for urban management, such as inspecting illegal buildings in green-belt areas or updating GIS databases. Moreover, the data acquisition platform of photogrammetry is evolving from manned aircraft to UAVs (Unmanned Aerial Vehicles). However, it is very costly and time-consuming to detect unregistered buildings from UAV images since the interpretation of aerial images still relies on manual effort. To overcome this problem, we propose a system which automatically detects unregistered buildings from UAV images based on deep learning methods. Specifically, we train a deconvolutional network with publicly available geospatial data, semantically segment a given UAV image into a building probability map and compare the building map with existing GIS data. Through this procedure, we can detect unregistered buildings from UAV images automatically and efficiently. We expect that the proposed system can be applied to various urban management tasks such as monitoring illegal buildings or illegal land-use change.
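
    The comparison of a predicted building-probability map against existing GIS footprints can be sketched with simple raster operations; the probability raster and the rasterised GIS layer are assumed to be co-registered on the same grid, and the 0.5 threshold and 50-pixel minimum area are arbitrary illustrative values.

      import numpy as np
      from scipy import ndimage

      def unregistered_buildings(prob_map, gis_mask, threshold=0.5, min_pixels=50):
          """Flag connected building regions predicted by the network but absent from the GIS mask."""
          predicted = prob_map > threshold                  # binarise the network output
          candidates = predicted & ~gis_mask.astype(bool)   # keep only areas without a registered footprint
          labels, n = ndimage.label(candidates)
          sizes = ndimage.sum(candidates, labels, range(1, n + 1))
          keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
          return keep                                       # boolean mask of suspected unregistered buildings

      mask = unregistered_buildings(np.random.rand(512, 512), np.zeros((512, 512), dtype=np.uint8))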

  1. Using crowd sourcing to combat potentially illegal or dangerous UAV operations

    NASA Astrophysics Data System (ADS)

    Tapsall, Brooke T.

    2016-10-01

    The UAV (Unmanned Aerial Vehicle) industry is growing exponentially at a pace that policy makers, individual countries and law enforcement agencies are finding difficult to keep up with. The UAV market is large; as such, the number of UAVs being operated in potentially dangerous situations is significant and rapidly increasing. Media continually report `near-miss' incidents between UAVs and commercial aircraft, UAVs breaching security in sensitive areas, or UAVs invading public privacy. One major challenge for law enforcement agencies is gaining tangible evidence against potentially dangerous or illegal UAV operators due to the rapidity with which UAV operators are able to enter, fly and exit a scene before authorities can arrive or before they can be located. DroneALERT, an application available via the Airport-UAV.com website, allows users to capture potentially dangerous or illegal UAV activity using their mobile device as the incident is occurring. A short online DroneALERT Incident Report (DIR) is produced and emailed to the user and the Airport-UAV.com custodians. The DIR can be used to aid authorities in their investigations. The DIR contains details such as images and videos, the location, time and date of the incident, the drone model, and its distance and height. By analysing information from the DIR, photos or video, law enforcement authorities have a high potential to use this evidence to identify the type of UAV used, triangulate the location of the potentially dangerous UAV and its operator, create a timeline of events, identify potential areas of operator exit and determine the legalities breached. All of this provides crucial evidence for identifying and prosecuting a UAV operator.

  2. Application of Sensor Fusion to Improve UAV Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which results in increased accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher quality images and, accordingly, higher classification accuracy.

  3. Automated UAV-based mapping for airborne reconnaissance and video exploitation

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre

    2009-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and others. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received data. At MDA, we have developed a suite of tools towards automated video exploitation including calibration, visualization, change detection and 3D reconstruction. The on-going work is to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field-of-view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting any improvised explosive devices (IED). However, it is tedious and difficult to compare video clips to look for differences manually. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand the scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.

  4. Real-Time 3D Reconstruction from Images Taken from a UAV

    NASA Astrophysics Data System (ADS)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For its characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in those applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
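
    The depth-from-two-views step can be sketched with OpenCV's semi-global matcher on an already rectified image pair; the file names, focal length and baseline are placeholders, and depth follows from Z = f·B / disparity.

      import cv2
      import numpy as np

      # Rectified pair of consecutive nadir frames; file names are placeholders.
      left = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
      right = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

      matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
      disparity = matcher.compute(left, right).astype(np.float32) / 16.0   # SGBM returns fixed-point x16

      focal_px = 2800.0   # assumed focal length in pixels
      baseline_m = 12.0   # assumed distance between the two exposure centres

      valid = disparity > 0
      depth = np.zeros_like(disparity)
      depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d, in metres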

  5. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    USDA-ARS?s Scientific Manuscript database

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  6. Preliminary Study on Earthquake Surface Rupture Extraction from UAV Images

    NASA Astrophysics Data System (ADS)

    Yuan, X.; Wang, X.; Ding, X.; Wu, X.; Dou, A.; Wang, S.

    2018-04-01

    Because of their low cost, light weight and ability to photograph beneath cloud cover, UAVs have been widely used in the field of seismic geomorphology research in recent years. Earthquake surface rupture is a typical seismic tectonic landform that reflects the dynamic and kinematic characteristics of crustal movement. The quick identification of earthquake surface rupture is of great significance for understanding the mechanism of earthquake occurrence and the distribution and scale of disasters. Using an integrated differential UAV platform, serial images with accurate POS data were acquired around the former urban area (Qushan town) of Beichuan County, an area seriously stricken by the 2008 Wenchuan Ms8.0 earthquake. Based on the multi-view 3D reconstruction technique, a high resolution DSM and DOM are obtained from the differential UAV images. Through the shaded-relief map and aspect map derived from the DSM, the earthquake surface rupture is extracted and analyzed. The results show that the surface rupture can still be identified from the UAV images although a long time has elapsed since the earthquake; its middle segment is characterized by vertical movement caused by compressional deformation on the fault planes.
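
    A minimal sketch of deriving the shaded-relief and aspect layers from the DSM with finite differences; the grid spacing and sun position are illustrative values, not taken from the study.

      import numpy as np

      def hillshade_and_aspect(dsm, cellsize=0.1, sun_azimuth_deg=315.0, sun_altitude_deg=45.0):
          """Compute a shaded-relief image and an aspect map from a DSM grid."""
          dzdy, dzdx = np.gradient(dsm, cellsize)
          slope = np.arctan(np.hypot(dzdx, dzdy))
          aspect = np.arctan2(-dzdx, dzdy)                     # 0 = north, counted clockwise
          az = np.radians(sun_azimuth_deg)
          alt = np.radians(sun_altitude_deg)
          shade = (np.sin(alt) * np.cos(slope)
                   + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
          return np.clip(shade, 0, 1), aspect

      shade, aspect = hillshade_and_aspect(np.random.rand(500, 500))  # stand-in DSM tile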

  7. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.

    PubMed

    Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng

    2017-09-08

    Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images.

  8. Image-based tracking and sensor resource management for UAVs in an urban environment

    NASA Astrophysics Data System (ADS)

    Samant, Ashwin; Chang, K. C.

    2010-04-01

    Coordination and deployment of multiple unmanned air vehicles (UAVs) requires considerable human resources in order to carry out a successful mission. The complexity of such a surveillance mission is significantly increased in the case of an urban environment, where targets can easily escape from the UAV's field of view (FOV) due to intervening buildings and line-of-sight obstructions. In the proposed methodology, we focus on the control and coordination of multiple UAVs with gimbaled video sensors onboard for tracking multiple targets in an urban environment. We developed optimal path planning algorithms with emphasis on dynamic target prioritization and persistent target updates. The command center is responsible for target prioritization and autonomous control of multiple UAVs, enabling a single operator to monitor and control a team of UAVs from a remote location. The results are obtained from extensive 3D simulations in Google Earth with tangent-plus-Lyapunov vector field guidance for target tracking.

  9. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching

    PubMed Central

    Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Zhang, Peng

    2017-01-01

    Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images. PMID:28885547

  10. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction

    Treesearch

    Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with UAV and computationally demanding postflight image processing. Insufficient feature overlaps across images is a common shortcoming in post-flight image...

  11. Roadside IED detection using subsurface imaging radar and rotary UAV

    NASA Astrophysics Data System (ADS)

    Qin, Yexian; Twumasi, Jones O.; Le, Viet Q.; Ren, Yu-Jiun; Lai, C. P.; Yu, Tzuyang

    2016-05-01

    Modern improvised explosive device (IED) and mine detection sensors using microwave technology are based on ground penetrating radar operated by a ground vehicle. Vehicle size, road conditions, and obstacles along the troop marching direction limit operation of such sensors. This paper presents a new conceptual design using a rotary unmanned aerial vehicle (UAV) to carry subsurface imaging radar for roadside IED detection. We have built a UAV flight simulator with the subsurface imaging radar running in a laboratory environment and tested it with non-metallic and metallic IED-like targets. From the initial lab results, we can detect the IED-like target 10-cm below road surface while carried by a UAV platform. One of the challenges is to design the radar and antenna system for a very small payload (less than 3 lb). The motion compensation algorithm is also critical to the imaging quality. In this paper, we also demonstrated the algorithm simulation and experimental imaging results with different IED target materials, sizes, and clutters.

  12. Moving object detection in top-view aerial videos improved by image stacking

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data are coming from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
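
    The stacking idea can be illustrated with feature-based registration and a median stack; ORB features and a homography are used here as a generic stand-in for the registration step described above, and the frame file names are placeholders.

      import cv2
      import numpy as np

      frames = [cv2.imread(f"frame_{i:03d}.png", cv2.IMREAD_GRAYSCALE) for i in range(5)]
      key = frames[0]

      orb = cv2.ORB_create(2000)
      kp_key, des_key = orb.detectAndCompute(key, None)
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

      aligned = [key.astype(np.float32)]
      for frame in frames[1:]:
          kp, des = orb.detectAndCompute(frame, None)
          matches = sorted(matcher.match(des, des_key), key=lambda m: m.distance)[:300]
          src = np.float32([kp[m.queryIdx].pt for m in matches])
          dst = np.float32([kp_key[m.trainIdx].pt for m in matches])
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          aligned.append(cv2.warpPerspective(frame, H, key.shape[::-1]).astype(np.float32))

      # Median over the registered stack suppresses noise and fades moving vehicles into the background.
      stacked = np.median(np.stack(aligned), axis=0).astype(np.uint8)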

  13. An Integrative Object-Based Image Analysis Workflow for UAV Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.

  14. Wireless Command-and-Control of UAV-Based Imaging LANs

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley; Dunagan, S. E.; Sullivan, D. V.; Slye, R. E.; Leung, J. G.; Johnson, L. F.

    2006-01-01

    Dual airborne imaging system networks were operated using a wireless line-of-sight telemetry system developed as part of a 2002 unmanned aerial vehicle (UAV) imaging mission over the USA's largest coffee plantation on the Hawaiian island of Kauai. A primary mission objective was the evaluation of commercial-off-the-shelf (COTS) 802.11b wireless technology for reduction of payload telemetry costs associated with UAV remote sensing missions. Predeployment tests with a conventional aircraft demonstrated successful wireless broadband connectivity between a rapidly moving airborne imaging local area network (LAN) and a fixed ground station LAN. Subsequently, two separate LANs with imaging payloads, packaged in exterior-mounted pressure pods attached to the underwing of NASA's Pathfinder-Plus UAV, were operated wirelessly by ground-based LANs over independent Ethernet bridges. Digital images were downlinked from the solar-powered aircraft at data rates of 2-6 megabits per second (Mbps) over a range of 6.5-9.5 km. An integrated wide area network enabled payload monitoring and control through the Internet from a range of ca. 4000 km during parts of the mission. The recent advent of 802.11g technology is expected to boost the system data rate by about a factor of five.

  15. BgCut: automatic ship detection from UAV images.

    PubMed

    Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong

    2014-01-01

    Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation proceeds without iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area captured by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model in this paper can be well applied to the automated processing of industrial images for related research.
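
    A sketch of running GrabCut from a precomputed trimap rather than a manual rectangle, which mirrors the automatic initialisation described above; the trimap here is a crude brightness-based stand-in for the paper's template-matching and region-growing step, and the image path is a placeholder.

      import cv2
      import numpy as np

      image = cv2.imread("uav_sea_scene.png")           # placeholder aerial image

      # Crude stand-in trimap: bright pixels are probable ship, the rest probable sea background.
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      mask = np.full(gray.shape, cv2.GC_PR_BGD, dtype=np.uint8)
      mask[gray > 200] = cv2.GC_PR_FGD

      bgd_model = np.zeros((1, 65), np.float64)
      fgd_model = np.zeros((1, 65), np.float64)
      cv2.grabCut(image, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

      # Pixels labelled as (probable) foreground form the ship segmentation.
      ships = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)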

  16. BgCut: Automatic Ship Detection from UAV Images

    PubMed Central

    Zhang, Zhengning; Feng, Zhiyong

    2014-01-01

    Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation proceeds without iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area captured by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model in this paper can be well applied to the automated processing of industrial images for related research. PMID:24977182

  17. An accelerated image matching technique for UAV orthoimage registration

    NASA Astrophysics Data System (ADS)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the stage of matching feature points, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.
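
    While the ABRISK descriptor and its sorting-ring filter are specific to the paper, the baseline BRISK matching it accelerates can be sketched with OpenCV; the file names are placeholders and the ratio-test threshold is a common default rather than a value from the study.

      import cv2

      ortho = cv2.imread("uav_orthoimage.png", cv2.IMREAD_GRAYSCALE)      # placeholder paths
      reference = cv2.imread("reference_image.png", cv2.IMREAD_GRAYSCALE)

      brisk = cv2.BRISK_create()
      kp1, des1 = brisk.detectAndCompute(ortho, None)
      kp2, des2 = brisk.detectAndCompute(reference, None)

      # Binary descriptors are compared with the Hamming distance; Lowe's ratio test prunes ambiguous pairs.
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
      pairs = matcher.knnMatch(des1, des2, k=2)
      good = [m for m, n in pairs if m.distance < 0.8 * n.distance]
      print(len(good), "tentative correspondences for the registration transform")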

  18. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing

    PubMed Central

    Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu

    2017-01-01

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm, with a maximum length estimation error of 7.3%. PMID:28880254
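
    The conversion from crack pixels to physical width relies on the working distance reported by the ultrasonic sensor; a simplified pinhole-camera scaling is sketched below, with Otsu binarisation standing in for the paper's hybrid binarisation and all sensor parameters being illustrative.

      import cv2

      image = cv2.imread("crack_patch.png", cv2.IMREAD_GRAYSCALE)   # placeholder crack image
      _, crack = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

      def mm_per_pixel(working_distance_mm, focal_length_mm, pixel_pitch_mm):
          """Ground sample size of one pixel at the measured working distance (pinhole model)."""
          return working_distance_mm * pixel_pitch_mm / focal_length_mm

      # Illustrative values: 500 mm stand-off, 4.7 mm lens, 1.55 micron pixels.
      scale = mm_per_pixel(500.0, 4.7, 0.00155)
      crack_area_mm2 = cv2.countNonZero(crack) * scale ** 2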

  19. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.

    PubMed

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han

    2017-09-07

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm, with a maximum length estimation error of 7.3%.

  20. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.
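
    Intensity-hue-saturation substitution, which the method uses to merge the sharp intensity band with the multispectral colour, can be sketched as follows; HSV is used as a stand-in colour space, the file names are placeholders, and the two inputs are assumed to be co-registered and equally sized.

      import cv2

      color_lr = cv2.imread("ms_upsampled.png")                      # multispectral image upsampled to pan size
      pan_hr = cv2.imread("pan_band.png", cv2.IMREAD_GRAYSCALE)      # high-resolution intensity band

      hsv = cv2.cvtColor(color_lr, cv2.COLOR_BGR2HSV)
      hsv[..., 2] = pan_hr                                           # replace the intensity channel with the sharp band
      fused = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)                   # colour from MS, spatial detail from pan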

  1. Detection of the power lines in UAV remote sensed images using spectral-spatial methods.

    PubMed

    Bhola, Rishav; Krishna, Nandigam Hari; Ramesh, K N; Senthilnath, J; Anand, Gautham

    2018-01-15

    In this paper, detection of power lines in images acquired by Unmanned Aerial Vehicle (UAV) based remote sensing is carried out using spectral-spatial methods. Spectral clustering was performed using the K-means and Expectation Maximization (EM) algorithms to classify the pixels into power lines and non-power lines. The spectral clustering methods used in this study are parametric in nature; to automate the choice of the number of clusters, the Davies-Bouldin index (DBI) is used. The UAV remote sensed image is clustered into the number of clusters determined by DBI. The k-cluster image is then merged into 2 clusters (power lines and non-power lines). Further, spatial segmentation was performed using morphological and geometric operations to eliminate the non-power line regions. In this study, UAV images acquired at different altitudes and angles were analyzed to validate the robustness of the proposed method. It was observed that the EM with spatial segmentation (EM-Seg) performed better than the K-means with spatial segmentation (Kmeans-Seg) on most of the UAV images. Copyright © 2017 Elsevier Ltd. All rights reserved.
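
    Automating the choice of the number of clusters with the Davies-Bouldin index can be sketched with scikit-learn; the per-pixel feature matrix below is a random stand-in for the reshaped UAV image, and the search range 2-7 is arbitrary.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import davies_bouldin_score

      pixels = np.random.rand(5000, 3)        # stand-in for the reshaped UAV image (N pixels x 3 bands)

      best_k, best_dbi, best_labels = None, np.inf, None
      for k in range(2, 8):
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
          dbi = davies_bouldin_score(pixels, labels)    # lower index = better-separated clusters
          if dbi < best_dbi:
              best_k, best_dbi, best_labels = k, dbi, labels

      # The best_k clusters are then merged into two classes (power line vs. background)
      # before the morphological/geometric spatial filtering stage.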

  2. Implementation and Testing of a Low-Cost UAV Platform for Orthophoto Imaging

    NASA Astrophysics Data System (ADS)

    Brucas, D.; Suziedelyte-Visockiene, J.; Ragauskas, U.; Berteska, E.; Rudinskas, D.

    2013-08-01

    The implementation of Unmanned Aerial Vehicles for civilian applications is rapidly increasing. Technologies which were expensive and available only for military use have recently spread to the civilian market. A vast number of low-cost open-source components and systems for implementation on UAVs are available. Using low-cost hobby and open-source components ensures a considerable decrease in UAV price, though in some cases it compromises reliability. At the Space Science and Technology Institute (SSTI), in collaboration with Vilnius Gediminas Technical University (VGTU), research has been performed in the field of constructing and implementing small UAVs composed of low-cost open-source components (and own developments). The most obvious and simple application of such UAVs is orthophoto imaging with data download and processing after the flight. The construction and implementation of the UAVs, flight experience, data processing and data implementation are further covered in the paper and presentation.

  3. Cameras and settings for optimal image capture from UAVs

    NASA Astrophysics Data System (ADS)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.

  4. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    NASA Astrophysics Data System (ADS)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
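
    The spanning-tree reduction of the camera network can be approximated with networkx on a toy overlap graph; the plain maximum spanning tree below ignores the paper's hierarchical degree bound and 3-view guarantee, and the edge weights are invented overlap scores.

      import networkx as nx

      # Toy topologically connected camera network: nodes are images, weights are estimated overlaps.
      tcn = nx.Graph()
      tcn.add_weighted_edges_from([
          ("img1", "img2", 0.9), ("img2", "img3", 0.8),
          ("img1", "img3", 0.4), ("img3", "img4", 0.7), ("img2", "img4", 0.3),
      ])

      scn = nx.maximum_spanning_tree(tcn)          # skeletal subset of image pairs
      print(sorted(scn.edges(data="weight")))      # only these pairs go through tie-point matching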

  5. Adaptive pattern for autonomous UAV guidance

    NASA Astrophysics Data System (ADS)

    Sung, Chen-Ko; Segor, Florian

    2013-09-01

    The research done at the Fraunhofer IOSB in Karlsruhe within the AMFIS project is focusing on a mobile system to support rescue forces in accidents or disasters. The system consists of a ground control station which has the capability to communicate with a large number of heterogeneous sensors and sensor carriers and provides several open interfaces to allow easy integration of additional sensors into the system. Within this research we focus mainly on UAV such as VTOL (Vertical takeoff and Landing) systems because of their ease of use and their high maneuverability. To increase the positioning capability of the UAV, different onboard processing chains of image exploitation for real time detection of patterns on the ground and the interfacing technology for controlling the UAV from the payload during flight were examined. The earlier proposed static ground pattern was extended by an adaptive component which admits an additional visual communication channel to the aircraft. For this purpose different components were conceived to transfer additive information using changeable patterns on the ground. The adaptive ground pattern and their application suitability had to be tested under external influence. Beside the adaptive ground pattern, the onboard process chains and the adaptations to the demands of changing patterns are introduced in this paper. The tracking of the guiding points, the UAV navigation and the conversion of the guiding point positions from the images to real world co-ordinates in video sequences, as well as use limits and the possibilities of an adaptable pattern are examined.

  6. Geometry Correction Algorithm for UAV Remote Sensing Images Based on an Improved Neural Network

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the algorithm and solution steps of AGA-RBF are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with the AGA and LMS algorithms, respectively. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, high running speed and strong generalization ability.
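
    The radial-basis-function mapping at the heart of such a correction can be sketched with SciPy's RBFInterpolator (SciPy 1.7+), fitting image coordinates of a few control points to their ground coordinates; the genetic-algorithm tuning of the network is omitted and the control points below are invented.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Invented ground control points: pixel (col, row) -> map (E, N) coordinates.
      pix = np.array([[100, 120], [900, 140], [880, 700], [130, 680], [500, 400]], dtype=float)
      geo = np.array([[500010.0, 4000020.0], [500210.0, 4000015.0],
                      [500205.0, 3999880.0], [500015.0, 3999885.0], [500110.0, 3999950.0]])

      rbf = RBFInterpolator(pix, geo, kernel="thin_plate_spline")

      # Map any pixel of the distorted UAV frame to ground coordinates.
      print(rbf(np.array([[512.0, 384.0]])))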

  7. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  8. Moving object detection using dynamic motion modelling from UAV aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue due to the lack of proper motion estimation. Existing moving object detection approaches for UAV aerial images do not deal with motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either frame difference or segmentation approaches used separately. There are two main purposes for this research: firstly, to develop a new motion model called DMM (dynamic motion model) and, secondly, to apply the proposed segmentation approach SUED (segmentation using edge based dilation) with frame difference embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only specific areas for moving objects rather than searching the whole area of the frame. At each stage of the proposed scheme, experimental fusion of DMM and SUED extracts moving objects faithfully. The experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
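
    A generic frame-difference-plus-dilation pipeline in the spirit of the SUED step is sketched below; it is not the authors' DMM search-window logic, the frame paths are placeholders, and the threshold and kernel size are illustrative.

      import cv2
      import numpy as np

      prev = cv2.imread("aerial_t0.png", cv2.IMREAD_GRAYSCALE)   # placeholder consecutive frames
      curr = cv2.imread("aerial_t1.png", cv2.IMREAD_GRAYSCALE)

      diff = cv2.absdiff(curr, prev)                             # pixel-intensity change between frames
      _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

      # Dilation thickens the motion mask so fragmented object parts merge into blobs.
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
      motion = cv2.dilate(motion, kernel, iterations=2)

      contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]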

  9. Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images

    PubMed Central

    Ortega-Terol, Damian; Ballesteros, Rocio

    2017-01-01

    Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in 3D reconstruction processes for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded due to sun reflection effects, which reduce the image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with an Unmanned Aerial Vehicle (UAV), based on a photogrammetric strategy included in a flight planning and control software developed by the authors. In particular, two main consequences are derived from the approach developed: (i) different areas of the images can be excluded since they contain sun reflection problems; (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomical parameters computed (e.g., the normalized difference vegetation index, NDVI) are improved since radiometric defects in pixels are not considered. Finally, an accuracy assessment was performed in order to analyse the error in the detection process, obtaining errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that the precision in the detection of sun reflections can be guaranteed using this approach and current low-cost UAV technology. PMID:29036930

  10. Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images.

    PubMed

    Ortega-Terol, Damian; Hernandez-Lopez, David; Ballesteros, Rocio; Gonzalez-Aguilera, Diego

    2017-10-15

    Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in 3D reconstruction processes for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded due to sun reflection effects, which reduce the image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with an Unmanned Aerial Vehicle (UAV), based on a photogrammetric strategy included in a flight planning and control software developed by the authors. In particular, two main consequences are derived from the approach developed: (i) different areas of the images can be excluded since they contain sun reflection problems; (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomical parameters computed (e.g., the normalized difference vegetation index, NDVI) are improved since radiometric defects in pixels are not considered. Finally, an accuracy assessment was performed in order to analyse the error in the detection process, obtaining errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that the precision in the detection of sun reflections can be guaranteed using this approach and current low-cost UAV technology.

  11. Investigation of 1:1,000 Scale Map Generation by Stereo Plotting Using UAV Images

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2017-08-01

    Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate. It is necessary to adjust initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to conventional aerial photography. Unstable image acquisition may bring uneven stereo coverage, which will eventually result in accuracy loss. Oblique stereo pairs will create eye fatigue. The third aspect is the small coverage of UAV images. This aspect raises efficiency issues for stereo plotting of UAV images. More importantly, it makes contour generation from UAV images very difficult. This paper discusses effects related to these three aspects. In this study, we tried to generate a 1:1,000 scale map from the dataset using EOPs generated from software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process. We could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of the map drawing using stereoscopic vision, we compared the horizontal and vertical position difference between adjacent models after drawing

  12. Acquisition and Processing Protocols for Uav Images: 3d Modeling of Historical Buildings Using Photogrammetry

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Koehl, M.; Grussenmeyer, P.; Freville, T.

    2017-08-01

    Photogrammetry has seen an increase in the use of UAVs (Unmanned Aerial Vehicles) for both large and smaller scale cartography. The use of UAVs is also advantageous because it may be used for tasks requiring quick response, including in the case of the inspection and monitoring of buildings. The objective of the project is to study the acquisition and processing protocols which exist in the literature and to adapt them for UAV projects. This implies a study on the calibration of the sensors, flight planning, comparison of software solutions, data management, and analysis on the different products of a UAV project. Two historical buildings of the city of Strasbourg were used as case studies: a part of the Rohan Palace façade and the St-Pierre-le-Jeune Catholic church. In addition, a preliminary test was performed on the Josephine Pavilion. Two UAVs were used in this research; namely the Sensefly Albris and the DJI Phantom 3 Professional. The experiments have shown that the calibration parameters tend to be unstable for small sensors. Furthermore, the dense matching of images remains a particular problem to address in a close range photogrammetry project, more so in the presence of noise on the images. Data management in cases where the number of images is high is also very important. The UAV is nevertheless a suitable solution for the surveying and recording of historical buildings because it is able to take images from points of view which are normally inaccessible to classical terrestrial techniques.

  13. Study on Vignetting Correction of UAV Images and Its Application to the 2013 Ms7.0 Lushan Earthquake, China

    NASA Astrophysics Data System (ADS)

    Yuan, X.; Wang, X.; Dou, A.; Ding, X.

    2014-12-01

    As the UAV is widely used in earthquake disaster prevention and mitigation, the efficiency of UAV image processing determines the effectiveness of its application to pre-earthquake disaster prevention, post-earthquake emergency rescue, and disaster assessment. Because of bad weather conditions after a destructive earthquake, wide-field cameras capture images with a serious vignetting phenomenon, which can significantly affect the speed and efficiency of image mosaicking, especially the extraction of pre-earthquake building and geological structure information and the accuracy of post-earthquake quantitative damage extraction. In this paper, an improved radial gradient correction method (IRGCM) was developed to reduce the influence of the random distribution of land surface objects in the images, based on the radial gradient correction method (RGCM; Y. Zheng, 2008, 2013). First, a mean-value image was obtained by averaging the serial UAV images. It was used for calibration, instead of single images, to obtain a comprehensive vignetting function using RGCM. Then each UAV image was corrected by the comprehensive vignetting function. A case study was conducted to correct a UAV image sequence obtained in Lushan County after the Ms7.0 Lushan, Sichuan, China earthquake that occurred on April 20, 2013. The results show that the comprehensive vignetting function generated by IRGCM is more robust and accurate in expressing the specific optical response of the camera in a particular setting. Thus it is particularly useful for correcting large sets of UAV images with non-uniform illumination. Also, the correction process was simplified and is faster than conventional methods. After correction, the images have better radial homogeneity and clearer details, which to a certain extent reduces the difficulty of image mosaicking and provides a better basis for further analysis and damage information extraction. Further tests also show that better results were obtained by taking
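
    The mean-image idea can be sketched as a flat-field correction: each frame is divided by the smoothed, normalised mean of the sequence. This is a simplification of the radial-gradient formulation rather than the IRGCM itself, and the frame paths are placeholders.

      import cv2
      import numpy as np

      frames = [cv2.imread(f"uav_{i:04d}.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32) for i in range(50)]

      # Averaging many frames cancels the scene content and leaves the lens fall-off pattern.
      mean_img = np.mean(frames, axis=0)
      flat = cv2.GaussianBlur(mean_img, (0, 0), sigmaX=51)   # smooth away residual scene structure
      flat /= flat.max()                                     # normalise so the image centre keeps its brightness

      corrected = [np.clip(f / np.maximum(flat, 1e-3), 0, 255).astype(np.uint8) for f in frames]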

  14. Applications of UAVs for Remote Sensing of Critical Infrastructure

    NASA Technical Reports Server (NTRS)

    Wegener, Steve; Brass, James; Schoenung, Susan

    2003-01-01

    The surveillance of critical facilities and national infrastructure such as waterways, roadways, pipelines and utilities requires advanced technological tools to provide timely, up-to-date information on structure status and integrity. Unmanned Aerial Vehicles (UAVs) are uniquely suited for these tasks, having large payload and long duration capabilities. UAVs also have the capability to fly dangerous and dull missions, orbiting for 24 hours over a particular area or facility providing around-the-clock surveillance with no personnel onboard. New UAV platforms and systems are becoming available for commercial use. High altitude platforms are being tested for use in communications, remote sensing, agriculture, forestry and disaster management. New payloads are being built and demonstrated onboard the UAVs in support of these applications. Smaller, lighter, lower power consumption imaging systems are currently being tested over coffee fields to determine yield and over fires to detect fire fronts and hotspots. Communication systems that relay video, meteorological and chemical data via satellite to users on the ground in real-time have also been demonstrated. Interest in this technology for infrastructure characterization and mapping has increased dramatically in the past year. Many of the UAV technological developments required for resource and disaster monitoring are being used for the infrastructure and facility mapping activity. This paper documents the unique contributions from NASA's Environmental Research Aircraft and Sensor Technology (ERAST) program to these applications. ERAST is a UAV technology development effort by a consortium of private aeronautical companies and NASA. Details of demonstrations of UAV capabilities currently underway are also presented.

  15. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
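
    The core operation of the block-based estimation described above, finding the translation of one pixel block between a key field and a new field, can be sketched as a plain sum-of-squared-differences search. This is a minimal illustration of block matching, not the patented nested scheme itself; block_translation, key_field, new_field and max_shift are hypothetical names.

      import numpy as np

      def block_translation(key_field, new_field, top, left, size, max_shift=8):
          """Estimate the (dy, dx) shift of one pixel block between two video
          fields by an exhaustive sum-of-squared-differences search."""
          block = key_field[top:top + size, left:left + size].astype(float)
          best_ssd, best_shift = np.inf, (0, 0)
          for dy in range(-max_shift, max_shift + 1):
              for dx in range(-max_shift, max_shift + 1):
                  t, l = top + dy, left + dx
                  if t < 0 or l < 0 or t + size > new_field.shape[0] or l + size > new_field.shape[1]:
                      continue
                  candidate = new_field[t:t + size, l:l + size].astype(float)
                  ssd = float(np.sum((block - candidate) ** 2))
                  if ssd < best_ssd:
                      best_ssd, best_shift = ssd, (dy, dx)
          return best_shift

      # A nested scheme could run this on a coarse block first, then recurse on its
      # quadrants using the coarse shift as the starting offset for a finer search.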

  16. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  17. Solid images generated from UAVs to analyze areas affected by rock falls

    NASA Astrophysics Data System (ADS)

    Giordan, Daniele; Manconi, Andrea; Allasia, Paolo; Baldo, Marco

    2015-04-01

    The study of rock fall affected areas is usually based on the recognition of the principal joint families and the localization of potentially unstable sectors. This requires the acquisition of field data, although the areas are barely accessible and field inspections are often very dangerous. For this reason, remote sensing systems can be considered a suitable alternative. Recently, Unmanned Aerial Vehicles (UAVs) have been proposed as platforms to acquire the necessary information. Indeed, mini UAVs (in particular in the multi-rotor configuration) can flexibly acquire, from different points of view, a large number of high resolution optical images, which can be used to generate high resolution digital models of the study area. Considering the recent development of powerful user-friendly software and algorithms to process images acquired from UAVs, there is now a need to establish robust methodologies and best-practice guidelines for the correct use of 3D models generated in the context of rock fall scenarios. In this work, we show how multi-rotor UAVs can be used to survey areas affected by rock falls during real emergency contexts. We present two examples of application located in northwestern Italy: the San Germano rock fall (Piemonte region) and the Moneglia rock fall (Liguria region). We acquired data from both terrestrial LiDAR and UAV, in order to compare digital elevation models generated with different remote sensing approaches. We evaluate the volume of the rock falls, identify the potentially unstable areas, and recognize the main joint families. The use of UAVs for this purpose is not yet well developed, but this approach can probably be considered the best solution for the structural investigation of large rock walls. We propose a methodology that jointly considers the Structure from Motion (SfM) approach for the generation of 3D solid images, and a geotechnical analysis for the identification of joint families and potential failure planes.

  18. Efficient structure from motion for oblique UAV images based on maximal spanning tree expansion

    NASA Astrophysics Data System (ADS)

    Jiang, San; Jiang, Wanshou

    2017-10-01

    The primary contribution of this paper is an efficient Structure from Motion (SfM) solution for oblique unmanned aerial vehicle (UAV) images. First, an algorithm, considering spatial relationship constraints between image footprints, is designed for match pair selection with the assistance of UAV flight control data and oblique camera mounting angles. Second, a topological connection network (TCN), represented by an undirected weighted graph, is constructed from initial match pairs, which encodes the overlap areas and intersection angles into edge weights. Then, an algorithm, termed MST-Expansion, is proposed to extract the match graph from the TCN, where the TCN is first simplified by a maximum spanning tree (MST). By further analysis of the local structure in the MST, expansion operations are performed on the vertices of the MST for match graph enhancement, which is achieved by introducing critical connections in the expansion directions. Finally, guided by the match graph, an efficient SfM is proposed. Under extensive analysis and comparison, its performance is verified by using three oblique UAV datasets captured with different multi-camera systems. Experimental results demonstrate that the efficiency of image matching is improved, with speedup ratios ranging from 19 to 35, and competitive orientation accuracy is achieved from both relative bundle adjustment (BA) without GCPs (Ground Control Points) and absolute BA with GCPs. At the same time, images in the three datasets are successfully oriented. For the orientation of oblique UAV images, the proposed method can be a more efficient solution.
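
    The reduction of the topological connection network to a maximum spanning tree can be prototyped in a few lines with networkx. The edge weight below, combining overlap area and intersection angle, is an illustrative placeholder rather than the paper's weighting; match_graph_from_tcn and tcn_pairs are hypothetical names, and the subsequent expansion step is not shown.

      import networkx as nx

      def match_graph_from_tcn(pairs):
          """Build a topological connection network and reduce it to a
          maximum spanning tree as a starting point for match-pair selection.

          pairs: iterable of (img_i, img_j, overlap_area, intersection_angle_deg).
          """
          graph = nx.Graph()
          for i, j, overlap, angle in pairs:
              # Favour large overlap and small intersection angle (placeholder weight).
              graph.add_edge(i, j, weight=overlap / (1.0 + angle))
          return nx.maximum_spanning_tree(graph, weight="weight")

      # tcn_pairs = [("img_001", "img_002", 0.62, 5.0), ("img_002", "img_003", 0.55, 12.0)]
      # mst = match_graph_from_tcn(tcn_pairs)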

  19. Uav-Based 3d Urban Environment Monitoring

    NASA Astrophysics Data System (ADS)

    Boonpook, Wuttichai; Tan, Yumin; Liu, Huaqing; Zhao, Binbin; He, Lingfeng

    2018-04-01

    Unmanned Aerial Vehicle (UAV) based remote sensing can be used for three-dimensional (3D) mapping with great flexibility, in addition to its ability to provide high resolution images. In this paper we propose a quick change detection method for UAV images that combines altitude from the Digital Surface Model (DSM) with texture analysis of the images. Cases of UAV images with and without georeferencing are both considered. Research results show that the accuracy of change detection can be enhanced with a georeferencing procedure, and that change detection on UAV images collected both vertically and obliquely, but without georeferencing, also performs well in terms of accuracy and precision.

  20. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  1. Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV

    NASA Astrophysics Data System (ADS)

    Chianucci, Francesco; Disperati, Leonardo; Guzzi, Donatella; Bianchini, Daniele; Nardino, Vanni; Lastri, Cinzia; Rindinella, Andrea; Corona, Piermaria

    2016-05-01

    Accurate estimates of forest canopy are essential for the characterization of forest ecosystems. Remotely-sensed techniques provide a unique way to obtain estimates over spatially extensive areas, but their application is limited by the spectral and temporal resolution available from these systems, which is often not suited to meet regional or local objectives. The use of unmanned aerial vehicles (UAVs) as remote sensing platforms has recently gained increasing attention, but their applications in forestry are still at an experimental stage. In this study we described a methodology to obtain rapid and reliable estimates of forest canopy from a small UAV equipped with a commercial RGB camera. The red, green and blue digital numbers were converted to the green leaf algorithm (GLA) index and to the CIE L*a*b* colour space to obtain estimates of canopy cover, foliage clumping and leaf area index (L) from aerial images. Canopy attributes were compared with in situ estimates obtained from two digital canopy photographic techniques (cover and fisheye photography). The method was tested in beech forests. UAV images accurately quantified canopy cover even in very dense stand conditions, despite a tendency not to detect small within-crown gaps in the aerial images, leading to a measured quantity much closer to the crown cover estimated from in situ cover photography. Estimates of L from UAV images agreed significantly with those obtained from fisheye images, but the accuracy of the UAV estimates is influenced by the assumed leaf angle distribution. We concluded that true colour UAV images can be effectively used to obtain rapid, cheap and meaningful estimates of forest canopy attributes at medium to large scales. UAVs combine the advantage of high resolution imagery with quick turnaround times, and are therefore suitable for routine forest stand monitoring and real-time applications.
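
    The green leaf algorithm (GLA) mentioned above is commonly written as GLA = (2G - R - B) / (2G + R + B). The sketch below computes it from an RGB array and derives a simple canopy-cover fraction by thresholding at zero; the threshold and the function names are assumptions for illustration, and the CIE L*a*b* conversion step is not shown.

      import numpy as np

      def green_leaf_algorithm(rgb):
          """GLA index for an H x W x 3 RGB array (any numeric type)."""
          r, g, b = (rgb[..., k].astype(float) for k in range(3))
          denom = 2.0 * g + r + b
          return np.where(denom > 0, (2.0 * g - r - b) / denom, 0.0)

      def canopy_cover(rgb, threshold=0.0):
          """Fraction of pixels classified as canopy (GLA above the threshold)."""
          return float(np.mean(green_leaf_algorithm(rgb) > threshold))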

  2. Context-Based Urban Terrain Reconstruction from Uav-Videos for Geoinformation Applications

    NASA Astrophysics Data System (ADS)

    Bulatov, D.; Solbrig, P.; Gross, H.; Wernerus, P.; Repasi, E.; Heipke, C.

    2011-09-01

    Urban terrain reconstruction has many applications in areas of civil engineering, urban planning, surveillance and defense research. Therefore the need to cover ad-hoc demand and to perform close-range urban terrain reconstruction with miniaturized and relatively inexpensive sensor platforms is constantly growing. Using (miniaturized) unmanned aerial vehicles, (M)UAVs, represents one of the most attractive alternatives to conventional large-scale aerial imagery. We cover in this paper a four-step procedure for obtaining georeferenced 3D urban models from video sequences. The four steps of the procedure - orientation, dense reconstruction, urban terrain modeling and geo-referencing - are robust, straightforward, and nearly fully automatic. The last two steps - namely, urban terrain modeling from almost-nadir videos and co-registration of models - represent the main contribution of this work and will therefore be covered in more detail. The essential substeps of the third step include digital terrain model (DTM) extraction, segregation of buildings from vegetation, as well as instantiation of building and tree models. The last step is subdivided into quasi-intrasensorial registration of Euclidean reconstructions and intersensorial registration with a geo-referenced orthophoto. Finally, we present reconstruction results from a real data-set and outline ideas for future work.

  3. Textured digital elevation model formation from low-cost UAV LADAR/digital image data

    NASA Astrophysics Data System (ADS)

    Bybee, Taylor C.; Budge, Scott E.

    2015-05-01

    Textured digital elevation models (TDEMs) have valuable use in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes the formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Because the UAV is low-cost, only coarse knowledge of position and attitude is available, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.

  4. Emergency response to landslide using GNSS measurements and UAV

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos G.; Koukouvelas, Ioannis K.

    2017-10-01

    Landslide monitoring can be performed using many different methods: classical geotechnical measurements such as inclinometers, topographical survey measurements with total stations or GNSS sensors, and photogrammetric techniques using airphotos or high resolution satellite images. However, all these methods are expensive or difficult to deploy immediately after a landslide is triggered. In contrast, airborne technology and especially the use of Unmanned Aerial Vehicles (UAVs) make the response to a landslide disaster easier, as UAVs can be launched quickly over dangerous terrain and send data about the sliding areas to responders on the ground either as RGB images or as videos. In addition, the emergency response to a landslide is critical for further monitoring. For proper displacement identification, all the above mentioned monitoring methods need a high resolution and very accurate representation of the relief. The ideal solution for accurate and quick mapping of a landslide is the combined use of UAV photogrammetry and GNSS measurements. UAVs started out as expensive toys but have become a very valuable tool for large scale mapping of sliding areas. The purpose of this work is to demonstrate an effective solution for initial landslide mapping immediately after the occurrence of the phenomenon and the possibility of periodical assessment of the landslide. Three different landslide cases from Greece are presented in the current study. All three landslides have different characteristics: they occurred in different geomorphologic environments, were triggered by different causes and had different geologic bedrock. In all three cases we performed detailed GNSS measurements of the landslide area and generated orthophotos as well as Digital Surface Models (DSMs) at an accuracy better than +/-10 cm. Slide direction and velocity, mass balances, as well as protection and mitigation measures can be derived from the application of the UAVs

  5. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Currently, aerial photography with an unmanned aerial vehicle (UAV) system controls the UAV remotely through a ground control system connected over a radio frequency (RF) modem operating at about 430 MHz. However, this existing RF modem approach has limitations for long distance communication. In this work, the smart camera's LTE (long-term evolution), Bluetooth and Wi-Fi capabilities were used to implement a UAV communication module, and close-range aerial photogrammetry was carried out with automatic shooting. The automatic shooting system consists of an image capturing device carried by the drone over the area that needs to be imaged, and software for loading and managing the smart camera. The system is composed of automatic shooting using the smart camera's sensors and a shooting catalog management component that manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used include Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  6. Integrating critical interface elements for intuitive single-display aviation control of UAVs

    NASA Astrophysics Data System (ADS)

    Cooper, Joseph L.; Goodrich, Michael A.

    2006-05-01

    Although advancing levels of technology allow UAV operators to give increasingly complex commands with expanding temporal scope, it is unlikely that the need for immediate situation awareness and local, short-term flight adjustment will ever be completely superseded. Local awareness and control are particularly important when the operator uses the UAV to perform a search or inspection task. There are many different tasks which would be facilitated by search and inspection capabilities of a camera-equipped UAV. These tasks range from bridge inspection and news reporting to wilderness search and rescue. The system should be simple, inexpensive, and intuitive for non-pilots. An appropriately designed interface should (a) provide a context for interpreting video and (b) support UAV tasking and control, all within a single display screen. In this paper, we present and analyze an interface that attempts to accomplish this goal. The interface utilizes a georeferenced terrain map rendered from publicly available altitude data and terrain imagery to create a context in which the location of the UAV and the source of the video are communicated to the operator. Rotated and transformed imagery from the UAV provides a stable frame of reference for the operator and integrates cleanly into the terrain model. Simple icons overlaid onto the main display provide intuitive control and feedback when necessary but fade to a semi-transparent state when not in use to avoid distracting the operator's attention from the video signal. With various interface elements integrated into a single display, the interface runs nicely on a small, portable, inexpensive system with a single display screen and simple input device, but is powerful enough to allow a single operator to deploy, control, and recover a small UAV when coupled with appropriate autonomy. As we present elements of the interface design, we will identify concepts that can be leveraged into a large class of UAV applications.

  7. Assessing the consistency of UAV-derived point clouds and images acquired at different altitudes

    NASA Astrophysics Data System (ADS)

    Ozcan, O.

    2016-12-01

    Unmanned Aerial Vehicles (UAVs) offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and satellite remote sensing systems. UAVs, which bridge the gap between satellite-scale and field-scale applications, are now used in various application areas to acquire hyperspatial, high temporal resolution imagery, since they can cover an area in a short span of time compared with conventional photogrammetry methods. UAVs have been used in various fields, such as the creation of 3-D earth models, the production of high resolution orthophotos, network planning, field monitoring and the monitoring of agricultural lands. Thus, the geometric accuracy of orthophotos and the volumetric accuracy of point clouds are of capital importance for land surveying applications. Correspondingly, Structure from Motion (SfM) photogrammetry, which is frequently used in conjunction with UAVs, has recently appeared in the environmental sciences as an impressive tool allowing the creation of 3-D models from unstructured imagery. In this study, the aim was to reveal the spatial accuracy of the images acquired from the integrated digital camera and the volumetric accuracy of Digital Surface Models (DSMs) derived from UAV flight plans at different altitudes using the SfM methodology. Low-altitude multispectral overlapping aerial photography was collected at altitudes of 30 to 100 meters and georeferenced with RTK-GPS ground control points. These altitudes allow hyperspatial imagery with resolutions of 1-5 cm depending upon the sensor being used. Preliminary results revealed that the vertical comparison of UAV-derived point clouds with respect to GPS measurements showed average offsets at the cm level. Larger values are found in areas where instantaneous changes in the surface are present.
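
    A vertical comparison of a UAV-derived point cloud against GNSS check points, of the kind reported above, can be sketched with a planimetric nearest-neighbour search. This is a minimal illustration, not the study's workflow; vertical_residuals, cloud_xyz, gnss_xyz and the 0.10 m search radius are assumptions.

      import numpy as np
      from scipy.spatial import cKDTree

      def vertical_residuals(cloud_xyz, gnss_xyz, radius=0.10):
          """Mean and standard deviation of the height differences between a
          point cloud and GNSS check points, compared within a planimetric
          search radius given in metres."""
          cloud_xyz = np.asarray(cloud_xyz, float)
          tree = cKDTree(cloud_xyz[:, :2])
          residuals = []
          for x, y, z in np.asarray(gnss_xyz, float):
              idx = tree.query_ball_point([x, y], r=radius)
              if idx:
                  residuals.append(np.mean(cloud_xyz[idx, 2]) - z)
          residuals = np.asarray(residuals)
          if residuals.size == 0:
              raise ValueError("no cloud points found near any check point")
          return residuals.mean(), residuals.std()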

  8. 3d Reconstruction from Uav-Based Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Liu, L.; Xu, L.; Peng, J.

    2018-04-01

    Reconstructing a 3D profile from a set of UAV-based images can provide hyperspectral information as well as the 3D coordinates of any point on the profile. Our images are captured with the Cubert UHD185 (UHD) hyperspectral camera, which is a new type of high-speed onboard imaging spectrometer that can acquire a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, but each hyperspectral image provides considerable information on the spatial spectral distribution of the object. Thus there is an opportunity to derive a high quality 3D point cloud from the panchromatic images and rich spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database providing hyperspectral information and the 3D position of each point. First, we adopt a free and open-source software, VisualSFM, which is based on the structure from motion (SfM) algorithm, to recover a 3D point cloud from the panchromatic images. We then obtain the spectral information of each point from the hyperspectral images with a self-developed program written in MATLAB. The product can be used to support further research and applications.

  9. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
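
    One simple way to recover magnification, shear and translation from per-block displacements, in the spirit of the description above but not the patented procedure itself, is a least-squares affine fit to the matched block centres. fit_affine and its arguments are hypothetical names.

      import numpy as np

      def fit_affine(src_pts, dst_pts):
          """Least-squares affine fit dst ~ A @ src + t from matched block centres.

          src_pts, dst_pts: (N, 2) arrays of block-centre coordinates in the key
          and new fields. The diagonal of A carries horizontal/vertical
          magnification; the off-diagonal terms carry shear/rotation."""
          src = np.asarray(src_pts, float)
          dst = np.asarray(dst_pts, float)
          design = np.hstack([src, np.ones((len(src), 1))])        # rows [x, y, 1]
          params, *_ = np.linalg.lstsq(design, dst, rcond=None)    # shape (3, 2)
          return params[:2].T, params[2]                           # A (2x2), t (2,)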

  10. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    PubMed Central

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-01

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by unmanned aerial vehicles’ camera (UAVs) and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth–map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
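
    The "three principal component points" compression can be read as reducing each image's keypoint cloud to its centroid plus one point along each principal axis, offset by one standard deviation. The sketch below is an illustrative interpretation of that description, not the authors' code; principal_component_points and keypoints_xy are hypothetical names.

      import numpy as np

      def principal_component_points(keypoints_xy):
          """Compress 2-D feature-point coordinates into three representative
          points: the centroid and one point along each principal axis."""
          pts = np.asarray(keypoints_xy, float)
          centroid = pts.mean(axis=0)
          eigvals, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
          order = np.argsort(eigvals)[::-1]                  # strongest axis first
          scales = np.sqrt(np.maximum(eigvals[order], 0.0))  # one standard deviation
          p1 = centroid + scales[0] * eigvecs[:, order[0]]
          p2 = centroid + scales[1] * eigvecs[:, order[1]]
          return np.vstack([centroid, p1, p2])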

  11. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    PubMed

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by unmanned aerial vehicles' camera (UAVs) and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  12. Incorrect Match Detection Method for Arctic Sea-Ice Reconstruction Using Uav Images

    NASA Astrophysics Data System (ADS)

    Kim, J.-I.; Kim, H.-C.

    2018-05-01

    Shapes and surface roughness, which are considered key indicators in understanding Arctic sea ice, can be measured from the digital surface model (DSM) of the target area. Unmanned aerial vehicles (UAVs) flying at low altitudes in principle enable accurate DSM generation. However, the characteristics of sea ice, with its textureless surface and incessant motion, make image matching difficult for DSM generation. In this paper, we propose a method for effectively detecting incorrect matches before correcting a sea-ice DSM derived from UAV images. The proposed method variably adjusts the size of the search window to analyze the matching results of the generated DSM and distinguishes incorrect matches. Experimental results showed that the sea-ice DSM contained large errors along the textureless surfaces, and that the incorrect matches could be effectively detected by the proposed method.

  13. Colour-based Object Detection and Tracking for Autonomous Quadrotor UAV

    NASA Astrophysics Data System (ADS)

    Kadouf, Hani Hunud A.; Mohd Mustafah, Yasir

    2013-12-01

    With robotics becoming a fundamental aspect of modern society, further research and consequent application are ever increasing. Aerial robotics, in particular, covers applications such as surveillance in hostile military zones or search and rescue operations in disaster stricken areas, where ground navigation is impossible. The increased visual capacity of UAVs (Unmanned Air Vehicles) is also applicable in the support of ground vehicles to provide supplies for emergency assistance, for scouting purposes or to extend communication beyond insurmountable land or water barriers. The quadrotor, which is a small UAV, has its lift generated by four rotors and can be controlled by altering the speeds of its motors relative to each other. The four rotors allow for a higher payload than single or dual rotor UAVs, which makes it safer and more suitable to carry camera and transmitter equipment. An onboard camera is used to capture and transmit images of the quadrotor's First Person View (FPV) while in flight, in real time, wirelessly to a base station. The aim of this research is to develop an autonomous quadrotor platform capable of transmitting real time video signals to a base station for processing. The result from the image analysis will be used as feedback in the quadrotor positioning control. To validate the system, the algorithm should have the capacity to make the quadrotor identify, track or hover above stationary or moving objects.
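
    Colour-based detection of the kind described can be sketched with OpenCV by HSV thresholding followed by a centroid computation on the binary mask; the returned centroid would be the feedback signal for position control. This is a minimal sketch, not the authors' implementation; the HSV range below (roughly green) and the function name are placeholder assumptions.

      import cv2
      import numpy as np

      def track_coloured_target(frame_bgr, hsv_low=(40, 80, 80), hsv_high=(80, 255, 255)):
          """Detect a coloured target in one frame by HSV thresholding and return
          its pixel centroid (x, y), or None if no target pixels are found."""
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, np.array(hsv_low, np.uint8), np.array(hsv_high, np.uint8))
          m = cv2.moments(mask, binaryImage=True)
          if m["m00"] == 0:
              return None
          return m["m10"] / m["m00"], m["m01"] / m["m00"]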

  14. Development of Uav Photogrammetry Method by Using Small Number of Vertical Images

    NASA Astrophysics Data System (ADS)

    Kunii, Y.

    2018-05-01

    This new and efficient photogrammetric method for unmanned aerial vehicles (UAVs) requires only a few images taken in the vertical direction at different altitudes. The method includes an original relative orientation procedure which can be applied to images captured along the vertical direction. The final orientation determines the absolute orientation for every parameter and is used for calculating the 3D coordinates of every measurement point. The measurement accuracy was checked at the UAV test site of the Japan Society for Photogrammetry and Remote Sensing. Five vertical images were taken at 70 to 90 m altitude. The 3D coordinates of the measurement points were calculated. The plane and height accuracies were ±0.093 m and ±0.166 m, respectively. These values are of higher accuracy than the results of the traditional photogrammetric method. The proposed method can measure 3D positions efficiently and would be a useful tool for construction and disaster sites and for other field surveying purposes.

  15. An Augmented Virtuality Display for Improving UAV Usability

    DTIC Science & Technology

    2005-01-01

    cockpit. For a more universally-understood metaphor, we have turned to virtual environments of the type represented in video games. Many of the ... people who have the need to fly UAVs (such as military personnel) have experience with playing video games. They are skilled in navigating virtual ... Another aspect of tailoring the interface to those with video game experience is to use familiar controls. Microsoft has developed a popular and

  16. Geometric processing workflow for vertical and oblique hyperspectral frame images collected using UAV

    NASA Astrophysics Data System (ADS)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.

    2014-08-01

    Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of 3D geometric, radiometric, and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for multiangular measurement system that is designed for measuring object directional reflectance characteristics in a wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSM). This data is needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of the following three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software, (2) calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software) (this step is optional), and finally (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software that is based on semi-global-matching algorithm and it is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures like forests.

  17. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
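
    The centroid step can be written as an intensity-weighted mean of pixel coordinates. The snippet below is a minimal sketch of that calculation only; beam_centroid is a hypothetical name, and the noise/Bernoulli-trial sub-pixel refinement is not shown.

      import numpy as np

      def beam_centroid(image):
          """Intensity-weighted centroid (x, y) of a beam image, in pixels."""
          img = np.asarray(image, float)
          total = img.sum()
          if total <= 0:
              raise ValueError("image has no positive intensity")
          yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
          return (xx * img).sum() / total, (yy * img).sum() / total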

  18. Uav Photogrammetric Workflows: a best Practice Guideline

    NASA Astrophysics Data System (ADS)

    Federman, A.; Santana Quintero, M.; Kretz, S.; Gregg, J.; Lengies, M.; Ouimet, C.; Laliberte, J.

    2017-08-01

    The increasing commercialization of unmanned aerial vehicles (UAVs) has opened the possibility of performing low-cost aerial image acquisition for the documentation of cultural heritage sites through UAV photogrammetry. The flying of UAVs in Canada is regulated through Transport Canada and requires a Special Flight Operations Certificate (SFOC) in order to fly. Various image acquisition techniques have been explored in this review, as well as the software used to register the data. A general workflow procedure has been formulated based on the literature reviewed. A case study example of using UAV photogrammetry at Prince of Wales Fort is discussed, specifically in relation to the data acquisition and processing. Some gaps in the literature reviewed highlight the need for streamlining the SFOC application process, and for incorporating UAVs into cultural heritage documentation courses.

  19. Volumetric calculation using low cost unmanned aerial vehicle (UAV) approach

    NASA Astrophysics Data System (ADS)

    Rahman, A. A. Ab; Maulud, K. N. Abdul; Mohd, F. A.; Jaafar, O.; Tahar, K. N.

    2017-12-01

    Unmanned Aerial Vehicle (UAV) technology has evolved dramatically in the 21st century. It is used by both the military and the general public for recreational purposes and mapping work. The operating cost of a UAV is much cheaper than that of a normal aircraft, and it does not require a large work space. UAV systems have functions similar to LIDAR and satellite imaging technologies, but those systems require a huge cost, labour and time to produce elevation and dimension data. Measurement of difficult objects such as a water tank can also be done by using a UAV. The purpose of this paper is to show the capability of a UAV to compute the volume of a water tank based on different numbers of images and control points. The results were compared with the actual volume of the tank to validate the measurement. In this study, the image acquisition was done using a Phantom 3 Professional, which is a low cost UAV. The analysis in this study is based on different volume computations using two and four control points with various sets of UAV images. The results show that more images provide a better quality measurement. With 95 images and four GCPs, the error relative to the actual volume is about 5%. Four control points are enough to get good results, but more images are needed, estimated at about 115 to 220 images. All in all, it can be concluded that the low cost UAV has potential to be used for water volume and dimension measurement.
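
    A volume estimate of the kind reported can be derived from a UAV photogrammetric Digital Surface Model by summing prisms above a reference level. The sketch below assumes a gridded DSM, a known base elevation and cell size, and an optional footprint mask; it is an illustrative calculation, not the paper's workflow, and the names are hypothetical.

      import numpy as np

      def prismatic_volume(dsm, base_level, cell_size, footprint_mask=None):
          """Volume (m^3) above base_level from a gridded DSM.

          dsm: 2-D array of elevations (m); cell_size: ground sampling distance (m);
          footprint_mask: optional boolean array restricting the computation."""
          heights = np.clip(np.asarray(dsm, float) - base_level, 0.0, None)
          if footprint_mask is not None:
              heights = np.where(footprint_mask, heights, 0.0)
          return float(heights.sum() * cell_size ** 2)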

  20. Micro-UAV tracking framework for EO exploitation

    NASA Astrophysics Data System (ADS)

    Browning, David; Wilhelm, Joe; Van Hook, Richard; Gallagher, John

    2012-06-01

    Historically, the Air Force's research into aerial platforms for sensing systems has focused on low-, mid-, and high-altitude platforms. Though these systems are likely to comprise the majority of the Air Force's assets for the foreseeable future, they have limitations. Specifically, these platforms, their sensor packages, and their data exploitation software are unsuited for close-quarter surveillance, such as in alleys and inside of buildings. Micro-UAVs have been gaining in popularity, especially non-fixed-wing platforms such as quad-rotors. These platforms are much more appropriate for confined spaces. However, the types of video exploitation techniques that can effectively be used are different from those for the typical nadir-looking aerial platform. This paper discusses the creation of a framework for testing existing and new video exploitation algorithms, as well as describes a sample micro-UAV-based tracker.

  1. Autonomous unmanned air vehicles (UAV) techniques

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Kai; Lee, Ting N.

    2007-04-01

    UAVs (Unmanned Air Vehicles) have great potential in different civilian applications, such as oil pipeline surveillance, precision farming, forest fire fighting (yearly), search and rescue, border patrol, etc. The related UAV industries can generate billions of dollars each year. However, the roadblock to adopting UAVs is that their operation is against FAA (Federal Aviation Administration) and ATC (Air Traffic Control) regulations. In this paper, we review the latest technologies and research on UAV navigation and obstacle avoidance. We have proposed a system design of Jittering Mosaic Image Processing (JMIP) with stereo vision and optical flow to fulfill the functionalities of autonomous UAVs.

  2. An Efficient Seam Elimination Method for UAV Images Based on Wallis Dodging and Gaussian Distance Weight Enhancement

    PubMed Central

    Tian, Jinyan; Li, Xiaojuan; Duan, Fuzhou; Wang, Junqian; Ou, Yang

    2016-01-01

    The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing conforms to the increasing demand for low-altitude very high resolution (VHR) image data. However, high processing speed of massive UAV data has become an indispensable prerequisite for its applications in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging was introduced to adjust the difference in brightness between the two matched images, and the parameters of the algorithm were derived in this study. Second, a Gaussian distance weight distribution method was proposed to fuse the two matched images in the overlap region based on the First Law of Geography, which can distribute the partial dislocation at the seam over the whole overlap region with a smooth transition effect. This method was validated at a study site located in Hanwang (Sichuan, China), which was a seriously damaged area in the 12 May 2008 Wenchuan Earthquake. Then, a performance comparison between WD-GDWE and five other classical seam elimination algorithms was conducted in terms of efficiency and effectiveness. Results showed that WD-GDWE is not only efficient, but also effective. This method is promising for advancing applications in the UAV industry, especially in emergency situations. PMID:27171091
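
    The distance-weighted fusion step can be sketched with distance transforms of the two coverage masks and a Gaussian-shaped weight that grows away from each image's boundary, giving a smooth transition across the seam. This is an illustrative blend under stated assumptions, not the WD-GDWE formulation itself; gaussian_distance_blend, sigma and the masks are hypothetical, and the Wallis dodging step is assumed to have been applied already.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def gaussian_distance_blend(img_a, img_b, valid_a, valid_b, sigma=30.0):
          """Blend two aligned single-band images in their overlap region.

          valid_a / valid_b: boolean coverage masks. Each image's weight grows
          with distance from its own boundary as 1 - exp(-d^2 / (2 sigma^2))."""
          w_a = (1.0 - np.exp(-distance_transform_edt(valid_a) ** 2 / (2 * sigma ** 2))) * valid_a
          w_b = (1.0 - np.exp(-distance_transform_edt(valid_b) ** 2 / (2 * sigma ** 2))) * valid_b
          total = w_a + w_b
          blended = (w_a * img_a + w_b * img_b) / np.where(total > 0, total, 1.0)
          # Where both weights vanish (at the image edges), fall back to whichever image covers the pixel.
          return np.where(total > 0, blended, np.where(valid_a, img_a, img_b))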

  3. An Efficient Seam Elimination Method for UAV Images Based on Wallis Dodging and Gaussian Distance Weight Enhancement.

    PubMed

    Tian, Jinyan; Li, Xiaojuan; Duan, Fuzhou; Wang, Junqian; Ou, Yang

    2016-05-10

    The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing conforms to the increasing demand for low-altitude very high resolution (VHR) image data. However, high processing speed of massive UAV data has become an indispensable prerequisite for its applications in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging was introduced to adjust the difference in brightness between the two matched images, and the parameters of the algorithm were derived in this study. Second, a Gaussian distance weight distribution method was proposed to fuse the two matched images in the overlap region based on the First Law of Geography, which can distribute the partial dislocation at the seam over the whole overlap region with a smooth transition effect. This method was validated at a study site located in Hanwang (Sichuan, China), which was a seriously damaged area in the 12 May 2008 Wenchuan Earthquake. Then, a performance comparison between WD-GDWE and five other classical seam elimination algorithms was conducted in terms of efficiency and effectiveness. Results showed that WD-GDWE is not only efficient, but also effective. This method is promising for advancing applications in the UAV industry, especially in emergency situations.

  4. a Study on Automatic Uav Image Mosaic Method for Paroxysmal Disaster

    NASA Astrophysics Data System (ADS)

    Li, M.; Li, D.; Fan, D.

    2012-07-01

    Paroxysmal disasters, such as floods, can do great damage in a short time. Timely, accurate and fast acquisition of sufficient disaster information is a prerequisite for facing a disaster emergency. Because of the UAV's superiority in acquiring disaster data, UAV imagery, a rising source of remotely sensed data, has gradually become the first choice for departments of disaster prevention and mitigation to collect disaster information at first hand. In this paper, a novel and fast strategy is proposed for registering and mosaicking UAV data. First, the original images are not upsampled to twice their size at the initial stage of the SIFT operator, and the total number of pyramid octaves in scale space is reduced to speed up the matching process; subsequently, RANSAC (Random Sample Consensus) is used to eliminate mismatching tie points. Then, bundle adjustment is introduced to solve all of the camera geometric calibration parameters jointly. Finally, a best-seamline searching strategy based on dynamic scheduling is applied to solve the dodging problem arising from the aircraft's side-looking geometry. Besides, a weighted fusion estimation algorithm is employed to eliminate the "fusion ghost" phenomenon.
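
    The matching-plus-RANSAC core of such a pipeline can be sketched with OpenCV (recent builds ship SIFT in the main module). This is a generic sketch, not the paper's tuned implementation, which additionally skips the initial 2x upsampling and reduces the number of pyramid octaves; match_pair and the ratio threshold are assumptions.

      import cv2
      import numpy as np

      def match_pair(img1_gray, img2_gray, ratio=0.75):
          """Match two UAV frames with SIFT, filter with Lowe's ratio test and
          reject remaining outliers with RANSAC. Returns (homography, inlier mask)."""
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(img1_gray, None)
          kp2, des2 = sift.detectAndCompute(img2_gray, None)
          raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
          good = [p[0] for p in raw if len(p) == 2 and p[0].distance < ratio * p[1].distance]
          if len(good) < 4:
              return None, None
          src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          return cv2.findHomography(src, dst, cv2.RANSAC, 3.0)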

  5. Experiences of using UAVs for monitoring levee breaches

    NASA Astrophysics Data System (ADS)

    Brauneck, J.; Pohl, R.; Juepner, R.

    2016-11-01

    During floods, technical protection facilities are subjected to high loads and might fail, as several examples have shown in the past. During the major 2002 and 2013 floods in the catchment area of the Elbe River (Germany), several levee breaches caused large inundations in the hinterland. In such situations the emergency forces need comprehensive and reliable real-time information about the situation, especially the breach enlargement and discharge, the spatial and temporal development of the inundation, and the damages. After impressive progress in recent years, unmanned aerial vehicles (UAVs), also called remotely piloted aircraft systems (RPAS), are highly capable of collecting and transmitting precise information from inaccessible areas to the task force very quickly. Using the example of the Breitenhagen levee failure near the Saale-Elbe junction in Germany in June 2013, the processing steps are explained that are needed to go from the visual UAV flight information to a hydro-numerical model. Modelling of the breach was implemented using photogrammetric ranging methods, such as structure from motion and dense image matching. These methods utilize conventional digital multiple-view images or videos recorded by either a moving aerial platform or terrestrial photography and allow the construction of 3D point clouds, digital surface models and orthophotos. At Breitenhagen, a UAV recorded the beginning of the levee failure. Due to the dynamic character of the breach and the moving aerial platform, four different surface models show valid data with extrapolated breach widths of 9 to 40 meters. By means of these calculations the flow rate through the breach has been determined. In addition, the procedure has been tested in a physical model, whose results will be presented too.

  6. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  7. Multi-Temporal Classification and Change Detection Using Uav Images

    NASA Astrophysics Data System (ADS)

    Makuti, S.; Nex, F.; Yang, M. Y.

    2018-05-01

    In this paper different methodologies for the classification and change detection of UAV image blocks are explored. The UAV is not only the cheapest platform for image acquisition but also the easiest platform to operate in repeated data collections over a changing area like a building construction site. Two change detection techniques have been evaluated in this study: the pre-classification and the post-classification algorithms. These methods are based on three main steps: feature extraction, classification and change detection. A set of state-of-the-art features has been used in the tests: colour features (HSV), textural features (GLCM) and 3D geometric features. For classification purposes a Conditional Random Field (CRF) has been used: the unary potential was determined using the Random Forest algorithm, while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings have been considered to assess the performance of these methods in such a challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method: in terms of overall accuracy, the post-classification approach reaches up to 62.6 % while the pre-classification change detection reaches 46.5 %. These results represent a first useful indication for future works and developments.
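
    The Random Forest unary potential can be sketched with scikit-learn: class probabilities predicted per pixel or segment become negative-log unary costs for the CRF. This is a minimal sketch, not the authors' pipeline; the fully connected pairwise term is not shown and the function and array names are hypothetical.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def unary_potentials(train_feats, train_labels, test_feats, n_trees=100):
          """Train a random forest on labelled features and return unary
          potentials (negative log class probabilities) for the test features."""
          forest = RandomForestClassifier(n_estimators=n_trees, random_state=0)
          forest.fit(train_feats, train_labels)
          proba = forest.predict_proba(test_feats)
          return -np.log(np.clip(proba, 1e-9, 1.0))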

  8. Image processing analysis of geospatial uav orthophotos for palm oil plantation monitoring

    NASA Astrophysics Data System (ADS)

    Fahmi, F.; Trianda, D.; Andayani, U.; Siregar, B.

    2018-03-01

    Unmanned Aerial Vehicles (UAVs) are one of the tools that can be used to monitor palm oil plantations remotely. With geospatial orthophotos, it is possible to identify which parts of the plantation are fertile, meaning the planted crops grow perfectly, which parts are less fertile, showing growth that is not perfect, and which parts of the plantation field are not growing at all. This information can be obtained quickly and easily with the use of UAV photos. In this study, we utilized image processing algorithms to process the orthophotos for more accurate and faster analysis. The resulting orthophoto images were processed using Matlab, including the classification of fertile, infertile, and dead palm oil plants by using the Gray Level Co-Occurrence Matrix (GLCM) method. The GLCM method was developed based on four direction parameters with the specific angles 0°, 45°, 90°, and 135°. From the results of research conducted with 30 image samples, it was found that the system's accuracy can be achieved by using the features extracted from the matrix as parameters: Contrast, Correlation, Energy, and Homogeneity.
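
    The four-direction GLCM features named above can be reproduced with scikit-image. The sketch below assumes an 8-bit grayscale patch and a pixel distance of 1 (the distance is an assumption, as the paper specifies only the four angles); glcm_features is a hypothetical name.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(gray_patch):
          """Contrast, Correlation, Energy and Homogeneity averaged over the four
          directions 0, 45, 90 and 135 degrees. gray_patch must be uint8."""
          angles = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
          glcm = graycomatrix(gray_patch, distances=[1], angles=angles,
                              levels=256, symmetric=True, normed=True)
          return {prop: float(graycoprops(glcm, prop).mean())
                  for prop in ("contrast", "correlation", "energy", "homogeneity")}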

  9. Robust real-time horizon detection in full-motion video

    NASA Astrophysics Data System (ADS)

    Young, Grace B.; Bagnall, Bryan; Lane, Corey; Parameswaran, Shibin

    2014-06-01

    The ability to detect the horizon in real time in full-motion video is an important capability to aid and facilitate real-time processing of full-motion videos for purposes such as object detection, recognition and other video/image segmentation applications. In this paper, we propose a method for real-time horizon detection that is designed to be used as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion videos captured by ship/harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs) or any other method of surveillance for Maritime Domain Awareness (MDA). Unlike existing horizon detection work, we cannot assume a priori the angle or nature (e.g., a straight line) of the horizon, due to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle and irrespective of objects appearing close to and/or occluding the horizon line (e.g., trees or vehicles at a distance) by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon detection operation. In this paper, we present our real-time horizon detection results using our algorithm on real-world full-motion video data from a variety of surveillance sensors like UAVs and ship-mounted cameras, confirming the real-time applicability of this method and its ability to detect the horizon with no a priori assumptions.

  10. Classical Photogrammetry and Uav - Selected Ascpects

    NASA Astrophysics Data System (ADS)

    Mikrut, S.

    2016-06-01

    The UAV technology seems to be highly future-oriented due to its low costs as compared to traditional aerial images taken from classical photogrammetry aircraft. The AGH University of Science and Technology in Cracow - Department of Geoinformation, Photogrammetry and Environmental Remote Sensing focuses mainly on the geometry and radiometry of recorded images. Various scientific research centres all over the world have been conducting the relevant research for years. The paper presents selected aspects of processing digital images made with the UAV technology. Using a practical example, it compares a digital image taken from an airborne (classical) height with one made from a UAV level. In his research, the author of the paper tries to answer the question: to what extent does the UAV technology diverge today from classical photogrammetry, and what are the advantages and disadvantages of both methods? The flight plan was made over the Tokarnia Village Museum (more than 0.5 km2) for two separate flights: the first was made by a UAV - the FT-03A system built by FlyTech Solution Ltd.; the second was made with a classical photogrammetric Cessna aircraft equipped with an airborne photogrammetric camera (Ultra Cam Eagle). Both sets of photographs were taken with a pixel size of about 3 cm, in order to have reliable data allowing the two systems to be compared. Aerotriangulation was performed independently for the two flights. The DTM was generated automatically, and the last step was the generation of an orthophoto. The geometry of the images was checked during the aerotriangulation process. To compare the accuracy of the two flights, control and check points were used, and RMSEs were calculated. The radiometry was checked by a visual method and using the author's own algorithm for feature extraction (to define edges with subpixel accuracy). After initial pre-processing of the data, the images were put together and shown side by side

  11. Gimbal Influence on the Stability of Exterior Orientation Parameters of UAV Acquired Images.

    PubMed

    Gašparović, Mateo; Jurjević, Luka

    2017-02-18

    In this paper, results from the analysis of the gimbal impact on the determination of the camera exterior orientation parameters of an Unmanned Aerial Vehicle (UAV) are presented and interpreted. Additionally, a new approach and methodology for testing the influence of gimbals on the exterior orientation parameters of UAV acquired images is presented. The main motive of this study is to examine the possibility of obtaining better geometry and favorable spatial bundles of rays of images in UAV photogrammetric surveying. The subject is a 3-axis brushless gimbal based on a controller board (Storm32). Only two gimbal axes are taken into consideration: roll and pitch axes. Testing was done in a flight simulation, and in indoor and outdoor flight mode, to analyze the Inertial Measurement Unit (IMU) and photogrammetric data. Within these tests the change of the exterior orientation parameters without the use of a gimbal is determined, as well as the potential accuracy of the stabilization with the use of a gimbal. The results show that using a gimbal has huge potential. Significantly, smaller discrepancies between data are noticed when a gimbal is used in flight simulation mode, even four times smaller than in other test modes. In this test the potential accuracy of a low budget gimbal for application in real conditions is determined.

  12. Gimbal Influence on the Stability of Exterior Orientation Parameters of UAV Acquired Images

    PubMed Central

    Gašparović, Mateo; Jurjević, Luka

    2017-01-01

    In this paper, results from the analysis of the gimbal impact on the determination of the camera exterior orientation parameters of an Unmanned Aerial Vehicle (UAV) are presented and interpreted. Additionally, a new approach and methodology for testing the influence of gimbals on the exterior orientation parameters of UAV acquired images is presented. The main motive of this study is to examine the possibility of obtaining better geometry and favorable spatial bundles of rays of images in UAV photogrammetric surveying. The subject is a 3-axis brushless gimbal based on a controller board (Storm32). Only two gimbal axes are taken into consideration: roll and pitch axes. Testing was done in a flight simulation, and in indoor and outdoor flight mode, to analyze the Inertial Measurement Unit (IMU) and photogrammetric data. Within these tests the change of the exterior orientation parameters without the use of a gimbal is determined, as well as the potential accuracy of the stabilization with the use of a gimbal. The results show that using a gimbal has huge potential: significantly smaller discrepancies between the data are noticed when a gimbal is used in flight simulation mode, up to four times smaller than in other test modes. In this test the potential accuracy of a low-budget gimbal for application in real conditions is determined. PMID:28218699

  13. Critical infrastructure monitoring using UAV imagery

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Skitsas, Michael; Charalambous, Elisavet; Koutras, Nikolaos; Bliziotis, Dimitris; Themistocleous, Kyriacos

    2016-08-01

    The constant technological evolution in Computer Vision has enabled the development of new techniques which, in conjunction with the use of Unmanned Aerial Vehicles (UAVs), can extract high-quality photogrammetric products for several applications. Dense Image Matching (DIM) is a Computer Vision technique that can generate a dense 3D point cloud of an area or object. The use of UAV systems and DIM techniques is not only a flexible and attractive solution for producing accurate and high-quality photogrammetric results but is also a major contribution to cost effectiveness. In this context, this study aims to highlight the benefits of using UAVs in critical infrastructure monitoring by applying DIM. A Multi-View Stereo (MVS) approach using multiple images (RGB digital aerial and oblique images) to fully cover the area of interest is implemented. The application area is an Olympic venue in Attica, Greece, covering an area of 400 acres. The results of our study indicate that the UAV+DIM approach responds very well to the increasing demand for accurate and cost-effective applications, providing a 3D point cloud and orthomosaic.

  14. Registration of multiple video images to preoperative CT for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    1999-05-01

    In this paper we propose a method which uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope, with a field of view of approximately 110 X 80 mm. We have extended an existing information theoretical framework for 2D-3D registration, so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered these individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel techniques we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
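
    The registration criterion described above can be illustrated with a short, self-contained sketch. This is not the authors' implementation; it only shows a histogram-based mutual information score between a video frame and a CT rendering, with the function name and bin count chosen for illustration. A pose optimiser would render the CT at a candidate pose, evaluate this score against each video view, and sum the scores over all views.

      # Minimal sketch (not the authors' code): histogram-based mutual
      # information between a video frame and a rendering of the CT volume.
      import numpy as np

      def mutual_information(video_img, rendered_img, bins=64):
          """Estimate MI from the joint grey-level histogram of two images."""
          joint_hist, _, _ = np.histogram2d(video_img.ravel(),
                                            rendered_img.ravel(),
                                            bins=bins)
          p_xy = joint_hist / joint_hist.sum()      # joint probability
          p_x = p_xy.sum(axis=1, keepdims=True)     # marginal of video image
          p_y = p_xy.sum(axis=0, keepdims=True)     # marginal of rendering
          nonzero = p_xy > 0
          return np.sum(p_xy[nonzero] *
                        np.log(p_xy[nonzero] / (p_x @ p_y)[nonzero]))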

  15. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs

    NASA Astrophysics Data System (ADS)

    Yahyanejad, Saeed; Rinner, Bernhard

    2015-06-01

    The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectrums, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in case of multiple image pairs.
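
    As a rough illustration of feature-based registration of an image pair, the following OpenCV sketch uses ORB as a stand-in descriptor; the paper's own interspectral descriptor is not reproduced here, and the function name and thresholds are assumptions.

      # Generic feature-based registration sketch (OpenCV); the paper's own
      # interspectral descriptor is not reproduced here.
      import cv2
      import numpy as np

      def register_pair(thermal_path, visual_path):
          img_t = cv2.imread(thermal_path, cv2.IMREAD_GRAYSCALE)
          img_v = cv2.imread(visual_path, cv2.IMREAD_GRAYSCALE)

          orb = cv2.ORB_create(2000)                  # ORB as a stand-in
          kp_t, des_t = orb.detectAndCompute(img_t, None)
          kp_v, des_v = orb.detectAndCompute(img_v, None)

          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des_t, des_v), key=lambda m: m.distance)

          src = np.float32([kp_t[m.queryIdx].pt for m in matches[:200]])
          dst = np.float32([kp_v[m.trainIdx].pt for m in matches[:200]])
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

          # Warp the thermal frame into the visual frame for fusion/overlay.
          return cv2.warpPerspective(img_t, H, img_v.shape[::-1])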

  16. Radar sensing via a Micro-UAV-borne system

    NASA Astrophysics Data System (ADS)

    Catapano, Ilaria; Ludeno, Giovanni; Gennarelli, Gianluca; Soldovieri, Francesco; Rodi Vetrella, Amedeo; Fasano, Giancarmine

    2017-04-01

    In recent years, the miniaturization of flight control systems and payloads has contributed to a fast and widespread diffusion of micro-UAVs (Unmanned Aerial Vehicles). While micro-UAVs can be a powerful tool in several civil applications, such as environmental monitoring and surveillance, unleashing their full potential for societal benefit requires augmenting their sensing capability beyond the realm of active/passive optical sensors [1]. In this frame, radar systems are drawing attention since they allow missions to be performed in all-weather and day/night conditions and, thanks to the ability of microwaves to penetrate opaque media, they enable the detection and localization not only of surface objects but also of sub-surface/hidden targets. However, micro-UAV-borne radar imaging still represents a new frontier, since it is much more than a matter of technology miniaturization or payload installation, which can take advantage of newly developed ultralight systems. Indeed, micro-UAV-borne radar imaging entails scientific challenges in terms of electromagnetic modeling and knowledge of flight dynamics and control. As a consequence, although Synthetic Aperture Radar (SAR) imaging is a traditional remote sensing tool, its adaptation to micro-UAVs is an open issue, and so far only a few case studies concerning the integration of SAR and UAV technologies have been reported worldwide [2]. In addition, only early results concerning subsurface imaging by means of a UAV-mounted radar are available [3]. As a contribution to radar imaging via autonomous micro-UAVs, this communication presents a proof-of-concept experiment. This experiment represents the first step towards the development of a general methodological approach that exploits expertise in (sub-)surface imaging and aerospace systems with the aim of providing high-resolution images of the surveyed scene. In detail, at the conference we will present the results of a flight campaign carried out by using a single radar

  17. Feasibility study of a novel miniaturized spectral imaging system architecture in UAV surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Shuyang; Zhou, Tao; Jia, Xiaodong; Cui, Hushan; Huang, Chengjun

    2016-01-01

    Spectral imaging technology is able to analyze the spectral and spatial geometric character of a target at the same time. To break through the limitations imposed by the size, weight and cost of traditional spectral imaging instruments, a novel miniaturized spectral imager based on CMOS processing has been introduced to the market. This technology has enabled the possibility of applying spectral imaging on UAV platforms. In this paper, the relevant technology and the related possible applications are presented with the aim of implementing a quick, flexible and more detailed remote sensing system.

  18. UAVs Being Used for Environmental Surveying

    ScienceCinema

    Chung, Sandra

    2017-12-09

    UAVs are much more sophisticated than a typical remote-controlled plane. INL robotics and remote sensing experts have added state-of-the-art imaging and wireless technology to the UAVs to create intelligent remote surveillance craft that can rapidly survey a wide area for damage and track down security threats.

  19. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of objects directly adjacent. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
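
    Two of the operations listed above, unsharp masking and frame averaging, can be sketched in a few lines of Python/OpenCV. This is only an illustration of the general techniques, not the workbench's code, and parameter values are placeholders.

      # Illustrative sketch of two of the listed operations (not the original
      # workbench code): unsharp masking and frame averaging.
      import cv2
      import numpy as np

      def unsharp_mask(frame, sigma=3.0, amount=1.5):
          """Sharpen low-contrast detail by subtracting a blurred copy."""
          blurred = cv2.GaussianBlur(frame, (0, 0), sigma)
          return cv2.addWeighted(frame, 1.0 + amount, blurred, -amount, 0)

      def average_frames(frames):
          """Suppress zero-mean noise by averaging a short frame stack."""
          stack = np.stack(frames).astype(np.float32)
          return np.mean(stack, axis=0).astype(np.uint8)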

  20. Uav Borne Low Altitude Photogrammetry System

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Su, G.; Xie, F.

    2012-07-01

    In this paper, the three major aspects of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry, i.e., the flying platform, the imaging sensor system and the data processing software, are discussed. First of all, according to the technical requirements for minimum cruising speed, shortest taxiing distance, level of flight control and performance in turbulent flight, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters and unmanned airships) are compared and analyzed. Secondly, considering the restrictions on the payload weight of a platform and the resolution of a sensor, together with the exposure equation and the theory of optical information, emphasis is placed on the principles of designing self-calibrating and self-stabilizing combined wide-angle digital cameras (e.g., double-combined and four-combined cameras). Finally, software named MAP-AT, which accounts for the specific characteristics of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT puts particular effort into automatic extraction, automatic checking and manually assisted addition of tie points for images with large tilt angles. Based on the process for low-altitude photogrammetry with UAVs recommended in this paper, more than ten aerial photogrammetry missions have been accomplished; the accuracies of their aerial triangulation, digital orthophotos (DOM) and digital line graphs (DLG) meet the standard requirements of 1:2000, 1:1000 and 1:500 mapping.

  1. Automated geographic registration and radiometric correction for UAV-based mosaics

    USDA-ARS?s Scientific Manuscript database

    Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties to s...

  2. Comprehensive UAV agricultural remote-sensing research at Texas A M University

    NASA Astrophysics Data System (ADS)

    Thomasson, J. Alex; Shi, Yeyin; Olsenholler, Jeffrey; Valasek, John; Murray, Seth C.; Bishop, Michael P.

    2016-05-01

    Unmanned aerial vehicles (UAVs) have advantages over manned vehicles for agricultural remote sensing. Flying UAVs is less expensive, is more flexible in scheduling, enables lower altitudes, uses lower speeds, and provides better spatial resolution for imaging. The main disadvantage is that, at lower altitudes and speeds, only small areas can be imaged. However, on large farms with contiguous fields, high-quality images can be collected regularly by using UAVs with appropriate sensing technologies that enable high-quality image mosaics to be created with sufficient metadata and ground-control points. In the United States, rules governing the use of aircraft are promulgated and enforced by the Federal Aviation Administration (FAA), and rules governing UAVs are currently in flux. Operators must apply for appropriate permissions to fly UAVs. In the summer of 2015 Texas A&M University's agricultural research agency, Texas A&M AgriLife Research, embarked on a comprehensive program of remote sensing with UAVs at its 568-ha Brazos Bottom Research Farm. This farm is made up of numerous fields where various crops are grown in plots or complete fields. The crops include cotton, corn, sorghum, and wheat. After gaining FAA permission to fly at the farm, the research team used multiple fixed-wing and rotary-wing UAVs along with various sensors to collect images over all parts of the farm at least once per week. This article reports on details of flight operations and sensing and analysis protocols, and it includes some lessons learned in the process of developing a UAV remote-sensing effort of this sort.

  3. Yellow River Icicle Hazard Dynamic Monitoring Using UAV Aerial Remote Sensing Technology

    NASA Astrophysics Data System (ADS)

    Wang, H. B.; Wang, G. H.; Tang, X. M.; Li, C. H.

    2014-02-01

    Monitoring the response of the Yellow River icicle hazard to change requires accurate and repeatable topographic surveys. A new method based on unmanned aerial vehicle (UAV) aerial remote sensing technology is proposed for real-time data processing in Yellow River icicle hazard dynamic monitoring. The monitoring area is located in the Yellow River intensive ice monitoring area in southern Baotou, Inner Mongolia autonomous region. The monitoring period was from 20 February to 30 March 2013. Using the proposed video data processing method, the automatic extraction of 1,832 video key-frame images covering an area of 7.8 km2 took 34.786 seconds. The stitching and correction time was 122.34 seconds and the accuracy was better than 0.5 m. Through comparison of the precisely processed stitched video sequence images, the method determines changes in the Yellow River ice and accurately locates the ice barrier, improving on the traditional visual method by more than 100 times. The results provide accurate decision-aid information for the Yellow River ice prevention headquarters. Finally, the effect of the dam break was monitored repeatedly, and the ice break was determined with five-meter accuracy through accurate monitoring and evaluation analysis.
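
    A possible key-frame selection step, in the spirit of the extraction described above but not taken from the paper, could look like the following sketch; the difference threshold and function name are assumptions.

      # Hypothetical key-frame selection sketch: keep a frame whenever it
      # differs enough from the last retained key frame.
      import cv2
      import numpy as np

      def extract_keyframes(video_path, diff_threshold=12.0):
          cap = cv2.VideoCapture(video_path)
          keyframes, last = [], None
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              if last is None or np.mean(cv2.absdiff(gray, last)) > diff_threshold:
                  keyframes.append(frame)
                  last = gray
          cap.release()
          return keyframes   # key frames would then be stitched into a mosaic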

  4. Woodland Mapping at Single-Tree Levels Using Object-Oriented Classification of Unmanned Aerial Vehicle (uav) Images

    NASA Astrophysics Data System (ADS)

    Chenari, A.; Erfanifard, Y.; Dehghani, M.; Pourghasemi, H. R.

    2017-09-01

    Remotely sensed datasets offer a reliable means to precisely estimate biophysical characteristics of individual species sparsely distributed in open woodlands. Moreover, object-oriented classification has exhibited significant advantages over different classification methods for delineation of tree crowns and recognition of species in various types of ecosystems. However, it is still unclear whether this widely used classification method retains its advantages on unmanned aerial vehicle (UAV) digital images for mapping vegetation cover at the single-tree level. In this study, UAV orthoimagery was classified using an object-oriented classification method to map part of a wild pistachio nature reserve in the Zagros open woodlands, Fars Province, Iran. This research focused on recognizing the two main species of the study area (i.e., wild pistachio and wild almond) and estimating their mean crown area. The orthoimage of the study area consisted of 1,076 images with a spatial resolution of 3.47 cm, georeferenced using 12 ground control points (RMSE = 8 cm) gathered by the real-time kinematic (RTK) method. The results showed that the UAV orthoimagery classified by the object-oriented method efficiently estimated the mean crown area of wild pistachios (52.09±24.67 m2) and wild almonds (3.97±1.69 m2), with no significant difference from their observed values (α=0.05). In addition, the results showed that wild pistachios (accuracy of 0.90 and precision of 0.92) and wild almonds (accuracy of 0.90 and precision of 0.89) were well recognized by image segmentation. In general, we conclude that UAV orthoimagery can efficiently produce precise biophysical data of vegetation stands at the single-tree level, and is therefore suitable for the assessment and monitoring of open woodlands.

  5. Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences

    NASA Astrophysics Data System (ADS)

    Wierzbicki, Damian

    2017-12-01

    Currently, UAV photogrammetry is characterized by largely automated and efficient data processing. Imaging from low altitudes is increasingly used in applications such as city mapping, corridor mapping, road and pipeline inspection, or mapping of large areas, e.g. forests. Additionally, high-resolution video imagery (HD and larger) is increasingly used for low-altitude acquisition; on the one hand it delivers many details and characteristics of ground surface features, and on the other hand it presents new challenges in data processing. Therefore, the determination of the elements of exterior orientation plays a substantial role in the detail of digital terrain models and in artefact-free orthophoto generation. In parallel, research is conducted on the quality of images acquired from UAVs and on the quality of products such as orthophotos. Despite the rapid development of UAV photogrammetry, it is still necessary to perform Automatic Aerial Triangulation (AAT) on the basis of GPS/INS observations and ground control points. During a low-altitude photogrammetric flight, the approximate elements of exterior orientation registered by the UAV are burdened with shift/bias errors. In this article, methods for determining the shift/bias error are presented. In the process of digital aerial triangulation, two solutions are applied. In the first method, the shift/bias error is determined together with the drift/bias error, the elements of exterior orientation and the coordinates of ground control points. In the second method, the shift/bias error is determined together with the elements of exterior orientation and the coordinates of ground control points, with the drift/bias error set equal to 0. When the two methods are compared, the difference in the shift/bias error is more than ±0.01 m for all terrain coordinates XYZ.
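
    As a minimal illustration of the idea of a constant shift/bias, the sketch below estimates a single 3D offset between GPS-recorded and adjusted projection centres by least squares; this is a deliberate simplification of the paper's combined adjustment, and all names are assumptions.

      # Minimal sketch under simplifying assumptions: the shift/bias is a
      # constant 3D offset between GPS-recorded projection centres and the
      # centres from the bundle adjustment; the least-squares estimate of a
      # constant offset is simply the mean difference.
      import numpy as np

      def estimate_shift_bias(gps_centres, adjusted_centres):
          """Both arguments: (n, 3) arrays of XYZ projection centres."""
          residuals = np.asarray(adjusted_centres) - np.asarray(gps_centres)
          shift = residuals.mean(axis=0)        # constant shift/bias term
          return shift, residuals - shift       # remaining discrepancies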

  6. Uav-Borne Thermal Imaging for Forest Health Monitoring: Detection of Disease-Induced Canopy Temperature Increase

    NASA Astrophysics Data System (ADS)

    Smigaj, M.; Gaulton, R.; Barr, S. L.; Suárez, J. C.

    2015-08-01

    Climate change has a major influence on forest health and growth, by indirectly affecting the distribution and abundance of forest pathogens, as well as the severity of tree diseases. Temperature rise and changes in precipitation may also allow the ranges of some species to expand, resulting in the introduction of non-native invasive species, which pose a significant risk to forests worldwide. The detection and robust monitoring of affected forest stands is therefore crucial for allowing management interventions to reduce the spread of infections. This paper investigates the use of a low-cost fixed-wing UAV-borne thermal system for monitoring disease-induced canopy temperature rise. Initially, camera calibration was performed revealing a significant overestimation (by over 1 K) of the temperature readings and a non-uniformity (exceeding 1 K) across the imagery. These effects have been minimised with a two-point calibration technique ensuring the offsets of mean image temperature readings from blackbody temperature did not exceed ± 0.23 K, whilst 95.4% of all the image pixels fell within ± 0.14 K (average) of mean temperature reading. The derived calibration parameters were applied to a test data set of UAV-borne imagery acquired over a Scots pine stand, representing a range of Red Band Needle Blight infection levels. At canopy level, the comparison of tree crown temperature recorded by a UAV-borne infrared camera suggests a small temperature increase related to disease progression (R = 0.527, p = 0.001); indicating that UAV-borne cameras might be able to detect sub-degree temperature differences induced by disease onset.
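
    A two-point radiometric calibration of the kind mentioned above can be sketched as follows; the reference readings and temperatures below are placeholders, not the study's values.

      # Sketch of a two-point calibration, assuming two blackbody reference
      # temperatures were imaged (all numbers are placeholders).
      import numpy as np

      def two_point_calibration(reading_low, reading_high, ref_low, ref_high):
          """Return gain/offset mapping sensor readings to reference temps."""
          gain = (ref_high - ref_low) / (reading_high - reading_low)
          offset = ref_low - gain * reading_low
          return gain, offset

      gain, offset = two_point_calibration(302.1, 322.4, 300.15, 320.15)
      corrected = gain * np.array([[305.0, 306.2], [304.8, 305.5]]) + offset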

  7. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    NASA Astrophysics Data System (ADS)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals, using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system because it can be deployed on different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D. The process then continued by publishing to a web server to compare the effectiveness of the different 3D terrain data (contour data) draped with UAV images. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine. It therefore benefits decision makers and planners in this field when deciding which contour interval is applicable for their task.

  8. Tracking, aiming, and hitting the UAV with ordinary assault rifle

    NASA Astrophysics Data System (ADS)

    Racek, František; Baláž, Teodor; Krejčí, Jaroslav; Procházka, Stanislav; Macko, Martin

    2017-10-01

    The usage of small unmanned aerial vehicles (UAVs) is increasing significantly nowadays. They are being used as carriers of military spy and reconnaissance devices (taking photos, live video streaming and so on), or as carriers of potentially dangerous cargo (intended for destruction and killing). Both ways of utilizing the UAV create the necessity to disable it. From the military point of view, to disable the UAV means to bring it down with the weapon of an ordinary soldier, that is, the assault rifle. This task can be challenging for the soldier because he needs to visually detect and identify the target, track the target visually and aim at the target. The final success of the soldier's mission depends not only on these visual tasks, but also on the properties of the weapon and ammunition. The paper deals with possible methods of predicting the probability of hitting UAV targets.

  9. UAV Trajectory Modeling Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Xue, Min

    2017-01-01

    A large number of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrain, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs: how to model a vehicle's trajectory with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once fully trained, given the current vehicle states, winds, and desired future trajectory, the neural
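
    A surrogate trajectory model of the kind proposed here can be sketched with a small fully connected network; the sketch below uses random placeholder data and plain gradient descent, and is not NASA's model. Once trained on real flight logs, the forward pass would be rolled out step by step to simulate a trajectory.

      # Illustrative surrogate: a one-hidden-layer network trained on
      # (state, wind, command) -> next-state samples; data are placeholders.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 8))    # placeholder inputs: state+wind+command
      Y = rng.normal(size=(2000, 4))    # placeholder outputs: next state

      W1 = rng.normal(scale=0.1, size=(8, 32)); b1 = np.zeros(32)
      W2 = rng.normal(scale=0.1, size=(32, 4)); b2 = np.zeros(4)

      def forward(x):
          h = np.tanh(x @ W1 + b1)
          return h, h @ W2 + b2

      lr = 1e-2
      for _ in range(500):              # plain gradient descent on MSE
          h, pred = forward(X)
          err = pred - Y
          gW2 = h.T @ err / len(X);  gb2 = err.mean(axis=0)
          gh = err @ W2.T * (1 - h**2)
          gW1 = X.T @ gh / len(X);   gb1 = gh.mean(axis=0)
          W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2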

  10. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
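
    The TCI measurement model with T = 8 can be illustrated with a short numpy sketch; the masks and frames below are random placeholders, and the reconstruction step itself (TwIST or GMM) is not shown.

      # Sketch of the TCI forward model with compression ratio T = 8: eight
      # high-speed frames are modulated by per-frame binary masks and summed
      # into one compressive measurement.
      import numpy as np

      T, H, W = 8, 256, 256
      rng = np.random.default_rng(1)
      video = rng.random((T, H, W))               # placeholder high-speed frames
      masks = rng.integers(0, 2, size=(T, H, W))  # per-frame coded masks

      measurement = np.sum(masks * video, axis=0) # single compressive frame
      # TwIST or a GMM prior would then recover the 8 frames from the
      # measurement, typically working on 8x8 patches to limit memory use.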

  11. Development of a Micro-UAV Hyperspectral Imaging Platform for Assessing Hydrogeological Hazards

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Alabsi, M.

    2015-12-01

    Worsening global climate change has had a significant impact on the proportion of water supplied to agriculture. Therefore, one of the 21st Century Grand Challenges faced by the global population is securing water for food. However, soil-water behavior in an agricultural environment is complex; among others, one of the key properties we recognize is water repellence, or hydrophobicity, which affects many hydrogeological and hazardous conditions such as excessive water infiltration, runoff, and soil erosion. Under a US-Israel research program funded by the USDA and BARD in Israel, we have proposed the development of a novel micro-unmanned aerial vehicle (micro-UAV, or drone) based hyperspectral imaging platform for identifying and assessing soil repellence at low altitudes with enhanced flexibility, much reduced cost, and ultimately ease of use. This aerial imaging system consists of a generic micro-UAV, a hyperspectral sensor aided by GPS/IMU, on-board computing units, and a ground station. The target benefits of this system include: (1) programmable waypoint navigation and robotic control for multi-view imaging; (2) the ability to perform two- or three-dimensional scene reconstruction for complex terrains; and (3) fusion with other sensors to realize real-time diagnosis (e.g., of humidity and solar irradiation that may affect soil-water sensing). In this talk we present our methodology and processes for the integration of hyperspectral imaging, on-board sensing and computing, and hyperspectral data modeling, along with preliminary field demonstration and verification of the developed prototype.

  12. Mass-storage management for distributed image/video archives

    NASA Astrophysics Data System (ADS)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both the database structures and the mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog the image/video coding technique with the related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server; because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management; they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis; it handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.
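
    The three function levels described above could be organized roughly as in the following sketch; class and method names are hypothetical and not taken from the IBM system.

      # Hypothetical sketch of the three function levels; names are illustrative.
      class StorageDevice:                  # lower level: media management
          def __init__(self, name, capacity_gb, access_ms):
              self.name, self.capacity_gb, self.access_ms = name, capacity_gb, access_ms
              self.files = {}

      class MediaManager:                   # medium level: physical file moves
          def __init__(self, devices):
              self.devices = {d.name: d for d in devices}
          def migrate(self, file_id, src, dst):
              self.devices[dst].files[file_id] = self.devices[src].files.pop(file_id)

      class ArchiveManager:                 # upper level: query-driven archiving
          def __init__(self, media_manager, catalog):
              # catalog: database pointers, file_id -> device name
              self.media, self.catalog = media_manager, catalog
          def archive(self, query, slow_device):
              for file_id, location in self.catalog.items():
                  if query(file_id) and location != slow_device:
                      self.media.migrate(file_id, location, slow_device)
                      self.catalog[file_id] = slow_device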

  13. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise, or "snow". VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality, and it would be especially useful for tornado footage, tracking whirling objects and helping to determine a tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  14. Achieving an Optimal Medium Altitude UAV Force Balance in Support of COIN Operations

    DTIC Science & Technology

    2009-02-02

    and execute operations. UAS with common data links and remote video terminals (RVTs) provide input to the common operational picture (COP) and...full-motion video (FMV) is intuitive to many tactical warfighters who have used similar sensors in manned aircraft. Modern data links allow the video ...Document (AFDD) 2-9. Intelligence, Surveillance, and Reconnaissance Operations, 17 July 2007. Baldor, Lolita C. “Increased UAV reliance evident in

  15. Application Possibility of Smartphone as Payload for Photogrammetric Uav System

    NASA Astrophysics Data System (ADS)

    Yun, M. H.; Kim, J.; Seo, D.; Lee, J.; Choi, C.

    2012-07-01

    A smartphone can not only be operated in a 3G network environment anytime and anywhere, but also costs less than existing photogrammetric UAV payloads, since it provides high-resolution images and real-time 3D location and attitude data from a variety of built-in sensors. This study aims to assess the possibility of using a smartphone as a payload for a photogrammetric UAV system. Prior to such assessment, a smartphone-based photogrammetric UAV system application was developed, through which real-time image, location and attitude data were obtained using the smartphone under both static and dynamic conditions. Subsequently, the accuracy of the location and attitude data obtained and sent by this system was assessed. The smartphone images were converted into ortho-images through image triangulation, which was conducted with and without consideration of the interior orientation (IO) parameters determined by camera calibration. When IO parameters were taken into account in the static experiment, the triangulation results for every smartphone type were within 1.5 pixels (RMSE), an improvement of at least 35% compared to when IO parameters were not taken into account. In contrast, the improvement in triangulation accuracy gained by considering IO parameters in the dynamic experiment was not significant compared to the static experiment. This was due to the significant impact of vibration and sudden attitude changes of the UAV on the autofocus actuator of the camera built into the smartphone under dynamic conditions, which appears to have a negative impact on image-based DEM generation. Considering these findings, it is suggested that the smartphone is a feasible payload for a UAV system, and it is expected that smartphones may be loaded onto existing UAVs to play significant direct or indirect roles.

  16. Video-to-film color-image recorder.

    NASA Technical Reports Server (NTRS)

    Montuori, J. S.; Carnes, W. R.; Shim, I. H.

    1973-01-01

    A precision video-to-film recorder for use in image data processing systems, being developed for NASA, will convert three video input signals (red, blue, green) into a single full-color light beam for image recording on color film. Argon ion and krypton lasers are used to produce three spectral lines which are independently modulated by the appropriate video signals, combined into a single full-color light beam, and swept over the recording film in a raster format for image recording. A rotating multi-faceted spinner mounted on a translating carriage generates the raster, and an annotation head is used to record up to 512 alphanumeric characters in a designated area outside the image area.

  17. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects; it smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  18. Optical and acoustical UAV detection

    NASA Astrophysics Data System (ADS)

    Christnacher, Frank; Hengy, Sébastien; Laurenzis, Martin; Matwyschuk, Alexis; Naz, Pierre; Schertzer, Stéphane; Schmitt, Gwenael

    2016-10-01

    Recent world events have highlighted that the proliferation of UAVs is bringing with it a new and rapidly increasing threat to national defense and security agencies. Whilst many of the reported UAV incidents seem to indicate that there was no terrorist intent behind them, it is not unreasonable to assume that it may not be long before UAV platforms are regularly employed by terrorists or other criminal organizations. The flight characteristics of many of these mini- and micro-platforms present challenges for current systems, which have been optimized over time to defend against traditional air-breathing airborne platforms. Many programs to identify cost-effective measures for detection, classification, tracking and neutralization have begun in the recent past. In this paper, ISL shows how the performance of a UAV detection and tracking concept based on acousto-optical technology can be powerfully increased through active imaging.

  19. Novelty Detection Classifiers in Weed Mapping: Silybum marianum Detection on UAV Multispectral Images.

    PubMed

    Alexandridis, Thomas K; Tamouridou, Afroditi Alexandra; Pantazi, Xanthoula Eirini; Lagopodi, Anastasia L; Kashefi, Javid; Ovakoglou, Georgios; Polychronos, Vassilios; Moshou, Dimitrios

    2017-09-01

    In the present study, the detection and mapping of the weed Silybum marianum (L.) Gaertn. using novelty detection classifiers is reported. A multispectral camera (green-red-NIR) on board a fixed-wing unmanned aerial vehicle (UAV) was employed for obtaining high-resolution images. Four novelty detection classifiers were used to identify S. marianum among other vegetation in a field. The classifiers were the One-Class Support Vector Machine (OC-SVM), One-Class Self-Organizing Maps (OC-SOM), Autoencoders and One-Class Principal Component Analysis (OC-PCA). The three spectral bands and texture were used as input features to the novelty detection classifiers. S. marianum identification using OC-SVM reached an overall accuracy of 96%. The results show the feasibility of effective S. marianum mapping by means of novelty detection classifiers acting on multispectral UAV imagery.
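
    One of the four classifiers, the One-Class SVM, can be sketched with scikit-learn as below; the feature arrays are random placeholders standing in for the three spectral bands plus texture, and the hyperparameter values are illustrative.

      # Minimal One-Class SVM sketch on per-pixel features (placeholders).
      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(2)
      X_target = rng.normal(0.6, 0.05, size=(500, 4))   # S. marianum pixels
      X_scene = rng.normal(0.5, 0.15, size=(2000, 4))   # whole-field pixels

      oc_svm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_target)
      is_s_marianum = oc_svm.predict(X_scene) == 1      # +1 = weed class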

  20. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation

    PubMed Central

    Gonzalez, Luis F.; Montes, Glen A.; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J.

    2016-01-01

    Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification. PMID:26784196
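
    A single detection step of such a thermal pipeline might look like the sketch below (not the authors' code); it thresholds a calibrated 8-bit thermal frame and returns candidate bounding boxes that a downstream classifier and tracker would consume. The area threshold is an assumption.

      # Illustrative thermal-frame detection step: warm objects are segmented
      # by thresholding and returned as bounding boxes.
      import cv2

      def detect_warm_objects(thermal_frame_8bit, min_area=30):
          _, mask = cv2.threshold(thermal_frame_8bit, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          return [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) >= min_area]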

  1. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation.

    PubMed

    Gonzalez, Luis F; Montes, Glen A; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J

    2016-01-14

    Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.

  2. Uav Photogrammetry: Block Triangulation Comparisons

    NASA Astrophysics Data System (ADS)

    Gini, R.; Pagliari, D.; Passoni, D.; Pinto, L.; Sona, G.; Dosso, P.

    2013-08-01

    UAV systems represent a flexible technology able to collect a large amount of high-resolution information, both for metric and interpretation uses. In the frame of experimental tests carried out at Dept. ICA of Politecnico di Milano to validate vector-sensor systems and to assess the metric accuracy of images acquired by UAVs, a block of photos taken by a fixed-wing system was triangulated with several software packages. The test field is a rural area included in an Italian park ("Parco Adda Nord"), useful for studying flight and imagery performance on buildings, roads, and cultivated and uncultivated vegetation. The UAV SenseFly, equipped with a Canon Ixus 220HS camera, flew autonomously over the area at a height of 130 m, yielding a block of 49 images divided into 5 strips. Sixteen pre-signalized ground control points, surveyed in the area through GPS (NRTK survey), allowed the referencing of the block and accuracy analyses. Approximate values for the exterior orientation parameters (positions and attitudes) were recorded by the flight control system. The block was processed with several software packages: Erdas-LPS, EyeDEA (Univ. of Parma), Agisoft Photoscan and Pix4UAV, in assisted or automatic mode. Results are compared in terms of differences among digital surface models, differences in orientation parameters and accuracies, when available. Moreover, image and ground point coordinates obtained by the various software packages were independently used as initial values in a comparative adjustment made by scientific in-house software, which can apply constraints to evaluate the effectiveness of different methods of point extraction and the accuracies on ground check points.

  3. Three-dimensional estimates of tree canopies: Scaling from high-resolution UAV data to satellite observations

    NASA Astrophysics Data System (ADS)

    Sankey, T.; Donald, J.; McVay, J.

    2015-12-01

    High-resolution remote sensing images and datasets are typically acquired at large cost, which poses a big challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV, and we can now generate our own images with the instrument. The UAV has a unique capability to carry a large payload, including a hyperspectral sensor, which images the Earth's surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in three dimensions. Both sensors represent the newest available technology with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatment efforts. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3-dimensional estimates. The UAV image-derived variables are then correlated to the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.

  4. Bridge Crack Detection Using Multi-Rotary Uav and Object-Base Image Analysis

    NASA Astrophysics Data System (ADS)

    Rau, J. Y.; Hsiao, K. W.; Jhan, J. P.; Wang, S. H.; Fang, W. C.; Wang, J. L.

    2017-08-01

    Bridges are important infrastructure for human life; thus, bridge safety monitoring and maintenance is an important issue for the government. Conventionally, bridge inspection was conducted by in-situ human visual examination. This procedure sometimes requires an under-bridge inspection vehicle or climbing under the bridge personally; its cost and risk are therefore high, and it is labor intensive and time consuming. In particular, its documentation procedure is subjective and lacks 3D spatial information. In order to cope with these challenges, this paper proposes the use of a multi-rotary UAV equipped with a SONY A7r2 high-resolution digital camera, a 50 mm fixed focal length lens, and a 135-degree up-down rotating gimbal. The target bridge contains three spans with a total length of 60 meters, a width of 20 meters and a height of 8 meters above the water level. In the end, we took about 10,000 images, some of which were acquired hand-held from the ground using a pole 2-8 meters long. The images were processed by Agisoft PhotoscanPro to obtain exterior and interior orientation parameters. A local coordinate system was defined using 12 ground control points measured by a total station. After triangulation and camera self-calibration, the RMS of the control points is less than 3 cm. A 3D CAD model describing the bridge surface geometry was manually measured in PhotoscanPro; it is composed of planar polygons and is used for searching related UAV images. Additionally, a photorealistic 3D model can be produced for 3D visualization. In order to detect cracks on the bridge surface, we utilize object-based image analysis (OBIA) to segment the image into objects. Later, we derive several object features, such as density, area/bounding box ratio, length/width ratio, length, etc. Then, we can set up a classification rule set to distinguish cracks. Further, we apply semi-global matching (SGM) to obtain 3D crack information and based on image
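
    A simplified, rule-based filtering step in the spirit of the rule set described above might look like the following sketch; it is not the authors' OBIA implementation, and thresholds and feature choices are illustrative.

      # Candidate crack segments filtered by shape features such as
      # elongation (length/width ratio) and area/bounding-box ratio.
      import cv2

      def crack_candidates(gray, min_elongation=3.0, max_fill_ratio=0.5):
          edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                        cv2.THRESH_BINARY_INV, 35, 10)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          cracks = []
          for c in contours:
              x, y, w, h = cv2.boundingRect(c)
              area = cv2.contourArea(c)
              if area < 20 or w == 0 or h == 0:
                  continue
              elongation = max(w, h) / max(1, min(w, h))   # length/width ratio
              fill_ratio = area / float(w * h)             # area/bounding box
              if elongation >= min_elongation and fill_ratio <= max_fill_ratio:
                  cracks.append((x, y, w, h))
          return cracks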

  5. Augmented Reality Tool for the Situational Awareness Improvement of UAV Operators.

    PubMed

    Ruano, Susana; Cuevas, Carlos; Gallego, Guillermo; García, Narciso

    2017-02-06

    Unmanned Aerial Vehicles (UAVs) are being extensively used nowadays. Therefore, pilots of traditional aerial platforms should adapt their skills to operate them from a Ground Control Station (GCS). Common GCSs provide information in separate screens: one presents the video stream while the other displays information about the mission plan and information coming from other sensors. To avoid the burden of fusing information displayed in the two screens, an Augmented Reality (AR) tool is proposed in this paper. The AR system has two functionalities for Medium-Altitude Long-Endurance (MALE) UAVs: route orientation and target identification. Route orientation allows the operator to identify the upcoming waypoints and the path that the UAV is going to follow. Target identification allows a fast target localization, even in the presence of occlusions. The AR tool is implemented following the North Atlantic Treaty Organization (NATO) standards so that it can be used in different GCSs. The experiments show how the AR tool improves significantly the situational awareness of the UAV operators.

  6. Video library for video imaging detection at intersection stop lines.

    DOT National Transportation Integrated Search

    2010-04-01

    The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...

  7. Accuracy Investigation of Creating Orthophotomaps Based on Images Obtained by Applying Trimble-UX5 UAV

    NASA Astrophysics Data System (ADS)

    Hlotov, Volodymyr; Hunina, Alla; Siejka, Zbigniew

    2017-06-01

    The main purpose of this work is to confirm the possibility of making large-scale orthophotomaps using the Trimble UX5 unmanned aerial vehicle (UAV). A planned altitude reference of the study area was established prior to the aerial survey. The study area was marked with distinctive checkpoints in the form of triangles (0.5 × 0.5 × 0.2 m), and the checkpoints used to assess the accuracy of the orthophotomap were marked with similar triangles. GNSS measurements in real-time kinematic (RTK) mode were applied to determine the coordinates of the marked reference points and checkpoints. The aerial survey was planned with the installed Trimble Access Aerial Imaging software, which was also used to operate the UX5. The aerial survey with the Trimble UX5 UAV was carried out using the SONY NEX-5R digital camera from altitudes of 200 m and 300 m. The aerial survey data were processed using the photogrammetric software Pix4D, with which the orthophotomap of the surveyed objects was produced. To determine the accuracy of the aerial survey results, the checkpoint coordinates were read from the orthophotomap and the mean square error was calculated against the coordinates determined by GNSS measurements. A-priori accuracy estimates of the spatial coordinates of the study area derived from the aerial survey data were calculated: mx=0.11 m, my=0.15 m, mz=0.23 m in the village of Remeniv and mx=0.26 m, my=0.38 m, mz=0.43 m in the town of Vynnyky. The accuracy of determining checkpoint coordinates was investigated using the images obtained from the UAV and the mean square error of the reference points. Based on a comparative analysis of the obtained accuracy estimates for the produced orthophotomap, it can be concluded that the mean square error does not exceed the a-priori accuracy estimate. The possibility of applying Trimble UX5 UAV for making large

  8. Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing

    NASA Astrophysics Data System (ADS)

    Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.

    2016-06-01

    In recent years, many studies have revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g. including façades and building footprints). The data were usually acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images, obtained by a UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than generally used devices), for the 3D reconstruction of a historical building and the realization of a high-level-of-detail architectural survey. The critical issues of acquisition from a common UAV (flight planning strategies, ground control points, check point distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in emergencies or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in commercial and open source software were tested. The achieved results are analysed and the discrepancies from reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).

  9. Comparison of UAV and WorldView-2 imagery for mapping leaf area index of mangrove forest

    NASA Astrophysics Data System (ADS)

    Tian, Jinyan; Wang, Le; Li, Xiaojuan; Gong, Huili; Shi, Chen; Zhong, Ruofei; Liu, Xiaomeng

    2017-09-01

    Unmanned Aerial Vehicle (UAV) remote sensing has opened the door to new sources of data to effectively characterize vegetation metrics at very high spatial resolution and at flexible revisit frequencies. Successful estimation of the leaf area index (LAI) in precision agriculture with UAV imagery has been reported in several studies. However, in most forests, the challenges associated with interference from a complex background and a variety of vegetation species have hindered research using UAV images. To the best of our knowledge, very few studies have mapped forest LAI with a UAV image. In addition, the drawbacks and advantages of estimating forest LAI with UAV and satellite images at high spatial resolution remain a knowledge gap in the existing literature. Therefore, this paper aims to map LAI in a mangrove forest with a complex background and a variety of vegetation species using a UAV image and to compare it with a WorldView-2 image (WV2). In this study, three representative NDVIs, average NDVI (AvNDVI), vegetated specific NDVI (VsNDVI), and scaled NDVI (ScNDVI), were acquired with UAV and WV2 to predict the plot-level (10 × 10 m) LAI. The results showed that AvNDVI achieved the highest accuracy for WV2 (R2 = 0.778, RMSE = 0.424), whereas ScNDVI obtained the optimal accuracy for UAV (R2 = 0.817, RMSE = 0.423). In addition, an overall comparison of the WV2- and UAV-derived LAIs indicated that UAV obtained better accuracy than WV2 in plots covered with homogeneous mangrove species or in low-LAI plots, because UAV can effectively eliminate the influence of the background and of the vegetation species owing to its high spatial resolution. However, WV2 obtained slightly higher accuracy than UAV in plots covered with a variety of mangrove species, because the UAV sensor provides a less favorable spectral response function (SRF) than WV2 for mangrove LAI estimation.
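
    The NDVI-to-LAI regression workflow can be illustrated with a short sketch: compute an average NDVI per 10 × 10 m plot and fit a linear model against field-measured LAI. The plot values below are placeholders, not the study's data.

      # Sketch of an NDVI-based LAI regression (placeholder values).
      import numpy as np

      def ndvi(nir, red):
          return (nir - red) / (nir + red + 1e-9)

      plot_avg_ndvi = np.array([0.42, 0.55, 0.61, 0.70, 0.76])  # AvNDVI per plot
      field_lai = np.array([1.1, 1.9, 2.4, 3.0, 3.5])           # measured LAI

      slope, intercept = np.polyfit(plot_avg_ndvi, field_lai, deg=1)
      predicted_lai = slope * plot_avg_ndvi + intercept
      rmse = np.sqrt(np.mean((predicted_lai - field_lai) ** 2))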

  10. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1975-01-01

    A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.

  11. Path planning and Ground Control Station simulator for UAV

    NASA Astrophysics Data System (ADS)

    Ajami, A.; Balmat, J.; Gauthier, J.-P.; Maillot, T.

    In this paper we present a Universal and Interoperable Ground Control Station (UIGCS) simulator for fixed and rotary wing Unmanned Aerial Vehicles (UAVs), and all types of payloads. One of the major constraints is to operate and manage multiple legacy and future UAVs, taking into account compliance with the NATO Combined/Joint Services Operational Environment (STANAG 4586). Another purpose of the station is to assign the UAV a certain degree of autonomy, via autonomous planning/replanning strategies. The paper is organized as follows. In Section 2, we describe the non-linear models of the fixed and rotary wing UAVs that we use in the simulator. In Section 3, we describe the simulator architecture, which is based upon interacting modules programmed independently. This simulator is linked with an open source flight simulator to simulate the video flow and the moving target in 3D. To conclude this part, we briefly tackle the problem of connecting the Matlab/Simulink software (used to model the UAV's dynamics) with the simulation of the virtual environment. Section 5 deals with the control module of a flight path of the UAV. The control system is divided into four distinct hierarchical layers: flight path, navigation controller, autopilot and flight control surfaces controller. In Section 6, we focus on the trajectory planning/replanning question for a fixed wing UAV. Indeed, one of the goals of this work is to increase the autonomy of the UAV. We propose two types of algorithms, based upon 1) the tangent method and 2) an original Lyapunov-type method. These algorithms allow the UAV either to join a fixed pattern or to track a moving target. Finally, Section 7 presents simulation results obtained on our simulator, concerning a rather complicated mission scenario.
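
    For readers unfamiliar with target-tracking guidance, the following minimal kinematic sketch steers a constant-speed fixed-wing model toward a moving target by driving the bearing error to zero. It is a generic proportional heading law for illustration only, not the paper's tangent or Lyapunov-type planner; the speed, gain and target trajectory are arbitrary assumptions.

```python
# Generic heading-guidance sketch (not the paper's algorithm); all values assumed.
import math

def step(x, y, psi, target_xy, v=20.0, k=1.2, dt=0.1, max_turn=math.radians(25)):
    """One integration step of a constant-speed UAV with a bounded turn rate."""
    tx, ty = target_xy
    desired = math.atan2(ty - y, tx - x)                  # bearing to target
    err = math.atan2(math.sin(desired - psi), math.cos(desired - psi))
    turn = max(-max_turn, min(max_turn, k * err))         # saturated turn rate
    psi += turn * dt
    x += v * math.cos(psi) * dt
    y += v * math.sin(psi) * dt
    return x, y, psi

x, y, psi = 0.0, 0.0, 0.0
for t in range(600):                                      # 60 s simulated flight
    target = (500.0 + 5.0 * t * 0.1, 200.0)               # target moving east at 5 m/s
    x, y, psi = step(x, y, psi, target)
print(f"final position ({x:.1f}, {y:.1f}), heading {math.degrees(psi):.1f} deg")
```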

  12. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  14. Autonomous Control of a Quadrotor UAV Using Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Sureshkumar, Vijaykumar

    UAVs are being used more today than ever before in both military and civil applications. They are heavily preferred in "dull, dirty or dangerous" mission scenarios. Increasingly, UAVs of all kinds are being used in policing, fire-fighting, inspection of structures, pipelines, etc. Recently, the FAA gave its permission for UAVs to be used on film sets for motion capture and high definition video recording. The rapid development in MEMS and actuator technology has made possible a plethora of UAVs that are suited for commercial applications in an increasingly cost-effective manner. An emerging popular rotary wing UAV platform is the Quadrotor. A Quadrotor is a helicopter with four rotors, which make it more stable but more complex to model and control. Characteristics that provide a clear advantage over fixed wing UAVs are VTOL and hovering capabilities as well as greater maneuverability. It is also simple in construction and design compared to a scaled single rotorcraft. Flying such UAVs using a traditional radio transmitter-receiver setup can be a daunting task, especially in high stress situations. In order to make such platforms widely applicable, a certain level of autonomy is imperative to the future of such UAVs. This thesis presents a methodology for the autonomous control of a Quadrotor UAV using Fuzzy Logic. Fuzzy logic control has been chosen over conventional control methods as it can deal effectively with highly nonlinear systems, allows for imprecise data and is extremely modular. Modularity and adaptability are the key cornerstones of FLC. The objective of this thesis is to present the steps of designing, building and simulating an intelligent flight control module for a Quadrotor UAV. In the course of this research effort, a Quadrotor UAV is indigenously developed utilizing the resources of an online open source project called Aeroquad. System design is comprehensively dealt with. A math model for the Quadrotor is developed and a
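
    As an illustration of the kind of fuzzy logic control discussed above, the sketch below implements a minimal Mamdani-style controller for a single quadrotor axis (altitude error to thrust correction), with triangular membership functions, a small rule table and centroid defuzzification. The membership functions, rules and ranges are invented for the example and are not the thesis's Aeroquad implementation.

```python
# Minimal Mamdani fuzzy controller sketch; all sets, rules and gains are assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_thrust(err, derr):
    """err = desired - actual altitude [m], derr = its rate [m/s]."""
    # Fuzzify inputs into negative / zero / positive sets.
    e = {"N": tri(err, -4, -2, 0), "Z": tri(err, -1, 0, 1), "P": tri(err, 0, 2, 4)}
    d = {"N": tri(derr, -2, -1, 0), "Z": tri(derr, -0.5, 0, 0.5), "P": tri(derr, 0, 1, 2)}
    # Output universe (normalized thrust correction) and its fuzzy sets.
    u = np.linspace(-1, 1, 201)
    out = {"DOWN": tri(u, -1, -0.6, -0.2), "HOLD": tri(u, -0.3, 0, 0.3), "UP": tri(u, 0.2, 0.6, 1)}
    # Small rule base: (error, error-rate) -> output set, combined by max-min inference.
    rules = [("N", "N", "DOWN"), ("N", "Z", "DOWN"), ("Z", "N", "DOWN"),
             ("Z", "Z", "HOLD"), ("Z", "P", "UP"), ("P", "Z", "UP"), ("P", "P", "UP")]
    agg = np.zeros_like(u)
    for ei, di, oi in rules:
        agg = np.maximum(agg, np.minimum(min(e[ei], d[di]), out[oi]))
    # Centroid defuzzification.
    return float(np.sum(u * agg) / (np.sum(agg) + 1e-9))

print(fuzzy_thrust(err=1.5, derr=-0.2))   # positive output -> increase thrust
```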

  15. a Three-Dimensional Simulation and Visualization System for Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Qu, Y.; Cui, T.

    2017-08-01

    Nowadays UAVs have been widely used for large-scale surveying and mapping. Compared with manned aircraft, UAVs are more cost-effective and responsive. However, UAVs are usually more sensitive to wind conditions, which greatly influence their positions and orientations. The flight height of a UAV is relatively low, and the relief of the terrain may result in serious occlusions. Moreover, the observations acquired by the Position and Orientation System (POS) are usually less accurate than those acquired in manned aerial photogrammetry. All of these factors introduce uncertainties into UAV photogrammetry. To investigate these uncertainties, a three-dimensional simulation and visualization system has been developed. The system is demonstrated with flight plan evaluation, image matching, POS-supported direct georeferencing, and ortho-mosaicking. Experimental results show that the presented system is effective for flight plan evaluation. The generated image pairs are accurate and false matches can be effectively filtered. The presented system dynamically visualizes the results of direct georeferencing in three dimensions, which is informative and effective for real-time target tracking and positioning. The dynamically generated orthomosaic can be used in emergency applications. The presented system has also been used for teaching the theory and applications of UAV photogrammetry.

  16. a Comparison of Uav and Tls Data for Soil Roughness Assessment

    NASA Astrophysics Data System (ADS)

    Milenković, M.; Karel, W.; Ressl, C.; Pfeifer, N.

    2016-06-01

    Soil roughness represents fine-scale surface geometry which figures in many geophysical models. While static photogrammetric techniques (terrestrial images and laser scanning) have recently been proposed as a new source for deriving roughness heights, there is still a need to overcome acquisition scale and viewing geometry issues. By contrast to the static techniques, images taken from unmanned aerial vehicles (UAV) can maintain near-nadir looking geometry over scales of several agricultural fields. This paper presents a pilot study on high-resolution soil roughness reconstruction and assessment from UAV images over an agricultural plot. As a reference method, terrestrial laser scanning (TLS) was applied on a 10 m x 1.5 m subplot. The UAV images were self-calibrated and oriented within a bundle adjustment, and processed further up to a dense-matched digital surface model (DSM). The analysis of the UAV- and TLS-DSMs was performed in the spatial domain, based on the surface autocorrelation function and the correlation length, and in the frequency domain, based on the roughness spectrum and the surface fractal dimension (spectral slope). The TLS- and UAV-DSM differences were found to be under ±1 cm, while the UAV DSM showed a systematic pattern below this scale, which was explained by weakly tied sub-blocks of the bundle block. The results also confirmed that the existing TLS method leads to roughness assessment up to 5 mm resolution. However, for our UAV data this was not possible to achieve, though it was shown that for spatial scales of 12 cm and larger both methods appear to be usable. Additionally, this paper suggests a method to propagate measurement errors to the correlation length.
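
    The spatial-domain roughness measures mentioned above can be illustrated with a short sketch that computes the empirical autocorrelation function of a detrended height profile and reads off a correlation length at the 1/e threshold. The synthetic profile, the 5 mm spacing and the threshold choice are assumptions for illustration, not the paper's processing chain.

```python
# Hedged sketch: autocorrelation function and 1/e correlation length of a DSM profile.
import numpy as np

def autocorrelation(profile):
    """Normalized autocorrelation of a detrended height profile (non-negative lags)."""
    z = profile - np.mean(profile)
    acf = np.correlate(z, z, mode="full")[len(z) - 1:]
    return acf / acf[0]

def correlation_length(profile, spacing):
    """Lag (same unit as spacing) where the ACF first drops below 1/e."""
    acf = autocorrelation(profile)
    below = np.where(acf < 1.0 / np.e)[0]
    return below[0] * spacing if below.size else np.nan

# Synthetic 10 m profile sampled every 5 mm (hypothetical numbers).
rng = np.random.default_rng(1)
x = np.arange(0, 10, 0.005)
profile = 0.01 * np.sin(2 * np.pi * x / 0.5) + 0.002 * rng.standard_normal(x.size)
print("correlation length [m]:", round(correlation_length(profile, 0.005), 3))
```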

  17. Smart Cruise Control: UAV sensor operator intent estimation and its application

    NASA Astrophysics Data System (ADS)

    Cheng, Hui; Butler, Darren; Kumar, Rakesh

    2006-05-01

    Due to their long endurance, superior mobility and the low risk posed to the pilot and sensor operator, UAVs have become the preferred platform for persistent ISR missions. However, currently most UAV-based ISR missions are conducted through manual operation. Even the simplest tasks, such as vehicle tracking, route reconnaissance and site monitoring, need the sensor operator's undivided attention and constant adjustment of the sensor control. The lack of autonomous behaviour greatly limits the effectiveness and the capability of UAV-based ISR, especially the use of a large number of UAVs simultaneously. Although a fully autonomous UAV-based ISR system is desirable, it is still a distant dream due to the complexity and diversity of combat and ISR missions. In this paper, we propose a Smart Cruise Control system that can learn a UAV sensor operator's intent and use it to complete tasks automatically, such as route reconnaissance and site monitoring. Using an operator attention model, the proposed system can estimate the operator's intent from how they control the sensor (e.g. camera) and the content of the imagery that is acquired. Therefore, for example, from initially manually controlling the UAV sensor to follow a road, the system can learn in real time not only the preferred operation, "tracking", but also the road appearance, "what to track". Then, the learnt models of both the road and the desired operation can be used to complete the task automatically. We have demonstrated the Smart Cruise Control system using real UAV videos where roads need to be tracked and buildings need to be monitored.

  18. Ultrasound Imaging System Video

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In this video, astronaut Peggy Whitson uses the Human Research Facility (HRF) Ultrasound Imaging System in the Destiny Laboratory of the International Space Station (ISS) to image her own heart. The Ultrasound Imaging System provides three-dimensional image enlargement of the heart and other organs, muscles, and blood vessels. It is capable of high-resolution imaging in a wide range of applications, both research and diagnostic, such as echocardiography (ultrasound of the heart), abdominal, vascular, gynecological, muscle, tendon, and transcranial ultrasound.

  19. Cross Validation on the Equality of Uav-Based and Contour-Based Dems

    NASA Astrophysics Data System (ADS)

    Ma, R.; Xu, Z.; Wu, L.; Liu, S.

    2018-04-01

    Unmanned Aerial Vehicles (UAV) have been widely used for Digital Elevation Model (DEM) generation in geographic applications. This paper proposes a novel framework for generating DEMs from UAV images. It starts with the generation of point clouds by image matching, where the flight control data are used as a reference for searching for the corresponding images, leading to a significant time saving. Besides, a set of ground control points (GCP) obtained from field surveying is used to transform the point clouds to the user's coordinate system. Following that, we use a multi-feature based supervised classification method for discriminating non-ground points from ground ones. In the end, we generate the DEM by constructing triangular irregular networks and rasterization. The experiments are conducted in the east of Jilin province in China, which has suffered from soil erosion for several years. The quality of the UAV-based DEM (UAV-DEM) is compared with that generated from contour interpolation (Contour-DEM). The comparison shows the higher resolution, as well as higher accuracy, of UAV-DEMs, which contain more geographic information. In addition, the RMSEs of the UAV-DEMs generated from point clouds with and without GCPs are ±0.5 m and ±20 m, respectively.
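
    A minimal sketch of the final rasterization step, interpolating classified ground points onto a regular DEM grid (here with SciPy's linear, TIN-like interpolation), is given below. The synthetic point cloud, grid extent and 1 m cell size are assumptions, not the paper's production parameters.

```python
# Hedged sketch: ground points -> gridded DEM via linear (TIN-like) interpolation.
import numpy as np
from scipy.interpolate import griddata

def points_to_dem(xyz, cell=1.0):
    """xyz: (N, 3) ground points in the user's coordinate system; returns DEM grid."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    xi = np.arange(x.min(), x.max(), cell)
    yi = np.arange(y.min(), y.max(), cell)
    gx, gy = np.meshgrid(xi, yi)
    dem = griddata((x, y), z, (gx, gy), method="linear")   # NaN outside the convex hull
    return dem, xi, yi

# Synthetic gently sloping terrain with noise (hypothetical).
rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, (5000, 2))
elev = 200 + 0.05 * pts[:, 0] + 0.2 * rng.standard_normal(5000)
dem, xi, yi = points_to_dem(np.column_stack([pts, elev]))
print("DEM shape:", dem.shape)
```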

  20. Augmented Reality Tool for the Situational Awareness Improvement of UAV Operators

    PubMed Central

    Ruano, Susana; Cuevas, Carlos; Gallego, Guillermo; García, Narciso

    2017-01-01

    Unmanned Aerial Vehicles (UAVs) are being extensively used nowadays. Therefore, pilots of traditional aerial platforms should adapt their skills to operate them from a Ground Control Station (GCS). Common GCSs provide information on separate screens: one presents the video stream while the other displays information about the mission plan and data coming from other sensors. To avoid the burden of fusing information displayed on the two screens, an Augmented Reality (AR) tool is proposed in this paper. The AR system has two functionalities for Medium-Altitude Long-Endurance (MALE) UAVs: route orientation and target identification. Route orientation allows the operator to identify the upcoming waypoints and the path that the UAV is going to follow. Target identification allows fast target localization, even in the presence of occlusions. The AR tool is implemented following North Atlantic Treaty Organization (NATO) standards so that it can be used in different GCSs. The experiments show that the AR tool significantly improves the situational awareness of UAV operators. PMID:28178189

  1. Positional quality assessment of orthophotos obtained from sensors onboard multi-rotor UAV platforms.

    PubMed

    Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer

    2014-11-26

    In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart.

  2. Building Damage Extraction Triggered by Earthquake Using the Uav Imagery

    NASA Astrophysics Data System (ADS)

    Li, S.; Tang, H.

    2018-04-01

    When extracting building damage information, we can only determine whether a building has collapsed using post-earthquake satellite images. Even when the satellite images have sub-meter resolution, the identification of slightly damaged buildings is still a challenge. As complementary data to satellite images, UAV images have unique advantages, such as stronger flexibility and higher resolution. In this paper, according to the spectral features of the UAV images and the morphological features of the reconstructed point clouds, building damage is classified into four levels: basically intact buildings, slightly damaged buildings, partially collapsed buildings and totally collapsed buildings, and rules for the damage grades are given. In particular, slightly damaged buildings are identified using detected roof holes. In order to verify the approach, we conduct experimental simulations for the Wenchuan and Ya'an earthquakes. By analyzing the post-earthquake UAV images of the two earthquakes, the building damage is classified into four levels, and quantitative statistics of the damaged buildings are given in the experiments.

  3. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    NASA Astrophysics Data System (ADS)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2015-06-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  4. Self-Image--Alien Image: A Bilateral Video Project.

    ERIC Educational Resources Information Center

    Kracsay, Susanne

    1995-01-01

    Describes a project in which Austrian and Hungarian students learned how people see each other by creating video pictures and letters of their neighbors (alien images) that were returned with corrections (self-images). Discussion includes student critiques, impressions, and misconceptions. (AEF)

  5. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  6. Aerial Images from AN Uav System: 3d Modeling and Tree Species Classification in a Park Area

    NASA Astrophysics Data System (ADS)

    Gini, R.; Passoni, D.; Pinto, L.; Sona, G.

    2012-07-01

    The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment): it starts from the need to enhance the natural, artistic and cultural heritage, to produce better usability of it by employing audiovisual movable systems of 3D reconstruction, and to improve monitoring procedures by using new media for integrating the fruition phase with the preservation one. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of goods (small buildings, agricultural fields and different tree species and bushes). Multispectral high-resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Then, some tests were performed in order to analyze the UAV images' quality for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed through GPS to allow accuracy analysis. Aerial Triangulations (ATs) were carried out with the photogrammetric commercial software Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of Tie Points, to pick out the pros and cons of each package in managing non-conventional aerial imagery, as well as the differences in the modeling approach. Further analyses were done on the differences between the EO parameters and the corresponding data coming from the on-board UAV navigation system.

  7. Optimal trajectory planning for a UAV glider using atmospheric thermals

    NASA Astrophysics Data System (ADS)

    Kagabo, Wilson B.

    An Unmanned Aerial Vehicle Glider (UAV glider) uses atmospheric energy in its different forms to remain aloft for extended flight durations. This UAV glider's aim is to extract atmospheric thermal energy and use it to supplement its battery energy usage and increase the mission period. Given an atmospheric thermal of known strength and location identified by an infrared camera; the current wind speed and direction; the current battery level; the altitude and location of the UAV glider; and an estimate of the expected altitude gain from the thermal, is it possible to make an energy-efficiency-based decision to fly to an atmospheric thermal so as to achieve extended UAV glider flight time? For this work, an infrared thermal camera aboard the UAV glider takes continuous forward-looking ground images of "hot spots". Through image processing, a candidate atmospheric thermal's strength and location are estimated. An Intelligent Decision Model incorporates this information with the current UAV glider status and weather conditions to provide an energy-based recommendation to modify the flight path of the UAV glider. Research, development, and simulation of the Intelligent Decision Model is the primary focus of this work. Three models are developed: (1) a Battery Usage Model, (2) an Intelligent Decision Model, and (3) an Altitude Gain Model. The Battery Usage Model is derived from the candidate flight trajectory, the wind speed and direction, and the aircraft dynamic model. The Intelligent Decision Model uses a fuzzy logic based approach. The Altitude Gain Model requires the strength and size of the thermal and is found a priori.

  8. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    NASA Astrophysics Data System (ADS)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images have the disadvantages of low contrast, complex texture and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. According to Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. Then, using L0-norm sparse priors on the gradient and the dark channel to estimate the APSF blur kernel, the fast Fourier transform is used to recover the original clear image by Wiener filtering. By comparison with other state-of-the-art methods, the proposed method can correctly estimate the blur kernel, effectively remove the atmospheric degradation phenomena, preserve image detail information and increase the quality evaluation indexes.
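
    The Wiener-filtering recovery step can be sketched as follows, assuming the blur kernel is already known; a synthetic Gaussian kernel stands in for the estimated APSF, and the noise-to-signal ratio and test image are arbitrary assumptions. The L0-prior kernel estimation itself is not reproduced here.

```python
# Hedged sketch: Fourier-domain Wiener deconvolution with a known (synthetic) kernel.
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=0.01):
    """Restore an image given the blur kernel and a noise-to-signal ratio."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    W = np.conj(K) / (np.abs(K) ** 2 + nsr)          # Wiener filter
    return np.real(np.fft.ifft2(W * B))

def gaussian_kernel(size=15, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

# Blur a synthetic scene, then recover it.
rng = np.random.default_rng(3)
scene = rng.uniform(0, 1, (256, 256))
kernel = gaussian_kernel()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(kernel, s=scene.shape)))
restored = wiener_deconvolve(blurred, kernel)
print("restoration RMSE:", round(float(np.sqrt(np.mean((restored - scene) ** 2))), 4))
```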

  9. UAV Monitoring for Enviromental Management in Galapagos Islands

    NASA Astrophysics Data System (ADS)

    Ballari, D.; Orellana, D.; Acosta, E.; Espinoza, A.; Morocho, V.

    2016-06-01

    In the Galapagos Islands, where 97% of the territory is protected and ecosystem dynamics are highly vulnerable, timely and accurate information is key for decision making. An appropriate monitoring system must meet two key features: on the one hand, being able to capture information on a systematic and regular basis, and on the other hand, quickly gathering information on demand for specific purposes. The lack of such a system for geographic information limits the ability of the Galapagos Islands' institutions to evaluate and act upon environmental threats such as invasive species spread and vegetation degradation. In this context, the use of UAVs (unmanned aerial vehicles) for capturing georeferenced images is a promising technology for environmental monitoring and management. This paper explores the potential of UAV images for monitoring degradation of littoral vegetation in Puerto Villamil (Isabela Island, Galapagos, Ecuador). Imagery was captured using two camera types: Red Green Blue (RGB) and Infrared Red Green (NIR). First, vegetation presence was identified through NDVI. Second, object-based classification was carried out to characterize vegetation vigor. The results demonstrate the feasibility of UAV technology for baseline studies and monitoring of the amount and vigorousness of littoral vegetation in the Galapagos Islands. It is also shown that UAV images are useful not only for visual interpretation and object delineation, but also for producing timely thematic information for environmental management.

  10. Unmanned aerial vehicles (UAVs) for surveying marine fauna: a dugong case study.

    PubMed

    Hodgson, Amanda; Kelly, Natalie; Peel, David

    2013-01-01

    Aerial surveys of marine mammals are routinely conducted to assess and monitor species' habitat use and population status. In Australia, dugongs (Dugong dugon) are regularly surveyed and long-term datasets have formed the basis for defining habitat of high conservation value and risk assessments of human impacts. Unmanned aerial vehicles (UAVs) may facilitate more accurate, human-risk free, and cheaper aerial surveys. We undertook the first Australian UAV survey trial in Shark Bay, western Australia. We conducted seven flights of the ScanEagle UAV, mounted with a digital SLR camera payload. During each flight, ten transects covering a 1.3 km(2) area frequently used by dugongs, were flown at 500, 750 and 1000 ft. Image (photograph) capture was controlled via the Ground Control Station and the capture rate was scheduled to achieve a prescribed 10% overlap between images along transect lines. Images were manually reviewed post hoc for animals and scored according to sun glitter, Beaufort Sea state and turbidity. We captured 6243 images, 627 containing dugongs. We also identified whales, dolphins, turtles and a range of other fauna. Of all possible dugong sightings, 95% (CI = 90%, 98%) were subjectively classed as 'certain' (unmistakably dugongs). Neither our dugong sighting rate, nor our ability to identify dugongs with certainty, were affected by UAV altitude. Turbidity was the only environmental variable significantly affecting the dugong sighting rate. Our results suggest that UAV systems may not be limited by sea state conditions in the same manner as sightings from manned surveys. The overlap between images proved valuable for detecting animals that were masked by sun glitter in the corners of images, and identifying animals initially captured at awkward body angles. This initial trial of a basic camera system has successfully demonstrated that the ScanEagle UAV has great potential as a tool for marine mammal aerial surveys.

  11. Unmanned Aerial Vehicles (UAVs) for Surveying Marine Fauna: A Dugong Case Study

    PubMed Central

    Hodgson, Amanda; Kelly, Natalie; Peel, David

    2013-01-01

    Aerial surveys of marine mammals are routinely conducted to assess and monitor species’ habitat use and population status. In Australia, dugongs (Dugong dugon) are regularly surveyed and long-term datasets have formed the basis for defining habitat of high conservation value and risk assessments of human impacts. Unmanned aerial vehicles (UAVs) may facilitate more accurate, human-risk free, and cheaper aerial surveys. We undertook the first Australian UAV survey trial in Shark Bay, western Australia. We conducted seven flights of the ScanEagle UAV, mounted with a digital SLR camera payload. During each flight, ten transects covering a 1.3 km2 area frequently used by dugongs, were flown at 500, 750 and 1000 ft. Image (photograph) capture was controlled via the Ground Control Station and the capture rate was scheduled to achieve a prescribed 10% overlap between images along transect lines. Images were manually reviewed post hoc for animals and scored according to sun glitter, Beaufort Sea state and turbidity. We captured 6243 images, 627 containing dugongs. We also identified whales, dolphins, turtles and a range of other fauna. Of all possible dugong sightings, 95% (CI = 90%, 98%) were subjectively classed as ‘certain’ (unmistakably dugongs). Neither our dugong sighting rate, nor our ability to identify dugongs with certainty, were affected by UAV altitude. Turbidity was the only environmental variable significantly affecting the dugong sighting rate. Our results suggest that UAV systems may not be limited by sea state conditions in the same manner as sightings from manned surveys. The overlap between images proved valuable for detecting animals that were masked by sun glitter in the corners of images, and identifying animals initially captured at awkward body angles. This initial trial of a basic camera system has successfully demonstrated that the ScanEagle UAV has great potential as a tool for marine mammal aerial surveys. PMID:24223967

  12. UAV-Based Thermal Imaging for High-Throughput Field Phenotyping of Black Poplar Response to Drought

    PubMed Central

    Ludovisi, Riccardo; Tauro, Flavia; Salvati, Riccardo; Khoury, Sacha; Mugnozza Scarascia, Giuseppe; Harfouche, Antoine

    2017-01-01

    Poplars are fast-growing, high-yielding forest tree species, whose cultivation as second-generation biofuel crops is of increasing interest and can efficiently meet emission reduction goals. Yet, breeding elite poplar trees for drought resistance remains a major challenge. Worldwide breeding programs are largely focused on intra/interspecific hybridization, whereby Populus nigra L. is a fundamental parental pool. While high-throughput genotyping has resulted in unprecedented capabilities to rapidly decode complex genetic architecture of plant stress resistance, linking genomics to phenomics is hindered by technically challenging phenotyping. Relying on unmanned aerial vehicle (UAV)-based remote sensing and imaging techniques, high-throughput field phenotyping (HTFP) aims at enabling highly precise and efficient, non-destructive screening of genotype performance in large populations. To efficiently support forest-tree breeding programs, ground-truthing observations should be complemented with standardized HTFP. In this study, we develop a high-resolution (leaf level) HTFP approach to investigate the response to drought of a full-sib F2 partially inbred population (termed here ‘POP6’), whose F1 was obtained from an intraspecific P. nigra controlled cross between genotypes with highly divergent phenotypes. We assessed the effects of two water treatments (well-watered and moderate drought) on a population of 4603 trees (503 genotypes) hosted in two adjacent experimental plots (1.67 ha) by conducting low-elevation (25 m) flights with an aerial drone and capturing 7836 thermal infrared (TIR) images. TIR images were undistorted, georeferenced, and orthorectified to obtain radiometric mosaics. Canopy temperature (Tc) was extracted using two independent semi-automated segmentation techniques, eCognition- and Matlab-based, to avoid the mixed-pixel problem. Overall, results showed that the UAV platform-based thermal imaging enables to effectively assess genotype

  13. UAV-Based Thermal Imaging for High-Throughput Field Phenotyping of Black Poplar Response to Drought.

    PubMed

    Ludovisi, Riccardo; Tauro, Flavia; Salvati, Riccardo; Khoury, Sacha; Mugnozza Scarascia, Giuseppe; Harfouche, Antoine

    2017-01-01

    Poplars are fast-growing, high-yielding forest tree species, whose cultivation as second-generation biofuel crops is of increasing interest and can efficiently meet emission reduction goals. Yet, breeding elite poplar trees for drought resistance remains a major challenge. Worldwide breeding programs are largely focused on intra/interspecific hybridization, whereby Populus nigra L. is a fundamental parental pool. While high-throughput genotyping has resulted in unprecedented capabilities to rapidly decode complex genetic architecture of plant stress resistance, linking genomics to phenomics is hindered by technically challenging phenotyping. Relying on unmanned aerial vehicle (UAV)-based remote sensing and imaging techniques, high-throughput field phenotyping (HTFP) aims at enabling highly precise and efficient, non-destructive screening of genotype performance in large populations. To efficiently support forest-tree breeding programs, ground-truthing observations should be complemented with standardized HTFP. In this study, we develop a high-resolution (leaf level) HTFP approach to investigate the response to drought of a full-sib F 2 partially inbred population (termed here 'POP6'), whose F 1 was obtained from an intraspecific P. nigra controlled cross between genotypes with highly divergent phenotypes. We assessed the effects of two water treatments (well-watered and moderate drought) on a population of 4603 trees (503 genotypes) hosted in two adjacent experimental plots (1.67 ha) by conducting low-elevation (25 m) flights with an aerial drone and capturing 7836 thermal infrared (TIR) images. TIR images were undistorted, georeferenced, and orthorectified to obtain radiometric mosaics. Canopy temperature ( T c ) was extracted using two independent semi-automated segmentation techniques, eCognition- and Matlab-based, to avoid the mixed-pixel problem. Overall, results showed that the UAV platform-based thermal imaging enables to effectively assess genotype

  14. The Practical Application of Uav-Based Photogrammetry Under Economic Aspects

    NASA Astrophysics Data System (ADS)

    Sauerbier, M.; Siegrist, E.; Eisenbeiss, H.; Demir, N.

    2011-09-01

    Nowadays, small-size UAVs (Unmanned Aerial Vehicles) have reached a level of practical reliability and functionality that enables this technology to enter the geomatics market as an additional platform for spatial data acquisition. Though one could imagine a wide variety of interesting sensors to be mounted on such a device, here we will focus on photogrammetric applications using digital cameras. In practice, UAV-based photogrammetry will only be accepted if it a) provides the required accuracy and an additional value, and b) is competitive in terms of economic application compared to other measurement technologies. While a) was already proven by the scientific community and results were published comprehensively during the last decade, b) still has to be verified under real conditions. For this purpose, a test data set representing a realistic scenario provided by ETH Zurich was used to investigate cost effectiveness and to identify weak points in the processing chain that require further development. Our investigations are limited to UAVs carrying digital consumer cameras; for larger UAVs equipped with medium format cameras the situation has to be considered as significantly different. Image data were acquired during flights using a microdrones MD4-1000 quadrocopter equipped with an Olympus PE-1 digital compact camera. From these images, a subset of 5 images was selected for processing in order to record the time required for the whole production chain of photogrammetric products. We see the potential of mini-UAV-based photogrammetry mainly in smaller areas, up to a size of ca. 100 hectares. Larger areas can be efficiently covered by small airplanes with few images, reducing processing effort drastically. In the case of smaller areas of a few hectares only, it depends more on the products required. UAVs can be an enhancement or alternative to GNSS measurements, terrestrial laser scanning and ground-based photogrammetry. We selected the above mentioned

  15. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181

  16. Lessons Learned from NASA UAV Science Demonstration Program Missions

    NASA Technical Reports Server (NTRS)

    Wegener, Steven S.; Schoenung, Susan M.

    2003-01-01

    During the summer of 2002, two airborne missions were flown as part of a NASA Earth Science Enterprise program to demonstrate the use of uninhabited aerial vehicles (UAVs) to perform earth science. One mission, the Altus Cumulus Electrification Study (ACES), successfully measured lightning storms in the vicinity of Key West, Florida, during storm season using a high-altitude Altus(TM) UAV. In the other, a solar-powered UAV, the Pathfinder Plus, flew a high-resolution imaging mission over coffee fields in Kauai, Hawaii, to help guide the harvest.

  17. Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes

    NASA Astrophysics Data System (ADS)

    Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio

    2017-12-01

    A recent revolution in miniaturised sensor technology has provided the market with novel hyperspectral imagers operating on the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as a UAV, because the full spectrum recording is carried out on the time-sequential principle. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands without orientation are then matched to the oriented bands, accounting for the 3D scene, to provide exterior orientations, and afterwards hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI), on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5 pixel. The empirical assessment proved the performance and showed that, with the novel method, most parts of

  18. Monitoring landslide dynamics using timeseries of UAV imagery

    NASA Astrophysics Data System (ADS)

    de Jong, S. M.; Van Beek, L. P.

    2017-12-01

    Landslides are processes occurring worldwide that can have large economic impact and sometimes result in fatalities. Multiple factors are important in landslide processes and can make an area prone to landslide activity. Human factors like drainage and removal of vegetation or land clearing are examples of factors that may cause a landslide. Other environmental factors such as topography and the shear strength of the slope material are more difficult to control. Triggering factors for landslides are typically heavy rainfall events or sometimes earthquakes or undercutting by a river. The collection of data about existing landslides in a given area is important for predicting future landslides in that region. We have set up a monitoring program for landslides using cameras aboard Unmanned Aerial Vehicles. UAVs with cameras are able to collect ultra-high-resolution images and can be operated in a very flexible way; they just fit in the back of a car. In this study we used Unmanned Aerial Vehicles to collect a time series of high-resolution images over landslides in France and Australia. The algorithm used to process the UAV images into OrthoMosaics and OrthoDEMs is Structure from Motion (SfM). The process generally results in centimeter precision in the horizontal and vertical directions. Such multi-temporal datasets enable the detection of the landslide area, the leading edge slope, temporal patterns and volumetric changes of particular areas of the landslide. We measured and computed surface movement of the landslide using the COSI-Corr image correlation algorithm with ground validation. Our study shows the possibilities of generating accurate Digital Surface Models (DSMs) of landslides using images collected with an Unmanned Aerial Vehicle (UAV). The technique is robust and repeatable, such that a substantial time series of datasets can be routinely collected. It is shown that a time-series of UAV images can be used to map landslide movements with
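
    As a stand-in for the COSI-Corr correlation step, the sketch below estimates the integer-pixel displacement between two co-registered image patches by phase correlation. The synthetic patches and the applied shift are illustrative assumptions, not the study's data or software.

```python
# Hedged sketch: integer-pixel displacement between two patches via phase correlation.
import numpy as np

def phase_correlation_shift(ref, mov):
    """Integer-pixel (dy, dx) displacement of `mov` relative to `ref`."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(mov)
    cross = F2 * np.conj(F1)
    corr = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the patch into negative displacements.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(4)
ref = rng.uniform(0, 1, (128, 128))
mov = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)   # apply a known (3, -5) shift
print(phase_correlation_shift(ref, mov))             # expected: (3, -5)
```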

  19. Feasibility of employing a smartphone as the payload in a photogrammetric UAV system

    NASA Astrophysics Data System (ADS)

    Kim, Jinsoo; Lee, Seongkyu; Ahn, Hoyong; Seo, Dongju; Park, Soyoung; Choi, Chuluong

    2013-05-01

    Smartphones can be operated in a 3G network environment at any time or location, and they also cost less than existing photogrammetric UAV systems, providing high-resolution images and 3D location and attitude data from a variety of built-in sensors. This study aims to assess the feasibility of using a smartphone as the payload for a photogrammetric UAV system. To carry out the assessment, a smartphone-based photogrammetric UAV system was developed and utilized to obtain image, location, and attitude data under both static and dynamic conditions. The accuracy of the location and attitude data obtained and sent by this system was then evaluated. The smartphone images were converted into ortho-images via image triangulation, which was carried out both with and without consideration of the interior orientation (IO) parameters determined by camera calibration. In the static experiment, when the IO parameters were taken into account, the triangulation results were less than 1.28 pixels (RMSE) for all smartphone types, an improvement of at least 47% compared with the case when IO parameters were not taken into account. In the dynamic experiment, on the other hand, the accuracy of smartphone image triangulation was not significantly improved by considering IO parameters. This was because the electronic rolling shutter within the complementary metal-oxide semiconductor (CMOS) sensor built into the smartphone and the actuator for the voice coil motor (VCM)-type auto-focusing were affected by the vibration and speed of the UAV, which is likely to have a negative effect on image-based digital elevation model (DEM) generation. However, considering that these results were obtained using a single smartphone, this suggests that a smartphone is not only feasible as the payload for a photogrammetric UAV system but may also play a useful role when installed in existing UAV systems.

  20. Automated geographic registration and radiometric correction for UAV-based mosaics

    NASA Astrophysics Data System (ADS)

    Thomasson, J. Alex; Shi, Yeyin; Sima, Chao; Yang, Chenghai; Cope, Dale A.

    2017-05-01

    Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties in the science-based utilization of such mosaics are geographic registration and radiometric calibration. In our current research project, image files are taken to the computer laboratory after the flight, and semi-manual pre-processing is implemented on the raw image data, including ortho-mosaicking and radiometric calibration. Ground control points (GCPs) are critical for high-quality geographic registration of images during mosaicking. Applications requiring accurate reflectance data also require radiometric-calibration references so that reflectance values of image objects can be calculated. We have developed a method for automated geographic registration and radiometric correction with targets that are installed semi-permanently at distributed locations around fields. The targets are a combination of black (≈5% reflectance), dark gray (≈20% reflectance), and light gray (≈40% reflectance) sections that provide for a transformation from pixel value to reflectance in the dynamic range of crop fields. The exact spectral reflectance of each target is known, having been measured with a spectrophotometer. At the time of installation, each target is measured for position with a real-time kinematic GPS receiver to give its precise latitude and longitude. Automated location of the reference targets in the images is required for precise, automated geographic registration, and automated calculation of the digital-number-to-reflectance transformation is required for automated radiometric calibration. To validate the system for radiometric calibration, a calibrated UAV-based image mosaic of a field was compared to a calibrated single image from a manned aircraft. Reflectance
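
    The digital-number-to-reflectance transformation described above can be illustrated with an empirical-line sketch: a linear fit through the three grey reference targets, applied band-wise to the mosaic. The target digital numbers and the test band below are hypothetical values, not measurements from the project.

```python
# Hedged sketch: empirical-line DN-to-reflectance calibration from reference targets.
import numpy as np

def fit_dn_to_reflectance(target_dn, target_reflectance):
    """Least-squares fit reflectance = gain * DN + offset; returns (gain, offset)."""
    gain, offset = np.polyfit(np.asarray(target_dn, float),
                              np.asarray(target_reflectance, float), deg=1)
    return gain, offset

def calibrate_band(band_dn, gain, offset):
    """Convert a band of raw digital numbers to reflectance."""
    return gain * band_dn.astype(np.float64) + offset

# Mean DNs measured over the black / dark gray / light gray panels (assumed values).
target_dn = [22.0, 96.0, 183.0]
target_rho = [0.05, 0.20, 0.40]          # spectrophotometer reflectances of the panels
gain, offset = fit_dn_to_reflectance(target_dn, target_rho)

rng = np.random.default_rng(5)
band = rng.integers(0, 255, (512, 512))  # raw mosaic band (hypothetical)
rho = calibrate_band(band, gain, offset)
print(f"gain={gain:.5f}, offset={offset:.3f}, mean reflectance={rho.mean():.3f}")
```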

  1. Positional Quality Assessment of Orthophotos Obtained from Sensors Onboard Multi-Rotor UAV Platforms

    PubMed Central

    Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer

    2014-01-01

    In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart. PMID:25587877

  2. Assessing the Utility of Uav-Borne Hyperspectral Image and Photogrammetry Derived 3d Data for Wetland Species Distribution Quick Mapping

    NASA Astrophysics Data System (ADS)

    Li, Q. S.; Wong, F. K. K.; Fung, T.

    2017-08-01

    Lightweight unmanned aerial vehicles (UAV) loaded with novel sensors offer a low-cost and minimum-risk solution for data acquisition in complex environments. This study assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in a wetland area of Hong Kong. Multiple feature reduction methods and different classifiers were compared. The best result was obtained when transformed components from minimum noise fraction (MNF) and the DSM were combined in a support vector machine (SVM) classifier. Wavelength regions at the chlorophyll-absorption green peak, red, red edge and oxygen absorption in the near infrared were identified for better species discrimination. In addition, the input of DSM data reduces overestimation of low plant species and misclassification due to the shadow effect and inter-species morphological variation. This study establishes a framework for quick survey and update of the wetland environment using a UAV system. The findings indicate that the utility of UAV-borne hyperspectral and derived tree height information provides a solid foundation for further research such as biological invasion monitoring and bio-parameter modelling in wetlands.
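
    A minimal sketch of the best-performing configuration, an SVM applied to stacked MNF components plus a DSM height layer, is shown below using scikit-learn. The feature arrays and labels are synthetic placeholders, and the MNF transform is assumed to have been computed beforehand (it is not reproduced here).

```python
# Hedged sketch: SVM classification on stacked MNF + DSM features (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
n_pixels, n_mnf = 3000, 10
mnf = rng.standard_normal((n_pixels, n_mnf))          # MNF components (assumed precomputed)
dsm = rng.uniform(0.0, 6.0, (n_pixels, 1))            # canopy height [m] (assumed)
X = np.hstack([mnf, dsm])                             # stacked per-pixel feature vector
y = rng.integers(0, 13, n_pixels)                     # labels for 13 wetland species

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(scaler.transform(X_train), y_train)
pred = clf.predict(scaler.transform(X_test))
print("overall accuracy:", round(accuracy_score(y_test, pred), 3))
```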

  3. Infrared hyperspectral imaging miniaturized for UAV applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-02-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics consist of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper will present our optical-mechanical design approach, which results in an infrared hyperspectral imaging system that is small enough to be a payload on a mini-UAV or commercial quadcopter. An example is also given of how this technology can easily be used to quantify a hydrocarbon gas leak's volume and mass flow rates. The diffractive optical elements used in the lenslet array are blazed gratings where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the spatial resolution. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4

  4. DAZZLE project: UAV to ground communication system using a laser and a modulated retro-reflector

    NASA Astrophysics Data System (ADS)

    Thueux, Yoann; Avlonitis, Nicholas; Erry, Gavin

    2014-10-01

    The advent of the Unmanned Aerial Vehicle (UAV) has generated the need for reduced size, weight and power (SWaP) requirements for communications systems with a high data rate, enhanced security and quality of service. This paper presents the current results of the DAZZLE project run by Airbus Group Innovations. The specifications, integration steps and initial performance of a UAV to ground communication system using a laser and a modulated retro-reflector are detailed. The laser operates at the wavelength of 1550nm and at power levels that keep it eye safe. It is directed using a FLIR pan and tilt unit driven by an image processing-based system that tracks the UAV in flight at a range of a few kilometers. The modulated retro-reflector is capable of a data rate of 20Mbps over short distances, using 200mW of electrical power. The communication system was tested at the Pershore Laser Range in July 2014. Video data from a flying Octocopter was successfully transmitted over 1200m. During the next phase of the DAZZLE project, the team will attempt to produce a modulated retro-reflector capable of 1Gbps in partnership with the research institute Acreo based in Sweden. A high speed laser beam steering capability based on a Spatial Light Modulator will also be added to the system to improve beam pointing accuracy.

  5. The use of UAVs for monitoring land degradation

    NASA Astrophysics Data System (ADS)

    Themistocleous, Kyriacos

    2017-10-01

    Land degradation is one of the causes of desertification of drylands in the Mediterranean. UAVs can be used to monitor and document the various variables that cause desertification in drylands, including overgrazing, aridity, vegetation loss, etc. This paper examines the use of UAVs and accompanying sensors to monitor overgrazing, vegetation stress and aridity in the study area. UAV images were used to generate digital elevation models (DEMs) to examine changes in microtopography, as well as ortho-photos to detect changes in vegetation patterns. The combined data of the digital elevation models and the orthophotos can be used to identify the mechanisms of desertification in the study area.

  6. Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M

    2011-01-01

    Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide-area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.

  7. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast' image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give

  8. Multiple Event Localization in a Sparse Acoustic Sensor Network Using UAVs as Data Mules

    DTIC Science & Technology

    2012-12-01

    ...a Microhard radio to forward the ToAs to the mule-UAV. Two Procerus Unicorn UAVs were used with different payloads. The imaging-UAV was equipped

  9. Configuration and specifications of an Unmanned Aerial Vehicle (UAV) for early site specific weed management.

    PubMed

    Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel

    2013-01-01

    A new aerial platform has recently emerged for image acquisition, the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI index. These results suggest that an agreement between spectral and spatial resolutions is needed to optimise the flight mission according to each agronomic objective, as affected by the size of the smallest object to be discriminated (weed plants or weed patches).
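
    For reference, the three indices named in the abstract are simple per-pixel band combinations. The sketch below is a minimal NumPy illustration under the assumption that the bands are already co-registered arrays; it is not the authors' processing chain.

      # Sketch of the vegetation indices discussed in the abstract: ExG from the
      # visible camera and NGRDI/NDVI from red, green and (optional) NIR bands.
      import numpy as np

      def vegetation_indices(red, green, blue, nir=None, eps=1e-9):
          red, green, blue = (b.astype(float) for b in (red, green, blue))
          total = red + green + blue + eps
          r, g, b = red / total, green / total, blue / total
          exg = 2 * g - r - b                            # Excess Green Index
          ngrdi = (green - red) / (green + red + eps)    # Normalised Green-Red Difference Index
          ndvi = None
          if nir is not None:                            # NDVI requires the NIR band
              nir = nir.astype(float)
              ndvi = (nir - red) / (nir + red + eps)
          return exg, ngrdi, ndvi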

  10. Configuration and Specifications of an Unmanned Aerial Vehicle (UAV) for Early Site Specific Weed Management

    PubMed Central

    Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel

    2013-01-01

    A new aerial platform has recently emerged for image acquisition, the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI index. These results suggest that an agreement between spectral and spatial resolutions is needed to optimise the flight mission according to each agronomic objective, as affected by the size of the smallest object to be discriminated (weed plants or weed patches). PMID:23483997

  11. Runway Detection From Map, Video and Aircraft Navigational Data

    DTIC Science & Technology

    2016-03-01

    RUNWAY DETECTION FROM MAP, VIDEO AND AIRCRAFT NAVIGATIONAL DATA, by Jose R. Espinosa Gloria, March 2016; Thesis Advisor: Roberto Cristi, Co-Advisor: Oleg... (Master's thesis). ...Mexican Navy, unmanned aerial vehicles (UAV) have been equipped with daylight and infrared cameras. Processing the video information obtained from these

  12. Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations

    USGS Publications Warehouse

    Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.

    2017-01-01

    Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.

  13. Hurricane Harvey Building Damage Assessment Using UAV Data

    NASA Astrophysics Data System (ADS)

    Yeom, J.; Jung, J.; Chang, A.; Choi, I.

    2017-12-01

    Hurricane Harvey, an extremely destructive major hurricane, struck southern Texas, U.S.A. on August 25, causing catastrophic flooding and storm damage. We visited Rockport, which suffered severe building destruction, and conducted UAV (Unmanned Aerial Vehicle) surveying for building damage assessment. UAVs provide very high resolution images compared with traditional remote sensing data. In addition, prompt and cost-effective damage assessment can be performed regardless of several limitations of other remote sensing platforms, such as the revisit interval of satellite platforms, complicated flight planning in aerial surveying, and cloud cover. In this study, UAV flights and GPS surveying were conducted two weeks after the hurricane damage to generate an orthomosaic image and a DEM (Digital Elevation Model). A 3D region growing scheme is proposed to quantitatively estimate building damage, considering the elevation change and spectral difference of building debris. The results showed that the proposed method can be used for high definition building damage assessment in a time- and cost-effective way.
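
    The combination of elevation change and spectral difference can be mocked up as follows. This is a simplified stand-in for the proposed 3D region-growing scheme, assuming co-registered pre- and post-event DEMs and orthoimages; the thresholds and minimum region size are hypothetical.

      # Simplified sketch of debris mapping: flag pixels where both the DEM
      # elevation change and the spectral difference exceed hypothetical
      # thresholds, then keep connected regions above a minimum size
      # (a stand-in for the paper's 3D region-growing step).
      import numpy as np
      from scipy import ndimage

      def damage_mask(dem_pre, dem_post, img_pre, img_post,
                      dz_thresh=0.5, spec_thresh=30.0, min_pixels=25):
          dz = np.abs(dem_post - dem_pre)                       # elevation change (m)
          dspec = np.abs(img_post.astype(float)
                         - img_pre.astype(float)).mean(axis=-1) # mean band difference
          candidate = (dz > dz_thresh) & (dspec > spec_thresh)
          labels, n = ndimage.label(candidate)                  # connected regions
          sizes = ndimage.sum(candidate, labels, range(1, n + 1))
          keep_labels = np.flatnonzero(sizes >= min_pixels) + 1
          return np.isin(labels, keep_labels)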

  14. Unmanned Aerial Vehicle (UAV) Dynamic-Tracking Directional Wireless Antennas for Low Powered Applications that Require Reliable Extended Range Operations in Time Critical Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott G. Bauer; Matthew O. Anderson; James R. Hanneman

    2005-10-01

    The proven value of DOD Unmanned Aerial Vehicles (UAVs) will ultimately transition to National and Homeland Security missions that require real-time aerial surveillance, situation awareness, force protection, and sensor placement. Public service first responders who routinely risk personal safety to assess and report a situation for emergency actions will likely be the first to benefit from these new unmanned technologies. ‘Packable’ or ‘portable’ small-class UAVs will be particularly useful to the first responder. They require the least amount of training, no fixed infrastructure, and are capable of being launched and recovered from the point of emergency. All UAVs require wireless communication technologies for real-time applications. Typically on a small UAV, a low bandwidth telemetry link is required for command and control (C2) and systems health monitoring. If the UAV is equipped with a real-time Electro-Optical or Infrared (EO/IR) video camera payload, a dedicated high bandwidth analog/digital link is usually required for reliable high-resolution imagery. In most cases, both the wireless telemetry and real-time video links will be integrated into the UAV with unity gain omni-directional antennas. With limited on-board power and payload capacity, a small UAV will be limited in the amount of radio-frequency (RF) energy it transmits to the users. Therefore, ‘packable’ and ‘portable’ UAVs will have limited useful operational ranges for first responders. This paper will discuss the limitations of small UAV wireless communications. The discussion will present an approach of utilizing a dynamic ground-based real-time tracking high gain directional antenna to provide extended-range stand-off operation, potential RF channel reuse, and assured telemetry and data communications from low-powered UAV-deployed wireless assets.
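
    The range limitation described here can be illustrated with a Friis link budget: a high-gain tracking antenna on the ground adds directly to the link margin without demanding more transmit power from the UAV. The sketch below uses illustrative power, frequency, gain and sensitivity values, not figures from the report.

      # Back-of-the-envelope Friis link budget: received power versus range for
      # an omni antenna on the UAV talking to either an omni or a high-gain
      # tracking ground antenna.  All parameter values are illustrative.
      import math

      def received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, range_m, freq_hz):
          # free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
          fspl_db = 20 * math.log10(range_m) + 20 * math.log10(freq_hz) - 147.55
          return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db

      P_TX = 20.0          # 100 mW UAV transmitter, in dBm
      FREQ = 2.4e9         # link frequency, Hz
      SENSITIVITY = -90.0  # assumed receiver sensitivity, dBm

      for g_ground in (2.0, 24.0):          # omni vs. tracking directional antenna, dBi
          for rng in (1e3, 5e3, 20e3):
              p_rx = received_power_dbm(P_TX, 2.0, g_ground, rng, FREQ)
              status = "link OK" if p_rx > SENSITIVITY else "below sensitivity"
              print(f"G_rx={g_ground:4.1f} dBi  range={rng / 1e3:5.1f} km  "
                    f"P_rx={p_rx:6.1f} dBm  ({status})")

    With these illustrative numbers the omni-to-omni link drops below sensitivity within a few kilometers, while the 24 dBi tracking antenna keeps the same UAV transmitter usable at 20 km, which is the effect the paper exploits.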

  15. Video-based noncooperative iris image segmentation.

    PubMed

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
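
    As an illustration of the boundary-modelling step, the sketch below finds the pupil as the largest dark blob in a frame and fits an ellipse to its contour with OpenCV's least-squares fit, standing in for the direct least-squares ellipse fitting used in the paper; the threshold value is hypothetical.

      # Sketch of pupil boundary modelling on one video frame: threshold the
      # dark pupil region, take the largest contour and fit an ellipse to it.
      # OpenCV's fitEllipse stands in for the paper's direct least-squares fit.
      import cv2

      def fit_pupil_ellipse(gray, dark_thresh=60):
          blur = cv2.GaussianBlur(gray, (7, 7), 0)
          _, mask = cv2.threshold(blur, dark_thresh, 255, cv2.THRESH_BINARY_INV)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_NONE)
          if not contours:
              return None
          pupil = max(contours, key=cv2.contourArea)   # largest dark blob
          if len(pupil) < 5:                           # fitEllipse needs >= 5 points
              return None
          return cv2.fitEllipse(pupil)                 # ((cx, cy), (axes), angle)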

  16. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
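
    The conventional vector-quantization baseline mentioned in the abstract can be sketched with a k-means codebook over small image blocks; this illustrates plain VQ compression, not the VLSI self-organizing network itself, and the block and codebook sizes are arbitrary.

      # Sketch of block-based vector quantization: 4x4 blocks are mapped to a
      # learned codebook so only the codebook plus per-block indices are stored.
      import numpy as np
      from sklearn.cluster import KMeans

      def vq_compress(gray, block=4, codebook_size=64):
          h, w = (d - d % block for d in gray.shape)
          blocks = (gray[:h, :w]
                    .reshape(h // block, block, w // block, block)
                    .swapaxes(1, 2)
                    .reshape(-1, block * block)
                    .astype(float))
          km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(blocks)
          return km.cluster_centers_, km.labels_, (h // block, w // block)

      def vq_decompress(codebook, labels, grid, block=4):
          rows, cols = grid
          blocks = codebook[labels].reshape(rows, cols, block, block)
          return blocks.swapaxes(1, 2).reshape(rows * block, cols * block)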

  17. Design and implementation of a remote UAV-based mobile health monitoring system

    NASA Astrophysics Data System (ADS)

    Li, Songwei; Wan, Yan; Fu, Shengli; Liu, Mushuang; Wu, H. Felix

    2017-04-01

    Unmanned aerial vehicles (UAVs) play increasing roles in structural health monitoring. With growing mobility in modern Internet-of-Things (IoT) applications, the health monitoring of mobile structures becomes an emerging application. In this paper, we develop a UAV-carried vision-based monitoring system that allows a UAV to continuously track and monitor a mobile infrastructure and transmit back the monitoring information in real-time from a remote location. The monitoring system uses a simple UAV-mounted camera and requires only a single feature located on the mobile infrastructure for target detection and tracking. The computation-effective vision-based tracking solution based on a single feature is an improvement over existing vision-based lead-follower tracking systems that either have poor tracking performance due to the use of a single feature, or have improved tracking performance at the cost of using multiple features. In addition, a UAV-carried aerial networking infrastructure using directional antennas is used to enable robust real-time transmission of monitoring video streams over a long distance. Automatic heading control is used to self-align the headings of the directional antennas to enable robust communication in mobility. Compared to existing omni-directional communication systems, the directional communication solution significantly increases the operation range of remote monitoring systems. In this paper, we develop the integrated modeling framework of the camera and mobile platforms, design the tracking algorithm, develop a testbed of UAVs and mobile platforms, and evaluate system performance through both simulation studies and field tests.

  18. Video Imaging System Particularly Suited for Dynamic Gear Inspection

    NASA Technical Reports Server (NTRS)

    Broughton, Howard (Inventor)

    1999-01-01

    A digital video imaging system that captures the image of a single tooth of interest of a rotating gear is disclosed. The video imaging system detects the complete rotation of the gear and divides that rotation into discrete time intervals so that each tooth of interest is precisely determined to be at the desired location, which is illuminated in unison with a digital video camera so as to record a single digital image for each tooth. The digital images are available to provide instantaneous analysis of the tooth of interest, or to be stored and later provide images that yield a history that may be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program so that it may run for several days acquiring images without supervision from the user.

  19. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  20. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  1. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  2. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  3. 17 CFR 232.304 - Graphic, image, audio and video material.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...

  4. Slic Superpixels for Object Delineation from Uav Data

    NASA Astrophysics Data System (ADS)

    Crommelinck, S.; Bennett, R.; Gerke, M.; Koeva, M. N.; Yang, M. Y.; Vosselman, G.

    2017-08-01

    Unmanned aerial vehicles (UAV) are increasingly investigated with regard to their potential to create and update (cadastral) maps. UAVs provide a flexible and low-cost platform for high-resolution data, from which object outlines can be accurately delineated. This delineation could be automated with image analysis methods to improve existing mapping procedures that are cost, time and labor intensive and of little reproducibility. This study investigates a superpixel approach, namely simple linear iterative clustering (SLIC), in terms of its applicability to high-resolution UAV orthoimages and its ability to delineate object outlines of roads and roofs. Results show that the approach is applicable to UAV orthoimages of 0.05 m GSD and extents of 100 million and 400 million pixels. Further, the approach delineates the objects with the high accuracy provided by the UAV orthoimages at completeness rates of up to 64 %. The approach is not suitable as a standalone approach for object delineation. However, it shows high potential for combination with further methods that delineate objects at higher correctness rates in exchange for a lower localization quality. This study provides a basis for future work that will focus on the incorporation of multiple methods for an interactive, comprehensive and accurate object delineation from UAV data. This aims to support numerous application fields such as topographic and cadastral mapping.
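
    For orientation, running SLIC on a UAV orthoimage tile takes only a few lines with scikit-image; the file names, segment count and compactness below are illustrative choices, not the values used in the study.

      # Minimal sketch of SLIC superpixel segmentation on a UAV orthoimage tile.
      from skimage import io, segmentation

      tile = io.imread("uav_orthoimage_tile.tif")        # hypothetical file name
      segments = segmentation.slic(tile, n_segments=2000, compactness=10,
                                   start_label=1)
      overlay = segmentation.mark_boundaries(tile, segments)
      io.imsave("slic_boundaries.png", (overlay * 255).astype("uint8"))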

  5. Authenticity and privacy of a team of mini-UAVs by means of nonlinear recursive shuffling

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Ming-Kai; Baier, Patrick; Lee, Ting N.; Buss, James R.; Madan, Rabinder N.

    2006-04-01

    We have developed a real-time EOIR video counter-jittering sub-pixel image correction algorithm for a single mini Unmanned Air Vehicle (m-UAV) for surveillance and communication (Szu et al., SPIE Proc. Vol. 5439, pp. 183-197, April 12, 2004). In this paper, we wish to plan and execute the next challenge: a team of m-UAVs. The minimum unit for a robust chain saw communication must have the connectivity of five second-nearest-neighbor members with a sliding, arbitrary center. The team members require an authenticity check (AC) among a unit of five, in order to carry out a jittering mosaic image processing (JMIP) on-board for every m-UAV without gimbals. The JMIP does not use any NSA security protocol ("cardinal rule: no-man, no-NSA codec"). Besides team flight dynamics (Szu et al., "Nanotech applied to aerospace and aeronautics: swarming," AIAA 2005-6933, Sept 26-29, 2005), several new modules (AOA, AAM, DSK, AC, FPGA) are designed, and the JMIP must develop its own control, command and communication system, safeguarded by the authenticity and privacy checks presented in this paper. We propose a Nonlinear Invertible (deck of card) Shuffler (NIS) algorithm, which has a Feistel structure similar to the Data Encryption Standard (DES) developed by Feistel et al. at IBM in the 1970's; but DES is modified here by a set of chaotic dynamical shuffler keys (DSK), as re-computable lookup tables generated by every on-board Chaotic Neural Network (CNN). The initializations of the CNN are periodically provided by the private version of RSA from the ground control to team members to avoid any inadvertent failure of a broken chain among m-UAVs. Efficient utilization of communication bandwidth is necessary for a constantly moving and jittering m-UAV platform; e.g., the wireless LAN protocol wastes bandwidth due to a constant need for hand-shaking procedures (as demonstrated by NRL, though sensible for PCs and 3rd gen. mobile phones). Thus, the chaotic DSK must be embedded in a fault
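
    The DES-like structure with a chaotic key schedule can be illustrated with a toy Feistel network whose round keys come from a logistic map. This is only a structural sketch: it is not the paper's NIS algorithm, not the actual DSK generation, and not a secure cipher.

      # Toy Feistel-structured shuffler with round keys drawn from a chaotic
      # logistic map, illustrating a DES-like structure keyed by a dynamical
      # system.  Structural sketch only; not secure and not the paper's design.
      import hashlib

      def chaotic_round_keys(x0=0.6137, rounds=8):
          """Logistic map x -> 4x(1-x); each iterate is quantised to 32 bits."""
          keys, x = [], x0
          for _ in range(rounds):
              x = 4.0 * x * (1.0 - x)
              keys.append(int(x * 2 ** 32) & 0xFFFFFFFF)
          return keys

      def round_function(half, key):
          data = half.to_bytes(4, "big") + key.to_bytes(4, "big")
          return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

      def feistel(block64, keys, decrypt=False):
          left, right = block64 >> 32, block64 & 0xFFFFFFFF
          for key in (reversed(keys) if decrypt else keys):
              left, right = right, left ^ round_function(right, key)
          return (right << 32) | left    # final swap makes decryption symmetric

      keys = chaotic_round_keys()
      cipher = feistel(0x0123456789ABCDEF, keys)
      assert feistel(cipher, keys, decrypt=True) == 0x0123456789ABCDEF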

  6. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels in consumer and scientific imaging devices continues to grow, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  7. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
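
    In software, the trigger logic amounts to watching two frame statistics: a slow running average picks up DC-like drifts while frame-to-frame differences pick up AC-like events. The sketch below is a hypothetical software analogue of that idea, not the VLSI state machine; the smoothing factor and thresholds are illustrative.

      # Software sketch of the event-trigger idea: a slow background average
      # catches long-term (DC-like) changes, frame differences catch short-term
      # (AC-like) events.  Thresholds and smoothing factor are illustrative.
      import numpy as np

      class VideoEventTrigger:
          def __init__(self, alpha=0.02, ac_thresh=12.0, dc_thresh=8.0):
              self.alpha, self.ac_thresh, self.dc_thresh = alpha, ac_thresh, dc_thresh
              self.background = None
              self.prev = None

          def update(self, frame):
              """Return True when the new frame should trigger archiving."""
              frame = frame.astype(float)
              if self.background is None:
                  self.background, self.prev = frame.copy(), frame.copy()
                  return False
              ac = np.abs(frame - self.prev).mean()          # short-term change
              dc = np.abs(frame - self.background).mean()    # long-term drift
              self.prev = frame
              self.background = (1 - self.alpha) * self.background + self.alpha * frame
              return ac > self.ac_thresh or dc > self.dc_thresh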

  8. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  9. The pan-sharpening of satellite and UAV imagery for agricultural applications

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Woroszkiewicz, Malgorzata

    2016-10-01

    Remote sensing techniques are widely used in many different areas of interest, i.e. urban studies, environmental studies, agriculture, etc., due to the fact that they provide rapid and accurate information over large areas with optimal temporal, spatial and spectral resolutions. Agricultural management is one of the most common applications of remote sensing methods nowadays. Monitoring of agricultural sites and creating information regarding the spatial distribution and characteristics of crops are important tasks to provide data for precision agriculture, crop management and registries of agricultural lands. For monitoring of cultivated areas many different types of remote sensing data can be used; the most popular are multispectral satellite images. Such data allow for generating land use and land cover maps, based on various methods of image processing and remote sensing. This paper presents the fusion of satellite and unmanned aerial vehicle (UAV) imagery for agricultural applications, especially for distinguishing crop types. The authors present chosen data fusion methods for satellite images and data obtained from low altitudes. Moreover, the authors describe pan-sharpening approaches and apply chosen pan-sharpening methods for multiresolution image fusion of satellite and UAV imagery. For this purpose, satellite images from the Landsat-8 OLI sensor and data collected during various UAV flights (with a mounted RGB camera) were used. In this article, the authors not only show the potential of the fusion of satellite and UAV images, but also present the application of pan-sharpening in crop identification and management.

  10. Determination of Landslide and Driftwood Potentials by Fixed-wing UAV-Borne RGB and NIR images: A Case Study of Shenmu Area in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Su-Chin; Hsiao, Yu-Shen; Chung, Ta-Hsien

    2015-04-01

    This study is aimed at determining the landslide and driftwood potentials of the Shenmu area in Taiwan by Unmanned Aerial Vehicle (UAV). High-resolution orthomosaics and digital surface models (DSMs) are obtained from several practical UAV surveys using a red-green-blue (RGB) camera and a near-infrared (NIR) camera, respectively. Several artificial aerial survey targets are used for ground control in the photogrammetry. The algorithm for this study is based on logistic regression. Eight main factors, which are elevation, terrain slope, terrain aspect, terrain relief, terrain roughness, distance to roads, distance to rivers, and land utilization, are taken into consideration in our logistic regression model. The related results from the UAV are compared with those from traditional photogrammetry. Overall, the study focuses on monitoring the distribution of areas with high-risk landslide and driftwood potentials in the Shenmu area using fixed-wing UAV-borne RGB and NIR images. We also further analyze the relationship between forests, landslides, disaster potentials and upper river areas.
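
    A minimal version of such a susceptibility model is shown below with scikit-learn; the table, column names and train/test split are assumptions for illustration, and categorical factors such as land utilization would need encoding before fitting.

      # Sketch of a logistic-regression landslide-potential model over the eight
      # factors listed in the abstract.  File and column names are hypothetical.
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      FACTORS = ["elevation", "slope", "aspect", "relief", "roughness",
                 "dist_to_road", "dist_to_river", "land_use"]

      df = pd.read_csv("shenmu_training_cells.csv")       # one row per grid cell
      X, y = df[FACTORS], df["landslide"]                 # y: 1 = mapped landslide
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      potential = model.predict_proba(X_te)[:, 1]         # landslide potential in [0, 1]
      print("AUC:", roc_auc_score(y_te, potential))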

  11. Development of collision avoidance system for useful UAV applications using image sensors with laser transmitter

    NASA Astrophysics Data System (ADS)

    Cheong, M. K.; Bahiki, M. R.; Azrad, S.

    2016-10-01

    The main goal of this study is to demonstrate an approach to achieving collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to manipulate the QUAV's response based on the calculated depth. The attitude and position controllers were designed using the non-linear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithm based on image sensors. In the results, the QUAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance of the UAV was better with obstacles with dull surfaces than with shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for application in short range collision avoidance.
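
    The triangulation itself reduces to the classical depth-from-disparity relation for a calibrated, rectified stereo pair. The sketch below uses illustrative focal length and baseline values; the example numbers are chosen only to land near the 0.4 m figure quoted in the abstract.

      # Depth of the tracked laser spot from its disparity in a rectified
      # stereo pair: Z = f * B / d.  Camera parameters are illustrative.
      def spot_depth_m(x_left_px, x_right_px, focal_px=1400.0, baseline_m=0.12):
          disparity = x_left_px - x_right_px        # pixels
          if disparity <= 0:
              raise ValueError("spot must appear further left in the left image")
          return focal_px * baseline_m / disparity

      # e.g. a 420-pixel disparity with f = 1400 px and B = 0.12 m gives 0.4 m
      print(spot_depth_m(960.0, 540.0))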

  12. Sensor-Oriented Path Planning for Multiregion Surveillance with a Single Lightweight UAV SAR

    PubMed Central

    Li, Jincheng; Chen, Jie; Wang, Pengbo; Li, Chunsheng

    2018-01-01

    In the surveillance of regions of interest by unmanned aerial vehicle (UAV), system performance relies greatly on the motion control strategy of the UAV and the operating characteristics of the onboard sensors. This paper investigates the 2D path planning problem for the lightweight UAV synthetic aperture radar (SAR) system in an environment of multiple regions of interest (ROIs), the sizes of which are comparable to the radar swath width. Taking into account the special requirements of the SAR system on the motion of the platform, we model path planning for UAV SAR as a constrained multiobjective optimization problem (MOP). Based on the fact that the UAV route can be designed in the map image, an image-based path planner is proposed in this paper. First, the neighboring ROIs are merged by a morphological operation. Then, the parts of the routes for data collection over the ROIs are located according to the geometric features of the ROIs and the observation geometry of UAV SAR. Lastly, the route segments for ROI surveillance are connected by a path planning algorithm named the sampling-based sparse A* search (SSAS) algorithm. Simulation experiments in real scenarios demonstrate that the proposed sensor-oriented path planner greatly improves the reconnaissance performance of lightweight UAV SAR compared with the conventional zigzag path planner. PMID:29439447
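
    A much simplified version of the planner's first stages can be sketched as follows: neighbouring ROIs in a binary map are merged by morphological closing, and the resulting region centres are then visited in a greedy nearest-neighbour order as a crude stand-in for the SSAS connection step. The structuring-element size and start position are assumptions.

      # Simplified sketch: merge neighbouring ROIs by morphological closing,
      # label the merged regions, and order them by greedy nearest-neighbour
      # (a crude stand-in for the sampling-based sparse A* connection step).
      import numpy as np
      from scipy import ndimage

      def plan_region_order(roi_mask, merge_radius=5, start=(0.0, 0.0)):
          merged = ndimage.binary_closing(
              roi_mask, structure=np.ones((merge_radius, merge_radius)))
          labels, n = ndimage.label(merged)
          centres = ndimage.center_of_mass(merged, labels, range(1, n + 1))
          remaining = {i: np.array(c) for i, c in enumerate(centres, start=1)}
          order, current = [], np.array(start, dtype=float)
          while remaining:                       # fly to nearest unvisited region
              nxt = min(remaining,
                        key=lambda i: np.linalg.norm(remaining[i] - current))
              order.append(nxt)
              current = remaining.pop(nxt)
          return labels, order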

  13. Sensor-Oriented Path Planning for Multiregion Surveillance with a Single Lightweight UAV SAR.

    PubMed

    Li, Jincheng; Chen, Jie; Wang, Pengbo; Li, Chunsheng

    2018-02-11

    In the surveillance of regions of interest by unmanned aerial vehicle (UAV), system performance relies greatly on the motion control strategy of the UAV and the operating characteristics of the onboard sensors. This paper investigates the 2D path planning problem for the lightweight UAV synthetic aperture radar (SAR) system in an environment of multiple regions of interest (ROIs), the sizes of which are comparable to the radar swath width. Taking into account the special requirements of the SAR system on the motion of the platform, we model path planning for UAV SAR as a constrained multiobjective optimization problem (MOP). Based on the fact that the UAV route can be designed in the map image, an image-based path planner is proposed in this paper. First, the neighboring ROIs are merged by a morphological operation. Then, the parts of the routes for data collection over the ROIs are located according to the geometric features of the ROIs and the observation geometry of UAV SAR. Lastly, the route segments for ROI surveillance are connected by a path planning algorithm named the sampling-based sparse A* search (SSAS) algorithm. Simulation experiments in real scenarios demonstrate that the proposed sensor-oriented path planner greatly improves the reconnaissance performance of lightweight UAV SAR compared with the conventional zigzag path planner.

  14. Quantifying cell mono-layer cultures by video imaging.

    PubMed

    Miller, K S; Hook, L A

    1996-04-01

    A method is described in which the relative number of adherent cells in multi-well tissue-culture plates is assayed by staining the cells with Giemsa and capturing the image of the stained cells with a video camera and charge-coupled device. The resultant image is quantified using the associated video imaging software. The method is shown to be sensitive and reproducible and should be useful for studies where quantifying relative cell numbers and/or proliferation in vitro is required.

  15. Portable Imagery Quality Assessment Test Field for Uav Sensors

    NASA Astrophysics Data System (ADS)

    Dąbrowski, R.; Jenerowicz, A.

    2015-08-01

    Nowadays the imagery data acquired from UAV sensors are the main source of data used in various remote sensing applications, photogrammetry projects and imagery intelligence (IMINT), as well as in other tasks such as decision support. Therefore quality assessment of such imagery is an important task. The research team from the Military University of Technology, Faculty of Civil Engineering and Geodesy, Geodesy Institute, Department of Remote Sensing and Photogrammetry has designed and prepared a special test field, the Portable Imagery Quality Assessment Test Field (PIQuAT), that provides quality assessment in field conditions of images obtained with sensors mounted on UAVs. The PIQuAT consists of 6 individual segments which, when combined, allow for determining the radiometric, spectral and spatial resolution of images acquired from UAVs. All segments of the PIQuAT can be used together in various configurations or independently. All elements of the Portable Imagery Quality Assessment Test Field were tested in laboratory conditions in terms of their radiometry and spectral reflectance characteristics.

  16. Improving stop line detection using video imaging detectors.

    DOT National Transportation Integrated Search

    2010-11-01

    The Texas Department of Transportation and other state departments of transportation as well as cities : nationwide are using video detection successfully at signalized intersections. However, operational : issues with video imaging vehicle detection...

  17. Close Range Uav Accurate Recording and Modeling of St-Pierre Neo-Romanesque Church in Strasbourg (france)

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Grussenmeyer, P.; Freville, T.

    2017-02-01

    Close-range photogrammetry is an image-based technique which has often been used for the 3D documentation of heritage objects. Recently, advances in the field of image processing and UAVs (Unmanned Aerial Vehicles) have resulted in a renewed interest in this technique. However, commercially ready-to-use UAVs are often equipped with smaller sensors in order to minimize payload, and the quality of the documentation is still an issue. In this research, two commercial UAVs (the Sensefly Albris and DJI Phantom 3 Professional) were set up to record the 19th century St-Pierre-le-Jeune church in Strasbourg, France. Several software solutions (commercial and open source) were used to compare both UAVs' images in terms of calibration, accuracy of external orientation, as well as dense matching. Results show some instability with regard to the calibration of the Phantom 3, while the Albris had issues regarding its aerotriangulation results. Despite these shortcomings, both UAVs succeeded in producing dense point clouds accurate to within a few centimeters, which is largely sufficient for the purposes of a city 3D GIS (Geographical Information System). The acquisition of close range images using UAVs also provides greater LoD flexibility in processing. These advantages over other methods such as TLS (Terrestrial Laser Scanning) or terrestrial close range photogrammetry can be exploited in order for these techniques to complement each other.

  18. Coastal areas mapping using UAV photogrammetry

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos G.; Kozarski, Dimitrios; Kogkas, Stefanos

    2017-10-01

    The coastal areas in the Patras Gulf suffer degradation due to sea action and other natural and human-induced causes. Changes in beaches, ports, and other man-made constructions need to be assessed, both after severe events and on a regular basis, to build models that can predict the evolution in the future. Thus, reliable spatial data acquisition is a critical process for the identification of the coastline and the broader coastal zones for geologists and other scientists involved in the study of coastal morphology. In the past, high resolution satellite data, airphotos and airborne Lidar provided the necessary data for coastline monitoring. High-resolution digital surface models (DSMs) and orthophoto maps have become a necessity in order to map with accuracy all the variations in coastal environments. Recently, unmanned aerial vehicle (UAV) photogrammetry offers an alternative solution for the acquisition of high accuracy spatial data along the coastline. This paper presents the use of a UAV to map the coastline in the Rio area, Western Greece. Multiple photogrammetric aerial campaigns were performed. A small commercial UAV (DJI Phantom 3 Advanced) was used to acquire thousands of images with spatial resolutions better than 5 cm. Different photogrammetric software packages were used to orient the images, extract point clouds, build a digital surface model and produce orthoimage mosaics. In order to achieve the best positional accuracy, signalised ground control points were measured with a differential GNSS receiver. The results of this coastal monitoring programme proved that UAVs can replace many of the conventional surveys, with considerable gains in the cost of the data acquisition and without any loss in accuracy.

  19. The fusion of satellite and UAV data: simulation of high spatial resolution band

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

    Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery from satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, the simulation of panchromatic bands from the RGB data, based on a linear combination of the spectral channels, was conducted. Next, the Gram-Schmidt pansharpening method was applied to the simulated bands and multispectral satellite images. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
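
    The two fusion steps described here can be outlined in a few lines: simulate a panchromatic band as a weighted combination of the UAV RGB channels, then sharpen the resampled multispectral bands against it. The sketch below substitutes a simple Brovey-style intensity substitution for the Gram-Schmidt step used by the authors; weights and array shapes are assumptions.

      # Sketch of the fusion idea: simulate a pan band from the UAV RGB image,
      # then sharpen resampled multispectral bands by intensity substitution
      # (a Brovey-like stand-in for the Gram-Schmidt method used in the paper).
      import numpy as np

      def simulate_pan(rgb, weights=(0.3, 0.5, 0.2)):
          """rgb: (H, W, 3) UAV image -> simulated panchromatic band (H, W)."""
          w = np.asarray(weights, dtype=float)
          return rgb.astype(float) @ (w / w.sum())

      def brovey_pansharpen(ms_upsampled, pan, eps=1e-9):
          """ms_upsampled: (H, W, B) multispectral bands resampled to the pan grid."""
          intensity = ms_upsampled.mean(axis=-1, keepdims=True)
          return ms_upsampled * (pan[..., None] / (intensity + eps))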

  20. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black-box algorithms. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motive, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  1. Submillimeter video imaging with a superconducting bolometer array

    NASA Astrophysics Data System (ADS)

    Becker, Daniel Thomas

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.

  2. USB video image controller used in CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Wenxuan; Wang, Yuxia; Fan, Hong

    2002-09-01

    The CMOS process is a mainstream VLSI technique with a high level of integration. The SE402 is a multifunction microcontroller which integrates image data I/O ports, clock control, exposure control and digital signal processing into one chip. The SE402 reduces the number of chips and the board area required. The paper focuses on the USB video image controller used with a CMOS image sensor and gives an application example for a digital still camera.

  3. UAV based hydromorphological mapping of a river reach to improve hydrodynamic numerical models

    NASA Astrophysics Data System (ADS)

    Lükő, Gabriella; Baranya, Sándor; Rüther, Nils

    2017-04-01

    Unmanned Aerial Vehicles (UAVs) are increasingly used in the field of engineering surveys. In river engineering, or in general, water resources engineering, UAV based measurements have a huge potential. For instance, indirect measurements of the flow discharge using e.g. large-scale particle image velocimetry (LSPIV), particle tracking velocimetry (PTV), space-time image velocimetry (STIV) or radars have become a real alternative to direct flow measurements. Besides flow detection, topographic surveys are also essential for river flow studies as the channel and floodplain geometry is the primary steering feature of the flow. UAVs can play an important role in this field, too. The widely used laser based topographic survey method (LIDAR) can be deployed on UAVs; moreover, the application of the Structure from Motion (SfM) method, which is based on images taken by UAVs, might be an even more cost-efficient alternative to reveal the geometry of distinct objects in the river or on the floodplain. The goal of this study is to demonstrate the utilization of photogrammetry and videogrammetry from airborne footage to provide geometry and flow data for a hydrodynamic numerical simulation of a 2 km long river reach in Albania. First, the geometry of the river is revealed from photogrammetry using the SfM method. Second, a more detailed view of the channel bed at low water level is taken. Using the fine resolution images, a Matlab-based code, BASEGrain, developed at ETH Zürich, will be applied to determine the grain size characteristics of the river bed. This information will be essential to define the hydraulic roughness in the numerical model. Third, flow mapping is performed using UAV measurements and the LSPIV method to quantitatively assess the flow field at the free surface and to estimate the discharge in the river. All data collection and analysis will be carried out using a simple, low-cost UAV, moreover, for all the data processing, open source, freely available

  4. Vision Based Obstacle Detection in Uav Imaging

    NASA Astrophysics Data System (ADS)

    Badrloo, S.; Varshosaz, M.

    2017-08-01

    Detecting and avoiding collisions with obstacles is crucial in UAV navigation and control. Most common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which exploit the apparent enlargement of an obstacle as it is approached. A recent study in this field concentrated on matching SIFT points, along with the SIFT size-ratio factor and the area-ratio of convex hulls in two consecutive frames, to detect obstacles. That method is not able to distinguish between near and far obstacles or obstacles in complex environments, and is sensitive to wrongly matched points. In order to solve the above mentioned problems, this research calculates the dist-ratio of matched points. Then, each point is investigated to distinguish between far and close obstacles. The results demonstrated the high efficiency of the proposed method in complex environments.
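
    The dist-ratio cue can be prototyped with standard OpenCV SIFT matching: if the pairwise distances among matched points grow from one frame to the next, the object is expanding in the image and is therefore approaching. The ratio-test value and the use of a median are illustrative choices, not the paper's exact procedure.

      # Sketch of the dist-ratio expansion cue between two consecutive frames:
      # match SIFT keypoints, then compare pairwise distances among the matched
      # points; a median ratio above 1 suggests a looming (approaching) obstacle.
      import cv2
      import numpy as np
      from itertools import combinations

      def expansion_ratio(frame_prev, frame_curr, ratio_test=0.75):
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(frame_prev, None)
          kp2, des2 = sift.detectAndCompute(frame_curr, None)
          good = []
          for pair in cv2.BFMatcher().knnMatch(des1, des2, k=2):
              if len(pair) == 2 and pair[0].distance < ratio_test * pair[1].distance:
                  good.append(pair[0])
          p1 = np.float32([kp1[m.queryIdx].pt for m in good])
          p2 = np.float32([kp2[m.trainIdx].pt for m in good])
          ratios = [np.linalg.norm(p2[i] - p2[j]) / (np.linalg.norm(p1[i] - p1[j]) + 1e-9)
                    for i, j in combinations(range(len(good)), 2)]
          return float(np.median(ratios)) if ratios else 1.0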

  5. 3d Modeling with Photogrammetry by Uavs and Model Quality Verification

    NASA Astrophysics Data System (ADS)

    Barrile, V.; Bilotta, G.; Nunnari, A.

    2017-11-01

    This paper deals with a test led by the Geomatics laboratory (DICEAM, Mediterranea University of Reggio Calabria) concerning the application of UAV photogrammetry for survey, monitoring and checking. The case study concerns the surroundings of the Department of Agriculture Sciences. In recent years, this area has been affected by landslides, and survey activities were carried out to keep the phenomenon under control. For this purpose, a set of digital images was acquired through a UAV equipped with a digital camera and GPS. Subsequently, the processing for the production of a 3D georeferenced model was performed using the commercial software Agisoft PhotoScan. Similarly, the use of a terrestrial laser scanning technique allowed the production of dense clouds and 3D models of the same area. To assess the accuracy of the UAV-derived 3D models, a comparison between image- and range-based methods was performed.

  6. Method and apparatus for reading meters from a video image

    DOEpatents

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.

  7. Orthorectification, mosaicking, and analysis of sub-decimeter resolution UAV imagery for rangeland monitoring

    USDA-ARS?s Scientific Manuscript database

    Unmanned aerial vehicles (UAVs) offer an attractive platform for acquiring imagery for rangeland monitoring. UAVs can be deployed quickly and repeatedly, and they can obtain sub-decimeter resolution imagery at lower image acquisition costs than with piloted aircraft. Low flying heights result in ima...

  8. Applications of UAVs in row-crop agriculture: advantages and limitations

    NASA Astrophysics Data System (ADS)

    Basso, B.; Putnam, G.; Price, R.; Zhang, J.

    2016-12-01

    The application of Unmanned Aerial Vehicles (UAVs) to monitor agricultural fields has increased over the last few years due to advances in the technology, sensors, and post-processing software for image analysis, along with more favorable regulations that allow UAVs to be flown for commercial use. UAVs have several capabilities depending on the type of sensors that are mounted onboard. The most widely used application remains crop scouting to identify areas within fields where the crops underperform for various reasons (nutritional status and water stress, presence of weeds, poor stands, etc.). In this talk, we present the preliminary results of UAV field-based research to better understand the spatial and temporal variability of crop yield. The advantage of UAVs in providing timely information is critical, but adaptive management requires a systems approach to account for the interactions occurring between genetics, environment and management.

  9. The future of structural fieldwork - UAV assisted aerial photogrammetry

    NASA Astrophysics Data System (ADS)

    Vollgger, Stefan; Cruden, Alexander

    2015-04-01

    Unmanned aerial vehicles (UAVs), commonly referred to as drones, are opening new and low cost possibilities to acquire high-resolution aerial images and digital surface models (DSM) for applications in structural geology. UAVs can be programmed to fly autonomously along a user defined grid to systematically capture high-resolution photographs, even in difficult to access areas. The photographs are subsequently processed using software that employ SIFT (scale invariant feature transform) and SFM (structure from motion) algorithms. These photogrammetric routines allow the extraction of spatial information (3D point clouds, digital elevation models, 3D meshes, orthophotos) from 2D images. Depending on flight altitude and camera setup, sub-centimeter spatial resolutions can be achieved. By "digitally mapping" georeferenced 3D models and images, orientation data can be extracted directly and used to analyse the structural framework of the mapped object or area. We present UAV assisted aerial mapping results from a coastal platform near Cape Liptrap (Victoria, Australia), where deformed metasediments of the Palaeozoic Lachlan Fold Belt are exposed. We also show how orientation and spatial information of brittle and ductile structures extracted from the photogrammetric model can be linked to the progressive development of folds and faults in the region. Even though there are both technical and legislative limitations, which might prohibit the use of UAVs without prior commercial licensing and training, the benefits that arise from the resulting high-resolution, photorealistic models can substantially contribute to the collection of new data and insights for applications in structural geology.

  10. Possibilities of Use of UAVS for Technical Inspection of Buildings and Constructions

    NASA Astrophysics Data System (ADS)

    Banaszek, Anna; Banaszek, Sebastian; Cellmer, Anna

    2017-12-01

    In recent years, Unmanned Aerial Vehicles (UAVs) have been used in various sectors of the economy. This is due to the development of new technologies for acquiring and processing geospatial data. The paper presents the results of experiments using UAV, equipped with a high resolution digital camera, for a visual assessment of the technical condition of the building roof and for the inventory of energy infrastructure and its surroundings. The usefulness of digital images obtained from the UAV deck is presented in concrete examples. The use of UAV offers new opportunities in the area of technical inspection due to the detail and accuracy of the data, low operating costs and fast data acquisition.

  11. Does Instructor's Image Size in Video Lectures Affect Learning Outcomes?

    ERIC Educational Resources Information Center

    Pi, Z.; Hong, J.; Yang, J.

    2017-01-01

    One of the most commonly used forms of video lectures is a combination of an instructor's image and accompanying lecture slides as a picture-in-picture. As the image size of the instructor varies significantly across video lectures, and so do the learning outcomes associated with this technology, the influence of the instructor's image size should…

  12. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1995-12-31

    A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.

  13. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1997-09-30

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower. 1 fig.

  14. Multi-UAV Routing for Area Coverage and Remote Sensing with Minimum Time

    PubMed Central

    Avellar, Gustavo S. C.; Pereira, Guilherme A. S.; Pimenta, Luciano C. A.; Iscold, Paulo

    2015-01-01

    This paper presents a solution for the problem of minimum time coverage of ground areas using a group of unmanned air vehicles (UAVs) equipped with image sensors. The solution is divided into two parts: (i) the task modeling as a graph whose vertices are geographic coordinates determined in such a way that a single UAV would cover the area in minimum time; and (ii) the solution of a mixed integer linear programming problem, formulated according to the graph variables defined in the first part, to route the team of UAVs over the area. The main contribution of the proposed methodology, when compared with the traditional vehicle routing problem’s (VRP) solutions, is the fact that our method solves some practical problems only encountered during the execution of the task with actual UAVs. In this line, one of the main contributions of the paper is that the number of UAVs used to cover the area is automatically selected by solving the optimization problem. The number of UAVs is influenced by the vehicles’ maximum flight time and by the setup time, which is the time needed to prepare and launch a UAV. To illustrate the methodology, the paper presents experimental results obtained with two hand-launched, fixed-wing UAVs. PMID:26540055
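
    The trade-off described above, where adding UAVs shortens the flight but each extra vehicle adds setup time, can be illustrated with a much simpler calculation than the paper's mixed-integer program. The brute-force sketch below assumes the coverage path is split evenly among the team and that UAVs are launched one after another; all numbers and the even-split assumption are illustrative only.

      def best_team_size(total_path_km, speed_kmh, endurance_h, setup_h, k_max=10):
          """Return (k, completion time in hours) for the team size k that
          minimizes mission completion time, subject to per-UAV endurance."""
          best = None
          for k in range(1, k_max + 1):
              flight_h = total_path_km / (k * speed_kmh)    # per-UAV flight time
              if flight_h > endurance_h:
                  continue                                   # infeasible team size
              completion_h = (k - 1) * setup_h + flight_h    # last UAV launched last
              if best is None or completion_h < best[1]:
                  best = (k, completion_h)
          return best

      # 60 km coverage path, 60 km/h airspeed, 30 min endurance, 6 min setup per UAV
      print(best_team_size(60.0, 60.0, 0.5, 0.1))            # -> (3, 0.533...)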

  15. Multi-UAV Routing for Area Coverage and Remote Sensing with Minimum Time.

    PubMed

    Avellar, Gustavo S C; Pereira, Guilherme A S; Pimenta, Luciano C A; Iscold, Paulo

    2015-11-02

    This paper presents a solution for the problem of minimum time coverage of ground areas using a group of unmanned air vehicles (UAVs) equipped with image sensors. The solution is divided into two parts: (i) the task modeling as a graph whose vertices are geographic coordinates determined in such a way that a single UAV would cover the area in minimum time; and (ii) the solution of a mixed integer linear programming problem, formulated according to the graph variables defined in the first part, to route the team of UAVs over the area. The main contribution of the proposed methodology, when compared with the traditional vehicle routing problem's (VRP) solutions, is the fact that our method solves some practical problems only encountered during the execution of the task with actual UAVs. In this line, one of the main contributions of the paper is that the number of UAVs used to cover the area is automatically selected by solving the optimization problem. The number of UAVs is influenced by the vehicles' maximum flight time and by the setup time, which is the time needed to prepare and launch a UAV. To illustrate the methodology, the paper presents experimental results obtained with two hand-launched, fixed-wing UAVs.

  16. Multi-UAV Collaborative Sensor Management for UAV Team Survivability

    DTIC Science & Technology

    2006-08-01

    Multi-UAV Collaborative Sensor Management for UAV Team Survivability. Craig Stoneking, Phil DiBona, and Adria Hughes, Lockheed Martin Advanced...Command, Aviation Applied Technology Directorate. REFERENCES: [1] DiBona, P., Belov, N., Pawlowski, A. (2006). “Plan-Driven Fusion: Shaping the

  17. a Metadata Based Approach for Analyzing Uav Datasets for Photogrammetric Applications

    NASA Astrophysics Data System (ADS)

    Dhanda, A.; Remondino, F.; Santana Quintero, M.

    2018-05-01

    This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Utilizing user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
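
    A minimal sketch of the overlap computation such a metadata-driven filter relies on, assuming nadir images, a pin-hole camera model, and flying height, focal length and sensor size read from the image metadata; the numbers in the example are illustrative, not taken from the paper.

      def ground_footprint_m(altitude_m, focal_mm, sensor_side_mm):
          """Ground extent covered by one sensor side of a nadir image:
          footprint = H * sensor_side / f (pin-hole model)."""
          return altitude_m * sensor_side_mm / focal_mm

      def end_overlap(baseline_m, altitude_m, focal_mm, sensor_along_track_mm):
          """Forward (end) overlap between consecutive nadir images whose
          projection centres are baseline_m apart along the flight line."""
          footprint = ground_footprint_m(altitude_m, focal_mm, sensor_along_track_mm)
          return max(0.0, 1.0 - baseline_m / footprint)

      # 80 m flying height, 8.8 mm lens, 8.8 mm sensor side, one exposure every 20 m
      print(end_overlap(baseline_m=20.0, altitude_m=80.0,
                        focal_mm=8.8, sensor_along_track_mm=8.8))   # -> 0.75

    Images whose overlap with the previously kept frame exceeds the user-specified target can then be dropped before photogrammetric processing.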

  18. Shigaraki UAV-Radar Experiment (ShUREX): overview of the campaign with some preliminary results

    NASA Astrophysics Data System (ADS)

    Kantha, Lakshmi; Lawrence, Dale; Luce, Hubert; Hashiguchi, Hiroyuki; Tsuda, Toshitaka; Wilson, Richard; Mixa, Tyler; Yabuki, Masanori

    2017-12-01

    The Shigaraki unmanned aerial vehicle (UAV)-Radar Experiment (ShUREX) is an international (USA-Japan-France) observational campaign, whose overarching goal is to demonstrate the utility of small, lightweight, inexpensive, autonomous UAVs in probing and monitoring the lower troposphere and to promote synergistic use of UAVs and very high frequency (VHF) radars. The 2-week campaign lasting from June 1 to June 14, 2015, was carried out at the Middle and Upper Atmosphere (MU) Observatory in Shigaraki, Japan. During the campaign, the DataHawk UAV, developed at the University of Colorado, Boulder, and equipped with high-frequency response cold wire and pitot tube sensors (as well as an iMET radiosonde), was flown near and over the VHF-band MU radar. Measurements in the atmospheric column in the immediate vicinity of the radar were obtained. Simultaneous and continuous operation of the radar in range imaging mode enabled fine-scale structures in the atmosphere to be visualized by the radar. It also permitted the UAV to be commanded to sample interesting structures, guided in near real time by the radar images. This overview provides a description of the ShUREX campaign and some interesting but preliminary results of the very first simultaneous and intensive probing of turbulent structures by UAVs and the MU radar. The campaign demonstrated the validity and utility of the radar range imaging technique in obtaining very high vertical resolution (~20 m) images of echo power in the atmospheric column, which display evolving fine-scale atmospheric structures in unprecedented detail. The campaign also permitted for the very first time the evaluation of the consistency of turbulent kinetic energy dissipation rates in turbulent structures inferred from the spectral broadening of the backscattered radar signal and direct, in situ measurements by the high-frequency response velocity sensor on the UAV. The data also enabled other turbulence parameters such as the temperature

  19. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple hypotheses tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem where GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.

  20. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video tele-conferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI-based Video-Servoing concepts, PI-based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  1. Feasibility of Using Synthetic Aperture Radar to Aid UAV Navigation

    PubMed Central

    Nitti, Davide O.; Bovenga, Fabio; Chiaradia, Maria T.; Greco, Mario; Pinelli, Gianpaolo

    2015-01-01

    This study explores the potential of Synthetic Aperture Radar (SAR) to aid Unmanned Aerial Vehicle (UAV) navigation when Inertial Navigation System (INS) measurements are not accurate enough to eliminate drifts from a planned trajectory. This problem can affect the medium-altitude long-endurance (MALE) UAV class, which permits heavy and wide payloads (as required by SAR) and flights for thousands of kilometres accumulating large drifts. The basic idea is to infer position and attitude of an aerial platform by inspecting both amplitude and phase of SAR images acquired onboard. For the amplitude-based approach, the system navigation corrections are obtained by matching the actual coordinates of ground landmarks with those automatically extracted from the SAR image. When the use of SAR amplitude is unfeasible, the phase content can be exploited through SAR interferometry by using a reference Digital Terrain Model (DTM). A feasibility analysis was carried out to derive system requirements by exploring both radiometric and geometric parameters of the acquisition setting. We showed that MALE UAV, specific commercial navigation sensors and SAR systems, typical landmark position accuracy and classes, and available DTMs lead to estimated UAV coordinates with errors bounded within ±12 m, thus making feasible the proposed SAR-based backup system. PMID:26225977

  2. Feasibility of Using Synthetic Aperture Radar to Aid UAV Navigation.

    PubMed

    Nitti, Davide O; Bovenga, Fabio; Chiaradia, Maria T; Greco, Mario; Pinelli, Gianpaolo

    2015-07-28

    This study explores the potential of Synthetic Aperture Radar (SAR) to aid Unmanned Aerial Vehicle (UAV) navigation when Inertial Navigation System (INS) measurements are not accurate enough to eliminate drifts from a planned trajectory. This problem can affect medium-altitude long-endurance (MALE) UAV class, which permits heavy and wide payloads (as required by SAR) and flights for thousands of kilometres accumulating large drifts. The basic idea is to infer position and attitude of an aerial platform by inspecting both amplitude and phase of SAR images acquired onboard. For the amplitude-based approach, the system navigation corrections are obtained by matching the actual coordinates of ground landmarks with those automatically extracted from the SAR image. When the use of SAR amplitude is unfeasible, the phase content can be exploited through SAR interferometry by using a reference Digital Terrain Model (DTM). A feasibility analysis was carried out to derive system requirements by exploring both radiometric and geometric parameters of the acquisition setting. We showed that MALE UAV, specific commercial navigation sensors and SAR systems, typical landmark position accuracy and classes, and available DTMs lead to estimated UAV coordinates with errors bounded within ±12 m, thus making feasible the proposed SAR-based backup system.

  3. Multi-Unmanned Aerial Vehicle (UAV) Cooperative Fault Detection Employing Differential Global Positioning (DGPS), Inertial and Vision Sensors.

    PubMed

    Heredia, Guillermo; Caballero, Fernando; Maza, Iván; Merino, Luis; Viguria, Antidio; Ollero, Aníbal

    2009-01-01

    This paper presents a method to increase the reliability of Unmanned Aerial Vehicle (UAV) sensor Fault Detection and Identification (FDI) in a multi-UAV context. Differential Global Positioning System (DGPS) and inertial sensors are used for sensor FDI in each UAV. The method uses additional position estimations that augment individual UAV FDI system. These additional estimations are obtained using images from the same planar scene taken from two different UAVs. Since accuracy and noise level of the estimation depends on several factors, dynamic replanning of the multi-UAV team can be used to obtain a better estimation in case of faults caused by slow growing errors of absolute position estimation that cannot be detected by using local FDI in the UAVs. Experimental results with data from two real UAVs are also presented.

  4. New generation of human machine interfaces for controlling UAV through depth-based gesture recognition

    NASA Astrophysics Data System (ADS)

    Mantecón, Tomás; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso

    2014-06-01

    New forms of natural interactions between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than other alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique to efficiently work with depth video sequences. Another major consideration is the special hand sign language used for UAV control. A tradeoff between the use of natural hand signs and the minimization of the inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures specially developed for the validation of the proposed system.
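
    The classification pipeline sketched above (LBP-style descriptors on depth frames feeding an SVM) can be prototyped in a few lines. The sketch below uses a plain per-frame LBP histogram concatenated over a short clip as a simplified stand-in for the paper's spatio-temporal depth descriptor; the random depth clips and labels are placeholders.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.svm import SVC

      def depth_clip_descriptor(frames, P=8, R=1, n_bins=10):
          """Concatenate uniform-LBP histograms of each depth frame in a clip."""
          hists = []
          for frame in frames:                        # frames: (T, H, W) depth clip
              lbp = local_binary_pattern(frame, P, R, method="uniform")
              h, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
              hists.append(h)
          return np.concatenate(hists)

      rng = np.random.default_rng(0)                  # placeholder depth data and labels
      clips = [rng.integers(0, 2000, size=(5, 64, 64), dtype=np.uint16) for _ in range(20)]
      X = np.array([depth_clip_descriptor(c) for c in clips])
      y = np.array([0, 1] * 10)                       # two hypothetical hand signs
      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict(X[:2]))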

  5. The Small Whiskbroom Imager for atmospheric compositioN monitorinG (SWING) and its operations from an unmanned aerial vehicle (UAV) during the AROMAT campaign

    NASA Astrophysics Data System (ADS)

    Merlaud, Alexis; Tack, Frederik; Constantin, Daniel; Georgescu, Lucian; Maes, Jeroen; Fayt, Caroline; Mingireanu, Florin; Schuettemeyer, Dirk; Meier, Andreas Carlos; Schönardt, Anja; Ruhtz, Thomas; Bellegante, Livio; Nicolae, Doina; Den Hoed, Mirjam; Allaart, Marc; Van Roozendael, Michel

    2018-01-01

    The Small Whiskbroom Imager for atmospheric compositioN monitorinG (SWING) is a compact remote sensing instrument dedicated to mapping trace gases from an unmanned aerial vehicle (UAV). SWING is based on a compact visible spectrometer and a scanning mirror to collect scattered sunlight. Its weight, size, and power consumption are respectively 920 g, 27 cm × 12 cm × 8 cm, and 6 W. SWING was developed in parallel with a 2.5 m flying-wing UAV. This unmanned aircraft is electrically powered, has a typical airspeed of 100 km h^-1, and can operate at a maximum altitude of 3 km. We present SWING-UAV experiments performed in Romania on 11 September 2014 during the Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT) campaign, which was dedicated to test newly developed instruments in the context of air quality satellite validation. The UAV was operated up to 700 m above ground, in the vicinity of the large power plant of Turceni (44.67° N, 23.41° E; 116 m a.s.l.). These SWING-UAV flights were coincident with another airborne experiment using the Airborne imaging differential optical absorption spectroscopy (DOAS) instrument for Measurements of Atmospheric Pollution (AirMAP), and with ground-based DOAS, lidar, and balloon-borne in situ observations. The spectra recorded during the SWING-UAV flights are analysed with the DOAS technique. This analysis reveals NO2 differential slant column densities (DSCDs) up to 13 ± 0.6 × 10^16 molec cm^-2. These NO2 DSCDs are converted to vertical column densities (VCDs) by estimating air mass factors. The resulting NO2 VCDs are up to 4.7 ± 0.4 × 10^16 molec cm^-2. The water vapour DSCD measurements, up to 8 ± 0.15 × 10^22 molec cm^-2, are used to estimate a volume mixing ratio of water vapour in the boundary layer of 0.013 ± 0.002 mol mol^-1. These geophysical quantities are validated with the coincident measurements.
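
    The conversion from differential slant to vertical columns mentioned above is, at its core, a division by the air mass factor. The sketch below shows that step only; the AMF value is an assumed illustration (chosen so that the example reproduces the order of magnitude reported above), not a number from the paper.

      def vertical_column(dscd, amf):
          """Convert a DOAS differential slant column density (molec cm^-2)
          into a vertical column density: VCD = DSCD / AMF."""
          return dscd / amf

      # NO2 DSCD of 1.3e17 molec cm^-2 with an assumed air mass factor of 2.8
      print(vertical_column(1.3e17, 2.8))             # ~4.6e16 molec cm^-2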

  6. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help to assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as of reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfills formal aspects of the image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were developed in tight cooperation between scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Estimating evaporation with thermal UAV data and two-source energy balance models

    NASA Astrophysics Data System (ADS)

    Hoffmann, H.; Nieto, H.; Jensen, R.; Guzinski, R.; Zarco-Tejada, P.; Friborg, T.

    2016-02-01

    Estimating evaporation is important when managing water resources and cultivating crops. Evaporation can be estimated using land surface heat flux models and remotely sensed land surface temperatures (LST), which have recently become obtainable at very high resolution using lightweight thermal cameras and Unmanned Aerial Vehicles (UAVs). In this study a thermal camera was mounted on a UAV and applied to the study of heat fluxes and hydrology by concatenating thermal images into mosaics of LST and using these as input for the two-source energy balance (TSEB) modelling scheme. Thermal images were obtained with a fixed-wing UAV overflying a barley field in western Denmark during the growing season of 2014, and a spatial resolution of 0.20 m was obtained in the final LST mosaics. Two models are used: the original TSEB model (TSEB-PT) and a dual-temperature-difference (DTD) model. In contrast to the TSEB-PT model, the DTD model accounts for the bias that is likely present in remotely sensed LST. TSEB-PT and DTD have already been well tested, however only during sunny weather conditions and with satellite images serving as thermal input. The aim of this study is to assess whether a lightweight thermal camera mounted on a UAV is able to provide data of sufficient quality to serve as model input and thus attain accurate surface energy heat fluxes at high spatial and temporal resolution, with special focus on the latent heat flux (evaporation). Furthermore, this study evaluates the performance of the TSEB scheme during cloudy and overcast weather conditions, which is feasible due to the low data retrieval altitude (low UAV flying altitude), in contrast to satellite thermal data that are only available during clear-sky conditions. TSEB-PT and DTD fluxes are compared and validated against eddy covariance measurements, and the comparison shows that both TSEB-PT and DTD simulations are in good agreement with eddy covariance measurements, with DTD obtaining the best results. The
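
    At the heart of the TSEB scheme is the surface energy balance closure, in which the latent heat flux (evaporation) is obtained as the residual of net radiation, soil heat flux and the sensible heat fluxes from canopy and soil. The sketch below shows only that closure, with made-up flux values; it is a schematic of the balance, not the full TSEB-PT or DTD model.

      def latent_heat_flux(rn, g, h_canopy, h_soil):
          """Residual latent heat flux in W m^-2: LE = Rn - G - (Hc + Hs)."""
          return rn - g - (h_canopy + h_soil)

      # Illustrative midday fluxes over a barley field (W m^-2)
      print(latent_heat_flux(rn=520.0, g=60.0, h_canopy=110.0, h_soil=90.0))   # -> 260.0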

  8. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and found new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  9. High Resolution UAV-based Passive Microwave L-band Imaging of Soil Moisture

    NASA Astrophysics Data System (ADS)

    Gasiewski, A. J.; Stachura, M.; Elston, J.; McIntyre, E. M.

    2013-12-01

    Due to long electrical wavelengths and aperture size limitations, the scaling of passive microwave remote sensing of soil moisture from spaceborne low-resolution applications to high-resolution applications suitable for precision agriculture requires the use of low-flying aerial vehicles. This presentation summarizes a project to develop a commercial Unmanned Aerial Vehicle (UAV) hosting a precision microwave radiometer for mapping of soil moisture in high-value shallow root-zone crops. The project is based on the use of the Tempest electric-powered UAV and a compact digital L-band (1400-1427 MHz) passive microwave radiometer developed specifically for extremely small and lightweight aerial platforms or man-portable, tractor, or tower-based applications. Notable in this combination are a highly integrated UAV/radiometer antenna design and use of both the upwelling emitted signal from the surface and downwelling cold space signal for precise calibration using a lobe-correlating radiometer architecture. The system achieves a spatial resolution comparable to the altitude of the UAV above the ground while referencing upwelling measurements to the constant and well-known background temperature of cold space. The radiometer incorporates digital sampling and radio frequency interference mitigation along with infrared, near-infrared, and visible (red) sensors for surface temperature and vegetation biomass correction. This NASA-sponsored project is being developed for commercial application in cropland water management, L-band satellite validation, and estuarine plume studies.

  10. Monitoring of rock glacier dynamics by multi-temporal UAV images

    NASA Astrophysics Data System (ADS)

    Morra di Cella, Umberto; Pogliotti, Paolo; Diotri, Fabrizio; Cremonese, Edoardo; Filippa, Gianluca; Galvagno, Marta

    2015-04-01

    During the last years several steps forward have been made in the comprehension of rock glacier dynamics, mainly because of their potential evolution into rapid mass movement phenomena. Monitoring the surface movement of creeping mountain permafrost is important for understanding the potential effect of ongoing climate change on such landforms. This study presents the reconstruction of two years of surface movements and DEM changes obtained by multi-temporal analysis of UAV images (provided by a SenseFly Swinglet CAM drone). The movement rates obtained by photogrammetry are compared to those obtained by repeated differential GNSS campaigns on almost fifty points distributed over the rock glacier. Results reveal very good agreement between the velocities obtained by the two methods and between the vertical displacements on fixed points. Strengths, weaknesses and practical considerations of the method are discussed. The method is very promising, mainly for remote regions with difficult access.

  11. UAV Cooperation Architectures for Persistent Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, R S; Kent, C A; Jones, E D

    2003-03-20

    With the number of small, inexpensive Unmanned Air Vehicles (UAVs) increasing, it is feasible to build multi-UAV sensing networks. In particular, by using UAVs in conjunction with unattended ground sensors, a degree of persistent sensing can be achieved. With proper UAV cooperation algorithms, sensing is maintained even though exceptional events, e.g., the loss of a UAV, have occurred. In this paper a cooperation technique that allows multiple UAVs to perform coordinated, persistent sensing with unattended ground sensors over a wide area is described. The technique automatically adapts the UAV paths so that on the average, the amount of time that any sensor has to wait for a UAV revisit is minimized. We also describe the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture. This architecture is designed to help simulate and operate distributed sensor networks where multiple UAVs are used to collect data.

  12. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we mainly present a novel and real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time. The burden on physicians' disease-finding efforts is thus considerable. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range imagery of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail to work. This paper presents a novel concept for real-time CE video stabilization and display. Instead of directly working on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, will be presented. In addition, non-rigid panoramic image registration methods will be discussed.

  13. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with an inbuilt graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  14. The Analysis of Burrows Recognition Accuracy in XINJIANG'S Pasture Area Based on Uav Visible Images with Different Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Sun, D.; Zheng, J. H.; Ma, T.; Chen, J. J.; Li, X.

    2018-04-01

    The rodent disaster is one of the main biological disasters affecting grassland in northern Xinjiang. Rodents' eating and digging behaviors destroy ground vegetation, which seriously affects the development of animal husbandry and grassland ecological security. UAV low-altitude remote sensing, as an emerging technique with high spatial resolution, can effectively recognize the burrows. However, how to select the appropriate spatial resolution to monitor the rodent disaster is the first problem that needs attention. The purpose of this study is to explore the optimal spatial scale for identification of the burrows by evaluating the impact of different spatial resolutions on burrow identification accuracy. In this study, we photographed burrows from different flight heights to obtain visible images of different spatial resolutions. An object-oriented method is then used to identify the burrows, and the accuracy of the classification is evaluated. We found that the average classification accuracy of the burrows reached more than 80 %, and that the classification accuracy is highest at an altitude of 24 m and a spatial resolution of 1 cm. We have created a unique and effective way to identify burrows using UAV visible images. We draw the following conclusion: the best spatial resolution for burrow recognition is 1 cm using a DJI PHANTOM-3 UAV, and an improvement in spatial resolution does not necessarily lead to an improvement in classification accuracy. This study lays the foundation for future research and can be extended to similar studies elsewhere.
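
    The link between flight height and spatial resolution used above follows from the standard ground sample distance relation GSD = H * p / f, where H is the flying height, p the pixel pitch and f the focal length. The camera values below are rough, assumed figures for a small-sensor Phantom-class camera, used only to show that roughly 1 cm per pixel at about 24 m is plausible.

      def ground_sample_distance(altitude_m, pixel_pitch_um, focal_mm):
          """Nadir ground sample distance in m/pixel: GSD = H * p / f."""
          return altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

      # Assumed values: 1.56 um pixel pitch, 3.6 mm focal length, 24 m flying height
      print(ground_sample_distance(24.0, 1.56, 3.6))   # ~0.0104 m, i.e. about 1 cm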

  15. Small Whiskbroom Imager for atmospheric compositioN monitorinG (SWING) from an Unmanned Aerial Vehicle (UAV): Results from the 2014 AROMAT campaign

    NASA Astrophysics Data System (ADS)

    Merlaud, Alexis; Tack, Frederik; Constantin, Daniel; Fayt, Caroline; Maes, Jeroen; Mingireanu, Florin; Mocanu, Ionut; Georgescu, Lucian; Van Roozendael, Michel

    2015-04-01

    The Small Whiskbroom Imager for atmospheric compositioN monitorinG (SWING) is an instrument dedicated to atmospheric trace gas retrieval from an Unmanned Aerial Vehicle (UAV). The payload is based on a compact visible spectrometer and a scanning mirror to collect scattered sunlight. Its weight, size, and power consumption are respectively 920 g, 27 × 12 × 12 cm^3, and 6 W. The custom-built 2.5 m flying-wing UAV is electrically powered, has a typical airspeed of 100 km/h, and can operate at a maximum altitude of 3 km. Both the payload and the UAV were developed in the framework of a collaboration between the Belgian Institute for Space Aeronomy (BIRA-IASB) and the Dunarea de Jos University of Galati, Romania. We present here SWING-UAV test flights dedicated to NO2 measurements and performed in Romania on 10 and 11 September 2014, during the Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT) campaign. The UAV performed 5 flights in the vicinity of the large thermal power station of Turceni (44.67° N, 23.4° E). The UAV was operated in visual range during the campaign, up to 900 m AGL, downwind of the plant and crossing its exhaust plume. The spectra recorded in flight are analyzed with the Differential Optical Absorption Spectroscopy (DOAS) method. The retrieved NO2 Differential Slant Column Densities (DSCDs) are up to 1.5 × 10^17 molec/cm^2 and reveal the horizontal gradients around the plant. The DSCDs are converted to vertical columns and compared with coincident car-based DOAS measurements. We also present the near-future perspective of the SWING-UAV observation system, which includes flights in 2015 above the Black Sea to quantify ship emissions, the addition of SO2 as a target species, and autopilot flights at higher altitudes to cover a typical satellite pixel extent (10 × 10 km^2).

  16. Using Unmanned Aerial Vehicles (UAVs) to Modeling Tornado Impacts

    NASA Astrophysics Data System (ADS)

    Wagner, M.; Doe, R. K.

    2017-12-01

    Using Unmanned Aerial Vehicles (UAVs) to assess storm damage is a useful research tool. Benefits include their ability to access remote or impassable areas post-storm, identify unknown damage and assist with more detailed site investigations and rescue efforts. Technological advancements mean that UAVs can capture high-resolution images, often at an affordable price. These images can be used to create 3D environments to better interpret and delineate damage across large areas that would have been difficult to cover in ground surveys. This research presents the results of a rapid-response site investigation of the 29 April 2017 Canton, Texas, USA, tornado using low-cost UAVs. This was a multiple, high-impact tornado event measuring EF4 at maximum. Rural farmland was chosen as a challenging location to test both equipment and methodology. Such locations provide multiple impacts at a variety of scales including structural and vegetation damage and even animal fatalities. The 3D impact models allow for a more comprehensive study prior to clean-up. The results show previously unseen damage and better quantify damage impacts at the local level. 3D digital track swaths were created, allowing for a more accurate track width determination. These results demonstrate how effective the use of low-cost UAVs can be for rapid-response storm damage assessments, the high quality of data they can achieve, and how they can help us better visualize tornado site investigations.

  17. Evaluation of a video image detection system : final report.

    DOT National Transportation Integrated Search

    1994-05-01

    A video image detection system (VIDS) is an advanced wide-area traffic monitoring system : that processes input from a video camera. The Autoscope VIDS coupled with an information : management system was selected as the monitoring device because test...

  18. SUSI 62 A Robust and Safe Parachute Uav with Long Flight Time and Good Payload

    NASA Astrophysics Data System (ADS)

    Thamm, H. P.

    2011-09-01

    In many research areas in the geo-sciences (erosion, land use, land cover change, etc.) or applications (e.g. forest management, mining, land management etc.) there is a demand for remote sensing images of a very high spatial and temporal resolution. Due to the high costs of classic aerial photo campaigns, the use of a UAV is a promising option for obtaining the desired remote sensed information at the time it is needed. However, the UAV must be easy to operate, safe, robust and should have a high payload and long flight time. For that purpose, the parachute UAV SUSI 62 was developed. It consists of a steel frame with a powerful 62 cm3 2-stroke engine and a parachute wing. The frame can be easily disassembled for transportation or to replace parts. On the frame there is a gimbal mounted sensor carrier where different sensors, standard SLR cameras and/or multi-spectral and thermal sensors can be mounted. Due to the design of the parachute, the SUSI 62 is very easy to control. Two different parachute sizes are available for different wind speed conditions. The SUSI 62 has a payload of up to 8 kg providing options to use different sensors at the same time or to extend flight duration. The SUSI 62 needs a runway of between 10 m and 50 m, depending on the wind conditions. The maximum flight speed is approximately 50 km/h. It can be operated in a wind speed of up to 6 m/s. The design of the system utilising a parachute UAV makes it comparatively safe as a failure of the electronics or the remote control only results in the UAV coming to the ground at a slow speed. The video signal from the camera, the GPS coordinates and other flight parameters are transmitted to the ground station in real time. An autopilot is available, which guarantees that the area of investigation is covered at the desired resolution and overlap. The robustly designed SUSI 62 has been used successfully in Europe, Africa and Australia for scientific projects and also for agricultural, forestry and

  19. Rapid mapping of landslide disaster using UAV- photogrammetry

    NASA Astrophysics Data System (ADS)

    Cahyono, A. B.; Zayd, R. A.

    2018-03-01

    Unmanned Aerial Vehicle (UAV) systems offer many advantages in several mapping applications such as slope mapping, geohazard studies, etc. This study utilizes a UAV system to map a landslide disaster that occurred in Jombang Regency, East Java. A rotary-wing UAV was chosen because such units are stable and able to capture images easily. Aerial photographs were acquired in strips following standard aerial photography procedure, and 60 photos were taken. Ground control points were established with geodetic GPS and check points with a total station. The digital camera was calibrated using close-range photogrammetric software and the recovered camera calibration parameters were then used in the processing of the digital images. All the aerial photographs were processed using digital photogrammetric software and an orthophoto was produced as output. The final result is a 1:1500 scale orthophoto map, processed with an SfM algorithm, with a GSD accuracy of 3.45 cm. The volume calculated from contour-line delineation is 10,527.03 m3, which differs from the result of the terrestrial method by 964.67 m3, or about 8.4 %.

  20. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
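
    As a rough illustration of an entropy-based patch heterogeneity measure (inspired by, but much simpler than, the HIP index defined in the paper), the sketch below quantizes each non-overlapping patch by its mean intensity and returns the Shannon entropy of the resulting patch-type distribution; evaluating it per frame yields a curve analogous to the HIP curve. All parameters and the quantization scheme are illustrative.

      import numpy as np

      def patch_heterogeneity(image, patch=8, n_bins=16):
          """Entropy (bits) of the distribution of patch mean intensities."""
          h, w = image.shape
          means = [image[r:r + patch, c:c + patch].mean()
                   for r in range(0, h - patch + 1, patch)
                   for c in range(0, w - patch + 1, patch)]
          hist, _ = np.histogram(means, bins=n_bins, range=(0, 256))
          p = hist / hist.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      frame = np.random.default_rng(1).integers(0, 256, size=(128, 128))
      print(patch_heterogeneity(frame))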

  1. Experiment on Uav Photogrammetry and Terrestrial Laser Scanning for Ict-Integrated Construction

    NASA Astrophysics Data System (ADS)

    Takahashi, N.; Wakutsu, R.; Kato, T.; Wakaizumi, T.; Ooishi, T.; Matsuoka, R.

    2017-08-01

    In the 2016 fiscal year the Ministry of Land, Infrastructure, Transport and Tourism of Japan started a program integrating construction and ICT in earthwork and concrete placing. The new program named "i-Construction" focusing on productivity improvement adopts such new technologies as UAV photogrammetry and TLS. We report a field experiment to investigate whether the procedures of UAV photogrammetry and TLS following the standards for "i-Construction" are feasible or not. In the experiment we measured an embankment of about 80 metres by 160 metres immediately after earthwork was done on the embankment. We used two sets of UAV and camera in the experiment. One is a larger UAV enRoute Zion QC730 and its onboard camera Sony α6000. The other is a smaller UAV DJI Phantom 4 and its dedicated onboard camera. Moreover, we used a terrestrial laser scanner FARO Focus3D X330 based on the phase shift principle. The experiment results indicate that the procedures of UAV photogrammetry using a QC730 with an α6000 and TLS using a Focus3D X330 following the standards for "i-Construction" would be feasible. Furthermore, the experiment results show that UAV photogrammetry using a lower price UAV Phantom 4 was unable to satisfy the accuracy requirement for "i-Construction." The cause of the low accuracy by Phantom 4 is under investigation. We also found that the difference of image resolution on the ground would not have a great influence on the measurement accuracy in UAV photogrammetry.

  2. Cloud-Assisted UAV Data Collection for Multiple Emerging Events in Distributed WSNs.

    PubMed

    Cao, Huiru; Liu, Yongxin; Yue, Xuejun; Zhu, Wenjian

    2017-08-07

    In recent years, UAVs (Unmanned Aerial Vehicles) have been widely applied for data collection and image capture. Specifically, UAVs have been integrated with wireless sensor networks (WSNs) to create data collection platforms with high flexibility. However, most studies in this domain focus on system architecture and UAVs' flight trajectory planning, while event-related factors and other important issues are neglected. To address these challenges, we propose a cloud-assisted data gathering strategy for UAV-based WSNs in the light of emerging events. We also provide a cloud-assisted approach for deriving a UAV's optimal flying and data acquisition sequence over a WSN cluster. We validate our approach through simulations and experiments. It is shown that our methodology outperforms conventional approaches in terms of flying time, energy consumption, and integrity of data acquisition. We also conducted a real-world experiment using a UAV to collect data wirelessly from multiple clusters of sensor nodes deployed on a farm in order to monitor an emerging event. Compared against the traditional method, the proposed approach requires less than half the flying time and achieves almost perfect data integrity.

  3. Optimal design of UAV's pod shape

    NASA Astrophysics Data System (ADS)

    Wei, Qun; Jia, Hong-guang

    2011-08-01

    In modern warfare, the UAV (unmanned aerial vehicle) plays an increasingly important role in the army. UAVs usually carry electro-optical reconnaissance systems, which are used to observe and reconnoitre the battlefield. For a traditional UAV, the pod is spherical. In addition, the pod not only performs the observation and reconnaissance task, but its shape also affects the UAV's drag when it flies. In this paper, two different pod models are set up: one is the traditional spherical model, the other is a new model. An unstructured grid is used for the flow field. Using a CFD (computational fluid dynamics) method, the drags of the different kinds of pod are obtained, and the relationship between the pod drag and the UAV drag is derived by comparing the simulation results. After analyzing the results we find that when the UAV flies at low speed (0.3 Ma-0.7 Ma), the drag difference between the two kinds of pod is small and the pod's drag is only a small part (about 14 %) of the UAV's total drag. At transonic speed (0.8 Ma-1.2 Ma), the drag difference between the two pods grows as the speed increases: the traditional pod's drag is 1/3 of the UAV's total drag, whereas for the new pod it is only 1/5. At supersonic speed (1.3 Ma-2.0 Ma), the traditional pod's drag rises rapidly while the new pod's drag rises slowly, so the difference between the two UAVs' total drag becomes even greater. For example, at 2 Ma the total drag of the new UAV is only 2/3 of that of the traditional UAV. These results show that when the UAV flies at low speed the two kinds of pod differ little in drag, but at supersonic speed the pod has a great impact on the UAV's total drag, so the designer of a UAV pod should pay more attention to its outer shape.

  4. Uav-Based Photogrammetric Point Clouds and Hyperspectral Imaging for Mapping Biodiversity Indicators in Boreal Forests

    NASA Astrophysics Data System (ADS)

    Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M. A.; Luoma, V.; Tommaselli, A. M. G.; Imai, N. N.; Ribeiro, E. A. W.; Guimarães, R. B.; Holopainen, M.; Hyyppä, J.

    2017-10-01

    Biodiversity is commonly referred to as species diversity but in forest ecosystems variability in structural and functional characteristics can also be treated as measures of biodiversity. Small unmanned aerial vehicles (UAVs) provide a means for characterizing forest ecosystem with high spatial resolution, permitting measuring physical characteristics of a forest ecosystem from a viewpoint of biodiversity. The objective of this study is to examine the applicability of photogrammetric point clouds and hyperspectral imaging acquired with a small UAV helicopter in mapping biodiversity indicators, such as structural complexity as well as the amount of deciduous and dead trees at plot level in southern boreal forests. Standard deviation of tree heights within a sample plot, used as a proxy for structural complexity, was the most accurately derived biodiversity indicator resulting in a mean error of 0.5 m, with a standard deviation of 0.9 m. The volume predictions for deciduous and dead trees were underestimated by 32.4 m3/ha and 1.7 m3/ha, respectively, with standard deviation of 50.2 m3/ha for deciduous and 3.2 m3/ha for dead trees. The spectral features describing brightness (i.e. higher reflectance values) were prevailing in feature selection but several wavelengths were represented. Thus, it can be concluded that structural complexity can be predicted reliably but at the same time can be expected to be underestimated with photogrammetric point clouds obtained with a small UAV. Additionally, plot-level volume of dead trees can be predicted with small mean error whereas identifying deciduous species was more challenging at plot level.

  5. Multi-Temporal Crop Surface Models Combined with the RGB Vegetation Index from Uav-Based Images for Forage Monitoring in Grassland

    NASA Astrophysics Data System (ADS)

    Possoch, M.; Bieker, S.; Hoffmeister, D.; Bolten, A.; Schellberg, J.; Bareth, G.

    2016-06-01

    Remote sensing of crop biomass is important for precision agriculture, which aims to improve nutrient use efficiency and to develop better stress and disease management. In this study, multi-temporal crop surface models (CSMs) were generated from UAV-based dense imaging in order to derive the plant height distribution and to determine forage mass. The low-cost UAV-based RGB imaging was carried out in a grassland experiment at the University of Bonn, Germany, in summer 2015. The test site comprised three consecutive growths including six different nitrogen fertilizer levels and three replicates, in sum 324 plots with a size of 1.5 × 1.5 m. Each growth consisted of six harvesting dates. RGB images and biomass samples were taken at twelve dates, nearly biweekly, within two growths between June and September 2015. Images were taken with a DJI Phantom 2 in combination with a 2D Zenmuse gimbal and a GoPro Hero 3 (black edition). Overlapping images were captured at 13 to 16 m height and overview images at approximately 60 m height, at 2 frames per second. The RGB vegetation index (RGBVI) was calculated as the normalized difference of the squared green reflectance and the product of blue and red reflectance from the non-calibrated images. The post-processing was done with Agisoft PhotoScan Professional (SfM-based) and Esri ArcGIS. 14 ground control points (GCPs) were located in the field, marked by 30 cm × 30 cm markers and measured with a RTK-GPS (HiPer Pro Topcon) with 0.01 m horizontal and vertical precision. The errors of the spatial resolution in the x-, y- and z-directions were on the scale of 3-4 cm. From each survey, one distortion-corrected image was also georeferenced by the same GCPs and used for the RGBVI calculation. The results have been used to analyse and evaluate the relationship between the estimated plant height derived with this low-cost UAV system and forage mass. Results indicate that plant height seems to be a suitable indicator for forage mass. There is a
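
    The RGBVI used above is computed directly from the raw digital numbers of the red, green and blue bands, as described in the abstract. A minimal sketch of that calculation, with an arbitrary example pixel:

      import numpy as np

      def rgbvi(r, g, b):
          """RGB vegetation index: RGBVI = (G^2 - B*R) / (G^2 + B*R)."""
          r, g, b = (np.asarray(x, dtype=float) for x in (r, g, b))
          return (g ** 2 - b * r) / (g ** 2 + b * r)

      print(rgbvi(90, 140, 70))        # dense green canopy pixel -> ~0.51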

  6. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum difference between two visual stimuli that can be detected. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate differences in perception in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on the image and video JND tests.
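
    The bisection procedure mentioned above reduces the number of pairwise comparisons from linear to logarithmic in the number of quality levels. A minimal sketch, assuming quality levels ordered from best to worst and a forced-choice oracle supplied by the assessor; the bitrates and the toy oracle are hypothetical, not from the paper.

      def find_jnd(levels, same_as_anchor):
          """Return the first level the assessor can distinguish from the anchor
          (assumes the last, lowest-quality level is distinguishable)."""
          lo, hi = 0, len(levels) - 1
          while lo < hi:
              mid = (lo + hi) // 2
              if same_as_anchor(levels[mid]):
                  lo = mid + 1              # still indistinguishable, go to lower quality
              else:
                  hi = mid                  # distinguishable, JND is here or earlier
          return levels[lo]

      # Toy oracle: everything above 900 kbps looks identical to the anchor
      bitrates = [3000, 2400, 1800, 1200, 900, 600, 450, 300]
      print(find_jnd(bitrates, lambda b: b > 900))   # -> 900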

  7. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third in a series of papers, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  8. Interactive Cadastral Boundary Delineation from Uav Data

    NASA Astrophysics Data System (ADS)

    Crommelinck, S.; Höfle, B.; Koeva, M. N.; Yang, M. Y.; Vosselman, G.

    2018-05-01

    Unmanned aerial vehicles (UAVs) are evolving as an alternative tool to acquire land tenure data. UAVs can capture geospatial data at high quality and resolution in a cost-effective, transparent and flexible manner, from which visible land parcel boundaries, i.e., cadastral boundaries, are delineable. This delineation is currently not automated at all, even though a large portion of cadastral boundaries is marked by physical objects that are automatically retrievable through image analysis methods. This study proposes (i) a methodology that automatically extracts and processes candidate cadastral boundary features from UAV data, and (ii) a procedure for a subsequent interactive delineation. Part (i) consists of two state-of-the-art computer vision methods, namely gPb contour detection and SLIC superpixels, as well as a classification part assigning costs to each outline according to local boundary knowledge. Part (ii) allows a user-guided delineation by calculating least-cost paths along previously extracted and weighted lines. The approach is tested on visible road outlines in two UAV datasets from Germany. Results show that all roads can be delineated comprehensively. Compared to manual delineation, the number of clicks per 100 m is reduced by up to 86 %, while obtaining a similar localization quality. The approach shows promise for reducing the effort of the manual delineation that is currently employed for indirect (cadastral) surveying.
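
    The interactive part of the approach, calculating least-cost paths along previously extracted and weighted lines, can be imitated with a standard least-cost-path routine on a cost raster. The sketch below is a generic stand-in (not the authors' implementation) using scikit-image's route_through_array; the toy cost raster simply makes one row cheap, as if a boundary line had been detected there.

      import numpy as np
      from skimage.graph import route_through_array

      # Toy cost raster: low cost along a detected boundary line, high cost elsewhere
      cost = np.full((50, 50), 100.0)
      cost[25, :] = 1.0

      # A least-cost path between two user clicks snaps to the cheap boundary pixels
      path, total_cost = route_through_array(cost, start=(25, 2), end=(25, 47),
                                             fully_connected=True, geometric=True)
      print(len(path), round(total_cost, 1))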

  9. Landslide Mapping Using Imagery Acquired by a Fixed-Wing Uav

    NASA Astrophysics Data System (ADS)

    Rau, J. Y.; Jhan, J. P.; Lo, C. F.; Lin, Y. S.

    2011-09-01

    In Taiwan, the average annual rainfall is about 2,500 mm, about three times the world average. Hill slopes, which are mostly in meta-stable condition due to fragmented surface materials, can easily be disturbed by heavy typhoon rainfall and/or earthquakes, resulting in landslides and debris flows. Thus, an efficient data acquisition and disaster surveying method is critical for decision making. Compared with satellites and airplanes, the unmanned aerial vehicle (UAV) is a portable and dynamic platform for data acquisition, particularly when only a small target area is required. In this study, a fixed-wing UAV equipped with a consumer-grade digital camera (Canon EOS 450D), a flight control computer, a Garmin GPS receiver and an attitude and heading reference system (AHRS) is employed. The adopted UAV has about two hours of flight endurance with a flight control range of 20 km and a payload of 3 kg, which is suitable for medium-scale mapping and surveying missions. In the paper, a 21.3 km2 test area containing hundreds of landslides induced by Typhoon Morakot is used for landslide mapping. The flight height is around 1,400 meters and the ground sampling distance of the acquired imagery is about 17 cm. Aerial triangulation, ortho-image generation and mosaicking are first applied to the acquired images. An automatic landslide detection algorithm is proposed based on the object-based image analysis (OBIA) technique. The color ortho-image and a digital elevation model (DEM) are used. The ortho-images before and after the typhoon are utilized to estimate new landslide regions. Experimental results show that the developed algorithm can achieve a producer's accuracy of up to 91%, a user's accuracy of 84%, and a Kappa index of 0.87. This demonstrates the feasibility of the landslide detection algorithm and the applicability of a fixed-wing UAV for landslide mapping.
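
    The reported accuracy figures can be derived from a confusion matrix in the usual way; the sketch below uses invented counts purely to illustrate producer's accuracy, user's accuracy and Cohen's kappa for the landslide class.

    ```python
    # Sketch: deriving producer's accuracy, user's accuracy and Cohen's kappa for
    # the landslide class from a 2x2 confusion matrix (counts are illustrative only).
    import numpy as np

    # rows = reference (landslide, other), cols = classified (landslide, other)
    cm = np.array([[910, 90],
                   [170, 8830]], dtype=float)

    producers_acc = cm[0, 0] / cm[0, :].sum()     # omission side
    users_acc = cm[0, 0] / cm[:, 0].sum()         # commission side

    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)

    print(f"producer's acc = {producers_acc:.2%}, user's acc = {users_acc:.2%}, kappa = {kappa:.2f}")
    ```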

  10. Evaluating the accuracy of orthophotos and 3D models from UAV photogrammetry

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Ellmann, Artu

    2015-04-01

    Rapid development of unmanned aerial vehicles (UAV) in recent years has made their use for various applications more feasible. This contribution evaluates the accuracy and quality of different UAV remote sensing products (i.e. orthorectified images, point clouds and 3D models). Two different autonomous fixed-wing UAV systems were used to collect the aerial photographs. One is a mass-produced commercial UAV system, the other is a similar state-of-the-art UAV system. Three different study areas with varying sizes and characteristics (including urban areas, forests, fields, etc.) were surveyed. The UAV point clouds, 3D models and orthophotos were generated with three different commercial and freeware software packages, and the performance of each of these was evaluated. The effect of flying height on the accuracy of the results was explored, as well as the optimum number and placement of ground control points. The results achieved when the only georeferencing data originate from the UAV system's on-board GNSS and inertial measurement unit are also investigated. Problems regarding the alignment of certain types of aerial photos (e.g. those captured over forested areas) are discussed. The quality and accuracy of UAV photogrammetry products are evaluated by comparing them with GNSS control measurements made on the ground, as well as with high-resolution airborne laser scanning data and other available orthophotos (e.g. those acquired for large-scale national mapping). Vertical comparisons are made on surfaces that have remained unchanged in all campaigns, e.g. paved roads. Planar comparisons are performed by control surveys of objects that are clearly identifiable on the orthophotos. The statistics of these differences are used to evaluate the accuracy of UAV remote sensing. Some recommendations are given on how to conduct UAV mapping campaigns cost-effectively and with minimal time consumption while still ensuring the quality and accuracy of the UAV data products. Also the
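
    A minimal sketch of the vertical check described above, assuming a simple north-up grid geometry: sample the UAV-derived DSM at GNSS checkpoint positions and compute bias and RMSE. The grid origin, cell size and checkpoint values are illustrative assumptions, not the study's data.

    ```python
    # Sketch of the vertical comparison: sample a UAV-derived DSM at GNSS
    # checkpoint positions (e.g. on paved roads) and compute bias and RMSE.
    import numpy as np

    dsm = np.random.uniform(40.0, 41.0, (1000, 1000))    # stand-in DSM, 0.1 m cells
    x0, y0, cell = 500000.0, 6400000.0, 0.1              # assumed upper-left corner, cell size

    # checkpoints: (easting, northing, GNSS height) -- invented values
    checkpoints = np.array([[500010.2, 6399990.5, 40.42],
                            [500055.7, 6399950.1, 40.71]])

    cols = ((checkpoints[:, 0] - x0) / cell).astype(int)
    rows = ((y0 - checkpoints[:, 1]) / cell).astype(int)
    dz = dsm[rows, cols] - checkpoints[:, 2]

    print(f"bias = {dz.mean():+.3f} m, RMSE = {np.sqrt((dz ** 2).mean()):.3f} m")
    ```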

  11. Evaluation of Skybox Video and Still Image products

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Kuschk, G.; Reinartz, P.

    2014-11-01

    The SkySat-1 satellite, launched by Skybox Imaging on November 21, 2013, opens a new chapter in civilian earth observation, as it is the first civilian satellite to image a target in high-definition panchromatic video for up to 90 seconds. The small satellite with a mass of 100 kg carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second. Additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter accuracy is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. The image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60 second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques such as pairwise matching and multi-image matching are used and compared. As no ground-truth height reference model is available to the authors, comparisons on flat surfaces and between differently matched DSMs are performed. Additionally, visual inspection of the DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.

  12. Brief communication: Landslide motion from cross correlation of UAV-derived morphological attributes

    NASA Astrophysics Data System (ADS)

    Peppa, Maria V.; Mills, Jon P.; Moore, Phil; Miller, Pauline E.; Chambers, Jonathan E.

    2017-12-01

    Unmanned aerial vehicles (UAVs) can provide observations of high spatio-temporal resolution to enable operational landslide monitoring. In this research, the construction of digital elevation models (DEMs) and orthomosaics from UAV imagery is achieved using structure-from-motion (SfM) photogrammetric procedures. The study examines the additional value that the morphological attribute of openness, amongst others, can provide to surface deformation analysis. Image cross-correlation functions and DEM subtraction techniques are applied to the SfM outputs. Through the proposed integrated analysis, the automated quantification of a landslide's motion over time is demonstrated, with implications for the wider interpretation of landslide kinematics via UAV surveys.
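
    The cross-correlation idea can be illustrated with a simplified FFT-based shift estimate between two epochs of a morphological-attribute raster (e.g. openness); this is a stand-in for the authors' image-correlation functions, with synthetic data and integer-pixel precision only.

    ```python
    # Simplified stand-in for the cross-correlation step: estimate the dominant
    # integer-pixel shift between two epochs of an attribute raster by locating
    # the peak of their FFT-based cross-correlation.
    import numpy as np

    def shift_by_cross_correlation(epoch1: np.ndarray, epoch2: np.ndarray):
        a = epoch1 - epoch1.mean()
        b = epoch2 - epoch2.mean()
        xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        # wrap indices above the midpoint to negative shifts
        shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
        return tuple(shifts)                 # (row shift, column shift) in pixels

    # Synthetic test: epoch2 is epoch1 moved 3 rows down and 5 columns right.
    rng = np.random.default_rng(0)
    e1 = rng.normal(size=(256, 256))
    e2 = np.roll(np.roll(e1, 3, axis=0), 5, axis=1)
    print(shift_by_cross_correlation(e2, e1))   # expected (3, 5)
    ```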

  13. Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images

    PubMed Central

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images using a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map in an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r2 = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide application or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance. PMID:24146963
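
    The two reported fit statistics (coefficient of determination and RMSE between estimated and observed weed densities) follow the standard definitions; the paired values in this sketch are invented for illustration, not the study's data.

    ```python
    # Sketch of the reported fit statistics: coefficient of determination (r^2) and
    # root mean square error between image-estimated and field-observed weed coverage.
    import numpy as np

    observed = np.array([0.00, 0.02, 0.05, 0.08, 0.12, 0.20])    # illustrative fractions
    estimated = np.array([0.01, 0.02, 0.04, 0.09, 0.10, 0.19])

    ss_res = np.sum((observed - estimated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((observed - estimated) ** 2))

    print(f"r^2 = {r2:.2f}, RMSE = {rmse:.3f}")
    ```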

  14. Weed mapping in early-season maize fields using object-based analysis of unmanned aerial vehicle (UAV) images.

    PubMed

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images using a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map in an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r2 = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide application or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance.

  15. Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.

    PubMed

    Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou

    2017-05-10

    Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.

  16. AirSTAR: A UAV Platform for Flight Dynamics and Control System Testing

    NASA Technical Reports Server (NTRS)

    Jordan, Thomas L.; Foster, John V.; Bailey, Roger M.; Belcastro, Christine M.

    2006-01-01

    As part of the NASA Aviation Safety Program at Langley Research Center, a dynamically scaled unmanned aerial vehicle (UAV) and associated ground based control system are being developed to investigate dynamics modeling and control of large transport vehicles in upset conditions. The UAV is a 5.5% (seven foot wingspan), twin turbine, generic transport aircraft with a sophisticated instrumentation and telemetry package. A ground based, real-time control system is located inside an operations vehicle for the research pilot and associated support personnel. The telemetry system supports over 70 channels of data plus video for the downlink and 30 channels for the control uplink. Data rates are in excess of 200 Hz. Dynamic scaling of the UAV, which includes dimensional, weight, inertial, actuation, and control system scaling, is required so that the sub-scale vehicle will realistically simulate the flight characteristics of the full-scale aircraft. This testbed will be utilized to validate modeling methods, flight dynamics characteristics, and control system designs for large transport aircraft, with the end goal being the development of technologies to reduce the fatal accident rate due to loss-of-control.

  17. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
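
    The inversion step described above (a calibrated linear code mapped back to a spectrum via regularisation-based linear algebra) can be sketched with a Tikhonov-regularised solve; the calibration matrix, band count and regularisation weight below are all assumptions made for illustration, not the authors' calibration.

    ```python
    # Minimal Tikhonov-regularised inversion sketch: recover a spectrum s from
    # coded sensor measurements m = A s + noise, where A plays the role of the
    # calibrated spatial-spectral response.
    import numpy as np

    rng = np.random.default_rng(1)
    n_bands, n_pixels = 30, 64                 # illustrative sizes only
    A = rng.random((n_pixels, n_bands))        # stand-in calibration matrix
    true_spectrum = np.abs(np.sin(np.linspace(0, 3, n_bands)))
    m = A @ true_spectrum + 0.01 * rng.normal(size=n_pixels)

    lam = 1e-2                                  # regularisation weight (assumed)
    s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ m)

    print(f"reconstruction RMS error = {np.sqrt(np.mean((s_hat - true_spectrum) ** 2)):.3f}")
    ```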

  18. Thermal Imaging of Subsurface Coal Fires by means of an Unmanned Aerial Vehicle (UAV) in the Autonomous Province Xinjiang, PRC

    NASA Astrophysics Data System (ADS)

    Vasterling, Margarete; Schloemer, Stefan; Fischer, Christian; Ehrler, Christoph

    2010-05-01

    Spontaneous combustion of coal and the resulting coal fires lead to very high temperatures in the subsurface. A large amount of the heat is transferred to the surface by convective and conductive transport, inducing a more or less pronounced thermal anomaly. During the past decade, satellite-based infrared imaging (ASTER, MODIS) was the method of choice for coal fire detection on a local and regional scale. However, the resolution is by far too low for a detailed analysis of single coal fires, which is an essential prerequisite for corrective measures (i.e. fire fighting) and for the calculation of carbon dioxide emissions based on a complex correlation between energy release and CO2 generation. Consequently, within the framework of the Sino-German research project "Innovative Technologies for Exploration, Extinction and Monitoring of Coal Fires in Northern China", a new concept was developed and successfully tested. An unmanned aerial vehicle (UAV) was equipped with a lightweight camera for thermographic imaging (resolution 160 by 120 pixels, dynamic range -20 to 250°C) and for visual imaging. The UAV, designed as an octocopter, is able to hover at GPS-controlled waypoints during predefined flight missions. The application of a UAV has several advantages. Compared to point measurements on the ground, the thermal imagery quickly provides the spatial distribution of the temperature anomaly with a much better resolution. Areas otherwise not accessible (due to topography, fire-induced cracks, etc.) can easily be investigated. The results of areal surveys of two coal fires in Xinjiang are presented. Georeferenced thermal and visual images were mosaicked together and analyzed. UAV-borne data compare well with temperatures measured directly on the ground and cover large areas in detail. However, measuring surface temperature alone is not sufficient. Simultaneous measurements made at the surface and at roughly 15 cm depth proved substantial temperature gradients in the upper soil. Thus the temperature

  19. Employing UAVs to Acquire Detailed Vegetation and Bare Ground Data for Assessing Rangeland Health

    NASA Astrophysics Data System (ADS)

    Rango, A.; Laliberte, A.; Herrick, J. E.; Winters, C.

    2007-12-01

    Because of its value as a historical record (extending back to the mid 1930s), aerial photography is an important tool used in many rangeland studies. However, these historical photos are not very useful for detailed analysis of rangeland health because of inadequate spatial resolution and scheduling limitations. These issues are now being resolved by using Unmanned Aerial Vehicles (UAVs) over rangeland study areas. Spatial resolution improvements have been rapid in the last 10 years, from the QuickBird satellite through improved aerial photography to the new UAV coverage, and have utilized improved sensors and the simpler approach of low-altitude flights. Our rangeland health experiments have shown that low-altitude UAV digital photography is preferred by rangeland scientists because it allows them, for the first time, to identify vegetation and land surface patterns and patches, gap sizes, bare soil percentages, and vegetation type. This hyperspatial imagery (imagery with a resolution finer than the object of interest) is obtained at about 5 cm resolution by flying at an altitude of 150 m above the surface of the Jornada Experimental Range in southern New Mexico. Additionally, the UAV provides improved temporal flexibility, such as flights immediately following fires, floods, and other catastrophic disturbances, because the flight capability is located near the study area and the vehicles are under the direct control of the users, eliminating the additional steps associated with budgets and contracts. There are significant challenges to improving the data to make them useful for operational agencies, namely, image distortion with inexpensive, consumer-grade digital cameras, difficulty in detecting sufficient ground control points in small scenes (152 m by 114 m), the accuracy of exterior UAV information on X, Y, Z, roll, pitch, and heading, the sheer number of images collected, and developing reliable relationships with ground-based data across a broad

  20. Biased lineup instructions and face identification from video images.

    PubMed

    Thompson, W Burt; Johnson, Jaime

    2008-01-01

    Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video.

  1. Distinguishing plant population and variety with UAV-derived vegetation indices

    NASA Astrophysics Data System (ADS)

    Oakes, Joseph; Balota, Maria

    2017-05-01

    Variety selection and seeding rate are two important choices that a peanut grower must make. High-yielding varieties can increase profit with no additional input costs, while seeding rate largely determines the input cost a grower will incur for seed. The overall purpose of this study was to examine the effect that seeding rate has on different peanut varieties. With the advent of new UAV technology, we now have the possibility to use indices collected with the UAV to measure emergence, seeding rate, and growth rate, and perhaps make yield predictions. This information could enable growers to make management decisions early in the season based on low plant populations due to poor emergence, and could be a useful tool for growers to estimate plant population and growth rate in order to help achieve desired crop stands. Red-Green-Blue (RGB) and near-infrared (NIR) images were collected from a UAV platform starting two weeks after planting and continuing weekly for the next six weeks. Ground NDVI was also collected each time aerial images were collected. Vegetation indices were derived from both the RGB and NIR images. Greener area (GGA, the proportion of green pixels with a hue angle from 80° to 120°) and a* (the average red/green color of the image) were derived from the RGB images, while the Normalized Difference Vegetation Index (NDVI) was derived from the NIR images. Aerial indices were successful in distinguishing seeding rates and determining emergence during the first few weeks after planting, but not later in the season. Meanwhile, these aerial indices are not an adequate predictor of yield in peanut at this point.
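
    A minimal sketch of two of the indices described (GGA from the RGB hue channel, NDVI from co-registered NIR and red bands), assuming hue handling via scikit-image's rgb2hsv; the input arrays are synthetic stand-ins for UAV imagery, not the study's data.

    ```python
    # Sketch: GGA (share of pixels with hue between 80 and 120 degrees) from an
    # RGB frame, and NDVI from co-registered NIR and red bands.
    import numpy as np
    from skimage.color import rgb2hsv

    rng = np.random.default_rng(2)
    rgb = rng.random((100, 100, 3))             # reflectance-like RGB in [0, 1]
    nir = rng.random((100, 100))
    red = rgb[..., 0]

    hue_deg = rgb2hsv(rgb)[..., 0] * 360.0      # hue channel, converted to degrees
    gga = np.mean((hue_deg >= 80.0) & (hue_deg <= 120.0))

    ndvi = (nir - red) / (nir + red + 1e-9)     # small epsilon avoids division by zero

    print(f"GGA = {gga:.2%}, mean NDVI = {ndvi.mean():.2f}")
    ```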

  2. Using underwater video imaging as an assessment tool for coastal condition

    EPA Science Inventory

    As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...

  3. Wetland Assessment Using Unmanned Aerial Vehicle (uav) Photogrammetry

    NASA Astrophysics Data System (ADS)

    Boon, M. A.; Greenfield, R.; Tesfamichael, S.

    2016-06-01

    The use of Unmanned Aerial Vehicle (UAV) photogrammetry is a valuable tool to enhance our understanding of wetlands. Accurate planning derived from this technological advancement allows for more effective management and conservation of wetland areas. This paper presents results of a study that aimed at investigating the use of UAV photogrammetry as a tool to enhance the assessment of wetland ecosystems. The UAV images were collected during a single flight within 2½ hours over a 100 ha area at the Kameelzynkraal farm, Gauteng Province, South Africa. An AKS Y-6 MKII multi-rotor UAV and a digital camera on a motion-compensated gimbal mount were utilised for the survey. Twenty ground control points (GCPs) were surveyed using a Trimble GPS to achieve geometrical precision and georeferencing accuracy. Structure-from-Motion (SfM) computer vision techniques were used to derive ultra-high-resolution point clouds, orthophotos and 3D models from the multi-view photos. The geometric accuracy of the data based on the 20 GCPs was 0.018 m overall, with a vertical root mean square error (RMSE) of 0.0025 m and an overall root mean square reprojection error of 0.18 pixel. The UAV products were then edited and subsequently analysed, interpreted and key attributes extracted using a selection of tools/software applications to enhance the wetland assessment. The results exceeded our expectations and provided a valuable and accurate enhancement to the wetland delineation, classification and health assessment, which would have been difficult to achieve even with detailed field studies.

  4. Absolute High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-Time Aerial Video Imagery for Geo-referenced Orthophoto Registration

    NASA Astrophysics Data System (ADS)

    Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter

    This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to combine an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve an absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data combined with live video images from an onboard camera to register local video images against a priori geo-referenced orthophotos. This yields a precise, drift-free absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.
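
    The registration idea can be sketched with standard feature matching and a homography in OpenCV; this is an illustrative stand-in for the authors' system, and the file names, the chosen pixel in the frame, and the simple affine pixel-to-map transform are assumptions.

    ```python
    # Sketch: match ORB features between a UAV video frame and a geo-referenced
    # orthophoto, fit a homography, and map a position in the frame to orthophoto
    # pixels and hence to map coordinates.
    import cv2
    import numpy as np

    frame = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)     # hypothetical inputs
    ortho = cv2.imread("orthophoto.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(4000)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    kp_o, des_o = orb.detectAndCompute(ortho, None)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_f, des_o)
    matches = sorted(matches, key=lambda m: m.distance)[:200]

    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_o[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    ugv_in_frame = np.float32([[[640.0, 360.0]]])             # assumed UGV pixel in the frame
    ugv_in_ortho = cv2.perspectiveTransform(ugv_in_frame, H)  # corresponding orthophoto pixel

    x0, y0, gsd = 500000.0, 6400000.0, 0.10                   # assumed orthophoto georeference
    col, row = ugv_in_ortho[0, 0]
    print("UGV map position:", x0 + col * gsd, y0 - row * gsd)
    ```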

  5. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution that enables data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with a similar accuracy. To evaluate the DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetric process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of the spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with a high vertical accuracy (< 10 cm). The georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to be improved when the spatial resolution of the images is refined.

  6. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging technique to monitor the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to resolve this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most of the noise was eliminated and host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
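
    A minimal constant-velocity Kalman filter, as a simplified stand-in for the Kalman-filtering step of such a scheme: it smooths the noisy (x, y) centroid of a tracked object across frames. The motion model, noise covariances and measurements are all invented for illustration.

    ```python
    # Simplified stand-in for the Kalman-filter step: a constant-velocity filter
    # smoothing the noisy (x, y) centroid of a tracked cell/virus across frames.
    import numpy as np

    dt = 1.0                                     # frame interval (arbitrary units)
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q = 0.01 * np.eye(4)                         # process noise (assumed)
    R = 4.0 * np.eye(2)                          # measurement noise, pixels^2 (assumed)

    x = np.zeros(4)                              # state: [x, y, vx, vy]
    P = 100.0 * np.eye(4)

    rng = np.random.default_rng(3)
    for t in range(30):
        z = np.array([2.0 * t, 1.5 * t]) + rng.normal(0, 2, 2)   # noisy centroid detection
        # predict
        x, P = F @ x, F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P

    print("filtered position:", x[:2], "velocity:", x[2:])
    ```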

  7. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
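
    The ZNCC similarity the tracking relies on has a standard definition; the sketch below evaluates it between a template patch and an equally sized candidate patch, with synthetic patches for illustration.

    ```python
    # Minimal zero-mean normalised cross-correlation (ZNCC) between a template
    # patch and an equally sized candidate patch.
    import numpy as np

    def zncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
        a = patch_a.astype(np.float64) - patch_a.mean()
        b = patch_b.astype(np.float64) - patch_b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    # Illustrative check: identical patches score 1, inverted patches score -1.
    rng = np.random.default_rng(4)
    p = rng.random((21, 21))
    print(zncc(p, p), zncc(p, 1.0 - p))
    ```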

  8. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.

  9. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications

    PubMed Central

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-01-01

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
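
    The RPCA idea behind such methods can be illustrated in a strongly simplified way: stack frames as columns, take a low-rank approximation as the (near-static) background and treat the residual as noise or moving content. This sketch is not the BRANF pipeline; it uses a plain truncated SVD on synthetic frames.

    ```python
    # Simplified illustration of the low-rank/residual decomposition behind RPCA,
    # here via a truncated SVD on synthetic frames (not the paper's BRANF method).
    import numpy as np

    rng = np.random.default_rng(5)
    n_frames, h, w = 40, 64, 64
    background = rng.random((h, w))
    frames = np.stack([background + 0.05 * rng.normal(size=(h, w)) for _ in range(n_frames)])

    D = frames.reshape(n_frames, -1).T           # pixels x frames data matrix
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    rank = 1                                      # assume an (almost) static background
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # low-rank background estimate
    S = D - L                                     # residual: noise + moving content

    print("residual std per pixel:", S.std())
    ```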

  10. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications.

    PubMed

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-12-27

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC.

  11. UAV-Based Hyperspectral Remote Sensing for Precision Agriculture: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Angel, Y.; Parkes, S. D.; Turner, D.; Houborg, R.; Lucieer, A.; McCabe, M.

    2017-12-01

    Modern agricultural production relies on monitoring crop status by observing and measuring variables such as soil condition, plant health, fertilizer and pesticide effect, irrigation and crop yield. Managing all of these factors is a considerable challenge for crop producers. As such, providing integrated technological solutions that enable improved diagnostics of field condition to maximize profits, while minimizing environmental impacts, would be of much interest. Such challenges can be addressed by implementing remote sensing systems such as hyperspectral imaging to produce precise biophysical indicator maps across the various cycles of crop development. Recent progress in unmanned aerial vehicles (UAVs) has advanced traditional satellite-based capabilities, providing a capacity for high spatial, spectral and temporal response. However, while some hyperspectral sensors have been developed for use onboard UAVs, significant investment is required to develop a system and data processing workflow that retrieves accurately georeferenced mosaics. Here we explore the use of a pushbroom hyperspectral camera that is integrated on board a multi-rotor UAV system to measure the surface reflectance in 272 distinct spectral bands across a wavelength range spanning 400-1000 nm, and outline the requirements for sensor calibration, integration onto a stable UAV platform enabling accurate positional data, flight planning, and development of data post-processing workflows for georeferenced mosaics. The provision of high-quality and geo-corrected imagery facilitates the development of metrics of vegetation health that can be used to identify potential problems such as production inefficiencies, diseases and nutrient deficiencies and other data-streams to enable improved crop management. Immense opportunities remain to be exploited in the implementation of UAV-based hyperspectral sensing (and its combination with other imaging systems) to provide a transferable and scalable

  12. Video guidance, landing, and imaging systems

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Rice, R. B.; Moog, R. D.

    1975-01-01

    The adaptive potential of video guidance technology for earth orbital and interplanetary missions was explored. The application of video acquisition, pointing, tracking, and navigation technology was considered for three primary missions: planetary landing, earth resources satellite, and spacecraft rendezvous and docking. It was found that an imaging system can be mechanized to provide a spacecraft or satellite with a considerable amount of adaptability with respect to its environment. It also provides a level of autonomy essential to many future missions and enhances their data gathering ability. The feasibility of an autonomous video guidance system capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was successfully demonstrated in the laboratory. The techniques developed for acquisition, pointing, and tracking show promise for recognizing and tracking coastlines, rivers, and other constituents of interest. Routines were written and checked for rendezvous, docking, and station-keeping functions.

  13. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents' Perspectives.

    PubMed

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O'Connor, Alexander; Collins, Michael J

    This study examined adolescents' attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one's attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players' attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents' social cognitive judgments.

  14. Performance Evaluation of Cots Uav for Architectural Heritage Documentation. a Test on S.GIULIANO Chapel in Savigliano (cn) - Italy

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Teppati Losè, L.

    2017-08-01

    More and more, the use of UAV platforms is standard for image or video acquisition from an aerial point of view. In response to the enormous growth in demand, the production of COTS (Commercial Off The Shelf) platforms and systems is increasing to meet market requirements. In recent years, different platforms have been developed and sold at low to medium cost, and nowadays the offer of interesting systems is very large. One of the most important companies that produce UAVs and other imaging systems is DJI (Dà-Jiāng Innovations Science and Technology Co., Ltd), founded in 2006 and headquartered in Shenzhen, China. The platforms realized by the company range from low-cost systems up to professional equipment tailored for high-resolution acquisitions useful for film-making purposes. Given the characteristics of the recently developed low-cost DJI platforms, their onboard sensors and the performance of modern photogrammetric software based on Structure from Motion (SfM) algorithms, those systems are nowadays employed for performing 3D surveys from the small up to the large scale. The present paper aims to test the characteristics, in terms of image quality, flight operations, flight planning and accuracy of the final products, of three COTS platforms realized by DJI: the Mavic Pro, the Phantom 4 and the Phantom 4 PRO. The test site chosen was the Chapel of San Giuliano in the municipality of Savigliano (Cuneo, Italy), a small church with two aisles dating back to the early eleventh century.

  15. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder to record video pictures from wireless capsule endoscopes was designed. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-in First-out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation speed of the DSP and reduce the amount of executed code. At the same time, proper addresses are assigned to the different memories, which have different speeds; the memory structure is also optimized. In addition, this system uses plenty of Extended Direct Memory Access (EDMA) to transport and process image data, which results in stable and high performance.
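
    The JPEG-style coding stage the DSP performs can be illustrated on the host side with an 8x8 block DCT followed by uniform quantisation; this sketch is not the embedded fast-DCT implementation described in the paper, and the block data and quantisation step are made up.

    ```python
    # Illustration of a JPEG-style coding step: 8x8 block 2-D DCT followed by
    # coefficient quantisation and reconstruction via the inverse DCT.
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(6)
    block = rng.integers(0, 256, (8, 8)).astype(np.float64) - 128.0   # level-shifted block

    coeffs = dctn(block, norm="ortho")
    q_step = 16.0                                  # single uniform step instead of a JPEG table
    quantised = np.round(coeffs / q_step)

    reconstructed = idctn(quantised * q_step, norm="ortho") + 128.0
    print("max reconstruction error:", np.abs(reconstructed - (block + 128.0)).max())
    ```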

  16. UAV-based remote sensing of the Heumoes landslide, Austria Vorarlberg

    NASA Astrophysics Data System (ADS)

    Niethammer, U.; Joswig, M.

    2009-04-01

    image-processing based evaluation of the acquired images to characterize spatial and temporal details of landslide behaviour. We will also sketch first schemes of joint interpretation or 'data fusion' of UAV-based remote sensing with the results from geophysical mapping of underground distribution of soil moisture and fracture processes (Walter & Joswig, EGU 2009).

  17. Sense and avoid technology for Global Hawk and Predator UAVs

    NASA Astrophysics Data System (ADS)

    McCalmont, John F.; Utt, James; Deschenes, Michael; Taylor, Michael J.

    2005-05-01

    The Sensors Directorate at the Air Force Research Laboratory (AFRL), along with Defense Research Associates, Inc. (DRA), conducted a flight demonstration of technology that could potentially satisfy the Federal Aviation Administration's (FAA) requirement for Unmanned Aerial Vehicles (UAVs) to sense and avoid local air traffic sufficiently to provide an "...equivalent level of safety, comparable to see-and-avoid requirements for manned aircraft". This FAA requirement must be satisfied for autonomous UAV operation within the national airspace. The real-time on-board system passively detects approaching aircraft, both cooperative and non-cooperative, using imaging sensors operating in the visible/near-infrared band and a passive moving target indicator algorithm. Detection range requirements for RQ-4 and MQ-9 UAVs were determined based on analysis of flight geometries, avoidance maneuver timelines, system latencies and human pilot performance. Flight data and UAV operating parameters were provided by the system program offices, prime contractors, and flight-test personnel. Flight demonstrations were conducted using a surrogate UAV (Aero Commander) and an intruder aircraft (Beech Bonanza). The system demonstrated target detection ranges out to 3 nautical miles in nose-to-nose scenarios and marginal visual meteorological conditions (VMC). This paper will describe the sense and avoid requirements definition process and the system concept (sensors, algorithms, processor, and flight test results) that has demonstrated the potential to satisfy the FAA sense and avoid requirements.

  18. Direct Georeferencing of Uav Data Based on Simple Building Structures

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2016-06-01

    Unmanned Aerial Vehicle (UAV) data acquisition is more flexible than the more complex traditional airborne data acquisition. This advantage positions UAV platforms as an alternative acquisition method in many applications, including Large Scale Topographical Mapping (LSTM). LSTM, i.e. map scales of 1:10,000 or larger, is one of a number of prominent priority tasks to be solved in an accelerated way, especially in developing countries such as Indonesia. As one component of fundamental geospatial data sets, large scale topographical maps are mandatory in order to enable detailed spatial planning. However, the accuracy of the products derived from UAV data is normally not sufficient for LSTM, as it needs robust georeferencing, which requires additional costly efforts such as the incorporation of a sophisticated GPS/Inertial Navigation System (INS) or Inertial Measurement Unit (IMU) on the platform and/or Ground Control Point (GCP) data on the ground. To reduce the costs and the weight on the UAV, alternative solutions have to be found. This paper outlines a direct georeferencing method for UAV data that provides image orientation parameters derived from simple building structures, and presents results of an investigation of the achievable accuracy in an LSTM application. In this case, the image orientation determination has been performed through sequential images without any input from INS/IMU equipment. The simple building structures play a significant role in that their geometrical characteristics are exploited. Some instances are the orthogonality of the building's walls/rooftop and local knowledge of the building orientation in the field. In addition, we want to include the Structure from Motion (SfM) approach in order to reduce the number of required GCPs, especially for the absolute orientation. The SfM technique applied to the UAV data and simple building structures additionally presents an effective tool

  19. FluidCam 1&2 - UAV-based Fluid Lensing Instruments for High-Resolution 3D Subaqueous Imaging and Automated Remote Biosphere Assessment of Reef Ecosystems

    NASA Astrophysics Data System (ADS)

    Chirayath, V.; Instrella, R.

    2016-02-01

    We present NASA ESTO FluidCam 1 & 2, Visible and NIR Fluid-Lensing-enabled imaging payloads for Unmanned Aerial Vehicles (UAVs). Developed as part of a focused 2014 earth science technology grant, FluidCam 1&2 are Fluid-Lensing-based computational optical imagers designed for automated 3D mapping and remote sensing of underwater coastal targets from airborne platforms. Fluid Lensing has been used to map underwater reefs in 3D in American Samoa and Hamelin Pool, Australia from UAV platforms at sub-cm scale, which has proven a valuable tool in modern marine research for marine biosphere assessment and conservation. We share FluidCam 1&2 instrument validation and testing results as well as preliminary processed data from field campaigns. Petabyte-scale aerial survey efforts using Fluid Lensing to image at-risk reefs demonstrate broad applicability to large-scale automated species identification, morphology studies and reef ecosystem characterization for shallow marine environments and terrestrial biospheres, of crucial importance to improving bathymetry data for physical oceanographic models and understanding climate change's impact on coastal zones, global oxygen production, carbon sequestration.

  20. FluidCam 1&2 - UAV-Based Fluid Lensing Instruments for High-Resolution 3D Subaqueous Imaging and Automated Remote Biosphere Assessment of Reef Ecosystems

    NASA Astrophysics Data System (ADS)

    Chirayath, V.

    2015-12-01

    We present NASA ESTO FluidCam 1 & 2, Visible and NIR Fluid-Lensing-enabled imaging payloads for Unmanned Aerial Vehicles (UAVs). Developed as part of a focused 2014 earth science technology grant, FluidCam 1&2 are Fluid-Lensing-based computational optical imagers designed for automated 3D mapping and remote sensing of underwater coastal targets from airborne platforms. Fluid Lensing has been used to map underwater reefs in 3D in American Samoa and Hamelin Pool, Australia from UAV platforms at sub-cm scale, which has proven a valuable tool in modern marine research for marine biosphere assessment and conservation. We share FluidCam 1&2 instrument validation and testing results as well as preliminary processed data from field campaigns. Petabyte-scale aerial survey efforts using Fluid Lensing to image at-risk reefs demonstrate broad applicability to large-scale automated species identification, morphology studies and reef ecosystem characterization for shallow marine environments and terrestrial biospheres, of crucial importance to improving bathymetry data for physical oceanographic models and understanding climate change's impact on coastal zones, global oxygen production, carbon sequestration.

  1. Spurious RF signals emitted by mini-UAVs

    NASA Astrophysics Data System (ADS)

    Schleijpen, Ric (H. M. A.); Voogt, Vincent; Zwamborn, Peter; van den Oever, Jaap

    2016-10-01

    This paper presents experimental work on the detection of spurious RF emissions from mini Unmanned Aerial Vehicles (mini-UAVs). Many recent events have shown that mini-UAVs can be considered a potential threat to civil security. For this reason the detection of mini-UAVs has become of interest to the sensor community. The detection, classification and identification chain can take advantage of different sensor technologies. Apart from the signatures used by radar and electro-optical sensor systems, the UAV also emits RF signals. These RF signatures can be split into intentional signals for communication with the operator and unintentional RF signals emitted by the UAV. These unintentional or spurious RF emissions are very weak, but could be used to discriminate potential UAV detections from false alarms. The goal of this research was to assess the potential of exploiting spurious emissions in the classification and identification chain for mini-UAVs. It was already known that spurious signals are very weak, but the focus was on the question of whether the emission pattern could be correlated with the behaviour of the UAV. In this paper, experimental examples of spurious RF emissions for different types of mini-UAVs and their correlation with the electronic circuits in the UAVs are shown.

  2. On a Fundamental Evaluation of a Uav Equipped with a Multichannel Laser Scanner

    NASA Astrophysics Data System (ADS)

    Nakano, K.; Suzuki, H.; Omori, K.; Hayakawa, K.; Kurodai, M.

    2018-05-01

    Unmanned aerial vehicles (UAVs), which have been widely used in various fields such as archaeology, agriculture, mining, and construction, can acquire high-resolution images at the millimetre scale. It is possible to obtain realistic 3D models using high-overlap images and 3D reconstruction software based on computer vision technologies such as Structure from Motion and Multi-view Stereo. However, it remains difficult to obtain key points from surfaces with limited texture, such as new asphalt or concrete, or from areas like forests that may be concealed by vegetation. A promising method for conducting aerial surveys is the use of UAVs equipped with laser scanners. We conducted a fundamental performance evaluation of the Velodyne VLP-16 multi-channel laser scanner mounted on a DJI Matrice 600 Pro UAV at a construction site. Here, we present our findings with respect to both the geometric and radiometric aspects of the acquired data.

  3. Potential usefulness of a video printer for producing secondary images from digitized chest radiographs

    NASA Astrophysics Data System (ADS)

    Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric

    1991-05-01

    Communication between radiologists and clinicians could be improved if a secondary image (a copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e. images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g. detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g. nasogastric tubes).

  4. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents’ Perspectives

    PubMed Central

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O’Connor, Alexander; Collins, Michael J.

    2015-01-01

    This study examined adolescents’ attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one’s attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players’ attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents’ social cognitive judgments. PMID:25729336

  5. Complex Event Processing for Content-Based Text, Image, and Video Retrieval

    DTIC Science & Technology

    2016-06-01

    ARL-TR-7705, US Army Research Laboratory, June 2016: Complex Event Processing for Content-Based Text, Image, and Video Retrieval.

  6. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
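
    As a rough illustration of the kind of automated tagging described, the Python sketch below uses dense optical flow to label segments with very little scene motion or very fast camera motion; the thresholds, and the use of the median flow magnitude as a camera-motion proxy, are assumptions rather than the authors' algorithm.

        # Minimal sketch: tag FMV frames with little motion or fast camera motion.
        import cv2
        import numpy as np

        def tag_frames(video_path, static_thresh=0.2, fast_thresh=8.0):
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            tags = []
            if not ok:
                return tags
            prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                mag = np.linalg.norm(flow, axis=2)
                median_mag = float(np.median(mag))   # proxy for global (camera) motion
                if median_mag < static_thresh:
                    tags.append("little_motion")
                elif median_mag > fast_thresh:
                    tags.append("fast_camera_motion")
                else:
                    tags.append("usable")
                prev_gray = gray
            cap.release()
            return tags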

  7. Using infrared HOG-based pedestrian detection for outdoor autonomous searching UAV with embedded system

    NASA Astrophysics Data System (ADS)

    Shao, Yanhua; Mei, Yanying; Chu, Hongyu; Chang, Zhiyuan; He, Yuxuan; Zhan, Huayi

    2018-04-01

    Pedestrian detection (PD) is an important application domain in computer vision and pattern recognition. Unmanned Aerial Vehicles (UAVs) have become a major field of research in recent years. In this paper, a robust pedestrian detection method based on the combination of the infrared HOG (IR-HOG) feature and an SVM is proposed for highly complex outdoor scenarios, operating on airborne IR image sequences from a UAV. The basic flow of our application operation is as follows. Firstly, the thermal infrared imager (TAU2-336), which was installed on our Outdoor Autonomous Searching (OAS) UAV, is used for taking pictures of the designated outdoor area. Secondly, image sequence collection and processing were accomplished using a high-performance embedded system, with a Samsung ODROID-XU4 as the core and Ubuntu as the operating system, and IR-HOG features were extracted. Finally, the SVM is used to train the pedestrian classifier. Experiments show that our method yields promising results under complex conditions, including strong noise corruption and partial occlusion.
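
    A minimal Python sketch of the HOG-plus-SVM idea on single-channel (e.g. thermal) patches is shown below; the window size, HOG parameters, classifier settings and training data are assumptions, and the authors' embedded implementation certainly differs.

        # Minimal sketch: IR-HOG feature extraction and linear SVM training.
        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import LinearSVC

        def extract_ir_hog(patch):
            # patch: 2-D uint8 array, e.g. a 128x64 window cut from an IR frame
            return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), block_norm='L2-Hys')

        def train_classifier(pos_patches, neg_patches):
            X = np.array([extract_ir_hog(p) for p in pos_patches + neg_patches])
            y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
            clf = LinearSVC(C=0.01, max_iter=10000)
            clf.fit(X, y)
            return clf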

  8. French MALE UAV Program

    DTIC Science & Technology

    2003-09-02

    French Air Force, Ministère de la Défense: French MALE UAV Program. Outline: 1. SIDM CONOPS; 2. FAF imagery architecture; 3. Future French MALE UAV program.

  9. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  10. Semi-automatic mapping of geological Structures using UAV-based photogrammetric data: An image analysis approach

    NASA Astrophysics Data System (ADS)

    Vasuki, Yathunanthan; Holden, Eun-Jung; Kovesi, Peter; Micklethwaite, Steven

    2014-08-01

    Recent advances in data acquisition technologies, such as Unmanned Aerial Vehicles (UAVs), have led to a growing interest in capturing high-resolution rock surface images. However, due to the large volumes of data that can be captured in a short flight, efficient analysis of this data brings new challenges, especially the time it takes to digitise maps and extract orientation data. We outline a semi-automated method that allows efficient mapping of geological faults using photogrammetric data of rock surfaces, which was generated from aerial photographs collected by a UAV. Our method harnesses advanced automated image analysis techniques and human data interaction to rapidly map structures and then calculate their dip and dip directions. Geological structures (faults, joints and fractures) are first detected from the primary photographic dataset and the equivalent three dimensional (3D) structures are then identified within a 3D surface model generated by structure from motion (SfM). From this information the location, dip and dip direction of the geological structures are calculated. A structure map generated by our semi-automated method obtained a recall rate of 79.8% when compared against a fault map produced using expert manual digitising and interpretation methods. The semi-automated structure map was produced in 10 min whereas the manual method took approximately 7 h. In addition, the dip and dip direction calculation, using our automated method, shows a mean±standard error of 1.9°±2.2° and 4.4°±2.6° respectively with field measurements. This shows the potential of using our semi-automated method for accurate and efficient mapping of geological structures, particularly from remote, inaccessible or hazardous sites.
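
    A worked example of the dip and dip-direction step may help here. The Python sketch below assumes x = east, y = north, z = up, fits a plane to 3D points sampled along a mapped structure by least squares (SVD), and converts the plane normal into dip and dip direction; the function and the fitting choice are illustrative, not the authors' implementation.

        # Minimal sketch: dip and dip direction from a plane fitted to 3-D points.
        import numpy as np

        def dip_and_dip_direction(points):
            # points: (N, 3) array of x (east), y (north), z (up) coordinates
            centered = points - points.mean(axis=0)
            # plane normal = right singular vector with the smallest singular value
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            n = vt[-1]
            if n[2] < 0:                       # make the normal point upward
                n = -n
            dip = np.degrees(np.arccos(n[2]))  # angle from horizontal, 0-90 deg
            # azimuth of steepest descent, clockwise from north (the y axis)
            dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
            return dip, dip_dir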

  11. Diverse Planning for UAV Control and Remote Sensing

    PubMed Central

    Tožička, Jan; Komenda, Antonín

    2016-01-01

    Unmanned aerial vehicles (UAVs) are suited to various remote sensing missions, such as measuring air quality. The conventional method of UAV control is by human operators. Such an approach is limited by the operators' ability to cooperate when controlling larger fleets of UAVs in a shared area. The remedy for this is to increase autonomy of the UAVs in planning their trajectories by considering other UAVs and their plans. To provide such improvement in autonomy, we need better algorithms for generating alternative trajectory variants that the UAV coordination algorithms can utilize. In this article, we define a novel family of multi-UAV sensing problems, solving the allocation of a huge number of tasks (tens of thousands) to a group of configurable UAVs carrying sensors of non-zero weight (comprising the air quality measurement as well), together with two baseline solvers. To solve the problem efficiently, we use an algorithm for diverse trajectory generation and integrate it with a solver for the multi-UAV coordination problem. Finally, we experimentally evaluate the multi-UAV sensing problem solver. The evaluation is done on synthetic and real-world-inspired benchmarks in a multi-UAV simulator. Results show that diverse planning is a valuable method for remote sensing applications containing multiple UAVs. PMID:28009831
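
    As a rough illustration of the baseline allocation problem (not the paper's diverse planner), the following Python sketch greedily assigns sensing tasks to the UAV whose current plan ends closest to the task while respecting a simple range budget; the cost model, variable names and budgets are assumptions.

        # Minimal sketch: greedy allocation of sensing tasks to UAVs.
        import numpy as np

        def greedy_allocate(task_xy, uav_start_xy, budget_m):
            # task_xy: (T, 2) task positions; uav_start_xy: (U, 2) start positions;
            # budget_m: remaining flight range per UAV, in metres
            plans = {u: [tuple(p)] for u, p in enumerate(uav_start_xy)}
            remaining = list(budget_m)
            for task in task_xy:
                costs = [np.linalg.norm(np.subtract(plans[u][-1], task)) for u in plans]
                u = int(np.argmin(costs))
                if costs[u] <= remaining[u]:        # only accept if the budget allows
                    plans[u].append(tuple(task))
                    remaining[u] -= costs[u]
            return plans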

  12. Diverse Planning for UAV Control and Remote Sensing.

    PubMed

    Tožička, Jan; Komenda, Antonín

    2016-12-21

    Unmanned aerial vehicles (UAVs) are suited to various remote sensing missions, such as measuring air quality. The conventional method of UAV control is by human operators. Such an approach is limited by the operators' ability to cooperate when controlling larger fleets of UAVs in a shared area. The remedy for this is to increase autonomy of the UAVs in planning their trajectories by considering other UAVs and their plans. To provide such improvement in autonomy, we need better algorithms for generating alternative trajectory variants that the UAV coordination algorithms can utilize. In this article, we define a novel family of multi-UAV sensing problems, solving the allocation of a huge number of tasks (tens of thousands) to a group of configurable UAVs carrying sensors of non-zero weight (comprising the air quality measurement as well), together with two baseline solvers. To solve the problem efficiently, we use an algorithm for diverse trajectory generation and integrate it with a solver for the multi-UAV coordination problem. Finally, we experimentally evaluate the multi-UAV sensing problem solver. The evaluation is done on synthetic and real-world-inspired benchmarks in a multi-UAV simulator. Results show that diverse planning is a valuable method for remote sensing applications containing multiple UAVs.

  13. Applications of UAV Photogrammetric Surveys to Natural Hazard Detection and Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Trizzino, Rosamaria; Caprioli, Mauro; Mazzone, Francesco; Scarano, Mario

    2017-04-01

    Unmanned Aerial Vehicle (UAV) systems are increasingly seen as an attractive alternative or supplement to aerial and terrestrial photogrammetry due to their low cost, flexibility, availability and readiness for duty. In addition, UAVs can be operated in hazardous or temporarily inaccessible locations. The combination of photogrammetric aerial and terrestrial recording methods using a mini UAV (also known as "drone") opens a broad range of applications, such as surveillance and monitoring of the environment and infrastructural assets. In particular, these methods and techniques are of paramount interest for the documentation of cultural heritage sites and areas of natural importance, facing threats from natural deterioration and hazards. In order to verify the reliability of these technologies, a UAV survey and a LIDAR survey have been carried out along about 1 km of coast in the Salento peninsula, near the towns of San Foca, Torre dell'Orso and Sant'Andrea (Lecce, Southern Italy). This area is affected by serious environmental hazards due to the presence of dangerous rocky cliffs named "falesie". The UAV platform was equipped with a photogrammetric measurement system that allowed us to obtain a mobile mapping of the fractured fronts of dangerous rocky cliffs. UAV image data have been processed using dedicated software (Agisoft Photoscan). The point clouds obtained from both the UAV and LIDAR surveys have been processed using Cloud Compare software, with the aim of testing the UAV results with respect to the LIDAR ones. The analysis was done using the C2C algorithm, which provides good results in terms of Euclidean distances, highlighting differences between the 3D models obtained from the two survey techniques. The total error obtained was of centimeter order, which is a very satisfactory result. In the second study area, the opportunities for obtaining more detailed documentation of cultural goods through UAV surveys have been investigated. The study
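
    The C2C comparison performed in Cloud Compare is, in essence, a nearest-neighbour distance computation. A minimal Python sketch of that idea is shown below, assuming both clouds are given as Nx3 arrays in the same coordinate reference system; it is not the Cloud Compare implementation itself.

        # Minimal sketch: cloud-to-cloud (C2C) distances between UAV and LiDAR points.
        import numpy as np
        from scipy.spatial import cKDTree

        def c2c_distances(uav_points, lidar_points):
            # both inputs: (N, 3) arrays of x, y, z coordinates in the same CRS
            tree = cKDTree(lidar_points)
            d, _ = tree.query(uav_points, k=1)   # distance to nearest LiDAR neighbour
            return d

        # Summary statistics comparable to a centimetre-level error report:
        # d = c2c_distances(uav_xyz, lidar_xyz)
        # print(d.mean(), d.std(), np.percentile(d, 95))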

  14. On-board computational efficiency in real time UAV embedded terrain reconstruction

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis

    2014-05-01

    In the last few years, there has been a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAV). Specifications in constructing those UAVs are highly diverse, with contradictory characteristics including cost-efficiency, carrying weight, flight time, mapping precision, real time processing capabilities, etc. In this work, a hexacopter UAV is employed for near real time terrain mapping. The main challenge addressed is to retain a low cost flying platform with real time processing capabilities. The UAV weight limitation, affecting the overall flight time, makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a capable processing unit on board. To merge these two contradicting aspects along with customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor, which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as Omap3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access to both general purpose processors, such as the ARM11, as well as specialized processors, such as a digital signal processor and a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, according to the frame rate required, additional image processing may concurrently take place, such as image rectification and object detection. Lastly, the onboard positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground truth geodetic measurements in order to assess the accuracy limitations of the overall process. It is shown that with our proposed novel system, there is much potential in
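
    A minimal Python sketch of the stereo step is given below; it uses OpenCV block matching on a rectified grayscale pair and converts disparity to depth. The focal length and baseline values are placeholders, and the authors' on-board SoC implementation is certainly different.

        # Minimal sketch: disparity from a rectified stereo pair and depth conversion.
        import cv2
        import numpy as np

        def stereo_depth(left_gray, right_gray, focal_px=800.0, baseline_m=0.20):
            # inputs: rectified 8-bit single-channel images of equal size
            matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
            disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
            depth = np.full_like(disparity, np.nan)
            valid = disparity > 0
            depth[valid] = focal_px * baseline_m / disparity[valid]   # metres
            return depth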

  15. Demonstration of UAV deployment and control of mobile wireless sensing networks for modal analysis of structures

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Hirose, Mitsuhito; Greenwood, William; Xiao, Yong; Lynch, Jerome; Zekkos, Dimitrios; Kamat, Vineet

    2016-04-01

    Unmanned aerial vehicles (UAVs) can serve as a powerful mobile sensing platform for assessing the health of civil infrastructure systems. To date, the majority of their uses have been dedicated to vision and laser-based spatial imaging using on-board cameras and LiDAR units, respectively. Comparatively less work has focused on integration of other sensing modalities relevant to structural monitoring applications. The overarching goal of this study is to explore the ability for UAVs to deploy a network of wireless sensors on structures for controlled vibration testing. The study develops a UAV platform with an integrated robotic gripper that can be used to install wireless sensors in structures, drop a heavy weight for the introduction of impact loads, and to uninstall wireless sensors for reinstallation elsewhere. A pose estimation algorithm is embedded in the UAV to estimate the location of the UAV during sensor placement and impact load introduction. The Martlet wireless sensor network architecture is integrated with the UAV to provide the UAV a mobile sensing capability. The UAV is programmed to command field deployed Martlets, aggregate and temporarily store data from the wireless sensor network, and to communicate data to a fixed base station on site. This study demonstrates the integrated UAV system using a simply supported beam in the lab with Martlet wireless sensors placed by the UAV and impact load testing performed. The study verifies the feasibility of the integrated UAV-wireless monitoring system architecture with accurate modal characteristics of the beam estimated by modal analysis.
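
    For readers unfamiliar with the modal-analysis step, the following Python sketch estimates natural frequencies from a free-vibration acceleration record by peak-picking the magnitude spectrum; it is a generic illustration, not the Martlet firmware or the authors' processing chain, and the peak-height threshold is an assumption.

        # Minimal sketch: modal frequency estimation from an impact-test record.
        import numpy as np
        from scipy.signal import find_peaks

        def modal_frequencies(accel, fs, n_modes=3):
            # accel: 1-D acceleration record after the impact; fs: sampling rate in Hz
            accel = accel - accel.mean()
            spectrum = np.abs(np.fft.rfft(accel * np.hanning(len(accel))))
            freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
            peaks, _ = find_peaks(spectrum, height=0.1 * spectrum.max())
            order = np.argsort(spectrum[peaks])[::-1][:n_modes]   # strongest peaks
            return np.sort(freqs[peaks[order]])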

  16. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes

    PubMed Central

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung

    2016-01-01

    Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156

  17. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.

    PubMed

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung

    2016-10-25

    Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.

  18. Towards distributed ATR using subjective logic combination rules with a swarm of UAVs

    NASA Astrophysics Data System (ADS)

    O'Hara, Stephen; Simon, Michael; Zhu, Qiuming

    2007-04-01

    In this paper, we present our initial findings demonstrating a cost-effective approach to Aided Target Recognition (ATR) employing a swarm of inexpensive Unmanned Aerial Vehicles (UAVs). We call our approach Distributed ATR (DATR). Our paper describes the utility of DATR for autonomous UAV operations, provides an overview of our methods, and presents the results of our initial simulation-based implementation and feasibility study. Our technology is aimed towards small and micro UAVs where platform restrictions allow only a modest quality camera and limited on-board computational capabilities. It is understood that an inexpensive sensor coupled with limited processing capability would be challenged in deriving a high probability of detection (Pd) while maintaining a low probability of false alarms (Pfa). Our hypothesis is that an evidential reasoning approach to fusing the observations of multiple UAVs observing approximately the same scene can raise the Pd and lower the Pfa sufficiently in order to provide a cost-effective ATR capability. This capability can lead to practical implementations of autonomous, coordinated, multi-UAV operations. In our system, the live video feed from a UAV is processed by a lightweight real-time ATR algorithm. This algorithm provides a set of possible classifications for each detected object over a possibility space defined by a set of exemplars. The classifications for each frame within a short observation interval (a few seconds) are used to generate a belief statement. Our system considers how many frames in the observation interval support each potential classification. A definable function transforms the observational data into a belief value. The belief value, or opinion, represents the UAV's belief that an object of the particular class exists in the area covered during the observation interval. The opinion is submitted as evidence in an evidential reasoning system. Opinions from observations over the same spatial area will have
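
    The mapping from frame counts to opinions, and the fusion of opinions from several UAVs, can be illustrated compactly. The Python sketch below uses the standard subjective-logic evidence mapping (prior weight W = 2, an assumption) and the consensus (cumulative fusion) operator; it is a generic illustration rather than the authors' definable transformation function.

        # Minimal sketch: subjective-logic opinions from frame counts and their fusion.
        def opinion_from_counts(supporting, conflicting, W=2.0):
            # supporting/conflicting: numbers of frames for and against a class
            total = supporting + conflicting + W
            belief = supporting / total
            disbelief = conflicting / total
            uncertainty = W / total
            return belief, disbelief, uncertainty

        def consensus(op_a, op_b):
            # cumulative fusion of two opinions (b, d, u); undefined if both u == 0
            bA, dA, uA = op_a
            bB, dB, uB = op_b
            k = uA + uB - uA * uB
            return ((bA * uB + bB * uA) / k,
                    (dA * uB + dB * uA) / k,
                    (uA * uB) / k)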

  19. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technology are progressing quickly, and are prevailing throughout various technological fields. Therefore, the development of such technology should respond to the needs for improvement of quality in the e-learning education system. The authors propose a new video-image compression processing system that ingeniously employs the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capturing software at relatively long intervals during a practical class. Then, a lecturer and a lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.

  20. High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.

    2015-01-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and an increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the area. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases as in many parts of the globe, accurate land-use information is generally lacking, because detailed image data is unavailable. Modern unmanned air vehicles (UAVs) allow acquiring high-resolution images on a local level at comparably lower cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored for the purpose of the study. In this study, we investigate for the first time the possibility to derive high-resolution imperviousness maps for urban areas from UAV imagery and to use this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence which different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual subcatchments regarding relevant attributes, such as peak runoff and volume. Finally, we evaluate the model

  1. Chosen Aspects of the Production of the Basic Map Using Uav Imagery

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Fryskowska, A.; Wierzbicki, D.; Nerc, P.

    2016-06-01

    For several years there has been an increasing interest in the use of unmanned aerial vehicles for acquiring image data from a low altitude. Considering the cost-effectiveness of the flight time of UAVs vs. conventional airplanes, the use of the former is advantageous when generating large-scale accurate orthophotos. Through the development of UAV imagery, we can update large-scale basic maps. These maps are cartographic products which are used for registration, economic, and strategic planning. On the basis of these maps other cartographic maps are produced, for example maps used for building planning. The article presents an assessment of the usefulness of orthophotos based on UAV imagery to upgrade the basic map. In the research a compact, non-metric camera, mounted on a fixed-wing platform powered by an electric motor, was used. The tested area covered flat, agricultural and woodland terrains. The processing and analysis of orthorectification were carried out with the INPHO UASMaster programme. Due to the effect of UAV instability on low-altitude imagery, the use of non-metric digital cameras and the low-accuracy GPS-INS sensors, the geometry of the images is visibly poorer when compared to conventional digital aerial photos (large values of phi and kappa angles). Therefore, typically, low-altitude images require large along- and across-track direction overlap - usually above 70 %. As a result of the research, orthoimages were obtained with a resolution of 0.06 meters and a horizontal accuracy of 0.10 m. Digitized basic maps were used as the reference data. The accuracy of orthoimages vs. basic maps was estimated based on the study and on the available reference sources. As a result, it was found that the geometric accuracy and interpretative advantages of the final orthoimages allow the updating of basic maps. It is estimated that such an update of basic maps based on UAV imagery reduces processing time by approx. 40%.

  2. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.

    2016-06-01

    Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed in order to automate the measurement procedure. The purpose of the presented work is an automatic tree growth estimation based on Unmanned Aircraft Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data was collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week. In situ measurements were correlated with the UAV data acquisition. The correlation aimed at the investigation of optimal conditions for a flight and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in the generation of dense 3D point clouds. The algorithm is developed in order to estimate geometric tree parameters from 3D points. Stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually. The comparison allows for the evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
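
    The height calculation itself is simple once stems and tree tops are identified. The Python sketch below illustrates one plausible formulation, assuming the points of a single tree are given and the local ground elevation is taken from a low height percentile; the percentile value is an assumption, not the authors' parameter.

        # Minimal sketch: tree height from the 3-D points of one detected tree.
        import numpy as np

        def tree_height(tree_points_z, ground_percentile=2.0):
            # tree_points_z: 1-D array of z values belonging to a single tree
            ground = np.percentile(tree_points_z, ground_percentile)
            return float(tree_points_z.max() - ground)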

  3. Commercial vs professional UAVs for mapping

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos G.; Koukouvelas, Ioannis

    2017-09-01

    The continuous advancements in the technology behind Unmanned Aerial Vehicles (UAVs), together with the steady decrease in their cost and the availability of photogrammetric software, make UAVs an excellent tool for large-scale mapping. In addition, with the use of UAVs, the problems of increased costs, time consumption and possible terrain accessibility are significantly reduced. However, despite the growing number of UAV applications, there has been little quantitative assessment of UAV performance and of the quality of the derived products (orthophotos and Digital Surface Models). Here, we present results from field experiments designed to evaluate the accuracy of photogrammetrically-derived digital surface models (DSM) developed from imagery acquired with onboard digital cameras. We also compare high-resolution vs. moderate-resolution imagery for large-scale geomorphic mapping. The data analyzed in this study come from a small commercial UAV and a professional UAV. The test area was mapped using the same photogrammetric grid by the two UAVs. 3D models, DSMs and orthophotos were created using special software. Those products were compared to in situ survey measurements and the results are presented in this paper.

  4. Guided filtering for solar image/video processing

    NASA Astrophysics Data System (ADS)

    Xu, Long; Yan, Yihua; Cheng, Jun

    2017-06-01

    A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily discern important fine structures embedded in the recorded images/movies for solar observation. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it can further highlight fibrous structures on/beyond the solar disk. These fibrous structures can clearly demonstrate the progress of solar flares, prominences, coronal mass ejections, magnetic fields, and so on. The experimental results prove that the proposed algorithm significantly enhances the visual quality of solar images compared with the original input and several classical image enhancement algorithms, thus facilitating easier identification of interesting solar burst activities from recorded images/movies.
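
    Guided filtering itself is compact enough to sketch. The Python code below is a self-contained implementation of the classic guided filter (He et al.) using box filters, with an edge-aware detail-boost usage note; the radius, epsilon and boost factor are assumptions, and the authors' full enhancement pipeline is more elaborate.

        # Minimal sketch: guided filter built from box filters.
        import cv2
        import numpy as np

        def guided_filter(guide, src, radius=8, eps=1e-3):
            # guide, src: float32 images scaled to [0, 1]; guide may equal src
            ksize = (2 * radius + 1, 2 * radius + 1)
            mean = lambda x: cv2.boxFilter(x, -1, ksize)   # normalized box mean
            mean_I, mean_p = mean(guide), mean(src)
            corr_I, corr_Ip = mean(guide * guide), mean(guide * src)
            var_I = corr_I - mean_I * mean_I
            cov_Ip = corr_Ip - mean_I * mean_p
            a = cov_Ip / (var_I + eps)
            b = mean_p - a * mean_I
            return mean(a) * guide + mean(b)

        # Edge-aware detail boost (illustrative usage):
        # base = guided_filter(img, img); enhanced = base + 3.0 * (img - base)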

  5. Algorithm for automatic image dodging of unmanned aerial vehicle images using two-dimensional radiometric spatial attributes

    NASA Astrophysics Data System (ADS)

    Li, Wenzhuo; Sun, Kaimin; Li, Deren; Bai, Ting

    2016-07-01

    Unmanned aerial vehicle (UAV) remote sensing technology has come into wide use in recent years. The poor stability of the UAV platform, however, produces more inconsistencies in hue and illumination among UAV images than other more stable platforms. Image dodging is a process used to reduce these inconsistencies caused by different imaging conditions. We propose an algorithm for automatic image dodging of UAV images using two-dimensional radiometric spatial attributes. We use object-level image smoothing to smooth foreground objects in images and acquire an overall reference background image by relative radiometric correction. We apply the Contourlet transform to separate high- and low-frequency sections for every single image, and replace the low-frequency section with the low-frequency section extracted from the corresponding region in the overall reference background image. We apply the inverse Contourlet transform to reconstruct the final dodged images. In this process, a single image must be split into reasonable block sizes with overlaps due to large pixel size. Experimental mosaic results show that our proposed method reduces the uneven distribution of hue and illumination. Moreover, it effectively eliminates dark-bright interstrip effects caused by shadows and vignetting in UAV images while maximally protecting image texture information.

  6. UAV-LiDAR accuracy and comparison to Structure from Motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Kucharczyk, M.; Hugenholtz, C.; Zou, X.; Nesbit, P. R.; Barchyn, T.

    2016-12-01

    We compare the spatial accuracy of a UAV-LiDAR system with Structure from Motion (SfM) photogrammetry. UAV-based LiDAR remote sensing potentially offers advantages over SfM photogrammetry in vegetated terrain, particularly with respect to canopy penetration and related measurements of ground surface elevation and vegetation height; however, little quantitative evidence has been presented to date. To address this, we performed a case study at a field site in Alberta, Canada with six different land cover types: short grass, tall grass, short shrubs, tall shrubs, deciduous trees, and coniferous trees. Both UAV datasets were acquired on the same day. The SfM dataset was derived from images acquired by a senseFly eBee fixed-wing UAV equipped with a 16.1 megapixel RGB camera. The UAV-LiDAR system is a proprietary design that consists of a single-rotor helicopter (2-m rotor diameter) equipped with a Riegl VUX-1UAV laser scanner, KVH 1750 inertial measurement unit, and dual NovAtel GNSS receivers. We measured vegetation height from at least 30 samples in each land cover type and acquired check point measurements to determine horizontal and vertical accuracy. Vegetation height was measured manually for grasses and shrubs with a level staff, and with a total station for trees. Coordinates of horizontal and vertical check points were surveyed with real-time kinematic (RTK) GNSS. We followed standard methods for computing horizontal and vertical accuracies based on the 2015 guidelines from the American Society of Photogrammetry and Remote Sensing. Results will be presented at the AGU Fall Meeting.
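
    The check-point statistics referred to can be sketched briefly. The Python snippet below computes RMSEz together with the 95%-level vertical accuracy figures commonly reported under the ASPRS positional accuracy standard (1.96 x RMSEz for non-vegetated terrain, the 95th percentile of absolute errors where vegetation is present); it illustrates the standard formulas, not the authors' processing scripts.

        # Minimal sketch: vertical accuracy statistics from check-point residuals.
        import numpy as np

        def vertical_accuracy(dz):
            dz = np.asarray(dz, dtype=float)        # surveyed minus modelled heights
            rmse_z = np.sqrt(np.mean(dz ** 2))
            nva = 1.96 * rmse_z                     # non-vegetated vertical accuracy
            vva = np.percentile(np.abs(dz), 95.0)   # vegetated vertical accuracy
            return rmse_z, nva, vva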

  7. Facial Attractiveness Ratings from Video-Clips and Static Images Tell the Same Story

    PubMed Central

    Rhodes, Gillian; Lie, Hanne C.; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W.

    2011-01-01

    Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness. PMID:22096491

  8. Scanning Rocket Impact Area with an UAV: First Results

    NASA Astrophysics Data System (ADS)

    Santos, C. C. C.; Costa, D. A. L. M.; Junior, V. L. S.; Silva, B. R. F.; Leite, D. L.; Junor, C. E. B. S.; Liberator, B. A.; Nogueira, M. B.; Senna, M. D.; Santiago, G. S.; Dantas, J. B. D.; Alsina, P. J.; Albuquerque, G. L. A.

    2015-09-01

    This paper presents the first subsystems developed for a UAV used in the safety procedures of sounding rocket campaigns. The aim of this UAV is to scan the rocket impact area in order to search for unexpected boats. To achieve this mission, the designers developed an image recognition algorithm, two human-machine interfaces and two communication links, one to control the drone and the other for receiving telemetry data. In this paper, the developers present the major engineering decisions taken to overcome the project constraints. A secondary goal of the project is to encourage young people to take part in the Brazilian space program. For this reason, most of the designers are undergraduate students under the supervision of experts.

  9. Nearshore Measurements From a Small UAV.

    NASA Astrophysics Data System (ADS)

    Holman, R. A.; Brodie, K. L.; Spore, N.

    2016-02-01

    Traditional measurements of nearshore hydrodynamics and evolving bathymetry are expensive and dangerous and must be frequently repeated to track the rapid changes of typical ocean beaches. However, extensive research into remote sensing methods using cameras or radars mounted on fixed towers has resulted in increasingly mature algorithms for estimating bathymetry, currents and wave characteristics. This naturally raises questions about how easily and effectively these algorithms can be applied to optical data from low-cost, easily-available UAV platforms. This paper will address the characteristics and quality of data taken from a small, low-cost UAV, the DJI Phantom. In particular, we will study the stability of imagery from a vehicle `parked' at 300 feet altitude, methods to stabilize remaining wander, and the quality of nearshore bathymetry estimates from the resulting image time series, computed using the cBathy algorithm. Estimates will be compared to ground truth surveys collected at the Field Research Facility at Duck, NC.
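
    One common way to remove residual wander from a hovering platform is feature-based registration to a reference frame. The Python sketch below matches ORB features and warps each frame with a RANSAC-estimated homography; it is a generic stabilisation illustration, not necessarily the method used in this study.

        # Minimal sketch: stabilize a frame against a reference frame via homography.
        import cv2
        import numpy as np

        def stabilize(frame, reference):
            orb = cv2.ORB_create(2000)
            kp_r, des_r = orb.detectAndCompute(reference, None)
            kp_f, des_f = orb.detectAndCompute(frame, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des_f, des_r)
            src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust estimate
            h, w = reference.shape[:2]
            return cv2.warpPerspective(frame, H, (w, h))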

  10. Development of Cloud-Based UAV Monitoring and Management System

    PubMed Central

    Itkin, Mason; Kim, Mihui; Park, Younghee

    2016-01-01

    Unmanned aerial vehicles (UAVs) are an emerging technology with the potential to revolutionize commercial industries and the public domain outside of the military. UAVs would be able to speed up rescue and recovery operations from natural disasters and can be used for autonomous delivery systems (e.g., Amazon Prime Air). An increase in the number of active UAV systems in dense urban areas is attributed to an influx of UAV hobbyists and commercial multi-UAV systems. As airspace for UAV flight becomes more limited, it is important to monitor and manage many UAV systems using modern collision avoidance techniques. In this paper, we propose a cloud-based web application that provides real-time flight monitoring and management for UAVs. For each connected UAV, detailed UAV sensor readings from the accelerometer, GPS sensor, ultrasonic sensor and visual position cameras are provided along with status reports from the smaller internal components of UAVs (i.e., motor and battery). The dynamic map overlay visualizes active flight paths and current UAV locations, allowing the user to monitor all aircraft easily. Our system detects and prevents potential collisions by automatically adjusting UAV flight paths and then alerting users to the change. We develop our proposed system and demonstrate its feasibility and performance through simulation. PMID:27854267
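
    The collision-detection idea can be illustrated with a simple separation check. The Python sketch below flags UAV pairs whose reported positions fall within a minimum separation distance in a local planar frame; the threshold and the flat-Earth approximation are assumptions, and the deployed system's avoidance logic is richer.

        # Minimal sketch: flag UAV pairs closer than a minimum separation.
        import itertools
        import numpy as np

        def potential_collisions(positions, min_sep_m=25.0):
            # positions: dict of uav_id -> (east_m, north_m, alt_m) in a local frame
            alerts = []
            for (id_a, p_a), (id_b, p_b) in itertools.combinations(positions.items(), 2):
                if np.linalg.norm(np.subtract(p_a, p_b)) < min_sep_m:
                    alerts.append((id_a, id_b))
            return alerts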

  11. Development of Cloud-Based UAV Monitoring and Management System.

    PubMed

    Itkin, Mason; Kim, Mihui; Park, Younghee

    2016-11-15

    Unmanned aerial vehicles (UAVs) are an emerging technology with the potential to revolutionize commercial industries and the public domain outside of the military. UAVs would be able to speed up rescue and recovery operations from natural disasters and can be used for autonomous delivery systems (e.g., Amazon Prime Air). An increase in the number of active UAV systems in dense urban areas is attributed to an influx of UAV hobbyists and commercial multi-UAV systems. As airspace for UAV flight becomes more limited, it is important to monitor and manage many UAV systems using modern collision avoidance techniques. In this paper, we propose a cloud-based web application that provides real-time flight monitoring and management for UAVs. For each connected UAV, detailed UAV sensor readings from the accelerometer, GPS sensor, ultrasonic sensor and visual position cameras are provided along with status reports from the smaller internal components of UAVs (i.e., motor and battery). The dynamic map overlay visualizes active flight paths and current UAV locations, allowing the user to monitor all aircraft easily. Our system detects and prevents potential collisions by automatically adjusting UAV flight paths and then alerting users to the change. We develop our proposed system and demonstrate its feasibility and performance through simulation.

  12. Informal settlement classification using point-cloud and image-based features from UAV data

    NASA Astrophysics Data System (ADS)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2017-03-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Furthermore, it is of interest to analyse which fundamental attributes are suitable for describing these objects in different geographic locations. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. UAV datasets from informal settlements in two different countries are compared in order to identify salient features for specific objects in heterogeneous urban environments. Findings show that the integration of 2D and 3D features leads to an overall accuracy of 91.6% and 95.2% respectively for informal settlements in Kigali, Rwanda and Maldonado, Uruguay.

  13. High Resolution Airborne Laser Scanning and Hyperspectral Imaging with a Small Uav Platform

    NASA Astrophysics Data System (ADS)

    Gallay, Michal; Eck, Christoph; Zgraggen, Carlo; Kaňuk, Ján; Dvorný, Eduard

    2016-06-01

    The capabilities of unmanned airborne systems (UAS) have become diverse with the recent development of lightweight remote sensing instruments. In this paper, we demonstrate our custom integration of the state-of-the-art technologies within an unmanned aerial platform capable of high-resolution and high-accuracy laser scanning, hyperspectral imaging, and photographic imaging. The technological solution comprises the latest development of a completely autonomous, unmanned helicopter by Aeroscout, the Scout B1-100 UAV helicopter. The helicopter is powered by a gasoline two-stroke engine and it allows for integrating 18 kg of a customized payload unit. The whole system is modular providing flexibility of payload options, which comprises the main advantage of the UAS. The UAS integrates two kinds of payloads which can be altered. Both payloads integrate a GPS/IMU with a dual GPS antenna configuration provided by OXTS for accurate navigation and position measurements during the data acquisition. The first payload comprises a VUX-1 laser scanner by RIEGL and a Sony A6000 E-Mount photo camera. The second payload for hyperspectral scanning integrates a push-broom imager AISA KESTREL 10 by SPECIM. The UAS was designed for research of various aspects of landscape dynamics (landslides, erosion, flooding, or phenology) in high spectral and spatial resolution.

  14. Using UAV data for soil surface change detection at a loess field plot

    NASA Astrophysics Data System (ADS)

    Eltner, Anette; Baumgart, Philipp

    2014-05-01

    The application of unmanned aerial vehicles (UAVs) has attracted increasing interest in the geosciences due to major developments within the last years. Today, UAVs are economical, reliable and flexible in use. They provide a non-invasive method to measure the soil surface and its changes - e.g. due to erosion - with high resolution. Advances in digital photogrammetry and computer vision allow for fast and dense digital surface reconstruction from overlapping images. The study site is located in the Saxonian loess (Germany). The area is fragile due to erodible soils and intense agricultural utilisation. Hence, detectable soil surface changes are expected. The size of the field plot is 20 x 30 meters and the period of investigation lasted from October 2012 till July 2013, during which four surveys were performed. The UAV deployed in this study is equipped with a compact camera which is attached to an active stabilising camera mount. In addition, the micro drone integrates a GPS and an IMU that enable autonomous surveys with programmed flight patterns. About 100 photos are needed to cover the study site at a minimal flying height of eight metres and 65%/80% image overlap. For multi-temporal comparison a stable local reference system is established. Total station control of the signalised ground control points confirms two mm accuracy for the study period. To estimate the accuracy of the digital surface models (DSM) derived from the UAV images, a comparison to DSM from terrestrial laser scanning (TLS) is conducted. The standard deviation of differences amounts to five millimetres. To analyse surface changes, methods from image processing are applied to the DSM. Erosion rills could be extracted for quantitative and qualitative consideration. Furthermore, volumetric changes are measured. First results indicate levelling processes during the winter season and reveal rill and inter-rill erosion during the spring and summer seasons.
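
    The multi-temporal comparison amounts to differencing co-registered DSMs. The Python sketch below computes a DEM of difference, suppresses changes below a level of detection, and reports eroded and deposited volumes; the cell size and detection threshold are placeholders rather than the study's values.

        # Minimal sketch: DSM differencing and volumetric change estimation.
        import numpy as np

        def dsm_change(dsm_t0, dsm_t1, cell_size_m=0.01, lod_m=0.01):
            # dsm_t0, dsm_t1: co-registered elevation rasters of equal shape
            dod = dsm_t1 - dsm_t0                        # DEM of difference
            dod = np.where(np.abs(dod) < lod_m, 0.0, dod)  # ignore sub-LoD changes
            cell_area = cell_size_m ** 2
            erosion = -dod[dod < 0].sum() * cell_area    # m^3 removed
            deposition = dod[dod > 0].sum() * cell_area  # m^3 added
            return dod, erosion, deposition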

  15. High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.

    2015-10-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and an increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the catchment area as model input. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases as in many parts of the globe, accurate land-use information is generally lacking, because detailed image data are often unavailable. Modern unmanned aerial vehicles (UAVs) allow one to acquire high-resolution images on a local level at comparably lower cost, performing on-demand repetitive measurements and obtaining a degree of detail tailored for the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and of using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is proposed and evaluated in a state-of-the-art urban drainage modelling exercise. In a real-life case study (Lucerne, Switzerland), we compare imperviousness maps generated using a fixed-wing consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their overall accuracy, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence which different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyse the surface runoff of the 307 individual subcatchments regarding relevant attributes, such as peak

  16. Automatic and quantitative measurement of laryngeal video stroboscopic images.

    PubMed

    Kuo, Chung-Feng Jeffrey; Kuo, Joseph; Hsiao, Shang-Wun; Lee, Chi-Lung; Lee, Jih-Chin; Ke, Bo-Han

    2017-01-01

    The laryngeal video stroboscope is an important instrument for physicians to analyze abnormalities and diseases in the glottal area. The stroboscope has been widely used around the world. However, without quantized indices, physicians can only make subjective judgments on glottal images. We designed a new laser projection marking module and applied it to the laryngeal video stroboscope to provide scale conversion reference parameters for glottal imaging and to convert the physiological parameters of the glottis. Image processing technology was used to segment the important image regions of interest. Information on the glottis was quantified, and the vocal fold image segmentation system was completed to assist clinical diagnosis and increase accuracy. Regarding image processing, histogram equalization was used to enhance glottis image contrast. A center-weighted median filter removes image noise while retaining the texture of the glottal image. Statistical threshold determination was used for automatic segmentation of a glottal image. As the glottis image contains saliva and light spots, which are classified as image noise, the noise was eliminated by erosion, dilation, opening, and closing techniques to highlight the vocal area. We also used image processing to automatically identify the vocal fold region in order to quantify information from the glottal image, such as glottal area, vocal fold perimeter, vocal fold length, glottal width, and vocal fold angle. The quantized glottis image database was created to assist physicians in diagnosing glottis diseases more objectively.
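
    The described processing chain maps naturally onto standard image-processing primitives. The Python/OpenCV sketch below (assuming OpenCV 4.x) applies histogram equalisation, median filtering, automatic thresholding and morphological clean-up before measuring the largest segmented region; note that Otsu thresholding and a plain median filter are used here as stand-ins for the paper's statistical threshold determination and center-weighted median, and the kernel sizes are assumptions.

        # Minimal sketch: segment the glottal region and measure area and perimeter.
        import cv2
        import numpy as np

        def segment_glottis(gray):
            equalized = cv2.equalizeHist(gray)                    # contrast enhancement
            smoothed = cv2.medianBlur(equalized, 5)               # noise suppression
            _, mask = cv2.threshold(smoothed, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove specks
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            glottis = max(contours, key=cv2.contourArea)
            return mask, cv2.contourArea(glottis), cv2.arcLength(glottis, True)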

  17. From image captioning to video summary using deep recurrent networks and unsupervised segmentation

    NASA Astrophysics Data System (ADS)

    Morosanu, Bogdan-Andrei; Lemnaru, Camelia

    2018-04-01

    Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
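
    The divergence-based segmentation can be sketched directly from the description. In the Python snippet below, each frame's hidden-layer activations are soft-maxed into a distribution and a context boundary is declared when the Jensen-Shannon divergence between consecutive frames exceeds a threshold; the feature extractor and the threshold value are assumptions.

        # Minimal sketch: divergence-based segmentation of video frames.
        import numpy as np

        def softmax(logits):
            z = logits - logits.max()
            e = np.exp(z)
            return e / e.sum()

        def js_divergence(p, q, eps=1e-12):
            m = 0.5 * (p + q)
            kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        def segment_boundaries(frame_features, threshold=0.15):
            # frame_features: list of 1-D activation vectors, one per frame
            probs = [softmax(f) for f in frame_features]
            return [i for i in range(1, len(probs))
                    if js_divergence(probs[i - 1], probs[i]) > threshold]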

  18. Analysis of the Radiometric Response of Orange Tree Crown in Hyperspectral Uav Images

    NASA Astrophysics Data System (ADS)

    Imai, N. N.; Moriya, E. A. S.; Honkavaara, E.; Miyoshi, G. T.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.

    2017-10-01

    High spatial resolution remote sensing images acquired by drones are a highly relevant data source in many applications. However, strong variations of radiometric values are difficult to correct in hyperspectral images. Honkavaara et al. (2013) presented a radiometric block adjustment method in which hyperspectral images taken from remotely piloted aerial systems (RPAS) were processed both geometrically and radiometrically to produce a georeferenced mosaic in which the standard Reflectance Factor for the nadir is represented. The plant crowns in permanent cultivation show complex variations, since the density of shadows and the irradiance of the surface vary due to the geometry of illumination and the geometry of the arrangement of branches and leaves. An evaluation of the radiometric quality of the mosaic of an orange plantation, produced using images captured by a hyperspectral imager based on a tunable Fabry-Pérot interferometer and applying the radiometric block adjustment method, was performed. A high-resolution UAV-based hyperspectral survey was carried out on an orange-producing farm located in Santa Cruz do Rio Pardo, state of São Paulo, Brazil. A set of images with 25 narrow spectral bands and a GSD of 2.5 cm was acquired. Trend analysis was applied to the values of a sample of transects extracted from plants appearing in the mosaic. The results of the trend analysis on the pixels distributed along transects across the orange tree crowns showed that the reflectance factor presents a slight trend, but the coefficients of the polynomials are very small, so the quality of the mosaic is good enough for many applications.
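
    The transect trend check is essentially a low-order polynomial fit. The Python sketch below fits such a polynomial to reflectance-factor samples along a crown transect and returns its coefficients, whose small magnitude would indicate only a slight trend; the first-order default is an assumption.

        # Minimal sketch: polynomial trend along a reflectance transect.
        import numpy as np

        def transect_trend(distance_m, reflectance, degree=1):
            # distance_m: positions along the transect; reflectance: sampled values
            coeffs = np.polyfit(distance_m, reflectance, degree)
            return coeffs   # small higher-order coefficients indicate a weak trend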

  19. Cooperative UAV-Based Communications Backbone for Sensor Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, R S

    2001-10-07

    The objective of this project is to investigate the use of unmanned air vehicles (UAVs) as mobile, adaptive communications backbones for ground-based sensor networks. In this type of network, the UAVs provide communication connectivity to sensors that cannot communicate with each other because of terrain, distance, or other geographical constraints. In these situations, UAVs provide a vertical communication path for the sensors, thereby mitigating geographic obstacles often imposed on networks. With the proper use of UAVs, connectivity to a widely dispersed sensor network in rugged terrain is readily achieved. Our investigation has focused on networks where multiple cooperating UAVs are used to form a network backbone. The advantage of using multiple UAVs to form the network backbone is parallelization of sensor connectivity. Many widely spaced or isolated sensors can be connected to the network at once using this approach. In these networks, the UAVs logically partition the sensor network into sub-networks (subnets), with one UAV assigned per subnet. Partitioning the network into subnets allows the UAVs to service sensors in parallel, thereby decreasing the sensor-to-network connectivity time. A UAV services sensors in its subnet by flying a route (path) through the subnet, uplinking data collected by the sensors, and forwarding the data to a ground station. An additional advantage of using multiple UAVs in the network is that they provide redundancy in the communications backbone, so that the failure of a single UAV does not necessarily imply the loss of the network.
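
    Partitioning the sensor field into one subnet per UAV can be illustrated with a clustering step. The Python sketch below groups sensor positions with k-means and assigns one cluster to each UAV; k-means is used here purely for illustration and is not necessarily the project's partitioning scheme.

        # Minimal sketch: assign ground sensors to one subnet per UAV by clustering.
        import numpy as np
        from sklearn.cluster import KMeans

        def partition_sensors(sensor_xy, n_uavs):
            # sensor_xy: (N, 2) array of sensor ground coordinates
            labels = KMeans(n_clusters=n_uavs, n_init=10,
                            random_state=0).fit_predict(sensor_xy)
            return {uav: np.flatnonzero(labels == uav) for uav in range(n_uavs)}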

  20. Unmanned Aerial Vehicle (UAV) associated DTM quality evaluation and hazard assessment

    NASA Astrophysics Data System (ADS)

    Huang, Mei-Jen; Chen, Shao-Der; Chao, Yu-Jui; Chiang, Yi-Lin; Chang, Kuo-Jen

    2014-05-01

    In Taiwan, due to the high seismicity and high annual rainfall, numerous landslides are triggered every year, and severe impacts affect the island. Concerning catastrophic landslides, key information, including the extent of the landslide, volume estimation and the subsequent evolution, is important when analyzing the triggering mechanism and for hazard assessment and mitigation. Thus, morphological analysis gives a general overview of the landslides and is considered one of the most fundamental sources of information. We try to integrate several technologies, especially Unmanned Aerial Vehicle (UAV) photogrammetry and a multi-spectral camera, to decipher the consequences, the potential hazard, and the social impact. In recent years, remote sensing technology has improved rapidly, providing a wide range of imagery and essential, precious information. Benefiting from advances in informatics, remote sensing and electronic technologies, Unmanned Aerial Vehicle (UAV) photogrammetry has improved significantly. The study tries to integrate several methods, including: 1) remote-sensing images gathered by Unmanned Aerial Vehicle (UAV) and aerial photos taken in different periods; 2) field in-situ geologic investigation; 3) differential GPS, RTK GPS and ground LiDAR in-situ geoinformatics measurements; 4) construction of DTMs before and after the landslide, as well as for subsequent periods, using UAV and aerial photos; 5) the discrete element method, applied to understand the geomaterial composing the slope failure and to predict earthquake-induced and rainfall-induced landslide displacement. First of all, we evaluate the Digital Terrain Model (DTM) derived from Microdrones MD4-1000 UAV air photos. The ground resolution of the DSM point cloud could be as high as 10 cm. By integrating 4 ground control points within an area of 56 hectares, and comparing with the LiDAR DSM and field RTK-GPS surveying, the mean error is as low as 6 cm with a standard deviation of 17 cm. The quality of the

  1. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE <90%), as well as projected low noise (<2h+) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  2. Increasing the UAV data value by an OBIA methodology

    NASA Astrophysics Data System (ADS)

    García-Pedrero, Angel; Lillo-Saavedra, Mario; Rodriguez-Esparragon, Dionisio; Rodriguez-Gonzalez, Alejandro; Gonzalo-Martin, Consuelo

    2017-10-01

    Recently, there has been a noteworthy increase in the use of images acquired by unmanned aerial vehicles (UAV) in different remote sensing applications. Sensors carried on UAVs have lower operational costs and complexity than other remote sensing platforms, quicker turnaround times, as well as higher spatial resolution. Concerning this last aspect, particular attention has to be paid to the limitations of classical pixel-based algorithms when they are applied to high resolution images. The objective of this study is to investigate the capability of an OBIA methodology developed for the automatic generation of a digital terrain model of an agricultural area from a Digital Elevation Model (DEM) and multispectral images registered by a Parrot Sequoia multispectral sensor on board an eBee SQ agricultural drone. The proposed methodology uses a superpixel approach to obtain context and elevation information, which is used for merging superpixels and at the same time eliminating objects such as trees, in order to generate a Digital Terrain Model (DTM) of the analyzed area. The obtained results show the potential of the approach, in terms of accuracy, when it is compared with a DTM generated by manually eliminating objects.
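    A rough sketch of the superpixel idea described above, assuming a co-registered RGB composite and DEM as NumPy arrays; SLIC from scikit-image stands in for the superpixel step, and the simple elevation test used to flag above-ground objects (e.g., trees) is an assumption, not the authors' OBIA rule set.

```python
# Illustrative sketch: superpixels + elevation statistics to strip above-ground
# objects from a DEM and approximate a DTM. SLIC and the median-based test are
# stand-ins; the record's actual OBIA rules are not reproduced here.
import numpy as np
from skimage.segmentation import slic

def dtm_from_superpixels(rgb, dem, n_segments=800, height_thresh=0.5):
    """rgb: (H, W, 3) float image in [0, 1]; dem: (H, W) elevations in metres."""
    segments = slic(rgb, n_segments=n_segments, compactness=10, start_label=1)
    dtm = dem.copy()
    ground_level = np.median(dem)              # crude global ground estimate (assumption)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        if np.median(dem[mask]) - ground_level > height_thresh:
            dtm[mask] = ground_level           # replace elevated objects (e.g. trees)
    return dtm

# Tiny synthetic demo: a raised "canopy" block above an otherwise flat field
rng = np.random.default_rng(0)
demo_rgb = rng.random((120, 120, 3))
demo_dem = rng.random((120, 120)) * 0.2 + 50.0
demo_dem[40:60, 40:60] += 3.0
dtm = dtm_from_superpixels(demo_rgb, demo_dem)
print("cells flattened:", int((dtm != demo_dem).sum()))
```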

  3. Demonstrating Acquisition of Real-Time Thermal Data Over Fires Utilizing UAVs

    NASA Technical Reports Server (NTRS)

    Ambrosia, Vincent G.; Wegener, Steven S.; Brass, James A.; Buechel, Sally W.; Peterson, David L. (Technical Monitor)

    2002-01-01

    A disaster mitigation demonstration, designed to integrate remote-piloted aerial platforms, a thermal infrared imaging payload, over-the-horizon (OTH) data telemetry and advanced image geo-rectification technologies, was initiated in 2001. Project FiRE incorporates the use of a remotely piloted Uninhabited Aerial Vehicle (UAV), thermal imagery, and over-the-horizon satellite data telemetry to provide geo-corrected data over a controlled burn to a fire management community in near real-time. The experiment demonstrated the use of a thermal multi-spectral scanner, integrated on a large payload capacity UAV, distributing data over-the-horizon via satellite communication telemetry equipment, and precision geo-rectification of the resultant data on the ground for data distribution to the Internet. The use of the UAV allowed remote-piloted flight (thereby reducing the potential for loss of human life during hazardous missions), and the ability to "linger and stare" over the fire for extended periods of time (beyond the capabilities of human-pilot endurance). Improved bit-rate capacity telemetry capabilities increased the amount, structure, and information content of the image data relayed to the ground. The integration of precision navigation instrumentation allowed improved accuracies in geo-rectification of the resultant imagery, easing data ingestion and overlay in a GIS framework. We focus on these technological advances and demonstrate how these emerging technologies can be readily integrated to support disaster mitigation and monitoring strategies regionally and nationally.

  4. IR sensors and imagers in networked operations

    NASA Astrophysics Data System (ADS)

    Breiter, Rainer; Cabanski, Wolfgang

    2005-05-01

    "Network-centric Warfare" is a common slogan describing an overall concept of networked operation of sensors, information and weapons to gain command and control superiority. Referring to IR sensors, integration and fusion of different channels like day/night or SAR images or the ability to spread image data among various users are typical requirements. Looking for concrete implementations the German Army future infantryman IdZ is an example where a group of ten soldiers build a unit with every soldier equipped with a personal digital assistant (PDA) for information display, day photo camera and a high performance thermal imager for every unit. The challenge to allow networked operation among such a unit is bringing information together and distribution over a capable network. So also AIM's thermal reconnaissance and targeting sight HuntIR which was selected for the IdZ program provides this capabilities by an optional wireless interface. Besides the global approach of Network-centric Warfare network technology can also be an interesting solution for digital image data distribution and signal processing behind the FPA replacing analog video networks or specific point to point interfaces. The resulting architecture can provide capabilities of data fusion from e.g. IR dual-band or IR multicolor sensors. AIM has participated in a German/UK collaboration program to produce a demonstrator for day/IR video distribution via Gigabit Ethernet for vehicle applications. In this study Ethernet technology was chosen for network implementation and a set of electronics was developed for capturing video data of IR and day imagers and Gigabit Ethernet video distribution. The demonstrator setup follows the requirements of current and future vehicles having a set of day and night imager cameras and a crew station with several members. Replacing the analog video path by a digital video network also makes it easy to implement embedded training by simply feeding the network with

  5. What do we do with all this video? Better understanding public engagement for image and video annotation

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions is being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms, have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  6. Photogrammetric Measurements in Fixed Wing Uav Imagery

    NASA Astrophysics Data System (ADS)

    Gülch, E.

    2012-07-01

    Several flights have been undertaken with PAMS (Photogrammetric Aerial Mapping System) by Germap, Germany, which is briefly introduced. This system is based on the SmartPlane fixed-wing UAV and a CANON IXUS camera system. The plane is equipped with GPS and has an infrared sensor system to estimate attitude values. Software has been developed to link the PAMS output to a standard photogrammetric processing chain built on Trimble INPHO. The linking of the image files and image IDs and the handling of different cases with partly corrupted output have to be solved to generate an INPHO project file. Based on this project file the software packages MATCH-AT, MATCH-T DSM, OrthoMaster and OrthoVista for digital aerial triangulation, DTM/DSM generation and finally digital orthomosaic generation are applied. The focus has been on investigating how to adapt the "usual" parameters of the digital aerial triangulation and other software to the UAV flight conditions, which show high overlaps, large kappa angles and a certain image blur in case of turbulence. It was found that the selected parameter setup shows quite stable behaviour and can be applied to other flights. A comparison is made to results from other open source multi-ray matching software to handle the issue of the described flight conditions. Flights over the same area at different times have been compared to each other. The major objective here was to see to what extent differences occur relative to each other, without access to ground control data, which has potential for applications with low requirements on absolute accuracy. The results show that influences of weather and illumination are visible. The "unusual" flight pattern, which shows big time differences for neighbouring strips, has an influence on the AT and DTM/DSM generation. The results obtained so far do indicate problems in the stability of the camera calibration. This clearly requires the use of GCPs for all

  7. Towards Autonomous Modular UAV Missions: The Detection, Geo-Location and Landing Paradigm

    PubMed Central

    Kyristsis, Sarantis; Antonopoulos, Angelos; Chanialakis, Theofilos; Stefanakis, Emmanouel; Linardos, Christos; Tripolitsiotis, Achilles; Partsinevelos, Panagiotis

    2016-01-01

    Nowadays, various unmanned aerial vehicle (UAV) applications are becoming increasingly demanding since they require real-time, autonomous and intelligent functions. Towards this end, in the present study, a fully autonomous UAV scenario is implemented, including the tasks of area scanning, target recognition, geo-location, monitoring, following and finally landing on a high speed moving platform. The underlying methodology includes AprilTag target identification through Graphics Processing Unit (GPU) parallelized processing, image processing and several optimized location and approach algorithms employing gimbal movement, Global Navigation Satellite System (GNSS) readings and UAV navigation. For the experimentation, a commercial and a custom made quad-copter prototype were used, representing a high- and a low-computational embedded platform alternative. In addition to the successful targeting and following procedures, it is shown that the landing approach can be successfully performed even at high platform speeds. PMID:27827883

  8. Towards Autonomous Modular UAV Missions: The Detection, Geo-Location and Landing Paradigm.

    PubMed

    Kyristsis, Sarantis; Antonopoulos, Angelos; Chanialakis, Theofilos; Stefanakis, Emmanouel; Linardos, Christos; Tripolitsiotis, Achilles; Partsinevelos, Panagiotis

    2016-11-03

    Nowadays, various unmanned aerial vehicle (UAV) applications are becoming increasingly demanding since they require real-time, autonomous and intelligent functions. Towards this end, in the present study, a fully autonomous UAV scenario is implemented, including the tasks of area scanning, target recognition, geo-location, monitoring, following and finally landing on a high speed moving platform. The underlying methodology includes AprilTag target identification through Graphics Processing Unit (GPU) parallelized processing, image processing and several optimized location and approach algorithms employing gimbal movement, Global Navigation Satellite System (GNSS) readings and UAV navigation. For the experimentation, a commercial and a custom made quad-copter prototype were used, representing a high- and a low-computational embedded platform alternative. In addition to the successful targeting and following procedures, it is shown that the landing approach can be successfully performed even at high platform speeds.
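    The records above do not include code; as an illustrative sketch of the gimbal/approach geometry only (the AprilTag detection itself is not reproduced), the following assumes a pinhole camera looking straight down, with placeholder intrinsics and altitude.

```python
# Illustrative geometry only: convert a detected landing tag's pixel position
# into angular gimbal corrections and an approximate ground offset, assuming a
# pinhole camera looking straight down. Intrinsics and altitude are placeholders.
import numpy as np

def tag_offset(u, v, fx, fy, cx, cy, altitude_m):
    """u, v: tag centre in pixels; fx, fy, cx, cy: camera intrinsics."""
    ang_x = np.arctan2(u - cx, fx)          # pan correction (rad), + = target to the right
    ang_y = np.arctan2(v - cy, fy)          # tilt correction (rad), + = target below image centre
    dx = altitude_m * np.tan(ang_x)         # approximate ground offset, image-x direction (m)
    dy = altitude_m * np.tan(ang_y)         # approximate ground offset, image-y direction (m)
    return ang_x, ang_y, dx, dy

pan, tilt, dx, dy = tag_offset(u=742, v=415, fx=1000.0, fy=1000.0,
                               cx=640.0, cy=360.0, altitude_m=12.0)
print(f"gimbal correction: pan {np.degrees(pan):.1f} deg, tilt {np.degrees(tilt):.1f} deg; "
      f"ground offset ~({dx:.2f}, {dy:.2f}) m")
```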

  9. Research on detection method of UAV obstruction based on binocular vision

    NASA Astrophysics Data System (ADS)

    Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao

    2018-04-01

    For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to solve the problem of noise and brightness differences in the actual captured images. The distance to the nearest obstacle is calculated using the disparity map generated by binocular vision. Then the contour of the obstacle is extracted by post-processing of the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract obstacle contours automatically. Finally, safety distance measurement and obstacle positioning during the UAV flight process are achieved. Based on a series of tests, the error of distance measurement remains within 2.24% over the measuring range from 5 m to 20 m.
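    As a hedged illustration of the disparity-to-range step, the sketch below uses OpenCV's StereoBM as a stand-in for the paper's three-stage pipeline together with the relation Z = f*B/d; the synthetic image pair, focal length and baseline are placeholders, not the paper's calibration.

```python
# Illustrative sketch: block-matching disparity and the depth relation Z = f*B/d.
# StereoBM is a generic stand-in; the record's preprocessing and contour steps
# are not reproduced here.
import cv2
import numpy as np

# Synthetic rectified pair: the right image is the left shifted by 8 px,
# i.e. a constant true disparity of 8 (placeholder for real UAV imagery).
rng = np.random.default_rng(0)
left = (rng.random((240, 320)) * 255).astype(np.uint8)
right = np.roll(left, -8, axis=1)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

focal_px = 700.0      # focal length in pixels (assumed)
baseline_m = 0.12     # stereo baseline in metres (assumed)

valid = disparity > 0
depth = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
print(f"median estimated range: {np.median(depth):.2f} m")
```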

  10. Aeromagnetic Compensation for UAVs

    NASA Astrophysics Data System (ADS)

    Naprstek, T.; Lee, M. D.

    2017-12-01

    Aeromagnetic data is one of the most widely collected types of data in exploration geophysics. With the continuing prevalence of unmanned air vehicles (UAVs) in everyday life there is a strong push for aeromagnetic data collection using UAVs. However, apart from the many political and legal barriers to overcome in the development of UAVs as aeromagnetic data collection platforms, there are also significant scientific hurdles, primary of which is magnetic compensation. This is a well-established process in manned aircraft achieved through a combination of platform magnetic de-noising and compensation routines. However, not all of this protocol can be directly applied to UAVs due to fundamental differences in the platforms, most notably the decrease in scale causing magnetometers to be significantly closer to the avionics. As such, the methodology must be suitably adjusted. The National Research Council of Canada has collaborated with Aeromagnetic Solutions Incorporated to develop a standardized approach to de-noising and compensating UAVs, which is accomplished through a series of static and dynamic experiments. On the ground, small static tests are conducted on individual components to determine their magnetization. If they are highly magnetic, they are removed, demagnetized, or characterized such that they can be accounted for in the compensation. Dynamic tests can include measuring specific components as they are powered on and off to assess their potential effect on airborne data. The UAV is then flown, and a modified compensation routine is applied. These modifications include utilizing onboard autopilot current sensors as additional terms in the compensation algorithm. This process has been applied with success to fixed-wing and rotary-wing platforms, with both a standard manned-aircraft magnetometer, as well as a new atomic magnetometer, much smaller in scale.
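    A simplified, generic sketch of the kind of compensation fit described above: regress the measured total field on attitude-derived terms plus onboard current-sensor channels and subtract the predicted interference. This is a Tolles-Lawson-style least-squares stand-in with synthetic data, not the NRC/Aeromagnetic Solutions routine.

```python
# Simplified sketch of a compensation fit: regress the measured total field on
# attitude-derived terms plus autopilot current-sensor channels, then subtract
# the predicted interference. Synthetic placeholder data, simplified regressors.
import numpy as np

def compensate(total_field, dir_cosines, currents):
    """total_field: (N,) nT; dir_cosines: (N, 3) flight-direction cosines;
    currents: (N, k) autopilot current-sensor readings (extra regressors)."""
    c = dir_cosines
    # direction cosines and their pairwise products, plus current channels
    quad = np.column_stack([c[:, i] * c[:, j] for i in range(3) for j in range(i, 3)])
    A = np.column_stack([np.ones(len(c)), c, quad, currents])
    coeffs, *_ = np.linalg.lstsq(A, total_field, rcond=None)
    interference = A @ coeffs
    return total_field - (interference - interference.mean())  # keep the mean field level

# Example with synthetic placeholder data
rng = np.random.default_rng(1)
n = 500
dc = rng.normal(size=(n, 3)); dc /= np.linalg.norm(dc, axis=1, keepdims=True)
cur = rng.normal(size=(n, 2))
field = 52000 + dc @ np.array([30.0, -12.0, 8.0]) + cur @ np.array([5.0, -3.0]) + rng.normal(0, 0.5, n)
print("residual std after compensation (nT):", compensate(field, dc, cur).std())
```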

  11. Using Unmanned Aerial Vehicle (UAV) for spatio-temporal monitoring of soil erosion and roughness in Chania, Crete, Greece

    NASA Astrophysics Data System (ADS)

    Alexakis, Dimitrios; Seiradakis, Kostas; Tsanis, Ioannis

    2016-04-01

    This article presents a remote sensing approach for spatio-temporal monitoring of both soil erosion and roughness using an Unmanned Aerial Vehicle (UAV). Soil erosion by water is commonly known as one of the main reasons for land degradation. Gully erosion causes considerable soil loss and soil degradation. Furthermore, quantification of soil roughness (irregularities of the soil surface due to soil texture) is important and affects surface storage and infiltration. Soil roughness is one of the characteristics most susceptible to variation in time and space and depends on different parameters such as cultivation practices and soil aggregation. A UAV equipped with a digital camera was employed to monitor soil in terms of erosion and roughness in two different study areas in Chania, Crete, Greece. The UAV followed pre-planned flight paths computed by the relevant flight planning software. The photogrammetric image processing enabled the development of sophisticated Digital Terrain Models (DTMs) and ortho-image mosaics with very high resolution on a sub-decimeter level. The DTMs were developed using photogrammetric processing of more than 500 images acquired with the UAV from different heights above the ground level. As the geomorphic formations can be observed from above using UAVs, shadowing effects do not generally occur and the generated point clouds have very homogeneous and high point densities. The DTMs generated from the UAV were compared in terms of vertical absolute accuracies with a Global Navigation Satellite System (GNSS) survey. The developed data products were used for quantifying gully erosion and soil roughness in 3D as well as for the analysis of the surrounding areas. The significant elevation changes from multi-temporal UAV elevation data were used for diachronically estimating soil loss and sediment delivery without installing sediment traps. Concerning roughness, statistical indicators of surface elevation point measurements were estimated and various

  12. Uav-Mapping - a User Report

    NASA Astrophysics Data System (ADS)

    Mayr, W.

    2011-09-01

    This paper reports on first-hand experience in operating an unmanned airborne system (UAS) for mapping purposes in the environment of a mapping company. Recently, a multitude of UAV activities has become visible, and there is growing interest in the commercial, industrial, and academic mapping user communities, and not only in those. As an introduction, the major components of a UAS are identified. The paper focuses on a 1.1 kg UAV which has been integrated and applied on a day-to-day basis as part of a UAS in standard aerial imaging tasks for more than two years. We present the unmanned airborne vehicle in some detail as well as the overall system components such as autopilot, ground station, flight mission planning and control, and first-level image processing. The paper continues with reporting on experience gained in setting up the constraints such a system needs to fulfill. Further on, operational aspects with emphasis on the unattended flight mission mode are presented. Various examples show the applicability of UAS in geospatial tasks, proving that UAS are capable of reliably delivering, e.g., orthomosaics, digital surface models and more. Some remarks on achieved accuracies give an idea of the obtainable quality. A discussion of safety features sheds some light on important matters when entering unmanned flying activities and rounds off this paper. Conclusions summarize the state of the art of an operational UAS from the point of view of the author.

  13. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    PubMed

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability, and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained while they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
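    The paper's joint subspace learning model is not reproduced here; as a common stand-in for maximizing the correlation between two feature sets, the sketch below fits canonical correlation analysis (CCA) on placeholder audiovisual and fMRI-derived features.

```python
# Illustrative stand-in: canonical correlation analysis (CCA) as one common way
# to maximise correlation between low-level audiovisual features and fMRI-derived
# features. The record's actual joint subspace learning model is not reproduced.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_clips = 120
av_features = rng.normal(size=(n_clips, 40))     # placeholder audiovisual descriptors
fmri_features = rng.normal(size=(n_clips, 25))   # placeholder fMRI-derived descriptors

cca = CCA(n_components=5)
cca.fit(av_features, fmri_features)

# At prediction time only the cheap audiovisual features are needed:
av_new = rng.normal(size=(10, 40))
predicted_fmri_like = cca.predict(av_new)        # proxy for the fMRI-derived representation
print(predicted_fmri_like.shape)                 # (10, 25)
```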

  14. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    NASA Astrophysics Data System (ADS)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed a thermal image at first was presented to the observer in the eye piece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market output standards changed to digital formats a decade ago with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, long planning and turn-around times of programs and a slower growth of pixel number of TIs in comparison to consumer cameras. With megapixel detectors the CCIR output format is not sufficient any longer. The paper discusses the state-of-the-art compression and streaming solutions for TIs.

  15. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    NASA Astrophysics Data System (ADS)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of videos into the inputs of the patch clustering algorithm. Finally, the results of simulation experiments show that the proposed methods can simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods under low-bitrate transmission.
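    A minimal sketch of the two ingredients named above, with placeholder data: K-means clustering of image patches, then a one-dimensional linear code per cluster. A rank-1 SVD is used as a compact stand-in for the deep linear autoencoder, since a linear autoencoder's optimum spans the principal subspace.

```python
# Minimal sketch: K-means clustering of image patches plus a 1-D linear code per
# cluster (rank-1 SVD as a stand-in for a deep linear autoencoder). Placeholder
# data; not the paper's coding infrastructure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))                               # placeholder grayscale image
patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)                      # (n_patches, 64)

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

recon = np.empty_like(X)
for k in range(8):
    Xc = X[labels == k]
    mean = Xc.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc - mean, full_matrices=False)
    code = (Xc - mean) @ Vt[0]                             # 1-D representation per patch
    recon[labels == k] = np.outer(code, Vt[0]) + mean      # linear decode
print("patch reconstruction RMSE:", np.sqrt(((X - recon) ** 2).mean()))
```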

  16. Research for new UAV capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canavan, G.H.; Leadabrand, R.

    1996-07-01

    This paper discusses research for new Unmanned Aerial Vehicles (UAV) capabilities. Findings indicate that UAV performance could be greatly enhanced by modest research. Improved sensors and communications enhance near term cost effectiveness. Improved engines, platforms, and stealth improve long term effectiveness.

  17. A UAV System for Observing Volcanoes and Natural Hazards

    NASA Astrophysics Data System (ADS)

    Saggiani, G.; Persiani, F.; Ceruti, A.; Tortora, P.; Troiani, E.; Giuletti, F.; Amici, S.; Buongiorno, M.; Distefano, G.; Bentini, G.; Bianconi, M.; Cerutti, A.; Nubile, A.; Sugliani, S.; Chiarini, M.; Pennestri, G.; Petrini, S.; Pieri, D.

    2007-12-01

    Fixed or rotary wing manned aircraft are currently the most commonly used platforms for airborne reconnaissance in response to natural hazards, such as volcanic eruptions, oil spills, wild fires, earthquakes. Such flights are very often undertaken in hazardous flying conditions (e.g., turbulence, downdrafts, reduced visibility, close proximity to dangerous terrain) and can be expensive. To mitigate these two fundamental issues-- safety and cost--we are exploring the use of small (less than 100kg), relatively inexpensive, but effective, unmanned aerial vehicles (UAVs) for this purpose. As an operational test, in 2004 we flew a small autonomous UAV in the airspace above and around Stromboli Volcano. Based in part on this experience, we are adapting the RAVEN UAV system for such natural hazard surveillance missions. RAVEN has a 50km range, with a 3.5m wingspan, main fuselage length of 4.60m, and maximum weight of 56kg. It has autonomous flight capability and a ground control Station for the mission planning and control. It will carry a variety of imaging devices, including a visible camera, and an IR camera. It will also carry an experimental Fourier micro-interferometer based on MOEMS technology, (developed by IMM Institute of CNR), to detect atmospheric trace gases. Such flexible, capable, and easy-to-deploy UAV systems may significantly shorten the time necessary to characterize the nature and scale of the natural hazard threats if used from the outset of, and systematically during, natural hazard events. When appropriately utilized, such UAVs can provide a powerful new hazard mitigation and documentation tool for civil protection hazard responders. This research was carried out under the auspices of the Italian government, and, in part, under contract to NASA at the Jet Propulsion Laboratory.

  18. Comparison of a UAV-derived point-cloud to Lidar data at Haig Glacier, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Bash, E. A.; Moorman, B.; Montaghi, A.; Menounos, B.; Marshall, S. J.

    2016-12-01

    The use of unmanned aerial vehicles (UAVs) is expanding rapidly in glaciological research as a result of technological improvements that make UAVs a cost-effective solution for collecting high resolution datasets with relative ease. The cost and difficult access traditionally associated with performing fieldwork in glacial environments make UAVs a particularly attractive tool. In the small, but growing, body of literature using UAVs in glaciology, the accuracy of UAV data is tested through the comparison of a UAV-derived DEM to measured control points. A field campaign combining simultaneous lidar and UAV flights over Haig Glacier in April 2015 provided the unique opportunity to directly compare UAV data to lidar. The UAV was a six-propeller Mikrokopter carrying a Panasonic Lumix DMC-GF1 camera with a 12 Megapixel Live MOS sensor and Lumix G 20 mm lens flown at a height of 90 m, resulting in sub-centimetre ground resolution per image pixel. Lidar data collection took place April 20, while UAV flights were conducted April 20-21. A set of 65 control points were laid out and surveyed on the glacier surface on April 19 and 21 using an RTK GPS with a vertical uncertainty of 5 cm. A direct comparison of lidar points to these control points revealed a 9 cm offset between the control points and the lidar points on average, but the difference differed distinctly between points collected on April 19 and those collected April 21 (7 cm and 12 cm). Agisoft Photoscan was used to create a point-cloud from imagery collected with the UAV, and CloudCompare was used to calculate the difference between this and the lidar point cloud, revealing an average difference of less than 17 cm. This field campaign also highlighted some of the benefits and drawbacks of using a rotary UAV for glaciological research. The vertical takeoff and landing capabilities, combined with quick responsiveness and higher carrying capacity, make the rotary vehicle favourable for high-resolution photos when
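    As an illustration of the cloud-to-cloud comparison mentioned above (the CloudCompare step), the sketch below computes nearest-neighbour distances between two placeholder point clouds; real data would come from the SfM and lidar exports.

```python
# Illustrative sketch of a nearest-neighbour cloud-to-cloud comparison, standing
# in for the CloudCompare step mentioned in the record. Point arrays are
# placeholders, not survey data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
lidar_pts = rng.uniform(0, 100, size=(20000, 3))              # placeholder lidar cloud (m)
uav_pts = lidar_pts[:5000] + rng.normal(0, 0.15, (5000, 3))   # placeholder UAV SfM cloud

tree = cKDTree(lidar_pts)
dist, _ = tree.query(uav_pts, k=1)                            # distance to nearest lidar point
print(f"mean C2C distance {dist.mean()*100:.1f} cm, 95th pct {np.percentile(dist, 95)*100:.1f} cm")
```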

  19. Analysis of Unmanned Aerial Vehicle (UAV) hyperspectral remote sensing monitoring key technology in coastal wetland

    NASA Astrophysics Data System (ADS)

    Ma, Yi; Zhang, Jie; Zhang, Jingyu

    2016-01-01

    The coastal wetland, a transitional zone between terrestrial and marine ecosystems, is of great value for ecosystem services. Over the last three decades, the area of coastal wetland has been decreasing and its ecological function has gradually degraded with rapid economic development, which in turn restricts the sustainable development of the economy and society in the coastal areas of China. Monitoring coastal wetlands to master their distribution and dynamic change is therefore a major national demand. The UAV, namely the unmanned aerial vehicle, is a new platform for remote sensing. Compared with traditional satellite and manned aerial remote sensing, it has the advantages of flexible deployment, imaging below cloud cover, strong initiative and low cost. Image-spectrum merging is one characteristic of hyperspectral remote sensing: at the same time as imaging, the spectral curve of each pixel is obtained, which is suitable for quantitative remote sensing, fine classification and target detection. Aimed at the frontier of remote sensing monitoring technology and facing the demand for coastal wetland monitoring, this paper uses a UAV and a new hyperspectral imaging instrument to analyze the key technologies for monitoring coastal wetlands by UAV, on the basis of the current situation at home and abroad and an analysis of development trends. According to the characteristics of airborne hyperspectral data from UAVs, that is "three high and one many", the key technologies that should be developed are proposed as follows: 1) atmospheric correction of UAV hyperspectral data in coastal wetlands under the circumstances of complex underlying surfaces and variable geometry, 2) the best observation scale and scale transformation method of the UAV platform while monitoring coastal wetland features, 3) classification and detection methods for typical features with high precision from multi scale

  20. Cloud-Assisted UAV Data Collection for Multiple Emerging Events in Distributed WSNs

    PubMed Central

    Cao, Huiru; Liu, Yongxin; Yue, Xuejun; Zhu, Wenjian

    2017-01-01

    In recent years, UAVs (Unmanned Aerial Vehicles) have been widely applied for data collection and image capture. Specifically, UAVs have been integrated with wireless sensor networks (WSNs) to create data collection platforms with high flexibility. However, most studies in this domain focus on system architecture and UAVs’ flight trajectory planning while event-related factors and other important issues are neglected. To address these challenges, we propose a cloud-assisted data gathering strategy for UAV-based WSN in the light of emerging events. We also provide a cloud-assisted approach for deriving UAV’s optimal flying and data acquisition sequence of a WSN cluster. We validate our approach through simulations and experiments. It has been proved that our methodology outperforms conventional approaches in terms of flying time, energy consumption, and integrity of data acquisition. We also conducted a real-world experiment using a UAV to collect data wirelessly from multiple clusters of sensor nodes for monitoring an emerging event, which are deployed in a farm. Compared against the traditional method, this proposed approach requires less than half the flying time and achieves almost perfect data integrity. PMID:28783100

  1. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

    A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and a digital camera. Besides the textural details of the video log images, it also captures the 3D geometric shape of the point cloud. It is widely used to survey street views and roadside transportation infrastructure, such as traffic signs, guardrails, etc., in many transportation agencies. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of the overhead and roadside traffic signs can be obtained according to the setup specifications of traffic signs in different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane-fitting of those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraint of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic signs among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the rate of detecting
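    A plain-NumPy stand-in for the RANSAC plane-fitting step described above: fit a plane to a candidate sign point cluster and return its inliers. The distance threshold and the synthetic points are placeholders, not the paper's calibrated values.

```python
# Illustrative stand-in for RANSAC plane fitting of a candidate sign cluster.
import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.05, rng=np.random.default_rng(0)):
    """points: (N, 3) Lidar points of one candidate cluster (metres)."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                              # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)     # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic near-planar cluster standing roughly 2.5 m above the pavement
pts = np.random.default_rng(1).normal(size=(300, 3)) * [1.0, 1.0, 0.02] + [0, 0, 2.5]
print("plane inliers:", ransac_plane(pts).sum(), "of", len(pts))
```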

  2. UAV magnetometry in mineral exploration and infrastructure detection

    NASA Astrophysics Data System (ADS)

    Braun, A.; Parvar, K.; Burns, M.

    2015-12-01

    Magnetic surveys are critical tools in mineral exploration, and UAVs have the potential to carry magnetometers. UAV surveys can offer higher spatial resolution than traditional airborne surveys, and higher coverage than terrestrial surveys. However, the main advantage is their ability to sense the magnetic field in 3-D, while most airborne or terrestrial surveys are restricted to 2-D acquisition. This study compares UAV magnetic data from two different UAVs (JIB drone, DJI Phantom 2) and three different magnetometers (GEM GSPM35, Honeywell HMR2300, GEM GST-19). The first UAV survey was conducted using a JIB UAV with a GSPM35 flying at 10-15 m above ground. The survey's goal was to detect intrusive Rhyolite bodies for primary mineral exploration. The survey resulted in a better understanding of the validity/resolution of UAV data and led to improved knowledge about the geological structures in the area. The results further drove the design of a subsequent terrestrial survey. Comparing the UAV data with an available airborne survey (upward continued to 250 m) reveals that the UAV data have superior spatial resolution, but exhibit a higher noise level. The magnetic anomaly related to the Rhyolite intrusions is about 109 nT and translates into an estimated depth of approximately 110 meters. The second survey was conducted using an in-house developed UAV magnetometer system consisting of a DJI Phantom 2 and a Honeywell HMR2300 fluxgate magnetometer. By flying the sensor at different altitudes, the vertical and horizontal gradients can be derived, leading to full 3-D magnetic data volumes which can provide improved constraints for source depth/geometry characterization. We demonstrate that a buried steam pipeline was detectable with the UAV magnetometer system and compare the resulting data with a terrestrial survey using a GEM GST-19 Proton Precession Magnetometer.

  3. Technology Challenges in Small UAV Development

    NASA Technical Reports Server (NTRS)

    Logan, Michael J.; Vranas, Thomas L.; Motter, Mark; Shams, Qamar; Pollock, Dion S.

    2005-01-01

    Development of highly capable small UAVs presents unique challenges for technology protagonists. Size constraints, the desire for ultra-low-cost and/or disposable platforms, lack of capable design and analysis tools, and unique mission requirements all add to the level of difficulty in creating state-of-the-art small UAVs. This paper presents the results of several small UAV developments and the difficulties encountered, and proposes a list of technology shortfalls that need to be addressed.

  4. Extracting Maximum Total Water Levels from Video "Brightest" Images

    NASA Astrophysics Data System (ADS)

    Brown, J. A.; Holman, R. A.; Stockdon, H. F.; Plant, N. G.; Long, J.; Brodie, K.

    2016-02-01

    An important parameter for predicting storm-induced coastal change is the maximum total water level (TWL). Most studies estimate the TWL as the sum of slowly varying water levels, including tides and storm surge, and the extreme runup parameter R2%, which includes wave setup and swash motions over minutes to seconds. Typically, R2% is measured using video remote sensing data, where cross-shore timestacks of pixel intensity are digitized to extract the horizontal runup timeseries. However, this technique must be repeated at multiple alongshore locations to resolve alongshore variability, and can be tedious and time consuming. We seek an efficient, video-based approach that yields a synoptic estimate of TWL that accounts for alongshore variability and can be applied during storms. In this work, the use of a video product termed the "brightest" image is tested; this represents the highest intensity of each pixel captured during a 10-minute collection period. Image filtering and edge detection techniques are applied to automatically determine the shoreward edge of the brightest region (i.e., the swash zone) at each alongshore pixel. The edge represents the horizontal position of the maximum TWL along the beach during the collection period, and is converted to vertical elevations using measured beach topography. This technique is evaluated using video and topographic data collected every half-hour at Duck, NC, during differing hydrodynamic conditions. Relationships between the maximum TWL estimates from the brightest images and various runup statistics computed using concurrent runup timestacks are examined, and errors associated with mapping the horizontal results to elevations are discussed. This technique is invaluable, as it can be used to routinely estimate maximum TWLs along a coastline from a single brightest image product, and provides a means for examining alongshore variability of TWLs at high alongshore resolution. These advantages will be useful in
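    A rough sketch of the brightest-image idea, under simple assumptions (a Gaussian-smoothed intensity gradient as the edge detector, and a synthetic image and beach profile); the authors' actual filtering and edge-detection chain is not reproduced.

```python
# Rough sketch: for each alongshore column of a "brightest" image, find the
# shoreward edge of the bright swash band from the intensity gradient, then map
# that cross-shore pixel to an elevation using a surveyed beach profile.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def twl_from_brightest(brightest, profile_z):
    """brightest: (n_cross, n_along) intensity; profile_z: (n_cross,) beach elevations (m).
    Returns a maximum-TWL elevation estimate for every alongshore column."""
    twl = np.empty(brightest.shape[1])
    for j in range(brightest.shape[1]):
        col = gaussian_filter1d(brightest[:, j].astype(float), sigma=3)
        edge_idx = int(np.argmax(np.abs(np.gradient(col))))   # strongest intensity transition
        twl[j] = profile_z[edge_idx]
    return twl

# Placeholder example: bright swash band in the lower half of the image
img = np.zeros((200, 50)); img[120:, :] = 1.0
beach = np.linspace(3.0, -1.0, 200)                            # elevation vs cross-shore pixel
print("max TWL per column (m), first 5:", np.round(twl_from_brightest(img, beach)[:5], 2))
```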

  5. Nonlinear Landing Control for Quadrotor UAVs

    NASA Astrophysics Data System (ADS)

    Voos, Holger

    Quadrotor UAVs are one of the most preferred types of small unmanned aerial vehicles because of their very simple mechanical construction and propulsion principle. However, their nonlinear dynamic behavior requires more advanced stabilizing control and guidance of these vehicles. In addition, the small payload reduces the number of batteries that can be carried and thus also limits the operating range of the UAV. One possible solution for a range extension is the use of a mobile base station for recharging purposes, even during operation. However, landing on a moving base station requires autonomous tracking and landing control of the UAV. In this paper, a nonlinear autopilot for quadrotor UAVs is extended with a tracking and landing controller to fulfill the required task.

  6. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In non-blind watermarking systems, the need for the original host file in the watermark recovery operation creates an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
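    A simplified sketch of two building blocks named above, Arnold-map scrambling of a square watermark and additive embedding in the level-3 DWT approximation band of a frame, using PyWavelets; the wavelet, embedding strength and sizes are placeholder choices, not the parameters of the proposed technique.

```python
# Simplified sketch: Arnold scrambling + additive embedding in the level-3 DWT
# approximation band. Not the paper's exact interlacing-based scheme.
import numpy as np
import pywt

def arnold(img, iterations=5):
    """Arnold cat map scrambling of a square image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def embed(frame, watermark, alpha=8.0):
    """frame: (H, W) grayscale float; watermark: small square array in {0, 1}."""
    coeffs = pywt.wavedec2(frame, "haar", level=3)
    cA3 = coeffs[0]
    wm = arnold(watermark)[: cA3.shape[0], : cA3.shape[1]]
    cA3 = cA3 + alpha * np.pad(wm, ((0, cA3.shape[0] - wm.shape[0]),
                                    (0, cA3.shape[1] - wm.shape[1])))
    return pywt.waverec2([cA3] + list(coeffs[1:]), "haar")

frame = np.random.default_rng(0).random((256, 256)) * 255
wm = (np.random.default_rng(1).random((32, 32)) > 0.5).astype(float)
print(embed(frame, wm).shape)   # (256, 256)
```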

  7. Very high resolution crop surface models (CSMs) from UAV-based stereo images for rice growth monitoring In Northeast China

    NASA Astrophysics Data System (ADS)

    Bendig, J.; Willkomm, M.; Tilly, N.; Gnyp, M. L.; Bennertz, S.; Qiang, C.; Miao, Y.; Lenz-Wiedemann, V. I. S.; Bareth, G.

    2013-08-01

    Unmanned aerial vehicles (UAVs) have become popular platforms for the collection of remotely sensed geodata in recent years (Hardin & Jensen 2011). Various applications have evolved in numerous fields of research such as archaeology (Hendrickx et al., 2011), forestry, and geomorphology (Martinsanz, 2012). This contribution deals with the generation of multi-temporal crop surface models (CSMs) with very high resolution by means of low-cost equipment. The concept of generating multi-temporal CSMs using Terrestrial Laserscanning (TLS) has already been introduced by Hoffmeister et al. (2010). For this study, data acquisition was performed with a low-cost and low-weight Mini-UAV (< 5 kg). UAVs in general, and especially smaller ones like the system presented here, close a gap in small-scale remote sensing (Berni et al., 2009; Watts et al., 2012). In precision agriculture, frequent remote sensing on such scales during the vegetation period provides important spatial information on the crop status. Crop growth variability can be detected by comparison of the CSMs at different phenological stages. Here, the focus is on the detection of this variability and its dependency on cultivar and plant treatment. The method has been tested on data acquired on a barley experiment field in Germany. In this contribution, it is applied to a different crop in a different environment. The study area is an experiment field for rice in Northeast China (Sanjiang Plain). Three replications of the cultivars Kongyu131 and Longjing21 were planted in plots that were treated with different amounts of N-fertilizer. In July 2012 three UAV campaigns were carried out. The establishment of ground control points (GCPs) allowed for ground truth. Additionally, further destructive and non-destructive field data were collected. The UAV system is an MK-Okto by Hisystems (http://www.mikrokopter.de) which was equipped with the high resolution Panasonic Lumix GF3 12

  8. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    PubMed

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation-an exemplar-based clustering algorithm-achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
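    As a minimal illustration of exemplar-based frame sampling with affinity propagation, the sketch below keeps the exemplar frames of a synthetic clip as keyframes; the feature choice (flattened downscaled frames) is an assumption, not the paper's representation.

```python
# Minimal sketch of exemplar-based frame sampling with affinity propagation:
# each exemplar frame is kept as a keyframe of the "tiny video".
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
frames = rng.random((120, 16, 16))                # placeholder: 120 tiny 16x16 frames
features = frames.reshape(len(frames), -1)

ap = AffinityPropagation(random_state=0).fit(features)
keyframe_idx = ap.cluster_centers_indices_        # exemplar frames = sampled keyframes
print(f"{len(keyframe_idx)} keyframes selected out of {len(frames)} frames")
```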

  9. An UAV scheduling and planning method for post-disaster survey

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Zhou, X. G.; Yin, J.; Xiao, Q. Y.

    2014-11-01

    Every year, extreme climate and special geological environments lead to frequent natural disasters, e.g., earthquakes and floods. These disasters often bring serious casualties and enormous economic losses. Post-disaster surveying is very important for disaster relief and assessment. Because Unmanned Aerial Vehicle (UAV) remote sensing has the advantages of high efficiency, high precision, high flexibility, and low cost, it has been widely used in emergency surveying in recent years. Since the UAVs used in emergency surveying cannot stand by waiting for a disaster to happen, they are usually working elsewhere when the disaster occurs. In order to improve emergency surveying efficiency, the UAVs need to be tracked and an emergency surveying task assigned to each selected UAV. Therefore, a UAV tracking and scheduling method for post-disaster survey is presented in this paper. In this method, the Global Positioning System (GPS) and the GSM network are used to track the UAVs; an emergency tracking UAV information database is built in advance by registration, and it includes at least the ID and the communication number of each UAV. When a catastrophe happens, the real-time locations of all UAVs in the database are first obtained using the emergency tracking method; then the travel cost time from each UAV to the disaster region is calculated, based on the UAVs' real-time locations and the road network, using a nearest-service analysis algorithm; the disaster region is subdivided into several emergency surveying regions based on the DEM, area, and the population distribution map; and the emergency surveying regions are assigned to the appropriate UAVs according to the shortest-cost-time rule. The UAV tracking and scheduling prototype is implemented using SQLServer2008, ArcEngine 10.1 SDK, Visual Studio 2010 C#, Android, SMS Modem, and Google Maps API.

  10. Improvements to video imaging detection for dilemma zone protection.

    DOT National Transportation Integrated Search

    2009-02-01

    The use of video imaging vehicle detection systems (VIVDS) at signalized intersections in Texas has increased significantly due primarily to safety issues and costs. Installing non-intrusive detectors at intersections is almost always safer than ...

  11. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  12. Active landslide monitoring using remote sensing data, GPS measurements and cameras on board UAV

    NASA Astrophysics Data System (ADS)

    Nikolakopoulos, Konstantinos G.; Kavoura, Katerina; Depountis, Nikolaos; Argyropoulos, Nikolaos; Koukouvelas, Ioannis; Sabatakakis, Nikolaos

    2015-10-01

    An active landslide can be monitored using many different methods: classical geotechnical measurements like inclinometers, topographical survey measurements with total stations or GPS, and photogrammetric techniques using air photos or high resolution satellite images. As an aerial photo campaign and the acquisition of very high resolution satellite data are quite expensive, the use of cameras on board a UAV could be an ideal solution. Small UAVs (Unmanned Aerial Vehicles) started their development as expensive toys, but they have currently become a very valuable tool in the remote sensing monitoring of small areas. The purpose of this work is to demonstrate a cheap but effective solution for active landslide monitoring. We present the first experimental results of the synergistic use of UAV, GPS measurements and remote sensing data. A six-rotor aircraft with a total weight of 6 kg carrying two small cameras has been used. Very accurate digital air photos, a high accuracy DSM, DGPS measurements and the data captured from the UAV are combined, and the results are presented in the current study.

  13. Uav for Geodata Acquisition in Agricultural and Forestal Applications

    NASA Astrophysics Data System (ADS)

    Reidelstürz, P.; Schrenk, L.; Littmann, W.

    2011-09-01

    of German Armed Forces in Neubiberg/Munich and the well-established precision farming company "Konsultationszentrum Liepen" to develop an applicable UAV for precision farming purposes. Currently, Cis GmbH and Technologie Campus Freyung, in close contact with the "flying robot" team of DLR Oberpfaffenhofen, collaborate to optimize the existing UAV and to extend its applications from data acquisition for biomass diversity up to detecting the water supply situation in agricultural fields, supporting pest management systems, and checking the possibilities of detecting bark beetle attacks on European spruce at an early stage of attack (green attack phase) by constructing and integrating further payload modules with different sensors into the existing UAV airframe. Effective data processing workflows are also to be worked out. Currently, the autopilot system "piccolo" (cloudcaptech) is integrated in the existing UAV, and a replaceable payload module is available, carrying a VIS and a NIR camera to calculate maps of NDVI diversity as an indicator of biomass diversity. Further modules with a 6-channel multispectral still camera and with a spectrometer are planned. The airframe's wingspan is about 3.45 m, and it weighs 4.2 kg ready to fly. The hand-launchable UAV can start from any place in agricultural regions. The wing is configured with flaps, allowing steep approaches and short landings using a "butterfly" brake configuration. In spite of the lightweight configuration, the UAV has proven its worth in windy Baltic weather conditions by regularly collecting sharp images of fields at wind speeds up to 15 m/s (Beaufort 6-7). In further projects, the development of further payload modules and a user-friendly flight planning tool is scheduled, considering different payload and airframe requirements for different precision farming purposes and forest applications. Data processing and workflow will be optimized. Cooperation with further partners to establish UAV systems in agricultural
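    As a minimal sketch of the NDVI map mentioned above, assuming co-registered red and near-infrared reflectance bands from the VIS and NIR cameras as float arrays:

```python
# Minimal sketch of an NDVI map from co-registered red and NIR bands.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - RED) / (NIR + RED), clipped to the valid [-1, 1] range."""
    return np.clip((nir - red) / (nir + red + eps), -1.0, 1.0)

rng = np.random.default_rng(0)
nir_band = rng.random((100, 100))   # placeholder reflectance values
red_band = rng.random((100, 100))
print("mean NDVI:", ndvi(nir_band, red_band).mean())
```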

  14. Geomorphological mapping of shallow landslides using UAVs

    NASA Astrophysics Data System (ADS)

    Fiorucci, Federica; Giordan, Daniele; Dutto, Furio; Rossi, Mauro; Guzzetti, Fausto

    2015-04-01

    The mapping of event shallow landslides is a critical activity, due to the large number of phenomena, mostly of small dimensions, affecting extensive areas. This is commonly done through aerial photo-interpretation or through field surveys. Nowadays, landslide maps can be produced using other methods/technologies: (i) airborne LiDAR, (ii) stereoscopic satellite images, and (iii) unmanned aerial vehicles (UAVs). In addition to the landslide maps, these methods/technologies allow the generation of updated Digital Terrain Models (DTMs). In December 2013, in the Collazzone area (Umbria, Central Italy), an intense rainfall event triggered a large number of shallow landslides. To map the landslides that occurred in the area, we exploited data and images obtained through (A) an airborne LiDAR survey, (B) a survey by a remote-controlled optocopter (equipped with a Canon EOS M), and (C) stereoscopic WorldView II MS satellite imagery. To evaluate the mapping accuracy of these methods, we selected two landslides and mapped them using GPS RTK instrumentation. We consider the GPS survey the benchmark, being the most accurate system. The results of the comparison allow us to highlight the pros and cons of the methods/technologies used. LiDAR can be considered the most accurate system, and in addition it allows the extraction and classification of digital surface models from the surveyed point cloud. Conversely, LiDAR requires additional time for flight planning and specific data analysis capabilities from the user. The analysis of the WorldView II MS satellite images facilitates landslide mapping over large areas, but at the expense of a lower resolution for detecting the smaller landslides and their boundaries. UAVs can be considered the cheapest and fastest solution for the acquisition of high resolution ortho-photographs over limited areas, and the best solution for a multi-temporal analysis of specific landslide phenomena. Limitations are due to (i) the need for optimal climatic

  15. Near Real-Time Georeference of Unmanned Aerial Vehicle Images for Post-Earthquake Response

    NASA Astrophysics Data System (ADS)

    Wang, S.; Wang, X.; Dou, A.; Yuan, X.; Ding, L.; Ding, X.

    2018-04-01

    The rapid collection of Unmanned Aerial Vehicle (UAV) remote sensing images plays an important role in quickly submitting disaster information and monitoring seriously damaged objects after an earthquake. However, for the hundreds of UAV images collected in one flight sortie, the traditional data processing methods are image stitching and three-dimensional reconstruction, which take one to several hours and affect the speed of disaster response. If a manual searching method is employed, much more time is spent selecting images, and the selected images have no spatial reference. Therefore, a near-real-time rapid georeference method for UAV remote sensing disaster data is proposed in this paper. The UAV images are georeferenced using the position and attitude data collected by the UAV flight control system, and the georeferenced data are organized by means of the world file format developed by ESRI. The C# language, combined with the Geospatial Data Abstraction Library (GDAL), is adopted to implement the rapid UAV image georeference software. The results show that it can georeference up to one thousand UAV remote sensing disaster images within one minute, meeting the demand of rapid disaster response, which is of great value in disaster emergency applications.
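    As an illustration of the six-parameter ESRI world file used to organize the georeferenced data, the sketch below writes one for a single frame; Python is used here instead of the paper's C#/GDAL implementation, and the pixel size and coordinates are placeholders.

```python
# Minimal sketch of an ESRI world file: six lines (x pixel size, two rotation
# terms, negative y pixel size, and the map coordinates of the centre of the
# upper-left pixel). Values below are placeholders, not project data.
def write_world_file(path, pixel_size, upper_left_x, upper_left_y,
                     rot_x=0.0, rot_y=0.0):
    lines = [pixel_size, rot_x, rot_y, -pixel_size, upper_left_x, upper_left_y]
    with open(path, "w") as f:
        f.write("\n".join(f"{v:.10f}" for v in lines) + "\n")

# e.g. a 5 cm ground-sample-distance UAV frame (coordinates are placeholders)
write_world_file("DSC_0001.jgw", pixel_size=0.05,
                 upper_left_x=500123.275, upper_left_y=4423987.950)
```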

  17. Comparing and combining terrestrial laser scanning with ground- and UAV-based imaging for national-level assessment of soil erosion

    NASA Astrophysics Data System (ADS)

    McShane, Gareth; James, Mike R.; Quinton, John; Anderson, Karen; DeBell, Leon; Evans, Martin; Farrow, Luke; Glendell, Miriam; Jones, Lee; Kirkham, Matthew; Lark, Murray; Rawlins, Barry; Rickson, Jane; Quine, Tim; Wetherelt, Andy; Brazier, Richard

    2014-05-01

    3D topographic or surface models are increasingly being utilised for a wide range of applications and are established tools in geomorphological research. In this pilot study, 'a cost-effective framework for monitoring soil erosion in England and Wales', funded by the UK Department for Environment, Food and Rural Affairs (Defra), we compare methods of collecting topographic measurements via remote sensing for detailed studies of dynamic processes such as erosion and mass movement. The techniques assessed are terrestrial laser scanning (TLS), and unmanned aerial vehicle (UAV) and ground-based photography, processed using structure-from-motion (SfM) 3D reconstruction software. The methods will be applied in regions of different land use, including arable and horticultural land, upland and semi-natural habitats, and grassland, to quantify visible erosion pathways at the site scale. Volumetric estimates of soil loss will be quantified using the digital surface models (DSMs) provided by each technique and a modelled pre-erosion surface. Visible erosion and its severity will be independently established with each technique, and their results compared and their combined effectiveness assessed. A fixed delta-wing UAV (QuestUAV, http://www.questuav.com/) captures photos from a range of altitudes and angles over the study area, with automated SfM-based processing enabling rapid orthophoto production to support ground-based data acquisition. At sites with erosion features of suitable scale, UAV data will also provide a DSM for volume-loss measurement. Terrestrial laser scanning will provide detailed, accurate, high-density measurements of the ground surface over long (100s of m) distances. Ground-based photography is anticipated to be most useful for characterising small and difficult-to-view features. By using a consumer-grade digital camera and an SfM-based approach (using Agisoft Photoscan version 1.0.0, http://www.agisoft.ru/products/photoscan/), less expertise and fewer control
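
    The volumetric soil-loss estimate mentioned above reduces, once the post-erosion DSM and a modelled pre-erosion surface share the same grid, to summing positive elevation differences over the cell area. The sketch below assumes NumPy arrays and a small noise threshold; the function name and the 2 cm default threshold are illustrative, not values from the study.

```python
import numpy as np

def erosion_volume(pre_dsm, post_dsm, cell_size, min_depth=0.02):
    """Estimate eroded volume (m^3) from co-registered pre/post DSM grids.

    pre_dsm, post_dsm : 2-D arrays of elevations (m) on the same grid.
    cell_size         : grid spacing (m).
    min_depth         : ignore differences below this threshold (m) to
                        suppress DSM noise (value is illustrative).
    """
    diff = pre_dsm - post_dsm                    # positive where material was lost
    diff = np.where(np.isnan(diff), 0.0, diff)   # skip no-data cells
    eroded = np.where(diff > min_depth, diff, 0.0)
    return float(eroded.sum() * cell_size ** 2)
```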

  17. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last few years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye-tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  18. Air Force UAVs: The Secret History

    DTIC Science & Technology

    2010-07-01

    Air Force UAVs: The Secret History. A Mitchell Institute Study, July 2010, by Thomas P. Ehrhard. The available record text consists of report-documentation and cover-page fragments, including a reference to the opening phases of Operation Enduring Freedom in Afghanistan.

  19. Automated ortho-rectification of UAV-based hyperspectral data over an agricultural field using frame RGB imagery

    DOE PAGES

    Habib, Ayman; Han, Youkyung; Xiong, Weifeng; ...

    2016-09-24

    Low-cost Unmanned Airborne Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade and light-weight sensors. However, the geometric fidelity of derived information from push-broom hyperspectral scanners is quite sensitive to the available position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS)). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. The approach relies on utilizing the navigation data, together with a modified Speeded-Up Robust Feature (SURF) detector and descriptor, for automating the identification of conjugate features in the RGB and hyperspectral imagery. The SURF modification takes into consideration the available direct geo-referencing information to improve the reliability of the matching procedure in the presence of repetitive texture within a
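
    The conjugate-feature step described above can be approximated, in its unmodified form, by a standard detector-descriptor pipeline plus a RANSAC model fit. The sketch below uses ORB (a patent-free stand-in for SURF available in stock OpenCV) and a homography; it does not reproduce the geo-referencing-aware SURF modification that is the paper's actual contribution, and all function and file names are illustrative.

```python
import cv2
import numpy as np

def match_conjugate_features(rgb_path, hyper_band_path, ratio=0.75):
    """Find candidate conjugate points between an RGB frame and a single
    hyperspectral band rendered as a grey image.

    Plain ORB + ratio test + RANSAC homography; a generic stand-in, not the
    modified SURF described in the abstract.
    """
    img1 = cv2.imread(rgb_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(hyper_band_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    if len(good) < 4:                      # not enough matches for a homography
        return None, 0
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, int(inlier_mask.sum())
```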

  20. Automated ortho-rectification of UAV-based hyperspectral data over an agricultural field using frame RGB imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Ayman; Han, Youkyung; Xiong, Weifeng

    Low-cost Unmanned Airborne Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade and light-weight sensors. However, the geometric fidelity of derived information from push-broom hyperspectral scanners is quite sensitive to the available position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS)). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. The approach relies on utilizing the navigation data, together with a modified Speeded-Up Robust Feature (SURF) detector and descriptor, for automating the identification of conjugate features in the RGB and hyperspectral imagery. The SURF modification takes into consideration the available direct geo-referencing information to improve the reliability of the matching procedure in the presence of repetitive texture within a

  1. Air Force UAVs: The Secret History

    DTIC Science & Technology

    2010-07-01

    Air Force UAVs: The Secret History. A Mitchell Institute Study, July 2010, by Thomas P. Ehrhard. The available record text consists of report-documentation and page-header fragments, ending with the opening line "Has any airplane in the past decade captured the public

  2. Calibration procedures for imaging spectrometers: improving data quality from satellite missions to UAV campaigns

    NASA Astrophysics Data System (ADS)

    Brachmann, Johannes F. S.; Baumgartner, Andreas; Lenhard, Karim

    2016-10-01

    The Calibration Home Base (CHB) at the Remote Sensing Technology Institute of the German Aerospace Center (DLR-IMF) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. Radiometric, spectral and geometric characterization is realized in the CHB in a precise and highly automated fashion. This allows a wide range of time-consuming measurements to be performed efficiently. The implementation of ISO 9001 standards ensures traceable quality of results. DLR-IMF will support the calibration and characterization campaign of the future German spaceborne hyperspectral imager EnMAP. In the context of this activity, a procedure for the correction of imaging artifacts, such as those due to stray light, is currently being developed by DLR-IMF. The goal is the correction of in-band stray light as well as ghost images down to a level of a few digital numbers over the whole wavelength range of 420-2450 nm. DLR-IMF owns a Norsk Elektro Optikk HySpex airborne imaging spectrometer system that has been thoroughly characterized. This system will be used to test stray light calibration procedures for EnMAP. Hyperspectral snapshot sensors offer the possibility to simultaneously acquire hyperspectral data in two dimensions. Recently, these rather new spectrometers have attracted much interest in the remote sensing community. Different designs are currently used for local-area observation, for example from small unmanned aerial vehicles (sUAV). In this context, the CHB's measurement capabilities are currently being extended so that a standard measurement procedure for these new sensors can be implemented.

  3. UAV observation of newly formed volcanic island, Nishinoshima, Japan, from a ship

    NASA Astrophysics Data System (ADS)

    Ohminato, T.; Kaneko, T.; Takagi, A.

    2016-12-01

    We conducted an aerial observation of Nishinoshima island, south of Japan, from Jun 7 to Jun 9, 2016, using an Unmanned Aerial Vehicle (UAV), a radio-controlled small helicopter. Takeoff and landing of the UAV were conducted on a ship. Nishinoshima is a small island, 130 km west of Chichijima in the Ogasawara Islands, Japan. A new eruption started in November 2013 in a shallow sea approximately 400 m southeast of the existing Nishinoshima Island. It started from a small islet and grew with a discharge rate of 1-5 × 10⁵ m³/day (Maeno et al., 2016). In late December 2013, the islet coalesced with the existing Nishinoshima. In 16 months, the lava field reached 2.6 × 10⁶ m² and covered almost all of the existing Nishinoshima. Human landing on the newly formed part of the island is still prohibited due to the danger of sudden eruptions. Before our mission, some pumice or rock samples had been taken from the island, but their amount was not sufficient for detailed petrological analyses. The evolution of the lava field from the central cone has been well documented using images taken from satellites and airplanes. However, due to the limited resolution of satellite images or photos taken from distant airplanes, there are still uncertainties in the detailed morphological evolution of the lava flows. The purposes of our observation were (1) to sample pyroclasts near the central cone in order to investigate the condition of the magma chamber and the magma ascent process, and (2) to take high-resolution 4K images in order to clarify the characteristic morphology of the lava flow covering the island. During the three-day operation, we successfully sampled 250 g of pyroclasts and took 1.5 TB of 4K movies. Conducting the UAV's takeoff and landing on a ship was not an easy task. We used a marine research ship, Keifu-Maru, operated by the Japan Meteorological Agency. The ship size is 1483 tons. On the ship deck, there are several structures which can interfere with the helicopter

  4. Unmanned aerial vehicles (UAVs) in pest management: Progress in the development of a UAV-deployed mating disruption system for Wisconsin cranberries

    USDA-ARS?s Scientific Manuscript database

    Unmanned aerial vehicles (UAVs) represent a powerful new tool for agriculture. Currently, UAVs are used almost exclusively as crop reconnaissance devices (“eyes in the sky”), not as pest control delivery systems. Research in Wisconsin cranberries is taking UAVs in a new direction. The Steffan and Lu...

  5. Unmanned aerial vehicles (UAVs) in pest management: Progress in the development of a UAV-deployed mating disruption system for Wisconsin cranberries

    USDA-ARS?s Scientific Manuscript database

    Unmanned aerial vehicles (UAVs) hold significant promise for agriculture. Currently, UAVs are being employed for various reconnaissance purposes (“eyes in the sky”), but not as pest control delivery systems. Research in Wisconsin cranberries is taking UAVs in a new direction. The Steffan and Luck La...

  6. Budget Uav Systems for the Prospection of Small- and Medium-Scale Archaeological Sites

    NASA Astrophysics Data System (ADS)

    Ostrowski, W.; Hanus, K.

    2016-06-01

    One of the popular uses of UAVs in photogrammetry is providing archaeological documentation. A wide offer of low-cost (consumer-grade) UAVs, as well as the popularity of user-friendly photogrammetric software allowing satisfying results to be obtained, contribute to facilitating the process of preparing documentation for small archaeological sites. However, using solutions of this kind is much more problematic for larger areas. The limited possibilities of autonomous flight make it significantly harder to obtain data for areas too large to be covered during a single mission. Moreover, sometimes the platforms used are not equipped with telemetry systems, which makes navigating and guaranteeing a similar quality of data during separate flights difficult. The simplest solution is using a better UAV; however, the cost of such devices often exceeds the financial capabilities of archaeological expeditions. The aim of this article is to present a methodology for obtaining data for medium-scale areas using only a basic UAV. The proposed methodology assumes using a simple multirotor not equipped with any flight planning system or telemetry. Navigation of the platform is based solely on live-view images sent from the camera attached to the UAV. The presented survey was carried out using a simple GoPro camera which, from the perspective of photogrammetric use, was not the optimal configuration due to the fisheye geometry of the camera. Another limitation is the actual operational range of UAVs, which in the case of cheaper systems rarely exceeds 1 kilometre and is in fact often much smaller. Therefore the surveyed area must be divided into sub-blocks which correspond to the range of the drone. This is inconvenient since the blocks must overlap so that they can later be merged during processing. This increases the length of the required flights as well as the computing power necessary to process a greater number of images. These issues make prospection highly

  7. State-of-the-Art in Uav Remote Sensing Survey - First Insights Into Applications of Uav Sensing Systems

    NASA Astrophysics Data System (ADS)

    Aasen, H.

    2017-08-01

    UAVs are increasingly adopted as remote sensing platforms. Together with specialized sensors, they become powerful sensing systems for environmental monitoring and surveying. Spectral data has great potential for gathering information about biophysical and biochemical properties. Still, capturing meaningful spectral data in a reproducible way is not trivial. For a few years now, small and lightweight spectral sensors that can be carried on small flexible platforms have been available. With their adoption in the community, the responsibility to ensure the quality of the data is increasingly shifted from specialized companies and agencies to individual researchers or research teams. Due to the complexity of spectral data acquisition, this poses a challenge for the community, and standardized protocols, metadata and best-practice procedures are needed to make data intercomparable. In November 2016, the ESSEM COST action Innovative optical Tools for proximal sensing of ecophysiological processes (OPTIMISE; http://optimise.dcs.aber.ac.uk/) held a workshop on best practices for UAV spectral sampling. The objective of this meeting was to trace the way from particle to pixel, identify influences on data quality and reliability, and figure out how well we are currently doing with spectral sampling from UAVs and how we can improve. Additionally, a survey was designed for distribution within the community to obtain an overview of current practices and raise awareness of the topic. This talk will introduce the approach of the OPTIMISE community towards best practices in UAV spectral sampling and present first results of the survey (http://optimise.dcs.aber.ac.uk/uav-survey/). This contribution briefly introduces the survey and gives some insights into the first results given by the interviewees.

  8. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.

  9. Towards a More Efficient Detection of Earthquake Induced FAÇADE Damages Using Oblique Uav Imagery

    NASA Astrophysics Data System (ADS)

    Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.

    2017-08-01

    Urban search and rescue (USaR) teams require a fast and thorough building damage assessment, to focus their rescue efforts accordingly. Unmanned aerial vehicles (UAV) are able to capture relevant data in a short time frame and survey otherwise inaccessible areas after a disaster, and have thus been identified as useful when coupled with RGB cameras for façade damage detection. Existing literature focuses on the extraction of 3D and/or image features as cues for damage. However, little attention has been given to the efficiency of the proposed methods, which hinders their use in an urban search and rescue context. The framework proposed in this paper aims at more efficient façade damage detection using UAV multi-view imagery. This was achieved by directing all damage classification computations only to the image regions containing the façades, hence discarding the irrelevant areas of the acquired images and consequently reducing the time needed for the task. To accomplish this, a three-step approach is proposed: i) building extraction from the sparse point cloud computed from the nadir images collected in an initial flight; ii) use of the latter as a proxy for façade location in the oblique images captured in subsequent flights; and iii) selection of the façade image regions to be fed to a damage classification routine. The results show that the proposed framework successfully reduces the extracted façade image regions to be assessed for damage six-fold, hence increasing the efficiency of subsequent damage detection routines. The framework was tested on a set of UAV multi-view images over a neighborhood of the city of L'Aquila, Italy, affected in 2009 by an earthquake.

  10. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2000-12-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving the search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for the 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  11. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2001-01-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving the search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for the 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  12. Mobile 3d Mapping with a Low-Cost Uav System

    NASA Astrophysics Data System (ADS)

    Neitzel, F.; Klonowski, J.

    2011-09-01

    In this contribution it is shown how a UAV system can be built at low cost. The components of the system, the equipment, and the control software are presented. Furthermore, an implemented programme for photogrammetric flight planning and its execution are described. The main focus of this contribution is on the generation of 3D point clouds from digital imagery. For this, web services and free software solutions are presented which automatically generate 3D point clouds from arbitrary image configurations. Possibilities for georeferencing are described and the achieved accuracy has been determined. The presented workflow is finally used for the acquisition of 3D geodata. Using the example of a landfill survey, it is shown that marketable products can be derived using a low-cost UAV.
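
    One common way to georeference an SfM point cloud, as discussed above, is a 3D similarity (Helmert) transform estimated from ground control points. The sketch below implements the standard Umeyama least-squares solution with NumPy; it is a generic textbook method, not necessarily the georeferencing procedure used by the authors.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 3-D similarity (scale, rotation, translation) mapping
    src control points onto dst (both N x 3 arrays), Umeyama-style.

    Returns (s, R, t) such that dst_i ~= s * R @ src_i + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance between the centred point sets
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```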

  13. Rapid Topographic Mapping Using TLS and UAV in a Beach-dune-wetland Environment: Case Study in Freeport, Texas, USA

    NASA Astrophysics Data System (ADS)

    Ding, J.; Wang, G.; Xiong, L.; Zhou, X.; England, E.

    2017-12-01

    Coastal regions are naturally vulnerable to impact from long-term coastal erosion and episodic coastal hazards caused by extreme weather events. Major geomorphic changes can occur within a few hours during storms. Prediction of storm impact, coastal planning, and resilience observation after natural events all require accurate and up-to-date topographic maps of coastal morphology. Thus, the ability to conduct rapid, high-resolution, high-accuracy topographic mapping is of critical importance for long-term coastal management and rapid response after natural hazard events. Terrestrial laser scanning (TLS) techniques have been frequently applied to beach and dune erosion studies and post-hazard responses. However, TLS surveying is relatively slow and costly for rapid surveying. Furthermore, TLS surveying unavoidably leaves gray areas that cannot be reached by laser pulses, particularly in wetland areas which in most cases lack direct access. Aerial mapping using photogrammetry from images taken by unmanned aerial vehicles (UAV) has become a new technique for rapid topographic mapping. UAV photogrammetry mapping techniques provide the ability to map coastal features quickly, safely, inexpensively, on short notice and with minimal impact. The primary products from photogrammetry are point clouds similar to LiDAR point clouds. However, a large number of ground control points (ground truth) are essential for obtaining high-accuracy UAV maps. The ground control points are often obtained by GPS survey simultaneously with the TLS survey in the field. The GPS survey can be a slow and arduous process in the field. This study aims to develop methods for acquiring a large number of ground control points from the TLS survey and for validating point clouds obtained from photogrammetry against the TLS point clouds. A Riegl VZ-2000 TLS scanner was used for developing laser point clouds and a DJI Phantom 4 Pro UAV was used for acquiring images. The aerial images were processed with the
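
    Validating a photogrammetric point cloud against a TLS reference, as proposed above, is often done with simple cloud-to-cloud nearest-neighbour distances. The sketch below assumes SciPy and co-registered N x 3 point arrays; the summary statistics chosen (mean, RMSE, 95th percentile) are illustrative rather than the study's actual validation metrics.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(uav_points, tls_points):
    """Nearest-neighbour distance from each UAV-SfM point to the TLS cloud.

    Both inputs are N x 3 arrays in the same coordinate system; returns the
    per-point distances plus simple summary statistics (in metres).
    """
    tree = cKDTree(tls_points)
    dist, _ = tree.query(uav_points, k=1)
    stats = {"mean": float(dist.mean()),
             "rmse": float(np.sqrt((dist ** 2).mean())),
             "p95": float(np.percentile(dist, 95))}
    return dist, stats
```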

  14. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    PubMed Central

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique biometric feature perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received a great deal of attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches. PMID:25574935

  15. Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland

    NASA Astrophysics Data System (ADS)

    Lu, Bing; He, Yuhong

    2017-06-01

    Investigating spatio-temporal variations of species composition in grassland is an essential step in evaluating grassland health conditions, understanding the evolutionary processes of the local ecosystem, and developing grassland management strategies. Space-borne remote sensing images (e.g., MODIS, Landsat, and Quickbird) with spatial resolutions varying from less than 1 m to 500 m have been widely applied for vegetation species classification at spatial scales from community to regional levels. However, the spatial resolutions of these images are not fine enough to investigate grassland species composition, since grass species are generally small in size and highly mixed, and vegetation cover is greatly heterogeneous. Unmanned Aerial Vehicle (UAV) as an emerging remote sensing platform offers a unique ability to acquire imagery at very high spatial resolution (centimetres). Compared to satellites or airplanes, UAVs can be deployed quickly and repeatedly, and are less limited by weather conditions, facilitating advantageous temporal studies. In this study, we utilize an octocopter, on which we mounted a modified digital camera (with near-infrared (NIR), green, and blue bands), to investigate species composition in a tall grassland in Ontario, Canada. Seven flight missions were conducted during the growing season (April to December) in 2015 to detect seasonal variations, and four of them were selected in this study to investigate the spatio-temporal variations of species composition. To quantitatively compare images acquired at different times, we establish a processing flow of UAV-acquired imagery, focusing on imagery quality evaluation and radiometric correction. The corrected imagery is then applied to an object-based species classification. Maps of species distribution are subsequently used for a spatio-temporal change analysis. Results indicate that UAV-acquired imagery is an incomparable data source for studying fine-scale grassland species composition
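
    A modified NIR/green/blue camera like the one described above is often exploited through simple band-ratio indices before classification. The sketch below computes a green-NDVI-style index with NumPy as one possible per-pixel input feature; it is a generic index, not the radiometric correction and object-based classification workflow the study applies.

```python
import numpy as np

def green_ndvi(nir, green, eps=1e-6):
    """Per-pixel green NDVI, (NIR - G) / (NIR + G), for a modified camera
    recording NIR, green and blue bands. Inputs should be radiometrically
    corrected to a common reference (ideally reflectance)."""
    nir = nir.astype(np.float64)
    green = green.astype(np.float64)
    return (nir - green) / (nir + green + eps)  # eps avoids division by zero
```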

  16. System Considerations and Challenges in 3d Mapping and Modeling Using Low-Cost Uav Systems

    NASA Astrophysics Data System (ADS)

    Lari, Z.; El-Sheimy, N.

    2015-08-01

    In the last few years, low-cost UAV systems have been acknowledged as an affordable technology for geospatial data acquisition that can meet the needs of a variety of traditional and non-traditional mapping applications. In spite of its proven potential, UAV-based mapping still lacks what is needed for it to become an accepted mapping tool. In other words, a well-designed system architecture that considers payload restrictions as well as the specifications of the utilized direct geo-referencing component and imaging systems, in light of the required mapping accuracy and intended application, is still required. Moreover, efficient data processing workflows are still needed, workflows capable of delivering the mapping products with the specified quality while considering the synergistic characteristics of the sensors onboard, the wide range of potential users who might lack deep knowledge of mapping activities, and the time constraints of emerging applications. Therefore, the challenges introduced by having low-cost imaging and georeferencing sensors onboard UAVs with limited payload capability, the necessity of efficient data processing techniques for delivering the required products for the intended applications, and the diversity of potential users with insufficient mapping-related expertise need to be fully investigated and addressed by UAV-based mapping research efforts. This paper addresses these challenges and reviews system considerations, adaptive processing techniques, and quality assurance/quality control procedures for the achievement of accurate mapping products from these systems.

  17. UAV Annual Report, FY 1996.

    DTIC Science & Technology

    1996-11-06

    The record text consists of report-documentation fragments listing UAV program contractors (Tracor; Vector; Cl Fiberite; Hexcel; Honeywell Cannon; Tamam; IntegriNautics; Lockheed Martin; Carlyle Gp; Northrop Grumman (SAR); Hbroux; Hughes Aerospace Group; Teftec Inc.; Rosemount Aerospace; Williams International) together with developmental cost estimates, and references the UAV Tier II programme and a U.S. Customs Service P-3 AEW and Predator.

  18. Comparison of a Fixed-Wing and Multi-Rotor Uav for Environmental Mapping Applications: a Case Study

    NASA Astrophysics Data System (ADS)

    Boon, M. A.; Drijfhout, A. P.; Tesfamichael, S.

    2017-08-01

    The advent and evolution of Unmanned Aerial Vehicles (UAVs) and photogrammetric techniques have provided the possibility of on-demand high-resolution environmental mapping. Orthoimages and three-dimensional products such as Digital Surface Models (DSMs) are derived from the UAV imagery and are amongst the most important spatial information tools for environmental planning. The two main types of UAVs on the commercial market are fixed-wing and multi-rotor. Both have their advantages and disadvantages, including their suitability for certain applications. Fixed-wing UAVs normally have longer flight endurance, while multi-rotors provide stable image capture and easy vertical take-off and landing. Therefore, the objective of this study is to assess the performance of a fixed-wing versus a multi-rotor UAV for environmental mapping applications through a specific case study. The aerial mapping of the Cors-Air model aircraft field, which includes a wetland ecosystem, was undertaken on the same day with a Skywalker fixed-wing UAV and a Raven X8 multi-rotor UAV equipped with similar sensors (digital RGB camera) under the same weather conditions. We compared the derived datasets by applying the DTMs to basic environmental mapping purposes such as slope and contour mapping, and by utilising the orthoimages for the identification of anthropogenic disturbances. The ground spatial resolution obtained was slightly higher for the multi-rotor, probably due to a slower flight speed and a larger number of images. The overall precision of the fixed-wing data was noticeably lower. In contrast, the orthoimages derived from the two systems showed only small variations. The multi-rotor imagery provided a better representation of vegetation, although the fixed-wing data was sufficient for the identification of environmental factors such as anthropogenic disturbances. Differences were observed when utilising the respective DTMs for the mapping

  19. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  20. Development and testing of instrumentation for ship-based UAV measurements of ocean surface processes and the marine atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Reineman, B. D.; Lenain, L.; Statom, N.; Melville, W. K.

    2012-12-01

    We have developed instrumentation packages for unmanned aerial vehicles (UAVs) to measure ocean surface processes along with momentum fluxes and latent, sensible, and radiative heat fluxes in the marine atmospheric boundary layer (MABL). The packages have been flown over land on BAE Manta C1s and over water on Boeing-Insitu ScanEagles. The low altitude required for accurate surface flux measurements (< 30 m) is below the typical safety limit of manned research aircraft; however, with advances in laser altimeters, small-aircraft flight control, and real-time kinematic differential GPS, low-altitude flight is now within the capability of small UAV platforms. Fast-response turbulence, hygrometer, and temperature probes permit turbulent flux measurements, and short- and long-wave radiometers allow the determination of net radiation, surface temperature, and albedo. Onboard laser altimetry and high-resolution visible and infrared video permit observations of surface waves and fine-scale (O(10) cm) ocean surface temperature structure. Flight tests of payloads aboard ScanEagle UAVs were conducted in April 2012 at the Naval Surface Warfare Center Dahlgren Division (Dahlgren, VA), where measurements of water vapor, heat, and momentum fluxes were made from low-altitude (31-m) UAV flights over water (Potomac River). ScanEagles are capable of ship-based launch and recovery, which can extend the reach of research vessels and enable scientific measurements out to ranges of O(10-100) km and altitudes up to 5 km. UAV-based atmospheric and surface observations can complement observations of surface and subsurface phenomena made from a research vessel and avoid the well-known problems of vessel interference in MABL measurements. We present a description of the instrumentation, summarize results from flight tests, and discuss potential applications of these UAVs for ship-based MABL studies.
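
    The turbulent flux measurements mentioned above are conventionally computed by eddy covariance, i.e. covariances of the fluctuating vertical wind with the other fluctuating quantities. The sketch below shows the momentum and sensible heat flux forms with NumPy; the constant air density and simple mean-removal detrending are simplifying assumptions, not the processing chain used for these payloads.

```python
import numpy as np

RHO_AIR = 1.2      # kg m^-3, illustrative near-surface air density
CP_AIR = 1005.0    # J kg^-1 K^-1, specific heat of air at constant pressure

def eddy_covariance_fluxes(u, w, temp):
    """Momentum and sensible heat flux from fast-response time series
    (streamwise wind u and vertical wind w in m/s, air temperature in K),
    using simple mean removal as the Reynolds decomposition."""
    up = u - u.mean()
    wp = w - w.mean()
    tp = temp - temp.mean()
    tau = -RHO_AIR * np.mean(up * wp)          # momentum flux (N m^-2)
    h_s = RHO_AIR * CP_AIR * np.mean(wp * tp)  # sensible heat flux (W m^-2)
    return tau, h_s
```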

  1. Commercial UAV operations in civil airspace

    NASA Astrophysics Data System (ADS)

    Newcome, Laurence R.

    2000-11-01

    The Federal Aviation Administration is often portrayed as the major impediment to unmanned aerial vehicle expansion into civil government and commercial markets. This paper describes one company's record for successfully negotiating the FAA regulations and obtaining authorizations for several types of UAVs to fly commercial reconnaissance missions in civil airspace. The process and criteria for obtaining such authorizations are described. The mishap records of the Pioneer, Predator and Hunter UAVs are examined in regard to their impact on FAA rule making. The paper concludes with a discussion of the true impediments to UAV penetration of commercial markets to date.

  2. Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery.

    PubMed

    Zhao, Yi; Ma, Jiale; Li, Xiaohui; Zhang, Jie

    2018-02-27

    An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) receiver can provide directly georeferenced imagery, mapping an area at high resolution. So far, the major difficulty in wildfire image classification is the lack of unified identification marks; the fire features of color, shape and texture (smoke, flame, or both) and the background can vary significantly from one scene to another. Deep learning (e.g., DCNN for Deep Convolutional Neural Network) is very effective in high-level feature learning; however, a substantial training image dataset is required to optimize its weights and coefficients. In this work, we propose a new saliency detection algorithm for fast location and segmentation of the core fire area in aerial images. As the proposed method can effectively avoid the feature loss caused by direct resizing, it is used for data augmentation and for building a standard fire image dataset, 'UAV_Fire'. A 15-layer self-learning DCNN architecture named 'Fire_Net' is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with respect to its validation accuracy. The proposed architecture outperformed previous methods by achieving an overall accuracy of 98%. Furthermore, 'Fire_Net' achieved an average processing speed of 41.5 ms per image, enabling real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 sampled images from wildfire news reports and all of them were accurately identified.
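
    For readers unfamiliar with the kind of classifier described above, the sketch below shows a minimal binary fire / no-fire CNN in PyTorch. It is an illustrative stand-in only: the layer counts, channel widths and input size are arbitrary and do not reproduce the 15-layer 'Fire_Net' architecture or its reported accuracy.

```python
import torch
import torch.nn as nn

class TinyFireNet(nn.Module):
    """Minimal CNN for binary fire / no-fire patch classification.
    Illustrative stand-in, not the 15-layer 'Fire_Net'."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global average pooling
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5), nn.Linear(64, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

# Example forward pass on a batch of four 128x128 RGB patches
logits = TinyFireNet()(torch.randn(4, 3, 128, 128))
```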

  3. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.

    PubMed

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-08-30

    Unmanned aerial vehicles (UAVs), commonly known as drones, have proved to be useful not only on battlefields, where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise if the drones operate in areas with no GPS signal, so it is important to perform research on the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.
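
    The paper relies on a custom-designed marker and its own tracker; purely as a generic illustration of marker-based landing guidance, the sketch below detects a fiducial with OpenCV's ArUco module (opencv-contrib, pre-4.7 aruco API) and returns its image-plane centre. The dictionary choice and function name are assumptions, and the code is not the authors' method.

```python
import cv2

def detect_landing_marker(frame_bgr):
    """Detect a fiducial marker in a camera frame and return its image-plane
    centre (x, y) in pixels, or None if no marker is visible.
    Generic ArUco stand-in for the custom marker described in the paper."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)  # pre-4.7 API
    if ids is None or len(ids) == 0:
        return None
    c = corners[0].reshape(-1, 2)   # four corner points of the first marker
    return float(c[:, 0].mean()), float(c[:, 1].mean())
```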

  4. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor

    PubMed Central

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-01-01

    Unmanned aerial vehicles (UAVs), commonly known as drones, have proved to be useful not only on battlefields, where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise if the drones operate in areas with no GPS signal, so it is important to perform research on the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775

  5. UAV State Estimation Modeling Techniques in AHRS

    NASA Astrophysics Data System (ADS)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be completed safely. One sensor configuration used for UAV state estimation is an Attitude and Heading Reference System (AHRS) combined with an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
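
    The EKF mentioned above follows the standard predict/update recursion regardless of the particular AHRS state vector. The sketch below is a generic NumPy skeleton into which process and measurement models (and their Jacobians) would be plugged; the class name and interface are illustrative, not taken from the paper.

```python
import numpy as np

class SimpleEKF:
    """Generic extended Kalman filter skeleton of the kind used in an AHRS.
    f, h are the (nonlinear) process and measurement models; F_jac, H_jac
    return their Jacobians evaluated at the current state."""
    def __init__(self, x0, P0, Q, R, f, h, F_jac, H_jac):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R
        self.f, self.h, self.F_jac, self.H_jac = f, h, F_jac, H_jac

    def predict(self, u, dt):
        F = self.F_jac(self.x, u, dt)          # linearize the process model
        self.x = self.f(self.x, u, dt)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        H = self.H_jac(self.x)                 # linearize the measurement model
        y = z - self.h(self.x)                 # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```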

  6. Design of rapid prototype of UAV line-of-sight stabilized control system

    NASA Astrophysics Data System (ADS)

    Huang, Gang; Zhao, Liting; Li, Yinlong; Yu, Fei; Lin, Zhe

    2018-01-01

    The line-of-sight (LOS) stabilized platform is a key technology for UAVs (unmanned aerial vehicles), since it reduces the effect of aircraft vibration and maneuvering on imaging quality. According to the requirements of the LOS stabilization system (a combined inertial and optical-mechanical method) and the UAV's structure, a rapid prototype is designed on an industrial computer, using the Peripheral Component Interconnect (PCI) bus and Windows RTX to exchange information. The paper presents the control structure and the circuit system, including the inertial stabilization control circuit with gyro and voice-coil-motor driver, the optical-mechanical stabilization control circuit with fast-steering-mirror (FSM) driver and image-deviation measurement, the outer-frame rotary follower, and the information-exchange system on the PC. Test results show that the stabilization accuracy reaches 5 μrad, proving the effectiveness of the combined line-of-sight stabilization control system, and the real-time rapid prototype runs stably.
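
    At its core, each stabilization loop described above drives an actuator (voice coil motor or FSM) from a measured LOS rate error. The sketch below is a bare PID loop in Python with illustrative gains and limits; it is a didactic stand-in, not the combined inertial/optical-mechanical controller implemented on the PCI/RTX prototype.

```python
class PID:
    """Bare PID loop of the kind used to drive a stabilization actuator
    from a gyro rate error; gains and limits are purely illustrative."""
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd, self.out_limit = kp, ki, kd, out_limit
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement, dt):
        err = setpoint - measurement
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.out_limit, min(self.out_limit, out))  # saturate output

# Example: command the mirror/motor from the measured LOS rate error (rad/s)
pid = PID(kp=8.0, ki=2.0, kd=0.05, out_limit=1.0)
command = pid.step(setpoint=0.0, measurement=0.003, dt=0.001)
```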

  7. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

    The following article presents the design, creation and testing of a metric intended for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, setting its core feature and functionality to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under strict provider evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video samples. The designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  8. Standoff passive video imaging at 350 GHz with 251 superconducting detectors

    NASA Astrophysics Data System (ADS)

    Becker, Daniel; Gentry, Cale; Smirnov, Ilya; Ade, Peter; Beall, James; Cho, Hsiao-Mei; Dicker, Simon; Duncan, William; Halpern, Mark; Hilton, Gene; Irwin, Kent; Li, Dale; Paulter, Nicholas; Reintsema, Carl; Schwall, Robert; Tucker, Carole

    2014-06-01

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bomb belts and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) detectors makes them ideal for passive imaging of thermal signals at these wavelengths. We have built a 350 GHz video-rate imaging system using a large-format array of feedhorn-coupled TES bolometers. The system operates at a standoff distance of 16 m to 28 m with a spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector subarray, and will be expanded to contain four subarrays for a total of 1004 detectors. The system has been used to take video images which reveal the presence of weapons concealed beneath a shirt in an indoor setting. We present a summary of this work.

  9. JSC Shuttle Mission Simulator (SMS) visual system payload bay video image

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This space shuttle orbiter payload bay (PLB) video image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). The image is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.

  10. Pressurized Structure Technology for UAVS

    DTIC Science & Technology

    2008-12-01

    deficiencies of the UAVs just listed is to employ lighter-than-air or pressurized structure-based (PSB) technology. Basically, the UAV will be built such...that a considerable percentage of its weight is supported by or constructed from inflatable structures containing air or helium. PSB technology...neutral buoyancy will allow much slower flight speeds and increased maneuverability while expending little power. PSB airframes used in conjunction

  11. The Altus Cumulus Electrification Study (ACES): A UAV-Based Science Demonstration

    NASA Technical Reports Server (NTRS)

    Blakeslee, R. J.; Croskey, C. L.; Desch, M. D.; Farrell, W. M.; Goldberg, R. A.; Houser, J. G.; Kim, H. S.; Mach, D. M.; Mitchell, J. D.; Stoneburner, J. C.

    2003-01-01

    The Altus Cumulus Electrification Study (ACES) is an unmanned aerial vehicle (UAV)-based project that investigated thunderstorms in the vicinity of the Florida Everglades in August 2002. ACES was conducted to investigate storm electrical activity and its relationship to storm morphology, and to validate satellite-based lightning measurements. In addition, as part of the NASA-sponsored UAV-based science demonstration program, this project provided a scientifically useful demonstration of the utility and promise of UAV platforms for Earth science and applications observations. ACES employed the Altus II aircraft, built by General Atomics - Aeronautical Systems, Inc. Key science objectives simultaneously addressed by ACES are to: (1) investigate lightning-storm relationships, (2) study storm electrical budgets, and (3) provide Lightning Imaging Sensor validation. The ACES payload included electrical, magnetic, and optical sensors to remotely characterize the lightning activity and the electrical environment within and around thunderstorms. ACES contributed important electrical and optical measurements not available from other sources. Also, the high-altitude vantage point of the UAV observing platform (up to 55,000 feet) provided a cloud-top perspective. By taking advantage of its slow flight speed (70 to 100 knots), long endurance, and high-altitude flight, the Altus was flown near, and when possible over (but never into), thunderstorms for long periods of time, which allowed investigations to be conducted over entire storm life cycles. An innovative real-time weather system was used to identify and vector the aircraft to selected thunderstorms and to fly safely around these storms, while at the same time monitoring the weather near our base of operations. In addition, concurrent ground-based observations that included radar (Miami and Key West WSR-88D, NASA NPOL), satellite imagery, and lightning (NALDN and Los Alamos EDOT) enabled the UAV measurements to be more completely

  12. Microwave tomography for an effective imaging in GPR on UAV/airborne observational platforms

    NASA Astrophysics Data System (ADS)

    Soldovieri, Francesco; Catapano, Ilaria; Ludeno, Giovanni

    2017-04-01

    GPR was originally conceived as a non-invasive diagnostic technique working in contact with the underground or the structure to be investigated. In recent years, however, several challenging needs and opportunities have made it necessary to work with antennas that are not in contact with the structure to be investigated. This need arises, for example, in landmine detection but also in cultural heritage diagnostics. Another field of application is forward-looking GPR, aimed at shallow targets hidden ahead of the platform (vehicle) carrying the GPR [1]. Finally, a recent application concerns the deployment of airborne/UAV GPR, which offers several advantages in terms of large-scale surveys and freedom from logistic constraints [2]. For all the above-mentioned cases, the interest is in developing effective data processing able to perform the imaging task in real time. The presentation will show different data processing strategies, based on microwave tomography [1,2], for reliable and real-time imaging in the case of GPR platforms located away from the interface of the structure/underground to be investigated. [1] I. Catapano, A. Affinito, A. Del Moro, G. Alli, and F. Soldovieri, "Forward-Looking Ground-Penetrating Radar via a Linear Inverse Scattering Approach," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, pp. 5624-5633, Oct. 2015. [2] I. Catapano, L. Crocco, Y. Krellmann, G. Triltzsch, and F. Soldovieri, "A tomographic approach for helicopter-borne ground penetrating radar imaging," IEEE Geosci. Remote Sens. Lett., vol. 9, no. 3, pp. 378-382, May 2012.

  13. 4D very high-resolution topography monitoring of surface deformation using UAV-SfM framework.

    NASA Astrophysics Data System (ADS)

    Clapuyt, François; Vanacker, Veerle; Schlunegger, Fritz; Van Oost, Kristof

    2016-04-01

    In recent years, exploratory research has shown that UAV-based image acquisition is suitable for environmental remote sensing and monitoring. Image acquisition with cameras mounted on a UAV can be performed at very high spatial resolution and high temporal frequency in the most dynamic environments. Combined with the Structure-from-Motion algorithm, the UAV-SfM framework is capable of providing digital surface models (DSMs) which are highly accurate when compared to other very-high-resolution topographic datasets and highly reproducible for repeated measurements over the same study area. In this study, we aim at assessing (1) the differential movement of the Earth's surface and (2) the sediment budget of a complex earthflow located in the Central Swiss Alps, based on three topographic datasets acquired over a period of 2 years. For three time steps, we acquired aerial photographs with a standard reflex camera mounted on a low-cost and lightweight UAV. The image datasets were then processed with the Structure-from-Motion algorithm in order to reconstruct a 3D dense point cloud representing the topography. Georeferencing of the outputs was achieved using ground control points (GCPs) previously surveyed in the field with an RTK GPS. Finally, digital elevation models of difference (DoD) were computed to assess the topographic changes between the three acquisition dates, while surface displacements were quantified using image correlation techniques. Our results show that the digital elevation model of difference is able to capture surface deformation at cm-scale resolution. The mean annual displacement of the earthflow is about 3.6 m, while the forefront of the landslide has advanced by ca. 30 meters over a period of 18 months. The 4D analysis permits identification of the direction and velocity of Earth-surface movement. Stable topographic ridges condition the direction of the flow, with the highest downslope movement on steep slopes, and diffuse
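
    The change analysis described above combines a DEM of difference with image-correlation displacement estimates. The sketch below assumes NumPy and scikit-image, co-registered grids, and an illustrative 5 cm noise threshold; whole-patch phase correlation is a simplification of the dense image-correlation techniques used in the study.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def dem_of_difference(dsm_t0, dsm_t1, threshold=0.05):
    """DEM of difference between two co-registered DSMs; changes smaller
    than `threshold` (m) are masked as noise (value is illustrative)."""
    dod = dsm_t1 - dsm_t0
    return np.where(np.abs(dod) >= threshold, dod, np.nan)

def surface_displacement(ortho_t0, ortho_t1, cell_size):
    """Horizontal displacement (m) of a whole image patch estimated by
    sub-pixel phase correlation between two co-registered orthophoto patches."""
    shift, _, _ = phase_cross_correlation(ortho_t0, ortho_t1, upsample_factor=10)
    dy, dx = shift                      # shift is returned as (row, col)
    return dx * cell_size, dy * cell_size
```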

  14. UAV-based Natural Hazard Management in High-Alpine Terrain - Case Studies from Austria

    NASA Astrophysics Data System (ADS)

    Sotier, Bernadette; Adams, Marc; Lechner, Veronika

    2015-04-01

    Unmanned Aerial Vehicles (UAV) have become a standard tool for geodata collection, as they allow conducting on-demand mapping missions in a flexible, cost-effective manner at an unprecedented level of detail. Easy-to-use, high-performance image matching software makes it possible to process the collected aerial images to orthophotos and 3D terrain models. Such up-to-date geodata have proven to be an important asset in natural hazard management: processes like debris flows, avalanches, landslides, fluvial erosion and rock-fall can be detected and quantified; damages can be documented and evaluated. In the Alps, these processes mostly originate in remote areas, which are difficult and hazardous to access, thus presenting a challenging task for RPAS data collection. In particular, the problems include finding suitable landing and piloting places, dealing with poor or absent GPS signals, and the installation of ground control points (GCP) for georeferencing. At the BFW, RPAS have been used since 2012 to aid natural hazard management of various processes, of which three case studies are presented below. The first case study deals with the results of an attempt to employ UAV-based multi-spectral remote sensing to monitor the state of natural hazard protection forests. Images in the visible and near-infrared (NIR) bands were collected using modified low-cost cameras combined with different optical filters. Several UAV flights were performed in 2014 in the 72 ha study site, which lies in the Wattental, Tyrol (Austria) between 1700 and 2050 m a.s.l., where the main tree species are stone pine and mountain pine. The matched aerial images were analysed using different UAV-specific vitality indices, evaluating both single- and dual-camera UAV missions. To calculate the mass balance of a debris flow in the Tyrolean Halltal (Austria), an RPAS flight was conducted in autumn 2012. The extreme alpine environment was challenging for both the mission and the evaluation of the aerial

  15. Comparison of DSMs acquired by terrestrial laser scanning, UAV-based aerial images and ground-based optical images at the Super-Sauze landslide

    NASA Astrophysics Data System (ADS)

    Rothmund, Sabrina; Niethammer, Uwe; Walter, Marco; Joswig, Manfred

    2013-04-01

    In recent years, the high-resolution and multi-temporal 3D mapping of the Earth's surface using terrestrial laser scanning (TLS), ground-based optical images and especially low-cost UAV-based (Unmanned Aerial Vehicle) aerial images has grown in importance. This development resulted from the progressive technical improvement of the imaging systems and the freely available multi-view stereo (MVS) software packages. These different methods of data acquisition for the generation of accurate, high-resolution digital surface models (DSMs) were applied as part of an eight-week field campaign at the Super-Sauze landslide (South French Alps). An area of approximately 10,000 m² with long-term average displacement rates greater than 0.01 m/day has been investigated. The TLS-based point clouds were acquired at different viewpoints with an average point spacing between 10 and 40 mm and at different dates. On these days, more than 50 optical images were taken with a low-cost digital compact camera at points along a predefined line on the side of the landslide. Additionally, aerial images were taken by a radio-controlled mini quad-rotor UAV equipped with another low-cost digital compact camera. The flight altitude ranged between 20 m and 250 m and produced a corresponding ground resolution between 0.6 cm and 7 cm. DGPS measurements were carried out as well in order to geo-reference and validate the point cloud data. To generate unscaled photogrammetric 3D point clouds from a disordered and tilted image set, we use the widely adopted open-source software packages Bundler and PMVS2 (University of Washington). These multi-temporal DSMs are required on the one hand to determine the three-dimensional surface deformations and on the other hand for the differential correction needed for orthophoto production. Drawing on the example of the data acquired at the Super-Sauze landslide, we demonstrate the potential but also the limitations of the photogrammetric point clouds. To

  16. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via the internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from the change of the user's viewing direction to the change of the displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.

  17. 10000 pixels wide CMOS frame imager for earth observation from a HALE UAV

    NASA Astrophysics Data System (ADS)

    Delauré, B.; Livens, S.; Everaerts, J.; Kleihorst, R.; Schippers, Gert; de Wit, Yannick; Compiet, John; Banachowicz, Bartosz

    2009-09-01

    MEDUSA is a lightweight high resolution camera, designed to be operated from a solar-powered Unmanned Aerial Vehicle (UAV) flying at stratospheric altitudes. The instrument is a technology demonstrator within the Pegasus program and targets applications such as crisis management and cartography. A special wide swath CMOS imager has been developed by Cypress Semiconductor Corporation Belgium to meet the specific sensor requirements of MEDUSA. The CMOS sensor has a stitched design comprising a panchromatic and a color sensor on the same die. Each sensor consists of 10000*1200 square pixels (5.5 μm size, novel 6T architecture) with micro-lenses. The exposure is performed by means of a high efficiency snapshot shutter. The sensor is able to operate at a rate of 30 fps in full frame readout. Due to a novel pixel design, the sensor has low dark leakage of the memory elements (PSNL) and low parasitic light sensitivity (PLS). Still it maintains a relatively high QE (quantum efficiency) and a fill factor (FF) of over 65%. It features an MTF (Modulation Transfer Function) higher than 60% at the Nyquist frequency in both the X and Y directions. The measured optical/electrical crosstalk (expressed as MTF) of this 5.5 μm pixel is state-of-the-art. These properties make it possible to acquire sharp images even in low-light conditions.

  18. Mission planning optimization of video satellite for ground multi-object staring imaging

    NASA Astrophysics Data System (ADS)

    Cui, Kaikai; Xiang, Junhua; Zhang, Yulin

    2018-03-01

    This study investigates the emergency scheduling problem of ground multi-object staring imaging for a single video satellite. In the proposed mission scenario, the ground objects require a specified duration of staring imaging by the video satellite. The planning horizon is not long, i.e., it is usually shorter than one orbit period. A binary decision variable and the imaging order are used as the design variables, and the total observation revenue combined with the influence of the total attitude maneuvering time is regarded as the optimization objective. Based on the constraints of the observation time windows, satellite attitude adjustment time, and satellite maneuverability, a constraint satisfaction mission planning model is established for ground object staring imaging by a single video satellite. Further, a modified ant colony optimization algorithm with tabu lists (Tabu-ACO) is designed to solve this problem. The proposed algorithm can fully exploit the intelligence and local search ability of ACO. Based on full consideration of the mission characteristics, the design of the tabu lists can reduce the search range of ACO and improve the algorithm efficiency significantly. The simulation results show that the proposed algorithm outperforms the conventional algorithm in terms of optimization performance, and it can obtain satisfactory scheduling results for the mission planning problem.
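
    The exact revenue and maneuvering-time terms of the model are not given in the abstract; the sketch below shows, under assumed definitions (a constant slew time, a linear penalty weight, and hypothetical time windows, durations and revenues), how one candidate staring sequence could be scored and checked for time-window feasibility before being fed to a metaheuristic such as Tabu-ACO.

      from typing import List, Tuple

      def evaluate_sequence(order: List[int],
                            windows: List[Tuple[float, float]],  # (start, end) visibility window per object [s]
                            durations: List[float],              # required staring time per object [s]
                            revenues: List[float],               # revenue per observed object
                            slew_time: float = 20.0,             # assumed constant attitude maneuver time [s]
                            weight: float = 0.1):                # assumed weight of total maneuvering time
          """Score one candidate imaging order for a single video satellite.

          Returns (objective, feasible): the objective is total revenue minus a
          weighted penalty for total attitude-maneuvering time; a sequence is
          infeasible if any observation cannot fit inside its time window.
          """
          t = 0.0
          total_slew = 0.0
          revenue = 0.0
          for k, obj in enumerate(order):
              if k > 0:
                  t += slew_time            # maneuver from the previous target
                  total_slew += slew_time
              start, end = windows[obj]
              t = max(t, start)             # wait for the window to open if needed
              if t + durations[obj] > end:  # staring cannot be completed in the window
                  return float("-inf"), False
              t += durations[obj]
              revenue += revenues[obj]
          return revenue - weight * total_slew, True

      # toy instance with three ground objects
      windows = [(0, 300), (100, 400), (250, 600)]
      durations = [60, 60, 90]
      revenues = [1.0, 2.0, 1.5]
      print(evaluate_sequence([0, 1, 2], windows, durations, revenues))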

  19. A Programmable SDN+NFV Architecture for UAV Telemetry Monitoring

    NASA Technical Reports Server (NTRS)

    White, Kyle J. S.; Pezaros, Dimitrios P.; Denney, Ewen; Knudson, Matt D.

    2017-01-01

    With the explosive growth in UAV numbers forecast worldwide, a core concern is how to manage the ad-hoc network configuration required for mobility management. As UAVs migrate among ground control stations, associated network services, routing and operational control must also rapidly migrate to ensure a seamless transition. In this paper, we present a novel, lightweight and modular architecture which supports high mobility, resilience and flexibility through the application of SDN and NFV principles on top of the UAV infrastructure. By combining SDN programmability and Network Function Virtualization we can achieve resilient infrastructure migration of network services, such as network monitoring and anomaly detection, coupled with migrating UAVs to enable high mobility management. Our container-based monitoring and anomaly detection Network Functions (NFs) can be tuned to specific UAV models providing operators better insight during live, high-mobility deployments. We evaluate our architecture against telemetry from over 80 flights from a scientific research UAV infrastructure.

  20. a Micro-Uav with the Capability of Direct Georeferencing

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Mabillard, R.; Skaloud, J.

    2013-08-01

    This paper presents the development of a low cost UAV (Unmanned Aerial Vehicle) with the capability of direct georeferencing. The advantage of such a system lies in its high maneuverability and operational flexibility, as well as its capability to acquire image data without the need of establishing ground control points (GCPs). Moreover, the precise georeferencing offers an improvement in the final mapping accuracy when employing integrated sensor orientation. Such a mode of operation limits the number and distribution of GCPs, which in turn saves time in their signalization and surveying. Although UAV systems feature high flexibility and the capability of flying into areas that are inhospitable or inaccessible to humans, the lack of precision in on-board position and attitude estimation decreases the value of the captured imagery and limits their mode of operation to specific configurations and the need for ground reference. Within the scope of this study we show the potential of present technologies in the field of position and orientation determination on a small UAV. The hardware implementation and especially the non-trivial synchronization of all components is clarified. Thanks to the implementation of a multi-frequency, low power GNSS receiver and its coupling with a redundant MEMS-IMU, we can attain the characteristics of much larger systems flown on large carriers while keeping the sensor size and weight suitable for MAV operations.

  1. Multiple UAV Cooperation for Wildfire Monitoring

    NASA Astrophysics Data System (ADS)

    Lin, Zhongjie

    Wildfires have been a major factor in the development and management of the world's forests. An accurate assessment of wildfire status is imperative for fire management. This thesis is dedicated to the topic of utilizing multiple unmanned aerial vehicles (UAVs) to cooperatively monitor a large-scale wildfire. This is achieved through estimation of the wildfire spreading situation based on on-line measurements and a cooperation strategy that ensures efficiency. First, based on an understanding of the physical characteristics of wildfire propagation behavior, a wildfire model and a Kalman filter-based method are proposed to estimate the wildfire rate of spread and the fire front contour profile. With the numerous on-line measurements from the on-board sensors of the UAVs, the proposed method allows a wildfire monitoring mission to benefit from on-line information updating, increased flexibility, and accurate estimation. An independent wildfire simulator is utilized to verify the effectiveness of the proposed method. Second, based on the filter analysis, the wildfire spreading situation and the vehicle dynamics, the influence of different UAV cooperation strategies on the overall mission performance is studied. The multi-UAV cooperation problem is formulated in a distributed network. A consensus-based method is proposed to help address the problem. The optimal cooperation strategy of the UAVs is obtained through mathematical analysis. The derived optimal cooperation strategy is then tested in an independent fire simulation environment to verify its effectiveness.
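
    The thesis' state vector and fire-spread model are not specified in the abstract; the following minimal sketch shows a constant-velocity Kalman filter that tracks fire-front position and rate of spread from noisy front-position measurements, as a stand-in for the filtering idea. The time step, noise levels and synthetic data are assumptions.

      import numpy as np

      def track_rate_of_spread(front_positions, dt=60.0, q=1e-4, r=25.0):
          """Minimal Kalman filter tracking fire-front position and rate of spread.

          State x = [position (m), rate of spread (m/s)]; measurements are noisy
          front positions (e.g. derived from UAV observations) every dt seconds.
          q is the process-noise intensity, r the measurement-noise variance.
          """
          F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
          H = np.array([[1.0, 0.0]])                # we observe position only
          Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                            [dt**2 / 2, dt]])
          R = np.array([[r]])

          x = np.array([front_positions[0], 0.0])   # initial state
          P = np.diag([r, 1.0])                     # initial covariance
          estimates = []
          for z in front_positions[1:]:
              # predict
              x = F @ x
              P = F @ P @ F.T + Q
              # update with the new front-position measurement
              y = np.array([z]) - H @ x
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x = x + (K @ y).ravel()
              P = (np.eye(2) - K @ H) @ P
              estimates.append(x.copy())
          return np.array(estimates)

      # synthetic front advancing at ~0.05 m/s, observed with 5 m noise
      rng = np.random.default_rng(0)
      truth = 0.05 * 60.0 * np.arange(30)
      meas = truth + rng.normal(0, 5, size=truth.size)
      print(track_rate_of_spread(meas)[-1])   # last [position, rate-of-spread] estimate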

  2. Assessing the Content of YouTube Videos in Educating Patients Regarding Common Imaging Examinations.

    PubMed

    Rosenkrantz, Andrew B; Won, Eugene; Doshi, Ankur M

    2016-12-01

    To assess the content of currently available YouTube videos seeking to educate patients regarding commonly performed imaging examinations. After initial testing of possible search terms, the first two pages of YouTube search results for "CT scan," "MRI," "ultrasound patient," "PET scan," and "mammogram" were reviewed to identify educational patient videos created by health organizations. Sixty-three included videos were viewed and assessed for a range of features. Average views per video were highest for MRI (293,362) and mammography (151,664). Twenty-seven percent of videos used a nontraditional format (eg, animation, song, humor). All videos (100.0%) depicted a patient undergoing the examination, 84.1% a technologist, and 20.6% a radiologist; 69.8% mentioned examination lengths, 65.1% potential pain/discomfort, 41.3% potential radiation, 36.5% a radiology report/results, 27.0% the radiologist's role in interpretation, and 13.3% laboratory work. For CT, 68.8% mentioned intravenous contrast and 37.5% mentioned contrast safety. For MRI, 93.8% mentioned claustrophobia, 87.5% noise, 75.0% need to sit still, 68.8% metal safety, 50.0% intravenous contrast, and 0.0% contrast safety. For ultrasound, 85.7% mentioned use of gel. For PET, 92.3% mentioned radiotracer injection, 61.5% fasting, and 46.2% diabetic precautions. For mammography, unrobing, avoiding deodorant, and possible additional images were all mentioned by 63.6%; dense breasts were mentioned by 0.0%. Educational patient videos on YouTube regarding common imaging examinations received high public interest and may provide a valuable patient resource. Videos most consistently provided information detailing the examination experience and less consistently provided safety information or described the presence and role of the radiologist. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  3. Formation Flight of Multiple UAVs via Onboard Sensor Information Sharing.

    PubMed

    Park, Chulwoo; Cho, Namhoon; Lee, Kyunghyun; Kim, Youdan

    2015-07-17

    To monitor large areas or simultaneously measure multiple points, multiple unmanned aerial vehicles (UAVs) must be flown in formation. To perform such flights, sensor information generated by each UAV should be shared via communications. Although a variety of studies have focused on the algorithms for formation flight, these studies have mainly demonstrated the performance of formation flight using numerical simulations or ground robots, which do not reflect the dynamic characteristics of UAVs. In this study, an onboard sensor information sharing system and formation flight algorithms for multiple UAVs are proposed. The communication delays of radiofrequency (RF) telemetry are analyzed to enable the implementation of the onboard sensor information sharing system. Using the sensor information sharing, the formation guidance law for multiple UAVs, which includes both a circular and close formation, is designed. The hardware system, which includes avionics and an airframe, is constructed for the proposed multi-UAV platform. A numerical simulation is performed to demonstrate the performance of the formation flight guidance and control system for multiple UAVs. Finally, a flight test is conducted to verify the proposed algorithm for the multi-UAV system.
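
    The abstract does not give the guidance law itself; the sketch below shows a generic first-order consensus iteration over a communication graph, as one hedged illustration of how sensor information shared over RF telemetry can drive agreement on a common formation variable. The topology, step size and states are invented for the example.

      import numpy as np

      def consensus_step(states, neighbors, eps=0.2):
          """One synchronous first-order consensus update.

          states    : (N, d) array, one shared variable (e.g. formation centre) per UAV
          neighbors : list of neighbor-index lists describing the communication graph
          eps       : step size; must be smaller than 1/max_degree for stability
          """
          new_states = states.copy()
          for i, nbrs in enumerate(neighbors):
              for j in nbrs:
                  # each UAV nudges its estimate toward its neighbors' shared values
                  new_states[i] += eps * (states[j] - states[i])
          return new_states

      # three UAVs in a line topology converging on a common 2D reference point
      states = np.array([[0.0, 0.0], [10.0, 4.0], [20.0, -2.0]])
      neighbors = [[1], [0, 2], [1]]
      for _ in range(50):
          states = consensus_step(states, neighbors)
      print(states)   # all rows approach the average of the initial states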

  4. Formation Flight of Multiple UAVs via Onboard Sensor Information Sharing

    PubMed Central

    Park, Chulwoo; Cho, Namhoon; Lee, Kyunghyun; Kim, Youdan

    2015-01-01

    To monitor large areas or simultaneously measure multiple points, multiple unmanned aerial vehicles (UAVs) must be flown in formation. To perform such flights, sensor information generated by each UAV should be shared via communications. Although a variety of studies have focused on the algorithms for formation flight, these studies have mainly demonstrated the performance of formation flight using numerical simulations or ground robots, which do not reflect the dynamic characteristics of UAVs. In this study, an onboard sensor information sharing system and formation flight algorithms for multiple UAVs are proposed. The communication delays of radiofrequency (RF) telemetry are analyzed to enable the implementation of the onboard sensor information sharing system. Using the sensor information sharing, the formation guidance law for multiple UAVs, which includes both a circular and close formation, is designed. The hardware system, which includes avionics and an airframe, is constructed for the proposed multi-UAV platform. A numerical simulation is performed to demonstrate the performance of the formation flight guidance and control system for multiple UAVs. Finally, a flight test is conducted to verify the proposed algorithm for the multi-UAV system. PMID:26193281

  5. Long-term monitoring of a large landslide by using an Unmanned Aerial Vehicle (UAV)

    NASA Astrophysics Data System (ADS)

    Lindner, Gerald; Schraml, Klaus; Mansberger, Reinfried; Hübl, Johannes

    2015-04-01

    UAVs are currently becoming more and more important in various scientific areas, including forestry, precision farming, archaeology and hydrology. Using these drones in natural hazards research enables a completely new level of data acquisition that is flexible in terms of site and time, cost-efficient, and capable of arbitrary spatial resolution. In this study, a rotary-wing mini-UAV carrying a DSLR camera was used to acquire time series of overlapping aerial images. These photographs were taken as input to extract Digital Surface Models (DSM) as well as orthophotos of the area of interest. The "Pechgraben" area in Upper Austria has a catchment area of approximately 2 km². The geology is mainly dominated by limestone and sandstone. Caused by heavy rainfalls in the late spring of 2013, an area of about 70 ha began to move towards the village in the valley. In addition to the urgent measures, the slow-moving landslide was monitored approximately every month over a period of more than 18 months, resulting in a detailed documentation of the change process. Movement velocities and height differences were quantified and validated using a dense network of Ground Control Points (GCP). For further analysis, 14 image flights with a total of 10,000 photographs were performed to create multi-temporal geodata at sub-decimeter resolution for two selected areas of the landslide. Using a UAV for this application proved to be an excellent choice, as it allows short repetition times, low flying heights and high spatial resolution. Furthermore, the UAV operates almost independently of the weather and highly autonomously. High-quality results can be expected within a few hours after the photo flight. The UAV system performs very well in an alpine environment. Time series of the assessed geodata detect changes in topography and provide a long-term documentation of the measures taken in order to stop the landslide and to prevent damage to infrastructure.

  6. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    NASA Astrophysics Data System (ADS)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video

  7. UAV-borne lidar with MEMS mirror-based scanning capability

    NASA Astrophysics Data System (ADS)

    Kasturi, Abhishek; Milanovic, Veljko; Atwood, Bryan H.; Yang, James

    2016-05-01

    Firstly, we demonstrated a wirelessly controlled MEMS scan module with imaging and laser tracking capability which can be mounted and flown on a small UAV quadcopter. The MEMS scan module was reduced down to a small volume of <90 mm x 60 mm x 40 mm, weighing less than 40 g and consuming less than 750 mW of power using a ~5 mW laser. This MEMS scan module was controlled by a smartphone via Bluetooth while flying on a drone, and could project vector content and text, and perform laser-based tracking. Also, a "point-and-range" LiDAR module was developed for UAV applications based on low SWaP (Size, Weight and Power) gimbal-less MEMS mirror beam-steering technology and off-the-shelf OEM LRF modules. For demonstration purposes of an integrated laser range finder module, we used a simple off-the-shelf OEM laser range finder (LRF) with a 100 m range, +/-1.5 mm accuracy, and 4 Hz ranging capability. The LRF's receiver optics were modified to accept 20° of angle, matching the transmitter's FoR. A relatively large (5.0 mm) diameter MEMS mirror with a +/-10° optical scanning angle was utilized in the demonstration to maintain the small beam divergence of the module. The complete LiDAR prototype can fit into a small volume of <70 mm x 60 mm x 60 mm, and weighs <50 g when powered by the UAV's battery. The MEMS mirror based LiDAR system allows for on-demand ranging of points or areas within the FoR without altering the UAV's position. Increasing the LRF ranging frequency and stabilizing the pointing of the laser beam by utilizing the on-board inertial sensors and the camera are additional goals of the next design.

  8. Beach Volume Change Using Uav Photogrammetry Songjung Beach, Korea

    NASA Astrophysics Data System (ADS)

    Yoo, C. I.; Oh, T. S.

    2016-06-01

    A natural beach is controlled by many factors related to wave and tidal forces, wind, sediment, and initial topography. For this reason, if numerous topographic data of a beach are accurately collected, coastal erosion and accretion can be assessed and clarified. Generally, however, many studies on coastal erosion are limited to partial areas such as the shoreline (horizontal 2D) or beach profiles (vertical 2D) rather than the whole beach, owing to the limitations of numerical simulation. Comprehensive 3D topographic data are an important asset for the prevention of coastal erosion, and UAV photogrammetry can provide such data. This paper analyses the use of unmanned aerial vehicles (UAV) for 3D mapping and the estimation of beach volume change. A UAV (quadcopter) equipped with a non-metric camera was used to acquire images of Songjung beach, which is located on the south-east coast of the Korean peninsula. The dynamics of the beach topography, its geometric properties and estimates of eroded and deposited sand volumes were determined by combining the elevation data with quarterly RTK-VRS measurements. To explore the new possibilities for assessment of coastal change, we have developed a methodology for 3D analysis of coastal topography evolution based on existing high resolution elevation data combined with low-cost UAV and on-ground RTK-VRS surveys. DSMs were obtained by stereo-matching using Agisoft PhotoScan. Using GCPs, the vertical accuracy of the DSMs was found to be 10 cm or better. The resulting datasets were integrated in a local coordinate system and the method proved to be a very useful tool for the detection of areas where coastal erosion occurs and for the quantification of beach change. The value of such analysis is illustrated by applications to coastal sites in South Korea that face significant management challenges.
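
    As an illustration of the volume-change step, the sketch below differences two co-registered DSMs and sums deposition and erosion volumes, using the ~10 cm vertical accuracy reported above as a level of detection; the grid, cell area and toy values are assumptions, not the study's data.

      import numpy as np

      def beach_volume_change(dsm_new, dsm_old, cell_area=0.25, lod=0.10):
          """Estimate deposited and eroded sand volumes from two co-registered DSMs.

          cell_area : area of one grid cell in m^2 (e.g. 0.5 m x 0.5 m = 0.25 m^2)
          lod       : level of detection in m; changes below the DSM vertical
                      accuracy (here ~0.10 m) are ignored.
          Returns (deposition_m3, erosion_m3, net_m3).
          """
          dz = dsm_new - dsm_old
          dz[np.abs(dz) < lod] = 0.0            # suppress changes within the noise band
          deposition = dz[dz > 0].sum() * cell_area
          erosion = -dz[dz < 0].sum() * cell_area
          return deposition, erosion, deposition - erosion

      # toy 2x2 example: one cell accretes 0.3 m, one erodes 0.2 m
      old = np.zeros((2, 2))
      new = np.array([[0.3, 0.05], [-0.2, 0.0]])
      print(beach_volume_change(new, old, cell_area=1.0))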

  9. Wetland Vegetation Integrity Assessment with Low Altitude Multispectral Uav Imagery

    NASA Astrophysics Data System (ADS)

    Boon, M. A.; Tesfamichael, S.

    2017-08-01

    Until recently, multispectral sensors were too heavy and bulky for use on Unmanned Aerial Vehicles (UAVs), but this has changed and they are now commercially available. The usage of these sensors is mostly directed towards the agricultural sector, where the focus is on precision farming. Applications of these sensors for mapping wetland ecosystems are rare. Here, we evaluate the performance of low altitude multispectral UAV imagery in determining the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the determination of the integrity of the wetland vegetation. Furthermore, we tested different software applications for the processing of the imagery. The advantages and disadvantages we experienced with these applications are also briefly presented in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn over a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features and to derive a high resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland's structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high resolution multispectral orthoimages and NDVI. The WET-Health vegetation module, completed with the aid of the multispectral UAV products, indicated that the vegetation of the wetland is largely modified ("D" PES Category) and that the condition is expected to
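
    As a small illustration of the NDVI-based mapping step, the following sketch computes NDVI from NIR and red bands and bins it into simple classes; the thresholds and pixel values are purely illustrative and are not the WET-Health disturbance categories.

      import numpy as np

      def ndvi(nir, red, eps=1e-6):
          """Normalized Difference Vegetation Index from NIR and red reflectance bands."""
          nir = nir.astype(float)
          red = red.astype(float)
          return (nir - red) / (nir + red + eps)

      def disturbance_map(ndvi_img, thresholds=(0.2, 0.4, 0.6)):
          """Bin NDVI into illustrative classes: 0 = bare/heavily disturbed,
          1 = degraded, 2 = moderately vegetated, 3 = dense vegetation."""
          return np.digitize(ndvi_img, bins=np.array(thresholds))

      # toy 2x2 reflectance patches
      nir = np.array([[0.45, 0.30], [0.10, 0.55]])
      red = np.array([[0.10, 0.20], [0.09, 0.05]])
      print(disturbance_map(ndvi(nir, red)))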

  10. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
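
    The paper does not state which feature detector was used for the correspondence counts; the sketch below, assuming OpenCV is available, uses ORB keypoints as one plausible choice for counting matches between consecutive tone-mapped frames. The file name in the usage comment is hypothetical.

      import cv2

      def count_correspondences(frame_a, frame_b, n_features=1000):
          """Count feature correspondences between two consecutive grayscale frames.

          Uses ORB keypoints and brute-force Hamming matching with cross-checking;
          the number of surviving matches is a simple proxy for how well image
          analysis (tracking, registration) would work on the processed stream.
          """
          orb = cv2.ORB_create(nfeatures=n_features)
          kp_a, des_a = orb.detectAndCompute(frame_a, None)
          kp_b, des_b = orb.detectAndCompute(frame_b, None)
          if des_a is None or des_b is None:
              return 0
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(des_a, des_b)
          return len(matches)

      # usage: frames read from a recorded sequence and converted to grayscale, e.g.
      # cap = cv2.VideoCapture("sequence.avi"); ...; print(count_correspondences(prev, curr))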

  11. Lightweight UAV with on-board photogrammetry and single-frequency GPS positioning for metrology applications

    NASA Astrophysics Data System (ADS)

    Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y.; Martin, O.

    2017-05-01

    This article presents a coupled system consisting of a single-frequency GPS receiver and a light photogrammetric quality camera embedded in an Unmanned Aerial Vehicle (UAV). The aim is to produce high quality data that can be used in metrology applications. The issue of Integrated Sensor Orientation (ISO) of camera poses using only GPS measurements is presented and discussed. The accuracy reached by our system based on sensors developed at the French Mapping Agency (IGN) Opto-Electronics, Instrumentation and Metrology Laboratory (LOEMI) is qualified. These sensors are specially designed for close-range aerial image acquisition with a UAV. Lever-arm calibration and time synchronization are explained and performed to reach maximum accuracy. All processing steps are detailed from data acquisition to quality control of final products. We show that an accuracy of a few centimeters can be reached with this system which uses low-cost UAV and GPS module coupled with the IGN-LOEMI home-made camera.

  12. Millimeter-wave micro-Doppler measurements of small UAVs

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Robertson, Duncan A.

    2017-05-01

    This paper discusses the micro-Doppler signatures of small UAVs obtained from a millimeter-wave radar system. At first, simulation results are shown to demonstrate the theoretical concept. It is illustrated that whilst the propeller rotation rate of the small UAVs is quite high, millimeter-wave radar systems are capable of capturing the full micro-Doppler spread. Measurements of small UAVs have been performed with both CW and FMCW radars operating at 94 GHz. The CW radar was used for obtaining micro-Doppler signatures of individual propellers. The field test data of a flying small UAV was collected with the FMCW radar and was processed to extract micro-Doppler signatures. The high fidelity results clearly reveal features such as blade flashes and propeller rotation modulation lines which can be used to classify targets. This work confirms that millimeter-wave radar is suitable for the detection and classification of small UAVs at usefully long ranges.
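
    As a hedged illustration of why a millimeter-wave system can capture the full micro-Doppler spread, the sketch below simulates the echo of a single rotating blade-tip scatterer and forms a spectrogram with an STFT; the carrier, sampling rate, rotation rate and blade radius are assumed values, not the paper's radar parameters.

      import numpy as np
      from scipy.signal import stft

      # Simplified micro-Doppler simulation of one blade-tip scatterer rotating
      # in a plane containing the radar line of sight (all values are assumptions).
      fc = 94e9                      # carrier frequency [Hz]
      wavelength = 3e8 / fc
      fs = 200e3                     # effective sampling rate of the slow-time signal [Hz]
      rot_rate = 100.0               # propeller rotation rate [rev/s]
      blade_radius = 0.12            # blade-tip radius [m]
      t = np.arange(0, 0.02, 1 / fs)

      # instantaneous radial displacement of the blade tip and resulting echo phase
      radial = blade_radius * np.cos(2 * np.pi * rot_rate * t)
      echo = np.exp(-1j * 4 * np.pi * radial / wavelength)

      # short-time Fourier transform -> micro-Doppler spectrogram (two-sided for complex data)
      f, tt, Z = stft(echo, fs=fs, nperseg=256, noverlap=192, return_onesided=False)
      spectrogram_db = 20 * np.log10(np.abs(Z) + 1e-12)
      print(spectrogram_db.shape)    # frequency bins x time frames

      # Maximum Doppler excursion: 2 * v_tip / wavelength, v_tip = 2*pi*rot_rate*blade_radius
      print(2 * 2 * np.pi * rot_rate * blade_radius / wavelength)   # ~47 kHz, within +/- fs/2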

  13. Assessing UAV platform types and optical sensor specifications

    NASA Astrophysics Data System (ADS)

    Altena, B.; Goedemé, T.

    2014-05-01

    Photogrammetric acquisition with unmanned aerial vehicles (UAV) has grown extensively over the last couple of years. Such mobile platforms and their processing software have matured, resulting in a market which offers off-the-shelf mapping solutions to surveying companies and geospatial enterprises. Different approaches in platform type and optical instruments exist, though their resulting products have similar specifications. To demonstrate differences in acquisition practice, a case study over an open mine was flown with two different off-the-shelf UAVs (a fixed-wing and a multi-rotor). The resulting imagery is analyzed to clarify the differences in collection quality. We look at image settings, and stress the importance of photographic experience if manual settings are applied. For mapping production it might be safest to set the camera to automatic. Furthermore, we try to estimate whether blur is present due to image motion. A subtle trend seems to be present for the fast-flying platform, though its extent is of a similar order to that of the slow-moving one. This shows that both systems operate at their limits. Finally, the lens distortion is assessed with special attention to chromatic aberration. Here we see through calibration that such aberrations can be present; however, detecting this phenomenon directly in the imagery is not straightforward. For such effects a normal lens is sufficient, though a better lens and collimator do give a significant improvement.

  14. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas.

    PubMed

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-11-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red-green-blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%-50% and 70%-40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance between all the proposed parameters is useful for optimizing mission planning and image processing, with altitude above ground level (AGL) being the main parameter because of its influence on the root mean square error (RMSE).
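
    As a worked illustration of the GSD/RMSE ratio idea, the following sketch computes the nadir ground sample distance for the flight altitudes listed above; the 20 mm focal length, 4.8 µm pixel pitch and the ratio of 2 are assumed values for illustration, not those of the study.

      def ground_sample_distance(altitude_agl_m, focal_length_mm, pixel_size_um):
          """Ground sample distance (m/pixel) for a nadir image:
          GSD = pixel_size * altitude / focal_length (in consistent units)."""
          return (pixel_size_um * 1e-6) * altitude_agl_m / (focal_length_mm * 1e-3)

      def expected_rmse(gsd_m, ratio):
          """Expected planimetric RMSE of the orthomosaic given an empirical
          RMSE/GSD ratio (the ratio itself must come from calibration flights)."""
          return ratio * gsd_m

      # illustrative values only: 20 mm lens, 4.8 um pixels, flights at 30-80 m AGL
      for agl in (30, 40, 50, 60, 70, 80):
          gsd = ground_sample_distance(agl, focal_length_mm=20.0, pixel_size_um=4.8)
          print(f"{agl:2d} m AGL -> GSD {gsd*100:.1f} cm, RMSE ~ {expected_rmse(gsd, 2.0)*100:.1f} cm (ratio 2 assumed)")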

  15. Searching Lost People with Uavs: the System and Results of the Close-Search Project

    NASA Astrophysics Data System (ADS)

    Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Skaloud, J.; Kornus, W.; Prades, R.; Aguilera, C.

    2012-07-01

    This paper will introduce the goals, concept and results of the project named CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost Search-And-Rescue (SAR) operations'. The main goal is to integrate a medium-size, helicopter-type Unmanned Aerial Vehicle (UAV), a thermal imaging sensor and an EGNOS-based multi-sensor navigation system, including an Autonomous Integrity Monitoring (AIM) capability, to support search operations in difficult-to-access areas and/or night operations. The focus of the paper is three-fold. Firstly, the operational and technical challenges of the proposed approach are discussed, such as the ultra-safe multi-sensor navigation system, the use of combined thermal and optical vision (infrared plus visible) for person recognition and Beyond-Line-Of-Sight communications, among others. Secondly, the implementation of the integrity concept for UAV platforms is discussed herein through the AIM approach. Based on the potential of the geodetic quality analysis and on the use of the European EGNOS system as a navigation performance starting point, AIM approaches integrity from the precision standpoint; that is, Horizontal and Vertical Protection Levels (HPLs, VPLs) are derived from a realistic precision estimation of the position parameters and compared to predefined Alert Limits (ALs). Finally, some results from the project test campaigns are described to report on particular project achievements. Together with actual Search-and-Rescue teams, the system was operated in realistic, user-chosen test scenarios. In this context, and especially focusing on the EGNOS-based UAV navigation, the AIM capability and also the RGB/thermal imaging subsystem, a summary of the results is presented.
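
    The project's actual AIM derivation is not reproduced in the abstract; the sketch below only illustrates the general precision-based idea of turning a position covariance into horizontal and vertical protection levels to compare against alert limits. The scale factors and covariance values are placeholders, not certified multipliers or project results.

      import numpy as np

      def protection_levels(cov_enu, k_h=6.0, k_v=5.33):
          """Precision-based horizontal/vertical protection levels (illustrative only).

          cov_enu : 3x3 position covariance (east, north, up) in m^2 from the
                    navigation filter.
          k_h,k_v : assumed scale factors, not the project's values.
          HPL is taken along the worst-case horizontal direction (largest
          eigenvalue of the 2x2 EN block), VPL from the up variance.
          """
          horiz = cov_enu[:2, :2]
          sigma_major = np.sqrt(np.max(np.linalg.eigvalsh(horiz)))
          sigma_up = np.sqrt(cov_enu[2, 2])
          return k_h * sigma_major, k_v * sigma_up

      cov = np.diag([0.8**2, 1.1**2, 1.9**2])     # toy 1-sigma values: 0.8/1.1/1.9 m
      print(protection_levels(cov))               # compare against predefined alert limits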

  16. Development of a bio-inspired UAV perching system

    NASA Astrophysics Data System (ADS)

    Xie, Pu

    Although the technologies of unmanned aerial vehicles (UAVs), including micro air vehicles (MAVs), have greatly advanced in recent years, it is still very difficult for a UAV to perform some very challenging tasks, such as perching on any desired spot reliably and agilely like a bird. Unlike UAVs, the biological control mechanism of birds has been optimized through millions of years of evolution; hence, they can perform many extreme maneuvers, such as perching or grasping, accurately and robustly. Therefore, we have good reason to learn from nature in order to significantly improve the capabilities of UAVs. The development of a UAV perching system is becoming feasible, especially after numerous research contributions in ornithology that analyse birds' functionalities. Meanwhile, technology is advancing in many engineering fields, such as airframes, propulsion, sensors, batteries and micro-electromechanical systems (MEMS), and UAV technology is advancing rapidly as well. All of these research efforts in ornithology and the fast-growing technologies in UAV applications are motivating further interest and development in the area of UAV perching and grasping research. During the last decade, research contributions on UAV perching and grasping were mainly based on fixed-wing, flapping-wing, and rotorcraft UAVs. However, most current research on UAV systems with perching and grasping capability focuses on either active (powered) grasping and perching or passive (unpowered) perching. Although birds do have both active and passive perching capabilities depending on their needs, there is no UAV perching system with both capabilities. In this project, we focused on filling this gap. Inspired by the anatomical analysis of bird legs and feet, a novel perching system has been developed to implement bionic action for both active grasping and passive perching. In addition, for developing a robust and

  17. Speed Approach for UAV Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.

    2018-05-01

    The article presents a new approach for detecting potential collisions of two or more UAVs in a common airspace. UAV trajectories are approximated from two or three trajectory points obtained from the ADS-B system. In the process of determining the meeting points of the trajectories, two cutoff values of the critical speed range, within which a UAV collision is possible, are calculated. As the expressions for the meeting points and the critical speed cutoffs are given in analytical form, the calculation time is far less than the ADS-B data update interval, even on an on-board computer system with limited computational capacity. For this reason, the calculations can be updated at each cycle of new data reception, and the trajectory approximation can be bounded by straight lines. Such an approach allows the development of a compact collision avoidance algorithm, even for a significant number of UAVs (several dozen or more). To prove the adequacy of the research, modeling was performed using a software system developed specifically for this purpose.
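
    The article's analytical expressions are not reproduced in the abstract; the sketch below shows one possible interpretation of the idea: intersect two straight-line trajectory approximations and bound the intruder speeds that would bring it to the meeting point while the own UAV is within an assumed safety window of that point. All coordinates and the safety_window parameter are hypothetical.

      import numpy as np

      def line_intersection(p0, p1, q0, q1):
          """Intersection of the straight lines through (p0, p1) and (q0, q1), or None if parallel."""
          d1, d2 = p1 - p0, q1 - q0
          A = np.array([d1, -d2]).T
          if abs(np.linalg.det(A)) < 1e-9:
              return None
          s, _ = np.linalg.solve(A, q0 - p0)
          return p0 + s * d1

      def critical_speed_range(own_pos, own_speed, intruder_pos, meet_pt, safety_window=5.0):
          """Speed interval [v_min, v_max] at which the intruder would reach the
          meeting point while our UAV is within +/- safety_window seconds of it."""
          t_own = np.linalg.norm(meet_pt - own_pos) / own_speed
          d_intruder = np.linalg.norm(meet_pt - intruder_pos)
          v_max = d_intruder / max(t_own - safety_window, 1e-6)
          v_min = d_intruder / (t_own + safety_window)
          return v_min, v_max

      # two trajectories reconstructed from successive ADS-B points (positions in metres)
      p0, p1 = np.array([0.0, 0.0]), np.array([100.0, 100.0])
      q0, q1 = np.array([0.0, 400.0]), np.array([100.0, 300.0])
      meet = line_intersection(p0, p1, q0, q1)
      print(meet, critical_speed_range(p1, own_speed=10.0, intruder_pos=q1, meet_pt=meet))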

  18. The Earth and Environmental Systems Podcast, and the Earth Explorations Video Series

    NASA Astrophysics Data System (ADS)

    Shorey, C. V.

    2015-12-01

    The Earth and Environmental Systems Podcast, a complete overview of the theoretical basics of Earth Science in 64 episodes, was completed in 2009, but has continued to serve the worldwide community as evidenced by listener feedback (e.g. "I am a 65 year old man. I have been retired for awhile and thought that retirement would be nothing more than waiting for the grave. However I want to thank you for your geo podcasts. They have given me a new lease on life and taught me a great deal." - FP, 2015). My current project is a video series on the practical basics of Earth Science titled "Earth Explorations". Each video is under 12 minutes long and tackles a major Earth Science concept. These videos go beyond a talking head, or even voice-over with static pictures or white-board graphics. Moving images are combined with animations created with Adobe After Effects, and aerial shots using a UAV. The dialog is scripted in a way to make it accessible at many levels, and the episodes as they currently stand have been used in K-12, and Freshman college levels with success. Though these videos are made to be used at this introductory level, they are also designed as remedial episodes for upper level classes, freeing up time given to review for new content. When completed, the series should contain close to 200 episodes, and this talk will cover the full range of resources I have produced, plan to produce, and how to access these resources. Both resources are available on iTunesU, and the videos are also available on YouTube.

  19. Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery

    PubMed Central

    Zhao, Yi; Ma, Jiale; Li, Xiaohui

    2018-01-01

    An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) can provide directly georeferenced imagery, mapping an area with high resolution. So far, the major difficulty in wildfire image classification is the lack of unified identification marks: the fire features of color, shape and texture (smoke, flame, or both) and the background can vary significantly from one scene to another. Deep learning (e.g., DCNN for Deep Convolutional Neural Network) is very effective in high-level feature learning; however, a substantial training image dataset is required to optimize its weights and coefficients. In this work, we propose a new saliency detection algorithm for fast location and segmentation of the core fire area in aerial images. As the proposed method can effectively avoid the feature loss caused by direct resizing, it is used for data augmentation and the formation of a standard fire image dataset ‘UAV_Fire’. A 15-layered self-learning DCNN architecture named ‘Fire_Net’ is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with regard to its validation accuracy. The proposed architecture outperformed previous methods by achieving an overall accuracy of 98%. Furthermore, ‘Fire_Net’ achieved an average processing time of 41.5 ms per image, enabling real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 sampled images from wildfire news reports and all of them were accurately identified. PMID:29495504
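
    Fire_Net itself is a 15-layer architecture whose layers are not specified in the abstract; the sketch below is only a minimal PyTorch stand-in showing the general pattern of a convolutional fire/no-fire classifier, with the dropout ratio and batch size exposed as tuning parameters. The 64x64 input size and all layer widths are assumptions.

      import torch
      import torch.nn as nn

      # Minimal stand-in for a fire / no-fire image classifier (NOT the Fire_Net architecture).
      class TinyFireNet(nn.Module):
          def __init__(self, num_classes=2, dropout=0.5):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),                      # 64x64 -> 32x32
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),                      # 32x32 -> 16x16
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Dropout(dropout),                  # dropout ratio: a key tuning parameter
                  nn.Linear(32 * 16 * 16, num_classes),
              )

          def forward(self, x):
              return self.classifier(self.features(x))

      model = TinyFireNet()
      dummy_batch = torch.randn(8, 3, 64, 64)          # batch size is another tuning parameter
      logits = model(dummy_batch)
      print(logits.shape)                              # -> torch.Size([8, 2])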

  20. A video event trigger for high frame rate, high resolution video technology

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1991-12-01

    When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.

  1. A video event trigger for high frame rate, high resolution video technology

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1991-01-01

    When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
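
    The trigger described above is a hardware state machine with fuzzy logic devices; as a hedged software analogue only, the sketch below triggers on the mean absolute difference between consecutive grayscale frames and keeps a pre-trigger ring buffer. The threshold and buffer length are assumed parameters.

      from collections import deque
      import numpy as np

      def event_triggered_capture(frames, threshold=8.0, pretrigger=30):
          """Software analogue of a video event trigger.

          frames     : iterable of grayscale frames (2D uint8 arrays)
          threshold  : mean absolute frame-to-frame difference counted as "activity"
          pretrigger : number of frames kept in a ring buffer before the trigger fires
          Yields the buffered pre-trigger frames followed by post-trigger frames.
          """
          buffer = deque(maxlen=pretrigger)
          prev = None
          triggered = False
          for frame in frames:
              if triggered:
                  yield frame                                   # post-trigger storage
                  continue
              if prev is not None:
                  activity = np.mean(np.abs(frame.astype(np.int16) - prev.astype(np.int16)))
                  if activity > threshold:                      # onset of a video event
                      triggered = True
                      yield from buffer                         # flush pre-trigger frames
                      yield frame
                      continue
              buffer.append(frame)                              # redundant "static scene" frames
              prev = frame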

  2. Assessing UAVs in Monitoring Crop Evapotranspiration within a Heterogeneous Soil

    NASA Astrophysics Data System (ADS)

    Rouze, G.; Neely, H.; Morgan, C.; Kustas, W. P.; McKee, L.; Prueger, J. H.; Cope, D.; Yang, C.; Thomasson, A.; Jung, J.

    2017-12-01

    Airborne and satellite remote sensing methods have been developed to provide ET estimates across entire management fields. However, airborne-based ET is not particularly cost-effective and satellite-based ET provides insufficient spatial/temporal information. ET estimations through remote sensing are also problematic where soils are highly variable within a given management field. Unlike airborne/satellite-based ET, Unmanned Aerial Vehicle (UAV)-based ET has the potential to increase the spatial and temporal detail of these measurements, particularly within a heterogeneous soil landscape. However, it is unclear to what extent UAVs can model ET. The overall goal of this project was to assess the capability of UAVs in modeling ET across a heterogeneous landscape. Within a 20-ha irrigated cotton field in Central Texas, low-altitude UAV surveys were conducted throughout the growing season over two soil types. UAVs were equipped with thermal and multispectral cameras to obtain canopy temperature and NDVI, respectively. UAV data were supplemented simultaneously with ground-truth measurements such as Leaf Area Index (LAI) and plant height. Both remote sensing and ground-truth parameters were used to model ET using a Two-Source Energy Balance (TSEB) model. UAV-based estimations of ET and other energy balance components were validated against energy balance measurements obtained from nearby eddy covariance towers that were installed within each soil type. UAV-based ET fluxes were also compared with airborne and satellite (Landsat 8)-based ET fluxes collected near the time of the UAV survey.

  3. Research on UAV Intelligent Obstacle Avoidance Technology During Inspection of Transmission Line

    NASA Astrophysics Data System (ADS)

    Wei, Chuanhu; Zhang, Fei; Yin, Chaoyuan; Liu, Yue; Liu, Liang; Li, Zongyu; Wang, Wanguo

    Autonomous obstacle avoidance of unmanned aerial vehicles (hereinafter referred to as UAVs) during electric power line inspection is of great significance for the operational safety and economy of UAV-based intelligent transmission line inspection systems. In this paper, the principles of obstacle avoidance technology for UAV transmission line inspection are introduced. After common obstacle avoidance technologies are reviewed, an obstacle avoidance technology based on a global particle swarm optimization algorithm is proposed. A simulation comparison is carried out against the traditional UAV inspection obstacle avoidance technology based on the artificial potential field method. The results show that the particle swarm optimization inspection strategy adopted in this paper is markedly better than the artificial potential field strategy in terms of obstacle avoidance performance and the ability to return to the preset inspection track after passing the obstacle. An effective method is thus provided for UAV obstacle avoidance during transmission line inspection.
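
    The paper's cost function and constraints are not given here; the sketch below is a generic global-best PSO that places a single intermediate waypoint to minimize path length plus a penalty for approaching one assumed cylindrical obstacle. The geometry, PSO coefficients and penalty form are all illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      start, goal = np.array([0.0, 0.0]), np.array([100.0, 0.0])
      obstacle_c, obstacle_r = np.array([50.0, 0.0]), 10.0      # one assumed obstacle

      def cost(waypoint):
          """Path length of start -> waypoint -> goal plus a penalty if either leg
          passes too close to the obstacle centre (coarse endpoint/midpoint check)."""
          length = np.linalg.norm(waypoint - start) + np.linalg.norm(goal - waypoint)
          penalty = 0.0
          for a, b in ((start, waypoint), (waypoint, goal)):
              for p in (a, 0.5 * (a + b), b):
                  d = np.linalg.norm(p - obstacle_c)
                  if d < obstacle_r:
                      penalty += 1000.0 * (obstacle_r - d)
          return length + penalty

      # standard global-best PSO over the 2D waypoint
      n_particles, iters = 30, 100
      pos = rng.uniform([0, -40], [100, 40], size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_cost)].copy()

      w, c1, c2 = 0.7, 1.5, 1.5
      for _ in range(iters):
          r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          costs = np.array([cost(p) for p in pos])
          improved = costs < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
          gbest = pbest[np.argmin(pbest_cost)].copy()

      print("best intermediate waypoint:", gbest, "cost:", cost(gbest))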

  4. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing Unmanned Aerial Vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long range optical imaging module; (2) a large scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic accurate UAV landing in Global Positioning System (GPS)-denied environments.

  5. Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.

    PubMed

    Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) MIMO channel model for unmanned aerial vehicles (UAVs), a simple form of the UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By means of channel matrix decomposition and coefficient normalization, the analytic formula of the UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of the UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to comprehensively describe the changes of UAV-MIMO channel characteristics under different parameter settings. This analysis method provides a theoretical basis for improving the transmission performance of the UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication.
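
    The GBSBCM correlation model itself is not reproduced here; as a small illustration of the quantities it feeds into, the sketch below computes the condition number and the equal-power capacity C = log2 det(I + (SNR/Nt) H H^H) for toy 2x2 channel matrices. The example matrices and SNR are assumptions.

      import numpy as np

      def channel_metrics(H, snr_db=10.0):
          """Condition number and capacity of a MIMO channel matrix H (Nr x Nt).

          Capacity with equal power allocation (channel unknown at the transmitter):
              C = log2 det(I + (SNR/Nt) * H H^H)   [bits/s/Hz]
          """
          nr, nt = H.shape
          snr = 10 ** (snr_db / 10)
          cond = np.linalg.cond(H)
          gram = np.eye(nr) + (snr / nt) * H @ H.conj().T
          sign, logdet = np.linalg.slogdet(gram)
          return cond, logdet / np.log(2)

      # toy 2x2 channels: i.i.d. Rayleigh-like vs. strongly correlated (LOS-dominated)
      rng = np.random.default_rng(0)
      H_iid = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
      H_corr = np.array([[1.0, 0.95], [0.95, 1.0]], dtype=complex)
      print("i.i.d.:     cond=%.1f, C=%.2f bit/s/Hz" % channel_metrics(H_iid))
      print("correlated: cond=%.1f, C=%.2f bit/s/Hz" % channel_metrics(H_corr))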

  6. Reinvestigation and analysis a landslide dam event in 2012 using UAV

    NASA Astrophysics Data System (ADS)

    Wang, Kuo-Lung; Huang, Zji-Jie; Lin, Jun-Tin

    2015-04-01

    The geological conditions of Taiwan are highly fractured, as the island is located on the Pacific Rim seismic belt. Typhoons usually strike during summer, and the steep mountains are highly weathered, which induces landslides in mountainous areas. This situation has occurred more frequently in recent years due to climate change. Most landslides are very far away from residential areas. Field investigation is time-consuming, expensive and dangerous, and only limited data can be collected. Investigation with satellite images has disadvantages such as a limited view of the actual situation and poor resolution. Thus, the possibility of slope investigation with a UAV is proposed and discussed in this research. UAVs have been adopted for hazard investigation and monitoring in recent years. They have advantages such as light weight, small volume, high mobility, safety, easy maintenance and low cost, and investigations can be executed in high-risk areas. Mature aerial photogrammetry is used, combining aerial photos with control points. Digital surface models (DSM) and orthophotos can be produced once the control points are aligned. The resolution can be better than 5 cm, and the products can thus be used for temporal creep monitoring before a landslide happens. A large landslide site at the 75 km mark of road No. 14 was investigated in this research. The landslide happened in June 2012 during heavy rainfall, and a landslide dam formed quickly afterwards. The failure and mechanism of this landslide are analysed using DEMs produced prior to the event from aerial photos and after the event with the UAV. A residual slope stability analysis is then carried out with strength parameters obtained from the analysis described above. Advice on potential subsequent landslide conditions can thus be provided.

  7. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope.

    PubMed

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T C

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  8. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  9. Integrated long-range UAV/UGV collaborative target tracking

    NASA Astrophysics Data System (ADS)

    Moseley, Mark B.; Grocholsky, Benjamin P.; Cheung, Carol; Singh, Sanjiv

    2009-05-01

    Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain sensing and increase opportunities for improving line of sight communications. While numerous military missions would benefit from coordinated UAV-UGV operations, foundational capabilities that integrate stove-piped tactical systems and share available sensor data are required and not yet available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative capabilities for surveillance, targeting, and improved communications based on PackBot UGV and Raven UAV platforms. We integrate newly available technologies into computational, vision, and communications payloads and develop sensing algorithms to support vision-based target tracking. We first simulated and then applied onto real tactical platforms an implementation of Decentralized Data Fusion, a novel technique for fusing track estimates from PackBot and Raven platforms for a moving target in an open environment. In addition, system integration with AeroVironment's Digital Data Link onto both air and ground platforms has extended our capabilities in communications range to operate the PackBot as well as in increased video and data throughput. The system is brought together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides simultaneous waypoint navigation and traditional teleoperation. We also present several recent capability accomplishments toward PackBot-Raven coordinated operations, including single OCU display design and operation, early target track results, and Digital Data Link integration efforts, as well as our near-term capability goals.
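
    The abstract does not give the exact Decentralized Data Fusion equations, but a common building block for fusing track estimates whose cross-correlation is unknown is covariance intersection. The sketch below is a minimal, hedged illustration of that idea rather than the PackBot/Raven implementation; the Gaussian track means/covariances and the fixed weight omega are illustrative assumptions.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, omega=0.5):
    """Fuse two Gaussian track estimates (mean, covariance) without
    knowledge of their cross-correlation, as in decentralized fusion.
    omega in [0, 1] weights the two information matrices."""
    Pa_inv = np.linalg.inv(P_a)
    Pb_inv = np.linalg.inv(P_b)
    P_fused = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
    x_fused = P_fused @ (omega * Pa_inv @ x_a + (1.0 - omega) * Pb_inv @ x_b)
    return x_fused, P_fused

# Illustrative example: fuse a ground track (good in x) with an air track (good in y).
x_ugv, P_ugv = np.array([10.0, 5.0]), np.diag([0.5, 4.0])
x_uav, P_uav = np.array([10.5, 4.6]), np.diag([4.0, 0.5])
x, P = covariance_intersection(x_ugv, P_ugv, x_uav, P_uav)
```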

  10. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    USDA-ARS?s Scientific Manuscript database

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m2 rectangular dry lot, either in pairs (pilot tests) or individually (...

  11. Comparison of Uncalibrated Rgbvi with Spectrometer-Based Ndvi Derived from Uav Sensing Systems on Field Scale

    NASA Astrophysics Data System (ADS)

    Bareth, G.; Bolten, A.; Gnyp, M. L.; Reusch, S.; Jasper, J.

    2016-06-01

    The development of UAV-based sensing systems for agronomic applications serves to improve crop management. The latter is the focus of precision agriculture, which aims to optimize yield, fertilizer input, and crop protection. Moreover, in some cropping systems vehicle-based sensing devices are less suitable because fields cannot be entered from certain growth stages onwards; this is true for rice, maize, sorghum, and many other crops. Consequently, UAV-based sensing approaches fill a niche of very high resolution data acquisition at the field scale in space and time. While mounting RGB digital compact cameras on low-weight UAVs (< 5 kg) is well established, the miniaturization of sensors in recent years also enables hyperspectral data acquisition from those platforms. From both RGB and hyperspectral data, vegetation indices (VIs) are computed to estimate crop growth parameters. In this contribution, we compare two different sensing approaches from a low-weight UAV platform (< 5 kg) for monitoring a nitrogen field experiment on winter wheat and a corresponding farmers' field in Western Germany: (i) a standard digital compact camera was flown to acquire RGB images from which the RGBVI is computed, and (ii) the NDVI is computed from a newly modified version of the Yara N-Sensor. The latter is a well-established tractor-based hyperspectral sensor for crop management and has been available on the market for a decade; it was modified for this study to fit the requirements of UAV-based data acquisition. Consequently, we focus on three objectives in this contribution: (1) to evaluate the potential of the uncalibrated RGBVI for monitoring nitrogen status in winter wheat, (2) to investigate the UAV-based performance of the modified Yara N-Sensor, and (3) to compare the results of the two different UAV-based sensing approaches for winter wheat.
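
    For readers unfamiliar with the two indices compared here, the sketch below computes them from band arrays. The RGBVI formula (G^2 - R*B)/(G^2 + R*B) follows the definition used in the related RGBVI literature, and the NDVI is the standard (NIR - Red)/(NIR + Red); the small epsilon guarding against division by zero is an implementation assumption.

```python
import numpy as np

def rgbvi(r, g, b):
    """RGB vegetation index from (uncalibrated) digital numbers:
    RGBVI = (G^2 - R*B) / (G^2 + R*B)."""
    r, g, b = (np.asarray(x, dtype=float) for x in (r, g, b))
    return (g**2 - r * b) / (g**2 + r * b + 1e-12)

def ndvi(nir, red):
    """Normalized difference vegetation index from reflectance bands:
    NDVI = (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)
```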

  12. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions - they also pose the problem that two single aerial images do not always meet the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, however, the accuracy of the DTM directly depends on the UAV flight altitude.
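
    The dependence of DTM accuracy on the base-height ratio can be illustrated with the standard stereo depth-error relation; the sketch below is a rough approximation under idealized assumptions (parallel viewing geometry), and the numeric values for altitude, base and focal length are hypothetical rather than taken from the paper.

```python
def depth_uncertainty(Z, B, f_px, sigma_d_px=0.5):
    """Approximate depth (height) uncertainty of a stereo pair:
    sigma_Z ~ Z^2 / (B * f) * sigma_d,
    with Z = object distance (flight altitude), B = stereo base,
    f_px = focal length in pixels, sigma_d_px = disparity error in pixels.
    Illustrates why a short photobase B at a given altitude degrades DTM accuracy."""
    return (Z ** 2) / (B * f_px) * sigma_d_px

# Hypothetical numbers: 30 m altitude, 20 cm stereo base, 3000 px focal length.
print(depth_uncertainty(Z=30.0, B=0.2, f_px=3000.0))  # ~0.75 m
```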

  13. A new framework for UAV-based remote sensing data processing and its application in almond water stress quantification

    USDA-ARS?s Scientific Manuscript database

    With the rapid development of small imaging sensors and unmanned aerial vehicles (UAVs), remote sensing is undergoing a revolution with greatly increased spatial and temporal resolutions. While more relevant detail becomes available, it is a challenge to analyze the large number of images to extract...

  14. Characteristic Analysis on UAV-MIMO Channel Based on Normalized Correlation Matrix

    PubMed Central

    Xi jun, Gao; Zi li, Chen; Yong Jiang, Hu

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicle (UAV), the simple form of UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication. PMID:24977185
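
    The abstract does not reproduce the derived correlation formula, but the normalization step and the derived quantities it mentions (condition number, channel capacity) can be sketched with standard MIMO relations. The sample i.i.d. Rayleigh channel below is an assumption for illustration only, not the GBSBCM model of the paper.

```python
import numpy as np

def normalize_correlation(R):
    """Scale a channel correlation matrix so its diagonal is 1,
    i.e. R_norm = D^{-1/2} R D^{-1/2} with D = diag(R)."""
    d = np.sqrt(np.real(np.diag(R)))
    return R / np.outer(d, d)

def mimo_capacity(H, snr_linear):
    """Standard MIMO capacity with equal power per transmit antenna:
    C = log2 det(I + (SNR/Nt) * H H^H)  [bits/s/Hz]."""
    nr, nt = H.shape
    HH = H @ H.conj().T
    sign, logdet = np.linalg.slogdet(np.eye(nr) + (snr_linear / nt) * HH)
    return logdet / np.log(2)

# Illustrative i.i.d. Rayleigh channel (not the paper's geometric model).
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
R = H @ H.conj().T
print(np.linalg.cond(normalize_correlation(R)), mimo_capacity(H, 10.0))
```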

  15. Positive effect on patient experience of video information given prior to cardiovascular magnetic resonance imaging: A clinical trial.

    PubMed

    Ahlander, Britt-Marie; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth

    2018-03-01

    To evaluate the effect of video information given before cardiovascular magnetic resonance imaging on patient anxiety and to compare patient experiences of cardiovascular magnetic resonance imaging versus myocardial perfusion scintigraphy. To evaluate whether additional information has an impact on motion artefacts. Cardiovascular magnetic resonance imaging and myocardial perfusion scintigraphy are technically advanced methods for the evaluation of heart diseases. Although cardiovascular magnetic resonance imaging is considered to be painless, patients may experience anxiety due to the closed environment. A prospective randomised intervention study, not registered. The sample (n = 148) consisted of 97 patients referred for cardiovascular magnetic resonance imaging, randomised to receive either video information in addition to standard text-information (CMR-video/n = 49) or standard text-information alone (CMR-standard/n = 48). A third group undergoing myocardial perfusion scintigraphy (n = 51) was compared with the cardiovascular magnetic resonance imaging-standard group. Anxiety was evaluated before, immediately after the procedure and 1 week later. Five questionnaires were used: Cardiac Anxiety Questionnaire, State-Trait Anxiety Inventory, Hospital Anxiety and Depression scale, MRI Fear Survey Schedule and the MRI-Anxiety Questionnaire. Motion artefacts were evaluated by three observers, blinded to the information given. Data were collected between April 2015-April 2016. The study followed the CONSORT guidelines. The CMR-video group scored lower (better) than the cardiovascular magnetic resonance imaging-standard group in the factor Relaxation (p = .039) but not in the factor Anxiety. Anxiety levels were lower during scintigraphic examinations compared to the CMR-standard group (p < .001). No difference was found regarding motion artefacts between CMR-video and CMR-standard. Patient ability to relax during cardiovascular magnetic resonance imaging

  16. Development and evaluation of unmanned aerial vehicle (UAV) magnetometry systems

    NASA Astrophysics Data System (ADS)

    Parvar, Kiyavash

    In this thesis, the procedure of conducting magnetic surveys from a UAV platform is investigated. In the process of evaluating UAVs for such surveys, magnetic sensors capable of operating on a UAV platform were tested using a terrestrial survey as well as on a UAV platform. Results were then compared to a model of the area generated using a proton precession magnetometer. The magnetic signature of the UAVs is discussed and impact values are calculated. For a better understanding of the magnetic fields around UAVs, micro-surveys were conducted with a fluxgate magnetometer around two UAVs; the results were used to determine a location to mount the magnetometer during the survey. A test survey over a known anomaly (a visible chromite outcrop in Oman) was conducted in order to determine the feasibility of using UAV-based magnetometry for chromite exploration. Observations were taken at two different elevations in order to generate a 3-D model of the magnetic field. Later, after applying upward continuation filters and comparing the calculated results to the real values, the reliability and uncertainty levels of such filters were investigated. Results show that magnetometry on UAV platforms is feasible. Unwanted signals can be noticeable and produce false anomalies at the end of each line because of the swinging of the magnetometer suspended below the UAV; this should be reduced by hardware and software modifications, i.e. applying non-linear filters and mounting the sensor on a rigid rod. It was also found that the error level associated with upward continuation filters exceeds 45%, so using such filters instead of actual observations is not recommended in gradiometry. Moreover, 3-D magnetic gradient surveys can be beneficial for future inversion problems.
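
    Upward continuation of a gridded field, as used and evaluated in this thesis, is commonly implemented with the wavenumber-domain operator exp(-dz*|k|). The sketch below shows that standard filter; the grid spacing and continuation height are placeholders, and no claim is made about the thesis' exact processing chain.

```python
import numpy as np

def upward_continue(field, dx, dy, dz):
    """Upward-continue a gridded magnetic field by dz metres using the
    standard wavenumber-domain operator exp(-dz * |k|).
    field: 2-D array of total-field values on a regular dx-by-dy grid."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)          # shape (ny, nx), matching the grid
    k = np.sqrt(KX**2 + KY**2)
    spectrum = np.fft.fft2(field)
    return np.real(np.fft.ifft2(spectrum * np.exp(-dz * k)))
```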

  17. Coordinating UAV information for executing national security-oriented collaboration

    NASA Astrophysics Data System (ADS)

    Isenor, Anthony W.; Allard, Yannick; Lapinski, Anna-Liesa S.; Demers, Hugues; Radulescu, Dan

    2014-10-01

    Unmanned Aerial Vehicles (UAVs) are being used by numerous nations for defence-related missions. In some cases, the UAV is considered a cost-effective means to acquire data such as imagery over a location or object. Considering Canada's geographic expanse, UAVs are also being suggested as a potential platform for use in surveillance of remote areas, such as northern Canada. However, such activities are typically associated with security as opposed to defence. The use of a defence platform for security activities introduces the issue of information exchange between the defence and security communities and their software applications. This paper explores the flow of information from the system used by the UAVs employed by the Royal Canadian Navy. Multiple computers are setup, each with the information system used by the UAVs, including appropriate communication between the systems. Simulated data that may be expected from a typical maritime UAV mission is then fed into the information system. The information structures common to the Canadian security community are then used to store and transfer the simulated data. The resulting data flow from the defence-oriented UAV system to the security-oriented information structure is then displayed using an open source geospatial application. Use of the information structures and applications relevant to the security community avoids the distribution restrictions often associated with defence-specific applications.

  18. A real-time remote video streaming platform for ultrasound imaging.

    PubMed

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience, yet there is a limited number of skilled sonographers located in remote areas. In this work, we aim to develop a real-time video streaming platform which allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system are evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  19. Wide field video-rate two-photon imaging by using spinning disk beam scanner

    NASA Astrophysics Data System (ADS)

    Maeda, Yasuhiro; Kurokawa, Kazuo; Ito, Yoko; Wada, Satoshi; Nakano, Akihiko

    2018-02-01

    Microscope technologies with a wider field of view, deeper penetration depth, higher spatial resolution and higher imaging speed are required to investigate in more detail the intercellular dynamics and the interactions of molecules and organelles in cells or tissue. Two-photon microscopy with a near-infrared (NIR) femtosecond laser is one technique that improves penetration depth and spatial resolution. However, video-rate or high-speed imaging over a wide field of view is difficult with a conventional two-photon microscope because it relies on point-by-point scanning. In this study, we developed a two-photon microscope with a spinning-disk beam scanner and a femtosecond NIR fiber laser with around 10 W average power to meet the above requirements. The laser consists of an oscillator based on a mode-locked Yb fiber laser, a two-stage pre-amplifier, a main amplifier based on a Yb-doped photonic crystal fiber (PCF), and a pulse compressor with a pair of gratings. It generates a beam with up to 10 W average power, 300 fs pulse width and 72 MHz repetition rate. The beam is directed into a spinning-disk beam scanner (Yokogawa Electric) optimized for two-photon imaging. Using this system, we obtained 3D images with more than 1 mm penetration depth and video-rate images with a 350 x 350 um field of view from the root of Arabidopsis thaliana.

  20. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
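
    A minimal sketch of the coarse-to-fine idea described above follows: each frame is registered to a reference, bicubically upsampled onto a common high-resolution grid, and the pixel-wise median is taken as the refined estimate. OpenCV phase correlation stands in for the paper's registration step, and the sign convention of the estimated shift may need checking against real data.

```python
import cv2
import numpy as np

def coarse_to_fine_sr(frames, scale=2):
    """Rough sketch of a coarse-to-fine super-resolution pipeline:
    register each single-channel frame to a reference with phase correlation,
    bicubically upsample onto a common high-resolution grid, then take the
    pixel-wise median to suppress registration outliers."""
    ref = frames[0].astype(np.float32)
    h, w = ref.shape
    stack = []
    for f in frames:
        f = f.astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)   # estimated shift of f vs. ref
        # Warp onto the high-res grid: scale and undo the estimated shift
        # (check the sign convention of phaseCorrelate for your data).
        M = np.float32([[scale, 0, -dx * scale],
                        [0, scale, -dy * scale]])
        up = cv2.warpAffine(f, M, (w * scale, h * scale), flags=cv2.INTER_CUBIC)
        stack.append(up)
    return np.median(np.stack(stack, axis=0), axis=0)
```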

  1. Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles

    PubMed Central

    Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.

    2017-01-01

    Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (i.e., bridges, buildings, etc.) are often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985

  2. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  3. Computational analysis of unmanned aerial vehicle (UAV)

    NASA Astrophysics Data System (ADS)

    Abudarag, Sakhr; Yagoub, Rashid; Elfatih, Hassan; Filipovic, Zoran

    2017-01-01

    A computational analysis has been performed to verify the aerodynamic properties of an Unmanned Aerial Vehicle (UAV). The UAV-SUST was designed and fabricated at the Department of Aeronautical Engineering at Sudan University of Science and Technology in order to meet the specifications required for surveillance and reconnaissance missions. It is classified as a medium-range and medium-endurance UAV. A commercial CFD solver is used to simulate the steady and unsteady aerodynamic characteristics of the entire UAV. In addition to the lift coefficient (CL), drag coefficient (CD), pitching moment coefficient (CM) and yawing moment coefficient (CN), the pressure and velocity contours are illustrated. The aerodynamic parameters show very good agreement with the design considerations at angles of attack ranging from zero to 26 degrees. Moreover, the visualization of the velocity field and static pressure contours indicates satisfactory agreement with the proposed design. Turbulence is predicted with the k-ω SST turbulence model within the computational fluid dynamics code.

  4. Video image processing greatly enhances contrast, quality, and speed in polarization-based microscopy

    PubMed Central

    1981-01-01

    Video cameras with contrast and black level controls can yield polarized light and differential interference contrast microscope images with unprecedented image quality, resolution, and recording speed. The theoretical basis and practical aspects of video polarization and differential interference contrast microscopy are discussed and several applications in cell biology are illustrated. These include: birefringence of cortical structures and beating cilia in Stentor, birefringence of rotating flagella on a single bacterium, growth and morphogenesis of echinoderm skeletal spicules in culture, ciliary and electrical activity in a balancing organ of a nudibranch snail, and acrosomal reaction in activated sperm. PMID:6788777

  5. Video-rate scanning two-photon excitation fluorescence microscopy and ratio imaging with cameleons.

    PubMed Central

    Fan, G Y; Fujisaki, H; Miyawaki, A; Tsay, R K; Tsien, R Y; Ellisman, M H

    1999-01-01

    A video-rate (30 frames/s) scanning two-photon excitation microscope has been successfully tested. The microscope, based on a Nikon RCM 8000, incorporates a femtosecond pulsed laser with wavelength tunable from 690 to 1050 nm, prechirper optics for laser pulse-width compression, resonant galvanometer for video-rate point scanning, and a pair of nonconfocal detectors for fast emission ratioing. An increase in fluorescent emission of 1.75-fold is consistently obtained with the use of the prechirper optics. The nonconfocal detectors provide another 2.25-fold increase in detection efficiency. Ratio imaging and optical sectioning can therefore be performed more efficiently without confocal optics. Faster frame rates, at 60, 120, and 240 frames/s, can be achieved with proportionally reduced scan lines per frame. Useful two-photon images can be acquired at video rate with a laser power as low as 2.7 mW at specimen with the genetically modified green fluorescent proteins. Preliminary results obtained using this system confirm that the yellow "cameleons" exhibit similar optical properties as under one-photon excitation conditions. Dynamic two-photon images of cardiac myocytes and ratio images of yellow cameleon-2.1, -3.1, and -3.1nu are also presented. PMID:10233058

  6. The Use of UAV in Housing Renovation Identification: A Case Study at Taman Manis 2

    NASA Astrophysics Data System (ADS)

    Mustaffa, A. A.; Hasmori, M. F.; Sarif, A. S.; Ahmad, N. F.; Zainun, N. Y.

    2018-04-01

    The housing industry in Malaysia is growing rapidly due to the increase in population and the rising economic level of the Malaysian people. Most residential houses are built according to a standard design, which often leads buyers to renovate after purchasing the house. Unmanned Aerial Vehicle (UAV) monitoring was used to obtain information on the renovated houses directly on site at Taman Manis 2, Parit Raja, Batu Pahat. By comparing the images captured by the UAV with the original house plans, we found that 160 out of 336 houses had undergone renovation. Surprisingly, 41 units had been renovated illegally, with renovation rates of 40% to 96%. From the analysis of the acquired data it can be concluded that the method of using UAVs to obtain such information is highly recommended. The study is expected to help the Municipal Council detect improper and illegal renovations by residents in a residential area.

  7. Uav Application in Coastal Environment, Example of the Oleron Island for Dunes and Dikes Survey

    NASA Astrophysics Data System (ADS)

    Guillot, B.; Pouget, F.

    2015-08-01

    Recent improvements in the ease of use of civil UAVs led the University of La Rochelle to develop a UAV programme around its own potential coastal applications. An application programme involving La Rochelle University and the District of Oleron Island began in January 2015 and lasted through July 2015. The aims were to choose 9 study areas and survey them during the winter season; the studies concerned the dikes and coastal sand dunes of Oleron Island. During each flight, an action sport camera fixed on the UAV's brushless gimbal took a series of 150 pictures. After processing the photographs and using a 3D reconstruction plugin via Photoscan, we were able to export high-resolution ortho-imagery, DSMs and 3D models. After applying GIS processing to these products, volumetric changes between flights were revealed through a DDVM (Difference of Digital Volumetric Model), in order to study sand movements on the coastal sand dunes.
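
    A DDVM-style volume estimate reduces to differencing two co-registered DEMs and multiplying by the cell area. The sketch below shows that computation; the separation into accretion and erosion terms is an illustrative choice, not necessarily the exact GIS treatment used in the study.

```python
import numpy as np

def volume_change(dem_before, dem_after, cell_size):
    """Difference of Digital Volumetric Model (DDVM)-style estimate:
    per-cell elevation change times cell area, summed separately for
    accretion and erosion. The two DEMs must be co-registered on the same grid."""
    dz = dem_after - dem_before
    cell_area = cell_size ** 2
    accretion = np.nansum(np.where(dz > 0, dz, 0.0)) * cell_area
    erosion = np.nansum(np.where(dz < 0, dz, 0.0)) * cell_area
    return accretion, erosion  # cubic metres if elevations and cell_size are in metres
```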

  8. A Discussion of Aerodynamic Control Effectors (ACEs) for Unmanned Air Vehicles (UAVs)

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.

    2002-01-01

    A Reynolds number based, unmanned air vehicle classification structure has been developed which identifies four classes of unmanned air vehicle concepts. The four unmanned air vehicle (UAV) classes are; Micro UAV, Meso UAV, Macro UAV, and Mega UAV. In a similar fashion a labeling scheme for aerodynamic control effectors (ACE) was developed and eleven types of ACE concepts were identified. These eleven types of ACEs were laid out in a five (5) layer scheme. The final section of the paper correlated the various ACE concepts to the four UAV classes and ACE recommendations are offered for future design activities.

  9. Optimization of processing parameters of UAV integral structural components based on yield response

    NASA Astrophysics Data System (ADS)

    Chen, Yunsheng

    2018-05-01

    In order to improve the overall strength of an unmanned aerial vehicle (UAV), the machining parameters of its structural components must be optimized, since machining is affected by the initial residual stress in the workpiece and machining errors occur easily. An optimization model for the machining parameters of UAV integral structural components based on yield response is therefore proposed. The finite element method is used to simulate the machining of UAV integral structural components. A prediction model of the workpiece surface machining error is established, and the influence of the tool path on the residual stress of the integral structure is studied according to the stress state of the component. The yield response of the time-varying stiffness and the stress evolution mechanism of the UAV integral structure are analysed. The simulation results show that the method optimizes the machining parameters of UAV integral structural components and improves the precision of milling: the machining error is reduced, and deformation prediction and error compensation of the integral structural parts are realized, thus improving the quality of machining.

  10. Fusion of UAV photogrammetry and digital optical granulometry for detection of structural changes in floodplains

    NASA Astrophysics Data System (ADS)

    Langhammer, Jakub; Lendzioch, Theodora; Mirijovsky, Jakub

    2016-04-01

    Granulometric analysis is a traditional and important method for describing sedimentary material, with various applications in sedimentology, hydrology and geomorphology. However, conventional granulometric field survey methods are time consuming, laborious, costly and invasive to the surface being sampled, which can be a limiting factor for their applicability in protected areas. Optical granulometry has recently emerged as an image analysis technique enabling non-invasive surveys, employing semi-automated identification of clasts from calibrated digital imagery taken on site with a conventional high-resolution digital camera and a calibration frame. The image processing allows detection and measurement of mixed-size natural grains, their sorting and quantitative analysis using standard granulometric approaches. Despite known limitations, the technique today presents a reliable tool, significantly easing and speeding up field surveys in fluvial geomorphology. However, such surveys are still limited in the spatial coverage of the sites and in their applicability to multitemporal research. In our study, we present a novel approach based on the fusion of two image analysis techniques - optical granulometry and UAV-based photogrammetry - that bridges the gap between the need for high-resolution structural information for granulometric analysis and the need for spatially accurate, seamless data coverage. We have developed and tested a workflow that uses a UAV imaging platform to deliver seamless, high-resolution and spatially accurate imagery of the study site, from which the granulometric properties of the sedimentary material can be derived. We have set up a workflow modeling chain, providing (i) the optimum flight parameters for UAV imagery to balance the two key divergent requirements - imagery resolution and seamless spatial coverage, (ii) the workflow for the processing of UAV acquired imagery by means of the optical

  11. Super-resolution image reconstruction from UAS surveillance video through affine invariant interest point-based motion estimation

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Wang, Yi; Camargo, Aldo; Martel, Florent

    2008-01-01

    In traditional super-resolution methods, researchers generally assume that accurate subpixel image registration parameters are given a priori. In reality, accurate image registration on a subpixel grid is the single most critically important step for the accuracy of super-resolution image reconstruction. In this paper, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate for super-resolution video enhancement. Affine invariant interest points include those corners that are invariant to affine transformations, including scale, rotation, and translation. They are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video captured by a small Unmanned Aircraft System (UAS) aircraft, which is highly susceptible to vibration from even light winds. The experimental results from real UAS surveillance video show that affine invariant interest points are more robust to perspective distortion and present more accurate matching than traditional Harris/SIFT corners. In our experiments on real video, all matching affine invariant interest points are found correctly. In addition, for the same super-resolution problem, we can use many fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution results.
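
    The paper extracts affine-invariant corners from the second moment matrix; as a readily available stand-in, the sketch below uses OpenCV SIFT keypoints with a ratio test and RANSAC-based affine estimation to illustrate frame-to-frame registration with mismatch rejection. Parameter values (ratio threshold, reprojection threshold) are assumptions.

```python
import cv2
import numpy as np

def register_affine(ref_gray, mov_gray):
    """Estimate an affine transform between two grayscale frames from matched
    keypoints, with RANSAC rejecting mismatches. Uses OpenCV SIFT as a
    stand-in for the paper's affine-invariant interest points."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref_gray, None)
    k2, d2 = sift.detectAndCompute(mov_gray, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = []
    for pair in matches:
        # Lowe ratio test to discard ambiguous matches.
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=1.0)
    return A, inliers
```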

  12. UAV formation control design with obstacle avoidance in dynamic three-dimensional environment.

    PubMed

    Chang, Kai; Xia, Yuanqing; Huang, Kaoli

    2016-01-01

    This paper considers the artificial potential field method combined with rotational vectors for the general problem of multi-unmanned aerial vehicle (UAV) systems tracking a moving target in a dynamic three-dimensional environment. An attractive potential field is generated between the leader and the target; it drives the leader to track the target based on their relative position. The other UAVs in the formation are controlled to follow the leader by the attractive control force. A repulsive force acts among the UAVs to avoid collisions and to distribute the UAVs evenly on the spherical surface whose center is the leader UAV. Specific orders or positions of the UAVs are not required. Obstacle-avoidance trajectories can be obtained through two kinds of potential fields with rotational vectors. Every UAV can choose the optimal trajectory to avoid the obstacle and reconfigure the formation after passing it. Simulation studies on UAVs are presented to demonstrate the effectiveness of the proposed method.
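
    A minimal sketch of the classical attractive/repulsive potential field force used as the starting point of such methods is given below; the rotational-vector extension described in the paper is not included, and the gains and influence radius are illustrative assumptions.

```python
import numpy as np

def potential_field_force(p, p_target, neighbours, k_att=1.0, k_rep=5.0, d0=10.0):
    """Net control force on one UAV: linear attraction towards the target
    (or the leader) plus short-range repulsion from nearby UAVs/obstacles
    that is active only inside the influence radius d0."""
    force = -k_att * (p - p_target)                      # attractive term
    for q in neighbours:
        d = np.linalg.norm(p - q)
        if 1e-6 < d < d0:
            # classic repulsive gradient, growing as the separation d -> 0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (p - q) / d
    return force

# Example: one follower attracted to the leader, repelled by a nearby UAV.
f = potential_field_force(np.array([0.0, 0.0, 10.0]),
                          np.array([5.0, 0.0, 10.0]),
                          [np.array([1.0, 1.0, 10.0])])
```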

  13. The Effects of Commercial Video Game Playing: A Comparison of Skills and Abilities for the Predator UAV

    DTIC Science & Technology

    2008-03-01

    wearing eyeglasses or contacts to achieve 20/20 vision would not constitute an automatic rejection to operate a UAV. Therefore, the reduced medical...Current selection methods may in fact not provide the fit for Predator needs because they do not really test what the Predator pilot really requires to do...but more importantly, how the information fits into what we already know-- our knowledge which has been previously obtained based on our experiences

  14. Time-Critical Cooperative Path Following of Multiple UAVs: Case Studies

    DTIC Science & Technology

    2012-10-30

    control algorithm for UAVs in 3D space. Section IV derives a strategy for time-critical cooperative path following of multiple UAVs that relies on the...UAVs in 3D space, in which a fleet of UAVs is tasked to converge to and follow a set of desired feasible paths so as to meet spatial and temporal...cooperative trajectory generation is not addressed in this paper. In fact, it is assumed that a set of desired 3D time trajectories pd,i(td) : R → R3

  15. A UAV and S2A data-based estimation of the initial biomass of green algae in the South Yellow Sea.

    PubMed

    Xu, Fuxiang; Gao, Zhiqiang; Jiang, Xiaopeng; Shang, Weitao; Ning, Jicai; Song, Debin; Ai, Jinquan

    2018-03-01

    Previous studies have shown that the initial biomass of the green tide is the green algae attached to Pyropia aquaculture rafts in the Southern Yellow Sea. In this study, the green algae were identified with an unmanned aerial vehicle (UAV), and a biomass estimation model for green algae in the radial sand ridge area was proposed based on a Sentinel-2A (S2A) image and UAV images. The results showed that the green algae were detected with high accuracy using the normalized green-red difference index (NGRDI); approximately 1340 tons and 700 tons of green algae were attached to rafts and raft ropes respectively, and the lower biomass might be the main cause of the smaller scale of the green tide in 2017. In addition, UAVs play an important role in monitoring raft-attached green algae, and long-term research on its biomass would provide a scientific basis for the control and forecasting of green tides in the Yellow Sea. Copyright © 2018 Elsevier Ltd. All rights reserved.
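
    The NGRDI used for detection is a simple band ratio; the sketch below computes it and applies a threshold to produce a detection mask. The threshold value is an assumption that would have to be tuned to the imagery, not a value from the study.

```python
import numpy as np

def ngrdi(green, red):
    """Normalized green-red difference index used to flag green vegetation/algae:
    NGRDI = (G - R) / (G + R)."""
    green, red = np.asarray(green, float), np.asarray(red, float)
    return (green - red) / (green + red + 1e-12)

def algae_mask(green, red, threshold=0.0):
    """Binary detection mask; the threshold is scene-dependent and would
    be tuned against reference UAV imagery."""
    return ngrdi(green, red) > threshold
```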

  16. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event driven network interface, and a free running or frame synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software based real time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions because they enable different computer platforms to exchange encoded video information without requiring on-board protocol compatible video codec hardware. Software based solutions enable true low cost video conferencing that fits the `open systems' model of interoperability that is so important for building portable hardware and software applications.

  17. 3D Modeling of Industrial Heritage Building Using COTSs System: Test, Limits and Performances

    NASA Astrophysics Data System (ADS)

    Piras, M.; Di Pietra, V.; Visintini, D.

    2017-08-01

    The role of UAV systems in applied geomatics is continuously increasing in several applications such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. Regarding technologies, commercial UAVs, including COTS (Commercial Off-The-Shelf) systems, have been in very wide use for some years. Moreover, these UAVs make it easy to acquire oblique images, giving the possibility to overcome the limitations of the nadir approach related to the field of view and occlusions. In order to test the potential and the issues of COTS systems, the Italian Society of Photogrammetry and Topography (SIFET) has organised the SBM2017, a benchmark in which anyone can participate in a shared experience. This benchmark, called "Photogrammetry with oblique images from UAV: potentialities and challenges", makes it possible to collect feedback from the users, highlight the potential of these systems, define the critical aspects and the technological challenges, and compare distinct approaches and software. The case study is the "Fornace Penna" in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) were acquired from three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of this benchmark is to generate the 3D model of the "Fornace Penna", with an analysis considering different software, imaging geometries and processing strategies. This paper describes the surveying strategies, the methodologies and five different photogrammetric results (sensor calibration, external orientation, dense point cloud and two orthophotos), obtained using, separately, the single images and the frames extracted from the video acquired with the DJI system.

  18. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas

    PubMed Central

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-01-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red–green–blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%–50% and 70%–40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance between all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE). PMID:27809293
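
    The ratio discussed above depends directly on the ground sample distance (GSD), which scales linearly with flight altitude. The sketch below evaluates the standard GSD relation for the altitudes flown in the study; the sensor pixel pitch and focal length are hypothetical values, not the specifications of the RGB sensor actually used.

```python
def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """GSD (metres/pixel) for a nadir image:
    GSD = altitude * pixel_pitch / focal_length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Hypothetical RGB sensor: 3.9 um pixels behind an 8.8 mm lens.
for h in (30, 40, 50, 60, 70, 80):
    print(h, "m AGL:", round(ground_sample_distance(h, 8.8, 3.9) * 100, 2), "cm/px")
```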

  19. French Interim MALE UAV Program

    DTIC Science & Technology

    2003-09-02

    MINISTÈRE DE LA DÉFENSE, June 13th 2002 - Lcl Monsterleet (FAF Staff) and J. Caron (EADS S&DE-ISR): French Interim MALE UAV Program, industrial status report.

  20. The development of a UGV-mounted automated refueling system for VTOL UAVs

    NASA Astrophysics Data System (ADS)

    Wills, Mike; Burmeister, Aaron; Nelson, Travis; Denewiler, Thomas; Mullens, Kathy

    2006-05-01

    This paper describes the latest efforts to develop an Automated UAV Mission System (AUMS) for small vertical takeoff and landing (VTOL) unmanned air vehicles (UAVs). In certain applications such as force protection, perimeter security, and urban surveillance a VTOL UAV can provide far greater utility than fixed-wing UAVs or ground-based sensors. The VTOL UAV can operate much closer to an object of interest and can provide a hover-and-stare capability to keep its sensors trained on an object, while the fixed wing UAV would be forced into a higher altitude loitering pattern where its sensors would be subject to intermittent blockage by obstacles and terrain. The most significant disadvantage of a VTOL UAV when compared to a fixed-wing UAV is its reduced flight endurance. AUMS addresses this disadvantage by providing forward staging, refueling, and recovery capabilities for the VTOL UAV through a host unmanned ground vehicle (UGV), which serves as a launch/recovery platform and service station. The UGV has sufficient payload capacity to carry UAV fuel for multiple launch, recovery, and refuel iterations. The UGV also provides a highly mobile means of forward deploying a small UAV into hazardous areas unsafe for personnel, such as chemically or biologically contaminated areas. Teaming small UAVs with large UGVs can decrease risk to personnel and expand mission capabilities and effectiveness. There are numerous technical challenges being addressed by these development efforts. Among the challenges is the development and integration of a precision landing system compact and light enough to allow it to be mounted on a small VTOL UAV while providing repeatable landing accuracy to safely land on the AUMS. Another challenge is the design of a UGV-transportable, expandable, self-centering landing pad that contains hardware and safety devices for automatically refueling the UAV. A third challenge is making the design flexible enough to accommodate different types of VTOL UAVs

  1. A Novel System for Correction of Relative Angular Displacement between Airborne Platform and UAV in Target Localization

    PubMed Central

    Liu, Chenglong; Liu, Jinghong; Song, Yueming; Liang, Huaidan

    2017-01-01

    This paper provides a system and method for correction of relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy. Because the angular displacements have an influence on the final accuracy, by attaching a measuring system to the platform, the texture image of platform base bulkhead can be collected in a real-time manner. Through the image registration, the displacement vector of the platform relative to its bulkhead can be calculated to further determine angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacements can reduce the coordinate transformation errors and thus improve the localization accuracy. Even a simple kind of method can improve the localization accuracy by 14.3%. PMID:28273845

  2. A Novel System for Correction of Relative Angular Displacement between Airborne Platform and UAV in Target Localization.

    PubMed

    Liu, Chenglong; Liu, Jinghong; Song, Yueming; Liang, Huaidan

    2017-03-04

    This paper provides a system and method for correction of relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy. Because the angular displacements have an influence on the final accuracy, by attaching a measuring system to the platform, the texture image of platform base bulkhead can be collected in a real-time manner. Through the image registration, the displacement vector of the platform relative to its bulkhead can be calculated to further determine angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacements can reduce the coordinate transformation errors and thus improve the localization accuracy. Even a simple kind of method can improve the localization accuracy by 14.3%.

  3. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
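
    For orientation, a minimal trajectory-smoothing stabilizer in the spirit of the motion-estimation stage is sketched below using OpenCV feature tracking. It does not implement the paper's motion inpainting or deblurring, so frame borders are simply left empty; the smoothing radius and feature-tracking parameters are assumptions.

```python
import cv2
import numpy as np

def stabilize(frames, radius=15):
    """Minimal stabilizer: track features between consecutive BGR frames,
    accumulate a (dx, dy, dtheta) camera trajectory, smooth it with a
    moving average, and re-warp the frames. Unlike the paper, missing
    borders are left empty rather than filled by motion inpainting."""
    transforms = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for f in frames[1:]:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                     qualityLevel=0.01, minDistance=30)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, gray, p0, None)
        A, _ = cv2.estimateAffinePartial2D(p0[st == 1], p1[st == 1])
        transforms.append([A[0, 2], A[1, 2], np.arctan2(A[1, 0], A[0, 0])])
        prev = gray
    traj = np.cumsum(transforms, axis=0)                 # accumulated camera path
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = np.vstack([np.convolve(traj[:, i], kernel, mode='same')
                        for i in range(3)]).T
    out, (h, w) = [frames[0]], prev.shape
    for f, t, d in zip(frames[1:], transforms, smooth - traj):
        dx, dy, da = t[0] + d[0], t[1] + d[1], t[2] + d[2]
        M = np.float32([[np.cos(da), -np.sin(da), dx],
                        [np.sin(da),  np.cos(da), dy]])
        out.append(cv2.warpAffine(f, M, (w, h)))
    return out
```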

  4. Using Calibrated RGB Imagery from Low-Cost Uavs for Grassland Monitoring: Case Study at the Rengen Grassland Experiment (rge), Germany

    NASA Astrophysics Data System (ADS)

    Lussem, U.; Hollberg, J.; Menne, J.; Schellberg, J.; Bareth, G.

    2017-08-01

    Monitoring the spectral response of intensively managed grassland throughout the growing season allows fertilizer inputs to be optimized by monitoring plant growth. For example, site-specific fertilizer application as part of precision agriculture (PA) management requires information within a short time, but this requires field-based measurements with hyper- or multispectral sensors, which may not be feasible in day-to-day farming practice. Exploiting the information in RGB images from consumer-grade cameras mounted on unmanned aerial vehicles (UAVs) can offer cost-efficient and near-real-time analysis of grasslands with high temporal and spatial resolution. The potential of RGB imagery-based vegetation indices (VIs) from consumer-grade cameras mounted on UAVs has been explored recently in several studies. However, for multitemporal analyses it is desirable to calibrate the digital numbers (DN) of RGB images to physical units. In this study, we explored the comparability of the RGBVI from a consumer-grade camera mounted on a low-cost UAV to well-established vegetation indices from hyperspectral field measurements for applications in grassland. The study was conducted in 2014 on the Rengen Grassland Experiment (RGE) in Germany. Image DN values were calibrated to reflectance using the Empirical Line Method (Smith & Milton 1999). Depending on sampling date and VI, the correlation between the UAV-based RGBVI and VIs such as the NDVI resulted in R2 values varying from no correlation up to 0.9. These results indicate that calibrated RGB-based VIs have the potential to support or substitute hyperspectral field measurements to facilitate management decisions on grasslands.
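
    The Empirical Line Method referred to above amounts to a per-band linear fit between image digital numbers and the known reflectance of calibration targets. The sketch below shows that fit; the target DN and reflectance values are hypothetical.

```python
import numpy as np

def empirical_line(dn_targets, reflectance_targets):
    """Empirical Line Method per band: fit reflectance = gain * DN + offset
    using calibration targets of known reflectance imaged in the scene."""
    gain, offset = np.polyfit(dn_targets, reflectance_targets, deg=1)
    return gain, offset

# Hypothetical grey targets in the green band: DN values vs. known reflectance.
gain, offset = empirical_line([30, 120, 210], [0.05, 0.30, 0.55])
calibrated = gain * np.array([[95, 160], [60, 200]], dtype=float) + offset
```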

  5. Measures in 2015 Using a DSLR and Video Lucky Imaging

    NASA Astrophysics Data System (ADS)

    Cotterell, David

    2017-10-01

    Measures of 31 pairs taken in 2015 are reported. A 202mm, f/15 Maksutov-Cassegrain and a DSLR in video crop mode were used for the acquisition of “lucky images”. Calibration was via essentially stationary wider pairs, as analyzed and discussed.

  6. Assessing the Accuracy of Ortho-image using Photogrammetric Unmanned Aerial System

    NASA Astrophysics Data System (ADS)

    Jeong, H. H.; Park, J. W.; Kim, J. S.; Choi, C. U.

    2016-06-01

    A smart camera can not only be operated in a network environment anytime and anywhere, but also costs less than existing photogrammetric UAV payloads, since it provides high-resolution imagery and real-time 3D location and attitude data from a variety of built-in sensors. In the UAV photogrammetric method proposed in this study, a low-cost UAV and a smart camera were used. The elements of interior orientation were acquired through camera calibration. Image triangulation was conducted both with and without the interior orientation (IO) parameters determined by the calibration, and the digital elevation model (DEM) was constructed using the image data photographed over the target area together with the results of the ground control point survey. The applicability of the proposed method is also analysed by comparing the ortho-image with the results of the ground control point survey. Considering these findings, a smartphone appears to be a very feasible payload for a UAV system, and smartphones loaded onto existing UAVs are expected to play significant direct or indirect roles.

  7. A professional and cost effective digital video editing and image storage system for the operating room.

    PubMed

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system ideal to produce video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. Mixing different streams of video input from all the devices in use in the operating room, the application of filters and effects produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable and easy-to-use medium to store or re-edit or tape at a later time. From stored videography it is easy to extract high-quality, still images useful for teaching, presentations and publications. In conclusion digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recording. The use of firewire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest priced products available today.

  8. Correction of projective distortion in long-image-sequence mosaics without prior information

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is
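
    The scale-normalization step described above can be sketched as follows: take the 2x2 linear part of the frame-to-mosaic transform, estimate its overall scale from the singular values, and divide it out so the transformed frame keeps its original size. The use of the geometric mean of the singular values is an assumption consistent with, but not necessarily identical to, the authors' procedure.

```python
import numpy as np

def remove_projective_scale(A):
    """Approximate the frame-to-mosaic transform by an affine model and
    reset its overall scale to 1. The scale is taken as the geometric mean
    of the singular values of the 2x2 linear part."""
    A = np.asarray(A, dtype=float)
    L = A[:2, :2]                      # linear part of the affine model
    s = np.linalg.svd(L, compute_uv=False)
    scale = np.sqrt(s[0] * s[1])       # overall isotropic scale factor
    A_fixed = A.copy()
    A_fixed[:2, :2] = L / scale        # transformed frame keeps its size
    return A_fixed, scale
```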

  9. Using Distance Sensors to Perform Collision Avoidance Maneuvres on Uav Applications

    NASA Astrophysics Data System (ADS)

    Raimundo, A.; Peres, D.; Santos, N.; Sebastião, P.; Souto, N.

    2017-08-01

    Unmanned Aerial Vehicles (UAVs) and their applications are growing for both civilian and military purposes. Operational experience with UAVs has proved that some tasks and operations can be done easily and at a good cost-efficiency ratio. Nowadays, a UAV can perform autonomous missions. This is very useful for certain UAV applications, such as meteorology, surveillance systems, agriculture, environmental mapping and search and rescue operations. One of the biggest problems that a UAV faces is the possibility of collision with other objects in the flight area. To avoid this, an algorithm was developed and implemented in order to prevent UAV collisions with other objects. The "Sense and Avoid" algorithm was developed as a system for UAVs to avoid objects on a collision course. This algorithm uses a Light Detection and Ranging (LiDAR) sensor to detect objects facing the UAV in mid-flight. This light sensor is connected to the on-board Pixhawk flight controller, which in turn communicates with another piece of hardware, a Raspberry Pi. Communications between the Ground Control Station and the UAV are made via Wi-Fi or third- or fourth-generation cellular networks (3G/4G). Some tests were made in order to evaluate the overall performance of the "Sense and Avoid" algorithm. These tests were carried out in two different environments: a simulated 3D environment and a real outdoor environment. Both modes worked successfully in the simulated environment, and the "Brake" mode also worked outdoors, proving the concept.
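
    The two avoidance modes evaluated above ultimately reduce to range-threshold decisions on the LiDAR reading; the toy rule below illustrates that structure. The thresholds and the "AVOID" behaviour are illustrative assumptions, not parameters from the paper.

```python
def sense_and_avoid(distance_m, brake_distance_m=8.0, avoid_distance_m=15.0):
    """Toy decision rule in the spirit of the described 'Sense and Avoid'
    modes: command BRAKE when an object is closer than the brake distance,
    start an avoidance manoeuvre inside the avoidance distance, otherwise
    continue the mission. Thresholds are illustrative only."""
    if distance_m < brake_distance_m:
        return "BRAKE"
    if distance_m < avoid_distance_m:
        return "AVOID"
    return "CONTINUE"
```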

  10. UAV based mapping of variation in grassland yield for forage production in Arctic environments

    NASA Astrophysics Data System (ADS)

    Davids, C.; Karlsen, S. R.; Jørgensen, M.; Ancin Murguzur, F. J.

    2017-12-01

    Grassland cultivation for animal feed is the key agricultural activity in northern Norway. Even though the growing season has increased by at least a week in the last 30 years, grassland yields appear to have declined, probably due to more challenging winter conditions and changing agronomic practices. The ability to forecast local and regional crop productivity would assist farmers with management decisions and would provide local and national authorities with a better overview of productivity and potential problems due to, e.g., winter damage. Remote sensing technology has long been used to estimate and map the variability of various biophysical parameters, but calibration is important. In order to establish the relationship between spectral reflectance and grass yield in northern European environments, we combine Sentinel-2 time series, UAV-based multispectral measurements, and ground-based spectroradiometry with biomass analyses and observations of species composition. In this presentation we will focus on the results from the UAV data acquisition. We used a multirotor UAV with different sensors (a multispectral Rikola camera, and NDVI and RGB cameras) to image a number of cultivated grasslands of different ages and productivity in northern Norway in June/July 2016 and 2017. Following UAV data acquisition, 10 to 20 in situ measurements were made per field using a FieldSpec3 (350-2500 nm). In addition, samples were taken to determine biomass and grass species composition. The imaging and sampling were done immediately prior to harvesting. The Rikola camera, when used as a stand-alone camera mounted on a UAV, can collect 15 bands with a spectral width of 10-15 nm in the 500-890 nm range. In the initial analysis of the 2016 data we investigated how well different vegetation indices correlated with biomass and showed that vegetation indices that include red-edge bands perform better than widely used indices such as NDVI. We will extend the analysis with
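
    To illustrate the kind of index comparison mentioned above, the sketch below computes NDVI and a red-edge index (NDRE) from per-pixel reflectance; the band choices and the toy reflectance values are assumptions, not the Rikola band set or data from this study.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red + 1e-9)

    def ndre(nir, red_edge):
        """Normalized Difference Red Edge index; red-edge bands (~700-740 nm)
        fall inside the 500-890 nm range quoted for the Rikola camera."""
        return (nir - red_edge) / (nir + red_edge + 1e-9)

    # toy 2x2-pixel reflectance maps with assumed values
    nir      = np.array([[0.45, 0.40], [0.50, 0.35]])
    red      = np.array([[0.06, 0.08], [0.05, 0.10]])
    red_edge = np.array([[0.20, 0.22], [0.18, 0.25]])

    print("NDVI:\n", np.round(ndvi(nir, red), 2))
    print("NDRE:\n", np.round(ndre(nir, red_edge), 2))
    ```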

  11. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.

  12. Using unmanned aerial vehicle (UAV) surveys and image analysis in the study of large surface-associated marine species: a case study on reef sharks Carcharhinus melanopterus shoaling behaviour.

    PubMed

    Rieucau, G; Kiszka, J J; Castillo, J C; Mourier, J; Boswell, K M; Heithaus, M R

    2018-06-01

    A novel image analysis-based technique applied to unmanned aerial vehicle (UAV) survey data is described to detect and locate individual free-ranging sharks within aggregations. The method allows rapid collection of data and quantification of fine-scale swimming and collective patterns of sharks. We demonstrate the usefulness of this technique in a small-scale case study exploring the shoaling tendencies of blacktip reef sharks Carcharhinus melanopterus in a large lagoon within Moorea, French Polynesia. Using our approach, we found that C. melanopterus displayed increased alignment with shoal companions when distributed over a sandflat where they are regularly fed for ecotourism purposes as compared with when they shoaled in a deeper adjacent channel. Our case study highlights the potential of a relatively low-cost method that combines UAV survey data and image analysis to detect differences in shoaling patterns of free-ranging sharks in shallow habitats. This approach offers an alternative to current techniques commonly used in controlled settings that require time-consuming post-processing effort.
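
    One simple way to quantify the alignment tendency reported above is the mean resultant length of the individuals' heading angles (1 = perfectly aligned, 0 = uniformly scattered headings). The sketch below applies it to headings that might be extracted from UAV frames; both the metric and the toy numbers are illustrative assumptions, not the authors' exact analysis.

    ```python
    import numpy as np

    def alignment(headings_deg):
        """Mean resultant length of heading angles: 1.0 when all individuals
        point the same way, near 0.0 when headings are uniformly scattered."""
        theta = np.deg2rad(np.asarray(headings_deg, dtype=float))
        return float(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))

    # headings (degrees) of sharks detected in one UAV frame -- toy values
    sandflat_group = [88, 92, 85, 95, 90, 87]
    channel_group  = [10, 140, 260, 35, 300, 190]
    print(alignment(sandflat_group))   # close to 1 -> strongly aligned
    print(alignment(channel_group))    # closer to 0 -> weakly aligned
    ```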

  13. Current development of UAV sense and avoid system

    NASA Astrophysics Data System (ADS)

    Zhahir, A.; Razali, A.; Mohd Ajir, M. R.

    2016-10-01

    As unmanned aerial vehicles (UAVs) are now gaining high interest from the civil and commercial market, the automatic sense and avoid (SAA) system is currently one of the essential features in the UAV research spotlight. Several sensor types employed in current SAA research, together with sensor-fusion technology that offers a great opportunity to improve detection and tracking systems, are presented here. The purpose of this paper is to provide an overview of SAA system development in general, as well as the current challenges facing UAV researchers and designers.
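
    As a toy illustration of the sensor-fusion idea touched on in this overview, the sketch below fuses two independent range estimates of an intruder by inverse-variance weighting, the simplest static Kalman-style fusion rule; the sensor types and variances are assumptions.

    ```python
    def fuse_ranges(r1, var1, r2, var2):
        """Inverse-variance fusion of two independent range estimates of the
        same intruder (e.g., radar and electro-optical); a generic textbook
        rule, not a method from the surveyed systems."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        fused = (w1 * r1 + w2 * r2) / (w1 + w2)
        fused_var = 1.0 / (w1 + w2)
        return fused, fused_var

    # the fused estimate leans toward the lower-variance sensor
    print(fuse_ranges(120.0, 25.0, 110.0, 4.0))
    ```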

  14. The Altus Cumulus Electrification Study (ACES): A UAV-based Investigation of Thunderstorms

    NASA Technical Reports Server (NTRS)

    Blakeslee, Richard; Arnold, James E. (Technical Monitor)

    2001-01-01

    The Altus Cumulus Electrification Study (ACES) is a NASA-sponsored and -led science investigation that utilizes an uninhabited aerial vehicle (UAV) to investigate thunderstorms in the vicinity of the NASA Kennedy Space Center, Florida. As part of NASA's UAV-based science demonstration program, ACES will provide a scientifically useful demonstration of the utility and promise of UAV platforms for Earth science and applications observations. ACES will employ the Altus II aircraft, built by General Atomics-Aeronautical Systems, Inc. By taking advantage of its slow flight speed (70 to 100 knots), long endurance, and high-altitude flight (up to 55,000 feet), the Altus will be flown near, and when possible, above (but never into) thunderstorms for long periods of time, allowing investigations to be conducted over entire storm life cycles. Key science objectives simultaneously addressed by ACES are to: (1) investigate lightning-storm relationships, (2) study storm electrical budgets, and (3) provide Lightning Imaging Sensor validation. The ACES payload, already developed and flown on Altus, includes electrical, magnetic, and optical sensors to remotely characterize the lightning activity and the electrical environment within and around thunderstorms. The ACES field campaign will be conducted during July 2002 with a goal of performing 8 to 10 UAV flights. Each flight will require about 4 to 5 hours on station at altitudes from 40,000 ft to 55,000 ft. The ACES team comprises scientists from the NASA Marshall and Goddard Space Flight Centers, partnered with General Atomics and IDEA, LLC.

  15. Systematic Approach to Elevation Data Acquisition for Geophysical Survey Alignments in Hilly Terrains Using UAVs

    NASA Astrophysics Data System (ADS)

    Ismail, M. A. M.; Kumar, N. S.; Abidin, M. H. Z.; Madun, A.

    2018-04-01

    This study presents a systematic approach to photogrammetric surveying that is applicable to the extraction of elevation data for geophysical surveys in hilly terrains using Unmanned Aerial Vehicles (UAVs). The outcome is the acquisition of high-quality geophysical data from areas where elevations vary, by locating the best survey lines. The study area is located at the proposed construction site for the development of a water reservoir and related infrastructure in Kampus Pauh Putra, Universiti Malaysia Perlis. Seismic refraction surveys were carried out for the modelling of the subsurface for detailed site investigations. A study was carried out to assess the accuracy of the digital elevation model (DEM) produced from a UAV. At 100 m altitude (flying height), over 135 overlapping images were acquired using a DJI Phantom 3 quadcopter. All acquired images were processed for automatic 3D photo-reconstruction using the Agisoft PhotoScan digital photogrammetric software, which was applied to all photogrammetric stages. The products generated included a 3D model, dense point cloud, mesh surface, digital orthophoto, and DEM. To validate the accuracy of the produced DEM, the coordinates of selected ground control points (GCPs) along the survey line in the imaging area were extracted from the generated DEM with the aid of Global Mapper software. These coordinates were compared with the GCPs obtained using a real-time kinematic global positioning system. The maximum percentage difference between the GCPs and the photogrammetric survey is 13.3%. UAVs are suitable for acquiring elevation data for geophysical surveys, which can save time and cost.
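
    A minimal sketch of the accuracy check described above: compare GCP values extracted from the UAV-derived DEM with the RTK-GPS measurements and report the maximum percentage difference. The elevations used here are illustrative only, not the study's data.

    ```python
    import numpy as np

    # elevations (m) of the same ground control points -- illustrative values
    z_dem = np.array([42.1, 55.3, 61.0, 48.7])   # extracted from the UAV-derived DEM
    z_rtk = np.array([41.5, 54.0, 60.2, 47.3])   # measured with the RTK-GPS

    pct_diff = 100.0 * np.abs(z_dem - z_rtk) / np.abs(z_rtk)
    print("per-point difference (%):", np.round(pct_diff, 1))
    print("maximum difference (%):", round(float(pct_diff.max()), 1))
    ```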

  16. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  17. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a novel, rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with the help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. Also, we discuss the potential use of the single-pixel imaging system for quantum applications.
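
    The sketch below illustrates the basic single-pixel measurement model and a simple correlation-based reconstruction (in the spirit of computational ghost imaging); it assumes random binary patterns and a simulated detector, and is a generic example rather than the authors' reconstruction algorithms.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 32, 6000                     # image side length, number of patterns

    # ground-truth scene (a bright square), used only to simulate the detector
    scene = np.zeros((N, N))
    scene[10:22, 10:22] = 1.0

    patterns = rng.integers(0, 2, size=(M, N, N)).astype(float)   # DMD patterns
    signal = (patterns * scene).sum(axis=(1, 2))                   # single-pixel readings

    # correlation reconstruction: weight each pattern by the signal fluctuation
    recon = ((signal - signal.mean())[:, None, None] * patterns).mean(axis=0)
    ```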

  18. Video-mosaicking of in vivo reflectance confocal microscopy images for noninvasive examination of skin lesion (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kose, Kivanc; Gou, Mengran; Yelamos, Oriol; Cordova, Miguel A.; Rossi, Anthony; Nehal, Kishwer S.; Camps, Octavia I.; Dy, Jennifer G.; Brooks, Dana H.; Rajadhyaksha, Milind

    2017-02-01

    In this report we describe a computer-vision-based pipeline to convert in-vivo reflectance confocal microscopy (RCM) videos collected with a handheld system into large field of view (FOV) mosaics. For many applications, such as imaging of hard-to-access lesions, intraoperative assessment of Mohs margins, or delineation of lesion margins beyond clinical borders, raster-scan-based mosaicking techniques have clinically significant limitations. In such cases, clinicians often capture RCM videos by freely moving a handheld microscope over the area of interest, but the resulting videos lose large-scale spatial relationships. Video-mosaicking is a standard computational imaging technique to register and stitch together consecutive video frames into large-FOV, high-resolution mosaics. However, mosaicking RCM videos collected in vivo has unique challenges: (i) tissue may deform or warp due to physical contact with the microscope objective lens, (ii) discontinuities or "jumps" between consecutive images and motion blur artifacts may occur due to manual operation of the microscope, and (iii) optical sectioning and resolution may vary between consecutive images due to scattering and aberrations induced by changes in imaging depth and tissue morphology. We addressed these challenges by adapting or developing new algorithmic methods for video-mosaicking, specifically by modeling non-rigid deformations, followed by automatically detecting discontinuities (cut locations) and, finally, applying a data-driven image stitching approach that fully preserves resolution and tissue morphologic detail without imposing arbitrary pre-defined boundaries. We will present example mosaics obtained by clinical imaging of both melanoma and non-melanoma skin cancers. The ability to combine freehand mosaicking for handheld microscopes with preserved cellular resolution will have high-impact applications in diverse clinical settings, including low-resource healthcare systems.
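
    As a simplified sketch of the pairwise registration step in such a video-mosaicking pipeline, the snippet below estimates a partial-affine transform between consecutive frames from ORB feature matches with RANSAC (OpenCV). It deliberately omits the contributions described above, namely non-rigid deformation modeling, cut detection, and data-driven stitching.

    ```python
    import cv2
    import numpy as np

    def register_pair(prev_frame, curr_frame):
        """Estimate the 2x3 transform mapping curr_frame onto prev_frame from
        ORB feature matches; a rigid/affine sketch only."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(prev_frame, None)
        k2, d2 = orb.detectAndCompute(curr_frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d2, d1)
        src = np.float32([k2[m.queryIdx].pt for m in matches])
        dst = np.float32([k1[m.trainIdx].pt for m in matches])
        M, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        return M
    ```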

  19. A Natural Interaction Interface for UAVs Using Intuitive Gesture Recognition

    NASA Technical Reports Server (NTRS)

    Chandarana, Meghan; Trujillo, Anna; Shimada, Kenji; Allen, Danette

    2016-01-01

    The popularity of unmanned aerial vehicles (UAVs) is increasing as technological advancements boost their favorability for a broad range of applications. One application is science data collection. In fields like Earth and atmospheric science, researchers are seeking to use UAVs to augment their current portfolio of platforms and increase their access to geographic areas of interest. By increasing the number of data collection platforms, UAVs will significantly improve system robustness and allow for more sophisticated studies. Scientists would like to be able to deploy an available fleet of UAVs to fly a desired flight path and collect sensor data without needing to understand the complex low-level controls required to describe and coordinate such a mission. A natural interaction interface for a Ground Control System (GCS) using gesture recognition is developed to allow non-expert users (e.g., scientists) to define a complex flight path for a UAV using intuitive hand gesture inputs from the constructed gesture library. The GCS calculates the combined trajectory on-line, verifies the trajectory with the user, and sends it to the UAV controller to be flown.
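
    A hypothetical sketch of how recognized gesture labels could be turned into a combined trajectory: each gesture maps to a local waypoint segment, and the segments are chained end to end before being handed to the UAV controller. The gesture names, segment shapes, and units are illustrative assumptions, not the interface described in the paper.

    ```python
    import numpy as np

    # hypothetical gesture library: each gesture yields a local waypoint segment (x, y, z in m)
    SEGMENTS = {
        "straight":  lambda: np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0]]),
        "climb":     lambda: np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 15.0]]),
        "turn_left": lambda: np.array([[0.0, 0.0, 0.0], [20.0, 20.0, 0.0]]),
    }

    def build_trajectory(gesture_sequence):
        """Chain gesture-defined segments into one waypoint list by offsetting
        each segment to start where the previous one ended."""
        waypoints = [np.zeros(3)]
        for gesture in gesture_sequence:
            segment = SEGMENTS[gesture]()[1:]        # drop the segment's local origin
            waypoints.extend(waypoints[-1] + segment)
        return np.vstack(waypoints)

    print(build_trajectory(["straight", "climb", "turn_left"]))
    ```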

  20. Repurposing Radiosonde Sensors for UAV Integration

    NASA Astrophysics Data System (ADS)

    Clowney, F. A.

    2015-12-01

    Radiosondes provide accurate, high-resolution meteorological data for a variety of purposes but are inefficient for studying the atmospheric boundary layer. Tethered balloons can provide greater temporal resolution but are difficult to acquire, hard to manage and limited in vertical resolution. UAVs appear to offer a more cost-effective method for gathering low-level meteorological data in situ, with a strong possibility of adding atmospheric chemistry. This potential is enhanced by the availability of new generations of small sensors along with dramatic advances in low-cost UAVs, especially rotary-wing. InterMet is using its experience in radiosonde design and manufacturing to develop sensor packages for fixed and rotary-wing UAVs, with the goal of delivering high-quality data at low cost. The challenge is to adapt affordable, high-accuracy sensors to the different UAV flight modes. Equally important is learning from the research community what is required for this data to have useful scientific value. Specific topics to be covered include data sampling and output rates, sensor response times, calibration, sensor placement, data storage and transfer, power consumption, integration with flight management systems and wind calculations. Beta test results for the iMet-XQ and iMet-XF sensor packages will be presented if available.
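
    One of the listed topics, wind calculation, can be illustrated with the textbook wind-triangle relation: the horizontal wind equals the UAV's GPS ground velocity minus its air velocity (true airspeed along the heading). The sketch below is a generic example with assumed numbers, not InterMet's method.

    ```python
    import numpy as np

    def wind_vector(ground_velocity_en, true_airspeed, heading_deg):
        """Horizontal wind (east, north components, m/s) from the wind triangle:
        wind = ground velocity - air velocity. Assumes level flight and that
        heading equals the direction of the airspeed vector."""
        heading = np.deg2rad(heading_deg)
        air_velocity = true_airspeed * np.array([np.sin(heading), np.cos(heading)])
        return np.asarray(ground_velocity_en, dtype=float) - air_velocity

    # toy numbers: flying due north at 18 m/s over ground while the airspeed sensor reads 22 m/s
    print(wind_vector([0.0, 18.0], 22.0, heading_deg=0.0))   # -> [0, -4]: a 4 m/s wind from the north
    ```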