Science.gov

Sample records for aerial video imagery

  1. Acquisition and registration of aerial video imagery of urban traffic

    SciTech Connect

    Loveland, Rohan C

    2008-01-01

    The amount of information available about urban traffic from aerial video imagery is extremely high. Here we discuss the collection of such video imagery from a helicopter platform with a low-cost sensor, and the post-processing used to correct radial distortion in the data and register it. The radial distortion correction is accomplished using a Harris model. The registration is implemented in a two-step process, using a globally applied polyprojective correction model followed by a fine-scale local displacement field adjustment. The resulting cleaned-up data is sufficiently well registered to allow subsequent straightforward vehicle tracking.
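
    The post-processing above names two standard building blocks: a one-parameter radial distortion correction and a global-then-local registration. As an illustration only, the sketch below undistorts feature points with a generic single-parameter division model, used here as a stand-in for the Harris model cited in the abstract (whose exact form is not given); the function name, the distortion centre and the coefficient value are assumptions.

    ```python
    import numpy as np

    def undistort_points(pts, center, k):
        """Correct radial distortion with a one-parameter division model.
        pts:    (N, 2) array of distorted pixel coordinates
        center: (cx, cy) assumed distortion centre
        k:      radial distortion coefficient (illustrative value)"""
        c = np.asarray(center, dtype=float)
        d = pts - c                                   # offsets from the distortion centre
        r2 = np.sum(d ** 2, axis=1, keepdims=True)    # squared radial distance
        return c + d / (1.0 + k * r2)                 # radially corrected coordinates

    # Hypothetical usage on points picked from a 640x480 helicopter frame
    pts = np.array([[50.0, 40.0], [600.0, 420.0]])
    corrected = undistort_points(pts, center=(320.0, 240.0), k=-1e-7)
    ```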

  2. Using aerial video to train the supervised classification of Landsat TM imagery for coral reef habitats mapping.

    PubMed

    Bello-Pineda, J; Liceaga-Correa, M A; Hernández-Núñez, H; Ponce-Hernández, R

    2005-06-01

    Management of coral reef resources is a challenging task, in many cases because of the scarcity or absence of accurate sources of information and maps. Remote sensing is a non-intrusive but powerful tool that has been successfully used for the assessment and mapping of natural resources in coral reef areas. In this study we used GIS to combine Landsat TM imagery, aerial photography, aerial video and a digital bathymetric model to assess and map submerged habitats of Alacranes reef, Yucatán, México. Our main goal was to test the potential of aerial video as a source of data for producing training areas for the supervised classification of Landsat TM imagery. Submerged habitats were ecologically characterized using a hierarchical classification of field data. Habitats were identified on an overlaid image consisting of the three types of remote sensing products and the bathymetric model. Pixels representing those habitats were selected as training areas using GIS tools. The training areas were used to classify Landsat TM bands 1, 2 and 3 and the bathymetric model with a maximum likelihood algorithm. The resulting thematic map was compared against the field data classification to improve the habitat definitions. Contextual editing and reclassification were used to obtain the final thematic map, with an overall accuracy of 77%. Analysis of aerial video by a specialist in coral reef ecology was found to be a suitable source of information for producing training areas for the supervised classification of Landsat TM imagery of coral reefs at a coarse scale.
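
    The classification step named above, maximum likelihood classification of spectral bands from user-selected training areas, can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the software used in the study; class labels, band counts and array shapes are placeholders.

    ```python
    import numpy as np

    def train_ml_classifier(samples):
        """Fit a multivariate Gaussian to each habitat class.
        samples: dict mapping class label -> (N, bands) array of training pixels."""
        stats = {}
        for label, X in samples.items():
            mu = X.mean(axis=0)
            cov = np.cov(X, rowvar=False)
            stats[label] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return stats

    def classify(pixels, stats):
        """Assign each pixel (N, bands) to the class with the highest log-likelihood."""
        labels = list(stats)
        scores = []
        for label in labels:
            mu, inv_cov, logdet = stats[label]
            d = pixels - mu
            maha = np.einsum('ij,jk,ik->i', d, inv_cov, d)   # squared Mahalanobis distances
            scores.append(-0.5 * (logdet + maha))
        return np.array(labels)[np.argmax(np.stack(scores), axis=0)]
    ```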

  3. Aerial video and ladar imagery fusion for persistent urban vehicle tracking

    NASA Astrophysics Data System (ADS)

    Cho, Peter; Greisokh, Daniel; Anderson, Hyrum; Sandland, Jessica; Knowlton, Robert

    2007-04-01

    We assess the impact of supplementing two-dimensional video with three-dimensional geometry for persistent vehicle tracking in complex urban environments. Using recent video data collected over a city with minimal terrain content, we first quantify erroneous sources of automated tracking termination and identify those which could be ameliorated by detailed height maps. They include imagery misregistration, roadway occlusion and vehicle deceleration. We next develop mathematical models to analyze the tracking value of spatial geometry knowledge in general and high resolution ladar imagery in particular. Simulation results demonstrate how 3D information could eliminate large numbers of false tracks passing through impenetrable structures. Spurious track rejection would permit Kalman filter coasting times to be significantly increased. Track lifetimes for vehicles occluded by trees and buildings as well as for cars slowing down at corners and intersections could consequently be prolonged. We find high resolution 3D imagery can ideally yield an 83% reduction in the rate of automated tracking failure.

  4. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in the Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. The camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been used successfully by railroads, oil companies, real estate companies, and others.

  5. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several types of scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are challenges in the precise 3D measurement of objects. In this paper the potential of thermal video for 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then tie points are generated with the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, equipped with a 25 mm lens, and was mounted on an Unmanned Aerial Vehicle (UAV). The results show that the accuracy of the 3D model generated from thermal images is comparable to that of a DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with 9 measured GCPs in the area shows a Root Mean Square Error (RMSE) smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.
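
    The tie-point stage of the four-step pipeline above (frame extraction, SIFT tie points, bundle adjustment, dense matching) is the easiest to make concrete. The sketch below generates SIFT tie points between two extracted frames with OpenCV and Lowe's ratio test; it is an illustration assuming an opencv-python build with SIFT support, and the downstream bundle adjustment and dense matching are not shown.

    ```python
    import cv2

    def tie_points(frame_a, frame_b, ratio=0.75):
        """Generate SIFT tie points between two grayscale video frames,
        filtered with Lowe's ratio test."""
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(frame_a, None)
        kp_b, des_b = sift.detectAndCompute(frame_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des_a, des_b, k=2)
        good = []
        for pair in matches:
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])
        # Return matched (frame_a, frame_b) pixel coordinate pairs for the adjustment step
        return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
    ```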

  6. COCOA: tracking in aerial imagery

    NASA Astrophysics Data System (ADS)

    Ali, Saad; Shah, Mubarak

    2006-05-01

    Unmanned Aerial Vehicles (UAVs) are becoming a core intelligence asset for reconnaissance, surveillance and target tracking in urban and battlefield settings. To achieve the goal of automated tracking of objects in UAV videos we have developed a system called COCOA. It processes the video stream through a number of stages. In the first stage, platform motion compensation is performed. Moving object detection then identifies regions of interest, from which object contours are extracted using a level-set-based segmentation. Finally, blob-based tracking is performed for each detected object. Global tracks are generated and used for higher-level processing. COCOA is customizable to different sensor resolutions and is capable of tracking targets as small as 100 pixels. It works seamlessly for both visible and thermal imaging modes. The system is implemented in Matlab and works in batch mode.

  7. Advanced Image Processing of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn; Jobson, Daniel J.; Rahman, Zia-ur; Hines, Glenn

    2006-01-01

    Aerial imagery of the Earth is an invaluable tool for the assessment of ground features, especially during times of disaster. Researchers at the NASA Langley Research Center have developed techniques which have proven to be useful for such imagery. Aerial imagery from various sources, including Langley's Boeing 757 Aries aircraft, has been studied extensively. This paper discusses these studies and demonstrates that better-than-observer imagery can be obtained even when visibility is severely compromised. A real-time, multi-spectral experimental system will be described and numerous examples will be shown.

  8. Absolute High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-Time Aerial Video Imagery for Geo-referenced Orthophoto Registration

    NASA Astrophysics Data System (ADS)

    Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter

    This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to pair an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data, combined with live video images from an onboard camera, to register the local video images against a priori georeferenced orthophotos. This yields a precise, drift-free absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.

  9. Agency Video, Audio and Imagery Library

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2015-01-01

    The purpose of this presentation was to inform the ISS International Partners of the new NASA Agency Video, Audio and Imagery Library (AVAIL) website. AVAIL is a new resource for the public to search for and download NASA-related imagery, and is not intended to replace the current process by which the International Partners receive their Space Station imagery products.

  10. Persistent aerial video registration and fast multi-view mosaicing.

    PubMed

    Molina, Edgardo; Zhu, Zhigang

    2014-05-01

    Capturing aerial imagery at high resolutions often leads to very low frame rate video streams, well under full motion video standards, due to bandwidth, storage, and cost constraints. Low frame rates make registration difficult when an aircraft is moving at high speed or when the global positioning system (GPS) contains large errors or fails. We present a method that takes advantage of persistent cyclic video data collections to perform online registration with drift correction. We split the persistent aerial imagery collection into individual cycles over the scene, identify and correct the registration errors on the first cycle in a batch operation, and then use the corrected base cycle as a reference pass to register and correct subsequent passes online. A set of multi-view panoramic mosaics is then constructed for each aerial pass for representation, presentation and exploitation of the 3D dynamic scene. These sets of mosaics are all aligned to the reference cycle, allowing their direct use in change detection, tracking, and 3D reconstruction/visualization algorithms. Stereo viewing with adaptive baselines and varying view angles is realized by choosing a pair of mosaics from a set of multi-view mosaics. Further, the mosaics for the second and later passes can be generated and visualized online, as there is no further batch error correction.

  11. Object and activity detection from aerial video

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Shi, Feng; Liu, Xin; Ghazel, Mohsen

    2015-05-01

    Aerial video surveillance has advanced significantly in recent years, as inexpensive high-quality video cameras and airborne platforms have become more readily available. Video has become an indispensable part of military operations and is now becoming increasingly valuable in the civil and paramilitary sectors. Such surveillance capabilities are useful for battlefield intelligence and reconnaissance as well as for monitoring major events, border control and critical infrastructure. However, monitoring this growing flood of video data requires significant effort from increasingly large numbers of video analysts. We have developed a suite of aerial video exploitation tools that relieve analysts of mundane monitoring by detecting and flagging objects and activities that require their attention. These tools can be used for both tactical applications and post-mission analytics, so that the video data can be exploited more efficiently and in a more timely manner. A feature-based approach and a pixel-based approach have been developed for a Video Moving Target Indicator (VMTI) that detects moving objects in real time in aerial video. Such moving objects can then be classified by a person detector trained with representative aerial data. We have also developed an activity detection tool that can detect activities of interest in aerial video, such as person-vehicle interaction. We have implemented a flexible framework so that new processing modules can be added easily. The Graphical User Interface (GUI) allows the user to configure the processing pipeline at run-time to evaluate different algorithms and parameters. Promising experimental results have been obtained using these tools, and an evaluation has been carried out to characterize their performance.

  12. Building and road detection from large aerial imagery

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Aoki, Yoshimitsu

    2015-02-01

    Building and road detection from aerial imagery has many applications in a wide range of areas, including urban design, real-estate management, and disaster relief. Extracting buildings and roads from aerial imagery has traditionally been performed manually by human experts, making it a very costly and time-consuming process. Our goal is to develop a system for automatically detecting buildings and roads directly from aerial imagery. Many attempts at automatic aerial imagery interpretation have been proposed in the remote sensing literature, but much of the early work uses local features to classify each pixel or segment into an object label, so these approaches need some prior knowledge of object appearance or the class-conditional distribution of pixel values. Furthermore, some works also need a segmentation step as pre-processing. We therefore use Convolutional Neural Networks (CNN) to learn a mapping from raw pixel values in aerial imagery to three object labels (buildings, roads, and others); in other words, we generate three-channel maps from raw aerial imagery input. We take a patch-based semantic segmentation approach, so we first divide the large aerial imagery into small patches and then train the CNN with those patches and the corresponding three-channel map patches. Finally, we evaluate our system on a large-scale, publicly available road and building detection dataset.
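
    A minimal, self-contained sketch of the patch-based idea described above is given below: a small fully convolutional network maps 3-band patches to a 3-class label map (buildings, roads, others). PyTorch, the layer sizes, the 64x64 patch size and the random stand-in data are assumptions for illustration only; they are not the authors' architecture or dataset.

    ```python
    import torch
    import torch.nn as nn

    class PatchSegNet(nn.Module):
        """Toy fully convolutional network: 3-band aerial patch -> 3-class label map."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.classifier = nn.Conv2d(64, 3, kernel_size=1)  # buildings / roads / others

        def forward(self, x):
            return self.classifier(self.features(x))

    # Hypothetical training step on 64x64 patches cut from a large aerial image
    net = PatchSegNet()
    optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    patches = torch.randn(8, 3, 64, 64)            # stand-in for image patches
    targets = torch.randint(0, 3, (8, 64, 64))     # stand-in for label-map patches
    optimiser.zero_grad()
    loss = criterion(net(patches), targets)
    loss.backward()
    optimiser.step()
    ```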

  13. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    PubMed Central

    Hammoud, Riad I.; Sahin, Cem S.; Blasch, Erik P.; Rhodes, Bradley J.; Wang, Tao

    2014-01-01

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453

  14. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    PubMed

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-10-22

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.

  15. Aerial video mosaicking using binary feature tracking

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2015-05-01

    Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point's image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.
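
    As a rough illustration of the frame-to-reference homography estimation described above, the sketch below matches ORB binary descriptors (a common binary feature, assumed here; the paper's exact descriptor and spatial tracking scheme are not reproduced) and estimates a RANSAC homography with OpenCV, which also rejects outliers such as keypoints on moving vehicles.

    ```python
    import cv2
    import numpy as np

    def frame_to_reference_homography(frame, reference):
        """Match ORB binary descriptors and estimate a RANSAC homography
        from the current frame to a reference (ground-plane) image."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp_f, des_f = orb.detectAndCompute(frame, None)
        kp_r, des_r = orb.detectAndCompute(reference, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_f, des_r)
        src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # outliers rejected
        return H

    # cv2.warpPerspective(frame, H, mosaic_size) would then place the frame in the mosaic
    ```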

  16. Estimating soil organic carbon using aerial imagery and soil surveys

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Widespread implementation of precision agriculture practices requires low-cost, high-quality, georeferenced soil organic carbon (SOC) maps, but currently these maps require expensive sample collection and analysis. Widely available aerial imagery is a low-cost source of georeferenced data. After til...

  17. Texture mapping based on multiple aerial imageries in urban areas

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Ye, Siqi; Wang, Yuefeng; Han, Caiyun; Wang, Chenxi

    2015-12-01

    In realistic 3D model reconstruction, the requirements on texture are very high: texture is one of the key factors affecting the realism of the model, and it is applied through texture mapping technology. In this paper we present a practical approach to texture mapping based on photogrammetric theory, using multiple aerial images of urban areas. The model and the images are matched through the collinearity equations, and, in order to improve texture quality, we describe an automatic approach for selecting the optimal texture for each 3D building from the aerial images of multiple strips. Building textures can be matched automatically by the algorithm. The experimental results show that the texture mapping process has a high degree of automation and improves the efficiency of 3D model reconstruction.
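
    The matching of model and imagery via the collinearity equations, mentioned above, can be illustrated with a small NumPy sketch that projects a 3D building vertex into one aerial image given its exterior orientation; the function name, example camera parameters and coordinate conventions are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def collinearity_project(X, X0, R, f, pp=(0.0, 0.0)):
        """Project a ground point X (3,) into image coordinates using the
        collinearity equations, given camera position X0, rotation matrix R
        (world -> camera), focal length f and principal point pp."""
        u, v, w = R @ (X - X0)
        x = pp[0] - f * u / w
        y = pp[1] - f * v / w
        return np.array([x, y])

    # Hypothetical call: project a roof corner into one image of a strip
    xy = collinearity_project(np.array([5.0, 12.0, 30.0]),
                              X0=np.array([0.0, 0.0, 500.0]),
                              R=np.eye(3), f=100.0)
    ```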

  18. Converting aerial imagery to application maps

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Over the last couple of years in Agricultural Aviation and at the 2014 and 2015 NAAA conventions, we have written about and presented both single-camera and two-camera imaging systems for use on agricultural aircraft. Many aerial applicators have shown a great deal of interest in the imaging systems...

  19. Comparison of hyperspectral imagery with aerial photography and multispectral imagery for mapping broom snakeweed

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby] is one of the most widespread and abundant rangeland weeds in western North America. The objectives of this study were to evaluate airborne hyperspectral imagery and compare it with aerial color-infrared (CIR) photography and multispe...

  20. Encoding and analyzing aerial imagery using geospatial semantic graphs

    SciTech Connect

    Watson, Jean-Paul; Strip, David R.; McLendon, William C.; Parekh, Ojas D.; Diegert, Carl F.; Martin, Shawn Bryan; Rintoul, Mark Daniel

    2014-02-01

    While collection capabilities have yielded an ever-increasing volume of aerial imagery, analytic techniques for identifying patterns in and extracting relevant information from this data have seriously lagged. The vast majority of imagery is never examined, due to a combination of the limited bandwidth of human analysts and the limitations of existing analysis tools. In this report, we describe an alternative, novel approach to both encoding and analyzing aerial imagery, using the concept of a geospatial semantic graph. The advantages of our approach are twofold. First, intuitive templates can be easily specified in terms of the domain language in which an analyst converses. These templates can be used to automatically and efficiently search large graph databases for specific patterns of interest. Second, unsupervised machine learning techniques can be applied to automatically identify patterns in the graph databases, exposing recurring motifs in imagery. We illustrate our approach using real-world data for Anne Arundel County, Maryland, and compare the performance of our approach to that of an expert human analyst.

  1. Building population mapping with aerial imagery and GIS data

    NASA Astrophysics Data System (ADS)

    Ural, Serkan; Hussain, Ejaz; Shan, Jie

    2011-12-01

    Geospatial distribution of population at a scale of individual buildings is needed for analysis of people's interaction with their local socio-economic and physical environments. High resolution aerial images are capable of capturing urban complexities and considered as a potential source for mapping urban features at this fine scale. This paper studies population mapping for individual buildings by using aerial imagery and other geographic data. Building footprints and heights are first determined from aerial images, digital terrain and surface models. City zoning maps allow the classification of the buildings as residential and non-residential. The use of additional ancillary geographic data further filters residential utility buildings out of the residential area and identifies houses and apartments. In the final step, census block population, which is publicly available from the U.S. Census, is disaggregated and mapped to individual residential buildings. This paper proposes a modified building population mapping model that takes into account the effects of different types of residential buildings. Detailed steps are described that lead to the identification of residential buildings from imagery and other GIS data layers. Estimated building populations are evaluated per census block with reference to the known census records. This paper presents and evaluates the results of building population mapping in areas of West Lafayette, Lafayette, and Wea Township, all in the state of Indiana, USA.

  2. Evaluation of orthomosics and digital surface models derived from aerial imagery for crop mapping

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Orthomosics derived from aerial imagery acquired by consumer-grade cameras have been used for crop mapping. However, digital surface models (DSM) derived from aerial imagery have not been evaluated for this application. In this study, a novel method was proposed to extract crop height from DSM and t...

  3. Precise Target Geolocation and Tracking Based on UAV Video Imagery

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), ranging from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors such as a C/A-code GPS and a low-cost IMU on board, allowing a positioning accuracy of only 5 to 10 meters. This accuracy is not sufficient for applications that require high-precision data at the cm level. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data is filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is carried out via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the onboard IMU and RTK GPS sensors and Kalman filtering, together with interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process. Compared with ordinary code-based GPS, the results of this study indicate that RTK observations combined with the proposed method improve target geolocation accuracy by more than a factor of 10.
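
    The filtering step described above, smoothing noisy target fixes into a location and velocity estimate, can be illustrated with a plain linear Kalman filter under a constant-velocity motion model. The paper uses an extended Kalman filter with real sensor data; the sketch below is a simplified stand-in, and the time step, noise covariances and measurement values are illustrative assumptions.

    ```python
    import numpy as np

    dt = 0.1                                              # frame interval in seconds (assumed)
    F = np.block([[np.eye(2), dt * np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])         # constant-velocity motion model
    H = np.hstack([np.eye(2), np.zeros((2, 2))])          # only position is observed
    Q = 0.01 * np.eye(4)                                  # process noise (assumed)
    R = 1.0 * np.eye(2)                                   # measurement noise (assumed)

    x = np.zeros(4)        # state: [east, north, v_east, v_north]
    P = np.eye(4)

    def kalman_step(x, P, z):
        """One predict/update cycle on a 2D geolocation measurement z."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new

    x, P = kalman_step(x, P, np.array([12.3, -4.1]))      # stand-in measurement
    ```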

  4. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being made in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, who visually examine the images and detect and classify the requested subjects. This is a very tedious task, particularly when the proportion of empty images regularly exceeds 90%. In this contribution we present our work aimed at supporting the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. The large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to handle those image sizes efficiently and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined use of a simple, fast feature detector based on local operations on the original image with a more complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for determining feature vectors used for the subsequent elimination of false candidates and for classification tasks.

  5. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Low-altitude remotely sensed imagery from unmanned air vehicles has the advantages of high resolution, easy acquisition, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, because of the operating conditions, the video images are unstable, the targets move fast, and the background is complex, which makes such video difficult to process. In other fields, especially computer vision, research on video imagery is more extensive and is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large body of video image processing work in different fields, including research purposes, data sources, and the pros and cons of the techniques. It then explores the methods best suited to low-altitude remote sensing video processing.

  6. Hypervelocity High Speed Projectile Imagery and Video

    NASA Technical Reports Server (NTRS)

    Henderson, Donald J.

    2009-01-01

    This DVD contains video showing the results of hypervelocity impact. One video shows a projectile impact on a Kevlar-wrapped aluminum bottle containing gaseous oxygen at 3000 psi. Another video shows animations of a two-stage light gas gun.

  7. Feature fusion using ranking for object tracking in aerial imagery

    NASA Astrophysics Data System (ADS)

    Candemir, Sema; Palaniappan, Kannappan; Bunyak, Filiz; Seetharaman, Guna

    2012-06-01

    Aerial wide-area monitoring and tracking using multi-camera arrays poses unique challenges compared to standard full motion video analysis due to low frame rate sampling, accurate registration due to platform motion, low resolution targets, limited image contrast, and static and dynamic parallax occlusions. We have developed a low frame rate tracking system that fuses a rich set of intensity, texture and shape features, which enables adaptation of the tracker to dynamic environment changes and target appearance variabilities. However, improper fusion and overweighting of low quality features can adversely affect target localization and reduce tracking performance. Moreover, the large computational cost associated with extracting a large number of image-based feature sets will influence tradeoffs for real-time and on-board tracking. This paper presents a framework for dynamic online ranking-based feature evaluation and fusion in aerial wide-area tracking. We describe a set of efficient descriptors suitable for small sized targets in aerial video based on intensity, texture, and shape feature representations or views. Feature ranking is then used as a selection procedure where target-background discrimination power for each (raw) feature view is scored using a two-class variance ratio approach. A subset of the k-best discriminative features are selected for further processing and fusion. The target match probability or likelihood maps for each of the k features are estimated by comparing target descriptors within a search region using a sliding window approach. The resulting k likelihood maps are fused for target localization using the normalized variance ratio weights. We quantitatively measure the performance of the proposed system using ground-truth tracks within the framework of our tracking evaluation test-bed that incorporates various performance metrics. The proposed feature ranking and fusion approach increases localization accuracy by reducing multimodal effects

  8. First results for an image processing workflow for hyperspatial imagery acquired with a low-cost unmanned aerial vehicle (UAV).

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Very high-resolution images from unmanned aerial vehicles (UAVs) have great potential for use in rangeland monitoring and assessment, because the imagery fills the gap between ground-based observations and remotely sensed imagery from aerial or satellite sensors. However, because UAV imagery is ofte...

  9. Detection, classification, and tracking of compact objects in video imagery

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.; Nebrich, Mark A.

    2012-06-01

    A video data conditioner (VDC) for automated full-motion video (FMV) detection, classification, and tracking is described. VDC extends our multi-stage image data conditioner (IDC) to video. Key features include robust detection of compact objects in motion imagery, coarse classification of all detections, and tracking of fixed and moving objects. An implementation of the detection and tracking components of the VDC on an Apple iPhone is discussed. Preliminary tracking results of naval ships captured during the Phoenix Express 2009 Photo Exercise are presented.

  10. Texture and scale in object-based analysis of subdecimeter resolution unmanned aerial vehicle (UAV) imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Imagery acquired with unmanned aerial vehicles (UAVs) has great potential for incorporation into natural resource monitoring protocols due to their ability to be deployed quickly and repeatedly and to fly at low altitudes. While the imagery may have high spatial resolution, the spectral resolution i...

  11. Noise reduction of video imagery through simple averaging

    NASA Astrophysics Data System (ADS)

    Vorder Bruegge, Richard W.

    1999-02-01

    Examiners in the Special Photographic Unit (SPU) of the Federal Bureau of Investigation Laboratory Division conduct examinations of questioned photographic evidence of all types, including surveillance imagery recorded on film and video tape. A primary type of examination is the side-by-side comparison, in which unknown objects or people depicted in the questioned images are compared with known objects recovered from suspects or with photographs of the suspects themselves. Most imagery received in the SPU for such comparisons originates from time-lapse video or film systems. In such circumstances, the delay between sequential images is so great that standard image summing and/or averaging techniques are useless as a means of improving image detail in questioned subjects or objects without also resorting to processing-intensive pattern reconstruction algorithms. Occasionally, however, real-time video imagery is received that includes a questioned object at rest. In such cases, it is possible to use relatively simple image averaging techniques to reduce transient noise in the images without further compromising the already-poor resolution inherent in most video surveillance images. This paper presents an example of one such case in which multiple images were averaged to reduce the transient noise sufficiently to permit the positive identification of a vehicle based upon the presence of scrape marks and dents on the side of the vehicle.
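
    Simple averaging of co-registered frames, as described above, is essentially a one-line operation once the frames are aligned. The sketch below assumes a list of already-aligned 8-bit frames of a stationary object; for zero-mean transient noise the signal-to-noise ratio improves roughly with the square root of the number of frames.

    ```python
    import numpy as np

    def average_frames(frames):
        """Average a list of co-registered frames (H, W) or (H, W, 3) to
        suppress transient noise without further blurring static detail."""
        stack = np.stack([f.astype(np.float64) for f in frames])
        return stack.mean(axis=0).astype(np.uint8)

    # averaged = average_frames(aligned_frames) would feed the side-by-side comparison
    ```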

  12. Registering aerial video images using the projective constraint.

    PubMed

    Jackson, Brian P; Goshtasby, A Ardeshir

    2010-03-01

    To separate object motion from camera motion in an aerial video, consecutive frames are registered at their planar background. Feature points are selected in consecutive frames and those that belong to the background are identified using the projective constraint. Corresponding background feature points are then used to register and align the frames. By aligning video frames at the background and knowing that objects move against the background, a means to detect and track moving objects is provided. Only scenes with planar background are considered in this study. Experimental results show improvement in registration accuracy when using the projective constraint to determine the registration parameters as opposed to finding the registration parameters without the projective constraint.
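
    One way to read the projective constraint described above is as an inlier/outlier test: feature correspondences that fit a single background homography are treated as background, while those that do not are candidate moving objects. The sketch below illustrates that idea using OpenCV's RANSAC homography; it is not the authors' exact formulation, and the array shapes and inlier threshold are assumptions.

    ```python
    import cv2
    import numpy as np

    def split_background_motion(pts_prev, pts_curr, thresh=3.0):
        """Fit a background homography between consecutive frames and label each
        correspondence as background (inlier) or potential moving object (outlier).
        pts_prev, pts_curr: (N, 2) arrays of matched feature locations."""
        H, mask = cv2.findHomography(np.float32(pts_prev).reshape(-1, 1, 2),
                                     np.float32(pts_curr).reshape(-1, 1, 2),
                                     cv2.RANSAC, thresh)
        mask = mask.ravel().astype(bool)
        return H, pts_curr[mask], pts_curr[~mask]   # homography, background pts, moving pts
    ```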

  13. Analysis of aerial multispectral imagery to assess water quality parameters of Mississippi water bodies

    NASA Astrophysics Data System (ADS)

    Irvin, Shane Adison

    The goal of this study was to demonstrate the application of aerial imagery as a tool for detecting water quality indicators in a three-mile segment of Tibbee Creek in Clay County, Mississippi. Water samples from 10 transects were collected on each sampling date over two periods in 2010 and 2011. Temperature and dissolved oxygen (DO) were measured at each point, and water samples were tested for turbidity and total suspended solids (TSS). Relative reflectance was extracted from high-resolution (0.5 meter) multispectral aerial images. A regression model was developed for turbidity and TSS as a function of the reflectance values for specific sampling dates. The best model was used to predict turbidity and TSS using datasets outside the original model date. The development of an appropriate predictive model for water quality assessment based on the relative reflectance of aerial imagery is affected by the quality of the imagery and the time of sampling.
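
    The regression step described above, relating a water quality parameter to per-band relative reflectance, can be sketched with scikit-learn. The band choices, array shapes and numeric values below are placeholders for illustration, not the study's data or final model.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    # Stand-in data: per-sample relative reflectance in three spectral bands
    reflectance = np.array([[0.11, 0.09, 0.21],
                            [0.14, 0.12, 0.25],
                            [0.10, 0.08, 0.19],
                            [0.16, 0.13, 0.28]])
    turbidity = np.array([8.2, 12.5, 7.1, 14.0])     # NTU, illustrative values

    model = LinearRegression().fit(reflectance, turbidity)
    print("R^2 on training data:", r2_score(turbidity, model.predict(reflectance)))
    # model.predict(new_reflectance) would then estimate turbidity for other dates
    ```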

  14. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis

    DTIC Science & Technology

    1989-08-01

    Final Technical Report. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban areas.

  15. Low-cost Tools for Aerial Video Geolocation and Air Traffic Analysis for Delay Reduction Using Google Earth

    NASA Astrophysics Data System (ADS)

    Zetterlind, V.; Pledgie, S.

    2009-12-01

    Low-cost, low-latency, robust geolocation and display of aerial video is a common need for a wide range of earth observing as well as emergency response and security applications. While hardware costs for aerial video collection systems, GPS, and inertial sensors have been decreasing, software costs for geolocation algorithms and reference imagery/DTED remain expensive and highly proprietary. As part of a Federal Small Business Innovative Research project, MosaicATM and EarthNC, Inc have developed a simple geolocation system based on the Google Earth API and Google's 'built-in' DTED and reference imagery libraries. This system geolocates aerial video based on platform and camera position, attitude, and field-of-view metadata using geometric photogrammetric principles of ray-intersection with DTED. Geolocated video can be directly rectified and viewed in the Google Earth API during processing. Work is underway to extend our geolocation code to NASA World Wind for additional flexibility and a fully open-source platform. In addition to our airborne remote sensing work, MosaicATM has developed the Surface Operations Data Analysis and Adaptation (SODAA) tool, funded by NASA Ames, which supports analysis of airport surface operations to optimize aircraft movements and reduce fuel burn and delays. As part of SODAA, MosaicATM and EarthNC, Inc have developed powerful tools to display national airspace data and time-animated 3D flight tracks in Google Earth for 4D analysis. The SODAA tool can convert raw format flight track data, FAA National Flight Data (NFD), and FAA 'Adaptation' airport surface data to a spatial database representation and then to Google Earth KML. The SODAA client provides users with a simple graphical interface through which to generate queries with a wide range of predefined and custom filters, plot results, and export for playback in Google Earth in conjunction with NFD and Adaptation overlays.
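
    The geometric core named above, intersecting a camera ray with DTED given platform position, attitude and field of view, can be sketched as a simple ray-marching loop over an elevation grid. The function below is an illustration only; the grid indexing, local metric coordinate frame, step size and parameter names are assumptions, and production systems (including the Google Earth API based tool described) handle datum conversions and interpolation far more carefully.

    ```python
    import numpy as np

    def geolocate(cam_pos, ray_dir, dted, origin, cell, step=1.0, max_range=5000.0):
        """March along a unit ray from the camera until it drops below the terrain.
        cam_pos: (x, y, z) camera position in a local metric frame
        ray_dir: unit pointing vector from platform/camera attitude and pixel direction
        dted:    2D array of terrain elevations; origin/cell map (x, y) to grid indices."""
        for t in np.arange(0.0, max_range, step):
            p = cam_pos + t * ray_dir
            col = int((p[0] - origin[0]) / cell)
            row = int((p[1] - origin[1]) / cell)
            if 0 <= row < dted.shape[0] and 0 <= col < dted.shape[1]:
                if p[2] <= dted[row, col]:
                    return p              # first terrain intersection ~ ground location
        return None
    ```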

  16. Practical use of video imagery in nearshore oceanographic field studies

    USGS Publications Warehouse

    Holland, K.T.; Holman, R.A.; Lippmann, T.C.; Stanley, J.; Plant, N.

    1997-01-01

    An approach was developed for using video imagery to quantify, in terms of both spatial and temporal dimensions, a number of naturally occurring (nearshore) physical processes. The complete method is presented, including the derivation of the geometrical relationships relating image and ground coordinates, principles to be considered when working with video imagery and the two-step strategy for calibration of the camera model. The techniques are founded on the principles of photogrammetry, account for difficulties inherent in the use of video signals, and have been adapted to allow for flexibility of use in field studies. Examples from field experiments indicate that this approach is both accurate and applicable under the conditions typically experienced when sampling in coastal regions. Several applications of the camera model are discussed, including the measurement of nearshore fluid processes, sand bar length scales, foreshore topography, and drifter motions. Although we have applied this method to the measurement of nearshore processes and morphologic features, these same techniques are transferable to studies in other geophysical settings.

  17. Acquisition, orthorectification, and object-based classification of unmanned aerial vehicle (UAV) imagery for rangeland monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this paper, we examine the potential of using a small unmanned aerial vehicle (UAV) for rangeland inventory, assessment and monitoring. Imagery with 8-cm resolution was acquired over 290 ha in southwestern Idaho. We developed a semi-automated orthorectification procedure suitable for handling lar...

  18. Incremental road discovery from aerial imagery using curvilinear spanning tree (CST) search

    NASA Astrophysics Data System (ADS)

    Wang, Guozhi; Huang, Yuchun; Xie, Rongchang; Zhang, Hongchang

    2016-10-01

    Robust detection of road networks in aerial imagery is a challenging task, since roads differ in pavement texture, roadside surroundings, and grade. Roads of different grades have different curvilinear saliency in the aerial imagery. This paper is motivated to incrementally extract roads and construct the topology of the road network in aerial imagery from a higher-grade-first perspective. Inspired by the spanning tree technique, the proposed method starts from the robust extraction of the most salient road segment(s) of the road network and incrementally connects segments of lower curvilinear saliency until all road segments in the network are extracted. The proposed algorithm includes: curvilinear-path-based road morphological enhancement, extraction of road segments, and a spanning tree search for incremental road discovery. It is tested on a diverse set of aerial imagery acquired in city and inter-city areas. Experimental results show that the proposed curvilinear spanning tree (CST) can detect roads efficiently and construct the topology of the road network effectively. It is promising for change detection of road networks.

  19. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery

    PubMed Central

    Rivas Casado, Monica; Ballesteros Gonzalez, Rocio; Kriechbaumer, Thomas; Veal, Amanda

    2015-01-01

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km river reach along the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results leverage the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management. PMID:26556355

  20. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery.

    PubMed

    Casado, Monica Rivas; Gonzalez, Rocio Ballesteros; Kriechbaumer, Thomas; Veal, Amanda

    2015-11-04

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km river reach along the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results leverage the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management.

  1. Identification of spatially corresponding imagery using content-based image retrieval in the context of UAS video exploitation

    NASA Astrophysics Data System (ADS)

    Brüstle, Stefan; Manger, Daniel; Mück, Klaus; Heinze, Norbert

    2014-06-01

    For many tasks in the fields of reconnaissance and surveillance it is important to know the spatial location represented by the imagery to be exploited. A task involving the assessment of changes, e.g. the appearance or disappearance of an object of interest at a certain location, can typically not be accomplished without spatial location information associated with the imagery. Often, such georeferenced imagery is stored in an archive enabling the user to query for the data with respect to its spatial location. Thus, the user is able to effectively find spatially corresponding imagery to be used for change detection tasks. In the field of exploitation of video taken from unmanned aerial systems (UAS), spatial location data is usually acquired using a GPS receiver, together with an INS device providing the sensor orientation, both integrated in the UAS. If during a flight valid GPS data becomes unavailable for a period of time, e.g. due to sensor malfunction, transmission problems or jamming, the imagery gathered during that time is not applicable for change detection tasks based merely on its georeference. Furthermore, GPS and INS inaccuracy together with a potentially poor knowledge of ground elevation can also render location information inapplicable. On the other hand, change detection tasks can be hard to accomplish even if imagery is well georeferenced as a result of occlusions within the imagery, due to e.g. clouds or fog, or image artefacts, due to e.g. transmission problems. In these cases a merely georeference based approach to find spatially corresponding imagery can also be inapplicable. In this paper, we present a search method based on the content of the images to find imagery spatially corresponding to given imagery independent from georeference quality. Using methods from content-based image retrieval, we build an image database which allows for querying even large imagery archives efficiently. We further evaluate the benefits of this method in the

  2. Target tracking and localization using infrared video imagery

    NASA Astrophysics Data System (ADS)

    Barsamian, Alex; Berk, Vincent H.; Cybenko, George V.

    2006-05-01

    One of the significant problems in visual tracking of objects is the requirement for a human analyst to post-process and interpret the data. For instance, consider the task of tracking a target, in this case a moving person, using video imagery. When this person hides behind an obstruction, and is therefore no longer visible by the camera, conventional tracking systems quickly lose track of the target and are no longer able to indicate where the target is or where it was headed. A human interpreter is then needed to conclude that the person is hiding, and probably (with certain probability) is still there. A Process Query System (PQS) is able to track and predict the path of arbitrary objects, based only on a description of their dynamic behavior, thus eliminating the need for precise identification of each object in every frame. The PQS is therefore able to draw human-like conclusions, allowing the system to track the person even when he/she is out of view. Additionally, using dynamic descriptions of tracked objects allows for low-quality video signals, or even infrared video, to be used for tracking. In this paper we introduce a novel way of implementing a video-based tracking system using a Process Query System to predict the position of objects in the environment, even after they have disappeared from view. Although the image processing pipeline is trivial, tracking accuracy is remarkably high, suggesting that overall performance can be improved even further with the use of more sophisticated video processing and image recognition technology.

  3. Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Hayden, David; Thompson, David R.; Castano, Rebecca

    2013-01-01

    Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. To address this situation, computationally efficient algorithms were developed for analyzing science imagery onboard the spacecraft. These algorithms autonomously cluster the data into classes of similar imagery, enabling selective downlink of representatives of each class and a map classifying the imaged terrain rather than the full dataset, reducing the volume of the downlinked data. A range of approaches was examined, including k-means clustering using image features based on color, texture, and temporal and spatial arrangement
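
    The clustering-and-representatives idea described above can be sketched in a few lines: compute a simple per-frame feature (here a color histogram, an assumption for illustration), cluster the frames with k-means, and keep the frame nearest each cluster centre as the downlink representative. The feature choice, cluster count and function names are placeholders, not the flight algorithms.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def select_representatives(frames, k=4, bins=8):
        """Cluster frames by color histogram and return one representative per cluster."""
        feats = []
        for f in frames:                      # each frame: (H, W, 3) uint8 array
            hist, _ = np.histogramdd(f.reshape(-1, 3), bins=(bins,) * 3,
                                     range=[(0, 256)] * 3)
            feats.append(hist.ravel() / hist.sum())
        feats = np.array(feats)
        km = KMeans(n_clusters=k, n_init=10).fit(feats)
        reps = []
        for c in range(k):
            idx = np.where(km.labels_ == c)[0]
            d = np.linalg.norm(feats[idx] - km.cluster_centers_[c], axis=1)
            reps.append(int(idx[np.argmin(d)]))   # frame index closest to the centroid
        return reps, km.labels_                   # downlink representatives + class map
    ```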

  4. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.

    PubMed

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-03-26

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.
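
    The blob classification stage described above, HOG features fed to a linear SVM, can be sketched with scikit-image and scikit-learn. The patch size, the random stand-in training data and the omission of the DCT half of the paper's hybrid descriptor are all simplifications for illustration, not the authors' configuration.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def hog_features(patches):
        """Compute HOG descriptors for a list of fixed-size grayscale blob patches."""
        return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for p in patches])

    # Stand-in training data: 64x32 patches, label 1 = pedestrian, 0 = clutter
    pos = [np.random.rand(64, 32) for _ in range(20)]
    neg = [np.random.rand(64, 32) for _ in range(20)]
    X = hog_features(pos + neg)
    y = np.array([1] * 20 + [0] * 20)

    clf = LinearSVC().fit(X, y)
    # clf.predict(hog_features(candidate_blobs)) would filter detected blobs before tracking
    ```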

  5. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery

    PubMed Central

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-01-01

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564

  6. Evaluation of unmanned aerial vehicle (UAV) imagery to model vegetation heights in Hulun Buir grassland ecosystem

    NASA Astrophysics Data System (ADS)

    Wang, D.; Xin, X.; Li, Z.

    2015-12-01

    Vertical vegetation structure in grassland ecosystems is needed to assess grassland health and monitor available forage for livestock and wildlife habitat. Traditional ground-based field methods for measuring vegetation heights are time-consuming. Most emerging airborne remote sensing techniques capable of measuring surface and vegetation height (e.g., LIDAR) are too expensive to apply at broad scales. Aerial or spaceborne stereo imagery has the cost advantage for mapping the height of tall vegetation, such as forest. However, the accuracy and uncertainty of using stereo imagery for modeling heights of short vegetation, such as grass (generally lower than 50 cm), need to be investigated. In this study, 2.5-cm-resolution UAV stereo imagery is used to model vegetation heights in the Hulun Buir grassland ecosystem. Strong correlations (r > 0.9) were observed between vegetation heights derived from UAV stereo imagery and field-measured heights at both the individual and plot level. However, vegetation heights tended to be underestimated in the imagery, especially in areas with high vegetation coverage. The strong correlations between field-collected vegetation heights and metrics derived from UAV stereo imagery suggest that UAV stereo imagery can be used to estimate short vegetation heights such as those in grassland ecosystems. Future work will be needed to verify the extensibility of the methods to other sites and vegetation types.
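
    A minimal sketch of the kind of agreement check reported above, assuming two paired arrays of plot-level heights (one from the UAV stereo canopy height model, one field-measured); the numbers are hypothetical.

      import numpy as np
      from scipy.stats import pearsonr

      uav_heights = np.array([12.5, 18.0, 25.3, 31.2, 40.1])    # cm, hypothetical
      field_heights = np.array([14.0, 20.5, 27.1, 35.0, 46.2])  # cm, hypothetical

      r, p = pearsonr(uav_heights, field_heights)
      bias = np.mean(uav_heights - field_heights)  # negative bias -> underestimation
      print(f"r = {r:.2f}, p = {p:.3f}, mean bias = {bias:.1f} cm")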

  7. Analysis and Exploitation of Automatically Generated Scene Structure from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Nilosek, David R.

    The recent advancements made in the field of computer vision, along with the ever-increasing computational power available, have opened up opportunities in the field of automated photogrammetry. Many researchers have focused on using these powerful computer vision algorithms to extract three-dimensional point clouds of scenes from multi-view imagery, with the ultimate goal of creating a photo-realistic scene model. However, geographically accurate three-dimensional scene models have the potential to be exploited for much more than just visualization. This work looks at utilizing automatically generated scene structure from near-nadir aerial imagery to identify and classify objects within the structure, through the analysis of spatial-spectral information. The restriction to this type of imagery is imposed because of its common availability. Popular third-party computer-vision algorithms are used to generate the scene structure. A voxel-based approach for surface estimation is developed using Manhattan-world assumptions. A surface estimation confidence metric is also presented. This approach provides the basis for further analysis of surface materials, incorporating spectral information. Two cases of spectral analysis are examined: when additional hyperspectral imagery of the reconstructed scene is available, and when only R,G,B spectral information can be obtained. A method for registering the surface estimation to hyperspectral imagery, through orthorectification, is developed. Atmospherically corrected hyperspectral imagery is used to assign reflectance values to estimated surface facets for physical simulation with DIRSIG. A spatial-spectral region growing-based segmentation algorithm is developed for the R,G,B limited case, in order to identify possible materials for user attribution. Finally, an analysis of the geographic accuracy of automatically generated three-dimensional structure is performed. An end-to-end, semi-automated, workflow

  8. A procedure for orthorectification of sub-decimeter resolution imagery obtained with an unmanned aerial vehicle (UAV)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Digital aerial photography acquired with unmanned aerial vehicles (UAVs) has great value for resource management due to the flexibility and relatively low cost for image acquisition, and very high resolution imagery (5 cm) which allows for mapping bare soil and vegetation types, structure and patter...

  9. Unmanned Aerial Vehicles Produce High-Resolution Seasonally-Relevant Imagery for Classifying Wetland Vegetation

    NASA Astrophysics Data System (ADS)

    Marcaccio, J. V.; Markle, C. E.; Chow-Fraser, P.

    2015-08-01

    With recent advances in technology, personal aerial imagery acquired with unmanned aerial vehicles (UAVs) has transformed the way ecologists can map seasonal changes in wetland habitat. Here, we use a multi-rotor (consumer quad-copter, the DJI Phantom 2 Vision+) UAV to acquire a high-resolution (< 8 cm) composite photo of a coastal wetland in summer 2014. Using validation data collected in the field, we determine if a UAV image and SWOOP (Southwestern Ontario Orthoimagery Project) image (collected in spring 2010) differ in their classification of dominant vegetation type and percent cover of three plant classes: submerged aquatic vegetation, floating aquatic vegetation, and emergent vegetation. The UAV imagery was more accurate than available SWOOP imagery for mapping percent cover of the submerged and floating vegetation categories, but both were able to accurately determine the dominant vegetation type and percent cover of emergent vegetation. Our results underscore the value and potential for affordable UAVs (complete quad-copter system < 3,000 CAD) to revolutionize the way ecologists obtain imagery and conduct field research. In Canada, new UAV regulations make this an easy and affordable way to obtain multiple high-resolution images of small (< 1.0 km2) wetlands, or portions of larger wetlands, throughout a year.

  10. Vehicle detection from very-high-resolution (VHR) aerial imagery using attribute belief propagation (ABP)

    NASA Astrophysics Data System (ADS)

    Wang, Yanli; Li, Ying; Zhang, Li; Huang, Yuchun

    2016-10-01

    With the popularity of very-high-resolution (VHR) aerial imagery, the shape, color, and context attributes of vehicles are better characterized. Due to varying road surroundings and imaging conditions, vehicle attributes can be adversely affected, so that vehicles are mistakenly detected or missed. This paper is motivated to robustly extract rich attribute features for detecting vehicles in VHR imagery under different scenarios. Based on the hierarchical component tree of vehicle context, attribute belief propagation (ABP) is proposed to detect salient vehicles from a statistical perspective. With the Max-tree data structure, the multi-level component tree around the road network is efficiently created. The spatial relationship between a vehicle and its surrounding context is established with the belief definition of vehicle attributes. To effectively correct single-level belief errors, inter-level belief linkages enforce consistency of belief assignment between corresponding components at different levels. ABP starts from an initial set of vehicle beliefs calculated from vehicle attributes, and then iterates through each component by applying inter-level belief passing until convergence. The optimal vehicle belief of each component is obtained by iteratively minimizing its belief function. The proposed algorithm is tested on a diverse set of VHR imagery acquired in city and inter-city areas of western and southern China. Experimental results show that the proposed algorithm can detect vehicles efficiently and effectively suppress erroneous detections. The proposed ABP framework is promising for robustly classifying vehicles in VHR aerial imagery.

  11. Lori Losey - The Woman Behind the Video Camera

    NASA Video Gallery

    The often-spectacular aerial video imagery of NASA flight research, airborne science missions and space satellite launches doesn't just happen. Much of it is the work of Lori Losey, senior video pr...

  12. Vectorization of Road Data Extracted from Aerial and UAV Imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Häufel, Gisela; Pohl, Melanie

    2016-06-01

    Road databases are essential instances of urban infrastructure. Therefore, automatic road detection from sensor data has been an important research activity for many decades. Given aerial images of sufficient resolution, dense 3D reconstruction can be performed. Starting from a classification of road pixels obtained from combined elevation and optical data, we present in this paper a five-step procedure for creating vectorized road networks. The main steps of the algorithm are: preprocessing, thinning, polygonization, filtering, and generalization. In particular, for the generalization step, which represents the principal area of innovation, two strategies are presented. The first strategy is a modification of the Douglas-Peucker algorithm to reduce the number of vertices, while the second strategy allows a smoother representation of street windings by Bézier curves, which reduces the total curvature defined for the dataset by an order of magnitude. We tested our approach on three datasets of different complexity. The quantitative assessment of the results was performed by means of shapefiles from OpenStreetMap data. For a threshold of 6 m, completeness and correctness values of up to 85% were achieved.
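
    The generalization step builds on Douglas-Peucker simplification; the following is a minimal sketch of the classic algorithm (the authors' modification and the Bézier smoothing are not reproduced), with a hypothetical polyline and tolerance.

      import numpy as np

      def point_line_distance(p, a, b):
          """Perpendicular distance of point p from the chord a-b."""
          if np.allclose(a, b):
              return np.linalg.norm(p - a)
          cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
          return abs(cross) / np.linalg.norm(b - a)

      def douglas_peucker(points, tol):
          """Recursively drop vertices closer than `tol` to the chord."""
          points = np.asarray(points, dtype=float)
          dists = [point_line_distance(p, points[0], points[-1]) for p in points[1:-1]]
          if not dists or max(dists) <= tol:
              return [tuple(points[0]), tuple(points[-1])]
          idx = int(np.argmax(dists)) + 1
          left = douglas_peucker(points[:idx + 1], tol)
          right = douglas_peucker(points[idx:], tol)
          return left[:-1] + right  # avoid duplicating the split vertex

      centreline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
      print(douglas_peucker(centreline, tol=0.5))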

  13. Challenges in collecting hyperspectral imagery of coastal waters using Unmanned Aerial Vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    English, D. C.; Herwitz, S.; Hu, C.; Carlson, P. R., Jr.; Muller-Karger, F. E.; Yates, K. K.; Ramsewak, D.

    2013-12-01

    Airborne multi-band remote sensing is an important tool for many aquatic applications, and the increased spectral information from hyperspectral sensors may increase the utility of coastal surveys. Recent technological advances allow Unmanned Aerial Vehicles (UAVs) to be used as alternatives or complements to manned aircraft or in situ observing platforms, and promise significant advantages for field studies. These include the ability to conduct programmed flight plans, prolonged and coordinated surveys, and agile flight operations under difficult conditions, such as measurements made at low altitudes. Hyperspectral imagery collected from UAVs should allow increased differentiation of water column or shallow benthic communities at relatively small spatial scales. However, the analysis of hyperspectral imagery from airborne platforms over shallow coastal waters differs from that used for terrestrial or oligotrophic ocean color imagery, and the operational constraints and considerations for the collection of such imagery from autonomous platforms also differ from terrestrial surveys using manned aircraft. Multispectral and hyperspectral imagery of shallow seagrass and coral environments in the Florida Keys was collected with various sensor systems mounted on manned and unmanned aircraft in May 2012, October 2012, and May 2013. The imaging systems deployed on UAVs included NovaSol's Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK), a Tetracam multispectral imaging system, and the Sunflower hyperspectral imager from Galileo Group, Inc. The UAVs carrying these systems were Xtreme Aerial Concepts' Vision-II Rotorcraft UAV, MLB Company's Bat-4 UAV, and NASA's SIERRA UAV, respectively. Additionally, the Galileo Group's manned aircraft also surveyed the areas with their AISA Eagle hyperspectral imaging system. For both manned and autonomous flights, cloud cover and sun glint (solar and viewing angles) were dominant constraints on retrieval of quantitatively

  14. Detecting blind building façades from highly overlapping wide angle aerial imagery

    NASA Astrophysics Data System (ADS)

    Burochin, Jean-Pascal; Vallet, Bruno; Brédif, Mathieu; Mallet, Clément; Brosset, Thomas; Paparoditis, Nicolas

    2014-10-01

    This paper deals with the identification of blind building façades, i.e. façades which have no openings, in wide angle aerial images with a decimeter pixel size, acquired by nadir-looking cameras. This blindness characterization is in general crucial for real estate estimation and has, at least in France, particular importance for evaluating the legal permission to construct on a parcel under local urban planning schemes. We assume that we have at our disposal an aerial survey with a relatively high stereo overlap along-track and across-track, and a 3D city model of LoD 1, which may have been generated from the input images. The 3D model is textured with the aerial imagery by taking into account the 3D occlusions and by selecting for each façade the best available resolution texture seeing the whole façade. We then parse all 3D façade textures by looking for evidence of openings (windows or doors). This evidence is characterized by a comprehensive set of basic radiometric and geometrical features. The blindness prediction is then made through a supervised Support Vector Machine (SVM) classification. Despite the relatively low resolution of the images, we reach a classification accuracy of around 85% on decimeter resolution imagery with 60% × 40% stereo overlap. On the one hand, we show that the results are very sensitive to the texture resampling process and to vegetation presence on façade textures. On the other hand, the most relevant features for our classification framework are related to texture uniformity and horizontal aspect and to the maximal contrast of the opening detections. We conclude that standard aerial imagery used to build 3D city models can also be exploited to some extent, and at no additional cost, for façade blindness characterization.

  15. The Effects of Gender and Music Video Imagery on Sexual Attitudes.

    ERIC Educational Resources Information Center

    Kalof, Linda

    1999-01-01

    Examines the influence of gender and exposure to gender-stereotyped music video imagery on the sexual attitudes of male and female viewers. Finds that traditional sexual imagery had a significant effect on attitudes about adversarial sexual relationships and gender had main effects on 3 of 4 sexual attitudes. (CMK)

  16. Precise Target Geolocation Based on Integration of Thermal Video Imagery and RTK GPS in UAVs

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadras Javan, F.

    2015-12-01

    There is an increasingly large number of uses for Unmanned Aerial Vehicles (UAVs), from surveillance and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors such as C/A-code GPS and a low-cost IMU on board, allowing a positioning accuracy of only 5 to 10 meters. This low accuracy means they cannot be used in applications that require high-precision, cm-level data. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using a linear Kalman filter, which provides a smoothed estimate of target location and target velocity. Accurate geolocation of targets during image acquisition is performed via traditional photogrammetric bundle adjustment, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process.
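
    A minimal sketch of the linear Kalman filtering step mentioned above, using a constant-velocity state model for the target's east/north position; the process and measurement noise values are illustrative, not the authors' tuned parameters.

      import numpy as np

      def kalman_track(measurements, dt=1.0, q=0.5, r=2.0):
          """measurements: (N, 2) array of east/north target positions in metres."""
          F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                        [0, 0, 1, 0], [0, 0, 0, 1]], float)  # state: [e, n, ve, vn]
          H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
          Q, R = q * np.eye(4), r * np.eye(2)
          x = np.array([*measurements[0], 0.0, 0.0])
          P = 10.0 * np.eye(4)
          smoothed = []
          for z in measurements:
              x, P = F @ x, F @ P @ F.T + Q                  # predict
              K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
              x = x + K @ (z - H @ x)                        # update with measurement
              P = (np.eye(4) - K @ H) @ P
              smoothed.append(x[:2].copy())
          return np.array(smoothed)                          # smoothed positions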

  17. Effective delineation of urban flooded areas based on aerial ortho-photo imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Guindon, Bert; Raymond, Don; Hong, Gang

    2016-10-01

    The combination of rapid global urban growth and climate change has resulted in an increased occurrence of major urban flood events across the globe. The distribution of flooded area is one of the key information layers for applications of emergency planning and response management. While SAR systems and technologies have been widely used for flood area delineation, radar images suffer from range ambiguities arising from corner reflection effects and shadowing in dense urban settings. A new mapping framework is proposed for the extraction and quantification of flood extent based on aerial optical multi-spectral imagery and ancillary data. This involves first mapping the flood areas directly visible to the sensor. Subsequently, the complete area of submergence is estimated from this initial mapping using inference techniques based on baseline data such as land cover and GIS information such as available digital elevation models. The methodology has been tested and proven effective using aerial photography for the case of the 2013 flood in Calgary, Canada.

  18. The effects of gender and music video imagery on sexual attitudes.

    PubMed

    Kalof, L

    1999-06-01

    This study examined the influence of gender and exposure to gender-stereotyped music video imagery on sexual attitudes (adversarial sexual beliefs, acceptance of rape myths, acceptance of interpersonal violence, and gender role stereotyping). A group of 44 U.S. college students were randomly assigned to 1 of 2 groups that viewed either a video portraying stereotyped sexual imagery or a video that excluded all sexual images. Exposure to traditional sexual imagery had a significant main effect on attitudes about adversarial sexual relationships, and gender had main effects on 3 of 4 sexual attitudes. There was some evidence of an interaction between gender and exposure to traditional sexual imagery on the acceptance of interpersonal violence.

  19. Lake Superior water quality near Duluth from analysis of aerial photos and ERTS imagery

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.; Van Domelen, J. F.

    1973-01-01

    ERTS imagery of Lake Superior in the late summer of 1972 shows dirty water near the city of Duluth. Water samples and simultaneous photographs were taken on three separate days following a heavy storm which caused muddy runoff water. The water samples were analyzed for turbidity, color, and solids. Reflectance and transmittance characteristics of the water samples were determined with a spectrophotometer apparatus. This same apparatus attached to a microdensitometer was used to analyze the photographs for the approximate colors or wavelengths of reflected energy that caused the exposure. Although other parameters do correlate for any one particular day, it is only the water quality parameter of turbidity that correlates with the aerial imagery on all days, as the character of the dirty water changes due to settling and mixing.

  20. EROS main image file - A picture perfect database for Landsat imagery and aerial photography

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    The Earth Resources Observation System (EROS) Program was established by the U.S. Department of the Interior in 1966 under the administration of the Geological Survey. It is primarily concerned with the application of remote sensing techniques for the management of natural resources. The retrieval system employed to search the EROS database is called INORAC (Inquiry, Ordering, and Accounting). A description is given of the types of images identified in EROS, taking into account Landsat imagery, Skylab images, Gemini/Apollo photography, and NASA aerial photography. Attention is given to retrieval commands, geographic coordinate searching, refinement techniques, various online functions, and questions regarding the access to the EROS Main Image File.

  1. Environmental waste site characterization utilizing aerial photographs and satellite imagery: Three sites in New Mexico, USA

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Becker, N.; Wells, B.; Lewis, A.; David, N.

    1996-04-01

    The proper handling and characterization of past hazardous waste sites is becoming more and more important as the world population extends into areas previously deemed undesirable. Historical photographs, past records, and current aerial and satellite imagery can play an important role in characterizing these sites. These data provide clear insight into defining problem areas, which can then be surface sampled for further detail. Three such areas are discussed in this paper: (1) nuclear wastes buried in trenches at Los Alamos National Laboratory, (2) surface dumping at one site at Los Alamos National Laboratory, and (3) the historical development of a municipal landfill near Las Cruces, New Mexico.

  2. Estimation of walrus populations on sea ice with infrared imagery and aerial photography

    USGS Publications Warehouse

    Udevitz, M.S.; Burn, D.M.; Webber, M.A.

    2008-01-01

    Population sizes of ice-associated pinnipeds have often been estimated with visual or photographic aerial surveys, but these methods require relatively slow speeds and low altitudes, limiting the area they can cover. Recent developments in infrared imagery and its integration with digital photography could allow substantially larger areas to be surveyed and more accurate enumeration of individuals, thereby solving major problems with previous survey methods. We conducted a trial survey in April 2003 to estimate the number of Pacific walruses (Odobenus rosmarus divergens) hauled out on sea ice around St. Lawrence Island, Alaska. The survey used high altitude infrared imagery to detect groups of walruses on strip transects. Low altitude digital photography was used to determine the number of walruses in a sample of detected groups and calibrate the infrared imagery for estimating the total number of walruses. We propose a survey design incorporating this approach with satellite radio telemetry to estimate the proportion of the population in the water and additional low-level flights to estimate the proportion of the hauled-out population in groups too small to be detected in the infrared imagery. We believe that this approach offers the potential for obtaining reliable population estimates for walruses and other ice-associated pinnipeds. © 2007 by the Society for Marine Mammalogy.

  3. Fusion of monocular cues to detect man-made structures in aerial imagery

    NASA Technical Reports Server (NTRS)

    Shufelt, Jefferey; Mckeown, David M.

    1991-01-01

    The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described, and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.

  4. A semi-automated single day image differencing technique to identify animals in aerial imagery.

    PubMed

    Terletzky, Pat; Ramsey, Robert Douglas

    2014-01-01

    Our research presents a proof-of-concept that explores a new and innovative method to identify large animals in aerial imagery with single day image differencing. We acquired two aerial images of eight fenced pastures and conducted a principal component analysis of each image. We then subtracted the first principal component of the two pasture images, followed by heuristic thresholding to generate polygons. The number of polygons represented the number of potential cattle (Bos taurus) and horses (Equus caballus) in the pasture. The process was considered semi-automated because we were not able to automate the identification of spatial or spectral thresholding values. Imagery was acquired concurrently with ground counts of animal numbers. Across the eight pastures, 82% of the animals were correctly identified, mean percent commission was 53%, and mean percent omission was 18%. The high commission error was due to small misalignments generated by image-to-image registration, misidentified shadows, and grouping behavior of animals. The high probability of correctly identifying animals suggests that short time interval image differencing could provide a new technique to enumerate wild ungulates occupying grassland ecosystems, especially in isolated or difficult to access areas. To our knowledge, this was the first attempt to use standard change detection techniques to identify and enumerate large ungulates.
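
    A minimal sketch of the differencing idea, assuming two co-registered multi-band pasture images: take the first principal component of each, difference them, threshold, and count connected blobs as candidate animals. The threshold and minimum blob size are illustrative.

      import numpy as np
      from skimage.measure import label, regionprops

      def first_pc(image):
          """First principal component of a (H, W, bands) image."""
          h, w, b = image.shape
          X = image.reshape(-1, b).astype(float)
          X -= X.mean(axis=0)
          _, _, vt = np.linalg.svd(X, full_matrices=False)
          return (X @ vt[0]).reshape(h, w)

      def count_candidate_animals(img_t0, img_t1, z_thresh=3.0, min_area=4):
          diff = first_pc(img_t1) - first_pc(img_t0)
          z = (diff - diff.mean()) / diff.std()
          mask = np.abs(z) > z_thresh                     # heuristic threshold
          blobs = [r for r in regionprops(label(mask)) if r.area >= min_area]
          return len(blobs)                               # candidate animal count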

  5. Saliency region selection in large aerial imagery using multiscale SLIC segmentation

    NASA Astrophysics Data System (ADS)

    Sahli, Samir; Lavigne, Daniel A.; Sheng, Yunlong

    2012-06-01

    Advances in new sensing hardware such as GigE cameras and fast-growing data transmission capability create an imbalance between the amount of large-scale aerial imagery and the means available for processing it. Selection of saliency regions can significantly reduce the prospecting time and computational cost for the detection of objects in large-scale aerial imagery. We propose a new approach using a multiscale Simple Linear Iterative Clustering (SLIC) technique to compute the saliency regions. SLIC quickly creates compact and uniform superpixels, based on distances in both color and geometric space. When a salient structure of an object is over-segmented by SLIC, a number of superpixels will follow the edges in the structure and therefore acquire irregular shapes. Thus, superpixel deformation betrays the presence of salient structures. We quantify the non-compactness of the superpixels as a salience measure, which is computed using the distance transform and the shape factor. To treat objects or object details of various sizes in an image, or multiscale images, we compute the SLIC segmentations and the salience measures at multiple scales with a set of predetermined superpixel sizes. The final saliency map is a sum of the salience measures obtained at multiple scales. The proposed approach is fast, requires no user-defined parameters, produces well-defined salient regions at full resolution, and is adapted to multi-scale image processing.
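
    A minimal sketch of the multiscale idea described above: segment with SLIC at several superpixel sizes, score each superpixel by a shape-factor (non-compactness) measure, and sum the maps across scales. The scale values and the exact salience formula are illustrative, not the paper's.

      import numpy as np
      from skimage.segmentation import slic
      from skimage.measure import regionprops

      def saliency_map(image, scales=(500, 1500, 4000)):
          """image: (H, W, 3) RGB array; `scales` are approximate superpixel counts."""
          sal = np.zeros(image.shape[:2], float)
          for n_seg in scales:
              labels = slic(image, n_segments=n_seg, compactness=10, start_label=1)
              layer = np.zeros_like(sal)
              for region in regionprops(labels):
                  # shape factor: 1 for a disc, grows as the superpixel deforms
                  shape_factor = region.perimeter ** 2 / (4 * np.pi * region.area + 1e-9)
                  layer[labels == region.label] = shape_factor
              sal += layer
          return sal / len(scales)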

  6. Tracking stormwater discharge plumes and water quality of the Tijuana River with multispectral aerial imagery

    NASA Astrophysics Data System (ADS)

    Svejkovsky, Jan; Nezlin, Nikolay P.; Mustain, Neomi M.; Kum, Jamie B.

    2010-04-01

    Spatial-temporal characteristics and environmental factors regulating the behavior of stormwater runoff from the Tijuana River in southern California were analyzed utilizing very high resolution aerial imagery and time-coincident environmental and bacterial sampling data. Thirty-nine multispectral aerial images with 2.1-m spatial resolution were collected after major rainstorms during 2003-2008. Utilizing differences in color reflectance characteristics, the ocean surface was classified into non-plume waters and three components of the runoff plume reflecting differences in age and suspended sediment concentrations. Tijuana River discharge rate was the primary factor regulating the size of the freshest plume component and its shorelong extensions to the north and south. Wave direction was found to affect the shorelong distribution of the shoreline-connected fresh plume components much more strongly than wind direction. Wave-driven sediment resuspension also significantly contributed to the size of the oldest plume component. Surf zone bacterial samples collected near the time of each image acquisition were used to evaluate the contamination characteristics of each plume component. The bacterial contamination of the freshest plume waters was very high (100% of surf zone samples exceeded California standards), but the oldest plume areas were heterogeneous, including both polluted and clean waters. The aerial imagery archive allowed study of river runoff characteristics at the plume component level, not previously done with coarser satellite images. Our findings suggest that high resolution imaging can quickly identify the spatial extents of the most polluted runoff but cannot be relied upon to always identify the entire polluted area. Our results also indicate that wave-driven transport is important in distributing the most contaminated plume areas along the shoreline.

  7. Influence of GSD for 3D City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling and for decision making to set a mark in urban development. MOMRA is responsible for large-scale mapping at 1:1,000, 1:2,500, 1:10,000 and 1:20,000 scales for 10 cm, 20 cm and 40 cm GSD with aerial triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization and animation capabilities of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses the aerial triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from the uneven surface and undulating terrain. Thus real-time 3D visualization and interactive exploration support planning processes by providing multiple stakeholders, such as decision makers, architects, urban planners, authorities, citizens or investors, with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities for dealing with difficult conditions through better and more advanced viewing infrastructure. Riyadh on one side is 5700 m above sea level while on the other hand Abha city is at 2300 m; this uneven terrain represents a drastic change of surface in the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities. In this research paper, aerial imagery of different GSD (Ground Sample Distance) with aerial triangulation is used for 3D visualization in different regions of the Kingdom, to check which scale is more suitable for obtaining better results and is cost-manageable, with GSD (7.5 cm, 10 cm, 20 cm and 40 cm

  8. Spatially explicit rangeland erosion monitoring using high-resolution digital aerial imagery

    USGS Publications Warehouse

    Gillan, Jeffrey K.; Karl, Jason W.; Barger, Nichole N.; Elaksher, Ahmed; Duniway, Michael C.

    2016-01-01

    Nearly all of the ecosystem services supported by rangelands, including production of livestock forage, carbon sequestration, and provisioning of clean water, are negatively impacted by soil erosion. Accordingly, monitoring the severity, spatial extent, and rate of soil erosion is essential for long-term sustainable management. Traditional field-based methods of monitoring erosion (sediment traps, erosion pins, and bridges) can be labor-intensive and therefore are generally limited in spatial intensity and/or extent. There is a growing effort to monitor natural resources at broad scales, which is driving the need for new soil erosion monitoring tools. One remote-sensing technique that can be used to monitor soil movement is a time series of digital elevation models (DEMs) created using aerial photogrammetry methods. By geographically coregistering the DEMs and subtracting one surface from the other, an estimate of soil elevation change can be created. Such analysis enables spatially explicit quantification and visualization of net soil movement including erosion, deposition, and redistribution. We constructed DEMs (12-cm ground sampling distance) on the basis of aerial photography immediately before and 1 year after a vegetation removal treatment on a 31-ha Piñon-Juniper woodland in southeastern Utah to evaluate the use of aerial photography in detecting soil surface change. On average, we were able to detect surface elevation change of ±8-9 cm and greater, which was sufficient for the large amount of soil movement exhibited on the study area. Detecting more subtle soil erosion could be achieved using the same technique with higher-resolution imagery from lower-flying aircraft such as unmanned aerial vehicles. DEM differencing and process-focused field methods provided complementary information and a more complete assessment of soil loss and movement than any single technique alone. Photogrammetric DEM differencing could be used as a technique to
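
    A minimal sketch of DEM differencing, assuming two co-registered elevation grids in metres with a common cell size; the detection limit uses the 8-9 cm figure quoted above, and the cell size is illustrative.

      import numpy as np

      def erosion_deposition_volumes(dem_before, dem_after, cell_size=0.12, lod=0.09):
          """Returns (erosion_m3, deposition_m3) above the level of detection `lod`."""
          dz = dem_after - dem_before
          dz[np.abs(dz) < lod] = 0.0              # ignore sub-detection-limit change
          cell_area = cell_size ** 2
          erosion = -dz[dz < 0].sum() * cell_area
          deposition = dz[dz > 0].sum() * cell_area
          return erosion, deposition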

  9. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

    Production of a digital terrain model (DTM) is one of the most usual tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of the DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs have been used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures such as RMSE, median, normalized median absolute deviation with their confidence intervals, and quantiles are computed. The completeness of the DTM is a very often overlooked quality parameter, but when the DTM is produced from a point cloud this should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show great potential of the DTM produced from UAS imagery, in the sense of detailed representation of the terrain as well as good height accuracy.
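
    A minimal sketch of the height accuracy measures listed above (RMSE, median, normalized median absolute deviation, quantiles), computed from differences between DTM heights and check-point heights; confidence intervals would require an additional bootstrap step not shown here.

      import numpy as np

      def dtm_accuracy(dtm_heights, checkpoint_heights):
          d = np.asarray(dtm_heights, float) - np.asarray(checkpoint_heights, float)
          med = np.median(d)
          nmad = 1.4826 * np.median(np.abs(d - med))       # normalized MAD
          q68, q95 = np.quantile(np.abs(d), [0.683, 0.95])
          return {"RMSE": float(np.sqrt(np.mean(d ** 2))),
                  "median": float(med), "NMAD": float(nmad),
                  "Q68.3": float(q68), "Q95": float(q95)}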

  10. Unsupervised building detection from irregularly spaced LiDAR and aerial imagery

    NASA Astrophysics Data System (ADS)

    Shorter, Nicholas Sven

    As more data sources containing 3-D information become available, an increased interest in 3-D imaging has emerged. Among these is the 3-D reconstruction of buildings and other man-made structures. A necessary preprocessing step is the detection and isolation of individual buildings that can subsequently be reconstructed in 3-D using various methodologies. Applications for both building detection and reconstruction have commercial use for urban planning, network planning for mobile communication (cell phone tower placement), spatial analysis of air pollution and noise nuisances, microclimate investigations, geographical information systems, security services and change detection in areas affected by natural disasters. Building detection and reconstruction are also used in the military for automatic target recognition and in entertainment for virtual tourism. Previously proposed building detection and reconstruction algorithms solely utilized aerial imagery. With the advent of Light Detection and Ranging (LiDAR) systems providing elevation data, current algorithms explore using captured LiDAR data as an additional feasible source of information. Additional sources of information can lead to automating techniques (alleviating their need for manual user intervention) as well as increasing their capabilities and accuracy. Several building detection approaches surveyed in the open literature have fundamental weaknesses that hinder their use, such as requiring multiple data sets from different sensors, mandating certain operations to be carried out manually, and functionality limited to detecting only certain types of buildings. In this work, a building detection system is proposed and implemented which strives to overcome the limitations seen in existing techniques. The developed framework is flexible in that it can perform building detection from just LiDAR data (first or last return), or just nadir, color aerial imagery. If data from both LiDAR and

  11. Estimating crop water requirements of a command area using multispectral video imagery and geographic information systems

    NASA Astrophysics Data System (ADS)

    Ahmed, Rashid Hassan

    This research focused on the potential use of multispectral video remote sensing for irrigation water management. Two methods for estimating crop evapotranspiration were investigated, the energy balance estimation from multispectral video imagery and use of reflectance-based crop coefficients from multitemporal multispectral video imagery. The energy balance method was based on estimating net radiation, and soil and sensible heat fluxes, using input from the multispectral video imagery. The latent heat flux was estimated as a residual. The results were compared to surface heat fluxes measured on the ground. The net radiation was estimated within 5% of the measured values. However, the estimates of sensible and soil heat fluxes were not consistent with the measured values. This discrepancy was attributed to the methods for estimating the two fluxes. The degree of uncertainty in the parameters used in the methods made their application too limited for extrapolation to large agricultural areas. The second method used reflectance-based crop coefficients developed from the multispectral video imagery using alfalfa as a reference crop. The daily evapotranspiration from alfalfa was estimated using a nearby weather station. With the crop coefficients known for a canal command area, irrigation scheduling was simulated using the soil moisture balance method. The estimated soil moisture matched the actual soil moisture measured using the neutron probe method. Also, the overall water requirement estimated by this method was found to be in close agreement with the canal water deliveries. The crop coefficient method has great potential for irrigation management of large agricultural areas.
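
    A minimal sketch of the second method's bookkeeping: daily crop evapotranspiration from a reflectance-based crop coefficient times reference ET, fed into a simple root-zone soil moisture balance that triggers irrigation. Crop coefficients, soil parameters, and the trigger level are illustrative assumptions.

      def simulate_irrigation(et_ref, kc, rain, taw=150.0, trigger=0.5):
          """et_ref, kc, rain: daily sequences (mm, -, mm); taw: total available water (mm)."""
          depletion, schedule = 0.0, []
          for day, (et0, k, p) in enumerate(zip(et_ref, kc, rain)):
              etc = k * et0                        # crop ET from reflectance-based Kc
              depletion = max(0.0, depletion + etc - p)
              if depletion > trigger * taw:        # refill root zone when half depleted
                  schedule.append((day, round(depletion, 1)))
                  depletion = 0.0
          return schedule

      # Example with hypothetical values: 30 dry days, Kc rising from 0.4
      print(simulate_irrigation(et_ref=[6.0] * 30,
                                kc=[0.4 + 0.02 * d for d in range(30)],
                                rain=[0.0] * 30))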

  12. Identification of wild areas in southern lower Michigan. [terrain analysis from aerial photography, and satellite imagery

    NASA Technical Reports Server (NTRS)

    Habowski, S.; Cialek, C.

    1978-01-01

    An inventory methodology was developed to identify potential wild area sites. A list of site criteria was formulated and tested in six selected counties. Potential sites were initially identified from LANDSAT satellite imagery. A detailed study of the soil, vegetation and relief characteristics of each site, based on both high-altitude aerial photographs and existing map data, was conducted to eliminate unsuitable sites. Ground reconnaissance of the remaining wild areas was made to verify suitability and acquire information on wildlife and general aesthetics. Physical characteristics of the wild areas in each county are presented in tables. Maps show the potential sites to be set aside for natural preservation and regulation by the state under the Wilderness and Natural Areas Act of 1972.

  13. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California

    PubMed Central

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R.

    2017-01-01

    Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regulated quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. PMID:28241028

  14. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California.

    PubMed

    Boyda, Edward; Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R

    2017-01-01

    Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regulated quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA.
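
    A minimal sketch of the stump-selection step as a quadratic binary optimization: build a small QUBO from the stump predictions and solve it by brute force rather than on annealing hardware. The objective construction (squared voting error plus a sparsity penalty) is only an illustrative stand-in for the paper's regulated objective.

      import itertools
      import numpy as np

      def build_qubo(stump_preds, labels, lam=0.1):
          """stump_preds: (n_stumps, n_samples) in {-1,+1}; labels: (n_samples,) in {-1,+1}."""
          n_samples = stump_preds.shape[1]
          Q = stump_preds @ stump_preds.T / n_samples          # pairwise stump correlations
          linear = -2.0 * (stump_preds @ labels) / n_samples + lam
          np.fill_diagonal(Q, Q.diagonal() + linear)           # linear terms on the diagonal
          return Q

      def brute_force_qubo(Q):
          """Exhaustive minimization of w^T Q w over binary w (small problems only)."""
          best_w, best_e = None, np.inf
          for bits in itertools.product([0, 1], repeat=Q.shape[0]):
              w = np.array(bits, float)
              e = w @ Q @ w
              if e < best_e:
                  best_w, best_e = w, e
          return best_w, best_e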

  15. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models

    NASA Astrophysics Data System (ADS)

    Abayowa, Bernard O.; Yilmaz, Alper; Hardie, Russell C.

    2015-08-01

    This paper presents a framework for automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud without prior knowledge of an initial alignment. The framework employs a coarse to fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize global error between the registered point clouds. The novelty of the proposed approach is in the computation of salient features from the DSMs, and the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and aerial datasets acquired in real urban environments. Results demonstrate the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud when initial registration parameters are unavailable.

  16. Forest and land inventory using ERTS imagery and aerial photography in the boreal forest region of Alberta, Canada

    NASA Technical Reports Server (NTRS)

    Kirby, C. L.

    1974-01-01

    Satellite imagery and small-scale (1:120,000) infrared ektachrome aerial photography for the development of improved forest and land inventory techniques in the boreal forest region are presented to demonstrate spectral signatures and their application. The forest is predominantly mixed stands of white spruce and poplar, with some pure stands of black spruce and pine, and large areas of poorly drained land with peat and sedge-type muskegs. This work is part of a coordinated program by the Canadian Forestry Service to evaluate ERTS imagery.

  17. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotor UAVs because of their low endurance due to short battery life. Images can be stored on board with either still image or video data compression. Still image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system and thus maximize the encoder performance. Experiments are performed on both simulated and real world video sequences.

  18. Observation of wave celerity evolution in the nearshore using digital video imagery

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.

    2008-12-01

    The celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. The celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to the individual video images sampled at 3 Hz. Original image sequences are processed through video image frame differencing and directional low-pass image filtering to reduce the noise arising from foam in the surf zone. The breaking wave celerity is computed along a cross-shore transect from the wave crest tracks extracted by a Radon transform-based line detection method. The observed celerity from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the nonlinear shallow water wave equation (NSWE)-based celerity computed using the measured depths and wave heights, the video-based celerity in general shows good agreement over the surf zone except in the regions around the incipient wave breaking locations. In the regions around the breaker points, the observed wave celerity is even larger than the NSWE-based celerity due to the transition of wave crest shapes. The observed celerity from the video imagery can be used to monitor the nearshore geometry through depth inversion based on nonlinear wave celerity theories. For this purpose, the excess celerity around the breaker points needs to be corrected relative to the nonlinear wave celerity theory applied.
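
    A minimal sketch comparing the two model celerities referred to above: the linear shallow-water value sqrt(g*h) and a simple amplitude-corrected (NSWE-style) value sqrt(g*(h + H)). The exact nonlinear formulation used in the study may differ, and the depth and height values are hypothetical.

      import numpy as np

      g = 9.81
      depth = np.array([3.0, 2.0, 1.5, 1.0, 0.6])    # water depth h (m), hypothetical
      height = np.array([0.8, 0.9, 0.9, 0.7, 0.5])   # wave height H (m), hypothetical

      c_linear = np.sqrt(g * depth)
      c_nswe = np.sqrt(g * (depth + height))
      for h, H, cl, cn in zip(depth, height, c_linear, c_nswe):
          print(f"h={h:.1f} m  H={H:.1f} m  linear={cl:.2f} m/s  amplitude-corrected={cn:.2f} m/s")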

  19. Outlier and target detection in aerial hyperspectral imagery: a comparison of traditional and percentage occupancy hit or miss transform techniques

    NASA Astrophysics Data System (ADS)

    Young, Andrew; Marshall, Stephen; Gray, Alison

    2016-05-01

    The use of aerial hyperspectral imagery for the purpose of remote sensing is a rapidly growing research area. Currently, targets are generally detected by looking for distinct spectral features of the objects under surveillance. For example, a camouflaged vehicle, deliberately designed to blend into background trees and grass in the visible spectrum, can be revealed using spectral features in the near-infrared spectrum. This work aims to develop improved target detection methods using a two-stage approach, firstly by development of a physics-based atmospheric correction algorithm to convert radiance into reflectance hyperspectral image data, and secondly by use of improved outlier detection techniques. In this paper the use of the Percentage Occupancy Hit or Miss Transform is explored to provide an automated method for target detection in aerial hyperspectral imagery.

  20. Geomorphological relationships through the use of 2-D seismic reflection data, Lidar, and aerial imagery

    NASA Astrophysics Data System (ADS)

    Alesce, Meghan Elizabeth

    Barrier islands are crucial in protecting coastal environments. This study focuses on Dauphin Island, Alabama, located within the Northern Gulf of Mexico (NGOM) barrier island complex. It is one of many islands serving as natural protection for NGOM ecosystems and coastal cities. The NGOM barrier islands formed at 4 kya in response to a decrease in the rate of sea level rise. The morphology of these islands changes with hurricanes, anthropogenic activity, and tidal and wave action. This study focuses on ancient incised valleys and the impact of island morphology on hurricane breaches. Using high-frequency 2-D seismic reflection data, four horizons, including the present seafloor, were interpreted. Subaerial portions of Dauphin Island were imaged using Lidar data and aerial imagery over a ten-year time span, as well as historical maps. Historical shorelines of Dauphin Island were extracted from aerial imagery and historical maps, and were compared to the location of incised valleys seen within the 2-D seismic reflection data. Erosion and deposition volumes of Dauphin Island from 1998 to 2010 (the time span covering hurricanes Ivan and Katrina) in the vicinity of Katrina Cut and Pelican Island were quantified using Lidar data. For the time period prior to Hurricane Ivan, an erosional volume of 46,382,552 m3 and a depositional volume of 16,113.6 m3 were quantified from Lidar data. The effects of Hurricane Ivan produced a total erosion volume of 4,076,041.5 m3. The erosional and depositional volumes of Katrina Cut were 7,562,068.5 m3 and 510,936.7 m3, respectively. More volume change was found within Pelican Pass. For the period between hurricanes Ivan and Katrina the erosion volume was 595,713.8 m3. This was mostly located within Katrina Cut. Total deposition for the same period, including in Pelican Pass, was 15,353,961 m3. Hurricane breaches were compared to ancient incised valleys seen within the 2-D seismic reflection results. Breaches from hurricanes from 1849

  1. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    PubMed Central

    Zhang, Yanning; Tong, Xiaomin; Yang, Tao; Ma, Wenguang

    2015-01-01

    With the wide development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in the computer vision field. Most of the existing methods are under the registration-detection framework and can only deal with simple background scenes. They tend to fail in complex multi-background scenarios, such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive the complex scene accurately by automatic estimation of multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge the consistent models. Finally, we calculate the degree of subordination to the multi-background models pixel by pixel for all small-area blocks. Moving objects are segmented by means of an energy optimization method solved via Graph Cuts. The extensive experimental results on public aerial videos show that, due to the multi-background model estimation and the analysis of each pixel's subordination to the multiple models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly. PMID:25856330

  2. Multi-model estimation based moving object detection for aerial video.

    PubMed

    Zhang, Yanning; Tong, Xiaomin; Yang, Tao; Ma, Wenguang

    2015-04-08

    With the wide development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in the computer vision field. Most of the existing methods are under the registration-detection framework and can only deal with simple background scenes. They tend to fail in complex multi-background scenarios, such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive the complex scene accurately by automatic estimation of multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge the consistent models. Finally, we calculate the degree of subordination to the multi-background models pixel by pixel for all small-area blocks. Moving objects are segmented by means of an energy optimization method solved via Graph Cuts. The extensive experimental results on public aerial videos show that, due to the multi-background model estimation and the analysis of each pixel's subordination to the multiple models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly.
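
    A minimal sketch of the per-block motion modelling described above: given the dense optical flow inside one color block, fit the six affine parameters by linear least squares and measure the residual used to decide whether blocks share a consistent model. Names and the merging criterion are illustrative, not the authors' implementation.

      import numpy as np

      def fit_affine(xy, uv):
          """xy: (N, 2) pixel coords in the block; uv: (N, 2) optical flow vectors."""
          A = np.hstack([xy, np.ones((xy.shape[0], 1))])       # (N, 3) design matrix
          target = xy + uv                                     # positions in the next frame
          params, *_ = np.linalg.lstsq(A, target, rcond=None)  # (3, 2) parameters
          return params.T                                      # 2x3 affine matrix

      def mean_residual(xy, uv, affine):
          """Mean re-projection error of the affine model on the block's flow."""
          pred = np.hstack([xy, np.ones((xy.shape[0], 1))]) @ affine.T
          return np.linalg.norm(pred - (xy + uv), axis=1).mean()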

  3. Mapping of riparian invasive species with supervised classification of Unmanned Aerial System (UAS) imagery

    NASA Astrophysics Data System (ADS)

    Michez, Adrien; Piégay, Hervé; Jonathan, Lisein; Claessens, Hugues; Lejeune, Philippe

    2016-02-01

    Riparian zones are key landscape features, representing the interface between terrestrial and aquatic ecosystems. Although they have been influenced by human activities for centuries, their degradation has increased during the 20th century. Concomitant with (or as a consequence of) these disturbances, the invasion of exotic species has increased throughout the world's riparian zones. In our study, we propose an easily reproducible methodological framework to map three riparian invasive taxa using Unmanned Aerial Systems (UAS) imagery: Impatiens glandulifera Royle, Heracleum mantegazzianum Sommier and Levier, and Japanese knotweed (Fallopia sachalinensis (F. Schmidt Petrop.), Fallopia japonica (Houtt.) and hybrids). From visible and near-infrared UAS orthophotos, we derived simple spectral and texture image metrics computed at various scales of image segmentation (10, 30, 45, and 60, using eCognition software). Supervised classification based on the random forests algorithm was used to identify the most relevant variable (or combination of variables) derived from UAS imagery for mapping riparian invasive plant species. The models were built using 20% of the dataset, with the remaining 80% used as a test set. Except for H. mantegazzianum, the best results in terms of global accuracy were achieved with the finest scale of analysis (segmentation scale parameter = 10). The best overall accuracies reached 72%, 68%, and 97% for I. glandulifera, Japanese knotweed, and H. mantegazzianum, respectively. In terms of selected metrics, simple spectral metrics (layer mean/camera brightness) were the most used. Our results also confirm the added value of texture metrics (GLCM derivatives) for mapping riparian invasive species. The results obtained for I. glandulifera and Japanese knotweed do not reach sufficient accuracies for operational applications. However, the results achieved for H. mantegazzianum are encouraging. The high accuracy values combined to
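
    The train/test split and classifier described above can be sketched as follows; the feature matrix `X` and labels `y` are hypothetical stand-ins for the per-segment spectral and texture metrics, since the actual eCognition-derived variables are not reproduced here.

```python
# Hedged sketch: random forest trained on 20% of labelled segments, tested on the remaining 80%.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))      # placeholder segment metrics (spectral + GLCM texture)
y = rng.integers(0, 3, size=500)    # placeholder labels, e.g. 3 invasive taxa / background

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```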

  4. Fusing Unmanned Aerial Vehicle Imagery with High Resolution Hydrologic Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Pierini, N.; Schreiner-McGraw, A.; Anderson, C.; Saripalli, S.; Rango, A.

    2013-12-01

    After decades of development and applications, high resolution hydrologic models are now common tools in research and increasingly used in practice. More recently, high resolution imagery from unmanned aerial vehicles (UAVs) that provides information on land surface properties has become available for civilian applications. Fusing the two approaches promises to significantly advance the state-of-the-art in terms of hydrologic modeling capabilities. This combination will also challenge assumptions on model processes, parameterizations and scale as land surface characteristics (~0.1 to 1 m) may now surpass traditional model resolutions (~10 to 100 m). Ultimately, predictions from high resolution hydrologic models need to be consistent with the observational data that can be collected from UAVs. This talk will describe our efforts to develop, utilize and test the impact of UAV-derived topographic and vegetation fields on the simulation of two small watersheds in the Sonoran and Chihuahuan Deserts at the Santa Rita Experimental Range (Green Valley, AZ) and the Jornada Experimental Range (Las Cruces, NM). High resolution digital terrain models, image orthomosaics and vegetation species classification were obtained from a fixed wing airplane and a rotary wing helicopter, and compared to coarser analyses and products, including Light Detection and Ranging (LiDAR). We focus the discussion on the relative improvements achieved with UAV-derived fields in terms of terrain-hydrologic-vegetation analyses and summer season simulations using the TIN-based Real-time Integrated Basin Simulator (tRIBS) model. Model simulations are evaluated at each site with respect to a high-resolution sensor network consisting of six rain gauges, forty soil moisture and temperature profiles, four channel runoff flumes, a cosmic-ray soil moisture sensor and an eddy covariance tower over multiple summer periods. We also discuss prospects for the fusion of high resolution models with novel

  5. Wildlife Multispecies Remote Sensing Using Visible and Thermal Infrared Imagery Acquired from an Unmanned Aerial Vehicle (UAV)

    NASA Astrophysics Data System (ADS)

    Chrétien, L.-P.; Théau, J.; Ménard, P.

    2015-08-01

    Wildlife aerial surveys require time and significant resources. Multispecies detection could reduce costs to a single census for species that coexist spatially. Traditional methods are demanding for observers in terms of concentration and are not adapted to multispecies censuses. The processing of multispectral aerial imagery acquired from an unmanned aerial vehicle (UAV) represents a potential solution for multispecies detection. The method used in this study is based on a multicriteria object-based image analysis applied to visible and thermal infrared imagery acquired from a UAV. This project aimed to detect American bison, fallow deer, gray wolves, and elk located in separate enclosures with a known number of individuals. Results showed that all bison and elk were detected without errors, while for deer and wolves, 0-2 individuals per flight line were mistaken for ground elements or went undetected. The approach also detected the four targeted species simultaneously and separately, even in the presence of other, untargeted species. These results confirm the potential of multispectral imagery acquired from a UAV for wildlife censuses. Operational application remains limited to small areas, given current regulations and available technology. Standardization of the workflow will help to reduce the time and expertise requirements for such technology.

  6. Model-based conifer crown surface reconstruction from multi-ocular high-resolution aerial imagery

    NASA Astrophysics Data System (ADS)

    Sheng, Yongwei

    2000-12-01

    Tree crown parameters such as width, height, shape and crown closure are desirable in forestry and ecological studies, but they are time-consuming and labor intensive to measure in the field. The stereoscopic capability of high-resolution aerial imagery provides a way to reconstruct crown surfaces. Existing photogrammetric algorithms designed to map terrain surfaces, however, cannot adequately extract crown surfaces, especially for steep conifer crowns. Considering crown surface reconstruction in the broader context of tree characterization from aerial images, we develop a rigorous perspective tree image formation model to bridge image-based tree extraction and crown surface reconstruction, and an integrated model-based approach to conifer crown surface reconstruction. Based on the fact that most conifer crowns have a solid geometric form, conifer crowns are modeled as generalized hemi-ellipsoids. Both automatic and semi-automatic approaches to optimal tree model development from multi-ocular images are investigated. The semi-automatic 3D tree interpreter developed in this thesis is able to efficiently extract reliable tree parameters and tree models in complicated tree stands. This thesis starts with a sophisticated stereo matching algorithm and incorporates tree models to guide stereo matching. The following critical problems are addressed in the model-based surface reconstruction process: (1) the problem of surface model composition from tree models, (2) the occlusion problem in disparity prediction from tree models, (3) the problem of integrating the predicted disparities into image matching, (4) the reduction of tree model edge effects on the disparity map, (5) the occlusion problem in orthophoto production, and (6) the foreshortening problem in image matching, which is very serious for conifer crown surfaces. Solutions to the above problems are necessary for successful crown surface reconstruction. The model-based approach was applied to recover the

  7. Combined aerial and ground technique for assessing structural heat loss

    NASA Astrophysics Data System (ADS)

    Snyder, William C.; Schott, John R.

    1994-03-01

    The results of a combined aerial and ground-based structural heat loss survey are presented. The aerial imagery was collected by a thermal IR line scanner. Enhanced quantitative analysis of the imagery gives the roof heat flow and insulation level. The ground images were collected by a video van and converted to still frames stored on a video disk. A computer based presentation system retrieves the images and other information indexed by street address for screening and dissemination to owners. We conclude that the combined aerial and ground survey effectively discriminates between well insulated and poorly insulated structures, and that such a survey is a cost-effective alternative to site audits.

  8. Random Forest and Objected-Based Classification for Forest Pest Extraction from Uav Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Yuan, Yi; Hu, Xiangyun

    2016-06-01

    Forest pests are one of the most important factors affecting forest health. However, because it is difficult to delineate infested areas and to predict how infestations spread, control and extermination efforts have so far been only partially effective, and infested areas continue to expand. The introduction of spatial information technology is therefore in high demand: examining the spatial distribution of infestations makes it possible to establish timely control strategies by mapping infested areas as early as possible and predicting how the infestation will spread. With UAV photography becoming increasingly common, it has become much cheaper and faster to acquire UAV images, which are well suited to monitoring forest health and detecting pests. This paper proposes a new method to effectively detect forest pest damage in UAV aerial imagery. For each image, we first segment it into many superpixels and then calculate a 12-dimensional statistical texture descriptor for each superpixel, which is used to train and classify the data. Finally, we refine the classification results with some simple rules. The experiments show that the method is effective for extracting forest pest areas in UAV images.
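
    A minimal sketch of that superpixel-plus-texture idea is given below, using SLIC superpixels and a few grey-level co-occurrence matrix (GLCM) statistics in place of the paper's 12-dimensional descriptor; the functions and parameter choices are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: per-superpixel GLCM texture features for pest classification.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops
from skimage.segmentation import slic
from skimage.util import img_as_ubyte

def superpixel_texture_features(image, n_segments=400):
    """Return SLIC labels and a small texture vector per superpixel of an RGB image."""
    gray = img_as_ubyte(rgb2gray(image))
    labels = slic(image, n_segments=n_segments, start_label=0)
    features = {}
    for lab in np.unique(labels):
        rows, cols = np.nonzero(labels == lab)
        patch = gray[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        features[lab] = [graycoprops(glcm, prop).mean()
                         for prop in ("contrast", "homogeneity", "energy", "correlation")]
    return labels, features
```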

  9. Discrimination of Deciduous Tree Species from Time Series of Unmanned Aerial System Imagery

    PubMed Central

    Lisein, Jonathan; Michez, Adrien; Claessens, Hugues; Lejeune, Philippe

    2015-01-01

    Technology advances can revolutionize Precision Forestry by providing accurate and fine forest information at tree level. This paper addresses the question of how and particularly when Unmanned Aerial System (UAS) should be used in order to efficiently discriminate deciduous tree species. The goal of this research is to determine when is the best time window to achieve an optimal species discrimination. A time series of high resolution UAS imagery was collected to cover the growing season from leaf flush to leaf fall. Full benefit was taken of the temporal resolution of UAS acquisition, one of the most promising features of small drones. The disparity in forest tree phenology is at the maximum during early spring and late autumn. But the phenology state that optimized the classification result is the one that minimizes the spectral variation within tree species groups and, at the same time, maximizes the phenologic differences between species. Sunlit tree crowns (5 deciduous species groups) were classified using a Random Forest approach for monotemporal, two-date and three-date combinations. The end of leaf flushing was the most efficient single-date time window. Multitemporal datasets definitely improve the overall classification accuracy. But single-date high resolution orthophotomosaics, acquired on optimal time-windows, result in a very good classification accuracy (overall out of bag error of 16%). PMID:26600422

  10. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping

    PubMed Central

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-01-01

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images captured at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps, comparable to those from UAV-images from real flights. PMID:26274960

  11. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.

    PubMed

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-08-12

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images captured at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps, comparable to those from UAV-images from real flights.
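
    The resampling step itself can be approximated with simple block averaging, as in the sketch below; the pixel sizes and array names are assumptions used only to illustrate degrading a fine-resolution orthomosaic to the coarser ground sample distance of a higher flight.

```python
# Hedged sketch: simulate a coarser-GSD "RS-image" from a high-resolution UAV mosaic.
import numpy as np

def resample_to_gsd(image, native_gsd, target_gsd):
    """Block-average `image` (H x W x bands) from native_gsd to target_gsd (metres/pixel)."""
    factor = int(round(target_gsd / native_gsd))
    h = (image.shape[0] // factor) * factor
    w = (image.shape[1] // factor) * factor
    trimmed = image[:h, :w].astype(float)
    blocks = trimmed.reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3))

# e.g. roughly simulate a 100 m flight from 30 m imagery (about 3x coarser pixels):
# coarse = resample_to_gsd(uav_ortho, native_gsd=0.01, target_gsd=0.033)
```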

  12. The research of moving objects behavior detection and tracking algorithm in aerial video

    NASA Astrophysics Data System (ADS)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    This article focuses on moving target detection and tracking algorithms for aerial surveillance. The study covers moving target detection, moving target behavior analysis and automatic target tracking. For moving target detection, the paper considers the characteristics of background subtraction and the frame difference method, and uses a background reconstruction method to accurately locate moving targets. For behavior analysis, the detected regions in the binary image are examined (implemented in MATLAB) to determine whether a moving object constitutes an intrusion and, if so, the direction of the intrusion. For automatic target tracking, a video tracking algorithm is proposed that predicts object centroids using Kalman filtering.
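
    A minimal constant-velocity Kalman filter for predicting object centroids, in the spirit of the tracking step described above, might look like the sketch below; the noise parameters are illustrative assumptions rather than values from the paper.

```python
# Hypothetical sketch: constant-velocity Kalman filter for a tracked centroid.
import numpy as np

class CentroidKalman:
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])                    # state [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # only position is observed
        self.Q = q * np.eye(4)                                   # process noise
        self.R = r * np.eye(2)                                   # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                        # predicted centroid

    def update(self, zx, zy):
        z = np.array([zx, zy])                                   # measured centroid
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```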

  13. Exploration towards the modeling of gable-roofed buildings using a combination of aerial and street-level imagery

    NASA Astrophysics Data System (ADS)

    Creusen, Ivo; Hazelhoff, Lykele; de With, Peter H. N.

    2015-03-01

    Extraction of residential building properties is helpful for numerous applications, such as computer-guided feasibility analysis for solar panel placement, determination of real-estate taxes and assessment of real-estate insurance policies. Therefore, this work explores the automated modeling of buildings with a gable roof (the most common roof type within Western Europe), based on a combination of aerial imagery and street-level panoramic images. This is a challenging task, since buildings show large variations in shape, dimensions and building extensions, and may additionally be captured under non-ideal lighting conditions. The aerial images feature a coarse overview of the building due to the large capturing distance. The building footprint and an initial estimate of the building height is extracted based on the analysis of stereo aerial images. The estimated model is then refined using street-level images, which feature higher resolution and enable more accurate measurements, however, displaying a single building side only. Initial experiments indicate that the footprint dimensions of the main building can be accurately extracted from aerial images, while the building height is extracted with slightly less accuracy. By combining aerial and street-level images, we have found that the accuracies of these height measurements are significantly increased, thereby improving the overall quality of the extracted building model, and resulting in an average inaccuracy of the estimated volume below 10%.

  14. Advanced Tie Feature Matching for the Registration of Mobile Mapping Imaging Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Peter, M.; Gerke, M.; Vosselman, G.

    2016-06-01

    Mobile Mapping's ability to acquire high-resolution ground data is offset by the unreliable localisation of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line-of-sight to navigation satellites, making it difficult to accurately estimate the mobile platform's position. Consequently, the positioning quality of the acquired data products is considerably diminished. This issue has been widely addressed in the literature and in research projects. However, consistent sub-decimetre accuracy and the correction of errors in height remain unsolved. We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on highly accurate orientation parameters derived from aerial imagery. In addition, the imprecise exterior orientation parameters of the MM platform are utilised, as they enable the accurate matching techniques needed to derive reliable tie information. This tie information is then used within an adjustment solution to correct the affected MM data. This paper presents an advanced feature matching procedure as a prerequisite to the aforementioned orientation update. MM data is ortho-projected to gain a higher resemblance to aerial nadir data, simplifying the images' geometry for matching. By utilising the MM exterior orientation parameters, search windows can be used in conjunction with selective keypoint detection and template matching. Because the data originate from different sensor systems, however, difficulties arise with respect to changes in illumination, radiometry and the difference in original perspective. To respond to these challenges, the feature detection procedure relies on detecting keypoints in only one image. Initial tests indicate a considerable improvement in comparison to classic detector/descriptor approaches in this particular matching scenario. This method leads to a significant reduction of outliers due to the limited availability
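
    The window-constrained template matching mentioned above can be illustrated roughly as follows; the window coordinates, image names and the normalised cross-correlation score are assumptions standing in for the paper's actual matching configuration.

```python
# Hedged sketch: match an ortho-projected MM patch inside a search window of the aerial image.
import cv2

def match_in_window(aerial_gray, mm_template_gray, window_xywh):
    """Return the best-match position (x, y) in the full aerial image and its score."""
    x, y, w, h = window_xywh                         # search window from approximate orientation
    search = aerial_gray[y:y + h, x:x + w]
    scores = cv2.matchTemplate(search, mm_template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return (x + max_loc[0], y + max_loc[1]), max_val
```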

  15. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    SciTech Connect

    Hess-Flores, Mauricio

    2011-11-10

    reconstruction pre-processing, where an algorithm detects and discards frames that would lead to inaccurate feature matching, camera pose estimation degeneracies or mathematical instability in structure computation based on a residual error comparison between two different match motion models. The presented algorithms were designed for aerial video but have been proven to work across different scene types and camera motions, and for both real and synthetic scenes.

  16. Low-Level Tie Feature Extraction of Mobile Mapping Data (mls/images) and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Hussnain, Z.; Peter, M.; Oude Elberink, S.; Gerke, M.; Vosselman, G.

    2016-03-01

    Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform's position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose attempts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform's defective, and therefore imprecise but approximate, orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform's three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and dataset demands different approaches, two independent workflows are developed in parallel. Still under development, both workflows will be presented and preliminary results will be shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed as well as an outline of

  17. Semantic Segmentation and Difference Extraction via Time Series Aerial Video Camera and its Application

    NASA Astrophysics Data System (ADS)

    Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.

    2015-04-01

    Google Earth's high-resolution imagery typically takes months to process before new images appear online, which is a slow process, especially for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that have occurred across a time series, so that only regions with differences are updated. In our system, aerial images from the Massachusetts road and building open datasets and the Saitama district datasets are used as input. Semantic segmentation, a pixel-wise classification of images, is then applied to the input images using a deep neural network. A deep neural network is used because it is not only efficient at learning highly discriminative image features such as roads and buildings, but also partially robust to incomplete and poorly registered target maps. Aerial images enriched with this semantic information are stored as a database in a 5D world map, which serves as the ground truth. This system visualises multimedia data in five dimensions: three spatial dimensions, one temporal dimension, and one degenerated dimension combining semantics and colour. Next, a ground truth image chosen from the 5D world map database and a new aerial image with the same spatial coverage but a different acquisition time are compared via a difference extraction method. The map is updated only where local changes have occurred. Hence, map updating becomes cheaper, faster and more effective, especially for post-disaster applications, by leaving unchanged regions alone and updating only the changed regions.

  18. Trafficking in tobacco farm culture: Tobacco companies use of video imagery to undermine health policy

    PubMed Central

    Otañez, Martin G; Glantz, Stanton A

    2009-01-01

    The cigarette companies and their lobbying organization used tobacco industry-produced films and videos about tobacco farming to support their political, public relations, and public policy goals. Critical discourse analysis shows how tobacco companies utilized film and video imagery and narratives of tobacco farmers and tobacco economies for lobbying politicians and influencing consumers, industry-allied groups, and retail shop owners to oppose tobacco control measures and counter publicity on the health hazards, social problems, and environmental effects of tobacco growing. Imagery and narratives of tobacco farmers, tobacco barns, and agricultural landscapes in industry videos constituted a tobacco industry strategy to construct a corporate vision of tobacco farm culture that privileges the economic benefits of tobacco. The positive discursive representations of tobacco farming ignored actual behavior of tobacco companies to promote relationships of dependency and subordination for tobacco farmers and to contribute to tobacco-related poverty, child labor, and deforestation in tobacco growing countries. While showing tobacco farming as a family and a national tradition and a source of jobs, tobacco companies portrayed tobacco as a tradition to be protected instead of an industry to be regulated and denormalized. PMID:20160936

  19. New interpretations of the Fort Clark State Historic Site based on aerial color and thermal infrared imagery

    NASA Astrophysics Data System (ADS)

    Heller, Andrew Roland

    The Fort Clark State Historic Site (32ME2) is a well known site on the upper Missouri River, North Dakota. The site was the location of two Euroamerican trading posts and a large Mandan-Arikara earthlodge village. In 2004, Dr. Kenneth L. Kvamme and Dr. Tommy Hailey surveyed the site using aerial color and thermal infrared imagery collected from a powered parachute. Individual images were stitched together into large image mosaics and registered to Wood's 1993 interpretive map of the site using Adobe Photoshop. The analysis of those image mosaics resulted in the identification of more than 1,500 archaeological features, including as many as 124 earthlodges.

  20. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m2 rectangular dry lot, either in pairs (pilot tests) or individually (...

  1. Bridging Estimates of Greenness in an Arid Grassland Using Field Observations, Phenocams, and Time Series Unmanned Aerial System (UAS) Imagery

    NASA Astrophysics Data System (ADS)

    Browning, D. M.; Tweedie, C. E.; Rango, A.

    2013-12-01

    Spatially extensive grasslands and savannas in arid and semi-arid ecosystems (i.e., rangelands) require cost-effective, accurate, and consistent approaches for monitoring plant phenology. Remotely sensed imagery offers these capabilities; however contributions of exposed soil due to modest vegetation cover, susceptibility of vegetation to drought, and lack of robust scaling relationships challenge biophysical retrievals using moderate- and coarse-resolution satellite imagery. To evaluate methods for characterizing plant phenology of common rangeland species and to link field measurements to remotely sensed metrics of land surface phenology, we devised a hierarchical study spanning multiple spatial scales. We collect data using weekly standardized field observations on focal plants, daily phenocam estimates of vegetation greenness, and very high spatial resolution imagery from an Unmanned Aerial System (UAS) throughout the growing season. Field observations of phenological condition and vegetation cover serve to verify phenocam greenness indices along with indices derived from time series UAS imagery. UAS imagery is classified using object-oriented image analysis to identify species-specific image objects for which greenness indices are derived. Species-specific image objects facilitate comparisons with phenocam greenness indices and scaling spectral responses to footprints of Landsat and MODIS pixels. Phenocam greenness curves indicated rapid canopy development for the widespread deciduous shrub Prosopis glandulosa over 14 (in April 2012) to 16 (in May 2013) days. The modest peak in greenness for the dominant perennial grass Bouteloua eriopoda occurred in October 2012 following peak summer rainfall. Weekly field estimates of canopy development closely coincided with daily patterns in initial growth and senescence for both species. Field observations improve the precision of the timing of phenophase transitions relative to inflection points calculated from phenocam
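
    For context, a typical phenocam/UAS greenness index is the green chromatic coordinate, sketched below for a region of interest; the ROI mask standing in for a species-specific image object is an assumption, not the project's actual processing chain.

```python
# Hedged sketch: green chromatic coordinate (GCC) averaged over a region of interest.
import numpy as np

def green_chromatic_coordinate(rgb, roi_mask):
    """GCC = G / (R + G + B), averaged over the pixels selected by `roi_mask`."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    gcc = g / np.clip(r + g + b, 1e-6, None)
    return float(gcc[roi_mask].mean())
```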

  2. Quantifying the rapid evolution of a nourishment project with video imagery

    USGS Publications Warehouse

    Elko, N.A.; Holman, R.A.; Gelfenbaum, G.

    2005-01-01

    Spatially and temporally high-resolution video imagery was combined with traditional surveyed beach profiles to investigate the evolution of a rapidly eroding beach nourishment project. Upham Beach is a 0.6-km beach located downdrift of a structured inlet on the west coast of Florida. The beach was stabilized in a seaward-advanced position during the 1960s and has been nourished every 4-5 years since 1975. During the 1996 nourishment project, 193,000 m3 of sediment advanced the shoreline as much as 175 m. Video images were collected concurrently with traditional surveys during the 1996 nourishment project to test video imaging as a nourishment monitoring technique. Video imagery illustrated morphologic changes that were unapparent in survey data. Increased storminess during the second (El Niño) winter after the 1996 project resulted in increased erosion rates of 0.4 m/d (135.0 m/y), as compared with 0.2 m/d (69.4 m/y) during the first winter. The measured half-life of the nourishment project, the time at which 50% of the nourished material remains, was 0.94 years. A simple analytical equation indicates reasonable agreement with the measured values, suggesting that project evolution follows a predictable pattern of exponential decay. Longshore planform equilibration does not occur on Upham Beach; rather, sediment diffuses downdrift until 100% of the nourished material erodes. The wide nourished beach erodes rapidly due to the lack of sediment bypassing from the north and the stabilized headland at Upham Beach that is exposed to wave energy.
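
    The exponential-decay view of nourishment volume can be made concrete with a short calculation; the decay constant below is simply the one implied by the reported 0.94-year half-life, not a parameter taken from the paper's analytical model.

```python
# Hedged sketch: remaining nourishment volume under exponential decay, V(t) = V0 * exp(-k t).
import numpy as np

t_half = 0.94                          # years, measured half-life from the study
k = np.log(2) / t_half                 # implied decay constant (1/years)

def remaining_volume(v0, t_years):
    return v0 * np.exp(-k * t_years)

print(remaining_volume(193_000, t_half))   # ~96,500 m3, i.e. 50% of the placed sediment
```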

  3. Decision Level Fusion of LIDAR Data and Aerial Color Imagery Based on Bayesian Theory for Urban Area Classification

    NASA Astrophysics Data System (ADS)

    Rastiveis, H.

    2015-12-01

    Airborne Light Detection and Ranging (LiDAR) generates high-density 3D point clouds that provide comprehensive information on object surfaces. Combining these data with aerial/satellite imagery is quite promising for improving land cover classification. In this study, fusion of LiDAR data and aerial imagery based on Bayesian theory in a three-level fusion algorithm is presented. In the first level, pixel-level fusion, the proper descriptors for both the LiDAR and the image data are extracted. In the next level of fusion, feature level, the area is classified into six classes of "Buildings", "Trees", "Asphalt Roads", "Concrete roads", "Grass" and "Cars" using the extracted features and the Naïve Bayes classification algorithm. This classification is performed with three different strategies: (1) using only LiDAR data, (2) using only image data, and (3) using all extracted features from LiDAR and image. The results of the three classifiers are integrated in the last phase, decision-level fusion, based on the Naïve Bayes algorithm. To evaluate the proposed algorithm, a high resolution color orthophoto and LiDAR data over the urban areas of Zeebruges, Belgium, were used. Results from the decision-level fusion phase revealed an improvement in overall accuracy and kappa coefficient.
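
    One way to read the decision-level step is as a combination of the three classifiers' posteriors, sketched below with scikit-learn's Gaussian Naïve Bayes; the product-of-posteriors rule and the variable names are assumptions, not the paper's exact fusion formula.

```python
# Hypothetical sketch: decision-level fusion of three Naive Bayes classifiers.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fuse_decisions(train_sets, y_train, test_sets):
    """train_sets/test_sets: feature matrices for (LiDAR-only, image-only, combined)."""
    clfs = [GaussianNB().fit(X, y_train) for X in train_sets]
    posteriors = [clf.predict_proba(Xt) for clf, Xt in zip(clfs, test_sets)]
    fused = np.prod(posteriors, axis=0)          # naive product-of-experts combination
    return clfs[0].classes_[np.argmax(fused, axis=1)]
```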

  4. Detection of two intermixed invasive woody species using color infrared aerial imagery and the support vector machine classifier

    NASA Astrophysics Data System (ADS)

    Mirik, Mustafa; Chaudhuri, Sriroop; Surber, Brady; Ale, Srinivasulu; James Ansley, R.

    2013-01-01

    Both the evergreen redberry juniper (Juniperus pinchotii Sudw.) and deciduous honey mesquite (Prosopis glandulosa Torr.) are destructive and aggressive invaders that affect rangelands and grasslands of the southern Great Plains of the United States. However, their current spatial extent and future expansion trends are unknown. This study was aimed at: (1) exploring the utility of aerial imagery for detecting and mapping intermixed redberry juniper and honey mesquite while both are in full foliage, using the support vector machine classifier at two sites in north central Texas, and (2) assessing and comparing the mapping accuracies between sites. Accuracy assessments revealed that the overall accuracies were 90% with an associated kappa coefficient of 0.86 and 89% with an associated kappa coefficient of 0.85 for sites 1 and 2, respectively. The Z-statistic (0.102 < 1.96) used to compare the classification results for both sites indicated an insignificant difference between the classifications at the 95% confidence level. In most instances, juniper and mesquite were identified correctly, with <7% being mistaken for the other woody species. These results indicate that assessment of the current infestation extent and severity of these two woody species in a spatial context is possible using aerial remote sensing imagery.
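
    The site comparison rests on a standard Z-test between two independent kappa coefficients, which can be written out as below; the kappa variances are assumed inputs because they depend on each site's error matrix, which is not reproduced here.

```python
# Hedged sketch: Z-statistic for comparing two independent kappa coefficients.
import math

def kappa_z(k1, var1, k2, var2):
    """Z = |k1 - k2| / sqrt(var1 + var2); Z < 1.96 means no significant difference at 95%."""
    return abs(k1 - k2) / math.sqrt(var1 + var2)

# With the reported kappas of 0.86 and 0.85, the study's Z of 0.102 falls well below 1.96.
```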

  5. Integration of LiDAR Data with Aerial Imagery for Estimating Rooftop Solar Photovoltaic Potentials in City of Cape Town

    NASA Astrophysics Data System (ADS)

    Adeleke, A. K.; Smit, J. L.

    2016-06-01

    Apart from the drive to reduce carbon dioxide emissions by carbon-intensive economies like South Africa, the recent spate of electricity load shedding across most parts of the country, including Cape Town, has left electricity consumers scrambling for alternatives, so as to rely less on the national grid. Solar energy, which is adequately available in most parts of Africa and regarded as a clean and renewable source of energy, makes it possible to generate electricity using photovoltaic technology. However, before time and financial resources are invested into rooftop solar photovoltaic systems in urban areas, it is important to evaluate the potential of the building rooftops intended to be used in harvesting the solar energy. This paper presents methodologies making use of LiDAR data and other ancillary data, such as high-resolution aerial imagery, to automatically extract building rooftops in the City of Cape Town and evaluate their potential for solar photovoltaic systems. Two main processes were involved: (1) automatic extraction of building roofs using the integration of LiDAR data and aerial imagery in order to derive their outlines and areal coverage; and (2) estimating the global solar radiation incident on each roof surface using an elevation model derived from the LiDAR data, in order to evaluate its solar photovoltaic potential. This resulted in a geodatabase, which can be queried to retrieve salient information about the viability of a particular building roof for solar photovoltaic installation.

  6. Comparison of aerial imagery from manned and unmanned aircraft platforms for monitoring cotton growth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Unmanned aircraft systems (UAS) have emerged as a low-cost and versatile remote sensing platform in recent years, but little work has been done on comparing imagery from manned and unmanned platforms for crop assessment. The objective of this study was to compare imagery taken from multiple cameras ...

  7. Analysis of the impact of spatial resolution on land/water classifications using high-resolution aerial imagery

    USGS Publications Warehouse

    Enwright, Nicholas M.; Jones, William R.; Garber, Adrienne L.; Keller, Matthew J.

    2014-01-01

    Long-term monitoring efforts often use remote sensing to track trends in habitat or landscape conditions over time. To most appropriately compare observations over time, long-term monitoring efforts strive for consistency in methods. Thus, advances and changes in technology over time can present a challenge. For instance, modern camera technology has led to an increasing availability of very high-resolution imagery (i.e. submetre and metre) and a shift from analogue to digital photography. While numerous studies have shown that image resolution can impact the accuracy of classifications, most of these studies have focused on the impacts of comparing spatial resolution changes greater than 2 m. Thus, a knowledge gap exists on the impacts of minor changes in spatial resolution (i.e. submetre to about 1.5 m) in very high-resolution aerial imagery (i.e. 2 m resolution or less). This study compared the impact of spatial resolution on land/water classifications of an area dominated by coastal marsh vegetation in Louisiana, USA, using 1:12,000 scale colour-infrared analogue aerial photography (AAP) scanned at four different dot-per-inch resolutions simulating ground sample distances (GSDs) of 0.33, 0.54, 1, and 2 m. Analysis of the impact of spatial resolution on land/water classifications was conducted by exploring various spatial aspects of the classifications including density of waterbodies and frequency distributions in waterbody sizes. This study found that a small-magnitude change (1–1.5 m) in spatial resolution had little to no impact on the amount of water classified (i.e. percentage mapped was less than 1.5%), but had a significant impact on the mapping of very small waterbodies (i.e. waterbodies ≤ 250 m2). These findings should interest those using temporal image classifications derived from very high-resolution aerial photography as a component of long-term monitoring programs.

  8. Use of video observation and motor imagery on jumping performance in national rhythmic gymnastics athletes.

    PubMed

    Battaglia, Claudia; D'Artibale, Emanuele; Fiorilli, Giovanni; Piazza, Marina; Tsopani, Despina; Giombini, Arrigo; Calcagno, Giuseppe; di Cagno, Alessandra

    2014-12-01

    The aim of this study was to evaluate whether a mental training protocol could improve gymnastic jumping performance. Seventy-two rhythmic gymnasts were randomly divided into an experimental and control group. At baseline, experimental group completed the Movement Imagery Questionnaire Revised (MIQ-R) to assess the gymnast ability to generate movement imagery. A repeated measures design was used to compare two different types of training aimed at improving jumping performance: (a) video observation and PETTLEP mental training associated with physical practice, for the experimental group, and (b) physical practice alone for the control group. Before and after six weeks of training, their jumping performance was measured using the Hopping Test (HT), Drop Jump (DJ), and Counter Movement Jump (CMJ). Results revealed differences between jumping parameters F(1,71)=11.957; p<.01, and between groups F(1,71)=10.620; p<.01. In the experimental group there were significant correlations between imagery ability and the post-training Flight Time of the HT, r(34)=-.295, p<.05 and the DJ, r(34)=-.297, p<.05. The application of the protocol described herein was shown to improve jumping performance, thereby preserving the elite athlete's energy for other tasks.

  9. Using high-resolution digital aerial imagery to map land cover

    USGS Publications Warehouse

    Dieck, J.J.; Robinson, Larry

    2014-01-01

    The Upper Midwest Environmental Sciences Center (UMESC) has used aerial photography to map land cover/land use on federally owned and managed lands for over 20 years. Until recently, that process used 23- by 23-centimeter (9- by 9-inch) analog aerial photos to classify vegetation along the Upper Mississippi River System, on National Wildlife Refuges, and in National Parks. With digital aerial cameras becoming more common and offering distinct advantages over analog film, UMESC transitioned to an entirely digital mapping process in 2009. Though not without challenges, this method has proven to be much more accurate and efficient when compared to the analog process.

  10. Monitoring the invasion of Spartina alterniflora using very high resolution unmanned aerial vehicle imagery in Beihai, Guangxi (China).

    PubMed

    Wan, Huawei; Wang, Qiao; Jiang, Dong; Fu, Jingying; Yang, Yipeng; Liu, Xiaoman

    2014-01-01

    Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To obtain the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from an unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, an increase of 19.07% compared with the area in 2009. A field survey was conducted for verification and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on the distribution, progress, and early detection of Spartina alterniflora invasion. OBIA, an object-based image analysis method for remote sensing (RS) detection, can make control measures more effective, accurate, and less expensive than a field survey of the invasive population.

  11. Monitoring the Invasion of Spartina alterniflora Using Very High Resolution Unmanned Aerial Vehicle Imagery in Beihai, Guangxi (China)

    PubMed Central

    Wan, Huawei; Wang, Qiao; Jiang, Dong; Yang, Yipeng; Liu, Xiaoman

    2014-01-01

    Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To obtain the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from an unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, an increase of 19.07% compared with the area in 2009. A field survey was conducted for verification and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on the distribution, progress, and early detection of Spartina alterniflora invasion. OBIA, an object-based image analysis method for remote sensing (RS) detection, can make control measures more effective, accurate, and less expensive than a field survey of the invasive population. PMID:24892066

  12. Preliminary statistical studies concerning the Campos RJ sugar cane area, using LANDSAT imagery and aerial photographs

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Costa, S. R. X.; Paiao, L. B. F.; Mendonca, F. J.; Shimabukuro, Y. E.; Duarte, V.

    1983-01-01

    The two phase sampling technique was applied to estimate the area cultivated with sugar cane in an approximately 984 sq km pilot region of Campos. Correlation between existing aerial photography and LANDSAT data was used. The two phase sampling technique corresponded to 99.6% of the results obtained by aerial photography, taken as ground truth. This estimate has a standard deviation of 225 ha, which constitutes a coefficient of variation of 0.6%.

  13. Forest fuel treatment detection using multi-temporal airborne Lidar data and high resolution aerial imagery ---- A case study at Sierra Nevada, California

    NASA Astrophysics Data System (ADS)

    Su, Y.; Guo, Q.; Collins, B.; Fry, D.; Kelly, M.

    2014-12-01

    Forest fuel treatments (FFT) are often employed in Sierra Nevada forests (located in California, US) to enhance forest health, regulate stand density, and reduce wildfire risk. However, there have been concerns that FFTs may have negative impacts on certain protected wildlife species. Due to the constraints and protection of resources (e.g., perennial streams, cultural resources, wildlife habitat, etc.), the actual FFT extents are usually different from planned extents. Identifying the actual extent of treated areas is of primary importance for understanding the environmental influence of FFTs. Light detection and ranging (Lidar) is a powerful remote sensing technique that can provide accurate forest structure measurements, offering great potential for monitoring forest changes. This study used canopy height model (CHM) and canopy cover (CC) products derived from multi-temporal airborne Lidar data to detect FFTs with an approach combining a pixel-wise thresholding method and an object-of-interest segmentation method. We also investigated forest change following the implementation of landscape-scale FFT projects through the use of the normalized difference vegetation index (NDVI) and standardized principal component analysis (PCA) from multi-temporal high resolution aerial imagery. The same FFT detection routine was applied to the Lidar data and the aerial imagery in order to compare their capability for FFT detection. Our results demonstrated that FFT detection using Lidar-derived CC products produced both the highest total accuracy and kappa coefficient, and was more robust at identifying areas with light FFTs. The accuracy using Lidar-derived CHM products was significantly lower than that using Lidar-derived CC, but was still slightly higher than that using aerial imagery. FFT detection using NDVI and standardized PCA from multi-temporal aerial imagery produced almost identical total accuracy and kappa coefficient
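
    A bare-bones version of the pixel-wise thresholding step might look like the sketch below, flagging pixels whose canopy height dropped sharply between the two Lidar acquisitions; the threshold values and array names are assumptions, not the study's calibrated settings.

```python
# Hedged sketch: candidate fuel-treatment pixels from two co-registered canopy height models.
import numpy as np

def detect_treated_pixels(chm_before, chm_after, min_height_drop=2.0, min_before=2.0):
    """Boolean mask of pixels (in metres) whose canopy height fell by more than a threshold."""
    drop = chm_before - chm_after
    return (drop > min_height_drop) & (chm_before > min_before)

# Connected groups of flagged pixels could then be grown into treatment objects,
# e.g. with scipy.ndimage.label, to mimic the object-of-interest segmentation step.
```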

  14. Characterizing Sediment Flux Using Reconstructed Topography and Bathymetry from Historical Aerial Imagery on the Willamette River, OR.

    NASA Astrophysics Data System (ADS)

    Langston, T.; Fonstad, M. A.

    2014-12-01

    The Willamette is a gravel-bed river that drains ~28,800 km2 between the Coast Range and Cascade Range in northwestern Oregon before entering the Columbia River near Portland. In the last 150 years, natural and anthropogenic drivers have altered the sediment transport regime, drastically reducing the geomorphic complexity of the river. Previously dynamic multi-threaded reaches have transformed into stable single channels to the detriment of ecosystem diversity and productivity. Flow regulation by flood-control dams, bank revetments, and conversion of riparian forests to agriculture have been key drivers of channel change. To date, little has been done to quantitatively describe temporal and spatial trends of sediment transport in the Willamette. This knowledge is critical for understanding how modern processes shape landforms and habitats. The goal of this study is to describe large-scale temporal and spatial trends in the sediment budget by reconstructing historical topography and bathymetry from aerial imagery. The area of interest for this project is a reach of the Willamette stretching from the confluence of the McKenzie River to the town of Peoria. While this reach remains one of the most dynamic sections of the river, it has exhibited a great loss in geomorphic complexity. Aerial imagery for this section of the river is available from USDA and USACE projects dating back to the 1930s. Above-water surface elevations are extracted using the Imagine Photogrammetry package in ERDAS. Bathymetry is estimated using a method known as Hydraulic Assisted Bathymetry, in which hydraulic parameters are used to develop a regression between water depth and pixel values. From this, pixel values are converted to depth below the water surface. Merged together, topography and bathymetry produce a spatially continuous digital elevation model of the geomorphic floodplain. Volumetric changes in sediment stored along the study reach are then estimated for different historic periods
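
    The depth-from-brightness regression behind Hydraulic Assisted Bathymetry can be sketched as a simple log-linear fit; the calibration inputs and the specific functional form are assumptions for illustration, not the authors' exact procedure.

```python
# Hedged sketch: calibrate and apply a depth vs. log(pixel value) regression.
import numpy as np

def calibrate_depth(pixel_values, known_depths):
    """Fit depth = a * ln(pixel) + b at points of known depth."""
    a, b = np.polyfit(np.log(pixel_values.astype(float)), known_depths, deg=1)
    return a, b

def predict_depth(pixel_raster, a, b):
    """Convert a wetted-pixel raster to estimated depth below the water surface."""
    depth = a * np.log(np.clip(pixel_raster.astype(float), 1, None)) + b
    return np.clip(depth, 0, None)
```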

  15. Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.

    2012-01-01

    As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinate of the bounds of the image are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to where the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
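
    The ground-to-pixel mapping can be illustrated with a basic pinhole-camera projection, as in the sketch below; the pose convention, intrinsic matrix and variable names are assumptions and not the NASA code itself.

```python
# Hypothetical sketch: project DEM points into the image through a pinhole camera model.
import numpy as np

def project_dem_points(points_world, R, t, K):
    """points_world: N x 3 ground points; R, t: world-to-camera rotation/translation; K: 3x3 intrinsics."""
    cam = R @ points_world.T + t.reshape(3, 1)     # 3 x N points in the camera frame
    in_front = cam[2] > 0                          # keep only points ahead of the camera
    pix = K @ cam[:, in_front]
    pix = pix[:2] / pix[2]                         # perspective divide -> (u, v) pixel coords
    return pix.T, in_front
```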

  16. A study of video frame rate on the perception of moving imagery detail

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    The rate at which each frame of color moving video imagery is displayed was varied in small steps to determine what is the minimal acceptable frame rate for life scientists viewing white rats within a small enclosure. Two, twenty five second-long scenes (slow and fast animal motions) were evaluated by nine NASA principal investigators and animal care technicians. The mean minimum acceptable frame rate across these subjects was 3.9 fps both for the slow and fast moving animal scenes. The highest single trial frame rate averaged across all subjects for the slow and the fast scene was 6.2 and 4.8, respectively. Further research is called for in which frame rate, image size, and color/gray scale depth are covaried during the same observation period.

  17. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1988-01-19

    approach for the analysis of aerial images. In this approach, image analysis is performed at three levels of abstraction, namely iconic or low-level image analysis, symbolic or medium-level image analysis, and semantic or high-level image analysis. Domain dependent knowledge about prototypical urban

  18. Monitoring a BLM level 5 watershed with very-large aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A fifth order BLM watershed in central Wyoming was flown using a Sport-airplane to acquire high-resolution aerial images from 2 cameras at 2 altitudes. Project phases 1 and 2 obtained images for measuring ground cover, species composition and canopy cover of Wyoming big sagebrush by ecological site....

  19. Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system.

    PubMed

    Michez, Adrien; Piégay, Hervé; Lisein, Jonathan; Claessens, Hugues; Lejeune, Philippe

    2016-03-01

    Riparian forests are critically endangered by many anthropogenic pressures and natural hazards. The importance of riparian zones has been acknowledged by European Directives, involving multi-scale monitoring. The use of very-high-resolution and hyperspatial imagery in a multi-temporal approach is an emerging topic. The trend is reinforced by the recent and rapid growth of the use of unmanned aerial systems (UAS), which has prompted the development of innovative methodology. Our study proposes a methodological framework to explore how a set of multi-temporal images acquired during a vegetative period can differentiate some of the deciduous riparian forest species and their health conditions. More specifically, the developed approach intends to identify, through a process of variable selection, which variables derived from UAS imagery and which scale of image analysis are the most relevant to our objectives. The methodological framework is applied to two study sites to describe the riparian forest through two fundamental characteristics: the species composition and the health condition. These characteristics were selected not only because of their use as proxies for the ecological integrity of the riparian zone but also because of their use for river management. The comparison of various scales of image analysis identified the smallest object-based image analysis (OBIA) objects (ca. 1 m(2)) as the most relevant scale. Variables derived from spectral information (band ratios) were identified as the most appropriate, followed by variables related to the vertical structure of the forest. Classification results show good overall accuracies for the species composition of the riparian forest (five classes, 79.5 and 84.1% for site 1 and site 2). The classification scenario regarding the health condition of the black alders of site 1 performed the best (90.6%). The quality of the classification models developed with a UAS-based, cost-effective, and semi-automatic approach

  20. Surface Temperature Mapping of the University of Northern Iowa Campus Using High Resolution Thermal Infrared Aerial Imageries

    PubMed Central

    Savelyev, Alexander; Sugumaran, Ramanathan

    2008-01-01

    The goal of this project was to map the surface temperature of the University of Northern Iowa campus using high-resolution thermal infrared aerial imagery. A thermal camera with a spectral bandwidth of 3.0-5.0 μm was flown at an average altitude of 600 m, achieving a ground resolution of 29 cm. Ground control data were used to construct the pixel-to-temperature conversion model, which was later used to produce temperature maps of the entire campus and also to validate the model. The temperature map was then used to assess building rooftop conditions and steam line faults in the study area. Assessment of the temperature map revealed a number of building structures whose high surface temperatures indicate heat leaks and that may warrant insulation improvements. Several hot spots were also identified on the campus corresponding to steam pipeline faults. High-resolution thermal infrared imagery proved to be a highly effective tool for precise heat anomaly detection on the campus, and it can be used by university facility services for effective future maintenance of buildings and grounds. PMID:27873800
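
    The pixel-to-temperature conversion amounts to a calibration fit against the ground control measurements, sketched below as a linear model; the linear form and input names are assumptions, since the exact model is not reproduced here.

```python
# Hedged sketch: linear pixel-to-temperature calibration and its application to a mosaic.
import numpy as np

def fit_pixel_to_temperature(pixel_values, ground_temps_c):
    """Fit temperature (deg C) = gain * pixel + offset from ground control points."""
    gain, offset = np.polyfit(pixel_values.astype(float), ground_temps_c, deg=1)
    return gain, offset

def temperature_map(thermal_raster, gain, offset):
    return gain * thermal_raster.astype(float) + offset
```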

  1. Wavelet-based detection of bush encroachment in a savanna using multi-temporal aerial photographs and satellite imagery

    NASA Astrophysics Data System (ADS)

    Shekede, Munyaradzi D.; Murwira, Amon; Masocha, Mhosisi

    2015-03-01

    Although increased woody plant abundance has been reported in tropical savannas worldwide, techniques for detecting the direction and magnitude of change are mostly based on visual interpretation of historical aerial photography or textural analysis of multi-temporal satellite images. These techniques are prone to human error and do not permit integration of remotely sensed data from diverse sources. Here, we integrate aerial photographs with high spatial resolution satellite imagery and use a discrete wavelet transform to objectively detect the dynamics of bush encroachment at two protected Zimbabwean savanna sites. Based on the recently introduced intensity-dominant scale approach, we test the hypotheses that: (1) the encroachment of woody patches into the surrounding grassland matrix causes a shift in the dominant scale, and this shift can be detected using a discrete wavelet transform regardless of whether aerial photography or satellite data are used; and (2) as the woody patch size stabilises, woody cover tends to increase, thereby triggering changes in intensity. The results show that at the first site, where tree patches were already established (Lake Chivero Game Reserve), the dominant scale of woody patches initially increased from 8 m between 1972 and 1984 before stabilising at 16 m and 32 m between 1984 and 2012, while the intensity fluctuated during the same period. In contrast, at the second site, a formerly grass-dominated site (Kyle Game Reserve), we observed an unclear dominant scale in 1972 which later became distinct in 1985, 1996 and 2012. Over the same period, the intensity increased. Our results imply that using our approach we can detect and quantify woody/bush patch dynamics in savanna landscapes.
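
    To make the dominant-scale idea concrete, the sketch below decomposes an image with a 2-D discrete wavelet transform (PyWavelets) and reports the detail energy ("intensity") captured at each dyadic scale; the level with the largest energy is taken as the dominant scale. This is an illustrative reading of the approach, not the authors' exact implementation, and the wavelet, level count and pixel size are assumptions.

      import numpy as np
      import pywt

      def dominant_scale(image, wavelet="haar", levels=5, pixel_size=1.0):
          """Return (scale_in_metres, energy_per_level) from a 2-D DWT."""
          coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
          # coeffs[0] is the approximation; coeffs[1:] are (cH, cV, cD) detail tuples
          # ordered from the coarsest decomposition level to the finest.
          energies = []
          for i, (cH, cV, cD) in enumerate(coeffs[1:]):
              level = levels - i                      # decomposition level of this tuple
              energy = np.sum(cH**2) + np.sum(cV**2) + np.sum(cD**2)
              energies.append((2**level * pixel_size, energy))
          best_scale = max(energies, key=lambda e: e[1])[0]
          return best_scale, energies

      # Synthetic example: 16-pixel-wide woody "patches" on a 1 m pixel grid.
      img = np.zeros((256, 256))
      img[64:80, 64:80] = 1.0
      img[160:176, 100:116] = 1.0
      print(dominant_scale(img, levels=5, pixel_size=1.0)[0])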

  2. Mapping trees outside forests using high-resolution aerial imagery: a comparison of pixel- and object-based classification approaches.

    PubMed

    Meneguzzo, Dacia M; Liknes, Greg C; Nelson, Mark D

    2013-08-01

    Discrete trees and small groups of trees in nonforest settings are considered an essential resource around the world and are collectively referred to as trees outside forests (ToF). ToF provide important functions across the landscape, such as protecting soil and water resources, providing wildlife habitat, and improving farmstead energy efficiency and aesthetics. Despite the significance of ToF, forest and other natural resource inventory programs and geospatial land cover datasets that are available at a national scale do not include comprehensive information regarding ToF in the United States. Additional ground-based data collection and acquisition of specialized imagery to inventory these resources are expensive alternatives. As a potential solution, we identified two remote sensing-based approaches that use free high-resolution aerial imagery from the National Agriculture Imagery Program (NAIP) to map all tree cover in an agriculturally dominant landscape. We compared the results obtained using an unsupervised per-pixel classifier (independent component analysis [ICA]) and an object-based image analysis (OBIA) procedure in Steele County, Minnesota, USA. Three types of accuracy assessments were used to evaluate how each method performed in terms of: (1) producing a county-level estimate of total tree-covered area, (2) correctly locating tree cover on the ground, and (3) producing tree cover patch metrics comparable to those delineated by a human photo interpreter. Both approaches were found to be viable for mapping tree cover over a broad spatial extent and could serve to supplement ground-based inventory data. The ICA approach produced an estimate of total tree cover more similar to the photo-interpreted result, but the output from the OBIA method was more realistic in terms of describing the actual observed spatial pattern of tree cover.

  3. Automated Identification of Rivers and Shorelines in Aerial Imagery Using Image Texture

    DTIC Science & Technology

    2011-01-01

    defining the criteria for segmenting the image. For these cases certain automated, unsupervised (or minimally supervised) image classification ... banks, image analysis, edge finding, photography, satellite, texture, entropy ... high resolution bank geometry. Much of the globe is covered by various sorts of multi- or hyperspectral imagery and numerous techniques have been

  4. Improving Measurement of Forest Structural Parameters by Co-Registering of High Resolution Aerial Imagery and Low Density LiDAR Data.

    PubMed

    Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan

    2009-01-01

    Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for the measurement of forest structural parameters; however, the accuracy of crown width extraction is not satisfactory when using low density LiDAR, especially in high canopy cover forest. We used high resolution aerial imagery with a low density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image is matched to the aerial image with an automated keypoint search algorithm. As a result, a high registration accuracy of 0.5 pixels was obtained. A local maximum filter, watershed segmentation, and object-oriented image segmentation are used to obtain tree height and crown width. Results indicate that the camera data collected by the integrated LiDAR system play an important role in registration with aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction when compared to using only the low density LiDAR data.

  5. Image degradation in aerial imagery duplicates. [photographic processing of photographic film and reproduction (copying)

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1975-01-01

    A series of Earth Resources Aircraft Program data flights were made over an aerial test range in Arizona for the evaluation of large cameras. Specifically, both medium altitude and high altitude flights were made to test and evaluate a series of color as well as black-and-white films. Image degradation, inherent in duplication processing, was studied. Resolution losses resulting from resolution characteristics of the film types are given. Color duplicates, in general, are shown to be degraded more than black-and-white films because of the limitations imposed by available aerial color duplicating stock. Results indicate that a greater resolution loss may be expected when the original has higher resolution. Photographs of the duplications are shown.

  6. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence has achieved pixel level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
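
    A rough approximation of this detect-describe-match pipeline (Harris-style corners, binary LATCH descriptors, Hamming matching) can be put together with OpenCV as sketched below. LATCH lives in the opencv-contrib xfeatures2d module; the adaptive Harris variant, the road-marking focus and the geometric outlier filter developed in the paper are not reproduced here, and the file names are placeholders.

      import cv2

      def detect_describe(gray):
          # Harris-based corner detection (goodFeaturesToTrack with the Harris score),
          # converted to cv2.KeyPoint objects for descriptor extraction.
          corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                            minDistance=5, useHarrisDetector=True, k=0.04)
          kps = [cv2.KeyPoint(float(x), float(y), 7.0) for [[x, y]] in corners]
          latch = cv2.xfeatures2d.LATCH_create()      # requires opencv-contrib-python
          return latch.compute(gray, kps)

      # Aerial image patch and MLSPC ortho-image patch (placeholder paths).
      img_a = cv2.imread("aerial_patch.png", cv2.IMREAD_GRAYSCALE)
      img_b = cv2.imread("mlspc_ortho_patch.png", cv2.IMREAD_GRAYSCALE)

      kps_a, des_a = detect_describe(img_a)
      kps_b, des_b = detect_describe(img_b)

      # Brute-force Hamming matching with Lowe's ratio test as a simple outlier filter.
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
      good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
              if m.distance < 0.75 * n.distance]
      print(f"{len(good)} tentative correspondences")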

  7. An Automated Approach to Extracting River Bank Locations from Aerial Imagery Using Image Texture

    DTIC Science & Technology

    2015-11-04

    being analyzed, rl is the local range of values across the pixels and rm is the maximum possible range of values. Algorithm: Imagery must first be ... River, LA. The case presented in Figures 1 and 6 represents an ideal case for demonstrating the algorithm in that the surface of the water appears uniform ... x 1400 pixel image. A human operator loaded the image in the open-source Quantum GIS programme and traced the edges to create an ESRI shape file, which
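
    The surviving fragment defines a texture measure from the local range of pixel values (rl) normalized by the maximum possible range (rm). A minimal sketch of that normalized local-range texture, assuming a square moving window and 8-bit imagery (the report's window size, data type and thresholds are not recoverable from the fragment), could be:

      import numpy as np
      from scipy.ndimage import maximum_filter, minimum_filter

      def local_range_texture(image, window=9, r_max=255.0):
          """Normalized local range r_l / r_m in a window x window neighbourhood."""
          img = image.astype(float)
          r_local = maximum_filter(img, size=window) - minimum_filter(img, size=window)
          return r_local / r_max

      # Smooth water surfaces yield values near 0; textured banks and vegetation near 1.
      img = (np.random.rand(200, 300) * 255).astype(np.uint8)
      texture = local_range_texture(img, window=9)
      water_mask = texture < 0.2     # illustrative threshold, not from the report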

  8. Derivation of River Bathymetry Using Imagery from Unmanned Aerial Vehicles (UAV)

    DTIC Science & Technology

    2011-09-01

    from gamma rays to radio waves. Near the center of this spectrum are the wavelengths that are of concern for derivation of bathymetry from imagery ... airborne manned platforms have been used for bathymetric derivation, but are not in abundance, nor do they have the spatial resolution required to ... regarding river water depths, which is a necessity for safe operational planning. Satellite sensors and airborne manned platforms have been used for

  9. 3D Building Modeling and Reconstruction using Photometric Satellite and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Izadi, Mohammad

    In this thesis, the problem of three dimensional (3D) reconstruction of building models using photometric satellite and aerial images is investigated. Here, two systems are presented: 1) 3D building reconstruction using a nadir single-view image, and 2) 3D building reconstruction using slant multiple-view aerial images. The first system detects building rooftops in orthogonal aerial/satellite images using a hierarchical segmentation algorithm and a shadow verification approach. The heights of detected buildings are then estimated using a fuzzy rule-based method, which measures the height of a building by comparing its predicted shadow region with the actual shadow evidence in the image. This system finally generates a KML (Keyhole Markup Language) file as output, which contains 3D models of the detected buildings. The second system uses the geolocation information of a scene containing a building of interest and uploads all slant-view images that contain this scene from an input image dataset. These images are then searched automatically to choose image pairs with different views of the scene (north, east, south and west) based on the geolocation and auxiliary data accompanying the input data (metadata that describes the acquisition parameters at the capture time). The camera parameters corresponding to these images are refined using a novel point matching algorithm. Next, the system independently reconstructs 3D flat surfaces that are visible in each view using an iterative algorithm. The 3D surfaces generated for all views are combined, and redundant surfaces are removed to create a complete set of 3D surfaces. Finally, the combined 3D surfaces are connected together to generate a more complete 3D model. For the experimental results, both presented systems are evaluated quantitatively and qualitatively, and different aspects of the two systems including accuracy, stability, and execution time are discussed.

  10. Mapping Urban Tree Canopy Coverage and Structure using Data Fusion of High Resolution Satellite Imagery and Aerial Lidar

    NASA Astrophysics Data System (ADS)

    Elmes, A.; Rogan, J.; Williams, C. A.; Martin, D. G.; Ratick, S.; Nowak, D.

    2015-12-01

    Urban tree canopy (UTC) coverage is a critical component of sustainable urban areas. Trees provide a number of important ecosystem services, including air pollution mitigation, water runoff control, and aesthetic and cultural values. Critically, urban trees also act to mitigate the urban heat island (UHI) effect by shading impervious surfaces and via evaporative cooling. The cooling effect of urban trees can be seen locally, with individual trees reducing home HVAC costs, and at a citywide scale, reducing the extent and magnitude of an urban area's UHI. In order to accurately model the ecosystem services of a given urban forest, it is essential to map in detail the condition and composition of these trees at a fine scale, capturing individual tree crowns and their vertical structure. This paper presents methods for delineating UTC and measuring canopy structure at fine spatial resolution (<1 m). These metrics are essential for modeling the HVAC benefits from UTC for individual homes, and for assessing the ecosystem services for entire urban areas. Such maps have previously been made using a variety of methods, typically relying on high resolution aerial or satellite imagery. This paper seeks to contribute to this growing body of methods, relying on a data fusion method to combine the information contained in high resolution WorldView-3 satellite imagery and aerial lidar data using an object-based image classification approach. The study area, Worcester, MA, has recently undergone a large-scale tree removal and reforestation program, following a pest eradication effort. Therefore, the urban canopy in this location provides a wide mix of tree age classes and functional types, ideal for illustrating the effectiveness of the proposed methods. Early results show that the object-based classifier is indeed capable of identifying individual tree crowns, while continued research will focus on extracting crown structural characteristics using lidar-derived metrics. Ultimately

  11. Detection and spatiotemporal analysis of methane ebullition on thermokarst lake ice using high-resolution optical aerial imagery

    NASA Astrophysics Data System (ADS)

    Lindgren, P. R.; Grosse, G.; Anthony, K. M. Walter; Meyer, F. J.

    2016-01-01

    Thermokarst lakes are important emitters of methane, a potent greenhouse gas. However, accurate estimation of methane flux from thermokarst lakes is difficult due to their remoteness and observational challenges associated with the heterogeneous nature of ebullition. We used high-resolution (9-11 cm) snow-free aerial images of an interior Alaskan thermokarst lake, acquired 2 and 4 days following freeze-up in 2011 and 2012, respectively, to detect and characterize methane ebullition seeps and to estimate whole-lake ebullition. Bubbles impeded by the lake ice sheet form distinct white patches, as a function of bubbling, when lake ice grows downward and around them, trapping the gas in the ice. Our aerial imagery thus captured a snapshot of bubbles trapped in lake ice during the ebullition events that occurred before the image acquisition. Image analysis showed that low-flux A- and B-type seeps are associated with low brightness patches and are statistically distinct from high-flux C-type and hotspot seeps associated with high brightness patches. Mean whole-lake ebullition based on optical image analysis in combination with bubble-trap flux measurements was estimated to be 174 ± 28 and 216 ± 33 mL gas m⁻² d⁻¹ for the years 2011 and 2012, respectively. A large number of seeps demonstrated spatiotemporal stability over our 2-year study period. A strong inverse exponential relationship (R² ≥ 0.79) was found between the percentage of the lake ice surface covered with bubble patches and the distance from the active thermokarst lake margin. Even though the narrow timing of optical image acquisition is a critical factor, with respect to both atmospheric pressure changes and snow/no-snow conditions during early lake freeze-up, our study shows that optical remote sensing is a powerful tool to map ebullition seeps on lake ice, to identify their relative strength of ebullition, and to assess their spatiotemporal variability.
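
    The reported inverse exponential relationship between bubble-patch cover and distance from the thermokarst margin can be fit as sketched below; the sample values and coefficients are placeholders, since only R² values are given in the abstract.

      import numpy as np
      from scipy.optimize import curve_fit

      def bubble_cover(distance, a, b):
          """Inverse exponential model: percent lake-ice area covered by bubble patches."""
          return a * np.exp(-b * distance)

      # Hypothetical samples: distance from the active thermokarst margin (m) vs. % cover.
      dist  = np.array([5., 20., 40., 80., 150., 300.])
      cover = np.array([12., 8.5, 5.1, 2.2, 0.9, 0.2])

      (a, b), _ = curve_fit(bubble_cover, dist, cover, p0=(10.0, 0.01))
      pred = bubble_cover(dist, a, b)
      r2 = 1 - np.sum((cover - pred) ** 2) / np.sum((cover - cover.mean()) ** 2)
      print(f"cover ~ {a:.2f} * exp(-{b:.4f} * d), R2 = {r2:.2f}")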

  12. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
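
    The two accuracy checks described (vertical error against survey-grade GPS checkpoints and repeat-flight differencing for change detection) reduce to straightforward array arithmetic; below is a sketch under the assumption that the DSMs are co-registered rasters of equal size, with synthetic arrays standing in for the Pix4D/APS exports.

      import numpy as np

      def vertical_rmse(dsm, checkpoints):
          """RMSE of DSM heights against (row, col, elevation) GPS checkpoints."""
          diffs = [dsm[r, c] - z for r, c, z in checkpoints]
          return float(np.sqrt(np.mean(np.square(diffs))))

      def elevation_change(dsm_t0, dsm_t1, threshold=0.25):
          """Mask of cells whose elevation changed by more than `threshold` metres
          between two co-registered DSMs (~0.25 m matches the reported vertical accuracy)."""
          return np.abs(dsm_t1 - dsm_t0) > threshold

      # Illustrative stand-in data.
      dsm_a = np.random.normal(100.0, 0.05, size=(400, 400))
      dsm_b = dsm_a.copy()
      dsm_b[150:180, 200:230] += 1.5          # simulated new structure
      checkpoints = [(10, 12, 100.02), (300, 45, 99.95)]   # (row, col, GPS elevation)
      print(vertical_rmse(dsm_a, checkpoints))
      change_mask = elevation_change(dsm_a, dsm_b)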

  13. A study of integration methods of aerial imagery and LIDAR data for a high level of automation in 3D building reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Suyoung; Schenk, Toni F.

    2003-04-01

    This paper describes integration methods to increase the level of automation in building reconstruction. Aerial imagery has been used as a major source in mapping fields and, in recent years, LIDAR data became popular as another type of mapping resource. Regarding their performance, aerial imagery has the ability to delineate object boundaries but leaves many parts of boundaries missing during feature extraction, while LIDAR data provide direct information about the heights of object surfaces but have limitations in boundary localization. Efficient methods using the complementary characteristics of the two sensors are described to generate hypotheses of building boundaries and localize the object features. Tree structures for grid contours of LIDAR data are used for the interpretation of contours. Buildings are recognized by analyzing the contour trees and modeled with surface patches from the LIDAR data. Hypotheses of building models are generated as combinations of wing models and verified by assessing the consistency between the corresponding data sets. Experiments using aerial imagery and laser data are presented. Our approach shows that building boundaries are successfully recognized through our contour analysis approach, and that the inference from contours and our modeling method using the wing model increase the level of automation in the hypothesis generation/verification steps.

  14. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle

    SciTech Connect

    Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; Glenn, Nancy F.

    2012-09-17

    During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).

  15. The influence of the in situ camera calibration for direct georeferencing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Barrios, R.; Centeno, J.

    2014-11-01

    The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation in the process due to the GNSS/INS technologies, the accuracy of the obtained results depends on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. One sub-group of parameters (lever arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame, due to the impossibility of having all sensors at the same position and orientation on the airborne platform. Another sub-group of parameters models the internal characteristics of the sensor (IOP). A system calibration procedure has been recommended by worldwide studies to obtain accurate parameters (mounting and sensor characteristics) for applications of direct sensor orientation. Commonly, mounting and sensor characteristics are not stable; they can vary under different flight conditions. The system calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which are not available in the conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of in situ camera calibration to improve the accuracy of the direct georeferencing of aerial images. The camera calibration uses a minimum image block, extracted from the conventional photogrammetric flight, and a control point arrangement. A digital Vexcel UltraCam XP camera connected to a POS AV™ system was used to acquire two photogrammetric image blocks. The blocks have different flight directions and opposite flight lines. In situ calibration procedures to compute different sets of IOPs are performed and their results are analyzed and used in photogrammetric experiments. The IOPs

  16. Parameter optimization of image classification techniques to delineate crowns of coppice trees on UltraCam-D aerial imagery in woodlands

    NASA Astrophysics Data System (ADS)

    Erfanifard, Yousef; Stereńczak, Krzysztof; Behnia, Negin

    2014-01-01

    The need to estimate optimal parameters is a drawback of some classification techniques, as parameter choice affects their performance for a given dataset and can reduce classification accuracy. This study aimed to optimize the combination of effective parameters of support vector machine (SVM), artificial neural network (ANN), and object-based image analysis (OBIA) classification techniques using the Taguchi method. The optimized techniques were applied to delineate crowns of Persian oak coppice trees on UltraCam-D very high spatial resolution aerial imagery in the Zagros semiarid woodlands, Iran. The imagery was classified and the maps were assessed by the receiver operating characteristic curve and other performance metrics. The results showed that Taguchi is a robust approach to optimize the combination of effective parameters in these image classification techniques. The area under the curve (AUC) showed that the optimized OBIA could discriminate tree crowns on the imagery well (AUC = 0.897), while SVM and ANN yielded slightly lower AUC performances of 0.819 and 0.850, respectively. The indices of accuracy (0.999) and precision (0.999) and the performance metrics of specificity (0.999) and sensitivity (0.999) in the optimized OBIA were higher than with the other techniques. The optimization of effective parameters of image classification techniques by the Taguchi method thus provided encouraging results to discriminate the crowns of Persian oak coppice trees on UltraCam-D aerial imagery in the Zagros semiarid woodlands.

  17. Supervised classification of aerial imagery and multi-source data fusion for flood assessment

    NASA Astrophysics Data System (ADS)

    Sava, E.; Harding, L.; Cervone, G.

    2015-12-01

    Floods are among the most devastating natural hazards, and the ability to produce an accurate and timely flood assessment before, during, and after an event is critical for their mitigation and response. Remote sensing technologies have become the de facto approach for observing the Earth and its environment. However, satellite remote sensing data are not always available. For these reasons, it is crucial to develop new techniques in order to produce flood assessments during and after an event. Recent advancements in techniques for fusing remote sensing data with near-real-time heterogeneous datasets have allowed emergency responders to more efficiently extract increasingly precise and relevant knowledge from the available information. This research presents a fusion technique using satellite remote sensing imagery coupled with non-authoritative data such as Civil Air Patrol (CAP) imagery and tweets. A new computational methodology is proposed based on machine learning algorithms to automatically identify water pixels in CAP imagery. Specifically, wavelet transformations are paired with multiple classifiers, run in parallel, to build models discriminating water and non-water regions. The learned classification models are first tested against a set of control cases, and then used to automatically classify each image separately. A measure of uncertainty is computed for each pixel in an image, proportional to the number of models classifying the pixel as water. Geo-tagged tweets are continuously harvested, stored in a MongoDB database, and queried in real time. They are fused with the classified CAP data, and with satellite remote sensing derived flood extent results, to produce comprehensive flood assessment maps. The final maps are then compared with FEMA-generated flood extents to assess their accuracy. The proposed methodology is applied to two test cases, relative to the 2013 floods in Boulder, CO, and the 2015 floods in Texas.
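
    The per-pixel uncertainty measure described (the share of parallel classifiers labelling a pixel as water) can be expressed compactly. The sketch below assumes each model already produces a boolean water mask for the same CAP image and stacks them to obtain the agreement fraction; the wavelet feature extraction and the individual classifiers themselves are not shown.

      import numpy as np

      def water_agreement(masks):
          """Fraction of models classifying each pixel as water.

          masks: list of boolean arrays of identical shape, one per trained classifier.
          Returns an array in [0, 1]; values near 1 mean high-confidence water."""
          stack = np.stack(masks).astype(float)
          return stack.mean(axis=0)

      # Hypothetical output of three classifiers run in parallel on one CAP image.
      h, w = 256, 256
      masks = [np.random.rand(h, w) > 0.5 for _ in range(3)]
      confidence = water_agreement(masks)
      flood_extent = confidence >= 0.5        # simple majority vote for the flood map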

  18. Assessment of Unmanned Aerial Vehicles Imagery for Quantitative Monitoring of Wheat Crop in Small Plots

    PubMed Central

    Lelong, Camille C. D.; Burger, Philippe; Jubelin, Guillaume; Roux, Bruno; Labbé, Sylvain; Baret, Frédéric

    2008-01-01

    This paper outlines how light Unmanned Aerial Vehicles (UAV) can be used in remote sensing for precision farming. It focuses on the combination of simple digital photographic cameras with spectral filters, designed to provide multispectral images in the visible and near-infrared domains. In 2005, these instruments were fitted to a powered glider and a parachute, and flown at six dates staggered over the crop season. We monitored ten varieties of wheat, grown in trial micro-plots in the South-West of France. For each date, we acquired multiple views in four spectral bands corresponding to blue, green, red, and near-infrared. We then performed accurate corrections of image vignetting, geometric distortions, and radiometric bidirectional effects. Afterwards, we derived for each experimental micro-plot several vegetation indices relevant for vegetation analyses. Finally, we sought relationships between these indices and field-measured biophysical parameters, both generic and date-specific. We thereby established a robust and stable generic relationship between, on the one hand, leaf area index and NDVI and, on the other hand, nitrogen uptake and GNDVI. Due to a high amount of noise in the data, it was not possible to obtain a more accurate model for each date independently. A validation protocol showed that we could expect a precision level of 15% in the estimation of the biophysical parameters when using these relationships. PMID:27879893

  19. Assessment of Unmanned Aerial Vehicles Imagery for Quantitative Monitoring of Wheat Crop in Small Plots.

    PubMed

    Lelong, Camille C D; Burger, Philippe; Jubelin, Guillaume; Roux, Bruno; Labbé, Sylvain; Baret, Frédéric

    2008-05-26

    This paper outlines how light Unmanned Aerial Vehicles (UAV) can be used in remote sensing for precision farming. It focuses on the combination of simple digital photographic cameras with spectral filters, designed to provide multispectral images in the visible and near-infrared domains. In 2005, these instruments were fitted to a powered glider and a parachute, and flown at six dates staggered over the crop season. We monitored ten varieties of wheat, grown in trial micro-plots in the South-West of France. For each date, we acquired multiple views in four spectral bands corresponding to blue, green, red, and near-infrared. We then performed accurate corrections of image vignetting, geometric distortions, and radiometric bidirectional effects. Afterwards, we derived for each experimental micro-plot several vegetation indices relevant for vegetation analyses. Finally, we sought relationships between these indices and field-measured biophysical parameters, both generic and date-specific. We thereby established a robust and stable generic relationship between, on the one hand, leaf area index and NDVI and, on the other hand, nitrogen uptake and GNDVI. Due to a high amount of noise in the data, it was not possible to obtain a more accurate model for each date independently. A validation protocol showed that we could expect a precision level of 15% in the estimation of the biophysical parameters when using these relationships.
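
    The two indices behind the generic relationships reported here, NDVI and GNDVI, are simple band ratios. A sketch computing both from co-registered red, green and near-infrared reflectance values follows; the variable names and plot values are assumptions for illustration, not the paper's data.

      import numpy as np

      def ndvi(nir, red):
          """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
          return (nir - red) / (nir + red + 1e-9)

      def gndvi(nir, green):
          """Green NDVI: (NIR - green) / (NIR + green)."""
          return (nir - green) / (nir + green + 1e-9)

      # Per-plot mean reflectances (hypothetical values for three micro-plots).
      nir   = np.array([0.45, 0.52, 0.38])
      red   = np.array([0.08, 0.06, 0.11])
      green = np.array([0.12, 0.10, 0.14])

      print(ndvi(nir, red))    # related to leaf area index in the reported relationship
      print(gndvi(nir, green)) # related to nitrogen uptake in the reported relationship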

  20. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle

    DOE PAGES

    Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; ...

    2012-09-17

    During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).

  1. Improvement of erosion risk modelling using soil information derived from aerial Vis-NIR imagery

    NASA Astrophysics Data System (ADS)

    Ciampalini, Rossano; Raclot, Damien; Le Bissonnais, Yves

    2016-04-01

    The aim of this research is to test the benefit of hyperspectral imagery for characterising soil surface properties for soil erosion modelling purposes. The research area is the Lebna catchment located in the north of Tunisia (Cap Bon Region). Soil erosion is evaluated with the use of two different soil erosion models: PESERA (Pan-European Soil Erosion Risk Assessment, already used for soil erosion risk mapping in the European Union; Kirkby et al., 2008) and Mesales (Regional Modelling of Soil Erosion Risk, developed by Le Bissonnais et al., 1998, 2002). For that, different sources for soil properties and derived parameters, such as the soil erodibility map and the soil crusting map, have been evaluated using four different supports: 1) the IAO soil map (IAO, 2000), 2) the Carte Agricole - CA - (Ministry of Agriculture, Tunisia), 3) a Hyperspectral VIS-NIR map - HY - (Gomez et al., 2012; Ciampalini et al., 2012), and 4) a Hybrid map - CY - developed here, integrating information from the Hyperspectral VIS-NIR and pedological maps. Results show that the data source has a strong influence on the estimation of the parameters for both models, with a more evident sensitivity for Pesera. With regard to the classical pedological data, the VIS-NIR data clearly improve the spatialization of texture and, therefore, the spatial detail of the results. Differences in the output using different maps are more important in the Pesera model than in Mesales, showing no-change ranges of about 15 to 41% and 53 to 67%, respectively.

  2. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of buildings' roof models, such as flat, gable, hip, and pyramid hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised, pre-trained Convolutional Neural Network (CNN) framework, giving an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that allow the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network, built on the multilayer perceptron concept, that consists of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly using the pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting the pattern of buildings' roofs automatically, considering the complementary nature of height and RGB information.
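
    As a rough illustration of the kind of supervised CNN used for roof-type labelling, the sketch below defines a small PyTorch network that takes a 4-channel patch (three ortho-photo bands plus a LiDAR-derived height layer) and predicts one of four roof classes. The actual pre-trained architecture, patch size and training procedure of the paper are not given in the abstract, so everything below is an assumed stand-in.

      import torch
      import torch.nn as nn

      class RoofCNN(nn.Module):
          """Small convolutional classifier: flat, gable, hip, pyramid-hip (4 classes)."""
          def __init__(self, in_channels=4, n_classes=4):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),                      # subsampling layer
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.classifier = nn.Linear(64, n_classes)

          def forward(self, x):
              x = self.features(x).flatten(1)
              return self.classifier(x)

      # One 64x64 building patch: 3 ortho-photo bands + 1 normalized height band.
      model = RoofCNN()
      patch = torch.randn(1, 4, 64, 64)
      logits = model(patch)
      roof_class = logits.argmax(dim=1)       # index into [flat, gable, hip, pyramid hip]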

  3. Aerial Imagery and Other Non-invasive Approaches to Detect Nitrogen and Water Stress in a Potato Crop

    NASA Astrophysics Data System (ADS)

    Nigon, Tyler John

    commercial potato field using aerial imagery. Reference areas were found to be necessary in order to make accurate recommendations because of differences in sensors, potato variety, growth stage, and other local conditions. The results from this study suggest that diagnostic criteria based on both biomass and plant nutrient concentration (e.g., canopy-level spectral reflectance data) were best suited to determine overall crop N status for determination of in-season N fertilizer recommendations.

  4. Inlining 3d Reconstruction, Multi-Source Texture Mapping and Semantic Analysis Using Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Poznanska, A. M.

    2016-06-01

    This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland/Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information the roof, wall and ground surfaces found are intersected and limited in their extent to form a closed 3D building hull. For texture mapping the hull polygons are projected into each possible input bitmap to find suitable color sources regarding coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are being copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and

  5. Detection of Single Standing Dead Trees from Aerial Color Infrared Imagery by Segmentation with Shape and Intensity Priors

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Yao, W.; Heurich, M.; Krzystek, P.; Stilla, U.

    2015-03-01

    Standing dead trees, known as snags, are an essential factor in maintaining biodiversity in forest ecosystems. Combined with their role as carbon sinks, this makes for a compelling reason to study their spatial distribution. This paper presents an integrated method to detect and delineate individual dead tree crowns from color infrared aerial imagery. Our approach consists of two steps which incorporate statistical information about prior distributions of both the image intensities and the shapes of the target objects. In the first step, we perform a Gaussian Mixture Model clustering in the pixel color space with priors on the cluster means, obtaining up to 3 components corresponding to dead trees, living trees, and shadows. We then refine the dead tree regions using a level set segmentation method enriched with a generative model of the dead trees' shape distribution as well as a discriminative model of their pixel intensity distribution. The iterative application of the statistical shape template yields the set of delineated dead crowns. The prior information enforces the consistency of the template's shape variation with the shape manifold defined by manually labeled training examples, which makes it possible to separate crowns located in close proximity and prevents the formation of large crown clusters. Also, the statistical information built into the segmentation gives rise to an implicit detection scheme, because the shape template evolves towards an empty contour if not enough evidence for the object is present in the image. We test our method on 3 sample plots from the Bavarian Forest National Park with reference data obtained by manually marking individual dead tree polygons in the images. Our results are scenario-dependent and range from a correctness/completeness of 0.71/0.81 up to 0.77/1, with an average center-of-gravity displacement of 3-5 pixels between the detected and reference polygons.
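
    The first step described, a Gaussian Mixture Model clustering in colour space with priors on the cluster means, can be approximated with scikit-learn by initializing the component means at expected colours for dead trees, living trees and shadows. The values below are illustrative rather than the paper's priors, and scikit-learn treats the initial means as a starting point for EM rather than a full Bayesian prior.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # CIR image as an (H, W, 3) float array in [0, 1]; stand-in data here.
      cir = np.random.rand(200, 200, 3)
      pixels = cir.reshape(-1, 3)

      # Rough expected colours (NIR, red, green channels) for the three components.
      init_means = np.array([
          [0.35, 0.55, 0.50],   # dead trees: low NIR, greyish crowns (assumed)
          [0.80, 0.35, 0.30],   # living trees: high NIR response (assumed)
          [0.10, 0.10, 0.10],   # shadows: dark in all bands (assumed)
      ])

      gmm = GaussianMixture(n_components=3, covariance_type="full",
                            means_init=init_means, random_state=0)
      labels = gmm.fit_predict(pixels).reshape(cir.shape[:2])
      dead_tree_mask = labels == 0   # component 0 was initialized at the dead-tree colour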

  6. Tracking aeolian transport patterns across a mega-nourishment using video imagery

    NASA Astrophysics Data System (ADS)

    Wijnberg, Kathelijne; van der Weerd, Lianne; Hulscher, Suzanne

    2014-05-01

    Coastal dune areas protect the hinterland from flooding. In order to maintain the safety level provided by the dunes, it may be necessary to artificially supply the beach-dune system with sand. How best to design these shore nourishments, among other things with respect to optimal dune growth over the long term (decadal scale), is not yet clear. One reason for this is that current models for aeolian transport on beaches appear to have limited predictive capabilities regarding annual onshore sediment supply. These limited capabilities may be attributed to the lack of appropriate input data, for instance on the moisture content of the beach surface, or to shortcomings in process understanding. However, it may also be argued that for the long-term prediction of onshore aeolian sand supply from the beach to the dunes, we may need to develop aggregated-scale transport equations, because the detailed input data required for the application of process-scale transport equations may never be available in reality. A first step towards the development of such new concepts for aggregated-scale transport equations is to increase phenomenological insight into the characteristics and number of aeolian transport events that account for the annual volume changes of the foredunes. This requires high-frequency, long-term data sets to capture the only intermittently occurring aeolian transport events. Automated video image collection seems a promising way to collect such data. In the present study we describe the movement (direction and speed) of sand patches and aeolian bed forms across a nourished site, using video imagery, to characterize aeolian transport pathways and their variability in time. The study site is a mega-nourishment (21 Mm³ of sand) that was recently constructed on the Dutch coast. This mega-nourishment, also referred to as the Sand Motor, is a pilot project that may potentially replace the current practice of more frequently applying small-scale nourishments. The mega

  7. A low-bandwidth graphical user interface for high-speed triage of potential items of interest in video imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Martin, Kevin; Chen, Yang

    2013-06-01

    In this paper, we introduce a user interface called the "Threat Chip Display" (TCD) for rapid human-in-the-loop analysis and detection of "threats" in high-bandwidth imagery and video from a list of "Items of Interest" (IOI), which includes objects, targets and events that the human is interested in detecting and identifying. Typically some front-end algorithm (e.g., computer vision, cognitive algorithm, EEG RSVP based detection, radar detection) has been applied to the video and has pre-processed and identified a potential list of IOI. The goal of the TCD is to facilitate rapid analysis and triaging of this list of IOI to detect and confirm actual threats. The layout of the TCD is designed for ease of use, fast triage of IOI, and a low bandwidth requirement. Additionally, a very low mental demand allows the system to be run for extended periods of time.

  8. Automatic Vehicle Trajectory Extraction for Traffic Analysis from Aerial Video Data

    NASA Astrophysics Data System (ADS)

    Apeltauer, J.; Babinec, A.; Herman, D.; Apeltauer, T.

    2015-03-01

    This paper presents a new approach to simultaneous detection and tracking of vehicles moving through an intersection in aerial images acquired by an unmanned aerial vehicle (UAV). Detailed analysis of spatial and temporal utilization of an intersection is an important step for its design evaluation and further traffic inspection. Traffic flow at intersections is typically very dynamic and requires continuous and accurate monitoring systems. Conventional traffic surveillance relies on a set of fixed cameras or other detectors, requiring a high density of the said devices in order to monitor the intersection in its entirety and to provide data in sufficient quality. Alternatively, a UAV can be converted to a very agile and responsive mobile sensing platform for data collection from such large scenes. However, manual vehicle annotation in aerial images would involve tremendous effort. In this paper, the proposed combination of vehicle detection and tracking aims to tackle the problem of automatic traffic analysis at an intersection from visual data. The presented method has been evaluated in several real-life scenarios.

  9. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAV) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight the UAV sends a live video stream directly in the field to be processed by Intergraph software, to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  10. Detecting new Buffel grass infestations in Australian arid lands: evaluation of methods using high-resolution multispectral imagery and aerial photography.

    PubMed

    Marshall, V M; Lewis, M M; Ostendorf, B

    2014-03-01

    We assess the feasibility of using airborne imagery for Buffel grass detection in Australian arid lands and evaluate four commonly used image classification techniques (visual estimation, manual digitisation, unsupervised classification and normalised difference vegetation index (NDVI) thresholding) for their suitability to this purpose. Colour digital aerial photography captured at approximately 5 cm ground sample distance (GSD) and four-band (visible–near-infrared) multispectral imagery (25 cm GSD) were acquired (14 February 2012) across overlapping subsets of our study site. In the field, Buffel grass projected cover estimates were collected for quadrats (10 m diameter), which were subsequently used to evaluate the four image classification techniques. Buffel grass was found to be widespread throughout our study site; it was particularly prevalent in riparian land systems and alluvial plains. On hill slopes, Buffel grass was often present in depressions, valleys and crevices of rock outcrops, but the spread appeared to be dependent on soil type and vegetation communities. Visual cover estimates performed best (r² = 0.39), and pixel-based classifiers (unsupervised classification and NDVI thresholding) performed worst (r² = 0.21). Manual digitising consistently underrepresented Buffel grass cover compared with field- and image-based visual cover estimates; we did not find the labours of digitising rewarding. Our recommendation for regional documentation of new infestations of Buffel grass is to acquire ultra-high-resolution aerial photography, have a trained observer score cover against visual standards, and use the scored sites to interpolate density across the region.

  11. Automated 2D shoreline detection from coastal video imagery: an example from the island of Crete

    NASA Astrophysics Data System (ADS)

    Velegrakis, A. F.; Trygonis, V.; Vousdoukas, M. I.; Ghionis, G.; Chatzipavlis, A.; Andreadis, O.; Psarros, F.; Hasiotis, Th.

    2015-06-01

    Beaches are both sensitive and critical coastal system components as they: (i) are vulnerable to coastal erosion (due to e.g. wave regime changes and the short- and long-term sea level rise) and (ii) form valuable ecosystems and economic resources. In order to identify/understand the current and future beach morphodynamics, effective monitoring of the beach spatial characteristics (e.g. the shoreline position) at adequate spatio-temporal resolutions is required. In this contribution we present the results of a new, fully-automated detection method of the (2-D) shoreline positions using high resolution video imaging from a Greek island beach (Ammoudara, Crete). A fully-automated feature detection method was developed and used to monitor the shoreline position in geo-rectified coastal imagery obtained through a video system set to collect 10 min videos every daylight hour with a sampling rate of 5 Hz, from which snapshot, time-averaged (TIMEX) and variance (SIGMA) images were generated. The developed coastal feature detector is based on a very fast algorithm using a localised kernel that progressively grows along the SIGMA or TIMEX digital image, following the maximum backscatter intensity along the feature of interest; the detector results were found to compare very well with those obtained from a semi-automated 'manual' shoreline detection procedure. The automated procedure was tested on video imagery obtained from the eastern part of Ammoudara beach in two 5-day periods, a low wave energy period (6-10 April 2014) and a high wave energy period (1-5 November 2014). The results showed that, during the high wave energy event, there were much higher levels of shoreline variance which, however, appeared to be similarly unevenly distributed along the shoreline as that related to the low wave energy event. Shoreline variance 'hot spots' were found to be related to the presence/architecture of an offshore submerged shallow beachrock reef, found at a distance of 50-80 m

  12. Fusion of LiDAR and aerial imagery for the estimation of downed tree volume using Support Vector Machines classification and region based object fitting

    NASA Astrophysics Data System (ADS)

    Selvarajan, Sowmya

    The study classifies 3D small-footprint, full-waveform digitized LiDAR fused with aerial imagery to detect downed trees using the Support Vector Machines (SVM) algorithm. Using small-footprint waveform LiDAR, airborne LiDAR systems can provide better canopy penetration and very high spatial resolution. The small-footprint waveform scanner Riegl LMS-Q680, together with an UltraCamX aerial camera, is used to measure and map downed trees in a forest. The data preprocessing steps helped in identifying ground points in the dense LiDAR dataset and in segmenting the LiDAR data to reduce the complexity of the algorithm. The haze filtering process helped to differentiate the spectral signatures of the various classes within the aerial image. Such processes helped to better select the features from both sensors' data. Six features are utilized: LiDAR height, LiDAR intensity, LiDAR echo, and three image intensities. To do so, LiDAR-derived, aerial-image-derived and fused LiDAR-aerial-image-derived features are used to organize the data for the SVM hypothesis formulation. Several variations of the SVM algorithm with different kernels and soft-margin parameter C are tested. The algorithm is implemented to classify downed trees over a pine tree zone. The LiDAR-derived features provided an overall accuracy of 98% for downed trees (but with a no-classification error of 86%), the image-derived features provided an overall accuracy of 65%, and the fusion-derived features resulted in an overall accuracy of 88%. The results are observed to be stable and robust. The SVM accuracies were accompanied by high false alarm rates, with the LiDAR classification producing 58.45%, the image classification producing 95.74%, and the fused classification producing 93% false alarm rates. The Canny edge correction filter helped reduce the LiDAR false alarm rate to 35.99%, the image false alarm rate to 48.56%, and the fused false alarm rate to 37.69%. The implemented classifiers provided a powerful tool for
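
    The core classification step, an SVM over fused LiDAR and image features with different kernels and soft-margin values C, maps directly onto scikit-learn. The sketch below assumes a per-point feature table with the six features named in the abstract and binary downed-tree labels; the data here are synthetic stand-ins.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Fused feature table: LiDAR height, LiDAR intensity, LiDAR echo count,
      # and three image intensities per point (synthetic stand-in data).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 6))
      y = (X[:, 0] < -0.5).astype(int)        # pretend low height indicates a downed tree

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      for kernel in ("linear", "rbf"):
          for C in (0.1, 1.0, 10.0):          # soft-margin parameter variations
              clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=C))
              clf.fit(X_tr, y_tr)
              print(kernel, C, round(clf.score(X_te, y_te), 3))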

  13. A simulation and estimation framework for intracellular dynamics and trafficking in video-microscopy and fluorescence imagery.

    PubMed

    Boulanger, Jérôme; Kervrann, Charles; Bouthemy, Patrick

    2009-02-01

    Image sequence analysis in video-microscopy has now gained importance since molecular biology is presently having a profound impact on the way research is being conducted in medicine. However, the image processing techniques that are currently used for modeling intracellular dynamics are still relatively crude and yield imprecise results. Indeed, complex interactions between a large number of small moving particles in a complex scene cannot be easily modeled, limiting the performance of object detection and tracking algorithms. This motivates our present research effort, which is to develop a general estimation/simulation framework able to produce image sequences showing small moving spots in interaction, with variable velocities, corresponding to intracellular dynamics and trafficking in biology. It is now well established that spot/object trajectories can play a role in the analysis of living cell dynamics, and simulating realistic image sequences is therefore of major importance. We demonstrate the potential of the proposed simulation/estimation framework in experiments, and show that this approach can also be used to evaluate the performance of object detection/tracking algorithms in video-microscopy and fluorescence imagery.

  14. BOREAS RSS-3 Imagery and Snapshots from a Helicopter-Mounted Video Camera

    NASA Technical Reports Server (NTRS)

    Walthall, Charles L.; Loechel, Sara; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor)

    2000-01-01

    The BOREAS RSS-3 team collected helicopter-based video coverage of forested sites acquired during BOREAS as well as single-frame "snapshots" processed to still images. Helicopter data used in this analysis were collected during all three 1994 IFCs (24-May to 16-Jun, 19-Jul to 10-Aug, and 30-Aug to 19-Sep), at numerous tower and auxiliary sites in both the NSA and the SSA. The VHS-camera observations correspond to other coincident helicopter measurements. The field of view of the camera is unknown. The video tapes are in both VHS and Beta format. The still images are stored in JPEG format.

  15. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    PubMed

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established under the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated with complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery.

  16. [Retrieval of crown closure of moso bamboo forest using unmanned aerial vehicle (UAV) remotely sensed imagery based on geometric-optical model].

    PubMed

    Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long

    2015-05-01

    This research focused on the application of remotely sensed imagery from unmanned aerial vehicles (UAVs) with high spatial resolution for the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differences in estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with a coefficient of determination (R2) of 0.63, significant at the 0.01 level, against the measured values acquired during the field survey. The root mean square error (RMSE) of approximately 0.04 was low, indicating that the use of fully constrained linear SMA produces better crown closure estimates, closer to the actual conditions in moso bamboo forest.
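    A minimal sketch of fully constrained linear spectral unmixing for a single pixel is shown below. Non-negativity is handled by NNLS and the sum-to-one constraint by a heavily weighted row of ones, a common numerical trick; the abstract does not state which solver was actually used, so treat this as illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(pixel, endmembers, weight=1e3):
    """Fully constrained linear SMA for one pixel.

    endmembers: (n_bands, n_endmembers) matrix of endmember spectra.
    The weighted extra row enforces sum-to-one approximately; NNLS enforces non-negativity.
    """
    E = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    p = np.append(np.asarray(pixel, dtype=float), weight)
    fractions, _ = nnls(E, p)
    return fractions

# Unconstrained abundances, for comparison, would simply be
# np.linalg.lstsq(endmembers, pixel, rcond=None)[0].
```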

  17. Evolution of a natural debris flow: In situ measurements of flow dynamics, video imagery, and terrestrial laser scanning

    USGS Publications Warehouse

    McCoy, S.W.; Kean, J.W.; Coe, J.A.; Staley, D.M.; Wasklewicz, T.A.; Tucker, G.E.

    2010-01-01

    Many theoretical and laboratory studies have been undertaken to understand debris-flow processes and their associated hazards. However, complete and quantitative data sets from natural debris flows needed for confirmation of these results are limited. We used a novel combination of in situ measurements of debris-flow dynamics, video imagery, and pre- and postflow 2-cm-resolution digital terrain models to study a natural debris-flow event. Our field data constrain the initial and final reach morphology and key flow dynamics. The observed event consisted of multiple surges, each with clear variation of flow properties along the length of the surge. Steep, highly resistant, surge fronts of coarse-grained material without measurable pore-fluid pressure were pushed along by relatively fine-grained and water-rich tails that had a wide range of pore-fluid pressures (some two times greater than hydrostatic). Surges with larger nonequilibrium pore-fluid pressures had longer travel distances. A wide range of travel distances from different surges of similar size indicates that dynamic flow properties are of equal or greater importance than channel properties in determining where a particular surge will stop. Progressive vertical accretion of multiple surges generated the total thickness of mapped debris-flow deposits; nevertheless, deposits had massive, vertically unstratified sedimentological textures. © 2010 Geological Society of America.

  18. Estimation of wave phase speed and nearshore bathymetry from video imagery

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.

    2000-01-01

    A new remote sensing technique based on video image processing has been developed for the estimation of nearshore bathymetry. The shoreward propagation of waves is measured using pixel intensity time series collected at a cross-shore array of locations using remotely operated video cameras. The incident band is identified, and the cross-spectral matrix is calculated for this band. The cross-shore component of wavenumber is found as the gradient in phase of the first complex empirical orthogonal function of this matrix. Water depth is then inferred from linear wave theory's dispersion relationship. Full bathymetry maps may be measured by collecting data in a large array composed of both cross-shore and longshore lines. Data are collected hourly throughout the day, and a stable, daily estimate of bathymetry is calculated from the median of the hourly estimates. The technique was tested using 30 days of hourly data collected at the SandyDuck experiment in Duck, North Carolina, in October 1997. Errors calculated as the difference between estimated depth and ground truth data show a mean bias of -35 cm (rms error = 91 cm). Expressed as a fraction of the true water depth, the mean percent error was 13% (rms error = 34%). Excluding the region of known wave nonlinearities over the bar crest, the accuracy of the technique improved, and the mean (rms) error was -20 cm (75 cm). Additionally, under low-amplitude swells (wave height H ~ 1 m), the performance of the technique across the entire profile improved to 6% (29%) of the true water depth with a mean (rms) error of -12 cm (71 cm). Copyright 2000 by the American Geophysical Union.
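    The depth-inversion step relies on the linear dispersion relation omega^2 = g k tanh(k h). A minimal sketch of that inversion is given below; the wavenumber k would come from the cross-shore phase gradient described in the abstract, and the example numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import brentq

G = 9.81  # gravitational acceleration (m s^-2)

def depth_from_dispersion(omega, k):
    """Invert omega^2 = g * k * tanh(k * h) for water depth h (m).

    omega: angular frequency of the incident band (rad/s)
    k:     cross-shore wavenumber from the pixel-array phase gradient (rad/m)
    """
    residual = lambda h: G * k * np.tanh(k * h) - omega ** 2
    # A root exists in this bracket only when omega**2 < g * k (i.e. finite depth).
    return brentq(residual, 1e-3, 100.0)

# Example: a 10 s swell (omega = 2*pi/10 rad/s) observed with k = 0.07 rad/m
# yields a depth of roughly 9 m.
```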

  19. Analysis of Small-Scale Convective Dynamics in a Crown Fire Using Infrared Video Camera Imagery.

    NASA Astrophysics Data System (ADS)

    Clark, Terry L.; Radke, Larry; Coen, Janice; Middleton, Don

    1999-10-01

    A good physical understanding of the initiation, propagation, and spread of crown fires remains an elusive goal for fire researchers. Although some data exist that describe the fire spread rate and some qualitative aspects of wildfire behavior, none have revealed the very small timescales and spatial scales in the convective processes that may play a key role in determining both the details and the rate of fire spread. Here such a dataset is derived using data from a prescribed burn during the International Crown Fire Modelling Experiment. A gradient-based image flow analysis scheme is presented and applied to a sequence of high-frequency (0.03 s), high-resolution (0.05-0.16 m) radiant temperature images obtained by an Inframetrics ThermaCAM instrument during an intense crown fire to derive wind fields and sensible heat flux. It was found that the motions during the crown fire had energy-containing scales on the order of meters with timescales of fractions of a second. Estimates of maximum vertical heat fluxes ranged between 0.6 and 3 MW m-2 over the 4.5-min burn, with early time periods showing surprisingly large fluxes of 3 MW m-2. Statistically determined velocity extremes, using five standard deviations from the mean, suggest that updrafts between 10 and 30 m s-1, downdrafts between 10 and 20 m s-1, and horizontal motions between 5 and 15 m s-1 frequently occurred throughout the fire. The image flow analyses indicated a number of physical mechanisms that contribute to the fire spread rate, such as the enhanced tilting of horizontal vortices leading to counterrotating convective towers with estimated vertical vorticities of 4 to 10 s-1 rotating such that air between the towers blew in the direction of fire spread at canopy height and below. The IR imagery and flow analysis also repeatedly showed regions of thermal saturation (infrared temperature > 750°C), rising through the convection. These regions represent turbulent bursts or hairpin vortices resulting again from
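    A generic gradient-based flow estimate in the spirit described above (a Lucas-Kanade-style least-squares solution of the brightness-constancy equation over a small window) is sketched below; the paper's scheme is more elaborate, so this is only an illustrative stand-in.

```python
import numpy as np

def window_flow(im1, im2, y, x, half=7):
    """Least-squares solution of Ix*u + Iy*v + It = 0 over a window centred at (y, x).

    im1, im2: consecutive radiant-temperature images; (y, x) must lie at least `half`
    pixels from the image border. Returns the displacement (u, v) in pixels per frame.
    """
    im1 = np.asarray(im1, dtype=float)
    im2 = np.asarray(im2, dtype=float)
    Iy, Ix = np.gradient(im1)          # spatial gradients (rows = y, cols = x)
    It = im2 - im1                     # temporal gradient
    win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    A = np.column_stack([Ix[win].ravel(), Iy[win].ravel()])
    b = -It[win].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```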

  20. Building block extraction and classification by means of Markov random fields using aerial imagery and LiDAR data

    NASA Astrophysics Data System (ADS)

    Bratsolis, E.; Sigelle, M.; Charou, E.

    2016-10-01

    Building detection has been a prominent topic in the area of image classification. Most of the research effort is adapted to the specific application requirements and available datasets. Our dataset includes aerial orthophotos (with 20 cm spatial resolution), a DSM generated from LiDAR (with 1 m spatial resolution and 20 cm elevation resolution) and a DTM (with 2 m spatial resolution) from an area of Athens, Greece. Our aim is to classify these data by means of Markov Random Fields (MRFs) in a Bayesian framework for building block extraction and perform a comparative analysis with other supervised classification techniques, namely Feed Forward Neural Net (FFNN), Cascade-Correlation Neural Network (CCNN), Learning Vector Quantization (LVQ) and Support Vector Machines (SVM). We evaluated the performance of each method using a subset of the test area. We present the classified images and statistical measures (confusion matrix, kappa coefficient and overall accuracy). Our results demonstrate that the MRFs and FFNN perform better than the other methods.
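    The statistical measures mentioned (overall accuracy and the kappa coefficient) follow directly from the confusion matrix; a minimal sketch of their computation is below.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                         # observed agreement
    pe = float((cm.sum(axis=0) * cm.sum(axis=1)).sum()) / n ** 2  # chance agreement
    return po, (po - pe) / (1.0 - pe)
```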

  1. Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue

    2015-04-01

    Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar rule was obtained for the pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.

  2. Use of ultra-high spatial resolution aerial imagery in the estimation of chaparral wildfire fuel loads.

    PubMed

    Schmidt, Ian T; O'Leary, John F; Stow, Douglas A; Uyeda, Kellie A; Riggan, Phillip J

    2016-12-01

    Development of methods that more accurately estimate spatial distributions of fuel loads in shrublands allows for improved understanding of ecological processes such as wildfire behavior and postburn recovery. The goal of this study is to develop and test remote sensing methods to upscale field estimates of shrubland fuel to broader-scale biomass estimates using ultra-high spatial resolution imagery captured by a light-sport aircraft. The study is conducted on chaparral shrublands located in eastern San Diego County, CA, USA. We measured the fuel load in the field using a regression relationship between basal area and aboveground biomass of shrubs and estimated ground areal coverage of individual shrub species by using ultra-high spatial resolution imagery and image processing routines. Study results show a strong relationship between image-derived shrub coverage and field-measured fuel loads in three even-age stands that have regrown approximately 7, 28, and 68 years since last wildfire. We conducted ordinary least square analysis using ground coverage as the independent variable regressed against biomass. The analysis yielded R2 values ranging from 0.80 to 0.96 in the older stands for the live shrub species, while R2 values for species in the younger stands ranged from 0.32 to 0.89. Pooling species-based data into larger sample sizes consisting of a functional group and all-shrub classes while obtaining suitable linear regression models supports the potential for these methods to be used for upscaling fuel estimates to broader areal extents, without having to classify and map shrubland vegetation at the species level.
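    A minimal sketch of the ordinary least squares step described above (biomass regressed on image-derived coverage, with R2 reported) is shown below; variable names are ours and the data are whatever field and image samples are at hand.

```python
import numpy as np

def fit_biomass_model(coverage, biomass):
    """OLS fit of field-measured fuel load on image-derived shrub coverage.

    Returns (slope, intercept, R2), mirroring the regression diagnostics in the abstract.
    """
    coverage = np.asarray(coverage, dtype=float)
    biomass = np.asarray(biomass, dtype=float)
    A = np.column_stack([coverage, np.ones_like(coverage)])
    (slope, intercept), *_ = np.linalg.lstsq(A, biomass, rcond=None)
    predicted = slope * coverage + intercept
    ss_res = np.sum((biomass - predicted) ** 2)
    ss_tot = np.sum((biomass - biomass.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```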

  3. Low-altitude aerial imagery and related field observations associated with unmanned aerial systems (UAS) flights over Coast Guard Beach, Nauset Spit, Nauset Inlet, and Nauset Marsh, Cape Cod National Seashore, Eastham, Massachusetts on 1 March 2016

    USGS Publications Warehouse

    Sherwood, Christopher R.

    2016-01-01

    launch site; they have horizontal and vertical uncertainties of approximately +/- 0.03 m. The locations of the ground control points can be used to constrain photogrammetric reconstructions based on the aerial imagery. The locations of the 144 transect points can be used for independent evaluation of the photogrammetric products. This data release includes the four sets of original aerial images; tables listing the image file names and locations; locations of the 140 transect points; and locations of the ground control points with photographs of the four in-place features and images showing the location of the two a posteriori points at two zoom levels. Collection of these data was supported by the USGS Coastal and Marine Geology Program and the USGS Innovation Center and was conducted under USGS field activity number 2016-007-FA and National Park Service Scientific Research and Collecting Permit, study number CACO-00285, permit number CACO-2016-SCI-003. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

  4. Identification of areas of recharge and discharge using Landsat-TM satellite imagery and aerial photography mapping techniques

    NASA Astrophysics Data System (ADS)

    Salama, R. B.; Tapley, I.; Ishii, T.; Hawkes, G.

    1994-10-01

    Aerial photographs (AP) and Landsat (TM) colour composites were used to map the geomorphology, geology and structures of the Salt River System of Western Australia. Geomorphic features identified are sand plains, dissected etchplain, colluvium, lateritic duricrust and rock outcrops. The hydrogeomorphic units include streams, lakes and playas, palaeochannels and palaeodeltas. The structural features are linear and curvilinear lineaments, ring structures and dolerite dykes. Suture lines control the course of the main river channel. Permeable areas around the circular granitic plutons were found to be the main areas of recharge in the uplands. Recharge was also found to occur in the highly permeable areas of the sandplains. Discharge was shown to be primarily along the main drainage lines, on the edge of the circular sandplains, in depressions and in lakes. The groundwater occurrence and hydrogeological classification of the recharge potential of the different units were used to classify the mapped areas into recharge and discharge zones. The results also show that TM colour composites provide a viable source of data comparable with AP for mapping and delineating areas of recharge and discharge on a regional scale.

  5. Small UAV-Acquired, High-resolution, Georeferenced Still Imagery

    SciTech Connect

    Ryan Hruska

    2005-09-01

    Currently, small Unmanned Aerial Vehicles (UAVs) are primarily used for capturing and down-linking real-time video. To date, their role as a low-cost airborne platform for capturing high-resolution, georeferenced still imagery has not been fully utilized. On-going work within the Unmanned Vehicle Systems Program at the Idaho National Laboratory (INL) is attempting to exploit this small UAV-acquired, still imagery potential. Initially, a UAV-based still imagery work flow model was developed that includes initial UAV mission planning, sensor selection, UAV/sensor integration, and imagery collection, processing, and analysis. Components to support each stage of the work flow are also being developed. Critical to use of acquired still imagery is the ability to detect changes between images of the same area over time. To enhance the analysts’ change detection ability, a UAV-specific, GIS-based change detection system called SADI or System for Analyzing Differences in Imagery is under development. This paper will discuss the associated challenges and approaches to collecting still imagery with small UAVs. Additionally, specific components of the developed work flow system will be described and graphically illustrated using varied examples of small UAV-acquired still imagery.

  6. Analysis of Biophysical Mechanisms of Gilgai Microrelief Formation in Dryland Swelling Soils Using Ultra-High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Krell, N.; DeCarlo, K. F.; Caylor, K. K.

    2015-12-01

    Microrelief formations ("gilgai"), which form due to successive wetting-drying cycles typical of swelling soils, provide ecological hotspots for local fauna and flora, including higher and more robust vegetative growth. The distribution of these gilgai suggests a remarkable degree of regularity. However, it is unclear to what extent the mechanisms that drive gilgai formation are physical, such as desiccation-induced fracturing, or biological in nature, namely antecedent vegetative clustering. We investigated gilgai genesis and pattern formation in a 100 x 100 meter study area with swelling soils in a semiarid grassland at the Mpala Research Center in central Kenya. Our ongoing experiment is composed of three 9 m2 treatments: we removed gilgai and limited vegetative growth by herbicide application in one plot, allowed for unrestricted seed dispersal in another, and left gilgai unobstructed in a control plot. To estimate the spatial frequencies of the repeating patterns of gilgai, we obtained ultra-high resolution (0.01-0.03 m/pixel) images with an unmanned aerial vehicle (UAV), from which digital elevation models were also generated. Geostatistical analyses using wavelet and Fourier methods in 1 and 2 dimensions were employed to characterize gilgai size and distribution. Preliminary results support regular spatial patterning across the gilgaied landscape, and heterogeneities may be related to local soil properties and biophysical influences. Local data on gilgai and fracture characteristics suggest that gilgai form at characteristic heights and spacing based on fracture morphology: deep, wide cracks result in large, highly vegetated mounds whereas shallow cracks, induced by animal trails, are less correlated with gilgai size and shape. Our experiments will help elucidate the links between shrink-swell processes and gilgai-vegetation patterning in high activity clay soils and advance our understanding of the mechanisms of gilgai formation in drylands.

  7. Semi-Automated Approach for Mapping Urban Trees from Integrated Aerial LiDAR Point Cloud and Digital Imagery Datasets

    NASA Astrophysics Data System (ADS)

    Dogon-Yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-09-01

    Mapping of trees plays an important role in modern urban spatial data management, as many benefits and applications derive from these detailed, up-to-date data sources. Timely and accurate acquisition of information on the condition of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting trees include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints, such as labour-intensive field work and high cost, which can be overcome by means of integrated LiDAR and digital image datasets. Compared to predominant studies on tree extraction, mainly in purely forested areas, this study concentrates on urban areas, which have a high structural complexity with a multitude of different objects. This paper presents a workflow for a semi-automated approach to extracting urban trees from integrated processing of airborne LiDAR point cloud and multispectral digital image datasets over the city of Istanbul, Turkey. The paper shows that the integrated datasets are a suitable and viable source of information for urban tree management. In conclusion, the extracted information provides a snapshot of the location, composition and extent of trees in the study area, useful to city planners and other decision makers for understanding how much canopy cover exists, identifying new planting, removal, or reforestation opportunities, and determining which locations have the greatest need or potential to maximize the benefits of return on investment. It can also help track trends or changes to the urban trees over time and inform future management decisions.

  8. An Automated Approach to Agricultural Tile Drain Detection and Extraction Utilizing High Resolution Aerial Imagery and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Johansen, Richard A.

    Subsurface drainage from agricultural fields in the Maumee River watershed is suspected to adversely impact the water quality and contribute to the formation of harmful algal blooms (HABs) in Lake Erie. In early August of 2014, a HAB developed in the western Lake Erie Basin that resulted in over 400,000 people being unable to drink their tap water due to the presence of a toxin from the bloom. HAB development in Lake Erie is aided by excess nutrients from agricultural fields, which are transported through subsurface tile and enter the watershed. Compounding the issue within the Maumee watershed, the trend within the watershed has been to increase the installation of tile drains in both total extent and density. Due to the immense area of drained fields, there is a need to establish an accurate and effective technique to monitor subsurface farmland tile installations and their associated impacts. This thesis aimed at developing an automated method in order to identify subsurface tile locations from high resolution aerial imagery by applying an object-based image analysis (OBIA) approach utilizing eCognition. This process was accomplished through a set of algorithms and image filters, which segment and classify image objects by their spectral and geometric characteristics. The algorithms utilized were based on the relative location of image objects and pixels, in order to maximize the robustness and transferability of the final rule-set. These algorithms were coupled with convolution and histogram image filters to generate results for a 10 km2 study area located within Clay Township in Ottawa County, Ohio. The eCognition results were compared to previously collected tile locations from an associated project that applied heads-up digitizing of aerial photography to map field tile. The heads-up digitized locations were used as a baseline for the accuracy assessment. The accuracy assessment generated a range of agreement values from 67.20% - 71.20%, and an average

  9. Deglaciation of the Caucasus Mountains, Russia/Georgia, in the 21st century observed with ASTER satellite imagery and aerial photography

    NASA Astrophysics Data System (ADS)

    Shahgedanova, M.; Nosenko, G.; Kutuzov, S.; Rototaeva, O.; Khromova, T.

    2014-12-01

    Changes in the map area of 498 glaciers located on the Main Caucasus ridge (MCR) and on Mt. Elbrus in the Greater Caucasus Mountains (Russia and Georgia) were assessed using multispectral ASTER and panchromatic Landsat imagery with 15 m spatial resolution in 1999/2001 and 2010/2012. Changes in recession rates of glacier snouts between 1987-2001 and 2001-2010 were investigated using aerial photography and ASTER imagery for a sub-sample of 44 glaciers. In total, glacier area decreased by 4.7 ± 2.1% or 19.2 ± 8.7 km2 from 407.3 ± 5.4 km2 to 388.1 ± 5.2 km2. Glaciers located in the central and western MCR lost 13.4 ± 7.3 km2 (4.7 ± 2.5%) in total or 8.5 km2 (5.0 ± 2.4%) and 4.9 km2 (4.1 ± 2.7%) respectively. Glaciers on Mt. Elbrus, although located at higher elevations, lost 5.8 ± 1.4 km2 (4.9 ± 1.2%) of their total area. The recession rates of valley glacier termini increased between 1987-2000/01 and 2000/01-2010 (2000 for the western MCR and 2001 for the central MCR and Mt. Elbrus) from 3.8 ± 0.8, 3.2 ± 0.9 and 8.3 ± 0.8 m yr-1 to 11.9 ± 1.1, 8.7 ± 1.1 and 14.1 ± 1.1 m yr-1 in the central and western MCR and on Mt. Elbrus respectively. The highest rate of increase in glacier termini retreat was registered on the southern slope of the central MCR where it has tripled. A positive trend in summer temperatures forced glacier recession, and strong positive temperature anomalies in 1998, 2006, and 2010 contributed to the enhanced loss of ice. An increase in accumulation season precipitation observed in the northern MCR since the mid-1980s has not compensated for the effects of summer warming while the negative precipitation anomalies, observed on the southern slope of the central MCR in the 1990s, resulted in stronger glacier wastage.

  10. Use of Aerial high resolution visible imagery to produce large river bathymetry: a multi temporal and spatial study over the by-passed Upper Rhine

    NASA Astrophysics Data System (ADS)

    Béal, D.; Piégay, H.; Arnaud, F.; Rollet, A.; Schmitt, L.

    2011-12-01

    Aerial high resolution visible imagery makes it possible to produce bathymetry of large rivers, assuming that water depth is related to water colour (Beer-Bouguer-Lambert law). In this paper we aim at monitoring Rhine River geometry changes for a diachronic study, as well as sediment transport after an artificial injection (a 25,000 m3 restoration operation). To this end, a substantial database of ground measurements of river depth is used, built from three different sources: (i) differential GPS acquisitions, (ii) sounder data and (iii) lateral profiles surveyed by experts. Water depth is estimated using a multiple linear regression over neo-channels built from a principal component analysis of the red, green and blue bands and the previously cited depth data. The study site is a 12 km long reach of the by-passed section of the Rhine River that forms the French-German border. This section has been heavily impacted by engineering works during the last two centuries: channelization since 1842 for navigation purposes and the construction of a 45 km long lateral canal and 4 consecutive hydroelectric power plants since 1932. Several bathymetric models are produced based on 3 different spatial resolutions (6, 13 and 20 cm) and 5 acquisitions (January, March, April, August and October) since 2008. The objectives are to find the optimal spatial resolution and to characterize seasonal effects. The best performance, obtained at the 13 cm resolution, shows an 18 cm accuracy when suspended matter affected water transparency the least. The discussion is oriented to the monitoring of the artificial reload after 2 flood events during winter 2010-2011. The bathymetric models produced are also useful for building 2D hydraulic model meshes.
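    A minimal sketch of the depth-model calibration described above (principal components of the RGB bands as "neo-channels", then a multiple linear regression against measured depths) is given below. Log-transforming the bands is our assumption, motivated by the Beer-Bouguer-Lambert law, rather than a detail stated in the abstract.

```python
import numpy as np

def fit_depth_model(rgb, depth):
    """Calibrate a depth = f(neo-channels) model from ground-truth samples.

    rgb:   (n_samples, 3) band values at locations with known depth
    depth: (n_samples,) measured depths (DGPS, sounder or expert profiles)
    Returns the band mean, PCA rotation and regression coefficients needed to
    predict depth for new pixels.
    """
    X = np.log(np.asarray(rgb, dtype=float) + 1.0)   # assumed Beer-Lambert-style transform
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = Xc @ Vt.T                                   # neo-channels
    A = np.column_stack([pcs, np.ones(len(pcs))])
    coefs, *_ = np.linalg.lstsq(A, np.asarray(depth, dtype=float), rcond=None)
    return mean, Vt, coefs
```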

  11. A new technique for the detection of large scale landslides in glacio-lacustrine deposits using image correlation based upon aerial imagery: A case study from the French Alps

    NASA Astrophysics Data System (ADS)

    Fernandez, Paz; Whitworth, Malcolm

    2016-10-01

    Landslide monitoring has benefited from recent advances in the use of image correlation of high resolution optical imagery. However, this approach has typically involved satellite imagery that may not be available for all landslides depending on their time of movement and location. This study investigated the application of image correlation techniques to a sequence of aerial imagery of an active landslide in the French Alps. We apply an indirect landslide monitoring technique (COSI-Corr), based upon the cross-correlation between aerial photographs, to obtain horizontal displacement rates. Results for the 2001-2003 time interval are presented, providing a spatial model of landslide activity and motion across the landslide, which is consistent with previous studies. The study has identified areas of new landslide activity in addition to known areas and, through image decorrelation, has identified and mapped two new lateral landslides within the main landslide complex. This new approach for landslide monitoring is likely to be of wide applicability to other areas characterised by complex ground displacements.
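    The displacement measurement underlying this kind of monitoring is a patch-wise image correlation. The brute-force normalized cross-correlation sketch below conveys the idea; COSI-Corr itself works in the frequency domain with sub-pixel refinement, so this is only a simplified stand-in.

```python
import numpy as np

def ncc_displacement(template, search):
    """Return the (dy, dx) offset of the best normalized cross-correlation match
    of `template` inside the larger `search` window (integer-pixel precision only)."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_offset = -np.inf, (0, 0)
    for dy in range(search.shape[0] - th + 1):
        for dx in range(search.shape[1] - tw + 1):
            w = search[dy:dy + th, dx:dx + tw]
            w = (w - w.mean()) / (w.std() + 1e-9)
            score = float((t * w).mean())
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset
```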

  12. Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques

    USGS Publications Warehouse

    Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.

    2011-01-01

    behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.

  13. Using IKONOS and Aerial Videography to Validate Landsat Land Cover Maps of Central African Tropical Rain Forests

    NASA Astrophysics Data System (ADS)

    Lin, T.; Laporte, N. T.

    2003-12-01

    Compared to the traditional validation methods, aerial videography is a relatively inexpensive and time-efficient approach to collect "field" data for validating satellite-derived land cover maps over large areas. In particular, this approach is valuable in remote and inaccessible locations. In the Sangha Tri-National Park region of Central Africa, where road access is limited to industrial logging sites, we are using IKONOS imagery and aerial videography to assess the accuracy of Landsat-derived land cover maps. As part of a NASA Land Cover Land Use Change project (INFORMS) and in collaboration with the Wildlife Conservation Society in the Republic of Congo, over 1500 km of aerial video transects were collected in the spring of 2001. The use of MediaMapper software combined with a VMS 200 video mapping system enabled the aerial transects to be registered with geographic locations from a Global Positioning System (GPS). Video frames were extracted, visually interpreted, and compared to land cover types mapped by Landsat. We addressed the limitations of accuracy assessment using aerial-based data and its potential for improving vegetation mapping in tropical rain forests. The results of the videography and IKONOS image analysis demonstrate the utility of very high resolution imagery for map validation and forest resource assessment.

  14. Near infrared-red models for the remote estimation of chlorophyll- a concentration in optically complex turbid productive waters: From in situ measurements to aerial imagery

    NASA Astrophysics Data System (ADS)

    Gurlin, Daniela

    Today the water quality of many inland and coastal waters is compromised by cultural eutrophication as a consequence of increased human agricultural and industrial activities, and remote sensing is widely applied to monitor the trophic state of these waters. This study explores near infrared-red models for the remote estimation of chlorophyll-a concentration in turbid productive waters and compares several near infrared-red models developed within the last 35 years. Three of these near infrared-red models were calibrated for a dataset with chlorophyll-a concentrations from 2.3 to 81.2 mg m-3 and validated for independent and statistically significantly different datasets with chlorophyll-a concentrations from 4.0 to 95.5 mg m-3 and 4.0 to 24.2 mg m-3 for the spectral bands of the MEdium Resolution Imaging Spectrometer (MERIS) and Moderate-resolution Imaging Spectroradiometer (MODIS). The developed MERIS two-band algorithm estimated chlorophyll-a concentrations from 4.0 to 24.2 mg m-3, which are typical for many inland and coastal waters, very accurately, with a mean absolute error of 1.2 mg m-3. These results indicate a high potential of the simple MERIS two-band algorithm for the reliable estimation of chlorophyll-a concentration without any reduction in accuracy compared to more complex algorithms, even though more research seems required to analyze the sensitivity of this algorithm to differences in the chlorophyll-a specific absorption coefficient of phytoplankton. Three near infrared-red models were calibrated and validated for a smaller dataset of atmospherically corrected multi-temporal aerial imagery collected by the hyperspectral airborne imaging spectrometer for applications (AisaEAGLE). The developed algorithms successfully captured the spatial and temporal variability of the chlorophyll-a concentrations and estimated chlorophyll-a concentrations from 2.3 to 81.2 mg m-3 with mean absolute errors from 4.4 mg m-3 for the AISA two band algorithm to 5.2 mg m-3
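    The two-band NIR-red model family referred to above has the simple form chl-a ≈ a * (R_NIR / R_red) + b (for MERIS, typically the 708 nm and 665 nm bands). The sketch below uses placeholder coefficients; in the study they are calibrated against the in situ chlorophyll-a data.

```python
def chl_two_band(r_nir, r_red, a=61.3, b=-37.8):
    """Two-band NIR-red chlorophyll-a model: chl-a ~= a * (R_NIR / R_red) + b (mg m-3).

    The coefficients a and b here are hypothetical placeholders standing in for the
    calibration described in the abstract.
    """
    return a * (r_nir / r_red) + b
```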

  15. Color Helmet Mounted Display System with Real Time Computer Generated and Video Imagery for In-Flight Simulation

    NASA Technical Reports Server (NTRS)

    Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer generated imagery and synthetic vision. This research is made possible in part by a full-color, wide field-of-view Helmet Mounted Display (HMD) system that provides high performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system that was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit will be discussed as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).

  16. Scaling Sap Flow Results Over Wide Areas Using High-Resolution Aerial Multispectral Digital Imaging, Leaf Area Index (LAI) and MODIS Satellite Imagery in Saltcedar Stands on the Lower Colorado River

    NASA Astrophysics Data System (ADS)

    Murray, R.; Neale, C.; Nagler, P. L.; Glenn, E. P.

    2008-12-01

    Heat-balance sap flow sensors provide direct estimates of water movement through plant stems and can be used to accurately measure leaf-level transpiration (EL) and stomatal conductance (GS) over time scales ranging from 20 minutes to a month or longer in natural stands of plants. However, their use is limited to relatively small branches on shrubs or trees, as the gauged stem section needs to be uniformly heated by the heating coil to produce valid measurements. This presents a scaling problem in applying the results to whole plants, stands of plants, and larger landscape areas. We used high-resolution aerial multispectral digital imaging with green, red and NIR bands as a bridge between ground measurements of EL and GS, and MODIS satellite imagery of a flood plain on the Lower Colorado River dominated by saltcedar (Tamarix ramosissima). Saltcedar is considered to be a high-water-use plant, and saltcedar removal programs have been proposed to salvage water. Hence, knowledge of actual saltcedar ET rates is needed on western U.S. rivers. Scaling EL and GS to large landscape units requires knowledge of leaf area index (LAI) over large areas. We used a LAI model developed for riparian habitats on Bosque del Apache, New Mexico, to estimate LAI at our study site on the Colorado River. We compared the model estimates to ground measurements of LAI, determined with a Li-Cor LAI-2000 Plant Canopy Analyzer calibrated by leaf harvesting to determine Specific Leaf Area (SLA) (m2 leaf area per g dry weight of leaves) of the different species on the floodplain. LAI could be adequately predicted from NDVI from aerial multispectral imagery and could be cross-calibrated with MODIS NDVI and EVI. Hence, we were able to project point measurements of sap flow and LAI over multiple years and over large areas of floodplain using aerial multispectral imagery as a bridge between ground and satellite data. The methods are applicable to riparian corridors throughout the western U.S.
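    The scaling chain described above runs from image bands to NDVI to LAI. A minimal sketch is below; the linear LAI-NDVI coefficients are hypothetical placeholders that would be fitted to the ground LAI-2000 measurements, not values from the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from the NIR and red bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)

def lai_from_ndvi(ndvi_img, a=3.6, b=-0.4):
    """Assumed linear LAI-NDVI model for scaling; a and b are illustrative placeholders."""
    return a * ndvi_img + b
```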

  17. Land cover/use mapping using multi-band imageries captured by Cropcam Unmanned Aerial Vehicle Autopilot (UAV) over Penang Island, Malaysia

    NASA Astrophysics Data System (ADS)

    Fuyi, Tan; Boon Chun, Beh; Mat Jafri, Mohd Zubir; Hwee San, Lim; Abdullah, Khiruddin; Mohammad Tahrin, Norhaslinda

    2012-11-01

    The difficulty of obtaining cloud-free scenes in the Equatorial region from satellite platforms can be overcome by using airborne imagery. Airborne digital imagery has proved to be an effective tool for land cover studies. Airborne digital camera imagery was selected in the present study because it provides higher spatial resolution data for mapping a small study area. The main objective of this study is to classify RGB-band imagery taken from a low-altitude Cropcam UAV for land cover/use mapping over the USM campus, Penang Island, Malaysia. A conventional digital camera was used to capture images from an altitude of 320 meters on board a UAV autopilot. This technique was cheaper and more economical than other airborne approaches. An artificial neural network (NN) and a maximum likelihood classifier (MLC) were used to classify the digital imagery captured using the Cropcam UAV over the USM campus, Penang Island, Malaysia. The supervised classifier was chosen based on the highest overall accuracy (>80%) and Kappa statistic (>0.8). The classified land cover map was geometrically corrected to provide a geocoded map. The results produced by this study indicated that land cover features could be clearly identified and classified into a land cover map. This study indicates that the use of a conventional digital camera as a sensor on board a UAV autopilot can provide useful information for planning and development of a small coverage area.

  18. Analysis of brook trout spatial behavior during passage attempts in corrugated culverts using near-infrared illumination video imagery

    USGS Publications Warehouse

    Bergeron, Normand E.; Constantin, Pierre-Marc; Goerig, Elsa; Castro-Santos, Theodore R.

    2016-01-01

    We used video recording and near-infrared illumination to document the spatial behavior of brook trout of various sizes attempting to pass corrugated culverts under different hydraulic conditions. Semi-automated image analysis was used to digitize fish position at high temporal resolution inside the culvert, which allowed calculation of various spatial behavior metrics, including instantaneous ground and swimming speed, path complexity, distance from side walls, velocity preference ratio (mean velocity at fish lateral position/mean cross-sectional velocity) as well as number and duration of stops in forward progression. The presentation summarizes the main results and discusses how they could be used to improve fish passage performance in culverts.

  19. Automatic vehicle detection based on automatic histogram-based fuzzy C-means algorithm and perceptual grouping using very high-resolution aerial imagery and road vector data

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Gökaşar, Ilgın

    2016-01-01

    This study presents an approach for the automatic detection of vehicles using very high-resolution images and road vector data. Initially, road vector data and aerial images are integrated to extract road regions. Then, the extracted road/street region is clustered using an automatic histogram-based fuzzy C-means algorithm, and edge pixels are detected using the Canny edge detector. In order to automatically detect vehicles, we developed a local perceptual grouping approach based on fusion of edge detection and clustering outputs. To provide the locality, an ellipse is generated using characteristics of the candidate clusters individually. Then, the ratio of edge pixels to non-edge pixels in the corresponding ellipse is computed to distinguish the vehicles. Finally, a point-merging rule is applied to merge the points that satisfy a predefined threshold and are supposed to denote the same vehicles. The experimental validation of the proposed method was carried out on six very high-resolution aerial images that illustrate two highways, two shadowed roads, a crowded narrow street, and a street in a dense urban area with crowded parked vehicles. The evaluation of the results shows that our proposed method achieved 86% overall correctness and 83% completeness.

  20. Multi-temporal image analysis of historical aerial photographs and recent satellite imagery reveals evolution of water body surface area and polygonal terrain morphology in Kobuk Valley National Park, Alaska

    NASA Astrophysics Data System (ADS)

    Necsoiu, Marius; Dinwiddie, Cynthia L.; Walter, Gary R.; Larsen, Amy; Stothoff, Stuart A.

    2013-06-01

    Multi-temporal image analysis of very-high-resolution historical aerial and recent satellite imagery of the Ahnewetut Wetlands in Kobuk Valley National Park, Alaska, revealed the nature of thaw lake and polygonal terrain evolution over a 54-year period of record comprising two 27-year intervals (1951-1978, 1978-2005). Using active-contouring-based change detection, high-precision orthorectification and co-registration and the normalized difference index, surface area expansion and contraction of 22 shallow water bodies, ranging in size from 0.09 to 179 ha, and the transition of ice-wedge polygons from a low- to a high-centered morphology were quantified. Total surface area decreased by only 0.4% during the first time interval, but decreased by 5.5% during the second time interval. Twelve water bodies (ten lakes and two ponds) were relatively stable with net surface area decreases of ≤10%, including four lakes that gained area during both time intervals, whereas ten water bodies (five lakes and five ponds) had surface area losses in excess of 10%, including two ponds that drained completely. Polygonal terrain remained relatively stable during the first time interval, but transformation of polygons from low- to high-centered was significant during the second time interval.

  1. "We're from the Generation that was Raised on Television": A Qualitative Exploration of Media Imagery in Elementary Preservice Teachers' Video Production

    ERIC Educational Resources Information Center

    Hayes, Michael T.; Petrie, Gina Mikel

    2006-01-01

    In this article, the authors present their analysis of preservice teachers' video production. Twenty-eight students in the first author's Social Foundations of the Elementary Curriculum course produced a 5- to 10-minute video as the major assignment for the class; interviews were conducted with six of the seven video production groups and the videos…

  2. Physical controls and patterns of recruitment on the Drôme River (SE France): An analysis based on a chronosequence of high resolution aerial imagery

    NASA Astrophysics Data System (ADS)

    Piegay, H.; Stella, J. C.; Raepple, B.

    2014-12-01

    Along with the recent recognition of the role of vegetation in influencing channel hydraulics, and thus fluvial morphology, comes the need for scientific research on vegetation recruitment and its controlling factors. Flood disturbance is known to create a suitable physical template for the establishment of woody pioneers. Sapling recruitment patterns and underlying physical controls were investigated on a 5 km braided reach of the Drôme River in South-eastern France, following the 2003 50-year flood event. The approach was based on the analysis of a chronosequence of high resolution aerial images acquired yearly between 2005 and 2011, complemented by airborne LiDAR data and field observations. The study highlights how physical complexity induced by natural variations in hydro-climatic, and consequently hydro-geomorphic, conditions facilitates variable patterns of recruitment. The initial post-flood vegetative units, which covered up to 10% of the total active channel area in 2005, were seen to double within six years. The variability of hydro-climatic conditions was reflected in the temporal and spatial patterns of recruitment, with a pronounced peak of vegetation expansion in 2007 and a decreasing trend following higher flows in 2009. Recruitment was further seen to be sustained in a variety of geomorphic units, which showed different probabilities and patterns of recruitment. Active channels were the prominent geomorphic unit in terms of total biomass development, while in-channel wood units showed the highest probability of recruitment. Understanding vegetation recruitment is becoming crucial for predicting fluvial system evolution in different hydroclimatic contexts. In application, these findings may contribute to improving efforts in flood risk management, as well as restoration planning.

  3. Repeat, Low Altitude Measurements of Vegetation Status and Biomass Using Manned Aerial and UAS Imagery in a Piñon-Juniper Woodland

    NASA Astrophysics Data System (ADS)

    Krofcheck, D. J.; Lippitt, C.; Loerch, A.; Litvak, M. E.

    2015-12-01

    Measuring the above-ground biomass of vegetation is a critical component of any ecological monitoring campaign. Traditionally, vegetation biomass has been measured with allometric approaches, which are time-consuming, labor-intensive, and extremely expensive to conduct over large scales, and consequently cost-prohibitive at the landscape scale. Furthermore, in semi-arid ecosystems characterized by vegetation with inconsistent growth morphologies (e.g., piñon-juniper woodlands), even ground-based conventional allometric approaches are often challenging to execute consistently across individuals and through time, increasing the difficulty of the required measurements and consequently limiting the accuracy of the resulting products. To constrain the uncertainty associated with these campaigns, and to expand the extent of our measurement capability, we made repeat measurements of vegetation biomass in a semi-arid piñon-juniper woodland using structure-from-motion (SfM) techniques. We used high-spatial-resolution overlapping aerial images and high-accuracy ground control points, collected from both manned aircraft and multi-rotor UAS platforms, to generate a digital surface model (DSM) for our experimental region. We extracted high-precision canopy volumes from the DSM and compared these to the vegetation allometric data to generate high-precision canopy volume models. We used these models to predict the drivers of allometric equations for Pinus edulis and Juniperus monosperma (canopy height, diameter at breast height, and root collar diameter). Using this approach, we successfully accounted for the carbon stocks in standing live and standing dead vegetation across a 9 ha region, which contained 12.6 Mg/ha of standing dead biomass, with good agreement to our field plots. Here we present the initial results from an object-oriented workflow which aims to automate the biomass estimation process of tree crown delineation and volume calculation, and partition
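    A minimal sketch of deriving a crown volume from an SfM surface model is shown below, assuming a DSM, a ground model (DTM) and a previously delineated crown mask; the delineation step itself is outside this sketch.

```python
import numpy as np

def crown_volume(dsm, dtm, crown_mask, cell_area):
    """Crown volume (m^3): sum of canopy heights (DSM - DTM) over a crown mask, times cell area (m^2)."""
    chm = np.clip(np.asarray(dsm, dtype=float) - np.asarray(dtm, dtype=float), 0.0, None)
    return float(chm[crown_mask].sum() * cell_area)
```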

  4. ASSESSING THE ACCURACY OF SATELLITE-DERIVED LAND COVER CLASSIFICATION USING HISTORICAL AERIAL PHOTOGRAPHY, DIGITAL ORTHOPHOTO QUADRANGLES, AND AIRBORNE VIDEO DATA

    EPA Science Inventory

    As the rapidly growing archives of satellite remote sensing imagery now span decades' worth of data, there is increasing interest in the study of long-term regional land cover change across multiple image dates. In most cases, however, temporally coincident ground sampled data ar...

  5. ASSESSING THE ACCURACY OF SATELLITE-DERIVED LAND COVER CLASSIFICATION USING HISTORICAL AERIAL PHOTOGRAPHY, DIGITAL ORTHOPHOTO QUADRANGLES, AND AIRBORNE VIDEO DATA

    EPA Science Inventory

    As the rapidly growing archives of satellite remote sensing imagery now span decades' worth of data, there is increasing interest in the study of long-term regional land cover change across multiple image dates. In most cases, however, temporally coincident ground sampled data are...

  6. ASSESSING THE ACCURACY OF SATELLITE-DERIVED LAND COVER CLASSIFICATION USING HISTORICAL AERIAL PHOTOGRAPHY, DIGITAL ORTHOPHOTO QUADRANGLES, AND AIRBORNE VIDEO DATA

    EPA Science Inventory

    As the rapidly growing archives of satellite remote sensing imagery now span decades' worth of data, there is increasing interest in the study of long-term regional land cover change across multiple image dates. In most cases, however, temporally coincident ground sampled data are...

  7. Aerial Explorers

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Greg; Ippolito, Corey

    2005-01-01

    This paper presents recent results from a mission architecture study of planetary aerial explorers. In this study, several mission scenarios were developed in simulation and evaluated on success in meeting mission goals. This aerial explorer mission architecture study is unique in comparison with previous Mars airplane research activities. The study examines how aerial vehicles can find and gain access to otherwise inaccessible terrain features of interest. The aerial explorer also engages in a high-level of (indirect) surface interaction, despite not typically being able to takeoff and land or to engage in multiple flights/sorties. To achieve this goal, a new mission paradigm is proposed: aerial explorers should be considered as an additional element in the overall Entry, Descent, Landing System (EDLS) process. Further, aerial vehicles should be considered primarily as carrier/utility platforms whose purpose is to deliver air-deployed sensors and robotic devices, or symbiotes, to those high-value terrain features of interest.

  8. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
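    A minimal sketch of flagging frame pairs by dense optical-flow magnitude is shown below, using OpenCV's Farneback flow; the thresholds and the three-way tagging are illustrative assumptions, not the parameters of the system described above.

```python
import cv2
import numpy as np

def tag_frame_pair(prev_gray, cur_gray, low_thresh=0.1, high_thresh=8.0):
    """Tag a pair of grayscale frames as 'static', 'fast_camera_motion' or 'usable'
    based on the median dense optical-flow magnitude (thresholds are placeholders)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    median_mag = float(np.median(np.linalg.norm(flow, axis=2)))
    if median_mag < low_thresh:
        return "static"               # little or no motion in the scene
    if median_mag > high_thresh:
        return "fast_camera_motion"   # global motion dominates the frame
    return "usable"
```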

  9. Satellite Imagery Assisted Road-Based Visual Navigation System

    NASA Astrophysics Data System (ADS)

    Volkova, A.; Gibbens, P. W.

    2016-06-01

    There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features from Google Earth* to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates them with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes of the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery and another provider can be used.

  10. Small Moving Vehicle Detection in a Satellite Video of an Urban Area

    PubMed Central

    Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng

    2016-01-01

    Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, which provides a broader field of surveillance. Existing work generally focuses on aerial video with moderately-sized objects based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach simultaneously achieves a high detection rate and a low false alarm rate. PMID:27657091
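    A much-simplified stand-in for the first stage (foreground segmentation and accumulation into a motion heat map) is sketched below, using frame differencing against a median background; the paper's saliency-based model is more involved, so this is illustrative only.

```python
import numpy as np

def motion_heat_map(frames, thresh=15.0):
    """Accumulate a motion heat map from a stack of co-registered grayscale frames.

    frames: (n_frames, H, W) array; returns the per-pixel fraction of frames flagged as moving.
    """
    frames = np.asarray(frames, dtype=float)
    background = np.median(frames, axis=0)
    heat = np.zeros(background.shape, dtype=float)
    for frame in frames:
        heat += (np.abs(frame - background) > thresh)
    return heat / len(frames)
```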

  11. Small Moving Vehicle Detection in a Satellite Video of an Urban Area.

    PubMed

    Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng

    2016-09-21

    Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, which provides a broader field of surveillance. Existing work generally focuses on aerial video with moderately-sized objects based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach simultaneously achieves a high detection rate and a low false alarm rate.

  12. Unmanned aerial vehicles for rangeland mapping and monitoring: a comparison of two systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial photography from unmanned aerial vehicles (UAVs) bridges the gap between ground-based observations and remotely sensed imagery from aerial and satellite platforms. UAVs can be deployed quickly and repeatedly, are less costly and safer than piloted aircraft, and can obtain very high-resolution...

  13. 3-D Scene Reconstruction from Aerial Imagery

    DTIC Science & Technology

    2012-03-01

    Front-matter excerpt: CMVS/PMVS2; twenty-six identified reference markers within ground truth; selection parameters used for CMVS/PMVS2; number of keypoints extracted from each image at variable...

  14. Incorporation of texture, intensity, hue, and saturation for rangeland monitoring with unmanned aircraft imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial photography acquired with unmanned aerial vehicles (UAVs) has great potential for incorporation into rangeland health monitoring protocols, and object-based image analysis is well suited for this hyperspatial imagery. A major drawback, however, is the low spectral resolution of the imagery, b...

  15. Use of Kendall's coefficient of concordance to assess agreement among observers of very high resolution imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Ground-based vegetation monitoring methods are expensive, time-consuming, and limited in sample-size. Aerial imagery is appealing to managers because of the reduced time and expense and the increase in sample size. One challenge of aerial imagery is detecting differences among observers of the sam...

  16. Using Airborne and Satellite Imagery to Distinguish and Map Black Mangrove

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of studies evaluating color-infrared (CIR) aerial photography, CIR aerial true digital imagery, and high resolution QuickBird multispectral satellite imagery for distinguishing and mapping black mangrove [Avicennia germinans (L.) L.] populations along the lower Texas g...

  17. "A" Is for Aerial Maps and Art

    ERIC Educational Resources Information Center

    Todd, Reese H.; Delahunty, Tina

    2007-01-01

    The technology of satellite imagery and remote sensing adds a new dimension to teaching and learning about maps with elementary school children. Just a click of the mouse brings into view some images of the world that could only be imagined a generation ago. Close-up aerial pictures of the school and neighborhood quickly catch the interest of…

  18. Aerial Photography

    NASA Technical Reports Server (NTRS)

    1985-01-01

    John Hill, a pilot and commercial aerial photographer, needed an information base. He consulted NERAC and requested a search of the latest developments in camera optics. NERAC provided information; Hill contacted the manufacturers of camera equipment and reduced his photographic costs significantly.

  19. High-resolution spatial patterns of Soil Organic Carbon content derived from low-altitude aerial multi-band imagery on the Broadbalk Wheat Experiment at Rothamsted, UK

    NASA Astrophysics Data System (ADS)

    Aldana Jague, Emilien; Goulding, Keith; Heckrath, Goswin; Macdonald, Andy; Poulton, Paul; Stevens, Antoine; Van Wesemael, Bas; Van Oost, Kristof

    2014-05-01

    Soil organic C (SOC) contents in arable landscapes change as a function of management, climate and topography (Johnston et al., 2009). Traditional methods to measure soil C stocks are labour intensive, time consuming and expensive. Consequently, there is a need to develop low-cost methods for monitoring SOC contents in agricultural soils. Remote sensing methods based on multi-spectral images may help map SOC variation in surface soils. Recently, the costs of both Unmanned Aerial Vehicles (UAVs) and multi-spectral cameras have dropped dramatically, opening up the possibility of more widespread use of these tools for SOC mapping. Long-term field experiments with distinct SOC contents in adjacent plots provide a very useful resource for systematically testing remote sensing approaches to measuring SOC. This study focusses on the Broadbalk Wheat Experiment at Rothamsted (UK). The Broadbalk experiment started in 1843 and is widely acknowledged to be the oldest continuing agronomic field experiment in the world. The initial aim of the experiment was to test the effects of different organic manures and inorganic fertilizers on the yield of winter wheat. The experiment initially contained 18 strips, each about 320 m long and 6 m wide, separated by paths 1.5-2.5 m wide. The strips were subsequently divided into ten sections (>180 plots) to test the effects of other factors (crop rotation, herbicides, pesticides etc.). The different amounts and combinations of mineral fertilisers (N, P, K, Na & Mg) and Farmyard Manure (FYM) applied to these plots for over 160 years have resulted in very different SOC contents in adjacent plots, ranging between 0.8% and 3.5%. In addition to large inter-plot variability in SOC, there is evidence of within-plot trends related to the use of discard areas between plots and movement of soil as a result of ploughing. The objectives of this study are (i) to test whether low-altitude multi-band imagery can be used to accurately predict spatial

  20. Simulation of parafoil reconnaissance imagery

    NASA Astrophysics Data System (ADS)

    Kogler, Kent J.; Sutkus, Linas; Troast, Douglas; Kisatsky, Paul; Charles, Alain M.

    1995-08-01

    Reconnaissance from unmanned platforms is currently of interest to DoD and civil sectors concerned with drug trafficking and illegal immigration. Platforms employed vary from motorized aircraft to tethered balloons. One approach currently under evaluation deploys a TV camera suspended from a parafoil delivered to the area of interest by a cannon-launched projectile. Imagery is then transmitted to a remote monitor for processing and interpretation. This paper presents results of imagery obtained from simulated parafoil flights, in which software techniques were developed to introduce image degradation caused by atmospheric obscurants and by perturbations in the normal parafoil flight trajectory induced by wind gusts. The approach to capturing continuous motion imagery from captive flight test recordings, the introduction of simulated effects, and the transfer of the processed imagery back to video tape are described.

  1. Cockpit Video: A Low Cost BDA Source

    DTIC Science & Technology

    1993-12-01

    Table-of-contents excerpt: Poor Secondary Imagery Dissemination; MISREP Inadequacies; ATO Omissions; Intel's Reluctance to Use Onboard Video; BDA Work-arounds; Video in the BDA Process; Video Joint Munitions Effectiveness Manual Weighting; Enhance ... Training; Revise MISREP Procedures.

  2. The remote characterization of vegetation using Unmanned Aerial Vehicle photography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Unmanned Aerial Vehicles (UAVs) can fly in place of piloted aircraft to gather remote sensing information on vegetation characteristics. The type of sensors flown depends on the instrument payload capacity available, so that, depending on the specific UAV, it is possible to obtain video, aerial phot...

  3. In-flow evolution of lahar deposits from video-imagery with implications for post-event deposit interpretation, Mount Semeru, Indonesia

    NASA Astrophysics Data System (ADS)

    Starheim, Colette C. A.; Gomez, Christopher; Davies, Tim; Lavigne, Franck; Wassmer, Patrick

    2013-04-01

    The hazardous and unpredictable nature of lahars makes them challenging to study, yet the in-flow processes characterizing these events are important to understand. As a result, much of the previous research on lahar sedimentation and flow processes has been derived from experimental flows or stratigraphic surveys of post-event deposits. By comparison, little is known about the time-dependent sediment and flow dynamics of lahars in natural environments. Using video footage of seven lahars on the flanks of Semeru Volcano (East Java, Indonesia), the present study offers new insights into the in-flow evolution of sediment in natural lahars. Video analysis revealed several distinctive patterns of sediment entrainment and deposition that varied with time-related fluctuations in flow. These patterns were used to generate a conceptual framework describing possible processes of formation for subsurface architectural features identified in an earlier lateral survey of lahar deposits on Semeru Volcano (Gomez and Lavigne, 2010a). The formation of lateral discontinuities was related to the partial erosion of transitional bank deposits followed by fresh deposition along the erosional contact. This pattern was observed over the course of several lahar events and within individual flows. Observations similarly offer potential explanations for the formation of lenticular features. Depending on flow characteristics, these features appeared to form by preferential erosion or deposition around large stationary blocks, and by deposition along channel banks during episodes of channel migration or channel constriction. Finally, conditions conducive to the deposition of fine laminated beds were observed during periods of attenuating and surging flow. These results emphasize the difficulties associated with identifying process-structure relationships solely from post-event deposit interpretation and illustrate that an improved understanding of the time-dependent sediment dynamics in lahars may

  4. The availability of local aerial photography in southern California. [for solution of urban planning problems

    NASA Technical Reports Server (NTRS)

    Allen, W., III; Sledge, B.; Paul, C. K.; Landini, A. J.

    1974-01-01

    Some of the major photography and photogrammetric suppliers and users located in Southern California are listed. Recent trends in aerial photographic coverage of the Los Angeles basin area are also noted, as well as the uses of that imagery.

  5. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  6. Observations of debris flows at Chalk Cliffs, Colorado, USA: Part 1, in-situ measurements of flow dynamics, tracer particle movement and video imagery from the summer of 2009

    USGS Publications Warehouse

    McCoy, Scott W.; Coe, Jeffrey A.; Kean, Jason W.; Tucker, Greg E.; Staley, Dennis M.; Wasklewicz, Thad A.

    2011-01-01

    Debris flows initiated by surface-water runoff during short duration, moderate- to high-intensity rainfall are common in steep, rocky, and sparsely vegetated terrain. Yet large uncertainties remain about the potential for a flow to grow through entrainment of loose debris, which make formulation of accurate mechanical models of debris-flow routing difficult. Using a combination of in situ measurements of debris flow dynamics, video imagery, tracer rocks implanted with passive integrated transponders (PIT) and pre- and post-flow 2-cm resolution digital terrain models (terrain data presented in a companion paper by STALEY et alii, 2011), we investigated the entrainment and transport response of debris flows at Chalk Cliffs, CO, USA. Four monitored events during the summer of 2009 all initiated from surface-water runoff, generally less than an hour after the first measurable rain. Despite reach-scale morphology that remained relatively constant, the four flow events displayed a range of responses, from long-runout flows that entrained significant amounts of channel sediment and dammed the main-stem river, to smaller, short-runout flows that were primarily depositional in the upper basin. Tracer-rock travel-distance distributions for these events were bimodal; particles either remained immobile or they travelled the entire length of the catchment. The long-runout, large-entrainment flow differed from the other smaller flows by the following controlling factors: peak 10-minute rain intensity; duration of significant flow in the channel; and to a lesser extent, peak surge depth and velocity. Our growing database of natural debris-flow events can be used to develop linkages between observed debris-flow transport and entrainment responses and the controlling rainstorm characteristics and flow properties.

  7. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use a simple soft-threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods [Yang10, newsam11], yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide area high-resolution aerial imagery.
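
    A minimal sketch of the soft-threshold sparse-feature step described above, assuming features are projected onto already-learned basis vectors; the basis learning itself is not shown, and the threshold value and matrix shapes are assumptions rather than the authors' settings.

      # Illustrative only: soft-threshold sparse coding over learned basis vectors.
      import numpy as np

      def sparse_features(X, D, alpha=0.25):
          """X: (n_samples, d) low-level features; D: (d, k) basis vectors (assumed learned)."""
          proj = X @ D                                          # project onto the basis
          return np.sign(proj) * np.maximum(0.0, np.abs(proj) - alpha)

      # Random data standing in for real features and bases (shapes are assumptions):
      # S = sparse_features(np.random.randn(100, 64), np.random.randn(64, 256))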

  8. Dreams and Mediation in Music Video.

    ERIC Educational Resources Information Center

    Burns, Gary

    The most extensive use of dream imagery in popular culture occurs in the visual arts, and in the past five years it has become evident that music video (a semi-narrative hybrid of film and television) is the most dreamlike media product of all. The rampant depiction and implication of dreams and media fantasies in music video are often strongly…

  9. Looking for an old aerial photograph

    USGS Publications Warehouse

    ,

    1997-01-01

    Attempts to photograph the surface of the Earth date from the 1800s, when photographers attached cameras to balloons, kites, and even pigeons. Today, aerial photographs and satellite images are commonplace. The rate of acquiring aerial photographs and satellite images has increased rapidly in recent years. Views of the Earth obtained from aircraft or satellites have become valuable tools to Government resource planners and managers, land-use experts, environmentalists, engineers, scientists, and a wide variety of other users. Many people want historical aerial photographs for business or personal reasons. They may want to locate the boundaries of an old farm or a piece of family property. Or they may want a photograph as a record of changes in their neighborhood, or as a gift. The U.S. Geological Survey (USGS) maintains the Earth Science Information Centers (ESICs) to sell aerial photographs, remotely sensed images from satellites, a wide array of digital geographic and cartographic data, as well as the Bureau's well-known maps. Declassified photographs from early spy satellites were recently added to the ESIC offerings of historical images. Using the Aerial Photography Summary Record System database, ESIC researchers can help customers find imagery in the collections of other Federal agencies and, in some cases, those of private companies that specialize in esoteric products.

  10. Orthorectification, mosaicking, and analysis of sub-decimeter resolution UAV imagery for rangeland monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Unmanned aerial vehicles (UAVs) offer an attractive platform for acquiring imagery for rangeland monitoring. UAVs can be deployed quickly and repeatedly, and they can obtain sub-decimeter resolution imagery at lower image acquisition costs than with piloted aircraft. Low flying heights result in ima...

  11. Integrating multisource imagery and GIS analysis for mapping Bermuda's benthic habitats

    SciTech Connect

    Vierros, M.K.

    1997-06-01

    Bermuda is a group of isolated oceanic islands situated in the northwest Atlantic Ocean and surrounded by the Sargasso Sea. Bermuda possesses the northernmost coral reefs and mangroves in the Atlantic Ocean, and because of its high population density, both the terrestrial and marine environments are under intense human pressure. Although a long record of scientific research exists, this study is the first attempt to comprehensively map the area's benthic habitats, despite the need for such a map for resource assessment and management purposes. Multi-source and multi-date imagery were used to produce the habitat map because no complete, up-to-date image was available. Classifications were performed with SPOT data, and the results verified from recent aerial photography and current aerial video, along with extensive ground truthing. Stratification of the image into regions prior to classification reduced the confusing effects of varying water depth. Classification accuracy in shallow areas was increased by derivation of a texture pseudo-channel, while bathymetry was used as a classification tool in deeper areas, where local patterns of zonation were well known. Because of seasonal variation in the extent of seagrasses, a classification scheme based on density could not be used. Instead, a set of classes based on the seagrass area's exposure to the open ocean was developed. The resulting habitat map is currently being assessed for accuracy with promising preliminary results, indicating its usefulness as a basis for future resource assessment studies.

  12. Aerial radiation surveys

    SciTech Connect

    Jobst, J.

    1980-01-01

    A recent aerial radiation survey of the surroundings of the Vitro mill in Salt Lake City shows that uranium mill tailings have been removed to many locations outside their original boundary. To date, 52 remote sites have been discovered within a 100 square kilometer aerial survey perimeter surrounding the mill; 9 of these were discovered with the recent aerial survey map. Five additional sites, also discovered by aerial survey, contained uranium ore, milling equipment, or radioactive slag. Because of the success of this survey, plans are being made to extend the aerial survey program to other parts of the Salt Lake valley where diversions of Vitro tailings are also known to exist.

  13. ERTS imagery for ground-water investigations

    USGS Publications Warehouse

    Moore, Gerald K.; Deutsch, Morris

    1975-01-01

    ERTS imagery offers the first opportunity to apply moderately high-resolution satellite data to the nationwide study of water resources. This imagery is both a tool and a form of basic data. Like other tools and basic data, it should be considered for use in ground-water investigations. The main advantage of its use will be to reduce the need for field work. In addition, however, broad regional features may be seen easily on ERTS imagery, whereas they would be difficult or impossible to see on the ground or on low-altitude aerial photographs. Some present and potential uses of ERTS imagery are to locate new aquifers, to study aquifer recharge and discharge, to estimate ground-water pumpage for irrigation, to predict the location and type of aquifer management problems, and to locate and monitor strip mines which commonly are sources for acid mine drainage. In many cases, boundaries which are gradational on the ground appear to be sharp on ERTS imagery. Initial results indicate that the accuracy of maps produced from ERTS imagery is completely adequate for some purposes.

  14. Structural geologic interpretations from radar imagery

    USGS Publications Warehouse

    Reeves, Robert G.

    1969-01-01

    Certain structural geologic features may be more readily recognized on sidelooking airborne radar (SLAR) images than on conventional aerial photographs or other remote sensor imagery, or by ground observations. SLAR systems look obliquely to one or both sides and their images resemble aerial photographs taken at low sun angle with the sun directly behind the camera. They differ from air photos in geometry, resolution, and information content. Radar operates at much lower frequencies than the human eye, camera, or infrared sensors, and thus "sees" differently. The lower frequency enables it to penetrate most clouds and some precipitation, haze, dust, and some vegetation. Radar provides its own illumination, which can be closely controlled in intensity and frequency. It is narrow band, or essentially monochromatic. Low relief and subdued features are accentuated when viewed from the proper direction. Runs over the same area in significantly different directions (more than 45° from each other) show that images taken in one direction may emphasize features that are not emphasized on those taken in the other direction; the optimum direction is determined by those features which need to be emphasized for study purposes. Lineaments interpreted as faults stand out on radar imagery of central and western Nevada; folded sedimentary rocks cut by faults can be clearly seen on radar imagery of northern Alabama. In these areas, certain structural and stratigraphic features are more pronounced on radar images than on conventional photographs; thus radar imagery materially aids structural interpretation.

  15. Combining Human Computing and Machine Learning to Make Sense of Big (Aerial) Data for Disaster Response.

    PubMed

    Ofli, Ferda; Meier, Patrick; Imran, Muhammad; Castillo, Carlos; Tuia, Devis; Rey, Nicolas; Briant, Julien; Millet, Pauline; Reinhard, Friedrich; Parkan, Matthew; Joost, Stéphane

    2016-03-01

    Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The

  16. SAR imagery of the Grand Banks (Newfoundland) pack ice and its relationship to surface features

    NASA Technical Reports Server (NTRS)

    Argus, S. D.; Carsey, F. D.

    1988-01-01

    Synthetic Aperture Radar (SAR) data and aerial photographs were obtained over pack ice off the East Coast of Canada in March 1987 as part of the Labrador Ice Margin Experiment (LIMEX) pilot project. Examination of this data shows that although the pack ice off the Canadian East Coast appears essentially homogeneous to visible light imagery, two clearly defined zones of ice are apparent on C-band SAR imagery. To identify factors that create the zones seen on the radar image, aerial photographs were compared to the SAR imagery. Floe size data from the aerial photographs was compared to digital number values taken from SAR imagery of the same ice. The SAR data of the inner zone acquired three days apart over the melt period was also examined. The studies indicate that the radar response is governed by floe size and meltwater distribution.

  17. Dashboard Videos

    ERIC Educational Resources Information Center

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  18. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  19. Aerial Image Systems

    NASA Astrophysics Data System (ADS)

    Clapp, Robert E.

    1987-09-01

    Aerial images produce the best stereoscopic images of the viewed world. Despite the fact that every optic in existence produces an aerial image, few persons are aware of their existence and possible uses. Constant reference to the eye and other optical systems has produced a psychosis of design that only considers "focal planes" in the design and analysis of optical systems. All objects in the field of view of the optical device are imaged by the device as an aerial image. Use of aerial images in vision and visual display systems can provide a true stereoscopic representation of the viewed world. This paper discusses aerial image systems - their applications and designs - and presents designs and design concepts that utilize aerial images to obtain superior visual displays, particularly with application to visual simulation.

  20. Digital Video Over Space Systems and Networks

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2010-01-01

    This slide presentation reviews the use of digital video with space systems and networks. The earliest use of video was film, which precluded live viewing; this gave way to live television from space, which in turn has given way to digital video using internet protocol for transmission. This has brought many improvements along with new challenges, some of which are reviewed here. Digital video transmitted over space systems can provide incredible imagery; however, the process must be viewed as an entire system, rather than piecemeal.

  1. Aerial imagery and structure-from-motion based DEM reconstruction of region-sized areas (Sierra Arana, Spain and Namur Province, Belgium) using a high-altitude drifting balloon platform.

    NASA Astrophysics Data System (ADS)

    Burlet, Christian; María Mateos, Rosa; Azañón, Jose Miguel; Perez, José Vicente; Vanbrabant, Yves

    2015-04-01

    different elevations. A 1 m/pixel ground resolution set covering an area of about 200 km² and mapping the eastern part of the Sierra Arana (Andalucía, Spain) includes a karstic field directly to the south-east of the ridge and the cliffs of the "Riscos del Moro". A 4 m/pixel ground resolution set covering an area of about 900 km² includes the landslide-active Diezma region (Andalucía, Spain) and the water reserve of the Francisco Abellan lake. The third set has a 3 m/pixel ground resolution, covers about 100 km² and maps the Famennian rock formations, known as part of "La Calestienne", outcropping near Beauraing and Rochefort in the Namur Province (Belgium). The DEMs and orthophotos have been referenced using ground control points from satellite imagery (Spain, Belgium) and DGPS (Belgium). The quality of the produced DEMs was then evaluated by comparing the level and accuracy of details and surface artefacts between available topographic data (SRTM, 30 m/pixel; topographic maps) and the three Stratochip sets. This evaluation showed that the models were in good correlation with existing data and can readily be used in geomorphology, structural and natural hazard studies.

  2. Review of the SAFARI 2000 RC-10 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Myers, Jeff; Shelton, Gary; Annegarn, Harrold; Peterson, David L. (Technical Monitor)

    2001-01-01

    This presentation will review the aerial photography collected by the NASA ER-2 aircraft during the SAFARI (Southern African Regional Science Initiative) year 2000 campaign. It will include specifications on the camera and film, and will show examples of the imagery. It will also detail the extent of coverage, and the procedures to obtain film products from the South African government. Also included will be some sample applications of aerial photography for various environmental applications, and its use in augmenting other SAFARI data sets.

  3. Aerial vehicles collision avoidance using monocular vision

    NASA Astrophysics Data System (ADS)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, region-of-interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, the system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space. Using this information, the distance and time to collision can be estimated. Experimental research on real video sequences and modeled data is performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
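
    The abstract does not give the localization equations, so the following sketch illustrates only a generic monocular time-to-collision estimate based on the rate of expansion of an object's apparent size; it is an assumption-laden stand-in, not the paper's method.

      # Illustrative stand-in: time to collision from the expansion rate of an
      # object's apparent size s between two frames observed dt seconds apart.
      def time_to_collision(s_prev, s_curr, dt):
          expansion_rate = (s_curr - s_prev) / dt
          if expansion_rate <= 0:
              return float("inf")            # object not closing on the camera
          return s_curr / expansion_rate     # seconds until projected contact

      # print(time_to_collision(40.0, 44.0, 0.1))   # ~1.1 s under these assumptions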

  4. Part Two: Learning Science Through Digital Video: Student Views on Watching and Creating Videos

    NASA Astrophysics Data System (ADS)

    Wade, P.; Courtney, A. R.

    2014-12-01

    The use of digital video for science education has become common with the wide availability of video imagery. This study continues research into aspects of using digital video as a primary teaching tool to enhance student learning in undergraduate science courses. Two survey instruments were administered to undergraduate non-science majors. Survey One focused on: a) What science is being learned from watching science videos such as a "YouTube" clip of a volcanic eruption or an informational video on geologic time? and b) What are student preferences with regard to their learning (e.g. using video versus traditional modes of delivery)? Survey Two addressed students' perspectives on the storytelling aspect of the video with respect to: a) sustaining interest, b) providing science information, c) style of video and d) quality of the video. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. The storytelling aspect of each video was also addressed by students. Students watched 15-20 shorter (3-15 minute) science videos created within the last four years. Initial results of this research indicate that shorter video segments were preferred and that the storytelling quality of each video was related to student learning.

  5. Efficient pedestrian detection from aerial vehicles with object proposals and deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2016-05-01

    As Unmanned Aerial Systems grow in numbers, pedestrian detection from aerial platforms is becoming a topic of increasing importance. By providing greater contextual information and a reduced potential for occlusion, the aerial vantage point provided by Unmanned Aerial Systems is highly advantageous for many surveillance applications, such as target detection, tracking, and action recognition. However, due to the greater distance between the camera and the scene, targets of interest in aerial imagery are generally smaller and have less detail. Deep Convolutional Neural Networks (CNNs) have demonstrated excellent object classification performance, and in this paper we adapt them to the problem of pedestrian detection from aerial platforms. We train a CNN with five layers consisting of three convolution-pooling layers and two fully connected layers. We also address the computational inefficiencies of the sliding window method for object detection. In the sliding window configuration, a very large number of candidate patches are generated from each frame, while only a small number of them contain pedestrians. We utilize the Edge Box object proposal generation method to screen candidate patches based on an "objectness" criterion, so that only regions that are likely to contain objects are processed. This method significantly reduces the number of image patches processed by the neural network and makes our classification method very efficient. The resulting two-stage system is a good candidate for real-time implementation onboard modern aerial vehicles. Furthermore, testing on three datasets confirmed that our system offers high detection accuracy for terrestrial pedestrian detection in aerial imagery.
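
    A sketch of a five-layer patch classifier of the kind described above (three convolution-pooling layers followed by two fully connected layers); filter counts, patch size and the two-class output are assumptions, not the authors' exact configuration, and the Edge Box proposal stage is assumed to run upstream and is not shown.

      # Sketch of a five-layer proposal-patch classifier (3 conv-pool + 2 fully
      # connected); sizes and filter counts below are assumptions.
      import torch
      import torch.nn as nn

      class PedestrianCNN(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
                  nn.Linear(256, 2),           # pedestrian vs. background
              )

          def forward(self, x):                # x: (N, 3, 64, 64) proposal patches
              return self.classifier(self.features(x))

      # logits = PedestrianCNN()(torch.randn(16, 3, 64, 64))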

  6. Synthesis of photorealistic whole earth imagery

    NASA Astrophysics Data System (ADS)

    Rodgers, Todd K.; Papaik, Michael J.; Wylie, Jack L.

    1993-03-01

    A variety of remotely sensed digital imagery data sources now exists that enable the computer graphics synthesis of convincingly real whole-Earth images similar to those recorded by orbiting astronauts using conventional photographic techniques. Within data resolution limitations, such data sets can be rendered (using three-dimensional graphics technologies) to produce views of our planet from any vantage point. By utilizing time series of collected data in conjunction with synthetic Lambertian lighting models, such views can be animated, in time, to produce dynamic visualizations of the Earth and its weather systems. This paper describes an effort to produce an animation for commercial use in the broadcast industry. To be used for entertainment purposes, the animation was designed to show the dramatic, fluid nature of the Earth as it might appear from space. GOES infrared imagery was collected over the western hemisphere for 15 days at half-hour intervals. This imagery was processed to remove sensor artifacts and drop-outs and to create synthetic imagery which appears to the observer to be natural visible-wavelength imagery. Cloud-free imagery of the entire planet, resampled to 4 km resolution and based on mosaicked AVHRR polar-orbiting imagery, was used as a 'base map' to reflect surface features. Graphics techniques to simulate Lambertian lighting of the Earth's surface were used to impart the effects of changing solar illumination. All of the graphics elements were then, on a frame-by-frame basis, digitally composited together with varying cloud transparency to produce the final rendered imagery, which in turn was recorded onto video tape.
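
    A minimal sketch of the Lambertian shading step mentioned above, assuming per-pixel albedo and unit surface normals; the production compositing pipeline and data are not reproduced, and the array names in the usage comment are hypothetical.

      # Illustrative Lambertian shading: reflected intensity proportional to the
      # clamped dot product of the surface normal and the sun direction.
      import numpy as np

      def lambertian_shade(albedo, normals, sun_dir):
          """albedo: (H, W, 3); normals: (H, W, 3) unit normals; sun_dir: (3,) unit vector."""
          cos_incidence = np.clip(normals @ sun_dir, 0.0, 1.0)
          return albedo * cos_incidence[..., None]

      # shaded = lambertian_shade(base_map, surface_normals, np.array([0.0, 0.6, 0.8]))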

  7. Acquisition of airborne imagery in support of Deepwater Horizon oil spill recovery assessments

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.; Muller-Karger, Frank E.

    2012-09-01

    Remote sensing imagery was collected from a low flying aircraft along the near coastal waters of the Florida Panhandle and northern Gulf of Mexico and into Barataria Bay, Louisiana, USA, during March 2011. Imagery was acquired from an aircraft that simultaneously collected traditional photogrammetric film imagery, digital video, digital still images, and digital hyperspectral imagery. The original purpose of the project was to collect airborne imagery to support assessment of weathered oil in littoral areas influenced by the Deepwater Horizon oil and gas spill that occurred during the spring and summer of 2010. This paper describes the data acquired and presents information that demonstrates the utility of small spatial scale imagery to detect the presence of weathered oil along littoral areas in the northern Gulf of Mexico. Flight tracks and examples of imagery collected are presented and methods used to plan and acquire the imagery are described. Results suggest weathered oil in littoral areas after the spill was contained at the source.

  8. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    PubMed

    Kedzierski, Michal; Delis, Paulina

    2016-06-23

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery in which the external orientation of the first image was typical for aerial photogrammetry, whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations and assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.
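
    For reference, the standard collinearity equations that the abstract takes as its starting point are shown below in LaTeX, with r_ij the elements of the object-to-image rotation matrix, (X_S, Y_S, Z_S) the projection centre, (x_0, y_0) the principal point and f the focal length; the paper's simplified 90° tilt form is derived from these and is not reproduced here.

      % Standard collinearity equations (general form, for reference only).
      \begin{align}
        x - x_0 &= -f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
                            {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}\\
        y - y_0 &= -f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
                            {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
      \end{align}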

  9. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    PubMed Central

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery in which the external orientation of the first image was typical for aerial photogrammetry, whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations and assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954

  10. Artificial Video for Video Analysis

    ERIC Educational Resources Information Center

    Gallis, Michael R.

    2010-01-01

    This paper discusses the use of video analysis software and computer-generated animations for student activities. The use of artificial video affords the opportunity for students to study phenomena for which a real video may not be easy or even possible to procure, using analysis software with which the students are already familiar. We will…

  11. BOREAS Level-0 C-130 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Dominguez, Roseanne; Hall, Forrest G. (Editor)

    2000-01-01

    For the BOReal Ecosystem-Atmosphere Study (BOREAS), C-130 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The NASA C-130 Earth Resources aircraft can accommodate two mapping cameras during flight, each of which can be fitted with 6- or 12-inch focal-length lenses and black-and-white, natural-color, or color-IR film, depending upon requirements. Both cameras were often in operation simultaneously, although sometimes only the lower resolution camera was deployed. When both cameras were in operation, the higher resolution camera was often used in a more limited fashion. The acquired photography covers the period of April to September 1994. The aerial photography was delivered as rolls of large-format (9 x 9 inch) color transparency prints, with imagery from multiple missions (hundreds of prints) often contained within a single roll. A total of 1533 frames were collected from the C-130 platform for BOREAS in 1994. Note that the level-0 C-130 transparencies are not contained on the BOREAS CD-ROM set. An inventory file is supplied on the CD-ROM to inform users of all the data that were collected. Some photographic prints were made from the transparencies. In addition, BORIS staff digitized a subset of the transparencies and stored the images in JPEG format. The CD-ROM set contains a small subset of the collected aerial photography that was digitally scanned and stored as JPEG files for most tower and auxiliary sites in the NSA and SSA. See Section 15 for information about how to acquire additional imagery.

  12. Orientation Strategies for Aerial Oblique Images

    NASA Astrophysics Data System (ADS)

    Wiedemann, A.; Moré, J.

    2012-07-01

    Oblique aerial images are becoming more and more widespread, filling the gap between vertical aerial images and mobile mapping systems. Different systems are on the market. For some applications, like texture mapping, precise orientation data are required. One point is a stable interior orientation, which can be achieved by stable camera systems; the other is a precise exterior orientation. A sufficient exterior orientation can be achieved with a large effort in direct sensor orientation, although minor errors in the angles have a larger effect than in vertical imagery. The more appropriate approach is to determine the precise orientation parameters by photogrammetric methods using an adapted aerial triangulation. Due to the different points of view towards the object, the traditional aerotriangulation matching tools fail, as they produce many blunders and require a lot of manual work to achieve a sufficient solution. In this paper some approaches are discussed and results are presented for the most promising ones. We describe a single-step approach with an aerotriangulation using all available images; a two-step approach with an aerotriangulation of only the vertical images plus a mathematical transformation of the oblique images using the oblique cameras' eccentricity; and finally an extended functional model for a bundle block adjustment considering the mechanical connection between vertical and oblique images. Besides accuracy, other aspects like efficiency and the required manual work also have to be considered.

  13. Updating Maps Using High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Shahzad Janjua, Khurram; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Kingdom of Saudi Arabia is one of the most dynamic countries of the world. We have witnessed very rapid urban developments which are altering the Kingdom's landscape on a daily basis. In recent years a substantial increase in urban populations has been observed, resulting in the formation of large cities. Considering this fast-paced growth, it has become necessary to monitor these changes while also considering the challenges faced by aerial photography projects. It has been observed that data obtained through aerial photography have a lifecycle of five years because of delays caused by extreme weather conditions and dust storms, which act as hindrances or barriers during aerial imagery acquisition and have increased the costs of aerial survey projects. All of these circumstances require that we consider alternatives that can provide easier and better ways of image acquisition in a short span of time while achieving reliable accuracy and cost effectiveness. The approach of this study is to conduct an extensive comparison between different resolutions of data sets, which include orthophotos of 10 cm GSD, stereo images of 50 cm GSD and stereo images of 1 m GSD, for map updating. Different approaches have been applied for digitizing buildings, roads, tracks, an airport, roof-level changes, filling stations, buildings under construction, property boundaries, mosque buildings and parking places.

  14. Locating inputs of freshwater to Lynch Cove, Hood Canal, Washington, using aerial infrared photography

    USGS Publications Warehouse

    Sheibley, Rich W.; Josberger, Edward G.; Chickadel, Chris

    2010-01-01

    The input of freshwater and associated nutrients into Lynch Cove and lower Hood Canal (fig. 1) from sources such as groundwater seeps, small streams, and ephemeral creeks may play a major role in the nutrient loading and hydrodynamics of this low dissolved-oxygen (hypoxic) system. These dispersed sources exhibit a high degree of spatial variability. However, few in-situ measurements of groundwater seepage rates and nutrient concentrations are available, and those that exist may not adequately represent the large spatial variability of groundwater discharge in the area. As a result, our understanding of these processes and their effect on hypoxic conditions in Hood Canal is limited. To determine the spatial variability and relative intensity of these sources, the U.S. Geological Survey Washington Water Science Center collaborated with the University of Washington Applied Physics Laboratory to obtain thermal infrared (TIR) images of the nearshore and intertidal regions of Lynch Cove at or near low tide. In the summer, cool freshwater discharges from seeps and streams, flows across the exposed, sun-warmed beach, and out onto the warm surface of the marine water. These temperature differences are readily apparent in the aerial thermal infrared imagery that we acquired during the summers of 2008 and 2009. When combined with coincident video camera images, these temperature differences allow identification of the location, the type, and the relative intensity of the sources.

  15. AERIAL METHODS OF EXPLORATION

    DTIC Science & Technology

    The development of photointerpretation techniques for identifying kimberlite pipes on aerial photographs is discussed. The geographic area considered is the Daldyn region, which lies in the zone of Northern Taiga of Yakutiya.

  16. Text Detection, Tracking and Recognition in Video: A Comprehensive Survey.

    PubMed

    Yin, Xu-Cheng; Zuo, Ze-Yu; Tian, Shu; Liu, Cheng-Lin

    2016-04-14

    Intelligent analysis of video data is currently in wide demand because video is a major source of sensory data in our lives. Text is a prominent and direct source of information in video, while recent surveys of text detection and recognition in imagery [1], [2] focus mainly on text extraction from scene images. Here, this paper presents a comprehensive survey of text detection, tracking and recognition in video with three major contributions. First, a generic framework is proposed for video text extraction that uniformly describes detection, tracking, recognition, and their relations and interactions. Second, within this framework, a variety of methods, systems and evaluation protocols of video text extraction are summarized, compared, and analyzed. Existing text tracking techniques, tracking based detection and recognition techniques are specifically highlighted. Third, related applications, prominent challenges, and future directions for video text extraction (especially from scene videos and web videos) are also thoroughly discussed.

  17. Aerial image databases for pipeline rights-of-way management

    NASA Astrophysics Data System (ADS)

    Jadkowski, Mark A.

    1996-03-01

    Pipeline companies that own and manage extensive rights-of-way corridors are faced with ever-increasing regulatory pressures, operating issues, and the need to remain competitive in today's marketplace. Automation has long been an answer to the problem of having to do more work with fewer people, and Automated Mapping/Facilities Management/Geographic Information Systems (AM/FM/GIS) solutions have been implemented at several pipeline companies. Until recently, the ability to cost-effectively acquire and incorporate up-to-date aerial imagery into these computerized systems has been out of the reach of most users. NASA's Earth Observations Commercial Applications Program (EOCAP) is providing a means by which pipeline companies can bridge this gap. The EOCAP project described in this paper includes a unique partnership between NASA and the James W. Sewall Company to develop an aircraft-mounted digital camera system and a ground-based computer system to geometrically correct and efficiently store and handle the digital aerial images in an AM/FM/GIS environment. This paper provides a synopsis of the project, including details on (1) the need for aerial imagery, (2) NASA's interest and role in the project, (3) the design of a Digital Aerial Rights-of-Way Monitoring System, (4) image georeferencing strategies for pipeline applications, and (5) commercialization of the EOCAP technology through a prototype project at Algonquin Gas Transmission Company, which operates major gas pipelines in New England, New York, and New Jersey.

  18. Immersive video

    NASA Astrophysics Data System (ADS)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  19. Large scale track analysis for wide area motion imagery surveillance

    NASA Astrophysics Data System (ADS)

    van Leeuwen, C. J.; van Huis, J. R.; Baan, J.

    2016-10-01

    Wide Area Motion Imagery (WAMI) enables image-based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time consuming as more data are added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high-quality track information for more than 40 thousand vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows us to quickly obtain only a part, or a sub-sampling, of the original high-resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale by skipping to the correct frames and reconstructing the image. Location-based queries allow a user to select tracks around a particular region of interest such as a landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their
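
    The tile layout is not specified in the abstract, so the following Python sketch only illustrates the index arithmetic such a multi-scale, tile-based format could use; the row-major ordering, tile size and power-of-two levels are assumptions.

      # Illustrative index arithmetic for a multi-scale, tile-based layout
      # (tile size, row-major ordering and power-of-two levels are assumptions).
      def tiles_for_region(x, y, w, h, level, image_w, image_h, tile=512):
          """Return row-major tile indices covering (x, y, w, h) at pyramid `level`,
          where level k is the full image downsampled by 2**k."""
          scale = 2 ** level
          level_w, level_h = image_w // scale, image_h // scale
          tiles_per_row = (level_w + tile - 1) // tile
          tiles_per_col = (level_h + tile - 1) // tile
          x0, y0 = (x // scale) // tile, (y // scale) // tile
          x1 = min(((x + w) // scale) // tile, tiles_per_row - 1)
          y1 = min(((y + h) // scale) // tile, tiles_per_col - 1)
          return [ty * tiles_per_row + tx
                  for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]

      # tiles_for_region(10_000, 4_000, 2_048, 2_048, level=1,
      #                  image_w=60_000, image_h=40_000)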

  20. Deep person re-identification in aerial images

    NASA Astrophysics Data System (ADS)

    Schumann, Arne; Schuchert, Tobias

    2016-10-01

    Person re-identification is the problem of matching multiple occurrences of a person in large amounts of image or video data. In this work we propose an approach specifically tailored to re-identify people across different camera views in aerial video recordings. New challenges that arise in aerial data include unusual and more varied view angles, a moving camera and potentially large changes in environment and other influences between recordings (i.e. between flights). Our approach addresses these new challenges. Due to their recent successes, we apply deep learning to automatically learn features for person re-identification on a number of public datasets. We evaluate these features on aerial data and propose a method to automatically select suitable pretrained features without requiring person id labels on the aerial data. We further show that tailored data augmentation methods are well suited to better cope with the larger variety in view angles. Finally, we combine our model with a metric learning approach to allow for interactive improvement of re-identification results through user feedback. We evaluate the approach on our own video dataset which contains 12 persons recorded from a UAV.

  1. Evaluating automatic registration of UAV imagery using multi-temporal ortho images

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-10-01

    Accurate geo-registration of acquired imagery is an important task when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. As an example, change detection needs accurately geo-registered images for selecting and comparing co-located images taken at different points in time. One challenge when using small UAVs lies in their unstable flight behavior and low-weight cameras. Thus, there is a need to stabilize and register the UAV imagery by image processing methods, since direct approaches based only on positional information from a GPS and on attitude and acceleration measured by an inertial measurement unit (IMU) are not accurate enough. In order to improve this direct geo-registration (or "pre-registration"), image matching techniques are applied to align the UAV imagery to geo-registered reference images. The main challenge consists in matching images taken from different sensors at different times of day and seasons. In this paper, we present evaluation methods for measuring the performance of image registration algorithms w.r.t. multi-temporal input data. They are based on augmenting a set of aligned image pairs by synthetic pre-registrations to an evaluation data set including truth transformations. The evaluation characteristics are based on quantiles of transformation residuals at certain control points. For a test site, video frames of a UAV mission and several ortho images from a period of 12 years were collected and synthetic pre-registrations corresponding to real flight parameters and registration errors were computed. Two algorithms A1 and A2, based on extracting key-points with a floating point descriptor (A1) and a binary descriptor (A2), were applied to the evaluation data set. As evaluation result, the algorithm A1 turned out to perform better than A2. Using affine or Helmert transformation types, both algorithms perform better than in the projective case. Furthermore, the evaluation classifies the ortho images w
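
    The evaluation characteristic mentioned above can be illustrated with a short sketch: apply an estimated and a truth transformation to a set of control points and report quantiles of the residual distances. The 3x3 homogeneous-matrix parameterization and the 50%/95% quantiles are assumptions for illustration, not necessarily the paper's exact setup.

      import numpy as np

      def apply_h(H, pts):
          """Apply a 3x3 (affine or projective) transform to an Nx2 point array."""
          ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
          return ph[:, :2] / ph[:, 2:3]

      def residual_quantiles(H_est, H_truth, control_pts, qs=(0.5, 0.95)):
          # Euclidean residuals between estimated and truth mappings of the points
          res = np.linalg.norm(apply_h(H_est, control_pts) - apply_h(H_truth, control_pts), axis=1)
          return {q: float(np.quantile(res, q)) for q in qs}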

  2. Motivational Videos and the Library Media Specialist: Teachers and Students on Film--Take 1

    ERIC Educational Resources Information Center

    Bohot, Cameron Brooke; Pfortmiller, Michelle

    2009-01-01

    Today's students are bombarded with digital imagery and sound nearly 24 hours a day. Video use in the classroom is engaging, and a teacher can instantly grab her students' attention. The content of the videos comes from many sources: the curriculum, the student handbook, and even the school rules. By creating the videos, teachers are not only…

  3. Onboard and Parts-based Object Detection from Aerial Imagery

    DTIC Science & Technology

    2011-09-01

    reduced operator workload. Additionally, a novel parts-based detection method was developed. A whole-object detector is not well suited for deformable and... Methodology: This chapter details the challenges of transitioning from ground station processing to onboard processing, the parts-based detection method

  4. High-biomass sorghum yield estimate with aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Abstract. To reach the goals laid out by the U.S. Government for displacing fossil fuels with biofuels, agricultural production of dedicated biomass crops is required. High-biomass sorghum is advantageous across wide regions because it requires less water per unit dry biomass and can produce very hi...

  5. Yield mapping of high-biomass sorghum with aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To reach the goals laid out by the U.S. Government for displacing fossil fuels with biofuels, agricultural production of dedicated biomass crops is required. High-biomass sorghum is advantageous across wide regions because it requires less water per unit dry biomass and can produce very high biomass...

  6. Early identification of cotton fields using mosaicked aerial multispectral imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Early identification of cotton fields is important for advancing boll weevil eradication progress and reducing the risk of reinfestation. Remote sensing has long been used for crop identification, but limited work has been reported on early identification of cotton fields. The objective of this stud...

  7. Dashboard Videos

    NASA Astrophysics Data System (ADS)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website. Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.

  8. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  9. Aerial Photography Summary Record System

    USGS Publications Warehouse

    ,

    1998-01-01

    The Aerial Photography Summary Record System (APSRS) describes aerial photography projects that meet specified criteria over a given geographic area of the United States and its territories. Aerial photographs are an important tool in cartography and a number of other professions. Land use planners, real estate developers, lawyers, environmental specialists, and many other professionals rely on detailed and timely aerial photographs. Until 1975, there was no systematic approach to locate an aerial photograph, or series of photographs, quickly and easily. In that year, the U.S. Geological Survey (USGS) inaugurated the APSRS, which has become a standard reference for users of aerial photographs.

  10. Weakly-Supervised Multimodal Kernel for Categorizing Aerial Photographs.

    PubMed

    Xia, Yingjie; Zhang, Luming; Liu, Zhenguang; Nie, Liqiang; Li, Xuelong

    2016-12-14

    Accurately distinguishing aerial photographs from different categories is a promising technique in computer vision. It can facilitate a series of applications such as video surveillance and vehicle navigation. In this paper, a new image kernel is proposed for effectively recognizing aerial photographs. The key is to encode high-level semantic cues into local image patches in a weakly-supervised way, and integrate multimodal visual features using a newly-developed hashing algorithm. The flowchart can be elaborated as follows. Given an aerial photo, we first extract a number of graphlets to describe its topological structure. For each graphlet, we utilize color and texture to capture its appearance, and a weakly-supervised algorithm to capture its semantics. Thereafter, aerial photo categorization can be naturally formulated as graphlet-to-graphlet matching. As the number of graphlets from each aerial photo is huge, to accelerate matching, we present a hashing algorithm to seamlessly fuse the multiple visual features into binary codes. Finally, an image kernel is calculated by fast matching the binary codes corresponding to each graphlet, and a multi-class SVM is learned for aerial photo categorization. We demonstrate the advantage of our proposed model by comparing it with state-of-the-art image descriptors. Moreover, an in-depth study of the descriptiveness of the hash-based graphlet is presented.
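
    A minimal sketch of the matching step only, assuming each aerial photo has already been reduced to a set of fixed-length binary codes (one per graphlet); the kernel value used here, the mean best-match similarity under Hamming distance, is an illustrative choice rather than the paper's exact formulation.

      import numpy as np

      def hamming(a, b):
          """Pairwise Hamming distances between two sets of binary codes (0/1 arrays)."""
          return (a[:, None, :] != b[None, :, :]).sum(axis=2)

      def graphlet_kernel(codes_a, codes_b):
          d = hamming(codes_a, codes_b)            # |A| x |B| distance matrix
          sim = 1.0 - d / codes_a.shape[1]         # normalize to similarities in [0, 1]
          # symmetric average of best matches in both directions
          return 0.5 * (sim.max(axis=1).mean() + sim.max(axis=0).mean())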

  11. Adding Insult to Imagery? Art Education and Censorship

    ERIC Educational Resources Information Center

    Sweeny, Robert W.

    2007-01-01

    The "Adding Insult to Imagery? Artistic Responses to Censorship and Mass-Media" exhibition opened in January 16, 2006, Kipp Gallery on the Indiana University of Pennsylvania campus. Eleven gallery-based works, 9 videos, and 10 web-based artworks comprised the show; each dealt with the relationship between censorship and mass mediated…

  12. Comparative Assessment of Very High Resolution Satellite and Aerial Orthoimagery

    NASA Astrophysics Data System (ADS)

    Agrafiotis, P.; Georgopoulos, A.

    2015-03-01

    This paper aims to assess the accuracy and radiometric quality of orthorectified high resolution satellite imagery from Pleiades-1B satellites through a comparative evaluation of their quantitative and qualitative properties. A Pleiades-1B stereopair of high resolution images taken in 2013, two adjacent GeoEye-1 stereopairs from 2011 and an aerial orthomosaic (LSO) provided by NCMA S.A (Hellenic Cadastre) from 2007 have been used for the comparison tests. As control dataset, an orthomosaic from aerial imagery also provided by NCMA S.A (0.25 m GSD) from 2012 was selected. The process for DSM and orthoimage production was performed using commercial digital photogrammetric workstations. The two resulting orthoimages and the aerial orthomosaic (LSO) were relatively and absolutely evaluated for their quantitative and qualitative properties. Test measurements were performed using the same check points in order to establish their accuracy, both for single point coordinates and for the distances between them. Check points were distributed according to the JRC Guidelines for Best Practice and Quality Checking of Ortho Imagery and NSSDA standards, while areas with different terrain relief and land cover were also included. The tests performed were also based on JRC and NSSDA accuracy standards. Finally, tests were carried out in order to assess the radiometric quality of the orthoimagery. The results are presented with a statistical analysis and they are evaluated in order to present the merits and demerits of the imaging sensors involved for orthoimage production. The results also serve for a critical approach to the usability and cost efficiency of satellite imagery for the production of Large Scale Orthophotos.
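
    For readers unfamiliar with the NSSDA procedure referenced above, a sketch of the horizontal check-point statistic follows; the 1.7308 factor converts the radial RMSE to the 95% confidence level when the x and y error components are of similar magnitude, and the array names are placeholders.

      import numpy as np

      def nssda_horizontal(ortho_xy, ref_xy):
          # differences between orthoimage and reference coordinates at check points
          dx = ortho_xy[:, 0] - ref_xy[:, 0]
          dy = ortho_xy[:, 1] - ref_xy[:, 1]
          rmse_r = np.sqrt(np.mean(dx**2 + dy**2))
          return {"RMSE_r": rmse_r, "Accuracy_95": 1.7308 * rmse_r}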

  13. Application of airborne thermal imagery to surveys of Pacific walrus

    USGS Publications Warehouse

    Burn, D.M.; Webber, M.A.; Udevitz, M.S.

    2006-01-01

    We conducted tests of airborne thermal imagery of Pacific walrus to determine if this technology can be used to detect walrus groups on sea ice and estimate the number of walruses present in each group. In April 2002 we collected thermal imagery of 37 walrus groups in the Bering Sea at spatial resolutions ranging from 1-4 m. We also collected high-resolution digital aerial photographs of the same groups. Walruses were considerably warmer than the background environment of ice, snow, and seawater and were easily detected in thermal imagery. We found a significant linear relation between walrus group size and the amount of heat measured by the thermal sensor at all 4 spatial resolutions tested. This relation can be used in a double-sampling framework to estimate total walrus numbers from a thermal survey of a sample of units within an area and photographs from a subsample of the thermally detected groups. Previous methods used in visual aerial surveys of Pacific walrus have sampled only a small percentage of available habitat, resulting in population estimates with low precision. Results of this study indicate that an aerial survey using a thermal sensor can cover as much as 4 times the area per hour of flight time with greater reliability than visual observation.
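
    The double-sampling idea can be sketched as follows: fit a linear relation between photographed group counts and integrated thermal signature on the subsample, then apply it to all thermally detected groups. The simple least-squares fit and variable names below are illustrative assumptions, not the authors' exact estimator.

      import numpy as np

      def estimate_total(heat_subsample, counts_subsample, heat_all_groups):
          # least-squares fit: count = a * heat + b, on the photographed subsample
          a, b = np.polyfit(heat_subsample, counts_subsample, deg=1)
          predicted = a * np.asarray(heat_all_groups) + b
          # clip negative predictions and sum over all thermally detected groups
          return float(np.clip(predicted, 0, None).sum())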

  14. A comparison of real and simulated airborne multisensor imagery

    NASA Astrophysics Data System (ADS)

    Bloechl, Kevin; De Angelis, Chris; Gartley, Michael; Kerekes, John; Nance, C. Eric

    2014-06-01

    This paper presents a methodology and results for the comparison of simulated imagery to real imagery acquired with multiple sensors hosted on an airborne platform. The dataset includes aerial multi- and hyperspectral imagery with spatial resolutions of one meter or less. The multispectral imagery includes data from an airborne sensor with three-band visible color and calibrated radiance imagery in the long-, mid-, and short-wave infrared. The airborne hyperspectral imagery includes 360 bands of calibrated radiance and reflectance data spanning 400 to 2450 nm in wavelength. Collected in September 2012, the imagery is of a park in Avon, NY, and includes a dirt track and areas of grass, gravel, forest, and agricultural fields. A number of artificial targets were deployed in the scene prior to collection for purposes of target detection, subpixel detection, spectral unmixing, and 3D object recognition. A synthetic reconstruction of the collection site was created in DIRSIG, an image generation and modeling tool developed by the Rochester Institute of Technology, based on ground-measured reflectance data, ground photography, and previous airborne imagery. Simulated airborne images were generated using the scene model, time of observation, estimates of the atmospheric conditions, and approximations of the sensor characteristics. The paper provides a comparison between the empirical and simulated images, including a comparison of achieved performance for classification, detection and unmixing applications. It was found that several differences exist due to the way the image is generated, including finite sampling and incomplete knowledge of the scene, atmospheric conditions and sensor characteristics. The lessons learned from this effort can be used in constructing future simulated scenes and further comparisons between real and simulated imagery.

  15. Aerial Explorers and Robotic Ecosystems

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Greg

    2004-01-01

    A unique bio-inspired approach to autonomous aerial vehicle, a.k.a. aerial explorer technology is discussed. The work is focused on defining and studying aerial explorer mission concepts, both as an individual robotic system and as a member of a small robotic "ecosystem." Members of this robotic ecosystem include the aerial explorer, air-deployed sensors and robotic symbiotes, and other assets such as rovers, landers, and orbiters.

  16. Landscape-scale geospatial research utilizing low elevation aerial photography generated with commercial unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Lipo, C. P.; Lee, C.; Wechsler, S.

    2012-12-01

    With the ability to generate on-demand high-resolution imagery across landscapes, unmanned aerial systems (UAS) are increasingly becoming the tools of choice for geospatial researchers. At CSULB, we have implemented a number of aerial systems in order to conduct archaeological, vegetation and terrain analyses. The platforms include the commercially available X100 by Gatewing, a hobby-based aircraft, kites, and tethered blimps. From our experience, each platform has advantages and disadvantages in field applicability and in the derived imagery. The X100, though comparatively more costly, produces images with excellent coverage of areas of interest and can fly in a wide range of weather conditions. The hobby plane solutions are low-cost and flexible in their configuration, but their relatively light weight makes them difficult to fly in windy conditions and the sets of images produced can vary widely. The tethered blimp has a large payload and can fly under many conditions, but its ability to systematically cover large areas is very limited. Kites are extremely low-cost but have similar limitations to blimps for area coverage and limited payload capabilities. Overall, we have found the greatest return for our investment from the Gatewing X100, despite its relatively higher cost, due to the quality of the images produced. Developments in autopilots, however, may improve the hobby aircraft solution and allow X100-like products to be produced in the near future. Results of imagery and derived products from these UAS missions will be presented and evaluated. Assessment of the viability of these UAS products will inform the research community of their applicability to a range of applications and, if viable, could provide a lower cost alternative to other image acquisition methods.

  17. Video games.

    PubMed

    Funk, Jeanne B

    2005-06-01

    The video game industry insists that it is doing everything possible to provide information about the content of games so that parents can make informed choices; however, surveys indicate that ratings may not reflect consumer views of the nature of the content. This article describes some of the currently popular video games, as well as developments that are on the horizon, and discusses the status of research on the positive and negative impacts of playing video games. Recommendations are made to help parents ensure that children play games that are consistent with their values.

  18. Digital Motion Imagery, Interoperability Challenges for Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2012-01-01

    With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video is becoming more practical as a data gathering tool for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, internet protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different standards and protocols are used by different agencies. This paper will explore the challenges presented by the abundance of motion imagery and video standards, interfaces and protocols, with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station will be examined. The paper will also discuss recent trends in the development of new video compression algorithms, as well as likely expanded use of Delay (or Disruption) Tolerant Networking nodes.

  19. Processing of SeaMARC swath sonar imagery

    SciTech Connect

    Pratson, L.; Malinverno, A.; Edwards, M.; Ryan, W. )

    1990-05-01

    Side-scan swath sonar systems have become an increasingly important means of mapping the sea floor. Two such systems are the deep-towed, high-resolution SeaMARC I sonar, which has a variable swath width of up to 5 km, and the shallow-towed, lower-resolution SeaMARC II sonar, which has a swath width of 10 km. The sea-floor imagery of acoustic backscatter output by the SeaMARC sonars is analogous to aerial photographs and airborne side-looking radar images of continental topography. Geologic interpretation of the sea-floor imagery is greatly facilitated by image processing. Image processing of the digital backscatter data involves removal of noise by median filtering, spatial filtering to remove sonar scans of anomalous intensity, across-track corrections to remove beam patterns caused by nonuniform response of the sonar transducers to changes in incident angle, and contrast enhancement by histogram equalization to maximize the available dynamic range. Correct geologic interpretation requires submarine structural fabrics to be displayed in their proper locations and orientations. Geographic projection of sea-floor imagery is achieved by merging the enhanced imagery with the sonar vehicle navigation and correcting for vehicle attitude. Co-registration of bathymetry with sonar imagery introduces sea-floor relief and permits the imagery to be displayed in three-dimensional perspectives, furthering the ability of the marine geologist to infer the processes shaping formerly hidden subsea terrains.
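
    Two of the processing steps named above, median filtering for noise removal and histogram equalization for contrast enhancement, can be sketched for an 8-bit backscatter swath as follows; the across-track beam-pattern correction and geographic projection steps are omitted.

      import numpy as np
      from scipy.ndimage import median_filter

      def enhance_swath(backscatter_u8, median_size=3):
          # remove speckle-like noise with a small median filter
          cleaned = median_filter(backscatter_u8, size=median_size)
          # histogram equalization via a cumulative-distribution lookup table
          hist = np.bincount(cleaned.ravel(), minlength=256)
          cdf = hist.cumsum()
          cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
          lut = np.round(255 * cdf).astype(np.uint8)
          return lut[cleaned]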

  20. Unmanned Aerial Vehicle (UAV) Dynamic-Tracking Directional Wireless Antennas for Low Powered Applications that Require Reliable Extended Range Operations in Time Critical Scenarios

    SciTech Connect

    Scott G. Bauer; Matthew O. Anderson; James R. Hanneman

    2005-10-01

    The proven value of DOD Unmanned Aerial Vehicles (UAVs) will ultimately transition to National and Homeland Security missions that require real-time aerial surveillance, situation awareness, force protection, and sensor placement. Public services first responders who routinely risk personal safety to assess and report a situation for emergency actions will likely be the first to benefit from these new unmanned technologies. ‘Packable’ or ‘Portable’ small class UAVs will be particularly useful to the first responder. They require the least amount of training, no fixed infrastructure, and are capable of being launched and recovered from the point of emergency. All UAVs require wireless communication technologies for real-time applications. Typically on a small UAV, a low bandwidth telemetry link is required for command and control (C2) and systems health monitoring. If the UAV is equipped with a real-time Electro-Optical or Infrared (EO/IR) video camera payload, a dedicated high bandwidth analog/digital link is usually required for reliable high-resolution imagery. In most cases, both the wireless telemetry and real-time video links will be integrated into the UAV with unity gain omni-directional antennas. With limited on-board power and payload capacity, a small UAV will be limited in the amount of radio-frequency (RF) energy it transmits to the users. Therefore, ‘packable’ and ‘portable’ UAVs will have limited useful operational ranges for first responders. This paper will discuss the limitations of small UAV wireless communications. The discussion will present an approach of utilizing a dynamic ground-based real-time tracking high gain directional antenna to provide extended range stand-off operation, potential RF channel reuse, and assured telemetry and data communications from low-powered UAV deployed wireless assets.
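
    A free-space link-budget sketch illustrates why a ground-based high-gain tracking antenna extends usable range for a fixed low-power UAV transmitter; the frequency, power, and gain values below are placeholders, not measurements from the paper.

      import math

      def received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, range_m):
          # free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
          fspl_db = 20 * math.log10(range_m) + 20 * math.log10(freq_hz) - 147.55
          return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db

      # e.g. 2.4 GHz, 20 dBm UAV transmitter with a 2 dBi omni, at 10 km:
      print(received_power_dbm(20, 2, 2, 2.4e9, 10_000))   # 2 dBi omni on the ground
      print(received_power_dbm(20, 2, 24, 2.4e9, 10_000))  # 24 dBi tracking dish on the ground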

  1. Aerial of the VAB

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Even in this aerial view at KSC, the Vehicle Assembly Building is imposing. In front of it is the Launch Control Center. In the background is the Rotation/Processing Facility, next to the Banana Creek. In the foreground is the Saturn Causeway that leads to Launch Pads 39A and 39B.

  2. Aerial photographic reproductions

    USGS Publications Warehouse

    ,

    1971-01-01

    Geological Survey vertical aerial photography is obtained primarily for topographic and geologic mapping. Reproductions from this photography are usually satisfactory for general use. Because reproductions are not stocked, but are custom processed for each order, they cannot be returned for credit or refund.

  3. Applicability of ERTS-1 imagery to the study of suspended sediment and aquatic fronts

    NASA Technical Reports Server (NTRS)

    Klemas, V.; Srna, R.; Treasure, W.; Otley, M.

    1973-01-01

    Imagery from three successful ERTS-1 passes over the Delaware Bay and Atlantic Coastal Region have been evaluated to determine visibility of aquatic features. Data gathered from ground truth teams before and during the overflights, in conjunction with aerial photographs taken at various altitudes, were used to interpret the imagery. The overpasses took place on August 16, October 10, 1972, and January 26, 1973, with cloud cover ranging from about zero to twenty percent. (I.D. Nos. 1024-15073, 1079-15133, and 1187-15140). Visual inspection, density slicing and multispectral analysis of the imagery revealed strong suspended sediment patterns and several distinct types of aquatic interfaces or frontal systems.

  4. Forestry, geology and hydrological investigations from ERTS-1 imagery in two areas of Ecuador, South America

    NASA Technical Reports Server (NTRS)

    Moreno, N. V. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. In the Oriente area, well-drained forests containing commercially valuable hardwoods can be recognized confidently and delineated quickly on the ERTS imagery. In the tropical rainforest, ERTS can provide an abundance of inferential information about large scale geologic structures. ERTS imagery is better than normal aerial photography for recognizing linears. The imagery is particularly useful for updating maps of the distributary system of the Guagas River Basin and of any other river with a similarly rapid changing channel pattern.

  5. Automatic Building Extraction and Roof Reconstruction in 3k Imagery Based on Line Segments

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Tian, J.; Kurz, F.

    2016-06-01

    We propose an image processing workflow to extract rectangular building footprints using georeferenced stereo-imagery and a derivative digital surface model (DSM) product. The approach applies a line segment detection procedure to the imagery and subsequently verifies identified line segments individually to create a footprint on the basis of the DSM. The footprint is further optimized by morphological filtering. Towards the realization of 3D models, we decompose the produced footprint and generate a 3D point cloud from DSM height information. By utilizing the robust RANSAC plane fitting algorithm, the roof structure can be correctly reconstructed. In the experimental part, the proposed approach was applied to 3K aerial imagery.
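
    A minimal RANSAC plane fit over a roof point cloud (an N x 3 array of x, y, z values taken from the DSM) might look like the sketch below; the iteration count and inlier threshold are illustrative values, not those used in the paper.

      import numpy as np

      def ransac_plane(points, n_iter=500, thresh=0.1, rng=np.random.default_rng(0)):
          best_inliers, best_model = None, None
          for _ in range(n_iter):
              p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
              normal = np.cross(p1 - p0, p2 - p0)
              norm = np.linalg.norm(normal)
              if norm < 1e-9:          # skip degenerate (collinear) samples
                  continue
              normal /= norm
              dist = np.abs((points - p0) @ normal)   # point-to-plane distances
              inliers = dist < thresh
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_inliers, best_model = inliers, (normal, p0)
          return best_model, best_inliers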

  6. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  7. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1981-06-10

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid.

  8. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  9. Video Library.

    ERIC Educational Resources Information Center

    Neugebauer, Bonnie

    1993-01-01

    Reviews videos for early childhood programs that focus on staff training, child care selection, building children's self-esteem, farm animals, and children's books by Judith Viorst and Robert McCloskey. (HOD)

  10. A qualitative evaluation of Landsat imagery of Australian rangelands

    USGS Publications Warehouse

    Graetz, R.D.; Carneggie, David M.; Hacker, R.; Lendon, C.; Wilcox, D.G.

    1976-01-01

    The capability of multidate, multispectral ERTS-1 imagery of three different rangeland areas within Australia was evaluated for its usefulness in preparing inventories of rangeland types, assessing on a broad scale range condition within these rangeland types, and assessing the response of rangelands to rainfall events over large areas. For the three divergent rangeland test areas, centered on Broken Hill, Alice Springs and Kalgoorlie, detailed interpretation of the imagery only partially satisfied the information requirements set. It was most useful in the Broken Hill area where fenceline contrasts in range condition were readily visible. At this and the other sites an overstorey of trees made interpretation difficult. Whilst the low resolution characteristics and the lack of stereoscopic coverage hindered interpretation, it was felt that this type of imagery, with its vast coverage, present low cost and potential for repeated sampling, is a useful addition to conventional aerial photography for all rangeland types.

  11. Digital elevation modelling using ASTER stereo imagery.

    PubMed

    Forkuo, Eric Kwabena

    2010-04-01

    Digital elevation model (DEM) in recent times has become an integral part of national spatial data infrastructure of many countries world-wide due to its invaluable importance. Although DEMs are mostly generated from contour maps, stereo aerial photographs and air-borne and terrestrial laser scanning, the stereo interpretation and auto-correlation from satellite image stereo-pairs such as with SPOT, IRS, and the relatively new ASTER imagery is also an effective means of producing DEM data. In this study, terrain elevation data were derived by applying a photogrammetric process to ASTER stereo imagery. Also, the quality of DEMs produced from ASTER stereo imagery was analysed by comparing it with a DEM produced from a topographic map at a scale of 1:50,000. While analyzing the vertical accuracy of the generated ASTER DEM, fifty ground control points were extracted from the map and overlaid on the DEM. Results indicate that a root-mean-square error in elevation of +/- 14 m was achieved with ASTER stereo image data of good quality. The horizontal accuracy obtained from the ground control points was 14.77 m, which is within the acceptable range of +/- 7 m to +/- 25 m. The generated 15 m DEM, along with 20 m, 25 m, and 30 m pixel DEMs, was compared to the original map DEM. In all, the results showed that the 15 m DEM conforms to the original map DEM better than the others. Overall, this analysis shows that the generated digital terrain model (DEM) is acceptable.

  12. Evaluation of unmanned aerial vehicles (UAVs) for detection of cattle in the Cattle Fever Tick Permanent Quarantine Zone

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An unmanned aerial vehicle was used to capture videos of cattle in pastures to determine the efficiency of this technology for use by Mounted Inspectors in the Permanent Quarantine zone (PQZ) of the Cattle Fever Tick Eradication Program in south Texas along the U.S.-Mexico Border. These videos were ...

  13. Mini, Micro, and Swarming Unmanned Aerial Vehicles: A Baseline Study

    DTIC Science & Technology

    2006-11-01

    …establishment of a UAV center of excellence in Queensland… has been used to monitor damage caused by the 2000 Mount Usu volcanic eruption in Japan. Using onboard video cameras, the RMAX recorded images of…

  14. Waste site characterization through digital analysis of historical aerial photographs at Los Alamos National Laboratory and Eglin Air Force Base

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Wells, B.; Rofer, C.; Martin, B.

    1995-05-01

    Historical aerial photographs are used to provide a physical history and preliminary mapping information for characterizing hazardous waste sites at Los Alamos National Laboratory and Eglin Air Force Base. The examples cited show how imagery was used to accurately locate and identify previous activities at a site, monitor changes that occurred over time, and document what remains observable of such activities today. The methodology demonstrates how historical imagery (along with any other pertinent data) can be used in the characterization of past environmental damage.

  15. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continue to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  16. Multistage, Multiband and sequential imagery to identify and quantify non-forest vegetation resources

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.

    1971-01-01

    Analysis and recognition processing of multispectral scanner imagery for plant community classification and interpretations of various film-filter-scale aerial photographs are reported. Data analyses and manuscript preparation of research on microdensitometry for plant community and component identification and remote estimates of biomass are included.

  17. Prediction of senescent rangeland canopy structural attributes with airborne hyperspectral imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Canopy structural and chemical data are needed for senescent, mixed-grass prairie landscapes in autumn, yet models driven by image data are lacking for rangelands dominated by non-photosynthetically active vegetation (NPV). Here, we report how aerial hyperspectral imagery might be modeled to predic...

  18. Science documentary video slides to enhance education and communication

    NASA Astrophysics Data System (ADS)

    Byrne, J. M.; Little, L. J.; Dodgson, K.

    2010-12-01

    Documentary production can convey powerful messages using a combination of authentic science and reinforcing video imagery. Conventional documentary production contains too much information for many viewers to follow; hence many powerful points may be lost. But documentary productions that are re-edited into short video sequences and made available through web based video servers allow the teacher/viewer to access the material as video slides. Each video slide contains one critical discussion segment of the larger documentary. A teacher/viewer can review the documentary one segment at a time in a class room, public forum, or in the comfort of home. The sequential presentation of the video slides allows the viewer to best absorb the documentary message. The website environment provides space for additional questions and discussion to enhance the video message.

  19. Imagery Integration Team

    NASA Technical Reports Server (NTRS)

    Calhoun, Tracy; Melendrez, Dave

    2014-01-01

    The Human Exploration Science Office (KX) provides leadership for NASA's Imagery Integration (Integration 2) Team, an affiliation of experts in the use of engineering-class imagery intended to monitor the performance of launch vehicles and crewed spacecraft in flight. Typical engineering imagery assessments include studying and characterizing the liftoff and ascent debris environments; launch vehicle and propulsion element performance; in-flight activities; and entry, landing, and recovery operations. Integration 2 support has been provided not only for U.S. Government spaceflight (e.g., Space Shuttle, Ares I-X) but also for commercial launch providers, such as Space Exploration Technologies Corporation (SpaceX) and Orbital Sciences Corporation, servicing the International Space Station. The NASA Integration 2 Team is composed of imagery integration specialists from JSC, the Marshall Space Flight Center (MSFC), and the Kennedy Space Center (KSC), who have access to a vast pool of experience and capabilities related to program integration, deployment and management of imagery assets, imagery data management, and photogrammetric analysis. The Integration 2 team is currently providing integration services to commercial demonstration flights, Exploration Flight Test-1 (EFT-1), and the Space Launch System (SLS)-based Exploration Missions (EM)-1 and EM-2. EM-2 will be the first attempt to fly a piloted mission with the Orion spacecraft. The Integration 2 Team provides the customer (both commercial and Government) with access to a wide array of imagery options - ground-based, airborne, seaborne, or vehicle-based - that are available through the Government and commercial vendors. The team guides the customer in assembling the appropriate complement of imagery acquisition assets at the customer's facilities, minimizing costs associated with market research and the risk of purchasing inadequate assets. The NASA Integration 2 capability simplifies the process of securing one

  20. Video Golf

    NASA Technical Reports Server (NTRS)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  1. Advances in applications and methodology for aerial infrared thermography

    NASA Astrophysics Data System (ADS)

    Stockton, Gregory R.

    2004-04-01

    Most aerial infrared (IR) is performed by the military, but there are commercial uses. Some of these non-military applications are the focus of this paper. Generally speaking, the farther away one can get from the object of an infrared survey, while maintaining the needed spatial resolution and thermal sensitivity, the more usable the data is. Wide areas and large objects can be effectively imaged from the air. In fact, the use of high-resolution aerial infrared imagery is often the only way that one can see slight nuances of temperature differences and trace the patterns of heat. In order to produce an easy to understand, high quality and useable report, the data must be acquired, recorded and processed in an efficient and effective way. This paper discusses the ongoing advances in methodology, platform and equipment required to produce high quality usable data for the end-user.

  2. Individual differences in visual imagery determine how event information is remembered.

    PubMed

    Sheldon, Signy; Amaral, Robert; Levine, Brian

    2017-03-01

    Individuals differ in how they mentally imagine past events. When reminiscing about a past experience, some individuals remember the event accompanied by rich visual images, while others will remember it with few of these images. In spite of the implications that these differences in the use of imagery have to the understanding of human memory, few studies have taken them into consideration. We examined how imagery interference affecting event memory retrieval was differently modulated by spatial and object imagery ability. We presented participants with a series of video-clips depicting complex events. Participants subsequently answered true/false questions related to event, spatial, or feature details contained in the videos, while simultaneously viewing stimuli that interfered with visual imagery processes (dynamic visual noise; DVN) or a control grey screen. The impact of DVN on memory accuracy was related to individual differences in spatial imagery ability. Individuals high in spatial imagery were less accurate at recalling details from the videos when simultaneously viewing the DVN stimuli compared to those low in spatial imagery ability. This finding held for questions related to the event and spatial details but not feature details. This study advocates for the inclusion of individual differences when studying memory processes.

  3. Applications of thermal infrared imagery for energy conservation and environmental surveys

    NASA Technical Reports Server (NTRS)

    Carney, J. R.; Vogel, T. C.; Howard, G. E., Jr.; Love, E. R.

    1977-01-01

    The survey procedures, developed during the winter and summer of 1976, employ color and color infrared aerial photography, thermal infrared imagery, and a handheld infrared imaging device. The resulting imagery was used to detect building heat losses, deteriorated insulation in built-up type building roofs, and defective underground steam lines. The handheld thermal infrared device, used in conjunction with the aerial thermal infrared imagery, provided a method for detecting and locating those roof areas that were underlain with wet insulation. In addition, the handheld infrared device was employed to conduct a survey of a U.S. Army installation's electrical distribution system under full operating loads. This survey proved to be a cost-effective procedure for detecting faulty electrical insulators and connections that, if allowed to persist, could have resulted in both safety hazards and loss in production.

  4. Aerial thermography studies of power plant heated lakes

    SciTech Connect

    Villa-Aleman, E.

    2000-01-26

    Remote sensing temperature measurements of water bodies is complicated by the temperature differences between the true surface or skin water and the bulk water below. Weather conditions control the reduction of the skin temperature relative to the bulk water temperature. Typical skin temperature depressions range from a few tenths of a degree Celsius to more than one degree. In this research project, the Savannah River Technology Center (SRTC) used aerial thermography and surface-based meteorological and water temperature measurements to study a power plant cooling lake in South Carolina. Skin and bulk water temperatures were measured simultaneously for imagery calibration and to produce a database for modeling of skin temperature depressions as a function of weather and bulk water temperatures. This paper will present imagery that illustrates how the skin temperature depression was affected by different conditions in several locations on the lake and will present skin temperature modeling results.

  5. Modeling Spatial Dependencies in High-Resolution Overhead Imagery

    SciTech Connect

    Cheriyadat, Anil M; Bright, Eddie A; Vatsavai, Raju

    2011-01-01

    Human settlement regions with different physical and socio-economic attributes exhibit unique spatial characteristics that are often illustrated in high-resolution overhead imagery. For example, size, shape and spatial arrangements of man-made structures are key attributes that vary with respect to the socioeconomic profile of the neighborhood. Successfully modeling these attributes is crucial in developing advanced image understanding systems for interpreting complex aerial scenes. In this paper we present three different approaches to model the spatial context in overhead imagery. First, we show that the frequency domain of the image can be used to model the spatial context [1]. The shape of the spectral energy contours characterizes the scene context and can be exploited as global features. Secondly, we explore a discriminative framework based on Conditional Random Fields (CRF) [2] to model the spatial context in the overhead imagery. The features derived from the edge orientation distribution calculated for a neighborhood and the associated class labels are used as input features to model the spatial context. Our third approach is based on grouping spatially connected pixels based on low-level edge primitives to form support regions [3]. The statistical parameters generated from the support-region feature distributions characterize different geospatial neighborhoods. We apply our approaches to high-resolution overhead imagery and show that the proposed approaches characterize the spatial context in overhead imagery.
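
    The edge-orientation-distribution feature mentioned for the CRF approach can be sketched as a magnitude-weighted histogram of gradient orientations within a neighborhood; the window size and bin count below are illustrative choices.

      import numpy as np

      def edge_orientation_histogram(gray, r0, c0, win=32, bins=8):
          patch = gray[r0:r0 + win, c0:c0 + win].astype(float)
          gy, gx = np.gradient(patch)                     # row and column gradients
          mag = np.hypot(gx, gy)
          ang = np.mod(np.arctan2(gy, gx), np.pi)         # orientations folded into [0, pi)
          hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
          total = hist.sum()
          return hist / total if total > 0 else hist      # normalized distribution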

  6. Defense Science Board Study on Unmanned Aerial Vehicles and Uninhabited Combat Aerial Vehicles

    DTIC Science & Technology

    2004-02-01

    Defense Science Board Study on Unmanned Aerial Vehicles and Uninhabited Combat Aerial Vehicles, February 2004: the final report of the Defense Science Board Task Force on Unmanned Aerial Vehicles and Uninhabited Combat Aerial Vehicles.

  7. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of directly adjacent objects. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
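
    Two of the named operations, unsharp masking and frame averaging, can be sketched as follows; the gain, blur sigma, and averaging weight are illustrative, and the contrast-regulated variant described in the paper, which adapts the gain locally, is not shown.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def unsharp_mask(frame, sigma=2.0, gain=1.5):
          # boost the detail that a Gaussian blur removes
          blurred = gaussian_filter(frame.astype(float), sigma)
          return np.clip(frame + gain * (frame - blurred), 0, 255).astype(np.uint8)

      class FrameAverager:
          """Exponential running average of incoming frames to suppress zero-mean noise."""
          def __init__(self, alpha=0.2):
              self.alpha, self.acc = alpha, None
          def update(self, frame):
              f = frame.astype(float)
              self.acc = f if self.acc is None else (1 - self.alpha) * self.acc + self.alpha * f
              return self.acc.astype(np.uint8)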

  8. Thermal imagery for census of ungulates

    NASA Technical Reports Server (NTRS)

    Wride, M. C.; Baker, K.

    1977-01-01

    A Daedalus thermal linescanner mounted in a light single-engine aircraft was used to image the entire 270 square kilometers within the fenced perimeter of Elk Island Park, Alberta, Canada. The data were collected during winter 1976 in morning and midday (overcast conditions), then processed and analyzed to obtain a number for total ungulates. Five different ungulate species were present during the survey. Ungulates were easily observed during the analysis of linescanner imagery and the total number of ungulates was established at 2175, compared to figures of 1010 and 1231 for visual method aerial survey results of the same area that year. It was concluded that the scanner was much more accurate and precise for census of ungulates than visual techniques.

  9. 2. AERIAL VIEW OF MINUTEMAN SILOS. Low oblique aerial view ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. AERIAL VIEW OF MINUTEMAN SILOS. Low oblique aerial view (original in color) of the two launch silos, covered. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Missile Silo Type, Test Area 1-100, northeast end of Test Area 1-100 Road, Boron, Kern County, CA

  10. Multiscale assessment of green leaf area in a semi-arid rangeland with a small unmanned aerial vehicle

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Spatial variability in green leaf cover of a western rangeland was studied by comparing field measurements on 50 m crossed transects to aerial and satellite imagery. The normalized difference vegetation index was calculated for multiple 2 cm resolution images collected over the field transects with ...

  11. Thermal Imaging Using Small-Aerial Platforms for Assessment of Crop Water Stress in Humid Subtropical Climates

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Leaf- or canopy-to-air temperature difference (hereafter called CATD) can provide information on crop energy status. Thermal imagery from agricultural aircraft or Unmanned Aerial Vehicles (UAVs) has the potential of providing thermal data for calculation of CATD and visual snapshots that can guide ...

  12. Improved reduced-resolution satellite imagery

    NASA Technical Reports Server (NTRS)

    Ellison, James; Milstein, Jaime

    1995-01-01

    The resolution of satellite imagery is often traded-off to satisfy transmission time and bandwidth, memory, and display limitations. Although there are many ways to achieve the same reduction in resolution, algorithms vary in their ability to preserve the visual quality of the original imagery. These issues are investigated in the context of the Landsat browse system, which permits the user to preview a reduced resolution version of a Landsat image. Wavelets-based techniques for resolution reduction are proposed as alternatives to subsampling used in the current system. Experts judged imagery generated by the wavelets-based methods visually superior, confirming initial quantitative results. In particular, compared to subsampling, the wavelets-based techniques were much less likely to obscure roads, transmission lines, and other linear features present in the original image, introduce artifacts and noise, and otherwise reduce the usefulness of the image. The wavelets-based techniques afford multiple levels of resolution reduction and computational speed. This study is applicable to a wide range of reduced resolution applications in satellite imaging systems, including low resolution display, spaceborne browse, emergency image transmission, and real-time video downlinking.
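
    One wavelet-based alternative to subsampling can be sketched by keeping only the approximation coefficients of a 2D discrete wavelet transform, which form a low-pass-filtered, half-resolution image; the Haar wavelet and the /2 normalization are illustrative choices, not necessarily those used in the study.

      import numpy as np
      import pywt

      def reduce_once(image_u8):
          # approximation band of a single-level 2D DWT (roughly half resolution)
          cA, _details = pywt.dwt2(image_u8.astype(float), 'haar')
          return np.clip(cA / 2.0, 0, 255).astype(np.uint8)

      def naive_subsample(image_u8):
          # baseline for comparison: keep every second pixel with no filtering
          return image_u8[::2, ::2]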

  13. Pixel-wise Motion Detection in Persistent Aerial Video Surveillance

    SciTech Connect

    Vesom, G

    2012-03-23

    In ground stabilized WAMI, stable objects with depth appear to have precessive motion due to sensor movement, alongside objects undergoing true, independent motion in the scene. The computational objective is to disambiguate independent and structural motion in WAMI efficiently and robustly.

  14. Bay-scale assessment of eelgrass beds using sidescan and video

    NASA Astrophysics Data System (ADS)

    Vandermeulen, Herb

    2014-12-01

    The assessment of the status of eelgrass ( Zostera marina) beds at the bay-scale in turbid, shallow estuaries is problematic. The bay-scale assessment (i.e., tens of km) of eelgrass beds usually involves remote sensing methods such as aerial photography or satellite imagery. These methods can fail if the water column is turbid, as is the case for many shallow estuaries on Canada's eastern seaboard. A novel towfish package was developed for the bay-scale assessment of eelgrass beds irrespective of water column turbidity. The towfish consisted of an underwater video camera with scaling lasers, sidescan sonar and a transponder-based positioning system. The towfish was deployed along predetermined transects in three northern New Brunswick estuaries. Maps were created of eelgrass cover and health (epiphyte load) and ancillary bottom features such as benthic algal growth, bacterial mats ( Beggiatoa) and oysters. All three estuaries had accumulations of material reminiscent of the oomycete Leptomitus, although it was not positively identified in our study. Tabusintac held the most extensive eelgrass beds of the best health. Cocagne had the lowest scores for eelgrass health, while Bouctouche was slightly better. The towfish method proved to be cost effective and useful for the bay-scale assessment of eelgrass beds to sub-meter precision in real time.

  15. Measuring creative imagery abilities

    PubMed Central

    Jankowska, Dorota M.; Karwowski, Maciej

    2015-01-01

    Over the decades, creativity and imagination research developed in parallel, but they surprisingly rarely intersected. This paper introduces a new theoretical model of creative visual imagination, which bridges creativity and imagination research, as well as presents a new psychometric instrument, called the Test of Creative Imagery Abilities (TCIA), developed to measure creative imagery abilities understood in accordance with this model. Creative imagination is understood as constituted by three interrelated components: vividness (the ability to create images characterized by a high level of complexity and detail), originality (the ability to produce unique imagery), and transformativeness (the ability to control imagery). TCIA enables valid and reliable measurement of these three groups of abilities, yielding the general score of imagery abilities and at the same time making profile analysis possible. We present the results of nine studies on a total sample of more than 1700 participants, showing the factor structure of TCIA using confirmatory factor analysis, as well as provide data confirming this instrument's validity and reliability. The availability of TCIA for interested researchers may result in new insights and possibilities of integrating the fields of creativity and imagination science. PMID:26539140

  16. An application of backprojection for video SAR image formation exploiting a subaperture circular shift register

    NASA Astrophysics Data System (ADS)

    Miller, J.; Bishop, E.; Doerry, A.

    2013-05-01

    This paper details a Video SAR (Synthetic Aperture Radar) mode that provides a persistent view of a scene centered at the Motion Compensation Point (MCP). The radar platform follows a circular flight path. An objective is to form a sequence of SAR images while observing dynamic scene changes at a selectable video frame rate. A formulation of backprojection meets this objective. Modified backprojection equations take into account changes in the grazing angle or squint angle that result from non-ideal flight paths. The algorithm forms a new video frame relying upon much of the signal processing performed in prior frames. The method described applies an appropriate azimuth window to each video frame for window sidelobe rejection. A Cardinal Direction Up (CDU) coordinate frame forms images with the top of the image oriented along a given cardinal direction for all video frames. Using this coordinate frame helps characterize a moving target's target response. Generation of synthetic targets with linear motion including both constant velocity and constant acceleration is described. The synthetic target video imagery demonstrates dynamic SAR imagery with expected moving target responses. The paper presents 2011 flight data collected by General Atomics Aeronautical Systems, Inc. (GA-ASI) implementing the video SAR mode. The flight data demonstrates good video quality showing moving vehicles. The flight imagery demonstrates the real-time capability of the video SAR mode. The video SAR mode uses a circular shift register of subapertures. The radar employs a Graphics Processing Unit (GPU) in order to implement this algorithm.
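
    The circular shift register of subapertures can be sketched as a buffer of per-subaperture backprojection images: each video frame is the sum of the K most recent subaperture images, so forming a new frame only requires subtracting the oldest contribution and adding the newest. The single-subaperture backprojection itself is abstracted away here as a hypothetical input.

      import numpy as np

      class VideoSARFramer:
          def __init__(self, n_subapertures, image_shape):
              self.buf = np.zeros((n_subapertures, *image_shape), dtype=complex)
              self.frame = np.zeros(image_shape, dtype=complex)
              self.idx = 0

          def add_subaperture(self, subap_image):
              self.frame -= self.buf[self.idx]      # drop the oldest subaperture image
              self.buf[self.idx] = subap_image
              self.frame += subap_image             # add the newest one
              self.idx = (self.idx + 1) % len(self.buf)
              return np.abs(self.frame)             # magnitude image for the video frame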

  17. A temporal and ecological analysis of the Huntington Beach Wetlands through an unmanned aerial system remote sensing perspective

    NASA Astrophysics Data System (ADS)

    Rafiq, Talha

    Wetland monitoring and preservation efforts have the potential to be enhanced with advanced remote sensing acquisition and digital image analysis approaches. Progress in the development and utilization of Unmanned Aerial Systems (UAS) and Unmanned Aerial Vehicles (UAV) as remote sensing platforms has offered significant spatial and temporal advantages over traditional aerial and orbital remote sensing platforms. Photogrammetric approaches for generating high-spatial-resolution orthophotos from UAV-acquired imagery are explored, along with the UAV's low-cost and temporally flexible characteristics. A comparative analysis of different spectral-based land cover maps derived from imagery captured using UAV, satellite, and airplane platforms provides an assessment of the Huntington Beach Wetlands. This research presents a UAS remote sensing methodology encompassing data collection, image processing, and analysis in constructing spectral-based land cover maps to augment the efforts of the Huntington Beach Wetlands Conservancy by assessing ecological and temporal changes at the Huntington Beach Wetlands.

  18. Phenomenology of passive multi-band submillimeter-wave imagery

    NASA Astrophysics Data System (ADS)

    Enestam, Sissi; Kajatkari, Perttu; Kivimäki, Olli; Leivo, Mikko M.; Rautiainen, Anssi; Tamminen, Aleksi A.; Luukanen, Arttu R.

    2016-05-01

    In 2015, Asqella Oy commercialized a passive multi-band submillimeter-wave camera system intended for use in walk-by personnel security screening applications. In this paper we study the imagery acquired with the prototype of the ARGON passive multi-band submm-wave video camera. To challenge the system and test its limits, imagery has been obtained in various environments with varying background surface temperatures, with people of different body types, with different clothing materials and numbers of layers of clothing and with objects of different materials. In addition to the phenomenological study, we discuss the detection statistics of the system, evaluated by running blind trials with human operators. While significant improvements have been made, particularly on the software side, since testing began, the obtained imagery enables a comprehensive evaluation of the capabilities and challenges of the multiband submillimeter-wave imaging system.

  19. The live service of video geo-information

    NASA Astrophysics Data System (ADS)

    Xue, Wu; Zhang, Yongsheng; Yu, Ying; Zhao, Ling

    2016-03-01

    In disaster rescue, emergency response and other time-critical situations, traditional aerial photogrammetry struggles to meet real-time monitoring and dynamic tracking demands. To achieve the live service of video geo-information, a system consisting of an unmanned helicopter equipped with a video sensor, POS, and high-band radio was designed and realized. This paper briefly introduces the concept and design of the system, outlines the workflow of the video geo-information live service, and presents related experiments and some resulting products. Finally, conclusions and an outlook are given.

  20. Imagery analysis and the need for standards

    NASA Astrophysics Data System (ADS)

    Grant, Barbara G.

    2014-09-01

    While efforts within the optics community focus on the development of high-quality systems and data products, comparatively little attention is paid to their use. Our standards for verification and validation are high; but in some user domains, standards are either lax or do not exist at all. In forensic imagery analysis, for example, standards exist to judge image quality, but do not exist to judge the quality of an analysis. In litigation, a high-quality analysis is by default the one performed by the victorious attorney's expert. This paper argues for the need to extend quality standards into the domain of imagery analysis, which is expected to increase in national visibility and significance with the increasing deployment of unmanned aerial vehicle—UAV, or "drone"—sensors in the continental U.S. It argues that like a good radiometric calibration, made as independent of the calibrated instrument as possible, a good analysis should be subject to standards, the most basic of which is the separation of issues of scientific fact from analysis results.

  1. Mapping and Characterizing Selected Canopy Tree Species at the Angkor World Heritage Site in Cambodia Using Aerial Data

    PubMed Central

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia’s tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman’s rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables. PMID:25902148

  2. Mapping and characterizing selected canopy tree species at the Angkor World Heritage site in Cambodia using aerial data.

    PubMed

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia's tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman's rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables.
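
    A maximum likelihood classification of the kind applied to the aerial imagery above can be sketched as a per-class Gaussian model with equal priors; the band values, class labels, and training-sample handling below are illustrative assumptions rather than the study's actual workflow.

        import numpy as np

        def train_ml_classifier(samples_by_class):
            # samples_by_class: {label: array of training pixels, n_pixels x n_bands}
            stats = {}
            for label, X in samples_by_class.items():
                mu = X.mean(axis=0)
                cov = np.cov(X, rowvar=False)
                stats[label] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
            return stats

        def classify_ml(pixels, stats):
            # pixels: n x n_bands array; returns, for each pixel, the label with the
            # highest Gaussian log-likelihood (equal class priors assumed).
            labels = list(stats)
            scores = np.empty((pixels.shape[0], len(labels)))
            for j, label in enumerate(labels):
                mu, cov_inv, logdet = stats[label]
                d = pixels - mu
                scores[:, j] = -0.5 * (logdet + np.einsum('ij,jk,ik->i', d, cov_inv, d))
            return np.array(labels)[scores.argmax(axis=1)]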

  3. Gypsy moth defoliation assessment: Forest defoliation is detectable from satellite imagery. [New England, New York, Pennsylvania, and New Jersey]

    NASA Technical Reports Server (NTRS)

    Moore, H. J. (Principal Investigator); Rohde, W. G.

    1975-01-01

    The author has identified the following significant results. ERTS-1 imagery obtained over eastern Pennsylvania during July 1973 indicates that forest defoliation is detectable from satellite imagery and correlates well with aerial visual survey data. It now appears that two damage classes (heavy and moderate-light) and areas of no visible defoliation can be detected and mapped from properly prepared false-color composite imagery. In areas where maple is the dominant species or in areas of small woodlots interspersed with agricultural areas, detection and subsequent mapping are more difficult.

  4. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color-balancing approach, which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap V3 is the first fully integrated and interactive solution for making the best use of UltraCam images to deliver DSM and ortho imagery.

  5. U. S. Department of Energy Aerial Measuring Systems

    SciTech Connect

    J. J. Lease

    1998-10-01

    The Aerial Measuring Systems (AMS) is an aerial surveillance system. This system consists of remote sensing equipment including radiation detectors; multispectral, thermal, radar, and laser scanners; precision cameras; and electronic imaging and still video systems. This equipment, in varying combinations, is mounted in an airplane or helicopter and flown at different heights in specific patterns to gather various types of data. This system is a key element in the US Department of Energy's (DOE) national emergency response assets. The mission of the AMS program is twofold--first, to respond to emergencies involving radioactive materials by conducting aerial surveys to rapidly track and map the contamination that may exist over a large ground area, and second, to conduct routinely scheduled aerial surveys for environmental monitoring and compliance purposes through the use of credible science and technology. The AMS program evolved from an early program, begun by a predecessor to the DOE--the Atomic Energy Commission--to map the radiation that may have existed within and around the terrestrial environments of DOE facilities, which produced, used, or stored radioactive materials.

  6. Obtaining biophysical measurements of woody vegetation from high resolution digital aerial photography in tropical and arid environments: Northern Territory, Australia

    NASA Astrophysics Data System (ADS)

    Staben, G. W.; Lucieer, A.; Evans, K. G.; Scarth, P.; Cook, G. D.

    2016-10-01

    Biophysical parameters obtained from woody vegetation are commonly measured using field-based techniques which require significant investment in resources. Quantitative measurements of woody vegetation provide important information for ecological studies investigating landscape change. The fine spatial resolution of aerial photography enables identification of features such as trees and shrubs. Improvements in spatial and spectral resolution of digital aerial photographic sensors have increased the possibility of using these data in quantitative remote sensing. Obtaining biophysical measurements from aerial photography has the potential to enable it to be used as a surrogate for the collection of field data. In this study, quantitative measurements obtained from digital aerial photography captured at ground sampling distances (GSD) of 15 cm (n = 50) and 30 cm (n = 52) were compared to woody biophysical parameters measured from 1 ha field plots. Supervised classification of the aerial photography using object-based image analysis was used to quantify woody and non-woody vegetation components in the imagery. There was a high correlation (r ≥ 0.92) between all field-measured woody canopy parameters and aerial-derived green woody cover measurements; however, only foliage projective cover (FPC) was found to be statistically significant (paired t-test; α = 0.01). There was no significant difference between measurements derived from imagery captured at either GSD of 15 cm or 30 cm over the same field site (n = 20). Live stand basal area (SBA) (m2 ha-1) was predicted from the aerial photographs by applying an allometric equation developed between field-measured live SBA and woody FPC. The results show that there was very little difference between live SBA predicted from FPC measured in the field or from aerial photography. The results of this study show that accurate woody biophysical parameters can be obtained from aerial photography from a range of woody vegetation
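
    A hedged sketch of the two quantitative steps mentioned above, the paired t-test between field- and aerial-derived FPC and the allometric prediction of live SBA from FPC, might look as follows; the log-log functional form and all variable names are assumptions, since the paper's actual equation is not given here.

        import numpy as np
        from scipy import stats

        def compare_and_predict(field_fpc, aerial_fpc, field_sba):
            # field_fpc, aerial_fpc: FPC (%) per 1-ha plot; field_sba: live stand
            # basal area (m^2/ha) measured in the field.
            field_fpc = np.asarray(field_fpc, dtype=float)
            aerial_fpc = np.asarray(aerial_fpc, dtype=float)
            field_sba = np.asarray(field_sba, dtype=float)

            # Paired t-test between field- and aerial-derived FPC (alpha = 0.01 in the study).
            t_stat, p_value = stats.ttest_rel(field_fpc, aerial_fpc)

            # Illustrative allometric model: log-log linear fit of SBA against FPC
            # (the functional form used in the paper is an assumption here).
            slope, intercept = np.polyfit(np.log(field_fpc), np.log(field_sba), 1)
            predicted_sba = np.exp(intercept) * aerial_fpc ** slope
            return t_stat, p_value, predicted_sba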

  7. The Imagery-Creativity Connection.

    ERIC Educational Resources Information Center

    Daniels-McGhee, Susan; Davis, Gary A.

    1994-01-01

    This paper reviews historical highlights of the imagery-creativity connection, including early and contemporary accounts, along with notable examples of imagery in the creative process. It also looks at cross-modal imagery (synesthesia), a model of image-based creativity and the creative process, and implications for strengthening creativity by…

  8. Imagery Production Specialist (AFSC 23350).

    ERIC Educational Resources Information Center

    Air Univ., Gunter AFS, Ala. Extension Course Inst.

    This course of study is designed to lead the student to full qualification as an Air Force imagery production specialist. The complete course consists of six volumes: general subjects in imagery production (39 hours), photographic fundamentals (57 hours), continuous imagery production (54 hours), chemical analysis and process control (volumes A…

  9. Low-altitude aerial color digital photographic survey of the San Andreas Fault

    USGS Publications Warehouse

    Lynch, David K.; Hudnut, Kenneth W.; Dearborn, David S.P.

    2010-01-01

    Ever since 1858, when Gaspard-Félix Tournachon (pen name Félix Nadar) took the first aerial photograph (Professional Aerial Photographers Association 2009), the scientific value and popular appeal of such pictures have been widely recognized. Indeed, Nadar patented the idea of using aerial photographs in mapmaking and surveying. Since then, aerial imagery has flourished, eventually making the leap to space and to wavelengths outside the visible range. Yet until recently, the availability of such surveys has been limited to technical organizations with significant resources. Geolocation required extensive time and equipment, and distribution was costly and slow. While these situations still plague older surveys, modern digital photography and lidar systems acquire well-calibrated and easily shared imagery, although expensive, platform-specific software is sometimes still needed to manage and analyze the data. With current consumer-level electronics (cameras and computers) and broadband internet access, acquisition and distribution of large imaging data sets are now possible for virtually anyone. In this paper we demonstrate a simple, low-cost means of obtaining useful aerial imagery by reporting two new, high-resolution, low-cost, color digital photographic surveys of selected portions of the San Andreas fault in California. All pictures are in standard jpeg format. The first set of imagery covers a 92-km-long section of the fault in Kern and San Luis Obispo counties and includes the entire Carrizo Plain. The second covers the region from Lake of the Woods to Cajon Pass in Kern, Los Angeles, and San Bernardino counties (151 km) and includes Lone Pine Canyon soon after the ground was largely denuded by the Sheep Fire of October 2009. The first survey produced a total of 1,454 oblique digital photographs (4,288 x 2,848 pixels, average 6 Mb each) and the second produced 3,762 nadir images from an elevation of approximately 150 m above ground level (AGL) on the

  10. Aerial surveys adjusted by ground surveys to estimate area occupied by black-tailed prairie dog colonies

    USGS Publications Warehouse

    Sidle, John G.; Augustine, David J.; Johnson, Douglas H.; Miller, Sterling D.; Cully, Jack F.; Reading, Richard P.

    2012-01-01

    Aerial surveys using line-intercept methods are one approach to estimate the extent of prairie dog colonies in a large geographic area. Although black-tailed prairie dogs (Cynomys ludovicianus) construct conspicuous mounds at burrow openings, aerial observers have difficulty discriminating between areas with burrows occupied by prairie dogs (colonies) versus areas of uninhabited burrows (uninhabited colony sites). Consequently, aerial line-intercept surveys may overestimate prairie dog colony extent unless adjusted by an on-the-ground inspection of a sample of intercepts. We compared aerial line-intercept surveys conducted over 2 National Grasslands in Colorado, USA, with independent ground-mapping of known black-tailed prairie dog colonies. Aerial line-intercepts adjusted by ground surveys using a single activity category adjustment overestimated colonies by ≥94% on the Comanche National Grassland and ≥58% on the Pawnee National Grassland. We present a ground-survey technique that involves 1) visiting on the ground a subset of aerial intercepts classified as occupied colonies plus a subset of intercepts classified as uninhabited colony sites, and 2) based on these ground observations, recording the proportion of each aerial intercept that intersects a colony and the proportion that intersects an uninhabited colony site. Where line-intercept techniques are applied to aerial surveys or remotely sensed imagery, this method can provide more accurate estimates of black-tailed prairie dog abundance and trends.
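
    The ground-survey adjustment described above can be sketched as weighting the aerially classified intercept lengths by the ground-observed proportions that actually crossed active colonies; the variable names and the simple weighted-sum form below are illustrative assumptions, not the authors' exact estimator.

        import numpy as np

        def adjusted_colony_length(occupied_intercepts_km, uninhabited_intercepts_km,
                                   ground_frac_active_occupied, ground_frac_active_uninhabited):
            # The two "ground_frac" arrays hold, for the ground-visited subsets of
            # intercepts, the proportion of each intercept's length that actually
            # crossed an active colony.
            p_occ = np.mean(ground_frac_active_occupied)
            p_unocc = np.mean(ground_frac_active_uninhabited)
            return occupied_intercepts_km * p_occ + uninhabited_intercepts_km * p_unocc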

  11. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illumination can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in these areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because not all of the information is present in the original data. Active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex cameras (DSLR), is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and successfully applies to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will expand the performance and
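
    One way to illustrate the multi-exposure fusion step described above (not the system's proprietary algorithm) is Mertens exposure fusion as implemented in OpenCV, which combines a bracketed set of frames of the same scene so that detail survives in both shadowed and washed-out regions; the file handling and names are assumptions.

        import cv2
        import numpy as np

        def fuse_exposures(image_paths):
            # Mertens exposure fusion over a bracketed set of frames of the same scene.
            images = [cv2.imread(p) for p in image_paths]
            fused = cv2.createMergeMertens().process(images)   # float output, roughly [0, 1]
            return np.clip(fused * 255, 0, 255).astype(np.uint8)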

  12. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of view of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
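
    A pinhole-camera sketch of how a target's 3D coordinates might be recovered from its pixel coordinates and apparent size is given below; it is not the calibrated algorithm used by the prototype, and the intrinsic parameters and target geometry are assumptions.

        import numpy as np

        def target_position_camera_frame(u, v, pixel_diameter, target_diameter_m,
                                         fx, fy, cx, cy):
            # fx, fy, cx, cy: camera intrinsics from a prior calibration (pixels).
            # Depth is estimated from the target's known physical size and its
            # apparent size in the image, then back-projected through the pinhole model.
            z = fx * target_diameter_m / pixel_diameter
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            return np.array([x, y, z])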

  13. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  14. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  15. Aerospace video imaging systems for rangeland management

    NASA Technical Reports Server (NTRS)

    Everitt, J. H.; Escobar, D. E.; Richardson, A. J.; Lulla, K.

    1990-01-01

    This paper presents an overview on the application of airborne video imagery (VI) for assessment of rangeland resources. Multispectral black-and-white video with visible/NIR sensitivity; color-IR, normal color, and black-and-white MIR; and thermal IR video have been used to detect or distinguish among many rangeland and other natural resource variables such as heavy grazing, drought-stressed grass, phytomass levels, burned areas, soil salinity, plant communities and species, and gopher and ant mounds. The digitization and computer processing of VI have also been demonstrated. VI does not have the detailed resolution of film, but these results have shown that it has considerable potential as an applied remote sensing tool for rangeland management. In the future, spaceborne VI may provide additional data for monitoring and management of rangelands.

  16. Photocopy of aerial photograph, Pacific Air Industries, Flight 123V, June ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of aerial photograph, Pacific Air Industries, Flight 123V, June 29, 1960 (University of California, Santa Barbara, Map and Imagery Collection) PORTION OF IRVINE RANCH SHOWING SITE CA-2275-A IN LOWER LEFT QUADRANT AND SITE CA-2275-B IN UPPER RIGHT QUADRANT (see separate photograph index for 2275-B) - Irvine Ranch Agricultural Headquarters, Carillo Tenant House, Southwest of Intersection of San Diego & Santa Ana Freeways, Irvine, Orange County, CA

  17. Cultural Artifact Detection in Long Wave Infrared Imagery.

    SciTech Connect

    Anderson, Dylan Zachary; Craven, Julia M.; Ramon, Eric

    2017-01-01

    Detection of cultural artifacts from airborne remotely sensed data is an important task in the context of on-site inspections. Airborne artifact detection can reduce the size of the search area the ground based inspection team must visit, thereby improving the efficiency of the inspection process. This report details two algorithms for detection of cultural artifacts in aerial long wave infrared imagery. The first algorithm creates an explicit model for cultural artifacts, and finds data that fits the model. The second algorithm creates a model of the background and finds data that does not fit the model. Both algorithms are applied to orthomosaic imagery generated as part of the MSFE13 data collection campaign under the spectral technology evaluation project.
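
    As an illustration of the second, background-modelling approach (the report's own algorithms are not reproduced here), a global RX-style anomaly detector scores each pixel by its Mahalanobis distance from the scene background statistics; the array layout and band handling below are assumptions.

        import numpy as np

        def rx_anomaly_scores(image):
            # image: H x W x B array of calibrated band values (B may be 1).
            h, w, b = image.shape
            pixels = image.reshape(-1, b).astype(float)
            mu = pixels.mean(axis=0)
            cov = np.cov(pixels, rowvar=False).reshape(b, b)
            cov_inv = np.linalg.inv(cov)
            d = pixels - mu
            # Squared Mahalanobis distance from the background statistics per pixel;
            # large values flag data that does not fit the background model.
            scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
            return scores.reshape(h, w)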

  18. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and open source project OpenSfM, to assist in this process, however, it is well-known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAV) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.

  19. Classification of wetlands vegetation using small scale color infrared imagery

    NASA Technical Reports Server (NTRS)

    Williamson, F. S. L.

    1975-01-01

    A classification system for Chesapeake Bay wetlands was derived from the correlation of film density classes and actual vegetation classes. The data processing programs used were developed by the Laboratory for the Applications of Remote Sensing. These programs were tested for their value in classifying natural vegetation, using digitized data from small scale aerial photography. Existing imagery and the vegetation map of Farm Creek Marsh were used to determine the optimal number of classes, and to aid in determining if the computer maps were a believable product.

  20. Aerial thermography for energy conservation

    NASA Technical Reports Server (NTRS)

    Jack, J. R.

    1978-01-01

    Thermal infrared scanning from an aircraft is a convenient and commercially available means for determining relative rates of energy loss from building roofs. The need to conserve energy as fuel costs rise makes the mass survey capability of aerial thermography an attractive adjunct to community energy awareness programs. Background information on principles of aerial thermography is presented. Thermal infrared scanning systems, flight and environmental requirements for data acquisition, preparation of thermographs for display, major users and suppliers of thermography, and suggested specifications for obtaining aerial scanning services are reviewed.

  1. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI-based Video-Servoing concepts, PI-based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  2. Unmanned aerial systems for photogrammetry and remote sensing: A review

    NASA Astrophysics Data System (ADS)

    Colomina, I.; Molina, P.

    2014-06-01

    We discuss the evolution and state-of-the-art of the use of Unmanned Aerial Systems (UAS) in the field of Photogrammetry and Remote Sensing (PaRS). UAS, Remotely-Piloted Aerial Systems, Unmanned Aerial Vehicles or, simply, drones are a hot topic comprising a diverse array of aspects including technology, privacy rights, safety and regulations, and even war and peace. Modern photogrammetry and remote sensing identified the potential of UAS-sourced imagery more than thirty years ago. In the last five years, these two sister disciplines have developed technology and methods that challenge the current aeronautical regulatory framework and their own traditional acquisition and processing methods. Naivety and ingenuity have combined off-the-shelf, low-cost equipment with sophisticated computer vision, robotics and geomatic engineering. The results are cm-level resolution and accuracy products that can be generated even with cameras costing a few hundred euros. In this review article, following a brief historic background and regulatory status analysis, we review the recent unmanned aircraft, sensing, navigation, orientation and general data processing developments for UAS photogrammetry and remote sensing with emphasis on the nano-micro-mini UAS segment.

  3. Oblique Aerial Photography Tool for Building Inspection and Damage Assessment

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.

    2014-11-01

    Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology across a growing geospatial market, complementary to the traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views, but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection as well as disaster assessment is addressed. The main idea is to inspect a building from four cardinal directions by using monoplotting functionalities. The developed application allows users to measure building heights and distances and to digitize man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, calculating the approximate height of buildings and ground distances, and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, quality of available parameters (DEM, calibration and orientation values), user expertise and measuring capability.

  4. Aerial-Photointerpretation of landslides along the Ohio and Mississippi rivers

    USGS Publications Warehouse

    Su, W.-J.; Stohr, C.

    2000-01-01

    A landslide inventory was conducted along the Ohio and Mississippi rivers in the New Madrid Seismic Zone of southern Illinois, between the towns of Olmsted and Chester, Illinois. Aerial photography and field reconnaissance identified 221 landslides of three types: rock/debris falls, block slides, and undifferentiated rotational/translational slides. Most of the landslides are small- to medium-size, ancient rotational/translational features partially obscured by vegetation and modified by weathering. Five imagery sources were interpreted for landslides: 1:250,000-scale side-looking airborne radar (SLAR); 1:40,000-scale, 1:20,000-scale, 1:6,000-scale, black and white aerial photography; and low altitude, oblique 35-mm color photography. Landslides were identified with three levels of confidence on the basis of distinguishing characteristics and ambiguous indicators. SLAR imagery permitted identification of a 520 hectare mega-landslide which would not have been identified on medium-scale aerial photography. The leaf-off, 35-mm color, oblique photography provided the best imagery for confident interpretation of detailed features needed for smaller landslides.

  5. Reconstruction of former glacier surface topography from archive oblique aerial images

    NASA Astrophysics Data System (ADS)

    Midgley, N. G.; Tonkin, T. N.

    2017-04-01

    Archive oblique aerial imagery offers the potential to reconstruct the former geometry of valley glaciers and other landscape surfaces. Whilst the use of Structure-from-Motion (SfM) photogrammetry with multiview stereopsis (MVS) to process small-format imagery is now well established in the geosciences, the potential of the technique for extracting topographic data from archive oblique aerial imagery is unclear. Here, SfM-MVS is used to reconstruct the former topography of two high-Arctic glaciers (Midtre and Austre Lovénbreen, Svalbard, Norway) using three archive oblique aerial images obtained by the Norwegian Polar Institute in 1936. The 1936 point cloud was produced using seven LiDAR-derived ground control points located on stable surfaces in proximity to the former piedmont glacier termini. To assess accuracy, the 1936 data set was compared to a LiDAR data set using the M3C2 algorithm to calculate cloud-to-cloud differences. For stable areas (such as nonglacial surfaces), vertical differences were detected between the two point clouds (RMS M3C2 vertical difference of 8.5 m), with the outwash zones adjacent to the assessed glacier termini showing less extensive vertical discrepancies (94% of M3C2 vertical differences between ± 5 m). This
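
    A much simplified stand-in for the cloud-to-cloud comparison described above (plain nearest-neighbour vertical differencing rather than the M3C2 algorithm actually used) can convey the idea; the array layout and names are assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def vertical_differences(cloud_1936, cloud_lidar):
            # Both clouds: n x 3 arrays of (easting, northing, elevation).
            tree = cKDTree(cloud_lidar[:, :2])          # index on horizontal coordinates
            _, idx = tree.query(cloud_1936[:, :2])      # nearest LiDAR point per 1936 point
            dz = cloud_1936[:, 2] - cloud_lidar[idx, 2]
            return dz, np.sqrt(np.mean(dz ** 2))        # per-point differences and RMS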

  6. The evolution of wireless video transmission technology for surveillance missions

    NASA Astrophysics Data System (ADS)

    Durso, Christopher M.; McCulley, Eric

    2012-06-01

    Covert and overt video collection systems as well as tactical unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) can deliver real-time video intelligence directly from sensor systems to command staff, providing unprecedented situational awareness and tactical advantage. Today's tactical video communications system must be secure, compact, lightweight, and fieldable in quick reaction scenarios. Four main technology implementations can be identified with the evolutionary development of wireless video transmission systems. Analog FM led to single carrier digital modulation, which gave way to multi-carrier orthogonal modulation. Each of these systems is currently in use today. Depending on the operating environment and size, weight, and power limitations, a system designer may choose one over another to support tactical video collection missions.

  7. Enhancing voluntary imitation through attention and motor imagery.

    PubMed

    Bek, Judith; Poliakoff, Ellen; Marshall, Hannah; Trueman, Sophie; Gowen, Emma

    2016-07-01

    Action observation activates brain areas involved in performing the same action and has been shown to increase motor learning, with potential implications for neurorehabilitation. Recent work indicates that the effects of action observation on movement can be increased by motor imagery or by directing attention to observed actions. In voluntary imitation, activation of the motor system during action observation is already increased. We therefore explored whether imitation could be further enhanced by imagery or attention. Healthy participants observed and then immediately imitated videos of human hand movement sequences, while movement kinematics were recorded. Two blocks of trials were completed, and after the first block participants were instructed to imagine performing the observed movement (Imagery group, N = 18) or attend closely to the characteristics of the movement (Attention group, N = 15), or received no further instructions (Control group, N = 17). Kinematics of the imitated movements were modulated by instructions, with both Imagery and Attention groups being closer in duration, peak velocity and amplitude to the observed model compared with controls. These findings show that both attention and motor imagery can increase the accuracy of imitation and have implications for motor learning and rehabilitation. Future work is required to understand the mechanisms by which these two strategies influence imitation accuracy.

  8. Marketing through Video Presentations.

    ERIC Educational Resources Information Center

    Newhart, Donna

    1989-01-01

    Discusses the advantages of using video presentations as marketing tools. Includes information about video news releases, public service announcements, and sales/marketing presentations. Describes the three stages in creating a marketing video: preproduction planning; production; and postproduction. (JOW)

  9. Video modeling and imaging training on performance of tennis service of 9- to 12-year-old children.

    PubMed

    Atienza, F L; Balaguer, I; García-Merita, M L

    1998-10-01

    The purpose of this work is to analyze, in a pilot study, the effects of video modeling and imagery training over 24 weeks on tennis service performance. Three groups of 9- to 12-yr.-old tennis players participated: (a) a physical practice group, who received physical training; (b) a physical practice + video group, who received physical training plus video modeling mental training; and (c) a physical practice + video + imagery group, who received physical training plus video modeling and imagery mental training. The results for the intragroup pre-post-test comparisons showed that tennis performance did not significantly improve for the physical training group. The groups given mental training showed improvement from pre- to postintervention. Finally, the posttest comparison between groups indicated that there were significant differences between the group given physical training only and the groups given mental training, but that the latter two did not differ significantly from each other.

  10. Aerial surveys and tagging of free-drifting icebergs using an unmanned aerial vehicle (UAV)

    NASA Astrophysics Data System (ADS)

    McGill, P. R.; Reisenbichler, K. R.; Etchemendy, S. A.; Dawe, T. C.; Hobson, B. W.

    2011-06-01

    Ship-based observations of free-drifting icebergs are hindered by the dangers of calving ice. To improve the efficacy and safety of these studies, new unmanned aerial vehicles (UAVs) were developed and then deployed in the Southern Ocean. These inexpensive UAVs were launched and recovered from a ship by scientific personnel with a few weeks of flight training. The UAVs sent real-time video back to the ship, allowing researchers to observe conditions in regions of the icebergs not visible from the ship. In addition, the UAVs dropped newly developed global positioning system (GPS) tracking tags, permitting researchers to record the precise position of the icebergs over time. The position reports received from the tags show that the motion of free-drifting icebergs changes rapidly and is a complex combination of both translation and rotation.

  11. Optimizing view/illumination geometry for terrestrial features using Space Shuttle and aerial polarimetry

    NASA Technical Reports Server (NTRS)

    Israel, Steven A.; Holly, Mark H.; Whitehead, Victor S.

    1992-01-01

    This paper describes the relationship between polarimetric observations from orbital and aerial platforms and the determination of optimum sun-target-sensor geometry. Polarimetric observations were evaluated for feature discrimination. The Space Shuttle experiment was performed using two boresighted Hasselblad 70 mm cameras with identical settings and linear polarizing filters aligned orthogonally about the optic axis. The aerial experiment was performed using a single 35 mm Nikon FE2 and rotating the linear polarizing filter 90 deg to acquire both minimum and maximum photographs. Characteristic curves were created by covertype and waveband for both aerial and Space Shuttle imagery. Though significant differences existed between the two datasets, the observed polarimetric signatures were unique and separable.

  12. Evaluation of experimental UAV video change detection

    NASA Astrophysics Data System (ADS)

    Bartelsen, J.; Saur, G.; Teutsch, C.

    2016-10-01

    During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, i.e., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are easily overlooked when change detection is executed manually. Depending on the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Krüger [1] and Saur et al. [2], and have built upon the ideas of Saur and Bartelsen [3]. The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene and to obtain synthetic videos. Video frames, which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept, which is based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect
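
    The registration-and-differencing core described above can be sketched with standard feature matching, a RANSAC homography, and an absolute difference image; this is a hedged illustration using OpenCV, not the authors' full pipeline with its error-suppression steps, and all names are assumptions.

        import cv2
        import numpy as np

        def register_and_difference(frame_before, frame_after):
            gray_b = cv2.cvtColor(frame_before, cv2.COLOR_BGR2GRAY)
            gray_a = cv2.cvtColor(frame_after, cv2.COLOR_BGR2GRAY)
            orb = cv2.ORB_create(4000)
            kp_b, des_b = orb.detectAndCompute(gray_b, None)
            kp_a, des_a = orb.detectAndCompute(gray_a, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_b, des_a)
            src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            h, w = frame_after.shape[:2]
            warped_before = cv2.warpPerspective(frame_before, H, (w, h))
            # Pixel-wise registration followed by a simple difference image; parallax
            # and registration errors are not suppressed in this sketch.
            return cv2.absdiff(warped_before, frame_after)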

  13. Sediment Sampling in Estuarine Mudflats with an Aerial-Ground Robotic Team

    PubMed Central

    Deusdado, Pedro; Guedes, Magno; Silva, André; Marques, Francisco; Pinto, Eduardo; Rodrigues, Paulo; Lourenço, André; Mendonça, Ricardo; Santana, Pedro; Corisco, José; Almeida, Susana Marta; Portugal, Luís; Caldeira, Raquel; Barata, José; Flores, Luis

    2016-01-01

    This paper presents a robotic team suited for bottom sediment sampling and retrieval in mudflats, targeting environmental monitoring tasks. The robotic team encompasses a four-wheel-steering ground vehicle, equipped with a drilling tool designed to be able to retain wet soil, and a multi-rotor aerial vehicle for dynamic aerial imagery acquisition. On-demand aerial imagery, properly fused on an aerial mosaic, is used by remote human operators for specifying the robotic mission and supervising its execution. This is crucial for the success of an environmental monitoring study, as often it depends on human expertise to ensure the statistical significance and accuracy of the sampling procedures. Although the literature is rich on environmental monitoring sampling procedures, in mudflats, there is a gap as regards including robotic elements. This paper closes this gap by also proposing a preliminary experimental protocol tailored to exploit the capabilities offered by the robotic system. Field trials in the south bank of the river Tagus’ estuary show the ability of the robotic system to successfully extract and transport bottom sediment samples for offline analysis. The results also show the efficiency of the extraction and the benefits when compared to (conventional) human-based sampling. PMID:27618060

  14. Sediment Sampling in Estuarine Mudflats with an Aerial-Ground Robotic Team.

    PubMed

    Deusdado, Pedro; Guedes, Magno; Silva, André; Marques, Francisco; Pinto, Eduardo; Rodrigues, Paulo; Lourenço, André; Mendonça, Ricardo; Santana, Pedro; Corisco, José; Almeida, Susana Marta; Portugal, Luís; Caldeira, Raquel; Barata, José; Flores, Luis

    2016-09-09

    This paper presents a robotic team suited for bottom sediment sampling and retrieval in mudflats, targeting environmental monitoring tasks. The robotic team encompasses a four-wheel-steering ground vehicle, equipped with a drilling tool designed to be able to retain wet soil, and a multi-rotor aerial vehicle for dynamic aerial imagery acquisition. On-demand aerial imagery, properly fused on an aerial mosaic, is used by remote human operators for specifying the robotic mission and supervising its execution. This is crucial for the success of an environmental monitoring study, as often it depends on human expertise to ensure the statistical significance and accuracy of the sampling procedures. Although the literature is rich on environmental monitoring sampling procedures, in mudflats, there is a gap as regards including robotic elements. This paper closes this gap by also proposing a preliminary experimental protocol tailored to exploit the capabilities offered by the robotic system. Field trials in the south bank of the river Tagus' estuary show the ability of the robotic system to successfully extract and transport bottom sediment samples for offline analysis. The results also show the efficiency of the extraction and the benefits when compared to (conventional) human-based sampling.

  15. Aerial Refueling Clearance Process Guide

    DTIC Science & Technology

    2014-08-21

    This guidance document covers aerial refueling (AR) clearance work performed from 2008 to 2014. ATP-3.3.4.2 covers general operational procedures for AR, and national/organizational SRDs cover data and procedures specific to their AR platforms. The items for assessment consideration include the receptacle, probe/drogue, and BDA kit, and cover several areas of interface for both the tanker and the receiver.

  16. Overall evaluation of LANDSAT (ERTS) follow on imagery for cartographic application

    NASA Technical Reports Server (NTRS)

    Colvocoresses, A. P. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. LANDSAT imagery can be operationally applied to the revision of nautical charts. The imagery depicts shallow seas in a form that permits accurate planimetric image mapping of features to 20 meters of depth where the conditions of water clarity and bottom reflection are suitable. LANDSAT data also provide an excellent simulation of the earth's surface, for such applications as aeronautical charting and radar image correlation in aircraft and aircraft simulators. Radiometric enhancement, particularly edge enhancement, a technique only marginally successful with aerial photographs, has proved to be of high value when applied to LANDSAT data.

  17. The Potential of Unmanned Aerial Vehicle for Large Scale Mapping of Coastal Area

    NASA Astrophysics Data System (ADS)

    Darwin, N.; Ahmad, A.; Zainon, O.

    2014-02-01

    Many countries in the tropical region are covered with cloud for most of the time; hence, it is difficult to get clear images, especially from high resolution satellite imagery. Aerial photogrammetry can be used, but most of the time the cloud problem still exists. Today, this problem could be solved using a system known as an unmanned aerial vehicle (UAV), where the aerial images can be acquired at low altitude and the system can fly under the cloud. The UAV system could be used in various applications, including mapping of coastal areas. The UAV system is equipped with an autopilot system and an automatic method known as autonomous flying that can be utilized for data acquisition. To achieve high resolution imagery, a compact digital camera of high resolution was used to acquire the aerial images at low altitude. In this study, the UAV system was employed to acquire aerial images of a coastal simulation model at low altitude. From the aerial images, photogrammetric image processing was executed to produce photogrammetric outputs such as a digital elevation model (DEM), contour lines and an orthophoto. In this study, ground control points (GCP) and check points (CP) were established using a conventional ground surveying method (i.e., total station). The GCPs are used for exterior orientation in the photogrammetric processes and the CPs for accuracy assessment based on Root Mean Square Error (RMSE). From this study, it was found that the UAV system can be used for large scale mapping of a coastal simulation model with accuracy at the millimeter level. It is anticipated that the same system could be used for large scale mapping of real coastal areas and produce good accuracy. Finally, the UAV system has great potential to be used for various applications that require accurate results or products in limited time and with less manpower.
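
    The accuracy assessment step described above, RMSE at surveyed check points, can be sketched as follows; the array shapes and names are assumptions.

        import numpy as np

        def rmse_per_axis(checkpoints_surveyed, checkpoints_photogrammetric):
            # Both arrays: n_points x 3 (easting, northing, height), in the same datum.
            diff = np.asarray(checkpoints_photogrammetric) - np.asarray(checkpoints_surveyed)
            return np.sqrt((diff ** 2).mean(axis=0))    # RMSE in easting, northing, height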

  18. Digital Watermarking of Autonomous Vehicles Imagery and Video Communication

    DTIC Science & Technology

    2005-10-01

    The fundamental approach is based on using 2D chirps as spreading functions, followed by a chirp transform to recover the embedded watermark, with little or no perceptual impact on the cover media. Robustness to JPEG compression was evaluated across quality factors between 40 and 90; the low-frequency DCT coefficients survive as compression increases.

  19. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Efficient mapping from unmanned aerial platforms cannot rely on aerial triangulation using known ground control points. The cost and time of setting ground control, added to the need for increased overlap between flight lines, severely limits the ability of small VTOL platforms, in particular, to handle mapping-grade missions of all but the very smallest survey areas. Applanix has brought its experience in manned photogrammetry applications to this challenge, setting out the requirements for increasing the efficiency of mapping operations from small UAVs, using survey-grade GNSS-Inertial technology to accomplish direct georeferencing of the platform and/or the imaging payload. The Direct Mapping Solution for Unmanned Aerial Vehicles (DMS-UAV) is a complete and ready-to-integrate OEM solution for Direct Georeferencing (DG) on unmanned aerial platforms. Designed as a solution for systems integrators to create mapping payloads for UAVs of all types and sizes, the DMS produces directly georeferenced products for any imaging payload (visual, LiDAR, infrared, multispectral imaging, even video). Additionally, DMS addresses the airframe's requirements for high-accuracy position and orientation for such tasks as precision RTK landing and Precision Orientation for Air Data Systems (ADS), Guidance and Control. This paper presents results using a DMS comprised of an Applanix APX-15 UAV with a Sony a7R camera to produce highly accurate orthorectified imagery without Ground Control Points on a Microdrones md4-1000 platform conducted by Applanix and Avyon. APX-15 UAV is a single-board, small-form-factor GNSS-Inertial system designed for use on small, lightweight platforms. The Sony a7R is a prosumer digital RGB camera sensor, with a 36MP, 4.9-micron CCD producing images at 7360 columns by 4912 rows. It was configured with a 50mm AF-S Nikkor f/1.8 lens and subsequently with a 35mm Zeiss Sonnar T* FE F2.8 lens. Both the camera/lens combinations and the APX-15 were mounted to a

  20. Mapping Urban Ecosystem Services Using High Resolution Aerial Photography

    NASA Astrophysics Data System (ADS)

    Pilant, A. N.; Neale, A.; Wilhelm, D.

    2010-12-01

    Ecosystem services (ES) are the many life-sustaining benefits we receive from nature: e.g., clean air and water, food and fiber, cultural-aesthetic-recreational benefits, pollination and flood control. The ES concept is emerging as a means of integrating complex environmental and economic information to support informed environmental decision making. The US EPA is developing a web-based National Atlas of Ecosystem Services, with a component for urban ecosystems. Currently, the only wall-to-wall, national scale land cover data suitable for this analysis is the National Land Cover Data (NLCD) at 30 m spatial resolution with 5 and 10 year updates. However, aerial photography is acquired at higher spatial resolution (0.5-3 m) and more frequently (1-5 years, typically) for most urban areas. Land cover was mapped in Raleigh, NC using freely available USDA National Agricultural Imagery Program (NAIP) imagery with 1 m ground sample distance to test the suitability of aerial photography for urban ES analysis. Automated feature extraction techniques were used to extract five land cover classes, and an accuracy assessment was performed using standard techniques. Results will be presented that demonstrate applications to mapping ES in urban environments: greenways, corridors, fragmentation, habitat, impervious surfaces, dark and light pavement (urban heat island). At this scale, land cover and related ecosystem services can be examined at the 2-10 m scale; small features such as individual trees and sidewalks are visible and mappable. [Figures: automated feature extraction results mapped over a NAIP color aerial photograph; classified aerial photo of downtown Raleigh, NC (red: impervious surface, dark green: trees, light green: grass, tan: soil).]

  1. Security Engineering Project - System Aware Cyber Security for an Autonomous Surveillance System On Board an Unmanned Aerial Vehicle

    DTIC Science & Technology

    2014-01-31

    moving- map , real -time mosaicing, target tracking , and video recording functions. Report No. SERC-2014-TR-036-3... create plug-in software modules for added functionality. In addition, the PCC and ViewPoint allow users to go online and download maps and aerial...environment. .................................................... 12 Figure 4. ViewPoint user interface for streaming video created by the MetaVR scene

  2. Ikonos Imagery Product Nonuniformity Assessment

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Zanoni, Vicki; Pagnutti, Mary; Holekamp, Kara; Smith, Charles

    2002-01-01

    During the early stages of the NASA Scientific Data Purchase (SDP) program, three approximately equal vertical stripes were observable in the IKONOS imagery of highly spatially uniform sites. Although these effects appeared to be less than a few percent of the mean signal, several investigators requested new imagery. Over time, Space Imaging updated its processing to minimize these artifacts. This, however, produced differences in Space Imaging products derived from archive imagery processed at different times. Imagery processed before 2/22/01 uses one set of coefficients, while imagery processed after that date requires another set. Space Imaging produces its products from raw imagery, so changes in the ground processing over time can change the delivered digital number (DN) values, even for identical orders of a previously acquired scene. NASA Stennis initiated studies to investigate the magnitude of and changes in these artifacts over the lifetime of the system, both before and after processing updates.

  3. Using Cognitive Task Analysis and Eye Tracking to Understand Imagery Analysis

    DTIC Science & Technology

    2006-01-01

    National Geospatial- Intelligence Agency (NGA) is the national- level producer of Geospatial Intelligence , serving both policy makers and DoD elements...One core task of Geospatial Intelligence Analysts is to develop intelligence through the exploitation of imagery (including overhead, airborne, and...video sources), with geospatial data and additional intelligence sources supporting the analysis process. Currently there is a gap between the

  4. Chromotomosynthesis for high speed hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Bostick, Randall L.; Perram, Glen P.

    2012-09-01

    A rotating direct vision prism, chromotomosynthetic imaging (CTI) system operating in the visible creates hyperspectral imagery by collecting a set of 2D images, each spectrally projected at a different rotation angle of the prism. Mathematical reconstruction techniques that have been well tested in the field of medical physics are used to reconstruct the data into a 3D hyperspectral image. The instrument operates with a 100 mm focusing lens in the spectral range of 400-900 nm with a field of view of 71.6 mrad and angular resolution of 0.8-1.6 μrad. The spectral resolution is 0.6 nm at the shortest wavelengths, degrading to over 10 nm at the longest wavelengths. Measurements using a pointlike target show that performance is limited by chromatic aberration. The accuracy and utility of the instrument are assessed by comparing the CTI results to spatial data collected by a wideband imager and hyperspectral data collected using a liquid crystal tunable filter (LCTF). The wide-band spatial content of the scene reconstructed from the CTI data is of the same or better quality as a single frame collected by the undispersed imaging system when projections are taken at every 1°. Performance depends on the number of projections used, with projections every 5° producing adequate results in terms of target characterization. The data collected by the CTI system can provide spatial information of equal quality to a comparable imaging system, provide high-frame-rate slitless 1-D spectra, and generate 3-D hyperspectral imagery which can be exploited to provide the same results as a traditional multi-band spectral imaging system. While this prototype does not operate at high speeds, components exist which will allow CTI systems to generate hyperspectral video imagery at rates greater than 100 Hz. The instrument has considerable potential for characterizing bomb detonations, muzzle flashes, and other battlefield combustion events.

  5. Standardized rendering from IR surveillance motion imagery

    NASA Astrophysics Data System (ADS)

    Prokoski, F. J.

    2014-06-01

    Government agencies, including defense and law enforcement, increasingly make use of video from surveillance systems and camera phones owned by non-government entities. Making advanced and standardized motion imaging technology available to private and commercial users at cost-effective prices would benefit all parties. In particular, incorporating thermal infrared into commercial surveillance systems offers substantial benefits beyond night vision capability. Face rendering is a process to facilitate exploitation of thermal infrared surveillance imagery from the general area of a crime scene, to assist investigations with and without cooperating eyewitnesses. Face rendering automatically generates greyscale representations, similar to police artist sketches, for faces in surveillance imagery collected from locations and times proximate to a crime under investigation. Near-realtime generation of face renderings can provide law enforcement with an investigation tool to assess witness memory and credibility, and to integrate reports from multiple eyewitnesses. Renderings can be quickly disseminated through social media to warn of a person who may pose an immediate threat, and to solicit the public's help in identifying possible suspects and witnesses. Renderings are pose-standardized so as not to divulge the presence and location of eyewitnesses and surveillance cameras. Incorporation of thermal infrared imaging into commercial surveillance systems will significantly improve system performance, and reduce manual review times, at an incremental cost that will continue to decrease. Benefits to criminal justice would include improved reliability of eyewitness testimony and improved accuracy of distinguishing among minority groups in eyewitness and surveillance identifications.

  6. Dense Multiple Stereo Matching of Highly Overlapping Uav Imagery

    NASA Astrophysics Data System (ADS)

    Haala, N.; Rothermel, M.

    2012-07-01

    UAVs are becoming standard platforms for applications aiming at photogrammetric data capture. Since these systems can be built up completely at very reasonable prices, their use can be very cost effective, especially for large-scale aerial mapping of areas of limited extent. In principle, the photogrammetric evaluation of UAV-based imagery is feasible with off-the-shelf commercial software products, so standard steps like aerial triangulation, the generation of Digital Surface Models and ortho image computation can be performed effectively. However, this processing pipeline can be hindered by the limited quality of UAV data, especially if low-cost sensor components are applied. To overcome potential problems in automatic aerial triangulation (AAT), UAV imagery is frequently captured at considerable overlaps. As discussed in the paper, such highly overlapping image blocks are not only beneficial during georeferencing, but are especially advantageous for dense and accurate image-based 3D surface reconstruction.

  7. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  8. Image and video compression for HDR content

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

    High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback of HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video, and many bits can be wasted coding redundant, imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on our previous work and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge, while masking is more consistent on the darker side of the edge.
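
    The subband weighting idea can be illustrated with a toy pre-processing step in Python (assuming the PyWavelets package is available): log-encode the HDR luminance, decompose it with a 2D wavelet, scale the detail subbands by per-level gains, and reconstruct before handing the channel to the host codec. The gains below are placeholders, not the HVS-optimised weights derived in the paper, and the result stays in the log domain.

      import numpy as np
      import pywt

      def weight_subbands(hdr_luminance, weights, wavelet="bior4.4", levels=3):
          """Attenuate wavelet detail subbands of a log-encoded HDR luminance channel.
          `weights` maps decomposition level (1 = finest) to a gain <= 1."""
          log_lum = np.log2(np.maximum(hdr_luminance, 1e-6))    # perceptually flatter domain
          coeffs = pywt.wavedec2(log_lum, wavelet, level=levels)
          new_coeffs = [coeffs[0]]                              # keep the approximation band
          for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
              level = levels - i + 1                            # pywt lists coarsest details first
              w = weights.get(level, 1.0)
              new_coeffs.append((cH * w, cV * w, cD * w))
          return pywt.waverec2(new_coeffs, wavelet)

      # Example: attenuate the finest (least visible) details the most
      weighted = weight_subbands(np.random.rand(256, 256) * 1e3, {1: 0.4, 2: 0.7, 3: 0.9})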

  9. No Fuss Video

    ERIC Educational Resources Information Center

    Doyle, Al

    2006-01-01

    Ever since video became readily available with the advent of the VCR, educators have been clamoring for easier ways to integrate the medium into the classroom. Today, thanks to broadband access and ever-expanding offerings, engaging students with high-quality video has never been easier. Video-on-demand (VOD) services provide bite-size video clips…

  10. Information fusion performance evaluation for motion imagery data using mutual information: initial study

    NASA Astrophysics Data System (ADS)

    Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik

    2015-06-01

    As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as in content-based image retrieval (CBIR). Imagery data is segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms which compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it computes detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
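
    A minimal sketch of the kinds of metrics discussed (histogram entropy, histogram-based mutual information, and SSIM via scikit-image) is given below; the random arrays stand in for a fused frame and its ground truth and are purely illustrative.

      import numpy as np
      from skimage.metrics import structural_similarity as ssim

      def entropy(img, bins=256):
          """Shannon entropy of an 8-bit intensity histogram."""
          hist, _ = np.histogram(img, bins=bins, range=(0, 255))
          p = hist[hist > 0] / hist.sum()
          return -np.sum(p * np.log2(p))

      def mutual_information(img_a, img_b, bins=64):
          """Histogram-based mutual information between two registered images."""
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

      # Hypothetical evaluation of a fused frame against its ground truth
      fused = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
      truth = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
      print(entropy(fused), mutual_information(fused, truth), ssim(truth, fused, data_range=255))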

  11. Preliminary Results from the Portable Imagery Quality Assessment Test Field (PIQuAT) of Uav Imagery for Imagery Reconnaissance Purposes

    NASA Astrophysics Data System (ADS)

    Dabrowski, R.; Orych, A.; Jenerowicz, A.; Walczykowski, P.

    2015-08-01

    The article presents a set of initial results of a quality assessment study of two different types of sensors mounted on an unmanned aerial vehicle, carried out over an especially designed and constructed test field. The PIQuAT (Portable Imagery Quality Assessment Test Field) field had been designed especially for the purpose of determining the quality parameters of UAV sensors, particularly in terms of spatial, spectral and radiometric resolution and chosen geometric aspects. The sensors used include a multispectral framing camera and a high-resolution RGB sensor. The flights were conducted from a number of altitudes ranging from 10 m to 200 m above the test field. Acquiring data at a number of different altitudes allowed the authors to evaluate the obtained results and check for possible linearity of the calculated quality assessment parameters. The radiometric properties of the sensors were evaluated from images of the grayscale target section of the PIQuAT field. The spectral resolution of the imagery was determined based on a number of test samples with known spectral reflectance curves. These reference spectral reflectance curves were then compared with spectral reflectance coefficients at the wavelengths registered by the miniMCA camera. Before conducting all of these experiments in field conditions, the interior orientation parameters were calculated for the miniMCA and RGB sensors in laboratory conditions. These parameters include the actual pixel size on the detector, distortion parameters, calibrated focal length (CFL) and the coordinates of the principal point of autocollimation (for the miniMCA, each of the six channels separately).

  12. Airborne Hyperspectral Imagery for the Detection of Agricultural Crop Stress

    NASA Technical Reports Server (NTRS)

    Cassady, Philip E.; Perry, Eileen M.; Gardner, Margaret E.; Roberts, Dar A.

    2001-01-01

    Multispectral digital imagery from aircraft or satellite is presently being used to derive basic assessments of crop health for growers and others involved in the agricultural industry. Research indicates that narrow-band stress indices derived from hyperspectral imagery should have improved sensitivity and provide more specific information on the type and cause of crop stress. Under funding from the NASA Earth Observation Commercial Applications Program (EOCAP) we are identifying and evaluating scientific and commercial applications of hyperspectral imagery for the remote characterization of agricultural crop stress. During the summer of 1999 a field experiment was conducted with varying nitrogen treatments on a production corn field in eastern Nebraska. The AVIRIS (Airborne Visible-Infrared Imaging Spectrometer) hyperspectral imager was flown on two critical dates during crop development, at two different altitudes, providing images with approximately 18 m pixels and 3 m pixels. Simultaneous supporting soil and crop characterization included spectral reflectance measurements above the canopy, biomass characterization, soil sampling, and aerial photography. In this paper we describe the experiment and results, and examine the following three issues relative to the utility of hyperspectral imagery for scientific study and commercial crop stress products: (1) Accuracy of reflectance-derived stress indices relative to conventional measures of stress. We compare reflectance-derived indices (both field radiometer and AVIRIS) with applied nitrogen and with leaf-level measurement of nitrogen availability and chlorophyll concentrations over the experimental plots (4 replications of 5 different nitrogen levels); (2) Ability of the hyperspectral sensors to detect sub-pixel areas under crop stress. We applied the stress indices to both the 3 m and 18 m AVIRIS imagery for the entire production corn field using several sub-pixel areas within the field to compare the relative
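
    As one concrete example of a narrow-band index computed from hyperspectral reflectance, the sketch below evaluates a red-edge ratio of the form (R750 - R705) / (R750 + R705) per pixel. The band centres, cube dimensions and wavelength grid are hypothetical, and this is not necessarily one of the specific indices evaluated in the study.

      import numpy as np

      def band_index(wavelengths, target_nm):
          """Index of the band whose centre wavelength is closest to target_nm."""
          return int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))

      def red_edge_index(cube, wavelengths):
          """Narrow-band red-edge index (R750 - R705) / (R750 + R705) for a
          reflectance cube shaped (rows, cols, bands)."""
          b750 = cube[..., band_index(wavelengths, 750)]
          b705 = cube[..., band_index(wavelengths, 705)]
          return (b750 - b705) / np.maximum(b750 + b705, 1e-6)

      # Hypothetical cube: 224 bands sampled every 10 nm starting at 400 nm
      wl = 400 + 10 * np.arange(224)
      cube = np.random.rand(100, 100, 224)
      stress_map = red_edge_index(cube, wl)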

  13. Non-Drug Pain Relief: Imagery

    MedlinePlus

    ... pain. Imagery does not replace pain medicine. It works with your pain medicine to help you have better pain relief. How Imagery Helps Imagery is used to help reduce stress that can cause muscle tension. It can help ...

  14. Identification of irrigated crop types from ERTS-1 density contour maps and color infrared aerial photography. [Wyoming

    NASA Technical Reports Server (NTRS)

    Marrs, R. W.; Evans, M. A.

    1974-01-01

    The author has identified the following significant results. The crop types of a Great Plains study area were mapped from color infrared aerial photography. Each field was positively identified from field checks in the area. Enlarged (50x) density contour maps were constructed from three ERTS-1 images taken in the summer of 1973. The map interpreted from the aerial photography was compared to the density contour maps, and the accuracy of the ERTS-1 density contour map interpretations was determined. Changes in the vegetation during the growing season and harvest periods were detectable on the ERTS-1 imagery. Density contouring aids in the detection of such changes.

  15. Mapping and delineating wetlands of Huntington Wildlife Forest using very high resolution digital color-infrared imagery

    NASA Astrophysics Data System (ADS)

    Yavuz, Mehmet

    The effectiveness of off-site wetland delineation methods using very high resolution digital color-infrared aerial imagery (the color-IR imagery) is compared to the traditional on-site wetland delineation method. The on-site delineation results created using the US Fish and Wildlife Service's National Wetland Inventory (NWI) map procedures are compared to the following mapping techniques: heads-up digitizing, hybrid classification, Normalized Difference Vegetation Index (NDVI) and unsupervised classification (ISODATA) using the same image source. Each of the mapping techniques was applied using the seasonal color-IR imagery. Pair-wise significance tests of the closest mean distances indicated that heads-up digitizing was significantly more accurate than the other classification techniques for the color-IR imagery. A combination of heads-up digitizing and hybrid classification showed that emergent and scrub-shrub wetlands can be delineated from the color-IR imagery without visiting the ground. Applying logarithmic and hyperbolic sine algorithms to enhance the radiometric properties of the color-IR imagery increased delineation accuracy by 98% in the spring color-IR imagery and 28% in the fall color-IR imagery. Methods for measuring the accuracy of linear features are reviewed and a new method, Points-in-Buffer Analysis (PIBA), is proposed. Keywords: wetland boundary delineation, heads-up digitizing, radiometric enhancement, wetland boundary accuracy, points-in-buffer analysis (PIBA)

  16. The use of historical imagery in the remediation of an urban hazardous waste site

    USGS Publications Warehouse

    Slonecker, E.T.

    2011-01-01

    The information derived from the interpretation of historical aerial photographs is perhaps the most basic multitemporal application of remote-sensing data. Aerial photographs dating back to the early 20th century can be extremely valuable sources of historical landscape activity. In this application, imagery from 1918 to 1927 provided a wealth of information about chemical weapons testing, storage, handling, and disposal of these hazardous materials. When analyzed by a trained photo-analyst, the 1918 aerial photographs resulted in 42 features of potential interest. When compared with current remedial activities and known areas of contamination, 33 of 42 or 78.5% of the features were spatially correlated with areas of known contamination or other remedial hazardous waste cleanup activity. © 2010 IEEE.

  18. Dynamics of aerial target pursuit

    NASA Astrophysics Data System (ADS)

    Pal, S.

    2015-12-01

    During pursuit and predation, aerial species engage in multitasking behavior that involves simultaneous target detection, tracking, decision-making, approach and capture. The mobility of the pursuer and the target in a three-dimensional environment during predation makes the capture task highly complex. Many researchers have studied and analyzed prey capture dynamics in different aerial species such as insects and bats. This article focuses on reviewing the capture strategies adopted by these species while relying on different sensory variables (vision and acoustics) for navigation. In conclusion, the neural basis of these capture strategies and some applications of these strategies in bio-inspired navigation and control of engineered systems are discussed.

  19. Imagery of pineal tumors.

    PubMed

    Deiana, G; Mottolese, C; Hermier, M; Louis-Tisserand, G; Berthezene, Y

    2015-01-01

    Pineal tumors are rare and include a large variety of entities. Germ cell tumors are relatively frequent and often secreting lesions. Pineal parenchymal tumors include pineocytomas, pineal parenchymal tumors of intermediate differentiation, pineoblastomas and papillary tumors of the pineal region. Other lesions, including astrocytomas and meningiomas, as well as congenital malformations (i.e., benign cysts, lipomas, epidermoid and dermoid cysts), can also arise from the pineal region. Imagery is often non-specific, but detailed analysis of the images, compared with the hormone profile, can narrow the spectrum of possible diagnoses.

  20. Automated Verification of Spatial Resolution in Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald

    2011-01-01

    Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions. In these systems, aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine if features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these insufficiencies by finding uniform, high-contrast edges from urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed using the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set, using edges found within each image, and in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible, within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data
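
    A stripped-down version of the edge-based estimation that such a tool automates might look like the sketch below: average an edge profile into an edge spread function (ESF), differentiate it into a line spread function (LSF), Fourier-transform the LSF into an MTF, and read a relative edge response off the ESF around the edge location. Sub-pixel edge finding and slant-edge oversampling, which a production tool would need, are omitted, and this is not the SRVT implementation.

      import numpy as np

      def edge_metrics(esf):
          """Spatial resolution estimators from a normalized 1-D edge spread function.
          `esf` is a profile across a high-contrast edge, averaged along the edge
          and scaled to the 0..1 range."""
          lsf = np.gradient(esf)                           # line spread function
          mtf = np.abs(np.fft.rfft(lsf))
          mtf /= mtf[0]                                    # normalize to 1 at zero frequency
          edge = int(np.argmax(np.abs(lsf)))               # coarse edge position (pixels)
          x = np.arange(esf.size, dtype=float)
          rer = np.interp(edge + 0.5, x, esf) - np.interp(edge - 0.5, x, esf)
          freqs = np.fft.rfftfreq(lsf.size, d=1.0)         # cycles per pixel
          return freqs, mtf, rer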

  1. Photogrammetric Measurements in Fixed Wing Uav Imagery

    NASA Astrophysics Data System (ADS)

    Gülch, E.

    2012-07-01

    Several flights have been undertaken with PAMS (Photogrammetric Aerial Mapping System) by Germap, Germany, which is briefly introduced. The system is based on the SmartPlane fixed-wing UAV and a CANON IXUS camera system. The plane is equipped with GPS and has an infrared sensor system to estimate attitude values. Software has been developed to link the PAMS output to a standard photogrammetric processing chain built on Trimble INPHO. The linking of the image files and image IDs and the handling of different cases with partly corrupted output have to be solved to generate an INPHO project file. Based on this project file, the software packages MATCH-AT, MATCH-T DSM, OrthoMaster and OrthoVista are applied for digital aerial triangulation, DTM/DSM generation and, finally, digital orthomosaic generation. The focus has been on investigating how to adapt the "usual" parameters of the digital aerial triangulation and other software to the UAV flight conditions, which show high overlaps, large kappa angles and a certain image blur in the case of turbulence. It was found that the selected parameter setup shows quite stable behaviour and can be applied to other flights. A comparison is made to results from other open-source multi-ray matching software to handle the described flight conditions. Flights over the same area at different times have been compared to each other. The major objective here was to see how far the results differ relative to each other, without access to ground control data, which would have potential for applications with low requirements on absolute accuracy. The results show that influences of weather and illumination are visible. The "unusual" flight pattern, which shows big time differences for neighbouring strips, has an influence on the AT and DTM/DSM generation. The results obtained so far indicate problems in the stability of the camera calibration. This clearly requires the use of GCPs for all

  2. Imagery Rescripting for Personality Disorders

    ERIC Educational Resources Information Center

    Arntz, Arnoud

    2011-01-01

    Imagery rescripting is a powerful technique that can be successfully applied in the treatment of personality disorders. For personality disorders, imagery rescripting is not used to address intrusive images but to change the implicational meaning of schemas and childhood experiences that underlie the patient's problems. Various mechanisms that may…

  3. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…

  4. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    PubMed Central

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-01-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95–98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management. PMID:28338047
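
    A minimal sketch of the temperature, size and shape filtering described above is given below (Python with SciPy); all thresholds are placeholders, since the published detector tuned its criteria to grey seals and the particular thermal sensor used.

      import numpy as np
      from scipy import ndimage

      def count_warm_targets(thermal_c, temp_thresh_c, min_px, max_px, max_elongation=3.0):
          """Count warm, seal-sized blobs in a calibrated thermal frame (degrees C)."""
          mask = thermal_c > temp_thresh_c                 # pixels warmer than the background
          labels, n = ndimage.label(mask)                  # connected warm regions
          count = 0
          for i in range(1, n + 1):
              rows, cols = np.where(labels == i)
              area = rows.size
              height = rows.ptp() + 1
              width = cols.ptp() + 1
              elongation = max(height, width) / max(min(height, width), 1)
              if min_px <= area <= max_px and elongation <= max_elongation:
                  count += 1
          return count

      # Hypothetical frame: 5 deg C background with sensor noise
      frame = np.random.normal(5.0, 2.0, (512, 640))
      print(count_warm_targets(frame, temp_thresh_c=15.0, min_px=20, max_px=400))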

  5. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    NASA Astrophysics Data System (ADS)

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-03-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95–98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management.

  6. Research of aerial camera focal plane micro-displacement measurement system based on Michelson interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Shu-juan; Zhao, Yu-liang; Li, Shu-jun

    2014-09-01

    Keeping the aerial camera focal plane in the correct position is critical to imaging quality. In order to measure focal plane displacement introduced during maintenance, a new micro-displacement measurement system for the aerial camera focal plane, based on a Michelson interferometer, has been designed in this paper. The system relies on the phase modulation principle and uses interference to measure focal plane micro-displacement. It takes a He-Ne laser as the light source and uses the Michelson interference mechanism to produce interference fringes; as the focal plane moves, the fringes shift periodically, and the system records this periodic change to obtain the focal plane displacement. A linear CCD and its driving system pick up the interference fringes, and a frequency conversion and differentiating system determines the direction of focal plane motion. After data collection, filtering, amplification, threshold comparison and counting, the CCD video signals of the interference fringes are sent to a computer, processed automatically, and the focal plane micro-displacement result is output. As a result, focal plane micro-displacement can be measured automatically by this system. Using a linear CCD as the fringe pick-up greatly improves counting accuracy and almost eliminates manual counting error, improving the measurement accuracy of the system. Experimental results demonstrate that the focal plane displacement measurement accuracy is 0.2 nm, while laboratory and flight tests show that the focal plane positioning is accurate and satisfies the requirements of aerial camera imaging.
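
    The displacement itself follows directly from fringe counting in a Michelson interferometer: each full fringe corresponds to an optical path change of one wavelength, i.e. half a wavelength of mirror (focal plane) travel. A trivial sketch, assuming the He-Ne wavelength of 632.8 nm and leaving direction sensing to the frequency conversion and differentiating circuitry described above:

      def focal_plane_displacement(fringe_count, wavelength_nm=632.8):
          """Mirror (focal plane) displacement in nanometres from the number of
          interference fringes counted; each fringe equals half a wavelength."""
          return fringe_count * wavelength_nm / 2.0

      print(focal_plane_displacement(10))  # 10 fringes -> 3164 nm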

  7. Validation of Land Cover Maps Utilizing Astronaut Acquired Imagery

    NASA Technical Reports Server (NTRS)

    Estes, John E.; Gebelein, Jennifer

    1999-01-01

    This report is produced in accordance with the requirements outlined in the NASA Research Grant NAG9-1032 titled "Validation of Land Cover Maps Utilizing Astronaut Acquired Imagery". This grant funds the Remote Sensing Research Unit of the University of California, Santa Barbara. This document summarizes the research progress and accomplishments to date and describes current on-going research activities. Even though this grant has technically expired, in a contractual sense, work continues on this project. Therefore, this summary will include all work done through 5 May 1999. The principal goal of this effort is to test the accuracy of a sub-regional portion of an AVHRR-based land cover product. Land cover mapped to three different classification systems, in the southwestern United States, has been subjected to two specific accuracy assessments: one utilizing astronaut-acquired photography, and a second employing Landsat Thematic Mapper imagery, augmented in some cases by high-altitude aerial photography. Validation of these three land cover products has proceeded using a stratified sampling methodology. We believe this research will provide an important initial test of the potential use of imagery acquired from the Shuttle and ultimately the International Space Station (ISS) for the operational validation of the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover products.

  8. Application of ERTS imagery in estimating the environmental impact of a freeway through the Knysna area of South Africa

    NASA Technical Reports Server (NTRS)

    Williamson, D. T.; Gilbertson, B.

    1974-01-01

    In the coastal areas north-east and south-west of Knysna, South Africa lie natural forests, lakes and lagoons highly regarded by many for their aesthetic and ecological richness. A freeway construction project has given rise to fears of the degradation or destruction of these natural features. The possibility of using ERTS imagery to estimate the environmental impact of the freeway was investigated, and it was found that: (1) All threatened features could readily be identified on the imagery. (2) It was possible within a short time to provide an area estimate of damage to indigenous forest. (3) In several important respects the imagery has advantages over maps and aerial photos for this type of work. (4) The imagery will enable monitoring of the actual environmental impact of the freeway when completed.

  9. Polarimetric imagery collection experiment

    NASA Astrophysics Data System (ADS)

    Romano, Joao M.; Felton, Melvin; Chenault, David; Sohr, Brian

    2010-04-01

    The Spectral and Polarimetric Imagery Collection Experiment (SPICE) is a collaborative effort between the US Army ARDEC and ARL that is focused on the collection of mid-wave and long-wave infrared imagery using hyperspectral, polarimetric, and broadband sensors. The objective of the program is to collect a comprehensive database of the different modalities over the course of 1 to 2 years to capture sensor performance over a wide variety of weather conditions and the diurnal and seasonal changes inherent to Picatinny's northern New Jersey location. Using the Precision Armament Laboratory (PAL) tower at Picatinny Arsenal, the sensors will autonomously collect the desired data around the clock at different ranges where surrogate 2S3 Self-Propelled Howitzer targets are positioned at different viewing perspectives in an open field. The database will allow for: 1) Understanding of signature variability under adverse weather conditions; 2) Development of robust algorithms; 3) Development of new sensors; 4) Evaluation of polarimetric technology; and 5) Evaluation of fusing the different sensor modalities. In this paper, we will present the SPICE data collection objectives, the ongoing effort, the sensors that are currently deployed, and how this work will assist researchers in the development and evaluation of sensors, algorithms, and fusion applications.

  10. Spectral imagery collection experiment

    NASA Astrophysics Data System (ADS)

    Romano, Joao M.; Rosario, Dalton; Farley, Vincent; Sohr, Brian

    2010-04-01

    The Spectral and Polarimetric Imagery Collection Experiment (SPICE) is a collaborative effort between the US Army ARDEC and ARL for the collection of mid-wave and long-wave infrared imagery using hyperspectral, polarimetric, and broadband sensors. The objective of the program is to collect a comprehensive database of the different modalities over the course of 1 to 2 years to capture sensor performance over a wide variety of adverse weather conditions and the diurnal and seasonal changes inherent to Picatinny's northern New Jersey location. Using the Precision Armament Laboratory (PAL) tower at Picatinny Arsenal, the sensors will autonomously collect the desired data around the clock at different ranges where surrogate 2S3 Self-Propelled Howitzer targets are positioned at different viewing perspectives at 549 and 1280m from the sensor location. The collected database will allow for: 1) Understanding of signature variability under the different weather conditions; 2) Development of robust algorithms; 3) Development of new sensors; 4) Evaluation of hyperspectral and polarimetric technologies; and 5) Evaluation of fusing the different sensor modalities. In this paper, we will present the SPICE data collection objectives, the ongoing effort, the sensors that are currently deployed, and how this work will assist researchers in the development and evaluation of sensors, algorithms, and fusion applications.

  11. Mapping Forest Edge Using Aerial Lidar

    NASA Astrophysics Data System (ADS)

    MacLean, M. G.

    2014-12-01

    Slightly more than 60% of Massachusetts is covered with forest and this land cover type is invaluable for the protection and maintenance of our natural resources and is a carbon sink for the state. However, Massachusetts is currently experiencing a decline in forested lands, primarily due to the expansion of human development (Thompson et al., 2011). Of particular concern is the loss of "core areas" or the areas within forests that are not influenced by other land cover types. These areas are of significant importance to native flora and fauna, since they generally are not subject to invasion by exotic species and are more resilient to the effects of climate change (Campbell et al., 2009). However, the expansion of development has reduced the amount of this core area, but the exact amount is still unknown. Current methods of estimating core area are not particularly precise, since edge, or the area of the forest that is most influenced by other land cover types, is quite variable and situation dependent. Therefore, the purpose of this study is to devise a new method for identifying areas that could qualify as "edge" within the Harvard Forest, in Petersham MA, using new remote sensing techniques. We sampled along eight transects perpendicular to the edge of an abandoned golf course within the Harvard Forest property. Vegetation inventories as well as Photosynthetically Active Radiation (PAR) at different heights within the canopy were used to determine edge depth. These measurements were then compared with small-footprint waveform aerial LiDAR datasets and imagery to model edge depths within Harvard Forest.

  12. Open Skies aerial photography of selected areas in Central America affected by Hurricane Mitch

    USGS Publications Warehouse

    Molnia, Bruce; Hallam, Cheryl A.

    1999-01-01

    Between October 27 and November 1, 1998, Central America was devastated by Hurricane Mitch. Following a humanitarian relief effort, one of the first informational needs was complete aerial photographic coverage of the storm ravaged areas so that the governments of the affected countries, the U.S. agencies planning to provide assistance, and the international relief community could come to the aid of the residents of the devastated area. Between December 4 and 19, 1998 an Open Skies aircraft conducted five successful missions and obtained more than 5,000 high-resolution aerial photographs and more than 15,000 video images. The aerial data are being used by the Reconstruction Task Force and many others who are working to begin rebuilding and to help reduce the risk of future destruction.

  13. Aerial Refueling Clearance Initiation Request

    DTIC Science & Technology

    2016-07-14

    and receiver agencies. The AR Clearance Initiation Request document recognizes the requirement for definitive aerial refueling agreements between...include directions for the development or content of these contractual agreements.

  14. Reconnaissance mapping from aerial photographs

    NASA Technical Reports Server (NTRS)

    Weeden, H. A.; Bolling, N. B. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Engineering soil and geology maps were successfully made from Pennsylvania aerial photographs taken at scales from 1:4,800 to 1:60,000. The procedure involved a detailed study of a stereoscopic model while evaluating landform, drainage, erosion, color or gray tones, tone and texture patterns, vegetation, and cultural or land use patterns.

  15. Imagery intensifier for recce

    NASA Astrophysics Data System (ADS)

    Sturz, Richard A.

    1998-11-01

    The image intensifier-based night vision goggle, which has proven so useful in low-light or night observation applications, can be mated to the typical CCD video camera for imaging under these adverse lighting conditions. Image intensifiers have specific spectral response, low light sensitivity, resolution, and electronic characteristics to augment standard CCD camera capability and thus provide video suited for reconnaissance. These devices include the Gen I, Gen II and Gen III series of image intensifiers. Recent developments have increased the variety of spectral response, quantum efficiencies and spatial resolution within the Gen II and Gen III types. The SPIE Airborne Reconnaissance session paper presented in 1995, entitled 'Advances in Low Light Level Video Imaging', described the then-available image intensifiers. This paper explores and updates the data of the 1995 paper and discusses the changes and improvements in image intensifiers since the original paper. Additional information concerning the CCD camera and image intensification for reconnaissance applications is also presented.

  16. Replacing craving imagery with alternative pleasant imagery reduces craving intensity.

    PubMed

    Knäuper, Bärbel; Pillay, Rowena; Lacaille, Julien; McCollam, Amanda; Kelso, Evan

    2011-08-01

    Laboratory studies have shown that asking people to engage in imagery reduces the intensity of laboratory-induced food cravings. This study examined whether the intensity of naturally occurring cravings can be reduced by replacing the craving-related imagery with alternative, pleasant imagery. Participants were instructed to vividly imagine engaging in their favorite activity. They had to apply this imagery technique over a period of four days whenever they felt a craving arising and were asked to keep applying this technique until the craving passed. Compared to baseline, craving intensity and vividness of craving-related imagery were both significantly reduced. Vividness of craving-related imagery fully mediated the effect of the alternative imagery on craving intensity. No effects were found for control conditions in which participants (1) just formed the goal intention to reduce their cravings, (2) formed implementation intentions to reduce their cravings, and (3) engaged in a cognitive task (reciting the alphabet backwards). The findings suggest that vividly imagining a pleasant element can be an effective technique to curb cravings in everyday life.

  17. Video Screen Capture Basics

    ERIC Educational Resources Information Center

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information of two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, are also discussed. Practical applications for video screen capture are given.

  18. Automated video and infrared tracking technology

    NASA Astrophysics Data System (ADS)

    Koligman, Michael; Dirbas, Joseph J.

    1999-10-01

    PAR Government Systems Corporation (PGSC) has recently developed a complete activity detection and tracking system using standard NTSC video or infrared (IR) camera inputs. The inputs are processed using state-of-the-art signal processing hardware and software developed specifically for real-time applications. The system automatically detects and tracks moving objects in video or infrared imagery. Algorithms to automatically detect and track moving objects were implemented and ported to a C80-based DSP board for real-time operation. The real-time embedded software performs: (1) Video/IR frame registration to compensate for sensor motion, jitter, and panning; (2) Moving target detection and track formation; (3) Symbology overlays. The hardware components are PC-based COTS, which include a high-speed DSP board for real-time video/IR data collection and processing. The system can be used for a variety of detection and tracking purposes, including border surveillance and perimeter surveillance of buildings, airports, correctional facilities, and other areas requiring detection and tracking of intruders. The system was designed, built and tested in 1998 by PAR Government Systems Corporation, La Jolla, CA. This paper addresses the algorithms (registration, tracking, outputs) as well as the hardware used to port the algorithms (C80 DSP board) for real-time processing.
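
    A simplified, software-only analogue of the registration-then-detection step (translation-only registration via phase correlation and an illustrative difference threshold; Python with scikit-image and SciPy rather than the C80 DSP implementation) could look like this:

      import numpy as np
      from scipy import ndimage
      from skimage.registration import phase_cross_correlation

      def moving_target_mask(prev_frame, curr_frame, diff_thresh=25.0):
          """Register the previous frame to the current one, then flag pixels whose
          intensity changed by more than diff_thresh as candidate moving objects."""
          prev = prev_frame.astype(np.float32)
          curr = curr_frame.astype(np.float32)
          shift, _, _ = phase_cross_correlation(curr, prev)   # shift that maps prev onto curr
          prev_aligned = ndimage.shift(prev, shift)           # compensate sensor motion/jitter
          diff = np.abs(curr - prev_aligned)
          mask = diff > diff_thresh
          labels, n_candidates = ndimage.label(mask)          # candidate moving objects
          return mask, n_candidates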

  19. Uncooled infrared development for small unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Pitt, Timothy S.; Wood, Sam B.; Waddle, Caleb E.; Edwards, William D.; Yeske, Ben S.

    2010-04-01

    The US Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC) is developing a micro-uncooled infrared (IR) capability for small unmanned aerial systems (SUAS). In 2007, AMRDEC procured several uncooled microbolometers for lab and field test evaluations, and static tower tests involving specific target sets confirmed initial modeling and simulation predictions. With these promising results, AMRDEC procured two captive flight test (CFT) vehicles and, in 2008, completed numerous captive flights to capture imagery with the micro-uncooled infrared sensors. Several test configurations were used to build a comprehensive data set. These configurations included variations in look-down angles, fields of view (FOV), environments, altitudes, and target scenarios. Data collected during these field tests is also being used by other AMRDEC personnel to develop human tracking algorithms and image stabilization software. Details of these ongoing efforts will be presented in this paper and will include: 1) onboard digital data recording capabilities; 2) analog data links for visual verification of imagery; 3) sensor packaging and design, which includes both infrared and visible cameras; 4) field test and data collection results; 5) future plans; and 6) potential applications. Finally, AMRDEC has recently acquired a 17 μm pitch detector array. The paper will include plans to test both 17 μm and 25 μm microbolometer technologies simultaneously in a side-by-side captive flight comparison.

  20. Video Event Detection Framework on Large-Scale Video Data

    ERIC Educational Resources Information Center

    Park, Dong-Jun

    2011-01-01

    Detection of events and actions in video entails substantial processing of very large, even open-ended, video streams. Video data present a unique challenge for the information retrieval community because properly representing video events is challenging. We propose a novel approach to analyze temporal aspects of video data. We consider video data…

  1. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Video description of video programming. 79.3... CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.3 Video description of video programming. (a) Definitions. For purposes of this section the following definitions shall apply:...

  2. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Video description of video programming. 79.3... ACCESSIBILITY OF VIDEO PROGRAMMING Video Programming Owners, Providers, and Distributors § 79.3 Video description of video programming. (a) Definitions. For purposes of this section the following...

  3. Proceedings of the 2004 High Spatial Resolution Commercial Imagery Workshop

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Topics covered include: NASA Applied Sciences Program; USGS Land Remote Sensing: Overview; QuickBird System Status and Product Overview; ORBIMAGE Overview; IKONOS 2004 Calibration and Validation Status; OrbView-3 Spatial Characterization; On-Orbit Modulation Transfer Function (MTF) Measurement of QuickBird; Spatial Resolution Characterization for QuickBird Image Products 2003-2004 Season; Image Quality Evaluation of QuickBird Super Resolution and Revisit of IKONOS: Civil and Commercial Application Project (CCAP); On-Orbit System MTF Measurement; QuickBird Post Launch Geopositional Characterization Update; OrbView-3 Geometric Calibration and Geopositional Accuracy; Geopositional Statistical Methods; QuickBird and OrbView-3 Geopositional Accuracy Assessment; Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images; Laboratory Measurement of Bidirectional Reflectance of Radiometric Tarps; Stennis Space Center Verification and Validation Capabilities; Joint Agency Commercial Imagery Evaluation (JACIE) Team; Adjacency Effects in High Resolution Imagery; Effect of Pulse Width vs. GSD on MTF Estimation; Camera and Sensor Calibration at the USGS; QuickBird Geometric Verification; Comparison of MODTRAN to Heritage-based Results in Vicarious Calibration at University of Arizona; Using Remotely Sensed Imagery to Determine Impervious Surface in Sioux Falls, South Dakota; Estimating Sub-Pixel Proportions of Sagebrush with a Regression Tree; How Do YOU Use the National Land Cover Dataset?; The National Map Hazards Data Distribution System; Recording a Troubled World; What Does This-Have to Do with This?; When Can a Picture Save a Thousand Homes?; InSAR Studies of Alaska Volcanoes; Earth Observing-1 (EO-1) Data Products; Improving Access to the USGS Aerial Film Collections: High Resolution Scanners; Improving Access to the USGS Aerial Film Collections: Phoenix Digitizing System Product Distribution; System and Product Characterization: Issues Approach

  4. Real-time image processing for passive mmW imagery

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.

    2015-05-01

    The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.

  5. Developing a Promotional Video

    ERIC Educational Resources Information Center

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  6. Secure video communications system

    DOEpatents

    Smith, Robert L.

    1991-01-01

    A secure video communications system having at least one command network formed by a combination of subsystems. The combination of subsystems to include a video subsystem, an audio subsystem, a communications subsystem, and a control subsystem. The video communications system to be window driven and mouse operated, and having the ability to allow for secure point-to-point real-time teleconferencing.

  7. Independent Video in Britain.

    ERIC Educational Resources Information Center

    Stewart, David

    Maintaining the status quo as well as the attitude toward cultural funding and development that it imposes on video are detrimental to the formation of a thriving video network, and also out of key with the present social and political situation in Britain. Independent video has some quite specific advantages as a medium for cultural production…

  8. Video: Modalities and Methodologies

    ERIC Educational Resources Information Center

    Hadfield, Mark; Haw, Kaye

    2012-01-01

    In this article, we set out to explore what we describe as the use of video in various modalities. For us, modality is a synthesizing construct that draws together and differentiates between the notion of "video" both as a method and as a methodology. It encompasses the use of the term video as both product and process, and as a data…

  9. Video Self-Modeling

    ERIC Educational Resources Information Center

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  10. Video Cartridges and Cassettes.

    ERIC Educational Resources Information Center

    Kletter, Richard C.; Hudson, Heather

    The economic and social significance of video cassettes (viewer-controlled playback system) is explored in this report. The potential effect of video cassettes on industrial training, education, libraries, and television is analyzed in conjunction with the anticipated hardware developments. The entire video cassette industry is reviewed firm by…

  11. Vehicle classification in WAMI imagery using deep network

    NASA Astrophysics Data System (ADS)

    Yi, Meng; Yang, Fan; Blasch, Erik; Sheaff, Carolyn; Liu, Kui; Chen, Genshe; Ling, Haibin

    2016-05-01

    Humans have always had a keen interest in understanding activities and the surrounding environment for mobility, communication, and survival. Thanks to recent progress in photography and breakthroughs in aviation, we are now able to capture tens of megapixels of ground imagery, namely Wide Area Motion Imagery (WAMI), at multiple frames per second from unmanned aerial vehicles (UAVs). WAMI serves as a great source for many applications, including security, urban planning, and route planning. These applications require fast and accurate image understanding, which is time consuming for humans due to the large data volume and city-scale area coverage. Therefore, automatic processing and understanding of WAMI imagery has been gaining attention in both industry and the research community. This paper focuses on an essential step in WAMI imagery analysis, namely vehicle classification: deciding whether a certain image patch contains a vehicle or not. We collect a set of positive and negative sample image patches for training and testing the detector. Positive samples are 64 × 64 image patches centered on annotated vehicles. We generate two sets of negative images. The first set is generated from positive images with some location shift. The second set of negative patches is generated from randomly sampled patches; we discard any such patch in which a vehicle happens to lie at the center. Both positive and negative samples are randomly divided into 9000 training images and 3000 testing images. We propose to train a deep convolutional network for classifying these patches. The classifier is based on a pre-trained AlexNet model in the Caffe library, with an adapted loss function for vehicle classification. The performance of our classifier is compared to several traditional image classifier methods using Support Vector Machine (SVM) and Histogram of Oriented Gradient (HOG) features. While the SVM+HOG method achieves an accuracy of 91.2%, the accuracy of our deep
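
    The SVM+HOG baseline mentioned above can be sketched compactly; the HOG cell and block sizes, the choice of a linear SVM, and the helper names below are illustrative assumptions rather than the authors' settings.

        # Hedged sketch of a HOG + linear SVM baseline for 64x64 vehicle patches.
        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import LinearSVC
        from sklearn.metrics import accuracy_score

        def hog_features(patches):
            # patches: iterable of 64x64 grayscale arrays
            return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                                 cells_per_block=(2, 2)) for p in patches])

        def evaluate_baseline(train_patches, train_labels, test_patches, test_labels):
            clf = LinearSVC(C=1.0)
            clf.fit(hog_features(train_patches), train_labels)
            predictions = clf.predict(hog_features(test_patches))
            return accuracy_score(test_labels, predictions)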

  12. The Imagery Exchange (TIE): Open Source Imagery Management System

    NASA Astrophysics Data System (ADS)

    Alarcon, C.; Huang, T.; Thompson, C. K.; Roberts, J. T.; Hall, J. R.; Cechini, M.; Schmaltz, J. E.; McGann, J. M.; Boller, R. A.; Murphy, K. J.; Bingham, A. W.

    2013-12-01

    NASA's Global Imagery Browse Service (GIBS) is the Earth Observation System (EOS) imagery solution for delivering global, full-resolution satellite imagery in a highly responsive manner. GIBS consists of two major subsystems, OnEarth and The Imagery Exchange (TIE). TIE is the GIBS horizontally scaled imagery workflow manager component, an Open Archival Information System (OAIS) responsible for orchestrating the acquisition, preparation, generation, and archiving of imagery to be served by OnEarth. TIE is an extension of the Data Management and Archive System (DMAS), a high-performance data management system developed at the Jet Propulsion Laboratory by leveraging open source tools and frameworks, including Groovy/Grails, Restlet, Apache ZooKeeper, Apache Solr, and other open source solutions. This presentation focuses on the application of Open Source technologies in developing a horizontally scaled data system like DMAS and TIE. As part of our commitment to contributing back to the open source community, TIE is in the process of being open sourced. This presentation will also cover our current effort to put TIE into the hands of the community from which we have benefited.

  13. Aerial Photographs and Satellite Images

    USGS Publications Warehouse

    ,

    1997-01-01

    Photographs and other images of the Earth taken from the air and from space show a great deal about the planet's landforms, vegetation, and resources. Aerial and satellite images, known as remotely sensed images, permit accurate mapping of land cover and make landscape features understandable on regional, continental, and even global scales. Transient phenomena, such as seasonal vegetation vigor and contaminant discharges, can be studied by comparing images acquired at different times. The U.S. Geological Survey (USGS), which began using aerial photographs for mapping in the 1930's, archives photographs from its mapping projects and from those of some other Federal agencies. In addition, many images from such space programs as Landsat, begun in 1972, are held by the USGS. Most satellite scenes can be obtained only in digital form for use in computer-based image processing and geographic information systems, but in some cases are also available as photographic products.

  14. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e. the observations are taken in time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking such as geometric distortions and artifacts at moving objects have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to that of single video frames and are useful for interactive image exploitation due to a larger scene coverage.
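
    The change-mask construction described above (an adaptive threshold applied to a linear combination of intensity and gradient-magnitude difference images) can be sketched as follows; the equal weights and the use of Otsu's method as the adaptive threshold are assumptions, not the authors' exact formulation.

        # Hedged sketch: combined intensity/gradient difference + adaptive threshold.
        import cv2
        import numpy as np

        def gradient_magnitude(img):
            gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
            gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
            return cv2.magnitude(gx, gy)

        def change_mask(mosaic_a, mosaic_b, w_int=0.5, w_grad=0.5):
            # mosaic_a, mosaic_b: co-registered grayscale mosaics of the same scene
            a = mosaic_a.astype(np.float32)
            b = mosaic_b.astype(np.float32)
            combined = (w_int * np.abs(a - b) +
                        w_grad * np.abs(gradient_magnitude(a) - gradient_magnitude(b)))
            combined_u8 = cv2.normalize(combined, None, 0, 255,
                                        cv2.NORM_MINMAX).astype(np.uint8)
            _, mask = cv2.threshold(combined_u8, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            return mask  # nonzero pixels mark candidate changes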

  15. Imagery: Paintings in the Mind.

    ERIC Educational Resources Information Center

    Carey, Albert R.

    1986-01-01

    Describes using the overlapping areas of relaxation, meditation, hypnosis, and imagery as a counseling technique. Explains the methods in terms of right brain functioning, a capability children use naturally. (ABB)

  16. Intelligence and imagery in personality.

    PubMed

    Tedford, W H; Penk, M L

    1977-08-01

    One hundred college undergraduates were administered the Richardson revision of the Gordon Test of Visual Imagery Control, the Betts-Sheehan Questionnaire Upon Mental Imagery, and the Shipley-Hartford Institute of Living Scale. The latter provided a conceptual quotient (CQ) score of intellectual impairment based upon a ratio between vocabulary and abstraction scores. Subjects with CQs above 100 had significantly higher control scores (p less than .02). High control subjects had significantly higher total IQ scores than did low control subjects (p less than .04). Subjects with high and medium range control had higher vocabulary scores than those with low control. This suggests that proneness toward introverted and extraverted neuroticism might be assessed from a combined imagery score and the ratio between abstraction and vocabulary scores. The connection of imagery with dimensions of IQ may be a start toward a more refined measure of this aspect of personality. Problems and implications are discussed.

  17. [Psychophysiologic research on mental imagery].

    PubMed

    Fontana, A E; Heumann, G A

    1988-06-01

    This paper studies the types of imagery likely to occur during the sleep/wake cycle in experimental subjects under partial sensory deprivation, who are presented with a sound stimulus, namely an electronically recorded heartbeat acting as a proprioceptive inductor. A polysomnographic record is made concurrently so that the timing of the imagery can be correlated with the states of consciousness likely to give rise to it. The study supports a fresh re-examination of how imagery matures and forms in the mind, a semiologic restatement of imagery types, and a better understanding of how the self operates during the sleep stage, the dream state, and the hypnagogic-hypnopompic phases. Finally, the authors stress the importance of the interpersonal relationship between the subjects and the research team, together with the professionals' frame of reference, since their focus could modify the characteristics of the sleep recordings.

  18. Photogrammetric Processing of IceBridge DMS Imagery into High-Resolution Digital Surface Models (DEM and Visible Overlay)

    NASA Astrophysics Data System (ADS)

    Arvesen, J. C.; Dotson, R. C.

    2014-12-01

    The DMS (Digital Mapping System) has been a sensor component of all DC-8 and P-3 IceBridge flights since 2009 and has acquired over 3 million JPEG images over Arctic and Antarctic land and sea ice. The DMS imagery is primarily used for identifying and locating open leads for LiDAR sea-ice freeboard measurements and documenting snow and ice surface conditions. The DMS is a COTS Canon SLR camera utilizing a 28mm focal length lens, resulting in a 10cm GSD and swath of ~400 meters from a nominal flight altitude of 500 meters. Exterior orientation is provided by an Applanix IMU/GPS which records a TTL pulse coincident with image acquisition. Notable for virtually all IceBridge flights is that parallel grids are not flown and thus there is no ability to photogrammetrically tie any imagery to adjacent flight lines. Approximately 800,000 Level-3 DMS Surface Model data products have been delivered to NSIDC, each consisting of a Digital Elevation Model (GeoTIFF DEM) and a co-registered Visible Overlay (GeoJPEG). Absolute elevation accuracy for each individual Elevation Model is adjusted to concurrent Airborne Topographic Mapper (ATM) Lidar data, resulting in higher elevation accuracy than can be achieved by photogrammetry alone. The adjustment methodology forces a zero mean difference to the corresponding ATM point cloud integrated over each DMS frame. Statistics are calculated for each DMS Elevation Model frame and show RMS differences are within +/- 10 cm with respect to the ATM point cloud. The DMS Surface Model possesses similar elevation accuracy to the ATM point cloud, but with the following advantages:
    · Higher and uniform spatial resolution: 40 cm GSD
    · 45% wider swath: 435 meters vs. 300 meters at 500 meter flight altitude
    · Visible RGB co-registered overlay at 10 cm GSD
    · Enhanced visualization through 3-dimensional virtual reality (i.e. video fly-through)
    Examples will be presented of the utility of these advantages and a novel use of a cell phone camera for
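
    The per-frame vertical adjustment described above (forcing a zero mean difference with respect to the coincident ATM point cloud) amounts to a simple elevation offset; the sampling helper and return values in this sketch are illustrative assumptions.

        # Hedged sketch: shift a photogrammetric DEM so its mean difference to ATM is zero.
        import numpy as np

        def adjust_dem_to_atm(dem, dem_sample_at, atm_points):
            """
            dem           : 2-D array of photogrammetric elevations for one DMS frame
            dem_sample_at : callable (x, y) -> DEM elevation interpolated at an ATM location
            atm_points    : iterable of (x, y, z) ATM lidar returns inside the frame
            """
            diffs = np.array([z - dem_sample_at(x, y) for (x, y, z) in atm_points])
            offset = diffs.mean()                           # mean (ATM - DEM) difference
            rms = np.sqrt(np.mean((diffs - offset) ** 2))   # residual spread after adjustment
            return dem + offset, offset, rms                # adjusted DEM has zero mean difference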

  19. Content-Aware Adaptive Compression of Satellite Imagery Using Artificial Vision

    DTIC Science & Technology

    2013-09-01

    The OASIC imagery compression algorithm aims to conserve satellite channel capacity when transmitting oceanic imagery to Earth. The report's link relation (Eq. 1.1) is C/N0 = At*Pt*(Lp*Ld)*Ar / (K*Te), and channel capacity is the rate at which bits can be propagated through the available range of frequencies.

  20. Aerial robotic data acquisition system

    SciTech Connect

    Hofstetter, K.J.; Hayes, D.W.; Pendergast, M.M.; Corban, J.E.

    1993-12-31

    A small, unmanned aerial vehicle (UAV), equipped with sensors for physical and chemical measurements of remote environments, is described. A miniature helicopter airframe is used as a platform for sensor testing and development. The sensor output is integrated with the flight control system for real-time, interactive, data acquisition and analysis. Pre-programmed flight missions will be flown with several sensors to demonstrate the cost-effective surveillance capabilities of this new technology.

  1. Telemetry of Aerial Radiological Measurements

    SciTech Connect

    H. W. Clark, Jr.

    2002-10-01

    Telemetry has been added to the National Nuclear Security Administration's (NNSA's) Aerial Measuring System (AMS) Incident Response aircraft to accelerate the availability of aerial radiological mapping data. Aerial radiological mapping is promptly performed by AMS Incident Response aircraft in the event of a major radiological dispersal. The AMS airplane flies the entire potentially affected area, plus a generous margin, to provide a quick look at the extent and severity of the event. The primary result of the AMS Incident Response overflight is a map of estimated exposure rate on the ground along the flight path. Formerly, it was necessary to wait for the airplane to land before the map could be seen. Now, while the flight is still in progress, data are relayed via satellite directly from the aircraft to an operations center, where they are displayed and disseminated. This permits more timely use of results by decision makers and redirection of the mission to optimize its value. The current telemetry capability can cover all of North America. Extension to a global capability is under consideration.

  2. Video Captions Benefit Everyone

    PubMed Central

    Gernsbacher, Morton Ann

    2016-01-01

    Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults). More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video. Captions are particularly beneficial for persons watching videos in their non-native language, for children and adults learning to read, and for persons who are D/deaf or hard of hearing. However, despite U.S. laws, which require captioning in most workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions. PMID:28066803

  3. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  4. Observation of coral reefs on Ishigaki Island, Japan, using Landsat TM images and aerial photographs

    SciTech Connect

    Matsunaga, Tsuneo; Kayanne, Hajime

    1997-06-01

    Ishigaki Island is located at the southwestern end of the Japanese Islands and is famous for its fringing coral reefs. More than twenty LANDSAT TM images spanning twelve years and aerial photographs taken in 1977 and 1994 were used to survey two shallow reefs on this island, Shiraho and Kabira. Intensive field surveys were also conducted in 1995. All satellite images of Shiraho were geometrically corrected and overlaid to construct a multi-date satellite data set. The effects of solar elevation and tide on satellite imagery were studied with this data set. The comparison of aerial and satellite images indicated that significant changes occurred between 1977 and 1984 in Kabira: rapid formation of dark patches in the western part and their decrease in the eastern part. The field surveys revealed that newly formed dark patches in the west contain young corals. These results suggest that remote sensing is useful not only for mapping but also for monitoring of shallow coral reefs.

  5. Monitoring black-tailed prairie dog colonies with high-resolution satellite imagery

    USGS Publications Warehouse

    Sidle, John G.; Johnson, D.H.; Euliss, B.R.; Tooze, M.

    2002-01-01

    The United States Fish and Wildlife Service has determined that the black-tailed prairie dog (Cynomys ludovicianus) warrants listing as a threatened species under the Endangered Species Act. Central to any conservation planning for the black-tailed prairie dog is an appropriate detection and monitoring technique. Because coarse-resolution satellite imagery is not adequate to detect black-tailed prairie dog colonies, we examined the usefulness of recently available high-resolution (1-m) satellite imagery. In 6 purchased scenes of national grasslands, we were easily able to visually detect small and large colonies without using image-processing algorithms. The Ikonos (Space Imaging(tm)) satellite imagery was as adequate as large-scale aerial photography to delineate colonies. Based on the high quality of imagery, we discuss a possible monitoring program for black-tailed prairie dog colonies throughout the Great Plains, using the species' distribution in North Dakota as an example. Monitoring plots could be established and imagery acquired periodically to track the expansion and contraction of colonies.

  6. An aerial multispectral thermographic survey of the Oak Ridge Reservation for selected areas K-25, X-10, and Y-12, Oak Ridge, Tennessee

    SciTech Connect

    Ginsberg, I.W.

    1996-10-01

    During June 5-7, 1996, the Department of Energy's Remote Sensing Laboratory performed day and night multispectral surveys of three areas at the Oak Ridge Reservation: K-25, X-10, and Y-12. Aerial imagery was collected with both a Daedalus DS1268 multispectral scanner and National Aeronautics and Space Administration's Thermal Infrared Multispectral System, which has six bands in the thermal infrared region of the spectrum. Imagery from the Thermal Infrared Multispectral System was processed to yield images of absolute terrain temperature and of the terrain's emissivities in the six spectral bands. The thermal infrared channels of the Daedalus DS1268 were radiometrically calibrated and converted to apparent temperature. A recently developed system for geometrically correcting and geographically registering scanner imagery was used with the Daedalus DS1268 multispectral scanner. The corrected and registered 12-channel imagery was orthorectified using a digital elevation model. 1 ref., 5 figs., 5 tabs.

  7. Spatial Feature Evaluation for Aerial Scene Analysis

    SciTech Connect

    Swearingen, Thomas S; Cheriyadat, Anil M

    2013-01-01

    High-resolution aerial images are becoming more readily available, which drives the demand for robust, intelligent, and efficient systems to process increasingly large amounts of image data. However, automated image interpretation still remains a challenging problem. Robust techniques to extract and represent features that uniquely characterize various aerial scene categories are key for automated image analysis. In this paper we examined the role of spatial features in uniquely characterizing various aerial scene categories. We studied low-level features such as colors, edge orientations, and textures, and examined their local spatial arrangements. We computed correlograms representing the spatial correlation of features at various distances, then measured the distance between correlograms to identify similar scenes. We evaluated the proposed technique on several aerial image databases containing challenging aerial scene categories. We report a detailed evaluation of various low-level features by quantitatively measuring accuracy and parameter sensitivity. To demonstrate the feature performance, we present a simple query-based aerial scene retrieval system.
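
    A minimal version of the correlogram comparison described above can be sketched as follows: for an image whose pixels have been quantized into low-level feature bins (colors, edge orientations, or texture labels), estimate how often a pixel's bin recurs at a given offset, and compare two scenes by the L1 distance between their correlograms. Restricting the offsets to horizontal and vertical displacements is a simplification assumed here.

        # Hedged sketch of a feature autocorrelogram and a scene-similarity distance.
        import numpy as np

        def correlogram(labels, n_bins, distances=(1, 3, 5, 7)):
            # labels: 2-D array of quantized feature bins; n_bins: number of bins
            cg = np.zeros((n_bins, len(distances)))
            for j, d in enumerate(distances):
                pairs = [(labels[:, :-d], labels[:, d:]),   # horizontal offset d
                         (labels[:-d, :], labels[d:, :])]   # vertical offset d
                for a, b in pairs:
                    same = (a == b)
                    for k in range(n_bins):
                        mask = (a == k)
                        if mask.any():
                            cg[k, j] += (same & mask).sum() / mask.sum()
                cg[:, j] /= len(pairs)                      # average over the two directions
            return cg

        def scene_distance(cg_a, cg_b):
            return np.abs(cg_a - cg_b).sum()                # smaller means more similar scenes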

  8. Quantitative analysis of drainage obtained from aerial photographs and RBV/LANDSAT images

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Formaggio, A. R.; Epiphanio, J. C. N.; Filho, M. V.

    1981-01-01

    Data obtained from aerial photographs (1:60,000) and LANDSAT return beam vidicon imagery (1:100,000) concerning drainage density, drainage texture, hydrography density, and the average length of channels were compared. Statistical analysis shows that significant differences exist in data from the two sources. The highly drained area lost more information than the less drained area. In addition, it was observed that the loss of information about the number of rivers was higher than that about the length of the channels.

  9. Wetland mapping from digitized aerial photography. [Sheboygen Marsh, Sheboygen County, Wisconsin

    NASA Technical Reports Server (NTRS)

    Scarpace, F. L.; Quirk, B. K.; Kiefer, R. W.; Wynn, S. L.

    1981-01-01

    Computer assisted interpretation of small scale aerial imagery was found to be a cost effective and accurate method of mapping complex vegetation patterns if high resolution information is desired. This type of technique is suited for problems such as monitoring changes in species composition due to environmental factors and is a feasible method of monitoring and mapping large areas of wetlands. The technique has the added advantage of being in a computer compatible form which can be transformed into any georeference system of interest.

  10. Landmarks recognition for autonomous aerial navigation by neural networks and Gabor transform

    NASA Astrophysics Data System (ADS)

    Shiguemori, Elcio Hideiti; Martins, Maurício Pozzobon; Monteiro, Marcus Vinícius T.

    2007-02-01

    Real-time template matching is a fundamental issue in many computer vision applications such as tracking, stereo vision, and autonomous navigation. The goal of this paper is to present a system for automatic landmark recognition in video frames over a georeferenced high-resolution satellite image, for autonomous aerial navigation research. The video frames were obtained from a camera fixed to a helicopter in low-level flight, simulating the vision system of an unmanned aerial vehicle (UAV). The landmark descriptors used in the recognition task were texture features extracted by a bank of Gabor wavelet filters. The recognition system consists of a supervised neural network trained to recognize the texture features of the satellite image landmarks. In the activation phase, each video frame has its texture features extracted and the neural network classifies it as one of the predefined landmarks. The video frames are also preprocessed before texture feature extraction to reduce their differences in scale and rotation from the satellite image, so the UAV altitude and heading for each frame are assumed known. The neural network approach has the advantage of low computational cost, making it appropriate for real-time applications. Promising results were obtained, mainly during flight over urban areas.
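
    The recognition pipeline described above can be outlined as a Gabor filter bank whose responses are summarized into a texture vector and fed to a small supervised network; the frequencies, orientations, and network size below are illustrative assumptions, not the authors' configuration.

        # Hedged sketch: Gabor-wavelet texture features + a small neural network classifier.
        import numpy as np
        from skimage.filters import gabor
        from sklearn.neural_network import MLPClassifier

        def gabor_features(patch, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
            feats = []
            for f in frequencies:
                for k in range(n_orientations):
                    theta = k * np.pi / n_orientations
                    real, imag = gabor(patch, frequency=f, theta=theta)
                    mag = np.hypot(real, imag)          # magnitude of the complex response
                    feats += [mag.mean(), mag.var()]
            return np.array(feats)

        def train_landmark_classifier(patches, landmark_ids):
            # patches: scale/rotation-normalized frames; landmark_ids: predefined landmark labels
            X = np.array([gabor_features(p) for p in patches])
            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
            clf.fit(X, landmark_ids)
            return clf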

  11. Writing Assignments in Disguise: Lessons Learned Using Video Projects in the Classroom

    NASA Astrophysics Data System (ADS)

    Wade, P.; Courtney, A.

    2012-12-01

    This study describes the instructional approach of using student-created video documentaries as projects in an undergraduate non-science majors' Energy Perspectives science course. Four years of teaching this course provided many reflective teaching moments from which we have enhanced our instructional approach to teaching students how to construct a quality Ken Burn's style science video. Fundamental to a good video documentary is the story told via a narrative which involves significant writing, editing and rewriting. Many students primarily associate a video documentary with visual imagery and do not realize the importance of writing in the production of the video. Required components of the student-created video include: 1) select a topic, 2) conduct research, 3) write an outline, 4) write a narrative, 5) construct a project storyboard, 6) shoot or acquire video and photos (from legal sources), 7) record the narrative, 8) construct the video documentary, 9) edit and 10) finalize the project. Two knowledge survey instruments (administered pre- and post) were used for assessment purposes. One survey focused on the skills necessary to research and produce video documentaries and the second survey assessed students' content knowledge acquired from each documentary. This talk will focus on the components necessary for video documentaries and the instructional lessons learned over the years. Additionally, results from both surveys and student reflections of the video project will be shared.

  12. Unmanned Aerial Vehicles Master Plan, 1993.

    DTIC Science & Technology

    2007-11-02

    Only scanned cover-sheet and table-of-contents fragments of this Department of Defense report are legible in the record. They identify the Unmanned Aerial Vehicles (UAV) 1993 Master Plan, a section on the Program Executive Officer for Cruise Missiles and Unmanned Aerial Vehicles, and an executive summary overview beginning: Unmanned Aerial Vehicles (UAVs) can make significant

  13. Crop identification and acreage measurement utilizing ERTS imagery

    NASA Technical Reports Server (NTRS)

    Vonsteen, D. H. (Principal Investigator)

    1972-01-01

    There are no author-identified significant results in this report. The microdensitometer will be used to analyze data acquired by ERTS-1 imagery. The classification programs and software packages have been acquired and are being prepared for use with the information as it is received. Photo and digital tapes have been acquired for coverage of virtually 100 percent of the test site areas. These areas are located in South Dakota, Idaho, Missouri, and Kansas. Hass 70mm color infrared, infrared, and black and white high altitude aerial photography of the test sites is available. Collection of ground truth for updating the data base has been completed, and a computer program has been written to count the number of fields and give total acres by size group for the segments in each test site. Results are given of data analysis performed on digitized data from densitometer measurements of fields of corn, sugar beets, and alfalfa in Kansas.

  14. Accuracy of Measurements in Oblique Aerial Images for Urban Environment

    NASA Astrophysics Data System (ADS)

    Ostrowski, W.

    2016-10-01

    Oblique aerial images have been a source of data for urban areas for several years. However, the accuracy of measurements in oblique images during this time has been limited to a single meter due to the use of direct-georeferencing technology and the underlying digital elevation model. Therefore, oblique images have been used mostly for visualization purposes. This situation changed in recent years as new methods, which allowed for a higher accuracy of exterior orientation, were developed. Current developments include the process of determining exterior orientation and the previous but still crucial process of tie point extraction. Progress in this area was shown in the ISPRS/EUROSDR Benchmark on Multi-Platform Photogrammetry and is also noticeable in the growing interest in the use of this kind of imagery. The higher level of accuracy in the orientation of oblique aerial images that has become possible in the last few years should result in a higher level of accuracy in measurements made in these types of images. The main goal of this research was to establish and empirically verify the accuracy of measurements in oblique aerial images. The research focused on photogrammetric measurements composed of many images, which use a high overlap within an oblique dataset and different view angles. During the experiments, two series of images of urban areas were used. Both were captured using five DigiCam cameras in a Maltese cross configuration. The tilt angles of the oblique cameras were 45 degrees, and the camera positions during flight were recorded with a high-grade GPS/INS navigation system. The orientation of the images was set using the Pix4D Mapper Pro software with both measurements of the in-flight camera position and the ground control points (measured with GPS RTK technology). To control the accuracy, check points were used (which were also measured with GPS RTK technology). As reference data for the whole study, an area of the city-based map was used. The archived results

  15. Enabling high-quality observations of surface imperviousness for water runoff modelling from unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Tokarczyk, Piotr; Leitao, Joao Paulo; Rieckermann, Jörg; Schindler, Konrad; Blumensaat, Frank

    2015-04-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the area. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases because, in many parts of the globe, accurate land-use information is generally lacking where detailed image data are unavailable. Modern unmanned air vehicles (UAVs) allow acquiring high-resolution images on a local level at comparably lower cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored for the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence that different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual sub-catchments regarding relevant attributes, such as peak runoff and volume. Finally, we evaluate the model

  16. Effect of instructive visual stimuli on neurofeedback training for motor imagery-based brain-computer interface.

    PubMed

    Kondo, Toshiyuki; Saeki, Midori; Hayashi, Yoshikatsu; Nakayashiki, Kosei; Takata, Yohei

    2015-10-01

    Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired by neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during the BCI neurofeedback training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCIs skills.
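
    The ERD quantity referred to above is conventionally computed from band power relative to a rest baseline; the hedged sketch below uses the common mu-band definition ERD% = (A - R) / R * 100, where R is power in a reference interval and A is power during motor imagery, with the band limits and spectral estimator as assumptions.

        # Hedged illustration of event-related desynchronization (ERD) from EEG band power.
        import numpy as np
        from scipy.signal import welch

        def band_power(eeg, fs, band=(8.0, 13.0)):
            f, psd = welch(eeg, fs=fs, nperseg=int(fs))
            sel = (f >= band[0]) & (f <= band[1])
            return np.trapz(psd[sel], f[sel])           # integrated power in the band

        def erd_percent(rest_eeg, imagery_eeg, fs, band=(8.0, 13.0)):
            r = band_power(rest_eeg, fs, band)          # reference (rest) power, R
            a = band_power(imagery_eeg, fs, band)       # power during motor imagery, A
            return (a - r) / r * 100.0                  # negative values indicate ERD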

  17. Detection of unmanned aerial vehicles using a visible camera system.

    PubMed

    Hu, Shuowen; Goldman, Geoffrey H; Borel-Donohue, Christoph C

    2017-01-20

    Unmanned aerial vehicles (UAVs) flown by adversaries are an emerging asymmetric threat to homeland security and the military. To help address this threat, we developed and tested a computationally efficient UAV detection algorithm consisting of horizon finding, motion feature extraction, blob analysis, and coherence analysis. We compare the performance of this algorithm against two variants, one using the difference image intensity as the motion features and another using higher-order moments. The proposed algorithm and its variants are tested using field test data of a group 3 UAV acquired with a panoramic video camera in the visible spectrum. The performance of the algorithms was evaluated using receiver operating characteristic curves. The results show that the proposed approach had the best performance compared to the two algorithmic variants.
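
    The motion-feature and blob-analysis stages named above can be sketched using the simplest of the compared variants, difference-image intensity; horizon finding and coherence analysis are omitted, and the thresholds are illustrative assumptions rather than the authors' values.

        # Hedged sketch: frame differencing + blob analysis for small-UAV candidates.
        import cv2
        import numpy as np

        def detect_moving_blobs(prev_gray, curr_gray, diff_thresh=25, min_area=9):
            diff = cv2.absdiff(curr_gray, prev_gray)            # motion feature
            _, motion = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            motion = cv2.morphologyEx(motion, cv2.MORPH_OPEN,
                                      np.ones((3, 3), np.uint8))  # remove isolated noise pixels
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(motion)
            # keep compact blobs above a minimum area as UAV candidates (label 0 is background)
            return [tuple(centroids[i]) for i in range(1, n)
                    if stats[i, cv2.CC_STAT_AREA] >= min_area]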

  18. Learning, attentional control and action video games

    PubMed Central

    Green, C.S.; Bavelier, D.

    2012-01-01

    While humans have an incredible capacity to acquire new skills and alter their behavior as a result of experience, enhancements in performance are typically narrowly restricted to the parameters of the training environment, with little evidence of generalization to different, even seemingly highly related, tasks. Such specificity is a major obstacle for the development of many real-world training or rehabilitation paradigms, which necessarily seek to promote more general learning. In contrast to these typical findings, research over the past decade has shown that training on ‘action video games’ produces learning that transfers well beyond the training task. This has led to substantial interest among those interested in rehabilitation, for instance, after stroke or to treat amblyopia, or training for various precision-demanding jobs, for instance, endoscopic surgery or piloting unmanned aerial drones. Although the predominant focus of the field has been on outlining the breadth of possible action-game-related enhancements, recent work has concentrated on uncovering the mechanisms that underlie these changes, an important first step towards the goal of designing and using video games for more definite purposes. Game playing may not convey an immediate advantage on new tasks (increased performance from the very first trial), but rather the true effect of action video game playing may be to enhance the ability to learn new tasks. Such a mechanism may serve as a signature of training regimens that are likely to produce transfer of learning. PMID:22440805

  19. ERTS-1 imagery use in reconnaissance prospecting: Evaluation of commercial utility of ERTS-1 imagery in structural reconnaissance for minerals and petroleum

    NASA Technical Reports Server (NTRS)

    Saunders, D. F.; Thomas, G. E. (Principal Investigator); Kinsman, F. E.; Beatty, D. F.

    1973-01-01

    The author has identified the following significant results. This study was performed to investigate applications of ERTS-1 imagery in commercial reconnaissance for mineral and hydrocarbon resources. ERTS-1 imagery collected over five areas in North America (Montana; Colorado; New Mexico-West Texas; Superior Province, Canada; and North Slope, Alaska) has been analyzed for data content including linears, lineaments, and curvilinear anomalies. Locations of these features were mapped and compared with known locations of mineral and hydrocarbon accumulations. Results were analyzed in the context of a simple-shear, block-coupling model. Data analyses have resulted in detection of new lineaments, some of which may be continental in extent, detection of many curvilinear patterns not generally seen on aerial photos, strong evidence of continental regmatic fracture patterns, and realization that geological features can be explained in terms of a simple-shear, block-coupling model. The conclusions are that ERTS-1 imagery is of great value in photogeologic/geomorphic interpretations of regional features, and the simple-shear, block-coupling model provides a means of relating data from ERTS imagery to structures that have controlled emplacement of ore deposits and hydrocarbon accumulations, thus providing a basis for a new approach to reconnaissance for mineral, uranium, gas, and oil deposits and structures.

  20. VideoANT: Extending Online Video Annotation beyond Content Delivery

    ERIC Educational Resources Information Center

    Hosack, Bradford

    2010-01-01

    This paper expands the boundaries of video annotation in education by outlining the need for extended interaction in online video use, identifying the challenges faced by existing video annotation tools, and introducing Video-ANT, a tool designed to create text-based annotations integrated within the time line of a video hosted online. Several…

  1. Pasadena, California Anaglyph with Aerial Photo Overlay

    NASA Technical Reports Server (NTRS)

    2000-01-01

    and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) and the German (DLR) and Italian (ASI) space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise,Washington, DC.

    Size: 2.2 km (1.4 miles) x 2.4 km (1.49 miles) Location: 34.16 deg. North lat., 118.16 deg. West lon. Orientation: looking straight down at land Original Data Resolution: SRTM, 30 meters; Aerial Photo, 3 meters. Date Acquired: February 16, 2000 Image: NASA/JPL/NIMA

  2. Capabilities Assessment and Employment Recommendations for Full Motion Video Optical Navigation Exploitation (FMV-ONE)

    DTIC Science & Technology

    2015-06-01

    Only fragments of this report are legible in the record: a note that models are exportable in a 3D printer-compatible format as the military increases its experimentation with, and fielding of, additive printing devices; table-of-contents entries (Vignette Revisited, Part II; 3D Model); and a description of how the Sensor, Map, and Free views overlay the video feed onto base imagery and terrain data to give a 3D

  3. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  4. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles

    PubMed Central

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation could not be solved in some scenarios, even when a GPS signal is available, for instance, an application requiring performing precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating features depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Due to the above assumption, the overall problem is simplified and it is focused on the position estimation of the aerial vehicle. Also, the tracking process of visual features is made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of this proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time. PMID:28033385

  5. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    PubMed

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation could not be solved in some scenarios, even when a GPS signal is available, for instance, an application requiring performing precision manoeuvres in a complex environment. Therefore, some additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is to design and develop a novel technique for estimating features depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. Due to the above assumption, the overall problem is simplified and it is focused on the position estimation of the aerial vehicle. Also, the tracking process of visual features is made easier due to the stabilized video. Another contribution of this work is to demonstrate that the integration of very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of this proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.

  6. Drone with thermal infrared camera provides high resolution georeferenced imagery of the Waikite geothermal area, New Zealand

    NASA Astrophysics Data System (ADS)

    Harvey, M. C.; Rowland, J. V.; Luketina, K. M.

    2016-10-01

    Drones are now routinely used for collecting aerial imagery and creating digital elevation models (DEM). Lightweight thermal sensors provide another payload option for generation of very high-resolution aerial thermal orthophotos. This technology allows for the rapid and safe survey of thermal areas, often present in inaccessible or dangerous terrain. Here we present a 2.2 km2 georeferenced, temperature-calibrated thermal orthophoto of the Waikite geothermal area, New Zealand. The image represents a mosaic of nearly 6000 thermal images captured by drone over a period of about 2 weeks. This is thought by the authors to be the first such image published of a significant geothermal area produced by a drone equipped with a thermal camera. Temperature calibration of the image allowed calculation of heat loss (43 ± 12 MW) from thermal lakes and streams in the survey area (loss from evaporation, conduction and radiation). An RGB (visible spectrum) orthomosaic photo and digital elevation model was also produced for this area, with ground resolution and horizontal position error comparable to commercially produced LiDAR and aerial imagery obtained from crewed aircraft. Our results show that thermal imagery collected by drones has the potential to become a key tool in geothermal science, including geological, geochemical and geophysical surveys, environmental baseline and monitoring studies, geotechnical studies and civil works.
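
    The heat-loss estimate quoted above combines evaporative, conductive, and radiative terms; as a hedged illustration of just the radiative component, the sketch below applies the Stefan-Boltzmann law to a calibrated surface temperature, with the emissivity, sky temperature, and single mean temperature per water body all assumed for illustration.

        # Hedged illustration: net radiative heat loss from a thermal water surface.
        SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def radiative_loss_mw(surface_temp_c, area_m2, sky_temp_c=5.0, emissivity=0.96):
            t_s = surface_temp_c + 273.15
            t_sky = sky_temp_c + 273.15
            watts = emissivity * SIGMA * (t_s ** 4 - t_sky ** 4) * area_m2
            return watts / 1.0e6                        # megawatts

        # Under these assumptions a 60 degC pool of 10,000 m^2 radiates roughly 3.4 MW;
        # evaporative losses are typically larger still at such temperatures, so the
        # total loss from a water body exceeds the radiative term alone.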

  7. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.

  8. A Spherical Aerial Terrestrial Robot

    NASA Astrophysics Data System (ADS)

    Dudley, Christopher J.

    This thesis focuses on the design of a novel, ultra-lightweight spherical aerial terrestrial robot (ATR). The ATR has the ability to fly through the air or roll on the ground, for applications that include search and rescue, mapping, surveillance, environmental sensing, and entertainment. The design centers around a micro-quadcopter encased in a lightweight spherical exoskeleton that can rotate about the quadcopter. The spherical exoskeleton offers agile ground locomotion while maintaining characteristics of a basic aerial robot in flying mode. A model of the system dynamics for both modes of locomotion is presented and utilized in simulations to generate potential trajectories for aerial and terrestrial locomotion. Details of the quadcopter and exoskeleton design and fabrication are discussed, including the robot's turning characteristic over ground and the spring-steel exoskeleton with carbon fiber axle. The capabilities of the ATR are experimentally tested and are in good agreement with model-simulated performance. An energy analysis is presented to validate the overall efficiency of the robot in both modes of locomotion. Experimentally-supported estimates show that the ATR can roll along the ground for over 12 minutes and cover the distance of 1.7 km, or it can fly for 4.82 minutes and travel 469 m, on a single 350 mAh battery. Compared to a traditional flying-only robot, the ATR traveling over the same distance in rolling mode is 2.63-times more efficient, and in flying mode the system is only 39 percent less efficient. Experimental results also demonstrate the ATR's transition from rolling to flying mode.

  9. A method for generating enhanced vision displays using OpenGL video texture

    NASA Astrophysics Data System (ADS)

    Bernier, Kenneth L.

    2010-04-01

    Degraded visual conditions can marvel the curious and destroy the unprepared. While navigation instruments are trustworthy companions, true visual reference remains king of the hills. Poor visibility may be overcome via imaging sensors such as low light level charge-coupled-device, infrared, and millimeter wave radar. Enhanced Vision systems combine this imagery into a comprehensive situation awareness display, presented to the pilot as reference imagery on a cockpit display, or as world-conformal imagery on head-up or head-mounted displays. This paper demonstrates that Enhanced Vision imaging can be achieved at video rates using typical CPU / GPU architecture, standard video capture hardware, dynamic non-linear ray tracing algorithms, efficient image transfer methods, and simple OpenGL rendering techniques.

  10. Development of an autonomous video rendezvous and docking system, phase 2

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Richardson, T. E.

    1983-01-01

    The critical elements of an autonomous video rendezvous and docking system were built and used successfully in a physical laboratory simulation. The laboratory system demonstrated that a small, inexpensive electronic package and a flight computer of modest size can analyze television images to derive guidance information for spacecraft. In the ultimate application, the system would use a docking aid consisting of three flashing lights mounted on a passive target spacecraft. Television imagery of the docking aid would be processed aboard an active chase vehicle to derive relative positions and attitudes of the two spacecraft. The demonstration system used scale models of the target spacecraft with working docking aids. A television camera mounted on a 6 degree of freedom (DOF) simulator provided imagery of the target to simulate observations from the chase vehicle. A hardware video processor extracted statistics from the imagery, from which a computer quickly computed position and attitude. Computer software known as a Kalman filter derived velocity information from position measurements.
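
    The final step described above, deriving velocity from position measurements with a Kalman filter, can be illustrated with a one-axis constant-velocity filter; the time step and noise levels are assumptions, and the demonstration system's actual software is not reproduced here.

        # Hedged sketch: constant-velocity Kalman filter estimating velocity from positions.
        import numpy as np

        def track_velocity(position_measurements, dt=0.1, q=1e-3, r=1e-2):
            F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity state transition
            H = np.array([[1.0, 0.0]])                  # only position is observed
            Q = q * np.eye(2)                           # process noise covariance
            R = np.array([[r]])                         # measurement noise covariance
            x = np.array([[position_measurements[0]], [0.0]])  # state: [position, velocity]
            P = np.eye(2)
            estimates = []
            for z in position_measurements:
                x = F @ x                               # predict
                P = F @ P @ F.T + Q
                y = np.array([[z]]) - H @ x             # innovation from new position
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
                x = x + K @ y                           # update
                P = (np.eye(2) - K @ H) @ P
                estimates.append((float(x[0, 0]), float(x[1, 0])))
            return estimates                            # (position, velocity) per measurement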

  11. Unmanned aerial vehicles in astronomy

    NASA Astrophysics Data System (ADS)

    Biondi, Federico; Magrin, Demetrio; Ragazzoni, Roberto; Farinato, Jacopo; Greggio, Davide; Dima, Marco; Gullieuszik, Marco; Bergomi, Maria; Carolo, Elena; Marafatto, Luca; Portaluri, Elisa

    2016-07-01

    In this work we discuss some options for using Unmanned Aerial Vehicles (UAVs) for daylight alignment activities and maintenance of optical telescopes, relating them to a small number of parameters and outlining the schemes, requirements, and benefits of employing them both at the erection stage and during maintenance. UAVs can easily reach the auto-collimation points of optical components of the next class of Extremely Large Telescopes. They can be equipped with tools for the measurement of the co-phasing, scattering, and reflectivity of segmented mirrors, or of environmental parameters like Cn^2 and CT^2 to characterize the seeing during both the day and the night.

  12. Human-friendly stylization of video content using simulated colored paper mosaics

    NASA Astrophysics Data System (ADS)

    Kim, Seulbeom; Kang, Dongwann; Yoon, Kyunghyun

    2016-07-01

    Video content is used extensively in many fields. However, in some fields, video manipulation techniques are required to improve the human-friendliness of such content. In this paper, we propose a method that automatically generates animations in the style of colored paper mosaics, to create human-friendly, artistic imagery. To enhance temporal coherence while maintaining the characteristics of colored paper mosaics, we also propose a particle video-based method that determines coherent locations for tiles in animations. The proposed method generates evenly distributed particles, which are used to produce animated tiles via our tile modeling process.

  13. Viking 1975 Mars lander interactive computerized video stereophotogrammetry

    NASA Technical Reports Server (NTRS)

    Liebes, S., Jr.; Schwartz, A. A.

    1977-01-01

    A novel computerized interactive video stereophotogrammetry system has been developed for analysis of Viking 1975 lander imaging data. Prompt, accurate, and versatile performance is achieved. Earth-returned digital imagery data are driven from a computer to a pair of video monitors. Powerful computer support enables a photogrammetrist, stereoscopically viewing the video displays, to create diverse topographic products. Profiles, representing the intersection of any definable surface with the Martian relief, are readily generated. Vertical profiles and elevation contour maps, including stereo versions, are produced. Computer overlays of map products on stereo images aid map interpretation and permit independent quality evaluation. Slaved monitors enable parallel viewing. Maps span from the immediate foreground to the remote limits of ranging capability. Surface sampler arm specific vertical profiles enable direct reading of arm commands required for sample acquisition, rock rolling, and trenching. The ranging accuracy of plus or minus 2 cm throughout the sample area degrades to plus or minus 20 m at 100-m range.

  14. Sampling video compression system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Y.; Lum, H. (Inventor)

    1977-01-01

    A system for transmitting video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.
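
    The sampling scheme described above can be illustrated by keeping only a checkerboard of picture elements in each frame, roughly halving the transmitted bandwidth, with the receiver's frame memory interleaving complementary fields; the block structure and diagonal scan order of the patent are simplified away in this hedged sketch.

        # Hedged illustration: checkerboard subsampling and reassembly of a frame.
        import numpy as np

        def checkerboard_sample(frame, phase=0):
            """Return the retained samples (about half the pixels) and their mask."""
            h, w = frame.shape
            mask = (np.add.outer(np.arange(h), np.arange(w)) + phase) % 2 == 0
            return frame[mask], mask

        def checkerboard_merge(samples_a, mask_a, samples_b, mask_b, shape):
            """Interleave two complementary checkerboard fields into a full frame."""
            frame = np.zeros(shape, dtype=samples_a.dtype)
            frame[mask_a] = samples_a
            frame[mask_b] = samples_b
            return frame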

  15. Green Power Partnership Videos

    EPA Pesticide Factsheets

    The Green Power Partnership develops videos on a regular basis that explore a variety of topics including the Green Power Partnership, green power purchasing, and renewable energy certificates, among others.

  16. MAPPING EELGRASS SPECIES ZOSTERA JAPONICA AND Z. MARINA, ASSOCIATED MACROALGAE AND EMERGENT AQUATIC VEGETATION HABITATS IN PACIFIC NORTHWEST ESTUARIES USING NEAR-INFRARED COLOR AERIAL PHOTOGRAPHY AND A HYBRID IMAGE CLASSIFICATION TECHNIQUE

    EPA Science Inventory

    Aerial photographic surveys of Oregon's Yaquina Bay estuary were conducted during consecutive summers from 1997 through 2000. Imagery was obtained during low tide exposures of intertidal mudflats, allowing use of near-infrared color film to detect and discriminate plant communit...

  17. MAPPING NON-INDIGENOUS EELGRASS ZOSTERA JAPONICA, ASSOCIATED MACROALGAE AND EMERGENT AQUATIC VEGETATION HABITATS IN A PACIFIC NORTHWEST ESTUARY USING NEAR-INFRARED COLOR AERIAL PHOTOGRAPHY AND A HYBRID IMAGE CLASSIFICATION TECHNIQUE

    EPA Science Inventory

    We conducted aerial photographic surveys of Oregon's Yaquina Bay estuary during consecutive summers from 1997 through 2001. Imagery was obtained during low tide exposures of intertidal mudflats, allowing use of near-infrared color film to detect and discriminate plant communitie...

  18. USGS Earth Explorer Client for Co-Discovery of Aerial and Satellite Data

    NASA Astrophysics Data System (ADS)

    Longhenry, R.; Sohre, T.; McKinney, R.; Mentele, T.

    2011-12-01

    The United States Geological Survey (USGS) Earth Resources Observation Science (EROS) Center is home to one of the largest civilian collections of images of the Earth's surface. These images are collected from recent satellite platforms such as the Landsat, Terra, Aqua and Earth Observer-1, historical airborne systems such as digital cameras and side-looking radar, and digitized historical aerial photography dating to the 1930s. The aircraft scanners include instruments such as the Advanced Solid State Array Spectrometer (ASAS). Also archived at EROS are specialized collections of aerial images, such as high-resolution orthoimagery, extensive collections over Antarctica, and historical airborne campaigns such as the National Aerial Photography Program (NAPP) and the National High Altitude Photography (NHAP) collections. These collections, as well as digital map data, declassified historical space-based photography, and a variety of collections such as the Global Land Survey 2000 (GLS2000) and the Shuttle Radar Topography Mission (SRTM) are accessible through the USGS Earth Explorer (EE) client. EE allows for the visual discovery and browse of diverse datasets simultaneously, permitting the co-discovery and selection refinement of both satellite and aircraft imagery. The client, in use for many years, was redesigned in 2010 to support requirements for next generation Landsat Data Continuity Mission (LDCM) data access and distribution. The redesigned EE is now supported by standards-based, open source infrastructure. EE gives users the capability to search 189 datasets through one interface, including over 8.4 million frames of aerial imagery. Since April 2011, NASA datasets archived at the Land Processes Distributed Active Archive Center (LP DAAC), including the MODIS land data products and ASTER Level-1B data products over the U.S. and Territories, were made available via the EE client, enabling users to co-discover aerial data archived at the USGS EROS along with USGS

  19. Terrestrial polarization imagery obtained from the Space Shuttle - Characterization and interpretation

    NASA Technical Reports Server (NTRS)

    Egan, Walter G.; Johnson, W. R.; Whitehead, V. S.

    1991-01-01

    An experiment to measure the polarization of land, sea, haze, and cloud areas from space was carried aboard the Space Shuttle in September 1985. Digitized polarimetric and photometric imagery in mutually perpendicular planes was derived in the red, green, and blue spectral regions from photographs taken with two synchronized Hasselblad cameras using type 5036 Ektachrome film. Digitization at the NASA Houston Video Digital Analysis Systems Laboratory permitted reduction of the imagery into equipolarimetric contours with a relative accuracy of + or - 20 percent for comparison to ground truth. The Island of Hawaii and adjacent sea and cloud areas were the objects of the specific imagery analyzed. Results show that cloud development is uniquely characterized using percent polarization without requiring precision photometric calibration. Furthermore, sea state and wind direction over the sea could be inferred as well as terrestrial soil texture.

  20. IMPROVING BIOGENIC EMISSION ESTIMATES WITH SATELLITE IMAGERY

    EPA Science Inventory

    This presentation will review how existing and future applications of satellite imagery can improve the accuracy of biogenic emission estimates. Existing applications of satellite imagery to biogenic emission estimates have focused on characterizing land cover. Vegetation dat...

  1. NOAA's Use of High-Resolution Imagery

    NASA Technical Reports Server (NTRS)

    Hund, Erik

    2007-01-01

    NOAA's use of high-resolution imagery consists of: a) Shoreline mapping and nautical chart revision; b) Coastal land cover mapping; c) Benthic habitat mapping; d) Disaster response; and e) Imagery collection and support for coastal programs.

  2. Monitoring Seabirds and Marine Mammals by Georeferenced Aerial Photography

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Weidauer, A.; Coppack, T.

    2016-06-01

    The assessment of anthropogenic impacts on the marine environment is challenged by the accessibility, accuracy and validity of biogeographical information. Offshore wind farm projects require large-scale ecological surveys before, during and after construction, in order to assess potential effects on the distribution and abundance of protected species. The robustness of site-specific population estimates depends largely on the extent and design of spatial coverage and the accuracy of the applied census technique. Standard environmental assessment studies in Germany have so far included aerial visual surveys to evaluate potential impacts of offshore wind farms on seabirds and marine mammals. However, low flight altitudes, necessary for the visual classification of species, disturb sensitive bird species and also hold significant safety risks for the observers. Thus, aerial surveys based on high-resolution digital imagery, which can be carried out at higher (safer) flight altitudes (beyond the rotor-swept zone of the wind turbines) have become a mandatory requirement, technically solving the problem of distant-related observation bias. A purpose-assembled imagery system including medium-format cameras in conjunction with a dedicated geo-positioning platform delivers series of orthogonal digital images that meet the current technical requirements of authorities for surveying marine wildlife at a comparatively low cost. At a flight altitude of 425 m, a focal length of 110 mm, implemented forward motion compensation (FMC) and exposure times ranging between 1/1600 and 1/1000 s, the twin-camera system generates high quality 16 bit RGB images with a ground sampling distance (GSD) of 2 cm and an image footprint of 155 x 410 m. The image files are readily transferrable to a GIS environment for further editing, taking overlapping image areas and areas affected by glare into account. The imagery can be routinely screened by the human eye guided by purpose-programmed software
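
    The reported figures (425 m altitude, 110 mm focal length, 2 cm GSD, 155 x 410 m footprint) follow from standard frame-camera geometry. The sketch below assumes a pixel pitch of about 5.2 um and illustrative pixel counts, since neither is stated in the abstract.

```python
def ground_sampling_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """GSD (m/pixel) for a nadir-looking frame camera: H * p / f."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def footprint(gsd_m, pixels_across, pixels_along):
    """Ground footprint (m) covered by a single frame."""
    return gsd_m * pixels_across, gsd_m * pixels_along

# Flight parameters from the abstract; the ~5.2 um pixel pitch and the pixel
# counts are assumptions chosen to reproduce the stated 2 cm GSD.
gsd = ground_sampling_distance(425, 110, 5.2)
print(round(gsd, 3))                      # ~0.02 m per pixel
print(footprint(gsd, 7750, 10250))        # roughly 155 m x 205 m for one camera of the twin system
```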

  3. Approximate Dynamic Programming and Aerial Refueling

    DTIC Science & Technology

    2007-06-01

    Planning values were derived from "AFPAM 10-1403, AIR MOBILITY PLANNING FACTORS," used by the US Air Force when making gross calculations of aerial refueling. Cited sources include the U.S. Centennial of Flight Commission essay on the evolution of aerial refueling technology (centennialofflight.gov/essay/EvolutionofTechnology/refueling/Tech22.htm).

  4. 47 CFR 32.2431 - Aerial wire.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Aerial wire. 32.2431 Section 32.2431... FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2431 Aerial wire. (a) This account shall include the original cost of bare line wire and other material used in...

  5. 47 CFR 32.2431 - Aerial wire.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false Aerial wire. 32.2431 Section 32.2431... FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2431 Aerial wire. (a) This account shall include the original cost of bare line wire and other material used in...

  6. 47 CFR 32.2431 - Aerial wire.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false Aerial wire. 32.2431 Section 32.2431... FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2431 Aerial wire. (a) This account shall include the original cost of bare line wire and other material used in...

  7. 47 CFR 32.2431 - Aerial wire.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Aerial wire. 32.2431 Section 32.2431... FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2431 Aerial wire. (a) This account shall include the original cost of bare line wire and other material used in...

  8. 47 CFR 32.2431 - Aerial wire.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false Aerial wire. 32.2431 Section 32.2431... FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2431 Aerial wire. (a) This account shall include the original cost of bare line wire and other material used in...

  9. BOREAS Level-0 ER-2 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Dominquez, Roseanne; Hall, Forrest G. (Editor)

    2000-01-01

    For the BOReal Ecosystem-Atmosphere Study (BOREAS), ER-2 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The ER-2 aerial photography consists of color-IR transparencies collected during flights in 1994 and 1996 over the study areas.

  10. Astronomical Methods in Aerial Navigation

    NASA Technical Reports Server (NTRS)

    Beij, K Hilding

    1925-01-01

    The astronomical method of determining position is universally used in marine navigation and may also be of service in aerial navigation. The practical application of the method, however, must be modified and adapted to conform to the requirements of aviation. Much of this work of adaptation has already been accomplished, but being scattered through various technical journals in a number of languages, is not readily available. This report is for the purpose of collecting under one cover such previous work as appears to be of value to the aerial navigator, comparing instruments and methods, indicating the best practice, and suggesting future developments. The various methods of determining position and their application and value are outlined, and a brief resume of the theory of the astronomical method is given. Observation instruments are described in detail. A complete discussion of the reduction of observations follows, including a rapid method of finding position from the altitudes of two stars. Maps and map cases are briefly considered. A bibliography of the subject is appended.

  11. GLIDER: Free tool for imagery data visualization, analysis and mining

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Graves, S. J.; Berendes, T.; Maskey, M.; Chidambaram, C.; Hogan, P.; Gaskin, T.

    2009-12-01

    Satellite imagery can be analyzed to extract thematic information, which has increasingly been used as a source of information for making policy decisions. The uses of such thematic information can vary from military applications such as detecting assets of interest to science applications such as characterizing land-use/land cover change at local, regional and global scales. However, extracting thematic information using satellite imagery is a non-trivial task. It requires a user to preprocess the data by applying operations for radiometric and geometric corrections. The user also needs to be able to visualize the data and apply different image enhancement operations to digitally improve the images to identify subtle information that might be otherwise missed. Finally, the user needs to apply different information extraction algorithms to the imagery to obtain the thematic information. At present, there are limited tools that provide users with the capability to easily extract and exploit the information contained within the satellite imagery. This presentation will present GLIDER, a free software tool addressing this void. GLIDER provides users with an easy-to-use tool to visualize, analyze and mine satellite imagery. GLIDER allows users to visualize and analyze satellite imagery in its native sensor view, an important capability because any transformation to either a geographic coordinate system or any projected coordinate system entails spatial and intensity interpolation and, hence, loss of information. GLIDER allows users to perform their analysis in the native sensor view without any loss of information. GLIDER provides users with a full suite of image processing algorithms that can be used to enhance the satellite imagery. It also provides pattern recognition and data mining algorithms for information extraction. GLIDER allows its users to project satellite data and the analysis/mining results onto a globe and overlay additional data layers. Traditional analysis

  12. Aerial Surveying Uav Based on Open-Source Hardware and Software

    NASA Astrophysics Data System (ADS)

    Mészáros, J.

    2011-09-01

    In recent years the functionality and variety of UAV systems have increased rapidly, but unfortunately these systems are in some cases hardly available to researchers. A simple and low-cost solution was developed to build an autonomous aerial surveying airplane, which can fulfil the needs (very-high-resolution aerial photographs) of other departments at the university and is very useful and practical for teaching photogrammetry. The base was a commercial, remote-controlled model airplane, and an open-source GPS/IMU system (MatrixPilot) was adapted to achieve semi-automatic or automatic stabilization and navigation of the model airplane along a predefined trajectory. The firmware is completely open source and easily available on the website of the project. The first camera system used was a low-budget, low-quality video camera, which could provide only 1.2-megapixel photographs or low-resolution video depending on the light conditions and the desired spatial resolution. A field measurement test was carried out with the described system: the aerial surveying of an undiscovered archaeological site, indicated by a crop mark, in the Pilis Mountains (Hungary).

  13. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  14. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
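
    A minimal sketch of the core step described in this patent, estimating the translation of a pixel block between a key field and a new field, can be written as an exhaustive sum-of-absolute-differences search. The block size, search radius, and synthetic test data below are assumptions for illustration, not values from the patent.

```python
import numpy as np

def block_translation(key_field, new_field, top, left, size=16, search=8):
    """Estimate the (dy, dx) shift of one pixel block between two video
    fields by exhaustive search minimizing the sum of absolute differences."""
    block = key_field[top:top + size, left:left + size].astype(np.int32)
    best, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + size > new_field.shape[0] or c + size > new_field.shape[1]:
                continue
            cand = new_field[r:r + size, c:c + size].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_shift = sad, (dy, dx)
    return best_shift

# Synthetic check: shift a random field by (3, -2) and recover the shift
rng = np.random.default_rng(0)
key = rng.integers(0, 255, (64, 64), dtype=np.uint8)
new = np.roll(key, shift=(3, -2), axis=(0, 1))
print(block_translation(key, new, top=24, left=24))   # expected (3, -2)
```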

  15. Evaluation of maritime object detection methods for full motion video applications using the PASCAL VOC Challenge framework

    NASA Astrophysics Data System (ADS)

    Jaszewski, Martin; Parameswaran, Shibin; Hallenborg, Eric; Bagnall, Bryan

    2015-03-01

    We present an initial target detection performance evaluation system for the RAPid Image Exploitation Resource (RAPIER) Full Motion Video (RFMV) maritime target tracking software. We test and evaluate four statistical target detection methods using 30 Hz full motion video from aerial platforms. Using appropriate algorithm performance criteria inspired by the PASCAL Visual Object Classes (VOC) Challenge, we address the tradeoffs between detection fidelity and computational speed/throughput.
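
    A minimal sketch of PASCAL VOC-style detection scoring is shown below: a detection counts as a true positive if it overlaps an unmatched ground-truth box with intersection-over-union at or above a threshold (0.5 here). The box coordinates are hypothetical, and this is a simplified scorer rather than the RFMV evaluation code.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def precision_recall(detections, ground_truth, thresh=0.5):
    """VOC-style scoring: each detection may claim at most one unmatched
    ground-truth box whose IoU meets the threshold."""
    matched, tp = set(), 0
    for det in detections:
        best_j, best_iou = None, thresh
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            overlap = iou(det, gt)
            if overlap >= best_iou:
                best_j, best_iou = j, overlap
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    fp = len(detections) - tp
    fn = len(ground_truth) - tp
    precision = tp / float(tp + fp) if detections else 0.0
    recall = tp / float(tp + fn) if ground_truth else 0.0
    return precision, recall

# Hypothetical boxes from one video frame
dets = [(10, 10, 50, 40), (100, 100, 140, 130), (200, 20, 230, 60)]
gts = [(12, 12, 52, 42), (98, 105, 138, 135)]
print(precision_recall(dets, gts))   # (0.667, 1.0)
```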

  16. The Value of Video

    ERIC Educational Resources Information Center

    Thompson, Douglas E.

    2011-01-01

    Video connects sight and sound, creating a composite experience greater than either alone. More than any other single technology, video is the most powerful way to communicate with others--and an ideal medium for sharing with others the vital learning occurring in music classrooms. In this article, the author leads readers through the process of…

  17. Digital Video Editing

    ERIC Educational Resources Information Center

    McConnell, Terry

    2004-01-01

    Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing is described, along with the cables, storage issues, and the computer system and software involved.

  18. 2016 Perseids: outreach video

    NASA Astrophysics Data System (ADS)

    Madiedo, Jose Maria

    2016-02-01

    In order to promote the observation of the Perseids in August 2016 I have prepared an outreach video. The video contains computer animations and actual footage related to this meteor shower. It has been released by the University of Huelva and the Institute of Astrophysics of Andalusia in two versions: English and Spanish.

  19. Policy for Instructional Video.

    ERIC Educational Resources Information Center

    Lipson, Joseph I.

    An examination of the general uses of video in instruction helps to formulate appropriate policy for maximizing video production and use. Wide use of instructional television makes advanced knowledge more usable and increases public awareness of new discoveries, reduces the time lag between conception and application of ideas which change society,…

  20. Writing in Video.

    ERIC Educational Resources Information Center

    Carraher, David; Nemirovsky, Ricardo; DiMattia, Cara; Lara-Meloy, Teresa; Earnest, Darrell

    1999-01-01

    Video and electronic media have the potential to bridge the gap between classroom research and practice by providing rich and detailed data for grounded discussions about teaching and learning. Describes attempts to use digital video technologies to increase collaboration between researchers and practitioners. (WRM)

  1. Video Communication Program.

    ERIC Educational Resources Information Center

    Haynes, Leonard Stanley

    This thesis describes work done as part of the Video Console Indexing Project (VICI), a program to improve the quality and reduce the time and work involved in indexing documents. The objective of the work described was to design a video terminal system which could be connected to a main computer to provide rapid natural communication between the…

  2. The Video Generation.

    ERIC Educational Resources Information Center

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  3. Perceptual evaluation of colorized nighttime imagery

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; de Jong, Michael J.; Hogervorst, Maarten A.; Hooge, Ignace T. C.

    2014-02-01

    We recently presented a color transform that produces fused nighttime imagery with a realistic color appearance (Hogervorst and Toet, 2010, Information Fusion, 11-2, 69-77). To assess the practical value of this transform we performed two experiments in which we compared human scene recognition for monochrome intensified (II) and longwave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First we investigated the amount of detail observers can perceive in a short time span (the gist of the scene). Participants watched brief image presentations and provided a full report of what they had seen. Our results show that REF and CF imagery yielded the highest precision and recall measures, while both II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty extracting information from monochrome than from color imagery. Next, we measured eye fixations of participants who freely explored the images. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representation such that the resulting fixation behavior resembles the fixation behavior for daylight color imagery.

  4. Strategies for Defeating Commercial Imagery Systems

    DTIC Science & Technology

    2005-12-01

    Strategies for Defeating Commercial Imagery Systems, by Stephen Latchford, Lieutenant Colonel, USAF, December 2005. The paper was published in the Occasional Papers series.

  5. Imagery: A Neglected Correlate of Reading Instruction.

    ERIC Educational Resources Information Center

    Fillmer, H. T.; Parkay, Forrest W.

    Imagery has a significant role in cognitive development. Reading research has established the fact that good readers image spontaneously and that there is a high interrelationship between overall preference for a story, the amount of text-related imagery in the story, comprehension, and recall. Imagery researchers agree that everyone is capable of…

  6. Automatic Orientation and Mosaicking of Archived Aerial Photography Using Structure from Motion

    NASA Astrophysics Data System (ADS)

    Gonçalves, J. A.

    2016-03-01

    Aerial photography has been acquired regularly for topographic mapping since the 1930s. In Portugal there are several archives of aerial photos in national mapping institutes, as well as in local authorities, containing a total of nearly one hundred thousand photographs, mainly from the 1940s and 1950s, with some from the 1930s. These data sets provide important information about the evolution of the territory, for environmental and agricultural studies, land planning, and many other purposes. There is an interest in making these aerial coverages available in the form of orthorectified mosaics for integration in a GIS. The orthorectification of old photographs may pose several difficulties. Required data about the camera and lens system used, such as the focal length, fiducial mark coordinates, or distortion parameters, may not be available, making it difficult to process these data in conventional photogrammetric software. This paper describes an essentially automatic methodology for orientation, orthorectification and mosaic composition of blocks of old aerial photographs, using Agisoft Photoscan structure from motion software. The operation sequence is similar to the processing of UAV imagery. The method was applied to photographs from 1947 and 1958, provided by the Portuguese Army Geographic Institute. The orientation was done with GCPs collected from recent orthophotos and topographic maps. This may be a difficult task, especially in urban areas that have undergone many changes. Residuals were in general below 1 meter. The agreement of the orthomosaics with recent orthophotos and GIS vector data was in general very good. The process is relatively fast and automatic, and can be considered for the processing of full coverages of old aerial photographs.

  7. Use of Airborne Thermal Imagery to Detect and Monitor Inshore Oil Spill Residues During Darkness Hours.

    PubMed

    GRIERSON

    1998-11-01

    Trials were conducted using an airborne video system operating in the visible, near-infrared, and thermal wavelengths to detect two known oil spill releases during darkness at a distance of 10 nautical miles from the shore in St. Vincent's Gulf, South Australia. The oil spills consisted of two 20-liter samples released at 2-h intervals; one sample consisted of paraffinic neutral material and the other of automotive diesel oil. A tracking buoy was sent overboard in conjunction with the release of sample 1, and its movement monitored by satellite relay. Both oil residues were overflown by a light aircraft equipped with thermal, visible, and infrared imagers approximately 1 h after the release of the second oil residue. Trajectories of the oil residue releases were also modeled and the results compared to those obtained by the airborne video and the tracking buoy. Airborne imagery in the thermal wavelengths successfully located and mapped both oil residue samples during nighttime conditions. Results from the trial suggest that the most advantageous technique would be the combined use of the tracking beacon to obtain an approximate location of the oil spill and the airborne imagery to ascertain its extent and characteristics. KEY WORDS: Airborne video; Thermal imagery; Global positioning; Oil-spill monitoring; Tracking beacon

  8. Violence against women in video games: a prequel or sequel to rape myth acceptance?

    PubMed

    Beck, Victoria Simpson; Boys, Stephanie; Rose, Christopher; Beck, Eric

    2012-10-01

    Current research suggests a link between negative attitudes toward women and violence against women, and it also suggests that media may condition such negative attitudes. When considering the tremendous and continued growth of video game sales, and the resulting proliferation of sexual objectification and violence against women in some video games, it is lamentable that there is a dearth of research exploring the effect of such imagery on attitudes toward women. This study is the first to use actual video game playing and control for causal order when exploring the effect of sexual exploitation and violence against women in video games on attitudes toward women. By employing a Solomon Four-Group experimental research design, this exploratory study found that a video game depicting sexual objectification of women and violence against women resulted in a statistically significant increase in rape myth acceptance (rape-supportive attitudes) for male study participants but not for female participants.

  9. Evaluation of Bare Ground on Rangelands using Unmanned Aerial Vehicles

    SciTech Connect

    Robert P. Breckenridge; Maxine Dakins

    2011-01-01

    Attention is currently being given to methods that assess the ecological condition of rangelands throughout the United States. There are a number of different indicators that assess the ecological condition of rangelands. Bare ground is being considered by a number of agencies and resource specialists as a lead indicator that can be evaluated over a broad area. Traditional methods of measuring bare ground rely on field technicians collecting data along a line transect or from a plot. Unmanned aerial vehicles (UAVs) provide an alternative to collecting field data, can monitor a large area in a relatively short period of time, and in many cases can enhance safety and reduce the time required to collect data. In this study, both fixed-wing and helicopter UAVs were used to measure bare ground in a sagebrush steppe ecosystem. The data were collected with digital imagery and read using the image analysis software SamplePoint. The approach was tested over seven different plots and compared against traditional field methods to evaluate accuracy for assessing bare ground. The field plots were located on the Idaho National Laboratory (INL) site west of Idaho Falls, Idaho, in locations where there is very little disturbance by humans and the area is grazed only by wildlife. The comparison of fixed-wing and helicopter UAV technology against field estimates shows good agreement for the measurement of bare ground. This study shows that if a high degree of detail and data accuracy is desired, then a helicopter UAV may be a good platform. If the data collection objective is to assess broad-scale landscape-level changes, then the collection of imagery with a fixed-wing system is probably more appropriate.

  10. Unmanned Aerial Vehicle to Estimate Nitrogen Status of Turfgrasses

    PubMed Central

    Corniglia, Matteo; Gaetani, Monica; Grossi, Nicola; Magni, Simone; Migliazzi, Mauro; Angelini, Luciana; Mazzoncini, Marco; Silvestri, Nicola; Fontanelli, Marco; Raffaelli, Michele; Peruzzi, Andrea; Volterrani, Marco

    2016-01-01

    Spectral reflectance data originating from Unmanned Aerial Vehicle (UAV) imagery is a valuable tool to monitor plant nutrition, reduce nitrogen (N) application to real needs, thus producing both economic and environmental benefits. The objectives of the trial were i) to compare the spectral reflectance of 3 turfgrasses acquired via UAV and by a ground-based instrument; ii) to test the sensitivity of the 2 data acquisition sources in detecting induced variation in N levels. N application gradients from 0 to 250 kg ha-1 were created on 3 different turfgrass species: Cynodon dactylon x transvaalensis (Cdxt) ‘Patriot’, Zoysia matrella (Zm) ‘Zeon’ and Paspalum vaginatum (Pv) ‘Salam’. Proximity and remote-sensed reflectance measurements were acquired using a GreenSeeker handheld crop sensor and a UAV with onboard a multispectral sensor, to determine Normalized Difference Vegetation Index (NDVI). Proximity-sensed NDVI is highly correlated with data acquired from UAV with r values ranging from 0.83 (Zm) to 0.97 (Cdxt). Relating NDVI-UAV with clippings N, the highest r is for Cdxt (0.95). The most reactive species to N fertilization is Cdxt with a clippings N% ranging from 1.2% to 4.1%. UAV imagery can adequately assess the N status of turfgrasses and its spatial variability within a species, so for large areas, such as golf courses, sod farms or race courses, UAV acquired data can optimize turf management. For relatively small green areas, a hand-held crop sensor can be a less expensive and more practical option. PMID:27341674

  11. Unmanned Aerial Vehicle to Estimate Nitrogen Status of Turfgrasses.

    PubMed

    Caturegli, Lisa; Corniglia, Matteo; Gaetani, Monica; Grossi, Nicola; Magni, Simone; Migliazzi, Mauro; Angelini, Luciana; Mazzoncini, Marco; Silvestri, Nicola; Fontanelli, Marco; Raffaelli, Michele; Peruzzi, Andrea; Volterrani, Marco

    2016-01-01

    Spectral reflectance data originating from Unmanned Aerial Vehicle (UAV) imagery is a valuable tool to monitor plant nutrition, reduce nitrogen (N) application to real needs, thus producing both economic and environmental benefits. The objectives of the trial were i) to compare the spectral reflectance of 3 turfgrasses acquired via UAV and by a ground-based instrument; ii) to test the sensitivity of the 2 data acquisition sources in detecting induced variation in N levels. N application gradients from 0 to 250 kg ha-1 were created on 3 different turfgrass species: Cynodon dactylon x transvaalensis (Cdxt) 'Patriot', Zoysia matrella (Zm) 'Zeon' and Paspalum vaginatum (Pv) 'Salam'. Proximity and remote-sensed reflectance measurements were acquired using a GreenSeeker handheld crop sensor and a UAV with onboard a multispectral sensor, to determine Normalized Difference Vegetation Index (NDVI). Proximity-sensed NDVI is highly correlated with data acquired from UAV with r values ranging from 0.83 (Zm) to 0.97 (Cdxt). Relating NDVI-UAV with clippings N, the highest r is for Cdxt (0.95). The most reactive species to N fertilization is Cdxt with a clippings N% ranging from 1.2% to 4.1%. UAV imagery can adequately assess the N status of turfgrasses and its spatial variability within a species, so for large areas, such as golf courses, sod farms or race courses, UAV acquired data can optimize turf management. For relatively small green areas, a hand-held crop sensor can be a less expensive and more practical option.
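
    The NDVI values compared in this study follow the standard definition NDVI = (NIR - Red) / (NIR + Red). Below is a small sketch that computes NDVI from reflectance arrays and correlates two acquisition sources; the reflectance values are hypothetical and only illustrate the calculation, not the trial data.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-9)

# Hypothetical reflectance values for a few turf plots
uav_ndvi = ndvi(np.array([0.52, 0.58, 0.61, 0.70]), np.array([0.08, 0.07, 0.05, 0.04]))
ground_ndvi = np.array([0.72, 0.77, 0.83, 0.88])   # e.g. handheld sensor readings

# Pearson correlation between the two acquisition sources
r = np.corrcoef(uav_ndvi, ground_ndvi)[0, 1]
print(np.round(uav_ndvi, 2), round(r, 2))
```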

  12. A wetlands inventory of the state of Nebraska using ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Seevers, P. M.; Peterson, R. M.; Mahoney, D. J.; Maroney, D. G.; Rundquist, D. C.

    1975-01-01

    The use of ERTS-1 imagery permitted a rapid, economic, and accurate inventory of wetlands in Nebraska that are ten acres or larger in size. Four categories of wetlands - Open Water, Subirrigated Meadows, Marshes, and Seasonally Flooded Basins - were delineated by using two seasons of imagery and an electronic image-enhancing system. Positive print enlargements of bands 5 and 7 at a scale of 1:250,000 (acquired in the spring) as well as band 7 (acquired in late summer) were used to delineate all categories. Electronic enhancement of band 6 (acquired in the fall) was used as an aid to further differentiate marshes. Accuracy estimates based on color infrared aerial photography as ground truth indicated, as an overall average, 85 percent correct identification.

  13. Mapping broom snakeweed through image analysis of color-infrared photography and digital imagery.

    PubMed

    Everitt, J H; Yang, C

    2007-11-01

    A study was conducted on a south Texas rangeland area to evaluate aerial color-infrared (CIR) photography and CIR digital imagery combined with unsupervised image analysis techniques to map broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby]. Accuracy assessments performed on computer-classified maps of photographic images from two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 88.3%, respectively; whereas, accuracy assessments performed on classified maps from digital images of the same two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 92.8%, respectively. These results indicate that CIR photography and CIR digital imagery combined with image analysis techniques can be used successfully to map broom snakeweed infestations on south Texas rangelands.
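
    Producer's and user's accuracies are read directly from a classification confusion matrix: correct pixels divided by the reference (column) total and by the classified (row) total, respectively. The sketch below uses a hypothetical two-class matrix chosen so that the broom snakeweed class lands near the reported 98.3% and 88.3% means.

```python
import numpy as np

def producer_user_accuracy(confusion, class_idx):
    """Producer's accuracy = correct / column total (omission errors);
    user's accuracy = correct / row total (commission errors).
    Rows are classified (map) labels, columns are reference labels."""
    confusion = np.asarray(confusion, dtype=float)
    correct = confusion[class_idx, class_idx]
    producers = correct / confusion[:, class_idx].sum()
    users = correct / confusion[class_idx, :].sum()
    return producers, users

# Hypothetical 2-class confusion matrix: [snakeweed, other]
cm = [[59, 8],    # classified as snakeweed
      [1, 132]]   # classified as other
print(producer_user_accuracy(cm, 0))   # ~ (0.983, 0.881), close to the reported means
```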

  14. A Vegetation Analysis on Horn Island Mississippi, ca. 1940 using Habitat Characteristic Dimensions Derived from Historical Aerial Photography

    NASA Astrophysics Data System (ADS)

    Jeter, G. W.; Carter, G. A.

    2013-12-01

    The over-arching goal of this research is to assess habitat change over a seventy-year period to better understand the combined effects of global sea level rise and storm impacts on the stability of Horn Island, MS habitats. Historical aerial photography is often overlooked as a resource for use in determining habitat change. However, the spatial information provided even by black and white imagery can give insight into past habitat composition via textural analysis. This research will evaluate characteristic dimensions, most notably the patch size of habitat types, using simple geo-statistics and textures of brightness values of historical aerial imagery. It is assumed that each cover type has an identifiable patch size that can be used as a unique classifier of each habitat type. Analytical methods applied to the 1940 imagery were developed using 2010 field data and USDA aerial imagery. Textural moving window methods and basic geo-statistics were used to estimate characteristic dimensions of each cover type in 1940 aerial photography. The moving window texture analysis was configured with multiple window sizes to capture the characteristic dimensions of six habitat types: water, bare sand, dune herb land, estuarine shrub land, marsh land and slash pine woodland. Coefficient of variation (CV), contrast, and entropy texture filters were used to analyze the spatial variability of the 1940 and 2010 imagery. CV was used to depict the horizontal variability of each habitat characteristic dimension. Contrast was used to represent the variability of bright versus dark pixel values; entropy was used to show the variation in the slash pine woodland habitat type. Results indicate a substantial increase in marshland habitat relative to other habitat types since 1940. Results also reveal each habitat type, such as dune herb land, marsh
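
    A coefficient-of-variation texture layer of the kind used here can be sketched as a moving-window standard-deviation-to-mean ratio over brightness values; the window size and the synthetic smooth-versus-speckled test image below are assumptions for illustration only.

```python
import numpy as np

def cv_texture(image, window=5):
    """Moving-window coefficient of variation (std / mean) texture image.
    Edge pixels narrower than the window are skipped for simplicity."""
    half = window // 2
    out = np.zeros_like(image, dtype=float)
    for r in range(half, image.shape[0] - half):
        for c in range(half, image.shape[1] - half):
            patch = image[r - half:r + half + 1, c - half:c + half + 1].astype(float)
            mean = patch.mean()
            out[r, c] = patch.std() / mean if mean > 0 else 0.0
    return out

# Synthetic brightness image: smooth "water" on the left, speckled "marsh" on the right
rng = np.random.default_rng(1)
img = np.hstack([np.full((20, 20), 80.0), 80 + 40 * rng.random((20, 20))])
tex = cv_texture(img)
print(round(tex[10, 5], 3), round(tex[10, 30], 3))   # low CV vs. higher CV
```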

  15. New Percepts via Mental Imagery?

    PubMed Central

    Mast, Fred W.; Tartaglia, Elisa M.; Herzog, Michael H.

    2012-01-01

    We are able to extract detailed information from mental images that we were not explicitly aware of during encoding. For example, we can discover a new figure when we rotate a previously seen image in our mind. However, such discoveries are not “really” new but just new “interpretations.” In two recent publications, we have shown that mental imagery can lead to perceptual learning (Tartaglia et al., 2009, 2012). Observers imagined the central line of a bisection stimulus for thousands of trials. This training enabled observers to perceive bisection offsets that were invisible before training. Hence, it seems that perceptual learning via mental imagery leads to new percepts. We will argue, however, that these new percepts can occur only within “known” models. In this sense, perceptual learning via mental imagery exceeds new discoveries in mental images. Still, the effects of mental imagery on perceptual learning are limited. Only perception can lead to really new perceptual experience. PMID:23060830

  16. Digital Imagery, Preservation and Access.

    ERIC Educational Resources Information Center

    Lesk, Michael; Lynn, M. Stuart

    1990-01-01

    These two reports published by the Commission on Preservation and Access (CPA) include a comparison of digital and microfilm imagery, as well as discussions of chemical deacidification; ASCII (nonimage) files; and storage, conversion, and transmission considerations. A structured glossary of terms relating to media conversion and digital computer…

  17. Dialectical Imagery and Postmodern Research

    ERIC Educational Resources Information Center

    Davison, Kevin G.

    2006-01-01

    This article suggests utilizing dialectical imagery, as understood by German social philosopher Walter Benjamin, as an additional qualitative data analysis strategy for research into the postmodern condition. The use of images mined from research data may offer epistemological transformative possibilities that will assist in the demystification of…

  18. Stereoscopy in cinematographic synthetic imagery

    NASA Astrophysics Data System (ADS)

    Eisenmann, Jonathan; Parent, Rick

    2009-02-01

    In this paper we present experiments and results pertaining to the perception of depth in stereoscopic viewing of synthetic imagery. In computer animation, typical synthetic imagery is highly textured and uses stylized illumination of abstracted material models by abstracted light source models. While there have been numerous studies concerning stereoscopic capabilities, conventions for staging and cinematography in stereoscopic movies have not yet been well-established. Our long-term goal is to measure the effectiveness of various cinematography techniques on the human visual system in a theatrical viewing environment. We would like to identify the elements of stereoscopic cinema that are important in terms of enhancing the viewer's understanding of a scene as well as providing guidelines for the cinematographer relating to storytelling. In these experiments we isolated stereoscopic effects by eliminating as many other visual cues as is reasonable. In particular, we aim to empirically determine what types of movement in synthetic imagery affect the perceptual depth sensing capabilities of our viewers. Using synthetic imagery, we created several viewing scenarios in which the viewer is asked to locate a target object's depth in a simple environment. The scenarios were specifically designed to compare the effectiveness of stereo viewing, camera movement, and object motion in aiding depth perception. Data were collected showing the error between the choice of the user and the actual depth value, and patterns were identified that relate the test variables to the viewer's perceptual depth accuracy in our theatrical viewing environment.

  19. The remote characterization of vegetation using Unmanned Aerial Vehicle photography

    NASA Astrophysics Data System (ADS)

    Rango, A.; Laliberte, A.; Winters, C.; Maxwell, C.; Steele, C.

    2008-12-01

    Unmanned Aerial Vehicles (UAVs) can fly in place of piloted aircraft to gather remote sensing information on vegetation characteristics. The type of sensors flown depends on the instrument payload capacity available, so that, depending on the specific UAV, it is possible to obtain video, aerial photographic, multispectral and hyperspectral radiometric, LIDAR, and radar data. The characteristics of several small UAVs (less than 55 lbs (25 kg)), along with some payload instruments, will be reviewed. Common types of remote sensing coverage available from a small, limited-payload UAV are video and hyperspatial digital photography. From evaluation of these simple types of remote sensing data, we conclude that UAVs can play an important role in measuring and monitoring vegetation health and structure of the vegetation/soil complex in rangelands. If we fly our MLB Bat-3 at an altitude of 700 ft (213 m), we can obtain a digital photographic resolution of 6 cm. The digital images acquired cover an area of approximately 29,350 sq m. Video imaging is usually only useful for monitoring the flight path of the UAV in real time. In our experiments with the 6 cm resolution data, we have been able to measure vegetation patch size, crown width, gap sizes between vegetation, percent vegetation and bare soil cover, and type of vegetation. The UAV system is also being tested to acquire the height of the vegetation canopy using shadow measurements and a digital elevation model obtained with stereo images. Evaluation of combining the UAV digital photography with LIDAR data of the Jornada Experimental Range in south central New Mexico is ongoing. The use of UAVs is increasing and is becoming a very promising tool for vegetation assessment and change detection, but there are several operational components to flying UAVs that users need to consider. These include cost, a whole set of as yet undefined regulations regarding flying in the National Airspace (NAS), procedures to gain approval for flying in the NAS

  20. A nearly real-time UAV video flow mosaic method

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Jiang, C.; Sun, M.; Li, X. D.; Xiang, R.; Liu, Lei

    2014-12-01

    In order to solve the problem of low accuracy and high computation cost in current video mosaic methods, and to acquire large-field-of-view images of high accuracy and high resolution from unmanned aerial vehicles (UAVs), this paper proposes a method for near-real-time mosaicking of video flow, so that essential reference data can be provided in time for earthquake relief as well as post-disaster reconstruction and recovery. In this method, we obtain the flight area scope in the route planning process and calculate the ground coverage of each frame from the sensor size and flight altitude. Given an overlap degree, time intervals are calculated and key frames are extracted. After that, feature points are detected in each frame and matched using the Hamming distance. The RANSAC algorithm is then applied to remove mismatches and calculate the parameters of the transformation model. In the single-strip case, the newly extracted frame is taken as the reference image in the first half of the strip; after the middle frame is extracted, that frame serves as the reference until the end. Experimental results show that our method can reduce the cascading error and improve the accuracy and quality of the mosaic images, and that near-real-time mosaicking of aerial video flow is feasible.
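
    The key-frame registration chain described here (feature detection, Hamming-distance matching, RANSAC estimation of the transformation model) can be sketched with standard OpenCV calls, as below. This is a generic illustration using ORB features and a homography model, not the authors' implementation; the file names and parameter values are placeholders.

```python
import cv2
import numpy as np

def frame_to_reference_homography(ref_gray, new_gray, min_matches=10):
    """Estimate the projective transform from a new key frame to the
    reference frame: ORB features, brute-force Hamming matching, RANSAC."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(ref_gray, None)
    kp2, des2 = orb.detectAndCompute(new_gray, None)
    if des1 is None or des2 is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    pts_ref = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_new = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatches before the transform parameters are solved
    H, _ = cv2.findHomography(pts_new, pts_ref, cv2.RANSAC, 3.0)
    return H

# Usage sketch (file names are placeholders):
# ref = cv2.imread("keyframe_000.png", cv2.IMREAD_GRAYSCALE)
# new = cv2.imread("keyframe_001.png", cv2.IMREAD_GRAYSCALE)
# H = frame_to_reference_homography(ref, new)
# mosaic = cv2.warpPerspective(new, H, (ref.shape[1] * 2, ref.shape[0] * 2))
```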

  1. The Potential Uses of Commercial Satellite Imagery in the Middle East

    SciTech Connect

    Vannoni, M.G.

    1999-06-08

    It became clear during the workshop that the applicability of commercial satellite imagery to the verification of future regional arms control agreements is limited at this time. Non-traditional security topics such as environmental protection, natural resource management, and the development of infrastructure offer the more promising applications for commercial satellite imagery in the short-term. Many problems and opportunities in these topics are regional, or at least multilateral, in nature. A further advantage is that, unlike arms control and nonproliferation applications, cooperative use of imagery in these topics can be done independently of the formal Middle East Peace Process. The value of commercial satellite imagery to regional arms control and nonproliferation, however, will increase during the next three years as new, more capable satellite systems are launched. Aerial imagery, such as that used in the Open Skies Treaty, can also make significant contributions to both traditional and non-traditional security applications but has the disadvantage of requiring access to national airspace and potentially higher cost. There was general consensus that commercial satellite imagery is under-utilized in the Middle East and resources for remote sensing, both human and institutional, are limited. This relative scarcity, however, provides a natural motivation for collaboration in non-traditional security topics. Collaborations between scientists, businesses, universities, and non-governmental organizations can work at the grass-roots level and yield contributions to confidence building as well as scientific and economic results. Joint analysis projects would benefit the region as well as establish precedents for cooperation.

  2. MEMS Based Micro Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Joshi, Niranjan; Köhler, Elof; Enoksson, Peter

    2016-10-01

    Designing a flapping wing insect robot requires understanding of insect flight mechanisms, wing kinematics and aerodynamic forces. These subsystems are interconnected and their dependence on one another affects the overall performance. Additionally it requires an artificial muscle-like actuator and a transmission to power the wings. Several kinds of actuators and mechanisms are candidates for this application, each with its own strengths and weaknesses. This article provides an overview of insect-scale flight mechanisms along with a discussion of various methods to achieve Micro Aerial Vehicle (MAV) flight. An ongoing project at Chalmers is aimed at developing a low-cost MAV with a short manufacturing time. The MAV design considerations and design specifications are mentioned. The wings are manufactured using 3D-printed carbon fiber and are under experimental study.

  3. How To Obtain Aerial Photographs

    USGS Publications Warehouse

    ,

    1999-01-01

    The U.S. Geological Survey (USGS) maintains an informational data base of aerial photographic coverage of the United States and its territories that dates back to the 1940s. This information describes photographic projects from the USGS, other Federal, State, and local government agencies, and commercial firms. The pictures on this page show a part of a standard 9- by 9-inch photograph and the results obtained by enlarging the original photograph two and four times. Compare the size of the Qualcomm Stadium, Jack Murphy Field, in San Diego, Calif., and the adjacent parking lot and freeways shown at the different scales. USGS Earth Science Information Center (ESIC) representatives will assist you in locating and ordering photographs. Please submit the completed checklist and a marked map showing your area of interest to any ESIC.

  4. A Moored Airborne Video System with Nearshore Applications

    NASA Astrophysics Data System (ADS)

    Smith, G.; Lippmann, T.

    2004-12-01

    Over the past two decades researchers have developed video-based remote sensing techniques to measure relevant nearshore variables. Measurements made include spatial patterns in sand bar morphology, run-up oscillations, wave breaking distributions, phase speed and wave angle, and most recently, surface currents within the surf zone and swash. In general, vertical (i.e., downward oriented) photography or videography is preferred to high-oblique land-based systems. However, although aircraft-mounted video systems have been under development for several years, the relatively high cost and short dwell time has limited its widespread application. Thus, most video measurements for research applications are obtained through methods whereby arrays of video cameras are fixed on land and oriented obliquely to the surf zone region of interest. The typically high-oblique imagery is limited in spatial ground coverage by rapidly degrading resolution in the far field, as well as lay-over problems associated with a fluctuating sea surface and high incidence look-angle. In order to alleviate these problems, researchers have attempted mounting video (or photographic) sensors on tethered balloons where long time series can be obtained over large regions of the surf zone without limiting resolution in the far field. In our research we have developed a technique for mounting a video system onboard a tethered helikite, a combination kite and helium-filled blimp (Allsopp Helikites, Ltd.). The video system consists of a downward-looking video camera in a custom weather-proof housing mounted on the keel of the helikite. Also included are a differential GPS receiver, tilt and heading sensor for accurate geometrical transformation, micro-processor, onboard power supply, and wireless data link. In this presentation, we will discuss the system in more detail, the image resolution and accuracies, and the expected applications to nearshore processes research. This work is sponsored by the Office

  5. Use of Remote Sensed Imagery to Evaluate Land Cover Change: North Platte River Basin

    NASA Astrophysics Data System (ADS)

    Kerr, G.; Piburn, J.; Rudolph, J.; Tootle, G.; Marks, J. A.

    2012-12-01

    High-resolution remotely sensed data for land cover classification, such as LiDAR, is oftentimes not readily available in rural areas. For basin-wide and other small-scale projects, proprietary LiDAR collection may not be cost effective, and an alternative is found in the National Agricultural Imagery Program (NAIP). NAIP imagery provides 1-meter resolution aerial imagery for the entire United States, temporally updated on a state-by-state basis at no charge to the user. NAIP imagery was used to classify forest cover change due to beetle infestation in the roughly 4,000 square-mile North Platte River Basin (NPRB). Using an interactive classification method with an underlying maximum likelihood classification algorithm, it was found that forest cover in the NPRB decreased by approximately 25% from 2005-2006 to 2009. Using focal histograms to refine the classifications to large-scale USGS 7.5 minute quadrangles, the land cover results will be used as parameters in the Variable Infiltration Capacity (VIC) Macroscale Hydrologic Model to estimate how this physical change in land cover affects the riparian system of the NPRB, specifically streamflow response.

  6. Unmanned aerial survey of elephants.

    PubMed

    Vermeulen, Cédric; Lejeune, Philippe; Lisein, Jonathan; Sawadogo, Prosper; Bouché, Philippe

    2013-01-01

    The use of a UAS (Unmanned Aircraft System) was tested to survey large mammals in the Nazinga Game Ranch in the south of Burkina Faso. The Gatewing ×100™ equipped with a Ricoh GR III camera was used to test animal reaction as the UAS passed, and visibility on the images. No reaction was recorded as the UAS passed at a height of 100 m. Observations, made on a set of more than 7000 images, revealed that only elephants (Loxodonta africana) were easily visible while medium and small sized mammals were not. The easy observation of elephants allows experts to enumerate them on images acquired at a height of 100 m. We, therefore, implemented an aerial strip sample count along transects used for the annual wildlife foot count. A total of 34 elephants were recorded on 4 transects, each overflown twice. The elephant density was estimated at 2.47 elephants/km(2) with a coefficient of variation (CV%) of 36.10%. The main drawback of our UAS was its low autonomy (45 min). Increased endurance of small UAS is required to replace manned aircraft survey of large areas (about 1000 km of transect per day vs 40 km for our UAS). The monitoring strategy should be adapted according to the sampling plan. Also, the UAS is as expensive as a second-hand light aircraft. However the logistic and flight implementation are easier, the running costs are lower and its use is safer. Technological evolution will make civil UAS more efficient, allowing them to compete with light aircraft for aerial wildlife surveys.

  7. Unmanned Aerial Survey of Elephants

    PubMed Central

    Vermeulen, Cédric; Lejeune, Philippe; Lisein, Jonathan; Sawadogo, Prosper; Bouché, Philippe

    2013-01-01

    The use of a UAS (Unmanned Aircraft System) was tested to survey large mammals in the Nazinga Game Ranch in the south of Burkina Faso. The Gatewing ×100™ equipped with a Ricoh GR III camera was used to test animal reaction as the UAS passed, and visibility on the images. No reaction was recorded as the UAS passed at a height of 100 m. Observations, made on a set of more than 7000 images, revealed that only elephants (Loxodonta africana) were easily visible while medium and small sized mammals were not. The easy observation of elephants allows experts to enumerate them on images acquired at a height of 100 m. We, therefore, implemented an aerial strip sample count along transects used for the annual wildlife foot count. A total of 34 elephants were recorded on 4 transects, each overflown twice. The elephant density was estimated at 2.47 elephants/km2 with a coefficient of variation (CV%) of 36.10%. The main drawback of our UAS was its low autonomy (45 min). Increased endurance of small UAS is required to replace manned aircraft survey of large areas (about 1000 km of transect per day vs 40 km for our UAS). The monitoring strategy should be adapted according to the sampling plan. Also, the UAS is as expensive as a second-hand light aircraft. However the logistic and flight implementation are easier, the running costs are lower and its use is safer. Technological evolution will make civil UAS more efficient, allowing them to compete with light aircraft for aerial wildlife surveys. PMID:23405088
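
    The density and coefficient of variation reported for the strip sample count can be reproduced approximately with the usual strip-transect arithmetic. The per-transect counts, transect lengths and strip width below are hypothetical (the abstract gives only the totals), chosen so the output lands near 2.47 elephants/km2 with a CV around 36%.

```python
import math

def strip_density(counts, lengths_km, strip_width_km):
    """Animals per km^2 from strip sample counts, with the coefficient of
    variation computed from the spread of per-transect densities."""
    densities = [c / (l * strip_width_km) for c, l in zip(counts, lengths_km)]
    mean_d = sum(densities) / len(densities)
    var = sum((d - mean_d) ** 2 for d in densities) / (len(densities) - 1)
    cv_pct = 100 * math.sqrt(var) / mean_d
    return mean_d, cv_pct

# Hypothetical per-transect counts and dimensions (34 elephants in total on 4 transects)
counts = [12, 6, 10, 6]
lengths_km = [10, 10, 10, 10]
print(strip_density(counts, lengths_km, strip_width_km=0.35))   # ~ (2.43, 35.3)
```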

  8. The DOE ARM Aerial Facility

    SciTech Connect

    Schmid, Beat; Tomlinson, Jason M.; Hubbe, John M.; Comstock, Jennifer M.; Mei, Fan; Chand, Duli; Pekour, Mikhail S.; Kluzek, Celine D.; Andrews, Elisabeth; Biraud, S.; McFarquhar, Greg

    2014-05-01

    The Department of Energy Atmospheric Radiation Measurement (ARM) Program is a climate research user facility operating stationary ground sites that provide long-term measurements of climate-relevant properties, mobile ground- and ship-based facilities to conduct shorter field campaigns (6-12 months), and the ARM Aerial Facility (AAF). The airborne observations acquired by the AAF enhance the surface-based ARM measurements by providing high-resolution in-situ measurements for process understanding, retrieval-algorithm development, and model evaluation that are not possible using ground- or satellite-based techniques. Several ARM aerial efforts were consolidated into the AAF in 2006. With the exception of a small aircraft used for routine measurements of aerosols and carbon cycle gases, AAF at the time had no dedicated aircraft and only a small number of instruments at its disposal. In this "virtual hangar" mode, AAF successfully carried out several missions by contracting with organizations and investigators who provided their research aircraft and instrumentation. In 2009, AAF started managing operations of the Battelle-owned Gulfstream I (G-1) large twin-turboprop research aircraft. Furthermore, the American Recovery and Reinvestment Act of 2009 provided funding for the procurement of over twenty new instruments to be used aboard the G-1 and other AAF virtual-hangar aircraft. AAF now executes missions in both virtual- and real-hangar modes, producing freely available datasets for studying aerosol, cloud, and radiative processes in the atmosphere. AAF is also engaged in the maturation and testing of newly developed airborne sensors to help foster the next generation of airborne instruments.

  9. Chosen Aspects of the Production of the Basic Map Using Uav Imagery

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Fryskowska, A.; Wierzbicki, D.; Nerc, P.

    2016-06-01

    For several years there has been increasing interest in the use of unmanned aerial vehicles to acquire image data from low altitudes. Considering the cost-effectiveness of UAV flight time compared with conventional airplanes, UAVs are advantageous for generating accurate large-scale orthophotos. Through the development of UAV imagery, large-scale basic maps can be updated. These maps are cartographic products used for registration, economic, and strategic planning, and they serve as the basis for other cartographic products, for example maps used in building planning. The article presents an assessment of the usefulness of orthophotos based on UAV imagery for updating the basic map. In the research a compact, non-metric camera mounted on a fixed wing powered by an electric motor was used. The tested area covered flat agricultural and woodland terrain. Orthorectification and its analysis were carried out with the INPHO UASMaster programme. Because of UAV instability during low-altitude imaging, the use of non-metric digital cameras, and the low-accuracy GPS-INS sensors, the geometry of the images is visibly poorer than that of conventional digital aerial photos (large values of the phi and kappa angles). Therefore, low-altitude images typically require large along- and across-track overlap, usually above 70%. As a result of the research, orthoimages were obtained with a resolution of 0.06 m and a horizontal accuracy of 0.10 m. Digitized basic maps were used as the reference data. The accuracy of the orthoimages vs. the basic maps was estimated based on the study and on the available reference sources. It was found that the geometric accuracy and interpretative advantages of the final orthoimages allow the updating of basic maps. It is estimated that such an update of basic maps based on UAV imagery reduces processing time by approx. 40%.
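    The horizontal-accuracy figure above is the kind of result a simple check-point comparison yields. The sketch below computes planimetric RMSE between points measured on the orthoimage and the same points on the reference basic map; the coordinates are invented for illustration, and the paper's exact accuracy procedure may differ.

        import numpy as np

        def horizontal_rmse(ortho_xy, reference_xy):
            """Both arguments: (n, 2) arrays of easting/northing in metres."""
            d = np.asarray(ortho_xy, float) - np.asarray(reference_xy, float)
            rmse_x, rmse_y = np.sqrt((d ** 2).mean(axis=0))
            return rmse_x, rmse_y, np.hypot(rmse_x, rmse_y)  # combined planimetric RMSE

        check_ortho = [[500010.12, 5700020.05], [500120.30, 5700310.44], [500410.91, 5700150.02]]
        check_map   = [[500010.05, 5700020.13], [500120.21, 5700310.52], [500410.84, 5700149.95]]
        print(horizontal_rmse(check_ortho, check_map))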

  10. Habitat Mapping and Classification of the Grand Bay National Estuarine Research Reserve using AISA Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Rose, K.

    2012-12-01

    Habitat mapping and classification provides essential information for land use planning and ecosystem research, monitoring and management. At the Grand Bay National Estuarine Research Reserve (GRDNERR), Mississippi, habitat characterization of the Grand Bay watershed will also be used to develop a decision-support tool for the NERR's managers and state and local partners. Grand Bay NERR habitat units were identified using a combination of remotely sensed imagery, aerial photography and elevation data. Airborne Imaging Spectrometer for Applications (AISA) hyperspectral data, acquired 5 and 6 May 2010, was analyzed and classified using ENVI v4.8 and v5.0 software. The AISA system was configured to return 63 bands of digital imagery data with a spectral range of 400 to 970 nm (VNIR), spectral resolution (bandwidth) at 8.76 nm, and 1 m spatial resolution. Minimum Noise Fraction (MNF) and Inverse Minimum Noise Fraction were applied to the data prior to using Spectral Angle Mapper ([SAM] supervised) and ISODATA (unsupervised) classification techniques. The resulting class image was exported to ArcGIS 10.0 and visually inspected and compared with the original imagery as well as auxiliary datasets to assist in the attribution of habitat characteristics to the spectral classes, including: National Agricultural Imagery Program (NAIP) aerial photography, Jackson County, MS, 2010; USFWS National Wetlands Inventory, 2007; an existing GRDNERR habitat map (2004), SAV (2009) and salt panne (2002-2003) GIS produced by GRDNERR; and USACE lidar topo-bathymetry, 2005. A field survey to validate the map's accuracy will take place during the 2012 summer season. ENVI's Random Sample generator was used to generate GIS points for a ground-truth survey. The broad range of coastal estuarine habitats and geomorphological features- many of which are transitional and vulnerable to environmental stressors- that have been identified within the GRDNERR point to the value of the Reserve for
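    The supervised step above relies on the Spectral Angle Mapper rule, which assigns each pixel to the reference spectrum with the smallest spectral angle. The minimal sketch below illustrates that rule only; the endmember spectra, band count, and angle threshold are hypothetical and do not reproduce the ENVI workflow or the MNF preprocessing described above.

        import numpy as np

        def sam_classify(image, endmembers, max_angle_rad=0.10):
            """image: (rows, cols, bands); endmembers: (n_classes, bands) reference spectra."""
            rows, cols, bands = image.shape
            pix = image.reshape(-1, bands).astype(float)
            refs = np.asarray(endmembers, float)
            # Cosine of the angle between every pixel spectrum and every reference spectrum
            cosang = (pix @ refs.T) / (np.linalg.norm(pix, axis=1, keepdims=True)
                                       * np.linalg.norm(refs, axis=1))
            angles = np.arccos(np.clip(cosang, -1.0, 1.0))
            labels = angles.argmin(axis=1)
            labels[angles.min(axis=1) > max_angle_rad] = -1   # leave poorly matched pixels unclassified
            return labels.reshape(rows, cols)

        # Shape-only demo with random data standing in for the 63-band AISA cube;
        # the loose threshold is just so some pixels get classified in this toy case.
        rng = np.random.default_rng(4)
        cube = rng.uniform(0.0, 1.0, (20, 20, 63))
        ends = rng.uniform(0.0, 1.0, (5, 63))
        print(np.unique(sam_classify(cube, ends, max_angle_rad=1.0), return_counts=True))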

  11. Echocardiogram video summarization

    NASA Astrophysics Data System (ADS)

    Ebadollahi, Shahram; Chang, Shih-Fu; Wu, Henry D.; Takoma, Shin

    2001-05-01

    This work aims at developing innovative algorithms and tools for summarizing echocardiogram videos. Specifically, we summarize digital echocardiogram videos by temporally segmenting them into their constituent views and representing each view by its most informative frame. For the segmentation we take advantage of the well-defined spatio-temporal structure of echocardiogram videos. Two different criteria are used: the presence or absence of color, and the shape of the region of interest (ROI) in each frame of the video. The change in the ROI is due to the different echocardiogram modes present in one study. The representative frame is defined to be the frame corresponding to the end-diastole of the heart cycle. To locate the end-diastole, we track the ECG of each frame to find the exact time the time-marker on the ECG crosses the peak of the R-wave; the corresponding frame is chosen as the key-frame. The entire echocardiogram video can be summarized into either a static summary, which is a storyboard-type summary, or a dynamic summary, which is a concatenation of selected segments of the echocardiogram video. To the best of our knowledge, this is the first automated system for summarizing echocardiogram videos based on visual content.
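    A rough illustration of the key-frame idea above: if every video frame has an associated ECG sample, frames at the R-wave peaks can be taken as the end-diastolic representatives. The peak detector below is deliberately naive and the frame-to-ECG mapping is hypothetical; the paper's time-marker tracking is more involved.

        import numpy as np

        def r_peak_frames(ecg_per_frame, min_separation=10):
            """Return frame indices whose ECG value is a prominent local maximum."""
            ecg = np.asarray(ecg_per_frame, float)
            threshold = ecg.mean() + 2.0 * ecg.std()          # crude amplitude threshold
            peaks = []
            for i in range(1, len(ecg) - 1):
                is_peak = ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] >= ecg[i + 1]
                if is_peak and (not peaks or i - peaks[-1] >= min_separation):
                    peaks.append(i)
            return peaks

        # Hypothetical ECG trace: baseline noise with an R-wave-like spike every 30 frames
        sig = np.random.default_rng(1).normal(0.0, 0.05, 300)
        sig[30::30] += 1.0
        print(r_peak_frames(sig))                             # indices of candidate key-frames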

  12. Interventional video tomography

    NASA Astrophysics Data System (ADS)

    Truppe, Michael J.; Pongracz, Ferenc; Ploder, Oliver; Wagner, Arne; Ewers, Rolf

    1995-05-01

    Interventional Video Tomography (IVT) is a new imaging modality for Image Directed Surgery that visualizes in real time, intraoperatively, the spatial position of surgical instruments relative to the patient's anatomy. The video imaging detector is based on a special camera equipped with an optical viewing and lighting system and electronic 3D sensors. When combined with an endoscope it is used for examining the inside of cavities or hollow organs of the body from many different angles. The surface topography of objects is reconstructed from a sequence of monocular video or endoscopic images. To increase the accuracy and speed of the reconstruction, the relative movement between the objects and the endoscope is continuously tracked by electronic sensors. The IVT image sequence represents a 4D data set in stereotactic space and contains image, surface topography, and motion data. In ENT surgery, an IVT image sequence of the planned and so far accessible surgical path is acquired prior to surgery. To simulate the surgical procedure, the cross-sectional imaging data is superimposed with the digitally stored IVT image sequence. During surgery, the video sequence component of the IVT simulation is substituted by the live video source. The IVT technology makes the use of 3D digitizing probes for the patient-image coordinate transformation obsolete. The image fusion of medical imaging data with live video sources is the first practical use of augmented reality in medicine. During surgery, a head-up display is used to overlay real-time reformatted cross-sectional imaging data on the live video image.
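    The overlay step described above ultimately amounts to mapping points from the pre-operative imaging coordinate frame through a tracked pose into the camera frame and projecting them onto the live video. The sketch below shows that coordinate chain with a plain pinhole model; the intrinsics, rotation, and translation are hypothetical, and this is a generic formulation rather than the IVT system's actual calibration.

        import numpy as np

        def overlay_points(points_img, R, t, fx, fy, cx, cy):
            """points_img: (n, 3) points in imaging (CT/MR) coordinates -> (n, 2) pixel coordinates."""
            pts_cam = points_img @ R.T + t                    # rigid transform into the camera frame
            x = fx * pts_cam[:, 0] / pts_cam[:, 2] + cx       # pinhole projection
            y = fy * pts_cam[:, 1] / pts_cam[:, 2] + cy
            return np.stack([x, y], axis=1)

        R = np.eye(3)                                         # identity pose, purely for illustration
        t = np.array([0.0, 0.0, 120.0])                       # object 120 mm in front of the camera
        pts = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 0.0]])
        print(overlay_points(pts, R, t, fx=800, fy=800, cx=320, cy=240))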

  13. Movement and stretching imagery during flexibility training.

    PubMed

    Vergeer, Ineke; Roberts, Jenny

    2006-02-01

    The aim of this study was to examine the effect of movement and stretching imagery on increases in flexibility. Thirty volunteers took part in a 4-week flexibility training programme. They were randomly assigned to one of three groups: (1) movement imagery, where participants imagined moving the limb they were stretching; (2) stretching imagery, where participants imagined the physiological processes involved in stretching the muscle; and (3) control, where participants did not engage in mental imagery. Active and passive range of motion around the hip was assessed before and after the programme. Participants provided specific ratings of vividness and comfort throughout the programme. Results showed significant increases in flexibility over time, but no differences between the three groups. A significant relationship was found, however, between improved flexibility and vividness ratings in the movement imagery group. Furthermore, both imagery groups scored significantly higher than the control group on levels of comfort, with the movement imagery group also scoring significantly higher than the stretching imagery group. We conclude that the imagery had stronger psychological than physiological effects, but that there is potential for enhancing physiological effects by maximizing imagery vividness, particularly for movement imagery.

  14. High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.

    2015-01-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and the surface characteristics of the area. While recent research in urban drainage has focused on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases because in many parts of the globe accurate land-use information is generally lacking, as detailed image data is unavailable. Modern unmanned aerial vehicles (UAVs) allow acquiring high-resolution images at a local level at comparably low cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and of using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and from standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison in which they are used as input for an urban drainage model. We then evaluate the influence that the different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual subcatchments with respect to relevant attributes, such as peak runoff and volume. Finally, we evaluate the model
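    As a sketch of how a classified imperviousness map feeds a drainage model, the snippet below derives the impervious fraction per subcatchment from two co-registered rasters. The class codes, raster shapes, and subcatchment layout are hypothetical; the study's pipeline and model setup are of course more elaborate.

        import numpy as np

        def imperviousness_by_subcatchment(class_raster, subcatchment_raster, impervious_codes):
            """Both rasters share the same shape; returns {subcatchment id: impervious fraction}."""
            impervious = np.isin(class_raster, list(impervious_codes))
            fractions = {}
            for sc_id in np.unique(subcatchment_raster):
                mask = subcatchment_raster == sc_id
                fractions[int(sc_id)] = float(impervious[mask].mean())
            return fractions

        rng = np.random.default_rng(2)
        classes = rng.integers(0, 4, (100, 100))              # e.g. 0=roof, 1=road, 2=grass, 3=trees
        subcatch = np.repeat(np.arange(4), 2500).reshape(100, 100)
        print(imperviousness_by_subcatchment(classes, subcatch, impervious_codes={0, 1}))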

  15. Cooperative Lander-Surface/Aerial Microflyer Missions for Mars Exploration

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Lay, Norman; Hine, Butler; Zornetzer, Steven

    2004-01-01

    Concepts are being investigated for exploratory missions to Mars based on Bioinspired Engineering of Exploration Systems (BEES), the guiding principle of this effort to develop biomorphic explorers. The novelty lies in the use of a robust telecom architecture for mission data return, utilizing multiple local relays (including the lander itself and the explorers in a dual role as local relays) to enable ranges of 10 to 1,000 km and downlink of color imagery. As illustrated in Figure 1, multiple microflyers that can be either surface- or aerially launched are envisioned in shepherding, metamorphic, and imaging roles. These microflyers embody key bio-inspired principles in their flight control, navigation, and visual search operations. Honey-bee-inspired algorithms that utilize visual cues to perform autonomous navigation operations, such as terrain following, will be used. The instrument suite will consist of a panoramic imager and a polarization imager specifically optimized to detect ice and water. For microflyers, particularly at small sizes, bio-inspired solutions appear to offer better alternatives than conventional engineered approaches. This investigation addresses a wide range of interrelated issues, including the desired scientific data, sizes, rates, and communication ranges that can be achieved in alternative mission scenarios. The mission illustrated in Figure 1 offers the most robust telecom architecture and the longest range for exploration, with two landers available as main local relays in addition to an ephemeral aerial-probe local relay. The shepherding and metamorphic planes serve in a dual role as local relays and image data collection/storage nodes. Appropriate placement of the landing site for the scout lander with respect to the main mission lander can allow coverage of extremely large ranges and enable exhaustive survey of the area of interest. In particular, this mission could help with the path planning and risk

  16. Unmanned aerial optical systems for spatial monitoring of Antarctic mosses

    NASA Astrophysics Data System (ADS)

    Lucieer, Arko; Turner, Darren; Veness, Tony; Malenovsky, Zbynek; Harwin, Stephen; Wallace, Luke; Kelcey, Josh; Robinson, Sharon

    2013-04-01

    The Antarctic continent has experienced major changes in temperature, wind speed, and stratospheric ozone levels during the last 50 years. In a manner similar to tree rings, old growth shoots of Antarctic mosses, the only plants on the continent, also preserve a climate record of their surrounding environment. This makes them an ideal bio-indicator of Antarctic climate change. Spatially extensive ground sampling of mosses is laborious and time limited due to the short Antarctic growing season. There is therefore a need for an efficient method to spatially monitor climate-change-induced stress of the Antarctic moss flora. Cloudy weather and the high spatial fragmentation of the moss turfs make satellite imagery unsuitable for this task. Unmanned aerial systems (UAS), flying at low altitudes and collecting image data even under full overcast, can, however, overcome the insufficiency of satellite remote sensing. We therefore developed a scientific UAS, consisting of a remote-controlled micro-copter carrying different on-board remote sensing optical sensors, tailored to perform fast and cost-effective mapping of Antarctic flora at ultra-high spatial resolution (1-10 cm depending on flight altitude). A single-lens reflex (SLR) camera carried by the UAS acquires multi-view aerial photography which, when processed by the Structure-from-Motion computer vision algorithm, provides an accurate three-dimensional digital surface model (DSM) at ultra-high spatial resolution. The DSM is the key input parameter for modelling local seasonal snowmelt run-off, which provides the mosses with their vital water supply. A lightweight multispectral camera on board the UAS collects images in six selected spectral wavebands with a full-width-half-maximum (FWHM) of 10 nm. The spectral bands can be used to compute various vegetation optical indices, e.g. the Normalized Difference Vegetation Index (NDVI) or the Photochemical Reflectance Index (PRI), assessing the actual physiological state of polar vegetation. Recently
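    The two indices named above are simple band combinations. A minimal sketch, assuming hypothetical reflectance images for the relevant bands (NDVI from near-infrared and red reflectance; PRI conventionally from reflectance near 531 nm and 570 nm):

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index from NIR and red reflectance."""
            nir, red = np.asarray(nir, float), np.asarray(red, float)
            return (nir - red) / (nir + red + 1e-9)

        def pri(r531, r570):
            """Photochemical Reflectance Index from ~531 nm and ~570 nm reflectance."""
            r531, r570 = np.asarray(r531, float), np.asarray(r570, float)
            return (r531 - r570) / (r531 + r570 + 1e-9)

        # Hypothetical 64 x 64 reflectance images standing in for the multispectral bands
        rng = np.random.default_rng(3)
        bands = {name: rng.uniform(0.02, 0.6, (64, 64)) for name in ("red", "nir", "b531", "b570")}
        print(float(ndvi(bands["nir"], bands["red"]).mean()),
              float(pri(bands["b531"], bands["b570"]).mean()))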

  17. Mental Imagery in Depression: Phenomenology, Potential Mechanisms, and Treatment Implications.

    PubMed

    Holmes, Emily A; Blackwell, Simon E; Burnett Heyes, Stephanie; Renner, Fritz; Raes, Filip

    2016-01-01

    Mental imagery is an experience like perception in the absence of a percept. It is a ubiquitous feature of human cognition, yet it has been relatively neglected in the etiology, maintenance, and treatment of depression. Imagery abnormalities in depression include an excess of intrusive negative mental imagery; impoverished positive imagery; bias for observer perspective imagery; and overgeneral memory, in which specific imagery is lacking. We consider the contribution of imagery dysfunctions to depressive psychopathology and implications for cognitive behavioral interventions. Treatment advances capitalizing on the representational format of imagery (as opposed to its content) are reviewed, including imagery rescripting, positive imagery generation, and memory specificity training. Consideration of mental imagery can contribute to clinical assessment and imagery-focused psychological therapeutic techniques and promote investigation of underlying mechanisms for treatment innovation. Research into mental imagery in depression is at an early stage. Work that bridges clinical psychology and neuroscience in the investigation of imagery-related mechanisms is recommended.

  18. The application of ERTS imagery to mapping snow cover in the western United States. [Salt Verde in Arizona and Sierra Nevada California

    NASA Technical Reports Server (NTRS)

    Barnes, J. C. (Principal Investigator); Bowley, C. J.; Simmes, D. A.

    1974-01-01

    The author has identified the following significant results. In much of the western United States a large part of the utilized water comes from accumulated mountain snowpacks; thus, accurate measurements of snow distributions are required for input to streamflow prediction models. The application of ERTS-1 imagery for mapping snow has been evaluated for two geographic areas, the Salt-Verde watershed in central Arizona and the southern Sierra Nevada in California. Techniques have been developed to identify snow and to differentiate between snow and cloud. The snow extent for these two drainage areas has been mapped from the MSS-5 (0.6 - 0.7 microns) imagery and compared with aerial survey snow charts, aircraft photography, and ground-based snow measurements. The results indicate that ERTS imagery has substantial practical applications for snow mapping. Snow extent can be mapped from ERTS-1 imagery in more detail than is depicted on aerial survey snow charts. Moreover, in Arizona and southern California cloud obscuration does not appear to be a serious deterrent to the use of satellite data for snow survey. The costs involved in deriving snow maps from ERTS-1 imagery appear to be very reasonable in comparison with existing data collection methods.

  19. Video game epilepsy.

    PubMed

    Singh, R; Bhalla, A; Lehl, S S; Sachdev, A

    2001-12-01

    Reflex epilepsy is the commonest form of epilepsy in which seizures are provoked by a specific external stimulus. Photosensitive reflex epilepsy is provoked by environmental flicker stimuli. Video game epilepsy is considered to be its variant or a pattern-sensitive epilepsy. The mean age of onset is around puberty, and boys are affected more commonly as they are more inclined to play video games. A television set or computer screen is the commonest precipitant. The treatment remains the removal of the offending stimulus along with drug therapy. Long-term prognosis in these patients is better as photosensitivity gradually declines with increasing age. We present two such cases of epilepsy induced by video games.

  20. Overview of NASA aerial applications research

    NASA Technical Reports Server (NTRS)

    Holmes, B. J.

    1978-01-01

    Aerial applications research conducted by NASA seeks improvements in environmental safety, fuel efficiency, and aircraft productivity and safety. From 1976 to 1978, NASA studied the technology needs of the aerial applications industry and developed in-house research capabilities for meeting those needs. This paper presents the research plans developed by NASA. High potential appears to exist for near term contributions to the industry from existing NASA research capabilities in drift reduction, stall departure safety, and dry materials dispersal system technology. A brief, annotated bibliography is included listing documents recently produced as a result of NASA aerial applications research efforts.