Science.gov

Sample records for aerial video imagery

  1. Acquisition and registration of aerial video imagery of urban traffic

    SciTech Connect

    Loveland, Rohan C

    2008-01-01

    The amount of information available about urban traffic from aerial video imagery is extremely high. Here we discuss the collection of such video imagery from a helicopter platform with a low-cost sensor, and the post-processing used to correct radial distortion in the data and register it. The radial distortion correction is accomplished using a Harris model. The registration is implemented in a two-step process, using a globally applied polyprojective correction model followed by a fine-scale local displacement field adjustment. The resulting cleaned-up data are sufficiently well registered to allow subsequent straightforward vehicle tracking.
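
    The abstract names a Harris model for radial distortion correction; this is commonly understood as the single-parameter division model. A minimal sketch of undistorting one pixel under that assumption (the function name, parameters, and distortion centre are illustrative, not from the paper):

```python
def undistort_point(xd, yd, k, cx=0.0, cy=0.0):
    """Map a distorted pixel (xd, yd) to undistorted coordinates with the
    single-parameter division model: p_u = c + (p_d - c) / (1 + k * r^2),
    where r is the distorted radius from the distortion centre (cx, cy)."""
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy
    s = 1.0 / (1.0 + k * r2)
    return cx + dx * s, cy + dy * s
```

    With k = 0 the mapping is the identity; positive k pulls barrel-distorted points back toward the centre.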

  2. Data annotation of aerial reconnaissance imagery and exploitation

    NASA Astrophysics Data System (ADS)

    Wareberg, P. Gunnar; Prunes, V.; Scholes, Richard W.

    1995-09-01

    This paper reviews the use of LED recording head assemblies (RHAs) for film annotation in aerial reconnaissance cameras and discusses code matrix block readers (CMBRs). Annotation of video imagery is also covered.

  3. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers, one of whom had designed and developed TV systems for the Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs, designed a wing-mounted fiberglass camera pod. The camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, and others.

  4. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, precise 3D measurement of objects poses some challenges. In this paper the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then tie points are generated with the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied to determine the camera position and orientation parameters. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor and is equipped with a 25 mm lens, mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show that the accuracy of the 3D model generated from thermal images is comparable to that of a DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows a Root Mean Square Error (RMSE) smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.
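
    The evaluation step compares the thermal DSM against surveyed GCPs by per-axis RMSE. A minimal sketch of that comparison (function name and data layout are ours; points are (X, Y, Z) tuples):

```python
import math

def per_axis_rmse(estimated, measured):
    """Per-axis RMSE between DSM-derived GCP positions and the surveyed
    ground truth; returns (RMSE_X, RMSE_Y, RMSE_Z)."""
    n = len(estimated)
    sums = [0.0, 0.0, 0.0]
    for e, m in zip(estimated, measured):
        for i in range(3):
            sums[i] += (e[i] - m[i]) ** 2
    return tuple(math.sqrt(s / n) for s in sums)
```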

  5. COCOA: tracking in aerial imagery

    NASA Astrophysics Data System (ADS)

    Ali, Saad; Shah, Mubarak

    2006-05-01

    Unmanned Aerial Vehicles (UAVs) are becoming a core intelligence asset for reconnaissance, surveillance and target tracking in urban and battlefield settings. In order to achieve the goal of automated tracking of objects in UAV videos we have developed a system called COCOA. It processes the video stream through a number of stages. In the first stage, platform motion compensation is performed. Moving object detection then identifies regions of interest, from which object contours are extracted by level-set-based segmentation. Finally, blob-based tracking is performed for each detected object, and global tracks are generated for higher-level processing. COCOA is customizable to different sensor resolutions and is capable of tracking targets as small as 100 pixels. It works seamlessly for both visible and thermal imaging modes. The system is implemented in Matlab and works in batch mode.
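
    The abstract does not detail the moving object detection stage; a toy sketch of the underlying idea, frame differencing applied after the motion-compensation stage has aligned consecutive frames (frames as 2D lists of intensities; the threshold is illustrative):

```python
def change_mask(prev, curr, thresh=25):
    """Binary change mask by absolute frame differencing; assumes the two
    frames were already aligned by platform motion compensation."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

    Nonzero entries in the mask mark candidate moving regions handed to segmentation and blob tracking.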

  6. Advanced Image Processing of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn; Jobson, Daniel J.; Rahman, Zia-ur; Hines, Glenn

    2006-01-01

    Aerial imagery of the Earth is an invaluable tool for the assessment of ground features, especially during times of disaster. Researchers at the NASA Langley Research Center have developed techniques which have proven to be useful for such imagery. Aerial imagery from various sources, including Langley's Boeing 757 Aries aircraft, has been studied extensively. This paper discusses these studies and demonstrates that better-than-observer imagery can be obtained even when visibility is severely compromised. A real-time, multi-spectral experimental system will be described and numerous examples will be shown.

  7. Absolute High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-Time Aerial Video Imagery for Geo-referenced Orthophoto Registration

    NASA Astrophysics Data System (ADS)

    Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter

    This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to pair an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data, combined with live video images from an onboard camera, to register the local video images against a priori geo-referenced orthophotos. This yields a precise, drift-free absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.
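
    Once the live image is registered to the geo-referenced orthophoto, converting the ground vehicle's pixel position into world coordinates is an application of the orthophoto's affine geotransform. A sketch using the GDAL-style six-coefficient convention (an assumption; the paper does not state its transform parameterization):

```python
def pixel_to_world(col, row, gt):
    """Apply a GDAL-style affine geotransform
    gt = (x_origin, px_width, row_rot, y_origin, col_rot, px_height)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y
```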

  8. Agency Video, Audio and Imagery Library

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2015-01-01

    The purpose of this presentation was to inform the ISS International Partners of the new NASA Agency Video, Audio and Imagery Library (AVAIL) website. AVAIL is a new resource for the public to search for and download NASA-related imagery, and is not intended to replace the current process by which the International Partners receive their Space Station imagery products.

  9. Object and activity detection from aerial video

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Shi, Feng; Liu, Xin; Ghazel, Mohsen

    2015-05-01

    Aerial video surveillance has advanced significantly in recent years, as inexpensive high-quality video cameras and airborne platforms have become more readily available. Video has become an indispensable part of military operations and is now becoming increasingly valuable in the civil and paramilitary sectors. Such surveillance capabilities are useful for battlefield intelligence and reconnaissance as well as for monitoring major events, border control and critical infrastructure. However, monitoring this growing flood of video data requires significant effort from increasingly large numbers of video analysts. We have developed a suite of aerial video exploitation tools that relieve analysts of mundane monitoring by detecting, and alerting them to, objects and activities that require their attention. These tools can be used for both tactical applications and post-mission analytics, so that the video data can be exploited more efficiently and in a more timely manner. A feature-based approach and a pixel-based approach have been developed for the Video Moving Target Indicator (VMTI) to detect moving objects in aerial video in real time. Such moving objects can then be classified by a person-detector algorithm trained on representative aerial data. We have also developed an activity detection tool that can detect activities of interest in aerial video, such as person-vehicle interaction. We have implemented a flexible framework so that new processing modules can be added easily, and the Graphical User Interface (GUI) allows the user to configure the processing pipeline at run time to evaluate different algorithms and parameters. Promising experimental results have been obtained using these tools, and an evaluation has been carried out to characterize their performance.
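
    The paper does not specify its pixel-based VMTI; one common pixel-based scheme (not necessarily the authors') is an exponential running-average background model with per-pixel thresholding, sketched here on 2D lists with illustrative parameters:

```python
def update_background(bg, frame, alpha=0.05):
    """Exponentially weighted running-average background, per pixel."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Pixels deviating strongly from the background model are flagged
    as candidate moving targets."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```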

  10. Building and road detection from large aerial imagery

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Aoki, Yoshimitsu

    2015-02-01

    Building and road detection from aerial imagery has many applications in a wide range of areas, including urban design, real-estate management, and disaster relief. Extracting buildings and roads from aerial imagery has traditionally been performed manually by human experts, making it a very costly and time-consuming process. Our goal is to develop a system for automatically detecting buildings and roads directly from aerial imagery. Many attempts at automatic aerial imagery interpretation have been proposed in the remote sensing literature, but much of the early work uses local features to classify each pixel or segment with an object label, so this kind of approach needs some prior knowledge of object appearance or of the class-conditional distribution of pixel values; some works also need a segmentation step as pre-processing. We therefore use Convolutional Neural Networks (CNNs) to learn a mapping from raw pixel values in aerial imagery to three object labels (buildings, roads, and others); in other words, we generate three-channel maps from raw aerial imagery input. We take a patch-based semantic segmentation approach: we first divide the large aerial imagery into small patches and then train the CNN on those patches and the corresponding three-channel map patches. Finally, we evaluate our system on a large-scale road and building detection dataset that is publicly available.
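
    The patch-based step, dividing a large aerial image into small training tiles, can be sketched as follows (pure Python on a 2D list standing in for one image channel; names are ours):

```python
def extract_patches(image, size, stride):
    """Tile a 2D image (list of rows) into size x size patches, scanning
    left-to-right, top-to-bottom with the given stride."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            patches.append([row[c:c + size] for row in image[r:r + size]])
    return patches
```

    The same tiling would be applied to the three-channel label map so each image patch has a corresponding target patch.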

  11. Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis

    SciTech Connect

    Cheriyadat, Anil M

    2011-01-01

    Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide-area UAV sensor platform. We compare the proposed features with the popular Scale-Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.

  12. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    PubMed

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-01-01

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453
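
    VIVA's association of time-stamped chat call-outs with video tracks is described only at a high level; a toy sketch of the idea, assigning each call-out to the track that is active at its timestamp (the data layout and greedy first-match rule are ours, not the paper's probabilistic method):

```python
def associate_chats(chats, tracks):
    """chats: list of (timestamp, text) analyst call-outs.
    tracks: dict track_id -> (t_start, t_end) active interval.
    Returns {track_id: [text, ...]}; call-outs matching no track are dropped."""
    out = {tid: [] for tid in tracks}
    for t, text in chats:
        for tid, (t0, t1) in tracks.items():
            if t0 <= t <= t1:
                out[tid].append(text)
                break  # greedy: first containing track wins
    return out
```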

  13. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    PubMed Central

    Hammoud, Riad I.; Sahin, Cem S.; Blasch, Erik P.; Rhodes, Bradley J.; Wang, Tao

    2014-01-01

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453

  14. Texture mapping based on multiple aerial imageries in urban areas

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Ye, Siqi; Wang, Yuefeng; Han, Caiyun; Wang, Chenxi

    2015-12-01

    In realistic 3D model reconstruction, the requirements on texture are very high. Texture is one of the key factors affecting the realism of the model, and it is realized through texture mapping technology. In this paper we present a practical approach to texture mapping from multiple aerial images of urban areas, based on photogrammetric theory. The model and the images are matched via the collinearity equations, and, in order to improve texture quality, we describe an automatic approach for selecting the optimal texture for a 3D building from aerial images of multiple strips. Building textures are matched automatically by the algorithm. The experimental results show that the texture mapping platform achieves a high degree of automation and improves the efficiency of 3D model reconstruction.

  15. Encoding and analyzing aerial imagery using geospatial semantic graphs

    SciTech Connect

    Watson, Jean-Paul; Strip, David R.; McLendon, William C.; Parekh, Ojas D.; Diegert, Carl F.; Martin, Shawn Bryan; Rintoul, Mark Daniel

    2014-02-01

    While collection capabilities have yielded an ever-increasing volume of aerial imagery, analytic techniques for identifying patterns in and extracting relevant information from this data have seriously lagged. The vast majority of imagery is never examined, due to a combination of the limited bandwidth of human analysts and limitations of existing analysis tools. In this report, we describe an alternative, novel approach to both encoding and analyzing aerial imagery, using the concept of a geospatial semantic graph. The advantages of our approach are twofold. First, intuitive templates can be easily specified in terms of the domain language in which an analyst converses. These templates can be used to automatically and efficiently search large graph databases for specific patterns of interest. Second, unsupervised machine learning techniques can be applied to automatically identify patterns in the graph databases, exposing recurring motifs in imagery. We illustrate our approach using real-world data for Anne Arundel County, Maryland, and compare the performance of our approach to that of an expert human analyst.
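
    A toy version of template search over a labeled geospatial graph, finding pairs of adjacent nodes whose labels match a two-node template (the graph representation and template form are ours; the report's templates are richer):

```python
def match_pairs(nodes, edges, template):
    """nodes: {node_id: label}; edges: iterable of (u, v) adjacency pairs;
    template: (label_a, label_b). Returns matching (a, b) node pairs,
    checking both edge directions."""
    la, lb = template
    hits = []
    for u, v in edges:
        if nodes[u] == la and nodes[v] == lb:
            hits.append((u, v))
        elif nodes[u] == lb and nodes[v] == la:
            hits.append((v, u))
    return hits
```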

  16. Building population mapping with aerial imagery and GIS data

    NASA Astrophysics Data System (ADS)

    Ural, Serkan; Hussain, Ejaz; Shan, Jie

    2011-12-01

    Geospatial distribution of population at the scale of individual buildings is needed for analysis of people's interaction with their local socio-economic and physical environments. High-resolution aerial images are capable of capturing urban complexities and are considered a potential source for mapping urban features at this fine scale. This paper studies population mapping for individual buildings using aerial imagery and other geographic data. Building footprints and heights are first determined from aerial images and digital terrain and surface models. City zoning maps allow the classification of the buildings as residential or non-residential. Additional ancillary geographic data further filter residential utility buildings out of the residential set and identify houses and apartments. In the final step, census block population, which is publicly available from the U.S. Census, is disaggregated and mapped to individual residential buildings. This paper proposes a modified building population mapping model that takes into account the effects of different types of residential buildings. Detailed steps are described that lead to the identification of residential buildings from imagery and other GIS data layers. Estimated building populations are evaluated per census block with reference to the known census records. This paper presents and evaluates the results of building population mapping in areas of West Lafayette, Lafayette, and Wea Township, all in the state of Indiana, USA.

  17. Oblique Aerial Imagery for NMA - Some Best Practices

    NASA Astrophysics Data System (ADS)

    Remondino, F.; Toschi, I.; Gerke, M.; Nex, F.; Holland, D.; McGill, A.; Talaya Lopez, J.; Magarinos, A.

    2016-06-01

    Oblique airborne photogrammetry is rapidly maturing and is being offered by service providers as a good alternative to, or replacement of, the more traditional vertical imagery, for very different applications (Fig. 1). EuroSDR, representing European National Mapping Agencies (NMAs) and research organizations of most EU states, has been following the development of oblique aerial cameras since 2013, when an ongoing activity was created to continuously update its members on developments in this technology. Nowadays most European NMAs still rely on the traditional workflow based on vertical photography, but changes are slowly taking place at the production level as well. Some NMAs have already run tests internally to understand the potential for their needs, whereas other agencies are discussing the future role of this technology and how to adapt their production pipelines. At the same time, some research institutions and academia have demonstrated the potential of oblique aerial datasets to generate textured 3D city models or large building-block models. The paper provides an overview of tests, best practices and considerations coming from the R&D community and from three European NMAs concerning the use of oblique aerial imagery.

  18. High resolution channel geometry from repeat aerial imagery

    NASA Astrophysics Data System (ADS)

    King, T.; Neilson, B. T.; Jensen, A.; Torres-Rua, A. F.; Winkelaar, M.; Rasmussen, M. T.

    2015-12-01

    River channel cross-sectional geometry is a key attribute controlling river energy balances where surface heat fluxes dominate and discharge varies significantly over short time periods throughout the open-water season. These dynamics are seen in higher-gradient portions of Arctic rivers, where surface heat fluxes can dominate river energy balances and low hillslope storage produces rapidly varying hydrographs. Additionally, Arctic river geometry can be highly dynamic in the face of thermal erosion of the permafrost landscape. While direct in-situ measurements of channel cross-sectional geometry are accurate, they are limited in spatial resolution and coverage, and access can be limited in remote areas. Remote sensing can help gather data at high spatial resolution over large areas; however, techniques for extracting channel geometry are often limited to the banks and floodplains adjacent to the river, as the water column inhibits sensing of the river bed itself. Green-light LiDAR can be used to map bathymetry, but it is expensive, difficult to obtain at large spatial scales, and dependent on water quality. Alternatively, 3D photogrammetry from aerial imagery can be used to analyze the non-wetted portion of the river channel, but extracting full cross sections requires extrapolation into the wetted portion of the river. To bridge these gaps, an approach was developed for using repeat aerial imagery surveys with visible (RGB) and near-infrared (NIR) bands to extract high-resolution channel geometry for the Kuparuk River in the Alaskan Arctic. Aerial imagery surveys were conducted under multiple flow conditions, and water surface geometry (elevation and width) was extracted through photogrammetry. Channel geometry was extracted by combining water surface widths and elevations from multiple flights. The accuracy of these results was compared against field-surveyed cross sections at many locations throughout the study reach and a digital elevation model created under
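
    The core idea, combining water-surface (elevation, width) pairs from flights at different flows into a channel cross-section, can be sketched as follows (a symmetric-channel simplification that is ours, not necessarily the authors'; each flight contributes two bank points at half the observed width):

```python
def cross_section(obs):
    """obs: (water_elevation, width) pairs observed at different flows.
    Each pair contributes bank points at +/- width/2 at that elevation;
    returns (station, elevation) points ordered across the channel."""
    pts = []
    for z, w in obs:
        pts.extend([(-w / 2.0, z), (w / 2.0, z)])
    return sorted(pts)
```

    Higher flows trace the upper banks and lower flows the inner channel, so stacking several flights fills in the geometry between the lowest and highest observed stages.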

  19. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being made in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, who visually examine the images and detect and classify the requested subjects. This is a very tedious task, particularly when the rate of empty images regularly exceeds 90%. In this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. The large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to handle these image sizes efficiently and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined use of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for determining feature vectors for subsequent elimination of false candidates and for classification tasks.

  20. Automated rendezvous and docking with video imagery

    NASA Technical Reports Server (NTRS)

    Rodgers, Mike; Kennedy, Larry Z.

    1991-01-01

    For rendezvous and docking, assessing and tracking relative orientation is necessary within a minimum approach distance. Special target light patterns have previously been considered for use with video sensors for ease of determining relative orientation. A generalization of those approaches is addressed here. At certain ranges, the entire structure of the target vehicle constitutes an acceptable target; at closer ranges, substructures will suffice. Acting on the same principle as human intelligence, these structures can be compared with a memory model to assess the relative orientation and range. Models for comparison are constructed from a CAD facet model and current imagery. This approach requires fast image handling, projection, and comparison techniques, which rely on rapidly developing parallel processing technology. Relative orientation and range assessment consists of successful comparison of the perceived target aspect with a known aspect. Generating a known projection from a model within the required times, say subsecond times, is only now approaching feasibility. With this capability, the rates of comparison used by the human brain can be approached, and arbitrary known structures can be compared in reasonable times. Future space programs will have access to powerful computation devices which far exceed even this capability; for example, it will become possible to assess unknown structures and then control rendezvous and docking, all at very fast rates. The first step, which has current utility, namely applying this approach to known structures, is taken here.

  1. Automated rendezvous and docking with video imagery

    NASA Astrophysics Data System (ADS)

    Rodgers, Mike; Kennedy, Larry Z.

    For rendezvous and docking, assessing and tracking relative orientation is necessary within a minimum approach distance. Special target light patterns have previously been considered for use with video sensors for ease of determining relative orientation. A generalization of those approaches is addressed here. At certain ranges, the entire structure of the target vehicle constitutes an acceptable target; at closer ranges, substructures will suffice. Acting on the same principle as human intelligence, these structures can be compared with a memory model to assess the relative orientation and range. Models for comparison are constructed from a CAD facet model and current imagery. This approach requires fast image handling, projection, and comparison techniques, which rely on rapidly developing parallel processing technology. Relative orientation and range assessment consists of successful comparison of the perceived target aspect with a known aspect. Generating a known projection from a model within the required times, say subsecond times, is only now approaching feasibility. With this capability, the rates of comparison used by the human brain can be approached, and arbitrary known structures can be compared in reasonable times. Future space programs will have access to powerful computation devices which far exceed even this capability; for example, it will become possible to assess unknown structures and then control rendezvous and docking, all at very fast rates. The first step, which has current utility, namely applying this approach to known structures, is taken here.

  2. Precise Target Geolocation and Tracking Based on UAV Video Imagery

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    There is an increasingly large number of applications for Unmanned Aerial Vehicles (UAVs), from monitoring and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors, such as a C/A-code GPS and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 metres. This accuracy is insufficient for applications that require cm-level precision. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using an extended Kalman filter, which provides a smoothed estimate of target location and target velocity. The accurate geolocation of targets during image acquisition is conducted via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors and Kalman filtering, together with interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process. The results of this study, compared with code-based ordinary GPS, indicate that RTK observation with the proposed method yields a more than tenfold improvement in target geolocation accuracy.
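
    The Kalman filtering over target location and velocity can be illustrated, in a heavily simplified linear 1-D form (the paper uses an extended Kalman filter; all state layout and noise parameters here are illustrative, and velocity uncertainty is not tracked in this scalar sketch):

```python
def kalman_step(x, v, p, z, dt, q=1e-3, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    (x, v): position and velocity estimate; p: scalar position variance;
    z: position measurement; q, r: process / measurement noise variances."""
    # predict
    x_pred = x + v * dt
    p_pred = p + q
    # update
    k = p_pred / (p_pred + r)            # Kalman gain
    innov = z - x_pred                   # measurement innovation
    x_new = x_pred + k * innov
    v_new = v + k * innov / dt           # crude velocity correction
    p_new = (1 - k) * p_pred
    return x_new, v_new, p_new
```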

  3. Hypervelocity High Speed Projectile Imagery and Video

    NASA Technical Reports Server (NTRS)

    Henderson, Donald J.

    2009-01-01

    This DVD contains video showing the results of hypervelocity impacts. One video shows a projectile impact on a Kevlar-wrapped aluminum bottle containing 3000 psi gaseous oxygen. Another video shows animations of a two-stage light gas gun.

  4. Applicability Evaluation of Object Detection Method to Satellite and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Kamiya, K.; Fuse, T.; Takahashi, M.

    2016-06-01

    Since satellite and aerial imagery have recently become widely available and frequently observed, combining them is expected to let their spatial and temporal resolutions complement each other. One prospective application is traffic monitoring, where the objects of interest, vehicles, need to be recognized automatically. Techniques that employ object detection before object recognition can save computational time and cost, and thus play a significant role. However, there is not enough knowledge about whether object detection methods perform well on satellite and aerial imagery, and it also has to be studied how the characteristics of satellite and aerial imagery affect object detection performance. This study employs the binarized normed gradients (BING) method, which runs significantly fast and is robust to rotation and noise. For our experiments, 11-bit BGR-IR satellite images from WorldView-3 and BGR-color aerial images are used, and we created thousands of ground truth samples. We conducted several experiments to compare performance on different images, to verify whether combining images of different resolutions improved performance, and to analyze the applicability of mixing satellite and aerial imagery. The results showed that the infrared band had little effect on the detection rate, that 11-bit images performed worse than 8-bit images, and that better spatial resolution brought better performance. Another result might imply that mixing higher- and lower-resolution images in the training dataset could help detection performance. Furthermore, we found that aerial images improved detection performance on satellite images.
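
    Detection-rate evaluation against the ground truth samples is commonly done by intersection-over-union (IoU) matching of boxes; a sketch of that standard scoring (the paper does not state its exact matching criterion, so the 0.5 threshold and names are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_rate(preds, truths, thr=0.5):
    """Fraction of ground-truth boxes matched by some prediction at IoU >= thr."""
    hit = sum(1 for t in truths if any(iou(p, t) >= thr for p in preds))
    return hit / len(truths) if truths else 0.0
```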

  5. First results for an image processing workflow for hyperspatial imagery acquired with a low-cost unmanned aerial vehicle (UAV).

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Very high-resolution images from unmanned aerial vehicles (UAVs) have great potential for use in rangeland monitoring and assessment, because the imagery fills the gap between ground-based observations and remotely sensed imagery from aerial or satellite sensors. However, because UAV imagery is ofte...

  6. Detection, classification, and tracking of compact objects in video imagery

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.; Nebrich, Mark A.

    2012-06-01

    A video data conditioner (VDC) for automated full-motion video (FMV) detection, classification, and tracking is described. VDC extends our multi-stage image data conditioner (IDC) to video. Key features include robust detection of compact objects in motion imagery, coarse classification of all detections, and tracking of fixed and moving objects. An implementation of the detection and tracking components of the VDC on an Apple iPhone is discussed. Preliminary tracking results of naval ships captured during the Phoenix Express 2009 Photo Exercise are presented.

  7. Noise reduction of video imagery through simple averaging

    NASA Astrophysics Data System (ADS)

    Vorder Bruegge, Richard W.

    1999-02-01

    Examiners in the Special Photographic Unit (SPU) of the Federal Bureau of Investigation Laboratory Division conduct examinations of questioned photographic evidence of all types, including surveillance imagery recorded on film and video tape. A primary type of examination includes side-by-side comparisons, in which unknown objects or people depicted in the questioned images are compared with known objects recovered from suspects or with photographs of the suspects themselves. Most imagery received in the SPU for such comparisons originates from time-lapse video or film systems. In such circumstances, the delay between sequential images is so great that standard image summing and/or averaging techniques are useless as a means of improving image detail in questioned subjects or objects without also resorting to processing-intensive pattern reconstruction algorithms. Occasionally, however, the receipt of real-time video imagery will include a questioned object at rest. In such cases, it is possible to use relatively simple image averaging techniques as a means of reducing transient noise in the images without further compromising the already-poor resolution inherent in most video surveillance images. This paper presents an example of one such case, in which multiple images were averaged to reduce the transient noise to a sufficient degree to permit the positive identification of a vehicle based upon the presence of scrape marks and dents on the side of the vehicle.
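    The averaging technique described here relies on zero-mean transient noise shrinking roughly as 1/sqrt(N) when N frames of a static scene are averaged. A minimal sketch with synthetic frames (hypothetical numbers, not case data):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 120.0)   # hypothetical static scene (object at rest)
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

avg = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - scene)   # roughly 10 gray levels
noise_avg = np.std(avg - scene)            # roughly 10 / sqrt(16) = 2.5
print(noise_single, noise_avg)
```

Because the noise in each frame is independent, the residual standard deviation drops by about a factor of four for 16 frames, while static scene detail is preserved.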

  8. Analysis of aerial multispectral imagery to assess water quality parameters of Mississippi water bodies

    NASA Astrophysics Data System (ADS)

    Irvin, Shane Adison

    The goal of this study was to demonstrate the application of aerial imagery as a tool for detecting water quality indicators in a three-mile segment of Tibbee Creek in Clay County, Mississippi. Water samples from 10 transects were collected per sampling date over two periods in 2010 and 2011. Temperature and dissolved oxygen (DO) were measured at each point, and water samples were tested for turbidity and total suspended solids (TSS). Relative reflectance was extracted from high resolution (0.5 meter) multispectral aerial images. A regression model was developed for turbidity and TSS as a function of reflectance values for specific sampling dates. The best model was used to predict turbidity and TSS using datasets outside the original model date. The development of an appropriate predictive model for water quality assessment based on the relative reflectance of aerial imagery is affected by the quality of the imagery and the time of sampling.
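    The regression step amounts to an ordinary least-squares fit of turbidity against band reflectance. A sketch with hypothetical paired values standing in for the Tibbee Creek samples:

```python
import numpy as np

# Hypothetical paired samples: band reflectance vs. measured turbidity (NTU)
reflectance = np.array([0.02, 0.05, 0.08, 0.11, 0.15, 0.19])
turbidity   = np.array([ 4.1,  9.8, 16.2, 22.5, 30.9, 38.4])

# Ordinary least squares: turbidity = a * reflectance + b
A = np.vstack([reflectance, np.ones_like(reflectance)]).T
(a, b), *_ = np.linalg.lstsq(A, turbidity, rcond=None)

predicted = a * reflectance + b
ss_res = float(np.sum((turbidity - predicted) ** 2))
ss_tot = float(np.sum((turbidity - turbidity.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
print(f"slope={a:.1f}, intercept={b:.1f}, R^2={r2:.3f}")
```

A model fitted on one date would then be applied to reflectance from another date to test transferability, as the abstract describes.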

  9. Mixed Messages: The Relationship between Sexual and Religious Imagery in Rock, Country, and Christian Videos.

    ERIC Educational Resources Information Center

    McKee, Kathy B.; Pardun, Carol J.

    1996-01-01

    Finds sexual imagery more common than religious imagery in a sample of 207 rock, country, and Christian videos, although religious imagery was present in approximately 30% of the videos. Finds that the presence of sexual and religious images in combination occurred more often than would be expected by chance and in relatively equal proportions…

  10. Multi-Scale Validation of Forest Insect Mortality Using QuickBird and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Meddens, A. J.; Hicke, J. A.; Vierling, L. A.

    2008-12-01

    Insects are major disturbances in forested ecosystems, affecting forest succession, carbon cycling, and fuel loads. Insect-caused tree mortality causes significant change in forest structure as dead trees lose their needles and eventually fall to the ground. To manage forests and establish natural resource policy, accurate estimates of the extent of insect disturbance are needed. Mountain pine beetles (Dendroctonus ponderosae Hopkins), one of the most damaging insect species, have affected large forested areas in the United States and Canada. Our goal is to map landscape-level tree mortality caused by insect outbreaks across the western United States using satellite remote sensing. Here we report on the methods we are developing and applying to an outbreak of mountain pine beetle in Colorado, as well as the means of evaluating the classification using field measurements and finer spatial resolution aerial imagery. In August 2008, 36 forest inventory plots were established in an area affected by mountain pine beetle in the Arapaho National Forest in the Rocky Mountains of Colorado, USA. A digital aerial multispectral image with a spatial resolution of 30 cm and a QuickBird image with a spatial resolution of 2.4 m were acquired in the same area. We are employing a nested validation approach using the aerial imagery and QuickBird imagery to ultimately validate a Landsat-based insect disturbance detection product. Tree-level field measurements are used to evaluate the classified aerial imagery. The classified aerial imagery is subsequently used to evaluate classified QuickBird imagery, which in turn is used to evaluate the Landsat product. The outcomes of the finer spatial level classification can be used to increase understanding of the coarser spatial image classification and provide a means to investigate disturbance-level thresholds for the detection of insect outbreaks using the coarser resolution imagery.

  11. Acquisition, orthorectification, and object-based classification of unmanned aerial vehicle (UAV) imagery for rangeland monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this paper, we examine the potential of using a small unmanned aerial vehicle (UAV) for rangeland inventory, assessment and monitoring. Imagery with 8-cm resolution was acquired over 290 ha in southwestern Idaho. We developed a semi-automated orthorectification procedure suitable for handling lar...

  12. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery.

    PubMed

    Casado, Monica Rivas; Gonzalez, Rocio Ballesteros; Kriechbaumer, Thomas; Veal, Amanda

    2015-11-04

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km river reach along the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results leverage the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management.
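    Accuracy figures like the 81% reported here are typically derived from a confusion matrix of validation pixels. A small sketch (class names and counts are hypothetical, not the paper's data, though they mimic its observation that vegetation is recognised best):

```python
# Hypothetical validation counts: counts[i][j] = pixels of true class i
# predicted as class j by the classifier
classes = ["water", "vegetation", "gravel bar"]
counts = [[420, 30, 50],
          [10, 480, 10],
          [60, 40, 400]]

total = sum(sum(row) for row in counts)
correct = sum(counts[i][i] for i in range(len(classes)))
overall = correct / total
per_class = {c: counts[i][i] / sum(counts[i]) for i, c in enumerate(classes)}
print(f"overall accuracy = {overall:.2%}", per_class)
```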

  15. Practical use of video imagery in nearshore oceanographic field studies

    USGS Publications Warehouse

    Holland, K.T.; Holman, R.A.; Lippmann, T.C.; Stanley, J.; Plant, N.

    1997-01-01

    An approach was developed for using video imagery to quantify, in terms of both spatial and temporal dimensions, a number of naturally occurring (nearshore) physical processes. The complete method is presented, including the derivation of the geometrical relationships relating image and ground coordinates, principles to be considered when working with video imagery and the two-step strategy for calibration of the camera model. The techniques are founded on the principles of photogrammetry, account for difficulties inherent in the use of video signals, and have been adapted to allow for flexibility of use in field studies. Examples from field experiments indicate that this approach is both accurate and applicable under the conditions typically experienced when sampling in coastal regions. Several applications of the camera model are discussed, including the measurement of nearshore fluid processes, sand bar length scales, foreshore topography, and drifter motions. Although we have applied this method to the measurement of nearshore processes and morphologic features, these same techniques are transferable to studies in other geophysical settings.
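    The geometrical relationship between image and ground coordinates reduces, for a flat surface, to intersecting each pixel's view ray with the plane z = 0. A simplified sketch of that mapping (zero roll, no lens distortion, hypothetical values), not the authors' full two-step calibrated camera model:

```python
import math

def pixel_to_ground(u, v, f, cam_h, tilt_deg):
    """Map an image coordinate (u, v, in pixels from the principal point,
    v positive downward) to ground (x, y) on the flat plane z = 0 for a
    camera at height cam_h looking tilt_deg below horizontal.
    Simplified geometry: zero roll, no lens distortion.
    """
    t = math.radians(tilt_deg)
    # Rotate the camera-frame ray (u/f, v/f, 1) down by the tilt angle.
    dy = math.cos(t) - (v / f) * math.sin(t)    # horizontal (look) component
    dz = -math.sin(t) - (v / f) * math.cos(t)   # vertical component (up +)
    if dz >= 0:
        raise ValueError("ray does not reach the ground (at or above horizon)")
    s = cam_h / -dz              # scale factor that brings the ray to z = 0
    return s * (u / f), s * dy   # (cross-track x, along-look y)

# The principal-point ray from 25 m height at 45 degrees lands 25 m out
print(pixel_to_ground(0.0, 0.0, 1000.0, 25.0, 45.0))
```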

  16. Low-cost Tools for Aerial Video Geolocation and Air Traffic Analysis for Delay Reduction Using Google Earth

    NASA Astrophysics Data System (ADS)

    Zetterlind, V.; Pledgie, S.

    2009-12-01

    Low-cost, low-latency, robust geolocation and display of aerial video is a common need for a wide range of earth observing as well as emergency response and security applications. While hardware costs for aerial video collection systems, GPS, and inertial sensors have been decreasing, software costs for geolocation algorithms and reference imagery/DTED remain expensive and highly proprietary. As part of a Federal Small Business Innovation Research project, MosaicATM and EarthNC, Inc. have developed a simple geolocation system based on the Google Earth API and Google's 'built-in' DTED and reference imagery libraries. This system geolocates aerial video based on platform and camera position, attitude, and field-of-view metadata, using geometric photogrammetric principles of ray-intersection with DTED. Geolocated video can be directly rectified and viewed in the Google Earth API during processing. Work is underway to extend our geolocation code to NASA World Wind for additional flexibility and a fully open-source platform. In addition to our airborne remote sensing work, MosaicATM has developed the Surface Operations Data Analysis and Adaptation (SODAA) tool, funded by NASA Ames, which supports analysis of airport surface operations to optimize aircraft movements and reduce fuel burn and delays. As part of SODAA, MosaicATM and EarthNC, Inc. have developed powerful tools to display national airspace data and time-animated 3D flight tracks in Google Earth for 4D analysis. The SODAA tool can convert raw format flight track data, FAA National Flight Data (NFD), and FAA 'Adaptation' airport surface data to a spatial database representation and then to Google Earth KML. The SODAA client provides users with a simple graphical interface through which to generate queries with a wide range of predefined and custom filters, plot results, and export for playback in Google Earth in conjunction with NFD and Adaptation overlays.
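    The ray-intersection geolocation described above can be approximated by marching along the view ray until it passes below the terrain surface. A sketch under simplifying assumptions (fixed step size, analytic terrain function standing in for a DTED lookup):

```python
import math

def geolocate_on_terrain(cam_xyz, ray_dir, terrain, step=1.0, max_range=5000.0):
    """March along a view ray from the camera until it drops below the
    terrain surface and return the approximate ground intersection.

    `terrain(x, y)` returns elevation; here an analytic stand-in for a
    DTED lookup. A real implementation would interpolate elevation posts
    and refine the crossing by bisection.
    """
    n = math.sqrt(sum(c * c for c in ray_dir))
    dx, dy, dz = (c / n for c in ray_dir)
    x0, y0, z0 = cam_xyz
    t = 0.0
    while t < max_range:
        t += step
        px, py, pz = x0 + t * dx, y0 + t * dy, z0 + t * dz
        if pz <= terrain(px, py):
            return px, py, pz
    return None  # ray never met the terrain (e.g., at or above the horizon)

flat = lambda x, y: 100.0   # hypothetical flat terrain at 100 m elevation
hit = geolocate_on_terrain((0.0, 0.0, 600.0), (0.0, 0.5, -0.5), flat)
print(hit)
```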

  17. Automatic Extraction of Building Outline from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Wang, Yandong

    2016-06-01

    In this paper, a new approach for the automated extraction of building boundaries from high resolution imagery is proposed. The proposed approach uses both the geometric and spectral properties of a building to detect and locate buildings accurately. It consists of the automatic generation of a high quality point cloud from the imagery, building detection from the point cloud, classification of the building roof, and generation of the building outline. The point cloud is generated from the imagery automatically using semi-global image matching technology. Buildings are detected from the differential surface generated from the point cloud. Further classification of the building roof is performed in order to generate an accurate building outline. Finally, the classified building roof is converted into vector format. Numerous tests have been done on images of different locations and the results are presented in the paper.
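    The detection step, buildings standing out from a differential surface, can be sketched as a height threshold on a normalized DSM (height above ground). The grid, heights, and threshold below are hypothetical, not the paper's parameters:

```python
import numpy as np

# Hypothetical 8x8 normalized DSM (metres above ground) from dense
# image matching; the 3x3 block is a building roof, the rest is ground
ndsm = np.zeros((8, 8))
ndsm[2:5, 3:6] = 7.5          # roof about 7.5 m above terrain
ndsm += 0.2                   # small matching-noise floor everywhere

building_mask = ndsm > 2.5    # height threshold separates roofs from ground
print(building_mask.sum())
```

A real pipeline would follow this with area and shape filters (and the paper's spectral classification) before vectorizing the outline.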

  18. Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Hayden, David; Thompson, David R.; Castano, Rebecca

    2013-01-01

    Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. To address this situation, computationally efficient algorithms were developed for analyzing science imagery onboard the spacecraft. These algorithms autonomously cluster the data into classes of similar imagery, enabling selective downlink of representatives of each class and a map classifying the imaged terrain rather than the full dataset, reducing the volume of the downlinked data. A range of approaches was examined, including k-means clustering using image features based on color, texture, and temporal and spatial arrangement.
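    The cluster-then-downlink idea can be sketched with a tiny k-means over per-image feature vectors, keeping only one representative per cluster. All data, dimensions, and seeds here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-image feature vectors (e.g., mean color, texture score)
features = np.vstack([
    rng.normal([0.2, 0.8], 0.05, (20, 2)),   # one terrain class
    rng.normal([0.9, 0.1], 0.05, (20, 2)),   # another terrain class
])

def kmeans(X, k, iters=20, seed=0):
    """Bare-bones Lloyd's algorithm."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(features, 2)
# Downlink only the member closest to each cluster center
reps = [int(np.argmin(((features - c) ** 2).sum(-1))) for c in centers]
print(labels, reps)
```

Instead of forty images, only two representatives (plus the label map) would need to be transmitted.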

  19. Soft computing and minimization/optimization of video/imagery redundancy

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Kostrzewski, Andrew A.; Wang, Wenjian; Hester, Todd

    2004-01-01

    This paper investigates the application of soft computing techniques to minimize video/imagery tactical redundancy, to enable video and high-resolution still imagery transmission through low-bandwidth tactical radio channels in the Future Combat System, including spatial, temporal event, and shape extraction for object-oriented processing.

  20. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery

    PubMed Central

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-01-01

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564
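    The geometric-constraints stage of the detection pipeline can be illustrated by screening candidate blobs on area and aspect ratio; all thresholds and blob values below are illustrative, not the paper's:

```python
# Hypothetical candidate blobs (width, height, area in pixels) extracted
# by thresholding gradient features in a low-resolution thermal frame
blobs = [
    {"w": 4, "h": 9, "area": 30},    # small, upright: plausible pedestrian
    {"w": 40, "h": 8, "area": 290},  # wide and large: vehicle or clutter
    {"w": 2, "h": 2, "area": 3},     # too small: noise
    {"w": 5, "h": 11, "area": 42},   # plausible pedestrian
]

def plausible_pedestrian(b, min_area=10, max_area=120,
                         min_aspect=1.2, max_aspect=4.0):
    """Geometric constraints only; thresholds are illustrative. Pedestrians
    in oblique thermal frames appear as small blobs taller than wide."""
    aspect = b["h"] / b["w"]
    return min_area <= b["area"] <= max_area and min_aspect <= aspect <= max_aspect

kept = [b for b in blobs if plausible_pedestrian(b)]
print(len(kept))
```

Surviving blobs would then go to the HOG/DCT + SVM classifier described in the abstract.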

  2. Evaluation of unmanned aerial vehicle (UAV) imagery to model vegetation heights in Hulun Buir grassland ecosystem

    NASA Astrophysics Data System (ADS)

    Wang, D.; Xin, X.; Li, Z.

    2015-12-01

    Vertical vegetation structure in grassland ecosystems is needed to assess grassland health and monitor available forage for livestock and wildlife habitat. Traditional ground-based field methods for measuring vegetation heights are time-consuming. Most emerging airborne remote sensing techniques capable of measuring surface and vegetation height (e.g., LiDAR) are too expensive to apply at broad scales. Aerial or spaceborne stereo imagery has a cost advantage for mapping the height of tall vegetation, such as forest. However, the accuracy and uncertainty of using stereo imagery for modeling the heights of short vegetation, such as grass (generally lower than 50 cm), need to be investigated. In this study, 2.5-cm resolution UAV stereo imagery is used to model vegetation heights in the Hulun Buir grassland ecosystem. Strong correlations were observed (r > 0.9) between vegetation heights derived from UAV stereo imagery and field-measured heights at the individual and plot level. However, vegetation heights tended to be underestimated in the imagery, especially in areas with high vegetation coverage. The strong correlations between field-collected vegetation heights and metrics derived from UAV stereo imagery suggest that UAV stereo imagery can be used to estimate short vegetation heights such as those in grassland ecosystems. Future work will be needed to verify the extensibility of the methods to other sites and vegetation types.
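    The reported plot-level agreement (r > 0.9, with underestimation) amounts to a Pearson correlation plus a bias check. A sketch with hypothetical paired heights, not the study's measurements:

```python
import numpy as np

# Hypothetical plot-level heights (cm): field-measured vs. stereo-derived
field  = np.array([12.0, 18.0, 25.0, 31.0, 40.0, 47.0])
stereo = np.array([10.5, 15.9, 22.1, 27.4, 34.8, 40.2])

r = float(np.corrcoef(field, stereo)[0, 1])   # Pearson correlation
bias = float(np.mean(stereo - field))         # negative -> underestimation
print(f"r = {r:.3f}, mean bias = {bias:.1f} cm")
```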

  3. Exterior Orientation Estimation of Oblique Aerial Imagery Using Vanishing Points

    NASA Astrophysics Data System (ADS)

    Verykokou, Styliani; Ioannidis, Charalabos

    2016-06-01

    In this paper, a methodology for the calculation of rough exterior orientation (EO) parameters of multiple large-scale overlapping oblique aerial images, in the case that GPS/INS information is not available (e.g., for old datasets), is presented. It consists of five main steps: (a) the determination of the overlapping image pairs and the single image in which four ground control points have to be measured; (b) the computation of the transformation parameters from every image to the coordinate reference system; (c) the rough estimation of the camera interior orientation parameters; (d) the estimation of the true horizon line and the nadir point of each image; and (e) the calculation of the rough EO parameters of each image. A developed software suite implementing the proposed methodology is tested using a set of UAV multi-perspective oblique aerial images. Several tests are performed for the assessment of the errors and show that the estimated EO parameters can be used either as initial approximations for a bundle adjustment procedure or as rough georeferencing information for several applications, like 3D modelling, even by non-photogrammetrists, because of the minimal user intervention needed. Finally, comparisons with commercial software are made in terms of automation and correctness of the computed EO parameters.
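    Step (d) feeds step (e): for a distortion-free frame camera with zero roll, the true horizon projects at an offset f·tan(tilt) above the principal point, so the tilt angle follows directly from the horizon line. A hedged sketch (hypothetical pixel values; this simplifies the paper's vanishing-point geometry):

```python
import math

def tilt_from_horizon(horizon_y, principal_y, f):
    """Camera tilt below horizontal recovered from the true horizon line.

    For a distortion-free frame camera with zero roll, the horizon
    projects f * tan(tilt) above the principal point, so
    tilt = atan(offset / f). Image y grows downward; all values in pixels.
    """
    offset = principal_y - horizon_y   # horizon above center -> positive
    return math.degrees(math.atan2(offset, f))

# Hypothetical frame: horizon 364 px above the principal point, f = 2000 px
print(tilt_from_horizon(636.0, 1000.0, 2000.0))
```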

  4. Rigorous LiDAR Strip Adjustment with Triangulated Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Y. J.; Xiong, X. D.; Hu, X. Y.

    2013-10-01

    This paper proposes a POS-aided LiDAR strip adjustment method. Firstly, aero-triangulation of the simultaneously obtained aerial images is conducted with a few photogrammetry-specific ground control points. Secondly, LiDAR intensity images are generated from the reflectance signals of laser foot points, and conjugate points are automatically matched between the LiDAR intensity image and the aero-triangulated aerial image. Control points used in LiDAR strip adjustment are derived from these conjugate points. Finally, LiDAR strip adjustment of real data is conducted with the POS-aided LiDAR strip adjustment method proposed in this paper, and a comparison experiment using the three-dimensional similarity transformation method is also performed. The results indicate that the POS-aided LiDAR strip adjustment method can significantly correct the planimetric and vertical errors of LiDAR strips. The planimetric correction accuracy is higher than the average point distance, while the vertical correction accuracy is comparable to that of the result of aero-triangulation. Moreover, the proposed method is obviously superior to the traditional three-dimensional similarity transformation method.
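    The three-dimensional similarity (Helmert) transformation used here as the comparison baseline has a closed-form least-squares solution from matched point pairs, the Umeyama/SVD construction. A sketch on synthetic noise-free data (not the paper's strips):

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst ~ s*R*src + t
    (a 3D Helmert transformation) from matched 3D points, via SVD.
    A sketch of the classic closed-form (Umeyama) solution."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:      # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(2)
src = rng.normal(size=(10, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:          # force a proper rotation
    R_true[:, 0] *= -1
dst = 1.3 * src @ R_true.T + np.array([5.0, -2.0, 0.7])

s, R, t = similarity_transform(src, dst)
print(round(s, 3))
```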

  5. Canopy Density Mapping on Ultracam-D Aerial Imagery in Zagros Woodlands, Iran

    NASA Astrophysics Data System (ADS)

    Erfanifard, Y.; Khodaee, Z.

    2013-09-01

    Canopy density maps express different characteristics of forest stands, especially in woodlands. Obtaining such maps by field measurements is expensive and time-consuming, so suitable techniques are needed to produce them for use in the sustainable management of woodland ecosystems. In this research, a robust procedure was suggested for obtaining these maps from very high spatial resolution aerial imagery. The aim was to produce canopy density maps from UltraCam-D aerial imagery newly taken over the Zagros woodlands by the Iran National Geographic Organization (NGO). A 30 ha plot of Persian oak (Quercus persica) coppice trees was selected in the Zagros woodlands, Iran. The very high spatial resolution aerial imagery of the plot, purchased from the NGO, was classified by the kNN technique and the tree crowns were extracted precisely. The canopy density was determined in each cell of different meshes with different cell sizes overlaid on the study area map. The accuracy of the final maps was investigated against ground truth obtained by complete field measurements. The results showed that the proposed method of obtaining canopy density maps was efficient enough in the study area. The final canopy density map, obtained with a mesh of 30 are (3000 m²) cell size, had 80% overall accuracy and a KHAT coefficient of agreement of 0.61, which shows great agreement with the observed samples. This method can also be tested in other case studies to reveal its capability for canopy density map production in woodlands.
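    Computing canopy density per mesh cell from a classified crown mask is a block average. A sketch with a random hypothetical mask (the pixel size, cell size, and cover fraction are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical binary crown mask (1 = crown pixel) from a kNN classification
crowns = (rng.random((120, 120)) < 0.3).astype(np.uint8)

cell = 30                                   # mesh cell size in pixels
h, w = crowns.shape
density = crowns.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))
print(density.shape)   # one canopy-cover fraction per mesh cell
```

Repeating this for several cell sizes and comparing each map against field plots (overall accuracy, KHAT) reproduces the mesh-size comparison described above.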

  6. A procedure for orthorectification of sub-decimeter resolution imagery obtained with an unmanned aerial vehicle (UAV)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Digital aerial photography acquired with unmanned aerial vehicles (UAVs) has great value for resource management due to the flexibility and relatively low cost for image acquisition, and very high resolution imagery (5 cm) which allows for mapping bare soil and vegetation types, structure and patter...

  7. Unmanned Aerial Vehicles Produce High-Resolution Seasonally-Relevant Imagery for Classifying Wetland Vegetation

    NASA Astrophysics Data System (ADS)

    Marcaccio, J. V.; Markle, C. E.; Chow-Fraser, P.

    2015-08-01

    With recent advances in technology, personal aerial imagery acquired with unmanned aerial vehicles (UAVs) has transformed the way ecologists can map seasonal changes in wetland habitat. Here, we use a multi-rotor (consumer quad-copter, the DJI Phantom 2 Vision+) UAV to acquire a high-resolution (< 8 cm) composite photo of a coastal wetland in summer 2014. Using validation data collected in the field, we determine if a UAV image and SWOOP (Southwestern Ontario Orthoimagery Project) image (collected in spring 2010) differ in their classification of type of dominant vegetation type and percent cover of three plant classes: submerged aquatic vegetation, floating aquatic vegetation, and emergent vegetation. The UAV imagery was more accurate than available SWOOP imagery for mapping percent cover of submergent and floating vegetation categories, but both were able to accurately determine the dominant vegetation type and percent cover of emergent vegetation. Our results underscore the value and potential for affordable UAVs (complete quad-copter system < 3,000 CAD) to revolutionize the way ecologists obtain imagery and conduct field research. In Canada, new UAV regulations make this an easy and affordable way to obtain multiple high-resolution images of small (< 1.0 km2) wetlands, or portions of larger wetlands throughout a year.

  8. Vectorization of Road Data Extracted from Aerial and Uav Imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Häufel, Gisela; Pohl, Melanie

    2016-06-01

    Road databases are essential instances of urban infrastructure, so automatic road detection from sensor data has been an important research activity for many decades. Given aerial images of sufficient resolution, dense 3D reconstruction can be performed. Starting from a classification result of road pixels from combined elevation and optical data, we present in this paper a five-step procedure for creating vectorized road networks. The main steps of the algorithm are: preprocessing, thinning, polygonization, filtering, and generalization. In particular, for the generalization step, which represents the principal area of innovation, two strategies are presented. The first strategy corresponds to a modification of the Douglas-Peucker algorithm in order to reduce the number of vertices, while the second strategy allows a smoother representation of street windings by Bézier curves, which results in a reduction, by an order of magnitude, of the total curvature defined for the dataset. We tested our approach on three datasets of different complexity. The quantitative assessment of the results was performed by means of shapefiles from OpenStreetMap data. For a threshold of 6 m, completeness and correctness values of up to 85% were achieved.
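    The vertex-reduction strategy referenced above builds on the classic Douglas-Peucker simplification, which the following self-contained sketch illustrates on toy coordinates (the paper's modification of it is not reproduced here):

```python
import math

def douglas_peucker(points, eps):
    """Classic Douglas-Peucker polyline simplification: drop interior
    vertices whose perpendicular distance to the chord is below eps,
    otherwise recurse around the farthest vertex."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1e-12
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right

line = [(0, 0), (1, 0.05), (2, -0.04), (3, 2.0), (4, 2.02), (5, 2.0)]
print(douglas_peucker(line, 0.1))
```

With eps = 0.1 the six input vertices collapse to the four that carry the polyline's shape.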

  9. Challenges in collecting hyperspectral imagery of coastal waters using Unmanned Aerial Vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    English, D. C.; Herwitz, S.; Hu, C.; Carlson, P. R., Jr.; Muller-Karger, F. E.; Yates, K. K.; Ramsewak, D.

    2013-12-01

    Airborne multi-band remote sensing is an important tool for many aquatic applications, and the increased spectral information from hyperspectral sensors may increase the utility of coastal surveys. Recent technological advances allow Unmanned Aerial Vehicles (UAVs) to be used as alternatives or complements to manned aircraft or in situ observing platforms, and promise significant advantages for field studies. These include the ability to conduct programmed flight plans, prolonged and coordinated surveys, and agile flight operations under difficult conditions such as measurements made at low altitudes. Hyperspectral imagery collected from UAVs should allow increased differentiation of water column or shallow benthic communities at relatively small spatial scales. However, the analysis of hyperspectral imagery from airborne platforms over shallow coastal waters differs from that used for terrestrial or oligotrophic ocean color imagery, and the operational constraints and considerations for the collection of such imagery from autonomous platforms also differ from terrestrial surveys using manned aircraft. Multispectral and hyperspectral imagery of shallow seagrass and coral environments in the Florida Keys were collected with various sensor systems mounted on manned and unmanned aircraft in May 2012, October 2012, and May 2013. The imaging systems deployed on UAVs included NovaSol's Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK), a Tetracam multispectral imaging system, and the Sunflower hyperspectral imager from Galileo Group, Inc. The UAVs carrying these systems were Xtreme Aerial Concepts' Vision-II Rotorcraft UAV, MLB Company's Bat-4 UAV, and NASA's SIERRA UAV, respectively. Additionally, the Galileo Group's manned aircraft also surveyed the areas with their AISA Eagle hyperspectral imaging system.
For both manned and autonomous flights, cloud cover and sun glint (solar and viewing angles) were dominant constraints on retrieval of quantitatively

  10. Detecting blind building façades from highly overlapping wide angle aerial imagery

    NASA Astrophysics Data System (ADS)

    Burochin, Jean-Pascal; Vallet, Bruno; Brédif, Mathieu; Mallet, Clément; Brosset, Thomas; Paparoditis, Nicolas

    2014-10-01

    This paper deals with the identification of blind building façades, i.e. façades without openings, in wide-angle aerial images with decimetre pixel size acquired by nadir-looking cameras. This blindness characterization is generally crucial for real estate estimation and has, at least in France, particular importance for evaluating the legal permission to build on a parcel under local urban planning schemes. We assume that we have at our disposal an aerial survey with relatively high along-track and across-track stereo overlap, and a LoD-1 3D city model that may have been generated from the input images. The 3D model is textured with the aerial imagery by taking 3D occlusions into account and by selecting, for each façade, the best-resolution texture that sees the whole façade. We then parse all 3D façade textures looking for evidence of openings (windows or doors). This evidence is characterized by a comprehensive set of basic radiometric and geometrical features. The blindness prognosis is then made through a supervised classification (SVM). Despite the relatively low resolution of the images, we reach a classification accuracy of around 85% on decimetre-resolution imagery with 60 × 40% stereo overlap. On the one hand, we show that the results are very sensitive to the texture resampling process and to the presence of vegetation on façade textures. On the other hand, the most relevant features for our classification framework relate to texture uniformity, horizontal aspect, and the maximal contrast of the opening detections. We conclude that standard aerial imagery used to build 3D city models can also be exploited, to some extent and at no additional cost, for façade blindness characterisation.
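    For illustration, the supervised classification step can be approximated by a minimal linear SVM trained with sub-gradient descent on the hinge loss. This is a generic sketch, not the authors' classifier: the feature vectors and labels below are hypothetical stand-ins for the paper's radiometric/geometric façade features and blind/non-blind labels.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Tiny linear SVM: minimize hinge loss + L2 penalty by sub-gradient
    descent. X: (n, d) feature matrix, y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                     # points violating the margin
        if mask.any():
            grad_w = lam * w - (y[mask][:, None] * X[mask]).mean(0)
            grad_b = -y[mask].mean()
        else:
            grad_w, grad_b = lam * w, 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    """Sign of the decision function: +1 / -1 class labels."""
    return np.sign(X @ w + b)
```

    On linearly separable toy data the decision boundary settles between the two clusters; a real façade classifier would of course use a full SVM library and cross-validated hyperparameters.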

  11. Lori Losey - The Woman Behind the Video Camera

    NASA Video Gallery

    The often-spectacular aerial video imagery of NASA flight research, airborne science missions and space satellite launches doesn't just happen. Much of it is the work of Lori Losey, senior video pr...

  12. Environmental waste site characterization utilizing aerial photographs and satellite imagery: Three sites in New Mexico, USA

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Becker, N.; Wells, B.; Lewis, A.; David, N.

    1996-04-01

    The proper handling and characterization of past hazardous waste sites is becoming more and more important as world population extends into areas previously deemed undesirable. Historical photographs, past records, and current aerial and satellite imagery can play an important role in characterizing these sites. These data provide clear insight into defining problem areas, which can then be surface-sampled in further detail. Three such areas are discussed in this paper: (1) nuclear wastes buried in trenches at Los Alamos National Laboratory, (2) surface dumping at one site at Los Alamos National Laboratory, and (3) the historical development of a municipal landfill near Las Cruces, New Mexico.

  13. EROS main image file - A picture perfect database for Landsat imagery and aerial photography

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    The Earth Resources Observation System (EROS) Program was established by the U.S. Department of the Interior in 1966 under the administration of the Geological Survey. It is primarily concerned with the application of remote sensing techniques for the management of natural resources. The retrieval system employed to search the EROS database is called INORAC (Inquiry, Ordering, and Accounting). A description is given of the types of images identified in EROS, taking into account Landsat imagery, Skylab images, Gemini/Apollo photography, and NASA aerial photography. Attention is given to retrieval commands, geographic coordinate searching, refinement techniques, various online functions, and questions regarding the access to the EROS Main Image File.

  14. Estimation of walrus populations on sea ice with infrared imagery and aerial photography

    USGS Publications Warehouse

    Udevitz, M.S.; Burn, D.M.; Webber, M.A.

    2008-01-01

    Population sizes of ice-associated pinnipeds have often been estimated with visual or photographic aerial surveys, but these methods require relatively slow speeds and low altitudes, limiting the area they can cover. Recent developments in infrared imagery and its integration with digital photography could allow substantially larger areas to be surveyed and more accurate enumeration of individuals, thereby solving major problems with previous survey methods. We conducted a trial survey in April 2003 to estimate the number of Pacific walruses (Odobenus rosmarus divergens) hauled out on sea ice around St. Lawrence Island, Alaska. The survey used high altitude infrared imagery to detect groups of walruses on strip transects. Low altitude digital photography was used to determine the number of walruses in a sample of detected groups and calibrate the infrared imagery for estimating the total number of walruses. We propose a survey design incorporating this approach with satellite radio telemetry to estimate the proportion of the population in the water and additional low-level flights to estimate the proportion of the hauled-out population in groups too small to be detected in the infrared imagery. We believe that this approach offers the potential for obtaining reliable population estimates for walruses and other ice-associated pinnipeds. © 2007 by the Society for Marine Mammalogy.

  15. Precise Target Geolocation Based on Integration of Thermal Video Imagery and RTK GPS in UAVs

    NASA Astrophysics Data System (ADS)

    Hosseinpoor, H. R.; Samadzadegan, F.; Dadras Javan, F.

    2015-12-01

    There is an increasingly wide range of uses for Unmanned Aerial Vehicles (UAVs), from surveillance and mapping to target geolocation. However, most commercial UAVs are equipped with low-cost navigation sensors such as a C/A-code GPS and a low-cost IMU on board, allowing a positioning accuracy of 5 to 10 meters. This low accuracy means they cannot be used in applications that require high-precision data at the cm level. This paper presents a precise process for geolocation of ground targets based on thermal video imagery acquired by a small UAV equipped with RTK GPS. The geolocation data are filtered using a linear Kalman filter, which provides a smoothed estimate of target location and target velocity. Accurate geolocation of targets during image acquisition is achieved via traditional photogrammetric bundle adjustment equations, using accurate exterior orientation parameters obtained from the on-board IMU and RTK GPS sensors with Kalman filtering, and interior orientation parameters of the thermal camera from a pre-flight laboratory calibration process.
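    A linear Kalman filter of the kind described, with a constant-velocity state model, can be sketched as follows. This is a generic 1-D illustration, not the authors' implementation; the noise parameters (`meas_var`, `accel_var`) are assumed values.

```python
import numpy as np

def kalman_track(measurements, dt=0.1, meas_var=4.0, accel_var=1.0):
    """Constant-velocity Kalman filter over 1-D position measurements.
    Returns a list of (position, velocity) estimates, one per measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # observe position only
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])   # process noise
    R = np.array([[meas_var]])                       # measurement noise
    x = np.array([[measurements[0]], [0.0]])         # initial state [pos, vel]
    P = np.eye(2) * 10.0                             # initial covariance
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x                  # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append((float(x[0, 0]), float(x[1, 0])))
    return out
```

    Fed a noiseless constant-velocity track, the velocity estimate converges to the true rate; with noisy RTK/thermal geolocations it returns a smoothed trajectory instead of the raw jittery fixes.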

  16. A Semi-Automated Single Day Image Differencing Technique to Identify Animals in Aerial Imagery

    PubMed Central

    Terletzky, Pat; Ramsey, Robert Douglas

    2014-01-01

    Our research presents a proof-of-concept that explores a new and innovative method to identify large animals in aerial imagery with single day image differencing. We acquired two aerial images of eight fenced pastures and conducted a principal component analysis of each image. We then subtracted the first principal component of the two pasture images followed by heuristic thresholding to generate polygons. The number of polygons represented the number of potential cattle (Bos taurus) and horses (Equus caballus) in the pasture. The process was considered semi-automated because we were not able to automate the identification of spatial or spectral thresholding values. Imagery was acquired concurrently with ground counts of animal numbers. Across the eight pastures, 82% of the animals were correctly identified, mean percent commission was 53%, and mean percent omission was 18%. The high commission error was due to small mis-alignments generated from image-to-image registration, misidentified shadows, and grouping behavior of animals. The high probability of correctly identifying animals suggests short time interval image differencing could provide a new technique to enumerate wild ungulates occupying grassland ecosystems, especially in isolated or difficult to access areas. To our knowledge, this was the first attempt to use standard change detection techniques to identify and enumerate large ungulates. PMID:24454827
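    The differencing-and-thresholding idea can be illustrated with a toy example: difference two co-registered single-band images, threshold, and count connected regions as candidate animals. This sketch omits the principal component analysis step of the paper and uses a plain band difference; the function name and threshold are illustrative.

```python
import numpy as np

def count_changed_blobs(img_a, img_b, thresh):
    """Difference two co-registered single-band images and count connected
    regions (4-connectivity) whose absolute change exceeds `thresh` --
    a toy stand-in for counting animals that moved between acquisitions."""
    mask = np.abs(img_a.astype(float) - img_b.astype(float)) > thresh
    seen = np.zeros_like(mask, dtype=bool)
    blobs = 0
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                blobs += 1
                stack = [(r, c)]               # flood-fill one region
                while stack:
                    i, j = stack.pop()
                    if 0 <= i < rows and 0 <= j < cols and mask[i, j] and not seen[i, j]:
                        seen[i, j] = True
                        stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return blobs
```

    In practice the commission errors noted above (registration misalignment, shadows) would appear here as extra blobs, which is why the paper's thresholding had to be tuned heuristically.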

  17. Fusion of monocular cues to detect man-made structures in aerial imagery

    NASA Technical Reports Server (NTRS)

    Shufelt, Jefferey; Mckeown, David M.

    1991-01-01

    The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.

  18. The effects of gender and music video imagery on sexual attitudes.

    PubMed

    Kalof, L

    1999-06-01

    This study examined the influence of gender and exposure to gender-stereotyped music video imagery on sexual attitudes (adversarial sexual beliefs, acceptance of rape myths, acceptance of interpersonal violence, and gender role stereotyping). A group of 44 U.S. college students were randomly assigned to 1 of 2 groups that viewed either a video portraying stereotyped sexual imagery or a video that excluded all sexual images. Exposure to traditional sexual imagery had a significant main effect on attitudes about adversarial sexual relationships, and gender had main effects on 3 of 4 sexual attitudes. There was some evidence of an interaction between gender and exposure to traditional sexual imagery on the acceptance of interpersonal violence.

  19. Influence of Gsd for 3d City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling, supporting decision making to set a mark in urban development. MOMRA is responsible for large-scale mapping at 1:1,000, 1:2,500, 1:10,000 and 1:20,000 scales, at 10cm, 20cm and 40cm GSD, with Aerial Triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization capabilities and animation support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses Aerial Triangulation data and aerial imagery. The main challenge for 3D city modelling in the Kingdom of Saudi Arabia arises from its uneven surface and undulations. Real-time 3D visualization and interactive exploration support planning processes by providing multiple stakeholders, such as decision makers, architects, urban planners, authorities, citizens, or investors, with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities to deal with exotic conditions through better and more advanced viewing infrastructure. Riyadh, at about 600 m above sea level, and Abha city, at 2,300 m, represent the drastic change of surface across the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities.
In this research paper, the influence of aerial imagery at different GSDs (Ground Sample Distance) with Aerial Triangulation is examined for 3D visualization in different regions of the Kingdom, to check which scale gives better results and is cost-manageable, with GSD (7.5cm, 10cm, 20cm and 40cm

  20. Spatially explicit rangeland erosion monitoring using high-resolution digital aerial imagery

    USGS Publications Warehouse

    Gillan, Jeffrey K.; Karl, Jason W.; Barger, Nichole N.; Elaksher, Ahmed; Duniway, Michael C.

    2016-01-01

    Nearly all of the ecosystem services supported by rangelands, including production of livestock forage, carbon sequestration, and provisioning of clean water, are negatively impacted by soil erosion. Accordingly, monitoring the severity, spatial extent, and rate of soil erosion is essential for long-term sustainable management. Traditional field-based methods of monitoring erosion (sediment traps, erosion pins, and bridges) can be labor intensive and therefore are generally limited in spatial intensity and/or extent. There is a growing effort to monitor natural resources at broad scales, which is driving the need for new soil erosion monitoring tools. One remote-sensing technique that can be used to monitor soil movement is a time series of digital elevation models (DEMs) created using aerial photogrammetry methods. By geographically coregistering the DEMs and subtracting one surface from the other, an estimate of soil elevation change can be created. Such analysis enables spatially explicit quantification and visualization of net soil movement including erosion, deposition, and redistribution. We constructed DEMs (12-cm ground sampling distance) on the basis of aerial photography immediately before and 1 year after a vegetation removal treatment on a 31-ha Piñon-Juniper woodland in southeastern Utah to evaluate the use of aerial photography in detecting soil surface change. On average, we were able to detect surface elevation change of ± 8−9cm and greater, which was sufficient for the large amount of soil movement exhibited on the study area. Detecting more subtle soil erosion could be achieved using the same technique with higher-resolution imagery from lower-flying aircraft such as unmanned aerial vehicles. DEM differencing and process-focused field methods provided complementary information and a more complete assessment of soil loss and movement than any single technique alone. Photogrammetric DEM differencing could be used as a technique to
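    DEM differencing as described reduces to a simple computation once the surfaces are co-registered: subtract, suppress changes below the detectable level, and integrate over cell areas. A minimal sketch, where the 9 cm level of detection and 12 cm cell size echo the figures quoted above and the function name is illustrative:

```python
import numpy as np

def dem_change(dem_before, dem_after, lod=0.09, cell_area=0.12**2):
    """Subtract two co-registered DEMs (metres) and summarize net soil
    movement. Cells with |change| below the level of detection `lod` are
    treated as no-change. Returns (erosion, deposition) volumes in m^3."""
    diff = dem_after - dem_before
    diff[np.abs(diff) < lod] = 0.0          # ignore sub-detectable change
    erosion = -diff[diff < 0].sum() * cell_area
    deposition = diff[diff > 0].sum() * cell_area
    return erosion, deposition
```

    The key practical difficulty is not this arithmetic but the co-registration itself: any systematic vertical bias between the two photogrammetric surfaces shows up directly as spurious erosion or deposition.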

  1. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

    Production of a digital terrain model (DTM) is one of the most common tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of a DTM produced in this way depends on several factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. Assessing the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing, based on a practical test field survey and its data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs were used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures are computed: RMSE, median, normalized median absolute deviation and their confidence intervals, and quantiles. The completeness of the DTM is a very often overlooked quality parameter, but when the DTM is produced from a point cloud it should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map; the completeness is presented by a map of point density and a map of distances between grid points and terrain points. The results in the test area show great potential of the DTM produced from UAS imagery, both in the detailed representation of the terrain and in good height accuracy.
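    The accuracy measures listed (RMSE, median, NMAD) are straightforward to compute from checkpoint errors; a minimal sketch, with the 1.4826 scaling that makes NMAD comparable to a standard deviation for normally distributed errors:

```python
import numpy as np

def dtm_accuracy(errors):
    """Vertical accuracy measures for DTM checkpoint errors:
    RMSE, median error, and normalized median absolute deviation
    (NMAD = 1.4826 * median(|e - median(e)|)), which is far less
    sensitive to outliers than RMSE."""
    e = np.asarray(errors, dtype=float)
    rmse = float(np.sqrt(np.mean(e**2)))
    med = float(np.median(e))
    nmad = float(1.4826 * np.median(np.abs(e - med)))
    return rmse, med, nmad
```

    When RMSE greatly exceeds NMAD, the checkpoint set likely contains blunders or unfiltered vegetation points rather than uniformly noisy terrain.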

  2. Unsupervised building detection from irregularly spaced LiDAR and aerial imagery

    NASA Astrophysics Data System (ADS)

    Shorter, Nicholas Sven

    As more data sources containing 3-D information become available, an increased interest in 3-D imaging has emerged. Among these applications is the 3-D reconstruction of buildings and other man-made structures. A necessary preprocessing step is the detection and isolation of individual buildings that can subsequently be reconstructed in 3-D using various methodologies. Building detection and reconstruction have commercial use in urban planning, network planning for mobile communication (cell phone tower placement), spatial analysis of air pollution and noise nuisances, microclimate investigations, geographical information systems, security services, and change detection in areas affected by natural disasters. They are also used in the military for automatic target recognition and in entertainment for virtual tourism. Previously proposed building detection and reconstruction algorithms utilized aerial imagery alone. With the advent of Light Detection and Ranging (LiDAR) systems providing elevation data, current algorithms explore using captured LiDAR data as an additional feasible source of information. Additional sources of information can help automate techniques (alleviating the need for manual user intervention) as well as increase their capabilities and accuracy. Several building detection approaches surveyed in the open literature have fundamental weaknesses that hinder their use, such as requiring multiple data sets from different sensors, mandating that certain operations be carried out manually, and functionality limited to detecting only certain types of buildings. In this work, a building detection system is proposed and implemented which strives to overcome the limitations seen in existing techniques. The developed framework is flexible in that it can perform building detection from just LiDAR data (first or last return), or just nadir color aerial imagery.
If data from both LiDAR and

  3. Extracting Semantically Annotated 3d Building Models with Textures from Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.; Poznanska, A.

    2015-03-01

    This paper proposes a method for the reconstruction of city buildings with automatically derived textures that can be directly used for façade element classification. Oblique and nadir aerial imagery recorded by a multi-head camera system is transformed into dense 3D point clouds and evaluated statistically in order to extract the hull of the structures. For the resulting wall, roof and ground surfaces high-resolution polygonal texture patches are calculated and compactly arranged in a texture atlas without resampling. The façade textures subsequently get analyzed by a commercial software package to detect possible windows whose contours are projected into the original oriented source images and sparsely ray-casted to obtain their 3D world coordinates. With the windows being reintegrated into the previously extracted hull the final building models are stored as semantically annotated CityGML "LOD-2.5" objects.

  4. Identification of wild areas in southern lower Michigan. [terrain analysis from aerial photography, and satellite imagery

    NASA Technical Reports Server (NTRS)

    Habowski, S.; Cialek, C.

    1978-01-01

    An inventory methodology was developed to identify potential wild area sites. A list of site criteria was formulated and tested in six selected counties. Potential sites were initially identified from LANDSAT satellite imagery. A detailed study of the soil, vegetation, and relief characteristics of each site, based on both high-altitude aerial photographs and existing map data, was conducted to eliminate unsuitable sites. Ground reconnaissance of the remaining wild areas was made to verify suitability and acquire information on wildlife and general aesthetics. Physical characteristics of the wild areas in each county are presented in tables. Maps show the potential sites to be set aside for natural preservation and regulation by the state under the Wilderness and Natural Areas Act of 1972.

  5. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models

    NASA Astrophysics Data System (ADS)

    Abayowa, Bernard O.; Yilmaz, Alper; Hardie, Russell C.

    2015-08-01

    This paper presents a framework for automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud without prior knowledge of an initial alignment. The framework employs a coarse-to-fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize global error between the registered point clouds. The novelty of the proposed approach is in the computation of salient features from the DSMs, and the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and aerial datasets acquired in real urban environments. Results demonstrate the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud when initial registration parameters are unavailable.
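    The fine-registration stage, ICP, can be sketched in 2-D with numpy: alternate nearest-neighbour matching with a closed-form rigid-transform solve (Kabsch/SVD). This is a generic textbook version, not the authors' framework, and it assumes the coarse alignment has already brought the clouds close together.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2-D ICP: repeatedly match each source point to its nearest
    destination point, then solve the best rigid transform via SVD (Kabsch).
    Returns the accumulated rotation R (2x2) and translation t (2,)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # best rigid transform for these correspondences
        mu_s, mu_d = cur.mean(0), match.mean(0)
        H = (cur - mu_s).T @ (match - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:            # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti           # compose with previous estimate
    return R, t
```

    A production version would subsample points, reject outlier matches, and use a spatial index for the nearest-neighbour search, but the alternation above is the core of the refinement step.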

  6. The Photo-Mosaic Assistant: Incorporating Historic Aerial Imagery into Modern Research Projects

    NASA Astrophysics Data System (ADS)

    Flathers, E.

    2013-12-01

    One challenge that researchers face as data organization and analysis shift into the digital realm is the incorporation of 'dirty' data from analog back-catalogs into current projects. Geospatial data collections in university libraries, government data repositories, and private industry contain historic data such as aerial photographs that may be stored as negatives, prints, and as scanned digital image files. A typical aerial imagery series is created by taking photos of the ground from an aircraft along a series of parallel flight lines. The raw photos can be assembled into a mosaic that represents the full geographic area of the collection, but each photo suffers from individual distortion according to the attitude and altitude of the collecting aircraft at the moment of acquisition, so there is a process of orthorectification needed in order to produce a planimetric composite image that can be used to accurately refer to locations on the ground. Historic aerial photo collections often need significant preparation for consumption by a GIS: they may need to be digitized, often lack any explicit spatial coordinates, and may not include information about flight line patterns. Many collections lack even such basic information as index numbers for the photos, so it may be unclear in what order the photos were acquired. When collections contain large areas of, for example, forest or agricultural land, any given photo may have few visual cues to assist in relating it to the other photos or to an area on the ground. The Photo-Mosaic Assistant (PMA) is a collection of tools designed to assist in the organization of historic aerial photo collections and the preparation of collections for orthorectification and use in modern research applications. The first tool is a light table application that allows a user to take advantage of visual cues within photos to organize and explore the collection, potentially building a rough image mosaic by hand. 
The second tool is a set of

  7. Forest and land inventory using ERTS imagery and aerial photography in the boreal forest region of Alberta, Canada

    NASA Technical Reports Server (NTRS)

    Kirby, C. L.

    1974-01-01

    Satellite imagery and small-scale (1:120,000) infrared ektachrome aerial photography for the development of improved forest and land inventory techniques in the boreal forest region are presented to demonstrate spectral signatures and their application. The forest is predominantly mixed stands of white spruce and poplar, with some pure stands of black spruce and pine, and large areas of poorly drained land with peat- and sedge-type muskegs. This work is part of a coordinated program to evaluate ERTS imagery by the Canadian Forestry Service.

  8. Geomorphological relationships through the use of 2-D seismic reflection data, Lidar, and aerial imagery

    NASA Astrophysics Data System (ADS)

    Alesce, Meghan Elizabeth

    Barrier islands are crucial in protecting coastal environments. This study focuses on Dauphin Island, Alabama, located within the Northern Gulf of Mexico (NGOM) barrier island complex. It is one of many islands serving as natural protection for NGOM ecosystems and coastal cities. The NGOM barrier islands formed at 4 kya in response to a decrease in the rate of sea level rise. The morphology of these islands changes with hurricanes, anthropogenic activity, and tidal and wave action. This study focuses on ancient incised valleys and their impact on island morphology and hurricane breaches. Using high-frequency 2-D seismic reflection data, four horizons, including the present seafloor, were interpreted. Subaerial portions of Dauphin Island were imaged using Lidar data and aerial imagery over a ten-year time span, as well as historical maps. Historical shorelines of Dauphin Island were extracted from aerial imagery and historical maps, and were compared to the location of incised valleys seen within the 2-D seismic reflection data. Erosion and deposition volumes of Dauphin Island from 1998 to 2010 (the time span covering hurricanes Ivan and Katrina) in the vicinity of Katrina Cut and Pelican Island were quantified using Lidar data. For the time period prior to Hurricane Ivan, an erosional volume of 46,382,552 m3 and a depositional volume of 16,113.6 m3 were quantified from Lidar data. The effects of Hurricane Ivan produced a total erosion volume of 4,076,041.5 m3. The erosional and depositional volumes of Katrina Cut were 7,562,068.5 m3 and 510,936.7 m3, respectively. More volume change was found within Pelican Pass. For the period between hurricanes Ivan and Katrina the erosion volume was 595,713.8 m3, mostly located within Katrina Cut. Total deposition for the same period, including in Pelican Pass, was 15,353,961 m3. Hurricane breaches were compared to ancient incised valleys seen within the 2-D seismic reflection results. 
Breaches from hurricanes from 1849

  9. Outlier and target detection in aerial hyperspectral imagery: a comparison of traditional and percentage occupancy hit or miss transform techniques

    NASA Astrophysics Data System (ADS)

    Young, Andrew; Marshall, Stephen; Gray, Alison

    2016-05-01

    The use of aerial hyperspectral imagery for the purpose of remote sensing is a rapidly growing research area. Currently, targets are generally detected by looking for distinct spectral features of the objects under surveillance. For example, a camouflaged vehicle, deliberately designed to blend into background trees and grass in the visible spectrum, can be revealed using spectral features in the near-infrared spectrum. This work aims to develop improved target detection methods using a two-stage approach: first, development of a physics-based atmospheric correction algorithm to convert radiance into reflectance hyperspectral image data; second, use of improved outlier detection techniques. In this paper the use of the Percentage Occupancy Hit or Miss Transform is explored to provide an automated method for target detection in aerial hyperspectral imagery.
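    For background, the classic binary hit-or-miss transform underlying the percentage-occupancy variant can be sketched with numpy alone. The strict version below fires only on exact fits of the foreground/background structuring elements; the percentage-occupancy variant named in the title relaxes the two all-pixels tests to a fraction threshold. This is a generic sketch, not the authors' method.

```python
import numpy as np

def hit_or_miss(img, se_fg, se_bg):
    """Binary hit-or-miss transform: an output pixel fires when every
    foreground element of the structuring element lands on 1s and every
    background element lands on 0s in the corresponding image window."""
    h, w = img.shape
    sh, sw = se_fg.shape
    out = np.zeros_like(img)
    for r in range(h - sh + 1):
        for c in range(w - sw + 1):
            win = img[r:r + sh, c:c + sw]
            if np.all(win[se_fg == 1] == 1) and np.all(win[se_bg == 1] == 0):
                out[r + sh // 2, c + sw // 2] = 1   # mark the window centre
    return out
```

    With a single-pixel foreground element surrounded by a background ring, the transform picks out isolated bright pixels, which is the intuition behind using it as an outlier/target detector.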

  10. Integrating Terrestrial LIDAR with Point Clouds Created from Unmanned Aerial Vehicle Imagery

    NASA Astrophysics Data System (ADS)

    Leslar, M.

    2015-08-01

    Using unmanned aerial vehicles (UAVs) for the purposes of conducting high-accuracy aerial surveying has become a hot topic over the last year. One of the most promising means of conducting such a survey involves integrating a high-resolution non-metric digital camera with the UAV and using the principles of digital photogrammetry to produce high-density colorized point clouds. Through the use of stereo imagery, precise and accurate horizontal positioning information can be produced without the need for integration with any type of inertial navigation system (INS). Of course, some form of ground control is needed to achieve this result. Terrestrial LiDAR, either static or mobile, provides the solution. Points extracted from terrestrial LiDAR can be used as control in the digital photogrammetry solution required by the UAV. In return, the UAV is an affordable solution for filling in the shadows and occlusions typically experienced by terrestrial LiDAR. In this paper, the accuracies of points derived from a commercially available UAV solution are examined and compared to the accuracies achievable by a commercially available LiDAR solution. It was found that the LiDAR system produced a point cloud that was twice as accurate as the point cloud produced by the UAV's photogrammetric solution. Both solutions gave results within a few centimetres of the control field. In addition, the amount of planar dispersion on the vertical wall surfaces in the UAV point cloud was found to be several times greater than that of the horizontal ground-based UAV points or the LiDAR data.
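    The "planar dispersion" used to compare the two point clouds can be quantified as the RMS residual of a least-squares plane fit. A minimal sketch, assuming a wall or ground patch has already been segmented from the cloud (function name and data are illustrative):

```python
import numpy as np

def planar_dispersion(points):
    """RMS distance of points from their best-fit plane (least squares
    via SVD) -- one way to quantify the planar dispersion of a wall or
    ground patch extracted from a point cloud."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    # plane normal = right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal
    return float(np.sqrt((residuals ** 2).mean()))

# noisy samples of the plane z = 0 with 2 cm noise
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(0, 5, 200), rng.uniform(0, 5, 200),
            rng.normal(0, 0.02, 200)]
disp = planar_dispersion(pts)
print(round(disp, 3))  # close to the 0.02 noise level
```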

  11. Mapping of riparian invasive species with supervised classification of Unmanned Aerial System (UAS) imagery

    NASA Astrophysics Data System (ADS)

    Michez, Adrien; Piégay, Hervé; Jonathan, Lisein; Claessens, Hugues; Lejeune, Philippe

    2016-02-01

    Riparian zones are key landscape features, representing the interface between terrestrial and aquatic ecosystems. Although they have been influenced by human activities for centuries, their degradation has increased during the 20th century. Concomitant with (or as consequences of) these disturbances, the invasion of exotic species has increased throughout the world's riparian zones. In our study, we propose an easily reproducible methodological framework to map three riparian invasive taxa using Unmanned Aerial System (UAS) imagery: Impatiens glandulifera Royle, Heracleum mantegazzianum Sommier and Levier, and Japanese knotweed (Fallopia sachalinensis (F. Schmidt Petrop.), Fallopia japonica (Houtt.) and hybrids). Based on visible and near-infrared UAS orthophotos, we derived simple spectral and texture image metrics computed at various scales of image segmentation (scale parameters of 10, 30, 45, and 60, using eCognition software). Supervised classification based on the random forests algorithm was used to identify the most relevant variable (or combination of variables) derived from UAS imagery for mapping riparian invasive plant species. The models were built using 20% of the dataset, with the remaining 80% used as a test set. Except for H. mantegazzianum, the best results in terms of global accuracy were achieved with the finest scale of analysis (segmentation scale parameter = 10). The best overall accuracies reached 72%, 68%, and 97% for I. glandulifera, Japanese knotweed, and H. mantegazzianum, respectively. In terms of selected metrics, simple spectral metrics (layer mean/camera brightness) were the most used. Our results also confirm the added value of texture metrics (GLCM derivatives) for mapping riparian invasive species. The results obtained for I. glandulifera and Japanese knotweed do not reach sufficient accuracies for operational applications. However, the results achieved for H. mantegazzianum are encouraging. The high accuracy values combined to
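    The random forests workflow described (20% of segments for training, 80% for testing, metric importance ranking) can be sketched with scikit-learn. The per-segment feature layout and labels below are entirely hypothetical stand-ins for the paper's spectral and GLCM texture metrics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 500
# hypothetical per-segment metrics: 3 spectral means + 2 GLCM textures
X = rng.normal(size=(n, 5))
# synthetic "invasive vs. other" label driven by two of the metrics
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

# 20% training / 80% test split, mirroring the paper's setup
train = rng.random(n) < 0.2
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[train], y[train])
acc = clf.score(X[~train], y[~train])
print(f"overall accuracy: {acc:.2f}")
print("variable importances:", np.round(clf.feature_importances_, 2))
```

The `feature_importances_` vector plays the role of the paper's metric-selection step: the most discriminative segment metrics receive the highest importance.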

  12. Fusing Unmanned Aerial Vehicle Imagery with High Resolution Hydrologic Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Pierini, N.; Schreiner-McGraw, A.; Anderson, C.; Saripalli, S.; Rango, A.

    2013-12-01

    After decades of development and applications, high resolution hydrologic models are now common tools in research and increasingly used in practice. More recently, high resolution imagery from unmanned aerial vehicles (UAVs), which provides information on land surface properties, has become available for civilian applications. Fusing the two approaches promises to significantly advance the state-of-the-art in hydrologic modeling capabilities. This combination will also challenge assumptions on model processes, parameterizations and scale, as land surface characteristics (~0.1 to 1 m) may now surpass traditional model resolutions (~10 to 100 m). Ultimately, predictions from high resolution hydrologic models need to be consistent with the observational data that can be collected from UAVs. This talk describes our efforts to develop, utilize and test the impact of UAV-derived topographic and vegetation fields on the simulation of two small watersheds in the Sonoran and Chihuahuan Deserts at the Santa Rita Experimental Range (Green Valley, AZ) and the Jornada Experimental Range (Las Cruces, NM). High resolution digital terrain models, image orthomosaics and vegetation species classifications were obtained from a fixed wing airplane and a rotary wing helicopter, and compared to coarser analyses and products, including Light Detection and Ranging (LiDAR). We focus the discussion on the relative improvements achieved with UAV-derived fields in terms of terrain-hydrologic-vegetation analyses and summer season simulations using the TIN-based Real-time Integrated Basin Simulator (tRIBS) model. Model simulations are evaluated at each site against a high-resolution sensor network consisting of six rain gauges, forty soil moisture and temperature profiles, four channel runoff flumes, a cosmic-ray soil moisture sensor and an eddy covariance tower over multiple summer periods. We also discuss prospects for the fusion of high resolution models with novel

  13. Extracting Road Features from Aerial Videos of Small Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Rajamohan, D.; Rajan, K. S.

    2013-09-01

    With major aerospace companies showing interest in certifying UAV systems for civilian airspace, their use in commercial remote sensing applications like traffic monitoring, map refinement and agricultural data collection is on the rise. But ambitious requirements like real-time geo-referencing of data, support for multiple sensor angles of view, smaller UAV size and cheaper investment cost have led to challenges in platform stability, sensor noise reduction and increased onboard processing. Especially in small UAVs, the geo-referencing of the collected data is only as good as the quality of their localization sensors. This drives a need for methods that pick up spatial features from the captured video/images and aid in geo-referencing. This paper presents one such method, which identifies road segments and intersections based on traffic flow and compares well with the accuracy of manual observation. Two test video datasets, one each from a moving and a stationary platform, were used. The results show a promising average percentage difference of 7.01% and 2.48% for the road segment extraction process using the moving and stationary platforms, respectively. For the intersection identification process, the moving platform shows an accuracy of 75%, whereas the stationary platform data reaches an accuracy of 100%.
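    The idea of recovering road pixels from traffic flow can be illustrated with a simple motion-occupancy accumulator: pixels that repeatedly show inter-frame change are likely to lie on a road. This is a much-simplified stand-in for the paper's method; the thresholds and toy data are illustrative and assume a stabilized or stationary view:

```python
import numpy as np

def road_activity_mask(frames, diff_thresh=25, occupancy=0.05):
    """Mark pixels as 'road' where moving traffic is seen often:
    accumulate inter-frame differences and keep pixels whose motion
    occupancy rate exceeds a threshold."""
    frames = np.asarray(frames, float)
    moving = np.abs(np.diff(frames, axis=0)) > diff_thresh
    rate = moving.mean(axis=0)          # fraction of frames with motion
    return rate > occupancy

# toy sequence: a bright 'vehicle' sweeps repeatedly along row 5
frames = np.zeros((20, 10, 10))
for t in range(19):
    frames[t + 1, 5, t % 10] = 255
mask = road_activity_mask(frames)
print(mask[5].sum(), mask[0].sum())  # -> 10 0
```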

  14. Wildlife Multispecies Remote Sensing Using Visible and Thermal Infrared Imagery Acquired from AN Unmanned Aerial Vehicle (uav)

    NASA Astrophysics Data System (ADS)

    Chrétien, L.-P.; Théau, J.; Ménard, P.

    2015-08-01

    Wildlife aerial surveys require time and significant resources. Multispecies detection could reduce costs to a single census for species that coexist spatially. Traditional methods are demanding for observers in terms of concentration and are not adapted to multispecies censuses. The processing of multispectral aerial imagery acquired from an unmanned aerial vehicle (UAV) represents a potential solution for multispecies detection. The method used in this study is based on a multicriteria object-based image analysis applied to visible and thermal infrared imagery acquired from a UAV. This project aimed to detect American bison, fallow deer, gray wolves, and elk located in separate enclosures with a known number of individuals. Results showed that all bison and elk were detected without errors, while for deer and wolves, 0-2 individuals per flight line were mistaken for ground elements or went undetected. The approach also detected the four targeted species simultaneously and separately, even in the presence of other, untargeted species. These results confirm the potential of multispectral imagery acquired from UAVs for wildlife censuses. Operational application remains limited to small areas by current regulations and available technology. Standardization of the workflow will help reduce the time and expertise required for such technology.

  15. Fusion of Multi-View and Multi-Scale Aerial Imagery for Real-Time Situation Awareness Applications

    NASA Astrophysics Data System (ADS)

    Zhuo, X.; Kurz, F.; Reinartz, P.

    2015-08-01

    Manned aircraft have long been used for capturing large-scale aerial images, yet high costs and weather dependence restrict their availability in emergency situations. In recent years, the MAV (Micro Aerial Vehicle) emerged as a novel modality for aerial image acquisition. Its maneuverability and flexibility enable rapid awareness of the scene of interest. Since these two platforms deliver scene information at different scales and from different views, it makes sense to fuse these two types of complementary imagery to achieve a quick, accurate and detailed description of the scene, which is the main concern of real-time situation awareness. This paper proposes a method to fuse multi-view and multi-scale aerial imagery by establishing a common reference frame. In particular, common features among MAV images and geo-referenced airplane images can be extracted by a scale-invariant feature detector like SIFT. From the tie points of the geo-referenced images we derive the coordinates of the corresponding ground points, which are then utilized as ground control points in the global bundle adjustment of the MAV images. In this way, the MAV block is aligned to the reference frame. Experimental results show that this method can achieve fully automatic geo-referencing of MAV images even if GPS/IMU acquisition has dropouts, and the orientation accuracy is improved compared to GPS/IMU-based geo-referencing. The concept for a subsequent 3D classification method is also described in this paper.

  16. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.

    PubMed

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-08-12

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps comparable with those from UAV-images from real flights.
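    Resampling a high-resolution image to simulate coarser flight altitudes can be as simple as block averaging, which preserves the mean spectral values exactly when the image dimensions divide the factor. A sketch (sizes and factors are illustrative):

```python
import numpy as np

def block_resample(img, factor):
    """Downsample by block averaging -- a simple way to simulate
    coarser-altitude imagery from a high-resolution UAV image."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    cropped = img[:h2 * factor, :w2 * factor]
    return cropped.reshape(h2, factor, w2, factor).mean(axis=(1, 3))

# a high-resolution 'band' resampled to 2x and 4x coarser grids
rng = np.random.default_rng(3)
img = rng.uniform(0, 255, (120, 120))
for f in (2, 4):
    rs = block_resample(img, f)
    # block averaging preserves the overall mean exactly
    print(rs.shape, round(abs(rs.mean() - img.mean()), 6))
```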

  17. Discrimination of Deciduous Tree Species from Time Series of Unmanned Aerial System Imagery.

    PubMed

    Lisein, Jonathan; Michez, Adrien; Claessens, Hugues; Lejeune, Philippe

    2015-01-01

    Technological advances can revolutionize Precision Forestry by providing accurate and fine forest information at the tree level. This paper addresses the question of how, and particularly when, an Unmanned Aerial System (UAS) should be used in order to efficiently discriminate deciduous tree species. The goal of this research is to determine the best time window for optimal species discrimination. A time series of high resolution UAS imagery was collected to cover the growing season from leaf flush to leaf fall, taking full advantage of the temporal resolution of UAS acquisition, one of the most promising features of small drones. The disparity in forest tree phenology is at its maximum during early spring and late autumn, but the phenological state that optimizes the classification result is the one that minimizes the spectral variation within tree species groups and, at the same time, maximizes the phenological differences between species. Sunlit tree crowns (5 deciduous species groups) were classified using a Random Forest approach for monotemporal, two-date and three-date combinations. The end of leaf flushing was the most efficient single-date time window. Multitemporal datasets definitely improve the overall classification accuracy, but single-date high resolution orthophotomosaics, acquired in optimal time windows, result in very good classification accuracy (overall out-of-bag error of 16%).

  19. Random Forest and Object-Based Classification for Forest Pest Extraction from UAV Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Yuan, Yi; Hu, Xiangyun

    2016-06-01

    Forest pests are one of the most important factors affecting forest health. However, because it is difficult to delineate infested areas and to predict how infestations spread, control and extermination efforts have so far been only partially effective, and infested areas continue to spread. The introduction of spatial information technology is therefore highly demanded: periodically mapping infested areas as soon as possible and predicting the spread of infestation make it possible to examine spatial distribution characteristics and establish timely control strategies. With UAV photography becoming increasingly popular, it has become much cheaper and faster to acquire UAV images, which are well suited to monitoring forest health and detecting pests. This paper proposes a new method to effectively detect forest pest damage in UAV aerial imagery. An image is first segmented into superpixels, and a 12-dimensional statistical texture descriptor is computed for each superpixel, which is used to train and classify the data. Finally, the classification results are refined with a few simple rules. Experiments show that the method is effective for extracting forest pest areas in UAV images.

  3. a Sensor Aided H.264/AVC Video Encoder for Aerial Video Sequences with in the Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotor UAVs because of their low endurance due to short battery life. Images can be stored on board with either still image or video data compression. Still image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms, which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system and thereby maximize the encoder performance. Experiments are performed on both simulated and real world video sequences.

  4. Observation of wave celerity evolution in the nearshore using digital video imagery

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.

    2008-12-01

    The celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. The celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to individual video images sampled at 3 Hz. Original image sequences are processed through video frame differencing and directional low-pass image filtering to reduce the noise arising from foam in the surf zone. The breaking wave celerity is computed along a cross-shore transect from the wave crest tracks extracted by a Radon transform-based line detection method. The observed celerity from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the nonlinear shallow water wave equation (NSWE)-based celerity computed using the measured depths and wave heights, the video-based celerity generally shows good agreement over the surf zone except in the regions around the incipient wave breaking locations. In those regions, the observed wave celerity is even larger than the NSWE-based celerity due to the transition of wave crest shapes. The celerity observed from the video imagery can be used to monitor nearshore geometry through depth inversion based on nonlinear wave celerity theories. For this purpose, the excess celerity across the breaker points needs to be corrected relative to the nonlinear wave celerity theory applied.
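    Depth inversion from observed celerity rests on the dispersion relation: in the linear shallow-water limit c = sqrt(g*h), so h = c^2/g, while surf-zone corrections account for the wave height H. A minimal sketch of the arithmetic (the solitary-wave form below is one common choice, not necessarily the NSWE-based form used in the paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_celerity(c):
    """Linear shallow-water depth inversion: c = sqrt(g*h) => h = c^2/g."""
    return c * c / G

def celerity_with_height(h, H):
    """Amplitude-corrected celerity c = sqrt(g*(h + H)) with wave height H,
    a common surf-zone approximation (illustrative choice)."""
    return math.sqrt(G * (h + H))

c = math.sqrt(G * 2.0)                     # celerity over 2 m of water
print(round(depth_from_celerity(c), 2))    # -> 2.0
print(round(celerity_with_height(2.0, 0.8), 2))
```

Over the breaker region the observed celerity exceeds sqrt(g*h), so inverting it with the linear formula overestimates the depth, which is why the paper calls for a correction there.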

  5. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    PubMed Central

    Zhang, Yanning; Tong, Xiaomin; Yang, Tao; Ma, Wenguang

    2015-01-01

    With the rapid development of UAV (Unmanned Aerial Vehicle) technology, moving target detection for aerial video has become a popular research topic in computer vision. Most existing methods follow a registration-detection framework and can only deal with simple background scenes; they tend to fail in complex multi-background scenarios such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive complex scenes accurately through automatic estimation of multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge the consistent models. Finally, for all small-area blocks we calculate, pixel by pixel, the degree of membership in the multiple background models. Moving objects are segmented by means of an energy optimization method solved via Graph Cuts. Extensive experimental results on public aerial videos show that, thanks to the multi-background model estimation and the analysis of each pixel's membership in the multiple models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly. PMID:25856330

  7. Exploration towards the modeling of gable-roofed buildings using a combination of aerial and street-level imagery

    NASA Astrophysics Data System (ADS)

    Creusen, Ivo; Hazelhoff, Lykele; de With, Peter H. N.

    2015-03-01

    Extraction of residential building properties is helpful for numerous applications, such as computer-guided feasibility analysis for solar panel placement, determination of real-estate taxes and assessment of real-estate insurance policies. Therefore, this work explores the automated modeling of buildings with a gable roof (the most common roof type within Western Europe), based on a combination of aerial imagery and street-level panoramic images. This is a challenging task, since buildings show large variations in shape, dimensions and building extensions, and may additionally be captured under non-ideal lighting conditions. The aerial images provide a coarse overview of the building due to the large capturing distance. The building footprint and an initial estimate of the building height are extracted based on the analysis of stereo aerial images. The estimated model is then refined using street-level images, which feature higher resolution and enable more accurate measurements, but display only a single side of the building. Initial experiments indicate that the footprint dimensions of the main building can be accurately extracted from aerial images, while the building height is extracted with slightly less accuracy. By combining aerial and street-level images, we have found that the accuracy of these height measurements is significantly increased, thereby improving the overall quality of the extracted building model and resulting in an average inaccuracy of the estimated volume below 10%.

  8. Advanced Tie Feature Matching for the Registration of Mobile Mapping Imaging Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Peter, M.; Gerke, M.; Vosselman, G.

    2016-06-01

    Mobile Mapping's ability to acquire high-resolution ground data stands in contrast to the unreliable localisation capabilities of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line-of-sight to navigation satellites, making it difficult to accurately estimate the mobile platform's position and considerably diminishing the positioning quality of the acquired data products. This issue has been widely addressed in the literature and in research projects; however, consistent compliance with sub-decimetre accuracy, as well as the correction of errors in height, remain unsolved. We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on the utilisation of highly accurate orientation parameters derived from aerial imagery. In addition, the degraded exterior orientation parameters of the MM platform will be utilised, as they enable the application of the accurate matching techniques needed to derive reliable tie information. This tie information will then be used within an adjustment solution to correct the affected MM data. This paper presents an advanced feature matching procedure as a prerequisite to the aforementioned orientation update. MM data is ortho-projected to gain a higher resemblance to aerial nadir data, simplifying the images' geometry for matching. By utilising MM exterior orientation parameters, search windows may be used in conjunction with selective keypoint detection and template matching. Originating from different sensor systems, however, difficulties arise with respect to changes in illumination, radiometry and differences in original perspective. To respond to these challenges for feature detection, the procedure relies on detecting keypoints in only one image. Initial tests indicate a considerable improvement in comparison to classic detector/descriptor approaches in this particular matching scenario. This method leads to a significant reduction of outliers due to the limited availability

  9. Object detection and classification using image moment functions applied to video and imagery analysis

    NASA Astrophysics Data System (ADS)

    Mise, Olegs; Bento, Stephen

    2013-05-01

    This paper proposes an object detection algorithm and framework based on a combination of the Normalized Central Moment Invariant (NCMI) and the Normalized Geometric Radial Moment (NGRM). The framework can detect objects using offline pre-loaded signatures and/or tracker data, the latter used to create an online object signature representation. To overcome the implementation constraints of low-powered hardware, the framework uses a combination of image moment functions together with a multi-layer neural network, and has been shown to be robust to false alarms on non-target objects. Optimization for fast calculation of the image moment descriptors is also discussed. The paper presents an overview of the framework and demonstrates its performance on real video and imagery scenes.
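The NCMI/NGRM formulations are not given in the abstract; as a rough illustration of the moment-function family they belong to, the sketch below computes standard normalized central moments (an assumed stand-in, not the paper's descriptors):

```python
import numpy as np

def normalized_central_moment(img, p, q):
    """Translation- and scale-normalized central moment eta_pq of a 2-D image."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00    # intensity centroid
    mu_pq = (((x - cx) ** p) * ((y - cy) ** q) * img).sum()  # central moment
    return mu_pq / m00 ** (1 + (p + q) / 2)                  # scale normalisation

# A uniform square blob: eta_20 equals eta_02 by symmetry.
blob = np.zeros((32, 32))
blob[8:24, 8:24] = 1.0
print(round(normalized_central_moment(blob, 2, 0), 6))
```

Descriptors of this kind feed naturally into a small classifier such as the multi-layer neural network the paper mentions.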

  10. The research of moving objects behavior detection and tracking algorithm in aerial video

    NASA Astrophysics Data System (ADS)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    The article focuses on moving-target detection and tracking algorithms in aerial monitoring. The study covers moving-target detection, behavioral analysis of moving targets, and automatic target tracking. For detection, the paper considers the characteristics of background subtraction and the frame-difference method, and uses a background-reconstruction method to accurately locate moving targets. For behavioral analysis, the detection area is shown in a binary image using MATLAB, and the algorithm determines whether moving objects have intruded and, if so, the direction of intrusion. For automatic tracking, a video tracking algorithm that predicts object centroids based on Kalman filtering is proposed.
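The centroid-prediction step named above can be sketched with a textbook constant-velocity Kalman filter; the motion model and noise covariances here are assumptions, not the paper's parameters:

```python
import numpy as np

# Constant-velocity Kalman filter over a 2-D centroid: state = [x, y, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # we observe position only
Q = np.eye(4) * 1e-3                               # process noise (assumed)
R = np.eye(2) * 1.0                                # measurement noise (assumed)

x = np.zeros(4)
P = np.eye(4) * 10.0

def kalman_step(x, P, z):
    # Predict the state forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured centroid z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed centroids of a target moving one pixel per frame along x.
for t in range(20):
    x, P = kalman_step(x, P, np.array([float(t), 5.0]))
print(np.round(x[:2], 1))  # estimated position approaches the true track
```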

  11. Low-Level Tie Feature Extraction of Mobile Mapping Data (mls/images) and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Hussnain, Z.; Peter, M.; Oude Elberink, S.; Gerke, M.; Vosselman, G.

    2016-03-01

    Mobile Mapping (MM) is a technique for obtaining geo-information using sensors mounted on a mobile platform or vehicle. The platform's position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in erroneous position estimates. Derived MM data products, such as laser point clouds or images, therefore lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, who mainly concentrate on utilising tertiary reference data to mitigate these effects. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy, and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high-resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform's defective, and therefore only approximate, orientation parameters, accurate feature matching techniques can be applied as a pre-processing step to minimise the platform's three-dimensional positioning error. Subsequently, the identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and dataset demands different approaches, two independent workflows are being developed in parallel. Both workflows, still under development, will be presented and preliminary results shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. 
In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed as well as an outline of
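The template matching that recurs in these MM/aerial registration workflows can be illustrated with a brute-force normalized cross-correlation search over a search window; a minimal sketch, not the authors' implementation:

```python
import numpy as np

def ncc_match(search, template):
    """Normalized cross-correlation of a template over a search window;
    returns the (row, col) top-left corner of the best match."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_rc = -2.0, (0, 0)
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            w = search[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((t ** 2).sum() * (wz ** 2).sum())
            score = (t * wz).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(0)
scene = rng.random((40, 40))
tmpl = scene[12:20, 25:33].copy()  # template cut from a known location
print(ncc_match(scene, tmpl))      # recovers that location
```

In practice the search window would be constrained by the platform's approximate orientation parameters, exactly as the abstract describes.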

  12. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    SciTech Connect

    Hess-Flores, Mauricio

    2011-11-10

    reconstruction pre-processing, where an algorithm detects and discards frames that would lead to inaccurate feature matching, camera pose estimation degeneracies or mathematical instability in structure computation based on a residual error comparison between two different match motion models. The presented algorithms were designed for aerial video but have been proven to work across different scene types and camera motions, and for both real and synthetic scenes.

  13. New interpretations of the Fort Clark State Historic Site based on aerial color and thermal infrared imagery

    NASA Astrophysics Data System (ADS)

    Heller, Andrew Roland

    The Fort Clark State Historic Site (32ME2) is a well-known site on the upper Missouri River, North Dakota. The site was the location of two Euroamerican trading posts and a large Mandan-Arikara earthlodge village. In 2004, Dr. Kenneth L. Kvamme and Dr. Tommy Hailey surveyed the site using aerial color and thermal infrared imagery collected from a powered parachute. Individual images were stitched together into large image mosaics and registered to Wood's 1993 interpretive map of the site using Adobe Photoshop. The analysis of those image mosaics resulted in the identification of more than 1,500 archaeological features, including as many as 124 earthlodges.

  14. Semantic Segmentation and Difference Extraction via Time Series Aerial Video Camera and its Application

    NASA Astrophysics Data System (ADS)

    Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.

    2015-04-01

    Google Earth's high-resolution imagery typically takes months to process before new images appear online, a slow process that is especially limiting for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that occur across a time series, so that only regions with differences are updated. In our system, aerial images from the Massachusetts road and building open datasets and the Saitama district datasets are used as input. Semantic segmentation, a pixel-wise classification of images, is then applied to the input images using a deep neural network. A deep neural network is used because it is not only efficient at learning highly discriminative image features such as roads and buildings, but also partially robust to incomplete and poorly registered target maps. Aerial images carrying this semantic information are stored as a database in the 5D World Map and serve as ground-truth images. This system visualises multimedia data in five dimensions: three spatial dimensions, one temporal dimension, and one degenerate dimension combining semantics and colour. Next, a ground-truth image chosen from the 5D World Map database and a new aerial image with the same spatial extent but from a different time are compared via the difference-extraction method, and the map is updated only where local changes have occurred. Map updating thus becomes cheaper, faster and more effective, especially for post-disaster applications, by leaving unchanged regions alone and updating only changed regions.
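The difference-extraction step reduces to comparing two co-registered semantic label maps of the same area; a minimal sketch with hypothetical class codes:

```python
import numpy as np

# Compare two semantic label maps from different dates and flag pixels whose
# class changed. Class codes are hypothetical (0 = background, 2 = building).
old = np.zeros((6, 6), dtype=int)
new = old.copy()
new[1:3, 1:4] = 2                # a building appeared between the two dates

changed = old != new
update_fraction = changed.mean() # only this share of the map needs re-rendering

print(f"{changed.sum()} changed pixels, {update_fraction:.1%} of the map")
```

Only the flagged region would be pushed to the map database; the rest of the tile is left untouched, which is the source of the claimed speed-up.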

  15. Bridging Estimates of Greenness in an Arid Grassland Using Field Observations, Phenocams, and Time Series Unmanned Aerial System (UAS) Imagery

    NASA Astrophysics Data System (ADS)

    Browning, D. M.; Tweedie, C. E.; Rango, A.

    2013-12-01

    Spatially extensive grasslands and savannas in arid and semi-arid ecosystems (i.e., rangelands) require cost-effective, accurate, and consistent approaches for monitoring plant phenology. Remotely sensed imagery offers these capabilities; however contributions of exposed soil due to modest vegetation cover, susceptibility of vegetation to drought, and lack of robust scaling relationships challenge biophysical retrievals using moderate- and coarse-resolution satellite imagery. To evaluate methods for characterizing plant phenology of common rangeland species and to link field measurements to remotely sensed metrics of land surface phenology, we devised a hierarchical study spanning multiple spatial scales. We collect data using weekly standardized field observations on focal plants, daily phenocam estimates of vegetation greenness, and very high spatial resolution imagery from an Unmanned Aerial System (UAS) throughout the growing season. Field observations of phenological condition and vegetation cover serve to verify phenocam greenness indices along with indices derived from time series UAS imagery. UAS imagery is classified using object-oriented image analysis to identify species-specific image objects for which greenness indices are derived. Species-specific image objects facilitate comparisons with phenocam greenness indices and scaling spectral responses to footprints of Landsat and MODIS pixels. Phenocam greenness curves indicated rapid canopy development for the widespread deciduous shrub Prosopis glandulosa over 14 (in April 2012) to 16 (in May 2013) days. The modest peak in greenness for the dominant perennial grass Bouteloua eriopoda occurred in October 2012 following peak summer rainfall. Weekly field estimates of canopy development closely coincided with daily patterns in initial growth and senescence for both species. 
Field observations improve the precision of the timing of phenophase transitions relative to inflection points calculated from phenocam
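Phenocam greenness indices like those discussed above are commonly computed as a green chromatic coordinate (GCC); a minimal sketch, assuming GCC as the index (the abstract does not specify which index the study used):

```python
import numpy as np

def gcc(rgb):
    """Green chromatic coordinate, a common phenocam greenness index:
    GCC = G / (R + G + B), averaged over a region of interest."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b
    return np.mean(g[total > 0] / total[total > 0])

# A green-dominated canopy patch scores higher than a bare-soil patch
# (pixel values below are illustrative).
canopy = np.tile([60, 120, 40], (4, 4, 1))
soil = np.tile([120, 100, 80], (4, 4, 1))
print(round(gcc(canopy), 3), round(gcc(soil), 3))
```

Tracking this index daily yields the greenness curves from which phenophase transition dates are estimated.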

  16. Trafficking in tobacco farm culture: Tobacco companies' use of video imagery to undermine health policy

    PubMed Central

    Otañez, Martin G; Glantz, Stanton A

    2009-01-01

    The cigarette companies and their lobbying organization used tobacco industry-produced films and videos about tobacco farming to support their political, public relations, and public policy goals. Critical discourse analysis shows how tobacco companies utilized film and video imagery and narratives of tobacco farmers and tobacco economies for lobbying politicians and influencing consumers, industry-allied groups, and retail shop owners to oppose tobacco control measures and counter publicity on the health hazards, social problems, and environmental effects of tobacco growing. Imagery and narratives of tobacco farmers, tobacco barns, and agricultural landscapes in industry videos constituted a tobacco industry strategy to construct a corporate vision of tobacco farm culture that privileges the economic benefits of tobacco. The positive discursive representations of tobacco farming ignored actual behavior of tobacco companies to promote relationships of dependency and subordination for tobacco farmers and to contribute to tobacco-related poverty, child labor, and deforestation in tobacco growing countries. While showing tobacco farming as a family and a national tradition and a source of jobs, tobacco companies portrayed tobacco as a tradition to be protected instead of an industry to be regulated and denormalized. PMID:20160936

  17. Trafficking in tobacco farm culture: Tobacco companies' use of video imagery to undermine health policy.

    PubMed

    Otañez, Martin G; Glantz, Stanton A

    2009-05-01

    The cigarette companies and their lobbying organization used tobacco industry-produced films and videos about tobacco farming to support their political, public relations, and public policy goals. Critical discourse analysis shows how tobacco companies utilized film and video imagery and narratives of tobacco farmers and tobacco economies for lobbying politicians and influencing consumers, industry-allied groups, and retail shop owners to oppose tobacco control measures and counter publicity on the health hazards, social problems, and environmental effects of tobacco growing. Imagery and narratives of tobacco farmers, tobacco barns, and agricultural landscapes in industry videos constituted a tobacco industry strategy to construct a corporate vision of tobacco farm culture that privileges the economic benefits of tobacco. The positive discursive representations of tobacco farming ignored actual behavior of tobacco companies to promote relationships of dependency and subordination for tobacco farmers and to contribute to tobacco-related poverty, child labor, and deforestation in tobacco growing countries. While showing tobacco farming as a family and a national tradition and a source of jobs, tobacco companies portrayed tobacco as a tradition to be protected instead of an industry to be regulated and denormalized. PMID:20160936

  18. Integration of LiDAR Data with Aerial Imagery for Estimating Rooftop Solar Photovoltaic Potentials in City of Cape Town

    NASA Astrophysics Data System (ADS)

    Adeleke, A. K.; Smit, J. L.

    2016-06-01

    Apart from the drive to reduce carbon dioxide emissions in carbon-intensive economies like South Africa, the recent spate of electricity load shedding across most parts of the country, including Cape Town, has left electricity consumers scrambling for alternatives so as to rely less on the national grid. Solar energy, abundantly available in most parts of Africa and regarded as a clean and renewable source of energy, makes it possible to generate electricity using photovoltaic technology. However, before time and financial resources are invested in rooftop solar photovoltaic systems in urban areas, it is important to evaluate the potential of the building rooftops intended for harvesting solar energy. This paper presents methodologies that use LiDAR data and other ancillary data, such as high-resolution aerial imagery, to automatically extract building rooftops in the City of Cape Town and evaluate their potential for solar photovoltaic systems. Two main processes are involved: (1) automatic extraction of building roofs by integrating LiDAR data and aerial imagery, in order to derive each roof's outline and areal coverage; and (2) estimation of the global solar radiation incident on each roof surface using an elevation model derived from the LiDAR data, in order to evaluate its solar photovoltaic potential. The result is a geodatabase that can be queried to retrieve salient information about the viability of a particular building roof for solar photovoltaic installation.

  19. Decision Level Fusion of LIDAR Data and Aerial Color Imagery Based on Bayesian Theory for Urban Area Classification

    NASA Astrophysics Data System (ADS)

    Rastiveis, H.

    2015-12-01

    Airborne Light Detection and Ranging (LiDAR) generates high-density 3D point clouds that provide comprehensive information about object surfaces. Combining these data with aerial/satellite imagery is quite promising for improving land cover classification. In this study, fusion of LiDAR data and aerial imagery based on Bayesian theory is presented as a three-level fusion algorithm. In the first level, pixel-level fusion, the proper descriptors for both LiDAR and image data are extracted. In the next level, feature-level fusion, the extracted features are used to classify the area into six classes, "Buildings", "Trees", "Asphalt Roads", "Concrete Roads", "Grass" and "Cars", with the Naïve Bayes classification algorithm. This classification is performed under three different strategies: (1) using only LiDAR data, (2) using only image data, and (3) using all features extracted from both LiDAR and image data. The results of the three classifiers are integrated in the last phase, decision-level fusion, again based on the Naïve Bayes algorithm. To evaluate the proposed algorithm, a high-resolution colour orthophoto and LiDAR data over the urban area of Zeebruges, Belgium, were used. The results of the decision-level fusion phase revealed an improvement in both overall accuracy and kappa coefficient.
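The decision-level fusion can be sketched as a product of per-classifier class posteriors under a naive independence assumption; the three posterior vectors below (LiDAR-only, image-only, combined-feature) are hypothetical values for one pixel, not the paper's numbers:

```python
import numpy as np

# Classes from the abstract; fused posterior is proportional to the product
# of each classifier's posteriors (naive independence assumption).
classes = ["Buildings", "Trees", "Asphalt Roads", "Concrete Roads", "Grass", "Cars"]
p_lidar = np.array([0.50, 0.20, 0.10, 0.10, 0.05, 0.05])
p_image = np.array([0.40, 0.30, 0.10, 0.10, 0.05, 0.05])
p_both  = np.array([0.60, 0.15, 0.10, 0.05, 0.05, 0.05])

fused = p_lidar * p_image * p_both
fused /= fused.sum()             # renormalise to a probability vector
print(classes[int(np.argmax(fused))])
```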

  20. Detection of two intermixed invasive woody species using color infrared aerial imagery and the support vector machine classifier

    NASA Astrophysics Data System (ADS)

    Mirik, Mustafa; Chaudhuri, Sriroop; Surber, Brady; Ale, Srinivasulu; James Ansley, R.

    2013-01-01

    Both the evergreen redberry juniper (Juniperus pinchotii Sudw.) and the deciduous honey mesquite (Prosopis glandulosa Torr.) are destructive and aggressive invaders that affect rangelands and grasslands of the southern Great Plains of the United States. However, their current spatial extent and future expansion trends are unknown. This study aimed to: (1) explore the utility of aerial imagery for detecting and mapping intermixed redberry juniper and honey mesquite while both are in full foliage, using the support vector machine classifier at two sites in north central Texas; and (2) assess and compare the mapping accuracies between sites. Accuracy assessments revealed overall accuracies of 90% with an associated kappa coefficient of 0.86 for site 1 and 89% with an associated kappa coefficient of 0.85 for site 2. The Z-statistic (0.102 < 1.96) used to compare the classification results for the two sites indicated an insignificant difference between classifications at the 95% probability level. In most instances, juniper and mesquite were identified correctly, with <7% being mistaken for the other woody species. These results indicate that assessing the current extent and severity of infestation by these two woody species in a spatial context is possible using aerial remote sensing imagery.
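The significance test on the two kappa coefficients follows the usual form Z = |k1 - k2| / sqrt(var1 + var2); the variances below are illustrative, since the abstract does not report them:

```python
import math

def kappa_z(k1, var1, k2, var2):
    """Z-statistic for comparing two independent kappa coefficients."""
    return abs(k1 - k2) / math.sqrt(var1 + var2)

# Kappas from the abstract; variances are assumed for illustration.
z = kappa_z(0.86, 0.0024, 0.85, 0.0024)
print(z < 1.96)  # difference not significant at the 95% level
```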

  1. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m² rectangular dry lot, either in pairs (pilot tests) or individually (...

  2. Analysis of the impact of spatial resolution on land/water classifications using high-resolution aerial imagery

    USGS Publications Warehouse

    Enwright, Nicholas M.; Jones, William R.; Garber, Adrienne L.; Keller, Matthew J.

    2014-01-01

    Long-term monitoring efforts often use remote sensing to track trends in habitat or landscape conditions over time. To most appropriately compare observations over time, long-term monitoring efforts strive for consistency in methods. Thus, advances and changes in technology over time can present a challenge. For instance, modern camera technology has led to an increasing availability of very high-resolution imagery (i.e. submetre and metre) and a shift from analogue to digital photography. While numerous studies have shown that image resolution can impact the accuracy of classifications, most of these studies have focused on the impacts of comparing spatial resolution changes greater than 2 m. Thus, a knowledge gap exists on the impacts of minor changes in spatial resolution (i.e. submetre to about 1.5 m) in very high-resolution aerial imagery (i.e. 2 m resolution or less). This study compared the impact of spatial resolution on land/water classifications of an area dominated by coastal marsh vegetation in Louisiana, USA, using 1:12,000 scale colour-infrared analogue aerial photography (AAP) scanned at four different dot-per-inch resolutions simulating ground sample distances (GSDs) of 0.33, 0.54, 1, and 2 m. Analysis of the impact of spatial resolution on land/water classifications was conducted by exploring various spatial aspects of the classifications including density of waterbodies and frequency distributions in waterbody sizes. This study found that a small-magnitude change (1–1.5 m) in spatial resolution had little to no impact on the amount of water classified (i.e. percentage mapped was less than 1.5%), but had a significant impact on the mapping of very small waterbodies (i.e. waterbodies ≤ 250 m2). These findings should interest those using temporal image classifications derived from very high-resolution aerial photography as a component of long-term monitoring programs.

  3. Using high-resolution digital aerial imagery to map land cover

    USGS Publications Warehouse

    Dieck, J.J.; Robinson, Larry

    2014-01-01

    The Upper Midwest Environmental Sciences Center (UMESC) has used aerial photography to map land cover/land use on federally owned and managed lands for over 20 years. Until recently, that process used 23- by 23-centimeter (9- by 9-inch) analog aerial photos to classify vegetation along the Upper Mississippi River System, on National Wildlife Refuges, and in National Parks. With digital aerial cameras becoming more common and offering distinct advantages over analog film, UMESC transitioned to an entirely digital mapping process in 2009. Though not without challenges, this method has proven to be much more accurate and efficient when compared to the analog process.

  4. Monitoring the invasion of Spartina alterniflora using very high resolution unmanned aerial vehicle imagery in Beihai, Guangxi (China).

    PubMed

    Wan, Huawei; Wang, Qiao; Jiang, Dong; Fu, Jingying; Yang, Yipeng; Liu, Xiaoman

    2014-01-01

    Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To obtain the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from an unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, an increase of 19.07% over the area in 2009. A field survey was conducted for verification, and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on the distribution, progress, and early detection of Spartina alterniflora invasion. OBIA, an object-based image analysis method for remote sensing (RS) detection, can make control measures more effective, accurate, and less expensive than a field survey of the invasive population.

  5. Monitoring the invasion of Spartina alterniflora using very high resolution unmanned aerial vehicle imagery in Beihai, Guangxi (China).

    PubMed

    Wan, Huawei; Wang, Qiao; Jiang, Dong; Fu, Jingying; Yang, Yipeng; Liu, Xiaoman

    2014-01-01

    Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To obtain the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from an unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, an increase of 19.07% over the area in 2009. A field survey was conducted for verification, and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on the distribution, progress, and early detection of Spartina alterniflora invasion. OBIA, an object-based image analysis method for remote sensing (RS) detection, can make control measures more effective, accurate, and less expensive than a field survey of the invasive population. PMID:24892066

  6. Monitoring the Invasion of Spartina alterniflora Using Very High Resolution Unmanned Aerial Vehicle Imagery in Beihai, Guangxi (China)

    PubMed Central

    Wan, Huawei; Wang, Qiao; Jiang, Dong; Yang, Yipeng; Liu, Xiaoman

    2014-01-01

    Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To obtain the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from an unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, an increase of 19.07% over the area in 2009. A field survey was conducted for verification, and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on the distribution, progress, and early detection of Spartina alterniflora invasion. OBIA, an object-based image analysis method for remote sensing (RS) detection, can make control measures more effective, accurate, and less expensive than a field survey of the invasive population. PMID:24892066

  7. Quantifying the rapid evolution of a nourishment project with video imagery

    USGS Publications Warehouse

    Elko, N.A.; Holman, R.A.; Gelfenbaum, G.

    2005-01-01

    Spatially and temporally high-resolution video imagery was combined with traditional surveyed beach profiles to investigate the evolution of a rapidly eroding beach nourishment project. Upham Beach is a 0.6-km beach located downdrift of a structured inlet on the west coast of Florida. The beach was stabilized in a seaward-advanced position during the 1960s and has been nourished every 4-5 years since 1975. During the 1996 nourishment project, 193,000 m³ of sediment advanced the shoreline by as much as 175 m. Video images were collected concurrently with traditional surveys during the 1996 project to test video imaging as a nourishment monitoring technique. The video imagery illustrated morphologic changes that were not apparent in the survey data. Increased storminess during the second (El Niño) winter after the 1996 project resulted in increased erosion rates of 0.4 m/d (135.0 m/y), compared with 0.2 m/d (69.4 m/y) during the first winter. The measured half-life of the nourishment project, the time at which 50% of the nourished material remains, was 0.94 years. A simple analytical equation shows reasonable agreement with the measured values, suggesting that project evolution follows a predictable pattern of exponential decay. Longshore planform equilibration does not occur on Upham Beach; rather, sediment diffuses downdrift until 100% of the nourished material erodes. The wide nourished beach erodes rapidly due to the lack of sediment bypassing from the north and the stabilized headland at Upham Beach that is exposed to wave energy.
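The exponential-decay model and the reported 0.94-year half-life imply a simple volume curve; a minimal worked example using the paper's figures:

```python
import math

# Exponential decay of nourishment volume: V(t) = V0 * exp(-k t),
# with half-life t_half = ln(2) / k.
t_half = 0.94                   # years, measured half-life from the study
k = math.log(2) / t_half        # decay constant, 1/years

def volume_remaining(v0, t):
    """Sediment volume remaining t years after placement."""
    return v0 * math.exp(-k * t)

v0 = 193_000                    # m^3 placed in the 1996 project
print(round(volume_remaining(v0, t_half)))      # half remains: 96500
print(round(volume_remaining(v0, 2 * t_half)))  # a quarter remains: 48250
```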

  8. Preliminary statistical studies concerning the Campos RJ sugar cane area, using LANDSAT imagery and aerial photographs

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Costa, S. R. X.; Paiao, L. B. F.; Mendonca, F. J.; Shimabukuro, Y. E.; Duarte, V.

    1983-01-01

    The two-phase sampling technique was applied to estimate the area cultivated with sugar cane in an approximately 984 sq km pilot region of Campos. The correlation between existing aerial photography and LANDSAT data was exploited. The two-phase sampling estimate corresponded to 99.6% of the result obtained from aerial photography, taken as ground truth. The estimate has a standard deviation of 225 ha, which corresponds to a coefficient of variation of 0.6%.
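The quoted precision figures are internally consistent: dividing the standard deviation by the coefficient of variation recovers the implied area estimate:

```python
# Coefficient of variation: cv = sd / estimate, so estimate = sd / cv.
sd_ha = 225.0               # standard deviation of the area estimate, ha
cv = 0.006                  # coefficient of variation, 0.6%
estimate_ha = sd_ha / cv    # implied sugar-cane area estimate
print(round(estimate_ha))   # ~37,500 ha
```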

  9. Three-dimensional imaging applications in Earth Sciences using video data acquired from an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    McLeod, Tara

    For three-dimensional (3D) aerial imaging, unmanned aerial vehicles (UAVs) are cheaper to operate and easier to fly than the typical manned aircraft carrying a laser scanner. This project explores the feasibility of acquiring 2D video with a UAV and transforming it into 3D point clouds. The Aeryon Scout -- a quad-copter micro UAV -- flew two missions: the first at the York University Keele campus and the second at the Canadian Wollastonite Mine Property. Neptec's ViDAR software was used to extract 3D information from the 2D video using structure from motion. The resulting point clouds were sparsely populated, yet captured vegetation well, and were used successfully to measure fracture orientations in rock walls. Any improvement in video resolution would cascade through the processing and improve the overall results.

  10. Using Unmanned Aerial Vehicle (UAV) Imagery to Investigate Surface Displacements and Surface Features of the Super-Sauze Earthflow (France)

    NASA Astrophysics Data System (ADS)

    James, M. R.; Tizzard, S.; Niethammer, U.

    2014-12-01

    We present the results of using imagery collected with a small rotary-wing UAV (unmanned aerial vehicle) to investigate surface displacements and fissures on the Super-Sauze earthflow (France), a slow-moving earthflow with the potential to develop into rapid and highly destructive mud flows. UAV imagery acquired in October 2009 was processed using a structure-from-motion and multi-view stereo (SfM-MVS) approach in PhotoScan software. Identification of ~200 ground control points throughout the image set was facilitated by automated image matching in SfM_georef software [1], and the data were incorporated into PhotoScan for network optimisation and georeferencing. The completed 2009 model enabled an ~5 cm spatial resolution orthoimage to be generated with an expected accuracy (based on residuals on control) of ~0.3 m. This was supported by comparison with a previously created 2008 model, which gave standard deviations on tie points (located on stationary terrain) of 0.27 m and 0.43 m in Easting and Northing, respectively. The high resolution of the orthoimage allowed an investigation into surface displacements and the geomorphology of surface features relative to the 2008 model. The results have produced a comprehensive surface displacement map of the Super-Sauze earthflow and highlight interesting variations in fissure geomorphology and density between the 2008 and 2009 models. This study underscores the capability of UAV imagery and SfM-MVS to generate highly detailed orthographic imagery and DEMs with a low-cost approach that offers significant potential for landslide hazard assessments. [1] http://www.lancaster.ac.uk/staff/jamesm/software/sfm_georef.htm

  11. Use of video observation and motor imagery on jumping performance in national rhythmic gymnastics athletes.

    PubMed

    Battaglia, Claudia; D'Artibale, Emanuele; Fiorilli, Giovanni; Piazza, Marina; Tsopani, Despina; Giombini, Arrigo; Calcagno, Giuseppe; di Cagno, Alessandra

    2014-12-01

    The aim of this study was to evaluate whether a mental training protocol could improve gymnastic jumping performance. Seventy-two rhythmic gymnasts were randomly divided into an experimental and a control group. At baseline, the experimental group completed the Movement Imagery Questionnaire Revised (MIQ-R) to assess each gymnast's ability to generate movement imagery. A repeated-measures design was used to compare two different types of training aimed at improving jumping performance: (a) video observation and PETTLEP mental training associated with physical practice for the experimental group, and (b) physical practice alone for the control group. Before and after six weeks of training, jumping performance was measured using the Hopping Test (HT), Drop Jump (DJ), and Counter Movement Jump (CMJ). Results revealed differences between jumping parameters, F(1,71)=11.957; p<.01, and between groups, F(1,71)=10.620; p<.01. In the experimental group there were significant correlations between imagery ability and the post-training flight time of the HT, r(34)=-.295, p<.05, and the DJ, r(34)=-.297, p<.05. The application of the protocol described herein was shown to improve jumping performance, thereby preserving the elite athlete's energy for other tasks. PMID:25457420

  12. Forest fuel treatment detection using multi-temporal airborne Lidar data and high resolution aerial imagery ---- A case study at Sierra Nevada, California

    NASA Astrophysics Data System (ADS)

    Su, Y.; Guo, Q.; Collins, B.; Fry, D.; Kelly, M.

    2014-12-01

    Forest fuel treatments (FFT) are often employed in Sierra Nevada forests (located in California, US) to enhance forest health, regulate stand density, and reduce wildfire risk. However, there have been concerns that FFTs may have negative impacts on certain protected wildlife species. Due to the constraints and protection of resources (e.g., perennial streams, cultural resources, wildlife habitat, etc.), the actual FFT extents are usually different from planned extents. Identifying the actual extent of treated areas is of primary importance to understand the environmental influence of FFTs. Light detection and ranging (Lidar) is a powerful remote sensing technique that can provide accurate forest structure measurements, which offers great potential to monitor forest changes. This study used canopy height model (CHM) and canopy cover (CC) products derived from multi-temporal airborne Lidar data to detect FFTs by an approach combining a pixel-wise thresholding method and an object-of-interest segmentation method. We also investigated forest change following the implementation of landscape-scale FFT projects through the use of the normalized difference vegetation index (NDVI) and standardized principal component analysis (PCA) from multi-temporal high resolution aerial imagery. The same FFT detection routine was applied to the Lidar data and the aerial imagery to compare their capabilities for FFT detection. Our results demonstrated that FFT detection using Lidar-derived CC products produced both the highest total accuracy and kappa coefficient, and was more robust at identifying areas with light FFTs. The accuracy using Lidar-derived CHM products was significantly lower than that using Lidar-derived CC, but was still slightly higher than that using aerial imagery. FFT detection results using NDVI and standardized PCA using multi-temporal aerial imagery produced almost identical total accuracy and kappa coefficient
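
The NDVI used in the aerial-imagery comparison is a standard band ratio. A minimal sketch; the reflectance values below are purely illustrative:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - R) / (NIR + R)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Illustrative 2x2 patches: vegetated pixels reflect strongly in NIR.
nir = np.array([[0.5, 0.6], [0.3, 0.1]])
red = np.array([[0.1, 0.1], [0.2, 0.1]])
print(ndvi(nir, red))
```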

  13. Automated identification of rivers and shorelines in aerial imagery using image texture

    NASA Astrophysics Data System (ADS)

    McKay, Paul; Blain, Cheryl Ann; Linzell, Robert

    2011-06-01

    A method has been developed which automatically extracts river and river bank locations from arbitrarily sourced high resolution (~1m) visual spectrum imagery without recourse to multi-spectral or even color information. This method relies on quantifying the difference in image texture between the relatively smooth surface of the river water and the rougher surface of the vegetated land or built environment bordering it and then segmenting the image into high and low roughness regions. The edges of the low roughness regions then define the river banks. The method can be coded in any language without recourse to proprietary tools and requires minimal operator intervention. As this sort of imagery is increasingly being made freely available through such services as Google Earth or Worldwind this technique can be used to extract river features when more specialized imagery or software is not available.
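
The core idea, quantifying local roughness and segmenting the image into smooth (water) and rough (land) regions, can be sketched with a local-variance texture filter. This is an illustrative simplification under assumed parameters, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=7):
    """Local variance as a simple texture measure: E[x^2] - E[x]^2."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.maximum(mean_sq - mean * mean, 0.0)

def segment_water(img, size=7, thresh=None):
    """Mark low-roughness (smooth) pixels as water candidates."""
    var = local_variance(img, size)
    if thresh is None:
        thresh = np.median(var)        # crude automatic threshold
    return var < thresh                # True where texture is low

# Synthetic scene: a smooth "river" band crossing noisy "vegetation".
rng = np.random.default_rng(0)
scene = rng.normal(0.5, 0.2, (60, 60))   # rough land
scene[25:35, :] = 0.3                    # smooth water strip
mask = segment_water(scene, thresh=0.005)
print(mask[30, 30], mask[5, 5])
```

The edges of the resulting low-roughness regions would then be traced to delineate the banks.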

  14. Characterizing Sediment Flux Using Reconstructed Topography and Bathymetry from Historical Aerial Imagery on the Willamette River, OR.

    NASA Astrophysics Data System (ADS)

    Langston, T.; Fonstad, M. A.

    2014-12-01

    The Willamette is a gravel-bed river that drains ~28,800 km^2 between the Coast Range and Cascade Range in northwestern Oregon before entering the Columbia River near Portland. In the last 150 years, natural and anthropogenic drivers have altered the sediment transport regime, drastically reducing the geomorphic complexity of the river. Previously dynamic multi-threaded reaches have transformed into stable single channels to the detriment of ecosystem diversity and productivity. Flow regulation by flood-control dams, bank revetments, and conversion of riparian forests to agriculture have been key drivers of channel change. To date, little has been done to quantitatively describe temporal and spatial trends of sediment transport in the Willamette. This knowledge is critical for understanding how modern processes shape landforms and habitats. The goal of this study is to describe large-scale temporal and spatial trends in the sediment budget by reconstructing historical topography and bathymetry from aerial imagery. The area of interest for this project is a reach of the Willamette stretching from the confluence of the McKenzie River to the town of Peoria. While this reach remains one of the most dynamic sections of the river, it has exhibited a great loss in geomorphic complexity. Aerial imagery for this section of the river is available from USDA and USACE projects dating back to the 1930s. Above-water surface elevations are extracted using the Imagine Photogrammetry package in ERDAS. Bathymetry is estimated using a method known as Hydraulic Assisted Bathymetry, in which hydraulic parameters are used to develop a regression between water depth and pixel values. From this, pixel values are converted to depth below the water surface. Merged together, topography and bathymetry produce a spatially continuous digital elevation model of the geomorphic floodplain. Volumetric changes in sediment stored along the study reach are then estimated for different historic periods.
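
The Hydraulic Assisted Bathymetry step amounts to calibrating a regression between surveyed depths and image pixel values. A minimal linear-fit sketch with hypothetical calibration data (real workflows often use log-transformed radiance rather than raw pixel values):

```python
import numpy as np

# Hypothetical calibration: known depths (m) at surveyed points and the
# image pixel values sampled there (darker water ~ deeper in this toy data).
depths = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
pixels = np.array([200., 180., 160., 140., 120., 100.])

# Fit depth = a * pixel + b by least squares.
a, b = np.polyfit(pixels, depths, 1)

def pixel_to_depth(p):
    """Convert a pixel value to depth below the water surface."""
    return a * p + b

print(round(pixel_to_depth(150.), 2))   # → 1.75, between the 140 and 160 samples
```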

  15. Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.

    2012-01-01

    As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinates of the bounds of the image are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to which the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
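
The ground-to-pixel mapping at the heart of such geo-rectification can be sketched with a pinhole camera model; the pose and intrinsics below are illustrative, whereas a real system would take them from the INS and camera calibration:

```python
import numpy as np

def project_ground_point(Xw, cam_pos, R, f, cx, cy):
    """Project a world (DEM) point onto the image plane of a pinhole camera.

    Xw      : 3-vector ground point (world frame)
    cam_pos : 3-vector camera position (world frame)
    R       : 3x3 rotation, world -> camera frame (from the INS pose)
    f       : focal length in pixels; (cx, cy) principal point
    Returns (u, v) pixel coordinates, or None if behind the camera.
    """
    Xc = R @ (np.asarray(Xw, float) - cam_pos)   # into camera frame
    if Xc[2] <= 0:                               # behind the camera
        return None
    u = f * Xc[0] / Xc[2] + cx
    v = f * Xc[1] / Xc[2] + cy
    return u, v

# Nadir-looking camera 1000 m above a flat DEM cell, optical axis pointing down
# (R is a 180-degree rotation about the x-axis).
R = np.array([[1., 0., 0.],
              [0., -1., 0.],
              [0., 0., -1.]])
cam = np.array([0., 0., 1000.])
print(project_ground_point([10., 0., 0.], cam, R, f=5000., cx=2000., cy=1500.))
```

Looping this projection over every DEM cell in the ROI "paints" the terrain with image pixels, which is the non-linear projection the abstract describes.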

  16. Monitoring a BLM level 5 watershed with very-large aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A fifth order BLM watershed in central Wyoming was flown using a Sport-airplane to acquire high-resolution aerial images from 2 cameras at 2 altitudes. Project phases 1 and 2 obtained images for measuring ground cover, species composition and canopy cover of Wyoming big sagebrush by ecological site....

  17. Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system.

    PubMed

    Michez, Adrien; Piégay, Hervé; Lisein, Jonathan; Claessens, Hugues; Lejeune, Philippe

    2016-03-01

    Riparian forests are critically endangered by many anthropogenic pressures and natural hazards. The importance of riparian zones has been acknowledged by European Directives, involving multi-scale monitoring. The use of very-high-resolution and hyperspatial imagery in a multi-temporal approach is an emerging topic. The trend is reinforced by the recent and rapid growth of the use of unmanned aerial systems (UAS), which has prompted the development of innovative methodologies. Our study proposes a methodological framework to explore how a set of multi-temporal images acquired during a vegetative period can differentiate some of the deciduous riparian forest species and their health conditions. More specifically, the developed approach intends to identify, through a process of variable selection, which variables derived from UAS imagery and which scale of image analysis are the most relevant to our objectives. The methodological framework is applied to two study sites to describe the riparian forest through two fundamental characteristics: the species composition and the health condition. These characteristics were selected not only because of their use as proxies for the ecological integrity of the riparian zone but also because of their use for river management. The comparison of various scales of image analysis identified the smallest object-based image analysis (OBIA) objects (ca. 1 m²) as the most relevant scale. Variables derived from spectral information (band ratios) were identified as the most appropriate, followed by variables related to the vertical structure of the forest. Classification results show good overall accuracies for the species composition of the riparian forest (five classes, 79.5 and 84.1% for site 1 and site 2). The classification scenario regarding the health condition of the black alders of site 1 performed the best (90.6%). The quality of the classification models developed with a UAS-based, cost-effective, and semi-automatic approach

  19. Surface Temperature Mapping of the University of Northern Iowa Campus Using High Resolution Thermal Infrared Aerial Imageries

    PubMed Central

    Savelyev, Alexander; Sugumaran, Ramanathan

    2008-01-01

    The goal of this project was to map the surface temperature of the University of Northern Iowa campus using high-resolution thermal infrared aerial imagery. A thermal camera with a spectral bandwidth of 3.0-5.0 μm was flown at an average altitude of 600 m, achieving a ground resolution of 29 cm. Ground control data were used to construct the pixel-to-temperature conversion model, which was later used to produce temperature maps of the entire campus and also to validate the model. The temperature map was then used to assess building rooftop conditions and steam line faults in the study area. Assessment of the temperature map revealed a number of building structures that may require insulation improvements, as indicated by their high surface temperatures. Several hot spots corresponding to steam pipeline faults were also identified on the campus. High-resolution thermal infrared imagery proved to be a highly effective tool for precise heat anomaly detection on the campus, and it can be used by university facility services for effective future maintenance of buildings and grounds.
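
A pixel-to-temperature conversion model of the kind described can be as simple as a linear fit against ground control readings. A sketch with hypothetical calibration values, not the project's actual data:

```python
import numpy as np

# Hypothetical ground-control readings: raw sensor digital numbers (DN)
# and the surface temperatures (deg C) measured at the same spots.
dn = np.array([1200., 1350., 1500., 1650., 1800.])
temp = np.array([5.0, 11.0, 17.5, 23.5, 30.0])

a, b = np.polyfit(dn, temp, 1)          # temp ≈ a*DN + b
predicted = a * dn + b
rmse = np.sqrt(np.mean((predicted - temp) ** 2))
print(f"T(DN) = {a:.4f}*DN + {b:.1f}, RMSE = {rmse:.2f} C")
```

The fitted model is then applied to every pixel to produce the temperature map, and the RMSE on the control points serves as the validation figure.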

  20. Wavelet-based detection of bush encroachment in a savanna using multi-temporal aerial photographs and satellite imagery

    NASA Astrophysics Data System (ADS)

    Shekede, Munyaradzi D.; Murwira, Amon; Masocha, Mhosisi

    2015-03-01

    Although increased woody plant abundance has been reported in tropical savannas worldwide, techniques for detecting the direction and magnitude of change are mostly based on visual interpretation of historical aerial photography or textural analysis of multi-temporal satellite images. These techniques are prone to human error and do not permit integration of remotely sensed data from diverse sources. Here, we integrate aerial photographs with high spatial resolution satellite imagery and use a discrete wavelet transform to objectively detect the dynamics of bush encroachment at two protected Zimbabwean savanna sites. Based on the recently introduced intensity-dominant scale approach, we test the hypotheses that: (1) the encroachment of woody patches into the surrounding grassland matrix causes a shift in the dominant scale, and this shift can be detected using a discrete wavelet transform regardless of whether aerial photography or satellite data are used; and (2) as the woody patch size stabilises, woody cover tends to increase, thereby triggering changes in intensity. The results show that at the first site, where tree patches were already established (Lake Chivero Game Reserve), the dominant scale of woody patches initially increased from 8 m between 1972 and 1984 before stabilising at 16 m and 32 m between 1984 and 2012, while the intensity fluctuated during the same period. In contrast, at the second site, which was formerly grass-dominated (Kyle Game Reserve), we observed an unclear dominant scale in 1972 which later became distinct in 1985, 1996 and 2012. Over the same period, the intensity increased. Our results imply that with this approach we can detect and quantify woody/bush patch dynamics in savanna landscapes.
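
The dominant scale in an intensity-dominant scale analysis can be located by comparing wavelet detail energy across dyadic levels. A minimal 1-D Haar sketch on a synthetic transect (the study's analysis is 2-D and image-based; this only illustrates the principle):

```python
import numpy as np

def haar_detail_energy(signal, levels):
    """Energy of Haar wavelet detail coefficients at each dyadic level."""
    x = np.asarray(signal, float)
    energies = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass branch
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass branch
        energies.append(float(np.sum(detail ** 2)))
        x = approx
    return energies

# Transect of alternating 8-sample "woody" and "grass" blocks (period 16):
# the Haar detail energy peaks at level 4, i.e. a dominant scale of 2^4 = 16.
n = 256
t = np.arange(n)
transect = np.where((t // 8) % 2 == 0, 1.0, 0.0)
e = haar_detail_energy(transect, 6)
dominant_level = int(np.argmax(e)) + 1
print(e, dominant_level)
```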

  1. Mapping trees outside forests using high-resolution aerial imagery: a comparison of pixel- and object-based classification approaches.

    PubMed

    Meneguzzo, Dacia M; Liknes, Greg C; Nelson, Mark D

    2013-08-01

    Discrete trees and small groups of trees in nonforest settings are considered an essential resource around the world and are collectively referred to as trees outside forests (ToF). ToF provide important functions across the landscape, such as protecting soil and water resources, providing wildlife habitat, and improving farmstead energy efficiency and aesthetics. Despite the significance of ToF, forest and other natural resource inventory programs and geospatial land cover datasets that are available at a national scale do not include comprehensive information regarding ToF in the United States. Additional ground-based data collection and acquisition of specialized imagery to inventory these resources are expensive alternatives. As a potential solution, we identified two remote sensing-based approaches that use free high-resolution aerial imagery from the National Agriculture Imagery Program (NAIP) to map all tree cover in an agriculturally dominant landscape. We compared the results obtained using an unsupervised per-pixel classifier (independent component analysis [ICA]) and an object-based image analysis (OBIA) procedure in Steele County, Minnesota, USA. Three types of accuracy assessments were used to evaluate how each method performed in terms of: (1) producing a county-level estimate of total tree-covered area, (2) correctly locating tree cover on the ground, and (3) how tree cover patch metrics computed from the classified outputs compared to those delineated by a human photo interpreter. Both approaches were found to be viable for mapping tree cover over a broad spatial extent and could serve to supplement ground-based inventory data. The ICA approach produced an estimate of total tree cover more similar to the photo-interpreted result, but the output from the OBIA method was more realistic in terms of describing the actual observed spatial pattern of tree cover.
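
Accuracy assessments of classified maps like these are typically summarized from a confusion matrix via overall accuracy and Cohen's kappa. A minimal sketch with a hypothetical tree/non-tree matrix:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference class, columns = mapped class)."""
    c = np.asarray(confusion, float)
    n = c.sum()
    po = np.trace(c) / n                                  # observed agreement
    pe = np.sum(c.sum(axis=0) * c.sum(axis=1)) / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical tree / non-tree confusion matrix.
cm = [[90, 10],
      [20, 80]]
acc, kappa = accuracy_and_kappa(cm)
print(round(acc, 3), round(kappa, 3))   # → 0.85 0.7
```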

  3. Improving Measurement of Forest Structural Parameters by Co-Registering of High Resolution Aerial Imagery and Low Density LiDAR Data

    PubMed Central

    Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan

    2009-01-01

    Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for measurement of forest structural parameters; however, the accuracy of crown width extraction is not satisfactory when using low density LiDAR, especially in high canopy cover forest. We used high resolution aerial imagery with a low density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image is matched to the aerial image with an automated keypoint search algorithm. As a result, a high registration accuracy of 0.5 pixels was obtained. A local maximum filter, watershed segmentation, and object-oriented image segmentation are used to obtain tree height and crown width. Results indicate that the camera data collected by the integrated LiDAR system plays an important role in registration with aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction when compared to only using the low density LiDAR data. PMID:22573971
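
A canopy height model of the kind mentioned here is the difference between the first-return surface and the bare-earth DEM. A minimal grid sketch with toy elevations; a canopy-cover fraction can be derived from it by thresholding height:

```python
import numpy as np

# Toy 4x4 grids (metres): a LiDAR-derived surface model (canopy top)
# and the bare-earth DEM from ground returns.
dsm = np.array([[212., 215., 218., 212.],
                [213., 220., 222., 213.],
                [212., 219., 221., 212.],
                [211., 212., 213., 211.]])
dem = np.array([[211., 212., 212., 211.],
                [211., 212., 212., 211.],
                [211., 212., 212., 211.],
                [210., 211., 211., 210.]])

chm = np.clip(dsm - dem, 0.0, None)           # canopy height, floored at 0
cover = float((chm > 2.0).mean())              # fraction taller than 2 m
print(chm.max(), cover)                        # tallest crown, canopy cover
```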

  5. A study of video frame rate on the perception of moving imagery detail

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    The rate at which each frame of color moving video imagery is displayed was varied in small steps to determine the minimum acceptable frame rate for life scientists viewing white rats within a small enclosure. Two 25-second-long scenes (slow and fast animal motions) were evaluated by nine NASA principal investigators and animal care technicians. The mean minimum acceptable frame rate across these subjects was 3.9 fps for both the slow- and fast-moving animal scenes. The highest single-trial frame rate averaged across all subjects was 6.2 fps for the slow scene and 4.8 fps for the fast scene. Further research is called for in which frame rate, image size, and color/gray-scale depth are covaried during the same observation period.

  6. Target-acquisition performance in undersampled infrared imagers: static imagery to motion video.

    PubMed

    Krapels, Keith; Driggers, Ronald G; Teaney, Brian

    2005-11-20

    In this research we show that the target-acquisition performance of an undersampled imager improves with sensor or target motion. We provide an experiment designed to evaluate the improvement in observer performance as a function of target motion rate in the video. We created the target motion by mounting a thermal imager on a precision two-axis gimbal and varying the sensor motion rate from 0.25 to 1 instantaneous field of view per frame. A midwave thermal imager was used to permit short integration times and remove the effects of motion blur. It is shown that the human visual system performs a superresolution reconstruction that mitigates some aliasing and provides a higher (than static imagery) effective resolution. This process appears to be relatively independent of motion velocity. The results suggest that the benefits of superresolution reconstruction techniques as applied to imaging systems with motion may be limited. PMID:16318174

  7. Image degradation in aerial imagery duplicates. [photographic processing of photographic film and reproduction (copying)

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1975-01-01

    A series of Earth Resources Aircraft Program data flights were made over an aerial test range in Arizona for the evaluation of large cameras. Specifically, both medium altitude and high altitude flights were made to test and evaluate a series of color as well as black-and-white films. Image degradation, inherent in duplication processing, was studied. Resolution losses resulting from resolution characteristics of the film types are given. Color duplicates, in general, are shown to be degraded more than black-and-white films because of the limitations imposed by available aerial color duplicating stock. Results indicate that a greater resolution loss may be expected when the original has higher resolution. Photographs of the duplications are shown.

  8. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence achieved pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
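
The corner detection step builds on the standard Harris response. A from-scratch sketch on a synthetic image, not the paper's adaptive variant:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=1.0, k=0.05):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel,
    where M is the Gaussian-smoothed structure tensor."""
    img = img.astype(float)
    ix = sobel(img, axis=1)
    iy = sobel(img, axis=0)
    sxx = gaussian_filter(ix * ix, sigma)
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A bright square on dark ground: corners should score highest.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
r = harris_response(img)
corner = np.unravel_index(np.argmax(r), r.shape)
print(corner)
```

Corners of the square yield a strongly positive response, edges a negative one, and flat regions zero, which is what makes the response usable for picking road-marking vertices.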

  9. Aerial videotape mapping of coastal geomorphic changes

    USGS Publications Warehouse

    Debusschere, Karolien; Penland, Shea; Westphal, Karen A.; Reimer, P. Douglas; McBride, Randolph A.

    1991-01-01

    An aerial geomorphic mapping system was developed to examine the spatial and temporal variability in the coastal geomorphology of Louisiana. Between 1984 and 1990, eleven sequential annual and post-hurricane aerial videotape surveys were flown covering periods of prolonged fair weather, hurricane impacts and subsequent post-storm recoveries. A coastal geomorphic classification system was developed to map the spatial and temporal geomorphic changes between these surveys. The classification system is based on 10 years of shoreline monitoring, analysis of aerial photography for 1940-1989, and numerous field surveys. The classification system divides shorelines into two broad classes: natural and altered. Each class consists of several genetically linked categories of shorelines. Each category is further subdivided into morphologic types on the basis of landform relief, elevation, habitat type, vegetation density and type, and sediment characteristics. The classification is used with imagery from the low-altitude, high-resolution aerial videotape surveys to describe and quantify the longshore and cross-shore geomorphic, sedimentologic, and vegetative character of Louisiana's shoreline systems. The mapping system makes it possible to delineate and map detailed geomorphic habitat changes at a resolution higher than that of conventional vertical aerial photography. Morphologic units are mapped parallel to the regional shoreline from the aerial videotape imagery onto the base maps at a scale of 1:24,000. The base maps were constructed from vertical aerial photography concurrent with the dates of the video imagery.

  10. 3D Building Modeling and Reconstruction using Photometric Satellite and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Izadi, Mohammad

    In this thesis, the problem of three-dimensional (3D) reconstruction of building models using photometric satellite and aerial images is investigated. Here, two systems are presented: 1) 3D building reconstruction using a nadir single-view image, and 2) 3D building reconstruction using slant multiple-view aerial images. The first system detects building rooftops in orthogonal aerial/satellite images using a hierarchical segmentation algorithm and a shadow verification approach. The heights of detected buildings are then estimated using a fuzzy rule-based method, which measures the height of a building by comparing its predicted shadow region with the actual shadow evidence in the image. This system finally generates a KML (Keyhole Markup Language) file as output, containing 3D models of the detected buildings. The second system uses the geolocation information of a scene containing a building of interest and uploads all slant-view images that contain this scene from an input image dataset. These images are then searched automatically to choose image pairs with different views of the scene (north, east, south and west) based on the geolocation and auxiliary data accompanying the input data (metadata that describes the acquisition parameters at the capture time). The camera parameters corresponding to these images are refined using a novel point matching algorithm. Next, the system independently reconstructs 3D flat surfaces that are visible in each view using an iterative algorithm. The 3D surfaces generated for all views are combined, and redundant surfaces are removed to create a complete set of 3D surfaces. Finally, the combined 3D surfaces are connected together to generate a more complete 3D model. For the experimental results, both presented systems are evaluated quantitatively and qualitatively, and different aspects of the two systems, including accuracy, stability, and execution time, are discussed.
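
Shadow-based height estimation of the kind used by the first system rests on basic shadow geometry: on flat ground, building height equals shadow length times the tangent of the sun elevation. A minimal sketch (the thesis's fuzzy rule-based method is more involved):

```python
import math

def building_height(shadow_len_m, sun_elev_deg):
    """Estimate building height from its shadow length and sun elevation.

    Assumes flat ground and a vertical wall: h = L * tan(elevation).
    """
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))

print(round(building_height(20.0, 45.0), 1))   # → 20.0
```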

  11. Mapping Urban Tree Canopy Coverage and Structure using Data Fusion of High Resolution Satellite Imagery and Aerial Lidar

    NASA Astrophysics Data System (ADS)

    Elmes, A.; Rogan, J.; Williams, C. A.; Martin, D. G.; Ratick, S.; Nowak, D.

    2015-12-01

    Urban tree canopy (UTC) coverage is a critical component of sustainable urban areas. Trees provide a number of important ecosystem services, including air pollution mitigation, water runoff control, and aesthetic and cultural values. Critically, urban trees also act to mitigate the urban heat island (UHI) effect by shading impervious surfaces and via evaporative cooling. The cooling effect of urban trees can be seen locally, with individual trees reducing home HVAC costs, and at a citywide scale, reducing the extent and magnitude of an urban area's UHI. In order to accurately model the ecosystem services of a given urban forest, it is essential to map in detail the condition and composition of these trees at a fine scale, capturing individual tree crowns and their vertical structure. This paper presents methods for delineating UTC and measuring canopy structure at fine spatial resolution (<1m). These metrics are essential for modeling the HVAC benefits from UTC for individual homes, and for assessing the ecosystem services for entire urban areas. Such maps have previously been made using a variety of methods, typically relying on high resolution aerial or satellite imagery. This paper seeks to contribute to this growing body of work, relying on a data fusion method to combine the information contained in high resolution WorldView-3 satellite imagery and aerial lidar data using an object-based image classification approach. The study area, Worcester, MA, has recently undergone a large-scale tree removal and reforestation program, following a pest eradication effort. Therefore, the urban canopy in this location provides a wide mix of tree age class and functional type, ideal for illustrating the effectiveness of the proposed methods. Early results show that the object-based classifier is indeed capable of identifying individual tree crowns, while continued research will focus on extracting crown structural characteristics using lidar-derived metrics. Ultimately

  12. Detection and spatiotemporal analysis of methane ebullition on thermokarst lake ice using high-resolution optical aerial imagery

    NASA Astrophysics Data System (ADS)

    Lindgren, P. R.; Grosse, G.; Anthony, K. M. Walter; Meyer, F. J.

    2016-01-01

    Thermokarst lakes are important emitters of methane, a potent greenhouse gas. However, accurate estimation of methane flux from thermokarst lakes is difficult due to their remoteness and observational challenges associated with the heterogeneous nature of ebullition. We used high-resolution (9-11 cm) snow-free aerial images of an interior Alaskan thermokarst lake acquired 2 and 4 days following freeze-up in 2011 and 2012, respectively, to detect and characterize methane ebullition seeps and to estimate whole-lake ebullition. Bubbles impeded by the lake ice sheet form distinct white patches as a function of bubbling when lake ice grows downward and around them, trapping the gas in the ice. Our aerial imagery thus captured a snapshot of bubbles trapped in lake ice during the ebullition events that occurred before the image acquisition. Image analysis showed that low-flux A- and B-type seeps are associated with low brightness patches and are statistically distinct from high-flux C-type and hotspot seeps associated with high brightness patches. Mean whole-lake ebullition based on optical image analysis in combination with bubble-trap flux measurements was estimated to be 174 ± 28 and 216 ± 33 mL gas m⁻² d⁻¹ for the years 2011 and 2012, respectively. A large number of seeps demonstrated spatiotemporal stability over our 2-year study period. A strong inverse exponential relationship (R² ≥ 0.79) was found between the percent of the surface area of lake ice covered with bubble patches and distance from the active thermokarst lake margin. Even though the narrow timing of optical image acquisition is a critical factor, with respect to both atmospheric pressure changes and snow/no-snow conditions during early lake freeze-up, our study shows that optical remote sensing is a powerful tool to map ebullition seeps on lake ice, to identify their relative strength of ebullition, and to assess their spatiotemporal variability.
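    An inverse exponential relationship between bubble-patch cover and distance from the lake margin can be fit by nonlinear least squares. A hedged sketch on synthetic data (all numbers below are invented for illustration, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_exp(d, a, b):
    # Bubble-patch cover (%) decaying exponentially with distance
    # from the active thermokarst margin.
    return a * np.exp(-b * d)

# Synthetic observations: true a = 12, b = 0.03, plus noise
rng = np.random.default_rng(0)
dist = np.linspace(0, 120, 25)                 # distance from margin (m)
cover = inv_exp(dist, 12.0, 0.03) + rng.normal(0, 0.2, dist.size)

(a, b), _ = curve_fit(inv_exp, dist, cover, p0=(10, 0.01))

# Coefficient of determination of the fit
pred = inv_exp(dist, a, b)
r2 = 1 - np.sum((cover - pred) ** 2) / np.sum((cover - cover.mean()) ** 2)
```

    The fitted parameters recover the generating values closely because the synthetic noise is small relative to the signal.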

  13. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
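    The two headline figures, vertical accuracy against survey-grade GPS check points and flight-to-flight consistency, reduce to an RMSE and a standard deviation. A minimal sketch (helper names are ours, not part of the Pix4D/APS workflow):

```python
import numpy as np

def vertical_rmse(dsm_z, gps_z):
    """Vertical accuracy: RMSE of DSM elevations sampled at the
    locations of survey-grade GPS check points."""
    d = np.asarray(dsm_z, float) - np.asarray(gps_z, float)
    return float(np.sqrt(np.mean(d ** 2)))

def repeatability(dsm_stack):
    """Consistency over repeat flights: per-cell standard deviation
    of elevation across a stack of co-registered DSMs, summarized
    by its mean."""
    return float(np.mean(np.std(np.asarray(dsm_stack, float), axis=0)))
```

    For example, DSM elevations of 10.2 m and 9.8 m against two 10.0 m check points give an RMSE of 0.2 m.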

  14. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle

    SciTech Connect

    Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; Glenn, Nancy F.

    2012-09-17

    During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).

  15. Mapping potential Blanding's turtle habitat using aerial orthophotographic imagery and object based classification

    NASA Astrophysics Data System (ADS)

    Barker, Rebecca

    Blanding's turtle (Emydoidea blandingii) is a threatened species in southern Quebec that is being inventoried to determine abundance and potential habitat by the Quebec Ministry of Natural Resources and Wildlife. In collaboration with that program and using spring leaf-off aerial orthophotos of Gatineau Park, attributes associated with known habitat criteria were analyzed: wetlands with open water, vegetation mounds for camouflage and thermoregulation, and logs for spring sun-basking. Pixel-based classification to separate wetlands from other land cover types was followed by object-based segmentation and rule-based classification of within-wetland vegetation and logs. Classifications integrated several image characteristics including texture, context, shape, area and spectral attributes. Field data and visual interpretation showed the accuracies of wetland and within-wetland habitat feature classifications to be over 82.5%. The wetland classification results were used to develop a ranked potential habitat suitability map for Blanding's turtle that can be employed in conservation planning and management.

  16. Supervised classification of aerial imagery and multi-source data fusion for flood assessment

    NASA Astrophysics Data System (ADS)

    Sava, E.; Harding, L.; Cervone, G.

    2015-12-01

    Floods are among the most devastating natural hazards, and the ability to produce an accurate and timely flood assessment before, during, and after an event is critical for mitigation and response. Remote sensing technologies have become the de facto approach for observing the Earth and its environment. However, satellite remote sensing data are not always available. For these reasons, it is crucial to develop new techniques to produce flood assessments during and after an event. Recent advancements in techniques for fusing remote sensing data with near-real-time heterogeneous datasets have allowed emergency responders to more efficiently extract increasingly precise and relevant knowledge from the available information. This research presents a fusion technique using satellite remote sensing imagery coupled with non-authoritative data such as Civil Air Patrol (CAP) imagery and tweets. A new computational methodology is proposed, based on machine learning algorithms, to automatically identify water pixels in CAP imagery. Specifically, wavelet transformations are paired with multiple classifiers, run in parallel, to build models discriminating water and non-water regions. The learned classification models are first tested against a set of control cases, and then used to automatically classify each image separately. A measure of uncertainty is computed for each pixel in an image, proportional to the number of models classifying the pixel as water. Geo-tagged tweets are continuously harvested, stored in a MongoDB database, and queried in real time. They are fused with the CAP classified data and with satellite remote sensing derived flood extents to produce comprehensive flood assessment maps. The final maps are then compared with FEMA-generated flood extents to assess their accuracy. The proposed methodology is applied to two test cases: the 2013 floods in Boulder, CO, and the 2015 floods in Texas.
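    A per-pixel uncertainty measure proportional to the number of models labelling the pixel as water reduces to a vote fraction over the classifier ensemble. A minimal sketch with toy masks (not CAP data):

```python
import numpy as np

def water_score(model_masks):
    """Per-pixel water score: the fraction of ensemble classifiers
    labelling a pixel as water. model_masks is an (n_models, H, W)
    boolean stack; 0 = no model votes water, 1 = unanimous."""
    return np.asarray(model_masks, bool).mean(axis=0)

# Toy example: three 2x2 binary classification masks
m = [[[1, 0], [1, 1]],
     [[1, 0], [0, 1]],
     [[1, 1], [0, 1]]]
score = water_score(m)   # 1.0 where all three models agree on water
```

    Thresholding this score (e.g. requiring a majority of models) yields the final water mask, while intermediate values flag uncertain pixels.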

  17. Parameter optimization of image classification techniques to delineate crowns of coppice trees on UltraCam-D aerial imagery in woodlands

    NASA Astrophysics Data System (ADS)

    Erfanifard, Yousef; Stereńczak, Krzysztof; Behnia, Negin

    2014-01-01

    A drawback of some classification techniques is the need to estimate optimal parameters, since a poor choice degrades performance on a given dataset and reduces classification accuracy. This study aimed to optimize the combination of effective parameters of the support vector machine (SVM), artificial neural network (ANN), and object-based image analysis (OBIA) classification techniques using the Taguchi method. The optimized techniques were applied to delineate crowns of Persian oak coppice trees on UltraCam-D very high spatial resolution aerial imagery in the Zagros semiarid woodlands, Iran. The imagery was classified and the maps were assessed using the receiver operating characteristic curve and other performance metrics. The results showed that Taguchi is a robust approach for optimizing the combination of effective parameters in these image classification techniques. The area under the curve (AUC) showed that the optimized OBIA could well discriminate tree crowns on the imagery (AUC = 0.897), while SVM and ANN yielded slightly lower AUC values of 0.819 and 0.850, respectively. The indices of accuracy (0.999) and precision (0.999) and the performance metrics of specificity (0.999) and sensitivity (0.999) in the optimized OBIA were higher than with the other techniques. The optimization of effective parameters of image classification techniques by the Taguchi method thus provided encouraging results for discriminating the crowns of Persian oak coppice trees on UltraCam-D aerial imagery in the Zagros semiarid woodlands.
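    ROC-curve evaluation of this kind hinges on the AUC statistic, which can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity: the probability that a random positive outranks a random negative. A minimal sketch (ties ignored for brevity; toy scores, not the study's data):

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum identity. labels are 0/1 ground truth
    (e.g. crown / non-crown), scores are classifier outputs."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

perfect = auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])   # 1.0: full separation
```

    An AUC near 0.5 indicates chance-level discrimination; values like the 0.897 reported above indicate strong separation of crown from background.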

  18. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle

    DOE PAGES

    Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; Glenn, Nancy F.

    2012-09-17

    During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).

  19. Assessment of Unmanned Aerial Vehicles Imagery for Quantitative Monitoring of Wheat Crop in Small Plots

    PubMed Central

    Lelong, Camille C. D.; Burger, Philippe; Jubelin, Guillaume; Roux, Bruno; Labbé, Sylvain; Baret, Frédéric

    2008-01-01

    This paper outlines how light Unmanned Aerial Vehicles (UAV) can be used in remote sensing for precision farming. It focuses on the combination of simple digital photographic cameras with spectral filters, designed to provide multispectral images in the visible and near-infrared domains. In 2005, these instruments were fitted to a powered glider and a parachute, and flown on six dates staggered over the crop season. We monitored ten varieties of wheat, grown in trial micro-plots in the south-west of France. For each date, we acquired multiple views in four spectral bands corresponding to blue, green, red, and near-infrared. We then performed accurate corrections of image vignetting, geometric distortions, and radiometric bidirectional effects. Afterwards, we derived for each experimental micro-plot several vegetation indexes relevant for vegetation analyses. Finally, we sought relationships between these indexes and field-measured biophysical parameters, both generic and date-specific. We thereby established robust and stable generic relationships between leaf area index and NDVI on the one hand, and nitrogen uptake and GNDVI on the other. Due to a high amount of noise in the data, it was not possible to obtain a more accurate model for each date independently. A validation protocol showed that we could expect a precision level of 15% in biophysical parameter estimation when using these relationships.
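    The two indexes at the heart of these relationships, NDVI and GNDVI, are simple normalized band differences. A minimal sketch (band arrays assumed already co-registered and radiometrically corrected):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, related here to
    leaf area index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI, the index related here to nitrogen uptake."""
    nir, green = np.asarray(nir, float), np.asarray(green, float)
    return (nir - green) / (nir + green)
```

    Both indexes range over [-1, 1]; dense green vegetation pushes NDVI toward 1 because leaves reflect strongly in the near-infrared and absorb red light.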

  20. Automatic Road Extraction Based on Integration of High Resolution LIDAR and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Rahimi, S.; Arefi, H.; Bahmanyar, R.

    2015-12-01

    In recent years, the rapid increase in the demand for road information, together with the availability of large volumes of high resolution Earth Observation (EO) images, has drawn remarkable interest to the use of EO images for road extraction. Among the proposed methods, the unsupervised fully-automatic ones are more efficient since they do not require human effort. In the proposed methods, the focus is usually on improving road network detection, while precise road delineation has received less attention. In this paper, we propose a new unsupervised fully-automatic road extraction method based on the integration of high resolution LiDAR and aerial images of a scene using Principal Component Analysis (PCA). This method first discriminates the roads in a scene and then precisely delineates them. A Hough transform is applied to the integrated information to extract straight lines, which are further used to segment the scene and discriminate the existing roads. The roads' edges are then precisely localized using a projection-based technique, and the round corners are further refined. Experimental results demonstrate that our proposed method extracts and delineates the roads with high accuracy.

  1. Cultivated land information extraction from high-resolution unmanned aerial vehicle imagery data

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Cheng, Liang; Han, Wenquan; Zhong, Lishan; Li, Manchun

    2014-01-01

    The development of precision agriculture demands high accuracy and efficiency in cultivated land information extraction. At the same time, unmanned aerial vehicles (UAVs) have been increasingly used for natural resource applications in recent years as a result of their greater availability, the miniaturization of sensors, and the ability to deploy UAVs relatively quickly and repeatedly at low altitudes. We examine the potential of utilizing a small UAV for the characterization, assessment, and monitoring of cultivated land. Because most UAV images lack spectral information, we propose a novel triangulation-based cultivated land extraction (TCLE) method, so that information on more spatial properties of a region is incorporated into the classification process. The TCLE comprises three main steps: image segmentation, triangulation construction, and triangulation clustering using AUTOCLUST. Experiments were conducted on three UAV images in Deyang, China, using TCLE and eCognition for cultivated land information extraction (ECLE). Experimental results show that TCLE, which does not require training samples and has a much higher level of automation, can obtain accuracies equivalent to ECLE. Compared with ECLE, TCLE also extracts coherent cultivated land with much less noise. As such, cultivated land information extraction from high-resolution UAV images can be effectively and efficiently conducted using the proposed method.
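    AUTOCLUST itself clusters by local edge-length statistics on a Delaunay triangulation; a much-simplified stand-in, global long-edge removal followed by connected components, conveys the triangulation-clustering idea. A hedged sketch (our simplification, not the paper's AUTOCLUST implementation; `edge_factor` is an invented parameter):

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_clusters(points, edge_factor=2.0):
    """Triangulate point set (e.g. segment centroids), drop edges much
    longer than the mean edge length, return a cluster label per point."""
    pts = np.asarray(points, float)
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    lengths = {e: np.linalg.norm(pts[e[0]] - pts[e[1]]) for e in edges}
    mean_len = np.mean(list(lengths.values()))
    keep = [e for e, L in lengths.items() if L <= edge_factor * mean_len]
    # Union-find over the surviving short edges -> connected components
    parent = list(range(len(pts)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in keep:
        parent[find(a)] = find(b)
    return [find(i) for i in range(len(pts))]

# Two well-separated point groups should fall into two clusters
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
labels = delaunay_clusters(pts)
```

    The appeal of triangulation clustering in this setting is that no training samples are needed; grouping emerges from the spatial arrangement alone.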

  2. Improvement of erosion risk modelling using soil information derived from aerial Vis-NIR imagery

    NASA Astrophysics Data System (ADS)

    Ciampalini, Rossano; Raclot, Damien; Le Bissonnais, Yves

    2016-04-01

    The aim of this research is to test the benefit of hyperspectral imagery for characterising soil surface properties for soil erosion modelling purposes. The research area is the Lebna catchment, located in the north of Tunisia (Cap Bon region). Soil erosion is evaluated with two different soil erosion models: PESERA (Pan-European Soil Erosion Risk Assessment, already used for soil erosion risk mapping for the European Union; Kirkby et al., 2008) and Mesales (Regional Modelling of Soil Erosion Risk, developed by Le Bissonnais et al., 1998, 2002). To this end, different sources for soil properties and derived parameters, such as the soil erodibility map and soil crusting map, have been evaluated using four different supports: 1) the IAO soil map (IAO, 2000); 2) the Carte Agricole - CA - (Ministry of Agriculture, Tunisia); 3) a hyperspectral VIS-NIR map - HY - (Gomez et al., 2012; Ciampalini et al., 2012); and 4) a hybrid map - CY - developed here, integrating information from the hyperspectral VIS-NIR and pedological maps. Results show that the data source has a high influence on the estimation of the parameters for both models, with a more evident sensitivity for PESERA. Compared with the classical pedological data, the VIS-NIR data clearly improves the spatialization of texture and hence the spatial detail of the results. Differences in the output using different maps are more important in the PESERA model than in Mesales, showing no-change ranges of about 15 to 41% and 53 to 67%, respectively.

  3. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings, a key element of city structures, for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for the automatic recognition of building roof models such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach are building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that allow the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly by using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting the pattern of building roofs automatically, considering the complementary nature of height and RGB information.
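    The convolutional layers at the heart of such a CNN stack one basic operation: a learned 2-D kernel slid over the input (cross-correlation, as most CNN frameworks implement it), followed by a nonlinearity. A minimal framework-free sketch on a toy normalized-height patch (the kernel and patch are invented for illustration; in the paper the kernels are learned):

```python
import numpy as np

def conv2d_valid(x, k):
    """Single-channel 'valid' 2-D sliding-window correlation,
    the basic operation a convolutional layer applies and learns."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0)

# Toy normalized-height patch of a gable roof: ridge along the middle row
ndsm = np.array([[0, 0, 0, 0, 0],
                 [1, 1, 1, 1, 1],
                 [2, 2, 2, 2, 2],
                 [1, 1, 1, 1, 1],
                 [0, 0, 0, 0, 0]], float)
# A vertical-gradient kernel responds to the rising roof slope
k = np.array([[-1], [0], [1]], float)
feat = relu(conv2d_valid(ndsm, k))
```

    The feature map is positive where the height increases downward and zero elsewhere, illustrating how height-derived channels supply the invariant geometric cues the abstract describes.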

  4. Aerial Imagery and Other Non-invasive Approaches to Detect Nitrogen and Water Stress in a Potato Crop

    NASA Astrophysics Data System (ADS)

    Nigon, Tyler John

    commercial potato field using aerial imagery. Reference areas were found to be necessary in order to make accurate recommendations because of differences in sensors, potato variety, growth stage, and other local conditions. The results from this study suggest that diagnostic criteria based on both biomass and plant nutrient concentration (e.g., canopy-level spectral reflectance data) were best suited to determine overall crop N status for determination of in-season N fertilizer recommendations.

  5. Inlining 3d Reconstruction, Multi-Source Texture Mapping and Semantic Analysis Using Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Poznanska, A. M.

    2016-06-01

    This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland, Germany, comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information the roof, wall and ground surfaces found get intersected and limited in their extension to form a closed 3D building hull. For texture mapping the hull polygons are projected into each possible input bitmap to find suitable color sources regarding the coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are being copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and

  6. Terrestrial and unmanned aerial system imagery for deriving photogrammetric three-dimensional point clouds and volume models of mass wasting sites

    NASA Astrophysics Data System (ADS)

    Hämmerle, Martin; Schütt, Fabian; Höfle, Bernhard

    2016-04-01

    Three-dimensional (3-D) geodata of mass wasting sites are important to model surfaces, volumes, and their changes over time. With a photogrammetric approach commonly known as structure from motion, 3-D point clouds can be derived from image collections in a straightforward way. The quality of point clouds covering a quarry dump derived from terrestrial and aerial imagery is compared and assessed. A comprehensive set of quality indicators is calculated and compared to surveyed reference data and to a terrestrial LiDAR point cloud. The examined indicators are completeness of coverage, point density, vertical accuracy, multiscale point cloud distance, scaling accuracy, and dump volume. It is found that the photogrammetric datasets generally represent the examined dump well with, for example, an area coverage of up to 90% and 100% in case of terrestrial and aerial imagery, respectively, a maximum scaling difference of 0.62%, and volume estimations reaching up to 100% of the LiDAR reference. Combining the advantages of 3-D geodata derived from terrestrial (high detail, accurate volume calculation even with a small number of input images) and aerial images (high coverage) can be a promising method to further improve the quality of 3-D geodata derived with low-cost approaches.
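    Several of the listed quality indicators, such as the multiscale point cloud distance against the LiDAR reference, build on nearest-neighbour distances between clouds. A minimal sketch of the simplest cloud-to-cloud variant (our own reduction, not the paper's exact metric):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(cloud_a, cloud_b):
    """Nearest-neighbour distance from each point in cloud A to cloud B,
    e.g. a photogrammetric cloud compared against a terrestrial LiDAR
    reference. Returns one distance per point of A."""
    tree = cKDTree(np.asarray(cloud_b, float))
    d, _ = tree.query(np.asarray(cloud_a, float))
    return d

# Toy example: two points, the first offset 0.1 m vertically
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.0)]
d = cloud_to_cloud_distance(a, b)
```

    Summaries of these distances (mean, RMS, percentiles) then serve as the accuracy indicators the abstract compares across terrestrial and aerial image sets.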

  7. Tree Crown Delineation on Vhr Aerial Imagery with Svm Classification Technique Optimized by Taguchi Method: a Case Study in Zagros Woodlands

    NASA Astrophysics Data System (ADS)

    Erfanifard, Y.; Behnia, N.; Moosavi, V.

    2013-09-01

    The Support Vector Machine (SVM) is a theoretically superior machine learning methodology with great results in classification of remotely sensed datasets. However, the determination of its optimal parameters remains unclear to many practitioners. In this research, the Taguchi method is suggested for optimizing these parameters. The objective of this study was to detect tree crowns on very high resolution (VHR) aerial imagery of Zagros woodlands using an SVM optimized by the Taguchi method. A 30 ha plot of Persian oak (Quercus persica) coppice trees was selected in the Zagros woodlands, Iran. VHR aerial imagery of the plot with 0.06 m spatial resolution was obtained from the National Geographic Organization (NGO), Iran, to extract the crowns of Persian oak trees. The SVM parameters were optimized by the Taguchi method, and the imagery was then classified by the SVM with the optimal parameters. The results showed that the Taguchi method is a very useful approach for optimizing the combination of SVM parameters. It was also concluded that the SVM could detect tree crowns with a KHAT coefficient of 0.961, indicating great agreement with the observed samples, and an overall accuracy of 97.7% for the final map. Finally, the authors suggest applying this method to optimize the parameters of classification techniques such as SVM.
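    The Taguchi method replaces a full parameter grid with an orthogonal array: for three factors at three levels each, the L9 array needs only nine runs instead of 27, and the best level of each factor is read off from per-level mean responses. A hedged sketch with a stand-in objective (the parameter values and the accuracy function below are invented for illustration; in practice the objective would be a cross-validated SVM score):

```python
import numpy as np

# Standard L9(3^4) orthogonal array restricted to three factors; each
# row assigns a level (0, 1, 2) per factor, and every pair of levels
# of any two factors occurs equally often.
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Hypothetical SVM factor levels (illustrative values only)
levels = {"C": [0.1, 1.0, 10.0],
          "gamma": [0.01, 0.1, 1.0],
          "kernel": ["linear", "poly", "rbf"]}

def accuracy(C, gamma, kernel):
    """Stand-in for a cross-validated SVM accuracy; peaks at
    C = 1.0, gamma = 0.1, kernel = 'rbf' by construction."""
    base = {"linear": 0.80, "poly": 0.85, "rbf": 0.90}[kernel]
    return base - 0.02 * abs(np.log10(C)) - 0.01 * abs(np.log10(gamma) + 1)

scores = np.array([accuracy(levels["C"][r[0]],
                            levels["gamma"][r[1]],
                            levels["kernel"][r[2]]) for r in L9])

# Taguchi analysis ("larger is better"): average the response over the
# runs at each level of a factor, then pick the level with the best mean.
best = {}
for f, name in enumerate(["C", "gamma", "kernel"]):
    means = [scores[L9[:, f] == lvl].mean() for lvl in range(3)]
    best[name] = levels[name][int(np.argmax(means))]
```

    Because the array is orthogonal, the other factors average out in each per-level mean, which is why the marginal analysis recovers the jointly optimal combination for (near-)additive responses.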

  8. Tracking aeolian transport patterns across a mega-nourishment using video imagery

    NASA Astrophysics Data System (ADS)

    Wijnberg, Kathelijne; van der Weerd, Lianne; Hulscher, Suzanne

    2014-05-01

    Coastal dune areas protect the hinterland from flooding. In order to maintain the safety level provided by the dunes, it may be necessary to artificially supply the beach-dune system with sand. How best to design these shore nourishments, among other things with respect to optimal dune growth in the long term (decadal scale), is not yet clear. One reason for this is that current models for aeolian transport on beaches appear to have limited predictive capabilities regarding annual onshore sediment supply. These limited capabilities may be attributed to the lack of appropriate input data, for instance on the moisture content of the beach surface, or to shortcomings in process understanding. However, it may also be argued that for the long-term prediction of onshore aeolian sand supply from the beach to the dunes, we may need to develop some aggregated-scale transport equations, because the detailed input data required for the application of process-scale transport equations may never be available in reality. A first step towards the development of such new concepts for aggregated-scale transport equations is to increase phenomenological insight into the characteristics and number of aeolian transport events that account for the annual volume changes of the foredunes. This requires high-frequency, long-term data sets to capture the only intermittently occurring aeolian transport events. Automated video image collection seems a promising way to collect such data. In the present study we describe the movement (direction and speed) of sand patches and aeolian bed forms across a nourished site, using video imagery, to characterize aeolian transport pathways and their variability in time. The study site is a mega-nourishment (21 Mm3 of sand) that was recently constructed on the Dutch coast. This mega-nourishment, also referred to as the Sand Motor, is a pilot project that may potentially replace the current practice of applying small-scale nourishments more frequently. The mega

  9. Draper Laboratory small autonomous aerial vehicle

    NASA Astrophysics Data System (ADS)

    DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.

    1997-06-01

    The Charles Stark Draper Laboratory, Inc. and students from Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. This entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, inertial measurement unit, sonar altimeter, and a flux compass. The aerial vehicle transmits video imagery to the ground. A ground based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision-based estimates is presented.

  10. Detecting new Buffel grass infestations in Australian arid lands: evaluation of methods using high-resolution multispectral imagery and aerial photography.

    PubMed

    Marshall, V M; Lewis, M M; Ostendorf, B

    2014-03-01

    We assess the feasibility of using airborne imagery for Buffel grass detection in Australian arid lands and evaluate four commonly used image classification techniques (visual estimate, manual digitisation, unsupervised classification and normalised difference vegetation index (NDVI) thresholding) for their suitability to this purpose. Colour digital aerial photography captured at approximately 5 cm ground sample distance (GSD) and four-band (visible–near-infrared) multispectral imagery (25 cm GSD) were acquired (14 February 2012) across overlapping subsets of our study site. In the field, Buffel grass projected cover estimates were collected for quadrats (10 m diameter), which were subsequently used to evaluate the four image classification techniques. Buffel grass was found to be widespread throughout our study site; it was particularly prevalent in riparian land systems and alluvial plains. On hill slopes, Buffel grass was often present in depressions, valleys and crevices of rock outcrops, but the spread appeared to be dependent on soil type and vegetation communities. Visual cover estimates performed best (r² = 0.39), and pixel-based classifiers (unsupervised classification and NDVI thresholding) performed worst (r² = 0.21). Manual digitising consistently underrepresented Buffel grass cover compared with field- and image-based visual cover estimates; we did not find the labours of digitising rewarding. Our recommendation for regional documentation of new infestations of Buffel grass is to acquire ultra-high-resolution aerial photography and have a trained observer score cover against visual standards and use the scored sites to interpolate density across the region.
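The NDVI-thresholding classifier evaluated above reduces to a band ratio and a cutoff. A minimal sketch, noting that the 0.4 threshold and the reflectance values are illustrative assumptions, not figures from the study:

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalised difference vegetation index, computed per pixel."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + eps)

# Illustrative 2x2 reflectance patches (not data from the study).
red = np.array([[0.10, 0.40], [0.05, 0.30]])
nir = np.array([[0.50, 0.45], [0.60, 0.32]])

v = ndvi(red, nir)
THRESHOLD = 0.4            # assumed, site-specific cutoff
vegetated = v > THRESHOLD  # True where the pixel is classed as vegetated
```

In a workflow like the study's, the threshold would be tuned against the field quadrat cover estimates rather than fixed a priori.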

  11. Estimating chlorophyll with thermal and broadband multispectral high resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture

    NASA Astrophysics Data System (ADS)

    Elarab, Manal; Ticlavilca, Andres M.; Torres-Rua, Alfonso F.; Maslova, Inga; McKee, Mac

    2015-12-01

    Precision agriculture requires high-resolution information to enable greater precision in the management of inputs to production. Actionable information about crop and field status must be acquired at high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high spatial resolution imagery was obtained through the use of a small, unmanned aerial system called AggieAir™. Simultaneously with the AggieAir flights, intensive ground sampling for plant chlorophyll was conducted at precisely determined locations. This study reports the application of a relevance vector machine coupled with cross validation and backward elimination to a dataset composed of reflectance from high-resolution multi-spectral imagery (VIS-NIR), thermal infrared imagery, and vegetative indices, in conjunction with in situ SPAD measurements from which chlorophyll concentrations were derived, to estimate chlorophyll concentration from remotely sensed data at 15-cm resolution. The results indicate that a relevance vector machine with a thin plate spline kernel type and kernel width of 5.4, having LAI, NDVI, thermal and red bands as the selected set of inputs, can be used to spatially estimate chlorophyll concentration with a root-mean-square error of 5.31 μg cm⁻², efficiency of 0.76, and 9 relevance vectors.
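The thin-plate-spline kernel with width 5.4 named above can be sketched as follows. The relevance vector machine itself is replaced here by a plain least-squares fit on the kernel features, and all inputs (standing in for LAI, NDVI, thermal and red) are synthetic:

```python
import numpy as np

def tps_kernel(X, C, width=5.4):
    """Thin-plate-spline kernel r^2 * log(r) on pairwise distances,
    scaled by the kernel width (5.4 in the study)."""
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) / width
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d > 0, d ** 2 * np.log(d), 0.0)

rng = np.random.default_rng(0)
X = rng.random((60, 4))        # synthetic stand-ins for LAI, NDVI, thermal, red
chl = 30 * X[:, 1] + 5 * X[:, 0] + rng.normal(0, 0.5, 60)  # fake chlorophyll

K = tps_kernel(X, X)
w, *_ = np.linalg.lstsq(K, chl, rcond=None)  # least-squares stand-in for the RVM
pred = K @ w
rmse = float(np.sqrt(np.mean((pred - chl) ** 2)))
```

An actual RVM would additionally prune most of the 60 kernel columns, retaining only a handful of relevance vectors (9 in the study).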

  12. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight-planning software. During the flight the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  13. Methods for efficient correction of complex noise in outdoor video rate passive millimeter wavelength imagery

    NASA Astrophysics Data System (ADS)

    Mundhenk, T. Nathan; Baron, Joshua; Matic, Roy M.

    2012-09-01

    Passive millimeter wavelength (PMMW) video holds great promise, given its ability to see targets and obstacles through fog, smoke, and rain. However, current imagers produce undesirable complex noise. This can come as a mixture of fast shot (snowlike) noise and a slower-forming circular fixed pattern. Shot noise can be removed by a simple gain style filter. However, this can produce blurring of objects in the scene. To alleviate this, we measure the amount of Bayesian surprise in videos. Bayesian surprise measures feature change in time that is abrupt but cannot be accounted for as shot noise. Surprise is used to attenuate the shot noise filter in locations of high surprise. Since high Bayesian surprise in videos is very salient to observers, this reduces blurring, particularly in places where people visually attend. Fixed pattern noise is removed after the shot noise using a combination of non-uniformity correction and mean image wavelet transformation. The combination allows for online removal of time-varying fixed pattern noise, even when background motion may be absent. It also allows for online adaptation to differing intensities of fixed pattern noise. We also discuss a method for sharpening frames using deconvolution. The fixed pattern and shot noise filters are all efficient, which allows real time video processing of PMMW video. We show several examples of PMMW video with complex noise that is much cleaner as a result of the noise removal. Processed video clearly shows cars, houses, trees, and utility poles at 20 frames per second.
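A much-simplified stand-in for the surprise-gated shot-noise filter described above: a temporal blend whose smoothing weight is attenuated wherever a pixel deviates "surprisingly" from its running belief. The z-score proxy, `alpha` and `k` are assumptions, not the paper's Bayesian-surprise formulation:

```python
import numpy as np

def surprise_gated_filter(frames, alpha=0.6, k=4.0):
    """Temporal smoothing attenuated by a surprise proxy. alpha is the
    maximum smoothing weight and k the surprise scale (both assumed)."""
    mean = frames[0].astype(float).copy()
    var = np.ones_like(mean)
    out = [mean.copy()]
    for f in frames[1:]:
        f = f.astype(float)
        z2 = (f - mean) ** 2 / (var + 1e-9)  # how unexpected is this pixel?
        g = alpha * np.exp(-z2 / k)          # high surprise -> little smoothing
        smoothed = g * mean + (1 - g) * f
        var = 0.9 * var + 0.1 * (f - mean) ** 2
        mean = smoothed
        out.append(smoothed.copy())
    return out

frames = [np.full((4, 4), 10.0) for _ in range(5)]
cleaned = surprise_gated_filter(frames)
```

On a static scene the filter is the identity; the intended effect on real video is that abrupt, salient changes pass through largely unblurred while low-level flicker is averaged away.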

  14. Classifying Multiple Stages of Mountain Pine Beetle Disturbance Using Multispectral Aerial Imagery in North-Central Colorado

    NASA Astrophysics Data System (ADS)

    Meddens, A. J.; Hicke, J. A.; Vierling, L. A.

    2010-12-01

    Insect outbreaks are major forest disturbances, killing trees across millions of ha in the United States. These dead trees affect the condition of the ecosystems, leading to alterations of forest functioning and fuel arrangement, among other impacts. In this study, we evaluated methods for classifying 30-cm multispectral imagery including insect-caused tree mortality (both red and gray attack) classes and non-forest classes. We acquired 4-band imagery in lodgepole pine stands of central Colorado that were recently attacked by mountain pine beetle. The 30-cm resolution image facilitated delineation of field-observed trees, which were used for image classification. We employed the maximum likelihood classifier with the Normalized Difference Vegetation Index (NDVI), the Red-Green Index (RGI), and Green band (GREEN). Our initial classification used original spatial resolution imagery to identify green trees, red-attack, gray-attack, herbaceous, bare soil, and shadow classes. Although classification accuracies were good (overall accuracy of 85.95%, kappa = 0.826), we noted confusion between sunlit crowns of live (green) trees and herbaceous classes at this very fine spatial resolution, and confusion between sunlit crowns of gray- and red-attack trees and bare soil, and thus explored additional methods to reduce omission and commission errors. Classification confusion was overcome by aggregating the 30-cm multispectral imagery into a 2.4-m resolution image (matching very high resolution satellite imagery). Pixels in the 2.4-m resolution image included more shadow in the forested regions than the 30-cm resolution, thereby reducing forest canopy reflectance and improving the separability between the forest and non-forest classes that had caused previous errors. 
We conclude that operational mapping of insect-caused tree mortality with multispectral imagery has great potential for forest disturbance mapping, and that imagery with a spatial resolution about the crown width of
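The aggregation step that resolved the classification confusion above is plain block averaging, e.g. a factor of 8 to take 30-cm pixels to 2.4-m pixels; a minimal sketch:

```python
import numpy as np

def aggregate(band, factor=8):
    """Mean-aggregate a (H, W) band by an integer factor; factor=8 takes
    30-cm pixels to 2.4-m pixels (any remainder rows/cols are cropped)."""
    h = (band.shape[0] // factor) * factor
    w = (band.shape[1] // factor) * factor
    a = band[:h, :w].astype(float)
    return a.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

band = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 tile of 30-cm pixels
coarse = aggregate(band, factor=8)               # one 2.4-m pixel
```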

  15. Automated 2D shoreline detection from coastal video imagery: an example from the island of Crete

    NASA Astrophysics Data System (ADS)

    Velegrakis, A. F.; Trygonis, V.; Vousdoukas, M. I.; Ghionis, G.; Chatzipavlis, A.; Andreadis, O.; Psarros, F.; Hasiotis, Th.

    2015-06-01

    Beaches are both sensitive and critical coastal system components as they: (i) are vulnerable to coastal erosion (due to e.g. wave regime changes and the short- and long-term sea level rise) and (ii) form valuable ecosystems and economic resources. In order to identify/understand the current and future beach morphodynamics, effective monitoring of the beach spatial characteristics (e.g. the shoreline position) at adequate spatio-temporal resolutions is required. In this contribution we present the results of a new, fully-automated detection method of the (2-D) shoreline positions using high resolution video imaging from a Greek island beach (Ammoudara, Crete). A fully-automated feature detection method was developed/used to monitor the shoreline position in geo-rectified coastal imagery obtained through a video system set to collect 10 min videos every daylight hour with a sampling rate of 5 Hz, from which snapshot, time-averaged (TIMEX) and variance images (SIGMA) were generated. The developed coastal feature detector is based on a very fast algorithm using a localised kernel that progressively grows along the SIGMA or TIMEX digital image, following the maximum backscatter intensity along the feature of interest; the detector results were found to compare very well with those obtained from a semi-automated `manual' shoreline detection procedure. The automated procedure was tested on video imagery obtained from the eastern part of Ammoudara beach in two 5-day periods, a low wave energy period (6-10 April 2014) and a high wave energy period (1-5 November 2014).
The results showed that, during the high wave energy event, there were much higher levels of shoreline variance which, however, appeared to be similarly unevenly distributed along the shoreline as that related to the low wave energy event. Shoreline variance `hot spots' were found to be related to the presence/architecture of an offshore submerged shallow beachrock reef, found at a distance of 50-80 m
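The TIMEX and SIGMA products named above are simple per-pixel statistics over a burst of co-registered frames; a minimal sketch (SIGMA is computed here as a standard deviation):

```python
import numpy as np

def timex_sigma(frames):
    """Time-averaged (TIMEX) and per-pixel standard-deviation (SIGMA)
    products from a stack of co-registered video frames."""
    stack = np.asarray(frames, dtype=float)
    return stack.mean(axis=0), stack.std(axis=0)

# A 10-min burst at 5 Hz would be 3000 frames; four tiny frames suffice here.
frames = [np.array([[0.0, 2.0], [4.0, 6.0]]) + i for i in range(4)]
timex, sigma = timex_sigma(frames)
```

In TIMEX images the swash zone blurs into a bright band, and SIGMA highlights where intensity fluctuates most, which is why the detector tracks maximum intensity along these products.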

  16. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    PubMed

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated to complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery. PMID:24473345

  17. [Retrieval of crown closure of moso bamboo forest using unmanned aerial vehicle (UAV) remotely sensed imagery based on geometric-optical model].

    PubMed

    Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long

    2015-05-01

    This research focused on the application of remotely sensed imagery from an unmanned aerial vehicle (UAV) with high spatial resolution for the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differentiation in the estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with a coefficient of determination (R²) of 0.63 at the 0.01 level against the measured values acquired during the field survey. The root mean square error (RMSE) of approximately 0.04 was low, indicating that the use of fully constrained linear SMA could bring about better results in crown closure estimation, closer to the actual condition in moso bamboo forest. PMID:26571671
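Fully constrained linear SMA solves for abundances that are non-negative and sum to one. A common sketch uses non-negative least squares with a heavily weighted sum-to-one row; the two endmember spectra below are invented for illustration:

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, y, w=1e3):
    """Fully constrained linear unmixing: abundances a >= 0 with sum(a) = 1.
    The sum-to-one constraint is enforced softly by appending a heavily
    weighted row of ones to the endmember matrix (a standard NNLS trick)."""
    Ea = np.vstack([E, w * np.ones(E.shape[1])])
    ya = np.append(y, w)
    a, _ = nnls(Ea, ya)
    return a

# Two invented endmembers (e.g. canopy, background) over three bands.
E = np.array([[0.05, 0.30],
              [0.45, 0.25],
              [0.30, 0.60]])
truth = np.array([0.7, 0.3])
y = E @ truth          # noise-free mixed pixel
a = fcls(E, y)         # recovers the abundances
```

Unconstrained SMA would instead be a plain least-squares solve, which can return negative or non-summing abundances; that difference is what drives the accuracy gap reported above.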

  20. BOREAS RSS-3 Imagery and Snapshots from a Helicopter-Mounted Video Camera

    NASA Technical Reports Server (NTRS)

    Walthall, Charles L.; Loechel, Sara; Nickeson, Jaime (Editor); Hall, Forrest G. (Editor)

    2000-01-01

    The BOREAS RSS-3 team collected helicopter-based video coverage of forested sites acquired during BOREAS as well as single-frame "snapshots" processed to still images. Helicopter data used in this analysis were collected during all three 1994 IFCs (24-May to 16-Jun, 19-Jul to 10-Aug, and 30-Aug to 19-Sep), at numerous tower and auxiliary sites in both the NSA and the SSA. The VHS-camera observations correspond to other coincident helicopter measurements. The field of view of the camera is unknown. The video tapes are in both VHS and Beta format. The still images are stored in JPEG format.

  1. An algebraic restoration method for estimating fixed-pattern noise in infrared imagery from a video sequence

    NASA Astrophysics Data System (ADS)

    Sakoglu, Unal; Hardie, Russell C.; Hayat, Majeed M.; Ratliff, Bradley M.; Tyo, J. Scott

    2004-11-01

    The inherent nonuniformity in the photoresponse and readout-circuitry of the individual detectors in infrared focal-plane-array imagers results in the notorious fixed-pattern noise (FPN). FPN generally degrades the performance of infrared imagers and it is particularly problematic in the mid-wavelength and long-wavelength infrared regimes. In many applications, employing signal-processing techniques to combat FPN may be preferred over hard calibration (e.g., two-point calibration), as they are less expensive and, more importantly, do not require halting the operation of the camera. In this paper, a new technique that uses knowledge of global motion in a video sequence to restore the true scene in the presence of FPN is introduced. In the proposed setting, the entire video sequence is regarded as an output of a motion-dependent linear transformation, which acts collectively on the true scene and the unknown bias elements (which represent the FPN) in each detector. The true scene is then estimated from the video sequence according to a minimum mean-square-error criterion. Two modes of operation are considered. First, we consider non-radiometric restoration, in which case the true scene is estimated by performing a regularized minimization, since the problem is ill-posed. The other mode of operation is radiometric, in which case we assume that only the perimeter detectors have been calibrated. This latter mode does not require regularization and therefore avoids compromising the radiometric accuracy of the restored scene. The algorithm is demonstrated through preliminary results from simulated and real infrared imagery.
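For contrast with the paper's motion-based algebraic estimator, a far simpler one-point correction (the classic constant-statistics NUC) estimates each detector's bias from its temporal mean, assuming camera motion decorrelates the scene from the detector grid:

```python
import numpy as np

def constant_statistics_nuc(frames):
    """One-point non-uniformity correction: each detector's bias is taken
    as its temporal mean minus the global mean. This is a simpler stand-in
    for the paper's algebraic, motion-dependent estimator."""
    stack = np.asarray(frames, dtype=float)
    temporal_mean = stack.mean(axis=0)
    bias = temporal_mean - temporal_mean.mean()
    return stack - bias, bias

rng = np.random.default_rng(1)
true_bias = rng.normal(0, 5, (8, 8))                        # synthetic FPN
frames = [rng.random((8, 8)) * 10 + true_bias for _ in range(200)]
corrected, bias_est = constant_statistics_nuc(frames)
err = float(np.abs(bias_est - (true_bias - true_bias.mean())).max())
```

Like the paper's non-radiometric mode, this recovers the bias pattern only up to a global offset, which is why the radiometric mode needs calibrated perimeter detectors.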

  2. Characterization of Shrubland-Atmosphere Interactions through Use of the Eddy Covariance Method, Distributed Footprint Sampling, and Imagery from Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Anderson, C.; Vivoni, E. R.; Pierini, N.; Robles-Morua, A.; Rango, A.; Laliberte, A.; Saripalli, S.

    2012-12-01

    Ecohydrological dynamics can be evaluated from field observations of land-atmosphere states and fluxes, including water, carbon, and energy exchanges measured through the eddy covariance method. In heterogeneous landscapes, the representativeness of these measurements is not well understood due to the variable nature of the sampling footprint and the mixture of underlying herbaceous, shrub, and soil patches. In this study, we integrate new field techniques to understand how ecosystem surface states are related to turbulent fluxes in two different semiarid shrubland settings in the Jornada (New Mexico) and Santa Rita (Arizona) Experimental Ranges. The two sites are characteristic of Chihuahuan (NM) and Sonoran (AZ) Desert mixed-shrub communities resulting from woody plant encroachment into grassland areas. In each study site, we deployed continuous soil moisture and soil temperature profile observations at twenty sites around an eddy covariance tower after local footprint estimation revealed the optimal sensor network design. We then characterized the tower footprint through terrain and vegetation analyses derived at high resolution (<1 m) from imagery obtained from fixed-wing and rotary-wing unmanned aerial vehicles (UAVs). Our analysis focuses on the summertime land-atmosphere states and fluxes during which each ecosystem responded differentially to the North American monsoon. We found that vegetation heterogeneity induces spatial differences in soil moisture and temperature that are important to capture when relating these states to the eddy covariance flux measurements. Spatial distributions of surface states at different depths reveal intricate patterns linked to vegetation cover that vary between the two sites. Furthermore, single site measurements at the tower are insufficient to capture the footprint conditions and their influence on turbulent fluxes. We also discuss techniques for aggregating the surface states based upon the vegetation and soil

  3. Virtually transparent epidermal imagery (VTEI): on new approaches to in vivo wireless high-definition video and image processing.

    PubMed

    Anderson, Adam L; Lin, Bingxiong; Sun, Yu

    2013-12-01

    This work first overviews a novel design, and prototype implementation, of a virtually transparent epidermal imagery (VTEI) system for laparo-endoscopic single-site (LESS) surgery. The system uses a network of multiple, micro-cameras and multiview mosaicking to obtain a panoramic view of the surgery area. The prototype VTEI system also projects the generated panoramic view on the abdomen area to create a transparent display effect that mimics equivalent, but higher risk, open-cavity surgeries. The specific research focus of this paper is on two important aspects of a VTEI system: 1) in vivo wireless high-definition (HD) video transmission and 2) multi-image processing-both of which play key roles in next-generation systems. For transmission and reception, this paper proposes a theoretical wireless communication scheme for high-definition video in situations that require extremely small-footprint image sensors and in zero-latency applications. In such situations the typical optimized metrics in communication schemes, such as power and data rate, are far less important than latency and hardware footprint that absolutely preclude their use if not satisfied. This work proposes the use of a novel Frequency-Modulated Voltage-Division Multiplexing (FM-VDM) scheme where sensor data is kept analog and transmitted via "voltage-multiplexed" signals that are also frequency-modulated. Once images are received, a novel Homographic Image Mosaicking and Morphing (HIMM) algorithm is proposed to stitch images from respective cameras, that also compensates for irregular surfaces in real-time, into a single cohesive view of the surgical area. In VTEI, this view is then visible to the surgeon directly on the patient to give an "open cavity" feel to laparoscopic procedures. PMID:24473549
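The mosaicking in HIMM rests on mapping pixels between cameras through plane projective transformations; the core operation, warping points through a 3×3 homography with a homogeneous divide, is sketched below (the matrix and frame size are illustrative, not from the paper):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography with a homogeneous divide."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

# Illustrative homography: pure translation by (5, 3) pixels.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [639.0, 0.0], [639.0, 479.0], [0.0, 479.0]])
warped = apply_homography(H, corners)  # where this frame lands in the mosaic
```

Warping each camera's frame corners this way fixes its placement in the panoramic canvas; HIMM additionally morphs the overlap regions to handle non-planar tissue surfaces.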

  5. Evolution of a natural debris flow: In situ measurements of flow dynamics, video imagery, and terrestrial laser scanning

    USGS Publications Warehouse

    McCoy, S.W.; Kean, J.W.; Coe, J.A.; Staley, D.M.; Wasklewicz, T.A.; Tucker, G.E.

    2010-01-01

    Many theoretical and laboratory studies have been undertaken to understand debris-flow processes and their associated hazards. However, complete and quantitative data sets from natural debris flows needed for confirmation of these results are limited. We used a novel combination of in situ measurements of debris-flow dynamics, video imagery, and pre- and postflow 2-cm-resolution digital terrain models to study a natural debris-flow event. Our field data constrain the initial and final reach morphology and key flow dynamics. The observed event consisted of multiple surges, each with clear variation of flow properties along the length of the surge. Steep, highly resistant, surge fronts of coarse-grained material without measurable pore-fluid pressure were pushed along by relatively fine-grained and water-rich tails that had a wide range of pore-fluid pressures (some two times greater than hydrostatic). Surges with larger nonequilibrium pore-fluid pressures had longer travel distances. A wide range of travel distances from different surges of similar size indicates that dynamic flow properties are of equal or greater importance than channel properties in determining where a particular surge will stop. Progressive vertical accretion of multiple surges generated the total thickness of mapped debris-flow deposits; nevertheless, deposits had massive, vertically unstratified sedimentological textures. © 2010 Geological Society of America.

  6. Estimation of wave phase speed and nearshore bathymetry from video imagery

    USGS Publications Warehouse

    Stockdon, H.F.; Holman, R.A.

    2000-01-01

    A new remote sensing technique based on video image processing has been developed for the estimation of nearshore bathymetry. The shoreward propagation of waves is measured using pixel intensity time series collected at a cross-shore array of locations using remotely operated video cameras. The incident band is identified, and the cross-spectral matrix is calculated for this band. The cross-shore component of wavenumber is found as the gradient in phase of the first complex empirical orthogonal function of this matrix. Water depth is then inferred from linear wave theory's dispersion relationship. Full bathymetry maps may be measured by collecting data in a large array composed of both cross-shore and longshore lines. Data are collected hourly throughout the day, and a stable, daily estimate of bathymetry is calculated from the median of the hourly estimates. The technique was tested using 30 days of hourly data collected at the SandyDuck experiment in Duck, North Carolina, in October 1997. Errors calculated as the difference between estimated depth and ground truth data show a mean bias of -35 cm (rms error = 91 cm). Expressed as a fraction of the true water depth, the mean percent error was 13% (rms error = 34%). Excluding the region of known wave nonlinearities over the bar crest, the accuracy of the technique improved, and the mean (rms) error was -20 cm (75 cm). Additionally, under low-amplitude swells (wave height H ≤ 1 m), the performance of the technique across the entire profile improved to 6% (29%) of the true water depth with a mean (rms) error of -12 cm (71 cm). Copyright 2000 by the American Geophysical Union.
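The depth-inversion step is the linear dispersion relation ω² = gk·tanh(kh) solved for h; given an observed frequency and wavenumber it has the closed form h = artanh(ω²/gk)/k. A minimal sketch with invented numbers:

```python
import numpy as np

def depth_from_dispersion(omega, k, g=9.81):
    """Invert the linear dispersion relation omega^2 = g*k*tanh(k*h)
    for water depth h, given angular frequency and wavenumber."""
    ratio = omega ** 2 / (g * k)
    if not 0.0 < ratio < 1.0:
        raise ValueError("no finite depth satisfies the dispersion relation")
    return float(np.arctanh(ratio) / k)

# Example (invented numbers): a 10-s swell with a 60-m observed wavelength.
omega = 2 * np.pi / 10
k = 2 * np.pi / 60
h = depth_from_dispersion(omega, k)  # a few metres of water
```

In deep water ω²/(gk) → 1 and the inversion loses sensitivity, which is one reason the technique is a nearshore method.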

  7. Analysis of Small-Scale Convective Dynamics in a Crown Fire Using Infrared Video Camera Imagery.

    NASA Astrophysics Data System (ADS)

    Clark, Terry L.; Radke, Larry; Coen, Janice; Middleton, Don

    1999-10-01

    A good physical understanding of the initiation, propagation, and spread of crown fires remains an elusive goal for fire researchers. Although some data exist that describe the fire spread rate and some qualitative aspects of wildfire behavior, none have revealed the very small timescales and spatial scales in the convective processes that may play a key role in determining both the details and the rate of fire spread. Here such a dataset is derived using data from a prescribed burn during the International Crown Fire Modelling Experiment. A gradient-based image flow analysis scheme is presented and applied to a sequence of high-frequency (0.03 s), high-resolution (0.05-0.16 m) radiant temperature images obtained by an Inframetrics ThermaCAM instrument during an intense crown fire to derive wind fields and sensible heat flux. It was found that the motions during the crown fire had energy-containing scales on the order of meters with timescales of fractions of a second. Estimates of maximum vertical heat fluxes ranged between 0.6 and 3 MW m⁻² over the 4.5-min burn, with early time periods showing surprisingly large fluxes of 3 MW m⁻². Statistically determined velocity extremes, using five standard deviations from the mean, suggest that updrafts between 10 and 30 m s⁻¹, downdrafts between 10 and 20 m s⁻¹, and horizontal motions between 5 and 15 m s⁻¹ frequently occurred throughout the fire. The image flow analyses indicated a number of physical mechanisms that contribute to the fire spread rate, such as the enhanced tilting of horizontal vortices leading to counterrotating convective towers with estimated vertical vorticities of 4 to 10 s⁻¹ rotating such that air between the towers blew in the direction of fire spread at canopy height and below. The IR imagery and flow analysis also repeatedly showed regions of thermal saturation (infrared temperature > 750°C), rising through the convection. 
These regions represent turbulent bursts or hairpin vortices resulting again from
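
    The gradient-based image flow analysis described above can be illustrated in miniature: the optical-flow constraint ix·u + iy·v + it ≈ 0 is solved in a least-squares sense over a patch of two consecutive thermal frames. The sketch below is a Lucas-Kanade-style toy on synthetic data, not the paper's actual scheme, and the conversion from pixels per frame to m s-1 via image scale and frame interval is omitted.

```python
def flow_at(frame0, frame1, dt=1.0):
    """Estimate one (u, v) displacement (pixels per frame) for a patch
    by least squares on the constraint ix*u + iy*v + it = 0."""
    h, w = len(frame0), len(frame0[0])
    a11 = a12 = a22 = b1 = b2 = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (frame0[y][x + 1] - frame0[y][x - 1]) / 2.0  # spatial gradient
            iy = (frame0[y + 1][x] - frame0[y - 1][x]) / 2.0
            it = (frame1[y][x] - frame0[y][x]) / dt           # temporal gradient
            a11 += ix * ix; a12 += ix * iy; a22 += iy * iy
            b1 -= ix * it;  b2 -= iy * it
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:                 # aperture problem: no unique flow
        return 0.0, 0.0
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Synthetic frames: an intensity surface shifted right by one pixel.
f0 = [[float(x * y) for x in range(8)] for y in range(8)]
f1 = [[float((x - 1) * y) for x in range(8)] for y in range(8)]
u, v = flow_at(f0, f1)  # expect u near 1, v near 0
```

    On real thermal imagery this local estimate would be computed per window and post-filtered; the single-patch version only conveys the core least-squares step.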

  8. An algorithm for the measurement of shoreline and intertidal beach profiles using video imagery: PSDM

    NASA Astrophysics Data System (ADS)

    Osorio, A. F.; Medina, R.; Gonzalez, M.

    2012-09-01

    A critical factor when undertaking proper coastline management is the availability of reliable data, allowing those responsible to make informed decisions about land use and its impact on natural resources. Marine resource data (bathymetry, time-series of waves, coastal use levels, etc.) have often been difficult for policy-makers to obtain and use, given the high costs involved and difficulties in their application and interpretation. In cases where a source of coastline data is available, it is important that it be sufficiently complete to compare habitat changes (in morphology and use) that are only observable over a period of years. Thus, it is necessary to look for alternative methods to obtain this long-term information. The availability of new data-gathering techniques for the study of coastlines and coastal processes, such as remote sensors and underwater photography and videography, has increased during the last decade. One of the principal research applications of these techniques is bathymetry. This paper presents state-of-the-art image-based models that allow the construction of topo-bathymetric shore data. The Physical and Statistical Detection Model (PSDM), which constitutes the core of this research, is discussed and validated. The PSDM is based on a combination of six different algorithms that describe the shoreline features, and it assigns physical and statistical criteria to the detection process. In conclusion, a numerical tool based on video systems is introduced which shows great potential for monitoring coastal issues and supplying data to aid technicians and shoreline managers.

  9. Salient object detection approach in UAV video

    NASA Astrophysics Data System (ADS)

    Zhang, Yueqiang; Su, Ang; Zhu, Xianwei; Zhang, Xiaohu; Shang, Yang

    2013-10-01

    The automatic detection of visually salient information from abundant video imagery is crucial, as it plays an important role in surveillance and reconnaissance tasks for Unmanned Aerial Vehicles (UAVs). A real-time approach for the detection of salient objects on roads, e.g. stationary and moving vehicles or people, is proposed, based on region segmentation and saliency detection within related domains. Traditional methods typically depend upon additional scene information and auxiliary thermal or IR sensing for secondary confirmation. In contrast, the proposed approach detects objects of interest directly from video imagery captured by an optical camera fixed on a small UAV platform. To validate the proposed salient object detection approach, 25 Hz video data from our low-speed small UAV were tested. The results demonstrate that the proposed approach performs well in isolated rural environments.

  10. Analysis of Biophysical Mechanisms of Gilgai Microrelief Formation in Dryland Swelling Soils Using Ultra-High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Krell, N.; DeCarlo, K. F.; Caylor, K. K.

    2015-12-01

    Microrelief formations ("gilgai"), which form due to successive wetting-drying cycles typical of swelling soils, provide ecological hotspots for local fauna and flora, including higher and more robust vegetative growth. The distribution of these gilgai suggests a remarkable degree of regularity. However, it is unclear to what extent the mechanisms that drive gilgai formation are physical, such as desiccation-induced fracturing, or biological in nature, namely antecedent vegetative clustering. We investigated gilgai genesis and pattern formation in a 100 x 100 meter study area with swelling soils in a semiarid grassland at the Mpala Research Center in central Kenya. Our ongoing experiment is composed of three 9 m2 treatments: we removed gilgai and limited vegetative growth by herbicide application in one plot, allowed for unrestricted seed dispersal in another, and left gilgai unobstructed in a control plot. To estimate the spatial frequencies of the repeating patterns of gilgai, we obtained ultra-high resolution (0.01-0.03 m/pixel) images with an unmanned aerial vehicle (UAV), from which digital elevation models were also generated. Geostatistical analyses using wavelet and Fourier methods in 1 and 2 dimensions were employed to characterize gilgai size and distribution. Preliminary results support regular spatial patterning across the gilgaied landscape, with heterogeneities possibly related to local soil properties and biophysical influences. Local data on gilgai and fracture characteristics suggest that gilgai form at characteristic heights and spacing based on fracture morphology: deep, wide cracks result in large, highly vegetated mounds, whereas shallow cracks, induced by animal trails, are less correlated with gilgai size and shape. Our experiments will help elucidate the links between shrink-swell processes and gilgai-vegetation patterning in high activity clay soils and advance our understanding of the mechanisms of gilgai formation in drylands.

  11. Semi-Automated Approach for Mapping Urban Trees from Integrated Aerial LiDAR Point Cloud and Digital Imagery Datasets

    NASA Astrophysics Data System (ADS)

    Dogon-Yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-09-01

    Mapping of trees plays an important role in modern urban spatial data management, as many benefits and applications derive from these detailed, up-to-date data sources. Timely and accurate acquisition of information on the condition of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting trees include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints such as labour-intensive field work and high financial requirements, which can be overcome by means of integrated LiDAR and digital image datasets. In contrast to predominant studies on tree extraction in purely forested areas, this study concentrates on urban areas, which have a high structural complexity with a multitude of different objects. This paper presents a workflow for a semi-automated approach to extracting urban trees from the integrated processing of airborne LiDAR point cloud and multispectral digital image datasets over Istanbul, Turkey. The paper shows that the integrated datasets are a suitable technology and a viable source of information for urban tree management. In conclusion, the extracted information provides a snapshot of the location, composition and extent of trees in the study area, useful to city planners and other decision makers for understanding how much canopy cover exists, for identifying new planting, removal, or reforestation opportunities, and for finding the locations with the greatest need or potential to maximize the benefits of return on investment. It can also help track trends or changes to urban trees over time and inform future management decisions.

  12. An Automated Approach to Agricultural Tile Drain Detection and Extraction Utilizing High Resolution Aerial Imagery and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Johansen, Richard A.

    Subsurface drainage from agricultural fields in the Maumee River watershed is suspected to adversely impact water quality and contribute to the formation of harmful algal blooms (HABs) in Lake Erie. In early August of 2014, a HAB developed in the western Lake Erie Basin that left over 400,000 people unable to drink their tap water due to the presence of a toxin from the bloom. HAB development in Lake Erie is aided by excess nutrients from agricultural fields, which are transported through subsurface tile and enter the watershed. Compounding the issue, the trend within the Maumee watershed has been toward increased installation of tile drains in both total extent and density. Due to the immense area of drained fields, there is a need to establish an accurate and effective technique to monitor subsurface farmland tile installations and their associated impacts. This thesis aimed to develop an automated method to identify subsurface tile locations from high resolution aerial imagery by applying an object-based image analysis (OBIA) approach utilizing eCognition. This process was accomplished through a set of algorithms and image filters, which segment and classify image objects by their spectral and geometric characteristics. The algorithms utilized were based on the relative location of image objects and pixels, in order to maximize the robustness and transferability of the final rule-set. These algorithms were coupled with convolution and histogram image filters to generate results for a 10 km2 study area located within Clay Township in Ottawa County, Ohio. The eCognition results were compared to previously collected tile locations from an associated project that applied heads-up digitizing of aerial photography to map field tile. The heads-up digitized locations were used as a baseline for the accuracy assessment. The accuracy assessment generated a range of agreement values from 67.20% - 71.20%, and an average
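
    The agreement figures quoted above come down to a cell-by-cell comparison of two classified rasters (the OBIA output against the heads-up digitized baseline). A minimal sketch of such a computation, on hypothetical 2 x 2 rasters rather than the thesis data:

```python
def percent_agreement(classified, baseline):
    """Percent of cells on which two equally-sized class rasters agree."""
    matches = total = 0
    for row_a, row_b in zip(classified, baseline):
        for cell_a, cell_b in zip(row_a, row_b):
            matches += (cell_a == cell_b)  # True counts as 1
            total += 1
    return 100.0 * matches / total
```

    A fuller assessment would also report per-class omission/commission errors, but simple percent agreement matches the summary statistic given in the abstract.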

  13. Small UAV-Acquired, High-resolution, Georeferenced Still Imagery

    SciTech Connect

    Ryan Hruska

    2005-09-01

    Currently, small Unmanned Aerial Vehicles (UAVs) are primarily used for capturing and down-linking real-time video. To date, their role as a low-cost airborne platform for capturing high-resolution, georeferenced still imagery has not been fully utilized. On-going work within the Unmanned Vehicle Systems Program at the Idaho National Laboratory (INL) is attempting to exploit this small UAV-acquired, still imagery potential. Initially, a UAV-based still imagery work flow model was developed that includes initial UAV mission planning, sensor selection, UAV/sensor integration, and imagery collection, processing, and analysis. Components to support each stage of the work flow are also being developed. Critical to use of acquired still imagery is the ability to detect changes between images of the same area over time. To enhance the analysts’ change detection ability, a UAV-specific, GIS-based change detection system called SADI or System for Analyzing Differences in Imagery is under development. This paper will discuss the associated challenges and approaches to collecting still imagery with small UAVs. Additionally, specific components of the developed work flow system will be described and graphically illustrated using varied examples of small UAV-acquired still imagery.

  14. Regional albedo of Arctic first-year drift ice in advanced stages of melt from the combination of in situ measurements and aerial imagery

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Granskog, M. A.; Hudson, S. R.; Pedersen, C. A.; Karlsen, T. I.; Divina, S. A.; Gerland, S.

    2014-07-01

    The paper presents a case study of the regional (≈ 150 km) broadband albedo of first year Arctic sea ice in advanced stages of melt, estimated from a combination of in situ albedo measurements and aerial imagery. The data were collected during the eight day ICE12 drift experiment carried out by the Norwegian Polar Institute in the Arctic north of Svalbard at 82.3° N from 26 July to 3 August 2012. The study uses in situ albedo measurements representative of the four main surface types: bare ice, dark melt ponds, bright melt ponds and open water. Images acquired by a helicopter borne camera system during ice survey flights covered about 28 km2. A subset of > 8000 images from the area of homogeneous melt with open water fraction of ≈ 0.11 and melt pond coverage of ≈ 0.25 used in the upscaling yielded a regional albedo estimate of 0.40 (0.38; 0.42). The 95% confidence interval on the estimate was derived using the moving block bootstrap approach applied to sequences of classified sea ice images and albedo of the four surface types treated as random variables. Uncertainty in the mean estimates of surface type albedo from in situ measurements contributed some 95% of the variance of the estimated regional albedo, with the remaining variance resulting from the spatial inhomogeneity of sea ice cover. The results of the study are of relevance for the modeling of sea ice processes in climate simulations. It particularly concerns the period of summer melt, when the optical properties of sea ice undergo substantial changes, which existing sea ice models have significant difficulty accurately reproducing.
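
    The upscaling step amounts to an area-weighted mean of the four surface-type albedos, and the confidence interval comes from a moving-block bootstrap along the image sequence. A minimal sketch of both, with illustrative albedo values and fractions rather than the study's measurements:

```python
import random

# Illustrative surface-type albedos (assumed values, not the study's
# in situ means).
TYPE_ALBEDO = {"bare_ice": 0.55, "dark_pond": 0.20,
               "bright_pond": 0.35, "open_water": 0.07}

def regional_albedo(fractions):
    """Area-weighted mean albedo from surface-type fractions (summing to 1)."""
    return sum(fractions[t] * TYPE_ALBEDO[t] for t in fractions)

def block_bootstrap_ci(series, block_len=5, n_boot=1000, alpha=0.05, seed=0):
    """Percentile CI for the mean of `series` via a moving-block bootstrap,
    which preserves short-range correlation along the image sequence."""
    rng = random.Random(seed)
    n = len(series)
    blocks = [series[i:i + block_len] for i in range(n - block_len + 1)]
    means = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:           # concatenate random blocks
            sample.extend(rng.choice(blocks))
        means.append(sum(sample[:n]) / n)
    means.sort()
    return (means[int(alpha / 2 * n_boot)],
            means[int((1 - alpha / 2) * n_boot) - 1])
```

    In the study the bootstrap also resampled the surface-type albedos themselves as random variables; the sketch keeps only the block-resampling idea.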

  15. Use of aerial high-resolution visible imagery to produce large river bathymetry: a multi-temporal and spatial study over the by-passed Upper Rhine

    NASA Astrophysics Data System (ADS)

    Béal, D.; Piégay, H.; Arnaud, F.; Rollet, A.; Schmitt, L.

    2011-12-01

    Aerial high resolution visible imagery allows large river bathymetry to be produced, assuming that water depth is related to water colour (Beer-Bouguer-Lambert law). In this paper we aim at monitoring Rhine River geometry changes for a diachronic study, as well as sediment transport after an artificial injection (a 25,000 m3 restoration operation). For this, a substantial database of ground measurements of river depth is used, built from three different sources: (i) differential GPS acquisitions, (ii) sounder data and (iii) lateral profiles surveyed by experts. Water depth is estimated using a multilinear regression over neo-channels built from a principal component analysis of the red, green and blue bands and the previously cited depth data. The study site is a 12 km long reach of the by-passed section of the Rhine River that forms the French-German border. This section has been heavily impacted by engineering works during the last two centuries: channelization since 1842 for navigation purposes, and the construction of a 45 km long lateral canal and four consecutive hydroelectric power plants since 1932. Several bathymetric models are produced, based on three different spatial resolutions (6, 13 and 20 cm) and five acquisitions (January, March, April, August and October) since 2008. The objectives are to find the optimal spatial resolution and to characterize seasonal effects. The best performance, obtained at the 13 cm resolution, shows an 18 cm accuracy when suspended matter least affected water transparency. The discussion focuses on the monitoring of the artificial reload after two flood events during winter 2010-2011. The bathymetric models produced are also useful for building 2D hydraulic model meshes.
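
    At its core, the depth-retrieval step is a multilinear regression from image channels to surveyed depths. A self-contained least-squares sketch via the normal equations, with synthetic predictors standing in for the PCA neo-channels and synthetic depths:

```python
def fit_linear(X, y):
    """Least-squares fit of y ≈ c0 + sum(ci * xi) via normal equations."""
    rows = [[1.0] + list(x) for x in X]   # prepend intercept column
    k = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting on A^T A c = A^T y
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coef = [0.0] * k
    for i in range(k - 1, -1, -1):
        coef[i] = (aty[i] - sum(ata[i][j] * coef[j]
                                for j in range(i + 1, k))) / ata[i][i]
    return coef

def predict(coef, x):
    return coef[0] + sum(c * xi for c, xi in zip(coef[1:], x))

# Synthetic calibration set generated from depth = 0.5 + 2*pc1 - 1*pc2.
X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
y = [0.5, 2.5, -0.5, 1.5, 3.5]
coef = fit_linear(X, y)
```

    The real model additionally involves the PCA transform of the RGB bands; here the predictors are taken as given.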

  16. A new technique for the detection of large scale landslides in glacio-lacustrine deposits using image correlation based upon aerial imagery: A case study from the French Alps

    NASA Astrophysics Data System (ADS)

    Fernandez, Paz; Whitworth, Malcolm

    2016-10-01

    Landslide monitoring has benefited from recent advances in the use of image correlation of high resolution optical imagery. However, this approach has typically involved satellite imagery that may not be available for all landslides, depending on their time of movement and location. This study investigated the application of image correlation techniques to a sequence of aerial imagery of an active landslide in the French Alps. We apply an indirect landslide monitoring technique (COSI-Corr), based upon cross-correlation between aerial photographs, to obtain horizontal displacement rates. Results for the 2001-2003 time interval are presented, providing a spatial model of landslide activity and motion across the landslide, which is consistent with previous studies. The study identified areas of new landslide activity in addition to known areas and, through image decorrelation, identified and mapped two new lateral landslides within the main landslide complex. This new approach for landslide monitoring is likely to be of wide applicability to other areas characterised by complex ground displacements.

  17. Classification of a wetland area along the upper Mississippi River with aerial videography

    USGS Publications Warehouse

    Jennings, C.A.; Vohs, P.A.; Dewey, M.R.

    1992-01-01

    We evaluated the use of aerial videography for classifying wetland habitats along the upper Mississippi River and found the prompt availability of habitat feature maps to be the major advantage of the video imagery technique. We successfully produced feature maps from digitized video images that generally agreed with the known distribution and areal coverages of the major habitat types independently identified and quantified with photointerpretation techniques. However, video images were not sufficiently detailed to allow us to consistently discriminate among the classes of aquatic macrophytes present or to quantify their areal coverage. Our inability to consistently distinguish among emergent, floating, and submergent macrophytes from the feature maps may have been related to the structural complexity of the site, to our limited vegetation sampling, and to limitations in video imagery. We expect that careful site selection (i.e., the desired level of resolution is available from video imagery) and additional vegetation samples (e.g., along a transect) will allow improved assignment of spectral values to specific plant types and enhance plant classification from feature maps produced from video imagery.

  18. Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques

    USGS Publications Warehouse

    Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.

    2011-01-01

    behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. ?? 2011 Elsevier B.V..

  19. Using IKONOS and Aerial Videography to Validate Landsat Land Cover Maps of Central African Tropical Rain Forests

    NASA Astrophysics Data System (ADS)

    Lin, T.; Laporte, N. T.

    2003-12-01

    Compared to traditional validation methods, aerial videography is a relatively inexpensive and time-efficient approach to collecting "field" data for validating satellite-derived land cover maps over large areas. In particular, this approach is valuable in remote and inaccessible locations. In the Sangha Tri-National Park region of Central Africa, where road access is limited to industrial logging sites, we are using IKONOS imagery and aerial videography to assess the accuracy of Landsat-derived land cover maps. As part of a NASA Land Cover Land Use Change project (INFORMS) and in collaboration with the Wildlife Conservation Society in the Republic of Congo, over 1500 km of aerial video transects were collected in the spring of 2001. The use of MediaMapper software combined with a VMS 200 video mapping system enabled the collected aerial transects to be registered with geographic locations from a Global Positioning System. Video frames were extracted, visually interpreted, and compared to land cover types mapped by Landsat. We addressed the limitations of accuracy assessment using aerial-based data and its potential for improving vegetation mapping in tropical rain forests. The results of the videography and IKONOS image analysis demonstrate the utility of very high resolution imagery for map validation and forest resource assessment.

  20. Near infrared-red models for the remote estimation of chlorophyll-a concentration in optically complex turbid productive waters: From in situ measurements to aerial imagery

    NASA Astrophysics Data System (ADS)

    Gurlin, Daniela

    Today the water quality of many inland and coastal waters is compromised by cultural eutrophication in consequence of increased human agricultural and industrial activities, and remote sensing is widely applied to monitor the trophic state of these waters. This study explores near infrared-red models for the remote estimation of chlorophyll-a concentration in turbid productive waters and compares several near infrared-red models developed within the last 35 years. Three of these near infrared-red models were calibrated for a dataset with chlorophyll-a concentrations from 2.3 to 81.2 mg m-3 and validated for independent and statistically significantly different datasets with chlorophyll-a concentrations from 4.0 to 95.5 mg m-3 and 4.0 to 24.2 mg m-3 for the spectral bands of the MEdium Resolution Imaging Spectrometer (MERIS) and Moderate-resolution Imaging Spectroradiometer (MODIS). The developed MERIS two-band algorithm estimated chlorophyll-a concentrations from 4.0 to 24.2 mg m-3, which are typical for many inland and coastal waters, very accurately, with a mean absolute error of 1.2 mg m-3. These results indicate a high potential of the simple MERIS two-band algorithm for the reliable estimation of chlorophyll-a concentration without any reduction in accuracy compared to more complex algorithms, even though more research seems required to analyze the sensitivity of this algorithm to differences in the chlorophyll-a specific absorption coefficient of phytoplankton. Three near infrared-red models were calibrated and validated for a smaller dataset of atmospherically corrected multi-temporal aerial imagery collected by the hyperspectral airborne imaging spectrometer for applications (AisaEAGLE). The developed algorithms successfully captured the spatial and temporal variability of the chlorophyll-a concentrations and estimated chlorophyll-a concentrations from 2.3 to 81.2 mg m-3 with mean absolute errors from 4.4 mg m-3 for the AISA two-band algorithm to 5.2 mg m-3
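
    A two-band NIR-red model of the kind compared in this study is, in its simplest form, a linear function of the NIR/red reflectance ratio. In the sketch below the default slope and intercept are placeholders that would come from calibration against in situ chlorophyll-a, not the study's actual coefficients:

```python
def two_band_chla(r_nir, r_red, slope=61.3, intercept=-37.7):
    """Estimate chlorophyll-a (mg m-3) from a NIR/red reflectance ratio.
    Default slope/intercept are illustrative placeholders; a real model
    calibrates them against coincident in situ chl-a measurements."""
    return slope * (r_nir / r_red) + intercept
```

    For MERIS-like bands the ratio would typically use reflectance near 708 nm over reflectance near 665 nm; three-band variants add a second NIR band to suppress backscattering effects.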

  1. Color Helmet Mounted Display System with Real Time Computer Generated and Video Imagery for In-Flight Simulation

    NASA Technical Reports Server (NTRS)

    Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL), using a Sikorsky UH-60 helicopter, for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer generated imagery and synthetic vision. This research is made possible in part by a full color, wide field of view Helmet Mounted Display (HMD) system that provides high performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet mounted display unit will be discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).

  2. Land cover/use mapping using multi-band imageries captured by Cropcam Unmanned Aerial Vehicle Autopilot (UAV) over Penang Island, Malaysia

    NASA Astrophysics Data System (ADS)

    Fuyi, Tan; Boon Chun, Beh; Mat Jafri, Mohd Zubir; Hwee San, Lim; Abdullah, Khiruddin; Mohammad Tahrin, Norhaslinda

    2012-11-01

    The problem of difficulty in obtaining cloud-free scenes at the Equatorial region from satellite platforms can be overcome by using airborne imagery. Airborne digital imagery has proved to be an effective tool for land cover studies. Airborne digital camera imageries were selected in this study because they provide higher spatial resolution data for mapping a small study area. The main objective of this study is to classify the RGB band imageries taken from a low-altitude Cropcam UAV for land cover/use mapping over the USM campus, Penang Island, Malaysia. A conventional digital camera was used to capture images from an elevation of 320 meters on board a UAV autopilot. This technique was cheaper and more economical compared with other airborne studies. The artificial neural network (NN) and maximum likelihood classifier (MLC) were used to classify the digital imageries captured using the Cropcam UAV over the USM campus, Penang Island, Malaysia. The supervised classifier was chosen based on the highest overall accuracy (>80%) and Kappa statistic (>0.8). The classified land cover map was geometrically corrected to provide a geocoded map. The results produced by this study indicated that land cover features could be clearly identified and classified into a land cover map. This study indicates that the use of a conventional digital camera as a sensor on board a UAV autopilot can provide useful information for planning and development of a small area of coverage.

  3. The ASPRS Digital Imagery Product Guideline Project

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Kuper, Philip; Stanley, Thomas; Mondello, Charles

    2001-01-01

    The American Society for Photogrammetry and Remote Sensing (ASPRS) Primary Data Acquisition Division is developing a Digital Imagery Product Guideline in conjunction with NASA, the U.S. Geological Survey (USGS), the National Imagery and Mapping Agency (NIMA), academia, and industry. The goal of the guideline is to offer providers and users of digital imagery a set of recommendations analogous to those defined by the ASPRS Aerial Photography 1995 Draft Standard for film-based imagery. This article offers a general outline and description of the Digital Imagery Product Guideline and Digital Imagery Tutorial/Reference documents for defining digital imagery requirements.

  4. Analysis of brook trout spatial behavior during passage attempts in corrugated culverts using near-infrared illumination video imagery

    USGS Publications Warehouse

    Bergeron, Normand E.; Constantin, Pierre-Marc; Goerig, Elsa; Castro-Santos, Theodore R.

    2016-01-01

    We used video recording and near-infrared illumination to document the spatial behavior of brook trout of various sizes attempting to pass corrugated culverts under different hydraulic conditions. Semi-automated image analysis was used to digitize fish position at high temporal resolution inside the culvert, which allowed calculation of various spatial behavior metrics, including instantaneous ground and swimming speed, path complexity, distance from side walls, velocity preference ratio (mean velocity at fish lateral position/mean cross-sectional velocity), as well as number and duration of stops in forward progression. The presentation summarizes the main results and discusses how they could be used to improve fish passage performance in culverts.
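
    The velocity preference ratio defined above is a simple quotient: mean flow velocity at the fish's lateral position divided by the mean cross-sectional velocity. A tiny sketch on a synthetic lateral velocity profile (the profile values are assumptions for illustration):

```python
def preference_ratio(cross_section_velocities, fish_index):
    """Velocity at the fish's lateral position divided by the mean
    cross-sectional velocity. A ratio < 1 means the fish holds a
    slower-than-average position, e.g. a low-velocity boundary zone."""
    mean_v = sum(cross_section_velocities) / len(cross_section_velocities)
    return cross_section_velocities[fish_index] / mean_v

# Example: fish holding near the slower side wall of the culvert.
profile = [0.4, 0.6, 0.8, 1.0, 1.2]  # m/s across the culvert width
ratio = preference_ratio(profile, 0)
```
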

  5. Automatic vehicle detection based on automatic histogram-based fuzzy C-means algorithm and perceptual grouping using very high-resolution aerial imagery and road vector data

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Gökaşar, Ilgın

    2016-01-01

    This study presents an approach for the automatic detection of vehicles using very high-resolution images and road vector data. Initially, road vector data and aerial images are integrated to extract road regions. Then, the extracted road/street region is clustered using an automatic histogram-based fuzzy C-means algorithm, and edge pixels are detected using the Canny edge detector. In order to automatically detect vehicles, we developed a local perceptual grouping approach based on fusion of edge detection and clustering outputs. To provide the locality, an ellipse is generated using characteristics of the candidate clusters individually. Then, the ratio of edge pixels to nonedge pixels in the corresponding ellipse is computed to distinguish the vehicles. Finally, a point-merging rule is conducted to merge the points that satisfy a predefined threshold and are supposed to denote the same vehicles. The experimental validation of the proposed method was carried out on six very high-resolution aerial images that illustrate two highways, two shadowed roads, a crowded narrow street, and a street in a dense urban area with crowded parked vehicles. The evaluation of the results shows that our proposed method achieved an overall correctness of 86% and a completeness of 83%.
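
    The discriminating step described above, counting edge versus non-edge pixels inside an ellipse around a candidate cluster, can be sketched as follows. The ellipse parameters and the synthetic edge mask are illustrative; in the paper the ellipse is derived from the candidate cluster's own geometry:

```python
def inside_ellipse(x, y, cx, cy, a, b):
    """True if pixel (x, y) lies in the axis-aligned ellipse (cx, cy, a, b)."""
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0

def edge_ratio(edge_mask, cx, cy, a, b):
    """Ratio of edge pixels to non-edge pixels inside the ellipse."""
    edges = non_edges = 0
    for y, row in enumerate(edge_mask):
        for x, is_edge in enumerate(row):
            if inside_ellipse(x, y, cx, cy, a, b):
                if is_edge:
                    edges += 1
                else:
                    non_edges += 1
    return edges / max(non_edges, 1)

def is_vehicle(edge_mask, cx, cy, a, b, threshold=0.3):
    """Vehicles are edge-rich relative to the surrounding road surface;
    the threshold here is an assumed value, not the paper's."""
    return edge_ratio(edge_mask, cx, cy, a, b) >= threshold
```
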

  6. Parallax visualization of UAV FMV and WAMI imagery

    NASA Astrophysics Data System (ADS)

    Mayhew, Christopher A.; Mayhew, Craig M.

    2012-06-01

    The US Military is increasingly relying on the use of unmanned aerial vehicles (UAV) for intelligence, surveillance, and reconnaissance (ISR) missions. Complex arrays of Full-Motion Video (FMV), Wide-Area Motion Imaging (WAMI) and Wide Area Airborne Surveillance (WAAS) technologies are being deployed on UAV platforms for ISR applications. Nevertheless, these systems are only as effective as the Image Analyst's (IA) ability to extract relevant information from the data. A variety of tools assist in the analysis of imagery captured with UAV sensors. However, until now, none has been developed to extract and visualize parallax three-dimensional information. Parallax Visualization (PV) is a technique that produces a near-three-dimensional visual response to standard UAV imagery. The overlapping nature of UAV imagery lends itself to parallax visualization. Parallax differences can be obtained by selecting frames that differ in time and, therefore, points of view of the area of interest. PV is accomplished using software tools to critically align a common point in two views while alternately displaying both views in a square-wave manner. Humans produce an autostereoscopic response to critically aligned parallax information presented alternately on a standard unaided display at frequencies between 3 and 6 Hz. This simple technique allows for the exploitation of spatial and temporal differences in image sequences to enhance depth, size, and spatial relationships of objects in areas of interest. PV of UAV imagery has been successfully performed in several US Military exercises over the last two years.

  7. Characterising Upland Swamps Using Object-Based Classification Methods and Hyper-Spatial Resolution Imagery Derived from an Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Lechner, A. M.; Fletcher, A.; Johansen, K.; Erskine, P.

    2012-07-01

    Subsidence resulting from underground coal mining can alter the structure of overlying rock formations, changing hydrological conditions and potentially affecting ecological communities found on the surface. Of particular concern are impacts to endangered and/or protected swamp communities and swamp species sensitive to changes in hydrologic conditions. This paper describes a monitoring approach that uses UAVs with modified digital cameras and object-based image analysis methods to characterise swamp landcover on the Newnes plateau in the Blue Mountains near Sydney, Australia. The characterisation of swamp spatial distribution is key to identifying long term changes in swamp condition. In this paper we describe i) the characteristics of the UAV and the sensor, ii) the pre-processing of the remote sensing data with sub-decimeter pixel size to derive visible and near infrared multispectral imagery and a digital surface model (DSM), and iii) the application of object-based image analysis in eCognition using the multi-spectral data and DSM to map swamp extent. Finally, we conclude with a discussion of the potential application of remote sensing data derived from UAVs to conduct environmental monitoring.

  8. Repeat, Low Altitude Measurements of Vegetation Status and Biomass Using Manned Aerial and UAS Imagery in a Piñon-Juniper Woodland

    NASA Astrophysics Data System (ADS)

    Krofcheck, D. J.; Lippitt, C.; Loerch, A.; Litvak, M. E.

    2015-12-01

    Measuring the above-ground biomass of vegetation is a critical component of any ecological monitoring campaign. Traditionally, the biomass of vegetation has been measured with allometry-based approaches. However, these are time-consuming, labor-intensive, and extremely expensive to conduct over large scales, and consequently cost-prohibitive at the landscape scale. Furthermore, in semi-arid ecosystems characterized by vegetation with inconsistent growth morphologies (e.g., piñon-juniper woodlands), even conventional ground-based allometric approaches are often challenging to execute consistently across individuals and through time, increasing the difficulty of the required measurements and consequently reducing the accuracy of the resulting products. To constrain the uncertainty associated with these campaigns, and to expand the extent of our measurement capability, we made repeat measurements of vegetation biomass in a semi-arid piñon-juniper woodland using structure-from-motion (SfM) techniques. We used high-spatial-resolution overlapping aerial images and high-accuracy ground control points, collected from both manned aircraft and multi-rotor UAS platforms, to generate digital surface models (DSMs) for our experimental region. We extracted high-precision canopy volumes from the DSMs and compared these to the vegetation allometric data to generate high-precision canopy volume models. We used these models to predict the drivers of allometric equations for Pinus edulis and Juniperus monosperma (canopy height, diameter at breast height, and root collar diameter). Using this approach, we successfully accounted for the carbon stocks in standing live and standing dead vegetation across a 9 ha region, which contained 12.6 Mg/ha of standing dead biomass, with good agreement to our field plots. Here we present the initial results from an object-oriented workflow which aims to automate the biomass estimation process of tree crown delineation and volume calculation, and partition
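
A minimal sketch of the canopy-volume step, assuming a DSM, a ground surface (DTM), and a known pixel size are available; the actual workflow also involves crown delineation, which is omitted here:

```python
import numpy as np

def canopy_volume(dsm, dtm, pixel_size):
    """Integrate canopy height (DSM minus ground) over pixel area -> volume.

    dsm, dtm: 2D elevation grids in metres; pixel_size: grid spacing in metres.
    Negative residuals are clipped to zero (treated as bare ground)."""
    height = np.clip(dsm - dtm, 0.0, None)
    return float(height.sum() * pixel_size ** 2)
```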

  9. Aerial Explorers

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Greg; Ippolito, Corey

    2005-01-01

    This paper presents recent results from a mission architecture study of planetary aerial explorers. In this study, several mission scenarios were developed in simulation and evaluated on their success in meeting mission goals. This aerial explorer mission architecture study is unique in comparison with previous Mars airplane research activities. The study examines how aerial vehicles can find and gain access to otherwise inaccessible terrain features of interest. The aerial explorer also engages in a high level of (indirect) surface interaction, despite not typically being able to take off and land or to engage in multiple flights/sorties. To achieve this goal, a new mission paradigm is proposed: aerial explorers should be considered as an additional element in the overall Entry, Descent, and Landing System (EDLS) process. Further, aerial vehicles should be considered primarily as carrier/utility platforms whose purpose is to deliver air-deployed sensors and robotic devices, or symbiotes, to those high-value terrain features of interest.

  10. Automated imagery orthorectification pilot

    NASA Astrophysics Data System (ADS)

    Slonecker, E. Terrence; Johnson, Brad; McMahon, Joe

    2009-10-01

    Automated orthorectification of raw image products is now possible, based on the comprehensive metadata collected by Global Positioning System and Inertial Measurement Unit technology aboard aircraft and satellite digital imaging systems, and on the emerging pattern-matching and automated image-to-image and control-point selection capabilities in many advanced image processing systems. Automated orthorectification of standard aerial photography is also possible if a camera calibration report and sufficient metadata are available. Orthorectification of historical imagery, for which only limited metadata was available, was also attempted and found to require some user input, creating a semi-automated process that still has significant potential to reduce processing time and expense for the conversion of archival historical imagery into geospatially enabled, digital formats, facilitating preservation and utilization of a vast archive of historical imagery. Over 90 percent of the frames of historical aerial photos used in this experiment were successfully orthorectified to the accuracy of the USGS 100K base map series utilized for the geospatial reference of the archive. The accuracy standard for the 100K series maps is approximately 167 feet (51 meters). The main problems associated with orthorectification failure were cloud cover, shadows, and historical landscape change, which confused automated image-to-image matching processes. Further research is recommended to optimize automated orthorectification methods and enable broad operational use, especially as related to historical imagery archives.
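
Automated image-to-image control-point selection of the kind described is often built on normalized cross-correlation (NCC) template matching. The following toy sketch (exhaustive search in pure numpy, not the production systems referred to above) shows the core computation:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def match_control_point(image, template):
    """Exhaustive template match; returns ((row, col), score) of the best NCC fit."""
    th, tw = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc, best
```

Production systems replace the exhaustive search with pyramid or FFT-based matching, but the matching criterion is the same.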

  11. "We're from the Generation that was Raised on Television": A Qualitative Exploration of Media Imagery in Elementary Preservice Teachers' Video Production

    ERIC Educational Resources Information Center

    Hayes, Michael T.; Petrie, Gina Mikel

    2006-01-01

    In this article, the authors present their analysis of preservice teachers' video production. Twenty-eight students in the first author's Social Foundations of the Elementary Curriculum course produced a 5- to 10-minute video as the major assignment for the class. Interviews were conducted with six of the seven video production groups and the videos…

  12. ASSESSING THE ACCURACY OF SATELLITE-DERIVED LAND COVER CLASSIFICATION USING HISTORICAL AERIAL PHOTOGRAPHY, DIGITAL ORTHOPHOTO QUADRANGLES, AND AIRBORNE VIDEO DATA

    EPA Science Inventory

    As the rapidly growing archives of satellite remote sensing imagery now span decades' worth of data, there is increasing interest in the study of long-term regional land cover change across multiple image dates. In most cases, however, temporally coincident ground sampled data ar...

  15. Satellite Imagery Assisted Road-Based Visual Navigation System

    NASA Astrophysics Data System (ADS)

    Volkova, A.; Gibbens, P. W.

    2016-06-01

    There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth* imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates the features with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes of the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery, and another provider can be used.
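
The correlation of on-board features with a pre-built database can be illustrated with a Lowe-style nearest-neighbour ratio test on descriptor arrays. This is a generic stand-in, not the authors' algorithm; the descriptor vectors and the ratio threshold are assumed inputs:

```python
import numpy as np

def ratio_match(desc_query, desc_db, ratio=0.75):
    """Match query descriptors to database descriptors (needs >= 2 DB entries).

    A match (i, j) is kept only when the nearest database descriptor is
    clearly closer than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, q in enumerate(desc_query):
        d = np.linalg.norm(desc_db - q, axis=1)
        j1, j2 = np.argsort(d)[:2]
        if d[j1] < ratio * d[j2]:
            matches.append((i, int(j1)))
    return matches
```

Matched pairs would then feed a pose solver to localise the vehicle against the georeferenced database.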

  16. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems, unmanned aerial vehicles (UAVs), and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress on the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produces video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
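
As a hedged sketch of the tagging idea, the following uses plain frame differencing as a cheap stand-in for the motion-detection and optical-flow measures mentioned above; the "static" and "fast-motion" thresholds are hypothetical, not values from the paper:

```python
import numpy as np

def motion_score(prev, curr):
    """Mean absolute frame difference as a crude per-pair motion proxy."""
    return float(np.abs(curr.astype(float) - prev.astype(float)).mean())

def tag_segment(frames, low=1.0, high=40.0):
    """Return a meta-data tag for a segment (needs >= 2 frames):
    'static', 'fast-motion', or 'usable'."""
    scores = [motion_score(a, b) for a, b in zip(frames, frames[1:])]
    m = float(np.mean(scores))
    if m < low:
        return "static"
    if m > high:
        return "fast-motion"
    return "usable"
```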

  17. Small Moving Vehicle Detection in a Satellite Video of an Urban Area

    PubMed Central

    Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng

    2016-01-01

    Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately-sized objects based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on sequences from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously. PMID:27657091
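
The first stage (accumulating foreground motion into a scene heat map) can be sketched in numpy as below; the paper's trajectory accumulation and saliency-based background model are not reproduced, and the difference threshold is an assumed placeholder:

```python
import numpy as np

def motion_heat_map(frames, diff_thresh=15):
    """Accumulate per-pixel foreground hits across a frame sequence."""
    heat = np.zeros(frames[0].shape, dtype=float)
    for prev, curr in zip(frames, frames[1:]):
        heat += np.abs(curr.astype(float) - prev.astype(float)) > diff_thresh
    return heat

def hot_regions(heat, min_hits=2):
    """Mask of pixels that registered motion at least min_hits times."""
    return heat >= min_hits
```

In the paper's pipeline, detection would then run only inside these hot regions, which keeps the search away from static background.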

  18. Unmanned aerial vehicles for rangeland mapping and monitoring: a comparison of two systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial photography from unmanned aerial vehicles (UAVs) bridges the gap between ground-based observations and remotely sensed imagery from aerial and satellite platforms. UAVs can be deployed quickly and repeatedly, are less costly and safer than piloted aircraft, and can obtain very high-resolution...

  19. "A" Is for Aerial Maps and Art

    ERIC Educational Resources Information Center

    Todd, Reese H.; Delahunty, Tina

    2007-01-01

    The technology of satellite imagery and remote sensing adds a new dimension to teaching and learning about maps with elementary school children. Just a click of the mouse brings into view some images of the world that could only be imagined a generation ago. Close-up aerial pictures of the school and neighborhood quickly catch the interest of…

  20. Using Airborne and Satellite Imagery to Distinguish and Map Black Mangrove

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of studies evaluating color-infrared (CIR) aerial photography, CIR aerial true digital imagery, and high resolution QuickBird multispectral satellite imagery for distinguishing and mapping black mangrove [Avicennia germinans (L.) L.] populations along the lower Texas g...

  1. Use of Kendall's coefficient of concordance to assess agreement among observers of very high resolution imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Ground-based vegetation monitoring methods are expensive, time-consuming, and limited in sample-size. Aerial imagery is appealing to managers because of the reduced time and expense and the increase in sample size. One challenge of aerial imagery is detecting differences among observers of the sam...

  2. Incorporation of texture, intensity, hue, and saturation for rangeland monitoring with unmanned aircraft imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial photography acquired with unmanned aerial vehicles (UAVs) has great potential for incorporation into rangeland health monitoring protocols, and object-based image analysis is well suited for this hyperspatial imagery. A major drawback, however, is the low spectral resolution of the imagery, b...

  3. Evaluate ERTS imagery for mapping and detection of changes of snowcover on land and on glaciers

    NASA Technical Reports Server (NTRS)

    Meier, M. F. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The area of snow cover on land was determined from ERTS-1 imagery. Snow cover in specific drainage basins was measured with the Stanford Research Institute console by electronically superimposing basin outlines on imagery, with video density slicing to measure areas. Snow-covered area and snowline altitudes were also determined by enlarging ERTS-1 imagery to 1:250,000 and using a transparent map overlay. Under very favorable conditions, snowline altitude was determined to an accuracy of about 60 m. Ability to map snow cover or to determine snowline altitude depends primarily on cloud cover and vegetation and secondarily on slope, terrain roughness, sun angle, radiometric fidelity, and amount of spectral information available. Glacier accumulation area ratios were determined from ERTS-1 imagery. Also, subtle flow structures, undetected on aerial photographs, were visible. Surging glaciers were identified, and the changes resulting from the surge of a large glacier were measured, as were changes in tidal glacier termini.
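
Density slicing to measure snow-covered area amounts to thresholding a brightness band and summing pixel areas. A minimal numpy sketch, with the threshold and per-pixel area as assumed inputs:

```python
import numpy as np

def snow_area(band, threshold, pixel_area_km2):
    """Density-slice a single band: pixels at or above threshold count as snow.

    Returns the snow-covered area in the same units as pixel_area_km2."""
    return float((band >= threshold).sum() * pixel_area_km2)
```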

  4. Aerial Photography

    NASA Technical Reports Server (NTRS)

    1985-01-01

    John Hill, a pilot and commercial aerial photographer, needed an information base. He consulted NERAC and requested a search of the latest developments in camera optics. NERAC provided information; Hill contacted the manufacturers of camera equipment and reduced his photographic costs significantly.

  5. High-resolution spatial patterns of Soil Organic Carbon content derived from low-altitude aerial multi-band imagery on the Broadbalk Wheat Experiment at Rothamsted, UK

    NASA Astrophysics Data System (ADS)

    Aldana Jague, Emilien; Goulding, Keith; Heckrath, Goswin; Macdonald, Andy; Poulton, Paul; Stevens, Antoine; Van Wesemael, Bas; Van Oost, Kristof

    2014-05-01

    Soil organic C (SOC) contents in arable landscapes change as a function of management, climate and topography (Johnston et al, 2009). Traditional methods to measure soil C stocks are labour intensive, time consuming and expensive. Consequently, there is a need for developing low-cost methods for monitoring SOC contents in agricultural soils. Remote sensing methods based on multi-spectral images may help map SOC variation in surface soils. Recently, the costs of both Unmanned Aerial Vehicles (UAVs) and multi-spectral cameras have dropped dramatically, opening up the possibility for more widespread use of these tools for SOC mapping. Long-term field experiments with distinct SOC contents in adjacent plots provide a very useful resource for systematically testing remote sensing approaches for measuring SOC. This study focusses on the Broadbalk Wheat Experiment at Rothamsted (UK). The Broadbalk experiment started in 1843 and is widely acknowledged to be the oldest continuing agronomic field experiment in the world. The initial aim of the experiment was to test the effects of different organic manures and inorganic fertilizers on the yield of winter wheat. The experiment initially contained 18 strips, each about 320 m long and 6 m wide, separated by paths 1.5-2.5 m wide. The strips were subsequently divided into ten sections (>180 plots) to test the effects of other factors (crop rotation, herbicides, pesticides etc.). The different amounts and combinations of mineral fertilisers (N, P, K, Na & Mg) and Farmyard Manure (FYM) applied to these plots for over 160 years has resulted in very different SOC contents in adjacent plots, ranging between 0.8% and 3.5%. In addition to large inter-plot variability in SOC there is evidence of within-plot trends related to the use of discard areas between plots and movement of soil as a result of ploughing. The objectives of this study are (i) to test whether low-altitude multi-band imagery can be used to accurately predict spatial

  6. Satellite imagery and discourses of transparency

    NASA Astrophysics Data System (ADS)

    Harris, Chad Vincent

    In the last decade there has been a dramatic increase in satellite imagery available in the commercial marketplace and to the public in general. Satellite imagery systems and imagery archives, a knowledge domain formerly monopolized by nation states, have become available to the public, both from declassified intelligence data and from fully integrated commercial vendors who create and market imagery data. Some of these firms have recently launched their own satellite imagery systems and created rather large imagery "architectures" that threaten to rival military reconnaissance systems. The increasing resolution of the imagery and the growing expertise of software and imagery-interpretation developers have engendered a public discourse about the potential for increased transparency in national and global affairs. However, transparency is an attribute of satellite remote sensing and imagery production that is taken for granted in the debate surrounding the growing public availability of high-resolution satellite imagery. This paper examines remote sensing and military photo reconnaissance imagery technology and the production of satellite imagery in the interests of contemplating the complex connections between imagery satellites, historically situated discourses about democratic and global transparency, and the formation and maintenance of nation state systems. Broader historical connections will also be explored between satellite imagery and the history of the use of cartographic and geospatial technologies in the formation and administrative control of nation states and in the discursive formulation of national identity. Attention will be on the technology itself as a powerful social actor through its connection to both national sovereignty and transcendent notions of scientific objectivity. The issues of the paper will be explored through a close look at aerial photography and satellite imagery both as communicative tools of power and as culturally relevant

  7. The remote characterization of vegetation using Unmanned Aerial Vehicle photography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Unmanned Aerial Vehicles (UAVs) can fly in place of piloted aircraft to gather remote sensing information on vegetation characteristics. The type of sensors flown depends on the instrument payload capacity available, so that, depending on the specific UAV, it is possible to obtain video, aerial phot...

  8. In-flow evolution of lahar deposits from video-imagery with implications for post-event deposit interpretation, Mount Semeru, Indonesia

    NASA Astrophysics Data System (ADS)

    Starheim, Colette C. A.; Gomez, Christopher; Davies, Tim; Lavigne, Franck; Wassmer, Patrick

    2013-04-01

    The hazardous and unpredictable nature of lahars makes them challenging to study, yet the in-flow processes characterizing these events are important to understand. As a result, much of the previous research on lahar sedimentation and flow processes has been derived from experimental flows or stratigraphic surveys of post-event deposits. By comparison, little is known on the time-dependent sediment and flow dynamics of lahars in natural environments. Using video-footage of seven lahars on the flanks of Semeru Volcano (East Java, Indonesia), the present study offers new insights on the in-flow evolution of sediment in natural lahars. Video analysis revealed several distinctive patterns of sediment entrainment and deposition that varied with time-related fluctuations in flow. These patterns were used to generate a conceptual framework describing possible processes of formation for subsurface architectural features identified in an earlier lateral survey of lahar deposits on Semeru Volcano (Gomez and Lavigne, 2010a). The formation of lateral discontinuities was related to the partial erosion of transitional bank deposits followed by fresh deposition along the erosional contact. This pattern was observed over the course of several lahar events and within individual flows. Observations similarly offer potential explanations for the formation of lenticular features. Depending on flow characteristics, these features appeared to form by preferential erosion or deposition around large stationary blocks, and by deposition along channel banks during episodes of channel migration or channel constriction. Finally, conditions conducive to the deposition of fine laminated beds were observed during periods of attenuating and surging flow. 
These results emphasize the difficulties associated with identifying process-structure relationships solely from post-event deposit interpretation and illustrate that an improved understanding of the time-dependent sediment dynamics in lahars may

  9. The availability of local aerial photography in southern California. [for solution of urban planning problems

    NASA Technical Reports Server (NTRS)

    Allen, W., III; Sledge, B.; Paul, C. K.; Landini, A. J.

    1974-01-01

    Some of the major photography and photogrammetric suppliers and users located in Southern California are listed. Recent trends in aerial photographic coverage of the Los Angeles basin area are also noted, as well as the uses of that imagery.

  10. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use a simple soft threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods [Yang10, newsam11], yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide area high-resolution aerial imagery.
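
The feature computation described (projection onto learned basis vectors followed by a soft-threshold activation) can be written in a few lines of numpy; the basis and the threshold alpha are assumed to come from the unsupervised learning step, which is not shown:

```python
import numpy as np

def sparse_features(x, basis, alpha=0.1):
    """Project features onto basis vectors, then apply soft-threshold activation.

    x: (n, d) low-level feature vectors; basis: (k, d) learned basis vectors.
    Returns (n, k) sparse codes: small projections are zeroed, large ones shrunk."""
    proj = x @ basis.T
    return np.sign(proj) * np.maximum(np.abs(proj) - alpha, 0.0)
```

The resulting sparse codes would then feed a linear SVM, as in the abstract.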

  11. Observations of debris flows at Chalk Cliffs, Colorado, USA: Part 1, in-situ measurements of flow dynamics, tracer particle movement and video imagery from the summer of 2009

    USGS Publications Warehouse

    McCoy, Scott W.; Coe, Jeffrey A.; Kean, Jason W.; Tucker, Greg E.; Staley, Dennis M.; Wasklewicz, Thad A.

    2011-01-01

    Debris flows initiated by surface-water runoff during short-duration, moderate- to high-intensity rainfall are common in steep, rocky, and sparsely vegetated terrain. Yet large uncertainties remain about the potential for a flow to grow through entrainment of loose debris, which makes formulation of accurate mechanical models of debris-flow routing difficult. Using a combination of in situ measurements of debris-flow dynamics, video imagery, tracer rocks implanted with passive integrated transponders (PITs), and pre- and post-flow 2-cm-resolution digital terrain models (terrain data presented in a companion paper by STALEY et alii, 2011), we investigated the entrainment and transport response of debris flows at Chalk Cliffs, CO, USA. Four monitored events during the summer of 2009 all initiated from surface-water runoff, generally less than an hour after the first measurable rain. Despite reach-scale morphology that remained relatively constant, the four flow events displayed a range of responses, from long-runout flows that entrained significant amounts of channel sediment and dammed the main-stem river, to smaller, short-runout flows that were primarily depositional in the upper basin. Tracer-rock travel-distance distributions for these events were bimodal; particles either remained immobile or they travelled the entire length of the catchment. The long-runout, large-entrainment flow differed from the other smaller flows by the following controlling factors: peak 10-minute rain intensity; duration of significant flow in the channel; and to a lesser extent, peak surge depth and velocity. Our growing database of natural debris-flow events can be used to develop linkages between observed debris-flow transport and entrainment responses and the controlling rainstorm characteristics and flow properties.

  12. Oriental - Automatic Geo-Referencing and Ortho-Rectification of Archaeological Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Karel, W.; Doneus, M.; Verhoeven, G.; Briese, C.; Ressl, C.; Pfeifer, N.

    2013-07-01

    This paper presents the newly developed software OrientAL, which aims at providing a fully automated processing chain from aerial photographs to orthophoto maps. It considers the special requirements of archaeological aerial images, including oblique imagery, single images, poor approximate georeferencing, and historic photographs. As a first step, the automatic relative orientation of images from an archaeological image archive is presented.

  13. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data, but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. This on-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.
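
Full video mosaicking estimates a homography per frame; as a simplified sketch of the registration-and-compositing idea (and not MDA's tools), the following assumes a pure-translation model estimated from matched point pairs:

```python
import numpy as np

def estimate_translation(pts_a, pts_b):
    """Least-squares translation aligning points pts_b onto pts_a.

    pts_a, pts_b: (n, 2) arrays of corresponding (row, col) coordinates."""
    return (pts_a - pts_b).mean(axis=0)

def paste(mosaic, frame, offset):
    """Composite a frame into the mosaic canvas at the estimated offset."""
    r, c = int(round(offset[0])), int(round(offset[1]))
    mosaic[r:r + frame.shape[0], c:c + frame.shape[1]] = frame
    return mosaic
```

A real mosaicker would fit a full 3x3 homography with RANSAC and blend overlapping frames instead of overwriting them.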

  14. Video surveillance at night

    NASA Astrophysics Data System (ADS)

    Stevens, Mark R.; Pollak, Joshua B.; Ralph, Scott; Snorrason, Magnus S.

    2005-05-01

    The interpretation of video imagery is the quintessential goal of computer vision. The ability to group moving pixels into regions and then associate those regions with semantic labels has long been studied by the vision community. In urban nighttime scenarios, the difficulty of this task is simultaneously alleviated and compounded. At night there is typically less movement in the scene, which makes the detection of relevant motion easier. However, the poor quality of the imagery makes it more difficult to interpret actions from these motions. In this paper, we present a system capable of detecting moving objects in outdoor nighttime video. We focus on visible-and-near-infrared (VNIR) cameras, since they offer low cost and very high resolution compared to alternatives such as thermal infrared. We present empirical results demonstrating system performance on a parking lot surveillance scenario. We also compare our results to a thermal infrared sensor viewing the same scene.

  15. Looking for an old aerial photograph

    USGS Publications Warehouse

    ,

    1997-01-01

    Attempts to photograph the surface of the Earth date from the 1800s, when photographers attached cameras to balloons, kites, and even pigeons. Today, aerial photographs and satellite images are commonplace. The rate of acquiring aerial photographs and satellite images has increased rapidly in recent years. Views of the Earth obtained from aircraft or satellites have become valuable tools to Government resource planners and managers, land-use experts, environmentalists, engineers, scientists, and a wide variety of other users. Many people want historical aerial photographs for business or personal reasons. They may want to locate the boundaries of an old farm or a piece of family property. Or they may want a photograph as a record of changes in their neighborhood, or as a gift. The U.S. Geological Survey (USGS) maintains the Earth Science Information Centers (ESICs) to sell aerial photographs, remotely sensed images from satellites, a wide array of digital geographic and cartographic data, as well as the Bureau's well-known maps. Declassified photographs from early spy satellites were recently added to the ESIC offerings of historical images. Using the Aerial Photography Summary Record System database, ESIC researchers can help customers find imagery in the collections of other Federal agencies and, in some cases, those of private companies that specialize in esoteric products.

  16. Aerial radiation surveys

    SciTech Connect

    Jobst, J.

    1980-01-01

    A recent aerial radiation survey of the surroundings of the Vitro mill in Salt Lake City shows that uranium mill tailings have been removed to many locations outside their original boundary. To date, 52 remote sites have been discovered within a 100 square kilometer aerial survey perimeter surrounding the mill; 9 of these were discovered with the recent aerial survey map. Five additional sites, also discovered by aerial survey, contained uranium ore, milling equipment, or radioactive slag. Because of the success of this survey, plans are being made to extend the aerial survey program to other parts of the Salt Lake valley where diversions of Vitro tailings are also known to exist.

  17. Orthorectification, mosaicking, and analysis of sub-decimeter resolution UAV imagery for rangeland monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Unmanned aerial vehicles (UAVs) offer an attractive platform for acquiring imagery for rangeland monitoring. UAVs can be deployed quickly and repeatedly, and they can obtain sub-decimeter resolution imagery at lower image acquisition costs than with piloted aircraft. Low flying heights result in ima...

  18. Dreams and Mediation in Music Video.

    ERIC Educational Resources Information Center

    Burns, Gary

    The most extensive use of dream imagery in popular culture occurs in the visual arts, and in the past five years it has become evident that music video (a semi-narrative hybrid of film and television) is the most dreamlike media product of all. The rampant depiction and implication of dreams and media fantasies in music video are often strongly…

  19. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers of all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance, removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The 3D generated model provides warfighters additional situational awareness, tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data with Google Earth™ providing the imagery as well as real-world data, including data captured from an unmanned aerial vehicle flight.

  20. Structural geologic interpretations from radar imagery

    USGS Publications Warehouse

    Reeves, Robert G.

    1969-01-01

    Certain structural geologic features may be more readily recognized on sidelooking airborne radar (SLAR) images than on conventional aerial photographs, other remote sensor imagery, or by ground observations. SLAR systems look obliquely to one or both sides and their images resemble aerial photographs taken at low sun angle with the sun directly behind the camera. They differ from air photos in geometry, resolution, and information content. Radar operates at much lower frequencies than the human eye, camera, or infrared sensors, and thus "sees" differently. The lower frequency enables it to penetrate most clouds and some precipitation, haze, dust, and some vegetation. Radar provides its own illumination, which can be closely controlled in intensity and frequency. It is narrow band, or essentially monochromatic. Low relief and subdued features are accentuated when viewed from the proper direction. Runs over the same area in significantly different directions (more than 45° from each other), show that images taken in one direction may emphasize features that are not emphasized on those taken in the other direction; optimum direction is determined by those features which need to be emphasized for study purposes. Lineaments interpreted as faults stand out on radar imagery of central and western Nevada; folded sedimentary rocks cut by faults can be clearly seen on radar imagery of northern Alabama. In these areas, certain structural and stratigraphic features are more pronounced on radar images than on conventional photographs; thus radar imagery materially aids structural interpretation.

  1. Integrating multisource imagery and GIS analysis for mapping Bermuda's benthic habitats

    SciTech Connect

    Vierros, M.K.

    1997-06-01

    Bermuda is a group of isolated oceanic islands situated in the northwest Atlantic Ocean and surrounded by the Sargasso Sea. Bermuda possesses the northernmost coral reefs and mangroves in the Atlantic Ocean, and because of its high population density, both the terrestrial and marine environments are under intense human pressure. Although a long record of scientific research exists, this study is the first attempt to comprehensively map the area's benthic habitats, despite the need for such a map for resource assessment and management purposes. Multi-source and multi-date imagery were used for producing the habitat map due to lack of a complete up-to-date image. Classifications were performed with SPOT data, and the results verified from recent aerial photography and current aerial video, along with extensive ground truthing. Stratification of the image into regions prior to classification reduced the confusing effects of varying water depth. Classification accuracy in shallow areas was increased by derivation of a texture pseudo-channel, while bathymetry was used as a classification tool in deeper areas, where local patterns of zonation were well known. Because of seasonal variation in extent of seagrasses, a classification scheme based on density could not be used. Instead, a set of classes based on the seagrass area's exposure to the open ocean were developed. The resulting habitat map is currently being assessed for accuracy with promising preliminary results, indicating its usefulness as a basis for future resource assessment studies.

  2. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  3. Aerial Image Systems

    NASA Astrophysics Data System (ADS)

    Clapp, Robert E.

    1987-09-01

    Aerial images produce the best stereoscopic images of the viewed world. Despite the fact that every optic in existence produces an aerial image, few persons are aware of their existence and possible uses. Constant reference to the eye and other optical systems has produced a psychosis of design that only considers "focal planes" in the design and analysis of optical systems. All objects in the field of view of the optical device are imaged by the device as an aerial image. Use of aerial images in vision and visual display systems can provide a true stereoscopic representation of the viewed world. This paper discusses aerial image systems - their applications and designs and presents designs and design concepts that utilize aerial images to obtain superior visual displays, particularly with application to visual simulation.

  4. Combining Human Computing and Machine Learning to Make Sense of Big (Aerial) Data for Disaster Response.

    PubMed

    Ofli, Ferda; Meier, Patrick; Imran, Muhammad; Castillo, Carlos; Tuia, Devis; Rey, Nicolas; Briant, Julien; Millet, Pauline; Reinhard, Friedrich; Parkan, Matthew; Joost, Stéphane

    2016-03-01

    Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The

  6. Aerial imagery and structure-from-motion based DEM reconstruction of region-sized areas (Sierra Arana, Spain and Namur Province, Belgium) using a high-altitude drifting balloon platform.

    NASA Astrophysics Data System (ADS)

    Burlet, Christian; María Mateos, Rosa; Azañón, Jose Miguel; Perez, José Vicente; Vanbrabant, Yves

    2015-04-01

    different elevations. A 1 m/pixel ground resolution set covering an area of about 200 km² and mapping the eastern part of the Sierra Arana (Andalucía, Spain) includes a karstic field directly to the south-east of the ridge and the cliffs of the "Riscos del Moro". A 4 m/pixel ground resolution set covering an area of about 900 km² includes the landslide-active Diezma region (Andalucía, Spain) and the water reserve of the Francisco Abellan lake. The third set has a 3 m/pixel ground resolution, covers about 100 km², and maps the Famennian rock formations, known as part of "La Calestienne", outcropping near Beauraing and Rochefort in the Namur Province (Belgium). The DEMs and orthophotos have been referenced using ground control points from satellite imagery (Spain, Belgium) and DGPS (Belgium). The quality of the produced DEMs was then evaluated by comparing the level and accuracy of details and surface artefacts between available topographic data (SRTM, 30 m/pixel; topographic maps) and the three Stratochip sets. This evaluation showed that the models were in good correlation with existing data and can readily be used in geomorphology, structural, and natural hazard studies.

  7. 11. Photocopy of aerial photograph (original aerial located in the ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Photocopy of aerial photograph (original aerial located in the U.S. Forest Service, Toiyabe National Forest, Carson District Office). AERIAL VIEW OF THE GENOA PEAK ROAD, SPUR. - Genoa Peak Road, Spur, Glenbrook, Douglas County, NV

  8. Aerial photographic reproductions

    USGS Publications Warehouse

    ,

    1975-01-01

    The National Cartographic Information Center of the U.S. Geological Survey maintains records of aerial photographic coverage of the United States and its Territories, based on reports from other Federal agencies as well as State governmental agencies and commercial companies. From these records, the Center furnishes data to prospective purchasers on available photography and the agency holding the aerial film.

  9. Preliminary assessment of aerial photography techniques for canvasback population analysis

    USGS Publications Warehouse

    Munro, R.E.; Trauger, D.L.

    1976-01-01

    Recent intensive research on the canvasback has focused attention on the need for more precise estimates of population parameters. During the 1972-75 period, various types of aerial photographing equipment were evaluated to determine the problems and potentials for employing these techniques in appraisals of canvasback populations. The equipment and procedures available for automated analysis of aerial photographic imagery were also investigated. Serious technical problems remain to be resolved, but some promising results were obtained. Final conclusions about the feasibility of operational implementation await a more rigorous analysis of the data collected.

  10. Review of the SAFARI 2000 RC-10 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Myers, Jeff; Shelton, Gary; Annegarn, Harrold; Peterson, David L. (Technical Monitor)

    2001-01-01

    This presentation will review the aerial photography collected by the NASA ER-2 aircraft during the SAFARI (Southern African Regional Science Initiative) year 2000 campaign. It will include specifications on the camera and film, and will show examples of the imagery. It will also detail the extent of coverage, and the procedures to obtain film products from the South African government. Also included will be some sample applications of aerial photography for various environmental applications, and its use in augmenting other SAFARI data sets.

  11. Dashboard Videos

    ERIC Educational Resources Information Center

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  12. Interpretation of high-resolution imagery for detecting vegetation cover composition change after fuels reduction treatments in woodlands

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The use of very high resolution (VHR; ground sampling distances < ~5cm) aerial imagery to estimate site vegetation cover and to detect changes from management has been well documented. However, as the purpose of monitoring is to document change over time, the ability to detect changes from imagery a...

  13. Efficient pedestrian detection from aerial vehicles with object proposals and deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2016-05-01

    As Unmanned Aerial Systems grow in numbers, pedestrian detection from aerial platforms is becoming a topic of increasing importance. By providing greater contextual information and a reduced potential for occlusion, the aerial vantage point provided by Unmanned Aerial Systems is highly advantageous for many surveillance applications, such as target detection, tracking, and action recognition. However, due to the greater distance between the camera and scene, targets of interest in aerial imagery are generally smaller and have less detail. Deep Convolutional Neural Networks (CNNs) have demonstrated excellent object classification performance and in this paper we adapt them to the problem of pedestrian detection from aerial platforms. We train a CNN with five layers consisting of three convolution-pooling layers and two fully connected layers. We also address the computational inefficiencies of the sliding window method for object detection. In the sliding window configuration, a very large number of candidate patches are generated from each frame, while only a small number of them contain pedestrians. We utilize the Edge Box object proposal generation method to screen candidate patches based on an "objectness" criterion, so that only regions that are likely to contain objects are processed. This method significantly reduces the number of image patches processed by the neural network and makes our classification method very efficient. The resulting two-stage system is a good candidate for real-time implementation onboard modern aerial vehicles. Furthermore, testing on three datasets confirmed that our system offers high detection accuracy for terrestrial pedestrian detection in aerial imagery.
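    The screen-then-classify pipeline described above can be sketched with a toy "objectness" filter standing in for Edge Boxes (the edge-density score, window size, and threshold below are illustrative assumptions, not the authors' implementation; in their system the surviving windows would then be passed to the CNN):

    ```python
    import numpy as np

    def edge_density(patch):
        # crude "objectness" proxy: mean gradient magnitude of the patch
        gy, gx = np.gradient(patch.astype(float))
        return float(np.hypot(gx, gy).mean())

    def propose_windows(frame, win=16, stride=16, thresh=1.0):
        """Screen non-overlapping sliding windows by an objectness score
        so that only promising patches reach the expensive classifier."""
        h, w = frame.shape
        keep = []
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                patch = frame[y:y + win, x:x + win]
                if edge_density(patch) > thresh:
                    keep.append((x, y, win, win))
        return keep

    # synthetic frame: flat background with one textured region
    rng = np.random.default_rng(0)
    frame = np.zeros((64, 64))
    frame[16:32, 16:32] = rng.uniform(0, 255, (16, 16))
    candidates = propose_windows(frame)
    total = (64 // 16) ** 2
    print(len(candidates), "of", total, "windows kept for classification")
    ```

    The point of the design is the asymmetry in cost: the screening score is a few arithmetic operations per window, while the classifier is orders of magnitude more expensive, so discarding textureless windows early dominates the overall runtime.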

  14. Digital Video Over Space Systems and Networks

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2010-01-01

    This slide presentation reviews the use of digital video with space systems and networks. The earliest use of video was the use of film precluding live viewing, which gave way to live television from space. This has given way to digital video using internet protocol for transmission. This has provided for many improvements with new challenges. Some of these challenges are reviewed. The change to digital video transmitted over space systems can provide incredible imagery, however the process must be viewed as an entire system, rather than piece-meal.

  15. BOREAS Level-0 C-130 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Dominguez, Roseanne; Hall, Forrest G. (Editor)

    2000-01-01

    For BOReal Ecosystem-Atmosphere Study (BOREAS), C-130 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The NASA C-130 Earth Resources aircraft can accommodate two mapping cameras during flight, each of which can be fitted with 6- or 12-inch focal-length lenses and black-and-white, natural-color, or color-IR film, depending upon requirements. Both cameras were often in operation simultaneously, although sometimes only the lower resolution camera was deployed. When both cameras were in operation, the higher resolution camera was often used in a more limited fashion. The acquired photography covers the period of April to September 1994. The aerial photography was delivered as rolls of large format (9 x 9 inch) color transparency prints, with imagery from multiple missions (hundreds of prints) often contained within a single roll. A total of 1533 frames were collected from the C-130 platform for BOREAS in 1994. Note that the level-0 C-130 transparencies are not contained on the BOREAS CD-ROM set. An inventory file is supplied on the CD-ROM to inform users of all the data that were collected. Some photographic prints were made from the transparencies. In addition, BORIS staff digitized a subset of the transparencies and stored the images in JPEG format. The CD-ROM set contains a small subset of the collected aerial photography that was digitally scanned and stored as JPEG files for most tower and auxiliary sites in the NSA and SSA. See Section 15 for information about how to acquire additional imagery.

  16. Updating Maps Using High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Shahzad Janjua, Khurram; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    Kingdom of Saudi Arabia is one of the most dynamic countries of the world. We have witnessed very rapid urban development, which is altering the Kingdom's landscape on a daily basis. In recent years a substantial increase in urban populations is observed which results in the formation of large cities. Considering this fast-paced growth, it has become necessary to monitor these changes, particularly in view of the challenges faced by aerial photography projects. It has been observed that data obtained through aerial photography has a lifecycle of 5 years because of delays caused by extreme weather conditions and dust storms, which act as hindrances or barriers during aerial imagery acquisition and have increased the costs of aerial survey projects. All of these circumstances require that we consider alternatives that can provide easier and better ways of image acquisition in a short span of time while achieving reliable accuracy and cost effectiveness. The approach of this study is to conduct an extensive comparison between different resolutions of data sets which include: Orthophoto of (10 cm) GSD, Stereo images of (50 cm) GSD and Stereo images of (1 m) GSD, for map updating. Different approaches have been applied for digitizing buildings, roads, tracks, airport, roof level changes, filling stations, buildings under construction, property boundaries, mosques buildings and parking places.

  17. Video document

    NASA Astrophysics Data System (ADS)

    Davies, Bob; Lienhart, Rainer W.; Yeo, Boon-Lock

    1999-08-01

    The metaphor of film and TV permeates the design of software to support video on the PC. Simply transplanting the non-interactive, sequential experience of film to the PC fails to exploit the virtues of the new context. Video on the PC should be interactive and non-sequential. This paper experiments with a variety of tools for using video on the PC that exploit the new context of the PC. Some features are more successful than others. Applications that use these tools are explored, including primarily the home video archive but also streaming video servers on the Internet. The ability to browse, edit, abstract, and index large volumes of video content such as home video and corporate video is a problem without an appropriate solution in today's market. The current tools available are complex, unfriendly video editors, requiring hours of work to prepare a short home video, far more work than a typical home user can be expected to provide. Our proposed solution treats video like a text document, providing functionality similar to a text editor. Users can browse, interact, edit, and compose one or more video sequences with the same ease and convenience as handling text documents. With this level of text-like composition, we call what is normally a sequential medium a 'video document'. An important component of the proposed solution is shot detection, the ability to detect when a shot started or stopped. When combined with a spreadsheet of key frames, the shots become a grid of pictures that can be manipulated and viewed in the same way that a spreadsheet can be edited. Multiple video documents may be viewed, joined, manipulated, and seamlessly played back. Abstracts of unedited video content can be produced automatically to create novel video content for export to other venues. Edited and raw video content can be published to the net or burned to a CD-ROM with a self-installing viewer for Windows 98 and Windows NT 4.0.
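    The shot-detection step such a system depends on is commonly approximated with a histogram-difference cut detector; a minimal sketch (the bin count and threshold are assumptions, and the paper's actual detector is not specified in the abstract):

    ```python
    import numpy as np

    def detect_shots(frames, thresh=0.5):
        """Return indices where a new shot likely starts, using the total
        variation distance between consecutive grayscale histograms."""
        cuts = []
        prev = None
        for i, frame in enumerate(frames):
            hist, _ = np.histogram(frame, bins=32, range=(0, 256))
            hist = hist / hist.sum()
            if prev is not None and 0.5 * np.abs(hist - prev).sum() > thresh:
                cuts.append(i)
            prev = hist
        return cuts

    # two synthetic "shots": dark frames followed by bright frames
    dark = [np.full((8, 8), 20) for _ in range(5)]
    bright = [np.full((8, 8), 200) for _ in range(5)]
    print(detect_shots(dark + bright))
    ```

    Comparing histograms rather than raw pixels makes the detector tolerant of camera and object motion within a shot, at the cost of missing cuts between shots with similar overall color distributions.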

  18. High-biomass sorghum yield estimate with aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Abstract. To reach the goals laid out by the U.S. Government for displacing fossil fuels with biofuels, agricultural production of dedicated biomass crops is required. High-biomass sorghum is advantageous across wide regions because it requires less water per unit dry biomass and can produce very hi...

  19. Locating inputs of freshwater to Lynch Cove, Hood Canal, Washington, using aerial infrared photography

    USGS Publications Warehouse

    Sheibley, Rich W.; Josberger, Edward G.; Chickadel, Chris

    2010-01-01

    The input of freshwater and associated nutrients into Lynch Cove and lower Hood Canal (fig. 1) from sources such as groundwater seeps, small streams, and ephemeral creeks may play a major role in the nutrient loading and hydrodynamics of this low dissolved-oxygen (hypoxic) system. These dispersed sources exhibit a high degree of spatial variability. However, few in-situ measurements of groundwater seepage rates and nutrient concentrations are available and thus may not represent adequately the large spatial variability of groundwater discharge in the area. As a result, our understanding of these processes and their effect on hypoxic conditions in Hood Canal is limited. To determine the spatial variability and relative intensity of these sources, the U.S. Geological Survey Washington Water Science Center collaborated with the University of Washington Applied Physics Laboratory to obtain thermal infrared (TIR) images of the nearshore and intertidal regions of Lynch Cove at or near low tide. In the summer, cool freshwater discharges from seeps and streams, flows across the exposed, sun-warmed beach, and out on the warm surface of the marine water. These temperature differences are readily apparent in aerial thermal infrared imagery that we acquired during the summers of 2008 and 2009. When combined with coincident video camera images, these temperature differences allow identification of the location, the type, and the relative intensity of the sources.

  20. Part Two: Learning Science Through Digital Video: Student Views on Watching and Creating Videos

    NASA Astrophysics Data System (ADS)

    Wade, P.; Courtney, A. R.

    2014-12-01

    The use of digital video for science education has become common with the wide availability of video imagery. This study continues research into aspects of using digital video as a primary teaching tool to enhance student learning in undergraduate science courses. Two survey instruments were administered to undergraduate non-science majors. Survey One focused on: a) What science is being learned from watching science videos such as a "YouTube" clip of a volcanic eruption or an informational video on geologic time and b) What are student preferences with regard to their learning (e.g. using video versus traditional modes of delivery)? Survey Two addressed students' perspectives on the storytelling aspect of the video with respect to: a) sustaining interest, b) providing science information, c) style of video and d) quality of the video. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. The storytelling aspect of each video was also addressed by students. Students watched 15-20 shorter (3-15 minute) science videos created within the last four years. Initial results of this research indicate that shorter video segments were preferred and that the storytelling quality of each video related to student learning.

  1. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    PubMed

    Kedzierski, Michal; Delis, Paulina

    2016-06-23

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.
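    The simplification described above starts from the standard collinearity equations; in the usual photogrammetric notation (the notation here is the textbook form, not taken from the paper itself) they read:

    ```latex
    x = x_0 - f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
                      {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},
    \qquad
    y = y_0 - f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
                      {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
    ```

    where $(x, y)$ are image coordinates, $(x_0, y_0)$ the principal point, $f$ the focal length, $(X_S, Y_S, Z_S)$ the projection center, and $r_{ij}$ the elements of the rotation matrix built from the angles $(\omega, \varphi, \kappa)$. Fixing the tilt angle ($\omega$ or $\varphi$) at exactly 90° turns several $r_{ij}$ into 0 or ±1, which is what collapses the general model into the simplified pair of equations used to solve for the X, Y, Z coordinates of points on the façade.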

  2. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    PubMed Central

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing tine of image data from Unmanned Aerial Vehicles (UAVs), especially in low cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954

  4. Aerial Photography Summary Record System

    USGS Publications Warehouse

    1998-01-01

    The Aerial Photography Summary Record System (APSRS) describes aerial photography projects that meet specified criteria over a given geographic area of the United States and its territories. Aerial photographs are an important tool in cartography and a number of other professions. Land use planners, real estate developers, lawyers, environmental specialists, and many other professionals rely on detailed and timely aerial photographs. Until 1975, there was no systematic approach to locate an aerial photograph, or series of photographs, quickly and easily. In that year, the U.S. Geological Survey (USGS) inaugurated the APSRS, which has become a standard reference for users of aerial photographs.

  5. Artificial Video for Video Analysis

    ERIC Educational Resources Information Center

    Gallis, Michael R.

    2010-01-01

    This paper discusses the use of video analysis software and computer-generated animations for student activities. The use of artificial video affords the opportunity for students to study phenomena for which a real video may not be easy or even possible to procure, using analysis software with which the students are already familiar. We will…

  6. Remote sensing and GIS integration: Towards intelligent imagery within a spatial data infrastructure

    NASA Astrophysics Data System (ADS)

    Abdelrahim, Mohamed Mahmoud Hosny

    2001-11-01

    In this research, an "Intelligent Imagery System Prototype" (IISP) was developed. IISP is an integration tool that facilitates the environment for active, direct, and on-the-fly usage of high resolution imagery, internally linked to hidden GIS vector layers, to query the real world phenomena and, consequently, to perform exploratory types of spatial analysis based on a clear/undisturbed image scene. The IISP was designed and implemented using the software components approach to verify the hypothesis that a fully rectified, partially rectified, or even unrectified digital image can be internally linked to a variety of different hidden vector databases/layers covering the end user area of interest, and consequently may be reliably used directly as a base for "on-the-fly" querying of real-world phenomena and for performing exploratory types of spatial analysis. Within IISP, differentially rectified, partially rectified (namely, IKONOS GEOCARTERRA(TM)), and unrectified imagery (namely, scanned aerial photographs and captured video frames) were investigated. The system was designed to handle four types of spatial functions, namely, pointing query, polygon/line-based image query, database query, and buffering. The system was developed using ESRI MapObjects 2.0a as the core spatial component within Visual Basic 6.0. When used to perform the pre-defined spatial queries using different combinations of image and vector data, the IISP provided the same results as those obtained by querying pre-processed vector layers even when the image used was not orthorectified and the vector layers had different parameters. In addition, the real-time pixel location orthorectification technique developed and presented within the IKONOS GEOCARTERRA(TM) case provided a horizontal accuracy (RMSE) of +/- 2.75 metres. This accuracy is very close to the accuracy level obtained when purchasing the orthorectified IKONOS PRECISION products (RMSE of +/- 1.9 metre). The latter cost approximately four

  7. Immersive video

    NASA Astrophysics Data System (ADS)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  8. Comparative Assessment of Very High Resolution Satellite and Aerial Orthoimagery

    NASA Astrophysics Data System (ADS)

    Agrafiotis, P.; Georgopoulos, A.

    2015-03-01

    This paper aims to assess the accuracy and radiometric quality of orthorectified high resolution satellite imagery from Pleiades-1B satellites through a comparative evaluation of their quantitative and qualitative properties. A Pleiades-1B stereopair of high resolution images taken in 2013, two adjacent GeoEye-1 stereopairs from 2011, and an aerial orthomosaic (LSO) provided by NCMA S.A. (Hellenic Cadastre) from 2007 have been used for the comparison tests. As the control dataset, an orthomosaic from aerial imagery, also provided by NCMA S.A. (0.25 m GSD), from 2012 was selected. The process for DSM and orthoimage production was performed using commercial digital photogrammetric workstations. The two resulting orthoimages and the aerial orthomosaic (LSO) were relatively and absolutely evaluated for their quantitative and qualitative properties. Test measurements were performed using the same check points in order to establish their accuracy both for single point coordinates and for the distances between them. Check points were distributed according to the JRC Guidelines for Best Practice and Quality Checking of Ortho Imagery and NSSDA standards, while areas with different terrain relief and land cover were also included. The tests performed were also based on JRC and NSSDA accuracy standards. Finally, tests were carried out to assess the radiometric quality of the orthoimagery. The results are presented with a statistical analysis and evaluated in order to present the merits and demerits of the imaging sensors involved in orthoimage production. The results also serve as a critical approach to the usability and cost efficiency of satellite imagery for the production of Large Scale Orthophotos.

  9. Aerial Explorers and Robotic Ecosystems

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Greg

    2004-01-01

    A unique bio-inspired approach to autonomous aerial vehicle, a.k.a. aerial explorer technology is discussed. The work is focused on defining and studying aerial explorer mission concepts, both as an individual robotic system and as a member of a small robotic "ecosystem." Members of this robotic ecosystem include the aerial explorer, air-deployed sensors and robotic symbiotes, and other assets such as rovers, landers, and orbiters.

  10. Adding Insult to Imagery? Art Education and Censorship

    ERIC Educational Resources Information Center

    Sweeny, Robert W.

    2007-01-01

    The "Adding Insult to Imagery? Artistic Responses to Censorship and Mass-Media" exhibition opened in January 16, 2006, Kipp Gallery on the Indiana University of Pennsylvania campus. Eleven gallery-based works, 9 videos, and 10 web-based artworks comprised the show; each dealt with the relationship between censorship and mass mediated images. Many…

  11. Application of airborne thermal imagery to surveys of Pacific walrus

    USGS Publications Warehouse

    Burn, D.M.; Webber, M.A.; Udevitz, M.S.

    2006-01-01

    We conducted tests of airborne thermal imagery of Pacific walrus to determine if this technology can be used to detect walrus groups on sea ice and estimate the number of walruses present in each group. In April 2002 we collected thermal imagery of 37 walrus groups in the Bering Sea at spatial resolutions ranging from 1-4 m. We also collected high-resolution digital aerial photographs of the same groups. Walruses were considerably warmer than the background environment of ice, snow, and seawater and were easily detected in thermal imagery. We found a significant linear relation between walrus group size and the amount of heat measured by the thermal sensor at all 4 spatial resolutions tested. This relation can be used in a double-sampling framework to estimate total walrus numbers from a thermal survey of a sample of units within an area and photographs from a subsample of the thermally detected groups. Previous methods used in visual aerial surveys of Pacific walrus have sampled only a small percentage of available habitat, resulting in population estimates with low precision. Results of this study indicate that an aerial survey using a thermal sensor can cover as much as 4 times the area per hour of flight time with greater reliability than visual observation.
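
    The double-sampling estimator described above amounts to fitting the heat-to-count relation on a photographed subsample and applying it to the remaining thermal detections. A minimal sketch, with invented calibration numbers rather than the study's data:

```python
import numpy as np

# Hypothetical calibration subsample: photographed walrus groups with
# known counts and their integrated thermal signal (arbitrary units).
heat = np.array([12.0, 25.0, 40.0, 55.0, 80.0])
count = np.array([6.0, 13.0, 19.0, 28.0, 41.0])

# Phase 1 of double sampling: fit the linear heat-to-count relation.
slope, intercept = np.polyfit(heat, count, 1)

# Phase 2: apply the relation to thermally detected groups that were
# not photographed, and sum to estimate total numbers.
survey_heat = np.array([30.0, 60.0, 70.0])
estimated_total = np.sum(slope * survey_heat + intercept)
```

    In the actual survey design, variance estimates would also account for uncertainty in both sampling phases.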

  12. Aerial Perspective Artistry

    ERIC Educational Resources Information Center

    Wolfe, Linda

    2010-01-01

    This article presents a lesson centering on aerial perspective artistry of students and offers suggestions on how art teachers should carry this project out. This project serves to develop students' visual perception by studying reproductions by famous artists. This lesson allows one to imagine being lured into a landscape capable of captivating…

  13. Aerial of the VAB

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Even in this aerial view at KSC, the Vehicle Assembly Building is imposing. In front of it is the Launch Control Center. In the background is the Rotation/Processing Facility, next to the Banana Creek. In the foreground is the Saturn Causeway that leads to Launch Pads 39A and 39B.

  14. Aerial photographic reproductions

    USGS Publications Warehouse

    U.S. Geological Survey

    1971-01-01

    Geological Survey vertical aerial photography is obtained primarily for topographic and geologic mapping. Reproductions from this photography are usually satisfactory for general use. Because reproductions are not stocked, but are custom processed for each order, they cannot be returned for credit or refund.

  15. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  16. Dashboard Videos

    NASA Astrophysics Data System (ADS)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website. Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.

  17. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  18. Aerial Videography From Locally Launched Rockets

    NASA Technical Reports Server (NTRS)

    Lyle, Stacey D.

    2007-01-01

    A method of quickly collecting digital imagery of ground areas from video cameras carried aboard locally launched rockets has been developed. The method can be used, for example, to record rare or episodic events or to gather image data to guide decisions regarding treatment of agricultural fields or fighting wildfires. The method involves acquisition and digitization of a video frame at a known time along with information on the position and orientation of the rocket and camera at that time. The position and orientation data are obtained by use of a Global Positioning System receiver and a digital magnetic compass carried aboard the rocket. These data are radioed to a ground station, where they are processed, by a real-time algorithm, into georeferenced position and orientation data. The algorithm also generates a file of transformation parameters that account for the variation of image magnification and distortion associated with the position and orientation of the camera relative to the ground scene depicted in the image. As the altitude, horizontal position, and orientation of the rocket change between image frames, the algorithm calculates the corresponding new georeferenced position and orientation data and the associated transformation parameters. The output imagery can be rendered in any of a variety of formats. The figure presents an example of one such format.
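
    The georeferencing step can be illustrated with a much-simplified flat-terrain, nadir-camera mapping from pixel to ground coordinates using the GPS position and compass heading. This is a sketch under those stated assumptions, not the article's full transformation model, and all parameter values are hypothetical.

```python
import math

def pixel_to_ground(px, py, cx, cy, gsd, heading_deg,
                    cam_east, cam_north):
    """Map an image pixel to ground coordinates for a nadir-pointing
    camera, given position, heading, and ground sample distance."""
    # Offsets from the principal point, scaled to metres on the ground.
    dx = (px - cx) * gsd
    dy = (cy - py) * gsd  # image rows increase downward
    # Rotate by the compass heading to align with grid north.
    h = math.radians(heading_deg)
    east = cam_east + dx * math.cos(h) + dy * math.sin(h)
    north = cam_north - dx * math.sin(h) + dy * math.cos(h)
    return east, north

# Camera over the origin, heading north (0 deg), 0.5 m per pixel,
# principal point at (640, 512).
e, n = pixel_to_ground(660, 500, 640, 512, 0.5, 0.0, 0.0, 0.0)
```

    A full implementation would also correct for camera tilt and lens distortion, which this flat, nadir sketch ignores.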

  19. Processing of SeaMARC swath sonar imagery

    SciTech Connect

    Pratson, L.; Malinverno, A.; Edwards, M.; Ryan, W.

    1990-05-01

    Side-scan swath sonar systems have become an increasingly important means of mapping the sea floor. Two such systems are the deep-towed, high-resolution SeaMARC I sonar, which has a variable swath width of up to 5 km, and the shallow-towed, lower-resolution SeaMARC II sonar, which has a swath width of 10 km. The sea-floor imagery of acoustic backscatter output by the SeaMARC sonars is analogous to aerial photographs and airborne side-looking radar images of continental topography. Geologic interpretation of the sea-floor imagery is greatly facilitated by image processing. Image processing of the digital backscatter data involves removal of noise by median filtering, spatial filtering to remove sonar scans of anomalous intensity, across-track corrections to remove beam patterns caused by nonuniform response of the sonar transducers to changes in incident angle, and contrast enhancement by histogram equalization to maximize the available dynamic range. Correct geologic interpretation requires submarine structural fabrics to be displayed in their proper locations and orientations. Geographic projection of sea-floor imagery is achieved by merging the enhanced imagery with the sonar vehicle navigation and correcting for vehicle attitude. Co-registration of bathymetry with sonar imagery introduces sea-floor relief and permits the imagery to be displayed in three-dimensional perspectives, furthering the ability of the marine geologist to infer the processes shaping formerly hidden subsea terrains.
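
    Of the enhancement steps listed, histogram equalization is the easiest to sketch. The numpy-only version below stretches a synthetic low-contrast image (not actual SeaMARC data) across the full dynamic range:

```python
import numpy as np

def equalize(img):
    """Histogram equalization: spread backscatter values across the
    available dynamic range via the normalized cumulative histogram."""
    hist, bins = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    # Map each pixel through the equalizing transfer function.
    return np.interp(img.ravel(), bins[:-1], cdf).reshape(img.shape)

# Synthetic "sonar" image confined to a narrow band of values.
rng = np.random.default_rng(0)
img = rng.integers(100, 120, size=(64, 64)).astype(float)
out = equalize(img)
```

    The median filtering and across-track beam-pattern corrections mentioned in the abstract would precede this step in the described pipeline.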

  20. A qualitative evaluation of Landsat imagery of Australian rangelands

    USGS Publications Warehouse

    Graetz, R.D.; Carneggie, David M.; Hacker, R.; Lendon, C.; Wilcox, D.G.

    1976-01-01

    The capability of multidate, multispectral ERTS-1 imagery of three different rangeland areas within Australia was evaluated for its usefulness in preparing inventories of rangeland types, assessing on a broad scale range condition within these rangeland types, and assessing the response of rangelands to rainfall events over large areas. For the three divergent rangeland test areas, centered on Broken Hill, Alice Springs and Kalgoorlie, detailed interpretation of the imagery only partially satisfied the information requirements set. It was most useful in the Broken Hill area where fenceline contrasts in range condition were readily visible. At this and the other sites an overstorey of trees made interpretation difficult. Whilst the low resolution characteristics and the lack of stereoscopic coverage hindered interpretation, it was felt that this type of imagery, with its vast coverage, present low cost and potential for repeated sampling, is a useful addition to conventional aerial photography for all rangeland types.

  1. Automatic target detection in UAV imagery using image formation conditions

    NASA Astrophysics Data System (ADS)

    Lin, Huibao; Si, Jennie; Abousleman, Glen P.

    2003-09-01

    This paper is about automatic target detection (ATD) in unmanned aerial vehicle (UAV) imagery. Extracting reliable features under all conditions from a 2D projection of a target in UAV imagery is a difficult problem. However, since the target size information is usually invariant to the image formation process, we propose an algorithm for automatically estimating the size of a 3D target by using its 2D projection. The size information in turn becomes an important feature to be used in a knowledge-driven, multi-resolution-based algorithm for automatically detecting targets in UAV imagery. Experimental results show that our proposed ATD algorithm provides outstanding detection performance, while significantly reducing the false alarm rate and the computational complexity.
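
    The size feature rests on the pinhole ground-sample-distance relation: a pixel extent maps to a ground extent through altitude, focal length, and detector pitch. A minimal sketch with hypothetical camera parameters:

```python
def ground_size(pixel_extent, altitude_m, focal_len_mm, pixel_pitch_um):
    """Estimate the ground-projected size of a target from its pixel
    extent, via the pinhole ground-sample-distance (GSD) relation."""
    # GSD in metres per pixel: detector pitch scaled by altitude/focal.
    gsd_m = (pixel_pitch_um * 1e-6) * altitude_m / (focal_len_mm * 1e-3)
    return pixel_extent * gsd_m

# A 40-pixel-long object seen from 1000 m with a 50 mm lens
# and 10-micron pixels (all values hypothetical).
size_m = ground_size(40, 1000.0, 50.0, 10.0)
```

    A target whose computed ground size falls outside a plausible range for the target class can then be rejected, which is the sense in which size acts as a detection feature.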

  2. Digital Motion Imagery, Interoperability Challenges for Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2012-01-01

    With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video is becoming more practical as a data gathering tool for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, internet protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different standards and protocols are used by different agencies. This paper will explore the challenges presented by the abundance of motion imagery and video standards, interfaces and protocols, with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station will be examined. The paper will also discuss recent trends in the development of new video compression algorithms, as well as the likely expanded use of Delay (or Disruption) Tolerant Networking nodes.

  3. Forestry, geology and hydrological investigations from ERTS-1 imagery in two areas of Ecuador, South America

    NASA Technical Reports Server (NTRS)

    Moreno, N. V. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. In the Oriente area, well-drained forests containing commercially valuable hardwoods can be recognized confidently and delineated quickly on the ERTS imagery. In the tropical rainforest, ERTS can provide an abundance of inferential information about large scale geologic structures. ERTS imagery is better than normal aerial photography for recognizing linears. The imagery is particularly useful for updating maps of the distributary system of the Guayas River Basin and of any other river with a similarly rapidly changing channel pattern.

  4. Automatic Building Extraction and Roof Reconstruction in 3k Imagery Based on Line Segments

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Tian, J.; Kurz, F.

    2016-06-01

    We propose an image processing workflow to extract rectangular building footprints using georeferenced stereo-imagery and a derivative digital surface model (DSM) product. The approach applies a line segment detection procedure to the imagery and subsequently verifies identified line segments individually to create a footprint on the basis of the DSM. The footprint is further optimized by morphological filtering. Towards the realization of 3D models, we decompose the produced footprint and generate a 3D point cloud from DSM height information. By utilizing the robust RANSAC plane fitting algorithm, the roof structure can be correctly reconstructed. In an experimental part, the proposed approach has been performed on 3K aerial imagery.
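
    The RANSAC plane fitting used for roof reconstruction can be sketched generically: repeatedly fit a plane to three random points and keep the model with the most inliers. This is a minimal implementation on synthetic points, not the authors' code.

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.05, rng=None):
    """Robustly fit a plane to a 3D point cloud with RANSAC."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        # Point-to-plane distances; count points within the threshold.
        inliers = np.abs(points @ normal + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Synthetic roof face z = 0.5x + 2 plus a handful of outlier points.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(100, 2))
pts = np.column_stack([xy, 0.5 * xy[:, 0] + 2.0])
pts = np.vstack([pts, rng.uniform(0, 10, size=(10, 3))])
model, inliers = ransac_plane(pts)
```

    In the described workflow, such a fit would be run per roof segment of the DSM-derived point cloud, with the inlier sets defining the reconstructed roof planes.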

  5. Applicability of ERTS-1 imagery to the study of suspended sediment and aquatic fronts

    NASA Technical Reports Server (NTRS)

    Klemas, V.; Srna, R.; Treasure, W.; Otley, M.

    1973-01-01

    Imagery from three successful ERTS-1 passes over the Delaware Bay and Atlantic Coastal Region have been evaluated to determine visibility of aquatic features. Data gathered from ground truth teams before and during the overflights, in conjunction with aerial photographs taken at various altitudes, were used to interpret the imagery. The overpasses took place on August 16, October 10, 1972, and January 26, 1973, with cloud cover ranging from about zero to twenty percent. (I.D. Nos. 1024-15073, 1079-15133, and 1187-15140). Visual inspection, density slicing and multispectral analysis of the imagery revealed strong suspended sediment patterns and several distinct types of aquatic interfaces or frontal systems.

  6. Accessibility Videos.

    PubMed

    Kurppa, Ari; Nordlund, Marika

    2016-01-01

    It can be difficult to understand accessibility, if you do not have the personal experience. The Accessibility Centre ESKE produced short videos which demonstrate the meaning of accessibility in different situations. Videos will raise accessibility awareness of architects, other planners and professionals in the construction field and maintenance. PMID:27534282

  7. Unmanned Aerial Vehicle (UAV) Dynamic-Tracking Directional Wireless Antennas for Low Powered Applications that Require Reliable Extended Range Operations in Time Critical Scenarios

    SciTech Connect

    Scott G. Bauer; Matthew O. Anderson; James R. Hanneman

    2005-10-01

    The proven value of DOD Unmanned Aerial Vehicles (UAVs) will ultimately transition to National and Homeland Security missions that require real-time aerial surveillance, situation awareness, force protection, and sensor placement. Public service first responders who routinely risk personal safety to assess and report a situation for emergency actions will likely be the first to benefit from these new unmanned technologies. ‘Packable’ or ‘Portable’ small class UAVs will be particularly useful to the first responder. They require the least amount of training, no fixed infrastructure, and are capable of being launched and recovered from the point of emergency. All UAVs require wireless communication technologies for real-time applications. Typically on a small UAV, a low bandwidth telemetry link is required for command and control (C2) and systems health monitoring. If the UAV is equipped with a real-time Electro-Optical or Infrared (EO/IR) video camera payload, a dedicated high bandwidth analog/digital link is usually required for reliable high-resolution imagery. In most cases, both the wireless telemetry and real-time video links will be integrated into the UAV with unity gain omni-directional antennas. With limited on-board power and payload capacity, a small UAV will be limited in the amount of radio-frequency (RF) energy it transmits to the users. Therefore, ‘packable’ and ‘portable’ UAVs will have limited useful operational ranges for first responders. This paper will discuss the limitations of small UAV wireless communications. The discussion will present an approach of utilizing a dynamic, ground-based, real-time tracking, high-gain directional antenna to provide extended-range stand-off operation, potential RF channel reuse, and assured telemetry and data communications from low-powered UAV-deployed wireless assets.
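
    The range limits discussed above follow from the Friis free-space path-loss relation; a high-gain tracking dish on the ground adds its gain directly to the link budget. The sketch below compares an omni-to-omni link with an omni-to-dish link at the same distance (all frequencies, powers, and gains are hypothetical examples):

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB (Friis equation)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

def rx_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, freq_hz, dist_m):
    """Received power: transmit power plus antenna gains minus path loss."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(freq_hz, dist_m)

# 2.4 GHz link at 5 km from a 20 dBm UAV transmitter with a 2 dBi omni.
omni = rx_power_dbm(20, 2, 2, 2.4e9, 5000)   # 2 dBi omni on the ground
dish = rx_power_dbm(20, 2, 24, 2.4e9, 5000)  # 24 dBi tracking dish
```

    The 22 dB difference between the two receivers illustrates why a directional ground antenna can extend stand-off range without adding any power or weight to the UAV itself.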

  8. Waste site characterization through digital analysis of historical aerial photographs at Los Alamos National Laboratory and Eglin Air Force Base

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Wells, B.; Rofer, C.; Martin, B.

    1995-05-01

    Historical aerial photographs are used to provide a physical history and preliminary mapping information for characterizing hazardous waste sites at Los Alamos National Laboratory and Eglin Air Force Base. The examples cited show how imagery was used to accurately locate and identify previous activities at a site, monitor changes that occurred over time, and document the observable of such activities today. The methodology demonstrates how historical imagery (along with any other pertinent data) can be used in the characterization of past environmental damage.

  9. Imagery Integration Team

    NASA Technical Reports Server (NTRS)

    Calhoun, Tracy; Melendrez, Dave

    2014-01-01

    The Human Exploration Science Office (KX) provides leadership for NASA's Imagery Integration (Integration 2) Team, an affiliation of experts in the use of engineering-class imagery intended to monitor the performance of launch vehicles and crewed spacecraft in flight. Typical engineering imagery assessments include studying and characterizing the liftoff and ascent debris environments; launch vehicle and propulsion element performance; in-flight activities; and entry, landing, and recovery operations. Integration 2 support has been provided not only for U.S. Government spaceflight (e.g., Space Shuttle, Ares I-X) but also for commercial launch providers, such as Space Exploration Technologies Corporation (SpaceX) and Orbital Sciences Corporation, servicing the International Space Station. The NASA Integration 2 Team is composed of imagery integration specialists from JSC, the Marshall Space Flight Center (MSFC), and the Kennedy Space Center (KSC), who have access to a vast pool of experience and capabilities related to program integration, deployment and management of imagery assets, imagery data management, and photogrammetric analysis. The Integration 2 team is currently providing integration services to commercial demonstration flights, Exploration Flight Test-1 (EFT-1), and the Space Launch System (SLS)-based Exploration Missions (EM)-1 and EM-2. EM-2 will be the first attempt to fly a piloted mission with the Orion spacecraft. The Integration 2 Team provides the customer (both commercial and Government) with access to a wide array of imagery options - ground-based, airborne, seaborne, or vehicle-based - that are available through the Government and commercial vendors. The team guides the customer in assembling the appropriate complement of imagery acquisition assets at the customer's facilities, minimizing costs associated with market research and the risk of purchasing inadequate assets. The NASA Integration 2 capability simplifies the process of securing one

  10. Auditory imagery: empirical findings.

    PubMed

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  12. AERIAL RADIOLOGICAL SURVEYS

    SciTech Connect

    Proctor, A.E.

    1997-06-09

    Measuring terrestrial gamma radiation from airborne platforms has proved to be a useful method for characterizing radiation levels over large areas. Over 300 aerial radiological surveys have been carried out over the past 25 years including U.S. Department of Energy (DOE) sites, commercial nuclear power plants, Formerly Utilized Sites Remedial Action Program/Uranium Mine Tailing Remedial Action Program (FUSRAP/UMTRAP) sites, nuclear weapons test sites, contaminated industrial areas, and nuclear accident sites. This paper describes the aerial measurement technology currently in use by the Remote Sensing Laboratory (RSL) for routine environmental surveys and emergency response activities. Equipment, data-collection and -analysis methods, and examples of survey results are described.

  13. Evaluation of unmanned aerial vehicles (UAVs) for detection of cattle in the Cattle Fever Tick Permanent Quarantine Zone

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An unmanned aerial vehicle was used to capture videos of cattle in pastures to determine the efficiency of this technology for use by Mounted Inspectors in the Permanent Quarantine zone (PQZ) of the Cattle Fever Tick Eradication Program in south Texas along the U.S.-Mexico Border. These videos were ...

  14. Multistage, Multiband and sequential imagery to identify and quantify non-forest vegetation resources

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.

    1971-01-01

    Analysis and recognition processing of multispectral scanner imagery for plant community classification and interpretations of various film-filter-scale aerial photographs are reported. Data analyses and manuscript preparation of research on microdensitometry for plant community and component identification and remote estimates of biomass are included.

  15. Video games.

    PubMed

    Funk, Jeanne B

    2005-06-01

    The video game industry insists that it is doing everything possible to provide information about the content of games so that parents can make informed choices; however, surveys indicate that ratings may not reflect consumer views of the nature of the content. This article describes some of the currently popular video games, as well as developments that are on the horizon, and discusses the status of research on the positive and negative impacts of playing video games. Recommendations are made to help parents ensure that children play games that are consistent with their values.

  16. MISR Field Campaign Imagery

    Atmospheric Science Data Center

    2014-07-23

    MISR Support of Field Campaigns: Arctic Research of the Composition of the ... Daily ARCTAS Aerosol Polar Imagery; Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS), July - ...

  17. MISR Imagery and Articles

    Atmospheric Science Data Center

    2016-05-27

    ... of select parameters available in the MISR Level 3 global data products. Field Campaigns: imagery supporting field ... explore the links between atmospheric aerosols, climate change, and ultraviolet rays. Following the World Trade Center plume ...

  18. Vehicle detection in aerial surveillance using dynamic Bayesian networks.

    PubMed

    Cheng, Hsu-Yung; Weng, Chih-Chia; Chen, Yi-Ying

    2012-04-01

    We present an automatic vehicle detection system for aerial surveillance in this paper. In this system, we escape from the stereotype and existing frameworks of vehicle detection in aerial surveillance, which are either region based or sliding window based. We design a pixelwise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and nonvehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy for detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate flexibility and good generalization abilities of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.
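
    The pixelwise color cue described above can be illustrated with a toy example. This is a sketch only: the scoring function and threshold below are invented for illustration, not the actual color transform from the paper; a real system would learn both from labeled aerial imagery and combine the result with edge and corner observations in the dynamic Bayesian network.

```python
# Illustrative sketch of a pixelwise color test in the spirit of the
# vehicle-color extraction step. Coefficients and threshold are
# hypothetical placeholders, not taken from the paper.

def vehicle_color_score(r, g, b):
    """Map an RGB pixel to a scalar 'vehicle-likeness' score.

    Vehicles (often gray/white/black) have low chroma, so we score by
    how close the three channels are to their mean (low saturation).
    """
    mean = (r + g + b) / 3.0
    chroma = max(abs(r - mean), abs(g - mean), abs(b - mean))
    return 1.0 - chroma / 255.0  # 1.0 = perfectly achromatic

def is_candidate_vehicle_pixel(r, g, b, threshold=0.9):
    # Hypothetical threshold; a real system would learn it from data.
    return vehicle_color_score(r, g, b) >= threshold

# A gray car pixel scores higher than a green vegetation pixel:
print(is_candidate_vehicle_pixel(120, 122, 118))  # True
print(is_candidate_vehicle_pixel(40, 160, 40))    # False
```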

  19. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).
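
    The capture constraint stated above (a particle image is usable only if its speed across the raster stays below the raster line-scanning speed) implies an upper bound on measurable flow velocity. A back-of-envelope sketch, with all numbers invented for illustration rather than taken from the patent:

```python
def max_measurable_velocity(raster_height_m, frame_period_s, magnification):
    """Upper bound on fluid velocity (m/s) that still yields a usable image.

    The scan line sweeps the raster at raster_height / frame_period; a
    particle image moves at (fluid velocity x optical magnification), so
    the limit is the scan speed divided by the magnification.
    """
    scan_speed = raster_height_m / frame_period_s
    return scan_speed / magnification

# e.g. a 10 mm raster scanned every 1/30 s with 0.5x optical magnification:
v_max = max_measurable_velocity(0.010, 1 / 30, 0.5)
print(f"{v_max:.2f} m/s")  # 0.60 m/s
```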

  20. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  1. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1981-06-10

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid.

  2. Aerial thermography studies of power plant heated lakes

    SciTech Connect

    Villa-Aleman, E.

    2000-01-26

    Remote sensing temperature measurements of water bodies are complicated by the temperature differences between the true surface, or skin, water and the bulk water below. Weather conditions control the reduction of the skin temperature relative to the bulk water temperature. Typical skin temperature depressions range from a few tenths of a degree Celsius to more than one degree. In this research project, the Savannah River Technology Center (SRTC) used aerial thermography and surface-based meteorological and water temperature measurements to study a power plant cooling lake in South Carolina. Skin and bulk water temperatures were measured simultaneously for imagery calibration and to produce a database for modeling of skin temperature depressions as a function of weather and bulk water temperatures. This paper will present imagery that illustrates how the skin temperature depression was affected by different conditions in several locations on the lake and will present skin temperature modeling results.
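
    The correction implied by the abstract can be sketched as a simple empirical model. The coefficients below are invented placeholders; the SRTC study fit its model from simultaneous meteorological and water-temperature measurements.

```python
# Hypothetical illustration of a skin-temperature depression model.
# Depression grows with evaporative cooling (high wind, low humidity);
# typical magnitudes are tenths of a degree to ~1 degree Celsius.

def skin_temperature(bulk_c, wind_m_s, rel_humidity):
    """Estimate skin (radiometric) temperature from bulk water temperature."""
    depression = 0.1 + 0.05 * wind_m_s + 0.4 * (1.0 - rel_humidity)
    return bulk_c - depression

# Calm, humid night vs. windy, dry afternoon over the same 28 C bulk water:
print(round(skin_temperature(28.0, 1.0, 0.9), 2))  # small depression
print(round(skin_temperature(28.0, 6.0, 0.4), 2))  # larger depression
```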

  3. Applications of thermal infrared imagery for energy conservation and environmental surveys

    NASA Technical Reports Server (NTRS)

    Carney, J. R.; Vogel, T. C.; Howard, G. E., Jr.; Love, E. R.

    1977-01-01

    The survey procedures, developed during the winter and summer of 1976, employ color and color infrared aerial photography, thermal infrared imagery, and a handheld infrared imaging device. The resulting imagery was used to detect building heat losses, deteriorated insulation in built-up type building roofs, and defective underground steam lines. The handheld thermal infrared device, used in conjunction with the aerial thermal infrared imagery, provided a method for detecting and locating those roof areas that were underlain with wet insulation. In addition, the handheld infrared device was employed to conduct a survey of a U.S. Army installation's electrical distribution system under full operating loads. This survey proved to be a cost-effective procedure for detecting faulty electrical insulators and connections that, if allowed to persist, could have resulted in both safety hazards and losses in production.

  4. Exploration applications of satellite imagery in mature basins - A summation

    SciTech Connect

    Berger, Z. )

    1991-08-01

    A series of examples supported by surface and subsurface controls illustrates procedures used to integrate satellite imagery interpretation into a conventional exploration program, and the potential contribution of such an approach to the recognition of new hydrocarbon plays in mature basins. Integrated analysis of satellite imagery data consists of four major steps. The first step focuses on the recognition of style, trend, and timing of deformation of exposed structures located at the basin interior or around its margins. This information is obtained through an integrated analysis of satellite imagery data, stereo aerial photography, surface geological mapping, and field observations. The second step consists of integrating the satellite imagery with gravity and magnetic data to recognize obscured and/or buried structures. The third step involves the analysis of available seismic data, which is specifically processed to enhance subtle basement topography in order to determine influences on reservoir quality. In the fourth step, subsurface structure, isopach, show, and pool maps derived from available well information are integrated into the structural interpretation. These four analytical steps are demonstrated with examples from the Powder River basin, Western Canada basin, Paris basin, and Central basin platform of west Texas. In all of these highly mature basins, it is easy to demonstrate that (1) hydrocarbon migration and accumulation was largely controlled by subtle basement structures, and (2) these structures can be detected through the integrated analysis of satellite imagery.

  5. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  6. FINDINGS ON THE USE OF LANDSAT-3 RETURN BEAM VIDICON IMAGERY FOR DETECTING LAND USE AND LAND COVER CHANGES.

    USGS Publications Warehouse

    Milazzo, Valerie A.

    1983-01-01

    The spatial resolution of imagery from the return beam vidicon (RBV) camera aboard the Landsat-3 satellite suggested that such data might prove useful in inspecting land use and land cover maps. In this study, a 1972 land use and land cover map derived from aerial photographs is compared with a 1978 Landsat RBV image to delineate areas of change. Findings indicate that RBV imagery is useful in establishing the fact of change and in identifying gross category changes.

  7. Thermal imagery for census of ungulates

    NASA Technical Reports Server (NTRS)

    Wride, M. C.; Baker, K.

    1977-01-01

    A Daedalus thermal linescanner mounted in a light single-engine aircraft was used to image the entire 270 square kilometers within the fenced perimeter of Elk Island Park, Alberta, Canada. The data were collected during the winter of 1976, in morning and midday (overcast) conditions, then processed and analyzed to obtain a total ungulate count. Five different ungulate species were present during the survey. Ungulates were easily observed during the analysis of linescanner imagery, and the total number of ungulates was established at 2175, compared with figures of 1010 and 1231 from visual-method aerial surveys of the same area that year. It was concluded that the scanner was much more accurate and precise for census of ungulates than visual techniques.

  8. Measuring creative imagery abilities

    PubMed Central

    Jankowska, Dorota M.; Karwowski, Maciej

    2015-01-01

    Over the decades, creativity and imagination research developed in parallel, but they surprisingly rarely intersected. This paper introduces a new theoretical model of creative visual imagination, which bridges creativity and imagination research, as well as presents a new psychometric instrument, called the Test of Creative Imagery Abilities (TCIA), developed to measure creative imagery abilities understood in accordance with this model. Creative imagination is understood as constituted by three interrelated components: vividness (the ability to create images characterized by a high level of complexity and detail), originality (the ability to produce unique imagery), and transformativeness (the ability to control imagery). TCIA enables valid and reliable measurement of these three groups of abilities, yielding the general score of imagery abilities and at the same time making profile analysis possible. We present the results of nine studies on a total sample of more than 1700 participants, showing the factor structure of TCIA using confirmatory factor analysis, as well as provide data confirming this instrument's validity and reliability. The availability of TCIA for interested researchers may result in new insights and possibilities of integrating the fields of creativity and imagination science. PMID:26539140

  9. Infrared film for aerial photography

    USGS Publications Warehouse

    Anderson, William H.

    1979-01-01

    Considerable interest has developed recently in the use of aerial photographs for agricultural management. Even the simplest hand-held aerial photographs, especially those taken with color infrared film, often provide information not ordinarily available through routine ground observation. When fields are viewed from above, patterns and variations become more apparent, often allowing problems to be spotted which otherwise may go undetected.

  10. Videographic enhancement of GRASS imagery: Recent advances

    SciTech Connect

    Sullivan, R.G.

    1992-06-01

    The Geographic Resource Analysis Support System (GRASS), a geographic information system, has been fielded at approximately 50 US Army training installations as a land-management decision-making tool. Use of the GRASS geographic information system involves the production of numerous digital maps of environmental parameters, such as elevation, soils, hydrography, etc. A recently emerging technology called computer videographics can be used to graphically enhance GRASS images, thereby creating new ways to visualize GRASS analysis results. The project described in this report explored the enhancement of GRASS images through the use of videographic technology. General image quality of videographically enhanced GRASS images was improved through the use of high-resolution imagery and improved software. Several new types of geographic data visualizations were developed, including three-dimensional shaded-relief maps of GRASS data, overlay of GRASS images with satellite images, and integration of computer-aided-design imagery with GRASS images. GRASS images were successfully enhanced using Macintosh hardware and software, rather than the DOS-based equipment used previously. Images scanned with a document scanner were incorporated into GRASS imagery, and enhanced images were output in an S-VHS high-resolution video format.

  11. Improved reduced-resolution satellite imagery

    NASA Technical Reports Server (NTRS)

    Ellison, James; Milstein, Jaime

    1995-01-01

    The resolution of satellite imagery is often traded off to satisfy transmission time and bandwidth, memory, and display limitations. Although there are many ways to achieve the same reduction in resolution, algorithms vary in their ability to preserve the visual quality of the original imagery. These issues are investigated in the context of the Landsat browse system, which permits the user to preview a reduced resolution version of a Landsat image. Wavelets-based techniques for resolution reduction are proposed as alternatives to subsampling used in the current system. Experts judged imagery generated by the wavelets-based methods visually superior, confirming initial quantitative results. In particular, compared to subsampling, the wavelets-based techniques were much less likely to obscure roads, transmission lines, and other linear features present in the original image, introduce artifacts and noise, and otherwise reduce the usefulness of the image. The wavelets-based techniques afford multiple levels of resolution reduction and computational speed. This study is applicable to a wide range of reduced resolution applications in satellite imaging systems, including low resolution display, spaceborne browse, emergency image transmission, and real-time video downlinking.
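
    The failure mode of subsampling noted above (obscuring thin linear features such as roads and transmission lines) can be seen in a minimal sketch. A one-level Haar approximation, whose LL band is simply the 2x2 block average, stands in for the wavelets-based methods; this is an illustration of the principle, not the browse system's actual algorithm.

```python
def subsample(img):
    """Naive 2x reduction: keep every other row and column."""
    return [row[::2] for row in img[::2]]

def haar_ll(img):
    """One-level Haar approximation (LL band): average each 2x2 block."""
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

# 4x4 image: dark background with a one-pixel-wide bright row at odd
# index 1 (think of a road or transmission line).
img = [[0, 0, 0, 0],
       [255, 255, 255, 255],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]

print(subsample(img))  # [[0, 0], [0, 0]] -- the line vanishes
print(haar_ll(img))    # [[127.5, 127.5], [0.0, 0.0]] -- it survives, attenuated
```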

  12. Thermal Imaging Using Small-Aerial Platforms for Assessment of Crop Water Stress in Humid Subtropical Climates

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Leaf- or canopy-to-air temperature difference (hereafter called CATD) can provide information on crop energy status. Thermal imagery from agricultural aircraft or Unmanned Aerial Vehicles (UAVs) has the potential to provide thermal data for calculation of CATD and visual snapshots that can guide ...
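
    The CATD computation itself is a simple difference of means. A minimal sketch, with invented temperature values; the sign convention reflects the usual interpretation that stressed crops transpire less and run warmer than the surrounding air.

```python
def catd(canopy_temps_c, air_temp_c):
    """Mean canopy-to-air temperature difference from thermal pixels (deg C)."""
    mean_canopy = sum(canopy_temps_c) / len(canopy_temps_c)
    return mean_canopy - air_temp_c

# Water-stressed crops close stomata and run warmer than the air;
# well-watered crops typically run cooler. Values are illustrative.
stressed = catd([33.1, 33.6, 34.0], 31.0)   # positive CATD
watered = catd([28.9, 29.2, 29.5], 31.0)    # negative CATD
print(round(stressed, 2), round(watered, 2))
```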

  13. Video Golf

    NASA Technical Reports Server (NTRS)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  14. Pixel-wise Motion Detection in Persistent Aerial Video Surveillance

    SciTech Connect

    Vesom, G

    2012-03-23

    In ground-stabilized wide-area motion imagery (WAMI), stable objects with depth appear to have precessive motion, due to sensor movement, alongside objects undergoing true, independent motion in the scene. The computational objective is to disambiguate independent and structural motion in WAMI efficiently and robustly.
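
    The ambiguity stated above is easy to see in the naive baseline that any WAMI system must improve upon: per-pixel frame differencing fires equally on truly moving objects and on parallax-shifted static structures. A toy sketch with invented frames and threshold:

```python
def frame_difference(prev, curr, threshold=30):
    """Per-pixel motion mask from the absolute frame difference."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10, 200],
        [10, 10, 10, 200]]
# A bright object (value 200) shifted one pixel left. The differencer
# fires on both its old and new positions -- and would respond identically
# to a parallax-shifted building edge, which is exactly the ambiguity
# the abstract sets out to resolve.
curr = [[10, 10, 200, 10],
        [10, 10, 200, 10]]

print(frame_difference(prev, curr))  # [[0, 0, 1, 1], [0, 0, 1, 1]]
```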

  15. Imagery analysis and the need for standards

    NASA Astrophysics Data System (ADS)

    Grant, Barbara G.

    2014-09-01

    While efforts within the optics community focus on the development of high-quality systems and data products, comparatively little attention is paid to their use. Our standards for verification and validation are high, but in some user domains, standards are either lax or do not exist at all. In forensic imagery analysis, for example, standards exist to judge image quality, but do not exist to judge the quality of an analysis. In litigation, a high-quality analysis is by default the one performed by the victorious attorney's expert. This paper argues for the need to extend quality standards into the domain of imagery analysis, which is expected to increase in national visibility and significance with the increasing deployment of unmanned aerial vehicle (UAV, or "drone") sensors in the continental U.S. It argues that, like a good radiometric calibration, made as independent of the calibrated instrument as possible, a good analysis should be subject to standards, the most basic of which is the separation of issues of scientific fact from analysis results.

  16. A temporal and ecological analysis of the Huntington Beach Wetlands through an unmanned aerial system remote sensing perspective

    NASA Astrophysics Data System (ADS)

    Rafiq, Talha

    Wetland monitoring and preservation efforts have the potential to be enhanced with advanced remote sensing acquisition and digital image analysis approaches. Progress in the development and utilization of Unmanned Aerial Systems (UAS) and Unmanned Aerial Vehicles (UAV) as remote sensing platforms has offered significant spatial and temporal advantages over traditional aerial and orbital remote sensing platforms. Photogrammetric approaches to generate high spatial resolution orthophotos of UAV acquired imagery along with the UAV's low-cost and temporally flexible characteristics are explored. A comparative analysis of different spectral based land cover maps derived from imagery captured using UAV, satellite, and airplane platforms provide an assessment of the Huntington Beach Wetlands. This research presents a UAS remote sensing methodology encompassing data collection, image processing, and analysis in constructing spectral based land cover maps to augment the efforts of the Huntington Beach Wetlands Conservancy by assessing ecological and temporal changes at the Huntington Beach Wetlands.

  17. Phenomenology of passive multi-band submillimeter-wave imagery

    NASA Astrophysics Data System (ADS)

    Enestam, Sissi; Kajatkari, Perttu; Kivimäki, Olli; Leivo, Mikko M.; Rautiainen, Anssi; Tamminen, Aleksi A.; Luukanen, Arttu R.

    2016-05-01

    In 2015, Asqella Oy commercialized a passive multi-band submillimeter-wave camera system intended for use in walk-by personnel security screening applications. In this paper we study the imagery acquired with the prototype of the ARGON passive multi-band submm-wave video camera. To challenge the system and test its limits, imagery has been obtained in various environments with varying background surface temperatures, with people of different body types, with different clothing materials and numbers of layers of clothing and with objects of different materials. In addition to the phenomenological study, we discuss the detection statistics of the system, evaluated by running blind trials with human operators. While significant improvements have been made particularly in the software side since the beginning of the testing, the obtained imagery enables a comprehensive evaluation of the capabilities and challenges of the multiband submillimeter-wave imaging system.

  18. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market-leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to work effectively with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach, which automatically balances images into a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in ortho processing. A fully automated dense matching module produces high-precision digital surface models (DSMs), which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived, which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions that minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high-quality ortho product based on UltraCam imagery. UltraMap V3 is the first fully integrated and interactive solution supporting UltraCam images at their best in order to deliver DSM and ortho imagery.
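
    The cost-function seamline idea mentioned above is commonly realized as a minimal-cost path through the overlap region, so the cut passes where the two orthoimages agree and mosaic artifacts are least visible. The dynamic-programming sketch below illustrates the generic technique, not UltraMap's actual implementation; the cost grid is invented (e.g. per-pixel absolute difference between the overlapping images).

```python
def seamline(cost):
    """Column index of the minimal-cost top-to-bottom seam in each row."""
    h, w = len(cost), len(cost[0])
    acc = [cost[0][:]]  # accumulated cost, allowing diagonal moves
    for i in range(1, h):
        row = []
        for j in range(w):
            best = min(acc[i - 1][max(j - 1, 0):min(j + 2, w)])
            row.append(cost[i][j] + best)
        acc.append(row)
    # Greedy backtrack from the cheapest bottom cell (sketch-level only):
    seam = [min(range(w), key=lambda j: acc[-1][j])]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        candidates = range(max(j - 1, 0), min(j + 2, w))
        seam.append(min(candidates, key=lambda j2: acc[i][j2]))
    return seam[::-1]

# The images agree (cost 0) in column 1 and disagree elsewhere,
# so the seam runs straight down that column:
cost = [[9, 0, 9],
        [9, 0, 9],
        [9, 0, 9]]
print(seamline(cost))  # [1, 1, 1]
```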

  19. Imagery Production Specialist (AFSC 23350).

    ERIC Educational Resources Information Center

    Air Univ., Gunter AFS, Ala. Extension Course Inst.

    This course of study is designed to lead the student to full qualification as an Air Force imagery production specialist. The complete course consists of six volumes: general subjects in imagery production (39 hours), photographic fundamentals (57 hours), continuous imagery production (54 hours), chemical analysis and process control (volumes A…

  20. The Imagery-Creativity Connection.

    ERIC Educational Resources Information Center

    Daniels-McGhee, Susan; Davis, Gary A.

    1994-01-01

    This paper reviews historical highlights of the imagery-creativity connection, including early and contemporary accounts, along with notable examples of imagery in the creative process. It also looks at cross-modal imagery (synesthesia), a model of image-based creativity and the creative process, and implications for strengthening creativity by…

  1. Seaworthy Videos.

    ERIC Educational Resources Information Center

    Green, John O.

    1991-01-01

    Advice on creation of effective videotape recordings for use in alumni affairs is drawn from the experience of a number of colleges. Suggested uses include special events, lifelong learning, admissions, video magazines, and development. Specific do's and don'ts for production are also offered. (MSE)

  2. Interactive Video.

    ERIC Educational Resources Information Center

    Boyce, Carol

    1992-01-01

    A workshop on interactive video was designed for fourth and fifth grade students, with the goals of familiarizing students with laser disc technology, developing a cadre of trained students to train other students and staff, and challenging able learners to utilize higher level thinking skills while conducting a research project. (JDD)

  3. Mapping and Characterizing Selected Canopy Tree Species at the Angkor World Heritage Site in Cambodia Using Aerial Data

    PubMed Central

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia’s tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman’s rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables. PMID:25902148
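
    The agreement metric quoted above (Spearman's rho between field-measured and image-derived crown widths) is the Pearson correlation of ranks. A self-contained sketch using the standard rank-difference formula, without tie correction for brevity; the crown-width values are invented for illustration, not the study's data.

```python
def ranks(values):
    """1-based ranks of the values (no tie handling, for brevity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman rank correlation via the rank-difference formula."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

field_crowns = [4.2, 6.1, 5.0, 7.3, 3.8]   # metres, measured on the ground
image_crowns = [4.5, 5.9, 5.2, 7.0, 4.0]   # metres, from segmentation
print(spearman_rho(field_crowns, image_crowns))  # 1.0 (identical ordering)
```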

  4. Mapping and characterizing selected canopy tree species at the Angkor World Heritage site in Cambodia using aerial data.

    PubMed

    Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean

    2015-01-01

    At present, there is very limited information on the ecology, distribution, and structure of Cambodia's tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman's rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables. PMID:25902148

  6. Gypsy moth defoliation assessment: Forest defoliation is detectable from satellite imagery. [New England, New York, Pennsylvania, and New Jersey]

    NASA Technical Reports Server (NTRS)

    Moore, H. J. (Principal Investigator); Rohde, W. G.

    1975-01-01

    The author has identified the following significant results. ERTS-1 imagery obtained over eastern Pennsylvania during July 1973 indicates that forest defoliation is detectable from satellite imagery and correlates well with aerial visual survey data. It now appears that two damage classes (heavy and moderate-light) and areas of no visible defoliation can be detected and mapped from properly prepared false-color composite imagery. In areas where maple is the dominant species or in areas of small woodlots interspersed with agricultural areas, detection and subsequent mapping is more difficult.

  7. AERIAL MEASURING SYSTEM IN JAPAN

    SciTech Connect

    Lyons, Craig; Colton, David

    2012-01-01

    The U.S. Department of Energy National Nuclear Security Agency’s Aerial Measuring System deployed personnel and equipment to partner with the U.S. Air Force in Japan to conduct multiple aerial radiological surveys. These were the first and most comprehensive sources of actionable information for U.S. interests in Japan and provided early confirmation to the government of Japan as to the extent of the release from the Fukushima Daiichi Nuclear Power Generation Station. Many challenges were overcome quickly during the first 48 hours, including installation and operation of Aerial Measuring System equipment on multiple U.S. Air Force Japan aircraft, flying over difficult terrain, and flying with talented pilots who were unfamiliar with the Aerial Measuring System flight patterns. These all combined to make for a dynamic and non-textbook situation. In addition, the data challenges of the multiple and on-going releases, and integration with the Japanese government to provide valid aerial radiological survey products that both military and civilian customers could use to make informed decisions, were extremely complicated. The Aerial Measuring System Fukushima response provided insight in addressing these challenges and opened an opportunity to expand the Aerial Measuring System’s mission beyond the borders of the US.

  8. Processing Digital Imagery Data

    NASA Technical Reports Server (NTRS)

    Conner, P. K.; Junkin, B. G.; Graham, M. H.; Kalcic, M. T.; Seyfarth, B. R.

    1985-01-01

    Earth Resources Laboratory Applications Software (ELAS) is a geobased information system designed for analyzing and processing digital imagery data. ELAS offers users of remotely sensed data a wide range of easy-to-use capabilities in areas of land cover analysis. The ELAS system is written in FORTRAN and Assembler for batch or interactive processing.

  9. U. S. Department of Energy Aerial Measuring Systems

    SciTech Connect

    J. J. Lease

    1998-10-01

    The Aerial Measuring Systems (AMS) is an aerial surveillance system. This system consists of remote sensing equipment to include radiation detectors; multispectral, thermal, radar, and laser scanners; precision cameras; and electronic imaging and still video systems. This equipment, in varying combinations, is mounted in an airplane or helicopter and flown at different heights in specific patterns to gather various types of data. This system is a key element in the US Department of Energy's (DOE) national emergency response assets. The mission of the AMS program is twofold--first, to respond to emergencies involving radioactive materials by conducting aerial surveys to rapidly track and map the contamination that may exist over a large ground area and second, to conduct routinely scheduled, aerial surveys for environmental monitoring and compliance purposes through the use of credible science and technology. The AMS program evolved from an early program, begun by a predecessor to the DOE--the Atomic Energy Commission--to map the radiation that may have existed within and around the terrestrial environments of DOE facilities, which produced, used, or stored radioactive materials.

  10. Obtaining biophysical measurements of woody vegetation from high resolution digital aerial photography in tropical and arid environments: Northern Territory, Australia

    NASA Astrophysics Data System (ADS)

    Staben, G. W.; Lucieer, A.; Evans, K. G.; Scarth, P.; Cook, G. D.

    2016-10-01

    Biophysical parameters obtained from woody vegetation are commonly measured using field-based techniques which require significant investment in resources. Quantitative measurements of woody vegetation provide important information for ecological studies investigating landscape change. The fine spatial resolution of aerial photography enables identification of features such as trees and shrubs. Improvements in the spatial and spectral resolution of digital aerial photographic sensors have increased the possibility of using these data in quantitative remote sensing. Obtaining biophysical measurements from aerial photography has the potential to enable it to be used as a surrogate for the collection of field data. In this study, quantitative measurements obtained from digital aerial photography captured at ground sampling distances (GSD) of 15 cm (n = 50) and 30 cm (n = 52) were compared to woody biophysical parameters measured from 1 ha field plots. Supervised classification of the aerial photography using object-based image analysis was used to quantify woody and non-woody vegetation components in the imagery. There was a high correlation (r ≥ 0.92) between all field-measured woody canopy parameters and aerial-derived green woody cover measurements; however, only foliage projective cover (FPC) was found to be statistically significant (paired t-test; α = 0.01). There was no significant difference between measurements derived from imagery captured at either GSD of 15 cm or 30 cm over the same field site (n = 20). Live stand basal area (SBA) (m2 ha-1) was predicted from the aerial photographs by applying an allometric equation developed between field-measured live SBA and woody FPC. The results show that there was very little difference between live SBA predicted from FPC measured in the field or from aerial photography. The results of this study show that accurate woody biophysical parameters can be obtained from aerial photography from a range of woody vegetation
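
The allometric step described above, predicting live stand basal area from woody FPC, can be sketched as a simple least-squares fit. The linear form and every number below are illustrative assumptions, since the abstract does not give the study's equation or coefficients:

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical calibration pairs: field FPC (%) vs. live SBA (m^2/ha)
fpc = [10.0, 25.0, 40.0, 55.0]
sba = [2.0, 8.0, 14.0, 20.0]
a, b = fit_linear(fpc, sba)

def predict_sba(fpc_percent):
    """Predict live SBA from image-derived FPC using the fitted line."""
    return a * fpc_percent + b
```

Once fitted against field plots, the same line is applied to FPC measured from the aerial photography, which is how the paper compares field-derived and image-derived SBA.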

  11. Aerial thermography for energy conservation

    NASA Technical Reports Server (NTRS)

    Jack, J. R.

    1978-01-01

    Thermal infrared scanning from an aircraft is a convenient and commercially available means for determining relative rates of energy loss from building roofs. The need to conserve energy as fuel costs rise makes the mass survey capability of aerial thermography an attractive adjunct to community energy awareness programs. Background information on the principles of aerial thermography is presented. Thermal infrared scanning systems, flight and environmental requirements for data acquisition, preparation of thermographs for display, major users and suppliers of thermography, and suggested specifications for obtaining aerial scanning services are reviewed.

  12. Evaluating the Accuracy of dem Generation Algorithms from Uav Imagery

    NASA Astrophysics Data System (ADS)

    Ruiz, J. J.; Diaz-Mas, L.; Perez, F.; Viguria, A.

    2013-08-01

    In this work we evaluated how the use of different positioning systems affects the accuracy of Digital Elevation Models (DEMs) generated from aerial imagery obtained with Unmanned Aerial Vehicles (UAVs). In this domain, state-of-the-art DEM generation algorithms suffer from the typical errors of GPS/INS devices in the position measurement associated with each picture. The deviations of these measurements from real-world positions are on the order of meters. The experiments have been carried out using a small quadrotor in the indoor testbed at the Center for Advanced Aerospace Technologies (CATEC). This testbed houses a system that is able to track small markers mounted on the UAV and along the scenario with millimeter precision. This provides very precise position measurements, to which we can add random noise to simulate the errors of different GPS receivers. The results showed that final DEM accuracy clearly depends on the quality of the positioning information.
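
The noise-injection idea can be explored with a toy simulation: perturb the millimeter-accurate reference positions with Gaussian noise of a chosen standard deviation and report the resulting positional RMSE. This is only a crude proxy for DEM error, not the paper's evaluation pipeline:

```python
import math
import random

def dem_rmse_under_gps_noise(true_positions, sigma, seed=0):
    """Perturb each camera position (x, y, z in metres) with Gaussian
    noise of std `sigma` and return the RMSE of the perturbations,
    a simple proxy for how position error propagates into the DEM."""
    rng = random.Random(seed)
    sq = 0.0
    for (x, y, z) in true_positions:
        dx, dy, dz = (rng.gauss(0.0, sigma) for _ in range(3))
        sq += dx * dx + dy * dy + dz * dz
    return math.sqrt(sq / len(true_positions))
```

Running it with sigmas spanning survey-grade RTK (centimetres) to consumer receivers (metres) mimics the experiment's sweep over receiver quality.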

  13. Low-altitude aerial color digital photographic survey of the San Andreas Fault

    USGS Publications Warehouse

    Lynch, David K.; Hudnut, Kenneth W.; Dearborn, David S.P.

    2010-01-01

    Ever since 1858, when Gaspard-Félix Tournachon (pen name Félix Nadar) took the first aerial photograph (Professional Aerial Photographers Association 2009), the scientific value and popular appeal of such pictures have been widely recognized. Indeed, Nadar patented the idea of using aerial photographs in mapmaking and surveying. Since then, aerial imagery has flourished, eventually making the leap to space and to wavelengths outside the visible range. Yet until recently, the availability of such surveys has been limited to technical organizations with significant resources. Geolocation required extensive time and equipment, and distribution was costly and slow. While these situations still plague older surveys, modern digital photography and lidar systems acquire well-calibrated and easily shared imagery, although expensive, platform-specific software is sometimes still needed to manage and analyze the data. With current consumer-level electronics (cameras and computers) and broadband internet access, acquisition and distribution of large imaging data sets are now possible for virtually anyone. In this paper we demonstrate a simple, low-cost means of obtaining useful aerial imagery by reporting two new, high-resolution, low-cost, color digital photographic surveys of selected portions of the San Andreas fault in California. All pictures are in standard jpeg format. The first set of imagery covers a 92-km-long section of the fault in Kern and San Luis Obispo counties and includes the entire Carrizo Plain. The second covers the region from Lake of the Woods to Cajon Pass in Kern, Los Angeles, and San Bernardino counties (151 km) and includes Lone Pine Canyon soon after the ground was largely denuded by the Sheep Fire of October 2009. The first survey produced a total of 1,454 oblique digital photographs (4,288 x 2,848 pixels, average 6 Mb each) and the second produced 3,762 nadir images from an elevation of approximately 150 m above ground level (AGL) on the

  14. Photocopy of aerial photograph, Pacific Air Industries, Flight 123V, June ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of aerial photograph, Pacific Air Industries, Flight 123V, June 29, 1960 (University of California, Santa Barbara, Map and Imagery Collection) PORTION OF IRVINE RANCH SHOWING SITE CA-2275-A IN LOWER LEFT QUADRANT AND SITE CA-2275-B IN UPPER RIGHT QUADRANT (see separate photograph index for 2275-B) - Irvine Ranch Agricultural Headquarters, Carillo Tenant House, Southwest of Intersection of San Diego & Santa Ana Freeways, Irvine, Orange County, CA

  15. Aerial and satellite photography - A valuable tool for water quality investigations

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.; Van Domelen, J. F.; Klooster, S. A.

    1973-01-01

    An investigation of surface, volume, and bottom effects in Lake Superior is conducted. The objective of the reported study is the development of a reliable technique for monitoring and quantifying the water quality parameters associated with volume reflectance. Basic relationships are discussed, together with details concerning the equipment used in the studies, the assessment of water quality from aerial photos and satellite imagery, and the effects of oil on sky-light reflection.

  16. Applications of Landsat imagery to a coastal inlet stability study

    NASA Technical Reports Server (NTRS)

    Wang, Y.-H.

    1981-01-01

    Polcyn and Lyzenga (1975) and Middleton and Barber (1976) have demonstrated that it is possible to correlate the radiance values of multispectral imagery, such as Landsat imagery, with depth-related information. The present study is one more example of such an effort. Two sets of Landsat magnetic tapes were obtained and displayed on the screen of an Image-100 computer. Spectral analysis was performed to produce various signatures, their extent, and location. Subsequent ground truth observations and measurements were gathered by means of hydrographic surveys and low-altitude aerial photographs for interpretation and calibration of the Landsat data. Finally, a coastal engineering assessment based on the Landsat data was made. Recommendations regarding the navigational canal alignment and dredging practice are presented in the light of inlet stability.

  17. Height Gradient Approach for Occlusion Detection in Uav Imagery

    NASA Astrophysics Data System (ADS)

    Oliveira, H. C.; Habib, A. F.; Dal Poz, A. P.; Galo, M.

    2015-08-01

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. They are used for several different applications, such as mapping, publicity, security, natural disaster assistance, environmental monitoring, 3D building model generation, cadastral survey, etc. The imagery obtained by this kind of system has great potential. To use these images in true orthophoto generation projects related to urban scenes or areas where buildings are present, it is important to consider the occlusion caused by surface height variation, platform attitude, and perspective projection. Occlusions in UAV imagery are usually larger than in conventional airborne datasets due to the low altitude and excessive changes in orientation caused by the low weight of the platform and wind effects during the flight mission. Therefore, this paper presents a method for occlusion detection, together with some results obtained for images acquired by a UAV platform. The proposed method shows potential in occlusion detection and true orthophoto generation.
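
The kind of height-driven occlusion test involved can be illustrated with a minimal radial visibility sweep over a DSM profile. This is a generic sketch only; the authors' height-gradient algorithm is not spelled out in the abstract:

```python
import math

def occluded_cells(profile, cam_height, cell_size=1.0):
    """Flag occluded cells along a 1-D DSM profile seen from a camera
    directly above cell 0 at `cam_height` metres. A cell is occluded
    when its depression angle from the camera is no steeper than the
    steepest angle already seen along the ray (e.g. behind a building)."""
    occluded = []
    max_angle = -math.inf
    for i, h in enumerate(profile):
        d = max(i * cell_size, 1e-9)           # horizontal distance
        angle = math.atan2(h - cam_height, d)  # negative: looking down
        if angle <= max_angle:
            occluded.append(i)
        else:
            max_angle = angle
    return occluded
```

For a low camera (30 m) over a profile with a 20 m building at cell 2, the ground cells shadowed by the building's roof edge come back as occluded, which is exactly the region a true-orthophoto pipeline must fill from other views.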

  18. Bay-scale assessment of eelgrass beds using sidescan and video

    NASA Astrophysics Data System (ADS)

    Vandermeulen, Herb

    2014-12-01

    The assessment of the status of eelgrass ( Zostera marina) beds at the bay-scale in turbid, shallow estuaries is problematic. The bay-scale assessment (i.e., tens of km) of eelgrass beds usually involves remote sensing methods such as aerial photography or satellite imagery. These methods can fail if the water column is turbid, as is the case for many shallow estuaries on Canada's eastern seaboard. A novel towfish package was developed for the bay-scale assessment of eelgrass beds irrespective of water column turbidity. The towfish consisted of an underwater video camera with scaling lasers, sidescan sonar and a transponder-based positioning system. The towfish was deployed along predetermined transects in three northern New Brunswick estuaries. Maps were created of eelgrass cover and health (epiphyte load) and ancillary bottom features such as benthic algal growth, bacterial mats ( Beggiatoa) and oysters. All three estuaries had accumulations of material reminiscent of the oomycete Leptomitus, although it was not positively identified in our study. Tabusintac held the most extensive eelgrass beds of the best health. Cocagne had the lowest scores for eelgrass health, while Bouctouche was slightly better. The towfish method proved to be cost effective and useful for the bay-scale assessment of eelgrass beds to sub-meter precision in real time.

  19. Aerial surveys adjusted by ground surveys to estimate area occupied by black-tailed prairie dog colonies

    USGS Publications Warehouse

    Sidle, John G.; Augustine, David J.; Johnson, Douglas H.; Miller, Sterling D.; Cully, Jack F.; Reading, Richard P.

    2012-01-01

    Aerial surveys using line-intercept methods are one approach to estimate the extent of prairie dog colonies in a large geographic area. Although black-tailed prairie dogs (Cynomys ludovicianus) construct conspicuous mounds at burrow openings, aerial observers have difficulty discriminating between areas with burrows occupied by prairie dogs (colonies) versus areas of uninhabited burrows (uninhabited colony sites). Consequently, aerial line-intercept surveys may overestimate prairie dog colony extent unless adjusted by an on-the-ground inspection of a sample of intercepts. We compared aerial line-intercept surveys conducted over 2 National Grasslands in Colorado, USA, with independent ground-mapping of known black-tailed prairie dog colonies. Aerial line-intercepts adjusted by ground surveys using a single activity category adjustment overestimated colonies by ≥94% on the Comanche National Grassland and ≥58% on the Pawnee National Grassland. We present a ground-survey technique that involves 1) visiting on the ground a subset of aerial intercepts classified as occupied colonies plus a subset of intercepts classified as uninhabited colony sites, and 2) based on these ground observations, recording the proportion of each aerial intercept that intersects a colony and the proportion that intersects an uninhabited colony site. Where line-intercept techniques are applied to aerial surveys or remotely sensed imagery, this method can provide more accurate estimates of black-tailed prairie dog abundance and trends.
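
The adjustment in step 2 amounts to weighting each aerial intercept by its ground-checked occupied fraction. A minimal sketch with made-up intercept data (the technique as described, but not the authors' code or numbers):

```python
def adjusted_colony_length(aerial_intercepts, occupied_fraction):
    """Sum aerial intercept lengths (m) weighted by the ground-observed
    fraction of each intercept that actually crossed an occupied colony."""
    return sum(L * f for L, f in zip(aerial_intercepts, occupied_fraction))

def overestimate_percent(aerial_intercepts, occupied_fraction):
    """How far the raw aerial total overshoots the ground-adjusted total."""
    raw = sum(aerial_intercepts)
    adj = adjusted_colony_length(aerial_intercepts, occupied_fraction)
    return 100.0 * (raw - adj) / adj
```

With two intercepts of 100 m and 200 m of which only 50% and 25% crossed occupied colonies, the adjusted length is 100 m and the raw aerial total overestimates by 200%, the same flavor of bias the study reports.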

  20. Classification of wetlands vegetation using small scale color infrared imagery

    NASA Technical Reports Server (NTRS)

    Williamson, F. S. L.

    1975-01-01

    A classification system for Chesapeake Bay wetlands was derived from the correlation of film density classes and actual vegetation classes. The data processing programs used were developed by the Laboratory for the Applications of Remote Sensing. These programs were tested for their value in classifying natural vegetation, using digitized data from small scale aerial photography. Existing imagery and the vegetation map of Farm Creek Marsh were used to determine the optimal number of classes, and to aid in determining if the computer maps were a believable product.

  1. Integrating the services' imagery architectures

    NASA Astrophysics Data System (ADS)

    Mader, John F.

    1993-04-01

    Any military organization requiring imagery must deal with one or more of several architectures: the tactical architectures of the three military departments, the theater architectures, and their interfaces to a separate national architecture. A seamless, joint, integrated architecture must meet today's imagery requirements. The CIO's vision of 'the right imagery to the right people in the right format at the right time' would serve well as the objective of a joint, integrated architecture. A joint imagery strategy should be initially shaped by the four pillars of the National Military Strategy of the United States: strategic deterrence; forward presence; crisis response; and reconstitution. In a macro view, it must consist of a series of sub-strategies to include science and technology and research and development, maintenance of the imagery related industrial base, acquisition, resource management, and burden sharing. Common imagery doctrine must follow the imagery strategy. Most of all, control, continuity, and direction must be maintained with regard to organizations and systems development as the architecture evolves. These areas and more must be addressed to reach the long term goal of a joint, integrated imagery architecture. This will require the services and theaters to relinquish some sovereignty over at least systems development and acquisition. Nevertheless, the goal of a joint, integrated imagery architecture is feasible. The author presents arguments and specific recommendations to orient the imagery community in the direction of a joint, integrated imagery architecture.

  2. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. There are several existing tools, such as VisualSFM and the open-source project OpenSfM, to assist in this process; however, it is well known that pristine imagery is usually required to create meaningful 3D data. In military applications, such as the use of unmanned aerial vehicles (UAVs) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.
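
A controlled degradation of this kind can be sketched with NumPy as a box blur followed by additive Gaussian noise. These are generic choices for illustration, not necessarily the degradations the authors applied:

```python
import numpy as np

def degrade(image, noise_sigma=10.0, blur_radius=1, seed=0):
    """Degrade a grayscale uint8 image: box blur of the given radius,
    then additive Gaussian noise, clipped back to [0, 255]."""
    img = image.astype(np.float64)
    if blur_radius > 0:
        k = 2 * blur_radius + 1
        pad = np.pad(img, blur_radius, mode="edge")
        acc = np.zeros_like(img)
        for dy in range(k):          # sum the k*k shifted copies
            for dx in range(k):
                acc += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        img = acc / (k * k)
    if noise_sigma > 0:
        rng = np.random.default_rng(seed)
        img = img + rng.normal(0.0, noise_sigma, img.shape)
    return np.clip(img, 0, 255).astype(np.uint8)
```

Sweeping `noise_sigma` and `blur_radius` over a frame set, then feeding each degraded set to an SfM package, reproduces the shape of such an experiment: model quality versus degradation level.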

  3. Unmanned aerial systems for photogrammetry and remote sensing: A review

    NASA Astrophysics Data System (ADS)

    Colomina, I.; Molina, P.

    2014-06-01

    We discuss the evolution and state-of-the-art of the use of Unmanned Aerial Systems (UAS) in the field of Photogrammetry and Remote Sensing (PaRS). UAS, Remotely-Piloted Aerial Systems, Unmanned Aerial Vehicles or, simply, drones are a hot topic comprising a diverse array of aspects including technology, privacy rights, safety and regulations, and even war and peace. Modern photogrammetry and remote sensing identified the potential of UAS-sourced imagery more than thirty years ago. In the last five years, these two sister disciplines have developed technology and methods that challenge the current aeronautical regulatory framework and their own traditional acquisition and processing methods. Naïveté and ingenuity have combined off-the-shelf, low-cost equipment with sophisticated computer vision, robotics and geomatic engineering. The results are cm-level resolution and accuracy products that can be generated even with cameras costing a few hundred euros. In this review article, following a brief historic background and regulatory status analysis, we review the recent unmanned aircraft, sensing, navigation, orientation and general data processing developments for UAS photogrammetry and remote sensing with emphasis on the nano-micro-mini UAS segment.

  4. Oblique Aerial Photography Tool for Building Inspection and Damage Assessment

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.

    2014-11-01

    Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology across a growing geospatial market, complementary to the traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views, but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection as well as disaster assessment is addressed. The main idea is to inspect a building from four cardinal directions by using monoplotting functionalities. The developed application allows the user to measure building heights and distances and to digitize man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, as well as calculating the approximate height of buildings and ground distances, and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, quality of available parameters (DEM, calibration and orientation values), user expertise and measuring capability.

  5. Modeling aerial refueling operations

    NASA Astrophysics Data System (ADS)

    McCoy, Allen B., III

    Aerial Refueling (AR) is the act of offloading fuel from one aircraft (the tanker) to another aircraft (the receiver) in mid flight. Meetings between tanker and receiver aircraft are referred to as AR events and are scheduled to: escort one or more receivers across a large body of water; refuel one or more receivers; or train receiver pilots, tanker pilots, and boom operators. In order to efficiently execute the Aerial Refueling Mission, the Air Mobility Command (AMC) of the United States Air Force (USAF) depends on computer models to help it make tanker basing decisions, plan tanker sorties, schedule aircraft, develop new organizational doctrines, and influence policy. We have worked on three projects that have helped AMC improve its modeling and decision making capabilities. Optimal Flight Planning. Currently, Air Mobility simulation and optimization software packages depend on algorithms which iterate over three-dimensional fuel flow tables to compute aircraft fuel consumption under changing flight conditions. When a high degree of fidelity is required, these algorithms use a large amount of memory and CPU time. We have modeled the rate of aircraft fuel consumption with respect to aircraft gross weight, altitude, and airspeed. When implemented, this formula will decrease the amount of memory and CPU time needed to compute sortie fuel costs and cargo capacity values. We have also shown how this formula can be used in optimal control problems to find minimum-cost flight plans. Tanker Basing Demand Mismatch Index. Since 1992, AMC has relied on a Tanker Basing/AR Demand Mismatch Index which aggregates tanker capacity and AR demand data into six regions. This index was criticized because there were large gradients along regional boundaries. Meanwhile, tankers frequently cross regional boundaries to satisfy the demand for AR support. In response, we developed continuous functions to score locations with respect to their proximity to demand for AR support as well as their
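
A closed-form fuel-flow model of the kind described, flow as a function of gross weight, altitude and airspeed, can replace table interpolation. The functional form and coefficients below are illustrative assumptions, not AMC's fitted model:

```python
def fuel_flow(gross_weight, altitude, airspeed, c=(0.02, 1e-6, 0.001)):
    """Hypothetical fuel-flow model (lb/hr) from gross weight (lb),
    altitude (ft) and airspeed (kt); coefficients are placeholders."""
    c_w, c_alt, c_spd = c
    return c_w * gross_weight * (1.0 - c_alt * altitude) + c_spd * airspeed ** 2

def burn_over_leg(start_weight, altitude, airspeed, hours, dt=0.1):
    """Euler integration of fuel burned over a leg: gross weight falls
    as fuel burns, which in turn lowers the flow rate."""
    w, t, burned = start_weight, 0.0, 0.0
    while t < hours - 1e-12:
        step = min(dt, hours - t)
        df = fuel_flow(w, altitude, airspeed) * step
        w -= df
        burned += df
        t += step
    return burned
```

The coupling between weight and flow rate is why a closed-form model slots naturally into optimal-control formulations of flight planning: the burn rate becomes part of the state dynamics rather than a table lookup.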

  6. Aerial-Photointerpretation of landslides along the Ohio and Mississippi rivers

    USGS Publications Warehouse

    Su, W.-J.; Stohr, C.

    2000-01-01

    A landslide inventory was conducted along the Ohio and Mississippi rivers in the New Madrid Seismic Zone of southern Illinois, between the towns of Olmsted and Chester, Illinois. Aerial photography and field reconnaissance identified 221 landslides of three types: rock/debris falls, block slides, and undifferentiated rotational/translational slides. Most of the landslides are small- to medium-size, ancient rotational/translational features partially obscured by vegetation and modified by weathering. Five imagery sources were interpreted for landslides: 1:250,000-scale side-looking airborne radar (SLAR); 1:40,000-scale, 1:20,000-scale, and 1:6,000-scale black-and-white aerial photography; and low-altitude, oblique 35-mm color photography. Landslides were identified with three levels of confidence on the basis of distinguishing characteristics and ambiguous indicators. SLAR imagery permitted identification of a 520-hectare mega-landslide which would not have been identified on medium-scale aerial photography. The leaf-off, 35-mm color, oblique photography provided the best imagery for confident interpretation of the detailed features needed for smaller landslides.

  7. Ground-target detection system for digital video database

    NASA Astrophysics Data System (ADS)

    Liang, Yiqing; Huang, Jeffrey R.; Wolf, Wayne H.; Liu, Bede

    1998-07-01

    As more and more visual information is available on video, information indexing and retrieval of digital video data is becoming important. A digital video database embedded with visual information processing using image analysis and image understanding techniques such as automated target detection, classification, and identification can provide query results of higher quality. We address in this paper a robust digital video database system within which a target detection module is implemented and applied to the keyframe images extracted by our digital library system. The tasks and application scenarios under consideration involve indexing video with information about detection and verification of artificial objects that exist in video scenes. Based on the scenario that the video sequences are acquired by an onboard camera mounted on a Predator unmanned aircraft, we demonstrate how an incoming video stream is structured into different levels -- video program level, scene level, shot level, and object level -- based on the analysis of video contents using global imagery information. We then consider that the keyframe representation is most appropriate for video processing and it holds the property that can be used as the input for our detection module. As a result, video processing becomes feasible in terms of decreased computational resources spent and increased confidence in the (detection) decisions reached. The architecture we proposed can respond to the query of whether artificial structures and suspected combat vehicles are detected. The architecture for ground detection takes advantage of the image understanding paradigm and involves different methods to locate and identify artificial objects rather than natural background features such as trees, grass, and clouds. Edge detection, morphological transformation, and line and parallel-line detection using the Hough transform, applied to keyframe images at the video shot level, are introduced in our detection module.

  8. Enhancing voluntary imitation through attention and motor imagery.

    PubMed

    Bek, Judith; Poliakoff, Ellen; Marshall, Hannah; Trueman, Sophie; Gowen, Emma

    2016-07-01

    Action observation activates brain areas involved in performing the same action and has been shown to increase motor learning, with potential implications for neurorehabilitation. Recent work indicates that the effects of action observation on movement can be increased by motor imagery or by directing attention to observed actions. In voluntary imitation, activation of the motor system during action observation is already increased. We therefore explored whether imitation could be further enhanced by imagery or attention. Healthy participants observed and then immediately imitated videos of human hand movement sequences, while movement kinematics were recorded. Two blocks of trials were completed, and after the first block participants were instructed to imagine performing the observed movement (Imagery group, N = 18) or attend closely to the characteristics of the movement (Attention group, N = 15), or received no further instructions (Control group, N = 17). Kinematics of the imitated movements were modulated by instructions, with both Imagery and Attention groups being closer in duration, peak velocity and amplitude to the observed model compared with controls. These findings show that both attention and motor imagery can increase the accuracy of imitation and have implications for motor learning and rehabilitation. Future work is required to understand the mechanisms by which these two strategies influence imitation accuracy. PMID:26892882

  9. Aerial surveys and tagging of free-drifting icebergs using an unmanned aerial vehicle (UAV)

    NASA Astrophysics Data System (ADS)

    McGill, P. R.; Reisenbichler, K. R.; Etchemendy, S. A.; Dawe, T. C.; Hobson, B. W.

    2011-06-01

    Ship-based observations of free-drifting icebergs are hindered by the dangers of calving ice. To improve the efficacy and safety of these studies, new unmanned aerial vehicles (UAVs) were developed and then deployed in the Southern Ocean. These inexpensive UAVs were launched and recovered from a ship by scientific personnel with a few weeks of flight training. The UAVs sent real-time video back to the ship, allowing researchers to observe conditions in regions of the icebergs not visible from the ship. In addition, the UAVs dropped newly developed global positioning system (GPS) tracking tags, permitting researchers to record the precise position of the icebergs over time. The position reports received from the tags show that the motion of free-drifting icebergs changes rapidly and is a complex combination of both translation and rotation.
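    Position reports from such tags translate into drift kinematics via great-circle distance; a minimal sketch of the translation component (the fix coordinates and time interval below are invented, not from the study):

```python
import math

EARTH_RADIUS_M = 6371000.0

def drift_speed(lat1, lon1, lat2, lon2, dt_s):
    """Mean drift speed (m/s) between two GPS fixes dt_s seconds apart,
    using the haversine great-circle distance."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    d = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    return d / dt_s

# Two hypothetical fixes one hour apart, 0.01 deg of latitude (~1.11 km).
v = drift_speed(-61.00, -46.00, -60.99, -46.00, 3600.0)
```

    Estimating the rotation component would require two or more tags per iceberg, comparing the change in bearing between them over time.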

  10. Digital reproduction of historical aerial photographic prints for preserving a deteriorating archive

    USGS Publications Warehouse

    Luman, D.E.; Stohr, C.; Hunt, L.

    1997-01-01

    Aerial photography from the 1920s and 1930s is a unique record of historical information used by government agencies, surveyors, consulting scientists and engineers, lawyers, and individuals for diverse purposes. Unfortunately, the use of the historical aerial photographic prints has resulted in their becoming worn, lost, and faded. Few negatives exist for the earliest photography. A pilot project demonstrated that high-quality, precision scanning of historical aerial photography is an appealing alternative to traditional methods for reproduction. Optimum sampling rate varies from photograph to photograph, ranging between 31 and 42 µm/pixel for the USDA photographs tested. Inclusion of an index, such as a photomosaic or gazetteer, and the ability to view the imagery promptly upon request are highly desirable.
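    The scanning sample sizes quoted convert directly to scanner resolution in dpi and, given a photo scale, to a ground footprint per pixel; a back-of-envelope sketch (the 1:20,000 print scale is our assumption, not from the report):

```python
def um_per_pixel_to_dpi(um):
    """Convert a scanning sample size in micrometres/pixel to dots per inch
    (25.4 mm per inch)."""
    return 25400.0 / um

def ground_pixel_m(um, photo_scale_denominator):
    """Ground footprint (m) of one scanned pixel for a photo at 1:N scale."""
    return um * 1e-6 * photo_scale_denominator

dpi = um_per_pixel_to_dpi(31.0)    # finer end of the 31-42 um range tested
gsd = ground_pixel_m(31.0, 20000)  # hypothetical 1:20,000 print
```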

  11. The live service of video geo-information

    NASA Astrophysics Data System (ADS)

    Xue, Wu; Zhang, Yongsheng; Yu, Ying; Zhao, Ling

    2016-03-01

    In disaster rescue, emergency response and other scenarios, traditional aerial photogrammetry struggles to meet real-time monitoring and dynamic tracking demands. To achieve the live service of video geo-information, a system is designed and realized -- an unmanned helicopter equipped with a video sensor, POS, and a high-band radio. This paper briefly introduces the concept and design of the system. The workflow of the video geo-information live service is listed. Related experiments and some products are shown. In the end, conclusions and an outlook are given.

  12. Sediment Sampling in Estuarine Mudflats with an Aerial-Ground Robotic Team.

    PubMed

    Deusdado, Pedro; Guedes, Magno; Silva, André; Marques, Francisco; Pinto, Eduardo; Rodrigues, Paulo; Lourenço, André; Mendonça, Ricardo; Santana, Pedro; Corisco, José; Almeida, Susana Marta; Portugal, Luís; Caldeira, Raquel; Barata, José; Flores, Luis

    2016-01-01

    This paper presents a robotic team suited for bottom sediment sampling and retrieval in mudflats, targeting environmental monitoring tasks. The robotic team encompasses a four-wheel-steering ground vehicle, equipped with a drilling tool designed to be able to retain wet soil, and a multi-rotor aerial vehicle for dynamic aerial imagery acquisition. On-demand aerial imagery, properly fused on an aerial mosaic, is used by remote human operators for specifying the robotic mission and supervising its execution. This is crucial for the success of an environmental monitoring study, as often it depends on human expertise to ensure the statistical significance and accuracy of the sampling procedures. Although the literature is rich on environmental monitoring sampling procedures, in mudflats, there is a gap as regards including robotic elements. This paper closes this gap by also proposing a preliminary experimental protocol tailored to exploit the capabilities offered by the robotic system. Field trials in the south bank of the river Tagus' estuary show the ability of the robotic system to successfully extract and transport bottom sediment samples for offline analysis. The results also show the efficiency of the extraction and the benefits when compared to (conventional) human-based sampling. PMID:27618060

  15. Overall evaluation of LANDSAT (ERTS) follow on imagery for cartographic application

    NASA Technical Reports Server (NTRS)

    Colvocoresses, A. P. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. LANDSAT imagery can be operationally applied to the revision of nautical charts. The imagery depicts shallow seas in a form that permits accurate planimetric image mapping of features to 20 meters of depth where the conditions of water clarity and bottom reflection are suitable. LANDSAT data also provide an excellent simulation of the earth's surface for such applications as aeronautical charting and radar image correlation in aircraft and aircraft simulators. Radiometric enhancement, particularly edge enhancement, a technique only marginally successful with aerial photographs, has proved to be of high value when applied to LANDSAT data.
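    Edge enhancement of the kind found valuable here is commonly implemented as Laplacian sharpening; a minimal sketch on a toy grayscale grid (illustrative only, not the investigator's processing chain):

```python
def edge_enhance(img, k=1.0):
    """Sharpen a 2-D grayscale image by subtracting k times its 4-neighbour
    Laplacian (a common edge-enhancement filter; interior pixels only,
    borders copied unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            out[y][x] = img[y][x] - k * lap
    return out

# A vertical step edge from 10 to 20: pixels adjacent to the edge
# overshoot (darker on the dark side, brighter on the bright side),
# which is what makes the edge stand out visually.
img = [[10, 10, 20, 20] for _ in range(3)]
sharp = edge_enhance(img)
```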

  16. The Potential of Unmanned Aerial Vehicle for Large Scale Mapping of Coastal Area

    NASA Astrophysics Data System (ADS)

    Darwin, N.; Ahmad, A.; Zainon, O.

    2014-02-01

    Many countries in the tropical region are covered with cloud most of the time; hence, it is difficult to get clear images, especially from high-resolution satellite imagery. Aerial photogrammetry can be used, but most of the time the cloud problem still exists. Today, this problem can be solved using a system known as an unmanned aerial vehicle (UAV), where the aerial images can be acquired at low altitude and the system can fly under the cloud. The UAV system can be used in various applications, including mapping of coastal areas. The UAV system is equipped with an autopilot and an automatic method known as autonomous flying that can be utilized for data acquisition. To achieve high-resolution imagery, a compact digital camera of high resolution was used to acquire the aerial images at low altitude. In this study, the UAV system was employed to acquire aerial images of a coastal simulation model at low altitude. From the aerial images, photogrammetric image processing was executed to produce photogrammetric outputs such as a digital elevation model (DEM), contour lines and an orthophoto. In this study, ground control points (GCPs) and check points (CPs) were established using a conventional ground surveying method (i.e., total station). The GCPs are used for exterior orientation in the photogrammetric processes and the CPs for accuracy assessment based on root mean square error (RMSE). From this study, it was found that the UAV system can be used for large scale mapping of a coastal simulation model with accuracy at the millimeter level. It is anticipated that the same system could be used for large scale mapping of a real coastal area and produce good accuracy. Finally, the UAV system has great potential to be used for various applications that require accurate results or products in limited time and with less manpower.
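    The RMSE-based check-point assessment mentioned above is the standard formula; a sketch with invented coordinates (metres, millimetre-level residuals as in the study's accuracy claim):

```python
import math

def rmse_3d(surveyed, measured):
    """Root-mean-square error of photogrammetrically derived coordinates
    against independently surveyed check points (3-D residuals).
    Coordinates below are invented for illustration."""
    n = len(surveyed)
    sq = sum((xs - xm) ** 2 + (ys - ym) ** 2 + (zs - zm) ** 2
             for (xs, ys, zs), (xm, ym, zm) in zip(surveyed, measured))
    return math.sqrt(sq / n)

# Two hypothetical check points with millimetre-level residuals.
cps  = [(10.000, 5.000, 1.000), (12.000, 6.000, 1.200)]
meas = [(10.003, 5.000, 1.004), (11.996, 6.003, 1.200)]
err = rmse_3d(cps, meas)
```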

  17. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all information is not present in the original data; active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and successfully applies to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will expand the performance and
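    A fusion mechanism of the kind described can be sketched under the assumption of a simple well-exposedness weighting (a Gaussian around mid-grey, in the spirit of Mertens-style multi-exposure fusion; this is not the paper's algorithm):

```python
import math

def fuse_exposures(exposures, sigma=0.2):
    """Blend several exposures of the same scene pixel-by-pixel, weighting
    each sample by how well exposed it is (a Gaussian around mid-grey 0.5).
    Pixel values are normalized to [0, 1]; exposures are aligned lists."""
    fused = []
    for pix in zip(*exposures):
        weights = [math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))
                   for v in pix]
        total = sum(weights)
        fused.append(sum(w * v for w, v in zip(weights, pix)) / total)
    return fused

# One dark and one bright exposure of a two-pixel scene: for each pixel,
# the fused result leans toward whichever exposure is closer to mid-grey.
dark   = [0.05, 0.40]
bright = [0.55, 0.95]
out = fuse_exposures([dark, bright])
```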

  18. Hyperspectral imagery and segmentation

    NASA Astrophysics Data System (ADS)

    Wellman, Mark C.; Nasrabadi, Nasser M.

    2002-07-01

    Hyperspectral imagery (HSI), a passive infrared imaging technique which creates images of fine resolution across the spectrum, is currently being considered for Army tactical applications. An important tactical application of infrared (IR) hyperspectral imagery is the detection of low contrast targets, including those targets that may employ camouflage, concealment and deception (CCD) techniques [1,2]. Spectral reflectivity characteristics were used for efficient segmentation between different materials such as painted metal, vegetation and soil for visible to near-IR bands in the range of 0.46-1.0 microns, as shown previously by Kwon et al [3]. We are currently investigating HSI where the wavelength spans from 7.5-13.7 microns. The energy in this range of wavelengths is almost entirely emitted rather than reflected; therefore, the gray level of a pixel is a function of the temperature and emissivity of the object. This is beneficial since light level and reflection will not need to be considered in the segmentation. We will present results of a step-wise segmentation analysis on the long-wave infrared (LWIR) hyperspectrum utilizing various classifier architectures applied to the full-band, broad-band and narrow-band features derived from the Spatially Enhanced Broadband Array Spectrograph System (SEBASS) database. Stepwise segmentation demonstrates some of the difficulties in the multi-class case. These results give an indication of the added capability the hyperspectral imagery and associated algorithms will bring to bear on the target acquisition problem.

  19. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white

  20. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
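    Stabilization of the kind VISAR performs rests on estimating inter-frame motion; a brute-force sketch of the translational component (exhaustive search over small offsets, minimizing the sum of squared differences; VISAR's actual algorithm also handles rotation and zoom, and is not reproduced here):

```python
def estimate_shift(ref, cur, max_shift=2):
    """Estimate the (dy, dx) translation of frame `cur` relative to frame
    `ref` by exhaustive search over small integer offsets, scoring each
    candidate by mean squared difference on the overlapping region."""
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ssd, n = 0, 0
            for y in range(max(0, dy), min(h, h + dy)):
                for x in range(max(0, dx), min(w, w + dx)):
                    ssd += (cur[y][x] - ref[y - dy][x - dx]) ** 2
                    n += 1
            if ssd / n < best_score:
                best_score, best = ssd / n, (dy, dx)
    return best

# A bright feature that moved one pixel to the right between frames:
# the recovered shift is (dy, dx) = (0, 1), which the stabilizer would
# then undo by shifting the current frame back.
ref = [[0, 0, 0, 0, 0], [0, 0, 9, 0, 0], [0, 0, 0, 0, 0]]
cur = [[0, 0, 0, 0, 0], [0, 0, 0, 9, 0], [0, 0, 0, 0, 0]]
shift = estimate_shift(ref, cur, max_shift=1)
```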

  2. Aerospace video imaging systems for rangeland management

    NASA Technical Reports Server (NTRS)

    Everitt, J. H.; Escobar, D. E.; Richardson, A. J.; Lulla, K.

    1990-01-01

    This paper presents an overview on the application of airborne video imagery (VI) for assessment of rangeland resources. Multispectral black-and-white video with visible/NIR sensitivity; color-IR, normal color, and black-and-white MIR; and thermal IR video have been used to detect or distinguish among many rangeland and other natural resource variables such as heavy grazing, drought-stressed grass, phytomass levels, burned areas, soil salinity, plant communities and species, and gopher and ant mounds. The digitization and computer processing of VI have also been demonstrated. VI does not have the detailed resolution of film, but these results have shown that it has considerable potential as an applied remote sensing tool for rangeland management. In the future, spaceborne VI may provide additional data for monitoring and management of rangelands.

  3. Mapping Urban Ecosystem Services Using High Resolution Aerial Photography

    NASA Astrophysics Data System (ADS)

    Pilant, A. N.; Neale, A.; Wilhelm, D.

    2010-12-01

    Ecosystem services (ES) are the many life-sustaining benefits we receive from nature: e.g., clean air and water, food and fiber, cultural-aesthetic-recreational benefits, pollination and flood control. The ES concept is emerging as a means of integrating complex environmental and economic information to support informed environmental decision making. The US EPA is developing a web-based National Atlas of Ecosystem Services, with a component for urban ecosystems. Currently, the only wall-to-wall, national-scale land cover data suitable for this analysis is the National Land Cover Data (NLCD) at 30 m spatial resolution with 5- and 10-year updates. However, aerial photography is acquired at higher spatial resolution (0.5-3 m) and more frequently (1-5 years, typically) for most urban areas. Land cover was mapped in Raleigh, NC using freely available USDA National Agricultural Imagery Program (NAIP) photography with 1 m ground sample distance to test the suitability of aerial photography for urban ES analysis. Automated feature extraction techniques were used to extract five land cover classes, and an accuracy assessment was performed using standard techniques. Results will be presented that demonstrate applications to mapping ES in urban environments: greenways, corridors, fragmentation, habitat, impervious surfaces, dark and light pavement (urban heat island). At this scale, land cover and related ecosystem services can be examined at the 2-10 m scale; small features such as individual trees and sidewalks are visible and mappable. (Figures: automated feature extraction results mapped over a NAIP color aerial photograph; classified aerial photo of downtown Raleigh, NC, with red = impervious surface, dark green = trees, light green = grass, tan = soil.)
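    A standard accuracy assessment of the kind mentioned reduces to a confusion matrix and overall accuracy against reference samples; a sketch using class names from the figure caption (the sample labels are invented):

```python
def overall_accuracy(reference, predicted, classes):
    """Build a confusion matrix (rows = reference, columns = predicted)
    and compute overall accuracy for a land-cover classification.
    Labels below are invented illustration data."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for r, p in zip(reference, predicted):
        m[idx[r]][idx[p]] += 1
    correct = sum(m[i][i] for i in range(len(classes)))
    return m, correct / len(reference)

classes = ["impervious", "trees", "grass", "soil"]
ref  = ["trees", "trees", "grass", "impervious", "soil", "grass"]
pred = ["trees", "grass", "grass", "impervious", "soil", "grass"]
matrix, acc = overall_accuracy(ref, pred, classes)  # 5 of 6 correct
```

    Real assessments would also report per-class producer's and user's accuracies, which are row and column ratios of the same matrix.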

  4. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Efficient mapping from unmanned aerial platforms cannot rely on aerial triangulation using known ground control points. The cost and time of setting ground control, added to the need for increased overlap between flight lines, severely limits the ability of small VTOL platforms, in particular, to handle mapping-grade missions of all but the very smallest survey areas. Applanix has brought its experience in manned photogrammetry applications to this challenge, setting out the requirements for increasing the efficiency of mapping operations from small UAVs, using survey-grade GNSS-Inertial technology to accomplish direct georeferencing of the platform and/or the imaging payload. The Direct Mapping Solution for Unmanned Aerial Vehicles (DMS-UAV) is a complete and ready-to-integrate OEM solution for Direct Georeferencing (DG) on unmanned aerial platforms. Designed as a solution for systems integrators to create mapping payloads for UAVs of all types and sizes, the DMS produces directly georeferenced products for any imaging payload (visual, LiDAR, infrared, multispectral imaging, even video). Additionally, DMS addresses the airframe's requirements for high-accuracy position and orientation for such tasks as precision RTK landing and Precision Orientation for Air Data Systems (ADS), Guidance and Control. This paper presents results using a DMS comprised of an Applanix APX-15 UAV with a Sony a7R camera to produce highly accurate orthorectified imagery without Ground Control Points on a Microdrones md4-1000 platform conducted by Applanix and Avyon. APX-15 UAV is a single-board, small-form-factor GNSS-Inertial system designed for use on small, lightweight platforms. The Sony a7R is a prosumer digital RGB camera sensor, with a 36MP, 4.9-micron CCD producing images at 7360 columns by 4912 rows. It was configured with a 50mm AF-S Nikkor f/1.8 lens and subsequently with a 35mm Zeiss Sonnar T* FE F2.8 lens. Both the camera/lens combinations and the APX-15 were mounted to a
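    The 4.9-micron pixel pitch and 50 mm lens quoted above determine the ground sample distance once a flying height is chosen; a back-of-envelope sketch (the 120 m altitude is our assumption, not from the record):

```python
def ground_sample_distance(altitude_m, pixel_pitch_um, focal_length_mm):
    """Ground sample distance (metres/pixel) for a nadir-pointing frame
    camera: GSD = H * pixel_pitch / focal_length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Sony a7R (4.9 um pixels) with the 50 mm lens at a hypothetical 120 m AGL:
# roughly 1.2 cm per pixel on the ground.
gsd = ground_sample_distance(120.0, 4.9, 50.0)
```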

  5. Video data compression using MPEG-2 and frame decimation

    NASA Astrophysics Data System (ADS)

    Leachtenauer, Jon C.; Richardson, Mark; Garvin, Paul

    1999-07-01

    Video systems have seen a resurgence in military applications since the recent proliferation of unmanned aerial vehicles (UAVs). Video systems offer light weight, low cost, and proven COTS technology. Video has not proven to be a panacea, however, as generally available storage and transmission systems are limited in bandwidth. Digital video systems collect data at rates of up to 270 Mbs; typical transmission bandwidths range from 9600 baud to 10 Mbs. Either extended transmission times or data compression are needed to handle video bit streams. Video compression algorithms have been developed and evaluated in the commercial broadcast and entertainment industry. The Moving Picture Experts Group developed MPEG-1 to compress video to CD-ROM bandwidths and MPEG-2 to cover the range of 5-10 Mbs and higher. Commercial technology has not extended to lower bandwidths, nor has the impact of MPEG compression for military applications been demonstrated. Using digitized video collected by UAV systems, the effects of data compression on image interpretability and task satisfaction were investigated. Using both MPEG-2 and frame decimation, video clips were compressed to rates of 6 Mbs, 1.5 Mbs, and 0.256 Mbs. Experienced image analysts provided task satisfaction estimates and National Image Interpretability Rating Scale ratings on the compressed and uncompressed video clips. Results were analyzed to define the effects of compression rate and method on interpretability and task satisfaction. Lossless compression was estimated to occur at approximately 10 Mbs, and frame decimation was superior to MPEG-2 at low bit rates.
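    Frame decimation, one of the two compression methods compared, is simple to state precisely: keep every Nth frame, cutting the bit rate roughly in proportion. A sketch (the 30 fps and 6 Mbs figures below are illustrative, not the study's exact parameters):

```python
def decimate(frames, keep_every):
    """Frame decimation: keep every Nth frame of a sequence."""
    return frames[::keep_every]

def decimated_bitrate(bitrate_bps, keep_every):
    """Approximate bit rate after decimation, assuming independently
    coded frames of similar size (a simplification)."""
    return bitrate_bps / keep_every

frames = list(range(30))          # one second of 30 fps video
kept = decimate(frames, 6)        # effective 5 fps
rate = decimated_bitrate(6_000_000, 6)
```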

  6. Kinesthetic imagery of musical performance

    PubMed Central

    Lotze, Martin

    2013-01-01

    Musicians use different kinds of imagery. This review focuses on kinesthetic imagery, which has been shown to be an effective complement to actively playing an instrument. However, experience in actual movement performance seems to be a requirement for a recruitment of those brain areas representing movement ideation during imagery. An internal model of movement performance might be more differentiated when training has been more intense or simply performed more often. Therefore, with respect to kinesthetic imagery, these strategies are predominantly found in professional musicians. There are a few possible reasons as to why kinesthetic imagery is used in addition to active training; one example is the need for mental rehearsal of the technically most difficult passages. Another reason for mental practice is that mental rehearsal of the piece helps to improve performance if the instrument is not available for actual training as is the case for professional musicians when they are traveling to various appearances. Overall, mental imagery in musicians is not necessarily specific to motor, somatosensory, auditory, or visual aspects of imagery, but integrates them all. In particular, the audiomotor loop is highly important, since auditory aspects are crucial for guiding motor performance. All these aspects result in a distinctive representation map for the mental imagery of musical performance. This review summarizes behavioral data, and findings from functional brain imaging studies of mental imagery of musical performance. PMID:23781196

  7. The evolution of wireless video transmission technology for surveillance missions

    NASA Astrophysics Data System (ADS)

    Durso, Christopher M.; McCulley, Eric

    2012-06-01

    Covert and overt video collection systems as well as tactical unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) can deliver real-time video intelligence direct from sensor systems to command staff, providing unprecedented situational awareness and tactical advantage. Today's tactical video communications system must be secure, compact, lightweight, and fieldable in quick reaction scenarios. Four main technology implementations can be identified with the evolutionary development of wireless video transmission systems. Analog FM led to single-carrier digital modulation, which gave way to multi-carrier orthogonal modulation. Each of these systems is currently in use today. Depending on the operating environment and size, weight, and power limitations, a system designer may choose one over another to support tactical video collection missions.

  8. Standardized rendering from IR surveillance motion imagery

    NASA Astrophysics Data System (ADS)

    Prokoski, F. J.

    2014-06-01

    Government agencies, including defense and law enforcement, increasingly make use of video from surveillance systems and camera phones owned by non-government entities. Making advanced and standardized motion imaging technology available to private and commercial users at cost-effective prices would benefit all parties. In particular, incorporating thermal infrared into commercial surveillance systems offers substantial benefits beyond night vision capability. Face rendering is a process to facilitate exploitation of thermal infrared surveillance imagery from the general area of a crime scene, to assist investigations with and without cooperating eyewitnesses. Face rendering automatically generates greyscale representations similar to police artist sketches for faces in surveillance imagery collected from proximate locations and times to a crime under investigation. Near-real-time generation of face renderings can provide law enforcement with an investigation tool to assess witness memory and credibility, and integrate reports from multiple eyewitnesses. Renderings can be quickly disseminated through social media to warn of a person who may pose an immediate threat, and to solicit the public's help in identifying possible suspects and witnesses. Renderings are pose-standardized so as to not divulge the presence and location of eyewitnesses and surveillance cameras. Incorporation of thermal infrared imaging into commercial surveillance systems will significantly improve system performance, and reduce manual review times, at an incremental cost that will continue to decrease. Benefits to criminal justice would include improved reliability of eyewitness testimony and improved accuracy of distinguishing among minority groups in eyewitness and surveillance identifications.

  9. Preliminary Results from the Portable Imagery Quality Assessment Test Field (PIQuAT) of Uav Imagery for Imagery Reconnaissance Purposes

    NASA Astrophysics Data System (ADS)

    Dabrowski, R.; Orych, A.; Jenerowicz, A.; Walczykowski, P.

    2015-08-01

    The article presents a set of initial results of a quality assessment study of 2 different types of sensors mounted on an unmanned aerial vehicle, carried out over an especially designed and constructed test field. The PIQuAT (Portable Imagery Quality Assessment Test Field) field had been designed especially for the purposes of determining the quality parameters of UAV sensors, especially in terms of the spatial, spectral and radiometric resolutions and chosen geometric aspects. The sensors used include a multispectral framing camera and a high-resolution RGB sensor. The flights were conducted from a number of altitudes ranging from 10 m to 200 m above the test field. Acquiring data at a number of different altitudes allowed the authors to evaluate the obtained results and check for possible linearity of the calculated quality assessment parameters. The radiometric properties of the sensors were evaluated from images of the grayscale target section of the PIQuAT field. The spectral resolution of the imagery was determined based on a number of test samples with known spectral reflectance curves. These reference spectral reflectance curves were then compared with spectral reflectance coefficients at the wavelengths registered by the miniMCA camera. Before conducting all of these experiments in field conditions, the interior orientation parameters were calculated for the miniMCA and RGB sensors in laboratory conditions. These parameters include: the actual pixel size on the detector, distortion parameters, calibrated focal length (CFL) and the coordinates of the principal point of autocollimation (for the miniMCA, each of the six channels separately).

  10. Real-time people and vehicle detection from UAV imagery

    NASA Astrophysics Data System (ADS)

    Gaszczak, Anna; Breckon, Toby P.; Han, Jiwan

    2011-01-01

    A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combined with multivariate Gaussian shape matching. The results presented show the successful detection of vehicles and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming to detect each object of interest (vehicle/person) at least once in the environment (i.e. per search pattern flight path) rather than every object in every image frame. Currently the detection rate for people is ~70% and for cars ~80%, although the overall episodic object detection rate for each flight pattern exceeds 90%.
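    The "at least once per flight path" goal can be modelled crudely by compounding a per-frame detection rate over repeated observations; a toy calculation, not the authors' evaluation method (real per-frame detections are correlated):

```python
def episodic_rate(per_frame_p: float, n_frames: int) -> float:
    """Probability of detecting an object at least once across n independent
    frame observations (toy independence model)."""
    return 1.0 - (1.0 - per_frame_p) ** n_frames

# Even a modest per-frame rate compounds toward a high episodic figure:
print(round(episodic_rate(0.2, 10), 3))  # 0.893
```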

  11. Airborne Hyperspectral Imagery for the Detection of Agricultural Crop Stress

    NASA Technical Reports Server (NTRS)

    Cassady, Philip E.; Perry, Eileen M.; Gardner, Margaret E.; Roberts, Dar A.

    2001-01-01

    Multispectral digital imagery from aircraft or satellite is presently being used to derive basic assessments of crop health for growers and others involved in the agricultural industry. Research indicates that narrow band stress indices derived from hyperspectral imagery should have improved sensitivity to provide more specific information on the type and cause of crop stress. Under funding from the NASA Earth Observation Commercial Applications Program (EOCAP) we are identifying and evaluating scientific and commercial applications of hyperspectral imagery for the remote characterization of agricultural crop stress. During the summer of 1999 a field experiment was conducted with varying nitrogen treatments on a production corn field in eastern Nebraska. The AVIRIS (Airborne Visible-Infrared Imaging Spectrometer) hyperspectral imager was flown at two critical dates during crop development, at two different altitudes, providing images with approximately 18m pixels and 3m pixels. Simultaneous supporting soil and crop characterization included spectral reflectance measurements above the canopy, biomass characterization, soil sampling, and aerial photography. In this paper we describe the experiment and results, and examine the following three issues relative to the utility of hyperspectral imagery for scientific study and commercial crop stress products: (1) Accuracy of reflectance-derived stress indices relative to conventional measures of stress. We compare reflectance-derived indices (both field radiometer and AVIRIS) with applied nitrogen and with leaf-level measurements of nitrogen availability and chlorophyll concentrations over the experimental plots (4 replications of 5 different nitrogen levels); (2) Ability of the hyperspectral sensors to detect sub-pixel areas under crop stress. We applied the stress indices to both the 3m and 18m AVIRIS imagery for the entire production corn field using several sub-pixel areas within the field to compare the relative
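    As a point of reference for the reflectance-derived indices discussed above, the classic broadband NDVI that narrow-band stress indices aim to improve on can be computed as follows (illustrative only; the study's specific indices are not reproduced here):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectance; stressed canopies typically show lower values."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR relative to red:
print(ndvi(0.5, 0.1) > ndvi(0.3, 0.15))  # True
```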

  12. Marketing through Video Presentations.

    ERIC Educational Resources Information Center

    Newhart, Donna

    1989-01-01

    Discusses the advantages of using video presentations as marketing tools. Includes information about video news releases, public service announcements, and sales/marketing presentations. Describes the three stages in creating a marketing video: preproduction planning; production; and postproduction. (JOW)

  13. Information fusion performance evaluation for motion imagery data using mutual information: initial study

    NASA Astrophysics Data System (ADS)

    Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik

    2015-06-01

    As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as in content-based image retrieval (CBIR). Imagery data is segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms which compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it will compute detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
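    Of the metrics named above, mutual information can be computed directly from a joint histogram of two images; this is a minimal sketch, not the paper's implementation:

```python
import numpy as np

def mutual_information(joint_hist):
    """Mutual information (bits) between two images from their joint
    histogram (rows: intensity bins of image A, cols: bins of image B)."""
    p = np.asarray(joint_hist, dtype=float)
    p = p / p.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of image A
    py = p.sum(axis=0, keepdims=True)   # marginal of image B
    nz = p > 0                          # avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Two identical binary images share all their information (1 bit):
print(mutual_information([[5.0, 0.0], [0.0, 5.0]]))  # 1.0
```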

  14. Dynamics of aerial target pursuit

    NASA Astrophysics Data System (ADS)

    Pal, S.

    2015-12-01

    During pursuit and predation, aerial species engage in multitasking behavior that involves simultaneous target detection, tracking, decision-making, approach and capture. The mobility of both the pursuer and the target in a three-dimensional environment during predation makes the capture task highly complex. Many researchers have studied and analyzed prey capture dynamics in different aerial species such as insects and bats. This article focuses on reviewing the capture strategies adopted by these species while relying on different sensory modalities (vision and acoustics) for navigation. In conclusion, the neural basis of these capture strategies and some applications of these strategies in bio-inspired navigation and control of engineered systems are discussed.

  15. Assessing the Impacts of US Landfall Hurricanes in 2012 using Aerial Remote Sensing

    NASA Astrophysics Data System (ADS)

    Bevington, John S.

    2013-04-01

    Remote sensing has become a widely-used technology for assessing and evaluating the extent and severity of impacts of natural disasters worldwide. Optical and radar data collected by air- and space-borne sensors have supported humanitarian and economic decision-making for over a decade. Advances in image spatial resolution and pre-processing speeds have meant images with centimetre spatial resolution are now available for analysis within hours following severe disaster events. This paper offers a retrospective view of recent large-scale responses to two of the major storms from the 2012 Atlantic hurricane season: Hurricane Isaac and post-tropical cyclone ("superstorm") Sandy. Although weak on the Saffir-Simpson hurricane wind scale, these slow-moving storms produced intense rainfall and coastal storm surges on the order of several metres on the Louisiana and Mississippi Gulf Coast (Isaac) and the Atlantic Seaboard (Sandy) of the United States. Data were generated for both events through interpretation of a combination of two types of aerial imagery: high spatial resolution optical imagery captured by fixed aerial sensors deployed by the National Oceanic and Atmospheric Administration (NOAA), and digital single lens reflex (DSLR) images captured by volunteers from the US Civil Air Patrol (CAP). Imagery for these events was collected over a period of days following the storms' landfall in the US, with the availability of aerial data far exceeding that of sub-metre satellite imagery. The imagery described was collected as vertical views (NOAA) and oblique views (CAP) over the whole affected coastal and major riverine areas. A network of over 150 remote sensing experts systematically and manually processed images through visual interpretation, culminating in hundreds of thousands of individual properties identified as damaged or destroyed by wind or surge. A discussion is presented on the challenges of responding at such a fine level of spatial granularity for coastal

  16. A Methodological Intercomparison of Topographic and Aerial Photographic Habitat Survey Techniques

    NASA Astrophysics Data System (ADS)

    Bangen, S. G.; Wheaton, J. M.; Bouwes, N.

    2011-12-01

    A severe decline in Columbia River salmonid populations and the subsequent Federal listing of subpopulations have mandated both the monitoring of populations and the evaluation of the status of available habitat. Numerous field and analytical methods exist to assist in the quantification of the abundance and quality of in-stream habitat for salmonids. These methods range from field 'stick and tape' surveys to spatially explicit topographic and aerial photographic surveys from a mix of ground-based and remotely sensed airborne platforms. Although several previous studies have assessed the quality of specific individual survey methods, an intercomparison of competing techniques across a diverse range of habitat conditions (wadeable headwater channels to non-wadeable mainstem channels) has not yet been undertaken. In this study, we seek to quantify the relative quality (i.e. accuracy, precision, extent) of habitat metrics and inventories derived from an array of ground-based and remotely sensed surveys of varying degrees of sophistication, as well as the effort and cost of conducting the surveys. Over the summer of 2010, seven sample reaches of varying habitat complexity were surveyed in the Lemhi River Basin, Idaho, USA. Complete topographic surveys were attempted at each site using rtkGPS, total station, ground-based LiDAR and traditional airborne LiDAR. Separate high spatial resolution aerial imagery surveys were acquired using a tethered blimp, a UAV, and a traditional fixed-wing aircraft. Here we also developed a relatively simple methodology for deriving bathymetry from aerial imagery that could be readily employed by in-stream habitat monitoring programs. The quality of bathymetric maps derived from aerial imagery was compared with rtkGPS topographic data.
The results are helpful for understanding the strengths and weaknesses of different approaches in specific conditions, and how a hybrid of data acquisition methods can be used to build a more complete

  17. Identification of irrigated crop types from ERTS-1 density contour maps and color infrared aerial photography. [Wyoming

    NASA Technical Reports Server (NTRS)

    Marrs, R. W.; Evans, M. A.

    1974-01-01

    The author has identified the following significant results. The crop types of a Great Plains study area were mapped from color infrared aerial photography. Each field was positively identified from field checks in the area. Enlarged (50x) density contour maps were constructed from three ERTS-1 images taken in the summer of 1973. The map interpreted from the aerial photography was compared to the density contour maps and the accuracy of the ERTS-1 density contour map interpretations was determined. Changes in the vegetation during the growing season and harvest periods were detectable on the ERTS-1 imagery. Density contouring aids in the detection of such changes.

  18. AERIAL OF VEHICLE ASSEMBLY BUILDING & SURROUNDING AREA

    NASA Technical Reports Server (NTRS)

    1977-01-01

    AERIAL OF VEHICLE ASSEMBLY BUILDING & SURROUNDING AREA KSC-377C-0082.41 116-KSC-377C-82.41, P-15877, ARCHIVE-04151 Aerial view - Shuttle construction progress - VAB and Orbiter Processing Facilities - direction northwest.

  19. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding the relation between visual imagery and visual perception remains an open issue. Many studies have tried to understand whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the necessity of activation of primary visual areas during imagery. Here we review…

  20. Imagery Rescripting for Personality Disorders

    ERIC Educational Resources Information Center

    Arntz, Arnoud

    2011-01-01

    Imagery rescripting is a powerful technique that can be successfully applied in the treatment of personality disorders. For personality disorders, imagery rescripting is not used to address intrusive images but to change the implicational meaning of schemas and childhood experiences that underlie the patient's problems. Various mechanisms that may…

  1. Guided Imagery in Career Awareness.

    ERIC Educational Resources Information Center

    Wilson, William C.; Eddy, John

    1982-01-01

    Suggests guided imagery can stimulate clients to become more aware of the role of personal values, attitudes, and beliefs in career decision making. Presents guidelines, examples, and implications to enable rehabilitation counselors to use guided imagery exercises in career counseling. (Author)

  2. Floating aerial LED signage based on aerial imaging by retro-reflection (AIRR).

    PubMed

    Yamamoto, Hirotsugu; Tomiyama, Yuka; Suyama, Shiro

    2014-11-01

    We propose a floating aerial LED signage technique by utilizing retro-reflection. The proposed display is composed of LEDs, a half mirror, and retro-reflective sheeting. Directivity of the aerial image formation and size of the aerial image have been investigated. Furthermore, a floating aerial LED sign has been successfully formed in free space.

  3. The use of historical imagery in the remediation of an urban hazardous waste site

    USGS Publications Warehouse

    Slonecker, E.T.

    2011-01-01

    The information derived from the interpretation of historical aerial photographs is perhaps the most basic multitemporal application of remote-sensing data. Aerial photographs dating back to the early 20th century can be extremely valuable sources of historical landscape activity. In this application, imagery from 1918 to 1927 provided a wealth of information about chemical weapons testing, storage, handling, and disposal of these hazardous materials. When analyzed by a trained photo-analyst, the 1918 aerial photographs resulted in 42 features of potential interest. When compared with current remedial activities and known areas of contamination, 33 of 42 or 78.5% of the features were spatially correlated with areas of known contamination or other remedial hazardous waste cleanup activity. © 2010 IEEE.

  5. Interactive projection for aerial dance using depth sensing camera

    NASA Astrophysics Data System (ADS)

    Dubnov, Tammuz; Seldess, Zachary; Dubnov, Shlomo

    2014-02-01

    This paper describes an interactive performance system for Floor and Aerial Dance that controls visual and sonic aspects of the presentation via a depth sensing camera (MS Kinect). In order to detect, measure and track free movement in space, 3 degree of freedom (3-DOF) tracking in space (on the ground and in the air) is performed using IR markers. Gesture tracking and recognition is performed using a simplified HMM model that allows robust mapping of the actor's actions to graphics and sound. Additional visual effects are achieved by segmentation of the actor's body based on depth information, allowing projection of separate imagery on the performer and the backdrop. Artistic use of augmented reality performance relative to more traditional concepts of stage design and dramaturgy is discussed.
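    The depth-based separation of performer and backdrop reduces, at its core, to masking a depth range; a toy sketch with assumed Kinect-style depth values in millimetres, not the paper's pipeline:

```python
import numpy as np

def performer_mask(depth_mm, near=500, far=3000):
    """True where a pixel's depth falls in the performer's range, so distinct
    imagery can be projected onto the performer and onto the backdrop."""
    depth_mm = np.asarray(depth_mm)
    return (depth_mm >= near) & (depth_mm <= far)

depth = np.array([[400, 1000], [5000, 2000]])  # mm; 5000 = backdrop wall
print(performer_mask(depth).sum())  # 2 pixels on the performer
```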

  6. Aerial detection of leaf senescence for a geobotanical study

    NASA Technical Reports Server (NTRS)

    Schwaller, M.; Tkach, S. J.

    1986-01-01

    A geobotanical investigation based on the detection of premature leaf senescence was conducted in an area of predominantly chalcocite mineralization of the Keweenaw Peninsula in Michigan's Upper Peninsula. Spectrophotometric measurements indicated that the region from 600 to 700 nm captures the rise in red reflectance characteristic of senescent leaves. Observations at other wavelengths do not distinguish between senescent and green leaves as clearly and unequivocally as observations at these wavelengths. Small format black and white aerial photographs filtered for the red band (600 to 700 nm) and Thematic Mapper Simulator imagery were collected during the period of fall senescence in the study area. Soil samples were collected from two areas identified by leaf senescence and from two additional sites where the leaf canopy was still green. Geochemical analysis revealed that the sites characterized by premature leaf senescence had a significantly higher median soil copper concentration than the other two areas.

  7. 3D Object Classification Based on Thermal and Visible Imagery in Urban Area

    NASA Astrophysics Data System (ADS)

    Hasani, H.; Samadzadegan, F.

    2015-12-01

    The spatial distribution of land cover in urban areas, especially 3D objects (buildings and trees), is a fundamental dataset for urban planning, ecological research, disaster management, etc. Owing to recent advances in sensor technologies, several types of remotely sensed data are available for the same area. Data fusion has been widely investigated for integrating different sources of data in the classification of urban areas. Thermal infrared imagery (TIR) contains information on emitted radiation and has unique radiometric properties. However, due to the coarse spatial resolution of thermal data, its application has been restricted in urban areas. On the other hand, visible imagery (VIS) has high spatial resolution and information in the visible spectrum. Consequently, there is a complementary relation between thermal and visible imagery in the classification of urban areas. This paper evaluates the potential of fusing aerial thermal hyperspectral and visible imagery in the classification of an urban area. In the pre-processing step, the thermal imagery is resampled to the spatial resolution of the visible image. Feature-level fusion is then applied to construct a hybrid feature space including visible bands, thermal hyperspectral bands, and spatial and texture features; moreover, Principal Component Analysis (PCA) is applied to extract PCs. Due to the high dimensionality of the feature space, a dimension reduction method is performed. Finally, Support Vector Machines (SVMs) classify the reduced hybrid feature space. The obtained results show that using thermal imagery along with visible imagery improved the classification accuracy by up to 8% with respect to classification of the visible image alone.
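    The PCA-based dimension reduction step can be sketched generically as follows; this is an illustrative implementation under assumed inputs, not the authors' code:

```python
import numpy as np

def pca_reduce(features, k):
    """Project stacked per-pixel features (n_samples x n_features), e.g.
    visible bands + thermal bands + texture measures, onto the top-k
    principal components before classification."""
    x = features - features.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]     # indices of the k largest
    return x @ vecs[:, top]

reduced = pca_reduce(np.arange(50, dtype=float).reshape(10, 5), 2)
print(reduced.shape)  # (10, 2)
```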

  8. Aspects of DEM Generation from UAS Imagery

    NASA Astrophysics Data System (ADS)

    Greiwe, A.; Gehrke, R.; Spreckels, V.; Schlienkamp, A.

    2013-08-01

    For a few years now, micro UAS (unmanned aerial systems) with vertical take-off and landing capabilities, such as quadro- or octocopters, have been used as sensor platforms for aerophotogrammetry. Due to the restricted payload of micro UAS with a total weight of up to 5 kg (payload only up to 1.5 kg), these systems are often equipped with small-format cameras. These cameras can be classified as amateur cameras, and it is often the case that such systems do not meet the requirements of a geometrically stable camera for photogrammetric measurement purposes. However, once equipped with a suitable camera system, a UAS is an interesting alternative to expensive manned flights for small areas. The operating flight height of the above-described UAS is about 50 up to 150 meters above ground level. On the one hand, this low flight height leads to a very high spatial resolution of the aerial imagery. Depending on the camera's focal length and the sensor's pixel size, the ground sampling distance (GSD) is usually about 1 up to 5 cm. This high resolution is useful especially for the automatic generation of homologous tie-points, which are a precondition for the image alignment (bundle block adjustment). On the other hand, the image scale depends on the object's height and the UAS operating height. Objects like mine heaps or construction sites show high variations in height. As a result, operating the UAS at a constant flying height will lead to high variations in the image scale. For some processing approaches this will lead to problems, e.g. the automatic tie-point generation in stereo image pairs. As a precondition for all DEM generation approaches, a geometrically stable camera and sharp images are essential. Well-known calibration parameters are necessary for the bundle adjustment, to control the exterior orientations. It can be shown that a simultaneous on-site camera calibration may lead to misaligned aerial images.
Also, the success rate of an automatic tie-point generation
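    The dependence of GSD on flying height, pixel pitch and focal length described above can be checked with a small calculation (sensor values are illustrative assumptions, not the cameras used in the paper):

```python
def gsd_cm(flight_height_m: float, pixel_size_um: float,
           focal_length_mm: float) -> float:
    """Ground sampling distance in cm/pixel: height * pixel pitch / focal length."""
    return flight_height_m * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0

# A hypothetical 4.5 um pitch sensor behind a 15 mm lens:
for h in (50.0, 150.0):
    print(f"{h:.0f} m AGL -> {gsd_cm(h, 4.5, 15.0):.1f} cm/px")
```

    At the stated 50-150 m operating heights this lands in the 1-5 cm GSD range given in the abstract.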

  9. Automated motion imagery exploitation for surveillance and reconnaissance

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Laliberte, France; Kotamraju, Vinay; Dutkiewicz, Melanie

    2012-06-01

    Airborne surveillance and reconnaissance are essential for many military missions. Such capabilities are critical for troop protection, situational awareness, mission planning and others, such as post-operation analysis / damage assessment. Motion imagery gathered from both manned and unmanned platforms provides surveillance and reconnaissance information that can be used for pre- and post-operation analysis, but these sensors can gather large amounts of video data. It is extremely labour-intensive for operators to analyse hours of collected data without the aid of automated tools. At MDA Systems Ltd. (MDA), we have previously developed a suite of automated video exploitation tools that can process airborne video, including mosaicking, change detection and 3D reconstruction, within a GIS framework. The mosaicking tool produces a geo-referenced 2D map from the sequence of video frames. The change detection tool identifies differences between two repeat-pass videos taken of the same terrain. The 3D reconstruction tool creates calibrated geo-referenced photo-realistic 3D models. The key objectives of the on-going project are to improve the robustness, accuracy and speed of these tools, and make them more user-friendly to operational users. Robustness and accuracy are essential to provide actionable intelligence, surveillance and reconnaissance information. Speed is important to reduce operator time on data analysis. We are porting some processor-intensive algorithms to run on a Graphics Processing Unit (GPU) in order to improve throughput. Many aspects of video processing are highly parallel and well-suited for optimization on GPUs, which are now commonly available on computers. Moreover, we are extending the tools to handle video data from various airborne platforms and developing the interface to the Coalition Shared Database (CSD). The CSD server enables the dissemination and storage of data from different sensors among NATO countries. The CSD interface allows
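    In its simplest form, change detection between two co-registered repeat-pass frames reduces to thresholded differencing; a toy illustration of the concept, not MDA's actual algorithm:

```python
import numpy as np

def change_mask(frame_a, frame_b, threshold=30):
    """Boolean mask of pixels whose intensity changed by more than the
    threshold between two co-registered grayscale frames."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold

first_pass = np.zeros((100, 100), dtype=np.uint8)
repeat_pass = first_pass.copy()
repeat_pass[40:60, 40:60] = 200      # a new object appears on the terrain
print(change_mask(first_pass, repeat_pass).sum())  # 400 changed pixels
```

    Robust versions must of course handle registration error, illumination change and parallax, which is where most of the engineering effort goes.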

  10. Multimodal detection of man-made objects in simulated aerial images

    NASA Astrophysics Data System (ADS)

    Baran, Matthew S.; Tutwiler, Richard L.; Natale, Donald J.; Bassett, Michael S.; Harner, Matthew P.

    2013-05-01

    This paper presents an approach to multi-modal detection of man-made objects from aerial imagery. Detections are made in polarization imagery, hyperspectral imagery, and LIDAR point clouds, and then fused into a single confidence map. The detections are based on reflective, spectral, and geometric features of man-made objects in airborne images. The polarization imagery detector uses the Stokes parameters and the degree of linear polarization to find highly polarizing objects. The hyperspectral detector matches scene spectra to a library of man-made materials using a combination of the spectral gradient angle and the generalized likelihood ratio test. The LIDAR detector clusters 3D points into objects using principal component analysis and prunes the detections by size and shape. Once the three channels are mapped into detection images, the information can be fused without some of the problems of multi-modal fusion, such as edge reversal. The imagery used in this system was simulated with a first-principles ray tracing image generator known as DIRSIG.
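    The polarization channel's key quantity, the degree of linear polarization, follows directly from the first three Stokes parameters; a minimal sketch:

```python
import numpy as np

def degree_of_linear_polarization(i, q, u):
    """DoLP = sqrt(Q^2 + U^2) / I; man-made surfaces tend to produce
    higher values than natural clutter."""
    return np.sqrt(np.square(q) + np.square(u)) / i

# Unpolarized light (Q = U = 0) has DoLP = 0:
print(degree_of_linear_polarization(2.0, 0.0, 0.0))  # 0.0
```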

  11. Mapping Forest Edge Using Aerial Lidar

    NASA Astrophysics Data System (ADS)

    MacLean, M. G.

    2014-12-01

    Slightly more than 60% of Massachusetts is covered with forest; this land cover type is invaluable for the protection and maintenance of our natural resources and is a carbon sink for the state. However, Massachusetts is currently experiencing a decline in forested lands, primarily due to the expansion of human development (Thompson et al., 2011). Of particular concern is the loss of "core areas", the areas within forests that are not influenced by other land cover types. These areas are of significant importance to native flora and fauna, since they generally are not subject to invasion by exotic species and are more resilient to the effects of climate change (Campbell et al., 2009). The expansion of development has reduced the amount of this core area, though the exact extent of the loss is still unknown. Current methods of estimating core area are not particularly precise, since edge, the area of the forest that is most influenced by other land cover types, is quite variable and situation dependent. Therefore, the purpose of this study is to devise a new method for identifying areas that could qualify as "edge" within the Harvard Forest, in Petersham MA, using new remote sensing techniques. We sampled along eight transects perpendicular to the edge of an abandoned golf course within the Harvard Forest property. Vegetation inventories as well as Photosynthetically Active Radiation (PAR) measurements at different heights within the canopy were used to determine edge depth. These measurements were then compared with small-footprint waveform aerial LiDAR datasets and imagery to model edge depths within Harvard Forest.

  12. Reconnaissance mapping from aerial photographs

    NASA Technical Reports Server (NTRS)

    Weeden, H. A.; Bolling, N. B. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Engineering soil and geology maps were successfully made from Pennsylvania aerial photographs taken at scales from 1:4,800 to 1:60,000. The procedure involved a detailed study of a stereoscopic model while evaluating landform, drainage, erosion, color or gray tones, tone and texture patterns, vegetation, and cultural or land use patterns.

  13. Validation of Land Cover Maps Utilizing Astronaut Acquired Imagery

    NASA Technical Reports Server (NTRS)

    Estes, John E.; Gebelein, Jennifer

    1999-01-01

    This report is produced in accordance with the requirements outlined in the NASA Research Grant NAG9-1032 titled "Validation of Land Cover Maps Utilizing Astronaut Acquired Imagery". This grant funds the Remote Sensing Research Unit of the University of California, Santa Barbara. This document summarizes the research progress and accomplishments to date and describes current on-going research activities. Even though this grant has technically expired, in a contractual sense, work continues on this project; therefore, this summary includes all work done through 5 May 1999. The principal goal of this effort is to test the accuracy of a sub-regional portion of an AVHRR-based land cover product. Land cover maps of the southwestern United States, produced to three different classification systems, have been subjected to two specific accuracy assessments: one utilizing astronaut-acquired photography, and a second employing Landsat Thematic Mapper imagery, augmented in some cases by high-altitude aerial photography. Validation of these three land cover products has proceeded using a stratified sampling methodology. We believe this research will provide an important initial test of the potential use of imagery acquired from the Shuttle and ultimately the International Space Station (ISS) for the operational validation of the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover products.
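    Accuracy assessments of this kind typically summarize agreement between map and reference samples with a confusion matrix; a generic sketch with hypothetical figures, not the project's actual results:

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy from a confusion matrix (rows: reference classes,
    cols: mapped classes): correctly classified samples / all samples."""
    c = np.asarray(confusion, dtype=float)
    return c.trace() / c.sum()

# Hypothetical two-class assessment of 100 validation samples:
print(overall_accuracy([[40, 5], [10, 45]]))  # 0.85
```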

  14. Rheumatoid Arthritis Educational Video Series

    MedlinePlus

    This series of five videos was designed to help you ...

  16. Application of ERTS imagery in estimating the environmental impact of a freeway through the Knysna area of South Africa

    NASA Technical Reports Server (NTRS)

    Williamson, D. T.; Gilbertson, B.

    1974-01-01

    In the coastal areas north-east and south-west of Knysna, South Africa lie natural forests, lakes and lagoons highly regarded by many for their aesthetic and ecological richness. A freeway construction project has given rise to fears of the degradation or destruction of these natural features. The possibility of using ERTS imagery to estimate the environmental impact of the freeway was investigated, and it was found that: (1) All threatened features could readily be identified on the imagery. (2) It was possible within a short time to provide an area estimate of damage to indigenous forest. (3) In several important respects the imagery has advantages over maps and aerial photos for this type of work. (4) The imagery will enable monitoring of the actual environmental impact of the freeway when completed.

  17. Image and video compression for HDR content

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

    High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback of HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video; many bits can be wasted coding redundant, imperceptible information. The challenge is therefore to develop means of efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on our previous work and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge, and that masking is more consistent on the darker side of the edge.
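    The idea of not spending bits on imperceptible luminance differences can be illustrated with a perceptually motivated log encoding of HDR luminance ahead of a standard codec; the bit depth and luminance range here are assumptions for illustration, not the paper's transfer function:

```python
import numpy as np

def log_encode(luminance, bits=10, lum_min=1e-3, lum_max=1e4):
    """Map HDR luminance (cd/m^2) to integer codes on a log scale, spending
    codewords roughly uniformly in perceived brightness."""
    lum = np.clip(np.asarray(luminance, dtype=float), lum_min, lum_max)
    norm = (np.log10(lum) - np.log10(lum_min)) / (np.log10(lum_max) - np.log10(lum_min))
    return np.round(norm * (2 ** bits - 1)).astype(np.uint16)

codes = log_encode([1e-3, 1.0, 1e4])   # seven decades into 10-bit codes
print(codes[0], codes[-1])  # 0 1023
```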

  18. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes horizontal and vertical camera motion as well as rotation and zoom effects, producing clearer images of moving objects; it also smooths jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
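    As a rough illustration of the frame-stacking idea described above (this is not NASA's VISAR implementation, and the registration offsets are assumed to be already known, whereas VISAR estimates motion, rotation, and zoom automatically), averaging N registered frames suppresses zero-mean noise by roughly a factor of √N:

```python
import numpy as np

def stack_frames(frames, offsets):
    """Average pre-registered video frames to suppress noise.

    frames  -- list of 2-D grayscale arrays of identical shape
    offsets -- per-frame (dy, dx) integer shifts that align each
               frame to the first one (assumed known here)
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dy, dx) in zip(frames, offsets):
        acc += np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return acc / len(frames)

# Demo: 25 already-aligned noisy frames of a flat scene.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)
noisy = [clean + rng.normal(0, 10, clean.shape) for _ in range(25)]
stacked = stack_frames(noisy, [(0, 0)] * 25)
print(np.std(noisy[0] - clean), np.std(stacked - clean))  # noise drops ~5x
```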

  19. Research of aerial camera focal plane micro-displacement measurement system based on Michelson interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Shu-juan; Zhao, Yu-liang; Li, Shu-jun

    2014-09-01

    The correct position of the aerial camera focal plane is critical to imaging quality. In order to correct focal plane displacement introduced during maintenance, a new micro-displacement measuring system for the aerial camera focal plane, based on a Michelson interferometer, has been designed. The system rests on the phase modulation principle and uses interference effects to measure the micro-displacement of the focal plane. It takes a He-Ne laser as the light source and uses the Michelson interference mechanism to produce interference fringes; the fringes change periodically with the motion of the focal plane, and recording these periodic changes yields the focal plane displacement. A linear CCD and its driving system pick up the interference fringes, and a frequency conversion and differentiating system determines the moving direction of the focal plane. After data collection, filtering, amplification, threshold comparison, and counting, the CCD video signals of the interference fringes are sent to a computer, processed automatically, and the focal plane micro-displacement is output. As a result, the focal plane micro-displacement can be measured automatically by this system. Using a linear CCD to pick up the fringes greatly improves counting accuracy, almost eliminating manual counting error and improving the measurement accuracy of the system. The experiments demonstrate a focal plane displacement measurement accuracy of 0.2 nm, while laboratory and flight tests show that focal plane positioning is accurate and satisfies the requirements of aerial camera imaging.
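    The conversion from fringe count to displacement can be sketched in a few lines. In a Michelson interferometer the optical path changes by twice the mirror (here, focal-plane) movement, so each full fringe cycle corresponds to λ/2 of displacement; the sign convention below is illustrative, standing in for the frequency-discrimination stage the abstract describes:

```python
# He-Ne laser line used as the reference wavelength.
HE_NE_WAVELENGTH_NM = 632.8

def displacement_nm(fringe_count, direction):
    """Displacement from a fringe count.

    direction is +1 or -1, as decided by the direction-sensing
    (frequency conversion and differentiating) stage.
    Each fringe cycle = lambda / 2 of mirror travel.
    """
    return direction * fringe_count * HE_NE_WAVELENGTH_NM / 2.0

print(displacement_nm(100, +1))  # 100 fringes ≈ 31640 nm
```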

  20. The Video Book.

    ERIC Educational Resources Information Center

    Clendenin, Bruce

    This book provides a comprehensive step-by-step learning guide to video production. It begins with camera equipment, both still and video. It then describes how to reassemble the video and build a final product out of "video blocks," and discusses multiple-source configurations, which are required for professional level productions of live shows.…

  1. Evaluation of SPOT imagery data

    SciTech Connect

    Berger, Z.; Brovey, R.L.; Merembeck, B.F.; Hopkins, H.R.

    1988-01-01

    SPOT, the French satellite imaging system that became operational in April 1986, provides two major advances in satellite imagery technology: (1) a significant increase in spatial resolution of the data to 20 m multispectral and 10 m panchromatic, and (2) stereoscopic capabilities. The structural and stratigraphic mapping capabilities of SPOT data compare favorably with those of other available space and airborne remote sensing data. In the Rhine graben and Jura Mountains, strike and dip of folded strata can be determined using SPOT stereoscopic imagery, greatly improving the ability to analyze structures in complex areas. The increased spatial resolution also allows many features to be mapped that are not visible on thematic mapper (TM) imagery. In the San Rafael swell, Utah, TM spectral data were combined with SPOT spatial data to map lithostratigraphic units of the exposed Jurassic and Cretaceous rocks. SPOT imagery provides information on attitude, geometry, and geomorphic expressions of key marker beds that is not available on TM imagery. Over the Central Basin platform, west Texas, SPOT imagery, compared to TM imagery, provided more precise information on the configuration of outcropping beds and drainage patterns that reflect the subtle surface expression of buried structures.

  2. An Antarctic Time Capsule: Compiling and Hosting 60 years of USGS Antarctic Aerial Photography

    NASA Astrophysics Data System (ADS)

    Niebuhr, S.; Child, S.; Porter, C.; Herried, B.; Morin, P. J.

    2010-12-01

    The Antarctic Geospatial Information Center (AGIC) and the U.S. Geological Survey (USGS) collaborated to scan, archive, and make available 330,000 trimetrogon aerial (TMA) photos from 1860 flight lines flown over Antarctica from 1946 to 2000. Staff at USGS scanned them at 400 dpi and 1024 dpi resolution. To geolocate them, AGIC digitized the flight line maps, added relevant metadata including flight line altitude, camera type, and focal length, and approximated geographic centers for each photo. Both USGS and AGIC host the medium resolution air photos online, and are adding high resolution scans as they become available. The development of these metadata allowed AGIC to create a web-based flight line and aerial photo browsing application to facilitate the searching process. The application allows the user to browse through air photos and flight lines by location, with links to full resolution preview images and to image downloads. AGIC has also orthorectified selected photos of facilities and areas of high scientific interest and is making them available online. This includes a time series showing significant change in several glaciers and lakes in the McMurdo Dry Valleys over 50 years and a series illustrating how McMurdo Station has changed. For the first time, this collection of historical imagery over a swiftly changing continent is readily available to the Antarctic scientific community (www.agic.umn.edu/imagery/aerial).

  3. Open Skies aerial photography of selected areas in Central America affected by Hurricane Mitch

    USGS Publications Warehouse

    Molnia, Bruce; Hallam, Cheryl A.

    1999-01-01

    Between October 27 and November 1, 1998, Central America was devastated by Hurricane Mitch. Following the humanitarian relief effort, one of the first informational needs was complete aerial photographic coverage of the storm-ravaged areas so that the governments of the affected countries, the U.S. agencies planning to provide assistance, and the international relief community could come to the aid of the residents of the devastated area. Between December 4 and 19, 1998, an Open Skies aircraft conducted five successful missions and obtained more than 5,000 high-resolution aerial photographs and more than 15,000 video images. The aerial data are being used by the Reconstruction Task Force and many others who are working to begin rebuilding and to help reduce the risk of future destruction.

  4. Imagery mismatch negativity in musicians.

    PubMed

    Herholz, Sibylle C; Lappe, Claudia; Knief, Arne; Pantev, Christo

    2009-07-01

    The present study investigated musical imagery in musicians and nonmusicians by means of magnetoencephalography (MEG). We used a new paradigm in which subjects had to continue familiar melodies in their mind and then judged if a further presented tone was a correct continuation of the melody. Incorrect tones elicited an imagery mismatch negativity (iMMN) in musicians but not in nonmusicians. This finding suggests that the MMN component can be based on an imagined instead of a sensory memory trace and that imagery of music is modulated by musical expertise. PMID:19673775

  5. Augmented reality using ultra-wideband radar imagery

    NASA Astrophysics Data System (ADS)

    Nguyen, Lam; Koenig, Francois; Sherbondy, Kelly

    2011-06-01

    The U.S. Army Research Laboratory (ARL) has been investigating the utility of ultra-wideband (UWB) synthetic aperture radar (SAR) technology for detecting concealed targets in various applications. We have designed and built a vehicle-based, low-frequency UWB SAR radar for proof-of-concept demonstration in detecting obstacles for autonomous navigation, detecting concealed targets (mines, etc.), and mapping internal building structures to locate enemy activity. Although the low-frequency UWB radar technology offers valuable information to complement other technologies due to its penetration capability, it is very difficult to comprehend the radar imagery and correlate the detection list from the radar with the objects in the real world. Using augmented reality (AR) technology, we can superimpose the information from the radar onto the video image of the real world in real-time. Using this, Soldiers would view the environment and the superimposed graphics (SAR imagery, detection locations, digital map, etc.) via a standard display or a head-mounted display. The superimposed information would be constantly changed and adjusted for every perspective and movement of the user. ARL has been collaborating with ITT Industries to implement an AR system that integrates the video data captured from the real world and the information from the UWB radar. ARL conducted an experiment and demonstrated the real-time geo-registration of the two independent data streams. The integration of the AR sub-system into the radar system is underway. This paper presents the integration of the AR and SAR systems. It shows results that include the real-time embedding of the SAR imagery and other information into the video data stream.

  6. Proceedings of the 2004 High Spatial Resolution Commercial Imagery Workshop

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Topics covered include: NASA Applied Sciences Program; USGS Land Remote Sensing: Overview; QuickBird System Status and Product Overview; ORBIMAGE Overview; IKONOS 2004 Calibration and Validation Status; OrbView-3 Spatial Characterization; On-Orbit Modulation Transfer Function (MTF) Measurement of QuickBird; Spatial Resolution Characterization for QuickBird Image Products 2003-2004 Season; Image Quality Evaluation of QuickBird Super Resolution and Revisit of IKONOS: Civil and Commercial Application Project (CCAP); On-Orbit System MTF Measurement; QuickBird Post Launch Geopositional Characterization Update; OrbView-3 Geometric Calibration and Geopositional Accuracy; Geopositional Statistical Methods; QuickBird and OrbView-3 Geopositional Accuracy Assessment; Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images; Laboratory Measurement of Bidirectional Reflectance of Radiometric Tarps; Stennis Space Center Verification and Validation Capabilities; Joint Agency Commercial Imagery Evaluation (JACIE) Team; Adjacency Effects in High Resolution Imagery; Effect of Pulse Width vs. GSD on MTF Estimation; Camera and Sensor Calibration at the USGS; QuickBird Geometric Verification; Comparison of MODTRAN to Heritage-based Results in Vicarious Calibration at University of Arizona; Using Remotely Sensed Imagery to Determine Impervious Surface in Sioux Falls, South Dakota; Estimating Sub-Pixel Proportions of Sagebrush with a Regression Tree; How Do YOU Use the National Land Cover Dataset?; The National Map Hazards Data Distribution System; Recording a Troubled World; What Does This-Have to Do with This?; When Can a Picture Save a Thousand Homes?; InSAR Studies of Alaska Volcanoes; Earth Observing-1 (EO-1) Data Products; Improving Access to the USGS Aerial Film Collections: High Resolution Scanners; Improving Access to the USGS Aerial Film Collections: Phoenix Digitizing System Product Distribution; System and Product Characterization: Issues Approach

  7. The Imagery Exchange (TIE): Open Source Imagery Management System

    NASA Astrophysics Data System (ADS)

    Alarcon, C.; Huang, T.; Thompson, C. K.; Roberts, J. T.; Hall, J. R.; Cechini, M.; Schmaltz, J. E.; McGann, J. M.; Boller, R. A.; Murphy, K. J.; Bingham, A. W.

    2013-12-01

    NASA's Global Imagery Browse Service (GIBS) is the Earth Observation System (EOS) imagery solution for delivering global, full-resolution satellite imagery in a highly responsive manner. GIBS consists of two major subsystems, OnEarth and The Imagery Exchange (TIE). TIE is the GIBS horizontally scaled imagery workflow manager component, an Open Archival Information System (OAIS) responsible for orchestrating the acquisition, preparation, generation, and archiving of imagery to be served by OnEarth. TIE is an extension of the Data Management and Archive System (DMAS), a high performance data management system developed at the Jet Propulsion Laboratory by leveraging open source tools and frameworks, including Groovy/Grails, Restlet, Apache ZooKeeper, Apache Solr, and other open source solutions. This presentation focuses on the application of Open Source technologies in developing a horizontally scaled data system like DMAS and TIE. As part of our commitment to contributing back to the open source community, TIE is in the process of being open sourced. This presentation will also cover our current effort to put TIE into the hands of the community from which we have benefited.

  8. Video-rate visible to LWIR hyperspectral image generation exploitation

    NASA Astrophysics Data System (ADS)

    Dombrowski, Mark S.; Willson, Paul

    1999-10-01

    Hyperspectral imaging is the latest advent in imaging technology, providing the potential to extract information about the objects in a scene that is unavailable to panchromatic imagers. This increased utility, however, comes at the cost of tremendously increased data. The ultimate utility of hyperspectral imagery is in the information that can be gleaned from the spectral dimension, rather than in the hyperspectral imagery itself. To have the broadest range of applications, extraction of this information must occur in real-time. Attempting to produce and exploit complete cubes of hyperspectral imagery at video rates, however, presents unique problems for both the imager and the processor, since data rates are scaled by the number of spectral planes in the cube. MIDIS, the Multi-band Identification and Discrimination Imaging Spectroradiometer, allows both real-time collection and processing of hyperspectral imagery over the range of 0.4 micrometer to 12 micrometer. Presented here are the major design challenges and solutions associated with producing high-speed, high-sensitivity hyperspectral imagers operating in the Vis/NIR, SWIR/MWIR and LWIR, and of the electronics capable of handling data rates up to 160 mega-pixels per second, continuously. Beyond design and performance issues associated with producing and processing hyperspectral imagery at such high speeds, this paper also discusses applications of real-time hyperspectral imaging technology. Example imagery includes such problems as buried mine detection, inspecting surfaces, and countering CCD (camouflage, concealment, and deception).

  9. Use Of Infrared Imagery In Continuous Flow Wind Tunnels

    NASA Astrophysics Data System (ADS)

    Stallings, D. W.; Whetsel, R. G.

    1983-03-01

    Thermal mapping with infrared imagery is a very useful test technique in continuous flow wind tunnels. Convective-heating patterns over large areas of a model can be obtained through remote sensing of the surface temperature. A system has been developed at AEDC which uses a commercially available infrared scanning camera to produce these heat-transfer maps. In addition to the camera, the system includes video monitors, an analog tape recorder, an analog-to-digital converter, a digitizer control, and two minicomputers. This paper will describe the individual components, data reduction techniques, and typical applications.

  10. Vehicle classification in WAMI imagery using deep network

    NASA Astrophysics Data System (ADS)

    Yi, Meng; Yang, Fan; Blasch, Erik; Sheaff, Carolyn; Liu, Kui; Chen, Genshe; Ling, Haibin

    2016-05-01

    Humans have always had a keen interest in understanding activities and the surrounding environment for mobility, communication, and survival. Thanks to recent progress in photography and breakthroughs in aviation, we are now able to capture tens of megapixels of ground imagery, namely Wide Area Motion Imagery (WAMI), at multiple frames per second from unmanned aerial vehicles (UAVs). WAMI serves as a great source for many applications, including security, urban planning, and route planning. These applications require fast and accurate image understanding, which is time consuming for humans due to the large data volume and city-scale area coverage. Therefore, automatic processing and understanding of WAMI imagery has been gaining attention in both industry and the research community. This paper focuses on an essential step in WAMI imagery analysis, namely vehicle classification: deciding whether a certain image patch contains a vehicle or not. We collect a set of positive and negative sample image patches for training and testing the detector. Positive samples are 64 × 64 image patches centered on annotated vehicles. We generate two sets of negative images: the first set is generated from positive images with some location shift, and the second set is generated from randomly sampled patches, discarding any patch in which a vehicle happens to lie at the center. Both positive and negative samples are randomly divided into 9,000 training images and 3,000 testing images. We propose to train a deep convolutional network for classifying these patches. The classifier is based on a pre-trained AlexNet model in the Caffe library, with an adapted loss function for vehicle classification. The performance of our classifier is compared to several traditional image classifier methods using Support Vector Machine (SVM) and Histogram of Oriented Gradient (HOG) features. While the SVM+HOG method achieves an accuracy of 91.2%, the accuracy of our deep
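    The patch-sampling scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the shift magnitude, random-negative count, and the rejection rule parameters are assumptions, and array indexing stands in for the annotated WAMI frames.

```python
import numpy as np

PATCH = 64  # patch size used in the paper

def crop(img, cy, cx, size=PATCH):
    """Cut a size x size patch centered on (cy, cx)."""
    h = size // 2
    return img[cy - h:cy + h, cx - h:cx + h]

def sample_patches(frame, vehicle_centers, shift=24, n_random=10, seed=0):
    """Build positive/negative patch sets from one annotated frame.

    Positives are centered on annotated vehicles.  Negatives are
    (a) shifted copies of positives and (b) random crops, rejected
    whenever they land too close to any annotated vehicle center.
    """
    rng = np.random.default_rng(seed)
    pos = [crop(frame, cy, cx) for cy, cx in vehicle_centers]
    neg = [crop(frame, cy + shift, cx + shift) for cy, cx in vehicle_centers]
    H, W = frame.shape[:2]
    while len(neg) < len(vehicle_centers) + n_random:
        cy = int(rng.integers(PATCH, H - PATCH))
        cx = int(rng.integers(PATCH, W - PATCH))
        # Discard candidates whose center coincides with a vehicle.
        if all(abs(cy - vy) > PATCH or abs(cx - vx) > PATCH
               for vy, vx in vehicle_centers):
            neg.append(crop(frame, cy, cx))
    return pos, neg

frame = np.zeros((256, 256))
pos, neg = sample_patches(frame, [(100, 100)])
print(len(pos), len(neg))  # 1 11
```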

  11. Aerial Photographs and Satellite Images

    USGS Publications Warehouse

    ,

    1997-01-01

    Photographs and other images of the Earth taken from the air and from space show a great deal about the planet's landforms, vegetation, and resources. Aerial and satellite images, known as remotely sensed images, permit accurate mapping of land cover and make landscape features understandable on regional, continental, and even global scales. Transient phenomena, such as seasonal vegetation vigor and contaminant discharges, can be studied by comparing images acquired at different times. The U.S. Geological Survey (USGS), which began using aerial photographs for mapping in the 1930's, archives photographs from its mapping projects and from those of some other Federal agencies. In addition, many images from such space programs as Landsat, begun in 1972, are held by the USGS. Most satellite scenes can be obtained only in digital form for use in computer-based image processing and geographic information systems, but in some cases are also available as photographic products.

  12. Aerial robotic data acquisition system

    SciTech Connect

    Hofstetter, K.J.; Hayes, D.W.; Pendergast, M.M.; Corban, J.E.

    1993-12-31

    A small, unmanned aerial vehicle (UAV), equipped with sensors for physical and chemical measurements of remote environments, is described. A miniature helicopter airframe is used as a platform for sensor testing and development. The sensor output is integrated with the flight control system for real-time, interactive, data acquisition and analysis. Pre-programmed flight missions will be flown with several sensors to demonstrate the cost-effective surveillance capabilities of this new technology.

  13. Imagery: Paintings in the Mind.

    ERIC Educational Resources Information Center

    Carey, Albert R.

    1986-01-01

    Describes using the overlapping areas of relaxation, meditation, hypnosis, and imagery as a counseling technique. Explains the methods in terms of right brain functioning, a capability children use naturally. (ABB)

  14. New Percepts via Mental Imagery?

    PubMed

    Mast, Fred W; Tartaglia, Elisa M; Herzog, Michael H

    2012-01-01

    We are able to extract detailed information from mental images that we were not explicitly aware of during encoding. For example, we can discover a new figure when we rotate a previously seen image in our mind. However, such discoveries are not "really" new but just new "interpretations." In two recent publications, we have shown that mental imagery can lead to perceptual learning (Tartaglia et al., 2009, 2012). Observers imagined the central line of a bisection stimulus for thousands of trials. This training enabled observers to perceive bisection offsets that were invisible before training. Hence, it seems that perceptual learning via mental imagery leads to new percepts. We will argue, however, that these new percepts can occur only within "known" models. In this sense, perceptual learning via mental imagery exceeds new discoveries in mental images. Still, the effects of mental imagery on perceptual learning are limited. Only perception can lead to really new perceptual experience. PMID:23060830

  16. Real-time image processing for passive mmW imagery

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.

    2015-05-01

    The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.

  17. Telemetry of Aerial Radiological Measurements

    SciTech Connect

    H. W. Clark, Jr.

    2002-10-01

    Telemetry has been added to National Nuclear Security Administration's (NNSA's) Aerial Measuring System (AMS) Incident Response aircraft to accelerate availability of aerial radiological mapping data. Rapid aerial radiological mapping is promptly performed by AMS Incident Response aircraft in the event of a major radiological dispersal. The AMS airplane flies the entire potentially affected area, plus a generous margin, to provide a quick look at the extent and severity of the event. The primary result of the AMS Incident Response overflight is a map of estimated exposure rate on the ground along the flight path. Formerly, it was necessary to wait for the airplane to land before the map could be seen. Now, while the flight is still in progress, data are relayed via satellite directly from the aircraft to an operations center, where they are displayed and disseminated. This permits more timely utilization of results by decision makers and redirection of the mission to optimize its value. The current telemetry capability can cover all of North America. Extension to a global capability is under consideration.

  18. Techniques for video indexing

    NASA Astrophysics Data System (ADS)

    Chen, C. Y. Roger; Meliksetian, Dikran S.; Liu, Larry J.; Chang, Martin C.

    1996-01-01

    A data model for long objects (such as video files) is introduced to support general referencing structures, along with various system implementation strategies. Based on the data model, various indexing techniques for video are then introduced. A set of basic functionalities is described, including all the frame level control, indexing, and video clip editing. We show how the techniques can be used to automatically index video files based on closed captions with a typical video capture card, for both compressed and uncompressed video files. Applications are presented using those indexing techniques in security control and viewers' rating choice, general video search (from laser discs, CD ROMs, and regular disks), training videos, and video based user or system manuals.
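    The closed-caption indexing idea can be illustrated with a minimal sketch (not the paper's system): each caption's start timecode becomes a seek point, and every word of the caption text is mapped to the list of times at which it is spoken. SRT-style timecodes are assumed purely for the example.

```python
import re

def index_captions(srt_text):
    """Build a word -> [start times in seconds] index from SRT captions.

    Searching the index for a term yields seek points into the video,
    which is the essence of caption-driven video indexing.
    """
    index = {}
    # SRT blocks: counter line, "HH:MM:SS,mmm --> ..." line, text lines.
    pattern = re.compile(
        r"(\d{2}):(\d{2}):(\d{2}),(\d{3})\s*-->\s*[\d:,]+\s*\n(.*?)(?:\n\n|\Z)",
        re.S)
    for h, m, s, ms, text in pattern.findall(srt_text):
        start = int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0
        for word in re.findall(r"[a-z']+", text.lower()):
            index.setdefault(word, []).append(start)
    return index

srt = """1
00:00:05,000 --> 00:00:08,000
Security camera two shows the entrance.

2
00:01:00,500 --> 00:01:03,000
The entrance is now clear.
"""
idx = index_captions(srt)
print(idx["entrance"])  # [5.0, 60.5]
```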

  19. Observation of coral reefs on Ishigaki Island, Japan, using Landsat TM images and aerial photographs

    SciTech Connect

    Matsunaga, Tsuneo; Kayanne, Hajime

    1997-06-01

    Ishigaki Island is located at the southwestern end of the Japanese Islands and is famous for its fringing coral reefs. More than twenty Landsat TM images spanning twelve years, together with aerial photographs taken in 1977 and 1994, were used to survey two shallow reefs on this island, Shiraho and Kabira. Intensive field surveys were also conducted in 1995. All satellite images of Shiraho were geometrically corrected and overlaid to construct a multi-date satellite data set. The effects of solar elevation and tide on satellite imagery were studied with this data set. The comparison of aerial and satellite images indicated that significant changes occurred between 1977 and 1984 in Kabira: rapid formation of dark patches in the western part of the reef and their decrease in the eastern part. The field surveys revealed that the newly formed dark patches in the west contain young corals. These results suggest that remote sensing is useful not only for mapping but also for monitoring of shallow coral reefs.

  20. Photogrammetric Processing of IceBridge DMS Imagery into High-Resolution Digital Surface Models (DEM and Visible Overlay)

    NASA Astrophysics Data System (ADS)

    Arvesen, J. C.; Dotson, R. C.

    2014-12-01

    The DMS (Digital Mapping System) has been a sensor component of all DC-8 and P-3 IceBridge flights since 2009 and has acquired over 3 million JPEG images over Arctic and Antarctic land and sea ice. The DMS imagery is primarily used for identifying and locating open leads for LiDAR sea-ice freeboard measurements and documenting snow and ice surface conditions. The DMS is a COTS Canon SLR camera utilizing a 28mm focal length lens, resulting in a 10cm GSD and swath of ~400 meters from a nominal flight altitude of 500 meters. Exterior orientation is provided by an Applanix IMU/GPS which records a TTL pulse coincident with image acquisition. Notable for virtually all IceBridge flights is that parallel grids are not flown and thus there is no ability to photogrammetrically tie any imagery to adjacent flight lines. Approximately 800,000 Level-3 DMS Surface Model data products have been delivered to NSIDC, each consisting of a Digital Elevation Model (GeoTIFF DEM) and a co-registered Visible Overlay (GeoJPEG). Absolute elevation accuracy for each individual Elevation Model is adjusted to concurrent Airborne Topographic Mapper (ATM) Lidar data, resulting in higher elevation accuracy than can be achieved by photogrammetry alone. The adjustment methodology forces a zero mean difference to the corresponding ATM point cloud integrated over each DMS frame. Statistics are calculated for each DMS Elevation Model frame and show RMS differences are within +/- 10 cm with respect to the ATM point cloud. The DMS Surface Model possesses similar elevation accuracy to the ATM point cloud, but with the following advantages:
    · Higher and uniform spatial resolution: 40 cm GSD
    · 45% wider swath: 435 meters vs. 300 meters at 500 meter flight altitude
    · Visible RGB co-registered overlay at 10 cm GSD
    · Enhanced visualization through 3-dimensional virtual reality (i.e. video fly-through)
    Examples will be presented of the utility of these advantages and a novel use of a cell phone camera for
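    The zero-mean adjustment described in the abstract amounts to shifting each DEM frame by the mean of its differences to the lidar returns. A minimal sketch, assuming the ATM points have already been mapped to DEM array indices (real processing would geolocate them):

```python
import numpy as np

def adjust_dem_to_atm(dem, atm_rows, atm_cols, atm_z):
    """Shift a photogrammetric DEM so its mean difference to the
    ATM lidar point cloud over the frame is zero, transferring the
    lidar's absolute datum to the DEM.
    """
    sampled = dem[atm_rows, atm_cols]      # DEM heights at lidar points
    offset = np.mean(atm_z - sampled)      # mean bias relative to lidar
    return dem + offset

# Toy example: DEM sits 1.5 m below three lidar returns on average.
dem = np.full((4, 4), 100.0)
rows, cols = np.array([0, 1, 2]), np.array([0, 1, 2])
atm_z = np.array([101.4, 101.5, 101.6])
adj = adjust_dem_to_atm(dem, rows, cols, atm_z)
print(np.mean(atm_z - adj[rows, cols]))  # ~0.0
```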

  1. Monitoring black-tailed prairie dog colonies with high-resolution satellite imagery

    USGS Publications Warehouse

    Sidle, John G.; Johnson, D.H.; Euliss, B.R.; Tooze, M.

    2002-01-01

    The United States Fish and Wildlife Service has determined that the black-tailed prairie dog (Cynomys ludovicianus) warrants listing as a threatened species under the Endangered Species Act. Central to any conservation planning for the black-tailed prairie dog is an appropriate detection and monitoring technique. Because coarse-resolution satellite imagery is not adequate to detect black-tailed prairie dog colonies, we examined the usefulness of recently available high-resolution (1-m) satellite imagery. In 6 purchased scenes of national grasslands, we were easily able to visually detect small and large colonies without using image-processing algorithms. The Ikonos (Space Imaging(tm)) satellite imagery was as adequate as large-scale aerial photography to delineate colonies. Based on the high quality of imagery, we discuss a possible monitoring program for black-tailed prairie dog colonies throughout the Great Plains, using the species' distribution in North Dakota as an example. Monitoring plots could be established and imagery acquired periodically to track the expansion and contraction of colonies.

  2. Semantic home video categorization

    NASA Astrophysics Data System (ADS)

    Min, Hyun-Seok; Lee, Young Bok; De Neve, Wesley; Ro, Yong Man

    2009-02-01

    Nowadays, a strong need exists for the efficient organization of an increasing amount of home video content. To manage home video content efficiently, it must be categorized in a semantic way. A significant amount of research has already been dedicated to semantic video categorization. However, conventional categorization approaches often rely on unnecessary concepts and complicated algorithms that are not suited to the context of home video categorization. To overcome this problem, this paper proposes a novel home video categorization method that adopts semantic home photo categorization. To use home photo categorization in the context of home video, we segment video content into shots and extract key frames that represent each shot. To extract the semantics from key frames, we divide each key frame into ten local regions and extract low-level features. Based on the low-level features extracted for each local region, we can predict the semantics of a particular key frame. To verify the usefulness of the proposed home video categorization method, experiments were performed with 70 home video sequences, labeled by concepts from the MPEG-7 VCE2 dataset. For these sequences, the proposed system produced a recall of 77% and an accuracy of 78%.
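    Shot segmentation of the kind described, splitting the video into shots before key-frame extraction, is commonly done by thresholding a histogram difference between consecutive frames. The sketch below illustrates that generic approach; the threshold value and function names are illustrative, not taken from the paper.

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.4):
    """Return indices where a new shot begins, detected by
    thresholding the L1 distance between normalized grayscale
    histograms of consecutive frames."""
    boundaries = [0]
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
        hist = hist / hist.sum()
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            boundaries.append(i)  # abrupt histogram change: shot cut
        prev_hist = hist
    return boundaries
```

    A representative key frame can then be taken as, e.g., the middle frame of each detected shot.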

  3. Wetland mapping from digitized aerial photography. [Sheboygan Marsh, Sheboygan County, Wisconsin]

    NASA Technical Reports Server (NTRS)

    Scarpace, F. L.; Quirk, B. K.; Kiefer, R. W.; Wynn, S. L.

    1981-01-01

    Computer assisted interpretation of small scale aerial imagery was found to be a cost effective and accurate method of mapping complex vegetation patterns if high resolution information is desired. This type of technique is suited for problems such as monitoring changes in species composition due to environmental factors and is a feasible method of monitoring and mapping large areas of wetlands. The technique has the added advantage of being in a computer compatible form which can be transformed into any georeference system of interest.

  4. Quantitative analysis of drainage obtained from aerial photographs and RBV/LANDSAT images

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Formaggio, A. R.; Epiphanio, J. C. N.; Filho, M. V.

    1981-01-01

    Data obtained from aerial photographs (1:60,000) and LANDSAT return beam vidicon imagery (1:100,000) concerning drainage density, drainage texture, hydrography density, and the average length of channels were compared. Statistical analysis shows that significant differences exist in data from the two sources. The highly drained area lost more information than the less drained area. In addition, it was observed that the loss of information about the number of rivers was higher than that about the length of the channels.

  5. An aerial multispectral thermographic survey of the Oak Ridge Reservation for selected areas K-25, X-10, and Y-12, Oak Ridge, Tennessee

    SciTech Connect

    Ginsberg, I.W.

    1996-10-01

    During June 5-7, 1996, the Department of Energy's Remote Sensing Laboratory performed day and night multispectral surveys of three areas at the Oak Ridge Reservation: K-25, X-10, and Y-12. Aerial imagery was collected with both a Daedalus DS1268 multispectral scanner and the National Aeronautics and Space Administration's Thermal Infrared Multispectral System, which has six bands in the thermal infrared region of the spectrum. Imagery from the Thermal Infrared Multispectral System was processed to yield images of absolute terrain temperature and of the terrain's emissivities in the six spectral bands. The thermal infrared channels of the Daedalus DS1268 were radiometrically calibrated and converted to apparent temperature. A recently developed system for geometrically correcting and geographically registering scanner imagery was used with the Daedalus DS1268 multispectral scanner. The corrected and registered 12-channel imagery was orthorectified using a digital elevation model. 1 ref., 5 figs., 5 tabs.

  6. Accuracy of Measurements in Oblique Aerial Images for Urban Environment

    NASA Astrophysics Data System (ADS)

    Ostrowski, W.

    2016-10-01

    Oblique aerial images have been a source of data for urban areas for several years. However, the accuracy of measurements in oblique images has until now been limited to roughly a meter due to the use of direct-georeferencing technology and the underlying digital elevation model. Therefore, oblique images have been used mostly for visualization purposes. This situation changed in recent years as new methods were developed that allow a higher accuracy of exterior orientation. Current developments address both the process of determining exterior orientation and the preceding, but still crucial, process of tie point extraction. Progress in this area was shown in the ISPRS/EUROSDR Benchmark on Multi-Platform Photogrammetry and is also noticeable in the growing interest in the use of this kind of imagery. The higher accuracy of orientation of oblique aerial images that has become possible in the last few years should result in a higher accuracy of measurements in these types of images. The main goal of this research was to establish and empirically verify the accuracy of measurements in oblique aerial images. The research focused on photogrammetric measurements composed from many images, exploiting the high overlap within an oblique dataset and the different view angles. During the experiments, two series of images of urban areas were used. Both were captured using five DigiCam cameras in a Maltese cross configuration. The tilt angles of the oblique cameras were 45 degrees, and the position of the cameras during flight was recorded with a high-grade GPS/INS navigation system. The orientation of the images was determined using the Pix4D Mapper Pro software with both measurements of the in-flight camera position and ground control points (measured with GPS RTK technology). To control the accuracy, check points were used (also measured with GPS RTK technology). As reference data for the whole study, an area of the city base map was used.
The achieved results

  7. Video Screen Capture Basics

    ERIC Educational Resources Information Center

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information about two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, is also provided. Practical applications for video screen capture are given.

  8. Commercial feasibility of traffic data collection using satellite imagery

    NASA Astrophysics Data System (ADS)

    Merry, Carolyn J.; McCord, Mark R.; Bossler, John D.

    1995-01-01

    Proposals have been made to market remote sensing data at fine spatial resolutions. We evaluated the potential of complementing traffic data collection programs with such data. One of the most fundamental issues is the imaging resolution required to identify vehicles on a highway. We simulated the performance of three spatial resolutions (1.0 m, 2.1 m, and 4.2 m) by processing aerial photography (0.4-0.7 μm) of the Columbus, Ohio, area. The imagery was used to count and classify two groups of vehicles—large trucks and smaller vehicles—on several highway segments. We found that the 1.0 m resolution performed significantly better than the coarser resolutions for correctly identifying vehicles. We also investigated the coverage of an orbiting satellite for imaging highways and found that a 1-m resolution satellite would cover approximately 1% of the highways in the continental U.S. per day.
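    Simulating a coarser ground sample distance from fine-resolution aerial photography is typically done by aggregating pixels; a generic block-averaging sketch follows. The exact degradation procedure used by the authors is not specified here, so this is only illustrative.

```python
import numpy as np

def degrade_resolution(image, factor):
    """Simulate a coarser ground sample distance by replacing each
    non-overlapping factor x factor block with its mean value."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```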

  9. Crop identification and acreage measurement utilizing ERTS imagery

    NASA Technical Reports Server (NTRS)

    Vonsteen, D. H. (Principal Investigator)

    1972-01-01

    There are no author-identified significant results in this report. The microdensitometer will be used to analyze data acquired by ERTS-1 imagery. The classification programs and software packages have been acquired and are being prepared for use with the information as it is received. Photo and digital tapes have been acquired covering virtually 100 percent of the test site areas, which are located in South Dakota, Idaho, Missouri, and Kansas. Hass 70mm color infrared, infrared, and black and white high altitude aerial photography of the test sites is available. Collection of ground truth for updating the data base has been completed, and a computer program was written to count the number of fields and give total acres by size group for the segments in each test site. Results are given of data analysis performed on digitized data from densitometer measurements of fields of corn, sugar beets, and alfalfa in Kansas.

  10. An augmentative gaze directing framework for multi-spectral imagery

    NASA Astrophysics Data System (ADS)

    Hsiao, Libby

    Modern digital imaging techniques have made imaging more prolific than ever, and the volume of images and data available through multi-spectral imaging methods for exploitation exceeds what can be processed by human beings alone. The researchers proposed and developed a novel eye-movement-contingent framework and display system through adaptation of the demonstrated technique of subtle gaze direction, by presenting modulations within the displayed image. The system sought to augment visual search task performance on aerial imagery by incorporating multi-spectral image processing algorithms to determine potential regions of interest within an image. The exploratory work studied the feasibility of visual gaze direction with the specific intent of extending this application to geospatial image analysis without need for overt cueing to areas of potential interest, thereby maintaining the benefits of an undirected and unbiased search by an observer.

  11. The application of unmanned aerial vehicle to precision agriculture: Chlorophyll, nitrogen, and evapotranspiration estimation

    NASA Astrophysics Data System (ADS)

    Elarab, Manal

    Precision agriculture (PA) is an integration of a set of technologies aiming to improve productivity and profitability while sustaining the quality of the surrounding environment. It is a process that relies heavily on high-resolution information to enable greater precision in the management of inputs to production. This dissertation explored the use of multispectral high-resolution aerial imagery acquired by an unmanned aerial system (UAS) platform to serve precision agriculture applications. The UAS acquired imagery in the visual, near-infrared, and thermal infrared spectra with a resolution of less than a meter (15--60 cm). This research focused on developing two models to estimate cm-scale chlorophyll content and leaf nitrogen. To achieve the estimations, a well-established machine learning algorithm (the relevance vector machine) was used. The two models were trained on a dataset of in situ leaf chlorophyll and leaf nitrogen measurements, and the machine learning algorithm intelligently selected the most appropriate bands and indices for building regressions with the highest prediction accuracy. In addition, this research explored the use of the high-resolution imagery to estimate crop evapotranspiration (ET) at 15 cm resolution. A comparison was also made between the high-resolution ET and Landsat-derived ET over two different crop covers (field crops and vineyards) to assess the advantages of UAS-based high-resolution ET. This research aimed to bridge the information embedded in the high-resolution imagery with ground crop parameters to provide site-specific information to assist farmers adopting precision agriculture. The framework of this dissertation consisted of three components that provide tools to support precision agriculture operational decisions. In general, the results for each of the methods developed were satisfactory, relevant, and encouraging.

  12. Pasadena, California Anaglyph with Aerial Photo Overlay

    NASA Technical Reports Server (NTRS)

    2000-01-01

    and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) and the German (DLR) and Italian (ASI) space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise,Washington, DC.

    Size: 2.2 km (1.4 miles) x 2.4 km (1.49 miles)
    Location: 34.16 deg. North lat., 118.16 deg. West lon.
    Orientation: looking straight down at land
    Original Data Resolution: SRTM, 30 meters; Aerial Photo, 3 meters
    Date Acquired: February 16, 2000
    Image: NASA/JPL/NIMA

  13. Video Event Detection Framework on Large-Scale Video Data

    ERIC Educational Resources Information Center

    Park, Dong-Jun

    2011-01-01

    Detection of events and actions in video entails substantial processing of very large, even open-ended, video streams. Video data present a unique challenge for the information retrieval community because properly representing video events is challenging. We propose a novel approach to analyze temporal aspects of video data. We consider video data…

  14. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... video description per calendar quarter during prime time or on children's programming, on each channel... 47 Telecommunication 4 2010-10-01 2010-10-01 false Video description of video programming. 79.3... CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.3 Video description of...

  15. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... provide 50 hours of video description per calendar quarter, either during prime time or on children's... 47 Telecommunication 4 2013-10-01 2013-10-01 false Video description of video programming. 79.3... CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.3 Video description of...

  16. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... provide 50 hours of video description per calendar quarter, either during prime time or on children's... 47 Telecommunication 4 2012-10-01 2012-10-01 false Video description of video programming. 79.3... CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.3 Video description of...

  17. Enabling high-quality observations of surface imperviousness for water runoff modelling from unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Tokarczyk, Piotr; Leitao, Joao Paulo; Rieckermann, Jörg; Schindler, Konrad; Blumensaat, Frank

    2015-04-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and the surface characteristics of the area. While recent research in urban drainage has focused on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases because, in many parts of the globe, accurate land-use information is generally lacking, as detailed image data is unavailable. Modern unmanned aerial vehicles (UAVs) allow acquiring high-resolution images on a local level at comparably low cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility to derive high-resolution imperviousness maps for urban areas from UAV imagery and to use this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison in which they are used as input for an urban drainage model. We then evaluate the influence that different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual sub-catchments regarding relevant attributes, such as peak runoff and volume. Finally, we evaluate the model

  18. A Spherical Aerial Terrestrial Robot

    NASA Astrophysics Data System (ADS)

    Dudley, Christopher J.

    This thesis focuses on the design of a novel, ultra-lightweight spherical aerial terrestrial robot (ATR). The ATR has the ability to fly through the air or roll on the ground, for applications that include search and rescue, mapping, surveillance, environmental sensing, and entertainment. The design centers around a micro-quadcopter encased in a lightweight spherical exoskeleton that can rotate about the quadcopter. The spherical exoskeleton offers agile ground locomotion while maintaining the characteristics of a basic aerial robot in flying mode. A model of the system dynamics for both modes of locomotion is presented and utilized in simulations to generate potential trajectories for aerial and terrestrial locomotion. Details of the quadcopter and exoskeleton design and fabrication are discussed, including the robot's turning characteristic over ground and the spring-steel exoskeleton with carbon fiber axle. The capabilities of the ATR were experimentally tested and are in good agreement with model-simulated performance. An energy analysis is presented to validate the overall efficiency of the robot in both modes of locomotion. Experimentally supported estimates show that the ATR can roll along the ground for over 12 minutes and cover a distance of 1.7 km, or fly for 4.82 minutes and travel 469 m, on a single 350 mAh battery. Compared to a traditional flying-only robot traveling the same distance, the ATR in rolling mode is 2.63 times more efficient, and in flying mode it is only 39 percent less efficient. Experimental results also demonstrate the ATR's transition from rolling to flying mode.

  19. Secure video communications system

    DOEpatents

    Smith, Robert L.

    1991-01-01

    A secure video communications system having at least one command network formed by a combination of subsystems, including a video subsystem, an audio subsystem, a communications subsystem, and a control subsystem. The video communications system is window driven and mouse operated, and allows for secure point-to-point real-time teleconferencing.

  20. Authoring with Video

    ERIC Educational Resources Information Center

    Strassman, Barbara K.; O'Connell, Trisha

    2007-01-01

    Teachers are hungry for strategies that will motivate their students to engage in reading and writing. One promising method is the Authoring With Video (AWV) approach, which encourages teachers to use captioning software and digital video in writing assignments. AWV builds on students' fascination with television and video but removes the audio…

  1. Video Self-Modeling

    ERIC Educational Resources Information Center

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  2. Video: Modalities and Methodologies

    ERIC Educational Resources Information Center

    Hadfield, Mark; Haw, Kaye

    2012-01-01

    In this article, we set out to explore what we describe as the use of video in various modalities. For us, modality is a synthesizing construct that draws together and differentiates between the notion of "video" both as a method and as a methodology. It encompasses the use of the term video as both product and process, and as a data collection…

  3. Developing a Promotional Video

    ERIC Educational Resources Information Center

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  4. Video Cartridges and Cassettes.

    ERIC Educational Resources Information Center

    Kletter, Richard C.; Hudson, Heather

    The economic and social significance of video cassettes (viewer-controlled playback system) is explored in this report. The potential effect of video cassettes on industrial training, education, libraries, and television is analyzed in conjunction with the anticipated hardware developments. The entire video cassette industry is reviewed firm by…

  5. Image/data storage, manipulation and recall using video/computer technology for emergency applications

    SciTech Connect

    Thorpe, J.M.

    1986-01-01

    Employing a blend of broadcast video and state-of-the-art computer technology, the Management Emergency Response Information System (MERIS) is designed to control, manipulate, and distribute the graphic and visual information necessary for decision-making in an emergency response situation or exercise. Instant storage and recall of an extensive library of frames of video imagery allow emergency planners the time and freedom to examine necessary information quickly and efficiently.

  6. ERTS-1 imagery use in reconnaissance prospecting: Evaluation of commercial utility of ERTS-1 imagery in structural reconnaissance for minerals and petroleum

    NASA Technical Reports Server (NTRS)

    Saunders, D. F.; Thomas, G. E. (Principal Investigator); Kinsman, F. E.; Beatty, D. F.

    1973-01-01

    The author has identified the following significant results. This study was performed to investigate applications of ERTS-1 imagery in commercial reconnaissance for mineral and hydrocarbon resources. ERTS-1 imagery collected over five areas in North America (Montana; Colorado; New Mexico-West Texas; Superior Province, Canada; and North Slope, Alaska) has been analyzed for data content including linears, lineaments, and curvilinear anomalies. Locations of these features were mapped and compared with known locations of mineral and hydrocarbon accumulations. Results were analyzed in the context of a simple-shear, block-coupling model. Data analyses have resulted in detection of new lineaments, some of which may be continental in extent, detection of many curvilinear patterns not generally seen on aerial photos, strong evidence of continental regmatic fracture patterns, and realization that geological features can be explained in terms of a simple-shear, block-coupling model. The conclusions are that ERTS-1 imagery is of great value in photogeologic/geomorphic interpretations of regional features, and the simple-shear, block-coupling model provides a means of relating data from ERTS imagery to structures that have controlled emplacement of ore deposits and hydrocarbon accumulations, thus providing a basis for a new approach for reconnaissance for mineral, uranium, gas, and oil deposits and structures.

  7. Landmarks recognition for autonomous aerial navigation by neural networks and Gabor transform

    NASA Astrophysics Data System (ADS)

    Shiguemori, Elcio Hideiti; Martins, Maurício Pozzobon; Monteiro, Marcus Vinícius T.

    2007-02-01

    Template matching in real time is a fundamental issue in many computer vision applications, such as tracking, stereo vision, and autonomous navigation. The goal of this paper is to present a system for automatic landmark recognition in video frames over a georeferenced high-resolution satellite image, for autonomous aerial navigation research. The video frames employed were obtained from a camera fixed to a helicopter in low-level flight, simulating the vision system of an unmanned aerial vehicle (UAV). The landmark descriptors used in the recognition task were texture features extracted by a bank of Gabor wavelet filters. The recognition system consists of a supervised neural network trained to recognize the texture features of the satellite image landmarks. In the activation phase, each video frame has its texture features extracted, and the neural network has to classify it as a predefined landmark. The video frames are also preprocessed to reduce their differences in scale and rotation relative to the satellite image before texture feature extraction, so the UAV altitude and heading for each frame are considered known. Neural network techniques offer the advantage of low computational cost, being appropriate for real-time applications. Promising results were obtained, mainly during flight over urban areas.
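    A Gabor filter bank of the kind used here extracts texture energy at several orientations. The sketch below builds the real part of a 2-D Gabor kernel directly in NumPy and averages the absolute filter responses per orientation; all parameter values and function names are illustrative, not those of the paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(ksize=15, sigma=4.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a 2-D Gabor filter at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def texture_features(patch, orientations=4):
    """Mean absolute Gabor response per orientation: a compact
    texture descriptor for a landmark patch."""
    feats = []
    for k in range(orientations):
        kern = gabor_kernel(theta=k * np.pi / orientations)
        windows = sliding_window_view(patch, kern.shape)  # valid correlation
        response = (windows * kern).sum(axis=(-1, -2))
        feats.append(np.abs(response).mean())
    return np.array(feats)
```

    Such a descriptor responds strongly to oriented texture (e.g., road grids in urban scenes), which is one reason Gabor features work well for landmark recognition.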

  8. Real-time technology for enhancing long-range imagery

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Kelmelis, Eric; Kozacik, Stephen; Bonnett, James; Fox, Paul

    2015-05-01

    Many ISR applications require constant monitoring of targets from long distance. When capturing over long distances, imagery is often degraded by atmospheric turbulence. This adds a time-variant blurring effect to captured data, and can result in a significant loss of information. To recover it, image processing techniques have been developed to enhance sequences of short exposure images or videos in order to remove frame-specific scintillation and warping. While some of these techniques have been shown to be quite effective, the associated computational complexity and required processing power limits the application of these techniques to post-event analysis. To meet the needs of real-time ISR applications, video enhancement must be done in real-time in order to provide actionable intelligence as the scene unfolds. In this paper, we will provide an overview of an algorithm capable of providing the enhancement desired and focus on its real-time implementation. We will discuss the role that GPUs play in enabling real-time performance. This technology can be used to add performance to ISR applications by improving the quality of long-range imagery as it is collected and effectively extending sensor range.

  9. NEI YouTube Videos: Amblyopia

    MedlinePlus


  10. Effect of instructive visual stimuli on neurofeedback training for motor imagery-based brain-computer interface.

    PubMed

    Kondo, Toshiyuki; Saeki, Midori; Hayashi, Yoshikatsu; Nakayashiki, Kosei; Takata, Yohei

    2015-10-01

    Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired through neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during BCI neurofeedback training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI improves the associated mental imagery and enhances MI-based BCI skills.
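    ERD is conventionally quantified as the percentage band-power change of a task interval relative to a pre-task baseline, with negative values indicating desynchronization. A minimal sketch follows; the band limits, sampling rate, and function names are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def erd_percent(baseline, task, fs=250, band=(8, 13)):
    """ERD as percentage band-power change of the task segment
    relative to baseline; negative values indicate desynchronization.

    baseline, task : 1-D EEG segments from the same channel
    band           : frequency band in Hz (here the mu/alpha band)
    """
    def band_power(x):
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].sum()

    p_base, p_task = band_power(baseline), band_power(task)
    return 100.0 * (p_task - p_base) / p_base
```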

  11. Aerial networking communication solutions using Micro Air Vehicle (MAV)

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Shyam; de Graaf, Maurits; Hoekstra, Gerard; Corporaal, Henk; Wijtvliet, Mark; Cuadros Linde, Javier

    2014-10-01

    The application of a Micro Air Vehicle (MAV) for wireless networking is slowly gaining significance in the field of network robotics. Aerial transport of data requires efficient network protocols along with accurate positional adjustment of the MAV to minimize transaction times. In our proof of concept, we develop an aerial networking protocol for data transfer using the technology of Disruption Tolerant Networks (DTN), a store-and-forward approach for environments with disrupted connectivity. Our results show that close interaction between networking and flight behavior helps achieve efficient data exchange. Potential applications lie in areas where network infrastructure is minimal or unavailable and distances may be large: for example, forwarding video recordings during search and rescue, agriculture, and swarm communication, among others. A practical implementation and validation, as described in this paper, exposes the complex dynamics of wireless environments and poses new challenges that are not addressed in earlier work on this topic. Several tests are evaluated in a practical setup to demonstrate the networking behavior of the MAV during such an operation.
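    The store-and-forward principle behind DTN is simple to sketch: bundles are buffered while no contact exists and flushed when a link to the next hop becomes available. The class below is a minimal illustration under that assumption, not the protocol implemented in the paper.

```python
from collections import deque

class DtnNode:
    """Minimal store-carry-forward node: data bundles are queued
    during disrupted connectivity and delivered on contact."""

    def __init__(self):
        self.buffer = deque()

    def store(self, bundle):
        """Buffer a bundle while no link is available."""
        self.buffer.append(bundle)

    def forward(self, link_up):
        """Deliver all buffered bundles if the link is currently up."""
        delivered = []
        while link_up and self.buffer:
            delivered.append(self.buffer.popleft())
        return delivered
```

    A MAV would call store() while ferrying data in flight and forward() whenever it comes within radio range of the destination.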

  12. Drone with thermal infrared camera provides high resolution georeferenced imagery of the Waikite geothermal area, New Zealand

    NASA Astrophysics Data System (ADS)

    Harvey, M. C.; Rowland, J. V.; Luketina, K. M.

    2016-10-01

    Drones are now routinely used for collecting aerial imagery and creating digital elevation models (DEM). Lightweight thermal sensors provide another payload option for generating very high-resolution aerial thermal orthophotos. This technology allows for the rapid and safe survey of thermal areas, often present in inaccessible or dangerous terrain. Here we present a 2.2 km2 georeferenced, temperature-calibrated thermal orthophoto of the Waikite geothermal area, New Zealand. The image represents a mosaic of nearly 6000 thermal images captured by drone over a period of about 2 weeks. The authors believe this to be the first such published image of a significant geothermal area produced by a drone equipped with a thermal camera. Temperature calibration of the image allowed calculation of heat loss (43 ± 12 MW) from thermal lakes and streams in the survey area (loss from evaporation, conduction, and radiation). An RGB (visible spectrum) orthomosaic photo and a digital elevation model were also produced for this area, with ground resolution and horizontal position error comparable to commercially produced LiDAR and aerial imagery obtained from crewed aircraft. Our results show that thermal imagery collected by drones has the potential to become a key tool in geothermal science, including geological, geochemical and geophysical surveys, environmental baseline and monitoring studies, geotechnical studies and civil works.

  13. Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.; Lichter, Michael J.

    1994-01-01

    The video event trigger (VET) processes video image data to generate a trigger signal when the image shows a significant change, such as motion, the appearance or disappearance of an object, or a change in color, brightness, or object dilation. The system aids efficient utilization of image-data-storage and image-data-processing equipment in applications in which many video frames show no changes, making it wasteful to record and analyze all frames when only relatively few show changes of interest. Applications include video recording of automobile crash tests and automated video monitoring of entrances, exits, parking lots, and secure areas.

  14. Death imagery and death anxiety.

    PubMed

    McDonald, R T; Hilgendorf, W A

    1986-01-01

    This study investigated the relationship between positive/negative death imagery and death anxiety. Subjects were 179 undergraduate students at a large, private, midwestern university. Results reveal that on five measures of death anxiety the subjects with low death anxiety scores had significantly more positive death images than did those with high death anxiety scores. The few subjects who imagined death to be young (N = 14) had a significantly more positive image of death than those who perceived it to be an old person. Death was seen as male by 92% of the male respondents and 74% of the female respondents. Significant differences in death imagery and death anxiety were found between subjects enrolled in an introductory psychology course and those enrolled in a thanatology course. No sex differences in death anxiety or positive/negative death imagery were found.

  15. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on a short time scale, i.e., observations taken at time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames into a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to the larger scene coverage.
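    The change-mask step described above can be sketched as follows, assuming equal weights for the intensity and gradient-magnitude terms and an adaptive threshold of mean plus k standard deviations (the paper's exact coefficients and threshold rule are not given here, so these values are illustrative):

```python
import numpy as np

def change_mask(img_a, img_b, w_int=0.5, w_grad=0.5, k=2.0):
    """Binary change mask from a linear combination of the intensity
    difference image and the gradient-magnitude difference image,
    thresholded adaptively at mean + k * std of the combined image."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    d_int = np.abs(img_a.astype(float) - img_b.astype(float))
    d_grad = np.abs(grad_mag(img_a) - grad_mag(img_b))
    d = w_int * d_int + w_grad * d_grad
    thresh = d.mean() + k * d.std()   # adaptive threshold
    return d > thresh
```

    On registered mosaic pairs, pixels exceeding the adaptive threshold would then be post-filtered to suppress the non-relevant changes (shadows, stereo disparity, compression artifacts) discussed above.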

  16. Astronomical Methods in Aerial Navigation

    NASA Technical Reports Server (NTRS)

    Beij, K Hilding

    1925-01-01

    The astronomical method of determining position is universally used in marine navigation and may also be of service in aerial navigation. The practical application of the method, however, must be modified and adapted to conform to the requirements of aviation. Much of this work of adaptation has already been accomplished, but being scattered through various technical journals in a number of languages, is not readily available. This report is for the purpose of collecting under one cover such previous work as appears to be of value to the aerial navigator, comparing instruments and methods, indicating the best practice, and suggesting future developments. The various methods of determining position and their application and value are outlined, and a brief resume of the theory of the astronomical method is given. Observation instruments are described in detail. A complete discussion of the reduction of observations follows, including a rapid method of finding position from the altitudes of two stars. Maps and map cases are briefly considered. A bibliography of the subject is appended.

  17. IMPROVING BIOGENIC EMISSION ESTIMATES WITH SATELLITE IMAGERY

    EPA Science Inventory

    This presentation will review how existing and future applications of satellite imagery can improve the accuracy of biogenic emission estimates. Existing applications of satellite imagery to biogenic emission estimates have focused on characterizing land cover. Vegetation dat...

  18. NOAA's Use of High-Resolution Imagery

    NASA Technical Reports Server (NTRS)

    Hund, Erik

    2007-01-01

    NOAA's use of high-resolution imagery consists of: a) Shoreline mapping and nautical chart revision; b) Coastal land cover mapping; c) Benthic habitat mapping; d) Disaster response; and e) Imagery collection and support for coastal programs.

  19. USGS Earth Explorer Client for Co-Discovery of Aerial and Satellite Data

    NASA Astrophysics Data System (ADS)

    Longhenry, R.; Sohre, T.; McKinney, R.; Mentele, T.

    2011-12-01

    The United States Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center is home to one of the largest civilian collections of images of the Earth's surface. These images are collected from recent satellite platforms such as Landsat, Terra, Aqua and Earth Observing-1, historical airborne systems such as digital cameras and side-looking radar, and digitized historical aerial photography dating to the 1930s. The aircraft scanners include instruments such as the Advanced Solid State Array Spectrometer (ASAS). Also archived at EROS are specialized collections of aerial images, such as high-resolution orthoimagery, extensive collections over Antarctica, and historical airborne campaigns such as the National Aerial Photography Program (NAPP) and the National High Altitude Photography (NHAP) collections. These collections, as well as digital map data, declassified historical space-based photography, and a variety of collections such as the Global Land Survey 2000 (GLS2000) and the Shuttle Radar Topography Mission (SRTM), are accessible through the USGS Earth Explorer (EE) client. EE allows for the visual discovery and browse of diverse datasets simultaneously, permitting the co-discovery and selection refinement of both satellite and aircraft imagery. The client, in use for many years, was redesigned in 2010 to support requirements for next-generation Landsat Data Continuity Mission (LDCM) data access and distribution. The redesigned EE is now supported by standards-based, open-source infrastructure. EE gives users the capability to search 189 datasets through one interface, including over 8.4 million frames of aerial imagery. Since April 2011, NASA datasets archived at the Land Processes Distributed Active Archive Center (LP DAAC), including the MODIS land data products and ASTER Level-1B data products over the U.S. and Territories, have been made available via the EE client, enabling users to co-discover aerial data archived at the USGS EROS along with USGS…

  20. Benchmarking High Density Image Matching for Oblique Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Cavegn, S.; Haala, N.; Nebiker, S.; Rothermel, M.; Tutzauer, P.

    2014-08-01

    Improvements in camera technology, together with new pixel-wise matching approaches, have triggered the further development of software tools for image-based 3D reconstruction. Meanwhile, research groups as well as commercial vendors provide photogrammetric software to generate dense, reliable and accurate 3D point clouds and Digital Surface Models (DSM) from highly overlapping aerial images. In order to evaluate the potential of these algorithms in view of the ongoing software developments, a suitable test bed is provided by the ISPRS/EuroSDR initiative Benchmark on High Density Image Matching for DSM Computation. This paper discusses the proposed test scenario to investigate the potential of dense matching approaches for 3D data capture from oblique airborne imagery. For this purpose, an oblique aerial image block captured at a GSD of 6 cm in the west of Zürich by a Leica RCD30 Oblique Penta camera is used. Within this paper, the potential test scenario is demonstrated using matching results from two software packages, Agisoft PhotoScan and SURE from the University of Stuttgart. As oblique images are frequently used for data capture at building facades, 3D point clouds are mainly investigated at such areas. Reference data from terrestrial laser scanning is used to evaluate data quality from dense image matching for several facade patches with respect to accuracy, density and reliability.

  1. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  2. BOREAS Level-0 ER-2 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Dominquez, Roseanne; Hall, Forrest G. (Editor)

    2000-01-01

    For BOReal Ecosystem-Atmosphere Study (BOREAS), the ER-2 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The ER-2 aerial photography consists of color-IR transparencies collected during flights in 1994 and 1996 over the study areas.

  3. 29 CFR 1926.453 - Aerial lifts.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. Copies may be obtained from the American National... 29 Labor 8 2011-07-01 2011-07-01 false Aerial lifts. 1926.453 Section 1926.453 Labor Regulations...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Scaffolds § 1926.453 Aerial lifts. (a)...

  4. 29 CFR 1926.453 - Aerial lifts.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. Copies may be obtained from the American National... 29 Labor 8 2014-07-01 2014-07-01 false Aerial lifts. 1926.453 Section 1926.453 Labor Regulations...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Scaffolds § 1926.453 Aerial lifts. (a)...

  5. 29 CFR 1926.453 - Aerial lifts.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accordance with 5 U.S.C. 552(a) and 1 CFR part 51. Copies may be obtained from the American National... 29 Labor 8 2010-07-01 2010-07-01 false Aerial lifts. 1926.453 Section 1926.453 Labor Regulations...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Scaffolds § 1926.453 Aerial lifts. (a)...

  6. Aerial shaking performance of wet Anna's hummingbirds.

    PubMed

    Ortega-Jimenez, Victor Manuel; Dudley, Robert

    2012-05-01

    External wetting poses problems of immediate heat loss and long-term pathogen growth for vertebrates. Beyond these risks, the locomotor ability of smaller animals, and particularly of fliers, may be impaired by water adhering to the body. Here, we report on the remarkable ability of hummingbirds to perform rapid shakes in order to expel water from their plumage, even while in flight. Kinematic performance of aerial versus non-aerial shakes (i.e., those performed while perching) was compared. Oscillation frequencies of the head, body and tail were lower in aerial shakes. Tangential speeds and accelerations of the trunk and tail were roughly similar in aerial and non-aerial shakes, but values for head motions while perching were twice as high as in aerial shakes. Azimuthal angular amplitudes for both aerial and non-aerial shakes reached values greater than 180° for the head, greater than 45° for the body trunk and slightly greater than 90° for the tail and wings. Using a feather on an oscillating disc to mimic shaking motions, we found that bending increased average speeds by up to 36 per cent and accelerations of the feather tip up to fourfold relative to a hypothetical rigid feather. Feather flexibility may help to enhance shedding of water and reduce body oscillations during shaking.

  7. 47 CFR 32.2431 - Aerial wire.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Aerial wire. 32.2431 Section 32.2431 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2431 Aerial wire....

  8. 47 CFR 32.2431 - Aerial wire.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Aerial wire. 32.2431 Section 32.2431 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.2431 Aerial wire....

  9. A Classroom Simulation of Aerial Photography.

    ERIC Educational Resources Information Center

    Baker, Simon

    1981-01-01

    Explains how a simulation of aerial photography can help students in a college level beginning course on interpretation of aerial photography understand the interrelationships of the airplane, the camera, and the earth's surface. Procedures, objectives, equipment, and scale are discussed. (DB)

  10. MAPPING NON-INDIGENOUS EELGRASS ZOSTERA JAPONICA, ASSOCIATED MACROALGAE AND EMERGENT AQUATIC VEGETATION HABITATS IN A PACIFIC NORTHWEST ESTUARY USING NEAR-INFRARED COLOR AERIAL PHOTOGRAPHY AND A HYBRID IMAGE CLASSIFICATION TECHNIQUE

    EPA Science Inventory

    We conducted aerial photographic surveys of Oregon's Yaquina Bay estuary during consecutive summers from 1997 through 2001. Imagery was obtained during low tide exposures of intertidal mudflats, allowing use of near-infrared color film to detect and discriminate plant communitie...

  11. MAPPING EELGRASS SPECIES ZOSTERA JAPONICA AND Z. MARINA, ASSOCIATED MACROALGAE AND EMERGENT AQUATIC VEGETATION HABITATS IN PACIFIC NORTHWEST ESTUARIES USING NEAR-INFRARED COLOR AERIAL PHOTOGRAPHY AND A HYBRID IMAGE CLASSIFICATION TECHNIQUE

    EPA Science Inventory

    Aerial photographic surveys of Oregon's Yaquina Bay estuary were conducted during consecutive summers from 1997 through 2000. Imagery was obtained during low tide exposures of intertidal mudflats, allowing use of near-infrared color film to detect and discriminate plant communit...

  12. Monitoring Seabirds and Marine Mammals by Georeferenced Aerial Photography

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Weidauer, A.; Coppack, T.

    2016-06-01

    The assessment of anthropogenic impacts on the marine environment is challenged by the accessibility, accuracy and validity of biogeographical information. Offshore wind farm projects require large-scale ecological surveys before, during and after construction in order to assess potential effects on the distribution and abundance of protected species. The robustness of site-specific population estimates depends largely on the extent and design of spatial coverage and the accuracy of the applied census technique. Standard environmental assessment studies in Germany have so far included aerial visual surveys to evaluate potential impacts of offshore wind farms on seabirds and marine mammals. However, the low flight altitudes necessary for the visual classification of species disturb sensitive bird species and also hold significant safety risks for the observers. Thus, aerial surveys based on high-resolution digital imagery, which can be carried out at higher (safer) flight altitudes (beyond the rotor-swept zone of the wind turbines), have become a mandatory requirement, technically solving the problem of distance-related observation bias. A purpose-assembled imagery system comprising medium-format cameras in conjunction with a dedicated geo-positioning platform delivers series of orthogonal digital images that meet the current technical requirements of authorities for surveying marine wildlife at comparatively low cost. At a flight altitude of 425 m, a focal length of 110 mm, implemented forward motion compensation (FMC) and exposure times ranging between 1/1600 and 1/1000 s, the twin-camera system generates high-quality 16-bit RGB images with a ground sampling distance (GSD) of 2 cm and an image footprint of 155 x 410 m. The image files are readily transferable to a GIS environment for further editing, taking overlapping image areas and areas affected by glare into account. The imagery can be routinely screened by the human eye guided by purpose-programmed software…

  13. Tracking small targets in wide area motion imagery data

    NASA Astrophysics Data System (ADS)

    Mathew, Alex; Asari, Vijayan K.

    2013-03-01

    Object tracking in aerial imagery is of immense interest to the wide area surveillance community. In this paper, we propose a method to track very small targets such as pedestrians in AFRL Columbus Large Image Format (CLIF) Wide Area Motion Imagery (WAMI) data. Extremely small target sizes, combined with low frame rates and significant view changes, make tracking a very challenging task in WAMI data. Two problems must be tackled for object tracking: frame registration and feature extraction. We employ SURF for frame registration. Although there are several feature extraction methods that work reasonably well when the scene is of high resolution, most fail when the resolution is very low. In our approach, we represent the target as a collection of intensity histograms and use a robust statistical distance to distinguish between the target and the background. We divide the object into m × n regions and compute the normalized intensity histogram in each region to build a histogram matrix. These features can then be compared using standard histogram comparison techniques. For tracking, we use a combination of a bearing-only Kalman filter and the proposed feature extraction technique. The problem of template drift is solved by further localizing the target with a blob detection algorithm, taking the detected blob as the new template. We demonstrate the robustness of the algorithm by comparing the feature extraction part of our method with other feature extraction methods such as SURF, SIFT and HOG, and the tracking part with mean-shift tracking.
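    The histogram-matrix representation can be sketched as below. The grid size, bin count, and the use of the Bhattacharyya distance are illustrative assumptions; the paper only specifies an m × n grid of normalized intensity histograms compared with a robust statistical distance.

```python
import numpy as np

def histogram_matrix(patch, m=3, n=3, bins=8):
    """Represent a grayscale target patch as an m x n grid of
    normalized intensity histograms (shape: m x n x bins)."""
    h, w = patch.shape
    rows = np.array_split(np.arange(h), m)
    cols = np.array_split(np.arange(w), n)
    feats = np.empty((m, n, bins))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            cell = patch[np.ix_(r, c)]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
            feats[i, j] = hist / max(hist.sum(), 1)  # normalize per cell
    return feats

def bhattacharyya_distance(f1, f2):
    """Compare two histogram matrices by the mean per-cell
    Bhattacharyya distance (one possible robust statistical distance)."""
    bc = np.sum(np.sqrt(f1 * f2), axis=-1)          # per-cell coefficient
    return float(np.mean(-np.log(np.clip(bc, 1e-10, 1.0))))
```

    Identical patches yield a distance near zero, while the distance grows as the intensity distributions of corresponding cells diverge, which is what lets the tracker separate target from background.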

  14. Quantifying structural physical habitat attributes using LIDAR and hyperspectral imagery.

    PubMed

    Hall, Robert K; Watkins, Russell L; Heggem, Daniel T; Jones, K Bruce; Kaufmann, Philip R; Moore, Steven B; Gregory, Sandra J

    2009-12-01

    Structural physical habitat attributes include indices of stream size, channel gradient, substrate size, habitat complexity, and riparian vegetation cover and structure. The Environmental Monitoring and Assessment Program (EMAP) is designed to assess the status and trends of ecological resources at different scales. High-resolution remote sensing provides unique capabilities in detecting a variety of features and indicators of environmental health and condition. LIDAR is an airborne scanning laser system that provides data on topography, channel dimensions (width, depth), slope, channel complexity (residual pools, volume, morphometric complexity, hydraulic roughness), riparian vegetation (height and density), dimensions of riparian zone, anthropogenic alterations and disturbances, and channel and riparian interaction. Hyperspectral aerial imagery offers the advantage of high spectral and spatial resolution allowing for the detection and identification of riparian vegetation and natural and anthropogenic features at a resolution not possible with satellite imagery. When combined, or fused, these technologies comprise a powerful geospatial data set for assessing and monitoring lentic and lotic environmental characteristics and condition. PMID:19165614

  15. Adaptive planning of emergency aerial photogrammetric mission

    NASA Astrophysics Data System (ADS)

    Shen, Fuqiang; Zhu, Qing; Zhang, Junxiao; Miao, Shuangxi; Zhou, Xingxia; Cao, Zhenyu

    2015-12-01

    Emergency aerial photogrammetric missions have diverse requirements, and complex ground and air environmental constraints make mission planning time-consuming. This paper presents a fast, adaptive method for planning UAV aerial photogrammetric missions. First, the UAV model and its performance parameters are expressed in a unified semantic space, enabling an integrated representation of mission requirements and the low-altitude environment. A matching assessment method based on resource and mission efficiency is then proposed to adaptively match aerial UAV resources to missions. Finally, accurate UAV routes are designed according to the properties of the emergency aerial resources, subject to the constraints of the complex air-ground environment and the mission requirements. Experimental results show that the method is sound and efficient and greatly improves the emergency response rate.

  16. Terrestrial polarization imagery obtained from the Space Shuttle: characterization and interpretation.

    PubMed

    Egan, W G; Johnson, W R; Whitehead, V S

    1991-02-01

    An experiment to measure the polarization of land, sea, haze, and cloud areas from space was carried aboard the Space Shuttle in Sept. 1985. Digitized polarimetric and photometric imagery in mutually perpendicular planes was derived in the red, green, and blue spectral regions from photographs taken with two synchronized Hasselblad cameras using type 5036 Ektachrome film. Digitization at the NASA Houston Video Digital Analysis Systems Laboratory permitted reduction of the imagery into equipolarimetric contours with a relative accuracy of +/-20% for comparison to ground truth. The Island of Hawaii and adjacent sea and cloud areas were the objects of the specific imagery analyzed. Results show that cloud development is uniquely characterized using percent polarization without requiring precision photometric calibration. Furthermore, sea state and wind direction over the sea could be inferred as well as terrestrial soil texture. PMID:20582011

  17. Terrestrial polarization imagery obtained from the Space Shuttle - Characterization and interpretation

    NASA Technical Reports Server (NTRS)

    Egan, Walter G.; Johnson, W. R.; Whitehead, V. S.

    1991-01-01

    An experiment to measure the polarization of land, sea, haze, and cloud areas from space was carried aboard the Space Shuttle in September 1985. Digitized polarimetric and photometric imagery in mutually perpendicular planes was derived in the red, green, and blue spectral regions from photographs taken with two synchronized Hasselblad cameras using type 5036 Ektachrome film. Digitization at the NASA Houston Video Digital Analysis Systems Laboratory permitted reduction of the imagery into equipolarimetric contours with a relative accuracy of + or - 20 percent for comparison to ground truth. The Island of Hawaii and adjacent sea and cloud areas were the objects of the specific imagery analyzed. Results show that cloud development is uniquely characterized using percent polarization without requiring precision photometric calibration. Furthermore, sea state and wind direction over the sea could be inferred as well as terrestrial soil texture.

  18. Imagery: A Neglected Correlate of Reading Instruction.

    ERIC Educational Resources Information Center

    Fillmer, H. T.; Parkay, Forrest W.

    Imagery has a significant role in cognitive development. Reading research has established the fact that good readers image spontaneously and that there is a high interrelationship between overall preference for a story, the amount of text-related imagery in the story, comprehension, and recall. Imagery researchers agree that everyone is capable of…

  19. Perceptual evaluation of color transformed multispectral imagery

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; de Jong, Michael J.; Hogervorst, Maarten A.; Hooge, Ignace T. C.

    2014-04-01

    Color remapping can give multispectral imagery a realistic appearance. We assessed the practical value of this technique in two observer experiments using monochrome intensified (II) and long-wave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First, we investigated the amount of detail observers perceive in a short timespan. REF and CF imagery yielded the highest precision and recall measures, while II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty in extracting information from monochrome than from color imagery. Next, we measured eye fixations during free image exploration. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF, and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representations such that the resulting fixation behavior resembles the fixation behavior corresponding to daylight color imagery.
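    The precision and recall measures used in these scene-gist experiments can be computed as in this small sketch (the detail names and set sizes are illustrative, not from the study):

```python
def precision_recall(reported, present):
    """Precision and recall over an observer's full report:
    `reported` is the set of details the observer named,
    `present` the set of details actually in the scene."""
    reported, present = set(reported), set(present)
    true_pos = reported & present
    precision = len(true_pos) / len(reported) if reported else 0.0
    recall = len(true_pos) / len(present) if present else 0.0
    return precision, recall

# An observer names 4 details, 3 of which are among the scene's
# 6 annotated details.
p, r = precision_recall(
    {"person", "vehicle", "tree", "dog"},
    {"person", "vehicle", "tree", "road", "building", "fence"},
)
# p = 0.75, r = 0.5
```

    Higher precision and recall for REF and CF imagery thus mean observers both named fewer spurious details and covered more of the scene than with monochrome II or IR imagery.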

  20. Perceptual evaluation of colorized nighttime imagery

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; de Jong, Michael J.; Hogervorst, Maarten A.; Hooge, Ignace T. C.

    2014-02-01

    We recently presented a color transform that produces fused nighttime imagery with a realistic color appearance (Hogervorst and Toet, 2010, Information Fusion, 11-2, 69-77). To assess the practical value of this transform we performed two experiments in which we compared human scene recognition for monochrome intensified (II) and longwave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First we investigated the amount of detail observers can perceive in a short time span (the gist of the scene). Participants watched brief image presentations and provided a full report of what they had seen. Our results show that REF and CF imagery yielded the highest precision and recall measures, while both II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty extracting information from monochrome than from color imagery. Next, we measured eye fixations of participants who freely explored the images. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representation such that the resulting fixation behavior resembles the fixation behavior for daylight color imagery.

  1. Writing Assignments in Disguise: Lessons Learned Using Video Projects in the Classroom

    NASA Astrophysics Data System (ADS)

    Wade, P.; Courtney, A.

    2012-12-01

    This study describes the instructional approach of using student-created video documentaries as projects in an undergraduate non-science majors' Energy Perspectives science course. Four years of teaching this course provided many reflective teaching moments from which we have enhanced our instructional approach to teaching students how to construct a quality Ken Burns-style science video. Fundamental to a good video documentary is the story, told via a narrative, which involves significant writing, editing and rewriting. Many students primarily associate a video documentary with visual imagery and do not realize the importance of writing in the production of the video. Required components of the student-created video include: 1) select a topic, 2) conduct research, 3) write an outline, 4) write a narrative, 5) construct a project storyboard, 6) shoot or acquire video and photos (from legal sources), 7) record the narrative, 8) construct the video documentary, 9) edit and 10) finalize the project. Two knowledge survey instruments (administered pre- and post-) were used for assessment purposes. One survey focused on the skills necessary to research and produce video documentaries, and the second assessed students' content knowledge acquired from each documentary. This talk will focus on the components necessary for video documentaries and the instructional lessons learned over the years. Additionally, results from both surveys and student reflections on the video project will be shared.

  2. Stereoscopic Video Microscope

    NASA Astrophysics Data System (ADS)

    Butterfield, James F.

    1980-11-01

    The new electronic technology of three-dimensional video, combined with the established science of microscopy, has created a new instrument: the Stereoscopic Video Microscope. The specimen is illuminated so that the stereoscopic objective lens focuses the stereo pair of images side by side on the video camera's pick-up tube. The resulting electronic signal can be enhanced, digitized, colorized, quantified, its polarity reversed, and its gray scale expanded non-linearly. The signal can be transmitted over distances and can be stored on video tape for later playback. The electronic signal is converted to a stereo pair of visual images on the video monitor's cathode-ray tube, and a stereo hood is used to fuse the two images for three-dimensional viewing. The conventional optical microscope has definite limitations, many of which can be eliminated by converting the optical image to an electronic signal in the video microscope. The principal advantages of the Stereoscopic Video Microscope compared to the conventional optical microscope are: great ease of viewing; group viewing; the ability to easily record; and the capability of processing the electronic signal for video enhancement. The applications cover nearly all fields of microscopy, including microelectronics assembly, inspection, and research; biological, metallurgical, and chemical research; and other industrial and medical uses. The Stereoscopic Video Microscope is particularly useful for instructional and record-keeping purposes. The video microscope can be monoscopic or three-dimensional.

  3. Detecting Benthic Megafauna in Underwater Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Kerkez, I.; Oliver, D.; Kuhnz, L.; Cline, D. E.; Walther, D.; Itti, L.

    2004-12-01

    Remotely operated vehicles (ROVs) have revolutionized oceanographic research, supplementing traditional technologies of acoustics and trawling as tools which assess animal diversity, distribution and abundance. Video equipment deployed on ROVs enable quantitative video transects (QVTs) to be recorded from ocean habitats, providing high-resolution imagery on the scale of individual organisms and their associated habitat. Currently, the manual method employed by trained scientists analyzing QVTs is labor-intensive and costly, limiting the amount of data analyzed from ROV dives. An automated system for detecting organisms and identifying objects visible in video would address these concerns. Automated event detection (scene segmentation) is a step towards an automated analytical system for QVTs. In the work presented here, video frames are processed with a neuromorphic selective-attention algorithm. The candidate locations identified by the attention selection module are subject to a number of parameters. These parameters, combined with successful tracking over several frames, determine whether detected events are deemed "interesting" or "boring". "Interesting" events are marked in the video frames for subsequent identification and processing. As reported previously for mid-water QVTs, the system agrees with professional annotations 80% of the time. Poor contrast of small translucent animals in conjunction with the presence of debris ("marine snow") complicates automated event detection. While the visual characteristics of the seafloor (benthic) habitat are very different from the mid-water environment, the system yields a 92% correlation of detected animals on the seafloor compared with professional annotations. We present results detailing the comparison between a) automated detection and b) professional detection and classification, and we outline plans for future development of automated analysis.
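The attention-selection step described above can be approximated with a crude center-surround saliency map. The sketch below (plain NumPy) is a toy stand-in for the neuromorphic selective-attention algorithm the authors used, not their implementation: it flags pixels whose fine-scale local mean departs strongly from a broader surround, which is the basic intuition behind detecting small animals against a seafloor background.

```python
import numpy as np

def box_mean(img, k):
    """Mean over a k x k sliding window (k odd), same shape as img."""
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.mean(axis=(-1, -2))

def center_surround_saliency(frame, center=3, surround=15):
    """Regions whose fine-scale mean differs from the coarse-scale mean
    score high; window sizes here are illustrative, not the paper's."""
    return np.abs(box_mean(frame, center) - box_mean(frame, surround))

def candidate_events(frame, thresh):
    """Return (row, col) candidate locations exceeding the threshold."""
    return np.argwhere(center_surround_saliency(frame) > thresh)
```

In the full system these candidate locations would then be tracked over several frames before being labeled "interesting" or "boring".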

  4. Automatic Orientation and Mosaicking of Archived Aerial Photography Using Structure from Motion

    NASA Astrophysics Data System (ADS)

    Gonçalves, J. A.

    2016-03-01

Aerial photography has been acquired regularly for topographic mapping since the 1930s. In Portugal there are several archives of aerial photos in national mapping institutes, as well as in local authorities, containing a total of nearly one hundred thousand photographs, mainly from the 1940s and 1950s, with some from the 1930s. These data sets provide important information about the evolution of the territory, for environmental and agricultural studies, land planning, and many other applications. There is an interest in making these aerial coverages available in the form of orthorectified mosaics for integration in a GIS. The orthorectification of old photographs may pose several difficulties. Required data about the camera and lens system used, such as the focal distance, fiducial mark coordinates or distortion parameters, may not be available, making it difficult to process these data in conventional photogrammetric software. This paper describes an essentially automatic methodology for orientation, orthorectification and mosaic composition of blocks of old aerial photographs, using Agisoft Photoscan structure-from-motion software. The operation sequence is similar to the processing of UAV imagery. The method was applied to photographs from 1947 and 1958, provided by the Portuguese Army Geographic Institute. The orientation was done with GCPs collected from recent orthophotos and topographic maps. This may be a difficult task, especially in urban areas that went through many changes. Residuals were in general below 1 meter. The agreement of the orthomosaics with recent orthophotos and GIS vector data was in general very good. The process is relatively fast and automatic, and can be considered for the processing of full coverages of old aerial photographs.
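Photoscan performs the full bundle adjustment internally, but the GCP-based georeferencing and residual check described above can be illustrated with a much simpler model: a least-squares 2-D affine fit from image coordinates to map coordinates. This is a minimal sketch of the idea, not the software's actual adjustment.

```python
import numpy as np

def fit_affine(pixel, world):
    """Least-squares 2-D affine transform mapping pixel -> world coords.

    pixel, world: (N, 2) arrays of matched GCPs, N >= 3.
    Returns a 2x3 matrix M with world ~= pixel @ M[:, :2].T + M[:, 2].
    """
    n = len(pixel)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = pixel   # rows for the easting equations
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = pixel   # rows for the northing equations
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, np.asarray(world).reshape(-1), rcond=None)
    return params.reshape(2, 3)

def residuals(M, pixel, world):
    """Per-GCP planimetric residuals, e.g. to verify a sub-meter fit."""
    pred = pixel @ M[:, :2].T + M[:, 2]
    return np.linalg.norm(pred - world, axis=1)
```

With real archived photographs a projective or fully photogrammetric model is needed; the residual check, however, works the same way.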

  5. Presence for design: conveying atmosphere through video collages.

    PubMed

    Keller, I; Stappers, P J

    2001-04-01

Product designers use imagery for inspiration in their creative design process. To support creativity, designers apply many tools and techniques, which often rely on their ability to be inspired by found and previously made visual material and to experience the atmosphere of the user environment. Computer tools and developments in VR offer perspectives to support this kind of imagery and presence in the design process. But currently these possibilities come at too high a technological overhead and price to be usable in design practice. This article proposes an expressive and technically lightweight approach using the possibilities of VR and computer tools, by creating a sketchy environment using video collages. Instead of relying on highly realistic or even "hyperreal" graphics, these video collages use lessons learned from theater and cinema to get a sense of atmosphere across. Product designers can use these video collages to reexperience their observations in the environment in which a product is to be used, and to communicate this atmosphere to their colleagues and clients. For user-centered design, video collages can also provide an environmental context for concept testing with prospective user groups.

  6. Dialectical Imagery and Postmodern Research

    ERIC Educational Resources Information Center

    Davison, Kevin G.

    2006-01-01

    This article suggests utilizing dialectical imagery, as understood by German social philosopher Walter Benjamin, as an additional qualitative data analysis strategy for research into the postmodern condition. The use of images mined from research data may offer epistemological transformative possibilities that will assist in the demystification of…

  7. Unmanned Aerial Vehicle to Estimate Nitrogen Status of Turfgrasses

    PubMed Central

    Corniglia, Matteo; Gaetani, Monica; Grossi, Nicola; Magni, Simone; Migliazzi, Mauro; Angelini, Luciana; Mazzoncini, Marco; Silvestri, Nicola; Fontanelli, Marco; Raffaelli, Michele; Peruzzi, Andrea; Volterrani, Marco

    2016-01-01

Spectral reflectance data originating from Unmanned Aerial Vehicle (UAV) imagery is a valuable tool to monitor plant nutrition, reduce nitrogen (N) application to real needs, and thus produce both economic and environmental benefits. The objectives of the trial were i) to compare the spectral reflectance of 3 turfgrasses acquired via UAV and by a ground-based instrument; ii) to test the sensitivity of the 2 data acquisition sources in detecting induced variation in N levels. N application gradients from 0 to 250 kg ha-1 were created on 3 different turfgrass species: Cynodon dactylon x transvaalensis (Cdxt) ‘Patriot’, Zoysia matrella (Zm) ‘Zeon’ and Paspalum vaginatum (Pv) ‘Salam’. Proximity and remote-sensed reflectance measurements were acquired using a GreenSeeker handheld crop sensor and a UAV with an onboard multispectral sensor, to determine Normalized Difference Vegetation Index (NDVI). Proximity-sensed NDVI is highly correlated with data acquired from UAV, with r values ranging from 0.83 (Zm) to 0.97 (Cdxt). Relating NDVI-UAV with clippings N, the highest r is for Cdxt (0.95). The most reactive species to N fertilization is Cdxt, with a clippings N% ranging from 1.2% to 4.1%. UAV imagery can adequately assess the N status of turfgrasses and its spatial variability within a species, so for large areas, such as golf courses, sod farms or race courses, UAV-acquired data can optimize turf management. For relatively small green areas, a hand-held crop sensor can be a less expensive and more practical option. PMID:27341674
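The NDVI used above has a standard definition, (NIR - R) / (NIR + R), and the comparison between UAV and proximity sensing reduces to a Pearson correlation between the two NDVI series. A minimal sketch (function names are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R).

    nir, red: reflectance values per pixel or per plot; eps guards
    against division by zero over dark pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def correlate(uav_ndvi, ground_ndvi):
    """Pearson r between plot-level UAV and proximity-sensed NDVI,
    the statistic reported as r = 0.83-0.97 in the trial."""
    return float(np.corrcoef(uav_ndvi, ground_ndvi)[0, 1])
```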

  8. Unmanned Aerial Vehicle to Estimate Nitrogen Status of Turfgrasses.

    PubMed

    Caturegli, Lisa; Corniglia, Matteo; Gaetani, Monica; Grossi, Nicola; Magni, Simone; Migliazzi, Mauro; Angelini, Luciana; Mazzoncini, Marco; Silvestri, Nicola; Fontanelli, Marco; Raffaelli, Michele; Peruzzi, Andrea; Volterrani, Marco

    2016-01-01

Spectral reflectance data originating from Unmanned Aerial Vehicle (UAV) imagery is a valuable tool to monitor plant nutrition, reduce nitrogen (N) application to real needs, and thus produce both economic and environmental benefits. The objectives of the trial were i) to compare the spectral reflectance of 3 turfgrasses acquired via UAV and by a ground-based instrument; ii) to test the sensitivity of the 2 data acquisition sources in detecting induced variation in N levels. N application gradients from 0 to 250 kg ha-1 were created on 3 different turfgrass species: Cynodon dactylon x transvaalensis (Cdxt) 'Patriot', Zoysia matrella (Zm) 'Zeon' and Paspalum vaginatum (Pv) 'Salam'. Proximity and remote-sensed reflectance measurements were acquired using a GreenSeeker handheld crop sensor and a UAV with an onboard multispectral sensor, to determine Normalized Difference Vegetation Index (NDVI). Proximity-sensed NDVI is highly correlated with data acquired from UAV, with r values ranging from 0.83 (Zm) to 0.97 (Cdxt). Relating NDVI-UAV with clippings N, the highest r is for Cdxt (0.95). The most reactive species to N fertilization is Cdxt, with a clippings N% ranging from 1.2% to 4.1%. UAV imagery can adequately assess the N status of turfgrasses and its spatial variability within a species, so for large areas, such as golf courses, sod farms or race courses, UAV-acquired data can optimize turf management. For relatively small green areas, a hand-held crop sensor can be a less expensive and more practical option.

  9. Oblique Aerial Images and Their Use in Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2013-07-01

    Oblique images enable three-dimensional (3d) modelling of objects with vertical dimensions. Such imagery is nowadays systematically taken of cities and may easily become available. The documentation of cultural heritage can take advantage of these sources of information. Two new oblique camera systems are presented and characteristics of such images are summarized. A first example uses images of a new multi-camera system for the derivation of orthoimages, façade plots with photo texture, 3d scatter plots, and dynamic 3d models of a historic church. The applied methodology is based on automatically derived point clouds of high density. Each point will be supplemented with colour and other attributes. The problems experienced in these processes and the solutions to these problems are presented. The applied tools are a combination of professional tools, free software, and of own software developments. Special attention is given to the quality of input images. Investigations are carried out on edges in the images. The combination of oblique and nadir images enables new possibilities in the processing. The use of the near-infrared channel besides the red, green, and blue channel of the applied multispectral imagery is also of advantage. Vegetation close to the object of interest can easily be removed. A second example describes the modelling of a monument by means of a non-metric camera and a standard software package. The presented results regard achieved geometric accuracy and image quality. It is concluded that the use of oblique aerial images together with image-based processing methods yield new possibilities of economic and accurate documentation of tall monuments.

  10. Evaluation of Bare Ground on Rangelands using Unmanned Aerial Vehicles

    SciTech Connect

    Robert P. Breckenridge; Maxine Dakins

    2011-01-01

Attention is currently being given to methods that assess the ecological condition of rangelands throughout the United States. There are a number of different indicators that assess the ecological condition of rangelands. Bare ground is being considered by a number of agencies and resource specialists as a lead indicator that can be evaluated over a broad area. Traditional methods of measuring bare ground rely on field technicians collecting data along a line transect or from a plot. Unmanned aerial vehicles (UAVs) provide an alternative to collecting field data, can monitor a large area in a relatively short period of time, and in many cases can enhance safety and reduce the time required to collect data. In this study, both fixed-wing and helicopter UAVs were used to measure bare ground in a sagebrush steppe ecosystem. The data were collected with digital imagery and read using the image analysis software SamplePoint. The approach was tested over seven different plots and compared against traditional field methods to evaluate accuracy for assessing bare ground. The field plots were located on the Idaho National Laboratory (INL) site west of Idaho Falls, Idaho, in locations where there is very little disturbance by humans and the area is grazed only by wildlife. The comparison of fixed-wing and helicopter UAV technology against field estimates shows good agreement for the measurement of bare ground. This study shows that if a high degree of detail and data accuracy is desired, then a helicopter UAV may be a good platform. If the data collection objective is to assess broad-scale landscape-level changes, then the collection of imagery with a fixed-wing system is probably more appropriate.
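Once each image point or pixel has been assigned a cover class (as in SamplePoint-style point classification), percent bare ground is just a class fraction. The sketch below is illustrative only; the class codes and the dict layout are hypothetical, not SamplePoint's actual output format.

```python
import numpy as np

def cover_fractions(classified, class_codes):
    """Fraction of classified points/pixels falling in each cover class.

    classified: integer array of class codes for an image or point grid;
    class_codes: dict mapping class name -> code (hypothetical codes).
    """
    total = classified.size
    return {name: np.count_nonzero(classified == code) / total
            for name, code in class_codes.items()}
```

The UAV-derived "bare" fraction from a plot's imagery can then be compared directly against the field-transect estimate for that plot.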

  11. Unmanned Aerial Vehicle to Estimate Nitrogen Status of Turfgrasses.

    PubMed

    Caturegli, Lisa; Corniglia, Matteo; Gaetani, Monica; Grossi, Nicola; Magni, Simone; Migliazzi, Mauro; Angelini, Luciana; Mazzoncini, Marco; Silvestri, Nicola; Fontanelli, Marco; Raffaelli, Michele; Peruzzi, Andrea; Volterrani, Marco

    2016-01-01

Spectral reflectance data originating from Unmanned Aerial Vehicle (UAV) imagery is a valuable tool to monitor plant nutrition, reduce nitrogen (N) application to real needs, and thus produce both economic and environmental benefits. The objectives of the trial were i) to compare the spectral reflectance of 3 turfgrasses acquired via UAV and by a ground-based instrument; ii) to test the sensitivity of the 2 data acquisition sources in detecting induced variation in N levels. N application gradients from 0 to 250 kg ha-1 were created on 3 different turfgrass species: Cynodon dactylon x transvaalensis (Cdxt) 'Patriot', Zoysia matrella (Zm) 'Zeon' and Paspalum vaginatum (Pv) 'Salam'. Proximity and remote-sensed reflectance measurements were acquired using a GreenSeeker handheld crop sensor and a UAV with an onboard multispectral sensor, to determine Normalized Difference Vegetation Index (NDVI). Proximity-sensed NDVI is highly correlated with data acquired from UAV, with r values ranging from 0.83 (Zm) to 0.97 (Cdxt). Relating NDVI-UAV with clippings N, the highest r is for Cdxt (0.95). The most reactive species to N fertilization is Cdxt, with a clippings N% ranging from 1.2% to 4.1%. UAV imagery can adequately assess the N status of turfgrasses and its spatial variability within a species, so for large areas, such as golf courses, sod farms or race courses, UAV-acquired data can optimize turf management. For relatively small green areas, a hand-held crop sensor can be a less expensive and more practical option. PMID:27341674

  12. Learning, attentional control and action video games

    PubMed Central

    Green, C.S.; Bavelier, D.

    2012-01-01

    While humans have an incredible capacity to acquire new skills and alter their behavior as a result of experience, enhancements in performance are typically narrowly restricted to the parameters of the training environment, with little evidence of generalization to different, even seemingly highly related, tasks. Such specificity is a major obstacle for the development of many real-world training or rehabilitation paradigms, which necessarily seek to promote more general learning. In contrast to these typical findings, research over the past decade has shown that training on ‘action video games’ produces learning that transfers well beyond the training task. This has led to substantial interest among those interested in rehabilitation, for instance, after stroke or to treat amblyopia, or training for various precision-demanding jobs, for instance, endoscopic surgery or piloting unmanned aerial drones. Although the predominant focus of the field has been on outlining the breadth of possible action-game-related enhancements, recent work has concentrated on uncovering the mechanisms that underlie these changes, an important first step towards the goal of designing and using video games for more definite purposes. Game playing may not convey an immediate advantage on new tasks (increased performance from the very first trial), but rather the true effect of action video game playing may be to enhance the ability to learn new tasks. Such a mechanism may serve as a signature of training regimens that are likely to produce transfer of learning. PMID:22440805

  13. Learning, attentional control, and action video games.

    PubMed

    Green, C S; Bavelier, D

    2012-03-20

    While humans have an incredible capacity to acquire new skills and alter their behavior as a result of experience, enhancements in performance are typically narrowly restricted to the parameters of the training environment, with little evidence of generalization to different, even seemingly highly related, tasks. Such specificity is a major obstacle for the development of many real-world training or rehabilitation paradigms, which necessarily seek to promote more general learning. In contrast to these typical findings, research over the past decade has shown that training on 'action video games' produces learning that transfers well beyond the training task. This has led to substantial interest among those interested in rehabilitation, for instance, after stroke or to treat amblyopia, or training for various precision-demanding jobs, for instance, endoscopic surgery or piloting unmanned aerial drones. Although the predominant focus of the field has been on outlining the breadth of possible action-game-related enhancements, recent work has concentrated on uncovering the mechanisms that underlie these changes, an important first step towards the goal of designing and using video games for more definite purposes. Game playing may not convey an immediate advantage on new tasks (increased performance from the very first trial), but rather the true effect of action video game playing may be to enhance the ability to learn new tasks. Such a mechanism may serve as a signature of training regimens that are likely to produce transfer of learning.

  14. Whitecap coverage from aerial photography

    NASA Technical Reports Server (NTRS)

    Austin, R. W.

    1970-01-01

    A program for determining the feasibility of deriving sea surface wind speeds by remotely sensing ocean surface radiances in the nonglitter regions is discussed. With a knowledge of the duration and geographical extent of the wind field, information about the conventional sea state may be derived. The use of optical techniques for determining sea state has obvious limitations. For example, such means can be used only in daylight and only when a clear path of sight is available between the sensor and the surface. However, sensors and vehicles capable of providing the data needed for such techniques are planned for the near future; therefore, a secondary or backup capability can be provided with little added effort. The information currently being sought regarding white water coverage is also of direct interest to those working with passive microwave systems, the study of energy transfer between winds and ocean currents, the aerial estimation of wind speeds, and many others.

  15. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Efficient mapping from unmanned aerial platforms cannot rely on aerial triangulation using known ground control points. The cost and time of setting ground control, added to the need for increased overlap between flight lines, severely limits the ability of small VTOL platforms, in particular, to handle mapping-grade missions of all but the very smallest survey areas. Applanix has brought its experience in manned photogrammetry applications to this challenge, setting out the requirements for increasing the efficiency of mapping operations from small UAVs, using survey-grade GNSS-Inertial technology to accomplish direct georeferencing of the platform and/or the imaging payload. The Direct Mapping Solution for Unmanned Aerial Vehicles (DMS-UAV) is a complete and ready-to-integrate OEM solution for Direct Georeferencing (DG) on unmanned aerial platforms. Designed as a solution for systems integrators to create mapping payloads for UAVs of all types and sizes, the DMS produces directly georeferenced products for any imaging payload (visual, LiDAR, infrared, multispectral imaging, even video). Additionally, DMS addresses the airframe's requirements for high-accuracy position and orientation for such tasks as precision RTK landing and Precision Orientation for Air Data Systems (ADS), Guidance and Control. This paper presents results using a DMS comprised of an Applanix APX-15 UAV with a Sony a7R camera to produce highly accurate orthorectified imagery without Ground Control Points on a Microdrones md4-1000 platform conducted by Applanix and Avyon. APX-15 UAV is a single-board, small-form-factor GNSS-Inertial system designed for use on small, lightweight platforms. The Sony a7R is a prosumer digital RGB camera sensor, with a 36MP, 4.9-micron CCD producing images at 7360 columns by 4912 rows. It was configured with a 50mm AF-S Nikkor f/1.8 lens and subsequently with a 35mm Zeiss Sonnar T* FE F2.8 lens. Both the camera/lens combinations and the APX-15 were mounted to a

  16. A method for generating enhanced vision displays using OpenGL video texture

    NASA Astrophysics Data System (ADS)

    Bernier, Kenneth L.

    2010-04-01

    Degraded visual conditions can marvel the curious and destroy the unprepared. While navigation instruments are trustworthy companions, true visual reference remains king of the hills. Poor visibility may be overcome via imaging sensors such as low light level charge-coupled-device, infrared, and millimeter wave radar. Enhanced Vision systems combine this imagery into a comprehensive situation awareness display, presented to the pilot as reference imagery on a cockpit display, or as world-conformal imagery on head-up or head-mounted displays. This paper demonstrates that Enhanced Vision imaging can be achieved at video rates using typical CPU / GPU architecture, standard video capture hardware, dynamic non-linear ray tracing algorithms, efficient image transfer methods, and simple OpenGL rendering techniques.

  17. Development of an autonomous video rendezvous and docking system, phase 2

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Richardson, T. E.

    1983-01-01

    The critical elements of an autonomous video rendezvous and docking system were built and used successfully in a physical laboratory simulation. The laboratory system demonstrated that a small, inexpensive electronic package and a flight computer of modest size can analyze television images to derive guidance information for spacecraft. In the ultimate application, the system would use a docking aid consisting of three flashing lights mounted on a passive target spacecraft. Television imagery of the docking aid would be processed aboard an active chase vehicle to derive relative positions and attitudes of the two spacecraft. The demonstration system used scale models of the target spacecraft with working docking aids. A television camera mounted on a 6 degree of freedom (DOF) simulator provided imagery of the target to simulate observations from the chase vehicle. A hardware video processor extracted statistics from the imagery, from which a computer quickly computed position and attitude. Computer software known as a Kalman filter derived velocity information from position measurements.
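The Kalman filter mentioned above derives velocity from a stream of position measurements. A minimal 1-D constant-velocity sketch is shown below (per axis; the actual flight system would estimate the full relative position and attitude state, and the noise parameters here are illustrative assumptions):

```python
import numpy as np

def track_velocity(positions, dt=1.0, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over scalar position measurements.

    State is [position, velocity]; only position is measured. Returns an
    (N, 2) array of filtered [pos, vel] estimates. q and r are assumed
    process/measurement noise levels, not values from the paper.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                      # measure position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],       # process noise
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                             # measurement noise
    x = np.array([positions[0], 0.0])
    P = np.eye(2)
    out = []
    for z in positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new position measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

Fed a steadily closing range, the velocity component converges to the true closing rate even though velocity is never measured directly.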

  18. Unmanned aerial survey of elephants.

    PubMed

    Vermeulen, Cédric; Lejeune, Philippe; Lisein, Jonathan; Sawadogo, Prosper; Bouché, Philippe

    2013-01-01

    The use of a UAS (Unmanned Aircraft System) was tested to survey large mammals in the Nazinga Game Ranch in the south of Burkina Faso. The Gatewing ×100™ equipped with a Ricoh GR III camera was used to test animal reaction as the UAS passed, and visibility on the images. No reaction was recorded as the UAS passed at a height of 100 m. Observations, made on a set of more than 7000 images, revealed that only elephants (Loxodonta africana) were easily visible while medium and small sized mammals were not. The easy observation of elephants allows experts to enumerate them on images acquired at a height of 100 m. We, therefore, implemented an aerial strip sample count along transects used for the annual wildlife foot count. A total of 34 elephants were recorded on 4 transects, each overflown twice. The elephant density was estimated at 2.47 elephants/km(2) with a coefficient of variation (CV%) of 36.10%. The main drawback of our UAS was its low autonomy (45 min). Increased endurance of small UAS is required to replace manned aircraft survey of large areas (about 1000 km of transect per day vs 40 km for our UAS). The monitoring strategy should be adapted according to the sampling plan. Also, the UAS is as expensive as a second-hand light aircraft. However the logistic and flight implementation are easier, the running costs are lower and its use is safer. Technological evolution will make civil UAS more efficient, allowing them to compete with light aircraft for aerial wildlife surveys.

  19. Unmanned Aerial Survey of Elephants

    PubMed Central

    Vermeulen, Cédric; Lejeune, Philippe; Lisein, Jonathan; Sawadogo, Prosper; Bouché, Philippe

    2013-01-01

    The use of a UAS (Unmanned Aircraft System) was tested to survey large mammals in the Nazinga Game Ranch in the south of Burkina Faso. The Gatewing ×100™ equipped with a Ricoh GR III camera was used to test animal reaction as the UAS passed, and visibility on the images. No reaction was recorded as the UAS passed at a height of 100 m. Observations, made on a set of more than 7000 images, revealed that only elephants (Loxodonta africana) were easily visible while medium and small sized mammals were not. The easy observation of elephants allows experts to enumerate them on images acquired at a height of 100 m. We, therefore, implemented an aerial strip sample count along transects used for the annual wildlife foot count. A total of 34 elephants were recorded on 4 transects, each overflown twice. The elephant density was estimated at 2.47 elephants/km2 with a coefficient of variation (CV%) of 36.10%. The main drawback of our UAS was its low autonomy (45 min). Increased endurance of small UAS is required to replace manned aircraft survey of large areas (about 1000 km of transect per day vs 40 km for our UAS). The monitoring strategy should be adapted according to the sampling plan. Also, the UAS is as expensive as a second-hand light aircraft. However the logistic and flight implementation are easier, the running costs are lower and its use is safer. Technological evolution will make civil UAS more efficient, allowing them to compete with light aircraft for aerial wildlife surveys. PMID:23405088
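The strip-sample count above reduces to a simple computation: total animals seen divided by total strip area, with a coefficient of variation from the spread of per-transect densities. The sketch below uses this simplified estimator with made-up transect lengths and strip width; the paper's exact estimator (and hence its 2.47 elephants/km2, CV 36.10%) may differ, e.g. a Jolly-type ratio method.

```python
import numpy as np

def strip_density(counts, lengths_km, strip_width_km):
    """Simplified strip-sample density estimate.

    counts: animals seen per transect; lengths_km: transect lengths;
    strip_width_km: full visible strip width. Returns (density per km^2,
    CV in percent across per-transect densities).
    """
    counts = np.asarray(counts, dtype=float)
    areas = np.asarray(lengths_km, dtype=float) * strip_width_km
    density = counts.sum() / areas.sum()
    per_transect = counts / areas
    cv_pct = 100.0 * per_transect.std(ddof=1) / per_transect.mean()
    return density, cv_pct
```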

  20. The DOE ARM Aerial Facility

    SciTech Connect

    Schmid, Beat; Tomlinson, Jason M.; Hubbe, John M.; Comstock, Jennifer M.; Mei, Fan; Chand, Duli; Pekour, Mikhail S.; Kluzek, Celine D.; Andrews, Elisabeth; Biraud, S.; McFarquhar, Greg

    2014-05-01

    The Department of Energy Atmospheric Radiation Measurement (ARM) Program is a climate research user facility operating stationary ground sites that provide long-term measurements of climate relevant properties, mobile ground- and ship-based facilities to conduct shorter field campaigns (6-12 months), and the ARM Aerial Facility (AAF). The airborne observations acquired by the AAF enhance the surface-based ARM measurements by providing high-resolution in-situ measurements for process understanding, retrieval-algorithm development, and model evaluation that are not possible using ground- or satellite-based techniques. Several ARM aerial efforts were consolidated into the AAF in 2006. With the exception of a small aircraft used for routine measurements of aerosols and carbon cycle gases, AAF at the time had no dedicated aircraft and only a small number of instruments at its disposal. In this "virtual hangar" mode, AAF successfully carried out several missions contracting with organizations and investigators who provided their research aircraft and instrumentation. In 2009, AAF started managing operations of the Battelle-owned Gulfstream I (G-1) large twin-turboprop research aircraft. Furthermore, the American Recovery and Reinvestment Act of 2009 provided funding for the procurement of over twenty new instruments to be used aboard the G-1 and other AAF virtual-hangar aircraft. AAF now executes missions in the virtual- and real-hangar mode producing freely available datasets for studying aerosol, cloud, and radiative processes in the atmosphere. AAF is also engaged in the maturation and testing of newly developed airborne sensors to help foster the next generation of airborne instruments.

  1. Use of Airborne Thermal Imagery to Detect and Monitor Inshore Oil Spill Residues During Darkness Hours.

    PubMed

    GRIERSON

    1998-11-01

Trials were conducted using an airborne video system operating in the visible, near-infrared, and thermal wavelengths to detect two known oil spill releases during darkness at a distance of 10 nautical miles from the shore in St. Vincent's Gulf, South Australia. The oil spills consisted of two 20-liter samples released at 2-h intervals; one sample consisted of paraffinic neutral material and the other of automotive diesel oil. A tracking buoy was sent overboard in conjunction with the release of sample 1, and its movement monitored by satellite relay. Both oil residues were overflown by a light aircraft equipped with thermal, visible, and infrared imagers approximately 1 h after the release of the second oil residue. Trajectories of the oil residue releases were also modeled and the results compared to those obtained by the airborne video and the tracking buoy. Airborne imagery in the thermal wavelengths successfully located and mapped both oil residue samples during nighttime conditions. Results from the trial suggest that the most advantageous technique would be the combined use of the tracking beacon to obtain an approximate location of the oil spill and the airborne imagery to ascertain its extent and characteristics. KEY WORDS: Airborne video; Thermal imagery; Global positioning; Oil-spill monitoring; Tracking beacon

  2. VideoANT: Extending Online Video Annotation beyond Content Delivery

    ERIC Educational Resources Information Center

    Hosack, Bradford

    2010-01-01

    This paper expands the boundaries of video annotation in education by outlining the need for extended interaction in online video use, identifying the challenges faced by existing video annotation tools, and introducing Video-ANT, a tool designed to create text-based annotations integrated within the time line of a video hosted online. Several…

  3. Vegetation monitoring using low-altitude, large-scale imagery from radio-controlled drones

    NASA Astrophysics Data System (ADS)

    Quilter, Mark Charles

    As both farmers and range managers are required to manage larger acreage, new methods for vegetation monitoring need to be developed. The methods need to increase information and yield, and at the same time reduce labor requirements and cost. This dissertation discusses how the use of radio controlled aircraft can collect large scale imagery that can be used to monitor vegetation. Several methods are explored which reduce the labor requirements for collecting and recording data. The work demonstrates the effectiveness of these methods and presents details of the procedures used. Many of the techniques have historically been used with aerial photographs and satellite imagery. However, the use of these procedures to collect detailed data at a scale required for vegetation monitoring is new. Image processing procedures are also demonstrated to have promise in changing the way ranges are monitored.

  4. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  5. A Vegetation Analysis on Horn Island Mississippi, ca. 1940 using Habitat Characteristic Dimensions Derived from Historical Aerial Photography

    NASA Astrophysics Data System (ADS)

    Jeter, G. W.; Carter, G. A.

    2013-12-01

    Guy (Will) Wilburn Jeter Jr. and Gregory A. Carter, University of Southern Mississippi, Geography and Geology, Gulf Coast Geospatial Center. The over-arching goal of this research is to assess habitat change over a seventy-year period to better understand the combined effects of global sea level rise and storm impacts on the stability of Horn Island, MS habitats. Historical aerial photography is often overlooked as a resource for use in determining habitat change. However, the spatial information provided even by black and white imagery can give insight into past habitat composition via textural analysis. This research evaluates characteristic dimensions, most notably patch size, of habitat types using simple geo-statistics and textures of brightness values of historical aerial imagery. It is assumed that each cover type has an identifiable patch size that can be used as a unique classifier of each habitat type. Analytical methods applied to the 1940 imagery were developed using 2010 field data and USDA aerial imagery. Textural moving-window methods and basic geo-statistics were used to estimate characteristic dimensions of each cover type in 1940 aerial photography. The moving-window texture analysis was configured with multiple window sizes to capture the characteristic dimensions of six habitat types: water, bare sand, dune herb land, estuarine shrub land, marsh land, and slash pine woodland. Coefficient of variation (CV), contrast, and entropy texture filters were used to analyze the spatial variability of the 1940 and 2010 imagery. CV was used to depict the horizontal variability of each habitat characteristic dimension. Contrast was used to represent the variability of bright versus dark pixel values; entropy was used to show the variation in the slash pine woodland habitat type. Results indicate a substantial increase in marshland habitat relative to other habitat types since 1940. Results also reveal each habitat-type, such as dune herb-land, marsh
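The moving-window texture idea described in this record can be sketched in a few lines: a coefficient-of-variation (CV) filter computed over a sliding window separates smooth cover types (low CV) from speckled ones (high CV). The 5x5 window and the toy 7x7 brightness grids below are illustrative assumptions, not values from the study.

```python
# Sketch of a moving-window coefficient-of-variation (CV) texture filter.
# Window size and brightness grids are illustrative assumptions.
from statistics import mean, pstdev

def cv_texture(image, window=5):
    """Return a CV (std/mean) map computed over full sliding windows."""
    half = window // 2
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            vals = [image[rr][cc]
                    for rr in range(r - half, r + half + 1)
                    for cc in range(c - half, c + half + 1)]
            m = mean(vals)
            out[r][c] = pstdev(vals) / m if m else 0.0
    return out

# A smooth cover type (water) yields low CV; a speckled one (shrub) high CV.
water = [[100] * 7 for _ in range(7)]
shrub = [[50 if (r + c) % 2 else 200 for c in range(7)] for r in range(7)]
print(cv_texture(water)[3][3], round(cv_texture(shrub)[3][3], 2))
```

Classifying each pixel by its local CV is one simple way to turn brightness texture into a habitat-type map, as the abstract describes.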

  6. Marine object detection in UAV full-motion video

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Lane, Corey; Bagnall, Bryan; Buck, Heidi

    2014-06-01

    Recent years have seen an increased use of Unmanned Aerial Vehicles (UAVs) with video-recording capability for Maritime Domain Awareness (MDA) and other surveillance operations. In order for these efforts to be effective, there is a need to develop automated algorithms that process the full-motion video (FMV) captured by UAVs in an efficient and timely manner to extract meaningful information that can assist human analysts and decision makers. This paper presents a generalizable marine object detection system that is specifically designed to process raw video footage streaming from UAVs in real time. Our approach does not make any assumptions about the object and/or background characteristics because, in the MDA domain, we encounter varying background and foreground characteristics such as boats, buoys, and ships of varying sizes and shapes, wakes, white caps on water, and glint from the sun, to name but a few. Our efforts rely on basic signal processing and machine learning approaches to develop a generic object detection system that maintains a high level of performance without making prior assumptions about foreground-background characteristics, and that does not experience abrupt performance degradation when subjected to variations in lighting, background characteristics, video quality, or abrupt changes in video perspective or in the size, appearance, and number of the targets. In addition to describing our marine object detection system, we present representative object detection results on real-world UAV full-motion video data.
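The abstract does not disclose the detector's internals, so as a hedged illustration only, here is a classical baseline in the same spirit: per-pixel temporal-median background subtraction, which flags pixels that deviate from the recent video history. The frame values, grid size, and threshold are toy assumptions, not the paper's method.

```python
# Illustrative baseline only: temporal-median background subtraction
# for flagging bright marine objects in video. All values are toys.
from statistics import median

def detect(frames, thresh=30):
    """Flag pixels of the last frame deviating from the per-pixel median."""
    n_rows, n_cols = len(frames[0]), len(frames[0][0])
    bg = [[median(f[r][c] for f in frames) for c in range(n_cols)]
          for r in range(n_rows)]
    last = frames[-1]
    return [[abs(last[r][c] - bg[r][c]) > thresh for c in range(n_cols)]
            for r in range(n_rows)]

# Five frames of calm sea (value 80); a bright object appears in the last.
sea = [[[80] * 4 for _ in range(4)] for _ in range(5)]
sea[-1][2][2] = 200  # object pixel
mask = detect(sea)
print(mask[2][2], mask[0][0])  # True False
```

Such background models are exactly what the paper avoids relying on, since glint, wakes, and perspective changes violate their assumptions; the sketch is shown only to make the detection problem concrete.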

  7. Human-friendly stylization of video content using simulated colored paper mosaics

    NASA Astrophysics Data System (ADS)

    Kim, Seulbeom; Kang, Dongwann; Yoon, Kyunghyun

    2016-07-01

    Video content is used extensively in many fields. However, in some fields, video manipulation techniques are required to improve the human-friendliness of such content. In this paper, we propose a method that automatically generates animations in the style of colored paper mosaics, to create human-friendly, artistic imagery. To enhance temporal coherence while maintaining the characteristics of colored paper mosaics, we also propose a particle video-based method that determines coherent locations for tiles in animations. The proposed method generates evenly distributed particles, which are used to produce animated tiles via our tile modeling process.

  8. Interpretation key for SAR /L-band/ imagery of sea ice

    NASA Technical Reports Server (NTRS)

    Bryan, M. L.

    1976-01-01

    An interpretation key, similar to those previously developed for use with aerial photography and other remotely sensed data, was developed for L-band (25 cm) radar imagery collected over the Arctic Ocean. Data from April, August, and October were considered. The procedure for developing a valid interpretation key for operational use involves substituting time for space. Open water situations (polynyas, leads, flaws), examples of unconsolidated ice (frazil, slush, brash), thin ice (nilas), and annual ice (first year, multi-year ice) situations are examined. It is suggested that the interpretation key will enhance the use of side looking airborne radar data in the qualitative photo interpretation mode.

  9. Monitoring Arctic Sea ice using ERTS imagery. [Bering Sea, Beaufort Sea, Canadian Archipelago, and Greenland Sea

    NASA Technical Reports Server (NTRS)

    Barnes, J. C.; Bowley, C. J.

    1974-01-01

    Because of the effect of sea ice on the heat balance of the Arctic and because of the expanding economic interest in arctic oil and other minerals, extensive monitoring and further study of sea ice is required. The application of ERTS data for mapping ice is evaluated for several arctic areas, including the Bering Sea, the eastern Beaufort Sea, parts of the Canadian Archipelago, and the Greenland Sea. Interpretive techniques are discussed, and the scales and types of ice features that can be detected are described. For the Bering Sea, a sample of ERTS imagery is compared with visual ice reports and aerial photography from the NASA CV-990 aircraft.

  10. Privacy information management for video surveillance

    NASA Astrophysics Data System (ADS)

    Luo, Ying; Cheung, Sen-ching S.

    2013-05-01

    The widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have been proposed to automatically redact images of trusted individuals in the surveillance video. To identify these individuals for protection, the most reliable approach is to use biometric signals such as iris patterns as they are immutable and highly discriminative. In this paper, we propose a privacy data management system to be used in a privacy-aware video surveillance system. The privacy status of a subject is anonymously determined based on her iris pattern. For a trusted subject, the surveillance video is redacted and the original imagery is considered to be the privacy information. Our proposed system allows a subject to access her privacy information via the same biometric signal for privacy status determination. Two secure protocols, one for privacy information encryption and the other for privacy information retrieval are proposed. Error control coding is used to cope with the variability in iris patterns and efficient implementation is achieved using surrogate data records. Experimental results on a public iris biometric database demonstrate the validity of our framework.
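The error-control-coding idea in this record can be illustrated with a minimal fuzzy-commitment sketch, assuming a simple repetition code: a secret is bound to the iris code by XOR, and small capture-to-capture variability is corrected by majority vote on retrieval. The bit lengths, repetition factor, and sample codes below are invented for illustration and are not the paper's actual protocol.

```python
# Hedged sketch: fuzzy commitment with a repetition code, illustrating
# how error control coding copes with iris-pattern variability.
REP = 5  # each secret bit repeated 5 times; corrects up to 2 flips per group

def encode(bits):
    return [b for b in bits for _ in range(REP)]

def decode(bits):
    return [1 if sum(bits[i:i + REP]) > REP // 2 else 0
            for i in range(0, len(bits), REP)]

def commit(secret, iris):
    """Bind the secret to the iris code; the sketch reveals neither alone."""
    return [s ^ i for s, i in zip(encode(secret), iris)]

def retrieve(sketch, iris_noisy):
    """Recover the secret from a fresh, slightly different iris capture."""
    return decode([s ^ i for s, i in zip(sketch, iris_noisy)])

secret = [1, 0, 1]
iris = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1]
sketch = commit(secret, iris)
noisy = iris[:]
noisy[0] ^= 1; noisy[7] ^= 1  # two bits differ between captures
print(retrieve(sketch, noisy) == secret)  # True
```

Real iris codes are thousands of bits with much stronger codes, but the XOR-bind-then-decode pattern is the same basic mechanism.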

  11. The Potential Uses of Commercial Satellite Imagery in the Middle East

    SciTech Connect

    Vannoni, M.G.

    1999-06-08

    It became clear during the workshop that the applicability of commercial satellite imagery to the verification of future regional arms control agreements is limited at this time. Non-traditional security topics such as environmental protection, natural resource management, and the development of infrastructure offer the more promising applications for commercial satellite imagery in the short-term. Many problems and opportunities in these topics are regional, or at least multilateral, in nature. A further advantage is that, unlike arms control and nonproliferation applications, cooperative use of imagery in these topics can be done independently of the formal Middle East Peace Process. The value of commercial satellite imagery to regional arms control and nonproliferation, however, will increase during the next three years as new, more capable satellite systems are launched. Aerial imagery, such as that used in the Open Skies Treaty, can also make significant contributions to both traditional and non-traditional security applications but has the disadvantage of requiring access to national airspace and potentially higher cost. There was general consensus that commercial satellite imagery is under-utilized in the Middle East and resources for remote sensing, both human and institutional, are limited. This relative scarcity, however, provides a natural motivation for collaboration in non-traditional security topics. Collaborations between scientists, businesses, universities, and non-governmental organizations can work at the grass-roots level and yield contributions to confidence building as well as scientific and economic results. Joint analysis projects would benefit the region as well as establish precedents for cooperation.

  12. Secure video communications systems

    SciTech Connect

    Smith, R.L.

    1991-10-08

    This patent describes a secure video communications system having at least one command network formed by a combination of subsystems, including a video subsystem, an audio subsystem, a communications subsystem, and a control subsystem. The video communications system is window driven and mouse operated, and allows for secure point-to-point real-time teleconferencing.

  13. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.
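One reason a still built from many video frames can beat any single frame is noise averaging: random per-frame noise cancels while the scene signal persists. The minimal sketch below uses deterministic alternating noise as a stand-in; the real NASA tool also registers, aligns, and super-resolves frames, which this sketch does not attempt.

```python
# Minimal sketch of multi-frame noise averaging for one pixel.
# The "noise" pattern and brightness value are illustrative assumptions.
TRUE = 120  # true scene brightness at this pixel
noise = [10 if i % 2 == 0 else -10 for i in range(64)]  # per-frame noise
frames = [TRUE + n for n in noise]  # 64 noisy samples of the same pixel

stacked = sum(frames) / len(frames)  # temporal average across frames
print(stacked)  # 120.0 -- the noise cancels exactly in this toy case
```

With real random noise the cancellation is statistical rather than exact, with the residual error shrinking roughly as the square root of the number of frames averaged.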

  14. Digital Video and Interactivity

    NASA Astrophysics Data System (ADS)

    Morelli, Alysson K.; Chaves, Gabriel C.; Belchior, Tiago M.

    With the growth of digital video technology, the authors have chosen to explore the potential of the DVD, in terms of interactivity. The research aims at understanding the interaction possibilities of digital video in a DVD player, while still keeping the narrative constraints. This paper explains the project, the resulting DVD, and shows that the relation of the spectator to a video can be changed by the interaction.

  15. The remote characterization of vegetation using Unmanned Aerial Vehicle photography

    NASA Astrophysics Data System (ADS)

    Rango, A.; Laliberte, A.; Winters, C.; Maxwell, C.; Steele, C.

    2008-12-01

    Unmanned Aerial Vehicles (UAVs) can fly in place of piloted aircraft to gather remote sensing information on vegetation characteristics. The type of sensors flown depends on the instrument payload capacity available, so that, depending on the specific UAV, it is possible to obtain video, aerial photographic, multispectral and hyperspectral radiometric, LIDAR, and radar data. The characteristics of several small UAVs (less than 55 lbs (25 kg)) along with some payload instruments will be reviewed. Common types of remote sensing coverage available from a small, limited-payload UAV are video and hyperspatial digital photography. From evaluation of these simple types of remote sensing data, we conclude that UAVs can play an important role in measuring and monitoring vegetation health and structure of the vegetation/soil complex in rangelands. If we fly our MLB Bat-3 at an altitude of 700 ft (213 m), we can obtain a digital photographic resolution of 6 cm. The digital images acquired cover an area of approximately 29,350 sq m. Video imaging is usually only useful for monitoring the flight path of the UAV in real time. In our experiments with the 6 cm resolution data, we have been able to measure vegetation patch size, crown width, gap sizes between vegetation, percent vegetation and bare soil cover, and type of vegetation. The UAV system is also being tested to acquire the height of the vegetation canopy using shadow measurements and a digital elevation model obtained with stereo images. Evaluation of combining the UAV digital photography with LIDAR data of the Jornada Experimental Range in south central New Mexico is ongoing. The use of UAVs is increasing and is becoming a very promising tool for vegetation assessment and change detection, but there are several operational components to flying UAVs that users need to consider. These include cost; a whole set of, as yet, undefined regulations regarding flying in the National Air Space (NAS); procedures to gain approval for flying in the NAS
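The 6 cm figure quoted in this record is ground-sample-distance (GSD) arithmetic: GSD = pixel pitch x altitude / focal length. The camera parameters below are hypothetical values chosen to land near 6 cm at 213 m; they are not the Bat-3 payload's actual specifications.

```python
# GSD arithmetic with hypothetical camera parameters (not the Bat-3's).
def gsd_m(pixel_pitch_um, focal_mm, altitude_m):
    """Ground sample distance in meters per pixel."""
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_mm * 1e-3)

g = gsd_m(pixel_pitch_um=7.8, focal_mm=28.0, altitude_m=213.0)
print(round(g * 100, 1), "cm/px")  # 5.9 cm/px
# Footprint of a hypothetical 3000 x 2000 px frame at this GSD:
print(round(g * 3000, 1), "x", round(g * 2000, 1), "m")
```

The same formula shows why flying lower trades coverage area for finer resolution: halving the altitude halves the GSD and quarters the frame footprint.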

  16. Cooperative Lander-Surface/Aerial Microflyer Missions for Mars Exploration

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Lay, Norman; Hine, Butler; Zornetzer, Steven

    2004-01-01

    Concepts are being investigated for exploratory missions to Mars based on Bioinspired Engineering of Exploration Systems (BEES), a guiding principle of this effort to develop biomorphic explorers. The novelty lies in the use of a robust telecom architecture for mission data return, utilizing multiple local relays (including the lander itself as a local relay and the explorers in the dual role of local relays) to enable ranges of 10 to 1,000 km and downlink of color imagery. As illustrated in Figure 1, multiple microflyers that can be either surface- or aerially launched are envisioned in shepherding, metamorphic, and imaging roles. These microflyers embody key bio-inspired principles in their flight control, navigation, and visual search operations. Honey-bee-inspired algorithms that use visual cues to perform autonomous navigation operations, such as terrain following, will be employed. The instrument suite will consist of a panoramic imager and a polarization imager specifically optimized to detect ice and water. For microflyers, particularly at small sizes, bio-inspired solutions appear to offer better alternatives than conventional engineered approaches. This investigation addresses a wide range of interrelated issues, including desired scientific data, sizes, rates, and communication ranges that can be accomplished in alternative mission scenarios. The mission illustrated in Figure 1 offers the most robust telecom architecture and the longest range for exploration, with two landers available as main local relays in addition to an ephemeral aerial-probe local relay. The shepherding or metamorphic planes serve in their dual role as local relays and image data collection/storage nodes. Appropriate placement of the landing site for the scout lander with respect to the main mission lander can allow coverage of extremely large ranges and enable exhaustive survey of the area of interest. 
In particular, this mission could help with the path planning and risk

  17. Improved seagrass mapping using linear spectral unmixing of aerial photographs

    NASA Astrophysics Data System (ADS)

    Uhrin, Amy V.; Townsend, Philip A.

    2016-03-01

    Mapping of seagrass is challenging, particularly in areas where seagrass cover ranges from extensive, continuous meadows to aggregations of patchy mounds often no more than a meter across. Manual delineation of seagrass habitat polygons through visual photointerpretation of high resolution aerial imagery remains the most widely adopted approach for mapping seagrass extent but polygons often include unvegetated gaps. Although mapped polygon data exist for many estuaries, these are likely insufficient to accurately characterize spatial pattern or estimate area actually occupied by seagrass. We evaluated whether a linear spectral unmixing (LSU) classifier applied to manually-delineated seagrass polygons clipped from digital aerial images could improve mapping of seagrass in North Carolina. Representative seagrass endmembers were chosen directly from images and used to unmix image-clipped polygons, resulting in fraction planes (maps) of the proportion of seagrass present in each image pixel. Thresholding was used to generate seagrass maps for each pixel proportion from 0 (no thresholding, all pixel proportions included) to 1 (only pixels having 100% seagrass) in 0.1 increments. The optimal pixel proportion for identifying seagrass was assessed using Euclidean distance calculated from Receiver Operating Characteristic (ROC) curves and overall thematic accuracy calculated from confusion matrices. We assessed overall classifier performance using Kappa statistics and Area Under the (ROC) Curve (AUC). We compared seagrass area calculated from each threshold map to the total area of the corresponding manually-delineated polygon. LSU effectively classified seagrass and performed better than a random classification as indicated by high values for both Kappa statistics (0.72-0.98) and AUC (0.80-0.99). The LSU classifier effectively distinguished between seagrass and bare substrate resulting in fine-scale seagrass maps with overall thematic accuracies that exceeded our expected
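The core of two-endmember linear spectral unmixing reduces to a per-pixel least-squares fit of the fraction f in the model pixel = f * seagrass + (1 - f) * sand, followed by thresholding the fraction map. The endmember and pixel reflectance spectra below are made up for illustration, not the study's endmembers.

```python
# Sketch of two-endmember linear spectral unmixing with illustrative
# reflectance spectra (three bands); not the study's actual endmembers.
def unmix(pixel, seagrass, sand):
    """Least-squares seagrass fraction for pixel = f*seagrass + (1-f)*sand."""
    num = sum((p - b) * (s - b) for p, s, b in zip(pixel, seagrass, sand))
    den = sum((s - b) ** 2 for s, b in zip(seagrass, sand))
    return max(0.0, min(1.0, num / den))  # clamp fraction to [0, 1]

seagrass = [0.05, 0.12, 0.04]    # dark in blue/red, modest green peak
sand     = [0.30, 0.35, 0.33]    # bright across visible bands
mixed    = [0.175, 0.235, 0.185]  # an exact 50/50 mixture

print(round(unmix(mixed, seagrass, sand), 3))  # 0.5
print(unmix(sand, seagrass, sand))             # 0.0
threshold = 0.4                                # illustrative cut-off
print(unmix(mixed, seagrass, sand) >= threshold)  # True
```

Sweeping the threshold from 0 to 1 and scoring each resulting map against reference data is exactly the ROC-style procedure the abstract describes for picking the optimal pixel proportion.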

  18. Unmanned aerial optical systems for spatial monitoring of Antarctic mosses

    NASA Astrophysics Data System (ADS)

    Lucieer, Arko; Turner, Darren; Veness, Tony; Malenovsky, Zbynek; Harwin, Stephen; Wallace, Luke; Kelcey, Josh; Robinson, Sharon

    2013-04-01

    The Antarctic continent has experienced major changes in temperature, wind speed, and stratospheric ozone levels during the last 50 years. In a manner similar to tree rings, old growth shoots of Antarctic mosses, the only plants on the continent, also preserve a climate record of their surrounding environment. This makes them an ideal bio-indicator of Antarctic climate change. Spatially extensive ground sampling of mosses is laborious and time limited due to the short Antarctic growing season. There is, therefore, a need for an efficient method to spatially monitor climate-change-induced stress of the Antarctic moss flora. Cloudy weather and high spatial fragmentation of the moss turfs make satellite imagery unsuitable for this task. Unmanned aerial systems (UAS), flying at low altitudes and collecting image data even under full overcast, can, however, overcome the insufficiency of satellite remote sensing. We therefore developed a scientific UAS, consisting of a remote-controlled micro-copter carrying different on-board remote sensing optical sensors, tailored to perform fast and cost-effective mapping of Antarctic flora at ultra-high spatial resolution (1-10 cm depending on flight altitude). A single lens reflex (SLR) camera carried by the UAS acquires multi-view aerial photography, which, processed by the Structure from Motion computer vision algorithm, provides an accurate three-dimensional digital surface model (DSM) at ultra-high spatial resolution. The DSM is the key input parameter for modelling local seasonal snowmelt run-off, which provides mosses with their vital water supply. A lightweight multispectral camera on board the UAS collects images in six selected spectral wavebands with a full-width-half-maximum (FWHM) of 10 nm. The spectral bands can be used to compute various vegetation optical indices, e.g. the Normalized Difference Vegetation Index (NDVI) or the Photochemical Reflectance Index (PRI), assessing the actual physiological state of polar vegetation. 
Recently
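NDVI, one of the indices named in this record, is a one-line computation from the red and near-infrared band reflectances. The reflectance values below are illustrative, not Antarctic moss measurements.

```python
# NDVI from red and near-infrared reflectances; values are illustrative.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red)

print(round(ndvi(nir=0.45, red=0.05), 2))  # 0.8  (dense, healthy vegetation)
print(round(ndvi(nir=0.30, red=0.25), 2))  # 0.09 (stressed or sparse cover)
```

Healthy vegetation reflects strongly in the near infrared and absorbs red light for photosynthesis, so declining NDVI over a moss bed is a plausible stress signal of the kind the study monitors.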

  19. Multispectral bilateral video fusion.

    PubMed

    Bennett, Eric P; Mason, John L; McMillan, Leonard

    2007-05-01

    We present a technique for enhancing underexposed visible-spectrum video by fusing it with simultaneously captured video from sensors in nonvisible spectra, such as Short Wave IR or Near IR. Although IR sensors can accurately capture video in low-light and night-vision applications, they lack the color and relative luminances of visible-spectrum sensors. RGB sensors do capture color and correct relative luminances, but are underexposed, noisy, and lack fine features due to short video exposure times. Our enhanced fusion output is a reconstruction of the RGB input assisted by the IR data, not an incorporation of elements imaged only in IR. With a temporal noise reduction, we first remove shot noise and increase the color accuracy of the RGB footage. The IR video is then normalized to ensure cross-spectral compatibility with the visible-spectrum video using ratio images. To aid fusion, we decompose the video sources with edge-preserving filters. We introduce a multispectral version of the bilateral filter called the "dual bilateral" that robustly decomposes the RGB video. It utilizes the less-noisy IR for edge detection but also preserves strong visible-spectrum edges not in the IR. We fuse the RGB low frequencies, the IR texture details, and the dual bilateral edges into a noise-reduced video with sharp details, correct chrominances, and natural relative luminances. PMID:17491451
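The bilateral filter at the heart of this method weights neighboring samples by both spatial distance and intensity difference, so it smooths noise while preserving edges; the paper's "dual bilateral" extends the intensity term cross-spectrally to the IR channel. A minimal 1-D single-spectrum sketch, with illustrative window size and sigmas:

```python
# Minimal 1-D bilateral filter sketch; sigmas and signal are illustrative.
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=20.0, radius=3):
    """Edge-preserving smoothing: spatial x range Gaussian weights."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *   # spatial
                 math.exp(-((v - signal[j]) ** 2) / (2 * sigma_r ** 2)))  # range
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A noisy step edge: smoothing flattens the noise but keeps the 100->200 jump.
step = [100, 104, 98, 102, 200, 196, 203, 199]
smoothed = bilateral_1d(step)
print(smoothed[3] < 150 < smoothed[4])  # True: the edge survives
```

The paper's dual bilateral replaces the single range term with one driven by the less-noisy IR signal while still honoring strong visible-spectrum edges; this sketch shows only the standard single-spectrum form.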

  20. Sampling video compression system

    NASA Technical Reports Server (NTRS)

    Matsumoto, Y.; Lum, H. (Inventor)

    1977-01-01

    A system for transmitting video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry is included for transmitting the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing transmitted video signals of one frame at the original high bandwidth frequency.
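The described sampling scheme can be sketched as follows: each transmitted frame carries only one diagonal of picture elements per block, with the selected diagonal cycling from frame to frame, so the channel carries 1/N of the samples while the receiver's frame memory accumulates a full picture over N frames. The block size and ramp image below are illustrative assumptions about the patent's scheme, not its exact claims.

```python
# Hedged sketch of diagonal sub-sampling for bandwidth compression.
# Block size N and the ramp test image are illustrative assumptions.
N = 4  # block dimension; also the length of the frame cycle

def sample_frame(image, phase):
    """Return (r, c, value) samples on the diagonal selected by `phase`."""
    samples = []
    for r in range(len(image)):
        for c in range(len(image[0])):
            if (r + c) % N == phase:  # one diagonal within each N x N block
                samples.append((r, c, image[r][c]))
    return samples

image = [[10 * r + c for c in range(8)] for r in range(8)]
memory = [[None] * 8 for _ in range(8)]  # receiver's frame memory
for phase in range(N):                   # N successive transmitted frames
    for r, c, v in sample_frame(image, phase):
        memory[r][c] = v

print(memory == image)                   # True: full picture after N frames
print(len(sample_frame(image, 0)) / 64)  # 0.25: per-frame bandwidth ratio
```

The trade-off is temporal: a static scene reconstructs perfectly after N frames, while fast motion would smear across the interleaved samples, which is the classic cost of this family of sampling compressors.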