Science.gov

Sample records for aerial multispectral imagery

  1. Analysis of aerial multispectral imagery to assess water quality parameters of Mississippi water bodies

    NASA Astrophysics Data System (ADS)

    Irvin, Shane Adison

    The goal of this study was to demonstrate the application of aerial imagery as a tool in detecting water quality indicators in a three-mile segment of Tibbee Creek in Clay County, Mississippi. Water samples from 10 transects were collected per sampling date over two periods in 2010 and 2011. Temperature and dissolved oxygen (DO) were measured at each point, and water samples were tested for turbidity and total suspended solids (TSS). Relative reflectance was extracted from high resolution (0.5 meter) multispectral aerial images. A regression model was developed for turbidity and TSS as a function of reflectance values for specific sampling dates. The best model was used to predict turbidity and TSS using datasets from dates outside the original model. The development of an appropriate predictive model for water quality assessment based on the relative reflectance of aerial imagery is affected by the quality of imagery and time of sampling.
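
    A minimal sketch of the kind of reflectance-to-turbidity regression described above, assuming hypothetical band reflectances and field measurements (none of the values or band choices come from the study):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical relative reflectance extracted at sampling points (green, red, NIR)
        reflectance = np.array([[0.08, 0.11, 0.05],
                                [0.10, 0.14, 0.06],
                                [0.12, 0.17, 0.08],
                                [0.09, 0.12, 0.05]])
        turbidity_ntu = np.array([14.0, 22.0, 31.0, 17.0])   # field-measured turbidity

        model = LinearRegression().fit(reflectance, turbidity_ntu)
        print("R^2 on training data:", model.score(reflectance, turbidity_ntu))

        # Predict turbidity for imagery from a different date (values hypothetical)
        new_pixels = np.array([[0.11, 0.15, 0.07]])
        print("Predicted turbidity (NTU):", model.predict(new_pixels))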

  2. Comparison of hyperspectral imagery with aerial photography and multispectral imagery for mapping broom snakeweed

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby] is one of the most widespread and abundant rangeland weeds in western North America. The objectives of this study were to evaluate airborne hyperspectral imagery and compare it with aerial color-infrared (CIR) photography and multispe...

  3. Tracking stormwater discharge plumes and water quality of the Tijuana River with multispectral aerial imagery

    NASA Astrophysics Data System (ADS)

    Svejkovsky, Jan; Nezlin, Nikolay P.; Mustain, Neomi M.; Kum, Jamie B.

    2010-04-01

    Spatial-temporal characteristics and environmental factors regulating the behavior of stormwater runoff from the Tijuana River in southern California were analyzed utilizing very high resolution aerial imagery and time-coincident environmental and bacterial sampling data. Thirty-nine multispectral aerial images with 2.1-m spatial resolution were collected after major rainstorms during 2003-2008. Utilizing differences in color reflectance characteristics, the ocean surface was classified into non-plume waters and three components of the runoff plume reflecting differences in age and suspended sediment concentrations. Tijuana River discharge rate was the primary factor regulating the size of the freshest plume component and its shorelong extensions to the north and south. Wave direction was found to affect the shorelong distribution of the shoreline-connected fresh plume components much more strongly than wind direction. Wave-driven sediment resuspension also significantly contributed to the size of the oldest plume component. Surf zone bacterial samples collected near the time of each image acquisition were used to evaluate the contamination characteristics of each plume component. The bacterial contamination of the freshest plume waters was very high (100% of surf zone samples exceeded California standards), but the oldest plume areas were heterogeneous, including both polluted and clean waters. The aerial imagery archive allowed study of river runoff characteristics on a plume component level, not previously done with coarser satellite images. Our findings suggest that high resolution imaging can quickly identify the spatial extents of the most polluted runoff but cannot be relied upon to always identify the entire polluted area. Our results also indicate that wave-driven transport is important in distributing the most contaminated plume areas along the shoreline.

  4. Estimation of cotton yield with varied irrigation and nitrogen treatments using aerial multispectral imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cotton yield varies spatially within a field. The variability can be caused by various production inputs such as soil properties, water management, and fertilizer application. Airborne multispectral imaging is capable of providing data and information to study effects of the inputs on yield qualitat...

  5. Advanced Image Processing of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn; Jobson, Daniel J.; Rahman, Zia-ur; Hines, Glenn

    2006-01-01

    Aerial imagery of the Earth is an invaluable tool for the assessment of ground features, especially during times of disaster. Researchers at the NASA Langley Research Center have developed techniques which have proven to be useful for such imagery. Aerial imagery from various sources, including Langley's Boeing 757 Aries aircraft, has been studied extensively. This paper discusses these studies and demonstrates that better-than-observer imagery can be obtained even when visibility is severely compromised. A real-time, multi-spectral experimental system will be described and numerous examples will be shown.

  6. Detecting new Buffel grass infestations in Australian arid lands: evaluation of methods using high-resolution multispectral imagery and aerial photography.

    PubMed

    Marshall, V M; Lewis, M M; Ostendorf, B

    2014-03-01

    We assess the feasibility of using airborne imagery for Buffel grass detection in Australian arid lands and evaluate four commonly used image classification techniques (visual estimate, manual digitisation, unsupervised classification and normalised difference vegetation index (NDVI) thresholding) for their suitability to this purpose. Colour digital aerial photography captured at approximately 5 cm of ground sample distance (GSD) and four-band (visible–near-infrared) multispectral imagery (25 cm GSD) were acquired (14 February 2012) across overlapping subsets of our study site. In the field, Buffel grass projected cover estimates were collected for quadrats (10 m diameter), which were subsequently used to evaluate the four image classification techniques. Buffel grass was found to be widespread throughout our study site; it was particularly prevalent in riparian land systems and alluvial plains. On hill slopes, Buffel grass was often present in depressions, valleys and crevices of rock outcrops, but the spread appeared to be dependent on soil type and vegetation communities. Visual cover estimates performed best (r² = 0.39), and pixel-based classifiers (unsupervised classification and NDVI thresholding) performed worst (r² = 0.21). Manual digitising consistently underrepresented Buffel grass cover compared with field- and image-based visual cover estimates; we did not find the labours of digitising rewarding. Our recommendation for regional documentation of new infestations of Buffel grass is to acquire ultra-high-resolution aerial photography, have a trained observer score cover against visual standards, and use the scored sites to interpolate density across the region.
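
    As a point of reference, NDVI thresholding of the kind evaluated above can be sketched as follows; the band arrays and the threshold value are hypothetical, not those used in the study:

        import numpy as np

        def ndvi(nir, red):
            """Normalised difference vegetation index, computed per pixel."""
            return (nir - red) / (nir + red + 1e-9)

        # Hypothetical red and NIR reflectance patches (rows x cols)
        red = np.array([[0.10, 0.22], [0.08, 0.30]])
        nir = np.array([[0.45, 0.25], [0.50, 0.33]])

        vegetation_mask = ndvi(nir, red) > 0.4   # threshold chosen for illustration only
        print(vegetation_mask)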

  7. Vegetation Fraction Mapping with Artificial Neural Network and High Resolution Multispectral Aerial Imagery Acquired During BEAREX07

    NASA Astrophysics Data System (ADS)

    Kersh, K. L.; Gowda, P. H.; Basu, S.; Howell, T. A.; O'Shaughnessy, S.; Rajan, N.; Akasheh, O. Z.

    2009-12-01

    Land surface models use vegetation fraction to more accurately partition latent, sensible and soil heat fluxes for a partially vegetated surface, as it affects energy and moisture exchanges between the earth's surface and the atmosphere. In recent years, there has been interest in integrating vegetation fraction data into intelligent irrigation scheduling systems to avoid false positive signals to irrigate. Remote sensing can facilitate the rapid collection of vegetation fraction information on individual fields over large areas in a timely and cost-effective manner. In this study, we developed a set of vegetation fraction models using least squares regression and artificial neural network (ANN) techniques and evaluated them using data collected during the Bushland Evapotranspiration and Agricultural Remote sensing Experiment 2007 (BEAREX07). During BEAREX07, six aircraft campaigns were conducted covering bare soil to full crop cover conditions. High resolution multispectral data included 0.5-m visible (green and red) and near infrared images and 1.8-m thermal infrared images acquired over the USDA-ARS Conservation and Production Research Laboratory in Bushland, Texas [35° 11' N, 102° 06' W; 1,170 m elevation MSL]. Atmospheric corrections were applied to these images before extracting spectral signatures for 40 ground truth locations. Field data collection at the ground truth locations during the aircraft campaigns included digital pictures of crop cover using a red/infrared camera. Vegetation fraction information was derived from the digital photos using a supervised classification. Comparison of performance statistics indicates that the ANN performed slightly better than the least squares regression models. The newly developed vegetation fraction models will be used in the evaluation of land surface energy balance based evapotranspiration models.
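
    A minimal sketch of comparing least squares regression against an ANN for vegetation fraction estimation, assuming hypothetical reflectance/fraction pairs (the study's own data, bands, and network architecture are not reproduced here):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        # Hypothetical band reflectances (green, red, NIR) and observed vegetation fraction
        X = np.array([[0.09, 0.12, 0.20], [0.08, 0.10, 0.35],
                      [0.07, 0.08, 0.45], [0.06, 0.06, 0.55],
                      [0.10, 0.14, 0.15], [0.07, 0.07, 0.50]])
        fc = np.array([0.15, 0.45, 0.65, 0.85, 0.05, 0.75])

        ols = LinearRegression().fit(X, fc)
        ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, fc)

        print("OLS R^2:", ols.score(X, fc))
        print("ANN R^2:", ann.score(X, fc))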

  8. Scaling Sap Flow Results Over Wide Areas Using High-Resolution Aerial Multispectral Digital Imaging, Leaf Area Index (LAI) and MODIS Satellite Imagery in Saltcedar Stands on the Lower Colorado River

    NASA Astrophysics Data System (ADS)

    Murray, R.; Neale, C.; Nagler, P. L.; Glenn, E. P.

    2008-12-01

    Heat-balance sap flow sensors provide direct estimates of water movement through plant stems and can be used to accurately measure leaf-level transpiration (EL) and stomatal conductance (GS) over time scales ranging from 20 minutes to a month or longer in natural stands of plants. However, their use is limited to relatively small branches on shrubs or trees, as the gauged stem section needs to be uniformly heated by the heating coil to produce valid measurements. This presents a scaling problem in applying the results to whole plants, stands of plants, and larger landscape areas. We used high-resolution aerial multispectral digital imaging with green, red and NIR bands as a bridge between ground measurements of EL and GS, and MODIS satellite imagery of a flood plain on the Lower Colorado River dominated by saltcedar (Tamarix ramosissima). Saltcedar is considered to be a high-water-use plant, and saltcedar removal programs have been proposed to salvage water. Hence, knowledge of actual saltcedar ET rates is needed on western U.S. rivers. Scaling EL and GS to large landscape units requires knowledge of leaf area index (LAI) over large areas. We used a LAI model developed for riparian habitats on Bosque del Apache, New Mexico, to estimate LAI at our study site on the Colorado River. We compared the model estimates to ground measurements of LAI, determined with a Li-Cor LAI-2000 Plant Canopy Analyzer calibrated by leaf harvesting to determine Specific Leaf Area (SLA) (m² leaf area per g dry weight of leaves) of the different species on the floodplain. LAI could be adequately predicted from NDVI from aerial multispectral imagery and could be cross-calibrated with MODIS NDVI and EVI. Hence, we were able to project point measurements of sap flow and LAI over multiple years and over large areas of floodplain using aerial multispectral imagery as a bridge between ground and satellite data. The methods are applicable to riparian corridors throughout the western U.S.

  9. Bathymetric mapping with passive multispectral imagery.

    PubMed

    Philpot, W D

    1989-04-15

    Bathymetric mapping will be most straightforward where water quality and atmospheric conditions are invariant over the scene. Under these conditions, both depth and an effective attenuation coefficient of the water over several different bottom types may be retrieved from passive, multispectral imagery. As scenes become more complex, with changing water type and variable atmospheric conditions, it is probable that a strictly spectral analysis will no longer be sufficient to extract depth from multispectral imagery. In these cases an independent source of information will be required. The most likely sources for such information are spatial and temporal variations in image data.
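
    One common way to retrieve depth from passive multispectral radiance under uniform water and atmospheric conditions is a log-linear model of the bottom-reflected signal; the sketch below fits such a model to hypothetical two-band data and is illustrative only, not the author's algorithm:

        import numpy as np

        # Hypothetical at-sensor radiances over a uniform bottom, plus deep-water radiances
        L_blue  = np.array([0.32, 0.25, 0.19, 0.15])
        L_green = np.array([0.28, 0.20, 0.14, 0.10])
        L_deep  = np.array([0.05, 0.04])            # deep-water (no bottom) signal per band
        depth_m = np.array([1.0, 2.0, 3.0, 4.0])    # known calibration depths

        # Depth is modeled as a linear function of log(L - L_deep) in each band
        X = np.column_stack([np.log(L_blue - L_deep[0]), np.log(L_green - L_deep[1])])
        A = np.column_stack([np.ones(len(depth_m)), X])
        coeffs, *_ = np.linalg.lstsq(A, depth_m, rcond=None)
        print("fitted coefficients:", coeffs)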

  10. Multispectral Analysis of NMR Imagery

    NASA Technical Reports Server (NTRS)

    Butterfield, R. L.; Vannier, M. W. And Associates; Jordan, D.

    1985-01-01

    Conference paper discusses initial efforts to adapt multispectral satellite-image analysis to nuclear magnetic resonance (NMR) scans of human body. Flexibility of these techniques makes it possible to present NMR data in variety of formats, including pseudocolor composite images of pathological internal features. Techniques do not have to be greatly modified from form in which used to produce satellite maps of such Earth features as water, rock, or foliage.

  11. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    PubMed

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated with complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery.

  12. Multispectral scanner imagery for plant community classification.

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.; Spencer, M. M.

    1973-01-01

    Optimum channel selection among 12 channels of multispectral scanner imagery identified six as providing the best information for computerized classification of 11 plant communities and two nonvegetation classes. Intensive preprocessing of the spectral data was required to eliminate bidirectional reflectance effects of the spectral imagery caused by scanner view angle and varying geometry of the plant canopy. Generalized plant community types - forest, grassland, and hydrophytic systems - were acceptably classified based on ecological analysis. Serious, but soluble, errors occurred with attempts to classify specific community types within the grassland system. However, special clustering analyses provided for improved classification of specific grassland communities.

  13. Image processing of underwater multispectral imagery

    USGS Publications Warehouse

    Zawada, D. G.

    2003-01-01

    Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.
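
    A minimal sketch of the two-step cell extraction idea (edge detection followed by binary morphology), using a synthetic frame and scikit-image; the threshold and structuring elements are hypothetical:

        import numpy as np
        from skimage import filters, morphology

        # Hypothetical fluorescence frame: weak background plus two bright "cells"
        frame = 0.05 * np.random.rand(64, 64)
        frame[10:13, 20:23] += 0.8
        frame[40:44, 50:53] += 0.9

        # Edge detection followed by binary morphology, as in the two-step procedure above
        edges = filters.sobel(frame) > 0.1
        cells = morphology.binary_closing(edges, morphology.disk(2))
        cells = morphology.remove_small_objects(cells, min_size=4)
        print("cell pixels found:", cells.sum())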

  14. Radiometric Characterization of IKONOS Multispectral Imagery

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Ryan, Robert E.; Kelly, Michelle; Holekamp, Kara; Zanoni, Vicki; Thome, Kurtis; Schiller, Stephen

    2002-01-01

    A radiometric characterization of Space Imaging's IKONOS 4-m multispectral imagery has been performed by a NASA-funded team from the John C. Stennis Space Center (SSC), the University of Arizona Remote Sensing Group (UARSG), and South Dakota State University (SDSU). Both intrinsic radiometry and the effects of Space Imaging processing on radiometry were investigated. Relative radiometry was examined with uniform Antarctic and Saharan sites. Absolute radiometric calibration was performed using reflectance-based vicarious calibration methods on several uniform sites imaged by IKONOS, coincident with ground-based surface and atmospheric measurements. Ground-based data and the IKONOS spectral response function served as input to radiative transfer codes to generate a Top-of-Atmosphere radiance estimate. Calibration coefficients derived from each vicarious calibration were combined to generate an IKONOS radiometric gain coefficient for each multispectral band assuming a linear response over the full dynamic range of the instrument. These calibration coefficients were made available to Space Imaging, which subsequently adopted them by updating its initial set of calibration coefficients. IKONOS imagery procured through the NASA Scientific Data Purchase program is processed with or without a Modulation Transfer Function Compensation kernel. The radiometric effects of this kernel on various scene types were also investigated. All imagery characterized was procured through the NASA Scientific Data Purchase program.
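
    For illustration, a per-band radiometric gain coefficient can be estimated from vicarious-calibration pairs by a linear fit through the origin; the digital numbers and radiances below are hypothetical, not the IKONOS values:

        import numpy as np

        # Hypothetical vicarious-calibration pairs: image digital numbers vs. modeled
        # top-of-atmosphere radiance (W m^-2 sr^-1 um^-1) for one multispectral band
        dn      = np.array([180.0, 320.0, 455.0, 610.0])
        toa_rad = np.array([ 28.0,  50.0,  71.0,  95.0])

        # Assume a linear sensor response through the origin: L = gain * DN
        gain = np.sum(dn * toa_rad) / np.sum(dn * dn)
        print("radiometric gain coefficient:", gain)
        print("residuals:", toa_rad - gain * dn)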

  15. Automated oil spill detection with multispectral imagery

    NASA Astrophysics Data System (ADS)

    Bradford, Brian N.; Sanchez-Reyes, Pedro J.

    2011-06-01

    In this publication we present an automated detection method for ocean surface oil, like that which existed in the Gulf of Mexico as a result of the April 20, 2010 Deepwater Horizon drilling rig explosion. Regions of surface oil in airborne imagery are isolated using red, green, and blue bands from multispectral data sets. The oil shape isolation procedure involves a series of image processing functions to draw out the visual phenomenological features of the surface oil. These functions include selective color band combinations, contrast enhancement and histogram warping. An image segmentation process then separates out contiguous regions of oil to provide a raster mask to an analyst. We automate the detection algorithm to allow large volumes of data to be processed in a short time period, which can provide timely oil coverage statistics to response crews. Geo-referenced and mosaicked data sets enable the largest identified oil regions to be mapped to exact geographic coordinates. In our simulation, multispectral imagery came from multiple sources including first-hand data collected from the Gulf. Results of the simulation show the oil spill coverage area as a raster mask, along with histogram statistics of the oil pixels. A rough square footage estimate of the coverage is reported if the image ground sample distance is available.
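
    A minimal sketch of the general idea (band combination, thresholding, and area estimation from the ground sample distance), with hypothetical data and an arbitrary index standing in for the enhancement chain described above:

        import numpy as np

        # Hypothetical RGB reflectance arrays and ground sample distance
        red, green, blue = (np.random.rand(100, 100) for _ in range(3))
        gsd_m = 2.0   # metres per pixel, illustrative

        # Simple band combination and threshold standing in for the enhancement chain
        oil_index = (red + green) / 2.0 - blue
        oil_mask = oil_index > 0.2

        area_m2 = oil_mask.sum() * gsd_m ** 2
        print("oil pixels:", int(oil_mask.sum()), "approx. area (m^2):", area_m2)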

  16. Development of a multispectral imagery device devoted to weed detection

    NASA Astrophysics Data System (ADS)

    Vioix, Jean-Baptiste; Douzals, Jean-Paul; Truchetet, Frederic; Navar, Pierre

    2003-04-01

    Multispectral imagery is a large domain with a number of practical applications: thermography, quality control in industry, food science, agronomy, etc. The main interest is to obtain spectral information about objects whose reflectance signal can be associated with physical, chemical and/or biological properties. Agronomic applications of multispectral imagery generally involve the acquisition of several images at visible and near-infrared wavelengths. This paper first presents different kinds of multispectral devices used for agronomic issues and then introduces an original multispectral design based on a single CCD. Finally, early results obtained for weed detection are presented.

  17. High resolution multispectral photogrammetric imagery: enhancement, interpretation and evaluations

    NASA Astrophysics Data System (ADS)

    Roberts, Arthur; Haefele, Martin; Bostater, Charles; Becker, Thomas

    2007-10-01

    A variety of aerial mapping cameras were adapted and developed into simulated multiband digital photogrammetric mapping systems. Direct digital multispectral cameras, two multiband cameras (IIS 4 band and Itek 9 band) and paired mapping and reconnaissance cameras were evaluated for digital spectral performance and photogrammetric mapping accuracy in an aquatic environment. Aerial films (24 cm × 24 cm format) tested were Agfa color negative and extended red (visible and near infrared) panchromatic, and Kodak color infrared and B&W (visible and near infrared) infrared. All films were negative processed to published standards and digitally converted at either 16 (color) or 10 (B&W) microns. Excellent precision in the digital conversions was obtained, with scanning errors of less than one micron. Radiometric data conversion was undertaken using linear density conversion and centered 8 bit histogram exposure. This resulted in multiple 8 bit spectral image bands that were unaltered (not radiometrically enhanced) "optical count" conversions of film density. This provided the best film density conversion to a digital product while retaining the original film density characteristics. Data covering water depth, water quality, surface roughness, and bottom substrate were acquired using different measurement techniques as well as different techniques to locate sampling points on the imagery. Despite extensive efforts to obtain accurate ground truth data, location errors, measurement errors, and variations in the correlation between water depth and remotely sensed signal persisted. These errors must be considered endemic and may not be removed through even the most elaborate sampling set-up. Results indicate that multispectral photogrammetric systems offer improved feature mapping capability.

  18. Digital computer processing of peach orchard multispectral aerial photography

    NASA Technical Reports Server (NTRS)

    Atkinson, R. J.

    1976-01-01

    Several methods of analysis using digital computers applicable to digitized multispectral aerial photography, are described, with particular application to peach orchard test sites. This effort was stimulated by the recent premature death of peach trees in the Southeastern United States. The techniques discussed are: (1) correction of intensity variations by digital filtering, (2) automatic detection and enumeration of trees in five size categories, (3) determination of unhealthy foliage by infrared reflectances, and (4) four band multispectral classification into healthy and declining categories.

  19. Semantic segmentation of multispectral overhead imagery

    NASA Astrophysics Data System (ADS)

    Prasad, Lakshman; Pope, Paul A.; Sentz, Kari

    2016-05-01

    Land cover classification uses multispectral pixel information to separate image regions into categories. Image segmentation seeks to separate image regions into objects and features based on spectral and spatial image properties. However, making sense of complex imagery typically requires identifying image regions that are often a heterogeneous mixture of categories and features that constitute functional semantic units such as industrial, residential, or commercial areas. This requires leveraging both spectral classification and spatial feature extraction synergistically to synthesize such complex but meaningful image units. We present an efficient graphical model for extracting such semantically cohesive regions. We employ an initial hierarchical segmentation of images into features represented as nodes of an attributed graph that represents feature properties as well as their adjacency relations with other features. This provides a framework to group spectrally and structurally diverse features, which are nevertheless semantically cohesive, based on user-driven identifications of features and their contextual relationships in the graph. We propose an efficient method to construct, store, and search an augmented graph that captures nonadjacent vicinity relationships of features. This graph can be used to query for semantic notional units consisting of ontologically diverse features by constraining it to specific query node types and their indicated/desired spatial interaction characteristics. User interaction with, and labeling of, initially segmented and categorized image feature graph can then be used to learn feature (node) and regional (subgraph) ontologies as constraints, and to identify other similar semantic units as connected components of the constraint-pruned augmented graph of a query image.

  20. An aerial multispectral thermographic survey of the Oak Ridge Reservation for selected areas K-25, X-10, and Y-12, Oak Ridge, Tennessee

    SciTech Connect

    Ginsberg, I.W.

    1996-10-01

    During June 5-7, 1996, the Department of Energy's Remote Sensing Laboratory performed day and night multispectral surveys of three areas at the Oak Ridge Reservation: K-25, X-10, and Y-12. Aerial imagery was collected with both a Daedalus DS1268 multispectral scanner and National Aeronautics and Space Administration's Thermal Infrared Multispectral System, which has six bands in the thermal infrared region of the spectrum. Imagery from the Thermal Infrared Multispectral System was processed to yield images of absolute terrain temperature and of the terrain's emissivities in the six spectral bands. The thermal infrared channels of the Daedalus DS1268 were radiometrically calibrated and converted to apparent temperature. A recently developed system for geometrically correcting and geographically registering scanner imagery was used with the Daedalus DS1268 multispectral scanner. The corrected and registered 12-channel imagery was orthorectified using a digital elevation model. 1 ref., 5 figs., 5 tabs.

  1. Texture analysis for colorectal tumour biopsies using multispectral imagery.

    PubMed

    Peyret, Remy; Bouridane, Ahmed; Al-Maadeed, Somaya Ali; Kunhoth, Suchithra; Khelifi, Fouad

    2015-08-01

    Colorectal cancer is one of the most common cancers in the world. As part of its diagnosis, a histological analysis is often run on biopsy samples. Multispectral imagery taken from cancer tissues can be useful to capture more meaningful features. However, the resulting data are usually very large, with a large number of varying feature types. This paper aims to investigate and compare the performance of multispectral imagery taken from colorectal biopsies using different techniques for texture feature extraction, including local binary patterns, Haralick features and local intensity order patterns. Various classifiers such as Support Vector Machine and Random Forest are also investigated. The results show the superiority of multispectral imaging over the classical panchromatic approach. In the multispectral imagery analysis, local binary patterns combined with a Support Vector Machine classifier give very good results, achieving an accuracy of 91.3%.
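
    A minimal sketch of local binary pattern features feeding a Support Vector Machine, assuming hypothetical single-band patches and labels (not the multispectral biopsy data of the study):

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def lbp_histogram(img, points=8, radius=1):
            """Uniform LBP histogram used as a texture descriptor for one image band."""
            lbp = local_binary_pattern(img, points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
            return hist

        # Hypothetical single-band patches and labels (0 = normal tissue, 1 = tumour)
        rng = np.random.default_rng(0)
        patches = [rng.random((32, 32)) for _ in range(20)]
        labels = [0] * 10 + [1] * 10

        features = np.array([lbp_histogram(p) for p in patches])
        clf = SVC(kernel="rbf").fit(features, labels)
        print("training accuracy:", clf.score(features, labels))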

  2. COCOA: tracking in aerial imagery

    NASA Astrophysics Data System (ADS)

    Ali, Saad; Shah, Mubarak

    2006-05-01

    Unmanned Aerial Vehicles (UAVs) are becoming a core intelligence asset for reconnaissance, surveillance and target tracking in urban and battlefield settings. In order to achieve the goal of automated tracking of objects in UAV videos we have developed a system called COCOA. It processes the video stream through a number of stages. In the first stage, platform motion compensation is performed. Moving object detection is then performed to detect the regions of interest, from which object contours are extracted by a level set based segmentation. Finally, blob based tracking is performed for each detected object. Global tracks are generated which are used for higher level processing. COCOA is customizable to different sensor resolutions and is capable of tracking targets as small as 100 pixels. It works seamlessly for both visible and thermal imaging modes. The system is implemented in Matlab and works in a batch mode.

  3. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several types of scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in precise 3D measurement of objects. In this paper the potential of thermal video for 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video, then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated using thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor and is equipped with a 25 mm lens, mounted on an Unmanned Aerial Vehicle (UAV). The results show that the 3D model generated from thermal images has accuracy comparable to the DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) is smaller than 5 decimetres in both the X and Y directions and 1.6 meters in the Z direction.
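
    A minimal sketch of the tie-point generation step using SIFT and a ratio-test match, with synthetic frames standing in for the thermal video (OpenCV is assumed; the bundle adjustment and dense matching steps are not shown):

        import cv2
        import numpy as np

        # Two synthetic frames with simple shapes standing in for thermal video frames;
        # the second frame is the first shifted to simulate small camera motion.
        base = np.zeros((480, 640), np.uint8)
        cv2.circle(base, (200, 200), 40, 255, -1)
        cv2.rectangle(base, (400, 100), (500, 220), 180, -1)
        frame1 = cv2.GaussianBlur(base, (9, 9), 3)
        frame2 = np.roll(frame1, 5, axis=1)

        # Detect SIFT keypoints and keep reliable tie points via a ratio test
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(frame1, None)
        kp2, des2 = sift.detectAndCompute(frame2, None)
        if des1 is None or des2 is None or len(des1) < 2 or len(des2) < 2:
            raise SystemExit("not enough keypoints in the synthetic frames")
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        tie_points = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print("candidate tie points:", len(tie_points))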

  4. Material Characterization using Passive Multispectral Polarimetric Imagery

    DTIC Science & Technology

    2013-03-01

    ...wavelength due to the tendency of all materials to polarize scattered light very weakly in that regime. The derivative would be near zero for metals and...

  5. Combined use of LiDAR data and multispectral earth observation imagery for wetland habitat mapping

    NASA Astrophysics Data System (ADS)

    Rapinel, Sébastien; Hubert-Moy, Laurence; Clément, Bernard

    2015-05-01

    Although wetlands play a key role in controlling flooding and nonpoint source pollution, sequestering carbon and providing an abundance of ecological services, the inventory and characterization of wetland habitats are most often limited to small areas. This explains why the understanding of their ecological functioning is still insufficient for a reliable functional assessment on areas larger than a few hectares. While LiDAR data and multispectral Earth Observation (EO) images are often used separately to map wetland habitats, their combined use is currently being assessed for different habitat types. The aim of this study is to evaluate the combination of multispectral and multiseasonal imagery and LiDAR data to precisely map the distribution of wetland habitats. The image classification was performed combining an object-based approach and decision-tree modeling. Four multispectral images with high (SPOT-5) and very high spatial resolution (Quickbird, KOMPSAT-2, aerial photographs) were classified separately. Another classification was then applied integrating summer and winter multispectral image data and three layers derived from LiDAR data: vegetation height, microtopography and intensity return. The comparison of classification results shows that some habitats are better identified on the winter image and others on the summer image (overall accuracies = 58.5 and 57.6%). They also point out that classification accuracy is greatly improved (overall accuracy = 86.5%) when combining LiDAR data and multispectral images. Moreover, this study highlights the advantage of integrating vegetation height, microtopography and intensity parameters in the classification process. This article demonstrates that information provided by the synergistic use of multispectral images and LiDAR data can help in wetland functional assessment.

  6. Building and road detection from large aerial imagery

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Aoki, Yoshimitsu

    2015-02-01

    Building and road detection from aerial imagery has many applications in a wide range of areas including urban design, real-estate management, and disaster relief. Extracting buildings and roads from aerial imagery has traditionally been performed manually by human experts, making it a costly and time-consuming process. Our goal is to develop a system for automatically detecting buildings and roads directly from aerial imagery. Many attempts at automatic aerial imagery interpretation have been proposed in the remote sensing literature, but much of the early work uses local features to classify each pixel or segment to an object label, so these kinds of approaches need some prior knowledge of object appearance or of the class-conditional distribution of pixel values. Furthermore, some works also need a segmentation step as pre-processing. Therefore, we use Convolutional Neural Networks (CNNs) to learn a mapping from raw pixel values in aerial imagery to three object labels (buildings, roads, and others); in other words, we generate three-channel maps from raw aerial imagery input. We take a patch-based semantic segmentation approach: we first divide large aerial images into small patches and then train the CNN with those patches and the corresponding three-channel map patches. Finally, we evaluate our system on a large-scale road and building detection dataset that is publicly available.
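
    A minimal sketch of a patch-based CNN mapping 3-band patches to three labels, with an architecture and patch size chosen purely for illustration (not the network used by the authors):

        import torch
        import torch.nn as nn

        # Tiny patch classifier mapping 3-band aerial patches to 3 labels
        # (building / road / other); shapes and sizes are illustrative only.
        class PatchNet(nn.Module):
            def __init__(self, n_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
                self.classifier = nn.Linear(32 * 8 * 8, n_classes)

            def forward(self, x):                      # x: (batch, 3, 32, 32)
                x = self.features(x)
                return self.classifier(x.flatten(1))

        patches = torch.randn(4, 3, 32, 32)            # hypothetical image patches
        labels = torch.tensor([0, 1, 2, 1])
        model = PatchNet()
        loss = nn.CrossEntropyLoss()(model(patches), labels)
        loss.backward()
        print("loss:", float(loss))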

  7. Classification Metrics for Improved Atmospheric Correction of Multispectral VNIR Imagery

    PubMed Central

    Richter, Rudolf

    2008-01-01

    Multispectral visible/near-infrared (VNIR) earth observation satellites, e.g., Ikonos, Quickbird, ALOS AVNIR-2, and DMC, usually acquire imagery in a few (3 – 5) spectral bands. Atmospheric correction is a challenging task for these images because the standard methods require at least one shortwave infrared band (around 1.6 or 2.2 μm) or hyperspectral instruments to derive the aerosol optical thickness. New classification metrics for defining cloud, cloud over water, haze, water, and saturation are presented to achieve improvements for an automatic processing system. The background is an ESA contract for the development of a prototype atmospheric processor for the optical payload AVNIR-2 on the ALOS platform. PMID:27873911

  8. Use of remote sensing techniques for geological hazard surveys in vegetated urban regions. [multispectral imagery for lithological mapping

    NASA Technical Reports Server (NTRS)

    Stow, S. H.; Price, R. C.; Hoehner, F.; Wielchowsky, C.

    1976-01-01

    The feasibility of using aerial photography for lithologic differentiation in a heavily vegetated region is investigated using multispectral imagery obtained from LANDSAT satellite and aircraft-borne photography. Delineating and mapping of localized vegetal zones can be accomplished by the use of remote sensing because a difference in morphology and physiology results in different natural reflectances or signatures. An investigation was made to show that these local plant zones are affected by altitude, topography, weathering, and gullying; but are controlled by lithology. Therefore, maps outlining local plant zones were used as a basis for lithologic map construction.

  9. Challenges in collecting hyperspectral imagery of coastal waters using Unmanned Aerial Vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    English, D. C.; Herwitz, S.; Hu, C.; Carlson, P. R., Jr.; Muller-Karger, F. E.; Yates, K. K.; Ramsewak, D.

    2013-12-01

    Airborne multi-band remote sensing is an important tool for many aquatic applications, and the increased spectral information from hyperspectral sensors may increase the utility of coastal surveys. Recent technological advances allow Unmanned Aerial Vehicles (UAVs) to be used as alternatives or complements to manned aircraft or in situ observing platforms, and promise significant advantages for field studies. These include the ability to conduct programmed flight plans, prolonged and coordinated surveys, and agile flight operations under difficult conditions such as measurements made at low altitudes. Hyperspectral imagery collected from UAVs should allow the increased differentiation of water column or shallow benthic communities at relatively small spatial scales. However, the analysis of hyperspectral imagery from airborne platforms over shallow coastal waters differs from that used for terrestrial or oligotrophic ocean color imagery, and the operational constraints and considerations for the collection of such imagery from autonomous platforms also differ from terrestrial surveys using manned aircraft. Multispectral and hyperspectral imagery of shallow seagrass and coral environments in the Florida Keys were collected with various sensor systems mounted on manned and unmanned aircraft in May 2012, October 2012, and May 2013. The imaging systems deployed on UAVs included NovaSol's Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK), a Tetracam multispectral imaging system, and the Sunflower hyperspectral imager from Galileo Group, Inc. The UAVs carrying these systems were Xtreme Aerial Concepts' Vision-II Rotorcraft UAV, MLB Company's Bat-4 UAV, and NASA's SIERRA UAV, respectively. Additionally, the Galileo Group's manned aircraft also surveyed the areas with their AISA Eagle hyperspectral imaging system. For both manned and autonomous flights, cloud cover and sun glint (solar and viewing angles) were dominant constraints on retrieval of quantitatively

  10. Effective delineation of urban flooded areas based on aerial ortho-photo imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Guindon, Bert; Raymond, Don; Hong, Gang

    2016-10-01

    The combination of rapid global urban growth and climate change has resulted in increased occurrence of major urban flood events across the globe. The distribution of flooded area is one of the key information layers for applications of emergency planning and response management. While SAR systems and technologies have been widely used for flood area delineation, radar images suffer from range ambiguities arising from corner reflection effects and shadowing in dense urban settings. A new mapping framework is proposed for the extraction and quantification of flood extent based on aerial optical multi-spectral imagery and ancillary data. This involves first mapping of flood areas directly visible to the sensor. Subsequently, the complete area of submergence is estimated from this initial mapping and inference techniques based on baseline data such as land cover and GIS information such as available digital elevation models. The methodology has been tested and proven effective using aerial photography for the case of the 2013 flood in Calgary, Canada.

  11. City of Irving utilizes high resolution multispectral imagery for NPDES compliance

    SciTech Connect

    Monday, H.M.; Urban, J.S.; Mulawa, D.; Benkelman, C.A.

    1994-04-01

    A case history of using high resolution multispectral imagery is described. A statistical clustering method was applied to identify the primary spectral signatures present within the image data. This was for the National Pollution Discharge Elimination System (NPDES).

  12. Application of High Resolution Multispectral Imagery for Levee Slide Detection and Monitoring

    NASA Technical Reports Server (NTRS)

    Hossain, A. K. M. Azad; Easson, Greg

    2007-01-01

    The objective is to develop methods to detect and monitor levee slides using commercially available high resolution multispectral imagery. High resolution multispectral imagery such as IKONOS and QuickBird is suitable for detecting and monitoring levee slides. IKONOS is suitable for visual inspection, image classification and Tasseled Cap transform based slide detection. The Tasseled Cap based model was found to be the best method for slide detection. QuickBird was suitable for visual inspection and image classification.

  13. Correlation and registration of ERTS multispectral imagery. [by a digital processing technique

    NASA Technical Reports Server (NTRS)

    Bonrud, L. O.; Henrikson, P. J.

    1974-01-01

    Examples of automatic digital processing demonstrate the feasibility of registering one ERTS multispectral scanner (MSS) image with another obtained on a subsequent orbit, and automatic matching, correlation, and registration of MSS imagery with aerial photography (multisensor correlation) is demonstrated. Excellent correlation was obtained with patch sizes exceeding 16 pixels square. Qualities which lead to effective control point selection are distinctive features, good contrast, and constant feature characteristics. Results of the study indicate that more than 300 degrees of freedom are required to register two standard ERTS-1 MSS frames covering 100 by 100 nautical miles to an accuracy of 0.6 pixel mean radial displacement error. An automatic strip processing technique demonstrates 600 to 1200 degrees of freedom over a quarter frame of ERTS imagery. Registration accuracies in the range of 0.3 pixel to 0.5 pixel mean radial error were confirmed by independent error analysis. Accuracies in the range of 0.5 pixel to 1.4 pixel mean radial error were demonstrated by semi-automatic registration over small geographic areas.
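
    A minimal sketch of patch correlation for registration, using normalised cross-correlation over a small synthetic image pair; the patch size and images are hypothetical, and the ERTS-specific geometric model is not reproduced:

        import numpy as np

        def match_patch(reference, search, patch_size=16):
            """Locate a reference-image patch inside a search image by normalised
            cross-correlation, returning the best-matching offset and score."""
            ref = reference[:patch_size, :patch_size].astype(float)
            ref = (ref - ref.mean()) / (ref.std() + 1e-9)
            best, best_score = (0, 0), -np.inf
            for r in range(search.shape[0] - patch_size + 1):
                for c in range(search.shape[1] - patch_size + 1):
                    win = search[r:r + patch_size, c:c + patch_size].astype(float)
                    win = (win - win.mean()) / (win.std() + 1e-9)
                    score = np.mean(ref * win)
                    if score > best_score:
                        best, best_score = (r, c), score
            return best, best_score

        # Hypothetical images: the second image is the reference shifted by (3, 5)
        reference = np.random.rand(32, 32)
        search = np.roll(np.roll(reference, 3, axis=0), 5, axis=1)
        print(match_patch(reference, search))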

  14. Estimating soil organic carbon using aerial imagery and soil surveys

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Widespread implementation of precision agriculture practices requires low-cost, high-quality, georeferenced soil organic carbon (SOC) maps, but currently these maps require expensive sample collection and analysis. Widely available aerial imagery is a low-cost source of georeferenced data. After til...

  15. Texture mapping based on multiple aerial imageries in urban areas

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Ye, Siqi; Wang, Yuefeng; Han, Caiyun; Wang, Chenxi

    2015-12-01

    In realistic 3D model reconstruction, the quality requirements on texture are very high; texture is one of the key factors affecting the realism of a model and is applied through texture mapping. In this paper we present a practical approach to texture mapping of urban areas based on photogrammetric theory and multiple aerial images. The collinearity equations are used to match the model to the imagery, and, to improve texture quality, we describe an automatic approach for selecting the optimal texture for each 3D building from aerial images of multiple strips. Building textures are matched automatically by the algorithm. The experimental results show that the texture mapping platform achieves a high degree of automation and improves the efficiency of 3D model reconstruction.

  16. JACIE Radiometric Assessment of QuickBird Multispectral Imagery

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Carver, David; Holekamp, Kara; Knowlton, Kelly; Ryan, Robert; Zanoni, Vicki; Thome, Kurtis; Aaron, David

    2004-01-01

    Radiometric calibration of commercial imaging satellite products is required to ensure that science and application communities can place confidence in the imagery they use and can fully understand its properties. Inaccurate radiometric calibrations can lead to erroneous decisions and invalid conclusions and can limit intercomparisons with other systems. To address this calibration need, the NASA Stennis Space Center (SSC) Earth Science Applications (ESA) directorate, through the Joint Agency for Commercial Imagery Evaluation (JACIE) framework, established a commercial imaging satellite radiometric calibration team consisting of two groups: 1) NASA SSC ESA, supported by South Dakota State University, and 2) the University of Arizona Remote Sensing Group. The two groups determined the absolute radiometric calibration coefficients of the Digital Globe 4-band, 2.4-m QuickBird multispectral product covering the visible through near-infrared spectral region. For a 2-year period beginning in 2002, both groups employed some variant of a reflectance-based vicarious calibration approach, which required ground-based measurements coincident with QuickBird image acquisitions and radiative transfer calculations. The groups chose several study sites throughout the United States that covered nearly the entire dynamic range of the QuickBird sensor. QuickBird at-sensor radiance values were compared with those estimated by the two independent groups to determine the QuickBird sensor's radiometric accuracy. Approximately 20 at-sensor radiance estimates were vicariously determined each year. The estimates were combined to provide a high-precision radiometric gain calibration coefficient. The results of this evaluation provide the user community with an independent assessment of the QuickBird sensor's absolute calibration and stability over the 2-year period. While the techniques and method described reflect those developed at the NASA SSC, the results of both JACIE team groups are

  17. Enhancing the Detectability of Subtle Changes in Multispectral Imagery Through Real-time Change Magnification

    DTIC Science & Technology

    2015-07-27

    ...changes (movement or temperature fluctuations) in multiband (visual, near-, shortwave- and longwave-infrared) imagery while simultaneously reducing...dynamic noise. We successfully applied the adapted algorithm to enhance the visibility of small movements in the visual, near-infrared and thermal (LWIR...) image.

  18. Wildlife Multispecies Remote Sensing Using Visible and Thermal Infrared Imagery Acquired from AN Unmanned Aerial Vehicle (uav)

    NASA Astrophysics Data System (ADS)

    Chrétien, L.-P.; Théau, J.; Ménard, P.

    2015-08-01

    Wildlife aerial surveys require time and significant resources. Multispecies detection could reduce costs to a single census for species that coexist spatially. Traditional methods are demanding for observers in terms of concentration and are not adapted to multispecies censuses. The processing of multispectral aerial imagery acquired from an unmanned aerial vehicle (UAV) represents a potential solution for multispecies detection. The method used in this study is based on a multicriteria object-based image analysis applied on visible and thermal infrared imagery acquired from a UAV. This project aimed to detect American bison, fallow deer, gray wolves, and elks located in separate enclosures with a known number of individuals. Results showed that all bison and elks were detected without errors, while for deer and wolves, 0-2 individuals per flight line were mistaken for ground elements or went undetected. This approach also detected simultaneously and separately the four targeted species even in the presence of other untargeted ones. These results confirm the potential of multispectral imagery acquired from UAV for wildlife census. Its operational application remains limited to small areas related to the current regulations and available technology. Standardization of the workflow will help to reduce time and expertise requirements for such technology.

  19. Automated Road Extraction from High Resolution Multispectral Imagery

    SciTech Connect

    Doucette, Peter J.; Agouris, Peggy; Stefanidis, Anthony

    2004-12-01

    Road networks represent a vital component of geospatial data sets in high demand, and thus contribute significantly to extraction labor costs. Multispectral imagery has only recently become widely available at high spatial resolutions, and modeling spectral content has received limited consideration for road extraction algorithms. This paper presents a methodology that exploits spectral content for fully automated road centerline extraction. Preliminary detection of road centerline pixel candidates is performed with Anti-parallel-edge Centerline Extraction (ACE). This is followed by constructing a road vector topology with a fuzzy grouping model that links nodes from a self-organized mapping of the ACE pixels. Following topology construction, a self-supervised road classification (SSRC) feedback loop is implemented to automate the process of training sample selection and refinement for a road class, as well as deriving practical spectral definitions for non-road classes. SSRC demonstrates a potential to provide dramatic improvement in road extraction results by exploiting spectral content. Road centerline extraction results are presented for three 1m color-infrared suburban scenes, which show significant improvement following SSRC.

  20. Usefulness of Skylab color photography and ERTS-1 multispectral imagery for mapping range vegetation types in southwestern Wyoming

    NASA Technical Reports Server (NTRS)

    Gordon, R. C. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Aerial photography at scales of 1:43,400 and 1:104,500 was used to evaluate the usefulness of Skylab color photography (scales of 1:477,979 and 1:712,917) and ERTS-1 multispectral imagery (scale 1:1,000,000) for mapping range vegetation types. The project was successful in producing a range vegetation map of the 68,000 acres of salt desert shrub type in southwestern Wyoming. Techniques for estimation of above-ground green biomass have not yet been confirmed due to the mechanical failure of the photometer used in obtaining relative reflectance measurement. However, graphs of log transmittance versus above-ground green biomass indicate that production estimates may be made for some vegetation types from ERTS imagery. Other vegetation types not suitable for direct ERTS estimation of green biomass may possibly be related to those vegetation types whose production has been estimated from the multispectral imagery.

  1. Converting aerial imagery to application maps

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Over the last couple of years in Agricultural Aviation and at the 2014 and 2015 NAAA conventions, we have written about and presented both single-camera and two-camera imaging systems for use on agricultural aircraft. Many aerial applicators have shown a great deal of interest in the imaging systems...

  2. Landsat 8 Multispectral and Pansharpened Imagery Processing on the Study of Civil Engineering Issues

    NASA Astrophysics Data System (ADS)

    Lazaridou, M. A.; Karagianni, A. Ch.

    2016-06-01

    Scientific and professional interests of civil engineering mainly include structures, hydraulics, geotechnical engineering, environment, and transportation issues. Topics in this context may concern urban environment issues, urban planning, hydrological modelling, the study of hazards, and road construction. Land cover information contributes significantly to the study of the above subjects. Land cover information can be acquired effectively by visual image interpretation of satellite imagery, after applying enhancement routines, and also by imagery classification. The Landsat Data Continuity Mission (LDCM - Landsat 8) is the latest satellite in the Landsat series, launched in February 2013. Landsat 8 medium spatial resolution multispectral imagery is of particular interest for extracting land cover because of its fine spectral resolution, the radiometric quantization of 12 bits, the capability of merging the 15-meter high resolution panchromatic band with the 30-meter multispectral imagery, as well as the policy of free data. In this paper, Landsat 8 multispectral and panchromatic imagery is used, covering the surroundings of a lake in north-western Greece. Land cover information is extracted using suitable digital image processing software. The rich spectral content of the multispectral image is combined with the high spatial resolution of the panchromatic image by applying image fusion (pansharpening), facilitating visual image interpretation to delineate land cover. Further processing concerns supervised image classification. The classification of the pansharpened image preceded the multispectral image classification. Corresponding comparative considerations are also presented.
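
    A minimal sketch of a simple ratio-based (Brovey-style) fusion, standing in for whatever pansharpening routine the processing software applies; the arrays and band count are hypothetical:

        import numpy as np

        def brovey_pansharpen(ms, pan):
            """Simple Brovey-style fusion: rescale each multispectral band by the ratio
            of the panchromatic band to the multispectral intensity. `ms` is a
            (bands, rows, cols) array already resampled to the panchromatic grid."""
            intensity = ms.mean(axis=0) + 1e-9
            return ms * (pan / intensity)

        # Hypothetical 3-band multispectral data upsampled to the 15 m pan grid
        ms = np.random.rand(3, 100, 100)
        pan = np.random.rand(100, 100)
        sharpened = brovey_pansharpen(ms, pan)
        print(sharpened.shape)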

  3. Acquisition and registration of aerial video imagery of urban traffic

    SciTech Connect

    Loveland, Rohan C

    2008-01-01

    The amount of information available about urban traffic from aerial video imagery is extremely high. Here we discuss the collection of such video imagery from a helicopter platform with a low-cost sensor, and the post-processing used to correct radial distortion in the data and register it. The radial distortion correction is accomplished using a Harris model. The registration is implemented in a two-step process, using a globally applied polyprojective correction model followed by a fine scale local displacement field adjustment. The resulting cleaned-up data is sufficiently well-registered to allow subsequent straightforward vehicle tracking.
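
    A minimal sketch of radial distortion correction using OpenCV's pinhole/Brown model (not the Harris model mentioned above); the intrinsics and distortion coefficients are hypothetical:

        import cv2
        import numpy as np

        # Hypothetical pinhole intrinsics and radial distortion coefficients for the
        # low-cost sensor; real values would come from a camera calibration.
        K = np.array([[1000.0, 0.0, 640.0],
                      [0.0, 1000.0, 360.0],
                      [0.0, 0.0, 1.0]])
        dist = np.array([-0.25, 0.08, 0.0, 0.0])   # k1, k2, p1, p2

        frame = (np.random.rand(720, 1280, 3) * 255).astype(np.uint8)
        undistorted = cv2.undistort(frame, K, dist)
        print(undistorted.shape)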

  4. Real-time aerial multispectral imaging solutions using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Chandler, Eric V.; Fish, David E.

    2014-06-01

    The next generation of multispectral sensors and cameras needs to deliver significant improvements in size, weight, portability, and spectral band customization to support widespread commercial deployment for a variety of purpose-built aerial, unmanned, and scientific applications. The benefits of multispectral imaging are well established for applications including machine vision, biomedical, authentication, and remote sensing environments - but many aerial and OEM solutions require more compact, robust, and cost-effective production cameras to realize these benefits. A novel implementation uses micropatterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color camera image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. We demonstrate recent results of 4-9 band dichroic filter arrays in multispectral cameras using a variety of sensors including linear, area, silicon, and InGaAs. Specific implementations range from hybrid RGB + NIR sensors to custom sensors with application-specific VIS, NIR, and SWIR spectral bands. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches - including their passivity, spectral range, customization options, and development path. Finally, we report on the wafer-level fabrication of dichroic filter arrays on imaging sensors for scalable production of multispectral sensors and cameras.

  5. The use of ERTS-1 multispectral imagery for crop identification in a semi-arid climate

    NASA Technical Reports Server (NTRS)

    Stockton, J. G.; Bauer, M. E.; Blair, B. O.; Baumgardner, M. F.

    1975-01-01

    Crop identification using multispectral satellite imagery and multivariate pattern recognition was used to identify wheat accurately in Greeley County, Kansas. A classification accuracy of 97 percent was found for wheat and the wheat estimate in hectares was within 5 percent of the USDA's Statistical Reporting Service estimate for 1973. The multispectral response of cotton and sorghum in Texas was not unique enough to distinguish between them nor to separate them from other cultivated crops.

  6. Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Hayden, David; Thompson, David R.; Castano, Rebecca

    2013-01-01

    Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. To address this situation, computationally efficient algorithms were developed for analyzing science imagery onboard the spacecraft. These algorithms autonomously cluster the data into classes of similar imagery, enabling selective downlink of representatives of each class and of a map classifying the imaged terrain, rather than the full dataset, thereby reducing the volume of the downlinked data. A range of approaches was examined, including k-means clustering using image features based on color, texture, temporal, and spatial arrangement.
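    A minimal sketch of the clustering-and-representative-selection idea, using scikit-learn's k-means on simple per-frame colour statistics; the feature choice, cluster count, and data below are hypothetical stand-ins for the onboard feature set described above.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical archive: 500 small multispectral frames, 4 bands, 64x64 pixels.
frames = rng.random((500, 4, 64, 64))

# Simple per-frame features: mean and standard deviation of each band.
feats = np.concatenate([frames.mean(axis=(2, 3)), frames.std(axis=(2, 3))], axis=1)

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(feats)

# Downlink candidates: the frame closest to each cluster centroid.
representatives = []
for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[np.argmin(d)]))

print("class labels (first 10):", km.labels_[:10])
print("representative frames:", representatives)
```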

  7. Estimating crop water requirements of a command area using multispectral video imagery and geographic information systems

    NASA Astrophysics Data System (ADS)

    Ahmed, Rashid Hassan

    This research focused on the potential use of multispectral video remote sensing for irrigation water management. Two methods for estimating crop evapotranspiration were investigated, the energy balance estimation from multispectral video imagery and use of reflectance-based crop coefficients from multitemporal multispectral video imagery. The energy balance method was based on estimating net radiation, and soil and sensible heat fluxes, using input from the multispectral video imagery. The latent heat flux was estimated as a residual. The results were compared to surface heat fluxes measured on the ground. The net radiation was estimated within 5% of the measured values. However, the estimates of sensible and soil heat fluxes were not consistent with the measured values. This discrepancy was attributed to the methods for estimating the two fluxes. The degree of uncertainty in the parameters used in the methods made their application too limited for extrapolation to large agricultural areas. The second method used reflectance-based crop coefficients developed from the multispectral video imagery using alfalfa as a reference crop. The daily evapotranspiration from alfalfa was estimated using a nearby weather station. With the crop coefficients known for a canal command area, irrigation scheduling was simulated using the soil moisture balance method. The estimated soil moisture matched the actual soil moisture measured using the neutron probe method. Also, the overall water requirement estimated by this method was found to be in close agreement with the canal water deliveries. The crop coefficient method has great potential for irrigation management of large agricultural areas.
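    The two estimation routes described above reduce to simple bookkeeping once the component fluxes or coefficients are known. The sketch below is a schematic of those relations with hypothetical numbers, not the study's calibrated procedure.

```python
def latent_heat_residual(rn, g, h):
    """Energy-balance residual: latent heat flux LE = Rn - G - H (W m^-2)."""
    return rn - g - h

def crop_et(kc, et_ref):
    """Reflectance-based crop coefficient approach: ETc = Kc * ET_reference (mm/day)."""
    return kc * et_ref

# Hypothetical midday values for one field.
le = latent_heat_residual(rn=550.0, g=80.0, h=150.0)   # -> 320 W m^-2
etc = crop_et(kc=0.85, et_ref=7.2)                     # -> ~6.1 mm/day
print(le, etc)
```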

  8. Encoding and analyzing aerial imagery using geospatial semantic graphs

    SciTech Connect

    Watson, Jean-Paul; Strip, David R.; McLendon, William C.; Parekh, Ojas D.; Diegert, Carl F.; Martin, Shawn Bryan; Rintoul, Mark Daniel

    2014-02-01

    While collection capabilities have yielded an ever-increasing volume of aerial imagery, analytic techniques for identifying patterns in and extracting relevant information from this data have seriously lagged. The vast majority of imagery is never examined, due to a combination of the limited bandwidth of human analysts and limitations of existing analysis tools. In this report, we describe an alternative, novel approach to both encoding and analyzing aerial imagery, using the concept of a geospatial semantic graph. The advantages of our approach are twofold. First, intuitive templates can be easily specified in terms of the domain language in which an analyst converses. These templates can be used to automatically and efficiently search large graph databases, for specific patterns of interest. Second, unsupervised machine learning techniques can be applied to automatically identify patterns in the graph databases, exposing recurring motifs in imagery. We illustrate our approach using real-world data for Anne Arundel County, Maryland, and compare the performance of our approach to that of an expert human analyst.
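    The template-search idea can be sketched with an attributed graph and subgraph matching in NetworkX; the node types, relations, and template below are hypothetical illustrations, not the report's actual schema or implementation.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Hypothetical geospatial semantic graph: nodes are image objects with a 'kind'
# attribute, edges encode an 'adjacent' spatial relation.
scene = nx.Graph()
scene.add_nodes_from([(1, {"kind": "building"}), (2, {"kind": "parking_lot"}),
                      (3, {"kind": "road"}),     (4, {"kind": "building"})])
scene.add_edges_from([(1, 2), (2, 3), (3, 4)])

# Analyst template: a building adjacent to a parking lot adjacent to a road.
template = nx.Graph()
template.add_nodes_from([("b", {"kind": "building"}), ("p", {"kind": "parking_lot"}),
                         ("r", {"kind": "road"})])
template.add_edges_from([("b", "p"), ("p", "r")])

matcher = isomorphism.GraphMatcher(
    scene, template, node_match=isomorphism.categorical_node_match("kind", None))
for mapping in matcher.subgraph_isomorphisms_iter():
    print("match:", mapping)   # scene node -> template node
```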

  9. Interactive color display for multispectral imagery using correlation clustering

    NASA Technical Reports Server (NTRS)

    Haskell, R. E. (Inventor)

    1979-01-01

    A method for processing multispectral data is provided, which permits an operator to make parameter level changes during the processing of the data. The system is directed to production of a color classification map on a video display in which a given color represents a localized region in multispectral feature space. Interactive controls permit an operator to alter the size and change the location of these regions, permitting the classification of such region to be changed from a broad to a narrow classification.

  10. Building population mapping with aerial imagery and GIS data

    NASA Astrophysics Data System (ADS)

    Ural, Serkan; Hussain, Ejaz; Shan, Jie

    2011-12-01

    Geospatial distribution of population at a scale of individual buildings is needed for analysis of people's interaction with their local socio-economic and physical environments. High resolution aerial images are capable of capturing urban complexities and considered as a potential source for mapping urban features at this fine scale. This paper studies population mapping for individual buildings by using aerial imagery and other geographic data. Building footprints and heights are first determined from aerial images, digital terrain and surface models. City zoning maps allow the classification of the buildings as residential and non-residential. The use of additional ancillary geographic data further filters residential utility buildings out of the residential area and identifies houses and apartments. In the final step, census block population, which is publicly available from the U.S. Census, is disaggregated and mapped to individual residential buildings. This paper proposes a modified building population mapping model that takes into account the effects of different types of residential buildings. Detailed steps are described that lead to the identification of residential buildings from imagery and other GIS data layers. Estimated building populations are evaluated per census block with reference to the known census records. This paper presents and evaluates the results of building population mapping in areas of West Lafayette, Lafayette, and Wea Township, all in the state of Indiana, USA.
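    A bare-bones version of the disaggregation step, distributing each census block's population across its residential buildings in proportion to an estimated floor area; the table, column names, and storey heuristic are hypothetical, not the paper's data model.

```python
import pandas as pd

# Hypothetical residential buildings already filtered by zoning and ancillary data.
buildings = pd.DataFrame({
    "building_id":  [1, 2, 3, 4],
    "block_id":     ["B1", "B1", "B1", "B2"],
    "footprint_m2": [150.0, 90.0, 600.0, 200.0],
    "height_m":     [6.0, 3.0, 15.0, 6.0],
})
block_pop = pd.Series({"B1": 120, "B2": 18}, name="population")

# Weight each building by approximate floor area (footprint x number of storeys,
# assuming ~3 m per storey).
buildings["floor_area"] = buildings["footprint_m2"] * (buildings["height_m"] / 3.0).round()
block_totals = buildings.groupby("block_id")["floor_area"].transform("sum")
buildings["est_population"] = (
    buildings["floor_area"] / block_totals * buildings["block_id"].map(block_pop))

print(buildings[["building_id", "block_id", "est_population"]])
```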

  11. Using airborne multispectral imagery to monitor cotton root rot expansion within a growing season

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cotton root rot is a serious and destructive disease that affects cotton production in the southwestern United States. Accurate delineation of cotton root rot infestations is important for cost-effective management of the disease. The objective of this study was to use airborne multispectral imagery...

  12. Using multi-spectral imagery to detect and map stress induced by Russian wheat aphid

    NASA Astrophysics Data System (ADS)

    Backoulou, Georges Ferdinand

    Scope and Method of Study. The rationale of this study was to assess the stress induced in wheat fields by the Russian wheat aphid (RWA) using multispectral imagery. The study was conducted to (a) determine the relationship between RWA and edaphic and topographic factors; (b) identify and quantify the spatial pattern of RWA infestation within wheat fields; and (c) differentiate the stress induced by RWA from other stress-causing factors. Data used for the analysis included RWA population density from wheat fields in Texas, Colorado, Wyoming, and Nebraska, a Digital Elevation Model from the United States Geological Survey (USGS), soil data from the Soil Survey Geographic database (SSURGO), and multispectral imagery acquired in the panhandle of Oklahoma. Findings and Conclusions. The study revealed that the population density of the Russian wheat aphid was related to topographic and edaphic factors. Slope and sand content were predictor variables that were positively related to the density of RWA at the field level. The study also demonstrated that stress induced by the RWA has a specific spatial pattern that can be distinguished from other stress-causing factors using a combination of landscape metrics and topographic and edaphic characteristics of wheat fields. Further field-based studies using multispectral imagery and spatial pattern analysis are suggested. These would require acquiring biweekly multispectral imagery and collecting RWA, topographic, and edaphic data at the sampling points during the phenological growth development of wheat plants. This approach may prove to have great potential as a site-specific technique for integrated pest management.

  13. Early identification of cotton fields using mosaicked aerial multispectral imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Early identification of cotton fields is important for advancing boll weevil eradication progress and reducing the risk of reinfestation. Remote sensing has long been used for crop identification, but limited work has been reported on early identification of cotton fields. The objective of this stud...

  14. Evaluation of orthomosics and digital surface models derived from aerial imagery for crop mapping

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Orthomosics derived from aerial imagery acquired by consumer-grade cameras have been used for crop mapping. However, digital surface models (DSM) derived from aerial imagery have not been evaluated for this application. In this study, a novel method was proposed to extract crop height from DSM and t...

  15. Ortho-Rectification of Narrow Band Multi-Spectral Imagery Assisted by Dslr RGB Imagery Acquired by a Fixed-Wing Uas

    NASA Astrophysics Data System (ADS)

    Rau, J.-Y.; Jhan, J.-P.; Huang, C.-Y.

    2015-08-01

    The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow-band filters. Due to its small size and light weight, it is suitable for mounting on an Unmanned Aerial System (UAS) to acquire imagery with high spectral, spatial, and temporal resolution for various remote sensing applications. However, because each band covers a wavelength range of only 10 nm, the images have low resolution and signal-to-noise ratio, which makes them unsuitable for image matching and digital surface model (DSM) generation. In addition, since the spectral correlation among the 12 MiniMCA bands is low, it is difficult to perform tie-point matching and aerial triangulation across all bands at the same time. In this study, we therefore propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher spatial resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS, the two sensors can be flown at the same time or individually. In this study, we adopt a fixed-wing UAS carrying a Canon EOS 5D Mark2 DSLR camera and a MiniMCA-12 multispectral camera. To perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose one master band from the MiniMCA-12 whose spectral range overlaps with that of the DSLR camera. However, because the MiniMCA-12 lenses have different perspective centers and viewing angles, the original 12 channels exhibit a significant band misregistration effect, so the first issue encountered is to reduce this misregistration. Since all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm and the images overlap by almost 98%; we therefore propose a modified projective transformation (MPT) method, together with two systematic error correction procedures, to register all 12 bands of imagery in the same image space. It means that those 12 bands of images acquired at
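    As a hedged stand-in for the modified projective transformation described above, the sketch below registers one band to a master band with OpenCV feature matching and a projective (homography) warp; the thresholds and synthetic test data are illustrative, and this is not the authors' MPT method or error-correction procedure.

```python
import cv2
import numpy as np

def register_band(band, master, min_matches=10):
    """Estimate a projective transform from 'band' to 'master' using ORB
    features and RANSAC, then warp 'band' into the master's image space."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(band, None)
    k2, d2 = orb.detectAndCompute(master, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    if len(matches) < min_matches:
        raise RuntimeError("not enough matches for a reliable homography")
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(band, H, (master.shape[1], master.shape[0]))

# Illustrative 8-bit single-band images; real MiniMCA data would be loaded from files.
master = (np.random.rand(512, 512) * 255).astype(np.uint8)
band = np.roll(master, (4, 7), axis=(0, 1))     # simulated small misregistration
registered = register_band(band, master)
```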

  16. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being made in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, visually examining the images and detecting and classifying the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds the mark of 90%. In this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. Large image dimensions resulting from the use of medium and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to efficiently handle those image sizes and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined use of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for the determination of feature vectors for subsequent elimination of false candidates and for classification tasks.

  17. Computer discrimination procedures applicable to aerial and ERTS multispectral data

    NASA Technical Reports Server (NTRS)

    Richardson, A. J.; Torline, R. J.; Allen, W. A.

    1970-01-01

    Two statistical models are compared in the classification of crops recorded on color aerial photographs. A theory of error ellipses is applied to the pattern recognition problem. An elliptical boundary condition classification model (EBC), useful for recognition of candidate patterns, evolves out of error ellipse theory. The EBC model is compared with the minimum distance to the mean (MDM) classification model in terms of pattern recognition ability. The pattern recognition results of both models are interpreted graphically using scatter diagrams to represent measurement space. Measurement space, for this report, is determined by optical density measurements collected from Kodak Ektachrome Infrared Aero Film 8443 (EIR). The EBC model is shown to be a significant improvement over the MDM model.
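    The minimum-distance-to-mean (MDM) baseline against which the EBC model is compared is easy to state in code. The sketch below is a generic MDM classifier on hypothetical two-band optical-density measurements, not the report's calibrated models or error-ellipse formulation.

```python
import numpy as np

def mdm_fit(X, y):
    """Compute one mean vector per class from training samples X (n, d) with labels y."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def mdm_predict(X, classes, means):
    """Assign each sample to the class whose mean is nearest in Euclidean distance."""
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(1)
# Hypothetical optical-density measurements for two crop classes.
X_train = np.vstack([rng.normal([0.3, 0.6], 0.05, (50, 2)),
                     rng.normal([0.7, 0.2], 0.05, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
classes, means = mdm_fit(X_train, y_train)
print(mdm_predict(np.array([[0.32, 0.55], [0.68, 0.25]]), classes, means))  # [0 1]
```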

  18. Classification of human carcinoma cells using multispectral imagery

    NASA Astrophysics Data System (ADS)

    Ćinar, Umut; Y. Ćetin, Yasemin; Ćetin-Atalay, Rengul; Ćetin, Enis

    2016-03-01

    In this paper, we present a technique for automatically classifying human carcinoma cell images using textural features. An image dataset containing microscopy biopsy images from different patients for 14 distinct cancer cell line types is studied. The images are captured using an RGB camera attached to an inverted microscope. Texture-based Gabor features are extracted from the multispectral input images. An SVM classifier is used to generate a descriptive model for the purpose of cell line classification. The experimental results indicate satisfactory performance, and the proposed method is versatile across various microscopy magnification options.
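    A compact sketch of a texture-feature-plus-SVM pipeline of this kind, using scikit-image Gabor filters and a scikit-learn classifier on hypothetical grayscale patches; the band handling, filter parameters, and data in the actual study will differ.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(img, freqs=(0.1, 0.2, 0.3), thetas=(0, np.pi / 4, np.pi / 2)):
    """Mean and variance of Gabor filter magnitude responses as a feature vector."""
    feats = []
    for f in freqs:
        for t in thetas:
            real, imag = gabor(img, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.var()])
    return np.array(feats)

rng = np.random.default_rng(2)
# Hypothetical 64x64 patches for two cell-line classes.
patches = rng.random((40, 64, 64))
labels = np.array([0] * 20 + [1] * 20)
X = np.array([gabor_features(p) for p in patches])

clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```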

  19. Transition, Training, and Assessment of Multispectral Composite Imagery in Support of the NWS Aviation Forecast Mission

    NASA Technical Reports Server (NTRS)

    Fuell, Kevin; Jedlovec, Gary; Leroy, Anita; Schultz, Lori

    2015-01-01

    The NASA/Short-term Prediction, Research, and Transition (SPoRT) Program works closely with NOAA/NWS weather forecasters to transition unique satellite data and capabilities into operations in order to assist with nowcasting and short-term forecasting issues. Several multispectral composite imagery (i.e. RGB) products were introduced to users in the early 2000s to support hydrometeorology and aviation challenges as well as incident support. These activities led to SPoRT collaboration with the GOES-R Proving Ground efforts, where instruments such as the MODIS (Aqua, Terra) and S-NPP/VIIRS imagers began to be used as near-real-time proxies for future capabilities of the Advanced Baseline Imager (ABI). One of the composite imagery products introduced to users was the Night-time Microphysics RGB, originally developed by EUMETSAT. SPoRT worked to transition this imagery to NWS users, provide region-specific training, and assess the impact of the imagery on aviation forecast needs. This presentation discusses the method used to interact with users to address specific aviation forecast challenges, including training activities undertaken to prepare for a product assessment. Users who assessed the multispectral imagery ranged from southern U.S. inland and coastal NWS weather forecast offices (WFOs), to those in the Rocky Mountain Front Range region and on the West Coast, as well as high-latitude forecasters in Alaska. These user-based assessments were documented and shared with the satellite community to support product developers and the broad user base of new-generation satellite data.

  20. Joint spatio-spectral based edge detection for multispectral infrared imagery.

    SciTech Connect

    Krishna, Sanjay; Hayat, Majeed M.; Bender, Steven C.; Sharma, Yagya D.; Jang, Woo-Yong; Paskalva, Biliana S.

    2010-06-01

    Image segmentation is one of the most important and difficult tasks in digital image processing. It represents a key stage of automated image analysis and interpretation. Segmentation algorithms for gray-scale images utilize basic properties of intensity values such as discontinuity and similarity. However, it is possible to enhance edge-detection capability by using spectral information provided by multispectral (MS) or hyperspectral (HS) imagery. In this paper we consider image segmentation algorithms for multispectral images with particular emphasis on detection of multi-color or multispectral edges. More specifically, we report on an algorithm for joint spatio-spectral (JSS) edge detection. By joint we mean simultaneous utilization of spatial and spectral characteristics of a given MS or HS image. The JSS-based edge-detection approach, termed the Spectral Ratio Contrast (SRC) edge-detection algorithm, utilizes the novel concept of matching edge signatures. The edge signature represents a combination of spectral ratios calculated using bands that enhance the spectral contrast between the two materials. In conjunction with a spatial mask, the edge signature gives rise to a multispectral operator that can be viewed as a three-dimensional extension of the mask. In the extended mask, the third (spectral) dimension of each hyper-pixel can be chosen independently. The SRC is verified using MS and HS imagery from a quantum dots-in-a-well infrared (IR) focal plane array, and the Airborne Hyperspectral Imager.
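    The sketch below is a much-simplified illustration of combining a spectral ratio with a spatial mask to highlight multispectral edges: it applies a Sobel operator to a single band ratio on synthetic data, and is not the authors' Spectral Ratio Contrast algorithm or edge-signature matching.

```python
import numpy as np
from scipy import ndimage

def spectral_ratio_edges(cube, band_a, band_b, eps=1e-6):
    """Ratio two bands chosen to contrast the materials of interest, then apply
    a spatial gradient (Sobel) mask to localise edges in the ratio image."""
    ratio = cube[band_a] / (cube[band_b] + eps)
    gx = ndimage.sobel(ratio, axis=1)
    gy = ndimage.sobel(ratio, axis=0)
    return np.hypot(gx, gy)

# Hypothetical 6-band image with a vertical material boundary.
cube = np.ones((6, 100, 100))
cube[2, :, 50:] = 3.0          # band 2 reflectance jumps across the boundary
edges = spectral_ratio_edges(cube, band_a=2, band_b=4)
print(edges[:, 48:53].max() > edges[:, :40].max())  # True: edge energy at the boundary
```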

  1. Land cover classification in multispectral imagery using clustering of sparse approximations over learned feature dictionaries

    DOE PAGES

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...

    2014-12-09

    We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.

  2. Land cover classification in multispectral imagery using clustering of sparse approximations over learned feature dictionaries

    SciTech Connect

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; Altmann, Garrett L.

    2014-12-09

    We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.

  3. Visual enhancement of unmixed multispectral imagery using adaptive smoothing

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2004-01-01

    Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.

  4. EVOLUTIONARY COMPUTATION AND POST-WILDFIRE LAND-COVER MAPPING WITH MULTISPECTRAL IMAGERY.

    SciTech Connect

    Brumby, Steven P.; Koch, S. W.; Hansen, L. A.

    2001-01-01

    The Cerro Grande/Los Alamos wildfire devastated approximately 43,000 acres (17,500 ha) of forested land, and destroyed over 200 structures in the town of Los Alamos. The need to monitor the continuing impact of the fire on the local environment has led to the application of a number of advanced remote sensing technologies. During and after the fire, remote-sensing data was acquired from a variety of aircraft- and satellite-based sensors, including Landsat 7 Enhanced Thematic Mapper (ETM+). We now report on the application of a machine learning technique to the automated classification of land cover using multispectral imagery. We apply a hybrid genetic programming/supervised classification technique to evolve automatic feature extraction algorithms. We use a software package we have developed at Los Alamos National Laboratory, called GENIE, to carry out this evolution. We use multispectral imagery from the Landsat 7 ETM+ instrument from before and after the wildfire. Using an existing land cover classification based on a Landsat 5 TM scene for our training data, we evolve algorithms that distinguish a range of land cover categories, along with clouds and cloud shadows. The details of our evolved classification are compared to the manually produced land-cover classification. Keywords: Feature Extraction, Genetic programming, Supervised classification, Multi-spectral imagery, Land cover, Wildfire.

  5. Temporal encoding of multispectral satellite imagery for segmentation using pulsed coupled neural networks

    NASA Astrophysics Data System (ADS)

    Tarr, Gregory L.; Carreras, Richard A.; Fender, Janet S.; Clastres, Xavier; Freyss, Laurent; Samuelides, Manuel

    1995-11-01

    Unlike biological vision, most techniques for computer image processing are not robust over large samples of imagery. Natural systems seem unaffected by variation in local illumination and textures which interfere with conventional analysis. While change detection algorithms have been partially successful, many important tasks like extraction of roads and communication lines remain unsolved. The solution to these problems may lie in examining architectures and algorithms used by biological imaging systems. Pulsed oscillatory neural network designs, based on biomimetics, seem to solve some of these problems. Pulsed oscillatory neural networks are examined for application to image analysis and segmentation of multispectral imagery from the Satellite Pour l'Observation de la Terre. Using biological systems as a model for image analysis of complex data, a pulse-coupled network using an integrate-and-fire mechanism is developed. This architecture, based on layers of pulse-coupled neurons, is tested against common image segmentation problems. Using a reset activation pulse similar to that generated by saccadic motor commands, an algorithm is developed which demonstrates that biological vision could be based on adaptive histogram techniques. This architecture is demonstrated to be both biologically plausible and more effective than conventional techniques. Using the pulse time-of-arrival as the information carrier, the image is reduced to a time signal (a temporal encoding of imagery), which allows intelligent filtering based on expectation. This technique is uniquely suited to multispectral/multisensor imagery and other sensor fusion problems.

  6. Current and Future Applications of Multispectral (RGB) Satellite Imagery for Weather Analysis and Forecasting Applications

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Fuell, Kevin K.; LaFontaine, Frank; McGrath, Kevin; Smith, Matt

    2013-01-01

    Current and future satellite sensors provide remotely sensed quantities from a variety of wavelengths ranging from the visible to the passive microwave, from both geostationary and low-Earth orbits. The NASA Short-term Prediction Research and Transition (SPoRT) Center has a long history of providing multispectral imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites in support of NWS forecast office activities. Products from MODIS have recently been extended to include a broader suite of multispectral imagery similar to those developed by EUMETSAT, based upon the spectral channels available from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) aboard METEOSAT-9. This broader suite includes products that discriminate between air mass types associated with synoptic-scale features, assist in the identification of dust, and improve upon paired channel difference detection of fog and low cloud events. Future instruments will continue the availability of these products and also expand upon current capabilities. The Advanced Baseline Imager (ABI) on GOES-R will improve the spectral, spatial, and temporal resolution of our current geostationary capabilities, and the recently launched Suomi National Polar-orbiting Partnership (S-NPP) carries instruments such as the Visible Infrared Imager Radiometer Suite (VIIRS), the Cross-track Infrared Sounder (CrIS), and the Advanced Technology Microwave Sounder (ATMS), which have unrivaled spectral and spatial resolution, as precursors to the JPSS era (i.e., the next generation of polar-orbiting satellites). New applications from VIIRS extend multispectral composites available from MODIS and SEVIRI while adding new capabilities through incorporation of additional CrIS channels or information from the Near Constant Contrast or "Day-Night Band", which provides moonlit reflectance from clouds and detection of fires or city lights. This presentation will

  7. Automated segmentation of pseudoinvariant features from multispectral imagery

    NASA Astrophysics Data System (ADS)

    Salvaggio, Carl; Schott, John R.

    1988-01-01

    The present automated segmentation algorithm for pseudoinvariant-feature isolation employs rate-of-change information from a thresholding process previously associated with the Volchok and Schott (1986) pseudoinvariant feature-normalization technique. The algorithm was combined with the normalization technique and applied to the six reflective bands of the Landsat TM for both urban and rural scenes. An evaluation of the normalization results' accuracy shows the combined techniques to have consistently produced normalization results whose errors are of the order of about 1-2 reflectance units for both rural and urban TM imagery.

  8. Correlation of ERTS multispectral imagery with suspended matter and chlorophyll in lower Chesapeake Bay

    NASA Technical Reports Server (NTRS)

    Bowker, D. E.; Fleischer, P.; Gosink, T. A.; Hanna, W. J.; Ludwick, J. C.

    1973-01-01

    The feasibility of using multispectral satellite imagery to monitor the characteristics of estuarine waters is being investigated. Preliminary comparisons of MSS imagery with suspended matter concentrations, particle counts, chlorophyll, transmittance and bathymetry have been made. Some visual correlation of radiance with particulates and chlorophyll has been established. Effects of bathymetry are present, and their relation to transmittance and radiance is being investigated. Greatest detail in suspended matter is revealed by MSS band 5. Near-surface suspended sediment load and chlorophyll can be observed in bands 6 and 7. Images received to date have partially defined extent and location of high suspensate concentrations. Net quantity of suspended matter in the lower Bay has been decreasing since the inception of the study, and represents the diminution of turbid flood waters carried into the Bay in late September, 1972. The results so far point to the utility of MSS imagery in monitoring estuarine water character for the assessment of siltation, productivity, and water types.

  9. Evaluation of multispectral, fine scale digital imagery as a tool for mapping stream morphology

    NASA Astrophysics Data System (ADS)

    Wright, Andrea; Marcus, W. Andrew; Aspinall, Richard

    2000-05-01

    Multispectral digital imagery acquired from Soda Butte and Cache Creeks, Montana and Wyoming, was used in conjunction with field data to classify and map hydrogeomorphic stream units on four stream reaches. The morphologic units that were field mapped were eddy drop zones, glides, low gradient riffles, high gradient riffles, lateral scour pools, attached bars, detached bars, and large woody debris. Unsupervised and supervised classifications of the imagery were used to develop a Maximum Joint Probability classification and an Alternative Joint Probability classification of the stream reaches. The Maximum Joint Probability classification allowed only one of the image classes to represent each hydrogeomorphic unit on the field map and resulted in relatively low overall accuracies for identification of these units of 10% to 50%. The Alternative Joint Probability classification allowed each image class to represent any geomorphic unit where the probability of a correct classification was greater than random. In this technique, two or three image classes were assigned to represent each hydrogeomorphic unit, resulting in higher overall accuracies of 28% to 80%. Accurate classification of hydrogeomorphic units was hampered by poor rectification of imagery with the field maps because of inadequate ground control points. In general, the largest hydrogeomorphic units were most accurately classified, whereas units that were small in area or spatially linear were least likely to be accurately classified. The results of this study demonstrated that multispectral digital imagery has the potential to be a useful tool for mapping hydrogeomorphic stream units at fine scales. For imagery to be an effective tool, however, careful measures such as accurate documentation of ground control points must be taken to ensure accurate rectification of the imagery with field maps.

  10. Fusing Unmanned Aerial Vehicle Imagery with High Resolution Hydrologic Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Pierini, N.; Schreiner-McGraw, A.; Anderson, C.; Saripalli, S.; Rango, A.

    2013-12-01

    After decades of development and applications, high resolution hydrologic models are now common tools in research and increasingly used in practice. More recently, high resolution imagery from unmanned aerial vehicles (UAVs) that provide information on land surface properties have become available for civilian applications. Fusing the two approaches promises to significantly advance the state-of-the-art in terms of hydrologic modeling capabilities. This combination will also challenge assumptions on model processes, parameterizations and scale as land surface characteristics (~0.1 to 1 m) may now surpass traditional model resolutions (~10 to 100 m). Ultimately, predictions from high resolution hydrologic models need to be consistent with the observational data that can be collected from UAVs. This talk will describe our efforts to develop, utilize and test the impact of UAV-derived topographic and vegetation fields on the simulation of two small watersheds in the Sonoran and Chihuahuan Deserts at the Santa Rita Experimental Range (Green Valley, AZ) and the Jornada Experimental Range (Las Cruces, NM). High resolution digital terrain models, image orthomosaics and vegetation species classification were obtained from a fixed wing airplane and a rotary wing helicopter, and compared to coarser analyses and products, including Light Detection and Ranging (LiDAR). We focus the discussion on the relative improvements achieved with UAV-derived fields in terms of terrain-hydrologic-vegetation analyses and summer season simulations using the TIN-based Real-time Integrated Basin Simulator (tRIBS) model. Model simulations are evaluated at each site with respect to a high-resolution sensor network consisting of six rain gauges, forty soil moisture and temperature profiles, four channel runoff flumes, a cosmic-ray soil moisture sensor and an eddy covariance tower over multiple summer periods. We also discuss prospects for the fusion of high resolution models with novel

  11. First results for an image processing workflow for hyperspatial imagery acquired with a low-cost unmanned aerial vehicle (UAV).

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Very high-resolution images from unmanned aerial vehicles (UAVs) have great potential for use in rangeland monitoring and assessment, because the imagery fills the gap between ground-based observations and remotely sensed imagery from aerial or satellite sensors. However, because UAV imagery is ofte...

  12. Texture and scale in object-based analysis of subdecimeter resolution unmanned aerial vehicle (UAV) imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Imagery acquired with unmanned aerial vehicles (UAVs) has great potential for incorporation into natural resource monitoring protocols due to their ability to be deployed quickly and repeatedly and to fly at low altitudes. While the imagery may have high spatial resolution, the spectral resolution i...

  13. Identification of landslides in clay terrains using Airborne Thematic Mapper (ATM) multispectral imagery

    NASA Astrophysics Data System (ADS)

    Whitworth, Malcolm; Giles, David; Murphy, William

    2002-01-01

    The slopes of the Cotswolds Escarpment in the United Kingdom are mantled by extensive landslide deposits, including both relict and active features. These landslides pose a significant threat to engineering projects and have been the focus of research into the use of airborne remote sensing data sets for landslide mapping. Due to the availability of extensive ground investigation data, a test site was chosen on the slopes of the Cotswolds Escarpment above the village of Broadway, Worcestershire, United Kingdom. Daedalus Airborne Thematic Mapper (ATM) imagery was subsequently acquired by the UK Natural Environment Research Council (NERC) to provide high-resolution multispectral imagery of the Broadway site. This paper assesses the textural enhancement of ATM imagery as an image processing technique for landslide mapping at the Broadway site. Results of three kernel based textural measures, variance, mean euclidean distance (MEUC) and grey level co-occurrence matrix (GLCM) entropy are presented. Problems encountered during textural analysis, associated with the presence of dense woodland within the project area, are discussed and a solution using Principal Component Analysis (PCA) is described. Landslide features in clay dominated terrains can be identified through textural enhancement of airborne multispectral imagery. The kernel based textural measures tested in the current study were all able to enhance areas of slope instability within ATM imagery. Additionally, results from supervised classification of the combined texture-principal component dataset show that texture based image classification can accurately classify landslide regions and that by including a Principal Component image, woodland and landslide classes can be differentiated successfully during the classification process.

  14. DEIMOS-2: cost-effective, very-high resolution multispectral imagery

    NASA Astrophysics Data System (ADS)

    Pirondini, Fabrizio; López, Julio; González, Enrique; González, José Antonio

    2014-10-01

    ELECNOR DEIMOS is a private Spanish company, part of the Elecnor industrial group, which owns and operates DEIMOS-1, the first Spanish Earth Observation satellite. DEIMOS-1, launched in 2009, is among the world leading sources of high resolution data. On June 19th, 2014 ELECNOR DEIMOS launched its second satellite, DEIMOS-2, which is a very-high resolution, agile satellite capable of providing 75-cm pan-sharpened imagery with a 12-km-wide swath. The DEIMOS-2 camera delivers multispectral imagery in 5 bands: Panchromatic, G, R, B and NIR. DEIMOS-2 is the first European satellite completely owned by private capital that is capable of providing submetric multispectral imagery. The whole end-to-end DEIMOS-2 system is designed to provide a cost-effective, dependable and highly responsive service to cope with the increasing need of fast access to very-high resolution imagery. The same 24/7 commercial service which is now available for DEIMOS-1, including tasking, download, processing and delivery, will become available for DEIMOS-2 as well, as soon as the satellite enters into commercial operations, at the end of its in-orbit commissioning. The DEIMOS-2 satellite has been co-developed by ELECNOR DEIMOS and SATREC-I (South Korea), and it has been integrated and tested in the new ELECNOR DEIMOS Satellite Systems premises in Puertollano (Spain). The DEIMOS-2 ground segment, which includes four receiving/commanding ground stations in Spain, Sweden and Canada, has been completely developed in-house by ELECNOR DEIMOS, based on its Ground Segment for Earth Observation (gs4EO®) suite. In this paper we describe the main features of the DEIMOS-2 system, with emphasis on its initial operations and the quality of the initial imagery, and provide updated information on its mission status.

  15. Using remotely-sensed multispectral imagery to build age models for alluvial fan surfaces

    NASA Astrophysics Data System (ADS)

    D'Arcy, Mitch; Mason, Philippa J.; Roda Boluda, Duna C.; Whittaker, Alexander C.; Lewis, James

    2016-04-01

    Accurate exposure age models are essential for much geomorphological field research, and generally depend on laboratory analyses such as radiocarbon, cosmogenic nuclide, or luminescence techniques. These approaches continue to revolutionise geomorphology; however, they cannot be deployed remotely or in situ in the field. Therefore, other methods are still needed for producing preliminary age models, performing relative dating of surfaces, or selecting sampling sites for the laboratory analyses above. With the widespread availability of detailed multispectral imagery, a promising approach is to use remotely-sensed data to discriminate surfaces with different ages. Here, we use new Landsat 8 Operational Land Imager (OLI) multispectral imagery to characterise the reflectance of 35 alluvial fan surfaces in the semi-arid Owens Valley, California. Alluvial fans are useful landforms to date, as they are widely used to study the effects of tectonics, climate and sediment transport processes on source-to-sink sedimentation. Our target fan surfaces have all been mapped in detail in the field, and have well-constrained exposure ages ranging from modern to ~ 125 ka measured using a high density of 10Be cosmogenic nuclide samples. Despite all having similar granitic compositions, the spectral properties of these surfaces vary systematically with their exposure ages. Older surfaces demonstrate a predictable shift in reflectance across the visible and short-wave infrared spectrum. Simple calculations, such as the brightness ratios of different wavelengths, generate sensitive power law relationships with exposure age that depend on post-depositional alteration processes affecting these surfaces. We investigate what these processes might be in this dryland location, and evaluate the potential for using remotely-sensed multispectral imagery for developing surface age models. The ability to remotely sense relative exposure ages has useful implications for preliminary mapping, selecting
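    A brightness-ratio-versus-age relationship of the kind described above can be fitted with a simple log-log regression. The numbers below are hypothetical illustrations, not the study's Owens Valley measurements.

```python
import numpy as np

# Hypothetical surface exposure ages (ka) and a band brightness ratio for each fan surface.
ages = np.array([2.0, 8.0, 15.0, 30.0, 60.0, 125.0])
ratios = np.array([1.05, 1.18, 1.27, 1.41, 1.62, 1.90])

# Power law: ratio = a * age**b  <=>  log(ratio) = log(a) + b * log(age)
b, log_a = np.polyfit(np.log(ages), np.log(ratios), 1)
a = np.exp(log_a)
print(f"ratio ~ {a:.3f} * age^{b:.3f}")

# Invert the fit to get a relative age estimate from a new ratio value.
new_ratio = 1.5
print("estimated age (ka):", (new_ratio / a) ** (1.0 / b))
```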

  16. Error modeling based on geostatistics for uncertainty analysis in crop mapping using Gaofen-1 multispectral imagery

    NASA Astrophysics Data System (ADS)

    You, Jiong; Pei, Zhiyuan

    2015-01-01

    With the development of remote sensing technology, its applications in agriculture monitoring systems, crop mapping accuracy, and spatial distribution are more and more being explored by administrators and users. Uncertainty in crop mapping is profoundly affected by the spatial pattern of spectral reflectance values obtained from the applied remote sensing data. Errors in remotely sensed crop cover information and their propagation into derivative products need to be quantified and handled correctly. Therefore, this study discusses methods of error modeling for uncertainty characterization in crop mapping using GF-1 multispectral imagery. An error modeling framework based on geostatistics is proposed, which introduces the sequential Gaussian simulation algorithm to explore the relationship between classification errors and the spectral signature of the remote sensing data source. On this basis, a misclassification probability model is developed to produce a spatially explicit classification error probability surface for the map of a crop, which realizes the uncertainty characterization for crop mapping. In this process, trend surface analysis was carried out to generate a spatially varying mean response and the corresponding residual response with spatial variation for the spectral bands of GF-1 multispectral imagery. Variogram models were employed to measure the spatial dependence in the spectral bands and the derived misclassification probability surfaces. Simulated spectral data and classification results were quantitatively analyzed. Through experiments using data sets from a region in the low rolling country of the Yangtze River valley, it was found that GF-1 multispectral imagery can be used for crop mapping with a good overall performance, the proposed error modeling framework can be used to quantify the uncertainty in crop mapping, and the misclassification probability model can summarize the spatial variation in map accuracy and is helpful for

  17. Pan-Sharpening Approaches Based on Unmixing of Multispectral Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Palubinskas, G.

    2016-06-01

    Model-based analysis, i.e. explicit definition of all models and assumptions used in the derivation of a pan-sharpening method, allows us to understand the rationale and properties of existing methods and shows a way to use properly, or to propose and select, new methods that 'better' satisfy the needs of a particular application. Most existing pan-sharpening methods are based mainly on two models/assumptions: spectral consistency for high-resolution multispectral data (the physical relationship between multispectral and panchromatic data at the high-resolution scale) and spatial consistency for multispectral data (the so-called first property of Wald's protocol, i.e. the relationship between multispectral data at different resolution scales). Two methods, one based on a linear unmixing model and another based on spatial unmixing, are described/proposed/modified; both respect the assumed models and can thus produce correct, physically justified fusion results. The property 'better' mentioned earlier should be measurable quantitatively, e.g. by means of so-called quality measures. The difficulty of quality assessment in multi-resolution image fusion or pan-sharpening is that a reference image is missing. Existing measures or so-called protocols are still not satisfactory, because quite often the rationale or assumptions used are not valid or not fulfilled. From a model-based view it follows naturally that a quality assessment measure can be defined as a combination of error-model residuals using common or general models assumed in all fusion methods. Thus, in this paper a comparison of the two earlier proposed/modified pan-sharpening methods is performed. Preliminary experiments based on visual analysis are carried out over the urban area of Munich for optical remote sensing multispectral data and panchromatic imagery of the WorldView-2 satellite sensor.
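    One spatial-consistency check in the spirit of Wald's protocol can be sketched in a few lines of NumPy: degrade the fused product back to the multispectral resolution and compare it band by band with the original. The sketch assumes an integer resolution ratio and uses synthetic arrays; it is an illustration, not the paper's quality measure.

```python
import numpy as np

def block_mean(img, factor):
    """Downsample a (rows, cols) image by averaging non-overlapping factor x factor blocks."""
    r, c = img.shape
    return img[:r - r % factor, :c - c % factor] \
        .reshape(r // factor, factor, c // factor, factor).mean(axis=(1, 3))

def spatial_consistency_rmse(fused, ms, factor):
    """Per-band RMSE between the degraded fused image and the original multispectral bands."""
    return np.array([np.sqrt(np.mean((block_mean(fused[b], factor) - ms[b]) ** 2))
                     for b in range(ms.shape[0])])

# Hypothetical 4-band example with a 1:4 multispectral-to-pan resolution ratio.
ms = np.random.rand(4, 64, 64)
fused = np.kron(ms, np.ones((1, 4, 4)))               # a trivially consistent 'fusion'
print(spatial_consistency_rmse(fused, ms, factor=4))  # ~0 for every band
```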

  18. Use of High-Resolution Multispectral Imagery to Estimate Soil and Plant Nitrogen in Oats (Avena sativa)

    NASA Astrophysics Data System (ADS)

    ELarab, M.; Ticlavilca, A. M.; Torres-Rua, A. F.; McKee, M.

    2014-12-01

    Precision agriculture requires high spatial resolution in the application of the inputs to agricultural production. This requires that actionable information about crop and field status be acquired at the same high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high-resolution imagery was obtained through the use of a small, unmanned aerial vehicle, called AggieAirTM, which provides spatial resolution as fine as 15 cm. Simultaneously with AggieAir flights, intensive ground sampling was conducted at precisely determined locations for plant and soil nitrogen among other parameters. This study investigated the spectral signature of oats and formulated a machine learning regression model of reflectance response between the multi-spectral bands available from AggieAir (red, green, blue, near infrared, and thermal), plant nitrogen and soil nitrogen. A multivariate relevance vector machine (MVRVM) was used to develop the linkages between the remotely sensed data and plant and soil nitrogen at approximately 15-cm resolution. The results of this study are presented, including a statistical evaluation of the performance of the model.

  19. Use of High-Resolution Multispectral Imagery to Estimate Chlorophyll and Plant Nitrogen in Oats (Avena sativa)

    NASA Astrophysics Data System (ADS)

    ELarab, M.; Ticlavilca, A. M.; Torres-Rua, A. F.; Maslova, I.; McKee, M.

    2013-12-01

    Precision agriculture requires high spatial resolution in the application of the inputs to agricultural production. This requires that actionable information about crop and field status be acquired at the same high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high-resolution imagery was obtained through the use of a small, unmanned aerial vehicle, called AggieAirTM, that provides spatial resolution as fine as 6 cm. Simultaneously with AggieAir flights, intensive ground sampling was conducted at precisely determined locations for plant chlorophyll, plant nitrogen, and other parameters. This study investigated the spectral signature of a crop of oats (Avena sativa) and formulated machine learning regression models of reflectance response between the multi-spectral bands available from AggieAir (red, green, blue, near infrared, and thermal), plant chlorophyll and plant nitrogen. We tested two, separate relevance vector machines (RVM) and a single multivariate relevance vector machine (MVRVM) to develop the linkages between the remotely sensed data and plant chlorophyll and nitrogen at approximately 15-cm resolution. The results of this study are presented, including a statistical evaluation of the performance of the different models and a comparison of the RVM modeling methods against more traditional approaches that have been used for estimation of plant chlorophyll and nitrogen.

  20. A Combined Texture-principal Component Image Classification Technique For Landslide Identification Using Airborne Multispectral Imagery

    NASA Astrophysics Data System (ADS)

    Whitworth, M.; Giles, D.; Murphy, W.

    The Jurassic strata of the Cotswolds escarpment of southern central United Kingdom are associated with extensive mass movement activity, including mudslide systems, rotational and translational landslides. These mass movements can pose a significant engineering risk and have been the focus of research into the use of remote sensing techniques as a tool for landslide identification and delineation on clay slopes. The study has utilised a field site on the Cotswold escarpment above the village of Broadway, Worcestershire, UK. Geomorphological investigation was initially undertaken at the site in order to establish ground control on landslides and other landforms present at the site. Subsequent to this, Airborne Thematic Mapper (ATM) imagery and colour stereo photography were acquired by the UK Natural Environment Research Council (NERC) for further analysis and interpretation. This paper describes the textural enhancement of the airborne imagery undertaken using both mean euclidean distance (MEUC) and grey level co-occurrence matrix entropy (GLCM) together with a combined texture-principal component based supervised image classification that was adopted as the method for landslide identification. The study highlights the importance of image texture for discriminating mass movements within multispectral imagery and demonstrates that by adopting a combined texture-principal component image classification we have been able to achieve classification accuracy of 84% with a Kappa statistic of 0.838 for landslide classes. This paper also highlights the potential problems that can be encountered when using high-resolution multispectral imagery, such as the presence of dense variable woodland present within the image, and presents a solution using principal component analysis.
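    A small sketch of a GLCM entropy texture measure of the kind used above, computed over sliding windows with scikit-image; the window size, quantisation, and offsets here are arbitrary illustrations rather than the study's settings.

```python
import numpy as np
from skimage.feature import graycomatrix  # named 'greycomatrix' in older scikit-image releases

def glcm_entropy(window, levels=16):
    """Grey-level co-occurrence entropy of a small image window."""
    q = np.floor(window.astype(float) / 256.0 * levels).astype(np.uint8)  # quantise 8-bit data
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)[:, :, 0, 0]
    p = glcm[glcm > 0]
    return -np.sum(p * np.log2(p))

def entropy_image(img, win=15, levels=16):
    """Entropy texture band computed on a coarse grid of non-overlapping windows."""
    rows, cols = img.shape[0] // win, img.shape[1] // win
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = glcm_entropy(img[i * win:(i + 1) * win, j * win:(j + 1) * win], levels)
    return out

img = (np.random.rand(150, 150) * 255).astype(np.uint8)   # hypothetical single ATM band
print(entropy_image(img).shape)  # (10, 10)
```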

  1. Land cover classification in multispectral satellite imagery using sparse approximations on learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; Altmann, Garrett L.

    2014-05-01

    Techniques for automated feature extraction, including neuroscience-inspired machine vision, are of great interest for landscape characterization and change detection in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methodologies to the environmental sciences, using state-of-the-art adaptive signal processing, combined with compressive sensing and machine learning techniques. We use a modified Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using CoSA: unsupervised Clustering of Sparse Approximations. We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska (USA). Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties (e.g., soil moisture and inundation), and topographic/geomorphic characteristics. In this paper, we explore learning from both raw multispectral imagery, as well as normalized band difference indexes. We explore a quantitative metric to evaluate the spectral properties of the clusters, in order to potentially aid in assigning land cover categories to the cluster labels.

  2. Detection of subpixel anomalies in multispectral infrared imagery using an adaptive Bayesian classifier

    SciTech Connect

    Ashton, E.A.

    1998-03-01

    The detection of subpixel targets with unknown spectral signatures and cluttered backgrounds in multispectral imagery is a topic of great interest for remote surveillance applications. Because no knowledge of the target is assumed, the only way to accomplish such a detection is through a search for anomalous pixels. Two approaches to this problem are examined in this paper. The first is to separate the image into a number of statistical clusters by using an extension of the well-known k-means algorithm. Each bin of resultant residual vectors is then decorrelated, and the results are thresholded to provide detection. The second approach requires the formation of a probabilistic background model by using an adaptive Bayesian classification algorithm. This allows the calculation of a probability for each pixel, with respect to the model. These probabilities are then thresholded to provide detection. Both algorithms are shown to provide significant improvement over current filtering techniques for anomaly detection in experiments using multispectral IR imagery with both simulated and actual subpixel targets.
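    A skeletal version of the first, clustering-based approach: cluster the pixels, compute residuals within each cluster, and flag pixels whose Mahalanobis distance exceeds a threshold. The data and threshold are hypothetical, and the paper's adaptive Bayesian variant is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_anomalies(pixels, n_clusters=5, thresh=25.0):
    """Flag anomalous pixels by per-cluster squared Mahalanobis distance.
    pixels: (n, bands) array of spectra; returns a boolean anomaly mask of length n."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    mask = np.zeros(len(pixels), dtype=bool)
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        resid = pixels[idx] - pixels[idx].mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(resid, rowvar=False))
        d2 = np.einsum("ij,jk,ik->i", resid, cov_inv, resid)   # squared Mahalanobis distance
        mask[idx] = d2 > thresh
    return mask

rng = np.random.default_rng(3)
background = rng.normal(0.0, 1.0, (5000, 6))    # hypothetical 6-band background clutter
targets = rng.normal(6.0, 0.3, (10, 6))         # a few bright outlier spectra
pixels = np.vstack([background, targets])
mask = cluster_anomalies(pixels)
print("flagged pixels:", mask.sum(), "of", len(pixels))
```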

  3. Classification of multispectral imagery using wavelet transform and dynamic learning neural network

    NASA Astrophysics Data System (ADS)

    Chen, H. C.; Tzeng, Yu-Chang

    1994-12-01

    A recently developed dynamic learning neural network (DL) has been successfully applied to multispectral imagery classification and parameter inversion. For multispectral imagery classification, noise and edges, such as streets in urban areas and ridges in mountainous areas, lead to misclassified or unclassified pixels and thus reduce the classification rate. From the image spectrum point of view, noise and edges are the high-frequency components of an image. Therefore, edge detection and noise reduction can be performed by extracting the high-frequency parts of an image to improve the classification rate. Although both noise and edges are high-frequency components, edges carry useful information while noise should be removed. Thus, edges and noise must be separated when the high-frequency parts are extracted. Conventional edge detection or noise reduction methods cannot distinguish edges from noise. A new approach, the wavelet transform, is selected to fulfill this requirement. The edge detection and noise reduction pre-processing using the wavelet transform and image classification using the dynamic learning neural network are presented in this paper. The experimental results indicate that this approach did improve the classification rate.
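
    The paper's exact wavelet scheme is not given in the abstract; as a hedged illustration of separating edges from noise in the high-frequency subbands, a PyWavelets sketch might threshold weak detail coefficients while keeping strong ones. The wavelet family, decomposition level and threshold factor are arbitrary choices.

```python
import numpy as np
import pywt

def wavelet_preprocess(band, wavelet='db2', level=2, k=3.0):
    """Suppress weak (noise-like) detail coefficients, keep strong (edge-like) ones."""
    coeffs = pywt.wavedec2(band.astype(float), wavelet, level=level)
    cleaned = [coeffs[0]]                               # approximation left untouched
    for details in coeffs[1:]:
        new = []
        for d in details:                               # horizontal, vertical, diagonal subbands
            thr = k * np.median(np.abs(d)) / 0.6745     # robust noise-level estimate
            new.append(pywt.threshold(d, thr, mode='hard'))   # hard threshold preserves edges
        cleaned.append(tuple(new))
    rec = pywt.waverec2(cleaned, wavelet)
    return rec[:band.shape[0], :band.shape[1]]          # crop possible reconstruction padding
```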

  4. Novel round-robin tabu search algorithm for prostate cancer classification and diagnosis using multispectral imagery.

    PubMed

    Tahir, Muhammad Atif; Bouridane, Ahmed

    2006-10-01

    Quantitative cell imagery in cancer pathology has progressed greatly in the last 25 years. The application areas are mainly those in which the diagnosis is still critically reliant upon the analysis of biopsy samples, which remains the only conclusive method for making an accurate diagnosis of the disease. Biopsies are usually analyzed by a trained pathologist who, by analyzing the biopsies under a microscope, assesses the normality or malignancy of the samples submitted. Different grades of malignancy correspond to different structural patterns as well as to apparent textures. In the case of prostate cancer, four major groups have to be recognized: stroma, benign prostatic hyperplasia, prostatic intraepithelial neoplasia, and prostatic carcinoma. Recently, multispectral imagery has been used to solve this multiclass problem. Unlike conventional RGB color space, multispectral images allow the acquisition of a large number of spectral bands within the visible spectrum, resulting in a large feature vector size. For such a high dimensionality, pattern recognition techniques suffer from the well-known "curse-of-dimensionality" problem. This paper proposes a novel round-robin tabu search (RR-TS) algorithm to address the curse-of-dimensionality for this multiclass problem. The experiments have been carried out on a number of prostate cancer textured multispectral images, and the results obtained have been assessed and compared with previously reported works. The system achieved 98%-100% classification accuracy when testing on two datasets. It outperformed principal component/linear discriminant classifier (PCA-LDA), tabu search/nearest neighbor classifier (TS-1NN), and bagging/boosting with decision tree (C4.5) classifier.
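
    The round-robin tabu search itself is not specified in enough detail in the abstract to reproduce; as a loose illustration of tabu search for feature (band) selection under the curse of dimensionality, the generic sketch below swaps one selected feature at a time, keeps a short tabu list, and scores subsets with 1-NN cross-validation. All names and parameters are placeholders, and the search is deliberately small to stay cheap.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def tabu_feature_selection(X, y, n_select=8, n_iter=25, n_moves=15, tabu_len=10, seed=0):
    """Tabu search over feature subsets scored by 1-NN cross-validation accuracy.
    Each iteration samples single-feature swaps, takes the best non-tabu move
    (even if it worsens the score, to escape local optima), and remembers the
    swapped-in feature in a short tabu list."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    score = lambda s: cross_val_score(KNeighborsClassifier(n_neighbors=1),
                                      X[:, sorted(s)], y, cv=3).mean()
    current = set(int(f) for f in rng.choice(n_feat, n_select, replace=False))
    best, best_score = set(current), score(current)
    tabu = []
    for _ in range(n_iter):
        moves = []
        for _ in range(n_moves):
            out_f = int(rng.choice(sorted(current)))
            in_f = int(rng.choice(sorted(set(range(n_feat)) - current)))
            cand = (current - {out_f}) | {in_f}
            s = score(cand)
            if in_f not in tabu or s > best_score:      # aspiration criterion
                moves.append((s, cand, in_f))
        if not moves:
            continue
        s, current, moved = max(moves, key=lambda m: m[0])
        tabu = (tabu + [moved])[-tabu_len:]             # short-term memory
        if s > best_score:
            best, best_score = set(current), s
    return sorted(best), best_score
```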

  5. Survey of Hyperspectral and Multispectral Imaging Technologies (Etude sur les technologies d’imagerie hyperspectrale et multispectrale)

    DTIC Science & Technology

    2007-05-01

    SET-065-P3: Survey of Hyperspectral and Multispectral Imaging Technologies (Etude sur les technologies d’imagerie hyperspectrale et multispectrale). This report forms part of RTG-33’s activities in assessing hyperspectral and multispectral imaging technologies. The content of this publication has been reproduced directly from material supplied by RTO or the authors.

  6. Bridging Estimates of Greenness in an Arid Grassland Using Field Observations, Phenocams, and Time Series Unmanned Aerial System (UAS) Imagery

    NASA Astrophysics Data System (ADS)

    Browning, D. M.; Tweedie, C. E.; Rango, A.

    2013-12-01

    Spatially extensive grasslands and savannas in arid and semi-arid ecosystems (i.e., rangelands) require cost-effective, accurate, and consistent approaches for monitoring plant phenology. Remotely sensed imagery offers these capabilities; however contributions of exposed soil due to modest vegetation cover, susceptibility of vegetation to drought, and lack of robust scaling relationships challenge biophysical retrievals using moderate- and coarse-resolution satellite imagery. To evaluate methods for characterizing plant phenology of common rangeland species and to link field measurements to remotely sensed metrics of land surface phenology, we devised a hierarchical study spanning multiple spatial scales. We collect data using weekly standardized field observations on focal plants, daily phenocam estimates of vegetation greenness, and very high spatial resolution imagery from an Unmanned Aerial System (UAS) throughout the growing season. Field observations of phenological condition and vegetation cover serve to verify phenocam greenness indices along with indices derived from time series UAS imagery. UAS imagery is classified using object-oriented image analysis to identify species-specific image objects for which greenness indices are derived. Species-specific image objects facilitate comparisons with phenocam greenness indices and scaling spectral responses to footprints of Landsat and MODIS pixels. Phenocam greenness curves indicated rapid canopy development for the widespread deciduous shrub Prosopis glandulosa over 14 (in April 2012) to 16 (in May 2013) days. The modest peak in greenness for the dominant perennial grass Bouteloua eriopoda occurred in October 2012 following peak summer rainfall. Weekly field estimates of canopy development closely coincided with daily patterns in initial growth and senescence for both species. Field observations improve the precision of the timing of phenophase transitions relative to inflection points calculated from phenocam
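
    The greenness index used is not named in the abstract; the green chromatic coordinate (GCC) is a common choice for phenocam time series and is easy to sketch, assuming an (H, W, 3) RGB array and a region-of-interest mask.

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """GCC = G / (R + G + B), computed per pixel; zeros where the channel sum is zero."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    return np.divide(rgb[..., 1], total, out=np.zeros_like(total), where=total > 0)

# daily ROI means give the greenness curves used to locate green-up and senescence dates:
# gcc_series = [green_chromatic_coordinate(img)[roi_mask].mean() for img in daily_images]
```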

  7. Aerial multispectral surveys - from the analysis of architectural monuments to the identification of archaeological sites

    NASA Astrophysics Data System (ADS)

    Mario, Bottoni; Fabretti, Giuseppe; Fabretti, Maurizio

    2010-05-01

    Combined non-destructive and extensive multispectral analysis (thermography, photographic infrared and air photogrammetry) can be used, as aerial surveys, to verify and integrate hypotheses about the location of archaeological sites in a given area that are based on investigations conducted on the spot and in the archives. These techniques, using dedicated sensors (photographic emulsions, semiconductors), make it possible to record and visualize different optical phenomena related to the wavelength of the radiation and to the thermal exchange between structures lying underground and the soil. The information obtained is spatially extensive and can be transferred to maps. The results are in practice continuous in the spatial dimension and are obtained non-destructively, leaving the site perfectly undisturbed. From this first survey it is possible to locate the most significant areas and to proceed with more targeted multispectral surveys and local excavations. The next step is to compare these results and to extend them to wider areas, establishing the significance of the irregularities found with the aerial surveys and creating conclusive thematic maps. These maps give useful indications for planning archaeological excavations or the course of highways, water mains and other structures on the terrain. This work presents the application of the method to the archaeological site of Fondo Marco Terenzio Varrone, Cassino (Frosinone), under the control of the Archaeological Superintendency of Lazio. The survey made it possible to determine the course of the water main of the town of Cassino through the archaeological area in a few months and with great reliability. Aerial thermovision has in fact proved very useful since the 1990s in the analysis of the microclimatic behaviour of architectural structures of significant dimensions, such as the dome of Santa Maria del Fiore in Florence. In that case a mathematical model had been developed aimed to

  8. An application of LANDSAT multispectral imagery for the classification of hydrobiological systems, Shark River Slough, Everglades National Park, Florida

    NASA Technical Reports Server (NTRS)

    Rose, P. W.; Rosendahl, P. C. (Principal Investigator)

    1979-01-01

    Multivariate hydrologic parameters over the Shark River Slough were investigated. Ground truth was established utilizing U-2 infrared photography and comprehensive field data to define a control network representing all hydrobiological systems in the slough. These data were then applied to LANDSAT imagery utilizing an interactive multispectral processor, which generated hydrographic maps through classification of the slough and defined the multispectral surface radiance characteristics of the wetland areas in the park. The spectral response of each hydrobiological zone was determined and plotted to formulate multispectral relationships between the emitted energy from the slough in order to determine the best possible multispectral wavelength combinations to enhance classification results. The extent of each hydrobiological zone in the slough was determined and flow vectors for water movement throughout the slough were established.

  9. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis

    DTIC Science & Technology

    1989-08-01

    Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis. Final Technical Report. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.

  10. Monitoring Geothermal Features in Yellowstone National Park with ATLAS Multispectral Imagery

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph; Berglund, Judith

    2000-01-01

    The National Park Service (NPS) must produce an Environmental Impact Statement for each proposed development in the vicinity of known geothermal resource areas (KGRAs) in Yellowstone National Park. In addition, the NPS monitors indicator KGRAs for environmental quality and is still in the process of mapping many geothermal areas. The NPS currently maps geothermal features with field survey techniques. High resolution aerial multispectral remote sensing in the visible, NIR, SWIR, and thermal spectral regions could enable YNP geothermal features to be mapped more quickly and in greater detail. In response, Yellowstone Ecosystems Studies, in partnership with NASA's Commercial Remote Sensing Program, is conducting a study on the use of Airborne Terrestrial Applications Sensor (ATLAS) multispectral data for monitoring geothermal features in the Upper Geyser Basin. ATLAS data were acquired at 2.5 meter resolution on August 17, 2000. These data were processed into land cover classifications and relative temperature maps. For sufficiently large features, the ATLAS data can map geothermal areas in terms of geyser pools and hot springs, plus multiple categories of geothermal runoff that are apparently indicative of temperature gradients and microbial matting communities. In addition, the ATLAS maps clearly identify geyserite areas. The thermal bands contributed to classification success and to the computation of relative temperature. With masking techniques, one can assess the influence of geothermal features on the Firehole River. Preliminary results appear to confirm ATLAS data utility for mapping and monitoring geothermal features. Future work will include classification refinement and additional validation.

  11. Combination of RGB and multispectral imagery for discrimination of cabernet sauvignon grapevine elements.

    PubMed

    Fernández, Roemi; Montes, Héctor; Salinas, Carlota; Sarria, Javier; Armada, Manuel

    2013-06-19

    This paper proposes a sequential masking algorithm based on the K-means method that combines RGB and multispectral imagery for discrimination of Cabernet Sauvignon grapevine elements in unstructured natural environments, without placing any screen behind the canopy and without any previous preparation of the vineyard. In this way, image pixels are classified into five clusters corresponding to leaves, stems, branches, fruit and background. A custom-made sensory rig that integrates a CCD camera and a servo-controlled filter wheel has been specially designed and manufactured for the acquisition of images during the experimental stage. The proposed algorithm is extremely simple and efficient, and provides a satisfactory rate of classification success. All these features make the proposed algorithm an appropriate candidate for numerous precision viticulture tasks, such as yield estimation, estimation of water and nutrient needs, spraying and harvesting.
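
    As a simplified, non-sequential analogue of the masking idea above (scikit-learn assumed), the stacked RGB and multispectral bands can be partitioned into five clusters per pixel; assigning clusters to leaves, stems, branches, fruit and background still requires reference samples or visual inspection.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_vine_pixels(stack, n_clusters=5, seed=0):
    """Cluster pixels of an (H, W, bands) stack of co-registered RGB and
    multispectral bands into n_clusters groups; returns an (H, W) label map."""
    H, W, B = stack.shape
    X = stack.reshape(-1, B).astype(float)
    X /= (X.max(axis=0) + 1e-9)                      # simple per-band scaling
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    return labels.reshape(H, W)
```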

  12. Focus of attention strategies for finding discrete objects in multispectral imagery

    SciTech Connect

    Harvey, N. R.; Theiler, J. P.

    2004-01-01

    Tools that perform pixel-by-pixel classification of multispectral imagery are useful in broad area mapping applications such as terrain categorization, but are less well-suited to the detection of discrete objects. Pixel-by-pixel classifiers, however, have many advantages: they are relatively simple to design, they can readily employ formal machine learning tools, and they are widely available on a variety of platforms. We describe an approach that enables pixel-by-pixel classifiers to be used more effectively in object-detection settings. This is achieved by optimizing a metric which does not attempt to precisely delineate every pixel comprising the objects of interest, but instead focuses the attention of the analyst on these objects without the distraction of many false alarms. The approach requires only minor modification of existing pixel-by-pixel classifiers, and produces substantially improved performance. We describe algorithms that employ this approach and show how they work on a variety of object detection problems using remotely-sensed multispectral data.

  13. Using Airborne and Satellite Imagery to Distinguish and Map Black Mangrove

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of studies evaluating color-infrared (CIR) aerial photography, CIR aerial true digital imagery, and high resolution QuickBird multispectral satellite imagery for distinguishing and mapping black mangrove [Avicennia germinans (L.) L.] populations along the lower Texas g...

  14. Coastal and estuarine habitat mapping, using LIDAR height and intensity and multi-spectral imagery

    NASA Astrophysics Data System (ADS)

    Chust, Guillem; Galparsoro, Ibon; Borja, Ángel; Franco, Javier; Uriarte, Adolfo

    2008-07-01

    The airborne laser scanning LIDAR (LIght Detection And Ranging) provides high-resolution Digital Terrain Models (DTM) that have recently been applied to the characterization, quantification and monitoring of coastal environments. This study assesses the contribution of LIDAR altimetry and intensity data, topographically-derived features (slope and aspect), and multi-spectral imagery (three visible and a near-infrared band), to map coastal habitats in the Bidasoa estuary and its adjacent coastal area (Basque Country, northern Spain). The performance of the high-resolution data sources was individually and jointly tested, with the maximum likelihood algorithm classifier, in a rocky shore and a wetland zone, thus including some of the most extensive Cantabrian Sea littoral habitats within the Bay of Biscay. The results show that the reliability of coastal habitat classification was enhanced more by the LIDAR-based DTM than by the other data sources: slope, aspect, intensity or the near-infrared band. The addition of the DTM to the three visible bands produced gains of between 10% and 27% in the agreement measures between the mapped and validation data (i.e. mean producer's and user's accuracy) for the two test sites. Raw LIDAR intensity images are of limited value here, since they appeared heterogeneous and speckled. However, the enhanced Lee smoothing filter, applied to the LIDAR intensity, improved the overall accuracy measurements of the habitat classification, especially in the wetland zone, where there were gains of up to 7.9% in mean producer's and 11.6% in mean user's accuracy. This suggests that LIDAR can be useful for habitat mapping when few data sources are available. The synergy of the LIDAR data with the multi-spectral bands produced highly accurate classifications (mean producer's accuracy: 92% for the 16 rocky habitats and 88% for the 11 wetland habitats). Fusion of the data enabled discrimination of intertidal communities, such as Corallina elongata

  15. Acquisition, orthorectification, and object-based classification of unmanned aerial vehicle (UAV) imagery for rangeland monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this paper, we examine the potential of using a small unmanned aerial vehicle (UAV) for rangeland inventory, assessment and monitoring. Imagery with 8-cm resolution was acquired over 290 ha in southwestern Idaho. We developed a semi-automated orthorectification procedure suitable for handling lar...

  16. Incremental road discovery from aerial imagery using curvilinear spanning tree (CST) search

    NASA Astrophysics Data System (ADS)

    Wang, Guozhi; Huang, Yuchun; Xie, Rongchang; Zhang, Hongchang

    2016-10-01

    Robust detection of road networks in aerial imagery is a challenging task since roads differ in pavement texture, road-side surroundings, and grade. Roads of different grades have different curvilinear saliency in the aerial imagery. This paper is motivated to incrementally extract roads and construct the topology of the road network of aerial imagery from a higher-grade-first perspective. Inspired by the spanning tree technique, the proposed method starts from the robust extraction of the most salient road segment(s) of the road network, and incrementally connects segments of less salient curvilinear structure until all road segments in the network are extracted. The proposed algorithm includes: curvilinear path-based road morphological enhancement, extraction of road segments, and a spanning tree search for the incremental road discovery. It is tested on a diverse set of aerial imagery acquired in city and inter-city areas. Experimental results show that the proposed curvilinear spanning tree (CST) can detect roads efficiently and construct the topology of the road network effectively. It is promising for change detection of the road network.

  17. Multispectral airborne imagery in the field reveals genetic determinisms of morphological and transpiration traits of an apple tree hybrid population in response to water deficit

    PubMed Central

    Virlet, Nicolas; Costes, Evelyne; Martinez, Sébastien; Kelner, Jean-Jacques; Regnard, Jean-Luc

    2015-01-01

    Genetic studies of response to water deficit in adult trees are limited by low throughput of the usual phenotyping methods in the field. Here, we aimed at overcoming this bottleneck, applying a new methodology using airborne multispectral imagery and in planta measurements to compare a high number of individuals. An apple tree population, grafted on the same rootstock, was submitted to contrasting summer water regimes over two years. Aerial images acquired in visible, near- and thermal-infrared at three dates each year allowed calculation of vegetation and water stress indices. Tree vigour and fruit production were also assessed. Linear mixed models were built accounting for date and year effects on several variables and including the differential response of genotypes between control and drought conditions. Broad-sense heritability of most variables was high and 18 quantitative trait loci (QTLs) independent of the dates were detected on nine linkage groups of the consensus apple genetic map. For vegetation and stress indices, QTLs were related to the means, the intra-crown heterogeneity, and differences induced by water regimes. Most QTLs explained 15−20% of variance. Airborne multispectral imaging proved relevant to acquire simultaneous information on a whole tree population and to decipher genetic determinisms involved in response to water deficit. PMID:26208644

  18. Multispectral airborne imagery in the field reveals genetic determinisms of morphological and transpiration traits of an apple tree hybrid population in response to water deficit.

    PubMed

    Virlet, Nicolas; Costes, Evelyne; Martinez, Sébastien; Kelner, Jean-Jacques; Regnard, Jean-Luc

    2015-09-01

    Genetic studies of response to water deficit in adult trees are limited by low throughput of the usual phenotyping methods in the field. Here, we aimed at overcoming this bottleneck, applying a new methodology using airborne multispectral imagery and in planta measurements to compare a high number of individuals. An apple tree population, grafted on the same rootstock, was submitted to contrasting summer water regimes over two years. Aerial images acquired in visible, near- and thermal-infrared at three dates each year allowed calculation of vegetation and water stress indices. Tree vigour and fruit production were also assessed. Linear mixed models were built accounting for date and year effects on several variables and including the differential response of genotypes between control and drought conditions. Broad-sense heritability of most variables was high and 18 quantitative trait loci (QTLs) independent of the dates were detected on nine linkage groups of the consensus apple genetic map. For vegetation and stress indices, QTLs were related to the means, the intra-crown heterogeneity, and differences induced by water regimes. Most QTLs explained 15-20% of variance. Airborne multispectral imaging proved relevant to acquire simultaneous information on a whole tree population and to decipher genetic determinisms involved in response to water deficit.

  19. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery

    PubMed Central

    Rivas Casado, Monica; Ballesteros Gonzalez, Rocio; Kriechbaumer, Thomas; Veal, Amanda

    2015-01-01

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km river reach along the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results leverage the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management. PMID:26556355

  20. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery.

    PubMed

    Casado, Monica Rivas; Gonzalez, Rocio Ballesteros; Kriechbaumer, Thomas; Veal, Amanda

    2015-11-04

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km river reach along the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results leverage the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management.

  1. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    USGS Publications Warehouse

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
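
    A stripped-down sketch of the idea behind LEAN, assuming scikit-learn: model the lidar DEM error at the RTK-GPS points as a function of NDVI, then subtract the predicted error from the full DEM. The published method may use a different model form and additional predictors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_lidar_correction(dem_at_gps, ndvi_at_gps, rtk_elevation):
    """Fit a simple NDVI-based model of lidar DEM error at the RTK-GPS survey points.
    Inputs are 1-D arrays sampled at the survey locations."""
    error = dem_at_gps - rtk_elevation                  # positive where lidar overestimates
    return LinearRegression().fit(ndvi_at_gps.reshape(-1, 1), error)

def apply_correction(dem, ndvi, model):
    """Subtract the predicted vegetation-induced error from the full lidar DEM."""
    predicted_error = model.predict(ndvi.reshape(-1, 1)).reshape(dem.shape)
    return dem - predicted_error
```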

  2. Techniques for automatic large scale change analysis of temporal multispectral imagery

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring

  3. Spatial Modeling and Variability Analysis for Modeling and Prediction of Soil and Crop Canopy Coverage Using Multispectral Imagery from an Airborne Remote Sensing System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Based on a previous study on an airborne remote sensing system with automatic camera stabilization for crop management, multispectral imagery was acquired using the MS-4100 multispectral camera at different flight altitudes over a 115 ha cotton field. After the acquired images were geo-registered an...

  4. Sea surface velocities from visible and infrared multispectral atmospheric mapping sensor imagery

    NASA Technical Reports Server (NTRS)

    Pope, P. A.; Emery, W. J.; Radebaugh, M.

    1992-01-01

    High resolution (100 m), sequential Multispectral Atmospheric Mapping Sensor (MAMS) images were used in a study to calculate advective surface velocities using the Maximum Cross Correlation (MCC) technique. Radiance and brightness temperature gradient magnitude images were formed from visible (0.48 microns) and infrared (11.12 microns) image pairs, respectively, of Chandeleur Sound, which is a shallow body of water northeast of the Mississippi delta, at 145546 GMT and 170701 GMT on 30 Mar. 1989. The gradient magnitude images enhanced the surface water feature boundaries, and a lower cutoff on the gradient magnitudes calculated allowed the undesirable sunglare and backscatter gradients in the visible images, and the water vapor absorption gradients in the infrared images, to be reduced in strength. Requiring high (greater than 0.4) maximum cross correlation coefficients and spatial coherence of the vector field aided in the selection of an optimal template size of 10 x 10 pixels (first image) and search limit of 20 pixels (second image) to use in the MCC technique. Use of these optimum input parameters to the MCC algorithm, and high correlation and spatial coherence filtering of the resulting velocity field from the MCC calculation yielded a clustered velocity distribution over the visible and infrared gradient images. The velocity field calculated from the visible gradient image pair agreed well with a subjective analysis of the motion, but the velocity field from the infrared gradient image pair did not. This was attributed to the changing shapes of the gradient features, their nonuniqueness, and large displacements relative to the mean distance between them. These problems implied a lower repeat time for the imagery was needed in order to improve the velocity field derived from gradient imagery. Suggestions are given for optimizing the repeat time of sequential imagery when using the MCC method for motion studies. Applying the MCC method to the infrared
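
    A minimal sketch of the MCC step for one template, using the 10 x 10 template, 20 pixel search limit, 0.4 correlation cutoff, 100 m pixels and roughly 2 h 11 min (7875 s) repeat quoted above; scikit-image's normalised cross-correlation is assumed, and the sign convention for the velocity components is illustrative.

```python
import numpy as np
from skimage.feature import match_template

def mcc_velocity(grad1, grad2, row, col, template=10, search=20,
                 dt_seconds=7875.0, pixel_m=100.0):
    """Maximum cross-correlation displacement for one template centred at (row, col)
    in the first gradient-magnitude image; grad2 is the later image. Assumes the
    template and search window lie fully inside both images."""
    half = template // 2
    tmpl = grad1[row - half:row + half, col - half:col + half]
    win = grad2[row - half - search:row + half + search,
                col - half - search:col + half + search]
    cc = match_template(win, tmpl)                      # normalised cross-correlation surface
    if cc.max() < 0.4:                                  # reject weak matches, as in the study
        return None
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    drow, dcol = peak[0] - search, peak[1] - search     # displacement in pixels
    # eastward (u) and northward (v) components, with image rows increasing southward
    return (dcol * pixel_m / dt_seconds, -drow * pixel_m / dt_seconds)
```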

  5. Estimating forest canopy attributes via airborne, high-resolution, multispectral imagery in midwest forest types

    NASA Astrophysics Data System (ADS)

    Gatziolis, Demetrios

    An investigation of the utility of high spatial resolution (sub-meter), 16-bit, multispectral, airborne digital imagery for forest land cover mapping in the heterogeneous and structurally complex forested landscapes of northern Michigan is presented. Imagery frame registration and georeferencing issues are presented and a novel approach for bi-directional reflectance distribution function (BRDF) effects correction and between-frame brightness normalization is introduced. Maximum likelihood classification of five cover type classes is performed over various geographic aggregates of 34 plots established in the study area that were designed according to the Forest Inventory and Analysis protocol. Classification accuracy estimates show that although band registration and BRDF corrections and brightness normalization provide an approximately 5% improvement over the raw imagery data, overall classification accuracy remains relatively low, barely exceeding 50%. Computed kappa coefficients reveal no statistical differences among classification trials. Classification results appear to be independent of geographic aggregations of sampling plots. Estimation of forest stand canopy parameters (stem density, canopy closure, and mean crown diameter) is based on quantifying the spatial autocorrelation among pixel digital numbers (DN) using variogram analysis and slope break analysis, an alternative non-parametric approach. Parameter estimation and cover type classification proceed from the identification of tree apexes. Parameter accuracy assessment is evaluated via value comparison with a spatially precise set of field observations. In general, slope-break-based parameter estimates are superior to those obtained using variograms. Estimated root mean square errors at the plot level for the former average 6.5% for stem density, 3.5% for canopy closure and 2.5% for mean crown diameter, which are less than or equal to error rates obtained via traditional forest stand

  6. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

    This work extends the lossless data compression technique described in Fast Lossless Compression of Multispectral- Image Data, (NPO-42517) NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
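
    A toy one-dimensional version of the residual-quantisation step described above (the actual technique predicts each sample from neighbouring samples and bands; only the near-lossless mechanism is illustrated):

```python
import numpy as np

def near_lossless_residuals(samples, delta=2):
    """Predict each sample from the previous *reconstructed* sample, quantise the
    residual with step 2*delta+1, and keep the quantised indices for entropy
    coding. Maximum reconstruction error is +/- delta; delta=0 is lossless."""
    step = 2 * delta + 1
    indices = np.empty(len(samples), dtype=np.int64)
    recon = np.empty(len(samples), dtype=np.int64)
    prev = 0                                            # initial prediction
    for i, s in enumerate(samples):
        residual = int(s) - prev
        q = int(np.round(residual / step))              # quantised residual index
        indices[i] = q                                  # this is what gets entropy-coded
        recon[i] = prev + q * step                      # decoder-identical reconstruction
        prev = recon[i]                                 # predict from the reconstructed value
    return indices, recon
```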

  7. Forest Stand Segmentation Using Airborne LIDAR Data and Very High Resolution Multispectral Imagery

    NASA Astrophysics Data System (ADS)

    Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet, Valérie; Hervieu, Alexandre

    2016-06-01

    Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed by visual analysis of human operators on very high resolution (VHR) optical images. This work is highly time consuming and should be automated for scalability purposes. In this paper, a method based on the fusion of airborne laser scanning data (or lidar) and very high resolution multispectral imagery for automatic forest stand delineation and forest land-cover database update is proposed. The multispectral images give access to the tree species whereas 3D lidar point clouds provide geometric information on the trees. Therefore, multi-modal features are computed, both at pixel and object levels. The objects are individual trees extracted from lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the existing tree species in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map is generated through the tree species classification and inserted with the pixel-based features map in an energetical framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of details. Comparison with an existing forest land cover database shows that our method provides satisfactory results both in terms of stand labelling and delineation (matching ranges between 94% and 99%).

  8. Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system.

    PubMed

    Michez, Adrien; Piégay, Hervé; Lisein, Jonathan; Claessens, Hugues; Lejeune, Philippe

    2016-03-01

    Riparian forests are critically endangered by many anthropogenic pressures and natural hazards. The importance of riparian zones has been acknowledged by European Directives, involving multi-scale monitoring. The use of very-high-resolution, hyperspatial imagery in a multi-temporal approach is an emerging topic. The trend is reinforced by the recent and rapid growth of the use of unmanned aerial systems (UAS), which has prompted the development of innovative methodology. Our study proposes a methodological framework to explore how a set of multi-temporal images acquired during a vegetative period can differentiate some of the deciduous riparian forest species and their health conditions. More specifically, the developed approach intends to identify, through a process of variable selection, which variables derived from UAS imagery and which scale of image analysis are the most relevant to our objectives. The methodological framework is applied to two study sites to describe the riparian forest through two fundamental characteristics: the species composition and the health condition. These characteristics were selected not only because of their use as proxies for the riparian zone ecological integrity but also because of their use for river management. The comparison of various scales of image analysis identified the smallest object-based image analysis (OBIA) objects (ca. 1 m(2)) as the most relevant scale. Variables derived from spectral information (band ratios) were identified as the most appropriate, followed by variables related to the vertical structure of the forest. Classification results show good overall accuracies for the species composition of the riparian forest (five classes, 79.5 and 84.1% for site 1 and site 2). The classification scenario regarding the health condition of the black alders of site 1 performed the best (90.6%). The quality of the classification models developed with a UAS-based, cost-effective, and semi-automatic approach

  9. Evaluation of unmanned aerial vehicle (UAV) imagery to model vegetation heights in Hulun Buir grassland ecosystem

    NASA Astrophysics Data System (ADS)

    Wang, D.; Xin, X.; Li, Z.

    2015-12-01

    Vertical vegetation structure in grassland ecosystems is needed to assess grassland health and to monitor available forage for livestock and wildlife habitat. Traditional ground-based field methods for measuring vegetation heights are time consuming. Most emerging airborne remote sensing techniques capable of measuring surface and vegetation height (e.g., LIDAR) are too expensive to apply at broad scales. Aerial or spaceborne stereo imagery has a cost advantage for mapping the height of tall vegetation, such as forest. However, the accuracy and uncertainty of using stereo imagery for modeling the heights of short vegetation, such as grass (generally lower than 50 cm), need to be investigated. In this study, 2.5-cm resolution UAV stereo imagery is used to model vegetation heights in the Hulun Buir grassland ecosystem. Strong correlations were observed (r > 0.9) between vegetation heights derived from UAV stereo imagery and field-measured heights at the individual and plot levels. However, vegetation heights tended to be underestimated in the imagery, especially in areas with high vegetation coverage. The strong correlations between field-collected vegetation heights and metrics derived from UAV stereo imagery suggest that UAV stereo imagery can be used to estimate short vegetation heights such as those in grassland ecosystems. Future work will be needed to verify the extensibility of the methods to other sites and vegetation types.

  10. High-resolution multispectral satellite imagery for extracting bathymetric information of Antarctic shallow lakes

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Luis, Alvarinho J.

    2016-05-01

    High-resolution pansharpened images from WorldView-2 were used for bathymetric mapping around the Larsemann Hills and the Schirmacher oasis, east Antarctica. We digitized the lake features, with all the lakes from both study areas manually extracted. In order to extract bathymetry values from the multispectral imagery we used two different models: (a) the Stumpf model and (b) the Lyzenga model. Multiband image combinations were used to improve the results of bathymetric information extraction. The derived depths were validated against in-situ measurements and the root mean square error (RMSE) was computed. We also quantified the error between in-situ and satellite-estimated lake depth values. Our results indicated a high correlation (R = 0.60-0.80) between estimated depth and in-situ depth measurements, with RMSE ranging from 0.10 to 1.30 m. This study suggests that the coastal blue band in the WV-2 imagery retrieves more accurate bathymetry information than the other bands. To test the effect of lake size and depth on bathymetry retrieval, we grouped all the lakes on the basis of size and depth (reference data), as some of the lakes were open, some were semi-frozen and others were completely frozen. Several tests were performed on open lakes on the basis of size and depth. Based on depth, very shallow lakes provided better correlation (≈ 0.89) compared to shallow (≈ 0.67) and deep lakes (≈ 0.48). Based on size, large lakes yielded better correlation than medium and small lakes.
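
    The Stumpf ratio model mentioned above is straightforward to sketch, assuming scikit-learn and co-located reflectance and in-situ depth samples; the band pairing and the constant n follow common practice rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_stumpf_model(blue, green, insitu_depth, n=1000.0):
    """Stumpf ratio model: depth ~ m1 * ln(n*R_blue) / ln(n*R_green) + m0, with the
    coefficients calibrated against in-situ depths at the sample points (1-D arrays)."""
    ratio = np.log(n * blue) / np.log(n * green)
    return LinearRegression().fit(ratio.reshape(-1, 1), insitu_depth)

def predict_depth(blue_img, green_img, model, n=1000.0):
    """Apply the calibrated ratio model to full reflectance bands."""
    ratio = np.log(n * blue_img) / np.log(n * green_img)
    return model.predict(ratio.reshape(-1, 1)).reshape(blue_img.shape)
```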

  11. High-spatial resolution multispectral and panchromatic satellite imagery for mapping perennial desert plants

    NASA Astrophysics Data System (ADS)

    Alsharrah, Saad A.; Bruce, David A.; Bouabid, Rachid; Somenahalli, Sekhar; Corcoran, Paul A.

    2015-10-01

    The use of remote sensing techniques to extract vegetation cover information for the assessment and monitoring of land degradation in arid environments has gained increased interest in recent years. However, such a task can be challenging, especially for medium-spatial resolution satellite sensors, due to soil background effects and the distribution and structure of perennial desert vegetation. In this study, we utilised Pleiades high-spatial resolution multispectral (2 m) and panchromatic (0.5 m) imagery and focused on mapping small shrubs and low-lying trees using three classification techniques: 1) vegetation index (VI) threshold analysis, 2) pre-built object-oriented image analysis (OBIA), and 3) a purpose-developed vegetation shadow model (VSM). We evaluated the success of each approach using a root of the sum of the squares (RSS) metric, which incorporated field data as control and three error metrics relating to commission, omission, and percent cover. Results showed that the optimum VI performers returned good vegetation cover estimates at certain thresholds, but failed to accurately map the distribution of the desert plants. Using the pre-built IMAGINE Objective OBIA approach, we improved the vegetation distribution mapping accuracy, but this came at the cost of over-classification, similar to the results of lowering VI thresholds. We further introduced the VSM, which takes shadow into account to further refine the vegetation cover classification derived from VIs. The results showed significant improvements in vegetation cover and distribution accuracy compared to the other techniques. We argue that the VSM approach using high-spatial resolution imagery provides a more accurate representation of desert landscape vegetation and should be considered in assessments of desertification.

  12. Use of multispectral Ikonos imagery for discriminating between conventional and conservation agricultural tillage practices

    USGS Publications Warehouse

    Vina, Andres; Peters, Albert J.; Ji, Lei

    2003-01-01

    There is a global concern about the increase in atmospheric concentrations of greenhouse gases. One method being discussed to encourage greenhouse gas mitigation efforts is based on a trading system whereby carbon emitters can buy effective mitigation efforts from farmers implementing conservation tillage practices. These practices sequester carbon from the atmosphere, and such a trading system would require a low-cost and accurate method of verification. Remote sensing technology can offer such a verification technique. This paper is focused on the use of standard image processing procedures applied to a multispectral Ikonos image, to determine whether it is possible to validate that farmers have complied with agreements to implement conservation tillage practices. A principal component analysis (PCA) was performed in order to isolate image variance in cropped fields. Analyses of variance (ANOVA) statistical procedures were used to evaluate the capability of each Ikonos band and each principal component to discriminate between conventional and conservation tillage practices. A logistic regression model was implemented on the principal component most effective in discriminating between conventional and conservation tillage, in order to produce a map of the probability of conventional tillage. The Ikonos imagery, in combination with ground-reference information, proved to be a useful tool for verification of conservation tillage practices.
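
    A hedged sketch of the processing chain described above (scikit-learn assumed): principal components of the multispectral pixel values followed by a logistic regression that outputs a probability of conventional tillage. The paper selects a single discriminating component via ANOVA; here all retained components are simply passed to the regression.

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def tillage_probability_model(field_pixels, labels):
    """`field_pixels` is (n_pixels, n_bands) from cropped fields; `labels` is 1 for
    conventional and 0 for conservation tillage ground reference."""
    model = make_pipeline(StandardScaler(), PCA(n_components=3),
                          LogisticRegression(max_iter=1000))
    model.fit(field_pixels, labels)
    return model

# probability-of-conventional-tillage map (flattened pixels):
# prob = tillage_probability_model(X_train, y_train).predict_proba(X_image)[:, 1]
```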

  13. Automatic Reconstruction of Building Roofs Through Effective Integration of LIDAR and Multispectral Imagery

    NASA Astrophysics Data System (ADS)

    Awrangjeb, M.; Zhang, C.; Fraser, C. S.

    2012-07-01

    Automatic 3D reconstruction of building roofs from remotely sensed data is important for many applications including city modeling. This paper proposes a new method for automatic 3D roof reconstruction through an effective integration of LIDAR data and multispectral imagery. Using the ground height from a DEM, the raw LIDAR points are separated into two groups. The first group contains the ground points that are exploited to constitute a 'ground mask'. The second group contains the non-ground points that are used to generate initial roof planes. The structural lines are extracted from the grey-scale version of the orthoimage and they are classified into several classes such as 'ground', 'tree', 'roof edge' and 'roof ridge' using the ground mask, the NDVI image (Normalised Difference Vegetation Index from the multi-band orthoimage) and the entropy image (from the grey-scale orthoimage). The lines from the latter two classes are primarily used to fit initial planes to the neighbouring LIDAR points. Other image lines within the vicinity of an initial plane are selected to fit the boundary of the plane. Once the proper image lines are selected and others are discarded, the final plane is reconstructed using the selected lines. Experimental results show that the proposed method can handle irregular and large registration errors between the LIDAR data and orthoimagery.

  14. A review and analysis of neural networks for classification of remotely sensed multispectral imagery

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1993-01-01

    A literature survey and analysis of the use of neural networks for the classification of remotely sensed multispectral imagery is presented. As part of a brief mathematical review, the backpropagation algorithm, which is the most common method of training multi-layer networks, is discussed with an emphasis on its application to pattern recognition. The analysis is divided into five aspects of neural network classification: (1) input data preprocessing, structure, and encoding; (2) output encoding and extraction of classes; (3) network architecture, (4) training algorithms; and (5) comparisons to conventional classifiers. The advantages of the neural network method over traditional classifiers are its non-parametric nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, fuzzy output values that can enhance classification, and good generalization for use with multiple images. The disadvantages of the method are slow training time, inconsistent results due to random initial weights, and the requirement of obscure initialization values (e.g., learning rate and hidden layer size). Possible techniques for ameliorating these problems are discussed. It is concluded that, although the neural network method has several unique capabilities, it will become a useful tool in remote sensing only if it is made faster, more predictable, and easier to use.

  15. An innovative approach to improve SRTM DEM using multispectral imagery and artificial neural network

    NASA Astrophysics Data System (ADS)

    Wendi, Dadiyorto; Liong, Shie-Yui; Sun, Yabin; doan, Chi Dung

    2016-06-01

    Although the Shuttle Radar Topography Mission (SRTM) data are a publicly accessible Digital Elevation Model (DEM) provided at no cost, their accuracy, especially in forested areas, is known to be limited, with a root mean square error (RMSE) of approx. 14 m in Singapore's forested area. Such inaccuracy is attributed to the 5.6 cm wavelength used by SRTM, which does not penetrate vegetation well. This paper considers forested areas of the central catchment of Singapore as a proof of concept of an approach to improve the SRTM data set. The approach makes full use of (1) the introduction of multispectral imagery (Landsat 8), of 30 m resolution, into the SRTM data; (2) the Artificial Neural Network (ANN), to exploit its known strengths in pattern recognition; and (3) a reference DEM of high accuracy (1 m) derived through the integration of stereo imaging from WorldView-1 and extensive ground survey points. The study shows a series of significant improvements to the SRTM when assessed against the reference DEM of 2 different areas, with RMSE reductions of ˜68% (from 13.9 m to 4.4 m) and ˜52% (from 14.2 m to 6.7 m). In addition, the assessment of the resulting DEM also includes comparisons with a simple denoising method (Low Pass Filter) and the commercially available product NEXTMap® World 30™.
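
    A compact sketch of the correction idea, assuming scikit-learn: learn the SRTM-minus-reference error from co-located SRTM elevations and Landsat band values with a small neural network, then subtract the predicted error. The network size and input features are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_dem_correction(srtm_samples, band_samples, reference_samples):
    """srtm_samples and reference_samples are 1-D elevations at training points;
    band_samples is (n_points, n_bands) of co-located multispectral values."""
    X = np.column_stack([srtm_samples, band_samples])   # elevation plus Landsat bands
    y = srtm_samples - reference_samples                # vegetation-induced bias to remove
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000,
                                       random_state=0))
    model.fit(X, y)
    return model

def correct_srtm(srtm, bands, model):
    """Apply the learned bias correction to the full SRTM grid (bands is H x W x n_bands)."""
    X = np.column_stack([srtm.ravel(), bands.reshape(-1, bands.shape[-1])])
    return srtm - model.predict(X).reshape(srtm.shape)
```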

  16. Analysis and Exploitation of Automatically Generated Scene Structure from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Nilosek, David R.

    The recent advancements made in the field of computer vision, along with the ever increasing rate of computational power, have opened up opportunities in the field of automated photogrammetry. Many researchers have focused on using these powerful computer vision algorithms to extract three-dimensional point clouds of scenes from multi-view imagery, with the ultimate goal of creating a photo-realistic scene model. However, geographically accurate three-dimensional scene models have the potential to be exploited for much more than just visualization. This work looks at utilizing automatically generated scene structure from near-nadir aerial imagery to identify and classify objects within the structure, through the analysis of spatial-spectral information. The restriction to this type of imagery is imposed because of its common availability. Popular third-party computer-vision algorithms are used to generate the scene structure. A voxel-based approach for surface estimation is developed using Manhattan-world assumptions. A surface estimation confidence metric is also presented. This approach provides the basis for further analysis of surface materials, incorporating spectral information. Two cases of spectral analysis are examined: when additional hyperspectral imagery of the reconstructed scene is available, and when only R,G,B spectral information can be obtained. A method for registering the surface estimation to hyperspectral imagery, through orthorectification, is developed. Atmospherically corrected hyperspectral imagery is used to assign reflectance values to estimated surface facets for physical simulation with DIRSIG. A spatial-spectral region-growing-based segmentation algorithm is developed for the R,G,B limited case, in order to identify possible materials for user attribution. Finally, an analysis of the geographic accuracy of automatically generated three-dimensional structure is performed. An end-to-end, semi-automated, workflow

  17. Remote sensing of shorelines using data fusion of hyperspectral and multispectral imagery acquired from mobile and fixed platforms

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R.; Frystacky, Heather

    2012-06-01

    An optimized data fusion methodology is presented that makes use of airborne and vessel-mounted hyperspectral and multispectral imagery acquired in littoral zones in Florida and the northern Gulf of Mexico. The results demonstrate the use of hyperspectral-multispectral data fusion for anomaly detection along shorelines and in surface and subsurface waters. Hyperspectral imagery utilized in the data fusion analysis was collected using a 64-1024 channel, 1376 pixel swath width, temperature-stabilized sensing system; an integrated inertial motion unit; and differential GPS. The imaging system is calibrated using dual 18 inch calibration spheres, spectral line sources, and custom line targets. Simultaneously collected multispectral three-band imagery used in the data fusion analysis was derived from either a 12 inch focal length large format camera using 9 inch high speed AGFA color negative film, a 12.3 megapixel digital camera, or dual high speed full definition video cameras. Pushbroom sensor imagery is corrected using Kalman filtering and smoothing in order to correct images for motions of the airborne platform or of a small vessel. Custom software developed for the hyperspectral system and the optimized data fusion process allows for post processing using atmospherically corrected and georeferenced reflectance imagery. The optimized data fusion approach allows for detecting spectral anomalies in the resolution-enhanced data cubes. Spectral-spatial anomaly detection is demonstrated using simulated embedded targets in actual imagery. The approach allows one to utilize spectral signature anomalies to identify features and targets that would otherwise not be possible. The optimized data fusion techniques and software have been developed in order to perform sensitivity analysis of the synthetic images in order to optimize the singular value decomposition model building process and the 2-D Butterworth cutoff frequency selection process, using the concept of user defined "feature

  18. A fully-automated approach to land cover mapping with airborne LiDAR and high resolution multispectral imagery in a forested suburban landscape

    NASA Astrophysics Data System (ADS)

    Parent, Jason R.; Volin, John C.; Civco, Daniel L.

    2015-06-01

    Information on land cover is essential for guiding land management decisions and supporting landscape-level ecological research. In recent years, airborne light detection and ranging (LiDAR) and high resolution aerial imagery have become more readily available in many areas. These data have great potential to enable the generation of land cover maps at a fine scale and across large areas by leveraging 3-dimensional structure and multispectral information. LiDAR and other high resolution datasets must be processed in relatively small subsets due to their large volumes; however, conventional classification techniques cannot be fully automated and thus are unlikely to be feasible options when processing large high-resolution datasets. In this paper, we propose a fully automated rule-based algorithm to develop a 1 m resolution land cover classification from LiDAR data and multispectral imagery. The algorithm we propose uses a series of pixel- and object-based rules to identify eight vegetated and non-vegetated land cover features (deciduous and coniferous tall vegetation, medium vegetation, low vegetation, water, riparian wetlands, buildings, and low impervious cover). The rules leverage both structural and spectral properties including height, LiDAR return characteristics, brightness in visible and near-infrared wavelengths, and normalized difference vegetation index (NDVI). Pixel-based properties were used initially to classify each land cover class while minimizing omission error; a series of object-based tests were then used to remove errors of commission. These tests used conservative thresholds, based on diverse test areas, to help avoid over-fitting the algorithm to the test areas. The accuracy assessment of the classification results included a stratified random sample of 3198 validation points distributed across 30 1 × 1 km tiles in eastern Connecticut, USA. The sample tiles were selected in a stratified random manner from locations representing the full range of
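    The following hypothetical Python sketch illustrates the flavour of such pixel-level rules, combining NDVI with LiDAR-derived height; the class set is simplified and all thresholds are placeholders, not the values calibrated by the authors.

        # Illustrative pixel rules combining NDVI and LiDAR height above ground.
        # Thresholds and the first/last-return cue are assumptions for the sketch.
        import numpy as np

        def classify(ndvi, height, first_last_diff):
            """ndvi, height (m), first_last_diff (m): 2-D arrays of equal shape."""
            cls = np.full(ndvi.shape, "impervious", dtype=object)
            veg = ndvi > 0.2
            cls[veg & (height > 5)] = "tall_vegetation"
            cls[veg & (height > 0.5) & (height <= 5)] = "medium_vegetation"
            cls[veg & (height <= 0.5)] = "low_vegetation"
            # Buildings: tall, non-vegetated, solid (small first/last return difference)
            cls[~veg & (height > 2.5) & (first_last_diff < 0.5)] = "building"
            return cls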

  19. A procedure for orthorectification of sub-decimeter resolution imagery obtained with an unmanned aerial vehicle (UAV)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Digital aerial photography acquired with unmanned aerial vehicles (UAVs) has great value for resource management due to the flexibility and relatively low cost for image acquisition, and very high resolution imagery (5 cm) which allows for mapping bare soil and vegetation types, structure and patter...

  20. Feature fusion using ranking for object tracking in aerial imagery

    NASA Astrophysics Data System (ADS)

    Candemir, Sema; Palaniappan, Kannappan; Bunyak, Filiz; Seetharaman, Guna

    2012-06-01

    Aerial wide-area monitoring and tracking using multi-camera arrays poses unique challenges compared to standard full motion video analysis due to low frame rate sampling, accurate registration due to platform motion, low resolution targets, limited image contrast, and static and dynamic parallax occlusions [1-3]. We have developed a low frame rate tracking system that fuses a rich set of intensity, texture and shape features, which enables adaptation of the tracker to dynamic environment changes and target appearance variabilities. However, improper fusion and overweighting of low quality features can adversely affect target localization and reduce tracking performance. Moreover, the large computational cost associated with extracting a large number of image-based feature sets will influence tradeoffs for real-time and on-board tracking. This paper presents a framework for dynamic online ranking-based feature evaluation and fusion in aerial wide-area tracking. We describe a set of efficient descriptors suitable for small sized targets in aerial video based on intensity, texture, and shape feature representations or views. Feature ranking is then used as a selection procedure where target-background discrimination power for each (raw) feature view is scored using a two-class variance ratio approach. A subset of the k-best discriminative features is selected for further processing and fusion. The target match probability or likelihood maps for each of the k features are estimated by comparing target descriptors within a search region using a sliding window approach. The resulting k likelihood maps are fused for target localization using the normalized variance ratio weights. We quantitatively measure the performance of the proposed system using ground-truth tracks within the framework of our tracking evaluation test-bed that incorporates various performance metrics. The proposed feature ranking and fusion approach increases localization accuracy by reducing multimodal effects
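    The two-class variance ratio used for feature ranking can be sketched in Python as follows (after the widely used Collins-style formulation); the histogram bin count and epsilon are illustrative choices rather than the paper's exact settings.

        # Two-class variance ratio: large when the log-likelihood ratio separates
        # target and background samples well, small when the classes overlap.
        import numpy as np

        def variance_ratio(target_vals, background_vals, bins=32, eps=1e-3):
            lo = min(target_vals.min(), background_vals.min())
            hi = max(target_vals.max(), background_vals.max())
            p, _ = np.histogram(target_vals, bins=bins, range=(lo, hi))
            q, _ = np.histogram(background_vals, bins=bins, range=(lo, hi))
            p, q = p / p.sum(), q / q.sum()
            L = np.log((p + eps) / (q + eps))        # per-bin log-likelihood ratio

            def weighted_var(weights):
                m = np.sum(weights * L)
                return np.sum(weights * L**2) - m**2

            return weighted_var(0.5 * (p + q)) / (weighted_var(p) + weighted_var(q) + eps)

        # Rank feature views by this score and keep the k best for fusion.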

  1. Unmanned Aerial Vehicles Produce High-Resolution Seasonally-Relevant Imagery for Classifying Wetland Vegetation

    NASA Astrophysics Data System (ADS)

    Marcaccio, J. V.; Markle, C. E.; Chow-Fraser, P.

    2015-08-01

    With recent advances in technology, personal aerial imagery acquired with unmanned aerial vehicles (UAVs) has transformed the way ecologists can map seasonal changes in wetland habitat. Here, we use a multi-rotor (consumer quad-copter, the DJI Phantom 2 Vision+) UAV to acquire a high-resolution (< 8 cm) composite photo of a coastal wetland in summer 2014. Using validation data collected in the field, we determine if a UAV image and a SWOOP (Southwestern Ontario Orthoimagery Project) image (collected in spring 2010) differ in their classification of dominant vegetation type and percent cover of three plant classes: submerged aquatic vegetation, floating aquatic vegetation, and emergent vegetation. The UAV imagery was more accurate than available SWOOP imagery for mapping percent cover of the submergent and floating vegetation categories, but both were able to accurately determine the dominant vegetation type and percent cover of emergent vegetation. Our results underscore the value and potential for affordable UAVs (complete quad-copter system < 3,000 CAD) to revolutionize the way ecologists obtain imagery and conduct field research. In Canada, new UAV regulations make this an easy and affordable way to obtain multiple high-resolution images of small (< 1.0 km2) wetlands, or portions of larger wetlands throughout a year.

  2. Vehicle detection from very-high-resolution (VHR) aerial imagery using attribute belief propagation (ABP)

    NASA Astrophysics Data System (ADS)

    Wang, Yanli; Li, Ying; Zhang, Li; Huang, Yuchun

    2016-10-01

    With the popularity of very-high-resolution (VHR) aerial imagery, the shape, color, and context attributes of vehicles are better characterized. Due to varying road surroundings and imaging conditions, vehicle attributes can be adversely affected so that vehicles are mistakenly detected or missed. This paper is motivated to robustly extract rich attribute features for detecting vehicles in VHR imagery under different scenarios. Based on the hierarchical component tree of vehicle context, attribute belief propagation (ABP) is proposed to detect salient vehicles from a statistical perspective. With the Max-tree data structure, the multi-level component tree around the road network is efficiently created. The spatial relationship between a vehicle and the context it belongs to is established with the belief definition of vehicle attributes. To effectively correct single-level belief errors, inter-level belief linkages enforce consistency of belief assignment between corresponding components at different levels. ABP starts from an initial set of vehicle beliefs calculated from vehicle attributes, and then iterates through each component by applying inter-level belief passing until convergence. The optimal vehicle belief of each component is obtained by minimizing its belief function iteratively. The proposed algorithm is tested on a diverse set of VHR imagery acquired in city and inter-city areas of western and southern China. Experimental results show that the proposed algorithm can detect vehicles efficiently and suppress erroneous detections effectively. The proposed ABP framework is promising for robustly classifying vehicles in VHR aerial imagery.

  3. Vectorization of Road Data Extracted from Aerial and Uav Imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Häufel, Gisela; Pohl, Melanie

    2016-06-01

    Road databases are essential instances of urban infrastructure. Therefore, automatic road detection from sensor data has been an important research activity for many decades. Given aerial images of sufficient resolution, dense 3D reconstruction can be performed. Starting from a classification result of road pixels from combined elevation and optical data, we present in this paper a five-step procedure for creating vectorized road networks. The main steps of the algorithm are: preprocessing, thinning, polygonization, filtering, and generalization. In particular, for the generalization step, which represents the principal area of innovation, two strategies are presented. The first strategy corresponds to a modification of the Douglas-Peucker algorithm in order to reduce the number of vertices, while the second strategy allows a smoother representation of street windings by Bézier curves, which reduces the total curvature defined for the dataset by roughly an order of magnitude. We tested our approach on three datasets of different complexity. The quantitative assessment of the results was performed by means of shapefiles from OpenStreetMap data. For a threshold of 6 m, completeness and correctness values of up to 85% were achieved.
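    For reference, a minimal Python sketch of the classic Douglas-Peucker simplification that the first generalization strategy modifies is given below; the tolerance is in the coordinate units of the input polyline.

        # Classic recursive Douglas-Peucker polyline simplification (baseline form,
        # not the paper's modified version).
        import numpy as np

        def douglas_peucker(points, tol):
            """points: (N, 2) array of polyline vertices; returns a simplified array."""
            pts = np.asarray(points, dtype=float)
            if len(pts) < 3:
                return pts
            start, end = pts[0], pts[-1]
            dx, dy = end - start
            seg_len = np.hypot(dx, dy)
            if seg_len == 0:
                dists = np.linalg.norm(pts - start, axis=1)
            else:
                # Perpendicular distance of every vertex from the chord start -> end
                dists = np.abs(dx * (pts[:, 1] - start[1]) - dy * (pts[:, 0] - start[0])) / seg_len
            idx = int(np.argmax(dists))
            if dists[idx] > tol:
                left = douglas_peucker(pts[: idx + 1], tol)
                right = douglas_peucker(pts[idx:], tol)
                return np.vstack([left[:-1], right])
            return np.vstack([start, end])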

  4. Detecting blind building façades from highly overlapping wide angle aerial imagery

    NASA Astrophysics Data System (ADS)

    Burochin, Jean-Pascal; Vallet, Bruno; Brédif, Mathieu; Mallet, Clément; Brosset, Thomas; Paparoditis, Nicolas

    2014-10-01

    This paper deals with the identification of blind building façades, i.e. façades which have no openings, in wide angle aerial images with a decimeter pixel size, acquired by nadir looking cameras. This blindness characterization is in general crucial for real estate estimation and has, at least in France, particular importance for evaluating the legal permission to build on a parcel under local urban planning schemes. We assume that we have at our disposal an aerial survey with a relatively high stereo overlap along-track and across-track and a 3D city model of LoD 1, which may have been generated from the input images. The 3D model is textured with the aerial imagery by taking into account the 3D occlusions and by selecting for each façade the best available resolution texture that sees the whole façade. We then parse all 3D façade textures looking for evidence of openings (windows or doors). This evidence is characterized by a comprehensive set of basic radiometric and geometrical features. The blindness prediction is then made through a supervised (SVM) classification. Despite the relatively low resolution of the images, we reach a classification accuracy of around 85% on decimeter resolution imagery with 60 × 40 % stereo overlap. On the one hand, we show that the results are very sensitive to the texture resampling process and to vegetation presence on façade textures. On the other hand, the most relevant features for our classification framework are related to texture uniformity, horizontal aspect, and the maximal contrast of the opening detections. We conclude that standard aerial imagery used to build 3D city models can also be exploited, to some extent and at no additional cost, for façade blindness characterisation.

  5. Lake Superior water quality near Duluth from analysis of aerial photos and ERTS imagery

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.; Van Domelen, J. F.

    1973-01-01

    ERTS imagery of Lake Superior in the late summer of 1972 shows dirty water near the city of Duluth. Water samples and simultaneous photographs were taken on three separate days following a heavy storm which caused muddy runoff water. The water samples were analyzed for turbidity, color, and solids. Reflectance and transmittance characteristics of the water samples were determined with a spectrophotometer apparatus. This same apparatus attached to a microdensitometer was used to analyze the photographs for the approximate colors or wavelengths of reflected energy that caused the exposure. Although other parameters do correlate for any one particular day, it is only the water quality parameter of turbidity that correlates with the aerial imagery on all days, as the character of the dirty water changes due to settling and mixing.

  6. EROS main image file - A picture perfect database for Landsat imagery and aerial photography

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    The Earth Resources Observation System (EROS) Program was established by the U.S. Department of the Interior in 1966 under the administration of the Geological Survey. It is primarily concerned with the application of remote sensing techniques for the management of natural resources. The retrieval system employed to search the EROS database is called INORAC (Inquiry, Ordering, and Accounting). A description is given of the types of images identified in EROS, taking into account Landsat imagery, Skylab images, Gemini/Apollo photography, and NASA aerial photography. Attention is given to retrieval commands, geographic coordinate searching, refinement techniques, various online functions, and questions regarding the access to the EROS Main Image File.

  7. Environmental waste site characterization utilizing aerial photographs and satellite imagery: Three sites in New Mexico, USA

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Becker, N.; Wells, B.; Lewis, A.; David, N.

    1996-04-01

    The proper handling and characterization of past hazardous waste sites is becoming more and more important as world population extends into areas previously deemed undesirable. Historical photographs, past records, and current aerial and satellite imagery can play an important role in characterizing these sites. These data provide clear insight for defining problem areas, which can then be surface sampled in further detail. Three such areas are discussed in this paper: (1) nuclear wastes buried in trenches at Los Alamos National Laboratory, (2) surface dumping at one site at Los Alamos National Laboratory, and (3) the historical development of a municipal landfill near Las Cruces, New Mexico.

  8. Estimation of walrus populations on sea ice with infrared imagery and aerial photography

    USGS Publications Warehouse

    Udevitz, M.S.; Burn, D.M.; Webber, M.A.

    2008-01-01

    Population sizes of ice-associated pinnipeds have often been estimated with visual or photographic aerial surveys, but these methods require relatively slow speeds and low altitudes, limiting the area they can cover. Recent developments in infrared imagery and its integration with digital photography could allow substantially larger areas to be surveyed and more accurate enumeration of individuals, thereby solving major problems with previous survey methods. We conducted a trial survey in April 2003 to estimate the number of Pacific walruses (Odobenus rosmarus divergens) hauled out on sea ice around St. Lawrence Island, Alaska. The survey used high altitude infrared imagery to detect groups of walruses on strip transects. Low altitude digital photography was used to determine the number of walruses in a sample of detected groups and calibrate the infrared imagery for estimating the total number of walruses. We propose a survey design incorporating this approach with satellite radio telemetry to estimate the proportion of the population in the water and additional low-level flights to estimate the proportion of the hauled-out population in groups too small to be detected in the infrared imagery. We believe that this approach offers the potential for obtaining reliable population estimates for walruses and other ice-associated pinnipeds. © 2007 by the Society for Marine Mammalogy.

  9. Current Usage and Future Prospects of Multispectral (RGB) Satellite Imagery in Support of NWS Forecast Offices and National Centers

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Fuell, Kevin K.; Knaff, John; Lee, Thomas

    2012-01-01

    Current and future satellite sensors provide remotely sensed quantities from a variety of wavelengths ranging from the visible to the passive microwave, from both geostationary and low-Earth orbits. The NASA Short-term Prediction Research and Transition (SPoRT) Center has a long history of providing multispectral imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites in support of NWS forecast office activities. Products from MODIS have recently been extended to include a broader suite of multispectral imagery similar to those developed by EUMETSAT, based upon the spectral channels available from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard METEOSAT-9. This broader suite includes products that discriminate between air mass types associated with synoptic-scale features, assist in the identification of dust, and improve upon paired channel difference detection of fog and low cloud events. Similarly, researchers at NOAA/NESDIS and CIRA have developed air mass discrimination capabilities using channels available from the current GOES Sounders. Other applications of multispectral composites include combinations of high and low frequency, horizontally and vertically polarized passive microwave brightness temperatures to discriminate tropical cyclone structures and other synoptic-scale features. Many of these capabilities have been transitioned for evaluation and operational use at NWS Weather Forecast Offices and National Centers through collaborations with SPoRT and CIRA. Future instruments will continue the availability of these products and also expand upon current capabilities. The Advanced Baseline Imager (ABI) on GOES-R will improve the spectral, spatial, and temporal resolution of our current geostationary capabilities, and the recently launched Suomi National Polar-Orbiting Partnership (S-NPP) carries instruments such as the Visible Infrared Imager Radiometer Suite (VIIRS), the Cross

  10. Aerial Imagery and Other Non-invasive Approaches to Detect Nitrogen and Water Stress in a Potato Crop

    NASA Astrophysics Data System (ADS)

    Nigon, Tyler John

    Post-emergence nitrogen (N) fertilizer is typically split applied to irrigated potato (Solanum tuberosum L.) in Minnesota in order to minimize the likelihood of nitrate leaching and to best match N availability to crop demands. Petiole nitrate-nitrogen (NO3-N) concentration is often used as a diagnostic test to determine the rate and timing of split applications, but using this approach for variable rate applications is difficult. Canopy-level spectral measurements, such as hyperspectral and multispectral imagery, have the potential to be a reliable tool for making in-season N management decisions for precision agriculture applications. The objectives of this two year field study were to evaluate the effects of variety, N treatment, and water stress on growth characteristics and the ability of canopy-level reflectance to predict N stress in potato. Treatments included two irrigation regimes (unstressed and stressed), five N regimes categorized by three N rates (34 kg N ha-1, 180 kg N ha-1, and 270 kg N ha-1) in which the 270 kg N ha-1 rate had post-emergence N either split applied or applied early in the season, and two potato varieties (Russet Burbank and Alpine Russet). Higher N rates and split applications generally resulted in higher tuber yield for both varieties. Insufficient supplemental water was found to reduce tuber yield and plant N uptake. Of the broadband indices, narrowband indices, and partial least squares regression (PLS) models evaluated, the best predictor of N stress as measured by leaf N concentration was the PLS model using derivative reflectance (r2 of 0.79 for RB and 0.77 for AR). However, the best technique for determining N stress level for variable rate application of N fertilizer was MTCI (MERIS Terrestrial Chlorophyll Index) due to its good relationship with leaf N concentration and high accuracy. As a final aspect of the study, results from the experimental plots were used to predict N stress in a
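    For illustration, MTCI as it is commonly defined from reflectance near the MERIS band centres at 753.75, 708.75 and 681.25 nm can be computed as in the Python sketch below; the exact bands extracted from the hyperspectral data in this study may differ.

        # MERIS Terrestrial Chlorophyll Index from three reflectance bands.
        import numpy as np

        def mtci(r753, r708, r681):
            r753, r708, r681 = (np.asarray(b, dtype=float) for b in (r753, r708, r681))
            return (r753 - r708) / (r708 - r681 + 1e-9)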

  11. Fusion of monocular cues to detect man-made structures in aerial imagery

    NASA Technical Reports Server (NTRS)

    Shufelt, Jefferey; Mckeown, David M.

    1991-01-01

    The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described, and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.

  12. A semi-automated single day image differencing technique to identify animals in aerial imagery.

    PubMed

    Terletzky, Pat; Ramsey, Robert Douglas

    2014-01-01

    Our research presents a proof-of-concept that explores a new and innovative method to identify large animals in aerial imagery with single day image differencing. We acquired two aerial images of eight fenced pastures and conducted a principal component analysis of each image. We then subtracted the first principal component of the two pasture images followed by heuristic thresholding to generate polygons. The number of polygons represented the number of potential cattle (Bos taurus) and horses (Equus caballus) in the pasture. The process was considered semi-automated because we were not able to automate the identification of spatial or spectral thresholding values. Imagery was acquired concurrently with ground counts of animal numbers. Across the eight pastures, 82% of the animals were correctly identified, mean percent commission was 53%, and mean percent omission was 18%. The high commission error was due to small mis-alignments generated from image-to-image registration, misidentified shadows, and grouping behavior of animals. The high probability of correctly identifying animals suggests short time interval image differencing could provide a new technique to enumerate wild ungulates occupying grassland ecosystems, especially in isolated or difficult to access areas. To our knowledge, this was the first attempt to use standard change detection techniques to identify and enumerate large ungulates.
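    A hedged Python sketch of the single-day differencing idea is shown below: take the first principal component of each co-registered image, subtract, threshold, and count connected blobs as candidate animals. The threshold and minimum blob size are illustrative, not the heuristic values used in the study.

        # PCA-based single-day image differencing for counting candidate animals.
        import numpy as np
        from skimage.measure import label, regionprops

        def first_pc(img):
            """img: (rows, cols, bands) array; returns its first principal component image."""
            x = img.reshape(-1, img.shape[-1]).astype(float)
            x -= x.mean(axis=0)
            _, _, vt = np.linalg.svd(x, full_matrices=False)
            return (x @ vt[0]).reshape(img.shape[:2])

        def candidate_animals(img_t0, img_t1, thresh=3.0, min_pixels=4):
            diff = np.abs(first_pc(img_t0) - first_pc(img_t1))
            mask = diff > thresh * diff.std()            # illustrative threshold
            blobs = [r for r in regionprops(label(mask)) if r.area >= min_pixels]
            return len(blobs), mask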

  13. Saliency region selection in large aerial imagery using multiscale SLIC segmentation

    NASA Astrophysics Data System (ADS)

    Sahli, Samir; Lavigne, Daniel A.; Sheng, Yunlong

    2012-06-01

    Advances in sensing hardware such as GigE cameras and fast-growing data transmission capability create an imbalance between the amount of large scale aerial imagery and the means available for processing it. Selection of saliency regions can significantly reduce the prospecting time and computation cost for the detection of objects in large scale aerial imagery. We propose a new approach using the multiscale Simple Linear Iterative Clustering (SLIC) technique to compute the saliency regions. SLIC is fast at creating compact and uniform superpixels, based on distances in both color and geometric space. When a salient structure of an object is over-segmented by SLIC, a number of superpixels will follow the edges in the structure and therefore acquire irregular shapes. Thus, the deformation of superpixels betrays the presence of salient structures. We quantify the non-compactness of the superpixels as a salience measure, which is computed using the distance transform and the shape factor. To treat objects or object details of various sizes in an image, or multiscale images, we compute the SLIC segmentations and the salience measures at multiple scales with a set of predetermined superpixel sizes. The final saliency map is a sum of the salience measures obtained at multiple scales. The proposed approach is fast, requires no user-defined parameters, produces well defined salient regions at full resolution, and is adapted to multi-scale image processing.
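    The core idea can be sketched in Python with scikit-image's SLIC: superpixels whose shape deviates from a compact disc score as more salient, and the scores are summed over several superpixel sizes. The scales and the isoperimetric shape factor used here are illustrative stand-ins for the authors' distance-transform-based measure.

        # Multiscale SLIC saliency sketch: non-compact superpixels score as salient.
        import numpy as np
        from skimage.segmentation import slic
        from skimage.measure import regionprops

        def saliency_map(rgb, n_segments_per_scale=(500, 1500, 4500)):
            sal = np.zeros(rgb.shape[:2])
            for n_seg in n_segments_per_scale:
                labels = slic(rgb, n_segments=n_seg, compactness=10, start_label=1)
                for r in regionprops(labels):
                    # Isoperimetric shape factor: 1 for a disc, smaller for irregular shapes
                    factor = 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-6)
                    sal[labels == r.label] += 1.0 - min(factor, 1.0)
            return sal / len(n_segments_per_scale)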

  14. Integration, Testing, and Analysis of Multispectral Imager on Small Unmanned Aerial System for Skin Detection

    DTIC Science & Technology

    2014-03-01

    1.8 Preview: Chapter II explores the background literature for skin detection, SUAS, methods for vision processing, and metrics for imagery in the subpixel domain (Morales, 2012). Proper spatial pixel density is essential for a sensor and for target detection methods; for automated spectral target detection, the equation has been modified to work with aberrated imagery (Thurman & Fienup, 2010).

  15. Influence of Gsd for 3d City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling and for decision making to set a mark in urban development. MOMRA is responsible for large scale mapping at the 1:1,000; 1:2,500; 1:10,000 and 1:20,000 scales for 10 cm, 20 cm and 40 cm GSD with Aerial Triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization capabilities and animation support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses Aerial Triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from its uneven surfaces and undulations. Thus, real time 3D visualization and interactive exploration support planning processes by providing multiple stakeholders such as decision makers, architects, urban planners, authorities, citizens or investors with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities to deal with exotic conditions through better and more advanced viewing technological infrastructure. Riyadh on one side is 5700 m above sea level while Abha city on the other is at 2300 m; this uneven terrain represents a drastic change of surface in the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities. In this research paper, the influence of different GSD (Ground Sample Distance) aerial imagery with Aerial Triangulation is used for 3D visualization in different regions of the Kingdom, to check which scale is more suitable for obtaining better results and is cost manageable, with GSD (7.5cm, 10cm, 20cm and 40cm
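    As background, the relation between GSD, pixel pitch, focal length and flying height can be sketched as follows; the camera parameters in the example are placeholders, not MOMRA's actual sensor.

        # GSD for a frame camera: GSD = pixel_pitch * flying_height / focal_length.
        def ground_sample_distance(pixel_pitch_m, focal_length_m, flying_height_m):
            return pixel_pitch_m * flying_height_m / focal_length_m

        # Example: a 6 micrometre pixel and a 100 mm lens give ~10 cm GSD at ~1667 m AGL.
        print(ground_sample_distance(6e-6, 0.10, 1667))  # ~0.10 m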

  16. Automated detection and mapping of crown discolouration caused by jack pine budworm with 2.5 m resolution multispectral imagery

    NASA Astrophysics Data System (ADS)

    Leckie, Donald G.; Cloney, Ed; Joyce, Steve P.

    2005-05-01

    Jack pine budworm (Choristoneura pinus pinus (Free.)) is a native insect defoliator of mainly jack pine (Pinus banksiana Lamb.) in North America east of the Rocky Mountains. Periodic outbreaks of this insect, which generally last two to three years, can cause growth loss and mortality and have an important impact ecologically and economically in terms of timber production and harvest. The jack pine budworm prefers to feed on current year needles. Their characteristic feeding habits cause discolouration or reddening of the canopy. This red colouration is used to map the distribution and intensity of defoliation that has taken place that year (current defoliation). An accurate and consistent map of the distribution and intensity of budworm defoliation (as represented by the red discolouration) at the stand and within stand level is desirable. Automated classification of multispectral imagery, such as is available from airborne and new high resolution satellite systems, was explored as a viable tool for objectively classifying current discolouration. Airborne multispectral imagery was acquired at a 2.5 m resolution with the Multispectral Electro-optical Imaging Sensor (MEIS). It recorded imagery in six nadir looking spectral bands specifically designed to detect discolouration caused by budworm; a near-infrared band viewing forward at 35° was also used. A 2200 nm middle infrared image was acquired with a Daedalus scanner. Training and test areas of different levels of discolouration were created based on field observations, and a maximum likelihood supervised classification was used to estimate four classes of discolouration (nil-trace, light, moderate and severe). Good discrimination was achieved with an overall accuracy of 84% for the four discolouration levels. The moderate discolouration class was the poorest at 73%, because of confusion with both the severe and light classes. Accuracy on a stand basis was also good, and regional and within stand

  17. Spatially explicit rangeland erosion monitoring using high-resolution digital aerial imagery

    USGS Publications Warehouse

    Gillan, Jeffrey K.; Karl, Jason W.; Barger, Nichole N.; Elaksher, Ahmed; Duniway, Michael C.

    2016-01-01

    Nearly all of the ecosystem services supported by rangelands, including production of livestock forage, carbon sequestration, and provisioning of clean water, are negatively impacted by soil erosion. Accordingly, monitoring the severity, spatial extent, and rate of soil erosion is essential for long-term sustainable management. Traditional field-based methods of monitoring erosion (sediment traps, erosion pins, and bridges) can be labor intensive and therefore are generally limited in spatial intensity and/or extent. There is a growing effort to monitor natural resources at broad scales, which is driving the need for new soil erosion monitoring tools. One remote-sensing technique that can be used to monitor soil movement is a time series of digital elevation models (DEMs) created using aerial photogrammetry methods. By geographically coregistering the DEMs and subtracting one surface from the other, an estimate of soil elevation change can be created. Such analysis enables spatially explicit quantification and visualization of net soil movement including erosion, deposition, and redistribution. We constructed DEMs (12-cm ground sampling distance) on the basis of aerial photography immediately before and 1 year after a vegetation removal treatment on a 31-ha Piñon-Juniper woodland in southeastern Utah to evaluate the use of aerial photography in detecting soil surface change. On average, we were able to detect surface elevation change of ± 8−9cm and greater, which was sufficient for the large amount of soil movement exhibited on the study area. Detecting more subtle soil erosion could be achieved using the same technique with higher-resolution imagery from lower-flying aircraft such as unmanned aerial vehicles. DEM differencing and process-focused field methods provided complementary information and a more complete assessment of soil loss and movement than any single technique alone. Photogrammetric DEM differencing could be used as a technique to
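    A minimal Python sketch of the DEM-differencing step is given below: subtract the before surface from the after surface and mask changes smaller than the detection limit (the study reports roughly ± 8-9 cm for its 12 cm DEMs).

        # DEM of difference with a detection-limit mask; the limit value here is
        # taken from the range reported in the abstract.
        import numpy as np

        def dem_difference(dem_after, dem_before, detection_limit=0.09):
            dod = dem_after - dem_before                     # DEM of difference (m)
            significant = np.abs(dod) >= detection_limit     # ignore sub-threshold noise
            erosion = np.where(significant & (dod < 0), dod, 0.0)
            deposition = np.where(significant & (dod > 0), dod, 0.0)
            return dod, erosion, deposition

        # Net volume change (m^3) for square cells: (erosion + deposition).sum() * cell_area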

  18. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

    Production of a digital terrain model (DTM) is one of the most common tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of the DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, the assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and its data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs were used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures such as RMSE, median, normalized median absolute deviation with their confidence intervals, and quantiles are computed. The completeness of the DTM is a very often overlooked quality parameter, but when the DTM is produced from a point cloud it should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show the great potential of DTMs produced from UAS imagery, in the sense of detailed representation of the terrain as well as good height accuracy.
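    The robust accuracy measures mentioned above can be computed from DTM-minus-checkpoint height errors as in the following Python sketch (Höhle-style statistics); the quantile levels are illustrative.

        # Robust vertical-accuracy statistics for a set of DTM height errors.
        import numpy as np

        def dtm_accuracy(errors):
            e = np.asarray(errors, dtype=float)
            rmse = np.sqrt(np.mean(e**2))
            med = np.median(e)
            nmad = 1.4826 * np.median(np.abs(e - med))   # robust analogue of the std. dev.
            q68, q95 = np.quantile(np.abs(e), [0.683, 0.95])
            return {"RMSE": rmse, "median": med, "NMAD": nmad, "Q68": q68, "Q95": q95}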

  19. Unsupervised building detection from irregularly spaced LiDAR and aerial imagery

    NASA Astrophysics Data System (ADS)

    Shorter, Nicholas Sven

    As more data sources containing 3-D information become available, an increased interest in 3-D imaging has emerged. Among these applications is the 3-D reconstruction of buildings and other man-made structures. A necessary preprocessing step is the detection and isolation of individual buildings that can subsequently be reconstructed in 3-D using various methodologies. Both building detection and reconstruction have commercial applications in urban planning, network planning for mobile communication (cell phone tower placement), spatial analysis of air pollution and noise nuisances, microclimate investigations, geographical information systems, security services and change detection in areas affected by natural disasters. Building detection and reconstruction are also used in the military for automatic target recognition and in entertainment for virtual tourism. Previously proposed building detection and reconstruction algorithms solely utilized aerial imagery. With the advent of Light Detection and Ranging (LiDAR) systems providing elevation data, current algorithms explore using captured LiDAR data as an additional feasible source of information. Additional sources of information can lead to automating techniques (alleviating their need for manual user intervention) as well as increasing their capabilities and accuracy. Several building detection approaches surveyed in the open literature have fundamental weaknesses that hinder their use, such as requiring multiple data sets from different sensors, mandating that certain operations be carried out manually, and being limited to detecting only certain types of buildings. In this work, a building detection system is proposed and implemented which strives to overcome the limitations seen in existing techniques. The developed framework is flexible in that it can perform building detection from just LiDAR data (first or last return), or just nadir, color aerial imagery. If data from both LiDAR and

  20. Comparison of different detection methods for citrus greening disease based on airborne multispectral and hyperspectral imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Citrus greening or Huanglongbing (HLB) is a devastating disease spread in many citrus groves since first found in 2005 in Florida. Multispectral (MS) and hyperspectral (HS) airborne images of citrus groves in Florida were taken to detect citrus greening infected trees in 2007 and 2010. Ground truthi...

  1. Detection algorithm for cracks on the surface of tomatoes using Multispectral Vis/NIR Reflectance Imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Tomatoes, an important agricultural product in fresh-cut markets, are sometimes a source of foodborne illness, mainly Salmonella spp. Growth cracks on tomatoes can be a pathway for bacteria, so its detection prior to consumption is important for public health. In this study, multispectral Visible/Ne...

  2. Use of multispectral satellite imagery and hyperspectral endmember libraries for urban land cover mapping at the metropolitan scale

    NASA Astrophysics Data System (ADS)

    Priem, Frederik; Okujeni, Akpona; van der Linden, Sebastian; Canters, Frank

    2016-10-01

    The value of characteristic reflectance features for mapping urban materials has been demonstrated in many experiments with airborne imaging spectrometry. Analysis of larger areas requires satellite-based multispectral imagery, which typically lacks the spatial and spectral detail of airborne data. Consequently the need arises to develop mapping methods that exploit the complementary strengths of both data sources. In this paper a workflow for sub-pixel quantification of Vegetation-Impervious-Soil urban land cover is presented, using medium resolution multispectral satellite imagery, hyperspectral endmember libraries and Support Vector Regression. A Landsat 8 Operational Land Imager surface reflectance image covering the greater metropolitan area of Brussels is selected for mapping. Two spectral libraries developed for the cities of Brussels and Berlin based on airborne hyperspectral APEX and HyMap data are used. First the combined endmember library is resampled to match the spectral response of the Landsat sensor. The library is then optimized to avoid spectral redundancy and confusion. Subsequently the spectra of the endmember library are synthetically mixed to produce training data for unmixing. Mapping is carried out using Support Vector Regression models trained with spectra selected through stratified sampling of the mixed library. Validation on building block level (mean size = 46.8 Landsat pixels) yields an overall good fit between reference data and estimation with Mean Absolute Errors of 0.06, 0.06 and 0.08 for vegetation, impervious and soil respectively. Findings of this work may contribute to the use of universal spectral libraries for regional scale land cover fraction mapping using regression approaches.
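    A hedged Python sketch of the library-based workflow is shown below: library endmember spectra (already resampled to the Landsat bands) are synthetically mixed in pairs and a Support Vector Regression model is trained to predict the fraction of one cover type per pixel. The class names and mixing settings are illustrative, not the paper's exact configuration.

        # Synthetic mixing of library spectra followed by SVR fraction estimation.
        import numpy as np
        from sklearn.svm import SVR

        def synthetic_mixtures(lib_spectra, lib_is_target, n_samples=2000, rng=None):
            """lib_spectra: (n_endmembers, n_bands); lib_is_target: boolean per endmember."""
            rng = np.random.default_rng(rng)
            X, y = [], []
            for _ in range(n_samples):
                i, j = rng.choice(len(lib_spectra), size=2, replace=False)
                f = rng.random()                              # mixing fraction
                X.append(f * lib_spectra[i] + (1 - f) * lib_spectra[j])
                y.append(f * lib_is_target[i] + (1 - f) * lib_is_target[j])
            return np.array(X), np.array(y)

        # Hypothetical usage with a resampled library and per-endmember class labels:
        # X, y = synthetic_mixtures(library, library_class == "impervious")
        # model = SVR(kernel="rbf", C=10, epsilon=0.05).fit(X, y)
        # fractions = model.predict(image_pixels)            # one fraction per pixel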

  3. Identification of wild areas in southern lower Michigan. [terrain analysis from aerial photography, and satellite imagery

    NASA Technical Reports Server (NTRS)

    Habowski, S.; Cialek, C.

    1978-01-01

    An inventory methodology was developed to identify potential wild area sites. A list of site criteria was formulated and tested in six selected counties. Potential sites were initially identified from LANDSAT satellite imagery. A detailed study of the soil, vegetation and relief characteristics of each site based on both high-altitude aerial photographs and existing map data was conducted to eliminate unsuitable sites. Ground reconnaissance of the remaining wild areas was made to verify suitability and acquire information on wildlife and general aesthetics. Physical characteristics of the wild areas in each county are presented in tables. Maps show the potential sites to be set aside for natural preservation and regulation by the state under the Wilderness and Natural Areas Act of 1972.

  4. Application of multispectral radar and LANDSAT imagery to geologic mapping in death valley

    NASA Technical Reports Server (NTRS)

    Daily, M.; Elachi, C.; Farr, T.; Stromberg, W.; Williams, S.; Schaber, G.

    1978-01-01

    Side-Looking Airborne Radar (SLAR) images, acquired by JPL and Strategic Air Command Systems, and visible and near-infrared LANDSAT imagery were applied to studies of the Quaternary alluvial and evaporite deposits in Death Valley, California. Unprocessed radar imagery revealed considerable variation in microwave backscatter, generally correlated with surface roughness. For Death Valley, LANDSAT imagery is of limited value in discriminating the Quaternary units except for alluvial units distinguishable by presence or absence of desert varnish or evaporite units whose extremely rough surfaces are strongly shadowed. In contrast, radar returns are most strongly dependent on surface roughness, a property more strongly correlated with surficial geology than is surface chemistry.

  5. A multispectral scanner survey of the Rocky Flats Environmental Technology Site and surrounding area, Golden, Colorado

    SciTech Connect

    Brewster, S.B. Jr.; Brickey, D.W.; Ross, S.L.; Shines, J.E.

    1997-04-01

    Aerial multispectral scanner imagery of the Rocky Flats Environmental Technology Site in Golden, Colorado, was collected on June 3, 5, 6, and 7, 1994, using a Daedalus AADS1268 multispectral scanner, along with coincident aerial color and color infrared photography. Flight altitudes were 4,500 feet (1372 meters) above ground level to match prior 1989 survey data; 2,000 feet (609 meters) above ground level for sitewide vegetation mapping; and 1,000 feet (304 meters) above ground level for selected areas of special interest. The multispectral survey was initiated to improve the existing vegetation classification map, to identify seeps and springs, and to generate ARC/INFO Geographic Information System compatible coverages of the vegetation and wetlands for the entire site, including the buffer zone. The multispectral scanner imagery and coincident aerial photography were analyzed for the detection, identification, and mapping of vegetation and wetlands. The multispectral scanner data were processed digitally, while the color and color infrared photography were manually photo-interpreted to define vegetation and wetlands. Several standard image enhancement techniques were applied to the multispectral scanner data to assist image interpretation. A seep enhancement was applied, and a color composite consisting of multispectral scanner channels 11, 7, and 5 (thermal infrared, mid-infrared, and red bands, respectively) proved most useful for detecting seeps, seep zones, and springs. The predawn thermal infrared data were also useful in identifying and locating seeps. The remote sensing data, mapped wetlands, and ancillary Geographic Information System compatible data sets were spatially analyzed for seeps.

  6. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California

    PubMed Central

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R.

    2017-01-01

    Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regulated quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. PMID:28241028

  7. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California.

    PubMed

    Boyda, Edward; Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R

    2017-01-01

    Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regulated quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA.

  8. Use and Assessment of Multi-Spectral Satellite Imagery in NWS Operational Forecasting Environments

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Fuell, Kevin; Stano, Geoffrey; McGrath, Kevin; Schultz, Lori; LeRoy, Anita

    2015-01-01

    NOAA's Satellite Proving Grounds have established partnerships between product developers and NWS Weather Forecast Offices (WFOs) for the evaluation of new capabilities from the GOES-R and JPSS satellite systems. SPoRT has partnered with various WFOs to evaluate multispectral (RGB) products from MODIS, VIIRS, and Himawari/AHI in preparation for GOES-R/ABI, assisted through partnerships with GINA, UW/CIMSS, NOAA, and NASA Direct Broadcast capabilities.

  9. Landslide Identification and Information Extraction Based on Optical and Multispectral UAV Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Lin, Jiayuan; Wang, Meimei; Yang, Jia; Yang, Qingxia

    2017-02-01

    Landslides are among the most serious natural disasters, causing enormous economic losses and casualties around the world. Fast and accurate identification of newly occurred landslides and extraction of the relevant information are the premise and foundation for landslide disaster assessment and relief. The places where landslides occur are often inaccessible for field observation because of temporary failures in transportation and communication. UAV remote sensing can therefore be adopted to collect landslide information efficiently and quickly, with the advantages of low cost, flexible launch and landing, safety, under-cloud flying, and hyperspatial image resolution. Newly occurred landslides are usually accompanied by phenomena such as buried vegetation and exposed bedrock or bare soil, which can be easily detected in optical or multispectral UAV images. Taking one typical landslide that occurred in the Wenchuan Earthquake stricken area in 2010 as an example, this paper demonstrates the process of integrating a multispectral camera with a UAV platform, generating NDVI from multispectral UAV images, generating three-dimensional terrain and orthophotos from optical UAV images, and identifying and extracting landslide information such as its location, impacted area, and earthwork volume.
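    A minimal Python sketch of the NDVI step is given below; newly exposed soil and rock in a landslide scar show markedly lower NDVI than surrounding vegetation, so a simple threshold (illustrative value) flags candidate landslide pixels.

        # NDVI from near-infrared and red bands, plus an illustrative low-NDVI mask.
        import numpy as np

        def ndvi(nir, red):
            nir, red = nir.astype(float), red.astype(float)
            return (nir - red) / (nir + red + 1e-9)

        def landslide_mask(nir, red, threshold=0.2):
            return ndvi(nir, red) < threshold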

  10. [Study on artificial neural network combined with multispectral remote sensing imagery for forest site evaluation].

    PubMed

    Gong, Yin-Xi; He, Cheng; Yan, Fei; Feng, Zhong-Ke; Cao, Meng-Lei; Gao, Yuan; Miao, Jie; Zhao, Jin-Long

    2013-10-01

    Multispectral remote sensing data containing rich site information are not fully used by the classic site quality evaluation system, as it merely adopts manual ground survey data. In order to establish a more effective site quality evaluation system, a neural network model which combines remote sensing spectral factors with site factors and site index relations was established and used to study sublot site quality evaluation in the Wangyedian Forest Farm, Chifeng City, Inner Mongolia. Based on an improved back propagation artificial neural network (BPANN), this model combined multispectral remote sensing data with sublot survey data, taking larch as an example. Through sensitivity analysis of the training data set, weak or irrelevant factors were excluded, the size of the neural network was simplified, and the efficiency of network training was improved. The optimal site index prediction model had an accuracy of up to 95.36%, 9.83% higher than that of the neural network model based on classic sublot survey data alone, showing that combining multispectral remote sensing with sublot survey data yields the most accurate larch site index prediction model. The results fully indicate the effectiveness and superiority of this method.
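    A hedged Python sketch of the general approach, not the authors' improved BPANN, is shown below, using scikit-learn's MLPRegressor to map band reflectances and survey factors to a site index; the feature list and the random arrays are placeholders standing in for real training data.

        # Back-propagation regression of site index from spectral and survey factors.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Placeholder arrays: rows of [band1, ..., bandN, elevation, slope, ...] and
        # the observed site index per sublot would replace these in practice.
        X = np.random.rand(200, 8)
        y = np.random.rand(200) * 10 + 10

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                           random_state=0))
        model.fit(X, y)
        predicted_site_index = model.predict(X[:5])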

  11. Assessment of Unmanned Aerial Vehicles Imagery for Quantitative Monitoring of Wheat Crop in Small Plots

    PubMed Central

    Lelong, Camille C. D.; Burger, Philippe; Jubelin, Guillaume; Roux, Bruno; Labbé, Sylvain; Baret, Frédéric

    2008-01-01

    This paper outlines how light Unmanned Aerial Vehicles (UAVs) can be used in remote sensing for precision farming. It focuses on the combination of simple digital photographic cameras with spectral filters, designed to provide multispectral images in the visible and near-infrared domains. In 2005, these instruments were fitted to a powered glider and a parachute, and flown on six dates staggered over the crop season. We monitored ten varieties of wheat, grown in trial micro-plots in the South-West of France. For each date, we acquired multiple views in four spectral bands corresponding to blue, green, red, and near-infrared. We then performed accurate corrections of image vignetting, geometric distortions, and radiometric bidirectional effects. Afterwards, we derived for each experimental micro-plot several vegetation indexes relevant for vegetation analyses. Finally, we sought relationships between these indexes and field-measured biophysical parameters, both generic and date-specific. In this way, we established a robust and stable generic relationship between, on the one hand, leaf area index and NDVI and, on the other hand, nitrogen uptake and GNDVI. Due to a high amount of noise in the data, it was not possible to obtain a more accurate model for each date independently. A validation protocol showed that we could expect a precision level of 15% in the estimation of biophysical parameters when using these relationships. PMID:27879893

  12. Assessment of Unmanned Aerial Vehicles Imagery for Quantitative Monitoring of Wheat Crop in Small Plots.

    PubMed

    Lelong, Camille C D; Burger, Philippe; Jubelin, Guillaume; Roux, Bruno; Labbé, Sylvain; Baret, Frédéric

    2008-05-26

    This paper outlines how light Unmanned Aerial Vehicles (UAVs) can be used in remote sensing for precision farming. It focuses on the combination of simple digital photographic cameras with spectral filters, designed to provide multispectral images in the visible and near-infrared domains. In 2005, these instruments were fitted to a powered glider and a parachute, and flown on six dates staggered over the crop season. We monitored ten varieties of wheat, grown in trial micro-plots in the South-West of France. For each date, we acquired multiple views in four spectral bands corresponding to blue, green, red, and near-infrared. We then performed accurate corrections of image vignetting, geometric distortions, and radiometric bidirectional effects. Afterwards, we derived for each experimental micro-plot several vegetation indexes relevant for vegetation analyses. Finally, we sought relationships between these indexes and field-measured biophysical parameters, both generic and date-specific. In this way, we established a robust and stable generic relationship between, on the one hand, leaf area index and NDVI and, on the other hand, nitrogen uptake and GNDVI. Due to a high amount of noise in the data, it was not possible to obtain a more accurate model for each date independently. A validation protocol showed that we could expect a precision level of 15% in the estimation of biophysical parameters when using these relationships.

  13. Using aerial video to train the supervised classification of Landsat TM imagery for coral reef habitats mapping.

    PubMed

    Bello-Pineda, J; Liceaga-Correa, M A; Hernández-Núñez, H; Ponce-Hernández, R

    2005-06-01

    Management of coral reef resources is a challenging task, in many cases because of the scarcity or absence of accurate sources of information and maps. Remote sensing is a non-intrusive but powerful tool that has been successfully used for the assessment and mapping of natural resources in coral reef areas. In this study we utilized GIS to combine Landsat TM imagery, aerial photography, aerial video and a digital bathymetric model to assess and map submerged habitats of Alacranes reef, Yucatán, México. Our main goal was to test the potential of aerial video as the source of data to produce training areas for the supervised classification of Landsat TM imagery. Submerged habitats were ecologically characterized using a hierarchical classification of field data. Habitats were identified on an overlaid image consisting of the three types of remote sensing products and the bathymetric model. Pixels representing those habitats were selected as training areas using GIS tools. Training areas were used to classify Landsat TM bands 1, 2 and 3 and the bathymetric model using a maximum likelihood algorithm. The resulting thematic map was compared against the field data classification to improve habitat definition. Contextual editing and reclassification were used to obtain the final thematic map with an overall accuracy of 77%. Analysis of aerial video by a specialist in coral reef ecology was found to be a suitable source of information to produce training areas for the supervised classification of Landsat TM imagery in coral reefs at a coarse scale.
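    The maximum likelihood step can be sketched in Python as follows: fit a Gaussian per habitat class from the video-derived training pixels, then assign each pixel to the class with the highest log-likelihood. The band count and class structure here are illustrative.

        # Gaussian maximum-likelihood classification from per-class training pixels.
        import numpy as np
        from scipy.stats import multivariate_normal

        def fit_classes(training):
            """training: dict class_name -> (n_pixels, n_bands) array of band values."""
            return {name: (vals.mean(axis=0), np.cov(vals, rowvar=False))
                    for name, vals in training.items()}

        def classify(pixels, class_stats):
            """pixels: (n_pixels, n_bands); returns an array of class names."""
            names = list(class_stats)
            loglik = np.column_stack([
                multivariate_normal.logpdf(pixels, mean=m, cov=c, allow_singular=True)
                for m, c in (class_stats[n] for n in names)])
            return np.array(names)[np.argmax(loglik, axis=1)]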

  14. Kite Aerial Photography for Low-Cost, Ultra-high Spatial Resolution Multi-Spectral Mapping of Intertidal Landscapes

    PubMed Central

    Bryson, Mitch; Johnson-Roberson, Matthew; Murphy, Richard J.; Bongiorno, Daniel

    2013-01-01

    Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales. PMID:24069206

  15. Kite aerial photography for low-cost, ultra-high spatial resolution multi-spectral mapping of intertidal landscapes.

    PubMed

    Bryson, Mitch; Johnson-Roberson, Matthew; Murphy, Richard J; Bongiorno, Daniel

    2013-01-01

    Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.

  16. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models

    NASA Astrophysics Data System (ADS)

    Abayowa, Bernard O.; Yilmaz, Alper; Hardie, Russell C.

    2015-08-01

    This paper presents a framework for automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud without prior knowledge of an initial alignment. The framework employs a coarse to fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize global error between the registered point clouds. The novelty of the proposed approach is in the computation of salient features from the DSMs, and the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and aerial datasets acquired in real urban environments. Results demonstrate the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud, when co-existing initial registration parameters are unavailable.
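
    The fine-registration step is standard point-to-point ICP. The compact sketch below (nearest neighbours via a k-d tree, closed-form SVD alignment) illustrates the general algorithm under the assumption that a coarse alignment has already been applied; iteration count and convergence handling are placeholders rather than the paper's settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Refine a rigid transform aligning source to target (both N x 3 arrays)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        # 1. Correspondences: nearest target point for each transformed source point
        _, idx = tree.query(src)
        tgt = target[idx]
        # 2. Closed-form rigid transform between matched centroids (Kabsch / SVD)
        cs, ct = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - cs).T @ (tgt - ct)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:          # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = ct - R_step @ cs
        # 3. Apply the incremental transform and accumulate the overall one
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```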

  17. Forest and land inventory using ERTS imagery and aerial photography in the boreal forest region of Alberta, Canada

    NASA Technical Reports Server (NTRS)

    Kirby, C. L.

    1974-01-01

    Satellite imagery and small-scale (1:120,000) infrared ektachrome aerial photography for the development of improved forest and land inventory techniques in the boreal forest region are presented to demonstrate spectral signatures and their application. The forest is predominately mixed, stands of white spruce and poplar, with some pure stands of black spruce, pine and large areas of poorly drained land with peat and sedge type muskegs. This work is part of coordinated program to evaluate ERTS imagery by the Canadian Forestry Service.

  18. Remote sensing for precision agriculture: Within-field spatial variability analysis and mapping with aerial digital multispectral images

    NASA Astrophysics Data System (ADS)

    Gopalapillai, Sreekala

    2000-10-01

    Advances in remote sensing technology and biological sensors provided the motivation for this study on the applications of aerial multispectral remote sensing in precision agriculture. The feasibility of using high-resolution multispectral remote sensing for precision farming applications such as soil type delineation, identification of crop nitrogen levels, and modeling and mapping of weed density distribution and yield potential within a crop field was explored in this study. Issues such as image calibration for variable lighting conditions and soil background influence were also addressed. Intensity normalization and band ratio methods were found to be adequate image calibration methods to compensate for variable illumination and soil background influence. Several within-field variability factors such as growth stage, field conditions, nutrient availability, crop cultivar, and plant population were found to be dominant in different periods. Unsupervised clustering of a color infrared (CIR) image of a field soil was able to identify soil mapping units with an average accuracy of 76%. Spectral reflectance from a crop field was highly correlated with the chlorophyll reading. A regression model developed to predict nitrogen stress in corn identified nitrogen-stressed areas from nitrogen-sufficient areas with high accuracy (R2 = 0.93). Weed density was highly correlated with the spectral reflectance from a field. One month after planting was found to be a good time to map spatial weed density. The optimum range of resolution for weed mapping was 4 m to 4.5 m for the remote sensing system and the experimental field used in this study. Analysis of spatial yield with respect to spectral reflectance showed that the visible and NIR reflectance were negatively correlated with yield and crop population in heavily weed-infested areas. The yield potential was highly correlated with image indices, especially with normalized brightness. The ANN model developed for one of the
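
    The intensity normalization and band ratio calibrations mentioned above amount to very simple per-pixel operations. The short sketch below shows one common form of each, assuming a CIR image stored as green, red and NIR arrays; it is illustrative rather than the dissertation's exact procedure.

```python
import numpy as np

def intensity_normalize(bands):
    """Divide each band by total brightness so each pixel's band values sum to one."""
    total = np.sum(bands, axis=0) + 1e-9
    return bands / total

def band_ratio(nir, red):
    """Simple NIR/red ratio, largely insensitive to the overall illumination level."""
    return nir / (red + 1e-9)

# bands: 3 x rows x cols stack of green, red and NIR digital numbers (placeholder data)
bands = np.random.rand(3, 128, 128)
normalized = intensity_normalize(bands)
ratio = band_ratio(bands[2], bands[1])
```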

  19. Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery

    SciTech Connect

    Bierwirth, P.N.; Lee, T.J.; Burne, R.V. (Michigan Environmental Research Inst., Ann Arbor)

    1993-03-01

    A major problem for mapping shallow water zones by the analysis of remotely sensed data is that contrast effects due to water depth obscure and distort the spectral nature of the substrate. This paper outlines a new method which unmixes the exponential influence of depth in each pixel by employing a mathematical constraint. This leaves a multispectral residual which represents relative substrate reflectance. Inputs to the process are the raw multispectral data and water attenuation coefficients derived from the co-analysis of known bathymetry and remotely sensed data. Outputs are substrate-reflectance images corresponding to the input bands and a greyscale depth image. The method has been applied in the analysis of Landsat TM data at Hamelin Pool in Shark Bay, Western Australia. Algorithm-derived substrate-reflectance images for Landsat TM bands 1, 2, and 3, combined in color, represent the optimum enhancement for mapping or classifying substrate types. As a result, this color image successfully delineated features which were obscured in the raw data, such as the distributions of sea-grasses, microbial mats, and sandy areas. 19 refs.
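
    The paper's constrained unmixing of the exponential depth term is not reproduced here, but the closely related depth-invariant bottom index of Lyzenga gives a feel for the idea: log-transformed water-penetrating bands are combined using the ratio of their attenuation coefficients so that most of the depth contrast cancels. Coefficients, deep-water offsets and data below are assumed placeholders.

```python
import numpy as np

def depth_invariant_index(band_i, band_j, k_i, k_j, deep_i=0.0, deep_j=0.0):
    """
    Depth-invariant bottom index from two water-penetrating bands.
    Radiance over water decays roughly exponentially with depth, so the
    log-transformed bands, combined with the attenuation-coefficient ratio,
    remove most of the depth contrast and leave relative substrate signal.
    """
    xi = np.log(np.clip(band_i - deep_i, 1e-6, None))
    xj = np.log(np.clip(band_j - deep_j, 1e-6, None))
    return xi - (k_i / k_j) * xj

# Illustrative use with Landsat TM bands 1 and 2 and assumed attenuation coefficients
tm1 = np.random.rand(256, 256) + 0.1
tm2 = np.random.rand(256, 256) + 0.1
index_12 = depth_invariant_index(tm1, tm2, k_i=0.08, k_j=0.12)
```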

  20. Outlier and target detection in aerial hyperspectral imagery: a comparison of traditional and percentage occupancy hit or miss transform techniques

    NASA Astrophysics Data System (ADS)

    Young, Andrew; Marshall, Stephen; Gray, Alison

    2016-05-01

    The use of aerial hyperspectral imagery for the purpose of remote sensing is a rapidly growing research area. Currently, targets are generally detected by looking for distinct spectral features of the objects under surveillance. For example, a camouflaged vehicle, deliberately designed to blend into background trees and grass in the visible spectrum, can be revealed using spectral features in the near-infrared spectrum. This work aims to develop improved target detection methods, using a two-stage approach, firstly by development of a physics-based atmospheric correction algorithm to convert radiance into reflectance hyperspectral image data and secondly by use of improved outlier detection techniques. In this paper the use of the Percentage Occupancy Hit or Miss Transform is explored to provide an automated method for target detection in aerial hyperspectral imagery.

  1. Assessing the accuracy of hyperspectral and multispectral satellite imagery for categorical and Quantitative mapping of salinity stress in sugarcane fields

    NASA Astrophysics Data System (ADS)

    Hamzeh, Saeid; Naseri, Abd Ali; AlaviPanah, Seyed Kazem; Bartholomeus, Harm; Herold, Martin

    2016-10-01

    This study evaluates the feasibility of hyperspectral and multispectral satellite imagery for categorical and quantitative mapping of salinity stress in sugarcane fields located in the southwest of Iran. For this purpose a Hyperion image acquired on September 2, 2010 and a Landsat7 ETM+ image acquired on September 7, 2010 were used as the hyperspectral and multispectral satellite imagery. Field data, including soil salinity in the sugarcane root zone, were collected at 191 locations in 25 fields during September 2010. In the first section of the paper, based on the yield potential of sugarcane as influenced by different soil salinity levels provided by FAO, soil salinity was classified into three classes, low salinity (1.7-3.4 dS/m), moderate salinity (3.5-5.9 dS/m) and high salinity (6-9.5 dS/m), by applying different classification methods including Support Vector Machine (SVM), Spectral Angle Mapper (SAM), Minimum Distance (MD) and Maximum Likelihood (ML) to the Hyperion and Landsat images. In the second part of the paper the performance of nine vegetation indices (eight indices from the literature and a new index developed in this study) extracted from Hyperion and Landsat data was evaluated for quantitative mapping of salinity stress. The experimental results indicated that for categorical classification of salinity stress, Landsat data resulted in a higher overall accuracy (OA) and Kappa coefficient (KC) than Hyperion, of which the MD classifier using all bands or PCA (1-5) as an input performed best with an overall accuracy and kappa coefficient of 84.84% and 0.77 respectively. Conversely, for the quantitative estimation of salinity stress, Hyperion outperformed Landsat. In this case, the salinity and water stress index (SWSI) gave the best prediction of salinity stress with an R2 of 0.68 and RMSE of 1.15 dS/m for Hyperion, followed by Landsat data with an R2 and RMSE of 0.56 and 1.75 dS/m respectively. It was concluded that categorical mapping of salinity stress is the best option
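
    The quantitative evaluation of an index against measured root-zone salinity reduces to a simple regression with R2 and RMSE reporting. The sketch below shows that evaluation step only; the SWSI formulation is not reproduced, and the arrays stand in for per-sample index values and field EC measurements.

```python
import numpy as np

def evaluate_index(index_vals, salinity_dsm):
    """Fit salinity = a * index + b and report R^2 and RMSE of the fit."""
    a, b = np.polyfit(index_vals, salinity_dsm, deg=1)
    pred = a * index_vals + b
    ss_res = np.sum((salinity_dsm - pred) ** 2)
    ss_tot = np.sum((salinity_dsm - salinity_dsm.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((salinity_dsm - pred) ** 2))
    return r2, rmse

# Placeholder index values at the 191 sample points vs. measured soil salinity (dS/m)
index_vals = np.random.rand(191)
salinity = 2 + 6 * np.random.rand(191)
r2, rmse = evaluate_index(index_vals, salinity)
print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f} dS/m")
```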

  2. Multispectral Thermal Imagery and Its Application to the Geologic Mapping of the Koobi Fora Formation, Northwestern Kenya

    SciTech Connect

    Green, Mary K.

    2005-12-01

    The Koobi Fora Formation in northwestern Kenya has yielded more hominin fossils dated between 2.1 and 1.2 Ma than any other location on Earth. This research was undertaken to discover the spectral signatures of a portion of the Koobi Fora Formation using imagery from the DOE's Multispectral Thermal Imager (MTI) satellite. Creation of a digital geologic map from MTI imagery was a secondary goal of this research. MTI is unique amongst multispectral satellites in that it co-collects data from 15 spectral bands ranging from the visible to the thermal infrared, with a ground sample distance of 5 meters per pixel in the visible and 20 meters in the infrared. The map was created in two stages. The first was to correct the base MTI image using spatial accuracy assessment points collected in the field. The second was to mosaic various MTI images together to create the final Koobi Fora map. The absolute spatial accuracy of the final map product is 73 meters. The geologic classification of the Koobi Fora MTI map also took place in two stages. The field work stage involved locating outcrops of different lithologies within the Koobi Fora Formation. Field descriptions of these outcrops were made and their locations recorded. During the second stage, a linear spectral unmixing algorithm was applied to the MTI mosaic. In order to train the linear spectral unmixing algorithm, regions of interest representing four different classes of geologic material (tuff, alluvium, carbonate, and basalt), as well as a vegetation class, were defined within the MTI mosaic. The regions of interest were based upon the aforementioned field data as well as overlays of geologic maps from the 1976 Iowa State mapping project. Pure spectra were generated for each class from the regions of interest, and then the unmixing algorithm classified each pixel according to the relative percentage of each class found within the pixel, based upon the pure spectra. A total of four unique combinations of geologic classes
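
    Linear spectral unmixing of this kind solves, per pixel, for class abundances given a matrix of pure endmember spectra. The sketch below uses non-negative least squares as a stand-in for whichever solver the study actually used, with abundances renormalized to sum to one; the endmember and pixel spectra are placeholders.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers):
    """
    Estimate per-class abundances for one pixel.
    endmembers: bands x classes matrix of pure spectra.
    Non-negative least squares keeps abundances >= 0; they are then
    renormalized so the fractions sum to one.
    """
    abundances, _ = nnls(endmembers, pixel)
    s = abundances.sum()
    return abundances / s if s > 0 else abundances

# 15 MTI bands, 5 classes (tuff, alluvium, carbonate, basalt, vegetation) -- values are placeholders
endmembers = np.random.rand(15, 5)
pixel = np.random.rand(15)
fractions = unmix_pixel(pixel, endmembers)
```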

  3. Fusion of LIDAR Data and Multispectral Imagery for Effective Building Detection Based on Graph and Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Gilani, S. A. N.; Awrangjeb, M.; Lu, G.

    2015-03-01

    Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data is divided into two groups, ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees) and the white region describes the ground (earth). The second phase begins with Connected Component Analysis (CCA), in which the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and any neighbouring black pixel; no edge is defined between a black pixel and a neighbouring white pixel. This produces a graph of disconnected components, where each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from the multispectral imagery, around the graph components, where possible. In the fourth phase, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and isolated occluded parts of buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards building edges. The proposed technique is evaluated using two Australian data sets
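
    The connected component step over the primary building mask can be illustrated with a standard labelling routine; scipy's image labelling is used here as a stand-in for the paper's CCA, and the toy mask, connectivity choice and boundary extraction are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

# Primary building mask: True ("black" in the paper's convention) marks elevated LiDAR returns
mask = np.zeros((200, 200), dtype=bool)
mask[20:60, 30:90] = True        # hypothetical building footprint
mask[120:180, 100:150] = True    # hypothetical dense-vegetation block

# Connected component analysis: 8-connectivity so diagonal neighbours share an edge
structure = np.ones((3, 3), dtype=int)
labels, n_objects = ndimage.label(mask, structure=structure)
print(f"{n_objects} candidate objects (buildings or dense vegetation)")

# Initial boundary detection per component: object pixels that touch a non-object pixel
for obj_id in range(1, n_objects + 1):
    component = labels == obj_id
    interior = ndimage.binary_erosion(component, structure=structure)
    boundary = component & ~interior
    print(f"object {obj_id}: {component.sum()} pixels, {boundary.sum()} boundary pixels")
```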

  4. Use of Vis-SWIR imagery to aid atmospheric correction of multispectral and hyperspectral thermal infrared TIR imagery: The TIR model

    NASA Astrophysics Data System (ADS)

    Gruninger, John H.; Fox, Marsha J.; Lee, Jamine; Ratkowski, Anthony J.; Hoke, Michael L.

    2002-11-01

    The atmospheric correction of thermal infrared (TIR) imagery involves the combined tasks of separating atmospheric transmittance, downwelling flux and upwelling radiance from the surface material spectral emissivity and temperature. The problem is ill posed and is thus hampered by spectral ambiguity among several feasible combinations of atmospheric temperature, constituent profiles, and surface material emissivities and temperatures. For many materials, their reflectance spectra in the Vis-SWIR provide a means of identification, or at least classification into generic material types (vegetation, soil, etc.). If Vis-SWIR data can be registered to TIR data, or collected simultaneously as in sensors like MASTER, then the additional information on material type can be utilized to help reduce the ambiguities in the TIR data. If the Vis-SWIR and TIR are collected simultaneously, the water column amounts obtained from the atmospheric correction of the Vis-SWIR can also be utilized in reducing the ambiguity in the atmospheric quantities. The TIR atmospheric correction involves expansions in atmospheric and material emissivity basis sets. The method can be applied to hyperspectral and ultraspectral data; however, it is particularly useful for multispectral TIR, where spectral smoothness techniques cannot be readily applied. The algorithm is described, and the approach is applied to a MASTER sensor data set.

  5. Geomorphological relationships through the use of 2-D seismic reflection data, Lidar, and aerial imagery

    NASA Astrophysics Data System (ADS)

    Alesce, Meghan Elizabeth

    Barrier islands are crucial in protecting coastal environments. This study focuses on Dauphin Island, Alabama, located within the Northern Gulf of Mexico (NGOM) barrier island complex. It is one of many islands serving as natural protection for NGOM ecosystems and coastal cities. The NGOM barrier islands formed about 4 kya in response to a decrease in the rate of sea level rise. The morphology of these islands changes with hurricanes, anthropogenic activity, and tidal and wave action. This study focuses on ancient incised valleys and their impact on island morphology and hurricane breaches. Using high frequency 2-D seismic reflection data, four horizons, including the present seafloor, were interpreted. Subaerial portions of Dauphin Island were imaged using Lidar data and aerial imagery over a ten-year time span, as well as historical maps. Historical shorelines of Dauphin Island were extracted from aerial imagery and historical maps, and were compared to the location of incised valleys seen within the 2-D seismic reflection data. Erosion and deposition volumes of Dauphin Island from 1998 to 2010 (the time span covering hurricanes Ivan and Katrina) in the vicinity of Katrina Cut and Pelican Island were quantified using Lidar data. For the time period prior to Hurricane Ivan, an erosional volume of 46,382,552 m3 and a depositional volume of 16,113.6 m3 were quantified from Lidar data. The effects of Hurricane Ivan produced a total erosion volume of 4,076,041.5 m3. The erosional and depositional volumes of Katrina Cut were 7,562,068.5 m3 and 510,936.7 m3, respectively. More volume change was found within Pelican Pass. For the period between hurricanes Ivan and Katrina the erosion volume was 595,713.8 m3. This was mostly located within Katrina Cut. Total deposition for the same period, including in Pelican Pass, was 15,353,961 m3. Hurricane breaches were compared to ancient incised valleys seen within the 2-D seismic reflection results. Breaches from hurricanes from 1849

  6. Mapping of riparian invasive species with supervised classification of Unmanned Aerial System (UAS) imagery

    NASA Astrophysics Data System (ADS)

    Michez, Adrien; Piégay, Hervé; Jonathan, Lisein; Claessens, Hugues; Lejeune, Philippe

    2016-02-01

    Riparian zones are key landscape features, representing the interface between terrestrial and aquatic ecosystems. Although they have been influenced by human activities for centuries, their degradation has increased during the 20th century. Concomitant with (or as consequences of) these disturbances, the invasion of exotic species has increased throughout the world's riparian zones. In our study, we propose an easily reproducible methodological framework to map three riparian invasive taxa using Unmanned Aerial Systems (UAS) imagery: Impatiens glandulifera Royle, Heracleum mantegazzianum Sommier and Levier, and Japanese knotweed (Fallopia sachalinensis (F. Schmidt Petrop.), Fallopia japonica (Houtt.) and hybrids). Based on a visible and near-infrared UAS orthophoto, we derived simple spectral and texture image metrics computed at various scales of image segmentation (scale parameters of 10, 30, 45, and 60, using eCognition software). Supervised classification based on the random forests algorithm was used to identify the most relevant variable (or combination of variables) derived from UAS imagery for mapping riparian invasive plant species. The models were built using 20% of the dataset, with the rest of the dataset used as a test set (80%). Except for H. mantegazzianum, the best results in terms of global accuracy were achieved with the finest scale of analysis (segmentation scale parameter = 10). The best values of overall accuracy reached 72%, 68%, and 97% for I. glandulifera, Japanese knotweed, and H. mantegazzianum respectively. In terms of selected metrics, simple spectral metrics (layer mean/camera brightness) were the most frequently used. Our results also confirm the added value of texture metrics (GLCM derivatives) for mapping riparian invasive species. The results obtained for I. glandulifera and Japanese knotweed do not reach sufficient accuracies for operational applications. However, the results achieved for H. mantegazzianum are encouraging. The high accuracy values combined to
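
    A random forest classification with the 20/80 split described above is straightforward with scikit-learn. The segment-level spectral and texture metrics are represented here by a placeholder feature matrix; class labels, forest size and the accuracy report are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder: one row per image segment, columns are spectral/texture metrics;
# labels: 0 = other, 1 = I. glandulifera, 2 = knotweed, 3 = H. mantegazzianum
features = np.random.rand(2000, 12)
labels = np.random.randint(0, 4, 2000)

# 20% of the segments train the model, the remaining 80% form the test set
x_train, x_test, y_train, y_test = train_test_split(
    features, labels, train_size=0.2, random_state=0, stratify=labels)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(x_train, y_train)
print("overall accuracy:", accuracy_score(y_test, rf.predict(x_test)))
print("variable importances:", rf.feature_importances_)
```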

  7. Band-to-band registration and ortho-rectification of multilens/multispectral imagery: A case study of MiniMCA-12 acquired by a fixed-wing UAS

    NASA Astrophysics Data System (ADS)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Huang, Cho-Ying

    2016-04-01

    MiniMCA (Miniature Multiple Camera Array) is a lightweight, frame-based, multilens multispectral sensor, which is suitable for mounting on an unmanned aerial system (UAS) to acquire high spatial and temporal resolution imagery for various remote sensing applications. Since the MiniMCA exhibits a significant band misregistration effect, an automatic and precise band-to-band registration (BBR) method is proposed in this study. Based on the principle of sensor plane-to-plane projection, a modified projective transformation (MPT) model is developed. All coefficients of the MPT are estimated from indoor camera calibration, together with the correction of two systematic errors. Therefore, we can transfer all bands into the same image space. Quantitative error analysis shows that the proposed BBR scheme is scene independent and can achieve 0.33 pixels of accuracy, demonstrating that the proposed method is accurate and reliable. Meanwhile, it is difficult to mark ground control points (GCPs) on the MiniMCA images, as their spatial resolution is low when the flight height is higher than 400 m. In this study, a higher resolution RGB camera is adopted to produce a digital surface model (DSM) and assist MiniMCA ortho-image generation. After precise BBR, only one reference band of the MiniMCA imagery is necessary for aerial triangulation, because all bands have the same exterior and interior orientation parameters. It means that all the MiniMCA imagery can be ortho-rectified through the same exterior and interior orientation parameters of the reference band. The result of the proposed ortho-rectification procedure shows that the co-registration errors between the MiniMCA reference band and the RGB ortho-images are less than 0.6 pixels.
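
    The core of band-to-band registration by plane-to-plane projection is warping each band into the reference band's image space with a projective transformation. The sketch below uses a plain homography in OpenCV as a stand-in for the paper's modified projective transformation with calibrated systematic-error terms; the matched point lists and image data are placeholders.

```python
import cv2
import numpy as np

def register_band(band, pts_band, pts_ref, shape_ref):
    """
    Warp one spectral band into the reference band's image space using a
    projective transformation estimated from corresponding points.
    pts_band, pts_ref: N x 2 float arrays of matched pixel coordinates.
    """
    H, inliers = cv2.findHomography(pts_band, pts_ref, cv2.RANSAC, 3.0)
    height, width = shape_ref
    return cv2.warpPerspective(band, H, (width, height))

# Illustrative use with synthetic data
ref_band = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
other_band = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
pts_ref = np.float32([[10, 10], [600, 15], [620, 450], [20, 460], [320, 240]])
pts_band = pts_ref + np.float32([2.5, -1.8])   # hypothetical constant band offset
aligned = register_band(other_band, pts_band, pts_ref, ref_band.shape)
```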

  8. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream by numerical importance, and thus a given code contains all lower rate encodings of the same image embedded within it. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
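
    The spectral KLT step amounts to an eigen-decomposition of the band covariance, treating each pixel as a vector of band values. The sketch below shows only that decorrelation stage; the decorrelated components would then be wavelet-coded with EZW, which is not reproduced here, and the data cube is a placeholder.

```python
import numpy as np

def spectral_klt(cube):
    """
    Apply the Karhunen-Loeve Transform across the spectral dimension.
    cube: bands x rows x cols array. Returns the decorrelated components
    (same shape) plus the eigenvectors and mean needed to invert the transform.
    """
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1)                  # one spectral vector per pixel
    mean = x.mean(axis=1, keepdims=True)
    cov = np.cov(x - mean)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # strongest components first
    eigvecs = eigvecs[:, order]
    components = eigvecs.T @ (x - mean)
    return components.reshape(bands, rows, cols), eigvecs, mean

# Each KLT component would subsequently be compressed with the EZW coder
cube = np.random.rand(6, 128, 128)               # e.g., 6 Landsat TM reflective bands
components, eigvecs, mean = spectral_klt(cube)
```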

  9. Model-based conifer crown surface reconstruction from multi-ocular high-resolution aerial imagery

    NASA Astrophysics Data System (ADS)

    Sheng, Yongwei

    2000-12-01

    Tree crown parameters such as width, height, shape and crown closure are desirable in forestry and ecological studies, but they are time-consuming and labor intensive to measure in the field. The stereoscopic capability of high-resolution aerial imagery provides a means of crown surface reconstruction. Existing photogrammetric algorithms designed to map terrain surfaces, however, cannot adequately extract crown surfaces, especially for steep conifer crowns. Considering crown surface reconstruction in the broader context of tree characterization from aerial images, we develop a rigorous perspective tree image formation model to bridge image-based tree extraction and crown surface reconstruction, and an integrated model-based approach to conifer crown surface reconstruction. Based on the fact that most conifer crowns have a solid geometric form, conifer crowns are modeled as generalized hemi-ellipsoids. Both automatic and semi-automatic approaches to optimal tree model development from multi-ocular images are investigated. The semi-automatic 3D tree interpreter developed in this thesis is able to efficiently extract reliable tree parameters and tree models in complicated tree stands. This thesis starts with a sophisticated stereo matching algorithm and incorporates tree models to guide stereo matching. The following critical problems are addressed in the model-based surface reconstruction process: (1) the problem of surface model composition from tree models, (2) the occlusion problem in disparity prediction from tree models, (3) the problem of integrating the predicted disparities into image matching, (4) the tree model edge effect reduction on the disparity map, (5) the occlusion problem in orthophoto production, and (6) the foreshortening problem in image matching, which is very serious for conifer crown surfaces. Solutions to the above problems are necessary for successful crown surface reconstruction. The model-based approach was applied to recover the

  10. Multi-Spectral Satellite Imagery and Land Surface Modeling Supporting Dust Detection and Forecasting

    NASA Astrophysics Data System (ADS)

    Molthan, A.; Case, J.; Zavodsky, B.; Naeger, A. R.; LaFontaine, F.; Smith, M. R.

    2014-12-01

    Current and future multi-spectral satellite sensors provide numerous means and methods for identifying hazards associated with polluting aerosols and dust. For over a decade, the NASA Short-term Prediction Research and Transition (SPoRT) Center at Marshall Space Flight Center in Huntsville has focused on developing new applications from near real-time data sources in support of the operational weather forecasting community. The SPoRT Center achieves these goals by matching appropriate analysis tools, modeling outputs, and other products to forecast challenges, along with appropriate training and end-user feedback to ensure a successful transition. As a spinoff of these capabilities, the SPoRT Center has recently focused on developing collaborations to address challenges with the public health community, specifically focused on the identification of hazards associated with dust and pollution aerosols. Using multispectral satellite data from the SEVIRI instrument on the Meteosat series, the SPoRT team has leveraged EUMETSAT techniques for identifying dust through false color (RGB) composites, which have been used by the National Hurricane Center and other meteorological centers to identify, monitor, and predict the movement of dust aloft. Similar products have also been developed from the MODIS and VIIRS instruments onboard the Terra and Aqua, and Suomi-NPP satellites, respectively, and transitioned for operational forecasting use by offices within NOAA's National Weather Service. In addition, the SPoRT Center incorporates satellite-derived vegetation information and land surface modeling to create high-resolution analyses of soil moisture and other land surface conditions relevant to the lofting of wind-blown dust and identification of other, possible public-health vectors. Examples of land surface modeling and relevant predictions are shown in the context of operational decision making by forecast centers with potential future applications to public health arenas.

  11. Use of shadow for enhancing mapping of perennial desert plants from high-spatial resolution multispectral and panchromatic satellite imagery

    NASA Astrophysics Data System (ADS)

    Alsharrah, Saad A.; Bouabid, Rachid; Bruce, David A.; Somenahalli, Sekhar; Corcoran, Paul A.

    2016-07-01

    Satellite remote-sensing techniques face challenges in extracting vegetation-cover information in desert environments. The limitations in detection are attributed to three major factors: (1) soil background effect, (2) distribution and structure of perennial desert vegetation, and (3) tradeoff between spatial and spectral resolutions of the satellite sensor. In this study, a modified vegetation shadow model (VSM-2) is proposed, which utilizes vegetation shadow as a contextual classifier to counter the limiting factors. Pleiades high spatial resolution, multispectral (2 m), and panchromatic (0.5 m) images were utilized to map small and scattered perennial arid shrubs and trees. We investigated the VSM-2 method in addition to conventional techniques, such as vegetation indices and prebuilt object-based image analysis. The success of each approach was evaluated using a root sum square error metric, which incorporated field data as control and three error metrics related to commission, omission, and percent cover. Results of the VSM-2 revealed significant improvements in perennial vegetation cover and distribution accuracy compared with the other techniques and its predecessor VSM-1. Findings demonstrated that the VSM-2 approach, using high-spatial resolution imagery, can be employed to provide a more accurate representation of perennial arid vegetation and, consequently, should be considered in assessments of desertification.

  12. Random Forest and Objected-Based Classification for Forest Pest Extraction from Uav Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Yuan, Yi; Hu, Xiangyun

    2016-06-01

    Forest pests are one of the most important factors affecting forest health. However, because it is difficult to delineate infested areas and to predict how an infestation will spread, control and extermination efforts have so far been only partially effective, and infested areas continue to spread. The introduction of spatial information technology is therefore in high demand: examining the spatial distribution of infestations makes it possible to establish timely and appropriate control strategies by periodically assessing the infected areas as early as possible and by predicting how the infestation will spread. Now that UAV photography has become increasingly popular, it is much cheaper and faster to obtain UAV images, which are well suited to monitoring forest health and detecting pests. This paper proposes a new method to effectively detect forest pest damage in UAV aerial imagery. Each image is first segmented into superpixels, and a 12-dimensional statistical texture feature is then computed for each superpixel and used to train and classify the data. Finally, the classification results are refined using a few simple rules. The experiments show that the method is effective for the extraction of forest pest areas in UAV images.
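
    The segment-then-describe pipeline can be sketched with standard tools. Below, SLIC stands in for whichever superpixel segmentation the authors used, and simple per-superpixel band statistics stand in for their 12-dimensional texture descriptor; the image data are placeholders.

```python
import numpy as np
from skimage.segmentation import slic

# rgb: rows x cols x 3 UAV orthoimage patch scaled to [0, 1] (placeholder data)
rgb = np.random.rand(256, 256, 3)

# Segment the image into superpixels
segments = slic(rgb, n_segments=300, compactness=10.0)

# Build a simple per-superpixel feature vector (mean and std of each channel)
features = []
for seg_id in np.unique(segments):
    pixels = rgb[segments == seg_id]             # N x 3 pixels of this superpixel
    features.append(np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)]))
features = np.array(features)                    # one row per superpixel, ready for a classifier
```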

  13. Discrimination of Deciduous Tree Species from Time Series of Unmanned Aerial System Imagery

    PubMed Central

    Lisein, Jonathan; Michez, Adrien; Claessens, Hugues; Lejeune, Philippe

    2015-01-01

    Technology advances can revolutionize Precision Forestry by providing accurate and fine forest information at the tree level. This paper addresses the question of how, and particularly when, an Unmanned Aerial System (UAS) should be used in order to efficiently discriminate deciduous tree species. The goal of this research is to determine the best time window to achieve an optimal species discrimination. A time series of high resolution UAS imagery was collected to cover the growing season from leaf flush to leaf fall. Full benefit was taken of the temporal resolution of UAS acquisition, one of the most promising features of small drones. The disparity in forest tree phenology is at its maximum during early spring and late autumn. But the phenology state that optimized the classification result is the one that minimizes the spectral variation within tree species groups and, at the same time, maximizes the phenological differences between species. Sunlit tree crowns (5 deciduous species groups) were classified using a Random Forest approach for monotemporal, two-date and three-date combinations. The end of leaf flushing was the most efficient single-date time window. Multitemporal datasets definitely improve the overall classification accuracy. But single-date high resolution orthophotomosaics, acquired in optimal time windows, result in a very good classification accuracy (overall out-of-bag error of 16%). PMID:26600422

  14. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping

    PubMed Central

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-01-01

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) procedure implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with those from UAV-images from real flights. PMID:26274960

  15. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.

    PubMed

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-08-12

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) procedure implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with those from UAV-images from real flights.

  16. Automatic extraction of building roofs using LIDAR data and multispectral imagery

    NASA Astrophysics Data System (ADS)

    Awrangjeb, Mohammad; Zhang, Chunsun; Fraser, Clive S.

    2013-09-01

    Automatic 3D extraction of building roofs from remotely sensed data is important for many applications including city modelling. This paper proposes a new method for automatic 3D roof extraction through an effective integration of LIDAR (Light Detection And Ranging) data and multispectral orthoimagery. Using the ground height from a DEM (Digital Elevation Model), the raw LIDAR points are separated into two groups. The first group contains the ground points that are exploited to constitute a 'ground mask'. The second group contains the non-ground points which are segmented using an innovative image line guided segmentation technique to extract the roof planes. The image lines are extracted from the grey-scale version of the orthoimage and then classified into several classes such as 'ground', 'tree', 'roof edge' and 'roof ridge' using the ground mask and colour and texture information from the orthoimagery. During segmentation of the non-ground LIDAR points, the lines from the latter two classes are used as baselines to locate the nearby LIDAR points of the neighbouring planes. For each plane a robust seed region is thereby defined using the nearby non-ground LIDAR points of a baseline and this region is iteratively grown to extract the complete roof plane. Finally, a newly proposed rule-based procedure is applied to remove planes constructed on trees. Experimental results show that the proposed method can successfully remove vegetation and so offers high extraction rates.

  17. Exploration towards the modeling of gable-roofed buildings using a combination of aerial and street-level imagery

    NASA Astrophysics Data System (ADS)

    Creusen, Ivo; Hazelhoff, Lykele; de With, Peter H. N.

    2015-03-01

    Extraction of residential building properties is helpful for numerous applications, such as computer-guided feasibility analysis for solar panel placement, determination of real-estate taxes and assessment of real-estate insurance policies. Therefore, this work explores the automated modeling of buildings with a gable roof (the most common roof type within Western Europe), based on a combination of aerial imagery and street-level panoramic images. This is a challenging task, since buildings show large variations in shape, dimensions and building extensions, and may additionally be captured under non-ideal lighting conditions. The aerial images feature a coarse overview of the building due to the large capturing distance. The building footprint and an initial estimate of the building height are extracted based on the analysis of stereo aerial images. The estimated model is then refined using street-level images, which feature higher resolution and enable more accurate measurements, although they display only a single side of the building. Initial experiments indicate that the footprint dimensions of the main building can be accurately extracted from aerial images, while the building height is extracted with slightly less accuracy. By combining aerial and street-level images, we have found that the accuracies of these height measurements are significantly increased, thereby improving the overall quality of the extracted building model and resulting in an average inaccuracy of the estimated volume below 10%.

  18. Estimating Evapotranspiration over Heterogeneously Vegetated Surfaces using Large Aperture Scintillometer, LiDAR, and Airborne Multispectral Imagery

    NASA Astrophysics Data System (ADS)

    Geli, H. M.; Neale, C. M.; Pack, R. T.; Watts, D. R.; Osterberg, J.

    2011-12-01

    Estimating evapotranspiration (ET) over heterogeneous areas is challenging, especially in water-limited, sparsely vegetated environments. New techniques such as airborne full-waveform LiDAR (Light Detection and Ranging) and high resolution multispectral and thermal imagery can provide enough detail of sparse canopies to improve energy balance model estimates as well as footprint analysis of scintillometer data. The objectives of this study were to estimate ET over such areas and to develop methodologies for the use of these airborne data technologies. Because of the associated heterogeneity, this study was conducted over the Cibola National Wildlife Refuge, southern California, on an area dominated by tamarisk (salt cedar) forest (90%) interspersed with arrowweed and bare soil (10%). A set of two large aperture scintillometers (LASs) was deployed over the area to provide estimates of sensible heat flux (HLAS). The LASs were distributed over the area in a way that allowed capturing different surface spatial heterogeneity. Bowen ratio systems were used to provide measurements of hydrometeorological variables and surface energy balance fluxes (SEBF), i.e., Rn, G, H, and LE. Scintillometer-based estimates of HLAS were improved by considering the effect of the corresponding 3D footprint and the associated displacement height (d) and roughness length (z0), following Geli et al. (2011). The LiDAR data were acquired using the LASSI Lidar developed at Utah State University (USU). The data were used to obtain 1-m spatial resolution DEMs and vegetation canopy heights to improve the HLAS estimates. The BR measurements of Rn and G were combined with the LAS estimates HLAS to provide estimates of LELAS as a residual of the energy balance equation. A thermal remote sensing model, namely the two-source energy balance (TSEB) model of Norman et al. (1995), was applied to provide spatial estimates of SEBF. Four airborne images at 1-4 meter spatial resolution acquired using the USU airborne
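
    The residual computation behind LELAS is simply the surface energy balance rearranged for latent heat. A tiny sketch, with placeholder flux values in W m^-2, makes the arithmetic explicit.

```python
# Latent heat flux as the residual of the surface energy balance:
# Rn = G + H + LE  =>  LE = Rn - G - H   (all fluxes in W m^-2)
def latent_heat_residual(rn, g, h):
    return rn - g - h

# Illustrative numbers: net radiation and soil heat flux from the Bowen ratio
# station, sensible heat flux from the scintillometer (values are placeholders)
le_las = latent_heat_residual(rn=520.0, g=60.0, h=180.0)
print(f"LE_LAS = {le_las:.0f} W m^-2")
```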

  19. Detection of Neolithic Settlements in Thessaly (Greece) Through Multispectral and Hyperspectral Satellite Imagery

    PubMed Central

    Alexakis, Dimitrios; Sarris, Apostolos; Astaras, Theodoros; Albanakis, Konstantinos

    2009-01-01

    Thessaly is a low relief region in Greece where hundreds of Neolithic settlements/tells called magoules were established from the Early Neolithic period until the Bronze Age (6,000 – 3,000 BC). Multi-sensor remote sensing was applied to the study area in order to evaluate its potential to detect Neolithic settlements. Hundreds of sites were geo-referenced through systematic GPS surveying throughout the region. Data from four primary sensors were used, namely Landsat ETM, ASTER, EO1 - HYPERION and IKONOS. A range of image processing techniques was initially applied to the hyperspectral imagery in order to detect the settlements and validate the results of the GPS surveying. Although specific difficulties were encountered in the automatic classification of archaeological features composed of parent material similar to that of the surrounding landscape, the results of the research suggested a different response of each sensor to the detection of the Neolithic settlements, according to their spectral and spatial resolution. PMID:22399961

  20. Assessment of Atmospheric Algorithms to Retrieve Vegetation in Natural Protected Areas Using Multispectral High Resolution Imagery

    PubMed Central

    Marcello, Javier; Eugenio, Francisco; Perdomo, Ulises; Medina, Anabella

    2016-01-01

    The precise mapping of vegetation covers in semi-arid areas is a complex task, as this type of environment consists of sparse vegetation mainly composed of small shrubs. The launch of high resolution satellites, with additional spectral bands and the ability to alter the viewing angle, offers a useful technology to address this objective. In this context, atmospheric correction is a fundamental step in the pre-processing of such remote sensing imagery and, consequently, different algorithms have been developed for this purpose over the years. They are commonly categorized as image-based methods and as more advanced physical models based on radiative transfer theory. Despite the relevance of this topic, few comparative studies covering several methods have been carried out using high resolution data or applied specifically to vegetation covers. In this work, the performance of five representative atmospheric correction algorithms (DOS, QUAC, FLAASH, ATCOR and 6S) has been assessed, using high resolution Worldview-2 imagery and field spectroradiometer data collected simultaneously, with the goal of identifying the most appropriate techniques. The study also included a detailed analysis of the influence of parameterization on the final results of the correction, the aerosol model and its optical thickness being important parameters to be properly adjusted. The effects of the corrections were studied at vegetation and soil sites belonging to different protected semi-arid ecosystems (high mountain and coastal areas). In summary, the superior performance of model-based algorithms, 6S in particular, has been demonstrated, achieving reflectance estimations very close to the in-situ measurements (RMSE of between 2% and 3%). Finally, an example of the importance of the atmospheric correction in the vegetation estimation in these natural areas is presented, allowing the robust mapping of species and the analysis of multitemporal variations related to the
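
    Of the five algorithms compared, only dark object subtraction (DOS) is simple enough to sketch in a few lines. The version below uses a low percentile of each band as the dark-object value, which is one common image-based variant and not necessarily the exact implementation evaluated in the study; the data stack is a placeholder.

```python
import numpy as np

def dark_object_subtraction(bands, percentile=0.1):
    """
    Image-based atmospheric correction by dark object subtraction.
    bands: bands x rows x cols array of at-sensor radiance or DN.
    The darkest pixels in each band are assumed to owe their signal mostly to
    atmospheric path radiance, which is subtracted from the whole band.
    """
    corrected = np.empty_like(bands, dtype=float)
    for i, band in enumerate(bands):
        dark_value = np.percentile(band, percentile)
        corrected[i] = np.clip(band - dark_value, 0, None)
    return corrected

# Illustrative use on a Worldview-2-like 8-band stack (placeholder values)
stack = np.random.rand(8, 512, 512) * 1000
surface_like = dark_object_subtraction(stack)
```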

  1. Advanced Tie Feature Matching for the Registration of Mobile Mapping Imaging Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Peter, M.; Gerke, M.; Vosselman, G.

    2016-06-01

    Mobile Mapping's ability to acquire high-resolution ground data is undermined by the unreliable localisation capabilities of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line-of-sight to navigation satellites, resulting in an inability to accurately estimate the mobile platform's position. Consequently, the positioning quality of the acquired data products is considerably diminished. This issue has been widely addressed in the literature and in research projects. However, consistent compliance with sub-decimetre accuracy as well as a correction of errors in height remain unsolved. We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on the utilisation of highly accurate orientation parameters derived from aerial imagery. In addition, the degraded exterior orientation parameters of the MM platform will be utilised, as they enable the application of accurate matching techniques needed to derive reliable tie information. This tie information will then be used within an adjustment solution to correct the affected MM data. This paper presents an advanced feature matching procedure as a prerequisite to the aforementioned orientation update. MM data is ortho-projected to gain a higher resemblance to aerial nadir data, simplifying the images' geometry for matching. By utilising MM exterior orientation parameters, search windows may be used in conjunction with selective keypoint detection and template matching. Because the images originate from different sensor systems, however, difficulties arise with respect to changes in illumination, radiometry and a different original perspective. To respond to these challenges for feature detection, the procedure relies on detecting keypoints in only one image. Initial tests indicate a considerable improvement in comparison to classic detector/descriptor approaches in this particular matching scenario. This method leads to a significant reduction of outliers due to the limited availability
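
    The combination of a search window predicted from the platform's approximate orientation with template matching can be illustrated with OpenCV's normalised cross-correlation matcher. The sketch below is a generic illustration of that idea, not the paper's procedure; the images, window coordinates and score threshold are placeholders.

```python
import cv2
import numpy as np

def match_in_window(aerial, template, window_xywh, min_score=0.8):
    """
    Find an ortho-projected MM template patch inside a search window of the aerial
    image using normalised cross-correlation. Returns the matched top-left corner
    in full-image coordinates, or None if the best score is too low.
    """
    x, y, w, h = window_xywh                      # window predicted from approximate orientation
    search = aerial[y:y + h, x:x + w]
    scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    if max_val < min_score:
        return None                               # reject weak, likely spurious matches
    return (x + max_loc[0], y + max_loc[1]), max_val

# Illustrative use with synthetic single-band images
aerial = np.random.randint(0, 255, (2000, 2000), dtype=np.uint8)
template = aerial[500:564, 700:764].copy()        # stand-in for an ortho-projected MM patch
result = match_in_window(aerial, template, window_xywh=(650, 450, 200, 200))
```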

  2. Segmenting clouds from space : a hybrid multispectral classification algorithm for satellite imagery.

    SciTech Connect

    Post, Brian Nelson; Wilson, Mark P.; Smith, Jody Lynn; Wehlburg, Joseph Cornelius; Nandy, Prabal

    2005-07-01

    This paper reports on a novel approach to atmospheric cloud segmentation from a space-based multi-spectral pushbroom satellite system. The satellite collects 15 spectral bands ranging from the visible, 0.45 um, to the long wave infra-red (IR), 10.7 um. The images are radiometrically calibrated and have ground sample distances (GSD) of 5 meters for the visible to very near IR bands and a GSD of 20 meters for the near IR to long wave IR bands. The algorithm is a hybrid classification system in the sense that supervised and unsupervised networks are used in conjunction. For performance evaluation, a series of numerical comparisons to human-derived cloud borders was performed. A set of 33 scenes was selected to represent various climate zones with different land cover from around the world. The algorithm consists of the following. Band separation is performed to find the band combinations which form significant separation between the cloud and background classes. The candidate bands are fed into a K-Means clustering algorithm in order to identify areas in the image which have similar centroids. Each cluster is then compared to the cloud and background prototypes using the Jeffries-Matusita distance. The minimum distance is found and each unknown cluster is assigned to its appropriate prototype. A classification rate of 88% was found when using one short wave IR band and one mid-wave IR band. Past investigators have reported segmentation accuracies ranging from 67% to 80%, many of which require human intervention. A sensitivity of 75% and a specificity of 90% were reported as well.
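
    The unsupervised-plus-supervised hybrid can be sketched as K-Means clustering followed by a Jeffries-Matusita comparison of each cluster against cloud and background prototypes. The JM distance is computed here from the Bhattacharyya distance under a Gaussian assumption; the prototype statistics and pixel data are placeholders, and this is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def jeffries_matusita(mean1, cov1, mean2, cov2):
    """JM distance between two Gaussian class models (ranges from 0 to 2)."""
    cov = 0.5 * (cov1 + cov2)
    diff = mean1 - mean2
    b = (diff @ np.linalg.inv(cov) @ diff) / 8.0 + 0.5 * np.log(
        np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 2.0 * (1.0 - np.exp(-b))

# pixels: N x bands array of the selected band combination (placeholder data)
pixels = np.random.rand(10000, 2)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pixels)

# Prototype statistics for the cloud and background classes (normally from training scenes)
prototypes = {
    "cloud": (np.array([0.8, 0.7]), np.eye(2) * 0.01),
    "background": (np.array([0.3, 0.4]), np.eye(2) * 0.02),
}

# Assign each cluster to the prototype with the smallest JM distance
for k in range(8):
    members = pixels[labels == k]
    m, c = members.mean(axis=0), np.cov(members, rowvar=False)
    best = min(prototypes, key=lambda name: jeffries_matusita(m, c, *prototypes[name]))
    print(f"cluster {k} -> {best}")
```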

  3. Multispectral Photography

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer, such as its Model 75, Spectral Data creates a color image from the black-and-white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial surveying, image processing and analysis, and a number of other remote sensing services.

  4. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.

    PubMed

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-03-26

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low resolution of the imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.

  5. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery

    PubMed Central

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-01-01

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low resolution of the imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564
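
    The classification stage of such a pipeline (HOG features from candidate blobs feeding a linear SVM) can be sketched as follows. The DCT part of the hybrid descriptor and the blob extraction itself are omitted, and the training patches, patch size and labels are placeholders rather than the paper's data.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def blob_descriptor(patch):
    """HOG descriptor of a thermal candidate blob cropped/resized to a fixed size."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Placeholder training data: 64 x 32 thermal patches, label 1 = pedestrian, 0 = clutter
patches = [np.random.rand(64, 32) for _ in range(200)]
labels = np.random.randint(0, 2, 200)
x_train = np.array([blob_descriptor(p) for p in patches])

svm = LinearSVC(C=1.0)
svm.fit(x_train, labels)

# Classify a new candidate blob produced by the detection stage
candidate = np.random.rand(64, 32)
is_pedestrian = svm.predict([blob_descriptor(candidate)])[0] == 1
```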

  6. Low-Level Tie Feature Extraction of Mobile Mapping Data (mls/images) and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Hussnain, Z.; Peter, M.; Oude Elberink, S.; Gerke, M.; Vosselman, G.

    2016-03-01

    Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform's position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose efforts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform's defective, and therefore imprecise but approximate, orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform's three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and data demands different approaches, two independent workflows are developed in parallel. Still under development, both workflows will be presented and preliminary results will be shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed as well as an outline of

  7. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems

    PubMed Central

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-01

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems, including shrublands, coastal areas with dune systems and areas of shallow water, are studied. As far as these resources’ reduction is concerned, remote sensing and image processing techniques could contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed for obtaining a higher quality of the information available. An important quality improvement is the fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, a total of four pansharpening algorithms were analyzed using six quality indices. The quality assessment was implemented not only for the whole set of multispectral bands, but also for the subset of spectral bands covered by the wavelength range of the panchromatic image and outside of it. A better quality result is observed in the fused image using only the bands covered by the panchromatic band range. It is important to highlight the use of these techniques not only in land and urban areas, but also their novel application to shallow-water ecosystems. Although the algorithms do not show a high difference in land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity hue saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet ‘à trous’ through fractal dimension maps for shrublands and mixed ecosystems. Moreover, quality map analysis was carried out in order to study the fusion result in each band at the local level. Finally, to demonstrate the performance of these pansharpening techniques, advanced Object
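
    As a concrete example of the kind of quality index used in such an assessment, the sketch below computes ERGAS between a reference multispectral image and a fused product; the six indices actually used in the study are not reproduced here, and the `ratio` argument (panchromatic-to-multispectral pixel-size ratio) is an assumption of the sketch.

```python
# Sketch of the ERGAS quality index for a pansharpened product; `ratio` is the
# panchromatic-to-multispectral pixel-size ratio (e.g. 0.25 for 4x sharpening).
import numpy as np

def ergas(reference, fused, ratio):
    """reference, fused: arrays of shape (bands, rows, cols) on the same grid."""
    bands = reference.shape[0]
    acc = 0.0
    for k in range(bands):
        rmse = np.sqrt(np.mean((reference[k] - fused[k]) ** 2))
        acc += (rmse / np.mean(reference[k])) ** 2     # band-wise relative error
    return 100.0 * ratio * np.sqrt(acc / bands)
```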

  8. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems.

    PubMed

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-25

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems, including shrublands, coastal areas with dune systems and areas of shallow water, are studied. As far as these resources' reduction is concerned, remote sensing and image processing techniques could contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed for obtaining a higher quality of the information available. An important quality improvement is the fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, a total of four pansharpening algorithms were analyzed using six quality indices. The quality assessment was implemented not only for the whole set of multispectral bands, but also for the subset of spectral bands covered by the wavelength range of the panchromatic image and outside of it. A better quality result is observed in the fused image using only the bands covered by the panchromatic band range. It is important to highlight the use of these techniques not only in land and urban areas, but also their novel application to shallow-water ecosystems. Although the algorithms do not show a high difference in land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity hue saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet 'à trous' through fractal dimension maps for shrublands and mixed ecosystems. Moreover, quality map analysis was carried out in order to study the fusion result in each band at the local level. Finally, to demonstrate the performance of these pansharpening techniques, advanced Object-Based (OBIA

  9. Mapping forest stand complexity for woodland caribou habitat assessment using multispectral airborne imagery

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Hu, B.; Woods, M.

    2014-11-01

    The decline of the woodland caribou population is a result of habitat loss. To conserve the habitat of the woodland caribou and protect the species from extinction, it is critical to accurately characterize and monitor its habitat. Conventionally, products derived from low to medium spatial resolution remote sensing data, such as land cover classifications and vegetation indices, are used for wildlife habitat assessment. These products fail to provide information on the structural complexity of forest canopies, which reflects important characteristics of caribou habitat. Recent studies have employed LiDAR (Light Detection And Ranging) systems to directly retrieve three-dimensional forest attributes. Although promising results have been achieved, the acquisition cost of LiDAR data is very high. In this study, the utility of very high spatial resolution imagery for characterizing the structural development of forest canopies was exploited. A stand-based image texture analysis was performed to predict forest succession stages. The results were demonstrated to be consistent with those derived from LiDAR data.
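
    A stand-based texture analysis of this kind is commonly built on grey-level co-occurrence matrices (GLCM). The sketch below computes two GLCM statistics for one image band; the quantisation to 32 grey levels and the choice of offsets are illustrative assumptions rather than the study's exact configuration.

```python
# Hedged sketch of stand-level image texture (GLCM contrast/homogeneity) of the
# sort used to relate canopy structure to succession stage.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def stand_texture(band, levels=32):
    """band: 2-D array clipped to one forest stand; returns two texture statistics."""
    # quantise the band to `levels` grey levels for the co-occurrence matrix
    q = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1, 2],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    return {'contrast': graycoprops(glcm, 'contrast').mean(),
            'homogeneity': graycoprops(glcm, 'homogeneity').mean()}
```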

  10. New interpretations of the Fort Clark State Historic Site based on aerial color and thermal infrared imagery

    NASA Astrophysics Data System (ADS)

    Heller, Andrew Roland

    The Fort Clark State Historic Site (32ME2) is a well known site on the upper Missouri River, North Dakota. The site was the location of two Euroamerican trading posts and a large Mandan-Arikara earthlodge village. In 2004, Dr. Kenneth L. Kvamme and Dr. Tommy Hailey surveyed the site using aerial color and thermal infrared imagery collected from a powered parachute. Individual images were stitched together into large image mosaics and registered to Wood's 1993 interpretive map of the site using Adobe Photoshop. The analysis of those image mosaics resulted in the identification of more than 1,500 archaeological features, including as many as 124 earthlodges.

  11. Multistage, Multiband and sequential imagery to identify and quantify non-forest vegetation resources

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.

    1971-01-01

    Analysis and recognition processing of multispectral scanner imagery for plant community classification and interpretations of various film-filter-scale aerial photographs are reported. Data analyses and manuscript preparation of research on microdensitometry for plant community and component identification and remote estimates of biomass are included.

  12. A multispectral scanner survey of the Tonopah Test Range, Nevada. Date of survey: August 1993

    SciTech Connect

    Brewster, S.B. Jr.; Howard, M.E.; Shines, J.E.

    1994-08-01

    The Multispectral Remote Sensing Department of the Remote Sensing Laboratory conducted an airborne multispectral scanner survey of a portion of the Tonopah Test Range, Nevada. The survey was conducted on August 21 and 22, 1993, using a Daedalus AADS1268 scanner and coincident aerial color photography. Flight altitudes were 5,000 feet (1,524 meters) above ground level for systematic coverage and 1,000 feet (304 meters) for selected areas of special interest. The multispectral scanner survey was initiated as part of an interim and limited investigation conducted to gather preliminary information regarding historical hazardous material release sites which could have environmental impacts. The overall investigation also includes an inventory of environmental restoration sites, a ground-based geophysical survey, and an aerial radiological survey. The multispectral scanner imagery and coincident aerial photography were analyzed for the detection, identification, and mapping of man-made soil disturbances. Several standard image enhancement techniques were applied to the data to assist image interpretation. A geologic ratio enhancement and a color composite consisting of AADS1268 channels 10, 7, and 9 (mid-infrared, red, and near-infrared spectral bands) proved most useful for detecting soil disturbances. A total of 358 disturbance sites were identified on the imagery and mapped using a geographic information system. Of these sites, 326 were located within the Tonopah Test Range while the remaining sites were present on the imagery but outside the site boundary. The mapped site locations are being used to support ongoing field investigations.

  13. Deglaciation of the Caucasus Mountains, Russia/Georgia, in the 21st century observed with ASTER satellite imagery and aerial photography

    NASA Astrophysics Data System (ADS)

    Shahgedanova, M.; Nosenko, G.; Kutuzov, S.; Rototaeva, O.; Khromova, T.

    2014-12-01

    Changes in the map area of 498 glaciers located on the Main Caucasus ridge (MCR) and on Mt. Elbrus in the Greater Caucasus Mountains (Russia and Georgia) were assessed using multispectral ASTER and panchromatic Landsat imagery with 15 m spatial resolution in 1999/2001 and 2010/2012. Changes in recession rates of glacier snouts between 1987-2001 and 2001-2010 were investigated using aerial photography and ASTER imagery for a sub-sample of 44 glaciers. In total, glacier area decreased by 4.7 ± 2.1% or 19.2 ± 8.7 km2 from 407.3 ± 5.4 km2 to 388.1 ± 5.2 km2. Glaciers located in the central and western MCR lost 13.4 ± 7.3 km2 (4.7 ± 2.5%) in total or 8.5 km2 (5.0 ± 2.4%) and 4.9 km2 (4.1 ± 2.7%) respectively. Glaciers on Mt. Elbrus, although located at higher elevations, lost 5.8 ± 1.4 km2 (4.9 ± 1.2%) of their total area. The recession rates of valley glacier termini increased between 1987-2000/01 and 2000/01-2010 (2000 for the western MCR and 2001 for the central MCR and Mt. Elbrus) from 3.8 ± 0.8, 3.2 ± 0.9 and 8.3 ± 0.8 m yr-1 to 11.9 ± 1.1, 8.7 ± 1.1 and 14.1 ± 1.1 m yr-1 in the central and western MCR and on Mt. Elbrus respectively. The highest rate of increase in glacier termini retreat was registered on the southern slope of the central MCR where it has tripled. A positive trend in summer temperatures forced glacier recession, and strong positive temperature anomalies in 1998, 2006, and 2010 contributed to the enhanced loss of ice. An increase in accumulation season precipitation observed in the northern MCR since the mid-1980s has not compensated for the effects of summer warming while the negative precipitation anomalies, observed on the southern slope of the central MCR in the 1990s, resulted in stronger glacier wastage.

  14. Decision Level Fusion of LIDAR Data and Aerial Color Imagery Based on Bayesian Theory for Urban Area Classification

    NASA Astrophysics Data System (ADS)

    Rastiveis, H.

    2015-12-01

    Airborne Light Detection and Ranging (LiDAR) generates high-density 3D point clouds that provide comprehensive information on object surfaces. Combining these data with aerial/satellite imagery is quite promising for improving land cover classification. In this study, fusion of LiDAR data and aerial imagery based on Bayesian theory in a three-level fusion algorithm is presented. In the first level, pixel-level fusion, the proper descriptors for both LiDAR and image data are extracted. In the next level of fusion, feature-level, the area is classified using the extracted features into the six classes of "Buildings", "Trees", "Asphalt Roads", "Concrete roads", "Grass" and "Cars" with the Naïve Bayes classification algorithm. This classification is performed with three different strategies: (1) using merely LiDAR data, (2) using merely image data, and (3) using all extracted features from LiDAR and image. The results of the three classifiers are integrated in the last phase, decision-level fusion, based on the Naïve Bayes algorithm. To evaluate the proposed algorithm, a high resolution color orthophoto and LiDAR data over the urban areas of Zeebruges, Belgium, were used. Results obtained from the decision-level fusion phase revealed an improvement in overall accuracy and kappa coefficient.
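
    A minimal sketch of the three classification strategies and a simple Bayesian decision-level fusion is given below, using Gaussian Naïve Bayes and a product-of-posteriors fusion rule; the feature descriptors and the paper's exact fusion rule are not reproduced, so this is an assumption-laden illustration only.

```python
# Sketch: train one Naive Bayes classifier per strategy (LiDAR-only, image-only,
# combined) and fuse their class posteriors by multiplication.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fuse_decisions(X_lidar, X_image, y_train, X_lidar_test, X_image_test):
    X_both = np.hstack([X_lidar, X_image])
    X_both_test = np.hstack([X_lidar_test, X_image_test])
    probs, clfs = [], []
    for Xtr, Xte in [(X_lidar, X_lidar_test),
                     (X_image, X_image_test),
                     (X_both, X_both_test)]:
        clf = GaussianNB().fit(Xtr, y_train)
        probs.append(clf.predict_proba(Xte))    # per-strategy class posteriors
        clfs.append(clf)
    fused = np.prod(probs, axis=0)              # decision-level fusion rule (assumed)
    return clfs[0].classes_[np.argmax(fused, axis=1)]
```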

  15. Detection of two intermixed invasive woody species using color infrared aerial imagery and the support vector machine classifier

    NASA Astrophysics Data System (ADS)

    Mirik, Mustafa; Chaudhuri, Sriroop; Surber, Brady; Ale, Srinivasulu; James Ansley, R.

    2013-01-01

    Both the evergreen redberry juniper (Juniperus pinchotii Sudw.) and deciduous honey mesquite (Prosopis glandulosa Torr.) are destructive and aggressive invaders that affect rangelands and grasslands of the southern Great Plains of the United States. However, their current spatial extent and future expansion trends are unknown. This study was aimed at: (1) exploring the utility of aerial imagery for detecting and mapping intermixed redberry juniper and honey mesquite while both are in full foliage using the support vector machine classifier at two sites in north central Texas, and (2) assessing and comparing the mapping accuracies between sites. Accuracy assessments revealed that the overall accuracies were 90% and 89%, with associated kappa coefficients of 0.86 and 0.85 for sites 1 and 2, respectively. Z-statistics (0.102<1.96) used to compare the classification results for both sites indicated an insignificant difference between classifications at the 95% probability level. In most instances, juniper and mesquite were identified correctly, with <7% being mistaken for the other woody species. These results indicated that assessment of the current infestation extent and severity of these two woody species in a spatial context is possible using aerial remote sensing imagery.
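
    The accuracy comparison reported above (kappa coefficients of 0.86 and 0.85, Z = 0.102 < 1.96) can be reproduced in outline with Cohen's kappa and the standard two-kappa Z-test; in this sketch the kappa variances are supplied by the caller rather than derived from the full error matrices.

```python
# Hedged sketch: Cohen's kappa for each site and the Congalton-style Z-statistic
# Z = |k1 - k2| / sqrt(var1 + var2) for comparing two classifications.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def compare_classifications(y_true1, y_pred1, y_true2, y_pred2, var1, var2):
    k1 = cohen_kappa_score(y_true1, y_pred1)
    k2 = cohen_kappa_score(y_true2, y_pred2)
    z = abs(k1 - k2) / np.sqrt(var1 + var2)
    return k1, k2, z        # |Z| < 1.96 -> no significant difference at the 95% level
```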

  16. Integration of LiDAR Data with Aerial Imagery for Estimating Rooftop Solar Photovoltaic Potentials in City of Cape Town

    NASA Astrophysics Data System (ADS)

    Adeleke, A. K.; Smit, J. L.

    2016-06-01

    Apart from the drive to reduce carbon dioxide emissions by carbon-intensive economies like South Africa, the recent spate of electricity load shedding across most parts of the country, including Cape Town, has left electricity consumers scampering for alternatives, so as to rely less on the national grid. Solar energy, which is adequately available in most parts of Africa and regarded as a clean and renewable source of energy, makes it possible to generate electricity by using photovoltaic technology. However, before time and financial resources are invested in rooftop solar photovoltaic systems in urban areas, it is important to evaluate the potential of the building rooftops intended to be used for harvesting solar energy. This paper presents methodologies making use of LiDAR data and other ancillary data, such as high-resolution aerial imagery, to automatically extract building rooftops in the City of Cape Town and evaluate their potential for solar photovoltaic systems. Two main processes were involved: (1) automatic extraction of building roofs using the integration of LiDAR data and aerial imagery in order to derive their outlines and areal coverage; and (2) estimating the global solar radiation incidence on each roof surface using an elevation model derived from the LiDAR data, in order to evaluate its solar photovoltaic potential. This resulted in a geodatabase, which can be queried to retrieve salient information about the viability of a particular building roof for solar photovoltaic installation.

  17. Comparison of aerial imagery from manned and unmanned aircraft platforms for monitoring cotton growth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Unmanned aircraft systems (UAS) have emerged as a low-cost and versatile remote sensing platform in recent years, but little work has been done on comparing imagery from manned and unmanned platforms for crop assessment. The objective of this study was to compare imagery taken from multiple cameras ...

  18. A study to analyze six band multispectral images and fabricate a Fourier transform detector. [optical data processing - aerial photography/forests

    NASA Technical Reports Server (NTRS)

    Shackelford, R. G.; Walsh, J. R., Jr.

    1975-01-01

    An automatic Fourier transform diffraction pattern sampling system, used to investigate techniques for forestry classification of six band multispectral aerial photography is presented. Photographs and diagrams of the design, development and fabrication of a hybrid optical-digital Fourier transform detector are shown. The detector was designed around a concentric ring fiber optic array. This array was formed from many optical fibers which were sorted into concentric rings about a single fiber. All the fibers in each ring were collected into a bundle and terminated into a single photodetector. An optical/digital interface unit consisting of a high level multiplexer, and an analog-to-digital amplifier was also constructed and is described.

  19. Analysis of the impact of spatial resolution on land/water classifications using high-resolution aerial imagery

    USGS Publications Warehouse

    Enwright, Nicholas M.; Jones, William R.; Garber, Adrienne L.; Keller, Matthew J.

    2014-01-01

    Long-term monitoring efforts often use remote sensing to track trends in habitat or landscape conditions over time. To most appropriately compare observations over time, long-term monitoring efforts strive for consistency in methods. Thus, advances and changes in technology over time can present a challenge. For instance, modern camera technology has led to an increasing availability of very high-resolution imagery (i.e. submetre and metre) and a shift from analogue to digital photography. While numerous studies have shown that image resolution can impact the accuracy of classifications, most of these studies have focused on the impacts of comparing spatial resolution changes greater than 2 m. Thus, a knowledge gap exists on the impacts of minor changes in spatial resolution (i.e. submetre to about 1.5 m) in very high-resolution aerial imagery (i.e. 2 m resolution or less). This study compared the impact of spatial resolution on land/water classifications of an area dominated by coastal marsh vegetation in Louisiana, USA, using 1:12,000 scale colour-infrared analogue aerial photography (AAP) scanned at four different dot-per-inch resolutions simulating ground sample distances (GSDs) of 0.33, 0.54, 1, and 2 m. Analysis of the impact of spatial resolution on land/water classifications was conducted by exploring various spatial aspects of the classifications including density of waterbodies and frequency distributions in waterbody sizes. This study found that a small-magnitude change (1–1.5 m) in spatial resolution had little to no impact on the amount of water classified (i.e. percentage mapped was less than 1.5%), but had a significant impact on the mapping of very small waterbodies (i.e. waterbodies ≤ 250 m2). These findings should interest those using temporal image classifications derived from very high-resolution aerial photography as a component of long-term monitoring programs.

  20. Using high-resolution digital aerial imagery to map land cover

    USGS Publications Warehouse

    Dieck, J.J.; Robinson, Larry

    2014-01-01

    The Upper Midwest Environmental Sciences Center (UMESC) has used aerial photography to map land cover/land use on federally owned and managed lands for over 20 years. Until recently, that process used 23- by 23-centimeter (9- by 9-inch) analog aerial photos to classify vegetation along the Upper Mississippi River System, on National Wildlife Refuges, and in National Parks. With digital aerial cameras becoming more common and offering distinct advantages over analog film, UMESC transitioned to an entirely digital mapping process in 2009. Though not without challenges, this method has proven to be much more accurate and efficient when compared to the analog process.

  1. Monitoring the invasion of Spartina alterniflora using very high resolution unmanned aerial vehicle imagery in Beihai, Guangxi (China).

    PubMed

    Wan, Huawei; Wang, Qiao; Jiang, Dong; Fu, Jingying; Yang, Yipeng; Liu, Xiaoman

    2014-01-01

    Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To obtain the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from the unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, which increased by 19.07% compared with the area in 2009. A field survey was conducted for verification, and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on distribution, progress, and early detection of Spartina alterniflora invasion. OBIA, an object-based image analysis method for remote sensing (RS) detection, can enable control measures that are more effective, accurate, and less expensive than a field survey of the invasive population.

  2. Monitoring the Invasion of Spartina alterniflora Using Very High Resolution Unmanned Aerial Vehicle Imagery in Beihai, Guangxi (China)

    PubMed Central

    Wan, Huawei; Wang, Qiao; Jiang, Dong; Yang, Yipeng; Liu, Xiaoman

    2014-01-01

    Spartina alterniflora was introduced to Beihai, Guangxi (China), for ecological engineering purposes in 1979. However, the exceptional adaptability and reproductive ability of this species have led to its extensive dispersal into other habitats, where it has had a negative impact on native species and threatens the local mangrove and mudflat ecosystems. To obtain the distribution and spread of Spartina alterniflora, we collected HJ-1 CCD imagery from 2009 and 2011 and very high resolution (VHR) imagery from the unmanned aerial vehicle (UAV). The invasion area of Spartina alterniflora was 357.2 ha in 2011, which increased by 19.07% compared with the area in 2009. A field survey was conducted for verification, and the total accuracy was 94.0%. The results of this paper show that VHR imagery can provide details on distribution, progress, and early detection of Spartina alterniflora invasion. OBIA, an object-based image analysis method for remote sensing (RS) detection, can enable control measures that are more effective, accurate, and less expensive than a field survey of the invasive population. PMID:24892066

  3. Preliminary statistical studies concerning the Campos RJ sugar cane area, using LANDSAT imagery and aerial photographs

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Costa, S. R. X.; Paiao, L. B. F.; Mendonca, F. J.; Shimabukuro, Y. E.; Duarte, V.

    1983-01-01

    The two phase sampling technique was applied to estimate the area cultivated with sugar cane in an approximately 984 sq km pilot region of Campos. Correlation between existing aerial photography and LANDSAT data was used. The two phase sampling technique corresponded to 99.6% of the results obtained by aerial photography, taken as ground truth. This estimate has a standard deviation of 225 ha, which constitutes a coefficient of variation of 0.6%.

  4. Client-Side Data Processing and Training for Multispectral Imagery Applications in the GOES-R Era

    NASA Technical Reports Server (NTRS)

    Fuell, Kevin; Gravelle, Chad; Burks, Jason; Berndt, Emily; Schultz, Lori; Molthan, Andrew; Leroy, Anita

    2016-01-01

    RGB imagery can be created locally (i.e., client-side) from single-band imagery already on the system with little impact, given the recommended change to the texture cache in AWIPS II. Training and reference material accessible to forecasters within their operational display system improves RGB interpretation and application, as demonstrated at the OPG. Application examples from experienced forecasters are needed to support broader community use of RGB imagery, and these can be integrated into the user's display system.
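
    A client-side RGB composite of the kind described can be sketched as a per-band contrast stretch followed by stacking; the band choices and stretch limits below are placeholders, and none of AWIPS II's actual recipes or caching behaviour is reproduced.

```python
# Sketch of a client-side RGB composite: stretch three single-band arrays to
# 0-255 with per-channel limits and stack them into one image.
import numpy as np

def rgb_composite(red_band, green_band, blue_band, limits):
    """limits: dict of (min, max) stretch values per channel, e.g. {'r': (0.0, 1.0), ...}"""
    def scale(band, lo, hi):
        return np.clip((band - lo) / (hi - lo), 0, 1) * 255
    return np.dstack([scale(red_band, *limits['r']),
                      scale(green_band, *limits['g']),
                      scale(blue_band, *limits['b'])]).astype(np.uint8)
```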

  5. Forest fuel treatment detection using multi-temporal airborne Lidar data and high resolution aerial imagery ---- A case study at Sierra Nevada, California

    NASA Astrophysics Data System (ADS)

    Su, Y.; Guo, Q.; Collins, B.; Fry, D.; Kelly, M.

    2014-12-01

    Forest fuel treatments (FFT) are often employed in Sierra Nevada forests (located in California, US) to enhance forest health, regulate stand density, and reduce wildfire risk. However, there have been concerns that FFTs may have negative impacts on certain protected wildlife species. Due to the constraints and protection of resources (e.g., perennial streams, cultural resources, wildlife habitat, etc.), the actual FFT extents are usually different from planned extents. Identifying the actual extent of treated areas is of primary importance to understand the environmental influence of FFTs. Light detection and ranging (Lidar) is a powerful remote sensing technique that can provide accurate forest structure measurements, which offers great potential to monitor forest changes. This study used canopy height model (CHM) and canopy cover (CC) products derived from multi-temporal airborne Lidar data to detect FFTs by an approach combining a pixel-wise thresholding method and an object-of-interest segmentation method. We also investigated forest change following the implementation of landscape-scale FFT projects through the use of the normalized difference vegetation index (NDVI) and standardized principal component analysis (PCA) from multi-temporal high resolution aerial imagery. The same FFT detection routine was applied to the Lidar data and aerial imagery for the purpose of comparing the capability of Lidar data and aerial imagery for FFT detection. Our results demonstrated that FFT detection using Lidar-derived CC products produced both the highest total accuracy and kappa coefficient, and was more robust at identifying areas with light FFTs. The accuracy using Lidar-derived CHM products was significantly lower than that using Lidar-derived CC, but was still slightly higher than using aerial imagery. FFT detection results using NDVI and standardized PCA from multi-temporal aerial imagery produced almost identical total accuracy and kappa coefficient
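
    The pixel-wise part of such a detection routine can be sketched as differencing two canopy-cover rasters, thresholding the loss, and discarding small connected regions; the threshold and minimum-object size below are illustrative values, not those used in the study.

```python
# Sketch: flag treated areas as connected regions of large canopy-cover loss
# between two lidar acquisitions.
import numpy as np
from scipy import ndimage

def detect_treatment(cc_before, cc_after, loss_threshold=0.2, min_pixels=500):
    loss = cc_before - cc_after                 # canopy cover fraction lost
    candidate = loss > loss_threshold
    labels, n = ndimage.label(candidate)        # connected candidate regions
    sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    return keep                                 # boolean mask of detected treatments
```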

  6. Characterizing Sediment Flux Using Reconstructed Topography and Bathymetry from Historical Aerial Imagery on the Willamette River, OR.

    NASA Astrophysics Data System (ADS)

    Langston, T.; Fonstad, M. A.

    2014-12-01

    The Willamette is a gravel-bed river that drains ~28,800 km^2 between the Coast Range and Cascade Range in northwestern Oregon before entering the Columbia River near Portland. In the last 150 years, natural and anthropogenic drivers have altered the sediment transport regime, drastically reducing the geomorphic complexity of the river. Previously dynamic multi-threaded reaches have transformed into stable single channels to the detriment of ecosystem diversity and productivity. Flow regulation by flood-control dams, bank revetments, and conversion of riparian forests to agriculture have been key drivers of channel change. To date, little has been done to quantitatively describe temporal and spatial trends of sediment transport in the Willamette. This knowledge is critical for understanding how modern processes shape landforms and habitats. The goal of this study is to describe large-scale temporal and spatial trends in the sediment budget by reconstructing historical topography and bathymetry from aerial imagery. The area of interest for this project is a reach of the Willamette stretching from the confluence of the McKenzie River to the town of Peoria. While this reach remains one of the most dynamic sections of the river, it has exhibited a great loss in geomorphic complexity. Aerial imagery for this section of the river is available from USDA and USACE projects dating back to the 1930's. Above water surface elevations are extracted using the Imagine Photogrammetry package in ERDAS. Bathymetry is estimated using a method known as Hydraulic Assisted Bathymetry in which hydraulic parameters are used to develop a regression between water depth and pixel values. From this, pixel values are converted to depth below the water surface. Merged together, topography and bathymetry produce a spatially continuous digital elevation model of the geomorphic floodplain. Volumetric changes in sediment stored along the study reach are then estimated for different historic periods
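
    The depth-from-imagery step of Hydraulic Assisted Bathymetry rests on a regression between measured depths and image pixel values; the sketch below assumes a simple log-linear form, which is a common choice for optical depth retrieval but not necessarily the exact model used in this study.

```python
# Sketch: fit depth ~ a * ln(pixel value) + b at calibration points, then apply
# the relation across the wetted channel (an assumed functional form).
import numpy as np

def fit_depth_model(pixel_values, depths):
    """pixel_values, depths: 1-D arrays sampled at locations with known water depth."""
    a, b = np.polyfit(np.log(pixel_values), depths, 1)
    return a, b

def predict_depth(pixel_values, a, b):
    return a * np.log(pixel_values) + b
```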

  7. Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.

    2012-01-01

    As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinate of the bounds of the image are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to where the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
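
    The core mapping from ground points to image pixels can be sketched with a pinhole camera model (intrinsics K, rotation R and camera centre C from the INS); lens distortion, occlusion testing and the region-of-interest optimisation described above are omitted from this illustration.

```python
# Sketch: project DEM ground points into pixel coordinates with a pinhole model.
import numpy as np

def project_dem_to_image(dem_xyz, K, R, C):
    """dem_xyz: (N, 3) world points; K: 3x3 intrinsics; R: world-to-camera rotation;
    C: (3,) camera centre in world coordinates. Returns (N, 2) pixel coordinates."""
    cam = R @ (dem_xyz - C).T        # world -> camera frame
    uvw = K @ cam                    # camera frame -> homogeneous pixel coords
    return (uvw[:2] / uvw[2]).T      # perspective divide
```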

  8. Aerial video and ladar imagery fusion for persistent urban vehicle tracking

    NASA Astrophysics Data System (ADS)

    Cho, Peter; Greisokh, Daniel; Anderson, Hyrum; Sandland, Jessica; Knowlton, Robert

    2007-04-01

    We assess the impact of supplementing two-dimensional video with three-dimensional geometry for persistent vehicle tracking in complex urban environments. Using recent video data collected over a city with minimal terrain content, we first quantify erroneous sources of automated tracking termination and identify those which could be ameliorated by detailed height maps. They include imagery misregistration, roadway occlusion and vehicle deceleration. We next develop mathematical models to analyze the tracking value of spatial geometry knowledge in general and high resolution ladar imagery in particular. Simulation results demonstrate how 3D information could eliminate large numbers of false tracks passing through impenetrable structures. Spurious track rejection would permit Kalman filter coasting times to be significantly increased. Track lifetimes for vehicles occluded by trees and buildings as well as for cars slowing down at corners and intersections could consequently be prolonged. We find high resolution 3D imagery can ideally yield an 83% reduction in the rate of automated tracking failure.

  9. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1988-01-19

    approach for the analysis of aerial images. In this approach, image analysis is performed at three levels of abstraction, namely iconic or low-level image analysis, symbolic or medium-level image analysis, and semantic or high-level image analysis. Domain-dependent knowledge about prototypical urban

  10. Monitoring a BLM level 5 watershed with very-large aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A fifth order BLM watershed in central Wyoming was flown using a Sport-airplane to acquire high-resolution aerial images from 2 cameras at 2 altitudes. Project phases 1 and 2 obtained images for measuring ground cover, species composition and canopy cover of Wyoming big sagebrush by ecological site....

  11. Surface Temperature Mapping of the University of Northern Iowa Campus Using High Resolution Thermal Infrared Aerial Imageries

    PubMed Central

    Savelyev, Alexander; Sugumaran, Ramanathan

    2008-01-01

    The goal of this project was to map the surface temperature of the University of Northern Iowa campus using high-resolution thermal infrared aerial imageries. A thermal camera with a spectral bandwidth of 3.0-5.0 μm was flown at an average altitude of 600 m, achieving a ground resolution of 29 cm. Ground control data was used to construct the pixel-to-temperature conversion model, which was later used to produce temperature maps of the entire campus and also for validation of the model. The temperature map was then used to assess the building rooftop conditions and steam line faults in the study area. Assessment of the temperature map revealed a number of building structures that may be candidates for insulation improvement due to their high surface temperatures. Several hot spots corresponding to steam pipeline faults were also identified on the campus. High-resolution thermal infrared imagery proved to be a highly effective tool for precise heat anomaly detection on the campus, and it can be used by university facility services for effective future maintenance of buildings and grounds. PMID:27873800
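
    A pixel-to-temperature conversion of this kind can be sketched as a regression fitted to ground-control measurements and applied to the thermal mosaic; a linear form is assumed below, since the abstract does not state the model's exact form.

```python
# Sketch: calibrate thermal pixel values against coincident ground temperatures,
# then convert the whole thermal band to degrees Celsius.
import numpy as np

def fit_pixel_to_temperature(pixel_values, ground_temps_c):
    slope, intercept = np.polyfit(pixel_values, ground_temps_c, 1)
    return slope, intercept

def temperature_map(thermal_band, slope, intercept):
    return slope * thermal_band.astype(float) + intercept
```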

  12. Wavelet-based detection of bush encroachment in a savanna using multi-temporal aerial photographs and satellite imagery

    NASA Astrophysics Data System (ADS)

    Shekede, Munyaradzi D.; Murwira, Amon; Masocha, Mhosisi

    2015-03-01

    Although increased woody plant abundance has been reported in tropical savannas worldwide, techniques for detecting the direction and magnitude of change are mostly based on visual interpretation of historical aerial photography or textural analysis of multi-temporal satellite images. These techniques are prone to human error and do not permit integration of remotely sensed data from diverse sources. Here, we integrate aerial photographs with high spatial resolution satellite imagery and use a discrete wavelet transform to objectively detect the dynamics in bush encroachment at two protected Zimbabwean savanna sites. Based on the recently introduced intensity-dominant scale approach, we test the hypotheses that: (1) the encroachment of woody patches into the surrounding grassland matrix causes a shift in the dominant scale. This shift in the dominant scale can be detected using a discrete wavelet transform regardless of whether aerial photography or satellite data are used; and (2) as the woody patch size stabilises, woody cover tends to increase, thereby triggering changes in intensity. The results show that at the first site, where tree patches were already established (Lake Chivero Game Reserve), between 1972 and 1984 the dominant scale of woody patches initially increased from 8 m before stabilising at 16 m and 32 m between 1984 and 2012, while the intensity fluctuated during the same period. In contrast, at the second site, which was formerly a grass-dominated site (Kyle Game Reserve), we observed an unclear dominant scale in 1972, which later became distinct in 1985, 1996 and 2012. Over the same period, the intensity increased. Our results imply that, using our approach, we can detect and quantify woody/bush patch dynamics in savanna landscapes.
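
    The intensity-dominant scale idea can be illustrated by decomposing an image with a 2-D discrete wavelet transform and summing the detail energy at each level: the level with maximum energy approximates the dominant scale, and the energy itself plays the role of intensity. The wavelet ('haar') and number of levels below are assumptions of the sketch.

```python
# Sketch: per-scale detail energy from a 2-D discrete wavelet transform.
import numpy as np
import pywt

def detail_energy_by_scale(image, wavelet='haar', levels=5):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    energies = []
    for level_details in coeffs[1:]:        # skip the approximation coefficients
        energies.append(sum(np.sum(d ** 2) for d in level_details))
    return energies                         # index 0 = coarsest detail level
```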

  13. Mapping trees outside forests using high-resolution aerial imagery: a comparison of pixel- and object-based classification approaches.

    PubMed

    Meneguzzo, Dacia M; Liknes, Greg C; Nelson, Mark D

    2013-08-01

    Discrete trees and small groups of trees in nonforest settings are considered an essential resource around the world and are collectively referred to as trees outside forests (ToF). ToF provide important functions across the landscape, such as protecting soil and water resources, providing wildlife habitat, and improving farmstead energy efficiency and aesthetics. Despite the significance of ToF, forest and other natural resource inventory programs and geospatial land cover datasets that are available at a national scale do not include comprehensive information regarding ToF in the United States. Additional ground-based data collection and acquisition of specialized imagery to inventory these resources are expensive alternatives. As a potential solution, we identified two remote sensing-based approaches that use free high-resolution aerial imagery from the National Agriculture Imagery Program (NAIP) to map all tree cover in an agriculturally dominant landscape. We compared the results obtained using an unsupervised per-pixel classifier (independent component analysis [ICA]) and an object-based image analysis (OBIA) procedure in Steele County, Minnesota, USA. Three types of accuracy assessments were used to evaluate how each method performed in terms of: (1) producing a county-level estimate of total tree-covered area, (2) correctly locating tree cover on the ground, and (3) how tree cover patch metrics computed from the classified outputs compared to those delineated by a human photo interpreter. Both approaches were found to be viable for mapping tree cover over a broad spatial extent and could serve to supplement ground-based inventory data. The ICA approach produced an estimate of total tree cover more similar to the photo-interpreted result, but the output from the OBIA method was more realistic in terms of describing the actual observed spatial pattern of tree cover.

  14. Automated Identification of Rivers and Shorelines in Aerial Imagery Using Image Texture

    DTIC Science & Technology

    2011-01-01

    defining the criteria for segmenting the image. For these cases certain automated, unsupervised (or minimally supervised) image classification ... banks, image analysis, edge finding, photography, satellite, texture, entropy ... high resolution bank geometry. Much of the globe is covered by various sorts of multi- or hyperspectral imagery and numerous techniques have been

  15. Improving Measurement of Forest Structural Parameters by Co-Registering of High Resolution Aerial Imagery and Low Density LiDAR Data.

    PubMed

    Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan

    2009-01-01

    Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for measurement of forest structural parameters; however, the accuracy of crown width extraction is not satisfactory when using low-density LiDAR, especially in high-canopy-cover forest. We used high resolution aerial imagery with a low-density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image is matched to the aerial image with an automated keypoint search algorithm. As a result, a high registration accuracy of 0.5 pixels was obtained. A local maximum filter, watershed segmentation, and object-oriented image segmentation are used to obtain tree height and crown width. Results indicate that the camera data collected by the integrated LiDAR system plays an important role in registration with aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction compared to using only the low-density LiDAR data.
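
    The crown-delineation chain (local maximum filter plus watershed) can be sketched as below on a canopy height model; the smoothing, minimum tree height and peak-separation settings are illustrative assumptions, and the object-oriented segmentation step is not reproduced.

```python
# Sketch: treetops as local maxima of the CHM, crowns grown by marker-controlled
# watershed segmentation on the inverted CHM.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def delineate_crowns(chm, min_height=2.0, min_distance=5, sigma=1.0):
    smoothed = ndimage.gaussian_filter(chm, sigma)
    peaks = peak_local_max(smoothed, min_distance=min_distance,
                           threshold_abs=min_height)      # (N, 2) treetop coordinates
    markers = np.zeros_like(chm, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    crowns = watershed(-smoothed, markers, mask=chm > min_height)
    return crowns                                          # labelled raster, one label per crown
```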

  16. Image degradation in aerial imagery duplicates. [photographic processing of photographic film and reproduction (copying)

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1975-01-01

    A series of Earth Resources Aircraft Program data flights were made over an aerial test range in Arizona for the evaluation of large cameras. Specifically, both medium altitude and high altitude flights were made to test and evaluate a series of color as well as black-and-white films. Image degradation, inherent in duplication processing, was studied. Resolution losses resulting from resolution characteristics of the film types are given. Color duplicates, in general, are shown to be degraded more than black-and-white films because of the limitations imposed by available aerial color duplicating stock. Results indicate that a greater resolution loss may be expected when the original has higher resolution. Photographs of the duplications are shown.

  17. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step, the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangement of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence achieved pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract features necessary to improve the MLSPC accuracy to the pixel level.
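
    A rough OpenCV sketch of the detection-description-matching chain is given below (LATCH requires the opencv-contrib package); the adaptive Harris variant and the Euclidean-distance/angle outlier filter from the paper are replaced here by plain Harris corners and a ratio test, so this is an illustration only.

```python
# Sketch: Harris corners, LATCH binary descriptors and Hamming-distance matching.
import cv2

def match_patches(mls_ortho, aerial_patch):
    """Both inputs: 8-bit single-channel (grayscale) image patches."""
    def detect_and_describe(img):
        pts = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True)
        kps = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in pts]
        return cv2.xfeatures2d.LATCH_create().compute(img, kps)

    kps1, des1 = detect_and_describe(mls_ortho)
    kps2, des2 = detect_and_describe(aerial_patch)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return kps1, kps2, good
```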

  18. Distributed adaptive framework for multispectral/hyperspectral imagery and three-dimensional point cloud fusion

    NASA Astrophysics Data System (ADS)

    Rand, Robert S.; Khuon, Timothy; Truslow, Eric

    2016-07-01

    A proposed framework using spectral and spatial information is introduced for neural net multisensor data fusion. This consists of a set of independent-sensor neural nets, one for each sensor (type of data), coupled to a fusion net. The neural net of each sensor is trained from a representative data set of the particular sensor to map to a hypothesis space output. The decision outputs from the sensor nets are used to train the fusion net to an overall decision. During the initial processing, three-dimensional (3-D) point cloud data (PCD) are segmented using a multidimensional mean-shift algorithm into clustered objects. Concurrently, multiband spectral imagery data (multispectral or hyperspectral) are spectrally segmented by the stochastic expectation-maximization into a cluster map containing (spectral-based) pixel classes. For the proposed sensor fusion, spatial detections and spectral detections complement each other. They are fused into final detections by a cascaded neural network, which consists of two levels of neural nets. The success of the approach in utilizing sensor synergism for an enhanced classification is demonstrated for the specific case of classifying hyperspectral imagery and PCD extracted from LIDAR, obtained from an airborne data collection over the campus of University of Southern Mississippi, Gulfport, Mississippi.

  19. Part task investigation of multispectral image fusion using gray scale and synthetic color night-vision sensor imagery for helicopter pilotage

    NASA Astrophysics Data System (ADS)

    Steele, Paul M.; Perconti, Philip

    1997-06-01

    Today, night vision sensor and display systems used in the pilotage or navigation of military helicopters are either long-wave IR thermal sensors (8-12 microns) or image-intensified visible and near-IR (0.6-0.9 microns) sensors. The sensor imagery is displayed using a monochrome phosphor on a Cathode Ray Tube or night vision goggle. Currently, there is no fielded capability to combine the best attributes of the emissive radiation sensed by the thermal sensor and the reflected radiation sensed by the image-intensified sensor into a single fused image. However, recent advances in signal processing have permitted real-time image fusion and display of multispectral sensors in either monochrome or synthetic chromatic form. The merits of such signal processing are explored. A part-task simulation using a desktop computer, video playback unit, and a biocular head mounted display was conducted. Response time and accuracy measures of test subject responses to visual perception tasks were taken. Subjective ratings were collected to determine levels of pilot acceptance. In general, fusion-based formats resulted in better subject performance. The benefits of integrating synthetic color into fused imagery, however, depend on the color algorithm used, the visual task performed, and scene content.

  20. An Automated Approach to Extracting River Bank Locations from Aerial Imagery Using Image Texture

    DTIC Science & Technology

    2015-11-04

    being analyzed, rl is the local range of values across the pixels and rm is the maximum possible range of values. Algorithm: Imagery must first be ... River, LA. The case presented in Figures 1 and 6 represents an ideal case for demonstrating the algorithm in that the surface of the water appears uniform ... x 1400 pixel image. A human operator loaded the image in the open-source Quantum GIS programme and traced the edges to create an ESRI shape file, which
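
    The normalised local-range texture referred to in the fragment above (rl, the range of values within a moving window, divided by rm, the maximum possible range) can be computed as below; the window size is an illustrative assumption.

```python
# Sketch: normalised local-range texture, rl / rm, in [0, 1].
import numpy as np
from scipy import ndimage

def local_range_texture(band, window=7):
    band = band.astype(float)
    rl = (ndimage.maximum_filter(band, size=window)
          - ndimage.minimum_filter(band, size=window))   # local range within the window
    rm = band.max() - band.min()                          # maximum possible range
    return rl / rm if rm > 0 else np.zeros_like(rl)
```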

  1. Derivation of River Bathymetry Using Imagery from Unmanned Aerial Vehicles (UAV)

    DTIC Science & Technology

    2011-09-01

    from gamma rays to radio waves. Near the center of this spectrum are the wavelengths that are of concern for derivation of bathymetry from imagery... airborne manned platforms have been used for bathymetric derivation, but are not in abundance, nor do they have the spatial resolution required to...regarding river water depths, which is a necessity for safe operational planning. Satellite sensors and airborne manned platforms have been used for

  2. [Soil Salinity Estimation Based on Near-Ground Multispectral Imagery in Typical Area of the Yellow River Delta].

    PubMed

    Zhang, Tong-rui; Zhao, Geng-xing; Gao, Ming-xiu; Wang, Zhuo-ran; Jia, Ji-chao; Li, Ping; An, De-yu

    2016-01-01

    This study chose the core demonstration area of the 'Bohai Barn' project, located in Wudi, Shandong Province, as the study area. We first collected near-ground multispectral images and surface soil salinity data using an ADC portable multispectral camera and an EC110 portable salinometer. Then three vegetation indices, namely NDVI, SAVI and GNDVI, were used to build 18 models with the actual measured soil salinity. These models include linear, exponential, logarithmic, power, quadratic and cubic functions, from which the best model for soil salinity estimation was selected and used for inverting and analyzing the soil salinity status of the study area. Results indicated that all models mentioned above could effectively estimate soil salinity, and models based on SAVI were more effective than the others. Among the SAVI models, the linear model (Y = -0.524x + 0.663, n = 70) is the best, with the highest F value of 141.347 in the significance test, an estimated R2 of 0.797, and an accuracy of 93.36%. Soil salinity of the study area is mainly in the range of 2.5-3.5 per thousand, and gradually increases from southwest to northeast. The study has probed into soil salinity estimation methods based on near-ground multispectral data, and provides a quick and effective technical approach for soil salinity estimation in the coastal saline soils of the study area and the whole Yellow River Delta.
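
    The selected model can be applied as below: compute SAVI from red and near-infrared reflectance (the soil adjustment factor L = 0.5 is a common default and an assumption here) and evaluate the reported linear relation Y = -0.524x + 0.663 over the scene.

```python
# Sketch: SAVI from red/NIR reflectance and the reported linear salinity model.
import numpy as np

def savi(red, nir, L=0.5):
    return (1 + L) * (nir - red) / (nir + red + L)

def estimate_salinity(red, nir):
    # Y = -0.524 * SAVI + 0.663, the linear model reported in the abstract
    return -0.524 * savi(red, nir) + 0.663
```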

  3. Multispectral techniques for general geological surveys evaluation of a four-band photographic system

    NASA Technical Reports Server (NTRS)

    Crowder, D., F.

    1969-01-01

    A general geological survey at 1:62,500 scale of the well exposed rocks of the White Mountains and the adjacent volcanic desert plateau is reported. The tuffs, granites, sedimentary rocks and metavolcanic rocks in this arid region are varicolored and conventional black and white aerial photographs have been a useful mapping aid. A large number of true color and false color aerial photographs and multispectral viewer screen images of the study area are evaluated in order to consider what imagery is the most useful for distinguishing rock types. Photographs of true color film are judged the most useful for recognizing geographic locations.

  4. On the Role of Urban and Vegetative Land Cover in the Identification of Tornado Damage Using Dual-Resolution Multispectral Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Kingfield, D.; de Beurs, K.

    2014-12-01

    It has been demonstrated through various case studies that multispectral satellite imagery can be utilized in the identification of damage caused by a tornado through the change detection process. This process involves differencing the returned surface reflectance between two images and is often summarized through a variety of ratio-based vegetation indices (VIs). Land cover type plays a large contributing role in the change detection process, as the reflectance properties of vegetation can vary based on several factors (e.g. species, greenness, density). Consequently, this provides the possibility for a variable magnitude of loss, making certain land cover regimes less reliable in the damage identification process. Furthermore, the tradeoff between sensor resolution and orbital return period may also play a role in the ability to detect catastrophic loss. Moderate resolution imagery (e.g. Moderate Resolution Imaging Spectroradiometer (MODIS)) provides relatively coarse surface detail with a higher update rate, which could hinder the identification of small regions that underwent a dynamic change. Alternatively, imagery with higher spatial resolution (e.g. Landsat) has a longer temporal return period between successive images, which could allow natural recovery to cause underestimation of the absolute magnitude of damage incurred. This study evaluates the role of land cover type and sensor resolution for four high-end (EF3+) tornado events occurring in four different land cover groups (agriculture, forest, grassland, urban) in the spring season. The closest successive clear images from both Landsat 5 and MODIS are quality controlled for each case. Transects of surface reflectance across a homogeneous land cover type both inside and outside the damage swath are extracted. These metrics are synthesized through the calculation of six different VIs to rank the calculated change metrics by land cover type, sensor resolution and VI.
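
    The change-detection core can be sketched as a vegetation-index difference sampled along transect pixel coordinates; which index and sensor combination performs best is exactly what the study evaluates, so NDVI is used below only as an example.

```python
# Sketch: NDVI change between pre- and post-event scenes, sampled along a transect.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def ndvi_change_along_transect(nir_pre, red_pre, nir_post, red_post, rows, cols):
    d_ndvi = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
    return d_ndvi[rows, cols]        # negative values indicate vegetation loss
```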

  5. 3D Building Modeling and Reconstruction using Photometric Satellite and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Izadi, Mohammad

    In this thesis, the problem of three dimensional (3D) reconstruction of building models using photometric satellite and aerial images is investigated. Here, two systems are presented: 1) 3D building reconstruction using a nadir single-view image, and 2) 3D building reconstruction using slant multiple-view aerial images. The first system detects building rooftops in orthogonal aerial/satellite images using a hierarchical segmentation algorithm and a shadow verification approach. The heights of detected buildings are then estimated using a fuzzy rule-based method, which measures the height of a building by comparing its predicted shadow region with the actual shadow evidence in the image. This system finally generates a KML (Keyhole Markup Language) file as the output, which contains 3D models of the detected buildings. The second system uses the geolocation information of a scene containing a building of interest and uploads all slant-view images that contain this scene from an input image dataset. These images are then searched automatically to choose image pairs with different views of the scene (north, east, south and west) based on the geolocation and auxiliary data accompanying the input data (metadata that describes the acquisition parameters at the capture time). The camera parameters corresponding to these images are refined using a novel point matching algorithm. Next, the system independently reconstructs 3D flat surfaces that are visible in each view using an iterative algorithm. The 3D surfaces generated for all views are combined, and redundant surfaces are removed to create a complete set of 3D surfaces. Finally, the combined 3D surfaces are connected together to generate a more complete 3D model. For the experimental results, both presented systems are evaluated quantitatively and qualitatively, and different aspects of the two systems, including accuracy, stability, and execution time, are discussed.

  6. Oil slick studies using photographic and multispectral scanner data.

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.; Macintyre, W. G.; Penney, M. E.; Oberholtzer, J. D.

    1971-01-01

    Field studies of spills of Nos. 6 (Bunker C), 4, and 2 fuel oils and menhaden fish oil in the southern Chesapeake Bay have been supplemented with aerial photographic and multispectral scanner data. Thin films showed best in ultraviolet and blue bands and thick films in the green. Color film was effective for all thicknesses. Thermal infrared imagery provided clear detection, but required field temperature and thickness data to distinguish thickness/emissivity variations from temperature variations. Slick spreading rates agree with the theory of Fay (1969); further study of spreading is in progress.

  7. Mapping Urban Tree Canopy Coverage and Structure using Data Fusion of High Resolution Satellite Imagery and Aerial Lidar

    NASA Astrophysics Data System (ADS)

    Elmes, A.; Rogan, J.; Williams, C. A.; Martin, D. G.; Ratick, S.; Nowak, D.

    2015-12-01

    Urban tree canopy (UTC) coverage is a critical component of sustainable urban areas. Trees provide a number of important ecosystem services, including air pollution mitigation, water runoff control, and aesthetic and cultural values. Critically, urban trees also act to mitigate the urban heat island (UHI) effect by shading impervious surfaces and via evaporative cooling. The cooling effect of urban trees can be seen locally, with individual trees reducing home HVAC costs, and at a citywide scale, reducing the extent and magnitude of an urban area's UHI. In order to accurately model the ecosystem services of a given urban forest, it is essential to map in detail the condition and composition of these trees at a fine scale, capturing individual tree crowns and their vertical structure. This paper presents methods for delineating UTC and measuring canopy structure at fine spatial resolution (<1m). These metrics are essential for modeling the HVAC benefits from UTC for individual homes, and for assessing the ecosystem services for entire urban areas. Such maps have previously been made using a variety of methods, typically relying on high resolution aerial or satellite imagery. This paper seeks to contribute to this growing body of methods, relying on a data fusion method to combine the information contained in high resolution WorldView-3 satellite imagery and aerial lidar data using an object-based image classification approach. The study area, Worcester, MA, has recently undergone a large-scale tree removal and reforestation program, following a pest eradication effort. Therefore, the urban canopy in this location provides a wide mix of tree age class and functional type, ideal for illustrating the effectiveness of the proposed methods. Early results show that the object-based classifier is indeed capable of identifying individual tree crowns, while continued research will focus on extracting crown structural characteristics using lidar-derived metrics. Ultimately

  8. Detection and spatiotemporal analysis of methane ebullition on thermokarst lake ice using high-resolution optical aerial imagery

    NASA Astrophysics Data System (ADS)

    Lindgren, P. R.; Grosse, G.; Anthony, K. M. Walter; Meyer, F. J.

    2016-01-01

    Thermokarst lakes are important emitters of methane, a potent greenhouse gas. However, accurate estimation of methane flux from thermokarst lakes is difficult due to their remoteness and observational challenges associated with the heterogeneous nature of ebullition. We used high-resolution (9-11 cm) snow-free aerial images of an interior Alaskan thermokarst lake acquired 2 and 4 days following freeze-up in 2011 and 2012, respectively, to detect and characterize methane ebullition seeps and to estimate whole-lake ebullition. Bubbles impeded by the lake ice sheet form distinct white patches as a function of bubbling when lake ice grows downward and around them, trapping the gas in the ice. Our aerial imagery thus captured a snapshot of bubbles trapped in lake ice during the ebullition events that occurred before the image acquisition. Image analysis showed that low-flux A- and B-type seeps are associated with low brightness patches and are statistically distinct from high-flux C-type and hotspot seeps associated with high brightness patches. Mean whole-lake ebullition based on optical image analysis in combination with bubble-trap flux measurements was estimated to be 174 ± 28 and 216 ± 33 mL gas m⁻² d⁻¹ for the years 2011 and 2012, respectively. A large number of seeps demonstrated spatiotemporal stability over our 2-year study period. A strong inverse exponential relationship (R² ≥ 0.79) was found between the percent of the surface area of lake ice covered with bubble patches and distance from the active thermokarst lake margin. Even though the narrow timing of optical image acquisition is a critical factor, with respect to both atmospheric pressure changes and snow/no-snow conditions during early lake freeze-up, our study shows that optical remote sensing is a powerful tool to map ebullition seeps on lake ice, to identify their relative strength of ebullition, and to assess their spatiotemporal variability.
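
    The reported inverse exponential relationship between bubble-patch coverage and distance from the lake margin can be illustrated with a simple curve fit; the distance and coverage values below are hypothetical, not the study's data:

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical observations: distance from the thermokarst margin (m) and
      # percent of lake-ice area covered by bubble patches.
      distance = np.array([5, 20, 50, 100, 200, 400], dtype=float)
      coverage = np.array([9.5, 8.0, 6.0, 3.8, 1.4, 0.2])

      def inv_exp(x, a, b):
          # Inverse exponential decay model: coverage = a * exp(-b * distance)
          return a * np.exp(-b * x)

      (a, b), _ = curve_fit(inv_exp, distance, coverage, p0=(10.0, 0.01))

      # Coefficient of determination of the fitted decay curve
      residuals = coverage - inv_exp(distance, a, b)
      r2 = 1 - np.sum(residuals**2) / np.sum((coverage - coverage.mean())**2)
      print(f"a={a:.2f}, b={b:.4f}, R^2={r2:.2f}")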

  9. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
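
    A minimal sketch of the kind of vertical accuracy check described above, comparing DSM elevations sampled at check points against survey-grade GPS elevations; the numbers below are illustrative placeholders:

      import numpy as np

      def vertical_rmse(dsm_heights, gps_heights):
          # Root-mean-square error between DSM-sampled and survey-grade GPS elevations
          diff = np.asarray(dsm_heights) - np.asarray(gps_heights)
          return float(np.sqrt(np.mean(diff**2)))

      # Hypothetical check-point elevations (metres)
      dsm = [101.12, 99.87, 100.43, 102.05]
      gps = [100.95, 100.02, 100.60, 101.80]
      print(f"vertical RMSE: {vertical_rmse(dsm, gps):.3f} m")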

  10. Applicability of ERTS-1 imagery to the study of suspended sediment and aquatic fronts

    NASA Technical Reports Server (NTRS)

    Klemas, V.; Srna, R.; Treasure, W.; Otley, M.

    1973-01-01

    Imagery from three successful ERTS-1 passes over the Delaware Bay and Atlantic Coastal Region has been evaluated to determine visibility of aquatic features. Data gathered from ground truth teams before and during the overflights, in conjunction with aerial photographs taken at various altitudes, were used to interpret the imagery. The overpasses took place on August 16, October 10, 1972, and January 26, 1973, with cloud cover ranging from about zero to twenty percent. (I.D. Nos. 1024-15073, 1079-15133, and 1187-15140). Visual inspection, density slicing and multispectral analysis of the imagery revealed strong suspended sediment patterns and several distinct types of aquatic interfaces or frontal systems.

  11. A study of integration methods of aerial imagery and LIDAR data for a high level of automation in 3D building reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Suyoung; Schenk, Toni F.

    2003-04-01

    This paper describes integration methods to increase the level of automation in building reconstruction. Aerial imagery has long been used as a major source in mapping, and, in recent years, LIDAR data have become popular as another type of mapping resource. In terms of their performance, aerial imagery can delineate object boundaries but leaves many parts of boundaries missing during feature extraction, whereas LIDAR data provide direct information about the heights of object surfaces but are limited in boundary localization. Efficient methods using the complementary characteristics of the two sensors are described to generate hypotheses of building boundaries and localize object features. Tree structures for grid contours of LIDAR data are used for contour interpretation. Buildings are recognized by analyzing the contour trees and modeled with surface patches derived from the LIDAR data. Hypotheses of building models are generated as combinations of wing models and verified by assessing the consistency between the corresponding data sets. Experiments using aerial imagery and laser data are presented. The results show that building boundaries are successfully recognized through the contour analysis approach, and that the inference from contours and the wing-based modeling method increase the level of automation in the hypothesis generation and verification steps.

  12. Semi-Automated Approach for Mapping Urban Trees from Integrated Aerial LiDAR Point Cloud and Digital Imagery Datasets

    NASA Astrophysics Data System (ADS)

    Dogon-Yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-09-01

    Mapping of trees plays an important role in modern urban spatial data management, as many benefits and applications derive from such detailed, up-to-date data sources. Timely and accurate acquisition of information on the condition of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting trees include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints, such as labour-intensive field work and high financial cost, which can be overcome by means of integrated LiDAR and digital image datasets. Compared to most studies on tree extraction, which focus mainly on purely forested areas, this study concentrates on urban areas, which have a high structural complexity with a multitude of different objects. This paper presents a workflow for a semi-automated approach to extracting urban trees from integrated processing of airborne LiDAR point cloud and multispectral digital image datasets over the city of Istanbul, Turkey. The paper shows that the integrated datasets are a suitable and viable source of information for urban tree management. In conclusion, the extracted information provides a snapshot of the location, composition and extent of trees in the study area that is useful to city planners and other decision makers in understanding how much canopy cover exists, identifying new planting, removal or reforestation opportunities, and determining which locations have the greatest need or potential to maximize the benefits of return on investment. It can also help track trends or changes to the urban trees over time and inform future management decisions.

  13. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle

    SciTech Connect

    Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; Glenn, Nancy F.

    2012-09-17

    During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).

  14. The influence of the in situ camera calibration for direct georeferencing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Barrios, R.; Centeno, J.

    2014-11-01

    The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation in the process due to the GNSS/INS technologies, the accuracies of the obtained results depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. One sub-group of parameters (lever arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame due to the impossibility of having all sensors in the same position and orientation on the airborne platform. Another sub-group of parameters models the internal characteristics of the sensor (IOP). A system calibration procedure has been recommended by worldwide studies to obtain accurate parameters (mounting and sensor characteristics) for applications of the direct sensor orientation. Commonly, mounting and sensor characteristics are not stable; they can vary in different flight conditions. The system calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which are not available in the conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of the in situ camera calibration to improve the accuracy of the direct georeferencing of aerial images. The camera calibration uses a minimum image block, extracted from the conventional photogrammetric flight, and control point arrangement. A digital Vexcel UltraCam XP camera connected to a POS AV™ system was used to acquire two photogrammetric image blocks. The blocks have different flight directions and opposite flight lines. In situ calibration procedures to compute different sets of IOPs are performed and their results are analyzed and used in photogrammetric experiments. The IOPs

  15. Semantic segmentation of forest stands of pure species combining airborne lidar data and very high resolution multispectral imagery

    NASA Astrophysics Data System (ADS)

    Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet-Brunet, Valérie

    2017-04-01

    Forest stands are the basic units for forest inventory and mapping. Stands are defined as large forested areas (e.g., ⩾ 2 ha) of homogeneous tree species composition and age. Their accurate delineation is usually performed by human operators through visual analysis of very high resolution (VHR) infra-red images. This task is tedious, highly time consuming, and should be automated for scalability and efficient updating purposes. In this paper, a method based on the fusion of airborne lidar data and VHR multispectral images is proposed for the automatic delineation of forest stands containing one dominant species (purity greater than 75%). This is the key preliminary task for updating forest land-cover databases. The multispectral images give information about the tree species whereas 3D lidar point clouds provide geometric information on the trees and allow their individual extraction. Multi-modal features are computed, both at pixel and object levels: the objects are individual trees extracted from lidar data. A supervised classification is then performed at the object level in order to coarsely discriminate the existing tree species in each area of interest. The classification results are further processed to obtain homogeneous areas with smooth borders by employing an energy minimization framework, where additional constraints are joined to form the energy function. The experimental results show that the proposed method provides very satisfactory results both in terms of stand labeling and delineation (overall accuracy ranges between 84% and 99%).

  16. Parameter optimization of image classification techniques to delineate crowns of coppice trees on UltraCam-D aerial imagery in woodlands

    NASA Astrophysics Data System (ADS)

    Erfanifard, Yousef; Stereńczak, Krzysztof; Behnia, Negin

    2014-01-01

    The need to estimate optimal parameters is a drawback of some classification techniques, as parameter choice affects their performance for a given dataset and can reduce classification accuracy. This study aimed to optimize the combination of effective parameters of support vector machine (SVM), artificial neural network (ANN), and object-based image analysis (OBIA) classification techniques using the Taguchi method. The optimized techniques were applied to delineate crowns of Persian oak coppice trees on UltraCam-D very high spatial resolution aerial imagery in Zagros semiarid woodlands, Iran. The imagery was classified and the maps were assessed by receiver operating characteristic (ROC) curves and other performance metrics. The results showed that Taguchi is a robust approach to optimize the combination of effective parameters in these image classification techniques. The area under curve (AUC) showed that the optimized OBIA could well discriminate tree crowns on the imagery (AUC=0.897), while SVM and ANN yielded slightly lower AUC values of 0.819 and 0.850, respectively. The indices of accuracy (0.999) and precision (0.999) and performance metrics of specificity (0.999) and sensitivity (0.999) in the optimized OBIA were higher than with the other techniques. The optimization of effective parameters of image classification techniques by the Taguchi method thus provided encouraging results for discriminating the crowns of Persian oak coppice trees on UltraCam-D aerial imagery in Zagros semiarid woodlands.
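
    A small sketch of the kind of ROC/AUC and sensitivity/specificity evaluation mentioned above, assuming per-pixel crown probabilities from any of the classifiers; scikit-learn is assumed to be available and the labels and scores below are invented:

      import numpy as np
      from sklearn.metrics import roc_auc_score, confusion_matrix

      # Hypothetical per-pixel results: 1 = tree crown, 0 = background
      y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
      y_score = np.array([0.91, 0.20, 0.75, 0.62, 0.35, 0.10, 0.88, 0.45, 0.70, 0.15])

      auc = roc_auc_score(y_true, y_score)
      tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print(f"AUC={auc:.3f}, sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")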

  17. Supervised classification of aerial imagery and multi-source data fusion for flood assessment

    NASA Astrophysics Data System (ADS)

    Sava, E.; Harding, L.; Cervone, G.

    2015-12-01

    Floods are among the most devastating natural hazards and the ability to produce an accurate and timely flood assessment before, during, and after an event is critical for their mitigation and response. Remote sensing technologies have become the de facto approach for observing the Earth and its environment. However, satellite remote sensing data are not always available. For these reasons, it is crucial to develop new techniques in order to produce flood assessments during and after an event. Recent advancements in data fusion techniques of remote sensing with near real time heterogeneous datasets have allowed emergency responders to more efficiently extract increasingly precise and relevant knowledge from the available information. This research presents a fusion technique using satellite remote sensing imagery coupled with non-authoritative data such as Civil Air Patrol (CAP) imagery and tweets. A new computational methodology is proposed based on machine learning algorithms to automatically identify water pixels in CAP imagery. Specifically, wavelet transformations are paired with multiple classifiers, run in parallel, to build models discriminating water and non-water regions. The learned classification models are first tested against a set of control cases, and then used to automatically classify each image separately. A measure of uncertainty is computed for each pixel in an image proportional to the number of models classifying the pixel as water. Geo-tagged tweets are continuously harvested, stored in a MongoDB database, and queried in real time. They are fused with CAP classified data, and with satellite remote sensing derived flood extent results to produce comprehensive flood assessment maps. The final maps are then compared with FEMA generated flood extents to assess their accuracy. The proposed methodology is applied to two test cases: the 2013 floods in Boulder, CO, and the 2015 floods in Texas.
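
    A minimal sketch of the per-pixel uncertainty idea described above, in which agreement is the fraction of the parallel models labeling a pixel as water; the masks below are random placeholders, not CAP classifications:

      import numpy as np

      # Hypothetical binary water masks (1 = water) produced by several independently
      # trained classifiers for the same image tile.
      model_masks = np.stack([
          np.random.randint(0, 2, (50, 50)) for _ in range(5)
      ])

      votes = model_masks.sum(axis=0)                 # how many models call each pixel water
      water_fraction = votes / model_masks.shape[0]   # agreement in [0, 1]

      consensus_water = water_fraction >= 0.5         # simple majority-vote flood mask
      print("majority-vote water pixels:", int(consensus_water.sum()))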

  18. Mapping freshwater deltaic wetlands and aquatic habitats at multiple scales with high-resolution multispectral WorldView-2 imagery and Indicator Species Analysis

    NASA Astrophysics Data System (ADS)

    Lane, C.; Liu, H.; Anenkhonov, O.; Autrey, B.; Chepinoga, V.

    2012-12-01

    Remote sensing technology has long been used in wetland inventory and monitoring though derived wetland maps were limited in applicability and often unsatisfactory largely due to the relatively coarse spatial resolution of conventional satellite imagery. The advent of high-resolution multispectral satellite systems presents new and exciting capabilities in mapping wetland systems with unprecedented accuracy and spatial detail. This research explores and evaluates the use of high-resolution WorldView-2 (WV2) multispectral imagery in identifying and classifying freshwater deltaic wetland vegetation and aquatic habitats in the Selenga River Delta, a Ramsar Wetland of International Importance that drains into Lake Baikal, Russia - a United Nations World Heritage site. A hybrid approach was designed and applied for WV2 image classification consisting of initial unsupervised classification, training data acquisition and analysis, indicator species analysis, and final supervised classification. A hierarchical scheme was defined and adopted for classifying aquatic habitats and wetland vegetation at genus and community levels at a fine scale, while at a coarser scale representing wetland systems as broad substrate and vegetation classes for regional comparisons under various existing wetland classification systems. Rigorous radiometric correction of WV2 images and orthorectification based on GPS-derived ground control points and an ASTER global digital elevation model resulted in 2- to 3-m positional accuracy. We achieved overall classification accuracy of 86.5% for 22 classes of wetland and aquatic habitats at the finest scale and >91% accuracy for broad vegetation and aquatic classes at more generalized scales. At the finest scale, the addition of four new WV2 spectral bands contributed to a classification accuracy increase of 3.5%. The coastal band of WV2 was found to increase the separation between different open water and aquatic habitats, while yellow, red-edge, and

  19. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle

    DOE PAGES

    Hruska, Ryan; Mitchell, Jessica; Anderson, Matthew; ...

    2012-09-17

    During the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral in-flight calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the U.S. Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 meters (based on RMSE).

  20. Improvement of erosion risk modelling using soil information derived from aerial Vis-NIR imagery

    NASA Astrophysics Data System (ADS)

    Ciampalini, Rossano; Raclot, Damien; Le Bissonnais, Yves

    2016-04-01

    The aim of this research is to test the benefit of hyperspectral imagery for characterising soil surface properties for soil erosion modelling purposes. The research area is the Lebna catchment, located in the north of Tunisia (Cap Bon Region). Soil erosion is evaluated with two different soil erosion models: PESERA (Pan-European Soil Erosion Risk Assessment, already used for soil erosion risk mapping for the European Union; Kirkby et al., 2008) and Mesales (Regional Modelling of Soil Erosion Risk, developed by Le Bissonnais et al., 1998, 2002). For this, different sources of soil properties and derived parameters, such as the soil erodibility map and soil crusting map, have been evaluated using four different supports: 1) the IAO soil map (IAO, 2000), 2) the Carte Agricole - CA - (Ministry of Agriculture, Tunisia), 3) a hyperspectral VIS-NIR map - HY - (Gomez et al., 2012; Ciampalini et al., 2012), and 4) a hybrid map - CY - developed here, integrating information from the hyperspectral VIS-NIR and pedological maps. Results show that the data source has a strong influence on the estimation of the parameters for both models, with a more evident sensitivity for PESERA. Compared with the classical pedological data, the VIS-NIR data clearly improve the spatial representation of texture and, consequently, the spatial detail of the results. Differences in the output using different maps are larger for the PESERA model than for Mesales, showing no-change ranges of about 15 to 41% and 53 to 67%, respectively.

  1. Geologic analyses of LANDSAT-1 multispectral imagery of a possible power plant site employing digital and analog image processing. [in Pennsylvania

    NASA Technical Reports Server (NTRS)

    Lovegreen, J. R.; Prosser, W. J.; Millet, R. A.

    1975-01-01

    A site in the Great Valley subsection of the Valley and Ridge physiographic province in eastern Pennsylvania was studied to evaluate the use of digital and analog image processing for geologic investigations. Ground truth at the site was obtained by a field mapping program, a subsurface exploration investigation and a review of available published and unpublished literature. Remote sensing data were analyzed using standard manual techniques. LANDSAT-1 imagery was analyzed using digital image processing employing the multispectral Image 100 system and using analog color processing employing the VP-8 image analyzer. This study deals primarily with linears identified using image processing, the correlation of these linears with known structural features and with linears identified by manual interpretation, and the identification of rock outcrops in areas of extensive vegetative cover using image processing. The results of this study indicate that image processing can be a cost-effective tool for evaluating geologic and linear features for regional studies encompassing large areas such as for power plant siting. Digital image processing can be an effective tool for identifying rock outcrops in areas of heavy vegetative cover.

  2. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and up-to-date 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of buildings' roof models such as flat, gable, hip, and pyramid hip roof models based on deep structures for hierarchical learning of features that are extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework to have an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features for the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept, which consists of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different shapes of roofs, the computation time of learning can be decreased significantly using the pre-trained models. The experimental results highlight the effectiveness of the deep learning approach to detect and extract the pattern of buildings' roofs automatically considering the complementary nature of height and RGB information.
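
    The paper's pre-trained CNN is not reproduced here; the sketch below only illustrates the general shape of a small CNN that maps co-registered RGB plus normalized-height patches to four roof classes. PyTorch is assumed, and the four input channels and layer sizes are arbitrary illustrative choices:

      import torch
      import torch.nn as nn

      class RoofCNN(nn.Module):
          # Small CNN for 4-way roof-type classification (flat, gable, hip, pyramid hip)
          # from co-registered RGB + normalized-height patches (4 input channels).
          def __init__(self, n_classes=4):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.classifier = nn.Linear(64, n_classes)

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      # One hypothetical batch of 8 patches, 64x64 pixels, 4 channels
      logits = RoofCNN()(torch.randn(8, 4, 64, 64))
      print(logits.shape)  # torch.Size([8, 4])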

  3. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    SciTech Connect

    Hess-Flores, Mauricio

    2011-11-10

    reconstruction pre-processing, where an algorithm detects and discards frames that would lead to inaccurate feature matching, camera pose estimation degeneracies or mathematical instability in structure computation based on a residual error comparison between two different match motion models. The presented algorithms were designed for aerial video but have been proven to work across different scene types and camera motions, and for both real and synthetic scenes.

  4. Inlining 3d Reconstruction, Multi-Source Texture Mapping and Semantic Analysis Using Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Poznanska, A. M.

    2016-06-01

    This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland/Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information, the detected roof, wall and ground surfaces are intersected and limited in their extent to form a closed 3D building hull. For texture mapping, the hull polygons are projected into each possible input bitmap to find suitable color sources regarding the coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are being copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and

  5. Detection of Single Standing Dead Trees from Aerial Color Infrared Imagery by Segmentation with Shape and Intensity Priors

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Yao, W.; Heurich, M.; Krzystek, P.; Stilla, U.

    2015-03-01

    Standing dead trees, known as snags, are an essential factor in maintaining biodiversity in forest ecosystems. Combined with their role as carbon sinks, this makes for a compelling reason to study their spatial distribution. This paper presents an integrated method to detect and delineate individual dead tree crowns from color infrared aerial imagery. Our approach consists of two steps which incorporate statistical information about prior distributions of both the image intensities and the shapes of the target objects. In the first step, we perform a Gaussian Mixture Model clustering in the pixel color space with priors on the cluster means, obtaining up to 3 components corresponding to dead trees, living trees, and shadows. We then refine the dead tree regions using a level set segmentation method enriched with a generative model of the dead trees' shape distribution as well as a discriminative model of their pixel intensity distribution. The iterative application of the statistical shape template yields the set of delineated dead crowns. The prior information enforces the consistency of the template's shape variation with the shape manifold defined by manually labeled training examples, which makes it possible to separate crowns located in close proximity and prevents the formation of large crown clusters. Also, the statistical information built into the segmentation gives rise to an implicit detection scheme, because the shape template evolves towards an empty contour if not enough evidence for the object is present in the image. We test our method on 3 sample plots from the Bavarian Forest National Park with reference data obtained by manually marking individual dead tree polygons in the images. Our results are scenario-dependent and range from a correctness/completeness of 0.71/0.81 up to 0.77/1, with an average center-of-gravity displacement of 3-5 pixels between the detected and reference polygons.

  6. Mapping quaternary landforms and deposits in the Midwest and Great Plains by means of ERTS-1 multispectral imagery

    NASA Technical Reports Server (NTRS)

    Morrison, R. B.

    1973-01-01

    ERTS-1 multispectral images are proving effective for differentiating many kinds of Quaternary surficial deposits and landform units in Illinois, Iowa, Missouri, Kansas, Nebraska, and South Dakota. Examples of features that have been distinguished are: (1) the more prominent end moraines of the last glaciation; (2) certain possible palimpsests of older moraines mantled by younger deposits; (3) various abandoned river valleys, including suspected ones deeply filled by deposits; (4) river terraces; and (5) some known faults and a few previously unmapped lineaments that may be faults. The ERTS images are being used for systematic mapping of Quaternary landforms and deposits in about 20 potential study areas. Some study areas, already well mapped, provide checks on the reliability of mapping from the images. For other study areas, previously mapped only partly or not at all, our maps will be the first comprehensive, synoptic ones, and should be useful for regional land-use planning and ground-water, engineering-geology, and other environmental applications.

  7. Assessment of satellite and aircraft multispectral scanner data for strip-mine monitoring

    NASA Technical Reports Server (NTRS)

    Spisz, E. W.; Dooley, J. T.

    1980-01-01

    The application of LANDSAT multispectral scanner data to describe the mining and reclamation changes of a hilltop surface coal mine in the rugged, mountainous area of eastern Kentucky is presented. Original single band satellite imagery, computer enhanced single band imagery, and computer classified imagery are presented for four different data sets in order to demonstrate the land cover changes that can be detected. Data obtained with an 11 band multispectral scanner on board a C-47 aircraft at an altitude of 3000 meters are also presented. Comparing the satellite data with color-infrared aerial photography and ground survey data shows that significant changes in the disrupted area can be detected from LANDSAT band 5 satellite imagery for mines with more than 100 acres of disturbed area. However, band-ratio (bands 5/6) imagery provides greater contrast than single band imagery and can provide a qualitative level 1 classification of the land cover that may be useful for monitoring either the disturbed mining area or the revegetation progress. If a quantitative, accurate classification of the barren or revegetated classes is required, however, it is necessary to perform a detailed, four band computer classification of the data.

  8. Current Usage and Future Prospects of Multispectral (RGB) Satellite Imagery in Support of NWS Forecast Offices and National Centers

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Fuell, Kevin; Knaff, John; Lee, Thomas

    2012-01-01

    What is an RGB Composite Image? (1) Current and future satellite instruments provide remote sensing at a variety of wavelengths. (2) RGB composite imagery assigns individual wavelengths or channel differences to the intensities of the red, green, and blue components of a pixel color. (3) Each red, green, and blue color intensity is related to physical properties within the final composite image. (4) Final color assignments are therefore related to the characteristics of image pixels. (5) Products may simplify the interpretation of data from multiple bands by displaying information in a single image. Current Products and Usage: Collaborations between SPoRT, CIRA, and NRL have facilitated the use and evaluation of RGB products at a variety of NWS forecast offices and National Centers. These products are listed in the accompanying table.
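
    A minimal sketch of how an RGB composite is assembled in practice: each channel (a band or a band difference) is stretched and assigned to the red, green, or blue intensity of the output image. The input arrays below are random placeholders, and the 2nd/98th percentile stretch is an arbitrary illustrative choice:

      import numpy as np

      def rgb_composite(band_r, band_g, band_b, lo=2, hi=98):
          # Stack three channels into a display-ready RGB image, stretching each
          # channel between its lower and upper percentiles.
          out = []
          for band in (band_r, band_g, band_b):
              p_lo, p_hi = np.percentile(band, (lo, hi))
              out.append(np.clip((band - p_lo) / (p_hi - p_lo + 1e-9), 0, 1))
          return np.dstack(out)

      # Hypothetical channels, e.g. a window-channel band, a channel difference, and a visible band
      r, g, b = np.random.rand(3, 200, 200)
      print(rgb_composite(r, g, b).shape)  # (200, 200, 3)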

  9. Hierarchical Object-based Image Analysis approach for classification of sub-meter multispectral imagery in Tanzania

    NASA Astrophysics Data System (ADS)

    Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.

    2015-12-01

    Increasing agricultural production while at the same time preserving the environment has become a challenging task. There is a need for new approaches for use of multi-scale and multi-source remote sensing data as well as ground based measurements for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link the ground based data to more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in non-homogeneous savannah regions of Tanzania, this is an important concern and the traditional pixel based spectral signature approach often fails. Ortho-rectified, calibrated, pan sharpened 0.5 meter resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor were used for this purpose. Multi-scale hierarchical segmentation was performed using a multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationship between super and sub objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches for classification were tested. The Kappa index of agreement for both algorithms surpassed 85%. The results demonstrate that hierarchical OBIA can effectively and accurately discriminate classes even at the LCCS-3 legend level.

  10. The Multispectral Imaging Science Working Group. Volume 3: Appendices

    NASA Technical Reports Server (NTRS)

    Cox, S. C. (Editor)

    1982-01-01

    The status and technology requirements for using multispectral sensor imagery in geographic, hydrologic, and geologic applications are examined. Critical issues in image and information science are identified.

  11. Characterization of instream hydraulic and riparian habitat conditions and stream temperatures of the Upper White River Basin, Washington, using multispectral imaging systems

    USGS Publications Warehouse

    Black, Robert W.; Haggland, Alan; Crosby, Greg

    2003-01-01

    Instream hydraulic and riparian habitat conditions and stream temperatures were characterized for selected stream segments in the Upper White River Basin, Washington. An aerial multispectral imaging system used digital cameras to photograph the stream segments across multiple wavelengths to characterize fish habitat and temperature conditions. All imagery was georeferenced. Fish habitat features were imaged at a resolution of 0.5 meter and temperature imagery was acquired at a 1.0-meter resolution. The digital multispectral imagery was classified using commercially available software. Aerial photographs were taken on September 21, 1999. Field habitat data were collected from August 23 to October 12, 1999, to evaluate the measurement accuracy and effectiveness of the multispectral imaging in determining the extent of the instream habitat variables. Fish habitat types assessed by this method were the abundance of instream hydraulic features such as pool and riffle habitats, turbulent and non-turbulent habitats, riparian composition, the abundance of large woody debris in the stream and riparian zone, and stream temperatures. Factors such as the abundance of instream woody debris, the location and frequency of pools, and stream temperatures generally are known to have a significant impact on salmon. Instream woody debris creates the habitat complexity necessary to maintain a diverse and healthy salmon population. The abundance of pools is indicative of a stream's ability to support fish and other aquatic organisms. Changes in water temperature can affect aquatic organisms by altering metabolic rates and oxygen requirements, altering their sensitivity to toxic materials and affecting their ability to avoid predators. The specific objectives of this project were to evaluate the use of an aerial multispectral imaging system to accurately identify instream hydraulic features and surface-water temperatures in the Upper White River Basin, to use the

  12. Predicting forest structural parameters using the image texture derived from WorldView-2 multispectral imagery in a dryland forest, Israel

    NASA Astrophysics Data System (ADS)

    Ozdemir, Ibrahim; Karnieli, Arnon

    2011-10-01

    Estimation of forest structural parameters by field-based data collection methods is both expensive and time consuming. Satellite remote sensing is a low-cost alternative in modeling and mapping structural parameters in large forest areas. The current study investigates the potential of using WorldView-2 multispectral satellite imagery for predicting forest structural parameters in a dryland plantation forest in Israel. The relationships between image texture features and several structural parameters such as Number of Trees (NT), Basal Area (BA), Stem Volume (SV), Clark-Evans Index (CEI), Diameter Differentiation Index (DDI), Contagion Index (CI), Gini Coefficient (GC), and Standard Deviation of Diameters at Breast Heights (SDDBH) were examined using correlation analyses. These variables were obtained from 30 m × 30 m square-shaped plots. The Standard Deviation of Gray Levels (SDGL) as a first order texture feature and the second order texture variables based on the Gray Level Co-occurrence Matrix (GLCM) were calculated for the pixels that correspond to the field plots. The results of the correlation analysis indicate that the forest structural parameters are significantly correlated with the image texture features. The highest correlation coefficients were calculated for the relationships between the SDDBH and the contrast of the red band (r = 0.75, p < 0.01), the BA and the entropy of the blue band (r = 0.73, p < 0.01), and the GC and the contrast of the blue band (r = 0.71, p < 0.01). Each forest structural parameter was modeled as a function of texture measures derived from the satellite image using stepwise multiple linear regression analyses. The determination coefficient (R²) and root mean square error (RMSE) values of the best fitting models, respectively, are 0.38 and 109.56 ha⁻¹ for the NT; 0.54 and 1.79 m² ha⁻¹ for the BA; 0.42 and 27.18 m³ ha⁻¹ for the SV; 0.23 and 0.16 for the CEI; 0.32 and 0.05 for the DDI; 0.25 and 0.06 for the CI; 0.50 and 0.05 for the GC
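
    A small sketch of the kind of first- and second-order texture features used above (SDGL, GLCM contrast, GLCM entropy) for a single plot window. scikit-image is assumed (recent releases expose graycomatrix/graycoprops; older ones spell them greycomatrix/greycoprops), and the quantization to 64 gray levels is an arbitrary illustrative choice:

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def plot_texture(gray_patch, levels=64):
          # First- and second-order texture features for one plot window
          q = np.floor(gray_patch / gray_patch.max() * (levels - 1)).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          p = glcm.mean(axis=(2, 3))                   # average over distances/angles
          entropy = -np.sum(p * np.log2(p + 1e-12))    # not provided by graycoprops
          return {
              "sdgl": float(gray_patch.std()),         # first-order: std. dev. of gray levels
              "contrast": float(graycoprops(glcm, "contrast").mean()),
              "entropy": float(entropy),
          }

      print(plot_texture(np.random.rand(60, 60) * 255))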

  13. A qualitative evaluation of Landsat imagery of Australian rangelands

    USGS Publications Warehouse

    Graetz, R.D.; Carneggie, David M.; Hacker, R.; Lendon, C.; Wilcox, D.G.

    1976-01-01

    The capability of multidate, multispectral ERTS-1 imagery of three different rangeland areas within Australia was evaluated for its usefulness in preparing inventories of rangeland types, assessing range condition on a broad scale within these rangeland types, and assessing the response of rangelands to rainfall events over large areas. For the three divergent rangeland test areas, centered on Broken Hill, Alice Springs and Kalgoorlie, detailed interpretation of the imagery only partially satisfied the information requirements set. It was most useful in the Broken Hill area where fenceline contrasts in range condition were readily visible. At this and the other sites an overstorey of trees made interpretation difficult. Whilst the low resolution and the lack of stereoscopic coverage hindered interpretation, it was felt that this type of imagery, with its vast coverage, present low cost and potential for repeated sampling, is a useful addition to conventional aerial photography for all rangeland types.

  14. Fusion of LiDAR and aerial imagery for the estimation of downed tree volume using Support Vector Machines classification and region based object fitting

    NASA Astrophysics Data System (ADS)

    Selvarajan, Sowmya

    The study classifies 3D small-footprint, full-waveform digitized LiDAR fused with aerial imagery to detect downed trees using the Support Vector Machines (SVM) algorithm. Using small-footprint waveform LiDAR, airborne LiDAR systems can provide better canopy penetration and very high spatial resolution. The small-footprint waveform scanner Riegl LMS-Q680, together with an UltraCamX aerial camera, is used to measure and map downed trees in a forest. Data preprocessing steps helped to identify ground points in the dense LiDAR dataset and to segment the LiDAR data to reduce the complexity of the algorithm. A haze filtering process helped to differentiate the spectral signatures of the various classes within the aerial image. These processes helped to better select features from both sensors. Six features are utilized: LiDAR height, LiDAR intensity, LiDAR echo, and three image intensities. LiDAR-derived, aerial-image-derived and fused LiDAR-aerial-image-derived features are used to organize the data for the SVM hypothesis formulation. Several variations of the SVM algorithm with different kernels and soft-margin parameter C are tested. The algorithm is implemented to classify downed trees over a pine zone. The LiDAR-derived features provided an overall accuracy of 98% for downed trees but with no classification error of 86%. The image-derived features provided an overall accuracy of 65% and the fusion-derived features resulted in an overall accuracy of 88%. The results are observed to be stable and robust. The SVM accuracies were accompanied by high false alarm rates, with the LiDAR classification producing 58.45%, the image classification producing 95.74% and the fused classification producing 93% false alarm rates. A Canny edge correction filter helped reduce the LiDAR false alarm rate to 35.99%, the image false alarm rate to 48.56% and the fused false alarm rate to 37.69%. The implemented classifiers provided a powerful tool for
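
    A minimal sketch of fitting an RBF-kernel SVM over a six-feature table like the one described above while searching the soft-margin parameter C. scikit-learn is assumed, and the feature values and labels below are synthetic placeholders rather than the study's data:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import GridSearchCV

      # Hypothetical per-point feature table: LiDAR height, intensity, echo number,
      # and three image intensities; labels 1 = downed tree, 0 = other.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 6))
      y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

      svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
      grid = GridSearchCV(svm, {"svc__C": [0.1, 1, 10, 100],
                                "svc__gamma": ["scale", 0.1, 1.0]}, cv=5)
      grid.fit(X, y)
      print("best parameters:", grid.best_params_, "cv accuracy:", round(grid.best_score_, 3))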

  15. [Retrieval of crown closure of moso bamboo forest using unmanned aerial vehicle (UAV) remotely sensed imagery based on geometric-optical model].

    PubMed

    Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long

    2015-05-01

    This research focused on the application of high-spatial-resolution remotely sensed imagery from an unmanned aerial vehicle (UAV) for the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differences in estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with a coefficient of determination (R²) of 0.63 (significant at the 0.01 level) against the measured values acquired during the field survey. The root mean square error (RMSE) of approximately 0.04 was low, indicating that the use of fully constrained linear SMA could produce crown closure estimates closer to the actual conditions in moso bamboo forest.
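
    A small sketch of fully constrained linear spectral unmixing (abundances non-negative and summing to one), here enforced with the common trick of appending a heavily weighted sum-to-one row to a non-negative least-squares problem. The endmember spectra are invented, and this is not necessarily the implementation used in the study:

      import numpy as np
      from scipy.optimize import nnls

      def fully_constrained_sma(pixel, endmembers, weight=1e3):
          # Fully constrained linear unmixing: abundances >= 0 and summing to 1,
          # enforced by a heavily weighted sum-to-one row appended to the system.
          E = np.vstack([endmembers.T, weight * np.ones(endmembers.shape[0])])
          b = np.append(pixel, weight)
          abundances, _ = nnls(E, b)
          return abundances

      # Hypothetical 4-band endmember spectra (rows) for sunlit canopy, shaded canopy, background
      endmembers = np.array([[0.05, 0.08, 0.04, 0.45],
                             [0.02, 0.03, 0.02, 0.20],
                             [0.10, 0.12, 0.15, 0.25]])
      pixel = 0.6 * endmembers[0] + 0.3 * endmembers[1] + 0.1 * endmembers[2]
      print(np.round(fully_constrained_sma(pixel, endmembers), 3))  # approximately [0.6, 0.3, 0.1]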

  16. Airborne multispectral and thermal remote sensing for detecting the onset of crop stress caused by multiple factors

    NASA Astrophysics Data System (ADS)

    Huang, Yanbo; Thomson, Steven J.

    2010-10-01

    Remote sensing technology has been developed and applied to provide spatiotemporal information on crop stress for precision management. A series of multispectral images over a field planted with cotton, corn and soybean were obtained by a Geospatial Systems MS4100 camera mounted on an Air Tractor 402B airplane equipped with Camera Link in a Magma converter box triggered by Terraverde Dragonfly® flight navigation and imaging control software. The field crops were intentionally stressed by applying glyphosate herbicide via aircraft and allowing it to drift near-field. Aerial multispectral images in the visible and near-infrared bands were manipulated to produce vegetation indices, which were used to quantify the onset of herbicide-induced crop stress. The normalized difference vegetation index (NDVI) and soil adjusted vegetation index (SAVI) showed the ability to monitor crop response to herbicide-induced injury by revealing stress at different phenological stages. Two other fields were managed with irrigated versus nonirrigated treatments, and those fields were imaged with both the multispectral system and an Electrophysics PV-320T thermal imaging camera on board an Air Tractor 402B aircraft. Thermal imagery indicated water stress due to deficits in soil moisture, and a proposed method of determining crop cover percentage using thermal imagery was compared with a multispectral imaging method. Development of an image fusion scheme may be necessary to provide synergy and improve overall water stress detection ability.
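
    For reference, SAVI differs from NDVI only by the soil-brightness correction factor L; a minimal sketch with hypothetical reflectance values:

      def savi(nir, red, L=0.5):
          # Soil Adjusted Vegetation Index; L=0.5 is the commonly used soil-brightness factor
          return (1 + L) * (nir - red) / (nir + red + L)

      # Hypothetical reflectance values for an unstressed and a stressed crop pixel
      print(round(savi(0.45, 0.12), 3), round(savi(0.30, 0.18), 3))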

  17. High-resolution spatial patterns of Soil Organic Carbon content derived from low-altitude aerial multi-band imagery on the Broadbalk Wheat Experiment at Rothamsted,UK

    NASA Astrophysics Data System (ADS)

    Aldana Jague, Emilien; Goulding, Keith; Heckrath, Goswin; Macdonald, Andy; Poulton, Paul; Stevens, Antoine; Van Wesemael, Bas; Van Oost, Kristof

    2014-05-01

    Soil organic C (SOC) contents in arable landscapes change as a function of management, climate and topography (Johnston et al., 2009). Traditional methods to measure soil C stocks are labour intensive, time consuming and expensive. Consequently, there is a need for developing low-cost methods for monitoring SOC contents in agricultural soils. Remote sensing methods based on multi-spectral images may help map SOC variation in surface soils. Recently, the costs of both Unmanned Aerial Vehicles (UAVs) and multi-spectral cameras have dropped dramatically, opening up the possibility for more widespread use of these tools for SOC mapping. Long-term field experiments with distinct SOC contents in adjacent plots provide a very useful resource for systematically testing remote sensing approaches for measuring SOC. This study focusses on the Broadbalk Wheat Experiment at Rothamsted (UK). The Broadbalk experiment started in 1843. It is widely acknowledged to be the oldest continuing agronomic field experiment in the world. The initial aim of the experiment was to test the effects of different organic manures and inorganic fertilizers on the yield of winter wheat. The experiment initially contained 18 strips, each about 320m long and 6m wide, separated by paths of 1.5-2.5m wide. The strips were subsequently divided into ten sections (>180 plots) to test the effects of other factors (crop rotation, herbicides, pesticides etc.). The different amounts and combinations of mineral fertilisers (N,P,K,Na & Mg) and Farmyard Manure (FYM) applied to these plots for over 160 years have resulted in very different SOC contents in adjacent plots, ranging between 0.8% and 3.5%. In addition to large inter-plot variability in SOC there is evidence of within-plot trends related to the use of discard areas between plots and movement of soil as a result of ploughing. The objectives of this study are (i) to test whether low-altitude multi-band imagery can be used to accurately predict spatial

  18. Building block extraction and classification by means of Markov random fields using aerial imagery and LiDAR data

    NASA Astrophysics Data System (ADS)

    Bratsolis, E.; Sigelle, M.; Charou, E.

    2016-10-01

    Building detection has been a prominent topic in the area of image classification. Most of the research effort is adapted to the specific application requirements and available datasets. Our dataset includes aerial orthophotos (with spatial resolution 20 cm), a DSM generated from LiDAR (with spatial resolution 1 m and elevation resolution 20 cm) and a DTM (spatial resolution 2 m) from an area of Athens, Greece. Our aim is to classify these data by means of Markov Random Fields (MRFs) in a Bayesian framework for building block extraction and perform a comparative analysis with other supervised classification techniques, namely Feed Forward Neural Net (FFNN), Cascade-Correlation Neural Network (CCNN), Learning Vector Quantization (LVQ) and Support Vector Machines (SVM). We evaluated the performance of each method using a subset of the test area. We present the classified images and statistical measures (confusion matrix, kappa coefficient and overall accuracy). Our results demonstrate that the MRFs and FFNN perform better than the other methods.
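
    A small sketch of the reported accuracy statistics, computing overall accuracy and the kappa coefficient from a confusion matrix; the counts below are invented for illustration:

      import numpy as np

      def kappa_and_oa(confusion):
          # Overall accuracy and Cohen's kappa from a square confusion matrix
          # (rows = reference classes, columns = predicted classes).
          confusion = np.asarray(confusion, dtype=float)
          total = confusion.sum()
          oa = np.trace(confusion) / total
          expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
          kappa = (oa - expected) / (1 - expected)
          return oa, kappa

      # Hypothetical 2-class result (building / non-building) on a test subset
      cm = [[420, 35],
            [28, 517]]
      oa, kappa = kappa_and_oa(cm)
      print(f"OA={oa:.3f}, kappa={kappa:.3f}")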

  19. Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue

    2015-04-01

    Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar rule was obtained with pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.

  20. Use of ultra-high spatial resolution aerial imagery in the estimation of chaparral wildfire fuel loads.

    PubMed

    Schmidt, Ian T; O'Leary, John F; Stow, Douglas A; Uyeda, Kellie A; Riggan, Phillip J

    2016-12-01

    Development of methods that more accurately estimate spatial distributions of fuel loads in shrublands allows for improved understanding of ecological processes such as wildfire behavior and postburn recovery. The goal of this study is to develop and test remote sensing methods to upscale field estimates of shrubland fuel to broader-scale biomass estimates using ultra-high spatial resolution imagery captured by a light-sport aircraft. The study is conducted on chaparral shrublands located in eastern San Diego County, CA, USA. We measured the fuel load in the field using a regression relationship between basal area and aboveground biomass of shrubs, and estimated the ground areal coverage of individual shrub species using ultra-high spatial resolution imagery and image processing routines. Study results show a strong relationship between image-derived shrub coverage and field-measured fuel loads in three even-age stands that have regrown approximately 7, 28, and 68 years since the last wildfire. We conducted ordinary least squares analysis using ground coverage as the independent variable regressed against biomass. The analysis yielded R² values ranging from 0.80 to 0.96 in the older stands for the live shrub species, while R² values for species in the younger stands ranged from 0.32 to 0.89. Pooling species-based data into larger sample sizes consisting of a functional group and all-shrub classes while obtaining suitable linear regression models supports the potential for these methods to be used for upscaling fuel estimates to broader areal extents, without having to classify and map shrubland vegetation at the species level.
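
    As a rough illustration of the plot-level regression step described above, the following sketch fits an ordinary least squares line between image-derived shrub cover and field-measured fuel load; the cover percentages and fuel values are invented placeholders, not the study's measurements.

```python
# Minimal sketch (hypothetical numbers): ordinary least squares between
# image-derived shrub ground coverage and field-estimated fuel load.
import numpy as np

coverage = np.array([12.0, 25.0, 33.0, 48.0, 61.0, 74.0, 85.0])  # % shrub cover per plot
fuel = np.array([0.4, 0.9, 1.1, 1.8, 2.3, 2.9, 3.4])             # fuel load, kg/m^2

slope, intercept = np.polyfit(coverage, fuel, deg=1)
pred = slope * coverage + intercept
ss_res = np.sum((fuel - pred) ** 2)
ss_tot = np.sum((fuel - fuel.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"fuel ~ {slope:.3f} * cover + {intercept:.3f},  R^2 = {r2:.2f}")

# Upscaling idea: apply the fitted relation to image-derived coverage elsewhere.
print(slope * np.array([40.0, 55.0]) + intercept)
```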

  1. Low-altitude aerial imagery and related field observations associated with unmanned aerial systems (UAS) flights over Coast Guard Beach, Nauset Spit, Nauset Inlet, and Nauset Marsh, Cape Cod National Seashore, Eastham, Massachusetts on 1 March 2016

    USGS Publications Warehouse

    Sherwood, Christopher R.

    2016-01-01

    launch site; they have horizontal and vertical uncertainties of approximately +/- 0.03 m. The locations of the ground control points can be used to constrain photogrammetric reconstructions based on the aerial imagery. The locations of the 144 transect points can be used for independent evaluation of the photogrammetric products. This data release includes the four sets of original aerial images; tables listing the image file names and locations; locations of the 140 transect points; and locations of the ground control points with photographs of the four in-place features and images showing the location of the two a posteriori points at two zoom levels. Collection of these data was supported by the USGS Coastal and Marine Geology Program and the USGS Innovation Center and was conducted under USGS field activity number 2016-007-FA and National Park Service Scientific Research and Collecting Permit, study number CACO-00285, permit number CACO-2016-SCI-003. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

  2. A comparison of real and simulated airborne multisensor imagery

    NASA Astrophysics Data System (ADS)

    Bloechl, Kevin; De Angelis, Chris; Gartley, Michael; Kerekes, John; Nance, C. Eric

    2014-06-01

    This paper presents a methodology and results for the comparison of simulated imagery to real imagery acquired with multiple sensors hosted on an airborne platform. The dataset includes aerial multi- and hyperspectral imagery with spatial resolutions of one meter or less. The multispectral imagery includes data from an airborne sensor with three-band visible color and calibrated radiance imagery in the long-, mid-, and short-wave infrared. The airborne hyperspectral imagery includes 360 bands of calibrated radiance and reflectance data spanning 400 to 2450 nm in wavelength. Collected in September 2012, the imagery is of a park in Avon, NY, and includes a dirt track and areas of grass, gravel, forest, and agricultural fields. A number of artificial targets were deployed in the scene prior to collection for purposes of target detection, subpixel detection, spectral unmixing, and 3D object recognition. A synthetic reconstruction of the collection site was created in DIRSIG, an image generation and modeling tool developed by the Rochester Institute of Technology, based on ground-measured reflectance data, ground photography, and previous airborne imagery. Simulated airborne images were generated using the scene model, time of observation, estimates of the atmospheric conditions, and approximations of the sensor characteristics. The paper provides a comparison between the empirical and simulated images, including a comparison of achieved performance for classification, detection and unmixing applications. It was found that several differences exist due to the way the image is generated, including finite sampling and incomplete knowledge of the scene, atmospheric conditions and sensor characteristics. The lessons learned from this effort can be used in constructing future simulated scenes and further comparisons between real and simulated imagery.

  3. Identification of areas of recharge and discharge using Landsat-TM satellite imagery and aerial photography mapping techniques

    NASA Astrophysics Data System (ADS)

    Salama, R. B.; Tapley, I.; Ishii, T.; Hawkes, G.

    1994-10-01

    Aerial photographs (AP) and Landsat (TM) colour composites were used to map the geomorphology, geology and structures of the Salt River System of Western Australia. Geomorphic features identified are sand plains, dissected etchplain, colluvium, lateritic duricrust and rock outcrops. The hydrogeomorphic units include streams, lakes and playas, palaeochannels and palaeodeltas. The structural features are linear and curvilinear lineaments, ring structures and dolerite dykes. Suture lines control the course of the main river channel. Permeable areas around the circular granitic plutons were found to be the main areas of recharge in the uplands. Recharge was also found to occur in the highly permeable areas of the sandplains. Discharge was shown to be primarily along the main drainage lines, on the edge of the circular sandplains, in depressions and in lakes. The groundwater occurrence and hydrogeological classification of the recharge potential of the different units were used to classify the mapped areas into recharge and discharge zones. The results also show that TM colour composites provide a viable source of data comparable with AP for mapping and delineating areas of recharge and discharge on a regional scale.

  4. Undercomplete learned dictionaries for land cover classification in multispectral imagery of Arctic landscapes using CoSA: clustering of sparse approximations

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; Gangodagamage, Chandana

    2013-05-01

    Techniques for automated feature extraction, including neuroscience-inspired machine vision, are of great interest for landscape characterization and change detection in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methodologies to the environmental sciences, using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. We use a Hebbian learning rule to build undercomplete spectral-textural dictionaries that are adapted to the data. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using our CoSA algorithm: unsupervised Clustering of Sparse Approximations. We demonstrate our method using multispectral Worldview-2 data from three Arctic study areas: Barrow, Alaska; the Selawik River, Alaska; and a watershed near the Mackenzie River delta in northwest Canada. Our goal is to develop a robust classification methodology that will allow for the automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and geomorphic characteristics. To interpret and assign land cover categories to the clusters we both evaluate the spectral properties of the clusters and compare the clusters to both field- and remote sensing-derived classifications of landscape attributes. Our work suggests that neuroscience-based models are a promising approach to practical pattern recognition problems in remote sensing.

  5. Analysis of Biophysical Mechanisms of Gilgai Microrelief Formation in Dryland Swelling Soils Using Ultra-High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Krell, N.; DeCarlo, K. F.; Caylor, K. K.

    2015-12-01

    Microrelief formations ("gilgai"), which form due to successive wetting-drying cycles typical of swelling soils, provide ecological hotspots for local fauna and flora, including higher and more robust vegetative growth. The distribution of these gilgai suggests a remarkable degree of regularity. However, it is unclear to what extent the mechanisms that drive gilgai formation are physical, such as desiccation-induced fracturing, or biological in nature, namely antecedent vegetative clustering. We investigated gilgai genesis and pattern formation in a 100 x 100 meter study area with swelling soils in a semiarid grassland at the Mpala Research Center in central Kenya. Our ongoing experiment is composed of three 9 m² treatments: we removed gilgai and limited vegetative growth by herbicide application in one plot, allowed for unrestricted seed dispersal in another, and left gilgai unobstructed in a control plot. To estimate the spatial frequencies of the repeating patterns of gilgai, we obtained ultra-high resolution (0.01-0.03 m/pixel) images with an unmanned aerial vehicle (UAV), from which digital elevation models were also generated. Geostatistical analyses using wavelet and Fourier methods in one and two dimensions were employed to characterize gilgai size and distribution. Preliminary results support regular spatial patterning across the gilgaied landscape, and heterogeneities may be related to local soil properties and biophysical influences. Local data on gilgai and fracture characteristics suggest that gilgai form at characteristic heights and spacing based on fracture morphology: deep, wide cracks result in large, highly vegetated mounds, whereas shallow cracks, induced by animal trails, are less correlated with gilgai size and shape. Our experiments will help elucidate the links between shrink-swell processes and gilgai-vegetation patterning in high-activity clay soils and advance our understanding of the mechanisms of gilgai formation in drylands.

  6. An Automated Approach to Agricultural Tile Drain Detection and Extraction Utilizing High Resolution Aerial Imagery and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Johansen, Richard A.

    Subsurface drainage from agricultural fields in the Maumee River watershed is suspected to adversely impact water quality and contribute to the formation of harmful algal blooms (HABs) in Lake Erie. In early August of 2014, a HAB developed in the western Lake Erie Basin that resulted in over 400,000 people being unable to drink their tap water due to the presence of a toxin from the bloom. HAB development in Lake Erie is aided by excess nutrients from agricultural fields, which are transported through subsurface tile and enter the watershed. Compounding the issue, the trend within the Maumee watershed has been to increase the installation of tile drains in both total extent and density. Due to the immense area of drained fields, there is a need to establish an accurate and effective technique to monitor subsurface farmland tile installations and their associated impacts. This thesis aimed to develop an automated method to identify subsurface tile locations from high resolution aerial imagery by applying an object-based image analysis (OBIA) approach in eCognition. This process was accomplished through a set of algorithms and image filters, which segment and classify image objects by their spectral and geometric characteristics. The algorithms utilized were based on the relative location of image objects and pixels, in order to maximize the robustness and transferability of the final rule-set. These algorithms were coupled with convolution and histogram image filters to generate results for a 10 km² study area located within Clay Township in Ottawa County, Ohio. The eCognition results were compared to previously collected tile locations from an associated project that applied heads-up digitizing of aerial photography to map field tile. The heads-up digitized locations were used as a baseline for the accuracy assessment. The accuracy assessment generated a range of agreement values from 67.20% - 71.20%, and an average

  7. General pattern of the turbid water in the Seto-inland sea extracted from multispectral imageries by the LANDSAT-1 and 2

    NASA Technical Reports Server (NTRS)

    Maruyasu, T. (Principal Investigator); Watanabe, K.

    1976-01-01

    The author has identified the following significant results. Each distribution pattern of turbid water changes with time in accordance with daily tides, seasonal variation of tides, and occasional rainfall. Two cases of successfully repeated LANDSAT observations of the same sea regions suggested that a general pattern of turbid water could be extracted for each region. Photographic and digital processes were used to extract patterns of turbid water separately from the cloud and smog layer in MSS 4, 5, and 7 imagery. A mosaic of image-masked imagery displays a general pattern of turbid water for almost the entire Seto Inland Sea. No such pattern was extracted for the Aki-Nada south of Hiroshima City, where the water is fairly polluted, nor for the Iyo-Nada, where the water is generally clearer than in other regions of the Seto Inland Sea.

  8. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single-sample PCM encoder.
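
    A minimal sketch of the transform-coding idea summarized above, assuming the Karhunen-Loeve transform is implemented as a principal component rotation of multispectral pixel vectors followed by coarse uniform quantization of the coefficients; the data are synthetic and the quantizer is far simpler than the block quantization schemes the paper evaluates.

```python
# Minimal sketch: Karhunen-Loeve (principal component) transform of multispectral
# pixel vectors, uniform scalar quantization of the coefficients, then decoding.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands = 10_000, 12
X = rng.normal(size=(n_pixels, 3)) @ rng.normal(size=(3, n_bands))  # correlated bands

mean = X.mean(axis=0)
cov = np.cov(X - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
basis = eigvecs[:, np.argsort(eigvals)[::-1]] # KL basis, strongest component first

coeffs = (X - mean) @ basis                   # decorrelated coefficients
step = 0.5
quantized = np.round(coeffs / step) * step    # coarse uniform quantization

X_rec = quantized @ basis.T + mean            # decode (basis is orthonormal)
print("reconstruction MSE:", np.mean((X - X_rec) ** 2))
```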

  9. Analyses of the cloud contents of multispectral imagery from LANDSAT 2: Mesoscale assessments of cloud and rainfall over the British Isles

    NASA Technical Reports Server (NTRS)

    Barrett, E. C.; Grant, C. K. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. It was demonstrated that satellites with sufficiently high resolution capability in the visible region of the electromagnetic spectrum could be used to check the accuracy of estimates of total cloud amount assessed subjectively from the ground, and to reveal areas of performance in which corrections should be made. It was also demonstrated that, in middle latitude in summer, cloud shadow may obscure at least half as much again of the land surface covered by an individual LANDSAT frame as the cloud itself. That proportion would increase with latitude and/or time of year towards the winter solstice. Analyses of sample multispectral images for six different categories of clouds in summer revealed marked differences between the reflectance characteristics of cloud fields in the visible/near infrared region of the spectrum.

  10. Use of Aerial high resolution visible imagery to produce large river bathymetry: a multi temporal and spatial study over the by-passed Upper Rhine

    NASA Astrophysics Data System (ADS)

    Béal, D.; Piégay, H.; Arnaud, F.; Rollet, A.; Schmitt, L.

    2011-12-01

    Aerial high resolution visible imagery allows large river bathymetry to be produced, assuming that water depth is related to water colour (Beer-Bouguer-Lambert law). In this paper we aim at monitoring Rhine River geometry changes for a diachronic study, as well as sediment transport after an artificial injection (a 25,000 m3 restoration operation). For this purpose a substantial database of ground measurements of river depth is used, built from 3 different sources: (i) differential GPS acquisitions, (ii) sounder data and (iii) lateral profiles surveyed by experts. Water depth is estimated using a multiple linear regression over neo-channels built from a principal component analysis of the red, green and blue bands and the previously cited depth data. The study site is a 12 km long reach of the by-passed section of the Rhine River that forms the French-German border. This section has been heavily impacted by engineering works during the last two centuries: channelization since 1842 for navigation purposes, and the construction of a 45 km long lateral canal and 4 consecutive hydroelectric power plants since 1932. Several bathymetric models are produced based on 3 different spatial resolutions (6, 13 and 20 cm) and 5 acquisitions (January, March, April, August and October) since 2008. The objectives are to find the optimal spatial resolution and to characterize seasonal effects. The best performance, obtained at the 13 cm resolution, shows an 18 cm accuracy when suspended matter impacted water transparency less. Discussion is oriented towards the monitoring of the artificial sediment reload after 2 flood events during winter 2010-2011. The bathymetric models produced are also useful for building the mesh of a 2D hydraulic model.
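
    A minimal sketch of the depth-retrieval chain described above: principal component "neo-channels" are derived from the red, green and blue bands and a multiple linear regression is fitted against measured depths. All arrays are synthetic stand-ins, and the regression here omits the refinements of the original workflow.

```python
# Minimal sketch: PCA neo-channels from RGB reflectance + multiple linear
# regression against ground-measured water depth, then pixel-wise prediction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_samples = 500
rgb = rng.uniform(0.0, 1.0, size=(n_samples, 3))        # reflectance at depth points
depth = 2.5 - 1.8 * np.log(rgb[:, 2] + 0.05) + rng.normal(scale=0.1, size=n_samples)

pca = PCA(n_components=3)
neo_channels = pca.fit_transform(rgb)                    # decorrelated components

reg = LinearRegression().fit(neo_channels, depth)
print("R^2 on calibration points:", round(reg.score(neo_channels, depth), 3))

# Apply to a whole image: reshape (rows, cols, 3) -> (rows*cols, 3), transform, predict.
image = rng.uniform(0.0, 1.0, size=(100, 100, 3))
bathymetry = reg.predict(pca.transform(image.reshape(-1, 3))).reshape(100, 100)
print(bathymetry.shape)
```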

  11. A new technique for the detection of large scale landslides in glacio-lacustrine deposits using image correlation based upon aerial imagery: A case study from the French Alps

    NASA Astrophysics Data System (ADS)

    Fernandez, Paz; Whitworth, Malcolm

    2016-10-01

    Landslide monitoring has benefited from recent advances in the use of image correlation of high resolution optical imagery. However, this approach has typically involved satellite imagery that may not be available for all landslides, depending on their time of movement and location. This study has investigated the application of image correlation techniques to a sequence of aerial imagery of an active landslide in the French Alps. We apply an indirect landslide monitoring technique (COSI-Corr), based upon cross-correlation between aerial photographs, to obtain horizontal displacement rates. Results for the 2001-2003 time interval are presented, providing a spatial model of landslide activity and motion across the landslide, which is consistent with previous studies. The study has identified areas of new landslide activity in addition to known areas, and through image decorrelation has identified and mapped two new lateral landslides within the main landslide complex. This new approach for landslide monitoring is likely to be of wide applicability to other areas characterised by complex ground displacements.

  12. Evaluation of eelgrass beds mapping using a high-resolution airborne multispectral scanner

    USGS Publications Warehouse

    Su, H.; Karna, D.; Fraim, E.; Fitzgerald, M.; Dominguez, R.; Myers, J.S.; Coffland, B.; Handley, L.R.; Mace, T.

    2006-01-01

    Eelgrass (Zostera marina) can provide vital ecological functions in stabilizing sediments, influencing current dynamics, and contributing significant amounts of biomass to numerous food webs in coastal ecosystems. Mapping eelgrass beds is important for coastal water and nearshore estuarine monitoring, management, and planning. This study demonstrated the possible use of a high spatial (approximately 5 m) and temporal (maximum low tide) resolution airborne multispectral scanner for mapping eelgrass beds in Northern Puget Sound, Washington. A combination of supervised and unsupervised classification approaches was performed on the multispectral scanner imagery. A normalized difference vegetation index (NDVI) derived from the red and near-infrared bands, together with ancillary spatial information, was used to extract and mask eelgrass beds and other submerged aquatic vegetation (SAV) in the study area. We evaluated the resulting thematic map (geocoded, classified image) against a conventional aerial photograph interpretation using 260 point locations randomly stratified over five defined classes from the thematic map. We achieved an overall accuracy of 92 percent with a 0.92 Kappa Coefficient in the study area. This study demonstrates that the airborne multispectral scanner can be useful for mapping eelgrass beds at a local or regional scale, especially in regions where optical remote sensing from space is constrained by climatic and tidal conditions. © 2006 American Society for Photogrammetry and Remote Sensing.
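
    The extraction step above relies on an NDVI computed from red and near-infrared bands; a minimal sketch of that index and a threshold-based vegetation mask follows, with synthetic bands and a hypothetical threshold value.

```python
# Minimal sketch: NDVI from red and near-infrared bands, thresholded to mask
# candidate submerged aquatic vegetation. Bands and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
red = rng.uniform(0.01, 0.2, size=(200, 200))
nir = rng.uniform(0.01, 0.4, size=(200, 200))

ndvi = (nir - red) / (nir + red + 1e-9)   # small constant avoids division by zero
sav_mask = ndvi > 0.1                      # hypothetical threshold for vegetation

print("fraction of pixels flagged as vegetation:", round(float(sav_mask.mean()), 3))
```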

  13. Multispectral Photography: the obscure becomes the obvious

    ERIC Educational Resources Information Center

    Polgrean, John

    1974-01-01

    Commonly used in map making, real estate zoning, and highway route location, aerial photography planes equipped with multispectral cameras may, among many environmental applications, now be used to locate mineral deposits, define marshland boundaries, study water pollution, and detect diseases in crops and forests. (KM)

  14. Mapping of lithologic and structural units using multispectral imagery. [Afar-Triangle/Ethiopia and adjacent areas (Ethiopian Plateau, Somali Plateau, and parts of Yemen and Saudi Arabia)

    NASA Technical Reports Server (NTRS)

    Kronberg, P. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. ERTS-1 MSS imagery covering the Afar-Triangle/Ethiopia and adjacent regions (Ethiopian Plateau, Somali Plateau, and parts of Yemen and Saudi Arabia) was applied to the mapping of lithologic and structural units of the test area at a scale of 1:1,000,000. Results of the geological evaluation of the ERTS-1 imagery of the Afar have proven the usefulness of this type of satellite data for regional geological mapping. Evaluation of the ERTS images also resulted in new insights into the structural setting and tectonic development of the Afar-Triangle, where three large rift systems, the oceanic rifts of the Red Sea and Gulf of Aden and the continental East African rift system, appear to meet. Surface structures mapped by ERTS do not indicate that the oceanic rift of the Gulf of Aden (Sheba Ridge) continues into the area of continental crust west of the Gulf of Tadjura. ERTS data show that the Wonji fault belt of the African rift system does not enter or cut through the central Afar. The Aysha-Horst is not a horst but an autochthonous spur of the Somali Plateau.

  15. Chernobyl doses. Volume 1. Analysis of forest canopy radiation response from multispectral imagery and the relationship to doses. Technical report, 29 July 1987-30 September 1993

    SciTech Connect

    McClennan, G.E.; Anno, G.H.; Whicker, F.W.

    1994-09-01

    This volume of the report Chernobyl Doses presents details of a new, quantitative method for remotely sensing ionizing radiation dose to vegetation. Analysis of Landsat imagery of the area within a few kilometers of the Chernobyl nuclear reactor station provides maps of radiation dose to pine forest canopy resulting from the accident of April 26, 1986. Detection of the first date of significant, persistent deviation from normal of the spectral reflectance signature of pine foliage produces contours of radiation dose in the 20 to 80 Gy range extending up to 4 km from the site of the reactor explosion. The effective duration of exposure for the pine foliage is about 3 weeks. For this exposure time, the LD50 of Pinus sylvestris (Scotch pine) is about 23 Gy. The practical lower dose limit for the remote detection of radiation dose to pine foliage with the Landsat Thematic Mapper is about 5 Gy or 1/4 of the LD50.

  16. Applications of multispectral imagery to water resources development planning in the lower Mekong Basin (Khmer Republic, Laos, Thailand and Viet-Nam)

    NASA Technical Reports Server (NTRS)

    Vankiere, W. J.

    1973-01-01

    The use of ERTS imagery for water resources planning in the lower Mekong Basin relates to three major issues: (1) it complements data from areas, which have been inaccessible in the past because of security; this concerns mainly forest cover of the watersheds, and geological features, (2) it refines ground surveys; this concerns mainly land forms, and soils of existing and planned irrigation perimeters, and (3) it provides new information, which would be almost or entirely impossible to detect with ground surveys or conventional photography; this concerns the mechanism of flooding and drainage of the delta; siltation of the Great Lake and mapping of acidity, possibly also of salinity, in the lower delta; sedimentation and fisheries in the Mekong Delta estuarine areas.

  17. Optimization of spectral indices and long-term separability analysis for classification of cereal crops using multi-spectral RapidEye imagery

    NASA Astrophysics Data System (ADS)

    Gerstmann, Henning; Möller, Markus; Gläßer, Cornelia

    2016-10-01

    Crop monitoring using remotely sensed image data provides valuable input for a large variety of applications in environmental and agricultural research. However, method development for discrimination between spectrally highly similar crop species remains a challenge in remote sensing. Calculation of vegetation indices is a frequently applied option to amplify the most distinctive parts of a spectrum. Since no universally best-performing vegetation index exists, a method is presented that finds an index optimized for the classification of a specific satellite dataset to separate two cereal crop types. The η2 (eta-squared) measure of association, presented as a novel spectral separability indicator, was used for the evaluation of the numerous tested indices. The approach is first applied to a RapidEye satellite image for the separation of winter wheat and winter barley in a Central German test site. The determined optimized index allows a more accurate classification (97%) than several well-established vegetation indices such as NDVI and EVI (<87%). Furthermore, the approach was applied to a RapidEye multi-spectral image time series covering the years 2010-2014. The optimized index for the spectral separation of winter barley and winter wheat was calculated for each acquisition date and its ability to distinguish the two classes was assessed. The results indicate that the calculated optimized indices perform better than the standard indices for most seasonal parts of the time series. The red edge spectral region proved to be of high significance for crop classification. Additionally, a time frame of best spectral separability of wheat and barley could be detected in early to mid-summer.
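
    A minimal sketch of the index-optimization idea, assuming the search is restricted to normalized-difference indices of band pairs and that eta-squared is computed as the ratio of between-class to total sum of squares; the reflectances and class labels are synthetic, not RapidEye data.

```python
# Minimal sketch: rank normalized-difference indices of all band pairs by the
# eta-squared measure of association between index values and two crop classes.
import numpy as np
from itertools import combinations

def eta_squared(values, labels):
    """One-way eta^2: between-class sum of squares / total sum of squares."""
    grand_mean = values.mean()
    ss_total = np.sum((values - grand_mean) ** 2)
    ss_between = sum(
        values[labels == c].size * (values[labels == c].mean() - grand_mean) ** 2
        for c in np.unique(labels)
    )
    return ss_between / ss_total

rng = np.random.default_rng(3)
n_pix, n_bands = 400, 5                      # e.g. blue, green, red, red edge, NIR
reflectance = rng.uniform(0.02, 0.5, size=(n_pix, n_bands))
labels = rng.integers(0, 2, size=n_pix)      # 0 = wheat, 1 = barley (hypothetical)
reflectance[labels == 1, 3] += 0.05          # pretend the red edge band separates them

scores = {}
for i, j in combinations(range(n_bands), 2):
    index = (reflectance[:, i] - reflectance[:, j]) / (reflectance[:, i] + reflectance[:, j])
    scores[(i, j)] = eta_squared(index, labels)

best = max(scores, key=scores.get)
print("best band pair:", best, "eta^2 =", round(scores[best], 3))
```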

  18. Near infrared-red models for the remote estimation of chlorophyll- a concentration in optically complex turbid productive waters: From in situ measurements to aerial imagery

    NASA Astrophysics Data System (ADS)

    Gurlin, Daniela

    Today the water quality of many inland and coastal waters is compromised by cultural eutrophication as a consequence of increased human agricultural and industrial activities, and remote sensing is widely applied to monitor the trophic state of these waters. This study explores near infrared-red models for the remote estimation of chlorophyll-a concentration in turbid productive waters and compares several near infrared-red models developed within the last 35 years. Three of these near infrared-red models were calibrated for a dataset with chlorophyll-a concentrations from 2.3 to 81.2 mg m-3 and validated for independent and statistically significantly different datasets with chlorophyll-a concentrations from 4.0 to 95.5 mg m-3 and 4.0 to 24.2 mg m-3 for the spectral bands of the MEdium Resolution Imaging Spectrometer (MERIS) and Moderate-resolution Imaging Spectroradiometer (MODIS). The developed MERIS two-band algorithm estimated chlorophyll-a concentrations from 4.0 to 24.2 mg m-3, which are typical for many inland and coastal waters, very accurately with a mean absolute error of 1.2 mg m-3. These results indicate a high potential of the simple MERIS two-band algorithm for the reliable estimation of chlorophyll-a concentration without any reduction in accuracy compared to more complex algorithms, even though more research seems required to analyze the sensitivity of this algorithm to differences in the chlorophyll-a specific absorption coefficient of phytoplankton. Three near infrared-red models were calibrated and validated for a smaller dataset of atmospherically corrected multi-temporal aerial imagery collected by the hyperspectral airborne imaging spectrometer for applications (AisaEAGLE). The developed algorithms successfully captured the spatial and temporal variability of the chlorophyll-a concentrations and estimated chlorophyll-a concentrations from 2.3 to 81.2 mg m-3 with mean absolute errors from 4.4 mg m-3 for the AISA two-band algorithm to 5.2 mg m-3
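
    A minimal sketch of a generic two-band near infrared-red model of the kind compared above, in which chlorophyll-a is a linear function of the NIR/red reflectance ratio calibrated against in situ samples; the numbers are simulated and the coefficients are not those of the MERIS or AISA algorithms.

```python
# Minimal sketch: calibrate chl-a = a * (R_NIR / R_red) + b against simulated
# in situ chlorophyll-a samples, then report the mean absolute error.
import numpy as np

rng = np.random.default_rng(4)
chl_true = rng.uniform(2.0, 80.0, size=60)                          # mg m^-3
ratio = 0.6 + 0.012 * chl_true + rng.normal(scale=0.02, size=60)    # simulated R_NIR / R_red

a, b = np.polyfit(ratio, chl_true, deg=1)                           # chl ~ a * ratio + b
chl_est = a * ratio + b
mae = np.mean(np.abs(chl_est - chl_true))
print(f"chl-a = {a:.1f} * (R_NIR / R_red) + {b:.1f},  MAE = {mae:.2f} mg m^-3")
```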

  19. Multispectral image processing for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.; Lazaroff, Mark B.; Brennan, Mark W.

    1993-03-01

    New techniques are described for detecting environmental anomalies and changes using multispectral imagery. Environmental anomalies are areas that do not exhibit normal signatures due to man-made activities and include phenomena such as effluent discharges, smoke plumes, stressed vegetation, and deforestation. A new region-based processing technique is described for detecting these phenomena using Landsat TM imagery. Another algorithm that can detect the appearance or disappearance of environmental phenomena is also described and an example illustrating its use in detecting urban changes using SPOT imagery is presented.

  20. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  1. Multispectral-image fusion using neural networks

    NASA Astrophysics Data System (ADS)

    Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.

    1990-08-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  2. Soil erosion and its correlation with vegetation cover: An assessment using multispectral imagery and pixel-based geographic information system in Gesing Sub-Watershed, Central Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Dirda Gupita, Diwyacitta; Sigit Heru Murti, B. S.

    2017-01-01

    Soil erosion is caused by five factors: rainfall erosivity, soil erodibility, slope and slope length, crop management, and land conservation practices. In theory, vegetation, as one of the affecting factors, has an inverse correlation with soil erosion. This research aims to: (1) model RUSLE using pixel-based GIS, and (2) verify whether or not vegetation really has this inverse correlation with the soil erosion that occurs in the Gesing watershed. The method used in this research is divided into two parts: the use of RUSLE to estimate the soil erosion rate, and the use of the fractional vegetation cover (FVC) formula to estimate the vegetation density in the area. Both methods used Landsat-8 OLI imagery, which was used to extract the RUSLE parameters as well as to derive the vegetation density through NDVI, and pixel-based GIS. The mapping of soil erosion rate distribution done in this research demonstrated that pixel-based modeling is able to represent a much more detailed and logical distribution of a phenomenon. The distribution of soil erosion rate in the Gesing watershed showed that the erosion rate in this area is relatively minor. About 1425.99 hectares and 1587.57 hectares of the total area have erosion rates of 0 – 15 tons/ha/yr (very mild) and 15 – 60 tons/ha/yr (mild), respectively.
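
    A minimal sketch of the pixel-based RUSLE evaluation described above, multiplying co-registered factor rasters cell by cell; the factor grids are random placeholders for layers that would really come from rainfall records, soil maps, a DEM and Landsat-8-derived cover, and only the 0-15 and 15-60 t/ha/yr class breaks are taken from the abstract (the higher breaks are illustrative).

```python
# Minimal sketch of pixel-based RUSLE: annual soil loss A = R * K * LS * C * P
# evaluated cell by cell on co-registered factor rasters (random placeholders).
import numpy as np

rng = np.random.default_rng(5)
shape = (300, 300)
R = rng.uniform(1000, 2500, size=shape)   # rainfall erosivity
K = rng.uniform(0.1, 0.4, size=shape)     # soil erodibility
LS = rng.uniform(0.5, 8.0, size=shape)    # slope length and steepness
C = rng.uniform(0.01, 0.5, size=shape)    # crop management (from vegetation cover)
P = rng.uniform(0.5, 1.0, size=shape)     # conservation practice

A = R * K * LS * C * P                    # tons/ha/yr per pixel

# Severity classes: 0-15 and 15-60 t/ha/yr follow the abstract; higher breaks are illustrative.
classes = np.digitize(A, bins=[15, 60, 180, 480])
for label, cls in zip(["very mild", "mild", "moderate", "severe", "very severe"], range(5)):
    print(label, int(np.sum(classes == cls)), "pixels")
```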

  3. Novel SVM-based technique to improve rainfall estimation over the Mediterranean region (north of Algeria) using the multispectral MSG SEVIRI imagery

    NASA Astrophysics Data System (ADS)

    Sehad, Mounir; Lazri, Mourad; Ameur, Soltane

    2017-03-01

    In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and Infra Red Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) satellite is presented. This work proposes an efficient rainfall estimation scheme based on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for nighttime rainfall estimation. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud properties. The cloud parameters are derived from the spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall are derived from the 15-min rainfall estimates given by the SVM classifiers for each MSG observation image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria, located in the Mediterranean region. Further, the SVM_D and SVM_N models were used to estimate 3-hourly and daily rainfall using a dataset gathered from November 2010 to March 2011 over northern Algeria. The results were validated against collocated rainfall observed by a rain gauge network. Indeed, the statistical scores given by the correlation coefficient, bias, root mean square error and mean absolute error showed good accuracy of rainfall estimates by the present technique. Moreover, rainfall estimates of our technique were compared with two high-accuracy rainfall estimation methods based on MSG SEVIRI imagery, namely a random forests (RF) based approach and an artificial neural network (ANN) based technique. The findings of the present technique indicate a higher correlation coefficient (3-hourly: 0.78; daily: 0.94) and lower mean absolute error and root mean square error values. The results show that the new technique estimates 3-hourly and daily rainfall with better accuracy than the ANN technique and the RF model.

  4. Unmanned aerial systems for forest reclamation monitoring: throwing balloons in the air

    NASA Astrophysics Data System (ADS)

    Andrade, Rita; Vaz, Eric; Panagopoulos, Thomas; Guerrero, Carlos

    2014-05-01

    Wildfires are a recurrent phenomenon in Mediterranean landscapes, deteriorating the environment and ecosystems and calling for adequate land management. Monitoring burned areas enhances our ability to reclaim them. Remote sensing has become an increasingly important tool for environmental assessment and land management. It is fast, non-intrusive, and provides continuous spatial coverage. This paper reviews remote sensing methods, based on space-borne, airborne or ground-based multispectral imagery, for monitoring the biophysical properties of forest areas for site-specific management. Satellite imagery has been used frequently for land use management in recent decades; it is of great use in determining plant health and crop conditions, allowing a synergy between the complexity of the environment, anthropogenic landscapes and a multi-temporal understanding of spatial dynamics. Aerial photography improves spatial resolution; nevertheless, it is heavily dependent on aircraft availability as well as cost. Both of these methods are required for wide-area management and policy planning. An imagery source that is active and high resolution, can be deployed at a specific instance, and reduces cost while maintaining locational flexibility is of utmost importance for local management. In this sense, unmanned aerial vehicles provide maximum flexibility in image collection and can incorporate thermal and multispectral sensors; however, payload and engine operation time limit flight time. Balloon remote sensing is becoming increasingly sought after for site-specific management, enabling rapid digital analysis and permitting greater control of the spatial resolution as well as of dataset collection at a given time. Different wavelength sensors may be used to map spectral variations in plant growth, monitor water and nutrient stress, assess yield and plant vitality during different stages of development. Proximity could be an asset when monitoring forest plants vitality

  5. Absolute High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-Time Aerial Video Imagery for Geo-referenced Orthophoto Registration

    NASA Astrophysics Data System (ADS)

    Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter

    This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to combine an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve an absolute localisation of the robotic team. Besides a discussion of the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the telemetry data of the aerial robot combined with live video images from an onboard camera to realise a registration of local video images with a priori registered orthophotos. This yields a precise, drift-free absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.

  6. Land cover/use mapping using multi-band imageries captured by Cropcam Unmanned Aerial Vehicle Autopilot (UAV) over Penang Island, Malaysia

    NASA Astrophysics Data System (ADS)

    Fuyi, Tan; Boon Chun, Beh; Mat Jafri, Mohd Zubir; Hwee San, Lim; Abdullah, Khiruddin; Mohammad Tahrin, Norhaslinda

    2012-11-01

    The difficulty of obtaining cloud-free scenes in the equatorial region from satellite platforms can be overcome by using airborne imagery. Airborne digital imagery has proved to be an effective tool for land cover studies. Airborne digital camera imagery was selected in this study because it provides higher spatial resolution data for mapping a small study area. The main objective of this study is to classify the RGB band imagery taken from a low-altitude Cropcam UAV for land cover/use mapping over the USM campus, Penang Island, Malaysia. A conventional digital camera was used to capture images from an elevation of 320 meters on board a UAV autopilot. This technique was cheaper and more economical compared with other airborne studies. An artificial neural network (NN) and a maximum likelihood classifier (MLC) were used to classify the digital imagery captured using the Cropcam UAV over the USM campus, Penang Island, Malaysia. The supervised classifier was chosen based on the highest overall accuracy (>80%) and Kappa statistic (>0.8). The classified land cover map was geometrically corrected to provide a geocoded map. The results produced by this study indicated that land cover features could be clearly identified and classified into a land cover map. This study indicates that the use of a conventional digital camera as a sensor on board a UAV autopilot can provide useful information for planning and development of a small area of coverage.

  7. Unsupervised classification of remote multispectral sensing data

    NASA Technical Reports Server (NTRS)

    Su, M. Y.

    1972-01-01

    The new unsupervised classification technique for classifying multispectral remote sensing data, which can come either from a multispectral scanner or from digitized color-separation aerial photographs, consists of two parts: (a) a sequential statistical clustering, which is a one-pass sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. Applications of the technique using an IBM-7094 computer on multispectral data sets over Purdue's Flight Line C-1 and the Yellowstone National Park test site have been accomplished. Comparisons between the classification maps produced by the unsupervised technique and the supervised maximum likelihood technique indicate that the classification accuracies are in agreement.
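
    A minimal sketch of the two-stage composite clustering outlined above. The one-pass sequential rule used here (open a new cluster when a pixel is farther than a distance threshold from every existing centre) is an assumed stand-in for the original sequential variance analysis; its centres then seed a K-means refinement.

```python
# Minimal sketch: stage (a) proposes initial cluster centres in one pass,
# stage (b) refines them with a generalized K-means clustering.
import numpy as np
from sklearn.cluster import KMeans

def sequential_clusters(X, threshold):
    centres, counts = [X[0].copy()], [1]
    for x in X[1:]:
        d = [np.linalg.norm(x - c) for c in centres]
        k = int(np.argmin(d))
        if d[k] <= threshold:
            counts[k] += 1
            centres[k] += (x - centres[k]) / counts[k]   # running mean update
        else:
            centres.append(x.copy())
            counts.append(1)
    return np.array(centres)

rng = np.random.default_rng(6)
pixels = rng.normal(size=(5000, 4))                      # 4-band pixel vectors

init = sequential_clusters(pixels, threshold=3.0)        # stage (a): initial clusters
km = KMeans(n_clusters=len(init), init=init, n_init=1).fit(pixels)  # stage (b)
print("clusters found:", len(init), "inertia:", round(km.inertia_, 1))
```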

  8. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  9. Processing Of Multispectral Data For Identification Of Rocks

    NASA Technical Reports Server (NTRS)

    Evans, Diane L.

    1990-01-01

    Linear discriminant analysis and supervised classification evaluated. Report discusses processing of multispectral remote-sensing imagery to identify kinds of sedimentary rocks by spectral signatures in geological and geographical contexts. Raw image data are spectra of picture elements in images of seven sedimentary rock units exposed on margin of Wind River Basin in Wyoming. Data acquired by Landsat Thematic Mapper (TM), Thermal Infrared Multispectral Scanner (TIMS), and NASA/JPL airborne synthetic-aperture radar (SAR).

  10. Airborne Hyperspectral Imagery for the Detection of Agricultural Crop Stress

    NASA Technical Reports Server (NTRS)

    Cassady, Philip E.; Perry, Eileen M.; Gardner, Margaret E.; Roberts, Dar A.

    2001-01-01

    Multispectral digital imagery from aircraft or satellite is presently being used to derive basic assessments of crop health for growers and others involved in the agricultural industry. Research indicates that narrow-band stress indices derived from hyperspectral imagery should have improved sensitivity to provide more specific information on the type and cause of crop stress. Under funding from the NASA Earth Observation Commercial Applications Program (EOCAP), we are identifying and evaluating scientific and commercial applications of hyperspectral imagery for the remote characterization of agricultural crop stress. During the summer of 1999 a field experiment was conducted with varying nitrogen treatments on a production corn field in eastern Nebraska. The AVIRIS (Airborne Visible-Infrared Imaging Spectrometer) hyperspectral imager was flown on two critical dates during crop development, at two different altitudes, providing images with approximately 18 m pixels and 3 m pixels. Simultaneous supporting soil and crop characterization included spectral reflectance measurements above the canopy, biomass characterization, soil sampling, and aerial photography. In this paper we describe the experiment and results, and examine the following three issues relative to the utility of hyperspectral imagery for scientific study and commercial crop stress products: (1) Accuracy of reflectance-derived stress indices relative to conventional measures of stress. We compare reflectance-derived indices (both field radiometer and AVIRIS) with applied nitrogen and with leaf-level measurements of nitrogen availability and chlorophyll concentrations over the experimental plots (4 replications of 5 different nitrogen levels); (2) Ability of the hyperspectral sensors to detect sub-pixel areas under crop stress. We applied the stress indices to both the 3 m and 18 m AVIRIS imagery for the entire production corn field using several sub-pixel areas within the field to compare the relative

  11. Perceptual evaluation of colorized nighttime imagery

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; de Jong, Michael J.; Hogervorst, Maarten A.; Hooge, Ignace T. C.

    2014-02-01

    We recently presented a color transform that produces fused nighttime imagery with a realistic color appearance (Hogervorst and Toet, 2010, Information Fusion, 11-2, 69-77). To assess the practical value of this transform we performed two experiments in which we compared human scene recognition for monochrome intensified (II) and longwave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First we investigated the amount of detail observers can perceive in a short time span (the gist of the scene). Participants watched brief image presentations and provided a full report of what they had seen. Our results show that REF and CF imagery yielded the highest precision and recall measures, while both II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty extracting information from monochrome than from color imagery. Next, we measured eye fixations of participants who freely explored the images. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representation such that the resulting fixation behavior resembles the fixation behavior for daylight color imagery.

  12. Multispectral Airborne Laser Scanning for Automated Map Updating

    NASA Astrophysics Data System (ADS)

    Matikainen, Leena; Hyyppä, Juha; Litkey, Paula

    2016-06-01

    During the last 20 years, airborne laser scanning (ALS), often combined with multispectral information from aerial images, has shown its high feasibility for automated mapping processes. Recently, the first multispectral airborne laser scanners have been launched, and multispectral information is for the first time directly available for 3D ALS point clouds. This article discusses the potential of this new single-sensor technology in map updating, especially in automated object detection and change detection. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from a random forests analysis suggest that the multispectral intensity information is useful for land cover classification, also when considering ground surface objects and classes such as roads. An out-of-bag estimate of the classification error was about 3% for separating the classes asphalt, gravel, rocky areas and low vegetation from each other. For buildings and trees, it was under 1%. According to feature importance analyses, multispectral features based on several channels were more useful than those based on one channel. Automatic change detection utilizing the new multispectral ALS data, an old digital surface model (DSM) and old building vectors was also demonstrated. Overall, our first analyses suggest that the new data are very promising for further increasing the automation level in mapping. The multispectral ALS technology is independent of external illumination conditions, and intensity images produced from the data do not include shadows. These are significant advantages when the development of automated classification and change detection procedures is considered.
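
    A minimal sketch of the random forests step with an out-of-bag error estimate, as mentioned above; the point features (three intensity channels plus height above ground) and class labels are synthetic placeholders rather than Optech Titan attributes.

```python
# Minimal sketch: random forest land cover classification of laser points with
# multispectral intensity and height features, reporting the out-of-bag error.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n_points = 5000
# Columns: intensity in channels 1-3, height above ground (all synthetic).
X = np.column_stack([
    rng.uniform(0, 255, size=(n_points, 3)),
    rng.uniform(0, 25, size=n_points),
])
y = rng.integers(0, 6, size=n_points)   # asphalt, gravel, rock, low veg, building, tree

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0).fit(X, y)
print("OOB error:", round(1.0 - clf.oob_score_, 3))
print("feature importances:", np.round(clf.feature_importances_, 3))
```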

  13. Environmental waste site characterization utilizing aerial photographs, remote sensing, and surface geophysics

    SciTech Connect

    Pope, P.; Van Eeckhout, E.; Rofer, C.; Baldridge, S.; Ferguson, J.; Jiracek, G.; Balick, L.; Josten, N.; Carpenter, M.

    1996-04-18

    Six different techniques were used to delineate a 40-year-old trench boundary at Los Alamos National Laboratory. Data from historical aerial photographs, a magnetic gradient survey, airborne multispectral and thermal infrared imagery, seismic refraction, DC resistivity, and total-field magnetometry were utilized in this process. Each data set indicated a southern and northern edge for the trench. Average locations and 95% confidence limits for each edge were determined along a survey line perpendicular to the trench. Trench edge locations were fairly consistent among all six techniques. Results from a modeling effort performed with the total magnetic field data were the least consistent. However, each method provided unique and complementary information, and the integration of all this information led to a more complete characterization of the trench boundaries and contents.

  14. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are combined with GNSS/IMU data in further post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne lidar sensor, the Optech Titan system is capable of collecting point cloud data from all three channels: at 532 nm in the visible (green), at 1064 nm in the near infrared (NIR) and at 1550 nm in the intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object-Based Image Analysis (OBIA) approach that uses only multispectral lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types. An overall accuracy of over 90% is achieved using multispectral lidar point clouds for 3D land cover classification.

  15. Mapping within-field variations of soil organic carbon content using UAV multispectral visible near-infrared images

    NASA Astrophysics Data System (ADS)

    Gilliot, Jean-Marc; Vaudour, Emmanuelle; Michelin, Joël

    2016-04-01

    This study was carried out in the framework of the PROSTOCK-Gessol3 project supported by the French Environment and Energy Management Agency (ADEME), the TOSCA-PLEIADES-CO project of the French Space Agency (CNES) and the SOERE PRO network working on the long-term environmental impacts of recycling organic waste products on field crops. Organic matter is an important soil fertility parameter, and previous studies have shown the potential of spectral information measured in the laboratory or directly in the field using a field spectroradiometer or satellite imagery to predict the soil organic carbon (SOC) content. This work proposes a method for the spatial prediction of the SOC content of bare cultivated topsoil from Unmanned Aerial Vehicle (UAV) multispectral imagery. An agricultural plot of 13 ha, located in the region west of Paris, France, was analysed in April 2013, shortly before sowing, while the soil was still bare. Soils comprised haplic luvisols, rendzic cambisols and calcaric or colluvic cambisols. The UAV platform used was a fixed-wing aircraft provided by Airinov®, flying at an altitude of 150 m and equipped with a four-channel multispectral visible near-infrared camera, the MultiSPEC 4C® (550 nm, 660 nm, 735 nm and 790 nm). Twenty-three ground control points (GCPs) were sampled within the plot according to soil descriptions. GCP positions were determined with a centimetric DGPS. Different observations and measurements were made synchronously with the drone flight: soil surface description, spectral measurements (with an ASD FieldSpec 3® spectroradiometer), and roughness measurements by a photogrammetric method. Each of these locations was sampled for both standard soil physico-chemical analysis and soil water content. Structure-from-Motion (SfM) processing was applied to the UAV imagery to produce a 15 cm resolution multispectral mosaic using the Agisoft Photoscan® software. The SOC content was modelled by partial least squares regression (PLSR) between the
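
    A minimal sketch of the modelling step the abstract leads up to: a partial least squares regression (PLSR) linking four-band reflectance at the sampled points to measured SOC, then applied pixel-wise to the mosaic. The reflectances, SOC values and number of components below are assumptions, not the project's calibration.

```python
# Minimal sketch: PLSR between four-band reflectance at sampled points and
# measured soil organic carbon (SOC), then pixel-wise prediction over a mosaic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)
n_points = 23                                                 # e.g. one spectrum per sampled point
reflectance = rng.uniform(0.05, 0.45, size=(n_points, 4))     # 550, 660, 735, 790 nm bands
soc = 25.0 - 30.0 * reflectance[:, 1] + rng.normal(scale=0.8, size=n_points)  # g/kg, synthetic

pls = PLSRegression(n_components=2).fit(reflectance, soc)
soc_hat = pls.predict(reflectance).ravel()
rmse = np.sqrt(np.mean((soc_hat - soc) ** 2))
print("calibration RMSE (g/kg):", round(rmse, 2))

# Map prediction: apply the model to every pixel of the 4-band mosaic.
mosaic = rng.uniform(0.05, 0.45, size=(50, 50, 4))
soc_map = pls.predict(mosaic.reshape(-1, 4)).reshape(50, 50)
print(soc_map.shape)
```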

  16. Techniques for Producing Coastal Land Water Masks from Landsat and Other Multispectral Satellite Data

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph P.; Hall, Callie

    2005-01-01

    Coastal erosion and land loss continue to threaten many areas in the United States. Landsat data has been used to monitor regional coastal change since the 1970s. Many techniques can be used to produce coastal land water masks, including image classification and density slicing of individual bands or of band ratios. Band ratios used in land water detection include several variations of the Normalized Difference Water Index (NDWI). This poster discusses a study that compares land water masks computed from unsupervised Landsat image classification with masks from density-sliced band ratios and from the Landsat TM band 5. The greater New Orleans area is employed in this study, due to its abundance of coastal habitats and its vulnerability to coastal land loss. Image classification produced the best results based on visual comparison to higher resolution satellite and aerial image displays. However, density sliced NDWI imagery from either near infrared (NIR) and blue bands or from NIR and green bands also produced more effective land water masks than imagery from the density-sliced Landsat TM band 5. NDWI based on NIR and green bands is noteworthy because it allows land water masks to be generated from multispectral satellite sensors without a blue band (e.g., ASTER and Landsat MSS). NDWI techniques also have potential for producing land water masks from coarser scaled satellite data, such as MODIS.
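
    A minimal sketch of NDWI-based density slicing from green and near-infrared bands, as discussed above; the band arrays and the zero threshold are hypothetical choices rather than the study's settings.

```python
# Minimal sketch: NDWI = (green - NIR) / (green + NIR), thresholded into a
# land/water mask. Bands and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(9)
green = rng.uniform(0.02, 0.3, size=(400, 400))
nir = rng.uniform(0.01, 0.5, size=(400, 400))

ndwi = (green - nir) / (green + nir + 1e-9)
water_mask = ndwi > 0.0            # positive NDWI generally indicates open water

print("water fraction:", round(float(water_mask.mean()), 3))
```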

  17. Techniques for Producing Coastal Land Water Masks from Landsat and Other Multispectral Satellite Data

    NASA Technical Reports Server (NTRS)

    Spruce, Joe; Hall, Callie

    2005-01-01

    Coastal erosion and land loss continue to threaten many areas in the United States. Landsat data has been used to monitor regional coastal change since the 1970s. Many techniques can be used to produce coastal land water masks, including image classification and density slicing of individual bands or of band ratios. Band ratios used in land water detection include several variations of the Normalized Difference Water Index (NDWI). This poster discusses a study that compares land water masks computed from unsupervised Landsat image classification with masks from density-sliced band ratios and from the Landsat TM band 5. The greater New Orleans area is employed in this study, due to its abundance of coastal habitats and its vulnerability to coastal land loss. Image classification produced the best results based on visual comparison to higher resolution satellite and aerial image displays. However, density-sliced NDWI imagery from either near infrared (NIR) and blue bands or from NIR and green bands also produced more effective land water masks than imagery from the density-sliced Landsat TM band 5. NDWI based on NIR and green bands is noteworthy because it allows land water masks to be generated from multispectral satellite sensors without a blue band (e.g., ASTER and Landsat MSS). NDWI techniques also have potential for producing land water masks from coarser scaled satellite data, such as MODIS.

  18. Engineering evaluation of 24 channel multispectral scanner. [from flight tests

    NASA Technical Reports Server (NTRS)

    Lambeck, P. F.

    1973-01-01

    The results of flight tests to evaluate the performance of the 24 channel multispectral scanner are reported. The flight plan and test site are described along with the time response and channel registration. The gain and offset drift, and moire patterns are discussed. Aerial photographs of the test site are included.

  19. Changes of multispectral soil patterns with increasing crop canopy

    NASA Technical Reports Server (NTRS)

    Kristof, S. J.; Baumgardner, M. F.

    1972-01-01

    Multispectral data and automatic data processing were used to map surface soil patterns and to follow the changes in multispectral radiation from a field of maize (Zea mays L.) during a period from seeding to maturity. Panchromatic aerial photography was obtained in early May 1970 and multispectral scanner missions were flown on May 6, June 30, August 11 and September 5, 1970 to obtain energy measurements in 13 wavelength bands. The orange portion of the visible spectrum was used in analyzing the May and June data to cluster relative radiance of the soils into eight different radiance levels. The reflective infrared spectral band was used in analyzing the August and September data to cluster maize into different spectral categories. The computer-produced soil patterns had a striking similarity to the soil pattern of the aerial photograph. These patterns became less distinct as the maize canopy increased.

  20. Automatic vehicle detection based on automatic histogram-based fuzzy C-means algorithm and perceptual grouping using very high-resolution aerial imagery and road vector data

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Gökaşar, Ilgın

    2016-01-01

    This study presents an approach for the automatic detection of vehicles using very high-resolution images and road vector data. Initially, road vector data and aerial images are integrated to extract road regions. Then, the extracted road/street region is clustered using an automatic histogram-based fuzzy C-means algorithm, and edge pixels are detected using the Canny edge detector. In order to automatically detect vehicles, we developed a local perceptual grouping approach based on fusion of edge detection and clustering outputs. To provide the locality, an ellipse is generated using characteristics of the candidate clusters individually. Then, the ratio of edge pixels to nonedge pixels in the corresponding ellipse is computed to distinguish the vehicles. Finally, a point-merging rule is conducted to merge the points that satisfy a predefined threshold and are supposed to denote the same vehicles. The experimental validation of the proposed method was carried out on six very high-resolution aerial images that illustrate two highways, two shadowed roads, a crowded narrow street, and a street in a dense urban area with crowded parked vehicles. The evaluation of the results shows that the proposed method achieved 86% overall correctness and 83% completeness.

  1. Multispectral microwave imaging radar for remote sensing applications

    NASA Technical Reports Server (NTRS)

    Larson, R. W.; Rawson, R.; Ausherman, D.; Bryan, L.; Porcello, L.

    1974-01-01

    A multispectral airborne microwave radar imaging system, capable of obtaining four images simultaneously, is described. The system has been successfully demonstrated in several experiments, and one example of the results obtained, for fresh water ice, is given. Consideration is given to the digitization of the imagery, and an image digitizing system is described briefly. Preliminary results of digitization experiments are included.

  2. Preliminary Results from the Portable Imagery Quality Assessment Test Field (PIQuAT) of Uav Imagery for Imagery Reconnaissance Purposes

    NASA Astrophysics Data System (ADS)

    Dabrowski, R.; Orych, A.; Jenerowicz, A.; Walczykowski, P.

    2015-08-01

    The article presents a set of initial results of a quality assessment study of two different types of sensors mounted on an unmanned aerial vehicle, carried out over an especially designed and constructed test field. The PIQuAT (Portable Imagery Quality Assessment Test Field) field had been designed especially for the purposes of determining the quality parameters of UAV sensors, especially in terms of the spatial, spectral and radiometric resolutions and chosen geometric aspects. The sensors used include a multispectral framing camera and a high-resolution RGB sensor. The flights were conducted from a number of altitudes ranging from 10 m to 200 m above the test field. Acquiring data at a number of different altitudes allowed the authors to evaluate the obtained results and check for possible linearity of the calculated quality assessment parameters. The radiometric properties of the sensors were evaluated from images of the grayscale target section of the PIQuAT field. The spectral resolution of the imagery was determined based on a number of test samples with known spectral reflectance curves. These reference spectral reflectance curves were then compared with spectral reflectance coefficients at the wavelengths registered by the miniMCA camera. Before conducting all of these experiments in field conditions, the interior orientation parameters were calculated for the miniMCA and RGB sensors in laboratory conditions. These parameters include the actual pixel size on the detector, distortion parameters, calibrated focal length (CFL) and the coordinates of the principal point of autocollimation (determined for each of the six miniMCA channels separately).

  3. Fourier multispectral imaging.

    PubMed

    Jia, Jie; Ni, Chuan; Sarangan, Andrew; Hirakawa, Keigo

    2015-08-24

    Current multispectral imaging systems use narrowband filters to capture the spectral content of a scene, which necessitates different filters to be designed for each application. In this paper, we demonstrate the concept of Fourier multispectral imaging, which uses filters with sinusoidally varying transmittance. We designed and built these filters employing a single-cavity resonance, and made spectral measurements with a multispectral LED array. The measurements show that spectral features such as transmission and absorption peaks are preserved with this technique, which makes it a more versatile technique than narrowband filters for a wide range of multispectral imaging applications.
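
    A toy numerical sketch of the measurement model only, not the authors' filter design or reconstruction: sinusoidal transmittances sample the scene spectrum, and a least-squares fit on a coarse smooth basis gives an illustrative reconstruction. All wavelengths, optical path differences and basis choices are invented:

    ```python
    # Toy model: each filter transmits 0.5*(1 + cos(2*pi*OPD/lambda)); the detector reading
    # is the transmittance-weighted integral of the scene spectrum. All numbers are invented.
    import numpy as np

    wl = np.linspace(400.0, 700.0, 301)                    # wavelength grid (nm)
    true_spec = np.exp(-0.5 * ((wl - 550.0) / 15.0) ** 2)  # synthetic spectrum with one peak

    opd = np.arange(1, 9)[:, None] * 400.0                 # hypothetical optical path differences (nm)
    T = 0.5 * (1.0 + np.cos(2.0 * np.pi * opd / wl[None, :]))  # 8 sinusoidal transmittance curves
    m = T @ true_spec                                      # 8 simulated detector measurements

    # Illustrative inverse: fit the spectrum on a coarse smooth basis so 8 measurements suffice.
    centers = np.linspace(400.0, 700.0, 6)
    B = np.exp(-0.5 * ((wl[:, None] - centers[None, :]) / 25.0) ** 2)  # 301 x 6 Gaussian basis
    coeffs, *_ = np.linalg.lstsq(T @ B, m, rcond=None)
    est = B @ coeffs
    print("correlation with true spectrum:", round(float(np.corrcoef(true_spec, est)[0, 1]), 3))
    ```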

  4. Combined synthetic aperture radar/Landsat imagery

    NASA Technical Reports Server (NTRS)

    Marque, R. E.; Maurer, H. E.

    1978-01-01

    This paper presents the results of investigations into merging synthetic aperture radar (SAR) and Landsat multispectral scanner (MSS) images using optical and digital merging techniques. The unique characteristics of airborne and orbital SAR and Landsat MSS imagery are discussed. The case for merging the imagery is presented and tradeoffs between optical and digital merging techniques explored. Examples of Landsat and airborne SAR imagery are used to illustrate optical and digital merging. Analysis of the merged digital imagery illustrates the improved interpretability resulting from combining the outputs from the two sensor systems.

  5. Multispectral vegetative canopy parameter retrieval

    NASA Astrophysics Data System (ADS)

    Borel, Christoph C.; Bunker, David J.

    2011-11-01

    Precision agriculture, forestry and environmental remote sensing are applications uniquely suited to the 8 bands that DigitalGlobe's WorldView-2 provides. At the fine spatial resolution of 0.5 m (panchromatic) and 2 m (multispectral), individual trees can be readily resolved. Recent research [1] has shown that it is possible to invert plant reflectance spectra from hyperspectral data and estimate nitrogen content, leaf water content, leaf structure, canopy leaf area index and, for sparse canopies, also soil reflectance. The retrieval is based on inverting the SAIL (Scattering by Arbitrary Inclined Leaves) vegetation radiative transfer model for the canopy structure and the reflectance model PROSPECT4/5 for the leaf reflectance. Working on the paper [1] confirmed that a limited number of adjacent bands covering just the visible and near infrared can retrieve the parameters as well, opening up the possibility that this method can be used to analyze multispectral WV-2 data. Thus it seems possible to create WV-2-specific inversions using 8 bands and apply them to imagery of various vegetation-covered surfaces of agricultural and environmental interest. The capability of retrieving leaf water content and nitrogen content has important applications in determining the health of vegetation, e.g. plant growth status, disease mapping, quantitative drought assessment, nitrogen deficiency, plant vigor, yield, etc.
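
    A hedged sketch of the generic lookup-table inversion workflow that such model inversions typically follow; the forward function here is a crude placeholder, not PROSPECT/SAIL, and the parameter ranges are illustrative only:

    ```python
    # Generic lookup-table (LUT) inversion sketch. The forward model below is a crude
    # placeholder, NOT PROSPECT/SAIL; with a real radiative-transfer code the structure
    # (simulate -> search nearest spectrum -> read off parameters) stays the same.
    import numpy as np

    rng = np.random.default_rng(1)

    def toy_canopy_reflectance(lai, cab, n_bands=8):
        """Placeholder canopy model: darker visible with more chlorophyll (cab),
        brighter NIR with more leaf area (lai). Purely illustrative."""
        vis = 0.12 * np.exp(-0.03 * cab) * np.ones(4)                 # 4 visible bands
        nir = (0.15 + 0.35 * (1.0 - np.exp(-0.5 * lai))) * np.ones(4) # 4 NIR bands
        return np.concatenate([vis, nir])

    # 1) Build the LUT by sampling the parameter space.
    lai_grid = rng.uniform(0.2, 6.0, 5000)
    cab_grid = rng.uniform(10.0, 80.0, 5000)
    lut = np.array([toy_canopy_reflectance(l, c) for l, c in zip(lai_grid, cab_grid)])

    # 2) "Measured" 8-band spectrum (here simulated with known parameters plus noise).
    measured = toy_canopy_reflectance(3.0, 45.0) + rng.normal(0.0, 0.005, 8)

    # 3) Invert: the nearest LUT entry in the RMSE sense gives the parameter estimates.
    best = np.argmin(np.sqrt(np.mean((lut - measured) ** 2, axis=1)))
    print(f"Estimated LAI={lai_grid[best]:.2f}, Cab={cab_grid[best]:.1f}")
    ```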

  6. A color prediction model for imagery analysis

    NASA Technical Reports Server (NTRS)

    Skaley, J. E.; Fisher, J. R.; Hardy, E. E.

    1977-01-01

    A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptional framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.

  7. Multi-temporal image analysis of historical aerial photographs and recent satellite imagery reveals evolution of water body surface area and polygonal terrain morphology in Kobuk Valley National Park, Alaska

    NASA Astrophysics Data System (ADS)

    Necsoiu, Marius; Dinwiddie, Cynthia L.; Walter, Gary R.; Larsen, Amy; Stothoff, Stuart A.

    2013-06-01

    Multi-temporal image analysis of very-high-resolution historical aerial and recent satellite imagery of the Ahnewetut Wetlands in Kobuk Valley National Park, Alaska, revealed the nature of thaw lake and polygonal terrain evolution over a 54-year period of record comprising two 27-year intervals (1951-1978, 1978-2005). Using active-contouring-based change detection, high-precision orthorectification and co-registration and the normalized difference index, surface area expansion and contraction of 22 shallow water bodies, ranging in size from 0.09 to 179 ha, and the transition of ice-wedge polygons from a low- to a high-centered morphology were quantified. Total surface area decreased by only 0.4% during the first time interval, but decreased by 5.5% during the second time interval. Twelve water bodies (ten lakes and two ponds) were relatively stable with net surface area decreases of ≤10%, including four lakes that gained area during both time intervals, whereas ten water bodies (five lakes and five ponds) had surface area losses in excess of 10%, including two ponds that drained completely. Polygonal terrain remained relatively stable during the first time interval, but transformation of polygons from low- to high-centered was significant during the second time interval.

  8. Multispectral Image Road Extraction Based Upon Automated Map Conflation

    NASA Astrophysics Data System (ADS)

    Chen, Bin

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them, make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated with the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantages of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the road feature generation of a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented. The spectral measure, called anisotropy-tunable distance (ATD

  9. The use of multispectral sensing techniques to detect ponderosa pines trees under stress from insects or diseases

    NASA Technical Reports Server (NTRS)

    Heller, R. C.; Weber, F. P.; Zealear, K. A.

    1970-01-01

    The detection of stress induced by bark beetles in conifers is reviewed in two sections: (1) the analysis of very small scale aerial photographs taken by NASA's RB-57F aircraft on August 10, 1969, and (2) the analysis of multispectral imagery obtained by the optical-mechanical line scanner. Underexposure of all films taken from the RB-57 aircraft and inadequate flight coverage prevented drawing definitive conclusions regarding optimum scales and film combinations to detect the discolored infestations. Preprocessing of the scanner signals by both analog and digital computers improved the accuracy of target recognition. Selection and ranking of the best channels for signature recognition was the greatest contribution of digital processing. Improvements were made in separating hardwoods from conifers and old-kill pine trees from recent discolored trees and from healthy trees, but accuracy of detecting the green infested trees is still not acceptable on either the SPARC or thermal-contouring processor. From six years of experience in processing line scan data it is clear that the greatest gain in previsual detection of stress will occur when registered multispectral data from a single aperture or common instantaneous field of view scanner system can be collected and processed.

  10. Airborne system for testing multispectral reconnaissance technologies

    NASA Astrophysics Data System (ADS)

    Schmitt, Dirk-Roger; Doergeloh, Heinrich; Keil, Heiko; Wetjen, Wilfried

    1999-07-01

    There is an increasing demand for future airborne reconnaissance systems to obtain aerial images for tactical or peacekeeping operations. Unmanned Aerial Vehicles (UAVs) equipped with multispectral sensor systems and real-time, jam-resistant data transmission capabilities are of particularly high interest. An airborne experimental platform has been developed as a testbed to investigate different concepts of reconnaissance systems before their application in UAVs. It is based on a Dornier DO 228 aircraft, which is used as the flying platform. Great care has been taken to enable testing of different kinds of multispectral sensors; the platform can be equipped with an IR sensor head, high-resolution aerial cameras covering the whole optical spectrum, and radar systems. The onboard equipment further includes systems for digital image processing, compression, coding, and storage. The data are RF-transmitted to the ground station using technologies with high jam resistance. The images, after merging with enhanced vision components, are delivered to the observer, who has an uplink data channel available to control flight and imaging parameters.

  11. Identification of disrupted surfaces due to military activity at the Ft. Irwin National Training Center: An aerial photograph and satellite image analysis

    SciTech Connect

    McCarthy, L.E.; Marsh, S.E.; Lee, C.

    1996-07-01

    Concern for environmental management of our natural resources is most often focused on the anthropogenic impacts placed upon these resources. Desert landscapes, in particular, are fragile environments, and minimal stresses on surficial materials can greatly increase the rate and character of erosional responses. The National Training Center, Ft. Irwin, located in the middle of the Mojave Desert, California, provides an isolated study area of intense ORV activity occurring over a 50-year period. Geomorphic surfaces and surficial disruption from two study sites within the Ft. Irwin area were mapped from 1947 (1:28,400) and 1993 (1:12,000) black-and-white aerial photographs. Several field checks were conducted to verify this mapping. However, mapping from black-and-white aerial photography relies heavily on tonal differences, patterns, and morphological criteria. Satellite imagery, sensitive to changes in mineralogy, can help improve the ability to distinguish geomorphic units in desert regions. In order to assess both the extent of disrupted surfaces and the surficial geomorphology discernible from satellite imagery, analysis was done on SPOT panchromatic and Landsat Thematic Mapper (TM) multispectral imagery acquired during the spring of 1987 and 1993. The resulting classified images provide a clear indication of the capabilities of the satellite data to aid in the delineation of disrupted geomorphic surfaces.

  12. A comparison of LANDSAT TM to MSS imagery for detecting submerged aquatic vegetation in lower Chesapeake Bay

    NASA Technical Reports Server (NTRS)

    Ackleson, S. G.; Klemas, V.

    1985-01-01

    LANDSAT Thematic Mapper (TM) and Multispectral Scanner (MSS) imagery generated simultaneously over Guinea Marsh, Virginia, are assessed for their ability to detect submerged aquatic, bottom-adhering plant canopies (SAV). An unsupervised clustering algorithm is applied to both image types and the resulting classifications compared to SAV distributions derived from color aerial photography. Class confidence and accuracy are first computed for all water areas and then only for shallow areas where water depth is less than 6 feet. In both the TM and MSS imagery, masking water areas deeper than 6 ft resulted in greater classification accuracy at confidence levels greater than 50%. Both systems perform poorly in detecting SAV with crown cover densities less than 70%. On the basis of the spectral resolution, radiometric sensitivity, and location of visible bands, TM imagery does not offer a significant advantage over MSS data for detecting SAV in Lower Chesapeake Bay. However, because the TM imagery represents a higher spatial resolution, smaller SAV canopies may be detected than is possible with MSS data.

  13. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic illustrate the reduction in artifacts due to the SIDWT based fusion.
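
    A hedged sketch of one band-sharpening step using a decimated DWT from PyWavelets with a point-wise maximum rule on the detail coefficients; the paper's point is that a shift-invariant transform (SWT) reduces the artifacts this decimated version can introduce, and the HSV color-space step is omitted here:

    ```python
    # Hedged sketch of one fusion step with a decimated DWT (pywt); the shift-invariant
    # SWT favoured in the paper would replace dwt2/idwt2 to reduce artifacts.
    import numpy as np
    import pywt

    def dwt_max_rule_fuse(ms_band, pan, wavelet="db2"):
        """Fuse one (upsampled) multispectral band with a coregistered panchromatic image
        by keeping the MS approximation and the larger-magnitude detail coefficients."""
        cA_ms, (cH_ms, cV_ms, cD_ms) = pywt.dwt2(ms_band, wavelet)
        cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pan, wavelet)
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # point-wise maximum rule
        fused = (cA_ms, (pick(cH_ms, cH_p), pick(cV_ms, cV_p), pick(cD_ms, cD_p)))
        return pywt.idwt2(fused, wavelet)

    # Synthetic example: a degraded 64x64 "MS band" and a sharper "pan" image.
    rng = np.random.default_rng(0)
    pan = rng.random((64, 64))
    ms = pan.copy()
    ms[::2, :] = ms[1::2, :]          # crude loss of detail standing in for lower resolution
    sharpened = dwt_max_rule_fuse(ms, pan)
    print(sharpened.shape)
    ```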

  14. BOREAS RSS-2 Extracted Reflectance Factors Derived from ASAS Imagery

    NASA Technical Reports Server (NTRS)

    Russell, C.; Hall, Forrest G. (Editor); Nickerson, Jaime (Editor); Dabney, P.; Kovalick, W.; Graham, D.; Bur, Michael; Irons, James R.; Tierney, M.

    2000-01-01

    The BOREAS RSS-2 team derived atmospherically corrected bidirectional reflectance factor means from multispectral, multiangle ASAS imagery for small homogeneous areas near several BOREAS sites. The ASAS imagery was acquired from the C-130 aircraft platform in 1994 and 1996. The data are stored in tabular ASCII files.

  15. Resolution Enhancement of Multilook Imagery

    SciTech Connect

    Galbraith, Amy E.

    2004-07-01

    This dissertation studies the feasibility of enhancing the spatial resolution of multi-look remotely-sensed imagery using an iterative resolution enhancement algorithm known as Projection Onto Convex Sets (POCS). A multi-angle satellite image modeling tool is implemented, and simulated multi-look imagery is formed to test the resolution enhancement algorithm. Experiments are done to determine the optimal configuration and number of multi-angle low-resolution images needed for a quantitative improvement in the spatial resolution of the high-resolution estimate. The important topic of aliasing is examined in the context of the POCS resolution enhancement algorithm performance. In addition, the extension of the method to multispectral sensor images is discussed and an example is shown using multispectral confocal fluorescence imaging microscope data. Finally, the remote sensing issues of atmospheric path radiance and directional reflectance variations are explored to determine their effect on the resolution enhancement performance.

  16. Determine the utility of ERTS-1 imagery in the preparation of hydrologic atlases of arid land watersheds

    NASA Technical Reports Server (NTRS)

    Shown, L. M. (Principal Investigator); Owen, J. R.

    1973-01-01

    The author has identified the following significant results. The 9x9-inch transparencies from the ERTS-1 system seem to have better contrast in vegetation and drainage features than the 70-mm transparencies. This imagery can be magnified about eight times before it becomes excessively grainy. Imagery in band 7 appears to be the best single band product for viewing landform-water complexes. Band 5 best defines vegetation patterns. Multispectral color-additive viewing would appear to improve the separation of vegetation types where the vegetation exhibits moderate to strong infrared reflectance. Multispectral viewing did not appear to improve relief of drainage channel detail. False-color aerial infrared photographs at a scale of 1:120,000 for the Utah test site are excellent quality and can be magnified as much as 15 times without serious loss of contrast or excessive fuzziness. In desert areas with sparse to moderate shrub cover, the contrast between the soil background and the plant cover is so low that texture cannot be seen, even under high magnification. In areas of higher rainfall during the summer it is possible to discriminate coniferous and deciduous trees, grass, and shrub communities and to identify different rangeland treatment practices.

  17. Low SWaP multispectral sensors using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Dougherty, John; Varghese, Ron

    2015-06-01

    The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
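
    A minimal sketch of the de-mosaicing idea for a 4-band filter array, assuming a repeating 2x2 R/G/NIR/B cell layout; the layout and nearest-neighbour interpolation are illustrative assumptions, not the product's actual mosaic:

    ```python
    # Minimal sketch (assumed 2x2 RGB+NIR mosaic layout): each spectral channel is
    # sub-sampled from the filter-array image and interpolated back to full size,
    # analogous to Bayer de-mosaicing.
    import numpy as np

    def demosaic_rgbn(mosaic):
        """mosaic: 2D array where the repeating 2x2 cell is [[R, G], [NIR, B]] (assumed)."""
        h, w = mosaic.shape
        channels = {}
        offsets = {"R": (0, 0), "G": (0, 1), "NIR": (1, 0), "B": (1, 1)}
        for name, (dy, dx) in offsets.items():
            sub = mosaic[dy::2, dx::2].astype("float32")
            # Nearest-neighbour upsampling back to the full grid (bilinear would be smoother).
            channels[name] = np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)[:h, :w]
        return channels

    raw = np.arange(36, dtype=np.uint16).reshape(6, 6)    # stand-in for a sensor frame
    bands = demosaic_rgbn(raw)
    print({k: v.shape for k, v in bands.items()})         # each channel is 6x6
    ```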

  18. Physical controls and patterns of recruitment on the Drôme River (SE France): An analysis based on a chronosequence of high resolution aerial imagery

    NASA Astrophysics Data System (ADS)

    Piegay, H.; Stella, J. C.; Raepple, B.

    2014-12-01

    Along with the recent recognition of the role of vegetation in influencing channel hydraulics, and thus fluvial morphology, comes the need for scientific research on vegetation recruitment and its controlling factors. Flood disturbance is known to create a suitable physical template for the establishment of woody pioneers. Sapling recruitment patterns and underlying physical controls were investigated on a 5 km braided reach of the Drôme River in South-eastern France, following the 2003 50-year flood event. The approach was based on the analysis of a chronosequence of high resolution aerial images acquired yearly between 2005 and 2011, complemented by airborne LiDAR data and field observations. The study highlights how physical complexity induced by natural variations in hydro-climatic, and consequently hydro-geomorphic, conditions facilitates variable patterns of recruitment. The initial post-flood vegetative units, which covered up to 10% of the total active channel area in 2005, were seen to double within six years. The variability of hydro-climatic conditions was reflected in the temporal and spatial patterns of recruitment, with a pronounced peak of vegetation expansion in 2007 and a decreasing trend following higher flows in 2009. Recruitment was further seen to be sustained in a variety of geomorphic units, which showed different probabilities and patterns of recruitment. Active channels were the prominent geomorphic unit in terms of total biomass development, while in-channel wood units showed the highest probability of recruitment. Understanding vegetation recruitment is becoming crucial for predicting fluvial system evolution in different hydroclimatic contexts. Applied, these findings may contribute to improving efforts in the field of flood risk management, as well as restoration planning.

  19. Repeat, Low Altitude Measurements of Vegetation Status and Biomass Using Manned Aerial and UAS Imagery in a Piñon-Juniper Woodland

    NASA Astrophysics Data System (ADS)

    Krofcheck, D. J.; Lippitt, C.; Loerch, A.; Litvak, M. E.

    2015-12-01

    Measuring the above-ground biomass of vegetation is a critical component of any ecological monitoring campaign. Traditionally, vegetation biomass has been measured with allometry-based approaches. However, this is time-consuming, labor-intensive, and extremely expensive to conduct over large scales, and consequently cost-prohibitive at the landscape scale. Furthermore, in semi-arid ecosystems characterized by vegetation with inconsistent growth morphologies (e.g., piñon-juniper woodlands), even ground-based conventional allometric approaches are often challenging to execute consistently across individuals and through time, increasing the difficulty of the required measurements and consequently reducing the accuracy of the resulting products. To constrain the uncertainty associated with these campaigns, and to expand the extent of our measurement capability, we made repeat measurements of vegetation biomass in a semi-arid piñon-juniper woodland using structure-from-motion (SfM) techniques. We used high-spatial-resolution overlapping aerial images and high-accuracy ground control points, collected from both manned aircraft and multi-rotor UAS platforms, to generate a digital surface model (DSM) for our experimental region. We extracted high-precision canopy volumes from the DSM and compared these to the vegetation allometric data to generate high-precision canopy volume models. We used these models to predict the drivers of allometric equations for Pinus edulis and Juniperus monosperma (canopy height, diameter at breast height, and root collar diameter). Using this approach, we successfully accounted for the carbon stocks in standing live and standing dead vegetation across a 9 ha region, which contained 12.6 Mg / ha of standing dead biomass, with good agreement with our field plots. Here we present the initial results from an object-oriented workflow which aims to automate the biomass estimation process of tree crown delineation and volume calculation, and partition
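
    A minimal sketch of deriving crown volume from an SfM surface model, assuming a DSM and a terrain model on a common grid; the 15 cm cell size and crown mask are illustrative, not the study's processing chain:

    ```python
    # Hedged sketch: canopy volume from an SfM-derived surface model, assuming a DSM
    # (canopy surface) and a ground model on the same grid. Cell size and mask are toy values.
    import numpy as np

    def canopy_volume(dsm, dtm, crown_mask, cell_size=0.15):
        """Sum of (canopy height x cell area) over the delineated crown, in cubic metres."""
        height = np.clip(dsm - dtm, 0.0, None)            # canopy height model
        return float(np.sum(height[crown_mask]) * cell_size ** 2)

    # Toy example on a 0.15 m grid:
    dtm = np.zeros((7, 7))
    dsm = np.zeros((7, 7))
    dsm[2:5, 2:5] = 2.0                                   # a 3x3-cell crown, 2 m tall
    mask = dsm > 0.5
    print(f"Crown volume = {canopy_volume(dsm, dtm, mask):.3f} m^3")  # 9 cells * 0.0225 m^2 * 2 m
    ```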

  20. Spectral properties of agricultural crops and soils measured from space, aerial, field, and laboratory sensors

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Vanderbilt, V. C.; Robinson, B. F.; Daughtry, C. S. T.

    1981-01-01

    Investigations of the multispectral reflectance characteristics of crops and soils as measured from laboratory, field, aerial, and satellite sensor systems are reviewed. The relationships of important biological and physical characteristics to the spectral properties of crops and soils are addressed.

  1. The pan-sharpening of satellite and UAV imagery for agricultural applications

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Woroszkiewicz, Malgorzata

    2016-10-01

    Remote sensing techniques are widely used in many different areas of interest, i.e. urban studies, environmental studies, agriculture, etc., because they provide rapid and accurate information over large areas with optimal temporal, spatial and spectral resolutions. Agricultural management is one of the most common applications of remote sensing methods nowadays. Monitoring agricultural sites and creating information on the spatial distribution and characteristics of crops are important tasks that provide data for precision agriculture, crop management and registries of agricultural lands. Many different types of remote sensing data can be used for monitoring cultivated areas; the most popular is multispectral satellite imagery. Such data allow land use and land cover maps to be generated, based on various image processing and remote sensing methods. This paper presents the fusion of satellite and unmanned aerial vehicle (UAV) imagery for agricultural applications, especially for distinguishing crop types. The authors present selected data fusion methods for satellite images and data obtained from low altitudes. Moreover, the authors describe pan-sharpening approaches and apply selected pan-sharpening methods for multiresolution image fusion of satellite and UAV imagery. For this purpose, satellite images from the Landsat-8 OLI sensor and data collected within various UAV flights (with a mounted RGB camera) were used. The authors not only show the potential of the fusion of satellite and UAV images, but also present the application of pan-sharpening in crop identification and management.
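
    A minimal sketch of one classical pan-sharpening scheme (a Brovey-style ratio transform), shown only to illustrate how high-resolution spatial detail can be injected into multispectral bands; the paper evaluates several methods and does not necessarily use this one:

    ```python
    # Minimal Brovey-style pan-sharpening sketch: rescale each multispectral band by the
    # ratio of the panchromatic image to an intensity estimate. Illustrative only.
    import numpy as np

    def brovey_sharpen(ms, pan, eps=1e-6):
        """ms: (bands, H, W) multispectral stack upsampled to the pan grid; pan: (H, W)."""
        ms = ms.astype("float32")
        intensity = ms.mean(axis=0)                       # simple intensity estimate
        ratio = pan.astype("float32") / (intensity + eps)
        return ms * ratio[None, :, :]                     # inject pan detail into each band

    rng = np.random.default_rng(0)
    pan = rng.random((8, 8))
    ms = np.stack([pan * w for w in (0.2, 0.5, 0.3)]) + rng.normal(0.0, 0.01, (3, 8, 8))
    print(brovey_sharpen(ms, pan).shape)                  # (3, 8, 8)
    ```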

  2. Polar bears from space: assessing satellite imagery as a tool to track Arctic wildlife.

    PubMed

    Stapleton, Seth; LaRue, Michelle; Lecomte, Nicolas; Atkinson, Stephen; Garshelis, David; Porter, Claire; Atwood, Todd

    2014-01-01

    Development of efficient techniques for monitoring wildlife is a priority in the Arctic, where the impacts of climate change are acute and remoteness and logistical constraints hinder access. We evaluated high resolution satellite imagery as a tool to track the distribution and abundance of polar bears. We examined satellite images of a small island in Foxe Basin, Canada, occupied by a high density of bears during the summer ice-free season. Bears were distinguished from other light-colored spots by comparing images collected on different dates. A sample of ground-truthed points demonstrated that we accurately classified bears. Independent observers reviewed images and a population estimate was obtained using mark-recapture models. This estimate (N: 94; 95% Confidence Interval: 92-105) was remarkably similar to an abundance estimate derived from a line transect aerial survey conducted a few days earlier (N: 102; 95% CI: 69-152). Our findings suggest that satellite imagery is a promising tool for monitoring polar bears on land, with implications for use with other Arctic wildlife. Large scale applications may require development of automated detection processes to expedite review and analysis. Future research should assess the utility of multi-spectral imagery and examine sites with different environmental characteristics.

  3. Polar bears from space: assessing satellite imagery as a tool to track Arctic wildlife

    USGS Publications Warehouse

    Stapleton, Seth P.; LaRue, Michelle A.; Lecomte, Nicolas; Atkinson, Stephen N.; Garshelis, David L.; Porter, Claire; Atwood, Todd C.

    2014-01-01

    Development of efficient techniques for monitoring wildlife is a priority in the Arctic, where the impacts of climate change are acute and remoteness and logistical constraints hinder access. We evaluated high resolution satellite imagery as a tool to track the distribution and abundance of polar bears. We examined satellite images of a small island in Foxe Basin, Canada, occupied by a high density of bears during the summer ice-free season. Bears were distinguished from other light-colored spots by comparing images collected on different dates. A sample of ground-truthed points demonstrated that we accurately classified bears. Independent observers reviewed images and a population estimate was obtained using mark-recapture models. This estimate (N: 94; 95% Confidence Interval: 92-105) was remarkably similar to an abundance estimate derived from a line transect aerial survey conducted a few days earlier (N: 102; 95% CI: 69-152). Our findings suggest that satellite imagery is a promising tool for monitoring polar bears on land, with implications for use with other Arctic wildlife. Large scale applications may require development of automated detection processes to expedite review and analysis. Future research should assess the utility of multi-spectral imagery and examine sites with different environmental characteristics.

  4. Aerial Images from AN Uav System: 3d Modeling and Tree Species Classification in a Park Area

    NASA Astrophysics Data System (ADS)

    Gini, R.; Passoni, D.; Pinto, L.; Sona, G.

    2012-07-01

    The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment): it starts from the need to enhance the natural, artistic and cultural heritage, to improve its usability by employing mobile audiovisual systems for 3D reconstruction, and to improve monitoring procedures by using new media to integrate the fruition phase with the preservation one. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of goods (small buildings, agricultural fields and different tree species and bushes). Multispectral high resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Then, some tests were performed in order to analyze the UAV images' quality for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed through GPS to allow accuracy analysis. Aerial Triangulations (ATs) were carried out with photogrammetric commercial software, Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of tie points, to pick out the pros and cons of each package in managing non-conventional aerial imagery as well as the differences in the modeling approach. Further analyses were done on the differences between the EO parameters and the corresponding data coming from the onboard UAV navigation system.

  5. Aerial Explorers

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Greg; Ippolito, Corey

    2005-01-01

    This paper presents recent results from a mission architecture study of planetary aerial explorers. In this study, several mission scenarios were developed in simulation and evaluated on success in meeting mission goals. This aerial explorer mission architecture study is unique in comparison with previous Mars airplane research activities. The study examines how aerial vehicles can find and gain access to otherwise inaccessible terrain features of interest. The aerial explorer also engages in a high-level of (indirect) surface interaction, despite not typically being able to takeoff and land or to engage in multiple flights/sorties. To achieve this goal, a new mission paradigm is proposed: aerial explorers should be considered as an additional element in the overall Entry, Descent, Landing System (EDLS) process. Further, aerial vehicles should be considered primarily as carrier/utility platforms whose purpose is to deliver air-deployed sensors and robotic devices, or symbiotes, to those high-value terrain features of interest.

  6. [A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].

    PubMed

    Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong

    2011-10-01

    Due to the problem that the convex cone analysis (CCA) method can only extract a limited number of endmembers from multispectral imagery, this paper proposed a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. Firstly, in order to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm was used to lower the dimensions of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) was used for image clustering based on the similarity of pixel spectra. Then, through post-processing of the clusters and combination of small clusters, the whole image was divided into several blocks (tiles). Lastly, according to the complexity of each image block's landscape and analysis of the scatter diagrams, the authors determined the number of endmembers and extracted them using the hourglass algorithm. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method overcame the limitation on the number of endmembers and improved the accuracy of endmember extraction. The method provides a new way to perform endmember extraction from multispectral images.
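
    A rough sketch of the early stages only (dimensionality reduction, clustering, candidate extreme pixels), with KMeans standing in for ISODATA; this is not the paper's hourglass algorithm, and the image is synthetic:

    ```python
    # Rough sketch of the first stages only: PCA to reduce spectral dimensionality, then
    # clustering (KMeans as a simple stand-in for ISODATA) and extraction of extreme pixels
    # per cluster as candidate endmembers. This is NOT the paper's hourglass algorithm.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    pixels = rng.random((5000, 6))                        # synthetic 6-band image, flattened

    pcs = PCA(n_components=3).fit_transform(pixels)       # remove spectral redundancy
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)

    endmembers = []
    for c in range(4):
        members = np.where(labels == c)[0]
        # Candidate endmember: the cluster pixel farthest from the PC-space origin (data mean).
        idx = members[np.argmax(np.linalg.norm(pcs[members], axis=1))]
        endmembers.append(pixels[idx])
    print(np.array(endmembers).shape)                     # (4, 6) candidate endmember spectra
    ```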

  7. Investigation of Satellite Imagery for Regional Planning

    NASA Technical Reports Server (NTRS)

    Harting, W. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Satellite multispectral imagery was found to be useful in regional planning for depicting general developed land patterns, wooded areas, and newly constructed highways by using visual photointerpretation methods. Other characteristics, such as residential and nonresidential development, street patterns, development density, and some vacant land components cannot be adequately detected using these standard methods.

  8. Design and implementation of digital airborne multispectral camera system

    NASA Astrophysics Data System (ADS)

    Lin, Zhaorong; Zhang, Xuguo; Wang, Li; Pan, Deai

    2012-10-01

    The multispectral imaging equipment is a new generation of remote sensor that can obtain the target image and spectral information simultaneously. A digital airborne multispectral camera system using a discrete filter method has been designed and implemented for unmanned aerial vehicle (UAV) and manned aircraft platforms. The digital airborne multispectral camera system has the advantages of a larger frame, higher resolution, and both panchromatic and multispectral imaging. It also has great potential applications in the fields of environmental and agricultural monitoring and target detection and discrimination. In order to enhance the measurement precision and accuracy of position and orientation, an Inertial Measurement Unit (IMU) is integrated in the digital airborne multispectral camera. Meanwhile, the Temperature Control Unit (TCU) guarantees that the camera can operate in a normal state at different altitudes to avoid window fogging and frosting, which would degrade the imaging quality greatly. Finally, flight experiments were conducted to demonstrate the functionality and performance of the digital airborne multispectral camera. The resolution capability, positioning accuracy, and classification and recognition ability were validated.

  9. Multispectral image analysis for object recognition and classification

    NASA Astrophysics Data System (ADS)

    Viau, C. R.; Payeur, P.; Cretu, A.-M.

    2016-05-01

    Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
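
    A hedged sketch of the overall comparison the abstract describes: train an SVM on visual features, thermal features, and their concatenation, and compare accuracies. The features and labels below are synthetic placeholders, not the paper's descriptors or dataset:

    ```python
    # Hedged sketch: compare SVM accuracy on visual-only, thermal-only and combined features.
    # Feature vectors and labels are synthetic placeholders.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 600
    vis_feats = rng.normal(0, 1, (n, 16))                  # e.g. visual texture/shape features
    thermal_feats = rng.normal(0, 1, (n, 8))               # e.g. thermal intensity statistics
    labels = (vis_feats[:, 0] + 0.8 * thermal_feats[:, 0] > 0).astype(int)  # toy classes

    for name, X in [("visual", vis_feats), ("thermal", thermal_feats),
                    ("multispectral", np.hstack([vis_feats, thermal_feats]))]:
        Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
        print(name, f"accuracy = {accuracy_score(yte, clf.predict(Xte)):.2f}")
    ```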

  10. Galileo multispectral imaging of Earth.

    PubMed

    Geissler, P; Thompson, W R; Greenberg, R; Moersch, J; McEwen, A; Sagan, C

    1995-08-25

    Nearly 6000 multispectral images of Earth were acquired by the Galileo spacecraft during its two flybys. The Galileo images offer a unique perspective on our home planet through the spectral capability made possible by four narrowband near-infrared filters, intended for observations of methane in Jupiter's atmosphere, which are not incorporated in any of the currently operating Earth orbital remote sensing systems. Spectral variations due to mineralogy, vegetative cover, and condensed water are effectively mapped by the visible and near-infrared multispectral imagery, showing a wide variety of biological, meteorological, and geological phenomena. Global tectonic and volcanic processes are clearly illustrated by these images, providing a useful basis for comparative planetary geology. Differences between plant species are detected through the narrowband IR filters on Galileo, allowing regional measurements of variation in the "red edge" of chlorophyll and the depth of the 1-micrometer water band, which is diagnostic of leaf moisture content. Although evidence of life is widespread in the Galileo data set, only a single image (at approximately 2 km/pixel) shows geometrization plausibly attributable to our technical civilization. Water vapor can be uniquely imaged in the Galileo 0.73-micrometer band, permitting spectral discrimination of moist and dry clouds with otherwise similar albedo. Surface snow and ice can be readily distinguished from cloud cover by narrowband imaging within the sensitivity range of Galileo's silicon CCD camera. Ice grain size variations can be mapped using the weak H2O absorption at 1 micrometer, a technique which may find important applications in the exploration of the moons of Jupiter. The Galileo images have the potential to make unique contributions to Earth science in the areas of geological, meteorological and biological remote sensing, due to the inclusion of previously untried narrowband IR filters. The vast scale and near global

  11. The use of four band multispectral photography to identify forest cover types

    NASA Technical Reports Server (NTRS)

    Downs, S. W., Jr.

    1977-01-01

    Four-band multispectral aerial photography and a color additive viewer were employed to identify forest cover types in Northern Alabama. The multispectral photography utilized the blue, green, red and near-infrared spectral regions and was made with black and white infrared film. On the basis of color differences alone, a differentiation between conifers and hardwoods was possible; however, supplementary information related to forest ecology proved necessary for the differentiation of various species of pines and hardwoods.

  12. Proceedings of the 2004 High Spatial Resolution Commercial Imagery Workshop

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Topics covered include: NASA Applied Sciences Program; USGS Land Remote Sensing: Overview; QuickBird System Status and Product Overview; ORBIMAGE Overview; IKONOS 2004 Calibration and Validation Status; OrbView-3 Spatial Characterization; On-Orbit Modulation Transfer Function (MTF) Measurement of QuickBird; Spatial Resolution Characterization for QuickBird Image Products 2003-2004 Season; Image Quality Evaluation of QuickBird Super Resolution and Revisit of IKONOS: Civil and Commercial Application Project (CCAP); On-Orbit System MTF Measurement; QuickBird Post Launch Geopositional Characterization Update; OrbView-3 Geometric Calibration and Geopositional Accuracy; Geopositional Statistical Methods; QuickBird and OrbView-3 Geopositional Accuracy Assessment; Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images; Laboratory Measurement of Bidirectional Reflectance of Radiometric Tarps; Stennis Space Center Verification and Validation Capabilities; Joint Agency Commercial Imagery Evaluation (JACIE) Team; Adjacency Effects in High Resolution Imagery; Effect of Pulse Width vs. GSD on MTF Estimation; Camera and Sensor Calibration at the USGS; QuickBird Geometric Verification; Comparison of MODTRAN to Heritage-based Results in Vicarious Calibration at University of Arizona; Using Remotely Sensed Imagery to Determine Impervious Surface in Sioux Falls, South Dakota; Estimating Sub-Pixel Proportions of Sagebrush with a Regression Tree; How Do YOU Use the National Land Cover Dataset?; The National Map Hazards Data Distribution System; Recording a Troubled World; What Does This-Have to Do with This?; When Can a Picture Save a Thousand Homes?; InSAR Studies of Alaska Volcanoes; Earth Observing-1 (EO-1) Data Products; Improving Access to the USGS Aerial Film Collections: High Resolution Scanners; Improving Access to the USGS Aerial Film Collections: Phoenix Digitizing System Product Distribution; System and Product Characterization: Issues Approach

  13. Configuration and specifications of an Unmanned Aerial Vehicle (UAV) for early site specific weed management.

    PubMed

    Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel

    2013-01-01

    A new aerial platform has recently emerged for image acquisition, the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI index. These results suggest that an agreement between spectral and spatial resolutions is needed to optimise the flight mission according to each agronomic objective, as affected by the size of the smallest objects to be discriminated (weed plants or weed patches).
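
    A minimal sketch of the three vegetation indices named above, computed per pixel from band arrays; whether the inputs are reflectances or digital numbers depends on the pre-processing, and the sample values are invented:

    ```python
    # Per-pixel vegetation indices from band arrays; sample values are placeholders.
    import numpy as np

    def excess_green(r, g, b):
        # ExG, often computed on normalized chromatic coordinates; shown here on band values.
        return 2.0 * g - r - b

    def ngrdi(r, g):
        return (g - r) / np.maximum(g + r, 1e-6)           # Normalised Green-Red Difference Index

    def ndvi(nir, r):
        return (nir - r) / np.maximum(nir + r, 1e-6)        # Normalised Difference Vegetation Index

    r = np.array([[0.10, 0.25]]); g = np.array([[0.20, 0.22]])
    b = np.array([[0.08, 0.20]]); nir = np.array([[0.45, 0.30]])
    print(excess_green(r, g, b), ngrdi(r, g), ndvi(nir, r), sep="\n")
    ```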

  14. Configuration and Specifications of an Unmanned Aerial Vehicle (UAV) for Early Site Specific Weed Management

    PubMed Central

    Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel

    2013-01-01

    A new aerial platform has recently emerged for image acquisition, the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI index. These results suggest that an agreement between spectral and spatial resolutions is needed to optimise the flight mission according to each agronomic objective, as affected by the size of the smallest objects to be discriminated (weed plants or weed patches). PMID:23483997

  15. Mapping variations in weight percent silica measured from multispectral thermal infrared imagery - Examples from the Hiller Mountains, Nevada, USA and Tres Virgenes-La Reforma, Baja California Sur, Mexico

    USGS Publications Warehouse

    Hook, S.J.; Dmochowski, J.E.; Howard, K.A.; Rowan, L.C.; Karlstrom, K.E.; Stock, J.M.

    2005-01-01

    Remotely sensed multispectral thermal infrared (8-13 μm) images are increasingly being used to map variations in surface silicate mineralogy. These studies utilize the shift to longer wavelengths in the main spectral feature of minerals in this wavelength region (the reststrahlen band) as the mineralogy changes from felsic to mafic. An approach is described for determining the amount of this shift and then using the shift with a reference curve, derived from laboratory data, to remotely determine the weight percent SiO2 of the surface. The approach has broad applicability to many study areas and can also be fine-tuned to give greater accuracy in a particular study area if field samples are available. The approach was assessed using airborne multispectral thermal infrared images from the Hiller Mountains, Nevada, USA and the Tres Virgenes-La Reforma, Baja California Sur, Mexico. Results indicate the general approach slightly overestimates the weight percent SiO2 of low silica rocks (e.g. basalt) and underestimates the weight percent SiO2 of high silica rocks (e.g. granite). Fine-tuning the general approach with measurements from field samples provided good results for both areas, with errors in the recovered weight percent SiO2 of a few percent. The map units identified by these techniques and traditional mapping at the Hiller Mountains demonstrate the continuity of the crystalline rocks from the Hiller Mountains southward to the White Hills, supporting the idea that these ranges represent an essentially continuous footwall block below a regional detachment. Results from the Baja California data verify the most recent volcanism to be basaltic-andesite. © 2005 Elsevier Inc. All rights reserved.
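
    An illustrative sketch of the final mapping step only: once the spectral position of the emissivity minimum is estimated for a pixel, a laboratory-derived reference curve converts it to weight percent SiO2. The curve values and band centres below are invented, not the paper's calibration:

    ```python
    # Illustrative sketch: map the reststrahlen feature position to weight % SiO2 using a
    # laboratory-style reference curve. All numbers below are invented placeholders.
    import numpy as np

    # Hypothetical reference curve: emissivity-minimum wavelength (um) vs. weight % SiO2
    # (longer-wavelength feature = more mafic = lower SiO2).
    ref_wavelength_um = np.array([8.6, 8.9, 9.2, 9.6, 10.0, 10.4])
    ref_wt_pct_sio2   = np.array([75.0, 68.0, 60.0, 52.0, 45.0, 40.0])

    def sio2_from_feature(minimum_um):
        """Interpolate weight % SiO2 from the spectral position of the emissivity minimum."""
        return np.interp(minimum_um, ref_wavelength_um, ref_wt_pct_sio2)

    # Example: pixel emissivity spectrum sampled at assumed instrument band centres (um).
    bands_um = np.array([8.3, 8.6, 9.1, 9.8, 10.7, 11.7])
    emissivity = np.array([0.96, 0.93, 0.90, 0.94, 0.97, 0.98])
    feature_position = bands_um[np.argmin(emissivity)]    # crude estimate; a parabola fit is better
    print(f"Estimated SiO2 = {sio2_from_feature(feature_position):.0f} wt%")
    ```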

  16. Active and passive multispectral scanner for earth resources applications: An advanced applications flight experiment

    NASA Technical Reports Server (NTRS)

    Hasell, P. G., Jr.; Peterson, L. M.; Thomson, F. J.; Work, E. A.; Kriegler, F. J.

    1977-01-01

    The development of an experimental airborne multispectral scanner to provide both active (laser-illuminated) and passive (solar-illuminated) data from a commonly registered surface scene is discussed. The system was constructed according to specifications derived in an initial program design study. The system was installed in an aircraft and test flown to produce illustrative active and passive multispectral imagery. However, data were neither collected nor analyzed for any specific application.

  17. Aerial thermography for energy efficiency of buildings: the ChoT project

    NASA Astrophysics Data System (ADS)

    Mandanici, Emanuele; Conte, Paolo

    2016-10-01

    The ChoT project aims at analysing the potential of aerial thermal imagery to produce large-scale datasets for energy efficiency analyses and policies in urban environments. It is funded by the Italian Ministry of Education, University and Research (MIUR) in the framework of the SIR 2014 (Scientific Independence of young Researchers) programme. The city of Bologna (Italy) was chosen as the case study. The acquisition of thermal infrared images at different times by multiple aerial flights is one of the main tasks of the project. The present paper provides an overview of the ChoT project and delves into some specific aspects of the data processing chain: the computation of the radiometric quantities of the atmosphere, the estimation of surface emissivity (through an object-oriented classification applied to a very high resolution multispectral image to distinguish among the major roofing materials) and the sky-view factor (by means of a digital surface model). To collect ground truth data, the surface temperature of roofs and road pavements was measured at several locations at the same time as the aircraft acquired the thermal images. Furthermore, the emissivity of some roofing materials was estimated by means of a thermal camera and a contact probe. All the surveys were georeferenced by GPS. The results of the first surveying campaign demonstrate the high sensitivity of the model to the variability of the surface emissivity and the atmospheric parameters.
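    The reported sensitivity to emissivity and atmospheric parameters follows from the single-channel radiative-transfer inversion that thermal processing chains of this kind typically use. The sketch below is a generic formulation, not necessarily the ChoT implementation; the band-effective wavelength, transmittance and path-radiance terms are assumed inputs.

```python
import numpy as np

# Physical constants for Planck's law in wavelength form (SI units).
H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / wavelength_m**5) / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def surface_temperature(l_sensor, emissivity, tau, l_up, l_down, wavelength_m=10.5e-6):
    """Invert a simplified single-channel radiative transfer equation,
    L_sensor = tau * (eps * B(Ts) + (1 - eps) * L_down) + L_up,
    for the surface temperature Ts. All inputs are assumptions standing in
    for atmospheric terms computed elsewhere in the processing chain."""
    b_surface = (l_sensor - l_up - tau * (1.0 - emissivity) * l_down) / (tau * emissivity)
    # Analytic inversion of Planck's law for Ts.
    return (H * C / (wavelength_m * KB)) / np.log(1.0 + 2 * H * C**2 / (wavelength_m**5 * b_surface))
```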

  18. Unmanned Aerial Vehicle to Estimate Nitrogen Status of Turfgrasses

    PubMed Central

    Corniglia, Matteo; Gaetani, Monica; Grossi, Nicola; Magni, Simone; Migliazzi, Mauro; Angelini, Luciana; Mazzoncini, Marco; Silvestri, Nicola; Fontanelli, Marco; Raffaelli, Michele; Peruzzi, Andrea; Volterrani, Marco

    2016-01-01

    Spectral reflectance data originating from Unmanned Aerial Vehicle (UAV) imagery are a valuable tool to monitor plant nutrition and reduce nitrogen (N) application to real needs, thus producing both economic and environmental benefits. The objectives of the trial were i) to compare the spectral reflectance of 3 turfgrasses acquired via UAV and by a ground-based instrument; ii) to test the sensitivity of the 2 data acquisition sources in detecting induced variation in N levels. N application gradients from 0 to 250 kg ha-1 were created on 3 different turfgrass species: Cynodon dactylon x transvaalensis (Cdxt) ‘Patriot’, Zoysia matrella (Zm) ‘Zeon’ and Paspalum vaginatum (Pv) ‘Salam’. Proximity- and remote-sensed reflectance measurements were acquired using a GreenSeeker handheld crop sensor and a UAV with an onboard multispectral sensor, to determine the Normalized Difference Vegetation Index (NDVI). Proximity-sensed NDVI is highly correlated with data acquired from the UAV, with r values ranging from 0.83 (Zm) to 0.97 (Cdxt). Relating NDVI-UAV with clippings N, the highest r is for Cdxt (0.95). The most reactive species to N fertilization is Cdxt, with a clippings N% ranging from 1.2% to 4.1%. UAV imagery can adequately assess the N status of turfgrasses and its spatial variability within a species, so for large areas, such as golf courses, sod farms or race courses, UAV-acquired data can optimize turf management. For relatively small green areas, a hand-held crop sensor can be a less expensive and more practical option. PMID:27341674
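    The r values reported here are ordinary Pearson correlations between paired NDVI measurements from the proximal sensor and the UAV. A minimal sketch follows; the plot values are made-up placeholders used only to show the calculation.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from band reflectances."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)

# Hypothetical paired plot-level NDVI values (illustrative only).
ndvi_greenseeker = np.array([0.55, 0.61, 0.67, 0.72, 0.78, 0.81])
ndvi_uav = np.array([0.52, 0.60, 0.69, 0.70, 0.80, 0.83])

# Pearson correlation coefficient, the r statistic reported in the abstract.
r = np.corrcoef(ndvi_greenseeker, ndvi_uav)[0, 1]
print(f"r between proximal and UAV NDVI: {r:.2f}")
```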

  19. Unmanned Aerial Vehicle to Estimate Nitrogen Status of Turfgrasses.

    PubMed

    Caturegli, Lisa; Corniglia, Matteo; Gaetani, Monica; Grossi, Nicola; Magni, Simone; Migliazzi, Mauro; Angelini, Luciana; Mazzoncini, Marco; Silvestri, Nicola; Fontanelli, Marco; Raffaelli, Michele; Peruzzi, Andrea; Volterrani, Marco

    2016-01-01

    Spectral reflectance data originating from Unmanned Aerial Vehicle (UAV) imagery are a valuable tool to monitor plant nutrition and reduce nitrogen (N) application to real needs, thus producing both economic and environmental benefits. The objectives of the trial were i) to compare the spectral reflectance of 3 turfgrasses acquired via UAV and by a ground-based instrument; ii) to test the sensitivity of the 2 data acquisition sources in detecting induced variation in N levels. N application gradients from 0 to 250 kg ha-1 were created on 3 different turfgrass species: Cynodon dactylon x transvaalensis (Cdxt) 'Patriot', Zoysia matrella (Zm) 'Zeon' and Paspalum vaginatum (Pv) 'Salam'. Proximity- and remote-sensed reflectance measurements were acquired using a GreenSeeker handheld crop sensor and a UAV with an onboard multispectral sensor, to determine the Normalized Difference Vegetation Index (NDVI). Proximity-sensed NDVI is highly correlated with data acquired from the UAV, with r values ranging from 0.83 (Zm) to 0.97 (Cdxt). Relating NDVI-UAV with clippings N, the highest r is for Cdxt (0.95). The most reactive species to N fertilization is Cdxt, with a clippings N% ranging from 1.2% to 4.1%. UAV imagery can adequately assess the N status of turfgrasses and its spatial variability within a species, so for large areas, such as golf courses, sod farms or race courses, UAV-acquired data can optimize turf management. For relatively small green areas, a hand-held crop sensor can be a less expensive and more practical option.

  20. An Approach to Application of Multispectral Sensors, using AVIRIS Data

    NASA Technical Reports Server (NTRS)

    Warner, Amanda; Blonski, Slawomir; Gasser, Gerald; Ryan, Robert; Zanoni, Vicki

    2001-01-01

    High spatial resolution multispectral/hyperspectral sensors are being developed by private industry with science/research customers as end users. With an increasingly wide range of sensor choices, it is important for the remote sensing science community and commercial community alike to understand the trade-offs between ground sample distance (GSD), spectral resolution, and signal-to-noise ratio (SNR) in selecting a sensor that will best meet their needs. High spatial resolution hyperspectral imagery and super resolution multispectral charge-coupled device imagery can be used to develop prototypes of proposed data acquisition systems without building new systems or collecting large sets of additional data. By using these datasets to emulate proposed and existing systems, imaging systems may be optimized to meet customer needs in a virtual environment. This approach also enables one to determine, a priori, whether an existing dataset will be useful for a given application.

  1. Analysis of the characteristics appearing in LANDSAT multispectral images in the geological structural mapping of the midwestern portion of the Rio Grande do Sul shield. M.S. Thesis - 25 Mar. 1982; [Brazil]

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Ohara, T.

    1982-01-01

    The central-western part of the Rio Grande do Sul Shield was geologically mapped to test the use of MSS-LANDSAT data in the study of mineralized regions. Visual interpretation of the images at the scale of 1:500,000 consisted of the identification and analysis of the different tonal and textural patterns in each spectral band. After the structural geologic mapping of the area using visual interpretation techniques, the statistical data obtained were evaluated, especially data concerning the size and direction of fractures. The IMAGE-100 system was used to enlarge and enhance certain imagery. The LANDSAT MSS data offer several advantages over conventional black and white aerial photographs for geological studies. One is their multispectral character: band 6 and the false-color composite of bands 4, 5 and 7 were best suited for this study. Another is the coverage of a large imaging area of about 35,000 sq km, giving a synoptic view that is very useful for perceiving the regional geological setting.

  2. Multispectral photography for earth resources

    NASA Technical Reports Server (NTRS)

    Wenderoth, S.; Yost, E.; Kalia, R.; Anderson, R.

    1972-01-01

    A guide for producing accurate multispectral results for earth resource applications is presented along with theoretical and analytical concepts of color and multispectral photography. Topics discussed include: capabilities and limitations of color and color infrared films; image color measurements; methods of relating ground phenomena to film density and color measurement; sensitometry; considerations in the selection of multispectral cameras and components; and mission planning.

  3. Multispectral imaging probe

    SciTech Connect

    Sandison, David R.; Platzbecker, Mark R.; Descour, Michael R.; Armour, David L.; Craig, Marcus J.; Richards-Kortum, Rebecca

    1999-01-01

    A multispectral imaging probe delivers a range of wavelengths of excitation light to a target and collects a range of expressed light wavelengths. The multispectral imaging probe is adapted for mobile use and use in confined spaces, and is sealed against the effects of hostile environments. The multispectral imaging probe comprises a housing that defines a sealed volume that is substantially sealed from the surrounding environment. A beam splitting device mounts within the sealed volume. Excitation light is directed to the beam splitting device, which directs the excitation light to a target. Expressed light from the target reaches the beam splitting device along a path coaxial with the path traveled by the excitation light from the beam splitting device to the target. The beam splitting device directs expressed light to a collection subsystem for delivery to a detector.

  4. Multispectral imaging probe

    DOEpatents

    Sandison, D.R.; Platzbecker, M.R.; Descour, M.R.; Armour, D.L.; Craig, M.J.; Richards-Kortum, R.

    1999-07-27

    A multispectral imaging probe delivers a range of wavelengths of excitation light to a target and collects a range of expressed light wavelengths. The multispectral imaging probe is adapted for mobile use and use in confined spaces, and is sealed against the effects of hostile environments. The multispectral imaging probe comprises a housing that defines a sealed volume that is substantially sealed from the surrounding environment. A beam splitting device mounts within the sealed volume. Excitation light is directed to the beam splitting device, which directs the excitation light to a target. Expressed light from the target reaches the beam splitting device along a path coaxial with the path traveled by the excitation light from the beam splitting device to the target. The beam splitting device directs expressed light to a collection subsystem for delivery to a detector. 8 figs.

  5. Unmanned aerial vehicles for rangeland mapping and monitoring: a comparison of two systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial photography from unmanned aerial vehicles (UAVs) bridges the gap between ground-based observations and remotely sensed imagery from aerial and satellite platforms. UAVs can be deployed quickly and repeatedly, are less costly and safer than piloted aircraft, and can obtain very high-resolution...

  6. 3-D Scene Reconstruction from Aerial Imagery

    DTIC Science & Technology

    2012-03-01

    (Only front-matter fragments are available for this record; they reference CMVS/PMVS2 reconstruction parameters, ground-truth reference markers, and per-image keypoint counts.)

  7. Incorporation of texture, intensity, hue, and saturation for rangeland monitoring with unmanned aircraft imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial photography acquired with unmanned aerial vehicles (UAVs) has great potential for incorporation into rangeland health monitoring protocols, and object-based image analysis is well suited for this hyperspatial imagery. A major drawback, however, is the low spectral resolution of the imagery, b...

  8. Use of Kendall's coefficient of concordance to assess agreement among observers of very high resolution imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Ground-based vegetation monitoring methods are expensive, time-consuming, and limited in sample-size. Aerial imagery is appealing to managers because of the reduced time and expense and the increase in sample size. One challenge of aerial imagery is detecting differences among observers of the sam...

  9. Detection of Verticillium wilt of olive trees and downy mildew of opium poppy using hyperspectral and thermal UAV imagery

    NASA Astrophysics Data System (ADS)

    Calderón Madrid, Rocío; Navas Cortés, Juan Antonio; Montes Borrego, Miguel; Landa del Castillo, Blanca Beatriz; Lucena León, Carlos; Jesús Zarco Tejada, Pablo

    2014-05-01

    The present study explored the use of high-resolution thermal, multispectral and hyperspectral imagery as indicators of the infections caused by Verticillium wilt (VW) in olive trees and downy mildew (DM) in opium poppy fields. VW, caused by the soil-borne fungus Verticillium dahliae, and DM, caused by the biotrophic obligate oomycete Peronospora arborescens, are the most economically limiting diseases of olive trees and opium poppy, respectively, worldwide. V. dahliae infects the plant through the roots and colonizes its vascular system, blocking water flow and eventually inducing water stress. P. arborescens colonizes the mesophyll, with the first symptoms appearing as small chlorotic leaf lesions, which can evolve into curled and thickened tissues and systemic infections that become deformed and necrotic as the disease develops. The work conducted to detect VW and DM infection consisted of the acquisition of time series of airborne thermal, multispectral and hyperspectral imagery using 2-m and 5-m wingspan electric Unmanned Aerial Vehicles (UAVs) in spring and summer of three consecutive years (2009 to 2011) for VW detection and on three dates in spring of 2009 for DM detection. Two 7-ha commercial olive orchards naturally infected with V. dahliae and two opium poppy field plots artificially infected with P. arborescens were flown. Concurrently with the airborne campaigns, the olive orchards and opium poppy fields were surveyed in situ to determine actual VW severity and DM incidence. Furthermore, field measurements were conducted at leaf and crown level. The field results related to VW detection showed a significant increase in crown temperature (Tc) minus air temperature (Ta) and a decrease in leaf stomatal conductance (G) as VW severity increased. This reduction in G was associated with a significant increase in the Photochemical Reflectance Index (PRI570) and a decrease in chlorophyll fluorescence. DM-asymptomatic leaves showed significantly higher NDVI and lower green/red index
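    The two indicators highlighted in the field results, PRI570 and the crown-minus-air temperature difference, reduce to simple band and temperature arithmetic. The sketch below assumes the standard PRI definition using the 531 and 570 nm bands; it is not taken from the paper itself.

```python
import numpy as np

def pri570(r531, r570):
    """Photochemical Reflectance Index, PRI570 = (R531 - R570) / (R531 + R570),
    computed from reflectance in the 531 nm and 570 nm bands."""
    r531, r570 = np.asarray(r531, float), np.asarray(r570, float)
    return (r531 - r570) / (r531 + r570 + 1e-12)

def crown_minus_air(crown_temp_c, air_temp_c):
    """Tc - Ta, the thermal indicator that increased with VW severity."""
    return np.asarray(crown_temp_c, float) - np.asarray(air_temp_c, float)
```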

  10. Combined aerial and ground technique for assessing structural heat loss

    NASA Astrophysics Data System (ADS)

    Snyder, William C.; Schott, John R.

    1994-03-01

    The results of a combined aerial and ground-based structural heat loss survey are presented. The aerial imagery was collected by a thermal IR line scanner. Enhanced quantitative analysis of the imagery gives the roof heat flow and insulation level. The ground images were collected by a video van and converted to still frames stored on a video disk. A computer-based presentation system retrieves the images and other information, indexed by street address, for screening and dissemination to owners. We conclude that the combined aerial and ground survey effectively discriminates between well-insulated and poorly insulated structures, and that such a survey is a cost-effective alternative to site audits.

  11. "A" Is for Aerial Maps and Art

    ERIC Educational Resources Information Center

    Todd, Reese H.; Delahunty, Tina

    2007-01-01

    The technology of satellite imagery and remote sensing adds a new dimension to teaching and learning about maps with elementary school children. Just a click of the mouse brings into view some images of the world that could only be imagined a generation ago. Close-up aerial pictures of the school and neighborhood quickly catch the interest of…

  12. Experimental applications of multispectral data to natural resource inventory and survey

    NASA Technical Reports Server (NTRS)

    Mallon, H. J.

    1970-01-01

    The feasibility of using multispectral, color, color infrared, thermal infrared imagery and related ground data to recognize, identify, determine and monitor the status of mineral ore and metals stockpiles is studied. An attempt was made to identify valid, unique spectral signatures of such materials for possible use under a wide variety of environmental circumstances. Research emphasis was upon the analysis of the multiband imagery from the various film-filter combinations, using density analysis techniques.

  13. Galileo multispectral imaging of Earth

    NASA Astrophysics Data System (ADS)

    Geissler, Paul; Thompson, W. Reid; Greenberg, Richard; Moersch, Jeff; McEwen, Alfred; Sagan, Carl

    Nearly 6000 multispectral images of Earth were acquired by the Galileo spacecraft during its two flybys. The Galileo images offer a unique perspective on our home planet through the spectral capability made possible by four narrowband near-infrared filters, intended for observations of methane in Jupiter's atmosphere, which are not incorporated in any of the currently operating Earth orbital remote sensing systems. Spectral variations due to mineralogy, vegetative cover, and condensed water are effectively mapped by the visible and near-infrared multispectral imagery, showing a wide variety of biological, meteorological, and geological phenomena. Global tectonic and volcanic processes are clearly illustrated by these images, providing a useful basis for comparative planetary geology. Differences between plant species are detected through the narrowband IR filters on Galileo, allowing regional measurements of variation in the ``red edge'' of chlorophyll and the depth of the 1-μm water band, which is diagnostic of leaf moisture content. Although evidence of life is widespread in the Galileo data set, only a single image (at ~2 km/pixel) shows geometrization plausibly attributable to our technical civilization. Water vapor can be uniquely imaged in the Galileo 0.73-μm band, permitting spectral discrimination of moist and dry clouds with otherwise similar albedo. Surface snow and ice can be readily distinguished from cloud cover by narrowband imaging within the sensitivity range of Galileo's silicon CCD camera. Ice grain size variations can be mapped using the weak H2O absorption at 1 μm, a technique which may find important applications in the exploration of the moons of Jupiter. The Galileo images have the potential to make unique contributions to Earth science in the areas of geological, meteorological and biological remote sensing, due to the inclusion of previously untried narrowband IR filters. The vast scale and near global coverage of the Galileo data set

  14. Analysis of variograms with various sample sizes from a multispectral image

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Variograms play a crucial role in remote sensing application and geostatistics. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100 X 100 pixel subset was chosen from an aerial multispectral image which contained three wavebands, green, ...

  15. IMAGE 100: The interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Schaller, E. S.; Towles, R. W.

    1975-01-01

    The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.

  16. Aerial Photography

    NASA Technical Reports Server (NTRS)

    1985-01-01

    John Hill, a pilot and commercial aerial photographer, needed an information base. He consulted NERAC and requested a search of the latest developments in camera optics. NERAC provided information; Hill contacted the manufacturers of camera equipment and reduced his photographic costs significantly.

  17. Unmanned aerial optical systems for spatial monitoring of Antarctic mosses

    NASA Astrophysics Data System (ADS)

    Lucieer, Arko; Turner, Darren; Veness, Tony; Malenovsky, Zbynek; Harwin, Stephen; Wallace, Luke; Kelcey, Josh; Robinson, Sharon

    2013-04-01

    The Antarctic continent has experienced major changes in temperature, wind speed and stratospheric ozone levels during the last 50 years. In a manner similar to tree rings, old growth shoots of Antarctic mosses, the only plants on the continent, also preserve a climate record of their surrounding environment. This makes them an ideal bio-indicator of Antarctic climate change. Spatially extensive ground sampling of mosses is laborious and time limited due to the short Antarctic growing season. There is therefore a need for an efficient method to monitor spatially the climate-change-induced stress of the Antarctic moss flora. Cloudy weather and the high spatial fragmentation of the moss turfs make satellite imagery unsuitable for this task. Unmanned aerial systems (UAS), flying at low altitudes and collecting image data even under full overcast, can, however, overcome the shortcomings of satellite remote sensing. We therefore developed a scientific UAS, consisting of a remote-controlled micro-copter carrying different on-board remote sensing optical sensors, tailored to perform fast and cost-effective mapping of Antarctic flora at ultra-high spatial resolution (1-10 cm depending on flight altitude). A single lens reflex (SLR) camera carried by the UAS acquires multi-view aerial photography, which, processed by the Structure from Motion computer vision algorithm, provides an accurate three-dimensional digital surface model (DSM) at ultra-high spatial resolution. The DSM is the key input parameter for modelling local seasonal snowmelt run-off, which provides mosses with their vital water supply. A lightweight multispectral camera on board the UAS collects images in six selected spectral wavebands with a full-width-half-maximum (FWHM) of 10 nm. The spectral bands can be used to compute various vegetation optical indices, e.g. the Normalized Difference Vegetation Index (NDVI) or the Photochemical Reflectance Index (PRI), assessing the actual physiological state of polar vegetation. Recently

  18. Vector statistics of LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Underwood, D.

    1977-01-01

    A digitized multispectral image, such as LANDSAT data, is composed of numerous four dimensional vectors, which quantitatively describe the ground scene from which the data are acquired. The statistics of unique vectors that occur in LANDSAT imagery are studied to determine if that information can provide some guidance on reducing image processing costs. A second purpose of this report is to investigate how the vector statistics are changed by various types of image processing techniques and determine if that information can be useful in choosing one processing approach over another.

  19. A methodology for small scale rural land use mapping in semi-arid developing countries using orbital imagery. 1: Introduction

    NASA Technical Reports Server (NTRS)

    Vangenderen, J. L. (Principal Investigator); Lock, B. F.

    1976-01-01

    The author has identified the following significant results. This research program has developed a viable methodology for producing small scale rural land use maps in semi-arid developing countries using imagery obtained from orbital multispectral scanners.

  20. Multispectral metamaterial absorber.

    PubMed

    Grant, J; McCrindle, I J H; Li, C; Cumming, D R S

    2014-03-01

    We present the simulation, implementation, and measurement of a multispectral metamaterial absorber (MSMMA) and show that we can realize a simple absorber structure that operates in the mid-IR and terahertz (THz) bands. By embedding an IR metamaterial absorber layer into a standard THz metamaterial absorber stack, a narrowband resonance is induced at a wavelength of 4.3 μm. This resonance is in addition to the THz metamaterial absorption resonance at 109 μm (2.75 THz). We demonstrate the inherent scalability and versatility of our MSMMA by describing a second device whereby the MM-induced IR absorption peak frequency is tuned by varying the IR absorber geometry. Such a MSMMA could be coupled with a suitable sensor and formed into a focal plane array, enabling multispectral imaging.

  1. Multispectral Internet imaging

    NASA Astrophysics Data System (ADS)

    Brettel, Hans; Schmitt, Francis J. M.

    2000-12-01

    We present a system for multispectral image acquisition which is accessible via an Internet connection. The system includes an electronically tunable spectral filter and a monochrome digital camera, both controlled from a PC-type computer acting as a Web server. In contrast to the three fixed color channels of an ordinary WebCam, our system provides a virtually unlimited number of spectral channels. To allow for interactive use of this multispectral image acquisition system through the network, we developed a set of Java servlets which provide access to the system through HyperText Transfer Protocol (HTTP) requests. Since only the standard Common Gateway Interface (CGI) mechanisms for client-server communication are used, the system is accessible from any Web browser.

  2. Polarimetric Multispectral Imaging Technology

    NASA Technical Reports Server (NTRS)

    Cheng, L.-J.; Chao, T.-H.; Dowdy, M.; Mahoney, C.; Reyes, G.

    1993-01-01

    The Jet Propulsion Laboratory is developing a remote sensing technology on which a new generation of compact, lightweight, high-resolution, low-power, reliable, versatile, programmable scientific polarimetric multispectral imaging instruments can be built to meet the challenge of future planetary exploration missions. The instrument is based on the fast programmable acousto-optic tunable filter (AOTF) of tellurium dioxide (TeO2) that operates in the wavelength range of 0.4-5 microns. Basically, the AOTF multispectral imaging instrument measures incoming light intensity as a function of spatial coordinates, wavelength, and polarization. Its operation can be in either sequential, random access, or multiwavelength mode as required. This provides observation flexibility, allowing real-time alternation among desired observations, collecting needed data only, minimizing data transmission, and permitting implementation of new experiments. These will result in optimization of the mission performance with minimal resources. Recently we completed a polarimetric multispectral imaging prototype instrument and performed outdoor field experiments for evaluating application potentials of the technology. We also investigated potential improvements on AOTF performance to strengthen technology readiness for applications. This paper will give a status report on the technology and a prospect toward future planetary exploration.

  3. A preliminary report of multispectral scanner data from the Cleveland harbor study

    NASA Technical Reports Server (NTRS)

    Shook, D.; Raquet, C.; Svehla, R.; Wachter, D.; Salzman, J.; Coney, T.; Gedney, D.

    1975-01-01

    Imagery obtained from an airborne multispectral scanner is presented. A synoptic view of the entire study area is shown for a number of time periods and for a number of spectral bands. Using several bands, sediment distributions, thermal plumes, and Rhodamine B dye distributions are shown.

  4. A Comparison of Local Variance, Fractal Dimension, and Moran's I as Aids to Multispectral Image Classification

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    2004-01-01

    The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
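    Of the three spatial-complexity layers mentioned, Moran's I is the most compact to illustrate. The sketch below computes it for a 2-D band using rook (4-neighbour) contiguity with binary weights, which is one common choice and not necessarily the configuration used in ICAMS.

```python
import numpy as np

def morans_i(img):
    """Moran's I spatial autocorrelation for a 2-D array, rook contiguity."""
    x = np.asarray(img, dtype=float)
    z = x - x.mean()
    denom = (z ** 2).sum()
    # Sum of w_ij * z_i * z_j over horizontally and vertically adjacent pairs;
    # the factor 2 counts each unordered pair twice, matching symmetric weights.
    num = 2.0 * ((z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum())
    # Total weight W = number of ordered neighbour pairs on the grid.
    w = 2.0 * (x.shape[0] * (x.shape[1] - 1) + (x.shape[0] - 1) * x.shape[1])
    return (x.size / w) * (num / denom)

# Example on a small random band (illustrative only).
print(morans_i(np.random.default_rng(0).random((64, 64))))
```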

  5. Airborne Imagery

    NASA Technical Reports Server (NTRS)

    1983-01-01

    ATM (Airborne Thematic Mapper) was developed for NSTL (National Space Technology Laboratories) by the Daedalus Company. It offers expanded capabilities for timely, accurate and cost-effective identification of areas with prospecting potential. A related system is TIMS, the Thermal Infrared Multispectral Scanner. Originating from Landsat 4, it is also used for agricultural studies, etc.

  6. Use of ERTS-1 imagery in forest inventory

    NASA Technical Reports Server (NTRS)

    Rennie, J. C.; Birth, E. E.

    1974-01-01

    The utility of ERTS-1 imagery when combined with field observations and with aircraft imagery and field observations is evaluated. Satellite imagery consisted of 9-1/2 inch black and white negatives of four multispectral scanner bands taken over Polk County, Tennessee. Aircraft imagery was obtained by a C-130 flying at 23,000 ft over the same area and provided the basis for locating ground plots for field observations. Correspondence between aircraft and satellite imagery was somewhat inaccurate due to seasonal differences in observations and lack of good photogrammetry with the data processing system used. Better correspondence was found between satellite imagery and ground observations. Ways to obtain more accurate data are discussed, and comparisons between aircraft and satellite observations are tabulated.

  7. MULTISPECTRAL THERMAL IMAGER - OVERVIEW

    SciTech Connect

    P. WEBER

    2001-03-01

    The Multispectral Thermal Imager satellite fills a new and important role in advancing the state of the art in remote sensing sciences. Initial results with the full calibration system operating indicate that the system was already close to achieving the very ambitious goals which we laid out in 1993, and we are confident of reaching all of these goals as we continue our research and improve our analyses. In addition to the DOE interests, the satellite is tasked about one-third of the time with requests from other users supporting research ranging from volcanology to atmospheric sciences.

  8. Multispectral thermal imaging

    SciTech Connect

    Weber, P.G.; Bender, S.C.; Borel, C.C.; Clodius, W.B.; Smith, B.W.; Garrett, A.; Pendergast, M.M.; Kay, R.R.

    1998-12-01

    Many remote sensing applications rely on imaging spectrometry. Here the authors use imaging spectrometry for thermal and multispectral signatures measured from a satellite platform enhanced with a combination of accurate calibrations and on-board data for correcting atmospheric distortions. The approach is supported by physics-based end-to-end modeling and analysis, which permits a cost-effective balance between various hardware and software aspects. The goal is to develop and demonstrate advanced technologies and analysis tools toward meeting the needs of the customer; at the same time, the attributes of this system can address other applications in such areas as environmental change, agriculture, and volcanology.

  9. Improved Prediction of Momentum and Scalar Fluxes Using MODIS Imagery

    NASA Technical Reports Server (NTRS)

    Crago, Richard D.; Jasinski, Michael F.

    2003-01-01

    There are remote sensing and science objectives. The remote sensing objectives are: To develop and test a theoretical method for estimating local momentum aerodynamic roughness length, z(sub 0m), using satellite multispectral imagery. To adapt the method to the MODIS imagery. To develop a high-resolution (approx. 1km) gridded dataset of local momentum roughness for the continental United States and southern Canada, using MODIS imagery and other MODIS derived products. The science objective is: To determine the sensitivity of improved satellite-derived (MODIS-) estimates of surface roughness on the momentum and scalar fluxes, within the context of 3-D atmospheric modeling.

  10. Environmental studies of Iceland with ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Williams, R. S., Jr.; Boovarsson, A.; Frioriksson, S.; Thorsteinsson, I.; Palmason, G.; Rist, S.; Saemundsson, K.; Sigtryggsson, H.; Thorarinsson, S.

    1974-01-01

    Imagery from the ERTS-1 satellite can be used to study geological and geophysical phenomena which are important in relation to Iceland's natural resources. Multispectral scanner (MSS) imagery can be used to map areas of altered ground, intense thermal emission, fallout from volcanic eruptions, lava flows, volcanic geomorphology, erosion or build-up of land, snow cover, the areal extent of glaciers and ice caps, etc. At least five distinct vegetation types and barren areas can be mapped using MSS false-color composites. Stereoscopic coverage of Iceland by side-lapping ERTS imagery permits precise analysis of various natural phenomena.

  11. User interface development for semiautomated imagery exploitation

    NASA Astrophysics Data System (ADS)

    O'Connor, R. P.; Bohling, Edward H.

    1991-08-01

    Operational reconnaissance technical organizations are burdened by greatly increasing workloads due to expanding capabilities for collection and delivery of large-volume near-real- time multisensor/multispectral softcopy imagery. Related to the tasking of reconnaissance platforms to provide the imagery are more stringent timelines for exploiting the imagery in response to the rapidly changing threat environment being monitored. The development of a semi-automated softcopy multisensor image exploitation capability is a critical step toward integrating existing advanced image processing techniques in conjunction with appropriate intelligence and cartographic data for next-generation image exploitation systems. This paper discusses the results of a recent effort to develop computer-assisted aids for the image analyst (IA) in order to rapidly and accurately exploit multispectral/multisensor imagery in combination with intelligence support data and cartographic information for the purpose of target detection and identification. A key challenge of the effort was to design and implement an effective human-computer interface that would satisfy any generic IA task and readily accommodate the needs of a broad range of IAs.

  12. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given over several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics like stereo multispectral imaging and goniometric multispectral measurements that are further explored with his expertise will also be presented in this work.

  13. Landsat multispectral sharpening using a sensor system model and panchromatic image

    USGS Publications Warehouse

    Lemeshewsky, G.P.; ,

    2003-01-01

    The thematic mapper (TM) sensors aboard Landsats 4 and 5 and the enhanced TM plus (ETM+) on Landsat 7 collect imagery at 30-m sample distance in six spectral bands. New with ETM+ is a 15-m panchromatic (P) band. With image sharpening techniques, this higher resolution P data, or as an alternative, the 10-m (or 5-m) P data of the SPOT satellite, can increase the spatial resolution of the multispectral (MS) data. Sharpening requires that the lower resolution MS image be coregistered and resampled to the P data before high spatial frequency information is transferred to the MS data. For visual interpretation and machine classification tasks, it is important that the sharpened data preserve the spectral characteristics of the original low resolution data. A technique was developed for sharpening (in this case, 3:1 spatial resolution enhancement) visible spectral band data, based on a model of the sensor system point spread function (PSF), in order to maintain spectral fidelity. It combines high-pass (HP) filter sharpening methods with iterative image restoration to reduce degradations caused by sensor-system-induced blurring and resampling. There is also a spectral fidelity requirement: the sharpened MS data, when filtered by the modeled degradations, should reproduce the low resolution source MS data. Quantitative evaluation of sharpening performance was made by using simulated low resolution data generated from digital color-IR aerial photography. In comparison to the HP-filter-based sharpening method, results for the technique in this paper with simulated data show improved spectral fidelity. Preliminary results with TM 30-m visible band data sharpened with simulated 10-m panchromatic data are promising but require further study.
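    For orientation, the baseline high-pass-filter sharpening that the paper compares against can be sketched in a few lines: upsample the MS band to the panchromatic grid and add back the pan high-frequency detail. The paper's PSF-based iterative restoration is not reproduced here, and the upsampling ratio and box-filter size below are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hp_filter_sharpen(ms_band, pan, ratio=3, kernel=5):
    """High-pass-filter sharpening (baseline method, not the paper's PSF technique)."""
    ms_up = zoom(np.asarray(ms_band, float), ratio, order=1)   # bilinear upsample to pan grid
    pan = np.asarray(pan, float)
    detail = pan - uniform_filter(pan, size=kernel)            # high-frequency pan component
    rows = min(ms_up.shape[0], pan.shape[0])
    cols = min(ms_up.shape[1], pan.shape[1])
    return ms_up[:rows, :cols] + detail[:rows, :cols]
```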

  14. The availability of local aerial photography in southern California. [for solution of urban planning problems

    NASA Technical Reports Server (NTRS)

    Allen, W., III; Sledge, B.; Paul, C. K.; Landini, A. J.

    1974-01-01

    Some of the major photography and photogrammetric suppliers and users located in Southern California are listed. Recent trends in aerial photographic coverage of the Los Angeles basin area are also noted, as well as the uses of that imagery.

  15. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continue to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  16. Use of remotely sensed imagery to map Sudden Oak Death (Phytophthora ramorum) in the Santa Cruz Mountains

    NASA Astrophysics Data System (ADS)

    Gillis, Trinka

    This project sought a method to map Sudden Oak Death (SOD) distribution in the Santa Cruz Mountains of California, a coastal mountain range and one of the locations where this disease was first observed. The project researched a method to identify forest affected by SOD using 30 m multispectral Landsat satellite imagery to classify tree mortality at the canopy level throughout the study area, and applied that method to a time series of data to show the pattern of spread. A successful methodology would be of interest to scientists trying to identify areas which escaped disease contagion, environmentalists attempting to quantify damage, and land managers evaluating the health of their forests. The more we can learn about the disease, the more chance we have to prevent further spread and damage to existing wild lands. The primary data source for this research was springtime Landsat Climate Data Record surface reflectance data. Non-forest areas were masked out using data produced by the National Land Cover Database and a supplemental land cover classification from the Landsat 2011 Climate Data Record image. Areas with other known causes of tree death, as identified by Fire and Resource Assessment Program fire perimeter polygons and US Department of Agriculture Forest Health Monitoring Program Aerial Detection Survey polygons, were also masked out. Within the remaining forested study area, manually created points were classified based on the land cover contained by the corresponding Landsat 2011 pixel. These were used to extract value ranges from the Landsat bands and calculated vegetation indices. The value range and index that best differentiated healthy from dead trees, the SWIR/NIR ratio, were applied to each Landsat scene in the time series to map tree mortality. Validation points, classified using Google Earth high-resolution aerial imagery, were created to evaluate the accuracy of the mapping methodology for the 2011 data.
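    The mortality-mapping step amounts to computing a SWIR/NIR band ratio and thresholding it over the forested mask. A minimal sketch follows; the threshold value is a placeholder, not the value derived from the study's training points.

```python
import numpy as np

def mortality_mask(swir, nir, threshold=1.0):
    """Flag likely canopy mortality where the SWIR/NIR ratio exceeds a threshold.
    Dead or dry canopies tend to reflect relatively more SWIR and less NIR."""
    swir, nir = np.asarray(swir, float), np.asarray(nir, float)
    ratio = swir / (nir + 1e-12)
    return ratio > threshold  # boolean mask of candidate dead-tree pixels
```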

  17. Multispectral scanner optical system

    NASA Technical Reports Server (NTRS)

    Stokes, R. C.; Koch, N. G. (Inventor)

    1980-01-01

    An optical system for use in a multispectral scanner of the type used in video imaging devices is disclosed. Electromagnetic radiation reflected by a rotating scan mirror is focused by a concave primary telescope mirror and collimated by a second concave mirror. The collimated beam is split by a dichroic filter which transmits radiant energy in the infrared spectrum and reflects visible and near infrared energy. The long wavelength beam is filtered and focused on an infrared detector positioned in a cryogenic environment. The short wavelength beam is dispersed by a pair of prisms, then projected on an array of detectors also mounted in a cryogenic environment and oriented at an angle relative to the optical path of the dispersed short wavelength beam.

  18. Multispectral Resource Sampler Workshop

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The utility of the multispectral resource sampler (MRS) was examined by users in the following disciplines: agriculture, atmospheric studies, engineering, forestry, geology, hydrology/oceanography, land use, and rangelands/soils. Modifications to the sensor design were recommended and the desired types of products and number of scenes required per month were indicated. The history, design, capabilities, and limitations of the MRS are discussed as well as the multilinear spectral array technology which it uses. Designed for small area inventory, the MRS can provide increased temporal, spectral, and spatial resolution, facilitate polarization measurement and atmospheric correction, and test onboard data compression techniques. The advantages of using it along with the thematic mapper are considered.

  19. Multispectral imaging radar

    NASA Technical Reports Server (NTRS)

    Porcello, L. J.; Rendleman, R. A.

    1972-01-01

    A side-looking radar, installed in a C-46 aircraft, was modified to provide it with an initial multispectral imaging capability. The radar is capable of radiating at either of two wavelengths, these being approximately 3 cm and 30 cm, with either horizontal or vertical polarization on each wavelength. Both the horizontally- and vertically-polarized components of the reflected signal can be observed for each wavelength/polarization transmitter configuration. At present, two-wavelength observation of a terrain region can be accomplished within the same day, but not with truly simultaneous observation on both wavelengths. A multiplex circuit to permit this simultaneous observation has been designed. A brief description of the modified radar system and its operating parameters is presented. Emphasis is then placed on initial flight test data and preliminary interpretation. Some considerations pertinent to the calibration of such radars are presented in passing.

  20. Michigan experimental multispectral mapping system: A description of the M7 airborne sensor and its performance

    NASA Technical Reports Server (NTRS)

    Hasell, P. G., Jr.

    1974-01-01

    The development and characteristics of a multispectral band scanner for an airborne mapping system are discussed. The sensor operates in the ultraviolet, visual, and infrared frequencies. Any twelve of the bands may be selected for simultaneous, optically registered recording on a 14-track analog tape recorder. Multispectral imagery recorded on magnetic tape in the aircraft can be laboratory reproduced on film strips for visual analysis or optionally machine processed in analog and/or digital computers before display. The airborne system performance is analyzed.

  1. Multispectral Microimager for Astrobiology

    NASA Technical Reports Server (NTRS)

    Sellar, R. Glenn; Farmer, Jack D.; Kieta, Andrew; Huang, Julie

    2006-01-01

    A primary goal of the astrobiology program is the search for fossil records. The astrobiology exploration strategy calls for the location and return of samples indicative of environments conducive to life, and that best capture and preserve biomarkers. Successfully returning samples from environments conducive to life requires two primary capabilities: (1) in situ mapping of the mineralogy in order to determine whether the desired minerals are present; and (2) nondestructive screening of samples for additional in-situ testing and/or selection for return to laboratories for more in-depth examination. Two of the most powerful identification techniques are micro-imaging and visible/infrared spectroscopy. The design and test results are presented from a compact rugged instrument that combines micro-imaging and spectroscopic capability to provide in-situ analysis, mapping, and sample screening capabilities. Accurate reflectance spectra should be a measure of reflectance as a function of wavelength only. Other compact multispectral microimagers use separate LEDs (light-emitting diodes) for each wavelength and therefore vary the angles of illumination when changing wavelengths. When observing a specularly-reflecting sample, this produces grossly inaccurate spectra due to the variation in the angle of illumination. An advanced design and test results are presented for a multispectral microimager which demonstrates two key advances relative to previous LED-based microimagers: (i) acquisition of actual reflectance spectra in which the flux is a function of wavelength only, rather than a function of both wavelength and illumination geometry; and (ii) increase in the number of spectral bands to eight bands covering a spectral range of 468 to 975 nm.

  2. Real-time compact multispectral imaging solutions using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Chandler, Eric V.; Fish, David E.

    2014-03-01

    The next generation of multispectral sensors and cameras will need to deliver significant improvements in size, weight, portability, and spectral band customization to support widespread commercial deployment. The benefits of multispectral imaging are well established for applications including machine vision, biomedical, authentication, and aerial remote sensing environments - but many OEM solutions require more compact, robust, and cost-effective production cameras to realize these benefits. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color camera image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. We demonstrate recent results of 4-9 band dichroic filter arrays in multispectral cameras using a variety of sensors including linear, area, silicon, and InGaAs. Specific implementations range from hybrid RGB + NIR sensors to custom sensors with application-specific VIS, NIR, and SWIR spectral bands. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches - including their passivity, spectral range, customization options, and development path. Finally, we report on the wafer-level fabrication of dichroic filter arrays on imaging sensors for scalable production of multispectral sensors and cameras.

  3. MSS D Multispectral Scanner System

    NASA Technical Reports Server (NTRS)

    Lauletta, A. M.; Johnson, R. L.; Brinkman, K. L. (Principal Investigator)

    1982-01-01

    The development and acceptance testing of the 4-band Multispectral Scanners to be flown on LANDSAT D and LANDSAT D Earth resources satellites are summarized. Emphasis is placed on the acceptance test phase of the program. Test history and acceptance test algorithms are discussed. Trend data of all the key performance parameters are included and discussed separately for each of the two multispectral scanner instruments. Anomalies encountered and their resolutions are included.

  4. Simulation of EO-1 Hyperion Data from ALI Multispectral Data Based on the Spectral Reconstruction Approach.

    PubMed

    Liu, Bo; Zhang, Lifu; Zhang, Xia; Zhang, Bing; Tong, Qingxi

    2009-01-01

    Data simulation is widely used in remote sensing to produce imagery for a new sensor in the design stage, for scale issues of some special applications, or for testing of novel algorithms. Hyperspectral data can provide more abundant information than traditional multispectral data and thus greatly extend the range of remote sensing applications. Unfortunately, hyperspectral data are much more difficult and expensive to acquire and were not available prior to the development of operational hyperspectral instruments, while large amounts of accumulated multispectral data have been collected around the world over the past several decades. Therefore, it is reasonable to examine means of using these multispectral data to simulate or construct hyperspectral data, especially in situations where hyperspectral data are necessary but hard to acquire. Here, a method based on spectral reconstruction is proposed to simulate hyperspectral data (Hyperion data) from multispectral Advanced Land Imager data (ALI data). This method involves extraction of the inherent information of the source data and its reassignment to the newly simulated data. A total of 106 bands of Hyperion data were simulated from ALI data covering the same area. To evaluate this method, we compare the simulated and original Hyperion data by visual interpretation, statistical comparison, and classification. The results generally showed good performance of this method and indicated that most bands were well simulated, with the information both well preserved and well presented. This makes it possible to simulate hyperspectral data from multispectral data for testing the performance of algorithms, extending the use of multispectral data, and aiding the design of virtual sensors.

  5. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use a simple soft threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods, yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide-area high-resolution aerial imagery.
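    The feature-encoding step described here, projection of low-level features onto learned basis vectors followed by a soft-threshold activation, can be sketched as below. The basis, dimensions and threshold are random placeholders, and the basis-learning stage itself is omitted.

```python
import numpy as np

def sparse_features(x, basis, alpha=0.25):
    """Soft-threshold sparse coding of a low-level feature vector:
    project onto the basis and keep only activations above the threshold."""
    activations = basis.T @ np.asarray(x, float)    # projection onto basis vectors
    return np.maximum(0.0, activations - alpha)     # soft-threshold activation

# Illustrative use with random placeholders for the basis and a feature vector.
rng = np.random.default_rng(0)
D = rng.standard_normal((128, 64))   # 64 basis vectors of dimension 128 (assumed sizes)
f = sparse_features(rng.standard_normal(128), D)
print(f.shape, (f > 0).sum(), "non-zero activations")
```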

  6. Topography Dependent Photometric Correction of SELENE Multispectral Imagery

    NASA Astrophysics Data System (ADS)

    Steutel, D.; Ohtake, M.

    2003-12-01

    The SELENE mission to the Moon in 2005 includes the Multiband Imager (MI) [1], a visible/near-infrared imaging spectrometer, and the Terrain Camera (TC), a 10-m panchromatic stereoimager for global topography. The ~1 TB of TC data will take years to reduce; initial photometric correction of MI data will not include the effect of topography. We present a method for prioritizing analysis of TC data so that topography can be included in the photometric correction of MI data at the earliest time for regions of the lunar surface where the effects of topography are most significant. We have calculated the general quantified dependence of photometric correction on incidence angle, emission angle, phase angle, and local topographic slopes. To calculate the photometric correction we use the method used for Clementine [2,3] with the following corrections: the factor of 2 is included in the XL function (see [3]), P(α, g) = (1 − g²)/(1 + g² + 2g·cos α)^1.5, and g1 = D·R30 + E. In order to predict the topography of the Moon and determine the regional distribution of local slopes at the resolution of MI (20 m and 62 m), we performed a fractal analysis on existing topographic data derived from Clementine LIDAR [4], Earth-based radar of Tycho crater [5], and Apollo surface-based stereoimagery [6]. The fractal parameter H, which describes the relationship between scale and roughness, is 0.65 ± 0.02, 0.64 ± 0.01, and 0.69 ± 0.06 [6] at the 20-75 km, 150 m-1.5 km, and 0.1-10 mm scales, respectively. Based on the consistency of H at these disparate scales, we interpolate H = 0.65 ± 0.03 (a weighted average) at the 20-m and 62-m scales of the MI cameras. The second fractal parameter, σ(L0), is calculated from Clementine LIDAR data for overlapping 3x3 degree segments over the lunar surface. From this, we predict local topographic slopes for all regions on the Moon from -60° to +60° at the 20-m and 62-m scales based on H = 0.65 and σ(L0) as determined for each pixel. These results allow us to prioritize TC data analysis to maximize the scientific return from MI data during the first years of data analysis. This work was supported by the Japan Society for the Promotion of Science and the National Science Foundation's East Asia Summer Institutes. References: [1] Ohtake, M. LPSC XXXIV, abs. 1976, 2003. [2] McEwen, A.S. LPSC XXVII, 841-842, 1996. [3] McEwen, A. et al. LPSC XXIX, abs. 1466, 1998. [4] Smith, D.E. et al. JGR, 102(E1), 1591-1611, 1997. [5] Margot, J.-L. et al. JGR, 104(E5), 11875-11882, 1999. [6] Helfenstein, P. & M.K. Shepard. Icarus, 141, 107-131, 1999.
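    The phase function quoted in the abstract is straightforward to evaluate; the short sketch below implements only the P(α, g) term as stated, not the full Clementine-style photometric correction with the XL and g1 corrections.

```python
import numpy as np

def phase_function(alpha_rad, g):
    """Phase function as quoted in the abstract:
    P(alpha, g) = (1 - g**2) / (1 + g**2 + 2*g*cos(alpha))**1.5"""
    return (1.0 - g**2) / (1.0 + g**2 + 2.0 * g * np.cos(alpha_rad)) ** 1.5

# Example: evaluate over a range of phase angles for an assumed asymmetry parameter g.
alphas = np.deg2rad(np.linspace(0.0, 90.0, 10))
print(phase_function(alphas, g=-0.25))
```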

  7. Digital image correlation techniques applied to LANDSAT multispectral imagery

    NASA Technical Reports Server (NTRS)

    Bonrud, L. O. (Principal Investigator); Miller, W. J.

    1976-01-01

    The author has identified the following significant results. Automatic image registration and resampling techniques applied to LANDSAT data achieved registration accuracies with mean radial displacement errors of less than 0.2 pixel. The processing method utilized recursive computational techniques and line-by-line updating on the basis of feedback error signals. The goodness of local feature matching was evaluated through the implementation of a correlation algorithm. An automatic restart allowed the system to derive control point coordinates over a portion of the image and to restart the process, utilizing this new control point information as initial estimates.
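
    A small illustration of correlation-based goodness-of-match evaluation for control points, assuming normalized cross-correlation over a local search window; the original recursive, line-by-line feedback implementation is not reproduced here, and the chip and search sizes are arbitrary:

        import numpy as np

        def ncc(a, b):
            """Normalised cross-correlation of two equally sized chips."""
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom else 0.0

        def best_offset(reference, target, center, half=8, search=5):
            """Exhaustively search a small neighbourhood around center for the shift
            that maximises correlation with the reference chip; center must lie at
            least half + search pixels away from the image edges."""
            r, c = center
            chip = reference[r - half:r + half, c - half:c + half]
            best, best_score = (0, 0), -2.0
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    cand = target[r + dr - half:r + dr + half,
                                  c + dc - half:c + dc + half]
                    score = ncc(chip, cand)
                    if score > best_score:
                        best, best_score = (dr, dc), score
            return best, best_score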

  8. Mapping crop ground cover using airborne multispectral digital imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Empirical relationships between remotely sensed vegetation indices and density information, such as leaf area index or ground cover (GC), are commonly used to derive spatial information in many precision farming operations. In this study, we modified an existing methodology that does not depend on e...

  9. Assessing Hurricane Katrina Damage to the Mississippi Gulf Coast Using IKONOS Imagery

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph; McKellip, Rodney

    2006-01-01

    Hurricane Katrina hit southeastern Louisiana and the Mississippi Gulf Coast as a Category 3 hurricane with storm surges as high as 9 m. Katrina devastated several coastal towns by destroying or severely damaging hundreds of homes. Several Federal agencies are assessing storm impacts and assisting recovery using high-spatial-resolution remotely sensed data from satellite and airborne platforms. High-quality IKONOS satellite imagery was collected on September 2, 2005, over southwestern Mississippi. Pan-sharpened IKONOS multispectral data and ERDAS IMAGINE software were used to classify post-storm land cover for coastal Hancock and Harrison Counties. This classification included a storm debris category of interest to FEMA for disaster mitigation. The classification resulted from combining traditional unsupervised and supervised classification techniques. Higher-spatial-resolution aerial and handheld photography were used as reference data. Results suggest that traditional classification techniques and IKONOS data can map wood-dominated storm debris in open areas if relevant training areas are used to develop the unsupervised classification signatures. IKONOS data also enabled other hurricane damage assessments, such as mapping flood-deposited mud on lawns and vegetation foliage loss from the storm. IKONOS data have also aided regional Katrina vegetation damage surveys based on multidate Land Remote Sensing Satellite and Moderate Resolution Imaging Spectroradiometer data.

  10. Does the Data Resolution/origin Matter? Satellite, Airborne and Uav Imagery to Tackle Plant Invasions

    NASA Astrophysics Data System (ADS)

    Müllerová, Jana; Brůna, Josef; Dvořák, Petr; Bartaloš, Tomáš; Vítková, Michaela

    2016-06-01

    Invasive plant species represent a serious threat to biodiversity and landscape as well as to human health and the socio-economy. To successfully fight plant invasions, new methods enabling fast and efficient monitoring, such as remote sensing, are needed. In an ongoing project, optical remote sensing (RS) data of different origin (satellite, aerial and UAV), spectral resolution (panchromatic, multispectral and color), spatial resolution (very high to medium) and temporal resolution, together with various technical approaches (object-based, pixel-based and combined), are tested to choose the best strategies for monitoring four invasive plant species (giant hogweed, black locust, tree of heaven and exotic knotweeds). In our study, we address the trade-offs between the spectral, spatial and temporal resolutions required to balance the precision of detection against economic feasibility. For the best results, it is necessary to choose the best combination of spatial and spectral resolution and the phenological stage of the plant in focus. For species forming distinct inflorescences, such as giant hogweed, an iterative semi-automated object-oriented approach was successfully applied even to data of low spectral resolution (if the pixel size was sufficient), whereas for lower spatial resolution satellite imagery, or for less distinct species with complicated architecture such as knotweed, a combination of pixel- and object-based approaches was used. The high accuracies achieved for very high resolution data indicate the possible application of the described methodology for monitoring invasions and their long-term dynamics elsewhere, making management measures comparably precise, fast and efficient. This knowledge serves as a basis for prediction, monitoring and prioritization of management targets.

  11. Multispectral Analysis of Indigenous Rock Art Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Skoog, B.; Helmholz, P.; Belton, D.

    2016-06-01

    Multispectral analysis is a widely used technique in the photogrammetric and remote sensing industry. The use of Terrestrial Laser Scanning (TLS) in combination with imagery is becoming increasingly common, with its applications spreading to a wider range of fields. Both systems benefit from being non-contact techniques that can be used to accurately capture data about the target surface. Although multispectral analysis is actively performed within the spatial sciences, its application within an archaeological context has been limited. This study aims to apply commonly used multispectral techniques to a remote Indigenous site that contains an extensive gallery of aging rock art. The ultimate goal of this research is the development of a systematic procedure that could be applied to numerous similar sites for the purpose of heritage preservation and research. The study consisted of extensive data capture of the rock art gallery using two different TLS systems and a digital SLR camera. The data were combined into a common 2D reference frame that allowed standard image processing to be applied. An unsupervised k-means classifier was applied to the multiband images to detect the different types of rock art present. The result was unsatisfactory, as the subsequent classification accuracy was relatively low. The procedure and technique do, however, show potential, and further testing with different classification algorithms could improve the result significantly.
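
    As a small illustration of the unsupervised step described above, the sketch below runs a plain k-means classification over a co-registered multiband image with scikit-learn; the band composition and number of classes are assumptions, not values from the study:

        import numpy as np
        from sklearn.cluster import KMeans

        def classify_multiband(image, n_classes=5, seed=0):
            """Unsupervised k-means classification of a co-registered multiband image.
            image: (rows, cols, bands) array, e.g. laser intensities stacked with RGB."""
            rows, cols, bands = image.shape
            pixels = image.reshape(-1, bands).astype(float)
            labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(pixels)
            return labels.reshape(rows, cols)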

  12. Miniature snapshot multispectral imager

    NASA Astrophysics Data System (ADS)

    Gupta, Neelam; Ashe, Philip R.; Tan, Songsheng

    2011-03-01

    We present a miniature snapshot multispectral imager based on a monolithic filter array that operates in the short wavelength infrared spectral region and has a number of defense and commercial applications. The system is lightweight, portable on a miniature platform, and requires little power. The imager uses a 4×4 Fabry-Pérot filter array operating from 1487 to 1769 nm with a spectral bandpass of ~10 nm. The design of the filters is based on a shadow-mask technique used to fabricate an array of Fabry-Pérot etalons with two multilayer dielectric mirrors. The filter array is installed in a commercial handheld InGaAs camera, replacing the imaging lens with a custom-designed 4×4 microlens assembly with telecentric imaging performance in each of the 16 subimaging channels. We imaged several indoor and outdoor scenes. The microlens assembly and filter design are quite flexible and can be tailored for any wavelength region from the ultraviolet to the longwave infrared, and the spectral bandpass can also be customized to meet sensing requirements. In this paper we discuss the design and characterization of the filter array, the microlens optical assembly, and the imager, and we present imaging results.
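
    For context only, the resonant wavelengths of an idealized Fabry-Pérot etalon follow m·λ = 2·n·d·cos(θ). The sketch below lists the peaks of such an ideal cavity that fall inside the 1487-1769 nm range; the gap value is hypothetical, and real filter designs also depend on mirror phase shifts and finesse, which are not modelled here:

        import math

        def etalon_peaks(gap_nm, n=1.0, theta_deg=0.0, lo=1487.0, hi=1769.0):
            """Resonant wavelengths of an ideal Fabry-Perot etalon (m * wavelength =
            2 n d cos(theta)), kept only if they fall inside the [lo, hi] band in nm."""
            opd = 2.0 * n * gap_nm * math.cos(math.radians(theta_deg))
            peaks = []
            m = 1
            while opd / m >= lo:
                lam = opd / m
                if lam <= hi:
                    peaks.append((m, round(lam, 1)))
                m += 1
            return peaks

        print(etalon_peaks(gap_nm=2500.0))   # hypothetical 2.5 micrometre cavity gap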

  13. Quantifying autophagy: Measuring LC3 puncta and autolysosome formation in cells using multispectral imaging flow cytometry.

    PubMed

    Pugsley, Haley R

    2017-01-01

    The use of multispectral imaging flow cytometry has been gaining popularity due to its quantitative power, high throughput capabilities, multiplexing potential and its ability to acquire images of every cell. Autophagy is a process in which dysfunctional organelles and cellular components that accumulate during growth and differentiation are degraded via the lysosome and recycled. During autophagy, cytoplasmic LC3 is processed and recruited to the autophagosomal membranes; the autophagosome then fuses with the lysosome to form the autolysosome. Therefore, cells undergoing autophagy can be identified by visualizing fluorescently labeled LC3 puncta and/or the co-localization of fluorescently labeled LC3 and lysosomal markers. Multispectral imaging flow cytometry is able to collect imagery of large numbers of cells and assess autophagy in an objective, quantitative, and statistically robust manner. This review will examine the four predominant methods that have been used to measure autophagy via multispectral imaging flow cytometry.

  14. Comparison of Hyperspectral and Multispectral Satellites for Discriminating Land Cover in Northern California

    NASA Astrophysics Data System (ADS)

    Clark, M. L.; Kilham, N. E.

    2015-12-01

    Land-cover maps are important science products needed for natural resource and ecosystem service management, biodiversity conservation planning, and assessing human-induced and natural drivers of land change. Most land-cover maps at regional to global scales are produced with remote sensing techniques applied to multispectral satellite imagery with 30-500 m pixel sizes (e.g., Landsat, MODIS). Hyperspectral, or imaging spectrometer, imagery measuring the visible to shortwave infrared (VSWIR) region of the spectrum has shown impressive capacity to map plant species and coarser land-cover associations, yet these techniques have not been widely tested at regional and greater spatial scales. The Hyperspectral Infrared Imager (HyspIRI) mission is a VSWIR hyperspectral and thermal satellite being considered for development by NASA. The goal of this study was to assess multi-temporal, HyspIRI-like satellite imagery for improved land cover mapping relative to multispectral satellites. We mapped FAO Land Cover Classification System (LCCS) classes over 22,500 km2 in the San Francisco Bay Area, California using 30-m HyspIRI, Landsat 8 and Sentinel-2 imagery simulated from data acquired by NASA's AVIRIS airborne sensor. Random Forests (RF) and Multiple-Endmember Spectral Mixture Analysis (MESMA) classifiers were applied to the simulated images and accuracies were compared to those from real Landsat 8 images. The RF classifier was superior to MESMA, and multi-temporal data yielded higher accuracy than summer-only data. With RF, hyperspectral data had overall accuracy of 72.2% and 85.1% with full 20-class and reduced 12-class schemes, respectively. Multispectral imagery had lower accuracy. For example, simulated and real Landsat data had 7.5% and 4.6% lower accuracy than HyspIRI data with 12 classes, respectively. In summary, our results indicate increased mapping accuracy using HyspIRI multi-temporal imagery, particularly in discriminating different natural vegetation types, such as
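
    A minimal sketch of a Random Forest classification of stacked per-pixel spectra with scikit-learn, broadly in the spirit of the RF workflow described above; the feature stacking, tree count and train/test split are assumptions rather than the study's settings:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score

        def rf_landcover(train_X, train_y, test_X, test_y, n_trees=500, seed=0):
            """Random Forest classification of per-pixel spectra; for multi-temporal
            input, the band values of each date are simply concatenated per pixel."""
            rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed, n_jobs=-1)
            rf.fit(train_X, train_y)
            return rf, accuracy_score(test_y, rf.predict(test_X))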

  15. A multispectral scanner survey of the United States Department of Energy's Paducah Gaseous Diffusion Plant

    SciTech Connect

    Not Available

    1991-06-01

    Airborne multispectral scanner data of the Paducah Gaseous Diffusion Plant (PGDP) and surrounding area were acquired during late spring 1990. This survey was conducted by the Remote Sensing Laboratory (RSL), which is operated by EG&G Energy Measurements (EG&G/EM) for the US Department of Energy (DOE) Nevada Operations Office. It was requested by the DOE Environmental Audit Team, which was reviewing environmental conditions at the facility. The objectives of this survey were to: (1) acquire 12-channel, multispectral scanner data of the PGDP from an altitude of 3000 feet above ground level (AGL); (2) acquire predawn, digital thermal infrared (TIR) data of the site from the same altitude; (3) collect color and color-infrared (CIR) aerial photographs over the facilities; and (4) illustrate how the analyses of these data could benefit environmental monitoring at the PGDP. This report summarizes the two multispectral scanner and aerial photographic missions at the Paducah Gaseous Diffusion Plant. Selected examples of the multispectral data are presented to illustrate its potential for aiding environmental management at the site. 4 refs., 1 fig., 2 tabs.

  16. Multi-spectral synthetic image generation for ground vehicle identification training

    NASA Astrophysics Data System (ADS)

    May, Christopher M.; Pinto, Neil A.; Sanders, Jeffrey S.

    2016-05-01

    There is a ubiquitous and never ending need in the US armed forces for training materials that provide the warfighter with the skills needed to differentiate between friendly and enemy forces on the battlefield. The current state of the art in battlefield identification training is the Recognition of Combat Vehicles (ROC-V) tool created and maintained by the Communications - Electronics Research, Development and Engineering Center Night Vision and Electronic Sensors Directorate (CERDEC NVESD). The ROC-V training package utilizes measured visual and thermal imagery to train soldiers about the critical visual and thermal cues needed to accurately identify modern military vehicles and combatants. This paper presents an approach to augment the existing ROC-V imagery database with synthetically generated multi-spectral imagery that will allow NVESD to provide improved training imagery at significantly lower costs.

  17. Synthesis of Multispectral Bands from Hyperspectral Data: Validation Based on Images Acquired by AVIRIS, Hyperion, ALI, and ETM+

    NASA Technical Reports Server (NTRS)

    Blonksi, Slawomir; Gasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki

    2001-01-01

    Multispectral data requirements for Earth science applications are not always rigorously studied before a new remote sensing system is designed. A study of the spatial resolution, spectral bandpasses, and radiometric sensitivity requirements of real-world applications would focus the design on providing maximum benefits to the end-user community. To support systematic studies of multispectral data requirements, the Applications Research Toolbox (ART) has been developed at NASA's Stennis Space Center. The ART software allows users to create and assess simulated datasets while varying a wide range of system parameters. The simulations are based on data acquired by existing multispectral and hyperspectral instruments. The produced datasets can be further evaluated for specific end-user applications. Spectral synthesis of multispectral images from hyperspectral data is a key part of the ART software. In this process, hyperspectral image cubes are transformed into multispectral imagery without changes in spatial sampling and resolution. The transformation algorithm takes into account the spectral responses of both the synthesized, broad multispectral bands and the utilized, narrow hyperspectral bands. To validate the spectral synthesis algorithm, simulated multispectral images are compared with images collected near-coincidentally by the Landsat 7 ETM+ and the EO-1 ALI instruments. Hyperspectral images acquired with the airborne AVIRIS instrument and with the Hyperion instrument onboard the EO-1 satellite were used as input data for the presented simulations.
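
    A simplified sketch of spectral band synthesis: each broad multispectral band is formed as a response-weighted average of the hyperspectral bands. The ART algorithm also accounts for the narrow hyperspectral band responses, which this sketch omits, and the function names are illustrative:

        import numpy as np

        def synthesize_band(hyper_cube, wavelengths, band_response):
            """Form one broad multispectral band as a response-weighted average of
            hyperspectral bands.
            hyper_cube:    (rows, cols, n_bands) radiance cube
            wavelengths:   (n_bands,) band-centre wavelengths in nm
            band_response: callable returning the broad band's relative response (0..1)"""
            weights = np.array([band_response(w) for w in wavelengths], float)
            weights /= weights.sum()                     # normalise so the radiance scale is preserved
            return np.tensordot(hyper_cube, weights, axes=([2], [0]))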

  18. Synergistic use of MOMS-01 and Landsat TM data. [Modular Optoelectronic Multispectral Scanner

    NASA Technical Reports Server (NTRS)

    Rothery, David A.; Francis, Peter W.

    1987-01-01

    Imagery covering the Socompa volcano and debris avalanche deposit in northern Chile was acquired by MOMS-01 when the sun was low in the western sky. Illumination from the west shows many important topographic features to advantage. These are inconspicuous or indistinguishable on Landsat TM images acquired at higher solar elevation. The effective spatial resolution of MOMS-01 is similar to that of the TM and its capacity for spectral discrimination is less. A technique has been developed to combine the multispectral information offered by TM with the topographic detail visible on MOMS-01 imagery recorded at a time of low solar elevation.

  19. Looking for an old aerial photograph

    USGS Publications Warehouse

    ,

    1997-01-01

    Attempts to photograph the surface of the Earth date from the 1800's, when photographers attached cameras to balloons, kites, and even pigeons. Today, aerial photographs and satellite images are commonplace. The rate of acquiring aerial photographs and satellite images has increased rapidly in recent years. Views of the Earth obtained from aircraft or satellites have become valuable tools to Government resource planners and managers, land-use experts, environmentalists, engineers, scientists, and a wide variety of other users. Many people want historical aerial photographs for business or personal reasons. They may want to locate the boundaries of an old farm or a piece of family property. Or they may want a photograph as a record of changes in their neighborhood, or as a gift. The U.S. Geological Survey (USGS) maintains the Earth Science Information Centers (ESICs) to sell aerial photographs, remotely sensed images from satellites, a wide array of digital geographic and cartographic data, as well as the Bureau's well-known maps. Declassified photographs from early spy satellites were recently added to the ESIC offerings of historical images. Using the Aerial Photography Summary Record System database, ESIC researchers can help customers find imagery in the collections of other Federal agencies and, in some cases, those of private companies that specialize in esoteric products.

  20. Utilizing SAR and Multispectral Integrated Data for Emergency Response

    NASA Astrophysics Data System (ADS)

    Havivi, S.; Schvartzman, I.; Maman, S.; Marinoni, A.; Gamba, P.; Rotman, S. R.; Blumberg, D. G.

    2016-06-01

    Satellite images are used widely in the risk cycle to understand exposure, refine hazard maps and quickly provide an assessment after a natural or man-made disaster. Though there are different types of satellite images (e.g. optical, radar), these have not been combined for risk assessments. The characteristics of different remote sensing data types may be extremely valuable for monitoring and evaluating the impacts of disaster events and for extracting additional information, making it available for emergency situations. This approach is based on two different change detection methods for two different sensors' data: Coherence Change Detection (CCD) for SAR data and Covariance Equalization (CE) for multispectral imagery. The CCD provides an identification of the stability of an area and shows where changes have occurred; it reveals subtle changes with an accuracy of several millimetres to centimetres. The CE method overcomes the differences in atmospheric effects between two multispectral images taken at different times, so that areas that have undergone a major change can be detected. To achieve our goals, we focused on the urban areas affected by the tsunami event in Sendai, Japan, which occurred on March 11, 2011 and affected the surrounding area, coastline and inland. High resolution TerraSAR-X (TSX) and Landsat 7 images covering the research area were acquired for the period before and after the event, and all were pre-processed and processed according to each sensor. The results of the optical and SAR algorithms were combined by resampling the spatial resolution of the multispectral data to the SAR resolution using spatial linear interpolation, and a score representing the damage level was assigned in both products. In the results of both algorithms, a high level of damage is shown in the areas closer to the sea and shoreline. Our approach, combining SAR and multispectral images, leads to more reliable information and provides a complete scene for

  1. Retinal oxygen saturation evaluation by multi-spectral fundus imaging

    NASA Astrophysics Data System (ADS)

    Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James

    2007-03-01

    Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans. This work

  2. Photogeologic mapping in central southwest Bahia, using LANDSAT-1 multispectral images. [Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Ohara, T.

    1981-01-01

    The interpretation of LANDSAT multispectral imagery for geologic mapping of central southwest Bahia, Brazil is described. Surface features such as drainage, topography, vegetation and land use are identified. The area is composed of low grade Precambrian rocks covered by Mesozoic and Cenozoic sediments. The principal mineral prospects of economic value are fluorite and calcareous rocks. Gold, calcite, rock crystal, copper, potassium nitrate and alumina were also identified.

  3. Multispectral imaging method and apparatus

    DOEpatents

    Sandison, D.R.; Platzbecker, M.R.; Vargo, T.D.; Lockhart, R.R.; Descour, M.R.; Richards-Kortum, R.

    1999-07-06

    A multispectral imaging method and apparatus are described which are adapted for use in determining material properties, especially properties characteristic of abnormal non-dermal cells. A target is illuminated with a narrow band light beam. The target expresses light in response to the excitation. The expressed light is collected and the target's response at specific response wavelengths to specific excitation wavelengths is measured. From the measured multispectral response the target's properties can be determined. A sealed, remote probe and robust components can be used for cervical imaging. 5 figs.

  4. Multispectral imaging method and apparatus

    DOEpatents

    Sandison, David R.; Platzbecker, Mark R.; Vargo, Timothy D.; Lockhart, Randal R.; Descour, Michael R.; Richards-Kortum, Rebecca

    1999-01-01

    A multispectral imaging method and apparatus adapted for use in determining material properties, especially properties characteristic of abnormal non-dermal cells. A target is illuminated with a narrow band light beam. The target expresses light in response to the excitation. The expressed light is collected and the target's response at specific response wavelengths to specific excitation wavelengths is measured. From the measured multispectral response the target's properties can be determined. A sealed, remote probe and robust components can be used for cervical imaging.

  5. A program system for efficient multispectral classification

    NASA Astrophysics Data System (ADS)

    Åkersten, S. I.

    Pixelwise multispectral classification is an important tool for analyzing remotely sensed imagery data. The computing time for performing this analysis becomes significant when large, multilayer images are analyzed. In the classical implementation of supervised multispectral classification assuming Gaussian-shaped multidimensional class clusters, the computing time is furthermore approximately proportional to the square of the number of image layers. This leads to very appreciable CPU times when large numbers of multispectral channels are used and/or temporal classification is performed. In order to decrease computing time, a classification program system has been implemented with the following components: (1) a simple one-dimensional box classifier, (2) a multidimensional box classifier, (3) a class-pivotal "canonical" classifier utilizing full maximum likelihood and making full use of within-class and between-class statistical characteristics, (4) a hybrid classifier (2 and 3 combined), and (5) a local neighbourhood filtering algorithm producing generalized classification results. The heart of the system is the class-pivotal canonical classifier. This algorithm is based upon an idea of Dye, who suggested using linear transformations that make it possible to evaluate a measure of how unlikely a pixel is to belong to the candidate class while simultaneously computing its full maximum likelihood ratio. If the pixel is more likely to be misclassified, the full maximum likelihood evaluation can be truncated almost immediately, i.e. the candidate class can often be rejected using only one or two of the available transformed spectral features. The result is a classifier whose CPU time is empirically shown to depend linearly upon the number of image layers. The use of the hybrid classifier lowers the CPU time by another factor of 3-4. Furthermore, for certain problems like classifying water-non water a single spectral band
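
    For reference, the sketch below is a plain (unaccelerated) Gaussian maximum-likelihood classifier of the kind the canonical class-pivotal method speeds up; it does not implement the linear-transformation early-rejection trick described above, and the class names and regularization are illustrative:

        import numpy as np

        class GaussianMLClassifier:
            """One multivariate Gaussian per class; assign each pixel to the class
            with the highest log-likelihood."""

            def fit(self, X, y):
                self.classes_ = np.unique(y)
                self.params_ = {}
                for c in self.classes_:
                    Xc = X[y == c]
                    mean = Xc.mean(axis=0)
                    cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
                    self.params_[c] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
                return self

            def predict(self, X):
                scores = []
                for c in self.classes_:
                    mean, icov, logdet = self.params_[c]
                    d = X - mean
                    maha = np.einsum('ij,jk,ik->i', d, icov, d)   # squared Mahalanobis distance
                    scores.append(-0.5 * (maha + logdet))          # log-likelihood up to a constant
                return self.classes_[np.argmax(np.vstack(scores), axis=0)]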

  6. Orthorectification, mosaicking, and analysis of sub-decimeter resolution UAV imagery for rangeland monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Unmanned aerial vehicles (UAVs) offer an attractive platform for acquiring imagery for rangeland monitoring. UAVs can be deployed quickly and repeatedly, and they can obtain sub-decimeter resolution imagery at lower image acquisition costs than with piloted aircraft. Low flying heights result in ima...

  7. Investigation of Skylab imagery for regional planning. [New York, New Jersey, and Connecticut

    NASA Technical Reports Server (NTRS)

    Harting, W. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. It is feasible to use earth terrain camera imagery to detect four land uses (vacant land, developed land, streets, and water) for general regional planning purposes. Multispectral imagery is suitable for detecting, mapping, and measuring water bodies as small as two acres. Sufficient information can be extracted to prepare graphic and pictorial representations of the general growth and development patterns, but cannot be incorporated into an inventory file for predictive models.

  8. Aerial radiation surveys

    SciTech Connect

    Jobst, J.

    1980-01-01

    A recent aerial radiation survey of the surroundings of the Vitro mill in Salt Lake City shows that uranium mill tailings have been removed to many locations outside their original boundary. To date, 52 remote sites have been discovered within a 100 square kilometer aerial survey perimeter surrounding the mill; 9 of these were discovered with the recent aerial survey map. Five additional sites, also discovered by aerial survey, contained uranium ore, milling equipment, or radioactive slag. Because of the success of this survey, plans are being made to extend the aerial survey program to other parts of the Salt Lake valley where diversions of Vitro tailings are also known to exist.

  9. ERTS imagery for ground-water investigations

    USGS Publications Warehouse

    Moore, Gerald K.; Deutsch, Morris

    1975-01-01

    ERTS imagery offers the first opportunity to apply moderately high-resolution satellite data to the nationwide study of water resources. This imagery is both a tool and a form of basic data. Like other tools and basic data, it should be considered for use in ground-water investigations. The main advantage of its use will be to reduce the need for field work. In addition, however, broad regional features may be seen easily on ERTS imagery, whereas they would be difficult or impossible to see on the ground or on low-altitude aerial photographs. Some present and potential uses of ERTS imagery are to locate new aquifers, to study aquifer recharge and discharge, to estimate ground-water pumpage for irrigation, to predict the location and type of aquifer management problems, and to locate and monitor strip mines which commonly are sources for acid mine drainage. In many cases, boundaries which are gradational on the ground appear to be sharp on ERTS imagery. Initial results indicate that the accuracy of maps produced from ERTS imagery is completely adequate for some purposes.

  10. Structural geologic interpretations from radar imagery

    USGS Publications Warehouse

    Reeves, Robert G.

    1969-01-01

    Certain structural geologic features may be more readily recognized on sidelooking airborne radar (SLAR) images than on conventional aerial photographs, other remote sensor imagery, or by ground observations. SLAR systems look obliquely to one or both sides and their images resemble aerial photographs taken at low sun angle with the sun directly behind the camera. They differ from air photos in geometry, resolution, and information content. Radar operates at much lower frequencies than the human eye, camera, or infrared sensors, and thus "sees" differently. The lower frequency enables it to penetrate most clouds and some precipitation, haze, dust, and some vegetation. Radar provides its own illumination, which can be closely controlled in intensity and frequency. It is narrow band, or essentially monochromatic. Low relief and subdued features are accentuated when viewed from the proper direction. Runs over the same area in significantly different directions (more than 45° from each other), show that images taken in one direction may emphasize features that are not emphasized on those taken in the other direction; optimum direction is determined by those features which need to be emphasized for study purposes. Lineaments interpreted as faults stand out on radar imagery of central and western Nevada; folded sedimentary rocks cut by faults can be clearly seen on radar imagery of northern Alabama. In these areas, certain structural and stratigraphic features are more pronounced on radar images than on conventional photographs; thus radar imagery materially aids structural interpretation.

  11. Mapping giant reed along the Rio Grande using airborne and satellite imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Giant reed (Arundo donax L.) is a perennial invasive weed that presents a severe threat to agroecosystems and riparian areas in the Texas and Mexican portions of the Rio Grande Basin. The objective of this presentation is to give an overview on the use of aerial photography, airborne multispectral a...

  12. Spatial Resolution Characterization for AWiFS Multispectral Images

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir; Ryan, Robert E.; Pagnutti, Mary; Stanley, Thomas

    2006-01-01

    Within the framework of the Joint Agency Commercial Imagery Evaluation program, the National Aeronautics and Space Administration, the National Geospatial-Intelligence Agency, and the U.S. Geological Survey cooperate in the characterization of high-to-moderate-resolution commercial imagery of mutual interest. One of the systems involved in this effort is the Advanced Wide Field Sensor (AWiFS) onboard the Indian Remote Sensing (IRS) Resourcesat-1 satellite, IRS-P6. Spatial resolution of the AWiFS multispectral images was characterized by estimating the value of the system Modulation Transfer Function (MTF) at the Nyquist spatial frequency. The Nyquist frequency is defined as half the sampling frequency, and the sampling frequency is equal to the inverse of the ground sample distance. The MTF was calculated as a ratio of the Fourier transform of a profile across an AWiFS image of the Lake Pontchartrain Causeway Bridge and the Fourier transform of a profile across an idealized model of the bridge for each spectral band evaluated. The mean MTF value for the AWiFS imagery evaluated was estimated to be 0.1.
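
    A minimal sketch of the MTF-ratio idea, assuming the observed bridge profile and the idealized bridge model are already sampled on the same grid; the value at the Nyquist bin is read from the ratio of their Fourier magnitudes, and the real preprocessing, windowing and profile alignment steps are omitted:

        import numpy as np

        def mtf_at_nyquist(image_profile, ideal_profile):
            """Ratio of Fourier magnitudes of an observed bridge profile and an
            idealized sharp bridge model, evaluated at the Nyquist frequency."""
            n = min(len(image_profile), len(ideal_profile))
            n -= n % 2                                   # even length so the last rfft bin is Nyquist
            obs = np.abs(np.fft.rfft(np.asarray(image_profile[:n], float)))
            ref = np.abs(np.fft.rfft(np.asarray(ideal_profile[:n], float)))
            mtf = np.divide(obs, ref, out=np.zeros_like(obs), where=ref > 1e-9)
            return float(mtf[-1])                        # value at the Nyquist spatial frequency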

  13. Preparing a landslide and shadow inventory map from high-spatial-resolution imagery facilitated by an expert system

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Chien

    2015-01-01

    An expert system was developed to integrate all useful spatial information and help the interpreters determine the landslide and shaded areas quickly and accurately. The intersection of two spectral indices, namely the normalized difference vegetation index and the normalized green red difference index, as well as the first principal component of the panchromatic band, is employed to automatically determine the regional thresholds of nonvegetation and dark areas. These boundaries are overlaid on the locally enhanced image and the digital topography model to closely inspect each area with a preferred viewing direction. The other geospatial information can be switched on and off to facilitate interpretation. This new approach is tested with 2 m pan-sharpened multispectral imagery from Formosat-2 taken on August 24, 2009, for several disaster areas of Typhoon Morakot. The generated inventory of landslide and shadow areas is validated with the one manually delineated from the 25 cm aerial photos taken on the same day. The production, user, and overall accuracies are higher than 82%, 85%, and 98%, respectively. The fall in production and user accuracies mainly comes from the differences in resolution. This new approach is as accurate as the general approach of manual delineation and visual interpretation, yet significantly reduces the required time.
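
    A small sketch of the index-intersection step, assuming reflectance bands as inputs and placeholder thresholds; the study derives the thresholds regionally and also uses the first principal component of the panchromatic band, which this sketch omits:

        import numpy as np

        def nonvegetation_dark_mask(red, green, nir, ndvi_thresh=0.2, ngrdi_thresh=0.0):
            """Intersection of NDVI and NGRDI thresholds as a candidate mask.
            NDVI  = (NIR - Red) / (NIR + Red)
            NGRDI = (Green - Red) / (Green + Red)
            The thresholds here are placeholders, not the regionally derived values."""
            eps = 1e-9
            ndvi = (nir - red) / (nir + red + eps)
            ngrdi = (green - red) / (green + red + eps)
            return (ndvi < ndvi_thresh) & (ngrdi < ngrdi_thresh)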

  14. Combining Human Computing and Machine Learning to Make Sense of Big (Aerial) Data for Disaster Response.

    PubMed

    Ofli, Ferda; Meier, Patrick; Imran, Muhammad; Castillo, Carlos; Tuia, Devis; Rey, Nicolas; Briant, Julien; Millet, Pauline; Reinhard, Friedrich; Parkan, Matthew; Joost, Stéphane

    2016-03-01

    Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The

  15. Landsat-D thematic mapper simulation using aircraft multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Clark, J.; Bryant, N. A.

    1977-01-01

    A simulation of imagery from the upcoming Landsat-D Thematic Mapper was accomplished by using selected channels of aircraft 24-channel multispectral scanner data. The purpose was to simulate Thematic Mapper 30-meter resolution imagery, to compare its spectral quality with the original aircraft MSS data, and to determine changes in thematic classification accuracy for the simulated imagery. The original resolution of approximately 7.5 meters IFOV and simulated resolution of 15, 30, and 60 meters were used to indicate the trend of spectral quality and classification accuracy. The study was based in a 6.5 square kilometer area of urban Los Angeles having a diversity of land use. The original imagery was reduced in resolution by two related methods: pixel matrix averaging, and matrix smoothing with a unity box filter, followed by matrix averaging. Thematic land use classification using training sites and a Bayesian maximum-likelihood algorithm was performed at three levels of standard deviation - 1.0, 2.0, and 3.0 sigma. Plots of relative standard deviation showed that for larger training sites with a normal distribution of data, as the resolution decreased, the distribution range of density values decreased. Also, the classification accuracies for three levels of standard deviation increased as resolution decreased. However, the indication is that a point of diminishing returns had been reached, and 30 meters IFOV should be the best for multispectral classification of urban scenes.
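
    A minimal sketch of the pixel-matrix-averaging degradation used to simulate coarser IFOVs, for example aggregating by a factor of 4 to approximate 30 m pixels from 7.5 m data; the box-filter smoothing variant mentioned above is omitted:

        import numpy as np

        def degrade_resolution(image, factor):
            """Simulate a coarser IFOV by non-overlapping pixel-matrix averaging.
            image:  2-D array (trimmed to a multiple of factor in each dimension)
            factor: integer block size, e.g. 4 to go from 7.5 m to 30 m pixels"""
            rows = (image.shape[0] // factor) * factor
            cols = (image.shape[1] // factor) * factor
            blocks = image[:rows, :cols].reshape(rows // factor, factor, cols // factor, factor)
            return blocks.mean(axis=(1, 3))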

  16. SAR imagery of the Grand Banks (Newfoundland) pack ice and its relationship to surface features

    NASA Technical Reports Server (NTRS)

    Argus, S. D.; Carsey, F. D.

    1988-01-01

    Synthetic Aperture Radar (SAR) data and aerial photographs were obtained over pack ice off the East Coast of Canada in March 1987 as part of the Labrador Ice Margin Experiment (LIMEX) pilot project. Examination of this data shows that although the pack ice off the Canadian East Coast appears essentially homogeneous to visible light imagery, two clearly defined zones of ice are apparent on C-band SAR imagery. To identify factors that create the zones seen on the radar image, aerial photographs were compared to the SAR imagery. Floe size data from the aerial photographs was compared to digital number values taken from SAR imagery of the same ice. The SAR data of the inner zone acquired three days apart over the melt period was also examined. The studies indicate that the radar response is governed by floe size and meltwater distribution.

  17. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  18. Aerial Image Systems

    NASA Astrophysics Data System (ADS)

    Clapp, Robert E.

    1987-09-01

    Aerial images produce the best stereoscopic images of the viewed world. Despite the fact that every optic in existence produces an aerial image, few persons are aware of their existence and possible uses. Constant reference to the eye and other optical systems has produced a psychosis of design that only considers "focal planes" in the design and analysis of optical systems. All objects in the field of view of the optical device are imaged by the device as an aerial image. Use of aerial images in vision and visual display systems can provide a true stereoscopic representation of the viewed world. This paper discusses aerial image systems, their applications and designs, and presents designs and design concepts that utilize aerial images to obtain superior visual displays, particularly with application to visual simulation.

  19. Using Panchromatic Imagery in Place of Multispectral Imagery for Kelp Detection in Water

    DTIC Science & Technology

    2010-01-01

    [Abstract not available for this record; only figure label residue was extracted. Recoverable captions: Figures 5 and 6 compare kelp maps produced from the original and refined panchromatic results, with colour coding for missed kelp (blue) and false indications.]

  20. Remote sensing of benthic microalgal biomass with a tower-mounted multispectral scanner

    NASA Technical Reports Server (NTRS)

    Jobson, D. J.; Katzberg, S. J.; Zingmark, R. G.

    1980-01-01

    A remote sensing instrument was mounted on a 50-ft tower overlooking North Inlet Estuary, South Carolina in order to conduct a remote sensing study of benthic microalgae. The instrument was programmed to take multispectral imagery data along a 90 deg horizontal frame in six spectral bands ranging from 400-1050 nm and had a ground resolution of about 3 cm. Imagery measurements were encoded in digital form on magnetic tape and were stored, decoded, and manipulated by computer. Correlation coefficients were calculated on imagery data and chlorophyll a concentrations derived from ground truth data. The most significant correlation occurred in the blue spectral band with numerical values ranging from -0.81 to -0.88 for three separate sampling periods. Mean values of chlorophyll a for a larger section of mudflat were estimated using regression equations. The scanner has provided encouraging results and promises to be a useful tool in sampling the biomass of intertidal benthic microalgae.

  1. Aerial imagery and structure-from-motion based DEM reconstruction of region-sized areas (Sierra Arana, Spain and Namur Province, Belgium) using a high-altitude drifting balloon platform.

    NASA Astrophysics Data System (ADS)

    Burlet, Christian; María Mateos, Rosa; Azañón, Jose Miguel; Perez, José Vicente; Vanbrabant, Yves

    2015-04-01

    different elevations. A 1 m/pixel ground resolution set covering an area of about 200 km² and mapping the eastern part of the Sierra Arana (Andalucía, Spain) includes a karstic field directly to the south-east of the ridge and the cliffs of the "Riscos del Moro". A 4 m/pixel ground resolution set covering an area of about 900 km² includes the landslide-active Diezma region (Andalucía, Spain) and the water reserve of the Francisco Abellan lake. The third set has a 3 m/pixel ground resolution, covers about 100 km² and maps the Famennian rock formations, known as part of "La Calestienne", outcropping near Beauraing and Rochefort in the Namur Province (Belgium). The DEMs and orthophotos have been referenced using ground control points from satellite imagery (Spain, Belgium) and DGPS (Belgium). The quality of the produced DEMs was then evaluated by comparing the level and accuracy of details and surface artefacts between available topographic data (SRTM, 30 m/pixel; topographic maps) and the three Stratochip sets. This evaluation showed that the models are in good correlation with existing data and can readily be used in geomorphology, structural and natural hazard studies.

  2. Sub-pixel resolution with the Multispectral Thermal Imager (MTI).

    SciTech Connect

    Decker, Max Louis; Smith, Jody Lynn; Nandy, Prabal

    2003-06-01

    The Multispectral Thermal Imager Satellite (MTI) has been used to test a sub-pixel sampling technique in an effort to obtain higher spatial frequency imagery than that of its original design. The MTI instrument is of particular interest because of its infrared detectors. In this spectral region, the detector size is traditionally the limiting factor in determining the satellite's ground sampling distance (GSD). Additionally, many over-sampling techniques require flexible command and control of the sensor and spacecraft. The MTI sensor is well suited for this task, as it is the only imaging system on the MTI satellite bus. In this super-sampling technique, MTI is maneuvered such that the data are collected at sub-pixel intervals on the ground. The data are then processed with a deconvolution algorithm that uses in-scene measured point spread functions (PSFs) to produce an image with a synthetically boosted GSD.
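
    The abstract does not specify which deconvolution algorithm was used; the sketch below shows one common PSF-based choice, Richardson-Lucy iteration, applied to a single resampled image. The actual MTI processing combines multiple sub-pixel-shifted acquisitions before deconvolution, which is not reproduced here:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(blurred, psf, iterations=30):
            """Richardson-Lucy deconvolution of a single image with a measured PSF."""
            blurred = np.asarray(blurred, float)
            psf = np.asarray(psf, float)
            psf = psf / psf.sum()                        # normalise the measured PSF
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full_like(blurred, blurred.mean())
            for _ in range(iterations):
                reblurred = fftconvolve(estimate, psf, mode='same')
                ratio = blurred / np.maximum(reblurred, 1e-12)
                estimate *= fftconvolve(ratio, psf_mirror, mode='same')
            return estimate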

  3. Improved capabilities of the Multispectral Atmospheric Mapping Sensor (MAMS)

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Batson, K. Bryan; Atkinson, Robert J.; Moeller, Chris C.; Menzel, W. Paul; James, Mark W.

    1989-01-01

    The Multispectral Atmospheric Mapping Sensor (MAMS) is an airborne instrument being investigated as part of NASA's high altitude research program. Findings from work on this and other instruments have been important as the scientific justification of new instrumentation for the Earth Observing System (EOS). This report discusses changes to the instrument which have led to new capabilities, improved data quality, and more accurate calibration methods. In order to provide a summary of the data collected with MAMS, a complete list of flight dates and locations is provided. For many applications, registration of MAMS imagery with landmarks is required. The navigation of this data on the Man-computer Interactive Data Access System (McIDAS) is discussed. Finally, research applications of the data are discussed and specific examples are presented to show the applicability of these measurements to NASA's Earth System Science (ESS) objectives.

  4. Multispectral analysis and cone signal modelling of pseudoisochromatic test plates

    NASA Astrophysics Data System (ADS)

    Luse, K.; Ozolinsh, M.; Fomins, S.; Gutmane, A.

    2013-12-01

    The aim of the study is to determine the consistency of the desired colour reproduction of the stimuli using calibrated printing technology available to anyone (an Epson Stylus Pro 7800 printer was used). Twenty-four colour vision assessment plates created at the University of Latvia were analysed right after their fabrication in August 2012 and again after intense use for 7 months (colour vision screening of 700 people). Multispectral imagery results indicate that the alignment of the samples after seven months of use has been maintained on the CIE xy confusion lines of the deutan deficiency type, but the shift towards the achromatic area of the diagram indicates a decrease in the total colour difference (ΔE*ab) between test background (achromatic) areas and stimuli (chromatic) areas, thus affecting the testing outcome and the deficiency severity level classification ability of the plates.
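
    For reference, the CIE 1976 colour difference used above is the Euclidean distance between two CIELAB triples; the values in the usage line below are hypothetical, not measurements from the study:

        import numpy as np

        def delta_e_ab(lab1, lab2):
            """CIE 1976 colour difference: Euclidean distance between two (L*, a*, b*) triples."""
            return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

        # hypothetical stimulus patch vs. achromatic background
        print(delta_e_ab((62.0, 18.5, 4.2), (63.1, 2.0, 1.5)))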

  5. Spectral properties of agricultural crops and soils measured from space, aerial, field and laboratory sensors

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Vanderbilt, V. C.; Robinson, B. F.; Daughtry, C. S. T.

    1980-01-01

    It is pointed out that in order to develop the full potential of multispectral measurements acquired from satellite or aircraft sensors to monitor, map, and inventory agricultural resources, increased knowledge and understanding of the spectral properties of crops and soils are needed. The present state of knowledge is reviewed, emphasizing current investigations of the multispectral reflectance characteristics of crops and soils as measured from laboratory, field, aerial, and satellite sensor systems. The relationships of important biological and physical characteristics of crops and soils to their spectral properties are discussed. Future research needs are also indicated.

  6. Gimbaled multispectral imaging system and method

    DOEpatents

    Brown, Kevin H.; Crollett, Seferino; Henson, Tammy D.; Napier, Matthew; Stromberg, Peter G.

    2016-01-26

    A gimbaled multispectral imaging system and method is described herein. In a general embodiment, the gimbaled multispectral imaging system has a cross support that defines a first gimbal axis and a second gimbal axis, wherein the cross support is rotatable about the first gimbal axis. The gimbaled multispectral imaging system comprises a telescope that is fixed to an upper end of the cross support, such that rotation of the cross support about the first gimbal axis alters the tilt of the telescope. The gimbaled multispectral imaging system includes optics that facilitate on-gimbal detection of visible light and off-gimbal detection of infrared light.

  7. Using Image Tour to Explore Multiangle, Multispectral Satellite Image

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Wegman, Edward J.; Martinez, Wendy; Symanzik, Juergen; Wallet, Brad

    2006-01-01

    This viewgraph presentation reviews the use of the Image Tour to explore multiangle, multispectral satellite imagery. Remote sensing data are spatial arrays of p-dimensional vectors where each component corresponds to one of p variables. Applying the same R^p to R^d projection to all pixels creates new images, which may be easier to analyze than the original because d < p. The image grand tour (IGT) steps through the space of projections and, with d = 3, outputs a sequence of RGB images, one for each step. In this talk, we apply the IGT to multiangle, multispectral data from NASA's MISR instrument. MISR views each pixel in four spectral bands at nine view angles. Multiple views detect photon scattering in different directions and are indicative of physical properties of the scene. The IGT allows us to explore MISR's data structure while maintaining spatial context, a key requirement for physical interpretation. We report results highlighting the uniqueness of multiangle data and how the IGT can exploit it.
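
    A sketch of a single grand-tour frame, assuming p-band pixel vectors projected through a random orthonormal p x 3 basis and stretched to RGB; a real tour interpolates smoothly between such projections rather than drawing them independently, and the stretching choice is an assumption:

        import numpy as np

        def tour_frame(cube, seed=0):
            """Project a (rows, cols, p) multiangle/multispectral cube to one RGB frame
            through a random orthonormal p x 3 basis (a single grand-tour step)."""
            rows, cols, p = cube.shape
            rng = np.random.default_rng(seed)
            q, _ = np.linalg.qr(rng.normal(size=(p, 3)))   # orthonormal projection directions
            rgb = cube.reshape(-1, p).astype(float) @ q
            rgb -= rgb.min(axis=0)
            rgb /= np.maximum(rgb.max(axis=0), 1e-12)      # stretch each channel to [0, 1]
            return rgb.reshape(rows, cols, 3)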

  8. Review of the SAFARI 2000 RC-10 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Myers, Jeff; Shelton, Gary; Annegarn, Harrold; Peterson, David L. (Technical Monitor)

    2001-01-01

    This presentation will review the aerial photography collected by the NASA ER-2 aircraft during the SAFARI (Southern African Regional Science Initiative) year 2000 campaign. It will include specifications on the camera and film, and will show examples of the imagery. It will also detail the extent of coverage, and the procedures to obtain film products from the South African government. Also included will be some sample applications of aerial photography for various environmental applications, and its use in augmenting other SAFARI data sets.

  9. Use of a Multispectral Uav Photogrammetry for Detection and Tracking of Forest Disturbance Dynamics

    NASA Astrophysics Data System (ADS)

    Minařík, R.; Langhammer, J.

    2016-06-01

    This study presents a new methodological approach for the assessment of spatial and qualitative aspects of forest disturbance based on the use of a multispectral imaging camera with UAV photogrammetry. We used the miniaturized multispectral sensor Tetracam Micro Multiple Camera Array (μ-MCA) Snap 6 with a multirotor imaging platform to obtain multispectral imagery with high spatial resolution. The study area is located in the Sumava Mountains, Central Europe, heavily affected by windstorms followed by extensive and repeated bark beetle (Ips typographus [L.]) outbreaks in the past 20 years. After two decades, there is an apparent continuous spread of forest disturbance as well as rapid regeneration of forest vegetation, related to changes in species and their diversity. To test the suggested methodology, we launched an imaging campaign in an experimental site under various stages of forest disturbance and regeneration. The imagery of high spatial and spectral resolution enabled analysis of the inner structure and dynamics of the processes. The most informative bands for detecting tree stress caused by bark beetle infestation are band 2 (650 nm) and band 3 (700 nm), followed by band 4 (800 nm), from the red-edge and NIR parts of the spectrum. We identified only three indices that seem able to correctly detect the different forest disturbance categories in the complex conditions of a mixture of categories: the Normalized Difference Vegetation Index (NDVI), the Simple 800/650 Ratio (Pigment Specific Simple Ratio B1), and the Red-edge Index.

  10. Uav Multispectral Survey to Map Soil and Crop for Precision Farming Applications

    NASA Astrophysics Data System (ADS)

    Sonaa, Giovanna; Passoni, Daniele; Pinto, Livio; Pagliari, Diana; Masseroni, Daniele; Ortuani, Bianca; Facchi, Arianna

    2016-06-01

    New sensors mounted on UAVs and optimal procedures for survey, data acquisition and analysis are continuously being developed and tested for applications in precision farming. Procedures to integrate multispectral aerial data about soil and crop with ground-based proximal geophysical data are a recent research topic aimed at delineating homogeneous zones for the management of agricultural inputs (i.e., water, nutrients). Multispectral and multitemporal orthomosaics were produced over a test field (a 100 m x 200 m plot within a maize field) to map vegetation and soil indices, as well as crop heights, with suitable ground resolution. UAV flights were performed at two moments during the crop season: before sowing on bare soil, and just before flowering when the maize was nearly at its maximum height. Two cameras, for color (RGB) and false color (NIR-RG) images, were used. The images were processed in Agisoft Photoscan to produce Digital Surface Models (DSMs) of bare soil and crop, and multispectral orthophotos. To overcome some difficulties in the automatic search for matching points in the block adjustment of the crop images, the scientific software developed by the Politecnico of Milan was also used to enhance image orientation. The surveys and image processing are described, as well as results of the classification of the multispectral-multitemporal orthophotos and of the soil indices.

  11. Evaluating the Potential of Multispectral Airborne LIDAR for Topographic Mapping and Land Cover Classification

    NASA Astrophysics Data System (ADS)

    Wichmann, V.; Bremer, M.; Lindenberger, J.; Rutzinger, M.; Georges, C.; Petrini-Monteferri, F.

    2015-08-01

    Recently multispectral LiDAR has become a promising research field for enhanced LiDAR classification workflows and, for example, the assessment of vegetation health. Current analyses of multispectral LiDAR are mainly based on experimental setups, which are often of limited transferability to operational tasks. In late 2014 Optech Inc. announced the first commercially available multispectral LiDAR system for airborne topographic mapping. The combined system makes synchronous multispectral LiDAR measurements possible, solving the time-shift problems of experimental acquisitions. This paper presents an explorative analysis of the first airborne collected data with a focus on class-specific spectral signatures. Spectral patterns are used for a classification approach, which is evaluated in comparison to a manual reference classification. Typical spectral patterns comparable to optical imagery could be observed for homogeneous and planar surfaces. For rough and volumetric objects such as trees, the spectral signature becomes biased by signal modification due to multi-return effects. However, we show that this first flight data set is suitable for conventional geometrical classification and mapping procedures. Additional classes such as sealed and unsealed ground can be separated with high classification accuracies. For vegetation classification, the distinction of species and health classes is possible.

  12. Small UAV-Acquired, High-resolution, Georeferenced Still Imagery

    SciTech Connect

    Ryan Hruska

    2005-09-01

    Currently, small Unmanned Aerial Vehicles (UAVs) are primarily used for capturing and down-linking real-time video. To date, their role as a low-cost airborne platform for capturing high-resolution, georeferenced still imagery has not been fully utilized. On-going work within the Unmanned Vehicle Systems Program at the Idaho National Laboratory (INL) is attempting to exploit this small UAV-acquired, still imagery potential. Initially, a UAV-based still imagery work flow model was developed that includes initial UAV mission planning, sensor selection, UAV/sensor integration, and imagery collection, processing, and analysis. Components to support each stage of the work flow are also being developed. Critical to use of acquired still imagery is the ability to detect changes between images of the same area over time. To enhance the analysts’ change detection ability, a UAV-specific, GIS-based change detection system called SADI or System for Analyzing Differences in Imagery is under development. This paper will discuss the associated challenges and approaches to collecting still imagery with small UAVs. Additionally, specific components of the developed work flow system will be described and graphically illustrated using varied examples of small UAV-acquired still imagery.
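
    A change-detection capability such as the SADI system described above starts from co-registered image pairs. The following is a minimal, generic sketch of difference-based change detection (not the SADI implementation); the arrays and threshold are hypothetical.

```python
import numpy as np

def change_mask(img_t0, img_t1, threshold=0.15):
    """Flag pixels whose absolute intensity change exceeds a threshold.

    Assumes both inputs are co-registered, single-band float arrays scaled to [0, 1].
    """
    diff = np.abs(img_t1.astype(float) - img_t0.astype(float))
    return diff > threshold

# Hypothetical co-registered acquisitions of the same area at two dates.
before = np.random.rand(512, 512)
after = before.copy()
after[100:150, 200:260] += 0.4          # simulate a changed patch
after = np.clip(after, 0.0, 1.0)

mask = change_mask(before, after)
print("changed pixels:", int(mask.sum()))
```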

  13. Efficient pedestrian detection from aerial vehicles with object proposals and deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2016-05-01

    As Unmanned Aerial Systems grow in numbers, pedestrian detection from aerial platforms is becoming a topic of increasing importance. By providing greater contextual information and a reduced potential for occlusion, the aerial vantage point provided by Unmanned Aerial Systems is highly advantageous for many surveillance applications, such as target detection, tracking, and action recognition. However, due to the greater distance between the camera and the scene, targets of interest in aerial imagery are generally smaller and have less detail. Deep Convolutional Neural Networks (CNNs) have demonstrated excellent object classification performance, and in this paper we adapt them to the problem of pedestrian detection from aerial platforms. We train a CNN with five layers consisting of three convolution-pooling layers and two fully connected layers. We also address the computational inefficiencies of the sliding window method for object detection. In the sliding window configuration, a very large number of candidate patches are generated from each frame, while only a small number of them contain pedestrians. We utilize the Edge Box object proposal generation method to screen candidate patches based on an "objectness" criterion, so that only regions that are likely to contain objects are processed. This method significantly reduces the number of image patches processed by the neural network and makes our classification method very efficient. The resulting two-stage system is a good candidate for real-time implementation onboard modern aerial vehicles. Furthermore, testing on three datasets confirmed that our system offers high detection accuracy for terrestrial pedestrian detection in aerial imagery.
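
    The abstract gives only the layer counts, so the following PyTorch sketch of a network with three convolution-pooling layers and two fully connected layers uses assumed patch size, channel counts and kernel sizes; it is an illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PedestrianCNN(nn.Module):
    """Three conv-pool blocks followed by two fully connected layers (binary output)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),           # pedestrian / non-pedestrian
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Hypothetical 64x64 RGB candidate patches produced by an object-proposal stage.
patches = torch.randn(8, 3, 64, 64)
logits = PedestrianCNN()(patches)
print(logits.shape)                      # torch.Size([8, 2])
```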

  14. An approach to optimal hyperspectral and multispectral signature and image fusion for detecting hidden targets on shorelines

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R.

    2015-10-01

    Hyperspectral and multispectral imagery of shorelines collected from airborne and shipborne platforms are used following pushbroom imagery corrections based on inertial motion units, augmented global positioning data and Kalman filtering. Corrected radiance or reflectance images are then used to optimize synthetic high spatial resolution spectral signatures resulting from an optimized data fusion process. The process demonstrated utilizes littoral zone features from imagery acquired in the Gulf of Mexico region. Shoreline imagery along the Banana River, Florida, is presented that utilizes a technique making use of numerically embedded targets in both higher spatial resolution multispectral images and lower spatial resolution hyperspectral imagery. The fusion process developed utilizes optimization procedures that include random selection of regions and pixels in the imagery, and minimization of the difference between the synthetic and observed signatures. The optimized data fusion approach allows detection of spectral anomalies in the resolution-enhanced data cubes. Spectral-spatial anomaly detection is demonstrated using numerically embedded line targets within actual imagery. The approach allows one to test spectral signature anomaly detection and to identify features and targets. The optimized data fusion techniques and software allow one to perform sensitivity analysis and optimization in the singular value decomposition model building process and the 2-D Butterworth cutoff frequency and order numerical selection process. The data fusion "synthetic imagery" forms a basis for spectral-spatial resolution enhancement for optimal band selection and remote sensing algorithm development within "spectral anomaly areas". Sensitivity analysis demonstrates the data fusion methodology is most sensitive to (a) the pixels and features used in the SVD model building process and (b) the 2-D Butterworth cutoff frequency optimized by application of K
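
    The record is truncated, but the 2-D Butterworth filtering it refers to is a standard frequency-domain operation; the sketch below is a generic illustration with arbitrary cutoff and order values, not the paper's implementation.

```python
import numpy as np

def butterworth_lowpass_2d(image, cutoff=0.1, order=2):
    """Apply a 2-D Butterworth low-pass filter in the frequency domain.

    cutoff is the cutoff frequency in cycles per pixel (Nyquist is 0.5);
    order controls the steepness of the roll-off.
    """
    rows, cols = image.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d = np.sqrt(u ** 2 + v ** 2)                       # radial spatial frequency
    h = 1.0 / (1.0 + (d / cutoff) ** (2 * order))      # Butterworth transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(image) * h))

# Hypothetical panchromatic band used in a fusion step.
pan = np.random.rand(256, 256)
smooth = butterworth_lowpass_2d(pan, cutoff=0.05, order=2)
print(smooth.shape)
```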

  15. Sandia multispectral analyst remote sensing toolkit (SMART).

    SciTech Connect

    Post, Brian Nelson; Smith, Jody Lynn; Geib, Peter L.; Nandy, Prabal; Wang, Nancy Nairong

    2003-03-01

    This remote sensing science and exploitation work focused on exploitation algorithms and methods targeted at the analyst. SMART is a 'plug-in' to commercial remote sensing software that provides algorithms to enhance the utility of the Multispectral Thermal Imager (MTI) and other multispectral satellite data. This toolkit has been licensed to 22 government organizations.

  16. A multispectral sorting device for wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A low-cost multispectral sorting device was constructed using three visible and three near-infrared light-emitting diodes (LED) with peak emission wavelengths of 470 nm (blue), 527 nm (green), 624 nm (red), 850 nm, 940 nm, and 1070 nm. The multispectral data were collected by rapidly (~12 kHz) blin...

  17. PORTABLE MULTISPECTRAL IMAGING INSTRUMENT FOR FOOD INDUSTRY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this paper is to design and fabricate a hand-held multispectral instrument for real-time contaminant detection. Specifically, the protocol to develop a portable multispectral instrument including optical sensor design, fabrication, calibration, data collection, analysis and algorith...

  18. Applying Neural Networks to Hyperspectral and Multispectral Field Data for Discrimination of Cruciferous Weeds in Winter Crops

    PubMed Central

    de Castro, Ana-Isabel; Jurado-Expósito, Montserrat; Gómez-Casero, María-Teresa; López-Granados, Francisca

    2012-01-01

    In the context of detection of weeds in crops for site-specific weed control, on-ground spectral reflectance measurements are the first step to determine the potential of remote spectral data to classify weeds and crops. Field studies were conducted for four years at different locations in Spain. We aimed to distinguish cruciferous weeds in wheat and broad bean crops using hyperspectral and multispectral readings in the visible and near-infrared spectrum. To identify differences in reflectance between cruciferous weeds, we applied three classification methods: stepwise discriminant (STEPDISC) analysis and two neural networks, specifically, multilayer perceptron (MLP) and radial basis function (RBF). Hyperspectral and multispectral signatures of cruciferous weeds, and wheat and broad bean crops, can be classified using STEPDISC analysis and MLP and RBF neural networks with varying success, with the MLP model being the most accurate, achieving classification performance of 100%, or above 98.1%, in all years. Classification accuracy from hyperspectral signatures was similar to that from multispectral readings and spectral indices, suggesting that little advantage would be obtained by using more expensive airborne hyperspectral imagery. Therefore, for future investigations, we recommend using multispectral remote imagery to explore whether it can discriminate these weeds and crops. PMID:22629171
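
    A minimal scikit-learn sketch (not the authors' code) of training a multilayer perceptron on labelled spectral signatures; the array shapes, class labels and network size are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical data: rows are field spectra, columns are wavebands;
# labels distinguish cruciferous weed, wheat and broad bean.
X = np.random.rand(300, 50)
y = np.random.choice(["cruciferous_weed", "wheat", "broad_bean"], size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```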

  19. Applying neural networks to hyperspectral and multispectral field data for discrimination of cruciferous weeds in winter crops.

    PubMed

    de Castro, Ana-Isabel; Jurado-Expósito, Montserrat; Gómez-Casero, María-Teresa; López-Granados, Francisca

    2012-01-01

    In the context of detection of weeds in crops for site-specific weed control, on-ground spectral reflectance measurements are the first step to determine the potential of remote spectral data to classify weeds and crops. Field studies were conducted for four years at different locations in Spain. We aimed to distinguish cruciferous weeds in wheat and broad bean crops using hyperspectral and multispectral readings in the visible and near-infrared spectrum. To identify differences in reflectance between cruciferous weeds, we applied three classification methods: stepwise discriminant (STEPDISC) analysis and two neural networks, specifically, multilayer perceptron (MLP) and radial basis function (RBF). Hyperspectral and multispectral signatures of cruciferous weeds, and wheat and broad bean crops, can be classified using STEPDISC analysis and MLP and RBF neural networks with varying success, with the MLP model being the most accurate, achieving classification performance of 100%, or above 98.1%, in all years. Classification accuracy from hyperspectral signatures was similar to that from multispectral readings and spectral indices, suggesting that little advantage would be obtained by using more expensive airborne hyperspectral imagery. Therefore, for future investigations, we recommend using multispectral remote imagery to explore whether it can discriminate these weeds and crops.

  20. Algorithms for lineaments detection in processing of multispectral images

    NASA Astrophysics Data System (ADS)

    Borisova, D.; Jelev, G.; Atanassov, V.; Koprinkova-Hristova, Petia; Alexiev, K.

    2014-10-01

    Satellite remote sensing is a universal tool for investigating the different areas of Earth and environmental sciences. Advances in optoelectronic devices, long tested in the laboratory and the field and mounted on board remote sensing platforms, further improve the capability of instruments to acquire information about the Earth and its resources at global, regional and local scales. With the availability of new high spatial and spectral resolution satellite and aircraft imagery, new applications for large-scale mapping and monitoring become possible. Integration with Geographic Information Systems (GIS) allows synergistic processing of multi-source spatial and spectral data. Here we present the results of the joint project DFNI I01/8, funded by the Bulgarian Science Fund, focused on algorithms for preprocessing and processing spectral data using correction methods and visual and automatic interpretation. The objects of this study are lineaments. Lineaments are line features on the Earth's surface that are a sign of geological structures. Geological lineaments usually appear on multispectral images as lines, edges or linear shapes resulting from color variations of the surface structures. The basic geometry of a line comprises orientation, length and curvature. The detection of geological lineaments is an important operation in the exploration for mineral deposits, the investigation of active fault patterns, the prospecting of water resources, the protection of people, etc. In this study an integrated approach to detecting lineaments is applied. It combines the visual interpretation of various geological and geographical indications in multispectral satellite images, the application of spatial analysis in GIS, and the automatic processing of the multispectral images by Canny
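
    The record breaks off at the mention of the Canny detector, but edge detection of this kind is commonly used to extract lineament candidates; the following scikit-image sketch is a generic illustration, with the band and the sigma value chosen arbitrarily.

```python
import numpy as np
from skimage import feature

# Hypothetical single band extracted from a multispectral scene, scaled to [0, 1].
band = np.random.rand(512, 512)

# Canny edge detection; edge pixels would then be vectorized and filtered by
# orientation and length to retain lineament candidates.
edges = feature.canny(band, sigma=2.0)
print("edge pixels:", int(edges.sum()))
```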

  1. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  2. Application of high resolution images from unmanned aerial vehicles for hydrology and rangeland science

    NASA Astrophysics Data System (ADS)

    Rango, A.; Vivoni, E. R.; Anderson, C. A.; Perini, N. A.; Saripalli, S.; Laliberte, A.

    2012-12-01

    A common problem in many natural resource disciplines is the lack of high-enough spatial resolution images that can be used for monitoring and modeling purposes. Advances have been made in the utilization of Unmanned Aerial Vehicles (UAVs) in hydrology and rangeland science. By utilizing low flight altitudes and velocities, UAVs are able to produce high resolution (5 cm) images as well as stereo coverage (with 75% forward overlap and 40% sidelap) to extract digital elevation models (DEM). Another advantage of flying at low altitude is that the potential problems of atmospheric haze obscuration are eliminated. Both small fixed-wing and rotary-wing aircraft have been used in our experiments over two rangeland areas in the Jornada Experimental Range in southern New Mexico and the Santa Rita Experimental Range in southern Arizona. The fixed-wing UAV has a digital camera in the wing and a six-band multispectral camera in the nose, while the rotary-wing UAV carries a digital camera as payload. Because we have been acquiring imagery for several years, there are now > 31,000 photos at one of the study sites, and 177 mosaics over rangeland areas have been constructed. Using the DEM obtained from the imagery we have determined the actual catchment areas of three watersheds and compared these to previous estimates. At one site, the UAV-derived watershed area is 4.67 ha, which is 22% smaller compared to a manual survey using a GPS unit obtained several years ago. This difference can be significant in constructing a watershed model of the site. From a vegetation species classification, we also determined that two of the shrub types in this small watershed (mesquite and creosote with 6.47% and 5.82% cover, respectively) grow in similar locations (flat upland areas with deep soils), whereas the most predominant shrub (mariola with 11.9% cover) inhabits hillslopes near stream channels (with steep shallow soils). The positioning of these individual shrubs throughout the catchment using

  3. Evaluation of Landsat-7 ETM+ Panchromatic Band for Image Fusion with Multispectral Bands

    SciTech Connect

    Liu Jianguo

    2000-12-15

    The Landsat-7 ETM+ panchromatic band is taken simultaneously with the multispectral bands using the same sensor system. The two data sets, therefore, are coregistered accurately, and the solar illumination and other environmental conditions are identical. This makes ETM+ Pan advantageous over SPOT Pan for resolution fusion. A spectral preserve image fusion technique, Smoothing Filter-Based Intensity Modulation (SFIM), can produce optimal fusion data without altering the spectral properties of the original image if the coregistration error is minimal. With TM/SPOT Pan fusion, the technique is superior to HSI and Brovey transform fusion techniques in spectral fidelity, but has slightly degraded edge sharpness as a result of TM/SPOT Pan coregistration error, because SFIM is sensitive to coregistration accuracy and temporal changes of edges. The problem is self-resolved for ETM+ because there is virtually no coregistration error between the panchromatic band and the multispectral bands. Quality fusion imagery data can thus be produced.
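
    A minimal sketch of the SFIM idea as described, assuming the multispectral band has already been resampled to the panchromatic grid; the array names and the smoothing-window size are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sfim_fuse(ms_band_upsampled, pan, window=5):
    """Smoothing Filter-based Intensity Modulation.

    Modulates the upsampled multispectral band by the ratio of the panchromatic
    band to its locally smoothed version, injecting spatial detail while aiming
    to preserve the original spectral properties.
    """
    pan_smooth = uniform_filter(pan.astype(float), size=window)
    return ms_band_upsampled * pan / (pan_smooth + 1e-9)

# Hypothetical co-registered inputs on the 15 m panchromatic grid.
ms_up = np.random.rand(400, 400)   # multispectral band resampled to the pan grid
pan = np.random.rand(400, 400)     # panchromatic band
fused = sfim_fuse(ms_up, pan)
print(fused.shape)
```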

  4. IMPROVING THE ACCURACY OF HISTORIC SATELLITE IMAGE CLASSIFICATION BY COMBINING LOW-RESOLUTION MULTISPECTRAL DATA WITH HIGH-RESOLUTION PANCHROMATIC DATA

    SciTech Connect

    Getman, Daniel J

    2008-01-01

    Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. In an effort to examine improving the accuracy of historic satellite image classification by combining satellite and air photo data, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of 6 multispectral channels (30-meter pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15m pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1m pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.

  5. BOREAS Level-0 C-130 Aerial Photography

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Dominguez, Roseanne; Hall, Forrest G. (Editor)

    2000-01-01

    For the BOReal Ecosystem-Atmosphere Study (BOREAS), C-130 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The NASA C-130 Earth Resources aircraft can accommodate two mapping cameras during flight, each of which can be fitted with 6- or 12-inch focal-length lenses and black-and-white, natural-color, or color-IR film, depending upon requirements. Both cameras were often in operation simultaneously, although sometimes only the lower resolution camera was deployed. When both cameras were in operation, the higher resolution camera was often used in a more limited fashion. The acquired photography covers the period of April to September 1994. The aerial photography was delivered as rolls of large format (9 x 9 inch) color transparency prints, with imagery from multiple missions (hundreds of prints) often contained within a single roll. A total of 1533 frames were collected from the C-130 platform for BOREAS in 1994. Note that the level-0 C-130 transparencies are not contained on the BOREAS CD-ROM set. An inventory file is supplied on the CD-ROM to inform users of all the data that were collected. Some photographic prints were made from the transparencies. In addition, BORIS staff digitized a subset of the transparencies and stored the images in JPEG format. The CD-ROM set contains a small subset of the collected aerial photography that was digitally scanned and stored as JPEG files for most tower and auxiliary sites in the NSA and SSA. See Section 15 for information about how to acquire additional imagery.

  6. High-Resolution Satellite Imagery Is an Important yet Underutilized Resource in Conservation Biology

    PubMed Central

    Boyle, Sarah A.; Kennedy, Christina M.; Torres, Julio; Colman, Karen; Pérez-Estigarribia, Pastor E.; de la Sancha, Noé U.

    2014-01-01

    Technological advances and increasing availability of high-resolution satellite imagery offer the potential for more accurate land cover classifications and pattern analyses, which could greatly improve the detection and quantification of land cover change for conservation. Such remotely-sensed products, however, are often expensive and difficult to acquire, which prohibits or reduces their use. We tested whether imagery of high spatial resolution (≤5 m) differs from lower-resolution imagery (≥30 m) in performance and extent of use for conservation applications. To assess performance, we classified land cover in a heterogeneous region of Interior Atlantic Forest in Paraguay, which has undergone recent and dramatic human-induced habitat loss and fragmentation. We used 4 m multispectral IKONOS and 30 m multispectral Landsat imagery and determined the extent to which resolution influenced the delineation of land cover classes and patch-level metrics. Higher-resolution imagery more accurately delineated cover classes, identified smaller patches, retained patch shape, and detected narrower, linear patches. To assess extent of use, we surveyed three conservation journals (Biological Conservation, Biotropica, Conservation Biology) and found limited application of high-resolution imagery in research, with only 26.8% of land cover studies analyzing satellite imagery, and of these studies only 10.4% used imagery ≤5 m resolution. Our results suggest that high-resolution imagery is warranted yet under-utilized in conservation research, but is needed to adequately monitor and evaluate forest loss and conversion, and to delineate potentially important stepping-stone fragments that may serve as corridors in a human-modified landscape. Greater access to low-cost, multiband, high-resolution satellite imagery would therefore greatly facilitate conservation management and decision-making. PMID:24466287

  7. High-resolution satellite imagery is an important yet underutilized resource in conservation biology.

    PubMed

    Boyle, Sarah A; Kennedy, Christina M; Torres, Julio; Colman, Karen; Pérez-Estigarribia, Pastor E; de la Sancha, Noé U

    2014-01-01

    Technological advances and increasing availability of high-resolution satellite imagery offer the potential for more accurate land cover classifications and pattern analyses, which could greatly improve the detection and quantification of land cover change for conservation. Such remotely-sensed products, however, are often expensive and difficult to acquire, which prohibits or reduces their use. We tested whether imagery of high spatial resolution (≤5 m) differs from lower-resolution imagery (≥30 m) in performance and extent of use for conservation applications. To assess performance, we classified land cover in a heterogeneous region of Interior Atlantic Forest in Paraguay, which has undergone recent and dramatic human-induced habitat loss and fragmentation. We used 4 m multispectral IKONOS and 30 m multispectral Landsat imagery and determined the extent to which resolution influenced the delineation of land cover classes and patch-level metrics. Higher-resolution imagery more accurately delineated cover classes, identified smaller patches, retained patch shape, and detected narrower, linear patches. To assess extent of use, we surveyed three conservation journals (Biological Conservation, Biotropica, Conservation Biology) and found limited application of high-resolution imagery in research, with only 26.8% of land cover studies analyzing satellite imagery, and of these studies only 10.4% used imagery ≤5 m resolution. Our results suggest that high-resolution imagery is warranted yet under-utilized in conservation research, but is needed to adequately monitor and evaluate forest loss and conversion, and to delineate potentially important stepping-stone fragments that may serve as corridors in a human-modified landscape. Greater access to low-cost, multiband, high-resolution satellite imagery would therefore greatly facilitate conservation management and decision-making.

  8. Orientation Strategies for Aerial Oblique Images

    NASA Astrophysics Data System (ADS)

    Wiedemann, A.; Moré, J.

    2012-07-01

    Oblique aerial images are becoming more and more widespread, filling the gap between vertical aerial images and mobile mapping systems. Different systems are on the market. For some applications, like texture mapping, precise orientation data are required. One requirement is a stable interior orientation, which can be achieved with stable camera systems; the other is a precise exterior orientation. A sufficient exterior orientation can be achieved with a large effort in direct sensor orientation, although minor errors in the angles have a larger effect than in vertical imagery. The more appropriate approach is to determine the precise orientation parameters by photogrammetric methods using an adapted aerial triangulation. Due to the different points of view towards the object, traditional aerotriangulation matching tools fail, as they produce many blunders and require a lot of manual work to achieve a sufficient solution. In this paper some approaches are discussed and results are presented for the most promising ones. We describe a single-step approach with an aerotriangulation using all available images; a two-step approach with an aerotriangulation of only the vertical images plus a mathematical transformation of the oblique images using the oblique cameras' eccentricity; and finally an extended functional model for a bundle block adjustment considering the mechanical connection between vertical and oblique images. Besides accuracy, other aspects such as efficiency and the required manual work also have to be considered.

  9. Updating Maps Using High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Shahzad Janjua, Khurram; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Kingdom of Saudi Arabia is one of the most dynamic countries in the world. It has witnessed very rapid urban development that is altering the Kingdom's landscape on a daily basis. In recent years a substantial increase in urban populations has been observed, resulting in the formation of large cities. Considering this fast-paced growth, it has become necessary to monitor these changes while taking into account the challenges faced by aerial photography projects. It has been observed that data obtained through aerial photography have a life cycle of five years because of delays caused by extreme weather conditions and dust storms, which act as hindrances during aerial imagery acquisition and have increased the costs of aerial survey projects. All of these circumstances require that we consider alternatives that can provide easier and better ways of image acquisition in a short span of time while achieving reliable accuracy and cost effectiveness. The approach of this study is to conduct an extensive comparison between data sets of different resolutions, which include an orthophoto of 10 cm GSD, stereo images of 50 cm GSD and stereo images of 1 m GSD, for map updating. Different approaches have been applied for digitizing buildings, roads, tracks, the airport, roof-level changes, filling stations, buildings under construction, property boundaries, mosque buildings and parking places.

  10. AERIAL METHODS OF EXPLORATION

    DTIC Science & Technology

    The development of photointerpretation techniques for identifying kimberlite pipes on aerial photographs is discussed. The geographic area considered is the Daldyn region, which lies in the zone of Northern Taiga of Yakutiya.

  11. Aerial image databases for pipeline rights-of-way management

    NASA Astrophysics Data System (ADS)

    Jadkowski, Mark A.

    1996-03-01

    Pipeline companies that own and manage extensive rights-of-way corridors are faced with ever-increasing regulatory pressures, operating issues, and the need to remain competitive in today's marketplace. Automation has long been an answer to the problem of having to do more work with less people, and Automated Mapping/Facilities Management/Geographic Information Systems (AM/FM/GIS) solutions have been implemented at several pipeline companies. Until recently, the ability to cost-effectively acquire and incorporate up-to-date aerial imagery into these computerized systems has been out of the reach of most users. NASA's Earth Observations Commercial Applications Program (EOCAP) is providing a means by which pipeline companies can bridge this gap. The EOCAP project described in this paper includes a unique partnership with NASA and James W. Sewall Company to develop an aircraft-mounted digital camera system and a ground-based computer system to geometrically correct and efficiently store and handle the digital aerial images in an AM/FM/GIS environment. This paper provides a synopsis of the project, including details on (1) the need for aerial imagery, (2) NASA's interest and role in the project, (3) the design of a Digital Aerial Rights-of-Way Monitoring System, (4) image georeferencing strategies for pipeline applications, and (5) commercialization of the EOCAP technology through a prototype project at Algonquin Gas Transmission Company which operates major gas pipelines in New England, New York, and New Jersey.

  12. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is clearly visualized, so ground features can be readily recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images are highly dependent on light conditions, and the classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the shortage of spectral information, which remains a challenge for ground-feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible-light and near-infrared images via close-range photogrammetry, matching the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, given thresholds on height and color information are used for classification.
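
    As a minimal, generic sketch of height- and colour-threshold classification of a multispectral point cloud (not the authors' workflow); the column layout and threshold values are hypothetical.

```python
import numpy as np

# Hypothetical point cloud: columns are x, y, z, red, green, blue, near-infrared.
points = np.random.rand(10000, 7)
z, red, nir = points[:, 2], points[:, 3], points[:, 6]

ndvi = (nir - red) / (nir + red + 1e-9)

# Simple rule set: vegetation is NIR-bright; elevated non-vegetated points are structures.
vegetation = ndvi > 0.3
structures = (z > 0.5) & ~vegetation     # height threshold in the cloud's own units
ground = ~vegetation & ~structures

print(int(vegetation.sum()), int(structures.sum()), int(ground.sum()))
```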

  13. Cucumber disease diagnosis using multispectral images

    NASA Astrophysics Data System (ADS)

    Feng, Jie; Li, Hongning; Shi, Junsheng; Yang, Weiping; Liao, Ningfang

    2009-07-01

    In this paper, a multispectral imaging technique for plant disease diagnosis is presented. First, a multispectral imaging system is designed. This system utilizes 15 narrow-band filters, a panchromatic band, a monochrome CCD camera, and a standard illumination and observation environment. The spectral reflectance and color of 8 Macbeth color patches are reproduced between 400 nm and 700 nm in the process. In addition, the spectral reflectance angle and color difference are obtained through measurements and analysis of the color patches using a spectrometer and the multispectral imaging system. The results show that the 16-band multispectral imaging system achieves good accuracy in spectral reflectance and color reproduction. Second, common diseases of cucumber, a horticultural plant, are the research objects. 210 multispectral samples are obtained with the system and classified by a BP artificial neural network. The classification accuracies for Sphaerotheca fuliginea, Corynespora cassiicola and Pseudoperonospora cubensis are 100%, while those for Trichothecium roseum and Cladosporium cucumerinum are 96.67% and 90.00%, respectively. It is confirmed that the multispectral imaging system achieves good accuracy in cucumber disease diagnosis.

  14. Stratigraphic correlation by integrating photostratigraphy and remote sensing multispectral data: An example from Jurassic-Eocene strata, Northern Somalia

    SciTech Connect

    Sgavetti, M.; Ferrari, M.C.; Chiari, R.

    1995-11-01

    Integrated analyses of aerial photographs and multispectral remote sensing images were used for stratigraphic correlation in mainly carbonate and evaporitic rocks. These rocks crop out in an area of northern Somalia characterized by an arid climate. By the aerial photo analysis, we recognized photostratigraphic logs and stratal patterns and established correlations based on the tracing of physical surfaces with chronostratigraphic significance, such as photohorizons and photostratigraphic discontinuities. A limited number of field sections provided the lithological interpretation of the packages of strata delineated in aerial photos. By satellite multispectral (Landsat Thematic Mapper) data analysis we identified image facies that represent packages of strata with different lithological characteristics. To interpret the image facies, we compared the responses in the thematic mapper (TM) bands with the laboratory spectroscopic properties of rock samples from the study area, and interpreted the absorption features by petrographic analysis. The Mesozoic and Tertiary strata analyzed herein are part of several formations deposited on a passive margin preceding the Oligocene-Miocene Gulf of Aden rifting and initial drifting. Following this approach, a number of stratigraphic units were recognized and mapped on aerial photos, and a framework of photostratigraphic correlation surfaces was delineated over significantly wide areas. These surfaces approximate time surfaces and are traced both within and across the lithostratigraphic units, improving existing maps. This method represents a mapping tool preliminary to more detailed field work, and is particularly useful in areas of difficult access.

  15. U. S. Department of Energy Aerial Measuring Systems

    SciTech Connect

    J. J. Lease

    1998-10-01

    The Aerial Measuring Systems (AMS) is an aerial surveillance system. This system consists of remote sensing equipment to include radiation detectors; multispectral, thermal, radar, and laser scanners; precision cameras; and electronic imaging and still video systems. This equipment, in varying combinations, is mounted in an airplane or helicopter and flown at different heights in specific patterns to gather various types of data. This system is a key element in the US Department of Energy's (DOE) national emergency response assets. The mission of the AMS program is twofold--first, to respond to emergencies involving radioactive materials by conducting aerial surveys to rapidly track and map the contamination that may exist over a large ground area and second, to conduct routinely scheduled, aerial surveys for environmental monitoring and compliance purposes through the use of credible science and technology. The AMS program evolved from an early program, begun by a predecessor to the DOE--the Atomic Energy Commission--to map the radiation that may have existed within and around the terrestrial environments of DOE facilities, which produced, used, or stored radioactive materials.

  16. On-board multispectral classification study

    NASA Technical Reports Server (NTRS)

    Ewalt, D.

    1979-01-01

    The factors relating to onboard multispectral classification were investigated. The functions implemented in ground-based processing systems for current Earth observation sensors were reviewed. The Multispectral Scanner, Thematic Mapper, Return Beam Vidicon, and Heat Capacity Mapper were studied. The concept of classification was reviewed and extended from the ground-based image processing functions to an onboard system capable of multispectral classification. Eight different onboard configurations, each with varying amounts of ground-spacecraft interaction, were evaluated. Each configuration was evaluated in terms of turnaround time, onboard processing and storage requirements, geometric and classification accuracy, onboard complexity, and ancillary data required from the ground.

  17. Fusion of Hyperspectral and Vhr Multispectral Image Classifications in Urban Areas

    NASA Astrophysics Data System (ADS)

    Hervieu, Alexandre; Le Bris, Arnaud; Mallet, Clément

    2016-06-01

    An energetical approach is proposed for classification decision fusion in urban areas using multispectral and hyperspectral imagery at distinct spatial resolutions. Hyperspectral data provide a great ability to discriminate land-cover classes, while multispectral data, usually at higher spatial resolution, make possible a more accurate spatial delineation of the classes. Hence, the aim here is to achieve the most accurate classification maps by taking advantage of both data sources at the decision level: the spectral properties of the hyperspectral data and the geometrical resolution of the multispectral images. More specifically, the proposed method takes into account probability class membership maps in order to improve the classification fusion process. Such probability maps are available using standard classification techniques such as Random Forests or Support Vector Machines. Classification probability maps are integrated into an energy framework where minimization of a given energy leads to better classification maps. The energy is minimized using a graph-cut method called quadratic pseudo-boolean optimization (QPBO) with α-expansion. A first model is proposed that gives satisfactory results in terms of classification results and visual interpretation. This model is compared to a standard Potts model adapted to the considered problem. Finally, the model is enhanced by integrating the spatial contrast observed in the data source of higher spatial resolution (i.e., the multispectral image). Results obtained using the proposed energetical decision fusion process are shown on two urban multispectral/hyperspectral datasets. A 2-3% improvement is noticed with respect to a Potts formulation and 3-8% compared to a single hyperspectral-based classification.
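
    As a simplified illustration of decision-level fusion (not the energy-minimization model of the paper), per-class probability maps from the two sources can be combined pixel-wise before taking the arg-max; the weights and map sizes below are arbitrary assumptions.

```python
import numpy as np

n_classes, rows, cols = 5, 300, 300

# Hypothetical class-membership probability maps, resampled to a common grid:
# p_hs from the hyperspectral classifier, p_ms from the VHR multispectral classifier.
p_hs = np.random.dirichlet(np.ones(n_classes), size=(rows, cols)).transpose(2, 0, 1)
p_ms = np.random.dirichlet(np.ones(n_classes), size=(rows, cols)).transpose(2, 0, 1)

w_hs, w_ms = 0.6, 0.4                         # trust weights for the two sources
fused = w_hs * np.log(p_hs + 1e-9) + w_ms * np.log(p_ms + 1e-9)
label_map = np.argmax(fused, axis=0)          # fused per-pixel class labels

print(label_map.shape)
```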

  18. US open-skies follow-on evaluation program, multispectral hyperspectral (MSHS) sensor survey

    SciTech Connect

    Ryan, R.; Del Guidice, P.; Smith, L.; Soel, M.

    1996-10-01

    The Follow-On Sensor Evaluation Program (FOSEP) has evaluated the potential benefits of hyperspectral and multispectral sensor additions to the existing sensor suite on the U.S. Open Skies aircraft and recommends a multispectral sensor for implementation in the Open Skies program. Potential enhancements to the Open Skies missions include environmental monitoring and improved treaty verification capabilities. A previous study indicated the inadequacy of the current U.S. Open Skies aircraft sensor suite, with or without modifications to existing sensors, in the performance of environmental monitoring. The most beneficial modifications identified in that previous report were multispectral modifications of the existing sensor suite. However, even with these modifications, significant inadequacies for many environmental and other missions would still remain. That study concluded that enhancement of Open Skies missions could be achieved with the addition of sensors specifically designed for multispectral imagery. In the current study, we examined and compared a wide range of commercially available airborne imaging spectrometers for a host of Open Skies missions, including both environmental and military applications. A set of tables and figures was developed to ensure that the evaluated sensors matched the Open Skies parameters of interest and flight profiles. These tables provide a common basis for comparing commercially available sensor systems for their suitability to Open Skies missions. Several figures of merit were used to compare different sensors, including ground sample distance (GSD), number of spectral bands, signal-to-noise ratio and exportability. This methodology was applied to potential Open Skies multispectral and hyperspectral sensors. Rankings were developed using these figures of merit and constraints imposed by the existing and future platforms. 6 refs., 2 figs., 4 tabs.

  19. MULTISPECTRAL REMOTE SENSING OF CARBONATE ROCKS IN THE CONFUSION RANGE, UTAH.

    USGS Publications Warehouse

    Crowley, James K.

    1984-01-01

    Multispectral imagery recorded by the NASA/Bendix 24-channel aircraft scanner over the Confusion Range, Utah, proved to be extremely sensitive to lithologic variations in exposed carbonate rocks. Major carbonate units within a 16-km² study area were readily distinguished, and some aspects of their structure and stratigraphy could be inferred from image spectral signatures. Spectral data channels centered at 1.6 and 2.2 μm accounted for much of the data sensitivity to lithologic differences. Rock texture, organic matter content, and weathering expression were important lithologic factors producing spectral variation.

  20. Implementation of ILLIAC 4 algorithms for multispectral image interpretation. [earth resources data

    NASA Technical Reports Server (NTRS)

    Ray, R. M.; Thomas, J. D.; Donovan, W. E.; Swain, P. H.

    1974-01-01

    Research has focused on the design and partial implementation of a comprehensive ILLIAC software system for computer-assisted interpretation of multispectral earth resources data such as that now collected by the Earth Resources Technology Satellite. Research suggests generally that the ILLIAC 4 should be as much as two orders of magnitude more cost effective than serial processing computers for digital interpretation of ERTS imagery via multivariate statistical classification techniques. The potential of the ARPA Network as a mechanism for interfacing geographically-dispersed users to an ILLIAC 4 image processing facility is discussed.

  1. Tools for interpretation of multispectral data

    NASA Astrophysics Data System (ADS)

    Speckert, Glen; Carpenter, Loren C.; Russell, Mike; Bradstreet, John; Waite, Tom; Conklin, Charlie

    1990-08-01

    The large size and multiple bands of today's satellite data require increasingly powerful tools in order to display and interpret the acquired imagery in a timely fashion. Pixar has developed two major tools for use in this data interpretation. These tools are the Electronic Light Table (ELT), and an extensive image processing package, ChapIP. These tools operate on images limited only by disk volume size, currently 3 Gbytes. The Electronic Light Table package provides a fully windowed interface to these large 12 bit monochrome and multiband images, passing images through a software defined image interpretation pipeline in real time during an interactive roam. A virtual image software framework allows interactive modification of the visible image. The roam software pipeline consists of a seventh order polynomial warp, bicubic resampling, a user registration affine, histogram drop sampling, a 5x5 unsharp mask, and per window contrast controls. It is important to note that these functions are done in software, and various performance tradeoffs can be made for different applications within a family of hardware configurations. Special high-speed zoom, rotate, sharpness, and contrast operators provide interactive region of interest manipulation. Double window operators provide for flicker, fade, shade, and difference of two parent windows in a chained fashion. Overlay graphics capability is provided in a PostScript windowed environment (NeWS). The image is stored on disk as a multi resolution image pyramid. This allows resampling and other image operations independent of the zoom level. A set of tools layered upon ChapIP allows manipulation of the entire pyramid file. Arbitrary combinations of bands can be computed for arbitrary sized images, as well as other image processing operations. ChapIP can also be used in conjunction with ELT to dynamically operate on the current roaming window to append the image processing function onto the roam pipeline. Multiple Chapi

  2. Onboard and Parts-based Object Detection from Aerial Imagery

    DTIC Science & Technology

    2011-09-01

    reduced operator workload. Additionally, a novel parts-based detection method was developed. A whole-object detector is not well suited for deformable and... Methodology: This chapter details the challenges of transitioning from ground station processing to onboard processing, the parts-based detection method

  3. High-biomass sorghum yield estimate with aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Abstract. To reach the goals laid out by the U.S. Government for displacing fossil fuels with biofuels, agricultural production of dedicated biomass crops is required. High-biomass sorghum is advantageous across wide regions because it requires less water per unit dry biomass and can produce very hi...

  4. Yield mapping of high-biomass sorghum with aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    To reach the goals laid out by the U.S. Government for displacing fossil fuels with biofuels, agricultural production of dedicated biomass crops is required. High-biomass sorghum is advantageous across wide regions because it requires less water per unit dry biomass and can produce very high biomass...

  5. Multispectral Scanner for Monitoring Plants

    NASA Technical Reports Server (NTRS)

    Gat, Nahum

    2004-01-01

    A multispectral scanner has been adapted to capture spectral images of living plants under various types of illumination for purposes of monitoring the health of, or monitoring the transfer of genes into, the plants. In a health-monitoring application, the plants are illuminated with full-spectrum visible and near infrared light and the scanner is used to acquire a reflected-light spectral signature known to be indicative of the health of the plants. In a gene-transfer-monitoring application, the plants are illuminated with blue or ultraviolet light and the scanner is used to capture fluorescence images from a green fluorescent protein (GFP) that is expressed as a result of the gene transfer. The choice of wavelength of the illumination and the wavelength of the fluorescence to be monitored depends on the specific GFP.

  6. Multispectral sensing of moisture stress

    NASA Technical Reports Server (NTRS)

    Olson, C. E., Jr.

    1970-01-01

    Laboratory reflectance data, and field tests with multispectral remote sensors, provide support for the hypothesis that differences in moisture content and water deficits are closely related to foliar reflectance from woody plants. When these relationships are taken into account, automatic recognition techniques become more powerful than when they are ignored. Evidence is increasing that moisture relationships inside plant foliage are much more closely related to foliar reflectance characteristics than are external variables such as soil moisture, wind, and air temperature. Short-term changes in water deficits seem to have little influence on foliar reflectance, however. This is in distinct contrast to significant short-term changes in foliar emittance from the same plants with changing wind, air temperature, incident radiation, or water deficit conditions.

  7. Commercial Applications Multispectral Sensor System

    NASA Technical Reports Server (NTRS)

    Birk, Ronald J.; Spiering, Bruce

    1993-01-01

    NASA's Office of Commercial Programs is funding a multispectral sensor system to be used in the development of remote sensing applications. The Airborne Terrestrial Applications Sensor (ATLAS) is designed to provide versatility in acquiring spectral and spatial information. The ATLAS system will be a test bed for the development of specifications for airborne and spaceborne remote sensing instrumentation for dedicated applications. This objective requires spectral coverage from the visible through thermal infrared wavelengths, variable spatial resolution from 2-25 meters; high geometric and geo-location accuracy; on-board radiometric calibration; digital recording; and optimized performance for minimized cost, size, and weight. ATLAS is scheduled to be available in 3rd quarter 1992 for acquisition of data for applications such as environmental monitoring, facilities management, geographic information systems data base development, and mineral exploration.

  8. EAARL Coastal Topography and Imagery-Naval Live Oaks Area, Gulf Islands National Seashore, Florida, 2007

    USGS Publications Warehouse

    Nagle, David B.; Nayegandhi, Amar; Yates, Xan; Brock, John C.; Wright, C. Wayne; Bonisteel, Jamie M.; Klipp, Emily S.; Segura, Martha

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) topography, first-surface (FS) topography, and canopy-height (CH) datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Science Center, St. Petersburg, FL; the National Park Service (NPS), Gulf Coast Network, Lafayette, LA; and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, VA. This project provides highly detailed and accurate datasets of the Naval Live Oaks Area in Florida's Gulf Islands National Seashore, acquired June 30, 2007. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area

  9. EAARL coastal topography and imagery-Fire Island National Seashore, New York, 2009

    USGS Publications Warehouse

    Vivekanandan, Saisudha; Klipp, E.S.; Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Fredericks, Xan; Stevens, Sara

    2010-01-01

    These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Fire Island National Seashore in New York, acquired on July 9 and August 3, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral CIR camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar

  10. Multispectral imaging with vertical silicon nanowires

    PubMed Central

    Park, Hyunsung; Crozier, Kenneth B.

    2013-01-01

    Multispectral imaging is a powerful tool that extends the capabilities of the human eye. However, multispectral imaging systems generally are expensive and bulky, and multiple exposures are needed. Here, we report the demonstration of a compact multispectral imaging system that uses vertical silicon nanowires to realize a filter array. Multiple filter functions covering visible to near-infrared (NIR) wavelengths are simultaneously defined in a single lithography step using a single material (silicon). Nanowires are then etched and embedded into polydimethylsiloxane (PDMS), thereby realizing a device with eight filter functions. By attaching it to a monochrome silicon image sensor, we successfully realize an all-silicon multispectral imaging system. We demonstrate visible and NIR imaging. We show that the latter is highly sensitive to vegetation and furthermore enables imaging through objects opaque to the eye. PMID:23955156

  11. Aerial Photography Summary Record System

    USGS Publications Warehouse

    ,

    1998-01-01

    The Aerial Photography Summary Record System (APSRS) describes aerial photography projects that meet specified criteria over a given geographic area of the United States and its territories. Aerial photographs are an important tool in cartography and a number of other professions. Land use planners, real estate developers, lawyers, environmental specialists, and many other professionals rely on detailed and timely aerial photographs. Until 1975, there was no systematic approach to locate an aerial photograph, or series of photographs, quickly and easily. In that year, the U.S. Geological Survey (USGS) inaugurated the APSRS, which has become a standard reference for users of aerial photographs.

  12. Weed mapping in early-season maize fields using object-based analysis of unmanned aerial vehicle (UAV) images.

    PubMed

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images using a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map in an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide application or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance.
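
    The final phase, a grid-structured weed map, can be illustrated with a minimal sketch (not the OBIA procedure itself) that aggregates a binary weed mask into coverage percentages per grid cell; the mask, cell size and category thresholds are hypothetical.

```python
import numpy as np

# Hypothetical binary weed mask from a classification step (1 = weed pixel).
weed_mask = (np.random.rand(600, 600) > 0.95).astype(float)

cell = 50                                   # grid cell size in pixels
rows, cols = weed_mask.shape
coverage = weed_mask.reshape(rows // cell, cell, cols // cell, cell).mean(axis=(1, 3)) * 100

# Three weed-coverage categories, e.g. weed-free (<1%), low (1-5%), high (>5%).
categories = np.digitize(coverage, bins=[1.0, 5.0])
print(coverage.shape, np.bincount(categories.ravel(), minlength=3))
```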

  13. Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images

    PubMed Central

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images using a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map in an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r2 = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide application or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance. PMID:24146963

  14. Toward Multispectral Imaging with Colloidal Metasurface Pixels.

    PubMed

    Stewart, Jon W; Akselrod, Gleb M; Smith, David R; Mikkelsen, Maiken H

    2017-02-01

    Multispectral colloidal metasurfaces are fabricated that exhibit greater than 85% absorption and ≈100 nm linewidths by patterning film-coupled nanocubes in pixels using a fusion of bottom-up and top-down fabrication techniques over wafer-scale areas. With this technique, the authors realize a multispectral pixel array consisting of six resonances between 580 and 1125 nm and reconstruct an RGB image with 9261 color combinations.

  15. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images, designed to remove the effect of noise on the compression process, is presented. One type of multispectral image is considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than traditional compression-only techniques.
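
    The denoise-then-compress idea can be sketched as below with a standard 2-D DWT. This is a simplification of the paper's method: the dual-tree DWT is replaced by an ordinary DWT (PyWavelets), zlib stands in for the simple Huffman coder, and the noise level, quantization step and wavelet name are assumed for illustration only.

```python
import zlib
import numpy as np
import pywt  # PyWavelets

def denoise_and_compress(band: np.ndarray, wavelet="db4", level=3,
                         sigma=10.0, q_step=4.0):
    """Denoise one image band by wavelet soft-thresholding, then quantize
    the coefficients and entropy-code them (zlib as a stand-in for Huffman)."""
    coeffs = pywt.wavedec2(band.astype(float), wavelet, level=level)
    thr = sigma * np.sqrt(2 * np.log(band.size))          # universal threshold
    den = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
                         for detail in coeffs[1:]]
    arr, slices = pywt.coeffs_to_array(den)               # flatten for coding
    q = np.round(arr / q_step).astype(np.int16)
    bitstream = zlib.compress(q.tobytes())
    # Decode path: dequantize and invert the transform.
    deq = np.frombuffer(zlib.decompress(bitstream), dtype=np.int16)
    deq = deq.reshape(arr.shape).astype(float) * q_step
    recon = pywt.waverec2(
        pywt.array_to_coeffs(deq, slices, output_format="wavedec2"), wavelet)
    return bitstream, recon

noisy = np.random.default_rng(1).normal(128.0, 10.0, (256, 256))
stream, recon = denoise_and_compress(noisy)
print(len(stream), recon.shape)
```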

  16. Multispectral Image Analysis of Hurricane Gilbert

    DTIC Science & Technology

    1989-05-19

    Multispectral Image Analysis of Hurricane Gilbert (unclassified). Personal author: Kleespies, Thomas J. (GL/LYS). Only fragments of the abstract are recoverable from this record: they describe displaying one component of the multispectral image (for example, cloud top height) in the red channel, and similarly for the green and blue channels, and note that such multispectral image analysis has seen few references to the human range of vision or to the selection of which scenes to analyze.

  17. Multispectral palmprint recognition using a quaternion matrix.

    PubMed

    Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng

    2012-01-01

    Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix; then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied to the matrix, respectively, to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest-neighbor classifier were employed for the recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%.
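
    A minimal sketch of the matching pipeline is given below, assuming four co-registered spectral images per palm. A real-valued PCA (via SVD) stands in for the paper's quaternion PCA, the DWT feature stage is omitted, and the toy gallery data are synthetic; only the quaternion-style packing of the four bands, the Euclidean distance and the nearest-neighbor decision follow the description above.

```python
import numpy as np

def quaternion_stack(r, g, b, nir):
    """Pack four co-registered spectral palmprint images into an (H, W, 4)
    array, i.e. one quaternion (w + xi + yj + zk) per pixel."""
    return np.stack([r, g, b, nir], axis=-1).astype(float)

def pca_features(samples, n_components=16):
    """Real-valued PCA via SVD (a simplification of quaternion PCA).
    `samples` has shape (n_samples, n_features)."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    return centered @ basis.T, mean, basis

def nearest_neighbor(query_feat, gallery_feats, labels):
    """Euclidean distance plus nearest-neighbor decision rule."""
    d = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return labels[int(np.argmin(d))]

# Toy gallery: 10 palms x 3 samples, each a 32x32 palmprint in 4 bands.
rng = np.random.default_rng(2)
gallery, labels = [], []
for palm_id in range(10):
    for _ in range(3):
        bands = rng.random((4, 32, 32)) * 0.2 + palm_id * 0.08  # fake per-palm structure
        gallery.append(quaternion_stack(*bands).ravel())
        labels.append(palm_id)
gallery, labels = np.array(gallery), np.array(labels)

feats, mean, basis = pca_features(gallery)
query_feat = (gallery[4] - mean) @ basis.T      # re-use a gallery sample as the probe
print("predicted palm:", nearest_neighbor(query_feat, feats, labels))  # -> 1
```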

  18. Multispectral Palmprint Recognition Using a Quaternion Matrix

    PubMed Central

    Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng

    2012-01-01

    Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix; then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied to the matrix, respectively, to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest-neighbor classifier were employed for the recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049

  19. Differentiating aquatic plant communities in a eutrophic river using hyperspectral and multispectral remote sensing

    USGS Publications Warehouse

    Tian, Y.Q.; Yu, Q.; Zimmerman, M.J.; Flint, S.; Waldron, M.C.

    2010-01-01

    This study evaluates the efficacy of remote sensing technology to monitor species composition, areal extent and density of aquatic plants (macrophytes and filamentous algae) in impoundments where their presence may violate water-quality standards. Multispectral satellite (IKONOS) images and more than 500 in situ hyperspectral samples were acquired to map aquatic plant distributions. By analyzing field measurements, we created a library of hyperspectral signatures for a variety of aquatic plant species, associations and densities. We also used three vegetation indices, the Normalized Difference Vegetation Index (NDVI), the near-infrared (NIR)-Green Angle Index (NGAI) and the normalized water absorption depth (DH), at wavelengths of 554, 680, 820 and 977 nm, to differentiate among aquatic plant species composition, areal density and thickness in cases where hyperspectral analysis yielded potentially ambiguous interpretations. We compared the NDVI derived from IKONOS imagery with the in situ, hyperspectral-derived NDVI. The IKONOS-based images were also compared to data obtained through routine visual observations. Our results confirmed that aquatic species composition alters spectral signatures and affects the accuracy of remote sensing of aquatic plant density. The results also demonstrated that the NGAI has apparent advantages in estimating density over the NDVI and the DH. In the feature space of the three indices, 3D scatter plot analysis revealed that hyperspectral data can differentiate several aquatic plant associations. High-resolution multispectral imagery provided useful information to distinguish among biophysical aquatic plant characteristics. Classification analysis indicated that using satellite imagery to assess Lemna coverage yielded an overall agreement of 79% with visual observations and >90% agreement for the densest aquatic plant coverages. Interpretation of biophysical parameters derived from high-resolution satellite or airborne imagery should prove to be a
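
    Of the three indices, only the NDVI has a formula that can be stated from the record alone; a minimal sketch using the study's red (~680 nm) and NIR (~820 nm) wavelengths follows. The NGAI and DH computations are not specified in the record and are therefore omitted, and the example reflectance values are invented.

```python
import numpy as np

def ndvi(red_680: np.ndarray, nir_820: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Vegetation Index from reflectance at ~680 nm
    (red) and ~820 nm (NIR): NDVI = (NIR - Red) / (NIR + Red)."""
    red = red_680.astype(float)
    nir = nir_820.astype(float)
    return (nir - red) / (nir + red + eps)

# Dense macrophyte canopies push NDVI toward +1; open water toward negative values.
red = np.array([0.04, 0.06, 0.03])
nir = np.array([0.45, 0.30, 0.02])
print(ndvi(red, nir).round(2))   # -> [ 0.84  0.67 -0.2 ]
```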

  20. Comparative Assessment of Very High Resolution Satellite and Aerial Orthoimagery

    NASA Astrophysics Data System (ADS)

    Agrafiotis, P.; Georgopoulos, A.

    2015-03-01

    This paper aims to assess the accuracy and radiometric quality of orthorectified high resolution satellite imagery from the Pleiades-1B satellite through a comparative evaluation of its quantitative and qualitative properties. A Pleiades-1B stereopair of high resolution images taken in 2013, two adjacent GeoEye-1 stereopairs from 2011 and an aerial orthomosaic (LSO) from 2007 provided by NCMA S.A. (Hellenic Cadastre) have been used for the comparison tests. An orthomosaic from 2012 aerial imagery (0.25 m GSD), also provided by NCMA S.A., was selected as the control dataset. The process for DSM and orthoimage production was performed using commercial digital photogrammetric workstations. The two resulting orthoimages and the aerial orthomosaic (LSO) were relatively and absolutely evaluated for their quantitative and qualitative properties. Test measurements were performed using the same check points in order to establish accuracy both for single point coordinates and for the distances between them. Check points were distributed according to the JRC Guidelines for Best Practice and Quality Checking of Ortho Imagery and NSSDA standards, while areas with different terrain relief and land cover were also included. The tests performed were also based on JRC and NSSDA accuracy standards. Finally, tests were carried out in order to assess the radiometric quality of the orthoimagery. The results are presented with a statistical analysis and are evaluated in order to present the merits and demerits of the imaging sensors involved in orthoimage production. The results also support a critical assessment of the usability and cost efficiency of satellite imagery for the production of Large Scale Orthophotos.
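
    A minimal sketch of an NSSDA-style check-point evaluation is given below. It assumes the conventional statistic Accuracy_r = 1.7308 x RMSE_r for horizontal accuracy at 95% confidence (valid when RMSE_x and RMSE_y are roughly equal and errors are normally distributed); the check-point coordinates are hypothetical and the paper's full JRC-based protocol is not reproduced.

```python
import numpy as np

def nssda_horizontal_accuracy(measured_xy: np.ndarray, reference_xy: np.ndarray):
    """Horizontal RMSE and the NSSDA accuracy statistic at 95% confidence.

    Accuracy_r = 1.7308 * RMSE_r assumes RMSE_x == RMSE_y and normally
    distributed, independent errors (NSSDA convention)."""
    dx = measured_xy[:, 0] - reference_xy[:, 0]
    dy = measured_xy[:, 1] - reference_xy[:, 1]
    rmse_x = np.sqrt(np.mean(dx ** 2))
    rmse_y = np.sqrt(np.mean(dy ** 2))
    rmse_r = np.sqrt(rmse_x ** 2 + rmse_y ** 2)
    return rmse_r, 1.7308 * rmse_r

# Hypothetical check points (metres): orthoimage-derived vs. ground survey.
measured = np.array([[100.4, 200.1], [150.2, 250.6], [300.9, 120.3]])
reference = np.array([[100.0, 200.0], [150.0, 251.0], [301.0, 120.0]])
print(nssda_horizontal_accuracy(measured, reference))
```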

  1. Classifying and monitoring water quality by use of satellite imagery

    NASA Technical Reports Server (NTRS)

    Scherz, J. P.; Crane, D. R.; Rogers, R. H.

    1975-01-01

    A technique is described in which LANDSAT measurements from very clear lakes are subtracted from measurements from other lakes in order to remove atmospheric and surface noise effects, leaving a residual signal dependent only on the material suspended in the water. This residual signal is used by the Multispectral Data Analysis System as a basis for producing color categorized imagery showing lakes by type and concentration of suspended material. Several hundred lakes in the Madison and Spooner, Wisconsin area were categorized for tannin or non-tannin waters and for the degree of algae, silt, weeds, and bottom effects.
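
    The core of the technique is a per-band subtraction, sketched below with hypothetical band radiances; the values and the assumption that atmospheric and surface effects cancel exactly are illustrative only.

```python
import numpy as np

# Hypothetical mean LANDSAT MSS band radiances (arbitrary units).
clear_lake = np.array([22.0, 14.0, 9.0, 4.0])    # very clear reference lake
target_lake = np.array([30.0, 23.0, 15.0, 5.0])  # lake with suspended material

# Residual signal attributed to the material suspended in the water column;
# atmospheric path radiance and surface effects are assumed to cancel.
residual = target_lake - clear_lake
print(residual)   # -> [8. 9. 6. 1.]
```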

  2. Physically-based parameterization of spatially variable soil and vegetation using satellite multispectral data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1989-01-01

    A stochastic-geometric landsurface reflectance model is formulated and tested for the parameterization of spatially variable vegetation and soil at subpixel scales using satellite multispectral images without ground truth. Landscapes are conceptualized as 3-D Lambertian reflecting surfaces consisting of plant canopies, represented by solid geometric figures, superposed on a flat soil background. A computer simulation program is developed to investigate image characteristics at various spatial aggregations representative of satellite observational scales, or pixels. The evolution of the shape and structure of the red-infrared space, or scattergram, of typical semivegetated scenes is investigated by sequentially introducing model variables into the simulation. The analytical moments of the total pixel reflectance, including the mean, variance, spatial covariance, and cross-spectral covariance, are derived in terms of the moments of the individual fractional cover and reflectance components. The moments are applied to the solution of the inverse problem: The estimation of subpixel landscape properties on a pixel-by-pixel basis, given only one multispectral image and limited assumptions on the structure of the landscape. The landsurface reflectance model and inversion technique are tested using actual aerial radiometric data collected over regularly spaced pecan trees, and using both aerial and LANDSAT Thematic Mapper data obtained over discontinuous, randomly spaced conifer canopies in a natural forested watershed. Different amounts of solar backscattered diffuse radiation are assumed and the sensitivity of the estimated landsurface parameters to those amounts is examined.
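
    A heavily simplified sketch of the inverse problem is shown below: a two-endmember linear mixture in red/NIR reflectance space solved for fractional canopy cover by least squares. The shadow and geometric-structure terms of the full stochastic-geometric model are ignored, and the endmember and pixel reflectances are invented.

```python
import numpy as np

def fractional_cover(pixel_refl, canopy, soil):
    """Least-squares estimate of the sub-pixel canopy fraction f from
        pixel = f * canopy + (1 - f) * soil
    in red/NIR reflectance space. Shadowing and the geometric terms of the
    full stochastic-geometric model are ignored in this sketch."""
    a = (canopy - soil).reshape(-1, 1)
    b = (pixel_refl - soil).reshape(-1, 1)
    f, *_ = np.linalg.lstsq(a, b, rcond=None)
    return min(max(float(f.item()), 0.0), 1.0)

canopy = np.array([0.05, 0.45])   # red, NIR reflectance of a dense canopy
soil = np.array([0.20, 0.25])     # red, NIR reflectance of bare soil
pixel = np.array([0.12, 0.36])    # observed mixed-pixel reflectance
print(round(fractional_cover(pixel, canopy, soil), 2))   # ~0.54
```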

  3. Estimating atmospheric parameters and reducing noise for multispectral imaging

    DOEpatents

    Conger, James Lynn

    2014-02-25

    A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out changes in the average deviations of temperatures.
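
    A rough sketch of the two phases is given below, assuming the per-band path radiance and transmittance have already been estimated. The inversion L_surface = (L_obs - L_path) / t follows the standard radiative-transfer relation; the mean filter in the second step is only a stand-in for the patent's deviation-based smoothing, and all numbers are synthetic.

```python
import numpy as np

def correct_band(observed: np.ndarray, path_radiance: float, transmittance: float) -> np.ndarray:
    """Phase 1 (sketch): invert L_obs = L_path + t * L_surface for one band."""
    return (observed - path_radiance) / transmittance

def smooth(surface: np.ndarray, k: int = 3) -> np.ndarray:
    """Phase 2 (sketch): a k x k mean filter as a stand-in for the patent's
    smoothing of average temperature deviations."""
    pad = k // 2
    padded = np.pad(surface, pad, mode="edge")
    out = np.empty_like(surface, dtype=float)
    for i in range(surface.shape[0]):
        for j in range(surface.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

obs = np.random.default_rng(3).normal(80.0, 5.0, (64, 64))   # synthetic observed band
surf = correct_band(obs, path_radiance=12.0, transmittance=0.85)
print(smooth(surf).mean().round(2))
```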

  4. Landsat satellite multi-spectral image classification of land cover and land use changes for GIS-based urbanization analysis in irrigation districts of lower Rio Grande Valley of Texas

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Lower Rio Grande Valley in south Texas is experiencing a rapid increase in population that is driving urban growth, which continues to influence the irrigation districts in the region. This study evaluated Landsat satellite multi-spectral imagery to provide information for GIS-based urbaniz...

  5. Application of airborne thermal imagery to surveys of Pacific walrus

    USGS Publications Warehouse

    Burn, D.M.; Webber, M.A.; Udevitz, M.S.

    2006-01-01

    We conducted tests of airborne thermal imagery of Pacific walrus to determine if this technology can be used to detect walrus groups on sea ice and estimate the number of walruses present in each group. In April 2002 we collected thermal imagery of 37 walrus groups in the Bering Sea at spatial resolutions ranging from 1-4 m. We also collected high-resolution digital aerial photographs of the same groups. Walruses were considerably warmer than the background environment of ice, snow, and seawater and were easily detected in thermal imagery. We found a significant linear relation between walrus group size and the amount of heat measured by the thermal sensor at all 4 spatial resolutions tested. This relation can be used in a double-sampling framework to estimate total walrus numbers from a thermal survey of a sample of units within an area and photographs from a subsample of the thermally detected groups. Previous methods used in visual aerial surveys of Pacific walrus have sampled only a small percentage of available habitat, resulting in population estimates with low precision. Results of this study indicate that an aerial survey using a thermal sensor can cover as much as 4 times the area per hour of flight time with greater reliability than visual observation.
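
    The double-sampling estimator described above can be sketched as follows: fit the linear relation between photographed group size and thermal signal on the subsample, then apply it to every thermally detected group in the surveyed units and sum the predictions. All numbers below are hypothetical.

```python
import numpy as np

# Subsample: thermally detected groups that were also photographed.
heat_sub = np.array([12.0, 30.0, 55.0, 80.0, 140.0])   # integrated thermal signal
count_sub = np.array([8, 21, 37, 60, 101])             # walruses counted in photos

# Linear relation count ~ a + b * heat (ordinary least squares).
b, a = np.polyfit(heat_sub, count_sub, deg=1)          # polyfit returns [slope, intercept]

# All thermally detected groups in the surveyed units (thermal signal only).
heat_all = np.array([12.0, 18.0, 30.0, 42.0, 55.0, 66.0, 80.0, 95.0, 140.0])
estimated_total = np.sum(a + b * heat_all)
print(round(float(estimated_total)))
```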

  6. Aerial Explorers and Robotic Ecosystems

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Greg

    2004-01-01

    A unique bio-inspired approach to autonomous aerial vehicles, a.k.a. aerial explorer technology, is discussed. The work is focused on defining and studying aerial explorer mission concepts, both as an individual robotic system and as a member of a small robotic "ecosystem." Members of this robotic ecosystem include the aerial explorer, air-deployed sensors and robotic symbiotes, and other assets such as rovers, landers, and orbiters.

  7. Landscape-scale geospatial research utilizing low elevation aerial photography generated with commercial unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Lipo, C. P.; Lee, C.; Wechsler, S.

    2012-12-01

    With the ability to generate on-demand high-resolution imagery across landscapes, unmanned aerial systems (UAS) are increasingly becoming the tools of choice for geospatial researchers. At CSULB, we have implemented a number of aerial systems in order to conduct archaeological, vegetation and terrain analyses. The platforms include the commercially available X100 by Gatewing, a hobby-based aircraft, kites, and tethered blimps. From our experience, each platform has advantages and disadvantages in its applicability in the field and in the imagery derived. The X100, though comparatively more costly, produces images with excellent coverage of areas of interest and can fly in a wide range of weather conditions. The hobby plane solutions are low-cost and flexible in their configuration, but their relatively light weight makes them difficult to fly in windy conditions and the sets of images produced can vary widely. The tethered blimp has a large payload and can fly under many conditions, but its ability to systematically cover large areas is very limited. Kites are extremely low-cost but have similar limitations to blimps for area coverage and limited payload capabilities. Overall, we have found the greatest return on our investment from the Gatewing X100, despite its relatively higher cost, due to the quality of the images produced. Developments in autopilots, however, may improve the hobby aircraft solution and allow X100-like products to be produced in the near future. Results of imagery and derived products from these UAS missions will be presented and evaluated. Assessment of the viability of these UAS products will inform the research community of their applicability to a range of applications and, if viable, could provide a lower cost alternative to other image acquisition methods.

  8. Using high resolution multispectral imaging to map Pacific coral reefs in support of UNESCO's World Heritage Central Pacific project

    NASA Astrophysics Data System (ADS)

    Siciliano, Daria; Olsen, Richard C.

    2007-10-01

    Concerns over worldwide declines in marine resources have prompted the search for innovative solutions for their conservation and management, particularly for coral reef ecosystems. Rapid advances in sensor resolution, coupled with image analysis techniques tailored to the unique optical problems of marine environments have enabled the derivation of detailed benthic habitat maps of coral reef habitats from multispectral satellite imagery. Such maps delineate coral reefs' main ecological communities, and are essential for management of these resources as baseline assessments. UNESCO's World Heritage Central Pacific Project plans to afford protection through World Heritage recognition to a number of islands and atolls in the central Pacific Ocean, including the Phoenix Archipelago in the Republic of Kiribati. Most of these islands however lack natural resource maps needed for the identification of priority areas for inclusion in a marine reserve system. Our project provides assistance to UNESCO's World Heritage Centre and the Kiribati Government by developing benthic and terrestrial habitat maps of the Phoenix Islands from high-resolution multispectral imagery. The approach involves: (i) the analysis of new Quickbird multispectral imagery; and (ii) the use of MARXAN, a simulated annealing algorithm that uses a GIS interface. Analysis of satellite imagery was performed with ENVI®, and includes removal of atmospheric effects using ATCOR (a MODTRAN4 radiative transfer model); de-glinting and water column correction algorithms; and a number of unsupervised and supervised classifiers. Previously collected ground-truth data was used to train classifications. The resulting habitat maps are then used as input to MARXAN. This algorithm ultimately identifies a proportion of each habitat to be set aside for protection, and prioritizes conservation areas. The outputs of this research are being delivered to the UNESCO World Heritage Centre office and the Kiribati Government as
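
    As an example of the de-glinting step mentioned above, the sketch below applies a common NIR-regression (Hedley-style) correction over optically deep water. The record does not name the exact algorithm used, so this choice is an assumption, and the image data and deep-water mask are synthetic.

```python
import numpy as np

def deglint(band_vis: np.ndarray, band_nir: np.ndarray, deep_mask: np.ndarray) -> np.ndarray:
    """Remove sun glint from a visible band using its regression against the
    NIR band over optically deep water (Hedley-style approach, assumed here):

        R'_vis = R_vis - b * (R_nir - min(R_nir over deep water))
    """
    x = band_nir[deep_mask]
    y = band_vis[deep_mask]
    b = np.polyfit(x, y, deg=1)[0]                 # regression slope
    return band_vis - b * (band_nir - x.min())

rng = np.random.default_rng(4)
nir = rng.random((100, 100)) * 0.05                # synthetic NIR (glint proxy)
vis = 0.08 + 0.6 * nir + rng.normal(0.0, 0.002, (100, 100))  # glint-contaminated band
deep = np.ones((100, 100), dtype=bool)             # toy deep-water mask
print(deglint(vis, nir, deep).std().round(4), vis.std().round(4))  # deglinted std < original std
```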

  9. Spline function approximation techniques for image geometric distortion representation. [for registration of multitemporal remote sensor imagery

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1975-01-01

    Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multi